DeepTurb - Deep Learning in and from Turbulence

1. Abstract

The application of machine learning and artificial intelligence to experimental measurements and numerical simulations of turbulent flows opens unique possibilities to process and classify complex data in new ways, leading to a deeper fundamental understanding of turbulent fluid motion and to more efficient modeling. In this project, funded by the Carl Zeiss Foundation for a period of five years, the dynamics of characteristic fluid structures will be extracted from comprehensive data records of horizontally extended thermal convection flows by means of artificial intelligence. Machine learning will help to predict the dynamics in dimensionally reduced dynamical systems and will accelerate the analysis of flow data from optical measurements. These applications also require an extension of the mathematical foundations of machine learning, for example by incorporating model predictive control techniques into the algorithms, which can enable a more efficient prediction of the turbulence.

2. Motivation and first results

Turbulent fluid motion is one of the most intractable problems of classical physics due to the nonlinear model equations, for which in the three-dimensional case not even the existence and uniqueness of solutions have been proven. Turbulent flows are characterized by complex vortex structures that coexist and interact on different length and time scales. Their numerical simulation and their experimental measurement are therefore very challenging. However, knowledge of all flow details is not necessary for the solution of many practical problems: smaller vortices can be modelled, for example, in terms of effective eddy viscosities. Machine learning algorithms can perform this task and simulate and model parts of a turbulent flow, or even the entire flow, without having to solve the complex partial differential equations.

In February 2020, almost a year ago, the Carl Zeiss Foundation project DeepTurb started. The project is a collaboration of four groups from the Departments of Computer Science (Patrick Mäder), Mathematics and Natural Sciences (Karl Worthmann), and Mechanical Engineering (Christian Cierpka, Jörg Schumacher) at the TU Ilmenau. The research focuses on thermally driven turbulent convection flows in horizontally extended layers. Such flows are found in the atmosphere, in the interiors of planets and stars, and in the oceans. Their better modelling and more efficient description can ultimately improve global circulation and climate models, in which convection processes must be parameterized.

The current research of the project group focuses on a special class of machine learning algorithms: recurrent neural networks (RNNs). These are networks with internal feedback, which makes them well suited for modelling physical dynamics. Reservoir computing models (RCMs) belong to this class of RNNs. The reservoir is a large random network -- a highly simplified replica of the network of neurons in the human brain.
These RCMs have been applied to smaller nonlinear dynamical systems and to two-dimensional turbulent flows in recent months. The first studies showed that these special algorithms can reproduce characteristic properties of turbulence well. Simultaneously, new convection experiments were set up last year, allowing long-term measurements of the structures in turbulent convection and providing large amounts of data for training and optimizing the algorithms. The figure shows images of the turbulent fields of temperature, flow velocity, and local heat transport, averaged over a short time interval. A long-term goal of the project is to reproduce the slow dynamics of these structures with RNNs.
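The defining feature of a reservoir computing model is that the input and reservoir weights are fixed and random; only a linear readout is trained, which reduces training to a simple regression. A minimal echo-state-network sketch of this idea in NumPy follows; all sizes, parameter values, and the toy signal are illustrative assumptions, not the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative hyperparameters (not from the project).
n_in, n_res = 1, 300      # input and reservoir dimensions
spectral_radius = 0.9     # scales the strength of reservoir feedback
leak = 0.3                # leaking rate of the reservoir neurons
ridge = 1e-6              # Tikhonov regularization for the readout

# Fixed random input and reservoir weight matrices -- never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_in)."""
    r = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u_t)
        states[t] = r
    return states

# Train the linear readout to predict the next sample of a toy signal.
T = 2000
signal = np.sin(0.1 * np.arange(T + 1))[:, None]
X = run_reservoir(signal[:-1])   # reservoir states for each input step
Y = signal[1:]                   # one-step-ahead targets
washout = 100                    # discard the initial transient states
X, Y = X[washout:], Y[washout:]

# Ridge regression: W_out = (X^T X + ridge*I)^{-1} X^T Y
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

rmse = np.sqrt(np.mean((X @ W_out - Y) ** 2))
print(f"training RMSE: {rmse:.2e}")
```

In a prediction setting, the trained readout's output is fed back as the next input, so the reservoir runs autonomously; the quality of such closed-loop forecasts is exactly what hyperparameters like the spectral radius and leaking rate control.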

Experimental recordings of turbulent convection in water obtained in the central plane of the cell. Data were averaged for a short time interval. Left: Temperature. Centre: Vertical velocity through the plane. Right: Local heat transport through the plane.

Many open questions remain to be answered in the upcoming months. How can the quality of the prediction be improved by tuning the network hyperparameters? How well does the network perform when it is fed three-dimensional rather than two-dimensional simulation data, or inherently noisy experimental data? The project participants are confident that they will master these steps as well.