Abstract
This article summarizes and discusses recent developments with respect to artificial intelligence (AI) super-resolution as a subfilter model for large-eddy simulations. The focus is on the application of physics-informed enhanced super-resolution generative adversarial networks (PIESRGANs) for subfilter closure in turbulence and combustion applications. A priori and a posteriori results are presented for various applications, ranging from decaying turbulence to finite-rate chemistry flows. The high accuracy of AI super-resolution-based subfilter models is emphasized, and advantages and shortcomings are described.
1 Introduction
Many turbulent and reactive simulations require models to reduce the computational cost. Popular approaches include large-eddy simulation (LES) for modeling (reactive) turbulence and flamelet models for predicting chemistry. LES relies on the filtered Navier–Stokes equations. The filter operation separates the flow into larger scales above the filter width and smaller scales below it, the so-called subfilter contributions. As a result, the filtered equations can be advanced at lower computational cost; however, they require modeling of the subfilter contributions. Accurate modeling of these unclosed terms is one of the key challenges for predictive LES. LES has been applied successfully to many different turbulent flows, including reactive turbulent flows (Smagorinsky 1963; Pope 2000; Pitsch 2006; Beck et al. 2018; Goeb et al. 2021). The flamelet concept employs asymptotic and scale arguments to motivate that, in combustion, flow field and chemistry are only loosely coupled through the scalar dissipation rate, a measure of the local mixing. Consequently, advancing the chemistry is reduced to solving coupled one-dimensional (1-D) differential equations, formulated, for example, in mixture fraction space for non-premixed combustion. Challenges include how to tabulate the resulting flamelets efficiently and how to distribute the multiple flamelets across the domain for multiple representative interactive flamelet (MRIF) approaches (Peters 1986; Banerjee and Ierapetritou 2006; Ihme et al. 2009; Bode et al. 2019b).
Data-driven methods, such as machine learning (ML) and deep learning (DL), have gained a massive boost across almost all scientific domains, ranging from speech recognition (Hinton et al. 2012) and learning optimal complex control (Vinyals et al. 2019) to accelerating drug development (Bhati et al. 2021). Important steps towards the wider usage of ML/DL methods were the availability of more and larger (labeled) datasets as well as significant improvements in graphics processing units (GPUs), which enabled the efficient, high-speed execution of ML/DL operations. One particular class of ML/DL is AI super-resolution, also called single image super-resolution (SISR), originally developed by the computer science community for increasing the resolution of 2-D images (i.e., to super-resolve images) beyond classical techniques, such as bicubic interpolation. The idea is that complex networks can extract and learn features during training with many images and are then able to add this information to images based on local information. Dong et al. (2014) introduced a super-resolution convolutional neural network (SRCNN), a deep convolutional neural network (CNN) which directly learns the end-to-end mapping between low- and high-resolution images. Several other works continuously improved this approach (Dong et al. 2015; Kim et al. 2016a, b; Lai et al. 2017; Simonyan and Zisserman 2014; Johnson et al. 2016; Tai et al. 2017; Zhang et al. 2018) to achieve better prediction accuracy by correcting multiple shortcomings of the original SRCNN. The switch from CNNs to generative adversarial networks (GANs) (Goodfellow et al. 2014), as proposed by Ledig et al. (2017), finally resulted in the development of enhanced super-resolution GANs (ESRGANs) by Wang et al. (2018).
The idea of AI super-resolution has also been adopted successfully for simulations of physical phenomena, from climate research (Stengel et al. 2020) to cosmology (Li et al. 2021). While many applications focus on super-resolving single time steps of simulations, Bode et al. (2019a, 2021, 2022) and Bode (2022a, b, c) introduced an algorithm for employing AI super-resolution as a subfilter model for (reactive) LES. They developed the physics-informed enhanced super-resolution GAN (PIESRGAN) and demonstrated its application for various turbulent inert and reactive flows. To successfully use AI super-resolution to time-advance complex flows, accurate a priori results are necessary but not sufficient. Only if the model also gives good a posteriori results, i.e., when it is used continuously as a model over many consecutive time steps of a simulation, is it promising for complex flows. Typically, good a posteriori results are much more difficult to achieve, as errors accumulate over time, especially if low-dissipation solvers are used. Consequently, a posteriori results are presented for all cases discussed in this article.
This work summarizes important modeling aspects of PIESRGAN in the next section. Afterward, its application to a decaying turbulence case, reactive spray setups, premixed combustion, and non-premixed combustion is described. This chapter finishes with conclusions for further developments of the AI super-resolution approach in general and the PIESRGAN in particular.
2 PIESRGAN
This section summarizes the PIESRGAN and explains the PIESRGAN-subfilter modeling approach. Details about the architecture, the time advancement algorithm, and the implementation are given. Note that the PIESRGAN modeling approach presented in this work follows a hybrid strategy. AI super-resolution is only used on the smallest scales to reconstruct the subfilter contributions, while the well-known filtered equations for LES are used to advance the flow in time, i.e., the time integration is not part of the network. This approach is technically more complex than embedding the time advancement in the network. However, it is also expected to be more general and universal. Turbulence is known to feature some universality on the smallest scales (Frisch and Kolmogorov 1995), which should be learnt by the network and be universal for many applications. The larger scales, which can be strongly affected by the geometry and setup and thus are fully case dependent, are considered by the filtered equations, making PIESRGAN-subfilter models applicable to multiple cases.
2.1 Architecture
PIESRGAN is a GAN model, which is a generative model that aims to estimate the unknown probability density of observed data without an explicitly provided data likelihood function, i.e., with unsupervised learning. Technically, a GAN has two networks. The generator network is used for modeling and creates new modeled data. The discriminator network tries to distinguish whether data are generator-created or real data and provides feedback to the generator network. Thus, throughout the learning process, the generator gets better at creating data as close as possible to real data, and the discriminator learns to better identify fake data, which can be seen as two players carrying out a minimax zero-sum game to estimate the unknown data probability distribution.
The network architecture and training process are sketched in Fig. 1. Fully resolved 3-dimensional (3-D) data (“H”) are filtered to get filtered data (“F”). The filtered data are used as input to the generator for creating the reconstructed data (“R”). The accuracy of the reconstructed data is evaluated by means of the fully resolved data. The discriminator tries to distinguish between reconstructed and fully resolved data. The accuracy is measured by means of the loss function, which reads
\(\mathscr {L} = \beta _1 \mathscr {L}_{\textrm{adversarial}} + \beta _2 \mathscr {L}_{\textrm{pixel}} + \beta _3 \mathscr {L}_{\textrm{gradient}} + \beta _4 \mathscr {L}_{\textrm{physics}},\)
where \(\beta _1\) to \(\beta _4\) are coefficients weighting the different loss term contributions with \(\sum _{i} \beta _i = 1\). The adversarial loss is the discriminator/generator relativistic adversarial loss (Jolicoeur-Martineau 2018), which measures both how well the generator is able to create accurate reconstructed data compared to the fully resolved data and how well the discriminator is able to identify fake data. The pixel loss and the gradient loss are defined using the mean-squared error (MSE) of the quantity and its gradient, respectively. The physics loss enforces physically motivated conditions, such as the conservation of mass, species, and elements, depending on the underlying physics of the problem. For the non-premixed temporal jet application in this work, it reads
\(\mathscr {L}_{\textrm{physics}} = \beta _{41} \mathscr {L}_{\textrm{mass}} + \beta _{42} \mathscr {L}_{\textrm{species}} + \beta _{43} \mathscr {L}_{\textrm{elements}},\)
where \(\beta _{41}\), \(\beta _{42}\), and \(\beta _{43}\) are coefficients weighting the different physical loss term contributions with \(\sum _{i} \beta _{4i} = 1\). The physically motivated loss term is very important for the application of PIESRGAN to flow problems. If the conservation laws are not fulfilled very well, the simulations tend to blow up rapidly, which is an important difference to super-resolution in the context of images. Errors which might be acceptable there can easily be too large for usage as a subfilter model (Bode et al. 2021).
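As an illustration, the weighted combination of the loss terms can be sketched as follows. The weight values, field shapes, and finite-difference gradients are illustrative assumptions; the adversarial and physics contributions are passed in as precomputed scalars, since their exact form depends on the discriminator and on the case-specific conservation conditions:

```python
import numpy as np

# Hypothetical weights beta_1..beta_4; the article requires them to sum to one.
BETAS = {"adversarial": 0.1, "pixel": 0.5, "gradient": 0.2, "physics": 0.2}

def pixel_loss(hr, rec):
    """Mean-squared error of the field values themselves (the 'pixel' loss)."""
    return np.mean((hr - rec) ** 2)

def gradient_loss(hr, rec):
    """Mean-squared error of the spatial gradients, averaged over all axes."""
    terms = [np.mean((np.gradient(hr, axis=a) - np.gradient(rec, axis=a)) ** 2)
             for a in range(hr.ndim)]
    return np.mean(terms)

def generator_loss(hr, rec, adversarial_term, physics_term):
    """Weighted sum of the four loss contributions."""
    return (BETAS["adversarial"] * adversarial_term
            + BETAS["pixel"] * pixel_loss(hr, rec)
            + BETAS["gradient"] * gradient_loss(hr, rec)
            + BETAS["physics"] * physics_term)
```
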
The generator heavily uses 3-D CNN layers (Conv3D) (Krizhevsky et al. 2012) combined with leaky rectified linear unit (LeakyReLU) layers for activation (Maas et al. 2013). The residual in residual dense block (RRDB), which was introduced for ESRGAN, is essential for state-of-the-art super-resolution performance. It replaces the residual block (RB) employed in previous architectures and contains fundamental architectural elements such as residual dense blocks (RDBs) with skip-connections. A residual scaling factor \(\beta _{\textrm{RSF}}\) helps to avoid instabilities in the forward and backward propagation. RDBs use dense connections internally: the output of each layer within the dense block (DB) is sent to all following layers. The discriminator network is simpler. It consists of basic CNN layers (Conv3D) combined with LeakyReLU layers for activation, with and without batch normalization (BN). The final layers contain a fully connected layer with LeakyReLU and dropout with dropout factor \(\beta _{\textrm{dropout}}\). A summary of all hyperparameters is given in Table 1.
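A minimal sketch of the dense-block/RRDB structure described above, written with TensorFlow/Keras (the framework named in the implementation section). The layer counts, feature widths, and residual scaling value used here are hypothetical placeholders, not the hyperparameters of Table 1:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical sizes; the actual hyperparameters are those of Table 1.
FILTERS = 32      # trunk feature width
GROWTH = 16       # feature growth inside a dense block
BETA_RSF = 0.2    # residual scaling factor

def dense_block(x):
    """Dense block: every Conv3D layer receives the outputs of all previous layers."""
    features = [x]
    for _ in range(3):
        h = features[0] if len(features) == 1 else layers.Concatenate()(features)
        h = layers.Conv3D(GROWTH, 3, padding="same")(h)
        features.append(layers.LeakyReLU(0.2)(h))
    h = layers.Conv3D(FILTERS, 3, padding="same")(layers.Concatenate()(features))
    h = layers.Rescaling(BETA_RSF)(h)   # residual scaling against instabilities
    return layers.Add()([x, h])

def rrdb(x):
    """Residual-in-residual dense block: chained dense blocks plus an outer skip."""
    y = x
    for _ in range(2):
        y = dense_block(y)
    return layers.Add()([x, layers.Rescaling(BETA_RSF)(y)])

inputs = layers.Input(shape=(None, None, None, FILTERS))
block = tf.keras.Model(inputs, rrdb(inputs))
```

The fully convolutional structure keeps the spatial dimensions flexible, so the same block applies to subboxes of any size.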
2.2 Algorithm
The Favre-filtered LES equations are used to advance a PIESRGAN-LES in time. As a consequence of applying the filter operation to the equations, unclosed terms appear, which require information from below the filter width to be evaluated. The LES subfilter algorithm aims to reconstruct this information to close the LES equations. This is done during every time step. For cases with chemistry, the chemistry can be included in the PIESRGAN during the training process (Bode et al. 2022; Bode 2022a). As chemistry is often only locally active, this can also be used to save computing time by adaptively solving only in relevant regions. The algorithm starts with the LES solution \(\Phi _\textrm{LES}^{n}\) at time step n, which comprises all relevant fields in the simulation, and consists of repeating the following steps:
1. Use the PIESRGAN to reconstruct \(\Phi _\textrm{R}^n\) from \(\Phi _\textrm{LES}^n\).

2. (Only for nonuniversal quantities) Use \(\Phi _\textrm{R}^n\) to update the scalar fields of \(\Phi \) to \(\Phi _\textrm{R}^{n;\textrm{update}}\) by solving the unfiltered scalar equations on the mesh of \(\Phi _\textrm{R}^n\).

3. Use \(\Phi _\textrm{R}^{n;\textrm{update}}\) to estimate the unclosed terms \(\Psi _\textrm{LES}^n\) in the LES equations of \(\Phi \) for all fields by evaluating the local terms with \(\Phi _\textrm{R}^{n;\textrm{update}}\) and applying a filter operator.

4. Use \(\Psi _\textrm{LES}^n\) and \(\Phi _\textrm{LES}^n\) to advance the LES equations of \(\Phi \) to \(\Phi _\textrm{LES}^{n+1}\).
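The four steps above can be sketched as a generic time loop. All callables here are placeholders for the network and solver operations described in the text, not actual solver interfaces:

```python
# Sketch of the PIESRGAN-LES time advancement; every callable is a placeholder
# for the corresponding network/solver operation from the algorithm above.
def advance(phi_les, n_steps, *, reconstruct, solve_scalars, close, step):
    """Run n_steps of the loop: reconstruct, (optionally) update scalars,
    evaluate the unclosed terms, and advance the filtered equations."""
    for _ in range(n_steps):
        phi_r = reconstruct(phi_les)      # step 1: network reconstruction
        phi_r = solve_scalars(phi_r)      # step 2: only for nonuniversal quantities
        psi_les = close(phi_r)            # step 3: evaluate + filter unclosed terms
        phi_les = step(phi_les, psi_les)  # step 4: advance the LES equations
    return phi_les
```
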
2.3 Implementation Details
PIESRGAN was implemented using a TensorFlow/Keras framework (Abadi et al. 2016; Keras 2019) in this work to efficiently employ GPUs. For all examples discussed here, the data were split into training and testing sets to avoid evaluating the network on data it had already seen during training. During the training and querying processes, it was found that consistent normalization of quantities is very important for highly accurate results (Bode et al. 2021). Furthermore, both operations are performed on subboxes, since reconstructing bigger boxes can become very memory-intensive. Typically, each subbox is chosen large enough to cover the relevant physical scales (Bode et al. 2021). The filter width can become problematic if non-uniform meshes are employed. In these cases, training with multiple filter widths is suggested to achieve good accuracy throughout the entire domain (Bode 2022a).
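A minimal sketch of subbox-wise reconstruction with a consistent global normalization. The min/max scaling, the subbox size, and the non-overlapping decomposition are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def reconstruct_in_subboxes(field, reconstruct, box=32):
    """Apply a reconstruction operator subbox by subbox, using one consistent
    (here: global min/max) normalization for the whole field."""
    lo, hi = float(field.min()), float(field.max())
    norm = (field - lo) / (hi - lo + 1e-12)          # consistent normalization
    out = np.empty_like(norm)
    nx, ny, nz = field.shape
    for i in range(0, nx, box):
        for j in range(0, ny, box):
            for k in range(0, nz, box):
                sub = norm[i:i + box, j:j + box, k:k + box]
                out[i:i + box, j:j + box, k:k + box] = reconstruct(sub)
    return out * (hi - lo) + lo                      # undo the normalization
```

Normalizing per subbox instead of globally would make the same physical value look different to the network in different subboxes, which is why the consistency emphasized in the text matters.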
The extrapolation capability of data-driven methods is always a challenge. Many trained networks only work well in regions which were accessible during the training process. This can become very problematic for flow applications, where data at low Reynolds numbers are often abundant, while data at high Reynolds numbers are not computable at all, making transfer learning difficult. To deal with this problem, concepts such as a two-step training approach (Bode et al. 2021) can be used, relying on the wider prediction range of GANs compared to single networks (Bode et al. 2022; Bode 2022a). To avoid this open question of extrapolation capabilities, only interpolation cases are presented in this work.
A basic version of PIESRGAN is available on GitLab (https://git.rwth-aachen.de/Mathis.Bode/PIESRGAN.git) for interested readers.
3 Application to Turbulence
The application of PIESRGAN to non-reactive turbulence is a good starting point. Besides closing the filtered momentum equations, the evaluation of passive scalars is a key challenge toward applying PIESRGAN to turbulent reactive flows, as scalar mixing is especially important for non-premixed combustion cases. Furthermore, turbulence is assumed to be universal on the smallest scales, which makes it reasonable to expect that a complex network can accurately learn the subfilter behavior.
3.1 Case Description
A decaying turbulence case with a peak wavenumber \(\kappa _\textrm{p}\) of 15 m\(^{-1}\) and a maximum Taylor microscale-based Reynolds number Re\(_\lambda \) of about 88 is used as turbulent example case here. Turbulence with an initial turbulence intensity of \(u'_0=\sqrt{2\langle k \rangle /3}\), with \(\langle k \rangle \) as ensemble-averaged turbulent kinetic energy, was initialized on a uniform mesh with \(4096^3\) cells and solved along with passive scalars. The original direct numerical simulation (DNS) was computed with the solver psOpen (Gauding et al. 2019). psOpen solves the incompressible Navier–Stokes equations formulated in spectral space, with the non-linear term computed in physical space, and employs the P3DFFT library (Pekurovsky 2012) for the spatial decomposition and the fast Fourier transforms (FFTs). Over time, the turbulence intensity decays, i.e., the Reynolds number decreases, resulting in larger turbulent structures. This makes the decaying turbulence case a very good baseline application, as many practical applications also feature varying Reynolds numbers.
The corresponding PIESRGAN-LES was computed with CIAO, an arbitrary order finite-difference code (Desjardins et al. 2008). The physics-informed loss function only considered a condition for enforcing mass conservation. Further details can be found in Bode et al. (2021).
3.2 A Priori Results
For evaluating the accuracy of PIESRGAN, Fig. 2 shows 2-D slices of the fully resolved velocity and scalar fields, the filtered fields, and the reconstructed fields employing PIESRGAN. The visual agreement is good, and the network seems to be able to add sufficient information to the filtered fields to reconstruct the fully resolved data. Bode et al. (2021) pointed out that high accuracy can also be achieved in scenarios in which PIESRGAN needs to “extrapolate” beyond the training data by using a two-step training approach. The two-step training approach combines fully resolved data, used to update both the generator and the discriminator, with underresolved training data, which further updates the generator. This is an important feature of the employed GAN approach, as many practical use cases feature Reynolds numbers which cannot be computed with DNS.
In addition to the visual assessment of the PIESRGAN, Fig. 3 shows the dimensionless spectra for the velocity vector field and the passive scalar, denoted as \(\mathscr {S}\). The spectra are computed with the fully resolved fields, the filtered fields, and the reconstructed fields and are an important measure of the prediction quality of PIESRGAN, as they quantify the distribution of turbulent kinetic energy and scalar variance across the length scales. The filter operation removes the smallest scales, and the task of the PIESRGAN model is to add the smallest scales back to reconstruct the fully resolved distribution. The agreement is good for both spectra, however, not perfect for very high wavenumbers, i.e., for \(\kappa /\kappa _\textrm{p}\approx 80\). It is important to note that the numerics have a significant impact on the results in Fig. 3. Only high-order and consistent numerics avoid significant noise at high wavenumbers in the reconstructed data.
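For reference, a shell-averaged spectrum of the kind shown in Fig. 3 can be computed from a periodic field as sketched below; a uniform mesh and integer-wavenumber shell binning are assumed, and the velocity spectrum follows by summing the contributions of the three components:

```python
import numpy as np

def spectrum(u):
    """Shell-averaged spectrum of a scalar field on a periodic, uniform mesh."""
    n = u.shape[0]
    uh = np.fft.fftn(u) / u.size                    # normalized Fourier modes
    e = 0.5 * np.abs(uh) ** 2                       # modal energy
    k = np.fft.fftfreq(n, d=1.0 / n)                # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    shell = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
    # Sum the modal energy over spherical wavenumber shells.
    return np.bincount(shell.ravel(), weights=e.ravel())
```
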
3.3 A Posteriori Results
A PIESRGAN-LES must accurately predict the decay of turbulence, usually measured by means of the ensemble-averaged turbulent kinetic energy and the ensemble-averaged dissipation rate, denoted as \(\langle \varepsilon \rangle \). A uniform LES mesh of \(64^3\) cells was considered, and the results are presented in Fig. 4. The prediction accuracy of PIESRGAN-LES is high. The results for a heavily underresolved simulation without an LES model show that especially the ensemble-averaged dissipation rate is strongly underpredicted without a model. This makes sense, as the dissipation rate acts on the smallest scales, which simply do not exist in the underresolved simulation due to the lack of resolution.
3.4 Discussion
The presented a posteriori results are remarkable, as the trained network is able to reproduce the decay on a mesh that is multiple orders of magnitude coarser. One reason for this could be the universal character of turbulence on the smallest scales. From a computational point of view, a too drastic reduction of the mesh size might not result in the fastest time-to-solution, as the cost of subbox reconstruction increases with the reconstruction size. Thus, a finer LES mesh with smaller subbox reconstruction can be faster, as demonstrated by the two turbulent combustion cases below. Furthermore, if the network is used as part of a multi-physics simulation, LES meshes that are only 10–20 times coarser per direction than a turbulence-resolving DNS are often needed to accurately consider boundary conditions and other physical phenomena. In this context, it is also interesting to mention the effect of the Courant-Friedrichs-Lewy (CFL) number. Theoretically, coarser LES meshes also enable larger time steps. However, it was found that usually a time step size between the DNS and theoretical LES time step sizes is needed to accurately reproduce the DNS results. The reason might be that the CFL number is a numerical limit, whereas the PIESRGAN-LES also needs to fulfil intrinsic physical time step limitations.
Overall, PIESRGAN has many advantages for turbulent flows. It can not only be used to reduce computing and storage costs but also to enable new workflows. For example, smaller domains can be computed first to get accurate training data. Afterward, the trained model is applied to a larger domain to achieve converged statistics. In addition to the discussed LES application, it could also be used as a cheap turbulence generator for complex simulations.
4 Application to Reactive Sprays
Reactive sprays occur in many applications, such as diesel engines. Usually, the liquid fuel is injected into a combustion chamber where it finally burns. Before ignition can take place, multiple physical processes happen. The continuous liquid fuel phase splits into smaller ligaments and small droplets. These dispersed droplets start evaporating, and the resulting vapor mixes with the ambient gas, forming a reactive mixture in which the combustion process occurs. The more these stages are spatially separated, the more similar the final combustion process becomes to classical non-premixed combustion. A measure of this separation is the difference between the lift-off length (LOL), i.e., the distance between the nozzle tip and the closest combustion events, and the liquid penetration length (LPL), i.e., the distance between the nozzle tip and approximately the furthest fuel in the liquid phase. This work focuses on the Spray A and Spray C cases defined by the Engine Combustion Network (ECN) (2019).
4.1 Case Description
Spray A and Spray C are both single-hole nozzles; however, while Spray A is designed to avoid cavitation, Spray C features cavitation. Additionally, Spray A has a smaller exit diameter, typical of injectors used for diesel engines, while Spray C has a larger exit diameter, as found in heavy-duty injectors. Both injectors were investigated with n-dodecane as fuel at standard reactive conditions, namely 150 MPa injection pressure, 22.8 kg/m\(^3\) ambient density, 15 % ambient oxygen concentration, 900 K ambient temperature, and 363 K fuel temperature. Furthermore, inert conditions, i.e., without ambient oxygen, were run for Spray A, while Spray C was also simulated with 1000, 1100, and 1200 K ambient temperatures. The cases are denoted as SA900, SC900, SC1000, SC1100, and SC1200 based on the nozzle geometry and ambient temperature used. Inert conditions are emphasized separately.
The cases were computed using CIAO with a similar setup as described by Goeb et al. (2021). More precisely, the initial droplets were generated based on a precomputed droplet size distribution for the Spray A case (Bode et al. 2014, 2015). For the Spray C case, a blob method utilizing the effective liquid diameter at the nozzle exit was employed. Breakup and evaporation were modeled with Kelvin-Helmholtz/Rayleigh-Taylor (KH/RT) (Patterson and Reitz 1998) and Bellan’s evaporation approach (Miller and Bellan 1999) for both cases. Velocity and mixing LES closure were based on PIESRGAN-subfilter modeling. Note that due to the lack of reactive spray DNS data and motivated by the separation of phenomena within the combustion process of sprays, the PIESRGAN was trained with the decaying turbulence data introduced in the previous sections.
The reaction mechanism by Yao et al. (2017) was used for all simulations. An MRIF approach was employed for chemistry modeling, which is also summarized in Fig. 5. The non-premixed flamelet approach assumes that chemistry and flow are only loosely coupled through the scalar dissipation rate. Consequently, two different sets of equations are solved in MRIF approaches. The first set comprises the usual flow equations solved in 3-D physical space. The second set, the flamelet equations, describes the chemistry in mixture fraction space Z, which is only 1-D. Therefore, representing and solving the chemistry by means of the flamelet equations is much cheaper than solving the chemistry in full 3-D physical space. As shown by the equations in Fig. 5, the mapping to flamelet space is done by weighted volume-averages, while the mapping back to physical space employs probability density functions (PDFs), typically constructed by means of the filtered mixture fraction and the mixture fraction variance.
Thus, the MRIF approach typically requires a presumed functional form f of the scalar dissipation rate in mixture fraction space and a presumed PDF of the mixture fraction. For the functional form, often a presumed log-based profile is assumed (Pitsch et al. 1998), while a beta-PDF is often employed for the mixture fraction PDF. Both quantities are critical for LES, as they often have significant subfilter contributions. In the context of PIESRGAN modeling, both assumptions can be avoided by directly evaluating both profiles on the reconstructed fields, which can improve the prediction results of the simulations. For the Spray C cases, the mixture fraction PDF was indeed evaluated based on the reconstructed data for the results presented here (Bode 2022b).
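For comparison with the presumed closure, a beta-PDF parameterized by the filtered mixture fraction and its variance can be evaluated as sketched below (standard moment matching; the numbers in the test are illustrative). In the PIESRGAN approach, this presumed shape can be replaced by a histogram of the reconstructed mixture fraction within each filter volume:

```python
import numpy as np
from math import lgamma

def presumed_beta_pdf(z_mean, z_var, z):
    """Presumed beta-PDF of the mixture fraction, parameterized by its
    filtered mean and variance via moment matching."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    # Log-space normalization for numerical robustness.
    ln_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(ln_norm + (a - 1.0) * np.log(z) + (b - 1.0) * np.log1p(-z))
```
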
4.2 Results
The lack of DNS data makes a distinction between a priori and a posteriori results difficult. Instead, LES results are compared with experimental data here (Engine Combustion Network 2019). Figures 6 and 7 compare the ignition delay time \(t_\textrm{i}\) and the LOL \(l_\textrm{LOL}\) for the considered spray cases. All simulations slightly underpredict the experimental results. This could be due to the chemical kinetics mechanism used, which has a significant impact on the ignition delay time. Furthermore, the ignition delay time and consequently the LOL decrease with increasing ambient temperature. These trends are correctly predicted for Spray C by the PIESRGAN-LESs.
The near-nozzle experimental data for the inert Spray A case allow a further evaluation of PIESRGAN-LES compared to classical LES with a dynamic Smagorinsky (DS) model. Figure 8 compares the temporally and circumferentially averaged fuel mass fraction for an underresolved simulation without a model, a DS-LES, and a PIESRGAN-LES with experimental data. The agreement is best between PIESRGAN-LES and experimental data. Note that a similar resolution is chosen for DS-LES and PIESRGAN-LES here. It seems that the PIESRGAN-LES is more robust with respect to coarser resolutions. If a finer resolution were used, the results for PIESRGAN-LES and DS-LES would become more similar.
4.3 Discussion
The reactive spray cases computed with the PIESRGAN-subfilter model show that the PIESRGAN-based subfilter approach can be used to compute complex flows with high accuracy. In terms of operations needed per time step, the PIESRGAN-subfilter model is more expensive than a classical DS approach. Furthermore, the PIESRGAN approach generates additional cost for training the network. However, the PIESRGAN approach has the advantage of naturally running on GPUs, which are responsible for the majority of floating point operations per second (FLOPS) in current supercomputer systems.
As discussed, the PIESRGAN approach can be used to reduce model assumptions, such as those made for the mixture fraction PDF and the functional form of the scalar dissipation rate, which is an advantage. The presented results demonstrate that simulations without the discussed presumed closures but with PIESRGAN closure are able to reasonably match experimental data. However, due to the lack of DNS data and the multiple models which are still involved, such as breakup models and the chemical mechanism, a detailed analysis of the impact of these closures on macroscopic quantities, such as LOL and ignition delay time, remains difficult. Nevertheless, it can be concluded that the PIESRGAN approach is very robust even in heavily underresolved flow situations. This is an important feature for very complex simulations such as full engine simulations. In these cases, it is impossible to sufficiently resolve all parts, and the robustness of closure models becomes significant.
5 Application to Premixed Combustion
In premixed combustion cases, fuel and oxidizer are completely mixed before combustion is allowed to take place. Typical examples include spark ignition engines and lean-burn gas turbines. Therefore, in contrast to non-premixed combustion, correctly predicting fuel-oxidizer mixing is less important for premixed combustion.
5.1 Case Description
Falkenstein et al. (2020a, b, c) computed a collection of premixed flame kernels with iso-octane/air mixtures under real engine conditions and with unity and constant Lewis numbers. The case with unity Lewis number, i.e., featuring the same diffusion coefficient for all scalar species, is used as the demonstration case in this work. All simulations, DNS and PIESRGAN-LES, were computed with CIAO (Desjardins et al. 2008). The DNS relies on the low-Mach number limit of the Navier–Stokes equations, employing the Curtiss–Hirschfelder approximation (Hirschfelder et al. 1964) for diffusive scalar transport and including the Soret effect. A mesh with \(960^3\) cells was used. The iso-octane reaction mechanism features 26 species (Falkenstein et al. 2020a). The setup places one flame kernel in a homogeneous isotropic turbulence field. Consequently, the turbulence decays over time, while the flame kernel expands, wrinkles, and deforms from its originally spherical shape. As the resulting flame speed depends on the local curvature of the flame kernel, it is very important to accurately predict the flame surface density. For running PIESRGAN-LES, the training of PIESRGAN was performed with multiple filter stencil widths varying from 5 to 15 cells (Bode et al. 2022).
Often, a reaction progress variable is defined to describe the temporal state of a flame kernel. Falkenstein et al. (2020a) defined it as the sum of the mass fractions of \(\textrm{H}_2\), \(\textrm{H}_2\textrm{O}\), \(\textrm{CO}\), and \(\textrm{CO}_2\) and introduced a simplified reaction progress variable \(\zeta \). The simplified reaction progress variable obeys a transport equation with the thermal diffusion coefficient as diffusion coefficient, reading
\(\frac{\partial \rho \zeta }{\partial t} + \frac{\partial \rho u_j \zeta }{\partial x_j} = \frac{\partial }{\partial x_j} \left( \rho D_\textrm{th} \frac{\partial \zeta }{\partial x_j} \right) + \dot{\omega }_\zeta ,\)
employing Einstein’s summation notation, with \(\rho \) as fluid density, t as time, \(u_j\) as velocity vector, \(x_j\) as space vector, \(D_\textrm{th}\) as thermal diffusion coefficient, and \(\dot{\omega }_\zeta \) as chemical source term of the simplified reaction progress variable, which is the sum of the source terms of the species used for the definition of the reaction progress variable. The evolution of one flame kernel realization is visualized in Fig. 9.
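The right-hand side of the progress-variable transport equation can be evaluated numerically as sketched below; the 1-D central-difference discretization is an illustrative simplification of the actual solver:

```python
import numpy as np

def zeta_rhs(rho, u, zeta, d_th, omega_zeta, dx):
    """d(rho*zeta)/dt from the progress-variable transport equation in 1-D:
    -d(rho*u*zeta)/dx + d(rho*D_th*dzeta/dx)/dx + omega_zeta."""
    convection = np.gradient(rho * u * zeta, dx)
    diffusion = np.gradient(rho * d_th * np.gradient(zeta, dx), dx)
    return -convection + diffusion + omega_zeta
```
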
In contrast to the decaying turbulence and reactive spray cases presented in the previous sections, it is not sufficient to only train the PIESRGAN with turbulence data for finite-rate chemistry cases. Instead, the fully trained network based on decaying homogeneous isotropic turbulence was only used as starting network, which was further updated with finite-rate chemistry data. As a consequence, reconstruction is learnt for all species fields, and the optional solution step with the unfiltered transport equations on the finer mesh of the reconstructed data is employed. This combination of reconstructing and solving was found to be crucial for the accuracy of finite-rate chemistry flows (Bode et al. 2022; Bode 2022a).
5.2 A Priori Results
Reconstruction results for the simplified reaction progress variable, two species mass fractions, and one velocity component are compared with fully resolved and filtered fields in Fig. 10. The agreement between fully resolved fields and reconstructed fields is good. The filtered data, which were filtered over 15 cells, are less sharp due to the smoothing of small-scale structures.
5.3 A Posteriori Results
Multiple quantities can be tracked during the evolution of the flame kernel. The flame surface density \(\Sigma \) can be evaluated by means of a phase indicator function \(\Gamma (\textbf{x},t)\), defined for a reaction progress variable threshold value \(\zeta _0\) as \(\Gamma (\textbf{x},t) = \mathcal {H}(\zeta (\textbf{x},t) - \zeta _0)\), with \(\mathcal {H}\) being the Heaviside step function. The surface density is then given by
\(\Sigma = \left\langle \left| \nabla \Gamma \right| \right\rangle ,\)
employing volume-averaging \(\langle \cdot \rangle \). Moreover, a corresponding characteristic length scale \(L_\Sigma \) can be defined based on \(\Sigma \).
As for the decaying turbulence case before, the averaged turbulent kinetic energy decays. In contrast to this, the flame surface density is expected to increase significantly and the characteristic length scale \(L_\Sigma \) should increase slightly. This is shown in Fig. 11. The agreement between DNS and PIESRGAN-LES results is good.
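The flame surface density based on the phase indicator function can be approximated numerically as sketched below; using a central-difference gradient of the binary indicator field is an illustrative choice, not necessarily the evaluation used in the cited works:

```python
import numpy as np

def flame_surface_density(zeta, zeta0, dx):
    """Volume-averaged flame surface density Sigma = <|grad Gamma|> from the
    phase indicator Gamma = H(zeta - zeta0), evaluated with central differences."""
    gamma = (zeta >= zeta0).astype(float)   # Heaviside-based phase indicator
    grads = np.gradient(gamma, dx)          # one gradient array per direction
    return float(np.mean(np.sqrt(sum(g * g for g in grads))))
```
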
5.4 Discussion
The accuracy of PIESRGAN for premixed combustion cases is very promising. This makes PIESRGAN-LES a very useful tool for the evaluation of cycle-to-cycle variations (CCVs) and other complex phenomena in engines. A potential workflow could first compute two DNS realizations of premixed flame kernels, which are used for on-the-fly training of the PIESRGAN. The trained network is then used to compute multiple PIESRGAN-LES realizations of the premixed flame kernel setup, providing sufficient statistics to study CCVs. Bode et al. (2022) also showed a certain robustness of the PIESRGAN subfilter model with respect to setup variations, which might partly be a result of the GAN approach. Consequently, PIESRGAN could also be employed to optimize geometries of turbines or devise optimal operating conditions to reduce harmful emissions.
As discussed in the context of reactive sprays, the reconstruction approach could also be used to improve conventional models, which typically rely on presumed filtered probability density functions. Instead, a PIESRGAN approach allows the filtered density function (FDF) to be evaluated directly from the reconstructed data, increasing the model accuracy.
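A minimal sketch of this idea, with hypothetical array shapes (each LES cell holding a block of reconstructed subfilter samples of a scalar): the samples within a cell directly yield a histogram approximation of the FDF instead of a presumed shape:

```python
import numpy as np

# hypothetical reconstructed scalar: 8^3 LES cells, each containing
# a 15^3 block of subfilter values in [0, 1]
rng = np.random.default_rng(1)
phi_rec = rng.random((8, 8, 8, 15, 15, 15))

# FDF of one LES cell, approximated by a normalized histogram
# over the reconstructed subfilter samples of that cell
bins = np.linspace(0.0, 1.0, 21)
fdf, edges = np.histogram(phi_rec[0, 0, 0].ravel(), bins=bins, density=True)
```

The resulting histogram integrates to one over the scalar range and can replace a presumed-shape FDF in a conventional closure.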
6 Application to Non-premixed Combustion
In non-premixed combustion cases, fuel and oxidizer are initially separated. As a consequence, mixing and continuous interdiffusion is necessary to establish a flame. Typical examples are furnaces, diesel engines, and jet engines.
6.1 Case Description
The study of non-premixed temporally evolving planar jets (Denker et al. 2020, 2021) was also performed with the CIAO code (Desjardins et al. 2008) and featured multiple nonreactive and reactive cases with a highest initial jet Reynolds number of 9850. It used methane as fuel, modeled by a reaction mechanism with 28 species. The largest case used \(1280\times 960\times 960\) cells and is visualized in Fig. 12 by means of the mixture fraction Z and its scalar dissipation rate defined as \(\chi = 2 D \frac{\partial Z}{\partial x_i} \frac{\partial Z}{\partial x_i}\), with D as diffusivity, \(x_i\) as spatial coordinate, and Einstein's summation convention implied. The temporal jet setup has two periodic directions: the streamwise direction (from left to right) and the spanwise direction (perpendicular to the cut view in Fig. 12). The moving fuel layer is located in the center and surrounded by initially quiescent air. At the late time step shown, the central fuel stream has already been bent significantly by the turbulence, which explains the lack of fuel in the upper half at about one quarter of the domain length. Furthermore, it can be seen that the layer in which scalar dissipation is active is broader than the fuel layer and that, as a consequence of the derivatives, the scalar dissipation rate structures are much finer than the mixture fraction structures. Only one realization per parameter combination was computed; however, the spanwise extent was chosen such that turbulent statistics evaluated in the two periodic directions converged. The nonperiodic direction was chosen large enough to prevent interaction of the jet with the boundary.
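The scalar dissipation rate can be post-processed directly from a mixture fraction field. A minimal sketch on a hypothetical periodic field (both the field and the value of D are assumptions for illustration):

```python
import numpy as np

N, L = 64, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
X, Y, _ = np.meshgrid(x, x, x, indexing="ij")

# hypothetical smooth mixture-fraction field Z in [0, 1]
Zmix = 0.5 * (1.0 + np.sin(2.0 * np.pi * X) * np.cos(2.0 * np.pi * Y))

D = 1.0e-5  # assumed diffusivity in m^2/s

# chi = 2 D (dZ/dx_i)(dZ/dx_i), summed over the three directions
grads = np.gradient(Zmix, dx)
chi = 2.0 * D * sum(g * g for g in grads)
```

Because the derivatives amplify small scales, the resulting chi field shows much finer structures than the mixture fraction itself, consistent with the visualization in Fig. 12.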
As for the premixed case, a PIESRGAN with learnt chemistry was employed for the results presented here.
6.2 A Priori Results
The scalar dissipation rate, i.e., the measure of local mixing, is essential for non-premixed combustion, as burning requires the fuel and oxidizer streams to be mixed first, resulting in a lower limit for the scalar dissipation rate required for burning. As indicated by Fig. 12, the scalar dissipation rate acts on the smallest scales, which makes it difficult for LES, as it usually has significant contributions below the filter width. Furthermore, extinction (and later reignition) can occur in regions where the scalar dissipation rate becomes too large, typically estimated by the quenching scalar dissipation rate of so-called stationary flamelet solutions, denoted as \(\chi _\textrm{q}\). Overall, the scalar dissipation rate is thus a very well suited quantity to evaluate the prediction accuracy of the PIESRGAN model. The PDF \(\mathscr {P}\) of the scalar dissipation rate is shown in Fig. 13. As expected, the filtering leads to a lack of regions with very high scalar dissipation rate. These missing values are successfully reconstructed by the PIESRGAN model via the mass fraction fields, i.e., the scalar dissipation rate shown in the figure is a post-processed quantity relying on other reconstructed quantities of the simulation data. The result in the log-log plot looks very good; note, however, that the increase of probability (from about \(\chi = 0.1\) to 1 s\(^{-1}\)) is predicted much better with the reconstructed data than with the filtered data alone, but is still far from perfect.
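Since the scalar dissipation rate varies over several orders of magnitude, the PDF for a log-log plot such as Fig. 13 is naturally evaluated on logarithmically spaced bins. A minimal sketch with a lognormal stand-in sample (the distribution and its parameters are assumptions, chosen only because scalar dissipation is commonly close to lognormal):

```python
import numpy as np

# lognormal stand-in for scalar dissipation rate samples in 1/s
rng = np.random.default_rng(2)
chi = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

# PDF on logarithmically spaced bins for a log-log plot
bins = np.logspace(-3, 3, 61)
pdf, edges = np.histogram(chi, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
```

With `density=True`, the histogram is normalized such that it integrates to one over the binned range, so the high-chi tail can be compared directly between filtered, reconstructed, and fully resolved data.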
6.3 A Posteriori Results
Typically, a non-premixed flame is located on surfaces of roughly stoichiometric mixture fraction, which makes the scalar dissipation rate conditioned on the stoichiometric mixture fraction an interesting quantity. Furthermore, a dimensionless time, denoted as \(t^*\), is introduced. It is shifted to make different cases comparable, with the starting point defined as the time when the variance of the scalar dissipation rate at stoichiometric conditions is zero. The normalization uses the jet time, defined as the ratio of the jet height to its bulk velocity, 32.3 mm/20.7 m/s. The time evolution of the ensemble-averaged density-weighted scalar dissipation rate conditioned on the stoichiometric mixture fraction is compared between DNS and PIESRGAN-LES in Fig. 14. The LES used training data with varying filter widths, corresponding to stencil sizes of 7–15 cells per direction (Bode 2022a). The prediction of the LES is very good, even though the peak is slightly underpredicted.
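The conditioning and normalization can be sketched as follows, with hypothetical sample arrays for mixture fraction, density, and dissipation rate; the conditioning bin half-width is an assumption, and the stoichiometric mixture fraction of methane/air is approximately 0.055:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# hypothetical samples of mixture fraction, density, and dissipation rate
Z = rng.uniform(0.0, 1.0, n)
rho = 1.2 - 0.8 * Z
chi = rng.lognormal(0.0, 1.0, n) * Z * (1.0 - Z)

Z_st = 0.055        # stoichiometric mixture fraction of methane/air
half_width = 0.005  # conditioning bin half-width (an assumption)

# density-weighted conditional mean <rho chi | Z = Z_st> / <rho | Z = Z_st>
mask = np.abs(Z - Z_st) < half_width
chi_st = np.sum(rho[mask] * chi[mask]) / np.sum(rho[mask])

# jet time used for the normalization: jet height over bulk velocity
t_jet = 0.0323 / 20.7  # s, about 1.56 ms
```

In practice, the conditional mean would additionally be ensemble-averaged over the periodic directions and tracked over the shifted dimensionless time.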
6.4 Discussion
The non-premixed case emphasizes two important points with respect to PIESRGAN modeling. First, as already seen for the decaying turbulence case, the accuracy in predicting mixing is very high. This is crucial for many applications far beyond combustion. Second, PIESRGAN is able to statistically predict a local phenomenon like quenching, which is very challenging for classical LES models. Both points make PIESRGAN very promising for predictive LES of even more complex configurations.
The non-premixed case with more than one billion grid points and 28 species, chosen as an example in this section, also highlights the capability of PIESRGAN to be used for recomputing the largest available reactive DNS. This is technically remarkable and only possible due to the rapid developments in the fields of ML/DL and supercomputers in general.
7 Conclusions
AI super-resolution is a powerful tool to improve various aspects of state-of-the-art simulations. These include the reduction of storage and input/output (I/O) requirements, improved comparability between experimental and simulation data, and highly accurate subfilter models for LES, as demonstrated by the examples discussed in this work. The remarkable progress in the fields of ML/DL and supercomputing in general, especially with respect to GPU computing, has made ML/DL-based techniques competitive and in some aspects even superior to classical approaches, and it is expected that the rapid developments in this field will continue in the upcoming years.
The presented applications, ranging from turbulence to non-premixed combustion, focused on the high accuracy of PIESRGAN-based approaches in a priori and a posteriori tests. Especially the a posteriori accuracy is striking, unveiling the potential of the PIESRGAN subfilter approach. Compared to classical methods, the LES mesh can often be significantly coarsened, as the PIESRGAN technique was found to be more robust in underresolved flow situations.
From a technical point of view, PIESRGAN-based models are simple to use, as they can easily be implemented in frameworks such as Keras/TensorFlow and PyTorch, which are used by a very large community. The trained network can then be coupled to any simulation code by adapting the code's existing application programming interface (API) for external libraries.
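A minimal sketch of the resulting coupling pattern during an LES step; everything here is a stand-in, and `reconstruct` only marks where the solver would invoke the trained generator through its external-library API (e.g., a Keras/TensorFlow or PyTorch inference call):

```python
import numpy as np

def box_filter(f, n):
    # top-hat filter of width n cells per direction (periodic field)
    out = f
    for axis in range(f.ndim):
        out = sum(np.roll(out, s, axis) for s in range(-(n // 2), n // 2 + 1)) / n
    return out

def reconstruct(u_les):
    # placeholder for the PIESRGAN generator; in a real coupling this would
    # be a single inference call into the ML framework
    return u_les

def subfilter_stress(u_les, n_filter=7):
    # closure pattern: reconstruct, evaluate the unclosed term on the
    # reconstructed field, and filter back to the LES mesh
    u_rec = reconstruct(u_les)
    return box_filter(u_rec * u_rec, n_filter) - box_filter(u_rec, n_filter) ** 2

u = np.random.default_rng(4).standard_normal((32, 32, 32))
tau = subfilter_stress(u)
```

With the identity stub, the stress reduces to that of the filtered LES field itself; in the actual model, the generator restores subfilter fluctuations before the filtering step, which is where the closure accuracy comes from.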
PIESRGAN-based subfilter modeling is a relatively new technique, and thus many questions are still open. The presented architecture yielded good results, but it is expected that it could be improved further. Using a physics-informed loss function instead of physics-informed network layers seems reasonable, as it is trivial to implement while resulting in equally accurate predictions. One of the most important topics in the context of data-driven approaches is the extrapolation capability, i.e., how accurate predictions are outside of the training range. Recent publications (Bode et al. 2019a, 2021, 2022; Bode 2022a, b, c) show some promising properties of PIESRGAN in this regard, but this should be investigated in more detail in the future. Additionally, the combustion community has computed petabytes of DNS data for various combustion configurations. Given the demonstrated generality of PIESRGAN, in the sense that the same architecture worked very well for multiple configurations, the combination of DNS databases and PIESRGAN could already be very useful to advance combustion research. PIESRGAN was also shown to be universal enough to use the same trained network for physical parameter variations. Thus, many optimization problems could easily be accelerated.
References
Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X (2016) TensorFlow: large-scale machine learning on heterogeneous systems
Banerjee I, Ierapetritou GM (2006) An adaptive reduction scheme to model reactive flow. Combust Flame 144(3):619–633
Beck AD, Flad DG, Munz C-D (2018) Neural networks for data-based turbulence models. arXiv:1806.04482
Bhati A, Wan S, Alfe D, Clyde A, Bode M, Tan L, Titov M, Merzky A, Turilli M, Jha S, Highfield RR, Rocchia W, Scafuri N, Succi S, Kranzlmüller D, Mathias G, Wifling D, Donon Y, Di Meglio A, Vallecorsa S, Ma H, Trifan A, Ramanathan A, Brettin T, Partin A, Xia F, Duan X, Stevens R, Coveney PV (2021) Pandemic drugs at pandemic speed: infrastructure for accelerating COVID-19 drug discovery with hybrid machine learning- and physics-based simulations on high performance computers. Interface Focus, 20210018
Bode M (2022a) Applying physics-informed enhanced super-resolution generative adversarial networks to turbulent non-premixed combustion on non-uniform meshes and demonstration of an accelerated simulation workflow. arXiv preprint arXiv:2210.16248
Bode M (2022b) Applying physics-informed enhanced super-resolution generative adversarial networks to large-eddy simulations of ECN Spray C. SAE Technical Paper 2022-01-0503
Bode M (2022c) Applying physics-informed enhanced super-resolution generative adversarial networks to finite-rate-chemistry flows and predicting lean premixed gas turbine combustors. arXiv preprint arXiv:2210.16219
Bode M, Diewald F, Broll D, Heyse J, et al (2014) Influence of the injector geometry on primary breakup in diesel injector systems. SAE Technical Paper 2014-01-1427
Bode M, Falkenstein T, Le Chenadec V, Kang S, Pitsch H, Arima T, Taniguchi H (2015) A new Euler/Lagrange approach for multiphase simulations of a multi-hole GDI injector. SAE Technical Paper 2015-01-0949
Bode M, Gauding M, Kleinheinz K, Pitsch H (2019a) Deep learning at scale for subgrid modeling in turbulent flows: regression and reconstruction. LNCS 11887:541–560
Bode M, Collier N, Bisetti F, Pitsch H (2019b) Adaptive chemistry lookup tables for combustion simulations using optimal B-spline interpolants. Combust Theory Model 23(4):674–699
Bode M, Gauding M, Lian Z, Denker D, Davidovic M, Kleinheinz K, Jitsev J, Pitsch H (2021) Using physics-informed enhanced super-resolution generative adversarial networks for subfilter modeling in turbulent reactive flows. Proc Combust Inst 38:2617–2625
Bode M, Gauding M, Goeb D, Falkenstein T, Pitsch H (2022) Applying physics-informed enhanced super-resolution generative adversarial networks to turbulent premixed combustion and engine-like flame kernel direct numerical simulation data. arXiv preprint arXiv:2210.16206
Denker D, Attili A, Gauding M, Niemietz K, Bode M, Pitsch H (2020) Dissipation element analysis of non-premixed jet flames. J Fluid Mech 905:A4
Denker D, Attili A, Boschung J, Hennig F, Gauding M, Bode M, Pitsch H (2021) A new modeling approach for mixture fraction statistics based on dissipation elements. Proc Combust Inst 38:2681–2689
Desjardins O, Blanquart G, Balarac G, Pitsch H (2008) High order conservative finite difference scheme for variable density low Mach number turbulent flows. J Comput Phys 227(15):7125–7159
Dong C, Loy CC, He K, Tang X (2014) Learning a deep convolutional network for image super-resolution. In: European conference on computer vision, pp 184–199
Dong C, Loy CC, He K, Tang X (2015) Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intel 38(2):295–307
Engine Combustion Network (2019) https://ecn.sandia.gov
Falkenstein T, Kang S, Cai L, Bode M, Pitsch H (2020a) DNS study of the global heat release rate during early flame kernel development under engine conditions. Combust Flame 213:455–466
Falkenstein T, Rezchikova A, Langer R, Bode M, Kang S, Pitsch H (2020b) The role of differential diffusion during early flame kernel development under engine conditions - part i: analysis of the heat-release-rate response. Combust Flame 221:502–515
Falkenstein T, Chu H, Bode M, Kang S, Pitsch H (2020c) The role of differential diffusion during early flame kernel development under engine conditions - part ii: effect of flame structure and geometry. Combust Flame 221:516–529
Frisch U (1995) Turbulence: the legacy of AN Kolmogorov. Cambridge University Press, Cambridge
Gauding M, Wang L, Goebbert JH, Bode M, Danaila L, Varea E (2019) On the self-similarity of line segments in decaying homogeneous isotropic turbulence. Comput & Fluids 180:206–217
Goeb D, Davidovic M, Cai L, Pancharia P, Bode M, Jacobs S, Beeckmann J, Willems W, Heufer KA, Pitsch H (2021) Oxymethylene ether - n-dodecane blend spray combustion: experimental study and large-eddy simulations. Proc Combust Inst 38:3417–3425
Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial networks. arXiv:1406.2661
Hinton G, Deng L, Yu D, Dahl G, Mohamed A, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, Kingsbury B (2012) Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Process Mag 29
Hirschfelder JO, Curtiss CF, Bird RB, Mayer MG (1964) Molecular theory of gases and liquids
Ihme M, Schmitt C, Pitsch H (2009) Optimal artificial neural networks and tabulation methods for chemistry representation in LES of a bluff-body swirl-stabilized flame. Proc Combust Inst 32:1527–1535
Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: European conference on computer vision, pp 694–711
Jolicoeur-Martineau A (2018) The relativistic discriminator: a key element missing from standard GAN. arXiv:1807.00734
Keras (2019) https://keras.rstudio.com/index.html
Kim J, Lee JK, Lee KM (2016a) Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1646–1654
Kim J, Lee JK, Lee KM (2016b) Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1637–1645
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
Lai W-S, Huang J-B, Ahuja N, Yang M-H (2017) Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 624–632
Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z, Shi W (2017) Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4681–4690
Li Y, Ni Y, Croft RAC, Di Matteo T, Bird S, Feng Y (2021) AI-assisted superresolution cosmological simulations. Proc Natl Acad Sci 118:e2022038118
Maas AL, Hannun AY, Ng AY (2013) Rectifier nonlinearities improve neural network acoustic models. In: Proceedings of the 30th international conference on machine learning, p 30
Miller RS, Bellan J (1999) Direct numerical simulation of a confined three-dimensional gas mixing layer with one evaporating hydrocarbon-droplet-laden stream. J Fluid Mech 384:293–338
Patterson MA, Reitz RD (1998) Modeling the effects of fuel spray characteristics on diesel engine combustion and emission. SAE Technical Paper
Pekurovsky D (2012) P3DFFT: a framework for parallel computations of Fourier transforms in three dimensions. SIAM J Sci Comput 34:192–209
Peters N (1986) Laminar flamelet concepts in turbulent combustion. In: Twenty-First symposium (International) combustion, pp 1231–1250
Pitsch H, Chen M, Peters N (1998) Unsteady flamelet modeling of turbulent hydrogen-air diffusion flames. In: Twenty-Seventh symposium (International) combustion, vol 27, pp 1057–1064
Pitsch H (2006) Large-eddy simulation of turbulent combustion. Ann Rev Fluid Mech 38:453–482
Pope SB (2000) Turbulent flows. Cambridge University Press, Cambridge
Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556
Smagorinsky J (1963) General circulation experiments with the primitive equations: I. The basic experiment. Mon Weather Rev 91(3):99–164
Stengel K, Glaws A, Hettinger D, King RN (2020) Adversarial super-resolution of climatological wind and solar data. Proc Natl Acad Sci 117:16805–16815
Tai Y, Yang J, Liu X, Xu C (2017) MemNet: a persistent memory network for image restoration. In: Proceedings of the IEEE international conference on computer vision, pp 4539–4547
Vinyals O, Babuschkin I, Czarnecki WM, Mathieu M, Dudzik A, Chung J, Choi DH, Powell R, Ewalds T, Georgiev P, Oh J, Horgan D, Kroiss M, Danihelka I, Huang A, Sifre L, Cai T, Agapiou JP, Jaderberg M, Vezhnevets AS, Leblond R, Pohlen T, Dalibard V, Budden D, Sulsky Y, Molloy J, Paine TL, Gulcehre C, Wang Z, Pfaff T, Wu Y, Ring R, Yogatama D, Wünsch D, McKinney K, Smith O, Schaul T, Lillicrap T, Kavukcuoglu K, Hassabis D, Apps C, Silver D (2019) Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575:350–354
Wang X, Yu K, Wu S, Gu J, Liu Y, Dong C, Qiao Y, Loy CC (2018) ESRGAN: enhanced super-resolution generative adversarial networks. In: Proceedings of the European conference on computer vision (ECCV)
Yao T, Pei Y, Zhong B-J, Som S, Lu T, Luo KH (2017) A compact skeletal mechanism for n-dodecane with optimized semi-global low-temperature chemistry for diesel engine simulations. Fuel 191:339–349
Zhang Y, Li K, Li K, Wang L, Zhong B, Fu Y (2018) Image super-resolution using very deep residual channel attention networks. In: Proceedings of the European conference on computer vision (ECCV), pp 286–301
Acknowledgements
The author acknowledges computing time grants for the projects JHPC55 and TurbulenceSL by the JARA-HPC Vergabegremium provided on the JARA-HPC Partition part of the supercomputer JURECA at Jülich Supercomputing Centre, Forschungszentrum Jülich, the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC), and funding from the European Union’s Horizon 2020 research and innovation program under the Center of Excellence in Combustion (CoEC) project, grant agreement no. 952181.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Cite this chapter
Bode, M. (2023). AI Super-Resolution: Application to Turbulence and Combustion. In: Swaminathan, N., Parente, A. (eds) Machine Learning and Its Application to Reacting Flows. Lecture Notes in Energy, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-031-16248-0_10
Print ISBN: 978-3-031-16247-3
Online ISBN: 978-3-031-16248-0