1 Introduction

1.1 Contemporary Materials Science

The works outlined in the present review have been motivated by the following two-fold observation. In the past couple of decades, what we believe to be the most spectacular changes in materials science are

  1. (i)

    the increasingly multi-scale nature of the materials considered: materials used to be considered mostly at one single scale, the effect of the finer scales being only phenomenologically accounted for in the model at the largest scale; when absolutely necessary, the effect of some micro-scale structure was explicitly considered, but then at most for one such scale and almost exclusively sequentially, information being passed from the micro-scale to the macro-scale; modern materials science increasingly considers, explicitly and concurrently, models of a given material at many different scales.

  2. (ii)

    the increasingly imperfect character of the materials considered: more and more often, deterministic or random sources of disorder are considered within an ordered phase; the simplicity of periodic structures is no longer a valid approximation for the degree of practical relevance and accuracy that modern materials science requires; crystalline materials are actually polycrystalline materials and consist of mono-crystalline grains, each of them possibly of a different crystalline structure, and each crystalline structure is itself flawed because it is sprinkled with defects and dislocations; the imperfections, or violations of periodicity, affect every possible scale, and actually cut through scales.

As a result, the real materials that contemporary materials scientists have to model have a multi-scale, imperfect, possibly random nature. Such materials have several characteristic length-scales that may differ from one another by orders of magnitude but must be accounted for simultaneously. At possibly each such scale, they have defects. Their qualitative and quantitative response may therefore differ markedly from the idealized scenarios long considered.

Our intent here is to present several mathematical and numerical endeavors that aim to better model, understand and simulate non-periodic multi-scale problems.

The specific theoretical context in which we develop our discussion is homogenization of simple, second order elliptic equations in divergence form with highly oscillatory coefficients:

$$\begin{aligned} - {\mathrm{div}}\left[ A_\varepsilon (x) \nabla u^\varepsilon \right] = f, \end{aligned}$$
(1)

in a domain \(\mathcal{D}\subset {\mathbb R}^d\), with, say, homogeneous Dirichlet boundary conditions \(u^\varepsilon =0 \) on \(\partial \mathcal{D}\). This particular equation should be thought of as a prototype. It is intuitively clear that the same approaches carry over to other settings. Current works are indeed directed toward extending many of the considerations here to other types of equations, as will be clear in the exposition below.

We conclude this introductory section with a quick presentation of the classical theory. The reader familiar with this theory may of course skip the presentation and directly proceed to Sect. 2.

1.2 Basics of Homogenization Theory

1.2.1 Periodic Homogenization

To begin with, we recall some well-known, basic ingredients of elliptic homogenization theory in the periodic setting; see the classical references [8, 29, 42] for more details, or the overview in [1, Chap. 1]. We consider the problem

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left[ A_{per}\left( \frac{x}{\varepsilon }\right) \nabla u^\varepsilon \right] = f \quad \text {in} \quad \mathcal{D}, \\ u^\varepsilon =0 \quad \text {on} \quad \partial \mathcal{D}, \end{array}\right. \end{aligned}$$
(2)

where the matrix \(A_{per}\) is \({\mathbb Z}^d\)-periodic, bounded and bounded away from zero, and (for simplicity) symmetric. The corrector problem associated to Eq. 2 reads, for \({\mathbf{p}}\) fixed in \({\mathbb R}^d\),

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left( A_{per}(y)\left( {\mathbf{p}}+ \nabla w_{per,{\mathbf{p}}} \right) \right) =0, \\ w_{per,{\mathbf{p}}} \text { is } {\mathbb Z}^d \text {-periodic}. \end{array} \right. \end{aligned}$$
(3)

It has a unique solution up to the addition of a constant. This solution is meant to describe prototypical fine oscillations of the exact solution \(u^\varepsilon \) for \(\varepsilon \) small. Then, the homogenized coefficients read

$$\begin{aligned}{}[{A_{per}^*}]_{ij} = \int \limits _Q {\mathbf{e}}_i^T A_{per}(y)\left( {\mathbf{e}}_j + \nabla w_{per,{\mathbf{e}}_j}(y) \right) dy, \end{aligned}$$
(4)

where Q is the unit cube and \({\mathbf{e}}_i\), \(1\le i\le d\) are the canonical vectors of \({\mathbb R}^d\). The main result of periodic homogenization theory for Eq. 2 is that, as \(\varepsilon \) vanishes, the solution \(u^\varepsilon \) to Eq. 2 converges to \(u^*\) solution to

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left[ {A_{per}^*}\nabla u^* \right] = f \quad \text {in} \quad \mathcal{D}, \\ u^*=0 \quad \text {on} \quad \partial \mathcal{D}. \end{array}\right. \end{aligned}$$
(5)

The convergence holds in \(L^2(\mathcal{D})\), and weakly in \(H^1_0(\mathcal{D})\). The correctors \(w_{per,{\mathbf{e}}_i}\) may then also be used to “correct” \(u^*\) in order to show that, in the strong topology \(H^1(\mathcal{D})\), \(u^\varepsilon -u^{\varepsilon ,1}(x)\) converges to zero, for \(\displaystyle u^{\varepsilon ,1}(x)=u^*(x)+\varepsilon \sum \nolimits _{i=1}^d\partial _{x_i}u^*(x)\,w_{per,{\mathbf{e}}_i}(x/\varepsilon )\). The rate of convergence may also be made precise.

The practical conclusion is that, at the price of only solving the d periodic problems of Eq. 3, the solution to Eq. 2 can be efficiently approximated for small \(\varepsilon \).
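To make this practical conclusion concrete, here is a minimal one-dimensional sketch in Python (the coefficient a_per below is a hypothetical example, and the script is an illustration under simplifying assumptions, not an implementation from the cited references). In dimension one, the corrector problem of Eq. 3 can be integrated in closed form and the homogenized coefficient of Eq. 4 reduces to the harmonic mean of \(a_{per}\) over its period; the script computes this value and compares the resulting homogenized solution with a direct finite difference approximation of the oscillatory problem for \(f=1\).

```python
# Minimal 1-d illustration of periodic homogenization (hypothetical coefficient a_per;
# a sketch, not the code of the cited references).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def a_per(y):
    return 2.0 + np.sin(2.0 * np.pi * y)

def homogenized_coefficient(n_quad=1_000_000):
    # 1-d homogenization: a* = ( int_0^1 dy / a_per(y) )^(-1)   (harmonic mean)
    y = (np.arange(n_quad) + 0.5) / n_quad            # midpoint quadrature
    return 1.0 / np.mean(1.0 / a_per(y))

def solve_oscillatory(eps, n=20000):
    # conservative finite differences for -(a_per(x/eps) u')' = 1 on (0,1), u(0)=u(1)=0
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    a_mid = a_per((x[:-1] + 0.5 * h) / eps)           # coefficient at cell midpoints
    main = (a_mid[:-1] + a_mid[1:]) / h**2            # tridiagonal system, interior nodes
    off = -a_mid[1:-1] / h**2
    A = diags([off, main, off], offsets=[-1, 0, 1], format="csr")
    u = np.zeros(n + 1)
    u[1:-1] = spsolve(A, np.ones(n - 1))
    return x, u

if __name__ == "__main__":
    a_star = homogenized_coefficient()
    print(f"a* = {a_star:.6f}")
    for eps in [0.1, 0.05, 0.025]:
        x, u_eps = solve_oscillatory(eps)
        u_star = x * (1.0 - x) / (2.0 * a_star)       # exact solution of -a* u'' = 1
        err = np.sqrt(np.mean((u_eps - u_star) ** 2))
        print(f"eps = {eps:6.3f}   L2 error |u_eps - u*| = {err:.3e}")
```

For this particular coefficient, \(a^*_{per}=\sqrt{3}\approx 1.732\), and the printed \(L^2\) errors decrease as \(\varepsilon \) does, in line with the convergence statement above.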

1.2.2 Random Homogenization

A first option to go beyond the simplistic setting of periodic structures is to consider random structures. Of course, materials are never random in nature, but randomness is a convenient, practical way to encode our ignorance of, or at best our uncertainty about, the intimate microscopic structure of the material considered.

For homogenization, the random setting is a highly nontrivial extension of the periodic setting. Many questions, in particular for nonlinear equations, remain open in the random case although they are solved and well documented in the periodic case. Fortunately, in the case of linear diffusion equations such as Eq. 1, the state of affairs is that, loosely speaking, all the convergence results essentially still hold true, but (a) they are more difficult to prove and (b) the convergence rates are even more difficult to establish.

To fix ideas, we now give some more formal details on one random case. For brevity, we skip all technicalities related to the definition of the probabilistic setting, which we assume to be discrete stationary and ergodic (we refer e.g. to [2] for all details). We now fix a square matrix \(A(\cdot ,\omega )\) of size d, again bounded and bounded away from zero, and symmetric, which is assumed stationary in the sense

$$\begin{aligned} \forall {\mathbf{k}}\in {\mathbb Z}^d, \quad A(x+{\mathbf{k}}, \omega ) = A(x,\tau _{\mathbf{k}}\omega ) \quad \text{almost everywhere in } x, \text{ almost surely} \end{aligned}$$
(6)

(where \(\tau \) is an ergodic group action). This amounts to assuming that the law of \(A(.,\omega )\) is \({\mathbb Z}^d\)-periodic. Then we consider the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left( A\left( \frac{x}{\varepsilon }, \omega \right) \nabla u^\varepsilon \right) = f \quad \text {in} \quad {\mathcal D}, \\ u^\varepsilon = 0 \quad \text {on} \quad \partial {\mathcal D}. \end{array}\right. \end{aligned}$$
(7)

Standard results of random homogenization [8, 29] apply and allow one to identify the homogenized problem associated with Eq. 7. These results generalize the periodic results recalled in Sect. 1.2.1. The solution \(u^\varepsilon \) to Eq. 7 converges to the solution to Eq. 5, where the homogenized matrix is now defined as:

$$\begin{aligned}{}[A^*]_{ij} = {\mathbb E}\left( \int \limits _Q {\mathbf{e}}_i^T A\left( y,\cdot \right) \,\left( {\mathbf{e}}_j+\nabla w_{{\mathbf{e}}_j}(y,\cdot )\right) \,dy\right) , \end{aligned}$$
(8)

where for any \({\mathbf{p}}\in {\mathbb R}^d\), \(w_{\mathbf{p}}\) is the solution (unique up to the addition of a random constant) to

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left[ A\left( y,\omega \right) \left( {\mathbf{p}}+ \nabla w_{\mathbf{p}}(y,\omega ) \right) \right] =0, \quad \text{a.s. on } {\mathbb R}^d,\\ \nabla w_{\mathbf{p}} \quad \text{is stationary in the sense of Eq.}~6, \\ \displaystyle {\mathbb E}\left( \int \limits _Q \nabla w_{\mathbf{p}}(y,\cdot )\,dy\right) = {\mathbf{0}}. \end{array} \right. \end{aligned}$$
(9)

A striking difference between the random setting and the periodic setting can be observed by comparing Eqs. 3 and 9. In the periodic case, the corrector problem is posed on a bounded domain, namely the periodic cell Q. In sharp contrast, the corrector problem Eq. 9 of the random case is posed on the whole space \({\mathbb R}^d\) and cannot be reduced, at the theoretical level, to a problem posed on a bounded domain. The fact that the random corrector problem is posed on the entire space has far-reaching consequences both for the theory and for numerical practice. To some extent, the unboundedness of the domain on which the corrector problem is posed is a common denominator of all the settings that we will address in the present survey. This unboundedness is also a fundamental characteristic feature of the practically relevant problems of materials science. We cannot emphasize this fact enough.

In order to approximate Eq. 9 numerically, truncations of the problem have to be considered, typically on large domains \(Q_N = [0,N]^d\) supplied with periodic boundary conditions. The actual homogenized coefficients are only captured in the asymptotic regime \(Q_N\rightarrow {\mathbb R}^d\). Overall, it is fair to say that the approach is very expensive computationally, and often actually prohibitively so. Therefore, in many practical situations, the size of the “large” domain \(Q_N\) considered is in fact small, and the number of realizations of the random microstructure used therein to approximate the expectation in Eq. 8 is also dramatically limited. Put differently, there is a large gap looming between the actual practice and the regime in which the theory provides relevant information.
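The following one-dimensional sketch (a hypothetical two-valued coefficient, constant on each unit cell; an illustration only, not the code of the works cited below) mimics this truncation strategy. In dimension one, the exact homogenized coefficient is the harmonic mean \(\bigl({\mathbb E}\bigl[\int _Q 1/a\bigr]\bigr)^{-1}\), so the quality of the truncated, Monte Carlo approximation can be measured directly.

```python
# Hedged 1-d sketch of the truncated random corrector strategy (hypothetical model).
# Each unit cell carries the value 1 or 4 with probability 1/2, independently; the
# truncated approximation on Q_N = [0, N] is the harmonic mean of the N cell values.
import numpy as np

rng = np.random.default_rng(0)
a_star_exact = 1.0 / (0.5 * (1.0 / 1.0) + 0.5 * (1.0 / 4.0))   # exact a* = 1.6

def truncated_estimate(N, n_realizations):
    # one homogenized-coefficient estimate per realization of the microstructure on Q_N
    cells = rng.choice([1.0, 4.0], size=(n_realizations, N))
    return 1.0 / np.mean(1.0 / cells, axis=1)

if __name__ == "__main__":
    print(f"exact a* = {a_star_exact:.4f}")
    for N in [10, 100, 1000, 10000]:
        est = truncated_estimate(N, n_realizations=30)
        print(f"N = {N:6d}   empirical mean = {est.mean():.4f}   std = {est.std():.4f}")
```

The empirical standard deviation printed for each N gives a rough idea of the fluctuations whose quantitative analysis is the topic of the works cited in the next paragraph.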

Important theoretical questions about the quality and the rate of the convergence in terms of the truncation size arise: see, in particular, the pioneering works by Bourgeat and Piatnitski [17, 18] and, more broadly and recently, a series of works by F. Otto, A. Gloria, S.  Armstrong, Ch. Smart, J.-C. Mourrat and their many collaborators, see e.g. [25, 26] for examples of contributions.

2 A Mathematical Toolbox for “Weakly” Random Problems

With this section we begin our study of the homogenization of non-periodic problems. We have already mentioned that one possible option is the random setting, and we have mentioned the practical difficulties it raises. In many practical situations, however, the real material under consideration is not far from being a periodic material. At zeroth order of approximation, the material can be considered periodic, and it is only at higher order that disorder might play a role. We choose, in this section, to encode this disorder using randomness. When the “material” under study is the geological bedrock, there is of course no reason for this assumption to be valid, and the classical random model of Sect. 1.2.2 might be more relevant. In contrast, the assumption makes a lot of sense when considering manufactured materials, where the departure from periodicity typically owes to flaws in the manufacturing process: the material was meant to be periodic, but it is actually not. The practically relevant question is to understand whether or not, despite its smallness, the microscopic amount of randomness might affect the macroscale at order one. Answering this question requires coming up with a modeling strategy for the imperfect material.

Our purpose here is to outline a modeling strategy that accounts for the presence of randomness in a multi-scale computation, but specifically addresses the case when the amount of randomness present in the system is small. In this case, we call the material weakly random. The weakly random material is thus considered as a small perturbation of a periodic material. Our purpose is to introduce a toolbox of possible modeling strategies that all keep the computational workload limited (in comparison with a direct attack on the problem as if, as in Sect. 1.2.2, the randomness were not small) and that all provide an approximation of the response of the material which one may certify by error estimates.

As mentioned above, the simple diffusion equation Eq. 1 is a perfect prototypical testbed for our toolbox. It is ubiquitous in several, if not all, engineering and life sciences. Although we have not developed our theory and computations for other, more general equations and settings, we are convinced that the same line of approach (namely, a small amount of randomness as compared to a reference periodic setting, plus an expansion in the randomness amplitude, and simplified computations) can be useful in many contexts.

2.1 Random Deformations of the Periodic Setting

A first random setting, which has been introduced and studied in [11] and is not, mathematically, a particular case of the classical stationary setting recalled in Sect. 1.2.2, consists of random deformations of a periodic structure. As said above, it is motivated by the consideration of random geometries that have some specific proximity to the periodic setting. The periodic setting is here taken as a reference configuration, somewhat similarly to the classical mathematical formalization of continuum mechanics, where a reference configuration is used to define the state of the material under study. Another related idea, in a completely different context, is the consideration of a reference element for finite element computations. The real situation is then seen via a mapping from the reference configuration to the actual configuration. Here, this mapping is a random mapping (otherwise, one would know everything about the material up to a change of coordinates and there would be little practical interest in the approach). Assuming some regularity of this mapping induces constraints on the set of geometries that the microstructures of the material can take. Put differently, the material structure, even though it is not entirely known, is not arbitrarily disordered.

We fix some \({\mathbb Z}^d\)-periodic \(A_{per}\), assumed to satisfy the usual properties of boundedness and coerciveness, and we consider the following specific form of the coefficient \(A_\varepsilon \) in Eq. 1

$$\begin{aligned} A_\varepsilon \left( x , \omega \right) \,=\,A_{per}\left( \Phi ^{-1}\left( \frac{x}{\varepsilon }, \omega \right) \right) , \end{aligned}$$
(10)

where the function \(\Phi (\cdot ,\omega )\) is assumed to be, almost surely, a diffeomorphism from \({\mathbb R}^d\) to \({\mathbb R}^d\). The diffeomorphism, called a random stationary diffeomorphism, is assumed to additionally satisfy

$$\begin{aligned}&\text {essinf}_{\omega \in \Omega ,\, x\in {\mathbb R}^d} \left[ \det (\nabla \Phi (x,\omega ))\right] = \nu >0, \end{aligned}$$
(11)
$$\begin{aligned}&\text {esssup}_{\omega \in \Omega , \, x\in {\mathbb R}^d} \left( |\nabla \Phi (x,\omega )| \right) = M <\infty ,\end{aligned}$$
(12)
$$\begin{aligned}&\nabla \Phi (x,\omega )\quad \text{ is } \text{ stationary } \text{ in } \text{ the } \text{ sense } \text{ of } \text{ Eq. }~6. \end{aligned}$$
(13)

Note that the first two assumptions enforce the “homogeneity” of the diffeomorphism: the deformed periodic structure neither implodes nor explodes anywhere.

Homogenization holds for the above problem (the details are made precise in [11]). The homogenized problem again reads as in Eq. 5 with the homogenized matrix given by:

$$\begin{aligned}{}[A^*]_{ij} = \det \left( {\mathbb E}\left( \int \limits _Q \nabla \Phi (z,\cdot )\,dz \right) \right) ^{-1} \, {\mathbb E}\left( \int \limits _{\Phi (Q,\cdot )} {\mathbf{e}}_i^T A_{per}\left( \Phi ^{-1}(y,\cdot )\right) \left( {\mathbf{e}}_j+\nabla w_{{\mathbf{e}}_j}(y,\cdot ) \right) \,dy\right) , \end{aligned}$$
(14)

where for any \({\mathbf{p}}\in {\mathbb R}^d\), \(w_{\mathbf{p}}\) is the solution (unique up to the addition of a random constant and belonging to the suitable functional space) to

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\left[ A_{per}\left( \Phi ^{-1}(y,\omega )\right) \left( {\mathbf{p}}+ \nabla w_{\mathbf{p}} \right) \right] =0, \quad \text{a.s. on } {\mathbb R}^d, \\ w_{\mathbf{p}}(y,\omega ) = \tilde{w}_{\mathbf{p}}\left( \Phi ^{-1}(y,\omega ),\omega \right) , \quad \nabla \tilde{w}_{\mathbf{p}} \quad \text{is stationary in the sense of Eq.}~6, \\ \displaystyle {\mathbb E}\left( \int \limits _{\Phi (Q,\cdot )} \nabla w_{\mathbf{p}}(y,\cdot )dy\right) = {\mathbf{0}}. \end{array} \right. \end{aligned}$$
(15)

At first sight, there seems to be no simplification whatsoever in considering the above system Eq. 15, which even looks far more complex than the classical random problem Eq. 9. The key point, though, is that the introduction of a new modeling “parameter”, namely the random diffeomorphism \(\Phi \), allows one, in some sense, to introduce a distance between the periodic case (\(\Phi =Id\)) and the random case (\(\Phi \not =Id\)) considered. Our next step consists in proceeding in this direction.

2.2 Small Random Perturbations of the Periodic Setting

We now superimpose on the setting defined in the previous section the assumption that the material considered is a small perturbation of a periodic material. This is formalized upon writing

$$\begin{aligned} \Phi (x,\omega ) = x + \eta \,\Psi (x,\omega ) + O(\eta ^2), \end{aligned}$$
(16)

where \(\Psi \) is any random field such that \(\Phi \) is a random stationary diffeomorphism that satisfies Eqs. 11-13 for \(\eta \) sufficiently small.

Fig. 1 (source: [21]). Small random deformation of a periodic structure. In the unperturbed periodic environment, the inclusions are circular and periodic. The deformation of each inclusion is performed randomly.

It has been shown in [11] that, when \(\Phi \) is such a perturbation of the identity map (see Fig. 1), the solution to the corrector problem of Eq. 15 may be expanded in powers of the small parameter \(\eta \). It reads \(\widetilde{w}_{\mathbf{p}}(x,\omega ) = w_{per,{\mathbf{p}}}(x) + \eta w_{\mathbf{p}}^1(x,\omega ) + O(\eta ^2)\), where \(w_{per,{\mathbf{p}}}\) is the periodic corrector defined in Eq. 3 and where \(w_{\mathbf{p}}^1\) solves

$$\begin{aligned} \left\{ \begin{array}{l} - {\mathrm{div}}\, \left[ A_{per} \, \nabla w_{\mathbf{p}}^1 \right] \\ \quad \quad \quad \quad = {\mathrm{div}}\, \left[ -A_{per} \, \nabla \Psi \, \nabla w_{per,{\mathbf{p}}} - (\nabla \Psi ^T - ( {\mathrm{div}}\, \Psi ) \text {Id}) \, A_{per} \, ({\mathbf{p}} + \nabla w_{per,{\mathbf{p}}}) \right] , \\ \nabla w_{\mathbf{p}}^1 \ \text{ is } \text{ stationary } \text{ and } \ \displaystyle {\mathbb E} \left( \int \limits _Q \nabla w_{\mathbf{p}}^1 \right) = {\mathbf{0}}. \end{array} \right. \end{aligned}$$
(17)

The problem of Eq. 17 in \(w_{\mathbf{p}}^1\) is random in nature, but it is in fact easy to see, taking the expectation, that \(\overline{w}_{\mathbf{p}}^1 = {\mathbb E} (w_{\mathbf{p}}^1)\) is periodic and solves the deterministic problem

$$\begin{aligned}&- {\mathrm{div}}\, \left[ A_{per} \, \nabla \overline{w}_{\mathbf{p}}^1 \right] \\&\quad = {\mathrm{div}}\, \left[ -A_{per} \, {\mathbb E}(\nabla \Psi ) \, \nabla w_{per,{\mathbf{p}}} - ({\mathbb E}(\nabla \Psi ^T) - {\mathbb E}( {\mathrm{div}}\, \Psi ) \text {Id}) \, A_{per} \, ({\mathbf{p}} + \nabla w_{per,{\mathbf{p}}}) \right] . \end{aligned}$$

This is useful because, on the other hand, the knowledge of \(w_{per,{\mathbf{p}}}\) and \(\overline{w}_{\mathbf{p}}^1\) suffices to obtain a first-order expansion (in \(\eta \)) of the homogenized matrix. Indeed, with \( A_{per}^*\) the periodic homogenized tensor defined in Eq. 4 and with

$$\begin{aligned} A_{ij}^1 = - \int \limits _Q {\mathbb E} ( {\mathrm{div}}\, \Psi ) \, [A_{per}^*]_{ij} + \int \limits _Q ({\mathbf{e}}_i + \nabla w_{per,{\mathbf{e}}_i})^T A_{per} \, {\mathbf{e}}_j \, {\mathbb E}( {\mathrm{div}}\, \Psi ) + \int \limits _Q \left( \nabla \overline{w}_{{\mathbf{e}}_i}^1 - {\mathbb E}(\nabla \Psi ) \nabla w_{per,{\mathbf{e}}_i} \right) ^T A_{per} \, {\mathbf{e}}_j, \end{aligned}$$

we then have

$$\begin{aligned} {A^*} = A_{per}^* + \eta A^1 + O(\eta ^2). \end{aligned}$$
(18)

For \(\eta \) sufficiently small given the accuracy expected, the approach therefore provides a computational strategy to approximately compute the homogenized tensor that bypasses the classical random problem and only involves (a sequence of) deterministic, periodic problems.

2.3 Rare but Possibly Large Random Perturbations

The previous section has shown that a perturbative approach can be an interesting modeling and computational strategy in cases when the structure of the material is random but “close” to a periodic structure. We now proceed in a similar direction by presenting an alternative perturbative approach, described in full detail in [3, 4]. We consider

$$\begin{aligned} A_{\eta }(x, \omega ) = A_{per}(x) + b_{\eta }(x, \omega ) \,C_{per}(x), \end{aligned}$$
(19)

instead of a coefficient \(A_{per}\left( \Phi ^{-1}(.,\omega )\right) \) with \(\Phi \) of the form Eq. 16. In Eq. 19, \(A_{per}\) is again a periodic matrix modeling the unperturbed material, \(C_{per}\) is a periodic matrix modeling the perturbation, and \( b_{\eta }(., \omega )\) is a random field that is, in some sense, small. Consider then the case

$$\begin{aligned} b_{\eta }(x,\omega ) = \sum _{{\mathbf{k}} \in \mathbb {Z}^d} \mathbf {1}_{\{Q+ {\mathbf{k}}\}}(x)B_{\eta }^k(\omega ), \end{aligned}$$
(20)

where the \(B_{\eta }^k\) are, say, independent identically distributed random variables. One particularly interesting case (see [3, 4] for this case and others) is when the common law of the \(B_{\eta }^k\) is a Bernoulli law of parameter \(\eta \) (see Fig. 2).

We now explain formally our approach. The mathematical correctness of the approach has been established in the works [23, 40].

Fig. 2 (source: [3]). Defects in a periodic structure. In the unperturbed periodic environment, the inclusions are periodic. The defects considered consist in the elimination of some of these inclusions. The elimination may be deterministic (as in Sect. 3 below), or random (as in Sect. 1.2.2). One may also consider small probabilities of elimination and construct the corresponding mathematical setting (as in Sect. 2.3).

To start with, we notice that in the corrector problem

$$\begin{aligned} - {\mathrm{div}}\left[ A_\eta \left( y,\omega \right) \left( {\mathbf{p}}+ \nabla w_{\mathbf{p}}(y,\omega ) \right) \right] =0, \end{aligned}$$
(21)

the only source of randomness comes from the coefficient \(A_\eta \left( y,\omega \right) \). Therefore, in principle, if one knows the law of this coefficient \(A_\eta \), one knows the law of the corrector function \(w_{\mathbf{p}}(y,\omega )\) and may therefore compute the homogenized coefficient \(A^*\), the latter being a function of this law. When the law of \(A_\eta \) is an expansion in terms of a small parameter, so is the law of \(w_{\mathbf{p}}\). Consequently, \(A_\eta ^*\) should itself be computable via an expansion.

Heuristically, on the cube \(Q_N\) and at order 1 in \(\eta \), the probability of seeing the perfect periodic material (entirely modeled by the matrix \(A_{per}\)) is \((1-\eta )^{N^d}\approx 1-N^d\eta +O(\eta ^2)\), while the probability of seeing the unperturbed material on all cells except one (where the material has matrix \(A_{per} + C_{per}\)) is \(N^d\,(1-\eta )^{N^d-1}\eta \approx N^d\eta +O(\eta ^2)\). All other configurations, with two or more cells perturbed, contribute at orders higher than or equal to \(\eta ^2\). This gives the intuition (indeed confirmed by a mathematical proof) that the first-order correction comes from the difference between the material that is perfectly periodic except on one cell and the perfect material itself: \(A_{\eta }^* = A_{per}^*+ \eta A_{1,*} + o(\eta )\), where \(A_{per}^*\) is the homogenized matrix for the unperturbed periodic material and

$$\begin{aligned} A_{1,*}\, {\mathbf{e}}_i = \lim _{N \rightarrow + \infty } \int \limits _{Q_N}\left[ (A_{per}+\mathbf {1}_{Q}C_{per})(\nabla w_{{\mathbf{e}}_i}^N + {\mathbf{e}}_i) - A_{per}(\nabla w_{per, {\mathbf{e}}_i} + {\mathbf{e}}_i )\right] , \end{aligned}$$
(22)

where \(w_{{\mathbf{e}}_i}^{N}\) solves

$$\begin{aligned} -\mathrm {div}\left( (A_{per}+ \mathbf {1}_{Q}C_{per}) ({\mathbf{e}}_i +\nabla w_{{\mathbf{e}}_i}^{N}) \right) = 0 \quad \mathrm {in} \quad Q_N, \quad w_{{\mathbf{e}}_i}^{N} \,\text {is}\,Q_N-\mathrm {periodic}. \end{aligned}$$
(23)

Note that the integral appearing on the right-hand side of Eq. 22 is not normalized: it a priori scales as the volume \(N^d\) of \(Q_N\) and has a finite limit only because of cancellation effects between the two terms in the integrand.

This perturbative approach has been extensively tested. It has been observed that the large-N limit in Eq. 22 is already accurately approximated for moderate values of N. As in the previous section (Sect. 2.2), the computational efficiency of the approach is clear: solving the two periodic problems with coefficients \(A_{per}\) and \(A_{per}+ \mathbf {1}_{Q}C_{per}\) for a limited size N is much less expensive than solving the original, random corrector problem on a much larger domain. When the second-order term is needed, configurations with two defects have to be computed. They can all be seen as a family of PDEs, parameterized by the geometrical location of the defects (see again Fig. 2). Reduced basis techniques have been shown to allow for a definite speed-up in the computation, see [33].
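The following one-dimensional sanity check (hypothetical coefficients \(a_{per}\) and \(c_{per}\); an illustration of the expansion, not the code of [3, 4]) makes the efficiency argument concrete. In dimension one, for a coefficient equal to \(a_{per}\) on unperturbed cells and to \(a_{per}+c_{per}\) on cells perturbed independently with probability \(\eta \), the exact homogenized coefficient is the harmonic mean \(\bigl[(1-\eta )\int _Q 1/a_{per}+\eta \int _Q 1/(a_{per}+c_{per})\bigr]^{-1}\), so both the exact value and the first-order expansion are available from two deterministic, periodic quantities only.

```python
# Hedged 1-d check of A*_eta ~ A*_per + eta * A_1 for Bernoulli defects
# (hypothetical coefficients; an illustration, not the code of the cited works).
import numpy as np

def a_per(y):
    return 2.0 + np.sin(2.0 * np.pi * y)

def c_per(y):
    return 3.0 * np.ones_like(y)              # the "defect" raises the coefficient by 3

y = (np.arange(1_000_000) + 0.5) / 1_000_000  # midpoint quadrature on the unit cell
I0 = np.mean(1.0 / a_per(y))                  # int_Q 1/a_per
I1 = np.mean(1.0 / (a_per(y) + c_per(y)))     # int_Q 1/(a_per + c_per)

a_star_per = 1.0 / I0                         # unperturbed homogenized coefficient
a_1 = (I0 - I1) / I0**2                       # first-order correction in eta

for eta in [0.1, 0.05, 0.01]:
    exact = 1.0 / ((1.0 - eta) * I0 + eta * I1)
    first_order = a_star_per + eta * a_1
    print(f"eta = {eta:5.2f}   exact = {exact:.6f}   "
          f"first order = {first_order:.6f}   gap = {abs(exact - first_order):.2e}")
```

The printed gap between the exact value and the first-order approximation decreases like \(\eta ^2\), consistently with the \(o(\eta )\) statement above, while only the two deterministic quantities \(\int _Q 1/a_{per}\) and \(\int _Q 1/(a_{per}+c_{per})\) are ever computed.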

On an abstract level, we note that, in the proposed approach for the “weakly” random regime, the determination of the homogenized tensor for a material containing defects with random locations is reduced to a set of computations of solutions to corrector problems such as Eq. 23 for materials with defects at some particular deterministic locations. This naturally establishes a methodological link with our next section, where we indeed consider materials with deterministic defects. The link is actually more than methodological: the theoretical results of Sect. 3, establishing that the corrector problems with deterministic defects are uniquely solvable in a suitable class of functions, are readily useful in the random setting for the foundation of the approach described here in Sect. 2.

3 Deterministic Defects Within an Otherwise Periodic Structure

We return to the generic multi-scale diffusion equation Eq. 1. Under quite general and mild assumptions on the (possibly matrix-valued) diffusion coefficient \(A_\varepsilon \) (which need not be of the form \(A_\varepsilon =A_{per}(x/\varepsilon )\) or obey any structural assumption of that type), presumably varying at the tiny scale \(\varepsilon \), the equation admits a homogenized limit, which is indeed of the same form as Eq. 1, namely Eq. 5. Celebrated results along these lines are due to S. Spagnolo, E. De Giorgi and L. Tartar and their respective collaborators, see [42]. The strength of such results is their generality. They are obtained by a compactness argument. Schematically, the sequence of inverse operators \(\left[ - {\mathrm{div}} (A_\varepsilon \nabla . )\right] ^{-1}\) is (weakly) compact in the suitable topology, converges up to an extraction, and its limit can be proven to be an operator of the same type, namely \(\left[ - {\mathrm{div}} (A^* \nabla . )\right] ^{-1}\). On the other hand, and precisely because of this generality, not much is known about the limit \(A^*\). This contrasts with periodic homogenization, which is both explicit (the limit coefficient \(A^*\) is given by a formula, namely Eq. 4, in terms of the corrector, itself known) and quantitative (the rate of convergence of \(u^\varepsilon \) to \(u^*\) is known for a large variety of norms). Besides their intrinsic theoretical interest, these two ingredients combined allow one to envision, in practice, a numerical approach for the computation of the homogenized limit, certified by a numerical analysis that guarantees control of the numerical error committed, in terms of \(\varepsilon \) and the discretization parameters.

The question then arises of finding settings that are sufficiently general while still allowing for results of the same quality as in the periodic setting. The past decade has witnessed several mathematical endeavors in this direction. We describe here one such endeavor and give one prototypical example of such a setting, where we illustrate the novelty of the mathematical questions involved (Fig. 3).

Fig. 3 (source: [12]). Localized defects in a periodic structure. Some periodic cells in the center of the domain are perturbed. The error \(u^\varepsilon -u^{\varepsilon ,1}\) is displayed when calculating \(u^{\varepsilon ,1}\) using (left) the periodic corrector \(w_{per,{\mathbf{p}}}\) solution to Eq. 3 and (right) the adjusted corrector \(w_{\mathbf{p}}\) solution to Eq. 24. In the former case, the size of the committed error is almost a “defect detector”. In the latter case, the error is homogeneous throughout the domain, recovering the quality of the approximation of the unperturbed periodic case.

Consider Eq. 1 and assume that \(A_\varepsilon =A(./\varepsilon )\), where the coefficient A models a periodic material perturbed by a localized defect. Mathematically, this setting may be encoded in \(A=A_{per} +\tilde{A}\) with \(\tilde{A}\in L^p({\mathbb R}^d)\) for some \(p<+\infty \). Clearly, the presence of this defect does not affect the macroscopic behavior: the homogenized equation holds with the same homogenized coefficient \(A^*\), which actually only depends on averages of A over large, asymptotically infinite volumes, and such averages are unaffected by the addition of a function such as \(\tilde{A}\). On the other hand, when it comes to making this limit more precise, one intuitively realizes, zooming in locally on the material, that the corrector equation describing the microscopic response of the material reads as

$$\begin{aligned} \displaystyle - {\mathrm{div}} (A ({\mathbf{e}}_i+ \nabla w_{{\mathbf{e}}_i}) )=0. \end{aligned}$$
(24)

This equation is different from Eq. 3 and, in sharp contrast with Eq. 3 (and similarly to what we observed for Eq. 9 in the random setting), does not reduce to an equation posed on a bounded domain with periodic boundary conditions. Note that, for the particular choice \(\tilde{A}=\mathbf {1}_{Q}C_{per}\), Eq. 23 with \(N=+\infty \) is a particular instance of Eq. 24. In essence, Eq. 24 is posed on the entire ambient space \({\mathbb R}^d\), a reflection of the fact that, at the microscopic scale, the defect has broken the periodicity of the environment: the local response is affected by the defect and depends on the state of the whole microscopic structure. A considerable mathematical difficulty follows. The classical toolbox for the study of the well-posedness of (here linear) equations on bounded domains (the Lax-Milgram Lemma in the coercive case, the Fredholm Alternative, etc.), that is, techniques that one way or another rely upon the boundedness of the domain or the compactness of the setting, is now ineffective. Should A be random stationary, then Eq. 24 would read as Eq. 9 and admit an equivalent formulation on the abstract probability space. This would make up for compactness, but other significant complications would arise. For Eq. 24, the difficulty must be embraced. A related difficulty is to define the set of admissible functions for solutions, or the variational space in an energetic formulation of the problem. In the specific case \(A=A_{per} +\tilde{A}\) with \(\tilde{A}\in L^p({\mathbb R}^d)\), one seeks the solution to Eq. 24 in the form \(w_{{\mathbf{e}}_i}=w_{per,{\mathbf{e}}_i}+\tilde{w}_{{\mathbf{e}}_i}\), that is, with reference to the periodic solution \(w_{per,{\mathbf{e}}_i}\), somewhat echoing what we achieved in Sect. 2.3. Equation 24 then rewrites as

$$ - {\mathrm{div}} \,(A \, \nabla \tilde{w}_{{\mathbf{e}}_i} )= {\mathrm{div}}\, (\tilde{f})\,,$$

where \(\tilde{f}\in L^p({\mathbb R}^d)\), which, by homogeneity, suggests that the suitable functional space for \(\nabla \tilde{w}\) is \(L^p({\mathbb R}^d)\). The question then arises whether the operator \([\nabla ]\,[ {\mathrm{div}} (A \, \nabla \,.)]^{-1}\,[ {\mathrm{div}}]\) acts continuously on \(L^p({\mathbb R}^d)\). The answer depends on the properties of the coefficient A. In the present setting, it is positive for all \(1<p<+\infty \). The theoretical analysis to reach this conclusion heavily relies upon the celebrated works [5,6,7] by M. Avellaneda and F. H. Lin for the periodic case (see also [30, 41]).
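To give an idea of why such \(L^p\) continuity is plausible, we add, for the reader's convenience, the standard observation in the constant-coefficient case \(A=\mathrm{Id}\) (this remark is ours and is not taken from the cited works): the operator is then \(\nabla \, \Delta ^{-1}\, {\mathrm{div}}\), whose Fourier symbol is

$$ \widehat{\nabla \, \Delta ^{-1}\, {\mathrm{div}}\, g}\,(\xi ) \,=\, \frac{\xi \, \xi ^T}{|\xi |^2}\, \widehat{g}(\xi ), $$

that is, a matrix whose entries are (up to a sign) compositions of Riesz transforms, hence a Calderón-Zygmund operator bounded on \(L^p({\mathbb R}^d)\) for every \(1<p<+\infty \). Loosely speaking, the analysis based on [5,6,7] is what allows this boundedness to be transferred from the constant-coefficient case to the periodic case, and then to the perturbed coefficient \(A=A_{per}+\tilde{A}\).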

The consideration of the one-dimensional version of the problem clearly shows (this particular example is worked out in [12]) that when one considers the specific corrector w solution to \(\displaystyle -\frac{d}{dy}\,\left( (a_{per}+\tilde{a})(y)\,\left( 1+ \frac{d}{dy}\,w(y)\right) \right) =0\), instead of the periodic corrector \(w_{per}\) solution to \(\displaystyle -\frac{d}{dy}\,\left( a_{per}(y)\,\left( 1+ \frac{d}{dy}\,w_{per}(y)\right) \right) =0\), then the quality of the (two-scale, first order) approximation of the solution \(u^\varepsilon \) is immediately improved near the defect and at the scale of the defect.
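For the reader's convenience, here is a brief reconstruction of that one-dimensional computation (our own hedged summary, in the spirit of [12]; the presentation there may differ). The corrector equation integrates once into \((a_{per}+\tilde a)\,(1+w') \equiv c\) for some constant \(c\), so that

$$ w'(y) = \frac{c}{(a_{per}+\tilde a)(y)} - 1, \qquad w_{per}'(y) = \frac{a_{per}^*}{a_{per}(y)} - 1, \qquad a_{per}^* = \left( \int _0^1 \frac{dy}{a_{per}(y)} \right) ^{-1}. $$

Requiring \(w'-w_{per}'\) to belong to \(L^p({\mathbb R})\) forces \(c=a_{per}^*\), and then

$$ w'(y) - w_{per}'(y) \,=\, -\, \frac{a_{per}^*\, \tilde a(y)}{a_{per}(y)\, \bigl(a_{per}+\tilde a\bigr)(y)} \,\in \, L^p({\mathbb R}), $$

since the coefficients are bounded away from zero and \(\tilde a\in L^p({\mathbb R})\). The adjusted corrector thus differs from the periodic one by a contribution localized where the defect sits, which is precisely what restores the accuracy of the two-scale expansion near the defect.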

In dimension two and higher, the proof is more difficult. Under appropriate conditions, the solution \(u^\varepsilon \) is well approximated in the \(H^1\) norm, both at scale one and at scale \(\varepsilon \) (thus in particular in the \(L^\infty \) norm), by the first-order expansion \(\displaystyle u^{\varepsilon ,1}(x)=u^*(x)+\varepsilon \sum \nolimits _{i=1}^d\partial _{x_i}u^*(x)\,w_{{\mathbf{e}}_i}(x/\varepsilon )\) constructed using the specific correctors \(w_{{\mathbf{e}}_i}\). The latter approximation property does not in general hold true for the periodic first-order approximation \(\displaystyle u_{per}^{\varepsilon ,1}(x)=u^*(x)+\varepsilon \sum \nolimits _{i=1}^d\partial _{x_i}u^*(x)\,w_{per,{\mathbf{e}}_i}(x/\varepsilon )\) constructed using the periodic correctors \(w_{per,{\mathbf{e}}_i}\). One may even make precise the rate of convergence in terms of the small parameter \(\varepsilon \), and similar convergences may be proved in various Sobolev or Hölder norms. The proof of these convergences was first presented, in the case \(p=2\) and in a slightly formal manner, in [12]. All results and extensions are carried out in a series of works [9, 10, 13,14,15].

The procedure above is not restricted to the linear diffusion problem Eq. 1. One may consider semi-linear equations, quasi-linear equations, systems, etc. Of course, it all gets more delicate as the complexity of the equation increases. One such example, namely a Hamilton-Jacobi equation, is the purpose of the work [19] and also the subject of work in progress by the author and his collaborators, see [16, 20, 28].

Various other types of defects may be considered for homogenization problems that are otherwise “simple”. They may formally decay at infinity (like the “localized” functions \(\tilde{A}\) manipulated above), or not. In the former case, the problem at infinity (that is, the problem obtained upon translating the equation far away from the defect) is identical to the underlying periodic problem. In the latter case, the situation may depend sensitively upon what the problem “at infinity” looks like. There may even exist several such problems. Another prototypical example is related to the modeling of grain boundaries in materials science: two different periodic structures are connected across an interface. The defect is, say, a plane separating the two structures, and at large distances from this interface, different periodic structures are present, depending upon which side of the interface is considered, see [13]. The corresponding mathematical problem is theoretically challenging, and practically relevant. In all cases, the purpose is to identify the homogenized, macroscopic limit while retaining, at the same time, some of the microscopic features that make the problem relevant.

4 Multi-scale Finite Element Approaches and Nonperiodicity

Multi-scale Finite Element Methods, abbreviated as MsFEM, have proved to be efficient in a number of contexts. In essence, these approaches are based upon choosing, as the specific finite-dimensional basis in which to expand the numerical solution, a set of functions that are themselves solutions to highly oscillatory local problems, at scale \(\varepsilon \), involving the differential operator present in the original equation. This problem-dependent basis set, precomputed in an offline stage, is likely to better encode the fine-scale oscillations of the solution and therefore to capture the solution more accurately. Numerical observations, along with mathematical arguments, show that this is indeed generically the case. The versatility of the classical FEM is lost, but with MsFEM its efficiency is restored for multi-scale problems.

The standard version of the approach was originally introduced by T. Hou and his collaborators (see the textbook [24] for a general introduction). There exist many variants of such a multi-scale approach, within the formalism of MsFEM or beyond it, and many outstanding numerical and computational analysts have contributed to the field. Classical examples include the Variational Multi-scale Method introduced by Hughes et al., the Local Orthogonal Decomposition method of Malqvist and Peterseim, the localization and subspace decomposition method of R. Kornhuber and H. Yserentant, etc. It is not our purpose here to review all these works. We would rather concentrate on an issue that is intrinsically related to the context of our discussion, namely the breaking of the periodic structure of a material and its consequences on the accuracy of a dedicated numerical approach.

We recall, on the prototypical multi-scale diffusion problem Eq. 1, that the MsFEM approach, in one of its simplest variants, consists of the following three steps (a one-dimensional sketch is given right after the list):

  1. 1.

    Introduce a discretization of \(\mathcal{D}\) with a coarse mesh; throughout this article, we work with the \(\mathbb {P}^1\) Finite Element space

    $$\begin{aligned} V_H=\text {Span}\left\{ \phi ^0_i, \ 1\le i\le N_{V_H}\right\} \subset H^1_0(\mathcal{D}). \end{aligned}$$
    (25)
  2. 2.

    Solve the local problems (one for each basis function for the coarse mesh)

    $$\begin{aligned} -\text {div }\left( A_\varepsilon \nabla \psi _i^{\varepsilon ,\mathbf {K}}\right) =0 \quad \text {in }\mathbf {K}, \qquad \psi ^{\varepsilon ,\mathbf {K}}_i=\phi ^0_i \quad \text {on }\partial \mathbf {K}, \end{aligned}$$
    (26)

    on each element \(\mathbf {K}\) of the coarse mesh \(\mathcal {T}_H\), in order to build the multi-scale basis functions. This is typically performed off-line, using a fine mesh \(\mathcal {T}_h\), with \(h\ll H\).

  3. 3.

    Apply a standard Galerkin approximation of Eq. 1 on the space

    $$\begin{aligned} \text {Span} \left\{ \psi ^\varepsilon _i, \ 1 \le i \le N_{V_H} \right\} \subset H^1_0(\mathcal{D}), \end{aligned}$$
    (27)

    where \(\psi ^\varepsilon _i\) is such that \(\left. \psi ^\varepsilon _i \right| _{\mathbf {K}} = \psi ^{\varepsilon ,\mathbf {K}}_i\) for all \(\mathbf {K}\in \mathcal {T}_H\).
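The following minimal one-dimensional sketch illustrates the three steps above (the oscillatory coefficient a_eps is a hypothetical example; this is an illustration under simplifying assumptions, not the implementation of the works cited below). In dimension one, the local problems of Eq. 26 with affine boundary data can be solved in closed form: \(A_\varepsilon \,(\psi _i^{\varepsilon ,\mathbf {K}})'\) is constant on each element, so the multi-scale basis functions are explicit in terms of \(\int 1/A_\varepsilon \) and the element stiffness matrix only involves \(\int _{\mathbf {K}} 1/A_\varepsilon \).

```python
# Minimal 1-d sketch of the three MsFEM steps (hypothetical coefficient a_eps;
# a sketch under simplifying assumptions, not the code of the cited works).
import numpy as np

def a_eps(x, eps=1e-2):
    # highly oscillatory coefficient, bounded and bounded away from zero
    return 2.0 + np.sin(2.0 * np.pi * x / eps)

def msfem_1d(f, n_coarse=20, n_fine=200, eps=1e-2):
    """Galerkin MsFEM on (0,1), homogeneous Dirichlet conditions, 'linear' boundary
    conditions for the local problems of Eq. 26.

    In 1-d, a_eps * psi' is constant on each coarse element, so the element
    stiffness matrix is (1 / int_K 1/a_eps) * [[1, -1], [-1, 1]].
    """
    X = np.linspace(0.0, 1.0, n_coarse + 1)                 # step 1: coarse mesh
    K = np.zeros((n_coarse + 1, n_coarse + 1))              # coarse stiffness matrix
    F = np.zeros(n_coarse + 1)                              # coarse load vector
    for k in range(n_coarse):                               # step 2: local problems (offline)
        x0, x1 = X[k], X[k + 1]
        xf = np.linspace(x0, x1, n_fine + 1)                # fine grid on the element
        inv_a = 1.0 / a_eps(xf, eps)
        increments = 0.5 * (inv_a[1:] + inv_a[:-1]) * np.diff(xf)   # trapezoidal pieces
        cum = np.concatenate(([0.0], np.cumsum(increments)))        # int_{x0}^{x} 1/a_eps
        H = cum[-1]                                                 # int_K 1/a_eps
        psi_right = cum / H                                  # basis of node k+1 on K
        psi_left = 1.0 - psi_right                           # basis of node k on K
        K[k:k + 2, k:k + 2] += (1.0 / H) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fv = f(xf)                                           # load: int_K f * psi_i
        w = np.diff(xf)                                      # trapezoidal quadrature weights
        F[k]     += np.sum(0.5 * w * (fv[:-1] * psi_left[:-1] + fv[1:] * psi_left[1:]))
        F[k + 1] += np.sum(0.5 * w * (fv[:-1] * psi_right[:-1] + fv[1:] * psi_right[1:]))
    interior = np.arange(1, n_coarse)                        # step 3: coarse Galerkin solve
    U = np.zeros(n_coarse + 1)
    U[interior] = np.linalg.solve(K[np.ix_(interior, interior)], F[interior])
    return X, U                                              # coarse nodal values

if __name__ == "__main__":
    X, U = msfem_1d(lambda x: np.ones_like(x))
    print("MsFEM value at x = 0.5:", U[len(U) // 2])
```

In this one-dimensional setting, the construction is equivalent to a standard \(\mathbb {P}^1\) method with an element-wise harmonic average of the coefficient; the interest of MsFEM of course lies in dimensions two and three, where no such closed form is available and the local problems of Eq. 26 are solved numerically on the fine mesh \(\mathcal {T}_h\).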

The error analysis of this MsFEM method has been performed for \(A_\varepsilon = A_\mathrm{per}\left( \cdot /\varepsilon \right) \) with \(A_\mathrm{per}\) a fixed periodic matrix. Assuming that the basis functions are perfectly determined (that is, \(h=0\)), the main error estimate, under the usual assumption of regularity of the data and the mesh, reads as

$$\begin{aligned} \Vert u^\varepsilon -u^\varepsilon _H\Vert _{H^1(\mathcal{D})}\le C \left( H + \sqrt{\varepsilon } + \sqrt{\frac{\varepsilon }{H}} \right) , \end{aligned}$$
(28)

where C is a constant independent of H and \(\varepsilon \).

When the coarse mesh size H is close to the scale \(\varepsilon \), a so-called resonance phenomenon, encoded in the term \(\sqrt{\varepsilon /H}\) in Eq. 28, occurs and deteriorates the numerical solution. The oversampling method is a popular technique to reduce this effect. In short, the approach, which is non-conforming, consists in posing each local problem on a domain slightly larger than the actual element \(\mathbf {K}\) considered, so as to become less sensitive to the arbitrary choice of boundary conditions on that larger domain, and then restricting the functions obtained to the element. This approach significantly improves the results compared to using linear boundary conditions as in Eq. 26. In the periodic case, the following estimate holds

$$ \Vert u^\varepsilon -u^\varepsilon _H\Vert _{H^1(\mathcal {T}_H)}\le C \left( H+\sqrt{\varepsilon }+ \frac{\varepsilon }{H} \right) , $$

where \( \Vert u^\varepsilon -u^\varepsilon _H \Vert _{H^1(\mathcal {T}_H)} = \sqrt{\sum \limits _{\mathbf {K} \in \mathcal {T}_H} \Vert u^\varepsilon -u^\varepsilon _H \Vert ^2_{H^1(\mathbf {K})}}\) is the \(H^1\) broken norm of \(u^\varepsilon -u^\varepsilon _H\).

The boundary conditions imposed on \(\partial \mathbf {K}\) in Eq. 26 are the so-called linear boundary conditions. Besides the linear boundary conditions, and the oversampling technique we have just mentioned, there are many other possible boundary conditions for the local problems. They may give rise to conforming, or non-conforming approximations. The choice sensitively affects the overall accuracy. In an informal way, the whole history of improvements of the original version of MsFEM can be revisited as the history of improvements of the choice of suitable “boundary conditions” for Eq. 26.

The question of how much the choice of boundary conditions for the local problems Eq. 26 alters the overall accuracy is all the more crucial in the context of non-periodic structures. A prototypical case of the difficulty is that of perforated materials. Consider the Poisson problem set on a domain with perforations of size \(\varepsilon \). For a generic mesh, the edges (or, alternatively, the facets in a three-dimensional setting) of the mesh may intersect the perforations. It is intuitive that difficulties then arise, since the postulated behavior (linear or otherwise) of the basis functions along the edges has little chance of accurately capturing the actual behavior of the exact solution, given the perforations. Of course, one may use oversampling in order to circumvent this difficulty, but then the approach is non-conforming and other difficulties arise, besides the increased computational cost. Alternatively, one may consider meshing the domain in such a way that the edges intersect as few perforations as possible. For a periodic array of perforations, this is a decent solution. But in a non-periodic setting, and all the more so for a fully disordered array of perforations, this is impractical. A possible option, introduced in [34] and extended in [35, 38, 39] and other subsequent works by different authors, is to resort to “weak” boundary conditions, in the form of Crouzeix-Raviart boundary conditions. The Dirichlet boundary conditions on \(\partial \mathbf {K}\) in Eq. 26 are then replaced by conditions of the type

$$\begin{aligned} \int \limits _{\mathrm{edge}} \psi _i^{\varepsilon ,\mathbf {K}} = 0\quad \text {or}\quad 1, \qquad n_{\mathrm{edge}} \cdot A_\varepsilon \nabla \psi _i^{\varepsilon ,\mathbf {K}} = \text {Constant}, \end{aligned}$$

on all edges, where the local function \(\psi _i^{\varepsilon ,\mathbf {K}}\) is now associated to an edge i. For this approach, under technical assumptions, the error estimate is identical to that for linear boundary conditions, namely Eq. 28.

Fig. 4 (source: [35]). Two extreme cases of meshes regarding intersections with the perforations: no intersection at all (top), or as many intersections as possible (bottom). The Crouzeix-Raviart version of MsFEM is, roughly, equally accurate in both situations.

More importantly, upon using such “weak” boundary conditions in the context of a perforated computational domain (and adding other, generic ingredients, such as bubble functions), the accuracy, if not improved, is now significantly more robust with respect to intersections between edges and perforations. A “stress test” considering two extreme scenarios illustrates this property: see in [35] the detailed comparison of the results obtained with MsFEM using different boundary conditions for the local problems on the shifted meshes of Fig. 4.

Let us conclude this section by emphasizing the formal link between the existence results for the non-periodic corrector \(w_{\mathbf{p}}\) that have been examined in the previous section and the actual local basis functions \(\psi _i^{\varepsilon ,\mathbf {K}}\) of the MsFEM approaches discussed here. Up to irrelevant technicalities and details, the corrector and the local functions are, intrinsically, the same mathematical object: they are obtained by zooming in locally and solving the problem at the scale of its heterogeneities.

5 Homogenization Under Partial Information

One way or another, all the approaches described so far, both at the theoretical level and at the numerical level, rely on the full knowledge of the coefficient \(A_\varepsilon \). It turns out that there are several practical contexts where such knowledge is incomplete, or sometimes merely unavailable. From an engineering perspective (think e.g. of experiments in Mechanics), there are indeed numerous prototypical situations for Eq. 1 where the response \(u^\varepsilon \) can be measured for some loadings f, but where \(A_\varepsilon \) is not completely known, let alone whether it is periodic or not. In these situations, it is thus not possible to use homogenization theory, nor to proceed with any MsFEM-type approach or with the similar approaches mentioned above. Finding a pathway alternative to the standard approaches is thus a practically relevant question. We are interested in approaches that are valid in the different regimes of \(\varepsilon \), make no use of the knowledge of the coefficient \(A_\varepsilon \), and only use some responses of the medium obtained for certain given loadings. Questions similar in spirit were addressed two decades ago by Durlofsky; the point there is also to define an effective coefficient using only outputs of the system. The approaches are however different in practice (see [36] for a detailed discussion).

For simplicity, we restrict ourselves to cases when Eq. 1 admits (possibly up to some extraction) a homogenized limit Eq. 5 where the homogenized matrix coefficient \(A^*\) is deterministic and constant. This restrictive assumption on the class of \(A^*\) (and thus on the structure of the coefficient \(A_\varepsilon \) in Eq. 1) is useful for our theoretical justifications, but not mandatory for the approach to be applicable.

For any constant matrix \(\overline{A}\), we consider generically the problem with constant coefficients

$$\begin{aligned} - {\mathrm{div}}\,\left( \,\overline{A}\,\nabla \overline{u} \, \right) =f. \end{aligned}$$
(29)

We investigate, for any value of the parameter \(\varepsilon \), how we may define a constant symmetric matrix \(\overline{A}\) such that the solution \(u(\overline{A},f)=\overline{u}\) to Eq. 29 with matrix \(\overline{A}\) best approximates the solution to Eq. 1. The best constant matrix \(\overline{A}\) is (temporarily) defined as a minimizer of

$$\begin{aligned} I_\varepsilon = \inf _{\text {constant matrix } \overline{A} > 0} \quad \sup _{ \begin{array}{c} f \in L^2({\mathcal D}), \\ \Vert f \Vert _{L^2({\mathcal D})}= 1 \end{array} } \quad \left\| u^\varepsilon (f) - u(\overline{A},f) \right\| _{L^2({\mathcal D})}^2, \end{aligned}$$
(30)

where we have explicitly emphasized the dependency upon the right-hand side f of the solutions to Eq. 1 and Eq. 29. The norm in Eq. 30 is an \(L^2\) norm (and not e.g. an \(H^1\) norm) because, for sufficiently small \(\varepsilon \), we wish the best constant matrix \(\overline{A}\) to be close to \(A^*\), while \(u^\varepsilon \) strongly converges to \(u^*\) only in the \(L^2\) norm but not in the \(H^1\) norm. The key point is that Eq. 30 is only based on the knowledge of the outputs \(u^\varepsilon \) (that could be e.g. experimentally measured), and not on that of \(A_\varepsilon \) itself. The theoretical study of the minimization problem Eq. 30 has been carried out in [36]. In particular it has been proven that, under classical assumptions, the matrices \(\overline{A}\) with energy asymptotically close to the infimum \(I_\varepsilon \) all converge to \(A^*\) as \(\varepsilon \) vanishes. In passing, we note that the approach provides, at least in some settings, a characterization of the homogenized matrix which is an alternative to the standard characterization of homogenization theory. To the best of our knowledge, this characterization, although probably known, has never been made explicit in the literature.

In fact (and this does not alter the above theoretical results), the actual minimization problem we use for the practice reads as

$$\begin{aligned} I^{\text {pract}}_\varepsilon = \inf _{\text {constant matrix } \overline{A} > 0 }\quad \sup _{ \begin{array}{c} f \in L^2({\mathcal D}), \\ \Vert f \Vert _{L^2({\mathcal D})}= 1 \end{array} } \quad \left\| -\Delta ^{-1}\,\left( - {\mathrm{div}} \left( \overline{A}\,\nabla \,u^\varepsilon (f)\right) - f\right) \right\| _{L^2({\mathcal D})}^2, \end{aligned}$$
(31)

where \(-\Delta ^{-1}\) is the inverse Laplacian operator supplied with homogeneous Dirichlet boundary conditions. The function minimized in Eq. 31 is related to that of Eq. 30 through the application, inside the \(L^2\) norm of the latter, of the zero-order differential operator \(\Delta ^{-1}\, {\mathrm{div}} (\overline{A}\,\nabla \,.\,)\). Note that, in sharp contrast with Eq. 30, the function to minimize in Eq. 31 is now, formally, a second-order polynomial in \(\overline{A}\). This property significantly speeds up the computation of the infimum. The specific choice Eq. 31 has been suggested to us by Albert Cohen.

Note also that, in practice, we cannot maximize over all right-hand sides f in \(L^2({\mathcal D})\) (with unit norm) and that we therefore replace the supremum by a maximization over a finite-dimensional set of thoughtfully selected right-hand sides.
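As a rough illustration of this practical strategy, here is a hedged one-dimensional sketch (hypothetical oscillatory coefficient, a scalar constant \(\overline{a}\) in place of the matrix \(\overline{A}\), and, as a further simplification, a maximization over a small finite family of right-hand sides; this is not the code of [36, 37]). The oscillatory solutions \(u^\varepsilon (f)\) are computed once and play the role of measured outputs; a discretized version of Eq. 31 is then minimized by a simple scan over candidate values of \(\overline{a}\), and the minimizer is compared with the one-dimensional homogenized coefficient, namely the harmonic mean of the periodic coefficient.

```python
# Hedged 1-d sketch of the minimization of Eq. 31 (hypothetical setting).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 5000
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
xi = x[1:-1]                                        # interior nodes

def a_eps(y, eps):
    # hypothetical oscillatory coefficient (used only to generate synthetic outputs)
    return 2.0 + np.sin(2.0 * np.pi * y / eps)

def solve_div_a_grad(a_mid, f_int):
    # conservative finite differences for -(a u')' = f, homogeneous Dirichlet conditions
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = diags([off, main, off], offsets=[-1, 0, 1], format="csr")
    return spsolve(A, f_int)                        # interior values of the solution

# discrete negative Laplacian with Dirichlet conditions, playing the role of -Delta
lap = diags([-np.ones(n - 2), 2.0 * np.ones(n - 1), -np.ones(n - 2)],
            offsets=[-1, 0, 1], format="csr") / h**2

# small family of right-hand sides, normalized in the discrete L2 norm
family = [g / np.sqrt(h * np.sum(g**2))
          for g in (np.ones_like(xi), np.sin(np.pi * xi), xi)]

# reference value: in 1-d the homogenized coefficient is the harmonic mean of a_per
y = (np.arange(1_000_000) + 0.5) / 1_000_000
a_star = 1.0 / np.mean(1.0 / a_eps(y, 1.0))

for eps in [0.05, 0.02, 0.01]:
    a_mid = a_eps(x[:-1] + 0.5 * h, eps)
    outputs = [solve_div_a_grad(a_mid, f) for f in family]   # the "measured" u_eps(f)
    candidates = np.linspace(1.0, 3.0, 401)                  # grid of candidate a_bar
    objective = np.empty_like(candidates)
    for j, a_bar in enumerate(candidates):
        values = []
        for u, f in zip(outputs, family):
            # -Delta^{-1}( -div( a_bar grad u_eps ) - f ), cf. Eq. 31
            residual = spsolve(lap, a_bar * (lap @ u) - f)
            values.append(h * np.sum(residual**2))           # squared discrete L2 norm
        objective[j] = max(values)                           # max over the finite family
    a_bar_best = candidates[np.argmin(objective)]
    print(f"eps = {eps:5.2f}   recovered a_bar = {a_bar_best:.3f}   harmonic mean = {a_star:.3f}")
```

As \(\varepsilon \) decreases, the recovered \(\overline{a}\) approaches the harmonic mean; note that the coefficient \(a_\varepsilon \) is only used here to generate the synthetic outputs \(u^\varepsilon (f)\), while the minimization itself uses nothing but these outputs, in the spirit of the discussion above.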

In [36, 37], we have presented a series of numerical experiments using the above approach. Our tests have established that the approach is in particular able to accurately identify the homogenized matrix \(A^*\) in the periodic case (with a computational time that is much larger than that of the classical approach, but this is not the point). More importantly, it is also able to complete this task in the random case (where the classical approach can be prohibitively expensive). Finally, and since no particular structure of the coefficient \(A_\varepsilon \) is used, it may be applied to a large variety of non-periodic structures.

A remark is in order: in both the periodic and the random settings, the classical approach computes the homogenized coefficients by first approximating the corrector function. A fair comparison between the approaches can therefore only be achieved if the above approach also provides some approximation of the corrector function. This is indeed the case: the latter function can also be obtained in our approach, at a reduced additional computational cost, as demonstrated in [36].

Fig. 5 (source: [27]). Homogenization approach within an Arlequin-type coupling: the fine-scale highly oscillatory model and the coarse-grained model (tentatively identical to the homogenized model) co-exist in an overlap region. The three regions described in the body of our text are displayed, along with the fine and coarse meshes.

A variant of the above approach, originally introduced in [22], is currently under investigation in [27]. The purpose of this variant is also to approximate \(A^*\) without explicitly using \(A_\varepsilon \), and to achieve this in a robust, engineering-type manner. In a nutshell, the approach consists in considering a domain divided into three regions, see Fig. 5. The inner region and the outer region respectively contain only the oscillatory model of Eq. 1 and the tentative homogenized model of Eq. 29. In between these two regions, an overlap region where both models co-exist is used for a smooth coupling. Specifically, the coupling is performed using an Arlequin-type approach (see again [22]), but this is not mandatory for the approach to perform well. A linear Dirichlet boundary condition, say \(u=x_1\), is imposed on the external surface of the domain. It intuitively plays the role of the right-hand side function f in Eq. 31. At fixed, presumably small, \(\varepsilon \), one then solves the minimization problem

$$\begin{aligned} J_\varepsilon = \inf _{\text {constant matrix } \overline{A} > 0} \quad \left\| \nabla (u(\overline{A})-x_1)\right\| _{L^2({\mathcal D})}^2. \end{aligned}$$
(32)

In the limit of vanishing \(\varepsilon \), it is established that \(J_\varepsilon \) also vanishes and that the only minimizers are obtained for \(\overline{A}\,{\mathbf{e}}_1=A^*\,{\mathbf{e}}_1\), where \({\mathbf{e}}_1=\nabla (x_1)\) is the first canonical vector of the ambient space \({\mathbb R}^d\). Repeating this procedure along each dimension of \({\mathbb R}^d\) allows one to eventually identify the matrix \(A^*\). Several computational improvements of the original approach are introduced in [27]. A numerical analysis is also presented.