Abstract
As a counterpoint to classical stochastic particle methods for diffusion, we develop a deterministic particle method for linear and nonlinear diffusion. At first glance, deterministic particle methods are incompatible with diffusive partial differential equations since initial data given by sums of Dirac masses would be smoothed instantaneously: particles do not remain particles. Inspired by classical vortex blob methods, we introduce a nonlocal regularization of our velocity field that ensures particles do remain particles and apply this to develop a numerical blob method for a range of diffusive partial differential equations of Wasserstein gradient flow type, including the heat equation, the porous medium equation, the Fokker–Planck equation, and the Keller–Segel equation and its variants. Our choice of regularization is guided by the Wasserstein gradient flow structure, and the corresponding energy has a novel form, combining aspects of the well-known interaction and potential energies. In the presence of a confining drift or interaction potential, we prove that minimizers of the regularized energy exist and, as the regularization is removed, converge to the minimizers of the unregularized energy. We then restrict our attention to nonlinear diffusion of porous medium type with at least quadratic exponent. Under sufficient regularity assumptions, we prove that gradient flows of the regularized porous medium energies converge to solutions of the porous medium equation. As a corollary, we obtain convergence of our numerical blob method. We conclude by considering a range of numerical examples to demonstrate our method’s rate of convergence to exact solutions and to illustrate key qualitative properties preserved by the method, including asymptotic behavior of the Fokker–Planck equation and critical mass of the two-dimensional Keller–Segel equation.
1 Introduction
For a range of partial differential equations, from the heat and porous medium equations to the Fokker–Planck and Keller–Segel equations, solutions can be characterized as gradient flows with respect to the quadratic Wasserstein distance. In particular, solutions of the equation
where \(\rho \) is a curve in the space of probability measures, are formally Wasserstein gradient flows of the energy
where \({\mathcal {L}}^d\) is d-dimensional Lebesgue measure. This implies that solutions \(\rho (t,x)\) of (1) satisfy
for a generalized notion of gradient \(\nabla _{W_2}\), which is formally given by
where \(\delta {\mathcal {E}}/\delta \rho \) is the first variation density of \({\mathcal {E}}\) at \(\rho \) (cf. [3, 27, 28, 82]).
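For reference, the displayed relations above take the standard aggregation–diffusion form (a reconstruction consistent with the notation used below; the internal density \(U\) is as in Sect. 3):

```latex
% Eq. (1): aggregation--diffusion equation, m >= 1
\partial_t \rho = \nabla \cdot \big( (\nabla V + \nabla W * \rho)\, \rho \big) + \Delta \rho^m .
% Eq. (2): the associated energy, finite when \rho \ll \mathcal{L}^d,
\mathcal{E}(\rho) = \int U(\rho)\, d\mathcal{L}^d + \int V \, d\rho
  + \frac{1}{2} \int (W * \rho) \, d\rho ,
\qquad U(s) = \begin{cases} s \log s , & m = 1, \\ s^m/(m-1), & m > 1 . \end{cases}
% Gradient flow relation and formal Wasserstein gradient:
\partial_t \rho(t) = - \nabla_{W_2} \mathcal{E}\big(\rho(t)\big), \qquad
\nabla_{W_2} \mathcal{E}(\rho) = - \nabla \cdot \Big( \rho\, \nabla \frac{\delta \mathcal{E}}{\delta \rho} \Big).
```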
Over the past twenty years, the Wasserstein gradient flow perspective has led to several new theoretical results, including asymptotic behavior of solutions of nonlinear diffusion and aggregation–diffusion equations [27, 28, 70], stability of steady states of the Keller–Segel equation [10, 12], and uniqueness of bounded solutions [26]. The underlying gradient flow theory has been well developed in the case of convex (or, more generally, semiconvex) energies [2, 3, 5, 24, 55, 77, 82, 83], and, more recently, is being extended to energies with more general moduli of convexity [6, 26, 28, 35].
Wasserstein gradient flow theory has also inspired new numerical methods, with a common goal of maintaining the gradient flow structure at the discrete level, albeit in different ways. Recent work has considered finite volume, finite element, and discontinuous Galerkin methods [9, 16, 21, 61, 80]. Such methods are energy decreasing, positivity preserving, and mass conserving at the semidiscrete level, leading to high-order approximations. They naturally preserve stationary states, since dissipation of the free energy provides inherent stability, and often also capture the rate of asymptotic decay. Another common strategy for preserving the gradient flow structure at the discrete level is to leverage the discrete-time variational scheme introduced by Jordan et al. [55]. A wide variety of strategies have been developed for this approach: working with different discretizations of the space of Lagrangian maps [42, 56, 67,68,69], using alternative formulations of the variational structure [43], making use of convex analysis and computational geometry to solve the optimality conditions [8], and many others [11, 17, 23, 29, 31, 47, 48, 84].
In this work, we develop a deterministic particle method for Wasserstein gradient flows. The simplest implementation of a particle method for Eq. (1), in the absence of diffusion, begins by first discretizing the initial datum \(\rho _0\) as a finite sum of N Dirac masses, that is,
where \(\delta _{x_i}\) is a Dirac mass centered at \(x_i \in {\mathord {{\mathbb {R}}}^d}\). Without diffusion and provided sufficient regularity of V and W, the solution \(\rho ^N\) of (1) with initial datum \(\rho ^N_0\) remains a sum of Dirac masses at all times t, so that
and solving the partial differential equation (1) reduces to solving a system of ordinary differential equations for the locations of the Dirac masses,
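Here (3)–(5) take the standard forms (reconstructed under the notation above; \(x_i^0\) denotes the initial particle locations):

```latex
% (3) discretized initial datum and (4) particle solution
\rho_0^N = \sum_{i=1}^N m_i\, \delta_{x_i^0}, \qquad
\rho^N(t) = \sum_{i=1}^N m_i\, \delta_{x_i(t)},
% (5) ODE system for the particle locations
\dot{x}_i(t) = -\nabla V\big(x_i(t)\big)
  - \sum_{j \ne i} m_j\, \nabla W\big(x_i(t) - x_j(t)\big),
\qquad i = 1, \dots, N.
```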
The particle solution \(\rho ^N(t)\) is the Wasserstein gradient flow of the energy (2) with initial data \(\rho _0^N\), so in particular the energy decreases in time along this spatially discrete solution. The ODE system (5) can be solved using a range of fast numerical methods, and the resulting discretized solution \(\rho ^N(t)\) can be interpolated in a variety of ways for graphical visualization.
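As a minimal illustration (not the authors' implementation; the quadratic potentials and forward-Euler time stepping below are an arbitrary smooth choice), the particle ODE system can be integrated as follows in one dimension:

```python
import numpy as np

def particle_step(x, m, grad_V, grad_W, dt):
    """One forward-Euler step of the particle ODE system (5):
    dx_i/dt = -grad V(x_i) - sum_j m_j grad W(x_i - x_j)."""
    v = -grad_V(x)
    for i in range(len(x)):
        diff = x[i] - x              # pairwise differences x_i - x_j
        gw = grad_W(diff)
        gw[i] = 0.0                  # no self-interaction
        v[i] -= np.sum(m * gw)
    return x + dt * v

# Illustrative choice: V(x) = |x|^2/2 and W(x) = |x|^2/2 in one dimension
grad_V = lambda x: x
grad_W = lambda x: x

x = np.array([-1.0, 0.5, 2.0])       # particle locations
m = np.full(3, 1.0 / 3.0)            # equal masses, total mass 1
for _ in range(1000):
    x = particle_step(x, m, grad_V, grad_W, dt=0.01)
# with this confining V and attractive W, all particles contract to the origin
```

With \(V(x) = W(x) = |x|^2/2\), the center of mass decays exponentially and the particles contract to the origin, as the gradient flow of the energy (2) without diffusion predicts.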
This simple particle method converges to exact solutions of equation (1) under suitable assumptions on V and W, as has been shown in the rigorous derivation of this equation as the mean-field limit of particle systems [22, 24, 52]. Recent work, aimed at capturing competing effects in repulsive–attractive systems and developing methods with higher-order accuracy, has considered enhancements of standard particle methods inspired by techniques from classical fluid dynamics, including vortex blob methods and linearly transformed particle methods [19, 36, 46, 49]. Bertozzi and the second author’s blob method for the aggregation equation achieved improved rates of convergence to exact solutions for singular interaction potentials W by convolving W with a mollifier \(\varphi _\varepsilon \). From the Wasserstein gradient flow perspective, this translates into regularizing the interaction energy \((1/2) \int (W*\rho ) \,d\rho \) as \((1/2) \int (W*\varphi _\varepsilon *\rho )\,d\rho \).
When diffusion is present in Eq. (1), the fundamental assumption underlying basic particle methods breaks down: particles do not remain particles, or in other words, the solution of (1) with initial datum (3) is not of the form (4). A natural way to circumvent this difficulty, at least in the case of linear diffusion (\(m=1\)), is to consider a stochastic particle method, in which the particles evolve via Brownian motion. Such approaches were originally developed in the classical fluids case [33], and several recent works have considered analogous methods for equations of Wasserstein gradient flow type, including the Keller–Segel equation [50, 52, 53, 62]. The main practical disadvantage of these stochastic methods is that their results must be averaged over a large number of runs to compensate for the inherent randomness of the approximation. Furthermore, to the authors’ knowledge, such methods have not been extended to the case of degenerate diffusion \(m>1\).
Alternatives to stochastic methods have been explored for similar equations, motivated by particle-in-cell methods for classical fluid, kinetic, and plasma physics equations. These alternatives proceed by introducing a suitable regularization of the flux of the continuity equation [34, 75]. Degond and Mustieles considered the case of linear diffusion (\(m=1\)) by interpreting the Laplacian as induced by a velocity field v, \(\Delta \rho = \nabla \cdot (v \rho )\), \(v = \nabla \rho /\rho \), and regularizing the numerator and denominator separately by convolution with a mollifier [40, 74]. For this regularized equation, particles do remain particles, and a standard particle method can be applied. Well-posedness of the resulting system of ordinary differential equations and a priori estimates relevant to the method were studied by Lacombe and Mas-Gallic [58] and extended to the case of the porous medium equation by Oelschläger and by Lions and Mas-Gallic [60, 63, 66]. In the case \(m=2\) on bounded domains, Lions and Mas-Gallic succeeded in showing that solutions to the regularized equation converge to solutions of the unregularized equation, as long as the initial data has uniformly bounded entropy. Unfortunately, this assumption fails to hold when the initial datum is given by a particle approximation (3), and consequently Lions and Mas-Gallic’s result does not guarantee convergence of the particle method. Oelschläger [66], on the other hand, succeeded in proving convergence of the deterministic particle method, as long as the corresponding solution of the porous medium equation is smooth and positive. An alternative approach, now known as the particle strength exchange method, instead incorporates the effects of diffusion by allowing the weights of the particles \(m_i\) to vary in time. Degond and Mas-Gallic developed such a method for linear diffusion (\(m=1\)) and proved second order convergence with respect to the initial particle spacing [38, 39].
The main disadvantage of these existing deterministic particle methods is that, with the exception of Lions and Mas-Gallic’s work when \(m=2\), they do not preserve the gradient flow structure [60]. Other approaches that respect the problem’s variational structure have been recently proposed in one dimension by approximating particles by non-overlapping blobs [25, 30]. For further background on deterministic particle methods, we refer the reader to Chertock’s comprehensive review [32].
The goal of the present paper is to introduce a new deterministic particle method for equations of the form (1), with linear and nonlinear diffusion (\(m \ge 1\)), that respects the problem’s underlying gradient flow structure and naturally extends to all dimensions. In contrast to the work described above, which began by regularizing the flux of the continuity equation, we follow an approach analogous to Bertozzi and the second author’s blob method for the aggregation equation and regularize the associated internal energy \({\mathcal {F}}\). For a mollifier \(\varphi _\varepsilon (x) = \varphi (x/\varepsilon )/\varepsilon ^d\), \(x \in {\mathord {{\mathbb {R}}}^d}\), \(\varepsilon >0\), we define
For more general nonlinear diffusion, we define
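In the notation of Sect. 3, the two regularized energies presumably read as follows, with (6) for linear diffusion and (7) for general \(F\) satisfying Assumption 3.1 (a reconstruction):

```latex
% (6) linear diffusion (heat equation): F(s) = log s
\mathcal{F}_\varepsilon(\rho) = \int \log \big( \varphi_\varepsilon * \rho \big) \, d\rho ,
% (7) general nonlinear diffusion, F as in Assumption 3.1
%     (e.g. F(s) = s^{m-1}/(m-1) in the porous medium case m > 1)
\mathcal{F}^m_\varepsilon(\rho) = \int F\big( \varphi_\varepsilon * \rho \big) \, d\rho .
```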
As \(\varepsilon \rightarrow 0\), we prove that the regularized internal energies \({\mathcal {F}}^m_\varepsilon \) \(\Gamma \)-converge to the unregularized energies \({\mathcal {F}}^m\) for all \(m \ge 1\); see Theorem 4.1. In the presence of a confining drift or interaction potential, so that minimizers exist, we also show that minimizers converge to minimizers; see Theorem 4.5. For \(m \ge 2\) and semiconvex potentials \(V,W \in C^2({\mathord {{\mathbb {R}}}^d})\), we show that the gradient flows of the regularized energies \({\mathcal {E}}_\varepsilon ^m\) are well-posed and are characterized by solutions to the partial differential equation
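Computing the first variation of \({\mathcal {F}}^m_\varepsilon \) formally, with \(F(s) = s^{m-1}/(m-1)\), suggests that this partial differential equation is of the form (a reconstruction, not a quoted display):

```latex
% (8) gradient flow of the regularized energy, m >= 2
\partial_t \rho = \nabla \cdot \big( (\nabla V + \nabla W * \rho)\, \rho \big)
  + \nabla \cdot \bigg( \rho \, \nabla \Big(
      \tfrac{1}{m-1}\, (\varphi_\varepsilon * \rho)^{m-1}
    + \varphi_\varepsilon * \big( (\varphi_\varepsilon * \rho)^{m-2} \rho \big)
  \Big) \bigg).
```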
Under sufficient regularity conditions, we prove that solutions of the regularized gradient flows converge to solutions of Eq. (1); see Theorem 5.8. When \(m=2\) and the initial datum has bounded entropy, we show that these regularity conditions automatically hold, thus generalizing Lions and Mas-Gallic’s result for the porous medium equation on bounded domains to the full Eq. (1) on all of \({\mathord {{\mathbb {R}}}^d}\); see Corollary 5.9 and [60, Theorem 2].
For this regularized Eq. (8), particles do remain particles; see Corollary 5.5. Consequently, our numerical blob method for diffusion consists of taking a particle approximation for (8). We conclude by showing that, under sufficient regularity conditions, our blob method’s particle solutions converge to exact solutions of (1); see Theorem 6.1. We then give several numerical examples illustrating the rate of convergence of our method and its qualitative properties.
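For the porous medium equation with \(m=2\) and \(V=W=0\), the regularized energy formally yields the particle velocity \(\dot{x}_i = -2\sum _j m_j \nabla \varphi _\varepsilon (x_i - x_j)\). The following one-dimensional sketch (a Gaussian mollifier and illustrative parameters; not the authors' implementation) shows the resulting particles spreading like a diffusing density:

```python
import numpy as np

def blob_velocity(x, m, eps):
    """Blob-method velocity for the porous medium equation (m = 2, V = W = 0):
    dx_i/dt = -2 sum_j m_j phi_eps'(x_i - x_j), with a Gaussian mollifier."""
    diff = x[:, None] - x[None, :]                            # x_i - x_j
    phi = np.exp(-diff**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)
    dphi = -diff / eps**2 * phi                               # phi_eps'(x_i - x_j)
    return -2.0 * (dphi * m[None, :]).sum(axis=1)

N, eps, dt = 50, 0.1, 1e-3            # initial spacing h ~ 0.04, so h < eps
x = np.linspace(-1.0, 1.0, N)         # particle locations
m = np.full(N, 1.0 / N)               # equal masses, total mass 1
for _ in range(2000):
    x = x + dt * blob_velocity(x, m, eps)
# the particle cloud spreads: its empirical variance grows, mimicking diffusion
```

Here the initial spacing \(h \approx 0.04\) is below \(\varepsilon = 0.1\), mirroring the regime \(h = o(\varepsilon )\) of Theorem 6.1; the empirical variance of the particle cloud grows, mimicking the spreading of the exact solution.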
A key advantage of our approach is that, by regularizing the energy functional and not the flux, we preserve the problem’s gradient flow structure. Still, at first glance, our regularization of the energy (6) may seem less natural than other potential choices. For example, one could instead consider the following more symmetric regularization
for more general nonlinear diffusion,
Although studying the above regularization is not without interest, we focus our attention on the regularization in (6) and (7) for numerical reasons. Indeed, computing the first variation density of \({\mathcal {U}}_\varepsilon \) gives
as compared to
for \({\mathcal {F}}_\varepsilon \). In the first case, one can see that replacing \(\rho \) by a sum of Dirac masses still requires the computation of an integral convolution with \(\varphi _\varepsilon \). Indeed, if \(\rho = \sum _{i=1}^N \delta _{x_i} m_i\), where \((x_i)_{i=1}^N\) are N particles in \({\mathord {{\mathbb {R}}}}^d\) with masses \(m_i > 0\), then, for all \(x\in {\mathord {{\mathbb {R}}}}^d\),
which does not allow for a complete discretization of the integrals. On the contrary, in the second case, all convolutions involve \(\rho \), so a similar computation (as it can be found in the proof of Corollary 5.5) shows that they reduce to finite sums, which are numerically less costly.
Another advantage of our approach, in the \(m=2\) case, is that our regularization of the energy can naturally be interpreted as an approximation of the porous medium equation by a very localized nonlocal interaction potential. In this way, our proof of the convergence of the associated particle method provides a theoretical underpinning to approximations of this kind in the computational math and swarming literature [57, 59]. Further advantages of our blob method include the ease with which it may be combined with particle methods for interaction and drift potentials, its simplicity in any dimension, and the good numerical performance we observe for a wide choice of interaction and drift potentials.
Our paper is organized as follows. In Sect. 2, we collect preliminary results concerning the regularization of measures via convolution with a mollifier, including a mollifier exchange lemma (Lemma 2.2), and relevant background on Wasserstein gradient flow and weak convergence of measures. In Sect. 3, we prove several results on the general regularized energies (7), which are of a novel form from the perspective of Wasserstein gradient flow theory, combining aspects of the well-known interaction and internal energies. We show that these regularized energies are semiconvex and differentiable in the Wasserstein metric and characterize their subdifferential with respect to this structure; see Propositions 3.10–3.12. In Sect. 4, we prove that \({\mathcal {F}}_\varepsilon \) \(\Gamma \)-converges to \({\mathcal {F}}\) as \(\varepsilon \rightarrow 0\) and that minimizers converge to minimizers in the presence of a confining drift or interaction term; see Theorems 4.1 and 4.5. With this \(\Gamma \)-convergence in hand, in Sect. 5 we then turn to the question of convergence of gradient flows, restricting to the case \(m \ge 2\). Using the framework introduced by Sandier and Serfaty [76, 78], we prove that, under sufficient regularity assumptions, gradient flows of the regularized energies converge as \(\varepsilon \rightarrow 0\) to gradient flows of the unregularized energy, recovering a generalization of Lions and Mas-Gallic’s results when \(m=2\); see Theorem 5.8 and Corollary 5.9. Finally, in Sect. 6, we prove the convergence of our numerical blob method, under sufficient regularity assumptions, when the initial particle spacing h scales with the regularization like \(h = o(\varepsilon )\); see Theorem 6.1.
We close with several numerical examples, in one and two dimensions, analyzing the rate of convergence to exact solutions with respect to the 2-Wasserstein metric, \(L^1\)-norm, and \(L^\infty \)-norm and illustrating qualitative properties of the method, including asymptotic behavior of the Fokker–Planck equation and critical mass of the two-dimensional Keller–Segel equation; see Sect. 6.3. In particular, for the heat equation and porous medium equations (\(V=W=0\), \(m=1,2,3\)), we observe that the 2-Wasserstein error depends linearly on the grid spacing \(h \sim N^{-1/d}\) for \(m=1,2,3\), while the \(L^1\)-norm depends quadratically on the grid spacing for \(m=1,2\) and superlinearly for \(m=3\). We apply our method to study long time behavior of the nonlinear Fokker–Planck equation (\(V=\left| \cdot \right| ^2/2\), \(W = 0\), \(m=2\)), showing that the blob method accurately captures convergence to the unique steady state. Finally, we conduct a detailed numerical study of equations of Keller–Segel type, including a one-dimensional variant (\(V=0, W = 2\chi \log \left| \cdot \right| , \chi >0, m=1,2\)) and the original two-dimensional equation (\(V=0\), \(W = \Delta ^{-1}\), \(m=1\)). The one-dimensional equation has a critical mass 1, and the two-dimensional equation has critical mass \(8 \pi \), at which point the concentration effects from the nonlocal interaction term balance with linear diffusion (\(m=1\)) [13, 41]. We show that the same notion of criticality is present in our numerical solutions and demonstrate convergence of the critical mass as the grid spacing h and regularization \(\varepsilon \) are refined.
There are several directions for future work. Our convergence theorem for \(m \ge 2\) requires additional regularity assumptions, which we are only able to remove in the case \(m=2\) when the initial data has bounded entropy. In the case of \(m>2\) or more general initial data, it remains an open question how to control certain nonlocal norms of the regularized energies, which play an important role in our convergence result; see Theorem 5.8. Formally, we expect these to behave as approximations of the BV-norm of \(\rho ^m\), which should remain bounded by the gradient flow structure; see Eqs. (24) and (25). When \(1\le m<2\), it is not clear how to use these nonlocal norms to get the desired convergence result or whether an entirely different approach is needed. Perhaps related to these questions is the fact that our estimate on the semiconvexity of the regularized energies (6) deteriorates as \(\varepsilon \rightarrow 0\), while we expect that the semiconvexity should not deteriorate along smooth geodesics; see Proposition 3.11. Finally, while our results show convergence of the blob method for diffusive Wasserstein gradient flows, they do not quantify the rate of convergence in terms of h and \(\varepsilon \). In particular, a theoretical result on the optimal scaling relation between h and \(\varepsilon \) remains open, though we observe good numerical performance for \(\varepsilon = h^{1-p}\), \(0< p \ll 1\). In a less technical direction, we foresee the use of the ideas presented here in conjunction with splitting schemes for certain nonlinear kinetic equations [1, 20], as well as in the fluids setting [49], since our numerical results demonstrate comparable rates of convergence to the particle strength exchange method, which has already gained attention in these contexts [40].
2 Preliminaries
2.1 Basic notation
For any \(r>0\) and \(x \in {\mathord {{\mathbb {R}}}^d}\) we denote the open ball of center x and radius r by \(B_r(x)\). Given a set \(S \subset {\mathord {{\mathbb {R}}}}^d\), we write \(1_{S}:{\mathord {{\mathbb {R}}}}^d \rightarrow \{0,1\}\) for the indicator function of S, i.e., \(1_S(x) = 1\) for \(x \in S\) and \(1_S(x) = 0\) otherwise. We say a function \(A:{\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}}\) has at most quadratic growth if there exist \(c_0, c_1 >0\) so that \(|A(x)| \le c_0 + c_1|x|^2\) for all \(x \in {\mathord {{\mathbb {R}}}^d}\).
Let \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\) denote the set of Borel probability measures on \({\mathord {{\mathbb {R}}}^d}\), and, for any \(p\in {\mathord {{\mathbb {N}}}}\), \({\mathcal P}_p({\mathord {{\mathbb {R}}}^d})\) denotes elements of \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\) with finite pth moment, \(M_p({\mathord {{\mathbb {R}}}}^d) := \textstyle \int _{\mathord {{\mathbb {R}}}^{d}}|x|^p\,d\mu (x) < +\,\infty \). We write \({\mathcal {L}}^d\) for the d-dimensional Lebesgue measure, and for given \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\), we write \(\mu \ll {\mathcal {L}}^d\) if \(\mu \) is absolutely continuous with respect to the Lebesgue measure. Often we use the same symbol for both a probability measure and its Lebesgue density, whenever the latter exists. We let \(L^p(\mu ;{\mathord {{\mathbb {R}}}^d})\) denote the Lebesgue space of functions with pth power integrable against \(\mu \).
Given \(\sigma \) a finite, signed Borel measure on \({\mathord {{\mathbb {R}}}}^d\), we denote its variation by \(|\sigma |\). For a Borel set \(E \subset {\mathord {{\mathbb {R}}}^d}\) we write \(\sigma (E)\) for the \(\sigma \)-measure of the set E. For a Borel map \(T : {\mathord {{\mathbb {R}}}}^d \rightarrow {\mathord {{\mathbb {R}}}}^d\) and \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}}^d)\), we write \(T_\# \mu \) for the push-forward of \(\mu \) through T. We let \(\mathop {\mathrm{id}}: {\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}^d}\) denote the identity map on \({\mathord {{\mathbb {R}}}}^d\) and define \((\mathop {\mathrm{id}},T) : {\mathord {{\mathbb {R}}}}^d \rightarrow {\mathord {{\mathbb {R}}}}^d \times {\mathord {{\mathbb {R}}}}^d\) by \((\mathop {\mathrm{id}},T)(x) = (x,T(x))\) for all \(x \in {\mathord {{\mathbb {R}}}^d}\). For a sequence \((\mu _n)_n \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and some \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\), we write \(\mu _n {\mathop {\rightharpoonup }\limits ^{*}}\mu \) if \((\mu _n)_n\) converges to \(\mu \) in the weak-\(^*\) topology of probability measures, i.e., in the duality with bounded continuous functions.
2.2 Convolution of measures
A key aspect of our approach is the regularization of the energy (2) via convolution with a mollifier. In this section, we collect some elementary results on the convolution of probability measures, including a mollifier exchange lemma, Lemma 2.2.
For any \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and measurable function \(\phi \), the convolution of \(\phi \) with \(\mu \) is given by
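That is, in the standard form,

```latex
(\phi * \mu)(x) = \int_{\mathbb{R}^d} \phi(x - y)\, d\mu(y),
\qquad x \in \mathbb{R}^d,
```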
whenever the integral converges. We consider mollifiers \(\varphi \) satisfying the following assumption.
Assumption 2.1
(mollifier) Let \(\varphi = \zeta * \zeta \), where \(\zeta \in C^2({\mathord {{\mathbb {R}}}^d};[0,\infty ))\) is even, \(\Vert \zeta \Vert _{L^1({\mathord {{\mathbb {R}}}^d})} =1\), and
This assumption is satisfied by both Gaussians and smooth functions with compact support. Assumption 2.1 also ensures that \(\varphi \) has finite first moment. For any \(\varepsilon >0\), we write
Throughout, we use the fact that the definition of convolution allows us to move mollifiers from the measure to the integrand. In particular, for any \(\phi \) bounded below and \(\psi \in L^1({\mathord {{\mathbb {R}}}^d})\) even,
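namely (by Fubini's theorem and the evenness of \(\psi \)),

```latex
\int_{\mathbb{R}^d} \phi \, d(\psi * \mu)
  = \int_{\mathbb{R}^d} \phi(x)\, (\psi * \mu)(x) \, dx
  = \int_{\mathbb{R}^d} (\psi * \phi)(y) \, d\mu(y).
```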
Likewise, the technical assumption that \(\varphi = \zeta * \zeta \), and therefore that \(\varphi _\varepsilon = \zeta _\varepsilon * \zeta _\varepsilon \), allows us to regularize integrands involving the mollifier \(\varphi _\varepsilon \); indeed, the following lemma provides sufficient conditions for moving functions in and out of convolutions with mollifiers within integrals. (See also [60] for a similar result.) This is an essential component in the proofs of both main results, Theorems 4.1 and 5.8, on the \(\Gamma \)-convergence of the regularized energies and the convergence of the corresponding gradient flows. See “Appendix A” for the proof of this lemma.
Lemma 2.2
(mollifier exchange lemma) Let \(f:{\mathord {{\mathbb {R}}}}^d \rightarrow {\mathord {{\mathbb {R}}}}\) be Lipschitz continuous with Lipschitz constant \(L_f>0\), and let \(\sigma \) and \(\nu \) be finite, signed Borel measures on \({\mathord {{\mathbb {R}}}}^d\). There is \(p = p(q,d)>0\) so that
We conclude this section with a lemma stating that if a sequence of measures converges in the weak-\(^*\) topology of \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\), then the mollified sequence converges to the same limit. We refer the reader to “Appendix A” for the proof.
Lemma 2.3
Let \(\mu _\varepsilon \) be a sequence in \({\mathcal P}({\mathord {{\mathbb {R}}}}^d)\) such that \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \) as \(\varepsilon \rightarrow 0\) for some \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}}^d)\). Then \(\varphi _\varepsilon *\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \).
2.3 Optimal transport, Wasserstein metric, and gradient flows
We now describe basic facts about optimal transport, including the Wasserstein metric and associated gradient flows. (See also [2, 3, 5, 77, 82, 83] for further background and more details on the definitions and remarks found in this section.)
For \(\mu ,\nu \in {\mathcal P}({\mathord {{\mathbb {R}}}}^d)\), we denote the set of transport plans from \(\mu \) to \(\nu \) by
where \(\pi ^1,\pi ^2:{\mathord {{\mathbb {R}}}}^d \times {\mathord {{\mathbb {R}}}}^d \rightarrow {\mathord {{\mathbb {R}}}}^d\) are the projections of \({\mathord {{\mathbb {R}}}}^d\times {\mathord {{\mathbb {R}}}}^d\) onto the first and second copy of \({\mathord {{\mathbb {R}}}}^d\), respectively. The Wasserstein distance \(W_2(\mu ,\nu )\) between two probability measures \(\mu ,\nu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) is given by
and a transport plan \(\gamma _\mathrm {o}\) is optimal if it attains the minimum in (9). We denote the set of optimal transport plans by \(\Gamma _\mathrm {o}(\mu ,\nu )\). If \(\mu \) is absolutely continuous with respect to the Lebesgue measure, then there is a unique optimal transport plan \(\gamma _\mathrm {o}\), and
for a Borel measurable function \(T_\mathrm {o}:{\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}^d}\). \(T_\mathrm {o}\) is unique up to sets of \(\mu \)-measure zero and is known as the optimal transport map from \(\mu \) to \(\nu \). Convergence with respect to the Wasserstein metric is stronger than weak-\(^*\) convergence. In particular, if \((\mu _n)_n \subset {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) and \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\), then
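In the standard notation, the displayed relations read:

```latex
% (9) quadratic Wasserstein distance
W_2(\mu ,\nu ) = \min_{\gamma \in \Gamma (\mu ,\nu )}
  \left( \iint_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^2 \, d\gamma (x,y) \right)^{1/2} ;
% optimal plan induced by the optimal transport map when \mu \ll L^d:
\gamma_{\mathrm{o}} = (\mathrm{id}, T_{\mathrm{o}})_\# \mu ;
% W_2-convergence is weak-* convergence plus convergence of second moments:
\mu_n \xrightarrow{\,W_2\,} \mu
  \iff
  \mu_n \stackrel{*}{\rightharpoonup} \mu \ \text{ and } \ M_2(\mu_n) \to M_2(\mu).
```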
In order to define Wasserstein gradient flows, we will require the following notion of regularity in time with respect to the Wasserstein metric.
Definition 2.4
(absolutely continuous) \( \mu \in AC^2_\mathrm{loc}((0,\infty );P_2({\mathord {{\mathbb {R}}}^d}))\) if there is \(f\in L^2_\mathrm{loc}((0,\infty ))\) so that
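that is, presumably,

```latex
W_2\big( \mu (s), \mu (t) \big) \le \int_s^t f(r)\, dr
\qquad \text{for all } 0 < s \le t < \infty .
```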
Along such curves, we have a notion of metric derivative.
Definition 2.5
(metric derivative) Given \( \mu \in AC^2_\mathrm{loc}((0,\infty );P_2({\mathord {{\mathbb {R}}}^d}))\), its metric derivative is
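that is, in the standard form,

```latex
|\mu '|(t) := \lim_{s \to t} \frac{W_2\big( \mu (s), \mu (t) \big)}{|s - t|},
```

which exists for almost every \(t\).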
An important class of curves in the Wasserstein metric are the (constant speed) geodesics. Given \(\mu _0,\mu _1 \in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\), geodesics connecting \(\mu _0\) to \(\mu _1\) are of the form
If \(\gamma _\mathrm {o}\) is induced by a map \(T_\mathrm {o}\), then
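In the standard notation, for \(\gamma _\mathrm {o} \in \Gamma _\mathrm {o}(\mu _0,\mu _1)\), these two displays read:

```latex
% geodesic induced by an optimal plan
\mu_t = \big( (1-t)\, \pi^1 + t\, \pi^2 \big)_\# \gamma_{\mathrm{o}},
\qquad t \in [0,1];
% when \gamma_o = (id, T_o)_# \mu_0,
\mu_t = \big( (1-t)\, \mathrm{id} + t\, T_{\mathrm{o}} \big)_\# \mu_0 .
```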
More generally, given \(\mu _1,\mu _2,\mu _3\in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\), a generalized geodesic connecting \(\mu _2\) to \(\mu _3\) with base \(\mu _1\) is given by
with \(\pi ^{1,i}:{\mathord {{\mathbb {R}}}}^d \times {\mathord {{\mathbb {R}}}}^d \times {\mathord {{\mathbb {R}}}}^d \rightarrow {\mathord {{\mathbb {R}}}}^d\times {\mathord {{\mathbb {R}}}}^d\) the projection onto the first and ith copies of \({\mathord {{\mathbb {R}}}}^d\). When the base \(\mu _1\) coincides with one of the endpoints \(\mu _2\) or \(\mu _3\), generalized geodesics are geodesics.
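That is, presumably, writing \(\mu _t^{2\rightarrow 3}\) for the generalized geodesic, for a plan \(\gamma \in {\mathcal P}({\mathord {{\mathbb {R}}}}^d\times {\mathord {{\mathbb {R}}}}^d\times {\mathord {{\mathbb {R}}}}^d)\) with \(\pi ^{1,2}_\#\gamma \in \Gamma _\mathrm {o}(\mu _1,\mu _2)\) and \(\pi ^{1,3}_\#\gamma \in \Gamma _\mathrm {o}(\mu _1,\mu _3)\):

```latex
\mu_t^{2 \to 3} = \big( (1-t)\, \pi^2 + t\, \pi^3 \big)_\# \gamma ,
\qquad t \in [0,1].
```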
A key property for the uniqueness and stability of Wasserstein gradient flows is that the energies are convex, or more generally semiconvex, along generalized geodesics.
Definition 2.6
(semiconvexity along generalized geodesics) We say a functional \({\mathcal {G}}:{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\rightarrow (-\,\infty ,\infty ]\) is semiconvex along generalized geodesics if there is \(\lambda \in {\mathord {{\mathbb {R}}}}\) such that for all \(\mu _1,\mu _2,\mu _3 \in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) there exists a generalized geodesic connecting \(\mu _2\) to \(\mu _3\) with base \(\mu _1\) such that
where
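In the standard form, the inequality and the base distance read (writing \(\mu _t^{2\rightarrow 3}\) for the generalized geodesic and \(\gamma \) for the inducing three-plan; a reconstruction):

```latex
\mathcal{G}\big( \mu_t^{2 \to 3} \big)
  \le (1-t)\, \mathcal{G}(\mu_2) + t\, \mathcal{G}(\mu_3)
  - \frac{\lambda}{2}\, t(1-t)\, W^2_{\mu_1}(\mu_2, \mu_3),
  \qquad t \in [0,1],
% base distance:
W^2_{\mu_1}(\mu_2, \mu_3) = \int |\pi^2 - \pi^3|^2 \, d\gamma
  \ \ge\ W_2^2(\mu_2, \mu_3).
```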
For any subset \(X\subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and functional \({\mathcal {G}}:X \rightarrow (-\,\infty ,\infty ]\), we denote the domain of \({\mathcal {G}}\) by \(D({\mathcal {G}}) = \{ \mu \in X \mid {\mathcal {G}}(\mu ) < +\,\infty \}\), and we say that \({\mathcal {G}}\) is proper if \(D({\mathcal {G}}) \ne \emptyset \). As soon as a functional is proper and lower semicontinuous with respect to the weak-* topology, we may define its subdifferential; see [3, Definition 10.3.1 and Eq. 10.3.12]. Following the approach in [24], the notion of subdifferential we use in this paper is, in fact, the following reduced one.
Definition 2.7
(subdifferential) Given \({\mathcal {G}}:{\mathcal P}_2({\mathord {{\mathbb {R}}}^d}) \rightarrow (-\,\infty ,\infty ]\) proper and lower semicontinuous, \(\mu \in D({\mathcal {G}})\), and \(\xi :{\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}^d}\) with \(\xi \in L^2(\mu ;{\mathord {{\mathbb {R}}}^d})\), then \(\xi \) belongs to the subdifferential of \({\mathcal {G}}\) at \(\mu \), written \(\xi \in \partial {\mathcal {G}}(\mu )\), if as \(\nu \xrightarrow {W_2} \mu \),
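presumably of the reduced form (cf. [3, Eq. 10.3.12] and [24]):

```latex
\mathcal{G}(\nu ) - \mathcal{G}(\mu )
  \ge \int_{\mathbb{R}^d \times \mathbb{R}^d} \langle \xi (x),\, y - x \rangle \, d\gamma (x,y)
  + o\big( W_2(\mu ,\nu ) \big)
\qquad \text{for some } \gamma \in \Gamma_{\mathrm{o}}(\mu ,\nu ).
```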
The Wasserstein metric is formally Riemannian, and we may define the tangent space as follows.
Definition 2.8
Let \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\). The tangent space at \(\mu \) is
where the closure is taken in \(L^2(\mu ;{\mathord {{\mathbb {R}}}}^d)\).
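That is,

```latex
{{\,\mathrm{Tan}\,}}_\mu {\mathcal P}_2(\mathbb{R}^d)
  := \overline{ \big\{ \nabla \phi \;:\; \phi \in C_{\mathrm{c}}^\infty (\mathbb{R}^d) \big\} }^{\,L^2(\mu ;\mathbb{R}^d)} .
```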
We now turn to the definition of a gradient flow in the Wasserstein metric (cf. [3, Proposition 8.3.1, Definition 11.1.1]).
Definition 2.9
(gradient flow) Suppose \({\mathcal {G}}:{\mathcal P}_2({\mathord {{\mathbb {R}}}^d}) \rightarrow {\mathord {{\mathbb {R}}}}\cup \{+\,\infty \}\) is proper and lower semicontinuous. A curve \(\mu \in AC^2_\mathrm{loc}((0,+\,\infty ); {\mathcal P}_2({\mathord {{\mathbb {R}}}^d}))\) is a gradient flow of \({\mathcal {G}}\) if there exists a velocity vector field \(v:(0,\infty )\times {\mathord {{\mathbb {R}}}}^d \rightarrow {{\mathord {{\mathbb {R}}}}^d}\) with \(-v(t) \in \partial {\mathcal {G}}(\mu (t)) \cap {{\,\mathrm{Tan}\,}}_{\mu (t)}{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d) \) for almost every \(t >0\) such that \(\mu \) is a weak solution of the continuity equation
i.e., \(\mu \) is a solution to the continuity equation in duality with \(C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\).
We close this section with the following definition of the Wasserstein local slope.
Definition 2.10
(local slope) Given \({\mathcal {G}}:{\mathcal P}_2({\mathord {{\mathbb {R}}}^d}) \rightarrow (-\,\infty ,\infty ]\), its local slope is
where the subscript \(+\) denotes the positive part.
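That is,

```latex
|\partial \mathcal{G}|(\mu )
  = \limsup_{\nu \xrightarrow{W_2} \mu }
    \frac{\big( \mathcal{G}(\mu ) - \mathcal{G}(\nu ) \big)_+}{W_2(\mu ,\nu )} .
```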
Remark 2.11
When the functional \({\mathcal {G}}\) in Definition 2.9 is, in addition, semiconvex along geodesics, the local slope \(|\partial {\mathcal {G}}|\) is a strong upper gradient for \({\mathcal {G}}\). In this case, a gradient flow of \({\mathcal {G}}\) is characterized as being a 2-curve of maximal slope with respect to \(|\partial {\mathcal {G}}|\); see [3, Theorem 11.1.3].
3 Regularized internal energies
The foundation of our blob method is the regularization of the internal energy \({\mathcal {F}}\) via convolution with a mollifier. This allows us to preserve the gradient flow structure and approximate our original partial differential equation (1) by a sequence of equations for which particles do remain particles. In this section, we consider several fundamental properties of the regularized internal energies \({\mathcal {F}}_\varepsilon \), including convexity, lower semicontinuity, and differentiability. In what follows, we will suppose that our internal energies satisfy the following assumption.
Assumption 3.1
(internal energies) Suppose \(F \in C^2(0,+\,\infty )\) satisfies \(\lim _{s \rightarrow +\,\infty } F(s) = +\,\infty \) and either F is bounded below or \(\liminf _{s\rightarrow 0} F(s) / s^\beta >-\,\infty \) for some \(\beta >-2/(d+2)\). Suppose further that \(U(s) = sF(s)\) is convex, bounded below, and \(\lim _{s \rightarrow 0} U(s) = 0\).
Thanks to this assumption we can define the internal energy corresponding to F by
If F is bounded below, this is well-defined on all of \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\). If \(\liminf _{s\rightarrow 0} F(s) / s^\beta >-\,\infty \) for some \(\beta >-2/(d+2)\), this is well-defined on \({\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\); see [3, Example 9.3.6].
Remark 3.2
(nondecreasing) Assumption 3.1 implies that F is nondecreasing. Indeed, by the convexity of U(s) and the fact that \(\lim _{s \rightarrow 0} sF(s) =0\),
which leads to \(F'(s)\ge 0\) for all \(s \in (0,\infty )\).
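The inequality behind this remark can be reconstructed as follows (a sketch using only the stated hypotheses): writing \(U(s) = sF(s)\), convexity of \(U\) together with \(\lim _{s\rightarrow 0} U(s) = 0\) gives \(U'(s) \ge U(s)/s = F(s)\), and hence

```latex
s F'(s) \;=\; U'(s) - F(s) \;\ge\; \frac{U(s)}{s} - F(s) \;=\; 0
\qquad \text{for all } s \in (0,\infty).
```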
Our assumption does not ensure that \({\mathcal {F}}\) is convex along Wasserstein geodesics, unless F is convex.
Remark 3.3
(McCann’s convexity condition) McCann’s condition [65] on the internal energy density U for the convexity of the internal energy \({\mathcal {F}}\) can be stated in terms of the function F instead: the function \(s \mapsto F(s^{-d})\) is nonincreasing and convex on \((0,\infty )\), i.e.,
which, by Remark 3.2, holds when, for example, F is convex and satisfies Assumption 3.1.
We regularize the internal energies by convolution with a mollifier.
Definition 3.4
(regularized internal energies) Given \(F:(0,\infty )\rightarrow {\mathord {{\mathbb {R}}}}\) satisfying Assumption 3.1, we define, for all \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}}^d)\), the regularized internal energies by
Note that, for all \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and \(\varepsilon >0\), \({\mathcal {F}}_\varepsilon (\mu )\le F(\left\| \varphi _\varepsilon \right\| _{L^\infty ({\mathord {{\mathbb {R}}}}^d)}) < \infty \).
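For orientation, the regularized internal energy is obtained by evaluating \(F\) on the mollified measure (a sketch consistent with the bound above, since \(\varphi _\varepsilon *\mu \le \left\| \varphi _\varepsilon \right\| _{L^\infty ({\mathord {{\mathbb {R}}}}^d)}\) and \(F\) is nondecreasing by Remark 3.2):

```latex
{\mathcal F}_\varepsilon(\mu) \;=\; \int_{\mathbb{R}^d} F\big( (\varphi_\varepsilon * \mu)(x) \big)\, d\mu(x).
```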
An important class of internal energies satisfying Assumption 3.1 are given by the (negative) entropy and Rényi entropies.
Definition 3.5
The entropy and Rényi entropies, and their regularizations, are given by
Note that, as per our observation just below the definition of \({\mathcal {F}}\), the entropy \({\mathcal {F}}^1\) is well-defined on \({\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) and the Rényi entropies (\({\mathcal {F}}^m, m>1\)) are well-defined on all of \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\). Also note that the regularized entropies (\({\mathcal {F}}^m_\varepsilon , m \ge 1, \varepsilon >0\)) are well-defined on all of \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\).
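Concretely, taking \(F_1(s) = \log s\) and \(F_m(s) = s^{m-1}/(m-1)\) for \(m>1\) (the standard choices for the entropy and Rényi entropies), the energies and their regularizations read (our rendering, with \({\mathcal F}^1\) understood on absolutely continuous measures):

```latex
{\mathcal F}^1(\mu) = \int_{\mathbb{R}^d} \mu \log \mu \, dx, \qquad
{\mathcal F}^m(\mu) = \frac{1}{m-1}\int_{\mathbb{R}^d} \mu^m \, dx, \qquad
{\mathcal F}^m_\varepsilon(\mu) = \int_{\mathbb{R}^d} F_m(\varphi_\varepsilon * \mu)\, d\mu .
```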
In order to approximate solutions of Eq. (1), we will consider combinations of the above regularized internal energies with potential and interaction energies.
Definition 3.6
(regularized energies) Let \(V,W: {\mathord {{\mathbb {R}}}^d}\rightarrow (-\,\infty ,\infty ]\) be proper and lower semicontinuous. Suppose further that W is locally integrable. For all \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) define
When \(F=F_m\) for some \(m\ge 1\), we denote \({\mathcal {E}}\) by \({\mathcal {E}}^m\) and \({\mathcal {E}}_\varepsilon \) by \({\mathcal {E}}_\varepsilon ^m\).
The regularized internal energy in Definition 3.4 incorporates a blend of interaction and internal phenomena, through the convolution with the mollifier, or potential, \(\varphi _\varepsilon \) and the composition with the function F. To our knowledge, this is a novel form of functional on the space of probability measures. We now describe some of its basic properties: energy bounds and lower semicontinuity when F is the logarithm or a power, and differentiability, convexity, and subdifferential characterization when F is convex. For the existence and uniqueness of gradient flows associated to this regularized energy, see Sect. 5.
Remark 3.7
Although the regularized energy in Definition 3.4 is of a novel form, it was noticed in [71, Proposition 6.9] that a previous particle method for diffusive gradient flows leads to a similar regularized internal energy after space discretization [25, 30]. The essential difference between the two methods lies in the choice of the mollifier, which, instead of satisfying (2.1), is a very singular potential.
We begin with inequalities relating the regularized internal energies to the unregularized energies. See “Appendix A” for the proof, which is a consequence of Jensen’s inequality and a Carleman-type estimate on the lower bound of the entropy [30, Lemma 4.1].
Proposition 3.8
Let \(\varepsilon >0\). If \(m = 1\), suppose \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\), and if \(m>1\), suppose \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\). Then,
where \(C_\varepsilon = C_\varepsilon (m,\mu ) \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Furthermore, for all \(\delta >0\), we have
For all \(\varepsilon >0\), the regularized entropies are lower semicontinuous with respect to weak-* convergence (\(m>1\)) and Wasserstein convergence (\(m=1\)). For \(m>2\), we prove this using a theorem of Ambrosio, Gigli, and Savaré on the convergence of maps with respect to varying measures; see Proposition B.2. For \(1<m \le 2\), this is a consequence of Jensen’s inequality. For \(m=1\), we apply both Jensen’s inequality and a version of Fatou’s lemma for varying measures; see Lemma B.3. In this case, we also require that the mollifier \(\varphi \) is a Gaussian, so that we can get the bound from below required by Fatou’s lemma. We refer the reader to “Appendix A” for the proof.
Proposition 3.9
(lower semicontinuity) Let \(\varepsilon >0\). Then
-
(i)
\({\mathcal {F}}^m_\varepsilon \) is lower semicontinuous with respect to weak-\(^*\) convergence in \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\) for all \(m>1\);
-
(ii)
if \(\varphi \) is a Gaussian, then \({\mathcal {F}}^1_\varepsilon \) is lower semicontinuous with respect to the quadratic Wasserstein convergence in \({\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\).
When F is convex, the regularized internal energies are differentiable along generalized geodesics. The proof relies on the fact that F is differentiable and \(\varphi _\varepsilon \in C^2({\mathord {{\mathbb {R}}}^d})\), with bounded Hessian; see “Appendix A”.
Proposition 3.10
(differentiability) Let F satisfy Assumption 3.1 and be convex. Given \(\mu _1,\mu _2,\mu _3 \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) and \(\gamma \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d}\times {\mathord {{\mathbb {R}}}^d}\times {\mathord {{\mathbb {R}}}^d})\) with \(\pi ^i_\#\gamma = \mu _i\), let \(\mu _\alpha ^{2\rightarrow 3} = \left( (1-\alpha )\pi ^2+\alpha \pi ^3\right) _\# \gamma \) for \(\alpha \in [0,1]\). Then
A key consequence of the preceding proposition is that the regularized energies are semiconvex along generalized geodesics, as we now show.
Proposition 3.11
(convexity) Suppose F satisfies Assumption 3.1 and is convex. Then \({\mathcal {F}}_\varepsilon \) is \(\lambda _F\)-convex along generalized geodesics, where
Proof
Let \((\mu _\alpha ^{2\rightarrow 3})_{\alpha \in [0,1]}\) be a generalized geodesic connecting two probability measures \(\mu _2,\mu _3\in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) with base \(\mu _1\in {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\); see (10).
We have, using the above-the-tangent inequality for convex functions,
Therefore, by Proposition 3.10,
which gives the result. \(\square \)
We now use the previous results to characterize the subdifferential of the regularized internal energy. The structure of the argument is classical (c.f. [3, 24, 55]), but due to the novel form of our regularized energies, we include the proof in “Appendix A”.
Proposition 3.12
(subdifferential characterization) Suppose F satisfies Assumption 3.1 and is convex. Let \(\varepsilon >0\) and \(\mu \in D({\mathcal {F}}_\varepsilon )\). Then
where
In particular, we have \( |\partial {\mathcal {F}}_\varepsilon |(\mu ) = \left\| \nabla \frac{\delta {\mathcal {F}}_\varepsilon }{\delta \mu } \right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\).
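For orientation, the first variation appearing here can be computed formally (our sketch, differentiating \(\alpha \mapsto {\mathcal F}_\varepsilon (\mu + \alpha \nu )\) for \({\mathcal F}_\varepsilon (\mu ) = \int F(\varphi _\varepsilon *\mu )\,d\mu \) and using that \(\varphi _\varepsilon \) is even; the rigorous statement is in the appendix):

```latex
\frac{\delta {\mathcal F}_\varepsilon}{\delta \mu}
\;=\; F(\varphi_\varepsilon * \mu) \;+\; \varphi_\varepsilon * \big( F'(\varphi_\varepsilon * \mu)\, \mu \big).
```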
As a consequence of this characterization of the subdifferential, we obtain the analogous result for the full energy \({\mathcal {E}}_\varepsilon \), as in Definition 3.6. See “Appendix A” for the proof.
Corollary 3.13
Suppose F satisfies Assumption 3.1 and is convex. Let \(\varepsilon >0\) and \(\mu \in D({\mathcal {E}}_\varepsilon )\). Suppose \(V, W \in C^1({\mathord {{\mathbb {R}}}}^d)\) are semiconvex, with at most quadratic growth, and W is even. Then
where
In particular, we have \(|\partial {\mathcal {E}}_\varepsilon |(\mu ) = \left\| \nabla \frac{\delta {\mathcal {E}}_\varepsilon }{\delta \mu } \right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\).
4 \(\Gamma \)-convergence of regularized internal energies
We now turn to the convergence of the regularized energies and, when in the presence of confining drift or interaction terms, the corresponding convergence of their minimizers. In this section, and for the remainder of the work, we consider regularized entropies and Rényi entropies of the form \({\mathcal {F}}_\varepsilon ^m\) for \(m\ge 1\). We begin by showing that \({\mathcal {F}}_\varepsilon ^m\) \(\Gamma \)-converges to \({\mathcal {F}}\) as \(\varepsilon \rightarrow 0\) with respect to the weak-\(^*\) topology.
Theorem 4.1
(\({\mathcal {F}}_\varepsilon \) \(\Gamma \)-converges to \({\mathcal {F}}\)) If \(m =1\), consider \((\mu _\varepsilon )_\varepsilon \subset {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) and \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\), and if \(m >1\), consider \((\mu _\varepsilon )_\varepsilon \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\).
-
(i)
If \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), we have \(\liminf _{\varepsilon \rightarrow 0 } {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon ) \ge {\mathcal {F}}^m(\mu )\).
-
(ii)
We have \(\limsup _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu ) \le {\mathcal {F}}^m(\mu )\).
Proof
We begin by showing the result for \(1 \le m \le 2\), in which case the function F is concave. We first show part (i). By Proposition 3.8, for all \(\varepsilon >0\),
By Lemma 2.3, \( \mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \) implies \(\zeta _\varepsilon * \mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \). Therefore, by the lower semicontinuity of \({\mathcal {F}}^m\) with respect to weak-\(^*\) convergence [3, Remark 9.3.8],
which gives the result. We now turn to part (ii). Again, by Proposition 3.8, for all \(\varepsilon >0\),
where \(C_\varepsilon \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Therefore, \(\limsup _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu ) \le {\mathcal {F}}^m(\mu )\).
We now consider the case when \(m>2\). Part (ii) follows quickly: by Proposition 3.8, Young’s convolution inequality, and the fact that \( \Vert \zeta _\varepsilon \Vert _{L^1({\mathord {{\mathbb {R}}}^d})} = 1\), for all \(\varepsilon >0\) we have
Taking the supremum limit as \(\varepsilon \rightarrow 0\) then gives the result. Let us prove part (i). Without loss of generality, we may suppose that \(\liminf _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon )\) is finite. Furthermore, there exists a positive sequence \((\varepsilon _n)_n\) such that \(\varepsilon _n \rightarrow 0\) and \(\lim _{n \rightarrow + \infty } {\mathcal {F}}_{\varepsilon _n}^m(\mu _{\varepsilon _n}) = \liminf _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon )\). In particular, there exists \(C>0\) for which \({\mathcal {F}}^m_{\varepsilon _n}(\mu _{\varepsilon _n}) < C\) for all \(n \in {\mathbb {N}}\). By Jensen’s inequality for the convex function \(x \mapsto x^{m-1}\) and the fact that \(\zeta _\varepsilon * \zeta _\varepsilon = \varphi _\varepsilon \) for all \(\varepsilon >0\),
Thus, since \({\mathcal {F}}^m_{\varepsilon _n}(\mu _{\varepsilon _n}) < C\) for all \(n \in {\mathord {{\mathbb {N}}}}\), we have \(\Vert \zeta _{\varepsilon _n}*\mu _{\varepsilon _n}\Vert _{L^2({\mathord {{\mathbb {R}}}^d})}< C':= (C(m-1))^{1/(2(m-1))}\). We now use this bound on the \(L^2\)-norm of \(\zeta _{\varepsilon _n}*\mu _{\varepsilon _n}\) to deduce a stronger notion of convergence of \(\zeta _{\varepsilon _n}*\mu _{\varepsilon _n}\) to \(\mu \). First, since \((\mu _{\varepsilon _n})_n\) converges weakly-\(^*\) to \(\mu \) as \(n\rightarrow \infty \), Lemma 2.3 ensures that \((\zeta _{\varepsilon _n}*\mu _{\varepsilon _n} - \mu _{\varepsilon _n})_n\) converges weakly-\(^*\) to 0. Since the \(L^2\)-norm is lower semicontinuous with respect to weak-\(^*\) convergence [65, Lemma 3.4], we have
so that \(\mu \in L^2({\mathord {{\mathbb {R}}}^d})\). Furthermore, up to another subsequence, we may assume that \((\zeta _{\varepsilon _n}*\mu _{\varepsilon _n})_n\) converges weakly in \(L^2\). Also, since \(\zeta _{\varepsilon _n}*\mu _{\varepsilon _n} {\mathop {\rightharpoonup }\limits ^{*}}\mu \), for all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\) we have
so \((\zeta _{\varepsilon _n}*\mu _{\varepsilon _n})_n\) converges weakly in \(L^2\) to \(\mu \). By the Banach–Saks theorem (c.f. [73, Sect. 38]), up to taking a further subsequence of \((\zeta _{\varepsilon _n}*\mu _{\varepsilon _n})_n\), the Cesàro mean \((v_k)_k\) defined by
converges to \(\mu \) strongly in \(L^2\). Finally, for any \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\), this ensures
so that
We now use this stronger notion of convergence to conclude our proof of part (i). Since \(m>2\) and
by part (i) of Proposition B.2, up to another subsequence, there exists \(w \in L^1(\mu ;{\mathord {{\mathbb {R}}}^d})\) so that for all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\),
Furthermore, recalling the definition of the regularized energy and applying [3, Theorem 5.4.4(ii)],
Therefore, to finish the proof, it suffices to show that \(w(x) \ge \mu (x)\) for \(\mu \)-almost every \(x \in {\mathord {{\mathbb {R}}}^d}\). By Lemma 2.2 and the fact that \(\zeta _{\varepsilon _n} *\zeta _{\varepsilon _n} = \varphi _{\varepsilon _n}\) for all \(n\in {\mathord {{\mathbb {N}}}}\), there exist \(p>0\) and \(C_\zeta >0\) so that for all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\),
Combining this with Eq. (18), we obtain
Finally, using Eq. (17) and the definition of \((v_k)_k\) as a sequence of convex combinations of the family \(\{\zeta _{\varepsilon _i} *\mu _{\varepsilon _i}\}_{i\in \{1,\ldots ,k\}}\), for all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\) with \(f \ge 0\) we have
Since the limit in (19) exists, it coincides with its Cesàro mean on the right-hand side of the above equation. Thus, for all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\) with \(f \ge 0\),
This gives \(w(x) \ge \mu (x)\) for \(\mu \)-almost every \(x \in {\mathord {{\mathbb {R}}}^d}\), which completes the proof. \(\square \)
Now, we add a confining drift or interaction potential to our internal energies, so that energy minimizers exist and we may apply the previous \(\Gamma \)-convergence result to conclude that minimizers converge to minimizers. For the remainder of the section we consider energies of the form \({\mathcal {E}}_\varepsilon ^m\) given in Definition 3.6, with the following additional assumptions on V and W to ensure that the energy is confining.
Assumption 4.2
(confining assumptions) The potentials V and W are bounded below and one of the following additional assumptions holds:
Under these assumptions, the regularized energies \({\mathcal {E}}_\varepsilon ^m\) are lower semicontinuous with respect to weak-\(^*\) convergence (\(m>1\)) and Wasserstein convergence (\(m=1\)), where for the latter we assume \(\varphi \) is a Gaussian (c.f. Proposition 3.9, and [3, Lemma 5.1.7], [65, Lemma 3.4] and [79, Lemma 2.2]).
Remark 4.3
(tightness of sublevels) Assumptions (CV) and (CV\('\)) ensure that the set \(\{ \mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d}) \mid \int V \,d \mu \le C \}\) is tight for all \(C>0\); c.f. [3, Remark 5.1.5]. Likewise, Assumption (CW) on W ensures that the set \(\{ \mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d}) \mid \int W*\mu \,d \mu \le C \}\) is tight up to translations for all \(C >0\); c.f. [79, Theorem 3.1].
We now prove existence of minimizers of \({\mathcal {E}}_\varepsilon ^m\), for all \(\varepsilon >0\).
Proposition 4.4
Let \(\varepsilon >0\). For \(m>1\), if either Assumption (CV) or (CW) holds, then minimizers of \({\mathcal {E}}^m_\varepsilon \) over \({\mathcal P}({\mathord {{\mathbb {R}}}^d})\) exist. For \(m=1\), if (CV\('\)) holds and \(\varphi \) is a Gaussian, then minimizers of \({\mathcal {E}}^1_\varepsilon \) over \({\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) exist.
Proof
First suppose \(m >1\), so that \({\mathcal {F}}_\varepsilon \ge 0\) and \({\mathcal {E}}^m_\varepsilon \) is bounded below. By Remark 4.3, if (CV) holds, then any minimizing sequence of \({\mathcal {E}}^m_\varepsilon \) has a subsequence that converges in the weak-\(^*\) topology. Likewise, if (CW) holds, then any minimizing sequence of \({\mathcal {E}}^m_\varepsilon \) has a subsequence that, up to translation, converges in the weak-\(^*\) topology. By lower semicontinuity of \({\mathcal {E}}^m_\varepsilon \), the limits of minimizing sequences are minimizers of \({\mathcal {E}}^m_\varepsilon \).
Now, suppose \(m =1\). By Proposition 3.8, for all \(\delta >0\) and \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\),
Consequently, by the assumption in (CV\('\)) and the fact that W is bounded below by, say, \({{\tilde{C}}} \in {\mathord {{\mathbb {R}}}}\), we can choose \(\delta = C_0/2\) and obtain
Hence any minimizing sequence \((\mu _n)_n \subset {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) has uniformly bounded second moment. Thus, \((\mu _n)_n\) has a subsequence that converges in the weak-\(^*\) topology to a limit with finite second moment. By the lower semicontinuity of \({\mathcal {E}}^m_\varepsilon \) the limit must be a minimizer of \({\mathcal {E}}^m_\varepsilon \). \(\square \)
Finally, we conclude that minimizers of the regularized energy converge to minimizers of the unregularized energy.
Theorem 4.5
(minimizers converge to minimizers) Suppose \(m>1\). If Assumption (CV) holds, then for any sequence \((\mu _\varepsilon )_\varepsilon \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) such that \(\mu _\varepsilon \) is a minimizer of \({\mathcal {E}}^m_\varepsilon \) for all \(\varepsilon >0\), we have, up to a subsequence, \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), where \(\mu \) minimizes \({\mathcal {E}}^m\). Alternatively, if Assumption (CW) holds, then we have \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), up to a subsequence and translation, where again \(\mu \) minimizes \({\mathcal {E}}^m\).
Now suppose \(m =1\). If Assumption (CV\('\)) holds and \(\varphi \) is a Gaussian, then for any sequence \((\mu _\varepsilon )_\varepsilon \subset {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) such that \(\mu _\varepsilon \) is a minimizer of \({\mathcal {E}}^1_\varepsilon \) for all \(\varepsilon >0\), we have, up to a subsequence, \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), where \(\mu \) minimizes \({\mathcal {E}}^1\).
Proof
The proof is classical. We include it for completeness.
We only prove the result under Assumptions (CV)/(CV\('\)) since the argument for (CW) is analogous. For any \(\varepsilon >0\), since \(\mu _\varepsilon \) is a minimizer of \({\mathcal {E}}^m_\varepsilon \) we have that \({\mathcal {E}}^m_\varepsilon (\mu _\varepsilon ) \le {\mathcal {E}}^m_\varepsilon (\nu )\) for all \(\nu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) if \(m>1\), and for all \(\nu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) if \(m=1\). Taking the infimum limit of the left-hand side and the supremum limit of the right-hand side, Theorem 4.1(ii) ensures that
Since \({\mathcal {E}}^m\) is proper there exists \(\nu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) if \(m>1\) and \(\nu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) if \(m=1\) so that the right-hand side is finite. Thus, up to a subsequence, we may assume that \( \{{\mathcal {E}}^m_\varepsilon (\mu _\varepsilon )\}_\varepsilon \) is uniformly bounded. When \(m >1\), \({\mathcal {F}}_\varepsilon (\mu ) \ge 0\) for all \(\varepsilon \ge 0\), and this implies that \(\{\int V \,d \mu _\varepsilon \}_\varepsilon \) is uniformly bounded, so \(\{\mu _\varepsilon \}_\varepsilon \) is tight. When \(m=1\), the inequality in (20) ensures that \(\{M_2(\mu _\varepsilon )\}_\varepsilon \) is uniformly bounded, so again \(\{\mu _\varepsilon \}_\varepsilon \) is tight. Thus, up to a subsequence, \((\mu _\varepsilon )_\varepsilon \) converges weakly-\(^*\) to a limit \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) if \(m>1\) and \(\mu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) if \(m=1\). By Theorem 4.1(i) and the inequality in (21), we obtain
for all \(\nu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) if \(m>1\) and for all \(\nu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) if \(m=1\). Therefore, \(\mu \) is a minimizer of \({\mathcal {E}}^m\). \(\square \)
Remark 4.6
(convergence of minimizers) One of the main difficulties in improving the topology in which the convergence of the minimizers takes place is that we do not control \(L^m\)-norms of the regularized minimizing sequences, due to the special form of our regularized energy. This is the main reason we only obtain weak-\(^*\) convergence in the previous result and the main obstacle to improving the results on the \(\Gamma \)-convergence of gradient flows, as we shall see in the next section.
5 \(\Gamma \)-convergence of gradient flows
We now consider gradient flows of the regularized energies \({\mathcal {E}}_\varepsilon ^m\), as in Definition 3.6, for \(m \ge 2\) and prove that, under sufficient regularity assumptions, gradient flows of the regularized energies converge to gradient flows of the unregularized energy as \(\varepsilon \rightarrow 0\). For simplicity of notation, we often write \({\mathcal {E}}_\varepsilon ^m\) and \({\mathcal {F}}_\varepsilon ^m\) for \(\varepsilon \ge 0\) when we refer jointly to the regularized and unregularized energies.
We begin by showing that the gradient flows of the regularized energies are well-posed, provided that V and W satisfy the following convexity and regularity assumptions.
Assumption 5.1
(convexity and regularity of V and W) The potentials \(V, W \in C^1({\mathord {{\mathbb {R}}}^d})\) are semiconvex, with at most quadratic growth, and W is even. Furthermore, there exist \(C_0, C_1 >0\) so
Remark 5.2
(\(\omega \)-convexity) More generally, our results naturally extend to drift and interaction energies that are merely \(\omega \)-convex; see [35]. However, given that the main interest of the present work is the approximation of diffusion, we prefer the simplicity of Assumption 5.1, as it allows us to focus our attention on the regularized internal energy.
Proposition 5.3
Let \(\varepsilon \ge 0\) and \(m\ge 2\). Suppose \({\mathcal {E}}^m_\varepsilon \) is as in Definition 3.6 and V and W satisfy Assumption 5.1. Then, for any \(\mu _0\in \overline{D({\mathcal {E}}_\varepsilon ^m)}\), there exists a unique gradient flow of \({\mathcal {E}}_\varepsilon ^m\) with initial datum \(\mu _0\).
Proof
It suffices to verify that \({\mathcal {E}}_\varepsilon ^m\) is proper, coercive, lower semicontinuous with respect to 2-Wasserstein convergence, and semiconvex along generalized geodesics; c.f. [3, Theorem 11.2.1]. (See also [3, Eq. (2.1.2b)] for the definition of coercive.) If \(\varepsilon >0\), then \({\mathcal {F}}^m_\varepsilon \) is finite on all of \({\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\), and if \(\varepsilon =0\), then \({\mathcal {F}}^m\) is proper. Thus, our assumptions on V and W ensure that \({\mathcal {E}}_\varepsilon ^m\) is proper. Clearly \({\mathcal {F}}^m_\varepsilon \) is bounded below. Hence, since the semiconvexity of V and W ensures that their negative parts have at most quadratic growth, \({\mathcal {E}}_\varepsilon ^m\) is coercive.
For \(\varepsilon >0\), Proposition 3.9 ensures that \({\mathcal {F}}_\varepsilon ^m\) is lower semicontinuous with respect to weak-\(^*\) convergence, hence also 2-Wasserstein convergence. For \(\varepsilon = 0\), the unregularized internal energy \({\mathcal {F}}^m\) is also lower semicontinuous with respect to weak-\(^*\) and 2-Wasserstein convergence [65, Lemma 3.4]. Since V and W are lower semicontinuous and their negative parts have at most quadratic growth, the associated potential and interaction energies are lower semicontinuous with respect to 2-Wasserstein convergence [3, Lemma 5.1.7, Example 9.3.4]. Therefore, \({\mathcal {E}}_\varepsilon ^m\) is lower semicontinuous for all \(\varepsilon \ge 0\).
For \(\varepsilon >0\), Proposition 3.11 ensures that \({\mathcal {F}}_\varepsilon ^m\) is semiconvex along generalized geodesics in \({\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\). For \(\varepsilon =0\), the unregularized internal energy \({\mathcal {F}}^m\) is convex [65, Theorem 2.2]. For V and W semiconvex, the corresponding drift \(\int V \,d \mu \) and interaction \((1/2)\int (W*\mu ) \,d \mu \) energies are semiconvex [3, Proposition 9.3.2], [24, Remark 2.9]. Therefore, the resulting regularized energy \({\mathcal {E}}^m_\varepsilon \) is semiconvex. \(\square \)
In the case \(\varepsilon =0\), gradient flows of the energies \({\mathcal {E}}^m\) are characterized as solutions of the partial differential equation (1); c.f. [3, Theorems 10.4.13 and 11.2.1], [24, Theorem 2.12]. Now, we show that gradient flows of the regularized energies \({\mathcal {E}}_\varepsilon ^m\) can also be characterized as solutions of a partial differential equation.
Proposition 5.4
Let \(\varepsilon >0\) and \(m\ge 2\). Suppose \({\mathcal {E}}_\varepsilon ^m\) is as in Definition 3.6 and V and W satisfy Assumption 5.1. Then, \(\mu _\varepsilon \in AC^2_\mathrm{loc}((0,+\,\infty ); {\mathcal P}_2({\mathord {{\mathbb {R}}}^d}))\) is the gradient flow of \({\mathcal {E}}^m_\varepsilon \) if and only if \(\mu _\varepsilon \) is a weak solution of the continuity equation with velocity field
Moreover, \(\int _0^T \Vert v(t)\Vert ^2_{L^2(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})} \,dt <\infty \) for all \(T >0\).
Proof
Suppose \(\mu _\varepsilon \in AC^2_\mathrm{loc}((0,+\,\infty ); {\mathcal P}_2({\mathord {{\mathbb {R}}}^d}))\) is the gradient flow of \({\mathcal {E}}^m_\varepsilon \). Then, by Definition 2.9 and Corollary 3.13, \(\mu _\varepsilon \) is a weak solution to the continuity equation with velocity field (22). Conversely, suppose \(\mu _\varepsilon \) is a weak solution to the continuity equation with velocity field (22). By Corollary 3.13, \(-v(t) \in \partial {\mathcal {E}}^m_\varepsilon (\mu _\varepsilon (t)) \cap {{\,\mathrm{Tan}\,}}_{\mu _\varepsilon (t)}{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) for almost every \(t \in (0,\infty )\). Furthermore, since \(\int _0^T \Vert v(t)\Vert ^2_{L^2(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})}\,dt <\infty \) for all \(T >0\), \(\mu _\varepsilon \in AC^2_\mathrm{loc}((0,+\,\infty ); {\mathcal P}_2({\mathord {{\mathbb {R}}}^d}))\) by [3, Theorem 8.3.1]. \(\square \)
A consequence of the previous proposition is that, for the regularized energies \({\mathcal {E}}^m_\varepsilon \), particles remain particles, i.e., a solution of the gradient flow with initial datum given by a finite sum of Dirac masses remains a sum of Dirac masses, and the evolution of the particle trajectories is given by a system of ordinary differential equations.
Corollary 5.5
Let \(\varepsilon >0\) and \(m\ge 2\), and let V and W satisfy Assumption 5.1. Fix \(N \in {\mathord {{\mathbb {N}}}}\). For \(i \in \{1, \ldots , N\}:=I\), fix \(X_i^0 \in {\mathord {{\mathbb {R}}}^d}\) and \(m_i \ge 0\) satisfying \(\sum _{i \in I} m_i = 1\). Then the ODE system
is well-posed for all \(T>0\). Furthermore, \(\mu _\varepsilon = \sum _{i \in I} \delta _{X_i(\cdot )} m_i\) belongs to \(AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) and is the gradient flow of \({\mathcal {E}}_\varepsilon ^m\) with initial conditions \(\mu _\varepsilon (0) := \sum _{i \in I} \delta _{X_i^0}m_i\).
Proof
To see that (23) is well-posed, first note that the function
is Lipschitz. Likewise, Assumption 5.1 ensures \(y_i \mapsto \nabla V(y_i)\) and \(y_i \mapsto \sum _{j\in I} \nabla W(y_i-y_j)\) are continuous and one-sided Lipschitz. Therefore, the ODE system (23) is well-posed forward in time.
Now, suppose \((X_i)_{i=1}^N\) solves (23) with initial data \((X_i^0)_{i =1}^N\) on an interval [0, T], for some fixed T. We abbreviate by \(v_i = v_i(X_1, X_2, \ldots , X_N)\) the velocity field for \(X_i\) in (23). For any test function \(\varphi \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d}\times (0,T))\), the fundamental theorem of calculus ensures that, for all \(i \in I\),
Combining this with (23), we obtain
Multiplying both sides by \(m_i\), summing over i, and taking \(\mu _\varepsilon = \sum _{i\in I} \delta _{X_i(\cdot )} m_i\) for \(t\in [0,T]\) gives
for v as in (22). Therefore, \(\mu _\varepsilon \) is a weak solution of the continuity equation with velocity field v. Furthermore, for all \(T >0\)
by the continuity of \(\nabla V\), \(\nabla W\), and \(\varphi _\varepsilon \). Therefore, by Proposition 5.4, we conclude that \(\mu _\varepsilon \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) and \(\mu _\varepsilon \) is the gradient flow of \({\mathcal {E}}^m_\varepsilon \). \(\square \)
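To make the particle system concrete, the following is a minimal numerical sketch of the ODE system (23) in one dimension, under assumptions of our own choosing rather than the authors': \(m=2\) (so that, formally, \(\delta {\mathcal F}^2_\varepsilon /\delta \mu = 2\,\varphi _\varepsilon *\mu \)), a Gaussian mollifier, quadratic confinement \(V(x)=x^2/2\), no interaction (\(W=0\)), and forward-Euler time stepping. It illustrates the structure of (23), not the implementation used for the paper's experiments.

```python
import numpy as np

def grad_mollifier(x, eps):
    # Gradient of the 1D Gaussian mollifier
    # phi_eps(x) = exp(-x^2 / (2 eps^2)) / (sqrt(2 pi) eps).
    phi = np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)
    return -x / eps**2 * phi

def blob_step(X, w, eps, dt, gradV=lambda x: x):
    # One forward-Euler step of the particle ODE for m = 2 with W = 0:
    #   dX_i/dt = -2 * sum_j w_j grad(phi_eps)(X_i - X_j) - grad(V)(X_i).
    diff = X[:, None] - X[None, :]                      # pairwise X_i - X_j
    diffusion = -2.0 * (grad_mollifier(diff, eps) * w[None, :]).sum(axis=1)
    return X + dt * (diffusion - gradV(X))

# Usage: N particles of equal mass, symmetric initial positions.
N, eps, dt = 50, 0.2, 1e-3
X = np.linspace(-1.0, 1.0, N)     # initial particle positions
w = np.full(N, 1.0 / N)           # masses m_i, summing to 1
for _ in range(1000):
    X = blob_step(X, w, eps, dt)
# With confinement, the particle cloud settles near a compactly
# supported steady profile instead of spreading indefinitely.
```

Note that the pairwise diffusion term is antisymmetric in (i, j), so the center of mass is driven only by the confinement, mirroring the gradient flow structure.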
We now turn to the \(\Gamma \)-convergence of the gradient flows of the regularized energies, using the scheme introduced by Sandier–Serfaty [76] and then generalized by Serfaty [78], which provides three sufficient conditions for concluding convergence. We will use the following variant of Serfaty’s result, which allows for slightly weaker assumptions on the gradient flows of the regularized energies, but follows from the same argument as Serfaty’s original result. (See also Remark 2.11 on the correspondence between Wasserstein gradient flows and curves of maximal slope.)
Theorem 5.6
(c.f. [78, Theorem 2]) Let \(m\ge 2\). Suppose that, for all \(\varepsilon >0\), \(\mu _\varepsilon \) belongs to \(AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) and is a gradient flow of \({\mathcal {E}}_\varepsilon ^m\) with well-prepared initial data, i.e.,
Suppose further that there exists a curve \(\mu \) in \({\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) such that, for almost every \(t \in [0,T]\), \( \mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\mu (t)\) and
-
(S1)
\(\displaystyle \liminf _{\varepsilon \rightarrow 0} \int _0^t |\mu _\varepsilon '|(s)^2 \,d s \ge \int _0^t|\mu '|(s)^2\,d s\),
-
(S2)
\(\displaystyle \liminf _{\varepsilon \rightarrow 0} {\mathcal {E}}^m_\varepsilon (\mu _\varepsilon (t)) \ge \displaystyle {\mathcal {E}}^m(\mu (t))\),
-
(S3)
\(\displaystyle \liminf _{\varepsilon \rightarrow 0} \int _0^t |\partial {\mathcal {E}}^m_\varepsilon |^2(\mu _\varepsilon (s)) \,ds \ge \int _0^t |\partial {\mathcal {E}}^m|^2(\mu (s))\,ds\).
Then \(\mu \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\), and \(\mu \) is a gradient flow of \({\mathcal {E}}^m\).
For simplicity of notation, in what follows we shall at times omit dependence on time when referring to curves in the space of probability measures.
In order to apply Serfaty’s scheme in the present setting to obtain \(\Gamma \)-convergence of the gradient flows, a key assumption is that the following quantity is bounded uniformly in \(\varepsilon >0\) along the gradient flows \(\mu _\varepsilon \) of the regularized energies \({\mathcal {E}}^m_\varepsilon \):
where we use the abbreviation \(p_\varepsilon :=(\varphi _\varepsilon *\mu _\varepsilon )^{m-2}\mu _\varepsilon \). This quantity differs from \(\left\| \nabla \delta {\mathcal {F}}_\varepsilon ^m/\delta \mu _\varepsilon \right\| _{L^1(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})}\) merely by the placement of the absolute value sign:
Serfaty’s scheme allows one to assume, without loss of generality, that \(|{\mathcal {F}}^m_\varepsilon |(\mu _\varepsilon )\) is bounded uniformly in \(\varepsilon >0\) for almost every \(t\in [0,T]\), and Hölder’s inequality ensures that \(|{\mathcal {F}}^m_\varepsilon |(\mu _\varepsilon ) = \left\| \nabla \delta {\mathcal {F}}_\varepsilon ^m/\delta \mu _\varepsilon \right\| _{L^2(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})} \ge \left\| \nabla \delta {\mathcal {F}}_\varepsilon ^m/\delta \mu _\varepsilon \right\| _{L^1(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})}\); see Proposition 3.12. Consequently, we miss the bound we require on \(\Vert \mu _\varepsilon \Vert _{BV_\varepsilon ^m}\) merely by placement of the absolute value sign in inequality (24).
Still, \(\Vert \mu _\varepsilon \Vert _{BV_\varepsilon ^m}\) has a useful heuristic interpretation. Through the proof of Theorem 5.8, we obtain
see the inequality (33) and Proposition B.2. Consequently, one may think of \(\Vert \mu _\varepsilon \Vert _{BV_\varepsilon ^m} \) as a nonlocal approximation of the \(L^1\)-norm of the gradient of \(\mu ^m\).
We begin with a technical lemma we shall use to prove the convergence of the gradient flows.
Lemma 5.7
Let \(\varepsilon >0\) and \(m \ge 2\), and let \(T >0\) and \(\mu _\varepsilon \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\). Then for any Lipschitz function \(f: [0,T]\times {\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}}\) with constant \(L_f>0\), there exists \(r>0\) so that
where \(C_\zeta >0\) is as in Assumption 2.1.
Proof
We argue similarly as in Lemma 2.2. Let \(f: [0,T]\times {\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}}\) be Lipschitz with constant \(L_f>0\). Then,
By Assumption 2.1, \(C_\zeta \) is such that \(\zeta (x) \le C_\zeta |x|^{-q}\) for some \(q >d+1\) and all \(x \in {\mathord {{\mathbb {R}}}^d}\). Choose \({\bar{r}}\) so that
Now, we break the integral with respect to \(d \mu _\varepsilon (y)\) above into integrals over the domain \(B_{\varepsilon ^{{\bar{r}}}}(x)\) and \({\mathord {{\mathbb {R}}}^d}{\setminus } B_{\varepsilon ^{{\bar{r}}}}(x)\), bounding the above quantity by
First, we consider \(I_1\). Since, in the integral, \(|x-y| < \varepsilon ^{{\bar{r}}}\), we obtain
Now, we consider \(I_2\). We apply the inequality in (52) to obtain \(\zeta _\varepsilon (x-y)|x-y| \le C_\zeta \varepsilon ^{\tilde{r}}\) with \(\tilde{r}:={{\bar{r}}}(1-q)+q-d\) in the integral—the inequality in (26) ensures \({\tilde{r}}>1\). Consequently,
where, in the last inequality, we use that \(\Vert \nabla \zeta _\varepsilon \Vert _{L^1({\mathord {{\mathbb {R}}}^d})} = \Vert \nabla \zeta \Vert _{L^1({\mathord {{\mathbb {R}}}^d})}/\varepsilon \) and, by Jensen’s inequality for the concave function \(s^{(m-2)/(m-1)}\),
Since \(0\le (m-2)/(m-1) <1\), Jensen’s inequality gives
This gives the result by taking \(r:=\min ({{\bar{r}}},{\tilde{r}} -1)\). \(\square \)
With this technical lemma in hand, we now turn to the \(\Gamma \)-convergence of the gradient flows.
Theorem 5.8
Let \(m \ge 2\), and let V and W be as in Assumption 5.1. Fix \(T>0\) and suppose that \(\mu _\varepsilon \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) is a gradient flow of \({\mathcal {E}}^m_\varepsilon \) for all \(\varepsilon >0\) satisfying
for some \(\mu (0) \in D({\mathcal {E}}^m)\). Furthermore, suppose that the following hold:
- (A1) \(\sup _{\varepsilon >0} \int _0^T \Vert \mu _\varepsilon (t) \Vert _{BV_\varepsilon ^m}dt < \infty \);
- (A2) there exists \(\mu : [0,T] \rightarrow {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) such that \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightarrow \mu (t)\) in \(L^1([0,T];L^m_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\) as \(\varepsilon \rightarrow 0\), and \(\sup _{\varepsilon >0} \int _0^T \Vert \zeta _\varepsilon * \mu _\varepsilon (t)\Vert ^m_{L^m({\mathord {{\mathbb {R}}}}^d)} \,dt < \infty \).
Then \(\mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\mu (t)\) for almost every \(t \in [0,T]\), \(\mu \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\), and \(\mu \) is the gradient flow of \({\mathcal {E}}^m\) with initial data \(\mu (0)\).
Proof
First, we note that \(\mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\mu (t)\) for almost every \(t \in [0,T]\). This follows from (A2), which ensures \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightarrow \mu (t)\) in \(L^1([0,T];L^m_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\), hence \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightarrow \mu (t)\) in distribution for almost every \(t \in [0, T]\). Then, since \(\zeta _\varepsilon *\mu _\varepsilon (t) - \mu _\varepsilon (t) \rightarrow 0\) in distribution for all \(t \in [0,T]\), we obtain \(\mu _\varepsilon (t) \rightarrow \mu (t)\) in distribution. Finally, since weak-* convergence and convergence in distribution are equivalent when \(\mu _\varepsilon \) and \(\mu \) are both probability measures [3, Remark 5.1.6], we obtain \(\mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\mu (t)\) for almost every \(t \in [0,T]\).
It remains to verify conditions (S0), (S1), (S2), and (S3) from Theorem 5.6. Item (S0) holds by assumption (A0). Item (S1) follows by the same argument as in [37, Theorem 5.6]. Item (S2) is an immediate consequence of the fact that \(\mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\mu (t)\) for almost every \(t \in [0,T]\), our main \(\Gamma \)-convergence Theorem 4.1, and the lower semicontinuity of the potential and interaction energies with respect to weak-\(^*\) convergence [3, Lemma 5.1.7].
We devote the remainder of the proof to showing Condition (S3). We shall use the following fact throughout: combining Assumption (A2) with Proposition 3.8 implies that
To prove (S3) we may assume, without loss of generality, that \(\liminf _{\varepsilon \rightarrow 0} \int _0^T|\partial {\mathcal {E}}^m_\varepsilon |(\mu _\varepsilon (t))^2\,dt\) is finite, so by Fatou’s lemma
so \(\liminf _{\varepsilon \rightarrow 0} |\partial {\mathcal {E}}^m_\varepsilon |(\mu _\varepsilon (t)) < \infty \) for almost every \(t \in [0,T]\). In particular, up to taking subsequences, we may assume that, for almost every \(t \in [0,T]\), \(\{|\partial {\mathcal {E}}^m_\varepsilon |(\mu _\varepsilon (t))\}_\varepsilon \) is bounded uniformly in \(\varepsilon >0\). By Corollary 3.13,
Furthermore, note that if
then \(|\partial {\mathcal {E}}^m|(\mu ) = \Vert \xi \Vert _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\); c.f. [3, Theorem 10.4.13]. Thus, to prove (S3) it suffices to show that
when (31) holds for almost every \(t \in [0,T]\). Furthermore, the inequality in (32) is, by Proposition B.2(ii), a consequence of
for all \(f \in C_\mathrm {c}^\infty ([0,T] \times {\mathord {{\mathbb {R}}}^d})\). Observe that Proposition B.2 is stated for probability measures—we can easily rescale \(d\mu _\varepsilon \otimes d{\mathcal {L}}^d\) to be a probability measure by dividing the above equations by \(T>0\).
First, we address the terms with the drift and interaction potentials V and W. Combining Assumption 5.1 on V and W with the uniform moment bounds assumed on \(\mu _\varepsilon \) ensures that \(|\nabla V|\) is uniformly integrable with respect to \(d \mu _\varepsilon \otimes d{\mathcal {L}}^d\) and that \((x,y) \mapsto |\nabla W(x-y)|\) is uniformly integrable with respect to \(d \mu _\varepsilon \otimes d \mu _\varepsilon \otimes d{\mathcal {L}}^d\). Therefore, by [3, Lemma 5.1.7], the weak-\(^*\) convergence of \((\mu _\varepsilon )_\varepsilon \) to \(\mu \) ensures that
Now we deal with proving the diffusion part of (31) (that is, for almost every \(t \in [0,T]\), we have \(\mu (t)^m \in W^{1,1}({\mathord {{\mathbb {R}}}^d})\) and \(\nabla \mu (t)^m = \eta (t) \mu (t)\) for \(\eta \in L^2(\mu ;{\mathord {{\mathbb {R}}}^d})\)), and with proving that
Recalling the abbreviation \(p_\varepsilon := (\varphi _\varepsilon *\mu _\varepsilon )^{m-2} \mu _\varepsilon \), we rewrite the inner integral on the left-hand side of (34) as
Applying Lemma 5.7 together with (29) and (A3), and integrating by parts, we obtain
Now we move \(\nabla f\) out of the convolution. By Lemma 2.2, there exists \(p>0\) so that
where we again use (27). Using the inequality in (28) and that \(\{\int _0^T {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon (t))\,dt\}_\varepsilon \) is uniformly bounded in \(\varepsilon \),
To conclude the proof, we aim to apply Proposition B.2 (iii), and we begin by verifying the hypotheses of this proposition. First, note that since \(\zeta _\varepsilon *\mu _\varepsilon \rightarrow \mu \) in \(L^1([0,T];L^m_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\) for \(m \ge 2\) as \(\varepsilon \rightarrow 0\), we also have \(\zeta _\varepsilon *\mu _\varepsilon \rightarrow \mu \) in \(L^1([0,T];L^2_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\). Let \(w_\varepsilon = \varphi _\varepsilon *\mu _\varepsilon \). By definition, \(\int w_\varepsilon d \mu _\varepsilon = \int (\zeta _\varepsilon *\mu _\varepsilon )^2\,d{\mathcal {L}}^d\). Thus, Assumption (A2) and the fact that \(\zeta _\varepsilon *\mu _\varepsilon ({\mathord {{\mathbb {R}}}^d})= 1\) imply
so that \(w_\varepsilon \in L^1([0,T],L^1(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d}))\). Furthermore, for any \(h \in L^\infty ([0,T];W^{1,\infty }({\mathord {{\mathbb {R}}}^d}))\), the mollifier exchange Lemma 2.2 and the convergence of \(\zeta _\varepsilon *\mu _\varepsilon \) to \(\mu \) in \(L^1([0,T];L^2_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\) give
as \(\varepsilon \rightarrow 0\). Thus, \(w_\varepsilon \in L^1([0,T];L^1(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d}))\) converges weakly to \(\mu \in L^1([0,T];L^1(d \mu ))\) in the sense of Definition B.1 as \(\varepsilon \rightarrow 0\). As before, while this definition is stated for probability measures, we can easily rescale \(d\mu _\varepsilon \otimes d{\mathcal {L}}^d\) to be a probability measure by dividing the above equations by \(T>0\).
We now seek to show that, for all \(g \in C_\mathrm {c}^\infty ([0,T]\times {\mathord {{\mathbb {R}}}^d})\),
When \(m=2\), this follows from Eq. (37). Suppose \(m>2\). Let \(\kappa : {\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}}\) be a smooth cutoff function with \(0 \le \kappa \le 1\), \(\Vert \nabla \kappa \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 1\), \(\Vert D^2 \kappa \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 4\), \(\kappa (x) = 1\) for all \(|x| < 1/2\), and \(\kappa (x) = 0\) for all \(|x| > 2\). Given \(R>0\), define \(\kappa _R := \kappa (\cdot /R)\), so that \(\Vert \nabla \kappa _R\Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 1/R\). Then, by Jensen's inequality for the convex function \(s \mapsto s^{m-1}\), Lemma 2.2, and Assumption (A2),
Combining this with (37), where we may choose \(h = \kappa _R g\) for any \(g \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\), we have that \((\kappa _R w_\varepsilon )_\varepsilon \) converges strongly in \(L^{m-1}(\mu _\varepsilon ;{\mathord {{\mathbb {R}}}^d})\) to \(\kappa _R \mu \in L^{m-1}(\mu ;{\mathord {{\mathbb {R}}}^d})\) as \(\varepsilon \rightarrow 0\), in the sense of Definition B.1. Finally, since \(\int _0^T M_{m-1}(\mu _\varepsilon (t)) \,dt\) is bounded uniformly in \(\varepsilon \) by the assumed moment bounds, we may apply Proposition B.2(iii) to conclude that for all \(g \in C_\mathrm {c}^\infty ([0,T]\times {\mathord {{\mathbb {R}}}^d})\),
Taking \(g = \nabla f\), choosing \(R>1\) so that \(\kappa _R \equiv 1\) on the support of \(\nabla f\), and combining the above equation with Eq. (35), we obtain
We now prove that \(\mu \) has the necessary regularity. In particular, we show that for almost every \(t \in [0,T]\), we have \(\mu ^m \in W^{1,1}({\mathord {{\mathbb {R}}}^d})\) and \(\nabla \mu ^m = \eta \mu \) for \(\eta \in L^2(\mu ;{\mathord {{\mathbb {R}}}^d})\). Inequality (30) ensures that, up to subsequences, \(\{\int _0^T |\partial {\mathcal {F}}^m_\varepsilon |^2(\mu _\varepsilon (t))\,dt\}_\varepsilon \) is bounded uniformly in \(\varepsilon >0\). Thus, by Hölder's inequality, there exists \(C>0\) so that
for all \(\varepsilon >0\). Combining this with (38) gives
Hence \(\nabla (\mu ^m)\) has finite measure on \([0,T]\times {\mathord {{\mathbb {R}}}^d}\), so we may rewrite (38) as
By another application of Hölder’s inequality, this guarantees
The Riesz representation theorem then ensures that there exists \(\eta \in L^2([0,T];L^2(\mu ;{\mathord {{\mathbb {R}}}^d}))\) so that \(\eta \mu = \nabla (\mu ^m)\). In particular, this implies \(\nabla ( \mu (t)^m) \in L^1({\mathord {{\mathbb {R}}}^d})\) for almost every \(t \in [0,T]\), so \(\mu ^m \in W^{1,1}({\mathord {{\mathbb {R}}}^d})\) for almost every \(t \in [0,T]\) and we may rewrite (39) as
which completes the proof. \(\square \)
We conclude this section by showing that, when \(m=2\) and \(V, W \in C^2({\mathord {{\mathbb {R}}}^d})\) have bounded Hessians, Assumptions (A1)–(A2) hold automatically whenever the initial data of the gradient flows have finite second moments and internal energies. Consequently, in this special case, we can conclude the convergence of the gradient flows without these additional assumptions.
Corollary 5.9
Let \(\varepsilon >0\) and \(m=2\). In addition to satisfying Assumption 5.1, assume that \(V,W \in C^2({\mathord {{\mathbb {R}}}^d})\) have bounded Hessians \(D^2 V\) and \(D^2 W\). Fix \(T>0\), and suppose \(\mu _\varepsilon \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) is a gradient flow of \({\mathcal {E}}_\varepsilon ^m\) satisfying
Then, there exists \(\mu \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) such that
and \(\mu \) is the gradient flow of \({\mathcal {E}}^m\) with initial data \(\mu (0)\).
Remark 5.10
(Previous work, \(m=2\)) The above corollary generalizes a result by Lions and Mas-Gallic [60] on a numerical scheme for the porous medium equation \(\partial _t \mu = \Delta \mu ^2\) on a bounded domain with periodic boundary conditions to equations of the form (1) on Euclidean space.
Proof of Corollary 5.9
First, we show that \(\sup _{\varepsilon >0} \Vert \zeta _\varepsilon *(\mu _\varepsilon (0))\Vert _{L^2({\mathord {{\mathbb {R}}}^d})} < \infty \). The fact that \(D^2V\) and \(D^2 W\) are bounded ensures that |V| and |W| grow at most quadratically. Combining this with Eqs. (40)–(41), which ensure that \(\{{\mathcal {E}}^m_\varepsilon (\mu _\varepsilon (0))\}_\varepsilon \) and \(\{M_2(\mu _\varepsilon (0))\}_\varepsilon \) are bounded uniformly in \(\varepsilon >0\), we obtain
Furthermore, since the energy \({\mathcal {F}}^2_\varepsilon \) decreases along solutions to the gradient flow, we have
Next, we show that our assumption that the initial data has bounded entropy (41) ensures \(\int _0^t \Vert \nabla \zeta _\varepsilon *(\mu _\varepsilon (s))\Vert _{L^2({\mathord {{\mathbb {R}}}^d})}^2\,ds < C(1+T)+M_2(\mu _\varepsilon (t))\) for all \(t \in [0,T]\), for some \(C>0\) depending on d, V, W and \(\sup _{\varepsilon >0} \int \log \mu _\varepsilon (0)\,d\mu _\varepsilon (0)\). Formally differentiating the entropy \({\mathcal {F}}^1(\mu ) = \int \log (\mu )\,d\mu \) along the gradient flows \(\mu _\varepsilon \), we expect that, for all \(t \in [0,T]\),
Hence, for any \(t \in [0,T]\),
This computation can be made rigorous by first proving the analogous inequality along discrete time gradient flows using the flow interchange method of Matthes et al. [64, Theorem 3.2] and then sending the timestep to zero to recover the above inequality in continuous time. Thus, there exists \(K_0>0\) depending on V, W and \(\sup _{\varepsilon >0} {\mathcal {F}}^1(\mu _\varepsilon (0)) \) so that, for all \(t\in [0,T]\),
Finally, by a Carleman-type estimate [30, Lemma 4.1], we have \({\mathcal {F}}^1(\nu ) \ge -(2 \pi )^{d/2} - M_2(\nu )\) for any \(\nu \in {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\). Therefore,
Now, we use this estimate to show that \(\{M_2(\mu _\varepsilon (t))\}_\varepsilon \) is uniformly bounded in \(\varepsilon \) for all \(t \in [0,T]\). Let \(\kappa \) be a smooth cutoff function with \(0 \le \kappa \le 1\), \(\Vert \nabla \kappa \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 1\), \(\Vert D^2 \kappa \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 4\), \(\kappa (x) = 1\) for all \(|x| < 1/2\) and \(\kappa (x) = 0\) for all \(|x| > 2\). Given \(R>0\), define \(\kappa _R(x) = \kappa (x/R)\), so that \(\Vert \nabla \kappa _R\Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 1/R\) and \(\Vert D^2 \kappa _R\Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} \le 4/R^2\). Then there exists \(C_\kappa >0\) so that for all \(R>1\), \(|\nabla (\kappa _R(x) |x|^2)| \le C_\kappa |x|\) and \(|D^2 (\kappa _R(x) |x|^2)| \le C_\kappa \) for all \(x\in {\mathord {{\mathbb {R}}}^d}\). By Proposition 5.4, \(\mu _\varepsilon \) is a weak solution of the continuity equation. Therefore, choosing \(\kappa _R(x) |x|^2\) as our test function, we obtain, for all \(t \in [0,T]\),
Since \(D^2 V\) and \(D^2 W\) are bounded, \(|\nabla V|\) and \(|\nabla W|\) grow at most linearly. Consequently, there exists \(C'>0\), depending on V, W, and \(C_\kappa \) so that
Likewise, by Lemma 5.7, there exists \(r>0\) so that, for all \(t\in [0,T]\),
for \(C''\) depending on \(C_\kappa \), \(\sup _{\varepsilon >0} {\mathcal {F}}_\varepsilon ^2(\mu _\varepsilon (0))\), and \(\Vert \nabla \zeta \Vert _{L^1({\mathord {{\mathbb {R}}}^d})}\). In the second inequality, we use that
Therefore, enlarging \(C''\) if necessary, we have, for all \(t \in [0,T]\),
As the right-hand side is independent of \(R>1\), sending \(R \rightarrow +\infty \) and applying the dominated convergence theorem, we obtain that for \(\varepsilon ^r<1/(2C'')\),
Therefore, by Gronwall’s inequality, there exists \(\tilde{C}\) depending on \(C'\), \(C''\) and T (and independent of \(\varepsilon \)) so that
We may combine this with the inequality in (43) to obtain, for all \(t\in [0,T]\),
We now use these estimates to verify that the assumptions of Theorem 5.8 hold, so that we may apply that theorem to conclude convergence of the gradient flows. The uniform moment bounds are a consequence of the inequality in (45), and Assumption (A1) is a consequence of the inequalities in (42), (44) and (46).
It remains to show Assumption (A2). First, note that since \(\sup _{\varepsilon >0} \Vert \zeta _\varepsilon *\mu _\varepsilon \Vert _{L^\infty ([0,T]\times {\mathord {{\mathbb {R}}}^d})} < \infty \), every subsequence of \((\zeta _\varepsilon *\mu _\varepsilon )_\varepsilon \) has a further subsequence, which we also denote by \((\zeta _\varepsilon *\mu _\varepsilon )_\varepsilon \), that converges weakly in \(L^2([0,T]\times {\mathord {{\mathbb {R}}}^d})\) to some \(\nu \) as \(\varepsilon \rightarrow 0\), and for which \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightharpoonup \nu (t)\) weakly in \(L^2({\mathord {{\mathbb {R}}}^d})\) for all \(t \in [0,T]\). By uniqueness of limits and (40), we have \(\nu (0) = \mu (0)\) almost everywhere.
Next, note that (42) and (46) ensure that \(\sup _{\varepsilon >0}\Vert \zeta _\varepsilon *\mu _\varepsilon \Vert _{L^2([0,T];H^1({\mathord {{\mathbb {R}}}^d}))}<\infty \). In particular, we have \(\sup _{\varepsilon >0}\Vert \kappa _R \zeta _\varepsilon *\mu _\varepsilon \Vert _{L^2([0,T];H^1({\mathord {{\mathbb {R}}}^d}))}<\infty \) for the smooth cutoff function \(\kappa _R\), \(R>1\). Therefore, by the Rellich–Kondrachov Theorem (c.f. [44, Sect. 5.7]), for almost every \(t \in [0,T]\), up to another subsequence, \((\kappa _R \zeta _\varepsilon *\mu _\varepsilon (t))_\varepsilon \) converges strongly in \(L^2({\mathord {{\mathbb {R}}}^d})\) to some \(\nu _R(t)\). In particular, for any \(f \in C_\mathrm {c}^\infty (B_{R/2}(0))\),
so \(\nu = \nu _R\) almost everywhere in \(B_{R/2}(0)\). Since \(R>1\) is arbitrary, this shows that for almost every \(t \in [0,T]\), \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightarrow \nu (t)\) strongly in \(L^2_\mathrm{loc}({\mathord {{\mathbb {R}}}^d})\). Finally, using again that \(\Vert \zeta _\varepsilon *\mu _\varepsilon (t)\Vert _{L^2({\mathord {{\mathbb {R}}}^d})}\) is bounded uniformly in \(t \in [0,T]\), the dominated convergence theorem ensures that \(\zeta _\varepsilon *\mu _\varepsilon (t) \rightarrow \nu (t)\) in \(L^1([0,T];L^2_\mathrm{loc}({\mathord {{\mathbb {R}}}^d}))\) as \(\varepsilon \rightarrow 0\). This completes the verification of Assumption (A2).
Having verified the conditions of Theorem 5.8, we conclude that \(\mu _\varepsilon (t) {\mathop {\rightharpoonup }\limits ^{*}}\nu (t)\) for almost every \(t \in [0,T]\), for some \(\nu \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) which is the gradient flow of \({\mathcal {E}}^2\) with initial data \(\mu (0)\). By Proposition 5.3, the gradient flow of \({\mathcal {E}}^2\) with initial data \(\mu (0)\) is unique. Thus, since any subsequence of \((\mu _\varepsilon )_\varepsilon \) has a further subsequence which converges to \(\nu \), the full sequence must converge to \(\nu \), which gives the result. \(\square \)
6 Numerical results
6.1 Numerical method and convergence
We now apply the theory of regularized gradient flows developed in the previous sections to construct a blob method for diffusion, allowing us to numerically simulate solutions of partial differential equations of Wasserstein gradient flow type (1). We begin by describing the details of our numerical scheme and applying Theorem 5.8 to prove its convergence, under suitable regularity assumptions.
Theorem 6.1
Assume \(m\ge 2\) and V and W satisfy Assumption 5.1. Suppose \(\mu (0) \in D({\mathcal {E}}^m)\) is compactly supported in \(B_R(0)\), the ball of radius R centered at the origin. For fixed grid spacing \(h>0\), define the grid indices \(Q_R^h := \{ i \in {\mathbb {Z}}^d : |ih| \le R \}\) and approximate \(\mu (0)\) by the following sequence of measures:
where \(Q_i\) is the cube centered at ih of side length h. Next, for \(\varepsilon >0\), define the evolution of these measures by
where \(\{X_i(t)\}_{i \in Q_R^h}\) are solutions to the ODE system (23) on a time interval [0, T] with initial data \(X_i(0) = ih\). If \(h = o(\varepsilon )\) as \(\varepsilon \rightarrow 0\) and Assumptions (A1)–(A2) from Theorem 5.8 hold, then \((\mu _\varepsilon (t))_\varepsilon \) converges in the weak-\(^*\) topology to \(\mu (t)\) as \(\varepsilon \rightarrow 0\) for almost every \(t \in [0,T]\), where \(\mu (t)\) is the unique solution of (1) with initial datum \(\mu (0)\).
Proof
By Corollary 5.5, \(\mu _\varepsilon \in AC^2([0,T];{\mathcal P}_2({\mathord {{\mathbb {R}}}}^d))\) is the gradient flow of \({\mathcal {E}}^m_\varepsilon \) with initial condition \(\mu _\varepsilon (0)\) for all \(\varepsilon >0\). To apply Theorem 5.8 and obtain the result, it remains to show that Assumption (A0) holds. In particular, we must show that, assuming \(h = o(\varepsilon )\),
Define \(T: {\mathord {{\mathbb {R}}}^d}\rightarrow {\mathord {{\mathbb {R}}}^d}\) by \(T(y) = ih\) for \(y \in Q_i\) and \(i \in Q_R^h\). Then T is a transport map from \(\mu (0)\) to \(\mu _\varepsilon (0)\) and \(|T(y)-y| \le h\) for all \(y \in {\mathord {{\mathbb {R}}}^d}\). By construction,
so \(\mu _\varepsilon (0) {\mathop {\rightharpoonup }\limits ^{*}}\mu (0)\) as \(\varepsilon \rightarrow 0\) (and so, as \(h\rightarrow 0\)). Likewise, for all \(\varepsilon ,h>0\), \({\mathop {\mathrm{supp }}}\mu _\varepsilon (0) \subseteq B_R(0)\). Consequently, since V and W are continuous,
Thus, it remains to show that
By Theorem 4.1, we have that \(\liminf _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon (0)) \ge {\mathcal {F}}^m(\mu (0)).\) By Proposition 3.8, for all \(\varepsilon >0\) we have
Consequently, to show that \(\limsup _{\varepsilon \rightarrow 0} {\mathcal {F}}^m_\varepsilon (\mu _\varepsilon (0)) \le {\mathcal {F}}^m(\mu (0)) = \Vert \mu (0)\Vert _{L^m({\mathord {{\mathbb {R}}}}^d)}^m/(m-1)\), it suffices to show that \(\zeta _\varepsilon *\mu _\varepsilon (0) \rightarrow \mu (0)\) in \(L^m\) as \(\varepsilon \rightarrow 0\).
For simplicity of notation, we suppress the dependence on time and show \(\zeta _\varepsilon *\mu _\varepsilon \rightarrow \mu \) in \(L^m\) as \(\varepsilon \rightarrow 0\). Since \(\mu \in D({\mathcal {E}}^m)\) is compactly supported and V and W are continuous, we have \(\mu \in L^m({\mathord {{\mathbb {R}}}^d})\). Consequently, \(\zeta _\varepsilon *\mu \rightarrow \mu \) in \(L^m\) as \(\varepsilon \rightarrow 0\), and it is enough to show that \(\zeta _\varepsilon *\mu _\varepsilon - \zeta _\varepsilon *\mu \rightarrow 0\) in \(L^m\). Using that T is a transport map from \(\mu \) to \(\mu _\varepsilon \),
Combining the decay of \(\nabla \zeta \) from Assumption 2.1 with the fact that \(\nabla \zeta \) is continuous, there exists \(C>0\) so that \(|\nabla \zeta (x)| \le C (1_{B}(x) + |x|^{-q'}1_{{\mathord {{\mathbb {R}}}^d}{\setminus } B}(x))\), where \(B = B_1(0)\) is the unit ball centered at the origin. Note that if \(|x-y| \ge 2h\), then for all \(\alpha \in [0,1]\), \(|x- (1-\alpha ) T(y) - \alpha y| \ge |x-y| -h \ge |x-y|/2\) and \(|x- (1-\alpha ) T(y) - \alpha y| \le 3|x-y|/2\). Thus, by the assumptions on our mollifier, we have
Thus, taking the \(L^m\)-norm with respect to x, doing a change of variables, and applying Minkowski’s inequality, we obtain
where \(c>0\) depends on \(C, \Vert \nabla \zeta \Vert _\infty \), and the space dimension. Therefore, provided that \(h = o(\varepsilon )\) as \(\varepsilon \rightarrow 0\), we obtain that \(\zeta _\varepsilon *\mu _\varepsilon - \zeta _\varepsilon *\mu \rightarrow 0\) in \(L^m\). \(\square \)
Remark 6.2
(compact support of initial data) In Theorem 6.1, we assume that the initial datum of the exact solution \(\mu (0) \in D({\mathcal {E}}^m)\) is compactly supported. More generally, under the same assumptions on V, W, and m, given any \(\nu _0 \in D({\mathcal {E}}^m) \cap {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) without compact support, there exists \({\tilde{\nu }}_0 \in D({\mathcal {E}}^m)\) with compact support such that \(\nu _0\) and \({\tilde{\nu }}_0\) are arbitrarily close in the Wasserstein distance. Furthermore, by the contraction inequality for gradient flows of \({\mathcal {E}}^m\), the solution \(\nu \) with initial data \(\nu _0\) and the solution \({\tilde{\nu }}\) with initial data \({\tilde{\nu }}_0\) satisfy
where \(C>0\) depends on T and the semiconvexity of V and W [3, Theorem 11.2.1]. In this way, any solution of (1) with initial datum in \( D({\mathcal {E}}^m) \cap {\mathcal P}_2({\mathord {{\mathbb {R}}}^d})\) can be approximated by a solution with compactly supported initial datum, so that our assumption of compact support in Theorem 6.1 is not restrictive.
Remark 6.3
In Theorem 6.1, we proved that, if Assumptions (A1)–(A2) from Theorem 5.8 hold along the particle solutions \(\{\mu _\varepsilon \}_\varepsilon \), then any limit of these particle solutions must be the corresponding gradient flow of the unregularized energy. Verifying these conditions analytically can be challenging; see Corollary 5.9. However, numerical results can provide confidence that these conditions hold along a given particle approximation.
A sufficient condition for the moment bound required in Theorem 5.8 is that the \((m-1)\)th moment of the particle solution
is bounded uniformly in \(t, \varepsilon \), and h. In particular, this is satisfied if the particles remain compactly supported in a ball.
A sufficient condition for Assumption (A1) is that
with \(p_\varepsilon =(\varphi _\varepsilon *\mu _\varepsilon )^{m-2}\mu _\varepsilon \), remains bounded uniformly in t, \(\varepsilon \), and h. In fact, for purely diffusive problems, we observe that this quantity is not only bounded uniformly in \(\varepsilon \) and h, but decreases in time along our numerical solutions; see Fig. 3 in the preprint version of the manuscript, arXiv:1709.09195. For the nonlinear Fokker–Planck equation, we observe that this quantity is bounded uniformly in \(\varepsilon \) and h and converges to the corresponding norm of the steady state as \(t \rightarrow \infty \); see Fig. 6 in the preprint version.
A sufficient condition for Assumption (A2) is that the blob solution converges to a limit in \(L^1\) and \(L^\infty \), uniformly on bounded time intervals. Again, we observe this numerically, in both one and two dimensions, both for purely diffusive equations and for the nonlinear Fokker–Planck equation; see Fig. 4 below. In this way, Assumptions (A1)–(A2) may be verified numerically in order to give confidence that the limit of any blob method solution is, in fact, the correct exact solution.
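As an illustration of how such diagnostics can be monitored in practice, the following one-dimensional sketch computes a grid-based proxy for \(\Vert \mu _\varepsilon \Vert _{BV_\varepsilon ^m}\), motivated by the heuristic from Sect. 5 that this quantity approximates the \(L^1\)-norm of \(\nabla \mu ^m\). The Gaussian mollifier, the function name, and the proxy itself are our assumptions, not the paper's exact definition of the \(BV_\varepsilon ^m\) norm.

```python
import numpy as np

def bv_proxy(X, masses, eps, m=2):
    # Grid-based proxy for ||mu_eps||_{BV_eps^m}, using the heuristic
    # ||mu_eps||_{BV} ~ ||grad(mu^m)||_{L^1}: mollify the particle measure,
    # raise it to the m-th power, and integrate the absolute gradient.
    xs = np.linspace(X.min() - 5 * eps, X.max() + 5 * eps, 2000)
    u = (masses[None, :]
         * np.exp(-(xs[:, None] - X[None, :]) ** 2 / (2 * eps**2))
         ).sum(axis=1) / np.sqrt(2 * np.pi * eps**2)   # zeta_eps * mu_eps
    du_m = np.gradient(u**m, xs)
    return np.sum(np.abs(du_m)) * (xs[1] - xs[0])

# usage: particle approximation of a standard Gaussian
h = 0.02
X = np.arange(-4.0, 4.0 + h / 2, h)
masses = np.exp(-X**2 / 2) / np.sqrt(2 * np.pi) * h
masses /= masses.sum()
eps = 0.05
val = bv_proxy(X, masses, eps, m=2)
```

For this initial datum and \(m=2\), the mollified density is close to the Gaussian of variance \(1+\varepsilon ^2\), whose squared density has total variation \(1/\big (\pi (1+\varepsilon ^2)\big )\), so the proxy should land near that value.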
6.2 Numerical implementation
We now describe the details of our numerical implementation. In all of the numerical examples which follow, our mollifiers \(\zeta _\varepsilon \) and \(\varphi _\varepsilon \) are given by Gaussians,
In addition to Gaussian mollifiers, we also performed numerical experiments with a range of compactly supported and oscillatory mollifiers and observed similar results. In practice, Gaussian mollifiers provided the best balance between speed of computation and speed of convergence.
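For concreteness, here is a minimal one-dimensional sketch of such a pair of mollifiers, assuming \(\zeta _\varepsilon \) is the Gaussian of variance \(\varepsilon ^2\) (the exact normalization in (49) is not reproduced here). Since the convolution of two Gaussians of variance \(\varepsilon ^2\) is the Gaussian of variance \(2\varepsilon ^2\), \(\varphi _\varepsilon = \zeta _\varepsilon *\zeta _\varepsilon \) has a closed form:

```python
import numpy as np

def zeta(x, eps):
    # 1-d Gaussian mollifier of variance eps^2 (assumed form of (49))
    return np.exp(-x**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

def phi(x, eps):
    # phi_eps = zeta_eps * zeta_eps is the Gaussian of variance 2 * eps^2
    return zeta(x, np.sqrt(2.0) * eps)

# quadrature check on a uniform grid
eps = 0.5
xs = np.linspace(-6.0, 6.0, 6001)
dx = xs[1] - xs[0]
```

A quick sanity check: \(\zeta _\varepsilon \) integrates to one, and \(\varphi _\varepsilon (0) = \int \zeta _\varepsilon ^2\), consistent with the convolution identity.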
We construct our numerical particle solutions \(\mu _\varepsilon (t)\) as described in Theorem 6.1. As a mild simplification, we take the mass of each particle to be \(m_i = \mu (0,ih)h^d\), where \(\mu (0,ih)\) is the value of the initial datum \(\mu (0)\) at the grid point ih. For the numerical examples we consider, in which \(\mu (0)\) is a continuous function, the resulting rate of convergence is indistinguishable from that obtained by defining \(m_i\) as in (47).
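The difference between the two choices of particle masses can be checked directly. The following sketch compares the cell-average masses of (47) with the midpoint values \(\mu (0,ih)h^d\) for a one-dimensional Gaussian initial datum; the grid parameters are our own illustrative choices.

```python
import numpy as np
from scipy.stats import norm

h = 0.05
grid = np.arange(-2.5, 2.5 + h / 2, h)   # grid points ih

# cell-average masses, as in (47): m_i = integral of mu(0) over the cell Q_i
m_cell = norm.cdf(grid + h / 2) - norm.cdf(grid - h / 2)

# midpoint-value masses: the simplification m_i = mu(0, ih) * h
m_mid = norm.pdf(grid) * h
```

For a smooth initial datum the two agree to third order in h per cell (the midpoint quadrature rule), which is consistent with the observation that the convergence rates are indistinguishable.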
The system of ordinary differential equations that prescribes the evolution of the particle locations [c.f. (23) and (48)] can be solved numerically in a variety of ways, and we observe nearly identical results independent of our choice of ODE solver. In analogy with previous work on blob methods in the fluids case [7], we find that the numerical error due to the choice of time discretization is of lower order than the error due to the regularization and spatial discretization. We implement the blob method in Python, using the NumPy, SciPy, and Matplotlib libraries [51, 54, 81]. In particular, we compute the evolution of the particle trajectories via the SciPy implementation of the Fortran VODE solver [15], which uses either a backward differentiation formula (BDF) method or an implicit Adams method, depending on the stiffness of the problem.
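To make the pipeline concrete, here is a minimal one-dimensional sketch for the simplest case \(m=2\) with \(V = W = 0\), so that the target PDE is \(\partial _t \mu = \Delta \mu ^2\). It uses a Gaussian mollifier and SciPy's `solve_ivp` with the BDF method in place of the VODE wrapper; the velocity normalization and all parameter choices are our assumptions, not the exact system (23).

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_phi(x, eps):
    # gradient of the Gaussian phi_eps = zeta_eps * zeta_eps (variance 2 eps^2)
    var = 2.0 * eps**2
    return -x / var * np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def rhs(t, X, masses, eps):
    # Particle velocities for d/dt mu = Delta(mu^2) with V = W = 0:
    # dX_i/dt = -2 * sum_j m_j grad_phi_eps(X_i - X_j)  (assumed normalization)
    D = X[:, None] - X[None, :]          # pairwise differences X_i - X_j
    return -2.0 * (grad_phi(D, eps) * masses[None, :]).sum(axis=1)

# discretize a standard Gaussian initial datum on a grid of spacing h
h = 0.1
grid = np.arange(-2.5, 2.5 + h / 2, h)
masses = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi) * h
masses /= masses.sum()                   # normalize to a probability measure
eps = h**0.99

sol = solve_ivp(rhs, (0.0, 0.5), grid, args=(masses, eps),
                method="BDF", rtol=1e-8, atol=1e-10)
X_final = sol.y[:, -1]
```

Because \(\nabla \varphi _\varepsilon \) is odd, the center of mass is conserved along this flow, while the diffusive velocity pushes neighboring particles apart, so the second moment increases in time.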
Our convergence result, Theorem 6.1, requires that \(h = o(\varepsilon )\) as \(\varepsilon \rightarrow 0\). Numerically, we observe the fastest rate of convergence with \(\varepsilon = h^{1- p}\), for \(0 < p \ll 1\), as \(h \rightarrow 0\). Since computational speed decreases as p approaches 0, we take \(\varepsilon = h^{0.99}\) in the following simulations. In these examples, we discretize the initial data on an interval (\(d=1\)) or square (\(d=2\)) of side length 5.0, centered at 0.
Finally, to visualize our particle solution (48) and compare it to the exact solutions in \(L^p\)-norms, we construct a blob solution obtained by convolving the particle solution with a mollifier,
By Lemma 2.3, if \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \) as \(\varepsilon \rightarrow 0\), where \(\mu \) is the exact solution, then we also have \({\tilde{\mu }}_\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \). Consequently, our convergence result, Theorem 6.1, also applies to this blob solution.
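A minimal sketch of this reconstruction with a one-dimensional Gaussian mollifier (the bandwidth below is illustrative):

```python
import numpy as np

def gaussian_mollifier(x, eps):
    """One-dimensional Gaussian mollifier varphi_eps."""
    return np.exp(-x**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

def blob_solution(x_eval, locations, masses, eps):
    """Evaluate the blob solution sum_i m_i varphi_eps(x - x_i)."""
    x_eval = np.asarray(x_eval)
    return sum(m * gaussian_mollifier(x_eval - xi, eps)
               for xi, m in zip(locations, masses))

# A single unit-mass particle at the origin produces one Gaussian bump.
grid = np.linspace(-3.0, 3.0, 601)
vals = blob_solution(grid, [0.0], [1.0], eps=0.3)
```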
We measure the accuracy of our numerical method with respect to the \(L^1\)-, \(L^\infty \)-, and Wasserstein metrics. To compute the \(L^1\)- and \(L^\infty \)-errors, we take the difference between the exact solution and the blob solution (50) and evaluate discrete \(L^1\)- and \(L^\infty \)-norms using the following formulas:
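Those displayed formulas are not reproduced here; a standard discrete choice on a uniform evaluation grid, which we state only as a plausible reading rather than verbatim from the text, is \(\Vert u\Vert _{L^1} \approx h^d \sum _i |u(x_i)|\) and \(\Vert u\Vert _{L^\infty } \approx \max _i |u(x_i)|\), as sketched below.

```python
import numpy as np

def discrete_errors(u_num, u_exact, h, d=1):
    """Discrete L^1 and L^infinity errors on a uniform grid with spacing h.
    (A standard discretization, stated here as an assumption; the paper's
    exact formulas are in its displayed equations.)"""
    diff = np.abs(np.asarray(u_num) - np.asarray(u_exact))
    return h**d * diff.sum(), diff.max()

l1_err, linf_err = discrete_errors([1.0, 2.0, 3.0], [1.5, 2.0, 2.0], h=0.1)
```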
We compute the Wasserstein distance between our particle solution \(\mu _\varepsilon \) in (48) and the exact solution \(\mu \) in one dimension using the formula
where \(F_{\mu _\varepsilon }^{-1}\) and \(F_\mu ^{-1}\) are the generalized inverses of the cumulative distribution functions of \(\mu _\varepsilon \) and \(\mu \), respectively; cf. [3, Theorem 6.0.2]. We evaluate the integral in (51) numerically using the SciPy implementation of the Fortran library QUADPACK [72]. In two dimensions, we compute the Wasserstein error by discretizing the exact and blob solutions as piecewise constant functions on a fine grid and then using the Python Optimal Transport library to compute the discrete Wasserstein distance between them. In particular, we use the Earth Mover’s Distance function in this library, which is based on the network simplex algorithm introduced by Bonneel et al. [14].
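In one dimension, this computation can be sketched as follows, using midpoint quadrature in \(s\) in place of adaptive quadrature (the helper names are ours):

```python
import numpy as np

def particle_quantile(s, locations, masses):
    """Generalized inverse CDF of the particle measure sum_i m_i delta_{x_i}:
    the smallest particle location whose cumulative mass reaches s."""
    order = np.argsort(locations)
    x, m = np.asarray(locations)[order], np.asarray(masses)[order]
    cdf = np.cumsum(m)
    idx = np.searchsorted(cdf, s, side="left")
    return x[np.minimum(idx, len(x) - 1)]

def wasserstein2_1d(locations, masses, exact_quantile, n_quad=10000):
    """W_2 between the particle solution and an exact solution, via the
    inverse-CDF formula W_2^2 = int_0^1 |F_{mu_eps}^{-1} - F_mu^{-1}|^2 ds."""
    s = (np.arange(n_quad) + 0.5) / n_quad      # midpoint quadrature nodes
    diff = particle_quantile(s, locations, masses) - exact_quantile(s)
    return np.sqrt(np.mean(diff**2))

# Sanity check: W_2(delta_0, delta_1) = 1.
w = wasserstein2_1d(np.array([0.0]), np.array([1.0]),
                    lambda s: np.ones_like(s))
```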
6.3 Simulations
Using the method described in the previous section, we now give several examples of numerical simulations. We consider initial data given by linear combinations of Gaussian and Barenblatt profiles, which we denote as follows:
with
and \( K = K(m,d)\) chosen so that \(\int \psi _{m}(\tau ,x) dx = 1\).
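In one dimension, for instance, these profiles take the following form; here \(\psi _1\) is the heat kernel and \(\psi _2\) the \(m=2\) Barenblatt profile, written in the standard self-similar form with the normalization constant \(K\) worked out by us for \(d=1\) (a sketch under these assumptions, not the paper's code):

```python
import numpy as np

def psi1(x, tau):
    """Gaussian profile psi_1(tau, x): the heat kernel at time tau, d = 1."""
    return np.exp(-x**2 / (4 * tau)) / np.sqrt(4 * np.pi * tau)

def psi2(x, tau):
    """Barenblatt profile psi_m(tau, x) for m = 2, d = 1, in the standard
    self-similar form tau^(-1/3) * (K - |x|^2 tau^(-2/3) / 12)_+, with K
    chosen so that the profile has unit mass."""
    K = (np.sqrt(3.0) / 8.0) ** (2.0 / 3.0)
    return tau ** (-1.0 / 3.0) * np.maximum(
        K - x**2 * tau ** (-2.0 / 3.0) / 12.0, 0.0)

grid = np.linspace(-5.0, 5.0, 100001)   # spacing 1e-4
mass1 = psi1(grid, 0.0625).sum() * 1e-4
mass2 = psi2(grid, 0.0625).sum() * 1e-4
```

Both profiles integrate to one, matching the normalization \(\int \psi _{m}(\tau ,x)\,dx = 1\) above.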
In Fig. 1, we compare exact and numerical solutions to the heat and porous medium equations (\(V=W=0\), \(m=1,2,3\)), with initial data given by a Gaussian (\(m=1\)) or Barenblatt (\(m=2, 3\)) function with scaling \(\tau = 0.0625\). The top row shows the evolution of the density on a large spatial scale, at which the exact and numerical solutions are visually indistinguishable for \(m=1\) and \(m=2\). However, for \(m=3\) the fat tails of the numerical simulation peel away from the exact solution at small times. The second row depicts the numerical simulations for \(m=3\) on a smaller spatial scale, illustrating how the tails of the numerical simulation converge to the exact solution as the spacing of the computational grid is refined.
In Fig. 2, we compute solutions of the one-dimensional heat and porous medium equations (\(V=W=0\), \(m=1,2,3\)), illustrating the role of the diffusion exponent m. The initial data is given by a linear combination of Gaussians, \(\rho _0(\cdot ) = 0.3 \psi _1(\cdot +1,0.0225)+0.7 \psi _1(\cdot -1, 0.0225)\), and the grid spacing is \(h = 0.01\). For \(m=1\), the infinite speed of propagation of support of solutions to the heat equation is reflected both at the level of the density, for which the gap between the two bumps fills quickly, and also in the particle trajectories, which quickly spread to fill in areas of low mass. In contrast, for \(m=2\) and \(m=3\), we observe finite speed of propagation of support, as well as the emergence of Barenblatt profiles as time advances.
In Fig. 3, we analyze the rate of convergence of our numerical scheme in two dimensions. We compute the error between numerical and exact solutions of the heat and porous medium equations (\(m =1,2,3\)), with respect to the 2-Wasserstein distance, \(L^1\)-norm, and \(L^\infty \)-norm, and examine the scaling of the error with the grid spacing h. (Recall that \(\varepsilon = h^{0.99}\) throughout.) Plotting the errors on a logarithmic scale, we observe that the Wasserstein error depends linearly on the grid spacing for all values of m. The \(L^1\)-error scales quadratically for \(m=1\) and 2 and superlinearly for \(m=3\). Finally, the \(L^\infty \)-error scales quadratically for \(m=1\), superlinearly for \(m=2\), and sublinearly for \(m=3\). This deterioration of the rate of \(L^\infty \)-convergence for \(m=3\) is due to the sharp transition at the boundary of the exact solution; see the second row of Fig. 1. (The rate of convergence is similar in one dimension; see Fig. 4 in the preprint version of this manuscript, arXiv:1709.09195.)
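The scaling exponents quoted above are obtained by fitting a line to the errors on a log–log scale; a sketch of that fit:

```python
import numpy as np

def convergence_rate(hs, errors):
    """Slope of the least-squares line through (log h, log error), i.e.
    the empirical order of convergence."""
    slope, _intercept = np.polyfit(np.log(hs), np.log(errors), 1)
    return slope

# Synthetic second-order data recovers a slope of 2.
hs = np.array([0.1, 0.05, 0.025, 0.0125])
rate = convergence_rate(hs, 3.0 * hs**2)
```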
In Fig. 4, we simulate solutions to the nonlinear Fokker–Planck equation (\(V(\cdot ) = \left| \cdot \right| ^2/2\), \(W=0\), \(m=2\)) and consider the rate of convergence to the steady state of the equation, \(\psi _2(0.25,x)\). In the top row, we compute the error between the numerical solution at time \(t=1.2\) and the steady state with respect to the Wasserstein, \(L^1\)-, and \(L^\infty \)-norms for various choices of grid spacing h. We consider solutions with Barenblatt initial data (\(m=2\), \(\tau = 0.15\)). We plot the error’s dependence on h with a logarithmic scale and compute the slope of the line of best fit to determine the scaling relationship between the error and h. We observe similar rates of convergence as in the case of the heat and porous medium equations; see Fig. 3. In the bottom rows, we give snapshots of the evolution of the blob method solution, as it converges to the steady state. We consider Barenblatt initial data (\(m=2\), \(\tau = 0.15\)) and double bump initial data given by a linear combination of Barenblatts, \(\rho _0(x) = 0.7\psi _2(x-(1.25,0),0.1)+0.3\psi _2(x+(1.25,0),0.1)\). The grid spacing is \(h = 0.02\).
In the remaining numerical examples, we apply our method to simulate solutions of Keller–Segel type equations, with the interaction potential W given by \(2 \chi \log \left| \cdot \right| \) for \(\chi >0\). In one dimension, the derivative of this potential is not integrable, and we remove its singularity by setting it equal to \(2 \chi /\varepsilon \) for all \(x\in {\mathord {{\mathbb {R}}}^d}\) such that \(|x| < \varepsilon \). In two dimensions, the gradient of this potential is integrable, and we regularize it by convolving it with a mollifier \(\varphi _\varepsilon \), as done in previous work by the second author on a blob method for the aggregation equation [36].
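A sketch of the one-dimensional regularization just described, interpreting the cap symmetrically so that the regularized derivative remains odd (this interpretation is ours):

```python
import numpy as np

def dW_regularized(x, chi, eps):
    """Derivative of W = 2*chi*log|x| with the singularity removed:
    equal to 2*chi/x for |x| >= eps and capped at +/- 2*chi/eps on
    (-eps, eps), keeping the function odd."""
    x = np.asarray(x, dtype=float)
    safe = np.where(np.abs(x) < eps, eps, x)    # avoid division by zero
    capped = np.sign(x) * 2.0 * chi / eps
    return np.where(np.abs(x) < eps, capped, 2.0 * chi / safe)
```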
In Fig. 5, we consider the one-dimensional variant of the Keller–Segel equation (\(V=0\), \(W(\cdot ) = 2 \chi \log \left| \cdot \right| \), \(m=1\)) studied in [18]. Its interest lies in the fact that, for initial data of unit mass, there is a critical value of \(\chi \) determining the dichotomy between blow-up and global existence. For \(\chi = 1.5\) and initial data of mass one, solutions blow up in finite time. We consider initial data given by a Gaussian \(\psi _1(\tau ,\cdot )\), \(\tau = 0.25\), discretized on the interval \([-\,4.5,4.5]\) with grid spacing \(h = 0.009\). We compare the evolution of the second moment of our blob method solutions with the second moment of the exact solution. We also compare our results with those obtained in previous work via a one-dimensional Discrete Gradient Flow (DGF) particle method [25, 30]. By refining our spatial grid with respect to the DGF particle method, we observe modest improvements. (Alternative simulations, with similar spatial and time discretizations as used in the DGF method, yielded similar results as obtained by DGF.) The blow-up of solutions is evident not only in the second moment, which converges to zero linearly in time, but also in the evolution of the particle trajectories. In particular, we observe particle trajectories merging on several occasions as time advances.
In Fig. 6, we consider a nonlinear variant of the Keller–Segel equation (\(V=0\), \(W(\cdot ) = 2 \chi \log \left| \cdot \right| \), \(m=2\)) in one dimension, with initial data and discretization as in Fig. 5. We observe the convergence to a steady state both at the level of the second moment and the particle trajectories.
In Figs. 7, 8 and 9 we consider the classical Keller–Segel equation (\(V=0\), \(W(\cdot ) = 1/(2 \pi )\log \left| \cdot \right| \), \(m=1\)) in two dimensions. In Figs. 7 and 8, the initial datum is given by a Gaussian \(\psi _1(\tau ,\cdot )\), \(\tau = 0.16\), scaled to have mass that is either supercritical (\(> 8 \pi \)), critical (\(=8 \pi \)), or subcritical (\(< 8 \pi \)) with respect to blowup behavior. In particular, for supercritical initial data, solutions blow up in finite time [13, 41]. In Fig. 7, we consider the evolution of the second moment for solutions obtained for fixed grid spacing \(h = 0.0\bar{3}\) and varying mass \(7 \pi \), \(8 \pi \), and \(9 \pi \). We observe that the second moment depends linearly on time, and we compute its slope using the line of best fit. We then analyze how the slope of this line converges to the theoretically predicted slope as the grid spacing \(h \rightarrow 0\).
In Fig. 8, we consider the evolution of the second moment for the supercritical mass solution from Fig. 7 on a longer time interval. As in the one-dimensional case (see Fig. 5), we are able to get approximately halfway to the time when the second moment becomes zero before the second moment of our numerical solution begins to peel away from the second moment of the exact solution. Indeed, one of the benefits of our blob method approach is that the numerical method naturally extends to two and more dimensions, and we observe similar numerical performance independent of the dimension. We also plot the evolution of particle trajectories, observing the tendency of trajectories in regions of larger mass to be driven largely by pairwise attraction, while trajectories in regions of lower mass feel more strongly the effects of diffusion.
Finally, in Fig. 9, we consider the evolution of the second moment for double bump initial data, with initial mass \(7 \pi \), \(8 \pi \), and \(9 \pi \). The slopes of the second moment agree well with the theoretically predicted slopes given in Fig. 7.
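The diagnostic used throughout Figs. 5–9 is the second moment of the particle solution; for the classical two-dimensional Keller–Segel equation with total mass M, the standard moment computation gives \(\frac{d}{dt}\int |x|^2 \,d\mu = 4M - M^2/(2\pi )\), which vanishes exactly at the critical mass \(8\pi \). A sketch of both quantities (function names are ours):

```python
import numpy as np

def second_moment(locations, masses):
    """Second moment sum_i m_i |x_i|^2 of a particle solution in d = 2."""
    locations = np.asarray(locations)
    return (np.asarray(masses) * (locations**2).sum(axis=1)).sum()

def predicted_slope(M):
    """Theoretical slope of the second moment for the classical 2D
    Keller-Segel equation with total mass M (standard moment computation;
    zero at the critical mass 8*pi, negative for supercritical mass)."""
    return 4.0 * M - M**2 / (2.0 * np.pi)
```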
References
Agueh, M.: Local existence of weak solutions to kinetic models of granular media. Arch. Ration. Mech. Anal. 221(2), 917–959 (2016)
Ambrosio, L., Gigli, N.: A user’s guide to optimal transport. In: Modelling and Optimisation of Flows on Networks, Volume 2062 of Lecture Notes in Mathematics, pp. 1–155. Springer, Heidelberg (2013)
Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics ETH Zürich. Birkhäuser, Basel (2008)
Ambrosio, L., Gigli, N., Savaré, G., et al.: Bakry–Émery curvature-dimension condition and Riemannian Ricci curvature bounds. Ann. Probab. 43(1), 339–404 (2015)
Ambrosio, L., Savaré, G.: Gradient flows of probability measures. In: Handbook of Differential Equations: Evolutionary Equations, vol. 3, pp. 1–136. North-Holland, Amsterdam (2007)
Ambrosio, L., Serfaty, S.: A gradient flow approach to an evolution problem arising in superconductivity. Commun. Pure Appl. Math. 61(11), 1495–1539 (2008)
Anderson, C., Greengard, C.: On vortex methods. SIAM J. Numer. Anal. 22(3), 413–440 (1985)
Benamou, J.-D., Carlier, G., Mérigot, Q., Oudet, E.: Discretization of functionals involving the Monge–Ampère operator. Numer. Math. 134(3), 611–636 (2016)
Bessemoulin-Chatard, M., Filbet, F.: A finite volume scheme for nonlinear degenerate parabolic equations. SIAM J. Sci. Comput. 34(5), B559–B583 (2012)
Blanchet, A., Calvez, V., Carrillo, J.A.: Convergence of the mass-transport steepest descent scheme for the sub-critical Patlak–Keller–Segel model. SIAM J. Numer. Anal. 46(2), 691–721 (2008)
Blanchet, A., Carlen, E.A., Carrillo, J.A.: Functional inequalities, thick tails and asymptotics for the critical mass Patlak–Keller–Segel model. J. Funct. Anal. 262(5), 2142–2230 (2012)
Blanchet, A., Dolbeault, J., Perthame, B.: Two-dimensional Keller–Segel model: optimal critical mass and qualitative properties of the solutions. Electron. J. Differ. Equ. 2006(44), 1–32 (2006)
Bonneel, N., van de Panne, M., Paris, S., Heidrich, W.: Displacement interpolation using Lagrangian mass transport. ACM Trans. Graph. 30(6), 158:1–158:12 (2011)
Brown, P.N., Hindmarsh, A.C., Byrne, G.D.: DVODE: Variable-Coefficient Ordinary Differential Equation Solver. http://www.netlib.org/ode/vode.f. Accessed Mar 2018
Burger, M., Carrillo, J.A., Wolfram, M.-T.: A mixed finite element method for nonlinear diffusion equations. Kinet. Relat. Models 3(1), 59–83 (2010)
Calvez, V., Gallouët, T.O.: Particle approximation of the one dimensional Keller–Segel equation, stability and rigidity of the blow-up. Discrete Contin. Dyn. Syst. Ser. A 36(3), 1175–1208 (2015)
Calvez, V., Perthame, B., Sharifi Tabar, M.: Modified Keller–Segel system and critical mass for the log interaction kernel. In: Stochastic Analysis and Partial Differential Equations, Volume 429 of Contemporary Mathematics, pp. 45–62. American Mathematical Society, Providence (2007)
Campos-Pinto, M., Carrillo, J.A., Charles, F., Choi, Y.-P.: Convergence of a linearly transformed particle method for aggregation equations. Preprint (2015)
Carlen, E.A., Gangbo, W.: Solution of a model Boltzmann equation via steepest descent in the 2-Wasserstein metric. Arch. Ration. Mech. Anal. 172(1), 21–64 (2004)
Carrillo, J.A., Chertock, A., Huang, Y.: A finite-volume method for nonlinear nonlocal equations with a gradient flow structure. Commun. Comput. Phys. 17(1), 233–258 (2015)
Carrillo, J.A., Choi, Y.-P., Hauray, M.: The derivation of swarming models: mean-field limit and Wasserstein distances. In: Collective Dynamics from Bacteria to Crowds: An Excursion Through Modeling, Analysis and Simulation, Volume 553 of CISM Courses and Lectures, pp. 1–46. Springer, Vienna (2014)
Carrillo, J.A., Craig, K., Wang, L., Wei, C.: On primal dual splitting methods for nonlinear equations with a gradient flow structure. Work in preparation
Carrillo, J.A., Di Francesco, M., Figalli, A., Laurent, T., Slepčev, D.: Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations. Duke Math. J. 156(2), 229–271 (2011)
Carrillo, J.A., Huang, Y., Patacchini, F.S., Wolansky, G.: Numerical study of a particle method for gradient flows. Kinet. Relat. Models 10(3), 613–641 (2017)
Carrillo, J.A., Lisini, S., Mainini, E.: Uniqueness for Keller–Segel-type chemotaxis models. Discrete Contin. Dyn. Syst. 34(4), 1319–1338 (2014)
Carrillo, J.A., McCann, R.J., Villani, C.: Kinetic equilibration rates for granular media and related equations: entropy dissipation and mass transportation estimates. Rev. Mat. Iberoam. 19(3), 971–1018 (2003)
Carrillo, J.A., McCann, R.J., Villani, C.: Contractions in the 2-Wasserstein length space and thermalization of granular media. Arch. Ration. Mech. Anal. 179(2), 217–263 (2006)
Carrillo, J.A., Moll, J.S.: Numerical simulation of diffusive and aggregation phenomena in nonlinear continuity equations by evolving diffeomorphisms. SIAM J. Sci. Comput. 31(6), 4305–4329 (2009/2010)
Carrillo, J.A., Patacchini, F.S., Sternberg, P., Wolansky, G.: Convergence of a particle method for diffusive gradient flows in one dimension. SIAM J. Math. Anal. 48(6), 3708–3741 (2016)
Carrillo, J.A., Ranetbauer, H., Wolfram, M.-T.: Numerical simulation of nonlinear continuity equations by evolving diffeomorphisms. J. Comput. Phys. 327, 186–202 (2016)
Chertock, A.: A Practical Guide to Deterministic Particle Methods. http://www4.ncsu.edu/~acherto/papers/Chertock_particles.pdf
Cottet, G.-H., Koumoutsakos, P.D.: Vortex Methods: Theory and Practice. Cambridge University Press, Cambridge (2000)
Cottet, G.-H., Raviart, P.-A.: Particle methods for the one-dimensional Vlasov–Poisson equations. SIAM J. Numer. Anal. 21(1), 52–76 (1984)
Craig, K.: Nonconvex gradient flow in the Wasserstein metric and applications to constrained nonlocal interactions. Proc. Lond. Math. Soc. 114(1), 60–102 (2017)
Craig, K., Bertozzi, A.L.: A blob method for the aggregation equation. Math. Comput. 85(300), 1681–1717 (2016)
Craig, K., Topaloglu, I.: Convergence of regularized nonlocal interaction energies. SIAM J. Math. Anal. 48(1), 34–60 (2016)
Degond, P., Mas-Gallic, S.: The weighted particle method for convection–diffusion equations. I. The case of an isotropic viscosity. Math. Comput. 53(188), 485–507 (1989)
Degond, P., Mas-Gallic, S.: The weighted particle method for convection–diffusion equations. II. The anisotropic case. Math. Comput. 53(188), 509–525 (1989)
Degond, P., Mustieles, F.-J.: A deterministic approximation of diffusion equations using particles. SIAM J. Sci. Stat. Comput. 11(2), 293–310 (1990)
Dolbeault, J., Perthame, B.: Optimal critical mass in the two-dimensional Keller–Segel model in \({\mathbb{R}}^{2}\). C. R. Math. Acad. Sci. Paris 339(9), 611–616 (2004)
Düring, B., Matthes, D., Milišic, J.P.: A gradient flow scheme for nonlinear fourth order equations. Discrete Contin. Dyn. Syst. Ser. B 14(3), 935–959 (2010)
Evans, L., Savin, O., Gangbo, W.: Diffeomorphisms and nonlinear heat flows. SIAM J. Math. Anal. 37(3), 737–751 (2005)
Evans, L.C.: Partial Differential Equations, Volume 19 of Graduate Studies in Mathematics, 2nd edn. American Mathematical Society, Providence (2010)
Feinberg, E.A., Kasyanov, P.O., Zadoianchuk, N.V.: Fatou’s lemma for weakly converging probabilities. Theory Probab. Appl. 58(4), 683–689 (2014)
Goodman, J., Hou, T.Y., Lowengrub, J.: Convergence of the point vortex method for the \(2\)-D Euler equations. Commun. Pure Appl. Math. 43(3), 415–430 (1990)
Gosse, L., Toscani, G.: Identification of asymptotic decay to self-similarity for one-dimensional filtration equations. SIAM J. Numer. Anal. 43(6), 2590–2606 (2006)
Gosse, L., Toscani, G.: Lagrangian numerical approximations to one-dimensional convolution–diffusion equations. SIAM J. Sci. Comput. 28(4), 1203–1227 (2006)
Hauray, M.: Wasserstein distances for vortices approximation of Euler-type equations. Math. Models Methods Appl. Sci. 19(8), 1357–1384 (2009)
Huang, H., Liu, J.-G.: Error estimate of a random particle blob method for the Keller–Segel equation. Math. Comput. 86(308), 2719–2744 (2017)
Hunter, J.D.: Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9(3), 90–95 (2007)
Jabin, P.-E.: A review of the mean field limits for Vlasov equations. Kinet. Relat. Models 7(4), 661–711 (2014)
Jabin, P.-E., Wang, Z.: Mean field limit for stochastic particle systems. In: Active Particles. Advances in Theory, Models, and Applications, Modeling and Simulation in Science, Engineering and Technology, vol. 1, pp. 379–402. Birkhäuser, Cham (2017)
Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: Open Source Scientific Tools for Python. http://www.scipy.org/ (2001)
Jordan, R., Kinderlehrer, D., Otto, F.: The variational formulation of the Fokker–Planck equation. SIAM J. Math. Anal. 29(1), 1–17 (1998)
Junge, O., Matthes, D., Osberger, H.: A fully discrete variational scheme for solving nonlinear Fokker–Planck equations in multiple space dimensions. SIAM J. Numer. Anal. 55(1), 419–443 (2017)
Klar, A., Tiwari, S.: A multiscale meshfree method for macroscopic approximations of interacting particle systems. Multiscale Model. Simul. 12(3), 1167–1192 (2014)
Lacombe, G., Mas-Gallic, S.: Presentation and analysis of a diffusion–velocity method. In: ESAIM Proceedings of Flows and Related Numerical Methods (Toulouse, 1998), vol. 7, pp. 225–233. Society for Industrial and Applied Mathematics, Paris (1999)
Leverentz, A.J., Topaz, C.M., Bernoff, A.J.: Asymptotic dynamics of attractive–repulsive swarms. SIAM J. Appl. Dyn. Syst. 8(3), 880–908 (2009)
Lions, P.-L., Mas-Gallic, S.: Une méthode particulaire déterministe pour des équations diffusives non linéaires. C. R. Acad. Sci. Paris Sér. I Math. 332(4), 369–376 (2001)
Liu, J.-G., Wang, L., Zhou, Z.: Positivity-preserving and asymptotic preserving method for 2D Keller-Segel equations. Math. Comput. 87(311), 1165–1189 (2018)
Liu, J.-G., Yang, R.: A random particle blob method for the Keller–Segel equation and convergence analysis. Math. Comput. 86(304), 725–745 (2017)
Mas-Gallic, S.: The diffusion velocity method: a deterministic way of moving the nodes for solving diffusion equations. Transp. Theory Stat. Phys. 31(4–6), 595–605 (2002)
Matthes, D., McCann, R.J., Savaré, G.: A family of nonlinear fourth order equations of gradient flow type. Commun. Partial Differ. Equ. 34(10–12), 1352–1397 (2009)
McCann, R.J.: A convexity principle for interacting gases. Adv. Math. 128(1), 153–179 (1997)
Oelschläger, K.: Large systems of interacting particles and the porous medium equation. J. Differ. Equ. 88(2), 294–346 (1990)
Osberger, H., Matthes, D.: Convergence of a variational Lagrangian scheme for a nonlinear drift diffusion equation. ESAIM Math. Model. Numer. Anal. 48(3), 697–726 (2014)
Osberger, H., Matthes, D.: Convergence of a fully discrete variational scheme for a thin-film equation. Radon Ser. Comput. Appl. Math. 18, 356–399 (2017)
Osberger, H., Matthes, D.: A convergent Lagrangian discretization for a nonlinear fourth order equation. Found. Comput. Math. 1–54 (2015)
Otto, F.: The geometry of dissipative evolution equations: the porous medium equation. Commun. Partial Differ. Equ. 26(1–2), 101–174 (2001)
Patacchini, F.S.: A Variational and Numerical Study of Aggregation–Diffusion Gradient Flows. Ph.D. Thesis, Imperial College London (2017)
Piessens, R., de Doncker-Kapenga, E., Überhuber, C.W., Kahaner, D.K.: QUADPACK: A Subroutine Package for Automatic Integration. Computational Mathematics, vol. 1. Springer, Berlin (1983)
Riesz, F., Sz.-Nagy, B.: Functional Analysis. Dover Books on Advanced Mathematics. Dover Publications, Inc., New York (1990)
Russo, G.: Deterministic diffusion of particles. Commun. Pure Appl. Math. 43(6), 697–733 (1990)
Russo, G.: A particle method for collisional kinetic equations. I. Basic theory and one-dimensional results. J. Comput. Phys. 87(2), 270–300 (1990)
Sandier, E., Serfaty, S.: Gamma-convergence of gradient flows with applications to Ginzburg–Landau. Commun. Pure Appl. Math. 57(12), 1627–1672 (2004)
Santambrogio, F.: Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling. Progress in Nonlinear Differential Equations and Their Applications, vol. 87. Birkhäuser, Cham (2015)
Serfaty, S.: Gamma-convergence of gradient flows on Hilbert and metric spaces and applications. Discrete Contin. Dyn. Syst. 31(4), 1427–1451 (2011)
Simione, R., Slepčev, D., Topaloglu, I.: Existence of ground states of nonlocal-interaction energies. J. Stat. Phys. 159(4), 972–986 (2015)
Sun, Z., Carrillo, J.A., Shu, C.-W.: A discontinuous Galerkin method for nonlinear parabolic equations and gradient flow problems with interaction potentials. Preprint (2017)
van der Walt, S., Colbert, C., Varoquaux, G.: The NumPy array: a structure for efficient numerical computation. Comput. Sci. Eng. 13(2), 22–30 (2011)
Villani, C.: Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. American Mathematical Society, Providence (2003)
Villani, C.: Optimal Transport: Old and New. Grundlehren der Mathematischen Wissenschaften, vol. 338. Springer, Berlin (2009)
Westdickenberg, M., Wilkening, J.: Variational particle schemes for the porous medium equation and for the system of isentropic Euler equations. M2AN Math. Model. Numer. Anal. 44(1), 133–166 (2010)
Acknowledgements
The authors thank Andrew Bernoff, Andrea Bertozzi, Eric Carlen, Yanghong Huang, Inwon Kim, Dejan Slepčev, and Fangbo Zhang for many helpful discussions.
Communicated by L. Ambrosio.
JAC was partially supported by the Royal Society via a Wolfson Research Merit Award and by EPSRC Grant Number EP/P031587/1. KC was supported by a UC President’s Postdoctoral Fellowship and NSF DMS-1401867. FSP was partially supported by a 2015 Doris Chen mobility award through Imperial College London, and also acknowledges a 2015 SIAM student travel award. The authors were supported by NSF RNMS (KI-Net) Grant #11-07444, and acknowledge the CNA at CMU for their kind support of a visit to Pittsburgh in the final stages of this work. This work used XSEDE Comet at the San Diego Supercomputer Center through allocation ddp287, which is supported by NSF ACI-1548562.
Appendices
Appendix A. Proofs of preliminary results
We now turn to the proofs of some of the elementary lemmas and propositions from Sects. 2 and 3. We begin with the proof of the mollifier exchange lemma.
Proof of Lemma 2.2
By the Lipschitz continuity of f,
Set \(p:= (q-d)/q >0\). Decomposing the domain of integration of \(|\nu |\) into \(B_{\varepsilon ^p}(x)\) and \({\mathord {{\mathbb {R}}}^d}{\setminus } B_{\varepsilon ^p}(x)\), we may bound the above quantity by
By the decay assumption on \(\zeta \) (see Assumption 2.1), for all \(x,y \in {\mathord {{\mathbb {R}}}^d}\) with \(|x-y|>\varepsilon ^p\) we have
Thus, we conclude our result by estimating the above quantity by
\(\square \)
We now give the proof that if \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), then \(\varphi _\varepsilon *\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \).
Proof of Lemma 2.3
By [3, Remark 5.1.6], it suffices to show that \(\varphi _\varepsilon *\mu _\varepsilon \) converges to \(\mu \) in distribution, that is, in the duality with smooth, compactly supported functions. For all \(f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\),
Since \(\mu _\varepsilon {\mathop {\rightharpoonup }\limits ^{*}}\mu \), the second term goes to zero. We bound the first term as follows:
which goes to zero as \(\varepsilon \rightarrow 0\). \(\square \)
Next, we prove the inequalities relating the regularized internal energies to the unregularized internal energies.
Proof of Proposition 3.8
We begin with (11). To prove the left inequality, we may assume without loss of generality that \(\mu \in D({\mathcal {F}})\). First, we show the result for the entropy (\(m=1\)). Note that
where \({\mathcal {H}}\) is the relative entropy; that is, for all \(\nu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\),
By Jensen’s inequality for the convex function \(s \mapsto s \log s\), the relative entropy is nonnegative, which gives the result. Now, we show the left inequality in (11) for \(1<m\le 2\). By the above-the-tangent property of the concave function \(F_m\) and Hölder’s inequality, we get
Since \(\mu \in D({\mathcal {F}}^m)\) implies \(\mu \in L^m({\mathord {{\mathbb {R}}}^d})\), the first term goes to zero as \(\varepsilon \rightarrow 0\) and the second term remains bounded. This gives the result.
We now turn to the right inequality in (11) in the case \(1\le m\le 2\). By the fact that \(\varphi _\varepsilon = \zeta _\varepsilon * \zeta _\varepsilon \) and Jensen’s inequality for the concave function \(F_m\), for all \(x \in {\mathord {{\mathbb {R}}}^d}\) we have
Consequently, we deduce
Now, we show (12). Since \(F_m\) is convex for \(m\ge 2\), this is simply a consequence of reversing the last two inequalities.
Finally, we consider the lower bounds (13). When \(m=1\), these follow from the right inequality in (11), a Carleman-type estimate [30, Lemma 4.1] ensuring that \({\mathcal {F}}_\varepsilon ^m(\zeta _\varepsilon *\mu ) \ge -(2\pi /\delta )^{d/2} - \delta M_2(\zeta _\varepsilon *\mu )\) for all \(\delta >0\), and the fact that
When \(m>1\), we simply use that \(F_m\ge 0\). \(\square \)
We now give the proof that, for all \(\varepsilon >0\), the regularized energies are lower semicontinuous with respect to weak-* convergence (\(m>1\)) and Wasserstein convergence (\(m=1\)), where in the latter case, we require \(\varphi \) to be a Gaussian.
Proof of Proposition 3.9
First, we note that for any sequence \((\mu _n)_n \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) such that \(\mu _n {\mathop {\rightharpoonup }\limits ^{*}}\mu \) and any sequence \(x_n \rightarrow x\), we have
since \(\varphi _\varepsilon (x - \cdot ) \in C_b({\mathord {{\mathbb {R}}}^d})\).
We now show (i). Suppose \(\mu _n {\mathop {\rightharpoonup }\limits ^{*}}\mu \). By Lemma B.3, we have
By inequality (53),
Combining the two previous inequalities, we obtain \(\liminf _{n\rightarrow \infty } {\mathcal {F}}^m_\varepsilon (\mu _n) \ge {\mathcal {F}}^m_\varepsilon (\mu )\), giving the result.
Next, we show (ii). Suppose \(\mu _n \rightarrow \mu \) in the Wasserstein metric. Since \(\varphi \) is a Gaussian, there exist \(x_0\in {\mathord {{\mathbb {R}}}}^d\) and \(C_0,C_1\in {\mathord {{\mathbb {R}}}}\) so that, for n sufficiently large,
Define \(f_n:= \log (\varphi _{\varepsilon }* \mu _n)\) and \(q(\cdot ):= C_0|\cdot -x_0|^2 + C_1\). Then, by Lemma B.3, we have
Since \(\mu _n \rightarrow \mu \) in the Wasserstein metric,
Furthermore, by (53) and the fact that \(\log (\cdot )\) is continuous on \((0, +\,\infty )\),
Thus, combining (55), (56), and (57), we obtain,
which gives the result. \(\square \)
Now we turn to the proof that the regularized energies are differentiable along generalized geodesics.
Proof of Proposition 3.10
By definition, for all \(\alpha \in [0,1]\),
Therefore, we deduce
where \(c_{s,\alpha }(y,z) = (1-s)\varphi _\varepsilon *\mu _1(y) + s\varphi _\varepsilon *\mu _\alpha ^{2\rightarrow 3}((1-\alpha )y + \alpha x)\). Using Taylor’s theorem, we compute
where \(D_\alpha (y,z,v,w)\) is a term depending on the Hessian of \(\varphi _\varepsilon \) satisfying
Hence, since \(F'\) is nondecreasing,
where \(|C_\alpha | \le 4\alpha ^2 \Vert D^2\varphi _\varepsilon \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} F'( \Vert \varphi _\varepsilon \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})}) (\int |x|^2 \,d\mu _2(x) + \int |x|^2 \,d\mu _3(x))\). Note that \(c_{s,\alpha }(y,z)\) converges pointwise to \(\varphi _\varepsilon *\mu _2(y)\) as \(\alpha \rightarrow 0\) since
Thus, to complete the result, it suffices to show that there exists \(g \in L^1(\gamma \otimes \gamma )\) so that
since the result then follows by the dominated convergence theorem. Since \(F'\) is nondecreasing we may take
which ends the proof. \(\square \)
Next, we apply the result of the previous proof to characterize the subdifferential of the regularized energies.
Proof of Proposition 3.12
Suppose v is given by Eq. (16). This part of the proof is closely inspired by that of [24, Proposition 2.2]. For all \(x,y \in {\mathord {{\mathbb {R}}}}^d\) define \(G(\alpha ) = F(\varphi _\varepsilon *\mu _\alpha ((1-\alpha )x + \alpha y))\) for all \(\alpha \in [0,1]\), where \(\mu _\alpha = ((1-\alpha )\pi ^1 + \alpha \pi ^2)_\#\gamma \), with some \(\gamma \in \Gamma _\mathrm {o}(\mu ,\mu _1)\), connects \(\mu _0=\mu \) and \(\mu _1\). Now define
where \(\lambda = -2F'(\Vert \varphi _\varepsilon \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})})\Vert D^2\varphi _\varepsilon \Vert _{L^\infty ({\mathord {{\mathbb {R}}}^d})} = \lambda _F\); see (15). We write \([a,b]_\alpha := (1-\alpha )a +\alpha b\) for any \(a,b\in {\mathord {{\mathbb {R}}}}^d\). Let us compute the first two derivatives of G for all \(\alpha \in [0,1]\):
and
Since \(F'' \ge 0\), \(F'\ge 0\) and \(\left\| D^2\varphi _\varepsilon \right\| _{L^\infty ({\mathord {{\mathbb {R}}}^d})}\) is finite, we have
Now, by Taylor’s theorem,
and therefore, using (59) leads to
which shows that f is nondecreasing, and so \(f(1) \ge \lim _{\alpha \rightarrow 0} f(\alpha )\), which implies (after integrating against \(d\gamma (x,y)\))
Then, by (58) and antisymmetry of \(\nabla \varphi _\varepsilon \), compute
Hence
which shows that \(\delta {\mathcal {F}}_\varepsilon /\delta \mu _0 \in \partial {\mathcal {F}}_\varepsilon (\mu _0)\). We now prove that \(v \in {{\,\mathrm{Tan}\,}}_\mu {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\). Consider a vector-valued function \(\xi \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}}^d)^d\), and for any \(x,y\in {\mathord {{\mathbb {R}}}}^d\) define \(H(\alpha ) = F(\int _{\mathord {{\mathbb {R}}}^{d}}\varphi _\varepsilon (x-y + \alpha (\xi (x) - \xi (y)))\,d\mu (y))\) for all \(\alpha \in [0,1]\). Then
Now compute, using the antisymmetry of \(\nabla \varphi _\varepsilon \),
where passing the limit \(\alpha \rightarrow 0\) inside the integral in the first line is justified by the fact that \(H'\) is bounded. Then, by the definition of the local slope of \({\mathcal {F}}_\varepsilon \),
Therefore, by the previous computation,
since, by definition of the 2-Wasserstein distance,
Then, by replacing \(\xi \) with \(-\xi \), by arbitrariness of \(\xi \) and by density of \(C_\mathrm {c}^\infty \) in \(L^2(\mu ;{\mathord {{\mathbb {R}}}^d})\), we get
which shows the desired result. Since the local slope \(|\partial {\mathcal {F}}_\varepsilon |(\mu )\) is the norm of the unique minimal norm element of \(\partial {\mathcal {F}}_\varepsilon (\mu )\), this also shows that the inequality above is in fact an equality.
Suppose now that \(v \in \partial {\mathcal {F}}_\varepsilon (\mu ) \cap {{\,\mathrm{Tan}\,}}_\mu {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\). Fix \(\psi \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d})\) and define \(\mu _\alpha = (\mathop {\mathrm{id}}+ \alpha \nabla \psi )_\# \mu \) and \({\hat{\mu }}_\alpha = (\mathop {\mathrm{id}}- \alpha \nabla \psi )_\# \mu \) for all \(\alpha \in [0,1]\). For \(\alpha \) sufficiently small, \(|x|^2/2+ \alpha \psi (x)\) is convex and \(\mathop {\mathrm{id}}+ \alpha \nabla \psi \) is the optimal transport map from \(\mu \) to \(\mu _\alpha \), so \(\Gamma _\mathrm {o}(\mu ,\mu _\alpha ) = \{\mathop {\mathrm{id}}\times (\mathop {\mathrm{id}}+ \alpha \nabla \psi ) \}\). Similarly, \(\Gamma _\mathrm {o}({\hat{\mu }}_\alpha ,\mu ) = \{\mathop {\mathrm{id}}\times (\mathop {\mathrm{id}}- \alpha \nabla \psi ) \}\). Since \(v \in \partial {\mathcal {F}}_\varepsilon (\mu )\), taking \(\nu = \mu _\alpha \) in Definition 2.7 of the subdifferential, for \(\alpha \) sufficiently small, gives
and
Combining this with Proposition 3.10, we obtain
Rewriting the expression from Eq. (14) gives
Thus, for \(w = v - \nabla \varphi _\varepsilon * \left( F' \circ (\varphi _\varepsilon * \mu ) \mu \right) + F'(\varphi _\varepsilon * \mu )\nabla \varphi _\varepsilon * \mu \), we have \(\int \left\langle w , \nabla \psi \right\rangle d \mu = 0\), i.e. \(\nabla \cdot (w \mu ) = 0\) in the sense of distributions. By [3, Proposition 8.4.3], since \(v \in {{\,\mathrm{Tan}\,}}_\mu {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\) we get \(\left\| v-w\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})} \ge \left\| v\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\). Since we have already shown that the vector in (16) is the element of minimal norm of \(\partial {\mathcal {F}}_\varepsilon \), we get that \(\left\| v-w\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})} \le \left\| v\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\), and so \(\left\| v-w\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})} = \left\| v\right\| _{L^2(\mu ;{\mathord {{\mathbb {R}}}^d})}\). Again using [3, Proposition 8.4.3], we obtain \(w=0\), which ends the proof. \(\square \)
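The two norm inequalities combine via a Pythagoras identity: by [3, Proposition 8.4.3], \(\mu \)-divergence-free fields such as w are \(L^2(\mu )\)-orthogonal to the tangent space \({{\,\mathrm{Tan}\,}}_\mu {\mathcal P}_2({\mathord {{\mathbb {R}}}}^d)\). A sketch of this step, under that orthogonality:

```latex
% For v in the tangent space and w orthogonal to it,
\[
\Vert v - w \Vert _{L^2(\mu;\mathbb{R}^d)}^2
= \Vert v \Vert _{L^2(\mu;\mathbb{R}^d)}^2 + \Vert w \Vert _{L^2(\mu;\mathbb{R}^d)}^2 ,
\]
% so the chain \Vert v \Vert \le \Vert v - w \Vert \le \Vert v \Vert forces
% \Vert w \Vert_{L^2(\mu;\mathbb{R}^d)} = 0, i.e. w = 0 \mu-a.e.
```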
Finally, we prove the characterization of the subdifferential of the full regularized energies \({\mathcal {E}}^m_\varepsilon \).
Proof of Corollary 3.13
Let \(\lambda _V\in {\mathord {{\mathbb {R}}}}\) and \(\lambda _W\in {\mathord {{\mathbb {R}}}}\) denote the semiconvexity constants of V and W, respectively. The proof follows the same steps as that of Proposition 3.12, the only difference being the definitions of the functions G, f and H. Given \(x,y \in {\mathord {{\mathbb {R}}}}^d\), we define, for all \(\alpha \in [0,1]\),
and
where \(\mu _0\), \(\mu _1\), \(\lambda \) and \(\xi \) are as in the Proof of Proposition 3.12. \(\square \)
Appendix B. Weak convergence of measures
In this appendix, we recall several fundamental results on the weak convergence of measures. We begin with a result due to Ambrosio, Gigli, and Savaré on convergence of maps with respect to varying probability measures. This plays a key role in our proofs of both the \(\Gamma \)-convergence of the energies and the \(\Gamma \)-convergence of the gradient flows.
Definition B.1
(weak convergence with varying measures; cf. [3, Definition 5.4.3]) Given a sequence \((\mu _n)_n \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) converging in the weak-\(^*\) topology to some \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\), we say that a sequence \((v_n)_n\) with \(v_n \in L^1(\mu _n;{\mathord {{\mathbb {R}}}^d})\) for all \(n \in {\mathord {{\mathbb {N}}}}\) converges weakly to some \(v \in L^1(\mu ;{\mathord {{\mathbb {R}}}^d})\) if
Furthermore, we say that \((v_n)_n\) converges strongly to v in \(L^p\), \(p>1\), if
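For the reader's convenience, we recall the two displayed conditions of [3, Definition 5.4.3] in a standard formulation (see [3] for the precise statement): weak convergence with varying measures means

```latex
\[
\lim_{n\to\infty} \int_{\mathbb{R}^d} \langle \varphi(x), v_n(x) \rangle \, d\mu_n(x)
= \int_{\mathbb{R}^d} \langle \varphi(x), v(x) \rangle \, d\mu(x)
\quad \text{for all } \varphi \in C_{\mathrm{c}}^\infty(\mathbb{R}^d)^d,
\]
% and strong convergence in L^p is weak convergence together with
\[
\limsup_{n\to\infty} \Vert v_n \Vert_{L^p(\mu_n;\mathbb{R}^d)}
\le \Vert v \Vert_{L^p(\mu;\mathbb{R}^d)} .
\]
```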
Proposition B.2
(properties of convergence with varying measures; cf. [3, Theorem 5.4.4]) Let \((\mu _n)_n \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\), \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and \((v_n)_n\) be such that \(v_n \in L^1(\mu _n;{\mathord {{\mathbb {R}}}^d})\) for all \(n\in {\mathord {{\mathbb {N}}}}\). Suppose \(\mu _n {\mathop {\rightharpoonup }\limits ^{*}}\mu \) and \(\sup _{n \in {\mathbb {N}}} \Vert v_n\Vert _{L^p(\mu _n;{\mathord {{\mathbb {R}}}^d})} < \infty \) for some \(p>1\). Then the following hold.
(i) There exists a subsequence of \((v_n)_n\) converging weakly to some \(w \in L^1(\mu ;{\mathord {{\mathbb {R}}}^d})\).
(ii) If \((v_n)_n\) converges weakly to some \(v \in L^1(\mu ;{\mathord {{\mathbb {R}}}^d})\), then
$$\begin{aligned} \liminf _{n \rightarrow \infty } \Vert v_n\Vert _{L^p({\mu _n};{\mathord {{\mathbb {R}}}^d})} \ge \Vert v\Vert _{L^p(\mu ;{\mathord {{\mathbb {R}}}^d})} \quad \hbox { for all}\ p \ge 1. \end{aligned}$$
(iii) If \((v_n)_n\) converges strongly in \(L^p\) to some \(v \in L^p(\mu ;{\mathord {{\mathbb {R}}}^d})\) and \(\sup _{n \in {\mathbb {N}}} M_p(\mu _n) < \infty \), then
$$\begin{aligned} \lim _{n \rightarrow \infty } \int f |v_n|^p \,d \mu _n = \int f |v|^p \,d \mu \quad \hbox { for all}\ f \in C_\mathrm {c}^\infty ({\mathord {{\mathbb {R}}}^d}). \end{aligned}$$
We close by recalling a generalization of Fatou’s lemma, for varying measures.
Lemma B.3
(Fatou’s lemma for varying measures; see, e.g., [45, Theorem 1.1], [4, Lemma 3.3]) Consider a sequence \((\mu _n)_n \subset {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) and \(\mu \in {\mathcal P}({\mathord {{\mathbb {R}}}^d})\) such that \(\mu _n {\mathop {\rightharpoonup }\limits ^{*}}\mu \). Then for any sequence \((f_n)_n\) of nonnegative functions on \({\mathord {{\mathbb {R}}}}^d\), we have
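In the form needed here, following the formulation of [45, Theorem 1.1] (where the lower limit of the integrands is taken jointly in n and in the spatial variable), the conclusion can be stated as:

```latex
\[
\int_{\mathbb{R}^d} \Big( \liminf_{n\to\infty,\; y \to x} f_n(y) \Big) \, d\mu(x)
\;\le\; \liminf_{n\to\infty} \int_{\mathbb{R}^d} f_n \, d\mu_n .
\]
```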