The asymptotic behavior of the gamma function for large values of its argument can be described, for any a ≥ 0, by the following asymptotic equivalences (see Titchmarsh [96, Section 1.87])

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Gamma(x+a) ~\sim ~ x^a\,\Gamma(x) & &\displaystyle \mbox{as }x\to\infty{\,},{} \end{array} \end{aligned} $$
(6.1)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \Gamma(x) ~\sim ~ \sqrt{2\pi}{\,}e^{-x}x^{x-\frac{1}{2}} & &\displaystyle \mbox{as }x\to\infty{\,},{} \end{array} \end{aligned} $$
(6.2)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \Gamma(x+1) ~\sim ~ \sqrt{2\pi x}{\,}e^{-x}x^x & &\displaystyle \mbox{as }x\to\infty{\,},{} \end{array} \end{aligned} $$
(6.3)

where both formulas (6.2) and (6.3) are known by the name Stirling’s formula.
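Before proceeding, the three equivalences are easy to check numerically. The following sketch (an illustration, not part of the text) evaluates the three ratios at a large argument using Python's `math.lgamma` in log form to avoid overflow; each ratio should be close to 1.

```python
import math

x, a = 1.0e6, 0.7

# (6.1): Gamma(x+a) / (x^a * Gamma(x)), computed via log-gamma
r1 = math.exp(math.lgamma(x + a) - math.lgamma(x) - a * math.log(x))

# (6.2): Gamma(x) / (sqrt(2*pi) * e^(-x) * x^(x - 1/2))
stirling_log = 0.5 * math.log(2 * math.pi) - x + (x - 0.5) * math.log(x)
r2 = math.exp(math.lgamma(x) - stirling_log)

# (6.3): Gamma(x+1) / (sqrt(2*pi*x) * e^(-x) * x^x)
r3 = math.exp(math.lgamma(x + 1) - (0.5 * math.log(2 * math.pi * x) - x + x * math.log(x)))
```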

In this chapter, we investigate the asymptotic behaviors of the multiple \(\log \Gamma \)-type functions and provide analogues of the formulas above.

More specifically, for these functions we establish analogues of Wendel’s inequality, Stirling’s formula, and Burnside’s formula for the gamma function. We also introduce the concept of the asymptotic constant, an analogue of Stirling’s constant, and an analogue of Binet’s function related to the log-gamma function, and we show how all these generalized concepts can be used in the asymptotic analysis of multiple \(\log \Gamma \)-type functions. We also establish a general asymptotic equivalence for these functions.

We revisit Gregory’s summation formula, with an integral form of the remainder, and show how it can be derived very easily in this context. Using this formula, we then introduce a generalization of Euler’s constant and provide a geometric interpretation.

6.1 Generalized Wendel’s Inequality

Recall that if a function g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then the function Σg lies in \(\mathcal {R}^{p+1}_{\mathbb {R}}\) by Proposition 5.6. At first glance, this observation may seem rather unimportant. However, its explicit statement tells us that for any a ≥ 0 we have

$$\displaystyle \begin{aligned} \rho_x^{p+1}[\Sigma g](a)\to 0\qquad \mbox{as }x\to\infty, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x+a)-\Sigma g(x)-\sum_{j=1}^p{\textstyle{{{a}\choose{j}}}}\,\Delta^{j-1}g(x) ~\to ~0\qquad \mbox{as }x\to\infty{\,}. \end{aligned} $$
(6.4)

This is actually a nice convergence result that reveals the asymptotic behavior of the difference Σg(x + a) − Σg(x) for large values of x. The special case when p = 1 was established by Webster [98, Theorem 6.1].

When \(g(x)=\ln x\) and p = 1, this result reduces to

$$\displaystyle \begin{aligned} \ln\Gamma(x+a)-\ln\Gamma(x)-a\ln x ~\to ~0\qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

which is precisely the additive version of the asymptotic equivalence given in (6.1). We thus observe that (6.4) immediately provides an analogue of the asymptotic equivalence (6.1) for all the multiple \(\log \Gamma \)-type functions.

Now, we observe that formula (6.1) was also established by Wendel [99], who first provided a short and elegant proof of the following double inequality

$$\displaystyle \begin{aligned} \left(1+\frac{a}{x}\right)^{a-1} \leq ~ \frac{\Gamma(x+a)}{\Gamma(x){\,}x^a} ~\leq ~ 1{\,}, \qquad x>0{\,},\quad 0\leq a\leq 1{\,}, \end{aligned} $$
(6.5)

or equivalently, in the additive notation,

$$\displaystyle \begin{aligned} (a-1)\ln\left(1+\frac{a}{x}\right) ~\leq ~ \rho_x^2[\ln\circ\Gamma](a) ~\leq ~ 0{\,}, \qquad x>0{\,},\quad 0\leq a\leq 1{\,}, \end{aligned} $$
(6.6)

where

$$\displaystyle \begin{aligned} \rho_x^2[\ln\circ\Gamma](a) ~=~ \ln\Gamma(x+a)-\ln\Gamma(x)-a\ln x. \end{aligned} $$
(6.7)

We can readily see that this double inequality is actually a simple application of Lemma 2.7 to the log-gamma function with p = 1. Its generalization to all the multiple \(\log \Gamma \)-type functions is then straightforward and we present it in the following theorem. We call it the generalized Wendel inequality.

Theorem 6.1 (Generalized Wendel’s Inequality)

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let ± stand for 1 or −1 according to whether g lies in \(\mathcal {K}^p_+\) or \(\mathcal {K}^p_-\). Let also x > 0 be so that g is p-convex or p-concave on [x, ∞) and let a ≥ 0. Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 ~\leq ~ \pm (-1)\,\varepsilon_{p+1}(a){\,}\rho_x^{p+1}[\Sigma g](a) & \leq &\displaystyle \pm (-1)\left|{\textstyle{{{a-1}\choose{p}}}}\right|\left(\Delta^p\Sigma g(x+a)-\Delta^p\Sigma g(x)\right)\\ & \leq &\displaystyle \pm (-1)\,\lceil a\rceil\left|{\textstyle{{{a-1}\choose{p}}}}\right|\Delta^pg(x), \end{array} \end{aligned} $$

with equalities if a ∈{0, 1, …, p}. In particular, \(\rho _x^{p+1}[\Sigma g](a)\to 0\) as x → ∞. If p ≥ 1, we also have

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 ~\leq ~ \pm (-1)\,\varepsilon_p(a){\,}\rho_x^p[g](a) & \leq &\displaystyle \pm (-1)\left|{\textstyle{{{a-1}\choose{p-1}}}}\right|\left(\Delta^{p-1}g(x+a)-\Delta^{p-1}g(x)\right)\\ & \leq &\displaystyle \pm (-1)\,\lceil a\rceil\left|{\textstyle{{{a-1}\choose{p-1}}}}\right|\Delta^pg(x), \end{array} \end{aligned} $$

with equalities if a ∈{0, 1, …, p − 1}. In particular, \(\rho _x^p[g](a)\to 0\) as x → ∞.

Proof

Negating g if necessary, we can assume that it is p-convex on [x, ∞). By the existence Theorem 3.6, the function Σg is then p-concave on [x, ∞). By Lemma 2.5 and Proposition 4.11, the function \(\Delta^p g\) is negative and increases to zero on [x, ∞). Thus, for any a ≥ 0 we have

$$\displaystyle \begin{aligned} (-1)\sum_{j=0}^{\lceil a\rceil -1}\Delta^p g(x+j) ~\leq ~ (-1)\,\lceil a\rceil\Delta^pg(x). \end{aligned}$$

We then derive the first inequalities by applying Lemma 2.7 to f = Σg. Suppose now that p ≥ 1. By Corollary 4.19, the function g is (p − 1)-concave on [x, ∞). We then derive the remaining inequalities by applying Lemma 2.7 to f = g. □

A symmetrized version of the generalized Wendel inequality can be obtained simply by taking the absolute value of each side. This provides a coarsened but simpler form of the inequality. For instance, when \(g(x)=\ln x\) and p = 1 we obtain the inequality

$$\displaystyle \begin{aligned} \big|\ln\Gamma(x+a)-\ln\Gamma(x)-a\ln x\big| ~\leq ~ |a-1|\,\ln\left(1+\frac{a}{x}\right), \qquad x>0{\,},~a\geq 0{\,}, \end{aligned} $$
(6.8)

that is, in the multiplicative notation,

$$\displaystyle \begin{aligned} \left(1+\frac{a}{x}\right)^{-\left|a-1\right|} \leq ~ \frac{\Gamma(x+a)}{\Gamma(x){\,}x^a} ~\leq ~ \left(1+\frac{a}{x}\right)^{\left|a-1\right|}, \qquad x>0{\,},~a\geq 0{\,}. \end{aligned} $$
(6.9)

We then have the following immediate corollary, which provides a symmetrized version of the generalized Wendel inequality.

Corollary 6.2

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Let also x > 0 be so that g is p-convex or p-concave on [x, ∞) and let a ≥ 0. Then we have

$$\displaystyle \begin{aligned} \left|\rho_x^{p+1}[\Sigma g](a)\right| ~\leq ~ \left|{\textstyle{{{a-1}\choose{p}}}}\right|\left|\Delta^p\Sigma g(x+a)-\Delta^p\Sigma g(x)\right| ~\leq ~ \lceil a\rceil\left|{\textstyle{{{a-1}\choose{p}}}}\right| |\Delta^pg(x)|, \end{aligned}$$

with equalities if a ∈{0, 1, …, p}. In particular, \(\rho _x^{p+1}[\Sigma g](a)\to 0\) as x → ∞. If p ≥ 1, we also have

$$\displaystyle \begin{aligned} \left|\rho_x^p[g](a)\right| ~\leq ~ \left|{\textstyle{{{a-1}\choose{p-1}}}}\right|\left|\Delta^{p-1}g(x+a)-\Delta^{p-1}g(x)\right| ~\leq ~ \lceil a\rceil\left|{\textstyle{{{a-1}\choose{p-1}}}}\right| |\Delta^pg(x)|, \end{aligned}$$

with equalities if a ∈{0, 1, …, p − 1}. In particular, \(\rho _x^p[g](a)\to 0\) as x → ∞.

Example 6.3

Applying Theorem 6.1 and Corollary 6.2 to the function \(g(x)=\ln x\), for which we have p = 1 + deg g = 1 and \(\Sigma g(x)=\ln \Gamma (x)\), we immediately retrieve the inequalities (6.5)–(6.9) and hence also the asymptotic equivalence (6.1). Further inequalities can actually be obtained by considering higher values of p. For instance, since g also lies in \(\mathcal {D}^2\cap \mathcal {K}^2\), we can set p = 2 in Corollary 6.2 and we then obtain the inequalities

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle {\left(1+\frac{1}{x}\right)^{{a\choose 2}}\left(1+\frac{a}{x}\right)^{-\left|{a-1\choose 2}\right|}\left(1+\frac{a}{x+1}\right)^{\left|{a-1\choose 2}\right|} ~\leq ~ \frac{\Gamma(x+a)}{\Gamma(x){\,}x^a}}\\ & \leq &\displaystyle \left(1+\frac{1}{x}\right)^{{a\choose 2}}\left(1+\frac{a}{x}\right)^{\left|{a-1\choose 2}\right|}\left(1+\frac{a}{x+1}\right)^{-\left|{a-1\choose 2}\right|}. \end{array} \end{aligned} $$

Thus, we can see that the central function in these inequalities can always be “sandwiched” by finite products of powers of rational functions. For further inequalities involving this central function, see, e.g., Srivastava and Choi [93, pp. 106–107]. \(\lozenge \)

Discrete Version of the Generalized Wendel Inequality

The restrictions to the natural integers of the generalized Wendel inequality and its symmetrized form are obtained by setting \(x=n\in \mathbb {N}^*\) in the inequalities of Theorem 6.1 and Corollary 6.2. In view of identity (5.4), the symmetrized forms then reduce to those of the existence Theorem 3.6.

For instance, when \(g(x)=\ln x\) and p = 1, the symmetrized version of the generalized Wendel inequality is given in (6.8) while its discrete version can take the form

$$\displaystyle \begin{aligned} |\ln\Gamma(x)-f_n^1[\ln](x)| ~\leq ~ |x-1|\,\ln\left(1+\frac{x}{n}\right), \qquad x>0,~n\in\mathbb{N}^*, \end{aligned}$$

where

$$\displaystyle \begin{aligned} f_n^1[\ln](x) ~=~ \sum_{k=1}^{n-1}\ln k-\sum_{k=0}^{n-1}\ln(x+k)+x\ln n. \end{aligned}$$

This latter inequality clearly generalizes Gauss’ limit (1.6), which simply expresses that

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ \lim_{n\to\infty} f_n^1[\ln](x),\qquad x>0. \end{aligned}$$
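Both the discrete inequality and Gauss' limit can be illustrated numerically. The sketch below (illustrative only) evaluates \(f_n^1[\ln](x)\) for increasing n and records the error against \(\ln\Gamma(x)\) together with the bound \(|x-1|\ln(1+x/n)\).

```python
import math

def f_n1(x, n):
    # f_n^1[ln](x) = sum_{k=1}^{n-1} ln k - sum_{k=0}^{n-1} ln(x+k) + x ln n
    return (sum(math.log(k) for k in range(1, n))
            - sum(math.log(x + k) for k in range(n))
            + x * math.log(n))

x = 2.3
errs = []
for n in (10, 100, 10_000):
    err = abs(math.lgamma(x) - f_n1(x, n))
    bound = abs(x - 1) * math.log(1 + x / n)   # discrete Wendel bound
    errs.append((err, bound))
```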

6.2 The Asymptotic Constant

We now introduce a new important concept that will play a key role in our theory, namely the asymptotic constant. This concept will actually be used intensively throughout the rest of this book.

Definition 6.4 (Asymptotic Constant)

The asymptotic constant associated with a function \(g\in \mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) is the number

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \int_0^1\Sigma g(t+1){\,}dt ~=~ \int_0^1(\Sigma g(t)+g(t)){\,}dt{\,}. \end{aligned} $$
(6.10)

Using Definition 6.4, we can readily see that the following identity holds for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\)

$$\displaystyle \begin{aligned} \int_x^{x+1}\Sigma g(t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt,\qquad x>0. \end{aligned} $$
(6.11)

Indeed, both sides are functions of x that have the same derivative and the same value at x = 1.

Example 6.5 (Raabe’s Formula)

Taking \(g(x)=\ln x\) in (6.10), we obtain

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \int_0^1\ln\Gamma(t+1){\,}dt ~=~ -1+\frac{1}{2}\,\ln(2\pi){\,}. \end{aligned}$$

Combining this result with (6.11), we obtain the following more general identity

$$\displaystyle \begin{aligned} \int_x^{x+1}\ln\Gamma(t){\,}dt ~=~ \frac{1}{2}\,\ln(2\pi)+x\ln x-x{\,},\qquad x>0. \end{aligned}$$

This identity is known by the name Raabe’s formula (see, e.g., Cohen and Friedman [30]). We will discuss this formula and investigate its analogues in Sect. 8.5. \(\lozenge \)
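Raabe's formula is easy to verify numerically. The following sketch (an illustration, not part of the text) compares a midpoint-rule approximation of the integral with the closed form, and also recovers the asymptotic constant σ[g] for g = ln from Definition 6.4.

```python
import math

def lgamma_integral(x, m=20_000):
    # midpoint rule for the integral of ln(Gamma(t)) over (x, x+1)
    h = 1.0 / m
    return h * sum(math.lgamma(x + (i + 0.5) * h) for i in range(m))

def raabe(x):
    # right-hand side of Raabe's formula
    return 0.5 * math.log(2 * math.pi) + x * math.log(x) - x

max_err = max(abs(lgamma_integral(x) - raabe(x)) for x in (0.5, 1.0, 4.2))

# sigma[g] for g = ln: integral of ln(Gamma(t+1)) over (0, 1),
# which should equal -1 + ln(2*pi)/2
sigma = lgamma_integral(1.0)
```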

Identity (6.11) will also play a very important role in this work. In this respect, it is clear that the integral

$$\displaystyle \begin{aligned} \int_x^{x+1}\Sigma g(t){\,}dt,\qquad x>0, \end{aligned} $$
(6.12)

cancels out the cyclic variations of any 1-periodic additive component of Σg in the sense that the function

$$\displaystyle \begin{aligned} x ~\mapsto ~ \int_x^{x+1}\omega(t){\,}dt \end{aligned}$$

is constant for any 1-periodic function \(\omega \colon \mathbb {R}_+\to \mathbb {R}\). Thus, the integral (6.12) can be interpreted as the trend of the function Σg, just as a moving average enables one to decompose a time series into its trend and its seasonal variation. In this light, identity (6.11) simply tells us that the trend of the function Σg is precisely the antiderivative of g (up to an additive constant).

Let us end this section with the following two technical results related to the asymptotic constant.

Proposition 6.6

Let \(g_1\) and \(g_2\) lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let \(c_1,c_2\in \mathbb {R}\). If \(c_1g_1+c_2g_2\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\), then

$$\displaystyle \begin{aligned} \sigma[c_1g_1+c_2g_2] ~=~ c_1\sigma[g_1]+c_2\sigma[g_2]. \end{aligned}$$

Moreover, we have \(\sigma [\boldsymbol {1}]=\frac {1}{2}\), where \(\boldsymbol {1}\colon \mathbb {R}_+\to \mathbb {R}\) is the constant function 1(x) = 1.

Proof

The first part of the statement is an immediate consequence of Proposition 5.7. Now, we clearly have \(\Sigma \boldsymbol{1}(x)=x-1\) and hence \(\sigma [\boldsymbol {1}]=\frac {1}{2}\). □

Proposition 6.7

Let g lie in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) , let a ≥ 0, and let \(h\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation h(x) = g(x + a) for x > 0. Then

$$\displaystyle \begin{aligned} \sigma[h] ~=~ \sigma[g]+\int_1^{a+1}g(t){\,}dt -\Sigma g(a+1). \end{aligned}$$

Proof

Using Proposition 5.8 we obtain

$$\displaystyle \begin{aligned} \sigma[h] ~=~ \int_0^1\Sigma g(t+a+1){\,}dt -\Sigma g(a+1) ~=~ \int_{a+1}^{a+2}\Sigma g(t){\,}dt -\Sigma g(a+1). \end{aligned}$$

We then get the result using (6.11). □

6.3 Generalized Binet’s Function

The Binet function related to the log-gamma function is the function \(J\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation (see, e.g., Cuyt et al. [31, p. 224])

$$\displaystyle \begin{aligned} J(x) ~=~ \ln\Gamma(x)-\frac{1}{2}\ln(2\pi)+x-\left(x-\frac{1}{2}\right)\ln x\qquad \mbox{for }x>0. \end{aligned} $$
(6.13)

Using identity (6.7) and Raabe’s formula (see Example 6.5), we can easily provide the following integral form of Binet’s function

$$\displaystyle \begin{aligned} J(x) ~=~ -\int_0^1 \rho_x^2[\ln\circ\Gamma](t){\,}dt,\qquad x>0. \end{aligned}$$

This latter identity motivates the following definition, in which we introduce a generalization of Binet’s function. Recall first that, for any \(q\in \mathbb {N}\) and any x > 0, the function \(t\mapsto \rho _x^q[g](t)\) is continuous whenever so is g. In this case, since it also vanishes at t = 0, it must be integrable on (0, 1).

Definition 6.8 (Generalized Binet’s Function)

For any \(g\in \mathcal {C}^0\) and any \(q\in \mathbb {N}\), we define the function \(J^q[g]\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} J^q[g](x) ~=~ -\int_0^1\rho_x^q[g](t){\,}dt\qquad \mbox{for }x>0. \end{aligned} $$
(6.14)

We say that the function \(J^q[g]\) is the generalized Binet function associated with the function g and the parameter q.

Taking \(g=\ln \circ \Gamma \) and q = 1 + deg g = 2 in identity (6.14), we thus simply retrieve the Binet function \(J(x)=J^2[\ln \circ \Gamma ](x)\) related to the log-gamma function, as defined in (6.13).
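The agreement between the integral form (6.14) and the closed form (6.13) can be checked numerically; the sketch below (illustrative only) approximates the integral by the midpoint rule.

```python
import math

def binet_closed(x):
    # (6.13): J(x) = ln Gamma(x) - ln(2*pi)/2 + x - (x - 1/2) ln x
    return math.lgamma(x) - 0.5 * math.log(2 * math.pi) + x - (x - 0.5) * math.log(x)

def binet_integral(x, m=20_000):
    # J(x) = -int_0^1 rho_x^2[ln o Gamma](t) dt (midpoint rule), where
    # rho_x^2[ln o Gamma](t) = ln Gamma(x+t) - ln Gamma(x) - t ln x
    h = 1.0 / m
    s = sum(math.lgamma(x + (i + 0.5) * h) - math.lgamma(x) - (i + 0.5) * h * math.log(x)
            for i in range(m))
    return -h * s

max_diff = max(abs(binet_closed(x) - binet_integral(x)) for x in (1.0, 2.5, 10.0))
```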

In the following two propositions, we collect a few immediate properties of the generalized Binet function. To this end, recall first that, for any \(n\in \mathbb {N}\), the nth Gregory coefficient (also called the nth Bernoulli number of the second kind) is the number \(G_n\) defined by the equation (see, e.g., [20–22, 72])

$$\displaystyle \begin{aligned} G_n ~=~ \int_0^1{\textstyle{{{t}\choose{n}}}}{\,}dt\qquad \mbox{for }n\geq 0. \end{aligned}$$

The first few values of \(G_n\) are: \(1, \frac {1}{2}, -\frac {1}{12}, \frac {1}{24}, -\frac {19}{720},\ldots \). These numbers decrease in absolute value and satisfy the equations

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}|G_n| ~=~ 1\qquad \mbox{and}\qquad G_n ~=~ (-1)^{n-1}|G_n|\quad \mbox{for }n\geq 1. \end{aligned} $$
(6.15)
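Since \(\binom{t}{n}\) is a polynomial in t, the defining integral can be evaluated in exact rational arithmetic. The sketch below (illustrative, not part of the text) reproduces the values quoted above and checks the partial sums in (6.15).

```python
from fractions import Fraction

def gregory(n):
    # G_n = integral over (0, 1) of binom(t, n) = t(t-1)...(t-n+1)/n!
    coeffs = [Fraction(1)]              # polynomial coefficients, low degree first
    for k in range(n):                  # multiply the polynomial by (t - k)
        new = [Fraction(0)] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c
            new[i] -= k * c
        coeffs = new
    fact = 1
    for k in range(1, n + 1):
        fact *= k
    # integrate term by term over (0, 1), then divide by n!
    return sum(c / Fraction(i + 1) for i, c in enumerate(coeffs)) / fact

vals = [gregory(n) for n in range(5)]
```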

Proposition 6.9

Let \(g\in \mathcal {C}^0\) and \(q\in \mathbb {N}\) . Then, for any x > 0, we have

$$\displaystyle \begin{aligned} J^q[g](x) ~=~ \sum_{j=0}^{q-1}G_j\Delta^jg(x)-\int_x^{x+1}g(t){\,}dt{\,}. \end{aligned} $$
(6.16)

In particular,

$$\displaystyle \begin{aligned} \Delta J^q[g] = J^q[\Delta g]\qquad \mathit{\mbox{and}}\qquad J^{q+1}[g]-J^q[g] ~=~ G_q\,\Delta^q g. \end{aligned} $$
(6.17)

Proof

Identity (6.16) follows immediately from (1.7). The other two identities are trivial. □

Proposition 6.10

Let g lie in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) and let \(q\in \mathbb {N}\) . Then, for any x > 0 and any \(n\in \mathbb {N}^*\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} J^{q+1}[\Sigma g](x) & =&\displaystyle \Sigma g(x)-\sigma[g]-\int_1^xg(t){\,}dt + \sum_{j=1}^qG_j\Delta^{j-1} g(x){\,},{} \end{array} \end{aligned} $$
(6.18)
$$\displaystyle \begin{aligned} \begin{array}{rcl} J^{q+1}[\Sigma g](n) & =&\displaystyle \int_0^1\left(f_n^q[g](t)- \Sigma g(t)\right){\,}dt{\,}.{} \end{array} \end{aligned} $$
(6.19)

In particular,

$$\displaystyle \begin{aligned} \Delta J^{q+1}[\Sigma g] = J^{q+1}[g]{\,},\qquad J^{q+1}[c+\Sigma g] ~=~ J^{q+1}[\Sigma g],\quad c\in\mathbb{R}, \end{aligned}$$

and

$$\displaystyle \begin{aligned} \sigma[g] ~=~ -J^1[\Sigma g](1). \end{aligned}$$

Proof

Identity (6.18) follows from (6.11) and (6.16). Identity (6.19) follows from (5.4) and (6.14). The remaining identities are trivial. □

As we will see in the rest of this book, many subsequent definitions and results can be expressed in terms of the generalized Binet function.

6.4 Generalized Stirling’s Formula

Interestingly, the Binet function \(J(x)=J^2[\Sigma \ln ](x)\) defined in (6.13) clearly satisfies the following identity (compare with Artin [11, p. 24])

$$\displaystyle \begin{aligned} \Gamma(x) ~=~ \sqrt{2\pi}{\,}x^{x-\frac{1}{2}}{\,}e^{-x+J(x)} \end{aligned}$$

and hence Stirling’s formula (6.2) simply states that J(x) → 0 as x →. This observation seems to reveal a way to find a counterpart of Stirling’s formula for any continuous multiple \(\log \Gamma \)-type function. In fact, we only need to show that the function J p+1[ Σg] vanishes at infinity whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). In the next theorem and its corollary, we establish this fact by simply integrating each side of the generalized Wendel inequality and its symmetrized version on a ∈ (0, 1).

Let us first define the sequence \(n\mapsto \overline {G}_n\) by the equations

$$\displaystyle \begin{aligned} \overline{G}_n ~=~ 1-\sum_{j=1}^n|G_j| ~=~ \sum_{j=n+1}^{\infty}|G_j|\qquad \mbox{for }n\in\mathbb{N}.{}\end{aligned} $$

In view of (6.15), we see that the sequence \(n\mapsto \overline {G}_n\) decreases to zero. Its first values are: \(1, \frac {1}{2}, \frac {5}{12}, \frac {3}{8}, \frac {251}{720},\ldots \). Moreover, from the straightforward identity (see, e.g., Graham et al. [41, p. 165])

$$\displaystyle \begin{aligned} (-1)^n{\textstyle{{{t-1}\choose{n}}}} ~=~ 1-\sum_{j=1}^n(-1)^{j-1}{\textstyle{{{t}\choose{j}}}}{\,},\end{aligned} $$

we easily derive

$$\displaystyle \begin{aligned} \int_0^1\left|{\textstyle{{{t-1}\choose{n}}}}\right|{\,}dt ~=~ (-1)^n\int_0^1{\textstyle{{{t-1}\choose{n}}}}{\,}dt ~=~ \left|\int_0^1{\textstyle{{{t-1}\choose{n}}}}{\,}dt\right| ~=~ \overline{G}_n{\,}. \end{aligned} $$
(6.20)
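A numerical sketch (illustrative only) of identity (6.20), recovering the values of \(\overline{G}_n\) listed above by direct quadrature:

```python
import math

def binom(t, n):
    # generalized binomial coefficient binom(t, n) for real t
    p = 1.0
    for k in range(n):
        p *= (t - k) / (k + 1)
    return p

def gbar(n, m=100_000):
    # Gbar_n = int_0^1 |binom(t-1, n)| dt = (-1)^n int_0^1 binom(t-1, n) dt,
    # approximated by the midpoint rule
    h = 1.0 / m
    return (-1) ** n * h * sum(binom((i + 0.5) * h - 1, n) for i in range(m))

vals = [gbar(n) for n in range(5)]
expected = [1.0, 1 / 2, 5 / 12, 3 / 8, 251 / 720]
```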

We now have the following two results, which immediately follow from Theorem 6.1, Corollary 6.2, and identities (6.20).

Theorem 6.11

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let ± stand for 1 or −1 according to whether g lies in \(\mathcal {K}^p_+\) or \(\mathcal {K}^p_-\). Let also x > 0 be so that g is p-convex or p-concave on [x, ∞). Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 ~\leq ~ \pm (-1)^p{\,}J^{p+1}[\Sigma g](x) & \leq &\displaystyle \pm {\,}(-1)^{p+1}\int_0^1{\textstyle{{{t-1}\choose{p}}}}\left(\Delta^p\Sigma g(x+t)-\Delta^p\Sigma g(x)\right)dt\\ & \leq &\displaystyle \pm{\,}(-1)\,\overline{G}_p\,\Delta^pg(x). \end{array} \end{aligned} $$

In particular, \(J^{p+1}[\Sigma g](x)\to 0\) as x → ∞. If p ≥ 1, we also have

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 ~\leq ~ \pm (-1)^{p+1}{\,}J^p[g](x) & \leq &\displaystyle \pm {\,}(-1)^p\int_0^1{\textstyle{{{t-1}\choose{p-1}}}}\left(\Delta^{p-1}g(x+t)-\Delta^{p-1}g(x)\right)dt\\ & \leq &\displaystyle \pm{\,}(-1)\,\overline{G}_{p-1}\,\Delta^pg(x). \end{array} \end{aligned} $$

In particular, \(J^p[g](x)\to 0\) as x → ∞.

Corollary 6.12

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Let also x > 0 be so that g is p-convex or p-concave on [x, ∞). Then we have

$$\displaystyle \begin{aligned} \left|J^{p+1}[\Sigma g](x)\right| ~\leq ~ \left|\int_0^1{\textstyle{{{t-1}\choose{p}}}}\left(\Delta^p\Sigma g(x+t)-\Delta^p\Sigma g(x)\right)dt\right| ~\leq ~ \overline{G}_p{\,}|\Delta^pg(x)|. \end{aligned}$$

In particular, \(J^{p+1}[\Sigma g](x)\to 0\) as x → ∞. If p ≥ 1, we also have

$$\displaystyle \begin{aligned} \left|J^p[g](x)\right| ~\leq ~ \left|\int_0^1{\textstyle{{{t-1}\choose{p-1}}}}\left(\Delta^{p-1}g(x+t)-\Delta^{p-1}g(x)\right)dt\right| ~\leq ~ \overline{G}_{p-1}{\,}|\Delta^pg(x)|. \end{aligned}$$

In particular, \(J^p[g](x)\to 0\) as x → ∞.

Both Theorem 6.11 and Corollary 6.12 state that \(J^{p+1}[\Sigma g]\) vanishes at infinity whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). This result is precisely the analogue of Stirling’s formula for all the continuous multiple \(\log \Gamma \)-type functions. As it is one of the central results of our theory, we state it explicitly in the following theorem. We call it the generalized Stirling formula. We also include the property that \(J^p[g]\) vanishes at infinity.

Theorem 6.13 (Generalized Stirling’s Formula)

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Then both functions \(J^{p+1}[\Sigma g]\) and \(J^p[g]\) vanish at infinity. More precisely, we have

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_1^x g(t){\,}dt +\sum_{j=1}^pG_j\Delta^{j-1}g(x) ~\to ~ \sigma[g]\qquad \mathit{\mbox{as }}x\to\infty \end{aligned} $$
(6.21)

and

$$\displaystyle \begin{aligned} \int_x^{x+1}g(t){\,}dt -\sum_{j=0}^{p-1}G_j\Delta^jg(x) ~\to ~ 0\qquad \mathit{\mbox{as }}x\to\infty{\,}. \end{aligned} $$
(6.22)

Proof

By Theorem 6.11, the functions \(J^{p+1}[\Sigma g]\) and \(J^p[g]\) vanish at infinity when p ≥ 0 and p ≥ 1, respectively. The function \(J^p[g]\) also vanishes at infinity when p = 0; indeed, in this case |g(x)| eventually decreases to zero and we have

$$\displaystyle \begin{aligned} |J^0[g](x)| ~=~ \left|\int_0^1g(x+t){\,}dt\right| ~\leq ~ |g(x)| ~\to ~ 0\qquad \mbox{as }x\to\infty. \end{aligned}$$

Formulas (6.21) and (6.22) then immediately follow from (6.16) and (6.18). □

The generalized Stirling formula (6.21) is actually the highlight of this chapter. It enables one to investigate the asymptotic behavior of the function Σg for large values of its argument. It also justifies the name “asymptotic constant” given to the quantity σ[g] introduced in Definition 6.4. Moreover, combining (6.4) with (6.21), we immediately derive the asymptotic behavior of Σg(x + a) for any a ≥ 0. We also observe that alternative formulations of (6.21) in the case when p = 1 were established by Krull [54, p. 368] and later by Webster [98, Theorem 6.3].

In the special case when g lies in \(\mathcal {D}^{-1}\cap \mathcal {K}^0\), the generalized Stirling formula and the asymptotic constant take very special forms. We present them in the following proposition.

Proposition 6.14

If g lies in \(\mathcal {D}^{-1}\cap \mathcal {K}^0\), then we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~\to ~ \sum_{k=1}^{\infty}g(k)\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned} $$
(6.23)

If, in addition, we have \(g\in \mathcal {C}^0\), then g is integrable at infinity and

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{k=1}^{\infty}g(k)-\int_1^{\infty}g(t){\,}dt. \end{aligned}$$

Proof

By definition of the map Σ, we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=1}^{\infty}g(k)-\sum_{k=0}^{\infty}g(x+k),\qquad x>0{\,}, \end{aligned}$$

where the second series tends to zero as x → ∞ by Theorem 3.13. This establishes (6.23). The claimed expression for σ[g] then immediately follows from formula (6.21). □
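For a concrete illustration (a numerical sketch, not from the text), take g(x) = 1/x², which lies in \(\mathcal {D}^{-1}\cap \mathcal {K}^0\). Proposition 6.14 then gives σ[g] = Σ 1/k² − ∫₁^∞ dt/t² = π²/6 − 1, which we can compare against Definition 6.4 evaluated by quadrature.

```python
import math

def Sigma_g(x, N=20_000):
    # Sigma g(x) = sum_{k>=1} g(k) - sum_{k>=0} g(x+k) for g(t) = 1/t^2,
    # with integral-style tail corrections for both truncated series
    s1 = sum(1.0 / k ** 2 for k in range(1, N + 1)) + 1.0 / (N + 0.5)
    s2 = sum(1.0 / (x + k) ** 2 for k in range(N)) + 1.0 / (x + N - 0.5)
    return s1 - s2

# Definition 6.4: sigma[g] = int_0^1 Sigma g(t+1) dt (midpoint rule)
m = 200
h = 1.0 / m
sigma_quad = h * sum(Sigma_g(1.0 + (i + 0.5) * h) for i in range(m))

# Proposition 6.14: sigma[g] = sum_{k>=1} 1/k^2 - int_1^inf dt/t^2 = pi^2/6 - 1
sigma_closed = math.pi ** 2 / 6 - 1.0
```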

Example 6.15

Let us apply our results to the concave function \(g(x)=\ln x\) with p = 1. Using (6.16) and (6.18), we first obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} J^2[\ln\circ\Gamma](x) & =&\displaystyle J(x) ~=~ \ln\Gamma(x)-\frac{1}{2}\ln(2\pi)+x-\left(x-\frac{1}{2}\right)\ln x{\,},\\ J^1[\ln](x) & =&\displaystyle 1-(x+1)\,\ln\left(1+\frac{1}{x}\right). \end{array} \end{aligned} $$

Now, Theorem 6.11 provides the following inequalities for any x > 0

$$\displaystyle \begin{aligned} 0 ~\leq ~ J(x) ~\leq ~ \frac{1}{2}(x+1)^2\,\ln\left(1+\frac{1}{x}\right)-\frac{x}{2}-\frac{3}{4} ~\leq ~ \frac{1}{2}\,\ln\left(1+\frac{1}{x}\right){\,}, \end{aligned} $$
(6.24)
$$\displaystyle \begin{aligned} 0 ~\leq ~ -1+(x+1)\,\ln\left(1+\frac{1}{x}\right) ~\leq ~ \ln\left(1+\frac{1}{x}\right). \end{aligned}$$

That is, in the multiplicative notation,

$$\displaystyle \begin{aligned} 1 ~\leq ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}} ~\leq ~ e^{-\frac{x}{2}-\frac{3}{4}}\left(1+\frac{1}{x}\right)^{\frac{1}{2}(x+1)^2} \leq ~ \left(1+\frac{1}{x}\right)^{\frac{1}{2}}, \end{aligned} $$
(6.25)
$$\displaystyle \begin{aligned} \left(1+\frac{1}{x}\right)^x ~\leq ~ e ~\leq ~ \left(1+\frac{1}{x}\right)^{x+1}. \end{aligned}$$

Thus, we retrieve Stirling’s formula (6.2) and (6.3), together with the well-known asymptotic equivalence (compare with Artin [11, p. 20])

$$\displaystyle \begin{aligned} \left(1+\frac{1}{x}\right)^x ~\sim ~ e\qquad \mbox{as }x\to\infty. \end{aligned}$$

It is actually quite remarkable that the first two inequalities in (6.24) and (6.25) are precisely what we get when we “integrate” the additive version of the Wendel inequality (6.5) on the unit interval (0, 1).

Now, the coarsened inequality

$$\displaystyle \begin{aligned} \left|J^{p+1}[\Sigma g](x)\right| ~\leq ~ \overline{G}_p{\,}|\Delta^pg(x)| \end{aligned}$$

given in Corollary 6.12 takes the following simple form (in the multiplicative notation)

$$\displaystyle \begin{aligned} \left(1+\frac{1}{x}\right)^{-\frac{1}{2}} \leq ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}} ~\leq ~ \left(1+\frac{1}{x}\right)^{\frac{1}{2}}. \end{aligned}$$

Note that tighter inequalities can also be obtained by considering higher values of p in Corollary 6.12. For instance, taking p = 2 we obtain

$$\displaystyle \begin{aligned} \left(1+\frac{1}{x}\right)^{-\frac{3}{4}}\left(1+\frac{2}{x}\right)^{\frac{5}{12}} ~\leq ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}} ~\leq ~ \left(1+\frac{1}{x}\right)^{\frac{11}{12}}\left(1+\frac{2}{x}\right)^{-\frac{5}{12}}. \end{aligned}$$

Taking p = 3 we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \left(1+\frac{1}{x}\right)^{-\frac{23}{24}} \left(1+\frac{2}{x}\right)^{\frac{13}{12}} \left(1+\frac{3}{x}\right)^{-\frac{3}{8}} ~\leq ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}}\\ & &\displaystyle \phantom{\left(1+\frac{1}{x}\right)^{-\frac{23}{24}} \left(1+\frac{2}{x}\right)^{\frac{13}{12}} \left(1+\frac{3}{x}\right)^{-\frac{3}{8}}}\leq ~ \left(1+\frac{1}{x}\right)^{\frac{31}{24}} \left(1+\frac{2}{x}\right)^{-\frac{7}{6}} \left(1+\frac{3}{x}\right)^{\frac{3}{8}}. \end{array} \end{aligned} $$

Thus, we see that the central function in these inequalities can always be bracketed by finite products of radical functions. \(\lozenge \)
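These brackets can all be verified numerically. The sketch below (illustrative only) checks the chain of inequalities (6.25) and the p = 2 refinement at a few sample points.

```python
import math

def central(x):
    # Gamma(x) / (sqrt(2*pi) * e^(-x) * x^(x - 1/2)) = exp(J(x))
    return math.exp(math.lgamma(x) + x - (x - 0.5) * math.log(x)
                    - 0.5 * math.log(2 * math.pi))

ok = True
for x in (0.5, 1.0, 3.0, 25.0):
    c = central(x)
    u1, u2 = 1 + 1 / x, 1 + 2 / x
    # (6.25): 1 <= c <= exp(-x/2 - 3/4) * u1^((x+1)^2 / 2) <= u1^(1/2)
    mid = math.exp(-x / 2 - 0.75) * u1 ** (0.5 * (x + 1) ** 2)
    ok = ok and 1 <= c <= mid <= u1 ** 0.5
    # p = 2 bracket from Corollary 6.12
    ok = ok and u1 ** (-3 / 4) * u2 ** (5 / 12) <= c <= u1 ** (11 / 12) * u2 ** (-5 / 12)
```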

In the last part of Example 6.15, we have illustrated the possibility of obtaining sharper bounds for the generalized Binet function \(J^{p+1}[\Sigma \ln ](x)\) by considering in Corollary 6.12 values of p higher than 1 + deg g. Actually, it is not difficult to see that this feature applies to every continuous multiple \(\log \Gamma \)-type function. We discuss this topic in Appendix D and show that the inequalities actually get tighter and tighter as p increases.

Remark 6.16

We observe that Theorem 6.11 together with the generalized Stirling formula (Theorem 6.13) have been immediately obtained by “integrating” the generalized Wendel inequality (Theorem 6.1) on the unit interval. In turn, the generalized Wendel inequality is a straight application of Lemma 2.7 to the function f =  Σg. These remarkable facts show the considerable importance of Lemma 2.7 in this theory: it was first crucial to derive our uniqueness and existence results, and now it provides very nice counterparts of Wendel’s inequality and Stirling’s formula, with short and elegant proofs. We will use Lemma 2.7 again in Sect. 6.7 for an in-depth investigation of Gregory’s summation formula. \(\lozenge \)

Improvements of Stirling’s Formula

The following estimate of the gamma function is due to Gosper [40]

$$\displaystyle \begin{aligned} \Gamma(x) ~\sim ~ \sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}\left(1+\frac{1}{6x}\right)^{\frac{1}{2}}\qquad \mbox{as }x\to\infty, \end{aligned}$$

and is more accurate than Stirling’s formula. On the basis of this alternative approximation, Mortici [76] provided the following narrow inequalities

$$\displaystyle \begin{aligned} \left(1+\frac{\alpha}{2x}\right)^{\frac{1}{2}} ~< ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}} ~< ~ \left(1+\frac{\beta}{2x}\right)^{\frac{1}{2}},\qquad \mbox{for }x\geq 2, \end{aligned}$$

where \(\alpha =\frac {1}{3}\) and \(\beta =(391/30)^{1/3}-2\approx 0.353\). We observe that the quest for finer and finer bounds and approximations for the gamma function has attracted increasing interest during the last decade (see [26, 28, 29, 36, 65, 75–78, 100, 101] and the references therein). Some of these investigations could be generalized to various multiple Γ-type functions. New results along this line would be welcome.
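A numerical sketch (illustrative only) comparing the two approximations and checking Mortici's bounds at a few sample points:

```python
import math

x = 10.0
stirling_log = 0.5 * math.log(2 * math.pi) - x + (x - 0.5) * math.log(x)
# Gosper's estimate adds the factor (1 + 1/(6x))^(1/2)
gosper_log = stirling_log + 0.5 * math.log(1 + 1 / (6 * x))

err_stirling = abs(math.lgamma(x) - stirling_log)
err_gosper = abs(math.lgamma(x) - gosper_log)

# Mortici's bounds with alpha = 1/3 and beta = (391/30)^(1/3) - 2, for x >= 2
alpha, beta = 1 / 3, (391 / 30) ** (1 / 3) - 2
ok = all(
    (1 + alpha / (2 * t)) ** 0.5
    < math.exp(math.lgamma(t) - (0.5 * math.log(2 * math.pi) - t + (t - 0.5) * math.log(t)))
    < (1 + beta / (2 * t)) ** 0.5
    for t in (2.0, 5.0, 20.0)
)
```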

Webster’s Double Inequality

We have seen that Theorems 6.1 and 6.11 provide very useful bounds for both quantities \(\rho _x^{p+1}[\Sigma g](a)\) and \(J^{p+1}[\Sigma g](x)\). It is actually possible to provide tighter bounds for these quantities using again the p-convexity or p-concavity properties of the function g. For instance, one can show that if g lies in \(\mathcal {D}^1\cap \mathcal {K}^1\) and if x > 0 and a > 0 are so that g is concave on [x + a, ∞), then the following double inequality holds

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle {\sum_{k=0}^{\lfloor a\rfloor}g(x+k) + (\{a\}-1){\,}g(x+a)-a{\,}g(x) ~\leq ~ \rho^2_x[\Sigma g](a)}\\ & \leq &\displaystyle \sum_{k=0}^{\lfloor a\rfloor} g(x+k)-g(x+a)+\{a\}{\,}g(x+\lfloor a\rfloor+1)-a{\,}g(x).{} \end{array} \end{aligned} $$
(6.26)

This inequality was actually provided by Webster [98, Eq. (6.4)] to establish the limit (6.4) in the case when p = 1.

Now, assuming that g is continuous, we can integrate every expression in the inequalities above on a ∈ (0, 1), and we then obtain the following bounds for \(J^2[\Sigma g](x)\)

$$\displaystyle \begin{aligned} \begin{array}{rcl} 0 & \leq &\displaystyle -J^2[g](x) ~\leq ~ J^2[\Sigma g](x)\\ & \leq &\displaystyle -J^2[g](x)-\int_0^1 t{\,}g(x+t){\,}dt+\frac{1}{2}{\,}g(x+1).{} \end{array} \end{aligned} $$
(6.27)

For instance, for \(g(x)=\ln x\), we obtain (in the multiplicative notation)

$$\displaystyle \begin{aligned} 1 ~\leq ~ e^{-1}\left(1+\frac{1}{x}\right)^{x+\frac{1}{2}} \leq ~ \frac{\Gamma(x)}{\sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}} ~\leq ~ e^{-\frac{x}{2}-\frac{3}{4}}\left(1+\frac{1}{x}\right)^{\frac{1}{2}(x+1)^2}, \end{aligned} $$
(6.28)

which provides a better lower bound in the inequalities (6.25).
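The improvement is easy to confirm numerically; the sketch below (illustrative only) checks the full chain in (6.28) at a few points.

```python
import math

ok = True
for x in (0.5, 1.0, 4.0, 30.0):
    # central quantity Gamma(x) / (sqrt(2*pi) * e^(-x) * x^(x - 1/2))
    c = math.exp(math.lgamma(x) + x - (x - 0.5) * math.log(x)
                 - 0.5 * math.log(2 * math.pi))
    u = 1 + 1 / x
    lower = math.exp(-1) * u ** (x + 0.5)                    # improved lower bound in (6.28)
    upper = math.exp(-x / 2 - 0.75) * u ** (0.5 * (x + 1) ** 2)
    ok = ok and 1 <= lower <= c <= upper
```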

In Appendix E, we discuss this interesting issue and provide a generalization to multiple \(\log \Gamma \)-type functions of the Webster double inequality (6.26) and its “integrated” version (6.27).

Generalized Stirling’s Constant

The number \(\sqrt {2\pi }\) arising in Stirling’s formula (6.2) and Example 6.15 is called Stirling’s constant (see, e.g., Finch [37]). For certain multiple Γ-type functions, analogues of Stirling’s constant can be easily defined as follows.

Definition 6.17 (Generalized Stirling’s Constant)

For any function \(g\in \mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) that is integrable at 0, we define the number

$$\displaystyle \begin{aligned} \overline{\sigma}[g] ~=~ \sigma[g]-\int_0^1g(t){\,}dt ~=~ \int_0^1\Sigma g(t){\,}dt.{} \end{aligned}$$

We say that the number \(\exp (\overline {\sigma }[g])\) is the generalized Stirling constant associated with g.

When g is integrable at 0, the generalized Stirling constant exists and hence the generalized Stirling formula (6.21) can take the following form

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_0^x g(t){\,}dt +\sum_{j=1}^pG_j\Delta^{j-1}g(x) ~\to ~ \overline{\sigma}[g]\qquad \mbox{as }x\to\infty{\,}. \end{aligned}$$

It is important to note that, contrary to the generalized Stirling constant, the asymptotic constant σ[g] exists for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\), even if it is not integrable at 0. For instance, for the function \(g(x)=\frac {1}{x}\), we have that σ[g] is the Euler constant γ (see Example 8.19) while \(\overline {\sigma }[g]\) does not exist.

This shows that the asymptotic constant is the “good” constant to consider in this new theory. It actually enables us to derive for multiple \(\log \Gamma \)-type functions analogues of several properties of the gamma function. For instance, we have seen that it was very useful to derive the generalized Stirling formula. To give a second example, we will see in Sect. 8.6 that it also enables us to derive analogues of Gauss’ multiplication formula for the gamma function.

6.5 Analogue of Burnside’s Formula

Let us recall Burnside’s formula, which states that

$$\displaystyle \begin{aligned} \Gamma(x) ~\sim ~ \sqrt{2\pi}\left(\frac{x-\frac{1}{2}}{e}\right)^{x-\frac{1}{2}}\qquad \mbox{as }x\to\infty. \end{aligned} $$
(6.29)

This formula actually provides a much better approximation of the gamma function than Stirling’s formula. It was first established by Burnside [27] (see also Mortici [75]) and then rediscovered by Spouge [91]. In this section, we provide an analogue of Burnside’s formula for any continuous Γp-type function when p = 0 and p = 1, and we note that such an analogue no longer exists when p ≥ 2.

Let us first state the following corollary, which particularizes the generalized Stirling formula when the function g lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\). This corollary actually follows immediately from (6.11) and (6.21).

Corollary 6.18

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) . Then

$$\displaystyle \begin{aligned} \Sigma g(x)-\int_x^{x+1}\Sigma g(t){\,}dt ~\to ~ 0 \qquad \mathit{\mbox{as }}x\to\infty{\,}. \end{aligned}$$

Equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_1^xg(t){\,}dt ~\to ~ \sigma[g] \qquad \mathit{\mbox{as }}x\to\infty{\,}. \end{aligned}$$

Corollary 6.18 tells us that, when g lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\), the function Σg(x) coincides asymptotically with its trend (i.e., the integral (6.12)) and, in a sense, behaves asymptotically like an antiderivative of the function g.

It is natural to think that a more accurate trend of Σg can be obtained by considering the centered version of the integral (6.12), namely

$$\displaystyle \begin{aligned} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}}\Sigma g(t){\,}dt ~=~ \sigma[g]+\int_1^{x-\frac{1}{2}}g(t){\,}dt,\qquad x>\textstyle{\frac{1}{2}}{\,}. \end{aligned}$$

On this matter, in the following proposition we provide a double inequality that shows that Σg(x) coincides asymptotically with this latter trend whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) or in \(\mathcal {C}^0\cap \mathcal {D}^1\cap \mathcal {K}^1\). However, it is not difficult to see that in general this result no longer holds when g lies in \(\mathcal {C}^0\cap \mathcal {D}^2\cap \mathcal {K}^2\). The logarithm of the Barnes G-function (see Sect. 10.5) could serve as an example here.

Proposition 6.19

Let p ∈{0, 1}, \(g\in \mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) , and x > 0 be such that g is p-convex or p-concave on [x, ∞). Then

$$\displaystyle \begin{aligned} \left|\Sigma g\left(x+\frac{1}{2}\right)-\int_x^{x+1}\Sigma g(t){\,}dt\right| ~\leq ~ \left|J^{p+1}[\Sigma g](x)\right| ~\leq ~ \overline{G}_p{\,}|\Delta^p g(x)|. \end{aligned}$$

In particular,

$$\displaystyle \begin{aligned} \Sigma g(x)-\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}\Sigma g(t){\,}dt ~\to ~ 0 \qquad \mathit{\mbox{as }}x\to\infty{\,}, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_1^{x-\frac{1}{2}}g(t){\,}dt ~\to ~ \sigma[g] \qquad \mathit{\mbox{as }}x\to\infty{\,}. \end{aligned}$$

Proof

Using Corollary 6.12, we see that it is enough to prove the first inequality. Let

$$\displaystyle \begin{aligned} h(x) ~=~ \Sigma g\left(x+\frac{1}{2}\right)-\int_x^{x+1}\Sigma g(t){\,}dt. \end{aligned}$$

Consider first the case when p = 0 and suppose for instance that g lies in \(\mathcal {K}_+^0\); hence Σg is decreasing on [x, ∞). If h(x) ≥ 0, then we clearly have

$$\displaystyle \begin{aligned} |h(x)| ~=~ h(x) ~\leq ~ \Sigma g(x)-\int_x^{x+1}\Sigma g(t){\,}dt ~=~ J^1[\Sigma g](x). \end{aligned}$$

If h(x) ≤ 0, then we have

$$\displaystyle \begin{aligned} |h(x)| ~=~ \int_x^{x+1}\Sigma g(t){\,}dt-\Sigma g\left(x+\frac{1}{2}\right) ~\leq ~ \int_x^{x+\frac{1}{2}}\Sigma g(t){\,}dt-\frac{1}{2}\Sigma g\left(x+\frac{1}{2}\right) \end{aligned}$$

and it is geometrically clear that the latter quantity is less than \(J^1[\Sigma g](x)\).

Suppose now that p = 1 and for instance that g lies in \(\mathcal {K}_+^1\); hence Σg is concave on [x, ∞). Applying the Hermite-Hadamard inequality to Σg on the interval [x, x + 1], we obtain that h(x) ≥ 0. Applying the trapezoidal rule to Σg on the intervals \([x,x+\frac {1}{2}]\) and \([x+\frac {1}{2},x+1]\), we obtain the following inequality

$$\displaystyle \begin{aligned} h(x) ~\leq ~ \int_x^{x+1}\Sigma g(t){\,}dt-\frac{1}{2}\,\Sigma g(x+1)-\frac{1}{2}\,\Sigma g(x), \end{aligned}$$

where the right-hand quantity is exactly \(-J^2[\Sigma g](x)\). This completes the proof. □

Applying Proposition 6.19 to the function \(g(x)=\ln x\) with p = 1, we retrieve Burnside’s formula (6.29). Thus, Proposition 6.19 gives an analogue of Burnside’s formula for any continuous Γp-type function when p ∈{0, 1}. It also shows that this new formula provides a better approximation than the generalized Stirling formula whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) with p ∈{0, 1}.
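The gain in accuracy is easy to observe numerically. The following Python sketch (not from the original text) compares, on a log scale, the errors of Stirling’s formula (6.2) and Burnside’s formula (6.29):

```python
import math

# Stirling (6.2) versus Burnside (6.29) on a log scale: Burnside's
# approximation of ln Gamma(x) is consistently the more accurate of the two.
LOG_SQRT_2PI = 0.5 * math.log(2 * math.pi)

def log_stirling(x):
    return LOG_SQRT_2PI - x + (x - 0.5) * math.log(x)

def log_burnside(x):
    y = x - 0.5
    return LOG_SQRT_2PI + y * (math.log(y) - 1)

for x in [5.0, 10.0, 100.0]:
    exact = math.lgamma(x)
    assert abs(log_burnside(x) - exact) < abs(log_stirling(x) - exact)
```

(The errors behave roughly like 1∕(24x) and 1∕(12x), respectively, so Burnside’s formula is better by about a factor of two.)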

6.6 A General Asymptotic Equivalence

The following result provides a sufficient condition for a continuous multiple \(\log \Gamma \)-type function to be asymptotically equivalent to its (possibly shifted) trend.

Proposition 6.20

Let g lie in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) and let a ≥ 0 and \(c\in \mathbb {R}\) . When c +  Σg vanishes at infinity, we also assume that

$$\displaystyle \begin{aligned} c+\Sigma g(n+1) ~\sim ~c+\Sigma g(n)\qquad \mathit{\mbox{as }}n\to_{\mathbb{N}}\infty. \end{aligned} $$
(6.30)

Then we have

$$\displaystyle \begin{aligned} c+\Sigma g(x+a) ~\sim ~ c+\int_x^{x+1}\Sigma g(t){\,}dt\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned} $$
(6.31)

If g does not lie in \(\mathcal {D}^{-1}_{\mathbb {N}}\) , then we also have

$$\displaystyle \begin{aligned} \Sigma g(x+a) ~\sim ~ c+\int_1^x g(t){\,}dt\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$

Proof

Let us first prove that (6.30) holds for any g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) for which c +  Σg does not vanish at infinity. Of course, this result clearly holds if g is eventually a polynomial (since so is Σg in this case). Thus, we will now assume that g is not eventually a polynomial.

Suppose first that p = 1 +deg g = 0. If g lies in \(\mathcal {D}^{-1}_{\mathbb {N}}\), then (6.30) follows immediately from (6.23). If g lies in \(\mathcal {D}^0_{\mathbb {N}}\setminus \mathcal {D}^{-1}_{\mathbb {N}}\), then it is not integrable at infinity by the integral test for convergence. By the generalized Stirling formula (6.21), it follows that the eventually monotone sequence n↦ Σg(n) is unbounded. This sequence is actually eventually strictly monotone; indeed, otherwise the function \(\Delta \Sigma g=g\in \mathcal {K}^0\) would vanish in any unbounded interval of \(\mathbb {R}_+\), and hence would eventually be identically zero, a contradiction. We then obtain

$$\displaystyle \begin{aligned} \frac{c+\Sigma g(n+1)}{c+\Sigma g(n)} ~=~ 1+\frac{g(n)}{c+\Sigma g(n)} ~\to ~ 1\qquad \mbox{as }n\to_{\mathbb{N}}\infty, \end{aligned}$$

and hence (6.30) holds whenever p = 0.

Suppose now that p = 1 +deg g ≥ 1. In this case, we have that Δp g lies in \(\mathcal {D}^0\cap \mathcal {K}^0\). By the uniqueness Theorem 3.1, we also have

$$\displaystyle \begin{aligned} \Delta^p\Sigma g ~=~ c_p+\Sigma\Delta^p g \end{aligned}$$

for some \(c_p\in \mathbb {R}\), and it is clear (by minimality of p) that this latter function cannot vanish at infinity. Moreover, we can show as above that the sequence n↦ Σ Δp g(n) is eventually strictly monotone. In view of the first case, we then have

$$\displaystyle \begin{aligned} \frac{\Delta^p\Sigma g(n+1)}{\Delta^p\Sigma g(n)} ~=~ \frac{c_p+\Sigma\Delta^p g(n+1)}{c_p+\Sigma\Delta^p g(n)} ~\to ~ 1\qquad \mbox{as }n\to_{\mathbb{N}}\infty. \end{aligned}$$

Let us now show that the sequence

$$\displaystyle \begin{aligned} n ~ \mapsto ~ \frac{c+\Delta^{p-1}\Sigma g(n+1)}{c+\Delta^{p-1}\Sigma g(n)} \end{aligned}$$

exists for large values of n and converges to 1. By minimality of p, the function Δp−1 Σg lies in \(\mathcal {D}^2_{\mathbb {N}}\setminus \mathcal {D}^1_{\mathbb {N}}\) and hence the sequence n↦ Δp−1 Σg(n) is unbounded. Moreover, we can show as above that this sequence is eventually strictly monotone. Hence, the sequence above eventually exists and, using the Stolz-Cesàro theorem (see Lemma 5.20), we have that

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\frac{c+\Delta^{p-1}\Sigma g(n+1)}{c+\Delta^{p-1}\Sigma g(n)} ~=~ \lim_{n\to\infty}\frac{\Delta^p\Sigma g(n+1)}{\Delta^p\Sigma g(n)} ~=~ 1. \end{aligned}$$

Iterating this process, we finally see that condition (6.30) holds for any \(p\in \mathbb {N}\).

We can now easily see that

$$\displaystyle \begin{aligned} c+\Sigma g(x+a) ~\sim ~ c+\Sigma g(x)\qquad \mbox{as }x\to\infty. \end{aligned} $$
(6.32)

Indeed, this result clearly holds if both x and a are integers. For instance, we have

$$\displaystyle \begin{aligned} c+\Sigma g(n+2) ~\sim ~ c+\Sigma g(n+1) ~\sim ~ c+\Sigma g(n)\qquad \mbox{as }n\to_{\mathbb{N}}\infty. \end{aligned}$$

Otherwise, assuming for instance that Σg is eventually increasing and nonnegative, for sufficiently large x we have

$$\displaystyle \begin{aligned} \frac{c+\Sigma g(\lfloor x+a\rfloor)}{c+\Sigma g(\lceil x\rceil)} ~\leq ~ \frac{c+\Sigma g(x+a)}{c+\Sigma g(x)} ~\leq ~ \frac{c+\Sigma g(\lceil x+a\rceil)}{c+\Sigma g(\lfloor x\rfloor)}{\,}, \end{aligned}$$

and (6.32) then follows by the squeeze theorem.

Finally, assuming again that Σg is eventually increasing and nonnegative, for sufficiently large x we have

$$\displaystyle \begin{aligned} 1 ~=~ \frac{c+\Sigma g(x)}{c+\Sigma g(x)} ~\leq ~ \frac{c+\int_x^{x+1}\Sigma g(t){\,}dt}{c+\Sigma g(x)} ~\leq ~ \frac{c+\Sigma g(x+1)}{c+\Sigma g(x)} \end{aligned}$$

and, using again the squeeze theorem, we immediately obtain the first claimed asymptotic equivalence.

Now, if g does not lie in \(\mathcal {D}^{-1}_{\mathbb {N}}\), then Σg(x) tends to infinity as x →∞. Using (6.11), we then have

$$\displaystyle \begin{aligned} \frac{c+\int_1^x g(t){\,}dt}{\Sigma g(x+a)} ~=~ \frac{c-\sigma[g]}{\Sigma g(x+a)} + \frac{\int_x^{x+1} \Sigma g(t){\,}dt}{\Sigma g(x+a)} ~\to ~ 1\qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

which completes the proof. □

Remark 6.21

Let us show that the assumption on the function c +  Σg cannot be ignored in Proposition 6.20. Consider the functions \(f\colon \mathbb {R}_+\to \mathbb {R}\) and \(g\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equations

$$\displaystyle \begin{aligned} f(x) ~=~ \frac{x-1}{2^x}\left(1+\frac{1}{4}\,\sin x\right)\quad \mbox{and}\quad g(x) ~=~ \Delta f(x)\qquad \mbox{for }x>0. \end{aligned}$$

It is clear that f lies in \(\mathcal {D}^0_{\mathbb {N}}\) and that g lies in \(\mathcal {D}^{-1}_{\mathbb {N}}\). Moreover, it is not difficult to see that the inequalities

$$\displaystyle \begin{aligned} -2^{x+2}f'(x) ~\geq ~ x\qquad \mbox{and}\qquad 2^{x+4}g'(x) ~\geq ~ x \end{aligned}$$

eventually hold, which shows that both f and g lie in \(\mathcal {K}^0\). By the uniqueness theorem it follows that f =  Σg. However, we can readily see that the sequence

$$\displaystyle \begin{aligned} n ~\mapsto ~\frac{\Sigma g(n+1)}{\Sigma g(n)} \end{aligned}$$

does not converge, which shows that (6.30) does not hold when c = 0. It is then possible to show that the equivalence (6.31) does not hold either.

Now, to see that the last asymptotic equivalence in Proposition 6.20 need not hold if g lies in \(\mathcal {D}^{-1}_{\mathbb {N}}\), take for instance

$$\displaystyle \begin{aligned} g(x) ~=~ \frac{2}{(x+1)(x+2)}\qquad \mbox{and}\qquad \Sigma g(x) ~=~ \frac{x-1}{x+1}{\,}. \end{aligned}$$

We then have

$$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{c+\int_1^x g(t){\,}dt}{\Sigma g(x+a)} ~=~ c+\ln\frac{9}{4}{\,}. \end{aligned}$$

\(\lozenge \)
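The closing example of Remark 6.21 can be verified directly. The following Python sketch (not part of the original text) checks the difference equation and the value of the limit:

```python
from fractions import Fraction
import math

# With g(x) = 2/((x+1)(x+2)) and Sigma g(x) = (x-1)/(x+1), the difference
# equation (Sigma g)(x+1) - (Sigma g)(x) = g(x) holds, yet the quotient in
# the last display tends to c + ln(9/4), not to 1.
def g(x):
    return 2 / ((x + 1) * (x + 2))

def sigma_g(x):
    return (x - 1) / (x + 1)

# Exact check of the difference equation at a few rational points.
for x in [Fraction(1), Fraction(5, 2), Fraction(7)]:
    assert sigma_g(x + 1) - sigma_g(x) == g(x)

# int_1^x g(t) dt = 2 ln((x+1)/(x+2)) + 2 ln(3/2), while Sigma g(x+a) -> 1.
c, a, x = 0.0, 0.0, 1e8
quotient = (c + 2 * math.log((x + 1) / (x + 2)) + 2 * math.log(1.5)) / sigma_g(x + a)
assert abs(quotient - (c + math.log(9 / 4))) < 1e-6
```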

6.7 The Gregory Summation Formula Revisited

Let \(g\in \mathcal {C}^0\), \(q\in \mathbb {N}\), and let 1 ≤ m ≤ n be integers. Integrating both sides of identity (3.8) on x ∈ (0, 1), we immediately obtain the following identity

$$\displaystyle \begin{aligned} \int_m^ng(t){\,}dt ~=~ \sum_{k=m}^{n-1}g(k)+\sum_{j=1}^qG_j(\Delta^{j-1}g(n)-\Delta^{j-1}g(m))+R^q_{m,n}[g]{\,}, \end{aligned} $$
(6.33)

where

$$\displaystyle \begin{aligned} R^q_{m,n}[g] ~=~ \int_0^1\sum_{k=m}^{n-1}\rho_k^{q+1}[g](t){\,}dt ~=~ \int_0^1(f^q_m[g](t)-f^q_n[g](t)){\,}dt. \end{aligned} $$
(6.34)

Identity (6.33) is nothing other than Gregory’s summation formula (see, e.g., [17, 50, 73]) with an integral form of the remainder. Note that, just like identity (2.10), Eq. (6.33) is a pure identity in the sense that it holds without any restriction on the form of g(x), except that here we require g to be continuous.

Combining (6.14) with (6.34) we immediately see that this identity can be simply written in terms of the generalized Binet function as

$$\displaystyle \begin{aligned} \sum_{k=m}^{n-1}J^{q+1}[g](k) + R^q_{m,n}[g] ~=~ 0{\,}. \end{aligned} $$
(6.35)

Equivalently, if g lies in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\), using (6.19) and (6.34) we see that this identity can also take the form

$$\displaystyle \begin{aligned} J^{q+1}[\Sigma g](n)-J^{q+1}[\Sigma g](m) +R^q_{m,n}[g] ~=~ 0{\,}. \end{aligned} $$
(6.36)

The next lemma, which is yet another straightforward consequence of Lemma 2.7, provides an upper bound for \(|R^q_{m,n}[g]|\) when g is q-convex or q-concave on [m, ∞). Under this latter assumption, we can then use Gregory’s formula (6.33) as a quadrature method for the numerical computation of the integral of g over the interval [m, n).

Lemma 6.22

Let g lie in \(\mathcal {C}^0\cap \mathcal {K}^q\) for some \(q\in \mathbb {N}\) and let \(m\in \mathbb {N}^*\) be such that g is q-convex or q-concave on [m, ∞). Then, for any integer n ≥ m, we have

$$\displaystyle \begin{aligned} |R^q_{m,n}[g]| ~ \leq ~ \overline{G}_q{\,}|\Delta^q g(n)-\Delta^q g(m)|. \end{aligned} $$
(6.37)

Proof

This result is an immediate consequence of Lemma 2.7. Indeed, we can write

$$\displaystyle \begin{aligned} |R^q_{m,n}[g]| ~=~ \left|\sum_{k=m}^{n-1}\int_0^1\rho^{q+1}_k[g](t){\,}dt\right| ~\leq ~ \overline{G}_q{\,}\left|\sum_{k=m}^{n-1}\Delta^{q+1}g(k)\right|, \end{aligned}$$

where the latter sum clearly telescopes to \(\Delta ^q g(n)-\Delta ^q g(m)\). □

Example 6.23

Let us compute numerically the integral

$$\displaystyle \begin{aligned} I ~=~ \int_{\pi}^{2\pi}\ln x{\,}dx ~=~ 4.809854526737\ldots \end{aligned}$$

using Gregory’s summation formula (6.33) and the upper bound (6.37) of its remainder. Using an appropriate linear change of variable, we obtain

$$\displaystyle \begin{aligned} I ~=~ \int_1^ng(t){\,}dt,\qquad \mbox{where}\quad g(t) ~=~ \frac{\pi}{n-1}\,\ln\left(\frac{\pi}{n-1}{\,}(t-1)+\pi\right). \end{aligned}$$

Taking n = 20 and q = 10 for instance, we obtain

$$\displaystyle \begin{aligned} I ~\approx ~ \sum_{k=1}^{19}g(k)+\sum_{j=1}^{10}G_j(\Delta^{j-1}g(20)-\Delta^{j-1}g(1)) ~=~ 4.809854526746\ldots \end{aligned}$$

and (6.37) gives \(|R^{10}_{1,20}[g]|\leq 5.9\times 10^{-11}\). \(\lozenge \)
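Example 6.23 can be reproduced in a few lines. The following Python sketch (not part of the original text) computes the Gregory coefficients \(G_j=\int _0^1\binom {t}{j}{\,}dt\) exactly and applies formula (6.33) with m = 1, n = 20, q = 10:

```python
import math
from fractions import Fraction

# Gregory coefficients G_j = int_0^1 binom(t, j) dt, computed exactly by
# expanding t(t-1)...(t-j+1)/j! in the monomial basis and integrating.
def gregory_coefficient(j):
    poly = [Fraction(1)]  # coefficients of prod_{i<j} (t - i), ascending powers
    for i in range(j):
        poly = [Fraction(0)] + poly           # multiply by t ...
        for k in range(len(poly) - 1):
            poly[k] -= i * poly[k + 1]        # ... minus i times the old polynomial
    integral = sum(c / Fraction(k + 1) for k, c in enumerate(poly))
    return integral / math.factorial(j)

def fdiff(f, x, j):
    # j-th forward difference of f at x with unit step
    return sum((-1) ** (j - i) * math.comb(j, i) * f(x + i) for i in range(j + 1))

n, q = 20, 10

def g(t):
    # linear change of variable mapping [1, n] onto [pi, 2*pi]
    return math.pi / (n - 1) * math.log(math.pi / (n - 1) * (t - 1) + math.pi)

approx = sum(g(k) for k in range(1, n)) + sum(
    float(gregory_coefficient(j)) * (fdiff(g, n, j - 1) - fdiff(g, 1, j - 1))
    for j in range(1, q + 1)
)
exact = 2 * math.pi * math.log(2 * math.pi) - math.pi * math.log(math.pi) - math.pi
assert abs(approx - exact) < 1e-9  # (6.37) gives the sharper bound ~5.9e-11
```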

In the following result, we give sufficient conditions on the function g for the sequence \(q\mapsto R^q_{m,n}[g]\) to converge to zero. Gregory’s formula (6.33) then takes a special form.

Proposition 6.24

Let \(g\in \mathcal {C}^0\cap \mathcal {K}^{\infty }\), \(p\in \mathbb {N}\) , and let 1 ≤ m ≤ n be integers. Suppose that, for every integer q ≥ p, the function g is q-convex or q-concave on [m, ∞). Suppose also that the sequence \(q\mapsto \Delta ^q g(n)-\Delta ^q g(m)\) is bounded. Then we have

$$\displaystyle \begin{aligned} R_{m,n}^q[g] ~\to~ 0 \qquad \mathit{\mbox{as }}q\to_{\mathbb{N}}\infty, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \int_m^ng(t){\,}dt ~=~ \sum_{k=m}^{n-1}g(k)+\sum_{j=1}^{\infty}G_j(\Delta^{j-1}g(n)-\Delta^{j-1}g(m)){\,}. \end{aligned}$$

If g lies in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) , then the latter identity also takes the form

$$\displaystyle \begin{aligned} \Sigma g(n)-\Sigma g(m) ~=~ \int_m^ng(t){\,}dt-\sum_{j=1}^{\infty}G_j(\Delta^{j-1}g(n)-\Delta^{j-1}g(m)){\,}. \end{aligned}$$

Proof

Under the assumptions of this proposition, the sequence \(q\mapsto R^q_{m,n}[g]\) converges to zero by Lemma 6.22. (Recall that the sequence \(n\mapsto \overline {G}_n\) converges to zero.) The result then immediately follows from Gregory’s formula (6.33). The last part then follows from identity (5.2). □

Example 6.25

Taking \(g(x)=\ln x\) and m = p = 1 in Proposition 6.24, we obtain the following identity

$$\displaystyle \begin{aligned} \ln n! ~=~ 1-n+\left(n+\frac{1}{2}\right)\ln n+\frac{1}{12}\ln\left(\frac{n+1}{2n}\right)-\frac{1}{24}\ln\left(\frac{4n(n+2)}{3(n+1)^2}\right)+\cdots \end{aligned}$$

which holds for any \(n\in \mathbb {N}^*\). \(\lozenge \)
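As a quick numerical sanity check (a sketch, not from the original text), the successive correction terms displayed above do drive the partial sums toward ln n!; we test n = 5 with the first three partial sums:

```python
import math

# Partial sums of the series in Example 6.25 for n = 5: the leading part
# 1 - n + (n + 1/2) ln n, then the two displayed correction terms.
n = 5
exact = math.lgamma(n + 1)  # ln n!
partials = [1 - n + (n + 0.5) * math.log(n)]
partials.append(partials[-1] + math.log((n + 1) / (2 * n)) / 12)
partials.append(partials[-1] - math.log(4 * n * (n + 2) / (3 * (n + 1) ** 2)) / 24)
errors = [abs(s - exact) for s in partials]
assert errors[0] > errors[1] > errors[2]  # each term improves the approximation
```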

A Geometric Interpretation of Gregory’s Formula

For any \(g\in \mathcal {C}^0\) and any \(q\in \mathbb {N}\), we let \(\overline {P}_q[g]\colon [1,\infty )\to \mathbb {R}\) denote the piecewise polynomial function whose restriction to any interval [k, k + 1), with \(k\in \mathbb {N}^*\), is the interpolating polynomial of g with nodes at k, k + 1, …, k + q. That is,

$$\displaystyle \begin{aligned} \overline{P}_q[g](x) ~=~ P_q[g](k,k+1,\ldots,k+q;x),\qquad x\in [k,k+1), \end{aligned} $$
(6.38)

or equivalently, using (2.9),

$$\displaystyle \begin{aligned} \begin{array}{rcl} \overline{P}_q[g](x) & =&\displaystyle P_q[g](\lfloor x\rfloor,\lfloor x\rfloor+1,\ldots,\lfloor x\rfloor+q;x)\\ & =&\displaystyle \sum_{j=0}^q{\textstyle{{{\{x\}}\choose{j}}}}\,\Delta^jg(\lfloor x\rfloor){\,},\qquad x\geq 1. \end{array} \end{aligned} $$

In the following proposition, we provide an integral expression for the remainder \(R^q_{m,n}[g]\) in terms of the function \(\overline {P}_q[g]\).

Proposition 6.26

For any \(g\in \mathcal {C}^0\) , any \(q\in \mathbb {N}\) , and any integers 1 ≤ m ≤ n, we have

$$\displaystyle \begin{aligned} R^q_{m,n}[g] ~=~ \int_m^n(g(t)-\overline{P}_q[g](t)){\,}dt. \end{aligned} $$
(6.39)

Proof

Using (2.11) and (6.14), we obtain

$$\displaystyle \begin{aligned} -J^{q+1}[g](k) ~=~ \int_0^1\rho_k^{q+1}[g](t){\,}dt ~=~ \int_k^{k+1}(g(t)-\overline{P}_q[g](t)){\,}dt. \end{aligned}$$

The result then follows from (6.35). □

Proposition 6.26 immediately provides an interesting interpretation of Gregory’s formula as a quadrature method. It actually shows that Gregory’s formula approximates the integral of g over the interval [m, n) by replacing g with the piecewise polynomial function \(\overline {P}_q[g]\). In particular, the remainder \(R^q_{m,n}[g]\) reduces to zero whenever g is a polynomial of degree less than or equal to q.

We also observe that Gregory’s formula reduces to the “left” rectangle method (left Riemann sum) when q = 0, and the trapezoidal rule when q = 1. However, it does not reduce to Simpson’s rule when q = 2. In fact, Gregory’s formula does not correspond to a Newton-Cotes quadrature rule when q ≥ 2.

Now, if g is q-convex or q-concave on [m, ∞), then for any k ∈{m, m + 1, …, n − 1} and any t ∈ [0, 1), using Lemma 2.7 and identity (2.11) we obtain

$$\displaystyle \begin{aligned} 0 ~\leq ~ \pm (-1)^q\,\rho^{q+1}_k[g](t) ~=~ \pm (-1)^q\left(g(k+t)-\overline{P}_q[g](k+t)\right), \end{aligned}$$

where ± stands for 1 or − 1 according to whether g is q-convex or q-concave on [m, ∞). This observation provides the following additional geometric interpretation. It shows that, on the interval [k, k + 1), the graph of g lies over or under that of \(\overline {P}_q[g]\) according to whether \(\pm (-1)^q\) is 1 or − 1. As an immediate consequence, the quantity \(|J^{q+1}[g](k)|\) is precisely the surface area between both graphs over the interval [k, k + 1) while the remainder \(|R^q_{m,n}[g]|\) is the surface area between both graphs over the interval [m, n).

Example 6.27

With the function \(g(x)=\ln x\) and the parameter q = 1 we associate the piecewise linear function

$$\displaystyle \begin{aligned} \overline{P}_1[g](x) ~=~ \ln\lfloor x\rfloor + (x-\lfloor x\rfloor)\ln\left(1+\frac{1}{\lfloor x\rfloor}\right). \end{aligned}$$

Since g is concave, for any integer n ≥ 1 the graph of g on [1, n) lies over (or on) that of \(\overline {P}_1[g]\), which is the polygonal line through the points (k, g(k)) for k = 1, …, n. The value (see (6.36))

$$\displaystyle \begin{aligned} R^1_{1,n}[g] ~=~ J(1)-J(n) ~=~ -\ln\Gamma(n)+\left(n-\frac{1}{2}\right)\ln n -n+1{\,}, \end{aligned}$$

where J(x) is Binet’s function defined in (6.13), is then nothing other than the remainder in the trapezoidal rule on [1, n) with the integer nodes 1, …, n. Geometrically, it measures the surface area between the graph of g and the polygonal line. \(\lozenge \)
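The closed form for the trapezoidal-rule remainder in Example 6.27 is easy to confirm numerically; the following Python sketch (not from the original text) does so for several values of n:

```python
import math

# Example 6.27: the remainder of the trapezoidal rule for int_1^n ln t dt
# with the integer nodes 1, ..., n equals -ln Gamma(n) + (n - 1/2) ln n - n + 1.
def trapezoid_remainder(n):
    integral = n * math.log(n) - n + 1                                  # int_1^n ln t dt
    trapezoid = sum(math.log(k) for k in range(2, n)) + 0.5 * math.log(n)
    return integral - trapezoid

for n in [2, 5, 20, 100]:
    closed_form = -math.lgamma(n) + (n - 0.5) * math.log(n) - n + 1
    assert abs(trapezoid_remainder(n) - closed_form) < 1e-10
    assert trapezoid_remainder(n) > 0  # ln is concave: its graph lies over the polygonal line
```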

Alternative Integral Form of the Remainder

The following proposition yields an alternative integral form of the remainder \(R^q_{m,n}[g]\) when g lies in \(\mathcal {C}^{q+1}\) for some \(q\in \mathbb {N}^*\). Consider first the (kernel) function \(K^q_{m,n}\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation

$$\displaystyle \begin{aligned} K^q_{m,n}(t) ~=~ \frac{1}{q!}{\,}R^q_{m,n}[(\boldsymbol{\cdot} -t)^q_+]\qquad \mbox{ for }t\in\mathbb{R}_+. \end{aligned}$$

It is not difficult to show that this function lies in \(\mathcal {C}^{q-1}\) and has compact support [m, n + q − 1].

Proposition 6.28

Suppose that g lies in \(\mathcal {C}^{q+1}\) for some \(q\in \mathbb {N}^*\) and let 1 ≤ m ≤ n be integers. Then we have

$$\displaystyle \begin{aligned} R^q_{m,n}[g] ~=~ \int_m^{n+q-1} K^q_{m,n}(t){\,}D^{q+1}g(t){\,}dt. \end{aligned}$$

Proof

By Taylor’s theorem, the following identity

$$\displaystyle \begin{aligned} g(x) ~=~ P_q(x) + \int_m^{n+q-1}\frac{(x-t)^q_+}{q!}{\,}D^{q+1}g(t){\,}dt \end{aligned}$$

holds on the interval [m, n + q − 1] for some polynomial \(P_q\) of degree less than or equal to q. The result then follows from the definition of the remainder \(R^q_{m,n}[g]\) and the fact that \(R^q_{m,n}[P_q]=0\). □

Interestingly, if the function \(K^q_{m,n}\) does not change in sign (and we conjecture that \((-1)^q{\,}K^q_{m,n}\) is nonnegative), then by the mean value theorem for definite integrals the remainder also takes the form

$$\displaystyle \begin{aligned} R^q_{m,n}[g] ~=~ D^{q+1}g(\xi)\,\int_m^{n+q-1}K^q_{m,n}(t){\,}dt \end{aligned}$$

for some ξ ∈ [m, n + q − 1].

Remark 6.29

We observe that Jordan [50, p. 285] claimed that

$$\displaystyle \begin{aligned} \mbox{``}~R^q_{m,n}[g] ~=~ G_{q+1}(n-m)\,\Delta^{q+1} g(\xi)~\mbox{''} \end{aligned}$$

for some ξ ∈ (m, n). However, taking for instance \(g(x)=x^2\) and (q, m, n) = (0, 1, 2), we can see that this form of the remainder is not correct. Nevertheless, several examples suggest that Jordan’s statement could possibly be corrected by assuming that ξ ∈ (m − 1, n − 1). This question thus remains open. \(\lozenge \)
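The counterexample in Remark 6.29 can be worked out exactly. The following Python sketch (not part of the original text) uses g(x) = x² and (q, m, n) = (0, 1, 2):

```python
from fractions import Fraction

# Here P_0[g] is the constant g(1) = 1 on [1, 2), so by (6.39) the remainder is
# R^0_{1,2}[g] = int_1^2 (t^2 - 1) dt = 4/3.
R = (Fraction(2) ** 3 - Fraction(1) ** 3) / 3 - 1
assert R == Fraction(4, 3)

# Jordan's claimed form G_1 (n - m) Delta g(xi) = (2 xi + 1)/2 forces xi = 5/6,
# which lies in (m - 1, n - 1) = (0, 1) but not in (m, n) = (1, 2).
xi = (2 * R - 1) / 2
assert xi == Fraction(5, 6)
assert Fraction(0) < xi < Fraction(1)
assert not (Fraction(1) < xi < Fraction(2))
```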

General Gregory’s Formula and Euler-Maclaurin’s Formula

The following proposition provides Gregory’s formula in its general form using our integral expression for the remainder.

Proposition 6.30 (General Form of Gregory’s Formula)

Let \(a\in \mathbb {R}\), \(n,q\in \mathbb {N}\) , h > 0, and \(f\in \mathcal {C}^0([a,\infty ))\) . Then we have

$$\displaystyle \begin{aligned} \int_a^{a+nh}f(x){\,}dx ~=~ h\sum_{k=0}^{n-1}f(a+kh)+h\sum_{j=1}^qG_j\left((\Delta_{[h]}^{j-1}f)(a+nh)-(\Delta_{[h]}^{j-1}f)(a)\right)+h{\,}R^q_{1,n+1}[f^h_a]{\,}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} R^q_{1,n+1}[f^h_a] ~=~ \int_0^1\sum_{k=1}^n\rho_k^{q+1}[f^h_a](t){\,}dt\quad \mathit{\mbox{and}}\quad f^h_a(x) ~=~ f(a+(x-1)h). \end{aligned}$$

Moreover, if f is q-convex or q-concave on [a, ∞), then

$$\displaystyle \begin{aligned} |R^q_{1,n+1}[f^h_a]| ~\leq ~ \overline{G}_q\left|(\Delta_{[h]}^{q}f)(a+nh)-(\Delta_{[h]}^{q}f)(a)\right|. \end{aligned}$$

Here, \(\Delta _{[h]}\) denotes the forward difference operator with step h > 0.

Proof

This formula can be obtained immediately from (6.33) and (6.34) replacing n with n + 1 and then setting m = 1 and g(x) = f(a + (x − 1)h). The last part follows from Lemma 6.22. □

The general Gregory formula is often compared with the corresponding Euler-Maclaurin summation formula. We will use the latter in Chap. 8, so we now state it in its general form (for background see, e.g., Apostol [8], Gel’fond [39], Lampret [62], Mariconda and Tonolo [67], and Srivastava and Choi [93]).

Recall first that the Bernoulli numbers \(B_0,B_1,B_2,\ldots \) are defined implicitly by the single equation (see, e.g., Gel’fond [39, Chapter 4] and Graham et al. [41, p. 284])

$$\displaystyle \begin{aligned} \sum_{j=0}^m{\textstyle{{{m+1}\choose{j}}}}B_j ~=~ 0^m,\qquad m\in\mathbb{N}{\,}. \end{aligned} $$
(6.40)

The first few values of \(B_n\) are: \(1, -\frac {1}{2}, \frac {1}{6}, 0, -\frac {1}{30}, 0,\ldots \). Recall also that, for any \(n\in \mathbb {N}\), the nth degree Bernoulli polynomial \(B_n(x)\) is defined by the equation

$$\displaystyle \begin{aligned} B_n(x) ~=~ \sum_{k=0}^n{\textstyle{{{n}\choose{k}}}}{\,}B_{n-k}{\,}x^k\qquad \mbox{for }x\in\mathbb{R}. \end{aligned}$$

Proposition 6.31 (Euler-Maclaurin’s Formula)

Let \(N\in \mathbb {N}^*\), \(f\in \mathcal {C}^1([a,b])\) , and h = (b − a)∕N, for some real numbers a < b. Then we have

$$\displaystyle \begin{aligned} \sum_{k=0}^N f(a+kh) ~=~ \frac{1}{h}\int_a^b f(x){\,}dx+\frac{f(a)+f(b)}{2}+h\int_0^N B_1(\{t\}){\,}f'(a+th){\,}dt{\,}. \end{aligned}$$

If, in addition, \(f\in \mathcal {C}^{2q}([a,b])\) for some \(q\in \mathbb {N}^*\) , then

$$\displaystyle \begin{aligned} \sum_{k=0}^N f(a+kh) ~=~ \frac{1}{h}\int_a^b f(x){\,}dx+\frac{f(a)+f(b)}{2}+\sum_{j=1}^q h^{2j-1}\,\frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(b)-f^{(2j-1)}(a)\right)+R{\,}, \end{aligned}$$

where

$$\displaystyle \begin{aligned} R ~=~ - h^{2q+1}\int_0^N\frac{B_{2q}(\{t\})}{(2q)!}{\,}f^{(2q)}(a+th){\,}dt \end{aligned}$$

and

$$\displaystyle \begin{aligned} |R| ~\leq ~ h^{2q}{\,}\frac{|B_{2q}|}{(2q)!}\int_a^b|f^{(2q)}(x)|{\,}dx{\,}. \end{aligned}$$

Here \(f\in \mathcal {C}^k([a,b])\) means that \(f\in \mathcal {C}^k(I)\) for some open interval I containing [a, b].
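The remainder bound in Proposition 6.31 can be checked numerically. The following Python sketch (not part of the original text) uses the standard form of the Euler-Maclaurin expansion, consistent with the stated remainder R and its bound, for f = exp on [0, 1] with N = 10 and q = 2:

```python
import math

# Euler-Maclaurin sanity check (a sketch):
#   sum_{k=0}^{N} f(a + kh)  =  (1/h) int_a^b f + (f(a) + f(b))/2
#     + sum_{j=1}^{q} h^{2j-1} B_{2j}/(2j)! (f^{(2j-1)}(b) - f^{(2j-1)}(a)) + R,
# with |R| <= h^{2q} |B_{2q}|/(2q)! int_a^b |f^{(2q)}|.  Here f = exp, so every
# derivative is exp and int_0^1 f^{(m)} = e - 1.
a, b, N, q = 0.0, 1.0, 10, 2
h = (b - a) / N
bernoulli = {2: 1 / 6, 4: -1 / 30}

lhs = sum(math.exp(a + k * h) for k in range(N + 1))
rhs = (math.e - 1) / h + (1 + math.e) / 2  # (1/h) int_0^1 e^t dt + endpoint term
for j in range(1, q + 1):
    rhs += h ** (2 * j - 1) * bernoulli[2 * j] / math.factorial(2 * j) * (math.e - 1)

bound = h ** (2 * q) * abs(bernoulli[2 * q]) / math.factorial(2 * q) * (math.e - 1)
assert abs(lhs - rhs) <= bound
```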

Remark 6.32

We observe (to paraphrase Jordan [50, p. 285]) that Euler-Maclaurin’s formula is more advantageous than Gregory’s formula if we deal with functions whose derivatives are less complicated than their differences. However, there are functions for which Euler-Maclaurin’s formula leads to divergent series while the corresponding Gregory’s formula-based series (see Proposition 6.24) are convergent. For instance, this may be due to the fact that, for any x > 0, the sequence \(n\mapsto D^n\frac {1}{x}\) is unbounded while the sequence \(n\mapsto \Delta ^n\frac {1}{x}\) converges to zero.\(\lozenge \)

6.8 Generalized Euler’s Constant

In this section, we introduce and discuss an analogue of Euler’s constant for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\). We first consider a lemma.

Lemma 6.33

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let \(m\in \mathbb {N}^*\) . Then the sequence \(n\mapsto R^p_{m,n}[g]\) , n ≥ m, converges. Denoting its limit by \(R^p_{m,\infty }[g]\) , we have

$$\displaystyle \begin{aligned} R^p_{m,\infty}[g] ~=~ J^{p+1}[\Sigma g](m). \end{aligned}$$

Proof

The proof is an immediate consequence of (6.36) and the generalized Stirling formula (Theorem 6.13). □

Under the assumptions of Lemma 6.33, using (6.34), (6.35), and (6.39) we immediately obtain the following identities

$$\displaystyle \begin{aligned} \begin{array}{rcl} R^p_{m,\infty}[g] & =&\displaystyle \sum_{k=m}^{\infty}\int_0^1\rho^{p+1}_k[g](t){\,}dt ~=~ \int_0^1\sum_{k=m}^{\infty}\rho^{p+1}_k[g](t){\,}dt\\ & =&\displaystyle \int_0^1(f^p_{m}[g](t)-\Sigma g(t)){\,}dt \end{array} \end{aligned} $$

and

$$\displaystyle \begin{aligned} R^p_{m,\infty}[g] ~=~ -\sum_{k=m}^{\infty}J^{p+1}[g](k) ~=~ \int_{m}^{\infty}(g(t)-\overline{P}_p[g](t)){\,}dt. \end{aligned} $$
(6.41)

Moreover, if g is p-convex or p-concave on [m, ∞), the inequality (6.37) reduces to

$$\displaystyle \begin{aligned} |R^p_{m,\infty}[g]| ~=~ |J^{p+1}[\Sigma g](m)| ~\leq ~ \overline{G}_p{\,}|\Delta^pg(m)|{\,}, \end{aligned} $$
(6.42)

which is also an immediate consequence of Corollary 6.12 (where a tighter inequality is also provided when p ≥ 1).

Let us now provide a geometric interpretation of the remainder \(R^p_{m,\infty }[g]\) when g is p-convex or p-concave on [m, ∞). Suppose for instance that g is p-convex on [m, ∞). The interpretation of Gregory’s formula discussed in Sect. 6.7 shows that, on the whole of the interval [m, ∞), the graph of g lies over or under that of \(\overline {P}_p[g]\) according to whether p is even or odd, and the remainder \(|R^p_{m,\infty }[g]|\) is precisely the surface area between both graphs. Interestingly, the fact that this surface area converges to zero as \(m\to _{\mathbb {N}}\infty \) by (6.42) provides a direct interpretation of the restriction of the generalized Stirling formula to integer values.

This interpretation is particularly visual when p = 0 or p = 1. Consider for instance the case p = 1 and suppose that g is concave on [m, ∞) (e.g., \(g(x)=\ln x\)). Then, the graph of g on [m, ∞) lies over (or on) the polygonal line through the points (k, g(k)) for all integers k ≥ m. The value \(|R^p_{m,\infty }[g]|\) is then the surface area between the graph of g and this polygonal line. It is also the absolute value of the remainder in the trapezoidal rule on [m, ∞).

We are now able to introduce an analogue of Euler’s constant for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\). We call it the generalized Euler constant.

Definition 6.34 (Generalized Euler’s Constant)

The generalized Euler constant associated with a function \(g\in \mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) is the number

$$\displaystyle \begin{aligned} \gamma[g] ~=~ -R^p_{1,\infty}[g] ~=~ -J^{p+1}[\Sigma g](1){\,},{} \end{aligned}$$

where p = 1 +deg g.

For instance, if g lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\), then using (6.33) we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma[g] & =&\displaystyle \lim_{n\to\infty}\left(\sum_{k=1}^{n-1}g(k)-\int_1^ng(t){\,}dt\right){}\\ & =&\displaystyle \sum_{k=1}^{\infty}\left(g(k)-\int_k^{k+1}g(t){\,}dt\right), \end{array} \end{aligned} $$
(6.43)

and this value represents the remainder in the “left” rectangle method on [1, ∞) with the integer nodes k = 1, 2, …. Similarly, if g lies in \(\mathcal {C}^0\cap \mathcal {D}^1\cap \mathcal {K}^1\) and deg g = 0, then we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma[g] & =&\displaystyle \lim_{n\to\infty}\left(\sum_{k=1}^{n-1}g(k)-\int_1^ng(t){\,}dt+\frac{1}{2}{\,}g(n)-\frac{1}{2}{\,}g(1)\right){}\\ & =&\displaystyle \sum_{k=1}^{\infty}\left(g(k)-\int_k^{k+1}g(t){\,}dt+\frac{1}{2}{\,}\Delta g(k)\right), \end{array} \end{aligned} $$
(6.44)

and this value represents the remainder in the trapezoidal rule on [1, ∞) with the integer nodes k = 1, 2, ….
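For a concrete check of (6.43) (a minimal sketch, not part of the original text, with the assumption g(x) = 1∕x so that deg g = −1), the partial sums of the left-rectangle remainders approach Euler’s constant γ ≈ 0.5772:

```python
import math

def gamma_g(g, G, terms=1_000_000):
    """Partial sum of (6.43): sum of g(k) minus the integral of g over
    [k, k+1], with an antiderivative G of g supplied in closed form."""
    return sum(g(k) - (G(k + 1) - G(k)) for k in range(1, terms + 1))

# g(x) = 1/x, antiderivative ln x; the sum tends to Euler's constant.
approx = gamma_g(lambda x: 1 / x, math.log)
print(approx)  # ~ 0.5772
```

Each term equals the area between the graph of g and the “left” rectangle on [k, k + 1], matching the geometric reading given above.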

Thus defined, the number γ[g] extends to every function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) not only the classical Euler constant γ (obtained when \(g(x)=\frac {1}{x}\)), but also the generalized Euler constant associated with a positive and strictly decreasing function g, as defined in (6.43) (see, e.g., Apostol [8] and Finch [37, Section 1.5.3]). Moreover, as we will see in Sect. 8.2, this number plays a central role in the Weierstrassian form of Σg (which also justifies the choice m = 1 in the definition of γ[g]).

The definition of γ[g] does not require g to be p-convex or p-concave on [1, ∞). However, if this latter condition holds, then by (6.42) we have the inequality

$$\displaystyle \begin{aligned} |\gamma[g]| ~\leq ~ \overline{G}_p{\,}|\Delta^pg(1)|\end{aligned} $$
(6.45)

and by Corollary 6.12 the following tighter inequality also holds when p ≥ 1:

$$\displaystyle \begin{aligned} |\gamma[g]| ~\leq ~ \int_0^1\left|{\textstyle{{{t-1}\choose{p}}}}\right|\left|\Delta^{p-1}g(t+1)-\Delta^{p-1}g(1)\right|{\,}dt.\end{aligned} $$
(6.46)

We also provide and discuss finer bounds for γ[g] in Appendix E (see Remark E.7).

Example 6.35

If g(x) = 1∕x, then γ[g] reduces to Euler’s constant γ, as expected. Indeed, in this case we obtain

$$\displaystyle \begin{aligned} \gamma[g] ~=~ -J^1[\psi](1) ~=~ \gamma.\end{aligned} $$

Using (6.43), we then retrieve the well-known formula

$$\displaystyle \begin{aligned} \gamma ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^n\frac{1}{k} -\ln n\right)\end{aligned} $$

and its classical geometric interpretation. If \(g(x)=\ln x\), then the associated generalized Euler constant is

$$\displaystyle \begin{aligned} \gamma[g] ~=~ -J^2[\ln\circ\Gamma](1) ~=~ -J(1) ~=~ -1+\frac{1}{2}\ln(2\pi) ~\approx ~ -0.081\end{aligned} $$

and we can see that it coincides with the associated asymptotic constant σ[g] (see Example 6.5). Moreover, using (6.44) we obtain the following formula

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \lim_{n\to\infty}\left(\ln n!+n-1-\left(n+\textstyle{\frac{1}{2}}\right)\ln n\right).\end{aligned} $$

The value |γ[g]| = −γ[g] can then be interpreted as the area between the graph of g on the unbounded interval [1, ∞) and the polygonal line through the points (k, g(k)) for all integers k ≥ 1. Moreover, Eq. (6.46) provides the following inequality

$$\displaystyle \begin{aligned} |\gamma[g]| ~\leq ~ \ln 4-\frac{5}{4} ~\approx ~ 0.14.\end{aligned} $$

\(\lozenge \)
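The limit formula obtained in this example is easy to probe numerically (a sketch, not part of the original text, using `math.lgamma` for ln n!): the sequence approaches −1 + (1∕2)ln(2π) ≈ −0.081, which indeed lies within the bound ln 4 − 5∕4 from (6.46).

```python
import math

def gamma_ln(n):
    """n-th term of the limit formula for gamma[ln]:
    ln n! + n - 1 - (n + 1/2) ln n."""
    return math.lgamma(n + 1) + n - 1 - (n + 0.5) * math.log(n)

target = -1 + 0.5 * math.log(2 * math.pi)    # ~ -0.0811
print(gamma_ln(10**6))                       # ~ -0.0811
print(abs(target) <= math.log(4) - 5 / 4)    # True: bound (6.46) holds
```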

A Conversion Formula Between γ[g] and σ[g]

The following proposition, which immediately follows from (6.18) and the identity

$$\displaystyle \begin{aligned} \gamma[g] ~=~ -J^{p+1}[\Sigma g](1), \end{aligned}$$

shows how the numbers γ[g] and σ[g] are related and provides an alternative way to compute the value of γ[g].

Proposition 6.36

For any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\), we have

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \gamma[g]+\sum_{j=1}^pG_j\,\Delta^{j-1}g(1),\end{aligned} $$

where p = 1 + deg g.

An Integral Form of γ[g]

The following proposition shows that the classical integral representation of the Euler constant

$$\displaystyle \begin{aligned} \gamma ~=~ \int_1^{\infty}\left(\frac{1}{\lfloor t\rfloor}-\frac{1}{t}\right){\,}dt\end{aligned} $$

can be generalized to the constant γ[g] for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\).

Proposition 6.37

For any \(g\in \mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\), where p = 1 + deg g, we have

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \int_1^{\infty}\bigg(\sum_{j=0}^pG_j\Delta^jg(\lfloor t\rfloor)-g(t)\bigg){\,}dt.\end{aligned} $$

In particular, when deg g = −1, we have

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \int_1^{\infty}(g(\lfloor t\rfloor)-g(t)){\,}dt. \end{aligned}$$

Proof

Using (6.16) and (6.41), we obtain

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \sum_{k=1}^{\infty}J^{p+1}[g](k) ~=~ \sum_{k=1}^{\infty}\left(\sum_{j=0}^pG_j\,\Delta^jg(k)-\int_k^{k+1}g(t){\,}dt\right), \end{aligned}$$

which immediately provides the claimed formula. □
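The particular case deg g = −1 of Proposition 6.37 is straightforward to check numerically. The sketch below (an illustration, not part of the original text) takes g(x) = 1∕x², which is positive, decreasing and convex, and integrates g(⌊t⌋) − g(t) exactly on each interval [k, k + 1]; the result is \(\sum _k 1/k^2-1=\pi ^2/6-1\).

```python
import math

def gamma_inv_square(terms=100_000):
    """Particular case of Proposition 6.37 with g(x) = 1/x**2:
    integrate g(floor t) - g(t) exactly on each [k, k+1], where the
    integral of 1/t**2 over [k, k+1] equals 1/k - 1/(k+1)."""
    return sum(1 / k**2 - (1 / k - 1 / (k + 1))
               for k in range(1, terms + 1))

print(gamma_inv_square())   # ~ 0.6449 = pi**2/6 - 1
```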

The Principal Indefinite Sum of the Generalized Binet Function

If g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then the function \(J^{p+1}[\Sigma g]\) lies in \(\mathcal {D}^0_{\mathbb {R}}\) by Theorem 6.13, and hence so does

$$\displaystyle \begin{aligned} \Delta J^{p+1}[\Sigma g] ~=~ J^{p+1}[g]. \end{aligned}$$

If, in addition, \(J^{p+1}[\Sigma g]\) lies in \(\mathcal {K}^0\), then by the uniqueness theorem (Theorem 3.1) we have that

$$\displaystyle \begin{aligned} \Sigma J^{p+1}[g] ~=~ J^{p+1}[\Sigma g] -J^{p+1}[\Sigma g](1){\,}.\end{aligned} $$

Thus, if p = 1 + deg g, then we obtain the identity

$$\displaystyle \begin{aligned} \Sigma J^{p+1}[g] ~=~ J^{p+1}[\Sigma g] + \gamma[g]{\,}.\end{aligned} $$
(6.47)

Now, suppose that we wish to show that a given function \(f\colon \mathbb {R}_+\to \mathbb {R}\) satisfies the equation \(f=J^{p+1}[\Sigma g]\) for some function g lying in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\), with p = 1 + deg g. Using the uniqueness theorem together with identity (6.47), we see that it is then enough to show that \(\Delta f=J^{p+1}[g]\), that f(1) = −γ[g], and that \(f\in \mathcal {K}^0\).

Example 6.38

Let \(f\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation \(f(x)=\psi (x)-\ln x\) for x > 0. To see that \(f=J^1[\psi ]\), it is enough to observe that f lies in \(\mathcal {K}^0\), that f(1) = −γ, and that

$$\displaystyle \begin{aligned} \Delta f(x) ~=~ \frac{1}{x}-\ln\left(1+\frac{1}{x}\right)\end{aligned} $$

is precisely the function \(J^1[g](x)\) when g(x) = 1∕x. \(\lozenge \)
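The observations in this example can be spot-checked numerically. The sketch below (not part of the original text) uses a small homemade digamma, built from the recurrence ψ(x) = ψ(x + 1) − 1∕x and a truncated asymptotic series, to confirm that f(1) = −γ and that Δf(x) matches 1∕x − ln(1 + 1∕x).

```python
import math

def digamma(x):
    """psi(x) via the recurrence psi(x) = psi(x+1) - 1/x followed by a
    truncated asymptotic expansion (accurate to ~1e-9 for x >= 10)."""
    acc = 0.0
    while x < 10:
        acc -= 1 / x
        x += 1
    return (acc + math.log(x) - 1 / (2 * x) - 1 / (12 * x**2)
            + 1 / (120 * x**4) - 1 / (252 * x**6))

def f(x):
    return digamma(x) - math.log(x)      # candidate for J^1[psi]

euler_gamma = 0.5772156649015329
print(abs(f(1) + euler_gamma) < 1e-8)    # True: f(1) = -gamma
x = 3.7
print(abs((f(x + 1) - f(x)) - (1 / x - math.log(1 + 1 / x))) < 1e-8)  # True
```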

Example 6.39

Binet established the following integral representation (see, e.g., Sasvári [89])

$$\displaystyle \begin{aligned} J^2[\ln\circ\Gamma](x) ~=~ J(x) ~=~ \int_0^{\infty}\left(\frac{1}{e^t-1}-\frac{1}{t}+\frac{1}{2}\right)\,\frac{e^{-x t}}{t}{\,}dt. \end{aligned}$$

Equation (6.47) then provides a possible (though not immediate) proof of this identity. \(\lozenge \)
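As a numerical plausibility check of Binet’s representation (a sketch, not part of the original text, assuming the classical closed form J(x) = ln Γ(x) − (x − 1∕2)ln x + x − (1∕2)ln(2π), which is consistent with γ[ln] = −J(1) in Example 6.35), one can compare the integral with the closed form by Simpson quadrature:

```python
import math

def binet_integrand(t, x):
    """(1/(e^t - 1) - 1/t + 1/2) * exp(-x*t)/t, with the removable
    singularity at t = 0 handled by the series value 1/12 - t**2/720."""
    if t < 1e-6:
        bracket_over_t = 1 / 12 - t**2 / 720
    else:
        bracket_over_t = (1 / math.expm1(t) - 1 / t + 0.5) / t
    return bracket_over_t * math.exp(-x * t)

def binet_integral(x, upper=60.0, n=60_000):
    """Composite Simpson's rule on [0, upper]; n must be even."""
    h = upper / n
    s = binet_integrand(0.0, x) + binet_integrand(upper, x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * binet_integrand(i * h, x)
    return s * h / 3

def binet_closed(x):
    # classical Binet function J(x)
    return (math.lgamma(x) - (x - 0.5) * math.log(x) + x
            - 0.5 * math.log(2 * math.pi))

print(binet_integral(2.0), binet_closed(2.0))  # both ~ 0.0413
```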