Now that we have collected a number of relevant results on multiple \(\log \Gamma \)-type functions, we naturally look forward to applying them to various examples, including not only special functions related to the gamma function but also many other useful functions of mathematical analysis. Such applications will be discussed in the next three chapters. But first and foremost, it is time to take stock of the new theory we have developed and summarize what we have found and learned thus far.

This chapter is devoted to a review of the most interesting and useful results that we have established in the previous chapters. These results are presented here as a step-by-step plan in order to perform a systematic and efficient investigation of the multiple \(\log \Gamma \)-type functions. We have tried to be as self-contained as possible, so that the reader can skip Chaps. 2–8 and make direct use of the summary given in this chapter.

Remark 9.1

At many places in this book (e.g., in Proposition 5.18), we have made the assumption that the function g (resp. \(g^{(r)}\) for some \(r\in \mathbb {N}^*\)) is continuous to ensure the existence of certain integrals. Although we can often relax this condition by simply requiring that g (resp. \(g^{(r)}\)) is locally integrable, we have kept this continuity assumption for simplicity and consistency with similar results where higher order differentiability is assumed. \(\lozenge \)

9.1 Basic Definitions

Let us recall a few useful concepts introduced in the previous chapters. For any \(p\in \mathbb {N}\) and any \(\mathrm {S}\in \{\mathbb {N},\mathbb {R}\}\), we let \(\mathcal {D}^p_{\mathrm {S}}\) denote the set of functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that

$$\displaystyle \begin{aligned} \Delta^p g(x) ~\to ~0 \qquad \mbox{as }x\to_{\mathrm{S}}\infty. \end{aligned}$$

For any \(p\in \mathbb {N}\), we also let \(\mathcal {C}^p\) denote the set of p times continuously differentiable functions from \(\mathbb {R}_+\) to \(\mathbb {R}\) and we let \(\mathcal {K}^p\) denote the set of functions from \(\mathbb {R}_+\) to \(\mathbb {R}\) that are eventually p-convex or eventually p-concave, that is, p-convex or p-concave (see Definition 2.2) in a neighborhood of infinity. Recall also that the sets \(\mathcal {D}^p_{\mathrm {S}}\)’s are increasingly nested while the sets \(\mathcal {C}^p\)’s and \(\mathcal {K}^p\)’s are decreasingly nested, that is,

$$\displaystyle \begin{aligned} \mathcal{D}^p_{\mathrm{S}}\subset\mathcal{D}^{p+1}_{\mathrm{S}},\qquad \mathcal{K}^{p+1}\subset\mathcal{K}^p,\qquad \mbox{and}\quad \mathcal{C}^{p+1}\subset\mathcal{C}^p\qquad \mbox{for any }p\in\mathbb{N}. \end{aligned}$$

We have also proved in Proposition 4.8 that

$$\displaystyle \begin{aligned} \mathcal{D}^p_{\mathbb{N}}\cap\mathcal{K}^p ~=~ \mathcal{D}^p_{\mathbb{R}}\cap\mathcal{K}^p \end{aligned}$$

and we denote this common intersection simply by \(\mathcal {D}^p\cap \mathcal {K}^p\).

In Chap. 5, we have introduced the map Σ that carries any function \(g\colon \mathbb {R}_+\to \mathbb {R}\) lying in the set

$$\displaystyle \begin{aligned} \mathrm{dom}(\Sigma) ~=~ \bigcup_{p\geq 0}(\mathcal{D}^p\cap\mathcal{K}^p) \end{aligned}$$

into the unique solution \(f\colon \mathbb {R}_+\to \mathbb {R}\) that arises from Theorem 1.4 and satisfies f(1) = 0. That is,

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \lim_{n\to\infty}f^p_n[g](x),\qquad x>0. \end{aligned}$$

The class of functions that are equal (up to an additive constant) to Σg is called the principal indefinite sum of g (see Definition 5.4 and Example 5.5). A function f lying in the range of the map Σ is also called a multiple \(\log \Gamma \)-type function.

In the previous chapters, we have established and discussed several properties of the multiple \(\log \Gamma \)-type functions, many of which are counterparts of classical properties of the gamma function. For instance, we have proved that every multiple \(\log \Gamma \)-type function satisfies an analogue of Gauss’ multiplication formula for the gamma function. In the rest of this chapter, we provide a summary of these properties. The reader can use them for a systematic investigation of any multiple \(\log \Gamma \)-type function.

9.2 ID Card and Main Characterization

The first step in this investigation is to choose a function \(g\in \mathcal {D}^p\cap \mathcal {K}^p\) (for some \(p\in \mathbb {N}\)) whose principal indefinite sum Σg we wish to study. For instance, if we consider the function \(g(x)=x\ln x\), which lies in \(\mathcal {D}^2\cap \mathcal {K}^2\), then the function Σg is the logarithm of the hyperfactorial function K(x) (see Sect. 12.5), that is,

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \ln K(x) ~=~ (x-1)\ln\Gamma(x)-\ln G(x), \end{aligned}$$

where G is the Barnes G-function. Our results will then enable us to study this function through several of its properties.

Alternatively, we can start from a given function \(f\in \mathcal {K}^p\) (for some \(p\in \mathbb {N}\)) that we wish to investigate and whose difference g = Δf is a function that lies in \(\mathcal {D}^p\cap \mathcal {K}^p\). For instance, we may want to investigate the nth degree Bernoulli polynomial \(f(x)=B_n(x)\) by first observing that the function

$$\displaystyle \begin{aligned} g(x) ~=~ \Delta f(x) ~=~ n{\,}x^{n-1} \end{aligned}$$

lies in \(\mathcal {D}^n\cap \mathcal {K}^n\). We then have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ B_n(x)-B_n(1). \end{aligned}$$
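This last identity is easy to check symbolically. The following minimal sketch (assuming SymPy is available; the range of exponents is an arbitrary choice) verifies that \(B_n(x)-B_n(1)\) satisfies the equation Δf = g and the normalization f(1) = 0, eventual p-convexity being clear for a polynomial.

```python
# Sketch (assumes SymPy): check that f(x) = B_n(x) - B_n(1) satisfies
# Delta f = n*x**(n-1) and f(1) = 0 for a few small values of n.
import sympy as sp

x = sp.symbols('x')
for n in range(1, 6):
    f = sp.bernoulli(n, x) - sp.bernoulli(n, 1)        # candidate for Sigma g
    g = n * x**(n - 1)
    assert sp.simplify(f.subs(x, x + 1) - f - g) == 0  # Delta f = g
    assert f.subs(x, 1) == 0                           # normalization at x = 1
print("Sigma g(x) = B_n(x) - B_n(1) checked for n = 1, ..., 5")
```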

Remark 9.2

To investigate a function \(f\colon \mathbb {R}_+\to \mathbb {R}\) through our results, it is not enough to check that the difference g = Δf lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). We also need to make sure that f itself lies in \(\mathcal {K}^p\). For instance, both functions

$$\displaystyle \begin{aligned} f_1(x) ~=~ x+\sin{}(2\pi x)\qquad \mbox{and}\qquad f_2(x) ~=~ x+\theta_3(\pi x,1/2), \end{aligned}$$

where \(\theta_3(u,q)\) is the Jacobi theta function defined by the equation

$$\displaystyle \begin{aligned} \theta_3(u,q) ~=~ 1+2\sum_{n=1}^{\infty}q^{n^2}\cos{}(2nu), \end{aligned}$$

have the same difference \(g=\Delta f_1=\Delta f_2=1\) in \(\mathcal {D}^1\cap \mathcal {K}^1\) (and we have Σg(x) = x − 1). However, neither \(f_1\) nor \(f_2\) lies in \(\mathcal {K}^1\). \(\lozenge \)

ID Card

It is convenient to start our investigation of the function Σg by collecting some basic properties of the function g, thus establishing a kind of ID card for that function.

Thus, we first consider a function \(g\colon \mathbb {R}_+\to \mathbb {R}\). We then determine its asymptotic degree

$$\displaystyle \begin{aligned} \begin{array}{rcl} \deg g & =&\displaystyle -1+\min\{q\in\mathbb{N}: g\in\mathcal{D}^q_{\mathbb{R}}\}\\ & =&\displaystyle -1+\min\{q\in\mathbb{N}: \Delta^q g(x)\to 0~\mbox{as}~x\to\infty\}. \end{array} \end{aligned} $$

If deg g = ∞ (e.g., when \(g(x)=2^x\)) or if \(g\notin \mathcal {K}^p\) for all p ≥ 1 + deg g (e.g., \(g(x)=x+\frac {1}{x}\sin x\)), then the function Σg does not exist and the investigation stops here. Otherwise, the functions g and Σg lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) and \(\mathcal {D}^{p+1}\cap \mathcal {K}^p\), respectively, where p = 1 + deg g.
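In practice, the asymptotic degree can often be guessed numerically before being established rigorously. Here is a minimal sketch (assuming NumPy; the evaluation point and the number of differences are ad hoc choices) for the function g(x) = x ln x of Sect. 9.2, whose degree equals 1.

```python
# Sketch (assumes NumPy): numerically estimate deg g by checking for which q
# the iterated forward difference Delta^q g(x) is small at a large x.
import numpy as np

def delta(h):
    # forward difference operator: (Delta h)(t) = h(t + 1) - h(t)
    return lambda t: h(t + 1.0) - h(t)

def iterated_delta(g, q, t):
    h = g
    for _ in range(q):
        h = delta(h)
    return h(t)

g = lambda t: t * np.log(t)              # expected: deg g = 1, hence p = 2
for q in range(4):
    print(q, iterated_delta(g, q, 1.0e4))
# Delta^0 g and Delta^1 g remain large, while Delta^2 g and Delta^3 g are
# already tiny at x = 10^4, suggesting deg g = 1 and p = 1 + deg g = 2.
```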

If deg g = −1, it is important to check whether g also lies in the set \(\mathcal {D}^{-1}_{\mathbb {N}}\) of functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) for which the sequence \(n\mapsto g(n)\) is summable. In this case, by Proposition 6.14 we have that

$$\displaystyle \begin{aligned} \lim_{x\to\infty} \Sigma g(x) ~=~ \sum_{k=1}^{\infty}g(k). \end{aligned}$$

It is also useful to determine the integer \(r\in \mathbb {N}\), if any, for which g lies in \(\mathcal {C}^r\cap \mathcal {K}^{\max \{p,r\}}\). In this case, we know from Theorem 7.5 that Σg also lies in this set. Moreover, many functions of mathematical analysis lie in both

$$\displaystyle \begin{aligned} \mathcal{C}^{\infty} ~=~ \bigcap_{r\geq 0}\mathcal{C}^r\qquad \mbox{and}\qquad \mathcal{K}^{\infty} ~=~ \bigcap_{p\geq 0}\mathcal{K}^p. \end{aligned}$$

If g lies in these sets, then we can write \(g\in \mathcal {C}^{\infty }\cap \mathcal {D}^p\cap \mathcal {K}^{\infty }\).

It may also be useful to determine the domain on which g is p-convex or p-concave. For instance, the function \(g(x)=\frac {1}{x}\ln x\) is 0-concave on \([e,\infty)\), 1-convex on \([e^{3/2},\infty)\), etc. (see Example 5.13).

Note that, at this stage, we may not yet have any simple expression for Σg. Limit and series representations will in any case emerge later from our investigation.

Analogue of Bohr-Mollerup’s Theorem

The following characterization result constitutes the analogue of Bohr-Mollerup’s theorem for the function Σg and follows immediately from the uniqueness Theorem 3.1.

If \(f\colon \mathbb {R}_+\to \mathbb {R}\) is a solution to the equation Δf = g, then it lies in \(\mathcal {K}^p\) if and only if f = c +  Σg for some \(c\in \mathbb {R}\).

This characterization sometimes enables one to establish alternative expressions for the function Σg. For instance, if \(g(x)=\frac {1}{x}\), then we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \psi(x)+\gamma. \end{aligned}$$

Using the characterization above, we can easily establish the following Gauss representation (see, e.g., Srivastava and Choi [93, p. 26])

$$\displaystyle \begin{aligned} \psi(x)+\gamma ~=~ \int_0^{\infty}\frac{e^{-t}-e^{-xt}}{1-e^{-t}}{\,}dt{\,},\qquad x>0. \end{aligned}$$

Indeed, both sides of this identity vanish at x = 1 and are eventually increasing solutions to the equation Δf = g. Hence, by uniqueness they must coincide on \(\mathbb {R}_+\).
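Such identities can also be tested numerically before (or after) being proved. The following sketch (assuming SciPy and NumPy; the sample points are arbitrary) compares both sides of the Gauss representation above.

```python
# Sketch (assumes SciPy/NumPy): numerical check of the Gauss representation
# psi(x) + gamma = int_0^infty (e^(-t) - e^(-x t)) / (1 - e^(-t)) dt.
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

for x in (0.5, 1.0, 2.5, 7.0):
    integrand = lambda t, x=x: (np.exp(-t) - np.exp(-x * t)) / (1.0 - np.exp(-t))
    lhs, _ = quad(integrand, 0.0, np.inf)
    print(x, lhs, digamma(x) + np.euler_gamma)   # the two columns should agree
```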

Note also that, in addition to the analogue of Bohr-Mollerup’s theorem above, we also have an alternative characterization of Σg given in Proposition 3.9.

9.3 Extended ID Card

We now complement the ID card of the function g by considering some additional related constants and mappings. From now on, we assume that g is at least continuous on \(\mathbb {R}_+\). More precisely, we assume that

$$\displaystyle \begin{aligned} g\in\mathcal{C}^r\cap\mathcal{D}^p\cap\mathcal{K}^{\max\{p,r\}} \end{aligned}$$

for p = 1 +deg g and some \(r\in \mathbb {N}\).

Recall also that, for any \(n\in \mathbb {N}\), the symbols \(G_n\) and \(B_n\) denote the nth Gregory coefficient and the nth Bernoulli number, respectively. We also let

$$\displaystyle \begin{aligned} \overline{G}_n ~=~ 1-\sum_{j=1}^n|G_j| \end{aligned}$$

and we let \(B_n(x)\) denote the nth degree Bernoulli polynomial (see Sects. 6.3, 6.4, and 6.7).

Asymptotic Constant

Recall that the asymptotic constant associated with g (see (6.10)) is the number

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \int_0^1\Sigma g(t+1){\,}dt ~=~ \int_1^2\Sigma g(t){\,}dt. \end{aligned}$$

If g is integrable at 0, we also define the generalized Stirling constant (see Definition 6.17) as the number \(\exp (\overline {\sigma }[g])\), where

$$\displaystyle \begin{aligned} \overline{\sigma}[g] ~=~ \sigma[g] -\int_0^1g(t){\,}dt ~=~ \int_0^1\Sigma g(t){\,}dt. \end{aligned}$$

Since this latter constant does not always exist (e.g., when \(g(x)=\frac {1}{x}\)), we do not use it much in our investigation.

The asymptotic constant σ[g] has the following limit, series, and integral representations (see identities (8.11), (8.12), (8.21), and Corollary 8.45).

(a)

    If g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\), then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{j=1}^pG_j{\,}\Delta^{j-1}g(1) - \sum_{k=1}^{\infty}\left(\int_{k}^{k+1}g(t){\,}dt-\sum_{j=0}^pG_j{\,}\Delta^jg(k)\right) \end{aligned}$$

    and

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^{n-1}g(k)-\int_1^ng(t){\,}dt+\sum_{j=1}^pG_j\Delta^{j-1}g(n)\right). \end{aligned}$$
(b)

    If g lies in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{2q}\), where \(q\in \mathbb {N}^*\cup \{\frac {1}{2}\}\) and 0 ≤ p ≤ 2q − 1, then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^{n-1} g(k)-\int_1^n g(t){\,}dt -\sum_{k=1}^p\frac{B_k}{k!}{\,}g^{(k-1)}(n)\right). \end{aligned}$$
(c)

    If g lies in \(\mathcal {C}^2\cap \mathcal {D}^1\cap \mathcal {K}^2\), then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \frac{1}{2}{\,}g(1)+\int_1^{\infty}\textstyle{\left(\{t\}-\frac{1}{2}\right)g'(t){\,}dt}. \end{aligned}$$
(d)

    If g lies in \(\mathcal {C}^{2q+1}\cap \mathcal {D}^p\cap \mathcal {K}^{2q+1}\), then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \frac{1}{2}{\,}g(1)-\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(1) - \int_1^{\infty}\frac{B_{2q}(\{t\})}{(2q)!}{\,}g^{(2q)}(t){\,}dt. \end{aligned}$$

We also know from Proposition 6.14 that if g lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\) (here \(\mathcal {D}^{-1}\) stands for \(\mathcal {D}^{-1}_{\mathbb {N}}\)), then g is integrable at infinity and

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{k=1}^{\infty}g(k)-\int_1^{\infty}g(t){\,}dt. \end{aligned}$$
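As a concrete illustration, take g = ln, for which Σg = lnΓ, deg g = 0, and p = 1; the classical Raabe formula then gives \(\sigma[g]=\frac{1}{2}\ln(2\pi)-1\). The following sketch (assuming SciPy; the truncation index is arbitrary) computes σ[g] both from the defining integral and from the second limit representation in (a).

```python
# Sketch (assumes SciPy): two computations of sigma[g] for g = ln
# (Sigma g = ln Gamma, p = 1), compared with (1/2) ln(2 pi) - 1.
from math import lgamma, log, pi
from scipy.integrate import quad

# Definition: sigma[g] = int_1^2 Sigma g(t) dt.
sigma_def, _ = quad(lgamma, 1.0, 2.0)

# Limit representation from (a) with p = 1 and G_1 = 1/2:
#   sum_{k=1}^{n-1} g(k) - int_1^n g(t) dt + (1/2) Delta^0 g(n).
def sigma_limit(n):
    return (sum(log(k) for k in range(1, n))
            - (n * log(n) - n + 1.0)
            + 0.5 * log(n))

print(sigma_def, sigma_limit(10_000), 0.5 * log(2 * pi) - 1.0)
```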

Analogue of Raabe’s Formula

The analogue of Raabe’s formula is simply the identity (see (8.9))

$$\displaystyle \begin{aligned} \int_x^{x+1}\Sigma g(t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt \end{aligned}$$

and we know by Proposition 8.20 that any of these integrals lies in \(\mathcal {C}^0\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\).

Recall also from Corollary 8.23 that a function \(f\colon \mathbb {R}_+\to \mathbb {R}\) lies in \(\mathcal {C}^0\cap \mathcal {K}^p\) and satisfies the equation

$$\displaystyle \begin{aligned} \int_x^{x+1}f(t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt{\,},\qquad x>0, \end{aligned}$$

if and only if f =  Σg. This provides an alternative characterization of Σg.
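For g = ln, the identity above is the classical Raabe formula \(\int_x^{x+1}\ln\Gamma(t)\,dt=x\ln x-x+\frac{1}{2}\ln(2\pi)\). The sketch below (assuming SciPy; the sample points are arbitrary) checks it in the form stated above.

```python
# Sketch (assumes SciPy): the analogue of Raabe's formula for g = ln, i.e.
# int_x^{x+1} ln Gamma(t) dt = sigma[ln] + int_1^x ln t dt.
from math import lgamma, log, pi
from scipy.integrate import quad

sigma_ln = 0.5 * log(2 * pi) - 1.0               # sigma[g] for g = ln
for x in (0.5, 1.0, 3.0, 10.0):
    lhs, _ = quad(lgamma, x, x + 1.0)
    rhs = sigma_ln + (x * log(x) - x + 1.0)      # int_1^x ln t dt = x ln x - x + 1
    print(x, lhs, rhs)                           # the two columns should agree
```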

Generalized Binet’s Function

For any \(q\in \mathbb {N}\), the generalized Binet function associated with g and q is the function \(J^q[g]\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation (see (6.16))

$$\displaystyle \begin{aligned} J^q[g](x) ~=~ \sum_{j=0}^{q-1}G_j\Delta^jg(x)-\int_x^{x+1}g(t){\,}dt\qquad \mbox{for }x>0. \end{aligned}$$

In particular, we also have (see (6.18))

$$\displaystyle \begin{aligned} J^{q+1}[\Sigma g](x) ~=~ \Sigma g(x)-\sigma[g]-\int_1^xg(t){\,}dt + \sum_{j=1}^qG_j\Delta^{j-1} g(x){\,}. \end{aligned}$$

Note that several objects and formulas of our theory can be usefully expressed in terms of this latter function.

Generalized Euler’s Constant

Recall that the generalized Euler constant associated with the function g is the number

$$\displaystyle \begin{aligned} \gamma[g] ~=~ -J^{p+1}[\Sigma g](1), \end{aligned}$$

where p = 1 +deg g (see Definition 6.34).

Note that, contrary to the asymptotic constant σ[g], the generalized Euler constant γ[g] is not invariant if we replace p with a higher value. Besides, by definition of γ[g] both quantities are related through the following identity

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \gamma[g]+\sum_{j=1}^pG_j\,\Delta^{j-1}g(1), \end{aligned}$$

where p = 1 +deg g (see Proposition 6.36). In particular, we have γ[g] = σ[g] whenever deg g = −1.

We also have the following integral representations

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \int_1^{\infty}\bigg(\sum_{j=0}^pG_j\Delta^jg(\lfloor t\rfloor)-g(t)\bigg){\,}dt \end{aligned}$$

and

$$\displaystyle \begin{aligned} \gamma[g] ~=~ \int_1^{\infty}\left(\overline{P}_p[g](t)-g(t)\right)dt, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \overline{P}_p[g](x) ~=~ \sum_{j=0}^p{\textstyle{{{\{x\}}\choose{j}}}}\,\Delta^jg(\lfloor x\rfloor),\qquad x\geq 1, \end{aligned}$$

is the piecewise polynomial function whose restriction to any interval (k, k + 1), with \(k\in \mathbb {N}^*\), is the interpolating polynomial of g with nodes at k, k + 1, …, k + p (see Proposition 6.37 and Eqs. (6.38) and (6.41)).

If g is p-convex or p-concave on [1, ∞), then the graph of g lies always above or always below that of \(\overline {P}_p[g]\) on [1, ∞) and |γ[g]| is the area between the two graphs. In this case, we also have (see (6.45) and (6.46))

$$\displaystyle \begin{aligned} |\gamma[g]| ~\leq ~ \overline{G}_p{\,}|\Delta^pg(1)| \end{aligned}$$

and, if p ≥ 1,

$$\displaystyle \begin{aligned} |\gamma[g]| ~\leq ~ \int_0^1\left|{\textstyle{{{t-1}\choose{p}}}}\right|\left|\Delta^{p-1}g(t+1)-\Delta^{p-1}g(1)\right|{\,}dt. \end{aligned}$$
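For example, for \(g(x)=\frac{1}{x}\) we have Σg = ψ + γ (see Sect. 9.2), deg g = −1, and hence p = 0 and γ[g] = σ[g] = γ. The first integral representation above then reduces to \(\int_1^{\infty}(1/\lfloor t\rfloor-1/t)\,dt\), which the following minimal sketch evaluates through its partial sums.

```python
# Sketch: for g(x) = 1/x (p = 0), gamma[g] = sigma[g] = gamma and the first
# integral representation reads int_1^infty (1/floor(t) - 1/t) dt, whose
# partial integrals are the classical sums H_n - ln(n + 1).
from math import log

def gamma_partial(n):
    return sum(1.0 / k - log(1.0 + 1.0 / k) for k in range(1, n + 1))

print(gamma_partial(100_000))   # ~ 0.57721, Euler's constant
```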

9.4 Inequalities

Recall that, for any a > 0, the function \(\rho ^p_a[g]\colon [0,\infty )\to \mathbb {R}\) is defined by the equation (see (1.7))

$$\displaystyle \begin{aligned} \rho^p_a[g](x) ~=~ g(x+a)-\sum_{j=0}^{p-1}{\textstyle{{{x}\choose{j}}}}\,\Delta^jg(a)\qquad \mbox{for }x>0. \end{aligned}$$

In particular, we have

$$\displaystyle \begin{aligned} \rho^{p+1}_a[\Sigma g](x) ~=~ \Sigma g(x+a)-\Sigma g(a)-\sum_{j=1}^p{\textstyle{{{x}\choose{j}}}}\,\Delta^{j-1}g(a){\,}. \end{aligned}$$

Generalized Wendel’s Inequality (Symmetrized Version)

Let a ≥ 0 and let x > 0 be so that g is p-convex or p-concave on [x, ∞). Then we have (see Corollary 6.2)

$$\displaystyle \begin{aligned} \left|\rho^{p+1}_x[\Sigma g](a)\right| ~\leq ~ \lceil a\rceil\left|{\textstyle{{{a-1}\choose{p}}}}\right|\left|\Delta^pg(x)\right|{\,}. \end{aligned}$$

If p ≥ 1, we also have the following tighter inequality

$$\displaystyle \begin{aligned} \left|\rho^{p+1}_x[\Sigma g](a)\right| ~\leq ~ \left|{\textstyle{{{a-1}\choose{p}}}}\right|\left|\Delta^{p-1}g(x+a)-\Delta^{p-1}g(x)\right|{\,}. \end{aligned}$$

This latter inequality is referred to as the symmetrized version of the generalized Wendel inequality (see Corollary 6.2). Both inequalities reduce to equalities when a ∈{0, 1, …, p}.

Now, for any \(n\in \mathbb {N}^*\) we have (see (5.4))

$$\displaystyle \begin{aligned} \rho^{p+1}_n[\Sigma g](x) ~=~ \Sigma g(x)-f_n^p[g](x),\qquad x>0. \end{aligned}$$

Using this identity, we immediately derive the following discrete version of the inequalities above. If g is p-convex or p-concave on [n, ∞), then

$$\displaystyle \begin{aligned} \left|\Sigma g(x)-f_n^p[g](x)\right| ~\leq ~ \lceil x\rceil\left|{\textstyle{{{x-1}\choose{p}}}}\right|\left|\Delta^pg(n)\right|{\,},\qquad x>0, \end{aligned}$$

and if p ≥ 1,

$$\displaystyle \begin{aligned} \left|\Sigma g(x)-f_n^p[g](x)\right| ~\leq ~ \left|{\textstyle{{{x-1}\choose{p}}}}\right|\left|\Delta^{p-1}g(n+x)-\Delta^{p-1}g(n)\right|{\,},\qquad x>0. \end{aligned}$$
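Combining the identity \(\rho^{p+1}_n[\Sigma g](x)=\Sigma g(x)-f_n^p[g](x)\) with the expression for \(\rho^{p+1}_a[\Sigma g]\) given at the beginning of this section, and using repeatedly the equation ΔΣg = g, we obtain the explicit form \(f_n^p[g](x)=\sum_{k=1}^{n-1}g(k)-\sum_{k=0}^{n-1}g(x+k)+\sum_{j=1}^p\binom{x}{j}\Delta^{j-1}g(n)\). The minimal sketch below relies on this form (a derived expression, used here as an assumption of the illustration) to test both bounds for g = ln, p = 1, and Σg = lnΓ.

```python
# Sketch: discrete Wendel-type bounds for g = ln, p = 1, Sigma g = ln Gamma,
# using the explicit form of f_n^p[g] recalled in the text above
# (an assumption of this illustration).
from math import lgamma, log, ceil

def f_n(x, n):                      # f_n^1[ln](x)
    return (sum(log(k) for k in range(1, n))     # sum_{k=1}^{n-1} g(k)
            - sum(log(x + k) for k in range(n))  # - sum_{k=0}^{n-1} g(x+k)
            + x * log(n))                        # + binom(x,1) Delta^0 g(n)

x, n = 3.7, 50
err = abs(lgamma(x) - f_n(x, n))
bound1 = ceil(x) * abs(x - 1.0) * log(1.0 + 1.0 / n)   # first inequality
bound2 = abs(x - 1.0) * log(1.0 + x / n)               # tighter one (p >= 1)
print(err, bound2, bound1)          # err is below both bounds
```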

If g lies in \(\mathcal {D}^{-1}_{\mathbb {N}}\), then (see Proposition 6.14)

$$\displaystyle \begin{aligned} \Sigma g(x) ~\to ~ \Sigma g(\infty) ~=~ \sum_{k=1}^{\infty}g(k)\qquad \mbox{as }x\to\infty. \end{aligned}$$

We then have the following additional inequality (see Theorem 3.13). If g is increasing or decreasing on [n, ∞), then

$$\displaystyle \begin{aligned} \left|\sum_{k=n}^{\infty}g(x+k)\right| ~=~ |\Sigma g(x+n)-\Sigma g(\infty)| ~\leq ~ \left|\Sigma g(n)-\Sigma g(\infty)\right|,\qquad x>0. \end{aligned}$$

Generalized Stirling’s Formula-Based Inequality (Symmetrized Version)

If x > 0 is so that g is p-convex or p-concave on [x, ∞), then we have the inequality (see Corollary 6.12)

$$\displaystyle \begin{aligned} \left|J^{p+1}[\Sigma g](x)\right| ~\leq ~ \overline{G}_p{\,}|\Delta^p g(x)|. \end{aligned}$$

If p ≥ 1, we also have the following tighter inequality

$$\displaystyle \begin{aligned} \left|J^{p+1}[\Sigma g](x)\right| ~\leq ~ \left|\int_0^1{\textstyle{{{t-1}\choose{p}}}}(\Delta^{p-1}g(x+t)-\Delta^{p-1}g(x)){\,}dt\right|. \end{aligned}$$

Moreover, if p = 0 or p = 1, then (see Proposition 6.19)

$$\displaystyle \begin{aligned} \left|\Sigma g\left(x+\frac{1}{2}\right)-\sigma[g]-\int_1^x g(t){\,}dt\right| ~\leq ~ \left|J^{p+1}[\Sigma g](x)\right|. \end{aligned}$$

Generalized Gautschi’s Inequality

Suppose that g lies in \(\mathcal {C}^2\cap \mathcal {K}^2\). Let a ≥ 0 and let x > 0 be so that Σg is convex on [x + ⌊a⌋, ∞). Then we have (see Proposition 8.67)

$$\displaystyle \begin{aligned} \begin{array}{rcl} (a-\lceil a\rceil){\,}g(x+\lceil a\rceil) & \leq &\displaystyle (a-\lceil a\rceil){\,}(\Sigma g)'(x+\lceil a\rceil)\\ & \leq &\displaystyle \Sigma g(x+a)-\Sigma g(x+\lceil a\rceil) ~\leq ~ (a-\lceil a\rceil){\,}g(x+\lfloor a\rfloor). \end{array} \end{aligned} $$

(The inequalities are to be reversed if Σg is concave on [x + ⌊a⌋, ∞).)
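For g = ln and Σg = lnΓ (which is convex on the whole half-line), the chain of inequalities above with 0 < a < 1 is precisely the classical Gautschi inequality \(x^{1-a}\leq\Gamma(x+1)/\Gamma(x+a)\leq(x+1)^{1-a}\). A small numerical sketch (assuming SciPy for the digamma function; the test points are arbitrary):

```python
# Sketch (assumes SciPy): the generalized Gautschi inequality for g = ln and
# Sigma g = ln Gamma (convex on the whole half-line).
from math import lgamma, log, floor, ceil
from scipy.special import digamma

def check(x, a):
    lo_g = (a - ceil(a)) * log(x + ceil(a))
    lo_d = (a - ceil(a)) * digamma(x + ceil(a))    # (Sigma g)' = psi
    mid  = lgamma(x + a) - lgamma(x + ceil(a))
    hi_g = (a - ceil(a)) * log(x + floor(a))
    return lo_g <= lo_d <= mid <= hi_g

print(check(2.0, 0.3), check(5.5, 2.8), check(0.7, 4.1))   # all True
```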

9.5 Asymptotic Analysis

In this section, we gather the main results related to the asymptotic behaviors of multiple \(\log \Gamma \)-type functions, including the generalized Stirling formula.

Generalized Wendel’s Inequality-Based Limit

The following convergence result immediately follows from the generalized Wendel inequality (see Theorem 6.1). For any a ≥ 0, we have

$$\displaystyle \begin{aligned} \rho^{p+1}_x[\Sigma g](a) ~\to ~0\qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x+a)-\Sigma g(x)-\sum_{j=1}^p{\textstyle{{{a}\choose{j}}}}\,\Delta^{j-1}g(x) ~\to ~0\qquad \mbox{as }x\to\infty{\,}. \end{aligned}$$

This convergence result still holds if we differentiate the left-hand side r times.

Generalized Stirling’s Formula

We have (see Theorem 6.13)

$$\displaystyle \begin{aligned} J^{p+1}[\Sigma g](x) ~\to ~ 0\qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_1^x g(t){\,}dt +\sum_{j=1}^pG_j\Delta^{j-1}g(x) ~\to ~ \sigma[g]\qquad \mbox{as }x\to\infty{\,}. \end{aligned}$$

If g lies in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{2q}\), where \(q\in \mathbb {N}^*\cup \{\frac {1}{2}\}\) and 0 ≤ p ≤ 2q − 1, then we also have (see Proposition 8.39)

$$\displaystyle \begin{aligned} \Sigma g(x)-\int_1^x g(t){\,}dt -\sum_{k=1}^p\frac{B_k}{k!}{\,}g^{(k-1)}(x) ~\to ~ \sigma[g]\qquad \mbox{as }x\to\infty. \end{aligned}$$

If p = 0 or p = 1, we also have the following analogue of Burnside’s formula, which provides a better approximation than the generalized Stirling formula (see Proposition 6.19)

$$\displaystyle \begin{aligned} \Sigma g(x) -\int_1^{x-\frac{1}{2}}g(t){\,}dt ~\to ~ \sigma[g] \qquad \mbox{as }x\to\infty{\,}. \end{aligned}$$

All the convergence results above still hold if we differentiate both sides r times. In particular, the function \(D^rJ^{p+1}[\Sigma g]\) vanishes at infinity.
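For g = ln (p = 1, Σg = lnΓ, \(G_1=\frac{1}{2}\), and \(\sigma[g]=\frac{1}{2}\ln(2\pi)-1\)), the two displays above can be compared numerically. The sketch below (the sample points are arbitrary) shows that both quantities approach σ[g], the Burnside-type error being smaller in magnitude.

```python
# Sketch: generalized Stirling formula vs. its Burnside-type refinement for
# g = ln (p = 1, G_1 = 1/2, sigma[g] = (1/2) ln(2 pi) - 1).
from math import lgamma, log, pi

sigma_ln = 0.5 * log(2 * pi) - 1.0

def stirling_form(x):       # Sigma g(x) - int_1^x g(t) dt + G_1 * g(x)
    return lgamma(x) - (x * log(x) - x + 1.0) + 0.5 * log(x)

def burnside_form(x):       # Sigma g(x) - int_1^{x - 1/2} g(t) dt
    y = x - 0.5
    return lgamma(x) - (y * log(y) - y + 1.0)

for x in (10.0, 100.0, 1000.0):
    print(x, stirling_form(x) - sigma_ln, burnside_form(x) - sigma_ln)
# Both differences tend to 0; the Burnside-type one is smaller in magnitude.
```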

Asymptotic Equivalences

For any a ≥ 0 and any \(c\in \mathbb {R}\), we have (see Proposition 6.20)

$$\displaystyle \begin{aligned} c+\Sigma g(x+a) ~\sim ~ c+\int_x^{x+1}\Sigma g(t){\,}dt\qquad \mbox{as }x\to\infty \end{aligned}$$

(under the assumption that c +  Σg(n + 1) ∼ c +  Σg(n) as \(n\to _{\mathbb {N}}\infty \) whenever c +  Σg vanishes at infinity). If g does not lie in \(\mathcal {D}^{-1}_{\mathbb {N}}\), then we also have

$$\displaystyle \begin{aligned} \Sigma g(x+a) ~\sim ~ c+\int_1^x g(t){\,}dt\qquad \mbox{as }x\to\infty. \end{aligned}$$

These equivalences still hold if we differentiate both sides r times; that is,

$$\displaystyle \begin{aligned} D^r\Sigma g(x+a) ~\sim ~ g^{(r-1)}(x)\qquad \mbox{as }x\to\infty \end{aligned}$$

(under the assumption that \(D^r\Sigma g(n+1)\sim D^r\Sigma g(n)\) as \(n\to _{\mathbb {N}}\infty \) whenever \(D^r\Sigma g\) vanishes at infinity).

Asymptotic Expansions

We have the following asymptotic expansions (see Proposition 8.36).

(a)

    If g lies in \(\mathcal {C}^1\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,1\}}\), then for large x we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt -\frac{1}{2}{\,}g(x) + R_1(x){\,}, \end{aligned}$$

    where

    $$\displaystyle \begin{aligned} |R_1(x)| ~\leq ~\frac{1}{2}|g(x)|. \end{aligned}$$
(b)

    If g lies in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,2q\}}\) for some \(q\in \mathbb {N}^*\), then for large x we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt -\frac{1}{2}{\,}g(x)+\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(x) + R^q_1(x){\,}, \end{aligned}$$

    where

    $$\displaystyle \begin{aligned} |R^q_1(x)| ~\leq ~ \frac{|B_{2q}|}{(2q)!}{\,}|g^{(2q-1)}(x)|{\,}. \end{aligned}$$

Asymptotic expansions of the more general function

$$\displaystyle \begin{aligned} x ~\mapsto ~\frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right), \end{aligned}$$

for any \(m\in \mathbb {N}^*\), are also provided in Proposition 8.35.

Generalized Liu’s Formula

The following assertions hold (see Proposition 8.42).

(a)

    If g lies in \(\mathcal {C}^2\cap \mathcal {D}^1\cap \mathcal {K}^2\), then we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^x g(t){\,}dt -\frac{1}{2}{\,}g(x)-\int_0^{\infty}\textstyle{\left(\{t\}-\frac{1}{2}\right)g'(x+t){\,}dt}. \end{aligned}$$
(b)

    If g lies in \(\mathcal {C}^{2q+1}\cap \mathcal {D}^{2q}\cap \mathcal {K}^{2q+1}\) for some \(q\in \mathbb {N}^*\), then we have

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x) & =&\displaystyle \sigma[g]+\int_1^x g(t){\,}dt -\frac{1}{2}{\,}g(x)+\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(x)\\ & &\displaystyle \null + \int_0^{\infty}\frac{B_{2q}(\{t\})}{(2q)!}{\,}g^{(2q)}(x+t){\,}dt. \end{array} \end{aligned} $$

9.6 Limit, Series, and Integral Representations

We now recall the different representations of multiple \(\log \Gamma \)-type functions that we established in this work as well as the way we can generate further identities by integration and differentiation.

Note that, in the special case when g lies in \(\mathcal {D}^{-1}_{\mathbb {N}}\), both the Eulerian and Weierstrassian forms coincide with the analogue of Gauss’ limit, i.e., we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=1}^{\infty}g(k)-\sum_{k=0}^{\infty}g(x+k), \end{aligned}$$

and the second series converges uniformly on \(\mathbb {R}_+\) (and tends to zero as x → ∞).

Analogue of Gauss’ Limit

By definition of Σg, we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \lim_{n\to\infty} f^p_n[g](x),\qquad x>0. \end{aligned}$$

This is precisely the analogue of Gauss’ limit for the gamma function. We have also established that the sequence \(n\mapsto f^p_n[g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to Σg (see our existence Theorem 3.6).

More generally, we have shown that the sequence \(n\mapsto D^rf^p_n[g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to D r Σg (see Theorem 7.5). In particular, both sides of the identity above can be differentiated r times (i.e., the limit and the derivative operator commute).

Moreover, the function \(f_n^p[g](x)-\Sigma g(x)\) can be (repeatedly) integrated on any bounded interval of [0, ∞) and the integral converges to zero as n → ∞ (see Proposition 5.18 and Remark 5.19).

Eulerian and Weierstrassian Forms

We have the following Eulerian form (see Theorem 8.2)

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ -g(x)+\sum_{j=1}^p{\textstyle{{{x}\choose{j}}}}{\,}\Delta^{j-1}g(1) - \sum_{k=1}^{\infty}\left(g(x+k)-\sum_{j=0}^p{\textstyle{{{x}\choose{j}}}}{\,}\Delta^jg(k)\right). \end{aligned}$$

We also have the following Weierstrassian forms if \(g\in \mathcal {C}^p\) (see Theorems 8.5 and 8.7).

(a)

    If p = 1 +deg g = 0, then

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]-g(x)-\sum_{k=1}^{\infty}\left(g(x+k)-\int_k^{k+1}g(t){\,}dt\right). \end{aligned}$$
(b)

    If p = 1 +deg g ≥ 1, then

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x) & =&\displaystyle \sum_{j=1}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^{j-1}g(1)+{\textstyle{{{x}\choose{p}}}}(\Sigma g)^{(p)}(1)\\ & &\displaystyle -g(x)- \sum_{k=1}^{\infty}\left(g(x+k)-\sum_{j=0}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^jg(k)-{\textstyle{{{x}\choose{p}}}}g^{(p)}(k)\right), \end{array} \end{aligned} $$

    where \((\Sigma g)^{(p)}(1)=g^{(p-1)}(1)-\sigma[g^{(p)}]\).

Each of the series above converges uniformly on any bounded subset of [0, ∞) and can be repeatedly integrated term by term on any bounded interval of [0, ∞). It can also be differentiated term by term up to r times.
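In the simplest case \(g(x)=\frac{1}{x}\) (for which p = 0, so that the binomial sums above are empty), the Eulerian form reduces to the classical series for the digamma function, namely \(\Sigma g(x)=\psi(x)+\gamma=-\frac{1}{x}-\sum_{k\geq 1}\big(\frac{1}{x+k}-\frac{1}{k}\big)\). A quick numerical sketch (assuming SciPy and NumPy; the truncation index is arbitrary):

```python
# Sketch (assumes SciPy/NumPy): Eulerian form for g(x) = 1/x (p = 0), to be
# compared with psi(x) + gamma.  Truncation error is roughly x / terms.
import numpy as np
from scipy.special import digamma

def sigma_g(x, terms=100_000):
    k = np.arange(1, terms + 1)
    return -1.0 / x - np.sum(1.0 / (x + k) - 1.0 / k)

for x in (0.5, 2.0, 7.3):
    print(x, sigma_g(x), digamma(x) + np.euler_gamma)
```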

Gregory’s Formula-Based Series Representation

We also have the following series representation (see Proposition 8.11). Suppose that g lies in \(\mathcal {K}^{\infty }\) and let x > 0 be so that for every integer q ≥ p the function g is q-convex or q-concave on [x, ∞). Suppose also that the sequence \(q\mapsto \Delta ^qg(x)\) is bounded. Then we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt-\sum_{n=1}^{\infty}G_n\,\Delta^{n-1}g(x). \end{aligned}$$

Moreover, if these latter assumptions are satisfied for x = 1, then we also have the following analogue of Fontana-Mascheroni’s series representation of γ

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{n=1}^{\infty}G_n\,\Delta^{n-1}g(1). \end{aligned}$$
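For \(g(x)=\frac{1}{x}\) we have σ[g] = γ and \(\Delta^{n-1}g(1)=(-1)^{n-1}/n\), so the last identity becomes the classical Fontana–Mascheroni series \(\gamma=\sum_{n\geq 1}|G_n|/n\). The sketch below illustrates its very slow convergence; the recurrence used for the Gregory coefficients comes from the generating function \(z/\ln(1+z)=\sum_{n\geq 0}G_nz^n\), an assumption of this illustration.

```python
# Sketch: partial sums of the Fontana-Mascheroni-type series for g(x) = 1/x,
# where sigma[g] = gamma and G_n Delta^{n-1} g(1) = |G_n| / n.  The Gregory
# coefficients are generated from z / ln(1 + z) = sum_n G_n z^n (assumed here).
from fractions import Fraction

def gregory(N):
    G = [Fraction(1)]                       # G_0 = 1
    for n in range(1, N + 1):
        G.append(-sum(G[k] * Fraction((-1) ** (n - k), n - k + 1)
                      for k in range(n)))
    return G                                # G_1 = 1/2, G_2 = -1/12, ...

G = gregory(40)
print(sum(float(abs(G[n])) / n for n in range(1, 41)))
# The partial sums approach gamma = 0.5772... only very slowly.
```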

Integral Representation

We have seen that an integral expression for Σg can sometimes be obtained by first finding an expression for \(\Sigma g^{(r)}\) when r > 1. This is the elevator method (see Corollary 7.20).

We have

$$\displaystyle \begin{aligned} (\Sigma g)^{(r)}-\Sigma g^{(r)} ~=~ g^{(r-1)}(1)-\sigma[g^{(r)}] \end{aligned}$$

and, if r > p,

$$\displaystyle \begin{aligned} \sigma[g^{(r)}] ~=~ g^{(r-1)}(1) +\sum_{k=1}^{\infty}g^{(r)}(k). \end{aligned}$$

Moreover, for any a > 0, we have

$$\displaystyle \begin{aligned} \Sigma g ~=~ f_a-f_a(1), \end{aligned}$$

where \(f_a\in \mathcal {C}^r\) is defined by

$$\displaystyle \begin{aligned} f_a(x) ~=~ \sum_{k=1}^{r-1}c_k(a){\,}\frac{(x-a)^k}{k!} + \int_a^x \frac{(x-t)^{r-1}}{(r-1)!}{\,}(\Sigma g)^{(r)}(t){\,}dt \end{aligned}$$

and, for k = 1, …, r − 1,

$$\displaystyle \begin{aligned} c_k(a) ~=~ \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\left(g^{(j+k-1)}(a)-\int_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!}{\,}(\Sigma g)^{(r)}(t){\,}dt\right). \end{aligned}$$

9.7 Further Identities and Results

In this section, we collect the remaining identities and results that may be relevant in our investigation of multiple \(\log \Gamma \)-type functions.

Analogue of Gauss’ Multiplication Formula

Let \(m\in \mathbb {N}^*\) and define the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then we have the following analogue of Gauss’ multiplication formula (see Sect. 8.6)

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right) ~=~ \sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right)+\Sigma g_m(mx){\,},\qquad x>0, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right) ~=~ m{\,}\sigma[g]-\sigma[g_m]-m\,\int_{1/m}^1g(t){\,}dt. \end{aligned}$$

We also have

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\frac{\Sigma g_m(mx)-\Sigma g_m(m)}{m} ~=~ \int_1^x g(t){\,}dt{\,},\qquad x>0, \end{aligned}$$

and, if g is integrable at 0,

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\frac{1}{m}\,\Sigma g_m(mx) ~=~ \int_0^x g(t){\,}dt{\,},\qquad x>0. \end{aligned}$$

A related asymptotic result is also given in Proposition 8.30.
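For g = ln we have Σg = lnΓ and, using the linearity of Σ on its domain (an assumption of this sketch), \(\Sigma g_m(x)=\ln\Gamma(x)-(x-1)\ln m\), since \(g_m(x)=\ln x-\ln m\) and the principal indefinite sum of a constant c is c(x − 1). Granting this, the multiplication formula above can be checked numerically:

```python
# Sketch: the multiplication formula for g = ln, with Sigma g = ln Gamma and
# Sigma g_m(x) = ln Gamma(x) - (x - 1) ln m (an assumption based on the
# linearity of Sigma, since g_m(x) = ln x - ln m).
from math import lgamma, log

def lhs(x, m):
    return sum(lgamma(x + j / m) for j in range(m))

def rhs(x, m):
    sigma_gm_mx = lgamma(m * x) - (m * x - 1.0) * log(m)      # Sigma g_m(mx)
    return sum(lgamma(j / m) for j in range(1, m + 1)) + sigma_gm_mx

for x, m in ((0.8, 2), (2.5, 3), (1.3, 5)):
    print(x, m, lhs(x, m), rhs(x, m))    # the two columns agree
```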

Analogue of Wallis’s Product Formula

We present here in a single statement the analogue of Wallis’s product formula as given in Proposition 8.49 and Remark 8.53.

Let \(\tilde {g}_1,\tilde {g}_2,\tilde {g}_3\colon \mathbb {R}_+\to \mathbb {R}\) be the functions defined respectively by the equations

$$\displaystyle \begin{aligned} \tilde{g}_1(x) ~=~ \Delta g(2x-1),\quad \tilde{g}_2(x) ~=~ \Delta g(2x), \quad \tilde{g}_3(x) ~=~ 2{\,}g(2x),\quad \mbox{for }x>0. \end{aligned}$$

We assume that \(\tilde {g}_{\ell }\) lies in \(\mathcal {K}^0\) for some ℓ ∈{1, 2, 3}.

Let also \(\theta _1,\theta _2,\theta _3\colon \mathbb {N}^*\to \mathbb {R}\) be the sequences defined respectively by the equations

$$\displaystyle \begin{aligned} \begin{array}{rcl} \theta_1(n) & =&\displaystyle \sigma[\tilde{g}_1]+\int_1^{n+1}\tilde{g}_1(t){\,}dt - \sum_{j=1}^{(p-1)_+}G_j\,\Delta^{j-1}\tilde{g}_1(n+1){\,},\\ \theta_2(n) & =&\displaystyle g(2n)-g(1)-\sigma[\tilde{g}_2]-\int_1^n\tilde{g}_2(t){\,}dt + \sum_{j=1}^{(p-1)_+}G_j\,\Delta^{j-1}\tilde{g}_2(n){\,},\\ \theta_3(n) & =&\displaystyle \sigma[\tilde{g}_3]-\sigma[g]+\int_1^2(g(2n+t)-g(t)){\,}dt\\ & &\displaystyle \null +\sum_{j=1}^pG_j\left(\Delta^{j-1}g(2n+1)-\Delta^{j-1}\tilde{g}_3(n+1)\right), \end{array} \end{aligned} $$

for \(n\in \mathbb {N}^*\). Then we have

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left(h(n) + \sum_{k=1}^{2n}(-1)^{k-1}g(k)\right) ~=~ 0, \end{aligned}$$

where h(n) is the function obtained from the series expansion for \(\theta_{\ell}(n)\) about infinity after removing all the summands that vanish at infinity.

Restriction to the Natural Integers

The restriction of Σg to \(\mathbb {N}^*\) is the sum (5.2). This sum can be estimated, e.g., by means of an integral through Gregory’s summation formula (6.33) with a bounded remainder (6.37). The representations of Σg given above can also lead to interesting identities when restricted to the natural integers.

Analogue of Euler’s Series Representation of γ

When g lies in \(\mathcal {C}^{\infty }\cap \mathcal {K}^{\infty }\), the following series (see (7.4))

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{k=1}^{\infty}(\Sigma g)^{(k)}(1)\,\frac{1}{(k+1)!}{\,}, \end{aligned}$$

when it converges, provides an analogue of Euler’s series representation of γ. It is obtained by integrating term by term the Taylor series expansion of Σg(x + 1) about x = 0.

Generalized Webster’s Functional Equation

This result can be found in Theorem 8.71.

Analogues of Euler’s Reflection Formula and Gauss’ Digamma Theorem

These topics are discussed in Sects. 8.9 and 8.10.