Let \(\mathbb {R}_+\) denote the open half-line \((0,\infty )\) and let Δ denote the forward difference operator on the space of functions from \(\mathbb {R}_+\) to \(\mathbb {R}\). In this book, we are interested in the classical difference equation Δf = g on \(\mathbb {R}_+\), which can be written explicitly as

$$\displaystyle \begin{aligned} f(x+1)-f(x) ~=~ g(x),\qquad x>0, \end{aligned}$$

where \(g\colon \mathbb {R}_+\to \mathbb {R}\) is a given function. This equation appears naturally in the theory of the Euler gamma function, with \(f(x) = \ln \Gamma (x)\) and \(g(x) = \ln x\), but also in the study of many other special functions such as the Barnes G-function and the Hurwitz zeta function (see Examples 1.6 and 1.7 below).

It is easily seen that, for any function \(g\colon \mathbb {R}_+\to \mathbb {R}\), the equation above has infinitely many solutions, and each of them can be uniquely determined by prescribing its values in the interval (0, 1]. Moreover, any two solutions always differ by a 1-periodic function, i.e., a periodic function of period 1.
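
To make the first of these remarks concrete, here is a minimal Python sketch (our own illustration; the names extend and f0 are hypothetical) showing how prescribed values on (0, 1] generate a solution of Δf = g by iterating the equation one unit step at a time.

```python
import math

def extend(f0, g, x):
    """Minimal sketch: extend prescribed values f0 on (0, 1] to a solution of
    f(x + 1) - f(x) = g(x) on (0, oo) by iterating the difference equation."""
    n = math.ceil(x) - 1                 # number of unit steps down into (0, 1]
    x0 = x - n                           # x0 lies in (0, 1]
    return f0(x0) + sum(g(x0 + k) for k in range(n))

# Example: f0 = 0 on (0, 1] and g = ln give one of the infinitely many solutions
# of the equation; it differs from ln Gamma by a 1-periodic function.
print(extend(lambda t: 0.0, math.log, 3.5))   # = ln(0.5) + ln(1.5) + ln(2.5)
print(math.lgamma(3.5) - math.lgamma(0.5))    # the same telescoped sum
```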

For certain functions g, however, special solutions can be determined by their local properties or their asymptotic behavior. In this respect, a seminal result is the elegant characterization of the gamma function due to Bohr and Mollerup [23]. We recall this important result in the following theorem.

Theorem 1.1 (Bohr-Mollerup’s theorem)

All log-convex solutions \(f\colon \mathbb {R}_+\to \mathbb {R}_+\) to the equation

$$\displaystyle \begin{aligned} f(x+1) ~=~ x{\,}f(x),\qquad x>0, \end{aligned} $$
(1.1)

are of the form f(x) = c  Γ(x), where c > 0.

The additive, but equivalent, version of this result, obtained by taking the logarithm of both sides of (1.1), can be stated as follows.

For \(g(x)=\ln x\), all convex solutions \(f\colon \mathbb {R}_+\to \mathbb {R}\) to the difference equation Δf = g are of the form \(f(x)=c+\ln \Gamma (x)\), where \(c\in \mathbb {R}\).

As we can see, this characterization enables one to single out the gamma function as a kind of principal solution to its equation (Nörlund [82, Chapter 5] calls it the “Hauptlösung”).

It is noteworthy that the proof of Bohr-Mollerup’s characterization was later simplified by Artin [10] (see also Artin [11]) and, as observed by Webster [98], this result has since become known “as the Bohr-Mollerup-Artin Theorem, and was adopted by Bourbaki [24] as the starting point for his exposition of the gamma function.”

Remark 1.2

In their original result, Bohr and Mollerup actually considered the additional assumption that f(1) = 1, thus leading to the gamma function as the unique solution (see Artin [11, p. 14]). However, it is easy to see that Theorem 1.1 immediately follows from this original result (just replace f(x) with f(x)∕f(1)). \(\lozenge \)

A remarkable generalization of Bohr-Mollerup’s theorem was provided by Krull [54, 55] and then independently by Webster [97, 98]. Recall that a function \(g\colon \mathbb {R}_+\to \mathbb {R}\) is said to be eventually convex (resp. eventually concave) if it is convex (resp. concave) in a neighborhood of infinity. Krull [54] essentially showed that for any eventually concave function \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that, for each h > 0,

$$\displaystyle \begin{aligned} g(x+h)-g(x) ~\to ~ 0\qquad \mbox{as }x\to\infty, \end{aligned} $$
(1.2)

there exists exactly one (up to an additive constant) eventually convex solution \(f\colon \mathbb {R}_+\to \mathbb {R}\) to the equation Δf = g (and dually, if g is eventually convex, then f is eventually concave). He also provided an explicit expression for this solution as a pointwise limit of functions, namely

$$\displaystyle \begin{aligned} f(x) ~=~ f(1)+\lim_{n\to\infty}f^1_n[g](x),\qquad x>0, \end{aligned}$$

where

$$\displaystyle \begin{aligned} f^1_n[g](x) ~=~ -g(x)+\sum_{k=1}^{n-1}(g(k)-g(x+k))+x{\,}g(n). \end{aligned} $$
(1.3)
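
As a purely illustrative numerical sketch (not part of Krull’s argument; the helper name f1n is ours), the following Python lines evaluate the truncated limit \(f(1)+f^1_n[g](x)\) for \(g=\ln \) and compare it with \(\ln \Gamma (x)\); the agreement should improve as n grows.

```python
import math

def f1n(g, x, n):
    """The function f^1_n[g](x) of Eq. (1.3)."""
    return -g(x) + sum(g(k) - g(x + k) for k in range(1, n)) + x * g(n)

# For g = ln we have f(1) = ln Gamma(1) = 0, so f(1) + lim_n f^1_n[g](x) should
# recover ln Gamma(x); compare the truncated limit with math.lgamma.
for x in (0.5, 2.5, 7.0):
    print(x, f1n(math.log, x, 10**5), math.lgamma(x))
```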

Much later, and independently, Webster [97, 98] established the multiplicative version of Krull’s result.

We can actually show that this result still holds if we replace the asymptotic condition (1.2) imposed on the function g with the slightly more general condition that the sequence n↦ Δg(n) converges to zero. However, although this result constitutes a very nice generalization of Bohr-Mollerup’s theorem, we note that the latter asymptotic condition remains a rather restrictive assumption. For instance, it is not satisfied by the functions \(g(x)=x\ln x\) and \(g(x)=\ln \Gamma (x)\).

In this work, we generalize Krull-Webster’s result above by relaxing the asymptotic condition on g into the much weaker requirement that the sequence n↦ Δp g(n) converges to zero for some nonnegative integer p. More precisely, we show that Krull-Webster’s result still holds if we assume this weaker condition, provided that we replace the convexity and concavity properties with the p-convexity and p-concavity properties (see Definition 2.2) and the function \(f_n^1[g]\) defined in (1.3) with an appropriate version of it, which we now introduce.

Throughout this book, we let \(\mathbb {N}\) denote the set of nonnegative integers and we let \(\mathbb {N}^*\) denote the set of strictly positive integers.

Definition 1.3

For any \(p\in \mathbb {N}\), any \(n\in \mathbb {N}^*\), and any \(g\colon \mathbb {R}_+\to \mathbb {R}\), we define the function \(f^p_n[g]\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} f^p_n[g](x) ~=~ -g(x)+\sum_{k=1}^{n-1}(g(k)-g(x+k))+\sum_{j=1}^{p}{\textstyle{{{x}\choose{j}}}}\,\Delta^{j-1}g(n). \end{aligned} $$
(1.4)
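
For concreteness, here is a minimal Python sketch (our own illustration; the helper names binom, fdiff, and fpn are hypothetical) of the function \(f^p_n[g]\), together with a small numerical check of the difference equation.

```python
import math

def binom(x, j):
    """Generalized binomial coefficient C(x, j) = x(x-1)...(x-j+1)/j! for real x."""
    out = 1.0
    for i in range(j):
        out *= (x - i) / (i + 1)
    return out

def fdiff(g, a, m):
    """Forward difference Delta^m g(a)."""
    return sum((-1) ** (m - i) * math.comb(m, i) * g(a + i) for i in range(m + 1))

def fpn(g, x, n, p):
    """Sketch of f^p_n[g](x) as in Eq. (1.4)."""
    return (-g(x)
            + sum(g(k) - g(x + k) for k in range(1, n))
            + sum(binom(x, j) * fdiff(g, n, j - 1) for j in range(1, p + 1)))

# For p = 1 the last sum is x*g(n), so fpn reduces to (1.3).  With g(x) = x ln x
# (for which Delta^2 g(n) -> 0, so p = 2 is admissible) the truncated limit
# should approximately satisfy f(x + 1) - f(x) = g(x) for large n.
g = lambda t: t * math.log(t)
x, n = 2.3, 10**5
print(fpn(g, x + 1, n, 2) - fpn(g, x, n, 2), g(x))   # the two values should be close
```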

We now state our result in the following existence theorem. It actually constitutes the p-order version of Krull-Webster’s result.

Theorem 1.4 (Existence)

Let \(p\in \mathbb {N}\) and suppose that the function \(g\colon \mathbb {R}_+\to \mathbb {R}\) is eventually p-convex or eventually p-concave and has the asymptotic property that the sequence \(n\mapsto \Delta ^p g(n)\) converges to zero. Then there exists a unique (up to an additive constant) eventually p-convex or eventually p-concave solution \(f\colon \mathbb {R}_+\to \mathbb {R}\) to the difference equation Δf = g. Moreover,

$$\displaystyle \begin{aligned} f(x) ~=~ f(1)+\lim_{n\to\infty}f^p_n[g](x),\qquad x>0, \end{aligned} $$
(1.5)

and f is p-convex (resp. p-concave) on any unbounded subinterval of \(\mathbb {R}_+\) on which g is p-concave (resp. p-convex).

Webster [98, Theorem 3.1] also established (in the multiplicative notation) a uniqueness theorem, which does not require the function g to be eventually convex or eventually concave. In the next theorem, we provide the p-order version of this result.

Theorem 1.5 (Uniqueness)

Let \(p\in \mathbb {N}\) and let the function \(g\colon \mathbb {R}_+\to \mathbb {R}\) have the property that the sequence \(n\mapsto \Delta ^p g(n)\) converges to zero. Suppose that \(f\colon \mathbb {R}_+\to \mathbb {R}\) is an eventually p-convex or eventually p-concave function satisfying the difference equation Δf = g. Then f is uniquely determined (up to an additive constant) by g through the equation

$$\displaystyle \begin{aligned} f(x) ~=~ f(1)+\lim_{n\to\infty}f^p_n[g](x),\qquad x>0. \end{aligned}$$

We observe that Theorem 1.4 was first proved in the case when p = 0 by John [49]. As mentioned above, it was also established in the case when p = 1 by Krull [54] and then by Webster [98]. More recently, the case when p = 2 was investigated by Rassias and Trif [86], but the asymptotic condition they imposed on the function g is much stronger than ours and hence it defines a very specific subclass of functions. (We discuss Rassias and Trif’s result in Appendix B.) We also observe that attempts to establish Theorem 1.4 for any value of p were made by Kuczma [58, Theorem 1] (see also Kuczma [60, pp. 118–121]) and then by Ardjomande [9]. However, the representation formulas they provide for the solutions are rather intricate. Thus, to the best of our knowledge, both Theorems 1.4 and 1.5, as stated above in their full generality and simplicity, were previously unknown.

For any solution f arising from Theorem 1.4 when p = 1, Webster [98] calls the function \(\exp \circ f\) a Γ-type function. In fact, \(\exp \circ f\) reduces to the gamma function (i.e., \(f(x)=\ln \Gamma (x)\)) when \(\exp \circ g\) is the identity function (i.e., \(g(x)=\ln x\)), which simply means that the gamma function restricted to \(\mathbb {R}_+\) is itself a Γ-type function. In this particular case, the limit given in (1.5) reduces to the following well-known limit due to Gauss for the gamma function (see Artin [11, p. 15])

$$\displaystyle \begin{aligned} \Gamma(x) ~=~ \lim_{n\to\infty}\frac{n!{\,}n^x}{x(x+1){\,}\cdots{\,}(x+n)}{\,},\qquad x>0. \end{aligned} $$
(1.6)
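
To make this reduction explicit (a short computation), note that for \(g(x)=\ln x\) and \(p=1\) the sum in (1.3) telescopes into a single logarithm,

$$\displaystyle \begin{aligned} f^1_n[\ln](x) ~=~ -\ln x+\sum_{k=1}^{n-1}(\ln k-\ln(x+k))+x\ln n ~=~ \ln\frac{(n-1)!{\,}n^x}{x(x+1){\,}\cdots{\,}(x+n-1)}{\,}, \end{aligned}$$

so that, since \(\ln \Gamma (1)=0\), exponentiating (1.5) yields

$$\displaystyle \begin{aligned} \Gamma(x) ~=~ \lim_{n\to\infty}\frac{(n-1)!{\,}n^x}{x(x+1){\,}\cdots{\,}(x+n-1)} ~=~ \lim_{n\to\infty}\frac{n!{\,}n^x}{x(x+1){\,}\cdots{\,}(x+n)}{\,},\end{aligned}$$

where the last equality simply uses the fact that n∕(x + n) → 1 as \(n\to \infty \).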

Similarly, for any fixed \(p\in \mathbb {N}\) and any solution f arising from Theorem 1.4, we call the function \(\exp \circ f\) a \(\Gamma _p\)-type function, and we naturally call the function f a \(\log \Gamma _p\)-type function. When the value of p is not specified, we call these functions multiple Γ-type functions and multiple \(\log \Gamma \)-type functions, respectively. This terminology will be introduced more formally and justified in Sect. 5.2.

Interestingly, Webster established for Γ-type functions analogues of Euler’s constant, Gauss’ multiplication formula, Legendre’s duplication formula, Stirling’s formula, and Weierstrass’ infinite product for the gamma function. In this work, we also establish for multiple Γ-type functions and multiple \(\log \Gamma \)-type functions analogues of all the formulas above as well as analogues of Euler’s infinite product, Gautschi’s inequality, Raabe’s formula, Stirling’s constant, Wallis’s product formula, and Wendel’s inequality. We also introduce and discuss analogues of Binet’s function, Burnside’s formula, Euler’s reflection formula, Fontana-Mascheroni’s series, and Gauss’ digamma theorem. Thus, (to paraphrase Webster [98, p. 607]) for each multiple Γ-type function, it is no longer surprising for instance that “some analogue of Legendre’s duplication formula must hold, almost rendering a formal proof unnecessary!”

All these results, together with the uniqueness and existence theorems above, show that the theory we develop in this book provides a very general and unified framework to study the properties of a large variety of functions. Thus, for each of these functions we can retrieve known formulas and sometimes establish new ones.

At the risk of repeating a large part of our preface, we now present two representative examples to illustrate the way our results can be applied to derive formulas methodically.

Example 1.6 (The Barnes G-function, see Sect. 10.5)

The restriction to \(\mathbb {R}_+\) of the Barnes G-function can be defined as the function \(G\colon \mathbb {R}_+\to \mathbb {R}_+\) whose logarithm \(f(x)=\ln G(x)\) is the unique eventually 2-convex solution, vanishing at x = 1, to the equation

$$\displaystyle \begin{aligned} f(x+1)-f(x) ~=~ \ln\Gamma(x),\qquad x>0. \end{aligned}$$

Thus, our Theorems 1.4 and 1.5 apply with \(g(x)=\ln \Gamma (x)\) and p = 2, which shows that the function \(\ln G(x)\) is a \(\log \Gamma _2\)-type function and hence that the function G(x) is a \(\Gamma _2\)-type function. In particular, formula (1.5) provides the following analogue of Gauss’ limit for the gamma function

$$\displaystyle \begin{aligned} G(x) ~=~ \lim_{n\to\infty}\frac{\Gamma(1)\Gamma(2){\,}\cdots{\,}\Gamma(n)}{\Gamma(x)\Gamma(x+1){\,}\cdots{\,}\Gamma(x+n)}{\,} n!^x{\,}n^{{x\choose 2}}{\,}. \end{aligned}$$
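
As a purely illustrative numerical sketch (the helper name log_barnesG_approx is ours), this limit can be checked in Python against the value \(G(4)=\Gamma (1)\Gamma (2)\Gamma (3)=2\), using the log-gamma function to avoid overflow; the convergence is rather slow.

```python
import math

def log_barnesG_approx(x, n):
    """Truncated version of the Gauss-type limit above, computed for ln G(x)."""
    s = -math.lgamma(x)
    for k in range(1, n + 1):
        s += math.lgamma(k) - math.lgamma(x + k)   # ln Gamma(k) - ln Gamma(x + k)
    s += x * math.lgamma(n + 1)                    # + x ln(n!)
    s += x * (x - 1) / 2 * math.log(n)             # + C(x, 2) ln n
    return s

# G(4) = Gamma(1) Gamma(2) Gamma(3) = 2, so the values below should approach ln 2.
for n in (10**3, 10**4, 10**5):
    print(n, log_barnesG_approx(4.0, n), math.log(2))
```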

Using some of our new results, we are also able to derive various unusual formulas and properties. For instance, we have the following analogue of Euler’s infinite product

$$\displaystyle \begin{aligned} G(x) ~=~ \frac{1}{\Gamma(x)}{\,} \prod_{k=1}^{\infty}\frac{\Gamma(k)}{\Gamma(x+k)}{\,}k^x(1+1/k)^{{x\choose 2}} \end{aligned}$$

and the following analogue of Weierstrass’ infinite product

$$\displaystyle \begin{aligned} G(x) ~=~ \frac{e^{(-\gamma -1){x\choose 2}}}{\Gamma(x)}{\,} \prod_{k=1}^{\infty}\frac{\Gamma(k)}{\Gamma(x+k)}{\,}k^xe^{\psi'(k){\,}{x\choose 2}}, \end{aligned}$$

where γ is the Euler constant and ψ is the digamma function. We also have the following analogue of Stirling’s formula

$$\displaystyle \begin{aligned} G(x) ~\sim ~ A^{-2}{\,}(2\pi)^{-\frac{1}{4}}{\,}x^{\frac{1}{12}}\,\Gamma(x)^{-\frac{1}{2}}{\,}e^{\psi_{-2}(x)+\frac{1}{12}}\qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

where \(\psi _{-2}\) is the polygamma function defined by the equation

$$\displaystyle \begin{aligned} \psi_{-2}(x) ~=~ \int_0^x\ln\Gamma(t){\,}dt\qquad \mbox{for }x>0{\,}, \end{aligned}$$

and A is Glaisher-Kinkelin’s constant defined by the equation

$$\displaystyle \begin{aligned} \zeta'(-1) ~=~ \frac{1}{12}-\ln A{\,}. \end{aligned}$$

(Here the map \(s\mapsto \zeta '(s)\) denotes the derivative of the Riemann zeta function.) We can also easily derive the following analogue of Wendel’s double inequality

$$\displaystyle \begin{aligned} \left(1+\frac{a}{x}\right)^{-\left|{a-1\choose 2}\right|} \leq ~ \frac{G(x+a)}{G(x)\,\Gamma(x)^a{\,}x^{a\choose 2}} ~\leq \left(1+\frac{a}{x}\right)^{\left|{a-1\choose 2}\right|}, \end{aligned}$$

which holds for any x > 0 and any a ≥ 0. As a corollary, this inequality immediately provides the following asymptotic equivalence

$$\displaystyle \begin{aligned} \frac{G(x+a)}{G(x)} ~\sim ~ \Gamma(x)^a{\,}x^{{a\choose 2}} \qquad \mbox{as }x\to\infty{\,}, \end{aligned}$$

which reveals the asymptotic behavior of G(x + a)∕G(x) for large values of x. \(\lozenge \)

Example 1.7 (The Hurwitz zeta function, see Sect. 10.6)

Consider the Hurwitz zeta function \(s\mapsto \zeta (s,a)\), defined when \(\Re (a)>0\) as the analytic continuation to \(\mathbb {C}\setminus \{1\}\) of the series

$$\displaystyle \begin{aligned} \sum_{k=0}^{\infty}(a+k)^{-s}{\,\,},\qquad \Re(s)>1{\,}. \end{aligned}$$

This function is known to satisfy the difference equation

$$\displaystyle \begin{aligned} \zeta(s,a+1)-\zeta(s,a) ~=~ -a^{-s}. \end{aligned}$$

Thus, it is not difficult to see that, for any \(s\in \mathbb {R}\setminus \{1\}\), the restriction of the map \(x\mapsto \zeta (s,x)\) to \(\mathbb {R}_+\) is a \(\log \Gamma _{p(s)}\)-type function, where

$$\displaystyle \begin{aligned} p(s) ~=~ \max\{0,\lfloor 1-s\rfloor\}. \end{aligned}$$
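
One way to see this (a brief sketch): by the mean value theorem, with \(g(x)=-x^{-s}\) we have \(\Delta ^p g(x)=g^{(p)}(\xi _x)\) for some \(\xi _x\in (x,x+p)\), and

$$\displaystyle \begin{aligned} g^{(p)}(x) ~=~ -(-s)(-s-1){\,}\cdots{\,}(-s-p+1){\,}x^{-s-p}, \end{aligned}$$

which tends to zero as \(x\to \infty \) whenever p > −s (or the product of factors vanishes); the smallest \(p\in \mathbb {N}\) with this property is precisely \(p(s)=\max \{0,\lfloor 1-s\rfloor \}\), so that the sequence \(n\mapsto \Delta ^{p(s)}g(n)\) converges to zero.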

Theorem 1.5 then tells us that all eventually p(s)-convex or eventually p(s)-concave solutions \(f_s\colon \mathbb {R}_+\to \mathbb {R}\) to the difference equation

$$\displaystyle \begin{aligned} f_s(x+1)-f_s(x) ~=~ -x^{-s} \end{aligned}$$

are of the form

$$\displaystyle \begin{aligned} f_s(x) ~=~ c_s+\zeta(s,x), \end{aligned}$$

where \(c_s\in \mathbb {R}\). Moreover, equation (1.5) provides the following analogue of Gauss’ limit for the gamma function

$$\displaystyle \begin{aligned} \zeta(s,x) ~=~ \zeta(s) + x^{-s} +\lim_{n\to\infty} \left(\sum_{k=1}^{n-1}\left((x+k)^{-s}-k^{-s}\right)-\sum_{j=1}^{p(s)}{\textstyle{{{x}\choose{j}}}}\,\Delta_n^{j-1}n^{-s}\right), \end{aligned}$$

where \(s\mapsto \zeta (s)=\zeta (s,1)\) is the Riemann zeta function. Some of our results also enable us to derive the following analogues of Stirling’s formula

$$\displaystyle \begin{aligned} \begin{array}{rcl} \zeta(s,x) + \frac{x^{1-s}}{1-s}-\sum_{j=1}^{p(s)}G_j\,\Delta_x^{j-1}x^{-s} & \to &\displaystyle 0\qquad \mbox{as }x\to\infty,\\ \zeta(s,x) + \frac{1}{1-s}\sum_{j=0}^{p(s)}{\textstyle{{{1-s}\choose{j}}}}\,\frac{B_j}{x^{s+j-1}} & \to &\displaystyle 0\qquad \mbox{as }x\to\infty, \end{array} \end{aligned} $$

where \(G_n\) is the nth Gregory coefficient and \(B_n\) is the nth Bernoulli number. For instance, setting \(s=-\frac {3}{2}\) in these asymptotic formulas, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \textstyle{\zeta\left(-\frac{3}{2},x\right)+\frac{2}{5}{\,}x^{5/2}-\frac{7}{12}{\,}x^{3/2} +\frac{1}{12}{\,}(x+1)^{3/2}} & \to &\displaystyle 0\qquad \mbox{as }x\to\infty{\,},\\ \textstyle{\zeta\left(-\frac{3}{2},x\right)+\frac{2}{5}{\,}x^{5/2}-\frac{1}{2}{\,}x^{3/2}+\frac{1}{8}{\,}x^{1/2}} & \to &\displaystyle 0\qquad \mbox{as }x\to\infty{\,}. \end{array} \end{aligned} $$
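
These limits can easily be checked numerically. The following sketch assumes the third-party mpmath library, whose function mpmath.zeta(s, a) evaluates the Hurwitz zeta function; it illustrates the first of the two formulas above.

```python
import mpmath as mp

mp.mp.dps = 30   # working precision (decimal digits)

# First asymptotic formula above with s = -3/2: the printed values
# should shrink toward 0 as x grows.
for x in (10, 100, 1000):
    x = mp.mpf(x)
    val = mp.zeta(-1.5, x) + 2*x**2.5/5 - 7*x**1.5/12 + (x + 1)**1.5/12
    print(x, val)
```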

Many more formulas and properties involving the Hurwitz zeta function will be provided and discussed in Sect. 10.6. \(\lozenge \)

The two examples above illustrate the scope of our theory and the diversity of our results. These examples and many others will be explored and discussed in the last chapters of this book. However, in the first chapters we will almost always use the basic function \(g(x)=\ln x\) as the guiding example to illustrate our results.

Outline of the Book

Let us now see how this book is organized. On the whole, Chaps. 2 to 8 are devoted to the conceptual part: we develop our theory and establish our results. Chapters 10 to 12 focus on applications to a large number of functions, including several classical special functions. In between, Chap. 9 presents an overview and a summary of our results. After reading this introduction, the reader interested in such an overview can go immediately to Chap. 9.

In Chap. 2, we present some definitions and preliminary results on Newton interpolation theory as well as on higher order convexity properties.

In Chap. 3, we establish Theorems 1.4 and 1.5 and provide conditions for the sequence \(n\mapsto f^p_n[g](x)\) to converge uniformly on any bounded subset of \(\mathbb {R}_+\). We also examine the particular case when the sequence \(n\mapsto g(n)\) is summable, and we provide historical remarks on some improvements of Krull-Webster’s theory.

In Chap. 4, we investigate the functions that satisfy the asymptotic condition stated in Theorems 1.4 and 1.5. We also investigate those functions that are eventually p-convex or eventually p-concave.

In Chap. 5, we introduce, investigate, and characterize the multiple \(\log \Gamma \)-type functions.

Chapter 6 is devoted to an asymptotic analysis of multiple \(\log \Gamma \)-type functions. More specifically, in that chapter we show how Euler’s constant, Stirling’s constant, Stirling’s formula, and Wendel’s inequality for the gamma function can be generalized to the multiple Γ-type functions and multiple \(\log \Gamma \)-type functions and we introduce and discuss analogues of Binet’s function and Burnside’s formula. We also show how the so-called Gregory summation formula, with an integral form of the remainder, can be very easily derived in this setting.

In Chap. 7, we discuss conditions for the multiple \(\log \Gamma \)-type functions to be differentiable and establish several important properties of the higher order derivatives of these functions.

In Chap. 8, we explore further properties of the multiple \(\log \Gamma \)-type functions. Specifically, we provide asymptotic expansions of these functions as well as analogues of Euler’s infinite product, Fontana-Mascheroni’s series, Gauss’ multiplication formula, Gautschi’s inequality, Raabe’s formula, Wallis’s product formula, and Weierstrass’ infinite product for the gamma function. We also discuss analogues of Euler’s reflection formula and Gauss’ digamma theorem, and we define and solve a generalized version of a functional equation proposed by Webster.

Chapter 9 is the transition from the theory to the applications. It provides a catalogue of our most relevant results, which can be used as a checklist to investigate the multiple \(\log \Gamma \)-type functions. Chapter 9 is self-contained and can be read right after this introduction.

In Chaps. 10 to 12, we apply our results to a number of multiple Γ-type functions and multiple \(\log \Gamma \)-type functions, some of which are well-known special functions related to the gamma function.

In Chap. 13, we make some concluding remarks and propose a list of interesting open questions.

Notation and Basic Definitions

Throughout this book, we use the following notation and definitions. Further definitions will be given in the subsequent chapters.

Unless indicated otherwise, the symbol I always denotes an arbitrary interval of the real line whose interior is nonempty.

The symbol S represents either \(\mathbb {N}\) or \(\mathbb {R}\). For any \(\mathrm {S}\in \{\mathbb {N},\mathbb {R}\}\), the notation \(x\to _{\mathrm {S}}\infty \) means that x tends to infinity, assuming only values in S. We sometimes omit the subscript S when no confusion may arise.

Two functions \(f\colon \mathbb {R}_+\to \mathbb {R}\) and \(g\colon \mathbb {R}_+\to \mathbb {R}\) such that f(x)∕g(x) → 1 as x →S are said to be asymptotically equivalent (over S). In this case, we write

$$\displaystyle \begin{aligned} f(x) ~\sim ~ g(x)\qquad \mbox{as }x\to_{\mathrm{S}}\infty.{} \end{aligned}$$

For any \(x\in \mathbb {R}\), we set

$$\displaystyle \begin{aligned} x_+ ~=~ \max\{0,x\}.{} \end{aligned}$$

As usual, we also let ⌊x⌋ denote the floor of x, i.e., the greatest integer less than or equal to x. Similarly, we let ⌈x⌉ denote the ceiling of x, i.e., the smallest integer greater than or equal to x. When no confusion may arise, we let {x} denote the fractional part of x, i.e., {x} = x −⌊x⌋.

For any \(x\in \mathbb {R}\) and any \(k\in \mathbb {N}\), we set

$$\displaystyle \begin{aligned} x^{\underline{k}} ~=~ x(x-1){\,}\cdots{\,}(x-k+1) ~=~ \frac{\Gamma(x+1)}{\Gamma(x-k+1)}{} \end{aligned}$$

and we let

$$\displaystyle \begin{aligned} \varepsilon_k(x) ~\in ~\{-1,0,1\}{} \end{aligned}$$

denote the sign of \(x^{ \underline {k}}\).

For any \(k\in \mathbb {N}\) and any nonempty open real interval I, we let \(\mathcal {C}^k(I)\) denote the set of k times continuously differentiable functions on I, and we set \(\mathcal {C}^k=\mathcal {C}^k(\mathbb {R}_+)\). We also introduce the intersection sets

$$\displaystyle \begin{aligned} \mathcal{C}^{\infty}(I) ~=~ \bigcap_{k\geq 0}\mathcal{C}^k(I)\qquad \mbox{and}\qquad \mathcal{C}^{\infty} ~=~ \bigcap_{k\geq 0}\mathcal{C}^k.{} \end{aligned}$$

We let Δ and D denote the usual difference and derivative operators, respectively. We sometimes add a subscript to specify the variable on which the operator acts, e.g., writing \(\Delta _n\) and \(D_x\).

Recall that the digamma function ψ is defined on \(\mathbb {R}_+\) by the equation

$$\displaystyle \begin{aligned} \psi(x) ~=~ D\ln\Gamma(x)\qquad \mbox{for }x>0. \end{aligned}$$

The polygamma functions \(\psi _{\nu }\) (\(\nu \in \mathbb {Z}\)) are defined on \(\mathbb {R}_+\) as follows (see, e.g., Srivastava and Choi [93]). If \(\nu \in \mathbb {N}\), then

$$\displaystyle \begin{aligned} \psi_{\nu}(x) ~=~ D^{\nu}\psi(x) ~=~ \psi^{(\nu)}(x). \end{aligned}$$

In particular, \(\psi _0=\psi \) is the digamma function. If \(\nu \in \mathbb {Z}\setminus \mathbb {N}\), then we introduce the functions

$$\displaystyle \begin{aligned} \psi_{-1}(x) ~=~ \ln\Gamma(x) \end{aligned}$$

and

$$\displaystyle \begin{aligned} \psi_{\nu-1}(x) ~=~ \int_0^x\psi_{\nu}(t){\,}dt ~=~ \int_0^x\frac{(x-t)^{-\nu-1}}{(-\nu-1)!}{\,}\ln\Gamma(t){\,}dt. \end{aligned}$$

Recall also that the harmonic number function \(x\mapsto H_x\) is defined on \((-1,\infty )\) by the equation

$$\displaystyle \begin{aligned} H_x ~=~ \sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{x+k}\right)\qquad \mbox{for }x>-1. \end{aligned}$$

Clearly, this function has the property that

$$\displaystyle \begin{aligned} \Delta_x H_x ~=~ \frac{1}{x+1}{\,},\qquad x > -1. \end{aligned}$$

Moreover, the functions \(H_x\) and ψ(x) are closely related: we have

$$\displaystyle \begin{aligned} H_{x-1} ~=~ \psi(x)+\gamma{\,},\qquad x>0{\,}, \end{aligned}$$

where γ is Euler’s constant (also called Euler-Mascheroni constant).

We end this first chapter by introducing some new concepts that will be very useful in this book.

Definition 1.8

For any a > 0, any \(p\in \mathbb {N}\), and any \(g\colon \mathbb {R}_+\to \mathbb {R}\), we define the function \(\rho ^p_a[g]\colon [0,\infty )\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} \rho^p_a[g](x) ~=~ g(x+a)-\sum_{j=0}^{p-1}{\textstyle{{{x}\choose{j}}}}\,\Delta^jg(a)\qquad \mbox{for }x\geq 0. \end{aligned} $$
(1.7)

Identity (1.7) clearly shows that the function \(\rho ^p_a[g]\) is actually defined on the open interval \((-a,\infty )\). However, in this work we will almost always consider it as a function defined on the interval \([0,\infty )\). We also note that \(\rho ^p_a[g](0)=0\).
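
For instance (a simple illustration of Definition 1.8), for p = 1 and p = 2 identity (1.7) reduces to

$$\displaystyle \begin{aligned} \rho^1_a[g](x) ~=~ g(x+a)-g(a)\qquad \mbox{and}\qquad \rho^2_a[g](x) ~=~ g(x+a)-g(a)-x{\,}\Delta g(a), \end{aligned}$$

so that, in particular, condition (1.2) simply says that \(\rho ^1_x[g](h)\to 0\) as \(x\to \infty \) for each h > 0.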

Definition 1.9

For any \(p\in \mathbb {N}\) and any \(\mathrm {S}\in \{\mathbb {N},\mathbb {R}\}\), we let \(\mathcal {R}^p_{\mathrm {S}}\) denote the set of functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that, for each x > 0,

$$\displaystyle \begin{aligned} \rho^p_a[g](x) ~\to ~0\qquad \mbox{as }a\to_{\mathrm{S}}\infty. \end{aligned}$$

We also let \(\mathcal {D}^p_{\mathrm {S}}\) denote the set of functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that

$$\displaystyle \begin{aligned} \Delta^pg(x) ~\to ~0\qquad \mbox{as }x\to_{\mathrm{S}}\infty. \end{aligned}$$

We immediately observe that the inclusion \(\mathcal {D}^p_{\mathrm {S}}\subset \mathcal {D}^{p+1}_{\mathrm {S}}\) holds for every \(p\in \mathbb {N}\). We will see in Sects. 3.1 and 4.1 that the inclusion \(\mathcal {R}^p_{\mathrm {S}}\subset \mathcal {R}^{p+1}_{\mathrm {S}}\) also holds for every \(p\in \mathbb {N}\).