In this chapter, we introduce and investigate the map, denoted by Σ, that carries any function g lying in

$$\displaystyle \begin{aligned} \bigcup_{p\geq 0}(\mathcal{D}^p\cap\mathcal{K}^p) \end{aligned}$$

into the unique solution f to the equation Δf = g that arises from the existence Theorem 3.6. We call these solutions multiple \(\log \Gamma \)-type functions and we investigate certain of their properties. We also discuss the search for simple conditions on the function \(g\colon \mathbb {R}_+\to \mathbb {R}\) to ensure the existence of Σg. Further important properties of these functions, including counterparts of several classical properties of the gamma function, will be investigated in the next three chapters.

The map Σ is actually a central concept of the theory developed here. Its definition and properties seem to show that it is as fundamental as the basic antiderivative operation. In the next chapter we show that both concepts actually share many common features.

5.1 The Map Σ and Its Basic Properties

In this section, we introduce the map Σ and discuss some of its basic properties. We begin with the following important definition.

Definition 5.1 (Asymptotic Degree)

The asymptotic degree of a function \(f\colon \mathbb {R}_+\to \mathbb {R}\), denoted deg f, is defined by the equation

$$\displaystyle \begin{aligned} \deg f ~=~ -1+\min\{q\in\mathbb{N} : f\in\mathcal{D}^q_{\mathbb{R}}\}.{} \end{aligned}$$

For instance, if f is a polynomial of degree p for some \(p\in \mathbb {N}\), then deg f = p. If f(x) = 0 or \(f(x)=\frac {1}{x}\), or \(f(x)=\ln (1+\frac {1}{x})\), then deg f = −1. If \(f(x)=\sin x\), \(f(x)=x+\sin x\), or \(f(x)=2^x\), then deg f = ∞.

It is easy to see that the identity

$$\displaystyle \begin{aligned} \deg f ~=~ 1+\deg\Delta f \end{aligned}$$

holds whenever deg f is a nonnegative integer. However, this identity need not hold when deg f = −1. For instance, for the function f(x) = 0 or the function \(f(x)=\frac {1}{x}\), we have deg f = deg Δf = −1. This shows that in general we have

$$\displaystyle \begin{aligned} (\deg f)_+ ~=~ 1+\deg\Delta f. \end{aligned}$$

We are now ready to introduce the map Σ. Here and throughout, the symbols dom( Σ) and ran( Σ) denote the domain and range of Σ, respectively.

Definition 5.2 (The Map Σ)

We define the map Σ: dom( Σ) →ran( Σ), where

$$\displaystyle \begin{aligned} \mathrm{dom}(\Sigma) ~=~ \bigcup_{p\geq 0}(\mathcal{D}^p\cap\mathcal{K}^p), \end{aligned}$$

by the following condition: if \(g\in \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then

$$\displaystyle \begin{aligned} \Sigma g ~=~ \lim_{n\to\infty}f^p_n[g]. \end{aligned} $$
(5.1)

It is important to note that the map Σ is well defined; indeed, if g lies in both sets \(\mathcal {D}^p\cap \mathcal {K}^p\) and \(\mathcal {D}^q\cap \mathcal {K}^q\) for some integers 0 ≤ p < q, then by Proposition 3.8 both sequences \(n\mapsto f^p_n[g]\) and \(n\mapsto f^q_n[g]\) have the same limiting function. Thus, in view of Proposition 4.7, we see that condition (5.1) holds for p = 1 + deg g.

Thus defined, it is clear that the map Σ is one-to-one; indeed, if \(\Sigma g_1=\Sigma g_2\) for some functions \(g_1\) and \(g_2\) lying in dom(Σ), then \(g_1=\Delta\Sigma g_1=\Delta\Sigma g_2=g_2\). This map is even a bijection since we have restricted its codomain to its range. We then have the following immediate result.

Proposition 5.3

The map Σ is a bijection and its inverse is the restriction of the difference operator Δ to ran( Σ).

Just as the indefinite integral (or antiderivative) of a function g is the class of functions whose derivative is g, the indefinite sum (or antidifference) of a function g is the class of functions whose difference is g (see, e.g., Graham et al. [41, p. 48]). Recall also that any two indefinite integrals of a function differ by a constant while any two indefinite sums of a function differ by a 1-periodic function. The map Σ now enables one to refine the definition of an indefinite sum as follows.

Definition 5.4

We say that the principal indefinite sum of a function g lying in dom( Σ) is the class of functions c +  Σg, where \(c\in \mathbb {R}\).

Example 5.5 (The Log-Gamma Function)

If \(g(x)=\ln x\), then we have \(\Sigma g(x)=\ln \Gamma (x)\), and we simply write

$$\displaystyle \begin{aligned} \Sigma\ln x ~=~ \ln\Gamma(x),\qquad x>0. \end{aligned}$$

Thus, the principal indefinite sum of the function \(x\mapsto \ln x\) is the class of functions \(x\mapsto c+\ln \Gamma (x)\), where \(c\in \mathbb {R}\). With some abuse of language, we can say that the principal indefinite sum of the log function is the log-gamma function. \(\lozenge \)
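
As a small numerical illustration, the following Python sketch checks the two properties that characterize \(\Sigma\ln\), namely that \(\ln\Gamma\) vanishes at 1 and satisfies the difference equation Δf(x) = ln x. It also evaluates the logarithm of the classical Gauss product for Γ, a well-known representation of the gamma function that is used here merely as an independent cross-check.

```python
import math

# Check the two properties that identify Sigma(ln): f(1) = 0 and
# Delta f(x) = f(x+1) - f(x) = ln x, with f = lgamma = ln Gamma.
f = math.lgamma

print(f(1.0))  # expected: 0.0
for x in (0.5, 2.3, 7.0):
    print(abs((f(x + 1) - f(x)) - math.log(x)))  # expected: ~0 (rounding only)

# Independent cross-check: the classical Gauss product
#   Gamma(x) = lim_n  n! * n**x / (x * (x+1) * ... * (x+n)).
def log_gauss(x, n):
    return (math.lgamma(n + 1) + x * math.log(n)
            - sum(math.log(x + k) for k in range(n + 1)))

for n in (10**2, 10**4, 10**6):
    print(n, abs(log_gauss(2.3, n) - f(2.3)))  # decreases toward 0
```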

Exactly as for the difference operator Δ, we will sometimes add a subscript to the symbol Σ to specify the variable on which the map Σ acts. For instance, \(\Sigma_x{\,}g(2x)\) stands for the function obtained by applying Σ to the function \(x\mapsto g(2x)\), while Σg(2x) stands for the value of the function Σg at 2x.

The following proposition provides some straightforward properties of the map Σ that will be very useful as we continue.

Proposition 5.6

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . The following assertions hold.

  1. (a)

    Σg is the unique solution to the equation Δf = g that lies in \(\mathcal {K}^p\) and that vanishes at 1.

  2. (b)

    Σg lies in \(\mathcal {D}^{p+1}\cap \mathcal {K}^p=\mathcal {R}^{p+1}\cap \mathcal {K}^p\).

  3. (c)

    Σg satisfies the identities

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(n) & =&\displaystyle \sum_{k=1}^{n-1}g(k){\,},\qquad n\in\mathbb{N}^*,{} \end{array} \end{aligned} $$
    (5.2)
    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x+n) & =&\displaystyle \Sigma g(x) + \sum_{k=0}^{n-1}g(x+k){\,},\qquad n\in\mathbb{N},{} \end{array} \end{aligned} $$
    (5.3)

    and

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ f^p_n[g](x) + \rho^{p+1}_n[\Sigma g](x){\,},\qquad n\in\mathbb{N}^*. \end{aligned} $$
    (5.4)

Proof

Assertions (a) and (b) immediately follow from Theorems 3.1 and 3.6 and Proposition 4.9. Identities (5.2)–(5.4) follow from (3.1)–(3.3). □
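
For instance, with g(x) = ln x and Σg = ln Γ (see Example 5.5), identities (5.2) and (5.3) can be checked numerically in a few lines of Python; the test values below are arbitrary choices.

```python
import math

g = math.log      # g(x) = ln x
Sg = math.lgamma  # Sigma g = ln Gamma (Example 5.5)

# Identity (5.2): Sigma g(n) = sum_{k=1}^{n-1} g(k) for integer n >= 1.
n = 12
print(abs(Sg(n) - sum(g(k) for k in range(1, n))))  # expected: ~0

# Identity (5.3): Sigma g(x+n) = Sigma g(x) + sum_{k=0}^{n-1} g(x+k).
x, n = 2.7, 9
print(abs(Sg(x + n) - (Sg(x) + sum(g(x + k) for k in range(n)))))  # expected: ~0
```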

Quite surprisingly, we observe that if g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then Σg need not lie in \(\mathcal {K}^{p+1}\). The example given in Remark 4.13 illustrates this observation.

We also have that

$$\displaystyle \begin{aligned} \deg\Sigma g ~=~ 1+\deg g \end{aligned}$$

whenever deg  Σg is a nonnegative integer; but this property no longer holds if deg  Σg = −1. For instance, considering the functions

$$\displaystyle \begin{aligned} g(x) ~=~ \frac{2-x}{x(x+1)(x+2)}\qquad \mbox{and}\qquad \Sigma g(x) ~=~ \frac{x-1}{x(x+1)}{\,}, \end{aligned}$$

we have deg g = deg Σg = −1. Thus, in general we have

$$\displaystyle \begin{aligned} (\deg\Sigma g)_+ ~=~ 1+\deg g. \end{aligned}$$
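
The pair (g, Σg) above is easily verified symbolically: the following SymPy sketch confirms that ΔΣg = g and that Σg vanishes at 1, the two properties that identify Σg.

```python
from sympy import symbols, simplify

x = symbols('x', positive=True)
g = (2 - x) / (x * (x + 1) * (x + 2))
Sg = (x - 1) / (x * (x + 1))

# Delta(Sg)(x) = Sg(x+1) - Sg(x) should simplify to g(x),
# and Sg should vanish at x = 1.
print(simplify(Sg.subs(x, x + 1) - Sg - g))  # expected: 0
print(Sg.subs(x, 1))                         # expected: 0
```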

We now give two important propositions, which were essentially proved by Webster [98, Theorem 5.1] in the special case when p = 1.

Proposition 5.7

Let \(g_1\) and \(g_2\) lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let \(c_1,c_2\in \mathbb {R}\). If \(c_1g_1+c_2g_2\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\), then

$$\displaystyle \begin{aligned} \Sigma(c_1g_1+c_2g_2) ~=~ c_1\Sigma g_1+c_2\Sigma g_2. \end{aligned}$$

Proof

It is clear that if g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\), then we have Σcg = c Σg for any \(c\in \mathbb {R}\). Now, suppose that \(g_1\), \(g_2\), and \(g_1+g_2\) lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) and let us show that

$$\displaystyle \begin{aligned} \Sigma(g_1+g_2) ~=~ \Sigma g_1+\Sigma g_2. \end{aligned}$$

It is actually enough to consider the following two cases.

  1. 1.

    If both \(g_1\) and \(g_2\) lie in \(\mathcal {D}^p\cap \mathcal {K}^p_+\) (resp. \(\mathcal {D}^p\cap \mathcal {K}^p_-\)), then so does \(g_1+g_2\). It follows that the function \(f=\Sigma g_1+\Sigma g_2\) is a solution to the equation \(\Delta f=g_1+g_2\) that lies in \(\mathcal {K}^p_-\) (resp. \(\mathcal {K}^p_+\)) and satisfies f(1) = 0. By the uniqueness Theorem 3.1, we must have \(\Sigma (g_1+g_2)=f\).

  2. 2.

    If both \(g_1+g_2\) and \(-g_1\) lie in \(\mathcal {D}^p\cap \mathcal {K}^p_+\) (resp. \(\mathcal {D}^p\cap \mathcal {K}^p_-\)), then so does \(g_2\) (use the first case) and we have

    $$\displaystyle \begin{aligned} \Sigma g_2 ~=~ \Sigma ((g_1+g_2)+(-g_1)) ~=~ \Sigma (g_1+g_2) - \Sigma g_1. \end{aligned}$$

This completes the proof. □

Proposition 5.8

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p_+\) (resp. \(\mathcal {D}^p\cap \mathcal {K}^p_-\) ) for some \(p\in \mathbb {N}\) , let a ≥ 0, and let \(h\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation h(x) = g(x + a) for x > 0. Then h lies in \(\mathcal {D}^p\cap \mathcal {K}^p_+\) (resp. \(\mathcal {D}^p\cap \mathcal {K}^p_-\) ) and

$$\displaystyle \begin{aligned} \Sigma h(x) ~=~ \Sigma_x{\,}g(x+a) ~=~ \Sigma g(x+a)-\Sigma g(a+1). \end{aligned}$$

Proof

Define a function \(f\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} f(x) ~=~ \Sigma g(x+a)-\Sigma g(a+1) \end{aligned}$$

for x > 0. By Corollary 4.21, f is a solution to the equation Δf = h that lies in \(\mathcal {K}^p_-\) (resp. \(\mathcal {K}^p_+\)) and satisfies f(1) = 0. Hence, Σh = f, as required. □
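
As a numerical illustration of Proposition 5.8, take g(x) = ln x, so that Σg = ln Γ. The following sketch checks that the function \(f(x)=\ln\Gamma(x+a)-\ln\Gamma(a+1)\) satisfies f(1) = 0 and Δf(x) = ln(x + a); the value of a is an arbitrary choice.

```python
import math

a = 1.7  # an arbitrary shift, a >= 0

def f(x):
    # Candidate for Sigma_x ln(x + a) given by Proposition 5.8 with g = ln.
    return math.lgamma(x + a) - math.lgamma(a + 1)

print(abs(f(1.0)))  # expected: 0
for x in (0.4, 3.0, 10.5):
    print(abs((f(x + 1) - f(x)) - math.log(x + a)))  # expected: ~0
```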

Example 5.9 (See Webster [98])

For any a > 0, consider the function \(g_a\colon \mathbb {R}_+\to \mathbb {R}\) defined by

$$\displaystyle \begin{aligned} g_a(x) ~=~ \ln\frac{x}{x+a} ~=~ \ln x-\ln(x+a)\qquad \mbox{for }x>0. \end{aligned}$$

Then \(g_a\) lies in \(\mathcal {D}^0\cap \mathcal {K}^0_+\) (and also in \(\mathcal {D}^1\cap \mathcal {K}^1_-\)) and Propositions 5.7 and 5.8 show that

$$\displaystyle \begin{aligned} \Sigma g_a(x) ~=~ \ln\frac{\Gamma(x)\Gamma(a+1)}{\Gamma(x+a)}{\,}. \end{aligned}$$

Also, since \(g_a\) is concave on \(\mathbb {R}_+\), we have that \(\Sigma g_a\) is convex on \(\mathbb {R}_+\). As Webster [98, p. 615] observed, this is "a not completely trivial result, but one immediate from the approach adopted here." \(\lozenge \)
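
The closed form for \(\Sigma g_a\) is easy to test: the following sketch verifies that \(f(x)=\ln\big(\Gamma(x)\Gamma(a+1)/\Gamma(x+a)\big)\) satisfies f(1) = 0 and \(\Delta f=g_a\), and it inspects small second-order differences of f, whose nonnegativity is consistent with the convexity of \(\Sigma g_a\); the value of a and the test points are arbitrary choices.

```python
import math

a = 2.5  # an arbitrary parameter a > 0

def g(x):
    return math.log(x / (x + a))

def f(x):
    # Candidate for Sigma g_a from Example 5.9.
    return math.lgamma(x) + math.lgamma(a + 1) - math.lgamma(x + a)

print(abs(f(1.0)))  # expected: 0
for x in (0.3, 1.9, 8.0):
    print(abs((f(x + 1) - f(x)) - g(x)))  # expected: ~0

# Small second-order differences of f: nonnegative values are consistent
# with the convexity of Sigma g_a observed in the example.
h = 0.01
for x in (0.5, 2.0, 10.0):
    print(f(x + 2 * h) - 2 * f(x + h) + f(x) >= 0)  # expected: True
```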

Example 5.10 (A Rational Function)

The function

$$\displaystyle \begin{aligned} g(x) ~=~ \frac{x^4+1}{x^3+x} ~=~ x+\frac{1}{x}-\frac{2x}{x^2+1} \end{aligned}$$

clearly lies in \(\mathcal {D}^2\cap \mathcal {K}^2\). Using Proposition 5.7, we then have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ {\textstyle{{{x}\choose{2}}}}+H_{x-1}-2\,\Sigma h(x), \end{aligned}$$

where the function

$$\displaystyle \begin{aligned} h(x) ~=~ \frac{x}{x^2+1} ~=~ \Re\left(\frac{1}{x+i}\right) \end{aligned}$$

lies in \(\mathcal {D}^0\cap \mathcal {K}^0\). Now, recalling that \(\Sigma _x \frac {1}{x}=H_{x-1}\), it is not difficult to see that

$$\displaystyle \begin{aligned} \Sigma h(x) ~=~ c+\Re H_{x+i-1} \end{aligned}$$

for some \(c\in \mathbb {R}\), where the function \(z\mapsto H_z\) on \(\mathbb {C}\setminus (-\mathbb {N}^*)\) satisfies the identity

$$\displaystyle \begin{aligned} H_z ~=~ \sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{z+k}\right). \end{aligned}$$

Indeed, the function \(f\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation

$$\displaystyle \begin{aligned} f(x) ~=~ \Re H_{x+i-1} ~=~ \sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{x+k-1}{(x+k-1)^2+1}\right),\qquad x>0, \end{aligned}$$

lies in \(\mathcal {K}^0\) and satisfies Δf = h. \(\lozenge \)
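
The last claim is easy to test numerically: truncating the series for \(H_z\) at a finite level K (an arbitrary choice; the neglected tail is of order 1∕K), the following sketch confirms that \(f(x)=\Re H_{x+i-1}\) satisfies \(\Delta f(x)\approx x/(x^2+1)\).

```python
K = 200_000  # truncation level for the series defining H_z (arbitrary choice)

def H(z):
    # Partial sum of H_z = sum_{k>=1} (1/k - 1/(z+k)) for complex z.
    return sum(1.0 / k - 1.0 / (z + k) for k in range(1, K + 1))

def f(x):
    # f(x) = Re(H_{x+i-1}), computed from the truncated series.
    return H(complex(x - 1.0, 1.0)).real

def h(x):
    return x / (x * x + 1.0)

for x in (0.5, 2.0, 7.3):
    print(abs((f(x + 1) - f(x)) - h(x)))  # expected: small (truncation error only)
```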

We also have the following surprising proposition, which says that if a function g lies in \(\mathcal {D}^p\cap \mathcal {K}^p_-\cap \mathcal {K}^q\) for some integers 0 ≤ p ≤ q, then it actually lies in

$$\displaystyle \begin{aligned} \mathcal{K}^p_-\cap\mathcal{K}^{p+1}_+\cap\mathcal{K}^{p+2}_-\cap\mathcal{K}^{p+3}_+\cap\cdots\cap\mathcal{K}^q_{\pm}{\,}, \end{aligned}$$

where the subscripts alternate in sign. The same property holds for Σg.

Proposition 5.11

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p_-\cap \mathcal {K}^{p+1}\) for some \(p\in \mathbb {N}\) . Then it lies in \(\mathcal {K}^{p+1}_+\) and Σg lies in \(\mathcal {D}^{p+1}\cap \mathcal {K}^p_+\cap \mathcal {K}^{p+1}_-\).

Proof

If g lies in \(\mathcal {K}^{p+1}_+\), there is nothing to prove. So suppose that g lies in \(\mathcal {K}^{p+1}_-\). Since it also lies in \(\mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}_-\), by Corollary 4.19 it must lie in \(\mathcal {K}^p_+\). By Corollary 4.6, g is then eventually a polynomial of degree less than or equal to p. But then, using Corollary 4.6 again, g lies in \(\mathcal {K}^{p+1}_+\). The result about Σg is then trivial. □

Example 5.12

Let us apply Proposition 5.11 to the function \(g(x)=\ln x\) with p = 1. We then obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl} g & \mbox{lies in} &\displaystyle \mathcal{D}^1\cap\mathcal{K}^1_-\cap\mathcal{K}^2_+\cap\mathcal{K}^3_-\cap\mathcal{K}^4_+\cap\cdots\\ \mbox{while}\quad \Sigma g & \mbox{lies in} &\displaystyle \mathcal{D}^2\cap\mathcal{K}^1_+\cap\mathcal{K}^2_-\cap\mathcal{K}^3_+\cap\mathcal{K}^4_-\cap\cdots{\,}, \end{array} \end{aligned} $$

where \(\Sigma g(x)=\ln \Gamma (x)\). Moreover, it is easy to see that g is 1-concave on \(\mathbb {R}_+\), 2-convex on \(\mathbb {R}_+\), and so on, and similarly for Σg. \(\lozenge \)
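
The alternating pattern above can be glimpsed numerically: eventual q-convexity (resp. q-concavity) is reflected in the sign of the finite differences \(\Delta^{q+1}\) taken with unit steps. The following sketch prints these signs for g = ln and Σg = ln Γ at an arbitrary test point.

```python
import math

def delta(fun, order, x):
    # Forward difference of the given order with unit step:
    # Delta^order fun(x) = sum_k (-1)**(order-k) * C(order, k) * fun(x + k).
    return sum((-1) ** (order - k) * math.comb(order, k) * fun(x + k)
               for k in range(order + 1))

x = 5.0  # an arbitrary test point
for q in range(1, 6):
    sign_g = '+' if delta(math.log, q + 1, x) > 0 else '-'
    sign_Sg = '+' if delta(math.lgamma, q + 1, x) > 0 else '-'
    print(q, 'ln:', sign_g, 'lnGamma:', sign_Sg)  # signs alternate, in opposition
```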

Example 5.13

Applying Proposition 5.11 to the function \(g(x)=-\frac {1}{x}\ln x\) with p = 0, we obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl} g & \mbox{lies in} &\displaystyle \mathcal{D}^0\cap\mathcal{K}^0_+\cap\mathcal{K}^1_-\cap\mathcal{K}^2_+\cap\mathcal{K}^3_-\cap\cdots\\ \mbox{while}\quad \Sigma g & \mbox{lies in} &\displaystyle \mathcal{D}^1\cap\mathcal{K}^0_-\cap\mathcal{K}^1_+\cap\mathcal{K}^2_-\cap\mathcal{K}^3_+\cap\cdots{\,}, \end{array} \end{aligned} $$

where \(\Sigma g(x)=\gamma_1(x)-\gamma_1\) is expressed in terms of a generalized Stieltjes constant (see Sect. 10.7). Now, for every \(q\in \mathbb {N}\), we have \(g^{(q+1)}(x)=0\) if and only if \(x=e^{H_{q+1}}\). Hence we can easily see that g is q-convex or q-concave on the unbounded interval \((e^{H_{q+1}},\infty )\). \(\lozenge \)

Remark 5.14

Although the asymptotic degree of a function (see Definition 5.1) defines an important and useful concept, it is not always easy to compute. For instance, we can show after some calculus that, for any \(p\in \mathbb {N}\), the function \(h_p\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation (see Sect. 11.3)

$$\displaystyle \begin{aligned} h_p(x) ~=~ \frac{x^p}{\ln(x+1)}\qquad \mbox{for }x>0 \end{aligned}$$

has the asymptotic degree deg \(h_p\) = p − 1. Thus, it would be useful to have a simple formula to easily compute the asymptotic degree of any function. On this matter, let us consider the limiting value (when it exists)

$$\displaystyle \begin{aligned} e_f ~=~ \lim_{x\to\infty}x\,\frac{\Delta f(x)}{f(x)}{\,}, \end{aligned}$$

which is inspired by the concept of the elasticity of a function f (see, e.g., Nievergelt [81]). Computing this limit for the function \(h_p\) above, for instance, we easily obtain \(e_{h_p}=p\). Interestingly, we can observe empirically that many functions f lying in \(\mathcal {K}^0\) satisfy the double inequality

$$\displaystyle \begin{aligned} \lfloor e_f\rfloor_+ ~\leq ~ 1+\deg f ~\leq ~ \lfloor 1+e_f\rfloor_+. \end{aligned}$$

It would then be useful to find necessary and sufficient conditions on the function f for this double inequality to hold. \(\lozenge \)
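
The elasticity-type limit \(e_f\) is easy to estimate numerically. The following sketch evaluates \(x{\,}\Delta h_p(x)/h_p(x)\) for growing x and a few values of p; the estimates drift slowly toward p, in line with \(e_{h_p}=p\).

```python
import math

def h(p, x):
    # h_p(x) = x**p / ln(x + 1), as in Remark 5.14.
    return x ** p / math.log(x + 1)

def elasticity_estimate(p, x):
    # Finite-x estimate of e_f = lim_{x -> oo} x * Delta f(x) / f(x) for f = h_p.
    return x * (h(p, x + 1) - h(p, x)) / h(p, x)

for p in (1, 2, 3):
    print(p, [round(elasticity_estimate(p, 10.0 ** k), 3) for k in (2, 4, 6, 8)])
```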

5.2 Multiple \(\log \Gamma \)-Type Functions

Barnes [14, 15, 16] introduced a sequence of functions \(\Gamma_1,\Gamma_2,\ldots\), called multiple gamma functions, that generalize the Euler gamma function. The restrictions of these functions to \(\mathbb {R}_+\) are characterized by the equations

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \Gamma_{p+1}(x+1) ~=~ \frac{\Gamma_{p+1}(x)}{\Gamma_p(x)}{\,},\\ & &\displaystyle \Gamma_1(x) ~=~ \Gamma(x),\quad \Gamma_p(1) ~=~ 1,\qquad \mbox{for }x>0\mbox{ and }p\in\mathbb{N}^*, \end{array} \end{aligned} $$

together with the convexity condition

$$\displaystyle \begin{aligned} (-1)^{p+1}D^{p+1}\ln\Gamma_p(x) ~\geq ~0,\qquad x>0. \end{aligned}$$

For more recent references, see, e.g., Adamchik [1, 2] and Srivastava and Choi [93].

Thus defined, this sequence of functions satisfies the conditions

$$\displaystyle \begin{aligned} \ln\Gamma_{p+1}(x) ~=~ -\Sigma\ln\Gamma_p(x)\qquad \mbox{and}\qquad \deg(\ln\circ\Gamma_p) ~=~ p. \end{aligned}$$

Moreover, it can be naturally extended to the case when p = 0 by setting \(\Gamma_0(x)=1/x\).

Now, these observations motivate the following definition.

Definition 5.15

Let \(p\in \mathbb {N}\).

  • A \(\Gamma_p\)-type function (resp. a \(\log \Gamma _p\)-type function) is a function of the form \(\exp \circ \Sigma g\) (resp. Σg), where g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) with p = 1 + deg g.

  • A multiple Γ-type function (resp. a multiple \(\log \Gamma \)-type function) is a \(\Gamma_p\)-type function (resp. a \(\log \Gamma _p\)-type function) for some \(p\in \mathbb {N}\).

When p ≥ 1, \(\exp \circ \Sigma g\) reduces to the function \(\Gamma_p\) precisely when \(\exp \circ g\) is the function \(1/\Gamma_{p-1}\), which simply shows that the function \(\Gamma_p\) restricted to \(\mathbb {R}_+\) is itself a \(\Gamma_p\)-type function.

We also introduce the following notation. We let \(\Gamma_p\) (resp. \(\mathrm{Log}\Gamma_p\)) denote the set of \(\Gamma_p\)-type functions (resp. \(\log \Gamma _p\)-type functions). Thus, by definition, the set ran(Σ) can be decomposed into the following disjoint union

$$\displaystyle \begin{aligned} \mathrm{ran}(\Sigma) ~=~ \bigcup_{p\geq 0}\mathrm{ran}(\Sigma|{}_{\mathcal{D}^p\cap\mathcal{K}^p}) ~=~ \bigcup_{p\geq 0}\mathrm{Log}\Gamma_p{\,}. \end{aligned}$$

Thus defined, the set of \(\log \Gamma _p\)-type functions can be characterized as follows.

Proposition 5.16

For any function \(f\colon \mathbb {R}_+\to \mathbb {R}\) and any \(p\in \mathbb {N}\) , the following assertions are equivalent.

  1. (i)

    \(f\in\mathrm{Log}\Gamma_p\).

  2. (ii)

    f(1) = 0, \(f\in \mathcal {K}^p\), \(\Delta f\in \mathcal {D}^p\cap \mathcal {K}^p\) , and deg  Δf = p − 1.

  3. (iii)

    f =  Σ Δf, \(\Delta f\in \mathcal {D}^p\cap \mathcal {K}^p\) , and deg  Δf = p − 1.

  4. (iv)

    f ∈ran( Σ) and deg  Δf = p − 1.

  5. (v)

    If p ≥ 1, then f ∈ran( Σ) and deg f = p. If p = 0, then f ∈ran( Σ) and deg f ∈{−1, 0}.

Proof

The equivalence (i) ⇔ (ii) ⇔ (iii) is immediate by definition of Σ. The implications (iii) ⇒ (iv) ⇒ (ii) are straightforward. Finally, the equivalence (iv) ⇔ (v) is trivial. □

From Proposition 5.16 we immediately derive the following characterization of the set ran( Σ) of all multiple \(\log \Gamma \)-type functions.

Corollary 5.17

A function \(f\colon \mathbb {R}_+\to \mathbb {R}\) lies in ran( Σ) if and only if there exists \(p\in \mathbb {N}\) such that f(1) = 0, \(f\in \mathcal {K}^p\) , and \(\Delta f\in \mathcal {D}^p\cap \mathcal {K}^p\).

5.3 Integration of Multiple \(\log \Gamma \)-Type Functions

The uniform convergence of the sequence \(n\mapsto f^p_n[g]\) (cf. Theorem 3.6) shows that the function Σg is continuous whenever so is g. More generally, we also have the following result.

Proposition 5.18

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . The following assertions hold.

  1. (a)

    Σg lies in \(\mathcal {C}^0\cap \mathcal {D}^{p+1}\cap \mathcal {K}^p\).

  2. (b)

    Σg is integrable at 0 if and only if so is g.

  3. (c)

    Let \(n\in \mathbb {N}^*\) be such that g is p-convex or p-concave on \([n,\infty)\) and let 0 ≤ a < x. The following inequality holds

    $$\displaystyle \begin{aligned} \left|\int_a^x(f^p_n[g](t)-\Sigma g(t)){\,}dt\right| ~\leq ~ \int_a^x\lceil t\rceil\left|{\textstyle{{{t-1}\choose{p}}}}\right| dt ~ \left|\Delta^pg(n)\right|. \end{aligned}$$

    If p ≥ 1, we also have the following tighter inequality

    $$\displaystyle \begin{aligned} \left|\int_a^x(f^p_n[g](t)-\Sigma g(t)){\,}dt\right| ~\leq ~ \int_a^x\left|{\textstyle{{{t-1}\choose{p}}}}\right|\left|\Delta^{p-1}g(n+t)-\Delta^{p-1}g(n)\right|{\,}dt. \end{aligned}$$

    Moreover, the following assertions hold.

    1. (c1)

      The sequence

      $$\displaystyle \begin{aligned} n ~\mapsto ~\int_a^x\left(f^p_n[g](t)-\Sigma g(t)\right){\,}dt \end{aligned}$$

      converges to zero.

    2. (c2)

      The sequence

      $$\displaystyle \begin{aligned} n ~\mapsto ~\int_a^x(f^p_n[g](t)+ g(t)){\,}dt \end{aligned}$$

      converges to

      $$\displaystyle \begin{aligned} \int_a^x(\Sigma g(t) + g(t)){\,}dt ~=~ \int_a^x\Sigma g(t+1){\,}dt. \end{aligned}$$
    3. (c3)

      For any \(m\in \mathbb {N}^*\) , the sequence

      $$\displaystyle \begin{aligned} n ~\mapsto ~\int_a^x(f^p_n[g](t)-f^p_m[g](t)){\,}dt \end{aligned}$$

      converges to

      $$\displaystyle \begin{aligned} \int_a^x(\Sigma g(t)-f^p_m[g](t)){\,}dt. \end{aligned}$$

Proof

Assertion (a) follows from Proposition 5.6 and the uniform convergence of the sequence \(n\mapsto f^p_n[g]\). Assertion (b) follows from assertion (a) and the identity Σg(x + 1) − Σg(x) = g(x). Now, for any \(n\in \mathbb {N}^*\), since \(\rho _n^{p+1}[\Sigma g](0)=0\) by (1.7), the function \(\rho ^{p+1}_n[\Sigma g]\) is clearly integrable on (0, x) and hence on (a, x). Using (5.4), it follows that the function \(f^p_n[g]-\Sigma g\) is also integrable on (a, x). The inequalities of assertion (c) then follow from Theorem 3.6(b); hence assertion (c1) also holds. Assertion (c2) follows from assertion (c1) and the identity Σg(x + 1) − Σg(x) = g(x). Finally, using (3.8) we see that the function \(f^p_m[g]-f^p_n[g]\) is integrable on (a, x), and hence assertion (c3) follows from assertion (c1). □
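
Assertion (b) can be illustrated with g(x) = ln x and Σg = ln Γ. The identity Σg(t) = Σg(t + 1) − g(t) used in the proof lets one trade the singular integrand \(\ln\Gamma(t)\) on (0, 1) for the smooth integrand \(\ln\Gamma(t+1)\) plus − ln t, whose integral over (0, 1) equals 1. The following sketch exploits this and compares the result with the classical value \(\int_0^1\ln\Gamma(t){\,}dt=\frac{1}{2}\ln(2\pi)\) provided by Raabe's formula, quoted here only as an external cross-check.

```python
import math

# Midpoint rule for the smooth part ln Gamma(t + 1) on (0, 1).
N = 100_000
smooth_part = sum(math.lgamma((j + 0.5) / N + 1.0) for j in range(N)) / N

# Since ln Gamma(t) = ln Gamma(t+1) - ln t and -ln t integrates to 1 on (0, 1),
# the improper integral of ln Gamma over (0, 1) is the smooth part plus 1.
integral = smooth_part + 1.0

print(integral)                      # approx 0.918938...
print(0.5 * math.log(2 * math.pi))   # Raabe's value, for comparison
```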

Remark 5.19

Assertion (c) of Proposition 5.18 has been obtained by integrating the function \(\rho ^{p+1}_n[\Sigma g]\) on (a, x). The first inequality in assertion (c) then clearly shows that the sequences of functions defined in assertions (c1)–(c3) converge uniformly on any bounded subset of \(\mathbb {R}_+\). Now, we also observe that the integral

$$\displaystyle \begin{aligned} \int_a^x\rho^{p+1}_n[\Sigma g](t){\,}dt \end{aligned}$$

itself can be integrated on (a, x), and we can repeat this process as often as we wish. After n integrations, we obtain

$$\displaystyle \begin{aligned} \frac{1}{(n-1)!}\,\int_a^x(x-t)^{n-1}\,\rho^{p+1}_n[\Sigma g](t){\,}dt, \end{aligned}$$

and, proceeding as in Proposition 5.18, it is then clear that the following inequality holds

$$\displaystyle \begin{aligned} \left|\int_a^x(x-t)^{n-1}{\,}(f^p_n[g](t)-\Sigma g(t)){\,}dt\right| ~\leq ~ \int_a^x(x-t)^{n-1}\,\lceil t\rceil\left|{\textstyle{{{t-1}\choose{p}}}}\right| dt ~ \left|\Delta^pg(n)\right|. \end{aligned}$$

In particular, this inequality shows that the left-hand integral converges to zero, uniformly on any bounded subset of \(\mathbb {R}_+\), as \(n\to\infty\). \(\lozenge \)

Let us end this section with the following important remark. In Proposition 5.18 we have assumed the continuity of the function g to ensure that the integrals of both g and Σg are defined. Of course, we could somewhat generalize our result by relaxing this continuity assumption to weaker properties such as local integrability of both g and Σg. However, for the sake of simplicity, in this work we will always assume the continuity of any function whenever we need to integrate it on a compact interval (see also Remark 9.1). Continuity can thus be regarded simply as a convenient assumption that keeps the results simple. We encourage the interested reader to generalize those results by searching for the weakest possible assumptions. This may sometimes lead to challenging but stimulating problems.

5.4 The Quest for a Characterization of dom( Σ)

Recall that the map Σ is defined on the set

$$\displaystyle \begin{aligned} \mathrm{dom}(\Sigma) ~=~ \bigcup_{p\geq 0}(\mathcal{D}^p\cap\mathcal{K}^p). \end{aligned}$$

In this respect, it would be useful to have a very simple test to check whether a given function \(g\colon \mathbb {R}_+\to \mathbb {R}\) lies in this set. By Propositions 4.2 and 4.7, the condition that g lies in \(\mathcal {D}^{\infty }_{\mathbb {N}}\cap \mathcal {K}^0\) is clearly necessary. In the next proposition we show that, if g is not eventually identically zero, then it must also satisfy the following property

$$\displaystyle \begin{aligned} \limsup_{n\to_{\mathbb{N}}\,\infty}\,\frac{g(n+1)}{g(n)} ~\leq ~1. \end{aligned} $$
(5.5)

We first recall the following discrete version of L’Hospital’s rule, also called the Stolz-Cesàro theorem. For a recent reference see, e.g., Ash et al. [12].

Lemma 5.20 (Stolz-Cesàro Theorem)

Let \(n\mapsto a_n\) and \(n\mapsto b_n\) be two real sequences. If the second sequence is strictly monotone and unbounded, then

$$\displaystyle \begin{aligned} \liminf_{n\to\infty}\,\frac{a_{n+1}-a_n}{b_{n+1}-b_n} ~\leq ~ \liminf_{n\to\infty}\,\frac{a_n}{b_n} ~\leq ~ \limsup_{n\to\infty}\,\frac{a_n}{b_n} ~\leq ~ \limsup_{n\to\infty}\,\frac{a_{n+1}-a_n}{b_{n+1}-b_n}{\,}. \end{aligned}$$

In particular, if

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n} ~=~ \ell \end{aligned}$$

for some \(\ell \in \mathbb {R}\) , then

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\frac{a_n}{b_n} ~=~ \ell. \end{aligned}$$

Proposition 5.21

If g lies in dom( Σ) and is not eventually identically zero, then condition (5.5) holds.

Proof

Assume that g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Of course we can assume that p = 1 + deg g. We can also assume that g is not eventually a polynomial, for otherwise condition (5.5) clearly holds. If p = 0, then the function x↦|g(x)| eventually decreases to zero and hence condition (5.5) holds. Now suppose that p ≥ 1. Then the function \(\Delta^p g\) lies in \(\mathcal {D}^0\cap \mathcal {K}^0\) and there are two mutually exclusive cases to consider.

  1. (a)

    Suppose that the eventually monotone sequence \(n\mapsto\Delta^{p-1}g(n)\) is unbounded. This sequence is actually eventually strictly monotone. Indeed, otherwise the function \(\Delta ^p g\in \mathcal {K}^0\) would vanish in every unbounded interval of \(\mathbb {R}_+\), and hence would eventually be identically zero. Equivalently, g would eventually be a polynomial of degree less than or equal to p − 1, a contradiction. Using the Stolz-Cesàro theorem (see Lemma 5.20) and the fact that condition (5.5) holds for \(\Delta^p g\), we then obtain

    $$\displaystyle \begin{aligned} \limsup_{n\to_{\mathbb{N}}\,\infty}\,\frac{\Delta^{p-1}g(n+1)}{\Delta^{p-1}g(n)} ~\leq ~ \limsup_{n\to_{\mathbb{N}}\,\infty}\,\frac{\Delta^pg(n+1)}{\Delta^pg(n)} ~\leq ~1.\end{aligned} $$

    Iterating this process, we see that condition (5.5) holds for g.

  2. (b)

    Suppose that the sequence \(n\mapsto\Delta^{p-1}g(n)\) has a finite limit (which is necessarily nonzero by the minimality of p). If p = 1, then condition (5.5) holds trivially. If p ≥ 2, then the eventually monotone sequence \(n\mapsto\Delta^{p-2}g(n)\) is unbounded and we can show as in the previous case that it is actually eventually strictly monotone. Using the Stolz-Cesàro theorem, we then obtain

    $$\displaystyle \begin{aligned} \limsup_{n\to_{\mathbb{N}}\,\infty}\,\frac{\Delta^{p-2}g(n+1)}{\Delta^{p-2}g(n)} ~\leq ~ \limsup_{n\to_{\mathbb{N}}\,\infty}\,\frac{\Delta^{p-1}g(n+1)}{\Delta^{p-1}g(n)} ~=~1.\end{aligned} $$

    Iterating this process, we see that condition (5.5) holds.

This completes the proof. □

Remark 5.22

We observe that the left-hand side of (5.5) is not always a limit. For instance, the function \(g\colon \mathbb {R}_+\to \mathbb {R}\) defined by the equation

$$\displaystyle \begin{aligned} g(x) ~=~ \frac{1}{2^x}\left(1+\frac{1}{3}\sin x\right)\qquad \mbox{for }x>0 \end{aligned}$$

lies in \(\mathcal {D}^0\cap \mathcal {K}^0\) (see Remark 4.13), but the function \(x\mapsto g(x+1)/g(x)\) is a nonconstant periodic function. The first example in Remark 6.21 also illustrates this behavior.

On the other hand, a function \(g\in \mathcal {K}^0\) that satisfies condition (5.5) need not lie in \(\mathcal {D}^{\infty }_{\mathbb {N}}\). For instance, for any \(q\in \mathbb {N}\) the function

$$\displaystyle \begin{aligned} g_q(x) ~=~ x^{q+1}+\sin x \end{aligned}$$

lies in \(\mathcal {K}^q\setminus \mathcal {K}^{q+1}\), and hence also in \(\mathcal {K}^0\), and satisfies

$$\displaystyle \begin{aligned} \lim_{n\to_{\mathbb{N}}\,\infty}\frac{g_q(n+1)}{g_q(n)} ~=~ 1. \end{aligned}$$

However, it does not lie in \(\mathcal {D}^{\infty }_{\mathbb {N}}\). \(\lozenge \)

We observe that condition (5.5) is very easy to check for many functions g lying in \(\mathcal {K}^0\). Thus, this condition provides a simple and useful test. In particular, when the inequality in (5.5) is strict, the sequence \(n\mapsto g(n)\) is summable by the ratio test, and hence g lies in \(\mathcal {D}^0\cap \mathcal {K}^0\). On the other hand, when the inequality is an equality, it is not known whether this condition, together with the property that g lies in \(\mathcal {K}^0\), is also sufficient for g to lie in dom(Σ).
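
As an illustration of this test, the following sketch estimates the ratio g(n + 1)∕g(n) at a moderate integer n for a few sample functions (the choices are arbitrary): the first two are consistent with condition (5.5), while \(2^x\) and Γ clearly violate it, in line with the discussion that follows.

```python
import math

n = 50  # a moderate integer sample point (arbitrary choice)

ratios = {
    '1/x':      (1.0 / (n + 1)) / (1.0 / n),                    # -> 1, (5.5) holds
    'ln x':     math.log(n + 1) / math.log(n),                  # -> 1, (5.5) holds
    '2**x':     2.0 ** (n + 1) / 2.0 ** n,                      # = 2 > 1, fails
    'Gamma(x)': math.exp(math.lgamma(n + 1) - math.lgamma(n)),  # = n, fails
}

for name, ratio in ratios.items():
    print(name, ratio)
```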

Now, it is easy to see that a function \(g\colon \mathbb {R}_+\to \mathbb {R}\) lies in \(\mathcal {D}^{\infty }_{\mathbb {N}}\) if and only if there exists \(p\in \mathbb {N}\) for which the sequence \(n\mapsto\Delta^p g(n)\) converges. In particular, if we assume that g lies in \(\mathcal {K}^{\infty }\), then g does not lie in \(\mathcal {D}^{\infty }_{\mathbb {N}}\) (and hence it does not lie in dom(Σ)) if and only if for every \(p\in \mathbb {N}\) the sequence \(n\mapsto\Delta^p g(n)\) tends to infinity. On the other hand, we can observe empirically that condition (5.5) fails to hold for many functions g lying in \(\mathcal {K}^{\infty }\setminus \mathcal {D}^{\infty }_{\mathbb {N}}\). Examples of such functions include \(g(x)=2^x\) and g(x) = Γ(x). It seems then reasonable to think that this observation follows from a general rule. We then formulate the following conjecture.

Conjecture 5.23

If a function \(g\colon \mathbb {R}_+\to \mathbb {R}\) lies in \(\mathcal {K}^{\infty }\) and is not eventually identically zero, then it also lies in \(\mathcal {D}^{\infty }_{\mathbb {N}}\) if and only if condition (5.5) holds.