Abstract
In this chapter, we discuss the higher order differentiability properties of Σg when g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for any \(p,r\in \mathbb {N}\). In particular, we show the fundamental fact that Σg also lies in \(\mathcal {C}^r\) and that the sequence \(n \mapsto D^rf^p_n[g]\) converges uniformly on any bounded subinterval of \(\mathbb {R}_+\) to \(D^r\Sigma g\).
We also show that the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) differ by a constant and we investigate some properties of these functions, including asymptotic behaviors and an analogue of Euler’s series representation of the constant γ. We present and discuss a procedure, which we call the “elevator” method, to compute Σg by first evaluating \(\Sigma g^{(r)}\). Finally, we provide an alternative uniqueness result for higher order differentiable solutions to the equation Δf = g.
7.1 Differentiability of Multiple \(\log \Gamma \)-Type Functions
In this first section we investigate the higher order differentiability of the function Σg when g is of class \(\mathcal {C}^r\) for some \(r\in \mathbb {N}\). We start with the following preliminary, but very important result.
Proposition 7.1
If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\) , then the function Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\).
Proof
If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\), then clearly it also lies in \(\mathcal {C}^r\cap \mathcal {D}^{\max \{p,r\}}\cap \mathcal {K}^{\max \{p,r\}}\). By Proposition 5.6, Σg must lie in \(\mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\). Let us now show that it also lies in \(\mathcal {C}^r\).
We first observe that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). This is clear if r ≤ p by Proposition 4.12. If r > p, then we first see that \(g^{(p)}\) lies in \(\mathcal {C}^{r-p}\cap \mathcal {D}^0\cap \mathcal {K}^{r-p}\), and hence also in \(\mathcal {K}^0\cap \mathcal {K}^1\). Using Proposition 4.16(b) repeatedly, we then see that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\).
By Proposition 5.18, \(\Sigma g^{(r)}\) must lie in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_++1}\cap \mathcal {K}^{(p-r)_+}\). Hence, there exists \(F\in \mathcal {C}^r\) such that \(F^{(r)} = \Sigma g^{(r)}\). By Proposition 4.12, F must lie in \(\mathcal {K}^{\max \{p,r\}}\). Now, we also have
which shows that Δ(F + P) = g for some polynomial P of degree at most r. By Corollary 4.6 we have that F + P lies in \(\mathcal {K}^{\max \{p,r\}}\). But then, by the uniqueness Theorem 3.1 we must have F + P = Σg + c for some \(c\in \mathbb {R}\). Hence Σg lies in \(\mathcal {C}^r\). □
Remark 7.2
If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some integers 0 ≤ r < p, then the function Σg lies in \(\mathcal {C}^r\) by Proposition 7.1. Interestingly, this result can also be established very easily using the following argument. Let \(n\in \mathbb {N}\) be so that Σg is p-convex or p-concave on \(I_n = (n,\infty )\). By Lemma 2.6(a), the function Σg lies in \(\mathcal {C}^{p-1}(I_n)\) and hence also in \(\mathcal {C}^r(I_n)\). Using (5.3), we immediately obtain that Σg lies in \(\mathcal {C}^r\). \(\lozenge \)
We now present the following important and very surprising result. It shows that Proposition 7.1 no longer holds when r > p if we ask g to lie in \(\mathcal {K}^p\) instead of \(\mathcal {K}^{\max \{p,r\}}\). Since the proof is somewhat technical, we defer it to Appendix F.
Proposition 7.3
For every \(p\in \mathbb {N}\) , there exists a function g lying in \(\mathcal {C}^{p+1}\cap \mathcal {D}^p\cap \mathcal {K}^p\) for which Σg does not lie in \(\mathcal {C}^{p+1}\) . Thus, the operator Σ does not always preserve differentiability when the order of differentiability exceeds that of convexity.
Proof
See Appendix F. □
The next theorem is the central result of this section. In this theorem, we recall the fundamental result given in Proposition 7.1 and we show that, under the same assumptions, the sequence \(n \mapsto D^rf^p_n[g]\) converges uniformly on any bounded subinterval of \(\mathbb {R}_+\) to \(D^r\Sigma g\). We first consider a technical lemma.
Lemma 7.4
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some integers 0 ≤ r ≤ p. Then, for any \(n\in \mathbb {N}\) the function \(\rho ^{p+1}_n[\Sigma g]\) lies in \(\mathcal {C}^r\) . Moreover, the sequence \(n\mapsto D^r\rho ^{p+1}_n[\Sigma g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to zero.
Proof
By Proposition 7.1, we have that Σg lies in \(\mathcal {C}^r\). Using (1.7) it is then clear that, for any \(n\in \mathbb {N}\), the function \(\rho ^{p+1}_n[\Sigma g]\) lies in \(\mathcal {C}^r\).
Let us now show the second part of the lemma. Negating g if necessary, we may assume that it lies in \(\mathcal {K}^p_-\). In this case, \(D^r\Sigma g\) must lie in \(\mathcal {K}^{p-r}_+\) by Proposition 4.12. Let n ≥ p be an integer so that g is p-concave on [n, ∞). Using Proposition 2.1 repeatedly, we can see that there exist p − r + 1 pairwise distinct points \(\xi _0^n,\ldots ,\xi ^n_{p-r}\in (0,p)\) such that
Let us now fix x > 0. Using (2.11) and then (2.2) and (2.3), we obtain
if \(x\neq \xi _i^n\) for i = 0, …, p − r, and \(D^r\rho ^{p+1}_n[\Sigma g](x)=0\), otherwise, where
Now, on the one hand, we clearly have
where \(c_x=\max \{p,\lceil x\rceil \}\). On the other hand, using Lemma 2.5 (with the fact that \(D^r\Sigma g\) lies in \(\mathcal {K}^{p-r}_+\)) and then (2.8), we obtain
Thus, for any bounded subinterval E of \(\mathbb {R}_+\), we obtain the inequality
But the latter sum converges to zero as \(n\to _{\mathbb {N}}\infty \) since \(D^rg\) lies in \(\mathcal {D}^{p-r}\cap \mathcal {K}^{p-r}\) by Proposition 4.12. This completes the proof of the lemma. □
Theorem 7.5 (Higher Order Differentiability of Multiple \(\boldsymbol \log \ \boldsymbol {\Gamma }\)-Type Functions)
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\) . The following assertions hold.
-
(a)
Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\).
-
(b)
The sequence \(n\mapsto D^rf^p_n[g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to \(D^r\Sigma g\).
Proof
Assertion (a) immediately follows from Proposition 7.1. When r ≤ p, assertion (b) immediately follows from Lemma 7.4 and identity (5.4). Let us now assume that r > p. Using (5.4) and then (1.7) and (5.3) we obtain
By Proposition 4.12, we have that \(g^{(p)}\) lies in \(\mathcal {C}^{r-p}\cap \mathcal {D}^0\cap \mathcal {K}^{r-p}\), and hence also in \(\mathcal {K}^0\cap \mathcal {K}^1\). Using Proposition 4.16(b) repeatedly, we then see that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\). Thus, we can apply Theorem 3.12 to the function \(g^{(r)}\), with \(f = D^r\Sigma g\). Since f lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) by assertion (a) and Proposition 4.12, it follows from Theorem 3.12 that the sequence \(n\mapsto D^rf^p_n[g]\) converges uniformly on \(\mathbb {R}_+\) to \(f - f(\infty ) = f = D^r\Sigma g\). □
Example 7.6
The function \(g(x)=\ln x\) clearly lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Using Theorem 7.5, we now see that the function \(\Sigma g(x)=\ln \Gamma (x)\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Moreover, for any \(r\in \mathbb {N}^*\), we have
If r = 1, then we obtain
If r ≥ 2, then we get (compare with, e.g., Srivastava and Choi [93, p. 33])
where \(s\mapsto \zeta (s,x)\) is the Hurwitz zeta function (see Example 1.7). \(\lozenge \)
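The r = 1 and r = 2 identities of this example can be checked numerically. The sketch below is only an illustration, using the standard library: the finite-difference step sizes and the series truncation level are our own choices. It verifies that the digamma function \(\psi = D\ln \Gamma \) satisfies Δψ(x) = 1∕x, and that \(D^2\ln \Gamma (x)\) agrees with ζ(2, x) computed as a tail-corrected partial sum.

```python
import math

def digamma(x, h=1e-5):
    # first derivative of ln Gamma via a central difference
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def trigamma(x, h=1e-4):
    # second derivative of ln Gamma via a central second difference
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def hurwitz_zeta2(x, N=100_000):
    # zeta(2, x) = sum_{k>=0} (x+k)^{-2}, with the integral tail 1/(x+N)
    return sum(1.0 / (x + k)**2 for k in range(N)) + 1.0 / (x + N)

for x in (0.5, 1.0, 2.5):
    # Delta psi(x) = 1/x: the digamma function solves the equation for g = ln
    assert abs(digamma(x + 1) - digamma(x) - 1.0 / x) < 1e-6
    # r = 2 case: D^2 ln Gamma(x) = zeta(2, x)
    assert abs(trigamma(x) - hurwitz_zeta2(x)) < 1e-5
```

The tail correction \(1/(x+N)\) is the integral \(\int _N^{\infty }(x+t)^{-2}{\,}dt\), which makes the truncated Hurwitz series accurate to roughly \((x+N)^{-2}\).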
7.2 Some Properties of the Derivatives
In this section, we investigate the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) and some of their properties. We also show how the asymptotic behaviors of these functions can be analyzed from results of Chap. 6, including the generalized Stirling formula. Finally, we provide a series representation of the asymptotic constant σ[g] as an analogue of Euler’s series representation of γ.
In the next proposition, we essentially establish the fact that the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) are equal up to an additive constant. This result will have several important consequences in this and the next chapters.
Proposition 7.7
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . Then \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\) . Moreover, for any x > 0 we have
If r > p, then
Proof
As already observed in the proof of Proposition 7.1, the first claim follows from Propositions 4.12 and 4.16(b). Moreover, we have that Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\). Let us now prove (7.1). By Proposition 4.12, the function \(\varphi _1 = (\Sigma g)^{(r)}\) is a solution in \(\mathcal {K}^{(p-r)_+}\) to the equation \(\Delta \varphi = g^{(r)}\). By the existence Theorem 3.6, the function \(\varphi _2 = \Sigma g^{(r)}\) is also a solution in \(\mathcal {K}^{(p-r)_+}\). Thus, by the uniqueness Theorem 3.1, we must have \((\Sigma g)^{(r)} - \Sigma g^{(r)} = c\) for some \(c\in \mathbb {R}\), and hence we also have \((\Sigma g)^{(r)}(1) = c\).
Now, for any x > 0, using (6.11) we then get
Evaluating the latter integral, we then obtain
which proves (7.1). Finally, if r > p, then we have that \(g^{(r-1)}\) lies in \(\mathcal {C}^1\cap \mathcal {D}^0\cap \mathcal {K}^1\) and that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\) by Proposition 4.16(b). The last part of the statement then follows from applying Proposition 6.14 to the function \(g^{(r)}\). □
Example 7.8
The function \(g(x)=\frac {1}{x}\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^0\cap \mathcal {K}^{\infty }\) and all its derivatives lie in \(\mathcal {K}^0\). By Theorem 7.5, the function
lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Moreover, the series can be differentiated term by term infinitely many times and hence, for any \(r\in \mathbb {N}^*\), we have
By Proposition 7.7, we also have
where \(s\mapsto \zeta (s)\) is the Riemann zeta function. \(\lozenge \)
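For this g, the series defining Σg has the known closed form ψ(x) + γ, where ψ is the digamma function (indeed ψ(1) = −γ, so that Σg(1) = 0 as required). The sketch below compares a tail-corrected partial sum of \(\sum _{k\geq 0}\big (\frac {1}{k+1}-\frac {1}{x+k}\big )\) with ψ(x) + γ, where ψ is obtained by numerically differentiating `math.lgamma`; the truncation level and step size are our own choices.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, for comparison

def sigma_g(x, N=10_000):
    # partial sum of sum_{k>=0} (1/(k+1) - 1/(x+k)) for g(t) = 1/t,
    # plus the integral tail correction ~ (x-1)/N
    return sum(1.0 / (k + 1) - 1.0 / (x + k) for k in range(N)) + (x - 1.0) / N

def digamma(x, h=1e-5):
    # numerical derivative of math.lgamma (central difference)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

# Sigma g = psi + gamma for g(x) = 1/x
for x in (0.5, 1.0, 2.5):
    assert abs(sigma_g(x) - (digamma(x) + GAMMA)) < 1e-5
```

The tail correction comes from \(\frac {1}{k+1}-\frac {1}{x+k} = \frac {x-1}{(k+1)(x+k)}\approx \frac {x-1}{k^2}\), whose tail integrates to approximately \((x-1)/N\).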
In the next proposition we show the remarkable fact that the asymptotic equivalence (6.31) still holds if we differentiate both sides.
Proposition 7.9
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) , and let a ≥ 0. When \(D^r\Sigma g\) vanishes at infinity, we also assume that
Then we have
Proof
By Proposition 7.7, we have that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). Moreover, for any x > 0 we have
and, using (6.11),
where \(c = g^{(r-1)}(1) - \sigma [g^{(r)}]\). The result then immediately follows from applying Proposition 6.20 to the function \(g^{(r)}\). □
Example 7.10
Applying Proposition 7.9 to the function \(g(x)=\ln x\), for any a ≥ 0 we obtain the equivalences
and for any \(\nu \in \mathbb {N}\),
♢
In the next two propositions, we mainly investigate how the convergence results in (6.4) and (6.21) are modified when the function g is replaced with one of its higher order derivatives. The second proposition can be regarded as the “integrated” version of the first one, and hence it naturally involves the generalized Binet function.
Proposition 7.11
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) , and let a ≥ 0. The following assertions hold.
-
(a)
\(g^{(r)}\) lies in \(\mathcal {R}_{\mathbb {R}}^{(p-r)_+}\) and both \(\Sigma g^{(r)}\) and \((\Sigma g)^{(r)}\) lie in \(\mathcal {R}_{\mathbb {R}}^{(p-r)_++1}\).
-
(b)
For any \(q\in \mathbb {N}\) , the function \(x\mapsto \rho _x^{q+1}[\Sigma g](a)\) lies in \(\mathcal {C}^r\) and we have
$$\displaystyle \begin{aligned} D_x^r\rho_x^{q+1}[\Sigma g](a) ~=~ \rho_x^{q+1}[\Sigma g^{(r)}](a). \end{aligned}$$ -
(c)
We have that \(\rho _x^{(p-r)_++1}[\Sigma g^{(r)}](a) \to 0\) and \(D_x^r\rho _x^{p+1}[\Sigma g](a) \to 0\) as x →∞.
Proof
By Proposition 7.7, the function \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). This immediately proves assertion (a). Now, using (1.7) and then (7.1) we get
which proves assertion (b). Assertion (c) follows from assertions (a) and (b) and the fact that \(\mathcal {R}_{\mathbb {R}}^{(p-r)_++1}\subset \mathcal {R}_{\mathbb {R}}^{p+1}\). □
Proposition 7.12
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . The following assertions hold.
-
(a)
For any \(q\in \mathbb {N}\) , the function \(J^{q+1}[\Sigma g]\) lies in \(\mathcal {C}^r\) and we have
$$\displaystyle \begin{aligned} D^r J^{q+1}[\Sigma g] ~=~ J^{q+1}[\Sigma g^{(r)}]. \end{aligned}$$In particular, we have \(\sigma [g^{(r)}] = -D^rJ^1[\Sigma g](1)\).
-
(b)
We have that \(J^{(p-r)_++1}[\Sigma g^{(r)}](x) \to 0\) and \(D^rJ^{p+1}[\Sigma g](x) \to 0\) as x →∞. In particular, if r > p, then \((\Sigma g)^{(r)}(x) \to 0\) as x →∞.
-
(c)
We have
$$\displaystyle \begin{aligned} D_x^r \int_0^1\rho_x^{p+1}[\Sigma g](t){\,}dt ~=~ \int_0^1D_x^r \rho_x^{p+1}[\Sigma g](t){\,}dt. \end{aligned}$$
Proof
Using (6.18) and (7.1), we get
which proves assertion (a). Now, setting q = p in these equations we obtain
Since \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\), this latter expression vanishes at infinity. This proves assertion (b). Finally, using Proposition 7.11 and assertion (a) we get
which proves assertion (c). □
Assertion (c) of Proposition 7.11 reveals a very important fact. It shows that the convergence result in (6.4) still holds if we replace g with \(g^{(r)}\) and p with (p − r)+. It also says that this new result can be obtained by differentiating both sides of (6.4) r times and then removing the terms that vanish at infinity.
Similarly, assertion (b) of Proposition 7.12 shows that this property also applies to the generalized Stirling formula (6.21).
Example 7.13
The function \(g(x)=\ln x\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\) and its derivative \(g'(x)=\frac {1}{x}\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^0\cap \mathcal {K}^{\infty }\). For any a ≥ 0, the limit in (6.4) reduces to
If we replace g with g′ and set p = 0 in (6.4), we get
However, this latter limit can also be obtained by differentiating both sides of the previous limit and then removing the term (\(-\frac {a}{x}\)) that vanishes at infinity.
Now, applying the generalized Stirling formula (6.21) to the function \(g(x)=\ln x\), we clearly retrieve the classical Stirling formula
Proceeding similarly as above, we then obtain
which is actually the analogue of Stirling’s formula for the digamma function. \(\lozenge \)
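Both asymptotic statements of this example are easy to test numerically. The sketch below (the evaluation points and tolerances are our own choices) checks the classical Stirling formula for \(\ln \Gamma \) and its digamma analogue ψ(x) −ln x → 0, together with the sharper standard refinement ψ(x) ≈ln x − 1∕(2x).

```python
import math

def digamma(x, h=1e-5):
    # numerical derivative of math.lgamma (central difference)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def stirling(x):
    # classical Stirling approximation to ln Gamma(x)
    return (x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi)

# classical Stirling formula: ln Gamma(x) - stirling(x) -> 0 (error ~ 1/(12x))
assert abs(math.lgamma(1000.0) - stirling(1000.0)) < 1e-3
# digamma analogue: psi(x) - ln x -> 0, and psi(x) ~ ln x - 1/(2x) is sharper
assert abs(digamma(1000.0) - math.log(1000.0)) < 1e-3
assert abs(digamma(1000.0) - (math.log(1000.0) - 1 / 2000.0)) < 1e-5
```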
Remark 7.14
To emphasize the similarities between Propositions 7.11 and 7.12, we could for instance extend our formalism a bit further as follows. For any \(p\in \mathbb {N}\) and any \(\mathrm {S}\in \{\mathbb {N},\mathbb {R}\}\), let \(\mathcal {J}^p_{\mathrm {S}}\) denote the set of continuous functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that
This new definition enables one to formalize some results more easily. For instance, using (6.17) we clearly obtain that
and this identity could be used to establish assertion (b) of Proposition 7.12 from assertion (a). To give another example, we can see that (6.22) actually means that
Note also that the generalized Stirling formula simply states that Σg lies in \(\mathcal {J}^{p+1}_{\mathbb {R}}\) whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\). \(\lozenge \)
Taylor Series Expansion of Σg
Suppose that g lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^p\cap \mathcal {K}^{\infty }\) for some \(p\in \mathbb {N}\). We know from Proposition 7.12 that
Thus, the exponential generating function (see, e.g., Graham et al. [41, Chapter 7]) for the sequence \(n\mapsto \sigma [g^{(n)}]\) is defined by the equation
Denoting this exponential generating function by egfσ[g](x), the previous equation reduces to
If the function J 1[ Σg] is real analytic at 1, then the series in (7.2) converges in some neighborhood of x = 0. Similarly, if the function Σg is real analytic at 1, then the following Taylor series expansion
holds in some neighborhood of x = 0, where the numbers \((\Sigma g)^{(k)}(1)\) for \(k\in \mathbb {N}^*\) can also be computed through (7.1).
Example 7.15
Consider again the functions \(g(x)=\ln x\) and \(\Sigma g(x)=\ln \Gamma (x)\). We know from Example 7.6 that
and that for any integer k ≥ 2
We then obtain the following Taylor series expansion
The values of the sequence \(n\mapsto \sigma [g^{(n)}]\) can be obtained using (7.1) or (7.2). We get
and for any integer k ≥ 2
♢
Analogues of Euler’s Series Representation of γ
Integrating both sides of (7.3) on (0, 1) (assuming that the series can be integrated term by term), we obtain the identity
Similarly, integrating both sides of (7.2) on (0, 1) (assuming again that the series can be integrated term by term), we obtain the identity
Taking for instance \(g(x)=\frac {1}{x}\) in (7.4), we immediately retrieve Euler’s series representation of γ (see, e.g., Srivastava and Choi [93, p. 272])
This formula can also be obtained taking \(g(x)=\frac {1}{x}\) in (7.5) and using the straightforward identity
Considering different functions g(x) in (7.4) and (7.5) enables one to derive various interesting identities. A few applications are given in the following example.
Example 7.16
Taking g(x) = ψ(x) in (7.5) and using the straightforward identity
we obtain
Similarly, taking \(g(x)=\ln x\) and then \(g(x)=\ln \Gamma (x)\) in (7.4) and (7.5) we obtain the identities
where A is Glaisher-Kinkelin’s constant; see also Srivastava and Choi [93, Section 3.4]. \(\lozenge \)
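Euler’s series representation of γ retrieved above from (7.4), namely \(\gamma = \sum _{n=2}^{\infty }(-1)^n\zeta (n)/n\), converges slowly (like an alternating harmonic series), but it can still be verified numerically with a simple averaging step. In the sketch below, both the crude zeta evaluation (partial sum plus integral tail) and the acceleration are our own choices.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, for comparison

def zeta(n, M=1000):
    # crude zeta(n) for an integer n >= 2: partial sum plus integral tail
    return sum(m**-n for m in range(1, M + 1)) + M**(1 - n) / (n - 1)

# Euler's series: gamma = sum_{n>=2} (-1)^n zeta(n)/n
N = 4000
terms = [(-1)**n * zeta(n) / n for n in range(2, N + 1)]
partial = sum(terms)
# subtracting half the last term damps the alternating tail (error ~ 1/N^2)
accel = partial - terms[-1] / 2
assert abs(accel - GAMMA) < 1e-4
```

Without the averaging step, the raw partial sum is only accurate to about \(1/(2N)\), which is why some acceleration is needed for a meaningful check.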
7.3 Finding Solutions from Derivatives
Given \(r\in \mathbb {N}^*\) and a function \(g\in \mathcal {C}^r\), a solution \(f\in \mathcal {C}^r\) to the equation Δf = g can sometimes be found more easily by first searching for an appropriate solution \(\varphi \in \mathcal {C}^0\) to the equation \(\Delta \varphi = g^{(r)}\) and then calculating f as an rth antiderivative of φ.
Let us first examine a very simple example to illustrate to what extent this approach can be easily and usefully applied.
Example 7.17
Let \(g\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation
Suppose that we search for a simple expression for the indefinite sum Σg. We can apply Proposition 7.7 and observe that g′ lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\) and hence that g lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Moreover, we have
for some \(c\in \mathbb {R}\). Thus, we obtain
To find the value of c, we then observe that
and hence \(c=1-\frac {1}{2}\ln (2\pi )\) (see Example 6.5). Alternatively, this value can also be obtained directly from (7.1); we have
Thus, this approach amounts to first searching for a simple expression for Σg′, and then computing Σg using an antiderivative of Σg′.
Finally, we get
where \(\psi _{-2}\) is the polygamma function \(\psi _{-2}(x)=\int _0^x\ln \Gamma (t){\,}dt\). \(\lozenge \)
The approach described in Example 7.17 is rather simple and can sometimes be very efficient. We will refer to this technique as the elevator method. In very basic terms, to find Σg one proceeds as follows.
- Step 1.:
We take the elevator, go down from the ground floor to the rth basement level, and get the function \(\Sigma g^{(r)}\) easily.
- Step 2.:
We go back to the ground floor by converting the latter function into the sought function Σg using an rth antiderivative.
$$\displaystyle \begin{aligned} \begin{array}{ccc} \Delta f ~=~ g & & f ~=~ \Sigma g \\ \downarrow & & \uparrow \\ \Delta\varphi ~=~ g^{(r)} & \quad \to\quad & \varphi ~=~ \Sigma g^{(r)} \end{array} \end{aligned}$$
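As a purely numerical illustration of these two steps (with r = 1, \(g(x)=\ln x\), and our own quadrature and truncation choices): at the basement level, the solution of \(\Delta \varphi = g' \) is the digamma function \(\varphi = \psi = (\ln \Gamma )'\), available as the series \(\psi (x) = -\gamma + \sum _{k\geq 0}\big (\frac {1}{k+1}-\frac {1}{x+k}\big )\); going back up, integrating φ from 1 to x recovers \(\Sigma g(x)=\ln \Gamma (x)\), since \(\ln \Gamma (1)=0\).

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def psi(x, N=5000):
    # digamma via psi(x) = -gamma + sum_{k>=0} (1/(k+1) - 1/(x+k)),
    # with a simple integral tail correction ~ (x-1)/N
    s = sum(1.0 / (k + 1) - 1.0 / (x + k) for k in range(N))
    return -GAMMA + s + (x - 1.0) / N

def simpson(f, a, b, n=200):
    # composite Simpson rule on [a, b] with n (even) panels
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

# the "elevator": integrate psi from 1 to x to recover ln Gamma(x)
for x in (2.0, 3.5, 5.0):
    assert abs(simpson(psi, 1.0, x) - math.lgamma(x)) < 1e-4
```

The comparison against `math.lgamma` is only a consistency check; the point is that Σg is reconstructed from \(\Sigma g'\) by antidifferentiation alone.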
To our knowledge, this trick was investigated thoroughly by Krull [55] and then by Dufresnoy and Pisot [34].
In the next theorem we provide a general result based on this idea. This result is actually very general: it applies to any function \(g\in \mathcal {C}^r\), even if Σg is not defined (e.g., \(g(x) = 2^x\)).
We first observe that if \(\varphi \in \mathcal {C}^0\) is a solution to the equation \(\Delta \varphi = g^{(r)}\), then the map
has a zero derivative and hence it is constant on \(\mathbb {R}_+\). In particular, it has a finite right limit at x = 0.
Theorem 7.18 (The Elevator Method)
Let \(r\in \mathbb {N}^*\) , a > 0, \(g\in \mathcal {C}^r\) , and let \(\varphi \colon \mathbb {R}_+\to \mathbb {R}\) be a continuous solution to the equation \(\Delta \varphi = g^{(r)}\) . Then there exists a solution \(f\in \mathcal {C}^r\) to the equation Δf = g such that \(f^{(r)} = \varphi \) if and only if
If any of these equivalent conditions holds, then f is uniquely determined (up to an additive constant) by
where, for k = 1, …, r − 1,
Proof
Condition (7.6) is clearly necessary. Indeed, we have
Let us show that it is sufficient. Since φ is continuous, there exists \(f\in \mathcal {C}^r\) such that \(f^{(r)} = \varphi \). Taylor’s theorem then provides the expansion formula (7.7) with arbitrary parameters \(c_k = f^{(k)}(a)\) for k = 1, …, r − 1. Now we need to determine the parameters \(c_1,\ldots ,c_{r-1}\) for f to be a solution to the equation Δf = g. To this end, we need the following claim.
Claim
The function f satisfies the equation Δf = g if and only if \(f^{(r)}\) satisfies the equation \(\Delta f^{(r)} = g^{(r)}\) and \(\Delta f^{(j)}(a) = g^{(j)}(a)\) for j = 0, …, r − 1.
Proof of the Claim
The condition is clearly necessary. To see that it is sufficient, we simply show by decreasing induction on j that \(\Delta f^{(j)} = g^{(j)}\). Clearly, this is true for j = r. Suppose that it is true for some integer j satisfying 1 ≤ j ≤ r. For any x > 0 we have
which shows that the result still holds for j − 1. □
By the claim, f satisfies the equation Δf = g if and only if \(\Delta f^{(j)}(a) = g^{(j)}(a)\) for j = 0, …, r − 1. When j = r − 1, the latter condition is nothing other than condition (7.6) and hence it is satisfied. Applying Taylor’s theorem to \(f^{(j)}\), we obtain
and hence we see that the remaining r − 1 conditions are
where
It is not difficult to see that these r − 1 conditions form a consistent triangular system of r − 1 linear equations in the r − 1 unknowns \(c_1,\ldots ,c_{r-1}\). This establishes the uniqueness of f up to an additive constant.
Let us now show that formula (7.8) holds. For k = 1, …, r − 1, we have
Replacing i with i − j − k + 1 and then permuting the resulting sums, the latter expression reduces to
that is, using (6.40),
This completes the proof of the theorem. □
Adding an appropriate constant to φ if necessary in Theorem 7.18, we can always assume that condition (7.6) holds. More precisely, the function \(\varphi ^{\star } = \varphi + C\), where
satisfies
Example 7.19
Let us see how we can apply Theorem 7.18 to somewhat generalize Example 7.17. Let \(g\in \mathcal {C}^0\), let \(G\in \mathcal {C}^1\) be defined by the equation
and let \(f\in \mathcal {C}^0\) be any solution to the equation Δf = g. To find a solution F to the equation ΔF = G such that F′ = f, we just need to apply Theorem 7.18 to the function G with r = 1 and a = 1. Defining the function
we then obtain that the function \(F\in \mathcal {C}^1\) defined by the equation
is the unique (up to an additive constant) solution to the equation ΔF = G such that F′ = f. For similar results, see Krull [55, p. 254] and Kuczma [58, Section 2]. \(\lozenge \)
The next corollary particularizes the elevator method when the function g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\). We omit the proof, since it immediately follows from Theorem 7.5, Proposition 7.7, and Theorem 7.18.
Corollary 7.20 (The Elevator Method)
Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . Then Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\) and we have
(This latter value reduces to \(-\sum _{k=1}^{\infty }g^{(r)}(k)\) if r > p.) Moreover, for any a > 0, we have
where \(f_a\in \mathcal {C}^r\) is defined by
and, for k = 1, …, r − 1,
Corollary 7.20 has an important practical value. It provides an explicit integral expression for Σg from an explicit expression for Σg (r). Setting a = 1 in this result, we simply obtain
with, for k = 1, …, r − 1,
The following three examples illustrate the use of Corollary 7.20. In the first one, we revisit Example 7.17.
Example 7.21
The function
lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Choosing r = 1 and a = 1 in Corollary 7.20, we get
and
♢
Example 7.22
The function
lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^3\cap \mathcal {K}^{\infty }\). Choosing r = 2 and a = 0 (as a limiting value) in Corollary 7.20, we get
and
where A is Glaisher-Kinkelin’s constant and the integral is the polygamma function \(\psi _{-3}(x)\). (Here we use the identity \(\psi _{-3}(1)=\ln A+\frac {1}{4}\ln (2\pi )\).)
We can also investigate the asymptotic properties of Σg using our results. For instance, using the generalized Stirling formula (6.21), we also obtain the following asymptotic behavior of Σg
♢
Example 7.23
The function \(g(x)=\arctan (x)\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Choosing r = 1 and a = 0 (as a limiting value) in Corollary 7.20, we get (see also Example 5.10)
for some \(c\in \mathbb {R}\), and hence
Applying the operator Δ to both sides of this identity and then setting x = 1, we obtain \(c=\frac {\pi }{2}\). Thus, we have
Some properties of Σg can be investigated. For instance, using Corollary 6.12 together with the identity
we obtain the inequality
and hence the left side approaches zero as x →∞, which provides the asymptotic behavior of the function Σg for large values of its argument. \(\lozenge \)
7.4 An Alternative Uniqueness Result
The following theorem provides a uniqueness result for higher order differentiable solutions to the equation Δf = g. These solutions can be computed from their derivatives using Theorem 7.18. We first state a surprising and useful fact.
Fact 7.24
A periodic function \(\omega \colon \mathbb {R}_+\to \mathbb {R}\) is constant if and only if it lies in \(\mathcal {K}^0\) . In particular, if \(\varphi _1,\varphi _2\colon \mathbb {R}_+\to \mathbb {R}\) are two solutions to the equation Δφ = g such that \(\varphi _1 - \varphi _2\) lies in \(\mathcal {K}^0\) , then \(\varphi _1 - \varphi _2\) is constant.
Theorem 7.25 (Uniqueness)
Let \(r\in \mathbb {N}^*\) and \(g\in \mathcal {C}^r\) , and assume that there exists \(\varphi \in \mathcal {C}^r\) such that Δφ = g and \(\varphi ^{(r)}\in \mathcal {R}^0_{\mathbb {N}}\) . Then, the following assertions hold.
-
(a)
For each x > 0, the series \(\sum _{k=0}^{\infty }g^{(r)}(x+k)\) converges and we have
$$\displaystyle \begin{aligned} \varphi^{(r)}(x) ~=~ -\sum_{k=0}^{\infty}g^{(r)}(x+k){\,}. \end{aligned}$$ -
(b)
For any \(f\in \mathcal {C}^r\cap \mathcal {K}^{r-1}\) such that Δf = g, we have f = c + φ for some \(c\in \mathbb {R}\).
Proof
Assertion (a) follows immediately from (3.2). Now, let \(f\in \mathcal {C}^r\cap \mathcal {K}^{r-1}\) be such that Δf = g. By Lemma 2.6(c), \(f^{(r)}\) must lie in \(\mathcal {K}^{-1}\). Setting ω = f − φ and using (3.2) again, we then obtain
which shows that \(\omega ^{(r)}\) also lies in \(\mathcal {K}^{-1}\). By Lemma 2.6(d), ω lies in \(\mathcal {K}^{r-1}\subset \mathcal {K}^0\) and, since it is 1-periodic, it must be constant by Fact 7.24. This proves assertion (b). □
Example 7.26
The assumptions of Theorem 7.25 hold if \(g(x)=\ln x\), \(\varphi (x)=\ln \Gamma (x)\), and r = 2. It then follows that all solutions to the equation Δf = g that lie in \(\mathcal {C}^2\cap \mathcal {K}^1\) are of the form \(f(x)=c+\ln \Gamma (x)\), where \(c\in \mathbb {R}\). We thus easily retrieve Bohr-Mollerup’s theorem with the additional assumption that f lies in \(\mathcal {C}^2\). It is remarkable that this latter result can be obtained here from a very elementary theorem that relies only on Lemma 2.6 and Fact 7.24. \(\lozenge \)
References
J. Dufresnoy and Ch. Pisot. Sur la relation fonctionnelle f(x + 1) − f(x) = φ(x). (French). Bull. Soc. Math. Belg., 15: 259–270, 1963.
R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete mathematics: a foundation for computer science. 2nd edition. Addison-Wesley Longman Publishing Co., Boston, MA, USA, 1994.
W. Krull. Bemerkungen zur Differenzengleichung g(x + 1) − g(x) = φ(x). II. (German). Math. Nachr., 2:251–262, 1949.
M. Kuczma. Sur une équation aux différences finies et une caractérisation fonctionnelle des polynômes. (French). Fund. Math., 55:77–86, 1964.
H. M. Srivastava and J. Choi. Zeta and q-zeta functions and associated series and integrals. Elsevier, Inc., Amsterdam, 2012.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
© 2022 The Author(s)
Marichal, JL., Zenaïdi, N. (2022). Derivatives of Multiple \(\log \Gamma \)-Type Functions. In: A Generalization of Bohr-Mollerup's Theorem for Higher Order Convex Functions. Developments in Mathematics, vol 70. Springer, Cham. https://doi.org/10.1007/978-3-030-95088-0_7