1 Introduction

The present work is motivated by a recent paper by Horst Alzer and Luis Salinas [1]. The authors of [1] study the following functional inequality:

$$\begin{aligned} f(x)f(y) - f(xy) \le f (x) + f (y) - f(x+y) \end{aligned}$$
(1)

for mappings \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\). The inequality itself is quite intriguing and by no means trivial. In the main theorem of [1] it is assumed that f is differentiable at zero and convex or concave. Then either f is constant or f is the identity mapping. The proof splits into two cases: Case 1 “\(f(0)\ne 0\)” and Case 2 “\(f(0)= 0\)”. In Case 1 the convexity or concavity of f is used to deduce that f is constant, whereas in Case 2 only continuity of f is needed, which together with differentiability at 0 leads to the corollary that the identity mapping is the only non-constant solution in this case. An open problem is formulated: determine all solutions of (1) in Case 1 with the convexity or concavity assumption relaxed to continuity. Further, two examples of solutions of (1) are given in [1]. The first one shows that the assumption that f is differentiable at zero cannot be dropped in Case 2. The second one leads to the observation that every function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) such that \(2-\sqrt{2} \le f(x) \le \sqrt{2}\) for all \(x \in {\mathbb {R}}\) solves (1) (whether f is smooth or not), and moreover these bounds on the values of f cannot be improved.
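The second observation reduces to a finite check: since the four values \(f(x), f(y), f(xy), f(x+y)\) vary independently over \([2-\sqrt{2},\sqrt{2}]\), it suffices that \(ab-c\le a+b-d\) on that box. A minimal numerical sketch (the variable names a, b, c, d are ours, standing for those four values):

```python
import itertools, math

# Any f with values in [lo, hi] = [2 - sqrt(2), sqrt(2)] satisfies (1):
# the expression a*b - c - a - b + d (LHS minus RHS of (1)) is bilinear
# in (a, b) and linear in (c, d), so its maximum over the box is
# attained at a vertex of the box.
lo, hi = 2 - math.sqrt(2), math.sqrt(2)
worst = max(a * b - c - a - b + d
            for a, b, c, d in itertools.product([lo, hi], repeat=4))
# The maximum equals 0 and is attained, so (1) holds on the box and
# the bounds 2 - sqrt(2) and sqrt(2) are tight.
assert abs(worst) < 1e-12
```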

The purpose of the present note is to prove two more results on inequality (1). In the first one we study this inequality in the case \(f(0)= 0\). We employ Dini derivatives and use the Denjoy–Young–Saks theorem, which allows us to generalize [1, Cor.] of Alzer and Salinas by dropping the continuity assumption. Our next result deals with the case \(f(0)\ne 0\). We show that unbounded solutions that also satisfy some mild regularity conditions do not exist (and therefore, the second example of Alzer and Salinas is in a sense typical among regular solutions).

2 Main Results

The Dini derivatives of f are defined as follows:

$$\begin{aligned} D^\pm f(x) := \limsup _{h\rightarrow 0\pm }\frac{f(x+h)-f(x)}{h} \end{aligned}$$

and

$$\begin{aligned} D_\pm f(x) := \liminf _{h\rightarrow 0\pm }\frac{f(x+h)-f(x)}{h} \end{aligned}$$

for every \(x \in {\mathbb {R}}\). Next, let us recall the Denjoy–Young–Saks theorem (see Stanisław Saks [3, Ch. IX.4]).

Theorem

(Denjoy–Young–Saks). Assume that I is an interval and \(f:I \rightarrow {\mathbb {R}}\) is an arbitrary function. Then there exists a set of measure zero \(C\subset I\) such that for all \(x \in I{\setminus } C\) exactly one of the following cases holds true:

(i) f is differentiable at x;

(ii) \(D_-f(x)=D^+f(x)\) is finite, \(D^-f(x)=+\infty \) and \(D_+f(x) = -\infty \);

(iii) \(D_+f(x)=D^-f(x)\) is finite, \(D^+f(x)=+\infty \) and \(D_-f(x) = -\infty \);

(iv) \(D_-f(x)=D_+f(x) = -\infty \) and \(D^-f(x)=D^+f(x) = +\infty \).

Note that no further assumptions on f, such as measurability, are needed. The theorem in the above form was proven by Stanisław Saks in the 1930s. In 1914 Grace Chisholm Young [2] showed that, outside a countable set, the lower derivative of a function of a real variable on either side is not greater than the upper derivative on the other side, i.e.

$$\begin{aligned} D_-f(x)\le D^+f(x) \quad \text {and}\quad D_+f(x)\le D^-f(x) \end{aligned}$$
(2)

for all \(x\in {\mathbb {R}}{\setminus } C\) with some countable set C.
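As a toy numerical illustration of these notions, one can approximate the four Dini derivatives of \(f(x)=|x|\) at the corner \(x=0\); the helper below is ours and only samples difference quotients along \(h=\pm 2^{-k}\), a rough sketch of the definitions rather than a true limsup/liminf over all sequences:

```python
# Approximate the Dini derivatives of f at x by sampling difference
# quotients along h = +2^-k (right) and h = -2^-k (left).
def dini(f, x, k_max=40):
    hs = [2.0 ** -k for k in range(5, k_max)]
    right = [(f(x + h) - f(x)) / h for h in hs]
    left = [(f(x - h) - f(x)) / (-h) for h in hs]
    return {"D^+": max(right), "D_+": min(right),
            "D^-": max(left), "D_-": min(left)}

d = dini(abs, 0.0)
# For f(x) = |x| at 0: all right quotients equal 1, all left equal -1,
# so D^+ = D_+ = 1 and D^- = D_- = -1.  The second inequality in (2),
# D_+f <= D^-f, fails here: 0 lies in the exceptional countable set.
assert d == {"D^+": 1.0, "D_+": 1.0, "D^-": -1.0, "D_-": -1.0}
```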

With the aid of the above machinery, we will prove the following result.

Theorem 1

Assume that \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a solution of (1) such that f is differentiable at 0 and \(f(0)=0\). Then \(f=0\) on \({\mathbb {R}}\) or \(f(x) = x\) for all \(x \in {\mathbb {R}}\).

Proof

We will modify some calculations from [1]. First, let us rearrange inequality (1) a bit to obtain two auxiliary estimates:

$$\begin{aligned} \frac{f(x+y) - f(x)}{y}\le \frac{f(y)}{y}[1-f(x)] + x\frac{f(xy)}{xy} \end{aligned}$$
(3)

for \(x \ne 0\) and \(y>0\) and

$$\begin{aligned} \frac{f(x+y) - f(x)}{y}\ge \frac{f(y)}{y}[1-f(x)] + x\frac{f(xy)}{xy} \end{aligned}$$
(4)

for \(x \ne 0\) and \(y<0\). Next, fix \(x\ne 0\) temporarily and pick two sequences \((y_n)\) and \((y'_n)\) tending to zero such that \(y_n>0>y'_n\) for all \(n \in {\mathbb {N}}\) and

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{f(x+y_n) - f(x)}{y_n} = D^+f(x) \quad \text {and}\quad \lim _{n \rightarrow \infty }\frac{f(x+y'_n) - f(x)}{y'_n} = D_-f(x). \end{aligned}$$

Apply estimates (3) and (4) for \(y=y_n\) or \(y=y'_n\), respectively, and use the fact that f is differentiable at 0 and \(f(0)=0\) to arrive at

$$\begin{aligned} D^+f(x) \le f'(0)[1-f(x)+x]\le D_-f(x), \quad x \ne 0. \end{aligned}$$

Now we are ready to apply the theorem of Grace Chisholm Young to infer that the above estimate holds with equality except on a countable set of points. Further, each of the equalities \(D_\pm f=+\infty \) and \(D^\pm f=-\infty \) can hold only on an at most countable set (see e.g. S. Saks [3]). Therefore, the Denjoy–Young–Saks theorem implies that f is differentiable almost everywhere. Moreover, almost everywhere f solves the linear ODE

$$\begin{aligned} f'(x)=f'(0)[1-f(x)+x]. \end{aligned}$$

Not surprisingly, the same equation appeared in [1]. Either \(f'(0)=0\) and f is constant almost everywhere, or \(f'(0)\ne 0\) and

$$\begin{aligned} f(x) = x + \frac{a+1}{a}(1-e^{ax}) \end{aligned}$$

for almost all \(x \in {\mathbb {R}}\), with \(a=-f'(0)\). The arguments from [1] following this differential equation work without substantial changes in the “almost everywhere” setting and lead to the conclusion that \(a=-1\). Thus, to finish the proof it remains to observe that if a solution f of (1) is equal almost everywhere to zero or almost everywhere to the identity, then it is equal to it everywhere. We give arguments for the second case first. Suppose that \(f(x)=x\) for almost all x and fix some \(x_0 \ne 0\). We will show that \(f(x_0)=x_0\). Since the conditions below exclude only finitely many null sets, we can find a point \(y_1\in {\mathbb {R}}\) such that

$$\begin{aligned} y_1> 1, \quad f(y_1)=y_1, \quad f(x_0y_1) = x_0y_1, \quad f(x_0+y_1) = x_0 + y_1. \end{aligned}$$

Similarly, we pick a point \(y_2\in {\mathbb {R}}\) such that

$$\begin{aligned} y_2< 1, \quad f(y_2)=y_2, \quad f(x_0y_2) = x_0y_2, \quad f(x_0+y_2) = x_0 + y_2. \end{aligned}$$

Apply inequality (1) to the pairs \((x_0,y_1)\) and \((x_0,y_2)\); after reductions this gives \(f(x_0)(y_1-1)\le x_0(y_1-1)\) and \(f(x_0)(y_2-1)\le x_0(y_2-1)\), whence \(x_0 \le f(x_0)\le x_0\).

If \(f=0\) almost everywhere, then arguing similarly we get that \(f \ge 0\) everywhere (it is enough to use (1) only once). Suppose now that \(f(y_0)>0\) for some \(y_0\). Apply (1) with x replaced by \(y_0-y\):

$$\begin{aligned} 0<f(y_0-y)f(y) + f(y_0)\le f(y_0-y) + f(y) + f((y_0-y)y) \end{aligned}$$

for all \(y \in {\mathbb {R}}\), where the strict inequality on the left follows from \(f\ge 0\) and \(f(y_0)>0\). Since \(f=0\) almost everywhere and the preimage of a null set under each of the maps \(y\mapsto y_0-y\) and \(y \mapsto (y_0-y)y\) is null, we may pick \(y\) such that

$$\begin{aligned} f(y_0-y)=f((y_0-y)y)=f(y)=0. \end{aligned}$$

Then the above estimate reads \(0< f(y_0)\le 0\); a contradiction. \(\square \)
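One can double-check numerically that the family appearing in the proof solves the differential equation and that \(a=-1\) collapses it to the identity; the functions below are our transcription of the formulas above:

```python
import math

# f_a(x) = x + (a+1)/a * (1 - exp(a*x)) should solve
# f'(x) = f'(0) * [1 - f(x) + x] with f(0) = 0 and a = -f'(0).
def f(a, x):
    return x + (a + 1) / a * (1 - math.exp(a * x))

def f_prime(a, x):                       # exact derivative of f_a
    return 1 - (a + 1) * math.exp(a * x)

for a in (-1.0, -0.5, 0.7, 2.0):
    assert abs(f(a, 0.0)) < 1e-12        # initial condition f(0) = 0
    for x in (-1.0, 0.3, 2.0):
        lhs = f_prime(a, x)
        rhs = -a * (1 - f(a, x) + x)     # note f'(0) = 1 - (a+1) = -a
        assert abs(lhs - rhs) < 1e-9
# a = -1 makes (a+1)/a = 0, so f is exactly the identity.
assert f(-1.0, 1.2345) == 1.2345
```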

Our second result is in a sense complementary to the first one. We deal with the remaining case “\(f(0)\ne 0\)” and we show that there are no nice unbounded solutions.

Theorem 2

Assume that \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a solution of (1) such that \(f(0)\ne 0\) and f has finite limits at \(x=4\) and at \(x=-4\). Then f is globally bounded on \({\mathbb {R}}\).

Proof

First, we will prove four claims which do not involve the assumption of finite limits at \(\pm 4\). We begin by adopting some calculations from [1].

Claim 1. \(f(0) >0\) and \(f(x) \le 2\) for all \(x \in {\mathbb {R}}\).

Apply (1) for \(y=0\) to get \([f(x)-2]f(0)\le 0\) for all \(x \in {\mathbb {R}}\). In particular \(f(0)^2 - 2f(0)\le 0\), so \(0\le f(0)\le 2\); since \(f(0)\ne 0\) by assumption, we get \(f(0)>0\), and dividing \([f(x)-2]f(0)\le 0\) by \(f(0)\) yields \(f(x)\le 2\) for all \(x \in {\mathbb {R}}\). The first claim is proven.

Finding a bound from below is a bit more problematic. We begin with a bound on the positive half-line. Let us introduce an auxiliary function \(h:=f-1\). Note that \(h\le 1\) by Claim 1 and that (1) is equivalent to the simpler-looking inequality

$$\begin{aligned} h(x)h(y) + h(x+y) \le 1 + h(xy), \quad x, y \in {\mathbb {R}}. \end{aligned}$$
(5)
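The equivalence of (1) and (5) is a two-line expansion; it can also be sanity-checked numerically (our variables a, b, c, d stand for the values \(h(x), h(y), h(x+y), h(xy)\), sampled independently):

```python
import random

# With f = h + 1, the gap RHS - LHS of (1) equals the gap RHS - LHS of
# (5), so the two inequalities are equivalent. Check on random data.
random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    fa, fb, fc, fd = a + 1, b + 1, c + 1, d + 1   # f-values
    gap1 = (fa + fb - fc) - (fa * fb - fd)        # gap of (1)
    gap5 = (1 + d) - (a * b + c)                  # gap of (5)
    assert abs(gap1 - gap5) < 1e-9
```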

Next, substitutions \(y=x\) and \(y=-x\) give us

$$\begin{aligned} h(x)^2 + h(2x)&\le 1 + h(x^2), \quad x \in {\mathbb {R}}, \end{aligned}$$
(6)
$$\begin{aligned} h(x)h(-x) + h(0)&\le 1 + h(-x^2), \quad x \in {\mathbb {R}}. \end{aligned}$$
(7)

Claim 2. \(f(x) \ge -\sqrt{3}\) for all \(x > 0 \).

Suppose for a contradiction that \(f(y_0) < -\sqrt{3}\), i.e. \(h(y_0) < -1-\sqrt{3}\) for some \(y_0>0\). Put \(x_1 = \sqrt{y_0}\) and \(x_2 = -\sqrt{y_0}\). By (6) we get

$$\begin{aligned} h(x_1)^2 + h(2x_1)\le 1 + h(y_0)< -\sqrt{3}, \end{aligned}$$

so \(h(2x_1) <-\sqrt{3}\). Similarly,

$$\begin{aligned} h(x_2)^2 + h(2x_2)\le 1 + h(y_0)< -\sqrt{3}, \end{aligned}$$

giving \(h(-2x_1) = h(2x_2) <-\sqrt{3}\). Next, apply (7) for \(x= 2 x_1\) to obtain

$$\begin{aligned} h(2x_1)h(-2x_1) + h(0) \le 1 + h(-4y_0)\le 2. \end{aligned}$$

But \(h(2x_1)h(-2x_1)> 3\) and \(h(0)=f(0)-1>-1\) by Claim 1, so the left-hand side exceeds 2; a contradiction.

It is possible to improve the bound of Claim 2.

Claim 3. \(f(x) \ge -1\) for all \(x > 0 \).

We will show that if for some \(M>0\) one has \(h\ge -M\) on \((0, \infty )\), then also \(h\ge -\sqrt{2+M}\) on \((0, \infty )\). This together with Claim 2 gives us the assertion, since 2 is the limit of the sequence \((M_n)\) defined recursively as

$$\begin{aligned} M_0= 1 + \sqrt{3}, \quad M_{n+1} = \sqrt{2+M_n} \text { for } n \in {\mathbb {N}}. \end{aligned}$$

Fix \(x > 0\). By (6), the bound \(h(2x)\ge -M\) and the estimate \(h\le 1\) we have

$$\begin{aligned} h(x)^2 - M \le h(x)^2 + h(2x) \le 1 + h(x^2) \le 2, \end{aligned}$$

so \(h(x)^2 \le 2 + M\) and consequently \(h(x)\ge -\sqrt{2+M}\).
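The convergence claim about \((M_n)\) is elementary: \(t=\sqrt{2+t}\) gives \(t^2-t-2=0\), i.e. \(t=2\). A quick numeric sketch of the iteration:

```python
import math

# M_0 = 1 + sqrt(3), M_{n+1} = sqrt(2 + M_n): non-increasing with
# limit 2 (strictly decreasing as long as M_n > 2, since
# sqrt(2 + M) < M is equivalent to M^2 - M - 2 > 0, i.e. M > 2).
M = 1 + math.sqrt(3)
for _ in range(60):
    M_next = math.sqrt(2 + M)
    assert M_next <= M       # non-increasing (equality only once M hits 2)
    M = M_next
assert abs(M - 2) < 1e-9 and M >= 2
```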

Next, we will introduce another auxiliary map \(\varphi \) by \(\varphi (x) := h(x) + h(-x)\) for \(x \in {\mathbb {R}}\). By Claim 1 we have \(\varphi \le 2\) and \(\varphi \) is even. Apply inequality (5) with substitutions \((x,y) \rightarrow (\pm x, \pm y)\), resulting in four inequalities including the original one. Add these inequalities side-by-side to arrive at

$$\begin{aligned} \varphi (x)\varphi (y) + \varphi (x+y) + \varphi (x-y) \le 4 + 2 \varphi (xy), \quad x, y \in {\mathbb {R}}. \end{aligned}$$
(8)

Put \(y=x\) to see that

$$\begin{aligned} \varphi (x)^2 + \varphi (2x) \le 4 - \varphi (0)+ 2 \varphi (x^2), \quad x \in {\mathbb {R}}. \end{aligned}$$
(9)

Claim 4. If \(\varphi (y_0) < -M\) for some \(y_0>0\) and \(M \in {\mathbb {R}}\), then \(\varphi (2\sqrt{y_0}) < 6-2M\).

Apply (9) for \(x_0= \sqrt{y_0}\) to get:

$$\begin{aligned} \varphi (2x_0) \le \varphi (x_0)^2 + \varphi (2x_0) \le 4 - \varphi (0)+ 2 \varphi (y_0)< 6 - 2M, \end{aligned}$$

where the last inequality uses \(\varphi (0)=2h(0)>-2\), which follows from Claim 1.

End of the proof. Claim 4 gives us that if \(\varphi (y_0) < -6\) for some \(y_0>0\) and the sequence \((y_n)\) is defined recursively as \(y_n=2\sqrt{y_{n-1}}\) for \(n \in {\mathbb {N}}\), then \(\varphi (y_n)\rightarrow -\infty \). Indeed, if \(\varphi (y_{n-1})<-M\) with \(M>6\), then Claim 4 yields \(\varphi (y_n)<-(2M-6)\) and \(2M-6>M\), so the bounds diverge to \(-\infty \). On the other hand, \(y_n \rightarrow 4\) and our assumptions imply that \(\varphi \) has a finite limit at 4. This contradiction shows that \(-6\) is a lower bound for \(\varphi \) on \((0,\infty )\), and hence, \(\varphi \) being even, on all of \({\mathbb {R}}\). To see that h, and consequently f, is also bounded from below, it is enough to combine this observation with the definition of \(\varphi \) and Claims 1 and 3. Indeed, if \(x>0\) is arbitrary, then \(-6\le h(x) + h(-x)\), and since \(h(x)\le 1\), we get \(h(-x) \ge -7\), that is \(f(-x) \ge -6\); together with Claim 3 this bounds f from below on \({\mathbb {R}}\).\(\square \)
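The two limits used in this final step can be illustrated numerically (the starting values \(y_0=0.5\) and \(M_0=6.5\) below are hypothetical):

```python
import math

# y_n = 2*sqrt(y_{n-1}) converges to 4 (the fixed point of y = 2*sqrt(y)),
# while the bound recursion M_n = 2*M_{n-1} - 6 from Claim 4 diverges to
# +infinity whenever M_0 > 6, forcing phi(y_n) -> -infinity.
y, M = 0.5, 6.5
for _ in range(100):
    y = 2 * math.sqrt(y)
    M = 2 * M - 6
assert abs(y - 4) < 1e-9     # the cascade accumulates at x = 4
assert M > 1e12              # the lower bounds -M_n drop without limit
```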

3 Conclusions and Final Remarks

Remark 1

We have shown that in the case \(f(0)\ne 0\) every solution of (1) is bounded from above by 2 on \({\mathbb {R}}\) and from below by \(-1\) on \((0, \infty )\) (without any regularity assumptions), and, given finite limits at \(\pm 4\), from below by \(-6\) on \({\mathbb {R}}\). These constants are probably far from optimal, as suggested by the second remark of [1]. Therefore, it is an open problem to find optimal bounds for regular (in some sense) solutions of (1) for which \(f(0)\ne 0\).

Remark 2

The points \(\pm 4\) in the last result play a distinguished role. We do not know whether the assumption of finite limits at \(\pm 4\) can be omitted or replaced by another one. The non-linear terms appearing in (1), (5) and (8) obstruct the use of many tools that work for linear inequalities, such as convexity or subadditivity, and that allow one to establish several regularity properties of solutions.