Abstract
The classical result of L. Székelyhidi states that (under some assumptions) every solution of a general linear functional equation must be a polynomial function. It is known that Székelyhidi's result may be generalized to equations in which some occurrences of the unknown functions are multiplied by a linear combination of the variables. In this paper we study equations in which two such combinations appear. The simplest nontrivial example of such a case is given by the equation
$$F(x+y)-F(x)-F(y)= yf(x)+xf(y),$$
considered by Fechner and Gselmann (Publ Math Debrecen 80(1–2):143–154, 2012). In the present paper we prove several results concerning a systematic approach to generalizations of this equation.
1 Introduction
First we recall briefly the known results connected with the notion of polynomial functions. The history of polynomial functions goes back to the year 1909, when the paper by Fréchet [9] appeared. Let G, H be abelian groups (for some results concerning the noncommutative case see the papers of Almira and Shulman [3] and Shulman [31]) and let \(f:G\rightarrow H\) be a given function. The difference operator \(\Delta _h\) with span \(h\in G\) is defined by
$$\Delta _h f(x)= f(x+h)-f(x), \quad x\in G,$$
and \(\Delta _h^n\) is defined recursively by
$$\Delta _h^0 f= f, \qquad \Delta _h^{n+1} f= \Delta _h\left( \Delta _h^{n}f\right) , \quad n\in {\mathbb {N}}\cup \{0\}.$$
Using this operator, polynomial functions are defined in the following way.
Definition 1.1
A function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is called a polynomial function of order at most n if it satisfies the equality
$$\Delta _h^{n+1} f(x)=0 \qquad (1.1)$$
for all \(x, h\in {\mathbb {R}}.\)
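Although the paper works with general (possibly discontinuous) polynomial functions, the defining condition is easy to experiment with numerically for ordinary polynomials. A minimal sketch (our own illustration, not part of the original text; the helper names are ours):

```python
# Sketch: an ordinary polynomial of degree n satisfies Delta_h^{n+1} f = 0.
def delta(f, h):
    """Difference operator with span h: (Delta_h f)(x) = f(x+h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def delta_n(f, h, n):
    """n-fold iterate Delta_h^n, built recursively as in the text."""
    for _ in range(n):
        f = delta(f, h)
    return f

f = lambda x: x**3                      # ordinary polynomial of degree 3
print(delta_n(f, 1, 3)(5))              # Delta_h^n x^n = n! h^n, here 3! = 6
print(delta_n(f, 1, 4)(5))              # 0: f is a polynomial function of order at most 3
```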
Remark 1.1
It is known (see e.g. [32] or [7]) that a function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is polynomial of order at most n (in the sense of Definition 1.1) if, and only if, it satisfies the equation
$$\Delta _{h_1,\ldots , h_{n+1}} f(x)=0$$
for every \(h_1,\ldots , h_{n+1}, x\in {\mathbb {R}},\) where \(\Delta _{h_1,\ldots , h_{n+1}}= \Delta _{h_{n+1}}\circ \cdots \circ \Delta _{h_1}.\)
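The multi-span operator from Remark 1.1 can be illustrated in the same spirit; the following sketch (our own, with assumed helper names) checks that two distinct spans do not annihilate \(x^2\) while three spans do:

```python
# Sketch: the multi-span difference with n+1 spans annihilates a polynomial
# function of order at most n, while n spans in general do not.
from functools import reduce

def delta(f, h):
    """(Delta_h f)(x) = f(x+h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def delta_multi(f, spans):
    """Composition Delta_{h_{n+1}} o ... o Delta_{h_1}."""
    return reduce(delta, spans, f)

f = lambda x: x**2                      # monomial of order 2
print(delta_multi(f, [3, 5])(7))        # Delta_{h1,h2} x^2 = 2*h1*h2 = 30
print(delta_multi(f, [3, 5, 11])(7))    # 0: three spans annihilate it
```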
Polynomial functions are sometimes called generalized polynomials. The shape of solutions of this equation was obtained in various situations, among others, by Mazur and Orlicz [22], Van der Lijn [35] and Ɖoković [7]. To describe the form of polynomial functions we need the notion of multiadditive functions. A function \(A_n:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) is \(n-\)additive if, and only if, for every \(i\in \{1,2,\ldots ,n\}\) and for all \(x_1,\ldots ,x_n,y_i\in {\mathbb {R}}\) we have
$$A_n(x_1,\ldots ,x_i+y_i,\ldots ,x_n)= A_n(x_1,\ldots ,x_i,\ldots ,x_n)+A_n(x_1,\ldots ,y_i,\ldots ,x_n).$$
Further, for a function \(A_n:{\mathbb {R}}^n\rightarrow {\mathbb {R}},\) the diagonalization \(A_n^*\) is defined by
$$A_n^*(x)= A_n(x,\ldots ,x), \quad x\in {\mathbb {R}}.$$
Now we can present the mentioned characterization of polynomial functions.
Theorem 1.2
Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a polynomial function of order at most n. Then there exist unique \(k-\)additive functions \(A_k:{\mathbb {R}}^k\rightarrow {\mathbb {R}},\) \(k\in \{1,\dots ,n\},\) and a constant \(A_0\) such that
$$f(x)= A_0+\sum _{k=1}^{n} A_k^*(x), \quad x\in {\mathbb {R}}, \qquad (1.2)$$
where \(A_k^*\) is the diagonalization of \(A_k.\) Conversely, every function of the shape (1.2) is a polynomial function of order at most n.
A very important result is due to L. Székelyhidi who proved that every solution of a very general linear equation is a polynomial function (see [32] Theorem 9.5, cf. also Wilson [36]).
Theorem 1.3
Let G be an Abelian semigroup, S an Abelian group, n a nonnegative integer, \(\varphi _i,\psi _i\) additive functions from G to G, and let \(\varphi _i(G)\subset \psi _i(G),\;i\in \{1,\dots ,n\}.\) If functions \(f,f_i:G\rightarrow S\) satisfy the equation
$$f(x)+\sum _{i=1}^{n} f_i\left( \varphi _i(x)+\psi _i(y)\right) =0, \quad x,y\in G, \qquad (1.3)$$
then f satisfies (1.1).Footnote 1
Having a result of this kind, it is much easier to solve linear equations because it is no longer necessary to deal with each equation separately. Instead, we may formulate results which are valid for large classes of equations. It is even possible to write computer programs which solve linear functional equations, see the papers of Gilányi [13] and Borus and Gilányi [5].
Székelyhidi’s result though very nice and general does not close the research on polynomial functions. In [26] a lemma more general than Theorem 1.3 was used by the third author to obtain the solutions of the equation
connected with the Taylor formula. As we can see, Eq. (1.4) is not linear and, thereby, the family of equations having only polynomial solutions is enriched. Later on, in the papers [16,17,18,19,20, 33] by Koclȩga-Kulpa, Wa̧sowicz and the fourth author, the mentioned lemma was used to deal with functional equations connected with numerical analysis. For a systematic approach to this topic see the monograph of the fourth author [33]. Let us also cite another monograph, by Sahoo and Riedel [29], where other functional equations stemming from mean value theorems are discussed. Actually, there are several examples of results dealing with solving functional equations without any, or under weak, regularity assumptions; let us mention e.g. [1, 2, 4, 6, 10, 11, 14, 15, 24, 25, 27, 28] or [30].
The present paper is inspired by the equation
$$F(x+y)-F(x)-F(y)= yf(x)+xf(y) \qquad (1.5)$$
(solved by Fechner and Gselmann in [8]), where f is multiplied by two different expressions. In the second section of the paper we present a lemma which generalizes results from the third author's and Lisak's papers [21, 26] and which shows that the solutions of a very general equation must be polynomial. The solutions of (1.5) must be polynomial, but it is interesting that some of their monomial summands must be continuous, whereas others may be arbitrary monomial functions. In the third section we deal with generalizations of (1.5) and we explain this behaviour.
2 A lemma
Let us begin with the following general Lemma (cf. Wilson [36], Székelyhidi [32], the third author [26], Pawlikowska [23] and Lisak and the third author [21]). Before we state the Lemma let us adopt the following notation. Let G and H be commutative groups. Then \(SA^{i}(G;H)\) denotes the group of all i-additive, symmetric mappings from \(G^{i}\) into H for \(i\ge 2\), while \(SA^{0}(G;H)\) denotes the family of constant functions from G to H and \(SA^{1}(G;H)=\text{ Hom }(G;H)\). We also denote by \({\mathcal {I}}\) the subset of \(\text{ Hom }(G;G)\times \text{ Hom }(G;G)\) containing all pairs \((\alpha ,\beta )\) for which \(\text{ Ran }(\alpha )\subset \text{ Ran }(\beta )\). Furthermore, we adopt the convention that a sum over an empty set of indices equals 0. We also denote, for an \(A_i\in SA^{i}(G;H),\) by \(A_i^*\) the diagonalization of \(A_i, \, i\in {\mathbb {N}}\cup \{0\}.\) Let us also introduce the operator \(\Gamma : G\times G\times H^{G\times G} \rightarrow H^{G\times G}\) defined as follows. For each \(\phi :G\times G\rightarrow H\) and each \((u,v)\in G\times G\) we set
$$\Gamma _{(u,v)}\phi (x,y)= \phi (x+u,y+v)-\phi (x,y)$$
for each \((x,y)\in G\times G.\) In fact, \(\Gamma \) is nothing else but the operator \(\Delta \) defined above, applied to functions of two variables. However, we wish to stress the difference between the one-variable and the two-variable setting; this is why we denote the new operator with a different symbol.
Lemma 2.1
Fix \(N, \, M\in {\mathbb {N}}\cup \{0\},\) and let \(I_{p,n-p}, \, 0\le p\le n, \, n\in \{0,\dots ,M\},\) be finite subsets of \({\mathcal {I}}\). Suppose further that H is uniquely divisible by N! and let functions \(\varphi _i:G\rightarrow SA^i(G;H),\, i\in \{0,\dots ,N\},\) and \(\psi _{p,n-p,(\alpha ,\beta )}:G\rightarrow SA^{n}(G;H),\, (\alpha ,\beta )\in I_{p,n-p},\, 0\le p \le n,\, n\in \{0,\dots ,M\},\) satisfy
for every \(x,y\in G.\) Then \(\varphi _N\) is a polynomial function of degree not greater than
where \(K_s= \bigcup _{p=0}^s I_{p,s-p}\) for each \(s\in \{0,\dots ,M\}.\)
Proof
Let us fix \(N\in {\mathbb {N}}\cup \{0\}.\) We prove the Lemma by induction with respect to M. Let us start with \(M=-1\): in this case the right-hand side of Eq. (2.1) vanishes, and (2.1) reduces to
for each \(x, \, y\in G.\) Thus the polynomial in y with coefficients \(\varphi _i(x), \, i\in \{0,\dots ,N\},\) vanishes identically. It is not difficult to see that this is equivalent to the system of identities \(\varphi _i =0, \, i\in \{0, \dots ,N\}.\) In particular \(\varphi _N\) is a polynomial function, identically equal to 0, so its degree is estimated by 0.
Now suppose that our Lemma holds for some \(M\ge -1\) and consider the equation
for every \(x,y\in G.\) Assume that \(K_{M+1} \ne \emptyset \) (otherwise (2.4) reduces to (2.1) and we are done). Then \(I_{p,M+1-p}\ne \emptyset \) for some \(p\in \{0,\dots ,M+1\}.\) Fix such a p and write \(I_{p,M+1-p} = \{(\alpha _j, \beta _j): j\in \{1,\dots , m\}\}\) for some \(m\in {\mathbb {N}}.\) Choose a pair \((\alpha , \beta )\in I_{p, M+1-p}\) and fix \(u_1\in G \) arbitrarily. For this \(u_1\) choose \(v_1\in \beta ^{-1}(\{\alpha (-u_1)\}),\) so that \(\alpha (u_1) +\beta (v_1) =0;\) such a \(v_1\) exists because \(\text{ Ran }(\alpha )\subset \text{ Ran }(\beta ).\) Now let us apply the operator \(\Gamma _{(u_1,v_1)}\) to both sides of (2.4). On the left-hand side we obtain
Denoting \({\hat{\varphi }}_N: = \Delta _{u_1}\varphi _N\) we get again the left-hand side of the Eq. (2.1) but with \({\hat{\varphi }}_N\) instead of \(\varphi _N\) (note that the remaining summands may be written as polynomial functions in y but of degrees lower than N, and they can be rearranged in such a way that the left-hand side is again a finite sum of polynomial functions in y with coefficients dependent on x).
Let us now look at the right-hand side. If we apply \(\Gamma _{(u_1,v_1)}\) to the first summands, it will transform them into summands of a similar character, with \(\alpha (x) + \beta (y)\) replaced by \(\alpha (x) + \beta (y)+ \alpha (u_1) + \beta (v_1).\) But in the last summand, more exactly in the summand determined by the pair \((\alpha , \beta )\) for which \(u_1\) and \(v_1\) were selected, we have the following situation
for every \(x, \, y \in G.\) We see that the action of \(\Gamma _{(u_1,v_1)}\) increases the number of summands but decreases the degree of the polynomial functions by 1. Applying the operator \(p-1\) more times, we eventually annihilate this summand on the right-hand side. Repeating the above procedure for arbitrary \(u_j\in G, \, j\in \{1,\dots , q\},\) we obtain the equations (cf. (2.5) and (2.6))
for every \(x,\, y\in G.\) Here \({\hat{\psi }}_{r, n-r, (\alpha , \beta )}\) and \({\hat{\varphi }}_i\) are the new functions obtained after applying the operator \(\Gamma \) to the previous ones. Repeating this procedure, we arrive at the complete annihilation of the summands corresponding to \(M+1\) and finally replace (2.7) by the following equation
for all \(x, \, y\in G \) and \(u_1,\dots , u_q \in G.\) Now we may use the induction hypothesis and infer that
is a polynomial function.
To estimate the degree, let us trace what the above procedure actually does. Applying the operator \(\Gamma _{(u,v)}\) (with properly selected u and v) to both sides, we "annihilate" one summand on the right-hand side of (2.1) at level 0. Thus, applying the operator \(\Gamma \) \(\mathrm{card}\,K_0\) times with arbitrary u's, we get rid of the summands constituting level 0. Then we apply \(\Gamma \) again to annihilate the level 1 summands, but we have to do it in two steps: first we decrease the degree of the summand by 1, and only then, in the second step, can we annihilate the summand. It thus takes \(2\,\mathrm{card}\,K_1\) applications to annihilate the terms of degree 1. Similarly, it takes \(3\,\mathrm{card}\,K_2\) applications to annihilate the terms of the second degree and, in general, \((n+1)\,\mathrm{card}\,K_n\) applications to annihilate the terms of the n-th degree. On the left-hand side appears \(\Delta _{u_1,\dots ,u_q}\varphi _N(x)(y),\) where
$$q= \sum _{n=0}^{M} (n+1)\,\mathrm{card}\,K_n.$$
\(\square \)
3 Results
Let us solve Eq. (1.5), applying our Lemma 2.1.
Theorem 3.1
Let the pair (f, F) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfy the equation
$$F(x+y)-F(x)-F(y)= yf(x)+xf(y) \qquad (3.1)$$
for all \(x, \, y\in {\mathbb {R}}.\) Then f is a polynomial function of degree not greater than 2 and F is a polynomial function of degree not greater than 3.
Proof
Let us rewrite Eq. (3.1) in the form
$$yf(x)+F(x)= F(x+y)-F(y)-xf(y) \qquad (3.2)$$
for all \(x, y\in {\mathbb {R}}.\) If we take now \(G=H={\mathbb {R}}, \, N=1, \, M=1,\) \( I_{0,0}= \{(0,\mathrm{id}), (\mathrm{id}, \mathrm{id})\},\) \(\psi _{0,0,(0,\mathrm{id})} = -F, \, \psi _{0,0,(\mathrm{id},\mathrm{id})} = F,\) \(I_{0,1}=\emptyset , \, I_{1,0}= \{(0,\mathrm{id})\},\) \( \psi _{1,0,(0,\mathrm{id})} = -f,\) \(\varphi _1 = f, \, \varphi _0 = F\) then we see that (3.2) is a particular case of (2.1). We also have \(K_0= I_{0,0}\) and \(K_1= I_{1,0}\) with \(\mathrm{card }(K_0\cup K_1) = 2\) and \(\mathrm{card} K_1 =1.\) Therefore (cf. (2.2)) f is a polynomial function of degree at most 2. Hence there exist \(A_0\in SA^0({\mathbb {R}},{\mathbb {R}}), A_1\in SA^1({\mathbb {R}}, {\mathbb {R}})\) and \( A_2\in SA^2({\mathbb {R}}, {\mathbb {R}})\) such that f is given by
for every \(x\in {\mathbb {R}}.\) On the other hand, taking (3.1) into consideration again and putting \(y=h\) in (3.1), we obtain after rearranging the equation
$$F(x+h)-F(x)= F(h)+hf(x)+xf(h),$$
or
$$\Delta _h F(x)= F(h)+hf(x)+xf(h). \qquad (3.4)$$
Since f is a polynomial function, we see that the right-hand side of the above is a polynomial function. Now, applying the Fréchet operator three times to both sides of (3.4), we see that the right-hand side vanishes, and so does the left-hand side. This means, however, that F is a polynomial function of order at most one greater than the order of f. \(\square \)
Remark 3.1
In fact we have shown above that the class of polynomial functions has the so-called double difference property; more exactly, if DF, defined by \(DF(x,y) = F(x+y) - F(x) - F(y),\) is a polynomial function of two variables, then \(F=a+p,\) where \(a:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) is an additive function and \(p:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) is a polynomial function.
Let \(B_i\in SA^i({\mathbb {R}},{\mathbb {R}}), \, i\in \{0,\dots ,3\},\) be such that
$$F(x)= B_0+B_1(x)+B_2^*(x)+B_3^*(x)$$
for every \(x\in {\mathbb {R}}.\)
Remark 3.2
Taking qx, qy in place of x and y, respectively, in (3.1), using the rational homogeneity of the monomial summands of f and F, and collecting the terms with equal powers of q, we can see that this equation is possible only if it occurs for monomials of equal order.
Taking the above remark into account, we start with \(F=B_0^*=B_0.\) Then from (3.1) we infer that \(f=0\) and so
$$B_0=0.$$
In particular, \(F(0)=0.\) Let us now assume that \(F(x)=B_1^*(x)=B_1(x).\) Then necessarily (cf. (3.1))
$$yf(x)+xf(y)=0,$$
whence it follows that \(f=0 .\) Thus \(B_1\) is an arbitrary additive function, and in particular \(A_0=0.\)
The next step is
$$F(x)=B_2^*(x)$$
for every \(x\in {\mathbb {R}}.\) From (3.1) we derive
$$2B_2(x,y)= yA_1(x)+xA_1(y)$$
for every \(x, y\in {\mathbb {R}}.\) Hence
$$B_2^*(x)= xA_1(x)$$
for every \(x\in {\mathbb {R}}.\) Now, let us pass to the case where \(F(x)= B_3^*(x)\) for every \(x\in {\mathbb {R}}.\) Then we have \(f(x)=A_2^*(x), \, x\in {\mathbb {R}},\) and from (3.1) we get, taking \(x=y,\)
$$6B_3^*(x)= 2xA_2^*(x),$$
whence
$$B_3^*(x)= \frac{1}{3}xA_2^*(x)$$
for every \(x\in {\mathbb {R}}.\) Inserting the above equality into (3.1), we obtain
for every \(x, \, y\in {\mathbb {R}}.\) Hence, after some elementary calculations, we obtain
for every \(x,\, y\in {\mathbb {R}}.\) Putting here \(y=1,\) we obtain
for every \(x\in {\mathbb {R}}.\) We obtain from (3.8)
and
for every \(x\in {\mathbb {R}}.\) Taking (3.7) into account, we have by (3.9)
for every \(x\in {\mathbb {R}}.\) Thus we have proved the following.
Proposition 3.2
The pair (f, F) is a solution of (3.1) if, and only if
- \(f(x) = A_1(x) + a_2x^2,\)
- \(F(x) = B_1(x) + xA_1(x) + \frac{1}{3}a_2x^3,\)
for all \(x\in {\mathbb {R}}.\) Here \(A_1\) and \(B_1\) are arbitrary additive functions, and \(a_2\in {\mathbb {R}}\) is an arbitrary constant.
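As a sanity check of Proposition 3.2, one can verify a continuous representative of the solution family against the equation \(F(x+y)-F(x)-F(y)=yf(x)+xf(y)\) (the form of (3.1) that can be read off from the substitutions made in the proof of Theorem 3.1). The sample values \(A_1(x)=2x,\) \(B_1(x)=5x,\) \(a_2=3\) below are our own choice:

```python
# Sketch: verifying Proposition 3.2 on a continuous sample solution.
# Sample choices (ours, not the paper's): A_1(x) = 2x, B_1(x) = 5x, a_2 = 3.
from fractions import Fraction

A1 = lambda x: 2 * x
B1 = lambda x: 5 * x
a2 = 3

def f(x):
    return A1(x) + a2 * x**2                                # f(x) = A_1(x) + a_2 x^2

def F(x):
    return B1(x) + x * A1(x) + Fraction(1, 3) * a2 * x**3   # B_1(x) + x A_1(x) + a_2 x^3 / 3

assert all(F(x + y) - F(x) - F(y) == y * f(x) + x * f(y)
           for x in range(-6, 7) for y in range(-6, 7))
print("Proposition 3.2: sample solution satisfies the equation")
```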
Now we are going to investigate a more general equation. We are interested in solving the equation
$$\sum _{i=1}^{n}\gamma _i F(\alpha _i x+\beta _i y)= yf(x)+xf(y) \qquad (3.11)$$
for every \(x,\, y\in {\mathbb {R}}.\) First, we assume that both functions f and F are polynomial functions. Then, similarly as in the case of Theorem 3.1, the monomial summands of f and F of orders k and \(k+1,\) respectively satisfy (3.11). Later on we will discuss how Lemma 2.1 may be used to show that (in some situations) f and F are indeed polynomial functions.
A characteristic feature of (3.11) is the dependence of the existence of solutions on the behaviour of the sequence \((S_k)_{k\in {\mathbb {N}}\cup \{0\}}\) given by
$$S_k= \sum _{i=1}^{n}\gamma _i(\alpha _i+\beta _i)^{k+1} \qquad (3.12)$$
for all \(k\in {\mathbb {N}}\cup \{0\}.\) Let us observe that in the case of (3.1) we have \(n=3\) and \(\gamma _1=\alpha _1=\beta _1=\alpha _2=\beta _3=1,\) \(\beta _2 = \alpha _3=0,\) and \(\gamma _2=\gamma _3=-1.\) We have \(S_k= 2^{k+1}-2=2(2^k -1), \, k\in {\mathbb {N}};\) in particular \(S_0= 1\cdot 2 -1\cdot 1 - 1\cdot 1 = 0.\)
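The values quoted above are easy to reproduce: with the data listed for (3.1), the sums \(\sum _{i=1}^{n}\gamma _i(\alpha _i+\beta _i)^{k+1}\) indeed give \(2^{k+1}-2.\) A short sketch (our own check):

```python
# Sketch: S_k for the data of Eq. (3.1):
# gamma = (1, -1, -1), (alpha_i, beta_i) = (1,1), (1,0), (0,1).
gamma = [1, -1, -1]
alpha = [1, 1, 0]
beta  = [1, 0, 1]

def S(k):
    # S_k = sum_i gamma_i * (alpha_i + beta_i)^(k+1)
    return sum(g * (a + b) ** (k + 1) for g, a, b in zip(gamma, alpha, beta))

assert S(0) == 1 * 2 - 1 * 1 - 1 * 1 == 0
assert all(S(k) == 2 ** (k + 1) - 2 for k in range(12))
print([S(k) for k in range(4)])       # [0, 2, 6, 14]
```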
Using our Lemma 2.1 we infer rather easily that f is a polynomial function. We assume that also F is a polynomial function. The aim of the next theorem is to prove that, under the assumptions made, solutions of (3.11) are continuous, except for an additive summand. Similarly as in the case of Theorem 3.1, it is enough to assume that f and F are monomials.
Theorem 3.3
Let \(k\in {\mathbb {N}}\cup \{0\}.\) Let \(\gamma _i \in {\mathbb {R}}\) and \(\alpha _i,\beta _i \in {\mathbb {Q}}\) be such that \(S_k\ne 0\) for all \(k \in {\mathbb {N}}\cup \{0\}\) (cf. (3.12)). Further, let \(f:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) be either 0 or a monomial function of order k, let \(F:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) be a monomial function of order \(k+1,\) and suppose that the pair (f, F) satisfies equation (3.11).
(i) If \(k=1\) then either \(\sum _{i=1}^n \gamma _i\alpha _i^2\ne 0\ne \sum _{i=1}^n \gamma _i\beta _i^2\) and \(f=F =0\) is the only solution of (3.11), or \(\sum _{i=1}^n \gamma _i\alpha _i ^2=\sum _{i=1}^n \gamma _i\beta _i^2= 0\) and f is an arbitrary additive function while F is given by \(F(x) =\frac{2}{S_1}xf(x).\)
(ii) If \(k=0\) or \(k\ge 2\) then both f and F are continuous.
Moreover, for every \(k>2\) and for every \(j\in \{2,\dots , k-1\},\) if \(f\ne 0\) then
which implies
and obviously
Proof
Let us start with the case \(k=0.\) Then \(f=\mathrm{const}=A_0\) and F is additive. Putting \(x=y\) in (3.11) we obtain (taking into account the rational homogeneity of F)
for each \(x\in {\mathbb {R}}. \) Using the assumption that \(S_0\ne 0\) we get
for each \(x\in {\mathbb {R}},\) and hence F is a continuous function.
In the case \(k=1\) we obtain that \(f=A_1\) is additive and F is a quadratic function, i.e. the diagonalization of a biadditive symmetric function; moreover, \(S_1= \sum _{i=1}^n \gamma _i(\alpha _i+ \beta _i)^2.\) Putting \(x=y\) in (3.11), we obtain
for every \(x\in {\mathbb {R}}, \) whence (keeping in mind that \(S_1\ne 0\)) we get (denoting \(\frac{2}{S_1}\) by \(C_1\))
for every \(x\in {\mathbb {R}}.\) Substituting the above into (3.11), we obtain
for all \(x,\, y\in {\mathbb {R}}.\) Comparing terms of the same degree on both sides of the above equation, we obtain
for all \(x\in {\mathbb {R}},\) and, symmetrically,
for all \(y\in {\mathbb {R}}.\) Both of these equations hold if either \(A_1=0\) or
Now, if \(A_1=0\) then also \(F=0,\) and we get the continuity of the solution (f, F) of (3.11) in this case. Next, let us look for nonzero solutions of (3.11). The existence of a nontrivial \(A_1\) implies that (3.19) holds. So, in this case we have
Taking (3.18) and (3.19) (hence (3.20)) into account we obtain (keeping in mind that \(S_1\ne 0\))
for all \(x,\, y\in {\mathbb {R}},\) which actually means that, taking an arbitrary additive function \(A_1\) as f, we get that the pair (f, F) is a solution of (3.11) for \(k=1.\) Of course, such solutions are mostly discontinuous.
Now, let us proceed to the case \(k=2.\) Observe that f is now a diagonalization of a biadditive, symmetric function \(A_2.\) Similarly as in the previous cases, putting \(x=y\) we obtain from (3.11)
for every \(x\in {\mathbb {R}},\) whence in view of \(S_2\ne 0,\)
for all \(x\in {\mathbb {R}}.\) Denote \(\frac{2}{S_2}\) by \(C_2.\)
Let us substitute the formula (3.21) into (3.11). We obtain
for all \(x, \, y \in {\mathbb {R}}.\) Using the biadditivity of \(A_2\) (and hence the rational homogeneity of f), we obtain
for all \(x, \, y\in {\mathbb {R}}.\) Now, comparing the terms of the same degree on both sides of (3.22), we get first that either
or \(A_2 = 0.\) In the sequel we assume that \(A_2\ne 0,\) hence (3.23) holds. In other words \(S_2= 3\sum _{i=1}^n\gamma _i \left( \alpha _i^2\beta _i + \alpha _i \beta _i^2\right) .\) Let us compare the remaining terms. We get
and
for all \(x, \, y\in {\mathbb {R}}.\) Putting \(x=y\) above, and taking into account that \(A_2\ne 0,\) we infer that \( \sum _{i=1}^n \gamma _i \alpha _i ^2 \beta _i = \sum _{i=1}^n \gamma _i \alpha _i \beta _i^2 = \frac{1}{3C_2}=\frac{S_2}{6}.\) Hence we may write
and
for all \(x, \, y\in {\mathbb {R}}.\) Putting \(y=1\) into (3.24) and (3.25) we obtain
for every \(x\in {\mathbb {R}},\) hence f and F are continuous.
Now, let us pass to the situation where \(k\ge 3.\) In general, if \(k\ge 3\) and f and F satisfy (3.11) then
for every \(x\in {\mathbb {R}}\) and hence
for every \(x\in {\mathbb {R}}.\) Put \(C_k:= \frac{2}{S_k}.\) We can write
for all \(x, \, y\in {\mathbb {R}}.\) Comparing the terms of equal degrees, we infer that either \(A_k=0\) or \(\sum _{i=1}^n \gamma _i \alpha _i^{k+1} = \sum _{i=1}^n \gamma _i \beta _i^{k+1}=0\) (cf. (3.14)). Assume from now on that we are interested in nontrivial solutions of (3.11). Continuing comparisons of the terms on both sides of (3.27), we get for every \(j\in \{2,\dots , k-1\}\)
(cf. (3.13)) for otherwise (putting \(x=y\)) we would get
which is impossible. Note that from the above (3.15) and (3.16) follow. Taking this into account, as well as the definition of \(C_k\) and comparing the remaining terms in (3.27), we get
for all \(x, \, y\in {\mathbb {R}}.\) Using (3.15), we get hence
and analogously we infer
for all \(x, \, y\in {\mathbb {R}}.\) Let us put \(x+y\) instead of x in (3.29). We obtain, after some easy though tedious calculations, that the left-hand side is equal to
while the right-hand side is equal to
Comparing on both sides the terms of equal degrees we obtain in particular the following sequence of equalities.
for \(j\in \{0,\dots , k-1\}\) and all \(x, \, y\in {\mathbb {R}}.\) Now, using (3.30) for \(j\in \{0,\dots , k-1\}\) we arrive at
for every \(x,\, y\in {\mathbb {R}},\) in other words, putting \(y=1\) we obtain
for every \(x\in {\mathbb {R}},\) which means that \(A_k\) is continuous for \(k\ge 3\) and thus the proof is finished. \(\square \)
Remark 3.3
Using Lemma 2.1 in exactly the same way as in the proof of Theorem 3.1, we infer rather easily that if the functions F and f satisfy (3.11), then f must be a polynomial function. The following simple example shows that the function F is not necessarily polynomial.
Example 1
Observe that the equation
is satisfied by any even function F and \(f=0.\)
The reason why the above example works is that the equation
for all \(x\in {\mathbb {R}},\) has solutions which are not polynomial. If we consider a general linear equation
for all \(x,\, y\in {\mathbb {R}},\) and we assume that at least one of the pairs \((\alpha _i,\beta _i)\) is linearly independent of all the others, then, using Theorem 1.3, it may be shown that every solution of (3.33) is a polynomial function. It is therefore natural to formulate the following problem.
Problem 1
Let \(\alpha _i,\beta _i,\gamma _i\in {\mathbb {R}},\gamma _i\ne 0,i=1,\dots ,n\) be such that there exists an \(i_0\in \{1,\dots ,n\}\) satisfying
Is it possible that the functional equation (3.11) is satisfied by some functions \(f,\, F\) where F is not a polynomial function?
As we have seen (cf. Example 1), it is possible that Eq. (3.11) is satisfied by a pair (f, F) where F is not a polynomial function. However, we will give some examples of particular forms of this equation which have only polynomial solutions, and therefore we can apply Theorem 3.3 to solve these equations.
Proposition 3.4
Let \(\alpha _i,\beta _i,\gamma _i, \, i\in \{1,\dots ,n\}\) be real numbers such that
holds and \(\alpha _i+\beta _i=1, \, i\in \{1,\dots ,n\}.\) If the pair (f, F) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies Eq. (3.11) then the functions f and F are polynomial.
Proof
Similarly as before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take \(x=y\) in (3.11) to show that also F must be polynomial. \(\square \)
Now we show some examples of equations (with nontrivial solutions) which may be solved with the use of the above proposition.
Example 2
Assume that functions \(f,F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfy the functional equation
for all \(x,\, y\in {\mathbb {R}}.\) Rearranging (3.35) in the form
for all \(x, \, y\in {\mathbb {R}},\) we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that also F is a polynomial function. Now we check the conditions of Theorem 3.3. If \(k=0\) then \(f(x)=b\) for some constant \(b\in {\mathbb {R}}\) and all \(x\in {\mathbb {R}},\) further \(S_0=-2\ne 0\) and, consequently, \(F(x)=-bx,\) for all \(x\in {\mathbb {R}}.\) Now let \(k=1,\) then \(S_1=-2,\)
and again from Theorem 3.3 we infer that f is any additive function and \(F(x)=-xf(x)\) for all \(x\in {\mathbb {R}}.\) If \(k=2,3,\) then it is easy to see that the solutions of (3.35) must vanish. Thus the general solution of this equation is given by \(f(x)=a(x)+b\) and \(F(x)=-xa(x)-bx, \, x\in {\mathbb {R}},\) where \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is additive and b is a constant.
Example 3
Assume that functions \(f,F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfy the functional equation
for all \(x,\, y \in {\mathbb {R}}.\) Rearranging (3.36) in the form
for all \(x,\, y\in {\mathbb {R}},\) we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that also F is a polynomial function. Now we check the conditions of Theorem 3.3. If \(k=0\) then \(f(x)=b\) for some constant \(b\in {\mathbb {R}},\) and all \(x\in {\mathbb {R}},\) further \(S_0=-6\ne 0\) and, consequently, \(F(x)=-\frac{b}{3}x\) for all \(x\in {\mathbb {R}}.\) Now let \(k=1,\) then \(S_1=-6\) but this time
and again from Theorem 3.3 we infer that \(f=F=0.\) If \(k=2\) then the solutions must be continuous since \(S_2=-6\ne 0,\) moreover
which means that \(f(x)=cx^2\) and \(F(x)=-\frac{c}{3}x^3, \, x\in {\mathbb {R}}\) satisfy (3.36). Thus the general solution of this equation is given by \(f(x)=cx^2+b\) and \(F(x)=-\frac{c}{3}x^3-\frac{b}{3}x, \, x\in {\mathbb {R}}\) where b, c are real constants.
Observe that in Eq. (1.5) the left-hand side is the difference connected with the Cauchy equation. Since additive functions are monomial functions of order one, it is natural to ask whether this difference may be replaced by the difference connected with monomial functions of higher order, or with polynomial functions. In the next part of the paper we consider functional equations constructed in such a way.
Lemma 3.1
Let n be a given positive integer. If the pair \((f,\,F) \) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation
$$\Delta ^n_y F(x)= yf(x)+xf(y) \qquad (3.37)$$
for all \(x,\, y\in {\mathbb {R}},\) then f is a polynomial function of order at most \(n+1\) and F is a polynomial function of order not greater than \(n+2.\)
Proof
We write (3.37) in the form
for all \(x,\, y\in {\mathbb {R}}.\) Similarly as before, using Lemma 2.1, we can see that f is a polynomial function of order at most \((n+1)+1-1=n+1.\) Indeed, observe that in the present situation we have \(K_0 = \{(\mathrm{id}, i\mathrm{id}): i\in \{1,\dots ,n \}\}\) and \(K_1=\{(0,\mathrm{id})\}.\) Hence \(\mathrm{card}(K_0\cup K_1) = n+1\) and \(\mathrm{card}K_1 = 1,\) whence the estimation follows (cf. (2.2)).
Further, applying the difference operator with span y \((n+2)\) times to both sides of (3.37), we get
$$\Delta _y^{2n+2}F(x)=0$$
for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(2n+1.\)
Now consider any \(k>n+1.\) The function f is a polynomial function of order smaller than k, thus the monomial summand of F of order \(k+1\) satisfies (3.37) with \(f=0.\) However, the n-th difference does not annihilate nonzero monomial functions of order \(k+1>n.\) This means that the summands of F of orders greater than \(n+2\) must be zero, i.e. F is a polynomial function of order at most \(n+2.\)
\(\square \)
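The differencing step in the proof above can be checked numerically for \(n=1\): applying the difference operator in x with span y three times annihilates the right-hand side \(yf(x)+xf(y)\) whenever f is an ordinary polynomial of degree at most \(n+1=2.\) A sketch (our own illustration, with a sample f):

```python
# Sketch for n = 1: three applications of the difference operator (in x, span y)
# annihilate y*f(x) + x*f(y) when f is an ordinary polynomial of degree <= 2.
def delta(g, h):
    """(Delta_h g)(x) = g(x+h) - g(x), acting on functions of x."""
    return lambda x: g(x + h) - g(x)

f = lambda t: 7 * t**2 - 3 * t + 2      # sample polynomial of degree 2 (our choice)

for y in range(1, 6):
    g = (lambda y: lambda x: y * f(x) + x * f(y))(y)   # right-hand side as a function of x
    for _ in range(3):                  # (n+2) = 3 differences with span y
        g = delta(g, y)
    assert all(g(x) == 0 for x in range(-5, 6))
print("right-hand side annihilated by three differences")
```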
Now we turn our attention to the equation whose left-hand side is the difference connected with the equation of monomial functions.
Lemma 3.2
Let n be a given positive integer. If the pair (f, F) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation
$$\Delta ^n_y F(x)-n!F(y)= yf(x)+xf(y) \qquad (3.38)$$
for all \(x,\, y\in {\mathbb {R}},\) then f is a polynomial function of order at most \(n+1\) and F is a polynomial function of order not greater than \(n+2.\)
Proof
We write (3.38) in the form
for all \(x,\, y\in {\mathbb {R}}.\) We see that \(K_0 = \{(\mathrm{id}, i\mathrm{id}): i\in \{1,\dots ,n \}\}\cup \{(0, \mathrm{id})\}\) and \(K_1=\{(0,\mathrm{id})\}.\) Hence \(\mathrm{card}(K_0\cup K_1) = n+1\) and \(\mathrm{card}\,K_1 = 1.\) Applying Lemma 2.1 again, we can see (cf. (2.2)) that f is a polynomial function of order at most \((n+1)+1-1=n+1.\) Further, applying the difference operator with span y \((n+2)\) times to both sides of (3.38), we get
$$\Delta _y^{2n+2}F(x)=0$$
for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(2n+1.\)
Now, similarly as in the respective part of the proof of Lemma 3.1, we can see that the order of F cannot be greater than \(n+2.\) Indeed, the summands of F of orders \(k>n+2\) must satisfy (3.38) with the right-hand side equal to zero (since f has no terms of order \(k-1\)), which is impossible since the equation
$$\Delta ^n_y F(x)= n!F(y)$$
for all \(x,\, y\in {\mathbb {R}},\) characterizes monomial functions of order n, and \(n<k.\) \(\square \)
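The characterization of monomial functions invoked at the end of the proof can be illustrated numerically: \(F(x)=x^n\) satisfies \(\Delta ^n_y F(x)=n!F(y)\) (differences taken in x with span y), while a monomial of higher order does not. A sketch (our own check):

```python
# Sketch: Delta_y^n F(x) = n! F(y) holds for F(x) = x^n, fails for F(x) = x^(n+1).
from math import factorial

def iterated_delta(F, y, n):
    """Apply the difference operator in x with span y, n times."""
    g = F
    for _ in range(n):
        g = (lambda g: lambda x: g(x + y) - g(x))(g)
    return g

n = 2
F = lambda x: x**n
assert all(iterated_delta(F, y, n)(x) == factorial(n) * F(y)
           for x in range(-3, 4) for y in range(-3, 4))

G = lambda x: x**(n + 1)                # order n+1 > n: the relation fails
assert any(iterated_delta(G, y, n)(x) != factorial(n) * G(y)
           for x in range(-3, 4) for y in range(-3, 4))
print("the equation characterizes monomial functions of order", n)
```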
Now we can present the general solutions of Eqs. (3.37) and (3.38).
Theorem 3.5
A pair \((f,\, F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the Eq. (3.37) if and only if F is a polynomial function of order at most \(n-1\) and \(f=0.\)
Proof
From Lemma 3.1 we know that both f and F are polynomial functions. Take first \(k\in \{0,1,\dots ,n-2\},\) and assume that f is a monomial function of order k and that F is a monomial function of order \(k+1.\) We can see that \(S_k=0\) i.e. from Theorem 3.3 we obtain \(f=0.\)
Now, take \(k\in \{n-1,n,n+1\};\) then \(S_k\ne 0\) and, as previously, assume that f is a monomial function of order k and that F is a monomial function of order \(k+1.\) We want to show that \(f=0.\) Suppose, for the sake of contradiction, that \(f\ne 0;\) then F is also nonzero. This leads to a contradiction: Eq. (3.37) cannot be satisfied, since the expression \(\Delta ^n_yF(x)\) contains a term of order \(k+1\) with respect to y, which is missing on the right-hand side.
We have proved that \(f=0;\) thus F obviously satisfies
$$\Delta ^n_y F(x)=0$$
for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(n-1.\) \(\square \)
In the next theorem we obtain the solution of Eq. (3.38).
Theorem 3.6
Let \((f,\, F)\) be a pair of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}.\) If \(n=1\) then the solutions of (3.38) are of the form obtained in Proposition 3.2. If \(n\ge 2\) then F is a monomial function of order n and \(f=0.\)
Proof
If \(n=1\) then (3.38) reduces to (1.5), which is already solved. Thus we may assume that \(n\ge 2.\) Using Lemma 3.2, we can see that the functions F and f are polynomial and, as usual, we will work with monomial functions. Thus let f and F be monomial functions of orders k and \(k+1,\) respectively. We want to show that \(f=0.\) However, if \(f\ne 0,\) then the right-hand side, which is of the form \(xf(y)+yf(x),\) contains the term yf(x) of order k with respect to the variable x. Such a term is missing in the expression \(\Delta ^n_yF(x)-n!F(y),\) since \(n\ge 2.\) Therefore also in this case we have \(f=0.\)
Using the equality \(f=0\) in (3.38), we get
$$\Delta ^n_y F(x)= n!F(y)$$
for all \(x,\, y\in {\mathbb {R}},\) for each monomial summand of F. This means that F is a monomial function of order n. \(\square \)
Remark 3.4
It is interesting that we have a nice set of solutions only for the difference stemming from Cauchy’s equation. Thus the case \(n=1\) in (3.38) is exceptional. It seems that the right-hand side of (1.5) must be suitably modified to get a similar effect for \(n>1.\)
We can add one more class of functional equations which may be solved with the use of Theorem 3.3.
Proposition 3.7
Let \(\beta _i, \, i\in \{1,\dots ,n\}, \, \gamma _i, \, i\in \{1,\dots ,n+1\}\) be real numbers such that (3.34) holds. If the pair (f, F) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation
for all \(x,\, y\in {\mathbb {R}},\) then the functions f and F are polynomial.
Proof
As before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take \(y=0\) in (3.39) to show that F must also be polynomial. \(\square \)
Remark 3.5
Note that Eq. (3.39) is a generalization of Eqs. (3.38) and (3.37). However, the methods used in Lemmas 3.1 and 3.2 were needed to show that F is polynomial because, in the case of these equations, condition (3.34) is not satisfied.
We end the paper with a remark connecting the results obtained here with the topic called alienation of functional equations (for some details concerning the problem of alienation of functional equations see the survey paper of R. Ger and the third author [12]).
Remark 3.6
Consider two equations:
which is satisfied only by \(f=0\) and
which usually has some solutions (depending on n and the constants involved). Results concerning Eq. (3.11) may be viewed from the perspective of the so-called alienation of functional equations. Any pair of the form (F, 0), where F satisfies (3.41), is clearly a solution of (3.11). An interesting question is whether (3.11) may have solutions of a different nature. As we proved, for some equations there are only solutions of this kind, whereas in other cases new solutions appear. Thus, in fact, we have examples of both alienation and nonalienation of equations of this kind. It may even happen that the same equations are alien for monomial functions of some orders and not alien for others. This effect is similar to the approach presented in [34] by the fourth author.
Notes
Here and in the sequel we assume that if \(n=0,\) then the sum \(\sum _{i=1}^{n}\) is empty, i.e. equal to zero.
References
Aczél, J.: A mean value property of the derivative of quadratic polynomials—without mean values and derivatives. Math. Mag. 58, 42–45 (1985)
Aczél, J., Kuczma, M.: On two mean value properties and functional equations associated with them. Aequationes Math. 38, 216–235 (1989)
Almira, J., Shulman, E.: On polynomial functions on non-commutative groups. J. Math. Anal. Appl. 458(1), 875–888 (2018)
Balogh, Z.M., Ibrogimov, O.O., Mityagin, B.S.: Functional equations and the Cauchy mean value theorem. Aequationes Math. 90(4), 683–697 (2016)
Borus, G., Gilányi, A.: Computer assisted solution of systems of two variable linear functional equations. Aequationes Math. 94(4), 723–736 (2020)
Carter, P., Lowry-Duda, D.: On functions whose mean value abscissas are midpoints, with connections to harmonic functions. Am. Math. Monthly 124(6), 535–542 (2017)
Djoković, D.Ž.: A representation theorem for \((X_1-1)(X_2-1)\cdots (X_n-1)\) and its applications. Ann. Polon. Math. 22, 189–198 (1969/70)
Fechner, W., Gselmann, E.: General and alien solutions of a functional equation and of a functional inequality. Publ. Math. Debrecen 80(1–2), 143–154 (2012)
Fréchet, M.: Une définition fonctionelle des polynômes. Nouv. Ann. 49, 145–162 (1909)
Ger, J.: On Sahoo-Riedel equations on a real interval. Aequationes Math. 63, 168–179 (2002)
Ger, R.: Oral communication. Katowice (2017)
Ger, R., Sablik, M.: Alien functional equations: a selective survey of results. In: Developments in functional equations and related topics, pp. 107–147, Springer Optim. Appl., 124, Springer, Cham (2017)
Gilányi, A.: Solving linear functional equations with computer (English summary). Math. Pannon. 9(1), 55–70 (1998)
Haruki, S.: A property of quadratic polynomials. Am. Math. Monthly 86, 577–579 (1979)
Koclȩga-Kulpa, B., Szostok, T.: On some functional equations connected to Hadamard inequalities. Aequationes Math. 75, 119–129 (2008)
Koclȩga-Kulpa, B., Szostok, T.: On a functional equation connected to Gauss quadrature rule. Ann. Math. Sil. 22, 27–40 (2008)
Koclȩga-Kulpa, B., Szostok, T.: On a class of equations stemming from various quadrature rules. Acta Math. Hungar. 130, 340–348 (2011)
Koclȩga-Kulpa, B., Szostok, T.: On a functional equation stemming from Hermite quadrature rule. J. Math. Anal. Appl. 414, 632–640 (2014)
Koclȩga-Kulpa, B., Szostok, T., Wa̧sowicz, S.Z.: On functional equations connected with quadrature rules. Tatra Mt. Math. Publ. 44, 27–40 (2009)
Koclȩga-Kulpa, B., Szostok, T., Wa̧sowicz, S.Z.: On some equations stemming from quadrature rules. Ann. Acad. Paedagog. Crac. Stud. Math. 8, 19–30 (2009)
Lisak, A., Sablik, M.: Trapezoidal rule revisited. Bull. Inst. Math. Acad. Sin. 6, 347–360 (2011)
Mazur, S., Orlicz, W.: Grundlegende Eigenschaften der polynomischen Operationen. Stud. Math. 5(50–68), 179–189 (1934)
Pawlikowska, I.: A characterization of polynomials through Flett’s MVT. Publ. Math. Debrecen 60, 1–14 (2002)
Riedel, T., Sablik, M.: Characterizing polynomial functions by a mean value property. Publ. Math. Debrecen 52, 597–610 (1998)
Sablik, M.: A remark on a mean value property. C. R. Math. Rep. Acad. Sci. Canada 14, 207–212 (1992)
Sablik, M.: Taylor’s theorem and functional equations. Aequationes Math. 60, 258–267 (2000)
Sablik, M.: Characterizing polynomial functions. In: Report of Meeting, The Seventeenth Katowice–Debrecen Winter Seminar, Zakopane (Poland), February 1–4, 2017, Ann. Math. Silesianae 31, 198–199 (2017)
Sablik, M.: An elementary method of solving functional equations. Ann. Univ. Sci. Budapest. Sect. Comp 48, 181–188 (2018)
Sahoo, P.K., Riedel, T.: Mean Value Theorems and Functional Equations. World Scientific, Singapore-New Jersey-London-Hong Kong (1998)
Schwarzenberger, M.: A functional equation related to symmetry of operators. Aequationes Math. 91(4), 779–783 (2017)
Shulman, E.: Each semipolynomial on a group is a polynomial. J. Math. Anal. Appl. 479(1), 765–772 (2019)
Székelyhidi, L.: Convolution Type Functional Equations on Topological Commutative groups. World Scientific Publishing Co. Inc., Teaneck, NJ (1991)
Szostok, T.: Functional equations stemming from numerical analysis. Dissertationes Math. (Rozprawy Mat.) 508, 57 pp. (2015)
Szostok, T.: Alienation of two general linear functional equations. Aequationes Math. 94(2), 287–301 (2020)
Van der Lijn, G.: La définition fonctionnelle des polynômes dans les groupes abéliens. Fund. Math. 33, 42–50 (1939)
Wilson, W.H.: On a certain general class of functional equations. Amer. J. Math. 40, 263–282 (1918)
Dedicated to Professor Ludwig Reich on his 80th Birthday.
Nadhomi, T., Okeke, C.P., Sablik, M. et al. On a new class of functional equations satisfied by polynomial functions. Aequat. Math. 95, 1095–1117 (2021). https://doi.org/10.1007/s00010-021-00781-2
Keywords
- Functional equations
- Polynomial functions
- Monomial functions
- Fréchet operator
- Continuity of monomial functions