Abstract
Whenever a numerical method produces accurate results, it gives rise to an interesting functional equation, and because no regularity is assumed, unexpected solutions can emerge. Thus, this paper is mainly devoted to finding solutions to a generalized functional equation constructed in this spirit; namely, we solve the generalized form of the functional equation considered in Fechner and Gselmann (Publ Math Debrecen 80(1–2):143–154, 2012), then considered in Nadhomi et al. (Aequationes Math 95:1095–1117, 2021) and continued in Okeke and Sablik (Results Math 77:125, https://doi.org/10.1007/s00025-022-01664-x, 2022); that is, we find the polynomial functions satisfying the following functional equation,
for every \(x,y\in \mathbb R\), \(\gamma _i,\alpha _j,\beta _j \in \mathbb R,\) and \(a_i,b_i,c_j,d_j \in \mathbb Q,\) and its special forms. Thus we continue the investigations presented in Nadhomi et al. (Aequationes Math 95:1095–1117, 2021), where the left-hand side of the Fechner–Gselmann equation was generalized, and those from Okeke and Sablik (Results Math 77:125, https://doi.org/10.1007/s00025-022-01664-x, 2022), where the right-hand side of the Fechner–Gselmann equation was studied. It turns out that under some assumptions on the parameters involved, the pair (F, f) solving Eq. (0.1) happens to be a pair of polynomial functions.
1 Introduction
In this paper, we consider the following functional equation
for every \(x,y\in \mathbb R\), \(\gamma _i,\alpha _j,\beta _j \in \mathbb R,\) and \(a_i,b_i,c_j,d_j \in \mathbb Q,\) and its special forms. The idea to study this generalized equation was motivated by the growing number of its particular forms studied by several mathematicians; let us quote a few of them: Aczél [1], Aczél and Kuczma [2], Alsina et al. [3], Fechner and Gselmann [6], Koclega-Kulpa and Szostok [8], Koclega-Kulpa et al. [9, 10], Nadhomi et al. [14] and Okeke and Sablik [15]. Their studies show that these particular forms have real applications.
The primary goal of this paper is to continue the investigation proposed in [15] (see Remark 2.3); in particular, to obtain the polynomial solutions of Eq. (1.1) and compare them with the solutions of the equations
and
The first special form of (1.1) that we solve is the functional equation considered by Koclega-Kulpa et al. [9], namely
It is worth noting that (1.4) stems from a well-known quadrature rule used in numerical analysis. Further, we also consider other special forms of (1.1), namely
and
Equation (1.5) is the functional equation connected with the Hermite–Hadamard inequality in the class of continuous functions, and it is related to approximate integration. Note that quadrature rules for approximate integration can be obtained by an appropriate specification of the coefficients in (1.5). Moreover, Eqs. (1.6) and (1.7) are variations of the Lagrange mean value theorem, with many applications in mathematical analysis, computational mathematics and other fields. Finally, Eq. (1.8) stems from the descriptive geometry used for graphical constructions.
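The link between functional equations of type (1.4)–(1.5) and quadrature rules can be illustrated numerically. The sketch below (an illustration only; the rules, interval, and tolerances are our choice, not taken from the paper) measures the exactness degree of three classical rules on [x, y]: the midpoint and trapezoidal rules are exact for polynomials of degree at most 1, and Simpson's rule for degree at most 3 — precisely the kind of identity such equations encode.

```python
# Exactness check: a rule with nodes t_j in [0, 1] and weights w_j approximates
# integral_x^y f by (y - x) * sum_j w_j f(x + t_j (y - x)).

def apply_rule(nodes, weights, f, x, y):
    return (y - x) * sum(w * f(x + t * (y - x)) for t, w in zip(nodes, weights))

def exact_integral_of_power(k, x, y):
    # integral of t^k from x to y
    return (y**(k + 1) - x**(k + 1)) / (k + 1)

def exactness_degree(nodes, weights, max_deg=6, x=0.3, y=1.7, tol=1e-9):
    # largest k such that the rule integrates t^0, ..., t^k exactly
    deg = -1
    for k in range(max_deg + 1):
        approx = apply_rule(nodes, weights, lambda t: t**k, x, y)
        if abs(approx - exact_integral_of_power(k, x, y)) > tol:
            break
        deg = k
    return deg

midpoint  = ([0.5], [1.0])
trapezoid = ([0.0, 1.0], [0.5, 0.5])
simpson   = ([0.0, 0.5, 1.0], [1/6, 4/6, 1/6])

print(exactness_degree(*midpoint))   # 1: exact for degree <= 1
print(exactness_degree(*trapezoid))  # 1: exact for degree <= 1
print(exactness_degree(*simpson))    # 3: exact for degree <= 3
```

The exactness degree is exactly the order up to which the corresponding functional equation admits polynomial solutions with F an antiderivative of f.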
In addition, we will show that the main results obtained by Koclega-Kulpa et al. [9] (see Theorems 1 and 2 in [9]) are special forms of our results. In line with their papers [8, 10], we use our method to obtain the polynomial functions connected with the Hermite–Hadamard inequality in the class of continuous functions. Furthermore, we show that the functional equations considered by Aczél [1] and Aczél and Kuczma [2] (cf. Theorem 5 in [2]) are special forms of Eq. (1.1). Moreover, we show that our method can be used to solve the functional equation arising from the geometric problems considered by Alsina et al. [3]. Now observe that Eqs. (1.1), (1.2) and (1.3) are obvious generalizations of the equation considered by Fechner and Gselmann [6], namely
Nadhomi et al. [14] investigated Eq. (1.2), and Okeke and Sablik [15] investigated Eq. (1.3). It turns out that under some mild assumptions, the pairs (F, f) of functions satisfying Eqs. (1.2) and (1.3) consist of polynomial functions, and in some important cases, just the usual polynomials (even though we assume no regularity of solutions a priori).
The fundamental tool in achieving the results in [14, 15] is a very special lemma (cf. Lemma 2.1 in [14], Lemma 1 in [12], Lemma 2.3 in [16] and Lemma 1.1 in [15]). Let us observe that this result is a generalization of a theorem from Székelyhidi's book [17] (Theorem 9.5), which in turn is a generalization of a result of Wilson from [19]. We quote here a slight modification of the lemma. Before we state it, let us adopt the following notation. Let G and H be commutative groups. Then \(SA^i(G;H)\) denotes the group of all \(i\)-additive, symmetric mappings from \(G^i\) into H for \(i\geqslant 2,\) while \(SA^0(G;H)\) denotes the family of constant functions from G to H and \(SA^1(G;H)= Hom(G;H).\) We also denote by \(\mathcal {I}\) the subset of \(Hom(G;G) \times Hom(G;G)\) containing all pairs \((\alpha , \beta )\) for which \(Ran(\alpha ) \subset Ran(\beta ).\) Furthermore, we adopt the convention that a sum over an empty set of indices equals zero. Finally, for \(A_i \in SA^i(G;H)\) we denote by \(A_i^*\) the diagonalization of \(A_i,\) \( i\in \mathbb N\cup \{0\}.\)
Lemma 1.1
Fix \(N\in \mathbb N\cup \{0\}, \, M\in \mathbb N\cup \{-1, 0\}\) and, if \(M\ge 0,\) let \(I_{p,n-p}, \, 0\le p\le n, \, n\in \{0,\ldots ,M\},\) be finite subsets of \(\mathcal {I}\). Suppose further that H is an Abelian group uniquely divisible by N! and G is an Abelian group. Moreover, let the functions \(\varphi _i:G\rightarrow SA^i(G;H),\, i\in \{0,\ldots ,N\},\) and, if \(M\ge 0,\) \(\psi _{p,n-p,(\alpha ,\beta )}:G\rightarrow SA^p(G;H),\, (\alpha ,\beta )\in I_{p,n-p},\, 0\le p \le n, \, n\in \{0,\ldots ,M\},\) satisfy
where \(R_M(x,y)\) is defined in the following way
for every \(x,y\in G.\) Then \(\varphi _N\) is a polynomial function of degree not greater than m, where
and \(K_s= \bigcup _{p=0}^s I_{p,s-p}\) for each \(s\in \{0,\ldots ,M\},\) if \(M\ge 0.\) Moreover, if \(M=-1\),
then \(m=-1\) and \(\varphi _N\) is the zero function.
While proving our main results in [14, 15], we observed that the behaviour of solutions depends on the sequences \((L_k)_{k\in \mathbb N\cup \{0\}}\) and \((R_k)_{k\in \mathbb N\cup \{0\}} \) given by
and
respectively, for all \(k \in \mathbb {N} \cup \{0\}.\)
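The role of these sequences can be seen by putting \(y = x\) in (1.1). Assuming, as in [15], that the right-hand side of (1.1) consists of terms \((\alpha _j x + \beta _j y) f(c_j x + d_j y)\), and that (F, f) is a pair of monomial functions of orders \(k+1\) and k, rational homogeneity gives (consistently with the value \(L_0 = \sum _{i=1}^2 \gamma _i (a_i+b_i)\) computed later in the proof of Theorem 2.4 — this sketch is our reading of the definitions, to be checked against (1.12) and (1.13)):

```latex
\sum_{i=1}^{n} \gamma_i F\bigl((a_i+b_i)x\bigr)
  = \Bigl(\sum_{i=1}^{n} \gamma_i (a_i+b_i)^{k+1}\Bigr) F(x) = L_k\, F(x),
\qquad
\sum_{j=1}^{m} (\alpha_j+\beta_j)\, x\, f\bigl((c_j+d_j)x\bigr)
  = \Bigl(\sum_{j=1}^{m} (\alpha_j+\beta_j)(c_j+d_j)^{k}\Bigr) x f(x) = R_k\, x f(x),
```

so that (1.1) with \(y = x\) reduces to \(L_k F(x) = R_k\, x f(x)\), which is the starting point of the proof of Theorem 2.1.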
Let us recall that a polynomial function of order at most n, defined on a semigroup S and taking values in a group H, is a mapping \(f:S\longrightarrow H\) satisfying the so-called Fréchet functional equation, that is
for all \(x, h_1, \ldots , h_{n+1}\in S\) (here \(\Delta ^{n+1}_{h_{n+1}h_n\dots h_1}f:= \Delta _{h_{n+1}}\circ \Delta ^n_{h_n\ldots h_1}f, \) and \(\Delta \) is the Fréchet operator, defined by \(\Delta _hf(x)=f(x+h)-f(x)\) for every \(h, x\in S).\) In the case of \(S=H=\mathbb {R}\) we have the following characterization of the polynomial functions (cf. Van der Lijn [18] and Mazur and Orlicz [13], cf. also Fréchet [7]).
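The Fréchet operator is easy to experiment with numerically. The sketch below (an illustration; the particular polynomial and steps are our choice) implements \(\Delta _h\), checks that \(\Delta ^4\) annihilates a degree-3 polynomial, as (1.14) requires, and that \(\Delta ^3\) applied to it yields the constant \(3!\,h_1h_2h_3\) times the leading coefficient.

```python
# The Frechet difference operator: (Delta_h f)(x) = f(x + h) - f(x).
# Iterating it n+1 times with steps h_1, ..., h_{n+1} annihilates every
# polynomial of degree <= n, which is equation (1.14).

def delta(f, h):
    return lambda x: f(x + h) - f(x)

def iterated_delta(f, steps):
    for h in steps:
        f = delta(f, h)
    return f

p = lambda x: 4*x**3 - 2*x**2 + x - 7            # degree-3 polynomial

d4 = iterated_delta(p, [1.5, -0.7, 2.0, 0.3])    # Delta^4 annihilates p
print(all(abs(d4(x)) < 1e-9 for x in [-3.0, 0.0, 2.5]))   # True

# Delta^3 leaves a constant: only the leading term 4x^3 survives,
# giving 4 * 3! * h1 * h2 * h3 = 24 * 1.5 * (-0.7) * 2.0 = -50.4.
d3 = iterated_delta(p, [1.5, -0.7, 2.0])
print(d3(0.0))                                   # approximately -50.4, independent of x
```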
Theorem 1.1
Let \(f:\mathbb {R}\rightarrow \mathbb {R}\) be a polynomial function of order at most m. Then there exist unique \(k\)-additive functions \(A_k: \mathbb {R}^k\rightarrow \mathbb {R},\) \(k \in \{1,\ldots ,m\},\) and a constant \(A_0\) such that
where \(A_k^*\) is the diagonalization of \(A_k.\) Conversely, every function of the shape (1.15) is a polynomial function of order at most m.
Let us note that usually \(A_k, \, k\in \{1,\ldots , m\},\) are not continuous (or regular in any sense). If they are slightly regular (e.g., bounded on an interval, measurable, or monotonic), then they take the form \(A_k^*(x)=c_kx^k\) for every \(x \in \mathbb R,\) where \(c_k\) is a real constant, \(k\in \{1,\ldots , m\}\), and thus f is an ordinary polynomial. But there exist highly irregular solutions of (1.14), even in the case \(n=1\): the discontinuous additive functions dominate the family of additive ones; more information can be found in the book of Kuczma [11].
Let us mention a very important result, due to Székelyhidi, used in the present paper: he proved that every solution of a very general linear equation is a polynomial function (see [17], Theorem 9.5; cf. also Wilson [19]).
Theorem 1.2
Let G be an Abelian semigroup, S an Abelian group, n a positive integer, \(\varphi _i,\psi _i\) additive functions from G to G and let \(\varphi _i(G)\subset \psi _i(G),\;i\in \{1,\ldots ,n\}.\) If the functions \(f,f_i:G\rightarrow S\) satisfy the equation
then f satisfies (1.14).
Székelyhidi's result makes it easier to solve linear equations because it is no longer necessary to deal with each equation separately. Instead, we may formulate results which are valid for large classes of equations. It is even possible to write computer programs which solve functional equations; see the papers of Gilányi [4], Borus and Gilányi [5] and Okeke and Sablik [15].
2 Results
We begin by showing that in general Eq. (1.1) has polynomial functions as solutions. To this aim, rewrite (1.1) in the following form:
which allows us to write the left-hand side in the form
Above we excluded the summands with \(a_i=0=b_i.\) Indeed, such a summand can be omitted. Namely, suppose that \(a_n=b_n=0.\) Let F be a solution of (1.1), and assume that \(n\ge 2\) (otherwise the whole problem becomes trivial). Putting \(x=y=0\) in (1.1), we get
From (2.2) we infer that either \(F(0)=0\) or \(\sum _{i=1}^{n-1} \gamma _i + \gamma _n=0.\) In the former case the constant F(0) disappears, and the left-hand side of (1.1) satisfies our assumptions. In the latter case, if moreover \(\gamma _n=0,\) the situation is analogous. Let us therefore consider the case
Observe that \(\sum _{i=1}^{n-1} \gamma _i \ne 0,\) and
Hence (1.1) may be written in the form
Substituting \(\tilde{F}(z):=F(z)+\frac{\gamma _n}{\sum _{i=1}^{n-1} \gamma _i }F(0)\) we obtain
where \((a_i,b_i)\ne (0,0), i\in \{1,\ldots ,n-1\}.\) From (2.1) we see that there are three essential groups of terms on the left-hand side: the first group contains the summands involving values of F at the points \(a_ix+b_iy,\) where \(a_i \ne 0 \ne b_i;\) the second group contains those summands in which \(b_i=0;\) and the third one those in which \(a_i=0.\) We saw in the paper by Nadhomi et al. (see [14]) that if the first and the third groups are empty, and the second one consists of the two pairs (1, 0) and \((-1,0)\) with corresponding \(\gamma \)'s equal to 1 and \(-1,\) then an arbitrary even function F, together with \(f=0,\) yields a solution to (2.1). Thus in general there is no chance that we obtain polynomiality of F. However, a closer look shows that we can state some positive claims.
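The obstruction just described is easy to check concretely: with the pairs (1, 0) and \((-1, 0)\) and coefficients 1 and \(-1\), the left-hand side collapses to \(F(x) - F(-x)\), which vanishes for every even F, polynomial or not, when \(f = 0\). A minimal sketch with the non-polynomial even function \(F = \cos\):

```python
import math

# Left-hand side of (2.1) with (a_1, b_1) = (1, 0), (a_2, b_2) = (-1, 0)
# and gamma_1 = 1, gamma_2 = -1: it reduces to F(x) - F(-x).
def lhs(F, x, y):
    return 1 * F(1*x + 0*y) + (-1) * F(-1*x + 0*y)

F = math.cos   # even, but not a polynomial function
print(all(abs(lhs(F, x, y)) < 1e-12
          for x in (-2.0, 0.5, 3.1) for y in (0.0, 1.0)))   # True: F solves it with f = 0
```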
Namely, rewrite again (2.1) in the form
and assume that there is a \(j\in \{1,\ldots ,m\}\) such that
Then it is possible to perform the change of variables \(c_jx+d_jy=z, \, x=w.\) It remains to express (x, y) in terms of (z, w) and to rewrite (2.3) in the form
where \( P_{f, \varphi _I,\varphi _{II}, \{\varphi _i: i\in I_1\}}\) is a polynomial function in z and w, with coefficients depending on \(f, \varphi _I, \varphi _{II}\) and \(\varphi _i, i\in I_1.\) We can apply Lemma 1.1 to get the polynomiality of f.
Thus the right-hand side of (2.1) is a polynomial in x and y, say Q(x, y). In other words we have
If \(\varphi _{I} \ne 0\) then we may rewrite the above in the form
whence by induction we obtain that \(\varphi _{I}\) is a polynomial function in x. Actually, assuming that Q is a polynomial of order say k in x, we apply the operator \(\Delta ,\) \(k+1\) times to both sides of Eq. (2.6). We get
where R is a polynomial function in y (which remains after annihilating the x part of Q(x, y)). Now, denoting \(\Delta _{h_1,\ldots ,h_{k+1}}\varphi _{I}\) by \(\overline{\varphi }\) and \(\Delta _{h_1,\ldots ,h_{k+1}} \varphi _i\) by \(f_i,\) as well as \( \tilde{\varphi }_{II}(y)+ R(y)\) by \(\overline{f},\) we get the equation from Székelyhidi's result (see (1.16)). We use it to infer that \(\overline{\varphi }\) is a polynomial function, whence the polynomiality of \(\varphi _{I} =F \) follows.
If we knew that \(\varphi _{I}(x) = DF(x)\) for some constant D, then we would be done. Analogously, if \(\varphi _{II}(y)\ne 0,\) then by a similar argument as above we get the polynomiality of \(\varphi _{II} = F.\) But even if \(\varphi _{I}(x)\ne DF(x)\) and \( \varphi _{II}(y)\ne EF(y)\) for all constants D and E, we can still look at the first summand on the left-hand side of (2.5). Then it is enough to check whether the first sum is non-zero and admits \(z=a_{i_0}x+b_{i_0}y,\ w=x\) for some \(i_0,\) and to rewrite (2.5) in the form
and hence we see, similarly as before, that F has to be a polynomial function.
Therefore, it is enough to assume that
-
1.
There exists a \(j \in \{1,\ldots , m\}\) such that (2.4) holds and
-
2.
\(\varphi _{I}= DF\) or,
-
3.
\(\varphi _{II}= EF\) or,
-
4.
for some \(i_0 \in \{1,\ldots , n\}\) we have \(a_{i_0}\ne 0 \ne b_{i_0},\)
to get polynomiality of both f and F.
Having a result of this kind, it is now enough to assume that the pair of functions (F, f) satisfying Eq. (1.1) consists of monomial functions. Moreover, Eqs. (1.2) and (1.3) suggest that a characteristic feature of Eq. (1.1) is the dependence of the existence of solutions on the sequences given by (1.12) and (1.13), respectively. Hence, we proceed to the next theorem.
Theorem 2.1
Suppose \(\gamma _i, \alpha _j, \beta _j \in \mathbb {R}, \, a_i,b_i, c_j,d_j \in \mathbb {Q}, i \in \{1,\ldots , n \}, j \in \{1,\ldots , m \}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) and \((R_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12) and (1.13) respectively. Assume that \(L_k, R_k \ne 0\) for some \(k\in \mathbb {N}\cup \{0\},\) and Eq. (1.1) is satisfied by the pair \((F,f): \mathbb R\longrightarrow \mathbb R\) of monomial functions of order \(k+1\) and k, respectively.
-
(i)
If \(k=0,\) then \(f=0=F,\) or \(f=A_0\ne 0\) and \(F(x)=\tfrac{R_0}{L_0}A_0x;\) in the latter case necessarily
$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^n \gamma _i a_i= \sum _{j=1}^m \alpha _j, \end{aligned}$$(2.9)and
$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^n \gamma _i b_i= \sum _{j=1}^m \beta _j. \end{aligned}$$(2.10) -
(ii)
If \(k \ne 0,\) then either \(f = F = 0\) is the only solution of (1.1), or f is an arbitrary additive function while F is given by \(F(x)=\tfrac{R_k}{L_k}xf(x),\) \(x\in \mathbb R,\) when the equations below hold
$$\begin{aligned}{} & {} \tfrac{R_k}{L_k}\sum _{i=1}^n \gamma _i a_i^{k+1}= \sum _{j=1}^m \alpha _jc_j^k, \end{aligned}$$(2.11)$$\begin{aligned}{} & {} \tfrac{R_k}{L_k}\sum _{i=1}^n \gamma _i b_i^{k+1}= \sum _{j=1}^m \beta _jd_j^k, \end{aligned}$$(2.12)and
$$\begin{aligned} \tfrac{R_k}{L_k}\sum _{i=1}^n \genfrac(){0.0pt}1{k+1}{p} \gamma _i a_i^pb_i^{k+1-p}= & {} \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p}\beta _j c_j^pd_j^{k-p} \nonumber \\{} & {} + \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p-1} \alpha _jc_j^{p-1}d_j^{k+1-p}, \end{aligned}$$(2.13)for each \(p\in \{1,\ldots , k\}.\) Furthermore, for non-trivial f we see that either
-
(a)
\(\sum _{j=1}^m \beta _j c_j^pd_j^{k-p} = \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p}\) for each \(p\in \{1,\ldots , k\},\) and f is an arbitrary k-monomial function, or
-
(b)
\(\sum _{j=1}^m \beta _j c_j^pd_j^{k-p} \ne \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p}\) for each \(p\in \{1,\ldots , k\},\) and f is necessarily a continuous monomial function of order k and so is F of order \(k+1.\)
Proof
Suppose that \(k = 0.\) Then \(f =\) const \(= A_0\) and F is additive. Putting \(x=y\) in (1.1) we get
i.e.
for every \(x \in \mathbb {R},\) since \(L_0,R_0 \ne 0;\) thus F is a continuous function. Substituting (2.14) into (1.1) we obtain
for all \(x,y\in \mathbb R,\) whence formulae (2.9) and (2.10) easily follow. Observe that it is impossible to have
Indeed, in such a case \(L_0=0,\) which contradicts our assumption.
Suppose now that \(k=1.\) Then \(f = A_1\) is additive and \(F = B_2 ^{*}\) is a quadratic function, in other words, the diagonalization of a biadditive function. Putting \(x=y\) in (1.1), we obtain (taking into account the rational homogeneity of f and F)
whence,
for every \(x \in \mathbb {R}.\) Keeping in mind that \(L_1\ne 0\) and denoting \(E_1 = \tfrac{R_1}{L_1}\) we get
for every \(x\in \mathbb R.\) Substituting the above into (1.1) we obtain
Comparing the terms with the same degrees we obtain
Observe that (2.17) holds if either \(A_1 =0\) or
Similarly, (2.18) holds if either \(A_1 =0\) or
Finally, (2.19) holds if either \(A_1 =0\) or,
Now if \(A_1 =0\) then \(F=0\). Let us consider the non-zero solutions of (1.1). Then all Eqs. (2.20), (2.21) and (2.22) hold. Note that it is impossible that
In fact, in such a situation we would have \(L_1=0,\) which contradicts our assumption. Moreover, substituting (2.22) into (2.19) we get
for all \(x,y \in \mathbb {R}.\) From (2.23) we see that either
which leads to a situation where \(A_1\) can be an arbitrary (in particular discontinuous) additive function and we get that the pair (F, f) is a solution of (1.1), or
for all \(x,y \in \mathbb {R}.\) Putting \(y=1\) in the above equation we have
for every \(x \in \mathbb {R},\) hence f and F are continuous.
Now, let us pass to the situation where \(k\ge 2.\) In general, if \(k\ge 2\) and the pair (F, f) satisfies (1.1) then
for every \(x \in \mathbb {R},\) and hence
for every \(x \in \mathbb {R}.\) Denoting \(E_k =\tfrac{R_k}{L_k},\) we can write (1.1) as
or
whence
or
or
for all \(x, \, y\in \mathbb R.\) Comparing terms of equal degrees we have the following equations
for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now, we observe that if (2.24) holds then either \(A_k=0\) or
Similarly, (2.25) holds if either \(A_k =0\) or
Finally, (2.26) holds if either \(A_k =0\) or
for \(p\in \{1,\ldots , k\}.\) Assume from now on that we are interested in nontrivial solutions of (1.1), that is, when \(A_k \ne 0\) and Eqs. (2.27), (2.28) and (2.29) hold. Observe that it is impossible to have
for each \(p\in \{1,\ldots , k\}.\) Indeed, in such a case we would have \(L_k=0,\) which contradicts our assumption. Now, substituting (2.29) into (2.26) we obtain
for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now from (2.30) we see that either
for each \(p\in \{1,\ldots , k\},\) which leads to a situation where \(A_k\) can be an arbitrary additive function and we get that the pair (F, f) is a solution of (1.1) or
for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now, using (2.31) for \(p\in \{1,\ldots , k\}\) we arrive at
for every \(x,\, y\in \mathbb R,\) in other words, putting \(y=1\) we obtain
for every \(x\in \mathbb R,\) which means that \(A_k\) is continuous for \(k\ge 2;\) this ends the proof. \(\square \)
Remark 2.1
We note here that in Eqs. (1.1) and (1.12), if \(f=0\) and \(k\in \mathbb {N}\cup \{0\}\) with
then F is not necessarily equal to zero. Of course, this does not contradict Theorem 2.1, because there \(L_k \ne 0\) is assumed. Therefore, we state the following propositions.
Proposition 2.2
Let \(\gamma _i \in \mathbb R,\) \(a_i, b_i \in \mathbb Q,\) \(i \in \{1,\ldots , n\}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12). Assume that for \(k=0\)
holds. Then either \(f = F = 0\) is the only solution of (1.1), or
-
(a)
If the pair \(f=0\) and \(F = const = A_0,\) where \(A_0\) is any real number, is a solution to (1.1), then
$$\begin{aligned} \sum _{i=1}^n \gamma _i =0. \end{aligned}$$ -
(b)
If the pair \(f=0\) and \(F = A_1,\) where \(A_1\) is an additive function, is a solution to (1.1), then
$$\begin{aligned} \sum _{i=1}^n \gamma _ia_i = \sum _{i=1}^n \gamma _i b_i =0. \end{aligned}$$
Proof
Suppose that (2.33) holds. Let \(f=0\) and \(F = const = A_0\) where \(A_0\) is any real number. Substituting this in Eq. (1.1) we have
This holds if either \(A_0=0\) or \(\sum _{i=1}^n \gamma _i =0.\) Now for nontrivial solutions of (1.1) we have \(A_0 \ne 0\) and \(\sum _{i=1}^n \gamma _i =0\).
Finally, suppose that (2.33) holds. Let \(f=0\) and \(F = A_1\) be additive. Substituting this in Eq. (1.1) we get (taking into account the rational homogeneity of \(A_1\))
i.e.
Comparing terms of the same degree on both sides of the above equation, we obtain
for all \(x \in \mathbb R,\) and symmetrically
for all \(y \in \mathbb R.\) Both of these equations hold if either \(A_1 = 0\) or \(\sum _{i=1}^n \gamma _i a_i = \sum _{i=1}^n \gamma _i b_i = 0.\) Now for nontrivial solutions of (1.1) we have \(A_1 \ne 0\) and \(\sum _{i=1}^n \gamma _i a_i = \sum _{i=1}^n \gamma _i b_i = 0.\) \(\square \)
Proposition 2.3
Let \(\gamma _i \in \mathbb R,\) \(a_i, b_i \in \mathbb Q,\) \(i \in \{1,\ldots , n\}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12). Assume that for some \(k\in \mathbb {N}\)
holds. Then either \(f = F = 0\) is the only solution of (1.1), or \(f=0\) and \(F= A_{k+1}^*,\) where \(A_{k+1}\) is an arbitrary \((k+1)\)-additive function, when
for each \(p\in \{0,\ldots , k+1\}.\)
Remark 2.2
We note that if \(f=0,\) \(k=0,\) and \(\sum _{i=1}^n \gamma _i =0,\) then \(F = A_0,\) where \(A_0\) is any real number, is also a solution to (1.1).
Remark 2.3
Since we are interested in pairs (F, f) of polynomial functions satisfying (1.1), we mention here that assumptions (2.33), (2.34) and Remark 2.2 are essential when \(f = 0.\) Therefore, if \(f = 0\) and \(k\in \mathbb {N}\cup \{0\}\) with \(L_k \ne 0,\) then \(f = F = 0\) is the only solution to (1.1).
Now, we show that the main results obtained by Koclega-Kulpa et al. [9] (see Theorems 1 and 2 in [9]) are indeed special forms of our results.
Theorem 2.4
(cf. Theorem 1 in [9]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for \(x,y \in \mathbb R\), if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)
Proof
Suppose that the pair (F, f) satisfies Eq. (2.35). Putting \(y = x+y\) in the equation, we get
Now rearranging (2.36) in the form
and applying Lemma 1.1, we get \(I_{0,0} = \{(id, id)\},\) \(I_{0,1} = \{(id, \tfrac{2}{3}id),(id, \tfrac{1}{3}id),(id, id)\},\) \(\psi _{0,0,(id,id)}= F,\) \(\psi _{0,1,( id,\frac{2}{3}id)}= -f,\) \(\psi _{0,1,(id,\frac{1}{3}id)}= -f,\) \(\psi _{0,1,(id,id)}= -f,\) \(\varphi _0 = F,\) \(\varphi _1 = f.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1},\) and \(K_0 \cup K_1 = \{(id, \tfrac{2}{3}id),(id, \tfrac{1}{3}id),(id, id)\}.\) Therefore, \(\varphi _1 = f\) is a polynomial function of degree at most \(m=5,\) i.e.
Observe that (2.35) is a special form of (1.1); thus F is a polynomial function. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1\) we get,
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
then by Theorem 2.1, we infer that the monomial functions F, f are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) Now let \(k=2\) then we have,
and
for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,
for each \(p \in \{1,2\}.\) By Theorem 2.1 we infer that the monomial functions F, f are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=3,\) then we obtain,
and
for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,
for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions (F, f) are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now let \(k=4\) then we get,
and
for each \(p \in \{1,2,3,4\}.\) In particular, taking \(p=1\) we see that
Hence, this leads to \(f = F = 0.\) Finally, if \(k=5,\) then we obtain
and
for each \(p \in \{1,2,3,4,5\}.\) In particular, taking \(p=1\) we see that
Hence, this leads to \(f = F = 0.\) Now, taking into account Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where e is a real number, is also a solution to (2.35), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.35) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d, e \in \mathbb R.\) To finish the proof it suffices to check that these functions satisfy Eq. (2.35). \(\square \)
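Equation (2.35) is not displayed above, but the inner nodes \(x + y/3,\ x + 2y/3\) appearing in (2.36) and the quadrature remark suggest a Simpson \(\tfrac{3}{8}\)-type identity \(F(y)-F(x)=\tfrac{y-x}{8}\bigl[f(x)+3f\bigl(\tfrac{2x+y}{3}\bigr)+3f\bigl(\tfrac{x+2y}{3}\bigr)+f(y)\bigr]\); this is an assumption on our part, to be checked against the original. Under that assumption, the sketch below verifies in exact rational arithmetic that the stated cubic/quartic pair solves the equation, while \(f(t)=t^4\) with its antiderivative does not, matching the \(k=4\) case.

```python
from fractions import Fraction as Fr

# Residual of the ASSUMED Simpson-3/8-type identity (our reconstruction of (2.35)):
# F(y) - F(x) - (y-x)/8 * [f(x) + 3 f((2x+y)/3) + 3 f((x+2y)/3) + f(y)].
def residual(f, F, x, y):
    h = Fr(y - x, 3)
    rule = Fr(y - x, 8) * (f(x) + 3*f(x + h) + 3*f(x + 2*h) + f(y))
    return F(y) - F(x) - rule

# Theorem 2.4 solutions: f cubic, F its antiderivative plus a constant e.
a, b, c, d, e = 2, -1, 5, 7, 3
f = lambda t: a*t**3 + b*t**2 + c*t + d
F = lambda t: Fr(a, 4)*t**4 + Fr(b, 3)*t**3 + Fr(c, 2)*t**2 + d*t + e

pts = [(Fr(0), Fr(1)), (Fr(-2), Fr(3)), (Fr(1), Fr(4)), (Fr(5), Fr(-1))]
print(all(residual(f, F, x, y) == 0 for x, y in pts))   # True: the cubic pair solves it

g = lambda t: t**4                                      # order k = 4 ...
G = lambda t: Fr(1, 5)*t**5                             # ... with G' = g
print(residual(g, G, Fr(0), Fr(1)) != 0)                # True: fails, as in the k = 4 case
```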
Theorem 2.5
(cf. Theorem 2 in [9]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for \(x,y \in \mathbb R\), if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)
Proof
Suppose that the pair (F, f) satisfies Eq. (2.38). Putting \(y = x+y\) in the equation, we get
Now rearranging (2.39) in the form
and applying Lemma 1.1 we get \(I_{0,0} = \{(id, id)\},\) \(I_{0,1} = \{(id, \tfrac{1}{2}id),(id, id)\},\) \(\psi _{0,0,(id,id)}= F,\) \(\psi _{0,1,(id,\frac{1}{2}id)}= -f,\) \(\psi _{0,1,(id,id)}= -f,\) \(\varphi _0 = F,\) \(\varphi _1 = f.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1},\) and \(K_0 \cup K_1 = \{(id, \tfrac{1}{2}id),(id, id)\}.\) Therefore, \(\varphi _1 = f\) is a polynomial function of degree at most \(m=3,\) i.e.
Since (2.38) is a special case of (1.1), we know also that F is a polynomial function. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) Now for \(k=1\) we get
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
then by Theorem 2.1 we infer that the monomial functions F, f are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=2\) then we have
and
for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,
for each \(p \in \{1,2\}.\) By Theorem 2.1 we infer that the monomial functions F, f are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=3,\) then we obtain
and
for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,
for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions F, f are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now, taking into account Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where e is a real number, is also a solution to (2.38), since \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.38) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\) To finish the proof it suffices to check that these functions satisfy Eq. (2.38). \(\square \)
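The ratios \(\tfrac{R_k}{L_k} = 1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}\) computed in this proof can be reproduced mechanically. The sketch below assumes \(L_k = \sum _i \gamma _i(a_i+b_i)^{k+1}\) and \(R_k = \sum _j (\alpha _j+\beta _j)(c_j+d_j)^k\) (consistent with \(L_0 = \sum _{i=1}^2 \gamma _i(a_i+b_i)\) used above) and pairs the nodes \(x,\ x+y/2,\ x+y\) of (2.39) with Simpson-rule weights \(\tfrac{1}{6}, \tfrac{4}{6}, \tfrac{1}{6}\) — an assumption on our part, since (2.38) is not displayed here.

```python
from fractions import Fraction as Fr

# Data read off the substituted equation (2.39), under our assumptions:
# LHS: F(x+y) - F(x); RHS nodes x, x + y/2, x + y with Simpson-type weights on y.
gammas = [(1, Fr(1), Fr(1)), (-1, Fr(1), Fr(0))]    # (gamma_i, a_i, b_i)
rhs    = [(Fr(0), Fr(1, 6), Fr(1), Fr(0)),          # (alpha_j, beta_j, c_j, d_j)
          (Fr(0), Fr(4, 6), Fr(1), Fr(1, 2)),
          (Fr(0), Fr(1, 6), Fr(1), Fr(1))]

def L(k):
    return sum(g * (a + b)**(k + 1) for g, a, b in gammas)

def R(k):
    return sum((al + be) * (c + d)**k for al, be, c, d in rhs)

print([R(k) / L(k) for k in range(4)])   # [1, 1/2, 1/3, 1/4], matching the proof
```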
Remark 2.4
If in (1.1) we take \(n=2, \gamma _1 = 1, \gamma _2 = -1, a_1 = b_2 =1, b_1 = a_2 =0,\) and \(\beta _j = - \alpha _j\) for each \(j \in \{1, \ldots , m\},\) then we get the equation considered by Koclega-Kulpa et al. [9], namely
It is worth noting that (2.35) stems from a well-known quadrature rule used in numerical analysis.
In line with the papers of Koclega-Kulpa and Szostok [8] and Koclega-Kulpa et al. [10], we now consider polynomial functions connected with the Hermite–Hadamard inequality in the class of continuous functions. The Hermite–Hadamard inequality reads
for all \(x,y \in \mathbb R.\) Rewrite now inequality (2.41) in the form
for all \(x,y \in \mathbb R.\) However, if we consider the function \(f(x) = x^3 + x^2 + x\), then we have much more detailed information, namely
Now we may rewrite (2.42) in the form
where \(F' = f\) (because f is continuous). Now combining Eqs. (2.42) and (2.43) we obtain a more general functional equation, namely
for every \(x,y \in \mathbb R,\) \(c_j \in \mathbb Q,\) and \(\beta _j \in \mathbb R\) with \(\sum _{j=1}^m \beta _j =1.\) This equation is related to approximate integration. Note that quadrature rules for approximate integration can be obtained by an appropriate specification of the coefficients in (2.44).
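Inequality (2.41) itself is easy to test numerically for a convex integrand. The sketch below (an illustration only; the function \(f(t)=t^4\) and the interval are our choice) compares the midpoint value, the integral mean and the endpoint average:

```python
def integral_mean(f, x, y, n=100_000):
    # midpoint-rule approximation of (1/(y-x)) * integral_x^y f
    h = (y - x) / n
    return sum(f(x + (i + 0.5) * h) for i in range(n)) * h / (y - x)

f = lambda t: t**4          # convex on the whole real line
x, y = -1.0, 2.0

left   = f((x + y) / 2)                 # value of f at the midpoint
middle = integral_mean(f, x, y)         # integral mean, approximately 2.2 here
right  = (f(x) + f(y)) / 2              # average of the endpoint values

print(left <= middle <= right)          # True: Hermite-Hadamard holds
```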
Remark 2.5
Observe that if in (1.1) we take \(n=2, \gamma _1 = 1, \gamma _2 = -1, a_1 = b_2 =0, b_1 = a_2 =1,\) \(\alpha _j = - \beta _j\) for each \(j \in \{1, \ldots , m\}\) with \(\sum _{j=1}^m \beta _j =1,\) and \(d_j = 1-c_j\) for each \(j \in \{1, \ldots , m\},\) then we obtain Eq. (2.44), which is the functional equation considered by Koclega-Kulpa et al. [10]. We note here that in their paper \(c_j \in \mathbb R.\)
Since (2.44) is a special form of (1.1), we may now use our method to obtain the polynomial functions of the functional equations belonging to class (2.44).
Theorem 2.6
The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for \(x,y \in \mathbb R\), if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)
Proof
Suppose that the pair (F, f) satisfies Eq. (2.45). Putting \(y = x+y\) in the equation and applying Lemma 1.1, we get that f is a polynomial function of degree at most 3. Since (2.45) is a special case of (1.1), we know also that F is a polynomial function. Now we rewrite Eq. (2.45) in the form
and check conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d,\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R,\) further from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) thus, \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1\) then we get
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
thus by Theorem 2.1 we have that the monomial functions F, f are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) Now let \(k=2\) then we have
and
for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,
for each \(p \in \{1,2\}.\) By Theorem 2.1 we have that the monomial functions F, f are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=3,\) then we obtain
and
for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,
for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions F, f are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now by Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where \(e\in \mathbb R,\) is also a solution to (2.45), since \(\displaystyle \sum \nolimits _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.45) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\) To finish the proof it suffices to check that these functions satisfy Eq. (2.45). \(\square \)
Theorem 2.7
(cf. Theorem 4 in [10]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for \(x,y \in \mathbb R\), if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c,d \in \mathbb R.\)
Proof
Suppose that the pair (F, f) satisfies Eq. (2.46). Substituting \(y = x+y\) in the equation and applying Lemma 1.1, we get that f is a polynomial function of degree at most 2. Since (2.46) is a special case of (1.1), we have that F is a polynomial function. Now we rewrite Eq. (2.46) in the form
and check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = c\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) thus, \(F(x) = cx\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1\) then we get
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
therefore, by Theorem 2.1, the monomial functions F, f are continuous; hence \(f(x) = bx\) and \(F(x) = \tfrac{1}{2}bx^2\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=2,\) then we have
and
for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,
for each \(p \in \{1,2\}.\) By Theorem 2.1, the monomial functions F, f are continuous; therefore \(f(x) = ax^2\) and \(F(x) = \tfrac{1}{3}ax^3\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Moreover, by Proposition 2.2 we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then the pair \(f=0\) and \(F = d,\) where \(d\in \mathbb R,\) is also a solution to (2.46), because \(\sum _{i=1}^2 \gamma _i = 0.\) Therefore, the general solution of Eq. (2.46) is given by \(f(x) = ax^2 + bx + c\) and \(F(x) = \tfrac{1}{3}ax^3 + \tfrac{1}{2}bx^2 + cx + d,\) where \(x \in \mathbb R\) and \(a,b,c,d\in \mathbb R.\) To finish the proof, it suffices to check that these functions satisfy Eq. (2.46). \(\square \)
We now give some examples, including known results, which may be solved using our method.
Example 2.1
Assume that the functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy the functional equation
for all \(x,y \in \mathbb R.\)
Now we rearrange (2.47) in the form
for all \(x,y \in \mathbb R.\) Applying Lemma 1.1, we see that f is the zero function. Clearly, (2.47) is a special case of (1.1), so we infer that F is also a polynomial function. Now, checking the conditions of Theorem 2.1 and taking into account Remark 2.3, we see that \(f = F =0\) is the only solution of (2.47).
Example 2.2
(Aczél's result, cf. [1]) The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for all \(x,y \in \mathbb R,\) if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\)
Proof
Now we rewrite (2.49) in the form of Eq. (1.1)
Suppose that the pair (F, f) satisfies (2.50), then rearranging (2.50) in the form
and applying Lemma 1.1, we get \(I_{0,0} = \{(0, id)\},\) \(I_{0,1} = I_{1,0} =\{(id, id)\},\) \(\psi _{0,0,(0,id)}= F,\) \(\psi _{0,1,(id,id)}= -f,\) \(\psi _{1,0,(id,id)}= f,\) \(\varphi _0 = F.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1} \cup I_{1,0},\) and \(K_0 \cup K_1 = \{(0, id),(id, id)\}.\) Therefore, \(\varphi _0 = F\) is a polynomial function of degree at most \(m=2\) i.e.
Since (2.50) is a special form of (1.1), we know that f is also a polynomial function, and by Theorem 2.1 we infer that f is of degree at most 1. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Now, if \(k=1,\) then we get
and
thus, \(\tfrac{R_1}{L_1} = 1\) and also,
hence, by Theorem 2.1, the monomial functions F, f are continuous; therefore \(f(x) = ax\) and \(F(x) = ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Moreover, by Proposition 2.2 we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then the pair \(f=0\) and \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.49), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.49) is given by \(f(x) = ax + b\) and \(F(x) = ax^2 +bx + c,\) where \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\) To finish the proof, it suffices to check that these functions satisfy Eq. (2.49). \(\square \)
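The decomposition in the proof suggests that Eq. (2.49) is the classical Aczél mean-value equation \(F(x) - F(y) = (x-y)\,f(x+y)\); assuming this form (it is not reproduced above, so this is an illustrative sketch rather than the paper's verification), the final check can be carried out symbolically with sympy:

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

# General solution pair from the proof of Example 2.2.
f = lambda t: a*t + b
F = lambda t: a*t**2 + b*t + c

# Presumed form of Eq. (2.49): F(x) - F(y) = (x - y) f(x + y).
residual = F(x) - F(y) - (x - y)*f(x + y)
assert sp.expand(residual) == 0  # identity holds for all x, y
```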
Remark 2.6
We note here that the functional equation considered by Aczél [1] is a special case of Eq. (1.1). In particular, choosing \(n= 2, m=2, \gamma _1 =\beta _1 = 1, \gamma _2 = \alpha _2= -1, \alpha _1 = \beta _2 =0, a_1 = b_2 = 0\) and \(b_1 = a_2 = c_1 = d_1 = c_2 = d_2 =1,\) we get
Example 2.3
(cf. Theorem 5 in [2]) The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for all \(x,y \in \mathbb R,\) if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\)
Proof
Now we rewrite (2.52) in the form of Eq. (1.1)
Suppose that the pair (F, f) satisfies (2.53), then rearranging (2.53) in the form
and applying Lemma 1.1 we get \(I_{0,0} = \{(0, id)\},\) \(I_{1,0} = I_{0,1} =\{(\tfrac{1}{2}id, \tfrac{1}{2}id)\},\) \(\psi _{0,0,(0,id)}= F,\) \(\psi _{1,0,(\frac{1}{2}id, \frac{1}{2}id)}= f,\) \(\psi _{0,1,(\frac{1}{2}id, \frac{1}{2}id)}= -f,\) \(\varphi _0 = F.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{1,0} \cup I_{0,1},\) and \(K_0 \cup K_1 = \{(0, id),(\tfrac{1}{2}id, \tfrac{1}{2}id)\}.\) Therefore, \(\varphi _0 = F\) is a polynomial function of degree at most \(m=2\) i.e.
Clearly, (2.53) is a special form of (1.1), so we know that f is also a polynomial function, and by Theorem 2.1 we infer that f is of degree at most 1. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=1,\) then we get
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
then, by Theorem 2.1, the monomial functions F, f are continuous; therefore \(f(x) = ax\) and \(F(x) = \tfrac{1}{2}ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Moreover, by Proposition 2.2 we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then the pair \(f=0\) and \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.52), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.52) is given by \(f(x) = ax + b\) and \(F(x) = \tfrac{1}{2}ax^2 +bx +c,\) where \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\) To finish the proof, it suffices to check that these functions satisfy Eq. (2.52). \(\square \)
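Similarly, the decomposition in this proof indicates that Eq. (2.52) is the Aczél–Kuczma mean-value equation \(F(x) - F(y) = (x-y)\,f\bigl(\tfrac{x+y}{2}\bigr)\); under this assumption (the displayed equation is not reproduced above), the concluding check can be sketched in sympy:

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

# General solution pair from the proof of Example 2.3.
f = lambda t: a*t + b
F = lambda t: sp.Rational(1, 2)*a*t**2 + b*t + c

# Presumed form of Eq. (2.52): F(x) - F(y) = (x - y) f((x + y)/2).
residual = F(x) - F(y) - (x - y)*f((x + y)/2)
assert sp.expand(residual) == 0  # identity holds for all x, y
```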
Remark 2.7
We note here that, using our method to solve (2.52), we obtain the same result as Aczél and Kuczma (cf. Theorem 5 in [2]).
Example 2.4
The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy
for \(x,y \in \mathbb R\), if and only if
and
for all \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\)
Proof
Suppose that the pair (F, f) satisfies Eq. (2.55). Then, substituting \(x+y\) for \(y\) in the equation and applying Lemma 1.1, we get that f is a polynomial function of degree at most 3. Since (2.55) is a special case of (1.1), we have that F is a polynomial function. Now we rewrite Eq. (2.55) in the form
and check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have
and
Hence, \(\tfrac{R_0}{L_0} =1,\) thus, \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1\) then we get
and
thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,
hence, by Theorem 2.1, the monomial functions F, f are continuous; therefore \(f(x) = ax\) and \(F(x) = \tfrac{1}{2}ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now, if \(k=2,\) then we get
and
for some \(p \in \{1,2\}.\) In particular, taking \(p=1,\) we see that
Hence, this leads to \(f = F = 0.\) Finally, if \(k=3,\) then we obtain
and
for some \(p \in \{1,2,3\}.\) In particular, taking \(p=1,\) we see that
Again by Theorem 2.1, this leads to \(f = F = 0.\) Moreover, by Proposition 2.2 we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then the pair \(f=0\) and \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.55), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.55) is given by \(f(x) = ax + b\) and \(F(x) = \tfrac{1}{2}ax^2 +bx +c,\) where \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\) To finish the proof, it suffices to check that these functions satisfy Eq. (2.55). \(\square \)
Remark 2.8
We note here that (2.55) is the functional equation arising from the geometric problems considered by Alsina et al. [3].
Remark 2.9
Let us observe that when \(n = 3, \gamma _1 = a_1 = b_1 = a_2 = b_3 = 1, a_3 = b_2 = 0,\) \(\gamma _2 = \gamma _3 = -1,\) \(m=2, \, \alpha _1=\beta _2=d_1=c_2=1, \,\) and \( \alpha _2=\beta _1=c_1=d_2=0,\) Eqs. (1.1), (1.2) and (1.3) have the same polynomial solutions (see [14, 15]) as Eq. (1.9) considered by Fechner and Gselmann [6]. In addition, the polynomial solutions of Eqs. (1.2) or (1.3) are also polynomial solutions of Eq. (1.1) but the converse is not necessarily true.
Remark 2.10
To conclude, the main results of [14] (see Theorem 3.3 in [14]) and of [15] (see Theorem 2.2 in [15]) are special forms of our results. Moreover, we mention here that the pairs of functions (F, f) mapping \(\mathbb {R}\) to \(\mathbb {R}\) that satisfy Eqs. (1.1), (1.2) and (1.3), respectively, were obtained by assuming that \(x=y \in \mathbb R.\) From the paper of Okeke and Sablik [15], we see that it is possible to use a computer program to solve functional equations, in particular Eq. (1.3). This leads to the following questions:
-
(a)
What are the polynomial functions F, f mapping \(\mathbb {R}\) to \(\mathbb {R}\) that satisfy Eqs. (1.1), (1.2), (1.3) and (1.9) when \( x \ne y ? \)
-
(b)
Is it possible to formulate a robust computer algorithm which determines the polynomial solutions of Eq. (1.1) and the polynomial solutions of question (a)?
Data availability
The author confirms that all data and materials used for this study are included in this article.
Code availability
Not applicable.
References
Aczél, J.: A mean value property of the derivative of quadratic polynomials—without mean values and derivatives. Math. Mag. 58(1), 42–45 (1985)
Aczél, J., Kuczma, M.: On two mean value properties and functional equations associated with them. Aequationes Math. 38, 216–235 (1989)
Alsina, C., Sablik, M., Sikorska, J.: On a functional equation based upon a result of Gaspard Monge. J. Geom. 85(1–2), 1–6 (2006)
Borus, G. G., Gilányi, A.: Solving systems of linear functional equations with computer. In: 2013 IEEE 4th International Conference on Cognitive Infocommunications (CogInfoCom), pp. 559–562 (2013)
Borus, G.G., Gilányi, A.: Computer assisted solution of systems of two variable linear functional equations. Aequationes Math. 94(4), 723–736 (2020)
Fechner, W., Gselmann, E.: General and alien solutions of a functional equation and of a functional inequality. Publ. Math. Debrecen 80(1–2), 143–154 (2012)
Fréchet, M.: Une définition fonctionelle des polynômes. Nouv. Ann. 49, 145–162 (1909)
Koclȩga-Kulpa, B., Szostok, T.: On some functional equations connected to Hadamard inequalities. Aequationes Math. 75, 119–129 (2008)
Koclȩga-Kulpa, B., Szostok, T., Wa̧sowicz, S.: Some functional equations characterizing polynomials. Tatra Mt. Math. Publ. 44, 27–40 (2009)
Koclȩga-Kulpa, B., Szostok, T., Wa̧sowicz, S.: On functional equations connected with quadrature rules, pp. 725–736 (2009)
Kuczma, M.: An Introduction to the Theory of Functional Equations and Inequalities. Cauchy's Equation and Jensen's Inequality, 2nd edn. (Gilányi, A. (ed.)). Birkhäuser, Basel (2009)
Lisak, A., Sablik, M.: Trapezoidal rule revisited. Bull. Inst. Math. Acad. Sin. 6, 347–360 (2011)
Mazur, S., Orlicz, W.: Grundlegende Eigenschaften der polynomischen Operationen. Stud. Math. 5(50–68), 179–189 (1934)
Nadhomi, T., Okeke, C.P., Sablik, M., Szostok, T.: On a new class of functional equations satisfied by polynomial functions. Aequationes Math. 95, 1095–1117 (2021)
Okeke, C.P., Sablik, M.: Functional equation characterizing polynomial functions and an algorithm. Results Math. 77, 125 (2022). https://doi.org/10.1007/s00025-022-01664-x
Sablik, M.: Taylor’s theorem and functional equations. Aequationes Math. 60, 258–267 (2000)
Székelyhidi, L.: Convolution type functional equations on topological commutative groups. World Scientific Publishing Co. Inc, Teaneck (1991)
Van der Lijn, G.: La définition fonctionnelle des polynômes dans les groupes abéliens. Fund. Math. 33, 42–50 (1939)
Wilson, W.H.: On a certain general class of functional equations. Am. J. Math. 40, 263–282 (1918)
Acknowledgements
We are grateful to the referee for the valuable remarks that made the paper readable and improved its editorial quality.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
The author confirms sole responsibility for the article preparation.
Corresponding author
Ethics declarations
Conflict of interest
No conflict of interest.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Okeke, C.P. Further results on a new class of functional equations satisfied by polynomial functions. Results Math 78, 96 (2023). https://doi.org/10.1007/s00025-023-01877-8
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s00025-023-01877-8
Keywords
- Functional equations
- Polynomial functions
- Fréchet operator
- Monomial functions
- Continuity of monomial functions