Abstract
The aim of the present paper is to obtain Shannon type inequalities using an extended version of Jensen's inequality in the time scales setting. The concept of the differential entropy of a continuous random variable on time scales is introduced, and its bounds are estimated for some particular distributions.
1 Introduction and preliminaries
In recent times, Shannon entropy and the Zipf–Mandelbrot law have been topics of great interest; see, for example, [1, 9, 11, 12]. The concept of Shannon entropy, the central notion of information theory, is sometimes referred to as a measure of uncertainty. Shannon entropy allows one to estimate the average minimum number of bits needed to encode a string of symbols, based on the alphabet size and the frequencies of the symbols.
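As a concrete illustration of this encoding interpretation (a sketch, not taken from the paper; the distributions are arbitrary examples), the entropy in bits of a small distribution can be computed directly:

```python
import math

def shannon_entropy(r, base=2):
    """Shannon entropy of a positive probability distribution r."""
    assert all(p > 0 for p in r) and abs(sum(r) - 1.0) < 1e-12
    return sum(p * math.log(1.0 / p, base) for p in r)

# A uniform distribution over 4 symbols needs 2 bits per symbol on average;
# a skewed distribution is more predictable and therefore cheaper to encode.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))      # 2.0
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]) < 2.0)    # True
```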
The following definition of Shannon entropy is given in [8].
Definition 1
The Shannon entropy of a positive probability distribution \(\mathbf{r} =(r_{1},r_{2},\ldots ,r_{n})\) is defined by
$$ S(\mathbf{r}) = \sum_{i=1}^{n} r_{i} \log \frac{1}{r_{i}}. $$
A fundamental inequality related to the notion of Shannon entropy is the following inequality given in [16]:
$$ \sum_{i=1}^{n} r_{i} \log \frac{1}{r_{i}} \leq \sum_{i=1}^{n} r_{i} \log \frac{1}{f_{i}}, $$
(1) which is valid for all \(r_{i}, f_{i} > 0\) with
$$ \sum_{i=1}^{n} r_{i} = \sum_{i=1}^{n} f_{i} = 1. $$
Equality holds in (1) if and only if \(r_{i} = f_{i}\) for all i. This result, sometimes called the fundamental lemma of information theory, has extensive applications (see [14]). In [13], Matić et al. gave refinements of Shannon's inequality in its discrete and integral forms by presenting upper estimates of the difference between its two sides. In [10], Khalid et al. studied some interesting results related to bounds of the Shannon entropy by using nonincreasing (nondecreasing) sequences of real numbers.
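A quick numerical sanity check of inequality (1) (a hedged sketch with arbitrary illustrative distributions, not from the paper):

```python
import math

def cross_entropy(r, f, base=2):
    """Computes sum r_i * log(1/f_i); equals the Shannon entropy when f == r."""
    return sum(ri * math.log(1.0 / fi, base) for ri, fi in zip(r, f))

r = [0.5, 0.3, 0.2]
f = [0.4, 0.4, 0.2]
H = cross_entropy(r, r)   # left-hand side of (1)
C = cross_entropy(r, f)   # right-hand side of (1)
assert H <= C             # equality would require f_i = r_i for all i
print(C - H)              # the nonnegative gap (Kullback-Leibler divergence)
```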
One of the main approaches to unifying continuous and discrete analysis is time scale calculus, which was founded by the German mathematician Stefan Hilger in 1988. A time scale is an arbitrary nonempty closed subset of the real numbers. For an introduction to the theory of dynamic equations on time scales, see [6]. In [7], Guseinov studied the process of Riemann and Lebesgue integration on time scales. Bohner and Guseinov [4, 5] defined multiple Riemann and multiple Lebesgue integration on time scales and compared the Lebesgue Δ-integral with the Riemann Δ-integral. Various authors have examined integral inequalities on time scales. In [2], Agarwal et al. proved the time scales version of Jensen's inequality. In [18], Wong et al. proved an extended version of Jensen's inequality on time scales. In [3], Anwar et al. derived a series of known inequalities, their extensions, and some new inequalities in the theory of dynamic equations on time scales by applying the theory of isotonic linear functionals. In [15], Mikić and Pečarić obtained lower and upper bounds for the difference in Jensen's inequality and in the Edmundson–Lah–Ribarič inequality in time scales calculus that hold for the class of n-convex functions.
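To see the unification concretely: on \(\mathbb{T} = \mathbb{Z}\) the delta integral over \([a, b)\) collapses to a finite sum, while on \(\mathbb{T} = \mathbb{R}\) it is the ordinary Riemann integral. A hedged numerical sketch (the midpoint rule stands in for the Riemann integral):

```python
def delta_integral_Z(f, a, b):
    # On T = Z the delta integral over [a, b) is the sum f(a) + ... + f(b-1).
    return sum(f(t) for t in range(a, b))

def delta_integral_R(f, a, b, n=100_000):
    # On T = R the delta integral is the ordinary integral (midpoint rule).
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(delta_integral_Z(lambda t: t, 0, 5))                # 0+1+2+3+4 = 10
print(round(delta_integral_R(lambda t: t, 0.0, 5.0), 6))  # 5**2/2 = 12.5
```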
In the following considerations, \(\mathbb{T}\) denotes a time scale.
Definition 2
([6])
A function \(f : \mathbb{T} \rightarrow \mathbb{R}\) is called rd-continuous provided it is continuous at right-dense points of \(\mathbb{T}\) and its left-sided limits exist (finite) at left-dense points of \(\mathbb{T}\). The set of rd-continuous functions \(f : \mathbb{T} \rightarrow \mathbb{R}\) is denoted here by \(C_{rd}\).
Definition 3
([6])
A function \(F : \mathbb{T} \rightarrow \mathbb{R}\) is called an antiderivative of \(f : \mathbb{T} \rightarrow \mathbb{R}\) if \(F^{\Delta }(t) = f(t)\) for all \(t \in \mathbb{T}^{k}\), and the delta integral is then defined by
$$ \int _{a}^{b} f(t)\Delta t = F(b) - F(a). $$
The following theorems are useful in the proof of the main results.
Theorem 1
([6], Existence of antiderivatives)
Every rd-continuous function has an antiderivative.
Theorem 2
([18])
Let \(I \subset \mathbb{R}\) and assume that \(r \in C_{rd}([a, b]_{\mathbb{T}}, \mathbb{R})\) with
$$ \int _{a}^{b} \bigl\vert r(s)\bigr\vert \Delta s > 0, $$
where \(a,b \in \mathbb{T}\). If \(g \in C(I, \mathbb{R})\) is convex and \(\xi \in C_{rd}([a, b]_{\mathbb{T}}, I)\), then
$$ g \biggl(\frac{\int _{a}^{b} \vert r(s)\vert \xi (s)\Delta s}{\int _{a}^{b} \vert r(s)\vert \Delta s} \biggr) \leq \frac{\int _{a}^{b} \vert r(s)\vert g (\xi (s) )\Delta s}{\int _{a}^{b} \vert r(s)\vert \Delta s}. $$
(3)
The inequality in (3) is strict if g is strictly convex.
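On \(\mathbb{T} = \mathbb{Z}\), inequality (3) reduces to the weighted discrete Jensen inequality. The following sketch checks it for the convex function \(g(x) = -\log x\) used below; the weights and values are arbitrary illustrative choices:

```python
import math

r  = [0.2, 1.0, 0.5, 0.3]        # positive weights r(s) on [a, b) ∩ Z
xi = [1.0, 4.0, 2.0, 8.0]        # values of xi(s)
g  = lambda x: -math.log(x)      # convex on (0, inf)

w = sum(r)
lhs = g(sum(ri * xj for ri, xj in zip(r, xi)) / w)   # g of the weighted mean
rhs = sum(ri * g(xj) for ri, xj in zip(r, xi)) / w   # weighted mean of g
assert lhs <= rhs   # inequality (3); strict here since g is strictly convex
print(lhs, rhs)
```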
2 Main results
Throughout the paper, 'log' refers to the logarithm to base b̄ for some fixed \(\bar{b} > 1\). We begin with the following result.
Theorem 3
Let \(r \in C_{rd}([a, b]_{\mathbb{T}}, \mathbb{R}^{+})\) and assume that
where \(a, b \in \mathbb{T}\). If \(\xi , \frac{1}{\xi } \in C_{rd}([a, b]_{\mathbb{T}}, \mathbb{R}^{+})\) are such that \(\int _{a}^{b} r(s)\xi (s)\Delta s < \infty \) and \(\int _{a}^{b}\frac{r(s)}{ \xi (s)}\Delta s < \infty \), then we have
Proof
Use inequality (3) for the convex function \(g(x) = - \log x\), \(x >0 \) to get inequality (4). Replace ξ by \(\frac{1}{\xi }\) in inequality (4), which implies
Now, by adding \(\log (\frac{\int _{a}^{b} r(s)\xi (s)\Delta s}{\int _{a}^{b} r(s) \Delta s} )\) on both sides of (7), we get
which is inequality (5). Inequality (6) is a straightforward outcome of the following inequality given in [13]:
with
□
2.1 Shannon entropy
Let X be a continuous random variable with a nonnegative density function \(r(s)\) on \(\mathbb{T}\) such that \(\int _{a}^{b} r(s)\Delta s = 1\) whenever the integral exists. We then have the following definition.
Definition 4
The nominal differential entropy of X on time scales is defined by
$$ h_{\bar{b}}(X) = \int _{a}^{b} r(s) \log \frac{1}{r(s)}\Delta s. $$
The following result is the time scales extension of the integral Shannon inequality [13, Theorem 18]. Moreover, one can obtain results related to the discrete Shannon entropy by choosing the time scale to be the set of integers with positive probability distributions in the following result.
Theorem 4
Let \(a, b \in \mathbb{T}\), \(a < b\), and assume that \(r, f \in C_{rd}([a, b]_{\mathbb{T}}, \mathbb{R})\) are positive functions with \(\int _{a}^{b} r(s) \Delta s > 0\) and \(\lambda := \int _{a}^{b} f(s) \Delta s < \infty \). Suppose that for \(\bar{b} > 1\) at least one of the following Δ-integrals is finite:
If \(\int _{a}^{b} \frac{r^{2}(s)}{f(s)}\Delta s < \infty \), then
Proof
Apply Theorem 3 with \(\xi (s) = \frac{ f(s)}{ r(s)}\) (\(s \in \mathbb{T}\)) and \(\lambda = \int _{a}^{b} f(s) \Delta s = \int _{a}^{b} r(s)\xi (s)\Delta s < \infty \) to get
Since
therefore, replacing x by \(\frac{ f(s)}{ r(s)}\) in (11) and multiplying both sides by \(r(s)\), we get
thus
Whenever \(Q_{r}\) is finite, \(Q_{r} - J = Q_{f}\) is also finite; conversely, if \(Q_{f}\) is finite, then \(Q_{f} + J = Q_{r}\) is finite as well. Therefore we may write \(J = Q_{r} - Q_{f}\), and the desired result follows. □
Corollary 1
Let \(a, b \in \mathbb{T}\), \(a < b\), and \(r, f \in C_{rd}([a, b]_{\mathbb{T}}, \mathbb{R}^{+})\) with \(\lambda := \int _{a}^{b} f(s) \Delta s < \infty \). Suppose that for \(\bar{b} > 1\) at least one of the following Δ-integrals is finite:
If \(\int _{a}^{b} \frac{r^{2}(s)}{f(s)}\Delta s < \infty \), then
Proof
Use \(\int _{a}^{b} r(s) \Delta s = 1\) in Theorem 4 to get the required result. □
Remark 1
Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 4 with \(\int _{a}^{b} r(s) \Delta s = 1\) to get [13, Theorem 18].
In the proof of our next result, we need the following weighted Grüss type inequality on time scales established by Sarikaya et al. in [17].
Theorem 5
Let \(\xi , g \in C_{rd}\) with \(\xi , g : [a, b]_{\mathbb{T}} \rightarrow \mathbb{R}\) be two Δ-integrable functions on \([a, b]_{\mathbb{T}}\), and let \(r \in C_{rd}\) be a positive function with \(\int _{a}^{b}r(s)\Delta s > 0\). Then, for
we have
Lemma 1
Suppose that the assumptions of Theorem 3 are satisfied. If
then
where \(m, M \in \mathbb{R}_{+}\) and \(\varrho := \frac{M}{m}\). Further, if
for\(\varepsilon > 0\), then
Proof
Inequality (14) is the same as (4). From (13) one gets
Set \(g = \frac{1}{\xi }\) in inequality (12) to get
or
Since log is strictly increasing, we have
Using inequality (19) together with (5) gives (15). However, (16) can be easily derived from the elementary inequality (8). Further, set
or
therefore
which holds if and only if
Since
inequality (18) follows from (14) and it holds whenever ϱ satisfies (17). □
Remark 2
Let \(\mathbb{T} = \mathbb{R}\) with \(\int _{a}^{b} r(s) \Delta s = 1\) in Lemma 1 to get [13, Lemma 2].
Theorem 6
Assume the conditions of Theorem 4, and let
Then
Also, if \(\frac{M}{m}\leq \varPhi (\varepsilon ):= 2\bar{b}^{\varepsilon } - 1 + 2\sqrt{\bar{b}^{\varepsilon }(\bar{b}^{\varepsilon } - 1)}\) for some \(\varepsilon > 0\), then
Proof
Apply Lemma 1 with \(\xi (s) = \frac{f(s)}{r(s)}\) (\(s \in [a, b]_{\mathbb{T}}\)) and
to obtain the desired results. □
Corollary 2
Consider the assumptions of Theorem 6 with \(\int _{a}^{b} r(s)\Delta s = 1 \); then we obtain
Remark 3
Let \(\mathbb{T} = \mathbb{R}\) with \(\int _{a}^{b} r(s) \Delta s = 1\) in Theorem 6 to get [13, Theorem 19].
2.2 Entropy of continuous random variable
In the sequel, we denote the mean and variance of a continuous random variable X by \(\mu _{m} = \int _{a}^{b} sr(s) \Delta s\) and \(v^{2} = \int _{a}^{b} (s-\mu _{m})^{2} r(s) \Delta s\), respectively.
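For \(\mathbb{T} = \mathbb{Z}\) these Δ-integrals are plain sums over a probability mass function; a small hedged example with made-up values:

```python
# Mean and variance as delta integrals on T = Z (illustrative pmf).
r = {0: 0.2, 1: 0.5, 2: 0.3}
mu_m = sum(s * p for s, p in r.items())               # mean
v2 = sum((s - mu_m) ** 2 * p for s, p in r.items())   # variance
print(mu_m, v2)   # approximately 1.1 and 0.49
```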
Theorem 7
Consider a continuous random variable X with density function \(r(s)\) (\(s \in \mathbb{T}\)).
- (a)
If X has a finite mean \(\mu _{m}\) and variance \(v^{2}\) with
$$ \int _{a}^{b} r^{2}(s) \exp \biggl[ \frac{1}{2v^{2}}(s-\mu _{m})^{2} \biggr]\Delta s < \infty , $$then\(h_{\bar{b}}(X)\)is finite and
$$\begin{aligned} 0 \leq & \log (v\sqrt{2\pi e}) - h_{\bar{b}}(X) + \log (\lambda ) \\ \leq & \log \biggl\{ \lambda v\sqrt{2\pi } \int _{a}^{b} r^{2}(s) \exp \biggl[ \frac{1}{2v^{2}}(s-\mu _{m})^{2} \biggr]\Delta s \biggr\} \\ \leq & \frac{1}{\ln \bar{b}} \biggl\{ \lambda v\sqrt{2\pi } \int _{a}^{b} r^{2}(s) \exp \biggl[ \frac{1}{2v^{2}}(s-\mu _{m})^{2} \biggr]\Delta s - 1 \biggr\} , \end{aligned}$$where\(\lambda = \int _{a}^{b}(1/v\sqrt{2\pi }) \exp [-(s-\mu _{m})^{2}/2v^{2}]\Delta s > 0\).
- (b)
Suppose that X has finite mean and \(r(s) = 0\) for all \(s < 0\). If
$$ \int _{0}^{\infty } r^{2}(s) \exp (s/\mu _{m})\Delta s < \infty , $$then\(h_{\bar{b}}(X)\)is finite and
$$\begin{aligned} 0 \leq & \log (\mu _{m} e) - h_{\bar{b}}(X) + \log (\lambda ) \\ \leq & \log \biggl[\lambda \mu _{m} \int _{0}^{\infty } r^{2}(s) \exp (s/\mu _{m})\Delta s \biggr] \\ \leq & \frac{1}{\ln \bar{b}} \biggl[\lambda \mu _{m} \int _{0}^{\infty } r^{2}(s) \exp (s/\mu _{m})\Delta s - 1 \biggr], \end{aligned}$$where\(\lambda = \int _{0}^{\infty }(1/\mu _{m}) \exp (-s/\mu _{m}) \Delta s > 0\).
Proof
-
(a)
Since the variance \(v^{2}\) of X is finite, \(\mu _{m} = \int _{a}^{b} sr(s)\Delta s\) and \(v^{2} = \int _{a}^{b} (s-\mu _{m})^{2} r(s)\Delta s > 0\) are well-defined real numbers, so we can define \(f(s) = (1/v\sqrt{2\pi }) \exp [-(s-\mu _{m})^{2}/2v^{2}]>0\) (\(s \in \mathbb{T}\)) to get \(\lambda = \int _{a}^{b} f(s)\Delta s > 0\) and
$$\begin{aligned} \int _{a}^{b}r(s) \log \frac{1}{f(s)}\Delta s =& \frac{1}{\ln \bar{b}} \int _{a}^{b}r(s) \ln \biggl(\frac{1}{f(s)} \biggr)\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{a}^{b}r(s) \ln \bigl[(v\sqrt{2\pi }) \exp \bigl[(s-\mu _{m})^{2}/2v^{2}\bigr] \bigr]\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{a}^{b}r(s) \bigl[\ln (v\sqrt{2\pi }) + \ln \bigl(\exp \bigl[(s-\mu _{m})^{2}/2v^{2}\bigr] \bigr) \bigr]\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{a}^{b}r(s) \biggl[\ln (v\sqrt{2\pi })+ \frac{(s-\mu _{m})^{2}}{2v^{2}} \biggr]\Delta s \\ =& \frac{1}{\ln \bar{b}} \biggl[\ln (v\sqrt{2\pi }) \int _{a}^{b}r(s)\Delta s + \frac{1}{2v^{2}} \int _{a}^{b}r(s) (s-\mu _{m})^{2} \Delta s \biggr] \\ =& \frac{1}{\ln \bar{b}} \biggl[\ln (v\sqrt{2\pi }) + \frac{1}{2v^{2}} \cdot v^{2} \biggr] \\ =& \frac{1}{\ln \bar{b}} \biggl[\ln (v\sqrt{2\pi })+ \frac{1}{2} \biggr] \\ =& \frac{1}{\ln \bar{b}} \bigl[\ln (v\sqrt{2\pi })+ \ln \sqrt{e} \bigr] \\ =& \frac{1}{\ln \bar{b}} \ln (v\sqrt{2\pi e}) \\ =& \log (v\sqrt{2\pi e}). \end{aligned}$$Now apply Corollary 1 to get the stated result.
-
(b)
Under the given conditions, we have mean \(\mu _{m} = \int _{0}^{\infty } sr(s)\Delta s > 0\), and we may define \(f(s) = (1/\mu _{m}) \exp (-s/\mu _{m})\) (\(s \in [0,\infty )_{\mathbb{T}}\)) such that \(\lambda = \int _{0}^{\infty } f(s)\Delta s > 0\) and
$$\begin{aligned} \int _{0}^{\infty }r(s) \log \frac{1}{f(s)}\Delta s =& \frac{1}{\ln \bar{b}} \int _{0}^{\infty }r(s) \ln \frac{1}{f(s)}\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{0}^{\infty }r(s) \ln \bigl[\mu _{m} \exp (s/\mu _{m})\bigr]\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{0}^{\infty }r(s) \bigl[\ln \mu _{m} + \ln \bigl(\exp (s/\mu _{m})\bigr)\bigr]\Delta s \\ =& \frac{1}{\ln \bar{b}} \int _{0}^{\infty }r(s) \biggl(\ln \mu _{m}+ \frac{s}{\mu _{m}} \biggr)\Delta s \\ =& \frac{1}{\ln \bar{b}} \biggl[\ln \mu _{m} \int _{0}^{\infty }r(s)\Delta s + \frac{1}{\mu _{m}} \int _{0}^{\infty }s r(s)\Delta s \biggr] \\ =& \frac{1}{\ln \bar{b}} \biggl[\ln \mu _{m} + \frac{1}{\mu _{m}} \cdot \mu _{m} \biggr] \\ =& \frac{1}{\ln \bar{b}} (\ln \mu _{m} + \ln e) \\ =& \frac{1}{\ln \bar{b}} \ln (\mu _{m} e) \\ =& \log (\mu _{m} e). \end{aligned}$$Again apply Corollary 1 to obtain the required result. □
Corollary 3
Assume \(\mathbb{T} = \mathbb{R}\) in Theorem 7 to get [13, Theorem 21 a, b].
Remark 4
Theorem 7 shows that \(h_{\bar{b}}(X) \approx \log (\lambda v\sqrt{2\pi e})\) whenever the distribution of X is nearly equal to the Gaussian distribution with variance \(v^{2}\). If the distribution of X is close to the exponential distribution with mean \(\mu _{m}\), then we have \(h_{\bar{b}}(X) \approx \log (\lambda \mu _{m} e)\).
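For \(\mathbb{T} = \mathbb{R}\) (so that \(\lambda \approx 1\) when the interval carries essentially all the probability mass), the two closed forms in the remark can be checked by numerical quadrature. A sketch, assuming base \(\bar{b} = 2\), the entropy integrand \(r(s)\log (1/r(s))\), and a simple midpoint rule:

```python
import math

def diff_entropy(r, a, b, base=2, n=200_000):
    """Numerical differential entropy: integral of r(s) * log(1/r(s)) ds."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        p = r(a + (i + 0.5) * h)
        if p > 0:
            total += p * math.log(1.0 / p, base) * h
    return total

v, mu = 1.3, 2.0
gauss = lambda s: math.exp(-((s - mu) ** 2) / (2 * v * v)) / (v * math.sqrt(2 * math.pi))
expon = lambda s: math.exp(-s / mu) / mu

# h(Gaussian) should be close to log2(v * sqrt(2*pi*e)),
# h(Exponential) close to log2(mu * e).
print(diff_entropy(gauss, mu - 12 * v, mu + 12 * v),
      math.log(v * math.sqrt(2 * math.pi * math.e), 2))
print(diff_entropy(expon, 0.0, 50 * mu), math.log(mu * math.e, 2))
```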
Theorem 8
-
(a)
Under the assumptions of Theorem 7(a), if
$$ 0 < \delta \leq r(s) \exp \biggl[\frac{1}{2v^{2}}(s-\mu _{m})^{2} \biggr] \leq \theta \quad \forall s \in \mathbb{T}, $$then
$$\begin{aligned} 0 \leq & \log (v\sqrt{2\pi e}) - h_{\bar{b}}(X) + \log (\lambda ) \\ \leq & \log \frac{(\theta +\delta )^{2}}{4\delta \theta } \\ \leq & \frac{1}{4\ln \bar{b}}\frac{(\theta -\delta )^{2}}{\delta \theta }, \end{aligned}$$where\(\delta , \theta \in \mathbb{R}_{+}\)and\(\lambda = \int _{a}^{b}(1/v\sqrt{2\pi }) \exp [-(s-\mu _{m})^{2}/2v^{2}]\Delta s > 0\).
-
(b)
Consider the assumptions of Theorem 7(b), if
$$ 0 < \delta \leq r(s) \exp (s/\mu _{m}) \leq \theta \quad \forall s \in [0,\infty )_{\mathbb{T}}, $$then
$$\begin{aligned} 0 \leq & \log (\mu _{m} e) - h_{\bar{b}}(X) + \log (\lambda ) \\ \leq & \log \frac{(\theta +\delta )^{2}}{4\delta \theta } \\ \leq & \frac{1}{4\ln \bar{b}}\frac{(\theta -\delta )^{2}}{\delta \theta }, \end{aligned}$$where\(\lambda = \int _{0}^{\infty }(1/\mu _{m}) \exp (-s/\mu _{m}) \Delta s > 0\).
Proof
-
(a)
In Corollary 2 replace m and M by \(v\sqrt{2\pi }\delta \) and \(v\sqrt{2\pi }\theta \), respectively, and take \(f(s)\) as in the proof of Theorem 7(a).
-
(b)
In Corollary 2 replace m and M by \(\mu _{m}\delta \) and \(\mu _{m}\theta \), respectively, and take \(f(s)\) as in the proof of Theorem 7(b). □
Corollary 4
Consider \(\mathbb{T} = \mathbb{R}\) in Theorem 8 to get [13, Theorem 22 a, b].
The following generalization of Jensen's inequality on time scales, established by Anwar et al. [3], is needed in the proof of our next result.
Theorem 9
Let \(J \subset \mathbb{R}\) be an interval and assume that \(\varPsi \in C(J, \mathbb{R})\) is convex. Consider f to be Δ-integrable on D such that \(f(D) \subset J\), where \(D \subset ([a_{1},b_{1})\cap \mathbb{T}_{1}) \times \cdots \times ([a_{n},b_{n})\cap \mathbb{T}_{n})\) and \(\mathbb{T}_{1},\mathbb{T}_{2},\ldots ,\mathbb{T}_{n}\) are time scales. Moreover, let \(p : D \rightarrow \mathbb{R}\) be Δ-integrable such that \(\int _{D} |p(s)| \Delta s > 0\). Then
$$ \varPsi \biggl(\frac{\int _{D} |p(s)| f(s)\Delta s}{\int _{D} |p(s)|\Delta s} \biggr) \leq \frac{\int _{D} |p(s)| \varPsi (f(s) )\Delta s}{\int _{D} |p(s)|\Delta s}. $$
(21)
The following result is a generalization of Theorem 3.
Proposition 1
Let \(\mathbb{T}_{1},\mathbb{T}_{2},\ldots ,\mathbb{T}_{n}\) be time scales. For \(a_{i}, b_{i} \in \mathbb{T}_{i}\) with \(a_{i} < b_{i}\), \(1\leq i\leq n\), let \(D \subset ([a_{1},b_{1})\cap \mathbb{T}_{1} \times \cdots \times [a_{n},b_{n})\cap \mathbb{T}_{n})\) be Lebesgue Δ-measurable, and let \(\psi : D \rightarrow (0, \infty )\) be a positive Δ-integrable function such that \(\int _{D} |\psi (w)| \Delta w > 0\). If \(\xi , \frac{1}{\xi } : D \rightarrow (0, \infty )\) are two positive Δ-integrable functions such that
then we get
Proof
Use inequality (21) and follow similar steps as in the proof of Theorem 3 to get the stated result. □
Corollary 5
Assume the conditions of Proposition 1 with \(\int _{D} \psi (w)\Delta w = 1\); then we have
Remark 5
Choose \(\mathbb{T} = \mathbb{R}\) with \(\int _{D} \psi (w)\Delta w = 1\) in Proposition 1 to get [13, Proposition 1].
Suppose that X and Z are random variables whose distributions have density functions \(r(s)\) and \(r(z)\) respectively, and let \(r(s,z)\) be the joint density function for \((X, Z)\). Denote
and
Definition 5
The differential b̄-entropy of X on time scales is defined by
By analogy, we may state the following definition.
Definition 6
The differential conditional b̄-entropy of X given Z on time scales is defined by
Theorem 10
Suppose that X and Z are random variables whose distributions have density functions \(r(s)\) and \(r(z)\), respectively, and let \(r(s,z)\) be the joint density function of \((X, Z)\). Let
Then \(h_{\bar{b}}(X|Z)\) exists and
Proof
Apply Corollary 5 with \(n = 2\) and
to get
□
With the help of (23) and (24), we can define the differential mutual information between X and Z on time scales by
where \(h_{\bar{b}}(X)\) and \(h_{\bar{b}}(X|Z)\) are given by (25) and (26), respectively. It is straightforward to see that
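On \(\mathbb{T} = \mathbb{Z}\), the difference \(h_{\bar{b}}(X) - h_{\bar{b}}(X|Z)\) can be computed directly from a joint probability mass function. A hedged sketch with made-up values, taking \(\bar{b} = 2\):

```python
import math

# Hypothetical joint pmf r(s, z) on Z x Z.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

r_s, r_z = {}, {}                      # marginal pmfs of X and Z
for (s, z), p in joint.items():
    r_s[s] = r_s.get(s, 0.0) + p
    r_z[z] = r_z.get(z, 0.0) + p

h_X = sum(p * math.log2(1.0 / p) for p in r_s.values())
# Conditional entropy h(X|Z): average of -log2 of the conditional pmf.
h_X_given_Z = sum(p * math.log2(r_z[z] / p) for (s, z), p in joint.items())

i_XZ = h_X - h_X_given_Z               # mutual information
assert i_XZ >= 0                       # conditioning cannot raise entropy
print(i_XZ)
```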
Theorem 11
Suppose that X and Z are random variables whose distributions have density functions \(r(s)\) and \(r(z)\), respectively, and let \(r(s,z)\) be the joint density function of \((X, Z)\). Define
If
then \(i_{\bar{b}}(X, Z)\) exists and
Proof
Use Corollary 5 with \(n = 2\) and
to get
□
3 Conclusion
In this paper, Shannon type inequalities on time scales have been established by using the time scales version of Jensen's inequality. Bounds have been obtained for some Shannon type inequalities that are directly relevant to information theory. Differential entropy on time scales has been introduced, and its bounds for some particular distributions have been obtained. The given results generalize the corresponding results established by Matić, Pearce, and Pečarić in [13], and the approach may stimulate further research in the theory of Shannon entropy, delta integrals, and generalized convex functions.
References
Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Generalization of the Levinson inequality with applications to information theory. J. Inequal. Appl. 2019, 212 (2019)
Agarwal, R., Bohner, M., Peterson, A.: Inequalities on time scales: a survey. Math. Inequal. Appl. 7, 535–557 (2001)
Anwar, M., Bibi, R., Bohner, M., Pečarić, J.: Integral inequalities on time scales via the theory of isotonic linear functionals. Abstr. Appl. Anal. 2011, Article ID 483595 (2011)
Bohner, M., Guseinov, G.S.: Multiple integration on time scales. Dyn. Syst. Appl. 14, 579–606 (2005)
Bohner, M., Guseinov, G.S.: Multiple Lebesgue integration on time scales. Adv. Differ. Equ. 2006, 026391 (2006)
Bohner, M., Peterson, A.: Dynamic Equations on Time Scales. Birkhäuser, Boston (2001)
Guseinov, G.S.: Integration on time scales. J. Math. Anal. Appl. 285(1), 107–127 (2003)
Horváth, L., Pečarić, Ð., Pečarić, J.: Estimations of f- and Rényi divergences by using a cyclic refinement of the Jensen’s inequality. Bull. Malays. Math. Sci. Soc. 42(3), 933–946 (2019)
Jakšetić, J., Pečarić, Ð., Pečarić, J.: Some properties of Zipf–Mandelbrot law and Hurwitz ξ-function. Math. Inequal. Appl. 21(2), 575–584 (2018)
Khalid, S., Pečarić, Ð., Pečarić, J.: On Shannon and Zipf–Mandelbrot entropies and related results. J. Inequal. Appl. 2019, 99 (2019)
Khan, M.A., Pečarić, Ð., Pečarić, J.: Bounds for Shannon and Zipf–Mandelbrot law entropies. Math. Methods Appl. Sci. 40(18), 7316–7322 (2017)
Latif, N., Pečarić, Ð., Pečarić, J.: Majorization, Csiszar divergence and Zipf–Mandelbrot law. J. Inequal. Appl. 2017, 197 (2017)
Matić, M., Pearce, C.E.M., Pečarić, J.: Shannon’s and related inequalities in information theory. In: Survey on Classical Inequalities, pp. 127–164. Springer, Dordrecht (2000)
McEliece, R.J.: The Theory of Information and Coding. Addison-Wesley, Reading (1977)
Mikić, R., Pečarić, J.: Jensen-type inequalities on time scales for n-convex functions. Commun. Math. Anal. 21(2), 46–67 (2018)
Mitrinović, D.S., Pečarić, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Kluwer, Dordrecht (1993)
Sarikaya, M.Z., Aktan, N., Yildirim, H.: On weighted Čebyšev–Grüss type inequalities on time scales. J. Math. Inequal. 2(2), 185–195 (2008)
Wong, F., Yeh, C., Lian, W.: An extension of Jensen’s inequality on time scales. Adv. Dyn. Syst. Appl. 2(2), 113–120 (2006)
Acknowledgements
The authors wish to thank the anonymous referees for their very careful reading of the manuscript and fruitful comments and suggestions.
Availability of data and materials
Data sharing is not applicable to this paper as no datasets were generated or analyzed during the current study.
Funding
There is no funding for this work.
Contributions
All authors jointly worked on the results and they read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ansari, I., Khan, K.A., Nosheen, A. et al. Shannon type inequalities via time scales theory. Adv Differ Equ 2020, 135 (2020). https://doi.org/10.1186/s13662-020-02587-z