Volume 2024, Issue 1, Article ID 8481103
Research Article
Open Access

Determination of Novel Estimations for the Slater Difference and Applications

Muhammad Adil Khan
Department of Mathematics, University of Peshawar, Peshawar 25000, Pakistan

Hidayat Ullah
Department of Mathematics, University of Peshawar, Peshawar 25000, Pakistan

Tareq Saeed (Corresponding Author)
Financial Mathematics and Actuarial Science (FMAS)-Research Group, Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Zaid M. M. M. Sayed (Corresponding Author)
Department of Mathematics, University of Sáadah, Sáadah 1872, Yemen

Salha Alshaikey
Mathematics Department, Al-Qunfudah University College, Umm Al-Qura University, Mecca, Saudi Arabia

Emad E. Mahmoud
Department of Mathematics and Statistics, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

First published: 30 May 2024
Academic Editor: Daniel Maria Busiello

Abstract

The field of mathematical inequalities has exerted a profound influence across a multitude of scientific disciplines, making it a captivating and expansive domain for research investigation. This article offers estimations for the Slater difference through the application of the concept of convexity. We present a diverse set of applications stemming from the main findings, related to power means, the Zipf–Mandelbrot entropy, and information theory. Our main tools for deriving the estimates for the Slater difference are the triangle inequality, the definition of the convex function, and the well-established Jensen inequality.

1. Introduction

In the last few decades, the notion of a convex function has received remarkable attention from researchers due to its peculiar properties and behavior in dealing with problems [1–3]. Convex functions have a very rich history and have played a great role in improving and addressing issues in a variety of scientific domains, particularly in differential equations [4], coding [5], engineering [6], epidemiology [7], information theory [8], analysis [9, 10], statistics [11], and economics [12]. In the world of optimization, convex functions are widely used for guiding algorithms to efficient and globally optimal solutions [13]. The beauty of convex functions lies not only in their mathematical elegance but in their ability to model and solve real-world problems with a sense of certainty and efficiency [14, 15]. Moreover, the convex function has a highly weighty structure, and due to this fact, many significant and relevant generalizations, extensions, and refinements of convex functions have been established [16, 17]. In the classical manner, the definition of a convex function is stated as follows:

Definition 1. A function ψ: [β1, β2] → ℝ is convex if

ψ(α1γ + (1 − α1)ζ) ≤ α1ψ(γ) + (1 − α1)ψ(ζ) (1)

is true for each γ, ζ ∈ [β1, β2] and α1 ∈ [0, 1]. If (1) is satisfied in the contrary sense, then ψ is termed concave.
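Condition (1) is easy to probe numerically. The following sketch is our illustration (not part of the paper; the helper name is ours): it grid-checks the defining inequality, confirming it for the convex function exp and detecting a violation for the concave function log.

```python
import math

def satisfies_convexity(psi, a, b, samples=40):
    """Grid-check psi(t*x + (1-t)*y) <= t*psi(x) + (1-t)*psi(y) on [a, b]."""
    pts = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    for x in pts:
        for y in pts:
            for t in (0.0, 0.25, 0.5, 0.75, 1.0):
                lhs = psi(t * x + (1 - t) * y)
                rhs = t * psi(x) + (1 - t) * psi(y)
                if lhs > rhs + 1e-12:
                    return False  # convexity violated at (x, y, t)
    return True

exp_is_convex = satisfies_convexity(math.exp, -2.0, 2.0)  # exp is convex
log_is_convex = satisfies_convexity(math.log, 0.5, 4.0)   # log is concave, so this fails
```

Such a finite check is only a sanity test, not a proof, but it is a convenient way to verify candidate functions before applying the results below.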

Because of the exceptional importance and profound nature of convex functions, they have been extensively utilized to thoroughly examine a wide range of issues [18–21]. It has been recognized that the field of inequalities stands out as one of the most extensively utilized domains for convex functions, highlighting their notable preference in this context [22–24]. The utilization of convex functions has made it possible to establish a variety of inequalities, including the Hermite–Hadamard [25–27], Ostrowski [28–30], Favard [31], and majorization [32] inequalities. Jensen's inequality stands out as the most attractive and dynamic among these inequalities. The Jensen inequality serves as the foundation for numerous other inequalities and boasts a wide array of applications across various scientific disciplines [6]. In the traditional style, the Jensen inequality can be presented as follows:

Theorem 2. Presume that ζk ∈ [β1, β2] and γk ≥ 0 for k = 1, 2, …, m with ∑_{k=1}^{m} γk = 1, and further suppose that the function ψ is convex over [β1, β2]. Then

ψ(∑_{k=1}^{m} γkζk) ≤ ∑_{k=1}^{m} γkψ(ζk). (2)

The aforementioned inequality (2) holds in the opposite sense for a concave function ψ.
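The discrete Jensen inequality and its reversal for concave functions can be illustrated numerically. The sketch below is ours (names and data are illustrative): it computes the Jensen gap ∑γkψ(ζk) − ψ(∑γkζk), which should be nonnegative for a convex ψ and nonpositive for a concave ψ.

```python
import math

def jensen_gap(psi, weights, points):
    """Return sum(w*psi(z)) - psi(sum(w*z)) for normalized nonnegative weights."""
    total = sum(weights)
    w = [g / total for g in weights]
    mean = sum(wk * zk for wk, zk in zip(w, points))
    return sum(wk * psi(zk) for wk, zk in zip(w, points)) - psi(mean)

gamma = [0.2, 0.5, 0.3]
zeta = [1.0, 2.5, 4.0]
gap_convex = jensen_gap(math.exp, gamma, zeta)   # nonnegative: exp is convex
gap_concave = jensen_gap(math.log, gamma, zeta)  # nonpositive: log is concave
```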

The subsequent theorem provides the integral representation of (2).

Theorem 3. Let ψ: I → ℝ be a convex function and the functions Z1, g1: [β1, β2] → I be integrable with Z1 ≥ 0. If ψ∘g1 is integrable and ∫_{β1}^{β2} Z1(s)ds > 0, then

ψ(∫_{β1}^{β2} Z1(s)g1(s)ds / ∫_{β1}^{β2} Z1(s)ds) ≤ ∫_{β1}^{β2} Z1(s)(ψ∘g1)(s)ds / ∫_{β1}^{β2} Z1(s)ds. (3)

For a concave function ψ, (3) holds in the contrary direction.

Another intriguing inequality, which has also been deduced using convex functions, is the Slater inequality, introduced by Slater [33] in 1981. This inequality is often considered a companion to the well-known Jensen inequality. Indeed, the Slater inequality can be precisely stated as follows.

Theorem 4. Presume that the function ψ is increasing and differentiable on (β1, β2) and ζk ∈ (β1, β2), γk ≥ 0 for all k ∈ {1, 2, …, m} with ∑_{k=1}^{m} γk = 1 and ∑_{k=1}^{m} γkψ′(ζk) > 0. If ψ is convex, then

∑_{k=1}^{m} γkψ(ζk) ≤ ψ(∑_{k=1}^{m} γkψ′(ζk)ζk / ∑_{k=1}^{m} γkψ′(ζk)). (4)

In 1985, Pečarić [34] replaced the monotonicity assumption with the condition that ζ̄ ≔ ∑_{k=1}^{m} γkψ′(ζk)ζk / ∑_{k=1}^{m} γkψ′(ζk) ∈ (β1, β2) and obtained a generalization of the Slater inequality, which is formally stated as follows:

Theorem 5. Let ψ: (β1, β2) → ℝ be a convex function and ζk ∈ (β1, β2), γk ≥ 0 for all k ∈ {1, 2, …, m} such that ∑_{k=1}^{m} γk = 1 and ∑_{k=1}^{m} γkψ′(ζk) ≠ 0. If ζ̄ ∈ (β1, β2), then (4) is true.
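A quick numerical check of the Slater inequality is possible. The sketch below is our illustration (names and data are ours): for the increasing convex function ψ = exp, it forms the Slater point ∑γkψ′(ζk)ζk / ∑γkψ′(ζk) and verifies that the resulting Slater difference is nonnegative.

```python
import math

def slater_point(dpsi, weights, points):
    """zeta_bar = sum(g * psi'(z) * z) / sum(g * psi'(z)); denominator assumed nonzero."""
    num = sum(g * dpsi(z) * z for g, z in zip(weights, points))
    den = sum(g * dpsi(z) for g, z in zip(weights, points))
    return num / den

gamma = [0.1, 0.4, 0.5]  # weights summing to 1
zeta = [0.5, 1.0, 2.0]
zbar = slater_point(math.exp, gamma, zeta)  # psi = exp, so psi' = exp as well
lhs = sum(g * math.exp(z) for g, z in zip(gamma, zeta))
slater_difference = math.exp(zbar) - lhs    # nonnegative by the Slater inequality
```

The quantity `slater_difference` is exactly the gap that the main results of this article estimate from above.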

Upon the establishment of the aforementioned two inequalities, a new avenue of research emerged for scholars. Consequently, a wide array of generalizations, enhancements, extensions, and refinements of the renowned Slater inequality have been developed. In 2006, Bakula and collaborators [35] introduced significant modifications of Slater's inequality designed for the categories of m-convex and (α, m)-convex functions, together with additional related results for these functions. Adil Khan and Pečarić achieved an enhancement and reversal of the Slater inequality through the utilization of differentiable convex functions [36], and obtained further related results for such functions. Dragomir [37] derived a set of novel Slater-type inequalities for convex functions defined on a general linear space, accompanied by discussions of their applications, particularly in the context of norms and divergences. Bakula et al. [38] used convex functions and acquired some general inequalities of the Jensen–Steffensen type; moreover, they established variants of the Slater inequality as special cases of the acquired general inequalities. Adil Khan et al. [39] presented some new refinements of the Slater inequality in both continuous and discrete versions by taking convex functions. Delavar and Dragomir [40] discussed some elementary inequalities for the class of η-convex functions, examined further inequalities for η-convex differentiable functions, and proved Jensen, Slater, and other related inequalities for the aforesaid classes of functions while utilizing their established fundamental inequalities. You et al. [41] introduced improvements of the Slater inequality by making use of 4-convex functions, and further explored the applications of these improvements in the domains of power means and information theory. Adil Khan et al. [42] derived bounds for the Slater gap pertaining to functions that exhibit convexity when the absolute value of their second derivatives is raised to a positive exponent, and discussed some applications of the main results.

Remark 6. Throughout this article, by Slater difference, we shall mean the difference between the right and the left sides of the inequality (4).

The main objective of this note is to establish some new bounds for the Slater difference while utilizing the notion of convexity. This article is organized as follows:
  • In Section 2, bounds for the Slater difference shall be established

  • In Section 3, consequences of the main results shall be given for the power means

  • In Section 4, the applications of the main results shall be discussed in information theory

  • In Section 5, the applications for the Zipf–Mandelbrot entropy shall be presented

2. Main Results

The focus of this section revolves around establishing bounds for the Slater difference. We will derive the desired bounds by applying the concepts of convexity, the triangle inequality, and the well-known Jensen inequality. Let’s start this section by introducing the following theorem, which establishes a bound for the Slater difference using the principles of convexity and the widely recognized triangle inequality.

Theorem 7. Assume that ψ: (a, b) → ℝ is an arbitrary function such that ψ′ exists, and ζk ∈ (a, b), γk ≥ 0 for each k ∈ {1, 2, …, m} with ∑_{k=1}^{m} γk = 1. Also, let ∑_{k=1}^{m} γkψ′(ζk) ≠ 0 and ζ̄ ≔ ∑_{k=1}^{m} γkψ′(ζk)ζk / ∑_{k=1}^{m} γkψ′(ζk) ∈ (a, b). If |ψ′| is convex, then

|ψ(ζ̄) − ∑_{k=1}^{m} γkψ(ζk)| ≤ (1/2) ∑_{k=1}^{m} γk|ζ̄ − ζk|(|ψ′(ζ̄)| + |ψ′(ζk)|). (5)

Proof. Without loss of generality, suppose that ζ̄ ≠ ζk for each k ∈ {1, 2, …, m}. Now, using integration, we have

ψ(ζ̄) − ψ(ζk) = ∫_{ζk}^{ζ̄} ψ′(σ)dσ, (6)

which implies that

ψ(ζ̄) − ∑_{k=1}^{m} γkψ(ζk) = ∑_{k=1}^{m} γk ∫_{ζk}^{ζ̄} ψ′(σ)dσ. (7)

Now, by taking the absolute value of (7), applying the triangle inequality, and substituting σ = tζ̄ + (1 − t)ζk, we derive

|ψ(ζ̄) − ∑_{k=1}^{m} γkψ(ζk)| ≤ ∑_{k=1}^{m} γk|ζ̄ − ζk| ∫_0^1 |ψ′(tζ̄ + (1 − t)ζk)|dt. (8)

By leveraging the convex nature of |ψ′| on the right-hand side of (8), we establish

|ψ(ζ̄) − ∑_{k=1}^{m} γkψ(ζk)| ≤ ∑_{k=1}^{m} γk|ζ̄ − ζk| ∫_0^1 (t|ψ′(ζ̄)| + (1 − t)|ψ′(ζk)|)dt = (1/2) ∑_{k=1}^{m} γk|ζ̄ − ζk|(|ψ′(ζ̄)| + |ψ′(ζk)|), (9)

which was needed.

In the forthcoming theorem, we elaborate on the integral representation of (5).

Theorem 8. Suppose that ψ: (c, d) → ℝ is a differentiable function and g1, K1: [a, b] → (c, d) are integrable functions with K1 ≥ 0. Additionally, presume that ∫_a^b K1(s)ds > 0, ∫_a^b K1(s)ψ′(g1(s))ds ≠ 0, and ḡ ≔ ∫_a^b K1(s)ψ′(g1(s))g1(s)ds / ∫_a^b K1(s)ψ′(g1(s))ds ∈ (c, d). If |ψ′| is convex and ψ∘g1 is integrable, then

|ψ(ḡ) − ∫_a^b K1(s)ψ(g1(s))ds / ∫_a^b K1(s)ds| ≤ (1/(2∫_a^b K1(s)ds)) ∫_a^b K1(s)|ḡ − g1(s)|(|ψ′(ḡ)| + |ψ′(g1(s))|)ds. (10)

In the succeeding theorem, we utilize the famous Jensen inequality to get an estimate for the Slater difference.

Theorem 9. Presume that the function ψ: (a, b) → ℝ is differentiable and ζk ∈ (a, b), γk ≥ 0 for each k ∈ {1, 2, …, m} with ∑_{k=1}^{m} γk = 1. Further, suppose that ∑_{k=1}^{m} γkψ′(ζk) ≠ 0 and ζ̄ ≔ ∑_{k=1}^{m} γkψ′(ζk)ζk / ∑_{k=1}^{m} γkψ′(ζk) ∈ (a, b). If |ψ′| is concave, then

|ψ(ζ̄) − ∑_{k=1}^{m} γkψ(ζk)| ≤ ∑_{k=1}^{m} γk|ζ̄ − ζk||ψ′((ζ̄ + ζk)/2)|. (11)

Proof. By applying the renowned Jensen inequality, for the concave function |ψ′|, on the right side of (8), we get

∑_{k=1}^{m} γk|ζ̄ − ζk| ∫_0^1 |ψ′(tζ̄ + (1 − t)ζk)|dt ≤ ∑_{k=1}^{m} γk|ζ̄ − ζk| |ψ′(∫_0^1 (tζ̄ + (1 − t)ζk)dt)|. (12)

By evaluating the integrals in (12), we acquire (11).

The continuous version of (11) is verbalized in the next theorem.

Theorem 10. Let ψ: (c, d) → ℝ be a function such that ψ′ exists and K1, g1: [a, b] → (c, d) be integrable functions with K1 ≥ 0. Further, assume that ∫_a^b K1(s)ds > 0, ∫_a^b K1(s)ψ′(g1(s))ds ≠ 0, and ḡ ≔ ∫_a^b K1(s)ψ′(g1(s))g1(s)ds / ∫_a^b K1(s)ψ′(g1(s))ds ∈ (c, d). If |ψ′| is concave and ψ∘g1 is integrable, then

|ψ(ḡ) − ∫_a^b K1(s)ψ(g1(s))ds / ∫_a^b K1(s)ds| ≤ (1/∫_a^b K1(s)ds) ∫_a^b K1(s)|ḡ − g1(s)||ψ′((ḡ + g1(s))/2)|ds. (13)

3. Applications for the Power Means

In recent decades, the power means have been applied very widely in several fields of science for the resolution of numerous problems [43–46]. The power means have a rich history, and a large body of literature is devoted to them [47, 48]. Many inequalities were initially established for the aforementioned means, and these inequalities have since been generalized, extended, and refined by numerous scientists using various methods and techniques [5, 11]. The aforementioned means have achieved a dominant place in the scientific community as a result of their great importance and many applications in different areas.

In the present section, we will introduce several relationships concerning power means with the assistance of our main findings. To start, let’s review the definition of a power mean.

Definition 11. The power mean of order r ∈ ℝ is defined for arbitrary positive tuples z1 = (p1, p2, …, pm) and z2 = (x1, x2, …, xm) as follows:

M_r(z1; z2) = (∑_{k=1}^{m} pk xk^r / ∑_{k=1}^{m} pk)^{1/r} for r ≠ 0, and M_0(z1; z2) = (∏_{k=1}^{m} xk^{pk})^{1/∑_{k=1}^{m} pk}.
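A direct implementation helps fix the definition. The sketch below is ours (the function name is illustrative): it computes weighted power means, treating r = 0 as the weighted geometric mean, and checks the classical fact that M_r is nondecreasing in the order r.

```python
import math

def power_mean(r, p, x):
    """Weighted power mean of order r; the r = 0 case is the geometric mean."""
    total = sum(p)
    if r == 0:
        return math.exp(sum(pk * math.log(xk) for pk, xk in zip(p, x)) / total)
    return (sum(pk * xk ** r for pk, xk in zip(p, x)) / total) ** (1.0 / r)

p = [1.0, 2.0, 3.0]
x = [0.5, 1.5, 4.0]
means = [power_mean(r, p, x) for r in (-2, -1, 0, 1, 2)]
# Power means are nondecreasing in the order r.
monotone = all(a <= b + 1e-12 for a, b in zip(means, means[1:]))
```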

Now, we are in the position to give the first result for the power means.

Corollary 12. Presume that the tuples z1 = (p1, p2, …, pm) and z2 = (x1, x2, …, xm) are positive and also consider that r, t ∈ ℝ∖{0} with t < r.

  • (i)

    If r, t > 0, then

    ()

  • (ii)

    If r, t < 0 with t/r > 2, then

    ()

  • (iii)

    If r > 0 and t < 0, then

()

Proof.

  • (i)

    Let ψ(σ) = σ^{t/r}, σ ∈ (0, ∞). By differentiation, we have ψ′′(σ) = (t/r)((t/r) − 1)σ^{(t/r)−2} and |ψ′(σ)|′′ = |t/r|((t/r) − 1)((t/r) − 2)σ^{(t/r)−3}. Clearly, ψ′′(σ) < 0 and |ψ′(σ)|′′ > 0 on (0, ∞) for 0 < t < r, which confirms the concavity of ψ and the convexity of |ψ′|. Hence, using (5) for ψ(σ) = σ^{t/r} and ζk = xk^r, we get (14).

  • (ii)

    Clearly, for the mentioned conditions, both ψ(σ) and |ψ′(σ)| are convex functions. Therefore, by using ψ(σ) = σ^{t/r} and ζk = xk^r in (5), we deduce (15).

  • (iii)

    The functions ψ(σ) and |ψ′(σ)| are both convex on (0, ∞) for r > 0 and t < 0. Therefore, putting ψ(σ) = σ^{t/r} and ζk = xk^r in (5), we acquire (16).

The subsequent corollary establishes a linkage related to power means, illustrating an implementation of Theorem 9.

Corollary 13. Suppose that z1 = (p1, p2, …, pm) and z2 = (x1, x2, …, xm) are arbitrary m-tuples such that pk, xk ∈ (0, ∞) for k ∈ {1, 2, …, m}. Then, for any negative real numbers r and t with t < r and t/r ∈ (1, 2), we have the following relation:

()

Proof. Let ψ(σ) = σ^{t/r}, σ > 0. Then, certainly, ψ is convex and |ψ′| is concave under the given conditions. Therefore, taking ψ(σ) = σ^{t/r} and ζk = xk^r in (11), we obtain (17).

The below relation for the power means is acquired with the support of Theorem 7.

Corollary 14. Let z1 = (p1, p2, …, pm) and z2 = (x1, x2, …, xm) be any positive tuples. Then,

()

Proof. Consider ψ(σ) = −ln σ, σ > 0; then undoubtedly both ψ and |ψ′| are convex. Therefore, taking ψ(σ) = −ln σ in (5), we get (18).

Another consequence of Theorem 7 for the power means is discussed in the following corollary.

Corollary 15. Let the conditions of Corollary 14 be valid. Then,

()

Proof. Let ψ(σ) = exp σ. Then, clearly, both ψ and |ψ′| are convex functions. Therefore, utilizing (5) for ψ(σ) = exp σ and ζk = ln xk, we receive (19).

Remark 16. The continuous forms of the aforementioned inequalities for the power means can easily be acquired by replacing the said tuples with integrable functions and applying Theorems 8 and 10 instead of Theorems 7 and 9.

4. Applications in Information Theory

A modern branch of statistics and probability theory with a very rich history is information theory [2, 11, 49, 50]. Information theory is one of the most interesting areas for researchers as a result of its massive applications in diverse fields [11, 51, 52]. There exist many relations in the literature in the form of inequalities for divergences, entropies, and distances [9, 53]. In the current section, we shall also work in information theory and add some interesting results to this area. Here, we provide relations for the Csiszár and Kullback–Leibler divergences, the Shannon entropy, and the Bhattacharyya coefficient. Now, to proceed, let us start with the definition of the Csiszár divergence.

Definition 17. Let z1 = (γ1, γ2, …, γm), z2 = (ζ1, ζ2, …, ζm) be positive m-tuples and ψ: (0, ∞) → ℝ be any function. Then, the Csiszár divergence is defined by

Cψ(z1, z2) = ∑_{k=1}^{m} γkψ(ζk/γk).
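The definition translates directly into code. The sketch below is our illustration (names and data are ours): it evaluates the Csiszár divergence for two standard choices of ψ, recovering the Kullback–Leibler and chi-square divergences as special cases.

```python
import math

def csiszar_divergence(psi, z1, z2):
    """C_psi(z1, z2) = sum_k gamma_k * psi(zeta_k / gamma_k)."""
    return sum(g * psi(z / g) for g, z in zip(z1, z2))

gamma = [0.2, 0.3, 0.5]
zeta = [0.4, 0.4, 0.2]
kl = csiszar_divergence(lambda u: -math.log(u), gamma, zeta)      # Kullback-Leibler
chi2 = csiszar_divergence(lambda u: (u - 1.0) ** 2, gamma, zeta)  # chi-square
```

Both divergences are nonnegative here because the generating functions are convex and the two tuples are probability distributions.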

The forthcoming theorem demonstrates how Theorem 7 can be applied to get a bound for the Csiszár divergence.

Theorem 18. Presume that ψ is any function over (0, ∞) such that ψ′ exists and |ψ′| is convex. Further, assume that z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm) are positive tuples; then

()

Proof. To deduce (20), take γk as the weights and ζk/γk as the points in (5).

The forthcoming theorem is a direct application of Theorem 9 to the Csiszár divergence.

Theorem 19. Suppose that ψ is any function over (0, ∞) such that ψ′ exists and |ψ′| is concave. Also, assume that z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm) are positive m-tuples; then

()

Proof. Inequality (21) can be straightforwardly derived by taking γk as the weights and ζk/γk as the points in (11).

The Shannon entropy is defined as follows.

Definition 20. Let z1 = (γ1, γ2, …, γm) be a positive probability distribution. Then, the Shannon entropy is given by

S(z1) = −∑_{k=1}^{m} γk log γk.
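As a small numerical companion (our sketch, not from the paper), the Shannon entropy with natural logarithm is maximized by the uniform distribution, which is easy to confirm:

```python
import math

def shannon_entropy(dist):
    """S(z1) = -sum_k gamma_k * log(gamma_k), natural logarithm."""
    return -sum(g * math.log(g) for g in dist)

h_uniform = shannon_entropy([0.25] * 4)           # equals log(4), the maximum for 4 outcomes
h_skewed = shannon_entropy([0.7, 0.1, 0.1, 0.1])  # strictly smaller
```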

In the following corollary, we gain a bound for the Shannon entropy as an application of Theorem 7.

Corollary 21. Let z1 = (γ1, γ2, …, γm) be a probability distribution. Then,

()

Proof. Consider ψ(σ) = −log σ, σ > 0; then, by successive differentiation of the chosen function, we receive ψ′′(σ) = σ^{−2} and |ψ′(σ)|′′ = 2σ^{−3}. Clearly, from the above expressions, we conclude that ψ′′(σ), |ψ′(σ)|′′ > 0 for σ > 0, which shows that ψ and |ψ′| are convex with the desired conditions. Therefore, applying inequality (20) with ψ(σ) = −log σ and ζk = 1 for each k ∈ {1, 2, …, m}, we deduce (22).

Next, we give the definition of the Kullback–Leibler divergence.

Definition 22. For arbitrary positive probability distributions z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm), the Kullback–Leibler divergence is given by

KL(z1, z2) = ∑_{k=1}^{m} γk log(γk/ζk).
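This divergence is the Csiszár divergence generated by ψ(σ) = −log σ. The sketch below is our illustration (names and data are ours); it confirms the two basic properties used throughout this section: nonnegativity (Gibbs' inequality) and vanishing for identical distributions.

```python
import math

def kl_divergence(z1, z2):
    """KL(z1, z2) = sum_k gamma_k * log(gamma_k / zeta_k)."""
    return sum(g * math.log(g / z) for g, z in zip(z1, z2))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d = kl_divergence(p, q)       # nonnegative (Gibbs' inequality)
d_self = kl_divergence(p, p)  # zero for identical distributions
```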

By taking a particular convex function in Theorem 18, we deduce the following bound for the Kullback–Leibler divergence.

Corollary 23. Let z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm) be probability distributions such that γk, ζk > 0 for all k ∈ {1, 2, …, m}. Then,

()

Proof. Consider ψ(σ) = −log σ, σ > 0; then obviously both ψ and |ψ′| are convex. Therefore, putting ψ(σ) = −log σ in (20), we acquire (23).

The following is the definition of the Bhattacharyya coefficient.

Definition 24. Let z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm) be probability distributions such that γk, ζk > 0 for all k ∈ {1, 2, …, m}. Then, the Bhattacharyya coefficient is defined by

B(z1, z2) = ∑_{k=1}^{m} √(γkζk).
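A short numerical illustration (ours; names and data are illustrative) shows the two properties that make the coefficient a measure of overlap: it lies in (0, 1] by the Cauchy–Schwarz inequality and equals 1 exactly when the distributions coincide.

```python
import math

def bhattacharyya_coefficient(z1, z2):
    """B(z1, z2) = sum_k sqrt(gamma_k * zeta_k)."""
    return sum(math.sqrt(g * z) for g, z in zip(z1, z2))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
b = bhattacharyya_coefficient(p, q)       # in (0, 1] by Cauchy-Schwarz
b_same = bhattacharyya_coefficient(p, p)  # equals 1 for identical distributions
```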

In the following corollary, we demonstrate a further implementation of Theorem 7 by substituting ψ(σ) = −√σ into (20).

Corollary 25. Assume that z1 = (γ1, γ2, …, γm) and z2 = (ζ1, ζ2, …, ζm) are arbitrary probability distributions such that γk, ζk > 0 for all k ∈ {1, 2, …, m}, then

()

Proof. Consider ψ(σ) = −√σ; then ψ′′(σ) = (1/4)σ^{−3/2} and |ψ′(σ)|′′ = (3/8)σ^{−5/2}. Clearly, ψ′′, |ψ′|′′ > 0, which confirms the convexity of both ψ and |ψ′| on (0, ∞). Therefore, inequality (24) can easily be deduced by putting ψ(σ) = −√σ in (20).

Remark 26. Similarly, the integral forms of the relations for Csiszár divergence and its particular cases can easily be acquired by taking integrable functions instead of tuples and using Theorems 8 and 10 as substitutes for Theorems 7 and 9.

5. Applications for the Zipf–Mandelbrot Entropy

The Zipf–Mandelbrot entropy is one of the most consequential concepts and has driven many new methodologies based on theoretical physics and mathematics [54]. With the help of this entropy, various questions have been answered and many problems have been explained using its characteristics and behavior [55, 56]. This entropy has recorded significant developments in the last century as a result of its diverse applications [57]. This section is dedicated to presenting various estimates for the Zipf–Mandelbrot entropy as applications of the main findings. Before going forward, we first present some basic notions related to this entropy.

The following function represents the probability mass function of the famous Zipf–Mandelbrot law:

f(k; m, α, l) = 1/((k + α)^l Qm,α,l),

k ∈ {1, 2, …, m}, m ∈ {1, 2, ⋯}, α ≥ 0, l > 0, and Qm,α,l is the generalized harmonic number given by

Qm,α,l = ∑_{j=1}^{m} 1/(j + α)^l.

The Zipf–Mandelbrot entropy is defined as follows:

Z(Q; α, l) = (l/Qm,α,l) ∑_{k=1}^{m} ln(k + α)/(k + α)^l + ln Qm,α,l.
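These formulas can be checked numerically. The sketch below is our illustration (function names and parameter values are ours): it builds the Zipf–Mandelbrot probability mass function and confirms both that it sums to 1 and that the Zipf–Mandelbrot entropy coincides with the Shannon entropy of that distribution.

```python
import math

def zm_pmf(m, alpha, l):
    """Zipf-Mandelbrot law: f(k) = 1 / ((k + alpha)**l * Q_{m,alpha,l})."""
    q = sum(1.0 / (j + alpha) ** l for j in range(1, m + 1))
    return [1.0 / ((k + alpha) ** l * q) for k in range(1, m + 1)]

def zm_entropy(m, alpha, l):
    """Z = (l / Q) * sum ln(k + alpha) / (k + alpha)**l + ln Q."""
    q = sum(1.0 / (j + alpha) ** l for j in range(1, m + 1))
    s = sum(math.log(k + alpha) / (k + alpha) ** l for k in range(1, m + 1))
    return (l / q) * s + math.log(q)

pmf = zm_pmf(10, 1.5, 1.2)
total = sum(pmf)                              # 1 up to rounding
shannon = -sum(f * math.log(f) for f in pmf)  # Shannon entropy of the law
z_entropy = zm_entropy(10, 1.5, 1.2)          # agrees with shannon
```

The agreement of `z_entropy` and `shannon` is exactly the identity that lets the Shannon-entropy bounds of Section 4 be specialized to the Zipf–Mandelbrot setting.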

The corollary stated below gives an estimate for the Zipf–Mandelbrot entropy.

Corollary 27. Presume that ζk ≥ 0 for each k ∈ {1, 2, …, m} with ∑_{k=1}^{m} ζk = 1, α ≥ 0, and l > 0; then

()

Proof. Consider γk = 1/(Qm,α,l(k + α)^l); then

()
()
and
()
Now, by using (26), (27), and (28) in (5), we obtain (25).

Another estimate for the Zipf–Mandelbrot entropy is acquired, which is stated in the following corollary.

Corollary 28. Let α1, α2 ≥ 0 and l1, l2 > 0. Then,

()

Proof. Consider γk = 1/(Qm,α1,l1(k + α1)^{l1}) and ζk = 1/(Qm,α2,l2(k + α2)^{l2}); then

()
()
and
()

Now, using (30)–(32) in (23), we obtain (29).

6. Conclusions

In the last century, mathematical inequalities and their applications have seen record development, with an impressive impact on various fields of science. It has been verified that numerous novel insights into mathematical inequalities and their understanding can be gained through the use of convex functions. Many famous inequalities have been proved with the help of the notion of convexity. One of the notable inequalities related to the concept of convexity is the renowned Slater inequality. In the present article, we introduced a novel method for the determination of bounds for the Slater difference by dealing with convex functions in a broad sense. We established the desired bounds for the Slater difference by applying the definition of the convex function, the triangle inequality, and the widely recognized Jensen inequality for concave functions. We derived new relationships for the power means by utilizing the key findings. Furthermore, we presented various relations in information theory as applications of the main results. We closed the article by giving new estimates for the Zipf–Mandelbrot entropy as further applications of the established results. The thought and method utilized in this article may stimulate further research in this direction.

Conflicts of Interest

All authors declare that there are no conflicts of interest.

Authors’ Contributions

All authors have made an equal contribution to the writing of this paper, and they have all reviewed and provided their approval for the final manuscript.

Acknowledgments

The authors extend their appreciation to Taif University, Saudi Arabia, for supporting this work through project number (TU-DSPP-2024-94).

Data Availability

No data were used to support this study.
