Article

Derivation of Bounds for Majorization Differences by a Novel Method and Its Applications in Information Theory

by Abdul Basir 1, Muhammad Adil Khan 1,*, Hidayat Ullah 1, Yahya Almalki 2, Saowaluck Chasreechai 3,4 and Thanin Sitthiwirattham 4,5,*

1 Department of Mathematics, University of Peshawar, Peshawar 25000, Pakistan
2 Department of Mathematics, College of Science, King Khalid University, Abha 61413, Saudi Arabia
3 Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
4 Research Group for Fractional Calculus Theory and Applications, Science and Technology Research Institute, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
5 Mathematics Department, Faculty of Science and Technology, Suan Dusit University, Bangkok 10300, Thailand
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(9), 885; https://doi.org/10.3390/axioms12090885
Submission received: 9 August 2023 / Revised: 14 September 2023 / Accepted: 15 September 2023 / Published: 16 September 2023
(This article belongs to the Special Issue Current Research on Mathematical Inequalities II)

Abstract

In the recent era of research developments, mathematical inequalities and their applications play a consequential role in many areas and provide an engaging field for research activity. In this paper, we propose a new approach for improving the classical majorization inequality and its weighted versions in the discrete sense. The proposed improvements give several estimates for the majorization differences. Some earlier improvements of the Jensen and Slater inequalities are deduced as direct consequences of the obtained results. We also discuss the conditions under which the main results give better estimates for the majorization differences. Applications of the acquired results in information theory are also presented.

1. Introduction

Mathematical inequalities play a crucial role in various areas of mathematics and have practical applications in many fields such as physics [1], engineering [2], economics [3], epidemiology [4], and statistics [5]. Inequalities provide a way to compare and order numbers, variables, and quantities, and they help us determine the relationships between quantities [6,7,8]. Due to these facts, inequalities are essential in many real-world situations for establishing the comparisons needed to make decisions or draw conclusions [9,10]. Furthermore, inequalities serve as fundamental tools for solving mathematical problems [11,12]. They are often used to establish conditions or constraints, which help in formulating and solving equations, optimization problems, and systems of equations [7,13]. Inequalities provide a framework for identifying feasible solutions and finding the best possible outcomes [14,15,16,17]. Moreover, inequalities are frequently utilized in mathematical analysis and proof techniques [4,18]: they allow us to establish bounds, estimate limits, and prove the existence or non-existence of certain mathematical objects or properties [19]. In mathematical analysis, calculus, and advanced proof-based mathematics, inequalities are essential elements [20,21,22]. Mathematical inequalities play a central role in finding the best-fit solution in optimization problems [7]. In economics, engineering, and operations research, inequalities help determine optimal values of variables under given conditions [2,3,18]. Inequalities also play a crucial role in decision-making processes by capturing trade-offs and constraints [3,23,24]. In computer science and image processing, inequalities are used to develop efficient algorithms for solving and modeling complicated problems [25,26,27]. Linear programming, integer programming, and combinatorial optimization techniques rely heavily on inequalities to determine feasible solutions and guide search algorithms [5,18,28]. In short, mathematical inequalities are vital in various mathematical disciplines and have practical applications in diverse fields [11,29,30,31]. They provide a framework for comparison, analysis, optimization, decision making, modeling, and proof techniques, and they enable us to understand, solve, and make informed choices in complex mathematical and real-world scenarios [9,32,33,34].
The great driving force behind the development of mathematical inequalities is the notion of convexity [24,35,36,37]; indeed, no account of mathematical inequalities is complete without convex functions [18,30,38]. Convex functions are closely related to inequalities, because without convex functions several inequalities could not be proven, such as the Hermite–Hadamard inequality [11], the Jensen–Mercer inequality [39], the Jensen–Steffensen inequality [18], Favard’s inequality [40], and many others. The most foundational and interesting inequality for convex functions is the Jensen inequality [41]. This inequality is intimately related to convex functions in the sense that it generalizes the defining inequality of ordinary convexity. It provides a relationship between the expected value of the image of a convex function and the convex function of the expected value. Here is the statement of Jensen’s inequality:
Let $\psi$ be a convex function on $[\xi_1,\xi_2]$ and let $\varrho_j\in[\xi_1,\xi_2]$, $\delta_j\geq 0$ for $j=1,2,\ldots,m$ with $\delta^*:=\sum_{j=1}^{m}\delta_j>0$. Then,
$$\psi\left(\frac{1}{\delta^*}\sum_{j=1}^{m}\delta_j\varrho_j\right)\leq\frac{1}{\delta^*}\sum_{j=1}^{m}\delta_j\psi(\varrho_j).\tag{1}$$
For a concave function $\psi$, (1) is valid in the contrary direction. The continuous form of the Jensen inequality is presented in (2):
Presume that the functions $g_1,g_2:[c_1,c_2]\to[\xi_1,\xi_2]$ are continuous with $g_1\geq 0$ and $g^*:=\int_{c_1}^{c_2}g_1(\sigma)\,d\sigma>0$. Then, the inequality
$$\psi\left(\frac{1}{g^*}\int_{c_1}^{c_2}g_1(\sigma)g_2(\sigma)\,d\sigma\right)\leq\frac{1}{g^*}\int_{c_1}^{c_2}g_1(\sigma)\psi\big(g_2(\sigma)\big)\,d\sigma\tag{2}$$
is valid for every convex function $\psi$ on $[\xi_1,\xi_2]$ such that the integral of $\psi\circ g_2$ exists. For a concave function $\psi$, (2) holds in the reverse sense.
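As a quick numerical illustration of (1), here is a minimal sketch of our own (not from the paper), assuming $\psi(x)=x^2$ and NumPy; all names are ours:

```python
# Verify the discrete Jensen inequality (1) for a convex test function.
import numpy as np

rng = np.random.default_rng(0)
rho = rng.uniform(1.0, 5.0, size=6)      # points rho_j in [xi_1, xi_2]
delta = rng.uniform(0.1, 1.0, size=6)    # non-negative weights delta_j
delta_star = delta.sum()                 # delta^* > 0

psi = lambda x: x**2                     # convex test function

lhs = psi(np.dot(delta, rho) / delta_star)     # psi of the weighted mean
rhs = np.dot(delta, psi(rho)) / delta_star     # weighted mean of psi
assert lhs <= rhs                              # Jensen's inequality (1)
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```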
The Jensen inequality has many interesting features: for example, it generalizes the triangle inequality, and it is the source of many other inequalities, such as the Hölder, Ky Fan, Young, and Hermite–Hadamard inequalities, among many others [41,42]. Moreover, this inequality has a very desirable structure, and due to these facts, many important variants, generalizations, applications, enhancements, and extensions of the Jensen inequality have been provided in different settings with the support of the characteristics and properties of convex functions and their generalizations [43,44,45,46]. In 1981, Slater presented a companion inequality to the Jensen inequality for convex functions, now known as Slater’s inequality [47]. The Slater inequality can be expressed as follows:
Suppose that $\varrho_j\in(\xi_1,\xi_2)$ and $\delta_j\geq 0$ for $j=1,2,\ldots,m$ with $\delta^*:=\sum_{j=1}^{m}\delta_j>0$, and let $\psi:(\xi_1,\xi_2)\to\mathbb{R}$ be an increasing convex function such that $\varrho^*:=\sum_{j=1}^{m}\delta_j\psi'_+(\varrho_j)\neq 0$. Then,
$$\frac{1}{\delta^*}\sum_{j=1}^{m}\delta_j\psi(\varrho_j)\leq\psi\left(\frac{1}{\varrho^*}\sum_{j=1}^{m}\delta_j\varrho_j\psi'_+(\varrho_j)\right).\tag{3}$$
For the concave function ψ , (3) reverses.
We now devote our attention to majorization. Majorization is a versatile strategy that has been employed to address many kinds of problems, including maximum likelihood estimation, non-negative matrix factorization, image reconstruction, and compressed sensing, among others [12,48]. It provides a powerful framework for solving complex optimization problems by iteratively minimizing simpler surrogate functions [28,49]. Majorization is a concept in mathematics that refers to a partial ordering between vectors or sequences of numbers [50,51]. Given two vectors or sequences, one is said to majorize the other if it is “larger” in a certain sense. More formally, let $\mathbf{k}_1=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ and $\mathbf{k}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ be any $m$-tuples. Then, we say $\mathbf{k}_1$ majorizes $\mathbf{k}_2$, denoted by $\mathbf{k}_1\succ\mathbf{k}_2$, if, after arranging both $\mathbf{k}_1$ and $\mathbf{k}_2$ in decreasing order, the following conditions are satisfied:
  • $\sum_{j=1}^{k}\varrho_j\geq\sum_{j=1}^{k}\sigma_j$, $k=1,2,\ldots,m-1$;
  • $\sum_{j=1}^{m}\varrho_j=\sum_{j=1}^{m}\sigma_j$.
In other words, $\mathbf{k}_1\succ\mathbf{k}_2$ means that $\mathbf{k}_1$ has larger or equal stochastic dominance than $\mathbf{k}_2$ [52]; it implies that $\mathbf{k}_1$ has a more spread-out distribution than $\mathbf{k}_2$. The majorization ordering has some useful properties. For example, if $\mathbf{k}_1\succ\mathbf{k}_2$, then any permutation of $\mathbf{k}_1$ majorizes the corresponding permutation of $\mathbf{k}_2$. Additionally, majorization is essential in the theory of convex functions and inequalities. In 1932, Karamata [50] introduced an inequality for majorized tuples via convex functions, which is known as the majorization inequality. This inequality states the following:
Suppose that $\psi$ is convex on $[\xi_1,\xi_2]$ and that $\mathbf{k}_1=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ and $\mathbf{k}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ are tuples such that $\varrho_j,\sigma_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$. If $\mathbf{k}_1\succ\mathbf{k}_2$, then
$$\sum_{j=1}^{m}\psi(\varrho_j)\geq\sum_{j=1}^{m}\psi(\sigma_j).\tag{6}$$
If the function $\psi$ is concave, then (6) is reversed. Some years later, Fuchs [53] established the generalized form
$$\sum_{j=1}^{m}\alpha_j\psi(\varrho_j)\geq\sum_{j=1}^{m}\alpha_j\psi(\sigma_j)\tag{7}$$
of the majorization inequality (6) by taking arbitrary real weights $\mathbf{k}_3=(\alpha_1,\alpha_2,\ldots,\alpha_m)$ and decreasing tuples $\mathbf{k}_1=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ and $\mathbf{k}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ under the conditions $\sum_{j=1}^{k}\alpha_j\varrho_j\geq\sum_{j=1}^{k}\alpha_j\sigma_j$ ($k=1,2,\ldots,m-1$) and $\sum_{j=1}^{m}\alpha_j\varrho_j=\sum_{j=1}^{m}\alpha_j\sigma_j$, while utilizing the notion of convexity. Inequality (7) is well known as the weighted or generalized version of the majorization inequality. In 1973, Bullen et al. also proved inequality (7) for decreasing tuples with real weights under the relaxed majorization conditions $\sum_{j=1}^{k}\alpha_j\varrho_j\geq\sum_{j=1}^{k}\alpha_j\sigma_j$ ($k=1,2,\ldots,m$), at the price of the stronger requirement that the convex function $\psi$ be monotonic. Maligranda et al. [40] obtained inequality (7) for positive weights while relaxing the assumptions on the convex function and on one tuple and keeping the second tuple monotonic, under the majorization conditions $\sum_{j=1}^{k}\alpha_j\varrho_j\geq\sum_{j=1}^{k}\alpha_j\sigma_j$ ($k=1,2,\ldots,m-1$) and $\sum_{j=1}^{m}\alpha_j\varrho_j=\sum_{j=1}^{m}\alpha_j\sigma_j$. In 2004, Dragomir [54] also provided a weighted version of the majorization inequality by taking certain tuples with relaxed majorization conditions. In 2019, Adil Khan et al. [55] presented extensions of the classical majorization inequality and its weighted versions under different circumstances for convex functions on rectangles. Wu et al. [56] acquired improvements of majorization-type inequalities for convex functions through Taylor’s theorem with the mean value form of the remainder. In 2021, Deng et al. [46] refined the Jensen inequality through the theory of majorization by a new method and further explained the importance of the refined inequality by providing its applications in various domains. In 2022, Saeed et al. [57] utilized majorization results and presented refinements of the celebrated integral Jensen inequality; as direct outcomes of their main findings, they obtained refinements of the Hermite–Hadamard and Hölder inequalities, and they supported their results with applications to means and to information theory.
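The majorization conditions and Karamata’s inequality (6) are easy to check numerically. The following sketch is our own illustration (the helper `majorizes` is hypothetical, not from the paper), assuming $\psi(x)=x^2$:

```python
# Check that k1 majorizes k2, then verify Karamata's inequality (6).
import numpy as np

def majorizes(k1, k2):
    """True if k1 majorizes k2 (tuples are sorted in decreasing order)."""
    a = np.sort(np.asarray(k1, float))[::-1]
    b = np.sort(np.asarray(k2, float))[::-1]
    partial_ok = np.all(np.cumsum(a)[:-1] >= np.cumsum(b)[:-1])
    return partial_ok and np.isclose(a.sum(), b.sum())

k1 = [5.0, 3.0, 1.0]       # more "spread out"
k2 = [4.0, 3.0, 2.0]       # same total, less spread
assert majorizes(k1, k2)

psi = lambda x: x**2       # convex test function
lhs = sum(psi(x) for x in k1)
rhs = sum(psi(x) for x in k2)
assert lhs >= rhs          # Karamata: sum psi(rho_j) >= sum psi(sigma_j)
print(f"{lhs:.1f} >= {rhs:.1f}")
```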
The main aim of this paper is to provide estimates for majorization differences through twice-differentiable functions. The article is organized in the following way:
  • In Section 2, we present the main results, which give estimates for the majorization difference. In this section, we also give bounds for the Slater and Jensen differences, which can be deduced from the main results by performing particular substitutions.
  • In Section 3, we will highlight the conditions under which the main results will give better estimations.
  • In Section 4, we will discuss how the main results can be applied in information theory.
  • Section 5 is devoted to the concluding remarks on the article.

2. Main Results

This section concerns the main findings, which provide estimates for the majorization, Jensen, and Slater differences. The principal goal of this section is to find estimates for majorization differences by taking twice-differentiable functions. The intended estimates are acquired by applying the notion of convexity, Hölder’s inequality, the prominent Jensen inequality, and the famous power mean inequality. The direct consequences of each result are discussed for the Jensen and Slater differences. We begin this section with the following lemma, which establishes an identity associated with the majorization difference.
Lemma 1.
Assume that $\mathbf{m}_1=(\varrho_1,\varrho_2,\ldots,\varrho_m),\ \mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)\in[\xi_1,\xi_2]^m$, and $\mathbf{m}_3=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)\in(0,\infty)^m$. Further, let $\psi:[\xi_1,\xi_2]\to\mathbb{R}$ be a function for which $\psi''$ exists. Then,
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\int_{0}^{1}t\,\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\,dt-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j).\tag{8}$$
Proof. 
Without loss of generality, assume that $\sigma_j\neq\varrho_j$ for all $j\in\{1,2,\ldots,m\}$. Utilizing integration by parts, we have
$$\begin{aligned}
\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\int_{0}^{1}t\,\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\,dt
&=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{t}{\sigma_j-\varrho_j}\,\psi'\big(t\sigma_j+(1-t)\varrho_j\big)\Big|_{0}^{1}-\frac{1}{\sigma_j-\varrho_j}\int_{0}^{1}\psi'\big(t\sigma_j+(1-t)\varrho_j\big)\,dt\right)\\
&=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{\psi'(\sigma_j)}{\sigma_j-\varrho_j}-\frac{1}{(\sigma_j-\varrho_j)^2}\,\psi\big(t\sigma_j+(1-t)\varrho_j\big)\Big|_{0}^{1}\right)\\
&=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{\psi'(\sigma_j)}{\sigma_j-\varrho_j}-\frac{\psi(\sigma_j)-\psi(\varrho_j)}{(\sigma_j-\varrho_j)^2}\right)\\
&=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)-\sum_{j=1}^{m}\varsigma_j\big(\psi(\sigma_j)-\psi(\varrho_j)\big),
\end{aligned}$$
which implies that
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)=\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\int_{0}^{1}t\,\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\,dt.\tag{9}$$
Clearly, (9) is equivalent to (8). □
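Identity (8) can be sanity-checked numerically. Below is a minimal sketch of our own, assuming $\psi=\exp$ (so that $\psi'=\psi''=\exp$) and using SciPy for the inner integral:

```python
# Numerically verify identity (8) for random tuples and weights.
import numpy as np
from scipy.integrate import quad

psi = dpsi = d2psi = np.exp            # psi, psi', psi'' all equal exp

rng = np.random.default_rng(1)
rho, sigma = rng.uniform(0, 2, 5), rng.uniform(0, 2, 5)
w = rng.uniform(0.1, 1.0, 5)           # positive weights varsigma_j

lhs = np.dot(w, psi(rho)) - np.dot(w, psi(sigma))
rhs = sum(
    w_j * (s - r)**2 * quad(lambda t: t * d2psi(t*s + (1-t)*r), 0, 1)[0]
    for w_j, r, s in zip(w, rho, sigma)
) - np.dot(w, (sigma - rho) * dpsi(sigma))
assert np.isclose(lhs, rhs)            # both sides of (8) agree
print(f"both sides = {lhs:.6f}")
```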
In the next theorem, we make use of the Hölder inequality and the concept of a convex function to set up an inequality for the majorization difference.
Theorem 1.
Presume that all the assumptions of Lemma 1 are valid. Moreover, let $|\psi''|^q$ $(q>1)$ be a convex function on $[\xi_1,\xi_2]$. Then,
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{(q+1)|\psi''(\sigma_j)|^{q}+|\psi''(\varrho_j)|^{q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}.\tag{10}$$
Proof. 
Since $\psi''(x)\leq|\psi''(x)|$ for all $x\in[\xi_1,\xi_2]$, from (8) we can write
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\int_{0}^{1}\big|t\,\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|\,dt.\tag{11}$$
Applying the Hölder inequality to (11), we deduce
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}t^{q}\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}.\tag{12}$$
Now, using the convexity of the function $|\psi''|^q$ in (12), we obtain
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(|\psi''(\sigma_j)|^{q}\int_{0}^{1}t^{q+1}\,dt+|\psi''(\varrho_j)|^{q}\int_{0}^{1}t^{q}(1-t)\,dt\right)^{\frac{1}{q}}.\tag{13}$$
By evaluating the integrals in (13), we arrive at (10). □
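The bound (10) can likewise be checked numerically. The following sketch is our own, assuming $\psi=\exp$ (for which $|\psi''|^q=e^{qx}$ is convex) and $q=2$:

```python
# Evaluate both sides of the Theorem 1 bound (10) on random data.
import numpy as np

psi = dpsi = d2psi = np.exp
q = 2.0

rng = np.random.default_rng(2)
rho, sigma = rng.uniform(0, 2, 5), rng.uniform(0, 2, 5)
w = rng.uniform(0.1, 1.0, 5)

diff = np.dot(w, psi(rho) - psi(sigma))             # majorization difference
bracket = ((q + 1) * np.abs(d2psi(sigma))**q + np.abs(d2psi(rho))**q) \
          / ((q + 1) * (q + 2))
bound = np.dot(w, (sigma - rho)**2 * bracket**(1/q)) \
        - np.dot(w, (sigma - rho) * dpsi(sigma))    # right side of (10)
assert diff <= bound
print(f"majorization difference {diff:.4f} <= bound {bound:.4f}")
```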
The subsequent corollary, which provides an estimate for the Jensen difference, follows from Theorem 1.
Corollary 1.
Assume that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$, and let $\bar{\varrho}:=\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\varrho_j$. If $\psi$ is a twice-differentiable function on $[\xi_1,\xi_2]$ such that $|\psi''|^q$ is convex for $q>1$, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\varrho})\leq\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)^2\left(\frac{(q+1)|\psi''(\bar{\varrho})|^{q}+|\psi''(\varrho_j)|^{q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}.\tag{14}$$
Proof. 
Utilizing $\sigma_j=\bar{\varrho}$ in (10) and noting that $\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)=0$, we obtain (14). □
Remark 1.
Ullah et al. [41], in their inequality (4), also provided a similar estimate to the one given on the right side of inequality (14) for the absolute Jensen difference.
The following corollary provides an estimate for the Slater difference as an output of Theorem 1.
Corollary 2.
Presume that $\psi$ is a twice-differentiable function on $[\xi_1,\xi_2]$, and that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$ and $\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)\neq 0$. Further, let $\bar{\rho}:=\frac{1}{\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)}\sum_{j=1}^{m}\varsigma_j\varrho_j\psi'(\varrho_j)\in[\xi_1,\xi_2]$ and let $|\psi''|^q$ be convex for $q>1$. Then,
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\rho})\leq\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j(\bar{\rho}-\varrho_j)^2\left(\frac{(q+1)|\psi''(\bar{\rho})|^{q}+|\psi''(\varrho_j)|^{q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}.\tag{15}$$
Proof. 
Inequality (15) can easily be deduced by replacing $\sigma_j$ with $\bar{\rho}$ in (10). □
Remark 2.
A similar estimate to the one given on the right side of inequality (15) was also presented in inequality (9) of the article [58] for the absolute Slater difference.
The theorem that follows offers another inequality for the majorization difference, constructed by using the Hölder inequality and the definition of a convex function.
Theorem 2.
Presume that the postulates of Theorem 1 are valid. Also, let $\frac{1}{p}+\frac{1}{q}=1$. Then,
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{|\psi''(\sigma_j)|^{q}+|\psi''(\varrho_j)|^{q}}{2}\right)^{\frac{1}{q}}.\tag{16}$$
Proof. 
By utilizing the Hölder inequality on the right side of (11), we arrive at
$$\begin{aligned}
\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)
&\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}t^{p}\,dt\right)^{\frac{1}{p}}\left(\int_{0}^{1}\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}\\
&=-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}.
\end{aligned}\tag{17}$$
To obtain inequality (18), we just use the definition of a convex function on the right side of (17):
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(|\psi''(\sigma_j)|^{q}\int_{0}^{1}t\,dt+|\psi''(\varrho_j)|^{q}\int_{0}^{1}(1-t)\,dt\right)^{\frac{1}{q}}.\tag{18}$$
Now, evaluating the integrals in (18), we obtain (16). □
The subsequent corollary, which provides a bound for the Jensen difference, is a direct consequence of Theorem 2.
Corollary 3.
Assume that the conditions of Corollary 1 are satisfied. Moreover, if $\frac{1}{p}+\frac{1}{q}=1$, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\varrho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)^2\left(\frac{|\psi''(\bar{\varrho})|^{q}+|\psi''(\varrho_j)|^{q}}{2}\right)^{\frac{1}{q}}.\tag{19}$$
Proof. 
To obtain (19), apply inequality (16) with $\sigma_j=\bar{\varrho}$. □
Remark 3.
The same type of estimate as stated on the right side of (19) was also obtained in inequality (9) of [41] for the absolute Jensen difference.
The corollary below is another consequence of Theorem 2.
Corollary 4.
Assume that the conditions of Corollary 2 are satisfied. Moreover, if $\frac{1}{p}+\frac{1}{q}=1$, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\rho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\bar{\rho}-\varrho_j)^2\left(\frac{|\psi''(\bar{\rho})|^{q}+|\psi''(\varrho_j)|^{q}}{2}\right)^{\frac{1}{q}}.\tag{20}$$
Proof. 
Inequality (20) is simply achieved by substituting $\sigma_j=\bar{\rho}$ into (16). □
Remark 4.
The estimate provided in (20) for the Slater difference was also acquired by Khan et al. [58], in their inequality (15), for the absolute Slater difference.
Remark 5.
It is important to note that the above results are better with respect to the condition of convexity of $|\psi''|^q$, because there are functions $\psi$ for which $|\psi|^q$ or $|\psi'|^q$ is not convex while $|\psi''|^q$ is convex. For example, if $\psi(x)=\sqrt{x}$, $x>0$, then $|\psi|^q$ is not convex for $q\in(1,2)$, but $|\psi''|^q$ is convex. Similarly, if $\psi(x)=x^{k}$, $x>0$, $k\in\left(1,1+\frac{1}{q}\right)$, then $|\psi'|^q$ is not convex, but $|\psi''|^q$ is convex.
The following theorem establishes an inequality for the majorization difference through Hölder’s and Jensen’s inequalities.
Theorem 3.
Let the conditions of Lemma 1 be satisfied, and further assume that $|\psi''|$ is a concave function. Then,
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq\frac{1}{2}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left|\psi''\!\left(\frac{2\sigma_j+\varrho_j}{3}\right)\right|-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j).\tag{21}$$
Proof. 
By using the property of the absolute value function, identity (8) yields
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\int_{0}^{1}\big|t\,\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|\,dt-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j).\tag{22}$$
Now, utilizing the Jensen inequality on the right side of (22), we obtain
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\frac{1}{2}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left|\psi''\!\left(\frac{\int_{0}^{1}t\big(t\sigma_j+(1-t)\varrho_j\big)\,dt}{\int_{0}^{1}t\,dt}\right)\right|.\tag{23}$$
Simplifying (23), we obtain (21). □
An estimate for the Jensen difference is given in the following corollary as a consequence of Theorem 3.
Corollary 5.
Assume that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$, and let $\bar{\varrho}:=\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\varrho_j$. If $\psi$ is a twice-differentiable function on $[\xi_1,\xi_2]$ such that $|\psi''|$ is concave, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\varrho})\leq\frac{1}{2\varsigma^*}\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)^2\left|\psi''\!\left(\frac{2\bar{\varrho}+\varrho_j}{3}\right)\right|.\tag{24}$$
Proof. 
Taking $\sigma_j=\bar{\varrho}$ for all $j\in\{1,2,\ldots,m\}$ in inequality (21), we obtain (24). □
Remark 6.
The estimate for the absolute Jensen difference acquired in inequality (12) of the article [41] coincides with the estimate stated on the right side of inequality (24) upon taking $q=1$ in (12).
As a consequence of Theorem 3, the corollary below presents a bound for the Slater difference.
Corollary 6.
Assume that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$, and let $\psi$ be a twice-differentiable function on $[\xi_1,\xi_2]$ such that $|\psi''|$ is concave. If $\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)\neq 0$ and $\bar{\rho}:=\frac{1}{\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)}\sum_{j=1}^{m}\varsigma_j\varrho_j\psi'(\varrho_j)\in[\xi_1,\xi_2]$, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\rho})\leq\frac{1}{2\varsigma^*}\sum_{j=1}^{m}\varsigma_j(\bar{\rho}-\varrho_j)^2\left|\psi''\!\left(\frac{2\bar{\rho}+\varrho_j}{3}\right)\right|.\tag{25}$$
Proof. 
By taking $\sigma_j=\bar{\rho}$ for all $j\in\{1,2,\ldots,m\}$ in (21), we arrive at the desired inequality (25). □
Remark 7.
The estimate provided by Khan et al. [58] in (13) for the absolute Slater difference coincides with the estimate on the right side of (25) upon taking $p=1$ in (13).
The next theorem provides another relation for the majorization difference, which can be deduced by utilizing the Hölder inequality and Jensen inequality.
Theorem 4.
Let the conditions of Lemma 1 be satisfied, and further assume that $|\psi''|^q$ is a concave function for $q>1$. If $\frac{1}{p}+\frac{1}{q}=1$, then
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left|\psi''\!\left(\frac{\sigma_j+\varrho_j}{2}\right)\right|.\tag{26}$$
Proof. 
By utilizing the Hölder inequality on the right side of (22), we obtain
$$\begin{aligned}
\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)
&\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}t^{p}\,dt\right)^{\frac{1}{p}}\left(\int_{0}^{1}\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}\\
&=-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}.
\end{aligned}\tag{27}$$
Now, applying Jensen’s inequality to (27), we acquire
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left|\psi''\!\left(\int_{0}^{1}\big(t\sigma_j+(1-t)\varrho_j\big)\,dt\right)\right|.\tag{28}$$
To obtain (26), we evaluate the integral in (28). □
Corollary 7 is a direct outcome of Theorem 4.
Corollary 7.
Assume that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$ and $\bar{\varrho}:=\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\varrho_j$. Also, suppose that $\psi$ is a twice-differentiable function on $[\xi_1,\xi_2]$ such that $|\psi''|^q$ is concave for $q>1$. If $\frac{1}{p}+\frac{1}{q}=1$, then
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\varrho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)^2\left|\psi''\!\left(\frac{\bar{\varrho}+\varrho_j}{2}\right)\right|.\tag{29}$$
Proof. 
To obtain inequality (29), just replace $\sigma_j$ with $\bar{\varrho}$ in inequality (26). □
Remark 8.
Ullah et al. [41], in inequality (14), acquired the same type of estimate as indicated on the right side of (29) for the absolute Jensen difference.
Another consequence of Theorem 4, in terms of an estimate for the Slater difference, is given in the following corollary.
Corollary 8.
Assume that $\varsigma_j\geq 0$ and $\varrho_j\in[\xi_1,\xi_2]$ for $j=1,2,\ldots,m$ with $\varsigma^*:=\sum_{j=1}^{m}\varsigma_j>0$, and let $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$. Further, let $\psi$ be a twice-differentiable function on $[\xi_1,\xi_2]$ such that $|\psi''|^q$ is concave, $\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)\neq 0$, and $\bar{\rho}:=\frac{1}{\sum_{j=1}^{m}\varsigma_j\psi'(\varrho_j)}\sum_{j=1}^{m}\varsigma_j\varrho_j\psi'(\varrho_j)\in[\xi_1,\xi_2]$. Then,
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\rho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\sum_{j=1}^{m}\varsigma_j(\bar{\rho}-\varrho_j)^2\left|\psi''\!\left(\frac{\bar{\rho}+\varrho_j}{2}\right)\right|.\tag{30}$$
Proof. 
By utilizing (26) with $\sigma_j=\bar{\rho}$, we obtain (30). □
Remark 9.
An estimate of the type given on the right side of (30) for the Slater difference was also produced by Khan et al. [58] in (19) for the absolute Slater difference.
Theorem 5 is established via the definition of a convex function and the well-known power mean inequality; it provides another bound for the majorization difference.
Theorem 5.
Assume that the presumptions of Theorem 1 hold. Then,
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\frac{2|\psi''(\sigma_j)|^{q}+|\psi''(\varrho_j)|^{q}}{6}\right)^{\frac{1}{q}}.\tag{31}$$
Proof. 
Applying the power mean inequality to (11), we find
$$\begin{aligned}
\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)
&\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}t\,dt\right)^{1-\frac{1}{q}}\left(\int_{0}^{1}t\,\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}\\
&=-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(\int_{0}^{1}t\,\big|\psi''\big(t\sigma_j+(1-t)\varrho_j\big)\big|^{q}\,dt\right)^{\frac{1}{q}}.
\end{aligned}\tag{32}$$
Utilizing the convexity of the function $|\psi''|^q$ on the right side of (32), we obtain
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j)\leq-\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)+\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)^2\left(|\psi''(\sigma_j)|^{q}\int_{0}^{1}t^{2}\,dt+|\psi''(\varrho_j)|^{q}\int_{0}^{1}t(1-t)\,dt\right)^{\frac{1}{q}}.\tag{33}$$
By calculating the integrals in (33), we arrive at (31). □
As an immediate outcome of Theorem 5, the next corollary offers a bound for the Jensen difference.
Corollary 9.
Let the assumptions of Corollary 1 hold. Then,
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\varrho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{j=1}^{m}\varsigma_j(\bar{\varrho}-\varrho_j)^2\left(\frac{2|\psi''(\bar{\varrho})|^{q}+|\psi''(\varrho_j)|^{q}}{6}\right)^{\frac{1}{q}}.\tag{34}$$
Proof. 
By taking $\sigma_j=\bar{\varrho}$ in inequality (31), we deduce (34). □
Remark 10.
In inequality (9), Ullah et al. [41] also obtained an estimate for the absolute Jensen difference similar to the one given on the right side of inequality (34).
The subsequent corollary provides an estimate for the Slater difference as a consequence of Theorem 5.
Corollary 10.
Let the assumptions of Corollary 2 hold. Then,
$$\frac{1}{\varsigma^*}\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\psi(\bar{\rho})\leq\frac{1}{\varsigma^*}\left(\frac{1}{2}\right)^{1-\frac{1}{q}}\sum_{j=1}^{m}\varsigma_j(\bar{\rho}-\varrho_j)^2\left(\frac{2|\psi''(\bar{\rho})|^{q}+|\psi''(\varrho_j)|^{q}}{6}\right)^{\frac{1}{q}}.\tag{35}$$
Proof. 
To obtain (35), take $\sigma_j=\bar{\rho}$ in (31). □
Remark 11.
Similarly, an estimate of the type given in (35) for the Slater difference can be found in inequality (21) of the article [58], stated for the absolute Slater difference.

3. Discussion of When the Main Results Give Better Estimates

This section discusses the conditions under which the estimates given in Theorems 1–5 improve.
The results given in Theorems 1–5 provide superior estimates for the majorization difference
$$\sum_{j=1}^{m}\varsigma_j\psi(\varrho_j)-\sum_{j=1}^{m}\varsigma_j\psi(\sigma_j),$$
if
$$\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)\geq 0.$$
We now discuss certain conditions under which the term
$$\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)$$
is non-negative.
(1)
For $\varsigma_j=1$ for all $j\in\{1,2,\ldots,m\}$, the term $\sum_{j=1}^{m}(\sigma_j-\varrho_j)\psi'(\sigma_j)$ is non-negative if the function $\psi$ is convex and $\mathbf{m}_2\succ\mathbf{m}_1$. The details can be found in [50,52]. A numeric illustration of this case is sketched after this list.
(2)
By utilizing the idea of the proof of the result given in [40], it can be proven that the term $\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)$ is non-negative under the conditions that $\psi$ is a convex function and
(i)
$\mathbf{m}_2$ is a decreasing tuple with
$$\sum_{j=1}^{k}\varsigma_j\varrho_j\geq\sum_{j=1}^{k}\varsigma_j\sigma_j,\quad k=1,2,\ldots,m-1,$$
and
$$\sum_{j=1}^{m}\varsigma_j\varrho_j=\sum_{j=1}^{m}\varsigma_j\sigma_j,$$
OR
(ii)
$\mathbf{m}_2$ is an increasing tuple such that
$$\sum_{j=1}^{k}\varsigma_j\varrho_j\leq\sum_{j=1}^{k}\varsigma_j\sigma_j,\quad k=1,2,\ldots,m-1,$$
and
$$\sum_{j=1}^{m}\varsigma_j\varrho_j=\sum_{j=1}^{m}\varsigma_j\sigma_j.$$
(3)
If $\mathbf{m}_2$ and $\mathbf{m}_1-\mathbf{m}_2$ are monotonic in the same sense and satisfy the condition
$$\sum_{j=1}^{m}\varsigma_j\varrho_j=\sum_{j=1}^{m}\varsigma_j\sigma_j,$$
then, employing the procedure of the proof of the theorem in [54], one can easily show that $\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)$ is non-negative.
(4)
By adopting the method of the proof of the result given in [54], with the assumptions that $\psi$ is an increasing convex function and the tuples $\mathbf{m}_2$ and $\mathbf{m}_1-\mathbf{m}_2$ are monotonic in the same direction with
$$\sum_{j=1}^{m}\varsigma_j\varrho_j\geq\sum_{j=1}^{m}\varsigma_j\sigma_j,$$
we can conclude that $\sum_{j=1}^{m}\varsigma_j(\sigma_j-\varrho_j)\psi'(\sigma_j)\geq 0$.
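For illustration, case (1) above is easy to verify numerically. The sketch below is our own, assuming unit weights, $\psi(x)=x^2$, and tuples with $\mathbf{m}_2\succ\mathbf{m}_1$:

```python
# Illustrate case (1): with unit weights, convex psi, and m2 majorizing m1,
# the correction term sum (sigma_j - rho_j) * psi'(sigma_j) is non-negative.
import numpy as np

dpsi = lambda x: 2 * x                 # psi'(x) for psi(x) = x**2
m1 = np.array([4.0, 3.0, 2.0])         # majorized tuple (rho_j)
m2 = np.array([5.0, 3.0, 1.0])         # majorizing tuple (sigma_j), m2 > m1

term = np.sum((m2 - m1) * dpsi(m2))    # sum (sigma_j - rho_j) psi'(sigma_j)
assert term >= 0
print(f"correction term = {term:.2f} >= 0")
```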

4. Applications in Information Theory

The current section is devoted to applications of the main results in information theory. The intended applications provide bounds for the Csiszár and Kullback–Leibler divergences, the Shannon entropy, and the Bhattacharyya coefficient.
Definition 1.
Let $\psi$ be a real-valued convex function defined on $[a,b]$. Then, for $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m),\ \mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)\in(0,\infty)^m$, the Csiszár divergence is defined by
$$C_\psi(\mathbf{m}_1,\mathbf{m}_2)=\sum_{j=1}^{m}\varsigma_j\,\psi\!\left(\frac{\sigma_j}{\varsigma_j}\right).\tag{44}$$
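Definition 1 translates directly into code. The sketch below is our own (the function name is hypothetical); as a usage example, it recovers the Kullback–Leibler divergence via $\psi(x)=-\log x$:

```python
# A direct transcription of the Csiszar divergence (44).
import numpy as np

def csiszar_divergence(psi, m1, m2):
    """C_psi(m1, m2) = sum_j m1_j * psi(m2_j / m1_j); inputs must be positive."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    return np.dot(m1, psi(m2 / m1))

# Usage: psi(x) = -log(x) gives the Kullback-Leibler divergence K(m1, m2).
p = np.array([0.2, 0.5, 0.3])
r = np.array([0.3, 0.4, 0.3])
print(csiszar_divergence(lambda x: -np.log(x), p, r))
```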
Theorem 6.
Assume that $\psi:[\xi_1,\xi_2]\to\mathbb{R}$ is a twice-differentiable function such that $|\psi''|^q$ $(q>1)$ is convex. Further, let $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$ and $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ be $m$-tuples and $\mathbf{m}_3=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ be a positive $m$-tuple. Then,
$$C_\psi(\mathbf{m}_1,\mathbf{m}_3)-C_\psi(\mathbf{m}_1,\mathbf{m}_2)\leq\sum_{j=1}^{m}\frac{(\sigma_j-\varrho_j)^2}{\varsigma_j}\left(\frac{(q+1)\big|\psi''\big(\tfrac{\sigma_j}{\varsigma_j}\big)\big|^{q}+\big|\psi''\big(\tfrac{\varrho_j}{\varsigma_j}\big)\big|^{q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}-\sum_{j=1}^{m}(\sigma_j-\varrho_j)\,\psi'\!\left(\frac{\sigma_j}{\varsigma_j}\right).\tag{45}$$
Proof. 
By utilizing inequality (10) with $\varrho_j$ replaced by $\frac{\varrho_j}{\varsigma_j}$ and $\sigma_j$ replaced by $\frac{\sigma_j}{\varsigma_j}$, we obtain (45). □
Definition 2.
Shannon entropy is defined for any positive probability distribution $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$ as follows:
$$S(\mathbf{m}_1)=-\sum_{j=1}^{m}\varsigma_j\log\varsigma_j.\tag{46}$$
Shannon entropy plays a very important role in information theory, and numerous results are devoted to it. A basic result of information theory, derived by Shannon, states that the entropy formula is the only formula obeying the special criteria required of a measure of the uncertainty associated with the outcome of a random variable. For some important results and applications of Shannon entropy in information theory, we recommend [59].
Corollary 11.
Let $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$ and $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ be any positive tuples with $\sum_{j=1}^{m}\varsigma_j=1$ and let $q>1$. Then,
$$\sum_{j=1}^{m}\varsigma_j\log\!\left(\frac{\sigma_j}{\varsigma_j}\right)-S(\mathbf{m}_1)\leq\sum_{j=1}^{m}\frac{(\sigma_j-1)^2}{\varsigma_j}\left(\frac{(q+1)\big(\tfrac{\varsigma_j}{\sigma_j}\big)^{2q}+\varsigma_j^{2q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}+\sum_{j=1}^{m}(\sigma_j-1)\,\frac{\varsigma_j}{\sigma_j}.\tag{47}$$
Proof. 
Consider $\psi(\varrho)=-\log\varrho$, $\varrho>0$. Then, $\psi''(\varrho)=\frac{1}{\varrho^2}$ and $\big(|\psi''(\varrho)|^q\big)''=2q(2q+1)\varrho^{-(2q+2)}$. Clearly, $\psi''$ and $\big(|\psi''|^q\big)''$ are positive on $(0,\infty)$, which confirms the convexity of $\psi$ and $|\psi''|^q$. Therefore, applying (45) for $\psi(\varrho)=-\log\varrho$ and $\varrho_j=1$ for $j=1,2,\ldots,m$, we deduce (47). □
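A numerical check of (47) with $q=2$ (a minimal sketch of our own; the particular tuples are arbitrary test data):

```python
# Verify the Shannon-entropy bound (47) for sample tuples.
import numpy as np

q = 2.0
p = np.array([0.2, 0.5, 0.3])          # probability distribution m1
s = np.array([0.4, 0.8, 0.6])          # positive tuple m2

shannon = -np.dot(p, np.log(p))        # S(m1)
lhs = np.dot(p, np.log(s / p)) - shannon

bracket = ((q + 1) * (p / s)**(2*q) + p**(2*q)) / ((q + 1) * (q + 2))
rhs = np.dot((s - 1)**2 / p, bracket**(1/q)) + np.dot(s - 1, p / s)
assert lhs <= rhs                      # inequality (47) holds
print(f"{lhs:.4f} <= {rhs:.4f}")
```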
Definition 3.
For any positive probability distributions $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$ and $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$, the Kullback–Leibler divergence is defined as
$$K(\mathbf{m}_1,\mathbf{m}_2)=\sum_{j=1}^{m}\varsigma_j\log\!\left(\frac{\varsigma_j}{\sigma_j}\right).\tag{48}$$
Corollary 12.
Let $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$, $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$, and $\mathbf{m}_3=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ be positive probability distributions and let $q>1$. Then,
$$K(\mathbf{m}_1,\mathbf{m}_3)-K(\mathbf{m}_1,\mathbf{m}_2)\leq\sum_{j=1}^{m}\frac{(\sigma_j-\varrho_j)^2}{\varsigma_j}\left(\frac{(q+1)\big(\tfrac{\varsigma_j}{\sigma_j}\big)^{2q}+\big(\tfrac{\varsigma_j}{\varrho_j}\big)^{2q}}{(q+1)(q+2)}\right)^{\frac{1}{q}}+\sum_{j=1}^{m}(\sigma_j-\varrho_j)\,\frac{\varsigma_j}{\sigma_j}.\tag{49}$$
Proof. 
For the specified conditions, the functions $\psi(x)=-\log x$ and $|\psi''(x)|^q=x^{-2q}$ are convex on $(0,\infty)$. Therefore, utilizing (45) for $\psi(x)=-\log x$, we obtain (49). □
Definition 4.
For arbitrary probability distributions $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$ and $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$ with positive entries, the Bhattacharyya coefficient is defined as
$$B(\mathbf{m}_1,\mathbf{m}_2)=\sum_{j=1}^{m}\sqrt{\varsigma_j\sigma_j}.\tag{50}$$
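Definition 4 in code (a one-line sketch of our own; the function name is hypothetical):

```python
# Bhattacharyya coefficient (50) of two positive probability distributions.
import numpy as np

def bhattacharyya(m1, m2):
    """B(m1, m2) = sum_j sqrt(m1_j * m2_j); equals 1 iff m1 == m2."""
    return np.sum(np.sqrt(np.asarray(m1) * np.asarray(m2)))

print(bhattacharyya([0.2, 0.5, 0.3], [0.3, 0.4, 0.3]))
```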
Corollary 13.
Let $\mathbf{m}_1=(\varsigma_1,\varsigma_2,\ldots,\varsigma_m)$, $\mathbf{m}_2=(\sigma_1,\sigma_2,\ldots,\sigma_m)$, and $\mathbf{m}_3=(\varrho_1,\varrho_2,\ldots,\varrho_m)$ be positive probability distributions and let $q>1$. Then,
$$B(\mathbf{m}_1,\mathbf{m}_2)-B(\mathbf{m}_1,\mathbf{m}_3)\leq\frac{1}{4}\sum_{j=1}^{m}\frac{(\sigma_j-\varrho_j)^2}{\varsigma_j}\left(\frac{(q+1)\big(\tfrac{\varsigma_j}{\sigma_j}\big)^{\frac{3q}{2}}+\big(\tfrac{\varsigma_j}{\varrho_j}\big)^{\frac{3q}{2}}}{(q+1)(q+2)}\right)^{\frac{1}{q}}+\frac{1}{2}\sum_{j=1}^{m}(\sigma_j-\varrho_j)\sqrt{\frac{\varsigma_j}{\sigma_j}}.\tag{51}$$
Proof. 
Consider the function $\psi(x)=-\sqrt{x}$ defined on $(0,\infty)$. Then, $\psi''(x)=\frac{1}{4}x^{-\frac{3}{2}}$ and $\big(|\psi''(x)|^q\big)''=\frac{1}{4^q}\cdot\frac{3q}{2}\left(\frac{3q}{2}+1\right)x^{-\frac{3q}{2}-2}$. Clearly, both $\psi''$ and $\big(|\psi''|^q\big)''$ are positive, which substantiates the convexity of $\psi$ and $|\psi''|^q$. Therefore, using (45) for $\psi(x)=-\sqrt{x}$, we achieve (51). □
Remark 12.
In a similar fashion, applications of Theorems 2–5 can be provided for the Csiszár divergence and its related special cases.

5. Conclusions

Because of their many potential uses and rich history, mathematical inequalities constitute a very dynamic topic in all areas of science. A number of inequalities have been established for convex functions. The majorization inequality is one of the dominant inequalities, and it too has been demonstrated with the support of convex functions. In this paper, we focused on finding bounds for the classical majorization inequality and its weighted forms. The intended bounds were established by applying the well-known Jensen inequality, the Hölder inequality, the power mean inequality, and the notion of convexity. Bounds for the Jensen difference and the Slater difference were also provided as consequences of the main findings. Moreover, applications of the acquired results in information theory were presented; these applications provide different estimates for the Csiszár and Kullback–Leibler divergences, the Bhattacharyya coefficient, and the Shannon entropy. The approach and techniques used in this manuscript may be utilized for the integral majorization inequality, and the idea may be further generalized to higher-order differentiable functions.

Author Contributions

Conceptualization, A.B., M.A.K., H.U., Y.A., S.C. and T.S.; funding acquisition, S.C. and T.S.; investigation, A.B., M.A.K. and S.C.; methodology, H.U., Y.A. and T.S.; validation, A.B., M.A.K. and H.U.; visualization, Y.A. and S.C.; writing—original draft, A.B., M.A.K., H.U., Y.A. and S.C.; writing—review and editing, A.B., M.A.K., H.U., S.C. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science, Research and Innovation Fund (NSRF) and King Mongkut’s University of Technology North Bangkok, with contract no. KMUTNB-FF-66-54.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large-group research project under grant number RGP2/366/44. We thank the worthy referees for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gomez, I.S.; da Costa, B.G.; dos Santos, M.A. Majorization and dynamics of continuous distributions. Entropy 2019, 21, 590. [Google Scholar] [CrossRef]
  2. Cloud, M.J.; Drachman, B.C.; Lebedev, L.P. Inequalities with Applications to Engineering; Springer: Cham, Switzerland; Berlin/Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2014. [Google Scholar]
  3. Grinblatt, M.; Linnainmaa, J.T. Jensen’s inequality, parameter uncertainty, and multiperiod investment. Rev. Asset Pricing Stud. 2011, 1, 1–34. [Google Scholar] [CrossRef]
  4. Ullah, H.; Adil Khan, M.; Pečarić, J. New bounds for soft margin estimator via concavity of Gaussian weighting function. Adv. Differ. Equ. 2020, 2020, 644. [Google Scholar] [CrossRef]
  5. Adil Khan, M.; Sohail, A.; Ullah, H.; Saeed, T. Estimations of the Jensen gap and their applications based on 6–convexity. Mathematics 2023, 11, 1957. [Google Scholar] [CrossRef]
  6. Horváth, L.; Pečarić, Đ.; Pečarić, J. Estimations of f- and Rényi divergences by using a cyclic refinement of the Jensen’s inequality. Bull. Malays. Math. Sci. Soc. 2019, 42, 933–946. [Google Scholar] [CrossRef]
  7. Borwein, J.; Lewis, A. Convex Analysis and Nonlinear Optimization, Theory and Examples; Springer: New York, NY, USA, 2000. [Google Scholar]
  8. Zhao, T.-H.; Wang, M.-K.; Chu, Y.-M. Concavity and bounds involving generalized elliptic integral of the first kind. J. Math. Inequal. 2021, 15, 701–724. [Google Scholar] [CrossRef]
  9. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications, 2nd ed.; Springer Series in Statistics; Springer: New York, NY, USA, 2011. [Google Scholar]
  10. Lakshmikantham, V.; Vatsala, A.S. Theory of Differential and Integral Inequalities with Initial Time Difference and Applications; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  11. Dragomir, S.S.; Pearce, C.E.M. Selected Topics on Hermite–Hadamard Inequalities and Applications; Victoria University: Melbourne, Australia, 2000. [Google Scholar]
  12. Adil Khan, M.; Khalid, S.; Pečarić, J. Refinements of some majorization type inequalities. J. Math. Inequal. 2013, 7, 73–92. [Google Scholar] [CrossRef]
  13. Lin, M.; Wolkowicz, H. An eigenvalue majorization inequality for positive semidefinite block matrices. Linear Multilinear Algebra 2012, 60, 1365–1368. [Google Scholar] [CrossRef]
  14. You, X.; Adil Khan, M.; Ullah, H.; Saeed, T. Improvements of Slater’s inequality by means of 4-convexity and its applications. Mathematics 2022, 10, 1274. [Google Scholar] [CrossRef]
  15. Zhao, T.-H.; Wang, M.-K.; Chu, Y.-M. Monotonicity and convexity involving generalized elliptic integral of the first kind. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2021, 115, 1–13. [Google Scholar] [CrossRef]
  16. Chu, H.-H.; Zhao, T.-H.; Chu, Y.-M. Sharp bounds for the Toader mean of order 3 in terms of arithmetic, quadratic and contraharmonic means. Math. Slovaca 2020, 70, 1097–1112. [Google Scholar] [CrossRef]
  17. Zhao, T.-H.; He, Z.-Y.; Chu, Y.-M. On some refinements for inequalities involving zero-balanced hypergeometric function. AIMS Math. 2020, 5, 6479–6495. [Google Scholar] [CrossRef]
  18. Pečarić, J.; Persson, L.E.; Tong, Y.L. Convex Functions, Partial Ordering and Statistical Applications; Academic Press: New York, NY, USA, 1992. [Google Scholar]
  19. Ullah, H.; Adil Khan, M.; Saeed, T.; Sayed, Z.M.M.M. Some improvements of Jensen’s inequality via 4–convexity and applications. J. Funct. Spaces 2022, 2022, 1–9. [Google Scholar] [CrossRef]
  20. Abdeljawad, T.; Rashid, S.; Hammouch, Z.; Chu, Y.-M. Some new local fractional inequalities associated with generalized (s,m)-convex functions and applications. Adv. Differ. Equ. 2020, 2020, 406. [Google Scholar] [CrossRef]
  21. Adeel, M.; Khan, K.A.; Pečarić, Đ.; Pečarić, J. Levinson type inequalities for higher order convex functions via Abel–Gontscharoff interpolation. Adv. Differ. Equ. 2019, 2019, 430. [Google Scholar] [CrossRef]
  22. Zhao, T.-H.; Wang, M.-K.; Chu, Y.-M. A sharp double inequality involving generalized complete elliptic integral of the first kind. AIMS Math. 2020, 5, 4512–4528. [Google Scholar] [CrossRef]
  23. Adeel, M.; Khan, K.A.; Pečarić, Đ.; Pečarić, J. Estimation of f-divergence and Shannon entropy by Levinson type inequalities for higher-order convex functions via Taylor polynomial. J. Math. Comput. Sci. 2020, 21, 322–334. [Google Scholar] [CrossRef]
  24. Niculescu, C.P.; Persson, L.E. Convex Functions and Their Applications. A Contemporary Approach; CMS Books in Mathematics; Springer: New York, NY, USA, 2006; Volume 23. [Google Scholar]
  25. Sajjad, M.; Mehmood, I.; Abbasand, N.; Baik, S.W. Basis pursuit denoising-based image super resolution using a redundant set of atoms. Signal Image Video Process. 2016, 10, 181–188. [Google Scholar] [CrossRef]
  26. Vasić, P.M.; Mijajlović, Ž. On an index set function connected with Jensen inequality. Univ. Beograd Publ. Elektrotehn. Fak. Ser. Mat. Fiz. 1976, 544–576, 110–112. [Google Scholar]
  27. Vespignani, A. Modelling dynamical processes in complex socio-technical systems. Nat. Phys. 2012, 8, 32–39. [Google Scholar] [CrossRef]
  28. Kleiner, A.; Moldovanu, B.; Strack, P. Extreme points and majorization: Economic applications. Econometrica 2021, 89, 1557–1593. [Google Scholar] [CrossRef]
  29. Wang, M.-K.; Hong, M.-Y.; Xu, Y.-F.; Shen, Z.-H.; Chu, Y.-M. Inequalities for generalized trigonometric and hyperbolic functions with one parameter. J. Math. Inequal. 2020, 14, 1–21. [Google Scholar] [CrossRef]
  30. Rasheed, A.; Hussain, S.; Shah, S.G.A.; Darus, M.; Lodhi, S. Majorization problem for two subclasses of meromorphic functions associated with a convolution operator. AIMS Math. 2020, 5, 5157–5170. [Google Scholar] [CrossRef]
  31. Kalsoom, H.; Rashid, S.; Idrees, M.; Safdar, F.; Akram, S.; Baleanu, D.; Chu, Y.-M. Post quantum integral inequalities of Hermite-Hadamard-type associated with co-ordinated higher-order generalized strongly pre-invex and quasi-pre-invex mappings. Symmetry 2020, 12, 443. [Google Scholar] [CrossRef]
  32. Latif, M.A.; Dragomir, S.S.; Momoniat, E. Some Fejer type integral inequalities for geometrically–arithmetically–convex functions with applications. Filomat 2018, 32, 2193–2206. [Google Scholar] [CrossRef]
  33. Zhao, T.-H.; Shi, L.; Chu, Y.-M. Convexity and concavity of the modified Bessel functions of the first kind with respect to Hölder means. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2020, 114, 1–14. [Google Scholar] [CrossRef]
  34. Ibrahim, M.; Saeed, T.; Algehyne, E.A.; Khan, M.; Chu, Y.-M. The effects of L-shaped heat source in a quarter-tube enclosure filled with MHD nanofluid on heat transfer and irreversibilities, using LBM: Numerical data, optimization using neural network algorithm (ANN). J. Therm. Anal. Calorim. 2021, 144, 2435–2448. [Google Scholar] [CrossRef]
  35. Chen, S.-B.; Rashid, S.; Noor, M.A.; Hammouch, Z.; Chu, Y.-M. New fractional approaches for n-polynomial P-convexity with applications in special function theory. Adv. Differ. Equ. 2020, 2020, 543. [Google Scholar] [CrossRef]
  36. Niaz, T.; Khan, K.A.; Pečarić, J. On refinement of Jensen’s inequality for 3–convex function at a point. Turk. J. Inequal. 2020, 4, 70–80. [Google Scholar]
  37. Pečarić, J.E. A companion inequality to Jensen–Steffensen’s inequality. J. Approx. Theory 1985, 44, 289–291. [Google Scholar] [CrossRef]
  38. Ghadimi, S.; Lan, G.; Zhang, H. Generalized uniformly optimal methods for non-linear programming. J. Sci. Comput. 2019, 79, 1854–1881. [Google Scholar] [CrossRef]
  39. Mercer, A. McD. A Variant of Jensen’s Inequality. J. Inequal. Pure Appl. Math. 2003, 4, 73. [Google Scholar]
  40. Maligranda, L.; Pečarić, J.; Persson, L.E. Weighted Favard and Berwald inequalities. J. Math. Anal. Appl. 1995, 190, 248–262. [Google Scholar] [CrossRef]
  41. Ullah, H.; Adil Khan, M.; Saeed, T. Determination of bounds for the Jensen gap and its applications. Mathematics 2021, 9, 3132. [Google Scholar] [CrossRef]
  42. Mićić, J.; Pečarić, J.; Perić, J. Refined Jensen’s operator inequality with condition on spectra. Oper. Matrices 2013, 7, 293–308. [Google Scholar]
  43. Adil Khan, M.; Ullah, H.; Saeed, T. Some estimations of the Jensen difference and applications. Math. Meth. Appl. Sci. 2023, 46, 5863–5892. [Google Scholar] [CrossRef]
  44. White, C.C.; Harrington, D.P. Application of Jensen’s inequality to adaptive suboptimal design. J. Optim. Theory Appl. 1980, 32, 89–99. [Google Scholar] [CrossRef]
  45. Viloria, J.M.; Vivas-Cortez, M. Jensen’s inequality for convex functions on N–coordinates. Appl. Math. Inf. Sci. 2020, 12, 931–935. [Google Scholar] [CrossRef]
  46. Deng, Y.; Ullah, H.; Adil Khan, M.; Iqbal, S.; Wu, S. Refinements of Jensen’s inequality via majorization results with applications in the information theory. J. Math. 2021, 2021, 1951799. [Google Scholar] [CrossRef]
  47. Slater, M.L. A companion inequality to Jensen’s inequality. J. Approx. Theory 1981, 32, 160–166. [Google Scholar] [CrossRef]
  48. Goel, A.; Meyerson, A.; Plotkin, S. Approximate majorization and fair online load balancing. In Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, USA, 9–11 January 2000; SIAM: Philadelphia, PA, USA, 2000; pp. 384–390. [Google Scholar]
  49. Barnett, N.S.; Cerone, P.; Dragomir, S.S. Majorisation inequalities for Stieltjes integrals. Appl. Math. Lett. 2009, 22, 416–421. [Google Scholar] [CrossRef]
  50. Karamata, J. Sur une inégalité relative aux fonctions convexes. Publ. Math. Univ. Belgrade 1932, 1, 145–148. [Google Scholar]
  51. Siddique, N.; Imran, M.; Khan, K.A.; Pečarić, J. Majorization inequalities via Green functions and Fink’s identity with applications to Shannon entropy. J. Inequal. Appl. 2020, 2020, 192. [Google Scholar] [CrossRef]
  52. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1952. [Google Scholar]
  53. Fuchs, L. A new proof of an inequality of Hardy–Littlewood–Pólya. Mat. Tidsskr. B 1947, 13, 53–54. [Google Scholar]
  54. Dragomir, S.S. Some majorisation type discrete inequalities for convex functions. Math. Inequal. Appl. 2004, 7, 207–216. [Google Scholar] [CrossRef]
  55. Adil Khan, M.; Wu, S.-H.; Ullah, H.; Chu, Y.-M. Discrete majorization type inequalities for convex functions on rectangles. J. Inequal. Appl. 2019, 2019, 16. [Google Scholar] [CrossRef]
  56. Wu, S.; Adil Khan, M.; Haleemzai, H.U. Refinements of majorization inequality involving convex functions via Taylor’s theorem with mean value form of the remainder. Mathematics 2019, 7, 663. [Google Scholar] [CrossRef]
  57. Saeed, T.; Adil Khan, M.; Ullah, H. Refinements of Jensen’s inequality and applications. AIMS Math. 2022, 7, 5328–5346. [Google Scholar] [CrossRef]
  58. Adil Khan, M.; Ullah, H.; Saeed, T.; Alsulami, H.H.; Sayed, Z.M.M.M.; Alshehri, A.M. Estimations of the slater gap via convexity and its applications in information theory. Math. Probl. Eng. 2022, 2022, 1750331. [Google Scholar] [CrossRef]
  59. Benish, W.A. A review of the application of information theory to clinical diagnostic testing. Entropy 2020, 22, 97. [Google Scholar] [CrossRef]