1. Introduction
Entropy is a parameter describing the disorder of objective things. Shannon [1] regarded information as the elimination or reduction of uncertainty in people's understanding of things, and he called the degree of uncertainty information entropy. Since then, many scholars have studied Shannon entropy. Using fuzzy set theory, Zadeh [2] introduced fuzzy entropy to quantify fuzziness. Following that, De Luca and Termini [3] proposed a definition of fuzzy entropy, that is, the uncertainty related to a fuzzy set. After that, many studies addressed the definition and application of fuzzy entropy, such as Bhandari and Pal [4], Pal and Pal [5], and Pal and Bezdek [6]. Furthermore, Li and Liu [7] put forward the definition of the entropy of fuzzy variables.
In 2007, in order to study the uncertainty related to belief degrees, Liu [8] established uncertainty theory as a branch of mathematics, and he refined the theory in 2009 [9]. The concept of an uncertain variable was defined in [10]. After that, Liu [9] gave a definition of the expected value of an uncertain variable, and Liu and Ha [11] gave a formula for calculating the expected value of a function of uncertain variables. Liu [8] proposed formulas based on the uncertainty distribution for calculating the variance and moments, while Yao [12] and Sheng and Samarjit [13] proposed formulas using the inverse uncertainty distribution for the same purpose. After that, Liu [8] proposed the concept of the logarithmic entropy of uncertain variables. Later, Dai and Chen [14] established a formula for calculating this entropy through the inverse uncertainty distribution. In addition, Chen and Dai [15] studied the maximum entropy principle. After that, Dai [16] proposed quadratic entropy, and Yao et al. [17] proposed the sine entropy of uncertain variables.
We know that, in order to deal with indeterminate quantities, we have two mathematical tools: probability theory and uncertainty theory. Probability theory is a powerful tool for modeling frequencies through samples, and uncertainty theory is another tool for modeling belief degrees. However, as a system becomes more and more complex, it exhibits both uncertainty and randomness. In 2013, Liu [18] established chance theory for modeling such systems. Liu [19] also proposed and studied the basic concept of the chance measure, which is a monotonically increasing set function and satisfies self-duality. Hou [20] proved that the chance measure satisfies subadditivity. Liu [19] also put forward some basic concepts, including the uncertain random variable, its chance distribution, and its digital features. Furthermore, Sheng and Yao [21] provided a formula for calculating the variance. Sheng et al. [22] proposed the concept of the logarithmic entropy of uncertain random variables in 2017. After that, Ahmadzade et al. [23] proposed the concept of quadratic entropy, and Ahmadzade et al. [24] studied partial logarithmic entropy.
Since logarithmic entropy may not be able to measure uncertainty in some cases, this paper proposes two new entropies for uncertain random variables, namely sine entropy and partial sine entropy, and discusses their properties. Furthermore, calculation formulas for sine entropy and partial sine entropy are obtained by using chance theory.
Section 2 reviews some basic concepts of chance theory. Section 3 introduces the concept and basic properties of the sine entropy of uncertain random variables. Furthermore, Section 4 proposes the concept of partial sine entropy and discusses its properties. Finally, Section 5 gives a summary.
3. Sine Entropy of Uncertain Random Variables
Logarithmic entropy may not be able to measure the uncertainty of uncertain random variables in some cases. Therefore, we propose the sine entropy of uncertain random variables as a supplementary measure for the cases where logarithmic entropy fails, as shown below.
Definition 6. Let $\Phi$ be the chance distribution of an uncertain random variable $\xi$. Then, we define the sine entropy of $\xi$ by
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi \Phi(x)\big)\,\mathrm{d}x .
\]
Obviously, in the following, $S(t) = \sin(\pi t)$ is a symmetric function with $S(t) = S(1-t)$; it reaches its unique maximum $1$ at $t = 1/2$, and it is strictly increasing in $[0, 1/2]$ and strictly decreasing in $[1/2, 1]$. By Definition 6, we have $S[\xi] \ge 0$. If $\xi = c$, where $c$ is a special uncertain random variable, that is, a constant, then $\Phi(x) = 0$ for $x < c$ and $\Phi(x) = 1$ for $x \ge c$, and hence $S[\xi] = 0$. Set $\zeta = \xi + c$. If $\Phi(x)$ is the chance distribution of $\xi$, then $\Phi(x - c)$ is the chance distribution of $\zeta$, and $S[\zeta] = S[\xi]$.
Remark 1. We can see that the sine entropy of an uncertain random variable is invariant under arbitrary translations.
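For instance (a standard worked case of our own choosing, not taken from the source), consider an uncertain random variable that degenerates to a linear uncertain variable $\mathcal{L}(a, b)$, whose distribution is $\Phi(x) = (x - a)/(b - a)$ on $[a, b]$. Then
\[
S[\xi] = \int_a^b \sin\!\Big(\pi\,\frac{x-a}{b-a}\Big)\,\mathrm{d}x
       = \frac{b-a}{\pi}\Big[-\cos\Big(\pi\,\frac{x-a}{b-a}\Big)\Big]_a^b
       = \frac{2(b-a)}{\pi},
\]
which depends only on the width $b - a$, in line with the translation invariance noted in Remark 1.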
Example 1. Let $\Psi$ be a probability distribution of random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the sine entropy of the sum $\xi = \eta + \tau$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}} \Upsilon(x - y)\,\mathrm{d}\Psi(y)\Big)\,\mathrm{d}x .
\]
Example 2. Let $\Psi$ be a probability distribution of a positive random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the sine entropy of the product $\xi = \eta\tau$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{0}^{+\infty} \Upsilon(x/y)\,\mathrm{d}\Psi(y)\Big)\,\mathrm{d}x .
\]
Example 3. Let $\Psi$ be a probability distribution of random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the sine entropy of the minimum $\xi = \eta \wedge \tau$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \big(\Psi(x) + \Upsilon(x) - \Psi(x)\Upsilon(x)\big)\Big)\,\mathrm{d}x .
\]
Example 4. Let $\Psi$ be a probability distribution of random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the sine entropy of the maximum $\xi = \eta \vee \tau$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi\,\Psi(x)\Upsilon(x)\big)\,\mathrm{d}x .
\]
Theorem 2. Let $\Phi^{-1}(\alpha)$ be the inverse chance distribution of an uncertain random variable $\xi$. Then, the sine entropy is
\[
S[\xi] = \pi \int_0^1 \Phi^{-1}(\alpha)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha .
\]
Proof. According to the known conditions, $\xi$ has an inverse chance distribution $\Phi^{-1}(\alpha)$; then $\xi$ has a chance distribution $\Phi(x)$. We can obtain
\[
\sin\big(\pi \Phi(x)\big) = \int_0^{\Phi(x)} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha
= -\int_{\Phi(x)}^{1} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha ;
\]
then the sine entropy of $\xi$ can be obtained:
\[
S[\xi] = \int_{-\infty}^{0} \int_0^{\Phi(x)} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha\,\mathrm{d}x
- \int_{0}^{+\infty} \int_{\Phi(x)}^{1} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha\,\mathrm{d}x .
\]
We can also obtain the following formula by the Fubini theorem:
\[
S[\xi] = -\int_0^{\Phi(0)} \pi\, \Phi^{-1}(\alpha)\cos(\pi \alpha)\,\mathrm{d}\alpha
- \int_{\Phi(0)}^{1} \pi\, \Phi^{-1}(\alpha)\cos(\pi \alpha)\,\mathrm{d}\alpha
= \pi \int_0^1 \Phi^{-1}(\alpha)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha .
\]
The proof is completed from this theorem. □
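To see Theorem 2 in action, here is a minimal numerical sketch in Python (our own illustration; the linear uncertain variable $\mathcal{L}(1, 4)$ and the use of SciPy quadrature are assumptions, not from the source). It evaluates the sine entropy both directly by Definition 6 and via the inverse-distribution formula of Theorem 2; both should return $2(b-a)/\pi \approx 1.9099$.

```python
import numpy as np
from scipy import integrate

# Sine entropy of a linear uncertain variable L(a, b): its chance
# distribution is Phi(x) = (x - a) / (b - a) on [a, b], with inverse
# Phi^{-1}(alpha) = a + (b - a) * alpha.
a, b = 1.0, 4.0
Phi = lambda x: np.clip((x - a) / (b - a), 0.0, 1.0)
Phi_inv = lambda alpha: a + (b - a) * alpha

# Definition 6: S[xi] = integral of sin(pi * Phi(x)) dx.
S_direct, _ = integrate.quad(lambda x: np.sin(np.pi * Phi(x)), a, b)

# Theorem 2: S[xi] = pi * integral_0^1 Phi^{-1}(alpha) cos(pi(1 - alpha)) d(alpha).
S_inverse, _ = integrate.quad(
    lambda alpha: np.pi * Phi_inv(alpha) * np.cos(np.pi * (1.0 - alpha)), 0.0, 1.0
)

# Both agree with the closed form 2 * (b - a) / pi ~= 1.9099.
print(S_direct, S_inverse, 2.0 * (b - a) / np.pi)
```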
Remark 2. Theorem 2 provides a new method for calculating the sine entropy of an uncertain random variable when its inverse chance distribution exists.
Theorem 3. Let $\Psi_1, \Psi_2, \dots, \Psi_m$ be probability distributions of independent random variables $\eta_1, \eta_2, \dots, \eta_m$, respectively, and let $\tau_1, \tau_2, \dots, \tau_n$ be independent uncertain variables. Then, the sine entropy of $\xi = f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\Big)\,\mathrm{d}x ,
\]
where, for any real numbers $y_1, \dots, y_m$, $F(x; y_1, \dots, y_m)$ is the uncertainty distribution of $f(y_1, \dots, y_m, \tau_1, \dots, \tau_n)$.
Proof. For any real numbers $y_1, \dots, y_m$, we know by Theorem 1 that $\xi$ has the chance distribution
\[
\Phi(x) = \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m),
\]
where $F(x; y_1, \dots, y_m)$ is the uncertainty distribution of $f(y_1, \dots, y_m, \tau_1, \dots, \tau_n)$. By the definition of sine entropy, we have
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\big(\pi \Phi(x)\big)\,\mathrm{d}x
= \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\Big)\,\mathrm{d}x .
\]
Thus, we proved this theorem. □
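As a sanity check (our own specialization, recovering Example 1), apply Theorem 3 with $m = n = 1$ and $f(\eta, \tau) = \eta + \tau$. The uncertainty distribution of $y + \tau$ is $F(x; y) = \Upsilon(x - y)$, so
\[
S[\eta + \tau] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}} \Upsilon(x - y)\,\mathrm{d}\Psi(y)\Big)\,\mathrm{d}x ,
\]
which is exactly the formula of Example 1.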
Corollary 1. Let $f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ be strictly decreasing with respect to $\tau_{k+1}, \dots, \tau_n$ and strictly increasing with respect to $\tau_1, \dots, \tau_k$. If $\Upsilon_1, \dots, \Upsilon_n$ are continuous, then the sine entropy of $\xi$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\Big)\,\mathrm{d}x .
\]
Proof. By Theorem 1, we know that the chance distribution of $\xi = f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ is
\[
\Phi(x) = \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m).
\]
Thus, we can obtain the stated formula by Theorem 3. The proof is completed from this corollary. □
Corollary 2. Let $f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ be strictly decreasing with respect to $\tau_{k+1}, \dots, \tau_n$ and strictly increasing with respect to $\tau_1, \dots, \tau_k$. If $\Upsilon_1, \dots, \Upsilon_n$ are regular, then the sine entropy of $\xi$ is
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\Big)\,\mathrm{d}x ,
\]
where $F(x; y_1, \dots, y_m)$ may be determined by its inverse uncertainty distribution $F^{-1}(\alpha; y_1, \dots, y_m)$, that is,
\[
F^{-1}(\alpha; y_1, \dots, y_m) = f\big(y_1, \dots, y_m, \Upsilon_1^{-1}(\alpha), \dots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1-\alpha), \dots, \Upsilon_n^{-1}(1-\alpha)\big).
\]
Proof. By Theorem 1, for any real numbers $y_1, \dots, y_m$, we know that the chance distribution of $\xi$ is
\[
\Phi(x) = \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m),
\]
where $F(x; y_1, \dots, y_m)$ is the uncertainty distribution of $f(y_1, \dots, y_m, \tau_1, \dots, \tau_n)$. From the assumption, $f$ is strictly decreasing with respect to $\tau_{k+1}, \dots, \tau_n$ and strictly increasing with respect to $\tau_1, \dots, \tau_k$. It follows that, when $\Upsilon_1, \dots, \Upsilon_n$ are regular, $F(x; y_1, \dots, y_m)$ may be determined by its inverse uncertainty distribution
\[
F^{-1}(\alpha; y_1, \dots, y_m) = f\big(y_1, \dots, y_m, \Upsilon_1^{-1}(\alpha), \dots, \Upsilon_k^{-1}(\alpha), \Upsilon_{k+1}^{-1}(1-\alpha), \dots, \Upsilon_n^{-1}(1-\alpha)\big).
\]
From Theorem 3, we can obtain
\[
S[\xi] = \int_{-\infty}^{+\infty} \sin\Big(\pi \int_{\mathbb{R}^m} F(x; y_1, \dots, y_m)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m)\Big)\,\mathrm{d}x ,
\]
where $F(x; y_1, \dots, y_m)$ may be determined by its inverse uncertainty distribution given above. The proof is completed from this corollary. □
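As a concrete instance of Corollary 2 (an illustrative choice of $f$, not from the source), take $m = 1$, $n = 2$, and $f(\eta, \tau_1, \tau_2) = \eta + \tau_1 - \tau_2$, which is strictly increasing in $\tau_1$ and strictly decreasing in $\tau_2$. Then
\[
F^{-1}(\alpha; y) = y + \Upsilon_1^{-1}(\alpha) - \Upsilon_2^{-1}(1-\alpha),
\]
and the corresponding $F(x; y)$ is substituted into the formula of Theorem 3 to obtain $S[\xi]$.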
4. Partial Sine Entropy of Uncertain Random Variables
The concept of the sine entropy of uncertain random variables was proposed above by using chance theory. However, sometimes we need to know how much of the sine entropy of an uncertain random variable is related to its uncertain variables. To answer this question, we define a new concept, the partial sine entropy of uncertain random variables, which measures how much of the sine entropy is related to the uncertain variables. It is given as follows.
Definition 7. Let $\tau_1, \dots, \tau_n$ be uncertain variables, and let $\eta_1, \dots, \eta_m$ be independent random variables. Then, the partial sine entropy of $\xi = f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ is
\[
PS[\xi] = \int_{\mathbb{R}^m} \Big( \int_{-\infty}^{+\infty} \sin\big(\pi F(x; y_1, \dots, y_m)\big)\,\mathrm{d}x \Big)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m),
\]
where, for any real numbers $y_1, \dots, y_m$, $F(x; y_1, \dots, y_m)$ is the uncertainty distribution of $f(y_1, \dots, y_m, \tau_1, \dots, \tau_n)$.
Theorem 4. Let $\Psi_1, \dots, \Psi_m$ be probability distributions of independent random variables $\eta_1, \dots, \eta_m$, respectively, and let $\tau_1, \dots, \tau_n$ be independent uncertain variables. If $f$ is a measurable function, then the partial sine entropy of $\xi = f(\eta_1, \dots, \eta_m, \tau_1, \dots, \tau_n)$ is
\[
PS[\xi] = \pi \int_{\mathbb{R}^m} \int_0^1 F^{-1}(\alpha; y_1, \dots, y_m)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m),
\]
where, for any real numbers $y_1, \dots, y_m$, $F^{-1}(\alpha; y_1, \dots, y_m)$ is the inverse uncertainty distribution of $f(y_1, \dots, y_m, \tau_1, \dots, \tau_n)$.
Proof. We know that $\sin(\pi t)$ is a differentiable function with
\[
\sin(\pi t) = \int_0^{t} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha = -\int_{t}^{1} \pi \cos(\pi \alpha)\,\mathrm{d}\alpha .
\]
Thus, by the same argument as in Theorem 2, for fixed $y_1, \dots, y_m$ we have
\[
\int_{-\infty}^{+\infty} \sin\big(\pi F(x; y_1, \dots, y_m)\big)\,\mathrm{d}x
= \pi \int_0^1 F^{-1}(\alpha; y_1, \dots, y_m)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha ;
\]
then the partial sine entropy is
\[
PS[\xi] = \int_{\mathbb{R}^m} \Big( \pi \int_0^1 F^{-1}(\alpha; y_1, \dots, y_m)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha \Big)\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m).
\]
By the Fubini theorem, we have
\[
PS[\xi] = \pi \int_{\mathbb{R}^m} \int_0^1 F^{-1}(\alpha; y_1, \dots, y_m)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_m(y_m).
\]
The proof is completed from this theorem. □
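Below is a minimal numerical sketch of Theorem 4 in Python (our own illustration; the normal $\eta$, the linear $\tau$, and the helper name `partial_sine_entropy` are assumptions, not from the source). It approximates the outer integral over the random part by Monte Carlo sampling and the inner $\alpha$-integral by the trapezoidal rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def partial_sine_entropy(f_inv, y_samples, n_alpha=2001):
    """Numerical sketch of Theorem 4: Monte Carlo over the random part,
    trapezoidal rule over alpha.

    f_inv(alpha, y): inverse uncertainty distribution F^{-1}(alpha; y) of
    f(y, tau_1, ..., tau_n) for a fixed realization y of the random part.
    """
    alpha = np.linspace(0.0, 1.0, n_alpha)
    weight = np.pi * np.cos(np.pi * (1.0 - alpha))
    inner = [np.trapz(f_inv(alpha, y) * weight, alpha) for y in y_samples]
    return float(np.mean(inner))

# Example: xi = eta + tau with eta ~ N(0, 1) (assumed) and tau a linear
# uncertain variable L(1, 4), so F^{-1}(alpha; y) = y + 1 + 3 * alpha.
samples = rng.normal(size=5000)
ps = partial_sine_entropy(lambda alpha, y: y + 1.0 + 3.0 * alpha, samples)
print(ps, 2.0 * 3.0 / np.pi)  # both approximately 1.9099 (cf. Example 5)
```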
Example 5. Let $\Psi$ be a probability distribution of random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the partial sine entropy of the sum $\xi = \eta + \tau$ is
\[
PS[\xi] = \pi \int_{\mathbb{R}} \int_0^1 \big(y + \Upsilon^{-1}(\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi(y).
\]
Proof. It is obvious that the inverse uncertainty distribution of the uncertain variable $y + \tau$ is $y + \Upsilon^{-1}(\alpha)$. By Theorem 4, we have the formula above. Thus, the proof is finished. □
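It is worth noting (an added observation consistent with Example 5) that $\int_0^1 \cos(\pi(1-\alpha))\,\mathrm{d}\alpha = 0$, so the $y$-term in Example 5 vanishes and
\[
PS[\eta + \tau] = \pi \int_0^1 \Upsilon^{-1}(\alpha)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha = S[\tau];
\]
that is, the random summand contributes nothing to the partial sine entropy of the sum.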
Example 6. Let $\Psi$ be a probability distribution of a positive random variable $\eta$, and let $\Upsilon$ be an uncertainty distribution of uncertain variable $\tau$. Then, the partial sine entropy of the product $\xi = \eta\tau$ is
\[
PS[\xi] = \pi \int_{0}^{+\infty} \int_0^1 y\,\Upsilon^{-1}(\alpha)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi(y).
\]
Proof. It is obvious that, for $y > 0$, the inverse uncertainty distribution of the uncertain variable $y\tau$ is $y\,\Upsilon^{-1}(\alpha)$. By Theorem 4, we have the formula above. Thus, the proof is finished. □
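Assuming $\eta$ is positive with a finite expected value (an added observation, not from the source), the double integral in Example 6 factorizes:
\[
PS[\eta\tau] = \Big(\int_0^{+\infty} y\,\mathrm{d}\Psi(y)\Big)\Big(\pi \int_0^1 \Upsilon^{-1}(\alpha)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\Big) = E[\eta]\,S[\tau];
\]
that is, scaling the uncertain variable by an independent positive random factor scales its partial sine entropy by the factor's mean.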
Example 7. Let $\Upsilon_1$ and $\Upsilon_2$ be uncertainty distributions of uncertain variables $\tau_1$ and $\tau_2$, respectively, and let $\Psi_1$ and $\Psi_2$ be probability distributions of random variables $\eta_1$ and $\eta_2$, respectively. Set $\xi_1 = \eta_1 + \tau_1$ and $\xi_2 = \eta_2 + \tau_2$; then
\[
PS[\xi_1 \wedge \xi_2] = \pi \int_{\mathbb{R}} \int_{\mathbb{R}} \int_0^1 \Big( \big(y_1 + \Upsilon_1^{-1}(\alpha)\big) \wedge \big(y_2 + \Upsilon_2^{-1}(\alpha)\big) \Big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2).
\]
Proof. It is obvious that the inverse uncertainty distributions of the uncertain variables $y_1 + \tau_1$ and $y_2 + \tau_2$ are $y_1 + \Upsilon_1^{-1}(\alpha)$ and $y_2 + \Upsilon_2^{-1}(\alpha)$, respectively; since the minimum is strictly increasing in both arguments, the inverse uncertainty distribution of $(y_1 + \tau_1) \wedge (y_2 + \tau_2)$ is $(y_1 + \Upsilon_1^{-1}(\alpha)) \wedge (y_2 + \Upsilon_2^{-1}(\alpha))$. By Theorem 4, we can obtain the formula above. Thus, the proof is finished. □
Example 8. Let $\Upsilon_1$ and $\Upsilon_2$ be uncertainty distributions of uncertain variables $\tau_1$ and $\tau_2$, respectively, and let $\Psi_1$ and $\Psi_2$ be probability distributions of random variables $\eta_1$ and $\eta_2$, respectively. Set $\xi_1 = \eta_1 + \tau_1$ and $\xi_2 = \eta_2 + \tau_2$; then
\[
PS[\xi_1 \vee \xi_2] = \pi \int_{\mathbb{R}} \int_{\mathbb{R}} \int_0^1 \Big( \big(y_1 + \Upsilon_1^{-1}(\alpha)\big) \vee \big(y_2 + \Upsilon_2^{-1}(\alpha)\big) \Big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2).
\]
Proof. It is obvious that the inverse uncertainty distributions of the uncertain variables $y_1 + \tau_1$ and $y_2 + \tau_2$ are $y_1 + \Upsilon_1^{-1}(\alpha)$ and $y_2 + \Upsilon_2^{-1}(\alpha)$, respectively; since the maximum is strictly increasing in both arguments, the inverse uncertainty distribution of $(y_1 + \tau_1) \vee (y_2 + \tau_2)$ is $(y_1 + \Upsilon_1^{-1}(\alpha)) \vee (y_2 + \Upsilon_2^{-1}(\alpha))$. By Theorem 4, we can obtain the formula above. Thus, the proof is finished. □
Theorem 5. Let $\tau_1, \dots, \tau_n$ be independent uncertain variables, and let $\eta_1, \dots, \eta_n$ be independent random variables. Set $\xi_1 = \eta_1 + \tau_1$, $\xi_2 = \eta_2 + \tau_2$, $\cdots$, $\xi_n = \eta_n + \tau_n$, and $\xi = f(\xi_1, \dots, \xi_n)$. If $f$ is strictly increasing with respect to $\xi_1, \dots, \xi_k$ and strictly decreasing with respect to $\xi_{k+1}, \dots, \xi_n$, then the partial sine entropy of $\xi$ is
\[
PS[\xi] = \pi \int_{\mathbb{R}^n} \int_0^1 f\big(y_1 + \Upsilon_1^{-1}(\alpha), \dots, y_k + \Upsilon_k^{-1}(\alpha), y_{k+1} + \Upsilon_{k+1}^{-1}(1-\alpha), \dots, y_n + \Upsilon_n^{-1}(1-\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\cdots\mathrm{d}\Psi_n(y_n),
\]
where $\Upsilon_j^{-1}(\alpha)$ or $\Upsilon_j^{-1}(1-\alpha)$ is the inverse uncertainty distribution of $\tau_j$, for any real numbers $y_1, \dots, y_n$.
Proof. It is obvious that, for fixed $y_1, \dots, y_n$, the inverse uncertainty distribution of the uncertain variable $f(y_1 + \tau_1, \dots, y_n + \tau_n)$ is
\[
F^{-1}(\alpha; y_1, \dots, y_n) = f\big(y_1 + \Upsilon_1^{-1}(\alpha), \dots, y_k + \Upsilon_k^{-1}(\alpha), y_{k+1} + \Upsilon_{k+1}^{-1}(1-\alpha), \dots, y_n + \Upsilon_n^{-1}(1-\alpha)\big).
\]
By Theorem 4, we can obtain the stated formula. The proof is completed from this theorem. □
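As an illustrative special case of Theorem 5 (our choice of $f$, not from the source), take $n = 2$ and $f(\xi_1, \xi_2) = \xi_1 - \xi_2$, which is strictly increasing in $\xi_1$ and strictly decreasing in $\xi_2$:
\[
PS[\xi_1 - \xi_2] = \pi \int_{\mathbb{R}^2} \int_0^1 \big(y_1 + \Upsilon_1^{-1}(\alpha) - y_2 - \Upsilon_2^{-1}(1-\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2).
\]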
Theorem 6. Let $\tau_1$ and $\tau_2$ be independent uncertain variables, and let $\eta_1$ and $\eta_2$ be independent random variables. Set $\xi_1 = \eta_1 + \tau_1$ and $\xi_2 = \eta_2 + \tau_2$. For any real numbers $a$ and $b$, we have
\[
PS[a\xi_1 + b\xi_2] = |a|\,PS[\xi_1] + |b|\,PS[\xi_2].
\]
Proof. This theorem will be proved in three steps.
Step 1: We prove $PS[a\xi_1] = |a|\,PS[\xi_1]$. If $a > 0$, then $a\tau_1$ has the inverse uncertainty distribution $a\Upsilon_1^{-1}(\alpha)$, where $\Upsilon_1^{-1}(\alpha)$ is the inverse uncertainty distribution of $\tau_1$. We have
\[
PS[a\xi_1] = \pi \int_{\mathbb{R}} \int_0^1 \big(ay + a\Upsilon_1^{-1}(\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y) = a\,PS[\xi_1].
\]
If $a < 0$, then $a\tau_1$ has the inverse uncertainty distribution $a\Upsilon_1^{-1}(1-\alpha)$, where $\Upsilon_1^{-1}(\alpha)$ is the inverse uncertainty distribution of $\tau_1$. We have
\[
PS[a\xi_1] = \pi \int_{\mathbb{R}} \int_0^1 \big(ay + a\Upsilon_1^{-1}(1-\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y) = -a\,PS[\xi_1].
\]
If $a = 0$, then we immediately have $PS[a\xi_1] = 0 = |a|\,PS[\xi_1]$. Thus, we always have
\[
PS[a\xi_1] = |a|\,PS[\xi_1].
\]
Step 2: We prove $PS[\xi_1 + \xi_2] = PS[\xi_1] + PS[\xi_2]$. The inverse uncertainty distribution of $\tau_1 + \tau_2$ is $\Upsilon_1^{-1}(\alpha) + \Upsilon_2^{-1}(\alpha)$. We can obtain
\[
PS[\xi_1 + \xi_2] = \pi \int_{\mathbb{R}^2} \int_0^1 \big(y_1 + y_2 + \Upsilon_1^{-1}(\alpha) + \Upsilon_2^{-1}(\alpha)\big)\cos\big(\pi(1-\alpha)\big)\,\mathrm{d}\alpha\,\mathrm{d}\Psi_1(y_1)\,\mathrm{d}\Psi_2(y_2) = PS[\xi_1] + PS[\xi_2]
\]
by Theorem 5.
Step 3: By Step 1 and Step 2, for any real numbers $a$ and $b$, we can obtain
\[
PS[a\xi_1 + b\xi_2] = PS[a\xi_1] + PS[b\xi_2] = |a|\,PS[\xi_1] + |b|\,PS[\xi_2].
\]
Thus, the proof is finished. □
Remark 3. From Theorem 6, we see that the partial sine entropy satisfies positive linearity for any real numbers $a$ and $b$.
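As a quick sanity check of Theorem 6 (a worked instance with linear uncertain variables of our own choosing), let $\tau_i \sim \mathcal{L}(a_i, b_i)$, so that $PS[\xi_i] = S[\tau_i] = 2(b_i - a_i)/\pi$. For $a, b > 0$, the uncertain part $a\tau_1 + b\tau_2$ is the linear uncertain variable $\mathcal{L}(aa_1 + ba_2,\, ab_1 + bb_2)$, whence
\[
PS[a\xi_1 + b\xi_2] = \frac{2\big(a(b_1 - a_1) + b(b_2 - a_2)\big)}{\pi} = a\,PS[\xi_1] + b\,PS[\xi_2],
\]
in agreement with the theorem.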