Article

Kullback–Leibler Divergence and Mutual Information of Partitions in Product MV Algebras

by Dagmar Markechová 1,* and Beloslav Riečan 2,3
1 Department of Mathematics, Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, A. Hlinku 1, SK-949 01 Nitra, Slovakia
2 Department of Mathematics, Faculty of Natural Sciences, Matej Bel University, Tajovského 40, SK-974 01 Banská Bystrica, Slovakia
3 Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, SK-814 73 Bratislava, Slovakia
* Author to whom correspondence should be addressed.
Entropy 2017, 19(6), 267; https://doi.org/10.3390/e19060267
Submission received: 11 May 2017 / Revised: 5 June 2017 / Accepted: 7 June 2017 / Published: 10 June 2017
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The purpose of the paper is to introduce, using the known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and to examine algebraic properties of the proposed measures. In particular, the convexity of Kullback–Leibler divergence with respect to states in product MV algebras is proved, and chain rules for mutual information and Kullback–Leibler divergence are established. In addition, the data processing inequality for conditionally independent partitions in product MV algebras is proved.

1. Introduction

The notions of entropy and mutual information are fundamental concepts in information theory [1]; they are used as measures of the information obtained from a realization of the considered experiments. The standard approach in information theory is based on the Shannon entropy [2]. Consider a finite measurable partition $A$ of a probability space $(\Omega, S, P)$ with probabilities $p_1, \dots, p_n$ of the corresponding elements of $A$. We recall that the Shannon entropy of $A$ is the number $H(A) = \sum_{i=1}^{n} F(p_i)$, where the function $F: [0, \infty) \to \mathbb{R}$ is defined by $F(x) = -x \log x$ if $x > 0$, and $F(0) = 0$. Perhaps the crucial point for applications of the Shannon entropy in other scientific fields is the discovery of Kolmogorov and Sinai [3] (see also [4,5]). They proved the existence of non-isomorphic Bernoulli shifts, i.e., shifts describing independent repetitions of random experiments with finitely many outcomes. If two dynamical systems are isomorphic, they have the same Kolmogorov–Sinai entropy; Kolmogorov and Sinai thus constructed two Bernoulli shifts with different entropies, hence non-isomorphic ones. It is natural that the mentioned notion of entropy has been carried over to many mathematical structures. In [6], we generalized the notion of Kolmogorov–Sinai entropy to the case where the considered probability space is a fuzzy probability space $(\Omega, M, \mu)$ defined by Piasecki [7]. This structure can serve as an alternative mathematical model of probability theory for situations where the observed events are described unclearly or vaguely (so-called fuzzy events). Other fuzzy generalizations of Shannon's and Kolmogorov–Sinai's entropy can be found, e.g., in [8,9,10,11,12,13,14,15,16,17]. It is known that there are many possibilities for defining operations with fuzzy sets; an overview can be found in [18]. It should be noted that while the model presented in [6] was based on the Zadeh connectives [19], in our recently published paper [14] the Lukasiewicz connectives were used to define the fuzzy set operations. In [20], the mutual information of fuzzy partitions of a given fuzzy probability space $(\Omega, M, \mu)$ was defined, and it was shown that the entropy of fuzzy partitions introduced and studied in [6] can be considered as a special case of their mutual information.
In classical information theory, the mutual information is a special case of a more general quantity called the Kullback–Leibler divergence (K–L divergence, for short), which was originally introduced by Kullback and Leibler in 1951 [21] (see also [22]) as the divergence between two probability distributions. As a mathematical tool it plays an important role in the stability analysis of master equations [23] and Fokker–Planck equations [24], and in the study of isothermal equilibrium fluctuations and transient non-equilibrium deviations [25] (see also [24,26]). In [27], we introduced the concept of K–L divergence for the case of fuzzy probability spaces.
A natural generalization of families of fuzzy sets is the notion of an MV algebra, introduced by Chang [28]. An MV algebra is an algebraic structure that models the Lukasiewicz multivalued logic, namely the fragment of that calculus dealing with the basic logical connectives "and", "or", and "not" in a multivalued context. MV algebras play a role in multivalued logic similar to that of Boolean algebras in classical two-valued logic. Recall also that families of fuzzy sets can be embedded into suitable MV algebras. MV algebras have been studied by many authors (see, e.g., [29,30,31,32,33]) and, of course, there are also many results about the entropy on this structure (cf. [34,35]). The theory of fuzzy sets is a rapidly and massively developing area of theoretical and applied mathematical research. In addition to MV algebras, generalizations of MV algebras such as D-posets (cf. [36,37,38]), effect algebras (cf. [39]), and A-posets (cf. [40,41]) are currently the subject of intensive research. Some results about the entropy on these structures can be found, e.g., in [42,43,44].
A special class of MV algebras is the class of product MV algebras. They were introduced independently in [45], from the point of view of probability theory, and in [46], from the point of view of mathematical logic. Product MV algebras have been studied, e.g., in [47,48]. A suitable theory of entropy of Kolmogorov type for the case of product MV algebras was constructed in [35,49,50].
The purpose of this contribution is to define, using the results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and to study the properties of the suggested measures. The main results of the contribution are presented in Section 3 and Section 4. In Section 3, the notions of mutual information and conditional mutual information in product MV algebras are introduced and basic properties of the suggested measures are proved, inter alia, the data processing inequality for conditionally independent partitions. In Section 4, we define the Kullback–Leibler divergence in product MV algebras and its conditional version and examine the algebraic properties of the proposed measures. Our results are summarized in the final section.

2. Basic Definitions, Notations and Facts

In this section, we recall some definitions and basic facts which will be used in what follows. An MV algebra [30] is a system $(M, \oplus, \odot, \neg, 0, 1)$, where $M$ is a non-empty set, $\oplus$ and $\odot$ are binary operations on $M$, $\neg$ is a unary operation on $M$, and 0, 1 are fixed elements of $M$, such that the following conditions are satisfied:
(i) $a \oplus b = b \oplus a$;
(ii) $a \oplus (b \oplus c) = (a \oplus b) \oplus c$;
(iii) $a \oplus 0 = a$;
(iv) $a \oplus 1 = 1$;
(v) $\neg(\neg a) = a$;
(vi) $\neg 0 = 1$;
(vii) $a \oplus \neg a = 1$;
(viii) $\neg(\neg a \oplus b) \oplus b = \neg(\neg b \oplus a) \oplus a$;
(ix) $a \odot b = \neg(\neg a \oplus \neg b)$.
An example of an MV algebra is the real interval $[0, 1]$ equipped with the operations $x \oplus y = \min(1, x + y)$, $x \odot y = \max(0, x + y - 1)$, and $\neg x = 1 - x$. It is interesting that any MV algebra has a similar structure. In fact, by the Mundici theorem [33], any MV algebra can be represented by a lattice-ordered Abelian group (shortly, an Abelian l-group). Recall that an Abelian l-group is an algebraic system $(G, +, \leq)$, where $(G, +)$ is an Abelian group, $(G, \leq)$ is a partially ordered set being a lattice, and $a \leq b$ implies $a + c \leq b + c$.
Let $(G, +, \leq)$ be an Abelian l-group, let 0 be the neutral element of $(G, +)$, and let $u \in G$, $u > 0$. On the interval $[0, u] = \{h \in G;\ 0 \leq h \leq u\}$ we define the following operations: $\neg a = u - a$, $a \oplus b = (a + b) \wedge u$, $a \odot b = (a + b - u) \vee 0$. Then the system $M_G = ([0, u], \oplus, \odot, \neg, 0, u)$ becomes an MV algebra. The Mundici theorem states that to any MV algebra $M$ there exists an Abelian l-group $G$ with a strong unit $u$ (i.e., to every $a \in G$ there exists $n \in \mathbb{N}$ with the property $a \leq nu$) such that $M \cong M_G$.
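To make this standard example concrete, here is a small sketch (ours, not from the paper) implementing the Lukasiewicz operations on $[0, 1]$ and spot-checking a few of the axioms (i)–(ix) on a grid of sample points; the tolerance guards against floating-point rounding.
```python
# A minimal sketch (ours): the standard MV algebra on [0, 1] with the
# Lukasiewicz operations; we spot-check axioms (i), (iii), (vii), (ix).

def oplus(a, b):      # truncated addition: x (+) y = min(1, x + y)
    return min(1.0, a + b)

def odot(a, b):       # truncated product: x (.) y = max(0, x + y - 1)
    return max(0.0, a + b - 1.0)

def neg(a):           # complement: not x = 1 - x
    return 1.0 - a

eps = 1e-12
grid = [i / 10 for i in range(11)]
for a in grid:
    for b in grid:
        assert oplus(a, b) == oplus(b, a)                          # (i)
        assert abs(odot(a, b) - neg(oplus(neg(a), neg(b)))) < eps  # (ix)
    assert abs(oplus(a, 0.0) - a) < eps                            # (iii)
    assert abs(oplus(a, neg(a)) - 1.0) < eps                       # (vii)
print("Lukasiewicz MV algebra axioms hold on the sample grid.")
```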
In this contribution we shall consider MV algebras with a product. We recall that the definition of a product MV algebra is based on Mundici's categorical representation of an MV algebra by an Abelian l-group; i.e., the sum in the following definition of a product MV algebra, and subsequently in the rest of the text, means the sum in the Abelian l-group associated with the given MV algebra. Similarly, the element $u$ is a strong unit of this group. More details can be found in [45,46].
Definition 1.
A product MV algebra is a couple $(M, \cdot)$, where $M$ is an MV algebra and $\cdot$ is a commutative and associative binary operation on $M$ satisfying the following conditions:
(i) for any $a \in M$, $u \cdot a = a$;
(ii) if $a, b, c \in M$ with $a + b \leq u$, then $c \cdot a + c \cdot b \leq u$, and $c \cdot (a + b) = c \cdot a + c \cdot b$.
In addition, we shall consider a finitely additive state defined on a product MV algebra.
Definition 2
[30]. Let $(M, \cdot)$ be a product MV algebra. A map $m: M \to [0, 1]$ is said to be a state if the following properties are satisfied:
(i) $m(u) = 1$;
(ii) if $a = \sum_{i=1}^{n} a_i$, then $m(a) = \sum_{i=1}^{n} m(a_i)$.
For product MV algebras, a suitable entropy theory was provided in [35,49,50]. In the following, we present the main idea and some results of this theory which will be used in this contribution.
Definition 3.
By a partition in a product MV algebra $(M, \cdot)$ we mean a finite collection $A = \{a_1, \dots, a_n\} \subset M$ such that $\sum_{i=1}^{n} a_i = u$.
Let $m$ be a state on a product MV algebra $(M, \cdot)$. On the set of all partitions of $(M, \cdot)$, the relation $\prec$ is defined in the following way. Let $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ be two partitions of $(M, \cdot)$. We say that $B$ is a refinement of $A$ (with respect to $m$), and write $A \prec B$, if there exists a partition $I(1), I(2), \dots, I(n)$ of the set $\{1, 2, \dots, k\}$ such that $m(a_i) = \sum_{j \in I(i)} m(b_j)$ for every $i = 1, 2, \dots, n$. Given two partitions $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ of $(M, \cdot)$, their join $A \vee B$ is defined as the system $A \vee B = \{a_i \cdot b_j;\ i = 1, \dots, n,\ j = 1, \dots, k\}$ if $A \neq B$, and $A \vee A = A$. Since $\sum_{i=1}^{n} \sum_{j=1}^{k} a_i \cdot b_j = (\sum_{i=1}^{n} a_i) \cdot (\sum_{j=1}^{k} b_j) = u \cdot (\sum_{j=1}^{k} b_j) = \sum_{j=1}^{k} b_j = u$, the system $A \vee B$ is a partition of $(M, \cdot)$, too. If $A_1, A_2, \dots, A_n$ are partitions in a product MV algebra $(M, \cdot)$, then we put $\vee_{i=1}^{n} A_i = A_1 \vee A_2 \vee \dots \vee A_n$.
Let $A = \{a_1, \dots, a_n\}$ be a partition in a product MV algebra $(M, \cdot)$ and let $m$ be a state on $(M, \cdot)$. Then the entropy of $A$ with respect to $m$ is defined by Shannon's formula:
$H_m(A) = \sum_{i=1}^{n} F(m(a_i))$, (1)
where:
$F: [0, \infty) \to \mathbb{R}$, $F(x) = -x \log x$ if $x > 0$, and $F(0) = 0$.
If $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ are two partitions of $(M, \cdot)$, then the conditional entropy of $A$ given $B$ is defined by:
$H_m(A / B) = -\sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i \cdot b_j)}{m(b_j)}$.
In accordance with the classical theory, the log is to the base 2 and the entropy is expressed in bits. Note that we use the convention (based on continuity arguments) that $x \log \frac{x}{0} = \infty$ if $x > 0$, and $0 \log \frac{0}{x} = 0$ if $x \geq 0$.
Example 1.
Consider any product MV algebra $(M, \cdot)$ and a state $m$ defined on $M$. Then the set $E = \{u\}$ is a partition of $(M, \cdot)$ such that $E \prec A$ for any partition $A$ of $(M, \cdot)$; its entropy is $H_m(E) = 0$. Let $a \in M$ such that $m(a) = p$, where $p \in (0, 1)$. Evidently, $m(u - a) = 1 - p$, and the set $A = \{a, u - a\}$ is a partition of $(M, \cdot)$ with entropy $H_m(A) = -p \log p - (1 - p) \log(1 - p)$. In particular, if $p = \frac{1}{2}$, then $H_m(A) = \log 2 = 1$ bit.
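For instance, Shannon's formula can be evaluated directly from the state values of a partition; the following sketch (ours, not part of the paper) reproduces the figures of Example 1, with the logarithm to the base 2.
```python
from math import log2

def F(x):
    # F(x) = -x log x for x > 0, and F(0) = 0 (log to the base 2)
    return -x * log2(x) if x > 0 else 0.0

def entropy(state_values):
    # H_m(A) = sum of F(m(a_i)) over the elements a_i of the partition A
    return sum(F(p) for p in state_values)

print(entropy([1.0]))        # 0.0 = H_m(E) for the partition E = {u}
print(entropy([0.5, 0.5]))   # 1.0 bit for A = {a, u - a} with p = 1/2
```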
The entropy and the conditional entropy of partitions in a product MV algebra satisfy all properties analogous to the properties of Shannon's entropy of measurable partitions in the classical case; the proofs can be found in [35,49,50]. We present those that will be further exploited. Let $A$, $B$, $C$ be any partitions of a product MV algebra $(M, \cdot)$. Then the following properties hold: (E1) $H_m(A) \geq 0$; (E2) $B \prec C$ implies $H_m(A / C) \leq H_m(A / B)$; (E3) $H_m(A \vee B / C) = H_m(A / C) + H_m(B / C \vee A)$; (E4) $H_m(A \vee B) = H_m(A) + H_m(B / A)$; (E5) $H_m(A \vee B / C) \leq H_m(A / C) + H_m(B / C)$.

3. Mutual Information of Partitions in Product MV Algebras

In this section the results concerning the entropy in product MV algebras are used in developing information theory for the case of product MV algebras. We define the notions of mutual information and conditional mutual information of partitions in a product MV algebra and prove basic properties of the proposed measures.
Definition 4.
Let A ,   B be partitions in a given product MV algebra ( M ,   ) . Then we define the mutual information of A and B by the formula:
$I_m(A, B) = H_m(A) - H_m(A / B)$. (2)
Remark 1.
As a simple consequence of (E4) we get:
$I_m(A, B) = H_m(A) + H_m(B) - H_m(A \vee B)$. (3)
Subsequently, we see that $I_m(A, A) = H_m(A)$; i.e., the entropy of partitions in product MV algebras can be considered as a special case of their mutual information. Moreover, we see that $I_m(A, B) = I_m(B, A)$, and hence we can also write:
$I_m(A, B) = H_m(B) - H_m(B / A)$. (4)
Example 2.
Consider the measurable space $(\Omega, S)$, where $\Omega$ is the unit interval $[0, 1]$ and $S$ is the $\sigma$-algebra of all Borel subsets of $[0, 1]$. Let $F$ be the family of all $S$-measurable functions $f: \Omega \to [0, 1]$ (i.e., $f^{-1}([\alpha, \beta]) \in S$ for every $[\alpha, \beta] \subset [0, 1]$). $F$ is the so-called full tribe of fuzzy sets [30] (see also [14,29]); it is closed also under the natural product of fuzzy sets and represents a special case of product MV algebras. On the product MV algebra $F$ we define a state $m$ by the formula $m(f) = \int_0^1 f(x)\,dx$ for every $f \in F$. Evidently, the sets $A = \{x, 1 - x\}$ and $B = \{x^2, 1 - x^2\}$ are two partitions of $F$ with the $m$-state values $\frac{1}{2}, \frac{1}{2}$ and $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements of $A$ and $B$, respectively. By simple calculations we obtain the entropies $H_m(A) = \log 2 = 1$ bit and $H_m(B) = -\frac{1}{3} \log \frac{1}{3} - \frac{2}{3} \log \frac{2}{3} \doteq 0.9183$ bit. The join of $A$ and $B$ is the system $A \vee B = \{x^3, x^2(1 - x), x(1 - x^2), (1 - x)(1 - x^2)\}$ with the $m$-state values $\frac{1}{4}, \frac{1}{12}, \frac{1}{4}, \frac{5}{12}$ of the corresponding elements. The entropy of $A \vee B$ is the number:
$H_m(A \vee B) = -\frac{1}{4} \log \frac{1}{4} - \frac{1}{12} \log \frac{1}{12} - \frac{1}{4} \log \frac{1}{4} - \frac{5}{12} \log \frac{5}{12} \doteq 1.8250$ bit.
Since:
$H_m(A / B) = -m(x^3) \log \frac{m(x^3)}{m(x^2)} - m(x(1 - x^2)) \log \frac{m(x(1 - x^2))}{m(1 - x^2)} - m((1 - x) \cdot x^2) \log \frac{m((1 - x) \cdot x^2)}{m(x^2)} - m((1 - x)(1 - x^2)) \log \frac{m((1 - x)(1 - x^2))}{m(1 - x^2)} = -\frac{1}{4} \log \frac{1/4}{1/3} - \frac{1}{4} \log \frac{1/4}{2/3} - \frac{1}{12} \log \frac{1/12}{1/3} - \frac{5}{12} \log \frac{5/12}{2/3} \doteq 0.9067$ bit,
the mutual information of A and B is the number:
$I_m(A, B) = H_m(A) - H_m(A / B) = 1 - 0.9067 = 0.0933$ bit.
We can also see that Equation (3) is fulfilled:
$H_m(A) + H_m(B) - H_m(A \vee B) = 1 + 0.9183 - 1.8250 = 0.0933$ bit.
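These figures can be reproduced numerically. The sketch below is our illustration, not part of the paper; it approximates the defining integral of the state $m$ by a midpoint Riemann sum.
```python
from math import log2

N = 100_000
xs = [(i + 0.5) / N for i in range(N)]

def m(f):
    # the state of Example 2: m(f) = integral of f over [0, 1] (midpoint rule)
    return sum(f(x) for x in xs) / N

def H(P):
    # entropy of a partition P of the tribe, log to the base 2
    vals = [m(f) for f in P]
    return -sum(v * log2(v) for v in vals)

A = [lambda x: x, lambda x: 1 - x]
B = [lambda x: x ** 2, lambda x: 1 - x ** 2]
# the join A v B consists of the pointwise products a_i * b_j
AB = [lambda x, a=a, b=b: a(x) * b(x) for a in A for b in B]

H_A, H_B, H_AB = H(A), H(B), H(AB)
print(H_A, H_B, H_AB)        # ~1.0, ~0.9183, ~1.8250 bit
print(H_A + H_B - H_AB)      # I_m(A, B) ~0.0933 bit, Equation (3)
print(H_A - (H_AB - H_B))    # the same value via (2) and (E4)
```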
In the following we will use the assertions of Propositions 1 and 2.
Proposition 1.
If $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ are two partitions of $(M, \cdot)$, then we have:
(i) $m(a_i) = \sum_{j=1}^{k} m(a_i \cdot b_j)$, for $i = 1, 2, \dots, n$;
(ii) $m(b_j) = \sum_{i=1}^{n} m(a_i \cdot b_j)$, for $j = 1, 2, \dots, k$.
Proof. 
By the assumption, $\sum_{j=1}^{k} b_j = u$; therefore, according to Definitions 1 and 2, we get:
$m(a_i) = m(u \cdot a_i) = m((\sum_{j=1}^{k} b_j) \cdot a_i) = m(\sum_{j=1}^{k} (b_j \cdot a_i)) = \sum_{j=1}^{k} m(a_i \cdot b_j)$, for $i = 1, 2, \dots, n$.
The equality (ii) can be obtained in the same way.
From the following proposition it follows that, for any partitions $A$, $B$ of $(M, \cdot)$, the set $A \vee B$ is a common refinement of $A$ and $B$.
Proposition 2.
$A \prec A \vee B$, for any partitions $A$, $B$ of $(M, \cdot)$.
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$. Since the set $A \vee B$ is indexed by $\{(i, j);\ i = 1, \dots, n,\ j = 1, 2, \dots, k\}$, we put $I(i) = \{(i, 1), \dots, (i, k)\}$, $i = 1, 2, \dots, n$. In view of Proposition 1, we have:
$m(a_i) = \sum_{j=1}^{k} m(a_i \cdot b_j) = \sum_{(l, j) \in I(i)} m(a_l \cdot b_j)$, for $i = 1, 2, \dots, n$.
This means that $A \prec A \vee B$.
Theorem 1.
For any partitions A ,   B and C in a product MV algebra ( M ,   ) , we have:
$I_m(A \vee B, C) \geq I_m(A, C)$.
Proof. 
By Equation (2) and the properties (E3) and (E4), we get:
$I_m(A \vee B, C) = H_m(A \vee B) - H_m(A \vee B / C) = H_m(A) + H_m(B / A) - H_m(A / C) - H_m(B / C \vee A) = I_m(A, C) + H_m(B / A) - H_m(B / C \vee A)$.
According to Proposition 2, $A \prec C \vee A$, and therefore, by (E2), $H_m(B / A) \geq H_m(B / C \vee A)$. This yields the inequality:
$I_m(A \vee B, C) \geq I_m(A, C)$.
Proposition 3.
If $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ are two partitions of $(M, \cdot)$, then:
$I_m(A, B) = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i \cdot b_j)}{m(a_i)\, m(b_j)}$. (5)
Proof. 
Since, by Proposition 1, it holds that:
$m(a_i) = \sum_{j=1}^{k} m(a_i \cdot b_j)$, for $i = 1, 2, \dots, n$,
we get:
$I_m(A, B) = -\sum_{i=1}^{n} m(a_i) \log m(a_i) + \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i \cdot b_j)}{m(b_j)} = -\sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log m(a_i) + \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i \cdot b_j)}{m(b_j)} = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \left[ \log \frac{m(a_i \cdot b_j)}{m(b_j)} - \log m(a_i) \right] = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i \cdot b_j)}{m(a_i)\, m(b_j)}$.
Definition 5.
Two partitions $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ of $(M, \cdot)$ are called statistically independent if $m(a_i \cdot b_j) = m(a_i) \cdot m(b_j)$, for $i = 1, 2, \dots, n$, $j = 1, 2, \dots, k$.
Theorem 2.
Let $A$, $B$ be partitions in a product MV algebra $(M, \cdot)$. Then $I_m(A, B) \geq 0$, with equality if and only if the partitions $A$, $B$ are statistically independent.
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$. Then, using the inequality $\log x \leq x - 1$, which is valid for all real numbers $x > 0$ with equality if and only if $x = 1$, we get:
$m(a_i \cdot b_j) \log \frac{m(a_i)\, m(b_j)}{m(a_i \cdot b_j)} \leq m(a_i \cdot b_j) \left[ \frac{m(a_i)\, m(b_j)}{m(a_i \cdot b_j)} - 1 \right] = m(a_i)\, m(b_j) - m(a_i \cdot b_j)$.
The equality holds if and only if $\frac{m(a_i)\, m(b_j)}{m(a_i \cdot b_j)} = 1$, i.e., when $m(a_i \cdot b_j) = m(a_i) \cdot m(b_j)$. Therefore, using Equation (5) and Proposition 1, we have:
$-I_m(A, B) = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) \log \frac{m(a_i)\, m(b_j)}{m(a_i \cdot b_j)} \leq \sum_{i=1}^{n} \sum_{j=1}^{k} [m(a_i)\, m(b_j) - m(a_i \cdot b_j)] = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i)\, m(b_j) - \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i \cdot b_j) = \sum_{i=1}^{n} m(a_i) \sum_{j=1}^{k} m(b_j) - \sum_{i=1}^{n} m(a_i) = m(\sum_{i=1}^{n} a_i)\, m(\sum_{j=1}^{k} b_j) - m(\sum_{i=1}^{n} a_i) = m(u)\, m(u) - m(u) = 1 \cdot 1 - 1 = 0$.
It follows that $I_m(A, B) \geq 0$, with equality if and only if $m(a_i \cdot b_j) = m(a_i) \cdot m(b_j)$ for $i = 1, 2, \dots, n$, $j = 1, 2, \dots, k$, i.e., if and only if the partitions $A$, $B$ are statistically independent.
Theorem 2 implies the subadditivity and additivity of entropy in a product MV algebra, as shown by the following theorem.
Theorem 3
(Subadditivity and additivity of entropy). For arbitrary partitions $A$, $B$ in a product MV algebra $(M, \cdot)$, it holds that $H_m(A \vee B) \leq H_m(A) + H_m(B)$, with equality if and only if the partitions $A$, $B$ are statistically independent.
Proof. 
The claim follows from Equation (3) and Theorem 2.
Theorem 4.
For arbitrary partitions $A$, $B$ in a product MV algebra $(M, \cdot)$, it holds that $H_m(A / B) \leq H_m(A)$, with equality if and only if the partitions $A$, $B$ are statistically independent.
Proof. 
The assertion is a simple consequence of Equation (2) and Theorem 2.
Definition 6.
Let $A$, $B$ and $C$ be partitions in a given product MV algebra $(M, \cdot)$. Then the conditional mutual information of $A$ and $B$ given $C$ is defined by the formula:
$I_m(A, B / C) = H_m(A / C) - H_m(A / B \vee C)$. (6)
Remark 2.
Notice that the conditional mutual information is nonnegative: since $C \prec B \vee C$ by Proposition 2, the property (E2) gives $H_m(A / C) \geq H_m(A / B \vee C)$.
Theorem 5.
For any partitions A ,   B and C in a product MV algebra ( M ,   ) , we have:
$I_m(A, B \vee C) = I_m(A, C) + I_m(A, B / C) = I_m(A, B) + I_m(A, C / B)$.
Proof. 
Let us calculate:
$I_m(A, C) + I_m(A, B / C) = H_m(A) - H_m(A / C) + H_m(A / C) - H_m(A / B \vee C) = H_m(A) - H_m(A / B \vee C) = I_m(A, B \vee C)$.
In a similar way we obtain also the second equality.
Theorem 6
(Chain rules). Let $A_1, A_2, \dots, A_n$ and $C$ be partitions in a product MV algebra $(M, \cdot)$. Then, for $n = 2, 3, \dots$, the following equalities hold:
(i) $H_m(A_1 \vee A_2 \vee \dots \vee A_n) = H_m(A_1) + \sum_{i=2}^{n} H_m(A_i / \vee_{k=1}^{i-1} A_k)$;
(ii) $H_m(\vee_{i=1}^{n} A_i / C) = H_m(A_1 / C) + \sum_{i=2}^{n} H_m(A_i / (\vee_{k=1}^{i-1} A_k) \vee C)$;
(iii) $I_m(\vee_{i=1}^{n} A_i, C) = I_m(A_1, C) + \sum_{i=2}^{n} I_m(A_i, C / \vee_{k=1}^{i-1} A_k)$.
Proof. 
(i) By the property (E4) we have:
$H_m(A_1 \vee A_2) = H_m(A_1) + H_m(A_2 / A_1)$.
By (E3) and (E4) we get:
$H_m(A_1 \vee A_2 \vee A_3) = H_m(A_1) + H_m(A_2 \vee A_3 / A_1) = H_m(A_1) + H_m(A_2 / A_1) + H_m(A_3 / A_2 \vee A_1) = H_m(A_1) + \sum_{i=2}^{3} H_m(A_i / \vee_{k=1}^{i-1} A_k)$.
Now let us suppose that the result is true for a given $n \in \mathbb{N}$. Then:
$H_m(A_1 \vee A_2 \vee \dots \vee A_n \vee A_{n+1}) = H_m(A_1 \vee A_2 \vee \dots \vee A_n) + H_m(A_{n+1} / A_1 \vee A_2 \vee \dots \vee A_n) = H_m(A_1) + \sum_{i=2}^{n} H_m(A_i / \vee_{k=1}^{i-1} A_k) + H_m(A_{n+1} / A_1 \vee A_2 \vee \dots \vee A_n) = H_m(A_1) + \sum_{i=2}^{n+1} H_m(A_i / \vee_{k=1}^{i-1} A_k)$.
(ii) For $n = 2$, using (E3) we obtain:
$H_m(A_1 \vee A_2 / C) = H_m(A_1 / C) + H_m(A_2 / A_1 \vee C)$.
Suppose that the result is true for a given $n \in \mathbb{N}$. Then:
$H_m(A_1 \vee A_2 \vee \dots \vee A_n \vee A_{n+1} / C) = H_m(\vee_{i=1}^{n} A_i / C) + H_m(A_{n+1} / A_1 \vee \dots \vee A_n \vee C) = H_m(A_1 / C) + \sum_{i=2}^{n} H_m(A_i / (\vee_{k=1}^{i-1} A_k) \vee C) + H_m(A_{n+1} / (\vee_{k=1}^{n} A_k) \vee C) = H_m(A_1 / C) + \sum_{i=2}^{n+1} H_m(A_i / (\vee_{k=1}^{i-1} A_k) \vee C)$.
(iii) By Equation (2), the equalities (i) and (ii) of this theorem, and Equation (6), we obtain:
$I_m(\vee_{i=1}^{n} A_i, C) = H_m(\vee_{i=1}^{n} A_i) - H_m(\vee_{i=1}^{n} A_i / C) = H_m(A_1) + \sum_{i=2}^{n} H_m(A_i / \vee_{k=1}^{i-1} A_k) - H_m(A_1 / C) - \sum_{i=2}^{n} H_m(A_i / (\vee_{k=1}^{i-1} A_k) \vee C) = I_m(A_1, C) + \sum_{i=2}^{n} \left( H_m(A_i / \vee_{k=1}^{i-1} A_k) - H_m(A_i / (\vee_{k=1}^{i-1} A_k) \vee C) \right) = I_m(A_1, C) + \sum_{i=2}^{n} I_m(A_i, C / \vee_{k=1}^{i-1} A_k)$.
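For $n = 3$, chain rule (i) can be checked numerically on the full tribe of Example 2. The sketch below is our illustration (the partition $C = \{x^3, 1 - x^3\}$ is our own choice), with the conditional entropies computed directly from their defining formula rather than via (E4), so the identity is tested non-trivially up to quadrature error.
```python
from math import log2

N = 100_000
xs = [(i + 0.5) / N for i in range(N)]

def m(f):
    # the state of Example 2, approximated by a midpoint Riemann sum
    return sum(f(x) for x in xs) / N

def join(P, Q):
    # join of two partitions of the tribe: pointwise products p * q
    return [lambda x, p=p, q=q: p(x) * q(x) for p in P for q in Q]

def H(P):
    vals = [m(f) for f in P]
    return -sum(v * log2(v) for v in vals)

def Hcond(Q, P):
    # H_m(Q / P) = -sum_{p, q} m(p * q) log( m(p * q) / m(p) )
    total = 0.0
    for p in P:
        mp = m(p)
        for q in Q:
            mpq = m(lambda x, p=p, q=q: p(x) * q(x))
            if mpq > 0:
                total -= mpq * log2(mpq / mp)
    return total

A = [lambda x: x, lambda x: 1 - x]
B = [lambda x: x ** 2, lambda x: 1 - x ** 2]
C = [lambda x: x ** 3, lambda x: 1 - x ** 3]

lhs = H(join(join(A, B), C))
rhs = H(A) + Hcond(B, A) + Hcond(C, join(A, B))
print(lhs, rhs)  # the two sides agree up to quadrature error
```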
Definition 7.
Let $A$, $B$ and $C$ be partitions in a product MV algebra $(M, \cdot)$. We say that $A$ is conditionally independent of $C$ given $B$ (and write $A \to B \to C$) if $I_m(A, C / B) = 0$.
Theorem 7.
For partitions $A$, $B$ and $C$ in a product MV algebra $(M, \cdot)$, $A \to B \to C$ if and only if $C \to B \to A$.
Proof. 
Let $A \to B \to C$. Then $0 = I_m(A, C / B) = H_m(A / B) - H_m(A / B \vee C)$. Therefore, by (E4), we get:
$H_m(A / B) = H_m(A / B \vee C) = H_m(A \vee B \vee C) - H_m(B \vee C)$.
Let us calculate:
$I_m(C, A / B) = H_m(C / B) - H_m(C / A \vee B) = H_m(C \vee B) - H_m(B) - H_m(A \vee B \vee C) + H_m(A \vee B) = H_m(A \vee B) - H_m(B) - H_m(A / B) = H_m(A / B) - H_m(A / B) = 0$.
This means that $C \to B \to A$. The reverse implication is evident.
Remark 3.
According to the above theorem, we may say that $A$ and $C$ are conditionally independent given $B$, and write $A \leftrightarrow B \leftrightarrow C$ instead of $A \to B \to C$.
Theorem 8.
Let $A$, $B$ and $C$ be partitions in a given product MV algebra $(M, \cdot)$ such that $A \leftrightarrow B \leftrightarrow C$. Then we have:
(i) $I_m(A \vee B, C) = I_m(B, C)$;
(ii) $I_m(B, C) = I_m(C, A) + I_m(C, B / A)$;
(iii) $I_m(A, B / C) \leq I_m(A, B)$;
(iv) $I_m(A, B) \geq I_m(A, C)$ (data processing inequality).
Proof. 
(i) By the assumption, we have $I_m(A, C / B) = 0$. Hence, using the chain rule for mutual information (Theorem 6 (iii)), we obtain:
$I_m(A \vee B, C) = I_m(B \vee A, C) = I_m(B, C) + I_m(A, C / B) = I_m(B, C)$.
(ii) By the equality (i) of this theorem and Theorem 5, we can write:
$I_m(B, C) = I_m(A \vee B, C) = I_m(C, B \vee A) = I_m(C, A) + I_m(C, B / A)$.
(iii) From (ii) it follows that $I_m(B, C) \geq I_m(C, B / A)$. Interchanging $A$ and $C$ (which is possible by Theorem 7), we obtain:
$I_m(A, B) \geq I_m(A, B / C)$.
(iv) By the assumption, we have $I_m(A, C / B) = 0$. Therefore, by Theorem 5, we get:
$I_m(A, B \vee C) = I_m(A, B) + I_m(A, C / B) = I_m(A, B)$.
Thus, by the same theorem, we can write:
$I_m(A, B) = I_m(A, B \vee C) = I_m(A, C) + I_m(A, B / C)$.
Since $I_m(A, B / C) \geq 0$, it holds that $I_m(A, B) \geq I_m(A, C)$.
In the following, the concavity of the entropy $H_m(A)$ and of the mutual information $I_m(A, B)$ as functions of the state $m$ is studied. For the convenience of the reader, we recall the definitions of convex and concave functions:
A real-valued function $f$ is said to be convex over an interval $[a, b]$ if for every $x_1, x_2 \in [a, b]$ and any real number $\alpha \in [0, 1]$:
$f(\alpha x_1 + (1 - \alpha) x_2) \leq \alpha f(x_1) + (1 - \alpha) f(x_2)$.
A real-valued function $f$ is said to be concave over an interval $[a, b]$ if for every $x_1, x_2 \in [a, b]$ and any real number $\alpha \in [0, 1]$:
$f(\alpha x_1 + (1 - \alpha) x_2) \geq \alpha f(x_1) + (1 - \alpha) f(x_2)$.
In the following, we will use the symbol $F$ to denote the family of all states on a given product MV algebra $(M, \cdot)$. It is easy to prove the following proposition:
Proposition 4.
If $m_1, m_2 \in F$, then, for every real number $\alpha \in [0, 1]$, $\alpha m_1 + (1 - \alpha) m_2 \in F$.
Theorem 9
(Concavity of entropy). Let $A$ be a partition in a given product MV algebra $(M, \cdot)$. Then, for every $m_1, m_2 \in F$ and every real number $\alpha \in [0, 1]$, the following inequality holds:
$\alpha H_{m_1}(A) + (1 - \alpha) H_{m_2}(A) \leq H_{\alpha m_1 + (1 - \alpha) m_2}(A)$.
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$. Since the function $F$ is concave, we get:
$\alpha H_{m_1}(A) + (1 - \alpha) H_{m_2}(A) = \alpha \sum_{i=1}^{n} F(m_1(a_i)) + (1 - \alpha) \sum_{i=1}^{n} F(m_2(a_i)) = \sum_{i=1}^{n} \left( \alpha F(m_1(a_i)) + (1 - \alpha) F(m_2(a_i)) \right) \leq \sum_{i=1}^{n} F(\alpha m_1(a_i) + (1 - \alpha) m_2(a_i)) = \sum_{i=1}^{n} F((\alpha m_1 + (1 - \alpha) m_2)(a_i)) = H_{\alpha m_1 + (1 - \alpha) m_2}(A)$,
which proves that the entropy $m \mapsto H_m(A)$ is a concave function on the family $F$.
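The inequality of Theorem 9 is easy to observe numerically. In the sketch below (ours), states enter only through their vectors of state values on a fixed partition, and the sample vectors are our own choice.
```python
from math import log2

def H(pv):
    # entropy computed from the state values of a partition
    return -sum(p * log2(p) for p in pv if p > 0)

m1 = [0.5, 0.5]          # state values of a partition A under m1
m2 = [2 / 3, 1 / 3]      # state values of the same partition under m2
alpha = 0.3
mix = [alpha * p + (1 - alpha) * q for p, q in zip(m1, m2)]
print(alpha * H(m1) + (1 - alpha) * H(m2))  # ~0.9428 bit
print(H(mix))                               # ~0.9604 bit, larger, as Theorem 9 asserts
```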
In the proof of the concavity of mutual information $I_m(A, B)$ we will need the assertion of Proposition 5. First, we introduce the following notation. Let $m$ be a state on a product MV algebra $(M, \cdot)$, $a, b \in M$. Then we denote:
$\dot{m}(a / b) = \frac{m(a \cdot b)}{m(b)}$ if $m(b) > 0$, and $\dot{m}(a / b) = 0$ if $m(b) = 0$.
Proposition 5.
If $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$ are two partitions of $(M, \cdot)$, then:
$H_m(B / A) = \sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i)\, F(\dot{m}(b_j / a_i))$.
Proof. 
Let us calculate:
$\sum_{i=1}^{n} \sum_{j=1}^{k} m(a_i)\, F(\dot{m}(b_j / a_i)) = \sum_{i:\, m(a_i) > 0} \sum_{j=1}^{k} m(a_i)\, F\left( \frac{m(b_j \cdot a_i)}{m(a_i)} \right) = -\sum_{i:\, m(a_i) > 0} \sum_{j=1}^{k} m(a_i) \frac{m(b_j \cdot a_i)}{m(a_i)} \log \frac{m(b_j \cdot a_i)}{m(a_i)} = -\sum_{i:\, m(a_i) > 0} \sum_{j=1}^{k} m(b_j \cdot a_i) \log \frac{m(b_j \cdot a_i)}{m(a_i)} = -\sum_{i=1}^{n} \sum_{j=1}^{k} m(b_j \cdot a_i) \log \frac{m(b_j \cdot a_i)}{m(a_i)} = H_m(B / A)$.
In the last step, we used the implication $m(a_i) = 0 \Rightarrow m(a_i \cdot b_j) = 0$, which follows from the equality $m(a_i) = \sum_{j=1}^{k} m(a_i \cdot b_j)$ shown in Proposition 1.
Remark 4.
By Proposition 5, there exist $c_{ij} = F(\dot{m}(b_j / a_i)) \geq 0$ such that:
$H_m(B / A) = \sum_{i=1}^{n} \sum_{j=1}^{k} c_{ij}\, m(a_i)$.
Definition 8.
Let $A = \{a_1, \dots, a_n\}$, $B = \{b_1, \dots, b_k\}$ be two partitions of $(M, \cdot)$. Put:
$K = \{m \in F;\ H_m(B / A) = \sum_{i=1}^{n} \sum_{j=1}^{k} c_{ij}\, m(a_i)\}$.
Theorem 10
(Concavity of mutual information). The mutual information $m \mapsto I_m(A, B)$ is a concave function on the family $K$.
Proof. 
By Equation (4) we can write:
$I_m(A, B) = H_m(B) - H_m(B / A)$.
In view of Theorem 9 and Remark 4, the function $m \mapsto I_m(A, B)$ is the sum of two concave functions on the family $K$: the concave function $m \mapsto H_m(B)$ and the function $m \mapsto -H_m(B / A)$, which is linear on $K$ and hence concave. Since the sum of two concave functions is itself concave, the statement follows.

4. Kullback–Leibler Divergence in Product MV Algebras

In this section we introduce the concept of Kullback–Leibler divergence in product MV algebras. We prove basic properties of this measure, in particular, Gibbs' inequality. Finally, using the notion of conditional Kullback–Leibler divergence, we establish a chain rule for Kullback–Leibler divergence with respect to additive states defined on a given product MV algebra. In the proofs we use the following well-known log-sum inequality: for non-negative real numbers $x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_n$, it holds that:
$\sum_{i=1}^{n} x_i \log \frac{x_i}{y_i} \geq \left( \sum_{i=1}^{n} x_i \right) \log \frac{\sum_{i=1}^{n} x_i}{\sum_{i=1}^{n} y_i}$, (7)
with equality if and only if $\frac{x_i}{y_i}$ is constant. Recall that we use the convention that $x \log \frac{x}{0} = \infty$ if $x > 0$, and $0 \log \frac{0}{x} = 0$ if $x \geq 0$.
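A quick numeric check of the log-sum inequality (our illustration; the two tuples are arbitrary choices with non-constant ratios):
```python
from math import log2

x = [0.2, 0.5, 0.3]
y = [0.4, 0.4, 0.2]
lhs = sum(xi * log2(xi / yi) for xi, yi in zip(x, y))
rhs = sum(x) * log2(sum(x) / sum(y))
print(lhs, rhs, lhs >= rhs)  # lhs > rhs here; equality needs x_i / y_i constant
```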
Definition 9.
Let $m_1$, $m_2$ be states defined on a given product MV algebra $(M, \cdot)$, and let $A = \{a_1, \dots, a_n\}$ be a partition of $(M, \cdot)$. Then we define the Kullback–Leibler divergence $D_A(m_1 \parallel m_2)$ by:
$D_A(m_1 \parallel m_2) = \sum_{i=1}^{n} m_1(a_i) \log \frac{m_1(a_i)}{m_2(a_i)}$. (8)
Remark 5.
It is obvious that $D_A(m \parallel m) = 0$. The Kullback–Leibler divergence is not a metric in the true sense, since it is not symmetric, i.e., the equality $D_A(m_1 \parallel m_2) = D_A(m_2 \parallel m_1)$ is not necessarily true (as shown in the following example), and it does not satisfy the triangle inequality.
Example 3.
Consider any product MV algebra $(M, \cdot)$ and two states $m_1$, $m_2$ defined on $M$. Let $a \in M$ such that $m_1(a) = p$ and $m_2(a) = q$, where $p, q \in (0, 1)$. Evidently, $m_1(u - a) = 1 - p$, $m_2(u - a) = 1 - q$, and the set $A = \{a, u - a\}$ is a partition of $(M, \cdot)$. Let us calculate:
$D_A(m_1 \parallel m_2) = p \log \frac{p}{q} + (1 - p) \log \frac{1 - p}{1 - q}$, and $D_A(m_2 \parallel m_1) = q \log \frac{q}{p} + (1 - q) \log \frac{1 - q}{1 - p}$.
If $p = q$, then $D_A(m_1 \parallel m_2) = D_A(m_2 \parallel m_1) = 0$. If $p = \frac{1}{2}$, $q = \frac{1}{4}$, then we have:
$D_A(m_1 \parallel m_2) = \frac{1}{2} \log \frac{1/2}{1/4} + \frac{1}{2} \log \frac{1/2}{3/4} = \frac{1}{2} \log 2 + \frac{1}{2} \log \frac{2}{3} \doteq 0.207519$ bit,
and:
$D_A(m_2 \parallel m_1) = \frac{1}{4} \log \frac{1/4}{1/2} + \frac{3}{4} \log \frac{3/4}{1/2} = -\frac{1}{4} \log 2 + \frac{3}{4} \log \frac{3}{2} \doteq 0.188722$ bit.
This means that, in general, $D_A(m_1 \parallel m_2) \neq D_A(m_2 \parallel m_1)$.
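The two divergences of Example 3 are easy to reproduce; a short sketch (ours):
```python
from math import log2

def kl(m1, m2):
    # D_A(m1 || m2) computed from the state values of the partition A
    return sum(p * log2(p / q) for p, q in zip(m1, m2) if p > 0)

p, q = 0.5, 0.25
print(kl([p, 1 - p], [q, 1 - q]))  # ~0.2075 bit
print(kl([q, 1 - q], [p, 1 - p]))  # ~0.1887 bit: not symmetric
```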
Theorem 11.
Let $m_1$, $m_2$ be states defined on a product MV algebra $(M, \cdot)$, and let $A = \{a_1, \dots, a_n\}$ be a partition of $(M, \cdot)$. Then $D_A(m_1 \parallel m_2) \geq 0$ (Gibbs' inequality), with equality if and only if $m_1(a_i) = m_2(a_i)$ for $i = 1, 2, \dots, n$.
Proof. 
If we put $x_i = m_1(a_i)$ and $y_i = m_2(a_i)$ for $i = 1, 2, \dots, n$, then $x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_n$ are non-negative real numbers such that $\sum_{i=1}^{n} x_i = 1$ and $\sum_{i=1}^{n} y_i = 1$. Indeed, $\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} m_1(a_i) = m_1(\sum_{i=1}^{n} a_i) = m_1(u) = 1$; analogously, we obtain $\sum_{i=1}^{n} y_i = 1$. Thus, using the log-sum inequality, we can write:
$D_A(m_1 \parallel m_2) = \sum_{i=1}^{n} m_1(a_i) \log \frac{m_1(a_i)}{m_2(a_i)} = \sum_{i=1}^{n} x_i \log \frac{x_i}{y_i} \geq \left( \sum_{i=1}^{n} x_i \right) \log \frac{\sum_{i=1}^{n} x_i}{\sum_{i=1}^{n} y_i} = 1 \cdot \log \frac{1}{1} = 0$,
with equality if and only if $\frac{m_1(a_i)}{m_2(a_i)} = \alpha$ for $i = 1, 2, \dots, n$, where $\alpha$ is a constant. Summing over all $i = 1, 2, \dots, n$, we obtain $\sum_{i=1}^{n} m_1(a_i) = \alpha \sum_{i=1}^{n} m_2(a_i)$, which implies that $\alpha = 1$. This means that $D_A(m_1 \parallel m_2) = 0$ if and only if $m_1(a_i) = m_2(a_i)$ for $i = 1, 2, \dots, n$.
Theorem 12.
Let $A$ be a partition of $(M, \cdot)$ and let $\nu$ be a state on $(M, \cdot)$ uniform over $A$. Then, for the entropy of $A$ with respect to any state $m$ from $F$, we have:
$H_m(A) = \log \operatorname{card} A - D_A(m \parallel \nu)$.
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$. Then $\nu(a_i) = \frac{1}{n}$ for $i = 1, 2, \dots, n$. Let us calculate:
$D_A(m \parallel \nu) = \sum_{i=1}^{n} m(a_i) \log \frac{m(a_i)}{\nu(a_i)} = \sum_{i=1}^{n} m(a_i) \log \frac{m(a_i)}{1/n} = \sum_{i=1}^{n} m(a_i) (\log m(a_i) + \log n) = \sum_{i=1}^{n} m(a_i) \log m(a_i) + \log n = \log \operatorname{card} A - H_m(A)$.
As a consequence we obtain the following property of entropy of partitions in product MV algebras.
Corollary 1.
For any partition $A$ of $(M, \cdot)$, it holds that $H_m(A) \leq \log \operatorname{card} A$, with equality if and only if $m$ is uniform over the partition $A$.
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$ and consider a state $\nu$ on $(M, \cdot)$ uniform over $A$, i.e., $\nu(a_i) = \frac{1}{n}$ for $i = 1, 2, \dots, n$. Then, by Theorem 12, we get:
$D_A(m \parallel \nu) = \log \operatorname{card} A - H_m(A)$.
Since, by Theorem 11, $D_A(m \parallel \nu) \geq 0$, the inequality:
$H_m(A) \leq \log \operatorname{card} A$
holds. Further, by Theorem 11, $D_A(m \parallel \nu) = 0$ if and only if $m(a_i) = \nu(a_i)$ for $i = 1, 2, \dots, n$. This means that the equality $H_m(A) = \log \operatorname{card} A$ holds if and only if $m(a_i) = \frac{1}{n}$ for $i = 1, 2, \dots, n$.
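Theorem 12 and Corollary 1 can be checked numerically; in the sketch below (ours), the state values [0.7, 0.2, 0.1] of a three-element partition are our own choice.
```python
from math import log2

def kl(m1, m2):
    return sum(p * log2(p / q) for p, q in zip(m1, m2) if p > 0)

def entropy(pv):
    return -sum(p * log2(p) for p in pv if p > 0)

m = [0.7, 0.2, 0.1]   # our choice of state values for a partition A
nu = [1 / 3] * 3      # the state uniform over the same partition
print(entropy(m))                 # ~1.1568 bit, below log2(3) ~ 1.585
print(log2(len(m)) - kl(m, nu))   # the same value, by Theorem 12
```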
Theorem 13
(Convexity of K–L divergence). Let $A$ be a partition in a product MV algebra $(M, \cdot)$. The K–L divergence $D_A(m_1 \parallel m_2)$ is convex in the pair $(m_1, m_2)$; i.e., if $(m_1, m_2)$ and $(m_1', m_2')$ are pairs of states from $F$, then, for any real number $\alpha \in [0, 1]$, the following inequality holds:
$D_A(\alpha m_1 + (1 - \alpha) m_1' \parallel \alpha m_2 + (1 - \alpha) m_2') \leq \alpha D_A(m_1 \parallel m_2) + (1 - \alpha) D_A(m_1' \parallel m_2')$. (9)
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$ and fix $i \in \{1, 2, \dots, n\}$. Putting $x_1 = \alpha m_1(a_i)$, $x_2 = (1 - \alpha) m_1'(a_i)$, $y_1 = \alpha m_2(a_i)$, $y_2 = (1 - \alpha) m_2'(a_i)$ in the log-sum inequality, we obtain:
$(\alpha m_1(a_i) + (1 - \alpha) m_1'(a_i)) \log \frac{\alpha m_1(a_i) + (1 - \alpha) m_1'(a_i)}{\alpha m_2(a_i) + (1 - \alpha) m_2'(a_i)} \leq \alpha m_1(a_i) \log \frac{\alpha m_1(a_i)}{\alpha m_2(a_i)} + (1 - \alpha) m_1'(a_i) \log \frac{(1 - \alpha) m_1'(a_i)}{(1 - \alpha) m_2'(a_i)}$.
Summing these inequalities over $i = 1, 2, \dots, n$, we obtain the inequality (9).
The result of Theorem 13 is illustrated in the following example.
Example 4.
Consider the product MV algebra $F$ from Example 2 and the real functions $F_1, F_2, F_3, F_4$ defined by $F_1(x) = x$, $F_2(x) = x^2$, $F_3(x) = x^3$, $F_4(x) = x^4$, for every $x \in \mathbb{R}$. On the product MV algebra $F$ we define the states $m_1, m_2, m_3, m_4$ by the following formulas:
$m_1(f) = \int_0^1 f(x)\,dF_1(x) = \int_0^1 f(x)\,dx$, $f \in F$;
$m_2(f) = \int_0^1 f(x)\,dF_2(x) = \int_0^1 f(x) \cdot 2x\,dx$, $f \in F$;
$m_3(f) = \int_0^1 f(x)\,dF_3(x) = \int_0^1 f(x) \cdot 3x^2\,dx$, $f \in F$;
$m_4(f) = \int_0^1 f(x)\,dF_4(x) = \int_0^1 f(x) \cdot 4x^3\,dx$, $f \in F$.
In addition, we will consider the partition $A = \{x, 1 - x\}$ of $F$. It is easy to calculate that it has the $m_1$-state values $\frac{1}{2}, \frac{1}{2}$; the $m_2$-state values $\frac{2}{3}, \frac{1}{3}$; the $m_3$-state values $\frac{3}{4}, \frac{1}{4}$; and the $m_4$-state values $\frac{4}{5}, \frac{1}{5}$ of the corresponding elements. In the previous theorem we put $\alpha = 0.2$. We will show that:
$D_A(0.2\, m_1 + 0.8\, m_3 \parallel 0.2\, m_2 + 0.8\, m_4) \leq 0.2\, D_A(m_1 \parallel m_2) + 0.8\, D_A(m_3 \parallel m_4)$. (10)
Let us calculate:
$D_A(m_1 \parallel m_2) = \frac{1}{2} \log \frac{1/2}{2/3} + \frac{1}{2} \log \frac{1/2}{1/3} \doteq 0.085$ bit;
$D_A(m_3 \parallel m_4) = \frac{3}{4} \log \frac{3/4}{4/5} + \frac{1}{4} \log \frac{1/4}{1/5} \doteq 0.01065$ bit;
$D_A(0.2\, m_1 + 0.8\, m_3 \parallel 0.2\, m_2 + 0.8\, m_4) = 0.7 \log \frac{0.7}{0.7733} + 0.3 \log \frac{0.3}{0.2267} \doteq 0.020682$ bit.
Since $0.020682 \leq 0.2 \cdot 0.085 + 0.8 \cdot 0.01065 = 0.02552$, the inequality (10) holds.
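The computation of Example 4 can be reproduced as follows (our sketch; the states enter only through their state values on $A$):
```python
from math import log2

def kl(m1, m2):
    return sum(p * log2(p / q) for p, q in zip(m1, m2) if p > 0)

# state values of A = {x, 1 - x} under m1, m2, m3, m4
m1, m2 = [1/2, 1/2], [2/3, 1/3]
m3, m4 = [3/4, 1/4], [4/5, 1/5]
a = 0.2
left  = [a * p + (1 - a) * r for p, r in zip(m1, m3)]    # 0.2 m1 + 0.8 m3
right = [a * q + (1 - a) * s for q, s in zip(m2, m4)]    # 0.2 m2 + 0.8 m4
print(kl(left, right))                        # ~0.0207 bit
print(a * kl(m1, m2) + (1 - a) * kl(m3, m4))  # ~0.0255 bit, so (10) holds
```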
In the final part, we define the conditional Kullback–Leibler divergence and, using this notion, we establish the chain rule for Kullback–Leibler divergence.
Definition 10.
Let $m_1$, $m_2$ be states on a given product MV algebra $(M, \cdot)$ and let $A = \{a_1, \dots, a_n\}$, $B = \{b_1, \dots, b_k\}$ be two partitions of $(M, \cdot)$. Then we define the conditional Kullback–Leibler divergence $D_{B/A}(m_1 \parallel m_2)$ by:
$D_{B/A}(m_1 \parallel m_2) = \sum_{i=1}^{n} m_1(a_i) \sum_{j=1}^{k} \dot{m}_1(b_j / a_i) \log \frac{\dot{m}_1(b_j / a_i)}{\dot{m}_2(b_j / a_i)}$.
Theorem 14
(Chain rule for K–L divergence). Let $m_1$, $m_2$ be states on a given product MV algebra $(M, \cdot)$. If $A$, $B$ are two partitions of $(M, \cdot)$, then:
$D_{A \vee B}(m_1 \parallel m_2) = D_A(m_1 \parallel m_2) + D_{B/A}(m_1 \parallel m_2)$. (11)
Proof. 
Assume that $A = \{a_1, \dots, a_n\}$ and $B = \{b_1, \dots, b_k\}$. We will consider the following two cases: (i) there exists $i_0 \in \{1, \dots, n\}$ such that $m_2(a_{i_0}) = 0$; (ii) $m_2(a_i) > 0$ for $i = 1, 2, \dots, n$. In the first case, both sides of Equation (11) are equal to $\infty$; thus, the equality holds. Let us now assume that $m_2(a_i) > 0$ for $i = 1, 2, \dots, n$. We get:
$D_A(m_1 \parallel m_2) + D_{B/A}(m_1 \parallel m_2) = \sum_{i=1}^{n} m_1(a_i) \log \frac{m_1(a_i)}{m_2(a_i)} + \sum_{i=1}^{n} m_1(a_i) \sum_{j=1}^{k} \dot{m}_1(b_j / a_i) \log \frac{\dot{m}_1(b_j / a_i)}{\dot{m}_2(b_j / a_i)} = \sum_{i:\, m_1(a_i) > 0} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \log \frac{m_1(a_i)}{m_2(a_i)} + \sum_{i:\, m_1(a_i) > 0} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \log \frac{\dot{m}_1(b_j / a_i)}{\dot{m}_2(b_j / a_i)} = \sum_{i:\, m_1(a_i) > 0} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \left( \log \frac{m_1(a_i)}{m_2(a_i)} + \log \frac{\dot{m}_1(b_j / a_i)}{\dot{m}_2(b_j / a_i)} \right) = \sum_{i:\, m_1(a_i) > 0} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \log \frac{m_1(a_i)\, \dot{m}_1(b_j / a_i)}{m_2(a_i)\, \dot{m}_2(b_j / a_i)} = \sum_{i:\, m_1(a_i) > 0} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \log \frac{m_1(a_i \cdot b_j)}{m_2(a_i \cdot b_j)} = \sum_{i=1}^{n} \sum_{j=1}^{k} m_1(a_i \cdot b_j) \log \frac{m_1(a_i \cdot b_j)}{m_2(a_i \cdot b_j)} = D_{A \vee B}(m_1 \parallel m_2)$.
In the last step, analogously to the proof of Proposition 5, we used the implication $m_1(a_i) = 0 \Rightarrow m_1(a_i \cdot b_j) = 0$, which follows from the equality $m_1(a_i) = \sum_{j=1}^{k} m_1(a_i \cdot b_j)$ shown in Proposition 1.
In the following example, we illustrate the result of the previous theorem.
Example 5.
Consider the partitions $A = \{x, 1 - x\}$ and $B = \{x^2, 1 - x^2\}$ of the product MV algebra $F$ from Example 2. In addition, let $m_1$, $m_2$ be the states on $F$ defined in Example 4. Then the partitions $A$ and $B$ have the $m_1$-state values $\frac{1}{2}, \frac{1}{2}$ and $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements, respectively, and the $m_2$-state values $\frac{2}{3}, \frac{1}{3}$ and $\frac{1}{2}, \frac{1}{2}$ of the corresponding elements, respectively. The join of the partitions $A$ and $B$ is the system $A \vee B = \{x^3, x^2(1 - x), x(1 - x^2), (1 - x)(1 - x^2)\}$; it has the $m_1$-state values $\frac{1}{4}, \frac{1}{12}, \frac{1}{4}, \frac{5}{12}$ and the $m_2$-state values $\frac{2}{5}, \frac{1}{10}, \frac{4}{15}, \frac{7}{30}$ of the corresponding elements. By simple calculations we obtain:
$D_A(m_1 \parallel m_2) \doteq 0.085$ bit, $D_{A \vee B}(m_1 \parallel m_2) \doteq 0.134$ bit, $D_{B/A}(m_1 \parallel m_2) \doteq 0.049$ bit.
It is possible to verify that $D_{A \vee B}(m_1 \parallel m_2) = D_A(m_1 \parallel m_2) + D_{B/A}(m_1 \parallel m_2)$.
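The chain rule of Theorem 14 is easy to verify on the data of Example 5 (our sketch):
```python
from math import log2

def kl(p, q):
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# state values from Example 5; elements of A v B listed in the order
# x^3, x^2 (1 - x), x (1 - x^2), (1 - x)(1 - x^2)
A1,  A2  = [1/2, 1/2], [2/3, 1/3]   # partition A under m1 and m2
AB1 = [1/4, 1/12, 1/4, 5/12]        # join A v B under m1
AB2 = [2/5, 1/10, 4/15, 7/30]       # join A v B under m2

d_A, d_AB = kl(A1, A2), kl(AB1, AB2)
print(d_A, d_AB, d_AB - d_A)  # ~0.085, ~0.134, ~0.049 = D_{B/A}(m1 || m2)
```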

5. Discussion

In this paper, we have extended the study of entropy in product MV algebras. The main aim of the paper was to introduce, using known results concerning the entropy in product MV algebras, the concepts of mutual information and Kullback–Leibler divergence for the case of product MV algebras and to examine algebraic properties of the proposed measures. Our results have been presented in Section 3 and Section 4.
In Section 3, we introduced the notions of mutual information and conditional mutual information of partitions in product MV algebras and proved some basic properties of the suggested measures. It was shown that the entropy of partitions in product MV algebras can be considered as a special case of their mutual information. Specifically, it was proved that the subadditivity and additivity of entropy follow from the properties of mutual information (Theorem 3). Theorem 6 provides the chain rules for mutual information. In addition, the data processing inequality for conditionally independent partitions in product MV algebras was proved. Moreover, the concavity of mutual information was studied.
In Section 4, the notion of Kullback–Leibler divergence in product MV algebras was introduced and the basic properties of this measure were shown. In particular, the convexity of Kullback–Leibler divergence with respect to additive states defined on a given product MV algebra was proved. Theorem 11 admits the interpretation of Kullback–Leibler divergence as a measure of how different two states on a common product MV algebra (over the same partition) are. The relationship between K–L divergence and entropy is provided by Theorem 12: the more a state $m \in F$ diverges from the state $\nu \in F$ uniform over $A$ (over the same partition $A$), the smaller the entropy $H_m(A)$ is, and vice versa. Finally, a conditional version of the Kullback–Leibler divergence in product MV algebras was defined and the chain rule for Kullback–Leibler divergence with respect to additive states defined on a given product MV algebra was established.
Notice that in [14] (see also [29,30]) the entropy on a full tribe $F$ of fuzzy sets was studied. The tribe $F$ is closed also under the natural product of fuzzy sets, and it represents a special case of product MV algebras. Accordingly, the theory presented in this contribution can also be applied to the mentioned case of tribes of fuzzy sets.
In [51,52,53,54,55], a more general fuzzy theory, the theory of intuitionistic fuzzy sets (IF-sets, for short), has been developed. While a fuzzy set is a mapping $\mu_A: \Omega \to [0, 1]$ (where the considered fuzzy set is identified with its membership function $\mu_A$), the Atanassov IF-set is a pair $A = (\mu_A, \nu_A)$ of functions $\mu_A, \nu_A: \Omega \to [0, 1]$ with $\mu_A + \nu_A \leq 1$. The function $\mu_A$ is interpreted as the membership function of the IF-set $A$, and the function $\nu_A$ as the non-membership function of the IF-set $A$. Evidently, any fuzzy set $\mu_A: \Omega \to [0, 1]$ can be considered as an IF-set $A = (\mu_A, 1 - \mu_A)$. Any result holding for IF-sets is applicable also to fuzzy sets. Of course, the opposite implication is not true; the theory of intuitionistic fuzzy sets presents a non-trivial generalization of fuzzy set theory, so IF-sets offer possibilities for modeling a larger class of real situations. Note that some results about the entropy on IF-sets can be found, e.g., in [56,57,58,59]. These results could be used in developing information theory for the case of IF-sets.
To apply MV algebra results also to families of IF-experiments, one can use the Mundici characterization of MV algebras. In the family of IF-sets it is natural to define the partial ordering relation in the following way: if $A = (\mu_A, \nu_A)$ and $B = (\mu_B, \nu_B)$ are two IF-sets, then $A \leq B$ if and only if $\mu_A \leq \mu_B$ and $\nu_A \geq \nu_B$. Namely, in the fuzzy case, $\mu_A \leq \mu_B$ implies $\nu_A = 1 - \mu_A \geq 1 - \mu_B = \nu_B$. Therefore, we can consider the Abelian l-group of pairs of real-valued functions on $\Omega$, putting $A + B = (\mu_A + \mu_B, 1 - ((1 - \nu_A) + (1 - \nu_B))) = (\mu_A + \mu_B, \nu_A + \nu_B - 1)$ with the zero element $0 = (0, 1)$. (In fact, $A + 0 = (\mu_A, \nu_A) + (0, 1) = (\mu_A, \nu_A) = A$.) The partial ordering in this l-group is defined by the prescription $A \leq B$ if and only if $\mu_A \leq \mu_B$ and $\nu_A \geq \nu_B$. Then a suitable MV algebra is, e.g., the system $M = \{(\mu_A, \nu_A);\ (0, 1) \leq (\mu_A, \nu_A) \leq (1, 0)\}$. Moreover, this MV algebra is a product MV algebra with the product defined by $A \cdot B = (\mu_A \cdot \mu_B, 1 - (1 - \nu_A)(1 - \nu_B)) = (\mu_A \cdot \mu_B, \nu_A + \nu_B - \nu_A \nu_B)$. The presented MV algebra approach offers an elegant and practical way of obtaining new results also in the intuitionistic fuzzy case. We note that this approach was used to construct the Kolmogorov-type entropy theory for IF systems in [58], drawing on the entropy results for product MV algebras published in [35,49,50]. In this way it is also possible to develop the theory of information and K–L divergence for IF-sets.
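As a concrete and purely illustrative rendering of this construction, the following sketch (ours) implements the IF-set operations pointwise, for single pairs of membership/non-membership values; the sample values are our own choice.
```python
# A minimal sketch (ours) of the product MV algebra of IF-values: an element
# is a pair (mu, nu) with mu + nu <= 1; the top element is (1, 0).

def if_oplus(A, B):
    # derived from the l-group sum (mu_A + mu_B, nu_A + nu_B - 1),
    # truncated at the top element (1, 0)
    (mu_a, nu_a), (mu_b, nu_b) = A, B
    return (min(1.0, mu_a + mu_b), max(0.0, nu_a + nu_b - 1.0))

def if_product(A, B):
    # A . B = (mu_A mu_B, nu_A + nu_B - nu_A nu_B), as defined above
    (mu_a, nu_a), (mu_b, nu_b) = A, B
    return (mu_a * mu_b, nu_a + nu_b - nu_a * nu_b)

A = (0.6, 0.3)   # a sample IF-value: membership 0.6, non-membership 0.3
B = (0.5, 0.5)   # a fuzzy value embedded as (mu, 1 - mu)
print(if_oplus(A, B))    # (1.0, 0.0), the top element
print(if_product(A, B))  # (0.3, 0.65)
```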

Acknowledgments

The authors thank the editor and the referees for their valuable comments and suggestions. The authors thank Constantine the Philosopher University in Nitra for covering the costs to publish in open access.

Author Contributions

Both authors contributed equally and significantly to the theoretical work as well as to the creation of illustrative examples. Dagmar Markechová wrote the paper. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gray, R.M. Entropy and Information Theory; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  2. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  3. Kolmogorov, A.N. New Metric Invariant of Transitive Dynamical Systems and Automorphisms of Lebesgue Spaces. Dokl. Russ. Acad. Sci. 1958, 119, 861–864. [Google Scholar]
  4. Sinai, Y.G. Ergodic Theory with Applications to Dynamical Systems and Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 1990. [Google Scholar]
  5. Sinai, Y.G. On the Notion of Entropy of a Dynamical System. Dokl. Russ. Acad. Sci. 1959, 124, 768–771. [Google Scholar]
  6. Markechová, D. The entropy of fuzzy dynamical systems and generators. Fuzzy Sets Syst. 1992, 48, 351–363. [Google Scholar] [CrossRef]
  7. Piasecki, K. Probability of fuzzy events defined as denumerable additive measure. Fuzzy Sets Syst. 1985, 17, 271–284. [Google Scholar] [CrossRef]
  8. Mesiar, R. The Bayes principle and the entropy on fuzzy probability spaces. Int. J. Gen. Syst. 1991, 20, 67–72. [Google Scholar] [CrossRef]
  9. Mesiar, R.; Rybárik, J. Entropy of Fuzzy Partitions—A General Model. Fuzzy Sets Syst. 1998, 99, 73–79. [Google Scholar] [CrossRef]
  10. Dumitrescu, D. Entropy of a fuzzy dynamical system. Fuzzy Sets Syst. 1995, 70, 45–57. [Google Scholar] [CrossRef]
  11. Rahimi, M.; Riazi, A. On local entropy of fuzzy partitions. Fuzzy Sets Syst. 2014, 234, 97–108. [Google Scholar] [CrossRef]
  12. Rahimi, M.; Assari, A.; Ramezani, F. A Local Approach to Yager Entropy of Dynamical Systems. Int. J. Fuzzy Syst. 2015, 1, 1–10. [Google Scholar] [CrossRef]
  13. Srivastava, P.; Khare, M.; Srivastava, Y.K. m-Equivalence, entropy and F-dynamical systems. Fuzzy Sets Syst. 2001, 121, 275–283. [Google Scholar] [CrossRef]
  14. Markechová, D.; Riečan, B. Entropy of Fuzzy Partitions and Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 19. [Google Scholar] [CrossRef]
  15. Riečan, B. An entropy construction inspired by fuzzy sets. Soft Comput. 2003, 7, 486–488. [Google Scholar]
  16. Riečan, B. On a type of entropy of dynamical systems. Tatra Mt. Math. Publ. 1992, 1, 135–140. [Google Scholar]
  17. Riečan, B. On some modifications of the entropy of dynamical systems. Atti Semin. Mat. Fis. dell’Univ. Modena 1994, 42, 157–166. [Google Scholar]
  18. Dubois, D.; Prade, H. A review of fuzzy set aggregation connectives. Inf. Sci. 1985, 36, 85–121. [Google Scholar] [CrossRef]
  19. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–358. [Google Scholar] [CrossRef]
  20. Markechová, D. Entropy and mutual information of experiments in the fuzzy case. Neural Netw. World 2013, 23, 339–349. [Google Scholar] [CrossRef]
  21. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  22. Kullback, S. Information Theory and Statistics; John Wiley & Sons: New York, NY, USA, 1959. [Google Scholar]
  23. Schnakenberg, J. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 1976, 48, 571–585. [Google Scholar] [CrossRef]
  24. Risken, H. The Fokker-Planck Equation, Methods of Solution and Applications; Springer: New York, NY, USA, 1984. [Google Scholar]
  25. Qian, H. Relative Entropy: Free Energy Associated with Equilibrium Fluctuations and Nonequilibrium Deviations. arXiv, 2001; arXiv:math-ph/0007010v2. [Google Scholar]
  26. Ellis, R.S. Entropy, Large Deviations, and Statistical Mechanics; Springer: New York, NY, USA, 1985. [Google Scholar]
  27. Markechová, D. Kullback–Leibler Divergence and Mutual Information of Experiments in the Fuzzy Case. Axioms 2017, 6, 5. [Google Scholar] [CrossRef]
  28. Chang, C.C. Algebraic analysis of many valued logics. Trans. Am. Math. Soc. 1958, 88, 467–490. [Google Scholar] [CrossRef]
  29. Riečan, B.; Mundici, D. Probability on MV-algebras. In Handbook of Measure Theory; Pap, E., Ed.; Elsevier: Amsterdam, The Netherlands, 2002; pp. 869–910. [Google Scholar]
  30. Riečan, B.; Neubrunn, T. Integral, Measure and Ordering; Springer: Dordrecht, The Netherlands, 1997. [Google Scholar]
  31. Dvurečenskij, A.; Pulmannová, S. New Trends in Quantum Structures; Springer: Dordrecht, The Netherlands, 2000. [Google Scholar]
  32. Mundici, D. MV Algebras: A Short Tutorial. 2007. Available online: http://www.matematica.uns.edu.ar/IXCongresoMonteiro/Comunicaciones/Mundici_tutorial.pdf (accessed on 26 May 2007).
  33. Mundici, D. Interpretation of AF C*-algebras in Lukasiewicz sentential calculus. J. Funct. Anal. 1986, 56, 889–894. [Google Scholar]
  34. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property II: MV-Algebras. Kybernetika 2005, 41, 161–176. [Google Scholar]
  35. Riečan, B. Kolmogorov–Sinaj entropy on MV-algebras. Int. J. Theor. Phys. 2005, 44, 1041–1052. [Google Scholar] [CrossRef]
  36. Kôpka, F.; Chovanec, F. D-posets. Math. Slovaca 1994, 44, 21–34. [Google Scholar]
  37. Kôpka, F. Quasiproduct on Boolean D-posets. Int. J. Theor. Phys. 2008, 47, 26–35. [Google Scholar] [CrossRef]
  38. Frič, R. On D-posets of fuzzy sets. Math. Slovaca 2014, 64, 545–554. [Google Scholar] [CrossRef]
  39. Foulis, D.J.; Bennet, M.K. Effect algebras and unsharp quantum logics. Found. Phys. 1994, 24, 1331–1352. [Google Scholar] [CrossRef]
  40. Frič, R.; Papčo, M. Probability domains. Int. J. Theor. Phys. 2010, 49, 3092–3100. [Google Scholar] [CrossRef]
  41. Skřivánek, V.; Frič, R. Generalized random events. Int. J. Theor. Phys. 2015, 54, 4386–4396. [Google Scholar] [CrossRef]
  42. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property I: Basic Properties. Kybernetika 2005, 41, 143–160. [Google Scholar]
  43. Giski, Z.E.; Ebrahimi, M. Entropy of Countable Partitions on effect Algebras with the Riesz Decomposition Property and Weak Sequential Effect Algebras. Cankaya Univ. J. Sci. Eng. 2015, 12, 20–39. [Google Scholar]
  44. Ebrahimi, M.; Mosapour, B. The Concept of Entropy on D-posets. Cankaya Univ. J. Sci. Eng. 2013, 10, 137–151. [Google Scholar]
  45. Riečan, B. On the product MV-algebras. Tatra Mt. Math. Publ. 1999, 16, 143–149. [Google Scholar]
  46. Montagna, F. An algebraic approach to propositional fuzzy logic. J. Log. Lang. Inf. 2000, 9, 91–124. [Google Scholar] [CrossRef]
  47. Jakubík, J. On product MV algebras. Czech. Math J. 2002, 52, 797–810. [Google Scholar] [CrossRef]
  48. Di Nola, A.; Dvurečenskij, A. Product MV-algebras. Mult. Valued Log. 2001, 6, 193–215. [Google Scholar]
  49. Petrovičová, J. On the entropy of partitions in product MV-algebras. Soft Comput. 2000, 4, 41–44. [Google Scholar] [CrossRef]
  50. Petrovičová, J. On the entropy of dynamical systems in product MV-algebras. Fuzzy Sets Syst. 2001, 121, 347–351. [Google Scholar] [CrossRef]
  51. Atanassov, K. Intuitionistic Fuzzy Sets: Theory and Applications; Physica Verlag: New York, NY, USA, 1999. [Google Scholar]
  52. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  53. Atanassov, K. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–45. [Google Scholar] [CrossRef]
  54. Atanassov, K.; Riečan, B. On two operations over intuitionistic fuzzy sets. J. Appl. Math. Stat. Inform. 2006, 2, 145–148. [Google Scholar] [CrossRef]
  55. Riečan, B. Probability theory on IF events. In Algebraic and Proof-Theoretic Aspects of Non-Classical Logics; Papers in Honor of Daniele Mundici on the Occasion of his 60th Birthday; Lecture Notes in Computer Science; Springer: New York, NY, USA, 2007; pp. 290–308. [Google Scholar]
  56. Farnoosh, R.; Rahimi, M.; Kumar, P. Removing noise in a digital image using a new entropy method based on intuitionistic fuzzy sets. In Proceedings of the International Conference on Fuzzy Systems, Vancouver, BC, Canada, 24–29 July 2016; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2016; pp. 1328–1332. [Google Scholar]
  57. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 1996, 78, 305–316. [Google Scholar] [CrossRef]
  58. Ďurica, M. Entropy on IF-events. Notes Intuit. Fuzzy Sets 2007, 13, 30–40. [Google Scholar]
  59. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [Google Scholar] [CrossRef]
