Article

Dimension-Free Bounds for the Union-Closed Sets Conjecture

School of Statistics and Data Science, The Key Laboratory of Pure Mathematics and Combinatorics (LPMC), Key Laboratory for Medical Data Analysis and Statistical Research of Tianjin (KLMDASR), and Laboratory for Economic Behaviors and Policy Simulation (LEBPS), Nankai University, Tianjin 300071, China
Entropy 2023, 25(5), 767; https://doi.org/10.3390/e25050767
Submission received: 12 February 2023 / Revised: 5 May 2023 / Accepted: 5 May 2023 / Published: 8 May 2023
(This article belongs to the Special Issue Extremal and Additive Combinatorial Aspects in Information Theory)

Abstract

The union-closed sets conjecture states that, in any nonempty union-closed family $\mathcal{F}$ of subsets of a finite set, there exists an element contained in at least a proportion $1/2$ of the sets of $\mathcal{F}$. Using an information-theoretic method, Gilmer recently showed that there exists an element contained in at least a proportion $0.01$ of the sets of such $\mathcal{F}$. He conjectured that his technique can be pushed to the constant $\frac{3-\sqrt{5}}{2}$, which was subsequently confirmed by several researchers, including Sawin. Furthermore, Sawin showed that Gilmer's technique can be improved to obtain a bound better than $\frac{3-\sqrt{5}}{2}$, but this new bound was not explicitly given by Sawin. This paper further improves Gilmer's technique to derive new bounds, in optimization form, for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on auxiliary random variables, we make Sawin's improvement computable and then evaluate it numerically, which yields a bound of approximately $0.38234$, slightly better than $\frac{3-\sqrt{5}}{2} \approx 0.38197$.

1. Introduction

This paper concerns the union-closed sets conjecture, described in information-theoretic language as follows. Every set $B \subseteq [n] := \{1, 2, \ldots, n\}$ is uniquely described by an $n$-length sequence $x^n := (x_1, x_2, \ldots, x_n) \in \Omega^n$ with $\Omega := \{0, 1\}$, such that $x_i = 1$ if $i \in B$ and $x_i = 0$ otherwise. So, a family $\mathcal{F}$ of subsets of $[n]$ uniquely corresponds to a subset $A \subseteq \Omega^n$. Denote the (element-wise) OR operation for two finite $\Omega$-valued sequences as $x^n \vee y^n := (x_i \vee y_i)_{i \in [n]}$ with $x^n, y^n \in \Omega^n$, where $\vee$ is the OR operation. The family $\mathcal{F}$ is closed under the union operation (i.e., $F \cup G \in \mathcal{F}$ for all $F, G \in \mathcal{F}$) if and only if the corresponding set $A \subseteq \Omega^n$ is closed under the OR operation (i.e., $x^n \vee y^n \in A$ for all $x^n, y^n \in A$).
Let $A \subseteq \Omega^n$ be closed under the OR operation. Let $X^n := (X_1, X_2, \ldots, X_n)$ be a random vector uniformly distributed on $A$, and denote $P_{X^n} = \mathrm{Unif}(A)$ as its distribution (or probability mass function, PMF). We are interested in estimating
$$p_A := \max_{i \in [n]} P_{X_i}(1),$$
where $P_{X_i}$ is the distribution of $X_i$ and, hence, $P_{X_i}(1)$ is the proportion of the sets containing the element $i$ among all sets in $\mathcal{F}$. Frankl made the following conjecture.
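As a concrete illustration, the OR-encoding and the quantity $p_A$ can be computed by brute force for a small union-closed family. The family below is a made-up example chosen only for illustration:

```python
from itertools import product

def is_or_closed(A):
    """Check that a set of binary tuples is closed under element-wise OR."""
    return all(tuple(a | b for a, b in zip(x, y)) in A
               for x, y in product(A, repeat=2))

# A hypothetical union-closed family over [3]: {{}, {1}, {1,2}, {1,2,3}}
A = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)}
assert is_or_closed(A)

# p_A = max_i P_{X_i}(1) under the uniform distribution on A
p_A = max(sum(x[i] for x in A) / len(A) for i in range(3))
print(p_A)   # element 1 appears in 3 of the 4 sets, so p_A = 0.75
```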
Conjecture 1 
(Frankl Union-Closed Sets Conjecture). $p_A \ge 1/2$ for any OR-closed set $A$.
This conjecture equivalently states that, for any union-closed family $\mathcal{F}$, there exists an element contained in at least a proportion $1/2$ of the sets of $\mathcal{F}$. Since the union-closed sets conjecture was posed by Peter Frankl in 1979, it has attracted a great deal of research interest; see, e.g., [1,2,3,4,5]. We refer readers to the survey paper [6] for more details. Gilmer [7] made a breakthrough recently, showing that this conjecture holds with the constant $0.01$. His method used a clever idea from information theory in which two independent random vectors are constructed. Gilmer conjectured that his method can improve the constant to $\frac{3-\sqrt{5}}{2}$, which has now been confirmed by several groups of researchers [8,9,10,11]. This constant has been shown to be the best possible for an approximate version of the union-closed sets problem [9]. Moreover, Sawin [8] further developed Gilmer's idea by allowing the two random vectors to depend on each other. In fact, the same idea was previously used by the present author in several works [12,13,14]. By this technique, Sawin [8] showed that the constant can be improved to a value strictly larger than $\frac{3-\sqrt{5}}{2}$. However, without cardinality bounds on auxiliary random variables, Sawin's constant is difficult to compute; hence, the accurate value of this improved constant was not explicitly given in [8].
The present paper further develops Gilmer's (or Sawin's) technique to derive new constants (or bounds), in optimization form, for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on auxiliary random variables, we make Sawin's improvement computable and then evaluate it numerically, which yields a bound of approximately $0.38234$, slightly better than $\frac{3-\sqrt{5}}{2} \approx 0.38197$.

2. Main Results

To state our result, we need to introduce some notation. Since we only consider distributions on finite alphabets, we do not distinguish between the terms “distributions” and “probability mass functions”. For a pair of distributions $(P_X, P_Y)$, a coupling of $(P_X, P_Y)$ is a joint distribution $P_{XY}$ whose marginals are, respectively, $P_X$ and $P_Y$. For a distribution $P_X$ defined on a finite alphabet $\mathcal{X}$, a coupling $P_{XX'}$ of $(P_X, P_X)$ is called symmetric if $P_{XX'}(x, y) = P_{XX'}(y, x)$ for all $x, y \in \mathcal{X}$. Denote $C_s(P_X)$ as the set of symmetric couplings of $(P_X, P_X)$. Denote $\delta_x$ as the Dirac measure with a single atom at $x$; that is, the PMF of this measure takes the value $1$ at $x$ and the value $0$ at all other points.
For a joint distribution $P_{XY}$, the (Pearson) correlation coefficient between $(X, Y) \sim P_{XY}$ is defined by
$$\rho_{\mathrm{p}}(X;Y) := \begin{cases} \dfrac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}, & \operatorname{Var}(X)\operatorname{Var}(Y) > 0, \\ 0, & \operatorname{Var}(X)\operatorname{Var}(Y) = 0. \end{cases}$$
The maximal correlation between $(X, Y) \sim P_{XY}$ is defined by
$$\rho_{\mathrm{m}}(X;Y) := \sup_{f,g} \rho_{\mathrm{p}}(f(X); g(Y)) = \begin{cases} \sup_{f,g} \dfrac{\operatorname{Cov}(f(X), g(Y))}{\sqrt{\operatorname{Var}(f(X))\operatorname{Var}(g(Y))}}, & \operatorname{Var}(f(X))\operatorname{Var}(g(Y)) > 0, \\ 0, & \operatorname{Var}(f(X))\operatorname{Var}(g(Y)) = 0, \end{cases}$$
where the supremum is taken over all pairs of real-valued functions $f, g$ such that $\operatorname{Var}(f(X)), \operatorname{Var}(g(Y)) < \infty$. Note that $\rho_{\mathrm{m}}(X;Y) \in [0,1]$ and, moreover, $\rho_{\mathrm{m}}(X;Y) = 0$ if and only if $X, Y$ are independent. Moreover, $\rho_{\mathrm{m}}(X;Y)$ is equal to the second largest singular value of the matrix $\left[ \frac{P_{XY}(x,y)}{\sqrt{P_X(x) P_Y(y)}} \right]_{(x,y)}$; see, e.g., [15]. Clearly, the largest singular value of this matrix is equal to $1$, with corresponding singular vectors $(\sqrt{P_X(x)})_x$ and $(\sqrt{P_Y(y)})_y$.
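For a binary pair, the singular-value characterization can be checked directly. The joint PMF below is an arbitrary toy example; the script computes the $2\times 2$ singular values in closed form and compares the second one with the Pearson coefficient of the $\{0,1\}$-valued pair, which coincide for binary variables (a fact used again in the proof of Theorem 1):

```python
import math

# A toy joint PMF of a binary pair (X, Y); the numbers are arbitrary
P = [[0.4, 0.1],
     [0.2, 0.3]]
PX = [sum(row) for row in P]                # marginal of X: (0.5, 0.5)
PY = [sum(col) for col in zip(*P)]          # marginal of Y: (0.6, 0.4)

# M(x, y) = P(x, y) / sqrt(PX(x) PY(y))
M = [[P[x][y] / math.sqrt(PX[x] * PY[y]) for y in range(2)] for x in range(2)]

# Singular values of a 2x2 matrix via the eigenvalues of M^T M
a = M[0][0]**2 + M[1][0]**2
c = M[0][1]**2 + M[1][1]**2
b = M[0][0]*M[0][1] + M[1][0]*M[1][1]
disc = math.sqrt((a - c)**2 + 4*b**2)
s1 = math.sqrt((a + c + disc) / 2)          # largest singular value = 1
s2 = math.sqrt((a + c - disc) / 2)          # second one = rho_m(X; Y)

# Pearson correlation of the {0,1}-valued pair
cov = P[1][1] - PX[1] * PY[1]
rho_p = cov / math.sqrt(PX[0]*PX[1] * PY[0]*PY[1])

print(s1, s2, rho_p)   # s1 = 1, and s2 = |rho_p| for binary pairs
```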
Denote, for $p, q, \rho \in [0, 1]$,
$$z_1 := pq - \rho\sqrt{p(1-p)q(1-q)}, \qquad z_2 := pq + \rho\sqrt{p(1-p)q(1-q)},$$
and
$$\varphi(\rho, p, q) := \operatorname{median}\left( \max\{p, q, p+q-z_2\},\ 1/2,\ \min\{p+q,\ p+q-z_1\} \right), \tag{1}$$
where $\operatorname{median}(A)$ denotes the median value of the elements in a multiset $A$. We regard the set in (1) as a multiset, which means $\operatorname{median}\{a, a, b\} = a$. Denote $h(a) := -a \log_2 a - (1-a) \log_2 (1-a)$ for $a \in [0, 1]$ as the binary entropy function. Define, for $t > 0$,
$$\Gamma(t) := \sup_{P_\rho}\ \inf_{P_p:\ \mathbb{E}h(p) > 0,\ \mathbb{E}p \le t}\ \frac{\mathbb{E}_\rho \left[ \inf_{P_{pq} \in C_s(P_p):\ \rho_{\mathrm{m}}(p;q) \le \rho} \mathbb{E}_{p,q}\, h(\varphi(\rho, p, q)) \right]}{\mathbb{E}\, h(p)}, \tag{2}$$
where the supremum over $P_\rho$ and the infimum over $P_p$ are both taken over all finitely supported probability distributions on $[0, 1]$.
Our main results are as follows.
Theorem 1. 
If $\Gamma(t) > 1$ for some $t \in (0, 1/2)$, then $p_A \ge t$ for any OR-closed $A \subseteq \Omega^n$ (i.e., for any union-closed family $\mathcal{F}$, there exists an element contained in at least a proportion $t$ of the sets of $\mathcal{F}$).
The proof of Theorem 1 is given in Section 3, using a technique based on coupling and entropy. It is essentially the same as the technique used by Sawin [8]. Prior to Sawin's work, such a technique was used by the present author in several works; see [12,13,14].
Equivalently, Theorem 1 states that $p_A \ge t_{\sup}$ for any OR-closed $A \subseteq \Omega^n$, where $t_{\sup} := \sup\{t \in (0, 1/2) : \Gamma(t) > 1\}$. To compute $\Gamma(t)$ numerically, it is required to upper bound the cardinality of the support of $P_p$ in the outer infimum in (2) since, otherwise, infinitely many parameters need to be optimized. This is left for future work. We next provide a computable bound, which is a lower bound on $\Gamma(t)$, instead of $\Gamma(t)$ itself.
If we choose $P_\rho = \delta_0$, then Theorem 1 implies Gilmer's bound in [7] since, in this case, the couplings constructed in the proof of Theorem 1 (given in the next section) turn out to be independent, coinciding with Gilmer's construction. On the other hand, if we choose $P_\rho = \delta_1$, then the couplings constructed in our proof are arbitrary. In fact, we can make a choice of $P_\rho$ better than these two special cases. As suggested by Sawin [8], we can choose $P_\rho = (1-\alpha)\delta_0 + \alpha\delta_1$, which in fact leads to an optimization over mixtures of independent couplings and arbitrary couplings. This final choice yields the following bound.
Substituting $\rho = 0$ and $\rho = 1$, respectively, into $\varphi(\rho, p, q)$ yields
$$\varphi(0, p, q) = p + q - pq, \tag{3}$$
$$\varphi(1, p, q) = \operatorname{median}\left( \max\{p, q\},\ 1/2,\ p+q \right), \tag{4}$$
where, in the evaluation of $\varphi(1, p, q)$, the following facts were used: (1)
$$p + q - pq - \sqrt{p(1-p)q(1-q)} \le \max\{p, q\}$$
for all $p, q \in [0, 1]$; (2) if $p + q \le 1$, then
$$p + q - pq + \sqrt{p(1-p)q(1-q)} \ge p + q,$$
and otherwise,
$$1/2 < \max\{p, q\} \le p + q - pq + \sqrt{p(1-p)q(1-q)}.$$
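These two evaluations can be sanity-checked numerically against the general definition of $\varphi$; the grid below is arbitrary:

```python
import math

def phi(rho, p, q):
    """phi(rho, p, q) from (1): a median of three values."""
    root = rho * math.sqrt(p*(1-p)*q*(1-q))
    z1, z2 = p*q - root, p*q + root
    return sorted([max(p, q, p + q - z2), 0.5, min(p + q, p + q - z1)])[1]

# Check the two special cases rho = 0 and rho = 1 on a grid of (p, q)
for i in range(21):
    for j in range(21):
        p, q = i/20, j/20
        assert math.isclose(phi(0, p, q), p + q - p*q, abs_tol=1e-9)
        assert math.isclose(phi(1, p, q),
                            sorted([max(p, q), 0.5, p + q])[1], abs_tol=1e-9)
print("special cases verified")
```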
By defining
$$g(P_{pq}, \alpha) := (1-\alpha)\, \mathbb{E}_{(p,q) \sim P_p^{\otimes 2}}\left[ h(p + q - pq) \right] + \alpha\, \mathbb{E}_{(p,q) \sim P_{pq}}\left[ h(\varphi(1, p, q)) \right]$$
and substituting $P_\rho = (1-\alpha)\delta_0 + \alpha\delta_1$ into Theorem 1, one obtains the following simpler bound.
Proposition 1. 
For $t \in (0, 1/2)$,
$$\Gamma(t) \ge \hat{\Gamma}(t) := \sup_{\alpha \in [0,1]}\ \inf_{\text{symmetric } P_{pq}:\ \mathbb{E}h(p) > 0}\ \frac{g(P_{pq}, \alpha)}{\mathbb{E}\, h(p)}, \tag{5}$$
where the infimum is taken over all distributions $P_{pq}$ of the form $(1-\beta) Q_{a_1, a_2} + \beta Q_{b_1, b_2}$ with
$$0 \le a := \frac{a_1 + a_2}{2} \le t < b := \frac{b_1 + b_2}{2} \le 1 \tag{6}$$
and $\beta = 0$ or $\beta = \frac{t-a}{b-a} > 0$, such that $\mathbb{E}h(p) > 0$. (Note that $\mathbb{E}h(p) = 0$ if and only if $P_{pq}$ is a convex combination of $\delta_{(0,0)}$, $\delta_{(0,1)}$, $\delta_{(1,0)}$, and $\delta_{(1,1)}$.) Here,
$$Q_{x,y} := \tfrac{1}{2}\delta_{(x,y)} + \tfrac{1}{2}\delta_{(y,x)} \tag{7}$$
with $\delta_{(x,y)}$ denoting the Dirac measure at $(x, y)$ (whose PMF takes the value $1$ at $(x, y)$ and the value $0$ at all other points).
As a consequence of the two results above, we have the following corollary.
Corollary 1. 
If $\hat{\Gamma}(t) > 1$ for some $t \in (0, 1/2)$, then $p_A \ge t$ for any OR-closed $A \subseteq \Omega^n$.
Corollary 1 follows by combining Theorem 1 with Proposition 1, whose proof is given in Section 4.
The lower bound in (5), without the cardinality bound on the support of $P_{pq}$, was given by Sawin [8], who used it to show $p_A > \frac{3-\sqrt{5}}{2}$. Thanks to the cardinality bound, however, we can numerically compute the best bound on $p_A$ that can be derived using $\hat{\Gamma}(t)$. That is, $p_A \ge \hat{t}_{\sup}$ for any OR-closed $A \subseteq \Omega^n$, where $\hat{t}_{\sup} := \sup\{t \in (0, 1/2) : \hat{\Gamma}(t) > 1\}$. Numerical results show that if we set $\alpha = 0.035$ and $t = 0.38234$, then the optimal $P_{pq} = (1-\beta) Q_{a,a} + \beta Q_{a,1}$ with $a \approx 0.3300622$ and $\beta \approx 0.1560676$, which leads to the lower bound $\hat{\Gamma}(t) \ge 1.00000889$. Hence, $p_A \ge 0.38234$ for any OR-closed $A \subseteq \Omega^n$. This is slightly better than the previous bound $\frac{3-\sqrt{5}}{2} \approx 0.38197$. The choice of $(\alpha, t)$ in our evaluation is nearly optimal. Our code can be found on the author's homepage https://leiyudotscholar.wordpress.com/ (accessed on 1 May 2023). More decimal places of Sawin's bound (or, equivalently, $\hat{t}_{\sup}$) were computed by Cambie in a concurrent work [16], namely $0.382345533366702 \le \hat{t}_{\sup} \le 0.382345533366703$, which is attained by the choice $\alpha \approx 0.03560698136437784$. This more precise evaluation can also be verified using our code above.
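The reported value can be reproduced by evaluating the ratio in (5) at the stated optimizer. The sketch below hard-codes the reported $(\alpha, t, a)$, specializes $g(\cdot, \alpha)$ to this three-atom distribution in closed form, and is a verification aid only, not the optimization code itself:

```python
import math

def h(x):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    return 0.0 if x <= 0 or x >= 1 else -x*math.log2(x) - (1-x)*math.log2(1-x)

def phi1(p, q):
    """phi(1, p, q) = median of {max(p,q), 1/2, p+q}."""
    return sorted([max(p, q), 0.5, p + q])[1]

# Reported near-optimal choice: alpha = 0.035, t = 0.38234,
# P_pq = (1-beta) Q_{a,a} + beta Q_{a,1}
alpha, t, a = 0.035, 0.38234, 0.3300622
beta = 2 * (t - a) / (1 - a)        # = (t-a)/(b-a) with b = (1+a)/2

# Atoms of P_pq and the marginal of p
atoms = [((a, a), 1 - beta), ((a, 1.0), beta/2), ((1.0, a), beta/2)]
marg = {a: 1 - beta/2, 1.0: beta/2}

# g(P_pq, alpha): independent part (rho = 0) plus coupled part (rho = 1)
g = (1 - alpha) * sum(wp * wq * h(p + q - p*q)
                      for p, wp in marg.items() for q, wq in marg.items()) \
    + alpha * sum(w * h(phi1(p, q)) for (p, q), w in atoms)

ratio = g / sum(w * h(p) for p, w in marg.items())
print(ratio)   # > 1 (the text reports 1.00000889), certifying p_A >= t
```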

3. Proof of Theorem 1

Denote $H(X) = -\sum_x P_X(x) \log P_X(x)$ as the Shannon entropy of a random variable $X \sim P_X$. Let $A \subseteq \Omega^n$ be closed under the OR operation. We assume $|A| \ge 2$; Theorem 1 holds trivially for singletons $A$, since in that case $p_A = 1$. Let $P_{X^n} = \mathrm{Unif}(A)$. So, $H(X^n) > 0$ and, by the chain rule, $H(X^n) = \sum_{i=1}^n H(X_i \mid X^{i-1})$.
If $P_{X^n Y^n} \in C_s(P_{X^n})$, then $Z^n := X^n \vee Y^n \in A$ a.s., where $(X^n, Y^n) \sim P_{X^n Y^n}$. So, we have
$$H(Z^n) \le \log |A| = H(X^n).$$
We hence have
$$\sup_{P_{X^n Y^n} \in C_s(P_{X^n})} \frac{H(Z^n)}{H(X^n)} \le 1.$$
If $p_A \le t$, then $P_{X_i}(1) \le t$ for all $i \in [n]$. Relaxing $P_{X^n} = \mathrm{Unif}(A)$ to arbitrary distributions such that $P_{X_i}(1) \le t$, we obtain $\Gamma_n(t) \le 1$, where
$$\Gamma_n(t) := \inf_{P_{X^n}:\ P_{X_i}(1) \le t,\ \forall i}\ \sup_{P_{X^n Y^n} \in C_s(P_{X^n})} \frac{H(Z^n)}{H(X^n)}. \tag{8}$$
In other words, if $\Gamma_n(t) > 1$ for a given $t$, then, by contradiction, $p_A > t$.
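The two displays above can be observed on a toy example: for a small OR-closed $A$ (a hypothetical chain, so element-wise ORs stay inside it) and the independent, hence symmetric, coupling, $Z^n$ lands in $A$ and $H(Z^n) \le H(X^n)$:

```python
import math
from itertools import product
from collections import Counter

# A hypothetical OR-closed set A (a chain under element-wise OR)
A = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
N = len(A) ** 2

# X^n, Y^n i.i.d. uniform on A (an independent, hence symmetric, coupling);
# Z^n = X^n OR Y^n
Z = Counter(tuple(a | b for a, b in zip(x, y)) for x, y in product(A, repeat=2))

HX = math.log2(len(A))                          # H(X^n) = log |A| = 2 bits
HZ = -sum((c / N) * math.log2(c / N) for c in Z.values())

assert all(z in set(A) for z in Z)              # Z^n stays in A a.s.
print(HZ, HX)                                   # H(Z^n) <= H(X^n)
```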
We next show that $\Gamma_n(t) \ge \Gamma(t)$, which implies Theorem 1. To this end, we need the following lemma.
For two conditional distributions $P_{X|U}, P_{Y|V}$, denote $C(P_{X|U}, P_{Y|V})$ as the set of conditional distributions $Q_{XY|UV}$ whose marginals satisfy $Q_{X|UV} = P_{X|U}$ and $Q_{Y|UV} = P_{Y|V}$. The conditional (Pearson) correlation coefficient of $X$ and $Y$ given $U$ is defined by
$$\rho_{\mathrm{p}}(X;Y|U) := \begin{cases} \dfrac{\mathbb{E}[\operatorname{cov}(X,Y|U)]}{\sqrt{\mathbb{E}[\operatorname{var}(X|U)]\, \mathbb{E}[\operatorname{var}(Y|U)]}}, & \mathbb{E}[\operatorname{var}(X|U)]\, \mathbb{E}[\operatorname{var}(Y|U)] > 0, \\ 0, & \mathbb{E}[\operatorname{var}(X|U)]\, \mathbb{E}[\operatorname{var}(Y|U)] = 0. \end{cases}$$
The conditional maximal correlation coefficient of $X$ and $Y$ given $U$ is defined by
$$\rho_{\mathrm{m}}(X;Y|U) := \sup_{f,g} \rho_{\mathrm{p}}(f(X,U); g(Y,U) \mid U),$$
where the supremum is taken over all real-valued functions $f(x,u), g(y,u)$ such that $\mathbb{E}[\operatorname{var}(f(X,U)|U)], \mathbb{E}[\operatorname{var}(g(Y,U)|U)] < \infty$. It has been shown in [17] that
$$\rho_{\mathrm{m}}(X;Y|U) = \sup_{u:\ P_U(u) > 0} \rho_{\mathrm{m}}(X;Y|U = u),$$
where $\rho_{\mathrm{m}}(X;Y|U = u) := \rho_{\mathrm{m}}(X';Y')$ with $(X', Y') \sim P_{XY|U=u}$.
Lemma 1 
(Product Construction of Couplings; Lemma 9 in [12], Corollary 3 in [17], and Lemma 6 in [18]). For any conditional distributions $P_{X_i|X^{i-1}}, P_{Y_i|Y^{i-1}}$, $i \in [n]$, and any
$$Q_{X_i Y_i | X^{i-1} Y^{i-1}} \in C(P_{X_i|X^{i-1}}, P_{Y_i|Y^{i-1}}), \quad i \in [n],$$
it holds that
$$\prod_{i=1}^n Q_{X_i Y_i | X^{i-1} Y^{i-1}} \in C\left( \prod_{i=1}^n P_{X_i|X^{i-1}},\ \prod_{i=1}^n P_{Y_i|Y^{i-1}} \right). \tag{9}$$
Moreover, for $(X^n, Y^n) \sim \prod_{i=1}^n Q_{X_i Y_i | X^{i-1} Y^{i-1}}$, it holds that
$$\rho_{\mathrm{m}}(X^n; Y^n) = \max_{i \in [n]} \rho_{\mathrm{m}}(X_i; Y_i \mid X^{i-1}, Y^{i-1}). \tag{10}$$
For a conditional distribution $P_{X|U}$ defined on finite alphabets, a conditional coupling $P_{XX'|UU'}$ of $(P_{X|U}, P_{X|U})$ is called symmetric if $P_{XX'|UU'}(x, y | u, v) = P_{XX'|UU'}(y, x | v, u)$ for all $x, y \in \mathcal{X}$ and $u, v \in \mathcal{U}$. Denote $C_s(P_{X|U})$ as the set of symmetric conditional couplings of $(P_{X|U}, P_{X|U})$. Applying the lemma above to symmetric couplings, we have that if couplings $Q_{X_i Y_i | X^{i-1} Y^{i-1}} \in C_s(P_{X_i|X^{i-1}})$ satisfy $\rho_{\mathrm{m}}(X_i; Y_i | X^{i-1}, Y^{i-1}) \le \rho$ for some $\rho > 0$, then
$$\prod_{i=1}^n Q_{X_i Y_i | X^{i-1} Y^{i-1}} \in C_s\left( \prod_{i=1}^n P_{X_i|X^{i-1}} \right), \qquad \rho_{\mathrm{m}}(X^n; Y^n) \le \rho,$$
with $(X^n, Y^n) \sim \prod_{i=1}^n Q_{X_i Y_i | X^{i-1} Y^{i-1}}$. We hence have that, for any $\rho \in [0, 1]$,
$$\begin{aligned} \sup_{\substack{P_{X^n Y^n} \in C_s(P_{X^n}):\\ \rho_{\mathrm{m}}(X^n;Y^n) \le \rho}} H(Z^n) &\ge \sup_{\substack{P_{X^{n-1} Y^{n-1}} \in C_s(P_{X^{n-1}}):\\ \rho_{\mathrm{m}}(X^{n-1};Y^{n-1}) \le \rho}} \left[ H(Z^{n-1}) + \sup_{\substack{P_{X_n Y_n | X^{n-1} Y^{n-1}} \in C_s(P_{X_n|X^{n-1}}):\\ \rho_{\mathrm{m}}(X_n;Y_n|X^{n-1},Y^{n-1}) \le \rho}} H(Z_n \mid Z^{n-1}) \right] \\ &\ge \sup_{\substack{P_{X^{n-1} Y^{n-1}} \in C_s(P_{X^{n-1}}):\\ \rho_{\mathrm{m}}(X^{n-1};Y^{n-1}) \le \rho}} H(Z^{n-1}) + \inf_{\substack{P_{X^{n-1} Y^{n-1}} \in C_s(P_{X^{n-1}}):\\ \rho_{\mathrm{m}}(X^{n-1};Y^{n-1}) \le \rho}}\ \sup_{\substack{P_{X_n Y_n | X^{n-1} Y^{n-1}} \in C_s(P_{X_n|X^{n-1}}):\\ \rho_{\mathrm{m}}(X_n;Y_n|X^{n-1},Y^{n-1}) \le \rho}} H(Z_n \mid Z^{n-1}) \\ &\ge \cdots \ge \sum_{i=1}^n\ \inf_{\substack{P_{X^{i-1} Y^{i-1}} \in C_s(P_{X^{i-1}}):\\ \rho_{\mathrm{m}}(X^{i-1};Y^{i-1}) \le \rho}}\ \sup_{\substack{P_{X_i Y_i | X^{i-1} Y^{i-1}} \in C_s(P_{X_i|X^{i-1}}):\\ \rho_{\mathrm{m}}(X_i;Y_i|X^{i-1},Y^{i-1}) \le \rho}} H(Z_i \mid Z^{i-1}), \end{aligned} \tag{11}$$
where the first inequality above follows from Lemma 1 and the chain rule for entropies. In fact, in the derivation above, the $i$-th distribution $P_{X_i Y_i | X^{i-1} Y^{i-1}}$ is chosen as a greedy coupling in the sense that it only maximizes the $i$-th objective function $H(Z_i \mid Z^{i-1})$, regardless of the other $H(Z_j \mid Z^{j-1})$ with $j > i$ (although it indeed affects their values).
By the fact that conditioning reduces entropy, it holds that
$$H(Z_i \mid Z^{i-1}) \ge H(Z_i \mid X^{i-1}, Y^{i-1}).$$
Denote
$$g_i(P_{X^{i-1}}, \rho) := \inf_{\substack{P_{X^{i-1} Y^{i-1}} \in C_s(P_{X^{i-1}}):\\ \rho_{\mathrm{m}}(X^{i-1};Y^{i-1}) \le \rho}}\ \sup_{\substack{P_{X_i Y_i | X^{i-1} Y^{i-1}} \in C_s(P_{X_i|X^{i-1}}):\\ \rho_{\mathrm{m}}(X_i;Y_i|X^{i-1},Y^{i-1}) \le \rho}} H(Z_i \mid X^{i-1}, Y^{i-1}). \tag{12}$$
Then, the expression on the right-hand side of (11) is further lower bounded by $\sum_{i=1}^n g_i(P_{X^{i-1}}, \rho)$. Combining this with (8) and (11), and noting that $\rho \in [0, 1]$ is arbitrary, we obtain that
$$\begin{aligned} \Gamma_n(t) &\ge \inf_{P_{X^n}:\ P_{X_i}(1) \le t,\ \forall i}\ \sup_{\rho \in [0,1]} \frac{\sum_{i=1}^n g_i(P_{X^{i-1}}, \rho)}{\sum_{i=1}^n H(X_i \mid X^{i-1})} = \inf_{P_{X^n}:\ P_{X_i}(1) \le t,\ \forall i}\ \sup_{P_\rho} \frac{\mathbb{E}_{P_\rho}\left[ \sum_{i=1}^n g_i(P_{X^{i-1}}, \rho) \right]}{\sum_{i=1}^n H(X_i \mid X^{i-1})} \\ &\ge \sup_{P_\rho}\ \inf_{P_{X^n}:\ P_{X_i}(1) \le t,\ \forall i} \frac{\sum_{i=1}^n \mathbb{E}_{P_\rho}\left[ g_i(P_{X^{i-1}}, \rho) \right]}{\sum_{i=1}^n H(X_i \mid X^{i-1})} \\ &\ge \sup_{P_\rho}\ \inf_{P_{X^n}:\ P_{X_i}(1) \le t,\ \forall i}\ \min_{i \in [n]:\ H(X_i \mid X^{i-1}) > 0} \frac{\mathbb{E}_{P_\rho}\left[ g_i(P_{X^{i-1}}, \rho) \right]}{H(X_i \mid X^{i-1})} \qquad (13) \\ &\ge \sup_{P_\rho}\ \inf_{P_{X^j}:\ H(X_j \mid X^{j-1}) > 0,\ P_{X_j}(1) \le t} \frac{\mathbb{E}_{P_\rho}\left[ g_j(P_{X^{j-1}}, \rho) \right]}{H(X_j \mid X^{j-1})}, \end{aligned}$$
where
  • (13) follows since $\frac{a+b}{c+d} \ge \min\left\{ \frac{a}{c}, \frac{b}{d} \right\}$ for $a, b \ge 0$ and $c, d > 0$, and since $H(X_i \mid X^{i-1}) = 0$ implies that $X_i$ is a deterministic function of $X^{i-1}$ and, hence, $g_i(P_{X^{i-1}}, \rho) = 0$;
  • the index $j$ in the last line is the optimal $i$ attaining the minimum in (13).
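The elementary inequality in the first bullet (the mediant inequality) is easy to confirm numerically:

```python
import random

# Mediant inequality: (a+b)/(c+d) >= min(a/c, b/d) for a, b >= 0 and c, d > 0
random.seed(0)
for _ in range(1000):
    a, b = random.random(), random.random()
    c, d = random.random() + 1e-9, random.random() + 1e-9
    assert (a + b) / (c + d) >= min(a/c, b/d) - 1e-12
print("mediant inequality holds on random samples")
```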
Denote $X = X_j$, $Y = Y_j$, $U = X^{j-1}$, $V = Y^{j-1}$, and $Z = X \vee Y$. Then,
$$\Gamma_n(t) \ge \sup_{P_\rho}\ \inf_{\substack{P_{UX}:\ H(X|U) > 0,\\ P_X(1) \le t}} \frac{\mathbb{E}_{P_\rho}\left[ \inf_{\substack{P_{UV} \in C_s(P_U):\\ \rho_{\mathrm{m}}(U;V) \le \rho}}\ \sup_{\substack{P_{XY|UV} \in C_s(P_{X|U}):\\ \rho_{\mathrm{m}}(X;Y|U,V) \le \rho}} H(Z \mid U, V) \right]}{H(X \mid U)}. \tag{14}$$
We next further simplify the lower bound in (14). Denote
$$p = P_{X|U}(1|U), \qquad q = P_{Y|V}(1|V), \qquad r = P_{XY|UV}(1, 1 \mid U, V). \tag{15}$$
So,
$$P_{XY|UV}(\cdot, \cdot \mid U, V) = \begin{pmatrix} 1 + r - p - q & q - r \\ p - r & r \end{pmatrix}$$
with
$$\max\{0,\ p + q - 1\} \le r \le \min\{p,\ q\}.$$
Note that
$$\rho_{\mathrm{m}}(X;Y|U,V) = \sup_{u,v:\ P_{UV}(u,v) > 0} \rho_{\mathrm{m}}(X_u; Y_v) = \sup_{u,v:\ P_{UV}(u,v) > 0} |\rho_{\mathrm{p}}(X_u; Y_v)| = \sup_{u,v:\ P_{UV}(u,v) > 0} \frac{|r - pq|}{\sqrt{p(1-p)q(1-q)}}, \tag{16}$$
where $(X_u, Y_v) \sim P_{XY|U=u, V=v}$, $\rho_{\mathrm{p}}$ denotes the Pearson correlation coefficient, and (16) follows since the maximal correlation coefficient between two binary random variables is equal to the absolute value of the Pearson correlation coefficient between them; see, e.g., [19]. So, $\rho_{\mathrm{m}}(X;Y|U,V) \le \rho$ is equivalent to $|r - pq| \le \rho\sqrt{p(1-p)q(1-q)}$ a.s., and also equivalent to $z_1 \le r \le z_2$ a.s.
The inner supremum in (14) can be rewritten as
$$\sup_{\substack{P_{XY|UV} \in C_s(P_{X|U}):\\ \rho_{\mathrm{m}}(X;Y|U,V) \le \rho}} H(Z \mid U, V) = \mathbb{E}_{p,q} \left[ \sup_{\max\{0,\, p+q-1,\, z_1\}\ \le\ r\ \le\ \min\{p,\, q,\, z_2\}} h(p + q - r) \right].$$
By the fact that $h$ is increasing on $[0, 1/2]$ and decreasing on $[1/2, 1]$, the optimal $r$ attaining the supremum in the last line above, denoted $r^*$, is the median of $\max\{0, p+q-1, z_1\}$, $p + q - 1/2$, and $\min\{p, q, z_2\}$, which implies
$$p + q - r^* = \varphi(\rho, p, q).$$
Recall the definition of $\varphi$ in (1). So, the inner supremum in (14) is equal to $\mathbb{E}_{p,q}\, h(\varphi(\rho, p, q))$.
We make the following observations. Firstly,
$$H(X \mid U) = \mathbb{E}\, h(p), \qquad P_X(1) = \mathbb{E}\, p.$$
Secondly, by the definition of maximal correlation, $\rho_{\mathrm{m}}(p; q) \le \rho_{\mathrm{m}}(U; V)$ holds (this is known as the data processing inequality) since $p, q$ are, respectively, functions of $U, V$; see (15). Lastly, observe that $P_{UV}$ is symmetric and $p, q$ are obtained from $U, V$ via the same function $P_{X|U}(1|\cdot)$ (since $P_{X|U} = P_{Y|V}$ holds by the symmetry of $P_{XY|UV}$). Hence, $P_{pq}$ is symmetric as well. Substituting all of these into (14) yields $\Gamma_n(t) \ge \Gamma(t)$. □

4. Proof of Proposition 1

By choosing $P_\rho = (1-\alpha)\delta_0 + \alpha\delta_1$ in (2), we obtain
$$\Gamma(t) \ge \sup_{\alpha \in [0,1]}\ \inf_{\substack{\text{symmetric } P_{pq}:\\ \mathbb{E}h(p) > 0,\ \mathbb{E}p \le t}} \frac{g(P_{pq}, \alpha)}{\mathbb{E}\, h(p)}.$$
Note that $P_{pq} \mapsto g(P_{pq}, \alpha)$ is concave since, by Lemma 5 in [10], $P_p \mapsto \mathbb{E}_{(p,q) \sim P_p^{\otimes 2}}\, h(p + q - pq)$ is concave, and $P_{pq} \mapsto P_p$ is linear.
Let $B$ be a finite subset of $[0, 1]$. Let $\mathcal{P}_B$ be the set of symmetric distributions $P_{pq}$ concentrated on $B^2$ such that $\mathbb{E}p \le t$. By the Krein–Milman theorem, $\mathcal{P}_B$ is equal to the closed convex hull of its extreme points. These extreme points are of the form $(1-\beta) Q_{a_1, a_2} + \beta Q_{b_1, b_2}$ with $0 \le a \le t < b \le 1$ and $\beta = 0$ or $\frac{t-a}{b-a}$; recall the definitions $a := \frac{a_1+a_2}{2}$, $b := \frac{b_1+b_2}{2}$, and $Q_{x,y} := \frac{1}{2}\delta_{(x,y)} + \frac{1}{2}\delta_{(y,x)}$ in (6) and (7). By Carathéodory's theorem, it is easy to see that the convex hull of these extreme points is closed (in the weak topology or, equivalently, the relative topology on the probability simplex). So, every $P_{pq}$ supported on a finite set $B^2 \subseteq [0,1]^2$ such that $\mathbb{E}p \le t$ is a convex combination of the extreme points above, i.e., $P_{pq} = \sum_{i=1}^k \gamma_i Q_i$, where $Q_i$, $i \in [k]$, are extreme points, $\gamma_i > 0$, and $\sum_{i=1}^k \gamma_i = 1$. For this distribution,
$$\frac{g(P_{pq}, \alpha)}{\mathbb{E}\, h(p)} = \frac{g\left( \sum_{i=1}^k \gamma_i Q_i,\ \alpha \right)}{\sum_{i=1}^k \gamma_i\, \mathbb{E}_{Q_i} h(p)} \ge \frac{\sum_{i=1}^k \gamma_i\, g(Q_i, \alpha)}{\sum_{i=1}^k \gamma_i\, \mathbb{E}_{Q_i} h(p)} \ge \min_{i:\ \mathbb{E}_{Q_i} h(p) > 0} \frac{g(Q_i, \alpha)}{\mathbb{E}_{Q_i} h(p)},$$
where, in the last line, the constraint $\mathbb{E}_{Q_i} h(p) > 0$ is imposed since $\mathbb{E}_{Q_i} h(p) = 0$ implies $Q_i = \delta_{(0,0)}$ (note that $t < 1/2$) and, hence, $g(Q_i, \alpha) = 0$.
Therefore,
$$\Gamma(t) \ge \sup_{\alpha \in [0,1]}\ \inf_{P_{pq}:\ \mathbb{E}h(p) > 0} \frac{g(P_{pq}, \alpha)}{\mathbb{E}\, h(p)},$$
where the infimum is taken over distributions $P_{pq}$ of the form $(1-\beta) Q_{a_1, a_2} + \beta Q_{b_1, b_2}$ with $0 \le a \le t < b \le 1$ and $\beta = 0$ or $\beta = \frac{t-a}{b-a} > 0$, such that $\mathbb{E}h(p) > 0$. (Recall the definitions of $a, b$ in (6).) □

5. Discussion

The breakthrough made by Gilmer [7] shows the power of information-theoretic techniques in tackling problems in related fields. In fact, the union-closed sets conjecture has a natural interpretation in the information-theoretic (or coding-theoretic) sense. Consider the memoryless OR multi-access channel $(x^n, y^n) \in \Omega^{2n} \mapsto x^n \vee y^n \in \Omega^n$. We would like to find a nonempty code $A \subseteq \Omega^n$ generating two independent inputs $X^n, Y^n$, each following $\mathrm{Unif}(A)$, such that the input constraint $\mathbb{E}[X_i] \le t$, $i \in [n]$, is satisfied and the output $X^n \vee Y^n$ remains in $A$ a.s. The union-closed sets conjecture states that such a code exists if and only if $t \ge 1/2$. Based on this information-theoretic interpretation, it is reasonable that information-theoretic techniques work for this conjecture. It is well known that information-theoretic techniques usually work very well for problems with “approximate” constraints, e.g., the channel coding problem with the asymptotically vanishing error probability constraint (or the approximate version of the union-closed sets problem introduced in [9]). It is still unclear whether information-theoretic techniques are sufficient to prove sharp bounds for problems with “exact” constraints, e.g., the zero-error coding problem (or the original version of the union-closed sets conjecture).
Furthermore, as an intermediate result, it has been shown that $\Gamma_n(t) > 1$ implies $p_A > t$ for any OR-closed $A \subseteq \Omega^n$. Here, $\Gamma_n(t)$ is given in (8), expressed in the multi-letter form (i.e., the dimension-dependent form). By the super-block coding argument, it can be verified that, given $t > 0$, $\lim_{n \to \infty} \Gamma_n(t)$ exists. It is interesting to investigate this limit and prove a single-letter (dimension-independent) expression for it.
For simplicity, in this paper, we only consider the maximal correlation coefficient as the constraint function. In fact, the maximal correlation coefficient used here can be replaced by other functionals. The key property of the maximal correlation coefficient used in this paper is the “tensorization” property, i.e., (10) (in fact, only the “≤” part of (10) was used in our proof). In the literature, there is a class of measures of correlation satisfying this property, e.g., the hypercontractivity constant, the strong data processing inequality constant, or, more generally, $\Phi$-ribbons; see [20,21,22]. (Although the tensorization property in the literature is only defined and proven for independent random variables, it can be extended to the coupling constructed in (9).) Following the same proof steps given in this paper, one can obtain various variants of Theorem 1 with the maximal correlation coefficient replaced by other quantities, as long as these quantities satisfy the tensorization property. Another potential direction is to replace the Shannon entropy with a class of more general quantities, the Rényi entropies. Unfortunately, however, Rényi entropies do not satisfy the chain rule (unlike the Shannon entropy), which leads to a serious difficulty in single-letterizing the corresponding multi-letter bound, such as $\Gamma_n(t)$ in (8) (i.e., in making the multi-letter bound dimension-independent).

Funding

This work was supported by the NSFC grant 62101286 and the Fundamental Research Funds for the Central Universities of China (Nankai University).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank Fan Chang for bringing Gilmer’s breakthrough [7] to their attention and thank Stijn Cambie for sharing their early draft of [16] after he noticed the present paper on arXiv. The author also would like to thank the guest editor and the anonymous referees for their comments, which led to significant improvements in the presentation of this paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Balla, I.; Bollobás, B.; Eccles, T. Union-closed families of sets. J. Comb. Theory Ser. A 2013, 120, 531–544. [Google Scholar] [CrossRef]
  2. Johnson, R.T.; Vaughan, T.P. On union-closed families, I. J. Comb. Theory Ser. A 1998, 84, 242–249. [Google Scholar] [CrossRef]
  3. Karpas, I. Two results on union-closed families. arXiv 2017, arXiv:1708.01434. [Google Scholar]
  4. Knill, E. Graph generated union-closed families of sets. arXiv 1994, arXiv:math/9409215. [Google Scholar]
  5. Wójcik, P. Union-closed families of sets. Discret. Math. 1999, 199, 173–182. [Google Scholar] [CrossRef]
  6. Bruhn, H.; Schaudt, O. The journey of the union-closed sets conjecture. Graphs Comb. 2015, 31, 2043–2074. [Google Scholar] [CrossRef]
  7. Gilmer, J. A constant lower bound for the union-closed sets conjecture. arXiv 2022, arXiv:2211.09055. [Google Scholar]
  8. Sawin, W. An improved lower bound for the union-closed set conjecture. arXiv 2022, arXiv:2211.11504. [Google Scholar]
  9. Chase, Z.; Lovett, S. Approximate union closed conjecture. arXiv 2022, arXiv:2211.11689. [Google Scholar]
  10. Alweiss, R.; Huang, B.; Sellke, M. Improved lower bound for the union-closed sets conjecture. arXiv 2022, arXiv:2211.11731. [Google Scholar]
  11. Pebody, L. Extension of a Method of Gilmer. arXiv 2022, arXiv:2211.13139. [Google Scholar]
  12. Yu, L.; Tan, V.Y.F. On Exact and ∞-Rényi Common Information. IEEE Trans. Inf. Theory 2020, 66, 3366–3406. [Google Scholar] [CrossRef]
  13. Yu, L. Strong Brascamp–Lieb inequalities. arXiv 2021, arXiv:2102.06935. [Google Scholar]
  14. Yu, L. Exact exponents for concentration and isoperimetry in product Polish spaces. arXiv 2022, arXiv:2205.07596. [Google Scholar]
  15. Witsenhausen, H.S. On sequences of pairs of dependent random variables. SIAM J. Appl. Math. 1975, 28, 100–113. [Google Scholar] [CrossRef]
  16. Cambie, S. Better bounds for the union-closed sets conjecture using the entropy approach. arXiv 2022, arXiv:2212.12500. [Google Scholar]
  17. Yu, L. On Conditional Correlations. arXiv 2018, arXiv:1811.03918. [Google Scholar]
  18. Beigi, S.; Gohari, A. Monotone measures for non-local correlations. IEEE Trans. Inf. Theory 2015, 61, 5185–5208. [Google Scholar] [CrossRef]
  19. Anantharam, V.; Gohari, A.; Kamath, S.; Nair, C. On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover. arXiv 2013, arXiv:1304.6133. [Google Scholar]
  20. Ahlswede, R.; Gács, P. Spreading of sets in product spaces and hypercontraction of the Markov operator. Ann. Probab. 1976, 4, 925–939. [Google Scholar] [CrossRef]
  21. Raginsky, M. Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels. IEEE Trans. Inf. Theory 2016, 62, 3355–3389. [Google Scholar] [CrossRef]
  22. Beigi, S.; Gohari, A. Φ-entropic measures of correlation. IEEE Trans. Inf. Theory 2018, 64, 2193–2211. [Google Scholar] [CrossRef]
Cite as: Yu, L. Dimension-Free Bounds for the Union-Closed Sets Conjecture. Entropy 2023, 25, 767. https://doi.org/10.3390/e25050767