Article

List Approximation for Increasing Kolmogorov Complexity

Marius Zimand
Department of Computer and Information Sciences, Towson University, Baltimore, MD 21252, USA
Axioms 2021, 10(4), 334; https://doi.org/10.3390/axioms10040334
Submission received: 27 September 2021 / Revised: 22 November 2021 / Accepted: 30 November 2021 / Published: 7 December 2021
(This article belongs to the Special Issue In Memoriam, Solomon Marcus)

Abstract

It is impossible to effectively modify a string in order to increase its Kolmogorov complexity. However, is it possible to construct a few strings, no longer than the input string, so that most of them have larger complexity? We show that the answer is yes. We present an algorithm that takes as input a string $x$ of length $n$ and returns a list with $O(n^2)$ strings, all of length $n$, such that 99% of them are more complex than $x$, provided the complexity of $x$ is less than $n - \log\log n - O(1)$. We also present an algorithm that obtains a list of quasi-polynomial size in which each element can be produced in polynomial time.

1. Introduction

The Kolmogorov complexity of a binary string $x$, denoted $C(x)$, is the minimal description length of $x$, i.e., it is the length of the shortest program (in a fixed universal programming system) that prints $x$. We analyze the possibility of modifying a string in an effective way in order to obtain a string with higher complexity, without increasing its length. Strings with high complexity exhibit good randomness properties and are potentially useful, because they can be employed in lieu of random bits in probabilistic algorithms. It is common to define the randomness deficiency of $x$ as the difference $|x| - C(x)$ (where $|x|$ is the length of $x$) and to say that the smaller the randomness deficiency is, the more random the string is. In this sense, we want to modify a string so that it becomes "more" random. As stated, the above task is impossible, because, clearly, any effective modification cannot increase the Kolmogorov complexity (at least not by more than a constant): if $f$ is a computable function, then $C(f(x)) \le C(x) + O(1)$ for every $x$. Consequently, we have to settle for a weaker solution and the one we consider is that of list approximation. List approximation consists in the construction of a list of objects guaranteed to contain at least one element having the desired property. Here, we try to obtain a stronger type of list approximation, in which not just one, but most of the elements in the list have the desired property. More precisely, we study the following question.
Question. Is there a computable function which takes as input a string $x$ and outputs a short list of strings, which are not longer than $x$, such that most of the elements in the list have complexity greater than $C(x)$?
The formulation of the question rules out some trivial and non-interesting answers. First, the requirement that the list is "short" is necessary because, otherwise, we can ignore the input $x$ and simply take all strings of length $n$: most of them have complexity at least $n-2$, which is within $O(1)$ of the largest complexity of strings of length $n$. Secondly, the restriction that the length is not increased is also necessary because, otherwise, we can append to the input $x$ a random string and obtain, with high probability, a more complex string (see the discussion in Section 2). These restrictions not only make the problem interesting, but also amenable to applications in which the input string and the modified strings need to be in a given finite set. The solution that we give can be readily adjusted to handle such applications.
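To spell out the counting behind the first point (a standard argument, made explicit here for convenience):

```latex
% Strings of length n with C(x) < n-2 are outputs of programs of
% length at most n-3, and there are few such programs:
\[
  \bigl|\{x \in \{0,1\}^n : C(x) < n-2\}\bigr|
  \;\le\; \sum_{i=0}^{n-3} 2^{i} \;=\; 2^{n-2}-1 \;<\; \tfrac{1}{4}\cdot 2^{n},
\]
\[
  \text{so at least a } \tfrac{3}{4} \text{ fraction of the strings in }
  \{0,1\}^n \text{ satisfies } C(x) \ge n-2.
\]
```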
There are several parameters to consider. The first one is the size of the list. The shorter the list is, the better the approximation is. Next, the increasing-complexity procedure that we seek does not work for all strings $x$. Let us recall that $C(x) \le |x| + O(1)$ and, if $x$ is a string of maximal complexity at its length, then there simply is no string of larger complexity at its length. In general, for strings $x$ that have complexity close to $|x|$, it is difficult to increase their complexity. Thus, a second parameter is the bound on the complexity of $x$ for which the increasing-complexity procedure succeeds. The closer this bound is to $|x|$, the better the procedure is. The third parameter is the complexity of the procedure. The procedure is required to be computable, but it is preferable if it is computable in polynomial time.
We show the following two results. The first one exhibits a computable list approximation for increasing the Kolmogorov complexity that works for any $x$ with complexity $C(x) < |x| - \log\log|x| - O(1)$.
Theorem 1
(Computable list of quadratic size for increasing the Kolmogorov complexity). There exists a computable function $f$ that takes as input $x \in \{0,1\}^*$ and a rational number $\delta > 0$ and returns a list of strings of length at most $|x|$ with the following properties:
1. 
The size of the list is $O(|x|^2) \cdot \mathrm{poly}(1/\delta)$;
2. 
If $C(x) < |x| - \log\log|x| - O(1)$, then a $(1-\delta)$ fraction of the elements in the list $f(x)$ have Kolmogorov complexity larger than $C(x)$ (where the constant hidden in the $O(1)$ depends on $\delta$).
Whether the bound $C(x) < |x| - \log\log|x| - O(1)$ can be improved remains open. Further reducing the list size is also an interesting open question. We could not establish a lower bound and, as far as we currently know, it is possible that even a constant list size may be achievable.
In the next result, the complexity-increasing procedure runs in polynomial time in the following sense. The size of the list is only quasi-polynomial, but each string in the list is computed in polynomial time.
Theorem 2
(Polynomial-time computable list for increasing the Kolmogorov complexity). There exists a function $f$ that takes as input $x \in \{0,1\}^*$ and a constant rational number $\delta > 0$ and returns a list of strings of length at most $|x|$ with the following properties:
1. 
The size of the list is bounded by $2^{O(\log|x| \cdot \log(|x|/\delta))}$;
2. 
If $C(x) < |x| - O(\log|x| \cdot \log(|x|/\delta))$, then a $(1-\delta)$ fraction of the elements in the list $f(x)$ have Kolmogorov complexity larger than $C(x)$;
3. 
The function $f$ is computable in polynomial time in the following sense: there is a polynomial-time algorithm that takes as input $(x, i)$ and computes the $i$-th element of the list $f(x)$.
Remark 1.
A preliminary version of this paper appeared in STACS 2017 [1]. In that version, it was claimed that the result in Theorem 1 holds for all strings $x$ with $C(x) < |x|$. The proof had a bug and we can only prove it for strings satisfying $C(x) < |x| - \log\log|x| - O(1)$. The proof of Theorem 2 given here is different from that in [1]. Theorem 2 has better parameters than its analog in the preliminary version.
Remark 2.
Any procedure that constructs the approximation list can be converted into a probabilistic algorithm that does the same work and picks one random element from the list. The procedure in Theorem 2 can be converted into a polynomial-time probabilistic algorithm, which uses $O(\log|x| \cdot \log(|x|/\delta))$ random bits to pick which element from the list to construct (see item 3 in the statement).
Vice versa, a probabilistic algorithm can be converted into a list-approximation algorithm in the obvious way, i.e., by constructing the list that has as elements the outputs of the algorithm for all choices of the random coins.
Thus, a list-approximation algorithm $A_1$, in which a $(1-\delta)$ fraction of the elements in the list have the desired property, is equivalent to a probabilistic algorithm $A_2$ that succeeds with probability $1-\delta$. The number of random bits used by $A_2$ is the base-2 logarithm of the size of the list produced by $A_1$.
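As a minimal illustration of this equivalence, the following sketch (with `algorithm` a hypothetical deterministic procedure that models the probabilistic algorithm with its $r$ coin flips made explicit; both names are placeholders) builds the approximation list by enumerating all coin sequences:

```python
from itertools import product

def list_from_probabilistic(algorithm, x: str, r: int) -> list:
    """Build the approximation list of size 2**r by running `algorithm`
    on input x with every possible setting of its r random coins."""
    return [algorithm(x, "".join(coins)) for coins in product("01", repeat=r)]
```

If `algorithm` succeeds with probability $1-\delta$ over its coins, then, by construction, a $(1-\delta)$ fraction of this list has the desired property.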

1.1. Basic Concepts and Notation

We recall the standard setup for Kolmogorov complexity. We fix a universal Turing machine $U$. The universality of $U$ means that, for any Turing machine $M$, there exists a computable "translator" function $t$ such that, for all strings $p$, $M(p) = U(t(p))$ and $|t(p)| \le |p| + O(1)$. For the polynomial-time constructions, we also require that $t$ is polynomial-time computable. If $U(p) = x$, we say that $p$ is a program (or description) for $x$. The Kolmogorov complexity of the string $x$ is $C(x) = \min\{|p| : p \text{ is a program for } x\}$. If $p$ is a program for $x$ and $|p| \le C(x) + c$, we say that $p$ is a $c$-short program for $x$.

1.2. Related Works

The problem of increasing the Kolmogorov complexity has been studied before by Buhrman, Fortnow, Newman and Vereshchagin [2]. They show that there exists a polynomial-time computable $f$ that takes as input $x$ of length $n$ and returns a list of strings, all having length $n$, such that, if $C(x) < n$, then there exists $y$ in the list with $C(y) > C(x)$ (this is Theorem 14 in [2]). In the case of complexity conditioned on the string length, they show that it is even possible to compute in polynomial time a list of constant size. That is, $f(x)$ is a list with $O(1)$ strings of length $n$ and, if $C(x \mid n) < n$, then it contains a string $y$ with $C(y \mid n) > C(x \mid n)$ (this is Theorem 11 in [2]). Our results are incomparable with the results in [2]. On one hand, their results work for any input $x$ with complexity less than $|x|$, while, in Theorem 1, we only handle inputs with complexity at most $|x| - \log\log|x| - O(1)$ (and, in Theorem 2, the complexity of the input is required to be even lower). On the other hand, they only guarantee that one string in the output list has higher complexity than $x$, while we guarantee this property for most strings in the output list, and this can be viewed as a probabilistic algorithm with few random bits, as explained in Remark 2.
This paper is inspired by recent list-approximation results regarding another problem in Kolmogorov complexity, namely, the construction of short programs (or descriptions) for strings. Using a Berry paradox argument, it is easy to see that it is impossible to effectively construct a shortest program for $x$ (or even a, say, $n/2$-short program for $x$). Remarkably, Bauwens et al. [3] show that effective list approximation for short programs is possible. There is an algorithm that, for some constant $c$, takes as input $x$ and returns a list with $O(|x|^2)$ strings guaranteed to contain a $c$-short program for $x$. They also show a lower bound: the quadratic size of the list is minimal up to constant factors. Bauwens and Zimand [4] consider a more general type of optimal compressor that goes beyond the standard Kolmogorov complexity and, using another type of pseudo-random function called a conductor, re-obtain the overhead of $O(\log^2 n)$. Theorem 2 directly uses results from the latter, namely, Theorem 3. Theorem 1 uses a novel construction, but some of the ideas are inspired by the papers mentioned above.

2. Technique and Proof Overview

We start by presenting an approach that probably comes to mind first. It does not work for inputs $x$ having a complexity very close to $|x|$, such as in Theorem 1 (for which we use a more complicated argument), but, combined with the results from [4], it yields Theorem 2.
Given that we want to modify a string $x$ so that it becomes more complex, which, in a sense, means more random, a simple idea is to just append a random string $z$ to $x$. Indeed, if we consider strings $z$ of length $c$, then $C(xz) > C(x) + c/2$ for most strings $z$, provided that $c$ is large enough. Let us see why this is true. Let $k = C(x)$ and let $z$ be a string that satisfies the opposite inequality, that is,
$$C(xz) \le C(x) + c/2. \tag{1}$$
Given a shortest program for $xz$ and a self-delimited representation of the integer $c$, which is $2\log c$ bits long, we obtain a description of $x$ with at most $k + c/2 + 2\log c$ bits. Note that, in this way, from different $z$'s satisfying (1), we obtain different programs for $x$ that are $(c/2 + 2\log c)$-short. By a theorem of Chaitin [5] (also presented as Lemma 3.4.2 in [6]), for any $d$, the number of $d$-short programs for $x$ is bounded by $O(2^d)$. Thus, the number of strings $z$ satisfying (1) is bounded by $2^{c/2 + 2\log c + O(1)}$. Since, for large $c$, $2^{c/2 + 2\log c + O(1)}$ is much smaller than $2^c$, it follows that most strings $z$ of length $c$ satisfy the claimed inequality (the opposite of (1)). Therefore, we obtain the following lemma.
Lemma 1.
If we append to a string $x$ a string $z$ chosen at random in $\{0,1\}^c$, then $C(xz) > C(x) + c/2$ with probability $1 - 2^{-(c/2 - 2\log c - O(1))}$.
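As a quick numeric illustration of Lemma 1 (a sketch only: the $O(1)$ term is an unspecified constant, treated below as an assumed parameter `K`), one can compute how large $c$ must be for the failure probability bound to drop below a target such as 1%:

```python
import math

def failure_bound(c: int, K: float = 1.0) -> float:
    """Bound 2^-(c/2 - 2*log2(c) - K) from Lemma 1 on the probability
    that appending a random c-bit z fails to raise complexity by c/2.
    K stands in for the unknown O(1) constant (an assumption here)."""
    return 2.0 ** -(c / 2 - 2 * math.log2(c) - K)

# Smallest c (under this assumed K) with failure probability below 1%:
c = next(c for c in range(2, 200) if failure_bound(c) < 0.01)
print(c, failure_bound(c))
```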
The problem with appending a random $z$ to $x$ is that this operation not only increases the complexity (which is something we want) but also increases the length (which is something we do not want). The natural way to get around this problem is to first compress $x$ to close to its minimal description length using the probabilistic algorithms from [4] described in the Introduction, and then append $z$. If we know $C(x)$, then the algorithms from [4] compress $x$ to length $C(x) + \Delta(n)$, where $n$ is the length of $x$ and $\Delta(n)$ (called the overhead) is $O(\log n)$ (or $\mathrm{poly}(\log n)$ for the polynomial-time algorithm). After appending a random $z$ of length $c$, we obtain a string of length $C(x) + \Delta(n) + c$ and, for this to be at most $n$ (so that the length is not increased), we need $C(x) \le n - \Delta(n) - c$. This is the idea that we follow for Theorem 2, with an adjustment caused by the fact that we do not know $C(x)$ but only a bound on it.
However, in this way, we cannot obtain a procedure that works for all $x$ with $C(x) < n - \log\log n - O(1)$, as required in Theorem 1. Our proof for this theorem is based on a different construction. The centerpiece is a type of bipartite graph with a low-congestion property. Once we have the graph (in which the two bipartitions are called the set of left nodes and the set of right nodes), we view $x$ as a left node and the list $f(x)$ consists of some of the nodes at distance 2 from $x$ in the graph. (A side remark: Buhrman et al. [2] also use graphs, namely, constant-degree expanders, and they obtain the lists also as the set of neighbors at some given distance.) In our graph, the left side is $L = \{0,1\}^n$, the set of $n$-bit strings, the right side is $R = \{0,1\}^m$, the set of $m$-bit strings, and each left node has degree $D$. The graphs also depend on three parameters, $\epsilon$, $\Delta$ and $t$, and, for our discussion, it is convenient to also use $\delta = \epsilon^{1/2}$ and $s = \delta \cdot \Delta$. The graphs that we need have two properties:
  • For every subset $B$ of left nodes of size at most $2^t$, a $(1-\delta)$ fraction of the nodes in $B$ satisfies the low-congestion condition, which requires that a $(1-\delta)$ fraction of their right neighbors have at most $s$ neighbors in $B$. (More formally, for all $B \subseteq L$ with $|B| \le 2^t$, for all $x \in B$, except at most $\delta|B|$ elements, all neighbors $y$ of $x$, except at most $\delta D$, have $\deg_B(y) \le s$, where $\deg_B(y)$ is the number of $y$'s neighbors that are in $B$. We say that such an $x$ has the low-congestion property for $B$.)
  • Each right node has at least $\Delta$ neighbors.
The graph with the above two properties is constructed using the probabilistic method in Lemma 2.
Let us now see how to use such a graph to increase the Kolmogorov complexity in the list-approximation sense. Let us suppose that we have a graph $G$ with the above properties for the parameters $n, \delta, \Delta, D, s$ and $t$.
Claim 1.
There is a procedure that takes as input a string $x$ of length $n$ with complexity $C(x) < t$ and produces a list with $D \cdot \Delta$ strings, all having length $n$, such that at least a $(1 - 2\delta)$ fraction of the strings in the list has complexity larger than $C(x)$.
Indeed, let $x$ be a string of length $n$ with $C(x) = k < t$. Let us consider the set $B = \{x' \in \{0,1\}^n \mid C(x') \le k\}$, which we view as a set of left nodes in $G$. Note that the size of $B$ is bounded by $2^t$. A node that does not have the low-congestion property for $B$ is said to be $\delta$-BAD($B$). By the first property of $G$, there are at most $\delta|B|$ elements in $B$ that are $\delta$-BAD($B$). It can be shown that $x$ is not $\delta$-BAD($B$). The reason is, essentially, that the strings that are $\delta$-BAD($B$) can be enumerated and they make up a small fraction of $B$; therefore, they can be described with fewer than $k$ bits. Now, to construct the list, we view $x$ as a left node in $G$ and we "go-right-then-go-left". This means that we first "go right", i.e., we take all the $D$ neighbors of $x$, and, for each such neighbor $y$, we "go left", i.e., we take $\Delta$ of $y$'s neighbors and put them in the list. Since $x$ is not $\delta$-BAD($B$), $(1-\delta)D$ of its neighbors have at most $s = \delta \cdot \Delta$ neighbors in $B$. Overall, fewer than $2\delta \cdot D \cdot \Delta$ of the strings in the list can be in $B$, and so at least a $(1-2\delta)$ fraction of the strings in the list has complexity larger than $k = C(x)$. Our claim is proved.
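The "go-right-then-go-left" step translates directly into code. The sketch below is illustrative: `right_neighbors` and `left_neighbors` are hypothetical adjacency functions standing in for the graph $G$ (whose construction is given in Section 4), returning, respectively, the $D$ right neighbors of a left node and the neighbors of a right node in some canonical order:

```python
def increase_complexity_list(x, right_neighbors, left_neighbors, Delta):
    """Claim 1's list f(x): for each right neighbor y of x ("go right"),
    collect Delta of y's left neighbors ("go left")."""
    out = []
    for y in right_neighbors(x):               # D right neighbors of x
        out.extend(left_neighbors(y)[:Delta])  # first Delta neighbors of y
    return out                                 # multiset of size D * Delta
```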

3. Proof of Theorem 2

We use the following definition and results from [4].
Definition 1.
  • A compressor $C$ is a probabilistic function that takes as input a rational number $\epsilon > 0$, a positive integer $m$ and a string $x$, and outputs (with probability 1) a string $C(\epsilon, m, x)$ of length exactly $m$.
  • $\Delta(\epsilon, m, n)$ is a function of $\epsilon$ and positive integers $m$ and $n$, called the overhead.
  • A compressor $C$ is $\Delta$-optimal for the Kolmogorov complexity if there exists an algorithm $D$ (called the decompressor) such that, for every string $x$, every rational $\epsilon \ge 2^{-|x|}$ and every $m \ge C(x) + \Delta(\epsilon, m, |x|)$,
    $$\mathrm{Prob}[D(C(\epsilon, m, x)) = x] \ge 1 - \epsilon.$$
In other words, if we are given a bound $m$ that is at least $C(x)$ plus the overhead, then $C$ compresses $x$ to a string of length $m$, from which $D$ is able to reconstruct $x$ with high probability.
Theorem 3
(Theorem 1.1 in [4]). There exists a compressor $C$ with overhead $\Delta(\epsilon, m, n) = O(\log m \cdot \log(n/\epsilon))$ that is $\Delta$-optimal for the Kolmogorov complexity. Furthermore, the compressor $C$ takes as input $(\epsilon, m, x)$ and runs in polynomial time in $|x|$, using a random string of length $O(\log m \cdot \log(|x|/\epsilon))$.
Note: Theorem 1.1 in [4] is more general, but we only need the above version.
Proof of Theorem 2. 
We follow the plan sketched in Section 2: we compress the input $x$ to a string $y$ with the optimal compressor from Theorem 3 and then append to $y$ a random string $z$ of constant length. We show that, with high probability, $yz$ has the desired properties: it has complexity larger than $C(x)$ and it is not longer than $x$. We see below that this randomized algorithm uses $O(\log|x| \cdot \log(|x|/\epsilon))$ random bits, which implies the desired list approximation via the observations in Remark 2.
Let the compressor $C$ and the overhead $\Delta$ be the functions from Theorem 3. Let $\epsilon = \delta/2$. We fix $n$ and consider a string $x$ of length $n$ such that $C(x) \le n - 3\Delta(\epsilon, n, n)$. Note that $C(x) \le n - O(\log n \cdot \log(n/\epsilon))$. Let $m = n - 2\Delta(\epsilon, n, n)$ and $y = C(\epsilon, m, x)$ (note that $y$ is a random variable because $C$ is a randomized function). For $n$ sufficiently large,
$$C(x) \le n - 3\Delta(\epsilon, n, n) \le m - \Delta(\epsilon, m, n).$$
Let $\mathcal{A}$ be the event that the decompressor $D$ reconstructs $x$ from $y$. By Theorem 3, $\mathcal{A}$ has probability at least $1 - \epsilon$.
We take $c'$ to be a constant large enough such that Equations (2) and (3) below are satisfied. Conditioned on $\mathcal{A}$,
$$C(y) \ge C(x) - c' \quad (\text{because } x \text{ is reconstructed from } y). \tag{2}$$
Let $c = 2c'$. We choose $c'$ so that
$$2^{-(c/2 - 2\log c - O(1))} < \epsilon, \tag{3}$$
where the $O(1)$ term is the constant from Lemma 1.
We append to $y$ a string $z$ chosen at random in $\{0,1\}^c$. By Lemma 1 and Equation (3), with probability $1 - \epsilon$, $C(yz) > C(y) + c/2 = C(y) + c'$. Now, we condition on $\mathcal{A}$ and we obtain that, with probability $1 - 2\epsilon$,
$$C(yz) > C(y) + c' \ge C(x) - c' + c' = C(x).$$
Recall that $\delta = 2\epsilon$. Now, let us check the properties of the above algorithm. For every $n$-bit string $x$ with $C(x) \le n - 3\Delta(\epsilon, n, n) = n - O(\log|x| \cdot \log(|x|/\delta))$, the algorithm takes as input $x$ and $\delta$ and outputs, in polynomial time, the string $yz$ that, with probability $1 - \delta$, has complexity larger than the complexity of $x$. The string $yz$ has length $m + c = n - 2\Delta(\epsilon, n, n) + c \le n$. The whole randomized procedure uses $O(\log m \cdot \log(n/\epsilon)) = O(\log n \cdot \log(n/\delta))$ random bits for the compression with $C$ and $c = O(1)$ random bits for $z$. The list approximation is obtained from the probabilistic algorithm in the obvious way, i.e., by including in the list one element for each choice of the random string (see Remark 2). The theorem is proved. □
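For concreteness, the randomized procedure just analyzed can be sketched as follows (illustrative only: `compress` stands in for the compressor $C$ of Theorem 3 and `Delta` for its overhead function, neither of which is implemented here; the constant 40 is a placeholder for the $c$ chosen to satisfy Equations (2) and (3)):

```python
import random

def increase_complexity(x: str, delta: float, compress, Delta) -> str:
    """One run of the procedure from the proof of Theorem 2: compress x
    to length m = n - 2*Delta(eps,n,n), then append c random bits."""
    n = len(x)
    eps = delta / 2
    m = n - 2 * Delta(eps, n, n)        # target length of the compressed string
    y = compress(eps, m, x)             # |y| = m; decoding fails w.p. <= eps
    c = 40                              # placeholder constant; see Equation (3)
    z = "".join(random.choice("01") for _ in range(c))
    return y + z                        # length m + c <= n; C(yz) > C(x) w.h.p.
```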

4. Proof of Theorem 1

We split the proof into three parts. In Section 4.1, we introduce balanced graphs; in Section 4.2, we show how to increase the Kolmogorov complexity in the list-approximation sense using balanced graphs; and, in Section 4.3, we use the probabilistic method to obtain the balanced graphs with the parameters needed for Theorem 1.

4.1. Balanced Graphs

Here, we formally define the type of graphs that we need. We work with families of bipartite graphs $G_n = (L \cup R, E \subseteq L \times R)$, indexed by $n$, which have the following structure:
(1)
The vertices are labeled with binary strings, $L = \{0,1\}^n$ and $R = \{0,1\}^n$, where we view $L$ as the set of left nodes and $R$ as the set of right nodes.
(2)
All the left nodes have the same degree $D$; $D = 2^d$ is a power of two and the edges outgoing from a left node $x$ are labeled with binary strings of length $d$.
(3)
We allow multiple edges between two nodes. For a node $x$, we write $N(x)$ for the multiset of $x$'s neighbors, each element being taken with the multiplicity equal to the number of edges from $x$ landing into it.
A bipartite graph of this type can be viewed as a function $\mathrm{EXT}: \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^n$, where $\mathrm{EXT}(x, y) = z$ if there is an edge between $x$ and $z$ labeled $y$. We want $\mathrm{EXT}$ to yield a $(k, \epsilon)$ randomness extractor whenever we consider the modified function $\mathrm{EXT}_k$, which takes as input $(x, y)$ and returns $\mathrm{EXT}(x, y)$, from which we keep only the first $k$ bits. (Note: a randomness extractor is a type of function that plays a central role in the theory of pseudo-randomness. All we need here is that it satisfies Equation (4).)
From the function $\mathrm{EXT}_k$, we go back to the graph representation and we obtain the "prefix" bipartite graph $G_{n,k} = (L = \{0,1\}^n,\; R_k = \{0,1\}^k,\; E_k \subseteq L \times R_k)$, where, in $G_{n,k}$, we merge the right nodes of $G_n$ that have the same prefix of length $k$. The left degrees in the prefix graph do not change. However, the right degrees may change and, as $k$ becomes smaller, the right degrees typically become larger due to merging.
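The passage from $\mathrm{EXT}$ to the prefix graph $G_{n,k}$ is a simple truncation, as in the following sketch (assuming the graph is given as a hypothetical table `EXT` mapping a pair `(x, y)` of a left node and an edge label to the right node $\mathrm{EXT}(x,y)$, all encoded as bit strings):

```python
from collections import defaultdict

def prefix_graph_right_degrees(EXT: dict, k: int) -> dict:
    """Degrees of the right nodes of G_{n,k}: EXT_k(x, y) keeps the first
    k bits of EXT(x, y), merging right nodes that share a length-k prefix.
    Left degrees are unchanged; right degrees grow as nodes merge."""
    right_degree = defaultdict(int)
    for (x, y), z in EXT.items():
        right_degree[z[:k]] += 1
    return dict(right_degree)
```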
The requirement is that, for every subset $B \subseteq L$ of size $|B| \ge 2^k$ and for every $A \subseteq R_k$,
$$\left| \frac{|E_k(B, A)|}{|B| \cdot D} - \frac{|A|}{|R_k|} \right| \le \epsilon, \tag{4}$$
where $E_k(B, A)$ is the set of edges between $B$ and $A$ in $G_{n,k}$. (Note: this means that $G_{n,k}$ is a $(k, \epsilon)$ randomness extractor.)
We also want to have the guarantee that each right node in $G_{n,t}$ has degree at least $\Delta$, where $\Delta$ and $t$ are parameters.
Accordingly, we have the following definition.
Definition 2.
A graph $G_n = (L, R, E \subseteq L \times R)$ as above is $(\epsilon, \Delta, t)$-balanced if the following requirements hold:
1. 
For every $k \in \{1, \ldots, n\}$, let $G_{n,k}$ be the graph corresponding to $\mathrm{EXT}_k$ described above. We require that each $G_{n,k}$ is a $(k, \epsilon)$ extractor, i.e., that $G_{n,k}$ has the property in Equation (4).
2. 
In the graph $G_{n,t}$, every right node with non-zero degree has degree at least $\Delta$.
In our application, we need balanced graphs in which the neighbors of a given node can be found effectively. As usual, we consider families of graphs $(G_n)_{n \ge 1}$ and we say that such a family is computable if there is an algorithm that takes as input $(x, y)$, views $x$ as a left node in $G_{|x|}$ and $y$ as the label of an edge outgoing from $x$, and outputs $z$, the right node where the edge $y$ lands in $G_{|x|}$.
The following lemma provides the balanced graphs that we need as explained in the proof overview in Section 2.
Lemma 2.
For every rational $\epsilon > 0$, there exist some constant $c$ and a computable family of graphs $(G_n)_{n \ge 1}$, where each $G_n = (L = \{0,1\}^n, R = \{0,1\}^n, E \subseteq L \times R)$ is an $(\epsilon, \Delta, t)$-balanced graph, with left degree $D = 2^d$ for $d = \log(2n/\epsilon^2)$, $\Delta = 2(1/\epsilon)^{3/2} D$ and $t = n - \log\log n - c$.
The proof of Lemma 2 is by the standard probabilistic method and is presented in Section 4.3.
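For a sense of the magnitudes, the parameters of Lemma 2 can be instantiated numerically (a sketch: the value of the constant $c$ is an assumption, and $d$ is rounded up so that $D$ is a power of two):

```python
import math

def balanced_graph_params(n: int, eps: float, c: int):
    """Illustrative instantiation of the parameters in Lemma 2."""
    d = math.ceil(math.log2(2 * n / eps**2))  # left degree D = 2^d
    D = 2**d
    Delta = 2 * (1 / eps)**1.5 * D            # right-degree guarantee
    t = n - math.log2(math.log2(n)) - c       # complexity threshold
    return {"d": d, "D": D, "Delta": Delta, "t": t}

print(balanced_graph_params(n=10**6, eps=0.01, c=40))
```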

4.2. From Balanced Graphs to Increasing the Kolmogorov Complexity in the List-Approximation Sense

The following lemma shows a generic transformation of a balanced graph into a function that takes as input $x$ and produces a list so that most of its elements have complexity larger than $C(x)$.
Lemma 3.
Let us suppose that, for every $\delta > 0$, there are $t = t(n)$ and a computable family of graphs $(G_n)_{n \ge 1}$, where each $G_n = (L_n = \{0,1\}^n, R_n = \{0,1\}^n, E_n \subseteq L_n \times R_n)$ is a $(\delta^2, \Delta, t)$-balanced graph, with $\Delta = 2(1/\delta^3) \cdot D$, where $D$ is the left degree.
Then, there exists a computable function $f$ that takes as input a string $x$ and a rational number $\delta > 0$ and returns a list containing strings of length $|x|$; additionally, the following are true:
1. 
The size of the list is $O((1/\delta)^3 D^2)$;
2. 
If $C(x) \le t$, then a $(1 - O(\delta))$ fraction of the elements in the list have complexity larger than $C(x)$.
(The constants hidden in $O(\cdot)$ do not depend on $\delta$.)
Proof. 
The following arguments are valid if $\delta$ is smaller than some small positive constant. We assume that $\delta$ satisfies this condition and also that it is a power of $1/2$. This can be done because scaling down $\delta$ by a constant factor only changes the constants in the $O(\cdot)$ in the statement. Let $\epsilon = \delta^2$. We explain how to compute the list $f(x)$ with the property stipulated in the lemma's statement.
We take $G_n$ to be the $(\epsilon, \Delta, t)$-balanced graph with left nodes of length $n$ promised by the hypothesis. Let $G_{n,t}$ be the "prefix" graph obtained from $G_n$ by cutting the last $n - t$ bits in the labels of the right nodes (thus preserving the prefix of length $t$ in the labels).
The list $f(x)$ is computed in two steps:
  • First, we view $x$ as a left node in $G_{n,t}$ and take $N(x)$, the multiset of all neighbors of $x$ in $G_{n,t}$.
  • Secondly, for each $p$ in $N(x)$, we take $A_p$ to be a set of $\Delta$ neighbors of $p$ in $G_{n,t}$ (e.g., the first $\Delta$ ones in some canonical order). We set $f(x) = \bigcup_{p \in N(x)} A_p$ (if $p$ appears $n_p$ times in $N(x)$, we also take $A_p$ in the union $n_p$ times; note that $f(x)$ is a multiset).
Note that all the elements in the list have length $n$ and the size of the list is $|f(x)| = \Delta \cdot D = 2(1/\delta)^3 D^2$.
Let $x$ be a binary string of length $n$, with complexity $C(x) = k$. We assume that $k \le t$. The rest of the proof is dedicated to showing that the list $f(x)$ satisfies the second item in the statement. Let
$$B_{n,k} = \{x \in \{0,1\}^n \mid C(x) \le k\},$$
and let $S_{n,k} = \lfloor \log |B_{n,k}| \rfloor$. Thus, $2^{S_{n,k}} \le |B_{n,k}| < 2^{S_{n,k}+1}$. Later, we use the fact that
$$S_{n,k} \le k \le t \tag{5}$$
(the first inequality holds because $|B_{n,k}| < 2^{k+1}$, there being fewer than $2^{k+1}$ programs of length at most $k$).
We consider the graph $G_{n, S_{n,k}}$, which is obtained, as explained above, from $G_n$ by taking the prefixes of length $S_{n,k}$ of the right nodes. To simplify notation, we use $G$ instead of $G_{n, S_{n,k}}$. The set of left nodes in $G$ is $L = \{0,1\}^n$ and the set of right nodes in $G$ is $R = \{0,1\}^m$, for $m = S_{n,k}$.
We view $B_{n,k}$ as a subset of the left nodes in $G$. Let us introduce some helpful terminology. In the following, all the graph concepts (left node, right node, edge and neighbor) refer to the graph $G$. We say that a right node $z$ in $G$ is $(1/\epsilon)$-light if it has at most $(1/\epsilon) \cdot \frac{|B_{n,k}| \cdot D}{|R|}$ neighbors in $B_{n,k}$. A node that is not $(1/\epsilon)$-light is said to be $(1/\epsilon)$-heavy. Note that
$$(1/\epsilon) \cdot \frac{|B_{n,k}| \cdot D}{|R|} \le (1/\epsilon) \cdot \frac{2^{S_{n,k}+1} \cdot D}{2^{S_{n,k}}} = \delta\Delta;$$
thus, a $(1/\epsilon)$-light node has at most $\delta\Delta$ neighbors in $B_{n,k}$.
We also say that a left node in $B_{n,k}$ is $\delta$-BAD with respect to $B_{n,k}$ if at least a $\delta$ fraction of the $D$ edges outgoing from it land in right neighbors that are $(1/\epsilon)$-heavy. Let $\delta\text{-BAD}(B_{n,k})$ be the set of nodes that are $\delta$-BAD with respect to $B_{n,k}$.
We show the following claim.
Claim 2.
At most a $2\delta$ fraction of the nodes in $B_{n,k}$ are $\delta$-BAD with respect to $B_{n,k}$.
(In other words, for every $x$ in $B_{n,k}$, except at most a $2\delta$ fraction, at least a $(1-\delta)$ fraction of the edges going out from $x$ in $G$ land in right nodes that have at most $\delta\Delta$ neighbors with complexity at most $k$.)
We defer for later the proof of Claim 2 and continue the proof of the lemma.
For any positive integer k, let
$$B_k = \{x \mid C(x) \le k \text{ and } k \le t(|x|)\}.$$
Let $I_k = \{n \mid k \le t(n)\}$. Note that $|B_k| = \sum_{n \in I_k} |B_{n,k}|$. Let $x \in B_k$ and let $n = |x|$. We say that $x$ is $\delta$-BAD with respect to $B_k$ if, in $G_n$, $x$ is $\delta$-BAD with respect to $B_{n,k}$. We denote by $\delta\text{-BAD}(B_k)$ the set of nodes that are $\delta$-BAD with respect to $B_k$. We upper-bound the size of $\delta\text{-BAD}(B_k)$ as follows:
$$|\delta\text{-BAD}(B_k)| = \sum_{n \in I_k} |\delta\text{-BAD}(B_{n,k})| \le \sum_{n \in I_k} 2\delta \cdot |B_{n,k}| \;(\text{by Claim 2})\; = 2\delta \sum_{n \in I_k} |B_{n,k}| = 2\delta |B_k| \le 2\delta \cdot 2^{k+1},$$
where the last inequality uses $|B_k| < 2^{k+1}$ (there are fewer than $2^{k+1}$ programs of length at most $k$).
Note that the set $\delta\text{-BAD}(B_k)$ can be enumerated given $k$ and $\delta$. Therefore, a node $x$ that is $\delta$-BAD with respect to $B_k$ can be described by $k$, $\delta$ and its ordinal number in the enumeration of the set $\delta\text{-BAD}(B_k)$. We write the ordinal number on exactly $k + 2 - \log(1/\delta)$ bits and $\delta$ in a self-delimited way on $2\log\log(1/\delta)$ bits (recall that $1/\delta$ is a power of 2), so that $k$ can be inferred from the ordinal number and $\delta$. It follows that, if $x$ is $\delta$-BAD with respect to $B_k$, then, provided $1/\delta$ is sufficiently large,
$$C(x) \le k + 2 - \log(1/\delta) + 2\log\log(1/\delta) + O(1) < k. \tag{6}$$
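The self-delimited encoding invoked here (and earlier, for the integer $c$ in Section 2) can be realized by the classic doubling trick, sketched below: each bit of the binary representation is written twice and the pair "01" terminates the code, so an integer $j$ takes about $2\log j$ bits; applied to $j = \log(1/\delta)$, this gives the $2\log\log(1/\delta)$ bits used above:

```python
def self_delimited(j: int) -> str:
    """Encode j >= 1 in roughly 2*log2(j) bits so that it can be read
    unambiguously off the front of a longer bit stream."""
    return "".join(2 * b for b in bin(j)[2:]) + "01"

def decode(stream: str) -> tuple:
    """Read one self-delimited integer; return it with the leftover bits."""
    bits, i = "", 0
    while stream[i:i + 2] != "01":   # data pairs are "00"/"11", never "01"
        bits += stream[i]
        i += 2
    return int(bits, 2), stream[i + 2:]

enc = self_delimited(13)             # "1111001101"
print(enc, decode(enc + "0110"))     # -> (13, "0110")
```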
Now, we recall our string $x \in \{0,1\}^n$, which has complexity $C(x) = k$. The inequality (6) implies that $x$ cannot be $\delta$-BAD with respect to $B_k$, which means that a $(1-\delta)$ fraction of the edges going out from $x$ land in neighbors in $G$ having at most $\delta\Delta$ neighbors in $B_k$. The same is true if we replace $G$ by $G_{n,t}$, because, by the inequality (5), the right nodes in $G$ are prefixes of the right nodes in $G_{n,t}$.
Now, let us suppose that we pick at random a neighbor $p$ of $x$ in $G_{n,t}$ and then find a set $A_p$ of $\Delta$ neighbors of $p$ in $G_{n,t}$. Then, with probability $1 - \delta$, only a fraction of $\delta$ of the elements of $A_p$ can be in $B_k$. Let us recall that we have defined the list $f(x)$ to be
$$f(x) = \bigcup_{p \text{ neighbor of } x \text{ in } G_{n,t}} A_p.$$
It follows that at least a $(1-\delta)^2 > (1 - 2\delta)$ fraction of the elements in $f(x)$ has complexity larger than $C(x)$. This ends the proof. □
We now prove Claim 2.
Proof of Claim 2.
Let $A$ be the set of right nodes that are $(1/\epsilon)$-heavy. Then,
$$|A| \le \epsilon |R|.$$
Indeed, the number of edges between $B_{n,k}$ and $A$ is at least $|A| \cdot (1/\epsilon) \cdot \frac{|B_{n,k}| \cdot D}{|R|}$ (by the definition of $(1/\epsilon)$-heavy), but, at the same time, the total number of edges between $B_{n,k}$ and $R$ is $|B_{n,k}| \cdot D$ (because each left node has degree $D$).
Next, we show that
$$|\delta\text{-BAD}(B_{n,k})| \le 2\delta |B_{n,k}|.$$
For this, note that $G$ is an $(S_{n,k}, \epsilon)$ randomness extractor and $B_{n,k}$ has size at least $2^{S_{n,k}}$. Therefore, by the property (4) of extractors,
$$\frac{|E(B_{n,k}, A)|}{|B_{n,k}| \cdot D} \le \frac{|A|}{|R|} + \epsilon \le 2\epsilon.$$
On the other hand, the number of edges linking $B_{n,k}$ and $A$ is at least the number of edges linking $\delta\text{-BAD}(B_{n,k})$ and $A$; this number is at least $|\delta\text{-BAD}(B_{n,k})| \cdot \delta D$. Thus,
$$|E(B_{n,k}, A)| \ge |\delta\text{-BAD}(B_{n,k})| \cdot \delta D.$$
Combining the last two inequalities, we obtain
$$\frac{|\delta\text{-BAD}(B_{n,k})|}{|B_{n,k}|} \le 2\epsilon \cdot \frac{1}{\delta} = 2\delta.$$
This ends the proof of Claim 2, which is the last piece that we needed for the proof of Lemma 3. □
Theorem 1 is obtained by plugging into the above lemma the balanced graphs from Lemma 2, with parameter $\epsilon = \delta^2$.

4.3. Construction of Balanced Graphs: Proof of Lemma 2

We use the probabilistic method. We consider a random function $\mathrm{EXT}: \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^n$ for $d = \log(2n/\epsilon^2)$. We show the following two claims, which imply that a random function has the desired properties with positive probability. Since the properties can be checked effectively, we can find a graph by exhaustive search. We use the notation from Definition 2 and from the paragraph preceding it.
Claim 3.
For sufficiently large $n$, with probability at least $3/4$, it holds that, for every $k \in \{1, \ldots, n\}$, in the bipartite graph $G_{n,k} = (L, R_k, E_k \subseteq L \times R_k)$, every $B \subseteq L = \{0,1\}^n$ of size $|B| \ge 2^k$ and every $A \subseteq R_k = \{0,1\}^k$ satisfy
$$\left| \frac{|E_k(B, A)|}{|B| \cdot D} - \frac{|A|}{|R_k|} \right| \le \epsilon. \tag{8}$$
Claim 4.
For some constant $c$ and every sufficiently large positive integer $n$, with probability at least $3/4$, every right node in the graph $G_{n,\, n - \log\log n - c}$ has degree at least $\Delta$.
Proof of Claim 3. 
First, we fix $k \in \{1, \ldots, n\}$ and let $K = 2^k$ and $N = 2^n$. Let us consider $B \subseteq \{0,1\}^n$ of size $|B| \ge K$ and $A \subseteq R_k$. For a fixed $x \in B$ and $y \in \{0,1\}^d$, the probability that $\mathrm{EXT}_k(x, y)$ is in $A$ is $|A|/|R_k|$. By the Chernoff bounds,
$$\mathrm{Prob}\left[\left| \frac{|E_k(B, A)|}{|B| \cdot D} - \frac{|A|}{|R_k|} \right| > \epsilon\right] \le 2^{-\Omega(K \cdot D \cdot \epsilon^2)}.$$
The probability that relation (8) fails for a fixed $k$, some $B \subseteq \{0,1\}^n$ of size $|B| \ge K$ and some $A \subseteq R_k$ is bounded by $2^K \cdot \binom{N}{K} \cdot 2^{-\Omega(K \cdot D \cdot \epsilon^2)}$, because $A$ can be chosen in $2^K$ ways; further, we can consider that $B$ has size exactly $K$ (the property for sets of size exactly $K$ extends to larger sets by averaging over their size-$K$ subsets) and there are $\binom{N}{K}$ possible choices of such $B$'s. Since $D \ge 2n/\epsilon^2$, the above probability is much less than $(1/4) \cdot 2^{-k}$. Therefore, the probability that relation (8) fails for some $k \in \{1, \ldots, n\}$, some $B$ and some $A$ is less than $1/4$. □
Proof of Claim 4. 
We use a "coupon collector" argument. We consider the graph $G_{n, n - \log\log n - c}$ for some constant $c$ to be fixed later. This graph is obtained from the above function $\mathrm{EXT}$ as explained in Definition 2. The graph $G_{n, n - \log\log n - c}$ is a bipartite graph with left side $L = \{0,1\}^n$, right side $R = \{0,1\}^{n - \log\log n - c}$, and each left node has degree $D = 2^d$. We show that, with probability at least $3/4$, every right node in $G_{n, n - \log\log n - c}$ has degree at least $\Delta$. The random process consists of drawing, for each $x \in L$ and edge label $y \in \{0,1\}^d$, a random element from $R$. Thus, we draw at random $N \cdot D$ times, with replacement, from a set with $|R|$ "coupons". Newman and Shepp [7] have shown that, to obtain each coupon from a set of $p$ coupons at least $h$ times, the expected number of draws is $p \log p + (h-1) p \log\log p + o(p)$. By Markov's inequality, if the number of draws is 4 times the expected value, we collect each coupon $h$ times with probability at least $3/4$. In our case, we have $p = 2^{n - \log\log n - c}$ and $h = \Delta$; it can be checked readily that, for an appropriate choice of the constant $c$, $4(p \log p + (h-1) p \log\log p + o(p)) < N \cdot D$, provided $n$ is large enough. □
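The final inequality can be checked numerically in log-space, since the quantities involved are astronomically large (a sketch with illustrative parameter values; natural logarithms are used in the Newman-Shepp formula):

```python
import math

def enough_draws(n: int, eps: float, c: float) -> bool:
    """Compare, in log2, N*D against 4*(p*ln p + (h-1)*p*ln ln p) with
    p = 2^(n - loglog n - c) coupons and h = Delta, as in Claim 4."""
    d = math.ceil(math.log2(2 * n / eps**2))   # left degree D = 2^d
    h = 2 * eps**-1.5 * 2**d                   # h = Delta = 2*(1/eps)^{3/2}*D
    log2_p = n - math.log2(math.log2(n)) - c   # p = |R| = 2^(n - loglog n - c)
    ln_p = log2_p * math.log(2)
    log2_needed = 2 + log2_p + math.log2(ln_p + (h - 1) * math.log(ln_p))
    log2_available = n + d                     # N*D = 2^(n+d) total draws
    return log2_needed < log2_available

print(enough_draws(n=10**6, eps=0.01, c=40))   # True for these values
```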

Funding

The author has been supported in part by the National Science Foundation through grant CCF 1811729.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author is grateful to Bruno Bauwens for his insightful observations and to Nikolay Vereshchagin for pointing out an error in an earlier version. The author thanks the anonymous referees for their useful suggestions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Zimand, M. List Approximation for Increasing Kolmogorov Complexity. In Proceedings of the 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017), Hannover, Germany, 8–11 March 2017; Vollmer, H., Vallée, B., Eds.; Leibniz International Proceedings in Informatics (LIPIcs), Volume 66; Schloss Dagstuhl-Leibniz-Zentrum für Informatik: Dagstuhl, Germany, 2017; pp. 58:1–58:12.
  2. Buhrman, H.; Fortnow, L.; Newman, I.; Vereshchagin, N. Increasing Kolmogorov complexity. In Proceedings of the 22nd Annual Symposium on Theoretical Aspects of Computer Science, Stuttgart, Germany, 24–26 February 2005; Lecture Notes in Computer Science, Volume 3404; Springer: Berlin, Germany, 2005; pp. 412–421.
  3. Bauwens, B.; Makhlin, A.; Vereshchagin, N.; Zimand, M. Short lists with short programs in short time. In Proceedings of the 28th IEEE Conference on Computational Complexity, Stanford, CA, USA, 5–7 June 2013.
  4. Bauwens, B.; Zimand, M. Universal almost optimal compression and Slepian-Wolf coding in probabilistic polynomial time. arXiv 2019, arXiv:1911.04268.
  5. Chaitin, G.J. Information-Theoretic Characterizations of Recursive Infinite Strings. Theor. Comput. Sci. 1976, 2, 45–48.
  6. Downey, R.; Hirschfeldt, D. Algorithmic Randomness and Complexity; Springer: New York, NY, USA, 2010.
  7. Newman, D.; Shepp, L. The Double Dixie Cup Problem. Am. Math. Mon. 1960, 67, 58–61.