
An Online Algorithm for Lightweight Grammar-Based Compression

Shirou Maruyama, Hiroshi Sakamoto and Masayuki Takeda

1 Department of Informatics, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka-shi, Fukuoka, Japan
2 Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka-shi, Fukuoka, Japan
* Author to whom correspondence should be addressed.
Algorithms 2012, 5(2), 214-235; https://doi.org/10.3390/a5020214
Submission received: 30 January 2012 / Revised: 26 March 2012 / Accepted: 28 March 2012 / Published: 10 April 2012
(This article belongs to the Special Issue Data Compression, Communication and Processing)

Abstract: Grammar-based compression is a well-studied technique for constructing a context-free grammar (CFG) that derives a given text uniquely. In this work, we propose an online algorithm for grammar-based compression. Our algorithm guarantees an $O(\log^2 n)$ approximation ratio for the minimum grammar size, where $n$ is the input size, and it runs in time linear in the input and space linear in the output. In addition, we propose a practical encoding that transforms a restricted CFG into a more compact representation. Experimental comparisons with standard compressors demonstrate that our algorithm is especially effective for highly repetitive text.

1. Introduction

Grammar-based compression [1] finds a small context-free grammar (CFG) that generates a given string uniquely. Let us illustrate grammar-based compression by an intuitive example. If a string $w$ contains many occurrences of a substring $\gamma$, we can replace all of them by a single variable $A$ associated with $\gamma$ by a rule $A \to \gamma$. The text is thus compressed to a shorter one according to the frequency of $\gamma$. The representation of grammar-compressed strings is simple, yet powerful, because a CFG can derive a string whose length is exponential in the grammar size. In fact, it was recently reported that grammar-based and LZ77-based [2] compressors achieve effective compression for highly repetitive text [3,4,5] compared with entropy-based encoders. Grammar-compressed strings are also suitable for accelerating string processing, for example, combinatorial pattern matching [6,7,8,9,10,11], edit-distance computation [12], $q$-gram computation [13,14], and mining characteristic patterns [15,16]. We note that the above compressed-string algorithms, which are mainly theoretical, are formulated over straight-line programs (SLPs). An SLP is a CFG in Chomsky normal form that derives a single string. This setting is reasonable because any CFG deriving a single string can be converted straightforwardly into an SLP without significant time and space penalties. The smaller the given CFG, the faster these compressed-string algorithms run; they therefore call for compression algorithms that are guaranteed to produce small CFGs in every case. We should also pay attention to the working cost of the compression phase when considering total time and space. One of our interests is, therefore, to translate an input string into a good CFG under the constraints of efficient working time and space.
In a theoretical sense, finding the smallest CFG for an input text is NP-hard, and hardness of approximation was also proved [17]. For this reason, many compression algorithms with different characteristics have been proposed. In grammar-based compression, some algorithms based on greedy strategies are known to achieve high compression ratios on real-world texts, e.g., Sequitur [18], Re-Pair [19], Greedy [20], and LFS2 [21]. Upper bounds on their approximation ratios were analyzed in [22]. The best approximation ratio among the greedy algorithms is $O((n/\log n)^{1/2})$, where $n$ is the input string length (in this paper, $\log$ stands for $\log_2$). On the other hand, several algorithms achieving a logarithmic approximation ratio have been proposed. For the minimum grammar size $g^*$, the first $O(\log(n/g^*))$-approximation algorithm was developed by Charikar et al. [22]. Independently, Rytter [23] presented another $O(\log(n/g^*))$-approximation algorithm using the suffix tree. Sakamoto [24] also proposed a linear-time $O(\log(n/g^*))$-approximation algorithm based on Re-Pair. However, these algorithms require $\Omega(n)$ space, and this weakness prevents us from applying them to huge texts. The space complexity was improved by several multi-pass algorithms over read/write streams. Sakamoto et al. [25] proposed the LCA algorithm, which requires $O(g^* \log g^*)$ space with linear running time and an $O(\log n \log g^*)$ approximation ratio. LCA was modified to achieve an $O(\log n \log^* n)$ approximation ratio within $O(n \log^* n)$ running time [26], where $\log^* n$, the iterated logarithm, is the number of times the $\log$ function must be applied to $n$ to produce a constant. Gagie and Gawrychowski [27] proposed an $O(\min(g^*, \sqrt{n \log n}))$-approximation algorithm in a streaming model, where the algorithm works in constant space with logarithmic passes over a constant number of streams. We must point out, however, that these lightweight algorithms require large external memory for managing read/write streams, so their practical running time is affected by the I/O response time. Moreover, the main results on approximation algorithms are almost entirely theoretical, and their practical compression performance is either unknown or worse than that of popular compression programs.
Because of these factors, we consider a more practical setting. Many practical data compressors demand running time linear in the length of the input string. Ideally, a compressor should also be online; that is, it processes the characters of the input string from left to right, one by one, with no need to know the whole string beforehand. Preferably, the space consumption throughout compression should depend on the size of the compressed string, not the size of the string being compressed. We thus focus on compression with restricted resources and develop an online algorithm that preserves a good approximation ratio. The proposed algorithm is based on LCA. Thanks to its simplicity, LCA does not require special data structures, and it runs in linear time and economical space. The space required by the proposed algorithm is $O(g^* \log^2 n)$, and the approximation ratio is $O(\log^2 n)$. The main task of LCA is to replace long and frequent substrings by a common nonterminal within a work space smaller than $\Omega(n)$. The resulting CFG is much smaller on highly repetitive texts, so we implement the online LCA algorithm as a more practical compressor. To this end, we introduce a practical encoding technique that cuts down the constant factor of the output grammar size. The proposed encoding is based on a binary-tree representation of CFGs. The space complexity of the improved LCA algorithm is proportional to the size of the produced CFG; hence, a smaller work space can be expected when the given text is highly compressible. Our experiments show that online LCA achieves effective compression for highly repetitive text compared with other standard compressors, and its space consumption is smaller than the input size.

2. Preliminaries

This section gives the notation and definitions for strings and grammar-based compression.

2.1. Basic Notations

We assume a finite alphabet $\Sigma$ for the symbols forming input strings throughout this paper. The set of all strings over $\Sigma$ is denoted by $\Sigma^*$, and $\Sigma^i$ denotes the set of all strings of length exactly $i$. The length of $w \in \Sigma^*$ is denoted by $|w|$, and the cardinality of a set $C$ is likewise denoted by $|C|$.
Strings $x$ and $z$ are said to be a prefix and a suffix of the string $w = xyz$, respectively. Also, $x, y, z$ are called substrings of $w$. The $i$th symbol of $w$ is denoted by $w[i]$. For integers $i, j$ with $1 \le i \le j \le |w|$, the substring of $w$ from $w[i]$ to $w[j]$ is denoted by $w[i,j]$.
A repetition is a string $x^k$ for a symbol $x$ and an integer $k \ge 2$. A repetition $w[i,j] = x^k$ is maximal if $w[i-1], w[j+1] \ne x$. It is referred to simply as $x^+$ when the length is irrelevant. Substrings $w[i,j]$ and $w[k,\ell]$ are overlapping if $i < k \le j < \ell$. A string of length two is called a pair.

2.2. Grammar-Based Compression

A context-free grammar (CFG) is a quadruple $G = (\Sigma, N, D, S)$ of disjoint finite alphabets $\Sigma$ and $N$, a finite set (a dictionary) $D \subseteq N \times (N \cup \Sigma)^*$ of production rules, and the start symbol $S \in N$. Symbols in $N$ are called nonterminals. A production rule $A \to b_1 \cdots b_k$ in $D$ derives $\beta \in (\Sigma \cup N)^*$ from $\alpha \in (\Sigma \cup N)^*$ by replacing an occurrence of $A \in N$ in $\alpha$ with $b_1 \cdots b_k$, denoted by $\alpha \Rightarrow \beta$. Similarly, we say that $D$ derives $\beta$ from $\alpha$ provided $\alpha \Rightarrow^* \beta$, where $\Rightarrow^*$ is the reflexive, transitive closure of $\Rightarrow$. If a string is derived from the start symbol, we also say that the CFG derives the string. In this paper, we assume that any CFG is admissible [1]; that is, $G$ derives exactly one string in $\Sigma^*$ and, for each nonterminal $A \in N$, exactly one production rule $A \to \alpha$ is defined in $D$. We also assume that any $A \in N$ is appropriate; that is, $A \to \alpha, B \to \alpha \in D$ implies $A = B$. The size of $G$ is the total length of the strings on the right-hand sides of all production rules, and is denoted by $|G|$. The aim of grammar-based compression is formalized as the following combinatorial optimization problem:
Problem 1.
Grammar-Based Compression
Input: A string $w \in \Sigma^*$.
Output: An admissible CFG G that derives w.
Measure: The size of G.
In the following, we assume that every admissible CFG is restricted so that the right-hand side of any production rule has length two. Note that for any CFG $G$ there is an equivalent restricted CFG $G'$ whose size is at most $2|G|$, so this restriction is reasonable.
An important relation is known between admissible CFGs and the following factorization. The LZ-factorization $LZ(w)$ of $w$ is the decomposition of $w$ into $f_1 \cdots f_k$, where $f_1 = w[1]$ and, for each $1 < \ell \le k$, $f_\ell$ is the longest prefix of the remaining suffix $suf_\ell = f_\ell \cdots f_k$ that appears in $f_1 \cdots f_{\ell-1}$; if no such prefix exists, $f_\ell = suf_\ell[1]$. Each $f_\ell$ is called a factor. The size $|LZ(w)|$ of $LZ(w)$ is the number of its factors. The following result is used in the analysis of the approximation ratio of our algorithm.
Theorem 1
(Rytter [23]). For any string $w$ and any admissible CFG $G$ deriving $w$, the inequality $|LZ(w)| \le |G|$ holds.
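To make the definition concrete, here is a small Python sketch (our illustration, not part of the paper) of the LZ-factorization defined above; the quadratic scan is for clarity only, since linear-time constructions via suffix structures are standard.

```python
def lz_factorize(w):
    """LZ-factorization as defined above: each factor is the longest
    prefix of the remaining suffix occurring in the already-factorized
    prefix; if none exists, the factor is a single symbol."""
    factors = []
    i = 0
    while i < len(w):
        length = 0
        # extend while w[i : i+length+1] occurs in w[:i]
        while i + length < len(w) and w[i:i + length + 1] in w[:i]:
            length += 1
        factors.append(w[i:i + length] if length > 0 else w[i])
        i += max(length, 1)
    return factors

# lz_factorize("ababab") == ['a', 'b', 'ab', 'ab'], so |LZ(w)| = 4
```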

3. Compression Algorithm

This section presents our proposed algorithm and analyzes its performance.

3.1. Basic Idea

The basic task of the algorithm is to replace a pair $XY$ occurring in a string by a new symbol $Z$ and add a production $Z \to XY$ to $D$, where all occurrences of $XY$ that are determined to be replaced are replaced by the same $Z$. Note, however, that not all occurrences of $XY$ are replaced by $Z$. The critical task is to determine which occurrences of $XY$ to replace so that the replaced pairs in common substrings are almost synchronized, as shown in Figure 1. The aim of the algorithm is thus to minimize the number of distinct nonterminals generated.
Here we explain the three decision rules for the replacement. The rules introduced in this paper are modified versions of those in Sakamoto et al.'s algorithm [25], extended to our online compression in the next subsection.
The first rule (repetitive pair): Let the current string $S$ contain a maximal repetition $S[i,j] = a^k$. We generate $A \to aa \in D$ for an appropriate nonterminal $A$, and replace $S[i,j]$ by $A^{k/2}$ if $k$ is even, or $S[i,j-1]$ by $A^{(k-1)/2}$ if $k$ is odd.
The second rule (minimal pair): We assume a total order over $\Sigma \cup N$; that is, any symbol is represented by an integer. If the current string contains a substring $A_i A_j A_k$ such that $j < i, k$, then the occurrence of $A_j$ is called minimal. The second decision rule replaces each such pair $A_j A_k$ in $A_i A_j A_k$ by an appropriate nonterminal.
In order to introduce the third decision rule, we explain the notion of the lowest common ancestor on a tree.
Definition 1.
Let $p$ be a positive integer and $k = \lceil \log p \rceil$. The index tree $T_p$ is the rooted, ordered, complete binary tree whose leaves are labeled $1, \ldots, 2^k$ from the left. The height of a node $v$ is the number of edges on the longest path from $v$ to a descendant of $v$. The height of the lowest common ancestor of leaves $i, j$ is denoted by $lca(i,j)$ for short. (The $lca$ of any two leaves of a (virtual) complete binary tree can be computed in $O(1)$ time/space on the RAM model [28].)
Figure 2 shows an example of the index tree and lowest common ancestor.
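For a complete binary tree, the $lca$ height needs no explicit tree. As an aside (our illustration, assuming the leaves are numbered from 1 as in Definition 1), it is the position of the most significant bit in which the two zero-based leaf indices differ:

```python
def lca_height(i, j):
    """Height of the lowest common ancestor of leaves i and j (1-indexed)
    in a (virtual) complete binary tree: the highest differing bit of the
    zero-based leaf indices."""
    return ((i - 1) ^ (j - 1)).bit_length()

# In T_16 (Figure 2): lca_height(1, 2) == 1 (their parent),
# lca_height(2, 3) == 2, and lca_height(1, 16) == 4 (the root).
```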
The third rule (maximal pair): For a fixed order on the alphabet, let the current string contain a substring $A_i A_j A_k A_\ell$ such that the integers $i, j, k, \ell$ are either increasing or decreasing in this order. If $lca(j,k) > lca(i,j), lca(k,\ell)$, then the occurrence of the middle pair $A_j A_k$ is called maximal. The third decision rule replaces all such pairs by an appropriate nonterminal.
We call the pairs replaced by the above rules special pairs; they appear at almost synchronized positions in common substrings. Note that we must set a priority among the decision rules, because the cases can overlap and the repetitive and minimal rules cannot be applied simultaneously. For example, the substring $a_2 a_1 a_3 a_3 a_3$ contains such overlapping pairs. We therefore apply the repetitive rule before the minimal rule to keep the replacement unique. Indeed, with this priority no cases overlap.
If pairs $w[i-2,i-1]$ and $w[j+1,j+2]$ ($i < j$) are special pairs and the substring $w[i,j]$ contains no special pair, we determine the replaced pairs in $w[i,j]$ with left priority; that is, if $j-i$ is odd, the pairs $w[i,i+1], w[i+2,i+3], \ldots, w[j-1,j]$ are replaced, and otherwise $w[i,i+1], w[i+2,i+3], \ldots, w[j-2,j-1]$. With this replacement, the length of any unreplaced interval of the string is at most one.
In the case of offline compression, it is easy to generate a grammar with these replacement rules. After replacing pairs in the input string $w$, we recursively repeat this process on the new string produced by the replacement until it becomes a single symbol, as shown in Figure 3. Note that the height of the parse tree is bounded by $O(\log n)$, where $n = |w|$, because the algorithm replaces at least one of $w[i,i+1]$ and $w[i+1,i+2]$. In the next subsection, we apply this basic idea to the online algorithm.

3.2. Algorithmic Detail

The offline algorithm builds a bottom-up parse tree represented as a CFG. The online algorithm, in contrast, approximates the compression by simulating the left-to-right construction of a parse tree. To do this, we must determine each replaced pair from only a short substring. By the priority of the rules, we can determine whether the pair at any position is a special pair by checking the rules simultaneously. In addition, the following lemma enables us to determine a replaced pair from a substring of length only five.
Lemma 1.
Assume that the replaced pairs in $w[1,i-1]$ are already determined. Whether $w[i,i+1]$ becomes a replaced pair depends only on the interval $w[i-1,i+3]$.
Proof. 
We decide whether the substring $w[i,i+2]$ contains a special pair under the assumption that $w[i-2,i-1]$ has already been replaced. We first check whether a repetitive pair is included, since this rule has the highest priority. In the case $w[i,i+2] = a_r a_r a_s$ ($r \ne s$), we replace $w[i,i+1]$ as a repetitive pair. In the case $w[i,i+2] = a_r a_s a_s$ ($r \ne s$), we preferentially select $w[i+1,i+2]$ as the replaced pair. We must also consider the case $w[i,i+3] = a_r a_s a_t a_t$ ($s \ne t$): here we forcibly replace $w[i,i+1]$ because $w[i+2,i+3]$ is the beginning of a maximal repetition. For the minimal and maximal pairs, we can decide whether $w[i,i+1]$ is a minimal or maximal pair from $w[i-1,i+1]$ and $w[i-1,i+2]$, respectively. If $w[i+1,i+2]$ is a minimal or maximal pair, then $w[i,i+1]$ is not selected as a replaced pair, by the priority. Thus no conditional statement uses anything outside the interval $w[i-1,i+3]$ when computing special pairs.  □
Based on Lemma 1, Algorithm 1 decides whether $w[i,i+1]$ is a replaced pair. Note that if $w[i,i+2]$ contains no special pair, we choose $w[i,i+1]$ as the replaced pair. Using Algorithm 1, it is easy to replace pairs in one pass over a string.
Algorithm 1 $replaced\_pair(w, i)$: a string $w$ and a position $i$.
 1: /* $replaced\_pair(w, i)$ decides whether the pair $w[i,i+1]$ is replaced */
 2: if $w[i,i+1]$ is a repetitive pair then
 3:    return true;
 4: else if $w[i+1,i+2]$ is a repetitive pair then
 5:    /* $w[i+1,i+2]$ is preferentially replaced. */
 6:    return false;
 7: else if $w[i+2,i+3]$ is a repetitive pair then
 8:    /* $w[i,i+1]$ is forcibly replaced by the priority of the repetitive pair. */
 9:    return true;
10: else if $w[i,i+1]$ is a minimal or maximal pair then
11:    return true;
12: else if $w[i+1,i+2]$ is a minimal or maximal pair then
13:    /* $w[i+1,i+2]$ is preferentially replaced. */
14:    return false;
15: else
16:    /* $w[i,i+2]$ contains no special pair. */
17:    return true;
18: end if
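The following Python sketch puts the three rules and Algorithm 1 together (our illustration; the paper's implementation is in C). Symbols are assumed to be positive integers, with $w[i-1]$ always available, as Lemma 1 requires.

```python
def repetitive(w, i):
    # w[i, i+1] is a repetitive pair
    return i + 1 < len(w) and w[i] == w[i + 1]

def minimal(w, i):
    # w[i] is minimal (smaller than both neighbors), so w[i, i+1] is a minimal pair
    return 0 < i < len(w) - 1 and w[i] < w[i - 1] and w[i] < w[i + 1]

def lca_height(i, j):
    # as in the sketch after Definition 1
    return ((i - 1) ^ (j - 1)).bit_length()

def maximal(w, i):
    # w[i, i+1] is the middle pair of a monotonic run of four symbols
    # whose lca value peaks at (w[i], w[i+1])
    if not (0 < i < len(w) - 2):
        return False
    a, b, c, d = w[i - 1], w[i], w[i + 1], w[i + 2]
    if not (a < b < c < d or a > b > c > d):
        return False
    return lca_height(b, c) > lca_height(a, b) and lca_height(b, c) > lca_height(c, d)

def replaced_pair(w, i):
    # Algorithm 1: decide whether the pair w[i, i+1] is replaced
    if repetitive(w, i):
        return True
    if repetitive(w, i + 1):
        return False   # w[i+1, i+2] is preferentially replaced
    if repetitive(w, i + 2):
        return True    # w[i+2, i+3] begins a repetition; replace w[i, i+1]
    if minimal(w, i) or maximal(w, i):
        return True
    if minimal(w, i + 1) or maximal(w, i + 1):
        return False   # w[i+1, i+2] is preferentially replaced
    return True        # no special pair in w[i, i+2]: left priority
```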
The compression algorithm determines replaced pairs in short buffers corresponding to the levels of a parse tree. Let $h$ be the height of the parse tree. We first prepare queues $q_1, q_2, \ldots, q_h$ implemented as circular buffers. Each $q_i$ serves as a buffer that stores a segment of the string at the $i$th level of the parse tree, where the input string corresponds to the first level. The number $h$ of queues is bounded by $O(\log n)$ by the height of the parse tree. We define the basic operations on these queues as follows:
  • $enque(q_i, x)$: append the symbol $x$ to the tail of the queue $q_i$.
  • $deque(q_i)$: return the head of the queue $q_i$ and remove it.
  • $head(q_i)$: return the head of the queue $q_i$.
  • $len(q_i)$: return the length of the queue $q_i$.
The head of the queue $q_i$ is denoted by $q_i[0]$, so the tail corresponds to $q_i[len(q_i)-1]$. Each queue is used to decide a replaced pair with the function $replaced\_pair(w,i)$, and its maximum length is bounded by $O(1)$: by Lemma 1, a constant capacity of at least five symbols suffices.
For linear-time compression, we must prepare another data structure $D_R$, called the reverse dictionary: $D_R(x,y)$ returns the nonterminal $z$ associated with the pair $xy$ by $z \to xy \in D$. In case $z \to xy \notin D$, $D_R(x,y)$ creates a new nonterminal symbol $z' \in N$ and returns $z'$. For instance, if we have a dictionary $D = \{X_1 \to a_1 a_2, X_2 \to a_3 a_1, X_3 \to a_2 a_2\}$, then $D_R(a_3, a_1)$ returns $X_2$, and $D_R(a_1, a_1)$ creates a new nonterminal $X_4$ and returns it. With randomization, $D_R(x,y)$ can be computed in $O(1)$ worst-case time, and inserting a new production rule can be achieved in $O(1)$ amortized time within $O(|D|)$ space, using dynamic perfect hashing [29].
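A hash map suffices for $D_R$ in practice. The sketch below (ours) gives $O(1)$ expected-time lookups, whereas dynamic perfect hashing [29] gives the worst-case bounds stated above.

```python
class ReverseDictionary:
    """D_R: map a pair (x, y) to its nonterminal, creating a fresh
    nonterminal on a miss. Terminals are 1..sigma; nonterminals are
    numbered sigma+1, sigma+2, ... in order of creation."""
    def __init__(self, sigma):
        self.sigma = sigma
        self.pair_to_nt = {}   # (x, y) -> z  with  z -> xy  in D
        self.rules = []        # rules[z - sigma - 1] == (x, y)

    def lookup(self, x, y):
        z = self.pair_to_nt.get((x, y))
        if z is None:          # z -> xy not in D yet: create it
            z = self.sigma + len(self.rules) + 1
            self.pair_to_nt[(x, y)] = z
            self.rules.append((x, y))
        return z
```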
Next we outline the online algorithm. We describe the online version of LCA in Algorithm 2, together with its recursive function $insert\_symbol(q_i, x)$ in Algorithm 3. All queues are initialized to contain only a dummy symbol $d \notin \Sigma \cup N$, which is required to compute the first pair in each queue. In lines 2–5 of Algorithm 2, input characters are enqueued to $q_1$ one by one. In Algorithm 3, whenever some $q_i$ satisfies $len(q_i) \ge 5$, the algorithm decides the replaced pair in $q_i[1,3]$. In case $q_i[1,2]$ is replaced by an appropriate nonterminal $z$, $q_i[0,1]$ is dequeued and $z$ is enqueued to $q_{i+1}$. In case $q_i[2,3]$ is replaced by $z$, $q_i[0,2]$ is dequeued and $q_i[1]$ followed by $z$ is enqueued to $q_{i+1}$. The symbol $q_i[2]$ in the first case and $q_i[3]$ in the second case remain in $q_i$ to determine the next replaced pair after a new symbol is enqueued to $q_i$. Figure 4 illustrates the action of the function $insert\_symbol(q_i, x)$. The algorithm recursively continues this process until all input characters are enqueued. As post-processing, the symbols remaining in the queues $q_1, \ldots, q_h$ are replaced by appropriate nonterminals in left-to-right order. Finally, the produced dictionary is returned.
Algorithm 2 LCA-online.
 1: $D := \emptyset$; initialize queues;
 2: repeat
 3:    input a new character $c$;
 4:    $insert\_symbol(q_1, c)$;
 5: until $c$ is the end of the input;
 6: $i := 1$;
 7: while $q_i$ is not empty do
 8:    replace the symbols remaining in $q_i[1,4]$,
 9:    and then enqueue the replaced string of $q_i[1,4]$ into $q_{i+1}$;
10:    $i := i + 1$;
11: end while
12: output $D$;
Algorithm 3 $insert\_symbol(q_i, x)$: a queue $q_i$ and a symbol $x$.
 1: $enque(q_i, x)$;
 2: if $len(q_i) \ge 5$ then
 3:    if $replaced\_pair(q_i, 1) = true$ then
 4:       $deque(q_i)$;
 5:       $y_1 := deque(q_i)$; $y_2 := head(q_i)$;
 6:       $z := D_R(y_1, y_2)$;
 7:       $D := \{z \to y_1 y_2\} \cup D$;  /* update $D$ */
 8:       $insert\_symbol(q_{i+1}, z)$;
 9:    else
10:       $deque(q_i)$;
11:       $y_1 := deque(q_i)$;
12:       $insert\_symbol(q_{i+1}, y_1)$;
13:       $y_2 := deque(q_i)$; $y_3 := head(q_i)$;
14:       $z := D_R(y_2, y_3)$;
15:       $D := \{z \to y_2 y_3\} \cup D$;  /* update $D$ */
16:       $insert\_symbol(q_{i+1}, z)$;
17:    end if
18: end if
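The queue mechanics of Algorithms 2 and 3 can be sketched as follows (our illustration, building on the replaced_pair and ReverseDictionary sketches above; the final flush of lines 6–11 of Algorithm 2 is omitted):

```python
from collections import deque

class LCAOnline:
    DUMMY = 0  # dummy symbol d, not in Sigma or N

    def __init__(self, sigma):
        self.dr = ReverseDictionary(sigma)  # from the sketch above
        self.queues = []

    def _queue(self, level):
        while len(self.queues) <= level:
            self.queues.append(deque([self.DUMMY]))  # each queue starts with the dummy
        return self.queues[level]

    def insert_symbol(self, level, x):
        q = self._queue(level)
        q.append(x)
        if len(q) < 5:
            return
        w = list(q)
        if replaced_pair(w, 1):                         # q[1, 2] is replaced
            q.popleft()                                 # drop q[0]
            y1 = q.popleft()                            # y1 = old q[1]
            z = self.dr.lookup(y1, q[0])                # old q[2] stays as the head
            self.insert_symbol(level + 1, z)
        else:                                           # q[2, 3] is replaced
            q.popleft()                                 # drop q[0]
            self.insert_symbol(level + 1, q.popleft())  # pass old q[1] up
            y2 = q.popleft()                            # y2 = old q[2]
            z = self.dr.lookup(y2, q[0])                # old q[3] stays as the head
            self.insert_symbol(level + 1, z)

# usage: feed the input one character at a time (symbols shifted to 1..256)
# lca = LCAOnline(sigma=256)
# for c in data:                 # data: bytes
#     lca.insert_symbol(0, c + 1)
```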

3.3. Performance Analysis

First, we estimate the running time of LCA-online. We use the following notation for the string enqueued to each queue.
Definition 2.
For each queue $q_i$, let $S_i$ denote the string obtained by concatenating all symbols enqueued to $q_i$ in left-to-right order.
Note that $S_i$ corresponds to the string at the $i$th level of the parse tree produced by replacing pairs, as shown in Figure 3. We first prove the following characteristic.
Theorem 2.
The running time of LCA-online is bounded by $O(n)$, where $n$ is the input length.
Proof. 
For any $S_k$, the inequality $\frac{1}{2}|S_k| \le |S_{k+1}| \le \frac{2}{3}|S_k|$ holds because the algorithm replaces at least one of $S_k[i,i+1]$ and $S_k[i+1,i+2]$. Therefore, $k = O(\log n)$ and the total number of symbols inserted into all queues is bounded by $O(n)$. In any queue, computing a replaced pair takes $O(1)$ time because we can verify in $O(1)$ time whether $S_k$ contains a repetitive, minimal, or maximal pair at the position in question. Computing the appropriate nonterminal for any pair also takes $O(1)$ time. Hence, the running time is bounded by $O(n)$.  □
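To make the arithmetic explicit (our elaboration of the proof): since each level shrinks the string by a constant factor, the level sizes form a geometric series,

$$\sum_{k \ge 1} |S_k| \;\le\; n \sum_{t \ge 0} \left(\tfrac{2}{3}\right)^t \;=\; 3n \;=\; O(n), \qquad h \;\le\; \log_{3/2} n \;=\; O(\log n).$$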
Next, we prove that the approximation ratio of LCA-online is reasonable. The approximation ratio of a compression algorithm is the upper bound of $g/g^*$ over all input strings, where $g$ is the output grammar size and $g^*$ is the minimum grammar size.
Definition 3.
Let $S$ be a string and let $S[i,j] = \alpha$ be an occurrence of a substring $\alpha$ in $S$. We call $S[i,j]$ a boundary occurrence if $S[i] \ne S[i+1]$ and $S[j] \ne S[j-1]$.
Definition 4.
Let $S_t$ be the string enqueued to $q_t$. Then $R_t(i,j)$ is the shortest substring of $S_{t+1}$ that derives a string containing $S_t[i,j]$.
Lemma 2.
Let $S_t[i_1,j_1] = S_t[i_2,j_2] = \alpha$ be any boundary occurrences. For an input string of length $n$, there exists an integer $k \le \log n$ such that $R_t(i_1+k, j_1-k) = R_t(i_2+k, j_2-k)$.
Proof. 
Let us consider the index tree $T_n$. If a string $\alpha = a_{\ell_1} a_{\ell_2} \cdots a_{\ell_m}$ of length $m$ is monotonic, i.e., either $\ell_1 > \ell_2 > \cdots > \ell_m$ or $\ell_1 < \ell_2 < \cdots < \ell_m$, and $lca(\ell_1,\ell_2), lca(\ell_2,\ell_3), \ldots, lca(\ell_{m-1},\ell_m)$ are monotonic, then $m$ is bounded by $\log n$. Therefore, at least one minimal or maximal pair must appear within any $\log n$ consecutive symbols containing no repetition. Thus, a prefix of $S_t[i_1,j_1]$ longer than $\log n$ contains at least one minimal/maximal pair, and this pair also appears in $S_t[i_2,j_2]$ at the corresponding position. Hence, the replacements inside $S_t[i_1,j_1]$ and $S_t[i_2,j_2]$, i.e., in $S_t[i_1+k,j_1-k]$ and $S_t[i_2+k,j_2-k]$, completely synchronize, and $R_t(i_1+k,j_1-k) = R_t(i_2+k,j_2-k)$ for some $k \le \log n$.  □
Theorem 3.
The approximation ratio $g/g^*$ of LCA-online is $O(\log^2 n)$, where $g$ is the output grammar size, $g^*$ is the minimum grammar size, and $n$ is the length of the input string.
Proof. 
We estimate the number of distinct nonterminals produced by LCA-online. Let $w_1, \ldots, w_m$ be the LZ-factorization of the input string $w$. Let $\#(w)$ denote the maximum number of distinct nonterminals generated in a single queue after the compression of $w$ is completed. By the definition of the LZ-factorization, any factor $w_i$ occurs in the prefix $w_1 \cdots w_{i-1}$, or $|w_i| = 1$. First, consider the case that $w_i$ is a boundary occurrence. By Lemma 2, any two occurrences of $w_i$ are transformed to $\alpha\beta\gamma$ and $\alpha'\beta\gamma'$, respectively, such that $|\alpha| = |\alpha'|$, $|\gamma| = |\gamma'|$, and $|\alpha\gamma\alpha'\gamma'| = O(\log n)$. In the case that $w_i$ is not a boundary occurrence, $w_i = a^+ \lambda b^+$ for some string $\lambda$ and repetitions $a^+, b^+$. The number of distinct nonterminals produced by the replacement of $a^+$ and $b^+$ is bounded by $O(1)$. If $\lambda$ is a boundary occurrence, this case reduces to the previous one. If $\lambda$ is not a boundary occurrence, then $\lambda = c^+ \lambda' d^+$ for some string $\lambda'$ with $c \ne a$ and $d \ne b$; in this case, any occurrence of $\lambda$ inside $a^+ \lambda b^+$ is transformed to exactly the same string. Thus, for a single queue, $\#(w) = \#(w_1 \cdots w_{m-1}) + O(\log n) = O(m \log n) = O(g^* \log n)$, where the last equality follows from $m = |LZ(w)| \le g^*$ by Theorem 1. Because the number of queues is at most $O(\log n)$, the size of the final dictionary is $O(g^* \log^2 n)$.  □
Finally, we estimate the space complexity of our algorithm.
Theorem 4.
The space required by LCA-online is bounded by $O(g^* \log^2 n)$, where $g^*$ is the minimum grammar size and $n$ is the input string length.
Proof. 
The number of queues is bounded by $O(\log n)$ and the length of any queue is $O(1)$; thus, the space required for the queues is $O(\log n)$. The space for the reverse dictionary is bounded by the size of the generated grammar, which is $O(g^* \log^2 n)$ by Theorem 3. Thus, the total space is bounded by $O(g^* \log^2 n)$.  □

4. Encoding Technique

This section proposes a compact representation of a restricted CFG $G = (\Sigma, N, D, S)$. In the following, we assume $\Sigma = \{1, 2, \ldots, \sigma\}$ for simplicity.

4.1. Encoded Representation of CFG

For a grammar $G$ deriving $w$, we create the partial parse tree $PTree(G)$ (this concept was introduced by Rytter [23]) by the following operation: let $T$ be the parse tree of $w$ under $G$. While $T$ contains a maximal subtree rooted at some $A \in N$ that appears in $T$ at least twice, replace every occurrence of that subtree except the leftmost by a single node labeled $A$. The final tree is denoted $PTree(G)$. Figure 5 shows an example of the partial parse tree. $PTree(G)$ has $g$ internal nodes and $g+1$ leaves, where $g = |N|$, because $PTree(G)$ is a binary tree. $PTree(G)$ can be constructed in $O(g)$ time/space by expanding each nonterminal only once.
The skeleton of $PTree(G)$ is represented by a sequence of parentheses. Let $x_1, x_2, \ldots, x_{2g+1}$ be the sequence of its nodes sorted in post-order. We represent this sequence by $2g+1$ parentheses as follows:

$$F[i] = \begin{cases} \text{`('} & \text{if } x_i \text{ is a leaf} \\ \text{`)'} & \text{otherwise} \end{cases}$$
We then build a sequence of the leaf labels of $PTree(G)$ to keep the information of the original string $w$. Let $E[1,g] \in N^g$ be the sequence of internal node labels of $PTree(G)$ in post-order, and let $M[1,g+1] \in (\Sigma \cup N)^{g+1}$ be the sequence of leaf labels of $PTree(G)$ in post-order. Note that $E[1,g]$ is a permutation of $N$ because every internal node has a distinct label. Let $E^{-1}$ be the function that maps any nonterminal $z \in N$ to the position $i$ such that $E[i] = z$. We then define the sequence $L[1,g+1]$ of renamed leaf labels for $M$ by the following:
$$L[i] = \begin{cases} E^{-1}(M[i]) + \sigma & (M[i] \in N) \\ M[i] & (M[i] \in \Sigma) \end{cases}$$
We then output the pair $(F, L)$ as the encoded representation of the CFG. Clearly, the time/space to compute $(F, L)$ is $O(g)$.
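A post-order traversal that expands each nonterminal only on its first visit yields $(F, L)$ directly. The following Python sketch (ours, with $D$ represented as a map from each nonterminal to its pair and terminals numbered $1..\sigma$) illustrates this:

```python
def encode(D, start, sigma):
    """Build (F, L): post-order traversal of the partial parse tree,
    expanding each nonterminal only the first time it is met."""
    F, M = [], []
    E_inv = {}   # nonterminal -> its post-order rank among internal nodes

    def visit(sym):
        if sym <= sigma or sym in E_inv:   # terminal or already expanded: leaf
            F.append('(')
            M.append(sym)
        else:                              # first visit: internal node
            y1, y2 = D[sym]
            visit(y1)
            visit(y2)
            F.append(')')
            E_inv[sym] = len(E_inv) + 1

    visit(start)
    L = [E_inv[s] + sigma if s > sigma else s for s in M]  # rename leaves
    return ''.join(F), L

# e.g. D = {3: (1, 2), 4: (3, 1), 5: (3, 4)}, start = 5, sigma = 2
# (i.e., A -> ab, B -> Aa, S -> AB) gives F = '(()(())' and
# L = [1, 2, 3, 1] for the derived string "ababa".
```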
We estimate the number of bits required for $(F, L)$. The space required for $F$ is $2g+1$ bits because $F$ is a sequence over a binary alphabet representing $g$ internal nodes and $g+1$ leaves. Because $L$ is a sequence of length $g+1$ over $\{1, 2, \ldots, g+\sigma\}$, $L$ can be represented in $(g+1)\log(g+\sigma)$ bits. Thus, the total space for $(F, L)$ is approximately $g\log(g+\sigma) + 2g$ bits. A naive encoding that lists the right-hand sides of the $g$ production rules requires $2g\log(g+\sigma)$ bits. Our representation thus reduces the space to almost half.
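As a concrete illustration (our numbers, not from the paper): for $g = 10^6$ rules over a byte alphabet ($\sigma = 256$), we have $\log(g+\sigma) \approx 20$, so

$$\underbrace{2g\log(g+\sigma)}_{\text{naive}} \approx 40 \times 10^6 \text{ bits} \approx 5\,\text{MB}, \qquad \underbrace{g\log(g+\sigma) + 2g}_{(F,L)} \approx 22 \times 10^6 \text{ bits} \approx 2.75\,\text{MB}.$$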
Note that the two arrays $F$ and $L$ can be combined into a single array in which each symbol $L[i]$ is embedded immediately after the $i$th open parenthesis of $F$. This combined array has the advantage that decoding can be performed in one pass over the compressed text. We can also apply simple variable-length coding, as in LZW [30], to each element of $L$, because the number of nonterminals assignable to a leaf is limited to the number of internal nodes appearing before that leaf in post-order. Such variable-length coding further improves the compression.

4.2. Decoding Process

We can decode the encoded representation of the CFG because any nonterminal $z$ in $L$ indicates the position of the internal node corresponding to $z$. We describe the process in Algorithm 4. Scanning the compressed text (the combined $F$ and $L$) from left to right, we simulate the post-order traversal of the partial parse tree and restore the dictionary $D$. To do this, we use a stack $stk$ with two basic operations:
  • $push(stk, x)$: add the symbol $x$ to the top of the stack $stk$.
  • $pop(stk)$: return the top of the stack $stk$ and remove it.
Algorithm 4 Decode.
 1: input a grammar size $g$ and an alphabet size $\sigma$;
 2: create an empty stack $stk$;
 3: $i := 1$; $j := 1$; $k := 1$;
 4: while $i \le 2g+1$ do
 5:    input a parenthesis $F[i]$;
 6:    if $F[i]$ = '(' then
 7:       input a symbol $L[j]$;
 8:       output the string derived from $L[j]$ using $D$;
 9:       $push(stk, L[j])$; $j := j + 1$;
10:    else
11:       $y_2 := pop(stk)$; $y_1 := pop(stk)$;
12:       $z := k + \sigma$;
13:       $D := \{z \to y_1 y_2\} \cup D$;  /* update $D$ */
14:       $push(stk, z)$; $k := k + 1$;
15:    end if
16:    $i := i + 1$;
17: end while
When we decode $L[j]$ in line 8, the required production rules are certainly contained in the current dictionary $D$ by the properties of the partial parse tree. Thus, the algorithm correctly outputs the original string by decoding the sequence $L[1,g+1]$. The decoding time is bounded by $O(n)$ to output the original string, and the space is $O(g)$ to store the dictionary $D$.
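For completeness, here is a Python sketch of Algorithm 4 (our illustration, matching the encode sketch above):

```python
def decode(F, L, sigma):
    """Rebuild D and the original string from (F, L); terminals are
    1..sigma and the k-th ')' creates nonterminal k + sigma."""
    D, stk, out = {}, [], []
    j = k = 0

    def expand(s):   # output the string derived from s using D
        if s <= sigma:
            out.append(s)
        else:
            y1, y2 = D[s]
            expand(y1)
            expand(y2)

    for c in F:
        if c == '(':                 # leaf: decode and remember L[j]
            expand(L[j])
            stk.append(L[j])
            j += 1
        else:                        # internal node: restore z -> y1 y2
            y2, y1 = stk.pop(), stk.pop()
            k += 1
            z = k + sigma
            D[z] = (y1, y2)
            stk.append(z)
    return out

# decode('(()(())', [1, 2, 3, 1], 2) == [1, 2, 1, 2, 1]   ("ababa")
```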

5. Experimental Results

We implemented three compressors based on the LCA algorithm, which are available from the Google Code project (http://code.google.com/p/lcacomp/). The first, denoted LCA-online, is the online LCA algorithm proposed in Section 3. The second, denoted LCA-offline, is a faithful implementation of the offline LCA algorithm, which requires $o(g)$ main memory by using $O(n)$ external memory, where $g$ is the output grammar size and $n$ is the input text size; the compression speed of LCA-offline is therefore affected by I/O time. The third, denoted LCA-fast, is another implementation of offline LCA, which requires $O(n)$ memory and can thus compress faster than LCA-offline.
For each generated CFG, two encoding methods are applied: the naive encoding of the production rules and the improved encoding presented in Section 4. Recall that the improved method requires $O(g)$ space. We distinguish the naive and improved encodings by the suffixes :N and :I, respectively; for example, LCA-online:I means LCA-online with the improved encoding.
We compare our algorithms with other practical compressors. LZW [30] is a variant of LZ78-encoding [31], which we implemented ourselves; our LZW implementation does not reset the codeword dictionary, unlike compress in UNIX. gzip (http://www.gzip.org) is based on LZ77-encoding [2] with a limited window size. bzip2 (http://www.bzip.org) is based on block-sorting compression using the Burrows-Wheeler Transform [32]. For gzip and bzip2, although we specified the -9 option to obtain their best compression, these programs run in limited memory because they output compressed text before seeing all of the input. Re-Pair [19] (http://www.cbrc.jp/~rwan/en/restore.html) is an offline grammar-based compressor that recursively substitutes a new symbol for the most frequent pair. LZMA (p7zip) (http://p7zip.sourceforge.net/) is a powerful compressor based on LZ77-encoding with an unlimited window size; we set its window length to the input text length to achieve the best compression. Table 1 summarizes the comparison in space usage and online/offline operation, where $g$ is the output grammar size produced by the LCA variants, $z$ is the number of phrases output by LZW, and $n$ is the input text length.
We used highly repetitive texts from the repetitive corpus (Real) (http://pizzachili.dcc.uchile.cl/repcorpus.html), which consists of DNA sequences (Escherichia_Coli, Para, Cere, influenza), source code (coreutils, kernel), and natural-language texts (einstein.de.txt, einstein.en.txt, world_leaders). More detailed documentation is available from Pizza & Chili (http://pizzachili.dcc.uchile.cl/repcorpus/statistics.pdf). We also used general real-world texts (ENGLISH, XML) from the Pizza & Chili corpus (http://pizzachili.dcc.uchile.cl/texts.html); ENGLISH is a natural-language text collection written in English, and XML is structured text downloaded from http://dblp.uni-trier.de. Our environment was CentOS 5.5 (64-bit) with two Intel Xeon E5504 2.0 GHz (quad-core) CPUs and 144 GB of RAM. Our programs are written in C and compiled with gcc 4.1.2 using the -O3 option. We measured processing time with the time command and maximum memory usage with the memusage command.

5.1. Comparison with Standard Compressors

LCA-online:I is compared with the other standard compressors in terms of compression ratio, memory consumption, and compression time. Table 2(a) shows the compression ratios, and Table 2(b) shows the main memory usage, represented as a fraction of the input text size. For general texts, the compression ratio of LCA-online:I is worse than those of the other compressors. The repetitive substrings in typical texts are generally short, for example, single words in English text or short tags in XML documents, and our algorithm seems weak at capturing such short repetitive substrings. On the other hand, it achieves a higher compression ratio for the repetitive texts because it replaces long common substrings by the same nonterminals, as analyzed in Section 3.3. LZW does not work well on the repetitive texts in spite of maintaining a dictionary, because LZW (LZ78) parsing is not guaranteed to capture long and frequent substrings. gzip and bzip2 also do not work well because they compress the input text in limited segments rather than using the whole text. Re-Pair and LZMA use the whole text for powerful compression and thus obtain better compression ratios than ours; as seen in Table 2(b), however, they require more memory than the input text. By contrast, the space requirements of our methods and of LZW depend on the output size. In particular, our compression ratios for repetitive texts are much better than those of LZW, so the space usage becomes very small when the input text is highly compressible.
Table 2(c) shows the average compression time per MiB. LCA-online:I achieves fast compression regardless of the kind of text, though it is slightly slower on general texts than on repetitive texts; our implementation of the reverse dictionary seems to slow down somewhat as the dictionary grows. The other compressors, especially gzip, Re-Pair, and LZMA, can be quite slow depending on the kind of text, in particular on biological data with a small alphabet.
From these results, we can say that LCA-online:I has practical properties for compressing huge, highly repetitive texts: economical space, fast compression, and powerful compressive performance.

5.2. Comparison with Different Variations on LCA

Table 3(a), Table 3(b), and Table 3(c) show the compression ratio, maximum memory usage, and compression time of the LCA variations, respectively. From Table 3(a) and Table 3(c), we see that the improved encoding yields clearly better compression than the naive encoding at almost the same processing time. Regarding the grammar size, there is little difference between the online and offline algorithms. From Table 3(b) and Table 3(c), the running time of LCA-online is almost the same as that of LCA-offline and a little slower than that of LCA-fast. Recall, however, that the compression speed of LCA-offline depends on the I/O time of the computing environment, and that LCA-fast always needs more memory than the input text. In addition, LCA-online has the advantage of enabling incremental compression.
From these results, LCA-online performs on par with the offline versions, and the proposed encoding is very effective for representing CFGs.

6. Summary

We developed an online algorithm for grammar-based compression. Our algorithm not only guarantees a reasonable approximation ratio for the minimum grammar but also, in practice, achieves effective compression for highly repetitive text.
As future work, we will apply our grammars to string processing over compressed texts, for example, compressed pattern matching [33], grammar-based self-indexes [34,35], and randomly accessible data structures [36]. One property of our grammar is that the height of the parse tree is bounded by $O(\log n)$; another is that our algorithm can find long common substrings without $\Omega(n)$-space data structures. These properties should suit such compressed string processing.

Acknowledgements

The authors would like to thank the anonymous reviewers for their careful reading. This work was supported by the JST PRESTO program and by a Grant-in-Aid for Young Scientists (A), MEXT (No. 23680016).

References

  1. Kieffer, J.; Yang, E.H. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory 2000, 46, 737–754.
  2. Ziv, J.; Lempel, A. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 1977, 23, 337–343.
  3. Sirén, J.; Välimäki, N.; Mäkinen, V.; Navarro, G. Run-Length Compressed Indexes Are Superior for Highly Repetitive Sequence Collections. In Proceedings of the 15th International Symposium on String Processing and Information Retrieval (SPIRE ’08), Melbourne, Australia, 10–12 November 2008; pp. 164–175.
  4. Claude, F.; Fariña, A.; Martínez-Prieto, M.A.; Navarro, G. Compressed q-Gram Indexing for Highly Repetitive Biological Sequences. In Proceedings of the 10th IEEE International Conference on Bioinformatics and Bioengineering (BIBE ’10), Philadelphia, PA, USA, 31 May–3 June 2010; pp. 86–91.
  5. Kreft, S.; Navarro, G. Self-Indexing Based on LZ77. In Proceedings of the 22nd Annual Symposium on Combinatorial Pattern Matching (CPM ’11), Palermo, Italy, 27–29 June 2011; pp. 285–298.
  6. Karpinski, M.; Rytter, W.; Shinohara, A. An efficient pattern-matching algorithm for strings with short descriptions. Nordic J. Comput. 1997, 4, 172–186.
  7. Miyazaki, M.; Shinohara, A.; Takeda, M. An Improved Pattern Matching Algorithm for Strings in Terms of Straight-Line Programs. In Proceedings of the 8th Annual Symposium on Combinatorial Pattern Matching (CPM ’97), Aarhus, Denmark, 30 June–2 July 1997; volume 1264, pp. 1–11.
  8. Cégielski, P.; Guessarian, I.; Lifshits, Y.; Matiyasevich, Y. Window Subsequence Problems for Compressed Texts. In Proceedings of the 1st International Computer Science Symposium in Russia (CSR ’06), St. Petersburg, Russia, 8–12 June 2006; pp. 127–136.
  9. Lifshits, Y. Processing Compressed Texts: A Tractability Border. In Proceedings of the 18th Annual Symposium on Combinatorial Pattern Matching (CPM ’07), London, ON, Canada, 9–11 July 2007; volume 4580, pp. 228–240.
  10. Tiskin, A. Towards Approximate Matching in Compressed Strings. In Proceedings of the 6th International Computer Science Symposium in Russia (CSR ’11), St. Petersburg, Russia, 14–18 June 2011; volume 6651, pp. 401–414.
  11. Yamamoto, T.; Bannai, H.; Inenaga, S.; Takeda, M. Faster Subsequence and Don’t-Care Pattern Matching on Compressed Texts. In Proceedings of the 22nd Annual Symposium on Combinatorial Pattern Matching (CPM ’11), Palermo, Italy, 27–29 June 2011; volume 6661, pp. 309–322.
  12. Hermelin, D.; Landau, G.M.; Landau, S.; Weimann, O. A Unified Algorithm for Accelerating Edit-Distance Computation via Text-Compression. In Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS ’09), Freiburg, Germany, 26–28 February 2009; pp. 529–540.
  13. Goto, K.; Bannai, H.; Inenaga, S.; Takeda, M. Fast q-Gram Mining on SLP Compressed Strings. In Proceedings of the 18th International Symposium on String Processing and Information Retrieval (SPIRE ’11), Pisa, Italy, 17–21 October 2011; volume 7028, pp. 278–289.
  14. Goto, K.; Bannai, H.; Inenaga, S.; Takeda, M. Computing q-Gram Non-Overlapping Frequencies on SLP Compressed Texts. In Proceedings of the 38th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM ’12), Špindlerův Mlýn, Czech Republic, 21–27 January 2012; pp. 301–312.
  15. Inenaga, S.; Bannai, H. Finding Characteristic Substrings from Compressed Texts. In Proceedings of the Prague Stringology Conference (PSC ’09), Prague, Czech Republic, 31 August–2 September 2009; pp. 40–54.
  16. Matsubara, W.; Inenaga, S.; Ishino, A.; Shinohara, A.; Nakamura, T.; Hashimoto, K. Efficient algorithms to compute compressed longest common substrings and compressed palindromes. Theor. Comput. Sci. 2009, 410, 900–913.
  17. Lehman, E.; Shelat, A. Approximation Algorithms for Grammar-Based Compression. In Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’02), San Francisco, CA, USA, 6–8 January 2002; pp. 205–212.
  18. Nevill-Manning, C.; Witten, I. Identifying hierarchical structure in sequences: A linear-time algorithm. J. Artif. Intell. Res. 1997, 7, 67–82.
  19. Larsson, N.J.; Moffat, A. Off-line dictionary-based compression. Proc. IEEE 2000, 88, 1722–1732.
  20. Apostolico, A.; Lonardi, S. Off-line compression by greedy textual substitution. Proc. IEEE 2000, 88, 1733–1744.
  21. Nakamura, R.; Inenaga, S.; Bannai, H.; Funamoto, T.; Takeda, M.; Shinohara, A. Linear-time off-line text compression by longest-first substitution. Algorithms 2009, 2, 1429–1448.
  22. Charikar, M.; Lehman, E.; Liu, D.; Panigrahy, R.; Prabhakaran, M.; Sahai, A.; Shelat, A. The smallest grammar problem. IEEE Trans. Inf. Theory 2005, 51, 2554–2576.
  23. Rytter, W. Application of Lempel-Ziv factorization to the approximation of grammar-based compression. Theor. Comput. Sci. 2003, 302, 211–222.
  24. Sakamoto, H. A fully linear-time approximation algorithm for grammar-based compression. J. Discret. Algorithms 2005, 3, 416–430.
  25. Sakamoto, H.; Kida, T.; Shimozono, S. A Space-Saving Linear-Time Algorithm for Grammar-Based Compression. In Proceedings of the 11th International Symposium on String Processing and Information Retrieval (SPIRE ’04), Padova, Italy, 5–8 October 2004; pp. 218–229.
  26. Sakamoto, H.; Maruyama, S.; Kida, T.; Shimozono, S. A space-saving approximation algorithm for grammar-based compression. IEICE Trans. Inf. Syst. 2009, E92-D, 158–165.
  27. Gagie, T.; Gawrychowski, P. Grammar-Based Compression in a Streaming Model. In Proceedings of the 4th International Conference on Language and Automata Theory and Applications (LATA ’10), Trier, Germany, 24–28 May 2010; volume 6031, pp. 273–284.
  28. Gusfield, D. Algorithms on Strings, Trees, and Sequences; Cambridge University Press: Cambridge, UK, 1997.
  29. Dietzfelbinger, M.; Karlin, A.; Mehlhorn, K.; Meyer auf der Heide, F.; Rohnert, H.; Tarjan, R.E. Dynamic Perfect Hashing: Upper and Lower Bounds. In Proceedings of the 29th Annual Symposium on Foundations of Computer Science (FOCS ’88), White Plains, NY, USA, 24–26 October 1988; pp. 524–531.
  30. Welch, T. A technique for high-performance data compression. IEEE Comput. 1984, 17, 8–19.
  31. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536.
  32. Burrows, M.; Wheeler, D. A Block-Sorting Lossless Data Compression Algorithm; Technical Report 124; Digital Equipment Corporation, 1994.
  33. Kida, T.; Matsumoto, T.; Shibata, Y.; Takeda, M.; Shinohara, A.; Arikawa, S. Collage system: A unifying framework for compressed pattern matching. Theor. Comput. Sci. 2003, 298, 253–272.
  34. Claude, F.; Navarro, G. Self-indexed grammar-based compression. Fundam. Inform. 2011, 111, 313–337.
  35. Gagie, T.; Gawrychowski, P.; Puglisi, S.J. Faster Grammar-Based Self-Index. In Proceedings of the 6th International Conference on Language and Automata Theory and Applications (LATA ’12), A Coruña, Spain, 5–9 March 2012; pp. 273–284.
  36. Bille, P.; Landau, G.M.; Raman, R.; Sadakane, K.; Satti, S.R.; Weimann, O. Random Access to Grammar-Compressed Strings. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA ’11), San Francisco, CA, USA, 23–25 January 2011; pp. 373–389.
Figure 1. An example of replacing pairs. Our aim is to replace pairs that are almost synchronized in common substrings.
Figure 2. The (virtual) index tree $T_{16}$ for $\Sigma \cup N = \{a_1, a_2, \ldots, a_{16}\}$.
Figure 3. An example of a parse tree produced with our replacement rules.
Figure 4. The action of $insert\_symbol(q_i, x)$.
Figure 5. An example of the encoding process for the CFG $G = (\{a,b\}, \{A,B,C,S\}, D, S)$: (1) the dictionary $D$; (2) the partial parse tree $PTree(G)$; (3) the parentheses representation of the tree and the renamed labels, where $Y_i = i + \sigma$; and (4) the encoded representation of $G$.
Table 1. Summary of comparison methods.
Method          Space usage             Online/Offline
LCA-online:I    O(g)                    online
LCA-online:N    O(g)                    online
LCA-offline:I   O(g) + external space   offline
LCA-offline:N   o(g) + external space   offline
LCA-fast:I      O(n)                    offline
LCA-fast:N      O(n)                    offline
LZW             O(z)                    online
gzip -9         limited space           online
bzip2 -9        limited space           offline per block
Re-Pair         O(n)                    offline
LZMA            O(n)                    online
Table 2. Experimental results for LCA-online versus standard compressors. (a) Compression ratio (percentage of the compressed size over the text size); (b) Main memory consumption (fraction of the text size); (c) Compression time (seconds per 1 MiB).
Source              Size (bytes)    online:I   LZW     gzip -9   bzip2 -9   Re-Pair   LZMA
Repetitive Text
Escherichia_Coli    112,689,515     12.43      25.46   37.64     26.98      9.60      4.43
Para                429,265,758     4.48       23.56   27.04     26.15      2.74      1.24
Cere                461,286,644     3.25       22.59   26.20     25.22      1.86      1.05
influenza           154,808,555     4.57       11.55   6.87      6.59       3.26      1.55
coreutils           205,281,778     5.23       22.24   24.32     16.02      2.54      1.99
kernel              257,961,616     2.18       24.13   26.90     21.74      1.10      0.82
einstein.de.txt     92,758,441      0.30       10.61   31.04     4.32       0.16      0.11
einstein.en.txt     467,626,544     0.17       6.68    35.00     5.17       0.10      0.07
world_leaders       46,968,181      3.40       12.48   17.65     6.94       1.79      1.39
General Text
ENGLISH             209,715,200     40.86      33.06   37.64     28.07      31.79     21.12
XML                 209,715,200     23.64      17.84   17.12     11.35      16.67     12.07
(a)
Source              Size (bytes)    online:I   LZW     gzip -9   bzip2 -9   Re-Pair   LZMA
Repetitive Text
Escherichia_Coli    112,689,515     0.81       1.59    0.0065    0.062      26.98     10.24
Para                429,265,758     0.26       1.24    0.0017    0.016      24.43     10.02
Cere                461,286,644     0.21       1.46    0.0016    0.015      25.00     10.00
influenza           154,808,555     0.41       0.72    0.0047    0.045      25.06     10.16
coreutils           205,281,778     0.38       1.23    0.0036    0.034      25.03     10.22
kernel              257,961,616     0.23       1.48    0.0028    0.027      24.39     9.76
einstein.de.txt     92,758,441      0.15       0.61    0.0079    0.076      23.74     9.54
einstein.en.txt     467,626,544     0.033      0.42    0.0016    0.017      22.42     9.87
world_leaders       46,968,181      0.38       0.96    0.016     0.15       26.79     9.73
General Text
ENGLISH             209,715,200     2.17       1.95    0.0035    0.034      27.00     10.00
XML                 209,715,200     1.53       1.05    0.0035    0.034      25.00     10.00
(b)
Source              Size (bytes)    online:I   LZW    gzip -9   bzip2 -9   Re-Pair   LZMA
Repetitive Text
Escherichia_Coli    112,689,515     0.11       0.14   1.56      0.16       1.65      1.76
Para                429,265,758     0.11       0.16   1.57      0.15       1.21      1.35
Cere                461,286,644     0.091      0.19   1.48      0.15       1.10      1.19
influenza           154,808,555     0.082      0.13   0.38      0.35       0.67      0.35
coreutils           205,281,778     0.14       0.18   0.12      0.20       0.94      0.37
kernel              257,961,616     0.14       0.20   0.11      0.15       0.80      0.72
einstein.de.txt     92,758,441      0.12       0.14   0.12      0.33       0.63      0.15
einstein.en.txt     467,626,544     0.11       0.16   0.11      0.34       0.67      0.16
world_leaders       46,968,181      0.076      0.10   0.089     0.11       0.50      0.21
General Text
ENGLISH             209,715,200     0.21       0.22   0.18      0.16       3.42      0.92
XML                 209,715,200     0.18       0.15   0.06      0.23       1.76      0.42
(c)
Table 3. Experimental results for LCA variations. (a) Compression ratio (percentage of the compressed size over the text size); (b) Main memory consumption (fraction of the text size); (c) Compression time (seconds per 1 MiB).
Source              Size (bytes)    online:I   online:N   offline:I   offline:N   fast:I   fast:N
Repetitive Text
Escherichia_Coli    112,689,515     12.43      24.58      12.21       24.19       12.21    24.19
Para                429,265,758     4.48       8.69       4.39        8.53        4.39     8.53
Cere                461,286,644     3.25       6.39       3.17        6.24        3.17     6.24
influenza           154,808,555     4.57       8.99       4.48        8.83        4.48     8.83
coreutils           205,281,778     5.23       10.05      5.22        10.05       5.22     10.05
kernel              257,961,616     2.18       4.17       2.17        4.16        2.17     4.16
einstein.de.txt     92,758,441      0.30       0.58       0.30        0.57        0.30     0.57
einstein.en.txt     467,626,544     0.17       0.33       0.17        0.33        0.17     0.33
world_leaders       46,968,181      3.40       6.70       3.41        6.71        3.41     6.71
General Text
ENGLISH             209,715,200     40.86      79.37      40.41       78.53       40.41    78.54
XML                 209,715,200     23.64      45.50      23.84       45.85       23.84    45.85
(a)
Source              Size (bytes)    online:I   online:N   offline:I   offline:N   fast:I   fast:N
Repetitive Text
Escherichia_Coli    112,689,515     0.81       0.81       0.51        0.30        4.37     4.37
Para                429,265,758     0.26       0.26       0.17        0.078       4.15     4.15
Cere                461,286,644     0.21       0.21       0.13        0.036       4.09     4.09
influenza           154,808,555     0.41       0.41       0.20        0.095       4.09     4.09
coreutils           205,281,778     0.38       0.38       0.22        0.08        4.16     4.16
kernel              257,961,616     0.23       0.23       0.098       0.057       4.07     4.07
einstein.de.txt     92,758,441      0.15       0.15       0.15        0.15        4.07     4.07
einstein.en.txt     467,626,544     0.032      0.032      0.031       0.031       3.81     3.81
world_leaders       46,968,181      0.38       0.38       0.31        0.31        4.20     4.20
General Text
ENGLISH             209,715,200     2.17       2.17       1.51        0.58        6.00     6.00
XML                 209,715,200     1.53       1.53       0.92        0.26        5.54     5.54
(b)
Source              Size (bytes)    online:I   online:N   offline:I   offline:N   fast:I   fast:N
Repetitive Text
Escherichia_Coli    112,689,515     0.11       0.12       0.11        0.11        0.086    0.083
Para                429,265,758     0.11       0.12       0.099       0.11        0.075    0.074
Cere                461,286,644     0.091      0.10       0.095       0.10        0.072    0.071
influenza           154,808,555     0.082      0.10       0.085       0.099       0.065    0.064
coreutils           205,281,778     0.14       0.16       0.13        0.13        0.10     0.10
kernel              257,961,616     0.14       0.16       0.12        0.12        0.098    0.098
einstein.de.txt     92,758,441      0.12       0.11       0.11        0.11        0.086    0.086
einstein.en.txt     467,626,544     0.11       0.12       0.11        0.11        0.087    0.087
world_leaders       46,968,181      0.076      0.084      0.078       0.079       0.060    0.059
General Text
ENGLISH             209,715,200     0.21       0.16       0.18        0.17        0.15     0.14
XML                 209,715,200     0.18       0.14       0.16        0.15        0.13     0.12
(c)
