Article

Skew Convolutional Codes

Vladimir Sidorenko, Wenhui Li, Onur Günlü and Gerhard Kramer
1 Institute for Communications Engineering, Technical University of Munich, 80333 München, Germany
2 Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
3 Information Theory and Applications Chair, Technical University of Berlin, 10623 Berlin, Germany
* Author to whom correspondence should be addressed.
Entropy 2020, 22(12), 1364; https://doi.org/10.3390/e22121364
Submission received: 30 October 2020 / Revised: 25 November 2020 / Accepted: 30 November 2020 / Published: 2 December 2020
(This article belongs to the Special Issue Information Theory for Channel Coding)

Abstract

A new class of convolutional codes, called skew convolutional codes, that extends the class of classical fixed convolutional codes, is proposed. Skew convolutional codes can be represented as periodic time-varying convolutional codes but have a description as compact as fixed convolutional codes. Designs of generator and parity check matrices, encoders, and code trellises for skew convolutional codes and their duals are shown. For memoryless channels, one can apply Viterbi or BCJR decoding algorithms, or a dualized BCJR algorithm, to decode skew convolutional codes.

1. Introduction

Convolutional codes were introduced by Elias in 1955 [1]. With the discovery that convolutional codes can be decoded with Fano sequential decoding [2], Massey threshold decoding [3], and, above all, Viterbi decoding [4], they became quite widespread in practice. Convolutional codes are still widely used in telecommunications, e.g., in Turbo codes [5] and in the WiFi IEEE 802.11 standard [6], in cryptography [7], etc.
The most common convolutional codes are binary; however, communication with higher orders of modulation [8] or streaming of data [9] requires non-binary convolutional codes. It is known that periodic time-varying convolutional codes improve the free distance and weight distribution over fixed convolutional codes; see, e.g., Mooser [10] and Lee [11]. This motivates introducing a new class of periodic time-varying non-binary convolutional codes, which we call skew convolutional codes. These codes are based on the non-commutative ring of skew polynomials over finite fields and on the skew field of their fractions.
Block codes based on skew polynomials were studied by various authors; see, e.g., publications of Gabidulin [12], Boucher and Ulmer [13,14], Martínez-Peñas [15], Gluesing-Luerssen [16], Abualrub, Ghrayeb, Aydin, and Siap [17].
Convolutional codes are non-block linear codes over a finite field, but it can be advantageous to treat them as block codes over certain infinite fields; we will use both approaches. Classical convolutional codes are described by usual polynomials: the product of two polynomials corresponds to the convolution of their coefficient vectors, and this gives fixed-in-time classical convolutional codes. We replace usual polynomials by skew polynomials to define the new codes. The product of skew polynomials corresponds to a skew convolution of their coefficients, which can be obtained by varying elements in the usual convolution. In this way, we obtain time-varying convolutional codes.
Our goal is to define and to give a first encounter with skew convolutional codes. In Section 2, we define skew convolutional codes. In Section 3, we obtain generator matrices and encoders for skew codes and show that skew codes are equivalent to time-varying convolutional codes. Some useful properties of skew codes are considered in Section 4. Section 5 introduces dual skew convolutional codes. Trellis decoding of skew codes is considered in Section 6. Section 7 concludes the paper.

2. Skew Convolutional Codes

2.1. Skew Polynomials and Fractions

Consider a field $F$ and an automorphism $\theta$ of the field. Later on, we will use the finite field $F = \mathbb{F}_{q^m}$, where $q$ is a prime power, with the Frobenius automorphism
$$\theta(a) = a^q, \quad a \in F. \qquad (1)$$
The composition of automorphisms is denoted by $\theta(\theta(a)) = \theta^2(a)$, and, for any integer $i$, we have $\theta^i(a) = \theta(\theta^{i-1}(a))$. The identity automorphism $\theta(a) = a$ is denoted by $\theta = \mathrm{id}$. For the automorphism (1), we have $\theta^i(a) = a^{q^i}$ for all $a \in F$, and $\theta^m = \mathrm{id}$ since $a^{q^m} = a$.
Denote by $R = F[D; \theta]$ the non-commutative ring [18] of skew polynomials in the variable $D$ over $F$ (with zero derivation), that is,
$$R = F[D; \theta] = \left\{ a(D) = a_0 + a_1 D + \dots + a_n D^n \mid a_i \in F \text{ and } n \in \mathbb{N} \right\}.$$
Skew polynomials look like usual polynomials from $F[D]$, with the coefficients $a_i$ written to the left of the powers of the variable $D$. Addition in $R$ is the same as for usual polynomials from $F[D]$. Multiplication is defined by the basic rule
$$D a = \theta(a) D \qquad (2)$$
and is extended to all elements of $R$ by associativity and distributivity; see Example 1 below. The ring $R$ has a unique left skew field of fractions $Q$, from which it inherits its linear algebra properties; see, e.g., [19] for more details.
Example 1.
To demonstrate our results, we use the field $F_Q = \mathbb{F}_{q^m} = \mathbb{F}_{2^2}$, $q = 2$, $m = 2$, with automorphism $\theta(a) = a^q = a^2$ for all $a \in \mathbb{F}_{2^2}$. The field $\mathbb{F}_{2^2}$ consists of the elements $\{0, 1, \alpha, \alpha^2\}$, where the primitive element $\alpha$ satisfies $\alpha^2 + \alpha + 1 = 0$, and the following relations hold:
$$\alpha^2 = \alpha + 1, \quad \alpha^3 = 1, \quad \alpha^4 = \alpha, \quad \text{and, for } i \in \mathbb{Z}, \quad \theta^i = \begin{cases} \theta & \text{if } i \text{ is odd}, \\ \mathrm{id} & \text{if } i \text{ is even}. \end{cases}$$
Let $a(D) = 1 + \alpha D$ and $b(D) = \alpha^2 + D$, where $a(D), b(D) \in R$. Using (2), we compute the product $ab$ as
$$a(D) b(D) = (1 + \alpha D)(\alpha^2 + D) = (\alpha^2 + D) + \alpha \theta(\alpha^2) D + \alpha D^2 = \alpha^2 + \alpha D + \alpha D^2,$$
while the product $ba$ is
$$b(D) a(D) = (\alpha^2 + D)(1 + \alpha D) = (\alpha^2 + D) + \alpha^3 D + \theta(\alpha) D^2 = \alpha^2 + \alpha^2 D^2.$$
In this example, we see that $a(D) b(D) \neq b(D) a(D)$.
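The rule (2) is easy to exercise numerically. The following minimal Python sketch (our own illustration; the integer encoding of field elements and all function names are our assumptions, not from the paper) represents $\mathbb{F}_4 = \{0, 1, \alpha, \alpha^2\}$ by the integers 0, 1, 2, 3 and reproduces both products above.

```python
# GF(4) = {0, 1, alpha, alpha^2} encoded as the integers 0, 1, 2, 3.
# Addition is XOR of the 2-bit representations; MUL is the multiplication table.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
mul = lambda x, y: MUL[x][y]
frob = lambda x, i=1: x if i % 2 == 0 else mul(x, x)  # theta^i(x) = x^(2^i); theta^2 = id

def skew_mul(a, b):
    """Product a(D) b(D) in R = F[D; theta], polynomials as coefficient lists
    [c0, c1, ...] (lowest degree first). By rule (2), (a_i D^i)(b_j D^j)
    contributes a_i * theta^i(b_j) to the coefficient of D^(i+j)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= mul(ai, frob(bj, i))
    return out

a = [1, 2]              # a(D) = 1 + alpha D
b = [3, 1]              # b(D) = alpha^2 + D
print(skew_mul(a, b))   # [3, 2, 2]: alpha^2 + alpha D + alpha D^2
print(skew_mul(b, a))   # [3, 0, 3]: alpha^2 + alpha^2 D^2, so ab != ba
```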
The left skew field $Q$ consists of the fractions $\frac{b(D)}{a(D)} = a^{-1}(D) b(D) \in Q$ for all $a(D), b(D) \in R$, $a(D) \neq 0$. Every fraction can be expanded as a left skew Laurent series in increasing powers of $D$. In our example, the inverse element $\frac{1}{a(D)} = a^{-1}(D)$ is expanded using long division as follows:
$$a^{-1}(D) = 1 + \alpha D + D^2 + \alpha D^3 + D^4 + \dots$$
with $a^{-1}(D) a(D) = a(D) a^{-1}(D) = 1$. We can expand the left fraction $\frac{b(D)}{a(D)} = a^{-1}(D) b(D)$ by long left division, or equivalently by left multiplication of $b(D)$ by $a^{-1}(D)$, and get
$$\frac{b(D)}{a(D)} = a^{-1}(D) b(D) = \alpha^2 + \alpha D + D^2 + \alpha D^3 + D^4 + \dots.$$
Notice that the right fraction $b(D) a^{-1}(D) = \alpha^2$, since $b(D) = \alpha^2 + D = \alpha^2 (1 + \alpha D) = \alpha^2 a(D)$.
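Numerically, the left expansion can be computed term by term: comparing the coefficients of $D^t$ in $a(D) s(D) = b(D)$ gives $s_t = a_0^{-1} \big( b_t - \sum_{i \ge 1} a_i \theta^i(s_{t-i}) \big)$. A small sketch of this long division, reusing mul and frob from the sketch above (gf4_inv is our helper):

```python
gf4_inv = {1: 1, 2: 3, 3: 2}   # multiplicative inverses in GF(4)

def left_expand(a, b, nterms):
    """First nterms coefficients of the left fraction a(D)^{-1} b(D),
    obtained by solving a(D) s(D) = b(D) coefficient by coefficient."""
    s = []
    for t in range(nterms):
        acc = b[t] if t < len(b) else 0
        for i in range(1, min(t, len(a) - 1) + 1):
            acc ^= mul(a[i], frob(s[t - i], i))   # subtract a_i * theta^i(s_{t-i})
        s.append(mul(gf4_inv[a[0]], acc))
    return s

print(left_expand([1, 2], [1], 5))     # [1, 2, 1, 2, 1]: 1 + aD + D^2 + aD^3 + D^4
print(left_expand([1, 2], [3, 1], 5))  # [3, 2, 1, 2, 1]: a^2 + aD + D^2 + aD^3 + D^4
```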

2.2. Definition of Skew Convolutional Codes

Much of linear algebra can be generalized from vector spaces over a field to (either left or right) modules over the skew field $Q$. Indeed, it is shown in [19] that any left $Q$-module $C$ is free, i.e., it has a basis, and any two bases of $C$ have the same cardinality, which is the dimension of $C$.
Definition 1
(Skew convolutional code). Given an automorphism $\theta$ of the field $F$, a skew convolutional $[n, k]$ code $C$ over the field $F$ is a left submodule of dimension $k$ of the free module $Q^n$.
The elements of the code $C$ are called its codewords. A codeword $v(D) = \left( v^{(1)}(D), \dots, v^{(n)}(D) \right)$ is an $n$-tuple over $Q$, where every component $v^{(i)}(D)$ is a fraction of skew polynomials from $R$. The code $C$ is $F = \mathbb{F}_{q^m}$-linear. Let the weight of every codeword be defined by some selected metric. The free distance $d_f$ of a skew convolutional code is defined to be the minimum nonzero weight of any codeword.
For the Hamming metric, which is the most interesting for applications, the weight of a fraction $v^{(i)}(D)$ is the number of nonzero coefficients in its expansion as a left skew Laurent series from $F((D))$ in increasing powers of $D$. The weight of a codeword is the sum of the weights of its components. Another interesting metric is the sum-rank metric, which will be defined later.

2.3. Relations to Fixed Convolutional Codes

Lemma 1.
The class of skew convolutional codes includes the class of fixed (time-invariant) convolutional codes.
Proof. 
A time-invariant convolutional $[n, k]$ code $\tilde{C}$ over the field $F$ is defined as a $k$-dimensional subspace of $F(D)^n$. Take the identity automorphism $\theta = \mathrm{id}$. Then, the ring $R = F[D; \theta]$ becomes the ring $F[D]$ of usual commutative polynomials, and the skew field of fractions $Q$ becomes the field of rational functions $F(D)$. In this case, by Definition 1, the skew convolutional code $C$ coincides with the classical fixed code $\tilde{C}$. □

3. Encoding Skew Convolutional Codes

3.1. Polynomial Form of Encoding

A generator matrix of a skew convolutional $[n, k]$ code $C$ is a $k \times n$ matrix $G(D)$ over the skew field $Q$ whose rows form a basis for the code $C$. If the matrix $G(D)$ is over the ring $R$ of skew polynomials, then $G(D)$ is called a polynomial generator matrix for $C$. Every skew code $C$ has a polynomial generator matrix. Indeed, given a generator matrix $G(D)$ over the skew field of fractions $Q$, a polynomial generator matrix can be obtained by left multiplying each row of $G(D)$ by the least common left multiple of the denominators in that row. In this paper, we focus on polynomial generator matrices and the corresponding encoders.
By Definition 1, every codeword $v(D)$ of a skew code $C$ is an $n$-tuple over the skew field of fractions $Q$,
$$v(D) = \left( v^{(1)}(D), v^{(2)}(D), \dots, v^{(n)}(D) \right), \quad v^{(j)}(D) \in Q, \quad 1 \le j \le n, \qquad (3)$$
and can be written as
$$v(D) = u(D) G(D) \qquad (4)$$
where $u(D)$ is a $k$-tuple ($k$-word) over $Q$,
$$u(D) = \left( u^{(1)}(D), u^{(2)}(D), \dots, u^{(k)}(D) \right), \quad u^{(i)}(D) \in Q, \quad 1 \le i \le k, \qquad (5)$$
called an information word, and $G(D)$ is a generator matrix of $C$, $G(D) \in Q^{k \times n}$ or $G(D) \in R^{k \times n}$. Relation (4) already provides an encoder. The encoder (4) is just an encoder of a block code over $Q$, and the skew code $C$ can be considered as the set of $n$-tuples $v(D)$ over $Q$ that satisfy (4), i.e., $C = \{v(D)\}$.
However, to use the code in practice, we need to write the components of $u(D)$ and $v(D)$ as skew Laurent series, i.e., we have
$$u^{(i)}(D) = u_0^{(i)} + u_1^{(i)} D + u_2^{(i)} D^2 + \dots, \quad i = 1, \dots, k \qquad (6)$$
and
$$v^{(j)}(D) = v_0^{(j)} + v_1^{(j)} D + v_2^{(j)} D^2 + \dots, \quad j = 1, \dots, n. \qquad (7)$$
Actually, in a Laurent series, the lower (time) index of the coefficients can be a negative integer, but, in practice, the information sequence $u^{(i)}(D)$ should be causal for every component $i$, that is, the coefficients $u_t^{(i)}$ are zero for time $t < 0$. Causal information sequences should be encoded into causal code sequences; otherwise, an encoder cannot be implemented, since it would have to output code symbols before it receives an information symbol.
Denote the block of information symbols that enters the encoder at time $t = 0, 1, \dots$ by
$$u_t = \left( u_t^{(1)}, u_t^{(2)}, \dots, u_t^{(k)} \right) \in F^k. \qquad (8)$$
The block of code symbols that leaves the encoder at time $t = 0, 1, \dots$ is denoted by
$$v_t = \left( v_t^{(1)}, v_t^{(2)}, \dots, v_t^{(n)} \right) \in F^n. \qquad (9)$$
Combining (5), (6), and (8), we obtain the following information series with vector coefficients:
$$u(D) = u_0 + u_1 D + \dots + u_t D^t + \dots, \quad u(D) \in F((D))^k. \qquad (10)$$
Using (3), (7), and (9), we write a codeword as a series
$$v(D) = v_0 + v_1 D + \dots + v_t D^t + \dots, \quad v(D) \in F((D))^n. \qquad (11)$$
One can write a skew polynomial generator matrix $G(D) = \left( g_{ij}(D) \right) \in R^{k \times n}$ as a skew polynomial with matrix coefficients:
$$G(D) = G_0 + G_1 D + G_2 D^2 + \dots + G_\mu D^\mu \qquad (12)$$
where $\mu$ is the maximum degree of the polynomials $g_{ij}(D)$. The matrices $G_i$ are $k \times n$ matrices over the field $F$, and $\mu$ is called the generator matrix memory.
From (4), (10), and (11), we obtain that $v_t$ is a coefficient in the product of the skew series $u(D)$ and the skew polynomial $G(D)$, which is the following skew convolution (see Figure 1):
$$v_t = u_t \theta^t(G_0) + u_{t-1} \theta^{t-1}(G_1) + \dots + u_{t-\mu} \theta^{t-\mu}(G_\mu) \qquad (13)$$
where $u_t = 0$ for $t < 0$. This encoding rule explains the name skew convolutional code, and the code can also be seen as the set $C = \{v(D)\}$ of series $v(D)$ defined in (11).
At time $t$, the encoder receives an information block $u_t$ of $k$ symbols from $F$ and outputs the code block $v_t$ of $n$ code symbols from $F$ using (13); hence, the code rate is $R = k/n$. The encoder (13) uses $u_t$ and also the $\mu$ previous information blocks $u_{t-1}, u_{t-2}, \dots, u_{t-\mu}$, which should be stored in the encoder's memory; this is why $\mu$ is also the encoder memory.
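The skew convolution (13) is straightforward to implement. Below is a minimal sketch for the field $\mathbb{F}_4$, reusing the mul and frob helpers from the Example 1 sketch (a general $\mathbb{F}_{q^m}$ would need a full field implementation); the function name and data layout are our assumptions.

```python
def skew_encode(u_blocks, G):
    """Encode per (13): v_t = sum_{i=0}^{mu} u_{t-i} * theta^{t-i}(G_i).
    u_blocks: list of information blocks u_t, each a k-list over GF(4);
    G: list of the mu+1 matrices G_0, ..., G_mu, each a k x n list of lists."""
    k, n = len(G[0]), len(G[0][0])
    v_blocks = []
    for t in range(len(u_blocks)):
        v = [0] * n
        for i, Gi in enumerate(G):
            if t - i < 0:          # u_t = 0 for t < 0
                break
            for r in range(k):
                for c in range(n):
                    # the coefficient theta^{t-i}(G_i) varies with the time t
                    v[c] ^= mul(u_blocks[t - i][r], frob(Gi[r][c], t - i))
        v_blocks.append(v)
    return v_blocks
```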
The coefficients $\theta^{t-i}(G_i)$, $i = 0, 1, \dots, \mu$, in the encoder (13) depend on the time $t$. Hence, a skew convolutional code is a time-varying classical convolutional code. Denote
$$\tau = \min \left\{ i > 0 : \theta^i(G_j) = G_j \ \text{for all} \ j = 0, 1, \dots, \mu \right\}. \qquad (14)$$
For the field $F = \mathbb{F}_{q^m}$, we have $\theta^m = \theta^0$; hence, the coefficients in (13) are periodic with period $\tau \le m$, and the skew convolutional code is periodic with period $\tau \le m$. The period $\tau$ can be less than $m$ if all the matrices $G_0, \dots, G_\mu$ are over a subfield of $\mathbb{F}_{q^m}$.

3.2. Scalar Form of Encoding

The input of the encoder can also be written as an information sequence of $k$-blocks over $F$,
$$u = u_0, u_1, u_2, \dots, u_t, \dots \qquad (15)$$
and the output as a code sequence of $n$-blocks over $F$,
$$v = v_0, v_1, v_2, \dots, v_t, \dots. \qquad (16)$$
Then, the encoding rule (13) can be written in scalar form as
$$v = u G \qquad (17)$$
with the semi-infinite scalar generator block matrix
$$G = \begin{pmatrix} G_0 & G_1 & G_2 & \cdots & G_\mu & & & \\ & \theta(G_0) & \theta(G_1) & \cdots & \theta(G_{\mu-1}) & \theta(G_\mu) & & \\ & & \theta^2(G_0) & \cdots & \theta^2(G_{\mu-2}) & \theta^2(G_{\mu-1}) & \theta^2(G_\mu) & \\ & & & \ddots & & & & \ddots \end{pmatrix}. \qquad (18)$$
Thus, a skew convolutional code can be equivalently represented in scalar form as the set $C = \{v\}$ of sequences $v$ defined in (16) that satisfy (17).
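For finite information sequences, the scalar form (17) with the matrix (18) can be checked against the polynomial encoder directly. A sketch that builds a truncated version of (18), reusing mul and frob from the Example 1 sketch (the function name is ours):

```python
def scalar_G(G, nblocks):
    """Truncation of the semi-infinite matrix (18): block row t carries
    theta^t(G_0), ..., theta^t(G_mu), starting at block column t."""
    k, n = len(G[0]), len(G[0][0])
    rows = [[0] * (n * nblocks) for _ in range(k * nblocks)]
    for t in range(nblocks):
        for i, Gi in enumerate(G):
            if t + i >= nblocks:
                break
            for r in range(k):
                for c in range(n):
                    rows[t * k + r][(t + i) * n + c] = frob(Gi[r][c], t)
    return rows

# v = u G for the code of Example 2 below, with u = 1, 0, 0, 1:
mat = scalar_G([[[1, 2]], [[2, 3]]], 4)
u = [1, 0, 0, 1]
v = [0] * 8
for r in range(4):
    for c in range(8):
        v[c] ^= mul(u[r], mat[r][c])
print(v)   # [1, 2, 2, 3, 0, 0, 1, 3]: blocks (1,a), (a,a^2), (0,0), (1,a^2)
```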

3.3. Relations between Skew and Classical Convolutional Codes

In the case of the identity automorphism, $\theta(a) = a$, the scalar generator matrix of the skew code becomes
$$G = \begin{pmatrix} G_0 & G_1 & G_2 & \cdots & G_\mu & & & \\ & G_0 & G_1 & \cdots & G_{\mu-1} & G_\mu & & \\ & & G_0 & \cdots & G_{\mu-2} & G_{\mu-1} & G_\mu & \\ & & & \ddots & & & & \ddots \end{pmatrix} \qquad (19)$$
which is a generator matrix of a fixed convolutional code [20]. For fixed convolutional codes, polynomial generator matrices with $G_0$ of full rank $k$ are of particular interest [20] (Chapter 3). The skew convolutional codes use the following nice property: if $G_0$ has full rank, then $\theta^i(G_0)$ has full rank as well, for all $i$.
Time-varying classical convolutional codes are defined by the following generator matrices in [20]:
$$G_{\mathrm{var}} = \begin{pmatrix} G_{0,0} & G_{1,1} & \cdots & G_{\mu,\mu} & & \\ & G_{1,0} & \cdots & G_{\mu,\mu-1} & G_{\mu+1,\mu} & \\ & & \ddots & G_{\mu,0} & G_{\mu+1,\mu-1} & \ddots \\ & & & & G_{\mu+1,0} & \ddots \end{pmatrix} \qquad (20)$$
where the first index $t$ in $G_{t,i}$ is the time index. The code defined by the generator matrix $G_{\mathrm{var}}$ in (20) is called $\tau$-periodic if the columns $\left( G_{t,\mu}^T, \dots, G_{t,0}^T \right)^T$, $t \ge \mu$, repeat with period $\tau$.
Lemma 2.
A scalar generator matrix (18) of a skew code can be written in the following equivalent form:
$$G = \begin{pmatrix} \tilde{G}_0 & \theta(\tilde{G}_1) & \cdots & \theta^\mu(\tilde{G}_\mu) & & \\ & \theta(\tilde{G}_0) & \cdots & \theta^\mu(\tilde{G}_{\mu-1}) & \theta^{\mu+1}(\tilde{G}_\mu) & \\ & & \ddots & \theta^\mu(\tilde{G}_0) & \theta^{\mu+1}(\tilde{G}_{\mu-1}) & \ddots \\ & & & & \theta^{\mu+1}(\tilde{G}_0) & \ddots \end{pmatrix}. \qquad (21)$$
Proof. 
The statement follows from the change of variables $G_i = \theta^i(\tilde{G}_i)$ for $i = 1, 2, \dots, \mu$. □
From (14), (20), and (21), we see again that a skew code defined by a generator matrix (21) is a $\tau$-periodic classical convolutional code. Thus, above we proved the following theorem.
Theorem 1.
Given a field $F = \mathbb{F}_{q^m}$ with automorphism $\theta$ in (1), any skew convolutional $[n, k]$ code $C$ over $F$ is equivalent to a periodic time-varying (classical) convolutional $[n, k]$ code over $F$, with period $\tau \le m$ in (14). If $G(D)$ is a skew polynomial generator matrix (12) of the code $C$, then the scalar generator matrix $G$ of the time-varying code is given by (18) or (21).
Not every periodic classical convolutional code can be represented as a skew code. Indeed, e.g., the submatrix $G_{1,0}$ in (20) can be selected independently of $G_{0,0}$, while the corresponding submatrix $\theta(\tilde{G}_0)$ in (21) is completely determined by $\tilde{G}_0$. Hence, the class of skew convolutional codes is a proper subclass of the class of periodic classical convolutional codes.
Given the field $\mathbb{F}_{q^m}$, the automorphism $\theta$ in (1), and the code memory $\mu$, an $[n, k]$ skew convolutional code is defined by a generator matrix $G$ in (21). To specify the matrix, we should fix the elements of the $\mu + 1$ matrices $\tilde{G}_0, \dots, \tilde{G}_\mu$ of size $k \times n$. Hence, we should define $(\mu + 1) k n$ field elements. Since a classical convolutional code corresponds to the identity automorphism $\theta = \mathrm{id}$, the descriptions of skew and classical codes require the same number of field elements.
The number of skew $[n, k]$ convolutional codes over $F_Q = \mathbb{F}_{q^m}$ of memory $\mu$ with a fixed automorphism $\theta(a) = a^q$ has order $q^{(\mu+1) m k n}$. The number of $\tau$-periodic classical convolutional codes has order $q^{(\mu+1) m k n \tau}$, which is much larger. As a result, the search for good periodic time-varying convolutional codes is much simpler in the class of skew codes than in the class of periodic classical codes. The search among skew convolutional codes has the same complexity as the search among fixed classical codes.
How many more skew codes can we obtain by considering all possible automorphisms? Write $q = p^s$, where $p$ is the field characteristic; then $\mathbb{F}_{q^m} = \mathbb{F}_{p^{sm}} = \mathbb{F}_{p^M}$, i.e., our field $F_Q$ is an $M$-extension of the prime field $\mathbb{F}_p$. The parameter $q = p^s$ should be selected such that $\mathbb{F}_q$ is a subfield of $\mathbb{F}_{p^M}$; hence, $s$ should divide $M$. Denote by $\delta(M)$ the number-of-divisors function, that is, the number of divisors $i$ of $M$, $1 \le i \le M$. Given the field $F_Q = \mathbb{F}_{p^M}$, we can select $q = p^i$ and the automorphism $\theta(a) = a^q$ in $\delta(M)$ ways.
Lemma 3.
For a fixed field $F_Q = \mathbb{F}_{p^M}$, there are $\delta(M)$ subclasses of skew convolutional codes, each defined by a fixed automorphism $\theta$.
In Example 2, we have $F_Q = \mathbb{F}_{2^2}$, i.e., $p = q = 2$, $s = 1$, $m = 2$, and $M = sm = 2$ with divisors 1 and 2. For $i = 1$, we have $q = p^i = 2$ and $\theta(a) = a^2$, as considered in Example 2. For $i = 2$, we have $q = p^2 = 4$, which corresponds to $\theta = \mathrm{id}$ and gives a fixed classical convolutional code. For the field $\mathbb{F}_{2^6}$, we have $\delta(6) = 4$, and there are four subclasses of skew codes with $q = 2^1, 2^2, 2^3$, and $q = 2^6$ (for fixed codes).

4. Properties of Skew Convolutional Codes

4.1. Extension of Fixed Convolutional Codes

To show properties of skew convolutional codes, we will use the following example.
Example 2.
Consider a $[2, 1]$ skew convolutional code $\hat{C}$ over the field $F_Q = \mathbb{F}_{q^m} = \mathbb{F}_{2^2}$ with automorphism $\theta(a) = a^q = a^2$ (see Example 1). Let the generator matrix of the code $\hat{C}$ in polynomial form be
$$G(D) = (1 + \alpha D, \ \alpha + \alpha^2 D) = G_0 + G_1 D \qquad (22)$$
where
$$G_0 = (1, \alpha), \quad G_1 = (\alpha, \alpha^2). \qquad (23)$$
The generator matrix in scalar form (18) is
$$G = \begin{pmatrix} 1 & \alpha & \alpha & \alpha^2 & & & & \\ & & 1 & \alpha^2 & \alpha^2 & \alpha & & \\ & & & & 1 & \alpha & \alpha & \alpha^2 \\ & & & & & & \ddots & \end{pmatrix}. \qquad (24)$$
Here, $\mu = 1$; hence, it is a unit memory code. The encoding rule is $v = u G$, or, from (13),
$$v_t = u_t \theta^t(G_0) + u_{t-1} \theta^{t-1}(G_1), \quad t = 0, 1, \dots. \qquad (25)$$
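Using the skew_encode sketch from Section 3.1 (elements $0, 1, \alpha, \alpha^2$ encoded as the integers 0, 1, 2, 3), the rule (25) can be exercised directly:

```python
G = [[[1, 2]],                   # G_0 = (1, alpha)
     [[2, 3]]]                   # G_1 = (alpha, alpha^2)
u = [[1], [0], [0], [1], [0]]    # information 1, 0, 0, 1 plus one zero tail block
print(skew_encode(u, G))
# [[1, 2], [2, 3], [0, 0], [1, 3], [3, 2]], i.e., the codeword
# (1, a), (a, a^2), (0, 0), (1, a^2), (a^2, a) used in the proof of Theorem 2 below
```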
In some applications, it is preferable to have a generator matrix in systematic form. For our example, a systematic fractional generator matrix can be obtained by left division of the components of $G(D)$ by its first component:
$$G_{\mathrm{syst}}(D) = \left( 1, \ \frac{\alpha + \alpha^2 D}{1 + \alpha D} \right) = \left( 1, \ (1 + \alpha D)^{-1} (\alpha + \alpha^2 D) \right). \qquad (26)$$
Let us show that the matrices $G_{\mathrm{syst}}(D)$ and $G(D)$ in (22) encode the same code $\hat{C}$. Denote $g_1(D) = 1 + \alpha D$. Then, for any information sequence $u(D) \in Q$, we have the codeword
$$u(D) G(D) = u(D) g_1(D) g_1^{-1}(D) G(D) = u(D) g_1(D) G_{\mathrm{syst}}(D) = u'(D) G_{\mathrm{syst}}(D)$$
and the statement follows, since there is a one-to-one mapping between $u(D)$ and $u'(D) = u(D) g_1(D)$; hence, both information sequences $u(D)$ and $u'(D)$ run over all possible causal sequences.
Theorem 2.
The class of skew convolutional codes extends the class of fixed convolutional codes.
Proof. 
By Lemma 1, the class of fixed codes is included in the class of skew convolutional codes. Hence, it is sufficient to show that there exists a codeword of a skew convolutional $[n, k]$ code that cannot belong to any fixed $[n, k]$ code with the same memory. Indeed, consider the unit memory ($\mu = 1$) skew $[2, 1]$ code $\hat{C}$ defined by the generator matrix (24). By encoding the information sequence $u = 1, 0, 0, 1$, we obtain the codeword $v = v_0, v_1, v_2, v_3, v_4 = (1, \alpha), (\alpha, \alpha^2), (0, 0), (1, \alpha^2), (\alpha^2, \alpha) \in \hat{C}$.
Suppose, for the sake of contradiction, that the codeword $v$ belongs to a fixed unit memory $[2, 1]$ convolutional code $C$. The general form (19) of a generator matrix of a unit memory fixed $[2, 1]$ code $C$ is
$$G = \begin{pmatrix} a & b & c & d & & & & \\ & & a & b & c & d & & \\ & & & & a & b & c & d \\ & & & & & & \ddots & \end{pmatrix}$$
where $a, b, c, d \in \mathbb{F}_{2^2}$. Assume that the word $v = (e, f, g, h, \dots) G \in C$, where $e, f, g, h \in \mathbb{F}_{2^2}$. From $v_2 = f(c, d) + g(a, b) = (0, 0)$, it follows that either (i) $f = g = 0$, or (ii) the vectors $(c, d)$ and $(a, b)$ are $\mathbb{F}_{2^2}$-linearly dependent. In case (i), from $v_0 = e(a, b) = (1, \alpha)$ and $v_3 = h(a, b) = (1, \alpha^2)$ we get $e^{-1}(1, \alpha) = h^{-1}(1, \alpha^2)$, which is impossible since the vectors $(1, \alpha)$ and $(1, \alpha^2)$ are linearly independent. In case (ii), linear combinations of the linearly dependent vectors $(c, d)$ and $(a, b)$ should give the two linearly independent vectors $v_0 = (1, \alpha)$ and $v_3 = (1, \alpha^2)$, which is impossible as well. □

4.2. Canonical Encoders and Generator Matrices

The encoder in controller canonical form [20] for the code $\hat{C}$ of Example 2 with generator matrix (22) is shown in Figure 2a for even $t$ and in Figure 2b for odd $t$. The encoder has one shift register, since $k = 1$. There is one $Q$-ary memory element in the shift register, shown as a rectangle, where $Q = q^m = 4$ is the order of the field. We need only one memory element, since the maximum degree of the entries of $G(D)$, which consists of a single row in our example, is 1. A large circle means multiplication by the coefficient shown inside.
In the general case of a $k \times n$ matrix $G(D)$, we define the degree $\nu_i$ of its $i$th row as the maximum degree of its components, and the external degree $\nu$ of $G(D)$ as the sum of its row degrees [21]. The controller-canonical-form encoder of $G(D)$ over $F_Q$ has $k$ shift registers, where the $i$th register has $\nu_i$ memory elements, and the total number of $Q$-ary memory elements in the encoder is $\nu$. Different generator matrices for a code $C$ may have different external degrees.
Definition 2.
Among all skew polynomial generator matrices (PGMs) for a given skew convolutional code $C$, those whose external degree is as small as possible are called canonical PGMs. This minimal external degree is called the degree or overall constraint length of the code $C$ and is denoted by $\nu = \deg C$.

4.3. Code Trellises

For the code $\hat{C}$ of Example 2, the code trellis is shown in Figure 3. The trellis consists of sections periodically repeated with period $\tau = m = 2$. Every section has $Q^\nu = 4^1 = 4$ states, labeled by elements of the field $F_Q$. In the $t$-th section, $t = 0, 1, \dots$, an edge connects the states $u_{t-1}$ and $u_t$ and is labeled by the code block $v_t$. Every codeword is represented by a path in the trellis that starts from the zero state 0 and goes to the right. The edge label $v_t$ is computed according to the encoding rule (25) as follows:
$$v_t = \begin{cases} u_{t-1}(\alpha, \alpha^2) + u_t(1, \alpha^2) & \text{for odd } t, \\ u_{t-1}(\alpha^2, \alpha) + u_t(1, \alpha) & \text{for even } t \end{cases} \qquad (27)$$
where we assume that $u_{-1} = 0$, i.e., the initial state of the shift register is 0.

4.4. Code Distances

There are two important characteristics of a convolutional code: the free distance $d_f$ and the slope $\sigma$ of the increase of the active burst distance, defined as follows [20]. The weight of a branch labeled by a vector $v_t$ is defined to be the weight $w(v_t)$ of $v_t$. The weight of a path is the sum of its branch weights. A path in the trellis that diverges from the zero state, does not use weight-0 edges from the zero state to the zero state, and returns to the zero state after $\ell$ edges is called a loop of length $\ell$, or an $\ell$-loop.
The $\ell$th order active burst distance $d_\ell^{\mathrm{burst}}$ is defined [20] to be the minimum weight of $\ell$-loops in the code trellis. The slope is defined as $\sigma = \lim_{\ell \to \infty} d_\ell^{\mathrm{burst}} / \ell$. The free distance is $d_f = \min_\ell d_\ell^{\mathrm{burst}}$.
Theorem 3
(Singleton bound). The free Hamming distance of an $[n, k]$ skew convolutional code $C$ of degree $\nu = \deg C$ is upper bounded as follows:
$$d_f \le (n - k) \left( \left\lfloor \frac{\nu}{k} \right\rfloor + 1 \right) + \nu + 1. \qquad (28)$$
Proof. 
We adapted the proof given in [22] for time-invariant finite-state codes. The trellis of the code $C$ is time-varying, with $Q^\nu$ states at every level. Consider the $Q^{\ell k}$ information sequences $u_0, \dots, u_{\ell - 1}$ of length $\ell$ blocks. For each of them, the code path in the trellis starts at the state 0 and terminates in one of $Q^\nu$ states. From the pigeonhole principle, it follows that there must be at least $Q^{\ell k - \nu} = Q^K$ of these paths that have the same final state. The code sequences corresponding to these paths can be thought of as a block code of length $N = \ell n$ with at least $Q^K$ codewords. We should select $\ell$ such that $K = \ell k - \nu > 0$. The Hamming distance $d$ of the block code is upper bounded by the Singleton bound $d \le N - K + 1 = \ell(n - k) + \nu + 1$. On the other hand, $d$ is an upper bound on the free Hamming distance $d_f$ of the code $C$. Since this is true for all $\ell > \nu / k$, we have
$$d_f \le \min_{\ell : \ \ell > \nu / k} \ \ell (n - k) + \nu + 1$$
which gives the upper bound (28). □
To obtain (28), we used the Singleton bound for block codes; therefore, the bound (28) is a Singleton-type bound for skew convolutional codes. In fact, this bound and its proof are valid for arbitrary (also for nonlinear) time-varying trellis codes. In [23], codes that reach the Singleton-type bound are called maximum distance separable (MDS) codes. Any other upper bound for the Hamming distance of block codes can be used to obtain another upper bound for $d_f$ of skew convolutional codes (also for time-varying trellis codes). Using the Plotkin bound for block codes, we obtain the following bound.
Corollary 1
(Heller bound). The free Hamming distance of an $[n, k]$ skew convolutional code $C$ over $F_Q$ of degree $\nu = \deg C$ and memory $\mu$ is upper bounded as follows:
$$d_f \le \min_{i \in \hat{\mathbb{N}}} \left\lfloor \frac{n (\mu + i) (Q - 1) Q^{k(\mu + i) - \nu - 1}}{Q^{k(\mu + i) - \nu} - 1} \right\rfloor \qquad (29)$$
where $\hat{\mathbb{N}} = \{1, 2, \dots\}$ if $k \mu = \nu$ and $\hat{\mathbb{N}} = \{0, 1, \dots\}$ otherwise.
The bound is named after Heller, since it was obtained for fixed binary convolutional codes in 1968; see [20,24]. The bound (29) is valid for time-varying (nonlinear or linear) trellis codes.
In the Hamming metric, the upper bound
$$\sigma \le n - k \qquad (30)$$
on the slope $\sigma$ was obtained in [25] for fixed binary convolutional codes. We conjecture that this bound is true also for non-binary time-varying convolutional codes and, hence, for skew convolutional codes.
Another interesting metric for convolutional codes is the sum-rank metric, which can be applied to multi-shot network coding [26]. The metric is defined as follows. The rank weight $w_R(v_t)$ of a vector $v_t$ over the extension field $\mathbb{F}_{q^m}$ is the rank of the vector over the base field $\mathbb{F}_q$, i.e., $w_R(v_t)$ is the maximum number of $\mathbb{F}_q$-linearly independent components of $v_t$. The sum-rank weight of a sequence $v$ in (16) is the sum of the weights of its blocks $v_t$. The sum-rank distance between two sequences is the weight of their difference.
The rank of a vector $v_t \in \mathbb{F}_{q^m}^n$ is upper bounded by the Hamming weight $w_H(v_t)$ of the vector, i.e.,
$$w_R(v_t) \le w_H(v_t). \qquad (31)$$
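For $\mathbb{F}_4$ over $\mathbb{F}_2$, the rank weight is simple to compute: each component is a 2-bit vector over $\mathbb{F}_2$ (in our integer encoding, bit 0 is the 1-part and bit 1 is the $\alpha$-part), and $w_R$ is the $\mathbb{F}_2$-rank of these bit vectors. A small sketch (our own helper, using a standard XOR basis):

```python
def rank_weight(v):
    """w_R of a GF(4) vector: the GF(2)-rank of the components' 2-bit expansions."""
    basis = []
    for x in v:
        for b in basis:
            x = min(x, x ^ b)   # reduce x against the current basis
        if x:
            basis.append(x)
    return len(basis)

print(rank_weight([1, 2]))   # 2: the components 1 and alpha are F_2-independent
print(rank_weight([1, 1]))   # 1, while the Hamming weight is 2, illustrating (31)
```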
Hence, any upper bound for the Hamming metric is an upper bound for the sum-rank metric, and, from Theorem 3, we have the following corollary.
Corollary 2.
The free sum-rank distance $d_f$ of an $[n, k]$ skew convolutional code $C$ of degree $\nu = \deg C$ is upper bounded by (28) or by (29).
On the other hand, from (31), we obtain the following lemma.
Lemma 4.
Let the distance of a code $C$ in the sum-rank metric be $d$; then, in the Hamming metric, the code distance is at least $d$.
We next compute the free distance, the active burst distances, and the slope of the code $\hat{C}$ from Example 2, in the Hamming and in the sum-rank metrics.
Lemma 5.
In the sum-rank metric, the skew convolutional code $\hat{C}$ defined by $G(D)$ in (22) has the $\ell$-th active burst distance $d_\ell^{\mathrm{burst}} = \ell + 2$ for $\ell = 2, 3, \dots$, the slope of the active distance is $\sigma = 1$, and the free distance is $d_f = 4$.
Proof. 
For this code, the shortest length of a loop is $\ell = 2$; hence, we should consider loops of length $\ell = 2, 3, \dots$. It follows from (27) that the weight $w_R(v_t) = \operatorname{rank} v_t$ of a branch in the code trellis that diverges from or merges into the zero state is 2. A branch connecting two nonzero states has weight at least 1. Indeed, for odd $t$, the branch label $v_t$ in (27) is a linear combination of the vectors $(\alpha, \alpha^2)$ and $(1, \alpha^2)$, which are $\mathbb{F}_{q^m}$-linearly independent, and $v_t$ cannot be 0 for nonzero coefficients $u_{t-1}, u_t$. The same is true for even $t$. Hence, $d_\ell^{\mathrm{burst}} \ge \ell + 2$. On the other hand, the path corresponding to the information sequence $u_0 = u_1 = \dots = u_{\ell - 1} \neq 0$, $u_\ell = 0$, is an $\ell$-loop of weight $\ell + 2$. Hence, $d_\ell^{\mathrm{burst}} \le \ell + 2$, and the statement of the lemma follows. □
Combining Lemmas 4 and 5, we obtain the following corollary.
Corollary 3.
In the Hamming metric, the skew convolutional code $\hat{C}$ defined by $G(D)$ in (22) has the $\ell$-th active burst distance $d_\ell^{\mathrm{burst}} = \ell + 2$ for $\ell = 2, 3, \dots$, the slope of the active distance is $\sigma = 1$, and the free distance is $d_f = 4$.
For both metrics, the Hamming and the sum-rank, the upper bounds (28) on the free distance and (30) on the slope for the unit memory $[2, 1]$ code $\hat{C}$ become
$$d_f \le 2n - k + 1 = 4 \quad \text{and} \quad \sigma \le n - k = 1.$$
Hence, the skew code $\hat{C}$ defined by (22) achieves the Singleton-type upper bound on $d_f$ and can be called an MDS code, as in [23]. The Heller bound (29) gives $d_f \le 4$ as well. The slope of $\hat{C}$ also reaches the upper bound (30).
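The distances above can be verified by a brute-force dynamic program over the trellis of Figure 3: track the minimum-weight detours that leave the zero state and record their weight when they merge back. A sketch under our $\mathbb{F}_4$ integer conventions, with edge labels computed per (25) (all function names are ours):

```python
w_hamming = lambda v: sum(x != 0 for x in v)        # Hamming weight of a block

def edge_label(s_prev, s_cur, t):
    """Label v_t of the trellis edge u_{t-1} = s_prev -> u_t = s_cur, per (25).
    Note (-1) % 2 == 1 in Python, so frob also handles t = 0 (theta^2 = id)."""
    g0 = [frob(g, t) for g in (1, 2)]               # theta^t(G_0),     G_0 = (1, alpha)
    g1 = [frob(g, t - 1) for g in (2, 3)]           # theta^{t-1}(G_1), G_1 = (alpha, alpha^2)
    return [mul(s_prev, a) ^ mul(s_cur, b) for a, b in zip(g1, g0)]

def free_distance(max_len=20):
    d_f = float("inf")
    for t0 in (0, 1):                               # both section types (period 2)
        # frontier: nonzero state -> min weight of a detour that left state 0 at time t0
        frontier = {s: w_hamming(edge_label(0, s, t0)) for s in (1, 2, 3)}
        for t in range(t0 + 1, t0 + max_len):
            nxt = {}
            for s_prev, w in frontier.items():
                for s_cur in range(4):
                    wn = w + w_hamming(edge_label(s_prev, s_cur, t))
                    if s_cur == 0:
                        d_f = min(d_f, wn)          # detour merged back: a loop
                    elif wn < nxt.get(s_cur, float("inf")):
                        nxt[s_cur] = wn
            frontier = nxt
    return d_f

print(free_distance())   # 4, matching d_f = 4 and the bound (28)
```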
A generator matrix $G(D)$ of a skew convolutional code (and the corresponding encoder) is called catastrophic if there exists an information sequence $u(D)$ of infinite weight such that the code sequence $v(D) = u(D) G(D)$ has finite weight. The generator matrix $G(D)$ in (22) of the skew convolutional code $\hat{C}$ with $\theta = (\cdot)^q$ is non-catastrophic, since the slope is $\sigma > 0$. Note that, in the case of a fixed convolutional code $C$, i.e., for $\theta = \mathrm{id}$, the generator matrix (22) is catastrophic, and the code has $d_f = 2$ and $\sigma = 0$.

4.5. Blocking of Skew Convolutional Codes

A skew convolutional code $C$, represented as a $\tau$-periodic $[n, k]$ code, can be considered as a $[\tau n, \tau k]$ fixed code $C^{(\tau)}$ by $\tau$-blocking, as described in [21]. The only difference between $C$ and $C^{(\tau)}$ is that the code symbols are grouped into blocks $v_t$ of different lengths in these codes. In this way, known methods for analyzing fixed codes can be applied to skew convolutional codes.
For example, the $[2, 1]$ skew code $\hat{C}$ with generator matrix (24) has period $\tau = m = 2$ and can be written as the $[4, 2]$ fixed code $C^{(\tau)} = C^{(2)}$ defined by the scalar generator matrix
$$G^{(2)} = \begin{pmatrix} 1 & \alpha & \alpha & \alpha^2 & 0 & 0 & 0 & 0 & & \\ 0 & 0 & 1 & \alpha^2 & \alpha^2 & \alpha & 0 & 0 & & \\ & & & & 1 & \alpha & \alpha & \alpha^2 & \cdots & \\ & & & & 0 & 0 & 1 & \alpha^2 & \cdots & \\ & & & & & & & & \ddots & \end{pmatrix}$$
which coincides with the matrix $G$ in (24) but is written in 2-blocked form. From $G^{(2)}$, we obtain the polynomial generator matrix of the $[4, 2]$ blocked code $\hat{C}^{(2)}$ as
$$G^{(2)}(D) = \begin{pmatrix} 1 & \alpha & \alpha & \alpha^2 \\ \alpha^2 D & \alpha D & 1 & \alpha^2 \end{pmatrix}.$$
In general, for any skew convolutional code $C$ and for any $i$-blocking $C^{(i)}$, $i \ge 1$, the codewords, represented by sequences $v$ of elements from $\mathbb{F}_{q^m}$ in (16), are the same for the codes $C$ and $C^{(i)}$. Hence, the codes have the same properties; e.g., we have
$$\deg C = \deg C^{(i)}. \qquad (32)$$

5. Dual Skew Convolutional Codes

5.1. Definitions of Duality

The duality of skew convolutional codes can be defined in different ways.
First, consider a skew convolutional code $C$ over $F$ in scalar form, as a set of sequences as in (16). For two sequences $v$ and $v'$, where at least one of them is finite, define the scalar product $(v, v')$ as the sum of the products of the corresponding components, where missing components are assumed to be zero. We say that the sequences are orthogonal if $(v, v') = 0$.
Definition 3.
The dual code $C^\perp$ to a skew convolutional $[n, k]$ code $C$ is the $[n, n-k]$ skew convolutional code $C^\perp$ such that $(v, v^\perp) = 0$ for all finite-length words $v \in C$ and for all words $v^\perp \in C^\perp$.
Another way to define orthogonality is, for example, as follows. Consider two $n$-words $v(D)$ and $v'(D)$ over $Q^n$. We say that $v(D)$ is left-orthogonal to $v'(D)$ if $v(D) v'^T(D) = 0$, and right-orthogonal if $v'(D) v^T(D) = 0$. A left dual code to a skew convolutional code $C$ can be defined as
$$C^\perp_{\mathrm{left}} = \left\{ v'(D) \in Q^n : v'(D) v^T(D) = 0 \ \text{for all} \ v(D) \in C \right\}.$$
The dual code $C^\perp_{\mathrm{left}}$ is a left submodule of $Q^n$; hence, it is a skew convolutional code.
In the following, we consider dual codes according to Definition 3 only, since this notion is more interesting for practical applications.

5.2. Parity Check Matrices

Given a code $C$ with generator matrix $G$, we next show how to find a parity check matrix $H$ such that $G H^T = 0$.
Let a skew $[n, k]$ code $C$ of memory $\mu$ be defined by a polynomial generator matrix $G(D)$ in (12), which corresponds to the scalar generator matrix $G$ in (18). For the dual $[n, n-k]$ code $C^\perp$, we write a transposed parity check matrix $H^T$ of memory $\mu^\perp$, similar to classical convolutional codes, as
$$H^T = \begin{pmatrix} H_0^T & H_1^T & \cdots & H_{\mu^\perp}^T & & \\ & \theta(H_0^T) & \cdots & \theta(H_{\mu^\perp - 1}^T) & \theta(H_{\mu^\perp}^T) & \\ & & \ddots & & & \ddots \end{pmatrix}$$
where $\operatorname{rank}(H_0) = n - k$. Similar to [20], we call the matrix $H$ the syndrome former and write it in polynomial form as
$$H^T(D) = H_0^T + H_1^T D + \dots + H_{\mu^\perp}^T D^{\mu^\perp}.$$
Then, we have the following parity check matrix of the causal code $C$ with the generator matrix (21):
$$H = \begin{pmatrix} H_0 & & & \\ H_1 & \theta(H_0) & & \\ \vdots & \theta(H_1) & \ddots & \\ H_{\mu^\perp} & \vdots & \ddots & \\ & \theta(H_{\mu^\perp}) & & \ddots \end{pmatrix}$$
which, in the case of $\theta = \mathrm{id}$, coincides with the check matrix of a classical fixed convolutional code.
From Definition 3, we have that $v H^T = 0$ for all sequences $v \in C$ over $F$. On the other hand, from (4), we have that every codeword $v(D) \in C$ can be written as $v(D) = u(D) G(D)$. Hence, if we find an $n \times (n-k)$ matrix $H^T(D)$ over $R$ of full rank such that $G(D) H^T(D) = 0$, then every codeword satisfies $v(D) H^T(D) = u(D) G(D) H^T(D) = 0$, and vice versa, i.e., if $v(D) H^T(D) = 0$, then $v(D)$ is a codeword of $C$.
Theorem 4.
We have $G(D) H^T(D) = 0$ if and only if $G H^T = 0$.
Proof. 
We show the proof for a code of memory $\mu = 1$ (as in Example 2); for the general memory case, the proof follows similarly. Consider the code with generator matrices $G(D)$ and $G$ given by (12) and (18). Let us find a check matrix with memory $\mu^\perp = \mu = 1$. Then, we have
$$H^T(D) = H_0^T + H_1^T D$$
and
$$H^T = \begin{pmatrix} H_0^T & H_1^T & & \\ & \theta(H_0^T) & \theta(H_1^T) & \\ & & \theta^2(H_0^T) & \theta^2(H_1^T) \\ & & & \ddots \end{pmatrix}.$$
From the condition $G(D) H^T(D) = 0$, we obtain the following system of equations for the unknowns $H_0^T, H_1^T$:
$$\begin{cases} G_0 H_0^T = 0 \\ G_0 H_1^T + G_1 \theta(H_0^T) = 0 \\ G_1 \theta(H_1^T) = 0. \end{cases} \qquad (38)$$
From the condition $G H^T = 0$, we obtain the following equations: by multiplying the first row of $G$ by $H^T$, we get the system (38); by multiplying the second row of $G$ by $H^T$, we get the system
$$\begin{cases} \theta(G_0) \theta(H_0^T) = 0 \\ \theta(G_0) \theta(H_1^T) + \theta(G_1) \theta^2(H_0^T) = 0 \\ \theta(G_1) \theta^2(H_1^T) = 0 \end{cases}$$
which is equivalent to (38). Multiplication of the other rows of $G$ by $H^T$ does not give new equations. Hence, the conditions $G(D) H^T(D) = 0$ and $G H^T = 0$ give the same system (38). □
Example 3.
For the code $\hat{C}$ from Example 2, we write $H_0 = (a, c)$ and $H_1 = (b, d)$. Using $G_0, G_1$ from (22) and solving the system (38), we obtain $H_0 = (\alpha, 1)$ and $H_1 = (1, \alpha)$. Hence, $H(D) = (\alpha + D, 1 + \alpha D)$, and a parity check matrix $H$ of the code $\hat{C}$, which is a generator matrix for the dual code $\hat{C}^\perp$, is as follows:
$$H = \begin{pmatrix} \alpha & 1 & & & & & \\ 1 & \alpha & \alpha^2 & 1 & & & \\ & & 1 & \alpha^2 & \alpha & 1 & \\ & & & & 1 & \alpha & \ddots \end{pmatrix}.$$
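This solution can be double-checked with the skew_mul sketch from Section 2.1: the product $G(D) H^T(D)$ is the sum of the componentwise skew products and must vanish (the variable names below are ours):

```python
g1, g2 = [1, 2], [2, 3]   # G(D) = (1 + aD, a + a^2 D) from (22)
h1, h2 = [2, 1], [1, 2]   # H^T(D): components a + D and 1 + aD from Example 3

# add the two skew products over GF(4) (addition is componentwise XOR)
prod = [x ^ y for x, y in zip(skew_mul(g1, h1), skew_mul(g2, h2))]
print(prod)               # [0, 0, 0]: G(D) H^T(D) = 0, as Theorem 4 requires
```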

5.3. Trellises of Dual Codes

Similar to fixed convolutional codes, we have the following theorem:
Theorem 5.
For a skew convolutional code $C$ and its dual $C^\perp$, we have $\deg C = \deg C^\perp$.
Proof. 
Denote by $\tau$ and $\tau^\perp$ the periods of the codes $C$ and $C^\perp$, respectively. Let $\ell$ be the least common multiple of the periods $\tau$ and $\tau^\perp$; then, the $\ell$-blocked codes $C^{(\ell)}$ and $(C^\perp)^{(\ell)}$ are both fixed convolutional codes. The fixed codes $C^{(\ell)}$ and $(C^\perp)^{(\ell)}$ are dual to each other, since blocking does not change the code sequences; hence, $\deg C^{(\ell)} = \deg (C^\perp)^{(\ell)}$; see, e.g., Theorem 2.69 in [20] for fixed dual convolutional codes. From (32), we have $\deg C = \deg C^{(\ell)}$ and $\deg C^\perp = \deg (C^\perp)^{(\ell)}$; hence, $\deg C = \deg C^\perp$. □
It follows from Theorem 5 that the number of states at one level of the code trellis (the trellis complexity) is the same for an original code $C$ and for its dual $C^\perp$ and equals $Q^{\deg C}$.
The trellis of the dual code $\hat{C}^\perp$ obtained from the matrix $H$ in Example 3 is shown in Figure 4. The trellis has $Q^{\deg \hat{C}^\perp} = 4^1 = 4$ states, labeled by elements of the set $S = \{0, 1, \alpha, \alpha^2\}$. Every word of the dual code $\hat{C}^\perp$ is represented by a path in the trellis that starts from a state $s_{-1} \in S$ and goes to the right. For the trellis section corresponding to time $t = 0, 1, \dots$, the edge connecting the states $s_{t-1}$ and $s_t$ is labeled by $v_t^\perp$, computed as follows:
$$v_t^\perp = \begin{cases} s_{t-1}(\alpha^2, 1) + s_t(1, \alpha^2) & \text{for odd } t, \\ s_{t-1}(\alpha, 1) + s_t(1, \alpha) & \text{for even } t. \end{cases}$$

6. Trellis Decoding of Skew Convolutional Codes

For a given skew convolutional code C , we showed how to obtain a code trellis using a generator matrix of the code. Another way to obtain a code trellis of C using a parity check matrix H was proposed in [27]. Having a code trellis, one can use the Viterbi decoder [4] for maximum likelihood sequence decoding or the BCJR decoder [28] for symbol-wise decoding.
For an $[n, k]$ skew convolutional code, the complexity of the Viterbi decoder has order $\varkappa = n Q^k Q^{\deg C}$ operations (additions and binary selections), which increases exponentially in $k$ and might be high for high-rate codes. Using the detailed code trellis [27,29], where every edge is labeled by a single field element, the decoding complexity can be reduced to
$$\varkappa = n Q^{\min\{k, n-k\}} Q^{\deg C}. \qquad (39)$$
Another advantage of the method in [29] is that it can be applied to every trellis section separately, which is convenient for time-varying codes. The decoding complexity of a particular code can also be decreased using the methods in [30]. The complexity of the BCJR decoding algorithm has the same order as in (39).
Symbol-wise decoding of a skew convolutional code $C$ can also be implemented using a trellis of the dual code $C^\perp$; see [31,32,33]. The order of the decoding complexity in this case is also given by (39).
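As an illustration of trellis decoding, the sketch below runs hard-decision Viterbi decoding of the code $\hat{C}$ on its time-varying trellis; it reuses edge_label from the free-distance sketch in Section 4.4 and uses a Hamming-distance branch metric. It is our toy illustration of the idea of [4], not an implementation from the paper.

```python
def viterbi(received):
    """Hard-decision Viterbi decoding of the skew code C^ on its time-varying
    trellis; received is a list of length-2 blocks over GF(4)."""
    INF = float("inf")
    metric, paths = {0: 0}, {0: []}        # survivors start in the zero state
    for t, r in enumerate(received):
        new_metric, new_paths = {}, {}
        for s_prev, m in metric.items():
            for s_cur in range(4):
                lbl = edge_label(s_prev, s_cur, t)
                m2 = m + sum(a != b for a, b in zip(lbl, r))   # Hamming branch metric
                if m2 < new_metric.get(s_cur, INF):            # keep the survivor
                    new_metric[s_cur] = m2
                    new_paths[s_cur] = paths[s_prev] + [lbl]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)     # best metric over the final states
    return paths[best], metric[best]

v = [[1, 2], [2, 3], [0, 0], [1, 3], [3, 2]]   # codeword from the proof of Theorem 2
r = [[1, 2], [2, 2], [0, 0], [1, 3], [3, 2]]   # one corrupted symbol
v_hat, d = viterbi(r)
print(v_hat == v, d)                           # True 1: the single error is corrected
```

Since $d_f = 4$ for $\hat{C}$, a single symbol error lies within half the free distance and is corrected.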

7. Conclusions

A new class of non-binary skew convolutional codes was defined that extends the class of fixed convolutional codes. The skew convolutional codes are equivalent to periodic time-varying classical convolutional codes but have as compact a description as fixed convolutional codes.
Given a field $F = \mathbb{F}_{p^M} = \mathbb{F}_{q^m}$ of characteristic $p$ and code parameters $n$, $k$, and $\mu$, for every automorphism $\theta(a) = a^q$ of the field, the subclass $SCC(\theta)$ of skew convolutional $[n, k]$ codes of memory $\mu$ over the field is defined. All the subclasses have the same number of codes. In the case of the identity automorphism $\theta = \mathrm{id}$, we obtain the subclass $SCC(\mathrm{id})$ of classical fixed convolutional codes. Any other automorphism $\theta$ of the field gives a subclass $SCC(\theta)$ of skew convolutional codes that can be represented as periodic time-varying convolutional codes with typical period $m$. The total number of subclasses $SCC(\theta)$ is equal to the number of divisors of $M$, which is usually not a large number. The class of $m$-periodic time-varying convolutional codes is larger than the class of skew convolutional codes. Every code in the subclass $SCC(\theta)$ is defined by a $k \times n$ polynomial generator matrix $G(D)$ over the ring of $\theta$-skew polynomials; hence, the descriptions of skew codes and fixed codes are the same, given by the matrix $G(D)$.
Every τ -periodic convolutional [ n , k ] code can be written as a fixed [ τ n , τ k ] code; hence, skew convolutional codes can be analyzed by methods known for fixed codes. We showed how to design generator and parity check matrices in polynomial and scalar forms, encoders and code trellises for skew convolutional codes, and their duals. Using code trellises for original and dual codes, in the case of channels without memory, one can apply Viterbi or BCJR decoding algorithms, or the dualized BCJR algorithm.
Future work. We gave just a first encounter with skew convolutional codes, and many open problems remain. The algebraic structure of classical fixed convolutional codes is well understood; see, e.g., [20,21] and references therein. Questions such as how to obtain a canonical generator matrix of a skew convolutional code and its dual, or how to design encoders for a fractional generator matrix, can be considered in the future. Another open problem is to find good skew convolutional codes that reach an upper bound on the free distance. One possibility for obtaining skew convolutional codes is based on unwrapping skew quasi-cyclic (QC) block codes (see such codes in [17]) in a way similar to [34] or [35], where it is shown how fixed classical convolutional codes can be obtained by unwrapping QC block codes and vice versa.

Author Contributions

Conceptualization, V.S., W.L., O.G. and G.K.; Investigation, V.S., W.L., O.G. and G.K.; Writing—original draft, V.S., W.L., O.G. and G.K. All authors have read and agreed to the published version of the manuscript.

Funding

The work of V. Sidorenko was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 801434) and by the Chair of Communications Engineering at the Technical University of Munich. The work of O. Günlü was supported by the German Federal Ministry of Education and Research (BMBF) within the national initiative for “Post Shannon Communication (NewCom)” under Grant 16KIS1004. The work of G. Kramer was supported in part by the German Research Foundation through DFG Grant KR 3517/9-1.

Acknowledgments

The authors are grateful to the guest editor, to the assistant editor, and especially to anonymous reviewers for their comments and advice that allowed us to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Elias, P. Coding for noisy channels. IRE Conv. Rec. 1955, 4, 37–46.
2. Fano, R. A heuristic discussion of probabilistic decoding. IEEE Trans. Inf. Theory 1963, 9, 64–74.
3. Massey, J.L. Threshold Decoding; MIT Press: Cambridge, MA, USA, 1963.
4. Viterbi, A. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory 1967, 13, 260–269.
5. Berrou, C.; Glavieux, A.; Thitimajshima, P. Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1. In Proceedings of the ICC ’93—IEEE International Conference on Communications, Geneva, Switzerland, 23–26 May 1993; Volume 2, pp. 1064–1070.
6. IEEE Standard for Telecommunications and Information Exchange between Systems-LAN/MAN Specific Requirements—Part 11: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications: High Speed Physical Layer in the 5 GHz Band. Available online: https://ieeexplore.ieee.org/document/815305 (accessed on 1 December 2020).
7. Filler, T.; Judas, J.; Fridrich, J. Minimizing Additive Distortion in Steganography Using Syndrome-Trellis Codes. IEEE Trans. Inf. Forensics Secur. 2011, 6, 920–935.
8. Ouahada, K. Nonbinary convolutional codes and modified M-FSK detectors for power-line communications channel. J. Commun. Netw. 2014, 16, 270–279.
9. Holzbaur, L.; Freij-Hollanti, R.; Wachter-Zeh, A.; Hollanti, C. Private Streaming With Convolutional Codes. IEEE Trans. Inf. Theory 2020, 66, 2417–2429.
10. Mooser, M. Some periodic convolutional codes better than any fixed code (Corresp.). IEEE Trans. Inf. Theory 1983, 29, 750–751.
11. Lee, P.J. There are many good periodically time-varying convolutional codes. IEEE Trans. Inf. Theory 1989, 35, 460–463.
12. Gabidulin, E.M. Theory of Codes with Maximum Rank Distance. Probl. Inform. Trans. 1985, 21, 1–12.
13. Boucher, D.; Ulmer, F. Coding with skew polynomial rings. J. Symb. Comput. 2009, 44, 1644–1656.
14. Boucher, D.; Ulmer, F. Codes as Modules over Skew Polynomial Rings; Springer: Berlin/Heidelberg, Germany, 2009.
15. Martínez-Peñas, U. Sum-Rank BCH Codes and Cyclic-Skew-Cyclic Codes. arXiv 2020, arXiv:2009.04949.
16. Gluesing-Luerssen, H. Skew-Polynomial Rings and Skew-Cyclic Codes. arXiv 2019, arXiv:1902.03516.
17. Abualrub, T.; Ghrayeb, A.; Aydin, N.; Siap, I. On the Construction of Skew Quasi-Cyclic Codes. IEEE Trans. Inf. Theory 2010, 56, 2081–2090.
18. Ore, O. Theory of Non-Commutative Polynomials. Ann. Math. 1933, 34, 480–508.
19. Clark, P. Non-Commutative Algebra; University of Georgia: Athens, GA, USA, 2012.
20. Johannesson, R.; Zigangirov, K.S. Fundamentals of Convolutional Coding; John Wiley and Sons, Ltd.: Hoboken, NJ, USA, 2015.
21. McEliece, R.J. The Algebraic Theory of Convolutional Codes. In Handbook of Coding Theory; Chapter 12; Pless, V.S., Huffman, W.C., Eds.; Elsevier Science: Amsterdam, The Netherlands, 1998; Volume I, pp. 1065–1138.
22. Pollara, F.; McEliece, R.J.; Abdel-Ghaffar, K. Finite-state codes. IEEE Trans. Inf. Theory 1988, 34, 1083–1089.
23. Rosenthal, J.; Smarandache, R. Maximum Distance Separable Convolutional Codes. Appl. Algebra Eng. Commun. Comput. 1998, 10, 15–32.
24. Gluesing-Luerssen, H.; Schmale, W. Distance bounds for convolutional codes and some optimal codes. arXiv 2003, arXiv:math/0305135.
25. Jordan, R.; Pavlushkov, V.; Zyablov, V.V. An upper bound on the slope of convolutional codes. In Proceedings of the 2002 IEEE International Symposium on Information Theory, Lausanne, Switzerland, 30 June–5 July 2002; p. 424.
26. Wachter-Zeh, A.; Stinner, M.; Sidorenko, V. Convolutional Codes in Rank Metric with Application to Random Network Coding. IEEE Trans. Inf. Theory 2015, 61, 3199–3213.
27. Sidorenko, V.; Zyablov, V. Decoding of convolutional codes using a syndrome trellis. IEEE Trans. Inf. Theory 1994, 40, 1663–1666.
28. Bahl, L.; Cocke, J.; Jelinek, F.; Raviv, J. Optimal decoding of linear codes for minimizing symbol error rate (Corresp.). IEEE Trans. Inf. Theory 1974, 20, 284–287.
29. Li, W.; Sidorenko, V.; Jerkovits, T.; Kramer, G. On Maximum-Likelihood Decoding of Time-Varying Trellis Codes. In Proceedings of the 2019 XVI International Symposium “Problems of Redundancy in Information and Control Systems” (REDUNDANCY), Moscow, Russia, 21–25 October 2019; pp. 104–109.
30. Lafourcade, A.; Vardy, A. Optimal sectionalization of a trellis. IEEE Trans. Inf. Theory 1996, 42, 689–703.
31. Hartmann, C.; Rudolph, L. An optimum symbol-by-symbol decoding rule for linear codes. IEEE Trans. Inf. Theory 1976, 22, 514–517.
32. Berkmann, J.; Weiss, C. On dualizing trellis-based APP decoding algorithms. IEEE Trans. Commun. 2002, 50, 1743–1757.
33. Srinivasan, S.; Pietrobon, S.S. Decoding of High Rate Convolutional Codes Using the Dual Trellis. IEEE Trans. Inf. Theory 2010, 56, 273–295.
34. Esmaeili, M.; Gulliver, T.A.; Secord, N.P.; Mahmoud, S.A. A link between quasi-cyclic codes and convolutional codes. IEEE Trans. Inf. Theory 1998, 44, 431–435.
35. Kudryashov, B.D.; Zakharova, T.G. Block Codes from Convolution Codes. Probl. Peredachi Inf. 1989, 25, 98–102.
Figure 1. Encoder of a skew convolutional code.
Figure 2. Encoder of the skew code $\hat{C}$ from Example 2: (a) for even $t$; (b) for odd $t$.
Figure 3. Time-varying trellis of the skew code $\hat{C}$.
Figure 4. Time-varying trellis of the dual skew code $\hat{C}^\perp$ from Example 3.