Article

Matrix Factorization and Some Fast Discrete Transforms

by Iliya Bouyukliev 1,*, Mariya Dzhumalieva-Stoeva 2 and Paskal Piperkov 2
1 Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Faculty of Mathematics and Informatics, St. Cyril and St. Methodius University of Veliko Tarnovo, 5003 Veliko Tarnovo, Bulgaria
* Author to whom correspondence should be addressed.
Axioms 2024, 13(8), 495; https://doi.org/10.3390/axioms13080495
Submission received: 4 July 2024 / Revised: 18 July 2024 / Accepted: 21 July 2024 / Published: 23 July 2024
(This article belongs to the Special Issue Recent Advances in Special Functions and Applications)

Abstract

In this paper, three discrete transforms related to vector spaces over finite fields are studied. For our purposes, and according to the properties of the finite fields, the most suitable transforms are as follows: for binary fields, the Walsh–Hadamard transform; for odd prime fields, the Vilenkin–Chrestenson transform; and for composite fields, the trace transform. A factorization of the transform matrices using the Kronecker power is given, so that the considered discrete transforms are reduced to fast discrete transforms. Examples and applications of the considered transforms in coding theory are also presented, for calculating the weight distribution of a linear code.

1. Introduction

Fourier-type discrete transforms have various applications [1], including signal processing, communications, and cryptography. One special application is their implementation in digital devices related to spectral logic [2]. Discrete transforms are also important in studying the cryptographic properties of Boolean functions [3,4,5], and they are used in the study of self-dual bent functions (see [6]). A description of fast Fourier-type discrete transforms can be found in the popular book by Joux [3] (Chapter 9).
Fast discrete transforms are widely used in coding theory, especially for computing some important parameters of linear codes over finite fields. The problems of computing the weight distribution, minimum distance, and covering radius of a linear code are NP-complete [7]. Algorithms that solve these problems are usually based on an exhaustive search [8,9,10]. Connections between discrete Fourier-type transforms, on the one hand, and the weight distribution and covering radius, on the other, were established by Karpovsky [11,12], but he mainly considered binary codes. The algorithms based on fast discrete transforms are well suited for linear codes with small rates (large length and small dimension). The present study is a continuation of the authors' recent work on the applications of these transforms in coding theory [13,14].
Fast discrete transforms, in most cases, can be represented as a matrix-by-vector multiplication, where the matrix is recursively defined by a Kronecker product. The transition from a discrete to a fast discrete transform is based on the fact that a transform matrix defined recursively by a Kronecker product can be factorized into a product of sparse matrices. We prove that the sparse matrices in this product commute, which is important for some implementations [15]. As a result, the complexity of computing the discrete transform is reduced from $O(n^2)$ to $O(n \log n)$, where n is the size of the transform matrix.
The fast discrete transforms are implemented with butterfly diagrams and algorithms (see, for example, [16]). These types of algorithms have very efficient natural implementations with the SIMD model of parallelization, especially with the CUDA platform. The speedup of the parallel implementation of Walsh–Hadamard transform in GPU can be seen in [15].
In addition to the basic properties of the considered discrete transforms, we dwell in more detail on the following cases concerning linear codes over a finite field with q elements:
(1)
If $q = 2$, we consider the Walsh–Hadamard (Walsh) transform. This transform is a square-wave analog of the Fourier transform, based on Walsh functions [17,18]. The Walsh transform matrices are related to the Hadamard matrices of the Sylvester type. The Hadamard matrix $H_n$, $n \in \mathbb{N}$, is the n-th Kronecker power of the matrix $H_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. This allows us to represent $H_n$ as a product of sparse matrices [19], which leads to a fast algorithm. As the computations of the transform are additions and subtractions only, it can be implemented by a butterfly algorithm [3,16]. Therefore, this transform is best suited for vector spaces over the binary field.
(2)
If q is an odd prime, we use the Vilenkin–Chrestenson transform [20,21]. This transform is defined over the ring of integers modulo q, $\mathbb{Z}_q = \{0, 1, 2, \ldots, q-1\}$. As is well known, $\mathbb{Z}_q$ is a field only when q is a prime. We consider linear codes over a field, so this transform is suitable for our research for prime q. The Walsh–Hadamard transform is a special case of the Vilenkin–Chrestenson transform, namely for q = 2. Information about the Vilenkin–Chrestenson transform, as well as other discrete Fourier-type transforms, is given in [2,22,23].
(3)
If $q = p^s$ for a prime p and a positive integer s > 1, we use the trace transform for linear codes over the field $\mathbb{F}_q$.
The aim of this paper is to present methods for factorization of transform matrices when they are represented as a Kronecker product and their application for faster computation of the weight distribution of linear codes over prime and composite finite fields. We also give specific examples for better understanding. This consideration is essential for the modification of algorithms for computing the weight spectrum and covering radius of a linear code in some cases [13,14].
Section 2 presents the information we need about finite fields and linear codes. In Section 3, we present some useful properties of the Kronecker product. The factorization of the k-th Kronecker power of a square matrix is also given in this part. Section 4, Section 5 and Section 6 are devoted to the Walsh–Hadamard, Vilenkin–Chrestenson, and trace transforms, respectively.

2. Finite Fields and Linear Codes

Let $\mathbb{F}_q$ be a finite field, or Galois field, with q elements and let the prime p be its characteristic. Detailed information about the properties of finite fields is given in [24]. Any finite extension F of a field K is a linear space over K, and a basis of the field F is any basis of this linear space. In particular, the field $\mathbb{F}_{p^m}$, as an extension of degree m of its prime subfield $\mathbb{F}_p$, is isomorphic to the linear space $\mathbb{F}_p^m$. If $\beta_1, \beta_2, \ldots, \beta_m \in \mathbb{F}_{p^m}$ is a basis of $\mathbb{F}_{p^m}$ over the prime subfield $\mathbb{F}_p$ and $\alpha \in \mathbb{F}_{p^m}$, then $\alpha = \lambda_1\beta_1 + \lambda_2\beta_2 + \cdots + \lambda_m\beta_m$, where $\lambda_1, \lambda_2, \ldots, \lambda_m \in \mathbb{F}_p$.
 Definition 1. 
Let $K = \mathbb{F}_q$, $F = \mathbb{F}_{q^m}$ and $\alpha \in F$. The trace $\mathrm{Tr}_{F/K}(\alpha)$ of the element α over K is
$$\mathrm{Tr}_{F/K}(\alpha) = \alpha + \alpha^q + \alpha^{q^2} + \cdots + \alpha^{q^{m-1}}.$$
If K is the prime subfield of F, then $\mathrm{Tr}_{F/K}(\alpha)$ is called the absolute trace of the element α and is denoted by $\mathrm{Tr}_F(\alpha)$.
If the field is clear from the context, the absolute trace is denoted simply by $\mathrm{Tr}(\alpha)$ and called just the trace.
Definition 2. 
Two bases $\beta_1, \ldots, \beta_m \in F$ and $\beta'_1, \ldots, \beta'_m \in F$ of the field F over the subfield K are called dual if
$$\mathrm{Tr}_{F/K}(\beta_i \beta'_j) = \begin{cases} 0, & \text{if } i \neq j, \\ 1, & \text{if } i = j, \end{cases}$$
for any $1 \le i, j \le m$. A basis that is dual to itself is called a self-dual basis.
In other words, the trace $\mathrm{Tr}(\beta_i \beta'_j)$ is equal to the Kronecker delta $\delta_{ij}$ for all $i, j \in \{1, 2, \ldots, m\}$. Each basis has exactly one dual basis. The following theorem gives a necessary and sufficient condition for the existence of a self-dual basis.
Theorem 1 
([25] (Theorem 5.1.18), [26]). There exists a self-dual basis of $F = \mathbb{F}_{q^m}$ over $K = \mathbb{F}_q$ if and only if either q is even or both q and m are odd.
For the purposes of the transforms, the elements of $\mathbb{F}_q$ are ordered, i.e., $\mathbb{F}_q = \{\alpha_0 = 0, \alpha_1, \ldots, \alpha_{q-1}\}$. In most cases, when q is composite, we use the lexicographic ordering of the elements, depending on the chosen basis. If $\beta_1, \ldots, \beta_m$ is a basis of $\mathbb{F}_q$, $q = p^m$, over $\mathbb{F}_p$ for the prime p, then any element $\alpha \in \mathbb{F}_q$ can be written as $\alpha = \lambda_1(\alpha)\beta_1 + \cdots + \lambda_m(\alpha)\beta_m$, where $\lambda_i(\alpha) \in \mathbb{F}_p$, $i = 1, \ldots, m$. There is a one-to-one correspondence $\sigma_\beta$ between $\mathbb{F}_{p^m}$ and $\mathbb{F}_p^m$ defined by this basis: for $\alpha \in \mathbb{F}_{p^m}$, $\sigma_\beta(\alpha) = (\lambda_1(\alpha), \ldots, \lambda_m(\alpha))$. To order the elements of $\mathbb{F}_q$, we use the lexicographic ordering of the vectors in $\mathbb{F}_p^m$, namely
$$(0, \ldots, 0, 0) < (0, \ldots, 0, 1) < (0, \ldots, 0, 2) < \cdots < (p-1, \ldots, p-1).$$
The position of the zero element is constant, whereas all other elements could change their positions if the basis is changed.
Let $\mathbb{F}_q^n$ be the n-dimensional vector space over the finite field $\mathbb{F}_q = \{0, \alpha_1, \alpha_2, \ldots, \alpha_{q-1}\}$, where q is a power of a prime number. Any k-dimensional linear subspace $C \subseteq \mathbb{F}_q^n$ is called a linear code, and it is said to be an $[n, k]_q$ code with length n and dimension k. A vector $v \in C$ is called a codeword of C. The (Hamming) weight $\mathrm{wt}(v)$ of a vector $v \in \mathbb{F}_q^n$ is the number of nonzero coordinates of v. The weight distribution of the code C is the finite sequence $(A_0, A_1, \ldots, A_n) \in \mathbb{Z}^{n+1}$, where $A_u$ is the number of codewords of weight u, $u = 0, 1, \ldots, n$, and the weight enumerator of the code C is the polynomial $W_C(y) = 1 + A_1 y + \cdots + A_n y^n$.
A linear code can be represented by its generator matrix, i.e., a $k \times n$ matrix whose rows form a basis of C as a vector space over $\mathbb{F}_q$. We use another representation of the $[n, k]$ linear code C that depends on the chosen generator matrix G. We define a characteristic function $f_G : \mathbb{F}_q^k \to \mathbb{Z}$ of the code C with respect to its generator matrix G in the following way.
Definition 3. 
The value of the characteristic function $f_G(x)$ of the linear $[n, k]_q$ code C with respect to the generator matrix G is the number of columns of G that are proportional (with nonzero coefficients) to x, for $x \in \mathbb{F}_q^k$. If $f_G(0) = 0$, we say that the code has full length.
Obviously, if $x_1$ and $x_2$ are proportional vectors in $\mathbb{F}_q^k$, then $f_G(x_1) = f_G(x_2)$. Since a linear code has more than one generator matrix (in the general case), it has more than one characteristic function. A sketch of how such a characteristic function can be tabulated is given below.
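As an illustration, the following minimal sketch (ours, not from the paper; the name characteristic_function is an assumption) tabulates $f_G$ over a prime field $\mathbb{F}_q$ directly from Definition 3.

```python
# A sketch (not from the paper) of tabulating the characteristic function
# f_G of Definition 3 for a linear [n, k]_q code over a prime field F_q.
import itertools

def characteristic_function(G, q):
    """Truth table TT_{f_G}, indexed by the lexicographic order of F_q^k."""
    k, n = len(G), len(G[0])
    # Multiplicity of each column of G.
    counts = {}
    for j in range(n):
        col = tuple(row[j] for row in G)
        counts[col] = counts.get(col, 0) + 1
    table = []
    for x in itertools.product(range(q), repeat=k):
        if any(x):
            # q is prime, so the nonzero multiples a*x, a = 1..q-1, are
            # distinct; f_G(x) counts the columns lying among them.
            table.append(sum(counts.get(tuple(a * xi % q for xi in x), 0)
                             for a in range(1, q)))
        else:
            table.append(counts.get(x, 0))  # f_G(0) = number of zero columns
    return table
```

For the $[7,3]_3$ code of Example 2 below, this reproduces the truth table $TT_{f_G}$ listed there.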

3. Factorization of Kronecker Power

The Kronecker product of the matrices $A = (a_{ij})_{s_1 \times t_1}$ and $B = (b_{ij})_{s_2 \times t_2}$ is the $(s_1 s_2 \times t_1 t_2)$ matrix
$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1t_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2t_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{s_1 1}B & a_{s_1 2}B & \cdots & a_{s_1 t_1}B \end{pmatrix}.$$
We give some elementary properties of the Kronecker product:
$$A \otimes (B \otimes C) = (A \otimes B) \otimes C, \qquad (1)$$
$$(A + B) \otimes (C + D) = A \otimes C + A \otimes D + B \otimes C + B \otimes D, \qquad (2)$$
$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD). \qquad (3)$$
The Kronecker product is not commutative.
The k-th Kronecker power of a square matrix M is defined as follows:
$$\otimes^1 M = M, \quad \otimes^2 M = M \otimes M, \quad \otimes^{k+1} M = M \otimes (\otimes^k M), \ k > 1.$$
If M is a square matrix of size t, then $\otimes^k M$ is a square matrix of size $t^k$.
Good [19] showed that the Kronecker power can be reduced to the usual row-by-column multiplication of some sparse matrices. This fact holds also for the Kronecker power of a square matrix.
Theorem 2. 
Let M be a square matrix of size t and k be a positive integer. Then
$$\otimes^k M = B_1 \cdot B_2 \cdots B_k, \qquad (4)$$
where $B_l = I_{t^{l-1}} \otimes M \otimes I_{t^{k-l}}$, $1 \le l \le k$, and $I_s$ is the identity matrix of size s.
Proof. 
The proof of the theorem is by induction. The statement is obvious for k = 1:
$$\otimes^1 M = B_1 = I_{t^{1-1}} \otimes M \otimes I_{t^{1-1}} = I_1 \otimes M \otimes I_1 = M.$$
Let (4) be true for all Kronecker powers smaller than k. Specifically,
$$\otimes^{k-1} M = B'_1 \cdot B'_2 \cdots B'_{k-1},$$
where $B'_l = I_{t^{l-1}} \otimes M \otimes I_{t^{k-1-l}}$, $l = 1, 2, \ldots, k-1$. Because of (1), (3) and the inductive hypothesis, we have that
$$\begin{aligned}
\otimes^k M &= M \otimes (\otimes^{k-1} M) = (M \cdot I_t) \otimes (I_{t^{k-1}} \cdot \otimes^{k-1} M) = (M \otimes I_{t^{k-1}}) \cdot (I_t \otimes \otimes^{k-1} M) \\
&= B_1 \cdot \big(I_t \otimes (B'_1 \cdot B'_2 \cdots B'_{k-1})\big) = B_1 \cdot (I_t \otimes B'_1) \cdot (I_t \otimes B'_2) \cdots (I_t \otimes B'_{k-1}) \\
&= B_1 \cdot (I_t \otimes M \otimes I_{t^{k-2}}) \cdot (I_t \otimes I_t \otimes M \otimes I_{t^{k-3}}) \cdots (I_t \otimes I_{t^{k-2}} \otimes M) \\
&= B_1 \cdot B_2 \cdot B_3 \cdots B_k.
\end{aligned}$$
So the theorem holds for all $k \in \mathbb{N}$. □
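As a quick numerical illustration (our sketch, assuming NumPy is available; not part of the paper), Theorem 2 can be checked directly for a small matrix:

```python
# Checking the factorization of Theorem 2 numerically: the k-th Kronecker
# power of M equals the product of B_l = I_{t^{l-1}} (x) M (x) I_{t^{k-l}}.
import numpy as np
from functools import reduce

def kron_power(M, k):
    return reduce(np.kron, [M] * k)

def B(M, k, l):
    t = M.shape[0]
    return np.kron(np.kron(np.eye(t ** (l - 1)), M), np.eye(t ** (k - l)))

M = np.array([[1, 1], [1, -1]])  # H_1, for instance
k = 3
prod = reduce(np.matmul, [B(M, k, l) for l in range(1, k + 1)])
assert np.allclose(prod, kron_power(M, k))
```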
Another important result is that the order of the factors in (4) does not matter.
Theorem 3. 
The factors in (4) commute.
Proof. 
We apply the properties of the Kronecker product.
B i · B j = ( I t i 1 M I t n i ) · ( I t j 1 M I t k j ) = I t i 1 ( ( M I t k i ) · ( I t j i M I t k j ) ) = I t i 1 ( ( M I t j i ) · ( I t j i M ) ) I t k j = I t i 1 ( ( M I t j i ) · ( I t I t j i 1 M ) ) I t k j = I t i 1 ( M · I t ) ( I t j i · ( I t j i 1 M ) ) I t k j = I t i 1 M I t j i 1 M I t k j = I t i 1 ( I t · M ) ( ( I t j i 1 M ) · I t j i ) I t k j = I t i 1 ( ( I t I t j i 1 M ) · ( M I t j i ) ) I t k j = I t i 1 ( ( I t j i M ) · ( M I t j i ) ) I t k j = B j · B i .
If we take a square $t \times t$ matrix M without zero elements, then the number of nonzero elements in each row and column of the matrices $B_l$ is t. Indeed,
$$I_u \otimes M \otimes I_s = \begin{pmatrix} M & & \\ & \ddots & \\ & & M \end{pmatrix} \otimes I_s = \begin{pmatrix} M \otimes I_s & & \\ & \ddots & \\ & & M \otimes I_s \end{pmatrix},$$
where
$$M \otimes I_s = \begin{pmatrix} m_{11} I_s & m_{12} I_s & \cdots & m_{1t} I_s \\ m_{21} I_s & m_{22} I_s & \cdots & m_{2t} I_s \\ \vdots & \vdots & \ddots & \vdots \\ m_{t1} I_s & m_{t2} I_s & \cdots & m_{tt} I_s \end{pmatrix}.$$
If $m_{ij} \neq 0$, then each block $m_{ij} I_s$ has exactly one nonzero element in each row and column. Hence, $M \otimes I_s$ has the same number of nonzero elements in its rows and columns as the matrix M itself. Obviously, this holds for $I_u \otimes M \otimes I_s$, too.
Let M be a square $t \times t$ matrix without zero elements and v be a vector of length $t^k$ for some integer k. To compute the product of the k-th Kronecker power $\otimes^k M$ with the vector v directly, one needs $t^k \cdot t^k$ multiplications and $t^k(t^k - 1)$ additions. Applying Theorem 2, $\otimes^k M$ is factorized into k square matrices of size $t^k$, such that each one has the same number t of nonzero elements in its rows and columns as M. We have that
$$\otimes^k M \cdot v = B_1\big(B_2\big(B_3\big(\cdots B_{k-1}(B_k v)\cdots\big)\big)\big).$$
So, calculating $\otimes^k M \cdot v$ consists of k multiplications $B_l \cdot v'$, where $v'$ is the vector obtained in the previous step. Computing $B_l \cdot v'$ consists of $t^k \cdot t$ multiplications and $t^k(t-1)$ additions. In the general case, the whole calculation consists of at most $k(t \cdot t^k + (t-1)t^k) = kt^k(2t-1)$ operations. Hence, $\otimes^k M \cdot v$ is computed much faster. This is the core of the presented fast discrete transforms.
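The following sketch (ours, assuming NumPy) makes this concrete: each factor $B_l$ is applied to the current vector without ever forming $B_l$ explicitly, by viewing the vector as a $t^{l-1} \times t \times t^{k-l}$ tensor and multiplying M along the middle axis.

```python
# A sketch of the fast product (x)^k M * v: apply the factors B_l of
# Theorem 2 one by one. Each step costs t^k * t multiplications.
import numpy as np

def fast_kron_apply(M, v, k):
    t = M.shape[0]
    v = np.asarray(v)
    for l in range(1, k + 1):
        v = v.reshape(t ** (l - 1), t, t ** (k - l))
        v = np.einsum('ij,ajb->aib', M, v)  # multiply M along the middle axis
    return v.reshape(-1)
```

By Theorem 3 the factors commute, so the order in which the loop applies them does not affect the result.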

4. Fast Walsh Transform

Let n be a positive integer and $\mathbb{F}_2 = \{0, 1\}$ be the Galois field with two elements. There is a one-to-one correspondence between the nonnegative integers $u < 2^n$ and the binary vectors of length n, defined by $u \mapsto \bar{u} = (u_0, u_1, \ldots, u_{n-1}) \in \mathbb{F}_2^n$, where $u = u_0 2^{n-1} + u_1 2^{n-2} + \cdots + u_{n-1}$ is the binary representation of u.
Let $f : \mathbb{F}_2^n \to \mathbb{F}_2$ be a Boolean function of n variables. The truth table $TT_f$ of the function f is a $2^n$-dimensional column vector whose $(i+1)$-th coordinate is $f(\bar{i})$, $i = 0, 1, \ldots, 2^n - 1$.
The Walsh transform of the function f is defined as follows:
$$\hat{f}(\omega) = \sum_{x \in \mathbb{F}_2^n} f(x)(-1)^{\langle x, \omega \rangle}, \quad \omega \in \mathbb{F}_2^n,$$
where $\langle x, y \rangle = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$ is the Euclidean inner product of the vectors x and y. The column vector of the function $\hat{f}$ is also denoted by $TT_{\hat{f}}$.
We use a special type of Hadamard matrices, defined recursively as follows:
$$H_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad H_k = \begin{pmatrix} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{pmatrix}, \ k > 1.$$
Theorem 4. 
If f is a Boolean function of n variables, then $TT_{\hat{f}} = H_n \cdot TT_f$.
Proof. 
We prove the theorem by induction. For k = 1, let $f : \mathbb{F}_2 \to \mathbb{F}_2$ be a Boolean function of one variable, and write $W_f$ for $TT_{\hat{f}}$. Then $TT_f = \begin{pmatrix} f(0) \\ f(1) \end{pmatrix}$ and
$$W_f(0) = f(0)(-1)^{0 \cdot 0} + f(1)(-1)^{1 \cdot 0} = f(0) + f(1),$$
$$W_f(1) = f(0)(-1)^{0 \cdot 1} + f(1)(-1)^{1 \cdot 1} = f(0) - f(1).$$
Obviously, $W_f = H_1 \cdot TT_f$.
Let the theorem hold for n = k, $k \geq 1$, and let $f : \mathbb{F}_2^{k+1} \to \mathbb{F}_2$ be a Boolean function of k+1 variables with truth table $TT_f$. We use the Boolean functions of k variables $f_0(x_1, \ldots, x_k) = f(0, x_1, \ldots, x_k)$ and $f_1(x_1, \ldots, x_k) = f(1, x_1, \ldots, x_k)$. According to the lexicographic ordering, the set of all vectors in $\mathbb{F}_2^{k+1}$ can be split into two subsets, namely $V_0 = \{(0, x_1, \ldots, x_k) : (x_1, \ldots, x_k) \in \mathbb{F}_2^k\}$ and $V_1 = \{(1, x_1, \ldots, x_k) : (x_1, \ldots, x_k) \in \mathbb{F}_2^k\}$. Then $TT_f = (TT_{f_0}, TT_{f_1})$. According to the induction hypothesis, $W_{f_0} = H_k \cdot TT_{f_0}$ and $W_{f_1} = H_k \cdot TT_{f_1}$. Then
$$\begin{aligned}
W_f(\omega) &= \sum_{x \in \mathbb{F}_2^{k+1}} f(x)(-1)^{\langle x, \omega \rangle} \\
&= \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f(0, x_1, \ldots, x_k)(-1)^{\langle (0, x_1, \ldots, x_k), (\omega_0, \omega_1, \ldots, \omega_k) \rangle} + \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f(1, x_1, \ldots, x_k)(-1)^{\langle (1, x_1, \ldots, x_k), (\omega_0, \omega_1, \ldots, \omega_k) \rangle} \\
&= \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f_0(x_1, \ldots, x_k)(-1)^{\langle (x_1, \ldots, x_k), (\omega_1, \ldots, \omega_k) \rangle} + \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f_1(x_1, \ldots, x_k)(-1)^{\omega_0 + \langle (x_1, \ldots, x_k), (\omega_1, \ldots, \omega_k) \rangle} \\
&= \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f_0(x_1, \ldots, x_k)(-1)^{\langle (x_1, \ldots, x_k), (\omega_1, \ldots, \omega_k) \rangle} + (-1)^{\omega_0} \sum_{(x_1, \ldots, x_k) \in \mathbb{F}_2^k} f_1(x_1, \ldots, x_k)(-1)^{\langle (x_1, \ldots, x_k), (\omega_1, \ldots, \omega_k) \rangle}.
\end{aligned}$$
It follows that
$$W_f(\omega) = W_{f_0}(\omega_1, \ldots, \omega_k) + W_{f_1}(\omega_1, \ldots, \omega_k) \quad \text{for } \omega_0 = 0,$$
$$W_f(\omega) = W_{f_0}(\omega_1, \ldots, \omega_k) - W_{f_1}(\omega_1, \ldots, \omega_k) \quad \text{for } \omega_0 = 1.$$
These equations and the induction hypothesis show that
$$W_f = \begin{pmatrix} W_{f_0} + W_{f_1} \\ W_{f_0} - W_{f_1} \end{pmatrix} = \begin{pmatrix} H_k \cdot TT_{f_0} + H_k \cdot TT_{f_1} \\ H_k \cdot TT_{f_0} - H_k \cdot TT_{f_1} \end{pmatrix} = \begin{pmatrix} H_k & H_k \\ H_k & -H_k \end{pmatrix} \cdot \begin{pmatrix} TT_{f_0} \\ TT_{f_1} \end{pmatrix}.$$
Hence, the induction step is proven. □
Thus, the computation of the Walsh transform is reduced to a matrix-by-vector multiplication. The elements of the matrix $H_n$ are only 1s and −1s. Hence, the multiplication of a row of the matrix by the considered vector involves only additions and subtractions.
There is a fast algorithm to calculate $H_n \cdot v$. First, the vector v is partitioned into vectors of two coordinates. For each small vector,
$$H_1 \cdot \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} v_1 + v_2 \\ v_1 - v_2 \end{pmatrix}.$$
Using these results, $H_2$ is multiplied by the 4-length parts of v. The k-th step consists of multiplying the matrix $H_k$ by a part $v'$ of v, where $v'$ has length $2^k$. Let $v'^{(1)}$ and $v'^{(2)}$ be the two $2^{k-1}$-length parts of $v'$ from the previous step of the algorithm, for which $H_{k-1} \cdot v'^{(1)}$ and $H_{k-1} \cdot v'^{(2)}$ are already computed. Then
$$H_k \cdot v' = \begin{pmatrix} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{pmatrix} \cdot \begin{pmatrix} v'^{(1)} \\ v'^{(2)} \end{pmatrix} = \begin{pmatrix} H_{k-1} \cdot v'^{(1)} + H_{k-1} \cdot v'^{(2)} \\ H_{k-1} \cdot v'^{(1)} - H_{k-1} \cdot v'^{(2)} \end{pmatrix}.$$
Therefore, we can successively calculate $H_n \cdot v$.
The presented algorithm consists of n steps, each one performing only additions and subtractions. Therefore, $n \cdot 2^n$ arithmetic operations are performed, instead of the $2^n \cdot 2^n$ operations of the ordinary matrix-by-vector multiplication. An implementation of the fast Walsh–Hadamard transform in CUDA, as well as an analysis of the complexity of the algorithm and a comparison to other existing algorithms, is given in [15].
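A compact in-place version of this butterfly procedure might look as follows (a standard sketch; it is not the CUDA implementation of [15]):

```python
# Fast Walsh-Hadamard transform: computes H_n * v for a list v of
# length 2^n, using n * 2^n additions and subtractions.
def fwht(v):
    v = list(v)
    h = 1
    while h < len(v):
        for start in range(0, len(v), 2 * h):
            for i in range(start, start + h):
                a, b = v[i], v[i + h]
                v[i], v[i + h] = a + b, a - b  # the butterfly step
        h *= 2
    return v
```

For instance, fwht([1, 0, 1, 1]) returns [3, 1, -1, 1], which is exactly $H_2 \cdot (1, 0, 1, 1)^T$.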
Example 1. 
Let $M = H_1$ and $\otimes^k H_1 = H_k$. For k = 3, it holds that $H_3 = (H_1 \otimes I_4) \cdot (I_2 \otimes H_1 \otimes I_2) \cdot (I_4 \otimes H_1)$:
$$H_3 = \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix} \cdot \begin{pmatrix} I_2 & I_2 & & \\ I_2 & -I_2 & & \\ & & I_2 & I_2 \\ & & I_2 & -I_2 \end{pmatrix} \cdot \begin{pmatrix} H_1 & & & \\ & H_1 & & \\ & & H_1 & \\ & & & H_1 \end{pmatrix}.$$
Each row in the above factors contains only two nonzero elements, equal to 1 or −1, so the row-by-vector multiplication costs one arithmetic operation (an addition or a subtraction). Let $TT_f$ be the truth table of a function $f : \mathbb{F}_2^3 \to \mathbb{F}_2$. The process of computing $H_3 \cdot TT_f$ is shown in Figure 1. In the diagram, a continuous line means addition, and a dotted line means subtraction.
In some cases, especially for a parallel implementation of the algorithm using shared memory, it is more convenient to rearrange the matrices. Because of Theorem 3, there is another way to compute $H_3 \cdot TT_f$:
$$H_3 = (I_4 \otimes H_1) \cdot (H_1 \otimes I_4) \cdot (I_2 \otimes H_1 \otimes I_2).$$
This leads to another fast algorithm for computing $H_3 \cdot b$ (b is a vector of suitable length), whose diagram is given in Figure 2.
Since the same factorization of the Kronecker power applies, the Vilenkin–Chrestenson and the trace transforms also have similar fast algorithms.

5. Discrete Transform of Vilenkin–Chrestenson

Let ξ be a primitive complex q-th root of unity. The Vilenkin–Chrestenson matrices are defined recursively as follows:
$$V_1 = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \xi & \xi^2 & \cdots & \xi^{q-1} \\ 1 & \xi^2 & \xi^4 & \cdots & \xi^{2(q-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^{q-1} & \xi^{2(q-1)} & \cdots & \xi^{(q-1)^2} \end{pmatrix}, \quad V_{k+1} = V_1 \otimes V_k, \ k \in \mathbb{Z}, \ k \geq 1,$$
where ⊗ denotes the Kronecker product. The elements of the matrix $V_k$ are $v_\omega(x) = \xi^{\langle \omega, x \rangle}$, where $\omega = (\omega_1, \omega_2, \ldots, \omega_k) \in \mathbb{Z}_q^k$ are the row labels and $x = (x_1, x_2, \ldots, x_k) \in \mathbb{Z}_q^k$ are the column labels, both in lexicographic order, $\mathbb{Z}_q = \{0, 1, 2, \ldots, q-1\}$, and $\langle \omega, x \rangle = \sum_{i=1}^k \omega_i x_i \in \mathbb{Z}_q$.
Some useful properties following directly from the definition are given below:
$$v_\omega(x) = v_x(\omega), \quad v_\omega(x)\,v_\omega(x') = v_\omega(x + x'), \quad v_\omega(0) = 1.$$
The first property shows that the matrices $V_k$ are symmetric.
Definition 4. 
Let $f : \mathbb{Z}_q^k \to \mathbb{C}$ be a function. The Vilenkin–Chrestenson transform of f is the function $\hat{f}_{VC} : \mathbb{Z}_q^k \to \mathbb{C}$, defined as follows:
$$\hat{f}_{VC}(\omega) = \sum_{x \in \mathbb{Z}_q^k} f(x)\,v_\omega(x), \quad \omega \in \mathbb{Z}_q^k. \qquad (9)$$
Let $TT_f$ be the vector whose coordinates are all values of the function f when the elements of $\mathbb{Z}_q^k$ are ordered lexicographically. It is called a truth table by analogy with the truth table of a Boolean function, but with complex values. The value vectors of f and $\hat{f}_{VC}$ are related as follows:
$$TT_{\hat{f}_{VC}} = V_k \cdot TT_f.$$
Hence, the Vilenkin–Chrestenson transform is reduced to a matrix-by-vector multiplication (see, for example, [16]).
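For illustration, a direct (slow) reference sketch of this matrix form, under the assumption that NumPy is available, is:

```python
# A direct reference sketch of the Vilenkin-Chrestenson transform:
# build V_k as a Kronecker power and multiply it by the truth table.
import numpy as np
from functools import reduce

def vc_transform(tt_f, q, k):
    xi = np.exp(2j * np.pi / q)  # a primitive complex q-th root of unity
    V1 = np.array([[xi ** ((i * j) % q) for j in range(q)] for i in range(q)])
    Vk = reduce(np.kron, [V1] * k)  # V_k = V_1 (x) ... (x) V_1
    return Vk @ np.asarray(tt_f, dtype=complex)
```

The fast variant replaces the Kronecker-power matrix $V_k$ by the factored product of Section 3, as in the fast_kron_apply sketch after Theorem 2.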
One very useful application of the presented transform is calculating the weight distribution of a linear code over a prime field. Let q be a prime and G be a generator matrix of the linear $[n,k]_q$ code C. We take a code C of full length, so $f_G(0) = 0$.
The relation between the Vilenkin–Chrestenson transform $\hat{f}_{VC}$ and the weight of the codeword $\omega G \in C$ is given by the following theorem.
Theorem 5. 
Let G be a generator matrix of the linear $[n,k]_q$ code C with full length. Then, for the weight of the codeword $\omega G \in C$, it holds that
$$\mathrm{wt}(\omega G) = \frac{(q-1)n - \hat{f}_{VC}(\omega)}{q}, \quad \omega \in \mathbb{F}_q^k, \qquad (10)$$
where $\hat{f}_{VC}$ is the Vilenkin–Chrestenson transform of the characteristic function $f_G$ of the code C.
Proof. 
It is known that all codewords of C are linear combinations of the rows of its generator matrix, so any codeword is obtained by multiplying G by a vector $\omega \in \mathbb{F}_q^k$. Let us denote the columns of the matrix G by $g_1, \ldots, g_n$. It follows that $\omega G = (\langle \omega, g_1 \rangle, \ldots, \langle \omega, g_n \rangle)$, where $\langle \omega, g_i \rangle$ is the inner product of ω and the vector $g_i$. Since $\mathrm{wt}(\omega G)$ is the number of nonzero coordinates of the vector, $n - \mathrm{wt}(\omega G)$ is the number of zeros in ωG. On the other hand, the summands in (9) are nonzero only if there exists a column in G proportional to x. Furthermore, if $f(x) \neq 0$ for some $x \neq 0$, then $f(x) = f(\alpha x)$ for each $\alpha \in \mathbb{F}_q \setminus \{0\}$. Hence, for each column x of the generator matrix G, it holds that
$$\sum_{\alpha \in \mathbb{F}_q \setminus \{0\}} v_\omega(\alpha x) = \sum_{\alpha \in \mathbb{F}_q \setminus \{0\}} \xi^{\langle \omega, \alpha x \rangle} = \begin{cases} q-1, & \text{if } \langle \omega, x \rangle = 0, \\ -1, & \text{if } \langle \omega, x \rangle \neq 0. \end{cases} \qquad (11)$$
As the code has full length, $f(0) = 0$. The summands in (9) can be rearranged such that exactly one sum of type (11) corresponds to each column of G. Then
$$\hat{f}_{VC}(\omega) = (q-1)(n - \mathrm{wt}(\omega G)) - \mathrm{wt}(\omega G).$$
This shows that, in this case, $\hat{f}_{VC}$ is an integer-valued function. Solving for $\mathrm{wt}(\omega G)$, we obtain (10). □
Example 2. 
Let C be the $[7,3]_3$ code with generator matrix
$$G = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 2 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
The characteristic function $f_G : \mathbb{F}_3^3 \to \mathbb{Z}$ of C with respect to G has the following truth table:
$$TT_{f_G} = (0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1).$$
We have that $TT_{\hat{f}_{VC}} = V_3 \cdot TT_{f_G}$, so
$$TT_{\hat{f}_{VC}} = (14, 5, 5, 2, -4, -1, 2, -1, -4, -1, -4, 2, -1, -1, -1, 2, -4, -1, -1, 2, -4, 2, -1, -4, -1, -1, -1).$$
Applying Theorem 5, we obtain that the nonzero weights in the code are 3, 4, 5, and 6, and the weight enumerator is $W_C(y) = 1 + 2y^3 + 6y^4 + 12y^5 + 6y^6$. The structure of the matrix $V_3$ is given below:
$$V_3 = (I_{3^{1-1}} \otimes V_1 \otimes I_{3^{3-1}}) \cdot (I_{3^{2-1}} \otimes V_1 \otimes I_{3^{3-2}}) \cdot (I_{3^{3-1}} \otimes V_1 \otimes I_{3^{3-3}}) = (V_1 \otimes I_9) \cdot (I_3 \otimes V_1 \otimes I_3) \cdot (I_9 \otimes V_1),$$
where $V_1 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & \xi & \xi^2 \\ 1 & \xi^2 & \xi \end{pmatrix}$. In block form,
$$V_3 = \begin{pmatrix} I_9 & I_9 & I_9 \\ I_9 & \xi I_9 & \xi^2 I_9 \\ I_9 & \xi^2 I_9 & \xi I_9 \end{pmatrix} \cdot \begin{pmatrix} V_1 \otimes I_3 & & \\ & V_1 \otimes I_3 & \\ & & V_1 \otimes I_3 \end{pmatrix} \cdot \begin{pmatrix} V_1 & & \\ & \ddots & \\ & & V_1 \end{pmatrix},$$
where $V_1 \otimes I_3 = \begin{pmatrix} I_3 & I_3 & I_3 \\ I_3 & \xi I_3 & \xi^2 I_3 \\ I_3 & \xi^2 I_3 & \xi I_3 \end{pmatrix}$ and the last factor consists of nine diagonal blocks $V_1$.
Each of the sparse matrices above has exactly three nonzero elements in its rows and columns.
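Putting the pieces together, the following sketch reproduces Example 2, assuming the hypothetical helpers characteristic_function (Section 2) and fast_kron_apply (Section 3) from the earlier sketches are in scope:

```python
# Reproducing Example 2: fast Vilenkin-Chrestenson transform of TT_{f_G}
# and the codeword weights from Theorem 5.
import numpy as np
from collections import Counter

q, n, k = 3, 7, 3
G = [[1, 1, 1, 1, 1, 0, 0],
     [0, 1, 1, 2, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]
xi = np.exp(2j * np.pi / q)
V1 = np.array([[xi ** ((i * j) % q) for j in range(q)] for i in range(q)])
tt = characteristic_function(G, q)                     # TT_{f_G}
f_hat = fast_kron_apply(V1, np.asarray(tt, dtype=complex), k)
f_hat = np.rint(f_hat.real).astype(int)                # integer-valued here
weights = ((q - 1) * n - f_hat) // q                   # Theorem 5, formula (10)
print(Counter(weights))  # weight distribution: A_0=1, A_3=2, A_4=6, A_5=12, A_6=6
```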
In the general case, a sparse factor of the Vilenkin–Chrestenson transform matrix has exactly q nonzero elements in each row and column, since $V_1$ is a $q \times q$ matrix. That is why the number of necessary operations is reduced to approximately $2kq^{k+1}$, while the same result would require about $2q^{2k}$ operations without the factorization of $V_k$. The same conclusion can be drawn for the trace transform as well.

6. Trace Transform

Let $\mathbb{F}_q$ be a composite finite field with q elements, where $q = p^m$ for a prime p and an integer m > 1. In the trace transform, the absolute trace of the inner product is used instead of the inner product itself, as in the Vilenkin–Chrestenson transform [27]. Let ζ be a primitive complex p-th root of unity, and
$$\tau_\omega(x) = \zeta^{\mathrm{Tr}(\langle \omega, x \rangle)}$$
for any $\omega, x \in \mathbb{F}_q^k$. We define the trace transform similarly to the Vilenkin–Chrestenson transform.
Definition 5. 
Let $\mathbb{F}_q$ be a finite field with q elements, $q = p^m$ for a prime number p, and let ζ be a primitive complex p-th root of unity. The trace transform of the function $f : \mathbb{F}_q^k \to \mathbb{C}$ is the function $\hat{f}_T : \mathbb{F}_q^k \to \mathbb{C}$, defined as follows:
$$\hat{f}_T(\omega) = \sum_{x \in \mathbb{F}_q^k} f(x)\,\tau_\omega(x) = \sum_{x \in \mathbb{F}_q^k} f(x)\,\zeta^{\mathrm{Tr}(\langle \omega, x \rangle)}, \quad \omega \in \mathbb{F}_q^k. \qquad (13)$$
The transform matrix here is the $q^k \times q^k$ matrix $T_k = (\tau_\omega(x))$, where the vectors ω (indexing the rows) and x (indexing the columns) in $\mathbb{F}_q^k$ are ordered lexicographically. Then the truth tables of f and $\hat{f}_T$ are connected by the equality
$$TT_{\hat{f}_T} = T_k \cdot TT_f.$$
Because of the symmetry and linearity of the inner product, the following properties hold:
$$\tau_\omega(x) = \tau_x(\omega), \quad \tau_\omega(x)\,\tau_\omega(x') = \tau_\omega(x + x').$$
Lemma 1. 
For $x \in \mathbb{F}_q^k$ it holds that
$$\sum_{\omega \in \mathbb{F}_q^k} \tau_\omega(x) = \begin{cases} q^k, & \text{for } x = 0, \\ 0, & \text{for } x \neq 0. \end{cases}$$
Proof. 
The lemma is a modification of [28] (Chapter 5, Lemma 9), and its proof is by induction on k. In the base step, k = 1, it holds that
$$\sum_{\omega \in \mathbb{F}_q} \tau_\omega(x) = \sum_{\omega \in \mathbb{F}_q} \zeta^{\mathrm{Tr}(x\omega)} = \begin{cases} q, & \text{for } x = 0, \\ 0, & \text{for } x \neq 0. \end{cases}$$
Indeed, for $x \neq 0$, when ω runs over all elements of $\mathbb{F}_q$, the product xω also takes every value of $\mathbb{F}_q$. Moreover, for any element $\alpha \in \mathbb{F}_q$, there are exactly $p^{m-1}$ elements $\alpha' \in \mathbb{F}_q$ with the same trace value $\mathrm{Tr}(\alpha') = \mathrm{Tr}(\alpha)$. Thus,
$$\sum_{\omega \in \mathbb{F}_q} \zeta^{\mathrm{Tr}(x\omega)} = \sum_{\omega \in \mathbb{F}_q} \zeta^{\mathrm{Tr}(\omega)} = p^{m-1} \sum_{\omega \in \mathbb{F}_p} \zeta^{\omega} = p^{m-1}\,\frac{1 - \zeta^p}{1 - \zeta} = 0.$$
Let the equality hold for some integer k. Consider the vectors in $\mathbb{F}_q^{k+1}$ written in the form $x = (x_0, x')$, $x_0 \in \mathbb{F}_q$, $x' \in \mathbb{F}_q^k$. Because of the linearity of the trace,
$$\sum_{\omega \in \mathbb{F}_q^{k+1}} \tau_\omega(x) = \sum_{\omega \in \mathbb{F}_q^{k+1}} \zeta^{\mathrm{Tr}(\langle \omega, x \rangle)} = \sum_{j \in \mathbb{F}_q} \sum_{\omega' \in \mathbb{F}_q^k} \zeta^{\mathrm{Tr}(jx_0 + \langle \omega', x' \rangle)} = \sum_{j \in \mathbb{F}_q} \sum_{\omega' \in \mathbb{F}_q^k} \zeta^{\mathrm{Tr}(jx_0)}\,\zeta^{\mathrm{Tr}(\langle \omega', x' \rangle)} = \Big(\sum_{j \in \mathbb{F}_q} \tau_j(x_0)\Big)\Big(\sum_{\omega' \in \mathbb{F}_q^k} \tau_{\omega'}(x')\Big). \qquad (15)$$
According to the induction hypothesis and the base step, this product is nonzero exactly when $x_0 = 0$ and $x' = 0$; in that case, it equals $q \cdot q^k = q^{k+1}$.
Hence, the equality holds for each $k \in \mathbb{N}$. □
In the derivation of (15), it is also used that
$$\zeta^{\mathrm{Tr}(\langle \omega, x \rangle)} = \zeta^{\mathrm{Tr}(jx_0)}\,\zeta^{\mathrm{Tr}(\langle \omega', x' \rangle)},$$
where $\omega = (j, \omega')$, $x = (x_0, x') \in \mathbb{F}_q^{k+1}$, $j, x_0 \in \mathbb{F}_q$, $\omega', x' \in \mathbb{F}_q^k$. This means that the matrices $T_k$ are generated as a Kronecker power, i.e., $T_{k+1} = T_1 \otimes T_k$ and $T_k = \otimes^k T_1$ for $k \in \mathbb{N}$.
The trace transform also has applications in coding theory, but it is used for linear codes over composite finite fields. In this case, a theorem similar to Theorem 5 is proved.
Theorem 6. 
Let G be a generator matrix of the linear $[n,k]_q$ code C with full length. Then, for the weights of all codewords of C, it holds that
$$\mathrm{wt}(\omega G) = \frac{(q-1)n - \hat{f}_T(\omega)}{q}, \quad \omega \in \mathbb{F}_q^k, \qquad (16)$$
where $\hat{f}_T$ is the trace transform of the characteristic function $f_G$ of the code C.
Proof. 
The proof is similar to the proof of Theorem 5. Each coordinate of ωG is equal to the inner product of ω and the corresponding column of G. Since $\mathrm{wt}(\omega G)$ is the number of nonzero coordinates of ωG, $n - \mathrm{wt}(\omega G)$ is the number of zeros. On the other hand, the summand $f(x)\,\zeta^{\mathrm{Tr}(\langle \omega, x \rangle)}$ in (13) is nonzero if and only if a vector proportional to x occurs as a column of G. If $f_G(x) \neq 0$ for some $x \neq 0$, then $f_G(x) = f_G(\alpha x)$ for all $\alpha \in \mathbb{F}_q \setminus \{0\}$. For each column x of the matrix G, it holds that
$$\sum_{\alpha \in \mathbb{F}_q \setminus \{0\}} \tau_\omega(\alpha x) = \sum_{\alpha \in \mathbb{F}_q \setminus \{0\}} \zeta^{\mathrm{Tr}(\langle \omega, \alpha x \rangle)} = \begin{cases} q-1, & \text{if } \langle \omega, x \rangle = 0, \\ -1, & \text{if } \langle \omega, x \rangle \neq 0. \end{cases} \qquad (17)$$
Indeed, let $\langle \omega, x \rangle \neq 0$. Then $\{\alpha \langle \omega, x \rangle \mid \alpha \in \mathbb{F}_q, \alpha \neq 0\} = \mathbb{F}_q \setminus \{0\}$. The absolute trace Tr is a linear map onto $\mathbb{F}_p$ whose kernel has $p^{m-1}$ elements. Hence, the sum (17) has exactly $p^{m-1} - 1$ summands $\zeta^0 = 1$ and exactly $p^{m-1}$ summands $\zeta^\lambda$ for each $\lambda \in \mathbb{F}_p$, $\lambda \neq 0$. Further,
$$\sum_{\alpha \in \mathbb{F}_q \setminus \{0\}} \zeta^{\mathrm{Tr}(\langle \omega, \alpha x \rangle)} = p^{m-1} - 1 + p^{m-1} \sum_{\lambda \in \mathbb{F}_p \setminus \{0\}} \zeta^\lambda = -1 + p^{m-1} \sum_{\lambda \in \mathbb{Z}_p} \zeta^\lambda = -1.$$
Because the code has full length, $f_G(0) = 0$. The summands in (13) can be rearranged in such a way that each column of G corresponds to exactly one sum of type (17). Then
$$\hat{f}_T(\omega) = (q-1)(n - \mathrm{wt}(\omega G)) - \mathrm{wt}(\omega G),$$
which leads to (16). □
It was proved in [13] that if the characteristic of the field is 2 and the chosen basis of $\mathbb{F}_q$ is self-dual, then the matrix $T_1$ is a $q \times q$ Sylvester–Hadamard matrix (this matrix is denoted by Λ in [13] (Proposition 2.1)). We give an example with a quaternary code and two characteristic vectors, one with respect to a self-dual basis and another with respect to the ordinary polynomial basis 1, x. Recall that the elements $\beta_1, \ldots, \beta_m \in \mathbb{F}_q$, where $q = p^m$ for a prime p, form a self-dual basis of the field if $\mathrm{Tr}(\beta_i \beta_j) = \delta_{ij}$ for the Kronecker delta δ.
Example 3. 
Let q = 4 and $\mathbb{F}_4 = \{0, 1, x, x+1 = x^2\}$. We consider the [4,2] quaternary code with generator matrix
$$G = \begin{pmatrix} 1 & 0 & 1 & x^2 \\ 0 & 1 & x^2 & x \end{pmatrix}.$$
If we take the ordering $0 < 1 < x < x+1$, then the lexicographic ordering of the quaternary vectors of length 2 is
$$(0,0) < (0,1) < (0,x) < (0,x+1) < (1,0) < \cdots < (x+1,x) < (x+1,x+1),$$
and the characteristic vector of the code is
$$f_G(C) = (0, 1, 1, 1, 1, 0, 0, 2, 1, 2, 0, 0, 1, 0, 2, 0).$$
In this case, the trace transform matrix is $T_2 = \otimes^2 T_1$, where
$$T_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{pmatrix}.$$
Then $T_2 \cdot f_G(C) = (12, 0, 0, 0, 0, -4, 4, -4, 0, -4, -4, 4, 0, 4, -4, -4)$, and after applying (16), we obtain that the weight enumerator of the given code is $W_C(z) = 1 + 3z^2 + 6z^3 + 6z^4$.
The quaternary field has a self-dual basis, namely $\beta_1 = x$, $\beta_2 = x+1$. Then we have the following ordering of the elements: $0 < \beta_2 < \beta_1 < 1$, since $\beta_2 = (0,1)$, $\beta_1 = (1,0)$, and $1 = \beta_1 + \beta_2 = (1,1)$. In this case, the lexicographic ordering of the quaternary vectors of length 2 is
$$(0,0) < (0,\beta_2) < (0,\beta_1) < (0,1) < (\beta_2,0) < \cdots < (1,\beta_1) < (1,1).$$
The new characteristic vector is
$$f'_G(C) = (0, 1, 1, 1, 1, 0, 2, 0, 1, 0, 0, 2, 1, 2, 0, 0).$$
Now $T_1 = \otimes^2 H_1$, and so $T_2 = \otimes^4 H_1$ is the $16 \times 16$ Sylvester–Hadamard matrix.
After the corresponding calculations, the same weight distribution of the code is obtained, but the algorithm is much faster, since we can use the fast Walsh transform presented in Section 4, as the sketch below shows.
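A sketch of this check (ours): with the self-dual basis ordering, $T_2 = \otimes^4 H_1$, so the fwht sketch from Section 4 applies directly to the new characteristic vector.

```python
# Checking Example 3: under the self-dual basis, the trace transform
# matrix is the 16x16 Sylvester-Hadamard matrix, so the fast Walsh
# transform suffices. fwht is the sketch from Section 4.
from collections import Counter

q, n = 4, 4
f_prime = [0, 1, 1, 1, 1, 0, 2, 0, 1, 0, 0, 2, 1, 2, 0, 0]
f_hat = fwht(f_prime)                              # T_2 * f'_G(C)
weights = [((q - 1) * n - w) // q for w in f_hat]  # formula (16)
print(Counter(weights))  # weight distribution: A_0=1, A_2=3, A_3=6, A_4=6
```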
In the general case, when the trace transform cannot be reduced to the Walsh–Hadamard transform, the matrix $T_1$ is of size $q \times q$, i.e., $p^m \times p^m$. Then the matrix $T_k$ is a $p^{mk} \times p^{mk}$ matrix, so the computation of $TT_{\hat{f}_T}$ requires $p^{2mk}$ multiplications and the same number of additions, i.e., $2p^{2mk}$ operations overall. Let $T_k$ be factorized into k sparse $p^{mk} \times p^{mk}$ matrices. Each of them has $p^m$ nonzero elements in each row and column, so its multiplication by a vector costs at most $2p^{mk} \cdot p^m$ multiplications and additions. The total number of operations to obtain the same result is then at most $2kp^{m(k+1)}$.
We finish this section with a proposition showing that the trace transform can be reduced to a Vilenkin–Chrestenson transform in some cases. Let $q = p^m$, where p is an odd prime and m > 1 is an odd integer. According to Theorem 1, there exists a self-dual basis $\beta_1, \ldots, \beta_m$ of $\mathbb{F}_q$ over $\mathbb{F}_p$. Let $\mathbb{F}_q = \{\alpha_0 = 0, \alpha_1, \ldots, \alpha_{q-1}\}$, where the elements are ordered lexicographically with respect to the self-dual basis. Therefore,
$$\alpha_s = \lambda_1(\alpha_s)\beta_1 + \cdots + \lambda_m(\alpha_s)\beta_m, \quad s = 0, 1, \ldots, q-1,$$
and $\lambda(\alpha_s) = (\lambda_1(\alpha_s), \ldots, \lambda_m(\alpha_s)) \in \mathbb{F}_p^m$. Because of the lexicographic ordering, we have the following representation of the index s in the p-ary numeral system:
$$s = \lambda_1(\alpha_s) p^{m-1} + \cdots + \lambda_{m-1}(\alpha_s) p + \lambda_m(\alpha_s).$$
Let $V_m(p)$ be the Vilenkin–Chrestenson matrix for the p-th root of unity ζ considered in this section. Then, the following statement holds.
Proposition 1. 
For $\mathbb{F}_q = \{\alpha_0 = 0, \alpha_1, \ldots, \alpha_{q-1}\}$ ordered as above, the matrices $T_1$ and $V_m(p)$ coincide.
Proof. 
According to the linearity of the trace function, we have
$$\mathrm{Tr}(\alpha_j \alpha_{j'}) = \mathrm{Tr}\Big(\sum_{s=1}^m \lambda_s(\alpha_j)\beta_s \sum_{s'=1}^m \lambda_{s'}(\alpha_{j'})\beta_{s'}\Big) = \mathrm{Tr}\Big(\sum_{s=1}^m \sum_{s'=1}^m \lambda_s(\alpha_j)\lambda_{s'}(\alpha_{j'})\beta_s\beta_{s'}\Big) = \sum_{s=1}^m \sum_{s'=1}^m \lambda_s(\alpha_j)\lambda_{s'}(\alpha_{j'})\,\mathrm{Tr}(\beta_s\beta_{s'}).$$
Since the basis is self-dual, it holds that
$$\lambda_s(\alpha_j)\lambda_{s'}(\alpha_{j'})\,\mathrm{Tr}(\beta_s\beta_{s'}) = \begin{cases} \lambda_s(\alpha_j)\lambda_s(\alpha_{j'}), & \text{if } s = s', \\ 0, & \text{if } s \neq s'. \end{cases}$$
Then
$$\mathrm{Tr}(\alpha_j \alpha_{j'}) = \sum_{s=1}^m \lambda_s(\alpha_j)\lambda_s(\alpha_{j'}) = \langle \lambda(\alpha_j), \lambda(\alpha_{j'}) \rangle.$$
Hence, for $\omega = \alpha_j$ and $x = \alpha_{j'}$,
$$\tau_\omega(x) = \zeta^{\langle \lambda(\omega), \lambda(x) \rangle} = v_{\lambda(\omega)}(\lambda(x)),$$
and so
$$\hat{f}_T(\omega) = \hat{f}_{VC}(\lambda(\omega)),$$
where $\hat{f}_{VC}$ is the Vilenkin–Chrestenson transform with the p-th root of unity ζ. Transferring this result to the transform matrices, we obtain $T_1 = V_m(p)$. □

7. Conclusions

Computing the weight distribution of a linear code is not an easy problem, especially when the considered finite field is composite. Most algorithms that solve this problem are based on searching in large amounts of generated data. Therefore, other different techniques, such as Fourier-type transforms, are also being investigated and applied.
In this paper, three fast discrete transforms are studied, namely the Walsh–Hadamard, the Vilenkin–Chrestenson, and the trace transform, mainly considering the problem of representing their matrices as a product of sparse matrices. Using these transforms, the weights of a linear code can be computed without generating the codewords themselves. Algorithms implementing the given fast discrete transforms not only have better complexity but can also be easily parallelized.

Author Contributions

Conceptualization, I.B. and P.P.; methodology, I.B.; validation, I.B., M.D.-S. and P.P.; formal analysis, I.B., M.D.-S. and P.P.; investigation, I.B., M.D.-S. and P.P.; resources, I.B. and P.P.; writing—original draft preparation, I.B., M.D.-S. and P.P.; writing—review and editing, I.B. and M.D.-S.; visualization, M.D.-S.; supervision, I.B.; project administration, I.B.; funding acquisition, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Bulgarian National Science Fund grant number KP-06-H62/2/13.12.2022.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article, because it describes entirely theoretical research.

Acknowledgments

The authors would like to thank Stefka Bouyuklieva for her valuable discussions and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Elliott, D.F.; Rao, K.R. Fast Transforms: Algorithms, Analyses, Applications; Academic Press: Orlando, FL, USA, 1982.
2. Karpovsky, M.G.; Stanković, R.S.; Astola, J.T. Spectral Logic and Its Applications for the Design of Digital Devices; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2008.
3. Joux, A. Algorithmic Cryptanalysis; Chapman & Hall/CRC: Boca Raton, FL, USA, 2009.
4. Maitra, S.; Sarkar, P. Cryptographically Significant Boolean Functions with Specified Five Valued Walsh Spectra. Theor. Comp. Sci. 2002, 276, 133–146.
5. Uyan, E.; Çalik, Ç.; Doğanaksoy, A. Counting Boolean Functions with Specified Values in Their Walsh Spectrum. J. Comp. Appl. Math. 2014, 259, 522–528.
6. Bouyukliev, I.; Bouyuklieva, S. Dual transform and projective self-dual codes. Adv. Math. Commun. 2024, 18, 328–341.
7. Kaski, P.; Östergård, P. Classification Algorithms for Codes and Designs; Springer: Berlin/Heidelberg, Germany, 2006.
8. Betten, A.; Braun, M.; Fripertinger, H.; Kerber, A.; Kohnert, A.; Wassermann, A. Error-Correcting Linear Codes: Classification by Isometry and Applications; Springer: Berlin/Heidelberg, Germany, 2006.
9. Cheng, W.; Liu, Y.; Guilley, S.; Rioul, O. Towards finding best linear codes for side-channel protections. In Proceedings of the 10th International Workshop on Security Proofs for Embedded Systems (PROOFS 2021), Beijing, China, 17 September 2021.
10. Hannusch, C.; Major, S.R. Torch: Software Package for the Search of Linear Binary Codes. In Proceedings of the 2022 IEEE 2nd Conference on Information Technology and Data Science (CITDS), Debrecen, Hungary, 16–18 May 2022; pp. 103–106.
11. Karpovsky, M.G. On the Weight Distribution of Binary Linear Codes. IEEE Trans. Inform. Theory 1979, IT-25, 105–109.
12. Karpovsky, M.G. Weight distribution of translates, covering radius, and perfect codes correcting errors of given weights. IEEE Trans. Inform. Theory 1981, 27, 462–472.
13. Piperkov, P.; Bouyukliev, I.; Bouyuklieva, S. An algorithm for computing the weight distribution of a linear code over a composite finite field with characteristic 2. In Recent Topics in Differential Geometry and Its Related Fields; Adachi, T., Hashimoto, H., Eds.; World Scientific Publishing Company: Singapore, 2019; pp. 163–181.
14. Piperkov, P.; Bouyukliev, I.; Bouyuklieva, S. An algorithm for computing the covering radius of a linear code based on Vilenkin–Chrestenson transform. In New Horizons in Differential Geometry and Its Related Fields; Adachi, T., Hashimoto, H., Eds.; World Scientific Publishing Company: Singapore, 2022; pp. 105–123.
15. Bikov, D.; Bouyukliev, I. Parallel Fast Walsh Transform Algorithm and Its Implementation with CUDA on GPUs. Cybern. Inf. Technol. 2018, 18, 21–43.
16. Van Loan, C. Computational Frameworks for the Fast Fourier Transform; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992; ISBN 978-0-898712-85-8.
17. Henderson, K.W. Some Notes on the Walsh Functions. IEEE Trans. Electron. Comput. 1964, EC-13, 50–52.
18. Walsh, J.L. A Closed Set of Normal Orthogonal Functions. Am. J. Math. 1923, 45, 5–24.
19. Good, I.J. The Interaction Algorithm and Practical Fourier Analysis. J. R. Stat. Soc. 1958, 20, 361–372.
20. Chrestenson, H.E. A class of generalized Walsh functions. Pac. J. Math. 1955, 5, 17–31.
21. Vilenkin, N.Y. On a class of complete orthonormal systems. Izv. Akad. Nauk SSSR Ser. Mat. 1947, 11, 363–400.
22. Bespalov, M.S. Discrete Chrestenson transform. Probl. Inf. Transm. 2010, 46, 353–375.
23. Farkov, Y.A. Discrete wavelets and the Vilenkin–Chrestenson transform. Math. Notes 2011, 89, 871–884.
24. Lidl, R.; Niederreiter, H. Introduction to Finite Fields and Their Applications; Cambridge University Press: Cambridge, UK, 1986.
25. Mullen, G.L.; Panario, D. Handbook of Finite Fields; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013.
26. Seroussi, G.; Lempel, A. Factorization of Symmetric Matrices and Trace-Orthogonal Bases in Finite Fields. SIAM J. Comput. 1980, 9, 758–767.
27. Assmus, E.F.; Mattson, H.F. Coding and combinatorics. SIAM Rev. 1974, 16, 349–388.
28. MacWilliams, F.J.; Sloane, N.J.A. The Theory of Error-Correcting Codes; Elsevier Science Publishers: Amsterdam, The Netherlands, 1977.
Figure 1. Fast Walsh transform diagram.
Figure 2. Fast Walsh transform diagram with rearranged steps.
