Article

Third-Order Tensor Decorrelation Based on 3D FO-HKLT with Adaptive Directional Vectorization

by Roumen K. Kountchev 1,*, Rumen P. Mironov 1 and Roumiana A. Kountcheva 2

1 Department of Radio Communications and Video Technologies, Faculty of Telecommunications, Technical University of Sofia, 1000 Sofia, Bulgaria
2 TK Engineering, 1582 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 854; https://doi.org/10.3390/sym14050854
Submission received: 25 March 2022 / Revised: 10 April 2022 / Accepted: 14 April 2022 / Published: 21 April 2022
(This article belongs to the Special Issue Multidimensional Signal Processing and Its Applications)

Abstract: In this work, we present a new hierarchical decomposition aimed at the decorrelation of a cubical tensor of size N = 2^n, based on the 3D Frequency-Ordered Hierarchical KLT (3D FO-HKLT). The decomposition is executed in three consecutive stages. In the first stage, after adaptive directional vectorization (ADV) of the input tensor, the vectors are processed through the one-dimensional FO-Adaptive HKLT (FO-AHKLT), and after folding, the first intermediate tensor is calculated. In the second stage, FO-AHKLT is applied to the vectors obtained after ADV of the first intermediate tensor, and after folding, the second intermediate tensor is calculated. In the third stage, ADV is applied to the second intermediate tensor, FO-AHKLT is applied to the resulting vectors, and the output tensor is obtained. The orientation of the vectors calculated from each tensor could be horizontal, vertical or lateral. The best orientation is chosen through analysis of their covariance matrix, based on its symmetry properties. The kernel of FO-AHKLT is the optimal decorrelating KLT with a matrix of size 2 × 2. To achieve higher decorrelation of the decomposition components, the direction of the vectors obtained after unfolding of the tensor in each of the three consecutive stages is chosen adaptively. The computational complexity of FO-AHKLT is lower than that of the Hierarchical Tucker and Tensor Train decompositions, with which it is compared.

1. Introduction

The main tensor decompositions could be divided into two groups.
The first group comprises decompositions executed in the spatial domain of the tensor. These are the famous Canonical Polyadic Decomposition (CPD), Higher-Order Singular Value Decomposition (HOSVD) [1,2,3,4], Tensor Train (TT) Decomposition [5], Hierarchical Tucker (H-Tucker) algorithm [6] and some of their modifications [7,8], based on the calculation of the tensor eigenvalues and eigenvectors. Their most important feature is that they are optimal regarding the minimization of the mean square approximation error that results from the "truncation" of the low-energy components. The calculation of the retained components is based on iterative methods [9,10] that need a relatively small number of mathematical operations to achieve the requested accuracy. The hierarchical tensor decompositions based on the H-Tucker algorithm are presented in [8,11]. The compositional hierarchical tensor factorization introduced in [8] disentangles the hierarchical causal structure of object image formation, but its computational complexity (or Complexity, below) is not presented. In [11], a hierarchical decomposition of high-order tensors is offered, based on the Tensor-Train Hierarchical SVD (TT-HSVD). This approach permits parallel processing, which significantly accelerates the process. Unlike the TT-SVD algorithm, TT-HSVD applies SVDs to matrices of smaller dimensions, which results in lower Complexity.
The second group comprises tensor decompositions performed in the transform domain, which use reversible 3D linear orthogonal transforms such as the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT), etc. [12,13,14]. This approach is distinguished by its flexibility regarding the choice of the transform based on the processed data contents.
In this work, we present an alternative: new hierarchical 3D tensor decompositions based on the famous statistical orthogonal Karhunen–Loeve Transform (KLT) [15,16]. They are close to the optimal transform, which ensures full decorrelation of the decomposition components, but they do not need iterations and have lower computational complexity. As a basis, we present here the decomposition called 3D Frequency-Ordered Adaptive Hierarchical Karhunen–Loeve Transform (3D FO-AHKLT), whose efficiency is enhanced through adaptive directional tensor vectorization (ADV).
In Section 2, we present the method for 3D hierarchical adaptive transform based on the one-dimensional Frequency-Ordered Adaptive Hierarchical Karhunen–Loeve Transform (FO-AHKLT). Section 3 gives the details on the enhancement of the 3D FO-HKLT efficiency through correlation analysis, Section 4 explains the adaptive control of each FO-AHKLT level, and Section 5 presents the resulting algorithm. In Section 6, we analyze the computational complexity of the new approaches compared to that of the well-known H-Tucker and TT decompositions; Section 7 contains the conclusions.

2. Method for 3D Adaptive Frequency-Ordered Hierarchical KLT of a Cubical Tensor

The proposed method for the decomposition of a third-order cubical tensor $\mathcal{X} \in \mathbb{R}^{N \times N \times N}$ is based on the 3D Frequency-Ordered Hierarchical Karhunen–Loeve Transform (3D FO-HKLT), defined by the relation below [17]:
$$\mathcal{X} = \frac{1}{N^3}\sum_{m=0}^{N-1}\sum_{v=0}^{N-1}\sum_{l=0}^{N-1} s(m,v,l)\,\big[\vec{k}_m^{(1)} \circ \vec{k}_v^{(2)} \circ \vec{k}_l^{(3)}\big]. \qquad (1)$$
Here, N = 2^n is the size of the tensor $\mathcal{X}$ with nonnegative components x(i, j, k). The coefficients s(m, v, l) are the elements of the spectrum tensor S, which is of the same size as $\mathcal{X}$. Each coefficient s(m, v, l) represents the weight of the basic tensor $\mathcal{K}_{m,v,l}$. Each basic tensor is represented as the outer product of three vectors:
$$\mathcal{K}_{m,v,l} = \vec{k}_m^{(1)} \circ \vec{k}_v^{(2)} \circ \vec{k}_l^{(3)}. \qquad (2)$$
Here, "$\circ$" denotes the outer product of two column-vectors ($\vec{x} \circ \vec{y} = \vec{x}\cdot\vec{y}^T$), and $\vec{k}_m^{(1)}, \vec{k}_v^{(2)}, \vec{k}_l^{(3)}$ are the basic vectors, obtained after execution of the three stages of the 3D FO-HKLT. The main part of the power of the tensor $\mathcal{X}$ is concentrated in its first decomposition components $s(m,v,l)\,\mathcal{K}_{m,v,l}$ of Equation (1), and high decorrelation between the components is achieved. The kernel of the 3D FO-HKLT, denoted KLT2×2, is a KLT with a transform matrix of size 2 × 2.
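Equation (1) is a sum of N^3 weighted outer products and can be evaluated compactly, for example with numpy.einsum. The following is a minimal sketch (the array names S, K1, K2, K3 and the function name are illustrative assumptions; the rows of K1, K2, K3 are assumed to hold the basic vectors of the three stages):

```python
import numpy as np

def reconstruct_from_spectrum(S, K1, K2, K3):
    """Evaluate Eq. (1): X = (1/N^3) * sum_{m,v,l} s(m,v,l) k_m(1) o k_v(2) o k_l(3).

    S          -- spectrum tensor of shape (N, N, N)
    K1, K2, K3 -- (N, N) matrices whose rows are the basic vectors
                  k_m(1), k_v(2), k_l(3) of stages S1, S2, S3
    """
    N = S.shape[0]
    # Element (i, j, k) of the outer product k_m(1) o k_v(2) o k_l(3)
    # is K1[m, i] * K2[v, j] * K3[l, k]; einsum sums over m, v, l.
    return np.einsum('mvl,mi,vj,lk->ijk', S, K1, K2, K3) / N**3
```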
The decomposition comprises three consecutive stages arranged in accordance with the correlation analysis of the elements of the tensor $\mathcal{X}$. One example decomposition for the tensor $\mathcal{X}$ of size N × N × N (for N = 8) is shown in Figure 1. After applying the 3D FO-AHKLT, the input tensor $\mathcal{X}$ is sequentially transformed into the intermediate tensors E and F and the output tensor S. The 3D FO-AHKLT is separable, and this permits it to be executed by using the one-dimensional FO-AHKLT, whose graph for the case N = 8 is shown in Figure 2. As a result, the tensor $\mathcal{X}$ is transformed into the first intermediate tensor E, of the same size.
In the general case, FO-HKLT is executed on the N² N-dimensional vectors $\vec{x}_s(u)$, oriented in the horizontal (u = 1), vertical (u = 2) or lateral (u = 3) direction. These vectors are the result of the ADV of the tensor $\mathcal{X}$. The choice of the orientation direction of the vectors $\vec{x}_s(u)$ for s = 1, 2, …, N² is defined through analysis of their covariance matrices $\mathbf{K}_x(u)$. Before the calculation of FO-HKLT, each N-dimensional vector $\vec{x}_s(u)$ is divided into N/2 two-component vectors $\vec{x}_{q,s}(u)$. For the case N = 8, each vector $\vec{x}_s(u)$ is divided into 4 vectors $\vec{x}_{q,s}(u)$, for q = 1, 2, 3, 4. In this case, the total number of two-component vectors $\vec{x}_{q,s}(u)$ is 4N² = 256, and they form 4 sub-groups (one per q) of N² = 64 vectors each.
Let $\vec{x}_s = [x_{1,s}, x_{2,s}, \ldots, x_{N,s}]^T$ and $\vec{e}_s = [e_{1,s}, e_{2,s}, \ldots, e_{N,s}]^T$ be the input and output column-vectors of N = 2^n components, respectively, where n is the number of FO-HKLT hierarchical levels. The relation between the vectors $\vec{e}_s$ and $\vec{x}_s$ is [17]:
$$\vec{e}_s = \mathbf{P}_n(2^n)\Big[\prod_{p=1}^{n}\mathbf{G}_p(2^n)\Big]\vec{x}_s = \mathbf{T}_{\text{FO-HKLT}}(2^n)\,\vec{x}_s \quad \text{for } s = 1, 2, \ldots, N^2, \qquad (3)$$
where $\mathbf{T}_{\text{FO-HKLT}}(2^n) = \mathbf{P}_n(2^n)\big[\prod_{p=1}^{n}\mathbf{G}_p(2^n)\big]$ is the FO-HKLT matrix; $\mathbf{P}_n(2^n)$, of size 2^n × 2^n, is the permutation matrix for the last level n of FO-HKLT, and $\prod_{p=1}^{n}\mathbf{G}_p(2^n)$ is the product of the n sparse transform matrices $\mathbf{G}_p(2^n)$ for p = 1, 2, 3, …, n. Each matrix $\mathbf{G}_p(2^n)$ is defined as follows:
$$\mathbf{G}_p(2^n) = \bigoplus_{m=0}^{2^{\,n-p}-1}\begin{bmatrix} \bigoplus_{j=1}^{2^{\,p-1}} c_{p,\,m2^{\,p-1}+j} & \bigoplus_{j=1}^{2^{\,p-1}} s_{p,\,m2^{\,p-1}+j} \\[4pt] \bigoplus_{j=1}^{2^{\,p-1}} s_{p,\,m2^{\,p-1}+j} & -\bigoplus_{j=1}^{2^{\,p-1}} c_{p,\,m2^{\,p-1}+j} \end{bmatrix} \qquad (4)$$
for p = 1, 2, 3, …, n, where “⊕” denotes the direct sum of matrices.
In the equations, figures and text below, the abbreviations $c_{p,q} = \cos\theta_{p,q}$ and $s_{p,q} = \sin\theta_{p,q}$ are used. Here:
$$\theta_{p,q} = \arctan\Big\{2k_3(p,q)\Big/\Big[k_1(p,q) - k_2(p,q) + \sqrt{(k_1(p,q) - k_2(p,q))^2 + 4k_3^2(p,q)}\,\Big]\Big\}, \qquad (5)$$
$$k_1(p,q) = E\{x_{1,s}^2(p,q)\};\quad k_2(p,q) = E\{x_{2,s}^2(p,q)\};\quad k_3(p,q) = E\{x_{1,s}(p,q)\,x_{2,s}(p,q)\}. \qquad (6)$$
Here, $E\{x_{i,s}(p,q)\} = (1/N^2)\sum_{s=1}^{N^2} x_{i,s}(p,q)$ for i = 1, 2 denotes the averaging operator.
If $\beta_2 = 2k_3(p,q)$ and $\beta_1 = k_1(p,q) - k_2(p,q) + \sqrt{[k_1(p,q) - k_2(p,q)]^2 + 4k_3^2(p,q)}$ are two real numbers, the extended arctan function $\varepsilon\arctan(\beta_1, \beta_2)$, whose range is extended to (−π, π] by adding ±π, is [18]:
$$\varepsilon\arctan(\beta_1,\beta_2) = \begin{cases} \arctan(\beta_2/\beta_1), & \text{if } \beta_1 > 0; \\ \arctan(\beta_2/\beta_1) + \pi, & \text{if } \beta_1 < 0,\ \beta_2 \ge 0; \\ \arctan(\beta_2/\beta_1) - \pi, & \text{if } \beta_1 < 0,\ \beta_2 < 0; \\ \pi/2, & \text{if } \beta_1 = 0,\ \beta_2 > 0; \\ -\pi/2, & \text{if } \beta_1 = 0,\ \beta_2 < 0; \\ 0, & \text{if } \beta_2 = \beta_1 = 0. \end{cases} \qquad (7)$$
In particular, the direct and inverse KLT2×2 of the elements $x_{1s}$ and $x_{2s}$, which have the same spatial position in the couple of matrices X₁ and X₂ of size N × N, for $m_1 = E\{x_{1s}\} = 0$ and $m_2 = E\{x_{2s}\} = 0$ (valid for the Adaptive KLT2×2), are:
$$y_{1s} = x_{1s}\cos\theta_{1,2} + x_{2s}\sin\theta_{1,2}; \qquad x_{1s} = y_{1s}\cos\theta_{1,2} + y_{2s}\sin\theta_{1,2}; \qquad (8)$$
$$y_{2s} = x_{1s}\sin\theta_{1,2} - x_{2s}\cos\theta_{1,2}; \qquad x_{2s} = y_{1s}\sin\theta_{1,2} - y_{2s}\cos\theta_{1,2}; \qquad (9)$$
or $\vec{y}_s = \mathbf{T}(2)\,\vec{x}_s$ and $\vec{x}_s = \mathbf{T}(2)\,\vec{y}_s$ for s = 1, 2, …, N².
Here, $y_{1s}$ and $y_{2s}$ are the corresponding elements in the couple of transformed matrices Y₁ and Y₂ (each of size N × N); $\vec{y}_s = [y_{1,s}, y_{2,s}]^T$, $\vec{x}_s = [x_{1,s}, x_{2,s}]^T$ and $\mathbf{T}(2) = \begin{bmatrix}\cos\theta_{1,2} & \sin\theta_{1,2} \\ \sin\theta_{1,2} & -\cos\theta_{1,2}\end{bmatrix} = \begin{bmatrix}c_{1,2} & s_{1,2} \\ s_{1,2} & -c_{1,2}\end{bmatrix}$.
In particular, for $\theta_{1,2} = \pi/4$, the relations (8) and (9) become (up to the scale factor $1/\sqrt{2}$):
$$y_{1s} = x_{1s} + x_{2s}; \quad y_{2s} = x_{1s} - x_{2s}; \quad x_{1s} = \tfrac{1}{2}(y_{1s} + y_{2s}); \quad x_{2s} = \tfrac{1}{2}(y_{1s} - y_{2s}). \qquad (10)$$
Hence, in this case, the KLT2×2 coincides with the Walsh–Hadamard Transform (WHT).
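The kernel just described can be condensed into a short routine. A minimal sketch follows (function names are illustrative); note that the extended arctan of Equation (7) coincides with the standard atan2(β₂, β₁):

```python
import numpy as np

def klt2x2_angle(x1, x2):
    """Rotation angle of the decorrelating KLT2x2, per Eqs. (5)-(7).

    x1, x2 -- 1-D arrays holding the paired samples x_{1,s}, x_{2,s}
              (assumed zero-mean, as for the Adaptive KLT2x2).
    """
    k1 = np.mean(x1 * x1)          # k1 = E{x1^2}
    k2 = np.mean(x2 * x2)          # k2 = E{x2^2}
    k3 = np.mean(x1 * x2)          # k3 = E{x1 x2}
    b2 = 2.0 * k3
    b1 = k1 - k2 + np.sqrt((k1 - k2) ** 2 + 4.0 * k3 ** 2)
    return np.arctan2(b2, b1)      # eps-arctan(b1, b2) of Eq. (7)

def klt2x2(x1, x2):
    """Direct KLT2x2 of Eqs. (8)-(9); the matrix T(2) is involutory,
    so applying it twice restores the input."""
    t = klt2x2_angle(x1, x2)
    c, s = np.cos(t), np.sin(t)
    return c * x1 + s * x2, s * x1 - c * x2      # y1, y2
```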
In level n, the components of the column-vectors $\vec{y}_{n,s} = \mathbf{G}_n(2^n)\,\vec{y}_{n-1,s}$ (respectively, the components of the matrices $Y_{n,k}$ for k = 0, 1, …, N − 1) are rearranged by the permutation matrix $\mathbf{P}_n(2^n)$.
From the components of the column-vectors $\vec{e}_s = \mathbf{P}_n(2^n)\,\vec{y}_{n,s}$, the frequency-ordered matrices $E_r$ (i.e., E₀, E₁, …, E_{N−1}) are obtained, in accordance with the relation between their sequential number r and the sequential number k of the matrices $Y_{n,k}$.
The permutation represented by the matrix $\mathbf{P}_n(2^n)$ is defined as follows:
  • The binary code $k_{n-1}, k_{n-2}, \ldots, k_0$ of the sequential decimal number k = 0, 1, …, 2^n − 1 of the component $Y_{n,k}$ is arranged inversely (i.e., $k_0, k_1, \ldots, k_{n-1}$), so that $g_i = k_{n-i-1}$ for 0 ≤ i ≤ n − 1;
  • The so-obtained code $g_{n-1}, g_{n-2}, \ldots, g_0$ is treated as a Gray code and transformed into the binary code $r_{n-1}, r_{n-2}, \ldots, r_0$, in accordance with the operations $r_{n-1} = g_{n-1}$, $r_i = r_{i+1} \oplus g_i$ for 0 ≤ i ≤ n − 2. Here, "⊕" denotes the operation "exclusive OR".
The decimal number $r = \sum_{i=0}^{n-1} r_i 2^i$ defines the sequential number of $E_r$ for r = 0, 1, …, 2^n − 1, which (before the rearrangement) corresponded to $Y_{n,k}$ with sequential number $k = \sum_{i=0}^{n-1} k_i 2^i$.
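The two steps (bit reversal, then Gray-to-binary conversion) map each number k to its frequency-ordered position r. A minimal sketch (the function name is illustrative):

```python
def frequency_order(k, n):
    """Frequency-ordered index r of the component Y_{n,k}, for N = 2**n."""
    # Step 1: reverse the n-bit binary code of k (g_i = k_{n-i-1}).
    g = int(format(k, '0{}b'.format(n))[::-1], 2)
    # Step 2: treat the reversed code as a Gray code and convert it to
    # binary (r_{n-1} = g_{n-1}, r_i = r_{i+1} XOR g_i).
    r = 0
    while g:
        r ^= g
        g >>= 1
    return r

# For n = 3 this gives k -> r: 0->0, 1->7, 2->3, 3->4, 4->1, 5->6,
# 6->2, 7->5, which is exactly the permutation realized by P_3(8).
```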
For N = 8 (n = 3), p = 1, 2, 3 and q = 1, 2, 3, 4, FO-HKLT is executed in three consecutive levels. The transform matrix of size 8 × 8 is decomposed in accordance with the relation:
$$\mathbf{T}_{\text{FO-HKLT}}(8) = \mathbf{P}_3(8)\,\mathbf{G}_3(8)\,\mathbf{G}_2(8)\,\mathbf{G}_1(8). \qquad (11)$$
Since $\vec{e}_s = \mathbf{P}_3(8)\,\mathbf{G}_3(8)\,\mathbf{G}_2(8)\,\mathbf{G}_1(8)\,\vec{x}_s$, the transform is executed as follows:
$$\vec{y}_{1,s} = \mathbf{G}_1(8)\,\vec{x}_s, \quad \vec{y}_{2,s} = \mathbf{G}_2(8)\,\vec{y}_{1,s}, \quad \vec{y}_{3,s} = \mathbf{G}_3(8)\,\vec{y}_{2,s}, \quad \vec{e}_s = \mathbf{P}_3(8)\,\vec{y}_{3,s}. \qquad (12)$$
The operations above correspond to the calculations in hierarchical levels p = 1, 2, 3 of FO-HKLT. In this case, the transform matrices $\mathbf{G}_1(8)$, $\mathbf{G}_2(8)$, $\mathbf{G}_3(8)$ are defined by the equations:
$$\mathbf{G}_1(8) = \begin{bmatrix} c_{1,1} & s_{1,1} & 0 & 0 & 0 & 0 & 0 & 0 \\ s_{1,1} & -c_{1,1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & c_{1,2} & s_{1,2} & 0 & 0 & 0 & 0 \\ 0 & 0 & s_{1,2} & -c_{1,2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & c_{1,3} & s_{1,3} & 0 & 0 \\ 0 & 0 & 0 & 0 & s_{1,3} & -c_{1,3} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & c_{1,4} & s_{1,4} \\ 0 & 0 & 0 & 0 & 0 & 0 & s_{1,4} & -c_{1,4} \end{bmatrix} \qquad (13)$$
where $\theta_{1,q} = \arctan\{2k_3(1,q)/[k_1(1,q) - k_2(1,q) + \sqrt{(k_1(1,q) - k_2(1,q))^2 + 4k_3^2(1,q)}\,]\}$ for q = 1, 2, 3, 4 and p = 1; $k_1(1,q) = E\{x_{1,s}^2(1,q)\}$; $k_2(1,q) = E\{x_{2,s}^2(1,q)\}$; $k_3(1,q) = E\{x_{1,s}(1,q)\,x_{2,s}(1,q)\}$.
$$\mathbf{G}_2(8) = \begin{bmatrix} c_{2,1} & 0 & s_{2,1} & 0 & 0 & 0 & 0 & 0 \\ 0 & c_{2,2} & 0 & s_{2,2} & 0 & 0 & 0 & 0 \\ s_{2,1} & 0 & -c_{2,1} & 0 & 0 & 0 & 0 & 0 \\ 0 & s_{2,2} & 0 & -c_{2,2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & c_{2,3} & 0 & s_{2,3} & 0 \\ 0 & 0 & 0 & 0 & 0 & c_{2,4} & 0 & s_{2,4} \\ 0 & 0 & 0 & 0 & s_{2,3} & 0 & -c_{2,3} & 0 \\ 0 & 0 & 0 & 0 & 0 & s_{2,4} & 0 & -c_{2,4} \end{bmatrix} \qquad (14)$$
where $\theta_{2,q} = \arctan\{2k_3(2,q)/[k_1(2,q) - k_2(2,q) + \sqrt{(k_1(2,q) - k_2(2,q))^2 + 4k_3^2(2,q)}\,]\}$ for q = 1, 2, 3, 4 and p = 2; $k_1(2,q) = E\{y_{1,1,s}^2(2,q)\}$; $k_2(2,q) = E\{y_{1,2,s}^2(2,q)\}$; $k_3(2,q) = E\{y_{1,1,s}(2,q)\,y_{1,2,s}(2,q)\}$.
$$\mathbf{G}_3(8) = \begin{bmatrix} c_{3,1} & 0 & 0 & 0 & s_{3,1} & 0 & 0 & 0 \\ 0 & c_{3,2} & 0 & 0 & 0 & s_{3,2} & 0 & 0 \\ 0 & 0 & c_{3,3} & 0 & 0 & 0 & s_{3,3} & 0 \\ 0 & 0 & 0 & c_{3,4} & 0 & 0 & 0 & s_{3,4} \\ s_{3,1} & 0 & 0 & 0 & -c_{3,1} & 0 & 0 & 0 \\ 0 & s_{3,2} & 0 & 0 & 0 & -c_{3,2} & 0 & 0 \\ 0 & 0 & s_{3,3} & 0 & 0 & 0 & -c_{3,3} & 0 \\ 0 & 0 & 0 & s_{3,4} & 0 & 0 & 0 & -c_{3,4} \end{bmatrix} \qquad (15)$$
where $\theta_{3,q} = \arctan\{2k_3(3,q)/[k_1(3,q) - k_2(3,q) + \sqrt{(k_1(3,q) - k_2(3,q))^2 + 4k_3^2(3,q)}\,]\}$ for q = 1, 2, 3, 4 and p = 3; $k_1(3,q) = E\{y_{2,1,s}^2(3,q)\}$; $k_2(3,q) = E\{y_{2,2,s}^2(3,q)\}$; $k_3(3,q) = E\{y_{2,1,s}(3,q)\,y_{2,2,s}(3,q)\}$.
The permutation matrix P 3 ( 8 ) in the level p = 3 is:
$$\mathbf{P}_3(8) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (16)$$
The vector $\vec{k}_m^{(1)}$ for m = 0, 1, …, 7 comprises the elements of the mth row of the matrix $\mathbf{T}^1_{\text{FO-HKLT}}(8)$ for stage S1 of the 3D FO-HKLT. The vectors $\vec{k}_v^{(2)}$ and $\vec{k}_l^{(3)}$ for v, l = 0, 1, …, 7 comprise the elements of rows v and l of the matrices $\mathbf{T}^2_{\text{FO-HKLT}}(8)$ and $\mathbf{T}^3_{\text{FO-HKLT}}(8)$, for stages S2 and S3, respectively.
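For illustration, the level matrices G_p(8) and the permutation matrix P_3(8) can be assembled numerically; since every 2 × 2 block [c s; s −c] is orthogonal, the product of Equation (11) is an orthogonal matrix. A sketch under the stated conventions (it reuses frequency_order from the sketch above; function names and the angle layout are illustrative assumptions):

```python
import numpy as np

def level_matrix(p, angles, n=3):
    """Sparse level matrix G_p(2^n): 2x2 rotations on pairs with stride 2^(p-1).

    angles -- the 2^(n-1) angles theta_{p,q} of the level (q = 1..4 for n = 3).
    """
    N = 2 ** n
    G = np.zeros((N, N))
    stride = 2 ** (p - 1)
    q = 0
    for base in range(0, N, 2 * stride):         # groups of 2*stride components
        for j in range(stride):
            a, b = base + j, base + j + stride   # paired positions
            c, s = np.cos(angles[q]), np.sin(angles[q])
            G[a, a], G[a, b] = c, s
            G[b, a], G[b, b] = s, -c
            q += 1
    return G

def transform_matrix(all_angles, n=3):
    """T_FO-HKLT(2^n) = P_n G_n ... G_1, per Eq. (11) for n = 3."""
    N = 2 ** n
    P = np.zeros((N, N))
    for k in range(N):
        P[frequency_order(k, n), k] = 1.0        # row r = frequency_order(k)
    T = np.eye(N)
    for p in range(1, n + 1):
        T = level_matrix(p, all_angles[p - 1], n) @ T
    return P @ T
```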

3. Enhancement of the 3D FO-HKLT Efficiency, Based on Correlation Analysis

To achieve higher efficiency of the 3D tensor decomposition based on the 3D FO-HKLT, we use here the correlation relations between the components of the tensor $\mathcal{X}$ in the orthogonal directions x, y, z. To detect the direction in which these relations are strongest, correlation analysis is used. It is based on the analysis of the covariance matrices $\mathbf{K}_x(u)$ of the vectors $\vec{x}_s(u)$, obtained through unfolding of the tensor $\mathcal{X}$ in mode u = 1, 2, 3.

3.1. Analysis of the Covariance Matrices

To estimate the correlation of the vectors $\vec{x}_s(u) = [x_{1,s}(u), x_{2,s}(u), \ldots, x_{N,s}(u)]^T$ for the 3D FO-AHKLT, we use the ratio Δ(u) between the sum of the squares of the coefficients $k_{x,i,j}(u)$ placed outside the main diagonal of the matrix $\mathbf{K}_x(u)$ and the sum of the squares of those on the diagonal:
$$\Delta(u) = \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \ne i}}^{N}[k_{x,i,j}(u)]^2\Big/\sum_{i=1}^{N}[k_{x,i,i}(u)]^2 \quad \text{for } u = 1, 2, 3. \qquad (17)$$
The relation above takes into account the symmetry $k_{x,i,j}(u) = k_{x,j,i}(u)$ for i ≠ j of the covariance matrix $\mathbf{K}_x(u)$ with respect to its main diagonal. The value of this ratio is maximum for the highest correlation between vectors $\vec{x}_s(u)$ of the same orientation u.

3.2. Choice of Vectors’ Orientation for Adaptive Directional Tensor Vectorization

The choice of the orientation u = 1, 2, 3 of the vectors $\vec{x}_s(u)$ in each stage Su of the 3D FO-AHKLT which ensures maximum decorrelation of the tensor $\mathcal{X}$ is based on the relations between the coefficients Δ(u) given in Table 1. Figure 3a–c shows the vectors $\vec{x}_s^{(1)}, \vec{x}_s^{(2)}, \vec{x}_s^{(3)}$, which have horizontal, vertical and lateral orientation, respectively, and are obtained through unfolding of the tensor $\mathcal{X}$ of size 4 × 4 × 4 in mode u = 1, 2, 3. The couples of elements of the vectors $\vec{x}_{q,s}(u)$ for q = 1, 2, on which KLT2×2 is applied in the lowest level of FO-HKLT, are colored in red and white. A sketch of this correlation analysis is given below.
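The following minimal sketch computes Δ(u) and the resulting stage order, assuming that mode-u unfolding places the vectors x_s(u) as columns of an N × N² matrix (names are illustrative):

```python
import numpy as np

def delta(X, u):
    """Ratio Delta(u) of Eq. (17): off-diagonal vs. diagonal energy of K_x(u)."""
    N = X.shape[0]
    V = np.moveaxis(X, u - 1, 0).reshape(N, -1)   # unfolding, mode u
    K = np.cov(V)                                 # covariance matrix K_x(u)
    diag = np.sum(np.diag(K) ** 2)
    off = np.sum(K ** 2) - diag                   # uses the symmetry of K
    return off / diag

# Stage order per Table 1: directions sorted by descending Delta(u).
X = np.random.rand(8, 8, 8)
stage_order = sorted([1, 2, 3], key=lambda u: delta(X, u), reverse=True)
```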

3.3. Evaluation of the Decorrelation Properties of FO-AHKLT

The qualities of the one-dimensional FO-AHKLT correspond to the decorrelation degree of the output eigen images, which defines the number of needed execution levels n. As an example, the evaluation of the decorrelation properties of the transform is given here for the case N = 8 and n = 3. In hierarchical level p = 1, 2, 3 of FO-AHKLT, the covariance matrix $\mathbf{K}_{y_p}$ of size 8 × 8 of the transformed 8-component vectors $\vec{y}_{p,s}(u) = [y_{p,1,s}(u), y_{p,2,s}(u), \ldots, y_{p,8,s}(u)]^T$ is:
$$\mathbf{K}_{y_p}(u) = E\{\vec{y}_{p,s}(u)\,\vec{y}_{p,s}^{\,T}(u)\} - E\{\vec{y}_{p,s}(u)\}\,E\{\vec{y}_{p,s}(u)\}^T = \begin{bmatrix} \mathbf{K}^p_{y_1}(u) & \mathbf{K}^p_{y_1,y_2}(u) & \mathbf{K}^p_{y_1,y_3}(u) & \mathbf{K}^p_{y_1,y_4}(u) \\ \mathbf{K}^p_{y_2,y_1}(u) & \mathbf{K}^p_{y_2}(u) & \mathbf{K}^p_{y_2,y_3}(u) & \mathbf{K}^p_{y_2,y_4}(u) \\ \mathbf{K}^p_{y_3,y_1}(u) & \mathbf{K}^p_{y_3,y_2}(u) & \mathbf{K}^p_{y_3}(u) & \mathbf{K}^p_{y_3,y_4}(u) \\ \mathbf{K}^p_{y_4,y_1}(u) & \mathbf{K}^p_{y_4,y_2}(u) & \mathbf{K}^p_{y_4,y_3}(u) & \mathbf{K}^p_{y_4}(u) \end{bmatrix} \qquad (18)$$
for s = 1, 2, …, 64.
Here, the sub-matrices K y q p ( u ) of size 2 × 2 (for q = 1, 2, 3, 4) are defined as:
$$\mathbf{K}^p_{y_q}(u) = E\{\vec{y}_{q,s}^{\,p}(u)\,\vec{y}_{q,s}^{\,p\,T}(u)\} - E\{\vec{y}_{q,s}^{\,p}(u)\}\,E\{\vec{y}_{q,s}^{\,p}(u)\}^T = \begin{bmatrix} \lambda^p_{1,q}(u) & 0 \\ 0 & \lambda^p_{2,q}(u) \end{bmatrix} \qquad (19)$$
These are the covariance matrices of the transformed vectors $\vec{y}_{q,s}^{\,p}(u) = [y^p_{q,1,s}(u), y^p_{q,2,s}(u)]^T$ in group q for level p, and $\lambda^p_{1,q}(u)$, $\lambda^p_{2,q}(u)$ are the eigenvalues of the covariance matrix $\mathbf{K}^p_{y_q}(u)$. Respectively,
$$\mathbf{K}^p_{y_q,y_k}(u) = E\{\vec{y}_{q,s}^{\,p}(u)\,\vec{y}_{k,s}^{\,p\,T}(u)\} - E\{\vec{y}_{q,s}^{\,p}(u)\}\,E\{\vec{y}_{k,s}^{\,p}(u)\}^T \quad \text{for } q, k = 1, 2, 3, 4\ (q \ne k) \qquad (20)$$
are the mutual covariance matrices of size 2 × 2 of the two-component vectors y q , s p ( u ) and y k , s p   ( u ) for the couples of groups q and k from the level p.
After the rearrangement that follows decomposition level p = 3, the vectors $\vec{y}_{3,s}(u) = [y_{3,1,s}(u), y_{3,2,s}(u), \ldots, y_{3,8,s}(u)]^T$ are transformed into the vectors $\vec{e}_s(u) = [e_{1,s}(u), e_{2,s}(u), \ldots, e_{8,s}(u)]^T$. To evaluate the result of FO-AHKLT, the covariance matrix $\mathbf{K}_{e_3}(u)$ of the vectors $\vec{e}_s(u)$ is analyzed, from which the decorrelation degree achieved in this level is defined.

4. Adaptive Control for Each Level of FO-AHKLT

After the execution of FO-AHKLT in the current level p, the N-dimensional vectors $\vec{y}_{p,s}(u) = [y_{p,1,s}(u), y_{p,2,s}(u), \ldots, y_{p,N,s}(u)]^T$ for p = 1, 2, …, n (n = log₂ N) and s = 1, 2, …, N² are obtained through concatenation of the transformed two-component vectors $\vec{y}_{q,s}^{\,p}(u) = [y^p_{q,1,s}(u), y^p_{q,2,s}(u)]^T$ for each group q = 1, 2, …, N/2. At this moment, the decision must be taken whether to continue with the next level of FO-AHKLT or to stop. For this, the covariance matrix $\mathbf{K}_{y_p}(u)$ of the vectors $\vec{y}_{p,s}(u)$ is analyzed for s = 1, 2, …, N², which defines the achieved decorrelation. If the decorrelation is full, the matrix $\mathbf{K}_{y_p}(u)$ is diagonal, and the algorithm is stopped. The proposed adaptive control of FO-AHKLT permits the process to stop earlier, even when full decorrelation is not achieved, if the result is satisfactory. The decision to stop FO-AHKLT in the current level p is defined by the relation:
$$\Lambda_p = \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \ne i}}^{N}[k_{y_p,i,j}(u)]^2 \le \delta_1. \qquad (21)$$
Here, $k_{y_p,i,j}(u)$ is the (i, j)th element of the matrix $\mathbf{K}_{y_p}(u)$, and δ₁ is a threshold of small value, set in advance. If the condition is satisfied, the calculations stop. Otherwise, the processing continues with the next FO-AHKLT level, p + 1. When the calculations for that level are finished, the result is checked again, but in this case $k_{y_{p+1},i,j}(u)$ are the elements of the matrix $\mathbf{K}_{y_{p+1}}(u)$ of the vectors $\vec{y}_{p+1,s}(u)$, etc.
Taking into account the condition (21), the FO-AHKLT matrix defined in Equation (3) turns into:
$$\mathbf{T}_{\text{FO-AHKLT}}(2^n) = \begin{cases} \mathbf{P}_{p_0}(2^n)\Big[\prod_{p=1}^{p_0}\mathbf{G}_p(2^n)\Big] & \text{for } p_0 < n,\ \text{if } \Lambda_{p_0} \le \delta_1; \\[4pt] \mathbf{P}_n(2^n)\Big[\prod_{p=1}^{n}\mathbf{G}_p(2^n)\Big] & \text{for } p = 1, 2, \ldots, n-1,\ \text{if } \Lambda_p > \delta_1, \end{cases} \qquad (22)$$
where p₀ is the first level for which the condition (21) is satisfied. The adaptive KLT2×2 in level p is executed in accordance with the relations:
$$y^q_{p,1,s}(u) = \begin{cases} y^q_{p-1,1,s}(u)\cos\theta_{p,q}(u) + y^q_{p-1,2,s}(u)\sin\theta_{p,q}(u) & \text{if } [k^q_{p,3}(u)]^2 > \delta_2; \\ y^q_{p-1,1,s}(u) & \text{if } [k^q_{p,3}(u)]^2 \le \delta_2, \end{cases} \qquad (23)$$
$$y^q_{p,2,s}(u) = \begin{cases} y^q_{p-1,1,s}(u)\sin\theta_{p,q}(u) - y^q_{p-1,2,s}(u)\cos\theta_{p,q}(u) & \text{if } [k^q_{p,3}(u)]^2 > \delta_2; \\ y^q_{p-1,2,s}(u) & \text{if } [k^q_{p,3}(u)]^2 \le \delta_2, \end{cases} \qquad (24)$$
for p = 2, 3, …, n, q = 1, 2, …, N/2 and $k^q_{p,3}(u) = E\{y^q_{p-1,1,s}(u)\,y^q_{p-1,2,s}(u)\}$.
For the level p = 1 only, the relations (23) and (24) are transformed as follows:
$$y^q_{1,1,s}(u) = \begin{cases} x^q_{1,1,s}(u)\cos\theta_{1,q}(u) + x^q_{1,2,s}(u)\sin\theta_{1,q}(u) & \text{if } [k^q_{1,3}(u)]^2 > \delta_2; \\ x^q_{1,1,s}(u) & \text{if } [k^q_{1,3}(u)]^2 \le \delta_2; \end{cases} \qquad (25)$$
$$y^q_{1,2,s}(u) = \begin{cases} x^q_{1,1,s}(u)\sin\theta_{1,q}(u) - x^q_{1,2,s}(u)\cos\theta_{1,q}(u) & \text{if } [k^q_{1,3}(u)]^2 > \delta_2; \\ x^q_{1,2,s}(u) & \text{if } [k^q_{1,3}(u)]^2 \le \delta_2, \end{cases} \qquad (26)$$
where $k^q_{1,3}(u) = E\{x^q_{1,1,s}(u)\,x^q_{1,2,s}(u)\}$.
In these relations, δ₂ is a threshold of small value, set in advance. The condition (21), together with the adaptive KLT2×2 performed in accordance with relations (23)–(26), permits reducing the number of calculations without worsening the decorrelation properties of the 3D FO-AHKLT. A sketch of one adaptive level is given below.
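The following sketch condenses relations (21)–(26) into one level of the adaptive transform (it reuses klt2x2_angle from the sketch in Section 2; the array layout and names are illustrative assumptions):

```python
import numpy as np

def adaptive_level(Y, p, delta2):
    """One FO-AHKLT level, per Eqs. (23)-(26): each pair of components is
    rotated only if the squared mutual moment exceeds delta2.

    Y -- (N, S) array; row i holds component i of all S vectors.
    """
    N = Y.shape[0]
    stride = 2 ** (p - 1)
    out = Y.copy()
    for base in range(0, N, 2 * stride):
        for j in range(stride):
            a, b = base + j, base + j + stride    # paired components
            k3 = np.mean(Y[a] * Y[b])             # mutual moment k_{p,3}
            if k3 ** 2 > delta2:
                t = klt2x2_angle(Y[a], Y[b])
                c, s = np.cos(t), np.sin(t)
                out[a] = c * Y[a] + s * Y[b]
                out[b] = s * Y[a] - c * Y[b]
    return out

def offdiag_energy(Y):
    """Lambda_p of Eq. (21): squared off-diagonal energy of the covariance."""
    K = np.cov(Y)
    return np.sum(K ** 2) - np.sum(np.diag(K) ** 2)

# Stop condition (21): finish early once offdiag_energy(Y) <= delta1.
```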

5. Algorithm 3D FO-AHKLT

On the basis of the 3D FO-HKLT tensor decomposition, together with the adaptive control of the directional vectorization of the input, intermediate and output tensors and the correlation analysis, the algorithm called 3D FO-AHKLT is developed as Algorithm 1.
Algorithm 1
Input: Third-order tensor $\mathcal{X}$ of size N × N × N (N = 2^n) with elements x(i, j, k), and the threshold values δ₁, δ₂;
The steps of the algorithm are given below:
1. Unfolding of the tensor $\mathcal{X}$ (mode u = 1, 2, 3), thereby obtaining the corresponding groups of vectors $\vec{x}_s(u)$.
2. Calculation of the covariance matrices $\mathbf{K}_x(u)$ of the vectors $\vec{x}_s(u)$ for u = 1, 2, 3.
3. Calculation of the coefficients Δ(u) for u = 1, 2, 3, using the matrices $\mathbf{K}_x(u)$.
4. Setting the sequence of stages Su1, Su2, Su3 for the execution of 3D FO-AHKLT, chosen in accordance with the relations between coefficients Δ ( 1 ) ,   Δ ( 2 ) ,   Δ ( 3 ) .
5. Start of stage Su1, which comprises:
  5.1. Execution of FO-AHKLT for the vectors $\vec{x}_s(u_1)$ with orientation u1 (for Δ(u1) = max), in correspondence with the computation graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the vectors $\vec{e}_s(u_1)$. At each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied;
  5.2. Folding of the tensor E of size N × N × N, by using the vectors e s ( u 1 ) ;
  5.3. Unfolding of the tensor E (mode u2), thereby obtaining the vectors x s ( u 2 ) with orientation u2, for Δ ( u 2 ) Δ ( u 1 ) .
6. Start of stage Su2, which comprises:
  6.1. Execution of FO-AHKLT for the vectors $\vec{x}_s(u_2)$, in correspondence with the computation graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the vectors $\vec{y}_s(u_2)$. At each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied;
  6.2. Folding the tensor F of size N × N × N, by using the vectors y s ( u 2 ) ;
  6.3. Unfolding the tensor F (mode u3), thereby obtaining the vectors x s ( u 3 ) , with orientation u3, for Δ ( u 3 ) Δ ( u 2 ) ;
7. Start of stage Su3, which comprises:
  7.1. Execution of FO-AHKLT for the vectors $\vec{x}_s(u_3)$, in correspondence with the computation graph of n hierarchical levels and with the adaptive KLT2×2 as kernel, thereby obtaining the vectors $\vec{s}_s(u_3)$. At each level p = 1, 2, …, n, it is checked whether the condition to stop the FO-AHKLT calculations is satisfied;
  7.2. Folding of the tensor S of size N × N × N (N = 2^n) by using the vectors $\vec{s}_s(u_3)$. The elements of the tensor S are the spectrum coefficients s(m, v, l).
8. Calculation of the transform matrices $\mathbf{T}^t_{\text{FO-AHKLT}}(2^n)$ for each stage t = 1, 2, 3 of the 3D FO-AHKLT, in accordance with the relations below:
$$\vec{e}_s(u_1) = \mathbf{T}^1_{\text{FO-AHKLT}}(2^n)\,\vec{x}_s(u_1) \quad \text{for } s = 1, 2, \ldots, N^2; \qquad (27)$$
$$\vec{y}_s(u_2) = \mathbf{T}^2_{\text{FO-AHKLT}}(2^n)\,\vec{x}_s(u_2) \quad \text{for } s = 1, 2, \ldots, N^2; \qquad (28)$$
$$\vec{s}_s(u_3) = \mathbf{T}^3_{\text{FO-AHKLT}}(2^n)\,\vec{x}_s(u_3) \quad \text{for } s = 1, 2, \ldots, N^2. \qquad (29)$$
9. Determination of the basic vectors $\vec{k}_m^{(1)}, \vec{k}_v^{(2)}, \vec{k}_l^{(3)}$ for m, v, l = 0, 1, 2, …, N − 1, which are the rows of the matrices $\mathbf{T}^1_{\text{FO-AHKLT}}(2^n)$, $\mathbf{T}^2_{\text{FO-AHKLT}}(2^n)$ and $\mathbf{T}^3_{\text{FO-AHKLT}}(2^n)$. With this, the decomposition of the tensor $\mathcal{X}$ in correspondence with Equation (1) is finished.
Output: Spectral tensor S, whose elements s(m, v, l) are the coefficients of the 3D FO-AHKLT.
As a result of the algorithm execution:
  • The main part of the tensor energy is concentrated into a small number of coefficients s(m, v, l) of the spectrum tensor S, for m, v, l = 0, 1, 2;
  • The decomposition components of the tensor are uncorrelated. A compact sketch of the whole three-stage pipeline is given below.
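The sketch combines the pieces shown in the previous sections (delta, adaptive_level, offdiag_energy, frequency_order); it is a simplified illustration of Algorithm 1 under the assumed array layout, not a reference implementation:

```python
import numpy as np

def fo_ahklt_1d(V, n, delta1, delta2):
    """n-level FO-AHKLT on the columns of V (N x N^2), with early stop (21)."""
    for p in range(1, n + 1):
        V = adaptive_level(V, p, delta2)
        if offdiag_energy(V) <= delta1:
            break
    perm = [frequency_order(k, n) for k in range(V.shape[0])]
    out = np.empty_like(V)
    out[perm, :] = V            # frequency ordering: e_r = y_{n,k}, r = perm[k]
    return out

def fo_ahklt_3d(X, delta1=1e-6, delta2=1e-6):
    """Three-stage 3D FO-AHKLT: ADV (unfolding) -> 1-D FO-AHKLT -> folding."""
    N = X.shape[0]
    n = int(np.log2(N))
    # Steps 2-4: stage order from the correlation analysis of the input tensor.
    order = sorted([1, 2, 3], key=lambda u: delta(X, u), reverse=True)
    T = X.astype(float)
    for u in order:                                     # stages Su1, Su2, Su3
        V = np.moveaxis(T, u - 1, 0).reshape(N, -1)     # unfolding, mode u
        E = fo_ahklt_1d(V, n, delta1, delta2)
        T = np.moveaxis(E.reshape((N,) * 3), 0, u - 1)  # folding
    return T                                            # spectrum tensor S
```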

6. Comparative Evaluation of the Computational Complexity

For the computation of the covariance matrix $\mathbf{K}_{y_p}(u) = E\{\vec{y}_{p,s}(u)\,\vec{y}_{p,s}^{\,T}(u)\} - E\{\vec{y}_{p,s}(u)\}\,E\{\vec{y}_{p,s}(u)\}^T$ of size N × N for N = 2^n, in correspondence with Equation (18), $A_{K_{y_p}}(n) = 3 \times 2^{2n}$ additions and $M_{K_{y_p}}(n) = 2^{2n}$ multiplications are needed. Then, the total number of operations needed for the calculation of $\mathbf{K}_{y_p}(u)$ is:
$$O_{K_{y_p}}(n) = A_{K_{y_p}}(n) + M_{K_{y_p}}(n) = 4 \times 2^{2n} \quad \text{for } p = 1, 2, \ldots, n. \qquad (30)$$
For the n levels of the FO-HKLT algorithm, it is obtained:
$$\sum_{p=1}^{n} O_{K_{y_p}}(n) = 4n \times 2^{2n}. \qquad (31)$$
For the calculation of Δ(u) in correspondence with Equation (17), $A_{\Delta_u}(n) = 2^{2n} - 1$ additions and $M_{\Delta_u}(n) = 2^{2n} + 1$ multiplications are needed. Then,
$$O_{\Delta_u}(n) = A_{\Delta_u}(n) + M_{\Delta_u}(n) = 2 \times 2^{2n} \quad \text{for } u = 1, 2, 3. \qquad (32)$$
Hence,
$$\sum_{u=1}^{3} O_{\Delta_u}(n) = 6 \times 2^{2n}. \qquad (33)$$
In accordance with [17], the computational complexity (or Complexity) of FO-HKLT is defined by taking into account the number of needed additions $A^{\text{FO}}_{\text{HKLT}}(n) = 2^{n-1}(5 \times 2^{2n} + 1)n \approx 2.5 \times 2^{3n}n$ and multiplications $M^{\text{FO}}_{\text{HKLT}}(n) = 2^{n-1}(7 \times 2^{2n} + 5)n \approx 3.5 \times 2^{3n}n$. Hence, the total number of operations needed for the 3D FO-HKLT execution is:
$$O^{\text{3D-FO}}_{\text{HKLT}}(n) = A^{\text{3D-FO}}_{\text{HKLT}}(n) + M^{\text{3D-FO}}_{\text{HKLT}}(n) \approx 18 \times 2^{3n}n. \qquad (34)$$
The total number of operations needed for the execution of the algorithm 3D FO-AHKLT (without taking into consideration the possibility to stop the processing earlier than the last level) is:
$$O^{\text{3D-FO}}_{\text{AHKLT}}(n) = A^{\text{3D-FO}}_{\text{AHKLT}}(n) + M^{\text{3D-FO}}_{\text{AHKLT}}(n) \approx 2^{2n+1}\big[2n(9 \times 2^{n-1} + 1) + 3\big]. \qquad (35)$$
For the H-Tucker and TT decompositions of a cubical tensor of size N = 2^n, in accordance with [17], the number of needed operations is:
$$O_{\text{HT}}(n) = 2^{3n}(2^{n+1} + 3), \qquad O_{\text{TT}}(n) = 3 \times 2^{4n}.$$
Compared to H-Tucker and TT, the Complexity of the new decomposition decreases, relative to theirs, with the growth of n. In the general case, the relative Complexity of 3D FO-AHKLT with respect to the H-Tucker and TT decompositions is evaluated in accordance with the relations below:
$$\psi_1(n) = \frac{O_{\text{HT}}(n)}{O^{\text{3D-FO}}_{\text{AHKLT}}(n)} = \frac{2^{n-1}(2^{n+1} + 3)}{2n(9 \times 2^{n-1} + 1) + 3}; \qquad (36)$$
$$\psi_2(n) = \frac{O_{\text{TT}}(n)}{O^{\text{3D-FO}}_{\text{AHKLT}}(n)} = \frac{3 \times 2^{2n-1}}{2n(9 \times 2^{n-1} + 1) + 3}. \qquad (37)$$
For example, for n = 8 the results are: ψ 1 ( 8 ) = 3.57 and ψ 2 ( 8 ) = 5.33 .
For the case when the new decomposition must have minimum Complexity, the transform kernel KLT2×2 could be replaced by the WHT of the same size (2 × 2), which, in correspondence with Equation (10), is the particular case of KLT2×2 for the fixed value $\theta_{1,2} = \pi/4$. According to [19], the Complexity of the 3D Fast Walsh–Hadamard Transform (3D-FWHT), evaluated through the total number of operations $O_{\text{3D-FWHT}}(n)$, is defined by the relation:
$$O_{\text{3D-FWHT}}(n) = A_{\text{3D-FWHT}}(n) + M_{\text{3D-FWHT}}(n) = 3 \times 2^{3n}n. \qquad (38)$$
Then, for the Complexity of 3D-FWHT with ADV (3D-AFWHT), we obtain:
$$O_{\text{3D-AFWHT}}(n) = A_{\text{3D-AFWHT}}(n) + M_{\text{3D-AFWHT}}(n) \approx 2^{2n}\big[3 \times 2^{n}n + 2(2n + 3)\big]. \qquad (39)$$
The relative Complexity of 3D FO-AHKLT with respect to 3D-AFWHT is defined by the relation:
$$\psi_3(n) = \frac{O^{\text{3D-FO}}_{\text{AHKLT}}(n)}{O_{\text{3D-AFWHT}}(n)} = \frac{2\big[2n(9 \times 2^{n-1} + 1) + 3\big]}{3 \times 2^{n}n + 2(2n + 3)}. \qquad (40)$$
Hence, for the example value n = 8, $\psi_3(8) = 5.97$, and the relative Complexities of H-Tucker and TT vs. 3D-AFWHT are $O_{\text{HT}}(8)/O_{\text{3D-AFWHT}}(8) = 21.33$ and $O_{\text{TT}}(8)/O_{\text{3D-AFWHT}}(8) = 31.84$, correspondingly.
In Table 2, the values of the coefficients $\psi_1(n)$, $\psi_2(n)$ and $\psi_3(n)$, calculated in accordance with Equations (36), (37) and (40) for n = 2, 3, …, 10, are given. From the table, it can be seen that for n > 5, the values of $\psi_1(n)$ and $\psi_2(n)$ are higher than 1 and grow quickly with n, while for the same values of n, the coefficient $\psi_3(n)$ increases only slightly, in the range from about 4 up to 6. The relations can be checked with the short script below.
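A short check of the table values directly from Equations (35)–(40) (the function name is illustrative):

```python
def psi(n):
    """Relative Complexities psi1, psi2, psi3 of Eqs. (36), (37) and (40)."""
    x = 2 * n * (9 * 2 ** (n - 1) + 1) + 3        # bracket of Eq. (35)
    afwht = 3 * 2 ** n * n + 2 * (2 * n + 3)      # bracket of Eq. (39)
    psi1 = 2 ** (n - 1) * (2 ** (n + 1) + 3) / x
    psi2 = 3 * 2 ** (2 * n - 1) / x
    psi3 = 2 * x / afwht
    return psi1, psi2, psi3

# psi(8) -> (3.57, 5.33, 5.97), matching the n = 8 column of Table 2.
```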
From the comparison follow the conclusions below:
  • The new hierarchical decomposition has low Complexity, which grows with n more slowly than that of the H-Tucker and TT decompositions;
  • Significant reduction in the decomposition Complexity could be achieved through replacement of the kernel KLT2×2 by WHT2×2. In this case, however, the decrease in Complexity comes at the price of a lower decorrelation degree;
  • For the case n = 8, the Complexity of the algorithm 3D-AFWHT is approximately 6 times lower than that of the 3D FO-AHKLT;
  • The Complexity of the algorithms 3D FO-AHKLT and 3D-AFWHT was evaluated without taking into consideration the use of the adaptive KLT2×2 and the possibility to stop the execution prior to the maximum level n; Equations (35) and (39) thus give the maximum values of Complexity used for the evaluation of the compared algorithms.

7. Conclusions

This work presented the new algorithm 3D FO-AHKLT, aimed at the decorrelation of the elements of a cubical tensor of size N = 2^n. The Complexity of the algorithm was evaluated and compared with that of other similar algorithms, and its efficiency was shown. The main qualities of the tensor decomposition 3D FO-AHKLT are:
  • Efficient decorrelation of the calculated components;
  • Concentration of the tensor energy into a small number of decomposition components;
  • Lack of iterations;
  • Low Complexity;
  • The capacity for parallel recursive implementation, which reduces the needed memory volume;
  • The capacity for additional significant Complexity reduction through the use of the algorithm 3D-AFWHT, depending on the needs of the implemented application.
The presented algorithms 3D FO-AHKLT and 3D-AFWHT could be generalized for tensors with three different dimensions, $2^{n_1} \times 2^{n_2} \times 2^{n_3}$ (i.e., for $n_1 \ne n_2 \ne n_3$). The choice among the offered hierarchical 3D decompositions depends on the requirements and limitations on their Complexity imposed by the application area.
The future investigations of 3D FO-AHKLT and 3D-AFWHT will be aimed at the evaluation of their characteristics compared to the well-known tensor decompositions, in order to define the best settings and the most efficient applications: tensor image compression, feature space reduction, filtering, analysis, search and recognition of multidimensional visual information, deep learning, etc. The future development of the presented algorithms will be aimed at applications related to tree tensor networks [20], multiway array (tensor) data analysis [21,22], tensor decompositions in neural networks for tree-structured data, etc.

Author Contributions

Conceptualization, R.K.K., R.P.M. and R.A.K.; methodology, formal analysis, R.A.K.; investigation, R.P.M.; writing—original draft preparation, R.K.K.; writing—review and editing, R.A.K.; visualization, R.K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Bulgarian National Science Fund: Project No. KP-06-H27/16: “Development of efficient methods and algorithms for tensor-based processing and analysis of multidimensional images with application in interdisciplinary areas”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Liu, Y.; Liu, J.; Long, Z.; Zhu, C. Tensor Decomposition. In Tensor Computation for Data Analysis; Springer: Cham, Switzerland, 2021; pp. 19–57.
  2. Ji, Y.; Wang, Q.; Li, X.; Liu, J. A Survey on Tensor Techniques and Applications in Machine Learning. IEEE Access 2019, 7, 162950–162990.
  3. Cichocki, A.; Mandic, D.; Phan, A.; Caiafa, C.; Zhou, G.; Zhao, Q.; De Lathauwer, L. Tensor Decompositions for Signal Processing Applications: From Two-Way to Multiway Component Analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
  4. Sidiropoulos, N.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor Decomposition for Signal Processing and Machine Learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582.
  5. Oseledets, I. Tensor-Train Decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317.
  6. Grasedyck, L. Hierarchical Singular Value Decomposition of Tensors. SIAM J. Matrix Anal. Appl. 2010, 31, 2029–2054.
  7. Ozdemir, A.; Zare, A.; Iwen, M.; Aviyente, S. Extension of PCA to Higher Order Data Structures: An Introduction to Tensors, Tensor Decompositions, and Tensor PCA. Proc. IEEE 2018, 106, 1341–1358.
  8. Vasilescu, M.; Kim, E. Compositional Hierarchical Tensor Factorization: Representing Hierarchical Intrinsic and Extrinsic Causal Factors. In Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD): Tensor Methods for Emerging Data Science Challenges, Anchorage, AK, USA, 2–4 August 2019; p. 9.
  9. Wang, P.; Lu, C. Tensor Decomposition via Simultaneous Power Iteration. In Proceedings of the 34th International Conference on Machine Learning (PMLR), Sydney, Australia, 6–9 August 2017; Volume 70, pp. 3665–3673.
  10. Ishteva, M.; Absil, P.; Van Dooren, P. Jacobi Algorithm for the Best Low Multilinear Rank Approximation of Symmetric Tensors. SIAM J. Matrix Anal. Appl. 2013, 34, 651–672.
  11. Zniyed, Y.; Boyer, R.; Almeida, A.; Favier, G. A TT-Based Hierarchical Framework for Decomposing High-Order Tensors. SIAM J. Sci. Comput. 2020, 42, A822–A848.
  12. Kernfeld, E.; Kilmer, M.; Aeron, S. Tensor–tensor products with invertible linear transforms. Linear Algebra Appl. 2015, 485, 545–570.
  13. Keegan, K.; Vishwanath, T.; Xu, Y. A Tensor SVD-based Classification Algorithm Applied to fMRI Data. arXiv 2021, arXiv:2111.00587.
  14. Rao, K.; Kim, D.; Hwang, J. Fast Fourier Transform: Algorithms and Applications; Springer: Dordrecht, The Netherlands, 2010; pp. 166–170.
  15. Ahmed, N.; Rao, K. Orthogonal Transforms for Digital Signal Processing; Springer: Berlin/Heidelberg, Germany, 1975; pp. 86–91.
  16. Kountchev, R.; Kountcheva, R. Adaptive Hierarchical KL-Based Transform: Algorithms and Applications. In Computer Vision in Advanced Control Systems: Mathematical Theory; Favorskaya, M., Jain, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 1, pp. 91–136.
  17. Kountchev, R.; Mironov, R.; Kountcheva, R. Complexity Estimation of Cubical Tensor Represented through 3D Frequency-Ordered Hierarchical KLT. Symmetry 2020, 12, 1605.
  18. Lu, H.; Kpalma, K.; Ronsin, J. Motion descriptors for micro-expression recognition. Signal Process. Image Commun. 2018, 67, 108–117.
  19. Kountchev, R.; Mironov, R.; Kountcheva, R. Hierarchical Cubical Tensor Decomposition through Low Complexity Orthogonal Transforms. Symmetry 2020, 12, 864.
  20. Cichocki, A.; Phan, H.; Zhao, Q.; Lee, N.; Oseledets, I.; Sugiyama, M.; Mandic, D. Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations: Part 2 Applications and Future Perspectives. Found. Trends Mach. Learn. 2017, 9, 431–673.
  21. Abhyankar, S.S.; Christensen, C.; Sundaram, G.; Sathaye, A.M. Algebra, Arithmetic and Geometry with Applications: Papers from Shreeram S. Abhyankar's 70th Birthday Conference; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; pp. 457–472.
  22. Peters, J. Computational Geometry, Topology and Physics of Digital Images with Applications: Shape Complexes, Optical Vortex Nerves and Proximities; Springer Nature: Cham, Switzerland, 2019; pp. 1–115.
Figure 1. Sequence of the 3D FO-AHKLT execution stages for a tensor $\mathcal{X}$ of size 8 × 8 × 8 for the case Δ(1) > Δ(2) > Δ(3), based on FO-AHKLT. On each two-component vector $\vec{x}_{q,s}(u)$, for q = 1, 2, 3, 4, KLT2×2 is applied.
Figure 2. Graphs of the 3-level direct FO-AHKLT of the tensor $\mathcal{X}$ of size 8 × 8 × 8, with ADV.
Figure 3. Orientations of the vectors $\vec{x}_s(u)$ for u = 1, 2, 3 and N = 4, after unfolding of the tensor $\mathcal{X}$ in mode u = 1, 2, 3: (a) horizontal; (b) vertical; (c) lateral.
Table 1. Adaptive choice of the orientation of the vectors $\vec{x}_s(u)$ for u = 1, 2, 3, based on the correlation analysis.

No. | Relations for Δ(1), Δ(2), Δ(3)                                              | Sequence of Stages Su in the 3D FO-AHKLT Execution (for u = 1, 2, 3)
1   | Δ(1) > Δ(2) > Δ(3); Δ(1) = Δ(2) = Δ(3); Δ(1) > Δ(2) = Δ(3); Δ(1) = Δ(2) > Δ(3) | S1 → S2 → S3
2   | Δ(1) > Δ(3) > Δ(2); Δ(1) > Δ(3) = Δ(2); Δ(1) = Δ(3) > Δ(2)                   | S1 → S3 → S2
3   | Δ(2) > Δ(1) > Δ(3); Δ(2) > Δ(1) = Δ(3); Δ(2) = Δ(1) > Δ(3)                   | S2 → S1 → S3
4   | Δ(2) > Δ(3) > Δ(1); Δ(2) > Δ(3) = Δ(1); Δ(2) = Δ(3) > Δ(1)                   | S2 → S3 → S1
5   | Δ(3) > Δ(2) > Δ(1); Δ(3) > Δ(2) = Δ(1); Δ(3) = Δ(2) > Δ(1)                   | S3 → S2 → S1
6   | Δ(3) > Δ(1) > Δ(2); Δ(3) > Δ(1) = Δ(2); Δ(3) = Δ(1) > Δ(2)                   | S3 → S1 → S2
Table 2. Relative Complexity of H-Tucker and TT vs. 3D FO-AHKLT, and of 3D FO-AHKLT vs. 3D-AFWHT.

n       | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10
ψ1(n)   | 0.28 | 0.34 | 0.48 | 0.74 | 1.21 | 2.05 | 3.57 | 6.34 | 11.39
ψ2(n)   | 0.30 | 0.42 | 0.66 | 1.06 | 1.77 | 3.04 | 5.33 | 9.47 | 17.06
ψ3(n)   | 4.16 | 5.00 | 5.46 | 5.74 | 5.87 | 5.94 | 5.97 | 5.98 | 5.99
