Article

Rank-Adaptive Tensor Completion Based on Tucker Decomposition

School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 225; https://doi.org/10.3390/e25020225
Submission received: 24 December 2022 / Revised: 21 January 2023 / Accepted: 21 January 2023 / Published: 24 January 2023
(This article belongs to the Special Issue Advances in Multiuser Information Theory)

Abstract:
Tensor completion is a fundamental tool for estimating unknown entries from observed data, and it is widely used in many areas, including image and video recovery, traffic data completion and multi-input multi-output (MIMO) problems in information theory. Based on Tucker decomposition, this paper proposes a new algorithm to complete tensors with missing data. In decomposition-based tensor completion methods, underestimation or overestimation of the tensor ranks can lead to inaccurate results. To tackle this problem, we design an alternating iteration method that breaks the original problem into several matrix completion subproblems and adaptively adjusts the multilinear rank of the model during the optimization procedure. Through numerical experiments on synthetic data and real images, we show that the proposed method can effectively estimate the tensor ranks and predict the missing entries.

1. Introduction

Tensors, as a higher-order generalization of vectors and matrices, can preserve the original structure and the latent characteristics of multi-dimensional data, such as images and videos. Vectorizing or matricizing such data for storage and processing may break the adjacency relations and lose crucial information in practical applications. Therefore, tensor-based data analysis has gathered increasing attention due to its better performance in finding hidden structures and capturing potential features. Due to data loss during transmission or insufficient collection, we sometimes have to deal with incomplete data, which increases the operational cost and the difficulty of multi-dimensional data analysis. Therefore, tensor completion, also referred to as tensor recovery, plays a key role in processing and analyzing incomplete multi-dimensional data. In tensor completion, unknown entries are predicted from a few observed ones based on the correlation between different parts of the data or, simply, a hidden low-rank structure. Many real-world data lie in a latent low-dimensional space, since they may depend on only a small number of underlying factors; thus, we can use tensor completion to infer the missing data. It has already been widely applied in many scientific fields, including surrogate construction [1], image and video recovery [2,3,4], traffic data completion [5] and MIMO (multi-input multi-output) problems in information theory [6].
Tensor completion methods can be roughly classified into model-based and non-model-based ones. Extending ideas from existing matrix completion algorithms [7,8], non-model-based methods directly formulate the tensor completion task as a rank minimization problem subject to linear constraints and then relax it to a nuclear norm minimization problem, as in LRTC (low-rank tensor completion) [9]. Model-based methods, in contrast, start from a decomposition model with a given rank and minimize the error on the known entries by adjusting the model parameters; examples include CP-Wopt (CP-weighted optimization) [10], Tucker–Wopt (Tucker-weighted optimization) [5] and M²SA (multilinear subspace analysis with missing values) [11].
The computational cost of non-model-based methods is closely tied to the size of the tensor, leaving little room for reducing the time complexity. In model-based methods, by contrast, the cost can be controlled more flexibly by altering the model size. Nonetheless, choosing the model's rank remains a major challenge. Underestimating the tensor rank makes it harder to reach the desired accuracy, whereas overestimating it may lead to poor generalization on the unobserved entries. Therefore, rank-adaptive methods are needed and worth studying. In [1], a greedy scheme is designed to find the best-fitting rank by gradually increasing the number of factors in the CP decomposition model. In [12], a systematic approach is developed that starts from an overestimated rank and reduces the CP rank to the optimal one.
Focusing on tensor completion based on Tucker decomposition, we propose a novel rank-adaptive tensor completion method and verify its efficiency through experiments on synthetic and real-world data. The rest of the paper is organized as follows. Section 2 formulates the problem setting. Section 3 presents Tucker decomposition and related work. We propose our rank-adaptive tensor completion based on Tucker decomposition (RATC-TD) approach in Section 4 and verify it through numerical experiments in Section 5. Finally, we conclude our work in Section 6.

2. Notations and the Tensor Completion Problem

Before summarizing the related work and presenting our algorithm, this section first introduces some basic definitions that will be used in the following sections and then states our problem setting.

2.1. Notations

We follow the notation in [13]. Let lowercase letters denote scalars, e.g., $a$; let lowercase bold letters denote vectors, e.g., $\mathbf{a}$; let bold capital letters denote matrices, e.g., $\mathbf{U}$; let capital Fraktur letters denote higher-order tensors, e.g., $\mathfrak{T}$. The $i$th element of a vector $\mathbf{a}$ is denoted by $a_i$; the element of a matrix $\mathbf{U}$ with index $(i,j)$ is denoted by $u_{ij}$; the element of an $N$-way tensor $\mathfrak{T}$ with index $(i_1, i_2, \ldots, i_N)$ is denoted by $t_{i_1 i_2 \cdots i_N}$. A tensor can be viewed as a multi-dimensional array, and the number of dimensions is generally called the order of the tensor. For example, matrices are second-order tensors, while vectors are first-order tensors. A mode-$n$ fiber is a vector extracted from a tensor by fixing all indices except the $n$th one. Taking a matrix as an example, the mode-1 fibers are the columns, and the mode-2 fibers are the rows. The mode-$n$ unfolding of a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ rearranges the mode-$n$ fibers as the columns of the resulting matrix, denoted by $\mathbf{T}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$. Naturally, the reverse of unfolding, i.e., rearranging $\mathbf{T}_{(n)}$ back into $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, is called folding.
The $n$-mode matrix product is the multiplication of a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$, denoted by $\mathfrak{X} = \mathfrak{T} \times_n \mathbf{U}$, where $\mathfrak{X} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$. It can be regarded as multiplying each mode-$n$ fiber of $\mathfrak{T}$ by $\mathbf{U}$. The matricized form is given by

$$\mathbf{X}_{(n)} = \mathbf{U}\,\mathbf{T}_{(n)}, \tag{1}$$

and the element-wise operation is

$$x_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} t_{i_1 \cdots i_{n-1} i_n i_{n+1} \cdots i_N}\, u_{j i_n}. \tag{2}$$
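As a concrete illustration (ours, not part of the original paper), the following NumPy sketch implements the mode-$n$ unfolding and the $n$-mode product defined above; the helper names `unfold`, `fold` and `mode_n_product` are our own.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: arrange the mode-n fibers of T as columns,
    giving a matrix of shape (I_n, product of the remaining dimensions)."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))

def fold(M, n, shape):
    """Inverse of unfold: rearrange the unfolded matrix back into a tensor."""
    full_shape = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full_shape), 0, n)

def mode_n_product(T, U, n):
    """n-mode product X = T x_n U: multiply every mode-n fiber of T by U."""
    # Contract T's nth axis with U's second axis, then move the new axis
    # (of size J) back into position n, matching Equation (2).
    return np.moveaxis(np.tensordot(T, U, axes=(n, 1)), -1, n)

# Quick check of the matricized identity X_(n) = U @ T_(n) in Equation (1)
T = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)
X = mode_n_product(T, U, 1)
assert np.allclose(unfold(X, 1), U @ unfold(T, 1))
```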
The norm of a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a higher-order analogue of the matrix Frobenius norm, defined by

$$\|\mathfrak{T}\|_F = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} t_{i_1 i_2 \cdots i_N}^2}. \tag{3}$$
The inner product of two same-sized tensors $\mathfrak{T}, \mathfrak{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is defined by

$$\langle \mathfrak{T}, \mathfrak{X} \rangle = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} t_{i_1 i_2 \cdots i_N}\, x_{i_1 i_2 \cdots i_N}. \tag{4}$$

2.2. The Tensor Completion Problem

Tensor completion is the problem of predicting the missing entries of a tensor from a few sampled entries, based on the assumption that the tensor has a low-rank structure. Suppose we have a partially observed tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, and let $\Omega$ denote the set of indices where entries are observed. Let $P_\Omega$ be the operator on $\mathbb{R}^{I_1 \times \cdots \times I_N}$ such that $P_\Omega(\mathfrak{X})$ equals $\mathfrak{X}$ on $\Omega$ and 0 otherwise. The low-rank tensor completion problem can be formulated as

$$\min_{\mathfrak{X}} \ \operatorname{rank}(\mathfrak{X}) \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}). \tag{5}$$
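In code, the operator $P_\Omega$ is simply element-wise masking. A minimal sketch (ours), assuming the observation set is stored as a boolean array:

```python
import numpy as np

def P_Omega(X, mask):
    """Projection onto the observed index set: mask is a boolean array
    of the same shape as X, True where an entry is observed."""
    return np.where(mask, X, 0.0)

# Example: observe 10% of a random 4 x 5 x 6 tensor
rng = np.random.default_rng(0)
T = rng.random((4, 5, 6))
mask = rng.random(T.shape) < 0.1
T_obs = P_Omega(T, mask)
```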
Although the formulation appears simple, how to measure the rank of a tensor is still an open question. In previous research, the CP rank [1,12] and the Tucker rank [9,14,15] were mostly considered. After the tensor-train (TT) decomposition was proposed in [16], tensor completion based on the TT rank was studied in [3,17,18]. The low-tubal-rank tensor completion problem was studied in [19,20,21]. In addition, the combination of the CP and Tucker ranks was investigated in [22].
Since matrices are second-order tensors, matrix completion can be viewed as a special case of tensor completion. Using the matrix nuclear norm to approximate the rank of a matrix was studied for matrix completion in [23]. Liu et al. [9] generalized this to the tensor setting by defining the nuclear norm of tensors and proposing the following low-rank tensor completion problem:

$$\min_{\mathfrak{X}} \ \|\mathfrak{X}\|_* \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}). \tag{6}$$

The nuclear norm of a tensor is defined by $\|\mathfrak{X}\|_* := \sum_{n=1}^{N} \|\mathbf{X}_{(n)}\|_*$, where $\|\mathbf{X}_{(n)}\|_*$ is the nuclear norm of the matrix $\mathbf{X}_{(n)}$, i.e., the sum of its singular values. However, the computation involves an SVD (singular value decomposition) of the unfolding matrix $\mathbf{T}_{(n)}$ in each iteration, which is expensive when $\mathfrak{T}$ is large. To control the computational cost, the problem is reformulated based on tensor decomposition [5,11]. In particular, the low-rank tensor completion problem based on Tucker decomposition can be written as

$$\min_{\mathfrak{G}, \mathbf{A}_1, \ldots, \mathbf{A}_N} \ \|\mathfrak{T} - \mathfrak{X}\|_\Omega^2 \quad \mathrm{s.t.} \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1 \cdots \times_N \mathbf{A}_N, \quad \mathbf{A}_n^T \mathbf{A}_n = \mathbf{I}_{R_n \times R_n}, \ n = 1, \ldots, N, \tag{7}$$

where $\mathfrak{X}$ is the completed goal tensor, $(R_1, \ldots, R_N)$ is a predefined multilinear rank, $\mathbf{A}_n \in \mathbb{R}^{I_n \times R_n}$, and $\|\cdot\|_\Omega$ denotes the norm restricted to the observed entries, i.e., $\|P_\Omega(\cdot)\|_F$. Methods developed to solve this problem include M²SA [11], gHOI (generalized higher-order orthogonal iteration) [24] and Tucker–Wopt [5]. However, these algorithms typically require the multilinear rank to be given a priori, and both underestimating and overestimating the rank can result in inaccurate completion. To handle this issue, in this work we propose a rank-adaptive tensor completion approach that does not require the multilinear rank a priori.

3. The Tucker Decomposition and Its Computation

We review Tucker decomposition and its related algorithms in this section.

3.1. Tucker Decomposition

Tucker decomposition was first introduced by Tucker [25]. For an $N$th-order tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its Tucker decomposition has the form

$$\mathfrak{T} = \mathfrak{G} \times_1 \mathbf{A}_1 \times_2 \mathbf{A}_2 \cdots \times_N \mathbf{A}_N, \tag{8}$$

where $\mathfrak{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ is the core tensor and $\mathbf{A}_n \in \mathbb{R}^{I_n \times R_n}$, $n = 1, 2, \ldots, N$, are the factor matrices. For simplicity, we assume all the factor matrices have orthonormal columns. $R_n$ is the rank of the mode-$n$ unfolding matrix $\mathbf{T}_{(n)}$. As described in [13], Tucker decomposition can be viewed as a higher-order generalization of PCA (principal component analysis) in some sense. In Tucker decomposition, the columns of each factor matrix represent the components in the corresponding mode, and each element of the core tensor characterizes the level of interaction between components in different modes.
The tuple $(R_1, \ldots, R_N)$ is called the Tucker rank or multilinear rank. In practice, however, we often prefer the truncated version of Tucker decomposition, where we set $R_n < \operatorname{rank}(\mathbf{T}_{(n)})$ for one or more $n$. Given the truncation $(R_1, \ldots, R_N)$, the truncated Tucker approximation of a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ can be described as

$$\min_{\mathfrak{G}, \mathbf{A}_1, \ldots, \mathbf{A}_N} \ \|\mathfrak{T} - \mathfrak{X}\|_F^2 \quad \mathrm{s.t.} \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1 \cdots \times_N \mathbf{A}_N, \quad \mathbf{A}_n^T \mathbf{A}_n = \mathbf{I}_{R_n \times R_n}, \ n = 1, \ldots, N, \tag{9}$$

where $\mathfrak{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$, $\mathbf{A}_n \in \mathbb{R}^{I_n \times R_n}$ is the $n$th factor matrix, and $\mathbf{I}_{R_n \times R_n}$ is the identity matrix of size $R_n \times R_n$.
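As an illustration (ours), reconstructing a tensor from a Tucker model is just a sequence of $n$-mode products; the snippet below repeats the `mode_n_product` sketch from Section 2.1 so that it is self-contained.

```python
import numpy as np

def mode_n_product(T, U, n):
    """n-mode product: contract T's nth axis with U's columns."""
    return np.moveaxis(np.tensordot(T, U, axes=(n, 1)), -1, n)

def tucker_to_tensor(G, factors):
    """Reconstruct X = G x_1 A_1 x_2 A_2 ... x_N A_N from a Tucker model."""
    X = G
    for n, A in enumerate(factors):
        X = mode_n_product(X, A, n)
    return X

# A random Tucker model with multilinear rank (2, 3, 2)
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 3, 2))
factors = [np.linalg.qr(rng.standard_normal((I, R)))[0]   # orthonormal columns
           for I, R in zip((10, 12, 8), (2, 3, 2))]
X = tucker_to_tensor(G, factors)   # shape (10, 12, 8)
```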

3.2. The Higher-Order Orthogonal Iteration (HOOI) Algorithm

There are several approaches to computing the truncated Tucker decomposition of a tensor. One of the most popular is the higher-order orthogonal iteration (HOOI) algorithm [26], also referred to as the alternating least squares (ALS) algorithm for Tucker decomposition. It is an alternating iteration method: each time, we fix all but one factor matrix and minimize the objective function in (9). Specifically, given initial guesses $\{\mathbf{A}_n^{(0)} : n = 1, \ldots, N\}$, at the $k$th iteration, for each $n = 1, \ldots, N$, we fix all the factor matrices except $\mathbf{A}_n$ and find the optimal solution to the subproblem

$$\min_{\mathfrak{G}, \mathbf{A}_n^{(k)}} \ \|\mathfrak{T} - \mathfrak{X}\|_F^2 \quad \mathrm{s.t.} \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1^{(k)} \cdots \times_n \mathbf{A}_n^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}, \quad (\mathbf{A}_n^{(k)})^T \mathbf{A}_n^{(k)} = \mathbf{I}_{R_n \times R_n}, \tag{10}$$

where $\mathfrak{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ and $\mathbf{A}_n^{(k)} \in \mathbb{R}^{I_n \times R_n}$. Denoting $\mathfrak{B} := \mathfrak{G} \times_n \mathbf{A}_n^{(k)}$, this optimization problem can be simplified as

$$\min_{\mathfrak{B}} \ \|\mathfrak{T} - \mathfrak{X}\|_F^2 \quad \mathrm{s.t.} \quad \mathfrak{X} = \mathfrak{B} \times_1 \mathbf{A}_1^{(k)} \cdots \times_{n-1} \mathbf{A}_{n-1}^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}, \quad \operatorname{rank}(\mathbf{B}_{(n)}) \le R_n, \tag{11}$$

where $\mathfrak{B} \in \mathbb{R}^{R_1 \times \cdots \times R_{n-1} \times I_n \times R_{n+1} \times \cdots \times R_N}$ and $\mathbf{B}_{(n)}$ is the mode-$n$ unfolding of $\mathfrak{B}$. This can be treated as a constrained least squares problem [27], and thus the solution to (10) can be obtained by the following steps:
  • Compute $\mathfrak{B}^* = \mathfrak{T} \times_1 (\mathbf{A}_1^{(k)})^T \cdots \times_{n-1} (\mathbf{A}_{n-1}^{(k)})^T \times_{n+1} (\mathbf{A}_{n+1}^{(k-1)})^T \cdots \times_N (\mathbf{A}_N^{(k-1)})^T$;
  • Unfold $\mathfrak{B}^*$ at the $n$th mode to obtain $\mathbf{B}_{(n)}^*$, then perform a truncated rank-$R_n$ singular value decomposition $(\mathbf{B}_{(n)}^*)_{[R_n]} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T = \mathbf{B}_{(n)}$;
  • Set $\mathbf{A}_n^{(k)} = \mathbf{U}$ and $\mathbf{G}_{(n)} = \mathbf{\Sigma} \mathbf{V}^T$.
We iteratively solve these subproblems until a given stopping criterion is reached, e.g., the difference between the solutions at two adjacent iterations is small enough, or the value of the objective function decreases only slightly. The complete procedure of HOOI is presented in Algorithm 1.
Algorithm 1 The higher-order orthogonal iteration (HOOI) algorithm [26]
  • Input: Tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and truncation $(R_1, R_2, \ldots, R_N)$.
  • Output: Core tensor $\mathfrak{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ and factor matrices $\mathbf{A}_n \in \mathbb{R}^{I_n \times R_n}$ for $n = 1, 2, \ldots, N$.
1: Initialize $\mathbf{A}_n^{(0)} \in \mathbb{R}^{I_n \times R_n}$ for $n = 1, 2, \ldots, N$ using HOSVD.
2: $k \leftarrow 0$.
3: while not converged do
4:   $k \leftarrow k + 1$.
5:   for $n = 1, 2, \ldots, N$ do
6:     $\mathfrak{B} \leftarrow \mathfrak{T} \times_1 (\mathbf{A}_1^{(k)})^T \cdots \times_{n-1} (\mathbf{A}_{n-1}^{(k)})^T \times_{n+1} (\mathbf{A}_{n+1}^{(k-1)})^T \cdots \times_N (\mathbf{A}_N^{(k-1)})^T$.
7:     $\mathbf{B}_{(n)} \leftarrow$ mode-$n$ unfolding matrix of $\mathfrak{B}$.
8:     $\mathbf{U}, \mathbf{\Sigma}, \mathbf{V}^T \leftarrow$ truncated rank-$R_n$ SVD of $\mathbf{B}_{(n)}$.
9:     $\mathbf{A}_n^{(k)} \leftarrow \mathbf{U}$.
10:   end for
11: end while
12: $\mathfrak{G} \leftarrow \mathbf{\Sigma} \mathbf{V}^T$ folded at mode $n$.
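The following NumPy sketch (our illustration of Algorithm 1, not the authors' code) implements HOOI; the HOSVD initialization is approximated by taking the leading left singular vectors of each unfolding, and the convergence test is an illustrative choice.

```python
import numpy as np

def unfold(T, n):
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))

def mode_n_product(T, U, n):
    return np.moveaxis(np.tensordot(T, U, axes=(n, 1)), -1, n)

def hooi(T, ranks, max_iter=50, tol=1e-8):
    """Higher-order orthogonal iteration for truncated Tucker decomposition."""
    N = T.ndim
    # HOSVD-style initialization: leading left singular vectors per mode
    A = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]]
         for n in range(N)]
    err_prev = np.inf
    for _ in range(max_iter):
        for n in range(N):
            # B = T x_m A_m^T over all modes m != n
            B = T
            for m in range(N):
                if m != n:
                    B = mode_n_product(B, A[m].T, m)
            U, _, _ = np.linalg.svd(unfold(B, n), full_matrices=False)
            A[n] = U[:, :ranks[n]]
        # Core tensor, reconstruction and objective value
        G = T
        for m in range(N):
            G = mode_n_product(G, A[m].T, m)
        X = G
        for m in range(N):
            X = mode_n_product(X, A[m], m)
        err = np.linalg.norm(T - X)
        if abs(err_prev - err) < tol * np.linalg.norm(T):
            break
        err_prev = err
    return G, A
```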

3.3. The Rank-Adaptive HOOI Algorithm

HOOI requires the multilinear rank $(R_1, \ldots, R_N)$ to be given a priori, which is hard to determine in practice. Instead of (9), Xiao and Yang [27] consider the following form of the low multilinear rank approximation problem:

$$\min_{\mathfrak{G}, \mathbf{A}_1, \ldots, \mathbf{A}_N} \ (R_1, R_2, \ldots, R_N) \quad \mathrm{s.t.} \quad \|\mathfrak{X} - \mathfrak{T}\|_F^2 < \epsilon \|\mathfrak{T}\|_F^2, \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1 \cdots \times_N \mathbf{A}_N, \quad \mathbf{A}_n^T \mathbf{A}_n = \mathbf{I}_{R_n \times R_n}, \ n = 1, \ldots, N, \tag{12}$$

where $\epsilon$ is a given tolerance. A rank-adaptive HOOI algorithm is proposed in [27], which adjusts the truncation $R_n$ for mode $n$ in the HOOI iterations by

$$R_n^{(k)} = \operatorname*{argmin}_{R} \left\{ R : \|\mathbf{B}_{(n)} - (\mathbf{B}_{(n)})_{[R]}\|_F^2 < \|\mathfrak{B}\|_F^2 - (1-\epsilon)\,\|\mathfrak{T}\|_F^2 \right\}, \tag{13}$$

where $(\mathbf{B}_{(n)})_{[R]}$ is the best rank-$R$ approximation of $\mathbf{B}_{(n)}$. Given the full SVD $\mathbf{B}_{(n)} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, it can be computed as $(\mathbf{B}_{(n)})_{[R]} = \mathbf{U}_{:,1:R}\, \mathbf{\Sigma}_{1:R,1:R}\, \mathbf{V}_{:,1:R}^T$. In [27], it is proven that (13) is a locally optimal strategy for updating $R_n$, i.e., it yields the optimal solution of the following problem:

$$\min_{\mathfrak{B}} \ \operatorname{rank}(\mathbf{B}_{(n)}) \quad \mathrm{s.t.} \quad \|\mathfrak{X} - \mathfrak{T}\|_F^2 < \epsilon \|\mathfrak{T}\|_F^2, \quad \mathfrak{X} = \mathfrak{B} \times_1 \mathbf{A}_1^{(k)} \cdots \times_{n-1} \mathbf{A}_{n-1}^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}. \tag{14}$$
Details of the above procedure are summarized in Algorithm 2.
Algorithm 2 The rank-adaptive HOOI algorithm [27]
  • Input: Tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, error tolerance $\epsilon$, initial guess of factor matrices $\mathbf{A}_n^{(0)} \in \mathbb{R}^{I_n \times R_n^{(0)}}$ for $n = 1, 2, \ldots, N$ and initial truncation $(R_1^{(0)}, R_2^{(0)}, \ldots, R_N^{(0)})$.
  • Output: Truncation $(R_1^{(k)}, R_2^{(k)}, \ldots, R_N^{(k)})$, core tensor $\mathfrak{G} \in \mathbb{R}^{R_1^{(k)} \times \cdots \times R_N^{(k)}}$ and factor matrices $\mathbf{A}_n^{(k)} \in \mathbb{R}^{I_n \times R_n^{(k)}}$ for $n = 1, 2, \ldots, N$.
1: $\mathfrak{G}^{(0)} \leftarrow \mathfrak{T} \times_1 (\mathbf{A}_1^{(0)})^T \cdots \times_N (\mathbf{A}_N^{(0)})^T$.
2: $k \leftarrow 0$.
3: while not converged do
4:   $k \leftarrow k + 1$.
5:   for all $n \in \{1, 2, \ldots, N\}$ do
6:     $\mathfrak{B} \leftarrow \mathfrak{T} \times_1 (\mathbf{A}_1^{(k)})^T \cdots \times_{n-1} (\mathbf{A}_{n-1}^{(k)})^T \times_{n+1} (\mathbf{A}_{n+1}^{(k-1)})^T \cdots \times_N (\mathbf{A}_N^{(k-1)})^T$.
7:     $\mathbf{B}_{(n)} \leftarrow$ mode-$n$ unfolding matrix of $\mathfrak{B}$.
8:     $\mathbf{U}, \mathbf{\Sigma}, \mathbf{V}^T \leftarrow$ full SVD of $\mathbf{B}_{(n)}$.
9:     $R_n^{(k)} \leftarrow$ minimum $R$ such that $\sum_{r > R} \mathbf{\Sigma}_{r,r}^2 < \|\mathfrak{B}\|_F^2 - (1-\epsilon)\|\mathfrak{T}\|_F^2$.
10:     $\mathbf{A}_n^{(k)} \leftarrow \mathbf{U}_{:,1:R_n^{(k)}}$.
11:   end for
12:   $\mathfrak{G} \leftarrow \mathbf{\Sigma}_{1:R_n^{(k)},1:R_n^{(k)}} \mathbf{V}_{:,1:R_n^{(k)}}^T$ folded at mode $n$.
13: end while
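A minimal sketch (ours) of the truncation rule in line 9 of Algorithm 2: choose the smallest rank whose discarded singular-value energy satisfies the bound in (13).

```python
import numpy as np

def select_rank(B_n, normB_sq, normT_sq, eps):
    """Smallest R whose discarded tail energy sum_{r>R} sigma_r^2 stays
    below the budget ||B||_F^2 - (1 - eps) * ||T||_F^2, as in (13)."""
    sigma = np.linalg.svd(B_n, compute_uv=False)   # descending order
    budget = normB_sq - (1.0 - eps) * normT_sq
    # tail[j] = sum of sigma_i^2 over i >= j (0-based); keeping the R
    # leading singular values discards exactly tail[R].
    tail = np.cumsum((sigma ** 2)[::-1])[::-1]
    for R in range(len(sigma) + 1):
        discarded = tail[R] if R < len(sigma) else 0.0
        if discarded < budget:
            return max(R, 1)   # keep at least rank one
    return len(sigma)          # budget unattainable: keep full rank
```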

4. A Rank-Adaptive Tensor Recovery Scheme

This section proposes a rank-adaptive tensor completion scheme based on truncated Tucker decomposition (RATC-TD).
Analogous to the rank-adaptive HOOI algorithm, instead of (7), we consider a different form of the low-rank tensor completion problem:
$$\min_{\mathfrak{G}, \mathbf{A}_1, \ldots, \mathbf{A}_N} \ (R_1, R_2, \ldots, R_N) \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}), \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1 \cdots \times_N \mathbf{A}_N, \quad \mathbf{A}_n^T \mathbf{A}_n = \mathbf{I}_{R_n \times R_n}, \ n = 1, \ldots, N. \tag{15}$$
Note that if the data are noisy, the constraint must be relaxed to $\|\mathfrak{X} - \mathfrak{T}\|_\Omega^2 < \epsilon \|\mathfrak{T}\|_\Omega^2$, where $\epsilon$ is a given tolerance on the relative error between the original tensor and the completed one over the observed entries.
Using an alternating optimization technique, problem (15) can be divided into $N$ subproblems, which are described in detail in Section 4.1. To address these subproblems, the singular value thresholding (SVT) algorithm [7] is introduced in Section 4.2. The entire algorithm is summarized in Section 4.3.

4.1. Alternating Optimization in Tensor Completion

We solve (15) by an alternating iteration method. For each $n = 1, \ldots, N$, we initialize $R_n^{(0)} = I_n$, and the initial guess $\mathbf{A}_n^{(0)}$ can be either set randomly or given by the HOSVD (higher-order SVD) [13] of $P_\Omega(\mathfrak{T})$. At the $k$th iteration, for each $n = 1, \ldots, N$, fixing all the factor matrices except $\mathbf{A}_n$, we solve the subproblem

$$\min_{\mathfrak{G}, \mathbf{A}_n^{(k)}} \ R_n^{(k)} \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}), \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1^{(k)} \cdots \times_n \mathbf{A}_n^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}, \quad (\mathbf{A}_n^{(k)})^T \mathbf{A}_n^{(k)} = \mathbf{I}_{R_n \times R_n}. \tag{16}$$

Denoting $\mathfrak{B} := \mathfrak{G} \times_n \mathbf{A}_n^{(k)}$, (16) is equivalent to

$$\min_{\mathfrak{B}} \ \operatorname{rank}(\mathbf{B}_{(n)}) \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}), \quad \mathfrak{X} = \mathfrak{B} \times_1 \mathbf{A}_1^{(k)} \cdots \times_{n-1} \mathbf{A}_{n-1}^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}, \tag{17}$$

where $\mathbf{B}_{(n)}$ is the mode-$n$ unfolding of $\mathfrak{B}$. However, this rank minimization problem is NP-hard. Its tightest convex relaxation is

$$\min_{\mathfrak{B}} \ \|\mathbf{B}_{(n)}\|_* \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}), \quad \mathfrak{X} = \mathfrak{B} \times_1 \mathbf{A}_1^{(k)} \cdots \times_{n-1} \mathbf{A}_{n-1}^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}, \tag{18}$$

where $\|\mathbf{B}_{(n)}\|_*$ is the nuclear norm of the matrix $\mathbf{B}_{(n)}$, i.e., the sum of its singular values. The matrix form of this problem can be written as [27]

$$\min_{\mathbf{B}} \ \|\mathbf{B}\|_* \quad \mathrm{s.t.} \quad P_\Omega(\mathbf{B} \mathbf{M}^T) = P_\Omega(\mathbf{T}_{(n)}), \quad \mathbf{M} = \mathbf{A}_N \otimes \cdots \otimes \mathbf{A}_{n+1} \otimes \mathbf{A}_{n-1} \otimes \cdots \otimes \mathbf{A}_1, \tag{19}$$

where $\mathbf{T}_{(n)}$ is the mode-$n$ unfolding of $\mathfrak{T}$, and $\otimes$ is the Kronecker product. In this work, we use the singular value thresholding (SVT) algorithm [7] to obtain an approximate solution to (19).

4.2. Solving the Subproblems Using SVT

Following the notation used in [7], we first introduce the singular value thresholding operator. Consider the skinny singular value decomposition of a matrix $\mathbf{X} \in \mathbb{R}^{n_1 \times n_2}$ of rank $r$,

$$\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T, \quad \mathbf{\Sigma} = \operatorname{diag}(\{\sigma_i\}_{1 \le i \le r}), \tag{20}$$

where $\mathbf{U}$ and $\mathbf{V}$ are, respectively, $n_1 \times r$ and $n_2 \times r$ matrices with orthonormal columns, and the singular values satisfy $\sigma_i > 0$ for $i = 1, \ldots, r$. Given a shrinkage threshold $\tau > 0$, the singular value thresholding operator $\mathcal{D}_\tau$ is defined by

$$\mathcal{D}_\tau(\mathbf{X}) := \mathbf{U} \mathcal{D}_\tau(\mathbf{\Sigma}) \mathbf{V}^T, \quad \mathcal{D}_\tau(\mathbf{\Sigma}) = \operatorname{diag}(\{\max(\sigma_i - \tau, 0)\}_{1 \le i \le r}). \tag{21}$$

According to the derivation in [7], for each $\tau > 0$ and $\mathbf{Y} \in \mathbb{R}^{n_1 \times n_2}$, the singular value shrinkage operator (21) obeys

$$\mathcal{D}_\tau(\mathbf{Y}) = \operatorname*{argmin}_{\mathbf{X}} \ \tau \|\mathbf{X}\|_* + \frac{1}{2} \|\mathbf{X} - \mathbf{Y}\|_F^2. \tag{22}$$
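A direct NumPy sketch (ours) of the shrinkage operator (21):

```python
import numpy as np

def svt_operator(X, tau):
    """Singular value thresholding D_tau: shrink each singular value
    by tau and discard those that fall to zero (Equation (21))."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt   # same as U @ diag(s_shrunk) @ Vt
```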
The SVT algorithm uses the singular value thresholding operator $\mathcal{D}_\tau$ and its property (22) to handle the subproblem (19). Consider the proximal problem for $\tau > 0$,

$$\min_{\mathbf{B}} \ \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B}\|_F^2 \quad \mathrm{s.t.} \quad P_\Omega(\mathbf{B} \mathbf{M}^T) = P_\Omega(\mathbf{T}_{(n)}), \tag{23}$$

whose solution converges to that of (19) as $\tau \to \infty$. This optimization problem can be solved by a Lagrange multiplier method known as Uzawa's algorithm [28].
Given the Lagrangian of (23),

$$\mathcal{L}(\mathbf{B}, \mathbf{Y}) = \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B}\|_F^2 + \langle \mathbf{Y}, P_\Omega(\mathbf{T}_{(n)} - \mathbf{B} \mathbf{M}^T) \rangle, \tag{24}$$

where $\mathbf{Y}$ has the same size as $\mathbf{T}_{(n)}$, the optimal $\mathbf{B}^*$ and $\mathbf{Y}^*$ should satisfy

$$\mathcal{L}(\mathbf{B}^*, \mathbf{Y}^*) = \inf_{\mathbf{B}} \sup_{\mathbf{Y}} \mathcal{L}(\mathbf{B}, \mathbf{Y}) = \sup_{\mathbf{Y}} \inf_{\mathbf{B}} \mathcal{L}(\mathbf{B}, \mathbf{Y}). \tag{25}$$

Starting with $\mathbf{Y}^{(0)} = \mathbf{0}$, Uzawa's algorithm finds the saddle point $(\mathbf{B}^*, \mathbf{Y}^*)$ through the iterative procedure

$$\begin{cases} \mathbf{B}^{(k)} = \operatorname*{argmin}_{\mathbf{B}} \ \mathcal{L}(\mathbf{B}, \mathbf{Y}^{(k-1)}), \\ \mathbf{Y}^{(k)} = \mathbf{Y}^{(k-1)} + \delta_k P_\Omega(\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T), \end{cases} \tag{26}$$

where $\{\delta_k\}_{k \ge 1} > 0$ are scalar step sizes. The sequence $\{\mathbf{B}^{(k)}\}$ converges to the unique solution of (23). The update of $\mathbf{Y}$ is in fact a gradient step for the dual variable, noting that

$$\nabla_{\mathbf{Y}} \mathcal{L}(\mathbf{B}, \mathbf{Y}) = P_\Omega(\mathbf{T}_{(n)} - \mathbf{B} \mathbf{M}^T). \tag{27}$$
Now we compute the minimizer in (26). Observe that the factor matrices $\{\mathbf{A}_m : m = 1, \ldots, N\}$ have orthonormal columns. By the orthogonal invariance of the Frobenius norm, we have

$$\|\mathfrak{B}\|_F^2 = \|\mathfrak{B} \times_1 \mathbf{A}_1^{(k)} \cdots \times_{n-1} \mathbf{A}_{n-1}^{(k)} \times_{n+1} \mathbf{A}_{n+1}^{(k-1)} \cdots \times_N \mathbf{A}_N^{(k-1)}\|_F^2, \tag{28}$$

which in matrix form gives

$$\|\mathbf{B}\|_F^2 = \|\mathbf{B} \mathbf{M}^T\|_F^2. \tag{29}$$

Utilizing this property,

$$\begin{aligned} \operatorname*{argmin}_{\mathbf{B}} \ & \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B}\|_F^2 + \langle \mathbf{Y}, P_\Omega(\mathbf{T}_{(n)} - \mathbf{B} \mathbf{M}^T) \rangle \\ = \operatorname*{argmin}_{\mathbf{B}} \ & \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B} \mathbf{M}^T\|_F^2 + \langle \mathbf{Y}, P_\Omega(\mathbf{T}_{(n)} - \mathbf{B} \mathbf{M}^T) \rangle \\ = \operatorname*{argmin}_{\mathbf{B}} \ & \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B} \mathbf{M}^T - P_\Omega(\mathbf{Y})\|_F^2 \\ = \operatorname*{argmin}_{\mathbf{B}} \ & \tau \|\mathbf{B}\|_* + \frac{1}{2} \|\mathbf{B} - P_\Omega(\mathbf{Y}) \mathbf{M}\|_F^2. \end{aligned} \tag{30}$$

According to (22), the optimal $\mathbf{B}^*$ is given by $\mathcal{D}_\tau(P_\Omega(\mathbf{Y}) \mathbf{M}) = \mathcal{D}_\tau(\mathbf{Y} \mathbf{M})$, since $P_\Omega(\mathbf{Y}^{(k)}) = \mathbf{Y}^{(k)}$ for all $k \ge 0$. Therefore, Uzawa's algorithm finally takes the form

$$\begin{cases} \mathbf{B}^{(k)} = \mathcal{D}_\tau(\mathbf{Y}^{(k-1)} \mathbf{M}), \\ \mathbf{Y}^{(k)} = \mathbf{Y}^{(k-1)} + \delta_k P_\Omega(\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T), \end{cases} \tag{31}$$

also referred to as the shrinkage iterations in SVT.
To obtain an approximation to the solution of (19), we choose a sufficiently large $\tau$ and, starting from $\mathbf{Y}^{(0)} = \mathbf{0}$, perform the iterations (31) until the stopping criterion $\|\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T\|_\Omega^2 < \epsilon \|\mathfrak{T}\|_\Omega^2$ is reached.
The overall process of solving the subproblem (18) is shown in Algorithm 3.
Algorithm 3 The SVT algorithm for solving (18)
  • Input: Set of observed indices $\Omega$, tensor with observed entries $P_\Omega(\mathfrak{T}) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, fixed factor matrices $\mathbf{A}_m \in \mathbb{R}^{I_m \times R_m}$ for $m \in \{1, 2, \ldots, N\} \setminus \{n\}$, error tolerance $\epsilon$, shrinkage threshold $\tau$ and scalar step sizes $\{\delta_k\}_{k \ge 1}$.
  • Output: Optimized $\mathfrak{B}$.
1: $\mathbf{T}_{(n)} \leftarrow$ mode-$n$ unfolding matrix of $P_\Omega(\mathfrak{T})$.
2: $\mathbf{M} \leftarrow \mathbf{A}_N \otimes \cdots \otimes \mathbf{A}_{n+1} \otimes \mathbf{A}_{n-1} \otimes \cdots \otimes \mathbf{A}_1$.
3: Initialize $\mathbf{B}^{(0)} \leftarrow \mathbf{T}_{(n)} \mathbf{M}$, $\mathbf{Y}^{(0)} \leftarrow \mathbf{0}$.
4: $k \leftarrow 0$.
5: while $\|\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T\|_\Omega^2 \ge \epsilon \|\mathfrak{T}\|_\Omega^2$ do
6:   $k \leftarrow k + 1$.
7:   Update $\mathbf{B}^{(k)} \leftarrow \mathcal{D}_\tau(\mathbf{Y}^{(k-1)} \mathbf{M})$, where $\mathcal{D}_\tau$ is defined in (21).
8:   Update $\mathbf{Y}^{(k)} \leftarrow \mathbf{Y}^{(k-1)} + \delta_k P_\Omega(\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T)$.
9: end while
10: $\mathfrak{B} \leftarrow \mathbf{B}^{(k)}$ folded at mode $n$.
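A compact NumPy sketch (ours) of Algorithm 3. It forms the Kronecker factor $\mathbf{M}$ explicitly, which is only practical for small tensors; the default threshold $\tau$, step size $\delta$ and iteration cap are illustrative assumptions, and the Kronecker ordering presumes the unfolding convention of [13].

```python
import numpy as np
from functools import reduce

def svt_subproblem(T_n, mask_n, factors, n, eps=2.5e-3, tau=5e3,
                   delta=1.2, max_iter=500):
    """Shrinkage iterations (31) for the mode-n subproblem (19).
    T_n and mask_n are the mode-n unfoldings of P_Omega(T) and of the
    observation indicator; factors[m] for m != n are the fixed A_m."""
    # M = A_N kron ... kron A_{n+1} kron A_{n-1} kron ... kron A_1
    others = [factors[m] for m in reversed(range(len(factors))) if m != n]
    M = reduce(np.kron, others)
    norm_obs = np.linalg.norm(mask_n * T_n) ** 2
    Y = np.zeros_like(T_n)
    B = T_n @ M                                    # initialization, line 3
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Y @ M, full_matrices=False)
        B = (U * np.maximum(s - tau, 0.0)) @ Vt    # B = D_tau(Y M)
        residual = mask_n * (T_n - B @ M.T)        # P_Omega(T_(n) - B M^T)
        if np.linalg.norm(residual) ** 2 < eps * norm_obs:
            break
        Y = Y + delta * residual
    return B
```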

4.3. The Rank-Adaptive Tensor Completion Algorithm

This section summarizes our proposed method, namely the rank-adaptive tensor completion algorithm. Based on Tucker decomposition, this algorithm can complete a tensor with only a few observed entries and adaptively estimate its multilinear rank.
Given a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, only entries with indices in the set $\Omega$ are observed. Our goal is to predict the unobserved entries under the premise that $\mathfrak{T}$ has a low-rank structure; in this work, we consider the multilinear rank of the tensor. We solve this problem by finding a low multilinear rank tensor $\mathfrak{X}$ whose entries on the observed part satisfy $\|\mathfrak{X} - \mathfrak{T}\|_\Omega^2 < \epsilon \|\mathfrak{T}\|_\Omega^2$, where $\epsilon$ is a given tolerance. Here, we restate our problem setting (15):

$$\min_{\mathfrak{G}, \mathbf{A}_1, \ldots, \mathbf{A}_N} \ (R_1, R_2, \ldots, R_N) \quad \mathrm{s.t.} \quad P_\Omega(\mathfrak{X}) = P_\Omega(\mathfrak{T}), \quad \mathfrak{X} = \mathfrak{G} \times_1 \mathbf{A}_1 \cdots \times_N \mathbf{A}_N, \quad \mathbf{A}_n^T \mathbf{A}_n = \mathbf{I}_{R_n \times R_n}, \ n = 1, \ldots, N. \tag{15}$$
Similar to HOOI, we solve (15) by alternating iterations. Fixing all the factor matrices except one, we obtain the rank minimization subproblem (17). Since it is NP-hard, we instead consider its convex relaxation (18), i.e., nuclear norm minimization, which can be handled by the SVT algorithm; the solution is computed using Algorithm 3. After the optimized $\mathfrak{B}$ in (18) is obtained, we update $R_n^{(k)} = \min(R_n^{(k-1)}, \operatorname{rank}(\mathbf{B}_{(n)}))$ to ensure that $R_n^{(k)}$ is monotonically decreasing. We perform the iterations until some stopping criterion is satisfied, e.g., the estimated rank $(R_1, R_2, \ldots, R_N)$ remains unchanged and the factor matrices show little improvement, or the maximum number of iterations is reached. The detailed procedure of our proposed method is presented in Algorithm 4.
Algorithm 4 The rank-adaptive tensor completion based on Tucker decomposition (RATC-TD) algorithm
  • Input: Set of observed indices $\Omega$, tensor with observed entries $P_\Omega(\mathfrak{T}) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and initial guess of factor matrices $\mathbf{A}_n^{(0)} \in \mathbb{R}^{I_n \times R_n^{(0)}}$ for $n = 1, 2, \ldots, N$.
  • Output: Estimated rank $(R_1^{(k)}, R_2^{(k)}, \ldots, R_N^{(k)})$, core tensor $\mathfrak{G} \in \mathbb{R}^{R_1^{(k)} \times \cdots \times R_N^{(k)}}$ and factor matrices $\mathbf{A}_n^{(k)} \in \mathbb{R}^{I_n \times R_n^{(k)}}$ for $n = 1, 2, \ldots, N$.
1: $k \leftarrow 0$.
2: Initialize $(R_1^{(0)}, R_2^{(0)}, \ldots, R_N^{(0)}) \leftarrow (I_1, I_2, \ldots, I_N)$.
3: while not converged do
4:   for all $n = 1, 2, \ldots, N$ do
5:     $k \leftarrow k + 1$.
6:     $\mathfrak{B} \leftarrow$ solution to (18) using Algorithm 3.
7:     $\mathbf{B}_{(n)} \leftarrow$ mode-$n$ unfolding matrix of $\mathfrak{B}$.
8:     $\mathbf{U}, \mathbf{\Sigma}, \mathbf{V}^T \leftarrow$ full SVD of $\mathbf{B}_{(n)}$.
9:     $R_n^{(k)} \leftarrow \min(R_n^{(k-1)}, \operatorname{rank}(\mathbf{B}_{(n)}))$.
10:     $\mathbf{A}_n^{(k)} \leftarrow \mathbf{U}_{:,1:R_n^{(k)}}$.
11:   end for
12:   $\mathfrak{G} \leftarrow \mathbf{\Sigma}_{1:R_n^{(k)},1:R_n^{(k)}} \mathbf{V}_{:,1:R_n^{(k)}}^T$ folded at mode $n$.
13: end while
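Putting the pieces together, a skeleton of the RATC-TD outer loop might look as follows (ours; it reuses the hypothetical helpers `unfold` and `svt_subproblem` sketched above, and the stopping test keeps only the rank-stabilization criterion for brevity).

```python
import numpy as np

def unfold(T, n):
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))

def ratc_td(T_obs, mask, eps=2.5e-3, max_sweeps=50):
    """Rank-adaptive tensor completion (Algorithm 4): sweep over the modes,
    solve each nuclear-norm subproblem (18) with SVT, and shrink the
    multilinear rank whenever the subproblem solution drops rank."""
    N = T_obs.ndim
    ranks = list(T_obs.shape)                   # R_n^(0) = I_n
    factors = [np.eye(I) for I in T_obs.shape]  # trivial initial guess
    for _ in range(max_sweeps):
        old_ranks = list(ranks)
        for n in range(N):
            B = svt_subproblem(unfold(T_obs * mask, n),
                               unfold(mask, n).astype(float),
                               factors, n, eps=eps)
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            r = max(1, min(ranks[n], int(np.sum(s > 1e-10 * s[0]))))
            ranks[n] = r                        # monotonically decreasing
            factors[n] = U[:, :r]
        if ranks == old_ranks:                  # rank estimate has settled
            break
    # The core tensor can then be formed as in line 12 of Algorithm 4.
    return ranks, factors
```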

5. Numerical Experiments

In this section, numerical experiments are conducted to demonstrate the effectiveness of our proposed rank-adaptive tensor completion based on Tucker decomposition (RATC-TD) algorithm.

5.1. Test Problem 1: The Recovery of Third-Order Tensors

Many tensor recovery algorithms in the Tucker setting require the rank of the tensor to be given a priori, but our method does not. In this test problem, synthetic data are considered, and we construct random low-rank tensors in the same way as [9]. We consider a tensor $\mathfrak{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ obtained by multiplying a randomly generated kernel $\mathfrak{G}$ of size $r_1 \times r_2 \times r_3$ with randomly generated factor matrices $\mathbf{A}_i \in \mathbb{R}^{I_i \times r_i}$, $i = 1, 2, 3$. Thus, the tensor $\mathfrak{T}$ can be represented by $\mathfrak{G} \times_1 \mathbf{A}_1 \times_2 \mathbf{A}_2 \times_3 \mathbf{A}_3$, and the number of entries of $\mathfrak{T}$ is denoted by $|\mathfrak{T}|$. Constructed this way, the random tensors are guaranteed to have a low-rank structure, which classical low-rank tensor recovery methods can handle well.
In this experiment, we set the tensor size to $50 \times 50 \times 50$ and the kernel size to $5 \times 5 \times 5$; the kernel entries are randomly generated from the interval $[0, 1]$. The factor matrices are of size $50 \times 5$, with entries randomly generated from $[-0.5, 0.5]$. To show the robustness and reliability of our proposed algorithm, we add a small perturbation to the synthetic data: Gaussian noise with zero mean and standard deviation equal to 0.1 times the mean of the entries, and we take the noisy tensor as the ground truth data. As described above, our algorithm does not require the rank in advance and takes the size of the tensor as the initial rank. As the algorithm proceeds, the Tucker rank is steadily reduced, and the exact rank can be approximated.
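The synthetic setup described above can be reproduced in a few lines (our sketch; the noise convention follows the description in the text):

```python
import numpy as np

rng = np.random.default_rng(42)

def mode_n_product(T, U, n):
    return np.moveaxis(np.tensordot(T, U, axes=(n, 1)), -1, n)

# Kernel entries from [0, 1], factor entries from [-0.5, 0.5]
G = rng.uniform(0.0, 1.0, size=(5, 5, 5))
factors = [rng.uniform(-0.5, 0.5, size=(50, 5)) for _ in range(3)]
T = G
for n, A in enumerate(factors):
    T = mode_n_product(T, A, n)

# Gaussian noise: zero mean, std = 0.1 * mean of the entries
T_noisy = T + rng.normal(0.0, 0.1 * abs(T.mean()), size=T.shape)

# Observe a fraction r of entries uniformly at random
r = 0.1
mask = rng.random(T.shape) < r
```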
We compare the proposed algorithm with other Tucker-decomposition-based tensor completion algorithms (which require appropriate ranks given a priori), including gHOI [24], M²SA [11] and Tucker–Wopt [5], and report the in-sample estimation error $error_{obs}$ and the out-of-sample estimation error $error_{val}$ with different initial ranks at sampling rates $r$ (the proportion of known entries among all entries) of 0.05, 0.1 and 0.2, respectively. The size of the observed index set $\Omega$ is $r|\mathfrak{T}|$, and each index in $\Omega$ is drawn uniformly at random. $\bar{\Omega}$ denotes the complement of $\Omega$. The two errors are defined by the following formulas:

$$error_{obs} = \frac{\|P_\Omega(\mathfrak{X} - \mathfrak{T})\|_F^2}{\|P_\Omega(\mathfrak{T})\|_F^2}, \tag{32}$$

$$error_{val} = \frac{\|P_{\bar{\Omega}}(\mathfrak{X} - \mathfrak{T})\|_F^2}{\|P_{\bar{\Omega}}(\mathfrak{T})\|_F^2}, \tag{33}$$

where $\mathfrak{X}$ is the result obtained by the completion algorithms.
To ensure the generalization ability of the proposed algorithm, we set the error tolerance $\epsilon$ on the observed data to 0.0025, which means we stop optimizing $\mathfrak{B}$ with SVT when the relative error is less than $\epsilon$ or the maximum number of iterations is reached. When using SVT to optimize $\mathfrak{B}$, the relative error $error_{SVT}$ is defined as

$$error_{SVT} = \frac{\|P_\Omega(\mathbf{T}_{(n)} - \mathbf{B}^{(k)} \mathbf{M}^T)\|_F^2}{\|P_\Omega(\mathfrak{T})\|_F^2}. \tag{34}$$
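These masked relative errors are straightforward to compute (our sketch):

```python
import numpy as np

def relative_error(X, T, mask):
    """Masked relative error ||P(X - T)||_F^2 / ||P(T)||_F^2,
    as in Equations (32) and (33)."""
    num = np.linalg.norm(mask * (X - T)) ** 2
    den = np.linalg.norm(mask * T) ** 2
    return num / den

# error_obs uses the observation mask, error_val its complement:
# err_obs = relative_error(X, T, mask)
# err_val = relative_error(X, T, ~mask)
```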
For the estimated tensor $\mathfrak{X}$ obtained at each iteration of Algorithm 4, the relative errors are assessed as in (32) and (33); for the other methods considered, the relative errors are assessed in the same form. In addition to the relative error and the maximum number of iterations, Algorithm 4 also terminates if the difference between the relative errors of two consecutive steps falls below a threshold $\eta$, i.e.,

$$\frac{\left| error_{iter}^{(k+1)} - error_{iter}^{(k)} \right|}{error_{iter}^{(k)}} < \eta, \tag{35}$$

where $error_{iter}^{(k)}$ denotes the relative error obtained at the $k$th iteration.
The initial rank is set to $R^{(0)} = [r, r, r]$, $r = 5, 10, 15$. Table 1 and Table 2 show the recovery errors (32) and (33) for test problem 1, respectively. When the given initial rank does not match the ground truth data, gHOI, M²SA and Tucker–Wopt exhibit large errors, while our RATC-TD keeps the errors small. We emphasize that our proposed method does not require a given tensor rank; its initial rank is set to the size of the tensor to be completed, i.e., $R^{(0)} = \operatorname{size}(\mathfrak{T})$.
In this experiment, we use our RATC-TD algorithm to estimate the tensor ranks and then use the M²SA method to refine the estimated results. Our algorithm can thus be viewed as an initialization step for other algorithms, although the completion results are also sound when our method alone is used for optimization. The recovery results without using M²SA are shown in Table 3. As seen from Table 2, since our method provides the optimization with a good starting point, it yields a better recovery than simply using the M²SA method with a given rank. Notably, our proposed algorithm can also stably complete the tensor at a small sampling rate, without an appropriate rank being given in advance.

5.2. Test Problem 2: The Recovery of Fourth-Order Tensors

Similarly to test problem 1, following the procedures in [9], we generate the target tensor $\mathfrak{T} \in \mathbb{R}^{30 \times 30 \times 30 \times 30}$ by multiplying a randomly generated kernel $\mathfrak{G} \in \mathbb{R}^{5 \times 5 \times 5 \times 5}$ with randomly generated factor matrices of size $30 \times 5$. The ground truth data are defined as $\mathfrak{T}$ with Gaussian noise added (the same setting as in test problem 1). For gHOI, M²SA, Tucker–Wopt and our proposed RATC-TD, different initial ranks are tested, and the sampling rate $r$ is set to 0.1. The corresponding number of entries in the observed data set $\Omega$ is $r|\mathfrak{T}|$, and the index of each entry in $\Omega$ is drawn uniformly at random. The errors for this test problem are shown in Table 4, where $error_{obs}$ is the error (32) on the observed data set and $error_{val}$ is the error (33) on the validation set. Our RATC-TD method performs well for this test problem.

5.3. Test Problem 3: The Recovery of Real Missing Pictures

In this test problem, real-world data are considered, and the performance of our RATC-TD is compared with that of gHOI and M²SA. The original complete images are shown in Figure 1. Each image can be regarded as a third-order tensor of size $256 \times 256 \times 3$, where the third dimension indexes the three color channels (red, green and blue). The ground truth data $\mathfrak{T}$ for this test problem are the images in Figure 1 (details follow).
Two ways of constructing partial images are considered, and our method is tested on recovering the original images from them. First, black lines are added to the images, which can be considered a kind of structured missingness; the corrupted images are shown in the first column of Figure 2. The black-line parts of the images correspond to $\bar{\Omega}$, and the rest correspond to $\Omega$. Second, after an image is converted into a third-order tensor $\mathfrak{T}$, the observed index set $\Omega$ is constructed by drawing each index uniformly at random, with $|\Omega| = r|\mathfrak{T}|$ (where the sampling rate is $r = 0.1$). The images associated with $\Omega$ are shown in the first column of Figure 3.
From Figure 2, it is clear that the images obtained by our RATC-TD are closer to the ground truth images than those obtained by gHOI and M²SA. The numerical results characterizing the recovery quality are shown in Table 5. In addition to the errors $error_{obs}$ (32) on the observed set and $error_{val}$ (33) on the unobserved set, we also use the SSIM (structural similarity index measure) [29] and PSNR (peak signal-to-noise ratio), which are commonly used to assess the quality of image restoration. SSIM ranges from 0 to 1; the larger the SSIM, the smaller the image distortion. Similarly, the larger the PSNR, the less the distortion. PSNR measures the difference between two images: for the restored image $\mathfrak{X}$ and the original image $\mathfrak{T}$, the PSNR is given by

$$\mathrm{PSNR} = 10 \times \lg \frac{\mathrm{Maxpixel}^2}{\mathrm{MSE}}, \tag{36}$$

where Maxpixel is the maximum pixel value of the image ($\mathrm{Maxpixel} = 255$ in our test problem) and MSE is the mean squared error between $\mathfrak{X}$ and $\mathfrak{T}$. For the case with sampling rate $r = 0.1$, the restoration results are shown in Figure 3, and the corresponding numerical results are given in Table 6. In this case, we use only 10 percent of the ground truth data, yet our RATC-TD still gives effective recovery results.
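For reference, a PSNR computation in NumPy (our sketch; SSIM is more involved and is available in libraries such as scikit-image):

```python
import numpy as np

def psnr(X, T, max_pixel=255.0):
    """Peak signal-to-noise ratio between a restored image X and the
    original T, following Equation (36)."""
    mse = np.mean((np.asarray(X, float) - np.asarray(T, float)) ** 2)
    return 10.0 * np.log10(max_pixel ** 2 / mse)
```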
For the gHOI and M²SA methods, which require the rank of the tensor to be given in advance, we choose the initial rank $R^{(0)} = [20, 20, 3]$. It is worth noting that for real image data, the exact Tucker rank is difficult to know a priori. When the rank is unknown, applying gHOI and M²SA requires either repeatedly adjusting the initial rank through experiments or setting it based on prior experience.

6. Conclusions

In this paper, we propose a novel rank-adaptive tensor completion method based on Tucker decomposition (RATC-TD). Existing tensor completion methods based on Tucker decomposition, such as gHOI, M²SA and Tucker–Wopt, typically require the tensor rank to be given a priori, and overestimating or underestimating the ranks can lead to poor results. Inspired by the rank-adaptive HOOI algorithm, we build our algorithm on the HOOI structure so that it adaptively estimates the multilinear rank of the data during tensor completion. We demonstrate the algorithm's effectiveness through experiments on completing synthetic data and real images. The results of our algorithm can also provide effective initial data for other tensor completion methods, such as M²SA. In future work, we plan to extend the algorithm to higher-dimensional problems, which requires further optimization to reduce the computational time.

Author Contributions

Conceptualization, S.L., X.S. and Q.L.; methodology, S.L., X.S. and Q.L.; software, S.L. and X.S.; validation, S.L. and X.S.; investigation, S.L. and X.S.; writing—original draft preparation, S.L. and X.S.; writing—review and editing, S.L., X.S. and Q.L.; visualization, S.L. and X.S.; project administration, Q.L.; funding acquisition, Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12071291), the Science and Technology Commission of Shanghai Municipality (No. 20JC1414300) and the Natural Science Foundation of Shanghai (No. 20ZR1436200).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, K.; Liao, Q. Rank adaptive tensor recovery based model reduction for partial differential equations with high-dimensional random inputs. J. Comput. Phys. 2020, 409, 109326. [Google Scholar]
  2. Ahmadi-Asl, S.; Asante-Mensah, M.G.; Cichocki, A.; Phan, A.H.; Oseledets, I.; Wang, J. Cross Tensor Approximation for Image and Video Completion. arXiv 2022, arXiv:2207.06072. [Google Scholar]
  3. Bengua, J.A.; Phien, H.N.; Tuan, H.D.; Do, M.N. Efficient tensor completion for color image and video recovery: Low-rank tensor train. IEEE Trans. Image Process. 2017, 26, 2466–2479. [Google Scholar] [CrossRef] [Green Version]
  4. Long, Z.; Liu, Y.; Chen, L.; Zhu, C. Low rank tensor completion for multiway visual data. Signal Process. 2019, 155, 301–316. [Google Scholar] [CrossRef] [Green Version]
  5. Tan, H.; Feng, J.; Chen, Z.; Yang, F.; Wang, W. Low multilinear rank approximation of tensors and application in missing traffic data. Adv. Mech. Eng. 2014, 6, 157597. [Google Scholar]
  6. Araújo, D.C.; De Almeida, A.L.; Da Costa, J.P.; de Sousa, R.T. Tensor-based channel estimation for massive MIMO-OFDM systems. IEEE Access 2019, 7, 42133–42147. [Google Scholar]
  7. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  8. Ma, S.; Goldfarb, D.; Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 2011, 128, 321–353. [Google Scholar]
  9. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 208–220. [Google Scholar] [CrossRef]
  10. Acar, E.; Dunlavy, D.M.; Kolda, T.G.; Mørup, M. Scalable tensor factorizations for incomplete data. Chemom. Intell. Lab. Syst. 2011, 106, 41–56. [Google Scholar]
  11. Geng, X.; Smith-Miles, K.; Zhou, Z.H.; Wang, L. Face image modeling by multilinear subspace analysis with missing values. In Proceedings of the 17th ACM international conference on Multimedia, Beijing, China, 19–24 October 2009. [Google Scholar]
  12. He, Z.; Zhang, Z. High-dimensional uncertainty quantification via tensor regression with rank determination and adaptive sampling. IEEE Trans. Compon. Packag. Manuf. Technol. 2021, 11, 1317–1328. [Google Scholar] [CrossRef]
  13. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  14. Gandy, S.; Recht, B.; Yamada, I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 2011, 27, 025010. [Google Scholar] [CrossRef] [Green Version]
  15. Kressner, D.; Steinlechner, M.; Vandereycken, B. Low-rank tensor completion by Riemannian optimization. BIT Numer. Math. 2014, 54, 447–468. [Google Scholar]
  16. Oseledets, I.V. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
  17. Ding, M.; Huang, T.Z.; Ji, T.Y.; Zhao, X.L.; Yang, J.H. Low-rank tensor completion using matrix factorization based on tensor train rank and total variation. J. Sci. Comput. 2019, 81, 941–964. [Google Scholar] [CrossRef]
  18. Yang, J.; Zhu, Y.; Li, K.; Yang, J.; Hou, C. Tensor completion from structurally-missing entries by low-TT-rankness and fiber-wise sparsity. IEEE J. Sel. Top. Signal Process. 2018, 12, 1420–1434. [Google Scholar]
  19. Jiang, Q.; Ng, M. Robust Low-Tubal-Rank Tensor Completion via Convex Optimization. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 10–16 August 2019. [Google Scholar]
  20. Liu, X.Y.; Aeron, S.; Aggarwal, V.; Wang, X. Low-tubal-rank tensor completion using alternating minimization. IEEE Trans. Inf. Theory 2020, 66, 1714–1737. [Google Scholar] [CrossRef]
  21. Wang, A.; Lai, Z.; Jin, Z. Noisy low-tubal-rank tensor completion. Neurocomputing 2019, 330, 267–279. [Google Scholar] [CrossRef]
  22. Liu, Y.; Long, Z.; Huang, H.; Zhu, C. Low CP rank and tucker rank tensor completion for estimating missing components in image data. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 944–954. [Google Scholar]
  23. Hu, Y.; Zhang, D.; Ye, J.; Li, X.; He, X. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 2117–2130. [Google Scholar]
  24. Liu, Y.; Shang, F.; Fan, W.; Cheng, J.; Cheng, H. Generalized higher-order orthogonal iteration for tensor decomposition and completion. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  25. Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311. [Google Scholar] [CrossRef]
  26. De Lathauwer, L.; De Moor, B.; Vandewalle, J. On the best rank-1 and rank-(R1, R2, ⋯, RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 2000, 21, 1324–1342. [Google Scholar]
  27. Xiao, C.; Yang, C. A rank-adaptive higher-order orthogonal iteration algorithm for truncated Tucker decomposition. arXiv 2021, arXiv:2110.12564. [Google Scholar]
  28. Uzawa, H. Iterative methods for concave programming. Stud. Linear Nonlinear Program. 1958, 6, 154–165. [Google Scholar]
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
Figure 1. The original images, referred to as 'girl', 'baboon' and 'boat', test problem 3.
Figure 2. The restoration results of the images with black lines, test problem 3.
Figure 3. The restoration results with sampling rate r = 0.1, test problem 3.
Table 1. Relative error (32) comparison with different initial ranks on observed data, test problem 1.

Sampling Ratio | Initial Rank | M²SA | gHOI | Tucker–Wopt | RATC-TD
0.05 | 5, 5, 5 | 0.0354 | 0.0431 | 0.0354 | 0.0354
0.05 | 10, 10, 10 | 0.0322 | 0.0404 | 0.0284 | 0.0354
0.05 | 15, 15, 15 | 0.0270 | 0.0362 | 0.1128 | 0.0354
0.1 | 5, 5, 5 | 0.0264 | 0.0278 | 0.0264 | 0.0264
0.1 | 10, 10, 10 | 0.0250 | 0.0267 | 0.0233 | 0.0264
0.1 | 15, 15, 15 | 0.0229 | 0.0229 | 0.0199 | 0.0264
0.2 | 5, 5, 5 | 0.0235 | 0.0236 | 0.0235 | 0.0235
0.2 | 10, 10, 10 | 0.0230 | 0.0229 | 0.0225 | 0.0235
0.2 | 15, 15, 15 | 0.0223 | 0.0215 | 0.0208 | 0.0235
Table 2. Relative error (33) comparison with different initial ranks on unknown data, test problem 1.

Sampling Ratio | Initial Rank | M²SA | gHOI | Tucker–Wopt | RATC-TD
0.05 | 5, 5, 5 | 0.0187 | 0.0518 | 0.0187 | 0.0186
0.05 | 10, 10, 10 | 0.3438 | 0.4192 | 0.0429 | 0.0186
0.05 | 15, 15, 15 | 0.6262 | 0.6143 | 1.4444 | 0.0186
0.1 | 5, 5, 5 | 0.0153 | 0.0231 | 0.0153 | 0.0153
0.1 | 10, 10, 10 | 0.2300 | 0.3102 | 0.0241 | 0.0153
0.1 | 15, 15, 15 | 0.4935 | 0.4521 | 0.0610 | 0.0153
0.2 | 5, 5, 5 | 0.0145 | 0.0162 | 0.0145 | 0.0145
0.2 | 10, 10, 10 | 0.2480 | 0.2390 | 0.0181 | 0.0145
0.2 | 15, 15, 15 | 0.4872 | 0.2800 | 0.0222 | 0.0145
Table 3. Numerical results using only our proposed algorithm RATC-TD, test problem 1.

Sampling Ratio | error_obs | error_val
0.05 | 0.0500 | 0.0410
0.1 | 0.0489 | 0.0523
0.2 | 0.0495 | 0.0443
Table 4. Tensor completion quality of the methods at sampling rate r = 0.1, test problem 2.

Type of Error | Initial Rank | M²SA | gHOI | Tucker–Wopt | RATC-TD
error_obs | 5, 5, 5, 5 | 0.0300 | 0.0342 | 0.0300 | 0.0300
error_val | 5, 5, 5, 5 | 0.0147 | 0.0251 | 0.0147 | 0.0147
error_obs | 15, 15, 15, 15 | 0.0258 | 0.0255 | 0.1552 | 0.0300
error_val | 15, 15, 15, 15 | 0.7130 | 0.4807 | 6.6025 | 0.0147
Table 5. Numerical characterization of the recovery quality of images with black lines, test problem 3.

Girl | PSNR | SSIM | error_obs | error_val
Image with missing data | 11.4347 | 0.5044 | / | /
gHOI | 24.8558 | 0.9536 | 0.0236 | 0.0985
M²SA | 24.8196 | 0.9532 | 0.0235 | 0.1080
RATC-TD | 28.9981 | 0.9833 | 0.0099 | 0.0751

Baboon | PSNR | SSIM | error_obs | error_val
Image with missing data | 8.6372 | 0.1878 | / | /
gHOI | 18.3661 | 0.7477 | 0.0268 | 0.1670
M²SA | 17.2923 | 0.6948 | 0.0263 | 0.2341
RATC-TD | 20.9857 | 0.8612 | 0.0098 | 0.1004

Boat | PSNR | SSIM | error_obs | error_val
Image with missing data | 8.5502 | 0.3229 | / | /
gHOI | 17.4365 | 0.8717 | 0.0282 | 0.2419
M²SA | 16.9671 | 0.8556 | 0.0276 | 0.2635
RATC-TD | 22.0162 | 0.9524 | 0.0099 | 0.0998
Table 6. Numerical characterization of the recovery quality of images obtained by random sampling at a sampling rate of 0.1, test problem 3.

Girl | PSNR | SSIM | error_obs | error_val
Image with missing data | 5.5747 | 0.0329 | / | /
gHOI | 13.2045 | 0.6585 | 0.0309 | 0.1336
M²SA | 11.9943 | 0.5977 | 0.0305 | 0.1663
RATC-TD | 18.9566 | 0.8049 | 0.0100 | 0.0764

Baboon | PSNR | SSIM | error_obs | error_val
Image with missing data | 5.8165 | 0.0306 | / | /
gHOI | 13.6954 | 0.5862 | 0.0319 | 0.1121
M²SA | 12.8840 | 0.5404 | 0.0351 | 0.1441
RATC-TD | 17.4101 | 0.7008 | 0.0098 | 0.0968

Boat | PSNR | SSIM | error_obs | error_val
Image with missing data | 5.6110 | 0.0346 | / | /
gHOI | 12.5033 | 0.7168 | 0.0312 | 0.1336
M²SA | 11.6136 | 0.6549 | 0.0343 | 0.1615
RATC-TD | 17.4527 | 0.8735 | 0.0096 | 0.0869
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
