Article

A Multidimensional Principal Component Analysis via the C-Product Golub–Kahan–SVD for Classification and Face Recognition †

by Mustapha Hached 1,‡, Khalide Jbilou 2,‡, Christos Koukouvinos 3,‡ and Marilena Mitrouli 4,*,‡
1 University of Lille, CNRS, UMR 8524—Laboratoire Paul Painlevé, F-59000 Lille, France
2 Laboratoire LMPA, 50 rue F. Buisson, ULCO, 62228 Calais, France
3 Department of Mathematics, National Technical University of Athens, Zografou, 15773 Athens, Greece
4 Department of Mathematics, National and Kapodistrian University of Athens, Panepistimiopolis, 15784 Athens, Greece
* Author to whom correspondence should be addressed.
† This paper is dedicated to Mr Constantin M. Petridi.
‡ These authors contributed equally to this work.
Mathematics 2021, 9(11), 1249; https://doi.org/10.3390/math9111249
Submission received: 11 May 2021 / Revised: 18 May 2021 / Accepted: 25 May 2021 / Published: 29 May 2021
(This article belongs to the Special Issue Numerical Linear Algebra and the Applications)

Abstract

Face recognition and identification are very important applications in machine learning. Due to the increasing amount of available data, traditional approaches based on matricization and matrix PCA methods can be difficult to implement. Moreover, tensorial approaches are a natural choice, due to the very structure of the databases, for example in the case of color images. Nevertheless, even though various authors have proposed factorization strategies for tensors, the size of the considered tensors can pose some serious issues. Indeed, the most demanding part of the computational effort in recognition or identification problems resides in the training process. When only a few features are needed to construct the projection space, there is no need to compute an SVD of the whole data. Two versions of the tensor Golub–Kahan algorithm are considered in this manuscript as an alternative to the classical use of the tensor SVD, which is based on truncation strategies. In this paper, we consider the Tensor Tubal Golub–Kahan Principal Component Analysis method, whose purpose is to extract the main features of images using the tensor singular value decomposition (SVD) based on the tensor cosine product, which uses the discrete cosine transform. This approach is applied to classification and face recognition, and numerical tests show its effectiveness.

1. Introduction

An important challenge of the last few years has been the extraction of the main information from large datasets, measurements and observations arising in signal and hyperspectral image processing, data mining and machine learning. Due to the increasing volume of data required by these applications, approximate low-rank matrix and tensor factorizations play a fundamental role in extracting latent components. The idea is to replace the initial, possibly noisy and ill-conditioned, large-scale data by a lower-dimensional approximate representation obtained via a matrix or multi-way array factorization or decomposition. Principal Component Analysis is a widely used technique for image recognition or identification. In the matrix case, it involves the computation of eigenvalue or singular value decompositions. In the tensor case, even though various factorization techniques have been developed over the last decades (high-order SVD (HOSVD), Candecomp–Parafac (CP) and Tucker decompositions), the recent tensor SVDs (t-SVD and c-SVD), based on the tensor t-product or c-product, offer a matrix-like framework for third-order tensors; see [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] for more details on recent work related to tensors and applications. In the present work, we consider third-order tensors, which can be defined as three-dimensional arrays of data. As our study is based on the cosine transform product, we limit this work to third-order tensors.
For a given 3-mode tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we denote by $x_{i_1,i_2,i_3}$ the $(i_1,i_2,i_3)$ element of the tensor $\mathcal{X}$. A fiber is defined by fixing all the indexes except one. An element $\mathbf{c} \in \mathbb{R}^{1\times 1\times n}$ is called a tubal scalar, or simply a tube, of length $n$. For more details, refer to [1,2].

2. Definitions and Notations

2.1. Discrete Cosine Transformation

In this subsection, we recall some definitions and properties of the discrete cosine transform and the c-product of tensors. During recent years, many advances were made in order to establish a rigorous framework enabling the treatment of problems for which the data are stored in three-way tensors without having to resort to matricization [1,8]. One of the most important features of such a framework is the definition of a tensor-tensor product, such as the t-product, based on the Fast Fourier Transform. For applications such as image processing, the tensor-tensor product based on the Discrete Cosine Transform (DCT) has proven to be an interesting alternative to the FFT. We now give some basic facts on the DCT and its associated tensor-tensor product. The DCT of a vector $v \in \mathbb{R}^n$ is defined by
$$\tilde{v} = C_n v \in \mathbb{R}^n,$$
where $C_n$ is the $n \times n$ discrete cosine transform matrix with entries
$$(C_n)_{ij} = \sqrt{\frac{2-\delta_{i1}}{n}}\, \cos\!\left(\frac{(i-1)(2j-1)\pi}{2n}\right), \qquad 1 \le i,j \le n,$$
where $\delta_{ij}$ is the Kronecker delta; see p. 150 in [16] for more details. It is known that the matrix $C_n$ is orthogonal, i.e., $C_n^T C_n = C_n C_n^T = I_n$; see [17]. Furthermore, for any vector $v \in \mathbb{R}^n$, the matrix-vector product $C_n v$ can be computed in $O(n\log n)$ operations. Moreover, the authors of [17] showed that a certain class of Toeplitz-plus-Hankel matrices can be diagonalized by $C_n$. More precisely, we have
$$C_n \,\operatorname{th}(v)\, C_n^{-1} = \operatorname{Diag}(\tilde{v}),$$
where
$$\operatorname{th}(v) = \underbrace{\begin{bmatrix} v_1 & v_2 & \cdots & v_n \\ v_2 & v_1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & v_2 \\ v_n & \cdots & v_2 & v_1 \end{bmatrix}}_{\text{Toeplitz}} + \underbrace{\begin{bmatrix} v_2 & \cdots & v_n & 0 \\ \vdots & \iddots & \iddots & v_n \\ v_n & \iddots & \iddots & \vdots \\ 0 & v_n & \cdots & v_2 \end{bmatrix}}_{\text{Hankel}}$$
and $\operatorname{Diag}(\tilde{v})$ is the diagonal matrix whose $i$-th diagonal entry is $(\tilde{v})_i$.

2.2. Definitions and Properties of the Cosine Product

In this subsection, we briefly review some concepts and notation that play a central role in the development of tensor global iterative methods based on the c-product; see [18] for more details on the c-product.
Let A R n 1 × n 2 × n 3 be a real valued third-order tensor, then the operations mat and its inverse ten are defined by
$$\operatorname{mat}(\mathcal{A}) = \underbrace{\begin{bmatrix} A_1 & A_2 & \cdots & A_{n_3} \\ A_2 & A_1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & A_2 \\ A_{n_3} & \cdots & A_2 & A_1 \end{bmatrix}}_{\text{Block Toeplitz}} + \underbrace{\begin{bmatrix} A_2 & \cdots & A_{n_3} & 0 \\ \vdots & \iddots & \iddots & A_{n_3} \\ A_{n_3} & \iddots & \iddots & \vdots \\ 0 & A_{n_3} & \cdots & A_2 \end{bmatrix}}_{\text{Block Hankel}} \in \mathbb{R}^{n_1 n_3 \times n_2 n_3},$$
where $A_1,\dots,A_{n_3}$ denote the frontal slices of $\mathcal{A}$,
and the inverse operation denoted by ten is simply defined by
ten ( mat ( A ) ) = A .
Let us denote by $\widetilde{\mathcal{A}}$ the tensor obtained by applying the DCT to all the tubes of the tensor $\mathcal{A}$. This operation and its inverse are implemented in MATLAB by the commands dct and idct as
$$\widetilde{\mathcal{A}} = \texttt{dct}(\mathcal{A},[\,],3), \quad \text{and} \quad \texttt{idct}(\widetilde{\mathcal{A}},[\,],3) = \mathcal{A},$$
where idct denotes the Inverse Discrete Cosine Transform.
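For readers who prefer a script, the mode-3 transform and its inverse (the analogues of the MATLAB calls above) can be written with SciPy's orthonormal DCT-II; this is only a sketch, and the helper names dct3 and idct3 are ours.

```python
import numpy as np
from scipy.fft import dct, idct

def dct3(A):
    # Apply the orthonormal DCT-II along the third mode (the tubes) of a 3-way
    # array, mirroring MATLAB's dct(A, [], 3).
    return dct(A, type=2, norm='ortho', axis=2)

def idct3(A_hat):
    # Inverse transform along the third mode, mirroring idct(A_hat, [], 3).
    return idct(A_hat, type=2, norm='ortho', axis=2)

A = np.random.rand(4, 3, 5)            # a small n1 x n2 x n3 tensor
assert np.allclose(idct3(dct3(A)), A)  # the transform is invertible
```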
Remark 1.
Notice that the tensor A ˜ can be computed by using the 3-mode product defined in [2] as follows:
A ˜ = A × 3 M
where M is the n 3 × n 3 invertible matrix given by
$$M = W^{-1} C_{n_3}(I + Z),$$
where $C_{n_3}$ denotes the $n_3 \times n_3$ Discrete Cosine Transform (DCT) matrix, $W = \operatorname{diag}(C_{n_3}(:,1))$ is the diagonal matrix made of the first column of the DCT matrix, $Z$ is the $n_3 \times n_3$ upshift matrix, which can be computed in MATLAB as Z = diag(ones(n3-1,1),1), and $I$ is the $n_3 \times n_3$ identity matrix; see [18] for more details.
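As a small illustration of Remark 1, the matrix $M$ can be assembled directly from its definition; this is a sketch only, and the function name is ours.

```python
import numpy as np
from scipy.fft import dct

def build_M(n3):
    # The orthonormal DCT matrix: transforming the columns of the identity gives C.
    C = dct(np.eye(n3), type=2, norm='ortho', axis=0)
    W = np.diag(C[:, 0])               # diagonal matrix of the first column of C
    Z = np.diag(np.ones(n3 - 1), k=1)  # upshift matrix
    return np.linalg.inv(W) @ C @ (np.eye(n3) + Z)
```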
Let $\bar{A}$ be the block diagonal matrix
$$\bar{A} = \begin{bmatrix} A^{(1)} & & & \\ & A^{(2)} & & \\ & & \ddots & \\ & & & A^{(n_3)} \end{bmatrix} \in \mathbb{R}^{n_3 n_1 \times n_3 n_2},$$
where the matrices $A^{(i)}$ are the frontal slices of the tensor $\widetilde{\mathcal{A}}$. The block matrix $\operatorname{mat}(\mathcal{A})$ can be block-diagonalized by using the DCT matrix as follows:
$$(C_{n_3} \otimes I_{n_1})\, \operatorname{mat}(\mathcal{A})\, (C_{n_3}^T \otimes I_{n_2}) = \bar{A}.$$
Definition 1.
The c-product of two tensors $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times m \times n_3}$ is the $n_1 \times m \times n_3$ tensor defined by:
A 🟉 c B = ten ( mat ( A ) mat ( B ) ) .
Notice that from Equation (3), we can show that the product $\mathcal{C} = \mathcal{A} \star_c \mathcal{B}$ is equivalent to $\bar{C} = \bar{A}\,\bar{B}$. Algorithm 1 allows us to compute the c-product of the tensors $\mathcal{A}$ and $\mathcal{B}$ in an efficient way; see [18].
Algorithm 1 Computing the c-product.
Inputs A R n 1 × n 2 × n 3 and B R n 2 × m × n 3
Output C = A 🟉 c B R n 1 × m × n 3
1.  Compute A ˜ = dct ( A , [ ] , 3 ) and B ˜ = dct ( B , [ ] , 3 ) .
2.  Compute each frontal slice of $\widetilde{\mathcal{C}}$ by
$$\widetilde{C}^{(i)} = \widetilde{A}^{(i)}\, \widetilde{B}^{(i)}, \quad i = 1,\dots,n_3.$$
3.  Compute C = idct ( C ˜ , [ ] , 3 ) .
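A direct transcription of Algorithm 1 might look as follows; this is a sketch that assumes the orthonormal DCT-II is the intended transform, and the function name cprod is ours.

```python
import numpy as np
from scipy.fft import dct, idct

def cprod(A, B):
    """c-product of A (n1 x n2 x n3) and B (n2 x m x n3), per Algorithm 1."""
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    B_hat = dct(B, type=2, norm='ortho', axis=2)
    # Multiply the frontal slices pairwise in the transform domain.
    C_hat = np.einsum('ijk,jlk->ilk', A_hat, B_hat)
    return idct(C_hat, type=2, norm='ortho', axis=2)
```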
Next, we give some definitions and remarks on the c-product and related topics.
Definition 2.
The identity tensor I n 1 n 1 n 3 is the tensor such that each frontal slice of I ˜ n 1 n 1 n 3 is the identity matrix I n 1 n 1 .
An n 1 × n 1 × n 3 tensor A is said to be invertible if there exists a tensor B of order n 1 × n 1 × n 3 such that
A 🟉 c B = I n 1 n 1 n 3 and B 🟉 c A = I n 1 n 1 n 3 .
In that case, we denote B = A 1 . It is clear that A is invertible if and only if mat ( A ) is invertible.
The scalar inner product is defined by
$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2}\sum_{i_3=1}^{n_3} a_{i_1 i_2 i_3}\, b_{i_1 i_2 i_3},$$
and its corresponding norm is given by $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$.
An n 1 × n 1 × n 3 tensor Q is said to be orthogonal if Q T 🟉 c Q = Q 🟉 c Q T = I n 1 n 1 n 3 .
Definition 3
([1]). A tensor is called f-diagonal if its frontal slices are diagonal matrices. It is called upper triangular if all its frontal slices are upper triangular.
Next we recall the Tensor Singular Value Decomposition of a tensor (Algorithm 2); more details can be found in [19].
Theorem 1.
Let A be an n 1 × n 2 × n 3 real-valued tensor. Then A can be factored as follows
A = U 🟉 c S 🟉 c V T ,
where U and V are orthogonal tensors of order ( n 1 , n 1 , n 3 ) and ( n 2 , n 2 , n 3 ) , respectively, and S is an f-diagonal tensor of order ( n 1 × n 2 × n 3 ) . This factorization is called Tensor Singular Value Decomposition (c-SVD) of the tensor A .
Algorithm 2 The Tensor SVD (c-SVD).
Input:   A R n 1 × n 2 × n 3 Output: U , V and S .
1.  Compute A ˜ = dct ( A , [ ] , 3 ) .
2.  Compute each frontal slice of $\widetilde{\mathcal{U}}$, $\widetilde{\mathcal{V}}$ and $\widetilde{\mathcal{S}}$ from $\widetilde{\mathcal{A}}$ as follows
     (a)    for $i = 1,\dots,n_3$
$$[\widetilde{U}^{(i)}, \widetilde{S}^{(i)}, \widetilde{V}^{(i)}] = \operatorname{svd}(\widetilde{A}^{(i)})$$
     (b)    End for
3.  Compute U = idct ( U ˜ , [ ] , 3 ) , S = idct ( S ˜ , [ ] , 3 ) and V = idct ( V ˜ , [ ] , 3 ) .
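Algorithm 2 translates almost line by line into the following sketch (orthonormal DCT-II assumed; the function name is ours):

```python
import numpy as np
from scipy.fft import dct, idct

def csvd(A):
    """Tensor SVD (c-SVD) of A (n1 x n2 x n3), per Algorithm 2."""
    n1, n2, n3 = A.shape
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    U_hat = np.zeros((n1, n1, n3))
    S_hat = np.zeros((n1, n2, n3))
    V_hat = np.zeros((n2, n2, n3))
    for i in range(n3):                      # SVD of each frontal slice
        U, s, Vt = np.linalg.svd(A_hat[:, :, i])
        U_hat[:, :, i] = U
        np.fill_diagonal(S_hat[:, :, i], s)  # f-diagonal slice
        V_hat[:, :, i] = Vt.T
    back = lambda T: idct(T, type=2, norm='ortho', axis=2)
    return back(U_hat), back(S_hat), back(V_hat)
```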
Remark 2.
As for the t-product [19], we can show that if A = U 🟉 c S 🟉 c V T is a c-SVD of the tensor A , then we have
$$\sum_{k=1}^{n_3} A_k = \left(\sum_{k=1}^{n_3} U_k\right)\left(\sum_{k=1}^{n_3} S_k\right)\left(\sum_{k=1}^{n_3} V_k^T\right),$$
where $A_k$, $U_k$, $S_k$ and $V_k$ are the frontal slices of the tensors $\mathcal{A}$, $\mathcal{U}$, $\mathcal{S}$ and $\mathcal{V}$, respectively, and
$$\mathcal{A} = \sum_{i=1}^{\min(n_1,n_2)} \mathcal{U}(:,i,:) \star_c \mathcal{S}(i,i,:) \star_c \mathcal{V}(:,i,:)^T.$$
Theorem 2.
Let $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ be given by (5), and define, for $k \le \min(n_1,n_2)$, the tensor
$$\mathcal{A}_k = \sum_{i=1}^{k} \mathcal{U}(:,i,:) \star_c \mathcal{S}(i,i,:) \star_c \mathcal{V}(:,i,:)^T.$$
Then
$$\mathcal{A}_k = \arg\min_{\mathcal{X} \in \mathbb{M}} \|\mathcal{A} - \mathcal{X}\|_F,$$
where $\mathbb{M} = \{ \mathcal{X} \star_c \mathcal{Y} \; ; \; \mathcal{X} \in \mathbb{R}^{n_1 \times k \times n_3}, \; \mathcal{Y} \in \mathbb{R}^{k \times n_2 \times n_3} \}$.
Note that when n 3 = 1 this theorem reduces to the well known Eckart–Young theorem for matrices [20].
Definition 4
(The tensor tubal-rank). Let $\mathcal{A}$ be an $n_1 \times n_2 \times n_3$ tensor and consider its c-SVD $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$. The tensor tubal rank of $\mathcal{A}$, denoted as $\operatorname{rank}_t(\mathcal{A})$, is defined to be the number of non-zero tubes of the f-diagonal tensor $\mathcal{S}$, i.e.,
$$\operatorname{rank}_t(\mathcal{A}) = \#\{ i : \mathcal{S}(i,i,:) \neq 0 \}.$$
Definition 5.
The multi-rank of the tensor $\mathcal{A}$ is a vector $p \in \mathbb{R}^{n_3}$ whose $i$-th element equals the rank of the $i$-th frontal slice of $\widetilde{\mathcal{A}} = \texttt{dct}(\mathcal{A},[\,],3)$, i.e.,
$$p(i) = \operatorname{rank}(\widetilde{A}^{(i)}), \quad i = 1,\dots,n_3.$$
The well known QR matrix decomposition can also be extended to the tensor case; see [19].
Theorem 3.
Let A be a real-valued tensor of order n 1 × n 2 × n 3 . Then A can be factored as follows
A = Q 🟉 c R ,
where $\mathcal{Q}$ is an $n_1 \times n_1 \times n_3$ orthogonal tensor and $\mathcal{R}$ is an $n_1 \times n_2 \times n_3$ f-upper triangular tensor.

3. Tensor Principal Component Analysis for Face Recognition

Principal Component Analysis (PCA) is a widely used technique in image classification and face recognition. Many approaches involve a conversion of color images to grayscale in order to reduce the training cost. Nevertheless, for some applications, color is an important feature, and tensor-based approaches offer the possibility of taking it into account. Moreover, especially in the case of facial recognition, this allows the treatment of enriched databases including, for instance, additional biometric information. However, one has to bear in mind that the computational cost is an important issue, as the volume of data can be very large. We first recall some background facts on the matrix-based approach.

3.1. The Matrix Case

One of the simplest and most effective PCA approaches used in face recognition systems is the so-called eigenface approach. This approach transforms faces into a small set of essential characteristics, the eigenfaces, which are the main components of the initial set of learning images (training set). Recognition is done by projecting a test image onto the eigenface subspace, after which the person is classified by comparing its position in the eigenface space with the positions of known individuals. The advantage of this approach over other face recognition strategies resides in its simplicity, speed and insensitivity to small or gradual changes of the face.
The process is defined as follows: Consider a set of training faces $I_1, I_2, \dots, I_p$, all of the same size $n \times m$. Each face $I_i$ is transformed into a vector $x_i$ using the vec operation: $x_i = \operatorname{vec}(I_i)$. These vectors form the columns of the $nm \times p$ matrix
X = [ x 1 , , x p ] .
We compute the average image $\mu = \frac{1}{p}\sum_{i=1}^{p} x_i$. Set $\bar{x}_i = x_i - \mu$ and consider the new matrices
X ¯ = [ x ¯ 1 , , x ¯ p ] , and C = X ¯ X ¯ T .
Notice that the n m × n m covariance matrix C = X ¯ X ¯ T can be very large. Therefore, the computation of the n m eigenvalues and the corresponding eigenvectors (eigenfaces) can be very difficult. To circumvent this issue, we instead consider the smaller p × p matrix L = X ¯ T X ¯ .
Let v i be an eigenvector of L then L v i = X ¯ T X ¯ v i = λ i v i and
X ¯ L v i = X ¯ X ¯ T X ¯ v i = λ i X ¯ v i ,
which shows that X ¯ v i is an eigenvector of the covariance matrix C = X ¯ X ¯ T .
The p eigenvectors of L = X ¯ T X ¯ are then used to find the p eigenvectors u i = X ¯ v i of C that form the eigenface space. We keep only k eigenvectors corresponding to the largest k eigenvalues (eigenfaces corresponding to small eigenvalues can be omitted, as they explain only a small part of characteristic features of the faces.)
The next step consists of projecting each image of the training sample onto the eigenface space spanned by the orthogonal vectors u 1 , , u k :
U k = s p a n { u 1 , , u k } , with U k = [ u 1 , , u k ]
The matrix $U_k U_k^T$ is an orthogonal projector onto the subspace $\mathcal{U}_k$. A face image can be projected onto this face space as $y_i = U_k^T (x_i - \mu)$.
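A minimal sketch of this construction (variable names are ours; no particular dataset is assumed) builds the eigenface space from the small $p \times p$ matrix $L$ instead of the large covariance matrix:

```python
import numpy as np

def eigenface_space(X, k):
    """X: nm x p matrix of vectorized training faces; returns (U_k, mean image)."""
    mu = X.mean(axis=1, keepdims=True)
    Xbar = X - mu                                  # centred training matrix
    L = Xbar.T @ Xbar                              # small p x p matrix
    evals, V = np.linalg.eigh(L)                   # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:k]              # keep the k largest
    U = Xbar @ V[:, idx]                           # eigenfaces u_i = Xbar v_i
    U /= np.linalg.norm(U, axis=0, keepdims=True)  # normalize the columns
    return U, mu

# Projection of a (vectorized) test image x onto the eigenface space:
#   y = U.T @ (x - mu.ravel())
```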
We now give the steps of an image classification process based on this approach:
Let $x = \operatorname{vec}(I)$ be a test vector-image and project it onto the face space to get $y = U_k^T(x - \mu)$. Notice that the reconstructed image is given by
$$x_r = U_k\, y + \mu.$$
Compute the Euclidean distances
$$\epsilon_i = \| y - y_i \|, \quad i = 1,\dots,p.$$
A face is classified as belonging to class $l$ when the minimal distance $\epsilon_l$ is below some chosen threshold $\theta$. Set
$$\theta = \frac{1}{2} \max_{i,j} \| y_i - y_j \|, \quad i,j = 1,\dots,p,$$
and let $\epsilon$ be the distance between the original test image $x$ and its reconstructed image $x_r$: $\epsilon = \| x - x_r \|$. Then
  • If $\epsilon \ge \theta$, then the input image is not even a face image and is not recognized.
  • If $\epsilon < \theta$ and $\epsilon_i \ge \theta$ for all $i$, then the input image is a face image, but it is an unknown face.
  • If $\epsilon < \theta$ and $\min_i \epsilon_i = \epsilon_l < \theta$, then the input image is recognized as the face associated with the class vector $x_l$.
We now give some basic facts on the relation between the singular value decomposition (SVD) and PCA in this context:
Consider the Singular Value Decomposition of the matrix $\bar{X}$,
$$\bar{X} = U \Sigma V^T = \sum_{i=1}^{p} \sigma_i u_i v_i^T,$$
where $U$ and $V$ are orthogonal matrices of sizes $nm \times nm$ and $p \times p$, respectively. The singular values $\sigma_i$ are the square roots of the eigenvalues of the matrix $L = \bar{X}^T \bar{X}$, the $u_i$'s are the left singular vectors and the $v_i$'s are the right singular vectors. We have
$$L = \bar{X}^T \bar{X} = V \Delta V^T, \quad \Delta = \operatorname{diag}(\sigma_1^2,\dots,\sigma_p^2),$$
which is the eigendecomposition of the matrix $L$, and
$$C = \bar{X} \bar{X}^T = U D U^T, \quad D = \operatorname{diag}(\sigma_1^2,\dots,\sigma_p^2, 0,\dots,0).$$
In the PCA method, the projected eigenface space is then generated by the first $k$ columns $u_1,\dots,u_k$ of the orthogonal matrix $U$ derived from the SVD of the matrix $\bar{X}$.
As only a small number $k$ of the largest singular values are needed in PCA, we can use the well-known Golub–Kahan algorithm to compute these singular values and the corresponding singular vectors defining the projected subspace.
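In practice, when only $k \ll p$ singular triplets are needed, one can call an iterative sparse SVD solver rather than a full SVD; the sketch below uses scipy.sparse.linalg.svds (the choice of solver is ours, and Xbar is the centred training matrix from the previous sketch):

```python
import numpy as np
from scipy.sparse.linalg import svds

def truncated_eigenfaces(Xbar, k):
    """Return the k dominant left singular vectors and singular values of Xbar."""
    U, s, Vt = svds(Xbar, k=k)       # iterative solver: only k singular triplets
    order = np.argsort(s)[::-1]      # svds does not return them in descending order
    return U[:, order], s[order]
```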
In the next section, we explain how the SVD based PCA can be extended to tensors and propose an algorithm for facial recognition in this context.

4. The Tensor Golub–Kahan Method

As explained in the previous section, it is important to take into account the potentially large size of datasets, especially in the training process. The idea of extending the matrix Golub–Kahan bidiagonalization algorithm to the tensor setting has been explored in recent years for large and sparse tensors [21]. In [1], the authors established the foundations of a remarkable theoretical framework for tensor decompositions in association with the tensor-tensor t- and c-products, allowing the main notions of linear algebra to be generalized to tensors.

4.1. The Tensor C-Global Golub–Kahan Algorithm

Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a tensor and $s \ge 1$ an integer. The Tensor c-global Golub–Kahan bidiagonalization algorithm (associated with the c-product) is described in Algorithm 3.
Algorithm 3 The Tensor Global Golub–Kahan algorithm (TGGKA).
1.  Choose a tensor $\mathcal{V}_1 \in \mathbb{R}^{n_2 \times s \times n_3}$ such that $\|\mathcal{V}_1\|_F = 1$ and set $\beta_0 = 0$.
2.  For $i = 1, 2, \dots, k$
     (a)   $\mathcal{U}_i = \mathcal{A} \star_c \mathcal{V}_i - \beta_{i-1}\, \mathcal{U}_{i-1}$,
     (b)   $\alpha_i = \|\mathcal{U}_i\|_F$,
     (c)   $\mathcal{U}_i = \mathcal{U}_i / \alpha_i$,
     (d)   $\mathcal{V}_{i+1} = \mathcal{A}^T \star_c \mathcal{U}_i - \alpha_i\, \mathcal{V}_i$,
     (e)   $\beta_i = \|\mathcal{V}_{i+1}\|_F$,
     (f)   $\mathcal{V}_{i+1} = \mathcal{V}_{i+1} / \beta_i$.
     End
Let C k be the k × k upper bidiagonal matrix defined by
$$C_k = \begin{bmatrix} \alpha_1 & \beta_1 & & & \\ & \alpha_2 & \beta_2 & & \\ & & \ddots & \ddots & \\ & & & \alpha_{k-1} & \beta_{k-1} \\ & & & & \alpha_k \end{bmatrix}.$$
Let $\mathbb{V}_k$ and $\mathcal{A} \star_c \mathbb{V}_k$ be the $(n_2 \times sk \times n_3)$ and $(n_1 \times sk \times n_3)$ tensors with frontal slices $\mathcal{V}_1,\dots,\mathcal{V}_k$ and $\mathcal{A}\star_c\mathcal{V}_1,\dots,\mathcal{A}\star_c\mathcal{V}_k$, respectively, and let $\mathbb{U}_k$ and $\mathcal{A}^T \star_c \mathbb{U}_k$ be the $(n_1 \times sk \times n_3)$ and $(n_2 \times sk \times n_3)$ tensors with frontal slices $\mathcal{U}_1,\dots,\mathcal{U}_k$ and $\mathcal{A}^T\star_c\mathcal{U}_1,\dots,\mathcal{A}^T\star_c\mathcal{U}_k$, respectively. We set
$$\mathbb{V}_k := [\mathcal{V}_1,\dots,\mathcal{V}_k], \quad \text{and} \quad \mathcal{A}\star_c\mathbb{V}_k := [\mathcal{A}\star_c\mathcal{V}_1,\dots,\mathcal{A}\star_c\mathcal{V}_k],$$
$$\mathbb{U}_k := [\mathcal{U}_1,\dots,\mathcal{U}_k], \quad \text{and} \quad \mathcal{A}^T\star_c\mathbb{U}_k := [\mathcal{A}^T\star_c\mathcal{U}_1,\dots,\mathcal{A}^T\star_c\mathcal{U}_k],$$
with
$$\tilde{C}_k^T = \begin{bmatrix} C_k^T \\ \beta_k e_k^T \end{bmatrix} \in \mathbb{R}^{(k+1) \times k}, \qquad e_k = (0, 0, \dots, 0, 1)^T.$$
Then, we have the following results [13].
Proposition 1.
The tensors produced by the tensor c-global Golub–Kahan algorithm satisfy the following relations
$$\mathcal{A} \star_c \mathbb{V}_k = \mathbb{U}_k \circledast C_k,$$
$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \circledast \tilde{C}_k^T = \mathbb{V}_k \circledast C_k^T + \beta_k \left[ \mathcal{O}_{n_2 \times s \times n_3}, \dots, \mathcal{O}_{n_2 \times s \times n_3}, \mathcal{V}_{k+1} \right],$$
where the product $\circledast$ is defined by
$$\mathbb{U}_k \circledast y = \sum_{j=1}^{k} y_j\, \mathcal{U}_j, \qquad y = (y_1,\dots,y_k)^T \in \mathbb{R}^k.$$
We set the following notation:
$$\mathbb{U}_k \circledast C_k = \left[ \mathbb{U}_k \circledast C_k^{(1)}, \dots, \mathbb{U}_k \circledast C_k^{(k)} \right],$$
where $C_k^{(i)}$ is the $i$-th column of the matrix $C_k$.
We note that, since the matrix $C_k$ is bidiagonal, $T_k = C_k^T C_k$ is symmetric and tridiagonal, and Algorithm 3 then computes the same information as the tensor global Lanczos algorithm applied to the symmetric tensor $\mathcal{A}^T \star_c \mathcal{A}$.
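The sketch below transcribes Algorithm 3, reusing the cprod helper from Section 2.2; it assumes that the tensor transpose for the c-product is the facewise transpose, and all names are ours.

```python
import numpy as np
from scipy.fft import dct, idct

def cprod(A, B):
    # c-product (Algorithm 1): frontal-slice products in the DCT domain.
    Ah = dct(A, type=2, norm='ortho', axis=2)
    Bh = dct(B, type=2, norm='ortho', axis=2)
    return idct(np.einsum('ijk,jlk->ilk', Ah, Bh), type=2, norm='ortho', axis=2)

def ttrans(A):
    # Tensor transpose taken as the facewise transpose (assumption for the c-product).
    return A.transpose(1, 0, 2)

def tggka(A, s, k):
    """Tensor Global Golub-Kahan bidiagonalization (Algorithm 3).
    Returns the blocks U_i, V_i and the bidiagonal coefficients alpha_i, beta_i."""
    n1, n2, n3 = A.shape
    V = np.random.rand(n2, s, n3)
    V /= np.linalg.norm(V)                    # ||V_1||_F = 1
    Us, Vs, alphas, betas = [], [V], [], []
    U_prev, beta_prev = np.zeros((n1, s, n3)), 0.0
    for _ in range(k):
        U = cprod(A, Vs[-1]) - beta_prev * U_prev          # step (a)
        alpha = np.linalg.norm(U)                          # step (b)
        U /= alpha                                         # step (c)
        V_next = cprod(ttrans(A), U) - alpha * Vs[-1]      # step (d)
        beta_prev = np.linalg.norm(V_next)                 # step (e)
        V_next /= beta_prev                                # step (f)
        Us.append(U); Vs.append(V_next)
        alphas.append(alpha); betas.append(beta_prev)
        U_prev = U
    return Us, Vs, alphas, betas
```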

4.2. Tensor Tubal Golub–Kahan Bidiagonalisation Algorithm

First, we introduce some new products that will be useful in this section.
Definition 6
([13]). Let $\mathbf{a} \in \mathbb{R}^{1\times 1\times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_1\times n_2\times n_3}$; the tube fiber tensor product $\mathbf{a} \circledast \mathcal{B}$ is the $(n_1 \times n_2 \times n_3)$ tensor defined by
$$\mathbf{a} \circledast \mathcal{B} = \begin{bmatrix} \mathbf{a} \star_c \mathcal{B}(1,1,:) & \cdots & \mathbf{a} \star_c \mathcal{B}(1,n_2,:) \\ \vdots & & \vdots \\ \mathbf{a} \star_c \mathcal{B}(n_1,1,:) & \cdots & \mathbf{a} \star_c \mathcal{B}(n_1,n_2,:) \end{bmatrix}.$$
Definition 7
([13]). Let $\mathcal{A} \in \mathbb{R}^{n_1\times m_1\times n_3}$, $\mathcal{B} \in \mathbb{R}^{n_1\times m_2\times n_3}$, $\mathcal{C} \in \mathbb{R}^{n_2\times m_1\times n_3}$ and $\mathcal{D} \in \mathbb{R}^{n_2\times m_2\times n_3}$ be tensors. The block tensor
$$\begin{bmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{bmatrix} \in \mathbb{R}^{(n_1+n_2)\times(m_1+m_2)\times n_3}$$
is defined by assembling the corresponding frontal slices of the four tensors block-wise.
Definition 8.
Let $\mathcal{A} = [\mathcal{A}_1,\dots,\mathcal{A}_{n_2}] \in \mathbb{R}^{n_1\times n_2\times n_3}$, where $\mathcal{A}_i \in \mathbb{R}^{n_1\times 1\times n_3}$. We denote by TVect($\mathcal{A}$) the tensor vectorization operator $\mathbb{R}^{n_1\times n_2\times n_3} \to \mathbb{R}^{n_1 n_2\times 1\times n_3}$ obtained by stacking the lateral slices $\mathcal{A}_i$ of $\mathcal{A}$, for $i = 1,\dots,n_2$. In other words,
$$\operatorname{TVect}(\mathcal{A}) = \begin{bmatrix} \mathcal{A}_1 \\ \mathcal{A}_2 \\ \vdots \\ \mathcal{A}_{n_2} \end{bmatrix} \in \mathbb{R}^{n_1 n_2 \times 1 \times n_3}.$$
Remark 3.
The TVect operator transforms a given tensor into a single lateral slice. It is easy to see that, when $n_3 = 1$, the TVect operator coincides with the operation vec, which transforms a matrix into a vector.
Proposition 2.
Let $\mathcal{A}$ be a tensor in $\mathbb{R}^{n_1\times n_2\times n_3}$; then
$$\|\mathcal{A}\|_F = \|\operatorname{TVect}(\mathcal{A})\|_F.$$
Definition 9.
Let $\mathcal{A} = [\mathcal{A}_1,\dots,\mathcal{A}_{n_2}] \in \mathbb{R}^{n_1\times n_2\times n_3}$, where $\mathcal{A}_i \in \mathbb{R}^{n_1\times 1\times n_3}$. We define the range space of $\mathcal{A}$, denoted by $\operatorname{Range}(\mathcal{A})$, as the c-linear span of the lateral slices of $\mathcal{A}$:
$$\operatorname{Range}(\mathcal{A}) = \left\{ \mathcal{A}_1 \star_c \mathbf{a}_1 + \dots + \mathcal{A}_{n_2} \star_c \mathbf{a}_{n_2} \;\middle|\; \mathbf{a}_i \in \mathbb{R}^{1\times 1\times n_3} \right\}.$$
Definition 10
([14]). Let $\mathcal{A} \in \mathbb{R}^{n_1\times n_2\times n_3}$ and $\mathcal{B} \in \mathbb{R}^{m_1\times m_2\times n_3}$; the c-Kronecker product $\mathcal{A} \otimes \mathcal{B}$ of $\mathcal{A}$ and $\mathcal{B}$ is the $n_1 m_1 \times n_2 m_2 \times n_3$ tensor whose transformed tensor $\widetilde{(\mathcal{A}\otimes\mathcal{B})}$ has $i$-th frontal slice given by
$$\widetilde{(\mathcal{A}\otimes\mathcal{B})}^{(i)} = \widetilde{A}^{(i)} \otimes \widetilde{B}^{(i)}, \quad i = 1,\dots,n_3,$$
where $\widetilde{A}^{(i)}$ and $\widetilde{B}^{(i)}$ are the $i$-th frontal slices of the tensors $\widetilde{\mathcal{A}} = \texttt{dct}(\mathcal{A},[\,],3)$ and $\widetilde{\mathcal{B}} = \texttt{dct}(\mathcal{B},[\,],3)$, respectively.
We now introduce a normalization algorithm allowing us to decompose a non-zero tensor $\mathcal{C} \in \mathbb{R}^{n_1\times n_2\times n_3}$ as
$$\mathcal{C} = \mathbf{a} \circledast \mathcal{Q}, \quad \text{with } \langle \mathcal{Q}, \mathcal{Q} \rangle = \mathbf{e},$$
where $\mathbf{a} \in \mathbb{R}^{1\times 1\times n_3}$ is an invertible tube fiber, $\mathcal{Q} \in \mathbb{R}^{n_1\times n_2\times n_3}$, and $\mathbf{e} \in \mathbb{R}^{1\times 1\times n_3}$ is the tube fiber defined by $\operatorname{unfold}(\mathbf{e}) = (1, 0, \dots, 0)^T$.
This procedure is described in Algorithm 4.
Algorithm 4 Normalization algorithm (Normalize).
1.  Input. A R n 1 × n 2 × n 3 and a tolerance t o l > 0 .
2.  Output. The tensor Q and the tube fiber a .
3.  Set Q ˜ = dct ( A , [ ] , 3 )
     (a)  For j = 1 , , n 3
         i   a j = | | Q ˜ ( j ) | | F
         ii  if $a_j > tol$, $\widetilde{Q}^{(j)} = \widetilde{Q}^{(j)} / a_j$
         iii  else $\widetilde{Q}^{(j)} = \operatorname{rand}(n_1, n_2)$; $a_j = \|\widetilde{Q}^{(j)}\|_F$;
            $\widetilde{Q}^{(j)} = \widetilde{Q}^{(j)} / a_j$; $a_j = 0$,
     (b)  End
4.   Q = idct ( Q ˜ , [ ] , 3 ) , a = idct ( a , [ ] , 3 )
5.  End
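A sketch of Algorithm 4 follows (the function name is ours; the random replacement in the degenerate branch mirrors step iii):

```python
import numpy as np
from scipy.fft import dct, idct

def normalize(A, tol=1e-12):
    """Normalization algorithm (Algorithm 4): A = a * Q with unit-norm slices in the DCT domain."""
    n1, n2, n3 = A.shape
    Q_hat = dct(A, type=2, norm='ortho', axis=2)
    a_hat = np.zeros(n3)
    for j in range(n3):
        nrm = np.linalg.norm(Q_hat[:, :, j])
        if nrm > tol:
            Q_hat[:, :, j] /= nrm
            a_hat[j] = nrm
        else:                                   # degenerate slice: replace by a random one
            Q_hat[:, :, j] = np.random.rand(n1, n2)
            Q_hat[:, :, j] /= np.linalg.norm(Q_hat[:, :, j])
            a_hat[j] = 0.0
    Q = idct(Q_hat, type=2, norm='ortho', axis=2)
    a = idct(a_hat.reshape(1, 1, n3), type=2, norm='ortho', axis=2)
    return Q, a
```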
Next, we give the Tensor Tube Global Golub–Kahan (TTGGKA) algorithm; see [13]. Let $\mathcal{A} \in \mathbb{R}^{n_1\times n_2\times n_3}$ be a tensor and let $s \ge 1$ be an integer. The Tensor Tube Global Golub–Kahan bidiagonalization process is described in Algorithm 5.
Algorithm 5 The Tensor Tube Global Golub–Kahan algorithm (TTGGKA).
1.  Choose a tensor $\mathcal{V}_1 \in \mathbb{R}^{n_2\times s\times n_3}$ such that $\langle \mathcal{V}_1, \mathcal{V}_1 \rangle = \mathbf{e}$ and set $\mathbf{b}_0 = 0$.
2.  For $i = 1, 2, \dots, k$
   (a)   $\mathcal{U}_i = \mathcal{A} \star_c \mathcal{V}_i - \mathbf{b}_{i-1} \circledast \mathcal{U}_{i-1}$,
   (b)   $[\mathcal{U}_i, \mathbf{a}_i] = \operatorname{Normalize}(\mathcal{U}_i)$,
   (c)   $\mathcal{V}_{i+1} = \mathcal{A}^T \star_c \mathcal{U}_i - \mathbf{a}_i \circledast \mathcal{V}_i$,
   (d)   $[\mathcal{V}_{i+1}, \mathbf{b}_i] = \operatorname{Normalize}(\mathcal{V}_{i+1})$.
   End
Let $\mathcal{C}_k$ be the $k \times k \times n_3$ upper bidiagonal tensor (each frontal slice of $\mathcal{C}_k$ is a bidiagonal matrix) and $\tilde{\mathcal{C}}_k$ the $k \times (k+1) \times n_3$ tensor defined by
$$\mathcal{C}_k = \begin{bmatrix} \mathbf{a}_1 & \mathbf{b}_1 & & \\ & \mathbf{a}_2 & \ddots & \\ & & \ddots & \mathbf{b}_{k-1} \\ & & & \mathbf{a}_k \end{bmatrix}, \quad \text{and} \quad \tilde{\mathcal{C}}_k = \begin{bmatrix} \mathbf{a}_1 & \mathbf{b}_1 & & & \\ & \mathbf{a}_2 & \ddots & & \\ & & \ddots & \mathbf{b}_{k-1} & \\ & & & \mathbf{a}_k & \mathbf{b}_k \end{bmatrix}.$$
Let V k and A 🟉 c V k be the ( n 2 × ( s k ) × n 3 ) and ( n 1 × ( s k ) × n 3 ) tensors with frontal slices V 1 , , V k and A 🟉 c V 1 , , A 🟉 c V k , respectively, and let U k and A T 🟉 c U k be the ( n 1 × ( s k ) × n 3 ) and ( n 2 × ( s k ) × n 3 ) tensors with frontal slices U 1 , , U k and A T 🟉 c U 1 , , A T 🟉 c U k , respectively. We set
$$\mathbb{V}_k := [\mathcal{V}_1,\dots,\mathcal{V}_k], \quad \text{and} \quad \mathcal{A}\star_c\mathbb{V}_k := [\mathcal{A}\star_c\mathcal{V}_1,\dots,\mathcal{A}\star_c\mathcal{V}_k],$$
$$\mathbb{U}_k := [\mathcal{U}_1,\dots,\mathcal{U}_k], \quad \text{and} \quad \mathcal{A}^T\star_c\mathbb{U}_k := [\mathcal{A}^T\star_c\mathcal{U}_1,\dots,\mathcal{A}^T\star_c\mathcal{U}_k].$$
Then, we have the following results.
Proposition 3.
The tensors produced by the tensor TTGGKA algorithm satisfy the following relations
$$\mathcal{A} \star_c \mathbb{V}_k = \mathbb{U}_k \star_c (\mathcal{C}_k \otimes \mathcal{I}_{ssn_3}),$$
$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \star_c (\tilde{\mathcal{C}}_k^T \otimes \mathcal{I}_{ssn_3}) = \mathbb{V}_k \star_c (\mathcal{C}_k^T \otimes \mathcal{I}_{ssn_3}) + \mathbb{V}_{k+1} \star_c \big( (\mathbf{b}_k \star_c \mathbf{e}_{1,k,:}) \otimes \mathcal{I}_{ssn_3} \big),$$
where $\mathbf{e}_{1,k,:} \in \mathbb{R}^{1\times k\times n_3}$ has 1 in the $(1,k,1)$ position and zeros elsewhere, $\mathcal{I}_{ssn_3} \in \mathbb{R}^{s\times s\times n_3}$ is the identity tensor and $\mathbf{b}_k$ is the tube fiber in the $(k, k+1, :)$ position of the tensor $\tilde{\mathcal{C}}_k$.

5. The Tensor Tubal PCA Method

In this section, we describe a tensor-SVD based PCA method for order-3 tensors, which naturally arise in problems involving images such as facial recognition. As in the matrix case, we consider a set of $N$ training images, each one encoded as an $n_1 \times n_2 \times n_3$ real tensor $\mathcal{I}_i$, $1 \le i \le N$. In the case of RGB images, each frontal slice contains the encoding of one color layer ($n_3 = 3$), but, in order to store additional features, the case $n_3 > 3$ could be contemplated.
Let us consider one training image $\mathcal{I}_{i_0}$. Each of the $n_3$ frontal slices $\mathcal{I}_{i_0}^{(j)}$ of $\mathcal{I}_{i_0}$ is reshaped into a column vector $\operatorname{vec}(\mathcal{I}_{i_0}^{(j)})$ of length $L = n_1 \times n_2$, and we form an $L \times 1 \times n_3$ tensor $\mathcal{X}_{i_0}$ defined by $\mathcal{X}_{i_0}(:,:,j) = \operatorname{vec}(\mathcal{I}_{i_0}^{(j)})$. Applying this procedure to each training image, we obtain $N$ tensors $\mathcal{X}_i$ of size $L \times 1 \times n_3$. The average image tensor is defined as $\bar{\mathcal{X}} = \frac{1}{N}\sum_{i=1}^{N} \mathcal{X}_i$, and we define the $L \times N \times n_3$ training tensor $\mathcal{X} = [\bar{\mathcal{X}}_1,\dots,\bar{\mathcal{X}}_N]$, where $\bar{\mathcal{X}}_i = \mathcal{X}_i - \bar{\mathcal{X}}$.
Let us now consider the c-SVD decomposition $\mathcal{X} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ of $\mathcal{X}$, where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $L \times L \times n_3$ and $N \times N \times n_3$, respectively, and $\mathcal{S}$ is an f-diagonal tensor of size $L \times N \times n_3$.
In the matrix context, it is known that just a few singular values suffice to capture the main features of an image; therefore, applying this idea to each one of the three color layers, an RGB image can be approximated by a low tubal-rank tensor. Let us consider an image tensor $\mathcal{T} \in \mathbb{R}^{n_1\times n_2\times n_3}$ and its c-SVD decomposition $\mathcal{T} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$. Choosing an integer $r$ such that $r \le \min(n_1, n_2)$, we can approximate $\mathcal{T}$ by the tubal-rank-$r$ tensor
$$\mathcal{T}_r = \sum_{i=1}^{r} \mathcal{U}(:,i,:) \star_c \mathcal{S}(i,i,:) \star_c \mathcal{V}(:,i,:)^T.$$
In Figure 1, we show a 512 × 512 RGB image and the images obtained for various truncation indices. On the left, we plot the singular values of one color layer of the RGB tensor (the exact same behaviour is observed for the two other layers). The rapid decrease of the singular values explains the good quality of the compressed images, even for small truncation indices.
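The compression experiment of Figure 1 can be reproduced in spirit with the following sketch, which forms the tubal-rank-r approximation from slice-wise truncated SVDs in the DCT domain (all names, and the example image shape, are ours):

```python
import numpy as np
from scipy.fft import dct, idct

def tubal_rank_r_approx(A, r):
    """Tubal-rank-r approximation of a 3-way array A: truncate the SVD of each
    frontal slice in the DCT domain, then transform back."""
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    B_hat = np.zeros_like(A_hat)
    for i in range(A.shape[2]):
        U, s, Vt = np.linalg.svd(A_hat[:, :, i], full_matrices=False)
        B_hat[:, :, i] = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return idct(B_hat, type=2, norm='ortho', axis=2)

# For an RGB image stored as a (512, 512, 3) float array `img`:
#   compressed = tubal_rank_r_approx(img, r=30)
```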
Applying this idea to our problem, we want to be able to obtain truncated tensor SVDs of the training tensor $\mathcal{X}$ without needing to compute the whole c-SVD. After $k$ iterations of the TTGGKA algorithm (for the case $s = 1$), we obtain three tensors $\mathbb{U}_k \in \mathbb{R}^{n_1\times k\times n_3}$, $\mathbb{V}_{k+1} \in \mathbb{R}^{n_2\times(k+1)\times n_3}$ and $\tilde{\mathcal{C}}_k \in \mathbb{R}^{k\times(k+1)\times n_3}$, as defined in Equation (21), such that
$$\mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_{k+1} \star_c \tilde{\mathcal{C}}_k^T.$$
Let $\tilde{\mathcal{C}}_k = \Phi \star_c \Sigma \star_c \Psi^T$ be the c-SVD of $\tilde{\mathcal{C}}_k$, noticing that $\tilde{\mathcal{C}}_k \in \mathbb{R}^{k\times(k+1)\times n_3}$ is much smaller than $\bar{\mathcal{X}}$. Then the first tubal singular values and the left tubal singular tensors of $\bar{\mathcal{X}}$ are given by $\Sigma(i,i,:)$ and $\mathbb{U}_k \star_c \Phi(:,i,:)$, respectively, for $i \le k$; see [1] for more details.
In order to illustrate the ability of the TTGGKA algorithm to approximate the first singular elements of a tensor, we considered a 900 × 900 × 3 real tensor $\mathcal{A}$ whose frontal slices are matrices generated by a finite difference discretization of differential operators. In Figure 2, we display the error on the first diagonal coefficient of the first frontal slice, $\mathcal{S}(1,1,1)$, as a function of the number of iterations of the Tensor Tube Golub–Kahan algorithm, where $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ is the c-SVD of $\mathcal{A}$.
In Table 1, we report the errors, in the tensor Frobenius norm, on the singular tubes as a function of the number of iterations $k$ of the Tensor Tube Golub–Kahan algorithm.
The same behaviour was observed for all the other frontal slices. This example illustrates the ability of the TTGGKA algorithm to approximate the largest singular tubes.
The projection space is generated by the lateral slices of the tensor $\mathcal{P} = \mathbb{U}_k \star_c \Phi(:, 1{:}k, :) \in \mathbb{R}^{n_1\times k\times n_3}$ derived from the TTGGKA algorithm and the c-SVD decomposition of the bidiagonal tensor $\tilde{\mathcal{C}}_k$, i.e., the c-linear span of the first $k$ lateral slices of $\mathcal{P}$; see [1,19] for more details.
The steps of the Tensor Tubal PCA algorithm for face recognition which finds the closest image in the training database for a given image I 0 are summarized in Algorithm 6:
Algorithm 6 The Tensor Tubal PCA algorithm (TTPCA).
1.  Inputs: Training image tensor $\mathcal{X}$ ($N$ images), mean image tensor $\bar{\mathcal{X}}$, test image $\mathcal{I}_0$, truncation index $r$, $k$ = number of iterations of the TTGGKA algorithm ($k \ge r$).
2.  Output: Closest image in the training database.
3.  Run $k$ iterations of the TTGGKA algorithm to obtain the tensors $\mathbb{U}_k$ and $\tilde{\mathcal{C}}_k$.
4.  Compute $[\Phi, \Sigma, \Psi] = \text{c-SVD}(\tilde{\mathcal{C}}_k)$.
5.  Compute the projection tensor $\mathcal{P}_r = [\mathcal{P}_r(:,1,:),\dots,\mathcal{P}_r(:,r,:)]$, where $\mathcal{P}_r(:,i,:) = \mathbb{U}_k \star_c \Phi(:,i,:) \in \mathbb{R}^{n_1\times 1\times n_3}$.
6.  Compute the projected training tensor $\hat{\mathcal{X}}_r = \mathcal{P}_r^T \star_c \mathcal{X}$ and the projected centred test image $\hat{\mathcal{I}}_r = \mathcal{P}_r^T \star_c (\mathcal{I}_0 - \bar{\mathcal{X}})$.
7.  Find $i^* = \arg\min_{i=1,\dots,N} \| \hat{\mathcal{I}}_r - \hat{\mathcal{X}}_r(:,i,:) \|_F$.
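A compact sketch of the projection and matching steps of Algorithm 6 follows; for brevity it builds the projection tensor from slice-wise SVDs of the centred training tensor in the DCT domain, standing in for the TTGGKA factors $\mathbb{U}_k \star_c \Phi$ (this substitution, and all names, are assumptions of the sketch, not the paper's algorithm):

```python
import numpy as np
from scipy.fft import dct, idct

def cprod(A, B):
    # c-product (Algorithm 1): frontal-slice products in the DCT domain.
    Ah = dct(A, type=2, norm='ortho', axis=2)
    Bh = dct(B, type=2, norm='ortho', axis=2)
    return idct(np.einsum('ijk,jlk->ilk', Ah, Bh), type=2, norm='ortho', axis=2)

def ttpca_classify(X, I0, r):
    """X: L x N x n3 training tensor, I0: L x 1 x n3 test image, r: truncation index.
    Returns the index of the closest training image."""
    Xmean = X.mean(axis=1, keepdims=True)
    Xbar = X - Xmean
    # Leading left singular slices in the DCT domain (stand-in for U_k *_c Phi(:,1:r,:)).
    Xh = dct(Xbar, type=2, norm='ortho', axis=2)
    Ph = np.zeros((X.shape[0], r, X.shape[2]))
    for i in range(X.shape[2]):
        U, _, _ = np.linalg.svd(Xh[:, :, i], full_matrices=False)
        Ph[:, :, i] = U[:, :r]
    P = idct(Ph, type=2, norm='ortho', axis=2)   # projection tensor P_r
    Pt = P.transpose(1, 0, 2)                    # facewise transpose
    X_proj = cprod(Pt, Xbar)                     # projected training tensor
    t_proj = cprod(Pt, I0 - Xmean)               # projected centred test image
    dists = [np.linalg.norm(t_proj - X_proj[:, i:i+1, :]) for i in range(X.shape[1])]
    return int(np.argmin(dists))
```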
In the next section, we consider image identification problems on various databases.

6. Numerical Tests

In this section, we consider three examples of image identification. In the case of grayscale images, the global version of the Golub–Kahan algorithm was used to compute the dominant singular values in order to perform a PCA on the data. For the two other situations, we used the Tensor Tubal PCA (TTPCA) method based on the Tensor Tube Global Golub–Kahan (TTGGKA) algorithm in order to perform facial recognition on RGB images. The tests were performed with Matlab 2019a, on an Intel i5 laptop with 16 GB of memory. We considered various truncation indices r for which the recognition rates were computed. We also report the CPU time for the training process.

6.1. Example 1

In this example, we considered the MNIST database of handwritten digits [22]. The database contains two subsets of 28 × 28 grayscale images (60,000 training images and 10,000 test images). A sample is shown in Figure 3. Each image was vectorized as a vector of length 28 × 28 = 784 and, following the process described in Section 3.1, we formed the training and the test matrices of sizes 784 × 60 , 000 and 784 × 10 , 000 , respectively.
Both matrices were centred by subtracting the mean training image, and the Golub–Kahan algorithm was used to generate an approximation of the r dominant singular values $s_i$ and left singular vectors $u_i$, $i = 1,\dots,r$.
Let us denote by $\mathcal{U}_r$ the subspace spanned by the columns of $U_r = [u_1,\dots,u_r]$. Let $t$ be a test image and $\hat{t}_r = U_r^T t$ its projection onto $\mathcal{U}_r$. The closest image in the training dataset is determined by computing
$$i^* = \arg\min_{i=1,\dots,60{,}000} \| \hat{t}_r - \hat{X}_r(:,i) \|,$$
where $\hat{X}_r = U_r^T X$.
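In code, this matching step can be sketched as follows (U_r is assumed to come from a truncated SVD as in Section 3.1; X is the training matrix and t a test image, both as NumPy arrays):

```python
import numpy as np

def classify(U_r, X, t):
    """Return the index of the training column closest to the projected test image."""
    X_hat = U_r.T @ X                 # projected training matrix, r x 60000
    t_hat = U_r.T @ t                 # projected test image, length r
    dists = np.linalg.norm(X_hat - t_hat[:, None], axis=0)
    return int(np.argmin(dists))
```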
For various truncation indices r, we tested each image of the test subset and computed the recognition rate (a test is successful if the digit is correctly identified). The results are plotted in Figure 4 and show that a good level of accuracy is obtained with only a few approximate singular values. Given the large size of the training matrix, this validates the interest of computing only a few singular values with the Golub–Kahan algorithm.

6.2. Example 2

In this example, we used the Georgia Tech database GTDB_crop [23], which contains 750 face images of 50 persons under different illumination conditions, facial expressions and face orientations, as shown in Figure 5. The RGB JPEG images were resized to 100 × 100 × 3 tensors.
Each image file is coded as a 100 × 100 × 3 tensor and transformed into a 10,000 × 1 × 3 tensor as explained in the previous section. We built the training and test tensors as follows: out of the 15 pictures of each person in the database, five pictures were randomly chosen and stored in the test folder, and the 10 remaining pictures were used for the training tensor. Hence, the database was partitioned into two subsets containing 250 and 500 items, respectively, at each iteration of the simulation.
We applied the TTGGKA-based Algorithm 6 for various truncation indices. In Figure 6, we show a test image (top left), the closest image in the database (top right), the mean image of the training database (bottom left) and the eigenface associated with the test image (bottom right).
In order to compute the recognition rate, we ran 100 simulations, counted the number of successes (a test is successful if the person is correctly identified) and report the best identification rates, as a function of the truncation index r, in Figure 7.
The results match the performances observed in the literature [24] for this database and confirm that the use of a Golub–Kahan strategy is attractive, especially since, in terms of training, the Tensor Tubal PCA algorithm required only 5 s instead of the 25 s needed when using a full c-SVD.

6.3. Example 3

In this third example, we used the larger AR face database (cropped version) [9], which contains 2600 bitmap pictures of human faces (50 males and 50 females, 26 pictures per person), with different facial expressions, lighting conditions and face orientations. The bitmap pictures were resized to 100 × 100 JPEG images. The same protocol as for Example 2 was followed: we partitioned the set of images into two subsets. Out of the 26 pictures of each person, 6 were randomly chosen as test images and the remaining 20 were put into the training folder. The training process took 24 s, while it would have taken 81.5 s using a full c-SVD. An example of a test image, the closest match in the dataset, the mean image and its associated eigenface are shown in Figure 8.
We applied our approach (TTPCA) to the 10,000 × 2000 × 3 training tensor X and plotted the recognition rate as a function of the truncation index in Figure 9.
For all examples, it is worth noticing that, as expected in face identification problems, only a few of the largest singular elements suffice to capture the main features of an image. Therefore, Golub–Kahan based strategies such as the TTPCA method are an interesting choice.

7. Conclusions

In this manuscript, we focused on two types of Golub–Kahan factorizations. We used the recent advances in the field of tensor factorization and showed that this approach is efficient for image identification. The main feature of this approach resides in the ability of the global Golub–Kahan algorithms to approximate the dominant singular elements of a training matrix or tensor without needing to compute the full SVD. This is particularly important, as the matrices and tensors involved in this type of application can be very large. Moreover, when color has to be taken into account, this approach does not involve a conversion to grayscale, which can be very important for some applications. In future work, we would like to study the feasibility of implementing the promising randomized PCA approaches within the Golub–Kahan tensor algorithm, in order to reduce the computational cost of the training process in the case of very large datasets.

Author Contributions

Conceptualization, M.H., K.J., C.K. and M.M.; methodology, M.H. and K.J.; software, M.H.; validation, M.H., K.J., C.K. and M.M.; writing—original draft preparation, M.H., K.J., C.K. and M.M.; writing—review and editing, M.H., K.J., C.K. and M.M.; visualization, M.H., K.J., C.K. and M.M.; supervision, K.J.; project administration, M.H. and K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Mustapha Hached acknowledges support from the Labex CEMPI (ANR-11-LABX-0007-01).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kilmer, M.E.; Braman, K.; Hao, N.; Hoover, R.C. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 2013, 34, 148–172. [Google Scholar]
  2. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 3, 455–500. [Google Scholar] [CrossRef]
  3. Zhang, J.; Saibaba, A.K.; Kilmer, M.E.; Aeron, S. A randomized tensor singular value decomposition based on the t-product. Numer Linear Algebra Appl. 2018, 25, e2179. [Google Scholar] [CrossRef] [Green Version]
  4. Cai, S.; Luo, Q.; Yang, M.; Li, W.; Xiao, M. Tensor robust principal component analysis via non-convex low rank approximation. Appl. Sci. 2019, 9, 1411. [Google Scholar] [CrossRef] [Green Version]
  5. Kong, H.; Xie, X.; Lin, Z. t-Schatten-p norm for low-rank tensor recovery. IEEE J. Sel. Top. Signal Process. 2018, 12, 1405–1419. [Google Scholar] [CrossRef]
  6. Lin, Z.; Chen, M.; Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055. [Google Scholar]
  7. Kang, Z.; Peng, C.; Cheng, Q. Robust PCA via nonconvex rank approximation. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015. [Google Scholar]
  8. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor Robust Principal Component Analysis with a New Tensor Nuclear Norm. IEEE Anal. Mach. Intell. 2020, 42, 925–938. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 228–233. [Google Scholar] [CrossRef] [Green Version]
  10. Guide, M.E.; Ichi, A.E.; Jbilou, K.; Sadaka, R. Tensor Krylov subspace methods via the T-product for color image processing. arXiv 2020, arXiv:2006.07133. [Google Scholar]
  11. Brazell, M.; Navasca, N.L.C.; Tamon, C. Solving Multilinear Systems Via Tensor Inversion. SIAM J. Matrix Anal. Appl. 2013, 34, 542–570. [Google Scholar] [CrossRef]
  12. Beik, F.P.A.; Jbilou, K.; Najafi-Kalyani, M.; Reichel, L. Golub–Kahan bidiagonalization for ill-conditioned tensor equations with applications. Numer. Algorithms 2020, 84, 1535–1563. [Google Scholar] [CrossRef]
  13. Ichi, A.E.; Jbilou, K.; Sadaka, R. On some tensor tubal-Krylov subspace methods via the T-product. arXiv 2020, arXiv:2010.14063. [Google Scholar]
  14. Guide, M.E.; Ichi, A.E.; Jbilou, K. Discrete cosine transform LSQR and GMRES methods for multidimensional ill-posed problems. arXiv 2020, arXiv:2103.11847. [Google Scholar]
  15. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear image analysis for facial recognition. In Proceedings of the Object Recognition Supported by User Interaction for Service Robots, Quebec City, QC, Canada, 11–15 August 2002; pp. 511–514. [Google Scholar]
  16. Jain, A. Fundamentals of Digital Image Processing; Prentice–Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  17. Ng, M.K.; Chan, R.H.; Tang, W. A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput. 1999, 21, 851–866. [Google Scholar] [CrossRef] [Green Version]
  18. Kernfeld, E.; Kilmer, M.; Aeron, S. Tensor-tensor products with invertible linear transforms. Linear Algebra Appl. 2015, 485, 545–570. [Google Scholar] [CrossRef]
  19. Kilmer, M.E.; Martin, C.D. Factorization strategies for third-order tensors. Linear Algebra Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef] [Green Version]
  20. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  21. Savas, B.; Eldén, L. Krylov-type methods for tensor computations I. Linear Algebra Appl. 2013, 438, 891–918. [Google Scholar] [CrossRef]
  22. Lecun, Y.; Cortes, C.; Curges, C. The MNIST Database. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 22 February 2021).
  23. Nefian, A.V. Georgia Tech Face Database. Available online: http://www.anefian.com/research/face_reco.htm (accessed on 22 February 2021).
  24. Wang, S.; Sun, M.; Chen, Y.; Pang, E.; Zhou, C. STPCA: Sparse tensor Principal Component Analysis for feature extraction. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 2278–2281. [Google Scholar]
Figure 1. Image compression.
Figure 2. $|\Sigma(1,1,1) - \mathcal{S}(1,1,1)|$ vs. the number of TTGGKA iterations k.
Figure 3. First 16 images of the MNIST training subset.
Figure 4. Identification rates for different truncation indices r.
Figure 5. Fifteen pictures of one individual in the database.
Figure 6. Test image, closest image, mean image and eigenface.
Figure 7. Identification rates for different truncation indices r.
Figure 8. Test image, closest image, mean image and eigenface.
Figure 9. Identification rates for different truncation indices r.
Table 1. $\|\mathcal{S}(i,i,:) - \Sigma(i,i,:)\|_F$ vs. k.

             k = 10      k = 30      k = 50      k = 70
S(1,1,:)     3.6e-04     1.3e-05     5.1e-11     4.8e-17
S(2,2,:)     2.0e-03     1.6e-06     5.2e-07     3.1e-08
S(3,3,:)     4.9e-03     5.9e-04     2.3e-04     5.6e-08
S(4,4,:)     8.4e-03     8.8e-04     1.5e-04     1.0e-08
S(5,5,:)     1.4e-02     1.3e-03     2.7e-04     1.1e-08
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
