Article

Exact Solution Analysis of Strongly Convex Programming for Principal Component Pursuit

School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
*
Author to whom correspondence should be addressed.
Information 2017, 8(1), 17; https://doi.org/10.3390/info8010017
Submission received: 14 October 2016 / Accepted: 25 January 2017 / Published: 2 February 2017
(This article belongs to the Section Information Theory and Methodology)

Abstract

In this paper, we address strongly convex programming for principal component analysis, which recovers a target matrix that is a superposition of low-complexity structures from a small set of linear measurements. We first provide sufficient conditions under which the strongly convex models lead to exact low-rank matrix recovery. Secondly, we give suggestions on how to choose suitable parameters in practical algorithms. Finally, the proposed result is extended to principal component pursuit with reduced linear measurements, and numerical experiments are provided.

1. Introduction

Recently, much attention has been focused on the problem of recovering a target matrix with low-complexity structure from a small set of linear measurements. This problem has attracted great interest since the publication of the pioneering works in [1,2,3,4]. It arises in many different fields, such as medical imaging [5,6,7], seismology [8], information retrieval [9] and machine learning [10], and especially the detection of moving objects [11]. In the case of moving object detection, the columns of the matrix M are the video frames, and the low-rank matrix $L_0$ and the sparse matrix $S_0$ are the stationary background and the moving objects in the foreground, respectively. According to [12], the main problem is to recover the low-rank matrix $L_0$ and the sparse matrix $S_0$ from the given data matrix $M = L_0 + S_0$, where $L_0 \in \mathbb{R}^{n\times n}$ has low rank and $S_0$ is sparse. In [12], E.J. Candès et al. proved that most low-rank matrices and sparse components can be recovered, provided that the rank of the low-rank component is not too large and the sparse component is reasonably sparse; more importantly, they proved that this can be done by solving a simple convex optimization problem. That is, provided that the rank of L and the cardinality of the support of S obey suitable conditions, most matrices $L_0$ of rank r and sparse components $S_0$ can be perfectly recovered by solving the following optimization problem:
$$\text{minimize} \quad \|L\|_* + \lambda\|S\|_1 \qquad \text{subject to} \quad L + S = M \qquad (1)$$
wherein $\|L\|_*$ is the nuclear norm of the matrix L (the sum of its singular values), and $\|S\|_1$ is the sum of the absolute values of all entries of S.
Strongly convex optimizations have many advantages, e.g., a unique optimal solution. Many scholars therefore suggest solving strongly convex approximations (see, e.g., [13,14,15,16]) instead of directly solving the original convex optimizations. J.F. Cai et al. addressed the strongly convex objective $\tau\|X\|_* + \frac{1}{2}\|X\|_F^2$ ($\|X\|_F$ denoting the Frobenius norm) instead of the original convex objective $\|X\|_*$, and introduced the singular value thresholding (SVT) algorithm for matrix completion based on this strongly convex optimization [15]. J. Wright et al. likewise addressed the strongly convex objective $\|L\|_* + \lambda\|S\|_1 + \frac{1}{2\tau}\|L\|_F^2 + \frac{1}{2\tau}\|S\|_F^2$ instead of the original objective $\|L\|_* + \lambda\|S\|_1$, and proposed the iterative thresholding (IT) algorithm for robust principal component analysis [14]. However, J. Wright et al. only confirm the performance of iterative thresholding by numerical experiments; they do not provide sufficient conditions guaranteeing that the strongly convex optimization $\|L\|_* + \lambda\|S\|_1 + \frac{1}{2\tau}\|L\|_F^2 + \frac{1}{2\tau}\|S\|_F^2$ and the original convex optimization $\|L\|_* + \lambda\|S\|_1$ have the same optimal solution. In [16], the authors gave sufficient conditions under which the strongly convex models lead to exact low-rank and sparse matrix recovery, together with suggestions on how to choose suitable parameters in practical algorithms. However, the results in [16] are limited to a special case, namely $Q = \mathbb{R}^{n\times n}$. In this paper, we extend this result to principal component pursuit with reduced linear measurements, that is, Q is a p-dimensional random subspace instead of $Q = \mathbb{R}^{n\times n}$. It is easy to see that the results in [16] are a special case of those proposed here.
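For readers who prefer a computational view, the two proximal (shrinkage) operators that underlie these strongly convex formulations — entrywise soft-thresholding for the $\ell_1$ term and singular value thresholding for the nuclear-norm term — can be sketched in a few lines. The code below is an illustrative sketch only; the function names, step size, and iteration count are our own choices, not those of [14,15]:

```python
import numpy as np

def soft_threshold(X, t):
    """Entrywise soft-thresholding: proximal operator of t*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def singular_value_threshold(X, t):
    """Singular value soft-thresholding: proximal operator of t*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def iterative_thresholding(M, lam, tau, step=1.0, n_iter=500):
    """A schematic dual-ascent loop in the spirit of the IT algorithm [14]:
    alternately shrink the dual variable Y, then update it with the residual."""
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = singular_value_threshold(Y, tau)
        S = soft_threshold(Y, lam * tau)
        Y = Y + step * (M - L - S)
    return L, S
```

Dividing the objective of the strongly convex program by its Frobenius weight shows why the thresholding levels scale with τ: larger τ makes the quadratic terms weaker and brings the strongly convex program closer to the original convex one.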

1.1. Basic Problem Formulations

In this subsection, we present the strongly convex programming addressed in this paper and list its existence and uniqueness theorems, which will be proved in the following sections. In [17], the authors studied principal component pursuit with reduced linear measurements and gave sufficient conditions under which $L_0$ and $S_0$ can be perfectly recovered.
In this paper, we address a strongly convex programming and prove that it guarantees exact low-rank matrix recovery. The proposed optimization is the following:
$$\text{minimize} \quad \|L\|_* + \lambda\|S\|_1 + \frac{1}{2\tau}\|L\|_F^2 + \frac{1}{2\tau}\|S\|_F^2 \qquad \text{subject to} \quad P_Q M = P_Q(L + S) \qquad (2)$$
wherein $\tau > 0$ is a positive penalty parameter and $P_Q$ denotes the orthogonal projection onto the linear subspace Q. We also assume that Q is a random subspace. The existence and uniqueness theorem for the case $\tau = \infty$ in (2) is provided in [17] and is listed below. At the end, how to choose suitable parameters in the optimization model (2) is discussed.
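To make the measurement model concrete, the following sketch builds an orthogonal projector onto a p-dimensional random subspace Q of $\mathbb{R}^{n\times n}$, acting on vectorized matrices as in the representation $P_Q = H(H^*H)^{-1}H^*$ used later in Section 2. The construction, scaling, and helper name are illustrative assumptions of ours, not code from any reference:

```python
import numpy as np

def random_subspace_projector(n, p, rng=None):
    """Orthogonal projector onto a p-dimensional random subspace Q of R^{n x n}.
    Returns a function X -> P_Q(X) acting on n x n matrices via vectorization."""
    rng = np.random.default_rng() if rng is None else rng
    H = rng.normal(0.0, 1.0 / n, size=(n * n, p))   # i.i.d. Gaussian columns spanning Q
    Q_basis, _ = np.linalg.qr(H)                     # orthonormal basis; P_Q = Q_basis Q_basis^T
    def P_Q(X):
        v = Q_basis @ (Q_basis.T @ X.reshape(-1))
        return v.reshape(n, n)
    return P_Q

# Example (hypothetical variables M, L, S): the constraint in (2) only pins down
# P_Q(L + S), not L + S itself.
# P_Q = random_subspace_projector(n=30, p=200)
# residual = P_Q(M - (L + S))   # feasibility means this is (numerically) zero
```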
Theorem 1
([17]). Fix any $C_p > 0$ and let Q be a p-dimensional random subspace of $\mathbb{R}^{n\times n}$; suppose $L_0$ obeys the incoherence condition with parameter μ and $\mathrm{supp}(S_0) \sim \mathrm{Ber}(\rho)$. Then, with high probability, the solution of problem (2) with $\lambda = 1/\sqrt{n}$ is exact, i.e., $\hat{L} = L_0$ and $\hat{S} = S_0$, provided that
$$\mathrm{rank}(L_0) < C_r\, n\, \mu^{-1}(\log n)^{-2}, \qquad p < C_p\, n, \qquad \text{and} \qquad \rho < \rho_0$$
wherein $C_r$, $C_p$ and $\rho_0$ are positive numerical constants and $\rho_0 < 1$.

1.2. Contents and Notations

We provide a brief summary of the notation used throughout the paper. $\|X\|$ denotes the operator norm of a matrix X, $\|X\|_F$ the Frobenius norm, $\|X\|_*$ the nuclear norm, and $\|X\|_\infty$ the largest entry in magnitude; the dual norm of a norm $\|X\|_{(i)}$ is denoted by $\|X\|_{(i)}^*$. The Euclidean inner product between two matrices is defined by $\langle X, Y\rangle = \mathrm{trace}(X^*Y)$, so that $\|X\|_F^2 = \langle X, X\rangle$. The Cauchy–Schwarz inequality, which will often be used in the following sections, gives $\langle X, Y\rangle \leq \|X\|_F\|Y\|_F$, and it is well known that $\langle X, Y\rangle \leq \|X\|_{(i)}\|Y\|_{(i)}^*$ (see, e.g., [2,18]). Linear transformations acting on the space of matrices are denoted by P, writing $PX$ for the action on X; such an operator can be identified with a matrix acting on vectorized matrices. Its operator norm is denoted by $\|P\| = \sup_{\|X\|_F = 1}\|PX\|_F$. We say an event E occurs with high probability if $\mathbb{P}[E] \geq 1 - Cn^{-\alpha}$ for positive numerical constants C and α. We denote the reduced singular value decomposition (SVD) of the low-rank matrix $L_0$ by $L_0 = U\Sigma V^*$ and define a linear subspace T as follows:
$$T = \left\{ UX^* + YV^* : X, Y \in \mathbb{R}^{n\times r} \right\}$$
We denote the support of the sparse matrix $S_0$ by Ω; by a slight abuse of notation, we also write Ω for the subspace of matrices whose support is contained in the support of $S_0$.
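When experimenting numerically with the quantities defined above, the orthogonal projections onto T and onto Ω admit simple closed forms. The sketch below uses the standard identity $P_T X = UU^*X + XVV^* - UU^*XVV^*$; the helper names and the boolean mask standing in for Ω are our own conventions:

```python
import numpy as np

def project_T(X, U, V):
    """Projection onto T = {U A^* + B V^*}: P_T X = UU^T X + X VV^T - UU^T X VV^T."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ X + X @ PV - PU @ X @ PV

def project_T_perp(X, U, V):
    """Projection onto the orthogonal complement of T."""
    return X - project_T(X, U, V)

def project_Omega(X, mask):
    """Projection onto matrices supported on Omega (mask is a boolean array)."""
    return np.where(mask, X, 0.0)
```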
The rest of this paper is organized as follows. In Section 2, we list several important lemmas and prove the key lemma on which our main result is based. In Section 3, we provide sufficient conditions and suggestions that guide the choice of suitable parameters in practical algorithms. Numerical results are given in Section 4. Finally, conclusions are presented in Section 5.

2. Important Lemmas

In this section, we first list some useful lemmas that will be used throughout the paper and then prove the main lemma. Although this main lemma is similar to the corresponding one in [17], there is a significant difference: the construction of $W_Q$ is very different, which requires additional work.
Lemma 1
([17]). Let $\Gamma = Q \cap T^\perp$, so that $\Gamma^\perp = Q^\perp \oplus T$, and suppose $\dim(Q \oplus T \oplus \Omega) = \dim(Q) + \dim(T) + \dim(\Omega)$. At the same time, assume that $\|P_\Omega P_{\Gamma^\perp}\| < 1/2$ and $\lambda < 1$. Then $(L_0, S_0)$ is the unique optimal solution to (2) if there is a pair $(W, F) \in \mathbb{R}^{n\times n} \times \mathbb{R}^{n\times n}$ and a matrix D satisfying the following conditions:
$$UV^* + W = \lambda\left(\mathrm{sgn}(S_0) + F + P_\Omega D\right) \in Q$$
in which $P_T W = 0$, $\|W\| < 1/2$, $P_\Omega F = 0$, $\|F\|_\infty < 1/2$, and $\|P_\Omega D\|_F \leq 1/4$.
Lemma 2
([17]). Suppose $\Omega \sim \mathrm{Ber}(\rho)$ for some small $\rho \in (0, 1)$ and that the other conditions of Theorem 1 hold. Then the matrix $W^L$ obeys the following inequalities with high probability.
(a).
$$\|W^L\| < 1/4$$
(b).
$$\left\|P_\Omega\left(UV^* + W^L\right)\right\|_F < \lambda/4$$
(c).
$$\left\|P_{\Omega^\perp}\left(UV^* + W^L\right)\right\|_\infty < \lambda/4$$
Lemma 3
([17]). In addition to the assumptions in the previous lemma, suppose that the signs of the non-zero entries of S 0 are i.i.d. random. Then the matrix W S obeys the below inequalities with high probability.
(a).
$$\|W^S\| < 1/8$$
(b).
$$\left\|P_{\Omega^\perp} W^S\right\|_\infty < \lambda/8$$
The constructions of $W^L$ and $W^S$ can be found in [17]. However, the matrix $W_Q$ constructed in [17] does not satisfy the requirements of our problem, so we modify its construction to fit problem (2). We first give the explicit construction of $W_Q$ and then prove that the modified $W_Q$ has the required property.
Construction of $W_Q$ with least modification. We define $W_Q$ by the following least squares problem:
$$W_Q = \arg\min_{X} \|X\|_F \qquad \text{subject to} \quad P_Q X = P_Q\left(UV^* + \frac{1}{\tau}L_0\right), \quad P_\Pi X = 0$$
wherein $\Pi = T \oplus \Omega$. This construction of $W_Q$ still satisfies Lemma 5 in [17] and, in addition, has the following property.
Lemma 4.
Suppose $\tau \geq \|M\|_F$, $\Omega \sim \mathrm{Ber}(\rho)$ for some small $\rho \in (0, 1)$, and that the assumptions of Theorem 1 hold. Then the matrix $W_Q$ obeys the following inequalities with high probability.
(a).
$$\|W_Q\| < 1/8$$
(b).
$$\left\|P_{\Omega^\perp} W_Q\right\|_\infty < \lambda/8$$
Proof. A:
Bounding the Frobenius norm of $UV^* + \frac{1}{\tau}L_0$. For convenience, let $\xi := \|UV^* + \frac{1}{\tau}L_0\|_F$. According to the triangle inequality, we have
$$\|L_0\|_F = \|M - S_0\|_F \leq \|M\|_F + \|S_0\|_F = \|M\|_F + \|P_\Omega S_0\|_F$$
In the last equality, we have used $S_0 \in \Omega$. Note that
$$\|P_\Omega S_0\|_F = \|P_\Omega(M - L_0)\|_F \leq \|P_\Omega M\|_F + \|P_\Omega L_0\|_F$$
According to the derivation in [16], the below inequality is true with high probability
$$\|P_\Omega L_0\|_F \leq \frac{\sqrt{3}}{3}\|P_\Omega M\|_F \leq \frac{\sqrt{3}}{3}\|M\|_F$$
Putting those all together, we can obtain
$$\|L_0\|_F \leq \left(\frac{\sqrt{3}}{3} + 2\right)\|M\|_F$$
Combining this with $\tau \geq \|M\|_F$, we can obtain
$$\xi \leq \|UV^*\|_F + \frac{\|L_0\|_F}{\tau} \leq \sqrt{r} + \frac{\left(\sqrt{3}/3 + 2\right)\|M\|_F}{\tau} \leq \sqrt{r} + \frac{\sqrt{3}}{3} + 2$$
Since $W_Q$ is the optimal solution of the least squares problem, it admits a convergent Neumann series expansion. It is easy to see that
$$W_Q = P_{\Pi^\perp}\sum_{k \geq 0}\left(P_Q P_\Pi P_Q\right)^k\left(P_Q\left(UV^* + \frac{1}{\tau}L_0\right)\right)$$
Since orthogonal projections are non-expansive with respect to the Frobenius norm, we have
$$\|W_Q\|_F \leq \left\|\sum_{k \geq 0}\left(P_Q P_\Pi P_Q\right)^k\right\| \, \left\|P_Q\left(UV^* + \frac{1}{\tau}L_0\right)\right\|_F$$
B: 
Estimating the first inequality of Lemma 4. In order to bound $\|W_Q\|_F$, we first bound the norm of $\sum_{k\geq 0}(P_Q P_\Pi P_Q)^k$ and the Frobenius norm of $P_Q(UV^* + \frac{1}{\tau}L_0)$. The norm of $\sum_{k\geq 0}(P_Q P_\Pi P_Q)^k$ satisfies
$$\left\|\sum_{k\geq 0}\left(P_Q P_\Pi P_Q\right)^k\right\| \leq \sum_{k\geq 0}\left\|\left(P_Q P_\Pi P_Q\right)^k\right\| \leq \sum_{k\geq 0}\left\|P_Q P_\Pi\right\|^{2k}$$
According to Lemma 11 in the paper [17], the following inequality is true with high probability for any ϵ > 0 ,
$$\left\|P_Q P_\Pi\right\|^2 \leq \frac{64(1-\rho+\epsilon)p}{n^2} + \left(\frac{5\rho}{4}\right)^2 + \frac{p}{n^2} + \frac{2r}{n^2}$$
According to the paper [17], the following inequality is true with high probability:
$$\left\|\sum_{k\geq 0}\left(P_Q P_\Pi P_Q\right)^k\right\| \leq \frac{4}{3}$$
Secondly, we bound the Frobenius norm of $P_Q(UV^* + \frac{1}{\tau}L_0)$. The projection $P_Q$ has the same distribution as $H(H^*H)^{-1}H^*$, in which $H \in \mathbb{R}^{n^2\times p}$ is a random Gaussian matrix with i.i.d. entries distributed as $\mathcal{N}(0, 1/n^2)$. Therefore, we can obtain the inequality below:
$$\left\|P_Q\left(UV^* + \frac{1}{\tau}L_0\right)\right\|_F = \left\|H(H^*H)^{-1}H^*\,\mathrm{vec}\left(UV^* + \frac{1}{\tau}L_0\right)\right\|_2 \leq \left\|H(H^*H)^{-1}\right\|\,\left\|H^*\,\mathrm{vec}\left(UV^* + \frac{1}{\tau}L_0\right)\right\|_2$$
Together with Lemma 7 in the paper [17], we can obtain
$$\mathbb{P}\left[\left\|H(H^*H)^{-1}\right\| \geq 4\right] \leq e^{-n^2/32}$$
It is easy to see that each entry of $H^*\,\mathrm{vec}(UV^* + \frac{1}{\tau}L_0)$ has the same distribution as $\langle G, UV^* + \frac{1}{\tau}L_0\rangle$, in which the $G_{ij} \sim \mathcal{N}(0, 1/n^2)$ are independent and identically distributed. It is obvious that
$$\mathbb{E}\left\{\left\langle G,\ UV^* + \tfrac{1}{\tau}L_0\right\rangle\right\} = \left\langle \mathbb{E}\{G\},\ UV^* + \tfrac{1}{\tau}L_0\right\rangle = 0$$
and
$$\mathrm{Var}\left\{\left\langle G,\ UV^* + \tfrac{1}{\tau}L_0\right\rangle\right\} = \sum_{ij}\left(UV^* + \tfrac{1}{\tau}L_0\right)_{ij}^2 \,\mathrm{Var}\{G_{ij}\} = \xi^2/n^4$$
Therefore $\langle G, UV^* + \frac{1}{\tau}L_0\rangle$ is a zero-mean Gaussian with standard deviation $\xi/n^2$, where $\xi := \|UV^* + \frac{1}{\tau}L_0\|_F$. For simplicity, we define $Z := H^*\,\mathrm{vec}(UV^* + \frac{1}{\tau}L_0)$. Using the Jensen inequality, we can obtain
$$\mathbb{E}\left[\|Z\|_2\right] \leq \left(\mathbb{E}\left[\|Z\|_2^2\right]\right)^{1/2} = \frac{\sqrt{p}\,\xi}{n^2}$$
According to Proposition 2.18 in [18], we can obtain
$$\mathbb{P}\left[\|Z\|_2 \geq \mathbb{E}\left[\|Z\|_2\right] + \frac{t\,\xi}{n^2}\right] \leq e^{-t^2/2}$$
Setting $t = \sqrt{6\log n}$, after a simple calculation we can obtain the inequality below with high probability.
$$\|W_Q\| \leq \|W_Q\|_F \leq \frac{16}{3}\left(\frac{\sqrt{p}\,\xi}{n} + \frac{\sqrt{6}\,\xi\sqrt{\log n}}{n}\right)$$
For sufficiently large n, the first inequality of Lemma 4 is therefore established. We now turn to the second inequality of Lemma 4.
C: 
Estimating the second inequality of Lemma 4. Note that
$$W_Q = P_{\Pi^\perp} P_Q \sum_{k \geq 0}\left(P_Q P_\Pi P_Q\right)^k\left(P_Q\left(UV^* + \frac{1}{\tau}L_0\right)\right)$$
After a simple inference, we can obtain the below inequality.
$$\left\|P_{\Omega^\perp} W_Q\right\|_\infty \leq \frac{C\,\xi}{n^2}\left(\sqrt{p} + \sqrt{6\log n}\right)^2$$
wherein C is a numerical constant. Hence the second inequality of Lemma 4 is established for sufficiently large n. ☐

3. Estimating Parameter τ

In this section, we provide sufficient conditions under which $(L_0, S_0)$ is the unique and exact solution of the strongly convex programming (2) with high probability, i.e., the solution of problem (2) is exactly $\hat{L} = L_0$ and $\hat{S} = S_0$. Afterwards, an explicit lower bound on τ is provided, which guides the choice of suitable parameters in practical algorithms.
Theorem 2.
Let $\Gamma = Q \cap T^\perp$, so that $\Gamma^\perp = Q^\perp \oplus T$, and suppose $\dim(Q \oplus T \oplus \Omega) = \dim(Q) + \dim(T) + \dim(\Omega)$. Assume that $\|P_\Omega P_{\Gamma^\perp}\| < 1/2$ and $\lambda < 1$. If there is a pair $(W, F) \in \mathbb{R}^{n\times n} \times \mathbb{R}^{n\times n}$ and a matrix D satisfying
$$UV^* + W + \frac{1}{\tau}L_0 = \lambda\left(\mathrm{sgn}(S_0) + F + P_\Omega D\right) + \frac{1}{\tau}S_0 \in Q$$
with
$$P_T W = 0, \qquad \|W\| \leq \beta, \qquad P_\Omega F = 0, \qquad \|F\|_\infty \leq \beta, \qquad \|P_\Omega D\|_F \leq \alpha$$
where α, β are positive parameters satisfying
$$\alpha + \beta \leq 1 \qquad (6)$$
then $(L_0, S_0)$ is the unique solution of the strongly convex programming (2).
Proof. 
For any feasible perturbation $(L_0 + H_L, S_0 - H_S)$, feasibility of both points gives $P_Q(H_L - H_S) = 0$, i.e., $P_Q H_L = P_Q H_S$. According to the definition of Γ, we have $\Gamma \subseteq Q$; therefore $P_\Gamma H_L = P_\Gamma H_S$. For simplicity, we define $f(L, S) := \|L\|_* + \lambda\|S\|_1 + \frac{1}{2\tau}\|L\|_F^2 + \frac{1}{2\tau}\|S\|_F^2$, and we can obtain the inequality chain below:
$$\begin{aligned}
f(L_0 + H_L, S_0 - H_S) &\geq f(L_0, S_0) + \left\langle UV^* + W_0 + \tfrac{1}{\tau}L_0,\ H_L\right\rangle - \left\langle \lambda\,\mathrm{sgn}(S_0) + \lambda F_0 + \tfrac{1}{\tau}S_0,\ H_S\right\rangle\\
&\geq f(L_0, S_0) + \langle W_0, H_L\rangle - \langle W, H_L\rangle + \left\langle UV^* + W + \tfrac{1}{\tau}L_0,\ P_Q H_L\right\rangle\\
&\qquad - \langle \lambda F_0, H_S\rangle + \langle \lambda F, H_S\rangle - \left\langle \lambda\,\mathrm{sgn}(S_0) + \lambda F + \tfrac{1}{\tau}S_0,\ P_Q H_S\right\rangle\\
&\geq f(L_0, S_0) + \langle W_0, P_{T^\perp} H_L\rangle - \langle W, P_{T^\perp} H_L\rangle - \langle \lambda F_0, P_{\Omega^\perp} H_S\rangle + \langle \lambda F, P_{\Omega^\perp} H_S\rangle - \langle \lambda P_\Omega D, P_Q H_S\rangle\\
&\geq f(L_0, S_0) + (1-\beta)\|P_{T^\perp} H_L\|_* + (1-\beta)\lambda\|P_{\Omega^\perp} H_S\|_1 - \alpha\lambda\|P_\Omega H_S\|_F
\end{aligned}$$
In the second inequality above, we have used the fact that
$$UV^* + W + \frac{1}{\tau}L_0 = \lambda\left(\mathrm{sgn}(S_0) + F + P_\Omega D\right) + \frac{1}{\tau}S_0 \in Q$$
In the third inequality above, we have used the property $P_Q H_L = P_Q H_S$.
Having bounded $f(L_0 + H_L, S_0 - H_S)$, we now bound $\|P_\Omega H_S\|_F$. According to the definition of Γ and the assumption $\|P_\Omega P_{\Gamma^\perp}\| < 1/2$, we can obtain
$$\|P_\Omega H_S\|_F \leq \|P_\Omega P_\Gamma H_S\|_F + \|P_\Omega P_{\Gamma^\perp} H_S\|_F \leq \|P_\Omega P_\Gamma H_L\|_F + \tfrac{1}{2}\|H_S\|_F \leq \|P_\Gamma H_L\|_F + \tfrac{1}{2}\|P_\Omega H_S\|_F + \tfrac{1}{2}\|P_{\Omega^\perp} H_S\|_F \leq \|P_{T^\perp} H_L\|_F + \tfrac{1}{2}\|P_\Omega H_S\|_F + \tfrac{1}{2}\|P_{\Omega^\perp} H_S\|_F$$
Therefore
$$\|P_\Omega H_S\|_F \leq 2\|P_{T^\perp} H_L\|_F + \|P_{\Omega^\perp} H_S\|_F \leq 2\|P_{T^\perp} H_L\|_* + \|P_{\Omega^\perp} H_S\|_1$$
Putting those all together, we get
$$f(L_0 + H_L, S_0 - H_S) \geq f(L_0, S_0) + (1 - \beta - 2\alpha\lambda)\|P_{T^\perp} H_L\|_* + (1 - \beta - \alpha)\lambda\|P_{\Omega^\perp} H_S\|_1$$
According to (6), the inequality above implies that $(L_0, S_0)$ is a solution to (2), i.e., $(L_0, S_0)$ is the exact solution of the strongly convex programming (2). The uniqueness follows from the strong convexity of the objective in (2).
In practice, choosing the parameter τ is difficult; we therefore provide criteria for the value of τ below, which guide the choice of suitable parameters in practical algorithms. Theorems 3 and 4 provide such criteria, and the bound on τ in Theorem 4 is more explicit and more useful in practice. ☐
Theorem 3.
Let $\tau_1 = \frac{\|P_{\Omega^\perp} L_0\|_\infty}{(\beta - \frac{1}{2})\lambda}$, $\tau_2 = \frac{\|P_\Omega(L_0 - S_0)\|_F}{(\alpha - \frac{1}{4})\lambda}$, and $\tau_3 = \frac{4\left(\|P_{\Omega^\perp} L_0\|_\infty + \|P_\Omega(L_0 - S_0)\|_F\right)}{\lambda}$. Assume
$$\tau \geq \max\left\{\tau_1,\ \tau_2,\ \tau_3,\ \|M\|_F\right\} \qquad (7)$$
Then, under the other assumptions of Theorem 1, ( L 0 , S 0 ) is the unique solution to the strongly convex programming (2) with high probability.
Proof. 
In order to check the conditions in Theorem 2, we will prove the existence of a matrix W obeying
$$P_T W = 0, \qquad \|W\| \leq \beta, \qquad P_Q W = P_Q\left(UV^* + \tfrac{1}{\tau}L_0\right),$$
$$\left\|P_{\Omega^\perp}\left(UV^* + W + \tfrac{1}{\tau}L_0 - \tfrac{1}{\tau}S_0\right)\right\|_\infty \leq \beta\lambda, \qquad \left\|P_\Omega\left(UV^* + W - \lambda\,\mathrm{sgn}(S_0) + \tfrac{1}{\tau}L_0 - \tfrac{1}{\tau}S_0\right)\right\|_F \leq \alpha\lambda \qquad (8)$$
Take $W = W^L + W^S + W_Q$. We check that the above conditions hold one by one. For simplicity, we define
$$\gamma := \left\|P_{\Omega^\perp}(L_0 - S_0)\right\|_\infty, \qquad \delta := \left\|P_\Omega(L_0 - S_0)\right\|_F$$
Without loss of generality, let $\beta > 1/2$. With the help of the constructions of $W^L$, $W^S$ and $W_Q$, it is easy to check that the first and second conditions hold. According to the modified construction of $W_Q$ in Lemma 4, together with $P_Q W^L = 0$ and $P_Q W^S = 0$, we have $P_Q W_Q = P_Q(UV^* + \frac{1}{\tau}L_0)$; hence $P_Q W = P_Q W^L + P_Q W^S + P_Q W_Q = P_Q(UV^* + \frac{1}{\tau}L_0)$, which is the third condition. It remains to show that the last two conditions also hold under suitable assumptions. For the fourth inequality, we have
$$\left\|P_{\Omega^\perp}\left(UV^* + W + \tfrac{1}{\tau}L_0 - \tfrac{1}{\tau}S_0\right)\right\|_\infty \leq \left\|P_{\Omega^\perp}\left(UV^* + W^L\right)\right\|_\infty + \left\|P_{\Omega^\perp} W^S\right\|_\infty + \left\|P_{\Omega^\perp} W_Q\right\|_\infty + \tfrac{1}{\tau}\left\|P_{\Omega^\perp}(L_0 - S_0)\right\|_\infty \leq \tfrac{\lambda}{4} + \tfrac{\lambda}{8} + \tfrac{\lambda}{8} + \tfrac{1}{\tau}\left\|P_{\Omega^\perp}(L_0 - S_0)\right\|_\infty \leq \tfrac{\lambda}{2} + \tfrac{\gamma}{\tau}$$
For the last inequality, noting that $P_\Omega W^S = \lambda\,\mathrm{sgn}(S_0)$ and $P_\Omega W_Q = 0$, we can obtain
$$\left\|P_\Omega\left(UV^* + W - \lambda\,\mathrm{sgn}(S_0) + \tfrac{1}{\tau}L_0 - \tfrac{1}{\tau}S_0\right)\right\|_F = \left\|P_\Omega\left(UV^* + W^L + \tfrac{1}{\tau}L_0 - \tfrac{1}{\tau}S_0\right)\right\|_F \leq \left\|P_\Omega\left(UV^* + W^L\right)\right\|_F + \tfrac{1}{\tau}\left\|P_\Omega(L_0 - S_0)\right\|_F \leq \tfrac{\lambda}{4} + \tfrac{\delta}{\tau}$$
In order to satisfy condition (8), we choose τ obeying
$$\frac{\lambda}{2} + \frac{\gamma}{\tau} \leq \beta\lambda \qquad\text{and}\qquad \frac{\lambda}{4} + \frac{\delta}{\tau} \leq \alpha\lambda \qquad (9)$$
Therefore
$$\tau \geq \max\left\{\frac{\gamma}{(\beta - \frac{1}{2})\lambda},\ \frac{\delta}{(\alpha - \frac{1}{4})\lambda}\right\} \qquad (10)$$
Combining (9) with (6), we can obtain
$$\frac{\lambda}{2} + \frac{\gamma}{\tau} + \frac{\lambda}{4} + \frac{\delta}{\tau} \leq \beta\lambda + \alpha\lambda \leq \lambda$$
Therefore
$$\tau \geq \frac{4(\gamma + \delta)}{\lambda} \qquad (11)$$
Together with (10) and (11), Theorem 3 is established.
In order to simplify Formula (7), we take $\alpha = 3/8$ and $\beta = 5/8$, which satisfy the conditions above. Therefore
$$\tau \geq \max\left\{\frac{8\left\|P_{\Omega^\perp} L_0\right\|_\infty}{\lambda},\ \frac{8\left\|P_\Omega(L_0 - S_0)\right\|_F}{\lambda}\right\}$$
However, the exact lower bound is hard to compute, because in a practical problem we only have access to the given data matrix M. Noting that $S_0$ is supported on Ω, we have
$$\left\|P_{\Omega^\perp} L_0\right\|_\infty = \left\|P_{\Omega^\perp} M\right\|_\infty \leq \|M\|_\infty$$
And according to the paper [16], we have
$$\left\|P_\Omega(L_0 - S_0)\right\|_F \leq \frac{\sqrt{15}}{3}\|M\|_F$$
Therefore, we can choose
$$\tau \geq \max\left\{\frac{8\|M\|_\infty}{\lambda},\ \frac{8\sqrt{15}\,\|M\|_F}{3\lambda}\right\}$$
It is obvious that $\|M\|_\infty \leq \|M\|_F$. Therefore, we can obtain Theorem 4 as follows. ☐
Theorem 4.
Assuming
$$\tau \geq \frac{8\sqrt{15}\,\|M\|_F}{3\lambda}$$
and the other assumptions of Theorem 1, $(L_0, S_0)$ is the unique solution to the strongly convex programming (2) with high probability.
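As a practical rule of thumb, the bound of Theorem 4 can be evaluated directly from the observed data matrix. The helper below is a sketch of that computation; the choice $\lambda = 1/\sqrt{n}$ follows Theorem 1, and using the largest matrix dimension for n with rectangular inputs is our own convention:

```python
import numpy as np

def tau_lower_bound(M):
    """Lower bound on tau from Theorem 4: tau >= 8*sqrt(15)*||M||_F / (3*lambda),
    with lambda = 1/sqrt(n) as in Theorem 1."""
    n = max(M.shape)
    lam = 1.0 / np.sqrt(n)
    return 8.0 * np.sqrt(15.0) * np.linalg.norm(M, 'fro') / (3.0 * lam)

# Any tau at or above this value satisfies the hypothesis of Theorem 4, so the
# strongly convex program (2) then has the same solution as the original
# problem with high probability.
```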

4. Numerical Results

In this section, we provide numerical experiments to verify Theorem 4. Without loss of generality, we take $r = 2$ and $M = L_0 + S_0$, where the rank-r matrix $L_0 = XY^T$ is generated with X and Y being 15 × 2 and 30 × 2 matrices whose entries are independently sampled from a $\mathcal{N}(0, \delta^2)$ distribution, and the sparse matrix $S_0 = P_\Omega E$, whose support set Ω of size $k_s = \rho_s mn$ is chosen uniformly at random. Let $(L_1, S_1)$ and $(L_2, S_2)$ denote the optimal solutions of the optimization problem (1) and of the strongly convex optimization problem (2), respectively. The experiments are performed with $\|M\|_F = 1$. Figure 1 shows the probability of correct recovery ($\|L_1 - L_2\|_F^2 + \|S_1 - S_2\|_F^2 \leq 10^{-3}$) for different values of τ. When $1/\tau < \frac{3\lambda}{8\sqrt{15}\,\|M\|_F} = 0.03$, the probability of correct recovery is nearly 100%; however, when $1/\tau > 0.03$, the probability of correct recovery decreases quickly. This phenomenon verifies Theorem 4 numerically.
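The synthetic setup described above can be reproduced with a short script. The sketch below generates a test instance and evaluates the success criterion used for Figure 1; the values of the sparse entries (±1) and the scale δ are illustrative assumptions of ours, and the two solvers for problems (1) and (2) are not shown:

```python
import numpy as np

def make_test_instance(m=15, n=30, r=2, rho_s=0.1, delta=1.0, seed=0):
    """Generate L0 = X Y^T (Gaussian factors) and a sparse S0 with k_s = rho_s*m*n
    support chosen uniformly at random, then rescale so that ||M||_F = 1."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, delta, size=(m, r))
    Y = rng.normal(0.0, delta, size=(n, r))
    L0 = X @ Y.T
    S0 = np.zeros((m, n))
    k_s = int(rho_s * m * n)
    support = rng.choice(m * n, size=k_s, replace=False)
    S0.flat[support] = rng.choice([-1.0, 1.0], size=k_s)
    M = L0 + S0
    scale = np.linalg.norm(M, 'fro')   # experiments are reported with ||M||_F = 1
    return L0 / scale, S0 / scale, M / scale

def correct_recovery(L1, S1, L2, S2, tol=1e-3):
    """Success criterion of Figure 1: squared Frobenius gap below tol."""
    return np.linalg.norm(L1 - L2, 'fro')**2 + np.linalg.norm(S1 - S2, 'fro')**2 <= tol
```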

5. Results and Conclusions

In this paper, we have studied strongly convex programming for principal component pursuit with reduced linear measurements.
We first provided sufficient conditions under which the strongly convex models lead to exact recovery of the low-rank and sparse components, i.e.,
Assuming
$$\tau \geq \frac{8\sqrt{15}\,\|M\|_F}{3\lambda}$$
and the other assumptions of Theorem 1, $(L_0, S_0)$ is the unique solution to the strongly convex programming (2) with high probability.
Secondly, we gave a criterion for the choice of the value of τ, which provides useful guidance on how to set suitable parameters when designing efficient algorithms. In particular, it is easy to see that the main results of [16] are a special case of our results; in this sense, we extend the result on choosing suitable parameters to a more general problem.

Acknowledgments

The author would like to thank the anonymous reviewers for their comments that helped to improve the quality of the paper. This research was supported by the National Natural Science Foundation of China (NSFC) under Grant U1533125, and the Scientific Research Program of the Education Department of Sichuan under Grant 16ZB0032.

Author Contributions

Qingshan You and Qun Wan conceived and designed the experiments; Qingshan You performed the experiments; Qingshan You and Qun Wan analyzed the data; Qingshan You contributed reagents/materials/analysis tools; Qingshan You wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fazel, M. Matrix Rank Minimization with Applications. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2002. [Google Scholar]
  2. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef]
  3. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936. [Google Scholar] [CrossRef]
  4. Candès, E.J.; Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080. [Google Scholar] [CrossRef]
  5. Ellenberg, J. Fill in the blanks: Using math to turn lo-res datasets into hi-res samples. Wired. 2010. Available online: https://www.wired.com/2010/02/ff_algorithm/all/1 (accessed on 26 January 2016).
  6. Wright, J.; Yang, A.; Ganesh, A.; Ma, Y.; Sastry, S. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  7. Chambolle, A.; Lions, P.L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188. [Google Scholar] [CrossRef]
  8. Claerbout, J.F.; Muir, F. Robust modeling of erratic data. Geophysics 1973, 38, 826–844. [Google Scholar] [CrossRef]
  9. Papadimitriou, C.; Raghavan, P.; Tamaki, H.; Vempala, S. Latent semantic indexing: A probabilistic analysis. In Proceedings of the Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Seattle, WA, USA, 1–4 June 1998; Volume 61.
  10. Argyriou, A.; Evgeniou, T.; Pontil, M. Convex multi-task feature learning. Mach. Learn. 2008, 73, 243–272. [Google Scholar] [CrossRef]
  11. Bouwmans, T.; Sobral, A.; Javed, S.; Jung, S.; Zahzah, E. Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset. Comput. Vis. Pattern Recognit. 2016. [Google Scholar] [CrossRef]
  12. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2009, 58, 1–37. [Google Scholar] [CrossRef]
  13. Cai, J.F.; Osher, S.; Shen, Z. Linearized Bregman Iterations for Compressed Sensing. Math. Comp. 2009, 78, 1515–1536. [Google Scholar] [CrossRef]
  14. Wright, J.; Ganesh, A.; Rao, S.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. 2009; arXiv:0905.0233. [Google Scholar]
  15. Cai, J.-F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  16. Zhang, H.; Cai, J.-F.; Cheng, L.; Zhu, J. Strongly Convex Programming for Exact Matrix Completion and Robust Principal Component Analysis. Inverse Probl. Imaging 2012, 6, 357–372. [Google Scholar] [CrossRef]
  17. Ganesh, A.; Min, K.; Wright, J.; Ma, Y. Principal Component Pursuit with Reduced Linear Measurements. 2012; arXiv:1202.6445v1. [Google Scholar]
  18. Ledoux, M. The Concentration of Measure Phenomenon; American Mathematical Society: Providence, RI, USA, 2001. [Google Scholar]
Figure 1. Probability of correct recovery with different values of τ.
