Article

Heavy-Ball-Based Hard Thresholding Pursuit for Sparse Phase Retrieval Problems

Yingying Li, Jinchuan Zhou, Zhongfeng Sun and Jingyong Tang
1 School of Mathematics and Statistics, Shandong University of Technology, Zibo 255000, China
2 School of Mathematics and Statistics, Xinyang Normal University, Xinyang 464000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2744; https://doi.org/10.3390/math11122744
Submission received: 17 May 2023 / Revised: 13 June 2023 / Accepted: 15 June 2023 / Published: 16 June 2023
(This article belongs to the Special Issue Optimization Theory, Method and Application)

Abstract: We introduce a novel iterative algorithm, termed the Heavy-Ball-Based Hard Thresholding Pursuit for the sparse phase retrieval problem (SPR-HBHTP), to reconstruct a sparse signal from a small number of magnitude-only measurements. Our algorithm is obtained via a natural combination of the Hard Thresholding Pursuit for sparse phase retrieval (SPR-HTP) and the classical Heavy-Ball (HB) acceleration method. The robustness and convergence of the proposed algorithm are established with the help of the restricted isometry property. Furthermore, we prove that our algorithm can exactly recover a sparse signal with overwhelming probability in finitely many steps whenever the initialization lies in a neighborhood of the underlying sparse signal, provided that the measurements are accurate. Extensive numerical tests show that SPR-HBHTP has a markedly improved recovery performance and runtime compared to existing alternatives, such as the Hard Thresholding Pursuit for the sparse phase retrieval problem (SPR-HTP), the SPARse Truncated Amplitude Flow (SPARTA), and Compressive Phase Retrieval with Alternating Minimization (CoPRAM).

1. Introduction

In many engineering problems, one wishes to reconstruct a signal from the (squared) modulus of its Fourier (or any linear) transform. This is called phase retrieval (PR). In particular, if the target signal is sparse, the task is referred to as a sparse phase retrieval problem. The corresponding mathematical model is to recover a sparse vector $x \in \mathbb{R}^n$ from a system of phaseless equations of the form
$$y_i = |\langle a_i, x\rangle|, \quad i = 1, 2, \ldots, m, \quad \text{subject to } \|x\|_0 \le s, \tag{1}$$
where $\{a_i\}_{i=1}^m$ is a set of n-dimensional sensing vectors, $\{y_i\}_{i=1}^m$ are the observed modulus data, and s is the sparsity level (s is much smaller than n and is assumed to be known a priori for theoretical analysis purposes). In fact, solving the above problem amounts to solving the following optimization problem, whose optimal value equals zero:
$$\min\ \frac{1}{2}\big\|y - |Ax|\big\|_2^2 \quad \text{s.t. } \|x\|_0 \le s, \tag{2}$$
where the measurement matrix A and the observed data y are formed as follows:
$$A := \frac{1}{\sqrt{m}}\,[a_1\ a_2\ \cdots\ a_m]^T \in \mathbb{R}^{m\times n}, \qquad y := \frac{1}{\sqrt{m}}\,[y_1\ y_2\ \cdots\ y_m]^T. \tag{3}$$
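To make the measurement model concrete, the following minimal sketch (our own illustration, not the authors' code; the function name is hypothetical) generates a synthetic instance of (1)-(3) with i.i.d. Gaussian sensing vectors, as used throughout the theory and experiments below.

```python
import numpy as np

def generate_instance(n, m, s, rng=None):
    """Sample an s-sparse x and phaseless Gaussian measurements per (1)-(3)."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)  # uniformly random support
    x[support] = rng.standard_normal(s)             # N(0,1) nonzero entries
    a = rng.standard_normal((m, n))                 # rows a_i ~ N(0, I)
    y_raw = np.abs(a @ x)                           # y_i = |<a_i, x>|
    # The normalized quantities of (3) are A = a/sqrt(m) and y = y_raw/sqrt(m).
    return a, y_raw, x
```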
The phase retrieval problem arises naturally in many important applications, such as X-ray crystallography, microscopy, and astronomical imaging. Interested readers are referred to [1] and the references therein for a more detailed discussion of the scientific and engineering background of the model.
Although several heuristic methods [2,3,4,5] are commonly used to solve (1), it is generally accepted that (1) is a very challenging, ill-posed, nonlinear inverse problem in both theory and practice. In particular, (1) is an NP-hard problem [6]. At present, phase retrieval approaches can be mainly categorized as convex and nonconvex. A popular class of nonconvex approaches is based on alternating projections, e.g., the groundbreaking works by Gerchberg and Saxton [2], Fienup [3], Chen et al. [7], and Waldspurger [8]. The convex alternatives either rely on the so-called Shor relaxation to obtain a semidefinite programming (SDP) solver, known as PhaseLift [9] and PhaseCut [10], or solve a basis pursuit problem in the dual domain, as done in PhaseMax [11,12]. Subsequently, a series of breakthrough results [9,13,14,15] provided provably valid algorithms for special cases in which the measurement vectors are drawn randomly from certain multivariate probability distributions, such as the Gaussian distribution. The phase retrieval methods of [9,13,14] require the number of observations m to exceed the problem dimension n. However, for the sparse phase retrieval problem, the true signal can be recovered successfully even if the number of measurements m is less than the length n of the signal. In particular, a recent paper [16] utilized a random sampling technique to achieve the best empirical sampling complexity; in other words, it requires fewer measurements than state-of-the-art algorithms for sparse phase retrieval problems. For more on phase retrieval, interested readers may refer to [17,18,19,20,21,22,23,24,25,26,27].
There is a close relationship between the phase retrieval problem and the compressed sensing problem [28,29,30]. Compressed sensing, also known as compressive sampling, is an information technology used to find sparse solutions of underdetermined linear systems. Its mathematical model can be expressed as finding a sparse vector x from the following linear system with a sparsity constraint:
$$y_i = \langle a_i, x\rangle, \quad i = 1, 2, \ldots, m, \quad \text{subject to } \|x\|_0 \le s. \tag{4}$$
Several mainstream algorithms have been proposed for this problem, including Iterative Hard Thresholding (IHT) [31], Orthogonal Matching Pursuit (OMP) [32], Compressive Sampling Matching Pursuit (CoSaMP) [33], Subspace Pursuit (SP) [34], and Hard Thresholding Pursuit (HTP) [29]. The optimization model for problem (4) is
$$\min\ \frac{1}{2}\|y - Ax\|_2^2 \quad \text{s.t. } \|x\|_0 \le s. \tag{5}$$
Comparing problems (1) and (4), we see that problem (1) has an extra absolute value compared with problem (4). This makes the objective function of (2) nonsmooth, in contrast to (5), and accordingly leads to a substantial difference in algorithm design. How can we naturally modify a compressed sensing algorithm to solve the phase retrieval problem? A recent breakthrough in this direction was the algorithm (which we call SPR-HTP) proposed in [15], a modification of HTP from linear measurements to phaseless measurements. Inspired by [15], we believe that an acceleration method can be used to speed up SPR-HTP. The Heavy-Ball method, first introduced by Polyak [35], is a classical and efficient acceleration technique. More recently, the authors of [36] combined the Heavy-Ball acceleration method with the classic HTP to obtain HBHTP for compressed sensing; their numerical experiments show that HBHTP outperforms the classical HTP in terms of recovery capability and runtime.
Motivated by the above, we propose in this paper a novel nonconvex algorithm to offset the disadvantages of existing algorithms. The new algorithm is called the Heavy-Ball-Based Hard Thresholding Pursuit for the sparse phase retrieval problem (SPR-HBHTP). Similar to most existing nonconvex algorithms, our proposed algorithm consists of two stages: an initialization part and an iteration part. The optimization model (2) can have multiple local minimizers due to its nonconvexity; hence, the initialization step is crucial to ensure that the initial point falls within a certain neighborhood of the true signal. Common initialization methods include orthogonality-promoting initialization, spectral initialization, and variants of spectral initialization. In our algorithm, we adopt the off-the-shelf spectral initialization as the initial step; for more details, please refer to [15]. In the iterative refinement stage, the search direction of SPR-HTP is
$$\alpha A^T\big(y\odot\mathrm{sgn}(Ax^n) - Ax^n\big),$$
where $\odot$ represents the Hadamard product (defined below) of two vectors, and $\alpha > 0$ is a positive stepsize parameter. Our algorithm SPR-HBHTP is a combination of SPR-HTP and the Heavy-Ball method, and its search direction is
$$\alpha A^T\big(y\odot\mathrm{sgn}(Ax^n) - Ax^n\big) + \beta\big(x^n - x^{n-1}\big)$$
with two parameters $\alpha > 0$ and $\beta \ge 0$. Clearly, SPR-HBHTP reduces to SPR-HTP when $\beta = 0$. The theoretical and numerical results below show that the modified algorithm (SPR-HBHTP), which uses the momentum term $x^n - x^{n-1}$, matches the best available state-of-the-art sparse phase retrieval methods. With the help of the restricted isometry property (RIP) [37], the convergence of our algorithm is established, and the algorithm is proved to recover sparse signals robustly under inaccurate measurements. Moreover, our algorithm exactly recovers an s-sparse signal in finitely many steps if the measurements are accurate.
Our contributions in this paper are as follows: we propose a new class of HTP-type algorithms, called the Heavy-Ball-Based Hard Thresholding Pursuit for the sparse phase retrieval problem (SPR-HBHTP), which is a natural combination of the Hard Thresholding Pursuit for sparse phase retrieval (SPR-HTP) and the classical Heavy-Ball (HB) acceleration method. On the theoretical side, a new analysis framework is introduced to establish the performance guarantees of the proposed algorithm by resorting to the restricted isometry property of the measurement matrix. The local convergence of our algorithm is established regardless of whether the measurements are noisy, and an estimate of the number of iterations is given in the case of accurate measurements. On the numerical side, phase transition curves, grayscale maps, and algorithm selection maps demonstrate that the new algorithm SPR-HBHTP is numerically much more efficient than existing alternatives, such as SPR-HTP, SPARTA, and CoPRAM, in terms of both recovery success rate and recovery time.
The rest of the paper is organized as follows: we describe the SPR-HBHTP in Section 2. A theoretical analysis of the proposed algorithm is conducted in Section 3. Numerical experiments to illustrate the performance of the algorithm are given in Section 4, and conclusions are drawn in the last section.

2. Preliminary and Algorithms

2.1. Notations

We introduce some notation used throughout the paper. Let $[N]$ denote the set $\{1, 2, \ldots, N\}$ and $|S|$ the cardinality of a set S. Denote by $\bar S$ the complement $[N]\setminus S$ of a set S in $[N]$. For $x\in\mathbb{R}^n$, the signum function $\mathrm{sgn}(x)\in\mathbb{R}^n$ is defined as
$$[\mathrm{sgn}(x)]_i := \begin{cases} 1, & x_i > 0,\\ 0, & x_i = 0,\\ -1, & x_i < 0. \end{cases}$$
The hard thresholding operator $H_s(x)$ keeps the s largest entries of x in absolute value and sets the others to zero. For an index set $\Omega\subseteq[N]$, $A_\Omega$ denotes the matrix obtained from $A\in\mathbb{R}^{m\times n}$ by keeping only the columns indexed by $\Omega$, and $x_\Omega$ stands for the vector obtained by retaining the components of x indexed by $\Omega$ and zeroing out the remaining components of x. The set $\mathrm{supp}(x) := \{i\in[N] : x_i\ne 0\}$ is called the support of x, and $L_s(x)$ stands for the support of $H_s(x)$. For two sets S and $\tilde S$, $S\triangle\tilde S := (S\setminus\tilde S)\cup(\tilde S\setminus S)$ is the symmetric difference between S and $\tilde S$. For $x, z\in\mathbb{R}^n$, the distance between x and z is defined as
$$\mathrm{dist}(x, z) := \min\big\{\|x-z\|_2,\ \|x+z\|_2\big\}. \tag{6}$$
The notation $\odot$ represents the Hadamard product of two vectors, i.e.,
$$a\odot b := (a_1b_1,\ a_2b_2,\ \ldots,\ a_nb_n)^T.$$
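For readers who want to experiment, here is a minimal sketch (ours; the helper names are illustrative, not from the paper) of these operators in NumPy:

```python
import numpy as np

def hard_threshold(x, s):
    """H_s(x): keep the s largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # L_s(x), the support of H_s(x)
    out[keep] = x[keep]
    return out

def dist(x, z):
    """Sign-invariant distance (6) between two vectors."""
    return min(np.linalg.norm(x - z), np.linalg.norm(x + z))

# sgn(x) is np.sign(x) (it returns 0 at 0, matching the definition above),
# and the Hadamard product a ⊙ b is plain elementwise multiplication a * b.
```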

2.2. New Algorithms

Alternative initialization methods include orthogonality-promoting initialization [38], spectral initialization [39], and other initializations [40]. In this paper, we use spectral initialization to constitute the initial step of the algorithm.
Algorithm 1: Heavy-Ball-Based Hard Thresholding Pursuit for sparse phase retrieval (SPR-HBHTP).
Input: sensing vectors $\{a_i\}_{i=1}^m$, observed data $\{y_i\}_{i=1}^m$, and two parameters $\alpha > 0$ and $\beta \ge 0$.
Initialization:
$$\phi^2 := \frac{1}{m}\sum_{i=1}^m y_i^2, \qquad v_j := \frac{1}{m}\sum_{i=1}^m y_i^2 a_{ij}^2,\ \ j = 1, \ldots, n, \qquad \tilde S := L_s(v),$$
$$W := \frac{1}{m}\sum_{i=1}^m y_i^2\, a_{i\tilde S}\, a_{i\tilde S}^T, \qquad x^0 := \phi\,\tilde u,$$
where $\tilde u$ is the unit principal eigenvector of W (extended to $\mathbb{R}^n$ by zero-padding outside $\tilde S$). Set $x^1 = x^0$.
Repeat:
$$z^{n+1} = Ax^n, \qquad y^{n+1} = y\odot\mathrm{sgn}(z^{n+1}),$$
$$S^{n+1} = \mathrm{supp}\Big(H_s\big(x^n + \alpha A^T(y^{n+1} - z^{n+1}) + \beta(x^n - x^{n-1})\big)\Big),$$
$$x^{n+1} = \arg\min\Big\{\frac{1}{2}\|Az - y^{n+1}\|_2^2 : \mathrm{supp}(z)\subseteq S^{n+1}\Big\}.$$
Output: the s-sparse vector $x^{n+1}$.
The following is a brief explanation of the initialization in Algorithm 1; for a more rigorous and in-depth discussion, please refer to [15,39].
Estimate the support: since $\mathbb{E}\big[\frac{1}{m}\sum_{i=1}^m y_i^2 a_{ij}^2\big] = \|x\|_2^2 + 2x_j^2$, the set $\tilde S = L_s\big(\{\frac{1}{m}\sum_{i=1}^m y_i^2 a_{ij}^2\}_{j=1}^n\big)$ contains a large portion of the correct support indices, so it is a good approximation of the support of x.
Compute the signal: note that
$$\mathbb{E}\Big[\frac{1}{m}\sum_{i=1}^m y_i^2\, a_i a_i^T\Big] = \Big(I + 2\,\frac{x}{\|x\|_2}\cdot\frac{x^T}{\|x\|_2}\Big)\|x\|_2^2, \qquad \mathbb{E}\Big[\frac{1}{m}\sum_{i=1}^m y_i^2\Big] = \mathbb{E}\,\|y\|_2^2 = \|x\|_2^2.$$
Hence, the principal eigenvector of the matrix W in Algorithm 1 gives a good initial guess of the direction of the true signal x. We scale the principal eigenvector to length $\|y\|_2$ and take it as the initial point $x^0$, so that the power of the initial estimate $x^0$ is close to that of the true signal x.
The most important conclusion of the initialization process is that the initial point falls within a certain neighborhood of the true signal with respect to the distance defined in (6).
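Putting the initialization and the iteration together, the following compact sketch (ours, written directly from the pseudocode of Algorithm 1; the variable names and the choice of least-squares solver are our own) may help fix ideas. It consumes the raw quantities returned by generate_instance above.

```python
import numpy as np

def spr_hbhtp(a, y_raw, s, alpha=2.0, beta=0.7, max_iter=50):
    """Sketch of Algorithm 1; `a` holds the raw sensing vectors a_i as rows,
    `y_raw` the raw moduli y_i. Both are rescaled by 1/sqrt(m) as in (3)."""
    m, n = a.shape
    A, y = a / np.sqrt(m), y_raw / np.sqrt(m)
    # --- Spectral initialization ---
    phi = np.sqrt(np.mean(y_raw ** 2))            # phi^2 = (1/m) sum_i y_i^2
    v = (y_raw ** 2) @ (a ** 2) / m               # v_j = (1/m) sum_i y_i^2 a_ij^2
    S0 = np.argsort(v)[-s:]                       # S~ = L_s(v)
    W = (a[:, S0].T * y_raw ** 2) @ a[:, S0] / m  # (1/m) sum_i y_i^2 a_iS~ a_iS~^T
    u = np.linalg.eigh(W)[1][:, -1]               # unit principal eigenvector of W
    x = np.zeros(n)
    x[S0] = phi * u                               # x^0, zero outside S~
    x_prev = x.copy()                             # x^1 = x^0
    # --- Heavy-Ball hard thresholding pursuit iterations ---
    for _ in range(max_iter):
        z = A @ x                                 # z^{n+1} = A x^n
        y_signed = y * np.sign(z)                 # y^{n+1} = y ⊙ sgn(z^{n+1})
        u_vec = x + alpha * (A.T @ (y_signed - z)) + beta * (x - x_prev)
        S = np.argsort(np.abs(u_vec))[-s:]        # S^{n+1} = supp(H_s(u^n))
        x_next = np.zeros(n)
        x_next[S] = np.linalg.lstsq(A[:, S], y_signed, rcond=None)[0]  # debias on S^{n+1}
        x_prev, x = x, x_next
    return x
```

On a typical draw, a, y_raw, x = generate_instance(3000, 1000, 20) followed by spr_hbhtp(a, y_raw, 20) should recover x up to the global sign ambiguity.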
Lemma 1
([39] [Theorem IV.1]). Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I. Let $x^0$ be generated by the initialization in Algorithm 1 with input $y_i = |a_i^Tx|$ for $i = 1, \ldots, m$, where $x\in\mathbb{R}^n$ is a signal satisfying $\|x\|_0\le s$. Then, for any $\lambda_0\in(0,1)$, there exists a positive constant C depending only on $\lambda_0$ such that if $m \ge Cs^2\log n$, we have
$$\mathrm{dist}(x^0, x) \le \lambda_0\|x\|_2$$
with probability at least $1 - 8m^{-1}$.

3. Convergence Analysis

We first list the main lemmas and then present our results on the local convergence of the proposed algorithm (SPR-HBHTP).
Lemma 2
([30] [Theorem 9.27]). Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I, and let A be defined in (3). There exist universal positive constants $C_1, C_2$ such that, for any natural number $r\le n$ and any $\delta_r\in(0,1)$, if $m \ge C_1\delta_r^{-2}\, r\log(n/r)$, then A satisfies the following r-RIP with probability at least $1 - e^{-C_2 m}$:
$$(1-\delta_r)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_r)\|x\|_2^2 \quad \text{for all } \|x\|_0 \le r. \tag{7}$$
Lemma 3
([36] [Lemma 3.1]). Suppose that the non-negative sequence $\{\tau_n\}\subset\mathbb{R}$ ($n = 0, 1, \ldots$) satisfies
$$\tau_{n+1} \le b_1\tau_n + b_2\tau_{n-1} + b_3, \quad n\ge 1, \tag{8}$$
where $b_1, b_2, b_3 \ge 0$ and $b_1 + b_2 < 1$. Then,
$$\tau_n \le \theta^{n-1}\big(\tau_1 + (\theta - b_1)\tau_0\big) + \frac{b_3}{1-\theta}$$
with
$$0 \le \theta := \frac{b_1 + \sqrt{b_1^2 + 4b_2}}{2} < 1.$$
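For intuition, the constant $\theta$ here is simply the positive root of the characteristic equation $t^2 = b_1 t + b_2$ of the two-term recursion (8), and the assumption $b_1 + b_2 < 1$ is precisely what forces $\theta < 1$. A quick verification, added here for the reader (direct computation, not part of the cited proof):

```latex
\theta^2 - b_1\theta - b_2
  = \frac{\big(b_1+\sqrt{b_1^2+4b_2}\big)^2 - 2b_1\big(b_1+\sqrt{b_1^2+4b_2}\big) - 4b_2}{4}
  = \frac{(b_1^2+4b_2) - b_1^2 - 4b_2}{4} = 0,
\qquad
b_1+b_2 < 1 \;\Rightarrow\; b_1^2+4b_2 < b_1^2+4-4b_1 = (2-b_1)^2
          \;\Rightarrow\; \theta < \frac{b_1+(2-b_1)}{2} = 1.
```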
Lemma 4
([29,41]). Let $u\in\mathbb{R}^n$, $v\in\mathbb{R}^m$, and $S\subseteq\{1,\ldots,n\}$. Assume that condition (7) holds.
(i).
If $|S\cup\mathrm{supp}(u)| \le r$, then
$$\big\|\big((I - A^TA)u\big)_S\big\|_2 \le \delta_r\|u\|_2. \tag{9}$$
(ii).
If $|S| \le r$, then
$$\big\|\big(A^Tv\big)_S\big\|_2 \le \sqrt{1+\delta_r}\,\|v\|_2. \tag{10}$$
Lemma 5
([15] [Lemma 2]). Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I, and let $\lambda_0$ be any constant in $(0, \frac{1}{8}]$. For any fixed $\varepsilon_0 > 0$, there exist universal positive constants $C_3, C_4$ such that if
$$m \ge C_3\, s\log(n/s),$$
then, with probability at least $1 - e^{-C_4 m}$, it holds that
$$\frac{1}{m}\sum_{i=1}^m |a_i^Tx|^2\cdot\mathbb{1}_{\{(a_i^Tx')(a_i^Tx)\le 0\}} \le \frac{1}{(1-\lambda_0)^2}\Big(\varepsilon_0 + \sqrt{\tfrac{21}{20}}\,\lambda_0\Big)^2\,\|x' - x\|_2^2 \tag{11}$$
whenever $x, x'$ satisfy $\|x\|_0\le s$, $\|x'\|_0\le s$, and $\|x' - x\|_2 \le \lambda_0\|x\|_2$.
Lemma 6.
Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I. Let $z, x, \hat y, \lambda_0$ be given with
$$\hat y = |Ax|\odot\mathrm{sgn}(Az), \qquad \|z - x\|_2 \le \lambda_0\|x\|_2, \qquad \|z\|_0 \le s, \qquad \lambda_0\in(0, \tfrac{1}{8}].$$
For any $\varepsilon_0 > 0$, there exist universal positive constants $C_5$ and $C_6$ such that if
$$m \ge C_5\, s\log(n/s), \tag{12}$$
then the following inequality holds with probability at least $1 - e^{-C_6 m}$:
$$\big\|\big(A^T(\hat y - Ax)\big)_S\big\|_2 \le \theta(\lambda_0)\sqrt{1+\delta_s}\,\|z - x\|_2,$$
where $\theta(\lambda_0) := \frac{2}{1-\lambda_0}\big(\varepsilon_0 + \sqrt{\frac{21}{20}}\,\lambda_0\big)$ and $S\subseteq\{1, 2, \ldots, n\}$ with $|S|\le s$.
Proof of Lemma 6.
Pick $C_5 := \max\{C_1\delta_s^{-2}, C_3\}$, where $C_1, C_3$ come from Lemmas 2 and 5, respectively. Condition (12) then ensures the validity of (7) with probability at least $1 - e^{-C_2 m}$ and of (11) with probability at least $1 - e^{-C_4 m}$. Let
$$C_6 := -\frac{1}{m}\ln\big(e^{-C_2 m} + e^{-C_4 m} - e^{-(C_2+C_4)m}\big).$$
It is easy to see that $(1 - e^{-C_2 m})(1 - e^{-C_4 m}) = 1 - e^{-C_6 m}$; in other words, (7) and (11) hold simultaneously with probability at least $1 - e^{-C_6 m}$. Replacing $y^{k+1}$, $x^k$, and x used in [15] [Equation (16)] by $\hat y$, z, and x, we obtain
$$\|\hat y - Ax\|_2^2 \le \frac{4}{(1-\lambda_0)^2}\Big(\varepsilon_0 + \sqrt{\tfrac{21}{20}}\,\lambda_0\Big)^2\,\|z - x\|_2^2 = \theta(\lambda_0)^2\,\|z - x\|_2^2.$$
This, together with (10) in Lemma 4, leads to
$$\big\|\big(A^T(\hat y - Ax)\big)_S\big\|_2 \le \sqrt{1+\delta_s}\,\|\hat y - Ax\|_2 \le \theta(\lambda_0)\sqrt{1+\delta_s}\,\|z - x\|_2. \qquad\square$$
The local convergence property of Algorithm 1 is established in the following result.
Theorem 1.
(Local convergence). Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I, and let $\lambda_0$ be any constant in $(0, \frac{1}{8}]$. Suppose that the RIC $\delta_{3s}$ of the matrix A and the parameters $\alpha$ and $\beta$ obey
$$0 < \delta_{3s} < \delta^*, \qquad 0 \le \beta < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}} - 1, \tag{13}$$
$$\frac{1 + 2\beta - \frac{1}{\eta} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}} < \alpha < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}}, \tag{14}$$
where
$$\theta(\lambda_0) := \frac{2}{1-\lambda_0}\Big(\varepsilon_0 + \sqrt{\tfrac{21}{20}}\,\lambda_0\Big), \qquad \varepsilon_0 := 10^{-3}, \qquad \eta := \sqrt{\frac{2}{1-\delta_{2s}^2}}, \tag{15}$$
and $\delta^*$ is the unique root in the interval (0,1) of the equation $\gamma(x) = \theta(\lambda_0)$ with
$$\gamma(x) := \frac{(1-x)\sqrt{1+x} - x\sqrt{2(1-x)}}{1 + x + \sqrt{2(1-x^2)}}.$$
For an s-sparse signal $x\in\mathbb{R}^n$, there exist universal positive constants $C_7, C_8$ such that if
$$m \ge C_7\, s\log(n/s), \qquad \mathrm{dist}(x^0, x) \le \lambda_0\|x\|_2,$$
then the sequence $\{x^n\}$ generated by Algorithm 1 with input measured data $y = |Ax|$ satisfies
$$\mathrm{dist}(x^{n+1}, x) \le \tau^n\min\big\{\|x^1 - x\|_2 + (\tau - b)\|x^0 - x\|_2,\ \|x^1 + x\|_2 + (\tau - b)\|x^0 + x\|_2\big\} \tag{16}$$
with probability at least $(1 - e^{-C_8 m})^n$, where
$$\tau := \frac{b + \sqrt{b^2 + 4\eta\beta}}{2}, \qquad b := \eta\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}}, \tag{17}$$
and $\tau < 1$ is guaranteed under conditions (13) and (14).
Proof of Theorem 1.
Note that $x^1 = x^0$ in the design of Algorithm 1 and that $x^0$ is generated by the initialization process. Hence, $\mathrm{dist}(x^0, x) = \mathrm{dist}(x^1, x) \le \lambda_0\|x\|_2$ by Lemma 1. Since $\mathrm{dist}(x^0, x) = \min\{\|x^0 - x\|_2, \|x^0 + x\|_2\}$, we may assume without loss of generality that $\mathrm{dist}(x^0, x) = \|x^0 - x\|_2$ (the case $\mathrm{dist}(x^0, x) = \|x^0 + x\|_2$ can be proved by a similar argument). Hence, $\|x^0 - x\|_2 \le \lambda_0\|x\|_2$ and $\|x^1 - x\|_2 \le \lambda_0\|x\|_2$. Our proof is based on mathematical induction, so we further assume that $\|x^{n-1} - x\|_2 \le \lambda_0\|x\|_2$ and $\|x^n - x\|_2 \le \lambda_0\|x\|_2$.
The first step of the proof is a consequence of the pursuit step of Algorithm 1. Recall that
$$x^{n+1} = \arg\min\Big\{\frac{1}{2}\|y^{n+1} - Az\|_2^2 : \mathrm{supp}(z)\subseteq S^{n+1}\Big\}.$$
As the best $\ell_2$-approximation to $y^{n+1}$ from the space $\{Az : \mathrm{supp}(z)\subseteq S^{n+1}\}$, the vector $Ax^{n+1}$ is characterized by
$$\langle y^{n+1} - Ax^{n+1}, Az\rangle = 0 \quad \text{whenever } \mathrm{supp}(z)\subseteq S^{n+1},$$
i.e., $\langle A^T(y^{n+1} - Ax^{n+1}), z\rangle = 0$ whenever $\mathrm{supp}(z)\subseteq S^{n+1}$, or equivalently $\big(A^T(y^{n+1} - Ax^{n+1})\big)_{S^{n+1}} = 0$. In particular, we derive
$$\begin{aligned}\|(x^{n+1} - x)_{S^{n+1}}\|_2^2 &= \big\langle (x^{n+1} - x)_{S^{n+1}},\ (x^{n+1} - x)_{S^{n+1}}\big\rangle\\ &= \big\langle (x^{n+1} - x)_{S^{n+1}},\ x^{n+1} - x + A^T(y^{n+1} - Ax^{n+1})\big\rangle\\ &= \big\langle (x^{n+1} - x)_{S^{n+1}},\ \big((I - A^TA)(x^{n+1} - x)\big)_{S^{n+1}}\big\rangle + \big\langle (x^{n+1} - x)_{S^{n+1}},\ \big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}}\big\rangle\\ &\le \|(x^{n+1} - x)_{S^{n+1}}\|_2\Big(\big\|\big((I - A^TA)(x^{n+1} - x)\big)_{S^{n+1}}\big\|_2 + \big\|\big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}}\big\|_2\Big).\end{aligned} \tag{18}$$
Since $\mathrm{supp}(x^{n+1})\subseteq S^{n+1}$, we have $|\mathrm{supp}(x^{n+1} - x)\cup S^{n+1}| \le 2s$ and $|S^{n+1}| \le s$. It follows from Lemmas 4 and 6 that if $m \ge \max\{C_1\delta_{2s}^{-2}(2s)\log(n/(2s)),\ C_5\, s\log(n/s)\}$, then
$$\big\|\big((I - A^TA)(x^{n+1} - x)\big)_{S^{n+1}}\big\|_2 \le \delta_{2s}\|x^{n+1} - x\|_2 \tag{19}$$
with probability at least $1 - e^{-C_2 m}$, and
$$\big\|\big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}}\big\|_2 \le \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2 \tag{20}$$
with probability at least $1 - e^{-C_6 m}$. Combining (18)-(20) leads to
$$\|(x^{n+1} - x)_{S^{n+1}}\|_2 \le \delta_{2s}\|x^{n+1} - x\|_2 + \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2. \tag{21}$$
Hence,
$$\|x^{n+1} - x\|_2^2 = \|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2^2 + \|(x^{n+1} - x)_{S^{n+1}}\|_2^2 \le \|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2^2 + \big(\delta_{2s}\|x^{n+1} - x\|_2 + \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2\big)^2.$$
Denote $\vartheta := \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2$. The above reads as $g(\|x^{n+1} - x\|_2) \le 0$ for the quadratic polynomial
$$g(t) := (1 - \delta_{2s}^2)t^2 - 2\vartheta\delta_{2s}t - \big(\|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2^2 + \vartheta^2\big).$$
Hence, $\|x^{n+1} - x\|_2$ is bounded by the largest root of g, i.e.,
$$\|x^{n+1} - x\|_2 \le \frac{\vartheta\delta_{2s} + \sqrt{(1 - \delta_{2s}^2)\|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2^2 + \vartheta^2}}{1 - \delta_{2s}^2}.$$
Using $\sqrt{a^2 + b^2} \le a + b$ for $a, b \ge 0$, we obtain
$$\|x^{n+1} - x\|_2 \le \frac{1}{\sqrt{1 - \delta_{2s}^2}}\,\|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2 + \frac{\vartheta}{1 - \delta_{2s}}. \tag{22}$$
The second step of the proof is a consequence of the hard thresholding step of Algorithm 1. Denote
$$u^n := x^n + \alpha A^T\big(y^{n+1} - Ax^n\big) + \beta\big(x^n - x^{n-1}\big). \tag{23}$$
Since $S^{n+1} = \mathrm{supp}(H_s(u^n))$ and $S := \mathrm{supp}(x)$, we have
$$\|(u^n)_{S^{n+1}}\|_2^2 \ge \|(u^n)_S\|_2^2.$$
Since $S^{n+1}\setminus(S\cap S^{n+1}) = S^{n+1}\setminus S$ and $S\setminus(S\cap S^{n+1}) = S\setminus S^{n+1}$, this yields
$$\|(u^n)_{S^{n+1}\setminus S}\|_2^2 \ge \|(u^n)_{S\setminus S^{n+1}}\|_2^2.$$
Taking $(x)_{S^{n+1}\setminus S} = 0$ and $(x^{n+1})_{S\setminus S^{n+1}} = 0$ into consideration, we write
$$\|(u^n - x)_{S^{n+1}\setminus S}\|_2 \ge \|(x - x^{n+1} + u^n - x)_{S\setminus S^{n+1}}\|_2 \ge \|(x - x^{n+1})_{\overline{S^{n+1}}}\|_2 - \|(u^n - x)_{S\setminus S^{n+1}}\|_2.$$
Hence,
$$\|(x - x^{n+1})_{\overline{S^{n+1}}}\|_2 \le \|(u^n - x)_{S\setminus S^{n+1}}\|_2 + \|(u^n - x)_{S^{n+1}\setminus S}\|_2 \le \sqrt{2}\,\sqrt{\|(u^n - x)_{S\setminus S^{n+1}}\|_2^2 + \|(u^n - x)_{S^{n+1}\setminus S}\|_2^2} = \sqrt{2}\,\|(u^n - x)_{S^{n+1}\triangle S}\|_2. \tag{24}$$
Note from (23) that
$$u^n - x = (1 - \alpha + \beta)(x^n - x) - \beta(x^{n-1} - x) + \alpha(I - A^TA)(x^n - x) + \alpha A^T(y^{n+1} - Ax).$$
Then,
$$\begin{aligned}\|(u^n - x)_{S^{n+1}\triangle S}\|_2 &\le |1 - \alpha + \beta|\cdot\|(x^n - x)_{S^{n+1}\triangle S}\|_2 + \alpha\big\|\big((I - A^TA)(x^n - x)\big)_{S^{n+1}\triangle S}\big\|_2\\ &\quad + \alpha\big\|\big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}\triangle S}\big\|_2 + \beta\,\|(x^{n-1} - x)_{S^{n+1}\triangle S}\|_2\\ &\le \big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\|x^n - x\|_2 + \beta\,\|x^{n-1} - x\|_2,\end{aligned} \tag{25}$$
where the second inequality is based on the fact that
$$\big\|\big((I - A^TA)(x^n - x)\big)_{S^{n+1}\triangle S}\big\|_2 \le \delta_{3s}\|x^n - x\|_2 \tag{26}$$
with probability at least $1 - e^{-C_2 m}$ when $m \ge C_1\delta_{3s}^{-2}(3s)\log(n/(3s))$, by (9) in Lemma 4, since $|(S^{n+1}\triangle S)\cup\mathrm{supp}(x^n - x)| \le 3s$, and
$$\big\|\big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}\triangle S}\big\|_2 \le \theta(\lambda_0)\sqrt{1+\delta_{2s}}\,\|x^n - x\|_2 \tag{27}$$
with probability at least $1 - e^{-C_6 m}$ when $m \ge C_5(2s)\log(n/(2s))$, by Lemma 6, since $|S^{n+1}\triangle S| \le 2s$. Combining (24) with (25), we have
$$\|(x - x^{n+1})_{\overline{S^{n+1}}}\|_2 \le \sqrt{2}\Big[\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\|x^n - x\|_2 + \beta\,\|x^{n-1} - x\|_2\Big]. \tag{28}$$
Putting (22) and (28) together yields
$$\begin{aligned}\|x^{n+1} - x\|_2 &\le \Big[\frac{\sqrt{2}\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)}{\sqrt{1 - \delta_{2s}^2}} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1 - \delta_{2s}}\Big]\|x^n - x\|_2 + \frac{\sqrt{2}\,\beta}{\sqrt{1 - \delta_{2s}^2}}\,\|x^{n-1} - x\|_2\\ &= b\,\|x^n - x\|_2 + \eta\beta\,\|x^{n-1} - x\|_2,\end{aligned} \tag{29}$$
where b is given by (17). The recursive inequality (29) is of the form (8) in Lemma 3 with $b_3 = 0$. Note that, to ensure the validity of (19), (20), (26), and (27) simultaneously, with probability at least $(1 - e^{-C_2 m})^2(1 - e^{-C_6 m})^2$, the number of measurements must satisfy
$$m \ge \max\big\{C_1\delta_{2s}^{-2}(2s)\log(n/(2s)),\ C_1\delta_{3s}^{-2}(3s)\log(n/(3s)),\ C_5\,s\log(n/s),\ C_5(2s)\log(n/(2s))\big\}.$$
Hence, we can pick $C_7$ and $C_8$ satisfying
$$C_7 \ge \max\Big\{C_5,\ 2C_5\Big(1 - \frac{\log 2}{\log(n/s)}\Big),\ 2C_1\delta_{2s}^{-2}\Big(1 - \frac{\log 2}{\log(n/s)}\Big),\ 3C_1\delta_{3s}^{-2}\Big(1 - \frac{\log 3}{\log(n/s)}\Big)\Big\}$$
and
$$C_8 \le \frac{1}{m}\ln\frac{1}{1 - (1 - e^{-C_2 m})^2(1 - e^{-C_6 m})^2}.$$
In other words, these choices of $C_7$ and $C_8$ guarantee that (19), (20), (26), and (27) hold with probability at least $1 - e^{-C_8 m}$ whenever $m \ge C_7\,s\log(n/s)$; this, in turn, means that (29) holds accordingly.
In the following, we show that the coefficients on the right-hand side of (29) satisfy the condition required by Lemma 3, i.e., $b + \eta\beta < 1$.
First, we verify that the ranges for the parameters $\alpha, \beta$ appearing in (13) and (14) are well-defined. On the interval (0,1), define
$$\gamma(x) := \frac{(1-x)\sqrt{1+x} - x\sqrt{2(1-x)}}{1 + x + \sqrt{2(1-x^2)}} \quad\text{and}\quad g(x) := 1 + x + \sqrt{2(1-x^2)}.$$
Note that $g'(x) = 1 - \frac{2x}{\sqrt{2(1-x^2)}}$ and the root of $g'(x) = 0$ is $x = \sqrt{3}/3$. It is easy to see that
$$\max\{g(x) : x\in(0,1)\} = g(\sqrt{3}/3) = 1 + \sqrt{3}.$$
Thus,
$$\gamma(x) = \frac{(1-x)\sqrt{1+x} - x\sqrt{2(1-x)}}{1 + x + \sqrt{2(1-x^2)}} \ge \frac{(1-x)\sqrt{1+x} - x\sqrt{2(1-x)}}{1 + \sqrt{3}} =: h(x). \tag{30}$$
Note that
$$h'(x) = \frac{1}{1+\sqrt{3}}\Big[-\frac{1+3x}{2\sqrt{1+x}} - \frac{2-3x}{\sqrt{2(1-x)}}\Big]. \tag{31}$$
Then, $h'(x) = 0$ is equivalent to saying that $(1+3x)\sqrt{2(1-x)} = 2(3x-2)\sqrt{1+x}$. Squaring both sides yields $(x - 1/3)(27x^2 - 21) = 27x^3 - 9x^2 - 21x + 7 = 0$. The roots of this equation in the interval (0,1) are $x = 1/3$ and $x = \sqrt{7}/3$. By substituting these into (31), we find that the root of $h'(x) = 0$ in (0,1) is $\sqrt{7}/3$. It is easy to see that h is monotonically decreasing on $(0, \sqrt{7}/3)$ and increasing on $(\sqrt{7}/3, 1)$. Since $h(1) = 0$, we have $h(x) < 0$ for $x\in[\sqrt{7}/3, 1)$ by the monotonicity of h. On the other hand, $\theta$ is monotonically increasing and $0 \le \theta(\lambda_0) \le \theta(1/8) \approx 0.295 < 0.366 \approx h(0)$ for $\lambda_0\in(0, 1/8]$. Hence, for the given $\lambda_0\in(0, 1/8]$ and $\theta(\lambda_0)$, there is a unique solution in the interval $(0, \sqrt{7}/3)$, denoted by $\delta^*$, satisfying $h(x) = \theta(\lambda_0)$. In addition, by the decreasing monotonicity of h on this interval, we know $h(x) > \theta(\lambda_0)$ for all $x\in(0, \delta^*)$. Thus, $h(\delta_{3s}) > \theta(\lambda_0)$ since $\delta_{3s} < \delta^*$. This, together with (30), yields $\gamma(\delta_{3s}) > \theta(\lambda_0)$, i.e.,
$$\frac{\sqrt{\frac{1-\delta_{3s}^2}{2}} - \delta_{3s}}{\frac{1+\delta_{3s}}{\sqrt{2(1-\delta_{3s})}} + \sqrt{1+\delta_{3s}}} = \frac{(1-\delta_{3s})\sqrt{1+\delta_{3s}} - \delta_{3s}\sqrt{2(1-\delta_{3s})}}{1 + \delta_{3s} + \sqrt{2(1-\delta_{3s}^2)}} > \theta(\lambda_0). \tag{32}$$
It follows from (32) that
$$\sqrt{\frac{1-\delta_{3s}^2}{2}} - \delta_{3s} > \theta(\lambda_0)\,\frac{1+\delta_{3s}}{\sqrt{2(1-\delta_{3s})}} + \theta(\lambda_0)\sqrt{1+\delta_{3s}} = \theta(\lambda_0)\sqrt{\frac{1-\delta_{3s}^2}{2}}\cdot\frac{\sqrt{1+\delta_{3s}}}{1-\delta_{3s}} + \theta(\lambda_0)\sqrt{1+\delta_{3s}}, \tag{33}$$
where the second step comes from
$$\frac{1+\delta_{3s}}{\sqrt{2(1-\delta_{3s})}} = \sqrt{\frac{1-\delta_{3s}^2}{2}}\cdot\frac{\sqrt{1+\delta_{3s}}}{1-\delta_{3s}}. \tag{34}$$
Multiplying both sides of (33) by $\sqrt{\frac{2}{1-\delta_{3s}^2}}$ yields
$$1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_{3s}}}{1-\delta_{3s}} > \sqrt{\frac{2}{1-\delta_{3s}^2}}\,\big(\delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{3s}}\big) \ge \eta\big(\delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{3s}}\big), \tag{35}$$
where the last step comes from
$$\sqrt{\frac{2}{1-\delta_{3s}^2}} \ge \sqrt{\frac{2}{1-\delta_{2s}^2}} = \eta$$
due to $\delta_{2s} \le \delta_{3s}$. Note that (35) can be rewritten as
$$\frac{1}{\eta} - \frac{\theta(\lambda_0)\sqrt{1+\delta_{3s}}}{\eta(1-\delta_{3s})} > \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{3s}}. \tag{36}$$
Using the fact $\delta_s \le \delta_{2s} \le \delta_{3s}$ again, (36) rearranges into
$$0 < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}} - 1.$$
This shows that the range for $\beta$ in (13) is well-defined. Since $\beta < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}} - 1$, we have
$$1 + 2\beta - \big(1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\beta = 1 + \big[2 - \big(1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\big]\beta < \frac{1}{\eta} - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})} + \big(1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big),$$
implying
$$\frac{1 + 2\beta - \frac{1}{\eta} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}} < 1 + \beta < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}}.$$
This means that the range for $\alpha$ in (14) is also well-defined.
To show $b + \eta\beta < 1$, we consider the following two cases.
Case 1:
if
$$\frac{1 + 2\beta - \frac{1}{\eta} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}} < \alpha \le 1 + \beta,$$
then
$$\alpha\big(1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) > 1 + 2\beta - \frac{1}{\eta} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})},$$
and hence
$$b = \eta\big(1 - \alpha + \beta + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}} = \eta\Big[1 + \beta - \alpha\big(1 - \delta_{3s} - \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\Big] + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}} < 1 - \eta\beta.$$
Case 2:
if
$$1 + \beta < \alpha < \frac{\frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})}}{1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}},$$
then
$$\alpha\big(1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) < \frac{1}{\eta} + 1 - \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{\eta(1-\delta_{2s})},$$
and hence
$$b = \eta\big(\alpha - 1 - \beta + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}} = \eta\Big[-1 - \beta + \alpha\big(1 + \delta_{3s} + \theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\Big] + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}} < 1 - \eta\beta.$$
In both cases, we obtain $b < 1 - \eta\beta$, i.e., $b + \eta\beta < 1$. Therefore, $\tau < 1$ by Lemma 3.
According to the recursive relation (29) and $b + \eta\beta < 1$, as shown above, we obtain
$$\|x^{n+1} - x\|_2 \le b\,\|x^n - x\|_2 + \eta\beta\,\|x^{n-1} - x\|_2 \le \lambda_0(b + \eta\beta)\|x\|_2 < \lambda_0\|x\|_2,$$
so the induction hypothesis propagates. Hence, it follows from (29) and Lemma 3 that
$$\|x^{n+1} - x\|_2 \le \tau^n\big(\|x^1 - x\|_2 + (\tau - b)\|x^0 - x\|_2\big). \tag{37}$$
Following the above proof process with x replaced by $-x$, we can show that (37) holds with $-x$ in place of x, i.e.,
$$\|x^{n+1} + x\|_2 \le \tau^n\big(\|x^1 + x\|_2 + (\tau - b)\|x^0 + x\|_2\big). \tag{38}$$
Putting (37) and (38) together and taking into account the definition of 'dist' given in (6), the desired result (16) follows. $\square$
The above analysis assumes that the measurements are noiseless. The following result demonstrates that Algorithm 1 is robust in the presence of noise.
Theorem 2.
(The noisy case). Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I. Suppose that the RIC $\delta_{3s}$ of the matrix A and the parameters $\alpha$ and $\beta$ satisfy conditions (13) and (14). Take an s-sparse signal $x\in\mathbb{R}^n$ and a noise vector $e\in\mathbb{R}^m$ satisfying
$$\|e\|_2 \le \frac{\lambda_0(1-\delta^*)\big(1 - b - \eta\beta\big)}{\alpha\sqrt{2(1-\delta^*)} + \sqrt{1+\delta^*}}\,\|x\|_2,$$
where $\lambda_0\in(0, 1/8]$ and b, $\eta$, $\delta^*$ are given in Theorem 1. There exist universal positive constants $C_7, C_8$ such that if
$$m \ge C_7\,s\log(n/s), \qquad \mathrm{dist}(x^0, x) \le \lambda_0\|x\|_2,$$
then the sequence $\{x^n\}$ generated by Algorithm 1 with input measured data $y = |Ax| + e$ satisfies, for all n,
$$\mathrm{dist}(x^{n+1}, x) \le \tau^n\min\big\{\|x^1 - x\|_2 + (\tau - b)\|x^0 - x\|_2,\ \|x^1 + x\|_2 + (\tau - b)\|x^0 + x\|_2\big\} + \frac{1}{1-\tau}\Big[\alpha\eta\sqrt{1+\delta_{2s}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|e\|_2$$
with probability at least $(1 - e^{-C_8 m})^n$, where $\tau < 1$ is given by (17).
Proof of Theorem 2.
Let $\{(x^n, y^n)\}_{n\ge1}$ be the iterate sequence generated by Algorithm 1 with noisy data. Thus, in this case, $y^{n+1} = (|Ax| + e)\odot\mathrm{sgn}(Ax^n)$. Following a similar argument to [15] [Equation (27)], we obtain from Lemma 6 that
$$\begin{aligned}\big\|\big(A^T(y^{n+1} - Ax)\big)_{S^{n+1}}\big\|_2 &= \Big\|\Big(A^T\big((|Ax| + e)\odot\mathrm{sgn}(Ax^n) - Ax\big)\Big)_{S^{n+1}}\Big\|_2\\ &\le \Big\|\Big(A^T\big(|Ax|\odot\mathrm{sgn}(Ax^n) - Ax\big)\Big)_{S^{n+1}}\Big\|_2 + \Big\|\Big(A^T\big(e\odot\mathrm{sgn}(Ax^n)\big)\Big)_{S^{n+1}}\Big\|_2\\ &\le \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2 + \sqrt{1+\delta_s}\,\|e\|_2.\end{aligned}$$
Now, we modify the proof of Theorem 1 for the noisy case; that is, some formulas appearing there must be replaced. Precisely, (21) and (22) take the following forms:
$$\|(x^{n+1} - x)_{S^{n+1}}\|_2 \le \delta_{2s}\|x^{n+1} - x\|_2 + \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2 + \sqrt{1+\delta_s}\,\|e\|_2 \tag{39}$$
and
$$\|x^{n+1} - x\|_2 \le \frac{1}{\sqrt{1-\delta_{2s}^2}}\,\|(x^{n+1} - x)_{\overline{S^{n+1}}}\|_2 + \frac{\vartheta + \omega}{1-\delta_{2s}},$$
where $\omega := \sqrt{1+\delta_s}\,\|e\|_2$ and $\vartheta := \theta(\lambda_0)\sqrt{1+\delta_s}\,\|x^n - x\|_2$. In addition, (25) is modified to
$$\|(u^n - x)_{S^{n+1}\triangle S}\|_2 \le \big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\|x^n - x\|_2 + \beta\,\|x^{n-1} - x\|_2 + \alpha\sqrt{1+\delta_{2s}}\,\|e\|_2. \tag{40}$$
Combining (24) with (40), inequality (28) is revised as
$$\|(x - x^{n+1})_{\overline{S^{n+1}}}\|_2 \le \sqrt{2}\Big[\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\|x^n - x\|_2 + \beta\,\|x^{n-1} - x\|_2 + \alpha\sqrt{1+\delta_{2s}}\,\|e\|_2\Big]. \tag{41}$$
Putting (39) and (41) together and recalling the definitions of $\vartheta$ and $\omega$ above, we then have
$$\begin{aligned}\|x^{n+1} - x\|_2 &\le \Big[\frac{\sqrt{2}\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)}{\sqrt{1-\delta_{2s}^2}} + \frac{\theta(\lambda_0)\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|x^n - x\|_2 + \frac{\sqrt{2}\,\beta}{\sqrt{1-\delta_{2s}^2}}\,\|x^{n-1} - x\|_2\\ &\quad + \Big[\frac{\sqrt{2}\,\alpha\sqrt{1+\delta_{2s}}}{\sqrt{1-\delta_{2s}^2}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|e\|_2\\ &= b\,\|x^n - x\|_2 + \eta\beta\,\|x^{n-1} - x\|_2 + \Big[\alpha\eta\sqrt{1+\delta_{2s}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|e\|_2,\end{aligned} \tag{42}$$
where b, $\eta$, $\beta$ are given exactly as in Theorem 1. Hence, $b + \eta\beta < 1$, since $\alpha, \beta$ satisfy conditions (13) and (14); this further ensures that $\tau < 1$ by applying Lemma 3 to the recursive formula (42).
It remains to show that the iterative sequence $\{x^n\}$ stays in a neighborhood of x. First, $\|x^0 - x\|_2 \le \lambda_0\|x\|_2$ and $\|x^1 - x\|_2 \le \lambda_0\|x\|_2$. Now, assume that $\|x^{n-1} - x\|_2 \le \lambda_0\|x\|_2$ and $\|x^n - x\|_2 \le \lambda_0\|x\|_2$; we claim that $\|x^{n+1} - x\|_2 \le \lambda_0\|x\|_2$. Indeed, recall that
$$\|e\|_2 \le \frac{\lambda_0(1-\delta^*)\big(1 - b - \eta\beta\big)}{\alpha\sqrt{2(1-\delta^*)} + \sqrt{1+\delta^*}}\,\|x\|_2,$$
i.e.,
$$\Big[\alpha\sqrt{\frac{2}{1-\delta^*}} + \frac{\sqrt{1+\delta^*}}{1-\delta^*}\Big]\|e\|_2 \le \lambda_0\big(1 - b - \eta\beta\big)\|x\|_2. \tag{43}$$
Since $0 \le \delta_s \le \delta_{2s} \le \delta_{3s} < \delta^*$ and $\eta = \sqrt{2/(1-\delta_{2s}^2)}$, we obtain
$$\alpha\sqrt{\frac{2}{1-\delta^*}} + \frac{\sqrt{1+\delta^*}}{1-\delta^*} = \frac{\sqrt{2}\,\alpha}{\sqrt{1-\delta^*}} + \frac{\sqrt{1+\delta^*}}{1-\delta^*} > \frac{\sqrt{2}\,\alpha}{\sqrt{1-\delta_{3s}}} + \frac{\sqrt{1+\delta_{3s}}}{1-\delta_{3s}} \ge \frac{\sqrt{2}\,\alpha}{\sqrt{1-\delta_{2s}}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}} = \alpha\eta\sqrt{1+\delta_{2s}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}. \tag{44}$$
It follows from (43), (44), and the recursive relation (42) that
$$\begin{aligned}\|x^{n+1} - x\|_2 &\le b\,\|x^n - x\|_2 + \eta\beta\,\|x^{n-1} - x\|_2 + \Big[\alpha\eta\sqrt{1+\delta_{2s}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|e\|_2\\ &< \lambda_0(b + \eta\beta)\|x\|_2 + \Big[\alpha\sqrt{\frac{2}{1-\delta^*}} + \frac{\sqrt{1+\delta^*}}{1-\delta^*}\Big]\|e\|_2\\ &\le \lambda_0(b + \eta\beta)\|x\|_2 + \lambda_0\big(1 - b - \eta\beta\big)\|x\|_2 = \lambda_0\|x\|_2.\end{aligned}$$
Taking into account (42) and applying Lemma 3, we obtain
$$\|x^{n+1} - x\|_2 \le \tau^n\big(\|x^1 - x\|_2 + (\tau - b)\|x^0 - x\|_2\big) + \frac{1}{1-\tau}\Big[\alpha\eta\sqrt{1+\delta_{2s}} + \frac{\sqrt{1+\delta_s}}{1-\delta_{2s}}\Big]\|e\|_2.$$
Moreover, the same inequality holds with x replaced by $-x$ by a symmetric argument. Thus, according to the definition of 'dist' given in (6), the desired result follows. $\square$

Finite Termination

In the framework of accurate measurements, the true signal can be recovered after a finite number of iterations. An estimate of the required number of iterations is given below.
Theorem 3.
Let $\{a_i\}_{i=1}^m$ be i.i.d. Gaussian random vectors with mean 0 and covariance matrix I. Suppose that the RIC $\delta_{3s}$ of the matrix A and the parameters $\alpha$ and $\beta$ satisfy conditions (13) and (14) with $\lambda_0\in(0, \frac{1}{8}]$. For an s-sparse signal $x\in\mathbb{R}^n$ and the initial point $x^0$ given in Algorithm 1, there exist universal positive constants $C_7, C_8$ such that if
$$m \ge C_7\,s\log(n/s), \qquad \mathrm{dist}(x^0, x) \le \lambda_0\|x\|_2,$$
then the s-sparse signal $\mathrm{sgn}\big(\|x + x^0\|_2 - \|x - x^0\|_2\big)\,x$ is recovered by Algorithm 1 with $y = |Ax|$ in at most
$$p := \max\left\{\left\lfloor\frac{\log\big(\sqrt{2}\,\lambda_0(\tau - b + 1)\|x\|_2/(\eta\,\xi_1)\big)}{\log(1/\tau)}\right\rfloor,\ \left\lfloor\frac{\log\big(\lambda_0\,\theta(\lambda_0)\sqrt{1+\delta_s}\,\|x\|_2^2/\xi_2^2\big)}{\log(1/\tau)}\right\rfloor\right\} + 1$$
iterations with probability at least $(1 - e^{-C_8 m})^p$, where $\theta(\lambda_0)$, $\eta$ and $\tau$, b are given by (15) and (17), respectively, in Theorem 1, and $\xi_1, \xi_2$ are the smallest nonzero entries of x and y in modulus, respectively.
Proof of Theorem 3.
The case $x = 0$ is trivial: in this case, the initial point $x^0$ generated by the initialization step of Algorithm 1 is 0, so the true signal x is recovered in at most one step. Therefore, we only need to consider the case $x \ne 0$.
We first assume that $\mathrm{dist}(x^0, x) = \|x^0 - x\|_2$. We need to determine an integer n such that $S^{n+1} = S$. Recall that $u^n$ is given in (23). According to the definition of $S^{n+1}$, the identity $S^{n+1} = S$ holds provided that, for all $j\in S$ and $l\in\bar S$,
$$|(u^n)_j| \ge |(u^n)_l|. \tag{45}$$
Note that
$$\begin{aligned}|(u^n)_j| &= \big|x_j + \big((1 - \alpha + \beta)(x^n - x)\big)_j + \alpha\big((I - A^TA)(x^n - x)\big)_j + \alpha\big(A^T(y^{n+1} - Ax)\big)_j - \beta(x^{n-1} - x)_j\big|\\ &\ge \xi_1 - |1 - \alpha + \beta|\cdot|(x^n - x)_j| - \alpha\big|\big((I - A^TA)(x^n - x)\big)_j\big| - \alpha\big|\big(A^T(y^{n+1} - Ax)\big)_j\big| - \beta\,|(x^{n-1} - x)_j|,\end{aligned}$$
where $\xi_1$ is the smallest nonzero entry of x in modulus, and
$$\begin{aligned}|(u^n)_l| &= \big|x_l + \big((1 - \alpha + \beta)(x^n - x)\big)_l + \alpha\big((I - A^TA)(x^n - x)\big)_l + \alpha\big(A^T(y^{n+1} - Ax)\big)_l - \beta(x^{n-1} - x)_l\big|\\ &\le |1 - \alpha + \beta|\cdot|(x^n - x)_l| + \alpha\big|\big((I - A^TA)(x^n - x)\big)_l\big| + \alpha\big|\big(A^T(y^{n+1} - Ax)\big)_l\big| + \beta\,|(x^{n-1} - x)_l|,\end{aligned}$$
where $x_l = 0$ since $l\in\bar S$. Combining the above two inequalities yields
$$\begin{aligned}|(u^n)_l| - |(u^n)_j| &\le |1 - \alpha + \beta|\big(|(x^n - x)_l| + |(x^n - x)_j|\big) + \alpha\Big(\big|\big((I - A^TA)(x^n - x)\big)_l\big| + \big|\big((I - A^TA)(x^n - x)\big)_j\big|\Big)\\ &\quad + \alpha\Big(\big|\big(A^T(y^{n+1} - Ax)\big)_l\big| + \big|\big(A^T(y^{n+1} - Ax)\big)_j\big|\Big) + \beta\big(|(x^{n-1} - x)_l| + |(x^{n-1} - x)_j|\big) - \xi_1\\ &\le \sqrt{2}\Big[|1 - \alpha + \beta|\cdot\|(x^n - x)_{\{l,j\}}\|_2 + \alpha\big\|\big((I - A^TA)(x^n - x)\big)_{\{l,j\}}\big\|_2 + \alpha\big\|\big(A^T(y^{n+1} - Ax)\big)_{\{l,j\}}\big\|_2 + \beta\,\|(x^{n-1} - x)_{\{l,j\}}\|_2\Big] - \xi_1.\end{aligned} \tag{46}$$
It follows from (17) that
$$\eta\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big) < b, \qquad \tau(\tau - b) = \eta\beta, \qquad b < \tau.$$
Note that $\mathrm{supp}(x^n - x)\cup\{l, j\} \subseteq \mathrm{supp}(x^n)\cup S\cup\{l\}$, since $j\in S$, and this set has cardinality at most $2s + 1 \le 3s$. Hence, by (9), (17), and Lemma 6, we have
$$\begin{aligned}|(u^n)_l| - |(u^n)_j| &\le \sqrt{2}\Big[\big(|1 - \alpha + \beta| + \alpha\delta_{3s} + \alpha\theta(\lambda_0)\sqrt{1+\delta_{2s}}\big)\|x^n - x\|_2 + \beta\,\|x^{n-1} - x\|_2\Big] - \xi_1\\ &< \frac{\sqrt{2}}{\eta}\Big[b\,\|x^n - x\|_2 + \tau(\tau - b)\,\|x^{n-1} - x\|_2\Big] - \xi_1\\ &< \frac{\sqrt{2}\,\tau}{\eta}\Big[\|x^n - x\|_2 + (\tau - b)\,\|x^{n-1} - x\|_2\Big] - \xi_1.\end{aligned}$$
According to (29), we have
$$\begin{aligned}\|x^n - x\|_2 + (\tau - b)\|x^{n-1} - x\|_2 &\le b\,\|x^{n-1} - x\|_2 + \eta\beta\,\|x^{n-2} - x\|_2 + (\tau - b)\|x^{n-1} - x\|_2\\ &= \tau\big[\|x^{n-1} - x\|_2 + (\tau - b)\|x^{n-2} - x\|_2\big]\\ &\le \cdots \le \tau^{n-1}\big[\|x^1 - x\|_2 + (\tau - b)\|x^0 - x\|_2\big]\\ &\le \tau^{n-1}\big[\lambda_0\|x\|_2 + (\tau - b)\lambda_0\|x\|_2\big] = \tau^{n-1}\lambda_0(\tau - b + 1)\|x\|_2.\end{aligned} \tag{47}$$
Taking into account the definition of $u^n$ in (23), we know that $S^{n+1} = S$ is satisfied as soon as (45) holds. By (47), this is ensured provided that
$$\frac{\sqrt{2}\,\lambda_0(\tau - b + 1)\|x\|_2}{\eta}\,\tau^n \le \xi_1, \quad\text{i.e.,}\quad n \ge \frac{\log\big(\sqrt{2}\,\lambda_0(\tau - b + 1)\|x\|_2/(\eta\,\xi_1)\big)}{\log(1/\tau)}.$$
We next demonstrate that $y = |Ax| \ne 0$ by contradiction. If $y = |Ax| = 0$, i.e., $Ax = 0$, then according to Lemma 2, A satisfies $(1 - \delta_s)\|x\|_2^2 \le \|Ax\|_2^2$ with probability at least $1 - e^{-C_2 m}$ when $m \ge C_1\delta_s^{-2}\,s\log(n/s)$. Since $Ax = 0$ and $\delta_s \le \delta_{3s} < 1$, we deduce that $x = 0$, which contradicts the hypothesis.
Now, we also need to determine an integer n such that $y^{n+1} = Ax$. Denote
$$n_1 := \left\lfloor\frac{\log\big(\sqrt{2}\,\lambda_0(\tau - b + 1)\|x\|_2/(\eta\,\xi_1)\big)}{\log(1/\tau)}\right\rfloor + 1$$
and
$$n_2 := \left\lfloor\frac{\log\big(\lambda_0\,\theta(\lambda_0)\sqrt{1+\delta_s}\,\|x\|_2^2/\xi_2^2\big)}{\log(1/\tau)}\right\rfloor + 1,$$
where $\theta(\lambda_0)$ and $\tau$ are given by (15) and (17), respectively, in Theorem 1, and $\xi_2$ is the smallest nonzero entry of y in modulus. According to the argument given in part (b) of the proof of [15] [Theorem 1], we obtain $y^{n+1} = Ax$ for $n \ge \max\{n_1, n_2\}$. Since it is already known from the previous argument that $n_1$ is the smallest positive integer such that $S^{n+1} = S$, for $n \ge \max\{n_1, n_2\}$ we have both $S^{n+1} = S$ and $y^{n+1} = Ax$. This ensures that, in Algorithm 1,
$$x^{n+1} = \arg\min\Big\{\frac{1}{2}\|Az - Ax\|_2^2 : \mathrm{supp}(z)\subseteq S\Big\}. \tag{48}$$
Thus, $x^{n+1} = x = \mathrm{sgn}\big(\|x + x^0\|_2 - \|x - x^0\|_2\big)\,x$, since $\mathrm{dist}(x^0, x) = \|x - x^0\|_2 \le \|x + x^0\|_2$. Therefore, Algorithm 1 successfully recovers the s-sparse signal $\mathrm{sgn}\big(\|x + x^0\|_2 - \|x - x^0\|_2\big)\,x$ after a finite number of iterations.
Similarly, if $\mathrm{dist}(x^0, x) = \|x + x^0\|_2$, we obtain $S^{n+1} = S$ and $y^{n+1} = A(-x)$ for $n \ge \max\{n_1, n_2\}$. In this case, (48) takes the form
$$x^{n+1} = \arg\min\Big\{\frac{1}{2}\|Az + Ax\|_2^2 : \mathrm{supp}(z)\subseteq S\Big\}.$$
Thus, $x^{n+1} = -x = \mathrm{sgn}\big(\|x + x^0\|_2 - \|x - x^0\|_2\big)\,x$, since $\mathrm{dist}(x^0, x) = \|x + x^0\|_2 \le \|x - x^0\|_2$. This completes the proof. $\square$

4. Numerical Experiments

To make the paper easier to read, Table 1 collects the parameters and the algorithm-name abbreviations used in the numerical experiments.
All experiments were performed in MATLAB R2021a on a laptop with an Apple M1 processor and 8 GB of memory. In terms of recovery capability and average runtime, this section compares SPR-HBHTP with three popular algorithms for the sparse phase retrieval problem: SPR-HTP [15] (an extension of HTP from traditional compressed sensing to sparse phase retrieval), CoPRAM [39] (a combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery), and SPARTA [38] (a combination of TWF and TAF).
In the following experiments, the s-sparse vector $x\in\mathbb{R}^n$ with fixed dimension n = 3000 is randomly generated; its nonzero entries are independent and identically distributed (i.i.d.) and follow the standard normal distribution N(0,1), and the support of x is drawn uniformly at random. The measurement matrix A is an i.i.d. Gaussian matrix whose entries follow N(0, 1/m); such a matrix satisfies the RIP with high probability when m is large enough (Chapter 9, [30]). The observed modulus data y are given by $y = |Ax|$ in noiseless environments and $y = |Ax| + \kappa e$ in noisy environments, where e is a noise vector with entries following N(0, 1/m) and $\kappa > 0$ is the noise level.
The maximum number of iterations for all algorithms was set to 50. All initial points were generated by the initialization in SPR-HBHTP, in which two initial points $x^1 = x^0$ were produced for SPR-HBHTP, while the other algorithms need only one initial point $x^0$. The algorithmic parameters of SPARTA were $\gamma = 0.7$, $\mu = 1$, and $I = m/6$ [38], and the stepsize of SPR-HTP was set to $\mu = 0.75$ [15]. The choice of algorithmic parameters for SPR-HBHTP is discussed in Section 4.1. For each sparsity level s, 100 independent trials were used to test the success rates of the algorithms. A trial was declared a 'success' if the recovery condition
$$\frac{\mathrm{dist}(x, x^n)}{\|x\|_2} \le 10^{-3}$$
was satisfied, where x is the target signal, $x^n$ is the corresponding approximation generated by the algorithm, and the distance $\mathrm{dist}(x, x^n)$ is given by (6).
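As a companion to this protocol, the following hypothetical driver (ours; it reuses the generate_instance and spr_hbhtp sketches from earlier sections, and the raw-scale noise model is our reading of $y = |Ax| + \kappa e$) estimates an empirical success rate:

```python
import numpy as np

def success_rate(n=3000, m=3000, s=100, trials=100, kappa=0.0, seed=0):
    """Estimate the empirical success rate under the protocol above."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        a, y_raw, x = generate_instance(n, m, s, rng)
        if kappa > 0:
            # y = |Ax| + kappa*e with e ~ N(0, 1/m) on the normalized scale,
            # i.e., N(0, 1) noise on the raw scale (our assumption)
            y_raw = y_raw + kappa * rng.standard_normal(m)
        x_hat = spr_hbhtp(a, y_raw, s, alpha=2.0, beta=0.7, max_iter=50)
        d = min(np.linalg.norm(x - x_hat), np.linalg.norm(x + x_hat))
        hits += d / np.linalg.norm(x) <= 1e-3   # recovery condition above
    return hits / trials
```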

4.1. Choices of Parameters α and β

To choose suitable algorithmic parameters $(\alpha, \beta)$ in terms of recovery capability, we examined the success rates of SPR-HBHTP with a fixed m = 3000 in noiseless environments. The numerical results are displayed in Figure 1, in which the sparsity level s ranges from 40 to 260 with a stepsize of 10. Figure 1a corresponds to the case $\beta\in\{0, 0.1, 0.3, 0.5, 0.7, 0.9\}$ with $\alpha = 8$, while Figure 1b corresponds to the case $\alpha\in\{1, 2, 3, 4, 6, 8\}$ with $\beta = 0.7$. Figure 1a shows that the recovery ability of SPR-HBHTP becomes stronger as the momentum coefficient $\beta$ increases. Note that SPR-HBHTP reduces to SPR-HTP when $\beta = 0$, which indicates that an important role of the momentum is to enlarge the admissible range of the stepsize $\alpha$ of SPR-HBHTP. From Figure 1b, we see that SPR-HBHTP is sensitive to the stepsize $\alpha$: its recovery capability with $\alpha = 2$ is stronger than with $\alpha = 1, 3, 8$. However, the recovery performance of SPR-HBHTP with $\alpha = 2$ is worse than that with $\alpha = 4, 6$ for large sparsity levels s, while the former is slightly better than the latter for small s. Thus, we use $\alpha = 2$ and $\beta = 0.7$ in the rest of the article.

4.2. Phase Transition

In this section, we use the phase transition curve (PTC) [42,43] and grayscale map to compare the recovery capabilities of the algorithms.

4.2.1. Phase Transition Curves

The phase transition curve is obtained from a logistic regression fit and identifies the 50% success rate of a given algorithm. Indeed, the 50% level can be replaced by other values in the interval (0,1) depending on the practical background. Denote $\delta = m/n$ and $\rho = s/m$. The $(\delta, \rho)$-plane is separated by the PTC of an algorithm into success and failure regions (see Figure 2). The former corresponds to the region below the PTC, wherein the algorithm can reconstruct the target sparse signal successfully; the latter corresponds to the region above the PTC. In other words, the higher the PTC, the stronger the recovery capability of the algorithm.
To generate the PTC, $(\delta, \rho)$ are taken as follows:
$$\delta\in\{0.05,\ 0.07,\ 0.09,\ 0.1,\ 0.1445,\ \ldots,\ 0.99\}, \qquad \rho = s/m, \tag{49}$$
where the interval [0.1, 0.99] is equally divided into 20 parts. For each $\delta$ given in (49), the 'glmfit' function in MATLAB is used to generate the logistic regression curve based on the success rates at different sparsity levels s. The PTC is then obtained directly by identifying the point on the logistic regression curve with a 50% success rate. This technique is almost the same as that in [36], except that the 'glmfit' function is replaced by the logistic regression model; the generation process of the PTC is therefore omitted here, and the interested reader may consult [36] for details.
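For concreteness, here is a sketch (ours) of the 50%-point extraction using a logistic regression fit in Python; it plays the role that 'glmfit' plays in our MATLAB experiments, the function name is illustrative, and it assumes the outcomes contain both successes and failures:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ptc_point(s_values, success_counts, n_trials):
    """Return the sparsity level at which the fitted success probability is 50%."""
    # expand (s, #successes out of n_trials) into one 0/1 outcome per trial
    s_all = np.repeat(s_values, n_trials).astype(float).reshape(-1, 1)
    y_all = np.concatenate([
        np.r_[np.ones(k), np.zeros(n_trials - k)] for k in success_counts
    ])
    clf = LogisticRegression(C=1e6).fit(s_all, y_all)  # near-unpenalized fit
    # P(success) = 1/2 exactly where intercept + coef * s = 0
    return -clf.intercept_[0] / clf.coef_[0, 0]
```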
The comparison of the PTCs for SPR-HBHTP, SPR-HTP, CoPRAM, and SPARTA is shown in Figure 2, wherein Figure 2a,b correspond to accurate and inaccurate measurements, respectively. From Figure 2a, we see that SPR-HBHTP has the highest PTC for $\delta > 0.2$, which indicates that its recovery capability is stronger than that of the other three algorithms, especially for larger $\delta$. The PTCs in Figure 2b are similar to those in Figure 2a for $\delta > 0.1$; that is, all algorithms are stable under small disturbances. However, when $\delta \le 0.1$, all PTCs in Figure 2b decrease rapidly as $\delta$ decreases, which differs from the phenomenon shown in Figure 2a. This indicates that, under disturbances, all algorithms require more measurements to ensure recovery for the sparse phase retrieval problem. Finally, it should be noted that the PTCs of SPR-HTP, CoPRAM, and SPARTA are close to each other in Figure 2a,b, which means that the reconstruction capabilities of these three algorithms are almost the same; comparatively speaking, the recovery capability of SPR-HTP is slightly better than those of CoPRAM and SPARTA.

4.2.2. Greyscale Maps

A grayscale map is an image that contains only brightness information and no color information. In our grayscale maps, the success rate of an algorithm is expressed by the gray level of the corresponding block, where black indicates a 0% success rate, white indicates 100%, and intermediate gray levels indicate rates between 0% and 100%.
Grayscale maps for SPR-HBHTP, SPR-HTP, CoPRAM, and SPARTA with signal length n = 3000 are displayed in Figure 3. In our experiments, the sample size m ranged from 250 to 3000 with a stepsize of 250, and the sparsity s ranged from 20 to 100 with a stepsize of 5. Comparing the four maps in Figure 3, we see that when the sparsity is relatively large, the recovery ability of SPR-HBHTP is stronger than that of the other algorithms. For smaller sparsity s, SPR-HTP and CoPRAM have almost the same recovery ability and, similarly, SPARTA and SPR-HBHTP have comparable recovery abilities. This is consistent with our results for the phase transition curves.

4.3. Algorithm Selection Maps

In practice, the algorithm that consumes the least time should naturally be selected when more than one algorithm can reconstruct the target signal successfully. Hence, it is meaningful to choose the fastest algorithm on the intersection of the success regions of the algorithms, which builds the algorithm selection maps (ASM) proposed in [42,43]. Note that an algorithm is selected automatically if it is the only one that can recover the signal successfully.
Denote $\delta = m/n$ and $\rho = s/m$. Next, we establish the ASM in the $(\delta, \rho)$-plane with $(\delta, \rho)$ given as follows:
$$\delta\in\{0.05,\ 0.07,\ 0.09,\ 0.1,\ 0.1445,\ \ldots,\ 0.99\}, \qquad \rho\in\{0.02,\ 0.022,\ \ldots,\ 0.1\},$$
where the intervals [0.1, 0.99] and [0.02, 0.1] are equally divided into 20 and 40 parts, respectively. For each $\delta$, we tested 10 problem instances at $(\delta, \rho)$ for each algorithm, increasing $\rho$ until the success frequency fell below 50%. The ASM and the average runtime of the fastest algorithm with accurate measurements are summarized in Figure 4. Figure 4a indicates that SPR-HBHTP or SPR-HTP is the fastest algorithm in most areas, while CoPRAM is relatively slower, since it does not appear in the ASM. Figure 4b shows that the average runtime of the fastest algorithm is less than one second in most regions and increases to roughly 3-7 s for larger $\delta$ and $\rho$. This demonstrates that all algorithms take more time as the sparsity level s becomes larger.
To compare the algorithms thoroughly, we collected detailed information about their average runtimes; the ratios of the average runtimes of the algorithms against the fastest one from Figure 4b are shown in Figure 5. Figure 5a,b reveal that the ratios of SPR-HBHTP and SPR-HTP are close to 1 in most areas, which indicates that they are faster than CoPRAM and SPARTA. In particular, the advantage of SPR-HBHTP over SPR-HTP lies in the region with larger $\rho$. In Figure 5c, the ratio of CoPRAM is about 5-20 in most regions, with a minimum value of 5, which means that it is slower than the other three algorithms. Finally, comparing Figure 5d with Figure 5a-c, we find that SPARTA is slower than SPR-HBHTP and SPR-HTP in most cases but much faster than CoPRAM.

5. Conclusions

We introduced an accelerated Hard Thresholding Pursuit method, named the Heavy-Ball-Based Hard Thresholding Pursuit, to reconstruct a sparse signal from phaseless measurements. Under the restricted isometry property, SPR-HBHTP enjoys provable exact recovery in finitely many steps as soon as the number of noiseless Gaussian measurements exceeds a certain bound; this distinguishes it from existing phase retrieval algorithms in terms of theoretical analysis. Moreover, numerical experiments on random problem instances, via phase transition analysis, indicate that our algorithm outperforms state-of-the-art algorithms such as SPR-HTP, CoPRAM, and SPARTA in terms of recovery success rate and computational time. Conducting experiments on realistic data to verify the practical performance of SPR-HBHTP is worthy of further research. In addition, studying a randomized alternating minimization method based on Heavy-Ball acceleration is an interesting research topic.

Author Contributions

Conceptualization, J.Z. and Z.S.; Methodology, Z.S. and J.T.; Investigation, Y.L. and J.Z.; writing—original draft, Y.L. and J.T.; Supervision, J.Z. and Z.S.; Funding acquisition, J.Z. and J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (11771255), the Young Innovation Teams of Shandong Province (2019KJI013), the Shandong Province Natural Science Foundation (ZR2021MA066), the Natural Science Foundation of Henan Province (222300420520), and the Key Scientific Research Projects of Higher Education of Henan Province (22A110020).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

References

1. Shechtman, Y.; Eldar, Y.C.; Cohen, O.; Chapman, H.N.; Miao, J.W.; Segev, M. Phase retrieval with application to optical imaging: A contemporary overview. IEEE Signal Process. Mag. 2015, 32, 87–109.
2. Gerchberg, R.; Saxton, W. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 1972, 35, 237–246.
3. Fienup, J.R. Phase retrieval algorithms: A comparison. Appl. Opt. 1982, 21, 2758–2769.
4. Marchesini, S. Phase retrieval and saddle-point optimization. J. Opt. Soc. Am. A 2007, 24, 3289–3296.
5. Nugent, K.; Peele, A.; Chapman, H.; Mancuso, A. Unique phase recovery for nonperiodic objects. Phys. Rev. Lett. 2003, 91, 203902.
6. Fickus, M.; Mixon, D.G.; Nelson, A.A.; Wan, Y. Phase retrieval from very few measurements. Linear Algebra Appl. 2014, 449, 475–499.
7. Chen, P.; Fannjiang, A.; Liu, G.R. Phase retrieval with one or two diffraction patterns by alternating projections with the null initialization. J. Fourier Anal. Appl. 2018, 24, 719–758.
8. Waldspurger, I. Phase retrieval with random Gaussian sensing vectors by alternating projections. IEEE Trans. Inf. Theory 2018, 64, 3301–3312.
9. Candès, E.J.; Strohmer, T.; Voroninski, V. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 2013, 66, 1241–1274.
10. Waldspurger, I.; d'Aspremont, A.; Mallat, S. Phase recovery, MaxCut and complex semidefinite programming. Math. Program. 2015, 149, 47–81.
11. Goldstein, T.; Studer, C. PhaseMax: Convex phase retrieval via basis pursuit. IEEE Trans. Inf. Theory 2018, 64, 2675–2689.
12. Hand, P.; Voroninski, V. An elementary proof of convex phase retrieval in the natural parameter space via the linear program PhaseMax. Commun. Math. Sci. 2018, 16, 2047–2051.
13. Candès, E.J.; Li, X.; Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory 2015, 61, 1985–2007.
14. Netrapalli, P.; Jain, P.; Sanghavi, S. Phase retrieval using alternating minimization. IEEE Trans. Signal Process. 2015, 63, 4814–4826.
15. Cai, J.F.; Li, J.; Lu, X.; You, J. Sparse signal recovery from phaseless measurements via hard thresholding pursuit. Appl. Comput. Harmon. Anal. 2022, 56, 367–390.
16. Cai, J.F.; Jiao, Y.L.; Lu, X.L.; You, J.T. Sample-efficient sparse phase retrieval via stochastic alternating minimization. IEEE Trans. Signal Process. 2022, 70, 4951–4966.
17. Yang, M.H.; Hong, Y.W.P.; Wu, J.Y. Sparse affine sampling: Ambiguity-free and efficient sparse phase retrieval. IEEE Trans. Inf. Theory 2022, 68, 7604–7626.
18. Bakhshizadeh, M.; Maleki, A.; Jalali, S. Using black-box compression algorithms for phase retrieval. IEEE Trans. Inf. Theory 2020, 66, 7978–8001.
19. Cha, E.; Lee, C.; Jang, M.; Ye, J.C. DeepPhaseCut: Deep relaxation in phase for unsupervised Fourier phase retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 9931–9943.
20. Wang, B.; Fang, J.; Duan, H.; Li, H. PhaseEqual: Convex phase retrieval via alternating direction method of multipliers. IEEE Trans. Signal Process. 2020, 68, 1274–1285.
21. Liu, T.; Tillmann, A.M.; Yang, Y.; Eldar, Y.C.; Pesavento, M. Extended successive convex approximation for phase retrieval with dictionary learning. IEEE Trans. Signal Process. 2022, 70, 6300–6315.
22. Fung, S.W.; Di, Z.W. Multigrid optimization for large-scale ptychographic phase retrieval. SIAM J. Imaging Sci. 2020, 13, 214–233.
23. Chen, Y.; Chi, Y.; Fan, J.; Ma, C. Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval. Math. Program. 2019, 176, 5–37.
24. Cai, J.F.; Huang, M.; Li, D.; Wang, Y. Solving phase retrieval with random initial guess is nearly as good as by spectral initialization. Appl. Comput. Harmon. Anal. 2022, 58, 60–84.
25. Soltanolkotabi, M. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. IEEE Trans. Inf. Theory 2019, 65, 2374–2400.
26. Jaganathan, K.; Oymak, S.; Hassibi, B. Sparse phase retrieval: Uniqueness guarantees and recovery algorithms. IEEE Trans. Signal Process. 2017, 65, 2402–2410.
27. Killedar, V.; Seelamantula, C.S. Compressive phase retrieval based on sparse latent generative priors. In Proceedings of the ICASSP 2022, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 23–27 May 2022; pp. 1596–1600.
28. Wen, J.; He, H.; He, Z.; Zhu, F. A pseudo-inverse-based hard thresholding algorithm for sparse signal recovery. IEEE Trans. Intell. Transp. Syst. 2022.
29. Foucart, S. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM J. Numer. Anal. 2011, 49, 2543–2563.
30. Foucart, S.; Rauhut, H. An Invitation to Compressive Sensing. In A Mathematical Introduction to Compressive Sensing; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–39.
31. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274.
32. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
33. Needell, D.; Tropp, J. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321.
34. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249.
35. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
36. Sun, Z.F.; Zhou, J.C.; Zhao, Y.B.; Meng, N. Heavy-ball-based hard thresholding algorithms for sparse signal recovery. J. Comput. Appl. Math. 2023, 430, 115264.
37. Candès, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 2008, 346, 589–592.
38. Wang, G.; Zhang, L.; Giannakis, G.B.; Akçakaya, M.; Chen, J. Sparse phase retrieval via truncated amplitude flow. IEEE Trans. Signal Process. 2018, 66, 479–491.
39. Jagatap, G.; Hegde, C. Sample-efficient algorithms for recovering structured signals from magnitude-only measurements. IEEE Trans. Inf. Theory 2019, 65, 4434–4456.
40. Cai, T.T.; Li, X.; Ma, Z. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat. 2016, 44, 2221–2251.
41. Zhao, Y.B. Optimal k-thresholding algorithms for sparse optimization problems. SIAM J. Optim. 2020, 30, 31–55.
42. Blanchard, J.D.; Tanner, J. Performance comparisons of greedy algorithms in compressed sensing. Numer. Linear Algebra Appl. 2015, 22, 254–282.
43. Blanchard, J.D.; Tanner, J.; Wei, K. CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference 2015, 4, 289–327.
Figure 1. Comparison of the success rates of SPR-HBHTP with different parameters. (a) Different momentum coefficients β. (b) Different stepsizes α.
Figure 2. Comparison of the PTCs for the algorithms with accurate/inaccurate measurements. (a) Accurate measurements. (b) Inaccurate measurements with κ = 0.01.
Figure 3. Grayscale map for different algorithms with n = 3000. (a) SPR-HBHTP. (b) SPR-HTP. (c) CoPRAM. (d) SPARTA.
Figure 4. ASM and the shortest average runtime with accurate measurements. (a) ASM. (b) Average runtime of the fastest algorithm.
Figure 5. The ratios of average runtimes for the algorithms against the fastest one. (a) SPR-HBHTP. (b) SPR-HTP. (c) CoPRAM. (d) SPARTA.
Table 1. Descriptions of the parameters and the abbreviations of the algorithm names.
α: the stepsize of the algorithm
β: the coefficient of the momentum term
HTP: Hard Thresholding Pursuit
HBHTP: Heavy-Ball-Based Hard Thresholding Pursuit
CoSaMP: Compressive Sampling Matching Pursuit
SPARTA: SPARse Truncated Amplitude Flow
TWF: Thresholded Wirtinger Flow
TAF: Truncated Amplitude Flow
SPR-HTP: Hard Thresholding Pursuit for sparse phase retrieval
SPR-HBHTP: Heavy-Ball-Based Hard Thresholding Pursuit for sparse phase retrieval
CoPRAM: Compressive Phase Retrieval with Alternating Minimization