Article

Least-Square-Based Three-Term Conjugate Gradient Projection Method for $\ell_1$-Norm Problems with Application to Compressed Sensing

by
Abdulkarim Hassan Ibrahim
1,2,
Poom Kumam
1,2,3,*,
Auwal Bala Abubakar
1,4,
Jamilu Abubakar
1,5 and
Abubakar Bakoji Muhammad
6
1
KMUTTFixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4
Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
5
Department of Mathematics, Usmanu Danfodiyo University, Sokoto 840004, Nigeria
6
Faculty of Natural Sciences II, Institute of Mathematics, Martin Luther University Halle-Wittenberg, 06099 Halle, Germany
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(4), 602; https://doi.org/10.3390/math8040602
Submission received: 27 February 2020 / Revised: 7 April 2020 / Accepted: 9 April 2020 / Published: 15 April 2020
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems 2020)

Abstract: In this paper, we propose, analyze, and test an alternative method for solving the $\ell_1$-norm regularization problem for recovering sparse signals and blurred images in compressive sensing. The method is motivated by the recently proposed nonlinear conjugate gradient method of Tang, Li, and Cui [Journal of Inequalities and Applications, 2020(1), 27], designed based on the least-squares technique. The proposed method aims to minimize a non-smooth minimization problem consisting of a least-squares data-fitting term and an $\ell_1$-norm regularization term. The search directions generated by the proposed method are descent directions. In addition, under the monotonicity and Lipschitz continuity assumptions, we establish the global convergence of the method. Preliminary numerical results are reported to show the efficiency of the proposed method in practical computation.

1. Introduction

Discrete ill-posed problems are systems of linear equations arising from the discretization of ill-posed problems. Consider the linear system
$$ b = At, \tag{1} $$
where $t \in \mathbb{R}^n$ is the original signal, $A \in \mathbb{R}^{m \times n}$ ($m < n$) is a linear map, and $b \in \mathbb{R}^m$ is the observed data. In particular, the original signal $t$ is assumed to be sparse; that is, it has very few non-zero coefficients. Since $m < n$, the linear system (1) is usually referred to as an ill-conditioned or under-determined problem. In compressive sensing (CS), it is possible to recover the sparse signal $t$ from the linear system (1) by finding a solution of the $\ell_0$-regularized problem
$$ \min_{t} \{ \|t\|_0 : At = b \}, \tag{2} $$
where $\|t\|_0$ denotes the number of nonzero components of $t$. Unfortunately, minimizing the $\ell_0$-norm is NP-hard in general. For this reason, researchers have developed an alternative model by replacing the $\ell_0$-norm with the $\ell_1$-norm, solving instead the problem
$$ \min_{t} \{ \|t\|_1 : At = b \}. \tag{3} $$
Under some mild assumptions, Donoho [1] proved that solutions of problem (2) also solve (3). In most applications, the observed data $b$ usually contains some noise; thus, problem (3) can be relaxed to the penalized least-squares problem
$$ \min_{t} \; \tau \|t\|_1 + \frac{1}{2} \|At - b\|_2^2, \tag{4} $$
where $\tau > 0$ balances the trade-off between sparsity and residual error. Problems of the form (4) have become familiar over the past three decades, particularly in compressive sensing contexts. Interested readers may refer to the recent papers [2,3] for more details.
Several numerical algorithms have been proposed to address problem (4). For instance, Daubechies, Defrise, and De Mol [4] proposed the iterative shrinkage-thresholding (IST) algorithm; thereafter, Beck and Teboulle [5] proposed the fast iterative shrinkage-thresholding algorithm (FISTA). These algorithms are well known for their simplicity and efficiency. Likewise, Hale, Yin, and Zhang [6] proposed the fixed-point continuation method, which was later accelerated by a nonmonotone line search with the Barzilai–Borwein stepsize [7]. He, Chang, and Osher [8] introduced an unconstrained formulation of the $\ell_1$-regularization problem, in which the Bregman iterative approach was used to obtain solutions of problem (4). The proximal forward–backward splitting technique, based on the proximal operator introduced by Moreau [9], is another efficient approach for solving (4). Yet another class of methods for solving problem (4) relies on gradient descent. Figueiredo, Nowak, and Wright [10] developed a gradient projection method to solve the sparse reconstruction problem. Thereafter, the authors of [11,12] proposed a spectral gradient method and a conjugate gradient projection method, respectively, to solve problem (4). Unlike IST and FISTA, these approaches first transform problem (4) into a monotone operator equation (see Section 2), and then an algorithm is developed to solve the resulting system of equations.
With the approximate equivalence between problem (4) and a system of monotone operator equations, the conjugate gradient method is one of the methods available for solving such systems. Considering the importance of the method, several extensions have been proposed; the three-term conjugate gradient method is one such extension. The first three-term conjugate gradient methods were introduced by Beale [13] and Nazareth [14] to weaken the conditions for global convergence of the two-term conjugate gradient method. Due to the additional parameter in three-term conjugate gradient schemes, establishing the sufficient descent property is easier than for two-term schemes. To this end, three-term conjugate gradient methods have been presented, analyzed, and extensively studied in several references because of their advantages in the descent property and computational performance. The references [15,16,17] proposed different three-term conjugate gradient methods and showed their specific properties, global convergence, and numerical performance. Tang, Li, and Cui [18] presented a new three-term conjugate gradient approach based on the least-squares technique. Their approach incorporates the advantages of two existing efficient conjugate gradient approaches and generates sufficient descent directions without the aid of a line search procedure. Preliminary numerical tests indicate that their method is efficient. Due to the simplicity and low storage requirements of the conjugate gradient method, numerous researchers have recently extended conjugate gradient algorithms designed for unconstrained optimization problems to solve large-scale nonlinear equations.
Using the popular CG_DESCENT method [19], Xiao and Zhu [12] constructed a conjugate gradient method (CGD) based on the projection scheme of Solodov and Svaiter [20] to solve monotone nonlinear operator equations with convex constraints. The method was also successfully used to recover sparse signals in compressive sensing. Interested readers may refer to the articles [21,22,23,24,25,26,27] for an overview of algorithms for solving monotone operator equations.
Inspired by the work of Xiao and Zhu [12], the least-squares-based three-term conjugate gradient (LSTT) method for solving unconstrained optimization problems by Tang, Li, and Cui [18], and the projection technique of Solodov and Svaiter [20], we study, analyze, and construct a derivative-free least-square-based three-term conjugate gradient method to solve the $\ell_1$-norm problem arising from the reconstruction of sparse signals and images in compressive sensing. The method can be viewed as an extension of the LSTT method for unconstrained optimization combined with a projection technique. Under the monotonicity and Lipschitz continuity assumptions, the global convergence of the proposed method is established using a backtracking line search. Computational experiments are carried out to reconstruct sparse signals and images in compressive sensing. The numerical results indicate that the proposed method is efficient and robust.
The rest of the paper is organized as follows. In Section 2, we review the reformulation of problem (4) into a convex quadratic program by Figueiredo et al. [10]. In Section 3, we present the motivation for and the general algorithm of the proposed method. The global convergence of the proposed algorithm is established in Section 4. In Section 5, numerical experiments are presented to illustrate the efficiency of our algorithm. Unless otherwise stated, throughout this paper, $\|\cdot\|$ denotes the Euclidean norm. Furthermore, $P_S$ denotes the projection map from $\mathbb{R}^n$ onto a non-empty, closed, and convex subset $S \subseteq \mathbb{R}^n$, that is,
$$ P_S(t) := \arg\min \{ \|t - y\| : y \in S \}, $$
which has the well-known nonexpansive property
$$ \|P_S(h) - P_S(g)\| \le \|h - g\|, \quad \forall h, g \in \mathbb{R}^n. \tag{5} $$
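To make the projection map concrete, the following minimal sketch treats the special case $S = \mathbb{R}^n_+$ (the nonnegative orthant, which appears in the reformulation of Section 2), where the projection reduces to a componentwise maximum with zero; the function name is ours, and the numerical check merely illustrates the nonexpansive property.

```python
import numpy as np

def project_nonneg(t):
    """Projection onto S = R^n_+: the closest point of the nonnegative
    orthant is obtained componentwise as max(t_i, 0)."""
    return np.maximum(t, 0.0)

# Numerical illustration of nonexpansiveness: ||P_S(h) - P_S(g)|| <= ||h - g||.
rng = np.random.default_rng(0)
h = rng.standard_normal(5)
g = rng.standard_normal(5)
lhs = np.linalg.norm(project_nonneg(h) - project_nonneg(g))
rhs = np.linalg.norm(h - g)
```

For box constraints such as those in (7), the same idea applies with `np.clip`.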

2. Reformulation of the Model

Figueiredo, Nowak, and Wright [10] reformulated the minimization problem (4) as a quadratic programming problem as follows. Any vector $t \in \mathbb{R}^n$ can be written as
$$ t = u - v, \quad u \ge 0, \quad v \ge 0, $$
where $u, v \in \mathbb{R}^n$ and $u_i = (t_i)_+$, $v_i = (-t_i)_+$ for all $i \in [1, n]$, with $(\cdot)_+ = \max\{0, \cdot\}$. Therefore, the $\ell_1$-norm can be represented as $\|t\|_1 = e_n^T u + e_n^T v$, where $e_n$ is an $n$-dimensional vector with all elements equal to one. Thus, (4) can be rewritten as
$$ \min_{u, v} \left\{ \frac{1}{2} \|b - A(u - v)\|^2 + \tau e_n^T u + \tau e_n^T v \; : \; u \ge 0, \; v \ge 0 \right\}. \tag{6} $$
Moreover, following [10], (6) can be rewritten with no difficulty as a quadratic program with box constraints. That is,
$$ \min_{z} \; \frac{1}{2} z^T H z + c^T z, \quad z \ge 0, \tag{7} $$
where
$$ z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \tau e_{2n} + \begin{pmatrix} -y \\ y \end{pmatrix}, \quad y = A^T b, \quad H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}. $$
A simple calculation shows that $H$ is positive semi-definite. Hence, (7) is a convex quadratic program, and it is equivalent to the system
$$ F(z) = \min\{ z, \; Hz + c \} = 0, \tag{8} $$
where $F$ is vector-valued and the "min" is interpreted componentwise. From ([28], Lemma 3) and ([11], Lemma 2.2), we know that the mapping $F$ is Lipschitz continuous and monotone. Hence, any algorithm for solving problem (8) can be equally and effectively used to solve problem (4).
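The residual map $F$ of (8) can be assembled without forming $H$ explicitly, since $Hz = [A^T A (u - v); \, -A^T A (u - v)]$ for $z = [u; v]$. The sketch below (function names are ours) shows one way to build $F$ from $A$, $b$, and $\tau$:

```python
import numpy as np

def make_F(A, b, tau):
    """Return the residual map F(z) = min(z, Hz + c) of problem (8),
    with z = [u; v], y = A^T b, c = tau*e_{2n} + [-y; y], and
    H = [[A^T A, -A^T A], [-A^T A, A^T A]] applied implicitly."""
    n = A.shape[1]
    y = A.T @ b
    def F(z):
        u, v = z[:n], z[n:]
        w = A.T @ (A @ (u - v))            # A^T A (u - v)
        Hz_plus_c = np.concatenate([w - y, -w + y]) + tau
        return np.minimum(z, Hz_plus_c)    # componentwise minimum
    return F
```

A nonnegative $z^\ast$ with $F(z^\ast) = 0$ then yields a solution $t^\ast = u^\ast - v^\ast$ of (4).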

3. Algorithm

Recently, Tang, Li, and Cui [18] proposed a new three-term conjugate gradient method based on the least-squares technique for solving the unconstrained optimization problem
$$ \min \{ f(t) : t \in \mathbb{R}^n \}, $$
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and its gradient $\nabla f(t)$ is available. As in most conjugate gradient methods, the iterative scheme of the method developed in [18] generates a sequence of iterates by letting
$$ t_{k+1} = t_k + \alpha_k d_k, \quad k \ge 0, $$
where $\alpha_k$ is the steplength and the search direction $d_k$ is updated by
$$ d_k := \begin{cases} -\nabla f(t_k) + \tilde\beta_k d_{k-1} - \theta_k \bar y_{k-1}, & \text{if } k > 0, \\ -\nabla f(t_k), & \text{if } k = 0, \end{cases} $$
where $\bar y_{k-1} = \nabla f(t_k) - \nabla f(t_{k-1})$, and $\tilde\beta_k$ and $\theta_k$ are scalars computed as
$$ \tilde\beta_k := \beta_k^{MHS} := \beta_k^{HS} - \frac{\nabla f(t_k)^T d_{k-1}}{\|d_{k-1}\|^2} := \frac{\nabla f(t_k)^T \bar y_{k-1}}{\bar y_{k-1}^T d_{k-1}} - \frac{\nabla f(t_k)^T d_{k-1}}{\|d_{k-1}\|^2}, \qquad \theta_k := \frac{\nabla f(t_k)^T d_{k-1}}{\bar y_{k-1}^T d_{k-1}}. $$
A valid approach for solving (8) is to use a derivative-free line search to determine the step-size $\alpha_k$ [29]. To this end, we present the following derivative-free least-squares three-term conjugate gradient projection algorithm (Algorithm 1).
Algorithm 1 DF-LSTT
Input. Choose an arbitrary initial point $t_0 \in S$ and the positive constants $Tol \in (0, 1)$, $\xi \in (0, 2)$, $\varrho \in (0, 1)$, $\beta > 0$, $\varsigma > 0$.
Step 0. Set $d_0 = -F(t_0)$ and $k := 0$.
Step 1. Determine the step-size $\alpha_k = \beta \varrho^{i}$, where $i$ is the smallest non-negative integer such that the following line search is satisfied:
$$ -F(t_k + \alpha_k d_k)^T d_k \ge \varsigma \alpha_k \|d_k\|^2. \tag{10} $$
Step 2. Compute
$$ u_k := t_k + \alpha_k d_k. \tag{11} $$
Step 3. If $u_k \in S$ and $\|F(u_k)\| = 0$, stop. Otherwise, compute the next iterate by
$$ t_{k+1} := P_S[t_k - \delta_k \xi F(u_k)], \tag{12} $$
where
$$ \delta_k := \frac{F(u_k)^T (t_k - u_k)}{\|F(u_k)\|^2}. \tag{13} $$
Step 4. If the stopping criterion is satisfied, that is, if $\|F(t_k)\| \le Tol$, stop. Otherwise, compute the next search direction $d_k$ by
$$ d_k := -F(t_k) + \beta_k^{(N_k)} d_{k-1} - v_k y_{k-1}, \tag{14} $$
where
$$ y_{k-1} := F(t_k) - F(t_{k-1}), \qquad v_k := \frac{F(t_k)^T d_{k-1}}{\tilde y_{k-1}^T d_{k-1}}, $$
$$ \beta_k^{(N_k)} := \beta_k^{MHS} - \frac{F(t_k)^T d_{k-1}}{\|d_{k-1}\|^2} := \frac{y_{k-1}^T F(t_k)}{\tilde y_{k-1}^T d_{k-1}} - \frac{F(t_k)^T d_{k-1}}{\|d_{k-1}\|^2}, $$
$$ \tilde y_{k-1} := y_{k-1} + j_{k-1} d_{k-1}, \qquad j_{k-1} := 1 + \max\left\{ 0, \; -\frac{y_{k-1}^T d_{k-1}}{\|d_{k-1}\|^2} \right\}. $$
Step 5. Set $k := k + 1$ and return to Step 1.
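The steps of Algorithm 1 can be sketched as follows. This is a minimal illustration, assuming $S = \mathbb{R}^n_+$ (so $P_S$ is a componentwise maximum with zero) and illustrative parameter values; the function name is ours, and the Step 3 termination test is simplified to a norm tolerance.

```python
import numpy as np

def df_lstt(F, t0, proj, tol=1e-6, beta0=1.0, rho=0.75, sigma=1e-4,
            xi=1.2, max_iter=1000):
    """Sketch of the DF-LSTT projection method for a monotone map F
    over a convex set with projection `proj`."""
    t = t0.copy()
    Ft = F(t)
    d = -Ft                                   # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(Ft) <= tol:
            break
        # Step 1: backtracking line search (10): -F(t+a d)^T d >= sigma*a*||d||^2
        alpha = beta0
        while -F(t + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho
        u = t + alpha * d                     # Step 2
        Fu = F(u)
        if np.linalg.norm(Fu) <= tol:         # Step 3 (simplified test)
            t, Ft = u, Fu
            break
        delta = Fu @ (t - u) / (Fu @ Fu)      # (13)
        t_new = proj(t - xi * delta * Fu)     # projection step (12)
        Ft_new = F(t_new)
        # Step 4: three-term direction (14), with y, j, y~ as in Algorithm 1
        y = Ft_new - Ft
        j = 1.0 + max(0.0, -(y @ d) / (d @ d))
        ytil_d = (y + j * d) @ d              # >= ||d||^2 by construction
        beta_k = (y @ Ft_new) / ytil_d - (Ft_new @ d) / (d @ d)
        v_k = (Ft_new @ d) / ytil_d
        d = -Ft_new + beta_k * d - v_k * y
        t, Ft = t_new, Ft_new                 # Step 5
    return t
```

For example, applied to the monotone map $F(t) = e^{t} - 1$ on $\mathbb{R}^n_+$, the iteration drives $\|F(t_k)\|$ below the tolerance.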
Remark 1.
In order to ensure that the parameters $\tilde\beta_k$ and $\theta_k$ are well defined when extending them to solve (4), we modify their denominators using the scalar $\tilde y_{k-1}^T d_{k-1}$. Furthermore, to guarantee the boundedness of the search direction, we only assume that the operator under consideration is monotone, rather than uniformly monotone as assumed in [18].
Lemma 1.
The search direction $d_k$ generated by the DF-LSTT algorithm is a descent direction; that is,
$$ F(t_k)^T d_k \le -\|F(t_k)\|^2, \quad \forall k \ge 0. \tag{17} $$
Proof. 
For $k = 0$, (17) holds with equality. For $k > 0$, by direct computation, we have
$$ \begin{aligned} F(t_k)^T d_k &= -\|F(t_k)\|^2 + \frac{y_{k-1}^T F(t_k)}{\tilde y_{k-1}^T d_{k-1}} F(t_k)^T d_{k-1} - \frac{F(t_k)^T d_{k-1}}{\|d_{k-1}\|^2} F(t_k)^T d_{k-1} - \frac{F(t_k)^T d_{k-1}}{\tilde y_{k-1}^T d_{k-1}} F(t_k)^T y_{k-1} \\ &= -\|F(t_k)\|^2 - \frac{(F(t_k)^T d_{k-1})^2}{\|d_{k-1}\|^2} \le -\|F(t_k)\|^2. \end{aligned} $$
This completes the proof. □

4. Global Convergence

In this section, we investigate the global convergence of the DF-LSTT algorithm for solving (8). For this purpose, we make the following assumptions.
Assumption 1.
A1. 
The mapping F is Lipschitz continuous, that is, there exists a constant L > 0 such that
$$ \|F(t) - F(y)\| \le L \|t - y\|, \quad \forall t, y \in \mathbb{R}^n. \tag{18} $$
A2. 
Xiao et al. [11] proved that, for problem (8), $F$ is monotone; that is,
$$ (F(t) - F(y))^T (t - y) \ge 0, \quad \forall t, y \in \mathbb{R}^n. \tag{19} $$
Lemma 2.
Suppose that Assumption 1 holds. Let { u k } and { t k } be sequences generated by (11) and (12) in the DF-LSTT algorithm. Then, the following statements hold:
1. 
$\{t_k\}$ and $\{u_k\}$ are bounded.
2. 
$\lim_{k \to \infty} \|u_k - t_k\| = 0$.
3. 
$\lim_{k \to \infty} \|t_{k+1} - t_k\| = 0$.
Proof. 
Since $F$ is monotone, for any solution $t^*$ of problem (8) we have
$$ \begin{aligned} F(u_k)^T (t_k - t^*) &= F(u_k)^T (t_k - u_k) + F(u_k)^T (u_k - t^*) \\ &\ge F(u_k)^T (t_k - u_k) + F(t^*)^T (u_k - t^*) \\ &= F(u_k)^T (t_k - u_k) \ge \varsigma \alpha_k^2 \|d_k\|^2 \end{aligned} \tag{20} $$
$$ = \varsigma \|t_k - u_k\|^2. \tag{21} $$
Note that the last inequality in (20) is obtained from the line search (10). From (5), we get
$$ \begin{aligned} \|t_{k+1} - t^*\|^2 &= \|P_S[t_k - \delta_k \xi F(u_k)] - t^*\|^2 \le \|t_k - \delta_k \xi F(u_k) - t^*\|^2 \\ &= \|t_k - t^*\|^2 - 2 \delta_k \xi F(u_k)^T (t_k - t^*) + \delta_k^2 \xi^2 \|F(u_k)\|^2 \\ &\le \|t_k - t^*\|^2 - 2 \delta_k \xi F(u_k)^T (t_k - u_k) + \delta_k^2 \xi^2 \|F(u_k)\|^2 \end{aligned} \tag{22} $$
$$ \begin{aligned} &= \|t_k - t^*\|^2 - \xi (2 - \xi) \frac{\left( F(u_k)^T (t_k - u_k) \right)^2}{\|F(u_k)\|^2} \le \|t_k - t^*\|^2 - \xi (2 - \xi) \varsigma^2 \frac{\|t_k - u_k\|^4}{\|F(u_k)\|^2} \end{aligned} \tag{23} $$
$$ \le \|t_k - t^*\|^2, \tag{24} $$
where (22) and (23) follow from (21).
From (24), it is easy to see that the sequence $\{\|t_k - t^*\|\}$ is non-increasing, so the sequence $\{t_k\}$ is bounded; that is, there exists $\gamma > 0$ such that
$$ \|t_k\| \le \gamma, \quad \forall k \ge 0. \tag{25} $$
Furthermore, by (18), we have
$$ \|F(t_k)\| = \|F(t_k) - F(t^*)\| \le L \|t_k - t^*\| \le L \|t_0 - t^*\|. \tag{26} $$
Letting $M = L \|t_0 - t^*\|$, then
$$ \|F(t_k)\| \le M, \quad \forall k \ge 0. \tag{27} $$
By the monotonicity property (19), we know that
$$ (F(t_k) - F(u_k))^T (t_k - u_k) \ge 0. $$
Therefore, by the Cauchy–Schwarz inequality, we have
$$ \|F(t_k)\| \, \|t_k - u_k\| \ge F(t_k)^T (t_k - u_k) \ge F(u_k)^T (t_k - u_k) \ge \varsigma \|t_k - u_k\|^2, $$
where the last inequality follows from (21). Thus, it is easy to obtain
$$ \varsigma \|t_k - u_k\| \le \|F(t_k)\| \le M, \quad \forall k \ge 0, $$
which implies that $\{u_k\}$ is bounded. Using the continuity of $F$, we know that there exists a constant $M_1 > 0$ such that
$$ \|F(u_k)\| \le M_1, \quad \forall k \ge 0. \tag{28} $$
It follows from (23) that
$$ \xi (2 - \xi) \varsigma^2 \frac{\|t_k - u_k\|^4}{\|F(u_k)\|^2} \le \|t_k - t^*\|^2 - \|t_{k+1} - t^*\|^2. \tag{29} $$
Summing (29) over $k \ge 0$, we obtain
$$ \frac{\xi (2 - \xi) \varsigma^2}{M_1^2} \sum_{k=0}^{\infty} \|t_k - u_k\|^4 \le \sum_{k=0}^{\infty} \left( \|t_k - t^*\|^2 - \|t_{k+1} - t^*\|^2 \right) \le \|t_0 - t^*\|^2 < \infty. \tag{30} $$
Equation (30) implies that
$$ \lim_{k \to \infty} \|t_k - u_k\| = 0. \tag{31} $$
Hence, the second assertion holds.
From (5) and the fact that $t_k \in S$, we have
$$ \|t_{k+1} - t_k\| = \|P_S[t_k - \delta_k \xi F(u_k)] - P_S(t_k)\| \le \delta_k \xi \|F(u_k)\|; $$
then, by (13) and the Cauchy–Schwarz inequality, we obtain
$$ \|t_{k+1} - t_k\| \le \xi \frac{F(u_k)^T (t_k - u_k)}{\|F(u_k)\|} \le \xi \|u_k - t_k\|, $$
which shows that the third assertion holds. □
Theorem 1.
Suppose that Assumption 1 holds. Let the sequences $\{u_k\}$ and $\{t_k\}$ be generated by (11) and (12) in the DF-LSTT algorithm. Then,
$$ \liminf_{k \to \infty} \|F(t_k)\| = 0. \tag{32} $$
Proof. 
Suppose that conclusion (32) does not hold; that is, there exists $\kappa_0 > 0$ such that
$$ \|F(t_k)\| \ge \kappa_0, \quad \forall k \ge 0. \tag{33} $$
Equation (33), together with (17), implies that
$$ \|d_k\| \ge \|F(t_k)\| \ge \kappa_0 =: \varphi, \quad \forall k \ge 0. \tag{34} $$
We note that the sequences $\{t_k\}$, $\{u_k\}$, $\{F(t_k)\}$, and $\{F(u_k)\}$ are bounded from (25), (31), (27), and (28), respectively. In addition, from the Lipschitz continuity and (25), we have
$$ \|y_{k-1}\| = \|F(t_k) - F(t_{k-1})\| \le L \|t_k - t_{k-1}\| \le L \left( \|t_k\| + \|t_{k-1}\| \right) \le 2 L \gamma. $$
On the other hand, from the definition of $\tilde y_{k-1}$, it holds that
$$ \tilde y_{k-1}^T d_{k-1} \ge y_{k-1}^T d_{k-1} + \|d_{k-1}\|^2 - y_{k-1}^T d_{k-1} = \|d_{k-1}\|^2. $$
Thus, by the Cauchy–Schwarz inequality, we obtain
$$ |\beta_k^{(N_k)}| \le \frac{|y_{k-1}^T F(t_k)|}{\tilde y_{k-1}^T d_{k-1}} + \frac{|F(t_k)^T d_{k-1}|}{\|d_{k-1}\|^2} \le \frac{\|y_{k-1}\| \|F(t_k)\|}{\|d_{k-1}\|^2} + \frac{\|F(t_k)\|}{\|d_{k-1}\|} \le \frac{2 L \gamma M}{\varphi^2} + \frac{M}{\varphi} = \frac{M}{\varphi} \left( \frac{2 L \gamma}{\varphi} + 1 \right). \tag{35} $$
Note that the last inequality in (35) follows from (34).
Therefore, from (14), it follows that
$$ \|d_k\| \le \|F(t_k)\| + |\beta_k^{(N_k)}| \|d_{k-1}\| + \frac{\|F(t_k)\| \|d_{k-1}\|}{\|d_{k-1}\|^2} \|y_{k-1}\| \le M + \frac{2 M L \gamma}{\varphi} + M + \frac{2 M L \gamma}{\varphi} = 2 M + \frac{4 M L \gamma}{\varphi} =: \chi. $$
By the definition of the line search in Step 1, $\alpha_k \varrho^{-1}$ does not satisfy (10). Thus, we have
$$ -F(t_k + \alpha_k \varrho^{-1} d_k)^T d_k < \varsigma \alpha_k \varrho^{-1} \|d_k\|^2. $$
It follows from Lemma 1 that
$$ \|F(t_k)\|^2 \le -F(t_k)^T d_k = \left( F(t_k + \alpha_k \varrho^{-1} d_k) - F(t_k) \right)^T d_k - F(t_k + \alpha_k \varrho^{-1} d_k)^T d_k \le L \alpha_k \varrho^{-1} \|d_k\|^2 + \varsigma \alpha_k \varrho^{-1} \|d_k\|^2 = \alpha_k \varrho^{-1} (L + \varsigma) \|d_k\|^2. \tag{36} $$
Equation (36) is obtained using the Cauchy–Schwarz inequality and the Lipschitz continuity (18). Therefore, it holds that
$$ \alpha_k \|d_k\|^2 \ge \frac{\varrho \|F(t_k)\|^2}{L + \varsigma} \ge \frac{\varrho \kappa_0^2}{L + \varsigma}, $$
where the last inequality follows from (33). Since $\|d_k\| \le \chi$, this gives $\|t_k - u_k\| = \alpha_k \|d_k\| \ge \varrho \kappa_0^2 / \left( (L + \varsigma) \chi \right) > 0$ for all $k$, so
$$ \liminf_{k \to \infty} \|t_k - u_k\| > 0, $$
which contradicts (31). Thus, (32) holds. □

5. Numerical Experiment

We present numerical experiments to show the efficiency of the DF-LSTT method. The experiments are of two types. The first applies the DF-LSTT method to solve the $\ell_1$-norm regularization problem arising in compressive sensing. The second tests DF-LSTT on some convex constrained nonlinear equations with different initial points and various dimensions. The implementations were performed in Matlab R2019b Update 1 (9.7.0.1216025, MathWorks, Inc., Massachusetts, USA) on an HP PC (Hewlett-Packard, California, USA) with a 2.4 GHz CPU and 8.0 GB of RAM, running the Windows 10 operating system.

5.1. Experiments on the 1 -Norm Regularization Problem in Compressive Sensing

We begin by considering a typical compressive sensing scenario in which the goal is to reconstruct a length-$n$ sparse signal from $m$ observations ($m \ll n$) corrupted by Gaussian noise, where the number of samples is dramatically smaller than the size of the original signal. We compare DF-LSTT with the CGD conjugate gradient method [12] and the PCG projection method [30], both designed to solve nonlinear equations with convex constraints and signal recovery problems.
As a consequence of the limited memory of our PC, in this test we considered a signal of small size with $n = 2^{11}$ and $m = 2^{9}$, where the original signal $t$ contains $2^{6}$ randomly placed non-zero elements. As in [11,12,31], the quality of the restored signal is assessed by the mean squared error (MSE) relative to the original signal $t$, that is,
$$ \mathrm{MSE} := \frac{1}{n} \|t - \tilde t\|^2, $$
where $\tilde t$ is the restored signal. In this test, we generate the random matrix $A$ using the Matlab command rand(n,k). In addition, noise is added to the observed data, computed by
$$ b = A t + \eta, $$
where $\eta$ is Gaussian noise distributed as $N(0, 10^{-4})$. The DF-LSTT algorithm is implemented with the following parameters: $\beta = 10$, $\varrho = 0.55$, $\varsigma = 10^{-4}$.
For the compared methods, the parameters are set as reported in their respective papers. In line with [32], we chose the parameter $\tau$ in the merit function $f(t) = \tau \|t\|_1 + \frac{1}{2} \|b - A t\|^2$ as $\tau = 0.008 \|A^T b\|_{\infty}$, and the iterative process of all algorithms starts at $t_0 = A^T b$. The process terminates when
$$ Tol := \frac{|f_k - f_{k-1}|}{|f_{k-1}|} < 10^{-5}, $$
where $f_k$ denotes the merit function value at $t_k$. Note that, for this test, we only observe the convergence behavior of each method in reaching a solution of similar accuracy.
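The experimental setup above can be sketched as follows (dimensions are scaled down for illustration, variable names are ours, and the $\infty$-norm in the choice of $\tau$ is our assumption):

```python
import numpy as np

# Reduced-size version of the signal recovery setup.
rng = np.random.default_rng(1)
n, m, k = 2**8, 2**6, 2**3
t_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
t_true[support] = rng.standard_normal(k)          # k-sparse original signal
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
eta = np.sqrt(1e-4) * rng.standard_normal(m)      # Gaussian noise, N(0, 1e-4)
b = A @ t_true + eta                              # observed data b = A t + eta

tau = 0.008 * np.max(np.abs(A.T @ b))             # tau = 0.008 ||A^T b||_inf (assumed norm)

def f(t):
    """Merit function f(t) = tau*||t||_1 + (1/2)*||b - A t||^2."""
    return tau * np.sum(np.abs(t)) + 0.5 * np.sum((b - A @ t) ** 2)

def mse(t):
    """MSE = ||t - t_true||^2 / n for a restored signal t."""
    return np.sum((t - t_true) ** 2) / n

def stop(f_k, f_prev, tol=1e-5):
    """Relative-change stopping rule |f_k - f_{k-1}| / |f_{k-1}| < tol."""
    return abs(f_k - f_prev) / abs(f_prev) < tol
```

All compared solvers would then be started from `A.T @ b` and run until `stop` fires.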
In view of the plots depicted in Figure 1, DF-LSTT wins in decoding sparse signals in compressive sensing, requiring around 116 iterations and around 0.69 s. Table 1 contains the number of iterations, the mean squared error (MSE), and the decoding time for the sparse signal, averaged over 10 runs for the tested algorithms. The results in Figure 2 show that, in decoding sparse signals in compressive sensing, DF-LSTT is faster than CGD and PCG, with a clearly lower number of iterations.
Next, the effectiveness and robustness of the DF-LSTT algorithm are illustrated on an image de-blurring problem. We carried out our experiment using some widely used test images obtained from http://hlevkin.com/06testimages.htm. DF-LSTT is compared with the state-of-the-art methods CGD, proposed by Xiao and Zhu [12], SGCS [11], and MFR [33].
The quality of restoration achieved by each method is measured by the signal-to-noise ratio (SNR), defined as
$$ \mathrm{SNR} := 20 \times \log_{10} \frac{\|t\|}{\|t - \tilde t\|}, $$
the peak signal-to-noise ratio (PSNR) [34], and the structural similarity index (SSIM) [35]. For fairness in comparing the algorithms, the iterative process of all algorithms started at $t_0 = A^T b$ and terminates when $Tol < 10^{-5}$. For the image de-blurring experiment, the following parameters were used in our implementation: $\beta = 0.001$, $\varrho = 0.6$, $\varsigma = 10^{-4}$. We tested several images, including Tiffany ($512 \times 512$), Lena ($512 \times 512$), and Barbara ($720 \times 576$), degraded by Gaussian blur and 10% Gaussian noise.
In Table 2, we report the SNR, PSNR, and SSIM obtained by DF-LSTT, CGD, SGCS, and MFR in recovering blurred and noisy images. We can see that the SNR, PSNR, and SSIM values of the test images computed by the DF-LSTT algorithm are slightly higher than those of CGD, SGCS, and MFR; higher values of SNR, PSNR, and SSIM reflect better restoration quality.
Based on the performance reported in Table 2, the DF-LSTT algorithm restores blurred and noisy images quite well and obtains better-quality reconstructed images in an efficient manner. Figure 3 and Figure 4 show the original and blurred images, while Figure 5 shows the images restored by each method.

5.2. Experiments on Some Large-Scaled Monotone Nonlinear Equations

In this subsection, we evaluate the performance of the proposed conjugate gradient method in solving nonlinear equations with convex constraints. We compare the proposed method with CGD [12] and PCG [30]. For each test problem, the stopping condition employed is
$$ \|F(t_k)\| \le 10^{-6}. $$
We also stop the algorithms when the number of iterations exceeds 1000 without achieving convergence. The algorithms are tested using seven different initial points, one of which is randomly generated in $\mathbb{R}^n$. We ran the algorithms for several dimensions ranging from $n = 1000$ to $n = 100{,}000$. The parameters chosen for the proposed algorithm are $\varrho = 0.75$, $\beta = 1$, $\varsigma = 10^{-4}$, $\xi = 1.2$. For the CGD and PCG algorithms, all parameters are chosen as in [12,30], respectively. We use the following well-known benchmark test functions, where the mapping $F$ is taken as
$$ F(t) = (f_1(t), f_2(t), \ldots, f_n(t))^T, $$
and the associated initial points for these problems are
$$ t_1 = (0.1, 0.1, \ldots, 0.1)^T, \quad t_2 = (0.2, 0.2, \ldots, 0.2)^T, \quad t_3 = (0.5, 0.5, \ldots, 0.5)^T, \quad t_4 = (1.2, 1.2, \ldots, 1.2)^T, $$
$$ t_5 = (1.5, 1.5, \ldots, 1.5)^T, \quad t_6 = (2, 2, \ldots, 2)^T, \quad t_7 = \mathrm{rand}(n, 1). $$
Problem 1.
This problem is the Exponential function [36] with constraint set $S = \mathbb{R}^n_+$; that is,
$$ f_1(t) = e^{t_1} - 1, \qquad f_i(t) = e^{t_i} + t_i - 1, \quad \text{for } i = 2, 3, \ldots, n. $$
Problem 2.
Modified Logarithmic function [36] with constraint set $S = \left\{ t \in \mathbb{R}^n : \sum_{i=1}^{n} t_i \le n, \ t_i > -1, \ i = 1, 2, \ldots, n \right\}$; that is,
$$ f_i(t) = \ln(t_i + 1) - \frac{t_i}{n}, \quad i = 1, 2, \ldots, n. $$
Problem 3.
The function $f_i(t)$ [37] with $S = \mathbb{R}^n_+$, defined by
$$ f_i(t) = \min\left( \min(|t_i|, t_i^2), \ \max(|t_i|, t_i^3) \right), \quad \text{for } i = 1, 2, \ldots, n. $$
Problem 4.
The Strictly convex function [38] with constraint set $S = \mathbb{R}^n_+$; that is,
$$ f_i(t) = e^{t_i} - 1, \quad i = 1, 2, \ldots, n. $$
Problem 5.
Strictly convex function II [38] with constraint set $S = \mathbb{R}^n_+$; that is,
$$ f_i(t) = \frac{i}{n} e^{t_i} - 1, \quad i = 1, 2, \ldots, n. $$
Problem 6.
Tridiagonal Exponential function [39] with constraint set $S = \mathbb{R}^n_+$; that is,
$$ f_1(t) = t_1 - e^{\cos(h(t_1 + t_2))}, \qquad f_i(t) = t_i - e^{\cos(h(t_{i-1} + t_i + t_{i+1}))}, \quad \text{for } 2 \le i \le n - 1, $$
$$ f_n(t) = t_n - e^{\cos(h(t_{n-1} + t_n))}, \quad \text{where } h = \frac{1}{n + 1}. $$
Problem 7.
Nonsmooth function [40] with constraint set $S = \left\{ t \in \mathbb{R}^n : \sum_{i=1}^{n} t_i \le n, \ t_i \ge -1, \ 1 \le i \le n \right\}$:
$$ f_i(t) = t_i - \sin|t_i - 1|, \quad i = 1, 2, \ldots, n. $$
Problem 8.
The function f i ( t ) with S = R + n defined by
f i ( t ) = 8 1 2 t i 1 i = 1 , 2 , 3 , , n .
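Two of the benchmark maps above (Problems 1 and 4, whose definitions are fully specified in the text) can be coded directly; the function names are ours.

```python
import numpy as np

def F_exponential(t):
    """Problem 1 (Exponential function, S = R^n_+):
    f_1(t) = e^{t_1} - 1,  f_i(t) = e^{t_i} + t_i - 1 for i >= 2."""
    f = np.exp(t) + t - 1.0
    f[0] = np.exp(t[0]) - 1.0
    return f

def F_strictly_convex(t):
    """Problem 4 (Strictly convex function, S = R^n_+):
    f_i(t) = e^{t_i} - 1, the gradient of sum_i (e^{t_i} - t_i)."""
    return np.exp(t) - 1.0
```

Both maps are monotone on $\mathbb{R}^n$ and vanish at the origin, which lies in $S = \mathbb{R}^n_+$.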
In Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, we report the computation results obtained from the implementation of DF-LSTT, CGD, and PCG algorithm. We report the number of iterations (ITER), the number of function evaluations (FVAL), and the CPU time in seconds (TIME).
We employ the widely used performance profile metric of Dolan and Moré [41] to compare the methods. The profile measures the ratio of each method's computational outcome to the outcome of the best method presented. The performance profile operates in the following manner. Let $S_m$ and $P_m$ denote the set of methods and the set of test problems, respectively. In this section, we treat a problem with a different dimension or a different initial point as a new problem. For $n_s$ methods and $n_p$ problems, the performance profile $\rho : \mathbb{R} \to [0, 1]$ is defined as follows: for each problem $p \in P_m$ and each method $s \in S_m$, let
$$ t_{p,s} := \text{computing time (or another performance measure) required to solve problem } p \text{ by method } s. $$
The performance ratio is given by $r_{p,s} := t_{p,s} / \min_{s \in S_m} t_{p,s}$. Then, the performance profile is defined by
$$ \rho_s(\tau) := \frac{1}{n_p} \, \mathrm{size} \left\{ p \in P_m : \log_2(r_{p,s}) \le \tau \right\}, \quad \tau \in \mathbb{R}_+, \ 1 \le s \le n_s. $$
We note that $\rho_s(\tau)$ is the probability, for method $s \in S_m$, that $\log_2(r_{p,s})$ is within a factor $\tau \in \mathbb{R}_+$ of the best possible ratio. Clearly, for a given value of $\tau$, the method with the highest value of $\rho_s(\tau)$ is preferable. As usual, we build the performance profiles from the number of iterations, the number of function evaluations, and the computing time (CPU time). Figure 6 and Figure 7 are the performance profiles obtained based on the Dolan and Moré metric. Both figures indicate that the performance of the proposed method is competitive with that of the other methods. Moreover, from both Figure 6 and Figure 7, DF-LSTT is more efficient, because it solves a higher percentage of the test problems with fewer iterations and function evaluations than the CGD and PCG methods. However, with respect to CPU time, our algorithm did not perform as well; this could be a result of the several computations involved in the method.
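Under the definitions above, the performance profile can be computed as follows (a sketch; the function name and the evaluation grid are ours):

```python
import numpy as np

def performance_profile(T):
    """Dolan-More performance profile.
    T: (n_p, n_s) array with T[p, s] the cost (e.g. iterations, CPU time)
    of solver s on problem p. Returns a grid of tau values and, for each
    solver, the fraction of problems with log2(r_{p,s}) <= tau."""
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,s} = t_{p,s} / min_s t_{p,s}
    log_r = np.log2(ratios)
    taus = np.linspace(0.0, log_r.max(), 100)
    # rho_s(tau) = (1/n_p) * size{p : log2(r_{p,s}) <= tau}
    rho = np.array([[np.mean(log_r[:, s] <= tau) for tau in taus]
                    for s in range(T.shape[1])])
    return taus, rho
```

Plotting each row of `rho` against `taus` reproduces the usual profile curves; the solver whose curve sits highest is preferable.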

6. Conclusions

In this paper, we have presented a derivative-free conjugate gradient method for solving the $\ell_1$-norm regularization problem by combining the projection technique with the direction proposed in [18]. Unlike the uniform convexity assumption used to establish the convergence of the method in [18], our convergence is established under the monotonicity and Lipschitz continuity assumptions. Our numerical experiments on recovering sparse signals and blurred images indicate the efficient and robust behavior of the proposed algorithm. For instance, in recovering sparse signals, the proposed algorithm is faster than the compared algorithms, requires fewer iterations, and attains the smallest mean squared error. Moreover, our algorithm is able to restore blurred and noisy images with better quality, as reflected in the values of the SNR, PSNR, and SSIM. Furthermore, numerical experiments on a set of monotone problems with different initial points and dimensions were reported. When applied to these monotone operator equations, however, the proposed method does not perform well in terms of CPU time; this could be a result of the several computations involved in the method.

Author Contributions

Conceptualization, A.H.I.; Formal analysis, A.H.I.; Funding acquisition, P.K.; Methodology, J.A.; Project administration, P.K.; Resources, A.B.M.; Software, A.B.A. and J.A.; Supervision, P.K.; Writing–original draft, A.H.I.; Writing–review and editing, A.B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Petchra Pra Jom Klao Scholarship, grant number 16/2561.

Acknowledgments

The authors are very grateful to Jinkui Liu of Chongqing Three Gorges University, Chongqing, China, for his kind offer of the source codes for the signal reconstruction problem. We would also like to thank the anonymous referees for their valuable comments. The authors acknowledge the support provided by the Theoretical and Computational Science (TaCS) Center under the Computational and Applied Science for Smart research Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant No. 16/2561).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829.
  2. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  3. Candes, E.; Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23, 969.
  4. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
  5. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
  6. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. CAAM TR07-07, Rice University, 2007, 43, 44.
  7. Huang, S.; Wan, Z. A new nonmonotone spectral residual method for nonsmooth nonlinear equations. J. Comput. Appl. Math. 2017, 313, 82–101.
  8. He, L.; Chang, T.C.; Osher, S. MR image reconstruction from sparse radial samples by using iterative refinement procedures. In Proceedings of the 13th Annual Meeting of ISMRM, Seattle, WA, USA, 6–12 May 2006; Volume 696.
  9. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. 1962. Available online: http://www.numdam.org/article/BSMF_1965__93__273_0.pdf (accessed on 26 February 2020).
  10. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  11. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577.
  12. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319.
  13. Beale, E.M.L. A derivation of conjugate gradients. In Numerical Methods for Nonlinear Optimization; Lootsma, F.A., Ed.; Academic Press: London, UK, 1972.
  14. Nazareth, L. A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 1977, 23, 373–387.
  15. Zhang, L.; Zhou, W.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 2006, 26, 629–640.
  16. Andrei, N. On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 2013, 219, 6316–6327.
  17. Liu, J.; Li, S. New three-term conjugate gradient method with guaranteed global convergence. Int. J. Comput. Math. 2014, 91, 1744–1754.
  18. Tang, C.; Li, S.; Cui, Z. Least-squares-based three-term conjugate gradient methods. J. Inequal. Appl. 2020, 2020, 27.
  19. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154.
  20. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776.
  21. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2018, 82, 245–262.
  22. Liu, J.; Xu, J.; Zhang, L. Partially symmetrical derivative-free Liu–Storey projection method for convex constrained equations. Int. J. Comput. Math. 2019, 96, 1787–1798.
  23. Ibrahim, A.H.; Garba, A.I.; Usman, H.; Abubakar, J.; Abubakar, A.B. Derivative-free RMIL conjugate gradient algorithm for convex constrained equations. Thai J. Math. 2019, 18, 212–232.
  24. Abubakar, A.B.; Rilwan, J.; Yimer, S.E.; Ibrahim, A.H.; Ahmed, I. Spectral three-term conjugate descent method for solving nonlinear monotone equations with convex constraints. Thai J. Math. 2020, 18, 501–517.
  25. Ibrahim, A.H.; Kumam, P.; Abubakar, A.B.; Jirakitpuwapat, W.; Abubakar, J. A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing. Heliyon 2020, 6, e03466.
  26. Abubakar, A.B.; Kumam, P.; Awwal, A.M. Global convergence via descent modified three-term conjugate gradient projection algorithm with applications to signal recovery. Results Appl. Math. 2019, 4, 100069.
  27. Abubakar, A.B.; Kumam, P.; Awwal, A.M. An inexact conjugate gradient method for symmetric nonlinear equations. Comput. Math. Methods 2019, 1, e1065.
  28. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71.
  29. Zhou, W.; Li, D. Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math. 2007, 25, 89–96.
  30. Liu, J.; Li, S. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453.
  31. Wan, Z.; Guo, J.; Liu, J.; Liu, W. A modified spectral conjugate gradient projection method for signal recovery. Signal Image Video Process. 2018, 12, 1455–1462.
  32. Kim, S.; Koh, K.; Lustig, M.; Boyd, S.; Gorinevsky, D. A method for large-scale ℓ1-regularized least squares. IEEE J. Sel. Top. Signal Process. 2007, 1, 606–617.
  33. Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M.; Sitthithakerngkiet, K. A modified Fletcher–Reeves conjugate gradient method for monotone nonlinear equations with some applications. Mathematics 2019, 7, 745.
  34. Bovik, A.C. Handbook of Image and Video Processing; Academic Press: Cambridge, MA, USA, 2010.
  35. Lajevardi, S.M. Structural similarity classifier for facial expression recognition. Signal Image Video Process. 2014, 8, 1103–1110.
  36. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
  37. La Cruz, W. A spectral algorithm for large-scale systems of nonlinear monotone equations. Numer. Algorithms 2017, 76, 1109–1130.
  38. Wang, C.; Wang, Y.; Xu, C. A projection method for a system of nonlinear monotone equations with convex constraints. Math. Methods Oper. Res. 2007, 66, 33–46.
  39. Bing, Y.; Lin, G. An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1991, 1, 206–221.
  40. Yu, G.; Niu, S.; Ma, J. Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints. J. Ind. Manag. Optim. 2013, 9, 117–129.
  41. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
Figure 1. Illustration of the sparse signal recovery. From the top to the bottom is the original signal (First plot), the measurement (Second plot), and the reconstructed signals by DF-LSTT (Third plot), CGD (Fourth plot), and PCG (Fifth plot).
Figure 2. Comparison results of the DF-LSTT, CGD, and PCG algorithms with the signal parameters chosen as n = 2048, m = 512, τ = 0.00396321. The x-axes represent the number of iterations (top left and bottom left) and the CPU time in seconds (top right and bottom right). The y-axes represent the MSE (top left and top right) and the function values (bottom left and bottom right).
Figure 3. The original test images: Tiffany (left), Lenna (middle), and Barbara (right).
Figure 4. Blurred and noisy Barbara and Lenna test images.
Figure 5. Restored images by DF-LSTT (left column), CGD (left middle column), SGCS (right middle column), and MFRM (right column).
Figure 6. Performance of compared methods relative to the number of iterations.
Figure 7. Performance of compared methods relative to the number of function evaluations.
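The profiles in Figures 6 and 7 follow the methodology of Dolan and Moré [41]. As a hedged sketch of how such a profile is computed (a NumPy illustration, not the authors' code; the cost array `T` below is hypothetical):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-Moré performance profile.

    T    : (n_problems, n_solvers) array of positive costs (e.g. iteration
           counts), with np.inf marking a failure on that problem.
    taus : 1-D array of performance-ratio thresholds (tau >= 1).
    Returns rho with rho[s, i] = fraction of problems that solver s solves
    within a factor taus[i] of the best solver on each problem.
    """
    ratios = T / T.min(axis=1, keepdims=True)          # per-problem performance ratios
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical costs for 3 problems and 2 solvers.
T = np.array([[2.0, 4.0],
              [3.0, 3.0],
              [10.0, 5.0]])
rho = performance_profile(T, np.array([1.0, 2.0]))
```

Plotting each row of `rho` against `taus` gives the curves shown in the figures; a curve that sits higher on the left indicates a solver that is fastest on more problems.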
Table 1. Results for sparse signal recovery. The ITER/MSE/TIME column groups refer to DF-LSTT, CGD, and PCG, in that order.

| | ITER | MSE | TIME | ITER | MSE | TIME | ITER | MSE | TIME |
|---|---|---|---|---|---|---|---|---|---|
| | 137 | 7.39e-06 | 1.31 | 259 | 4.51e-05 | 1.71 | 84 | 1.50e-05 | 0.88 |
| | 88 | 9.18e-06 | 0.73 | 187 | 7.04e-05 | 1.38 | 144 | 1.63e-05 | 1.42 |
| | 90 | 1.40e-05 | 0.66 | 231 | 3.78e-05 | 1.73 | 144 | 1.99e-05 | 1.61 |
| | 86 | 1.26e-05 | 0.78 | 228 | 2.15e-05 | 1.4 | 182 | 5.32e-04 | 0.78 |
| | 97 | 1.22e-05 | 0.7 | 245 | 6.75e-05 | 1.55 | 118 | 5.99e-05 | 0.78 |
| | 87 | 9.72e-06 | 0.64 | 199 | 5.13e-05 | 1.33 | 82 | 4.50e-04 | 0.83 |
| | 115 | 5.39e-06 | 0.84 | 211 | 3.67e-05 | 1.64 | 152 | 9.68e-06 | 0.97 |
| | 89 | 1.31e-05 | 1.27 | 158 | 1.56e-04 | 3.14 | 150 | 1.16e-05 | 1.59 |
| | 105 | 1.35e-05 | 0.63 | 280 | 4.82e-05 | 1.89 | 152 | 2.23e-05 | 0.86 |
| | 97 | 5.22e-06 | 0.63 | 228 | 3.89e-05 | 1.45 | 154 | 8.00e-06 | 2.3 |
| Average | 99.1 | 1.02e-05 | 0.819 | 222.6 | 5.73e-05 | 1.722 | 136.2 | 1.14e-04 | 1.202 |
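The MSE column in Table 1 measures the mean squared error between the original sparse signal and its reconstruction. As a hedged sketch (a NumPy illustration, not the authors' MATLAB code; the toy signal below is hypothetical):

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error between the original and the reconstructed signal."""
    x_true = np.asarray(x_true, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return float(np.mean((x_true - x_rec) ** 2))

# Toy example at the dimension used in the experiments (n = 2048):
x = np.zeros(2048)
x[::128] = 1.0       # a sparse signal with a few spikes
x_hat = x + 1e-3     # reconstruction with a uniform error of 1e-3
```

For this toy pair `mse(x, x_hat)` is 1e-06, i.e. on the order of the values reported for DF-LSTT in Table 1.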
Table 2. Numerical results of DF-LSTT, CGD, SGCS, and MFRM methods in image restoration. The SNR/PSNR/SSIM column groups refer to DF-LSTT, CGD, SGCS, and MFRM, in that order.

| Image | SNR | PSNR | SSIM | SNR | PSNR | SSIM | SNR | PSNR | SSIM | SNR | PSNR | SSIM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tiffany | 21.25 | 23.08 | 0.9204 | 21.20 | 23.04 | 0.9193 | 21.24 | 23.07 | 0.9202 | 20.87 | 22.70 | 0.9128 |
| Lenna | 16.98 | 22.31 | 0.9176 | 16.93 | 22.26 | 0.9166 | 16.96 | 22.29 | 0.9173 | 16.60 | 21.94 | 0.9104 |
| Barbara | 13.81 | 20.23 | 0.6377 | 13.77 | 20.19 | 0.6355 | 13.80 | 20.22 | 0.6373 | 13.57 | 19.99 | 0.6231 |
| Average | 17.35 | 21.87 | 0.8252 | 17.30 | 21.83 | 0.8238 | 17.33 | 21.86 | 0.8249 | 17.01 | 21.54 | 0.8154 |
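The SNR and PSNR quality measures reported in Table 2 can be sketched as follows (a NumPy illustration under the standard definitions, not the authors' code; the flat toy image is hypothetical, and SSIM is typically computed with a library routine such as scikit-image's `structural_similarity`):

```python
import numpy as np

def snr(x, x_rec):
    """Signal-to-noise ratio (dB) of a restored image x_rec against the original x."""
    x, x_rec = np.asarray(x, dtype=float), np.asarray(x_rec, dtype=float)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))

def psnr(x, x_rec, peak=255.0):
    """Peak signal-to-noise ratio (dB), using the 8-bit peak value by default."""
    m = np.mean((np.asarray(x, dtype=float) - np.asarray(x_rec, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / m)

# Toy example: a flat gray image restored with a uniform error of 5 gray levels.
img = np.full((8, 8), 100.0)
restored = img + 5.0
```

Higher SNR/PSNR (and SSIM closer to 1) indicate a restoration closer to the original image, which is how Table 2 ranks the four methods.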
Table 3. Numerical test reports for the three tested methods for Problem 1. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 2 | 7 | 0.011683 | 0.00e+00 | 42 | 125 | 0.032606 | 9.97e-07 | 18 | 71 | 0.020567 | 5.72e-06 |
| | t2 | 2 | 7 | 0.007157 | 0.00e+00 | 45 | 134 | 0.020408 | 9.45e-07 | 18 | 71 | 0.017477 | 9.82e-06 |
| | t3 | 2 | 7 | 0.059349 | 0.00e+00 | 48 | 143 | 0.025053 | 9.82e-07 | 19 | 75 | 0.010401 | 7.10e-06 |
| | t4 | 2 | 7 | 0.010097 | 0.00e+00 | 50 | 149 | 0.020462 | 9.70e-07 | 18 | 71 | 0.014602 | 8.27e-06 |
| | t5 | 31 | 124 | 0.075653 | 7.60e-07 | 51 | 152 | 0.022198 | 8.17e-07 | 63 | 251 | 0.035587 | 9.58e-06 |
| | t6 | 14 | 55 | 0.041615 | 0.00e+00 | 51 | 152 | 0.017499 | 8.56e-07 | 61 | 243 | 0.070899 | 9.15e-06 |
| | t7 | 44 | 175 | 0.052359 | 1.83e-12 | 38 | 113 | 0.013579 | 9.14e-07 | 18 | 71 | 0.009228 | 9.24e-06 |
| 5000 | t1 | 2 | 7 | 0.023049 | 0.00e+00 | 41 | 122 | 0.045814 | 8.34e-07 | 18 | 71 | 0.037294 | 7.42e-06 |
| | t2 | 2 | 7 | 0.013178 | 0.00e+00 | 43 | 128 | 0.051819 | 9.81e-07 | 19 | 75 | 0.030883 | 6.53e-06 |
| | t3 | 2 | 7 | 0.008287 | 0.00e+00 | 47 | 140 | 0.056948 | 8.05e-07 | 20 | 79 | 0.040342 | 5.20e-06 |
| | t4 | 2 | 7 | 0.008819 | 0.00e+00 | 48 | 143 | 0.060879 | 9.93e-07 | 19 | 75 | 0.03691 | 8.10e-06 |
| | t5 | 28 | 112 | 0.14831 | 6.87e-07 | 49 | 146 | 0.053006 | 8.36e-07 | 62 | 247 | 0.0775 | 9.53e-06 |
| | t6 | 28 | 112 | 0.17661 | 3.38e-07 | 49 | 146 | 0.061534 | 8.76e-07 | 60 | 239 | 0.081647 | 9.10e-06 |
| | t7 | 19 | 75 | 0.10719 | 0.00e+00 | 40 | 119 | 0.052358 | 7.70e-07 | 19 | 75 | 0.042343 | 9.16e-06 |
| 10,000 | t1 | 2 | 7 | 0.017941 | 0.00e+00 | 40 | 119 | 0.082014 | 8.97e-07 | 18 | 71 | 0.055224 | 9.50e-06 |
| | t2 | 2 | 7 | 0.025597 | 0.00e+00 | 43 | 128 | 0.074301 | 8.26e-07 | 19 | 75 | 0.050823 | 8.15e-06 |
| | t3 | 2 | 7 | 0.015978 | 0.00e+00 | 46 | 137 | 0.096062 | 8.46e-07 | 20 | 79 | 0.046486 | 6.74e-06 |
| | t4 | 2 | 7 | 0.24659 | 0.00e+00 | 48 | 143 | 0.097356 | 8.30e-07 | 20 | 79 | 0.05297 | 5.11e-06 |
| | t5 | 39 | 156 | 0.86102 | 4.62e-07 | 48 | 143 | 0.11071 | 8.75e-07 | 62 | 247 | 0.19236 | 8.87e-06 |
| | t6 | 2 | 7 | 0.023214 | 0.00e+00 | 48 | 143 | 0.087449 | 9.17e-07 | 59 | 235 | 0.16649 | 9.96e-06 |
| | t7 | 11 | 43 | 0.089553 | 0.00e+00 | 37 | 110 | 0.072561 | 7.58e-07 | 20 | 79 | 0.055156 | 5.82e-06 |
| 50,000 | t1 | 2 | 7 | 0.046111 | 0.00e+00 | 39 | 116 | 0.29127 | 8.43e-07 | 19 | 75 | 0.21501 | 8.80e-06 |
| | t2 | 2 | 7 | 0.088396 | 0.00e+00 | 41 | 122 | 0.32975 | 9.37e-07 | 20 | 79 | 0.27646 | 7.39e-06 |
| | t3 | 2 | 7 | 0.050768 | 0.00e+00 | 44 | 131 | 0.36349 | 9.16e-07 | 21 | 83 | 0.28492 | 6.31e-06 |
| | t4 | 2 | 7 | 0.052953 | 0.00e+00 | 46 | 137 | 0.34443 | 8.84e-07 | 21 | 83 | 0.21931 | 5.10e-06 |
| | t5 | 34 | 136 | 1.336 | 2.22e-07 | 46 | 137 | 0.41225 | 9.34e-07 | 61 | 243 | 0.5975 | 8.85e-06 |
| | t6 | 2 | 7 | 0.10667 | 0.00e+00 | 46 | 137 | 0.44581 | 9.78e-07 | 59 | 235 | 0.59777 | 8.50e-06 |
| | t7 | 64 | 256 | 2.2443 | 4.32e-13 | 46 | 137 | 0.44419 | 8.86e-07 | 21 | 83 | 0.2052 | 5.79e-06 |
| 100,000 | t1 | 2 | 7 | 0.092134 | 0.00e+00 | 39 | 116 | 0.55718 | 7.72e-07 | 20 | 79 | 0.37816 | 5.52e-06 |
| | t2 | 2 | 7 | 0.14977 | 0.00e+00 | 41 | 122 | 0.57738 | 8.33e-07 | 21 | 83 | 0.45721 | 4.62e-06 |
| | t3 | 2 | 7 | 0.093678 | 0.00e+00 | 44 | 131 | 0.62707 | 7.92e-07 | 21 | 83 | 0.54042 | 8.78e-06 |
| | t4 | 2 | 7 | 0.10506 | 0.00e+00 | 45 | 134 | 0.63408 | 9.66e-07 | 21 | 83 | 0.53141 | 7.21e-06 |
| | t5 | 39 | 156 | 9.4738 | 9.14e-07 | 46 | 137 | 0.66002 | 7.99e-07 | 60 | 239 | 1.1271 | 9.73e-06 |
| | t6 | 2 | 7 | 0.32882 | 0.00e+00 | 46 | 137 | 0.68921 | 8.38e-07 | 58 | 231 | 1.1519 | 9.42e-06 |
| | t7 | 61 | 244 | 13.7075 | 7.48e-07 | 41 | 122 | 0.58348 | 9.17e-07 | 21 | 83 | 0.38529 | 8.20e-06 |
Table 4. Numerical test reports for the three tested methods for Problem 2. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 6 | 23 | 0.018184 | 1.73e-07 | 55 | 163 | 0.032169 | 8.99e-07 | 15 | 58 | 0.011849 | 8.59e-06 |
| | t2 | 6 | 23 | 0.014572 | 2.51e-07 | 61 | 181 | 0.0357 | 8.88e-07 | 11 | 41 | 0.010006 | 9.07e-06 |
| | t3 | 6 | 23 | 0.019846 | 5.60e-07 | 69 | 205 | 0.032634 | 8.33e-07 | 17 | 65 | 0.012605 | 6.44e-06 |
| | t4 | 6 | 23 | 0.057022 | 1.19e-07 | 76 | 226 | 0.028091 | 9.17e-07 | 18 | 68 | 0.014979 | 6.00e-06 |
| | t5 | 8 | 31 | 0.030773 | 1.98e-07 | 78 | 232 | 0.039636 | 9.18e-07 | 13 | 47 | 0.009625 | 7.58e-06 |
| | t6 | 7 | 27 | 0.014646 | 2.53e-07 | 81 | 241 | 0.03899 | 8.64e-07 | 18 | 67 | 0.011159 | 5.40e-06 |
| | t7 | 14 | 55 | 0.026256 | 6.83e-07 | 72 | 214 | 0.032568 | 8.68e-07 | 19 | 73 | 0.016356 | 6.13e-06 |
| 5000 | t1 | 6 | 23 | 0.046357 | 4.66e-07 | 59 | 175 | 0.08852 | 8.02e-07 | 16 | 62 | 0.037321 | 9.35e-06 |
| | t2 | 6 | 23 | 0.07737 | 6.75e-07 | 64 | 190 | 0.10686 | 9.97e-07 | 12 | 45 | 0.030551 | 8.80e-06 |
| | t3 | 7 | 27 | 0.068471 | 2.49e-07 | 72 | 214 | 0.12192 | 9.37e-07 | 18 | 69 | 0.047498 | 6.98e-06 |
| | t4 | 7 | 27 | 0.1053 | 7.27e-07 | 80 | 238 | 0.12654 | 8.26e-07 | 19 | 72 | 0.039088 | 6.45e-06 |
| | t5 | 8 | 31 | 0.065362 | 5.25e-07 | 82 | 244 | 0.11871 | 8.26e-07 | 14 | 51 | 0.03502 | 6.71e-06 |
| | t6 | 7 | 27 | 0.062662 | 7.28e-07 | 84 | 250 | 0.13106 | 9.71e-07 | 19 | 71 | 0.040604 | 5.71e-06 |
| | t7 | 23 | 91 | 0.45265 | 1.77e-07 | 75 | 223 | 0.16438 | 9.73e-07 | 20 | 77 | 0.059424 | 6.86e-06 |
| 10,000 | t1 | 6 | 23 | 0.081017 | 6.73e-07 | 60 | 178 | 0.16239 | 9.04e-07 | 17 | 66 | 0.075512 | 6.60e-06 |
| | t2 | 6 | 23 | 0.15334 | 9.76e-07 | 66 | 196 | 0.16906 | 9.00e-07 | 13 | 49 | 0.041324 | 6.11e-06 |
| | t3 | 7 | 27 | 0.11084 | 3.61e-07 | 74 | 220 | 0.19362 | 8.46e-07 | 18 | 69 | 0.076734 | 9.83e-06 |
| | t4 | 5 | 19 | 0.076488 | 6.03e-07 | 81 | 241 | 0.23663 | 9.32e-07 | 19 | 72 | 0.064377 | 9.07e-06 |
| | t5 | 8 | 31 | 0.25185 | 7.57e-07 | 83 | 247 | 0.21784 | 9.33e-07 | 14 | 51 | 0.050394 | 9.18e-06 |
| | t6 | 8 | 31 | 0.15676 | 1.66e-07 | 86 | 256 | 0.23307 | 8.77e-07 | 19 | 71 | 0.075462 | 8.02e-06 |
| | t7 | 27 | 107 | 1.1061 | 4.66e-07 | 77 | 229 | 0.27924 | 8.81e-07 | 20 | 77 | 0.088795 | 9.69e-06 |
| 50,000 | t1 | 7 | 27 | 0.5849 | 2.40e-07 | 64 | 190 | 0.70165 | 8.26e-07 | 18 | 70 | 0.25185 | 7.37e-06 |
| | t2 | 7 | 27 | 0.48516 | 3.47e-07 | 70 | 208 | 0.75749 | 8.23e-07 | 14 | 53 | 0.27213 | 6.74e-06 |
| | t3 | 7 | 27 | 1.0522 | 8.23e-07 | 77 | 229 | 0.81368 | 9.67e-07 | 20 | 77 | 0.31128 | 5.50e-06 |
| | t4 | 7 | 27 | 0.34532 | 2.05e-07 | 85 | 253 | 0.93153 | 8.52e-07 | 21 | 80 | 0.28943 | 5.07e-06 |
| | t5 | 9 | 35 | 4.3347 | 2.69e-07 | 87 | 259 | 0.94105 | 8.53e-07 | 16 | 59 | 0.22366 | 5.02e-06 |
| | t6 | 8 | 31 | 0.92387 | 3.80e-07 | 90 | 268 | 1.0832 | 8.02e-07 | 20 | 75 | 0.27706 | 8.93e-06 |
| | t7 | 20 | 79 | 1.8206 | 8.21e-07 | 81 | 241 | 1.4008 | 8.03e-07 | 22 | 85 | 0.39411 | 5.41e-06 |
| 100,000 | t1 | 7 | 27 | 0.73406 | 3.39e-07 | 65 | 193 | 1.3721 | 9.34e-07 | 19 | 74 | 0.52829 | 5.22e-06 |
| | t2 | 7 | 27 | 0.49843 | 4.92e-07 | 71 | 211 | 1.5287 | 9.30e-07 | 14 | 53 | 0.37191 | 9.52e-06 |
| | t3 | 8 | 31 | 0.75063 | 1.82e-07 | 79 | 235 | 1.6725 | 8.75e-07 | 20 | 77 | 0.54637 | 7.78e-06 |
| | t4 | 7 | 27 | 0.50287 | 3.21e-07 | 86 | 256 | 1.8409 | 9.64e-07 | 21 | 80 | 0.57391 | 7.17e-06 |
| | t5 | 9 | 35 | 0.76034 | 3.81e-07 | 88 | 262 | 1.9085 | 9.64e-07 | 16 | 59 | 0.52223 | 7.07e-06 |
| | t6 | 8 | 31 | 0.48187 | 5.39e-07 | 91 | 271 | 1.9642 | 9.07e-07 | 21 | 79 | 0.72579 | 6.32e-06 |
| | t7 | 21 | 83 | 1.7606 | 1.06e-07 | 82 | 244 | 2.5148 | 9.07e-07 | 22 | 85 | 0.85567 | 7.66e-06 |
Table 5. Numerical test reports for the three tested methods for Problem 3. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 2 | 6 | 0.004359 | 0 | 1 | 2 | 0.029302 | 0 | 1 | 3 | 0.006887 | 0 |
| | t2 | 2 | 6 | 0.004097 | 0 | 1 | 2 | 0.003218 | 0 | 1 | 3 | 0.002975 | 0 |
| | t3 | 2 | 6 | 0.002719 | 0 | 1 | 2 | 0.002851 | 0 | 1 | 3 | 0.007665 | 0 |
| | t4 | 2 | 6 | 0.002236 | 0 | 1 | 3 | 0.003144 | 0 | 1 | 4 | 0.007096 | 0 |
| | t5 | 2 | 6 | 0.002477 | 0 | 1 | 3 | 0.004593 | 0 | 1 | 4 | 0.005078 | 0 |
| | t6 | 2 | 6 | 0.002744 | 0 | 1 | 3 | 0.003093 | 0 | 1 | 4 | 0.002427 | 0 |
| | t7 | 15 | 59 | 0.056123 | 7.18e-07 | 1 | 2 | 0.003046 | 0 | 1 | 3 | 0.006539 | 0 |
| 5000 | t1 | 2 | 6 | 0.008889 | 0 | 1 | 2 | 0.008198 | 0 | 1 | 3 | 0.008733 | 0 |
| | t2 | 2 | 6 | 0.012555 | 0 | 1 | 2 | 0.007947 | 0 | 1 | 3 | 0.008462 | 0 |
| | t3 | 2 | 6 | 0.008658 | 0 | 1 | 2 | 0.007763 | 0 | 1 | 3 | 0.008092 | 0 |
| | t4 | 2 | 6 | 0.00612 | 0 | 1 | 3 | 0.008812 | 0 | 1 | 4 | 0.009649 | 0 |
| | t5 | 2 | 6 | 0.005853 | 0 | 1 | 3 | 0.009794 | 0 | 1 | 4 | 0.00889 | 0 |
| | t6 | 2 | 6 | 0.006947 | 0 | 1 | 3 | 0.007755 | 0 | 1 | 4 | 0.007802 | 0 |
| | t7 | 18 | 71 | 0.31812 | 7.95e-07 | 1 | 2 | 0.010464 | 0 | 1 | 3 | 0.008169 | 0 |
| 10,000 | t1 | 2 | 6 | 0.015963 | 0 | 1 | 2 | 0.012685 | 0 | 1 | 3 | 0.012387 | 0 |
| | t2 | 2 | 6 | 0.016001 | 0 | 1 | 2 | 0.011649 | 0 | 1 | 3 | 0.01044 | 0 |
| | t3 | 2 | 6 | 0.015956 | 0 | 1 | 2 | 0.010229 | 0 | 1 | 3 | 0.010306 | 0 |
| | t4 | 2 | 6 | 0.018197 | 0 | 1 | 3 | 0.011267 | 0 | 1 | 4 | 0.01704 | 0 |
| | t5 | 2 | 6 | 0.011355 | 0 | 1 | 3 | 0.01182 | 0 | 1 | 4 | 0.010001 | 0 |
| | t6 | 2 | 6 | 0.018842 | 0 | 1 | 3 | 0.012149 | 0 | 1 | 4 | 0.012029 | 0 |
| | t7 | 18 | 71 | 0.38333 | 8.40e-07 | 1 | 2 | 0.010945 | 0 | 1 | 3 | 0.011036 | 0 |
| 50,000 | t1 | 2 | 6 | 0.069633 | 0 | 1 | 2 | 0.038998 | 0 | 1 | 3 | 0.038391 | 0 |
| | t2 | 2 | 6 | 0.058314 | 0 | 1 | 2 | 0.040189 | 0 | 1 | 3 | 0.04888 | 0 |
| | t3 | 2 | 6 | 0.095231 | 0 | 1 | 2 | 0.038922 | 0 | 1 | 3 | 0.036443 | 0 |
| | t4 | 2 | 6 | 0.046998 | 0 | 1 | 3 | 0.040942 | 0 | 1 | 4 | 0.043418 | 0 |
| | t5 | 2 | 6 | 0.044827 | 0 | 1 | 3 | 0.053045 | 0 | 1 | 4 | 0.040382 | 0 |
| | t6 | 2 | 6 | 0.091957 | 0 | 1 | 3 | 0.042239 | 0 | 1 | 4 | 0.041188 | 0 |
| | t7 | 18 | 71 | 1.2966 | 5.49e-07 | 1 | 2 | 0.046987 | 0 | 1 | 3 | 0.040932 | 0 |
| 100,000 | t1 | 2 | 6 | 0.11838 | 0 | 1 | 2 | 0.090727 | 0 | 1 | 3 | 0.077927 | 0 |
| | t2 | 2 | 6 | 0.11658 | 0 | 1 | 2 | 0.077024 | 0 | 1 | 3 | 0.074788 | 0 |
| | t3 | 2 | 6 | 0.12338 | 0 | 1 | 2 | 0.10601 | 0 | 1 | 3 | 0.082006 | 0 |
| | t4 | 2 | 6 | 0.16886 | 0 | 1 | 3 | 0.085545 | 0 | 1 | 4 | 0.08095 | 0 |
| | t5 | 2 | 6 | 0.086942 | 0 | 1 | 3 | 0.090827 | 0 | 1 | 4 | 0.12206 | 0 |
| | t6 | 2 | 6 | 0.11021 | 0 | 1 | 3 | 0.080531 | 0 | 1 | 4 | 0.080564 | 0 |
| | t7 | 20 | 79 | 3.0379 | 3.93e-07 | 1 | 2 | 0.078351 | 0 | 1 | 3 | 0.07851 | 0 |
Table 6. Numerical test reports for the three tested methods for Problem 4. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 3 | 11 | 0.004591 | 0 | 68 | 203 | 0.029809 | 8.60e-07 | 18 | 71 | 0.010585 | 9.93e-06 |
| | t2 | 3 | 11 | 0.002489 | 0 | 71 | 212 | 0.036826 | 8.29e-07 | 19 | 75 | 0.007156 | 8.75e-06 |
| | t3 | 2 | 7 | 0.002095 | 0 | 74 | 221 | 0.028185 | 8.89e-07 | 20 | 79 | 0.009969 | 7.15e-06 |
| | t4 | 3 | 11 | 0.002705 | 0 | 76 | 227 | 0.034306 | 9.28e-07 | 47 | 187 | 0.017485 | 7.83e-06 |
| | t5 | 2 | 7 | 0.002888 | 0 | 76 | 227 | 0.028661 | 9.95e-07 | 46 | 183 | 0.019829 | 9.76e-06 |
| | t6 | 3 | 11 | 0.009778 | 0 | 77 | 230 | 0.035383 | 8.34e-07 | 41 | 163 | 0.015464 | 8.77e-06 |
| | t7 | 57 | 228 | 0.088381 | 7.44e-07 | 74 | 221 | 0.027205 | 8.89e-07 | 20 | 79 | 0.010696 | 6.77e-06 |
| 5000 | t1 | 3 | 11 | 0.01231 | 0 | 71 | 212 | 0.077834 | 9.85e-07 | 20 | 79 | 0.024704 | 5.57e-06 |
| | t2 | 3 | 11 | 0.009414 | 0 | 74 | 221 | 0.065757 | 9.49e-07 | 20 | 79 | 0.029764 | 9.80e-06 |
| | t3 | 2 | 7 | 0.007662 | 0 | 78 | 233 | 0.10357 | 8.14e-07 | 21 | 83 | 0.026368 | 8.01e-06 |
| | t4 | 3 | 11 | 0.016453 | 0 | 80 | 239 | 0.086417 | 8.50e-07 | 49 | 195 | 0.057593 | 9.46e-06 |
| | t5 | 2 | 7 | 0.007345 | 0 | 80 | 239 | 0.07626 | 9.11e-07 | 49 | 195 | 0.067517 | 8.68e-06 |
| | t6 | 3 | 11 | 0.011993 | 0 | 80 | 239 | 0.076816 | 9.55e-07 | 44 | 175 | 0.07587 | 7.79e-06 |
| | t7 | 45 | 180 | 0.38386 | 2.35e-07 | 78 | 233 | 0.066987 | 8.20e-07 | 21 | 83 | 0.037799 | 7.86e-06 |
| 10,000 | t1 | 3 | 11 | 0.011882 | 0 | 73 | 218 | 0.12965 | 8.91e-07 | 20 | 79 | 0.076194 | 7.88e-06 |
| | t2 | 3 | 11 | 0.029407 | 0 | 76 | 227 | 0.11801 | 8.59e-07 | 21 | 83 | 0.053778 | 6.94e-06 |
| | t3 | 2 | 7 | 0.016882 | 0 | 79 | 236 | 0.13374 | 9.21e-07 | 22 | 87 | 0.046575 | 5.67e-06 |
| | t4 | 3 | 11 | 0.018218 | 0 | 81 | 242 | 0.19064 | 9.62e-07 | 50 | 199 | 0.11588 | 9.84e-06 |
| | t5 | 2 | 7 | 0.012436 | 0 | 82 | 245 | 0.12664 | 8.25e-07 | 50 | 199 | 0.10082 | 9.03e-06 |
| | t6 | 3 | 11 | 0.049629 | 0 | 82 | 245 | 0.12031 | 8.64e-07 | 45 | 179 | 0.088501 | 8.11e-06 |
| | t7 | 45 | 180 | 0.26301 | 3.61e-07 | 79 | 236 | 0.13346 | 9.24e-07 | 22 | 87 | 0.075082 | 5.55e-06 |
| 50,000 | t1 | 3 | 11 | 0.059552 | 0 | 77 | 230 | 0.53026 | 8.16e-07 | 21 | 83 | 0.17253 | 8.83e-06 |
| | t2 | 3 | 11 | 0.05224 | 0 | 79 | 236 | 0.51796 | 9.84e-07 | 22 | 87 | 0.16907 | 7.78e-06 |
| | t3 | 2 | 7 | 0.031386 | 0 | 83 | 248 | 0.55337 | 8.44e-07 | 23 | 91 | 0.18569 | 6.36e-06 |
| | t4 | 3 | 11 | 0.089099 | 0 | 85 | 254 | 0.56858 | 8.81e-07 | 53 | 211 | 0.46984 | 8.75e-06 |
| | t5 | 2 | 7 | 0.041153 | 0 | 85 | 254 | 0.63624 | 9.44e-07 | 53 | 211 | 0.44157 | 8.02e-06 |
| | t6 | 3 | 11 | 0.068645 | 0 | 85 | 254 | 0.75653 | 9.89e-07 | 47 | 187 | 0.5116 | 9.80e-06 |
| | t7 | 51 | 204 | 0.97251 | 2.43e-07 | 83 | 248 | 0.59375 | 8.48e-07 | 23 | 91 | 0.22912 | 6.16e-06 |
| 100,000 | t1 | 3 | 11 | 0.23207 | 0 | 78 | 233 | 1.1907 | 9.24e-07 | 22 | 87 | 0.32312 | 6.25e-06 |
| | t2 | 3 | 11 | 0.096102 | 0 | 81 | 242 | 0.95676 | 8.90e-07 | 23 | 91 | 0.36086 | 5.51e-06 |
| | t3 | 2 | 7 | 0.059522 | 0 | 84 | 251 | 0.99173 | 9.55e-07 | 23 | 91 | 0.36819 | 8.99e-06 |
| | t4 | 3 | 11 | 0.18868 | 0 | 86 | 257 | 1.0274 | 9.97e-07 | 54 | 215 | 0.82162 | 9.10e-06 |
| | t5 | 2 | 7 | 0.076178 | 0 | 87 | 260 | 1.3603 | 8.55e-07 | 54 | 215 | 0.85533 | 8.34e-06 |
| | t6 | 3 | 11 | 0.13387 | 0 | 87 | 260 | 1.0415 | 8.95e-07 | 49 | 195 | 0.7369 | 7.49e-06 |
| | t7 | 48 | 192 | 1.7777 | 7.92e-07 | 84 | 251 | 0.99015 | 9.59e-07 | 23 | 91 | 0.36125 | 8.67e-06 |
Table 7. Numerical test reports for the three tested methods for Problem 5. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order; F denotes a failure.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 35 | 139 | 0.028177 | 4.01e-07 | 90 | 268 | 0.024747 | 8.03e-07 | 22 | 82 | 0.012429 | 7.48e-06 |
| | t2 | 21 | 83 | 0.017203 | 6.42e-07 | 89 | 265 | 0.02733 | 9.07e-07 | 23 | 87 | 0.012966 | 7.31e-06 |
| | t3 | 31 | 123 | 0.02123 | 8.88e-07 | 88 | 262 | 0.025983 | 8.96e-07 | 23 | 89 | 0.011284 | 9.31e-06 |
| | t4 | 30 | 120 | 0.032095 | 3.98e-07 | 89 | 266 | 0.040925 | 9.36e-07 | 49 | 195 | 0.026429 | 8.45e-06 |
| | t5 | 27 | 108 | 0.048348 | 4.09e-07 | 87 | 260 | 0.025181 | 9.16e-07 | 53 | 211 | 0.019872 | 8.38e-06 |
| | t6 | 26 | 104 | 0.019389 | 4.97e-07 | 84 | 251 | 0.031167 | 8.32e-07 | 46 | 183 | 0.027704 | 8.80e-06 |
| | t7 | 28 | 111 | 0.066312 | 5.41e-07 | 90 | 268 | 0.026267 | 9.86e-07 | 168 | 670 | 0.05767 | 9.42e-06 |
| 5000 | t1 | 30 | 119 | 0.099879 | 4.22e-07 | 97 | 289 | 0.099454 | 8.47e-07 | 24 | 90 | 0.036297 | 6.36e-06 |
| | t2 | 49 | 195 | 0.20684 | 2.06e-07 | 96 | 286 | 0.085712 | 9.58e-07 | 25 | 94 | 0.03958 | 6.24e-06 |
| | t3 | 27 | 107 | 0.091189 | 6.71e-07 | 95 | 283 | 0.12574 | 9.47e-07 | 25 | 97 | 0.034165 | 5.86e-06 |
| | t4 | 25 | 100 | 0.11071 | 7.95e-07 | 97 | 290 | 0.10252 | 8.05e-07 | 53 | 211 | 0.076152 | 9.11e-06 |
| | t5 | 41 | 164 | 0.13125 | 8.90e-07 | 94 | 281 | 0.092231 | 9.87e-07 | 58 | 231 | 0.08755 | 8.56e-06 |
| | t6 | 30 | 120 | 0.10232 | 7.65e-07 | 91 | 272 | 0.13952 | 8.88e-07 | 50 | 199 | 0.07417 | 7.65e-06 |
| | t7 | 35 | 139 | 0.15239 | 4.54e-07 | 97 | 289 | 0.094881 | 9.23e-07 | 316 | 1262 | 0.39534 | 9.96e-06 |
| 10,000 | t1 | 48 | 191 | 0.61255 | 5.75e-07 | 100 | 298 | 0.30348 | 8.69e-07 | 25 | 94 | 0.079057 | 5.40e-06 |
| | t2 | 37 | 147 | 0.32599 | 2.23e-07 | 99 | 295 | 0.18161 | 9.83e-07 | 25 | 94 | 0.079123 | 8.90e-06 |
| | t3 | 25 | 99 | 0.17968 | 5.85e-07 | 98 | 292 | 0.17487 | 9.72e-07 | 25 | 97 | 0.072827 | 8.64e-06 |
| | t4 | 24 | 96 | 0.12434 | 8.81e-07 | 100 | 299 | 0.18342 | 8.29e-07 | 55 | 219 | 0.15362 | 9.11e-06 |
| | t5 | 31 | 124 | 0.17139 | 2.16e-07 | 98 | 293 | 0.17674 | 8.14e-07 | 60 | 239 | 0.13688 | 9.01e-06 |
| | t6 | 33 | 132 | 0.25773 | 7.87e-07 | 94 | 281 | 0.16779 | 9.15e-07 | 51 | 203 | 0.15962 | 9.62e-06 |
| | t7 | 43 | 171 | 0.35695 | 3.19e-07 | 99 | 295 | 0.18399 | 8.84e-07 | 325 | 1298 | 0.7484 | 9.67e-06 |
| 50,000 | t1 | 99 | 395 | 6.631 | 5.43e-07 | 107 | 319 | 0.77898 | 9.16e-07 | 26 | 98 | 0.26672 | 6.75e-06 |
| | t2 | 78 | 311 | 4.9202 | 6.69e-07 | 107 | 319 | 0.79789 | 8.28e-07 | 27 | 102 | 0.23943 | 5.16e-06 |
| | t3 | 36 | 143 | 0.81487 | 4.78e-07 | 106 | 316 | 0.91174 | 8.19e-07 | 27 | 105 | 0.30807 | 5.28e-06 |
| | t4 | 29 | 116 | 0.66459 | 7.02e-07 | 107 | 320 | 1.1448 | 8.77e-07 | 60 | 239 | 0.5978 | 8.66e-06 |
| | t5 | 30 | 120 | 0.7737 | 6.25e-07 | 105 | 314 | 0.76987 | 8.61e-07 | 65 | 259 | 0.61224 | 9.05e-06 |
| | t6 | 35 | 140 | 0.97053 | 3.67e-07 | 101 | 302 | 0.75366 | 9.68e-07 | 56 | 223 | 0.53722 | 8.19e-06 |
| | t7 | 48 | 191 | 2.2774 | 4.52e-07 | 106 | 316 | 0.77381 | 8.74e-07 | F | F | F | F |
| 100,000 | t1 | 95 | 379 | 11.9112 | 9.12e-07 | 110 | 328 | 1.7098 | 9.39e-07 | 26 | 98 | 0.4346 | 9.73e-06 |
| | t2 | 95 | 379 | 10.7767 | 4.09e-07 | 110 | 328 | 1.4604 | 8.50e-07 | 27 | 102 | 0.44774 | 7.39e-06 |
| | t3 | 27 | 107 | 1.2642 | 4.65e-07 | 109 | 325 | 1.4436 | 8.40e-07 | 27 | 105 | 0.46265 | 7.77e-06 |
| | t4 | 30 | 120 | 1.5062 | 6.16e-07 | 110 | 329 | 1.7956 | 9.00e-07 | 62 | 247 | 1.0814 | 9.00e-06 |
| | t5 | 57 | 228 | 5.2267 | 9.73e-07 | 108 | 323 | 1.4142 | 8.84e-07 | 67 | 267 | 1.2447 | 9.50e-06 |
| | t6 | 27 | 107 | 1.4554 | 3.33e-07 | 104 | 311 | 1.386 | 9.94e-07 | 58 | 231 | 1.1897 | 8.32e-06 |
| | t7 | 119 | 475 | 16.6573 | 7.19e-07 | 109 | 325 | 1.7025 | 9.46e-07 | F | F | F | F |
Table 8. Numerical test reports for the three tested methods for Problem 6. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 12 | 48 | 0.03188 | 2.30e-07 | 83 | 248 | 0.031732 | 8.42e-07 | 23 | 91 | 0.014183 | 9.28e-06 |
| | t2 | 12 | 48 | 0.041971 | 2.23e-07 | 83 | 248 | 0.031901 | 8.09e-07 | 23 | 91 | 0.014925 | 8.92e-06 |
| | t3 | 12 | 48 | 0.040553 | 2.02e-07 | 82 | 245 | 0.031622 | 8.91e-07 | 23 | 91 | 0.020135 | 7.86e-06 |
| | t4 | 11 | 44 | 0.024141 | 6.97e-07 | 80 | 239 | 0.031387 | 9.53e-07 | 23 | 91 | 0.014473 | 5.38e-06 |
| | t5 | 11 | 44 | 0.038626 | 5.62e-07 | 79 | 236 | 0.043836 | 9.56e-07 | 22 | 87 | 0.016292 | 8.62e-06 |
| | t6 | 11 | 44 | 0.01937 | 3.35e-07 | 77 | 230 | 0.029442 | 8.81e-07 | 22 | 87 | 0.014627 | 5.08e-06 |
| | t7 | 12 | 48 | 0.041621 | 4.70e-07 | 82 | 245 | 0.031674 | 9.04e-07 | 23 | 91 | 0.026782 | 7.91e-06 |
| 5000 | t1 | 12 | 48 | 0.096723 | 4.01e-07 | 86 | 257 | 0.16263 | 9.65e-07 | 25 | 99 | 0.062377 | 5.22e-06 |
| | t2 | 12 | 48 | 0.14973 | 3.86e-07 | 86 | 257 | 0.20647 | 9.28e-07 | 25 | 99 | 0.063981 | 5.02e-06 |
| | t3 | 12 | 48 | 0.12093 | 3.40e-07 | 86 | 257 | 0.17205 | 8.17e-07 | 24 | 95 | 0.056084 | 8.82e-06 |
| | t4 | 12 | 48 | 0.10493 | 2.33e-07 | 84 | 251 | 0.17167 | 8.74e-07 | 24 | 95 | 0.059374 | 6.04e-06 |
| | t5 | 12 | 48 | 0.083136 | 1.87e-07 | 83 | 248 | 0.15447 | 8.77e-07 | 23 | 91 | 0.055294 | 9.67e-06 |
| | t6 | 11 | 44 | 0.082714 | 7.05e-07 | 81 | 242 | 0.15192 | 8.08e-07 | 23 | 91 | 0.055478 | 5.70e-06 |
| | t7 | 12 | 48 | 0.084104 | 3.48e-07 | 86 | 257 | 0.16641 | 8.25e-07 | 24 | 95 | 0.06338 | 8.87e-06 |
| 10,000 | t1 | 12 | 48 | 0.158 | 5.68e-07 | 88 | 263 | 0.34039 | 8.73e-07 | 25 | 99 | 0.098299 | 7.38e-06 |
| | t2 | 12 | 48 | 0.15694 | 5.46e-07 | 88 | 263 | 0.35149 | 8.40e-07 | 25 | 99 | 0.13644 | 7.09e-06 |
| | t3 | 12 | 48 | 0.2381 | 4.81e-07 | 87 | 260 | 0.34294 | 9.25e-07 | 25 | 99 | 0.11271 | 6.25e-06 |
| | t4 | 12 | 48 | 0.20746 | 3.29e-07 | 85 | 254 | 0.26475 | 9.89e-07 | 24 | 95 | 0.093885 | 8.54e-06 |
| | t5 | 12 | 48 | 0.15857 | 2.64e-07 | 84 | 251 | 0.2461 | 9.92e-07 | 24 | 95 | 0.10396 | 6.85e-06 |
| | t6 | 11 | 44 | 0.81101 | 9.97e-07 | 82 | 245 | 0.24381 | 9.14e-07 | 23 | 91 | 0.12403 | 8.06e-06 |
| | t7 | 12 | 48 | 0.16525 | 4.85e-07 | 87 | 260 | 0.26637 | 9.35e-07 | 25 | 99 | 0.10128 | 6.30e-06 |
| 50,000 | t1 | 13 | 52 | 1.0506 | 1.98e-07 | 91 | 272 | 1.2907 | 1.00e-06 | 26 | 103 | 0.40761 | 8.26e-06 |
| | t2 | 13 | 52 | 0.75114 | 1.91e-07 | 91 | 272 | 1.1071 | 9.61e-07 | 26 | 103 | 0.40356 | 7.95e-06 |
| | t3 | 13 | 52 | 0.98117 | 1.68e-07 | 91 | 272 | 1.0738 | 8.47e-07 | 26 | 103 | 0.41251 | 7.00e-06 |
| | t4 | 12 | 48 | 0.65997 | 7.36e-07 | 89 | 266 | 1.0542 | 9.06e-07 | 25 | 99 | 0.39262 | 9.56e-06 |
| | t5 | 12 | 48 | 0.46882 | 5.91e-07 | 88 | 263 | 1.5109 | 9.08e-07 | 25 | 99 | 0.39186 | 7.67e-06 |
| | t6 | 12 | 48 | 0.61393 | 3.48e-07 | 86 | 257 | 1.0452 | 8.37e-07 | 24 | 95 | 0.38423 | 9.03e-06 |
| | t7 | 13 | 52 | 0.54298 | 1.70e-07 | 91 | 272 | 1.0556 | 8.54e-07 | 26 | 103 | 0.4047 | 7.06e-06 |
| 100,000 | t1 | 13 | 52 | 1.4092 | 2.81e-07 | 93 | 278 | 2.7716 | 9.05e-07 | 27 | 107 | 1.067 | 5.86e-06 |
| | t2 | 13 | 52 | 1.3229 | 2.70e-07 | 93 | 278 | 2.3812 | 8.70e-07 | 27 | 107 | 1.2086 | 5.63e-06 |
| | t3 | 13 | 52 | 1.6913 | 2.38e-07 | 92 | 275 | 2.6594 | 9.58e-07 | 26 | 103 | 0.96952 | 9.90e-06 |
| | t4 | 13 | 52 | 1.3408 | 1.63e-07 | 91 | 272 | 2.3237 | 8.20e-07 | 26 | 103 | 0.94103 | 6.78e-06 |
| | t5 | 12 | 48 | 1.2425 | 8.35e-07 | 90 | 269 | 3.1173 | 8.22e-07 | 26 | 103 | 1.0502 | 5.44e-06 |
| | t6 | 12 | 48 | 1.3332 | 4.93e-07 | 87 | 260 | 2.4359 | 9.47e-07 | 25 | 99 | 0.88394 | 6.40e-06 |
| | t7 | 13 | 52 | 1.3846 | 2.40e-07 | 92 | 275 | 2.6667 | 9.66e-07 | 26 | 103 | 0.9198 | 9.98e-06 |
Table 9. Numerical test reports for the three tested methods for Problem 7. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 8 | 32 | 0.011558 | 5.30e-07 | 37 | 110 | 0.013147 | 9.48e-07 | 17 | 67 | 0.009313 | 6.98e-06 |
| | t2 | 9 | 36 | 0.013381 | 9.70e-07 | 37 | 110 | 0.013514 | 6.87e-07 | 15 | 59 | 0.010714 | 9.89e-06 |
| | t3 | 7 | 28 | 0.011551 | 3.38e-07 | 30 | 89 | 0.011236 | 6.51e-07 | 16 | 63 | 0.008948 | 5.79e-06 |
| | t4 | 9 | 36 | 0.01024 | 2.83e-07 | 38 | 113 | 0.013281 | 8.05e-07 | 16 | 63 | 0.008965 | 5.21e-06 |
| | t5 | 9 | 36 | 0.00978 | 8.28e-07 | 38 | 113 | 0.021489 | 8.05e-07 | 19 | 75 | 0.014196 | 4.95e-06 |
| | t6 | 10 | 39 | 0.012021 | 1.26e-07 | 37 | 109 | 0.011749 | 9.07e-07 | 18 | 70 | 0.009389 | 8.93e-06 |
| | t7 | 21 | 84 | 0.035658 | 5.40e-07 | 37 | 110 | 0.015765 | 6.61e-07 | 20 | 79 | 0.019759 | 8.71e-06 |
| 5000 | t1 | 9 | 36 | 0.042338 | 1.32e-07 | 39 | 116 | 0.052548 | 8.30e-07 | 18 | 71 | 0.037642 | 7.60e-06 |
| | t2 | 10 | 40 | 0.081106 | 2.42e-07 | 38 | 113 | 0.051834 | 9.60e-07 | 17 | 67 | 0.027158 | 5.25e-06 |
| | t3 | 7 | 28 | 0.039851 | 7.55e-07 | 31 | 92 | 0.041545 | 9.10e-07 | 17 | 67 | 0.028917 | 6.31e-06 |
| | t4 | 9 | 36 | 0.039967 | 6.33e-07 | 40 | 119 | 0.057437 | 7.05e-07 | 17 | 67 | 0.037275 | 5.68e-06 |
| | t5 | 10 | 40 | 0.046165 | 2.06e-07 | 40 | 119 | 0.052164 | 7.05e-07 | 20 | 79 | 0.030917 | 5.39e-06 |
| | t6 | 10 | 39 | 0.044881 | 2.82e-07 | 39 | 115 | 0.064438 | 7.94e-07 | 19 | 74 | 0.046654 | 9.73e-06 |
| | t7 | 24 | 96 | 0.18701 | 6.38e-07 | 38 | 113 | 0.074383 | 9.00e-07 | 21 | 83 | 0.048626 | 9.52e-06 |
| 10,000 | t1 | 9 | 36 | 0.084021 | 1.87e-07 | 40 | 119 | 0.1459 | 7.34e-07 | 19 | 75 | 0.060554 | 5.23e-06 |
| | t2 | 10 | 40 | 0.091788 | 3.42e-07 | 39 | 116 | 0.095325 | 8.50e-07 | 17 | 67 | 0.057649 | 7.42e-06 |
| | t3 | 8 | 32 | 0.082144 | 1.19e-07 | 32 | 95 | 0.074842 | 8.05e-07 | 17 | 67 | 0.046905 | 8.92e-06 |
| | t4 | 9 | 36 | 0.065787 | 8.95e-07 | 40 | 119 | 0.10024 | 9.96e-07 | 17 | 67 | 0.05047 | 8.03e-06 |
| | t5 | 10 | 40 | 0.069698 | 2.92e-07 | 40 | 119 | 0.10003 | 9.96e-07 | 20 | 79 | 0.069303 | 7.62e-06 |
| | t6 | 10 | 39 | 0.1457 | 3.99e-07 | 40 | 118 | 0.097749 | 7.02e-07 | 20 | 78 | 0.055323 | 6.70e-06 |
| | t7 | 21 | 84 | 0.14332 | 3.62e-07 | 39 | 116 | 0.12983 | 8.05e-07 | 22 | 87 | 0.078308 | 6.51e-06 |
| 50,000 | t1 | 9 | 36 | 0.25176 | 4.17e-07 | 42 | 125 | 0.47157 | 6.42e-07 | 20 | 79 | 0.29584 | 5.70e-06 |
| | t2 | 10 | 40 | 0.76304 | 7.64e-07 | 41 | 122 | 0.44664 | 7.43e-07 | 18 | 71 | 0.21246 | 8.08e-06 |
| | t3 | 8 | 32 | 0.22903 | 2.66e-07 | 34 | 101 | 0.30639 | 7.04e-07 | 18 | 71 | 0.22024 | 9.71e-06 |
| | t4 | 10 | 40 | 0.46107 | 2.23e-07 | 42 | 125 | 0.40534 | 8.72e-07 | 18 | 71 | 0.19871 | 8.75e-06 |
| | t5 | 10 | 40 | 0.39956 | 6.52e-07 | 42 | 125 | 0.46103 | 8.72e-07 | 21 | 83 | 0.22027 | 8.30e-06 |
| | t6 | 10 | 39 | 0.30315 | 8.92e-07 | 41 | 121 | 0.51651 | 9.82e-07 | 21 | 82 | 0.23139 | 7.30e-06 |
| | t7 | 21 | 84 | 0.88881 | 4.14e-07 | 41 | 122 | 0.73262 | 7.00e-07 | 23 | 91 | 0.37248 | 7.08e-06 |
| 100,000 | t1 | 9 | 36 | 0.49984 | 5.90e-07 | 42 | 125 | 0.90106 | 9.08e-07 | 20 | 79 | 0.57692 | 8.06e-06 |
| | t2 | 11 | 44 | 0.83437 | 1.20e-07 | 42 | 125 | 0.95004 | 6.58e-07 | 19 | 75 | 0.51407 | 5.57e-06 |
| | t3 | 8 | 32 | 0.63611 | 3.76e-07 | 34 | 101 | 0.74225 | 9.96e-07 | 19 | 75 | 0.42249 | 6.69e-06 |
| | t4 | 10 | 40 | 0.80033 | 3.15e-07 | 43 | 128 | 0.76591 | 7.71e-07 | 19 | 75 | 0.41102 | 6.03e-06 |
| | t5 | 10 | 40 | 0.53766 | 9.23e-07 | 43 | 128 | 0.79251 | 7.71e-07 | 22 | 87 | 0.45205 | 5.72e-06 |
| | t6 | 11 | 43 | 0.69393 | 1.41e-07 | 42 | 124 | 0.89477 | 8.69e-07 | 22 | 86 | 0.50902 | 5.03e-06 |
| | t7 | 21 | 84 | 1.3381 | 8.09e-07 | 41 | 122 | 1.451 | 9.90e-07 | 24 | 95 | 0.74006 | 4.89e-06 |
Table 10. Numerical test reports for the three tested methods for Problem 8. The ITER/FVAL/TIME/NORM column groups refer to DF-LSTT, CGD, and PCG, in that order.

| DIM | INP | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM | ITER | FVAL | TIME | NORM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1000 | t1 | 13 | 52 | 0.011502 | 4.53e-07 | 21 | 62 | 0.007103 | 9.28e-07 | 9 | 35 | 0.007659 | 2.15e-06 |
| | t2 | 13 | 52 | 0.010497 | 2.74e-07 | 21 | 62 | 0.007684 | 5.62e-07 | 8 | 31 | 0.005796 | 7.72e-06 |
| | t3 | 12 | 48 | 0.018922 | 7.51e-07 | 21 | 62 | 0.006208 | 5.36e-07 | 8 | 31 | 0.004682 | 7.36e-06 |
| | t4 | 14 | 56 | 0.011298 | 2.60e-07 | 23 | 68 | 0.00871 | 5.84e-07 | 9 | 35 | 0.005209 | 7.18e-06 |
| | t5 | 14 | 56 | 0.0119 | 3.53e-07 | 23 | 68 | 0.006241 | 7.91e-07 | 9 | 35 | 0.004898 | 9.73e-06 |
| | t6 | 14 | 56 | 0.020175 | 5.07e-07 | 24 | 71 | 0.006771 | 4.93e-07 | 10 | 39 | 0.005248 | 2.36e-06 |
| | t7 | 9 | 36 | 0.007601 | 9.21e-07 | 22 | 65 | 0.005934 | 5.11e-07 | 9 | 35 | 0.004781 | 2.82e-06 |
| 5000 | t1 | 14 | 56 | 0.044747 | 2.48e-07 | 22 | 65 | 0.030281 | 9.01e-07 | 9 | 35 | 0.014961 | 4.81e-06 |
| | t2 | 13 | 52 | 0.039991 | 6.13e-07 | 22 | 65 | 0.037367 | 5.46e-07 | 9 | 35 | 0.019975 | 2.91e-06 |
| | t3 | 13 | 52 | 0.061786 | 4.11e-07 | 22 | 65 | 0.024256 | 5.20e-07 | 9 | 35 | 0.012236 | 2.78e-06 |
| | t4 | 14 | 56 | 0.037006 | 5.82e-07 | 24 | 71 | 0.030963 | 5.67e-07 | 10 | 39 | 0.012613 | 2.71e-06 |
| | t5 | 14 | 56 | 0.042827 | 7.89e-07 | 24 | 71 | 0.027167 | 7.68e-07 | 10 | 39 | 0.012167 | 3.67e-06 |
| | t6 | 15 | 60 | 0.040744 | 2.77e-07 | 25 | 74 | 0.032369 | 4.79e-07 | 10 | 39 | 0.014305 | 5.27e-06 |
| | t7 | 10 | 40 | 0.031693 | 5.07e-07 | 23 | 68 | 0.017202 | 5.02e-07 | 9 | 35 | 0.011532 | 6.18e-06 |
| 10,000 | t1 | 14 | 56 | 0.1304 | 3.51e-07 | 23 | 68 | 0.031436 | 5.53e-07 | 9 | 35 | 0.015852 | 6.80e-06 |
| | t2 | 13 | 52 | 0.082982 | 8.67e-07 | 22 | 65 | 0.032629 | 7.71e-07 | 9 | 35 | 0.01903 | 4.12e-06 |
| | t3 | 13 | 52 | 0.10124 | 5.82e-07 | 22 | 65 | 0.031087 | 7.36e-07 | 9 | 35 | 0.025673 | 3.93e-06 |
| | t4 | 14 | 56 | 0.084077 | 8.24e-07 | 24 | 71 | 0.058121 | 8.02e-07 | 10 | 39 | 0.015808 | 3.83e-06 |
| | t5 | 15 | 60 | 0.13836 | 2.73e-07 | 25 | 74 | 0.034695 | 4.72e-07 | 10 | 39 | 0.021954 | 5.19e-06 |
| | t6 | 15 | 60 | 0.09151 | 3.92e-07 | 25 | 74 | 0.033872 | 6.78e-07 | 10 | 39 | 0.017664 | 7.45e-06 |
| | t7 | 10 | 40 | 0.074314 | 7.22e-07 | 23 | 68 | 0.04517 | 7.08e-07 | 9 | 35 | 0.0245 | 8.64e-06 |
| 50,000 | t1 | 14 | 56 | 0.48989 | 7.84e-07 | 24 | 71 | 0.15209 | 5.37e-07 | 10 | 39 | 0.070466 | 2.57e-06 |
| | t2 | 14 | 56 | 0.39664 | 4.75e-07 | 23 | 68 | 0.12867 | 7.49e-07 | 9 | 35 | 0.059796 | 9.21e-06 |
| | t3 | 14 | 56 | 1.1025 | 3.19e-07 | 23 | 68 | 0.26774 | 7.15e-07 | 9 | 35 | 0.055552 | 8.79e-06 |
| | t4 | 15 | 60 | 0.37756 | 4.51e-07 | 25 | 74 | 0.18066 | 7.79e-07 | 10 | 39 | 0.063615 | 8.57e-06 |
| | t5 | 15 | 60 | 0.48364 | 6.11e-07 | 26 | 77 | 0.20521 | 4.58e-07 | 11 | 43 | 0.066811 | 1.96e-06 |
| | t6 | 15 | 60 | 0.38215 | 8.77e-07 | 26 | 77 | 0.1468 | 6.58e-07 | 11 | 43 | 0.070802 | 2.81e-06 |
| | t7 | 11 | 44 | 0.29749 | 3.97e-07 | 24 | 71 | 0.15389 | 6.86e-07 | 10 | 39 | 0.11456 | 3.27e-06 |
| 100,000 | t1 | 15 | 60 | 0.76533 | 2.72e-07 | 24 | 71 | 0.28556 | 7.60e-07 | 10 | 39 | 0.15136 | 3.63e-06 |
| | t2 | 14 | 56 | 0.96545 | 6.72e-07 | 24 | 71 | 0.30656 | 4.60e-07 | 10 | 39 | 0.15471 | 2.20e-06 |
| | t3 | 14 | 56 | 0.71691 | 4.51e-07 | 24 | 71 | 0.27951 | 4.39e-07 | 10 | 39 | 0.2006 | 2.10e-06 |
| | t4 | 15 | 60 | 0.76153 | 6.38e-07 | 26 | 77 | 0.39044 | 4.78e-07 | 11 | 43 | 0.18627 | 2.04e-06 |
| | t5 | 15 | 60 | 0.79539 | 8.64e-07 | 26 | 77 | 0.57875 | 6.48e-07 | 11 | 43 | 0.13307 | 2.77e-06 |
| | t6 | 16 | 64 | 0.82574 | 3.04e-07 | 26 | 77 | 0.4141 | 9.31e-07 | 11 | 43 | 0.142 | 3.98e-06 |
| | t7 | 11 | 44 | 0.52429 | 5.60e-07 | 24 | 71 | 0.47424 | 9.71e-07 | 10 | 39 | 0.15694 | 4.64e-06 |

Share and Cite

MDPI and ACS Style

Hassan Ibrahim, A.; Kumam, P.; Abubakar, A.B.; Abubakar, J.; Muhammad, A.B. Least-Square-Based Three-Term Conjugate Gradient Projection Method for ℓ1-Norm Problems with Application to Compressed Sensing. Mathematics 2020, 8, 602. https://doi.org/10.3390/math8040602

