Article

An Algorithm for Solving Common Points of Convex Minimization Problems with Applications

Adisak Hanjing 1, Nattawut Pholasa 2 and Suthep Suantai 3,4
1 Department of Science and Mathematics, Rajamangala University of Technology Isan Surin Campus, Surin 32000, Thailand
2 School of Science, University of Phayao, Phayao 56000, Thailand
3 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
4 Research Group in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(1), 7; https://doi.org/10.3390/sym15010007
Submission received: 26 October 2022 / Revised: 23 November 2022 / Accepted: 2 December 2022 / Published: 20 December 2022
(This article belongs to the Special Issue Symmetry in Nonlinear Analysis and Fixed Point Theory)

Abstract:
In algorithm development, symmetry plays a vital part in managing optimization problems in scientific models. The aim of this work is to propose a new accelerated method for finding a common point of convex minimization problems and then use the fixed point of the forward-backward operator to explain and analyze a weak convergence result of the proposed algorithm in real Hilbert spaces under certain conditions. As applications, we demonstrate the suggested method for solving image inpainting and image restoration problems.

1. Introduction

In this study, let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖. Let ℕ be the set of all positive integers and ℝ the set of all real numbers. The operator I : H → H denotes the identity operator. Weak and strong convergence are denoted by the symbols ⇀ and →, respectively.
In recent years, the convex minimization problem in the form of the sum of two convex functions has played an important role in solving real-world problems, such as those arising in signal and image processing, machine learning and medical image reconstruction; see [1,2,3,4,5,6,7,8,9,10], for instance. This problem can be written in the following form:
min_{z∈H} ϕ₁(z) + ϕ₂(z),    (1)
where ϕ₁ : H → ℝ is a convex and differentiable function such that ∇ϕ₁ is L-Lipschitz continuous, and ϕ₂ : H → ℝ ∪ {∞} is a convex and proper lower semi-continuous function. Symmetry, or invariance, serves as the foundation for the solution of problem (1): the solution set of problem (1) coincides with the solution set of the fixed-point Equation (2),
z = prox_{σϕ₂}(I − σ∇ϕ₁)(z),    (2)
where σ > 0, prox_{σϕ₂} is the proximity operator of ϕ₂ and ∇ϕ₁ stands for the gradient of ϕ₁. It is known that if the step size σ ∈ (0, 2/L), then prox_{σϕ₂}(I − σ∇ϕ₁) is nonexpansive. Over the past decade, many algorithms based on the fixed-point method have been proposed to solve problem (1); see [4,8,11,12,13,14,15].
Lions and Mercier [6] proposed the forward-backward splitting (FBS) algorithm as follows:
z_{k+1} = prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)(z_k),  k ∈ ℕ,    (3)
where z_1 ∈ H and 0 < σ_k < 2/L.
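For concreteness, here is a minimal NumPy sketch of iteration (3) for the l₁-regularized least-squares instance ϕ₁(z) = ½‖Bz − b‖₂², ϕ₂(z) = τ‖z‖₁; all names and conventions are ours, not the paper's, and the prox of the l₁-norm is componentwise soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    # prox of t*||.||_1: componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs(B, b, tau, n_iter=500):
    # Forward-backward splitting (3) for phi1(z) = 0.5*||Bz - b||^2, phi2 = tau*||z||_1
    L = np.linalg.norm(B, 2) ** 2       # Lipschitz constant of grad phi1 = B^T(Bz - b)
    sigma = 1.0 / L                     # any fixed step size in (0, 2/L) works
    z = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ z - b)                            # forward (gradient) step
        z = soft_threshold(z - sigma * grad, sigma * tau)   # backward (proximal) step
    return z
```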
In 2005, Combettes and Wajs [3] studied the relaxed forward-backward splitting (R-FBS) method, defined as follows:
y_k = z_k − σ_k ∇ϕ₁(z_k),  z_{k+1} = z_k + β_k(prox_{σ_k ϕ₂}(y_k) − z_k),  k ∈ ℕ,    (4)
where ε ∈ (0, min{1, 1/L}), z_1 ∈ ℝ^N, σ_k ∈ [ε, 2/L − ε] and β_k ∈ [ε, 1].
An inertial technique is often used to speed up the forward-backward splitting procedure, and numerous inertial algorithms have been created and explored in order to accelerate convergence; see [14,16,17,18], for example. Beck and Teboulle [17] introduced FISTA, a fast iterative shrinkage-thresholding algorithm for solving problem (1), which takes the following form:
t_{k+1} = (1 + √(1 + 4t_k²))/2,  α_k = (t_k − 1)/t_{k+1},  y_k = prox_{(1/L)ϕ₂}(I − (1/L)∇ϕ₁)(z_k),  z_{k+1} = y_k + α_k(y_k − y_{k−1}),  k ∈ ℕ,    (5)
where z_1 = y_0 ∈ ℝ^N and t_1 = 1. It is worth noting that α_k is an inertial parameter that controls the momentum term y_k − y_{k−1}.
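A short sketch of (5) in the same spirit; the callables grad_phi1 and prox_phi2 and the argument convention prox_phi2(x, s) = prox_{s·ϕ₂}(x) are our assumptions:

```python
import numpy as np

def fista(grad_phi1, prox_phi2, L, z1, n_iter=500):
    # FISTA (5): a forward-backward step with step size 1/L plus a momentum step.
    z, y_prev, t = z1.copy(), z1.copy(), 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        alpha = (t - 1.0) / t_next                      # inertial parameter alpha_k
        y = prox_phi2(z - grad_phi1(z) / L, 1.0 / L)    # y_k
        z = y + alpha * (y - y_prev)                    # z_{k+1}, momentum y_k - y_{k-1}
        y_prev, t = y, t_next
    return y_prev
```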
In this work, we are interested in constructing a new accelerated algorithm for finding a common solution of the two convex minimization problems (6) by using inertial and fixed-point techniques for forward-backward operators:
min_{x∈H} ϕ₁(x) + ϕ₂(x)  and  min_{x∈H} ω₁(x) + ω₂(x),    (6)
where ϕ₁ : H → ℝ and ω₁ : H → ℝ are convex and differentiable functions, and ϕ₂ : H → ℝ ∪ {∞} and ω₂ : H → ℝ ∪ {∞} are convex and proper lower semi-continuous functions. We then prove a weak convergence result for the proposed algorithm in real Hilbert spaces under certain conditions and illustrate the theoretical results via numerical experiments on image inpainting and image restoration problems.

2. Preliminaries

Basic concepts, definitions, notation and some relevant lemmas used in the following parts are collected in this section.
Let ϕ : H → ℝ ∪ {∞} be a convex and proper lower semi-continuous function. The proximity operator can be written in the equivalent form
prox_ϕ = (I + ∂ϕ)^{−1} : H → H,
where ∂ϕ is the subdifferential of ϕ, given by
∂ϕ(z) := {u ∈ H : ϕ(z) + ⟨u, y − z⟩ ≤ ϕ(y), ∀y ∈ H},  ∀z ∈ H.
We note that prox_{δ_C} = P_C, where C ⊆ H is a nonempty closed convex set, δ_C is its indicator function and P_C : H → C is the orthogonal projection onto C. The subdifferential operator ∂ϕ is maximal monotone (for additional information, see [19]), and the solutions of (1) are exactly the fixed points of the forward-backward operator:
z ∈ Argmin(ϕ₁ + ϕ₂)  ⟺  z = prox_{σϕ₂}(I − σ∇ϕ₁)(z),
where σ > 0 and Argmin(ϕ₁ + ϕ₂) is the solution set of problem (1).
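A one-dimensional sanity check of this characterization (a toy instance of our own): for ϕ₁(z) = ½(z − 3)² and ϕ₂(z) = |z|, the minimizer of ϕ₁ + ϕ₂ is z* = 2, and it is indeed a fixed point of the forward-backward operator:

```python
import numpy as np

sigma = 0.7      # any sigma in (0, 2/L); here L = 1 since grad(phi1)(z) = z - 3
z_star = 2.0     # minimizer of 0.5*(z - 3)**2 + |z| (optimality: z - 3 + sign(z) = 0)
forward = z_star - sigma * (z_star - 3.0)                     # (I - sigma*grad(phi1))(z*)
backward = np.sign(forward) * max(abs(forward) - sigma, 0.0)  # prox of sigma*|.|
assert np.isclose(backward, z_star)                           # z* is a fixed point
```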
The following Lipschitz continuous and nonexpansive operators are considered. An operator T : H → H is called Lipschitz continuous if there exists L > 0 such that
‖Tx − Ty‖ ≤ L‖x − y‖,  ∀x, y ∈ H.
When T is 1-Lipschitz continuous, it is referred to as nonexpansive. A point z ∈ H is called a fixed point of T if z = Tz, and Fix(T) denotes the set of fixed points of T.
The operator I − T is called demiclosed at zero if, whenever a sequence {z_k} converges weakly to z and {z_k − Tz_k} converges strongly to zero, it follows that z ∈ Fix(T). If T is a nonexpansive operator, then I − T is known to be demiclosed at zero [20].
Let T : H → H and {T_k : H → H} be such that ∅ ≠ Fix(T) = ⋂_{k=1}^∞ Fix(T_k). Then, {T_k} is said to satisfy NST-condition (I) with T [21] if, for each bounded sequence {z_k} ⊆ H,
lim_{k→∞} ‖z_k − T_k z_k‖ = 0 implies lim_{k→∞} ‖z_k − Tz_k‖ = 0.
The following basic properties of H will be used in this study (see [22]): for all x, y ∈ H and γ ∈ [0, 1],
‖γx + (1 − γ)y‖² = γ‖x‖² + (1 − γ)‖y‖² − γ(1 − γ)‖x − y‖²,    (8)
‖x ± y‖² = ‖x‖² ± 2⟨x, y⟩ + ‖y‖².    (9)
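Identity (8) is easy to verify numerically; a quick NumPy check on random vectors in ℝ⁵ (our own sanity check, not part of the original):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, g = rng.normal(size=5), rng.normal(size=5), 0.3
lhs = np.linalg.norm(g * x + (1 - g) * y) ** 2
rhs = (g * np.linalg.norm(x) ** 2 + (1 - g) * np.linalg.norm(y) ** 2
       - g * (1 - g) * np.linalg.norm(x - y) ** 2)
assert np.isclose(lhs, rhs)   # identity (8) holds for any x, y and g in [0, 1]
```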
Lemma 1
([18]). Let ϕ₁ : H → ℝ be a convex and differentiable function such that ∇ϕ₁ is L-Lipschitz continuous, and let ϕ₂ : H → ℝ ∪ {∞} be a convex and proper lower semi-continuous function. Let T_k := prox_{λ_k ϕ₂}(I − λ_k ∇ϕ₁) and T := prox_{λϕ₂}(I − λ∇ϕ₁), where λ_k, λ ∈ (0, 2/L) with λ_k → λ. Then {T_k} satisfies NST-condition (I) with T.
Lemma 2
([14]). Let {z_k} and {α_k} be two sequences of non-negative real numbers such that
z_{k+1} ≤ (1 + α_k)z_k + α_k z_{k−1},  k ≥ 1.
Then z_{k+1} ≤ E·∏_{j=1}^k (1 + 2α_j), where E = max{z_1, z_2}. Moreover, if ∑_{k=1}^∞ α_k < ∞, then {z_k} is bounded.
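Lemma 2 can be sanity-checked numerically by running the recursion with equality for a summable inertial sequence (our own check, not from the paper):

```python
import numpy as np

alpha = 1.0 / np.arange(1, 200) ** 2         # alpha_k = 1/k^2: non-negative, summable
z = [1.0, 2.0]                                # z_1, z_2
E, prod = max(z), 1.0 + 2.0 * alpha[0]        # prod tracks prod_{j=1}^{k} (1 + 2*alpha_j)
for k in range(2, len(alpha) + 1):
    z.append((1 + alpha[k - 1]) * z[-1] + alpha[k - 1] * z[-2])  # equality case
    prod *= 1.0 + 2.0 * alpha[k - 1]
    assert z[-1] <= E * prod + 1e-12          # z_{k+1} <= E * prod; {z_k} stays bounded
```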
Lemma 3
([23]). Let {z_k} and {w_k} be two sequences of non-negative real numbers such that
z_{k+1} ≤ z_k + w_k
for all k ≥ 1. If ∑_{k=1}^∞ w_k < ∞, then lim_{k→∞} z_k exists.
Lemma 4
([24]). Let {z_k} be a sequence in H and Θ ⊆ H be such that:
(I) 
for every z* ∈ Θ, lim_{k→∞} ‖z_k − z*‖ exists;
(II) 
ω_w(z_k) ⊆ Θ, where ω_w(z_k) is the set of all weak cluster points of {z_k}.
Then, { z k } converges weakly to a point in Θ .

3. Main Results

In this section, we suggest an inertial forward-backward splitting algorithm for finding common points of convex minimization problems and prove weak convergence of the proposed algorithm. The following assumptions are used throughout this section:
ϕ₁ and ω₁ are convex and differentiable functions from H to ℝ;
∇ϕ₁ and ∇ω₁ are Lipschitz continuous with constants L₁ and L₂, respectively;
ϕ₂ and ω₂ are convex and proper lower semi-continuous functions from H to ℝ ∪ {∞};
Θ := Argmin(ϕ₁ + ϕ₂) ∩ Argmin(ω₁ + ω₂) ≠ ∅.
Remark 1.
Let U_k := prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁) and U := prox_{σϕ₂}(I − σ∇ϕ₁). If 0 < σ_k, σ < 2/L₁, then U_k and U are nonexpansive operators with Fix(U) = Argmin(ϕ₁ + ϕ₂) = ⋂_{k=1}^∞ Fix(U_k). Moreover, if σ_k → σ, then Lemma 1 asserts that {U_k} satisfies NST-condition (I) with U.
Algorithm 1: Given z_0, z_1 ∈ H, choose {α_k}, {β_k}, {γ_k}, {σ_k} and {σ_k*}.
For k = 1, 2, …, do
w_k = z_k + α_k(z_k − z_{k−1});
y_k = w_k + β_k(prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k − w_k);
z_{k+1} = (1 − γ_k) prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k + γ_k prox_{σ_k* ω₂}(I − σ_k* ∇ω₁)y_k,
end for.
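A minimal sketch of Algorithm 1; the operator arguments, the convention prox(x, s) = prox_{s·f}(x) and passing the parameter sequences as callables satisfying (A1)–(A3) are all our assumptions:

```python
import numpy as np

def algorithm1(grad_phi1, prox_phi2, grad_omega1, prox_omega2,
               z0, z1, alpha, beta, gamma, sigma, sigma_star, n_iter=1000):
    # One possible rendering of Algorithm 1; alpha(k), beta(k), gamma(k),
    # sigma(k) and sigma_star(k) return the k-th members of the parameter sequences.
    z_prev, z = np.asarray(z0, float), np.asarray(z1, float)
    for k in range(1, n_iter + 1):
        w = z + alpha(k) * (z - z_prev)                    # inertial step
        s, ss = sigma(k), sigma_star(k)
        u = prox_phi2(w - s * grad_phi1(w), s)             # U_k w_k
        y = w + beta(k) * (u - w)                          # y_k
        t = prox_omega2(y - ss * grad_omega1(y), ss)       # T_k y_k
        z_prev, z = z, (1 - gamma(k)) * u + gamma(k) * t   # z_{k+1}
    return z
```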
Next, the convergence result of Algorithm 1 can be shown as follows:
Theorem 1.
Let {z_k} be the sequence generated by Algorithm 1. Suppose that {α_k}, {β_k}, {γ_k}, {σ_k} and {σ_k*} satisfy the following conditions:
(A1) 
β_k ∈ [a, b] ⊂ (0, 1) and γ_k ∈ [c, d] ⊂ (0, 1) for all k ∈ ℕ, for some a, b, c, d ∈ ℝ with a < b and c < d;
(A2) 
α_k ≥ 0 for all k ∈ ℕ and ∑_{k=1}^∞ α_k < ∞;
(A3) 
0 < σ_k, σ < 2/L₁ and 0 < σ_k*, σ* < 2/L₂ for all k ∈ ℕ, such that σ_k → σ and σ_k* → σ* as k → ∞.
Then, the following hold:
(i) 
‖z_{k+1} − z*‖ ≤ E·∏_{j=1}^k (1 + 2α_j), where E = max{‖z_1 − z*‖, ‖z_2 − z*‖} and z* ∈ Θ;
(ii) 
{z_k} converges weakly to a common point in Θ := Argmin(ϕ₁ + ϕ₂) ∩ Argmin(ω₁ + ω₂).
Proof. 
Define the operators U_k, T_k, U, T : H → H as follows:
U_k := prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁), U := prox_{σϕ₂}(I − σ∇ϕ₁), T_k := prox_{σ_k* ω₂}(I − σ_k* ∇ω₁) and T := prox_{σ* ω₂}(I − σ* ∇ω₁).
Then, Algorithm 1 can be written as follows:
w_k = z_k + α_k(z_k − z_{k−1});    (10)
y_k = w_k + β_k(U_k w_k − w_k);    (11)
z_{k+1} = (1 − γ_k)U_k w_k + γ_k T_k y_k.    (12)
Let z* ∈ Θ. By (10), we have
‖w_k − z*‖ ≤ ‖z_k − z*‖ + α_k‖z_k − z_{k−1}‖.    (13)
By (11)–(13) and the nonexpansiveness of U_k and T_k, we have
‖z_{k+1} − z*‖ ≤ (1 − γ_k)‖U_k w_k − z*‖ + γ_k‖T_k y_k − z*‖
≤ (1 − γ_k)‖w_k − z*‖ + γ_k‖y_k − z*‖
≤ (1 − γ_k)‖w_k − z*‖ + γ_k((1 − β_k)‖w_k − z*‖ + β_k‖U_k w_k − z*‖)
≤ ‖w_k − z*‖
≤ ‖z_k − z*‖ + α_k‖z_k − z_{k−1}‖.    (14)
This implies
‖z_{k+1} − z*‖ ≤ (1 + α_k)‖z_k − z*‖ + α_k‖z_{k−1} − z*‖.    (15)
Applying Lemma 2 to (15), we obtain ‖z_{k+1} − z*‖ ≤ E·∏_{j=1}^k (1 + 2α_j), where E = max{‖z_1 − z*‖, ‖z_2 − z*‖}. The proof of (i) is now complete.
By (15) and condition (A2), {z_k} is bounded, which implies ∑_{k=1}^∞ α_k‖z_k − z_{k−1}‖ < ∞. By (14) and Lemma 3, we obtain that lim_{k→∞} ‖z_k − z*‖ exists. By (9) and (10), we obtain
‖w_k − z*‖² ≤ ‖z_k − z*‖² + α_k²‖z_k − z_{k−1}‖² + 2α_k‖z_k − z*‖‖z_k − z_{k−1}‖.    (16)
By (8), (11) and the nonexpansiveness of U_k, we obtain
‖y_k − z*‖² = (1 − β_k)‖w_k − z*‖² + β_k‖U_k w_k − z*‖² − β_k(1 − β_k)‖w_k − U_k w_k‖²
≤ ‖w_k − z*‖² − β_k(1 − β_k)‖w_k − U_k w_k‖².    (17)
By (8), (12), (16), (17) and the nonexpansiveness of U_k and T_k, we have
‖z_{k+1} − z*‖² ≤ (1 − γ_k)‖U_k w_k − z*‖² + γ_k‖T_k y_k − z*‖² − γ_k(1 − γ_k)‖T_k y_k − U_k w_k‖²
≤ (1 − γ_k)‖w_k − z*‖² + γ_k‖y_k − z*‖² − γ_k(1 − γ_k)‖T_k y_k − U_k w_k‖²
≤ ‖w_k − z*‖² − γ_k β_k(1 − β_k)‖w_k − U_k w_k‖² − γ_k(1 − γ_k)‖T_k y_k − U_k w_k‖²
≤ ‖z_k − z*‖² + α_k²‖z_k − z_{k−1}‖² + 2α_k‖z_k − z*‖‖z_k − z_{k−1}‖ − γ_k β_k(1 − β_k)‖w_k − U_k w_k‖² − γ_k(1 − γ_k)‖T_k y_k − U_k w_k‖².    (18)
From (18), conditions (A1) and (A2), ∑_{k=1}^∞ α_k‖z_k − z_{k−1}‖ < ∞ and the existence of lim_{k→∞} ‖z_k − z*‖, we obtain
lim_{k→∞} ‖T_k y_k − U_k w_k‖ = lim_{k→∞} ‖w_k − U_k w_k‖ = 0 and lim_{k→∞} ‖y_k − w_k‖ = 0.    (19)
From (19), we obtain
‖T_k y_k − y_k‖ ≤ ‖T_k y_k − U_k w_k‖ + ‖U_k w_k − w_k‖ + ‖w_k − y_k‖ → 0 as k → ∞.    (20)
From (10) and ∑_{k=1}^∞ α_k‖z_k − z_{k−1}‖ < ∞, we have
‖w_k − z_k‖ = α_k‖z_k − z_{k−1}‖ → 0 as k → ∞.    (21)
Since {z_k} is bounded, we have ω_w(z_k) ≠ ∅. By (19) and (21), we obtain ω_w(z_k) = ω_w(w_k) = ω_w(y_k). By condition (A3) and Remark 1, we know that {U_k} and {T_k} satisfy NST-condition (I) with U and T, respectively. From (19), (20) and the demiclosedness of I − U and I − T, we obtain ω_w(z_k) ⊆ Fix(U) ∩ Fix(T) = Θ. By Lemma 4, we conclude that {z_k} converges weakly to a point in Θ. This completes the proof. □
Open problem: can we choose step sizes σ_k and σ_k* that do not depend on the Lipschitz constants L₁ and L₂ of the gradients ∇ϕ₁ and ∇ω₁, respectively, while retaining the convergence result of the proposed algorithm?
If we set ϕ₁ = ω₁, ϕ₂ = ω₂ and σ_k = σ_k* for all k ≥ 1, then Algorithm 1 reduces to Algorithm 2.
Algorithm 2: Given z_0, z_1 ∈ H, choose {α_k}, {β_k}, {γ_k} and {σ_k}.
For k = 1, 2, …, do
w_k = z_k + α_k(z_k − z_{k−1});
y_k = w_k + β_k(prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k − w_k);
z_{k+1} = (1 − γ_k) prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k + γ_k prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)y_k,
end for.
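In terms of the sketch given after Algorithm 1, Algorithm 2 is obtained simply by passing the same pair of operators twice; a hypothetical usage line, assuming grad_phi1, prox_phi2 and the parameter callables are defined as before:

```python
# Algorithm 2 as the special case phi = omega, sigma_k = sigma_k* of Algorithm 1:
z = algorithm1(grad_phi1, prox_phi2, grad_phi1, prox_phi2,
               z0, z1, alpha, beta, gamma, sigma, sigma, n_iter=300)
```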
The following result is an immediate consequence of Theorem 1.
Corollary 1.
Let {z_k} be the sequence generated by Algorithm 2. Suppose that {α_k}, {β_k}, {γ_k} and {σ_k} satisfy the following conditions:
(A1) 
β_k ∈ [a, b] ⊂ (0, 1) and γ_k ∈ [c, d] ⊂ (0, 1) for all k ∈ ℕ, for some a, b, c, d ∈ ℝ with a < b and c < d;
(A2) 
α_k ≥ 0 for all k ∈ ℕ and ∑_{k=1}^∞ α_k < ∞;
(A3) 
0 < σ_k, σ < 2/L₁ for all k ∈ ℕ, such that σ_k → σ as k → ∞.
Then the following hold:
(i) 
‖z_{k+1} − z*‖ ≤ E·∏_{j=1}^k (1 + 2α_j), where E = max{‖z_1 − z*‖, ‖z_2 − z*‖} and z* ∈ Argmin(ϕ₁ + ϕ₂);
(ii) 
{z_k} converges weakly to a point in Argmin(ϕ₁ + ϕ₂).

4. Applications

In this part, we apply Algorithm 1 to solving the constrained image inpainting problem (22) and Algorithm 2 to solving the image restoration problem (24). As the image quality metric, we utilize the peak signal-to-noise ratio (PSNR) in decibels (dB) [25], which is formulated as follows:
PSNR := 10 log₁₀(255² / ((1/M)‖z_k − z‖₂²)),
where z and M are the original image and the number of image samples, respectively. All experimental simulations were performed in MATLAB R2022a on a PC with an Intel Core i5 processor and 4.00 GB of RAM running Windows 8 64-bit.
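A direct NumPy transcription of this formula (the function name is ours; the peak value 255 assumes 8-bit images):

```python
import numpy as np

def psnr(z_k, z):
    # PSNR in dB: 10*log10(255^2 / ((1/M)*||z_k - z||_2^2)), M = number of samples
    mse = np.mean((np.asarray(z_k, float) - np.asarray(z, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```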

4.1. Image Inpainting Problems

In this experiment, we apply Algorithm 1 to solving the following constrained image inpainting problem [13]:
min_{z∈C} ½‖P_Λ(z⁰) − P_Λ(z)‖_F² + τ‖z‖_*,    (22)
where z⁰ ∈ ℝ^{m×n} is a given image whose entries {z⁰_{ij}}_{(i,j)∈Λ} are observed, Λ is a subset of the index set {1, 2, …, m} × {1, 2, …, n} indicating where data are available in the image domain (the rest are missing), C = {z ∈ ℝ^{m×n} : z_{ij} ≥ 0}, and P_Λ is defined by
(P_Λ(z⁰))_{ij} = z⁰_{ij} if (i, j) ∈ Λ, and 0 otherwise.
In Algorithm 1, we set
ϕ₁(z) = ½‖P_Λ(z⁰) − P_Λ(z)‖_F², ϕ₂(z) = τ‖z‖_*, ω₁(z) = 0 and ω₂(z) = δ_C(z),
where τ > 0 is a regularization parameter, ‖·‖_F is the Frobenius matrix norm and ‖·‖_* is the nuclear matrix norm. Then ϕ₁ is convex and differentiable with ∇ϕ₁(z) = P_Λ(z) − P_Λ(z⁰), which is 1-Lipschitz continuous. We note that the proximity operator of ϕ₂ can be computed by the singular value decomposition (SVD), see [26], and the proximity operator of ω₂ is the orthogonal projection onto the closed convex set C. Therefore, Algorithm 1 reduces to Algorithm 3, which can be used for solving the constrained image inpainting problem (22):
Algorithm 3: Given z_0, z_1 ∈ H, choose {α_k}, {β_k}, {γ_k} and {σ_k}.
For k = 1, 2, …, do
w_k = z_k + α_k(z_k − z_{k−1});
y_k = w_k + β_k(prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k − w_k);
z_{k+1} = (1 − γ_k) prox_{σ_k ϕ₂}(I − σ_k ∇ϕ₁)w_k + γ_k P_C y_k,
end for.
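The three building blocks of Algorithm 3 can be sketched as follows (names are ours; the prox of the nuclear norm is singular value thresholding [26], and mask is the 0/1 indicator matrix of Λ):

```python
import numpy as np

def prox_nuclear(Z, t):
    # prox of t*||.||_* via singular value thresholding (SVT)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def grad_phi1(Z, Z0, mask):
    # gradient of 0.5*||P_Lambda(Z0) - P_Lambda(Z)||_F^2; it is 1-Lipschitz
    return mask * (Z - Z0)

def proj_C(Z):
    # P_C for C = {Z : Z_ij >= 0}, i.e., the prox of the indicator delta_C
    return np.maximum(Z, 0.0)
```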
Using the standard Gallery test image, we marked and fixed the damaged portion of the image and compared Algorithm 3 under different inertial parameter settings. The parameters for Algorithm 3 are as follows:
β_k = 0.9k/(k + 1),  γ_k = 0.01k/(k + 1),  α_k = ρ_k if 1 ≤ k ≤ M and α_k = 1/2^k otherwise,
where M is a positive integer depending on the number of iterations of Algorithm 3.
The regularization parameter was set to τ = 0.01 and the stopping criterion is as follows:
‖z_{k+1} − z_k‖_F / ‖z_k‖_F ≤ ε,
where ε is a given small constant. The number of iterations is indicated by Iter., and CPU time in seconds by CPU. We use the parameter selection cases I–V in Table 1 to evaluate the performance of Algorithm 3; Table 2 displays the results. We observe from Table 2 that, when the stopping criterion ε = 10⁻⁵ is met or the 2000th iteration is reached, Algorithm 3 with the inertial parameter of Case V outperforms the other cases in terms of PSNR. We may infer from Table 2 that Algorithm 3 is more effective at recovering images when inertial parameters are added. The test image and the restored images are shown in Figure 1 and Figure 2.
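For reference, the Case V inertial schedule and the relative-change stopping rule look as follows in code (a sketch with our own names):

```python
import numpy as np

def alpha_schedule(k, M):
    # Case V of Table 1: alpha_k = k/(k+1) for 1 <= k <= M, then 1/2^k
    return k / (k + 1) if k <= M else 0.5 ** k

def should_stop(z_next, z_cur, eps=1e-5):
    # relative-change criterion: ||z_{k+1} - z_k||_F / ||z_k||_F <= eps
    return np.linalg.norm(z_next - z_cur) / np.linalg.norm(z_cur) <= eps
```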
To solve a general convex optimization problem, one may model the sum of three convex functions in the form
min_{x∈H} ϕ₁(x) + ϕ₂(x) + ϕ₃(x),
where ϕ₁ : H → ℝ, ϕ₂ : H → ℝ ∪ {∞} and ϕ₃ : H → ℝ ∪ {∞} are convex and proper lower semi-continuous functions, and ϕ₁ is a differentiable function with an L-Lipschitz continuous gradient. Cui et al. [13] introduced an inertial three-operator splitting (iTOS) algorithm, which can be applied to solving the constrained image inpainting problem (22).
In the next experiment, we set ϕ₁(z) = ½‖P_Λ(z⁰) − P_Λ(z)‖_F², ϕ₂(z) = τ‖z‖_* and ϕ₃(z) = δ_C(z) for Algorithm 4 (the iTOS algorithm) and use the parameter selections in Table 3 to evaluate its performance; Table 3 also displays the results. We observe from Table 2 and Table 3 that, when the stopping criterion ε = 10⁻⁵ is met or the 2000th iteration is reached, Algorithm 3 with the inertial parameter of Case V outperforms all cases of the iTOS algorithm in terms of PSNR.
Algorithm 4: An inertial three-operator splitting (iTOS) algorithm [13].
Let z_0, z_1 ∈ H and λ ∈ (0, 2/L − ε̄), where ε̄ ∈ (0, 1). For k ≥ 1, let
w_k = z_k + α_k(z_k − z_{k−1});
y_{ϕ₃}^k = prox_{λϕ₃} w_k;
y_{ϕ₂}^k = prox_{λϕ₂}(2y_{ϕ₃}^k − w_k − λ∇ϕ₁(y_{ϕ₃}^k));
z_{k+1} = w_k + β_k(y_{ϕ₂}^k − y_{ϕ₃}^k),
where {α_k} is nondecreasing with 0 ≤ α_k ≤ α < 1 for all k ≥ 1, and β, a, b > 0 are such that
b > (α²(1 + α) + αa)/(1 − α²) and 0 < β ≤ β_k ≤ (b − α[α(1 + α) + αb + a]) / (ᾱb[1 + α(1 + α) + αb + a]), where ᾱ = 1/(2 − ε̄).
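Under our reading of the display above, one iteration of iTOS can be sketched as follows (a non-authoritative sketch; the admissible parameter ranges should be taken from [13]):

```python
import numpy as np

def itos(grad_phi1, prox_phi2, prox_phi3, z0, z1, alpha, beta, lam, n_iter=2000):
    # Inertial three-operator splitting: an inertial step, two proximal evaluations
    # interleaved with one gradient evaluation, then a relaxed update.
    z_prev, z = np.asarray(z0, float), np.asarray(z1, float)
    for k in range(1, n_iter + 1):
        w = z + alpha(k) * (z - z_prev)                        # inertial step
        y3 = prox_phi3(w, lam)                                 # y_{phi3}^k
        y2 = prox_phi2(2 * y3 - w - lam * grad_phi1(y3), lam)  # y_{phi2}^k
        z_prev, z = z, w + beta(k) * (y2 - y3)                 # z_{k+1}
    return z
```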

4.2. Image Restoration Problems

In this experiment, we apply Algorithm 2 to solving image restoration problems by using the LASSO model [25]:
min_{z∈ℝ^N} ½‖Bz − ϵ‖₂² + τ‖z‖₁,    (24)
where τ > 0, ‖·‖₁ is the l₁-norm and ‖·‖₂ is the Euclidean norm.
In Algorithm 2, we set ϕ₁(z) = ½‖ϵ − Bz‖₂² and ϕ₂(z) = τ‖z‖₁, where ϵ is the observed image and B = RW, with R and W the kernel matrix and the 2-D fast Fourier transform, respectively.
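One common way to realize B = RW in code is to represent blurring as pointwise multiplication in the 2-D Fourier domain; the following sketch assumes periodic boundary conditions (an assumption on our part, not stated in the text):

```python
import numpy as np

def make_blur_ops(kernel, shape):
    # Embed the small kernel in an image-sized array, centred at the origin,
    # so that B z is a circular convolution computed with the 2-D FFT.
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)                                                 # transfer function
    B = lambda z: np.real(np.fft.ifft2(K * np.fft.fft2(z)))              # z -> B z
    Bt = lambda z: np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(z)))    # adjoint B^T
    L = float(np.max(np.abs(K)) ** 2)    # Lipschitz constant of grad phi1 = B^T(Bz - eps)
    return B, Bt, L
```

With these operators, ∇ϕ₁(z) = Bt(B(z) − ϵ), and the step size can be taken as σ_k = 1/L, as in (25).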
We use two test images (Pepper and Bird, with sizes 512 × 512 and 288 × 288, respectively) to exhibit the two blurring scenarios in Table 4, adding random Gaussian white noise of 10⁻⁵; the original and blurred images are shown in Figure 3.
We examine and compare the efficiency of our algorithm (Algorithm 2 := ALG 2) with that of the FBS, R-FBS and FISTA algorithms. The image restoration performance of the examined methods is tested with the settings described in (25), using the blurred images as starting points. For all algorithms, the maximum number of iterations is set to 300, and the regularization parameter in the LASSO model (24) is set to τ = 10⁻⁵. The parameters of the studied algorithms are as follows:
σ_k = 1/L,  β_k = γ_k = 0.99k/(k + 1),  α_k = k/(k + 1) if 1 ≤ k ≤ M and α_k = 1/2^k otherwise,    (25)
where M is a positive integer depending on the number of iterations of Algorithm 2.
Figure 4, Figure 5, Figure 6 and Figure 7 present the deblurred test images produced by the studied algorithms. In Figure 8, the PSNR graph of Algorithm 2 lies above the others, which means that the images restored by Algorithm 2 are of higher quality than those restored by the other methods. The number of iterations is indicated by Iter., and CPU time in seconds by CPU.

5. Conclusions

In this research, an inertial forward-backward splitting algorithm for finding a common point of convex minimization problems was developed. We investigated the weak convergence of the suggested algorithm based on the fixed-point equation of the forward-backward operator under suitable control conditions. Finally, we used numerical simulations to show the benefits of the inertial terms in the studied algorithms for the constrained image inpainting problem (22) and the image restoration problem (24).

Author Contributions

Formal analysis, writing—original draft preparation, methodology, writing—review and editing, A.H.; software, N.P.; conceptualization, supervision, manuscript revision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by Rajamangala University of Technology Isan, Contract No. RMUTI/RF/01, the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183] and Chiang Mai University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research project was supported by Rajamangala University of Technology Isan, Contract No. RMUTI/RF/01, the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183]. We also would like to thank Chiang Mai University and Rajamangala University of Technology Isan for the partial financial support. N. Pholasa was supported by University of Phayao and Thailand Science Research and Innovation grant no. FF66-UoE.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Bertsekas, D.P.; Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods; Athena Scientific: Belmont, MA, USA, 1997.
2. Combettes, P.L.; Pesquet, J.C. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574.
3. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
4. Hanjing, A.; Suantai, S. An inertial alternating projection algorithm for convex minimization problems with applications to signal recovery problems. J. Nonlinear Convex Anal. 2022, 22, 2647–2660.
5. Lin, L.J.; Takahashi, W. A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16, 429–453.
6. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
7. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. D'Inform. Rech. Oper. 1970, 4, 154–158.
8. Yatakoat, P.; Suantai, S.; Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv. Contin. Discret. Model. 2022, 25.
9. Suantai, S.; Jailoka, P.; Hanjing, A. An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems. J. Inequal. Appl. 2021, 42.
10. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 17, 877–898.
11. Aremu, K.O.; Izuchukwu, C.; Grace, O.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 2020, 13, 2161–2180.
12. Bot, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
13. Cui, F.; Tang, Y.; Yang, Y. An inertial three-operator splitting algorithm with applications to image inpainting. arXiv 2019, arXiv:1904.11684.
14. Hanjing, A.; Suantai, S. A fast image restoration algorithm based on a fixed point and optimization. Mathematics 2020, 8, 378.
15. Thongpaen, P.; Wattanataweekul, R. A fast fixed-point algorithm for convex minimization problems and its application in image restoration problems. Mathematics 2021, 9, 2619.
16. Suantai, S.; Kankam, K.; Cholamjiak, P. A novel forward-backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 2020, 8, 42.
17. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
18. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44.
19. Burachik, R.S.; Iusem, A.N. Set-Valued Mappings and Enlargements of Monotone Operators; Springer Science+Business Media: New York, NY, USA, 2007.
20. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
21. Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2009, 71, 112–119.
22. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
23. Tan, K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
24. Moudafi, A.; Al-Shemas, E. Simultaneous iterative methods for split equality problem. Trans. Math. Program. Appl. 2013, 1, 1–11.
25. Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
26. Cai, J.F.; Candes, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
Figure 1. Test image.
Figure 2. The painted image and restored images. (a) The painted image; (b–f) images recovered for Cases I through V with σ_k = 1.3, respectively.
Figure 3. The original and blurred images of Pepper and Bird.
Figure 4. The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario I of the Pepper.
Figure 5. The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario II of the Pepper.
Figure 6. The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario I of the Bird.
Figure 7. The PSNR, Iter. and CPU of the FBS, R-FBS, FISTA and ALG 2 for scenario II of the Bird.
Figure 8. The PSNR graphs of the studied algorithms: (a,b) for Pepper; (c,d) for Bird.
Table 1. The different inertial parameter settings.

Cases   Inertial parameters
I       ρ_k = 0
II      ρ_k = 0.5
III     ρ_k = 0.9
IV      ρ_k = (t_k − 1)/t_{k+1}, t_1 = 1, t_{k+1} = (1 + √(1 + 4t_k²))/2
V       ρ_k = k/(k + 1)
Table 2. Results of comparing the selection of inertial parameters in terms of number of iterations, CPU time, PSNR and the stopping criterion for Algorithm 3.

σ_k   Inertial parameters   Iter.   CPU (s)    PSNR (dB)   ε
0.5   Case I                2000    148.6537   23.1486     4.6305 × 10⁻⁵
0.5   Case II               2000    148.7307   27.1841     5.4313 × 10⁻⁵
0.5   Case III              1225    91.3319    33.1603     9.9616 × 10⁻⁶
0.5   Case IV               2000    148.1541   33.3112     1.8945 × 10⁻⁵
0.5   Case V                878     65.1786    33.3264     9.9611 × 10⁻⁶
1     Case I                2000    147.9165   27.1766     5.4335 × 10⁻⁵
1     Case II               2000    148.2205   32.1462     2.5990 × 10⁻⁵
1     Case III              682     50.4207    33.2415     9.9935 × 10⁻⁶
1     Case IV               1692    125.4178   33.3025     9.9841 × 10⁻⁶
1     Case V                852     62.9013    33.3276     9.9929 × 10⁻⁶
1.3   Case I                2000    150.4054   29.2670     4.8888 × 10⁻⁵
1.3   Case II               2000    147.7252   32.9289     1.2150 × 10⁻⁵
1.3   Case III              542     40.1375    33.2605     9.9835 × 10⁻⁶
1.3   Case IV               1485    109.8176   33.3038     9.9924 × 10⁻⁶
1.3   Case V                835     61.5336    33.3123     9.9484 × 10⁻⁶
Table 3. Results of comparing the selection of parameters in terms of number of iterations, CPU time, PSNR and the stopping criterion for the iTOS algorithm.

σ_k   Parameters              Iter.   CPU (s)    PSNR (dB)   ε
0.5   α_k = 0.1, β_k = 1.4    2000    150.3057   25.3434     5.5297 × 10⁻⁵
0.5   α_k = 0.2, β_k = 0.8    2000    152.3218   23.0876     4.6797 × 10⁻⁵
0.5   α_k = 0.5, β_k = 0.3    2000    151.0935   21.4506     3.5078 × 10⁻⁵
0.5   α_k = 0.8, β_k = 0.4    2000    161.5143   27.0492     5.5804 × 10⁻⁵
0.5   α_k = 0.9, β_k = 0.5    2000    163.1106   30.4406     3.9901 × 10⁻⁵
1     α_k = 0.1, β_k = 1.4    2000    150.6947   30.2252     4.7538 × 10⁻⁵
1     α_k = 0.2, β_k = 0.8    2000    150.9510   27.0585     5.5603 × 10⁻⁵
1     α_k = 0.5, β_k = 0.3    2000    164.3304   23.9033     5.0955 × 10⁻⁵
1     α_k = 0.8, β_k = 0.4    2000    156.7255   30.9755     4.0485 × 10⁻⁵
1     α_k = 0.9, β_k = 0.5    2000    158.6223   25.3198     7.2758 × 10⁻⁵
1.3   α_k = 0.1, β_k = 1.4    2000    149.7497   30.9921     4.0100 × 10⁻⁵
1.3   α_k = 0.2, β_k = 0.8    2000    151.2015   29.0476     5.2936 × 10⁻⁵
1.3   α_k = 0.5, β_k = 0.3    2000    153.6181   25.3584     5.5326 × 10⁻⁶
1.3   α_k = 0.8, β_k = 0.4    2000    155.3317   30.3421     3.9970 × 10⁻⁵
1.3   α_k = 0.9, β_k = 0.5    2000    155.7551   22.9716     9.3178 × 10⁻⁵
Table 4. Blurring processes in detail.

Scenarios   Kernel matrix
I           Gaussian blur of filter size 9 × 9 with standard deviation σ̂ = 17
II          Motion blur with motion length of 21 pixels and motion orientation 15°
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

