Article

Modified Double Inertial Extragradient-like Approaches for Convex Bilevel Optimization Problems with VIP and CFPP Constraints

1 Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
2 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(10), 1324; https://doi.org/10.3390/sym16101324
Submission received: 29 July 2024 / Revised: 4 October 2024 / Accepted: 6 October 2024 / Published: 8 October 2024

Abstract
Convex bilevel optimization problems (CBOPs) have a vital impact on the decision-making process under the hierarchical setting, while image restoration plays a key role in signal processing and computer vision. In this paper, a modified double inertial extragradient-like approach with a line search procedure is introduced to tackle the CBOP with constraints of the CFPP and VIP, where the CFPP and VIP represent a common fixed point problem and a variational inequality problem, respectively. The strong convergence analysis of the proposed algorithm is discussed under certain mild assumptions, where the algorithm consists of two parts that possess a mutually symmetric structure to a certain extent. As an application, our proposed algorithm is exploited for treating the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems. The illustrative instance highlights the specific advantages and potential influence of our proposed algorithm over the existing algorithms in the literature, particularly in the domain of image restoration.

1. Introduction

Suppose $C \subseteq H$, where $C$ possesses both convexity and closedness and the real Hilbert space $H$ has the inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $P_C$ be the nearest-point projection from $H$ onto $C$. For a mapping $S : C \to C$, we use $\mathrm{Fix}(S)$ and $\mathbb{R}$ to indicate the fixed-point set of $S$ and the real-number set, respectively. For an operator $A : H \to H$, we recall the classical variational inequality problem (VIP), i.e., the objective is to find $u^* \in C$ such that $\langle A u^*, v - u^* \rangle \ge 0\ \forall v \in C$, where $\mathrm{VI}(C, A)$ stands for the solution set of the VIP.
As far as we know, the Korpelevich extragradient rule, put forth in [1] in 1976, is one of the most effective tools for tackling the VIP: for an arbitrary initial $u_0 \in C$, the sequence $\{u_n\}$ is fabricated by
$$h_n = P_C(u_n - \ell A u_n), \quad u_{n+1} = P_C(u_n - \ell A h_n) \quad \forall n \ge 0, \qquad (1)$$
with constant $\ell \in (0, \frac{1}{L})$. The research outcomes on the VIP are abundant, and the Korpelevich extragradient rule has attracted broad attention from numerous scholars, who have ameliorated this rule in different forms; refer to [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18] and the references therein, to name but a few.
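To make the update concrete, the two projection steps of (1) admit a direct numerical transcription. The following sketch is illustrative only: the operator `A`, the projection `proj_C`, the stepsize `ell` and all other names are our own choices, supplied by the user rather than fixed by the source.

```python
import numpy as np

def extragradient(A, proj_C, u0, ell, n_iters=1000):
    """Korpelevich extragradient iteration (1): a schematic sketch.

    A       : callable, evaluates the operator A(u)
    proj_C  : callable, nearest-point projection onto C
    ell     : stepsize in (0, 1/L), L the Lipschitz constant of A
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iters):
        h = proj_C(u - ell * A(u))   # predictor: project the forward step
        u = proj_C(u - ell * A(h))   # corrector: reuse the operator value at h
    return u
```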
In 2018, Thong and Hieu [18] first put forward the inertial subgradient extragradient approach: for any starting points $u_1, u_0 \in H$, the sequence $\{u_n\}$ is fabricated by
$$q_n = u_n + \alpha_n (u_n - u_{n-1}), \quad h_n = P_C(q_n - \ell A q_n), \quad C_n = \{ v \in H : \langle q_n - \ell A q_n - h_n, v - h_n \rangle \le 0 \}, \quad u_{n+1} = P_{C_n}(q_n - \ell A h_n) \quad \forall n \ge 1, \qquad (2)$$
with constant $\ell \in (0, \frac{1}{L})$. Under mild restrictions, it was shown that $\{u_n\}$ converges weakly to a solution of the VIP. In 2020, an inertial subgradient extragradient-type approach was proposed in [14] for tackling the pseudomonotone VIP with a Lipschitzian self-mapping on $H$ and the common fixed-point problem (CFPP) of finitely many nonexpansive self-mappings $\{T_i\}_{i=1}^N$ on $H$. Assume $\Omega = \bigcap_{i=1}^N \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A) \ne \emptyset$. Let $f : H \to H$ be of $\delta$-contractivity with $0 \le \delta < 1$, and let $F : H \to H$ be of both $\eta$-strong monotonicity and $\kappa$-Lipschitz continuity s.t. $\delta < \tau := 1 - \sqrt{1 - \rho(2\eta - \rho \kappa^2)}$ for $\rho \in (0, \frac{2\eta}{\kappa^2})$. Presume that $\{\alpha_n\}, \{\gamma_n\}, \{\beta_n\}$ are sequences in $(0, \infty)$ s.t. $\sum_{n=1}^\infty \beta_n = \infty$, $\beta_n \to 0$, $\alpha_n = o(\beta_n)$, $0 < \liminf_{n \to \infty} \gamma_n \le \limsup_{n \to \infty} \gamma_n < 1$ and $\beta_n + \gamma_n < 1\ \forall n$. Besides, we define $T_n := T_{n \bmod N}\ \forall n$, where $n \bmod N$ takes values in $\{1, 2, \ldots, N\}$.
Algorithm 1
(see [14], Algorithm 3.1). Initialization: Let $\lambda_1 > 0$, $\epsilon > 0$, $0 < \mu < 1$ and $u_1, u_0 \in H$ be arbitrarily selected. For given $u_n$ and $u_{n-1}$, select $\epsilon_n \in [0, \bar{\epsilon}_n]$, with
$$\bar{\epsilon}_n = \begin{cases} \min\{\epsilon, \frac{\alpha_n}{\|u_n - u_{n-1}\|}\} & \text{if } u_n \ne u_{n-1}, \\ \epsilon & \text{otherwise}. \end{cases}$$
Iterations: Reckon $u_{n+1}$ as below:
Step 1. Reckon $q_n = T_n u_n + \epsilon_n (T_n u_n - T_n u_{n-1})$ and $y_n = P_C(q_n - \lambda_n A q_n)$.
Step 2. Put (half-space) $C_n := \{ v \in H : \langle q_n - \lambda_n A q_n - y_n, v - y_n \rangle \le 0 \}$, and reckon $v_n = P_{C_n}(q_n - \lambda_n A y_n)$.
Step 3. Reckon $u_{n+1} = \beta_n f(u_n) + \gamma_n u_n + ((1 - \gamma_n) I - \beta_n \rho F) v_n$, and update
$$\lambda_{n+1} := \begin{cases} \min\{\lambda_n, \frac{\mu(\|q_n - y_n\|^2 + \|v_n - y_n\|^2)}{2 \langle A q_n - A y_n, v_n - y_n \rangle}\} & \text{if } \langle A q_n - A y_n, v_n - y_n \rangle > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and go to Step 1.
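Step 2 of this algorithm only requires a projection onto the half-space $C_n$, which is available in closed form, so no inner optimization is needed. A minimal sketch, with our own naming, assuming the normal direction is $a = q_n - \lambda_n A q_n - y_n$:

```python
import numpy as np

def project_halfspace(p, a, y):
    """Project p onto the half-space {v : <a, v - y> <= 0} in closed form.

    If p already satisfies the inequality (in particular if a = 0, so the
    half-space is the whole space), p is returned unchanged.
    """
    s = float(a @ (p - y))
    if s <= 0.0:
        return np.asarray(p, dtype=float)
    return p - (s / float(a @ a)) * a   # remove the violating component along a
```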
Under suitable conditions, it was proved that $\{u_n\}$ converges strongly to a point in $\Omega$. On the other hand, we recall the bilevel optimization problem (BOP) (see [12]), i.e., the objective is to seek the minima below:
$$\min_{x \in S^*} \omega(x), \qquad (3)$$
in which $\omega : H \to \mathbb{R}$ denotes a differentiable and strongly convex function and $S^*$ stands for the nonempty solution set of the inner-level optimization problem below:
$$\min_{x \in H} \{ f(x) + g(x) \}, \qquad (4)$$
in which $f : H \to \mathbb{R}$ is a differentiable and convex function, $\nabla f$ is $L$-Lipschitz continuous, and $g : H \to \mathbb{R} \cup \{+\infty\}$ is a proper, convex and lower semi-continuous (l.s.c.) function. To the best of our knowledge, convex BOPs (CBOPs) have a crucial impact on the decision-making process under the hierarchical setting, while image restoration plays a critical role in signal processing and computer vision.
As is well known, with $\Lambda$ being the solution set of (3), $x^* \in \Lambda$ if and only if $x^* \in \mathrm{VI}(S^*, \nabla \omega)$, i.e., $x^* \in S^*$ solves the VIP: $\langle \nabla \omega(x^*), x - x^* \rangle \ge 0\ \forall x \in S^*$.
If a minimizer $x^*$ of $f + g$ exists, then the forward-backward operator $FB_\alpha := \mathrm{prox}_{\alpha g}(I - \alpha \nabla f)$ has $x^*$ as a fixed point, where $\nabla f$ denotes the gradient of $f$, $\alpha > 0$ denotes the stepsize and $\mathrm{prox}_{\alpha g}$ denotes the proximity operator of $g$. That is, $x^* = FB_\alpha(x^*)$. If $\nabla f$ is $\zeta$-inverse-strongly monotone with $\zeta > 0$ and $\alpha \in (0, 2\zeta]$, then the operator $FB_\alpha$ is nonexpansive. In 2016, in order to mitigate the constraints imposed by the Lipschitz condition on the gradient of $f$, Cruz and Nghia [19] introduced a linesearch procedure, replacing the requirement of Lipschitz continuity for $\nabla f$ with the more lenient hypotheses outlined below:
Hypothesis 1.
$f, g : H \to \bar{\mathbb{R}} = (-\infty, \infty]$ are two proper convex l.s.c. functionals s.t. $\mathrm{dom}\, g \subseteq \mathrm{dom}\, f$;
Hypothesis 2.
$f$ is differentiable on some open set covering $\mathrm{dom}\, g$, the gradient $\nabla f$ is uniformly continuous on each bounded subset of $\mathrm{dom}\, f$, and $\nabla f$ maps each bounded set in $\mathrm{dom}\, g$ to a bounded set in $H$.
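The forward-backward operator $FB_\alpha$ defined above transcribes into one line of code; here `prox_g(alpha, v)` is assumed to evaluate $\mathrm{prox}_{\alpha g}(v)$, a calling convention of ours rather than of the source.

```python
def forward_backward(prox_g, grad_f, alpha):
    """Build FB_alpha = prox_{alpha g} o (I - alpha grad_f) as a callable."""
    return lambda x: prox_g(alpha, x - alpha * grad_f(x))
```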
In particular, Wattanataweekul et al. [12] designed the double inertial forward-backward viscosity algorithm with Linesearch C below, to solve the convex minimization problem (CMP) for the sum of two convex functions.
Linesearch C. Fix $x \in \mathrm{dom}\, g$, $\theta \in (0, 1)$, $\delta > 0$ and $\sigma > 0$.
Input: Let $\alpha = \sigma$.
While $\alpha \{ \| \nabla f(FB_\alpha^2(x)) - \nabla f(FB_\alpha(x)) \| + \| \nabla f(FB_\alpha(x)) - \nabla f(x) \| \} > \delta ( \| FB_\alpha^2(x) - FB_\alpha(x) \| + \| FB_\alpha(x) - x \| )$, conduct
$\alpha = \theta \alpha$.
End
Output: $\alpha$.
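The backtracking loop above can be sketched as follows; `FB_of(alpha, x)` evaluates $FB_\alpha(x)$, and the `max_backtracks` safeguard is an addition of our own, not part of the original procedure.

```python
import numpy as np

def linesearch_C(x, sigma, theta, delta, FB_of, grad_f, max_backtracks=50):
    """Linesearch C: shrink alpha by theta until the stability test passes."""
    alpha = sigma
    for _ in range(max_backtracks):
        x1 = FB_of(alpha, x)    # FB_alpha(x)
        x2 = FB_of(alpha, x1)   # FB_alpha^2(x) = FB_alpha(FB_alpha(x))
        lhs = alpha * (np.linalg.norm(grad_f(x2) - grad_f(x1))
                       + np.linalg.norm(grad_f(x1) - grad_f(x)))
        rhs = delta * (np.linalg.norm(x2 - x1) + np.linalg.norm(x1 - x))
        if lhs <= rhs:
            break
        alpha *= theta          # backtrack
    return alpha
```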
Assume that $f$ and $g$ satisfy Hypotheses (H1)–(H2), and $\mathrm{dom}\, f = \mathrm{dom}\, g = H$. Their double inertial forward-backward viscosity algorithm with Linesearch C is specified below.
Algorithm 2
(see [12], Algorithm 5). Initialization: Let $\{\mu_n\}, \{\rho_n\}, \{\gamma_n\}, \{\tau_n\} \subseteq \mathbb{R}_+$ be bounded sequences. Choose $x_1, x_0 \in H$, $\theta \in (0, 1)$, $\delta \in (0, \frac{1}{8})$ and $\sigma > 0$. Given a $\kappa$-contractive self-mapping $F$ on $H$ with $0 \le \kappa < 1$.
Iterations: For any $n$, reckon $x_{n+1}$ below.
Step 1.
Reckon $w_n = x_n - \theta_n (x_{n-1} - x_n)$ with
$$\theta_n = \begin{cases} \min\{\mu_n, \frac{\gamma_n \tau_n}{\|x_{n-1} - x_n\|}\} & \text{if } x_{n-1} \ne x_n, \\ \mu_n & \text{otherwise}. \end{cases}$$
Step 2.
Reckon $z_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) w_n$ and $y_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) z_n$, with $\alpha_n = \text{Linesearch C}\,(w_n, \sigma, \theta, \delta)$.
Step 3.
Reckon $u_n = y_n + \delta_n (y_n - x_{n-1})$ with
$$\delta_n = \begin{cases} \min\{\rho_n, \frac{\gamma_n \tau_n}{\|y_n - x_{n-1}\|}\} & \text{if } y_n \ne x_{n-1}, \\ \rho_n & \text{otherwise}. \end{cases}$$
Step 4.
Reckon $x_{n+1} = \gamma_n F(x_n) + (1 - \gamma_n) u_n$. Set $n := n + 1$ and go to Step 1.
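One iteration of Algorithm 2 then chains the two inertial extrapolations around the double forward-backward pass. A schematic sketch, with our own naming and reusing the `linesearch_C` sketch above:

```python
import numpy as np

def algorithm2_step(x_prev, x, params, prox_g, grad_f, F):
    """One schematic pass of Steps 1-4 of Algorithm 2."""
    p = params
    FB_of = lambda a, v: prox_g(a, v - a * grad_f(v))
    d1 = np.linalg.norm(x_prev - x)
    theta_n = min(p['mu_n'], p['gamma_n'] * p['tau_n'] / d1) if d1 > 0 else p['mu_n']
    w = x - theta_n * (x_prev - x)                            # Step 1
    alpha = linesearch_C(w, p['sigma'], p['theta'], p['delta'], FB_of, grad_f)
    z = FB_of(alpha, w)                                       # Step 2
    y = FB_of(alpha, z)
    d2 = np.linalg.norm(y - x_prev)
    delta_n = min(p['rho_n'], p['gamma_n'] * p['tau_n'] / d2) if d2 > 0 else p['rho_n']
    u = y + delta_n * (y - x_prev)                            # Step 3
    return p['gamma_n'] * F(x) + (1 - p['gamma_n']) * u       # Step 4
```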
The strong convergence result for Algorithm 2 was established in [12]. As a consequence, they obtained an algorithm for solving the CBOP (3)–(4). Inspired by the research works in [12,14], we devise a modified double inertial extragradient-like algorithm with Linesearch C for solving the CBOP with VIP and CFPP constraints. The strong convergence result for the proposed algorithm is proved under certain appropriate assumptions, where the proposed algorithm consists of two parts that possess a mutually symmetric structure to a certain extent. As an application, our proposed algorithm is invoked to deal with the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems. The illustrative instance highlights the specific advantages and potential influence of our proposed algorithm over the existing algorithms in the literature, particularly in the domain of image restoration.
The structure of the article is sketched below: In Section 2, we present certain basic tools and terminologies for later use. Section 3 discusses and analyzes the strong convergence of the proposed algorithm. Finally, in Section 4, our main result is invoked to deal with the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems.
Our algorithm is more advantageous and more flexible than Algorithm 5 in [12] because it additionally handles the VIP for a Lipschitzian pseudomonotone operator and the CFPP of finitely many nonexpansive operators. Our result improves and extends the corresponding results in [12,14,18].
Lastly, it is worth noting that the existing method of [12] is the one most closely related to our suggested method: the double inertial forward-backward viscosity algorithm with Linesearch C for tackling a CBOP (see [12]) is developed here into the modified double inertial extragradient-like algorithm with Linesearch C for tackling a CBOP with CFPP and VIP constraints, where the VIP involves a Lipschitzian pseudomonotone operator and the CFPP involves a finite family of nonexpansive mappings. The algorithm of [12] is invalid for treating these additional constraints, because it makes no provision for (i) the VIP for a Lipschitzian pseudomonotone operator or (ii) the CFPP of finitely many nonexpansive mappings. It is therefore a natural motivation to develop the double inertial forward-backward viscosity algorithm with Linesearch C into the modified double inertial extragradient-like algorithm with Linesearch C proposed in this work.

2. Preliminaries

Suppose $C \subseteq H$ throughout, with $C$ being of both convexity and closedness in $H$. For a given $\{h_n\} \subseteq H$, we use $h_n \to h$ (resp., $h_n \rightharpoonup h$) to denote the strong (resp., weak) convergence of $\{h_n\}$ to $h$. Let $T : C \to H$ be a mapping. $T$ is termed nonexpansive if $\|T x - T y\| \le \|x - y\|\ \forall x, y \in C$. In addition, $T : C \to H$ is termed to be
(i) $L$-Lipschitzian or $L$-Lipschitz continuous iff $\exists L > 0$ s.t. $\|T h - T u\| \le L \|h - u\|\ \forall h, u \in C$;
(ii) monotone iff $\langle T h - T u, h - u \rangle \ge 0\ \forall h, u \in C$;
(iii) pseudomonotone iff $\langle T h, u - h \rangle \ge 0 \Rightarrow \langle T u, u - h \rangle \ge 0\ \forall h, u \in C$;
(iv) of $\breve{\alpha}$-strong monotonicity iff $\exists \breve{\alpha} > 0$ s.t. $\langle T h - T v, h - v \rangle \ge \breve{\alpha} \|h - v\|^2\ \forall h, v \in C$;
(v) of $\breve{\beta}$-inverse-strong monotonicity iff $\exists \breve{\beta} > 0$ s.t. $\langle T h - T v, h - v \rangle \ge \breve{\beta} \|T h - T v\|^2\ \forall h, v \in C$;
(vi) of sequentially weak continuity iff $\forall \{h_n\} \subseteq C$, there holds the relation: $h_n \rightharpoonup h \Rightarrow T h_n \rightharpoonup T h$.
One can clearly see that monotonicity implies pseudomonotonicity, but the reverse implication is false. It is easily known that $\forall h \in H$, $\exists! z \in C$ s.t. $\|h - z\| \le \|h - x\|\ \forall x \in C$. We define $P_C h = z\ \forall h \in H$. Then, $P_C$ is known as the nearest-point projection from $H$ onto $C$.
Lemma 1
([20]). For each $v, x \in H$, $y \in C$, $s \in [0, 1]$, there hold the relations below:
(i) $\|P_C v - P_C x\|^2 \le \langle P_C v - P_C x, v - x \rangle$;
(ii) $\langle v - P_C v, y - P_C v \rangle \le 0$;
(iii) $\|v - P_C v\|^2 + \|y - P_C v\|^2 \le \|v - y\|^2$;
(iv) $\|v - x\|^2 = \|v\|^2 - \|x\|^2 - 2 \langle v - x, x \rangle$;
(v) $\|s v + (1 - s) x\|^2 = s \|v\|^2 + (1 - s) \|x\|^2 - s (1 - s) \|v - x\|^2$.
Lemma 2
([7]). For $0 < \beta \le \alpha$ and $h \in H$, there hold the relations below:
$$\frac{\|h - P_C(h - \alpha A h)\|}{\alpha} \le \frac{\|h - P_C(h - \beta A h)\|}{\beta} \quad \text{and} \quad \|h - P_C(h - \beta A h)\| \le \|h - P_C(h - \alpha A h)\|.$$
Lemma 3
([6]). Let $A : C \to H$ be pseudomonotone and continuous. Given $\bar{h} \in C$. Then, $\langle A \bar{h}, h - \bar{h} \rangle \ge 0\ \forall h \in C$ if and only if $\langle A h, h - \bar{h} \rangle \ge 0\ \forall h \in C$.
Lemma 4
([21]). Let $\{a_n\}$ be a sequence in $[0, \infty)$ s.t. $a_{n+1} \le (1 - \mu_n) a_n + \mu_n b_n\ \forall n \ge 1$, where $\{\mu_n\}$ and $\{b_n\}$ are two real sequences s.t. (i) $\{\mu_n\} \subseteq [0, 1]$ and $\sum_{n=1}^\infty \mu_n = \infty$, and (ii) $\limsup_{n \to \infty} b_n \le 0$ or $\sum_{n=1}^\infty |\mu_n b_n| < \infty$. Then, $\lim_{n \to \infty} a_n = 0$.
Lemma 5
([20]). Demiclosedness principle. Let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. Then, $I - T$ is demiclosed at zero, that is, if $\{h_n\} \subseteq C$ s.t. $h_n \rightharpoonup h \in C$ and $(I - T) h_n \to 0$, then $(I - T) h = 0$, where $I$ is the identity mapping of $H$.
On the other hand, the terminology of nearest-point projection is extended to the notion below.
Let $g : H \to \bar{\mathbb{R}}$ be a proper convex l.s.c. function. According to [22,23], one knows that the mapping $\mathrm{prox}_g$, which is termed the proximity operator associated with $g$, is formulated below:
$$\mathrm{prox}_g(x) := \operatorname*{argmin}_{y \in H} \Big\{ g(y) + \frac{1}{2} \|x - y\|^2 \Big\}.$$
Meanwhile, it also has the formulation $\mathrm{prox}_g = (I + \partial g)^{-1}$, in which $\partial g$ denotes the subdifferential of $g$, written as $\partial g(x) := \{ u \in H : \langle u, v - x \rangle \le g(v) - g(x)\ \forall v \in H \}\ \forall x \in H$.
We present some connections between the proximity and subdifferential operators. For $\alpha > 0$ and $u \in H$, one has $\mathrm{prox}_{\alpha g} = (I + \alpha \partial g)^{-1} : H \to \mathrm{dom}\, g$, and $(u - \mathrm{prox}_{\alpha g}(u)) / \alpha \in \partial g(\mathrm{prox}_{\alpha g}(u))$.
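As a concrete instance (used again in Section 4), for $g(x) = \zeta \|x\|_1$ the proximity operator has the well-known closed form of componentwise soft-thresholding; the sketch below follows directly from the argmin definition above, with our own naming.

```python
import numpy as np

def prox_l1(alpha, x, zeta=1.0):
    """prox_{alpha g} for g(x) = zeta * ||x||_1: soft-thresholding at alpha*zeta."""
    t = alpha * zeta
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```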
Lemma 6
([24]). Given a proper convex l.s.c. function $g : H \to \bar{\mathbb{R}}$, consider two sequences $\{h_n\}, \{u_n\} \subseteq H$ such that $u_n \in \partial g(h_n)\ \forall n \ge 1$. If $h_n \rightharpoonup h$ and $u_n \to u$, then $u \in \partial g(h)$.
Lemma 7
([25]). Presume that $\{\Phi_n\}$ is a real sequence that does not decrease at infinity in the sense that $\exists \{\Phi_{n_k}\} \subseteq \{\Phi_n\}$ s.t. $\Phi_{n_k} < \Phi_{n_k + 1}\ \forall k \ge 1$. If the sequence $\{\varphi(n)\}_{n \ge n_0}$ of integers is defined as $\varphi(n) = \max\{ k \le n : \Phi_k < \Phi_{k+1} \}$, with integer $n_0 \ge 1$ fulfilling $\{ k \le n_0 : \Phi_k < \Phi_{k+1} \} \ne \emptyset$, then the following hold:
(i) $\varphi(n_0) \le \varphi(n_0 + 1) \le \cdots$ and $\varphi(n) \to \infty$;
(ii) $\Phi_{\varphi(n)} \le \Phi_{\varphi(n)+1}$ and $\Phi_n \le \Phi_{\varphi(n)+1}\ \forall n \ge n_0$.

3. Convergence Analysis

In what follows, we introduce and analyze a modified double inertial extragradient-like approach with Linesearch C, to resolve the convex minimization problem (CMP) for the sum of two convex functions, with the VIP and CFPP constraints. The strong convergence outcome for the suggested approach is acquired, whereby we derive a new algorithm for tackling the CBOP with VIP and CFPP constraints. From now on, let $\mathrm{dom}\, f = \mathrm{dom}\, g = H$, and suppose $f$ and $g$ fulfill the requirements (H1)–(H2). Moreover, always assume that the following hold:
$A$ is an $L$-Lipschitzian pseudomonotone self-mapping on $H$ satisfying $\|A h\| \le \liminf_{n \to \infty} \|A h_n\|$ for each $\{h_n\} \subseteq C$ s.t. $h_n \rightharpoonup h$;
$\{T_i\}_{i=1}^N$ is a finite family of nonexpansive self-mappings on $H$ s.t. $\Omega = \bigcap_{i=1}^N \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A) \cap S^* \ne \emptyset$.
In addition, let the sequence $\{T_n\}$ be defined as in Algorithm 1, i.e., $T_n := T_{n \bmod N}$ for each $n \ge 1$. Next, we first present a modified double inertial extragradient-like algorithm with Linesearch C as follows.
Algorithm 3.
Initial Step: Let $\{\mu_n\}, \{\rho_n\}, \{\beta_n\}, \{\gamma_n\}, \{\tau_n\} \subseteq \mathbb{R}_+$ be bounded sequences. Choose $x_1, x_0 \in H$, $\sigma, \lambda_1 > 0$, $0 < \delta < \frac{1}{8}$ and $0 < \theta, \mu < 1$. Given a $\kappa$-contractive self-mapping $F$ on $H$ with $0 \le \kappa < 1$.
Iterative Steps: For any $n$, reckon $x_{n+1}$ below.
Step 1. Reckon $w_n = x_n - \theta_n (x_{n-1} - x_n)$ with
$$\theta_n = \begin{cases} \min\{\mu_n, \frac{\gamma_n \tau_n}{\|x_{n-1} - x_n\|}\} & \text{if } x_{n-1} \ne x_n, \\ \mu_n & \text{otherwise}. \end{cases}$$
Step 2. Reckon $z_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) w_n$ and $y_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) z_n$, with $\alpha_n = \text{Linesearch C}\,(w_n, \sigma, \theta, \delta)$.
Step 3. Reckon $q_n = T_n y_n + \delta_n (T_n y_n - T_n x_{n-1})$ with
$$\delta_n = \begin{cases} \min\{\rho_n, \frac{\gamma_n \tau_n}{\|y_n - x_{n-1}\|}\} & \text{if } y_n \ne x_{n-1}, \\ \rho_n & \text{otherwise}. \end{cases}$$
Step 4. Reckon $u_n = P_C(q_n - \lambda_n A q_n)$ and $v_n = P_{C_n}(q_n - \lambda_n A u_n)$, with
$$C_n := \{ v \in H : \langle q_n - \lambda_n A q_n - u_n, v - u_n \rangle \le 0 \}.$$
Step 5. Reckon $x_{n+1} = \gamma_n F(x_n) + \beta_n x_n + (1 - \beta_n - \gamma_n) v_n$, and update
$$\lambda_{n+1} := \begin{cases} \min\{\lambda_n, \frac{\mu(\|q_n - u_n\|^2 + \|v_n - u_n\|^2)}{2 \langle A q_n - A u_n, v_n - u_n \rangle}\} & \text{if } \langle A q_n - A u_n, v_n - u_n \rangle > 0, \\ \lambda_n & \text{otherwise}. \end{cases} \qquad (6)$$
Set $n := n + 1$ and go to Step 1.
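For orientation, one pass of Steps 1–5 can be sketched schematically as below, reusing the `linesearch_C`, `project_halfspace` and prox sketches given earlier; the dictionaries `params` and `ops` are our own packaging of the data of Algorithm 3, not notation from the source.

```python
import numpy as np

def algorithm3_step(x_prev, x, lam, params, ops):
    """One schematic pass of Steps 1-5 of Algorithm 3.

    params : mu_n, rho_n, beta_n, gamma_n, tau_n, mu, sigma, theta, delta
    ops    : prox_g(alpha, v), grad_f(v), A(v), proj_C(v), T(v), F(v)
    """
    p, o = params, ops
    FB_of = lambda a, v: o['prox_g'](a, v - a * o['grad_f'](v))
    # Step 1: first inertial point w_n
    d1 = np.linalg.norm(x_prev - x)
    theta_n = min(p['mu_n'], p['gamma_n'] * p['tau_n'] / d1) if d1 > 0 else p['mu_n']
    w = x - theta_n * (x_prev - x)
    # Step 2: two forward-backward passes with the stepsize from Linesearch C
    alpha = linesearch_C(w, p['sigma'], p['theta'], p['delta'], FB_of, o['grad_f'])
    z = FB_of(alpha, w)
    y = FB_of(alpha, z)
    # Step 3: second inertial point q_n built through T_n
    d2 = np.linalg.norm(y - x_prev)
    delta_n = min(p['rho_n'], p['gamma_n'] * p['tau_n'] / d2) if d2 > 0 else p['rho_n']
    Ty, Tx = o['T'](y), o['T'](x_prev)
    q = Ty + delta_n * (Ty - Tx)
    # Step 4: subgradient extragradient step through the half-space C_n
    Aq = o['A'](q)
    u = o['proj_C'](q - lam * Aq)
    Au = o['A'](u)
    v = project_halfspace(q - lam * Au, q - lam * Aq - u, u)
    # Step 5: viscosity combination and adaptive stepsize update (6)
    x_next = (p['gamma_n'] * o['F'](x) + p['beta_n'] * x
              + (1 - p['beta_n'] - p['gamma_n']) * v)
    denom = 2.0 * float((Aq - Au) @ (v - u))
    if denom > 0:
        lam = min(lam, p['mu'] * (np.linalg.norm(q - u)**2
                                  + np.linalg.norm(v - u)**2) / denom)
    return x_next, lam
```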
Condition 3. Presume that $\{\tau_n\}, \{\gamma_n\}, \{\beta_n\}, \{\alpha_n\} \subseteq \mathbb{R}_+$ are such that the following hold:
(C1) $0 < a_1 \le \alpha_n \le a_2 < 1$;
(C2) $\{\gamma_n\} \subseteq (0, 1)$, $\gamma_n \to 0$ and $\sum_{n=1}^\infty \gamma_n = \infty$;
(C3) $\beta_n + \gamma_n \le 1$, $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\tau_n \to 0$.
Remark 1.
It is easy to see from the definitions of $\theta_n, \delta_n$ that $\lim_{n \to \infty} \frac{\theta_n}{\gamma_n} \|x_n - x_{n-1}\| = 0$ and $\lim_{n \to \infty} \frac{\delta_n}{\gamma_n} \|y_n - x_{n-1}\| = 0$. Indeed, we have $\theta_n \|x_n - x_{n-1}\| \le \gamma_n \tau_n$ and $\delta_n \|y_n - x_{n-1}\| \le \gamma_n \tau_n\ \forall n \ge 1$, which together with $\lim_{n \to \infty} \tau_n = 0$ imply that $\frac{\theta_n}{\gamma_n} \|x_n - x_{n-1}\| \le \tau_n \to 0$ and $\frac{\delta_n}{\gamma_n} \|y_n - x_{n-1}\| \le \tau_n \to 0$ as $n \to \infty$.
In order to show the strong convergence of Algorithm 3, we need several lemmas below. The first lemma can be found in [12], Lemma 3.1.
Lemma 8.
Let $\{x_n\}$ be the sequence generated by Algorithm 3 and $p \in H$. Then,
$$\|w_n - p\|^2 - \|y_n - p\|^2 \ge 2 \alpha_n [ (f+g)(y_n) + (f+g)(z_n) - 2 (f+g)(p) ] + (1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) \quad \forall n \ge 1.$$
Lemma 9.
Suppose $\{\lambda_n\}$ is fabricated in (6). Then the following hold: (i) $\{\lambda_n\}$ is nonincreasing and (ii) $\lambda_n \ge \min\{\lambda_1, \frac{\mu}{L}\} =: \lambda\ \forall n$.
Proof. 
By (6) we first obtain $\lambda_{n+1} \le \lambda_n\ \forall n$. Also, since $\langle A q_n - A u_n, v_n - u_n \rangle \le L \|q_n - u_n\| \|v_n - u_n\|$ and $\|q_n - u_n\| \|v_n - u_n\| \le \frac{1}{2}(\|q_n - u_n\|^2 + \|v_n - u_n\|^2)$, it is evident that, whenever $\langle A q_n - A u_n, v_n - u_n \rangle > 0$,
$$\frac{\mu(\|q_n - u_n\|^2 + \|v_n - u_n\|^2)}{2 \langle A q_n - A u_n, v_n - u_n \rangle} \ge \frac{\mu \|q_n - u_n\| \|v_n - u_n\|}{L \|q_n - u_n\| \|v_n - u_n\|} = \frac{\mu}{L},$$
so that $\lambda_{n+1} \ge \min\{\lambda_n, \frac{\mu}{L}\}$. □
Remark 2.
In case $q_n = u_n$ or $A u_n = 0$, one has $u_n \in \mathrm{VI}(C, A)$. In fact, by Lemmas 2 and 9, when $q_n = u_n$ or $A u_n = 0$, we obtain
$$0 = \|u_n - P_C(q_n - \lambda_n A q_n)\| \ge \|u_n - P_C(u_n - \lambda A u_n)\|.$$
We are now ready to show several lemmas, which are vital to discuss the strong convergence of our algorithm.
Lemma 10.
For the sequences $\{q_n\}, \{u_n\}, \{v_n\}$ fabricated in Algorithm 3, one has
$$\|v_n - q\|^2 \le \|q_n - q\|^2 - \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big) \|q_n - u_n\|^2 - \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big) \|v_n - u_n\|^2 \quad \forall q \in \Omega. \qquad (7)$$
Proof. 
We first assert that
$$2 \langle A q_n - A u_n, v_n - u_n \rangle \le \frac{\mu}{\lambda_{n+1}} \|q_n - u_n\|^2 + \frac{\mu}{\lambda_{n+1}} \|v_n - u_n\|^2 \quad \forall n \ge 1. \qquad (8)$$
Indeed, if $\langle A q_n - A u_n, v_n - u_n \rangle \le 0$, (8) is valid. Otherwise, (8) follows from (6). Also, let $q \in \Omega \subseteq C \subseteq C_n$. It can be readily seen that
$$\|v_n - q\|^2 = \|P_{C_n}(q_n - \lambda_n A u_n) - P_{C_n} q\|^2 \le \langle v_n - q, q_n - \lambda_n A u_n - q \rangle = \tfrac{1}{2} \|v_n - q\|^2 + \tfrac{1}{2} \|q_n - q\|^2 - \tfrac{1}{2} \|v_n - q_n\|^2 - \langle v_n - q, \lambda_n A u_n \rangle.$$
This means that
$$\|v_n - q\|^2 \le \|q_n - q\|^2 - \|v_n - q_n\|^2 - 2 \langle v_n - q, \lambda_n A u_n \rangle. \qquad (9)$$
According to $q \in \mathrm{VI}(C, A)$, one obtains $\langle A q, y - q \rangle \ge 0\ \forall y \in C$. Because $A$ is of pseudomonotonicity on $C$, one has $\langle A y, y - q \rangle \ge 0\ \forall y \in C$. Setting $y := u_n \in C$, one obtains $\langle A u_n, q - u_n \rangle \le 0$. As a result,
$$\langle A u_n, q - v_n \rangle = \langle A u_n, q - u_n \rangle + \langle A u_n, u_n - v_n \rangle \le \langle A u_n, u_n - v_n \rangle. \qquad (10)$$
Combining (9) and (10), one obtains
$$\|v_n - q\|^2 \le \|q_n - q\|^2 - \|v_n - u_n\|^2 - \|u_n - q_n\|^2 + 2 \langle q_n - \lambda_n A u_n - u_n, v_n - u_n \rangle. \qquad (11)$$
Since $u_n = P_C(q_n - \lambda_n A q_n)$ and $v_n \in C_n$, we have
$$2 \langle q_n - \lambda_n A u_n - u_n, v_n - u_n \rangle = 2 \langle q_n - \lambda_n A q_n - u_n, v_n - u_n \rangle + 2 \lambda_n \langle A q_n - A u_n, v_n - u_n \rangle \le 2 \lambda_n \langle A q_n - A u_n, v_n - u_n \rangle,$$
which together with (8) implies that
$$2 \langle q_n - \lambda_n A u_n - u_n, v_n - u_n \rangle \le \frac{\mu \lambda_n}{\lambda_{n+1}} \|q_n - u_n\|^2 + \frac{\mu \lambda_n}{\lambda_{n+1}} \|v_n - u_n\|^2. \qquad (12)$$
Therefore, substituting (12) into (11), we infer that inequality (7) holds. □
Lemma 11.
Suppose $\{x_n\}$ is fabricated in Algorithm 3. Assume $\lim_{n \to \infty} \tau_n = 0$. Then $\{x_n\}$ is bounded.
Proof. 
Let $q \in \Omega$. Using Lemma 8, we obtain
$$\|w_n - q\|^2 - \|y_n - q\|^2 \ge 2 \alpha_n [ (f+g)(y_n) + (f+g)(z_n) - 2 (f+g)(q) ] + (1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) \ge (1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) \ge 0. \qquad (13)$$
Thanks to $w_n = x_n - \theta_n (x_{n-1} - x_n)$, we deduce that
$$\|y_n - q\| \le \|w_n - q\| \le \|x_n - q\| + \theta_n \|x_{n-1} - x_n\|. \qquad (14)$$
This, along with the definition of $q_n$, leads to
$$\|q_n - q\| \le \|T_n y_n - q\| + \delta_n \|T_n y_n - T_n x_{n-1}\| \le \|y_n - q\| + \delta_n \|x_{n-1} - y_n\| \le \|x_n - q\| + \theta_n \|x_{n-1} - x_n\| + \delta_n \|x_{n-1} - y_n\|. \qquad (15)$$
On the other hand, using (7) we obtain
$$\|v_n - q\|^2 \le \|q_n - q\|^2 - \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big) \|q_n - u_n\|^2 - \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big) \|v_n - u_n\|^2.$$
Because $1 - \frac{\mu \lambda_n}{\lambda_{n+1}} \to 1 - \mu > 0$ as $n \to \infty$, we may assume $1 - \frac{\mu \lambda_n}{\lambda_{n+1}} > 0\ \forall n$. Therefore,
$$\|v_n - q\| \le \|q_n - q\| \quad \forall n. \qquad (16)$$
According to Remark 1, one has $\frac{\theta_n}{\gamma_n} \|x_{n-1} - x_n\| \to 0$ and $\frac{\delta_n}{\gamma_n} \|x_{n-1} - y_n\| \to 0$ as $n \to \infty$. As a result, $\exists M_1 > 0$ s.t.
$$\frac{\theta_n}{\gamma_n} \|x_{n-1} - x_n\| + \frac{\delta_n}{\gamma_n} \|x_{n-1} - y_n\| \le M_1 \quad \forall n \ge 1. \qquad (17)$$
Combining (15)–(17), we obtain
$$\|v_n - q\| \le \|q_n - q\| \le \|x_n - q\| + \theta_n \|x_n - x_{n-1}\| + \delta_n \|y_n - x_{n-1}\| = \|x_n - q\| + \gamma_n \Big[ \frac{\theta_n}{\gamma_n} \|x_n - x_{n-1}\| + \frac{\delta_n}{\gamma_n} \|y_n - x_{n-1}\| \Big] \le \|x_n - q\| + \gamma_n M_1 \quad \forall n \ge 1. \qquad (18)$$
Using the definition of $x_{n+1}$ and (18), we have
$$\begin{aligned} \|x_{n+1} - q\| &\le \gamma_n \|F(x_n) - q\| + \beta_n \|x_n - q\| + (1 - \beta_n - \gamma_n) \|v_n - q\| \\ &\le \gamma_n \|F(x_n) - F(q)\| + \gamma_n \|F(q) - q\| + \beta_n \|x_n - q\| + (1 - \beta_n - \gamma_n) \|v_n - q\| \\ &\le \gamma_n \kappa \|x_n - q\| + \gamma_n \|F(q) - q\| + \beta_n \|x_n - q\| + (1 - \beta_n - \gamma_n) [ \|x_n - q\| + \gamma_n M_1 ] \\ &\le [1 - \gamma_n (1 - \kappa)] \|x_n - q\| + \gamma_n [ M_1 + \|F(q) - q\| ] \\ &= [1 - \gamma_n (1 - \kappa)] \|x_n - q\| + \gamma_n (1 - \kappa) \cdot \frac{M_1 + \|F(q) - q\|}{1 - \kappa} \\ &\le \max \Big\{ \|x_n - q\|, \frac{M_1 + \|F(q) - q\|}{1 - \kappa} \Big\}. \end{aligned}$$
By induction, we obtain $\|x_n - q\| \le \max\{ \|x_1 - q\|, \frac{M_1 + \|F(q) - q\|}{1 - \kappa} \}\ \forall n$. As a result, $\{x_n\}$ is bounded. Consequently, $\{F(x_n)\}, \{y_n\}, \{z_n\}, \{q_n\}, \{u_n\}, \{v_n\}$ and $\{w_n\}$ are all bounded. □
Lemma 12.
Let $\{q_n\}, \{y_n\}, \{x_n\}, \{u_n\}$ and $\{w_n\}$ be fabricated in Algorithm 3. Suppose $x_{n+1} - x_n \to 0$, $q_n - y_n \to 0$, $q_n - u_n \to 0$, $y_n - w_n \to 0$, and $\exists \{q_{n_k}\} \subseteq \{q_n\}$ s.t. $q_{n_k} \rightharpoonup z \in H$. Then $z$ lies in $\Omega$, provided Condition 3 holds.
Proof. 
From Algorithm 3, we obtain $q_n - y_n = T_n y_n - y_n + \delta_n (T_n y_n - T_n x_{n-1})\ \forall n \ge 1$, and hence
$$\|T_n y_n - y_n\| = \|q_n - y_n - \delta_n (T_n y_n - T_n x_{n-1})\| \le \|q_n - y_n\| + \delta_n \|T_n y_n - T_n x_{n-1}\| \le \|q_n - y_n\| + \gamma_n \cdot \frac{\delta_n}{\gamma_n} \|y_n - x_{n-1}\|.$$
Using Remark 1 and the assumption $q_n - y_n \to 0$, we have
$$\lim_{n \to \infty} \|y_n - T_n y_n\| = 0. \qquad (19)$$
Also, from $u_n = P_C(q_n - \lambda_n A q_n)$, we have $\langle q_n - \lambda_n A q_n - u_n, v - u_n \rangle \le 0\ \forall v \in C$, and hence
$$\frac{1}{\lambda_n} \langle q_n - u_n, v - u_n \rangle + \langle A q_n, u_n - q_n \rangle \le \langle A q_n, v - q_n \rangle \quad \forall v \in C. \qquad (20)$$
Thanks to the Lipschitz condition on $A$, $\{A q_{n_k}\}$ is bounded. Noticing $\lambda_n \ge \min\{\lambda_1, \frac{\mu}{L}\}$, one deduces from (20) that $\liminf_{k \to \infty} \langle A q_{n_k}, v - q_{n_k} \rangle \ge 0$. Meanwhile, it is clear that $\langle A u_n, v - u_n \rangle = \langle A u_n - A q_n, v - q_n \rangle + \langle A q_n, v - q_n \rangle + \langle A u_n, q_n - u_n \rangle$. Since $q_n - u_n \to 0$, one obtains $\|A q_n - A u_n\| \to 0$. This, along with (20), arrives at $\liminf_{k \to \infty} \langle A u_{n_k}, v - u_{n_k} \rangle \ge 0$.
Let us assert that $\|y_n - T_l y_n\| \to 0\ \forall l \in \{1, \ldots, N\}$. In fact, since $\|x_n - w_n\| = \theta_n \|x_{n-1} - x_n\| = \gamma_n \cdot \frac{\theta_n}{\gamma_n} \|x_{n-1} - x_n\| \to 0$, we deduce from $y_n - w_n \to 0$ that
$$\|x_n - y_n\| \le \|x_n - w_n\| + \|w_n - y_n\| \to 0 \quad (n \to \infty),$$
and hence
$$\|y_n - y_{n+1}\| \le \|y_n - x_n\| + \|x_n - x_{n+1}\| + \|x_{n+1} - y_{n+1}\| \to 0 \quad (n \to \infty).$$
It is clear that for $m = 1, \ldots, N$,
$$\|y_n - T_{n+m} y_n\| \le \|y_n - y_{n+m}\| + \|y_{n+m} - T_{n+m} y_{n+m}\| + \|T_{n+m} y_{n+m} - T_{n+m} y_n\| \le 2 \|y_n - y_{n+m}\| + \|y_{n+m} - T_{n+m} y_{n+m}\|.$$
So, using (19) and $y_n - y_{n+1} \to 0$, one obtains $\lim_{n \to \infty} \|y_n - T_{n+m} y_n\| = 0$ for $m = 1, \ldots, N$. This immediately implies that
$$\lim_{n \to \infty} \|y_n - T_l y_n\| = 0 \quad \text{for } l = 1, \ldots, N. \qquad (21)$$
Next, we select $\{\breve{\epsilon}_k\} \subseteq (0, 1)$ s.t. $\breve{\epsilon}_k \downarrow 0$. For each $k$, one denotes by $m_k$ ($\ge 1$) the smallest number satisfying
$$\langle A u_{n_j}, v - u_{n_j} \rangle + \breve{\epsilon}_k \ge 0 \quad \forall j \ge m_k. \qquad (22)$$
Because of the decreasing property of $\{\breve{\epsilon}_k\}$, we obtain the increasing property of $\{m_k\}$. For simplicity, $\{u_{n_{m_k}}\}$ is still written as $\{u_{m_k}\}$. From $q_n - u_n \to 0$ and $q_{n_k} \rightharpoonup z$ it is easy to see that $\|A z\| \le \liminf_{k \to \infty} \|A u_{n_k}\|$. In case $A z = 0$, one has that $z$ lies in $\mathrm{VI}(C, A)$. In case $A z \ne 0$, from $\{u_{m_k}\} \subseteq \{u_{n_k}\}$ we may assume $A u_{m_k} \ne 0\ \forall k$. Hence, one sets $h_{m_k} = \frac{A u_{m_k}}{\|A u_{m_k}\|^2}$. As a result, $\langle A u_{m_k}, h_{m_k} \rangle = 1$. Thus, by (22) one obtains $\langle A u_{m_k}, v + \breve{\epsilon}_k h_{m_k} - u_{m_k} \rangle \ge 0$. Because $A$ is of pseudomonotonicity, one has $\langle A(v + \breve{\epsilon}_k h_{m_k}), v + \breve{\epsilon}_k h_{m_k} - u_{m_k} \rangle \ge 0$, hence arriving at
$$\langle A v, v - u_{m_k} \rangle \ge \langle A v - A(v + \breve{\epsilon}_k h_{m_k}), v + \breve{\epsilon}_k h_{m_k} - u_{m_k} \rangle - \breve{\epsilon}_k \langle A v, h_{m_k} \rangle \quad \forall k. \qquad (23)$$
Let us show $\lim_{k \to \infty} \breve{\epsilon}_k h_{m_k} = 0$. In fact, using $q_{n_k} \rightharpoonup z$ and $q_n - u_n \to 0$, one obtains $u_{n_k} \rightharpoonup z$. Thus, $\{u_n\} \subseteq C$ implies that $z$ lies in $C$. Note that $\{u_{m_k}\} \subseteq \{u_{n_k}\}$ and $\breve{\epsilon}_k \to 0$. Therefore,
$$0 \le \limsup_{k \to \infty} \|\breve{\epsilon}_k h_{m_k}\| = \limsup_{k \to \infty} \frac{\breve{\epsilon}_k}{\|A u_{m_k}\|} \le \frac{\limsup_{k \to \infty} \breve{\epsilon}_k}{\liminf_{k \to \infty} \|A u_{n_k}\|} = 0.$$
As a result, $\breve{\epsilon}_k h_{m_k} \to 0$.
In what follows, one claims that $z$ lies in $\Omega$. In fact, using $q_n - y_n \to 0$ and $q_{n_k} \rightharpoonup z$, one obtains $y_{n_k} \rightharpoonup z$. By (21) one has $y_{n_k} - T_l y_{n_k} \to 0\ \forall l \in \{1, \ldots, N\}$. Because Lemma 5 implies that $I - T_l$ is demiclosed at zero, one obtains $z \in \mathrm{Fix}(T_l)$. Therefore, $z \in \bigcap_{m=1}^N \mathrm{Fix}(T_m)$. Additionally, as $k \to \infty$, the right-hand side of (23) converges to 0 because $A$ is uniformly continuous, the sequences $\{u_{m_k}\}, \{h_{m_k}\}$ are bounded and $\lim_{k \to \infty} \breve{\epsilon}_k h_{m_k} = 0$. As a result, $\langle A v, v - z \rangle = \liminf_{k \to \infty} \langle A v, v - u_{m_k} \rangle \ge 0\ \forall v \in C$. Using Lemma 3, one has that $z$ lies in $\mathrm{VI}(C, A)$. Consequently, $z \in \bigcap_{m=1}^N \mathrm{Fix}(T_m) \cap \mathrm{VI}(C, A)$.
Finally, let us show $z \in S^*$. Indeed, from Lemma 8 (cf. (13)) and $w_n - y_n \to 0$ it follows that, for any $p \in \Omega$,
$$(1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) \le \|w_n - p\|^2 - \|y_n - p\|^2 \le \|w_n - y_n\| ( \|w_n - p\| + \|y_n - p\| ) \to 0 \quad (n \to \infty),$$
which hence arrives at
$$\lim_{n \to \infty} \|w_n - z_n\| = \lim_{n \to \infty} \|z_n - y_n\| = 0, \qquad (24)$$
and
$$\|z_n - q_n\| \le \|z_n - y_n\| + \|y_n - q_n\| \to 0 \quad (n \to \infty).$$
Based on hypothesis (H2), we obtain
$$\|\nabla f(w_{n_k}) - \nabla f(z_{n_k})\| \to 0 \quad (k \to \infty).$$
Using condition (C1) and (24), we obtain
$$\lim_{k \to \infty} \Big\| \frac{w_{n_k} - z_{n_k}}{\alpha_{n_k}} + \nabla f(z_{n_k}) - \nabla f(w_{n_k}) \Big\| = 0. \qquad (25)$$
Thanks to $z_{n_k} = \mathrm{prox}_{\alpha_{n_k} g}(I - \alpha_{n_k} \nabla f) w_{n_k}$, we have
$$w_{n_k} - \alpha_{n_k} \nabla f(w_{n_k}) \in z_{n_k} + \alpha_{n_k} \partial g(z_{n_k}),$$
which hence yields
$$\frac{w_{n_k} - z_{n_k}}{\alpha_{n_k}} + \nabla f(z_{n_k}) - \nabla f(w_{n_k}) \in \partial g(z_{n_k}) + \nabla f(z_{n_k}) = \partial (f + g)(z_{n_k}). \qquad (26)$$
Using Lemma 6, we conclude from (25), (26) and $z_{n_k} \rightharpoonup z$ that $0 \in \partial (f + g)(z)$. As a result, $z \in S^*$. Consequently, $z \in \bigcap_{i=1}^N \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A) \cap S^* = \Omega$. □
Theorem 1.
Let $\{x_n\}$ be fabricated in Algorithm 3. If Condition 3 holds, then $\{x_n\}$ strongly converges to an element $x^* \in \Omega$, where $x^* = P_\Omega F(x^*)$.
Proof. 
First of all, by the Banach Contraction Principle, one knows that $P_\Omega F$ has a unique fixed point $x^*$ in $H$. So, there is a unique solution $x^* \in \Omega$ of the VIP
$$\langle (I - F) x^*, v - x^* \rangle \ge 0 \quad \forall v \in \Omega. \qquad (27)$$
In what follows, one divides the remainder of the proofs into a few claims.
Claim 1. We show that
$$(1 - \beta_n) \Big\{ (1 - 8\delta)( \|z_n - w_n\|^2 + \|y_n - z_n\|^2 ) + \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big)( \|q_n - u_n\|^2 + \|v_n - u_n\|^2 ) \Big\} + (1 - \beta_n) \beta_n \|v_n - x_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \theta_n \|x_{n-1} - x_n\| M_2 + \delta_n \|x_{n-1} - y_n\| M_3 + \gamma_n M_4 \qquad (28)$$
for some $M_i > 0$, $i = 2, 3, 4$.
Indeed, using Lemma 11, $\{x_n\}$ is bounded, and hence $\{F(x_n)\}$ and $\{q_n\}$ are bounded. From (13) and the definition of $w_n$, one has
$$\|y_n - x^*\|^2 \le \|w_n - x^*\|^2 - (1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) = \|x_n - x^*\|^2 + \theta_n^2 \|x_n - x_{n-1}\|^2 + 2 \theta_n \langle x_n - x^*, x_n - x_{n-1} \rangle - (1 - 8\delta)( \|z_n - w_n\|^2 + \|y_n - z_n\|^2 ) \le \|x_n - x^*\|^2 + \theta_n \|x_{n-1} - x_n\| [ 2 \|x_n - x^*\| + \theta_n \|x_{n-1} - x_n\| ] - (1 - 8\delta)( \|z_n - w_n\|^2 + \|y_n - z_n\|^2 ). \qquad (29)$$
Since $q_n = T_n y_n + \delta_n (T_n y_n - T_n x_{n-1})$, by (29) one obtains
$$\|q_n - x^*\|^2 \le [ \|T_n y_n - x^*\| + \delta_n \|T_n y_n - T_n x_{n-1}\| ]^2 \le [ \|y_n - x^*\| + \delta_n \|y_n - x_{n-1}\| ]^2 = \|y_n - x^*\|^2 + \delta_n \|x_{n-1} - y_n\| [ 2 \|y_n - x^*\| + \delta_n \|x_{n-1} - y_n\| ] \le \|x_n - x^*\|^2 + \theta_n \|x_{n-1} - x_n\| [ 2 \|x_n - x^*\| + \theta_n \|x_{n-1} - x_n\| ] - (1 - 8\delta)( \|z_n - w_n\|^2 + \|y_n - z_n\|^2 ) + \delta_n \|x_{n-1} - y_n\| [ 2 \|y_n - x^*\| + \delta_n \|x_{n-1} - y_n\| ], \qquad (30)$$
which together with (7) arrives at
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \| \gamma_n (F(x_n) - v_n) + \beta_n (x_n - x^*) + (1 - \beta_n)(v_n - x^*) \|^2 \\ &\le \beta_n \|x_n - x^*\|^2 + (1 - \beta_n) \|v_n - x^*\|^2 - (1 - \beta_n) \beta_n \|v_n - x_n\|^2 + 2 \gamma_n \langle F(x_n) - v_n, x_{n+1} - x^* \rangle \\ &\le \beta_n \|x_n - x^*\|^2 + (1 - \beta_n) \Big[ \|q_n - x^*\|^2 - \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big)( \|q_n - u_n\|^2 + \|v_n - u_n\|^2 ) \Big] - (1 - \beta_n) \beta_n \|v_n - x_n\|^2 + 2 \gamma_n \langle F(x_n) - v_n, x_{n+1} - x^* \rangle \\ &\le \|x_n - x^*\|^2 - (1 - \beta_n) \Big\{ (1 - 8\delta)( \|z_n - w_n\|^2 + \|y_n - z_n\|^2 ) + \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big)( \|q_n - u_n\|^2 + \|v_n - u_n\|^2 ) \Big\} - (1 - \beta_n) \beta_n \|v_n - x_n\|^2 + \theta_n \|x_{n-1} - x_n\| M_2 + \delta_n \|x_{n-1} - y_n\| M_3 + \gamma_n M_4, \end{aligned}$$
where $\sup_{n \ge 1} [ 2 \|x_n - x^*\| + \theta_n \|x_{n-1} - x_n\| ] \le M_2$, $\sup_{n \ge 1} [ 2 \|y_n - x^*\| + \delta_n \|x_{n-1} - y_n\| ] \le M_3$ and $\sup_{n \ge 1} 2 \|F(x_n) - v_n\| \|x_{n+1} - x^*\| \le M_4$ for some $M_i > 0$, $i = 2, 3, 4$.
Claim 2. We show that
$$\|x_{n+1} - x^*\|^2 \le [1 - \gamma_n (1 - \kappa)] \|x_n - x^*\|^2 + \gamma_n (1 - \kappa) \Big\{ \frac{\theta_n \|x_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_2}{1 - \kappa} + \frac{\delta_n \|y_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_3}{1 - \kappa} + \frac{2}{1 - \kappa} \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \Big\}. \qquad (31)$$
Indeed, noticing $x_{n+1} = \gamma_n F(x_n) + \beta_n x_n + (1 - \beta_n - \gamma_n) v_n$, we deduce from (16) and (30) that
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \| \gamma_n (F(x_n) - F(x^*)) + \beta_n (x_n - x^*) + (1 - \beta_n - \gamma_n)(v_n - x^*) + \gamma_n (F(x^*) - x^*) \|^2 \\ &\le \| \gamma_n (F(x_n) - F(x^*)) + \beta_n (x_n - x^*) + (1 - \beta_n - \gamma_n)(v_n - x^*) \|^2 + 2 \gamma_n \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \\ &\le \gamma_n \|F(x_n) - F(x^*)\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \gamma_n) \|v_n - x^*\|^2 + 2 \gamma_n \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \\ &\le \gamma_n \kappa \|x_n - x^*\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \gamma_n) \|q_n - x^*\|^2 + 2 \gamma_n \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \\ &\le \gamma_n \kappa \|x_n - x^*\|^2 + \beta_n \|x_n - x^*\|^2 + (1 - \beta_n - \gamma_n) [ \|x_n - x^*\|^2 + \theta_n \|x_n - x_{n-1}\| M_2 + \delta_n \|y_n - x_{n-1}\| M_3 ] + 2 \gamma_n \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \\ &\le [1 - \gamma_n (1 - \kappa)] \|x_n - x^*\|^2 + \theta_n \|x_n - x_{n-1}\| M_2 + \delta_n \|y_n - x_{n-1}\| M_3 + 2 \gamma_n \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \\ &= [1 - \gamma_n (1 - \kappa)] \|x_n - x^*\|^2 + \gamma_n (1 - \kappa) \Big\{ \frac{\theta_n \|x_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_2}{1 - \kappa} + \frac{\delta_n \|y_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_3}{1 - \kappa} + \frac{2}{1 - \kappa} \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \Big\}. \end{aligned}$$
Claim 3. We show that $x_n \to x^* \in \Omega$, the unique solution of VIP (27). In fact, setting $\Phi_n = \|x_n - x^*\|^2$, one can derive $\Phi_n \to 0$ in the two aspects below.
Aspect 1. Presume that there exists an integer $n_0 \ge 1$ s.t. $\{\Phi_n\}$ is nonincreasing for $n \ge n_0$. One then has that $\Phi_n \to d < +\infty$ and $\Phi_n - \Phi_{n+1} \to 0$ as $n \to \infty$. From (28) and (17), one obtains
$$(1 - \beta_n) \Big\{ (1 - 8\delta)( \|w_n - z_n\|^2 + \|z_n - y_n\|^2 ) + \Big(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\Big)( \|q_n - u_n\|^2 + \|v_n - u_n\|^2 ) \Big\} + \beta_n (1 - \beta_n) \|x_n - v_n\|^2 \le \Phi_n - \Phi_{n+1} + \gamma_n \Big[ \frac{\theta_n}{\gamma_n} \|x_n - x_{n-1}\| M_2 + \frac{\delta_n}{\gamma_n} \|y_n - x_{n-1}\| M_3 \Big] + \gamma_n M_4 \le \Phi_n - \Phi_{n+1} + \gamma_n M_1 (M_2 + M_3) + \gamma_n M_4. \qquad (32)$$
Since $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$, $\gamma_n \to 0$, $\Phi_n - \Phi_{n+1} \to 0$ and $\lim_{n \to \infty} (1 - \frac{\mu \lambda_n}{\lambda_{n+1}}) = 1 - \mu > 0$, we infer from (32) that
$$\lim_{n \to \infty} \|w_n - z_n\| = \lim_{n \to \infty} \|z_n - y_n\| = 0$$
and
$$\lim_{n \to \infty} \|q_n - u_n\| = \lim_{n \to \infty} \|v_n - u_n\| = \lim_{n \to \infty} \|x_n - v_n\| = 0.$$
Thus, we conclude that
$$\|v_n - q_n\| \le \|v_n - u_n\| + \|u_n - q_n\| \to 0 \quad (n \to \infty),$$
$$\|x_n - q_n\| \le \|x_n - v_n\| + \|v_n - q_n\| \to 0 \quad (n \to \infty),$$
$$\|y_n - w_n\| \le \|y_n - z_n\| + \|z_n - w_n\| \to 0 \quad (n \to \infty),$$
$$\|v_n - y_n\| \le \|v_n - x_n\| + \|x_n - w_n\| + \|w_n - y_n\| \le \|v_n - x_n\| + \theta_n \|x_n - x_{n-1}\| + \|w_n - y_n\| \to 0 \quad (n \to \infty),$$
and hence
$$\|q_n - y_n\| \le \|q_n - v_n\| + \|v_n - y_n\| \to 0 \quad (n \to \infty).$$
Thanks to $x_{n+1} = \gamma_n F(x_n) + \beta_n x_n + (1 - \beta_n - \gamma_n) v_n$, we have
$$\|x_{n+1} - x_n\| \le \gamma_n \|F(x_n) - x_n\| + (1 - \beta_n - \gamma_n) \|v_n - x_n\| \le \gamma_n \|F(x_n) - x_n\| + \|v_n - x_n\| \to 0 \quad (n \to \infty).$$
Thanks to the boundedness of $\{x_n\}$, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty} \langle F(x^*) - x^*, x_n - x^* \rangle = \lim_{k \to \infty} \langle F(x^*) - x^*, x_{n_k} - x^* \rangle. \qquad (37)$$
Since $H$ is reflexive and $\{x_n\}$ is bounded, we may assume that $x_{n_k} \rightharpoonup \tilde{x}$. Hence, from (37) we obtain
$$\limsup_{n \to \infty} \langle F(x^*) - x^*, x_n - x^* \rangle = \lim_{k \to \infty} \langle F(x^*) - x^*, x_{n_k} - x^* \rangle = \langle F(x^*) - x^*, \tilde{x} - x^* \rangle. \qquad (38)$$
Because $x_{n+1} - x_n \to 0$, $q_n - y_n \to 0$, $q_n - u_n \to 0$, $y_n - w_n \to 0$ and $q_{n_k} \rightharpoonup \tilde{x}$ (due to $q_n - x_n \to 0$), by Lemma 12 we infer that $\tilde{x} \in \Omega$. Hence, from (27) and (38) we obtain
$$\limsup_{n \to \infty} \langle F(x^*) - x^*, x_n - x^* \rangle = \langle F(x^*) - x^*, \tilde{x} - x^* \rangle \le 0,$$
which immediately leads to
$$\limsup_{n \to \infty} \langle F(x^*) - x^*, x_{n+1} - x^* \rangle = \limsup_{n \to \infty} [ \langle F(x^*) - x^*, x_{n+1} - x_n \rangle + \langle F(x^*) - x^*, x_n - x^* \rangle ] \le \limsup_{n \to \infty} [ \|F(x^*) - x^*\| \|x_{n+1} - x_n\| + \langle F(x^*) - x^*, x_n - x^* \rangle ] \le 0.$$
Note that $\{\gamma_n (1 - \kappa)\} \subseteq [0, 1]$, $\sum_{n=1}^\infty \gamma_n (1 - \kappa) = \infty$, and
$$\limsup_{n \to \infty} \Big\{ \frac{\theta_n \|x_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_2}{1 - \kappa} + \frac{\delta_n \|y_n - x_{n-1}\|}{\gamma_n} \cdot \frac{M_3}{1 - \kappa} + \frac{2}{1 - \kappa} \langle F(x^*) - x^*, x_{n+1} - x^* \rangle \Big\} \le 0.$$
Therefore, using Lemma 4, we deduce from (31) that $x_n \to x^*$ as $n \to \infty$.
Aspect 2. Suppose that $\exists \{\Phi_{n_k}\} \subseteq \{\Phi_n\}$ s.t. $\Phi_{n_k} < \Phi_{n_k+1}\ \forall k \in \mathbb{N}$, where $\mathbb{N}$ is the set of all positive integers. Define the mapping $\varphi : \mathbb{N} \to \mathbb{N}$ by
$$\varphi(n) := \max\{ k \le n : \Phi_k < \Phi_{k+1} \}.$$
By Lemma 7, we obtain
$$\Phi_{\varphi(n)} \le \Phi_{\varphi(n)+1} \quad \text{and} \quad \Phi_n \le \Phi_{\varphi(n)+1}.$$
From (28) and (17) we obtain
$$(1 - \beta_{\varphi(n)}) \Big\{ (1 - 8\delta)( \|w_{\varphi(n)} - z_{\varphi(n)}\|^2 + \|z_{\varphi(n)} - y_{\varphi(n)}\|^2 ) + \Big(1 - \frac{\mu \lambda_{\varphi(n)}}{\lambda_{\varphi(n)+1}}\Big)( \|q_{\varphi(n)} - u_{\varphi(n)}\|^2 + \|v_{\varphi(n)} - u_{\varphi(n)}\|^2 ) \Big\} + \beta_{\varphi(n)} (1 - \beta_{\varphi(n)}) \|x_{\varphi(n)} - v_{\varphi(n)}\|^2 \le \Phi_{\varphi(n)} - \Phi_{\varphi(n)+1} + \theta_{\varphi(n)} \|x_{\varphi(n)} - x_{\varphi(n)-1}\| M_2 + \delta_{\varphi(n)} \|y_{\varphi(n)} - x_{\varphi(n)-1}\| M_3 + \gamma_{\varphi(n)} M_4 \le \Phi_{\varphi(n)} - \Phi_{\varphi(n)+1} + \gamma_{\varphi(n)} M_1 (M_2 + M_3) + \gamma_{\varphi(n)} M_4.$$
This hence implies that
$$\lim_{n \to \infty} \|z_{\varphi(n)} - w_{\varphi(n)}\| = \lim_{n \to \infty} \|y_{\varphi(n)} - z_{\varphi(n)}\| = 0$$
and
$$\lim_{n \to \infty} \|q_{\varphi(n)} - u_{\varphi(n)}\| = \lim_{n \to \infty} \|v_{\varphi(n)} - u_{\varphi(n)}\| = \lim_{n \to \infty} \|x_{\varphi(n)} - v_{\varphi(n)}\| = 0.$$
So it follows that
$$\lim_{n \to \infty} \|q_{\varphi(n)} - v_{\varphi(n)}\| = \lim_{n \to \infty} \|q_{\varphi(n)} - x_{\varphi(n)}\| = \lim_{n \to \infty} \|y_{\varphi(n)} - v_{\varphi(n)}\| = 0.$$
Applying reasonings analogous to those in the proofs of Aspect 1, one obtains
$$\lim_{n \to \infty} \|w_{\varphi(n)} - y_{\varphi(n)}\| = \lim_{n \to \infty} \|q_{\varphi(n)} - y_{\varphi(n)}\| = \lim_{n \to \infty} \|x_{\varphi(n)+1} - x_{\varphi(n)}\| = 0,$$
and
$$\limsup_{n \to \infty} \langle F(x^*) - x^*, x_{\varphi(n)+1} - x^* \rangle \le 0.$$
In what follows, using (31) one obtains
$$\gamma_{\varphi(n)} (1 - \kappa) \Phi_{\varphi(n)} \le \Phi_{\varphi(n)} - \Phi_{\varphi(n)+1} + \gamma_{\varphi(n)} (1 - \kappa) \Big\{ \frac{\theta_{\varphi(n)} \|x_{\varphi(n)} - x_{\varphi(n)-1}\|}{\gamma_{\varphi(n)}} \cdot \frac{M_2}{1 - \kappa} + \frac{\delta_{\varphi(n)} \|y_{\varphi(n)} - x_{\varphi(n)-1}\|}{\gamma_{\varphi(n)}} \cdot \frac{M_3}{1 - \kappa} + \frac{2}{1 - \kappa} \langle F(x^*) - x^*, x_{\varphi(n)+1} - x^* \rangle \Big\},$$
which, since $\Phi_{\varphi(n)} \le \Phi_{\varphi(n)+1}$, hence arrives at
$$\limsup_{n \to \infty} \Phi_{\varphi(n)} \le \limsup_{n \to \infty} \Big\{ \frac{\theta_{\varphi(n)} \|x_{\varphi(n)} - x_{\varphi(n)-1}\|}{\gamma_{\varphi(n)}} \cdot \frac{M_2}{1 - \kappa} + \frac{\delta_{\varphi(n)} \|y_{\varphi(n)} - x_{\varphi(n)-1}\|}{\gamma_{\varphi(n)}} \cdot \frac{M_3}{1 - \kappa} + \frac{2}{1 - \kappa} \langle F(x^*) - x^*, x_{\varphi(n)+1} - x^* \rangle \Big\} \le 0.$$
Thus, $\lim_{n \to \infty} \Phi_{\varphi(n)} = 0$. Also, it is easily known that
$$\Phi_{\varphi(n)+1} - \Phi_{\varphi(n)} \le 2 \|x_{\varphi(n)} - x_{\varphi(n)+1}\| \|x_{\varphi(n)} - x^*\| + \|x_{\varphi(n)} - x_{\varphi(n)+1}\|^2.$$
Thanks to $\Phi_n \le \Phi_{\varphi(n)+1}$, we obtain
$$\Phi_n \le \Phi_{\varphi(n)+1} \le \Phi_{\varphi(n)} + 2 \|x_{\varphi(n)} - x_{\varphi(n)+1}\| \sqrt{\Phi_{\varphi(n)}} + \|x_{\varphi(n)} - x_{\varphi(n)+1}\|^2 \to 0 \quad (n \to \infty).$$
As a result, $\Phi_n \to 0$ $(n \to \infty)$. □
In the forthcoming discussion, we let $H = \mathbb{R}^n$ and introduce specific assumptions regarding the mappings $f, g$ and $\omega$ that are pertinent to problem (3)–(4):
(B1) $f, g : \mathbb{R}^n \to (-\infty, \infty)$ are proper convex l.s.c. functions, with $\nabla f$ being uniformly continuous;
(B2) $\omega : \mathbb{R}^n \to (-\infty, \infty)$ is of strong convexity possessing parameter $\sigma$, the gradient of $\omega$ is $L_\omega$-Lipschitzian, and $s \in (0, \frac{2}{L_\omega + \sigma})$.
Based on the stated assumptions, we propose the modified double inertial extragradient-like algorithm with Linesearch C (Algorithm 4) to solve problem (3)–(4) with VIP and CFPP constraints.
Algorithm 4.
Initial Step: Let $\{\mu_n\}, \{\rho_n\}, \{\beta_n\}, \{\gamma_n\}, \{\tau_n\} \subseteq \mathbb{R}_+$ be bounded sequences. Choose $x_1, x_0 \in \mathbb{R}^n$, $\sigma, \lambda_1 > 0$, $0 < \delta < \frac{1}{8}$ and $0 < \theta, \mu < 1$.
Iterative Steps: For any $n$, reckon $x_{n+1}$ below.
Step 1. Reckon $w_n = x_n - \theta_n (x_{n-1} - x_n)$ with
$$\theta_n = \begin{cases} \min\{\mu_n, \frac{\gamma_n \tau_n}{\|x_{n-1} - x_n\|}\} & \text{if } x_{n-1} \ne x_n, \\ \mu_n & \text{otherwise}. \end{cases}$$
Step 2. Reckon $z_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) w_n$ and $y_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) z_n$, with $\alpha_n = \text{Linesearch C}\,(w_n, \sigma, \theta, \delta)$.
Step 3. Reckon $q_n = T_n y_n + \delta_n (T_n y_n - T_n x_{n-1})$ with
$$\delta_n = \begin{cases} \min\{\rho_n, \frac{\gamma_n \tau_n}{\|y_n - x_{n-1}\|}\} & \text{if } y_n \ne x_{n-1}, \\ \rho_n & \text{otherwise}. \end{cases}$$
Step 4. Reckon $u_n = P_C(q_n - \lambda_n A q_n)$ and $v_n = P_{C_n}(q_n - \lambda_n A u_n)$, with
$$C_n := \{ v \in \mathbb{R}^n : \langle q_n - \lambda_n A q_n - u_n, v - u_n \rangle \le 0 \}.$$
Step 5. Reckon $x_{n+1} = \gamma_n (I - s \nabla \omega)(x_n) + \beta_n x_n + (1 - \beta_n - \gamma_n) v_n$, and update
$$\lambda_{n+1} := \begin{cases} \min\{\lambda_n, \frac{\mu(\|q_n - u_n\|^2 + \|v_n - u_n\|^2)}{2 \langle A q_n - A u_n, v_n - u_n \rangle}\} & \text{if } \langle A q_n - A u_n, v_n - u_n \rangle > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and go to Step 1.
We provide a lemma, which is vital to our forthcoming results.
Lemma 13
([26]). Suppose $\omega : \mathbb{R}^n \to (-\infty, \infty)$ is of strong convexity with parameter $\sigma > 0$ and the gradient of $\omega$ is Lipschitzian with constant $L_\omega$. Choose $s \in (0, \frac{2}{\sigma + L_\omega})$ arbitrarily. Then, the mapping $S_s = I - s \nabla \omega$ is a contractive map s.t.
$$\|(I - s \nabla \omega) u - (I - s \nabla \omega) v\| \le \sqrt{1 - \frac{2 s \sigma L_\omega}{\sigma + L_\omega}}\, \|u - v\| \quad \forall u, v \in \mathbb{R}^n.$$
Theorem 2.
Suppose $\{x_n\}$ is fabricated in Algorithm 4, let $\mho$ be the solution set of problem (3)–(4) with VIP and CFPP constraints, and let $x^* = P_\Omega (I - s \nabla \omega)(x^*)$. Then $\{x_n\}$ converges strongly to $x^*$, provided all conditions in Theorem 1 are fulfilled.
Proof. 
Consider $F = I - s \nabla \omega$. According to Lemma 13, $F$ acts as a contractive map. Using Theorem 1, we conclude that $x_n \to x^* \in \Omega$, where $x^* = P_\Omega F(x^*)$. Therefore, for each $v \in \Omega$, one has
$$0 \ge \langle F(x^*) - x^*, v - x^* \rangle = \langle x^* - s \nabla \omega(x^*) - x^*, v - x^* \rangle = -s \langle \nabla \omega(x^*), v - x^* \rangle.$$
This immediately yields
$$\langle \nabla \omega(x^*), v - x^* \rangle \ge 0 \quad \forall v \in \Omega.$$
As a result, $x^* \in \mho$. Therefore, we obtain $x_n \to x^* \in \mho$ by Theorem 1. □

4. An Application

In this section, our Algorithm 4 is applied to find a solution to the LASSO problem with constraints of fractional programming and fixed-point problems. Since the exact solution of this problem is unknown, one employs $\|x_n - x_{n+1}\|$ to estimate the error of the $n$-th iterate, which serves to verify whether the suggested algorithm converges toward the solution.
First, recall some preliminaries. We set a mapping
$$\Gamma(x) := M x + q, \qquad (40)$$
which is found in [27] and was discussed by numerous scholars in applicable examples (see, e.g., [28]), with
$$M = B B^T + D + G, \quad B, D, G \in \mathbb{R}^{m \times m}, \qquad (41)$$
where $D$ is skew-symmetric and $G$ is a diagonal matrix whose diagonal entries are nonnegative (hence $M$ is positive semidefinite), and $q$ is an element in $\mathbb{R}^m$. The feasible set $C \subseteq \mathbb{R}^m$ is of both closedness and convexity, and is formulated below:
$$C := \{ x \in \mathbb{R}^m : H x \le d \}, \qquad (42)$$
with $H \in \mathbb{R}^{l \times m}$ and the vector $d$ being nonnegative. It is not hard to find that $\Gamma$ is of both $\beta$-(strong) monotonicity and $L$-Lipschitz continuity with $\beta = \min\{\mathrm{eig}(M)\}$ and $L = \max\{\mathrm{eig}(M)\}$.
As far as we know, image reconstruction involves various tasks that improve the quality of images. This encompasses tasks such as image deblurring or deconvolution, which aim to repair any blurriness in an image and restore it to a clearer and more visually appealing state. Attempts at image reconstruction date back to the 1950s, and reconstruction has been exploited in different areas, e.g., consumer photography, image/video decoding, and scientific exploration [29,30]. From a mathematical viewpoint, image reconstruction is usually formulated below:
$$v = A x + \breve{b}, \qquad (43)$$
where $v \in \mathbb{R}^m$ represents the observed image, $A \in \mathbb{R}^{m \times n}$ denotes the blurring matrix, $x \in \mathbb{R}^n$ denotes the original image, and $\breve{b}$ is noise. To attain our aim of recovering the optimal valid image $\bar{x} \in \mathbb{R}^n$ that meets (43), we are devoted to tackling the least squares problem (44) while minimizing the impact of $\breve{b}$. By doing so, we can make sure that our technique is efficient for acquiring the desired outcomes. This technique seeks to minimize the squared discrepancy between $v$ and $A x$, with the goal of ameliorating the reconstruction procedure and strengthening the image quality:
$$\min_x \|v - A x\|_2^2, \qquad (44)$$
where $\|\cdot\|_2$ is the Euclidean norm. Various iterative processes can be utilized to evaluate the solution of (44). It is noteworthy that (44) poses a challenge because it lies in the category of ill-posed problems. When the number of unknown variables exceeds the number of observations, the problem commonly becomes unstable. This vital issue has been broadly explored and recorded in various research works, e.g., [31,32]. Regularization approaches have been suggested to resolve this challenge. In particular, the Tikhonov regularizing technique has become a crucial method that ameliorates the accuracy and stability of solutions:
$$\min_x \{ \|v - A x\|_2^2 + \zeta \|L x\|_2^2 \}, \qquad (45)$$
where $\zeta > 0$ is a constant termed the regularization parameter. Besides, the Tikhonov matrix is represented by $L \in \mathbb{R}^{m \times n}$, with the default configuration viewing $L$ as the identity matrix. The least absolute shrinkage and selection operator (LASSO), invented in Tibshirani [33], is a prominent way to tackle (43). Denoting by $S^*$ the solution set of the minimization problem below
$$\min_x \{ \|v - A x\|_2^2 + \zeta \|x\|_1 \}, \qquad (46)$$
we aim at seeking a point $x^* \in S^*$ s.t.
$$x^* = \operatorname*{argmin}_{x \in S^*} \frac{1}{2} \|x\|_2^2. \qquad (47)$$
Next, in our illustrative instance, we explore and apply Algorithm 4 for tackling the CBOP with constraints of fractional programming and fixed-point problems. We set $\omega(x) = \frac{1}{2} \|x\|_2^2$, $f(x) = \|v - A x\|_2^2$ and $g(x) = \zeta \|x\|_1$ with $\zeta = 5 \times 10^{-5}$. In this case, the observed images under consideration are blurred ones.
For convenience, let $m = n = l = 4$. We now specify the operator $A$. Consider the following fractional programming problem:
$$\min \varphi(x) = \frac{x^T Q x + a^T x + a_0}{b^T x + b_0}, \quad \text{subject to } x \in X := \{ x \in \mathbb{R}^4 : b^T x + b_0 > 0 \}, \qquad (48)$$
where
$$Q = \begin{pmatrix} 5 & -1 & 2 & 0 \\ -1 & 5 & -1 & 3 \\ 2 & -1 & 3 & 0 \\ 0 & 3 & 0 & 5 \end{pmatrix}, \quad a = \begin{pmatrix} 1 \\ -2 \\ -2 \\ 1 \end{pmatrix}, \quad b = \begin{pmatrix} 2 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad a_0 = -2, \quad b_0 = 4. \qquad (49)$$
It is easy to verify that $Q$ is symmetric and positive definite on $\mathbb{R}^4$, and consequently $\varphi$ is pseudoconvex on $X = \{ x \in \mathbb{R}^4 : b^T x + b_0 > 0 \}$. Then,
$$A x := \nabla \varphi(x) = \frac{(b^T x + b_0)(2 Q x + a) - b (x^T Q x + a^T x + a_0)}{(b^T x + b_0)^2}. \qquad (50)$$
It is easily known that $A$ is pseudomonotone (see [34] for more details). Now, we give a nonexpansive mapping $T_1 : \mathbb{R}^4 \to C$ defined by $T_1 x = P_C x\ \forall x \in \mathbb{R}^4$.
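The operator (50) transcribes directly into code; the sketch below uses the data of (49) as reconstructed above and is valid only inside $X$, where the denominator is positive.

```python
import numpy as np

Q = np.array([[ 5., -1.,  2.,  0.],
              [-1.,  5., -1.,  3.],
              [ 2., -1.,  3.,  0.],
              [ 0.,  3.,  0.,  5.]])
a = np.array([1., -2., -2., 1.])
b = np.array([2., 1., 1., 0.])
a0, b0 = -2.0, 4.0

def A_op(x):
    """A x = gradient of (x'Qx + a'x + a0)/(b'x + b0), valid on X = {b'x + b0 > 0}."""
    num = float(x @ Q @ x + a @ x + a0)
    den = float(b @ x + b0)            # positive on X
    return ((2.0 * Q @ x + a) * den - b * num) / den**2
```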
Let the starting points $x_1, x_0$ be randomly selected in $\mathbb{R}^4$. Take $F(x) = (I - 0.01 \nabla \omega) x$, $\mu = 0.3$, $\gamma_n = \tau_n = \frac{1}{n+1}$, $\mu_n = \rho_n = 0.1$, $\beta_n = \frac{1}{3}$, $\sigma = 2$, $\theta = 0.9$, $\delta = 0.1$,
$$\theta_n = \begin{cases} \min\{0.1, \frac{1/(n+1)^2}{\|x_{n-1} - x_n\|}\} & \text{if } x_{n-1} \ne x_n, \\ 0.1 & \text{otherwise}, \end{cases}$$
$\alpha_n = \text{Linesearch C}\,(w_n, 2, 0.9, 0.1)$, and
$$\delta_n = \begin{cases} \min\{0.1, \frac{1/(n+1)^2}{\|x_{n-1} - y_n\|}\} & \text{if } x_{n-1} \ne y_n, \\ 0.1 & \text{otherwise}. \end{cases}$$
As a result, Algorithm 4 is rephrased as follows:
$$\begin{cases} w_n = x_n - \theta_n (x_{n-1} - x_n), \\ z_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) w_n, \\ y_n = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f) z_n, \\ q_n = T_1 y_n - \delta_n (T_1 x_{n-1} - T_1 y_n), \\ u_n = P_C(q_n - \lambda_n A q_n), \\ v_n = P_{C_n}(q_n - \lambda_n A u_n), \\ x_{n+1} = \frac{1}{n+1} \cdot \frac{99}{100} x_n + \frac{1}{3} x_n + \big( \frac{n}{n+1} - \frac{1}{3} \big) v_n \quad \forall n \ge 1, \end{cases} \qquad (51)$$
in which $\lambda_n$ and $C_n$ are selected as in Algorithm 4 for every $n$. Therefore, Theorem 2 guarantees that $\{x_n\}$ converges to a solution of the LASSO problem with constraints of the fractional programming problem and the fixed-point problem of $T_1$.
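A schematic driver for (51) follows, reusing the earlier sketches (`algorithm3_step`, `prox_l1`, `project_halfspace`, `A_op`). Everything not fixed by the text is an assumption of ours: the blurring matrix is taken as the identity, the observed vector is random, and $C$ is realized as the closed half-space $\{x : b^T x + b_0 \ge \varepsilon\} \subseteq X$ so that $P_C$ stays in closed form (the paper's $C = \{x : Hx \le d\}$ would instead call for a small QP at each projection).

```python
import numpy as np

rng = np.random.default_rng(1)
zeta = 5e-5
Amat = np.eye(4)                    # assumed blurring matrix of the 4x4 toy instance
v_obs = rng.standard_normal(4)      # assumed observed image

eps = 1e-3
y_ref = ((eps - b0) / float(b @ b)) * b            # point on the boundary b'x + b0 = eps
proj_C = lambda x: project_halfspace(x, -b, y_ref)  # our half-space realization of C

ops = {
    'prox_g': lambda alpha, x: prox_l1(alpha, x, zeta),      # g = zeta ||.||_1
    'grad_f': lambda x: 2.0 * Amat.T @ (Amat @ x - v_obs),   # f = ||v - Ax||_2^2
    'A': A_op,                                               # pseudomonotone operator (50)
    'proj_C': proj_C,
    'T': proj_C,                                             # T_1 = P_C
    'F': lambda x: 0.99 * x,   # F = I - 0.01*grad(omega), omega = ||x||^2/2
}

x_prev, x, lam = rng.standard_normal(4), rng.standard_normal(4), 1.0
for n in range(1, 201):
    g = 1.0 / (n + 1)
    params = dict(mu_n=0.1, rho_n=0.1, beta_n=1/3, gamma_n=g, tau_n=g,
                  mu=0.3, sigma=2.0, theta=0.9, delta=0.1)
    x_next, lam = algorithm3_step(x_prev, x, lam, params, ops)
    if n % 50 == 0:
        print(n, np.linalg.norm(x_next - x))   # ||x_n - x_{n+1}|| as the error estimate
    x_prev, x = x, x_next
```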
In the end, it is worth mentioning that there have been many works dealing with the design of algorithms to solve (46); see, e.g., [35,36] and the references therein. Moreover, some of them are able to solve the problem globally under non-convexity assumptions.

5. Conclusions

This article is focused on designing and analyzing iterative algorithms to tackle the convex bilevel optimization problem (CBOP) with CFPP and VIP constraints, with the CFPP and VIP representing a common fixed point problem and a variational inequality problem, respectively. Here, the CFPP involves finitely many nonexpansive mappings, and the VIP involves a Lipschitzian pseudomonotone mapping in a real Hilbert space.
To the best of our knowledge, the CBOP plays a prominent role in the decision-making process under the hierarchical setting, while image reconstruction has a vital effect on signal processing and computer vision. With the help of the subgradient extragradient and forward-backward viscosity methods, we have designed a novel double inertial extragradient-like approach with Linesearch C for tackling the CBOP with the CFPP and VIP constraints. Under certain appropriate conditions, we have proved that the sequence fabricated by the proposed algorithm converges strongly to a solution of the CBOP with CFPP and VIP constraints, where the proposed algorithm consists of two parts that possess a mutually symmetric structure to a certain extent. As an application, our proposed algorithm is exploited for treating the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems. The illustrative instance highlights the specific advantages and potential influence of our proposed algorithm over the existing algorithms in the literature, particularly in the domain of image restoration. Finally, it is worth mentioning that a part of our subsequent investigation will concentrate on establishing the strong convergence outcome for the modification of our devised method with quasi-inertial Tseng's extragradient steps (see [13]) and adaptive step sizes.

Author Contributions

Conceptualization, Y.Z., L.-C.C., L.-F.Z. and X.W.; methodology, Y.Z. and L.-C.C.; software, Y.Z.; validation, L.-F.Z. and X.W.; formal analysis, Y.Z., L.-C.C., L.-F.Z. and X.W.; investigation, Y.Z., L.-C.C., L.-F.Z. and X.W.; resources, L.-C.C.; data curation, Y.Z., L.-F.Z. and X.W.; writing—original draft preparation, Y.Z., L.-C.C., L.-F.Z. and X.W.; writing—review and editing, Y.Z. and L.-C.C.; visualization, Y.Z., L.-F.Z. and X.W.; supervision, L.-C.C.; project administration, L.-C.C.; funding acquisition, L.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the 2020 Shanghai Leading Talents Program of the Shanghai Municipal Human Resources and Social Security Bureau (20LJ2006100), the Innovation Program of Shanghai Municipal Education Commission (15ZZ068) and the Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756. [Google Scholar]
  2. Yao, Y.; Liou, Y.C.; Yao, J.C. An extragradient method for fixed point problems and variational inequality problems. J. Inequal. Appl. 2007, 2007, 38752. [Google Scholar] [CrossRef]
  3. Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480. [Google Scholar] [CrossRef]
  4. Yao, Y.; Marino, G.; Muglia, L. A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 2014, 63, 559–569. [Google Scholar] [CrossRef]
  5. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–501. [Google Scholar] [CrossRef]
  6. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
  7. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765. [Google Scholar] [CrossRef]
  8. Dong, Q.L.; Lu, Y.Y.; Yang, J.F. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
  9. Ceng, L.C.; Pang, C.T.; Wen, C.F. Multi-step extragradient method with regularization for triple hierarchical variational inequalities with variational inclusion and split feasibility constraints. J. Inequal. Appl. 2014, 2014, 492. [Google Scholar] [CrossRef]
  10. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 2021, 70, 1337–1358. [Google Scholar] [CrossRef]
  11. Ceng, L.C.; Latif, A.; Ansari, Q.H.; Yao, J.C. Hybrid extragradient method for hierarchical variational inequalities. Fixed Point Theory Appl. 2014, 2014, 222. [Google Scholar] [CrossRef]
  12. Wattanataweekul, R.; Janngam, K.; Suantai, S. A Novel Double Inertial Viscosity Algorithm for Convex Bilevel Optimization Problems Applied to Image Restoration Problems. Optimization. Available online: http://mc.manuscriptcentral.com/gopt (accessed on 13 June 2024).
  13. Zhao, T.Y.; Wang, D.Q.; Ceng, L.C.; He, L.; Wang, C.Y.; Fan, H.L. Quasi-inertial Tseng’s extragradient algorithms for pseudomonotone variational inequalities and fixed point problems of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 2020, 42, 69–90. [Google Scholar] [CrossRef]
  14. Ceng, L.C.; Petrusel, A.; Qin, X.; Yao, J.C. A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 2020, 21, 93–108. [Google Scholar] [CrossRef]
  15. Ceng, L.C.; Liou, Y.C.; Wen, C.F. A hybrid extragradient method for bilevel pseudomonotone variational inequalities with multiple solutions. J. Nonlinear Sci. Appl. 2016, 9, 4052–4069. [Google Scholar] [CrossRef]
  16. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258. [Google Scholar] [CrossRef]
  17. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409. [Google Scholar] [CrossRef]
  18. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610. [Google Scholar] [CrossRef]
  19. Cruz, J.Y.B.; Nghia, T.T.A. On the convergence of the forward-backward splitting method with line searches. Optim. Methods Softw. 2016, 31, 1209–1238. [Google Scholar] [CrossRef]
  20. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
  21. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201. [Google Scholar] [CrossRef]
  22. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
  23. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  24. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216. [Google Scholar] [CrossRef]
  25. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
  26. Sabach, S.; Shtern, S. A first-order method for solving convex bilevel optimization problems. SIAM J. Optim. 2017, 27, 640–660. [Google Scholar] [CrossRef]
  27. Harker, P.T.; Pang, J.S. A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 1990, 26, 265–284. [Google Scholar]
  28. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704. [Google Scholar] [CrossRef]
  29. Suseela, G.; Basha, S.A.; Babu, K.P. Image restoration using Lucy Richardson algorithm for X-ray images. Int. J. Innov. Sci. Eng. Technol. 2016, 3, 280–285. [Google Scholar]
  30. Maurya, A.; Tiwari, R. A novel method of image restoration by using different types of filtering techniques. Int. J. Eng. Sci. Innov. Technol. 2014, 3, 124–129. [Google Scholar]
  31. Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145. [Google Scholar] [CrossRef]
  32. Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms); SIAM: Philadelphia, PA, USA, 2006. [Google Scholar]
  33. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  34. Karamardian, S.; Schaible, S. Seven kinds of monotone maps. J. Optim. Theory Appl. 1990, 66, 37–46. [Google Scholar] [CrossRef]
  35. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 2013, 137, 91–129. [Google Scholar] [CrossRef]
  36. Frankel, P.; Garrigos, G.; Peypouquet, J. Splitting methods with variable metric for Kurdyka-Lojasiewicz functions and general convergence rates. J. Optim. Theory Appl. 2015, 165, 874–900. [Google Scholar] [CrossRef]