1. Introduction
Throughout, let $C$ be a nonempty set possessing both convexity and closedness in a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $P_C$ be the nearest-point projection from $H$ onto $C$. For a mapping $S\colon C\to C$, we use $\mathrm{Fix}(S)$ and $\mathbb{R}$ to indicate the fixed-point set of $S$ and the real-number set, respectively. For an operator $A\colon H\to H$, we recall the classical variational inequality problem (VIP), i.e., the objective is to find $x^*\in C$ such that $\langle Ax^*, x - x^*\rangle\ge 0$ for all $x\in C$, where $\mathrm{VI}(C,A)$ stands for the solution set of the VIP.
In 1976, Korpelevich [1] put forth the extragradient rule, which remains one of the most effective tools for tackling the VIP: for an arbitrary initial point $x_0\in C$, the sequence $\{x_n\}$ is generated by $y_n = P_C(x_n - \ell A x_n)$ and $x_{n+1} = P_C(x_n - \ell A y_n)$, with constant stepsize $\ell\in(0, 1/L)$. The research outcomes on the VIP are abundant, and the Korpelevich extragradient rule has attracted broad attention from numerous scholars, who have improved it in various forms; see [2-18] and the references therein, to name but a few.
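For intuition, the Korpelevich rule can be sketched in a few lines of code. The affine monotone operator, the box constraint, and all constants below are our own illustrative choices, not part of [1]; the stepsize is taken in $(0, 1/L)$ as required.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # Nearest-point projection P_C onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

def extragradient(A, x0, ell, iters=500):
    # Korpelevich extragradient:
    #   y_n = P_C(x_n - ell*A(x_n)),  x_{n+1} = P_C(x_n - ell*A(y_n))
    x = x0
    for _ in range(iters):
        y = proj_box(x - ell * A(x))
        x = proj_box(x - ell * A(y))
    return x

# Illustrative monotone operator A(x) = Mx + q (M symmetric positive definite)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
x_star = extragradient(A, np.zeros(2), ell=0.9 / L)
```

The fixed-point residual $\|x - P_C(x - Ax)\|$ vanishes exactly at solutions of the VIP, so it serves as a natural stopping criterion in practice.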
In 2018, Thong and Hieu [18] first put forward the inertial subgradient extragradient approach: for any starting points $x_0, x_1\in H$, the sequence $\{x_n\}$ is generated by $w_n = x_n + \alpha_n(x_n - x_{n-1})$, $y_n = P_C(w_n - \ell A w_n)$, and $x_{n+1} = P_{T_n}(w_n - \ell A y_n)$, where $T_n := \{x\in H : \langle w_n - \ell A w_n - y_n, x - y_n\rangle \le 0\}$, with constant $\ell\in(0, 1/L)$. Under mild restrictions, it was shown that
is weakly convergent to a solution of the VIP. In 2020, an inertial subgradient extragradient-type approach was proposed in [14] for tackling the pseudomonotone VIP with a Lipschitzian self-mapping on $H$ and the common fixed-point problem (CFPP) of finitely many nonexpansive self-mappings $\{S_i\}_{i=1}^N$ on $H$. Assume $\Omega := \mathrm{VI}(C,A)\cap\bigcap_{i=1}^N \mathrm{Fix}(S_i)\neq\emptyset$. Let $F$ be of $\delta$-contractivity with $\delta\in[0,1)$, and let $f$ be of both $\eta$-strong monotonicity and $\kappa$-Lipschitz continuity s.t. $\delta < \tau := 1 - \sqrt{1 - \rho(2\eta - \rho\kappa^2)}$ for $\rho\in(0, 2\eta/\kappa^2)$. Presume that $\{\alpha_n\}, \{\beta_n\}$ are sequences in $(0,1)$ s.t. $\alpha_n\to 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$. Besides, we define $S_n := S_{n \bmod N}$, where the modulo function takes values in $\{1, 2, \dots, N\}$.
Algorithm 1 (see [14], Algorithm 3.1).
Initialization: Let
and
be arbitrarily selected. For given
and
, select
, with
Iterative Steps: Compute as follows:
Step 1. Compute and .
Step 2. Put the half-space , and compute .
Step 3. Compute
, and update
Set $n := n + 1$
and go to Step 1.
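A computational remark on Step 2: projecting onto a half-space has a closed form, which is what makes subgradient extragradient schemes cheap compared with a second projection onto $C$. A minimal sketch (the function name and test data are ours):

```python
import numpy as np

def proj_halfspace(x, a, c):
    # Projection onto the half-space H = {z : <a, z> <= c}.
    # Closed form: no inner optimization solver is needed.
    viol = a @ x - c
    if viol <= 0:
        return x.copy()          # x is already feasible
    return x - (viol / (a @ a)) * a
```
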
Under suitable conditions, it was proved that the generated sequence is strongly convergent to a point in $\Omega$. On the other hand, we recall the bilevel optimization problem (BOP) (see [12]), i.e., the objective is to seek the minima below
in which $\omega$ denotes a differentiable and strongly convex function and
stands for the nonempty solution set of the inner-level optimization problem below
in which $f$ is a differentiable and convex function, $\nabla f$ is $L$-Lipschitz continuous, and $g$ is a proper, convex and lower semi-continuous (l.s.c.) function. To the best of our knowledge, convex BOPs (CBOPs) have a crucial impact on the decision-making process under the hierarchical setting, while image restoration plays a critical role in signal processing and computer vision.
As is well known from (3), if and only if , i.e., solves the VIP: , with being the solution set of (3).
If there exists a minimizer $x^*$ of $f + g$, then the forward-backward operator $\mathrm{prox}_{\alpha g}(I - \alpha\nabla f)$ has a fixed point $x^*$, where $\nabla f$ denotes the gradient of $f$, $\alpha > 0$ denotes the stepsize and $\mathrm{prox}_{\alpha g}$ denotes the proximity operator of $g$. That is, $x^* = \mathrm{prox}_{\alpha g}(x^* - \alpha\nabla f(x^*))$. If $\nabla f$ is $\beta$-inverse-strongly monotone with $\beta > 0$ and $\alpha\in(0, 2\beta]$, then the operator $\mathrm{prox}_{\alpha g}(I - \alpha\nabla f)$ is nonexpansive. In 2016, in order to mitigate the constraints imposed by the Lipschitz condition on the gradient of $f$, Cruz and Nghia [19] introduced a linesearch procedure. They replaced the requirement of Lipschitz continuity for $\nabla f$ with more lenient hypotheses, as outlined below:
Hypothesis 1. $f, g\colon H\to(-\infty, +\infty]$ are two proper convex l.s.c. functionals s.t. $\mathrm{dom}\,g \subseteq \mathrm{dom}\,f$;
Hypothesis 2. $f$ is differentiable on some open set covering $\mathrm{dom}\,g$, the gradient $\nabla f$ possesses uniform continuity on each bounded subset of $\mathrm{dom}\,g$, and $\nabla f$ maps each bounded set in $\mathrm{dom}\,g$ to a bounded set in $H$.
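The fixed-point characterization $x^* = \mathrm{prox}_{\alpha g}(x^* - \alpha\nabla f(x^*))$ recalled above can be observed numerically by iterating the forward-backward operator. In the sketch below, $f$ is a least-squares term and $g = \lambda\|\cdot\|_1$; the data, $\lambda$, and the stepsize $\alpha = 1/L$ are our illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    # prox of t*||.||_1, applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_step(x, grad_f, alpha, lam):
    # T(x) = prox_{alpha*g}(x - alpha*grad_f(x)) with g = lam*||.||_1
    return soft_threshold(x - alpha * grad_f(x), alpha * lam)

# Smooth part f(x) = 0.5*||Ax - b||^2, so grad_f(x) = A^T(Ax - b)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # alpha = 1/L with L = ||A||^2
x = np.zeros(5)
for _ in range(2000):
    x = forward_backward_step(x, grad_f, alpha, lam=0.1)
```

After enough iterations, applying the operator once more barely moves $x$, reflecting the fixed-point property.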
In particular, Wattanataweekul et al. [
12] designed the double inertial forward-backward viscosity algorithm with Linesearch C below, to solve the convex minimization problem (CMP) for the sum of both convex functions.
Linesearch C. Fix $\theta\in(0,1)$ and $\delta\in(0, \frac{1}{2})$.
Input: Let $\alpha = \sigma > 0$.
While
$\alpha\|\nabla f(\mathrm{prox}_{\alpha g}(x - \alpha\nabla f(x))) - \nabla f(x)\| > \delta\|\mathrm{prox}_{\alpha g}(x - \alpha\nabla f(x)) - x\|$, conduct
$\alpha = \theta\alpha$.
End
Output: $\alpha$.
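In code, a backtracking rule of this kind shrinks $\alpha$ geometrically until a stability test on $\nabla f$ along the prox-step is met. The acceptance test below is our plausible reading of Linesearch C, not a verbatim transcription of [12]:

```python
import numpy as np

def linesearch_C(x, grad_f, prox_g, sigma=1.0, theta=0.5, delta=0.4, max_backtracks=50):
    # Try alpha = sigma, theta*sigma, theta^2*sigma, ... until
    #   alpha*||grad_f(z) - grad_f(x)|| <= delta*||z - x||,
    # where z = prox_g(x - alpha*grad_f(x), alpha).
    alpha = sigma
    gx = grad_f(x)
    for _ in range(max_backtracks):
        z = prox_g(x - alpha * gx, alpha)
        if alpha * np.linalg.norm(grad_f(z) - gx) <= delta * np.linalg.norm(z - x):
            break
        alpha *= theta
    return alpha
```

With $\nabla f(x) = 2x$ (Lipschitz constant 2) and the identity as the prox, the loop accepts the first $\alpha \le \delta/2$; starting from $\sigma = 1$ with $\theta = 0.5$ and $\delta = 0.4$, this is $\alpha = 0.125$.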
Assume that f and g satisfy Hypotheses (H1) and (H2). Their double inertial forward-backward viscosity algorithm with Linesearch C is specified below.
Algorithm 2 (see [
12], Algorithm 5).
Initialization: Let
be bounded sequences. Choose
and
. Given a
-contractive self-mapping
F on
with
.
Iterations: For each n, compute as follows.
- Step 1.
Compute with
- Step 2.
Compute
and
, with
- Step 3.
Compute with
- Step 4.
Compute . Set $n := n + 1$ and go to Step 1.
The strong convergence result for Algorithm 2 was established in [12]. As a consequence, they obtained an algorithm for solving the CBOP (3)-(4). Inspired by the research works in [12,14], we devise a modified double inertial extragradient-like algorithm with Linesearch C for solving the CBOP with VIP and CFPP constraints. The strong convergence result for the proposed algorithm is proved under certain appropriate assumptions; the proposed algorithm consists of two parts that possess a mutually symmetric structure to a certain extent. As an application, our proposed algorithm is invoked to deal with the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems. The illustrative instance highlights the specific advantages and potential influence of our proposed algorithm over the existing algorithms in the literature, particularly in the domain of image restoration.
The structure of the article is sketched below: In Section 2, we present certain basic tools and terminologies for later usage. Section 3 discusses and analyzes the strong convergence of the proposed algorithm. Finally, in Section 4, our main result is invoked to deal with the image restoration problem, i.e., the LASSO problem with the constraints of fractional programming and fixed-point problems.
Our algorithm is more advantageous and more flexible than Algorithm 5 in [12] because it involves solving the VIP for a Lipschitzian pseudomonotone operator and the CFPP of finitely many nonexpansive operators. Our result improves and extends the corresponding results in [12,14,18].
Lastly, it is worth pointing out that the existing method of [12] is the one most closely related to our suggested method: the double inertial forward-backward viscosity algorithm with Linesearch C for tackling a CBOP (see [12]) is developed here into the modified double inertial extragradient-like algorithm with Linesearch C for tackling a CBOP with CFPP and VIP constraints, where the VIP involves a Lipschitzian pseudomonotone operator and the CFPP involves a finite family of nonexpansive mappings. It is noteworthy that the algorithm of [12] cannot treat the CBOP with CFPP and VIP constraints, precisely because of these two additional constraints. This is the natural motivation for developing the algorithm of [12] into the proposed modified double inertial extragradient-like algorithm with Linesearch C.
2. Preliminaries
Suppose throughout that $C$ is a set of both convexity and closedness in $H$. For a given sequence $\{x_n\}\subset H$, we use $x_n\to h$ (resp., $x_n\rightharpoonup h$) to denote the strong (resp., weak) convergence of $\{x_n\}$ to $h$. Let $T\colon H\to H$ be a mapping. $T$ is termed nonexpansive if $\|Tx - Ty\|\le\|x - y\|$ for all $x, y\in H$. In addition, $A\colon H\to H$ is termed to be
(i) $L$-Lipschitzian or $L$-Lipschitz continuous iff $\exists L > 0$ s.t. $\|Ax - Ay\|\le L\|x - y\|$ for all $x, y\in H$;
(ii) monotone iff $\langle Ax - Ay, x - y\rangle\ge 0$ for all $x, y\in H$;
(iii) pseudomonotone iff $\langle Ax, y - x\rangle\ge 0 \Rightarrow \langle Ay, y - x\rangle\ge 0$ for all $x, y\in H$;
(iv) of $\eta$-strong monotonicity iff $\exists\eta > 0$ s.t. $\langle Ax - Ay, x - y\rangle\ge\eta\|x - y\|^2$ for all $x, y\in H$;
(v) of $\beta$-inverse-strong monotonicity iff $\exists\beta > 0$ s.t. $\langle Ax - Ay, x - y\rangle\ge\beta\|Ax - Ay\|^2$ for all $x, y\in H$;
(vi) of sequentially weak continuity if, for each $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, there holds the relation $Ax_n\rightharpoonup Ax$.
One can clearly see that monotonicity implies pseudomonotonicity, but the reverse implication is false. It is easily known that, for each $x\in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, s.t. $\|x - P_C x\|\le\|x - y\|$ for all $y\in C$. We define $P_C\colon H\to C$ accordingly. Then, $P_C$ is known as the nearest-point projection from $H$ onto $C$.
Lemma 1 ([
20]).
For each , there are the relations below:(i) ;
(ii) ;
(iii) ;
(iv) ;
(v) .
Lemma 2 ([
7]).
For and , there are the relations below: Lemma 3 ([
6]).
Let $A\colon C\to H$ be pseudomonotone and continuous. Given $x^*\in C$. Then, $x^*\in\mathrm{VI}(C, A)$ if and only if $\langle Ay, y - x^*\rangle\ge 0$ for all $y\in C$. Lemma 4 ([
21]).
Let $\{a_n\}$ be a sequence in $[0, \infty)$ s.t. $a_{n+1}\le(1 - \lambda_n)a_n + \lambda_n\gamma_n$ for all $n$, where $\{\lambda_n\}$ and $\{\gamma_n\}$ are two real sequences s.t. (i) $\{\lambda_n\}\subset[0,1]$ and $\sum_{n=1}^\infty\lambda_n = \infty$, and (ii) $\limsup_{n\to\infty}\gamma_n\le 0$ or $\sum_{n=1}^\infty|\lambda_n\gamma_n| < \infty$. Then, $\lim_{n\to\infty}a_n = 0$. Lemma 5 ([
20]).
Demiclosedness principle. Let $S\colon C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(S)\neq\emptyset$. Then, $I - S$ is demiclosed at zero; that is, if $\{x_n\}\subset C$ s.t. $x_n\rightharpoonup x\in C$ and $(I - S)x_n\to 0$, then $x\in\mathrm{Fix}(S)$, where $I$ is the identity mapping of $H$. On the other hand, the terminology of nearest-point projection is extended to the notion below.
Let $g\colon H\to(-\infty, +\infty]$ be a proper convex l.s.c. function. According to [22,23], one knows that the mapping $\mathrm{prox}_g$, which is termed the proximity operator associated with $g$, is formulated below:
$\mathrm{prox}_g(x) = \arg\min_{y\in H}\bigl\{g(y) + \tfrac{1}{2}\|x - y\|^2\bigr\}$.
Meanwhile, it is also of the formulation $\mathrm{prox}_g = (I + \partial g)^{-1}$, in which $\partial g$ denotes the subdifferential of $g$, written as $\partial g(x) = \{u\in H : g(y)\ge g(x) + \langle u, y - x\rangle\ \forall y\in H\}$.
We present some connections between the proximity and subdifferential operators. For $\alpha > 0$ and $x, y\in H$, we have $y = \mathrm{prox}_{\alpha g}(x)$ if and only if $\frac{1}{\alpha}(x - y)\in\partial g(y)$.
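This characterization is easy to verify numerically for $g = \|\cdot\|_1$, whose proximity operator is componentwise soft-thresholding; the vectors and $\alpha$ below are arbitrary illustrative choices.

```python
import numpy as np

def prox_l1(x, alpha):
    # prox_{alpha*||.||_1}(x): componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

def in_subdiff_l1(u, y, tol=1e-12):
    # u lies in the subdifferential of ||.||_1 at y iff
    # u_i = sign(y_i) when y_i != 0, and u_i in [-1, 1] when y_i = 0.
    return all(
        (yi != 0 and abs(ui - np.sign(yi)) <= tol) or (yi == 0 and abs(ui) <= 1 + tol)
        for ui, yi in zip(u, y)
    )

x = np.array([2.0, -0.3, 0.0, 1.5])
alpha = 0.5
y = prox_l1(x, alpha)        # gives [1.5, 0.0, 0.0, 1.0]
u = (x - y) / alpha          # should lie in the subdifferential of ||.||_1 at y
```
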
Lemma 6 ([
24]).
Given a proper convex l.s.c. function $g\colon H\to(-\infty, +\infty]$, two sequences $\{x_n\}$ and $\{u_n\}$ are considered such that $u_n\in\partial g(x_n)$ for all $n$. If $x_n\rightharpoonup x$ and $u_n\to u$, then $u\in\partial g(x)$. Lemma 7 ([
25]).
Presume that $\{\Gamma_n\}$ is a real sequence that does not decrease at infinity, in the sense that $\exists$ a subsequence $\{\Gamma_{n_k}\}$ s.t. $\Gamma_{n_k} < \Gamma_{n_k+1}$ for all $k$. If the sequence of integers $\{\varphi(n)\}_{n\ge n_0}$ is defined as $\varphi(n) := \max\{k\le n : \Gamma_k < \Gamma_{k+1}\}$, with integer $n_0$ fulfilling $\{k\le n_0 : \Gamma_k < \Gamma_{k+1}\}\neq\emptyset$, then the following holds: (i) $\varphi(n_0)\le\varphi(n_0+1)\le\cdots$ and $\varphi(n)\to\infty$;
(ii) $\Gamma_{\varphi(n)}\le\Gamma_{\varphi(n)+1}$ and $\Gamma_n\le\Gamma_{\varphi(n)+1}$ for all $n\ge n_0$.
3. Convergence Analysis
In what follows, we introduce and analyze a modified double inertial extragradient-like approach with Linesearch C to resolve the convex minimization problem (CMP) for the sum of two convex functions, with the VIP and CFPP constraints. The strong convergence outcome for the suggested approach is acquired, whereby we derive a new algorithm for tackling the CBOP with VIP and CFPP constraints. From now on, suppose f and g fulfill the requirements (H1)-(H2). Moreover, always assume that the following hold:
$A$ is an $L$-Lipschitzian pseudomonotone self-mapping on $H$ satisfying $\|Ax\|\le\liminf_{n\to\infty}\|Ax_n\|$ whenever $x_n\rightharpoonup x$;
$\{S_i\}_{i=1}^N$ is a finite family of nonexpansive self-mappings on $H$ s.t. $\Omega\neq\emptyset$.
In addition, let the sequence $\{S_n\}$ be defined as in Algorithm 1, i.e., $S_n := S_{n \bmod N}$ for each $n$. Next, we first present a modified double inertial extragradient-like algorithm with Linesearch C as follows.
Algorithm 3. Initial Step: Let the given inertial sequences be bounded. Choose the starting points and parameters. Given a $\delta$-contractive self-mapping $F$ on $H$ with $\delta\in[0, 1)$.
Iterative Steps: For each n, compute as follows.
- Step 1.
Compute with
- Step 2.
Compute
and
, with
- Step 3.
Compute
with
- Step 4.
Compute
and
, with
- Step 5.
Compute
, and update
Set $n := n + 1$ and go to Step 1.
Condition 3. Presume that the parameter sequences are such that the following hold:
- (C1)
;
- (C2)
and;
- (C3)
and.
Remark 1. It is easy to see that, from the definitions of we obtain that and . Indeed, we have and , which together with imply that and as .
In order to show the strong convergence of Algorithm 3, we need several lemmas below. The first lemma can be found in [
12], Lemma 3.1.
Lemma 8. Let be the sequence generated by Algorithm 3 and . Then, Lemma 9. Suppose is generated by (6). Then, the following hold: (i) is nonincreasing and (ii) .
Proof. By (6) we first obtain
. Also, it is evident that
□
Remark 2. In case or , one has . In fact, by Lemmas 2 and 9, when or , we obtain We are now ready to show several lemmas, which are vital for discussing the strong convergence of our algorithm.
Lemma 10. For the sequences generated by Algorithm 3, one has Proof. Indeed, if
, (8) is valid. On the contrary, by (6) one has (8). Also, let
. It can be readily seen that
This means that
According to
, one obtains
. Because
A is of pseudomonotonicity on
C, one has
. Setting
one obtains
. As a result,
Combining (9) and (10), one obtains
Since
and
, we have
which together with (8), implies that
Therefore, substituting (12) for (11), we infer that inequality (7) holds. □
Lemma 11. Suppose is generated by Algorithm 3. Assume . Then is bounded.
Proof. Let
. Using Lemma 8, we obtain
Thanks to
, we deduce that
This along with the definition of
, leads to
On the other hand, using (7) we obtain
Because
as
, we might assume
. Therefore,
According to Remark 1, one has that
and
as
. As a result,
s.t.
Combining (15)–(17), we obtain
Using the definition of
and (18), we have
By induction, we obtain
. As a result,
is bounded. Consequently,
and
are all bounded. □
Lemma 12. Let and be generated by Algorithm 3. Suppose , , and s.t. . Then, z lies in Ω provided Condition 3 holds.
Proof. From Algorithm 3, we obtain
, and hence
Using Remark 1 and the assumption
, we have
Also, from
, we have
, and hence
Thanks to the Lipschitz condition on
A,
is bounded. Noticing
, one deduces from (20) that
. Meanwhile, it is clear that
. Since
, one obtains
. This along with (20) arrives at
.
Let us assert
. In fact, since
, we deduce from
that
and hence
It is clear that for
,
So, using (19) and
one obtains
for
. This immediately implies that
Next, we select
s.t.
. For each
k, one denotes by
the smallest number satisfying
Because of the decreasing property of
, we obtain the increasing property of
. For simplicity,
is still written as
. From
and
it is easy to see that
. In case
, one has that
z lies in
. In case
, from
we might assume
. Hence, one sets
. As a result,
. Thus, by (22) one obtains
. Because
A is of pseudomonotonicity, one has
, hence arriving at
Let us show
. In fact, using
and
, one obtains
. Thus,
implies that
z lies in
C. Note that
and
. Therefore,
As a result,
.
In what follows, one claims that z lies in . In fact, using and , one obtains . By (21) one has . Because Lemma 5 implies that is demiclosed at zero, one obtains . Therefore, one obtains . Additionally, as , one obtains that the right-hand side of (23) converges to 0 due to the fact that A is uniformly continuous, the sequences are bounded and . As a result, . Using Lemma 3 one has that z lies in . As a result, .
Finally, let us show
. Indeed, from (13) and
it follows that
which hence arrives at
and
Based on the hypothesis (H2), we obtain
Using the condition (C1) and (24), we obtain
Thanks to
, we have
which hence yields
Using Lemma 6 we conclude from (25), (26) and
that
. As a result,
. Consequently,
. □
Theorem 1. Let be generated by Algorithm 3. If Condition 3 holds, then strongly converges to an element , where .
Proof. First of all, by the Banach Contraction Principle, one knows that there is a unique fixed point
of
in
. So, there is a unique solution
of the VIP
In what follows, one divides the remainder of the proofs into a few claims.
Claim 1. We show that
for some
.
Indeed, using Lemma 11,
is bounded, and hence
and
are of boundedness. From (13) and the definition of
, one has
Since
, by (29) one obtains
which together with (7), arrives at
where
and
for some
.
Indeed, noticing
, we deduce from (16) and (30) that
Claim 3. We show that , which is the unique solution of VIP (27). In fact, setting , we proceed by considering the two aspects below.
Aspect 1. Presume that ∃ (integer)
s.t. there holds the nonincreasing property of
. One then has that
and
as
. From (28) and (17) one obtains
Since
,
and
, we infer from (32) that
and
Thus, we conclude that
and hence
Thanks to
, we have
Thanks to the boundedness of
, there exists a subsequence
of
such that
Since
is reflexive and
is bounded, we might assume that
. Hence, from (37) we obtain
Because
and
(due to
), by Lemma 12 we infer that
. Hence from (27) and (38) we obtain
which immediately leads to
Note that
, and
Therefore, using Lemma 4 we deduce from (31) that
as
.
Aspect 2. Suppose that
s.t.
, where
is the set of all positive integers. Define the mapping
by
By Lemma 7, we obtain
From (28) and (17) we obtain
This hence implies that
and
So it follows that
Applying reasonings analogous to those in the proof of Aspect 1, one obtains
and
In what follows, using (31) one obtains
which hence arrives at
Thus,
. Also, it is easily known that
Thanks to
, we obtain
As a result,
. □
In the forthcoming discussion, we let and introduce specific assumptions regarding the mappings , and , that are pertinent to problem (3)-(4).
(B1) are proper convex l.s.c. functions, with being uniformly continuous;
(B2) is strongly convex with parameter , where the gradient of is -Lipschitzian, and .
Based on the stated assumptions, we propose the modified double inertial extragradient-like algorithm with Linesearch C (Algorithm 4) to solve problem (3)-(4) with VIP and CFPP constraints.
Algorithm 4. Initial Step: Let be bounded sequences. Choose and .
Iterative Steps: For each n, compute as follows.
- Step 1.
Compute with
- Step 2.
Compute
and
, with
- Step 3.
Compute
with
- Step 4.
Compute
and
, with
- Step 5.
Compute
, and update
Set $n := n + 1$ and go to Step 1.
We provide a lemma, which is vital to our forthcoming results.
Lemma 13 ([
26]).
Suppose ω is strongly convex with parameter and the gradient of ω is Lipschitzian with constant . Choose arbitrarily. Then, the mapping is a contractive map s.t. Theorem 2. Suppose is generated by Algorithm 4 and let ℧ be the solution set of problem (3)-(4) with VIP and CFPP constraints and . Then converges strongly to provided all conditions in Theorem 1 are fulfilled.
Proof. Consider
. According to Lemma 13,
F acts as a contractive map. Using Theorem 1, we conclude that
, where
. Therefore, for each
, one has
This immediately yields
As a result,
. Therefore, we obtain
by Theorem 1. □
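The contraction property supplied by Lemma 13 and exploited in this proof can be spot-checked numerically. Below, $\omega(x) = \frac{1}{2}x^\top Px$ with $P$ symmetric positive definite, so $\omega$ is $\beta$-strongly convex and $\nabla\omega$ is $\ell$-Lipschitz with $\beta, \ell$ the extreme eigenvalues of $P$; the matrix and the choice of $\rho$ are our illustrative assumptions.

```python
import numpy as np

# omega(x) = 0.5 * x @ P @ x  =>  grad omega(x) = P x
P = np.array([[3.0, 1.0], [1.0, 2.0]])
eigs = np.linalg.eigvalsh(P)
beta, ell = eigs[0], eigs[-1]        # strong convexity / Lipschitz constants
rho = beta / ell**2                  # any rho in (0, 2*beta/ell**2) works
F = lambda x: x - rho * (P @ x)      # the mapping considered in Lemma 13

# Empirical contraction factor over random pairs of points
rng = np.random.default_rng(4)
ratios = [
    np.linalg.norm(F(x) - F(y)) / np.linalg.norm(x - y)
    for x, y in (rng.standard_normal((2, 2)) for _ in range(100))
]
```

Every ratio stays below $\sqrt{1 - \rho(2\beta - \rho\ell^2)} < 1$, the classical contraction bound for mappings of this hybrid steepest-descent type.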
4. An Application
In this section, our Algorithm 4 is applied to find a solution to the LASSO problem with constraints of fractional programming and fixed-point problems. Since the accurate solution of this problem is unknown, one employs an error estimate for the n-th iterate, which allows one to verify whether the suggested algorithm converges to the solution.
First, recall some preliminaries. We consider a mapping
that is found in [27] and has been discussed by numerous scholars in applicable examples (see, e.g., [28]), with
where
D is skew-symmetric and
G is a diagonal matrix whose diagonal entries are nonnegative (hence
M is positive semidefinite), and
q is an element in
. The feasible set
is both closed and convex, and is formulated below
with
and the vector
d being nonnegative. It is not hard to verify that
is both
-strongly monotone and
L-Lipschitz continuous, with
and
.
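Such an M is easy to build and test numerically: the skew-symmetric part contributes nothing to the quadratic form, so positive semidefiniteness, and hence monotonicity of A(x) = Mx + q, comes entirely from G. The dimensions and random seed below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
B = rng.standard_normal((m, m))
D = B - B.T                              # skew-symmetric: D^T = -D
G = np.diag(rng.uniform(0.0, 2.0, m))    # diagonal with nonnegative entries
M = D + G                                # x^T M x = x^T G x >= 0, so M is PSD
q = rng.standard_normal(m)
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)                 # Lipschitz constant of A

# Monotonicity spot-check: <A(x) - A(y), x - y> >= 0 on random pairs
ok = all(
    (A(x) - A(y)) @ (x - y) >= -1e-12
    for x, y in (rng.standard_normal((2, m)) for _ in range(100))
)
```
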
As far as we know, image reconstruction involves various tasks for improving the quality of images. This encompasses, e.g., image deblurring or deconvolution, which aims to repair any blurriness appearing in an image and restore it to a clearer and more visually appealing state. Attempts at image reconstruction date back to the 1950s, and the technique has been exploited in different areas, e.g., consumer photography, image/video decoding, and scientific exploration [29,30]. From a mathematical viewpoint, image reconstruction is usually formulated below:
where
represents the observed image,
denotes the blurring matrix,
denotes the original image, and
is noise. To attain our aim of restoring the optimal valid image
that meets (43), we are devoted to tackling the least squares problem (44) while minimizing the impact of
. By doing so, we can make sure that our technique is efficient for acquiring the desired outcomes. This technique seeks to minimize the squared discrepancy between
v and
, with the goal of improving the reconstruction procedure and strengthening the image quality,
where
is the spectral norm. Various iterative processes can be utilized to approximate the solution of (44). It is noteworthy that (44) poses a challenge because it lies in the category of ill-posed problems. When the number of unknown variables exceeds the number of observations, the solution commonly becomes unstable. This vital issue has been broadly explored and recorded in various works, e.g., [31,32]. Regularization approaches have been suggested to resolve this challenge; in particular, the Tikhonov regularization technique is a crucial method that improves the accuracy and stability of solutions. Via this approach, we can attain our goal of resolving such problems effectively:
where
is a constant, which is termed the regularization parameter, and
is the spectral norm. Besides, the Tikhonov matrix is represented by
, with a default configuration viewing
L as the identity matrix. The least absolute shrinkage and selection operator (LASSO), introduced by Tibshirani [33], is a prominent way to tackle (43). Denoting by
the solution set of the minimization problem below,
we aim at seeking a point
s.t.
Next, in our illustrative instance, we explore and apply Algorithm 4 for tackling the CBOP with constraints of fractional programming and fixed-point problems. We set and with . In this case, the observed images under consideration are blurred ones.
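The LASSO subproblem (45) itself is amenable to the forward-backward machinery recalled earlier. A minimal ISTA-type sketch on small synthetic data (our own stand-in for blurred images; H, v, and λ are illustrative choices):

```python
import numpy as np

def ista_lasso(H, v, lam, iters=3000):
    # Minimize 0.5*||Hu - v||^2 + lam*||u||_1 via forward-backward steps:
    #   u <- prox_{alpha*lam*||.||_1}(u - alpha * H^T(Hu - v))
    alpha = 1.0 / np.linalg.norm(H, 2) ** 2
    u = np.zeros(H.shape[1])
    for _ in range(iters):
        step = u - alpha * (H.T @ (H @ u - v))
        u = np.sign(step) * np.maximum(np.abs(step) - alpha * lam, 0.0)
    return u

# Synthetic sparse signal and noisy observation
rng = np.random.default_rng(2)
H = rng.standard_normal((30, 10))
u_true = np.zeros(10)
u_true[[1, 4]] = [3.0, -2.0]
v = H @ u_true + 0.01 * rng.standard_normal(30)
u_hat = ista_lasso(H, v, lam=0.1)
```

For small λ and low noise, u_hat recovers the support and approximate magnitudes of u_true.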
For convenience, let
. We give the operator
A. Consider the following fractional programming problem:
where
It is easy to verify that
Q is symmetric and positive definite on
, and consequently
g is pseudoconvex on
. Then,
It is easily known that
A is pseudomonotone (see [34] for more details). Now, we give a nonexpansive mapping
defined by
.
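Pseudomonotonicity of such a gradient can be spot-checked numerically. Below we take the common quadratic-fractional form g(x) = (xᵀQx + aᵀx + b)/(cᵀx + d) on a box where the denominator stays positive; the specific Q, a, b, c, d and the box are our illustrative assumptions, not the paper's exact data.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
Q = np.eye(m)                 # symmetric positive definite
a = np.ones(m); b = 1.0
c = np.ones(m); d = 1.0       # denominator c@x + d > 0 on the box [0, 5]^m

def A(x):
    # A = grad g for g(x) = (x@Q@x + a@x + b) / (c@x + d), by the quotient rule
    num = x @ Q @ x + a @ x + b
    den = c @ x + d
    return ((2.0 * Q @ x + a) * den - num * c) / den**2

# Pseudomonotonicity spot-check on the box:
#   <A(x), y - x> >= 0  must imply  <A(y), y - x> >= 0
ok = True
for _ in range(200):
    x, y = rng.uniform(0.0, 5.0, (2, m))
    if A(x) @ (y - x) >= 0 and A(y) @ (y - x) < -1e-8:
        ok = False
```
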
Let starting points
be randomly selected in
. Take
,
, and
As a result, Algorithm 4 is rephrased as follows:
in which
and
are selected as in Algorithm 4 for every
n. Therefore, Theorem 2 guarantees that
is convergent to a solution of the LASSO problem with constraints of the fractional programming problem and the fixed-point problem of
.
In the end, it is worth mentioning that there have been many works that deal with the problem of designing an algorithm to solve (46); see, e.g., [35,36] and the references therein. Moreover, some of them are able to solve the problem globally under non-convexity assumptions.