1. Introduction
Let ℶ be a real HS equipped with an inner product ⟨·,·⟩ and the induced norm ‖·‖, and let A be a non-empty CCS of ℶ. A mapping P: A → A is called an NEM if, for all φ, υ ∈ A,
‖Pφ − Pυ‖ ≤ ‖φ − υ‖.
The set of FPs of the mapping P is denoted by Fix(P); that is, Fix(P) = {φ ∈ A : Pφ = φ}. Mann's algorithm is one of the most successful iteration schemes for approximating FPs of NEMs. The Mann algorithm is written as follows: let φ₀ be an arbitrary point in A and set
φ_{n+1} = (1 − θ_n)φ_n + θ_n Pφ_n for all n ≥ 0, (1)
where {θ_n} is a sequence of non-negative real numbers in [0,1]. Researchers in this direction proved that, under the hypothesis Σ_n θ_n(1 − θ_n) = ∞ and suitable stipulations forced on A and P, the sequence {φ_n} created by (1) converges weakly to an FP of P.
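To make the scheme concrete, here is a minimal numerical sketch of the Mann iteration (1); the operator (the metric projection onto the closed unit ball in ℝ²) and the constant step θ_n = 1/2 are our own illustrative choices, not taken from the paper.

```python
import math

def proj_unit_ball(x):
    """Metric projection of a 2-D point onto the closed unit ball (a nonexpansive
    map whose fixed points are exactly the points of the ball)."""
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def mann(P, x0, steps=200, theta=0.5):
    """Mann iteration (1): x_{n+1} = (1 - theta) x_n + theta P(x_n)."""
    x = tuple(float(c) for c in x0)
    for _ in range(steps):
        px = P(x)
        x = tuple((1.0 - theta) * c + theta * pc for c, pc in zip(x, px))
    return x

x_star = mann(proj_unit_ball, (3.0, 4.0))
```

Starting from (3, 4), the iterates stay on the ray through the starting point, and the scheme converges to the boundary point (0.6, 0.8), a fixed point of the projection.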
Approximating FPs of NEMs has many vital applications. Many problems can be seen as FP problems of NEMs, such as convex optimization problems, image restoration problems, monotone variational inequalities, and convex feasibility problems [1,2,3]. A large number of symmetrical iteration methods (of Mann type) for FP problems of NEMs have been presented in the literature; see, for example, [4,5,6,7,8,9]. Hence, the development of efficient and stable methods built on the Mann algorithm has attracted tremendous interest from researchers, for example, the forward–backward algorithm [10] and the Douglas–Rachford algorithm [11]. All of the symmetric algorithms mentioned above converge only weakly, which is a disadvantage. Despite this defect, they have many applications in infinite-dimensional spaces, such as quantum physics and image reconstruction. Strong convergence is highly desirable because it saves time and computational effort, whereas weak convergence alone is often unsatisfactory. Several authors obtained strong convergence by imposing solid restrictions on the involved mappings: in optimization, they considered the concept of strong convexity; for monotone inclusions, they considered strong monotonicity. Since such assumptions fail in many cases, it was necessary to investigate new effective algorithms. Recently, several mathematicians established the strong convergence of algorithms; see [12,13,14,15].
In 2019, Bot et al. [16] proposed a new form of Mann's algorithm to overcome the deficiency described before, formulated as follows: let φ₀ be an arbitrary point in A and set
φ_{n+1} = (1 − θ_n)β_nφ_n + θ_n P(β_nφ_n) for all n ≥ 0, (2)
Under mild stipulations on {θ_n} and {β_n}, they proved that the iterative sequence generated by (2) is strongly convergent. In addition, they applied (2) to obtain the strong convergence of the forward–backward algorithm for MIPs. The sequence {β_n} in scheme (2) plays an important role in accelerating convergence and is called the Tikhonov regularization sequence; the classical Tikhonov method, on the other hand, generates a sequence by a regularized fixed-point rule whose ingredients are defined in the next section. Many theoretical and numerical results on strong convergence via the Tikhonov regularization technique have been provided; see, for example, [16,17,18,19,20,21].
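The effect of the Tikhonov regularization sequence can be sketched numerically. In the following toy example, the operator, the step θ = 1/2, and the choice β_n = 1 − 1/(n+2) (so that β_n → 1 and Σ(1 − β_n) = ∞) are all our own assumptions in the spirit of scheme (2), not the paper's data.

```python
def proj_line(y):
    """Projection onto the line {x : x[0] = 1}; an NEM whose FP set is the line."""
    return (1.0, y[1])

def tikhonov_mann(P, x0, steps=5000, theta=0.5):
    """Tikhonov-regularized Mann step in the spirit of (2):
    x_{n+1} = (1 - theta) * beta_n * x_n + theta * P(beta_n * x_n)."""
    x = tuple(float(c) for c in x0)
    for n in range(steps):
        beta = 1.0 - 1.0 / (n + 2)
        bx = (beta * x[0], beta * x[1])
        pbx = P(bx)
        x = tuple((1.0 - theta) * b + theta * p for b, p in zip(bx, pbx))
    return x

x_star = tikhonov_mann(proj_line, (5.0, 7.0))
```

Here the FP set is the whole line {x : x₁ = 1}, and the iterates approach its minimal-norm element (1, 0), in line with the strong convergence behavior described above.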
Regularization terms also intervene in many applications through artificial regularization. For various gradient flows, an artificial regularization term has played an essential role in energy stability analysis, for example in epitaxial thin film models with or without slope selection, in a second-order energy stable BDF method for the epitaxial thin film equation with slope selection, and in a third-order exponential time differencing numerical scheme for the no-slope-selection epitaxial thin film model. For more details about this topic, see [22,23,24,25,26,27,28].
Recently, new types of fast algorithms have emerged. One of them, the inertial algorithm, was first proposed by Polyak [29], who presented an inertial extrapolation technique for minimizing a smooth convex function. A common characteristic of inertial algorithms is that the next iterate depends on a combination of the previous two iterates. It should be noted that this slight change significantly improves the effectiveness and performance of these algorithms. After the emergence of this idea, authors introduced further terms into the inertial algorithm with the aim of increasing the acceleration and studying more in-depth applications, such as inertial extragradient algorithms [9,30,31], inertial projection algorithms [15,32,33], inertial Mann algorithms [33,34], and inertial forward–backward splitting algorithms [35,36,37]. All of these algorithms share one symmetrical convergence form, which is stronger and faster than that of their non-inertial counterparts.
According to the above results, this manuscript aims to accelerate the inertial algorithm by introducing a modified inertial Mann-like algorithm, which is used here to study the strong convergence to FPs of NEMs in real HSs. Forward–backward terms are involved in studying the strong convergence to the minimal-norm solution in the set of zeros under the symmetrical conditions. Finally, our algorithm's performance and efficiency are illustrated by numerical comparisons with previous algorithms; our algorithm converges faster, which indicates the success of our method.
2. Preliminaries
In this section, we shall introduce some previous symmetrical results which greatly help in understanding our paper. Throughout this manuscript, the notations ⟶ and ⇒ denote strong convergence and multivalued mappings, respectively, gra(P) denotes the graph of the mapping P, and A is a non-empty CCS of a real HS ℶ.
Lemma 1 ([38]). Let φ, ς, and υ be points in a HS ℶ and ζ ∈ [0,1]. Then
‖ζφ + (1 − ζ)ς − υ‖² = ζ‖φ − υ‖² + (1 − ζ)‖ς − υ‖² − ζ(1 − ζ)‖φ − ς‖².
Lemma 2 ([39]). Let ℶ be a real HS. Then, for each φ, υ ∈ ℶ,
(i) ‖φ + υ‖² ≤ ‖φ‖² + 2⟨υ, φ + υ⟩;
(ii) ‖φ + υ‖² = ‖φ‖² + 2⟨φ, υ⟩ + ‖υ‖².
Definition 1. Let ζ ∈ (0,1) be fixed. We say that a mapping V: A → A is ζ-averaged if V can be written as V = (1 − ζ)I + ζP, where I is the identity mapping and P: A → A is an NEM.
It is easy to prove that averaged operators are also NE.
Definition 2. Assume that Γ: ℶ ⇒ ℶ is a multi-valued operator and its graph is described as gra(Γ) = {(φ, υ) ∈ ℶ × ℶ : υ ∈ Γφ}. The operator Γ is called:
- (1)
monotone, if ⟨φ − ς, φ̄ − ς̄⟩ ≥ 0 for all (φ, φ̄), (ς, ς̄) ∈ gra(Γ);
- (2)
MM, if it is monotone and its graph is not a proper subset of the graph of any other monotone mapping;
- (3)
κ-ISM, if for all φ, ς ∈ ℶ there is a constant κ > 0 so that ⟨φ − ς, Γφ − Γς⟩ ≥ κ‖Γφ − Γς‖².
The resolvent of Γ is described by J_λ^Γ = (I + λΓ)⁻¹, λ > 0, and the reflected resolvent of Γ is defined by R_λ^Γ = 2J_λ^Γ − I. The mapping J_λ^Γ is single-valued, maximally monotone, and NE if Γ is maximally monotone [1]. In addition, 0 ∈ Γφ iff φ ∈ Fix(J_λ^Γ). If Γ is κ-ISM with λ ∈ (0, 2κ), then I − λΓ is averaged.
We have to remember that, if the function r: ℶ → (−∞, +∞] is convex, proper, and lower-semicontinuous, then the subdifferential of r is described by
∂r(φ) = {υ ∈ ℶ : r(ς) ≥ r(φ) + ⟨υ, ς − φ⟩ for all ς ∈ ℶ}
for φ ∈ ℶ with r(φ) < +∞. We define the proximal operator of r as
prox_r(φ) = argmin_{υ ∈ ℶ} ( r(υ) + ½‖υ − φ‖² ).
It is worth mentioning that prox_{λr} = J_λ^{∂r}; i.e., the proximal operator of r and the resolvent of ∂r [1] are symmetric. The indicator function ι_A of a non-empty closed and convex set A is given by ι_A(φ) = 0 if φ ∈ A and ι_A(φ) = +∞ otherwise. By the theorem of Baillon–Haddad ([1], Corollary 18.16), ∇r is a (1/ϱ)-ISM operator provided that the function r is Fréchet differentiable with a ϱ-Lipschitz gradient.
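As a concrete check of the proximal operator definition above (the example r(φ) = |φ| and the grid search are ours, not the paper's): for the absolute value, prox has the well-known soft-thresholding closed form, which can be verified against a brute-force minimization of the defining objective.

```python
def prox_abs(x, lam=1.0):
    """Closed form of prox_{lam * |.|}(x): soft-thresholding."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def prox_numeric(x, lam=1.0, half_width=3.0, samples=120001):
    """Brute-force minimizer of u -> |u| + (u - x)^2 / (2 lam) on a fine grid."""
    best_u, best_val = None, float("inf")
    for k in range(samples):
        u = (x - half_width) + 2.0 * half_width * k / (samples - 1)
        val = abs(u) + (u - x) ** 2 / (2.0 * lam)
        if val < best_val:
            best_u, best_val = u, val
    return best_u
```

Both routes give the same answer up to the grid resolution, e.g. prox_{|·|}(2.5) = 1.5.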
Lemma 3 (Demi-closedness property). Assume that A is a non-empty CCS of a real HS ℶ. Let P: A → A be an NEM, {φ_n} be a sequence in A, and φ ∈ ℶ so that {φ_n} converges weakly to φ and ‖φ_n − Pφ_n‖ ⟶ 0 as n ⟶ ∞; then φ ∈ Fix(P).
Lemma 4 ([40]). Assume that {a_n} is a sequence of non-negative real numbers so that
a_{n+1} ≤ (1 − θ_n)a_n + θ_n b_n, n ≥ 0,
where {θ_n} is a sequence in (0,1) and {b_n} is a real sequence. Then the assertions below hold:
- (1)
For some Λ ≥ 0, if b_n ≤ Λ for all n ≥ 0, then the sequence {a_n} is bounded;
- (2)
If Σ_n θ_n = ∞ and lim sup_{n→∞} b_n ≤ 0, then lim_{n→∞} a_n = 0.
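Lemma 4 can be observed numerically with sequences of our own choosing: taking θ_n = 1/(n+2) (so Σθ_n = ∞) and b_n = 1/(n+1) (so lim sup b_n ≤ 0), the recursion drives a_n to 0, as item (2) predicts.

```python
def lemma4_iterate(a0=10.0, steps=100000):
    """Iterate a_{n+1} = (1 - t_n) a_n + t_n b_n with t_n = 1/(n+2) (divergent sum)
    and b_n = 1/(n+1) (limsup <= 0); Lemma 4 (2) then gives a_n -> 0."""
    a = a0
    for n in range(steps):
        t = 1.0 / (n + 2)
        a = (1.0 - t) * a + t * (1.0 / (n + 1))
    return a
```

After many steps the iterate is close to zero even though a₀ was large.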
3. The Strong Convergence of a Modified Inertial Mann-Like Algorithm
In this part, we shall discuss the strong convergence for our proposed algorithm under mild conditions.
Theorem 1. Let A be a CCS of a real HS ℶ and P: A → A be an NE mapping so that Fix(P) ≠ ∅. In addition, suppose the hypotheses below are fulfilled:
- (i)
[conditions on the parameter sequences of scheme (3)];
- (ii)
[further conditions on the parameter sequences of scheme (3)].
Set the initial points arbitrarily and define the sequence {φ_n} by the iterative scheme (3). Then the sequence {φ_n} defined by algorithm (3) converges to an FP of P in norm. Proof. We split the proof into the following steps:
Step 1. Show that {φ_n} is bounded. Indeed, for any FP of P, as P is NE and by the definition of the scheme, we have (4)–(6). From (5) and (6) in (4), and using hypothesis (ii), we get (7). By the fact that the limit involved exists, inequality (7) reduces to (8). By Lemma 4, we conclude that {φ_n} is bounded.
On the other side, by the definition of the scheme, we have (9). Therefore, by the boundedness of {φ_n} and hypothesis (ii), we see that the sequence in (9) is bounded, and thus the remaining iterates are bounded too. From (3), one has (10) and (11). Using (4) and Lemma 1 and Lemma 2 (i), we have (12). Applying (12) in (11) and rearranging, one can obtain (13).
From (8), (10), and (13) and Lemma 2 (i), we have (14). Since the sequences involved are bounded, there is a constant so that the corresponding terms are uniformly controlled; then inequality (14) reduces to (15).
Step 2. Prove that {φ_n} converges strongly. In order to reach this, we will discuss the two cases below:
Case 1. If the sequence considered is monotonically decreasing, then it is convergent; therefore, together with (15) and hypotheses (i) and (ii), we obtain (16). On the other hand, it is easy to see that (17) and (18) hold. It follows from (17) and (18) and the triangle inequality that (19) holds. In the same manner, using (17) and (19), one can obtain (20). Applying (16) and (20), we get (21); in addition, applying (16) and (21), we have (22). Going back to Equation (9) and using (20), we can write (23) and (24). Combining (22)–(24), one sees that (25) holds.
Using the fact that I − P is demi-closed (Lemma 3), the sequence {φ_n} converges weakly to an FP of P. It remains to prove that {φ_n} converges strongly to an FP of P. In view of (14), we have (26). By hypotheses (i) and (ii), we deduce (27). Applying (20) and (27) in (26) after taking the limit as n ⟶ ∞, we get (28). Thus, by Lemma 4, we conclude that {φ_n} converges strongly to the desired FP.
Case 2. Assume that the sequence {Γ_n} considered in Case 1 is not monotonically decreasing. Suppose that τ: ℕ → ℕ is the mapping defined by
τ(n) = max{k ∈ ℕ : k ≤ n, Γ_k ≤ Γ_{k+1}}
for all n ≥ n₀ (for some large enough n₀). It is clear that τ is non-decreasing with τ(n) ⟶ ∞ as n ⟶ ∞ and Γ_{τ(n)} ≤ Γ_{τ(n)+1} for all n ≥ n₀. By applying (15), we obtain an estimate which leads to the analogue of (16) along {τ(n)} as n ⟶ ∞.
In the same manner as (17)–(27) in Case 1, we get directly that {φ_{τ(n)}} converges weakly to an FP of P as n ⟶ ∞. Using (26) for τ(n), we get the corresponding estimate; in another form, we have an inequality which leads, by hypotheses (i), (ii), and (28), to Γ_{τ(n)} ⟶ 0 and therefore Γ_{τ(n)+1} ⟶ 0. Further, for any n ≥ n₀, it can be easily obtained that Γ_n ≤ Γ_{τ(n)+1} if n ≠ τ(n) (that is, τ(n) < n), because Γ_j > Γ_{j+1} for τ(n) + 1 ≤ j ≤ n. Based on this, we find that 0 ≤ Γ_n ≤ Γ_{τ(n)+1} for each n ≥ n₀. Consequently, we conclude that Γ_n ⟶ 0; that is, {φ_n} converges to the desired FP in norm. This finishes the proof. □
Remark 1. We have the discussions below about algorithm (3):
- (1)
For particular choices of the parameter sequences in algorithm (3), we recover the exciting results of Bot et al. [16].
- (2)
The Tikhonov regularization sequences in algorithm (3) can be chosen in a standard way.
- (3)
Algorithm (3) reduces to the inertial Mann algorithm introduced by Maingé [34] for a suitable choice of the parameters.
- (4)
Hypothesis (ii) of our theorem is easy to verify and not complicated, because the value involved is known before choosing the inertial parameter, so it plays an important role in the numerical discussions. For special options, the inertial parameter in the proposed algorithm can be chosen following the extrapolated step introduced by Beck and Teboulle [41].
- (5)
For another particular choice of the parameters in our algorithm, we recover the results of Tan and Cho [42].
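Remark 1 (4) mentions the extrapolated step of Beck and Teboulle [41]. A common concrete instance of that rule (our sketch; the paper's exact parameter choice may differ) generates t_{k+1} = (1 + √(1 + 4t_k²))/2 and the inertial weight ζ_k = (t_k − 1)/t_{k+1} ∈ [0, 1).

```python
import math

def beck_teboulle_weights(n_steps, t0=1.0):
    """t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2; inertial weight (t_k - 1) / t_{k+1}."""
    t, weights = t0, []
    for _ in range(n_steps):
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        weights.append((t - 1.0) / t_next)
        t = t_next
    return weights

zetas = beck_teboulle_weights(50)
```

The weights start at 0 and increase toward 1, which is what gives the extrapolation its accelerating effect.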
The result below is very important in the next section, where it has a prominent role in obtaining the strong convergence by the forward–backward method equipped with the Tikhonov regularization term.
Corollary 1. Assume that A is a non-empty CCS of a real HS ℶ and V: A → A is a ζ-averaged mapping, where V = (1 − ζ)I + ζP with Fix(V) ≠ ∅. Suppose that the assumptions below hold:
- (C1)
[conditions on the parameter sequences of scheme (29)];
- (C2)
[further conditions on the parameter sequences of scheme (29)].
Set the initial points arbitrarily and define the sequence {φ_n} by scheme (29). Then the iterative sequence generated by (29) converges to an FP of V in norm. Proof. Algorithm (29) is equivalent to algorithm (3) with the equation V = (1 − ζ)I + ζP. As V is a ζ-averaged mapping, the mapping P is NE. Therefore, Fix(V) = Fix(P), and the result follows immediately by Theorem 1. □
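To illustrate how an averaged mapping of the kind in Corollary 1 arises from the forward–backward method (all problem data below, a diagonal least-squares term plus an ℓ₁ penalty, are our own toy choices): for γ ∈ (0, 2/L) the composition prox_{γλ‖·‖₁} ∘ (I − γ∇f) is averaged, and its iterates converge to a minimizer.

```python
def soft(x, tau):
    """Soft-thresholding, the proximal operator of tau * |.|."""
    return max(x - tau, 0.0) if x >= 0.0 else min(x + tau, 0.0)

def forward_backward(steps=2000, lam=0.1):
    """Forward-backward iteration for min_x 0.5 ||A x - b||^2 + lam ||x||_1
    with diagonal A = diag(1, 2) and b = (1, 0.5); each step applies the
    averaged operator prox_{gamma lam |.|} o (I - gamma grad) coordinatewise."""
    a = (1.0, 2.0)                   # diagonal entries of A
    b = (1.0, 0.5)
    L = max(ai * ai for ai in a)     # Lipschitz constant of the gradient
    gamma = 1.0 / L                  # gamma in (0, 2/L)
    x = [0.0, 0.0]
    for _ in range(steps):
        for i in range(2):
            grad_i = a[i] * (a[i] * x[i] - b[i])
            x[i] = soft(x[i] - gamma * grad_i, gamma * lam)
    return tuple(x)

x_star = forward_backward()
```

For this data the limit can be checked by hand from the first-order optimality conditions: x* = (0.9, 0.225).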