Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces
In each iteration, projection methods require computing at least one projection onto the closed convex set. However, projections onto a general closed convex set are not easily executed, a fact that might affect the efficiency and applicability of projection methods. To overcome this drawback, we propose two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in the setting of Banach spaces. The core of our algorithms is to replace every projection onto the closed convex set with a projection onto some half-space, which guarantees the easy implementation of our proposed methods. Moreover, the step size of each algorithm is self-adaptive. We prove strong convergence theorems without knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples to demonstrate the efficiency of our algorithms in comparison with some recent iterative methods.
Let $E$ be a real Banach space with norm $\|\cdot\|$ and let $E^*$ be the dual of $E$. For $f \in E^*$ and $x \in E$, let $\langle x, f \rangle$ be the value of $f$ at $x$. Suppose that $C$ is a nonempty, closed, and convex subset of $E$. The Variational Inequality Problem (VIP) associated with $C$ and $A$ is formulated as follows: find a point $x^* \in C$ such that
$$\langle y - x^*, A x^* \rangle \ge 0 \quad \text{for all } y \in C,\tag{1}$$
where $A : C \to E^*$ is a single-valued mapping. We denote the solution set of VIP (1) by $VI(C, A)$.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. A mapping $P_C : H \to C$ is said to be the metric projection of $H$ onto $C$ if, for each $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\| \quad \text{for all } y \in C.$$
The variational inequality was first introduced independently by Fichera [1] and Stampacchia [2]. The VIP is a useful mathematical model that unifies many important concepts in applied mathematics, such as necessary optimality conditions, complementarity problems, network equilibrium problems, and systems of nonlinear equations (see [3,4,5,6,7]). Many authors have proposed and analysed several iterative algorithms for solving the VIP (1) and related optimization problems; see [8,9,10,11,12,13,14,15,16,17,18,19,20,21] and the references therein. One of the most famous of these methods is the extragradient method (EgM) proposed by Korpelevich [22], which is presented in Algorithm 1, as follows:
Algorithm 1: Extragradient Method (EgM)
$$y_n = P_C(x_n - \tau A x_n), \qquad x_{n+1} = P_C(x_n - \tau A y_n),$$
where $A : C \to H$ is a monotone and $L$-Lipschitz continuous operator and $\tau \in (0, \frac{1}{L})$. If the solution set $VI(C, A)$ is nonempty, then the sequence $\{x_n\}$ generated by the EgM converges to an element of $VI(C, A)$. The EgM was further extended to infinite dimensional spaces by many authors; see, for instance, [11,23,24]. Observe that the EgM involves two projections onto the closed convex set $C$ and two evaluations of $A$ per iteration. Computing a projection onto an arbitrary closed convex set is a difficult task, a drawback that may affect the efficiency of the EgM, as mentioned in [25]. Hence, a major improvement on the EgM is to minimize the number of evaluations of $P_C$ per iteration. Censor et al. [25] initiated an attempt in this direction, modifying the EgM by replacing the second projection with a projection onto a half-space. This new method involves only one projection onto $C$ and is called the subgradient extragradient method (SEgM). The algorithm is given in Algorithm 2, as follows:
Algorithm 2: Subgradient Extragradient Method (SEgM)
$$y_n = P_C(x_n - \tau A x_n), \qquad x_{n+1} = P_{T_n}(x_n - \tau A y_n),$$
where $T_n = \{w \in H : \langle x_n - \tau A x_n - y_n, w - y_n \rangle \le 0\}$ is a half-space containing $C$.
Censor et al. [25] showed that, if the solution set $VI(C, A)$ is nonempty, the sequence generated by the SEgM converges weakly to an element of $VI(C, A)$.
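To make the mechanics concrete, the SEgM can be sketched numerically in a finite-dimensional Hilbert space. The fragment below is illustrative only: the operator, feasible set, step size, and function names are our own choices, not those of [25]; it uses the closed-form projection onto the half-space $T_n$.

```python
import numpy as np

def subgradient_extragradient(A, proj_C, x0, tau, tol=1e-10, max_iter=10_000):
    """Subgradient extragradient sketch in R^n.

    A      : monotone, L-Lipschitz operator R^n -> R^n
    proj_C : metric projection onto the feasible set C
    tau    : step size in (0, 1/L)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A(x)
        y = proj_C(x - tau * Ax)        # the single projection onto C
        # Half-space T = {w : <x - tau*A(x) - y, w - y> <= 0} contains C,
        # and projecting onto it has a closed form.
        d = x - tau * Ax - y
        z = x - tau * A(y)
        nd = d @ d
        if nd > 0:
            z -= max(0.0, (d @ (z - y)) / nd) * d
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x

# Illustration: A(x) = Mx with M positive definite (monotone, Lipschitz),
# and C the nonnegative orthant; the unique VIP solution is the origin.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
A = lambda x: M @ x
proj_C = lambda x: np.maximum(x, 0.0)
L = np.linalg.norm(M, 2)
sol = subgradient_extragradient(A, proj_C, np.array([1.0, 2.0]), tau=0.9 / L)
```

In a general Hilbert space the SEgM guarantees only weak convergence, which is one motivation for the strongly convergent modifications studied in this paper.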
Bauschke and Combettes [26] pointed out that, in solving optimization problems, strong convergence of iterative schemes is more desirable than weak convergence. Hence the need to develop algorithms that generate strongly convergent sequences.
Let $S : C \to C$ be a nonlinear mapping. A point $x \in C$ is called a fixed point of $S$ if $Sx = x$. We denote by $F(S)$ the set of all fixed points of $S$, i.e., $F(S) = \{x \in C : Sx = x\}$.
If $S$ is a multivalued mapping, i.e., $S : C \to 2^C$, then $x \in C$ is called a fixed point of $S$ if $x \in Sx$.
Here, we consider the fixed point problem for an infinite family of multivalued relatively nonexpansive mappings. The fixed point theory for multivalued mappings can be utilized in various areas, such as game theory, control theory, mathematical economics, and differential inclusion (see [13,27,28] and the references therein). The existence of common fixed points for a family of mappings has been considered by many authors (see [13,28,29,30] and the references therein). Many optimization problems can be formulated as finding a point in the intersection of the fixed point sets of a family of nonlinear mappings. For instance, the well-known convex feasibility problem reduces to finding a point in the intersection of the fixed point sets of a family of nonexpansive mappings (see [31,32]). The problem of finding an optimal point that minimizes a given cost function over the common set of fixed points of a family of nonlinear mappings is of wide interdisciplinary interest and practical importance (see [33,34]). A simple algorithmic solution to the problem of minimizing a quadratic function over the common set of fixed points of a family of nonexpansive mappings is of extreme value in many applications including set theoretic signal estimation (see [34,35]).
In this article, we are interested in studying the problem of finding a common solution of both the VIP (1) and the common fixed point problem for multivalued mappings. The motivation for studying such problems lies in their potential application to mathematical models whose constraints can be expressed as fixed point problems and VIPs. This happens, in particular, in practical problems such as signal processing, network resource allocation, and image recovery. One scenario is the network bandwidth allocation problem for two services in heterogeneous wireless access networks, in which the bandwidths of the services are mathematically related (see, for instance, [36,37,38,39] and the references therein).
Observe that all of the above results on the extragradient method and the subgradient extragradient method are confined to Hilbert spaces. However, many important problems arising in practice are generally defined in Banach spaces. For instance, Zhang et al. [40] pointed out that, in machine learning, Banach spaces possess much richer geometric structures, which are potentially useful for developing learning algorithms; this is due to the fact that any two Hilbert spaces of the same dimension are isometrically isomorphic. Der and Lee [41] also pointed out that most data in machine learning do not come with any natural notion of distance that can be induced from an inner product. Zhang et al. [40] further argued that most data come with intrinsic structures that make it impossible to embed them into a Hilbert space. Hence, it is desirable to propose an iterative algorithm for finding a solution of VIP (1) in Banach spaces.
Very recently, Liu [42] extended the subgradient extragradient method from Hilbert spaces to Banach spaces and proposed the following strong convergence theorem.
Theorem 1.
Let $E$ be a 2-uniformly convex and uniformly smooth Banach space with 2-uniform convexity constant $\frac{1}{c}$, and let $C$ be a nonempty closed convex subset of $E$. Let $T : C \to C$ be a relatively nonexpansive mapping and $A$ a monotone and $L$-Lipschitz mapping on $C$. Let $\{\alpha_n\}$ be a real number sequence satisfying suitable control conditions. Suppose that $F(T) \cap VI(C, A)$ is nonempty. Let $\{x_n\}$ be a sequence generated by Algorithm 3, as follows:
Algorithm 3:
where the step sizes and parameters are chosen as specified in [42]. Then $\{x_n\}$ converges strongly to an element of $F(T) \cap VI(C, A)$.
We notice that Algorithm 3 makes one projection onto the closed convex set $C$. As pointed out earlier, computing a projection onto an arbitrary closed convex set is a difficult task. Another shortcoming of Algorithm 3, the EgM, and the SEgM is that the step size is defined by a constant (or a sequence) that depends on the Lipschitz constant of the monotone operator. The Lipschitz constant is typically assumed to be known, or at least estimated a priori. However, in many cases, this parameter is unknown or difficult to estimate. Moreover, the step size defined by the constant is often very small and slows down the convergence rate. In practice, a larger step size can often be used and yields better numerical results. To overcome these drawbacks, in this article, motivated by the cited works and the ongoing research in this area, we propose some relaxed subgradient extragradient methods with self-adaptive step size for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in the setting of Banach spaces. In our algorithms, the two projections are made onto half-spaces, which guarantees the easy implementation of the proposed methods. Moreover, the step size can be selected adaptively. We prove strong convergence theorems without knowledge of the Lipschitz constant of the monotone operator, and we apply our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present some numerical examples to demonstrate the efficiency of our algorithms.
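The relaxation idea is simple to state in a Hilbert space: when the feasible set is a level set $C = \{w : g(w) \le 0\}$ of a convex function $g$, a subgradient of $g$ at the current iterate yields a half-space containing $C$, and projecting onto a half-space has a closed form. The sketch below is illustrative (the function names and the choice of $g$ are ours):

```python
import numpy as np

def relaxed_halfspace_projection(x, anchor, g, subgrad):
    """Project x onto the half-space
        {w : g(anchor) + <xi, w - anchor> <= 0},  xi a subgradient of g
    at 'anchor', which contains C = {w : g(w) <= 0} by convexity of g.
    ('anchor' plays the role of the current iterate.)"""
    xi = subgrad(anchor)
    val = g(anchor) + xi @ (x - anchor)
    if val <= 0:
        return x                        # x already lies in the half-space
    return x - (val / (xi @ xi)) * xi   # closed-form half-space projection

# Illustration: C is the unit ball, g(x) = ||x||^2 - 1 with gradient 2x.
g = lambda x: x @ x - 1.0
subgrad = lambda x: 2.0 * x
anchor = np.array([2.0, 0.0])           # current iterate, outside C
p = relaxed_halfspace_projection(np.array([3.0, 0.0]), anchor, g, subgrad)
# p = [1.25, 0.0]: the point is pulled onto the separating half-space.
```

Because $g$ is convex, $g(w) \ge g(x_n) + \langle \xi, w - x_n \rangle$ for every $w$, so all of $C$ satisfies the half-space inequality and the relaxed projection never cuts off the feasible set.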
2. Preliminaries
In what follows, we let $\mathbb{N}$ and $\mathbb{R}$ be the set of positive integers and the set of real numbers, respectively. Let $E$ be a Banach space, let $E^*$ be the dual space of $E$, and let $\langle \cdot, \cdot \rangle$ denote the duality pairing of $E$ and $E^*$. When $\{x_n\}$ is a sequence in $E$, we denote the strong convergence of $\{x_n\}$ to $x \in E$ by $x_n \to x$ and the weak convergence by $x_n \rightharpoonup x$. An element $p \in E$ is called a weak cluster point of $\{x_n\}$ if there exists a subsequence of $\{x_n\}$ converging weakly to $p$. We write $\omega_w(x_n)$ to indicate the set of all weak cluster points of $\{x_n\}$.
Next, we present some definitions and results that are employed in our subsequent analysis.
Definition 1.
A function $g : E \to \mathbb{R}$ is said to be weakly lower semicontinuous (w-lsc) at $x$ if
$$g(x) \le \liminf_{n \to \infty} g(x_n)$$
holds for an arbitrary sequence $\{x_n\}$ in $E$ satisfying $x_n \rightharpoonup x$.
Let $g : E \to \mathbb{R}$ be a convex function. The subdifferential of $g$ at $x$ is defined by
$$\partial g(x) = \{\xi \in E^* : g(y) \ge g(x) + \langle y - x, \xi \rangle \ \text{for all } y \in E\}.$$
If $\partial g(x) \neq \emptyset$, then we say that $g$ is subdifferentiable at $x$.
Definition 2.
An operator $A : C \to E^*$ is said to be
(i) monotone if $\langle x - y, Ax - Ay \rangle \ge 0$ for all $x, y \in C$;
(ii) α-inverse-strongly-monotone if there exists a positive real number α such that $\langle x - y, Ax - Ay \rangle \ge \alpha \|Ax - Ay\|^2$ for all $x, y \in C$;
(iii) L-Lipschitz continuous if there exists a constant $L > 0$ such that $\|Ax - Ay\| \le L\|x - y\|$ for all $x, y \in C$.
Clearly, an α-inverse-strongly-monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. However, the converse is not always true.
A Banach space $E$ is said to be strictly convex if, for all $x, y \in E$ such that $\|x\| = \|y\| = 1$ and $x \neq y$, we have $\left\|\frac{x + y}{2}\right\| < 1$. $E$ is said to be uniformly convex if, for each $\epsilon \in (0, 2]$, there exists $\delta > 0$ such that, for all $x, y \in E$, $\|x\| = \|y\| = 1$ and $\|x - y\| \ge \epsilon$ imply $\left\|\frac{x + y}{2}\right\| \le 1 - \delta$. It is well known that a uniformly convex Banach space is strictly convex and reflexive. The modulus of convexity of $E$ is defined as
$$\delta_E(\epsilon) = \inf\left\{1 - \left\|\frac{x + y}{2}\right\| : \|x\| = \|y\| = 1, \ \|x - y\| \ge \epsilon\right\}.$$
Then $E$ is uniformly convex if and only if $\delta_E(\epsilon) > 0$ for all $\epsilon \in (0, 2]$. For $p \ge 2$, $E$ is said to be $p$-uniformly convex if there exists a constant $c > 0$ such that $\delta_E(\epsilon) \ge c\epsilon^p$ for all $\epsilon \in (0, 2]$. It is easy to see that a $p$-uniformly convex Banach space is uniformly convex. In particular, a Hilbert space is 2-uniformly convex.
A Banach space $E$ is said to be smooth if the limit
$$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$$
exists for all $x, y \in S(E)$, where $S(E) = \{x \in E : \|x\| = 1\}$. Moreover, if this limit is attained uniformly for $x, y \in S(E)$, then $E$ is said to be uniformly smooth. It is obvious that a uniformly smooth space is smooth. In particular, a Hilbert space is uniformly smooth.
For $p > 1$, the generalized duality mapping $J_p : E \to 2^{E^*}$ is defined as follows:
$$J_p(x) = \{f \in E^* : \langle x, f \rangle = \|x\|^p, \ \|f\| = \|x\|^{p-1}\}.$$
In particular, $J = J_2$ is called the normalized duality mapping. If $E = H$, where $H$ is a real Hilbert space, then $J = I$, the identity mapping. The normalized duality mapping $J$ has the following properties (see [43]):
(1) if E is smooth, then J is single-valued;
(2) if E is strictly convex, then J is one-to-one and strictly monotone;
(3) if E is reflexive, then J is surjective; and,
(4) if E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of $E$.
Let $E$ be a smooth Banach space. The Lyapunov functional $\phi : E \times E \to \mathbb{R}$ (see [44]) is defined by
$$\phi(x, y) = \|x\|^2 - 2\langle x, Jy \rangle + \|y\|^2 \quad \text{for all } x, y \in E.\tag{2}$$
From the definition, it is easy to see that $\phi(x, y) \ge 0$ for every $x, y \in E$. If $E$ is strictly convex, then $\phi(x, y) = 0$ if and only if $x = y$. If $E$ is a Hilbert space, it is easy to see that $\phi(x, y) = \|x - y\|^2$ for all $x, y \in E$. Moreover, for every $x, y, z \in E$ and $\alpha \in (0, 1)$, the Lyapunov functional satisfies the following properties:
(P1) $(\|x\| - \|y\|)^2 \le \phi(x, y) \le (\|x\| + \|y\|)^2$;
(P2) $\phi(x, J^{-1}(\alpha Jy + (1 - \alpha)Jz)) \le \alpha\phi(x, y) + (1 - \alpha)\phi(x, z)$;
(P3) $\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle z - x, Jy - Jz \rangle$;
(P4) $\phi(x, y) \le \|x\|\|Jx - Jy\| + \|y - x\|\|y\|$;
(P5) $\phi(x, y) + \phi(y, x) = 2\langle x - y, Jx - Jy \rangle$.
Next, we define the functional $V : E \times E^* \to \mathbb{R}$ by
$$V(x, x^*) = \|x\|^2 - 2\langle x, x^* \rangle + \|x^*\|^2 \quad \text{for all } x \in E \ \text{and} \ x^* \in E^*.$$
It can be deduced from (2) that $V$ is non-negative and $V(x, x^*) = \phi(x, J^{-1}x^*)$ for all $x \in E$ and $x^* \in E^*$.
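In a real Hilbert space the normalized duality mapping is the identity, so both $\phi$ and $V$ collapse to the squared distance. A quick numerical sanity check in $\mathbb{R}^n$ (purely illustrative):

```python
import numpy as np

def phi(x, y):
    """Lyapunov functional phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2,
    written in a real Hilbert space where J is the identity."""
    return x @ x - 2.0 * (x @ y) + y @ y

def V(x, xstar):
    """V(x, x*) = ||x||^2 - 2<x, x*> + ||x*||^2; in a Hilbert space
    V(x, x*) = phi(x, x*) because J^{-1} is also the identity."""
    return x @ x - 2.0 * (x @ xstar) + xstar @ xstar

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
gap = abs(phi(x, y) - np.linalg.norm(x - y) ** 2)  # phi collapses to ||x-y||^2
```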
We have the following result in a reflexive strictly convex and smooth Banach space.
Lemma 1.
([45]) Let $E$ be a reflexive, strictly convex, and smooth Banach space with $E^*$ as its dual. Then
$$V(x, x^*) + 2\langle J^{-1}x^* - x, y^* \rangle \le V(x, x^* + y^*)$$
for all $x \in E$ and $x^*, y^* \in E^*$.
Definition 3.
Let $C$ be a nonempty closed convex subset of a real Banach space $E$. A point $p \in C$ is called an asymptotic fixed point (see [46]) of $T$ if $C$ contains a sequence $\{x_n\}$ which converges weakly to $p$ such that $\lim_{n \to \infty}\|x_n - Tx_n\| = 0$. We denote the set of asymptotic fixed points of $T$ by $\hat{F}(T)$. A mapping $T : C \to C$ is said to be:
relatively nonexpansive if $\hat{F}(T) = F(T) \neq \emptyset$ and $\phi(p, Tx) \le \phi(p, x)$ for all $x \in C$ and $p \in F(T)$;
generalized nonspreading [48] if there are $\alpha, \beta, \gamma, \delta \in \mathbb{R}$ such that
$$\alpha\phi(Tx, Ty) + (1 - \alpha)\phi(x, Ty) + \gamma[\phi(Ty, Tx) - \phi(Ty, x)] \le \beta\phi(Tx, y) + (1 - \beta)\phi(x, y) + \delta[\phi(y, Tx) - \phi(y, x)]$$
for all $x, y \in C$.
The following result shows the relationship between generalized nonspreading mappings and relatively nonexpansive mappings.
Lemma 2.
([48]) Let $E$ be a strictly convex Banach space with a uniformly Gâteaux differentiable norm, $C$ a nonempty closed convex subset of $E$, and $T$ a generalized nonspreading mapping of $C$ into itself such that $F(T) \neq \emptyset$. Then $T$ is relatively nonexpansive.
Let $N(C)$ and $CB(C)$ denote the family of nonempty subsets and the family of nonempty closed bounded subsets of $C$, respectively. The Hausdorff metric $H$ on $CB(C)$ is defined by
$$H(A_1, A_2) = \max\left\{\sup_{x \in A_1} d(x, A_2), \ \sup_{y \in A_2} d(y, A_1)\right\}$$
for all $A_1, A_2 \in CB(C)$, where $d(x, A) = \inf\{\|x - y\| : y \in A\}$.
Let $T : C \to 2^C$ be a multivalued mapping. An element $p \in C$ is called a fixed point of $T$ if $p \in Tp$. A point $p \in C$ is called an asymptotic fixed point of $T$ if there exists a sequence $\{x_n\}$ in $C$, which converges weakly to $p$, such that $\lim_{n \to \infty} d(x_n, Tx_n) = 0$. We denote the set of asymptotic fixed points of $T$ by $\hat{F}(T)$.
A multivalued mapping $T : C \to CB(C)$ is said to be relatively nonexpansive if:
(i) $F(T) \neq \emptyset$;
(ii) $\phi(p, z) \le \phi(p, x)$ for all $z \in Tx$, $x \in C$, and $p \in F(T)$; and,
(iii) $\hat{F}(T) = F(T)$.
The class of relatively nonexpansive multivalued mappings contains the class of relatively nonexpansive single-valued mappings.
Remark 1.
(See [49]) Let $E$ be a strictly convex and smooth Banach space, and $C$ a nonempty closed convex subset of $E$. Suppose $T : C \to CB(C)$ is a relatively nonexpansive multivalued mapping. If $p \in F(T)$, then $Tp = \{p\}$.
Lemma 3.
([49]) Let $E$ be a strictly convex and smooth Banach space and $C$ be a nonempty closed convex subset of $E$. Let $T : C \to CB(C)$ be a relatively nonexpansive multivalued mapping. Then $F(T)$ is closed and convex.
Lemma 4.
([50]) Let $E$ be a smooth and uniformly convex Banach space, and let $\{x_n\}$ and $\{y_n\}$ be sequences in $E$ such that either $\{x_n\}$ or $\{y_n\}$ is bounded. If $\phi(x_n, y_n) \to 0$ as $n \to \infty$, then $\|x_n - y_n\| \to 0$ as $n \to \infty$.
Remark 2.
From Property (P4) of the Lyapunov functional, it follows that the converse of Lemma 4 also holds if the sequences $\{x_n\}$ and $\{y_n\}$ are bounded (see [51]).
Let $C$ be a nonempty closed convex subset of a smooth, strictly convex, and reflexive Banach space $E$. Then the generalized projection $\Pi_C : E \to C$, defined by
$$\Pi_C(x) = \underset{y \in C}{\arg\min}\ \phi(y, x), \quad x \in E,$$
is the unique minimizer of the Lyapunov functional. It is known that, if $E$ is a real Hilbert space, then $\Pi_C$ coincides with the metric projection $P_C$. The following results relating to the generalized projection are well known.
Lemma 5.
([52]) Let $C$ be a nonempty closed convex subset of a reflexive, strictly convex, and smooth Banach space $E$. Given $x \in E$ and $z \in C$, $z = \Pi_C x$ implies
$$\phi(y, z) + \phi(z, x) \le \phi(y, x) \quad \text{for all } y \in C.$$
Lemma 6.
([52,53]) Let $C$ be a nonempty closed and convex subset of a smooth Banach space $E$, $x \in E$, and $z \in C$. Then $z = \Pi_C x$ if and only if
$$\langle y - z, Jx - Jz \rangle \le 0 \quad \text{for all } y \in C.$$
Lemma 7.
([52]) Let $p$ be a real number with $p \ge 2$. Then $E$ is $p$-uniformly convex if and only if there exists $c > 0$ such that
$$\left\|\frac{x + y}{2}\right\|^p \le \frac{1}{2}\|x\|^p + \frac{1}{2}\|y\|^p - \frac{c}{2}\|x - y\|^p \quad \text{for all } x, y \in E.$$
Here, the best constant $\frac{1}{c}$ is called the $p$-uniform convexity constant of $E$.
Lemma 8.
([54]) Let $E$ be a 2-uniformly convex and smooth Banach space. Then, for every $x, y \in E$,
$$\|x - y\| \le \frac{2}{c}\|Jx - Jy\|,$$
where $\frac{1}{c}$ is the 2-uniform convexity constant of $E$.
Lemma 9.
([52,55]) Let $E$ be a $p$-uniformly convex Banach space with $p \ge 2$. Then
$$\phi(x, y) \ge \frac{c}{2}\|x - y\|^p \quad \text{for all } x, y \in E,$$
where $\frac{1}{c}$ is the $p$-uniform convexity constant.
Lemma 10.
([56]) Let $E$ be a uniformly convex Banach space, $r > 0$ be a positive number, and $B_r(0) = \{x \in E : \|x\| \le r\}$ be a closed ball of $E$. Then, for any given sequence $\{x_n\} \subset B_r(0)$ and for any given sequence $\{\lambda_n\}$ of positive numbers with $\sum_{n=1}^{\infty}\lambda_n = 1$, there exists a continuous, strictly increasing, and convex function $g : [0, 2r) \to [0, \infty)$ with $g(0) = 0$ such that, for any positive integers $i, j$ with $i < j$,
$$\left\|\sum_{n=1}^{\infty}\lambda_n x_n\right\|^2 \le \sum_{n=1}^{\infty}\lambda_n\|x_n\|^2 - \lambda_i\lambda_j g(\|x_i - x_j\|).$$
Lemma 11.
([57]) Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ be a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\{b_n\}$ be a sequence of real numbers. Assume that
$$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n b_n \quad \text{for all } n \in \mathbb{N}.$$
If $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k \to \infty}(a_{n_k+1} - a_{n_k}) \ge 0$, then $\lim_{n \to \infty} a_n = 0$.
An operator $A$ of $C$ into $E^*$ is said to be hemicontinuous if, for all $x, y \in C$, the mapping $f$ of $[0, 1]$ into $E^*$ defined by $f(t) = A(tx + (1 - t)y)$ is continuous with respect to the weak* topology of $E^*$.
Lemma 12.
([52]) Let $C$ be a nonempty, closed, and convex subset of a Banach space $E$ and $A$ a monotone, hemicontinuous operator of $C$ into $E^*$. Then
$$VI(C, A) = \{u \in C : \langle v - u, Av \rangle \ge 0 \ \text{for all } v \in C\}.$$
It is obvious from Lemma 12 that the set $VI(C, A)$ is a closed and convex subset of $C$.
3. Main Results
In this section, we present our algorithms and prove some strong convergence results for the proposed algorithms. We establish the convergence of the algorithms under the following conditions:
Condition A:
(A1) $E$ is a 2-uniformly convex and uniformly smooth Banach space with 2-uniform convexity constant $\frac{1}{c}$;
(A2) $C$ is a nonempty closed convex set which satisfies the following condition:
$$C = \{x \in E : g(x) \le 0\},$$
where $g : E \to \mathbb{R}$ is a convex function;
(A3) $g$ is weakly lower semicontinuous on $E$;
(A4) For any $x \in E$, at least one subgradient $\xi \in \partial g(x)$ can be calculated (i.e., $g$ is subdifferentiable on $E$), where $\partial g(x)$ is defined as follows:
$$\partial g(x) = \{\xi \in E^* : g(y) \ge g(x) + \langle y - x, \xi \rangle \ \text{for all } y \in E\}.$$
In addition, $\partial g$ is bounded on bounded sets.
Condition B:
(B1) The solution set $VI(C, A) \cap \bigcap_{i=1}^{\infty} F(S_i)$ is nonempty, where $\{S_i\}_{i=1}^{\infty}$ is an infinite family of multivalued relatively nonexpansive mappings;
(B2) The mapping $A : E \to E^*$ is monotone and Lipschitz continuous with Lipschitz constant $L > 0$.
Condition C:
(C1) $\{\alpha_n\} \subset (0, 1)$, $\lim_{n \to \infty}\alpha_n = 0$, and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2) $\beta_{n,i} \in [a, b] \subset (0, 1)$ for some $a, b \in (0, 1)$ and for all $n, i \in \mathbb{N}$;
(C3)
Now, we present our first algorithm in Algorithm 4, as follows:
Algorithm 4:
Step 0.
Select $\lambda_1$ and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1.
Construct the half-space
where and compute
If then set and go to Step 4. Else, go to Step 2.
Step 2.
Construct the half-space
and compute
Step 3.
Compute
Step 4.
Compute
Step 5.
Compute
Set $n := n + 1$ and return to Step 1.
Remark 3.
From the construction of the half-spaces $C_n$ and $T_n$, it can easily be verified that $C \subseteq C_n$ and $C \subseteq T_n$.
Remark 4.
([58]) The sequence $\{\lambda_n\}$ is monotonically decreasing with lower bound $\min\{\lambda_1, \frac{\mu}{L}\}$ and, hence, the limit of $\{\lambda_n\}$ exists and is denoted by $\lambda = \lim_{n \to \infty}\lambda_n$. It is clear that $\lambda > 0$.
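Remark 4 reflects the behaviour of self-adaptive rules of the following common form (this update is an illustration of the mechanism, not necessarily the exact rule of Algorithm 4; the names are ours):

```python
import numpy as np

def next_lambda(lam, mu, x, y, Ax, Ay):
    """One common self-adaptive step-size update:
        lambda_{n+1} = min( mu*||x_n - y_n|| / ||A x_n - A y_n||, lambda_n ).
    For an L-Lipschitz A, mu*||x - y||/||Ax - Ay|| >= mu/L, so the sequence
    is nonincreasing and bounded below by min(lambda_1, mu/L) -- running the
    update requires no knowledge of L."""
    denom = np.linalg.norm(Ax - Ay)
    if denom > 0.0:
        return min(mu * np.linalg.norm(x - y) / denom, lam)
    return lam

# Illustration with A(x) = 2x (so L = 2) and mu = 0.5: lower bound mu/L = 0.25.
A = lambda x: 2.0 * x
rng = np.random.default_rng(1)
lam, mu, lams = 1.0, 0.5, []
for _ in range(50):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    lam = next_lambda(lam, mu, u, v, A(u), A(v))
    lams.append(lam)
```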
Lemma 13.
Let $\{x_n\}$ be a sequence generated by Algorithm 4. Then the following inequality holds for all $n \in \mathbb{N}$ and all points $p$ of the solution set:
In particular, there exists $n_0 \in \mathbb{N}$ such that, for all $n \ge n_0$, we have
Proof.
Let $p$ be in the solution set. Then, by applying Lemma 5 and Property (P3) of the Lyapunov functional, together with the monotonicity of $A$, we have
By the definition of we have that
Then, by the definition of $\lambda_{n+1}$, the Lipschitz continuity of $A$, Lemma 8, and the Cauchy–Schwarz inequality, we obtain
This, together with (8), implies that , as required. □
Theorem 2.
Let $\{x_n\}$ be a sequence generated by Algorithm 4 such that Conditions A–C are satisfied. Then the sequence $\{x_n\}$ converges strongly to an element of the solution set.
Proof.
Let $p$ be in the solution set. Then it follows from Lemma 15 that
Now, we claim that the sequence converges to zero. In order to establish this, in view of Lemma 11, it suffices to show that for every subsequence of satisfying
Suppose is a subsequence of such that
Subsequently, it follows from Lemma 17 that Additionally, from (11) we have that Because is bounded, there exists a subsequence of , such that and
Applying Lemma 11 to (14), together with (16), we deduce the desired strong convergence, and this completes the proof. □
Next, we propose our second algorithm in Algorithm 5, which is a slight modification of Algorithm 4. Because its proof is very similar to that of Theorem 2, we leave the verification to the reader.
Theorem 3.
Let $\{x_n\}$ be a sequence generated by Algorithm 5, as follows:
Algorithm 5:
Step 0.
Select $\lambda_1$ and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1.
Construct the half-space
where and compute
If then set and go to Step 4. Else, go to Step 2.
Step 2.
Construct the half-space
and compute
Step 3.
Compute
Step 4.
Compute
Step 5.
Compute
Set $n := n + 1$ and return to Step 1.
Suppose that all of the conditions of Theorem 2 are satisfied. Then the sequence generated by Algorithm 5 converges strongly to an element of the solution set.
Now, we obtain some results, which are consequences of Theorem 2.
If we take the multivalued relatively nonexpansive mappings in Theorem 2 as single-valued relatively nonexpansive mappings, then we obtain the following result.
Corollary 1.
Let $\{S_i\}_{i=1}^{\infty}$ be an infinite family of single-valued relatively nonexpansive mappings and $\{x_n\}$ be a sequence generated by Algorithm 6, as follows:
Algorithm 6:
Step 0.
Select $\lambda_1$ and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1.
Construct the half-space
where and compute
If then set and go to Step 4. Else, go to Step 2.
Step 2.
Construct the half-space
and compute
Step 3.
Compute
Step 4.
Compute
Step 5.
Compute
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence generated by Algorithm 6 converges strongly to an element of the solution set.
Remark 5.
Corollary 1 improves and extends Theorem 1 [42] in the following senses:
(i)
The projection onto the closed convex set C is replaced with a projection onto a half-space, which can easily be computed.
(ii)
The step size is self-adaptive and independent of the Lipschitz constant of the monotone operator.
(iii)
The result extends the fixed point problem from a single relatively nonexpansive mapping to an infinite family of relatively nonexpansive mappings.
The next result follows from Lemma 2 and Corollary 1.
Corollary 2.
Let $\{S_i\}_{i=1}^{\infty}$ be an infinite family of generalized nonspreading mappings and $\{x_n\}$ be a sequence generated by Algorithm 7, as follows:
Algorithm 7:
Step 0.
Select $\lambda_1$ and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1.
Construct the half-space
where and compute
If then set and go to Step 4. Else, go to Step 2.
Step 2.
Construct the half-space
and compute
Step 3.
Compute
Step 4.
Compute
Step 5.
Compute
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence generated by Algorithm 7 converges strongly to an element of the solution set.
4. Application
Constrained Convex Minimization and Fixed Point Problems
In this section, we present an application of our result to finding a common solution of constrained convex minimization problem [59,60,61] and fixed point problem in Banach spaces.
Let $E$ be a real Banach space and $C$ be a nonempty closed convex subset of $E$. The constrained convex minimization problem is defined as finding a point $x^* \in C$ such that
$$f(x^*) = \min_{x \in C} f(x),\tag{17}$$
where $f$ is a real-valued convex function. Convex optimization theory is a powerful tool for solving many practical problems in operations research. In particular, it has been widely employed to solve practical minimization problems over complicated constraints [32,62], e.g., convex optimization problems with a fixed point constraint and a variational inequality constraint.
The lemma that follows will be required:
Lemma 18.
([63]) Let $E$ be a real Banach space and let $C$ be a nonempty closed convex subset of $E$. Let $f$ be a convex function of $E$ into $\mathbb{R}$. If $f$ is Fréchet differentiable, then $z$ is a solution of Problem (17) if and only if
$$\langle y - z, \nabla f(z) \rangle \ge 0 \quad \text{for all } y \in C.$$
By applying Theorem 2 and Lemma 18, we can approximate a common solution of the constrained convex minimization Problem (17) and fixed point problem for an infinite family of multivalued relatively nonexpansive mappings.
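Lemma 18 says that minimizing a differentiable convex $f$ over $C$ is exactly the VIP (1) with $A = \nabla f$, so any VIP solver applies. In the Hilbert-space special case this reduces to the familiar projected gradient iteration; the sketch below uses our own illustrative choices of $f$ and $C$:

```python
import numpy as np

def projected_gradient(grad_f, proj_C, x0, step, n_iter=500):
    """Projected gradient method: a VIP solver for A = grad f in the
    Hilbert-space special case of Lemma 18."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj_C(x - step * grad_f(x))   # gradient step, then projection
    return x

# Illustrative problem (our choice): f(x) = 0.5*||x - b||^2 over the box
# [0, 1]^3, whose minimizer is b clipped to the box.
b = np.array([2.0, -1.0, 0.5])
grad_f = lambda x: x - b                   # gradient of f; Lipschitz with L = 1
proj_C = lambda x: np.clip(x, 0.0, 1.0)    # metric projection onto the box
xmin = projected_gradient(grad_f, proj_C, np.zeros(3), step=1.0)
# xmin = [1.0, 0.0, 0.5]
```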
Theorem 4.
Let $E$ be a 2-uniformly convex and uniformly smooth Banach space with 2-uniform convexity constant $\frac{1}{c}$, and let $\{S_i\}_{i=1}^{\infty}$ be an infinite family of multivalued relatively nonexpansive mappings. Let $f : E \to \mathbb{R}$ be a Fréchet differentiable convex function and suppose $\nabla f$ is $L$-Lipschitz continuous with $L > 0$. Assume that Problem (17) is consistent. Let $\{x_n\}$ be a sequence generated by Algorithm 8, as follows:
Algorithm 8:
Step 0.
Select $\lambda_1$ and $\mu$ such that Condition C holds. Choose $x_1 \in E$ and set $n = 1$.
Step 1.
Construct the half-space
where and compute
If then set and go to Step 4. Else, go to Step 2.
Step 2.
Construct the half-space
and compute
Step 3.
Compute
Step 4.
Compute
Step 5.
Compute
Set $n := n + 1$ and return to Step 1.
Suppose that the solution set is nonempty and the remaining conditions of Theorem 2 are satisfied. Then the sequence generated by Algorithm 8 converges strongly to an element of the solution set.
Proof.
Because $f$ is convex, $\nabla f$ is monotone [63]. The result then follows by letting $A = \nabla f$ in Theorem 2 and applying Lemma 18. □
5. Numerical Example
In this section, we present some numerical examples to demonstrate the efficiency of our methods, Algorithms 4 and 5, in comparison with Algorithm 3. All of the numerical computations were carried out using Matlab R2019b.
We choose
Example 1.
Consider a nonlinear operator defined by
and let the feasible set $C$ be a box. It can easily be verified that $A$ is monotone and Lipschitz continuous with constant $L$. Let $B$ be a matrix defined by
Now, we consider the mappings $S_i$ defined for all $i$ as above. It is easily verified that each $S_i$ is quasi-nonexpansive (note that, in a Hilbert space, a relatively nonexpansive mapping reduces to a quasi-nonexpansive mapping). The exact solution of the problem is known. We test the algorithms for four different starting points, using the stated tolerance as the stopping criterion. The numerical results are reported in Figure 1 and Table 1.
Case I:
Case II:
Case III:
Case IV:
Example 2.
Suppose that $E = L^2([0, 1])$ with inner product
$$\langle x, y \rangle = \int_0^1 x(t)y(t)\,dt \quad \text{for all } x, y \in E,$$
and induced norm
$$\|x\| = \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2}.$$
Let denote the continuous function space defined on the interval and choose an arbitrary fixed Let It can easily be verified that C is a nonempty closed convex subset of Define an operator by
Then $A$ is 2-Lipschitz continuous and monotone on $E$ (see [64]). With these given $C$ and $A$, the solution set of the VIP (1) is given by Define $g$ by
then $g$ is a convex function and $C$ is a level set of $g$, i.e., $C = \{x \in E : g(x) \le 0\}$. Also, $g$ is differentiable on $E$ and (see [65]). In this numerical example, we choose Let be defined by
Observe that since for each Moreover, the mappings are nonexpansive for each $i$ and, hence, quasi-nonexpansive. Therefore, the solution set of the problem is We test the algorithms for three different starting points using as stopping criterion, where The numerical results are reported in Figure 2 and Table 2.
Case I:
Case II:
Case III:
6. Conclusions
In this paper, we studied a classical monotone and Lipschitz continuous variational inequality and fixed point problems defined on a level set of a convex function in the setting of Banach spaces. We proposed two iterative methods with self-adaptive step size that combine the Halpern method with a relaxed projection method for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive mappings in Banach spaces. The main advantage of our algorithms is that every projection onto the closed convex set is replaced with a projection onto some half-space, which guarantees easy implementation of our proposed methods. Moreover, the step size can be adaptively selected. We obtained strong convergence results for the proposed methods without knowledge of the Lipschitz constant of the monotone operator, and we applied our results to finding a common solution of constrained convex minimization and fixed point problems in Banach spaces. The obtained results improve and extend several existing results in the current literature in this direction.
Author Contributions
Conceptualization of the article was given by S.H.K., methodology by O.T.M., software by T.O.A., validation by S.H.K., O.T.M. and T.O.A., formal analysis, investigation, data curation, and writing–original draft preparation by T.O.A., resources by S.H.K., O.T.M. and T.O.A., writing–review and editing by S.H.K. and O.T.M., visualization by S.H.K. and O.T.M., project administration by O.T.M., Funding acquisition by S.H.K. All authors have read and agreed to the published version of the manuscript.
Acknowledgments
The authors sincerely thank the anonymous reviewers for their careful reading, constructive comments and fruitful suggestions that substantially improved the manuscript. The third author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.
Conflicts of Interest
The authors declare that they have no competing interests.
References
Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142.
Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416.
Abass, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020.
Gibali, A.; Reich, S.; Zalas, R. Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 2017, 66, 417–437.
Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space. Optimization 2020.
Kassay, G.; Reich, S.; Sabach, S. Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1319–1344.
Mewomo, O.T.; Ogbuisi, F.U. Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces. Quaest. Math. 2018, 41, 129–148.
Jolaoso, L.O.; Alakoya, T.; Taiwo, A.; Mewomo, O.T. A parallel combination extragradient method with Armijo line searching for finding common solution of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo Ser. 2 2019.
Abbas, M.; Ibrahim, Y.; Khan, A.R.; De la Sen, M. Strong Convergence of a System of Generalized Mixed Equilibrium Problem, Split Variational Inclusion Problem and Fixed Point Problem in Banach Spaces. Symmetry 2019, 11, 722.
Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. A general iterative method for finding common fixed point of finite family of demicontractive mappings with accretive variational inequality problems in Banach spaces. Nonlinear Stud. 2020, 27, 1–24.
Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
Jolaoso, L.O.; Ogbuisi, F.U.; Mewomo, O.T. An iterative method for solving minimization, variational inequality and fixed point problems in reflexive Banach spaces. Adv. Pure Appl. Math. 2017, 9, 167–184.
Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces. Demonstr. Math. 2019, 52, 183–203.
Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem. Comput. Appl. Math. 2019.
Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Strong convergence theorem for solving pseudo-monotone variational inequality problem using projection method in a reflexive Banach space. J. Optim. Theory Appl. 2020.
Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
Ogwo, G.N.; Izuchukwu, C.; Aremu, K.O.; Mewomo, O.T. A viscosity iterative algorithm for a family of monotone inclusion problems in an Hadamard space. Bull. Belg. Math. Soc. Simon Stevin 2020, 27, 125–152.
Oyewole, O.K.; Abass, H.A.; Mewomo, O.T. A strong convergence algorithm for a fixed point constrained split null point problem. Rend. Circ. Mat. Palermo II 2020.
Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. General alternative regularization method for solving split equality common fixed point problem for quasi-pseudocontractive mappings in Hilbert spaces. Ric. Mat. 2019.
Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 2020.
Xu, Y. Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive operator equations. J. Math. Anal. Appl. 1998, 224, 91–101.
Korpelevich, G.M. An extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 1976, 12, 747–756.
Ceng, L.C.; Hadjisavvas, N.; Wong, N.C. Strong convergence theorems by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646.
Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl.2011, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejer-monotone methods in Hilbert spaces. Math. Oper. Res.2001, 26, 248–264. [Google Scholar] [CrossRef]
Okeke, G.A.; Abbas, M.; De La Sen, M. Approximation of the Fixed Point of Multivalued Quasi-Nonexpansive Mappings via a Faster Iterative Process with Applications. Discret. Dyn. Nat. Soc.2020. [Google Scholar] [CrossRef]
Wang, Y.; Fang, X.; Guan, J.L.; Kim, T.H. On split null point and common fixed point problems for multivalued demicontractive mappings. Optimization2020, 1–20. [Google Scholar] [CrossRef]
Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal.2007, 67, 2350–2360. [Google Scholar] [CrossRef]
Shimoji, K.; Takahashi, W. Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan J. Math.2001, 5, 387–404. [Google Scholar] [CrossRef]
Combettes, P.L. A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans. Signal Process.2003, 51, 1771–1782. [Google Scholar] [CrossRef] [Green Version]
Bauschke, H.H. The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl.1996, 202, 150–159. [Google Scholar] [CrossRef] [Green Version]
Youla, D.C. Mathematical theory of image restoration by the method of convex projections. In Image Recovery: Theory and Applications; Stark, H., Ed.; Academic Press: New York, NY, USA, 1987. [Google Scholar]
Iusem, A.N.; De Pierro, A.R. On the convergence of Han’s method for convex programming with quadratic objective. Math. Program. Ser. B1991, 52, 265–284. [Google Scholar] [CrossRef]
Iiduka, H. Acceleration method for convex optimization over the fixed point set of a nonexpansive mappings. Math. Prog. Ser. A2015, 149, 131–165. [Google Scholar] [CrossRef]
Iiduka, H. Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math.2012, 236, 1733–1742. [Google Scholar] [CrossRef] [Green Version]
Luo, C.; Ji, H.; Li, Y. Utility-based multi-service bandwidth allocation in the 4G heterogeneous wireless networks. IEEE Wirel. Commun. Netw. Conf.2009. [Google Scholar] [CrossRef]
Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim.2008, 47, 1499–1515. [Google Scholar] [CrossRef]
Zhang, H.; Xu, Y.; Zhang, J. Reproducing Kernel Banach Spaces for Machine Learning. J. Mach. Learn. Res.2009, 10. [Google Scholar] [CrossRef] [Green Version]
Liu, Y. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Banach spaces. J. Nonlinear Sci. Appl.2017, 10, 395–409. [Google Scholar] [CrossRef]
Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Springer Science & Business Media: Berlin, Germany, 2012. [Google Scholar]
Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algorithms2020. [Google Scholar] [CrossRef]
Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: London, UK, 2006. [Google Scholar]
Reich, S. A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Marcel Dekker: New York, NY, USA, 1996; pp. 313–318. [Google Scholar]
Kohsaka, F.; Takahashi, W. Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim.2008, 19, 824–835. [Google Scholar] [CrossRef]
Hsu, M.H.; Takahashi, W.; Yao, J.C. Generalized hybrid mappings in Hilbert spaces and Banach spaces. Taiwan. J. Math.2012, 16, 129–149. [Google Scholar] [CrossRef]
Homaeipour, S.; Razani, A. Weak and strong convergence theorems for relatively nonexpansive multi-valued mappings in Banach spaces. Fixed Point Theory Appl.2011, 73. [Google Scholar] [CrossRef] [Green Version]
Kamimura, S.; Takahashi, W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim.2003, 13, 938–945. [Google Scholar] [CrossRef]
Xu, H.K. Strong convergence of approximating fixed point sequences for nonexpansive mappings. Bull. Aust. Math. Soc.2006, 74, 143–151. [Google Scholar] [CrossRef] [Green Version]
Iiduka, H.; Takahashi, W. Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl.2008, 339, 668–679. [Google Scholar] [CrossRef]
Matsushita, S.Y.; Takahashi, W. A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory2005, 134, 257–266. [Google Scholar] [CrossRef] [Green Version]
Nakajo, K. Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput.2015, 271, 251–258. [Google Scholar] [CrossRef]
Chang, S.S.; Kim, J.K.; Wang, X.R. Modified block iterative algorithm for solving convex feasibility problems in Banach spaces. J. Inequal. Appl.2010, 869684. [Google Scholar] [CrossRef] [Green Version]
Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization2020. [Google Scholar] [CrossRef]
Ma, F. A subgradient extragradient algorithm for solving monotone variational inequalities in Banach spaces. J. Inequal. Appl.2020, 26. [Google Scholar] [CrossRef]
Aremu, K.O.; Abass, H.A.; Izuchukwu, C.; Mewomo, O.T. A viscosity-type algorithm for an infinitely countable family of (f,g)-generalized k-strictly pseudononspreading mappings in CAT(0) spaces. Analysis2020, 40, 19–37. [Google Scholar] [CrossRef]
Aremu, K.O.; Izuchukwu, C.; Ogwo, G.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim.2020. [Google Scholar] [CrossRef] [Green Version]
Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal.2011, 74, 5286–5302. [Google Scholar] [CrossRef]
Panyanak, B. Ishikawa iteration processes for multi-valued mappings in Banach Spaces. Comput. Math. Appl.2007, 54, 872–877. [Google Scholar] [CrossRef] [Green Version]
Tian, M.; Jiang, B. Inertial Haugazeau’s hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization2020. [Google Scholar] [CrossRef]
Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl.2017, 66, 75–96. [Google Scholar] [CrossRef]
He, S.; Dong, Q.; Tian, H. Relaxed projection and contraction methods for solving Lipschitz continuous monotone variational inequalities. Rev. Real Acad. Cienc. Exatc. Fis. Nat. Ser. A Mat.2019, 113, 2773–2791. [Google Scholar] [CrossRef]
Figure 1. Top: Case I; Next: Case II; Next: Case III; Bottom: Case IV.
Figure 2. Top: Case I; Middle: Case II; Bottom: Case III.
Khan, S.H.; Alakoya, T.O.; Mewomo, O.T.
Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces. Math. Comput. Appl. 2020, 25, 54.
https://doi.org/10.3390/mca25030054
AMA Style
Khan SH, Alakoya TO, Mewomo OT.
Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces. Mathematical and Computational Applications. 2020; 25(3):54.
https://doi.org/10.3390/mca25030054
Chicago/Turabian Style
Khan, Safeer Hussain, Timilehin Opeyemi Alakoya, and Oluwatosin Temitope Mewomo.
2020. "Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces" Mathematical and Computational Applications 25, no. 3: 54.
https://doi.org/10.3390/mca25030054
APA Style
Khan, S. H., Alakoya, T. O., & Mewomo, O. T.
(2020). Relaxed Projection Methods with Self-Adaptive Step Size for Solving Variational Inequality and Fixed Point Problems for an Infinite Family of Multivalued Relatively Nonexpansive Mappings in Banach Spaces. Mathematical and Computational Applications, 25(3), 54.
https://doi.org/10.3390/mca25030054