Article

Extension and Application of the Yamada Iteration Algorithm in Hilbert Spaces

Ming Tian and Meng-Ying Tong
1 College of Science, Civil Aviation University of China, Tianjin 300300, China
2 Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 215; https://doi.org/10.3390/math7030215
Submission received: 17 January 2019 / Revised: 19 February 2019 / Accepted: 20 February 2019 / Published: 26 February 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract:
In this paper, based on the Yamada iteration, we propose an iteration algorithm to find a common element of the set of fixed points of a nonexpansive mapping and the set of zeros of an inverse strongly monotone mapping. We obtain a weak convergence theorem in a Hilbert space. In particular, the set of zero points of an inverse strongly monotone mapping can be transformed into the solution set of a variational inequality problem. Based on this result, we also obtain some new weak convergence theorems, which are applied to the equilibrium problem and the split feasibility problem.

1. Introduction

Throughout this paper, let $\mathbb{N}$ and $\mathbb{R}$ be the sets of positive integers and real numbers, respectively. Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$. Let $C$ be a nonempty, closed and convex subset of $H$, and let $A : C \to H$ be a nonlinear operator. The variational inequality problem is to find some $x^* \in C$ such that
$$\langle A x^* , x - x^* \rangle \ge 0 , \quad \forall x \in C . \tag{1}$$
The set of solutions of the variational inequality is denoted by $VI(C, A)$. The variational inequality problem is an important branch of nonlinear analysis and has attracted more and more attention from many scholars. Its applications involve different fields, such as the engineering sciences and medical image processing.
Through a transformation of (1), we know that the variational inequality problem is equivalent to a fixed point problem. In other words, it can be converted into finding a point $x \in C$ such that
$$x = P_C ( I - \lambda A ) x , \tag{2}$$
where $P_C$ is the metric projection of $H$ onto $C$ and $\lambda$ is a positive real constant. The corresponding iteration is
$$x_{n+1} = P_C ( I - \lambda A ) x_n . \tag{3}$$
This method is an example of the so-called gradient projection method. It is well known that if $A$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, the variational inequality has a unique solution and the sequence $\{ x_n \}$ generated by (3) converges strongly to this unique solution when $0 < \lambda < 2 \eta / L^2$. If $A$ is $k$-inverse strongly monotone and $0 < \lambda < 2 k$, and the solution set is nonempty, then the sequence $\{ x_n \}$ converges weakly to a point of $VI(C, A)$. A small numerical sketch of this method follows.
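To make the iteration concrete, here is a minimal sketch of the gradient projection method (3). The problem data is an illustrative assumption of ours, not taken from the paper: $A(x) = Mx - q$ with $M$ symmetric positive definite (hence strongly monotone and Lipschitz continuous), and $C$ a box whose metric projection is coordinatewise clipping.

```python
import numpy as np

# Minimal sketch of the gradient projection method (3), under assumed toy
# data: A(x) = M x - q with M symmetric positive definite, and C = [lo, hi]^n
# a box, whose metric projection P_C is coordinatewise clipping.

def proj_box(x, lo=-1.0, hi=1.0):
    # Metric projection P_C onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def gradient_projection(M, q, x0, lam, tol=1e-10, max_iter=10_000):
    # Iterate x_{n+1} = P_C((I - lam A) x_n) with A(x) = M x - q.
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = proj_box(x - lam * (M @ x - q))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = B @ B.T + 5 * np.eye(5)            # symmetric positive definite
q = rng.standard_normal(5)
eta = np.linalg.eigvalsh(M).min()      # strong monotonicity constant of A
L = np.linalg.eigvalsh(M).max()        # Lipschitz constant of A
x_sol = gradient_projection(M, q, np.zeros(5), lam=eta / L**2)  # in (0, 2*eta/L^2)
print(x_sol)
```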
In 1976, Korpelevich [1] proposed an algorithm which was known as the extragradient method [1,2]:
$$\begin{cases} y_n = P_C ( x_n - \lambda A x_n ) , \\ x_{n+1} = P_C ( x_n - \lambda A y_n ) , \end{cases} \tag{4}$$
for every $n = 0, 1, 2, \ldots$, where $\lambda \in ( 0 , 1/k )$ with $k$ the Lipschitz constant of the monotone operator $A$. Compared with Equation (3), Equation (4) avoids the hypothesis of strong monotonicity of the operator $A$. If $VI(C, A) \neq \emptyset$, the sequence $\{ x_n \}$ generated by (4) converges weakly to an element of $VI(C, A)$. In fact, although the extragradient method weakens the conditions on the operator, we need to calculate two projections onto $C$ in each iteration.
Moreover, the extragradient method is applicable only when $P_C$ has a closed form, in other words, when $P_C$ has an explicit expression. In some cases, however, $P_C$ is not easy to calculate. When $C$ is a closed ball or a half-space, $P_C$ has an analytical expression, while for a general closed convex set $P_C$ often does not.
To overcome this difficulty, the method has received great attention from many authors, who have improved it in various ways. To our knowledge, there are three kinds of methods, all of which improve the second projection in Equation (4). In all three methods, the operator $A$ is Lipschitz continuous and monotone. The first one is the subgradient extragradient method, proposed by Censor et al. [3] in 2011, which computes $x_{n+1}$ by the process
$$\begin{cases} y_n = P_C ( x_n - \lambda A x_n ) , \\ T_n = \{ \omega \in H \mid \langle x_n - \lambda A x_n - y_n , \omega - y_n \rangle \le 0 \} , \\ x_{n+1} = P_{T_n} ( x_n - \lambda A y_n ) , \end{cases} \tag{5}$$
where $\lambda \in ( 0 , 1/L )$. The key step of the subgradient extragradient method replaces the second projection onto $C$ of the extragradient method by a projection onto a specially constructed half-space, which clearly reduces the difficulty of the calculation. The second one is Tseng's extragradient method, studied by Duong Viet Thong and Dang Van Hieu [4] in 2017:
$$\begin{cases} y_n = P_C ( x_n - \lambda_n A x_n ) , \\ \lambda_n \ \text{chosen to be the largest} \ \lambda \in \{ \gamma , \gamma l , \gamma l^2 , \ldots \} \ \text{satisfying} \ \lambda \| A x_n - A y_n \| \le \mu \| x_n - y_n \| , \\ x_{n+1} = y_n - \lambda_n ( A y_n - A x_n ) , \end{cases} \tag{6}$$
where $\gamma > 0$, $l \in ( 0, 1 )$, $\mu \in ( 0, 1 )$.
In particular, this algorithm does not require knowledge of the Lipschitz constant. The third one is the projection and contraction method, studied by Q. L. Dong et al. [5] in 2017:
$$\begin{cases} \omega_n = x_n + \alpha_n ( x_n - x_{n-1} ) , \\ y_n = P_C ( \omega_n - \lambda A \omega_n ) , \\ d( \omega_n , y_n ) = ( \omega_n - y_n ) - \lambda ( A \omega_n - A y_n ) , \\ x_{n+1} = \omega_n - \gamma \beta_n d( \omega_n , y_n ) , \end{cases} \tag{7}$$
for each $n \ge 1$, where $\gamma \in ( 0, 2 )$, $\lambda > 0$,
$$\beta_n := \begin{cases} \varphi( \omega_n , y_n ) / \| d( \omega_n , y_n ) \|^2 , & \text{if } d( \omega_n , y_n ) \neq 0 , \\ 0 , & \text{if } d( \omega_n , y_n ) = 0 , \end{cases}$$
$$\varphi( \omega_n , y_n ) = \langle \omega_n - y_n , d( \omega_n , y_n ) \rangle .$$
As a result, the sequences generated by Equations (5)–(7) all converge weakly to a solution of the variational inequality. A sketch of Tseng's method (6) is given below.
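As an illustration, the following is a minimal sketch of Tseng's extragradient method (6), including the Armijo-type backtracking choice of $\lambda_n$; the monotone operator and the set $C$ are toy assumptions of ours (an affine monotone map and a box), not data from the text. Note that only one projection onto $C$ is computed per iteration, which is the point of the method.

```python
import numpy as np

# Minimal sketch of Tseng's extragradient method (6) with backtracking.
# Toy assumptions: A(x) = M x - q monotone, and C a box with coordinatewise
# projection.

def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def tseng(A, x0, gamma=1.0, l=0.5, mu=0.9, tol=1e-10, max_iter=10_000):
    x = x0.astype(float)
    for _ in range(max_iter):
        Ax = A(x)
        # lambda_n: largest lam in {gamma, gamma*l, gamma*l^2, ...} with
        # lam * ||A x_n - A y_n|| <= mu * ||x_n - y_n||.
        lam = gamma
        while True:
            y = proj_box(x - lam * Ax)          # the only projection per step
            Ay = A(y)
            if lam * np.linalg.norm(Ax - Ay) <= mu * np.linalg.norm(x - y):
                break
            lam *= l
        x_new = y - lam * (Ay - Ax)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M = B @ B.T + np.eye(5)                         # monotone affine operator
q = rng.standard_normal(5)
print(tseng(lambda x: M @ x - q, np.zeros(5)))
```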
By comparing the above three methods, we find that reducing the conditions of the algorithm still allows one to solve the variational inequality problem. However, calculating projections is essential in all of these methods. So, is there a way to solve the variational inequality problem that avoids the calculation of projections?
In 2001, Yamada [6] introduced the so-called hybrid steepest descent method:
$$x_{n+1} = ( I - \mu \alpha_n F ) T x_n , \quad n \in \mathbb{N} , \tag{8}$$
which is essentially an algorithmic solution to the variational inequality problem. It does not require calculating $P_C$, but it does require a closed-form expression of a nonexpansive mapping $T$ whose fixed point set is $C$. If $T$ is a nonexpansive mapping with $Fix(T) \neq \emptyset$ and $F$ is $k$-Lipschitz continuous and $\eta$-strongly monotone, the sequence $\{ x_n \}$ generated by (8) converges strongly to the unique solution $x^* \in Fix(T)$ of $VI( Fix(T) , F )$.
Inspired by this idea, in 2014, Zhou and Wang [7] proposed a new iterative algorithm based on Yamada's hybrid steepest descent method and the Mann iterative method:
$$x_{n+1} = ( I - \mu \alpha_n F ) T_N^n T_{N-1}^n \cdots T_1^n x_n , \quad n \in \mathbb{N} , \tag{9}$$
where $\mu \in ( 0 , 2 \eta / L^2 )$ and $T_i^n := ( 1 - \lambda_n^i ) I + \lambda_n^i T_i$ for $i = 1, 2, \ldots, N$. In (9), $\{ T_i \}_{i=1}^N$ are nonexpansive mappings of $H$ into $H$ with $\bigcap_{i=1}^N Fix( T_i ) \neq \emptyset$ and $F$ is an $\eta$-strongly monotone, $L$-Lipschitz continuous mapping. Then the sequence $\{ x_n \}$ generated by (9) converges strongly to the unique point $x^*$ of $VI( C, F )$. In particular, when $N = 1$, (9) can be rewritten as
$$x_{n+1} = ( I - \mu \alpha_n F ) ( ( 1 - \lambda_n ) I + \lambda_n T ) x_n , \tag{10}$$
and the sequence $\{ x_n \}$ generated by (10) also converges strongly to the point $x^*$ of $VI( C, F )$.
The advantage of Equations (4)–(7) is that the conditions on the algorithm are reduced, while the advantage of the Yamada algorithm is that it avoids projections. So, can we combine the advantages of these several methods to design a new algorithm? This leads us to pose the following question: if we weaken the conditions of Equation (10), will we still get a convergence result? This is the main issue we explore in this paper.
In this paper, motivated and inspired by the above results, we introduce a new iteration algorithm: $x_1 \in C$ and
$$\begin{cases} y_n = ( I - \mu \alpha_n F ) x_n , \\ x_{n+1} = \beta_n x_n + ( 1 - \beta_n ) T y_n , \quad n \in \mathbb{N} . \end{cases} \tag{11}$$
In this iteration algorithm, we weaken the conditions on the operators. In other words, we change the strong monotonicity of $F$ into inverse strong monotonicity. The weak convergence of our algorithm will then be proved. It is worth emphasizing that the advantage of our algorithm is that it does not require a projection, while the condition on the operator is suitably weakened. A minimal sketch of the iteration is given below.
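For concreteness, here is a minimal sketch of the proposed iteration (11) on a toy instance of our own choosing: $F(x) = Mx - q$ with $M$ symmetric positive definite (which is $1/\|M\|$-inverse strongly monotone) and $T$ an affine nonexpansive mapping with a known fixed point set, so that $F^{-1}0 \cap Fix(T)$ is known in advance.

```python
import numpy as np

# Minimal sketch of iteration (11) on assumed toy data: F(x) = M x - q with
# M symmetric positive definite (1/||M||-inverse strongly monotone) and T a
# nonexpansive affine map; both are chosen so that ones(4) is the common point.

def yamada_type(F, T, x1, mu_alpha, beta, n_iter=2_000):
    # mu_alpha(n) must stay in [a, b] in (0, 2k); beta(n) in [c, d] in (0, 1).
    x = x1.astype(float)
    for n in range(1, n_iter + 1):
        y = x - mu_alpha(n) * F(x)              # y_n = (I - mu alpha_n F) x_n
        x = beta(n) * x + (1 - beta(n)) * T(y)  # x_{n+1} = beta_n x_n + (1-beta_n) T y_n
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
M = B.T @ B                                   # symmetric positive definite (a.s.)
q = M @ np.ones(4)                            # so F(ones) = 0, i.e. ones in F^{-1}0
k = 1.0 / np.linalg.norm(M, 2)                # inverse strong monotonicity constant
F = lambda x: M @ x - q
T = lambda x: 0.5 * x + 0.5 * np.ones(4)      # nonexpansive, Fix(T) = {ones}
x_lim = yamada_type(F, T, np.zeros(4),
                    mu_alpha=lambda n: k,     # constant sequence inside (0, 2k)
                    beta=lambda n: 0.5)
print(x_lim)                                  # close to [1, 1, 1, 1]
```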
Finally, the outline of this paper is as follows. In Section 2, we list some useful basic definitions and lemmas which will be used in this paper. In Section 3, we prove the weak convergence theorem for our main algorithm. In Section 4, using this conclusion, we obtain some new weak convergence theorems for the equilibrium problem and the split feasibility problem. In Section 5, we give a concrete example and numerical results to verify the correctness of our conclusions. Section 6 concludes the paper.

2. Preliminaries

In what follows, $H$ denotes a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$, and $C$ denotes a nonempty, closed and convex subset of $H$. We use the sign $\to$ to denote that the sequence $\{ x_n \}$ converges strongly to a point $x$, i.e., $x_n \to x$, and the sign $\rightharpoonup$ to denote that $\{ x_n \}$ converges weakly to $x$, i.e., $x_n \rightharpoonup x$. If there exists a subsequence $\{ x_{n_i} \}$ of $\{ x_n \}$ converging weakly to a point $z$, then $z$ is called a weak cluster point of $\{ x_n \}$. We use $\omega_w ( x_n )$ to denote the set of all weak cluster points of $\{ x_n \}$.
Definition 1. 
([8]) A mapping $T : H \to H$ is called nonexpansive if
$$\| T x - T y \| \le \| x - y \| , \quad \forall x, y \in H .$$
The set of fixed points of $T$ is the set
$$Fix(T) := \{ x \in H : T x = x \} .$$
It is well known that if $T$ is nonexpansive and $Fix(T) \neq \emptyset$, then $Fix(T)$ is closed and convex.
Definition 2.
A mapping $F : H \to H$ is called
  • (i) $L$-Lipschitz, where $L > 0$, iff
    $$\| F x - F y \| \le L \| x - y \| , \quad \forall x, y \in H ;$$
  • (ii) monotone iff
    $$\langle x - y , F x - F y \rangle \ge 0 , \quad \forall x, y \in H ;$$
  • (iii) strongly monotone iff
    $$\langle x - y , F x - F y \rangle \ge \eta \| x - y \|^2 , \quad \forall x, y \in H ,$$
    where $\eta > 0$; in this case, $F$ is said to be $\eta$-strongly monotone;
  • (iv) inverse strongly monotone iff
    $$\langle x - y , F x - F y \rangle \ge k \| F x - F y \|^2 , \quad \forall x, y \in H ,$$
    where $k > 0$; in this case, $F$ is said to be $k$-inverse strongly monotone.
It is well known that if $F$ is $k$-inverse strongly monotone, then $F$ is also $\frac{1}{k}$-Lipschitz continuous: by the Cauchy–Schwarz inequality, $k \| F x - F y \|^2 \le \langle x - y , F x - F y \rangle \le \| x - y \| \, \| F x - F y \|$, so $\| F x - F y \| \le \frac{1}{k} \| x - y \|$.
Definition 3. 
([9]) A mapping $T : H \to H$ is said to be an averaged mapping if and only if it can be written as a convex combination of the identity $I$ and a nonexpansive mapping, that is,
$$T = ( 1 - \alpha ) I + \alpha S ,$$
where $\alpha \in ( 0, 1 )$ and $S : H \to H$ is a nonexpansive mapping. More precisely, we then say that $T$ is $\alpha$-averaged.
Lemma 1.
Let $H$ be a real Hilbert space. Then the following relationships hold:
  • (i) $\| x + y \|^2 = \| x \|^2 + 2 \langle x , y \rangle + \| y \|^2$, $\forall x, y \in H$;
  • (ii) $\| x + y \|^2 \le \| x \|^2 + 2 \langle y , x + y \rangle$, $\forall x, y \in H$;
  • (iii) $\| \lambda x + ( 1 - \lambda ) y \|^2 = \lambda \| x \|^2 + ( 1 - \lambda ) \| y \|^2 - \lambda ( 1 - \lambda ) \| x - y \|^2$, $\forall x, y \in H$, $\lambda \in [ 0, 1 ]$.
Lemma 2. 
([9]) Let $H$ be a real Hilbert space and let $C$ be a nonempty bounded closed convex subset of $H$. If $T$ is a nonexpansive mapping of $C$ into $C$, then $Fix(T) \neq \emptyset$.
Lemma 3. 
([9]) Let $H$ be a real Hilbert space. Then:
  • (i) $T$ is nonexpansive if and only if the complement $I - T$ is $( 1/2 )$-inverse strongly monotone;
  • (ii) if $T$ is $\nu$-inverse strongly monotone, then for $\gamma > 0$, $\gamma T$ is $( \nu / \gamma )$-inverse strongly monotone;
  • (iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-inverse strongly monotone for some $\nu > 1/2$; indeed, for $\alpha \in ( 0, 1 )$, $T$ is $\alpha$-averaged if and only if $I - T$ is $( 1 / ( 2 \alpha ) )$-inverse strongly monotone.
Lemma 4. 
(Demiclosedness Principle [10]). Let $C$ be a closed and convex subset of a real Hilbert space $H$ and let $T : C \to C$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$. If the sequence $\{ x_n \}_{n=1}^{\infty}$ converges weakly to $x$ and $\{ ( I - T ) x_n \}_{n=1}^{\infty}$ converges strongly to $y$, then $( I - T ) x = y$.
In particular, if $x_n \rightharpoonup x$ and $\lim_{n \to \infty} \| x_n - T x_n \| = 0$, then $T x = x$; in other words, $x \in Fix(T)$.
Lemma 5. 
([9]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $\{ x_n \}_{n=1}^{\infty}$ be a sequence in $H$ such that the following two properties hold:
  • (i) $\lim_{n \to \infty} \| x_n - x \|$ exists for each $x \in C$;
  • (ii) $\omega_w ( x_n ) \subset C$.
Then the sequence $\{ x_n \}_{n=1}^{\infty}$ converges weakly to a point in $C$.
Lemma 6. 
([11]) Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$ and let $\{ x_n \}_{n=1}^{\infty}$ be a sequence in $H$. Suppose
$$\| x_{n+1} - u \| \le \| x_n - u \| , \quad \forall u \in C ,$$
for every $n = 0, 1, 2, \ldots$. Then the sequence $\{ P_C x_n \}$ converges strongly to a point in $C$.

3. Main Results

In this section, we give the main result of this paper. Based on the iterative scheme (10), we weaken the condition on the operator and present a new algorithm. We prove that the sequence $\{ x_n \}$ generated by the new algorithm converges weakly, and the limit has the same characterization as before: it is an element of the intersection of the set of zero points of an inverse strongly monotone mapping and the set of fixed points of a nonexpansive mapping in a real Hilbert space.
Theorem 1.
Let $H$ be a real Hilbert space and let $T : H \to H$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$. Let $F : H \to H$ be a $k$-inverse strongly monotone mapping. Assume that $F^{-1} 0 \cap Fix(T) \neq \emptyset$. Let the sequences $\{ x_n \}$ and $\{ y_n \}$ be generated by $x_1 \in H$ and
$$\begin{cases} y_n = ( I - \mu \alpha_n F ) x_n , \\ x_{n+1} = \beta_n x_n + ( 1 - \beta_n ) T y_n , \quad n \in \mathbb{N} , \end{cases}$$
where $\{ \mu \alpha_n \}$ and $\{ \beta_n \}$ satisfy the following conditions:
  • (i) $\{ \mu \alpha_n \} \subset [ a, b ]$ with $a, b \in ( 0 , 2 k )$;
  • (ii) $\{ \beta_n \} \subset [ c, d ]$ with $c, d \in ( 0, 1 )$.
Then the sequence $\{ x_n \}$ generated by (11) converges weakly to a point $x^* \in F^{-1} 0 \cap Fix(T)$, where $x^* = \lim_{n \to \infty} P_{F^{-1} 0 \cap Fix(T)} x_n$. At the same time, $x^*$ is also a solution of $VI( Fix(T) , F )$.
Proof. 
Let $u \in F^{-1} 0 \cap Fix(T)$. We have
$$\begin{aligned} \| x_{n+1} - u \| &= \| \beta_n x_n + ( 1 - \beta_n ) T y_n - u \| = \| \beta_n ( x_n - u ) + ( 1 - \beta_n ) ( T y_n - u ) \| \\ &\le \beta_n \| x_n - u \| + ( 1 - \beta_n ) \| y_n - u \| . \end{aligned} \tag{12}$$
Since $F u = 0$, we also deduce
$$\begin{aligned} \| y_n - u \|^2 &= \| ( I - \mu \alpha_n F ) x_n - ( I - \mu \alpha_n F ) u \|^2 \\ &= \| x_n - u \|^2 + \mu^2 \alpha_n^2 \| F x_n - F u \|^2 - 2 \mu \alpha_n \langle x_n - u , F x_n - F u \rangle \\ &\le \| x_n - u \|^2 + \mu^2 \alpha_n^2 \| F x_n - F u \|^2 - 2 \mu \alpha_n k \| F x_n - F u \|^2 \\ &= \| x_n - u \|^2 + \mu \alpha_n ( \mu \alpha_n - 2 k ) \| F x_n - F u \|^2 \\ &\le \| x_n - u \|^2 . \end{aligned} \tag{13}$$
Combining (12) and (13), we have
$$\| x_{n+1} - u \| \le \| x_n - u \| .$$
Therefore, the limit
$$M = \lim_{n \to \infty} \| x_n - u \|$$
exists, and the sequence $\{ x_n \}$ is bounded; consequently, the sequence $\{ y_n \}$ is bounded as well.
Below, we divide the proof that $\omega_w ( x_n ) \subset F^{-1} 0 \cap Fix(T)$ into two steps.
First, let us show that $\omega_w ( x_n ) \subset Fix(T)$. Using Lemma 1 (iii),
$$\begin{aligned} \| x_{n+1} - u \|^2 &= \| \beta_n x_n + ( 1 - \beta_n ) T y_n - u \|^2 = \| \beta_n ( x_n - u ) + ( 1 - \beta_n ) ( T y_n - u ) \|^2 \\ &= \beta_n \| x_n - u \|^2 + ( 1 - \beta_n ) \| T y_n - u \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - T y_n \|^2 \\ &\le \| x_n - u \|^2 - \beta_n ( 1 - \beta_n ) \| x_n - T y_n \|^2 \\ &\le \| x_n - u \|^2 . \end{aligned}$$
Since $\| x_n - u \| \to M$ and $\beta_n ( 1 - \beta_n ) \ge c ( 1 - d ) > 0$, we obtain
$$\| x_n - T y_n \| \to 0 \quad ( n \to \infty ) .$$
By (12), we have
$$( 1 - \beta_n ) \left( \| y_n - u \| - \| x_n - u \| \right) \to 0 \quad ( n \to \infty ) .$$
Hence, since $1 - \beta_n \ge 1 - d > 0$,
$$\lim_{n \to \infty} \| y_n - u \| = M .$$
Since $F$ is $k$-inverse strongly monotone, we can rewrite $y_n$ in the following form:
$$y_n = ( 1 - \lambda_n ) x_n + \lambda_n V_n x_n , \tag{14}$$
where $\lambda_n = \mu \alpha_n / ( 2 k ) \in ( 0, 1 )$ and $V_n := I - 2 k F$ is a nonexpansive mapping of $H$ into $H$ for each $n \in \mathbb{N}$.
Noting that $V_n u = u - 2 k F u = u$, we compute
$$\begin{aligned} \| y_n - u \|^2 &= \| ( 1 - \lambda_n ) x_n + \lambda_n V_n x_n - u \|^2 = \| ( 1 - \lambda_n ) ( x_n - u ) + \lambda_n ( V_n x_n - u ) \|^2 \\ &= ( 1 - \lambda_n ) \| x_n - u \|^2 + \lambda_n \| V_n x_n - u \|^2 - \lambda_n ( 1 - \lambda_n ) \| x_n - V_n x_n \|^2 \\ &\le \| x_n - u \|^2 - \lambda_n ( 1 - \lambda_n ) \| x_n - V_n x_n \|^2 \\ &\le \| x_n - u \|^2 . \end{aligned}$$
Since $\| y_n - u \| \to M$, $\| x_n - u \| \to M$ and $\lambda_n ( 1 - \lambda_n )$ is bounded away from zero, we obtain
$$\| x_n - V_n x_n \| \to 0 \quad ( n \to \infty ) .$$
By (14), we get
$$y_n - x_n = \lambda_n ( V_n x_n - x_n ) ,$$
so
$$\| y_n - x_n \| \to 0 \quad ( n \to \infty ) .$$
Hence
$$\| y_n - T y_n \| \le \| y_n - x_n \| + \| x_n - T y_n \| \to 0 \quad ( n \to \infty ) .$$
So, by Lemma 4, we get $\omega_w ( x_n ) \subset Fix(T)$.
Secondly, let us show that $\omega_w ( x_n ) \subset F^{-1} 0$.
We first prove that $I - \mu \alpha F$ is a nonexpansive mapping whenever $\mu \alpha \in ( 0 , 2 k ]$:
$$\begin{aligned} \| ( I - \mu \alpha F ) x - ( I - \mu \alpha F ) y \|^2 &= \langle ( I - \mu \alpha F ) x - ( I - \mu \alpha F ) y , ( I - \mu \alpha F ) x - ( I - \mu \alpha F ) y \rangle \\ &= \| x - y \|^2 - 2 \mu \alpha \langle x - y , F x - F y \rangle + \mu^2 \alpha^2 \| F x - F y \|^2 \\ &\le \| x - y \|^2 + \mu \alpha ( \mu \alpha - 2 k ) \| F x - F y \|^2 \\ &\le \| x - y \|^2 . \end{aligned}$$
So $I - \mu \alpha F$ is a nonexpansive mapping.
Since $\| x_n - y_n \| \to 0$ $( n \to \infty )$, we have $\| x_n - ( I - \mu \alpha_n F ) x_n \| \to 0$ $( n \to \infty )$.
Because $\{ \alpha_n \}$ is bounded, we can find a subsequence $\{ \alpha_{n_i} \}$ which converges to some $\alpha$ with $\mu \alpha \in [ a, b ]$. For each $x^* \in \omega_w ( x_n )$, we may assume (passing to a further subsequence if necessary) that the subsequence $\{ x_{n_i} \}$ of $\{ x_n \}$ converges weakly to $x^*$. Since $F$ is Lipschitz continuous and $\{ x_n \}$ is bounded, $\{ F x_{n_i} \}$ is bounded, so
$$\| ( I - \mu \alpha_{n_i} F ) x_{n_i} - ( I - \mu \alpha F ) x_{n_i} \| = \mu | \alpha_{n_i} - \alpha | \, \| F x_{n_i} \| \to 0 \quad ( i \to \infty ) .$$
We also have
$$\| x_{n_i} - ( I - \mu \alpha_{n_i} F ) x_{n_i} \| \to 0 \quad ( i \to \infty ) .$$
Hence, we obtain
$$\| x_{n_i} - ( I - \mu \alpha F ) x_{n_i} \| \le \| x_{n_i} - ( I - \mu \alpha_{n_i} F ) x_{n_i} \| + \| ( I - \mu \alpha_{n_i} F ) x_{n_i} - ( I - \mu \alpha F ) x_{n_i} \| \to 0 \quad ( i \to \infty ) .$$
From Lemma 4, we get the conclusion that
$$x^* \in Fix( I - \mu \alpha F ) = F^{-1} 0 .$$
According to all of the above, we have
$$\omega_w ( x_n ) \subset Fix(T) \cap F^{-1} 0 .$$
Consequently, from Lemma 5, we get
$$x_n \rightharpoonup x^* \in F^{-1} 0 \cap Fix(T) .$$
Hence, by Lemma 6, we obtain
$$x^* = \lim_{n \to \infty} P_{F^{-1} 0 \cap Fix(T)} x_n .$$
On the other hand, if $x^* \in F^{-1} 0$, then $F x^* = 0$.
The problem $VI( Fix(T) , F )$ is expressed as:
find $x^* \in Fix(T)$ such that $\langle F x^* , x - x^* \rangle \ge 0$, $\forall x \in Fix(T)$.
Obviously, when $x^* \in F^{-1} 0$, the above inequality holds.
Therefore,
$$F^{-1} 0 \cap Fix(T) \subset VI( Fix(T) , F ) ,$$
so $x^*$ is also a point of $VI( Fix(T) , F )$.
This completes the proof. □

4. Application

In this section, we will illustrate the practical value of our algorithm and give some applications, which are useful in nonlinear analysis and optimization.
In the following, we mainly discuss the equilibrium problem and the split feasibility problem by applying the idea of Theorem 1 to obtain weak convergence theorems in a real Hilbert space.
First of all, let us recall the equilibrium problem.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $f : C \times C \to \mathbb{R}$ be a bifunction. We consider the equilibrium problem (see [12,13,14,15]), which is to find $z \in C$ such that
$$f( z, y ) \ge 0 , \quad \forall y \in C .$$
We denote the set of all such $z \in C$ by $EP( f )$, i.e.,
$$EP( f ) = \{ z \in C : f( z, y ) \ge 0 , \ \forall y \in C \} .$$
Assume that the bifunction $f$ satisfies the following conditions:
  • (A1) $f( x, x ) = 0$ for all $x \in C$;
  • (A2) $f( x, y ) + f( y, x ) \le 0$ for all $x, y \in C$, i.e., $f$ is monotone;
  • (A3) $\limsup_{t \downarrow 0} f( t z + ( 1 - t ) x , y ) \le f( x, y )$ for all $x, y, z \in C$;
  • (A4) for each $x \in C$, $y \mapsto f( x, y )$ is convex and lower semicontinuous.
If $f$ satisfies the above conditions (A1)–(A4), then for every $r > 0$ and $x \in H$ there exists $z \in C$ such that [16]
$$f( z, y ) + \frac{1}{r} \langle y - z , z - x \rangle \ge 0 , \quad \forall y \in C .$$
Lemma 7. 
([15]) Assume that $f : C \times C \to \mathbb{R}$ satisfies conditions (A1)–(A4). For $r > 0$, define the mapping $J_r : H \to C$, called the resolvent of $f$, by
$$J_r( x ) = \{ z \in C : f( z, y ) + \frac{1}{r} \langle y - z , z - x \rangle \ge 0 , \ \forall y \in C \} , \quad \forall x \in H .$$
Then the following hold:
  • (i) $J_r$ is single-valued;
  • (ii) $J_r$ is a firmly nonexpansive mapping, i.e., for all $x, y \in H$,
    $$\langle J_r( x ) - J_r( y ) , x - y \rangle \ge \| J_r( x ) - J_r( y ) \|^2 ;$$
  • (iii) $Fix( J_r ) = EP( f )$;
  • (iv) $EP( f )$ is closed and convex.
From Lemma 7, we know that, under certain conditions, solving the equilibrium problem can be transformed into solving a fixed point problem. Combining this with the idea of Theorem 1, we obtain the following result.
Theorem 2.
Let $H$ be a real Hilbert space and $C$ be a nonempty closed convex subset of $H$. Let $f : C \times C \to \mathbb{R}$ be a bifunction which satisfies conditions (A1)–(A4). Let $F : H \to H$ be a $k$-inverse strongly monotone mapping. Assume $F^{-1} 0 \cap EP( f ) \neq \emptyset$. Let the sequences $\{ x_n \}$ and $\{ y_n \}$ be generated by $x_1 \in H$ and
$$\begin{cases} y_n = ( I - \mu \alpha_n F ) x_n , \\ x_{n+1} = \beta_n x_n + ( 1 - \beta_n ) J_r y_n , \quad n \in \mathbb{N} , \end{cases}$$
where $\{ \mu \alpha_n \}$, $\{ \beta_n \}$ and $r$ satisfy the following conditions:
  • (i) $\{ \mu \alpha_n \} \subset [ a, b ]$ with $a, b \in ( 0 , 2 k )$;
  • (ii) $\{ \beta_n \} \subset [ c, d ]$ with $c, d \in ( 0, 1 )$;
  • (iii) $r$ is a positive real number.
Then the sequence $\{ x_n \}$ converges weakly to a point $x^* \in F^{-1} 0 \cap EP( f )$, where $x^* = \lim_{n \to \infty} P_{F^{-1} 0 \cap EP( f )} x_n$. At the same time, $x^*$ is a solution of $VI( EP( f ) , F )$.
Proof. 
Set $T = J_r$; combining Theorem 1 and Lemma 7, the result follows. □
Next, we look at the split feasibility problem.
In 1994, the split feasibility problem was introduced by Censor and Elfving [17]. It is formulated as follows:
Find $x^*$ such that $x^* \in C$ and $A x^* \in Q$,
(see [17,18,19,20,21,22,23]) where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. We usually abbreviate the split feasibility problem as SFP.
In 2002, the so-called CQ algorithm was first introduced by Byrne [21]. Define $\{ x_n \}$ by
$$x_{n+1} = P_C ( I - \beta A^* ( I - P_Q ) A ) x_n , \quad n \ge 0 ,$$
where $0 < \beta < 2 / \| A \|^2$, $A^*$ is the adjoint of $A$, and $P_C$ and $P_Q$ are metric projections. A small numerical sketch of this iteration is given below.
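The following is a minimal sketch of the CQ iteration under assumed toy data of ours (a random matrix $A$ and boxes for $C$ and $Q$, so that both projections have closed forms); it only illustrates that two projections are computed in every step.

```python
import numpy as np

# Minimal sketch of Byrne's CQ algorithm under assumed toy data: C and Q are
# boxes (closed-form projections) and A is a random matrix standing in for
# the bounded linear operator.

def proj_box(v, lo, hi):
    return np.clip(v, lo, hi)

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=5_000):
    beta = 1.0 / np.linalg.norm(A, 2) ** 2    # any beta in (0, 2/||A||^2)
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        # x_{n+1} = P_C((I - beta A*(I - P_Q)A) x_n): two projections per step.
        x = proj_C(x - beta * (A.T @ (Ax - proj_Q(Ax))))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))
x = cq_algorithm(A,
                 proj_C=lambda v: proj_box(v, -1.0, 1.0),
                 proj_Q=lambda v: proj_box(v, -0.5, 0.5),
                 x0=2.0 * np.ones(5))
print(x, A @ x)   # x should lie in C with A x (approximately) in Q
```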
From the CQ iteration we can see that two projections must be calculated at each step. So, can we use the idea of the Yamada iteration to improve the algorithm? We consider $C$ to be the fixed point set of a nonexpansive mapping $T$, and we arrive at the following conclusion.
Before solving this problem, we give a lemma.
Lemma 8.
Let $H_1$ and $H_2$ be real Hilbert spaces, let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$, and let $G$ be a firmly nonexpansive mapping of $H_2$ into $H_2$. Then $A^* ( I - G ) A$ is a $1 / \| A \|^2$-inverse strongly monotone operator, i.e., for all $x, y \in H_1$,
$$\langle x - y , A^* ( I - G ) A x - A^* ( I - G ) A y \rangle \ge \frac{1}{\| A \|^2} \, \| A^* ( I - G ) A x - A^* ( I - G ) A y \|^2 .$$
Proof. 
Since $G$ is a firmly nonexpansive mapping,
$$\langle x - y , G x - G y \rangle \ge \| G x - G y \|^2 , \quad \forall x, y \in H_2 .$$
Let $x, y \in H_1$. Then
$$\begin{aligned} \| A^* ( I - G ) A x - A^* ( I - G ) A y \|^2 &\le \| A \|^2 \, \| ( I - G ) A x - ( I - G ) A y \|^2 \\ &= \| A \|^2 \left( \| A x - A y \|^2 + \| G A x - G A y \|^2 - 2 \langle A x - A y , G A x - G A y \rangle \right) \\ &\le \| A \|^2 \, \langle A x - A y , A x - A y - ( G A x - G A y ) \rangle \\ &= \| A \|^2 \, \langle A x - A y , ( I - G ) A x - ( I - G ) A y \rangle \\ &= \| A \|^2 \, \langle x - y , A^* ( I - G ) A x - A^* ( I - G ) A y \rangle . \end{aligned}$$
Hence,
$$\langle x - y , A^* ( I - G ) A x - A^* ( I - G ) A y \rangle \ge \frac{1}{\| A \|^2} \, \| A^* ( I - G ) A x - A^* ( I - G ) A y \|^2 ,$$
so $A^* ( I - G ) A$ is $1 / \| A \|^2$-inverse strongly monotone. □
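As a sanity check (not part of the proof), the snippet below numerically verifies the inequality of Lemma 8 for $G = P_Q$ with $Q$ a box, a metric projection being firmly nonexpansive; the operator $A$ and the box are arbitrary toy choices of ours.

```python
import numpy as np

# Numerical check of Lemma 8 on random points: with G = P_Q (firmly
# nonexpansive) and S = A*(I - G)A, verify
#   <x - y, Sx - Sy> >= (1/||A||^2) ||Sx - Sy||^2.
# A and the box Q = [-0.5, 0.5]^3 are illustrative assumptions.

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))
G = lambda v: np.clip(v, -0.5, 0.5)       # P_Q for the box Q
S = lambda v: A.T @ (A @ v - G(A @ v))    # S = A*(I - G)A
c = 1.0 / np.linalg.norm(A, 2) ** 2       # the constant 1/||A||^2

for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    d = S(x) - S(y)
    assert (x - y) @ d >= c * (d @ d) - 1e-10, "Lemma 8 inequality violated"
print("Lemma 8 inequality holds on all sampled pairs")
```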
Below we present the related theorem for the split feasibility problem.
Theorem 3.
Let $H_1$ and $H_2$ be real Hilbert spaces, let $T : H_1 \to H_1$ be a nonexpansive mapping with $Fix(T) \neq \emptyset$, and let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Assume that the solution set of the split feasibility problem intersected with $Fix(T)$ is nonempty. Let the sequences $\{ x_n \}$ and $\{ y_n \}$ be generated by $x_1 \in H_1$ and
$$\begin{cases} y_n = ( I - \mu \alpha_n A^* ( I - P_Q ) A ) x_n , \\ x_{n+1} = \beta_n x_n + ( 1 - \beta_n ) T y_n , \quad n \in \mathbb{N} , \end{cases}$$
where $\{ \mu \alpha_n \}$ and $\{ \beta_n \}$ satisfy the following conditions:
  • (i) $\{ \mu \alpha_n \} \subset [ a, b ]$ with $a, b \in ( 0 , 2 / \| A \|^2 )$;
  • (ii) $\{ \beta_n \} \subset [ c, d ]$ with $c, d \in ( 0, 1 )$.
Then the sequence $\{ x_n \}$ converges weakly to a point $x^*$ which solves the SFP and belongs to $Fix(T)$.
Proof. 
We notice that $P_Q$ is a firmly nonexpansive mapping; hence, by Lemma 8, $A^* ( I - P_Q ) A$ is $1 / \| A \|^2$-inverse strongly monotone.
Putting $F = A^* ( I - P_Q ) A$ in Theorem 1, the conclusion is obtained. □

5. Numerical Result

In this section, we give a concrete example, the solution of a system of linear equations, to assess the effectiveness of our algorithm by comparison with Equation (4.7) of Theorem 4.5 in [24].
In the following, we give a $5 \times 5$ system of linear equations which we solve with the iterative algorithm of Theorem 3.
Example 1.
Let us solve the linear equation A x = b .
Assume that $H_1 = H_2 = \mathbb{R}^5$. In the following, we take
$$T = \begin{pmatrix} \frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 \\ 0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{3} & \frac{1}{3} \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} ,$$
and choose the parameters $\mu \alpha_n = \frac{1}{324 ( n + 1 )} + \frac{1}{324}$ and $\beta_n = \frac{1}{2} + \frac{1}{3 n}$.
Consider $A x = b$, where
$$A = \begin{pmatrix} 1 & 1 & 2 & 2 & 1 \\ 0 & 2 & 1 & 5 & -1 \\ 1 & 1 & 0 & 4 & -1 \\ 2 & 0 & 3 & 1 & 5 \\ 2 & 2 & 3 & 6 & 1 \end{pmatrix} , \qquad b = \begin{pmatrix} \frac{43}{16} \\ 2 \\ \frac{19}{16} \\ \frac{51}{8} \\ \frac{41}{8} \end{pmatrix} .$$
We clearly know that the SFP can be formulated as the problem of finding a point $x^*$ such that
$$x^* \in C \quad \text{and} \quad A x^* \in Q ,$$
where $C = \mathbb{R}^5$ and $Q = \{ b \}$.
In other words, $x^*$ is the solution of the system of linear equations $A x = b$, namely
$$x^* = \left( \frac{1}{16} , \frac{1}{8} , \frac{1}{4} , \frac{1}{2} , 1 \right)^T .$$
Then, by Theorem 3, the sequence $\{ x_n \}$ is generated by
$$\begin{cases} y_n = x_n - \left( \frac{1}{324 ( n + 1 )} + \frac{1}{324} \right) A^* A x_n + \left( \frac{1}{324 ( n + 1 )} + \frac{1}{324} \right) A^* b , \\ x_{n+1} = \left( \frac{1}{2} + \frac{1}{3 n} \right) x_n + \left( \frac{1}{2} - \frac{1}{3 n} \right) T y_n , \quad n \in \mathbb{N} . \end{cases}$$
As $n \to \infty$, $x_n \to x^* = \left( \frac{1}{16} , \frac{1}{8} , \frac{1}{4} , \frac{1}{2} , 1 \right)^T$. A runnable sketch of this iteration is given below.
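The following runnable sketch reproduces the iteration of Example 1 with the data above. Since $Q = \{ b \}$, $P_Q$ is the constant map $v \mapsto b$, so $A^* ( I - P_Q ) A x = A^T ( A x - b )$. The indexing convention (the initial iterate $( 1, \ldots, 1 )$ corresponding to row $n = 0$ of Table 1) and our reading of the parameter formulas are assumptions.

```python
import numpy as np

# Sketch of the iteration in Example 1 (Theorem 3 with C = R^5, Q = {b}, so
# P_Q(v) = b and A*(I - P_Q)A x = A^T (A x - b)). Data as stated above.

A = np.array([[1., 1., 2., 2., 1.],
              [0., 2., 1., 5., -1.],
              [1., 1., 0., 4., -1.],
              [2., 0., 3., 1., 5.],
              [2., 2., 3., 6., 1.]])
b = np.array([43/16, 2., 19/16, 51/8, 41/8])
T = np.array([[1/3, 1/3, 0., 0., 0.],
              [0., 1/3, 1/3, 0., 0.],
              [0., 0., 1/3, 1/3, 0.],
              [0., 0., 0., 1/3, 1/3],
              [0., 0., 0., 0., 1.]])
x_star = np.array([1/16, 1/8, 1/4, 1/2, 1.])   # exact solution of A x = b

x = np.ones(5)                                 # initial iterate, row n = 0 of Table 1
for n in range(1, 101):
    mu_alpha = 1 / (324 * (n + 1)) + 1 / 324
    beta = 1/2 + 1 / (3 * n)
    y = x - mu_alpha * (A.T @ (A @ x - b))     # y_n = (I - mu alpha_n A*(I-P_Q)A) x_n
    x = beta * x + (1 - beta) * (T @ y)        # x_{n+1} = beta_n x_n + (1-beta_n) T y_n
    if n in (10, 20, 50, 100):
        print(n, np.round(x, 4), np.linalg.norm(x - x_star))  # compare with Table 1
```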
From Tables 1 and 2 below, we can easily observe that, as the number of iterations increases, $x_n$ gets closer and closer to the exact solution $x^*$ and the errors $E_n = \| x_n - x^* \|$ gradually approach zero.
From the above verification, we can see that our algorithm is effective and that the algorithm of Theorem 3 performs better than the algorithm of Theorem 4.5 in [24].

6. Conclusions

The variational inequality problem is an important branch of mathematical research and plays an important role in nonlinear analysis and optimization. Nowadays, there are many different ways to solve the variational inequality problem; the main ones are projection methods and the Yamada method. However, both have limitations: the projection in projection methods is not easy to calculate in some cases, and the conditions on the operator in the Yamada method are too strong. Nevertheless, each has its own advantages. Based on the Yamada algorithm, Zhou and Wang proposed a new iterative algorithm and obtained a strong convergence result. In this paper, we considered how to avoid using projections while weakening the conditions of the algorithm: our algorithm does not require a projection, and the operator is only required to be inverse strongly monotone. We then obtained a weak convergence result. Finally, we applied this algorithm to the equilibrium problem and the split feasibility problem and demonstrated its effectiveness.

Author Contributions

All authors contributed equally to writing this article. All authors read and approved the manuscript.

Funding

This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
  2. Nadezhkina, N.; Takahashi, W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128, 191–201.
  3. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  4. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2018, 78, 1045–1060.
  5. Dong, Q.L.; Cho, Y.J.; Zhong, L.L. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  6. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Elsevier: Amsterdam, The Netherlands, 2001; pp. 473–504.
  7. Zhou, H.; Wang, P. A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 161, 716–727.
  8. Marino, G.; Xu, H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52.
  9. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  10. Hundal, H. An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57, 35–61.
  11. Takahashi, W.; Toyoda, M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118, 417–428.
  12. Moudafi, A. Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 2008, 9, 37–43.
  13. Flam, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1997, 78, 29–41.
  14. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  15. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  16. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
  17. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  18. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  19. Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411.
  20. Zhao, J.L.; Zhang, Y.J.; Yang, Q.Z. Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219, 1644–1653.
  21. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  22. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1662.
  23. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266.
  24. Tian, M.; Jiang, B.N. Weak convergence theorem for zero points of inverse strongly monotone mapping and fixed points of nonexpansive mapping in Hilbert space. Optimization 2017, 66, 1689–1698.
Table 1. Numerical results for Example 1 (the algorithm of Theorem 3).

  n     $x_n^1$   $x_n^2$   $x_n^3$   $x_n^4$   $x_n^5$   $E_n$
  0     1.0000    1.0000    1.0000    1.0000    1.0000    1.5675
  10    0.1828    0.1858    0.2456    0.4440    0.8835    0.1868
  20    0.0728    0.1268    0.2404    0.4664    0.9260    0.0825
  50    0.0640    0.1256    0.2485    0.4935    0.9854    0.0161
  100   0.0629    0.1254    0.2504    0.5003    1.0003    7.8067 × 10^{-4}
Table 2. Numerical results for the algorithm of Theorem 4.5 in [24].

  n     $x_n^1$   $x_n^2$   $x_n^3$   $x_n^4$   $x_n^5$   $E_n$
  0     1.0000    1.0000    1.0000    1.0000    1.0000    1.5675
  10    0.1776    0.1704    0.2441    0.4433    0.8775    0.1832
  20    0.0817    0.1340    0.2560    0.4841    0.9509    0.0562
  50    0.0641    0.1263    0.2512    0.4997    0.9980    0.0031
  100   0.0630    0.1257    0.2509    0.5008    1.0014    0.0020
