Article

Modified Halpern Iterative Method for Solving Hierarchical Problem and Split Combination of Variational Inclusion Problem in Hilbert Space

by
Bunyawee Chaloemyotphong
and
Atid Kangtunyakarn
*
Department of Mathematics, Faculty of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1037; https://doi.org/10.3390/math7111037
Submission received: 29 August 2019 / Revised: 28 October 2019 / Accepted: 29 October 2019 / Published: 3 November 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract

The purpose of this paper is to introduce the split combination of variational inclusion problem, which combines the modified variational inclusion problem introduced by Khuangsatung and Kangtunyakarn with the split variational inclusion problem introduced by Moudafi. Using a modified Halpern iterative method, we prove a strong convergence theorem for finding a common solution of the hierarchical fixed point problem and the split combination of variational inclusion problem. The result presented in this paper yields corresponding results for the split zero point problem and the split combination of variational inequality problem. Moreover, we give numerical examples supporting our result and showing that the result fails if some of its conditions are dropped.

1. Introduction

Throughout this article, we let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and let $C$ be a nonempty closed convex subset of $H$.
Definition 1.
Let $C$ be a nonempty subset of a real Hilbert space $H$ and let $Z : C \to C$ be a self-mapping. $Z$ is called a nonexpansive mapping if
$$\|Zx - Zy\| \le \|x - y\|, \quad \text{for all } x, y \in C.$$
$Z$ is called a firmly nonexpansive mapping if
$$\|Zx - Zy\|^2 \le \langle x - y, Zx - Zy\rangle, \quad \text{for all } x, y \in C.$$
A mapping $W : C \to H$ is called $\alpha$-inverse strongly monotone [1] if there exists a positive real number $\alpha$ such that
$$\langle x - y, Wx - Wy\rangle \ge \alpha\|Wx - Wy\|^2, \quad \forall x, y \in C. \qquad (1)$$
If $W : C \to H$ is $\alpha$-inverse strongly monotone, then $W$ is a monotone mapping, that is,
$$\langle Wx - Wy, x - y\rangle \ge 0, \quad \forall x, y \in C.$$
Remark 1.
(i) If $\alpha = 1$ in Equation (1), then $W$ is a firmly nonexpansive mapping.
For $i = 1, 2, \ldots, N$, let $A_i : H \to H$ be a single-valued mapping and $M : H \to 2^H$ be a multi-valued mapping. Based on the concept of variational inclusion problems, Khuangsatung and Kangtunyakarn [2] introduced the problem of finding $x \in H$ such that
$$\theta \in \sum_{i=1}^N a_iA_ix + Mx, \qquad (2)$$
for all $a_i \in (0,1)$ with $\sum_{i=1}^N a_i = 1$, where $\theta$ is the zero vector. This problem is called the modified variational inclusion. The set of solutions of Equation (2) is denoted by $VI(H, \sum_{i=1}^N a_iA_i, M)$. If we set $A_i \equiv B$ for $i = 1, 2, \ldots, N$, then Equation (2) reduces to $\theta \in Bx + Mx$, which is the variational inclusion problem. The set of solutions of the variational inclusion problem is denoted by $VI(H, B, M)$.
Variational inclusion problems are extensively studied in mathematical programming, optimal control, mathematical economics, etc. In recent years, considerable interest has been shown in developing various extensions and generalizations of the variational inclusion problem; see, for instance, [3,4] and the references therein.
The operator $M$ is called maximal monotone [5] if $M$ is monotone, i.e., $\langle u - v, x - y\rangle \ge 0$ whenever $u \in M(x)$, $v \in M(y)$, and the graph $G(M)$ of $M$ (that is, $G(M) := \{(x, u) \in H \times H : u \in M(x)\}$) is not properly contained in the graph of any other monotone operator.
The resolvent operator $J_\lambda^M : H \to H$ is defined by $J_\lambda^M(x) = (I + \lambda M)^{-1}(x)$ for all $x \in H$, where $M$ is a multi-valued maximal monotone mapping, $\lambda > 0$ and $I$ is the identity mapping.
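As a quick illustration (not taken from the paper), the following minimal Python sketch computes the resolvent of the simple maximal monotone operator $M(x) = cx$ on $\mathbb{R}$, for which $J_\lambda^M(x) = x/(1 + \lambda c)$; the operator, the constant $c$ and all numerical values here are our own illustrative assumptions.

```python
# Minimal sketch (not from the paper): resolvent of M(x) = c*x on the real line,
# where J_lambda^M = (I + lambda*M)^{-1}.  All constants are illustrative.

def resolvent_linear(x, lam, c):
    """Resolvent J_lambda^M(x) for M(x) = c*x on R: solve y + lam*c*y = x."""
    return x / (1.0 + lam * c)

if __name__ == "__main__":
    lam, c, x = 0.5, 2.0, 3.0
    y = resolvent_linear(x, lam, c)            # y = 3 / (1 + 1) = 1.5
    # Check the defining relation x = (I + lam*M)(y), i.e. x = y + lam*c*y.
    assert abs((y + lam * c * y) - x) < 1e-12
    print("J_lambda^M(3.0) =", y)
```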
Let $T : C \to C$ be a mapping. A point $x \in C$ is called a fixed point of $T$ if $Tx = x$. The set of fixed points of $T$ is denoted by $Fix(T) = \{x \in C : Tx = x\}$. The fixed point problem is an important area of mathematical analysis; its solutions appear in many problems in Hilbert spaces, such as nonlinear operator equations and variational inclusion problems; see, for instance, [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18].
Khuangsatung and Kangtunyakarn [2] proposed the following iterative algorithm:
$$\begin{cases} w_1, \mu \in H, \\ \sum_{i=1}^N b_i\Psi_i(z_n, y) + \frac{1}{r_n}\langle y - z_n, z_n - w_n\rangle \ge 0, \quad \forall y \in C, \\ w_{n+1} = \alpha_n\mu + \beta_nw_n + \gamma_nJ_\lambda^M\bigl(I - \lambda\sum_{i=1}^N a_iA_i\bigr)w_n + \eta_n\bigl(I - \rho_n(I - S)\bigr)w_n + \delta_nz_n, \quad \forall n \ge 1, \end{cases}$$
where $S : H \to H$ is a $\kappa$-strictly pseudononspreading mapping (i.e., there exists $\kappa \in [0,1)$ such that $\|Su - Sv\|^2 \le \|u - v\|^2 + \kappa\|(I - S)u - (I - S)v\|^2 + 2\langle u - Su, v - Sv\rangle$ for all $u, v \in H$) and $\Psi_i : C \times C \to \mathbb{R}$ is a bifunction for all $i = 1, 2, \ldots, N$. Under certain assumptions on the $\Psi_i$, they proved a strong convergence theorem for solving the modified variational inclusion problem under suitable conditions on $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\eta_n\}, \{\delta_n\}$ and $\{\rho_n\}$.
Over the decades, many mathematicians have studied the variational inequality problem, which is one of the important problems in nonlinear analysis. The methods used to solve it can be applied in many areas, such as physics, economics, finance, optimization, network analysis, medical imaging, water resources and structural analysis. The set of solutions of the variational inequality problem is denoted by
$$VI(C, A) = \{u \in C : \langle v - u, Au\rangle \ge 0, \ \forall v \in C\},$$
where $A : C \to H$ is a mapping.
Many iterative methods have been developed for solving the variational inequality problem; see, for instance, [7,8].
Using the concept of the variational inequality problem, Moudafi and Mainge [9] first introduced the hierarchical fixed point problem for a nonexpansive mapping $T$ with respect to another nonexpansive mapping $S$ on $H$: find $x^* \in Fix(T)$ such that
$$\langle Sx^* - x^*, x - x^*\rangle \le 0, \quad \forall x \in Fix(T), \qquad (3)$$
where $S : H \to H$ is a nonexpansive mapping. It is easy to see that Equation (3) is equivalent to the following fixed point problem: find $x^* \in H$ such that
$$x^* = P_{Fix(T)}Sx^*, \qquad (4)$$
where $P_{Fix(T)}$ is the metric projection of $H$ onto $Fix(T)$. The solution set of Equation (3) is denoted by $\Phi = \{x^* \in H : \langle Sx^* - x^*, x - x^*\rangle \le 0, \ \forall x \in Fix(T)\}$. It is obvious that $\Phi = VI(Fix(T), I - S)$. Note that Equation (3) covers monotone variational inequalities on fixed point sets, minimization problems, etc. Many iterative methods have been developed for solving the hierarchical fixed point problem in Equation (3); see, for example, [9,10,11].
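The following small Python sketch (a toy example of our own, not from the paper) illustrates the equivalence between Equation (3) and Equation (4): for $Tx = \max\{0, x\}$ and $Sx = \frac{x}{2} + 1$ on $\mathbb{R}$, the composition $P_{Fix(T)} \circ S$ is a contraction, so simple fixed-point iteration of Equation (4) locates the hierarchical solution $x^* = 2$.

```python
# Toy check (our assumptions) of the equivalence (3) <=> (4) on the real line:
# Fix(T) = [0, inf), so P_{Fix(T)} is truncation at 0, and x* = 2 solves (4).

def T(x): return max(0.0, x)          # nonexpansive, Fix(T) = [0, inf)
def S(x): return 0.5 * x + 1.0        # nonexpansive (in fact a contraction)
def P_FixT(x): return max(0.0, x)     # metric projection onto Fix(T)

x = -7.0
for _ in range(60):
    x = P_FixT(S(x))                  # iterate the fixed-point form (4)
print("x* =", x)                      # approx 2.0
print("S(x*) - x* =", S(x) - x)       # approx 0, so the inequality (3) holds
```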
Using the concept of the Krasnoselski–Mann iterative algorithm, Moudafi [10] introduced the iterative scheme (5) for nonexpansive mappings $S, T$ on a subset $C$ of a Hilbert space:
$$x_0 \in C, \quad x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\bigl(\sigma_nPx_n + (1 - \sigma_n)Tx_n\bigr), \quad n \ge 0. \qquad (5)$$
He proved weak convergence of the sequence $\{x_n\}$, where $\{\alpha_n\}, \{\sigma_n\} \subset (0,1)$ satisfy the following conditions:
(i) $\sum_{n=0}^{+\infty}\sigma_n < +\infty$;
(ii) $\sum_{n=0}^{+\infty}\alpha_n(1 - \alpha_n) = +\infty$;
(iii) $\lim_{n\to+\infty}\dfrac{\|x_{n+1} - x_n\|}{(1 - \alpha_n)\sigma_n} = 0$.
Let $H_1$ and $H_2$ be two real Hilbert spaces and let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator. Censor and Elfving [14] introduced the split feasibility problem (SFP), which is to find a point $x \in C$ such that $Ax \in Q$. Many authors have used the concept of the SFP to modify their own problems; see, for example, [12,13,14,15].
In 2010, Censor, Gibali and Reich [13] introduced the split variational inequality problem, which relies on the split feasibility problem, and constructed an iterative algorithm together with a strong convergence theorem for solving it; see [13] for more detail.
The split monotone variational inclusion problem includes several special cases that are used in practice, for instance as models in intensity-modulated radiation therapy treatment planning and in many inverse problems; see, for instance, [11,12,13,14,15].
For every $i = 1, 2, \ldots, N$, let $A_i : H_1 \to H_1$ and $B_i : H_2 \to H_2$ be mappings, and let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued mappings. Inspired and motivated by Moudafi [12] and Khuangsatung and Kangtunyakarn [2], we define the split combination of variational inclusion problem (SCVIP), which is to find $x \in H_1$ such that
$$\theta_{H_1} \in \sum_{i=1}^N a_iA_ix + M_Ax, \qquad (6)$$
and
$$y = Ax \ \text{such that} \ \theta_{H_2} \in \sum_{i=1}^N b_iB_iy + M_By, \qquad (7)$$
where $A : H_1 \to H_2$ is a bounded linear operator and $\sum_{i=1}^N a_i = \sum_{i=1}^N b_i = 1$.
The set of all solutions of Equations (6) and (7) is denoted by $\Omega = \{x \in VI(H_1, \sum_{i=1}^N a_iA_i, M_A) : Ax \in VI(H_2, \sum_{i=1}^N b_iB_i, M_B)\}$.
If we set $A_i \equiv A$ and $B_i \equiv B$ for all $i = 1, 2, \ldots, N$, then the SCVIP reduces to the split monotone variational inclusion problem (SMVI), which is:
$$\text{find } x \in H_1 \text{ such that } 0 \in A(x) + M_A(x), \qquad (8)$$
and such that
$$y = Ax \in H_2 \ \text{solves} \ 0 \in B(y) + M_B(y), \qquad (9)$$
introduced by Moudafi [12]. The set of all solutions of Equations (8) and (9) is denoted by $\Theta = \{x \in VI(H_1, A, M_A) : Ax \in VI(H_2, B, M_B)\}$.
Very recently, Kazmi et al. [11] proved a strong convergence theorem, under suitable conditions on the parameters, for solving the hierarchical fixed point problem and the SMVI by using the following hybrid iterative method:
$$\begin{cases} x_0 \in C, \ C_0 = C; \\ u_n = (1 - \alpha_n)x_n + \alpha_nP_C\bigl(\sigma_nSx_n + (1 - \sigma_n)W_nx_n\bigr); \\ z_n = J_\lambda^{M_1}(I - \lambda f)(u_n); \\ w_n = J_\lambda^{M_2}(I - \lambda g)(Az_n); \\ y_n = z_n + \gamma A^*(w_n - Az_n); \\ C_n = \{z \in C : \|y_n - z\|^2 \le (1 - \alpha_n\sigma_n)\|x_n - z\|^2 + \alpha_n\sigma_n\|Sx_n - z\|^2\}; \\ Q_n = \{z \in C : \langle x_n - z, x_0 - x_n\rangle \ge 0\}; \\ x_{n+1} = P_{C_n\cap Q_n}x_0, \quad n \ge 0, \end{cases}$$
where $M_1 : H_1 \to 2^{H_1}$ and $M_2 : H_2 \to 2^{H_2}$ are multi-valued maximal monotone operators, $f : C \to H_1$ is a $\theta_1$-inverse strongly monotone mapping, $g : Q \to H_2$ is a $\theta_2$-inverse strongly monotone mapping, $\{T_i\}_{i=1}^N : C \to C$ is a finite family of nonexpansive mappings, and $W_n$ is the $W$-mapping generated by $T_1, T_2, \ldots, T_N$ and $\lambda_{n,1}, \lambda_{n,2}, \ldots, \lambda_{n,N}$ for all $n \in \mathbb{N} \cup \{0\}$.
Based on the results mentioned above, we state our theorem for the SCVIP and summarize the main contributions as follows:
(i)
We first establish Lemma 8, which shows the equivalence between the SCVIP and a fixed point problem for a nonexpansive mapping under suitable conditions on the parameters. Furthermore, we give examples supporting Lemma 8; one of them shows that Lemma 8 fails if a condition is dropped.
(ii)
We establish a strong convergence theorem for the sequences generated by the modified Halpern iterative method for finding a common solution of the hierarchical fixed point problem for a nonexpansive mapping and the SCVIP.
(iii)
We apply our main result to obtain a strong convergence theorem for the sequences generated by the modified Halpern iterative method for finding a common solution of the hierarchical fixed point problem for a nonexpansive mapping and the split combination of variational inequality problem, as well as a strong convergence theorem for finding a common solution of the hierarchical fixed point problem for a nonexpansive mapping and the split zero point problem.
(iv)
We give illustrative numerical examples supporting our main result; the examples also show that the main result fails if some of its conditions are dropped.

2. Preliminaries

In this paper, we denote weak and strong convergence by the notations ’⇀’ and ’→’, respectively. We recall some concepts and results needed in the sequel.
Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Then, for any $x \in H$, there exists a unique nearest point in $C$, denoted by $P_Cx$, such that
$$\|x - P_Cx\| \le \|x - y\|, \quad \forall y \in C.$$
The mapping $P_C$ is called the metric projection of $H$ onto $C$. It is well known that $P_C$ is nonexpansive and satisfies
$$\langle x - y, P_Cx - P_Cy\rangle \ge \|P_Cx - P_Cy\|^2, \quad \forall x, y \in H.$$
Moreover, $P_Cx$ is characterized by the facts that $P_Cx \in C$ and
$$\langle x - P_Cx, y - P_Cx\rangle \le 0, \quad \forall y \in C,$$
which implies that
$$\|x - y\|^2 \ge \|x - P_Cx\|^2 + \|y - P_Cx\|^2, \quad \forall x \in H, \ y \in C.$$
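The next Python sketch (illustrative only; the interval and the random samples are our own assumptions) numerically verifies the characterization $\langle x - P_Cx, y - P_Cx\rangle \le 0$ for the projection onto a closed interval $C = [a, b] \subset \mathbb{R}$.

```python
# Minimal sketch: metric projection onto C = [a, b] in R and a numerical check
# of the characterization <x - P_C x, y - P_C x> <= 0 for all y in C.
import numpy as np

def proj_interval(x, a, b):
    """Metric projection of x onto C = [a, b]."""
    return min(max(x, a), b)

if __name__ == "__main__":
    a, b = -1.0, 2.0
    rng = np.random.default_rng(0)
    for _ in range(1000):
        x = rng.uniform(-10, 10)
        px = proj_interval(x, a, b)
        ys = rng.uniform(a, b, size=50)        # sample points of C
        assert np.all((x - px) * (ys - px) <= 1e-12), "characterization violated"
    print("(x - P_C x)(y - P_C x) <= 0 holds on all samples.")
```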
Lemma 1
([4]). Let $\{a_n\}, \{c_n\} \subset \mathbb{R}^+$, $\{\alpha_n\} \subset (0,1)$ and $\{b_n\} \subset \mathbb{R}$ be sequences such that
$$a_{n+1} \le (1 - \alpha_n)a_n + b_n + c_n, \quad \text{for all } n \ge 0.$$
Assume $\sum_{n=0}^{\infty}c_n < \infty$. Then the following results hold:
(i) if $b_n \le \alpha_nC$ for some $C \ge 0$, then $\{a_n\}$ is a bounded sequence;
(ii) if $\sum_{n=0}^{\infty}\alpha_n = \infty$ and $\limsup_{n\to\infty}\frac{b_n}{\alpha_n} \le 0$, then $\lim_{n\to\infty}a_n = 0$.
Lemma 2
([19]). Let $E$ be a uniformly convex Banach space, $C$ a nonempty closed convex subset of $E$, and $S : C \to C$ a nonexpansive mapping with $Fix(S) \neq \emptyset$. Then $I - S$ is demiclosed at zero.
Lemma 3
([4]). A point $u \in H$ is a solution of the variational inclusion problem if and only if $u = J_\lambda^M(u - \lambda Bu)$ for all $\lambda > 0$, i.e.,
$$VI(H, B, M) = Fix\bigl(J_\lambda^M(I - \lambda B)\bigr), \quad \forall \lambda > 0,$$
where $B : H \to H$ is a single-valued mapping. Further, if $\lambda \in (0, 2\alpha]$, then $VI(H, B, M)$ is a closed convex subset of $H$.
Lemma 4
([4]). The resolvent operator $J_\lambda^M$ associated with $M$ is single-valued, nonexpansive for all $\lambda > 0$ and $1$-inverse strongly monotone.
Lemma 5
([2]). Let $H$ be a real Hilbert space and let $M : H \to 2^H$ be a multi-valued maximal monotone mapping. For every $i = 1, 2, \ldots, N$, let $A_i : H \to H$ be an $\alpha_i$-inverse strongly monotone mapping with $\eta = \min_{i=1,2,\ldots,N}\{\alpha_i\}$ and $\bigcap_{i=1}^N VI(H, A_i, M) \neq \emptyset$. Then
$$VI\Bigl(H, \sum_{i=1}^N a_iA_i, M\Bigr) = \bigcap_{i=1}^N VI(H, A_i, M),$$
where $\sum_{i=1}^N a_i = 1$ and $0 < a_i < 1$ for every $i = 1, 2, \ldots, N$. Moreover, $J_\lambda^M\bigl(I - \lambda\sum_{i=1}^N a_iA_i\bigr)$ is a nonexpansive mapping for all $0 < \lambda < 2\eta$.
Example 1.
Let $H = \mathbb{R}$. For every $i = 1, 2, \ldots, N$, let $A_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{ix}{4} + (i+1)$ for all $x \in H$ and let $M : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $Mx = \{\frac{x}{4}\}$ for all $x \in \mathbb{R}$. Let $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$ for all $i = 1, 2, \ldots, N$. Then $VI(H, \sum_{i=1}^N a_iA_i, M) = \bigcap_{i=1}^N VI(H, A_i, M)$.
Proof of Solution.
Since $A_ix = \frac{ix}{4} + (i+1)$, the mapping $A_i$ is $\frac{4}{i}$-inverse strongly monotone. By the definitions of $a_i$ and $A_i$, we have
$$\sum_{i=1}^N a_iA_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)A_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)\Bigl(\frac{ix}{4} + (i+1)\Bigr).$$
From Lemma 5, we have $VI(H, \sum_{i=1}^N a_iA_i, M) = \bigcap_{i=1}^N VI(H, A_i, M) = \{4\}$. □
Example 2.
Let $H = \mathbb{R}$. For every $i = 1, 2, \ldots, N$, let $A_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{ix}{4} + (i+1)$ for all $x \in H$ and let $M : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $Mx = \{\frac{x}{4}\}$ for all $x \in \mathbb{R}$. Let $a_i = \frac{3}{4^i} + \frac{1}{N}\bigl(\frac{1}{4^N} + 1\bigr)$ for all $i = 1, 2, \ldots, N$. Then $VI(H, \sum_{i=1}^N a_iA_i, M) \neq \bigcap_{i=1}^N VI(H, A_i, M)$.
Proof of Solution.
Since $A_ix = \frac{ix}{4} + (i+1)$, the mapping $A_i$ is $\frac{4}{i}$-inverse strongly monotone. By the definitions of $a_i$ and $A_i$, we have
$$\sum_{i=1}^N a_iA_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N}\Bigl(\frac{1}{4^N} + 1\Bigr)\Bigr)A_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N}\Bigl(\frac{1}{4^N} + 1\Bigr)\Bigr)\Bigl(\frac{ix}{4} + (i+1)\Bigr).$$
Then $\bigcap_{i=1}^N VI(H, A_i, M) = \{4\}$ and $VI(H, \sum_{i=1}^N a_iA_i, M) \neq \{4\}$. It follows that $VI(H, \sum_{i=1}^N a_iA_i, M) \neq \bigcap_{i=1}^N VI(H, A_i, M)$, because $\sum_{i=1}^N a_i = 2$. □
Remark 2.
Example 1 shows that Lemma 5 holds when $\sum_{i=1}^N a_i = 1$, and Example 2 shows that Lemma 5 fails when this condition is violated, that is, when $\sum_{i=1}^N a_i \neq 1$.
Lemma 6
([17]). Let $C \subseteq H$ be a nonempty closed convex set and let $T : C \to H$ be a nonexpansive mapping. Then $Fix(T)$ is closed and convex.
Lemma 7.
Let $H_1$ and $H_2$ be Hilbert spaces. Let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued maximal monotone mappings. Let $A : H_1 \to H_2$ be a bounded linear operator. For every $i = 1, 2, \ldots, N$, let $A_i : H_1 \to H_1$ be $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,2,\ldots,N}\{\alpha_i\}$ and $B_i : H_2 \to H_2$ be $\beta_i$-inverse strongly monotone with $\eta_B = \min_{i=1,2,\ldots,N}\{\beta_i\}$. Then, for each $x, y \in H_1$,
$$\begin{aligned} &\Bigl\|J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr) - J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(y - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ay\Bigr)\Bigr\|^2 \\ &\quad \le \|x - y\|^2 - \gamma(1 - \gamma L)\Bigl\|\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax - \bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ay\Bigr\|^2, \end{aligned}$$
where $\lambda_A \in (0, 2\eta_A)$, $\lambda_B \in (0, 2\eta_B)$, $\sum_{i=1}^N a_i = \sum_{i=1}^N b_i = 1$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $A^*A$.
Proof. 
Let $x, y \in H_1$. Consider
J λ A M A ( I λ A i = 1 N a i A i ) ( x γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ) J λ A M A ( I λ A i = 1 N a i A i ) ( y γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y ) 2 ( x y ) γ ( A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y ) 2 = x y 2 2 γ x y , A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y + γ 2 A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 x y 2 + 2 γ A y A x , ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y + γ 2 L ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 = x y 2 + 2 γ A y A x + J λ B M B ( I λ B i = 1 N b i B i ) A x J λ B M B ( I λ B i = 1 N b i B i ) A x + J λ B M B ( I λ B i = 1 N b i B i ) A y J λ B M B ( I λ B i = 1 N b i B i ) A y , ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y + γ 2 L ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 = x y 2 + 2 γ [ J λ B M B ( I λ B i = 1 N b i B i ) A y J λ B M B ( I λ B i = 1 N b i B i ) A x , ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y , ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y ] + γ 2 L ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 x y 2 + 2 γ [ 1 2 ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 ] + γ 2 L ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 = x y 2 γ ( 1 γ L ) ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A y 2 .
Hence
$$\begin{aligned} &\Bigl\|J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr) - J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(y - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ay\Bigr)\Bigr\|^2 \\ &\quad \le \|x - y\|^2 - \gamma(1 - \gamma L)\Bigl\|\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax - \bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ay\Bigr\|^2. \end{aligned}$$
 □
We now introduce Lemma 8, which establishes the equivalence between the SCVIP and a fixed point problem for a nonexpansive mapping under suitable conditions on the parameters. Furthermore, we give examples supporting Lemma 8; one of them shows that Lemma 8 fails when the parameter conditions are not satisfied.
Lemma 8.
Let $H_1$ and $H_2$ be Hilbert spaces. Let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued maximal monotone mappings. Let $A : H_1 \to H_2$ be a bounded linear operator. For every $i = 1, 2, \ldots, N$, let $A_i : H_1 \to H_1$ be $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,2,\ldots,N}\{\alpha_i\}$ and $B_i : H_2 \to H_2$ be $\beta_i$-inverse strongly monotone with $\eta_B = \min_{i=1,2,\ldots,N}\{\beta_i\}$. Suppose that $\Omega \neq \emptyset$. Then the following are equivalent:
(i) $x \in \Omega$;
(ii) $x = J_{\lambda_A}^{M_A}\bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\bigr)\bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\bigr),
where $\lambda_A \in (0, 2\eta_A)$, $\lambda_B \in (0, 2\eta_B)$, $\sum_{i=1}^N a_i = \sum_{i=1}^N b_i = 1$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $A^*A$.
Proof. 
Suppose the stated conditions hold.
(i) $\Rightarrow$ (ii): Let $x \in \Omega$; then $x \in VI(H_1, \sum_{i=1}^N a_iA_i, M_A)$ and $Ax \in VI(H_2, \sum_{i=1}^N b_iB_i, M_B)$.
From Lemma 3, we have $x \in Fix\bigl(J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigr)$ and $Ax \in Fix\bigl(J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)$, which implies that $x = J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)x$ and $Ax = J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)Ax$. Using these two identities, we have
$$\begin{aligned} &J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr) \\ &\quad = J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(Ax - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)Ax\bigr)\Bigr) \\ &\quad = J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)x = x, \end{aligned}$$
which is exactly (ii).
(ii) $\Rightarrow$ (i): Suppose that $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax\bigr) = x$ and let $w \in \Omega$.
We first show that $I - \lambda_A\sum_{i=1}^N a_iA_i$ and $I - \lambda_B\sum_{i=1}^N b_iB_i$ are nonexpansive.
Since each $A_i : H_1 \to H_1$ is $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,2,\ldots,N}\{\alpha_i\}$ and $\lambda_A \in (0, 2\eta_A)$, we have
$$\begin{aligned} \Bigl\|\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)x - \Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)y\Bigr\|^2 &= \|x - y\|^2 - 2\lambda_A\sum_{i=1}^N a_i\langle x - y, A_ix - A_iy\rangle + \lambda_A^2\Bigl\|\sum_{i=1}^N a_i(A_ix - A_iy)\Bigr\|^2 \\ &\le \|x - y\|^2 - 2\lambda_A\sum_{i=1}^N a_i\alpha_i\|A_ix - A_iy\|^2 + \lambda_A^2\sum_{i=1}^N a_i\|A_ix - A_iy\|^2 \\ &\le \|x - y\|^2 - 2\lambda_A\eta_A\sum_{i=1}^N a_i\|A_ix - A_iy\|^2 + \lambda_A^2\sum_{i=1}^N a_i\|A_ix - A_iy\|^2 \\ &= \|x - y\|^2 + \lambda_A\sum_{i=1}^N a_i(\lambda_A - 2\eta_A)\|A_ix - A_iy\|^2 \\ &\le \|x - y\|^2. \end{aligned}$$
Thus $I - \lambda_A\sum_{i=1}^N a_iA_i$ is a nonexpansive mapping. By the same argument, $I - \lambda_B\sum_{i=1}^N b_iB_i$ is nonexpansive, and hence $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)$ is a nonexpansive mapping.
Since $w \in \Omega$, the implication (i) $\Rightarrow$ (ii) gives $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)Aw = Aw$ and $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(w - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Aw\bigr) = w$.
From Lemma 7 and $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)Aw = Aw$, we have
$$\begin{aligned} \|x - w\|^2 &= \Bigl\|J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr) - J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(w - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Aw\Bigr)\Bigr\|^2 \\ &\le \|x - w\|^2 - \gamma(1 - \gamma L)\Bigl\|\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax - \bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Aw\Bigr\|^2 \\ &= \|x - w\|^2 - \gamma(1 - \gamma L)\Bigl\|\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr\|^2. \end{aligned}$$
Applying Equation (16), we have
$$Ax \in Fix\bigl(J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr).$$
From Lemma 5, we have
$$Ax \in VI\Bigl(H_2, \sum_{i=1}^N b_iB_i, M_B\Bigr).$$
From the definition of x and Equation (17), we have
$$x = J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(x - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax\Bigr) = J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)x.$$
From Lemma 5, we have
$$x \in VI\Bigl(H_1, \sum_{i=1}^N a_iA_i, M_A\Bigr).$$
From Equations (18) and (19), we have $x \in \Omega$. □
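To make the characterization in Lemma 8 concrete, here is a small Python sketch with toy data chosen by us (not the examples below): $A_ix = i(x - 1)$, $B_iy = i(y - 2)$, $M_Ax = \{x - 1\}$, $M_By = \{y - 2\}$ and $Ax = 2x$, so that $A^* = 2$, $L = 4$, $\Omega = \{1\}$ and $A(1) = 2$ lies in $VI(H_2, \sum_i b_iB_i, M_B)$. Iterating the operator from Lemma 8(ii) converges to the point of $\Omega$.

```python
# Toy instance of Lemma 8 (all operators and constants are our own assumptions).
import numpy as np

N = 3
a = b = np.full(N, 1.0 / N)
lam_A = lam_B = 0.2                     # in (0, 2*eta) with eta = 1/N
gamma = 0.1                             # in (0, 1/L) with L = 4

def sumA(x): return sum(a[i - 1] * i * (x - 1.0) for i in range(1, N + 1))
def sumB(y): return sum(b[i - 1] * i * (y - 2.0) for i in range(1, N + 1))
def J_MA(z): return (z + lam_A) / (1.0 + lam_A)        # resolvent of M_A x = {x - 1}
def J_MB(z): return (z + 2.0 * lam_B) / (1.0 + lam_B)  # resolvent of M_B y = {y - 2}

def G(x):
    """The operator of Lemma 8 (ii) applied to x."""
    Ax = 2.0 * x
    w = Ax - J_MB(Ax - lam_B * sumB(Ax))   # (I - J_MB(I - lam_B*sumB)) A x
    v = x - gamma * 2.0 * w                # x - gamma * A^* w
    return J_MA(v - lam_A * sumA(v))

x = -7.0
for _ in range(100):
    x = G(x)
print("fixed point of G:", x)              # approx 1.0, i.e. the point of Omega
print("check G(1.0) =", G(1.0))            # approx 1.0: 1 in Omega  <=>  G(1) = 1
```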
Example 3.
Let $H_1 = H_2 = \mathbb{R}$. For every $i = 1, 2, \ldots, N$, let $A_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{ix}{4} + (i+1)$ for all $x \in H_1$ and $B_i : \mathbb{R} \to \mathbb{R}$ be defined by $B_iy = \frac{iy}{2} + (i+1)$ for all $y \in H_2$. Let $M_A : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_Ax = \{\frac{x}{4}\}$ for all $x \in \mathbb{R}$ and $M_B : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_By = \{\frac{y}{2}\}$ for all $y \in \mathbb{R}$. Let $Ax = x$ for all $x \in \mathbb{R}$. Let $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$ and $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$ for all $i = 1, 2, \ldots, N$. Then $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax\bigr) = 4$.
Proof of Solution
It is easy to observe that $A_i$ is $\frac{4}{i}$-inverse strongly monotone and $B_i$ is $\frac{2}{i}$-inverse strongly monotone. By the definitions of $A_i$, $B_i$, $a_i$ and $b_i$, we have
$$\sum_{i=1}^N a_iA_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)A_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)\Bigl(\frac{ix}{4} + (i+1)\Bigr),$$
and
$$\sum_{i=1}^N b_iB_iy = \sum_{i=1}^N\Bigl(\frac{2}{3^i} + \frac{1}{N3^N}\Bigr)B_iy = \sum_{i=1}^N\Bigl(\frac{2}{3^i} + \frac{1}{N3^N}\Bigr)\Bigl(\frac{iy}{2} + (i+1)\Bigr).$$
Then $\Omega = \{4\}$. From the definition of $A$, we have $L = 1$. Choose $\lambda_A = \frac{1}{N}$, $\lambda_B = \frac{1}{N}$ and $\gamma = \frac{1}{10}$. From Lemma 8, we have $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax\bigr) = 4$. □
Example 4.
Let $H_1 = H_2 = \mathbb{R}$. For every $i = 1, 2, \ldots, N$, let $A_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{ix}{4} + (i+1)$ for all $x \in H_1$ and $B_i : \mathbb{R} \to \mathbb{R}$ be defined by $B_iy = \frac{iy}{2} + (i+1)$ for all $y \in H_2$. Let $M_A : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_Ax = \{\frac{x}{4}\}$ for all $x \in \mathbb{R}$ and $M_B : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_By = \{\frac{y}{2}\}$ for all $y \in \mathbb{R}$. Let $Ax = x$ for all $x \in \mathbb{R}$. Let $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$ and $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$ for all $i = 1, 2, \ldots, N$. Then $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax\bigr) = x$ for all $x \in \mathbb{R}$.
Proof of Solution
It is easy to observe that $A_i$ is $\frac{4}{i}$-inverse strongly monotone and $B_i$ is $\frac{2}{i}$-inverse strongly monotone. By the definitions of $A_i$, $B_i$, $a_i$ and $b_i$, we have
$$\sum_{i=1}^N a_iA_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)A_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)\Bigl(\frac{ix}{4} + (i+1)\Bigr),$$
and
$$\sum_{i=1}^N b_iB_iy = \sum_{i=1}^N\Bigl(\frac{2}{3^i} + \frac{1}{N3^N}\Bigr)B_iy = \sum_{i=1}^N\Bigl(\frac{2}{3^i} + \frac{1}{N3^N}\Bigr)\Bigl(\frac{iy}{2} + (i+1)\Bigr).$$
Then $\Omega = \{4\}$. From the definition of $A$, we have $L = 1$. Choosing $\lambda_A = 0$, $\lambda_B = 0$ and $\gamma = \frac{1}{10}$, we have $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax\bigr) = x$ for all $x \in \mathbb{R}$.
Thus Example 4 shows that Lemma 8 fails when $\lambda_A = 0$ and $\lambda_B = 0$. □

3. Main Result

In this section, we prove a strong convergence theorem for approximating a common solution of the SCVIP and the hierarchical fixed point problem for a nonexpansive mapping.
Theorem 1.
Let $H_1, H_2$ be real Hilbert spaces. Let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued maximal monotone mappings. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. For every $i = 1, 2, \ldots, N$, let $A_i : H_1 \to H_1$ be $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,\ldots,N}\{\alpha_i\}$ and $B_i : H_2 \to H_2$ be $\beta_i$-inverse strongly monotone with $\eta_B = \min_{i=1,\ldots,N}\{\beta_i\}$. Let $S, T : H_1 \to H_1$ be two nonexpansive mappings. Assume that $\mathcal{F} = \Phi \cap \Omega \neq \emptyset$. Let the sequence $\{x_n\}$ be generated by $u, x_1 \in H_1$ and
$$\begin{cases} u_n = J_{\lambda_A}^{M_A}\bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\bigr)\bigl(x_n - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax_n\bigr), \\ y_n = (1 - \alpha_n)x_n + \alpha_n\bigl(\sigma_nSx_n + (1 - \sigma_n)Tx_n\bigr), \\ x_{n+1} = \mu_nu + \varphi_ny_n + \theta_nu_n, \end{cases} \qquad (20)$$
where $\{\mu_n\}, \{\varphi_n\}, \{\theta_n\}, \{\alpha_n\}, \{\sigma_n\} \subseteq [0,1]$ with $\mu_n + \varphi_n + \theta_n = 1$ for all $n \ge 1$, $\lambda_A \in (0, 2\eta_A)$, $\lambda_B \in (0, 2\eta_B)$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $A^*A$. Suppose the following conditions hold:
(i) $\lim_{n\to\infty}\mu_n = 0$ and $\sum_{n=1}^{\infty}\mu_n = \infty$;
(ii) $0 < c \le \varphi_n, \theta_n \le d < 1$ for some $c, d > 0$;
(iii) $\sum_{n=1}^{\infty}|\mu_{n+1} - \mu_n| < \infty$, $\sum_{n=1}^{\infty}|\varphi_{n+1} - \varphi_n| < \infty$, $\sum_{n=1}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=1}^{\infty}\sigma_n < \infty$;
(v) $\lim_{n\to\infty}\dfrac{\|x_n - y_n\|}{\alpha_n\sigma_n} = 0$;
(vi) $\sum_{i=1}^{N}a_i = \sum_{i=1}^{N}b_i = 1$ with $a_i > 0$ and $b_i > 0$ for all $i = 1, 2, \ldots, N$.
Then $\{x_n\}$ converges strongly to $z_0 \in \mathcal{F}$, where $z_0 = P_{\mathcal{F}}u$.
Proof. 
Step 1. We first prove that $\{x_n\}$, $\{y_n\}$ and $\{u_n\}$ are bounded.
We show that $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)$ and $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)$ are nonexpansive mappings. Since each $A_i$ is $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,\ldots,N}\{\alpha_i\}$, we have
( I λ A i = 1 N a i A i ) x ( I λ A i = 1 N a i A i ) y 2 = ( x y ) λ A ( i = 1 N a i A i x i = 1 N a i A i y ) 2 x y 2 2 λ A i = 1 N a i x y , A i x A i y + λ A 2 i = 1 N a i A i x A i y 2 x y 2 2 λ A i = 1 N a i α i A i x A i y 2 + λ A 2 i = 1 N a i A i x A i y 2 x y 2 + λ A i = 1 N a i ( λ n 2 η A ) A i x A i y 2 x y 2 .
Thus $I - \lambda_A\sum_{i=1}^N a_iA_i$ is a nonexpansive mapping. By the same argument, $I - \lambda_B\sum_{i=1}^N b_iB_i$ is nonexpansive. Since $J_{\lambda_A}^{M_A}$ and $J_{\lambda_B}^{M_B}$ are nonexpansive mappings, it follows that $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)$ and $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)$ are nonexpansive.
Let $p \in \mathcal{F}$; then $p \in H_1$ and $p \in \Phi$, so that $Tp = p$. Now we estimate
y n p 2 = ( 1 α n ) x n + α n ( σ n S x n + ( 1 σ n ) T x n ) p 2 = ( 1 α n ) ( x n p ) + α n ( σ n ( S x n p ) + ( 1 σ n ) ( T x n p ) ) 2 ( 1 α n ) x n p 2 + α n ( σ n ( S x n p ) + ( 1 σ n ) ( T x n p ) ) 2 ( 1 α n ) x n p 2 + α n σ n S x n p 2 + α n ( 1 σ n ) T x n p 2 ( 1 α n ) x n p 2 + α n σ n S x n p 2 + α n ( 1 σ n ) x n p 2 = ( 1 α n σ n ) x n p 2 + α n σ n S x n p 2 .
Since $p \in \mathcal{F}$, we have $p \in \Omega$, and hence $J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)p = p$ and $J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)Ap = Ap$. By Lemma 8, we have
$$J_{\lambda_A}^{M_A}\Bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\Bigr)\Bigl(p - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ap\Bigr) = p.$$
By Lemma 7, we have
u n p 2 = J λ A M A ( I λ A i = 1 N a i A i ) ( x n γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n ) J λ A M A ( I λ A i = 1 N a i A i ) ( p γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A p ) 2 x n p 2 γ ( 1 γ L ) ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A p 2 x n p 2 γ ( 1 γ L ) ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n 2 x n p 2 .
By Equations (21) and (22), we have
x n + 1 p 2 = μ n u + φ n y n + θ n u n p 2 μ n u p 2 + φ n y n p 2 + θ n u n p 2 μ n u p 2 + φ n ( 1 α n σ n ) x n p 2 + α n σ n S x n p 2 + θ n x n p 2 = μ n u p 2 + φ n x n p 2 φ n α n σ n x n p 2 + φ n α n σ n S x n p 2 + θ n x n p 2 = μ n u p 2 + ( 1 μ n ) x n p 2 φ n α n σ n x n p 2 + φ n α n σ n S x n p 2 ( 1 μ n ) x n p 2 + μ n u p 2 + μ n α n σ n S x n p 2 .
From Lemma 1(i), $\{x_n\}$ is bounded, and so are $\{u_n\}$ and $\{y_n\}$.
Step 2. We show that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, $\lim_{n\to\infty}\|x_n - u_n\| = 0$ and $\lim_{n\to\infty}\|x_n - y_n\| = 0$.
x n + 1 x n = μ n u + φ n y n + θ n u n μ n 1 u φ n 1 y n 1 θ n 1 u n 1 = ( μ n μ n 1 ) u + ( φ n φ n 1 ) y n 1 + φ n ( y n y n 1 ) + ( θ n θ n 1 ) u n 1 + θ n ( u n u n 1 ) | μ n μ n 1 | u + | φ n φ n 1 | y n 1 + φ n y n y n 1 + | θ n θ n 1 | u n 1 + θ n u n u n 1 .
From definition of u n , Lemma 7 and γ ( 0 , 1 L ) , we have
u n u n 1 2 = J λ A M A ( I λ A i = 1 N a i A i ) ( x n γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n ) J λ A M A ( I λ A i = 1 N a i A i ) ( x n 1 γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n 1 ) 2 x n x n 1 2 γ ( 1 γ L ) ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n 1 2 x n x n 1 2 .
It implies that
$$\|u_n - u_{n-1}\| \le \|x_n - x_{n-1}\|.$$
From definition of y n , we have
y n y n 1 = ( 1 α n ) x n + α n ( σ n S x n + ( 1 σ n ) T x n ) [ ( 1 α n 1 ) x n 1 + α n 1 ( σ n 1 S x n 1 + ( 1 σ n 1 ) T x n 1 ) ] = ( x n x n 1 ) α n x n + α n 1 x n 1 + α n x n 1 α n x n 1 + α n σ n S x n α n 1 σ n 1 S x n 1 + α n σ n S x n 1 α n σ n S x n 1 + α n T x n α n 1 T x n 1 + α n T x n 1 α n T x n 1 α n σ n T x n + α n 1 σ n 1 T x n 1 + α n σ n T x n 1 α n σ n T x n 1 = ( 1 α n ) ( x n x n 1 ) + ( α n 1 α n ) x n 1 + ( α n σ n α n 1 σ n 1 ) S x n 1 + α n σ n ( S x n S x n 1 ) + α n ( 1 σ n ) ( T x n T x n 1 ) + ( α n α n 1 ) T x n 1 + ( α n 1 σ n 1 α n σ n ) T x n 1 ( 1 α n ) x n x n 1 + | α n 1 α n | x n 1 + | α n σ n α n 1 σ n 1 | S x n 1 + α n σ n S x n S x n 1 + α n ( 1 σ n ) T x n T x n 1 + | α n α n 1 | T x n 1 + | α n 1 σ n 1 α n σ n | T x n 1 ( 1 α n ) x n x n 1 + | α n 1 α n | x n 1 + | α n σ n α n 1 σ n 1 | S x n 1 + α n σ n x n x n 1 + α n ( 1 σ n ) x n x n 1 + | α n α n 1 | T x n 1 + | α n 1 σ n 1 α n σ n | T x n 1 = x n x n 1 + | α n 1 α n | x n 1 + | α n σ n α n 1 σ n 1 | S x n 1 + | α n α n 1 | T x n 1 + | α n 1 σ n 1 α n σ n | T x n 1 .
From Equations (24)–(26), we have
x n + 1 x n | μ n μ n 1 | u + | φ n φ n 1 | y n 1 + φ n y n y n 1 + | θ n θ n 1 | u n 1 + θ n u n u n 1 | μ n μ n 1 | u + | φ n φ n 1 | y n 1 + φ n [ x n x n 1 + | α n 1 α n | x n 1 + | α n σ n α n 1 σ n 1 | S x n 1 + | α n α n 1 | T x n 1 + | α n 1 σ n 1 α n σ n | T x n 1 ] + | θ n θ n 1 | u n 1 + θ n x n x n 1 = ( φ n + θ n ) x n x n 1 + | μ n μ n 1 | u + | φ n φ n 1 | y n 1 + φ n | α n 1 α n | x n 1 + φ n | α n σ n α n 1 σ n 1 | S x n 1 + φ n | α n α n 1 | T x n 1 + φ n | α n 1 σ n 1 α n σ n | T x n 1 + | θ n θ n 1 | u n 1 ( 1 μ n ) x n x n 1 + | μ n μ n 1 | u + | φ n φ n 1 | y n 1 + | α n 1 α n | x n 1 + | α n σ n α n 1 σ n 1 | S x n 1 + | α n α n 1 | T x n 1 + | α n 1 σ n 1 α n σ n | T x n 1 + | θ n θ n 1 | u n 1 .
By Lemma 1(i), conditions (i) and (iii), we have
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$$
From definition of u n , we have
x n + 1 u n = μ n u + φ n y n + θ n u n u n = μ n ( u u n ) + φ n ( y n u n ) .
From Equations (21) and (22), we have
x n + 1 p 2 = μ n u + φ n y n + θ n u n p 2 = μ n u p 2 + φ n y n p 2 + θ n u n p 2 μ n φ n u y n 2 μ n θ n u u n 2 φ n θ n y n u n 2 μ n u p 2 + φ n y n p 2 + θ n u n p 2 φ n θ n y n u n 2 μ n u p 2 + φ n [ ( 1 α n σ n ) x n p 2 + α n σ n S x n p 2 ] + θ n x n p 2 φ n θ n y n u n 2 = μ n u p 2 + φ n x n p 2 φ n α n σ n x n p 2 + φ n α n σ n S x n p 2 + θ n x n p 2 φ n θ n y n u n 2 μ n u p 2 + ( 1 μ n ) x n p 2 + φ n α n σ n S x n p 2 φ n θ n y n u n 2 μ n u p 2 + x n p 2 + φ n α n σ n K φ n θ n y n u n 2 ,
where $K = \sup_n\{\|Sx_n - p\|^2\}$. It follows that
φ n θ n y n u n 2 μ n u p 2 + φ n α n σ n K + x n p 2 x n + 1 p 2 μ n u p 2 + φ n α n σ n K + x n x n + 1 ( x n p + x n + 1 p ) μ n u p 2 + φ n α n σ n K + x n x n + 1 L 1 ,
where $L_1 = \sup_n\{\|x_n - p\| + \|x_{n+1} - p\|\}$. From Equation (27) and conditions (i), (ii) and (v), we have
$$\lim_{n\to\infty}\|y_n - u_n\| = 0.$$
From Equations (28) and (29), we have
x n + 1 u n = μ n ( u u n ) + φ n ( y n u n ) μ n u u n + φ n y n u n .
From Equation (29) and a condition (i), we have
$$\lim_{n\to\infty}\|x_{n+1} - u_n\| = 0.$$
Since
x n u n = x n x n + 1 + x n + 1 u n x n x n 1 + x n + 1 u n .
From Equations (28) and (30), we have
$$\lim_{n\to\infty}\|x_n - u_n\| = 0.$$
Since
x n y n = x n u n + u n y n x n u n + u n y n .
From Equations (29) and (31), we have
$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
Step 3. We show that $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$.
We have
$$\|x_n - Tx_n\| \le \|x_n - y_n\| + \|y_n - Tx_n\|.$$
Since $\{x_n\}$ is bounded and the mappings $S$ and $T$ are nonexpansive, there exists $K_1 > 0$ such that $\|Sx_n - Tx_n\| \le K_1$ for all $n \ge 0$. Now we estimate
y n T x n = ( 1 α n ) x n + α n ( σ n S x n + ( 1 σ n ) T x n ) T x n = ( 1 α n ) ( x n T x n ) + α n ( σ n S x n + ( 1 σ n ) T x n T x n ) = ( 1 α n ) ( x n T x n ) + α n ( σ n S x n σ n T x n ) ( 1 α n ) x n T x n + α n σ n S x n T x n ( 1 α n ) x n y n + y n T x n + α n σ n S x n T x n ( 1 α n ) x n u n + ( 1 α n ) y n T x n + α n σ n S x n T x n ,
which implies
α n y n T x n ( 1 α n ) x n y n + α n σ n S x n T x n x n y n + α n σ n K 1 .
It follows that
$$\|y_n - Tx_n\| \le \frac{\|x_n - y_n\|}{\alpha_n} + \sigma_nK_1.$$
Since lim n x n y n α n σ n = 0 , we have lim n x n y n α n = lim n σ n · x n y n α n σ n = 0 .
From lim n x n y n α n = 0 , Equation (34) and a condition (v), we have
$$\lim_{n\to\infty}\|y_n - Tx_n\| = 0.$$
Thus, it follows from Equations (32), (33) and (35), we have
$$\lim_{n\to\infty}\|x_n - Tx_n\| = 0.$$
Step 4. We show that $x^* \in \mathcal{F}$.
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which converges weakly to some $x^* \in H_1$. We may assume that
lim inf n x n , x y n x n α n x n = lim k x n k , x y n k x n k α n k x n k ,
and
lim inf n S x n , x y n x n α n x n = lim k S x n k , x y n k x n k α n k x n k .
We first show that $x^* \in Fix(T)$. Assume that $x^* \notin Fix(T)$; then $x^* \neq Tx^*$, and using Opial's property of Hilbert spaces and Equation (35), we have
$$\liminf_{k\to\infty}\|x_{n_k} - x^*\| < \liminf_{k\to\infty}\|x_{n_k} - Tx^*\| \le \liminf_{k\to\infty}\bigl(\|x_{n_k} - Tx_{n_k}\| + \|Tx_{n_k} - Tx^*\|\bigr) \le \liminf_{k\to\infty}\|x_{n_k} - x^*\|,$$
which is a contradiction. Therefore, $x^* \in Fix(T)$.
Next, we show that $x^* \in \Phi$. Consider
y n x n = ( 1 α n ) x n + α n ( σ n S x n + ( 1 σ n ) T x n ) x n = α n σ n ( S x n x n ) + α n ( 1 σ n ) ( T x n x n ) ,
which implies
S x n x n = y n x n α n σ n α n ( 1 σ n ) ( T x n x n ) α n σ n = y n x n α n σ n + ( 1 σ n ) ( I T ) x n σ n .
It follows that
S x n x n y n x n α n σ n = ( 1 σ n ) ( I T ) x n σ n .
Since T is nonexpansive, we have I T is monotone. Let x F i x ( T ) , we have
S x n x n y n x n α n σ n , x y n x n α n x n = ( 1 σ n ) σ n ( I T ) x n , x y n x n α n x n = ( 1 σ n ) σ n ( I T ) x n ( I T ) ( x y n x n α n ) + ( I T ) ( x y n x n α n ) , x y n x n α n x n = ( 1 σ n ) σ n [ ( I T ) x n ( I T ) ( x y n x n α n ) , x y n x n α n x n + ( I T ) ( x y n x n α n ) , x y n x n α n x n ] ( 1 σ n ) σ n ( I T ) ( x y n x n α n ) , x y n x n α n x n ( 1 σ n ) σ n ( I T ) ( x y n x n α n ) x y n x n α n x n = ( 1 σ n ) σ n ( I T ) ( x y n x n α n ) ( I T ) x x y n x n α n x n 2 ( 1 σ n ) y n x n α n σ n x y n x n α n x n ,
which implies
S x n x n , x y n x n α n x n 2 ( 1 σ n ) y n x n α n σ n x y n x n α n x n + y n x n α n σ n , x y n x n α n x n 3 y n x n α n σ n x y n x n α n x n .
Since lim n x n y n α n = 0 , we have
lim k y n k x n k α n k , x y n k x n k α n k x n k = 0 .
From Equation (38) and y n k x n k α n k + x n k x , we have
lim inf n x n , x y n x n α n x n = lim k x n k , x y n k x n k α n k x n k = lim k x n k ( x y n k x n k α n k ) + ( x y n k x n k α n k ) , x y n k x n k α n k x n k = lim k [ x y n k x n k α n k x n k , x y n k x n k α n k x n k x y n k x n k α n k , x y n k x n k α n k x n k ] = lim k [ x y n k x n k α n k x n k , x y n k x n k α n k x n k x , x y n k x n k α n k x n k + y n k x n k α n k , x y n k x n k α n k x n k ] = x x 2 x , x x .
Since S is weakly continuous and y n k x n k α n k + x n k x , we obtain
lim inf n S x n , x y n x n α n x n = lim k S x n k , x y n k x n k α n k x n k = S x , x x .
From Equations (37), (39) and (40), we have
S x x , x x = S x , x x x , x x = S x , x x + x x 2 x , x x = lim inf n S x n , x y n x n α n x n x n , x y n x n α n x n = lim inf n S x n x n , x y n x n α n x n lim inf n 3 y n x n α n σ n x y n x n α n x n 0 .
Hence $x^*$ solves the hierarchical fixed point problem, i.e., $x^* \in \Phi$.
Next, we show that $x^* \in \Omega$. Assume that $x^* \neq J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x^* - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax^*\bigr)$. Applying Opial's property, Equation (31) and Lemma 7, we have
lim inf k x n k x < lim inf k x n k J λ A M A ( I λ A i = 1 N a i A i ) ( x γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x = lim inf k x n k J λ A M A ( I λ A i = 1 N a i A i ) ( x n k γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n k + J λ A M A ( I λ A i = 1 N a i A i ) ( x n k γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n k J λ A M A ( I λ A i = 1 N a i A i ) ( x γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x lim inf k [ x n k J λ A M A ( I λ A i = 1 N a i A i ) ( x n k γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n k + J λ A M A ( I λ A i = 1 N a i A i ) ( x n k γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n k J λ A M A ( I λ A i = 1 N a i A i ) ( x γ A ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ] lim inf k [ x n k u n k + x n k x γ ( 1 γ L ) ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x n k ( I J λ B M B ( I λ B i = 1 N b i B i ) ) A x ] lim inf k x n k u n k + x n k x = lim inf k x n k x .
This is a contradiction. Then $x^* = J_{\lambda_A}^{M_A}(I - \lambda_A\sum_{i=1}^N a_iA_i)\bigl(x^* - \gamma A^*(I - J_{\lambda_B}^{M_B}(I - \lambda_B\sum_{i=1}^N b_iB_i))Ax^*\bigr)$. From Lemma 8, we have $x^* \in \Omega$. Therefore, $x^* \in \mathcal{F}$.
Step 5. Finally, we prove that $\{x_n\}$ converges strongly to $z_0 = P_{\mathcal{F}}u$.
We show that $\limsup_{n\to\infty}\langle u - z_0, x_n - z_0\rangle \le 0$, where $z_0 = P_{\mathcal{F}}u$. We may choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle u - z_0, x_n - z_0\rangle = \lim_{k\to\infty}\langle u - z_0, x_{n_k} - z_0\rangle.$$
Since $x_{n_k} \rightharpoonup x^*$ as $k \to \infty$ and $x^* \in \mathcal{F}$, by Equations (13) and (41) we have
$$\limsup_{n\to\infty}\langle u - z_0, x_n - z_0\rangle = \lim_{k\to\infty}\langle u - z_0, x_{n_k} - z_0\rangle \le 0.$$
From Equations (21) and (22), we have
x n + 1 z 0 2 = μ n u + φ n y n + θ u n z 0 2 = μ n ( u z 0 ) + φ n ( y n z 0 ) + θ ( u n z 0 ) 2 φ n ( y n z 0 ) + θ ( u n z 0 ) 2 + 2 μ n ( u z 0 ) , x n + 1 z 0 φ n y n z 0 2 + θ u n z 0 2 + 2 μ n u z 0 , x n + 1 z 0 φ n [ ( 1 α n σ n ) x n z 0 2 + α n σ n S x n z 0 2 ] + θ x n z 0 2 + 2 μ n u z 0 , x n + 1 z 0 φ n x n z 0 2 + φ n α n σ n S x n z 0 2 + θ x n z 0 2 + 2 μ n u z 0 , x n + 1 z 0 ( 1 μ n ) x n z 0 2 + φ n α n σ n S x n z 0 2 + 2 μ n u z 0 , x n + 1 z 0 .
Applying Lemma 1(ii), conditions (i) and (iv) and Equation (42), we conclude that $\{x_n\}$ converges strongly to $z_0 = P_{\mathcal{F}}u$. This completes the proof. □
Next, we obtain a strong convergence theorem for approximating a common element of the solution set of the SMVI and the hierarchical fixed point problem for a nonexpansive mapping.
Corollary 1.
Let $H_1, H_2$ be real Hilbert spaces. Let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued maximal monotone mappings. Let $F : H_1 \to H_2$ be a bounded linear operator with adjoint $F^*$. Let $A : H_1 \to H_1$ be $\alpha$-inverse strongly monotone and $B : H_2 \to H_2$ be $\beta$-inverse strongly monotone. Let $S, T : H_1 \to H_1$ be two nonexpansive mappings. Assume that $\mathcal{F} = \Phi \cap \Theta \neq \emptyset$. Let the sequence $\{x_n\}$ be generated by $u, x_1 \in H_1$ and
$$\begin{cases} u_n = J_{\lambda_A}^{M_A}(I - \lambda_AA)\bigl(x_n - \gamma F^*\bigl(I - J_{\lambda_B}^{M_B}(I - \lambda_BB)\bigr)Fx_n\bigr), \\ y_n = (1 - \alpha_n)x_n + \alpha_n\bigl(\sigma_nSx_n + (1 - \sigma_n)Tx_n\bigr), \\ x_{n+1} = \mu_nu + \varphi_ny_n + \theta_nu_n, \end{cases}$$
where $\{\mu_n\}, \{\varphi_n\}, \{\theta_n\}, \{\alpha_n\}, \{\sigma_n\} \subseteq [0,1]$ with $\mu_n + \varphi_n + \theta_n = 1$ for all $n \ge 1$, $\lambda_A \in (0, 2\alpha)$, $\lambda_B \in (0, 2\beta)$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $F^*F$. Suppose the following conditions hold:
(i) $\lim_{n\to\infty}\mu_n = 0$ and $\sum_{n=1}^{\infty}\mu_n = \infty$;
(ii) $0 < c \le \varphi_n, \theta_n \le d < 1$ for some $c, d > 0$;
(iii) $\sum_{n=1}^{\infty}|\mu_{n+1} - \mu_n| < \infty$, $\sum_{n=1}^{\infty}|\varphi_{n+1} - \varphi_n| < \infty$, $\sum_{n=1}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=1}^{\infty}\sigma_n < \infty$;
(v) $\lim_{n\to\infty}\dfrac{\|x_n - y_n\|}{\alpha_n\sigma_n} = 0$.
Then $\{x_n\}$ converges strongly to $z_0 \in \mathcal{F}$, where $z_0 = P_{\mathcal{F}}u$.
Proof. 
Put $A_i \equiv A$ and $B_i \equiv B$ for all $i = 1, 2, \ldots, N$ in Theorem 1; the desired result follows. □

4. Application

4.1. Split Zero Point Problem

Let $H$ be a real Hilbert space and let $M : H \to 2^H$ be a maximal monotone operator. The zero point problem is to find $x \in H$ such that
$$0 \in Mx;$$
such an $x \in H$ is called a zero point of $M$. The set of zero points of $M$ is denoted by $M^{-1}(0)$.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Setting $A_i \equiv 0$ and $B_i \equiv 0$ for all $i = 1, 2, \ldots, N$ in the SCVIP, the SCVIP reduces to the split zero point problem: find $x \in H_1$ such that
$$0 \in M_Ax,$$
and
$$y = Ax \ \text{such that} \ 0 \in M_By,$$
where $A : H_1 \to H_2$ is a bounded linear operator and $M_A : H_1 \to 2^{H_1}$, $M_B : H_2 \to 2^{H_2}$ are multi-valued mappings. The set of all solutions of this problem is denoted by $\Omega_2 = \{x \in M_A^{-1}(0) : Ax \in M_B^{-1}(0)\}$.
The split zero point problem includes, as special cases, the split feasibility problem, variational inequalities, etc., and is used in practice as a model in machine learning, image processing and linear inverse problems.
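As an illustration of the split-resolvent step that Corollary 2 below uses for this problem (a sketch under our own assumptions: $M_Ax = \{x - 3\}$, $M_By = \{y - 6\}$, $Ax = 2x$, $\lambda_A = \lambda_B = 1$ and $\gamma = 0.2$, so that $\Omega_2 = \{3\}$), repeatedly applying $x \mapsto J_{\lambda_A}^{M_A}\bigl(x - \gamma A^*(I - J_{\lambda_B}^{M_B})Ax\bigr)$ already locates the split zero point; the full corollary adds the Halpern and hierarchical terms.

```python
# Split-resolvent step for the split zero point problem (toy data, ours):
# M_A x = {x - 3}, M_B y = {y - 6}, A x = 2x (A* = 2, L = 4), gamma in (0, 1/L).

def J_MA(z):  # resolvent of M_A with lambda_A = 1: solve y + (y - 3) = z
    return (z + 3.0) / 2.0

def J_MB(z):  # resolvent of M_B with lambda_B = 1: solve y + (y - 6) = z
    return (z + 6.0) / 2.0

gamma = 0.2
x = -20.0
for _ in range(60):
    Ax = 2.0 * x
    x = J_MA(x - gamma * 2.0 * (Ax - J_MB(Ax)))   # x - gamma*A*(I - J_MB)Ax
print("split zero point:", x)                     # approx 3.0, the point of Omega_2
```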
Next, we give the strong convergence theorem for solving the split zero point problem and the hierarchical fixed point problem of nonexpansive mapping.
Corollary 2.
Let $H_1, H_2$ be real Hilbert spaces. Let $M_A : H_1 \to 2^{H_1}$ and $M_B : H_2 \to 2^{H_2}$ be multi-valued maximal monotone mappings. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Let $S, T : H_1 \to H_1$ be two nonexpansive mappings. Assume that $\mathcal{F} = \Phi \cap \Omega_2 \neq \emptyset$. Let the sequence $\{x_n\}$ be generated by the following iterative algorithm:
$$\begin{cases} u_n = J_{\lambda_A}^{M_A}\bigl(x_n - \gamma A^*(I - J_{\lambda_B}^{M_B})Ax_n\bigr), \\ y_n = (1 - \alpha_n)x_n + \alpha_n\bigl(\sigma_nSx_n + (1 - \sigma_n)Tx_n\bigr), \\ x_{n+1} = \mu_nu + \varphi_ny_n + \theta_nu_n, \end{cases}$$
where $\{\mu_n\}, \{\varphi_n\}, \{\theta_n\}, \{\alpha_n\}, \{\sigma_n\} \subseteq [0,1]$ with $\mu_n + \varphi_n + \theta_n = 1$ for all $n \ge 1$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $A^*A$. Suppose the following conditions hold:
(i) $\lim_{n\to\infty}\mu_n = 0$ and $\sum_{n=1}^{\infty}\mu_n = \infty$;
(ii) $0 < c \le \varphi_n, \theta_n \le d < 1$ for some $c, d > 0$;
(iii) $\sum_{n=1}^{\infty}|\mu_{n+1} - \mu_n| < \infty$, $\sum_{n=1}^{\infty}|\varphi_{n+1} - \varphi_n| < \infty$, $\sum_{n=1}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=1}^{\infty}\sigma_n < \infty$;
(v) $\lim_{n\to\infty}\dfrac{\|x_n - y_n\|}{\alpha_n\sigma_n} = 0$.
Then $\{x_n\}$ converges strongly to $z \in \mathcal{F}$, where $z = P_{\mathcal{F}}u$.
Proof. 
Put $A_i \equiv 0$ and $B_i \equiv 0$ for all $i = 1, 2, \ldots, N$ in Theorem 1; the desired conclusion follows. □

4.2. Split Combination of Variational Inequalities Problem

Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $h$ be a proper lower semicontinuous convex function from $H$ into $(-\infty, +\infty]$. The subdifferential $\partial h$ of $h$ is defined by
$$\partial h(x) = \{z \in H : h(x) + \langle z, u - x\rangle \le h(u), \ \forall u \in H\},$$
for all $x \in H$. From Rockafellar [16], $\partial h$ is a maximal monotone operator. Let $i_C$ be the indicator function of $C$, i.e.,
$$i_C(x) = \begin{cases} 0, & \text{if } x \in C, \\ +\infty, & \text{if } x \notin C. \end{cases}$$
Then $i_C$ is a proper, lower semicontinuous and convex function on $H$, and so the subdifferential $\partial i_C$ of $i_C$ is a maximal monotone operator. The resolvent operator $J_\lambda^{\partial i_C}$ of $\partial i_C$ for $\lambda > 0$, defined by $J_\lambda^{\partial i_C}(x) = (I + \lambda\partial i_C)^{-1}(x)$, $x \in H$, satisfies $J_\lambda^{\partial i_C}(x) = P_Cx$ for all $x \in H$ and $\lambda > 0$; see [18] for more detail. Moreover, if $h : H \to H$ is a single-valued operator, then $VI(H, h, \partial i_C) = VI(C, h)$.
Setting $M_A = \partial i_{H_1}$ and $M_B = \partial i_{H_2}$ in Equations (6) and (7), the SCVIP reduces to the split combination of variational inequality problem: find $x \in H_1$ such that
$$\Bigl\langle\sum_{i=1}^N a_iA_ix, v - x\Bigr\rangle \ge 0, \quad \forall v \in H_1,$$
and
$$y = Ax \in H_2 \ \text{such that} \ \Bigl\langle\sum_{i=1}^N b_iB_iy, w - y\Bigr\rangle \ge 0, \quad \forall w \in H_2,$$
where $A : H_1 \to H_2$ is a bounded linear operator and $\sum_{i=1}^N a_i = \sum_{i=1}^N b_i = 1$. The set of all solutions of this problem is denoted by $\Omega_3 = \{x \in VI(H_1, \sum_{i=1}^N a_iA_i) : Ax \in VI(H_2, \sum_{i=1}^N b_iB_i)\}$.
Remark 3.
If $M_A = \partial i_{H_1}$ and $M_B = \partial i_{H_2}$, then $\Omega$ reduces to $\Omega_3$.
Proof. 
We show that $VI(H_1, \sum_{i=1}^N a_iA_i, M_A) = VI(H_1, \sum_{i=1}^N a_iA_i)$. For $x \in H_1$, consider
$$\begin{aligned} x \in VI\Bigl(H_1, \sum_{i=1}^N a_iA_i, M_A\Bigr) &\Leftrightarrow \theta_{H_1} \in \sum_{i=1}^N a_iA_ix + M_Ax \Leftrightarrow \theta_{H_1} \in \sum_{i=1}^N a_iA_ix + \partial i_{H_1}(x) \\ &\Leftrightarrow -\sum_{i=1}^N a_iA_ix \in \partial i_{H_1}(x) \Leftrightarrow \Bigl\langle\sum_{i=1}^N a_iA_ix, v - x\Bigr\rangle \ge 0, \ \forall v \in H_1 \\ &\Leftrightarrow x \in VI\Bigl(H_1, \sum_{i=1}^N a_iA_i\Bigr). \end{aligned}$$
Similarly, $VI(H_2, \sum_{i=1}^N b_iB_i, M_B) = VI(H_2, \sum_{i=1}^N b_iB_i)$. Then $\Omega$ reduces to $\Omega_3$ when $M_A = \partial i_{H_1}$ and $M_B = \partial i_{H_2}$. □
The split combination of variational inequality problem plays an essential role in concrete problems such as dynamic emission tomographic image reconstruction, signal recovery, beam-forming, power control, bandwidth allocation and optimal control.
Next, we establish a strong convergence theorem for solving the split combination of variational inequality problem and the hierarchical fixed point problem for a nonexpansive mapping by using the modified Halpern iterative method, as follows:
Theorem 2.
Let $H_1, H_2$ be real Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. For every $i = 1, 2, \ldots, N$, let $A_i : H_1 \to H_1$ be $\alpha_i$-inverse strongly monotone with $\eta_A = \min_{i=1,\ldots,N}\{\alpha_i\}$ and $B_i : H_2 \to H_2$ be $\beta_i$-inverse strongly monotone with $\eta_B = \min_{i=1,\ldots,N}\{\beta_i\}$. Let $S, T : H_1 \to H_1$ be two nonexpansive mappings. Assume that $\mathcal{F} = \Phi \cap \Omega_3 \neq \emptyset$. Let the sequence $\{x_n\}$ be generated by $u, x_1 \in H_1$ and
$$\begin{cases} u_n = P_{H_1}\bigl(I - \lambda_A\sum_{i=1}^N a_iA_i\bigr)\bigl(x_n - \gamma A^*\bigl(I - P_{H_2}(I - \lambda_B\sum_{i=1}^N b_iB_i)\bigr)Ax_n\bigr), \\ y_n = (1 - \alpha_n)x_n + \alpha_n\bigl(\sigma_nSx_n + (1 - \sigma_n)Tx_n\bigr), \\ x_{n+1} = \mu_nu + \varphi_ny_n + \theta_nu_n, \end{cases}$$
where $\{\mu_n\}, \{\varphi_n\}, \{\theta_n\}, \{\alpha_n\}, \{\sigma_n\} \subseteq [0,1]$ with $\mu_n + \varphi_n + \theta_n = 1$ for all $n \ge 1$, $\lambda_A \in (0, 2\eta_A)$, $\lambda_B \in (0, 2\eta_B)$ and $\gamma \in (0, \frac{1}{L})$, with $L$ the spectral radius of $A^*A$. Suppose the following conditions hold:
(i) $\lim_{n\to\infty}\mu_n = 0$ and $\sum_{n=1}^{\infty}\mu_n = \infty$;
(ii) $0 < c \le \varphi_n, \theta_n \le d < 1$ for some $c, d > 0$;
(iii) $\sum_{n=1}^{\infty}|\mu_{n+1} - \mu_n| < \infty$, $\sum_{n=1}^{\infty}|\varphi_{n+1} - \varphi_n| < \infty$, $\sum_{n=1}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\lim_{n\to\infty}\sigma_n = 0$ and $\sum_{n=1}^{\infty}\sigma_n < \infty$;
(v) $\lim_{n\to\infty}\dfrac{\|x_n - y_n\|}{\alpha_n\sigma_n} = 0$;
(vi) $\sum_{i=1}^{N}a_i = \sum_{i=1}^{N}b_i = 1$ with $a_i > 0$ and $b_i > 0$ for all $i = 1, 2, \ldots, N$.
Then $\{x_n\}$ converges strongly to $z_0 \in \mathcal{F}$, where $z_0 = P_{\mathcal{F}}u$.
Proof. 
Put $M_A = \partial i_{H_1}$ and $M_B = \partial i_{H_2}$ in Theorem 1. Using the same method as in Theorem 1, we obtain the desired conclusion. □

5. Numerical Examples

The purpose of this section is to give numerical examples supporting some of our results. The first example supports Theorem 1, and the second shows that Theorem 1 fails if condition (iv) is violated, even when conditions (i), (ii), (iii), (v) and (vi) are satisfied.
Since Theorem 1 solves the hierarchical fixed point problem for a nonexpansive mapping and the SCVIP, and these problems can be adapted to concrete problems in signal processing, image reconstruction, intensity-modulated radiation therapy treatment planning and sensor networks in computerized tomography, we give numerical examples as follows:
Example 5.
Let $H_1 = H_2 = \mathbb{R}$, the set of all real numbers, with the inner product defined by $\langle x, y\rangle = xy$ for all $x, y \in \mathbb{R}$ and the induced usual norm $|\cdot|$. For every $i = 1, 2, \ldots, N$, let the mappings $A_i : \mathbb{R} \to \mathbb{R}$ and $B_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{x}{4i}$ for all $x \in H_1$ and $B_iy = \frac{y}{3i}$ for all $y \in H_2$, respectively, and let $M_A, M_B : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_A(x) = \{2x\}$ for all $x \in \mathbb{R}$ and $M_B(y) = \{2y\}$ for all $y \in \mathbb{R}$. Let the mapping $A : \mathbb{R} \to \mathbb{R}$ be defined by $A(x) = 2x$ for all $x \in \mathbb{R}$, and let $\gamma \in (0, \frac{1}{4})$; we choose $\gamma = \frac{1}{10}$. Let the mapping $T : \mathbb{R} \to \mathbb{R}$ be defined by $Tx = \max\{0, x\}$ for all $x \in \mathbb{R}$ and let the mapping $S : \mathbb{R} \to \mathbb{R}$ be defined by $Sx = \min\{0, \frac{x}{2}\}$ for all $x \in \mathbb{R}$. Set $\mu_n = \frac{1}{5n}$, $\varphi_n = \frac{7n+1}{15n}$, $\theta_n = \frac{8n-4}{15n}$, $\alpha_n = \frac{1}{n}$ and $\sigma_n = \frac{1}{4n^2}$ for $n \in \mathbb{N}$. For every $i = 1, 2, \ldots, N$, suppose that $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$ and $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$. Then $\{x_n\}$ converges strongly to the point $x^* = 0 \in \mathcal{F}$.
Proof of Solution.
It is easy to check that $a_i$ and $b_i$ satisfy all the conditions of Theorem 1 and that $A_i$ is $\frac{1}{4i}$-inverse strongly monotone and $B_i$ is $\frac{1}{3i}$-inverse strongly monotone for all $i = 1, 2, \ldots, N$. We choose $\lambda_A = \frac{1}{4N}$ and $\lambda_B = \frac{1}{3N}$. Since $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$, we obtain
$$\sum_{i=1}^N a_iA_ix = \sum_{i=1}^N\Bigl(\frac{3}{4^i} + \frac{1}{N4^N}\Bigr)\frac{x}{4i}.$$
Then $0 \in VI(H_1, \sum_{i=1}^N a_iA_i, M_A)$. Since $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$, we have
$$\sum_{i=1}^N b_iB_iy = \sum_{i=1}^N\Bigl(\frac{2}{3^i} + \frac{1}{N3^N}\Bigr)\frac{y}{3i}.$$
Then $0 \in VI(H_2, \sum_{i=1}^N b_iB_i, M_B)$. Thus $\Omega = \{0\}$.
It is easy to observe that $T$ and $S$ are nonexpansive mappings with $Fix(T) = \{0\}$ and $Fix(S) = \{0\}$. Hence $\Phi = \{0\}$. Therefore $\mathcal{F} = \Phi \cap \Omega = \{0\}$.
For every $n \in \mathbb{N}$, the sequences $\mu_n = \frac{1}{5n}$, $\varphi_n = \frac{7n+1}{15n}$, $\theta_n = \frac{8n-4}{15n}$, $\alpha_n = \frac{1}{n}$ and $\sigma_n = \frac{1}{4n^2}$ satisfy all the conditions of Theorem 1. We rewrite (20) as follows:
$$\begin{cases} u_n = J_{\lambda_A}^{M_A}\bigl(I - \frac{1}{4N}\sum_{i=1}^N a_iA_i\bigr)\bigl(x_n - \gamma A^*\bigl(I - J_{\lambda_B}^{M_B}(I - \frac{1}{3N}\sum_{i=1}^N b_iB_i)\bigr)Ax_n\bigr), \\ y_n = \bigl(1 - \frac{1}{n}\bigr)x_n + \frac{1}{n}\bigl(\frac{1}{4n^2}Sx_n + \bigl(1 - \frac{1}{4n^2}\bigr)Tx_n\bigr), \\ x_{n+1} = \frac{1}{5n}u + \frac{7n+1}{15n}y_n + \frac{8n-4}{15n}u_n. \end{cases}$$
Choose $u = 1$, $x_1 = 1$, $N = 100$ and $n = 100$. The numerical results for the sequence $\{x_n\}$ are shown in Table 1 and Figure 1. □
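For readers who want to experiment, the following Python sketch runs iteration (20) in a toy setting of our own choosing (simpler than Example 5, with $A_ix = i(x-1)$, $B_iy = i(y-2)$, $M_Ax = \{x-1\}$, $M_By = \{y-2\}$, $Ax = 2x$, $T = I$ and $Sx = \frac{x+1}{2}$, so that $\mathcal{F} = \Phi \cap \Omega = \{1\}$); the parameter sequences mirror Example 5, and every constant here is an illustrative assumption rather than the paper's data.

```python
# Toy run of iteration (20); since F = Phi ∩ Omega = {1}, P_F u = 1 for any u.
import numpy as np

N = 3
a = b = np.full(N, 1.0 / N)
lam_A = lam_B = 0.2                     # in (0, 2/N)
gamma = 0.1                             # in (0, 1/L) with L = 4 for A x = 2x

def sumA(x): return sum(a[i - 1] * i * (x - 1.0) for i in range(1, N + 1))
def sumB(y): return sum(b[i - 1] * i * (y - 2.0) for i in range(1, N + 1))
def J_MA(z): return (z + lam_A) / (1.0 + lam_A)        # resolvent of M_A x = {x - 1}
def J_MB(z): return (z + 2.0 * lam_B) / (1.0 + lam_B)  # resolvent of M_B y = {y - 2}
S = lambda x: 0.5 * (x + 1.0)           # nonexpansive, Fix(S) = {1}
T = lambda x: x                         # identity, so Phi = {x : S x = x} = {1}

u, x = 10.0, -5.0                       # Halpern anchor u and starting point x_1
for n in range(1, 2001):
    mu, phi, theta = 1 / (5 * n), (7 * n + 1) / (15 * n), (8 * n - 4) / (15 * n)
    alpha, sigma = 1.0 / n, 1.0 / (4 * n ** 2)
    Ax = 2.0 * x
    w = Ax - J_MB(Ax - lam_B * sumB(Ax))               # (I - J_MB(I - lam_B*sumB)) A x
    v = x - gamma * 2.0 * w                            # x - gamma * A^* w
    u_n = J_MA(v - lam_A * sumA(v))
    y_n = (1 - alpha) * x + alpha * (sigma * S(x) + (1 - sigma) * T(x))
    x = mu * u + phi * y_n + theta * u_n
print("x_n after 2000 steps:", x)       # approximately 1.0 = P_F u
```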
Example 6.
Let $H_1 = H_2 = \mathbb{R}$, the set of all real numbers, with the inner product defined by $\langle x, y\rangle = xy$ for all $x, y \in \mathbb{R}$ and the induced usual norm $|\cdot|$. For every $i = 1, 2, \ldots, N$, let the mappings $A_i : \mathbb{R} \to \mathbb{R}$ and $B_i : \mathbb{R} \to \mathbb{R}$ be defined by $A_ix = \frac{x}{4i}$ for all $x \in H_1$ and $B_iy = \frac{y}{3i}$ for all $y \in H_2$, respectively, and let $M_A, M_B : \mathbb{R} \to 2^{\mathbb{R}}$ be defined by $M_A(x) = \{2x\}$ for all $x \in \mathbb{R}$ and $M_B(y) = \{2y\}$ for all $y \in \mathbb{R}$. Let the mapping $A : \mathbb{R} \to \mathbb{R}$ be defined by $A(x) = 2x$ for all $x \in \mathbb{R}$, and let $\gamma \in (0, \frac{1}{4})$; we choose $\gamma = \frac{1}{10}$. Let the mapping $T : \mathbb{R} \to \mathbb{R}$ be defined by $Tx = \max\{0, x\}$ for all $x \in \mathbb{R}$ and let the mapping $S : \mathbb{R} \to \mathbb{R}$ be defined by $Sx = \min\{0, \frac{x}{2}\}$ for all $x \in \mathbb{R}$. Set $\mu_n = \frac{1}{5n}$, $\varphi_n = \frac{7n+1}{15n}$, $\theta_n = \frac{8n-4}{15n}$, $\alpha_n = \frac{1}{n}$ and $\sigma_n = n$ for $n \in \mathbb{N}$. For every $i = 1, 2, \ldots, N$, suppose that $a_i = \frac{3}{4^i} + \frac{1}{N4^N}$ and $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$. Then $\{x_n\}$ diverges.
Proof of Solution.
Note that the sequences $\{\mu_n\}, \{\varphi_n\}, \{\theta_n\}, \{\alpha_n\}$ and the coefficients $a_i$ and $b_i$ satisfy conditions (i), (ii), (iii), (v) and (vi) of Theorem 1, while assumption (iv) fails, since $\sigma_n = n$ does not converge to $0$:
$$\lim_{n\to\infty}n = \infty.$$
Choose $u = 1$, $x_1 = 1$, $N = 100$ and $n = 25$. The numerical results for the sequence $\{x_n\}$ are shown in Table 2 and Figure 2. Therefore, $\{x_n\}$ does not converge to $0$. □
Next, we give an example supporting our result in a two-dimensional space of real numbers.
Example 7.
Let $H_1 = H_2 = \mathbb{R}^2$, with the inner product defined by $\langle x, y\rangle = x_1y_1 + x_2y_2$ for all $x = (x_1, x_2), y = (y_1, y_2) \in \mathbb{R}^2$ and the induced usual norm $\|x\| = \sqrt{x_1^2 + x_2^2}$ for all $x = (x_1, x_2) \in \mathbb{R}^2$. For every $i = 1, 2, \ldots, N$, let the mappings $A_i : \mathbb{R}^2 \to \mathbb{R}^2$ and $B_i : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $A_ix = \frac{x}{3i}$ for all $x = (x_1, x_2) \in H_1$ and $B_iy = \frac{y}{4i}$ for all $y = (y_1, y_2) \in H_2$, respectively, and let $M_A, M_B : \mathbb{R}^2 \to 2^{\mathbb{R}^2}$ be defined by $M_A(x) = \{x\}$ for all $x \in \mathbb{R}^2$ and $M_B(y) = \{3y\}$ for all $y \in \mathbb{R}^2$. Let the mapping $A : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $A(x) = 3x$ for all $x \in \mathbb{R}^2$, and let $\gamma \in (0, \frac{1}{9})$; we choose $\gamma = \frac{1}{15}$. Let the mapping $T : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $Tx = \frac{x}{3}$ for all $x \in \mathbb{R}^2$ and let the mapping $S : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by $Sx = \min\{0, \frac{x}{5}\}$ for all $x \in \mathbb{R}^2$. Set $\mu_n = \frac{1}{4n}$, $\varphi_n = \frac{7n+1}{12n}$, $\theta_n = \frac{5n-4}{12n}$, $\alpha_n = \frac{1}{n}$ and $\sigma_n = \frac{1}{4n^2}$ for $n \in \mathbb{N}$. For every $i = 1, 2, \ldots, N$, suppose that $a_i = \frac{9}{10^i} + \frac{1}{N10^N}$ and $b_i = \frac{2}{3^i} + \frac{1}{N3^N}$. Then $\{x_n\}$ converges strongly to the point $x^* = (0, 0) \in \mathcal{F}$.
Proof of Solution.
It is easy to check that $a_i$ and $b_i$ satisfy all the conditions of Theorem 1 and that $A_i$ is $\frac{1}{3i}$-inverse strongly monotone and $B_i$ is $\frac{1}{4i}$-inverse strongly monotone for all $i = 1, 2, \ldots, N$. We choose $\lambda_A = \frac{1}{5N}$ and $\lambda_B = \frac{1}{7N}$. Thus $\Omega = \{(0, 0)\}$.
From the definitions of $T$ and $S$, both are nonexpansive mappings with $Fix(T) = \{(0, 0)\}$. Hence $\Phi = \{(0, 0)\}$. Therefore $\mathcal{F} = \Phi \cap \Omega = \{(0, 0)\}$.
For every $n \in \mathbb{N}$, the sequences $\mu_n = \frac{1}{4n}$, $\varphi_n = \frac{7n+1}{12n}$, $\theta_n = \frac{5n-4}{12n}$, $\alpha_n = \frac{1}{n}$ and $\sigma_n = \frac{1}{4n^2}$ satisfy all the conditions of Theorem 1.
From Theorem 1, we conclude that the sequence $\{x_n\}$ converges to $(0, 0)$. □

6. Conclusions

(i)
Table 1 and Figure 1 show that the sequence $\{x_n\}$ converges to $0$, where $\{0\} = \Phi \cap \Omega$.
(ii)
Table 2 and Figure 2 show that the sequence $\{x_n\}$ diverges when condition (iv) is violated, since $\lim_{n\to\infty}\sigma_n \neq 0$.

Author Contributions

Conceptualization, A.K.; formal analysis, A.K. and B.C.; writing-original draft, B.C.; supervision, A.K.; writing-review and editing, A.K. and B.C.

Funding

This research received no external funding.

Acknowledgments

This work is supported by King Mongkut’s Institute of Technology Ladkrabang.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Iiduka, H.; Takahashi, W. Weak convergence theorem by Cesàro means for nonexpansive mappings and inverse-strongly monotone mappings. J. Nonlinear Convex Anal. 2006, 7, 105–113.
2. Kangtunyakarn, A. The methods for variational inequality problems and fixed point of κ-strictly pseudononspreading mapping. Fixed Point Theory Appl. 2013, 2013, 171.
3. Kangtunyakarn, A. Iterative algorithms for finding a common solution of system of the set of variational inclusion problems and the set of fixed point problems. Fixed Point Theory Appl. 2011, 2011, 38.
4. Zhang, S.S.; Lee, J.H.W.; Chan, C.K. Algorithm of common solutions for quasi-variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29, 571–581.
5. Brézis, H. Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. Math. Stud. 1973, 5, 759–775.
6. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics, Volume 28; Cambridge University Press: Cambridge, UK, 1990.
7. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
8. Kangtunyakarn, A. Strong convergence of the hybrid method for a finite family of nonspreading mappings and variational inequality problems. Fixed Point Theory Appl. 2012, 2012, 188.
9. Moudafi, A.; Mainge, P.-E. Towards viscosity approximations of hierarchical fixed-point problems. Fixed Point Theory Appl. 2006, 2006, 95453.
10. Moudafi, A. Krasnoselski–Mann iteration for hierarchical fixed-point problems. Inverse Probl. 2007, 23, 1635–1640.
11. Kazmi, K.R.; Ali, R.; Furkan, M. Hybrid iterative method for split monotone variational inclusion problem and hierarchical fixed point problem for a finite family of nonexpansive mappings. Numer. Algorithms 2018, 79, 499–527.
12. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 250, 275–283.
13. Censor, Y.; Gibali, A.; Reich, S. The Split Variational Inequality Problem; The Technion–Israel Institute of Technology: Haifa, Israel, 2010.
14. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
15. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
16. Rockafellar, R.T. On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33, 209–216.
17. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
18. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
19. Browder, F.E. Nonlinear operators and nonlinear equations of evolution in Banach spaces. In Proceedings of the Symposia in Pure Mathematics, American Mathematical Society, Providence, RI, USA, 16–19 April 1968; pp. 1–308.
Figure 1. The sequence { x n } converges strongly to 0 with initial values x 1 = 1 , N = 100 and n = 100 .
Figure 2. The sequence $\{x_n\}$ diverges with initial values $x_1 = 1$, $N = 100$ and $n = 25$.
Table 1. The values of $\{x_n\}$ with $N = 100$, $n = 100$.

n     x_n
1     1.0000
2     −0.0767
3     −0.1178
4     −0.1125
5     −0.1027
50    −0.0139
96    −0.0073
97    −0.0072
98    −0.0071
99    −0.0070
100   −0.0070
Table 2. The values of $\{x_n\}$ with $N = 100$, $n = 25$.

n     x_n
1     1.0000
2     −0.0767
3     −0.1717
4     −0.3525
5     −0.4736
17    −12.0108
21    −41.7292
22    −57.2056
23    −78.5297
24    −107.9388
25    −148.5342
