Article

New Parallel Fixed Point Algorithms and Their Application to a System of Variational Inequalities

Department of Mathematics, Aksaray University, 68100 Aksaray, Turkey
Symmetry 2022, 14(5), 1025; https://doi.org/10.3390/sym14051025
Submission received: 19 March 2022 / Revised: 29 April 2022 / Accepted: 13 May 2022 / Published: 17 May 2022
(This article belongs to the Special Issue Advances in Matrix Transformations, Operators and Symmetry)

Abstract

In this study, considering the advantages of parallel fixed point algorithms arising from their symmetrical behavior, new types of parallel algorithms are defined. The strong convergence of these algorithms for certain mappings with altering points is analyzed, and it is observed through nontrivial examples that their convergence behavior is better than that of existing algorithms. In addition, the concept of data dependency for these algorithms is examined for the first time in this study. Finally, it is proven that the solution of a system of variational inequalities can be obtained using the newly defined parallel algorithms under suitable conditions.

1. Introduction

Nonlinear analysis, a branch of functional analysis, is a dynamic field used to solve real-life problems encountered in science and engineering. Game theory, mechanics, and optimization are among its most remarkable applications.
Variational inequality theory, one of the fields of study within nonlinear analysis, was introduced through the joint efforts of Stampacchia and Lions [1]. While variational inequality theory has a strong mathematical background, it has also had remarkable applications in many branches of science, especially in the last fifty years. This theory aims not only to solve problems encountered in nonlinear analysis but also to make their solution more computationally efficient. For this purpose, various techniques have been proposed by researchers to find approximate solutions to the problems in question (see [2,3,4,5,6,7,8,9,10]).
In this context, fixed point theory offers an effective method. It makes it possible to approach the solution in question with iterative algorithms. To do so, the problem to be solved must be placed in an appropriate operator class under suitable conditions. For this reason, many researchers have defined new iteration algorithms in the classical sense, claiming better convergence speed, and properties of these algorithms such as convergence, data dependency, and stability have been examined (see [11,12,13,14,15,16,17,18,19]).
When two sequences (x_n) and (y_n) are to be produced by classical fixed point iteration algorithms, the terms x_{n+1} and y_{n+1} are calculated separately, one after the other; classical fixed point iteration algorithms are therefore suited to single-processor computers. A parallel fixed-point algorithm, by contrast, generates x_{n+1} and y_{n+1} at the same time: it uses y_n to compute x_{n+1}, and it uses x_n to compute y_{n+1}. Because of this symmetric process, parallel fixed-point iteration algorithms are more effective than classical algorithms in meeting the requirements of multiprocessor computers, as this study aims to demonstrate.
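As a concrete illustration of this difference, the sketch below implements one parallel Mann-type step: both updates read only the previous pair (x_n, y_n), so the two assignments are independent and could be evaluated simultaneously on two processors. The mappings T1 and T2 here are hypothetical contractions introduced only so the loop has something to converge to; they are not taken from this paper.

```python
import math

# Hypothetical mappings used only for illustration:
T1 = lambda x: math.exp(-2 * x) / 3      # plays the role of T1 : C1 -> C2
T2 = lambda y: math.log(3 * y + 1) / 12  # plays the role of T2 : C2 -> C1

def parallel_step(x, y, alpha):
    """Both coordinates depend only on the previous pair (x, y),
    so the two assignments below are independent."""
    x_next = (1 - alpha) * x + alpha * T2(y)   # uses y_n for x_{n+1}
    y_next = (1 - alpha) * y + alpha * T1(x)   # uses x_n for y_{n+1}
    return x_next, y_next

x, y = 0.5, 0.5
for _ in range(300):
    x, y = parallel_step(x, y, 0.5)
# (x, y) now approximates a pair with T1(x) = y and T2(y) = x.
```

The same symmetry is what makes the parallel algorithms of this paper natural candidates for computing altering points.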
In particular, inspired by these advantages of parallel iteration algorithms obtained through mappings with altering points, and by their success in solving variational inequalities, we propose two new parallel fixed-point algorithms in this study. We investigate their strong convergence under appropriate conditions, give both theoretical and numerical data dependence results, and prove by a numerical example that one of the new parallel algorithms has a better rate of convergence than the parallel-S and parallel Mann iteration algorithms [20]. Finally, we show that these new algorithms can be used to reach the solution of a general system of variational inequalities. It should be emphasized here that the concept of data dependency for parallel algorithms is introduced for the first time in this study.

2. Preliminaries

Let (H, ‖·‖) be a Hilbert space whose norm ‖·‖ is induced by an inner product ⟨·, ·⟩, and let C ⊆ H. For all x, y ∈ C, a mapping T : C → C is called
  • L-Lipschitzian if there exists a constant L > 0 such that
    ‖Tx − Ty‖ ≤ L‖x − y‖;
  • ω-strongly monotone if there exists a constant ω > 0 such that
    ⟨Tx − Ty, x − y⟩ ≥ ω‖x − y‖²;
  • relaxed (κ, ω)-cocoercive if there exist constants κ > 0, ω > 0 such that
    ⟨Tx − Ty, x − y⟩ ≥ −κ‖Tx − Ty‖² + ω‖x − y‖².
It is clear that relaxed ( κ , ω ) -cocoercive mappings are more general than the ω -strongly monotone mappings.
Definition 1
([21]). Let X be a metric space and C_1, C_2 ⊆ X. We say x ∈ C_1 and y ∈ C_2 are altering points of mappings T_1 : C_1 → C_2 and T_2 : C_2 → C_1 if
T_1(x) = y,  T_2(y) = x.   (1)
Sahu [21] has obtained some convergence results using the Picard, Mann, and S-algorithms constructed with Lipschitz continuous mappings that have altering points. He has also defined the parallel-S algorithm to reach the altering points of nonlinear mappings, as follows:
Algorithm 1.
(x_1, y_1) ∈ C_1 × C_2,
x_{n+1} = T_2[(1 − α_n)y_n + α_n T_1 x_n],
y_{n+1} = T_1[(1 − α_n)x_n + α_n T_2 y_n],
in which {α_n}_{n=1}^∞ ⊆ [0, 1].
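Algorithm 1's symmetric structure is easy to express in code. The sketch below is a minimal illustration, not an implementation from [20,21]; the mappings T1 and T2 are hypothetical contractions with δ_1δ_2 < 1, chosen only so that the unique altering points exist.

```python
import math

# Illustrative (hypothetical) mappings with small Lipschitz constants:
T1 = lambda x: math.cos(x) / 3           # stands in for T1 : C1 -> C2
T2 = lambda y: math.sin(y) / 4 + 0.1     # stands in for T2 : C2 -> C1

def parallel_s(T1, T2, x, y, alphas):
    """Algorithm 1 (parallel-S): both new terms are built from the
    previous pair (x_n, y_n), so the two lines can run in parallel."""
    for a in alphas:
        x, y = (T2((1 - a) * y + a * T1(x)),
                T1((1 - a) * x + a * T2(y)))
    return x, y

x, y = parallel_s(T1, T2, 0.5, 0.5, [0.5] * 200)
# At the limit, T1(x) = y and T2(y) = x (the altering points).
```

At a fixed pair of the recursion, the convex combination (1 − α)y + αT_1x collapses to y, which is exactly the altering-point relation of Definition 1.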
Using Algorithm 1, Sahu et al. [20] have examined the solution of the general system of generalized variational inequalities (SGVI), which they have defined as follows:
⟨t_1(μ_1F_1 − s_1V_1)(x) + y − g_1(x), g_1(y) − y⟩ ≥ 0,
⟨t_2(μ_2F_2 − s_2V_2)(y) + x − g_2(y), g_2(x) − x⟩ ≥ 0,   (2)
in which t_i, s_i, and μ_i are constants, H is a Hilbert space, and g_i : H → H and V_i, F_i : C_i → H are mappings for i = 1, 2.
  • By taking (μ_1F_1 − s_1V_1) = T_1 and (μ_2F_2 − s_2V_2) = T_2 in (2), one can obtain the following SGVI (see [20]):
⟨t_1T_1(x) + y − g_1(x), g_1(y) − y⟩ ≥ 0,
⟨t_2T_2(y) + x − g_2(y), g_2(x) − x⟩ ≥ 0.   (3)
They also have proposed a parallel Mann algorithm as follows:
Algorithm 2.
(x_1, y_1) ∈ C_1 × C_2,
x_{n+1} = (1 − α_n)x_n + α_n T_2 y_n,
y_{n+1} = (1 − α_n)y_n + α_n T_1 x_n.
The authors in [20] have proved the strong convergence of the sequences obtained from Algorithms 1 and 2. They have also shown, through a numerical example, that the convergence speed of Algorithm 1 is better than that of Algorithm 2.
Using the information mentioned above, in this study, two parallel fixed point algorithms based on the Sintunavarat and Pitea algorithm [22] are defined as follows:
Algorithm 3.
(x_1, y_1) ∈ C_1 × C_2,
x_{n+1} = (1 − α_n)T_2 z_n + α_n T_2 w_n,
y_{n+1} = (1 − α_n)T_1 u_n + α_n T_1 v_n,
z_n = (1 − β_n)y_n + β_n w_n,
u_n = (1 − β_n)x_n + β_n v_n,
w_n = (1 − γ_n)y_n + γ_n T_1 x_n,
v_n = (1 − γ_n)x_n + γ_n T_2 y_n.
If we choose γ_n = 1 in Algorithm 3, it reduces to the following algorithm:
Algorithm 4.
(x_1, y_1) ∈ C_1 × C_2,
x_{n+1} = (1 − α_n)T_2 z_n + α_n T_2 w_n,
y_{n+1} = (1 − α_n)T_1 u_n + α_n T_1 v_n,
z_n = (1 − β_n)y_n + β_n w_n,
u_n = (1 − β_n)x_n + β_n v_n,
w_n = T_1 x_n,
v_n = T_2 y_n,
in which {α_n}_{n=1}^∞, {β_n}_{n=1}^∞, {γ_n}_{n=1}^∞ ⊆ [0, 1].
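A compact way to read Algorithms 3 and 4 together is as a single parametrized update: setting γ = 1 collapses the w_n and v_n lines of Algorithm 3 into those of Algorithm 4. The sketch below makes this explicit; the mappings T1 and T2 are hypothetical contractions with δ_1 + δ_2 < 1, used only to make the loop runnable.

```python
import math

def step(T1, T2, x, y, a, b, g):
    """One step of Algorithm 3; taking g = 1 recovers Algorithm 4."""
    w = (1 - g) * y + g * T1(x)          # w_n lies in C2
    v = (1 - g) * x + g * T2(y)          # v_n lies in C1
    z = (1 - b) * y + b * w
    u = (1 - b) * x + b * v
    x_next = (1 - a) * T2(z) + a * T2(w)
    y_next = (1 - a) * T1(u) + a * T1(v)
    return x_next, y_next

# Hypothetical pair with delta_1 + delta_2 = 1/3 + 1/4 < 1:
T1 = lambda x: math.cos(x) / 3
T2 = lambda y: math.sin(y) / 4

x, y = 0.5, 0.5
for n in range(1, 200):
    x, y = step(T1, T2, x, y, 1 / (n + 1), 1 / (n + 1), 1.0)  # Algorithm 4
# T1(x) ≈ y and T2(y) ≈ x at the limit.
```

Note that every quantity on the right-hand side of a step is built from the previous pair (x_n, y_n), preserving the parallel structure discussed in the Introduction.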
The convergence of these algorithms is examined under suitable conditions, and it is shown through a numerical example that Algorithm 4 has a better convergence speed than Algorithm 1. In addition, the data dependency result of this algorithm is analyzed. Finally, it is shown that Algorithm 4 can be used to reach the solution of the SGVI (2).
Now, we give some known results:
Lemma 1.
For a given z ∈ E, x ∈ C satisfies the inequality
⟨x − z, y − x⟩ ≥ 0, ∀y ∈ C,
if and only if
x = P_C[z],
where P_C is the projection of E onto C. In addition, the projection operator P_C is nonexpansive and satisfies ⟨u − v, P_C u − P_C v⟩ ≥ ‖P_C u − P_C v‖² for all u, v ∈ E.
Definition 2
([23]). Let T, S : X → X be two operators. S is called an approximate operator of T if, for a fixed ε > 0, ‖Tx − Sx‖ ≤ ε for all x ∈ X.
Lemma 2
([23]). Let {u_n}_{n=0}^∞ be a nonnegative real sequence for which there exists n_0 ∈ ℕ such that, for all n ≥ n_0, the following condition is satisfied:
u_{n+1} ≤ (1 − μ_n)u_n + μ_n η_n,
where μ_n ∈ (0, 1) with Σ_{n=1}^∞ μ_n = ∞ and η_n ≥ 0. Then, the following inequality holds:
0 ≤ lim sup_{n→∞} u_n ≤ lim sup_{n→∞} η_n.
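Lemma 2 can be checked numerically in the constant-coefficient case μ_n = μ and η_n = η, where the recursion in its worst case (equality) gives u_n − η = (1 − μ)^n (u_0 − η), so u_n is driven down to η:

```python
# Worst case of Lemma 2 with constant mu_n = mu and eta_n = eta:
# u_{n+1} = (1 - mu) * u_n + mu * eta, hence u_n - eta = (1 - mu)^n (u_0 - eta).
mu, eta = 0.2, 0.05
u = 10.0                            # arbitrary nonnegative start
for _ in range(500):
    u = (1 - mu) * u + mu * eta     # equality instead of <=
# u is now (numerically) eta, consistent with limsup u_n <= limsup eta_n.
```

This is exactly the mechanism used in the data dependence proof of Theorem 5 below, with η_n the constant error term built from ε_1 and ε_2.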

3. Results

3.1. Altering Points

In this section, the convergence of the Sintunavarat and Pitea algorithm [22] and of the following algorithm to the altering points of Lipschitz continuous mappings is discussed, and a numerical example supporting this result is given:
Algorithm 5.
x_1 ∈ C_1,
x_{n+1} = (1 − α_n)T z_n + α_n T w_n,
z_n = (1 − β_n)x_n + β_n w_n,
w_n = T x_n.
Theorem 1.
Let C_1 and C_2 be nonempty closed subsets of a Banach space X and let T_1 : C_1 → C_2 and T_2 : C_2 → C_1 be two Lipschitz continuous mappings with Lipschitz constants δ_1 and δ_2, respectively, such that δ_1δ_2 < 1. Then, we have the following:
i.
There exists a unique point ( x , y ) C 1 × C 2 such that x and y are altering points of mappings T 1 and T 2 , respectively.
ii.
For arbitrary x_1 ∈ C_1, the sequence {(x_n, y_n)} ⊆ C_1 × C_2 generated by Algorithm 5 converges to (x, y).
Proof. 
It is clear that the mapping T := T_2T_1 : C_1 → C_1 is a contraction mapping; therefore, there exists a unique point (x, y) ∈ C_1 × C_2 such that x and y are altering points of the mappings T_1 and T_2, respectively. Using Algorithm 5 and Definition 1, we obtain
‖x_{n+1} − x‖ = ‖(1 − α_n)T z_n + α_n T w_n − x‖
≤ δ_2(1 − α_n)‖T_1 z_n − y‖ + δ_2 α_n‖T_1 w_n − y‖
= δ_2(1 − α_n)‖T_1 z_n − T_1 x‖ + δ_2 α_n‖T_1 w_n − T_1 x‖
≤ δ_1δ_2(1 − α_n)‖z_n − x‖ + δ_1δ_2 α_n‖w_n − x‖   (4)
and
‖w_n − x‖ = ‖T x_n − x‖ ≤ δ_2‖T_1 x_n − y‖ ≤ δ_1δ_2‖x_n − x‖   (5)
and, using inequality (5) and δ 1 δ 2 < 1 , we obtain
‖z_n − x‖ = ‖(1 − β_n)x_n + β_n w_n − x‖ ≤ (1 − β_n)‖x_n − x‖ + β_n‖w_n − x‖ ≤ (1 − β_n)‖x_n − x‖ + β_n δ_1δ_2‖x_n − x‖ ≤ ‖x_n − x‖   (6)
Substituting (6) and (5) in (4), we have
‖x_{n+1} − x‖ ≤ δ_1δ_2(1 − α_n)‖z_n − x‖ + δ_1δ_2 α_n‖w_n − x‖ ≤ δ_1δ_2(1 − α_n)‖x_n − x‖ + δ_1δ_2 α_n‖x_n − x‖ = δ_1δ_2‖x_n − x‖,
and, by induction, we obtain
‖x_{n+1} − x‖ ≤ (δ_1δ_2)^n‖x_1 − x‖   (7)
Taking the limit on both sides of (7) and using δ 1 δ 2 < 1 , we obtain
lim_{n→∞}‖x_n − x‖ = 0.
Moreover, y_n = T_1 x_n → T_1 x = y. Thus, we conclude that (x_n, y_n) → (x, y). □
Theorem 2.
Let C 1 , C 2 , X, T 1 , and T 2 be the same as in Theorem 1. Let δ 1 δ 2 < 1 . Then, we have the following
i.
There exists a unique point ( x , y ) C 1 × C 2 such that x and y are altering points of mappings T 1 and T 2 , respectively.
ii.
For arbitrary x_1 ∈ C_1, the sequence {(x_n, y_n)} ⊆ C_1 × C_2 generated by the Sintunavarat and Pitea algorithm [22] converges to (x, y) with the following estimate:
‖x_{n+1} − x‖ ≤ δ_1δ_2‖x_n − x‖.
Proof. 
The proof can be obtained by performing calculations similar to the proof of Theorem 1. □
Example 1.
Let C_1 = C_2 = [0, 1]. Define T_1 : C_1 → C_2 and T_2 : C_2 → C_1 by T_1x = (1/3)e^{−2x} + (1/4)sin(4x) and T_2x = (1/12)ln(3x + 1). It can be seen from Figure 1 that these operators satisfy the Lipschitz condition for δ_1 = 0.68 and δ_2 = 0.28, with altering points (0.06045648688172, 0.35523931738830):
By considering the operators T_1 and T_2 shown in Figure 1a,b, and by taking the initial point (0.5, 0.5) ∈ C_1 × C_2 for Algorithm 4, the Sintunavarat and Pitea algorithm [22], the normal-S algorithm [21], and the Mann algorithm, we obtain Table 1, which shows that Algorithm 4 reaches the unique altering points faster than the other algorithms:
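Reading the operators of Example 1 as T_1x = (1/3)e^{−2x} + (1/4)sin(4x) and T_2x = (1/12)ln(3x + 1) (the sign of the exponent is inferred here from the stated altering points), the limit in Table 1 can be reproduced with Algorithm 4. The control sequences are not stated for this example, so α_n = β_n = 1/(n + 1), the choice used in Example 2, is an assumption in this sketch; the limit itself does not depend on that choice.

```python
import math

T1 = lambda x: math.exp(-2 * x) / 3 + math.sin(4 * x) / 4
T2 = lambda x: math.log(3 * x + 1) / 12

x, y = 0.5, 0.5                       # initial point of Table 1
for n in range(1, 80):
    a = b = 1 / (n + 1)               # assumed control sequences
    w, v = T1(x), T2(y)               # Algorithm 4 (gamma_n = 1)
    z = (1 - b) * y + b * w
    u = (1 - b) * x + b * v
    x, y = (1 - a) * T2(z) + a * T2(w), (1 - a) * T1(u) + a * T1(v)
# (x, y) approaches the altering points (0.06045648688172, 0.35523931738830).
```

The first few iterates of this loop agree with the Algorithm 4 column of Table 1.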

3.2. Convergence Analysis and Data Dependence for the New Parallel Algorithms

In this section, the convergence of Algorithms 3 and 4 to the unique altering points of Lipschitz continuous mappings is analyzed, and a numerical example is given to demonstrate their efficiency. In addition, a data dependence result is obtained for Algorithm 4.
Theorem 3.
Let C_1, C_2, X, T_1, and T_2 be the same as in Theorem 1. Let δ_1 and δ_2 be Lipschitz constants such that δ_1 + δ_2 < 1. Then, the sequence {(x_n, y_n)}_{n=0}^∞ in C_1 × C_2 generated by Algorithm 4 converges strongly to a unique point (x, y) in C_1 × C_2 such that x and y are altering points of the mappings T_1 and T_2, respectively.
Proof. 
By Theorem 1, there exists a unique point ( x , y ) in C 1 × C 2 so that x and y are altering points of mappings T 1 and T 2 , respectively. Using Algorithm 4 and Definition 1, we obtain
‖x_{n+1} − x‖ = ‖(1 − α_n)T_2 z_n + α_n T_2 w_n − x‖ = ‖(1 − α_n)T_2 z_n + α_n T_2 w_n − T_2 y‖ ≤ δ_2(1 − α_n)‖z_n − y‖ + δ_2 α_n‖w_n − y‖   (8)
and
‖w_n − y‖ = ‖T_1 x_n − y‖ = ‖T_1 x_n − T_1 x‖ ≤ δ_1‖x_n − x‖.   (9)
From (9), we obtain
‖z_n − y‖ = ‖(1 − β_n)y_n + β_n w_n − y‖ ≤ (1 − β_n)‖y_n − y‖ + β_n‖w_n − y‖ ≤ (1 − β_n)‖y_n − y‖ + δ_1β_n‖x_n − x‖   (10)
Substituting (10) and (9) in (8), we have
‖x_{n+1} − x‖ ≤ δ_2(1 − α_n)‖z_n − y‖ + δ_2 α_n‖w_n − y‖
≤ δ_2(1 − α_n)[‖y_n − y‖ + δ_1β_n‖x_n − x‖] + δ_1δ_2 α_n‖x_n − x‖
≤ δ_2[‖y_n − y‖ + ‖x_n − x‖]   (11)
The following inequality can be obtained similar to the processes performed in (8)–(11):
‖y_{n+1} − y‖ ≤ δ_1[‖y_n − y‖ + ‖x_n − x‖]   (12)
From (11) and (12), we have
‖x_{n+1} − x‖ + ‖y_{n+1} − y‖ ≤ λ[‖x_n − x‖ + ‖y_n − y‖]   (13)
in which λ = δ_1 + δ_2 < 1. Now, define the norm ‖·‖ on X × X by ‖(x, y)‖ = ‖x‖ + ‖y‖ for all (x, y) ∈ X × X. Note that (X × X, ‖·‖) is a Banach space. From (13), we have
‖(x_{n+1}, y_{n+1}) − (x, y)‖ ≤ λ‖(x_n, y_n) − (x, y)‖.   (14)
By induction, we obtain
‖(x_{n+1}, y_{n+1}) − (x, y)‖ ≤ λ^n‖(x_1, y_1) − (x, y)‖   (15)
Taking the limit on both sides of (15), we obtain
lim_{n→∞}‖(x_{n+1}, y_{n+1}) − (x, y)‖ = 0.
Thus, we have lim_{n→∞}‖x_n − x‖ = lim_{n→∞}‖y_n − y‖ = 0. Therefore, {x_n} and {y_n} converge to x and y, respectively. □
Theorem 4.
Let C_1, C_2, X, T_1, and T_2 be the same as in Theorem 1. Let δ_1 and δ_2 be Lipschitz constants such that δ_1 + δ_2 < 1. Then, the sequence {(x_n, y_n)}_{n=0}^∞ in C_1 × C_2 generated by Algorithm 3 converges strongly to a unique point (x, y) in C_1 × C_2 such that x and y are altering points of the mappings T_1 and T_2, respectively, with the following estimate:
‖(x_{n+1}, y_{n+1}) − (x, y)‖ ≤ (δ_1 + δ_2)‖(x_n, y_n) − (x, y)‖.   (16)
Proof. 
The proof can be obtained by performing calculations similar to the proof of Theorem 3. □
Example 2.
Let C_1 = C_2 = [0, 1]. Define T_1 : C_1 → C_2 and T_2 : C_2 → C_1 by
T 1 x = x 2 c o s 2 ( 2 π 9 x ) + 1 2 ( x + 1 ) 6 1 + e a r c o s h 3 π 2 x + 1 85 + 3 s i n 2 4 π 15 x 5 x 2 + 12 T 2 x = l n ( 2 x + 3 ) s e c 2 π 9 x s i n ( 2 x 2 + 1 ) 9 x + 1 3 / 2 ,
respectively. It can be seen from Figure 2 that these operators satisfy the Lipschitz condition for δ_1 = 0.35 and δ_2 = 0.15, with unique altering points (x, y) = (0.01268227439847, 0.05571471149404):
By considering the operators T_1 and T_2 shown in Figure 2a,b, and by choosing α_n = β_n = γ_n = 1/(n + 1) and the initial point (1, 1) ∈ C_1 × C_2 for Algorithms 1–4, we obtain Table 2 and Figure 3; it can be seen that Algorithm 4 has a better convergence speed than the other algorithms.
Now, we discuss the data dependency concept of Algorithm 4 for Lipschitz continuous mappings:
Theorem 5.
Let C_1, C_2, X, T_1, and T_2 be the same as in Theorem 1 and let δ_1 and δ_2 be Lipschitz constants such that δ_1 + δ_2 < 1. Let S_1 and S_2 be approximate operators of T_1 and T_2, respectively. Let {x_n}_{n=0}^∞ and {y_n}_{n=0}^∞ be iterative sequences generated by Algorithm 4, and define iterative sequences {a_n}_{n=0}^∞ and {b_n}_{n=0}^∞ as follows:
Algorithm 6.
a_{n+1} = (1 − α_n)S_2 c_n + α_n S_2 d_n,
b_{n+1} = (1 − α_n)S_1 h_n + α_n S_1 k_n,
c_n = (1 − β_n)b_n + β_n d_n,
h_n = (1 − β_n)a_n + β_n k_n,
d_n = S_1 a_n,
k_n = S_2 b_n,
in which {α_n}_{n=0}^∞ and {β_n}_{n=0}^∞ are real sequences in [0, 1]. In addition, we suppose that there exist nonnegative constants ε_1 and ε_2 such that ‖T_1ϑ − S_1ϑ‖ ≤ ε_1 and ‖T_2σ − S_2σ‖ ≤ ε_2 for all ϑ ∈ C_1 and σ ∈ C_2. If (x, y) ∈ C_1 × C_2 are the altering points of the mappings T_1 and T_2, and (a, b) ∈ C_1 × C_2 are the altering points of the mappings S_1 and S_2, such that (a_n, b_n) → (a, b) as n → ∞, then we have
‖(x, y) − (a, b)‖ = ‖x − a‖ + ‖y − b‖ ≤ (δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1) / (1 − (δ_1 + δ_2)).   (17)
Proof. 
Using Algorithms 4 and 6, we have
‖x_{n+1} − a_{n+1}‖ ≤ (1 − α_n)‖T_2 z_n − S_2 c_n‖ + α_n‖T_2 w_n − S_2 d_n‖
≤ (1 − α_n)‖T_2 z_n − T_2 c_n‖ + (1 − α_n)‖T_2 c_n − S_2 c_n‖ + α_n‖T_2 w_n − T_2 d_n‖ + α_n‖T_2 d_n − S_2 d_n‖
≤ (1 − α_n)‖T_2 z_n − T_2 c_n‖ + (1 − α_n)ε_2 + α_n‖T_2 w_n − T_2 d_n‖ + α_nε_2
≤ (1 − α_n)δ_2‖z_n − c_n‖ + α_nδ_2‖w_n − d_n‖ + ε_2   (18)
and
‖w_n − d_n‖ = ‖T_1 x_n − S_1 a_n‖ ≤ ‖T_1 x_n − T_1 a_n‖ + ‖T_1 a_n − S_1 a_n‖ ≤ δ_1‖x_n − a_n‖ + ε_1.   (19)
Using inequality (19), we obtain
‖z_n − c_n‖ ≤ (1 − β_n)‖y_n − b_n‖ + β_n‖w_n − d_n‖ ≤ (1 − β_n)‖y_n − b_n‖ + β_nδ_1‖x_n − a_n‖ + β_nε_1.   (20)
Substituting (20) and (19) in (18), we have
‖x_{n+1} − a_{n+1}‖ ≤ (1 − α_n)δ_2‖z_n − c_n‖ + α_nδ_2‖w_n − d_n‖ + ε_2 ≤ δ_2[‖y_n − b_n‖ + ‖x_n − a_n‖] + δ_2ε_1 + ε_2.   (21)
By doing calculations similar to the inequality (18), we attain
‖y_{n+1} − b_{n+1}‖ ≤ (1 − α_n)δ_1‖u_n − h_n‖ + α_nδ_1‖v_n − k_n‖ + ε_1.   (22)
The following inequalities can be obtained similar to the processes performed in (19) and (20):
‖v_n − k_n‖ ≤ δ_2‖y_n − b_n‖ + ε_2,   (23)
and, by using inequality (23), we obtain
‖u_n − h_n‖ ≤ (1 − β_n)‖x_n − a_n‖ + δ_2β_n‖y_n − b_n‖ + β_nε_2.   (24)
Substituting (24) and (23) in (22), we have
‖y_{n+1} − b_{n+1}‖ ≤ δ_1[‖x_n − a_n‖ + ‖y_n − b_n‖] + δ_1ε_2 + ε_1,   (25)
If (21) and (25) are combined, we attain the following inequality:
‖x_{n+1} − a_{n+1}‖ + ‖y_{n+1} − b_{n+1}‖ ≤ (δ_1 + δ_2)[‖y_n − b_n‖ + ‖x_n − a_n‖] + δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1.   (26)
There exists a real number δ ( 0 , 1 ) such that 1 δ = ( δ 1 + δ 2 ) < 1 . Hence, we have
‖x_{n+1} − a_{n+1}‖ + ‖y_{n+1} − b_{n+1}‖ ≤ (1 − δ)[‖x_n − a_n‖ + ‖y_n − b_n‖] + δ · (δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1)/δ   (27)
Denote
u_n = ‖x_n − a_n‖ + ‖y_n − b_n‖,
μ_n = δ ∈ (0, 1),
η_n = (δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1)/δ.
It is now easy to check that (27) satisfies all the requirements of Lemma 2. Hence, it follows by its conclusion that
0 ≤ lim sup_{n→∞} [‖x_n − a_n‖ + ‖y_n − b_n‖] ≤ lim sup_{n→∞} (δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1)/δ   (28)
Since (a_n, b_n) → (a, b) as n → ∞, we obtain
‖x − a‖ + ‖y − b‖ ≤ (δ_2ε_1 + ε_2 + δ_1ε_2 + ε_1)/δ.   (29) □
Example 3.
Let C = C_1 = C_2 = [−1, 1] be a subset of ℝ with the usual norm, and let the norm ‖·‖ be defined by ‖(x, y)‖ = |x| + |y| for all (x, y) ∈ ℝ × ℝ. We choose the operators T_1 and T_2 as T_1x = 1/2 + (1/8)ln(x + 1) and T_2x = 1/3 + (1/6)cos x, respectively. It can be seen from Figure 4 that these operators satisfy the Lipschitz condition for δ_1 = 0.3 and δ_2 = 0.5, with unique altering points (x, y) = (0.475540, 0.548628):
Define the operators S_1 and S_2 by
S_1x = x/6 + x²/3 + x⁵/36 + 1/12,
S_2x = x/12 + x³/24 + x⁵/48.
It is clear that S_1 and S_2 have the unique altering points (a, b) = (0.007069, 0.084528). By utilizing the Wolfram Mathematica 9 software package and the operators T_1 and T_2 shown in Figure 4a,b, we obtain max_{x∈C}|T_1x − S_1x| = 0.417757. Hence, for all x ∈ C and for a fixed ε_1 > 0, we have |T_1x − S_1x| ≤ 0.417757. Similarly, max_{x∈C}|T_2x − S_2x| = 0.528926 and hence, for all x ∈ C and for a fixed ε_2 > 0, we obtain |T_2x − S_2x| ≤ 0.528926. Thus, S_1 and S_2 are approximate operators of T_1 and T_2, respectively. Hence, the distance between the two pairs of altering points is ‖(x, y) − (a, b)‖ = |0.475540 − 0.007069| + |0.548628 − 0.084528| = 0.932571. If we take the initial point (a_1, b_1) = (1, 1) and put α_n = β_n = 1/(n + 1) for all n ∈ ℕ in Algorithm 6, then we obtain Table 3:
Then, we have the following estimate:
0.932571 = |x − a| + |y − b| ≤ 1.31424 / (1 − 0.8) = 6.5712.   (30)
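The numbers in Example 3 can be reproduced directly. Reading the operators as T_1x = 1/2 + (1/8)ln(x + 1), T_2x = 1/3 + (1/6)cos x, S_1x = x/6 + x²/3 + x⁵/36 + 1/12, and S_2x = x/12 + x³/24 + x⁵/48, the altering points of each pair are the fixed points of the compositions T_2∘T_1 and S_2∘S_1, and the distance between them respects the data-dependence bound of Theorem 5. The ε values below are taken from the text, not recomputed.

```python
import math

T1 = lambda x: 1/2 + math.log(x + 1) / 8
T2 = lambda x: 1/3 + math.cos(x) / 6
S1 = lambda x: x/6 + x**2/3 + x**5/36 + 1/12
S2 = lambda x: x/12 + x**3/24 + x**5/48

def altering_point(f, g, x0=0.0, iters=200):
    """Iterate x -> g(f(x)); the limit x* and y* = f(x*) are the altering
    points of the pair (f, g) when g o f is a contraction."""
    x = x0
    for _ in range(iters):
        x = g(f(x))
    return x, f(x)

x, y = altering_point(T1, T2)      # ≈ (0.475540, 0.548628)
a, b = altering_point(S1, S2)      # ≈ (0.007069, 0.084528)

# Distance between the altering points vs. the bound of Theorem 5:
d1, d2 = 0.3, 0.5                  # Lipschitz constants
e1, e2 = 0.417757, 0.528926        # max|T_i x - S_i x| on C, from the text
gap = abs(x - a) + abs(y - b)                        # ≈ 0.932571
bound = (d2*e1 + e2 + d1*e2 + e1) / (1 - (d1 + d2))  # ≈ 6.5712
assert gap <= bound
```

The bound is loose here (0.932571 against 6.5712), which is expected: Theorem 5 controls the worst case over all approximate operator pairs with the given ε values.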

3.3. Application to a System of Nonlinear Variational Inequalities

In this section, it is proved that the solution of the system of nonlinear variational inequalities (2) can be reached under suitable conditions by rewriting Algorithm 4 with the help of certain mappings, as follows:
Theorem 6.
Let C_1 and C_2 be closed convex subsets of a real Hilbert space H. Let F_i be relaxed (κ_i, ω_i)-cocoercive and L_i-Lipschitzian mappings, g_i be relaxed (a_i, b_i)-cocoercive and η_i-Lipschitzian mappings, and V_i be φ_i-Lipschitzian mappings with the constants L_i, η_i, φ_i ≥ 0 for i ∈ {1, 2}. Suppose that Σ_{i=1}^2 (θ_i + ν_i + λ_i) < 1, in which
ν_i = √(1 + 2a_iη_i² − 2b_i + η_i²),
θ_i = √(1 + 2t_iμ_iκ_iL_i² − 2t_iμ_iω_i + t_i²μ_i²L_i²),
λ_i = t_is_iφ_i
for i ∈ {1, 2}. Let {(x_n, y_n)}_{n=1}^∞ be a sequence in C_1 × C_2 obtained from the following algorithm, which is defined by using Lemma 1:
(x_1, y_1) ∈ C_1 × C_2,
x_{n+1} = (1 − α_n)P_{C_1}[g_2(z_n) − t_2(μ_2F_2(z_n) − s_2V_2(z_n))] + α_nP_{C_1}[g_2(w_n) − t_2(μ_2F_2(w_n) − s_2V_2(w_n))],
z_n = (1 − β_n)y_n + β_n w_n,
w_n = P_{C_2}[g_1(x_n) − t_1(μ_1F_1(x_n) − s_1V_1(x_n))],
y_{n+1} = (1 − α_n)P_{C_2}[g_1(u_n) − t_1(μ_1F_1(u_n) − s_1V_1(u_n))] + α_nP_{C_2}[g_1(v_n) − t_1(μ_1F_1(v_n) − s_1V_1(v_n))],
u_n = (1 − β_n)x_n + β_n v_n,
v_n = P_{C_1}[g_2(y_n) − t_2(μ_2F_2(y_n) − s_2V_2(y_n))]   (31)
in which { α n } n = 0 and { β n } n = 0 are real sequences in [ 0 , 1 ] . Then, { ( x n , y n ) } converges strongly to a unique point ( x , y ) .
Proof. 
Let (x, y) ∈ H × H be the solution of (2); by Lemma 1, it satisfies
x = (1 − α_n)P_{C_1}[g_2(z) − t_2(μ_2F_2(z) − s_2V_2(z))] + α_nP_{C_1}[g_2(w) − t_2(μ_2F_2(w) − s_2V_2(w))],
z = (1 − β_n)y + β_n w,
w = P_{C_2}[g_1(x) − t_1(μ_1F_1(x) − s_1V_1(x))],
y = (1 − α_n)P_{C_2}[g_1(u) − t_1(μ_1F_1(u) − s_1V_1(u))] + α_nP_{C_2}[g_1(v) − t_1(μ_1F_1(v) − s_1V_1(v))],
u = (1 − β_n)x + β_n v,
v = P_{C_1}[g_2(y) − t_2(μ_2F_2(y) − s_2V_2(y))]   (32)
From (31) and (32), we obtain
‖w_n − w‖ = ‖P_{C_2}[g_1(x_n) − t_1(μ_1F_1(x_n) − s_1V_1(x_n))] − P_{C_2}[g_1(x) − t_1(μ_1F_1(x) − s_1V_1(x))]‖
≤ ‖g_1(x_n) − g_1(x) − (x_n − x)‖ + ‖x_n − x − t_1μ_1(F_1(x_n) − F_1(x))‖ + t_1s_1‖V_1(x_n) − V_1(x)‖.   (33)
Since g_1 is a relaxed (a_1, b_1)-cocoercive and η_1-Lipschitzian mapping, we have
‖g_1(x_n) − g_1(x) − (x_n − x)‖² = ‖g_1(x_n) − g_1(x)‖² − 2⟨g_1(x_n) − g_1(x), x_n − x⟩ + ‖x_n − x‖²
≤ ‖g_1(x_n) − g_1(x)‖² + 2a_1‖g_1(x_n) − g_1(x)‖² − 2b_1‖x_n − x‖² + ‖x_n − x‖²
≤ η_1²‖x_n − x‖² + 2a_1η_1²‖x_n − x‖² − 2b_1‖x_n − x‖² + ‖x_n − x‖²
= (1 + 2a_1η_1² − 2b_1 + η_1²)‖x_n − x‖².   (34)
Using the assumption, we obtain
‖g_1(x_n) − g_1(x) − (x_n − x)‖ ≤ ν_1‖x_n − x‖.   (35)
Since F_1 is a relaxed (κ_1, ω_1)-cocoercive and L_1-Lipschitzian mapping, we have
‖x_n − x − t_1μ_1(F_1(x_n) − F_1(x))‖² = ‖x_n − x‖² + t_1²μ_1²‖F_1(x_n) − F_1(x)‖² − 2t_1μ_1⟨F_1(x_n) − F_1(x), x_n − x⟩
≤ ‖x_n − x‖² + 2t_1μ_1[κ_1‖F_1(x_n) − F_1(x)‖² − ω_1‖x_n − x‖²] + t_1²μ_1²‖F_1(x_n) − F_1(x)‖²
≤ ‖x_n − x‖² + 2t_1μ_1κ_1L_1²‖x_n − x‖² − 2t_1μ_1ω_1‖x_n − x‖² + t_1²μ_1²L_1²‖x_n − x‖²
= (1 + 2t_1μ_1κ_1L_1² − 2t_1μ_1ω_1 + t_1²μ_1²L_1²)‖x_n − x‖²   (36)
Using the assumption, we obtain
‖x_n − x − t_1μ_1(F_1(x_n) − F_1(x))‖ ≤ θ_1‖x_n − x‖.   (37)
In addition, since V_1 is a φ_1-Lipschitzian mapping, we obtain
t_1s_1‖V_1(x_n) − V_1(x)‖ ≤ t_1s_1φ_1‖x_n − x‖.   (38)
Substituting (35), (37), and (38) in (33), we obtain
‖w_n − w‖ ≤ (ν_1 + θ_1 + λ_1)‖x_n − x‖.   (39)
The following inequality can be obtained similar to the processes performed in (39):
‖z_n − z‖ ≤ ‖y_n − y‖ + (ν_1 + θ_1 + λ_1)‖x_n − x‖.   (40)
Moreover,
‖x_{n+1} − x‖ ≤ (1 − α_n)‖P_{C_1}[g_2(z_n) − t_2(μ_2F_2(z_n) − s_2V_2(z_n))] − P_{C_1}[g_2(z) − t_2(μ_2F_2(z) − s_2V_2(z))]‖ + α_n‖P_{C_1}[g_2(w_n) − t_2(μ_2F_2(w_n) − s_2V_2(w_n))] − P_{C_1}[g_2(w) − t_2(μ_2F_2(w) − s_2V_2(w))]‖
≤ (1 − α_n)‖g_2(z_n) − g_2(z) − (z_n − z)‖ + (1 − α_n)‖z_n − z − t_2μ_2(F_2(z_n) − F_2(z))‖ + (1 − α_n)t_2s_2‖V_2(z_n) − V_2(z)‖ + α_n‖g_2(w_n) − g_2(w) − (w_n − w)‖ + α_n‖w_n − w − t_2μ_2(F_2(w_n) − F_2(w))‖ + α_nt_2s_2‖V_2(w_n) − V_2(w)‖.   (41)
The following inequality can be obtained from (41) similar to the processes performed in (34)–(38):
‖x_{n+1} − x‖ ≤ (1 − α_n)[ν_2 + θ_2 + λ_2]‖z_n − z‖ + α_n[ν_2 + θ_2 + λ_2]‖w_n − w‖.   (42)
Substituting (39) and (40) in (42), we obtain
‖x_{n+1} − x‖ ≤ (1 − α_n)[ν_2 + θ_2 + λ_2] × [‖y_n − y‖ + (ν_1 + θ_1 + λ_1)‖x_n − x‖] + α_n[ν_2 + θ_2 + λ_2][ν_1 + θ_1 + λ_1]‖x_n − x‖.   (43)
If similar calculations are performed as in the processes (34)–(43) for the sequence of ( y n + 1 ) , we obtain
‖y_{n+1} − y‖ ≤ (1 − α_n)[ν_1 + θ_1 + λ_1] × [‖x_n − x‖ + (ν_2 + θ_2 + λ_2)‖y_n − y‖] + α_n[ν_1 + θ_1 + λ_1][ν_2 + θ_2 + λ_2]‖y_n − y‖.   (44)
If (43) and (44) are combined and necessary simplifications are done, we have the following inequality:
‖x_{n+1} − x‖ + ‖y_{n+1} − y‖ ≤ Θ[‖x_n − x‖ + ‖y_n − y‖]   (45)
in which Θ = Σ_{i=1}^2 (θ_i + ν_i + λ_i) < 1. By using (45) and the norm ‖(x, y)‖ = ‖x‖ + ‖y‖ for all (x, y) ∈ H × H, we obtain
‖(x_{n+1}, y_{n+1}) − (x, y)‖ ≤ Θ‖(x_n, y_n) − (x, y)‖.   (46)
By induction, we obtain
‖(x_{n+1}, y_{n+1}) − (x, y)‖ ≤ Θ^n‖(x_1, y_1) − (x, y)‖.   (47)
Taking the limit on both sides of (47), we obtain
lim_{n→∞}‖(x_{n+1}, y_{n+1}) − (x, y)‖ = 0.
Thus, we have lim_{n→∞}‖x_n − x‖ = lim_{n→∞}‖y_n − y‖ = 0. Therefore, {x_n} and {y_n} converge to x and y, respectively. □
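To see algorithm (31) in action, the sketch below runs a deliberately simple instance on H = ℝ with C_1 = C_2 = [0, 1]. All of the mappings are illustrative assumptions, not taken from the paper: g_i and F_i are the identity, V_1 = sin, V_2 = cos, and t_i = μ_i = 1, s_i = 0.1, chosen so that the contraction condition of Theorem 6 holds (ν_i = 0, θ_i = 0, λ_i = 0.1, so Θ = 0.2 < 1).

```python
import math

# Projection onto C = [0, 1] (Lemma 1 on the real line):
P = lambda v: min(1.0, max(0.0, v))

def G1(p):  # P_{C2}[g1(p) - t1*(mu1*F1(p) - s1*V1(p))] with the toy data
    return P(p - (p - 0.1 * math.sin(p)))

def G2(p):  # P_{C1}[g2(p) - t2*(mu2*F2(p) - s2*V2(p))] with the toy data
    return P(p - (p - 0.1 * math.cos(p)))

x, y = 0.5, 0.5
for n in range(1, 100):
    a = b = 1 / (n + 1)
    w, v = G1(x), G2(y)              # the projected mappings of (31)
    z = (1 - b) * y + b * w
    u = (1 - b) * x + b * v
    x, y = (1 - a) * G2(z) + a * G2(w), (1 - a) * G1(u) + a * G1(v)
# At the limit, x = G2(y) and y = G1(x): the pair solving this toy instance.
```

This is Algorithm 4 applied to T_1 = G1 and T_2 = G2, which is exactly how Theorem 6 reduces the variational inequality system to an altering-point computation.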

4. Conclusions

In this work, we have proved some strong convergence theorems for new parallel algorithms obtained from the Sintunavarat and Pitea [22] fixed point algorithm. Furthermore, we have observed through nontrivial examples that the convergence speed of one of the new algorithms is better than that of the other algorithms mentioned in this manuscript. In addition, we have discussed the concept of data dependency for the new parallel algorithms and have given a numerical example for this result. As an application, we have examined the solution of a variational inequality system using the newly defined parallel algorithms. It should be especially noted that the concept of data dependency for parallel algorithms has been introduced for the first time in this study, and a nontrivial numerical example has been presented to support this result. The results obtained here can be interpreted as an improvement and development of the corresponding results in the literature.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author gratefully thanks the referees for the constructive comments and recommendations which definitely help to improve the readability and quality of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Lions, J.L.; Stampacchia, G. Variational inequalities. Commun. Pure Appl. Math. 1967, 20, 493–519. [Google Scholar] [CrossRef] [Green Version]
  2. Yao, Y.; Liou, Y.C.; Kang, S.M.; Yu, Y. Algorithms with strong convergence for a system of nonlinear variational inequalities in Banach spaces. Nonlinear Anal Theory Methods Appl. 2011, 74, 6024–6034. [Google Scholar] [CrossRef]
  3. Jolaoso, L.O.; Aphane, M. Bregman subgradient extragradient method with monotone self-adjustment stepsize for solving pseudo-monotone variational inequalities and fixed point problems. J. Ind. Manag. Optim. 2022, 18, 773. [Google Scholar] [CrossRef]
  4. Atalan, Y. On a new fixed Point iterative algorithm for general variational inequalities. J. Nonlinear Convex Anal. 2019, 20, 2371–2386. [Google Scholar]
  5. Noor, M.A.; Noor, K.I. Some parallel algorithms for a new system of quasi variational inequalities. Appl. Math. Inf. Sci. 2013, 7, 2493. [Google Scholar] [CrossRef] [Green Version]
  6. Noor, M.A.; Noor, K.I.; Khan, A.G. Parallel schemes for solving a system of extended general quasi variational inequalities. Appl. Math. Comput. 2014, 245, 566–574. [Google Scholar] [CrossRef]
  7. Uzor, V.A.; Alakoya, T.O.; Mewomo, O.T. Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Appl. Open Math. J. 2022, 20, 234–257. [Google Scholar] [CrossRef]
  8. Alakoya, T.O.; Uzor, V.A.; Mewomo, O.T.; Yao, J.C. On a system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022, 1, 1–30. [Google Scholar] [CrossRef]
  9. Ogwo, G.N.; Izuchukwu, C.; Shehu, Y.; Mewomo, O.T. Convergence of Relaxed Inertial Subgradient Extragradient Methods for Quasimonotone Variational Inequality Problems. J. Sci. Comput. 2022, 90, 1–35. [Google Scholar] [CrossRef]
  10. Chidume, C.E.; Nnakwe, M.O. Iterative algorithms for split variational inequalities and generalized split feasibility problems with applications. J. Nonlinear Var. Anal. 2019, 3, 127–140. [Google Scholar]
  11. Atalan, Y.; Karakaya, V. Iterative solution of functional Volterra-Fredholm integral equation with deviating argument. J. Nonlinear Convex Anal. 2017, 18, 675–684. [Google Scholar]
  12. Atalan, Y.; Gursoy, F.; Khan, A.R. Convergence of S-Iterative Method to a Solution of Fredholm Integral Equation and Data Depency. FU. Math. Inform. 2021, 36, 685–694. [Google Scholar]
  13. Karakaya, V.; Atalan, Y.; Dogan, K.; Bouzara, N. Some fixed point results for a new three steps iteration process in Banach spaces. Fixed Point Theory 2017, 18, 625–640. [Google Scholar] [CrossRef] [Green Version]
  14. Dogan, K. A comparative study on some recent iterative schemes. J. Nonlinear Convex Anal. 2019, 20, 2411–2423. [Google Scholar]
  15. Hacıoglu, E. A comparative study on iterative algorithms of almost contractions in the context of convergence, stability and data dependency. Comput. Appl. Math. 2021, 40, 1–25. [Google Scholar] [CrossRef]
  16. Hacıoglu, E.; Gursoy, F.; Maldar, S.; Atalan, Y.; Milovanović, G.V. Iterative approximation of fixed points and applications to two-point second-order boundary value problems and to machine learning. Appl. Numer. Math. 2021, 167, 143–172. [Google Scholar] [CrossRef]
  17. Xu, H.K.; Sahu, D.R. Parallel Normal S-Iteration Methods with Applications to Optimization Problems. Numer. Funct. Anal. Optim. 2021, 42, 1925–1953. [Google Scholar] [CrossRef]
  18. Maldar, S. Iterative algorithms of generalized nonexpansive mappings and monotone operators with application to convex minimization problem. J. Appl. Math. Comput. 2021, 1–28. [Google Scholar] [CrossRef]
  19. Maldar, S.; Gursoy, F.; Atalan, Y.; Abbas, M. On a three-step iteration process for multivalued Reich-Suzuki type α-nonexpansive and contractive mappings. J. Appl. Math. Comput. 2022, 68, 863–883. [Google Scholar] [CrossRef]
  20. Sahu, D.R.; Kang, S.M.; Kumar, A. Convergence Analysis of Parallel S-Iteration Process for System of Generalized Variational Inequalities. J. Funct. Spaces. 2017, 2017, 5847096. [Google Scholar] [CrossRef] [Green Version]
  21. Sahu, D.R. Altering points and applications. Nonlinear Stud. 2014, 21, 349–365. [Google Scholar]
  22. Sintunavarat, W.; Pitea, A. On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis. J. Nonlinear Sci. Appl. 2016, 9, 2553–2562. [Google Scholar] [CrossRef] [Green Version]
  23. Soltuz, S.M.; Grosan, T. Data dependence for Ishikawa iteration when dealing with contractive like operators. Fixed Point Theory Appl. 2008, 2008, 242916. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Demonstration of the Lipschitz condition of T_1 and T_2 for δ_1 = 0.68 and δ_2 = 0.28.
Figure 2. Demonstration of the Lipschitz condition of T_1 and T_2 for δ_1 = 0.35 and δ_2 = 0.15.
Figure 3. Convergence of Algorithms 1–4 to the altering points (0.01268227439847, 0.05571471149404).
Figure 4. Demonstration of the Lipschitz condition of T1 and T2 for δ1 = 0.3 and δ2 = 0.5.
Table 1. Convergence behavior of some iterative algorithms for the initial point (0.5, 0.5).
Step | Algorithm 4 | Sintunavarat and Pitea Algorithm | Normal-S Algorithm | Mann Algorithm
1 ( 0.5 , 0.5 ) ( 0.5 , 0.5 ) ( 0.5 , 0.5 ) ( 0.5 , 0.5 )
2 ( 0.06393307340285 , 0.34995083709690 ) ( 0.06673085242513 , 0.34995083709690 ) ( 0.06743921988137 , 0.34995083709690 ) ( 0.27990699353543 , 0.34995083709690 )
3 ( 0.06053166492603 , 0.35656220805078 ) ( 0.06069413180103 , 0.35762825567971 ) ( 0.06067556852525 , 0.35789832151171 ) ( 0.20908440231741 , 0.41542301568985 )
4 ( 0.06045850209295 , 0.35526789782514 ) ( 0.06046627973190 , 0.35532966713651 ) ( 0.06046416253895 , 0.35532260915260 ) ( 0.17338008494788 , 0.40496543984954 )
5 ( 0.06045654775606 , 0.35524008349470 ) ( 0.06045690648793 , 0.35524304026470 ) ( 0.06045677264712 , 0.35524223538509 ) ( 0.15174175828654 , 0.39547035736503 )
6 ( 0.06045648886605 , 0.35523934053039 ) ( 0.06045650524922 , 0.35523947690649 ) ( 0.06045649793888 , 0.35523942602534 ) ( 0.13718610756003 , 0.38867540187321 )
7 ( 0.06045648694991 , 0.35523931814267 ) ( 0.06045648769623 , 0.35523932437092 ) ( 0.06045648732111 , 0.35523932159181 ) ( 0.12670778721884 , 0.38375381024463 )
8 ( 0.06045648688416 , 0.35523931741422 ) ( 0.06045648691815 , 0.35523931769795 ) ( 0.06045648689953 , 0.35523931755534 ) ( 0.11879535857668 , 0.38006739031688 )
9 ( 0.06045648688181 , 0.35523931738923 ) ( 0.06045648688336 , 0.35523931740215 ) ( 0.06045648688245 , 0.35523931739507 ) ( 0.11260417447190 , 0.37721673195170 )
10 ( 0.06045648688173 , 0.35523931738834 ) ( 0.06045648688180 , 0.35523931738892 ) ( 0.06045648688175 , 0.35523931738858 ) ( 0.10762462162967 , 0.37495179971198 )
11 ( 0.06045648688172 , 0.35523931738830 ) ( 0.06045648688173 , 0.35523931738833 ) ( 0.06045648688173 , 0.35523931738831 ) ( 0.10353072855670 , 0.37311111118384 )
12 ( 0.06045648688172 , 0.35523931738830 ) ( 0.06045648688172 , 0.35523931738830 ) ( 0.06045648688172 , 0.35523931738830 ) ( 0.10010414944144 , 0.37158668020240 )
Table 2. Convergence behavior of some iterative algorithms for the initial point (1, 1).
Step | Algorithm 4 | Algorithm 3 | Algorithm 1 | Algorithm 2
1 ( 1 , 1 ) ( 1 , 1 ) ( 1 , 1 ) ( 1 , 1 )
2 ( 0.01746770514354 , 0.03605621776599 ) ( 0.02008493722556 , 0.02378174140661 ) ( 0.02149580497429 , 0.01451404209268 ) ( 0.50351934641331 , 0.53718054798139 )
3 ( 0.01226767939092 , 0.05518169734777 ) ( 0.01150624076569 , 0.05386515269753 ) ( 0.01141579425075 , 0.05402793360432 ) ( 0.34284483260030 , 0.36295837935182 )
4 ( 0.01267214275374 , 0.05579434935393 ) ( 0.01261446309582 , 0.05606161907946 ) ( 0.01263326706924 , 0.05603251296309 ) ( 0.26223106553773 , 0.27554078811636 )
5 ( 0.01268447093092 , 0.05571643883843 ) ( 0.01269589108401 , 0.05573496984877 ) ( 0.01269311139830 , 0.05572667034670 ) ( 0.21358787924007 , 0.22369610288701 )
6 ( 0.01268231566509 , 0.05571420493235 ) ( 0.01268307454077 , 0.05571046738399 ) ( 0.01268266919941 , 0.05571172811024 ) ( 0.18097966767757 , 0.18969356117691 )
7 ( 0.01268225857370 , 0.05571470342279 ) ( 0.01268210201284 , 0.05571446173336 ) ( 0.01268216592459 , 0.05571460660295 ) ( 0.15757040366303 , 0.16584401592732 )
12 ( 0.01268227439847 , 0.05571471149438 ) ( 0.01268227439671 , 0.05571471150378 ) ( 0.01268227439808 , 0.05571471149806 ) ( 0.09199061332091 , 0.10329528916663 )
13 ( 0.01268227439847 , 0.05571471149404 ) ( 0.01268227439888 , 0.05571471149461 ) ( 0.01268227439863 , 0.05571471149415 ) ( 0.09199061332091 , 0.10329528916663 )
14 ( 0.01268227439847 , 0.05571471149404 ) ( 0.01268227439850 , 0.05571471149391 ) ( 0.01268227439848 , 0.05571471149399 ) ( 0.08645646458745 , 0.09846212154010 )
15 ( 0.01268227439847 , 0.05571471149404 ) ( 0.01268227439847 , 0.05571471149403 ) ( 0.01268227439847 , 0.05571471149404 ) ( 0.08164879103137 , 0.09434552704985 )
16 ( 0.01268227439847 , 0.05571471149404 ) ( 0.01268227439847 , 0.05571471149404 ) ( 0.01268227439847 , 0.05571471149404 ) ( 0.07743281069852 , 0.09080431913154 )
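The simultaneous-update pattern behind the convergence data in Tables 1 and 2 can be sketched in a few lines. The following is a minimal illustration only: the mappings T1 and T2 below are hypothetical placeholders chosen to satisfy Lipschitz conditions with the constants δ1 = 0.3 and δ2 = 0.5 of Figure 4; they are not the example mappings of the paper, and the Mann-type scheme shown is the simplest of the compared algorithms.

```python
import math

# Placeholder contractions with Lipschitz constants 0.3 and 0.5
# (product < 1, so a unique pair of altering points exists).
def T1(x):
    return 0.3 * math.cos(x)

def T2(y):
    return 0.5 * math.sin(y)

def parallel_mann(x0, y0, alpha=0.9, tol=1e-12, max_iter=1000):
    """Parallel Mann-type iteration for altering points, i.e. a pair
    (x*, y*) with x* = T2(y*) and y* = T1(x*). Both components are
    updated simultaneously from the previous iterate."""
    x, y = x0, y0
    for n in range(max_iter):
        x_next = (1 - alpha) * x + alpha * T2(y)
        y_next = (1 - alpha) * y + alpha * T1(x)
        if max(abs(x_next - x), abs(y_next - y)) < tol:
            return x_next, y_next, n + 1
        x, y = x_next, y_next
    return x, y, max_iter

x_star, y_star, steps = parallel_mann(0.5, 0.5)
print(x_star, y_star, steps)
```

Because the two updates depend only on the previous iterate, they can be evaluated independently on separate processors at each step, which is the symmetry the parallel schemes exploit.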
Table 3. Convergence behavior of Algorithm 6.
Iteration No. | Algorithm 6
1 ( 1 , 1 )
2 ( 0.07909421 , 0.20233751 )
3 ( 0.01220522 , 0.09047984 )
4 ( 0.00738420 , 0.08457396 )
5 ( 0.00707338 , 0.08416763 )
6 ( 0.00704820 , 0.08419203 )
7 ( 0.00704839 , 0.08423309 )
148 ( 0.00706799 , 0.08451292 )
149 ( 0.00706800 , 0.08451302 )
150 ( 0.00706801 , 0.08451313 )
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Maldar, S. New Parallel Fixed Point Algorithms and Their Application to a System of Variational Inequalities. Symmetry 2022, 14, 1025. https://doi.org/10.3390/sym14051025