Retraction published on 13 June 2024, see Mathematics 2024, 12(12), 1834.
Article

RETRACTED: Computational Analysis of Variational Inequalities Using Mean Extra-Gradient Approach

1 Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin 132000, China
2 School of Information Engineering, Nanchang University, Nanchang 330027, China
* Author to whom correspondence should be addressed.
Northeast Electric Power University and Nanchang University are the co-first affiliations of this paper.
Mathematics 2022, 10(13), 2318; https://doi.org/10.3390/math10132318
Submission received: 20 May 2022 / Revised: 24 June 2022 / Accepted: 27 June 2022 / Published: 2 July 2022 / Retracted: 13 June 2024

Abstract

This article proposes an improved strategy for solving variational inequalities in a Hilbert space, offered as an alternative to the original extra-gradient method in certain situations. By coupling Mann's mean value method with the widely used sub-gradient extra-gradient strategy, every new iterate is formed from the convex hull of all previous iterations in a single step, thus saving time and effort. Provided that the averaging matrix fulfils suitable criteria, the mean value iteration is guaranteed to converge to a solution of the variational inequality problem. Numerous experiments were performed in order to demonstrate the correctness of the theoretical conclusions obtained.
MSC:
46C05; 46E22; 47B32

1. Introduction

Suppose that H is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$. Let C be a non-empty, closed, convex subset of H, and let $F : H \to H$ be a monotone operator, i.e.,
$$\langle \eta - \zeta,\ F(\eta) - F(\zeta)\rangle \ge 0,$$
for all $\eta, \zeta \in H$. The operator F is also assumed to be L-Lipschitz continuous, i.e.,
$$\|F(\eta) - F(\zeta)\| \le L\,\|\eta - \zeta\|,$$
for $\eta, \zeta \in H$. The task of interest is the Stampacchia variational inequality [1]: find a point $\eta^* \in C$ such that
$$\langle F(\eta^*),\ z - \eta^*\rangle \ge 0 \quad \text{for all } z \in C. \qquad (1)$$
We will represent the solution set of the variational inequality above by VIP(F, C), and we assume throughout that VIP(F, C) is non-empty. Many iterative techniques exploiting the structure of the problem have been developed to describe both mathematical and practical issues (see [2] for further discussion). The simplest is the projected gradient method: starting from a point $\eta_1 \in H$, compute
$$\eta_{k+1} = J_C\big(\eta_k - \xi F(\eta_k)\big), \quad k \in \mathbb{N}. \qquad (2)$$
Here $J_C$ denotes the metric projection onto C and ξ > 0 is the step size. If F is η-strongly monotone, L-Lipschitz continuous, and $\xi \in (0,\ 2\eta/L^2)$, the sequence produced by (2) converges to the unique solution of VIP(F, C) [3,4].
Korpelevich developed the extra-gradient method (EM) [3] in response to the strong monotonicity requirement, which restricts the convergence of iterative techniques when F is merely monotone. Starting from $\eta_1 \in H$, it is defined as
$$\zeta_k = J_C\big(\eta_k - \xi F(\eta_k)\big), \qquad \eta_{k+1} = J_C\big(\eta_k - \xi F(\zeta_k)\big), \quad k \in \mathbb{N}. \qquad (3)$$
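To make the update concrete, the following minimal Python sketch implements EM (3); the operator F, the choice of C (the unit ball), and all numerical values are illustrative assumptions rather than data from the paper.

```python
import numpy as np

def extragradient(F, proj_C, eta, xi, tol=1e-8, max_iter=10_000):
    """Korpelevich's extra-gradient method (3) for VIP(F, C).

    F      : monotone, L-Lipschitz operator (callable)
    proj_C : metric projection J_C onto the closed convex set C
    xi     : step size, assumed to satisfy 0 < xi < 1/L
    """
    eta = np.asarray(eta, dtype=float)
    for _ in range(max_iter):
        zeta = proj_C(eta - xi * F(eta))       # predictor projection
        eta_new = proj_C(eta - xi * F(zeta))   # corrector projection
        if np.linalg.norm(eta_new - eta) < tol:
            return eta_new
        eta = eta_new
    return eta

# Illustrative test (assumption, not from the paper): F(x) = x - p, so the
# VIP reduces to projecting p onto C; here C is the unit ball.
p = np.array([2.0, 0.0])
solution = extragradient(lambda x: x - p,
                         lambda x: x / max(1.0, np.linalg.norm(x)),
                         eta=np.zeros(2), xi=0.5)
print(solution)  # ~[1, 0], the projection of p onto the unit ball
```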
The sequence generated by EM (3) converges to a point of VIP(F, C) in a finite-dimensional space whenever F is monotone and Lipschitz continuous. A number of variants of Korpelevich's EM have been examined as a consequence of this starting point, e.g., [5,6,7,8,9,10], as well as the sources cited therein. Each iteration of EM requires the computation of two metric projections onto C. This makes EM a suitable method when the constraint set C is simple enough that a closed-form expression for the metric projection $J_C$ exists; otherwise, a hidden minimization sub-problem must be solved at every iteration in addition to the main problem. Censor, Gibali, and Reich [11] proposed the sub-gradient extra-gradient method (SEM) to overcome this difficulty. Instead of two metric projections onto C, the SEM performs only one: when updating the next iterate $\eta_{k+1}$, the projection onto C is replaced by a projection onto a half-space $T_k$ containing C that is built from the current iterate $\zeta_k$. The SEM iteration reads:
$$\zeta_k = J_C\big(\eta_k - \xi F(\eta_k)\big), \qquad \eta_{k+1} = J_{T_k}\big(\eta_k - \xi F(\zeta_k)\big), \quad k \in \mathbb{N}, \qquad (4)$$
where
$$T_k = \{\omega \in H : \langle \eta_k - \xi F(\eta_k) - \zeta_k,\ \omega - \zeta_k\rangle \le 0\}. \qquad (5)$$
The SEM update (4)–(5) is well documented in the literature, and its weak convergence is likewise established in [9]. Other SEM-type methods have been explored in [12,13,14,15,16,17,18,19]. SEM restricts the expensive metric projection to the case where C is a closed convex simple set; in the absence of such an assumption, the projection is again non-trivial. A less challenging approach is to express the constraint set as the intersection of finitely many non-empty, convex, closed simple sets and to project onto those sets instead [20,21,22,23,24,25,26].
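The computational appeal of SEM is that the projection onto the half-space $T_k$ in (5) has a closed form, so only one projection onto C is needed per iteration. A sketch of a single SEM update, under the same illustrative conventions as the previous snippet:

```python
import numpy as np

def sem_step(F, proj_C, eta, xi):
    """One SEM update (4)-(5): a single projection onto C, followed by a
    closed-form projection onto the half-space T_k from (5)."""
    zeta = proj_C(eta - xi * F(eta))
    a = eta - xi * F(eta) - zeta              # outward normal of T_k
    v = eta - xi * F(zeta)                    # point to project onto T_k
    s = np.dot(a, v - zeta)
    if s > 0.0 and np.dot(a, a) > 0.0:        # v lies outside T_k
        v -= (s / np.dot(a, a)) * a           # closed-form half-space projection
    return v                                  # eta_{k+1}
```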
Rather than concentrating only on variational inequalities, it can be productive to view the problem from a fixed point perspective. Given a nonlinear operator T, the problem is to find $\eta^* \in \mathrm{Fix}(T) = \{\eta \in H : \eta = T(\eta)\}$. Starting from $\eta_1$, the classical Picard iteration computes
$$\eta_{k+1} = T(\eta_k), \quad k \in \mathbb{N}.$$
As earlier studies have shown, the Picard iteration need not converge. In 1953, Mann [24] improved on Picard's method by replacing the current iterate with a mean value of the past iterates, the iteration now bearing his name,
$$\eta_{k+1} = T(\bar\eta_k), \quad k \in \mathbb{N},$$
where $\bar\eta_k$ denotes a weighted average of $\eta_1, \dots, \eta_k$ (made precise in Section 3).
This method is often referred to as Mann's mean value iteration. It is a widely used technique for resolving optimization problems because it helps to avoid numerically unfavourable circumstances, such as zigzagging or spiralling behaviour of the generated sequence around the solution set, which may occur with other approaches [27,28,29,30]. The Mann mean value iteration [24] is useful in a wide range of optimization situations and is among the most extensively studied techniques available (see [31] for more information). Its reliability has been demonstrated in a great deal of studies [32,33,34]. In this article, we combine the monotone, Lipschitz continuous operator setting with concepts from the well-known SEM and Mann's mean value iteration into an iterative approach. It is worth mentioning that the novel schemes given in [35,36] were applied to power control and battery charge planning under dynamic uncertainties, perturbations of irradiation and temperature, and abrupt faults in output loads.
A new iterative method is proposed in this article using the ideas of the well-known SEM and Mann's mean value iteration. The proposed technique creates a weakly convergent sequence whose weak limit solves VIP(F, C). When dealing with a constrained minimization problem whose constraint set is the intersection of a finite family of non-empty closed convex simple sets, and if certain conditions are fulfilled, the new approach can outperform the classical one [37].

2. Important Concepts and Preliminaries

Additional background may be found in [21,22]. The following notation is used. The strong and weak convergence of a sequence $(\eta_k)_{k=1}^{\infty}$ to $\eta \in H$ are denoted by $\eta_k \to \eta$ and $\eta_k \rightharpoonup \eta$, respectively. The identity operator on H is denoted by I. Let C be a closed, convex, and non-empty subset of H [38,39]. For any point $\eta \in H$ there is a unique point of C nearest to $\eta$; this point is denoted $J_C(\eta)$ and satisfies
$$\|\eta - J_C(\eta)\| = \inf_{\zeta \in C}\|\eta - \zeta\|.$$
The mapping $J_C : H \to C$ is called the metric projection of H onto C. It is important to remember that $J_C$ is non-expansive:
$$\|J_C(\eta) - J_C(\zeta)\| \le \|\eta - \zeta\|, \quad \eta, \zeta \in H.$$
Furthermore, the metric projection $J_C$ fulfils the variational property:
$$\langle \eta - J_C(\eta),\ J_C(\eta) - \zeta\rangle \ge 0, \quad \eta \in H,\ \zeta \in C.$$
Given $a \in H$ with $a \ne 0$ and $\beta \in \mathbb{R}$, the half-space $H(a;\beta)$ is defined as
$$H(a;\beta) = \{\eta \in H : \langle a, \eta\rangle \le \beta\}.$$
Half-spaces and hyperplanes are closed convex sets, and so are their intersections. The metric projection onto the half-space $H(a;\beta)$ admits the closed form:
$$J_{H(a;\beta)}(\eta) = \begin{cases} \eta - \dfrac{\langle a, \eta\rangle - \beta}{\|a\|^2}\, a, & \text{if } \langle a, \eta\rangle > \beta,\\[4pt] \eta, & \text{if } \langle a, \eta\rangle \le \beta.\end{cases}$$
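This closed form translates directly into code; the following small sketch assumes the notation of the formula above.

```python
import numpy as np

def proj_halfspace(a, beta, eta):
    """Metric projection onto H(a; beta) = {x : <a, x> <= beta}."""
    s = np.dot(a, eta)
    if s <= beta:
        return eta                                  # already in the half-space
    return eta - ((s - beta) / np.dot(a, a)) * a    # shift along the normal a

# Example: projecting (2, 0) onto {x : x_1 + x_2 <= 1} gives (1.5, -0.5).
print(proj_halfspace(np.array([1.0, 1.0]), 1.0, np.array([2.0, 0.0])))
```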
This projection provides a separation property: for any non-empty closed convex $C \subseteq H$ and any $\eta \notin C$, the point $J_C(\eta)$ separates $\eta$ from C. Examining the half-space $H\big(\eta - J_C(\eta);\ \langle J_C(\eta),\ \eta - J_C(\eta)\rangle\big)$, one finds that it contains C while $\eta$ lies outside of it; that is,
$$C \subseteq H\big(\eta - J_C(\eta);\ \langle J_C(\eta),\ \eta - J_C(\eta)\rangle\big),$$
and the bounding hyperplane of this half-space supports C at $J_C(\eta)$.
Let $A : H \to 2^H$ be a set-valued operator. Its graph is the set
$$\mathrm{Gr}(A) = \{(\eta, u) \in H \times H : u \in A(\eta)\}.$$
The set of zeros of A is
$$A^{-1}(0) = \{\eta \in H : 0 \in A(\eta)\}.$$
The operator A is called monotone if
$$\langle \eta - \zeta,\ u - v\rangle \ge 0,$$
for all $(\eta, u), (\zeta, v) \in \mathrm{Gr}(A)$.
A monotone operator A is said to be maximally monotone if its graph is not properly contained in the graph of any other monotone operator [39]. Furthermore, when A is maximally monotone, its set of zeros $A^{-1}(0)$ is convex and closed.
Let $C \subseteq H$ be non-empty, closed, and convex. The normal cone to C at a point $\eta \in C$ is defined as
$$N_C(\eta) = \{\zeta \in H : \langle \zeta,\ z - \eta\rangle \le 0,\ \forall z \in C\}.$$
Let $F : H \to H$ be a monotone continuous operator and let C be a non-empty closed convex subset of H. We then define the operator $A : H \to 2^H$ by
$$A(\eta) = \begin{cases} F(\eta) + N_C(\eta), & \text{if } \eta \in C,\\ \emptyset, & \text{if } \eta \notin C.\end{cases}$$
Then A is a maximally monotone operator, and the subsequent significant property is satisfied:
$$\mathrm{VIP}(F, C) = A^{-1}(0).$$

3. Methodology of Proposed Scheme

This section presents an efficient approach, namely a mean extra-gradient approach, for investigating solutions of problems related to variational inequalities. Before detailing the methodology of the extra-gradient method, we present some preliminaries.
An infinite lower triangular matrix $(a_{l,m})_{l,m=1}^{\infty}$ is called an averaging matrix if the subsequent conditions are fulfilled:
A1. $a_{l,m} \ge 0$ for all $l, m \ge 1$;
A2. $a_{l,m} = 0$ whenever $l < m$;
A3. $a_{l,1} + a_{l,2} + \dots + a_{l,l} = 1$ for all $l \ge 1$;
A4. $\lim_{l \to \infty} a_{l,m} = 0$ for all $m \ge 1$.
Considering an averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ and a sequence $(\eta_l)_{l=1}^{\infty}$ in a real Hilbert space H, we define the mean iterate as
$$\bar\eta_l = a_{l,1}\eta_1 + a_{l,2}\eta_2 + a_{l,3}\eta_3 + \dots + a_{l,l}\eta_l, \quad l \ge 1.$$
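As a concrete illustration (our own example, not from the paper), the Cesàro matrix $a_{l,m} = 1/l$ is one simple averaging matrix satisfying A1–A4, and the mean iterate is simply its l-th row applied to the iterates:

```python
import numpy as np

def cesaro_row(l):
    """Row l of the Cesaro averaging matrix: a_{l,m} = 1/l for m <= l.
    Non-negative (A1), lower triangular (A2), sums to 1 (A3), and each
    fixed column entry 1/l -> 0 as l -> infinity (A4)."""
    return np.full(l, 1.0 / l)

def mean_iterate(weights, etas):
    """eta_bar_l = sum_m a_{l,m} * eta_m over the first l iterates."""
    return np.tensordot(weights, np.asarray(etas), axes=1)

etas = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(mean_iterate(cesaro_row(3), etas))   # [2/3, 2/3]
```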
The solution procedure for the variational inequality by means of Mann's type mean extra-gradient scheme is given as Algorithm 1.
Algorithm 1: Solution procedure by Mann's type mean extra-gradient scheme.
INITIALIZATION: Choose a point $\eta_1$ belonging to the Hilbert space H, a positive parameter ξ, and an averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$.
STEP 1. Given the present iterate $\eta_l \in H$, calculate the mean iterate
$$\bar\eta_l = a_{l,1}\eta_1 + a_{l,2}\eta_2 + a_{l,3}\eta_3 + \dots + a_{l,l}\eta_l,$$
and also calculate
$$\zeta_l = J_C\big(\bar\eta_l - \xi F(\bar\eta_l)\big).$$
STEP 2. If $\zeta_l = \bar\eta_l$, then $\bar\eta_l$ belongs to $\mathrm{VIP}(F, C)$; stop. Otherwise, build the half-space $T_l$ given by
$$T_l = \{\varsigma \in H : \langle \bar\eta_l - \xi F(\bar\eta_l) - \zeta_l,\ \varsigma - \zeta_l\rangle \le 0\},$$
compute the subsequent iterate
$$\eta_{l+1} = J_{T_l}\big(\bar\eta_l - \xi F(\zeta_l)\big),$$
update $l \leftarrow l + 1$, and return to STEP 1.
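Algorithm 1 translates directly into code. The following is a minimal sketch under our own naming conventions (a callable weights_row(l) supplies the l-th row of the averaging matrix, and proj_C stands for $J_C$); it is an illustration, not the authors' implementation.

```python
import numpy as np

def mann_mem(F, proj_C, eta1, xi, weights_row, max_iter=1000, tol=1e-10):
    """Mann's type mean extra-gradient scheme (Algorithm 1).

    weights_row(l) : l-th row (length l) of an averaging matrix a_{l,m}
    proj_C         : metric projection J_C onto the constraint set C
    xi             : step size, assumed to satisfy 0 < xi < 1/L
    """
    etas = [np.asarray(eta1, dtype=float)]
    for l in range(1, max_iter + 1):
        # STEP 1: mean iterate and projected point
        eta_bar = np.tensordot(weights_row(l), np.asarray(etas), axes=1)
        zeta = proj_C(eta_bar - xi * F(eta_bar))
        # STEP 2: stopping rule zeta_l = eta_bar_l  =>  eta_bar_l in VIP(F, C)
        if np.linalg.norm(zeta - eta_bar) < tol:
            return eta_bar
        # Half-space T_l and the next iterate eta_{l+1} = J_{T_l}(...)
        a = eta_bar - xi * F(eta_bar) - zeta
        v = eta_bar - xi * F(zeta)
        s = np.dot(a, v - zeta)
        if s > 0.0 and np.dot(a, a) > 0.0:
            v -= (s / np.dot(a, a)) * a
        etas.append(v)
    return etas[-1]
```

With weights_row(l) returning the l-th row of the identity matrix (all weight on $\eta_l$), the loop reduces to the classical SEM, consistent with Remark 1 below.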
Remark 1.
It is important to mention that when $(a_{l,m})_{l,m=1}^{\infty}$ is the identity matrix, the above Mann's type mean extra-gradient scheme reduces to the classical sub-gradient extra-gradient scheme given in Ref. [11].
Now, we explain the stopping principles of the proposed scheme in STEP 2.
Proposition 1.
Suppose that the sequences $(\bar\eta_l)_{l=1}^{\infty}$ and $(\zeta_l)_{l=1}^{\infty}$ are generated by means of the suggested Mann's type mean extra-gradient scheme. If there exists an index $l_0$ such that $\bar\eta_{l_0} = \zeta_{l_0}$, then $\bar\eta_{l_0} \in \mathrm{VIP}(F, C)$.
 Proof.
Suppose there is an index $l_0$ such that $\bar\eta_{l_0} = \zeta_{l_0}$. By the definition of $\zeta_l$, we obtain
$$\bar\eta_{l_0} = \zeta_{l_0} = J_C\big(\bar\eta_{l_0} - \xi F(\bar\eta_{l_0})\big),$$
which produces $\bar\eta_{l_0} \in C$. For all $z \in C$, we obtain from the projection inequality
$$\langle \eta - J_C(\eta),\ J_C(\eta) - \zeta\rangle \ge 0, \quad \zeta \in C,\ \eta \in H,$$
that
$$\big\langle z - \bar\eta_{l_0},\ \big(\bar\eta_{l_0} - \xi F(\bar\eta_{l_0})\big) - \bar\eta_{l_0}\big\rangle \le 0.$$
This implies that
$$\langle z - \bar\eta_{l_0},\ F(\bar\eta_{l_0})\rangle \ge 0,$$
since ξ > 0, and hence $\bar\eta_{l_0} \in \mathrm{VIP}(F, C)$.
By the above proposition, for the remaining convergence analysis we may assume throughout this section that the proposed scheme does not terminate after a finite number of iterations; explicitly, we assume that $\zeta_l \ne \bar\eta_l$ for all $l \ge 1$. □
Lemma 1.
Suppose the sequence $(\bar\eta_l)_{l=1}^{\infty}$ is obtained by means of Mann's type mean extra-gradient scheme. Then, for every $u \in \mathrm{VIP}(F, C)$ and every $l \ge 1$, the following relations must hold:
$$\|\eta_{l+1} - u\|^2 \le \|\bar\eta_l - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2 \le \sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2.$$
 Proof.
Suppose $u \in \mathrm{VIP}(F, C)$ and let $l \ge 1$ be fixed. Since the operator F is monotone,
$$\langle F(\zeta_l) - F(u),\ \zeta_l - u\rangle \ge 0.$$
This implies the following relation
$$0 \le \langle F(u),\ \zeta_l - u\rangle \le \langle F(\zeta_l),\ \zeta_l - u\rangle.$$
In the above, the first inequality holds because $u \in \mathrm{VIP}(F, C)$ and $\zeta_l \in C$, and the second follows from the monotonicity of F. Therefore, we also obtain
$$\langle F(\zeta_l),\ \eta_{l+1} - u\rangle \ge \langle F(\zeta_l),\ \eta_{l+1} - \zeta_l\rangle.$$
By means of the definition of $T_l$, and since $\eta_{l+1} \in T_l$, we obtain the relation
$$\langle \eta_{l+1} - \zeta_l,\ \bar\eta_l - \xi F(\bar\eta_l) - \zeta_l\rangle \le 0.$$
Now, it follows that
$$\langle \eta_{l+1} - \zeta_l,\ \bar\eta_l - \xi F(\zeta_l) - \zeta_l\rangle = \langle \eta_{l+1} - \zeta_l,\ \bar\eta_l - \xi F(\bar\eta_l) - \zeta_l\rangle + \xi\langle \eta_{l+1} - \zeta_l,\ F(\bar\eta_l) - F(\zeta_l)\rangle \le \xi\langle \eta_{l+1} - \zeta_l,\ F(\bar\eta_l) - F(\zeta_l)\rangle.$$
Introducing the auxiliary point $z_l = \bar\eta_l - \xi F(\zeta_l)$, we get
$$\|\eta_{l+1} - u\|^2 = \|J_{T_l}(z_l) - u\|^2 = \|J_{T_l}(z_l) - z_l + z_l - u\|^2 = \|J_{T_l}(z_l) - z_l\|^2 + \|z_l - u\|^2 + 2\langle J_{T_l}(z_l) - z_l,\ z_l - u\rangle.$$
By means of the variational property of $J_{T_l}$, and since $u \in C \subseteq T_l$, we have
$$0 \ge 2\langle z_l - J_{T_l}(z_l),\ u - J_{T_l}(z_l)\rangle = 2\|z_l - J_{T_l}(z_l)\|^2 + 2\langle J_{T_l}(z_l) - z_l,\ z_l - u\rangle,$$
this implies the following relation
$$\|z_l - J_{T_l}(z_l)\|^2 + 2\langle J_{T_l}(z_l) - z_l,\ z_l - u\rangle \le -\|z_l - J_{T_l}(z_l)\|^2.$$
Substituting the above relation into the expression for $\|\eta_{l+1} - u\|^2$, we have
$$\begin{aligned}\|\eta_{l+1} - u\|^2 &\le \|z_l - u\|^2 - \|z_l - J_{T_l}(z_l)\|^2\\ &= \|\bar\eta_l - \xi F(\zeta_l) - u\|^2 - \|\bar\eta_l - \xi F(\zeta_l) - \eta_{l+1}\|^2\\ &= \|\bar\eta_l - u\|^2 + \xi^2\|F(\zeta_l)\|^2 - 2\xi\langle F(\zeta_l),\ \bar\eta_l - u\rangle - \|\bar\eta_l - \eta_{l+1}\|^2 - \xi^2\|F(\zeta_l)\|^2 + 2\xi\langle F(\zeta_l),\ \bar\eta_l - \eta_{l+1}\rangle\\ &= \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \eta_{l+1}\|^2 + 2\xi\langle F(\zeta_l),\ u - \eta_{l+1}\rangle.\end{aligned}$$
It can also be rewritten, by means of the above relations, as
$$\begin{aligned}\|\eta_{l+1} - u\|^2 &\le \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \eta_{l+1}\|^2 + 2\xi\langle F(\zeta_l),\ \zeta_l - \eta_{l+1}\rangle\\ &= \|\bar\eta_l - u\|^2 - \|(\bar\eta_l - \zeta_l) + (\zeta_l - \eta_{l+1})\|^2 + 2\xi\langle F(\zeta_l),\ \zeta_l - \eta_{l+1}\rangle\\ &= \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \zeta_l\|^2 - \|\zeta_l - \eta_{l+1}\|^2 - 2\langle \zeta_l - \eta_{l+1},\ \bar\eta_l - \zeta_l - \xi F(\zeta_l)\rangle\\ &\le \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \zeta_l\|^2 - \|\zeta_l - \eta_{l+1}\|^2 + 2\xi\langle \eta_{l+1} - \zeta_l,\ F(\bar\eta_l) - F(\zeta_l)\rangle\\ &\le \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \zeta_l\|^2 - \|\zeta_l - \eta_{l+1}\|^2 + 2\xi\|\zeta_l - \eta_{l+1}\|\,\|F(\bar\eta_l) - F(\zeta_l)\|.\end{aligned}$$
By means of the L-Lipschitz continuity of F and the relation $2xy \le x^2 + y^2$, we obtain the following form
$$\begin{aligned}\|\eta_{l+1} - u\|^2 &\le \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \zeta_l\|^2 - \|\zeta_l - \eta_{l+1}\|^2 + 2\xi L\|\zeta_l - \eta_{l+1}\|\,\|\bar\eta_l - \zeta_l\|\\ &\le \|\bar\eta_l - u\|^2 - \|\bar\eta_l - \zeta_l\|^2 - \|\zeta_l - \eta_{l+1}\|^2 + \xi^2 L^2\|\bar\eta_l - \zeta_l\|^2 + \|\zeta_l - \eta_{l+1}\|^2\\ &= \|\bar\eta_l - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2.\end{aligned}$$
Lastly, by means of the convexity of the norm $\|\cdot\|^2$ and the properties of the averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$, we obtain the following form
$$\begin{aligned}\|\eta_{l+1} - u\|^2 &\le \|\bar\eta_l - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2\\ &= \Big\|\sum_{m=1}^{l} a_{l,m}\eta_m - \sum_{m=1}^{l} a_{l,m}u\Big\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2\\ &= \Big\|\sum_{m=1}^{l} a_{l,m}(\eta_m - u)\Big\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2\\ &\le \sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2.\end{aligned}$$
Now, we discuss a concept which we later use in the convergence analysis of the scheme. □
 Proposition 2
([39]). Consider a real sequence $(\omega_l)_{l=1}^{\infty}$, an averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$, and $r \in \mathbb{R}$. If $\omega_l \to r$, then $\bar\omega_l = \sum_{m=1}^{l} a_{l,m}\omega_m \to r$.
The averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ is known as M-concentrating if, for all real sequences $(\omega_l)_{l=1}^{\infty}$ and $(\epsilon_l)_{l=1}^{\infty}$ with $\sum_{l=1}^{\infty}\epsilon_l < +\infty$ satisfying
$$\omega_{l+1} \le \bar\omega_l + \epsilon_l,$$
where $\bar\omega_l = \sum_{m=1}^{l} a_{l,m}\omega_m$ for $l \ge 1$, the limit $\lim_{l\to\infty}\omega_l$ exists.
By means of Lemma 1, if we additionally require ξ < 1/L, the term $-(1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2$ on the right-hand side is non-positive. Combining this condition with an M-concentrating averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ leads to the following convergence theorem.
Theorem 1.
Consider that the matrix $(a_{l,m})_{l,m=1}^{\infty}$ is M-concentrating and $\xi \in (0,\ 1/L)$. Then any sequence $(\bar\eta_l)_{l=1}^{\infty}$ produced by means of Mann's type mean extra-gradient approach converges weakly to a solution of the problem $\mathrm{VIP}(F, C)$.
 Proof.
Consider an element $u \in \mathrm{VIP}(F, C)$ and $l \ge 1$; then, by means of Lemma 1,
$$\|\eta_{l+1} - u\|^2 \le \sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2 - (1 - \xi^2 L^2)\,\|\bar\eta_l - \zeta_l\|^2. \qquad (6)$$
As we know that $\xi \in (0,\ 1/L)$, we obtain
$$0 < 1 - \xi^2 L^2 < 1.$$
Relation (6) then takes the following form:
$$\|\eta_{l+1} - u\|^2 \le \sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2.$$
Setting $\omega_l = \|\eta_l - u\|^2$ and $\epsilon_l = 0$ for all $l \ge 1$, and using the supposition that the averaging matrix is M-concentrating, we determine that the limit $\lim_{l\to\infty}\|\eta_l - u\|^2$ exists; denote it by $e(u)$. By Proposition 2, the limit $\lim_{l\to\infty}\sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2$ also exists and equals $e(u)$. Combining this with (6) and $0 < 1 - \xi^2 L^2 < 1$, it follows that
$$\lim_{l\to\infty}\|\bar\eta_l - \zeta_l\| = 0. \qquad (7)$$
In addition, we observe from Lemma 1
$$\|\eta_{l+1} - u\|^2 \le \|\bar\eta_l - u\|^2 \le \sum_{m=1}^{l} a_{l,m}\|\eta_m - u\|^2.$$
Hence, $\lim_{l\to\infty}\|\bar\eta_l - u\|^2 = e(u)$ as well. As the sequence $(\bar\eta_l)_{l=1}^{\infty}$ is bounded, it has a weak cluster point $\eta' \in H$, and there is a subsequence $(\bar\eta_{l_i})_{i=1}^{\infty}$ such that $\bar\eta_{l_i} \rightharpoonup \eta'$. Therefore, by relation (7), also $\zeta_{l_i} \rightharpoonup \eta'$. We again consider the operator $Q : H \to 2^H$ defined by
$$Q(\nu) = \begin{cases} F(\nu) + N_C(\nu), & \text{if } \nu \in C,\\ \emptyset, & \text{otherwise}.\end{cases}$$
Then Q is maximally monotone and $\mathrm{VIP}(F, C) = Q^{-1}(0)$. Additionally, if $(\nu, w)$ belongs to $\mathrm{Gr}(Q)$, then $w \in Q(\nu) = F(\nu) + N_C(\nu)$, so $w - F(\nu) \in N_C(\nu)$; that is:
$$\langle w - F(\nu),\ \nu - \zeta\rangle \ge 0, \quad \forall \zeta \in C. \qquad (8)$$
Therefore, by means of the projection property defining $\zeta_l$, we obtain
$$\langle \bar\eta_l - \xi F(\bar\eta_l) - \zeta_l,\ \zeta_l - \nu\rangle \ge 0, \quad \forall \nu \in C.$$
This implies that
$$\Big\langle \frac{\zeta_l - \bar\eta_l}{\xi} + F(\bar\eta_l),\ \nu - \zeta_l\Big\rangle \ge 0, \quad l \ge 1. \qquad (9)$$
Hence, by means of relations (8) and (9), substituting ζ with $\zeta_{l_i}$ in (8) and l with $l_i$ in (9), we have
$$\begin{aligned}\langle w,\ \nu - \zeta_{l_i}\rangle &\ge \langle F(\nu),\ \nu - \zeta_{l_i}\rangle \ge \langle F(\nu),\ \nu - \zeta_{l_i}\rangle - \Big\langle \frac{\zeta_{l_i} - \bar\eta_{l_i}}{\xi} + F(\bar\eta_{l_i}),\ \nu - \zeta_{l_i}\Big\rangle\\ &= \langle F(\nu) - F(\zeta_{l_i}),\ \nu - \zeta_{l_i}\rangle + \langle F(\zeta_{l_i}) - F(\bar\eta_{l_i}),\ \nu - \zeta_{l_i}\rangle - \Big\langle \frac{\zeta_{l_i} - \bar\eta_{l_i}}{\xi},\ \nu - \zeta_{l_i}\Big\rangle\\ &\ge \langle F(\zeta_{l_i}) - F(\bar\eta_{l_i}),\ \nu - \zeta_{l_i}\rangle - \Big\langle \frac{\zeta_{l_i} - \bar\eta_{l_i}}{\xi},\ \nu - \zeta_{l_i}\Big\rangle.\end{aligned}$$
Now, taking the limit $i \to \infty$ in the above expression, and using relation (7) together with the Lipschitz continuity of F and the boundedness of $(\zeta_{l_i})$, we have
$$\langle w,\ \nu - \eta'\rangle \ge 0.$$
Since the operator Q is maximally monotone, this gives $\eta' \in \mathrm{VIP}(F, C) = Q^{-1}(0)$. Now we prove that the whole sequence $(\bar\eta_l)_{l=1}^{\infty}$ converges weakly to $\eta'$. To this end, suppose that there is another subsequence $(\bar\eta_{l_j})_{j=1}^{\infty}$ of $(\bar\eta_l)_{l=1}^{\infty}$ that converges weakly to some $\zeta' \ne \eta'$. By the above argument, we also have $\zeta' \in \mathrm{VIP}(F, C)$, so $\lim_{l\to\infty}\|\bar\eta_l - \zeta'\|$ exists. Using Opial's condition, we observe
$$\lim_{l\to\infty}\|\bar\eta_l - \eta'\| = \liminf_{i\to\infty}\|\bar\eta_{l_i} - \eta'\| < \liminf_{i\to\infty}\|\bar\eta_{l_i} - \zeta'\| = \lim_{l\to\infty}\|\bar\eta_l - \zeta'\| = \lim_{j\to\infty}\|\bar\eta_{l_j} - \zeta'\| < \liminf_{j\to\infty}\|\bar\eta_{l_j} - \eta'\| = \lim_{l\to\infty}\|\bar\eta_l - \eta'\|,$$
which is a contradiction. Thus $\eta' = \zeta'$, and we conclude that $(\bar\eta_l)_{l=1}^{\infty}$ converges weakly to $\eta'$. □
 Proposition 3
([27]). Suppose the averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ fulfils the generalized segmenting condition. Then the averaging matrix is M-concentrating if $\liminf_{l\to\infty} a_{l,l} > 0$.

4. Important Results and Discussion

This section is devoted to a detailed study of the proposed method and of its effectiveness in minimizing the distance to a given point. Suppose that $p, r_i \in \mathbb{R}^n$ and $s_i \ge 0$ are known data, $i = 1, 2, 3, \dots, m$. In this examination, we explore the constrained minimization model, which is given as:
$$\min\ \frac{1}{2}\|\eta - p\|^2, \quad \text{subject to } \langle r_i, \eta\rangle \le s_i,\ i = 1, 2, 3, \dots, m. \qquad (10)$$
It is to be noted that the function $f(\eta) = \frac{1}{2}\|\eta - p\|^2$ is convex and Fréchet differentiable, and its gradient $\nabla f$ is 1-Lipschitz continuous; besides, each constraint set $C_i = \{\eta \in \mathbb{R}^n : \langle r_i, \eta\rangle \le s_i\}$, $i = 1, \dots, m$, is a non-empty set which is closed and convex. Therefore, the considered problem (10) appears as an instance of problem (1), with $C = \bigcap_{i=1}^{m} C_i$ and $F = \nabla f$; the operator F is 1-Lipschitz continuous. In this condition, the theoretical assumptions are satisfied, and we can use Mann's type mean extra-gradient scheme for investigating problem (10). For simplicity, the classical sub-gradient extra-gradient scheme is denoted as SEM, whereas Mann's type mean extra-gradient scheme is denoted as Mann-MEM, with the general segmenting averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ defined as:
$$a_{l,m} = \begin{cases} (1 - a)^{l-1}, & \text{if } m = 1 \text{ and } l \ge 1,\\ 0, & \text{if } m \ge 2 \text{ and } l < m,\\ a(1 - a)^{l-m}, & \text{if } m \ge 2 \text{ and } l \ge m.\end{cases}$$
In the above, $a \in (0, 1)$; a small sketch of this construction is given below.
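A minimal construction of this matrix (our own code, not the authors'; the printed check verifies the row-sum condition A3):

```python
import numpy as np

def segmenting_row(l, a=0.9):
    """Row l of the general segmenting averaging matrix: weight (1-a)^(l-1)
    on eta_1 and a*(1-a)^(l-m) on eta_m for 2 <= m <= l."""
    row = np.empty(l)
    row[0] = (1.0 - a) ** (l - 1)
    for m in range(2, l + 1):
        row[m - 1] = a * (1.0 - a) ** (l - m)
    return row

# Rows sum to 1 (condition A3); also a_{l,l} = a > 0 for l >= 2, which is
# exactly the liminf condition required by Proposition 3.
print(all(abs(segmenting_row(l).sum() - 1.0) < 1e-12 for l in range(1, 20)))
```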
It is noted that the following set,
$$T_k = H\big(\bar\eta_k - \xi F(\bar\eta_k) - \zeta_k;\ \langle \zeta_k,\ \bar\eta_k - \xi F(\bar\eta_k) - \zeta_k\rangle\big),$$
is the half-space used by Mann-MEM and an auxiliary half-space supporting the constrained set C at the point $\zeta_k$. In this condition, $J_{T_k}$ can be calculated explicitly whenever $\bar\eta_k - \xi F(\bar\eta_k) - \zeta_k \ne 0$. However, if $\bar\eta_k - \xi F(\bar\eta_k) - \zeta_k = 0$, the half-space $T_k$ becomes the full space H, so that the iterate $\eta_{k+1}$ is nothing other than $\bar\eta_k - \xi F(\bar\eta_k)$. In order to evaluate the projection onto $C = \bigcap_{i=1}^{m} C_i$, we make use of the traditional Halpern iteration as an inner loop: we choose an arbitrary initial point $\omega_1 \in \mathbb{R}^n$ and a sequence $(\lambda_i)_{i=1}^{\infty}$, and calculate
$$\omega_{i+1} = \lambda_i\big(\bar\eta_k - \xi F(\bar\eta_k)\big) + (1 - \lambda_i)\, J_{C_m}J_{C_{m-1}}\cdots J_{C_2}J_{C_1}(\omega_i), \quad i \ge 1.$$
We use the following stopping criterion for the inner loop in all computations to find the numerical value of the point $\zeta_k$:
$$\frac{\|\omega_{i+1} - \omega_i\|}{\|\omega_{i+1}\|} \le 10^{-8}.$$
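The following sketch of the inner loop is our own rendering of this description for half-space constraints $C_i = \{x : \langle r_i, x\rangle \le s_i\}$; the sweep order of the projections and the default λ are assumptions:

```python
import numpy as np

def halpern_inner(anchor, R, s, lam=1.9, tol=1e-8, max_iter=100_000):
    """Approximate the projection of `anchor` onto the intersection of the
    half-spaces C_i = {x : <R[i], x> <= s[i]} by the Halpern-type iteration
    omega_{i+1} = lam_i*anchor + (1 - lam_i)*(J_Cm ... J_C1)(omega_i),
    with lam_i = lam / (1 + i)."""
    omega = np.asarray(anchor, dtype=float)
    for i in range(1, max_iter + 1):
        v = omega
        for r_i, s_i in zip(R, s):                  # one sweep J_C1, ..., J_Cm
            t = np.dot(r_i, v)
            if t > s_i:
                v = v - ((t - s_i) / np.dot(r_i, r_i)) * r_i
        lam_i = lam / (1.0 + i)
        omega_new = lam_i * anchor + (1.0 - lam_i) * v
        if np.linalg.norm(omega_new - omega) <= tol * np.linalg.norm(omega_new):
            return omega_new
        omega = omega_new
    return omega
```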
In the first computation, we considered the performance of the proposed scheme in a simple setting. We take m = 3, n = 2, $p = [0.1,\ 0.1]^{T}$, $r_1 = [1.5,\ 1]^{T}$, $r_2 = [1,\ 1]^{T}$, $r_3 = [1,\ 2]^{T}$, and $s_1 = s_2 = s_3 = 0$. It can be observed that the unique solution is the point $[0.1,\ 0.1]^{T}$. Now, let us begin with the effect of the inner-loop step size $\lambda_k = \lambda/(1 + k)$ for numerous choices of $\lambda \in (0,\ 2)$ while applying the suggested Mann-MEM and SEM. We select the initial point $\eta_1 = [0.2,\ 0.15]^{T}$, step size ξ = 0.5, and a = 0.9. The stopping criterion for both Mann-MEM and SEM is $\|\eta_k - p\| \le 10^{-5}$ or 100 iterations, whichever comes first. Table 1 shows the significant influence of $\lambda \in [1.3,\ 1.9]$ on the number of iterations, the computational time, and the total number of inner iterations.
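For illustration, the earlier sketches can be wired together to reproduce the shape of this experiment; the constraint orientation and right-hand sides below are assumptions on our part (the original statement of the data is ambiguous), so the snippet is indicative rather than a faithful replication:

```python
import numpy as np

# Hypothetical instance modeled on the first experiment (m = 3, n = 2);
# the sign convention of the constraints is assumed for illustration.
p = np.array([0.1, 0.1])
R = [np.array([1.5, 1.0]), np.array([1.0, 1.0]), np.array([1.0, 2.0])]
s = [0.0, 0.0, 0.0]

F = lambda x: x - p                        # gradient of 0.5*||x - p||^2
proj_C = lambda v: halpern_inner(v, R, s)  # inner loop approximates J_C

eta = mann_mem(F, proj_C, eta1=np.array([0.2, 0.15]), xi=0.5,
               weights_row=lambda l: segmenting_row(l, a=0.9))
print(eta)   # approximate solution of VIP(F, C) for this instance
```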
Based on Table 1, both schemes give accurate solutions more quickly as the value of λ increases. This behaviour is plausible because a larger λ gives a larger inner-loop step size, which allows the inner loop to terminate in fewer iterations and thereby reduces the algorithmic runtime. On the other hand, we observe that Mann-MEM with λ = 1.3, 1.4 and SEM with λ = 1.7 required more than 100 iterations to meet the stopping criterion. When λ = 1.9, both schemes demonstrate excellent results; in particular, Mann-MEM attains an algorithm runtime of 6.01 × 10⁻² s. Figure 1 and Figure 2 are plotted against the step size for the discussed schemes. Keeping the same assumptions and setting the inner-loop step size $\lambda_k = 1.9/(1 + k)$ for both schemes, we see that for both schemes the best computation time is attained at ξ = 0.6. In order to learn more about the convergence behaviour of Mann-MEM, we also examine the effect of a. Figure 3 is plotted for ξ = 0.6 and $\lambda_k = 1.9/(1 + k)$. We observe that large values of a give the lowest number of iterations and computational time; thus, the best performance of the algorithm is attained at a = 0.99.
Table 2 shows the comparison between SEM and Mann-MEM. It is to be noted that Mann-MEM is more effective than SEM in that it needs less computational time. One distinguishing feature is that when the number of constraints is quite large, Mann-MEM needs considerably less computational runtime.

5. Conclusions

The aim of this research was to find solutions of variational inequality problems governed by a monotone and Lipschitz continuous operator. We showed that the iteration sequence of Mann's mean extra-gradient technique converges weakly to a solution of the problem at hand. For a specified range of acceptable parameter values, the calculations show that the proposed approach exhibits better convergence behaviour than the traditional sub-gradient extra-gradient method. Some conclusions are outlined below.
• In order for the Mann-MEM technique to converge properly, the Lipschitz constant L of the operator F must be known; if this knowledge is not accessible, the scheme cannot be applied directly. Given the difficulties in determining the Lipschitz constant in practice, some may question whether Mann-MEM and its convergence properties can be used in real-world situations; however, this concern can be addressed. For example, among the many interesting Mann-MEM variants are those that utilize a variable step size sequence $(\xi_k)_{k=1}^{\infty}$ rather than a fixed step size ξ, and those that do not need prior knowledge of the constant L.
• Another finding that should be noted is that when the averaging matrix $(a_{l,m})_{l,m=1}^{\infty}$ is tuned to its optimum value, Mann-MEM outperforms SEM. Indeed, at this point in time, the search for more examples of averaging matrices that meet the M-concentrating criterion is an interesting direction to consider.
• It is to be noted that Mann-MEM is more effective than SEM in that it needs less computational work. One distinguishing feature of its performance is that when the number of constraints is quite large, Mann-MEM needs much less computational runtime.

Author Contributions

Conceptualization, T.C.; Data curation, H.L. and F.G.; Funding acquisition, D.Y.; Resources, F.G.; Software, D.Y.; Writing—original draft, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389. [Google Scholar] [CrossRef]
  2. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980. [Google Scholar]
  3. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  4. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710. [Google Scholar] [CrossRef]
  5. Cho, S.Y. Hybrid algorithms for variational inequalities involving a strict pseudocontraction. Symmetry 2019, 11, 1502. [Google Scholar] [CrossRef]
  6. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 2020, 169, 217–245. [Google Scholar] [CrossRef]
  7. Hieu, D.V.; Cho, Y.J.; Xiao, Y.-B.; Kumam, P. Relaxed extragradient algorithm for solving pseudomonotone variational inequalities in Hilbert spaces. Optimization 2020, 69, 2279–2304. [Google Scholar] [CrossRef]
  8. Muangchoo, K.; Alreshidi, N.A.; Argyros, I.K. Approximation results for variational inequalities involving pseudomonotone bifunction in real Hilbert spaces. Symmetry 2021, 13, 182. [Google Scholar] [CrossRef]
  9. Thong, D.V.; Vinh, N.T.; Cho, Y.J. New strong convergence theorem of the inertial projection and contraction method for variational inequality problems. Numer. Algorithms 2020, 84, 285–305. [Google Scholar] [CrossRef]
  10. Yao, Y.; Postolache, M.; Yao, J.-C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12. [Google Scholar]
  11. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef]
  12. Gibali, A. A new non-Lipschitzian method for solving variational inequalities in Euclidean spaces. J. Nonlinear Anal. Optim. 2015, 6, 41–51. [Google Scholar]
  13. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
  14. Malitsky, Y.; Semenov, V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277. [Google Scholar] [CrossRef]
  15. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 2018, 67, 83–102. [Google Scholar] [CrossRef]
  16. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307. [Google Scholar] [CrossRef]
  17. Zhang, Z.; Luo, C.; Zhao, Z. Application of probabilistic method in maximum tsunami height prediction considering stochastic seabed topography. Nat. Hazards 2020, 104, 2511–2530. [Google Scholar] [CrossRef]
  18. Zheng, W.; Liu, X.; Yin, L. Research on image classification method based on improved multi-scale relational network. PeerJ Comput. Sci. 2021, 7, e613. [Google Scholar] [CrossRef]
  19. Ma, Z.; Zheng, W.; Chen, X.; Yin, L. Joint embedding VQA model based on dynamic word vector. PeerJ Comput. Sci. 2021, 7, e353. [Google Scholar] [CrossRef]
  20. Yang, J.; Liu, H.; Li, G. Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities. Numer. Algorithms 2020, 84, 389–405. [Google Scholar] [CrossRef]
  21. Fan, S.; Wang, Y.; Cao, S.; Zhao, B.; Sun, T.; Liu, P. A deep residual neural network identification method for uneven dust accumulation on photovoltaic (PV) panels. Energy 2022, 239, 122302. [Google Scholar] [CrossRef]
  22. Fan, S.; Wang, Y.; Cao, S.; Sun, T.; Liu, P. A novel method for analyzing the effect of dust accumulation on energy efficiency loss in photovoltaic (PV) system. Energy 2021, 234, 121112. [Google Scholar] [CrossRef]
  23. Cai, T.; Dong, M.; Liu, H.; Nojavan, S. Integration of hydrogen storage system and wind generation in power systems under demand response program: A novel p-robust stochastic programming. Int. J. Hydrogen Energy 2021, 47, 443–458. [Google Scholar] [CrossRef]
  24. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  25. Gao, F.; Yu, D.; Sheng, Q. Analytical treatment of unsteady fluid flow of nonhomogeneous nanofluids among two infinite parallel surfaces: Collocation method-based study. Mathematics 2022, 10, 1556. [Google Scholar] [CrossRef]
  26. Yu, D.; Wang, R. An optimal investigation of convective fluid flow suspended by carbon nanotubes and thermal radiation impact. Mathematics 2022, 10, 1542. [Google Scholar] [CrossRef]
  27. Combettes, P.L.; Pennanen, T. Generalized Mann iterates for constructing fixed points in Hilbert spaces. J. Math. Anal. Appl. 2002, 275, 521–536. [Google Scholar] [CrossRef]
  28. Zheng, W.; Yin, L.; Chen, X.; Ma, Z.; Liu, S.; Yang, B. Knowledge base graph embedding module design for Visual question answering model. Pattern Recognit. 2021, 120, 108153. [Google Scholar] [CrossRef]
  29. Wang, Z.; Ramamoorthy, R.; Xi, X.; Namazi, H. Synchronization of the neurons coupled with sequential developing electrical and chemical synapses. Math. Biosci. Eng. MBE 2022, 19, 1877–1890. [Google Scholar] [CrossRef]
  30. Xiong, Q.; Chen, Z.; Huang, J.; Zhang, M.; Song, H.; Hou, X.; Feng, Z. Preparation, structure and mechanical properties of Sialon ceramics by transition metal-catalyzed nitriding reaction. Rare Met. 2020, 39, 589–596. [Google Scholar] [CrossRef]
  31. Combettes, P.L.; Glaudin, L.E. Quasi-nonexpansive iterations on the affine hull of orbits: From Mann’s mean value algorithm to inertial methods. SIAM J. Optim. 2017, 27, 2356–2380. [Google Scholar] [CrossRef]
  32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: Cham, Switzerland, 2017. [Google Scholar]
  33. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces. In Lecture Notes in Mathematics 2057; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  34. Yu, D.M.; Ma, Z.M.; Wang, R.J. Efficient Smart Grid Load Balancing via Fog and Cloud Computing. Math. Probl. Eng. 2022, 2022, 3151249. [Google Scholar] [CrossRef]
  35. Mosavi, A.; Qasem, S.N.; Shokri, M.; Band, S.S.; Mohammadzadeh, A. Fractional-order fuzzy control approach for photovoltaic/battery systems under unknown dynamics, variable irradiation and temperature. Electronics 2020, 9, 1455. [Google Scholar] [CrossRef]
  36. Liu, Z.; Mohammadzadeh, A.; Turabieh, H.; Mafarja, M.; Band, S.S.; Mosavi, A. A new online learned interval type-3 fuzzy control system for solar energy management systems. IEEE Access 2021, 9, 10498–10508. [Google Scholar] [CrossRef]
  37. Knopp, K. Infinite Sequences and Series; Dover: New York, NY, USA, 1956. [Google Scholar]
  38. Jaipranop, C.; Saejung, S. On the strong convergence of sequences of Halpern type in Hilbert spaces. Optimization 2018, 67, 1895–1922. [Google Scholar] [CrossRef]
  39. Chansangiam, P. A survey on operator monotonicity, operator convexity, and operator means. Int. J. Anal. 2015, 2015, 649839. [Google Scholar] [CrossRef]
Figure 1. Comparison of the rate of convergence between iterative algorithms (8) and (15).
Figure 2. Represents convergence behavior of ∥sn+1(x) − 0∥2 for the initial value s0(x) = x.
Figure 3. Effect of a > 0 for Mann-MEM.
Table 1. Effects of the step size for numerous parameters λ > 0, with execution sub-gradient extra-gradient and Mann mean extra-gradient methods.
Method     λ     Iterations   Time (s)   Inner Iter.
SEM        1.3   14           0.1826     47,630
           1.4   15           0.1501     37,476
           1.5   15           0.1177     28,591
           1.6   16           0.0906     23,691
           1.7   >100         >0.2871    >84,555
           1.8   30           0.0899     23,648
           1.9   28           0.0699     17,749
Mann-MEM   1.3   >100         >1.0595    >319,322
           1.4   >100         >0.7589    >224,422
           1.5   18           0.118      33,285
           1.6   18           0.0924     25,846
           1.7   19           0.0906     21,495
           1.8   30           0.0851     23,508
           1.9   23           0.0607     15,925
Table 2. Performance of the SEM and Mann-MEM for various dimensions (n) and number of constraints (m).
m     n      Time (s)                 Iterations
             Mann-MEM     SEM         Mann-MEM   SEM
50    500    36.3368      38.7986     51.2       51
100   500    88.4383      94.3647     51         51
200   500    239.0405     248.4960    51         50
50    1000   58.6253      61.8089     53         52
100   1000   137.0350     143.5451    53         52
200   1000   344.8198     368.2668    52.7       52
50    2000   118.4089     123.3731    54         53.1
100   2000   245.7529     257.4444    54         53
200   2000   576.3775     604.0555    54         53
50    3000   242.2706     247.8855    55         54
100   3000   440.8821     452.0647    55         54
200   3000   1031.5349    1070.5699   55         54
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
