Article

New Iterative Scheme Involving Self-Adaptive Method for Solving Mixed Variational Inequalities

1 Department of Mathematics and Sciences, College of Humanities and Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
2 Department of Mathematics, Air University, PAF Complex E-9, Islamabad 44000, Pakistan
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(3), 310; https://doi.org/10.3390/axioms12030310
Submission received: 14 February 2023 / Revised: 12 March 2023 / Accepted: 13 March 2023 / Published: 20 March 2023
(This article belongs to the Special Issue Fixed Point Theory and Its Related Topics IV)

Abstract

Variational inequality (VI) problems have been generalized and expanded in various ways, and the VI principle has become a remarkable study area combining pure and applied research. Fixed-point theory provides an important framework that significantly aids the study of variational inequalities; indeed, fixed-point formulations can be considered an inherent component of the VI. We consider the mixed variational inequality (MVI), a useful generalization of the classical variational inequality. The projection method is not applicable for solving the MVI because of the involvement of the nonlinear term ϕ. The MVI is, however, equivalent to a fixed-point problem and to the resolvent equations, and this equivalence is commonly used in research on the existence of solutions to the MVI. This paper uses a new self-adaptive method, based on an adjustable step size, to modify the fixed-point formulation for solving the MVI. We also prove the convergence of the proposed scheme. Our result can be seen as a significant refinement of the previously known results for the MVI. A numerical example is also provided to illustrate the implementation of the generated algorithm.

1. Introduction

It is recognized that the theory of variational inequalities has played an important part in the development of diverse areas of pure and applied mathematics in the sciences, such as gauge field theory in particle physics and the general theory of relativity, and it remains a major field of engineering and mathematical advancement. Lagrange, Newton, Leibniz, Fermat, and Bernoulli set the basis for variational theories; see [1,2,3,4,5,6] for more information.
In recent years, much of the advancement in variational methods has taken place in the field of variational inequalities (VI), owing largely to Stampacchia [6]. The theory of variational inequalities constitutes an important and novel expansion of variational methods. It explains, on a broad scale, absorbing developments and links among many fields: mathematics, economics, optimization, equilibrium, finance, physics, and regional and engineering disciplines. This is because the theory of variational inequalities provides a direct, simple, and efficient framework for the formulation of a wide range of problems.
In VI theory, numerical methods play an essential role in solving given problems. VI problems are transformed into fixed-point problems by using these methods: the fixed-point formulation is equivalent to the VI problem, and it is utilized both to solve VI problems and to design new iterative strategies. The projection technique, implicit techniques, and their various variants are examples of such iterative schemes. It is known [6,7,8,9] that the theory of variational inequalities has emerged as an effective and powerful tool of current mathematical technology, and it has been considered in various fields of mathematics arising in both pure and applied sciences. The theory of VI provides us with a tool for formulating a series of equilibrium problems, qualitatively analyzing the existence and uniqueness of solutions, analyzing stability and sensitivity, and providing algorithms along with convergence analysis for computational purposes; see [9,10,11,12,13]. It contains, as special cases, such well-known issues in mathematical programming as systems of nonlinear equations, optimization, and complementarity problems, and it is also related to fixed-point formulations. An approximate proximal-extragradient-type method is presented in [7,14] for monotone variational inequalities. A new predictor-corrector self-adaptive approach for solving nonlinear variational inequalities was proposed in [11,13,14,15,16,17]. The theory of VI has been developed in several directions using new and novel methods, and some of these developments have made mutually enriching contacts with other areas of pure and applied science. In [12,18], the focus is mainly on recent iterative algorithms for solving various variational inequalities.
The projection method is a useful resource for obtaining VI solutions. The main point of this method is to establish an equivalent fixed-point formulation by means of the projection operator; this alternative formulation greatly assisted the development of various projection-type algorithms for addressing VI. By using projection theory, we construct a fixed-point formulation and generate a new iterative scheme; then, under suitable conditions, we can demonstrate the uniqueness of the fixed point and the convergence criteria of the newly generated scheme. The projection operator restricts us, however, when the VI involves a nonlinear term, and other strategies must then be considered to tackle such problems.
Variational inequality theory has been expanded in many directions, and different techniques have been used to extend and broaden VI problems. An important and constructive expansion of the variational theory is the MVI, also known as the variational inequality of the second kind, because of the involvement of the nonlinear term ϕ. For a function ϕ: H → ℝ ∪ {+∞} and a nonlinear operator T: H → H, where H is a Hilbert space, we consider the problem of finding a point f ∈ H such that

⟨Tf, g − f⟩ + ϕ(g) − ϕ(f) ≥ 0,  ∀g ∈ H.    (1)
The expression (1) is known as the MVI. The auxiliary principle technique has also been suggested for solving general mixed variational inequalities; see [10,19,20,21]. The origin of this method can be traced back to Lions and Stampacchia; Glowinski, Lions, and Tremolieres [9] used this technique to study the existence of a solution of mixed variational inequalities. It has been shown that a large class of problems, involving linear and nonlinear operators and arising in the applied and pure areas of mathematics, can be investigated in the framework of the MVI (1); see [4,20]. If the term ϕ is proper, convex, and lower semicontinuous, then the inequality (1) is equivalent to finding a point f ∈ H such that

0 ∈ Tf + ∂ϕ(f),    (2)

where ∂ϕ(·) denotes the subdifferential of ϕ. We call the expression (2) a variational inclusion problem. One can also view (2) as the problem of finding a zero of the sum of two monotone operators. For further theory and applications in the field of mathematics, particularly in numerical areas, and for other aspects of the importance of the MVI, see [10,22].
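To make the inclusion (2) concrete, consider a small one-dimensional illustration of our own (not taken from the paper): H = ℝ, Tf = f − 3, and ϕ(f) = |f|. Then f* = 2 satisfies 0 ∈ Tf* + ∂ϕ(f*), since ∂|2| = {1} and (2 − 3) + 1 = 0, and f* can be computed by iterating the resolvent fixed point, because the resolvent of ρ|·| is the familiar soft-thresholding map:

```python
import numpy as np

def soft_threshold(x, lam):
    # Resolvent (proximal map) of lam*|.|: argmin_u 0.5*(u - x)**2 + lam*|u|
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

T = lambda f: f - 3.0   # strongly monotone and Lipschitz continuous on R
rho = 0.5

f = 0.0
for _ in range(100):
    f = soft_threshold(f - rho * T(f), rho)   # f = J_phi[f - rho*T(f)]

print(f)  # ≈ 2.0; indeed 0 ∈ (2 - 3) + ∂|2|, since ∂|2| = {1}
```

Any ρ > 0 works here; the iteration is simply the fixed-point form of (2) made explicit later in the paper.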
As we know, the projection technique cannot be used to establish the equivalence between the MVI and the fixed-point problem, precisely because of the term ϕ. However, if the nonlinear term ϕ in the MVI is a proper, convex, and lower semicontinuous function, then the resolvent operator technique plays an important role in establishing this equivalence. The resolvent step involves only the subdifferential of a proper, convex, and lower semicontinuous function, while the other part describes the decomposition of the problem. This step helps to establish very proficient techniques for solving the MVI by means of resolvent equations; see [21]. In this research, we suggest a new self-adaptive technique involving a step size to solve the MVI. The convergence analysis of the proposed method is also provided.

2. Preliminaries

In this section, we provide the necessary basic results required for constructing the new results. These basic lemmas help us to develop the linkage and correlation needed to understand the new iterative schemes. We require the following familiar results.
Lemma 1. 
Let F be a differentiable convex function and let E be a convex set. Then, f ∈ E is the minimum of F if and only if f ∈ E satisfies the inequality

⟨F′(f), g − f⟩ ≥ 0,  ∀g ∈ E.    (3)

Proof. 
Let f ∈ E be the minimum of the function F; then

F(f) ≤ F(g),  ∀g ∈ E.    (4)

For f, g ∈ E and t ∈ [0, 1], the convexity of E gives

V_t = f + t(g − f) ∈ E.

Replacing g by V_t in (4), we have

F(f) ≤ F(V_t) = F(f + t(g − f)),

that is,

F(f + t(g − f)) − F(f) ≥ 0.

Dividing by t and taking the limit as t → 0, we obtain

lim_{t→0} [F(f + t(g − f)) − F(f)]/t ≥ 0,

which means

⟨F′(f), g − f⟩ ≥ 0,  ∀g ∈ E.    (5)

Conversely, let f ∈ E satisfy (3). Since F is a convex function,

F((1 − t)f + tg) ≤ (1 − t)F(f) + tF(g).

Rearranging the expression, we have

F((1 − t)f + tg) ≤ F(f) + t[F(g) − F(f)],

and after adjustment we obtain

t[F(g) − F(f)] ≥ F((1 − t)f + tg) − F(f) = F(f + t(g − f)) − F(f).

Dividing by t and taking the limit as t → 0,

F(g) − F(f) ≥ lim_{t→0} [F(f + t(g − f)) − F(f)]/t = ⟨F′(f), g − f⟩ ≥ 0,

where the last inequality is (3). Hence

F(f) ≤ F(g),  ∀g ∈ E,

so f ∈ E is the minimum of F. □
Here, F′(f) denotes the Fréchet derivative of F at f ∈ E. The inequality (3) is called a variational inequality. From this lemma, we conclude that convexity plays an important role in VI theory.
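Lemma 1 can be checked numerically. The sketch below (our own made-up instance, not from the paper) minimizes F(f) = ‖f − c‖² over the box E = [0, 1]⁵, whose minimizer is the componentwise clipping of c onto the box, and verifies the variational inequality (3) at that point:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.uniform(-2, 2, size=5)

f_star = np.clip(c, 0.0, 1.0)   # minimizer of F(f) = ||f - c||^2 over E = [0,1]^5
grad = 2.0 * (f_star - c)       # F'(f_star)

# <F'(f_star), g - f_star> >= 0 must hold for every g in E
for _ in range(1000):
    g = rng.uniform(0.0, 1.0, size=5)
    assert grad @ (g - f_star) >= -1e-12
print("variational inequality (3) holds at the clipped point")
```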
We know that VI theory has been extended in various directions. An important generalization of the VI is the MVI, or VI of the second kind, involving the nonlinear term ϕ. We observe that if ϕ is the indicator function of a closed convex set E in H, that is,

ϕ(f) ≡ I_E(f) = { 0, if f ∈ E; +∞, otherwise },    (6)

then the inequality (1) is equivalent to finding f ∈ E such that

⟨Tf, g − f⟩ ≥ 0,  ∀g ∈ E.    (7)

Problem (7) is called the classical variational inequality, which was investigated by Stampacchia; see [6]. On the applications side, the VI is used to investigate many unrelated odd-order and nonsymmetric obstacle problems, as well as free, moving-boundary, and equilibrium problems arising in regional, engineering, applied, physical, and mathematical sciences; see [7,10,12,18,23,24].
We know that the projection technique, along with the Wiener–Hopf equations, is not useful for the solution of the MVI. To overcome this drawback, we use the resolvent operator technique.
We now define some basic concepts.
Definition 1 
([2]). For a maximal monotone operator A on H and a given constant ρ > 0, the resolvent operator is defined as

J_A(f) = (I + ρA)⁻¹(f),  ∀f ∈ H.    (8)

It is a well-known fact that the resolvent operator is defined everywhere if and only if the monotone operator is maximal. Additionally, it is a single-valued and nonexpansive function, satisfying the inequality

‖J_A(f) − J_A(g)‖ ≤ ‖f − g‖,  ∀f, g ∈ H.
Remark 1. 
The subdifferential ∂ϕ of a proper, convex, and lower semicontinuous function ϕ is maximal monotone, so its resolvent can be written as

J_ϕ(f) = (I + ρ∂ϕ)⁻¹(f),  ∀f ∈ H.
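As a concrete instance (our own illustration, not from the paper), take ϕ(f) = ‖f‖₁, which is proper, convex, and lower semicontinuous. Its resolvent J_ϕ = (I + ρ∂ϕ)⁻¹ has the closed form of componentwise soft-thresholding, and the nonexpansiveness inequality above can be verified numerically:

```python
import numpy as np

def resolvent_l1(x, rho):
    # J_phi(x) = (I + rho * d||.||_1)^(-1)(x): componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

rng = np.random.default_rng(1)
rho = 0.7
for _ in range(1000):
    f, g = rng.normal(size=10), rng.normal(size=10)
    lhs = np.linalg.norm(resolvent_l1(f, rho) - resolvent_l1(g, rho))
    assert lhs <= np.linalg.norm(f - g) + 1e-12   # ||J(f) - J(g)|| <= ||f - g||
print("nonexpansiveness verified on 1000 random pairs")
```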
The following lemma gives a characterization of the resolvent operator J_ϕ.
Lemma 2 
([2]). For given f ∈ H and z ∈ H, we have

⟨f − z, g − f⟩ + ρϕ(g) − ρϕ(f) ≥ 0,  ∀g ∈ H,    (9)

if and only if

f = J_ϕ z,    (10)

where

J_ϕ(f) = (I + ρ∂ϕ)⁻¹(f)

is the resolvent operator.
This lemma shows the equivalence between the MVI and the fixed-point problem.
Lemma 3 
([19,21]). A function f ∈ H is a solution of the inequality (1) if and only if

f = J_ϕ[f − ρTf].    (11)

This formulation is used to establish a self-adaptive technique for the solution of the MVI. Consider

h = J_ϕ[f − γTf],  for γ > 0,    (12)
f = J_ϕ[h − ρTh],  for ρ > 0.    (13)

We now define the residue vector R(f) as

R(f) := f − J_ϕ[f − ρTf].

From Lemma 3, we can see that if f is a solution of (1), then

R(f) = 0.
Related to the MVI (1), we consider the resolvent equation problem. Let R_ϕ = I − J_ϕ, where I and J_ϕ are the identity and resolvent operators, respectively. For a given operator T: H → H, where H is a Hilbert space, the resolvent equation problem is to find z ∈ H such that

ρ T J_ϕ z + R_ϕ z = 0,    (14)

which was introduced and studied in [14]. The resolvent equations have been used to develop various efficient numerical techniques, which are more flexible.
Lemma 4. 
An element z ∈ H satisfies the resolvent Equation (14) if and only if

f = J_ϕ z,    (15)
z = f − ρTf,    (16)

where ρ > 0 is a constant.
From Lemma 4, the MVI (1) and the resolvent Equation (14) are equivalent. This can be verified as follows. From (15) and (16), we can write

z = J_ϕ z − ρ T J_ϕ z,

where f = J_ϕ z. We see that

z − J_ϕ z = −ρ T J_ϕ z,
(I − J_ϕ) z = −ρ T J_ϕ z.

With R_ϕ = I − J_ϕ, this gives

R_ϕ z = −ρ T J_ϕ z,

that is,

T J_ϕ z + ρ⁻¹ R_ϕ z = 0,

which is the resolvent Equation (14). This indicates that the MVI and the resolvent equation are equivalent.
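This equivalence is easy to confirm on a one-dimensional toy problem of our own (not from the paper): with Tf = f − 3, ϕ = |·|, and ρ = 0.5, the point f* = 2 satisfies the fixed-point Equation (11), and z = f* − ρTf* then solves the resolvent Equation (14):

```python
import numpy as np

def J_phi(x, rho):
    # resolvent of phi = |.|: soft-thresholding with threshold rho
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

T = lambda f: f - 3.0
rho = 0.5
f_star = 2.0

# f* is a fixed point: f* = J_phi[f* - rho*T(f*)]   (Eq. (11))
assert abs(J_phi(f_star - rho * T(f_star), rho) - f_star) < 1e-12

z = f_star - rho * T(f_star)        # Eq. (16); here z = 2.5
R_phi_z = z - J_phi(z, rho)         # R_phi = I - J_phi
assert abs(rho * T(J_phi(z, rho)) + R_phi_z) < 1e-12   # Eq. (14) holds
print("resolvent equation verified")
```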
This alternative equivalent formulation has been used to study many efficient iterative schemes for the MVI and is also related to optimization problems. From Lemmas 3 and 4, we see that the inequality (1) and the resolvent equation (14) characterize the same solution. This alternative formulation is useful for numerical and approximation schemes, and we exercise it to establish and analyze a number of iterative schemes for solving the MVI (1).

3. Main Results

In this section, using the basic lemmas and results presented in the preliminaries, we first establish the new, modified scheme. By using this scheme, we modify the fixed-point formulation, and this updates the solution. The scheme is new in the theory of the MVI and is also an extension of VI techniques.
Using (14)–(16), together with (11), the resolvent Equation (14) can be written in the form

0 = f − J_ϕ[f − ρTf] − ρTf + ρ T J_ϕ[f − ρTf] = R(f) − ρTf + ρ T J_ϕ[f − ρTf].    (17)

We now define the relation

D(f) = R(f) − ρTf + ρ T J_ϕ[f − ρTf].    (18)

It follows that f ∈ H is a solution of the MVI (1) if and only if f ∈ H is a zero of the function D(f), that is,

D(f) = 0.    (19)

Using (11) and (18), we can rewrite the fixed-point formulation as

h = J_ϕ[f − γD(f) − γTf].    (20)

The above results are used to establish iterative schemes for the MVI problem (1). This modification upgrades the fixed-point iteration and is new in the theory of the MVI; it has motivated us to propose the following new self-adaptive iterative scheme, whose standard procedure is closely related to the projection residue technique.
We now consider the convergence criteria of Algorithm 1, which is the main motivation of our results. Convergence analysis is very important for establishing the existence of the solution under certain conditions. Theorem 1 establishes the convergence of the newly proposed scheme.
Algorithm 1 Self-Adaptive Iterative Scheme
Step 0:
         Given ρ > 0, ε > 0, μ ∈ (0, 1), γ ∈ [1, 2), δ₀, δ ∈ (0, 1), and f₀ ∈ H, set n = 0.
Step 1:
         Stopping criterion: set ρ_n = ρ. If ‖R(f_n)‖ < ε, stop; otherwise, find ρ_n satisfying

ρ_n ‖T(f_n) − T(h_n)‖ ≤ δ ‖R(f_n)‖,    (21)

where

h_n = J_ϕ[f_n − γD(f_n) − γT(f_n)].    (22)

Step 2:
         Compute

D(f_n) = R(f_n) − ρT(f_n) + ρ T J_ϕ[f_n − ρT(f_n)],

where

R(f_n) := f_n − J_ϕ[f_n − ρT(f_n)].

Step 3:
         Obtain the next iterate via

h_{n+1} = J_ϕ[f_n − γD(f_n) − γT(f_n)],
f_{n+1} = J_ϕ[h_{n+1} − ρT(h_{n+1})].

If the condition (21) holds, set ρ = ρ_n μ; otherwise, set ρ = ρ_n. Set n = n + 1 and return to Step 1.
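To show the shape of such an iteration in code, here is a minimal runnable sketch of the two-step resolvent iteration (12)–(13) (our own simplified variant: the step sizes are fixed rather than self-adaptive as in Algorithm 1, and the operator, the weight lam, and all data are made-up illustrations). We take T(f) = Af − b with A symmetric positive definite, so T is strongly monotone and Lipschitz continuous, and ϕ(f) = lam·‖f‖₁, whose resolvent is soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
M = rng.normal(size=(n, n))
A = M.T @ M + np.eye(n)            # symmetric positive definite
b = rng.normal(size=n)
T = lambda f: A @ f - b            # strongly monotone, Lipschitz continuous

lam = 0.1                          # phi(f) = lam * ||f||_1
def J_phi(x, step):
    # resolvent of step*phi: soft-thresholding with threshold step*lam
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

L = np.linalg.norm(A, 2)           # Lipschitz constant of T (spectral norm)
rho = gamma = 1.0 / L              # fixed step sizes (no self-adaptation here)

f = np.zeros(n)
for _ in range(5000):
    h = J_phi(f - gamma * T(f), gamma)   # predictor, Eq. (12)
    f = J_phi(h - rho * T(h), rho)       # corrector, Eq. (13)

residual = np.linalg.norm(f - J_phi(f - rho * T(f), rho))   # ||R(f)||
print(residual)   # ~ 0: f approximately satisfies the fixed point (11)
```

The self-adaptive rule of Algorithm 1 replaces the fixed ρ here with a ρ_n that is shrunk by the factor μ whenever the test in Step 1 fails.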
Theorem 1. 
Let the operator T: H → H be strongly monotone and Lipschitz continuous with constants α > 0 and β > 0, respectively. If

|ρ − α/β²| < √(α² − β²(1 − K²)) / β²,

where

α > β√(1 − K²)  and  K < 1,

then the approximate solution f_n obtained from Algorithm 1 converges to a solution of the MVI (1).
Proof. 
Let f* be a solution of the MVI (1). It follows from Lemma 3 (which holds for every positive step size) that

h* = J_ϕ[f* − ρTf*],    (23)
f* = J_ϕ[h* − ρTh*].    (24)

Applying Algorithm 1 and using the nonexpansiveness of J_ϕ, we obtain

‖f_{n+1} − f*‖ = ‖J_ϕ[h_n − ρTh_n] − J_ϕ[h* − ρTh*]‖ ≤ ‖h_n − h* − ρ(Th_n − Th*)‖.    (25)

By the strong monotonicity and Lipschitz continuity of T, we have

‖h_n − h* − ρ(T(h_n) − T(h*))‖²
= ‖h_n − h*‖² − 2ρ⟨h_n − h*, T(h_n) − T(h*)⟩ + ρ²‖T(h_n) − T(h*)‖²
≤ ‖h_n − h*‖² − 2ρα‖h_n − h*‖² + ρ²β²‖h_n − h*‖²,

which is equivalent to

‖h_n − h* − ρ(T(h_n) − T(h*))‖ ≤ √(1 − 2ρα + ρ²β²) ‖h_n − h*‖.    (26)

From (25) and (26), we obtain

‖f_{n+1} − f*‖ ≤ √(1 − 2ρα + ρ²β²) ‖h_n − h*‖.    (27)

Setting t(ρ) = √(1 − 2ρα + ρ²β²), Equation (27) becomes

‖f_{n+1} − f*‖ ≤ t(ρ) ‖h_n − h*‖.    (28)

Using the definition of h_n in Algorithm 1, the fact that h* = J_ϕ[f* − γTf*] (Lemma 3 with step size γ) together with D(f*) = 0, and the nonexpansiveness of J_ϕ, we obtain

‖h_n − h*‖ = ‖J_ϕ[f_n − γD(f_n) − γTf_n] − J_ϕ[f* − γTf*]‖
≤ ‖[f_n − γD(f_n) − γTf_n] − [f* − γTf*]‖
≤ ‖f_n − f* − γD(f_n)‖ + γ‖Tf_n − Tf*‖.    (29)

It follows that

‖f_n − f* − γD(f_n)‖² = ‖f_n − f*‖² − 2γ⟨f_n − f*, D(f_n)⟩ + γ²‖D(f_n)‖² ≤ ‖f_n − f*‖²,

so that

‖f_n − f* − γD(f_n)‖ ≤ ‖f_n − f*‖.    (30)

Similarly, by the Lipschitz continuity of T,

‖T(f_n) − T(f*)‖ ≤ β‖f_n − f*‖.    (31)

From (29)–(31), we have

‖h_n − h*‖ ≤ (1 + γβ)‖f_n − f*‖.    (32)

From (28) and (32), we obtain

‖f_{n+1} − f*‖ ≤ t(ρ)(1 + γβ)‖f_n − f*‖.    (33)

Let θ = t(ρ)(1 + γβ). For the convergence criterion 0 < θ < 1, we require

t(ρ)(1 + γβ) < 1
⟺ √(1 − 2ρα + ρ²β²) < 1/(1 + γβ) =: K, where K < 1,
⟺ 1 − 2ρα + ρ²β² < K²
⟺ ρ²β² − 2ρα + 1 − K² < 0.

By applying the quadratic formula to this inequality in ρ, we obtain

[2α − √(4α² − 4β²(1 − K²))]/(2β²) < ρ < [2α + √(4α² − 4β²(1 − K²))]/(2β²),

which simplifies to

|ρ − α/β²| < √(α² − β²(1 − K²)) / β²,  where α > β√(1 − K²).

From (33), we obtain

‖f_{n+1} − f*‖ ≤ θ‖f_n − f*‖,

and, applying this inequality repeatedly,

‖f_n − f*‖ ≤ θⁿ ‖f_0 − f*‖.

Since θ < 1, lim_{n→∞} θⁿ = 0, so the sequence f_{n+1} generated by Algorithm 1 converges to the solution f* of problem (1); moreover, since the fixed point of a contraction is unique, problem (1) has a unique solution. These results show that, under the stated conditions, the solution exists and is unique, which was the main target of the results. □
In the next section, we provide a numerical example for the solution of the problem, as an implementation of the derived results.

4. Numerical Results

Here, numerical results are presented for the VI. As noted above, if ϕ is an indicator function, then the MVI reduces to the classical variational inequality. For this case, we consider the following example.
Example 1. 
We consider the variational inequality (7) with T₁(f) = D₁(f) + D₂f + s, where D₁(f) is the nonlinear part and D₂f + s is the linear part of T₁(f). Example 1 is a special case of problem (1). The matrix D₂ = XᵀX + Y, where X is an n × n matrix whose elements are randomly chosen from the interval (−5, +5), and the skew-symmetric matrix Y is obtained in the same way. The vector s is drawn from a uniform distribution on the interval (−500, +500) for easy problems and on (−500, 0) for problems considered hard, respectively. In D₁(f), the nonlinear part of T₁(f), the components are D_j(f) = z_j arctan(f_j), where each z_j is a random variable generated in (0, 1).
In this problem, the values of μ, δ, δ₀, and γ are 2/3, 0.95, 0.95, and 1.95, respectively. Additionally, we take f₀ = (0, 0, 0, …, 0)ᵀ as the starting point. The computation starts with ρ₀ = 1 and terminates when ‖R(f_n)‖ ≤ 10⁻⁷. The codes are written in Matlab, and the computational results are shown in Table 1.
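The test problem of Example 1 can be re-created as follows (a sketch under stated assumptions: the paper's original code is in Matlab, and helper names such as make_problem are our own):

```python
import numpy as np

def make_problem(n, hard=False, seed=0):
    # Build T1(f) = D1(f) + D2 @ f + s as described in Example 1.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, n))
    B = rng.uniform(-5, 5, size=(n, n))
    Y = B - B.T                                    # skew-symmetric matrix
    D2 = X.T @ X + Y
    s = rng.uniform(-500, 0, n) if hard else rng.uniform(-500, 500, n)
    z = rng.uniform(0, 1, n)
    # D1(f)_j = z_j * arctan(f_j): the nonlinear part of T1
    return lambda f: z * np.arctan(f) + D2 @ f + s

T1 = make_problem(100)
print(T1(np.zeros(100)).shape)   # (100,)
```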

5. Conclusions

In this study, we have considered the solution of the MVI (1) and have proposed a new self-adaptive iterative scheme for the MVI. For the MVI, we use the resolvent operator for the fixed-point formulation, and the new iterative method is established using this operator; the strategy of the technique is based on the resolvent equation. We use a step-size, or self-adaptive, method to modify the iteration, and this technique is new in the theory of the MVI. Convergence analysis is also proved under certain stated conditions, and a numerical example is provided to illustrate the implementation of the algorithm.

Author Contributions

Conceptualization, methodology, and analysis, S.U.; funding acquisition, A.M.; investigation, M.B.; methodology, M.S.A.; project administration, K.A.; resources, K.A.; supervision, S.U.; visualization, A.M.; writing—review and editing, M.B.; proofreading and editing, M.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the support of Prince Sultan University for providing the Article Processing Charges (APC) of this publication.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The manuscript includes all required data and implementation information.

Acknowledgments

The authors wish to express their gratitude to Prince Sultan University for facilitating the publication of this article through the Theoretical and Applied Sciences Lab.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baiocchi, C.; Capelo, A. Variational and Quasi-Variational Inequalities; John Wiley and Sons: New York, NY, USA, 1984. [Google Scholar]
  2. Brezis, H. Operateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert; Elsevier: Amsterdam, The Netherlands, 1973. [Google Scholar]
  3. Daniele, P.; Giannessi, F.; Maugeri, A. Equilibrium Problems and Variational Models; Kluwer Academic Publishers: London, UK, 2003. [Google Scholar]
  4. Tu, K.; Xia, F. A projection type algorithm for solving generalized mixed variational inequalities. Act. Math. Sci. 2016, 36, 1619–1630. [Google Scholar] [CrossRef]
  5. Rahman, H.; Kumam, P.; Argyros, L.K.; Alreshidi, N.A. Modified proximal-like extragradient method for two classes of equilibria in Hilbert spaces with applications. Comput. Appl. Math. 2023, 40, 38. [Google Scholar] [CrossRef]
  6. Stampacchia, G. Formes bilineaires coercivites sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416. [Google Scholar]
  7. He, B.S.; Yang, Z.H.; Yuan, X.M. An Approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300, 362–374. [Google Scholar] [CrossRef] [Green Version]
  8. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  9. Glowinski, R.; Lions, J.L.; Tremolieres, R. Numerical Analysis of Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 1981. [Google Scholar]
  10. Cruz, J.Y.B.; Iusem, A.N. Full convergence of an approximate projection method for nonsmooth variational inequalities. Math. Comput. Simul. 2015, 114, 2–13. [Google Scholar] [CrossRef]
  11. Noor, M.A.; Ullah, S. Predictor-corrector self-adaptive methods for variational inequalities. Transylv. Rev. 2017, 16, 4147–4152. [Google Scholar]
  12. Noor, M.A. Some recent advances in variational inequalities, part I, basic concepts. N. Z. J. Math. 1997, 26, 53–80. [Google Scholar]
  13. Bux, M.; Ullah, S.; Arif, M.S.; Abodayeh, K. A self-Adaptive Technique for Solving Variational Inequalities: A New Approach to the Problem. J. Funct. Spaces 2022, 2022, 7078707. [Google Scholar] [CrossRef]
  14. Shi, P. Equivalence of variational inequalities with Wiener–Hopf equations. Proc. Am. Math. Soc. 1991, 111, 339–346. [Google Scholar] [CrossRef]
  15. Ullah, S.; Noor, M.A. An efficient method for solving new general mixed variational inequalities. J. Inequal. Spec. 2020, 11, 1–9. [Google Scholar]
  16. Alzabut, J.; Khuddush, M.; Selvam, A.G.M.; Vignesh, D. Second Order Iterative Dynamic Boundary Value Problems with Mixed Derivative Operators with Applications. Qual. Theory Dyn. Syst. 2023, 22, 32. [Google Scholar] [CrossRef]
  17. Dyab, W.M.; Sakr, A.A.; Ibrahim, M.S.; Wu, K. Variational Analysis of a Dually Polarized Waveguide Skew Loaded by Dielectric Slab. IEEE Microw. Wirel. Components Lett. 2020, 30, 737–740. [Google Scholar] [CrossRef]
  18. Sanaullah, K.; Ullah, S.; Arif, M.S.; Abodayeh, K.; Fayyaz, A. Self-Adaptive Predictor-Corrector Approach for General Variational Inequalities Using a Fixed-Point Formulation. J. Funct. Spaces 2022, 2022, 2478644. [Google Scholar] [CrossRef]
  19. Bnouhachem, A. A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 2005, 309, 136–150. [Google Scholar] [CrossRef]
  20. Bnouhachem, A.; Noor, M.A. Numerical methods for general mixed variational inequalities. App. Math. Comput. 2008, 204, 27–36. [Google Scholar] [CrossRef]
  21. Noor, M.A. A class of new iterative methods for general mixed variational inequalities. Math. Comput. Model. 2001, 31, 11–19. [Google Scholar] [CrossRef]
  22. Moudafi, A.; Thera, M. Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 1994, 94, 425–448. [Google Scholar] [CrossRef]
  23. Smith, M.J. The existence, uniqueness and stability of traffic equilibria. Trans. Res. 1979, 133, 295–304. [Google Scholar] [CrossRef]
  24. Jarad, F.; Abdeljawad, T. Variational principles in the frame of certain generalized fractional derivatives. Discret. Contin. Syst. Ser. S 2020, 13, 695–708. [Google Scholar] [CrossRef] [Green Version]
Table 1. Numerical results.

Order of Matrix (n)    No. of Iterations (Algorithm 1)
100                    42
200                    54
300                    46
500                    31
700                    41
Mukheimer, A.; Ullah, S.; Bux, M.; Arif, M.S.; Abodayeh, K. New Iterative Scheme Involving Self-Adaptive Method for Solving Mixed Variational Inequalities. Axioms 2023, 12, 310. https://doi.org/10.3390/axioms12030310