Article

Korpelevich Method for Solving Bilevel Variational Inequalities on Riemannian Manifolds

1 Three Gorges Mathematical Research Center, China Three Gorges University, Yichang 443002, China
2 College of Mathematics and Physics, China Three Gorges University, Yichang 443002, China
3 School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2025, 14(2), 78; https://doi.org/10.3390/axioms14020078
Submission received: 6 December 2024 / Revised: 8 January 2025 / Accepted: 21 January 2025 / Published: 22 January 2025
(This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications)

Abstract: A bilevel variational inequality on a Riemannian manifold is a problem involving two levels of optimization, in which one level is constrained by the solution set of the other. In this context, we present a variant of Korpelevich's method specifically designed for solving bilevel variational inequalities on Riemannian manifolds with nonnegative sectional curvature and pseudomonotone vector fields. Under mild conditions, the iteration sequences generated by the proposed algorithm converge to a solution. Finally, we provide an example to demonstrate the effectiveness of our algorithm.

1. Introduction

Variational inequalities, initially introduced by Stampacchia, have garnered significant attention and have been extensively explored in various disciplines, including economics, transportation, network analysis, structural analysis, supply chain management, and game theory. They have proven to be invaluable tools for addressing practical problems in these fields.
As mentioned by Németh [1] in 2003, variational inequalities on manifolds have been utilized to formulate numerous problems in applied fields. Manifolds, as non-linear spaces, provide a more general framework for addressing these problems. Generalizing optimization methods from Euclidean spaces to Riemannian manifolds offers several important advantages. For instance, as demonstrated in [2,3,4,5], constrained optimization problems can be reframed as unconstrained problems from the perspective of Riemannian geometry. Another advantage is that, by introducing an appropriate Riemannian metric, optimization problems with non-convex objective functions can be transformed into convex ones. Therefore, the natural and intriguing extension of the concepts and techniques of variational inequality theory, and related topics, from Euclidean spaces to Riemannian manifolds is significant. However, it is crucial to acknowledge that this endeavor is nontrivial. For further information, please refer to the cited references [1,6].
The projection method is known to be convergent for solving bilevel variational inequalities when the cost operator is cocoercive. However, it may not converge if the cost operator is merely monotone. To address this limitation, the extragradient method, also known as the double projection method, has been introduced. Unlike the projection method, the extragradient method retains convergence, even when dealing with pseudomonotone cost operators. Consequently, the extragradient method has been extended to solve problems involving pseudomonotone variational inequalities (see, for example, [7,8,9,10,11]).
Solving variational inequalities on manifolds is a challenging task. Recent studies, based on [1,6,12,13,14], have focused on developing methods for variational inequalities on manifolds. Tang and Huang [15] investigated a variant of Korpelevich’s method for pseudomonotone variational inequalities on Hadamard manifolds. Liou, Obukhovskii, and Yao [16] justified the existence of a solution for a variational inequality problem on Riemannian manifolds using the properties of topological characteristics. Tang et al. [17,18] explored the proximal point algorithm and a projection-type method for variational inequalities with pseudomonotone vector fields on Hadamard manifolds. Li, Zhou, and Huang [19] derived some gap functions for generalized mixed variational inequalities on Hadamard manifolds under appropriate conditions. Tang et al. [20] established existence results for a class of hemivariational inequality problems on Hadamard manifolds, while Hung et al. [21] examined mixed quasi-hemivariational inequality problems on Hadamard manifolds and obtained global error bounds using regularized gap functions under suitable conditions. For further details on this topic, we recommend referring to [22,23,24,25,26].
Motivated by the aforementioned research, we propose a framework for investigating bilevel variational inequalities on Riemannian manifolds with nonnegative sectional curvature and pseudomonotone vector fields. In this framework, we explore a modification of Korpelevich’s method specifically tailored to solve bilevel variational inequalities on Riemannian manifolds. To establish the convergence of our method, we utilize the concept of quasi-Fejér convergence as introduced by Quiroz [2]. Under certain assumptions regarding continuity and pseudomonotonicity of vector fields on Riemannian manifolds, we provide a proof demonstrating that the sequence generated by our proposed method converges to a solution of the bilevel variational inequalities on Riemannian manifolds.
The remainder of this paper is structured as follows: Section 2 provides a comprehensive overview of fundamental concepts, notations, and significant findings in Riemannian geometry. It also introduces the notion of pseudomonotone vector fields, and presents pivotal results on variational inequality on Riemannian manifolds. Section 3 focuses on introducing the formulation of bilevel variational inequality on Riemannian manifolds with nonnegative sectional curvature, along with the application of Korpelevich’s method specifically for solving bilevel variational inequalities on Riemannian manifolds. Section 4 is dedicated to studying the convergence properties of the proposed method for solving bilevel variational inequalities on Riemannian manifolds.

2. Preliminaries

2.1. Riemannian Geometry

An m-dimensional Riemannian manifold is represented by the pair $(M, g)$, where $M$ denotes an m-dimensional smooth manifold, and $g$ denotes a smooth, symmetric, positive definite $(0,2)$-tensor field on $M$, known as the Riemannian metric. For any point $x \in M$, the restriction $g_x : T_xM \times T_xM \to \mathbb{R}$ defines an inner product on the tangent space $T_xM$. The scalar product on $T_xM$ is denoted by $\langle\cdot,\cdot\rangle$, with the associated norm $\|\cdot\|$.
Let $\gamma : [a, b] \subset \mathbb{R} \to M$. If there exists a partition of $[a, b]$ with partition points $\{t_0, t_1, \ldots, t_n\}$, such that $a = t_0 < t_1 < \cdots < t_n = b$, and $\gamma$ restricted to each $[t_{k-1}, t_k]$ for $k = 1, 2, \ldots, n$ is a smooth curve on $M$, then the curve $\gamma$ is called a piecewise smooth curve.
Definition 1
([13]). Let $x, y \in M$ and let $\gamma : [a, b] \to M$ be a piecewise smooth curve joining $x$ and $y$ (i.e., $\gamma(a) = x$ and $\gamma(b) = y$). The length of $\gamma$ is given by
$$L(\gamma) := \int_a^b \|\dot{\gamma}(t)\|_{\gamma(t)}\, dt,$$
where $\dot{\gamma}$ denotes the first derivative of $\gamma$ with respect to $t$.
Theorem 1
([13]). Let $(M, g)$ be a connected Riemannian manifold and let $\Gamma^M_{x,y}$ be the set of all piecewise smooth curves joining $x$ and $y$ in $M$. The function
$$d : M \times M \to \mathbb{R}, \qquad d(x, y) := \inf\{L(\gamma) : \gamma \in \Gamma^M_{x,y}\},$$
defines a distance on $M$. A geodesic joining $x$ and $y$ in $M$ is said to be minimal if its length equals $d(x, y)$.
Definition 2
([13]). Let $(M, g)$ be a connected Riemannian manifold, and let $\mathcal{X}(M)$ denote the Lie algebra of smooth vector fields on $M$. A map
$$\nabla : \mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M), \qquad (X, Y) \mapsto \nabla_X Y,$$
with the properties
$$\nabla_{fX+gY} Z = f\nabla_X Z + g\nabla_Y Z, \qquad \nabla_X(aY+bZ) = a\nabla_X Y + b\nabla_X Z,$$
$$\nabla_X(fY) = X(f)\,Y + f\nabla_X Y, \qquad X\langle Y, Z\rangle = \langle \nabla_X Y, Z\rangle + \langle Y, \nabla_X Z\rangle,$$
where $f, g : M \to \mathbb{R}$, $a, b \in \mathbb{R}$, and $X, Y, Z \in \mathcal{X}(M)$, is called the Riemannian connection or Levi-Civita connection on $M$.
Let $\nabla$ be the Levi-Civita connection associated with the Riemannian metric and let $\gamma$ be a smooth curve in $M$. A vector field $X$ is said to be parallel along $\gamma : [0, 1] \to M$ if $\nabla_{\dot{\gamma}} X = 0$. If $\dot{\gamma}$ itself is parallel along $\gamma$ joining $x$ to $y$, that is,
$$\gamma(0) = x, \qquad \gamma(1) = y, \qquad \text{and} \qquad \nabla_{\dot{\gamma}}\dot{\gamma} = 0 \ \text{on } [0, 1],$$
we say that $\gamma$ is a geodesic, and in this case $\|\dot{\gamma}\|$ is constant. When $\|\dot{\gamma}\| = 1$, $\gamma$ is said to be normalized.
The exponential map at $x$, denoted by $\exp_x : T_xM \to M$, is well defined on the tangent space $T_xM$. Specifically, a curve $\gamma : [0, 1] \to M$ is a minimal geodesic joining $x$ to $y$ if and only if there exists a vector $v \in T_xM$ such that $\|v\| = d(x, y)$ and $\gamma(t) = \exp_x(tv)$ for each $t \in [0, 1]$. By the Hopf–Rinow theorem, we know that if $M$ is complete, then any pair of points in $M$ can be joined by a minimal geodesic. Moreover, $(M, d)$ is a complete metric space, and bounded closed subsets are compact [27].
The set $A \subseteq M$ is said to be convex if it contains a geodesic segment $\gamma$ whenever it contains the end points of $\gamma$, i.e., $\gamma((1-t)a + tb) \in A$ whenever $x = \gamma(a)$ and $y = \gamma(b)$ are in $A$ and $t \in [0, 1]$. Recall that, for a point $x \in M$, the convexity radius at $x$ is defined by
$$r(x) = \sup\big\{r > 0 : \text{each ball in } B(x, r) \text{ is strongly convex and each geodesic in } B(x, r) \text{ is minimal}\big\}.$$

2.2. Properties of Variational Inequalities

In the following, we will briefly examine the established definitions related to pseudomonotone vector fields and important properties of variational inequalities on Riemannian manifolds. These definitions and properties will form the basis for our subsequent analysis (see [1,6]).
Variational Inequalities on Riemannian Manifolds (referred to as (RVI)): Let $A$ be a nonempty subset of the Riemannian manifold $M$ and let $V$ be a vector field on $A$. The variational inequality on the Riemannian manifold $M$ consists of finding $\bar{x} \in A$ such that
$$\langle V(\bar{x}),\, \dot{\gamma}_{\bar{x}y}(0)\rangle \ge 0, \quad \text{for each } y \in A \text{ and each } \gamma_{\bar{x}y} \in \Gamma^A_{\bar{x},y}. \tag{1}$$
Definition 3
([6]). Let $A \subseteq M$ be a closed convex set. A vector field $V$ on $A$ is said to be the following:
(1) monotone if, for all $x, y \in A$ and each $\gamma_{xy} \in \Gamma^A_{x,y}$,
$$\langle V(x),\, \dot{\gamma}_{xy}(0)\rangle - \langle V(y),\, \dot{\gamma}_{xy}(1)\rangle \le 0;$$
(2) strictly monotone if, for all $x, y \in A$ with $x \ne y$ and each $\gamma_{xy} \in \Gamma^A_{x,y}$,
$$\langle V(x),\, \dot{\gamma}_{xy}(0)\rangle - \langle V(y),\, \dot{\gamma}_{xy}(1)\rangle < 0;$$
(3) pseudomonotone if, for all $x, y \in A$ and each $\gamma_{xy} \in \Gamma^A_{x,y}$,
$$\langle V(x),\, \dot{\gamma}_{xy}(0)\rangle \ge 0 \ \Longrightarrow\ \langle V(y),\, \dot{\gamma}_{xy}(1)\rangle \ge 0.$$
Let $P_A$ denote the metric projection onto the set $A \subseteq M$. For each point $x \in M$, the projection $P_A(x)$ is defined as follows:
$$P_A(x) = \big\{\bar{x} \in A : d(x, \bar{x}) = \inf_{z \in A} d(x, z)\big\}.$$
Then, $P_A(x) \neq \emptyset$ for each $x \in M$ if $A$ is closed. In general, $P_A$ is a set-valued map, and it has the following basic properties.
Theorem 2
([6]). Let $A \subseteq M$ be a nonempty closed set, and let $\bar{x} \in A$. Let $x \in M$ and let $\gamma_{x\bar{x}} \in \Gamma_{x,\bar{x}}$ be a minimal geodesic. If $\bar{x} \in P_A(x)$, then
$$\langle \dot{\gamma}_{x\bar{x}}(1),\, \dot{\gamma}_{\bar{x}z}(0)\rangle \ge 0 \quad \text{for each } z \in A \text{ and each } \gamma_{\bar{x}z} \in \Gamma^A_{\bar{x},z}.$$
Proposition 1
([6]). Let $A$ be a locally convex closed subset of $M$. Then, there exists an open subset $U$ of $M$ with $A \subseteq U$ such that $P_A$ is single-valued and Lipschitz continuous on $U$.
Proposition 2
([14]). The metric projection onto a closed and convex set $A \subseteq M$ is a nonexpansive mapping, i.e.,
$$d\big(P_A(x), P_A(y)\big) \le d(x, y), \qquad \forall\, x, y \in M.$$
Proposition 3
([18]). Let $A \subseteq M$ be a nonempty closed convex set. Then, there holds
$$d^2\big(P_A(x), \bar{x}\big) \le d^2(x, \bar{x}) - d^2\big(x, P_A(x)\big), \qquad \forall\, x \in M \text{ and } \bar{x} \in A.$$
Theorem 3
([6]). Let $A \subseteq M$ be a nonempty weakly convex set, let $\bar{x} \in A$, and let $V$ be a vector field on $A$. Then, there exists $\bar{r}_{\bar{x}} > 0$ such that, for each $r \in (0, \bar{r}_{\bar{x}})$,
$$\bar{x} \in P_A\big(\exp_{\bar{x}}(-rV(\bar{x}))\big) \iff \bar{x} \text{ is a solution of (1)}.$$
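In the Euclidean case, Theorem 3 reduces to the classical fixed-point characterization $\bar{x}$ solves the variational inequality if and only if $\bar{x} = P_A(\bar{x} - rV(\bar{x}))$ for $r > 0$. The following toy Python check (our own illustration; the affine field $V$ and the box $A$ are assumptions made for this example) confirms this on $A = [0,1]^2$.

```python
import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)              # A = [0,1]^2
V = lambda x: np.array([x[0] - 0.25, x[1] + 1.0])  # strongly monotone affine field

# x_bar = (0.25, 0) solves the VI: V(x_bar) = (0, 1) and
# <V(x_bar), y - x_bar> = y_2 >= 0 for every y in A.
x_bar = np.array([0.25, 0.0])
r = 0.1
print(np.allclose(x_bar, proj(x_bar - r * V(x_bar))))  # True: fixed point of the map

x = np.array([0.8, 0.5])                            # not a solution
print(np.allclose(x, proj(x - r * V(x))))           # False
```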
Recall that a geodesic triangle $\triangle(p_1p_2p_3)$ of a Riemannian manifold is a set consisting of three points $p_1$, $p_2$, $p_3$ and three minimal geodesics joining these points. The most important property for our purposes is described in Proposition 4 below, taken from [27].
Proposition 4
(Comparison theorem for triangles). Let $\triangle(p_1p_2p_3)$ be a geodesic triangle. Denote, for each $i = 1, 2, 3\ (\mathrm{mod}\ 3)$, by $\gamma_i : [0, l_i] \to M$ the geodesic joining $p_i$ to $p_{i+1}$, and set $l_i := L(\gamma_i)$ and $\alpha_i := \angle\big(\dot{\gamma}_i(0),\, -\dot{\gamma}_{i-1}(l_{i-1})\big)$. Then,
(i) $\alpha_1 + \alpha_2 + \alpha_3 \le \pi$;
(ii) $l_i^2 + l_{i+1}^2 - 2\, l_i\, l_{i+1}\cos\alpha_{i+1} \le l_{i-1}^2$;
(iii) $l_{i+1}\cos\alpha_{i+2} + l_i\cos\alpha_i \ge l_{i+2}$.
If the Riemannian manifold possesses non-negative curvature, the following conclusion can be drawn.
Proposition 5
([2]). In a complete finite-dimensional Riemannian manifold with nonnegative sectional curvature, for two geodesic segments $\gamma_1, \gamma_2$ emanating from a common point, we have
$$l_3^2 \le l_1^2 + l_2^2 - 2\, l_1 l_2 \cos\alpha,$$
where $l_i$ denotes the length of $\gamma_i$ $(i = 1, 2)$, $l_3 = d\big(\gamma_1(l_1), \gamma_2(l_2)\big)$, and $\alpha = \angle\big(\dot{\gamma}_1(0), \dot{\gamma}_2(0)\big)$.
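Proposition 5 can be checked numerically. The sketch below (an added illustration; the unit sphere $S^2$, a manifold of constant positive, hence nonnegative, sectional curvature, is chosen here as a simple test case) samples random geodesic hinges and confirms the law-of-cosines inequality.

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    p = rng.normal(size=3); p /= np.linalg.norm(p)       # point on the unit sphere
    u = rng.normal(size=3); u -= u.dot(p) * p; u /= np.linalg.norm(u)  # tangent dir.
    v = rng.normal(size=3); v -= v.dot(p) * p; v /= np.linalg.norm(v)  # tangent dir.
    l1, l2 = rng.uniform(0.1, np.pi - 0.1, size=2)       # lengths of minimal geodesics
    q1 = np.cos(l1) * p + np.sin(l1) * u                 # gamma_1(l1)
    q2 = np.cos(l2) * p + np.sin(l2) * v                 # gamma_2(l2)
    l3 = np.arccos(np.clip(q1.dot(q2), -1.0, 1.0))       # spherical distance d(q1, q2)
    alpha = np.arccos(np.clip(u.dot(v), -1.0, 1.0))      # angle between the geodesics at p
    assert l3 ** 2 <= l1 ** 2 + l2 ** 2 - 2 * l1 * l2 * np.cos(alpha) + 1e-9
print("Proposition 5 holds on all sampled spherical hinges.")
```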

3. The Algorithm

Firstly, we introduce the bilevel variational inequality on a Riemannian manifold $M$ with nonnegative sectional curvature. Subsequently, we present the algorithm devised to solve this problem.
Bilevel Variational Inequalities on Riemannian Manifolds (referred to as (RBVI)): Let $A \subseteq M$ be a nonempty closed convex set and let $U$ be a vector field on $\mathrm{Sol}(V, A)$. Find $\bar{x} \in \mathrm{Sol}(V, A)$ such that
$$\langle U(\bar{x}),\, \dot{\gamma}_{\bar{x}x}(0)\rangle \ge 0, \quad \text{for each } x \in \mathrm{Sol}(V, A) \text{ and each } \gamma_{\bar{x}x} \in \Gamma^{\mathrm{Sol}(V,A)}_{\bar{x},x}, \tag{9}$$
where $V$ is a vector field on $A$, and $\mathrm{Sol}(V, A)$ denotes the set of all solutions of the following lower-level variational inequality: find $\bar{y} \in A$ such that
$$\langle V(\bar{y}),\, \dot{\gamma}_{\bar{y}y}(0)\rangle \ge 0, \quad \text{for each } y \in A \text{ and each } \gamma_{\bar{y}y} \in \Gamma^A_{\bar{y},y}. \tag{10}$$
In what follows, we suppose that the vector fields V and U satisfy the following conditions:
(H1) $V$ is a continuous and pseudomonotone vector field on $A$.
(H2) $U$ is a continuous and pseudomonotone vector field on $\mathrm{Sol}(V, A)$.
(H3) The solution set $\mathrm{Sol}(V, A)$ of the lower-level problem and the solution set $\mathrm{Sol(RBVI)}$ of problem (RBVI) are nonempty.
Under the above assumptions, we extend Korpelevich’s method for Problems (9) and (10) on Riemannian manifolds.
According to Theorem 3, we know that $x$ is a solution to Problem (1) if and only if $x \in P_A(\exp_x(-rV(x)))$. So, we may generate a sequence $\{x_k\}$ through
$$x_{k+1} = P_A\big(\exp_{x_k}(-\beta_k V(x_k))\big).$$
Korpelevich suggested an algorithm in Euclidean space of the form
$$y_k = P_A\big(x_k - \alpha_k V(x_k)\big), \tag{11}$$
and
$$x_{k+1} = P_A\big(x_k - \alpha_k V(y_k)\big). \tag{12}$$
If $V$ is Lipschitz continuous with constant $L$ and the variational inequality problem has a solution, then the sequence generated by (11) and (12) converges to a solution of the variational inequality problem, provided that $\alpha_k = \alpha \in (0, 1/L)$; see [15]. In the case when $V$ is not Lipschitz continuous, or when the Lipschitz constant is not easy to compute, the extragradient method requires a line search procedure to compute the step size.
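For the reader's convenience, here is a minimal Python sketch of the Euclidean scheme (11) and (12) (an illustration added here; the skew-symmetric operator, the box constraint, and the step size are our own toy choices). With a monotone but non-cocoercive $V$, the plain projection method may fail, while the extragradient iteration converges for $\alpha \in (0, 1/L)$.

```python
import numpy as np

def korpelevich(V, proj, x0, alpha, iters=200):
    # Euclidean extragradient scheme (11)-(12):
    #   y_k = P_A(x_k - alpha V(x_k)),  x_{k+1} = P_A(x_k - alpha V(y_k)).
    x = x0
    for _ in range(iters):
        y = proj(x - alpha * V(x))
        x = proj(x - alpha * V(y))
    return x

# Toy data: V(x) = M x + q with M skew-symmetric (monotone, not cocoercive),
# A = [0,1]^2; the VI solution is (0.5, 0.5), where V vanishes.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-0.5, 0.5])
V = lambda x: M @ x + q
proj = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box [0,1]^2
L = np.linalg.norm(M, 2)                # Lipschitz constant of V
x_star = korpelevich(V, proj, x0=np.array([1.0, 0.0]), alpha=0.9 / L)
print("approximate solution:", x_star)  # expected to be close to (0.5, 0.5)
```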
Building upon this foundation, we give the extragradient method for the lower-level variational inequality on a Riemannian manifold as follows.
Take $\delta \in (0, 1)$ and $\bar{\beta}, \tilde{\beta}$ satisfying $0 < \bar{\beta} \le \tilde{\beta}$, and a sequence $\{\beta_k\} \subset [\bar{\beta}, \tilde{\beta}]$. The method is initialized with any $x_0 \in A$, and the iterative step is as follows.
Given $x_k \in A$, define $y_k := P_A(\exp_{x_k}(-\beta_k V(x_k)))$ and $\gamma_k(t) := \exp_{x_k}(t\,\dot{\gamma}_{x_ky_k}(0))$; if $x_k = y_k$, then stop. Otherwise, take
$$j(k) := \min\Big\{j \in \mathbb{N}_+ : \big\langle V(\gamma_k(2^{-j})),\, \dot{\gamma}_k(2^{-j})\big\rangle \le -\tfrac{\delta}{\beta_k}\, d(x_k, y_k)\Big\}, \qquad \alpha_k = 2^{-j(k)}, \qquad \mu_k := \gamma_k\big(2^{-j(k)}\big),$$
$$H_k := \big\{x \in M : \langle V(\mu_k),\, \dot{\gamma}_{x\mu_k}(1)\rangle \ge 0,\ \forall\, \gamma_{x\mu_k} \in \Gamma_{x,\mu_k}\big\}, \qquad z_k := P_A\big(P_{H_k}(x_k)\big).$$
The following algorithm for solving problem (RBVI) contains two loops. In each iteration of the outer loop, we apply one step of Korpelevich's method to the lower-level variational inequality on the Riemannian manifold (the line search above forms the inner loop). Subsequently, starting from the iterate $z_k$ obtained in this step, we compute $x_{k+1} := \exp_{z_k}(-\lambda_k U(z_k))$ for the upper-level variational inequality on the Riemannian manifold. With this, we can now present the algorithm for solving the (RBVI) problem using Korpelevich's method.
Algorithm 1.
Choose $x_0 \in A$, set $k = 0$, and choose $\delta \in (0, 1)$, $\bar{\beta}, \tilde{\beta}$ satisfying $0 < \bar{\beta} \le \tilde{\beta}$, and positive sequences $\{\beta_k\}$, $\{\lambda_k\}$, and $\{\epsilon_k\}$, such that
$$\{\beta_k\} \subset [\bar{\beta}, \tilde{\beta}], \qquad \lim_{k\to\infty}\lambda_k = 0, \qquad \lim_{k\to\infty}\epsilon_k = 0, \qquad \sum_{k=0}^{\infty}\epsilon_k < \infty.$$
Step 1.
Given $x_k \in A$, compute
$$y_k := P_A\big(\exp_{x_k}(-\beta_k V(x_k))\big),$$
and define $\gamma_k(t) := \exp_{x_k}(t\,\dot{\gamma}_{x_ky_k}(0))$.
Let
$$j(k) := \min\Big\{j \in \mathbb{N}_+ : \big\langle V(\gamma_k(2^{-j})),\, \dot{\gamma}_k(2^{-j})\big\rangle \le -\tfrac{\delta}{\beta_k}\, d(x_k, y_k)\Big\},$$
and
$$\mu_k := \gamma_k\big(2^{-j(k)}\big).$$
Define
$$H_k := \big\{x \in M : \langle V(\mu_k),\, \dot{\gamma}_{x\mu_k}(1)\rangle \ge 0,\ \forall\, \gamma_{x\mu_k} \in \Gamma_{x,\mu_k}\big\},$$
and compute
$$\omega_k := P_{H_k}(x_k),$$
$$z_k := P_A(\omega_k).$$
Step 2.
Set $x_{k+1} := \exp_{z_k}(-\lambda_k U(z_k))$, where
$$\lambda_k \in \Big(0,\ \frac{\epsilon_k}{\|U(z_k)\|}\Big].$$
Then, increase k by 1, and go to Step 1.
Remark 1.
If M = R n , then Algorithm 1 reduces to the algorithm proposed by Anh [10].
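To illustrate the structure of Algorithm 1, the following Python sketch specializes it to the flat case $M = \mathbb{R}^2$ of Remark 1, where $\exp_x(v) = x + v$, geodesics are line segments, and $H_k$ becomes a half-space with an explicit projection. Everything problem-specific here (the box $A$, the fields $V$ and $U$, the parameter choices, and the Armijo-type test, which follows the standard Euclidean hyperplane-projection rule) is an assumption made for this illustration rather than a literal transcription of the algorithm.

```python
import numpy as np

# --- Toy problem data (our own illustrative choices) ---
proj_A = lambda x: np.clip(x, -1.0, 1.0)          # A = [-1,1]^2, P_A is a clip
V = lambda x: np.array([x[0], 0.0])               # lower-level field; Sol(V,A) = {0} x [-1,1]
U = lambda x: x - np.array([0.0, 0.3])            # upper-level field; bilevel solution (0, 0.3)

delta, beta = 0.5, 1.0                            # delta in (0,1), beta_k constant
x = np.array([1.0, -0.5])                         # x_0 in A

for k in range(2000):
    eps_k = 1.0 / (k + 1) ** 1.01                 # positive, eps_k -> 0, sum eps_k < inf
    # Step 1: lower-level extragradient / hyperplane step (flat case exp_x(v) = x + v).
    y = proj_A(x - beta * V(x))
    if np.allclose(x, y):                         # x already solves the lower-level VI
        z = x
    else:
        # Armijo-type search (Euclidean hyperplane-projection test, standing in
        # for the line search of Step 1).
        for j in range(1, 60):
            mu = x + 2.0 ** (-j) * (y - x)
            if V(mu) @ (x - y) >= (delta / beta) * np.dot(x - y, x - y):
                break
        # Project x_k onto H_k = {w : <V(mu_k), w - mu_k> <= 0}, then back onto A.
        v = V(mu)
        omega = x - max(0.0, v @ (x - mu)) / (v @ v) * v
        z = proj_A(omega)
    # Step 2: damped step along -U(z_k) with lambda_k <= eps_k / ||U(z_k)||.
    lam = eps_k / max(np.linalg.norm(U(z)), 1.0)
    x = z - lam * U(z)

print("final iterate:", x)   # expected to be close to the bilevel solution (0, 0.3)
```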

4. Convergence Results

In this section, we consider a nonempty closed convex subset A of a Riemannian manifold M with nonnegative sectional curvature. The following lemmas and theorems demonstrate the convergence of the algorithm for the (RBVI) problem.
Before proceeding, let us review the concept of quasi-Fejér convergence.
Definition 4
([2]). Let $(X, d)$ be a complete metric space and let $A \subseteq X$ be a nonempty set. A sequence $\{x_k\} \subset X$, $k \ge 0$, is said to be quasi-Fejér convergent to $A$ if, for every $y \in A$, there exists a sequence $\{\epsilon_k\} \subset \mathbb{R}$ such that $\epsilon_k \ge 0$, $\sum_{k=0}^{\infty}\epsilon_k < +\infty$, and the following condition holds:
$$d^2(x_{k+1}, y) \le d^2(x_k, y) + \epsilon_k.$$
Lemma 1.
In a complete metric space $(X, d)$, if $\{x_k\} \subset X$ is quasi-Fejér convergent to a nonempty set $A \subseteq X$, then $\{x_k\}$ is bounded. If, furthermore, a cluster point of $\{x_k\}$ belongs to $A$, then $\{x_k\}$ converges to a point of $A$.
Proof. 
Analogous to Burachik et al. [28], we replace the Euclidean norm with the Riemannian distance d . □
Lemma 2.
Suppose the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Let $V$ be a continuous and pseudomonotone vector field on $A$, and let $\bar{x} \in \mathrm{Sol}(V, A)$. Then, we have that
$$d(z_k, \bar{x}) \le d(x_k, \bar{x}).$$
Proof. 
Let $\bar{x} \in \mathrm{Sol}(V, A)$. That means
$$\langle V(\bar{x}),\, \dot{\gamma}_{\bar{x}x}(0)\rangle \ge 0, \quad \text{for each } x \in A \text{ and each } \gamma_{\bar{x}x} \in \Gamma^A_{\bar{x},x}.$$
Due to the pseudomonotonicity of $V$, we can infer that
$$\langle V(x),\, \dot{\gamma}_{\bar{x}x}(1)\rangle \ge 0, \quad \text{for each } x \in A.$$
By utilizing the fact that $\mu_k \in A$, we can proceed to infer that
$$\langle V(\mu_k),\, \dot{\gamma}_{\bar{x}\mu_k}(1)\rangle \ge 0, \quad \text{and hence } \bar{x} \in H_k.$$
Since $\omega_k = P_{H_k}(x_k)$, we can apply Proposition 3 to conclude that
$$d^2(\omega_k, \bar{x}) = d^2\big(P_{H_k}(x_k), \bar{x}\big) \le d^2(x_k, \bar{x}) - d^2\big(x_k, P_{H_k}(x_k)\big) \le d^2(x_k, \bar{x}).$$
Similarly, taking $z_k = P_A(\omega_k)$, we obtain
$$d^2(z_k, \bar{x}) = d^2\big(P_A(\omega_k), \bar{x}\big) \le d^2(\omega_k, \bar{x}) - d^2\big(\omega_k, P_A(\omega_k)\big) \le d^2(\omega_k, \bar{x}) \le d^2(x_k, \bar{x}).$$
This completes the proof. □
Lemma 3.
Under Assumptions (H1)–(H3), the sequence $\{x_k\}$ generated by Algorithm 1 is bounded.
Proof. 
Referring to Algorithm 1, we can observe that
$$x_{k+1} := \exp_{z_k}(-\lambda_k U(z_k)).$$
Let $\bar{x}$ be a solution to the problem (RBVI). Suppose $\gamma_{z_k\bar{x}} : [0, 1] \to M$ is a minimal geodesic segment linking $z_k$ to $\bar{x}$, and $\gamma_{z_kx_{k+1}} : [0, 1] \to M$ is the geodesic segment linking $z_k$ to $x_{k+1}$ such that $\dot{\gamma}_{z_kx_{k+1}}(0) = -\lambda_k U(z_k)$; set $\alpha = \angle\big(\dot{\gamma}_{z_k\bar{x}}(0), \dot{\gamma}_{z_kx_{k+1}}(0)\big)$.
According to Proposition 5, we have that
$$d^2(x_{k+1}, \bar{x}) \le d^2(z_k, \bar{x}) + \lambda_k^2\|U(z_k)\|^2 - 2\lambda_k\|U(z_k)\|\, d(z_k, \bar{x})\cos\alpha. \tag{21}$$
From the fact that
$$\langle \dot{\gamma}_{z_kx_{k+1}}(0),\, \dot{\gamma}_{z_k\bar{x}}(0)\rangle = \lambda_k\|U(z_k)\|\, d(z_k, \bar{x})\cos\alpha,$$
and using Inequality (21), we obtain
$$d^2(x_{k+1}, \bar{x}) \le d^2(z_k, \bar{x}) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_k\bar{x}}(0)\rangle. \tag{22}$$
Since $\bar{x}$ is a solution to the problem (RBVI), we have $\bar{x} \in \mathrm{Sol}(V, A)$. From Lemma 2, we see that
$$d(z_k, \bar{x}) \le d(x_k, \bar{x}),$$
and combining this with Inequality (22), we can conclude that
$$d^2(x_{k+1}, \bar{x}) \le d^2(z_k, \bar{x}) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_k\bar{x}}(0)\rangle \le d^2(x_k, \bar{x}) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_k\bar{x}}(0)\rangle.$$
It follows from the pseudomonotonicity of $U$ on $\mathrm{Sol}(V, A)$ that
$$\langle U(z_k),\, \dot{\gamma}_{z_k\bar{x}}(0)\rangle \le 0.$$
So, we obtain
$$d^2(x_{k+1}, \bar{x}) \le d^2(x_k, \bar{x}) + \lambda_k^2\|U(z_k)\|^2. \tag{23}$$
Recalling that $\lambda_k \in (0, \epsilon_k/\|U(z_k)\|]$, we know that
$$\sum_{k=0}^{\infty}\lambda_k^2\|U(z_k)\|^2 \le \sum_{k=0}^{\infty}\epsilon_k^2 < \infty,$$
where the last inequality holds because $\epsilon_k \to 0$ and $\sum_{k=0}^{\infty}\epsilon_k < \infty$.
Combining this with Inequality (23), the sequence $\{x_k\}$ is quasi-Fejér convergent to the solution set of problem (RBVI), and hence $\{x_k\}$ is bounded. □
Lemma 4.
Suppose that Assumptions (H1)–(H3) hold, and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then, we have
$$d^2(x_{k+1}, x_k) \le d^2(z_k, x_k) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_kx_k}(0)\rangle, \tag{24}$$
and
$$\lim_{k\to\infty} d(x_{k+1}, x_k) = \lim_{k\to\infty} d(z_k, x_k) = 0.$$
Proof. 
From Lemma 3 and Proposition 5, let $\gamma_{z_kx_k} : [0, 1] \to M$ be a minimal geodesic segment linking $z_k$ to $x_k$, and let $\gamma_{z_kx_{k+1}} : [0, 1] \to M$ be the geodesic segment linking $z_k$ to $x_{k+1}$ with $\dot{\gamma}_{z_kx_{k+1}}(0) = -\lambda_k U(z_k)$, where $\alpha = \angle\big(\dot{\gamma}_{z_kx_k}(0), \dot{\gamma}_{z_kx_{k+1}}(0)\big)$. We have
$$d^2(x_{k+1}, x_k) \le d^2(z_k, x_k) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_kx_k}(0)\rangle.$$
This is the desired result (24).
By Algorithm 1 and Proposition 3, we have
$$d^2(z_k, x_k) = d^2\big(P_A(\omega_k), x_k\big) \le d^2(\omega_k, x_k) - d^2\big(\omega_k, P_A(\omega_k)\big) \le d^2(\omega_k, x_k).$$
Applying Lemma 3, the sequence $\{x_k\}$ is quasi-Fejér convergent, so $\{d(x_k, \bar{x})\}$ is a convergent sequence. Since $\omega_k = P_{H_k}(x_k)$, the estimates in the proofs of Lemmas 2 and 3 give $\lim_{k\to\infty} d(\omega_k, x_k) = 0$; then, we see that
$$\lim_{k\to\infty} d(z_k, x_k) = 0.$$
Using Inequality (26) and $\lim_{k\to\infty}\lambda_k = 0$, we obtain
$$\lim_{k\to\infty} d(x_{k+1}, x_k) = \lim_{k\to\infty} d(z_k, x_k) = 0. \qquad \square$$
Theorem 4.
Suppose that Assumptions (H1)–(H3) hold. Then, the two sequences $\{x_k\}$ and $\{z_k\}$ generated by Algorithm 1 converge to the same solution of Problem (RBVI).
Proof. 
By Lemma 1, we only need to prove that the cluster points of $\{x_k\}$ and $\{z_k\}$ belong to $\mathrm{Sol(RBVI)}$. Let $\bar{x}$ and $\bar{z}$ be cluster points of $\{x_k\}$ and $\{z_k\}$, respectively; once we show that $\bar{x}, \bar{z} \in \mathrm{Sol(RBVI)}$, Lemma 1 yields $\lim_{k\to\infty} x_k = \bar{x}$ and $\lim_{k\to\infty} z_k = \bar{z}$. So, we will have to show that
$$\lim_{k\to\infty} d\big(x_k,\, P_{\mathrm{Sol}(V,A)}(\exp_{x_k}(-\lambda_k U(x_k)))\big) = 0,$$
and
$$\lim_{k\to\infty} d\big(z_k,\, P_{\mathrm{Sol}(V,A)}(\exp_{z_k}(-\lambda_k U(z_k)))\big) = 0.$$
Now, using Proposition 3 and Lemma 4, we have
$$\begin{aligned}
\lim_{k\to\infty} d^2\big(x_k,\, P_{\mathrm{Sol}(V,A)}(\exp_{x_k}(-\lambda_k U(x_k)))\big)
&= \lim_{k\to\infty} d^2\big(x_{k+1},\, P_{\mathrm{Sol}(V,A)}(\exp_{x_k}(-\lambda_k U(x_k)))\big)\\
&= \lim_{k\to\infty} d^2\big(\exp_{z_k}(-\lambda_k U(z_k)),\, P_{\mathrm{Sol}(V,A)}(\exp_{x_k}(-\lambda_k U(x_k)))\big)\\
&\le \lim_{k\to\infty} d^2\big(\exp_{z_k}(-\lambda_k U(z_k)),\, \exp_{x_k}(-\lambda_k U(x_k))\big).
\end{aligned}$$
Applying Proposition 5, we obtain
$$d^2\big(\exp_{z_k}(-\lambda_k U(z_k)),\, \exp_{x_k}(-\lambda_k U(x_k))\big) \le d^2\big(x_k,\, \exp_{z_k}(-\lambda_k U(z_k))\big) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\big\langle U(z_k),\, \dot{\gamma}_{x_k\,\exp_{z_k}(-\lambda_k U(z_k))}(0)\big\rangle,$$
and
$$d^2\big(x_k,\, \exp_{z_k}(-\lambda_k U(z_k))\big) \le d^2(x_k, z_k) + \lambda_k^2\|U(z_k)\|^2 + 2\lambda_k\langle U(z_k),\, \dot{\gamma}_{z_kx_k}(0)\rangle.$$
Combining the above inequalities with $\lim_{k\to\infty}\lambda_k = 0$ and $\lim_{k\to\infty} d(z_k, x_k) = 0$, we obtain
$$\lim_{k\to\infty} d\big(x_k,\, P_{\mathrm{Sol}(V,A)}(\exp_{x_k}(-\lambda_k U(x_k)))\big) = 0,$$
and, so, $\lim_{k\to\infty} x_k = \bar{x} \in \mathrm{Sol(RBVI)}$. In a similar way, we can show that
$$\lim_{k\to\infty} d\big(z_k,\, P_{\mathrm{Sol}(V,A)}(\exp_{z_k}(-\lambda_k U(z_k)))\big) = 0.$$
Thus, $\lim_{k\to\infty} z_k = \bar{z} \in \mathrm{Sol(RBVI)}$. By Lemma 4, we have $\lim_{k\to\infty} d(z_k, x_k) = 0$. So, the two sequences $\{x_k\}$ and $\{z_k\}$ generated by Algorithm 1 converge to the same solution of Problem (RBVI). □
The following example shows the effectiveness of our algorithm.
Example 1.
Let $X$ denote the diagonal matrix $X = \mathrm{diag}(x_1, \ldots, x_n)$. We consider in this section the particular case $M = \mathbb{R}^n_{++}$ endowed with the metric $g = X^{-2}$. This space is a connected and complete finite-dimensional Riemannian manifold with null sectional curvature. For any vectors $u$ and $v$ in the tangent space at $x \in M$, we have
$$\langle u, v\rangle = \sum_{i=1}^n \frac{u_i v_i}{x_i^2}, \qquad u = (u_i) \in T_xM,\ v = (v_i) \in T_xM,\ x = (x_i) \in M.$$
It is easy to see that the (minimizing) geodesic curve $t \mapsto \gamma(t)$ verifying $\gamma(0) = x$ and $\dot{\gamma}(0) = v$ is given by
$$\mathbb{R} \ni t \mapsto \Big(x_1 e^{\frac{v_1}{x_1}t}, \ldots, x_n e^{\frac{v_n}{x_n}t}\Big).$$
It then follows that the exponential map can be expressed as
$$\exp_x(tv) = \Big(x_1 e^{\frac{v_1}{x_1}t}, \ldots, x_n e^{\frac{v_n}{x_n}t}\Big).$$
Also, the (minimizing) geodesic segment $\gamma : [0, 1] \to M$ joining the points $x$ and $y$, i.e., $\gamma(0) = x$, $\gamma(1) = y$, is given by $\gamma_i(t) = x_i^{1-t} y_i^{t}$, $i = 1, 2, \ldots, n$. Thus, the distance $d$ on the metric space $(\mathbb{R}^n_{++}, X^{-2})$ is defined by
$$d(x, y) = \int_0^1 \|\dot{\gamma}(t)\|_{\gamma(t)}\, dt = \int_0^1 \sqrt{\sum_{i=1}^n \Big(\frac{\dot{\gamma}_i(t)}{\gamma_i(t)}\Big)^2}\, dt = \sqrt{\sum_{i=1}^n \Big(\ln\frac{x_i}{y_i}\Big)^2}.$$
To obtain the expression of the inverse exponential map, we write
$$y = \exp_x\big(\exp_x^{-1}y\big) = \Big(x_1 e^{\frac{(\exp_x^{-1}y)_1}{x_1}}, \ldots, x_n e^{\frac{(\exp_x^{-1}y)_n}{x_n}}\Big).$$
Therefore, we obtain $\exp_x^{-1}y = \Big(x_1\ln\frac{y_1}{x_1}, \ldots, x_n\ln\frac{y_n}{x_n}\Big)$.
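The closed-form expressions above are straightforward to implement. The short Python sketch below (added for illustration; the function names are ours) codes $\exp_x$, $\exp_x^{-1}$, and $d$ on $(\mathbb{R}^n_{++}, X^{-2})$ and checks that they are mutually consistent.

```python
import numpy as np

# Geometry of (R^n_++, X^{-2}); formulas as derived above.
def exp_map(x, v):
    # exp_x(v) = (x_i * e^{v_i / x_i})_i
    return x * np.exp(v / x)

def log_map(x, y):
    # exp_x^{-1}(y) = (x_i * ln(y_i / x_i))_i
    return x * np.log(y / x)

def dist(x, y):
    # d(x, y) = sqrt(sum_i ln(x_i / y_i)^2)
    return np.sqrt(np.sum(np.log(x / y) ** 2))

rng = np.random.default_rng(0)
x, y = rng.uniform(0.5, 3.0, size=3), rng.uniform(0.5, 3.0, size=3)

print(np.allclose(exp_map(x, log_map(x, y)), y))                  # exp_x(exp_x^{-1} y) = y
print(np.isclose(dist(x, y), np.linalg.norm(log_map(x, y) / x)))  # d(x,y) = ||exp_x^{-1} y||_x
```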
Applying Algorithm 1 to the case $(\mathbb{R}^n_{++}, X^{-2})$, we can translate the iterative scheme into the following one. Having $x_k$, let
$$y_k := P_A\big(\exp_{x_k}(-\beta_k V(x_k))\big) = P_A\Big(x_1^k e^{-\frac{\beta_k (V(x_k))_1}{x_1^k}}, \ldots, x_n^k e^{-\frac{\beta_k (V(x_k))_n}{x_n^k}}\Big),$$
and
$$\gamma_k(t) = \exp_{x_k}\big(t\,\dot{\gamma}_{x_ky_k}(0)\big) = \Big(x_1^k e^{\frac{t(\dot{\gamma}_{x_ky_k}(0))_1}{x_1^k}}, \ldots, x_n^k e^{\frac{t(\dot{\gamma}_{x_ky_k}(0))_n}{x_n^k}}\Big).$$
Thus, we have
$$\dot{\gamma}_k(t) = \Big((\dot{\gamma}_{x_ky_k}(0))_1\, e^{\frac{t(\dot{\gamma}_{x_ky_k}(0))_1}{x_1^k}}, \ldots, (\dot{\gamma}_{x_ky_k}(0))_n\, e^{\frac{t(\dot{\gamma}_{x_ky_k}(0))_n}{x_n^k}}\Big),$$
and
$$d(x_k, y_k) = \sqrt{\sum_{i=1}^n \Big(\ln\frac{x_i^k}{y_i^k}\Big)^2},$$
respectively. Let
$$j(k) = \min\Big\{j \in \mathbb{N}_+ : \big\langle V(\gamma_k(2^{-j})),\, \dot{\gamma}_k(2^{-j})\big\rangle \le -\tfrac{\delta}{\beta_k}\, d(x_k, y_k)\Big\}, \qquad \mu_k = \gamma_k\big(2^{-j(k)}\big),$$
$$H_k = \big\{x \in M : \langle V(\mu_k),\, \dot{\gamma}_{x\mu_k}(1)\rangle \ge 0,\ \forall\, \gamma_{x\mu_k} \in \Gamma_{x,\mu_k}\big\}, \qquad \omega_k = P_{H_k}(x_k), \qquad z_k = P_A(\omega_k), \qquad x_{k+1} = \exp_{z_k}\big(-\lambda_k U(z_k)\big),$$
where $\lambda_k \in \big(0, \epsilon_k/\|U(z_k)\|\big]$, $\lim_{k\to\infty}\lambda_k = 0$, $\lim_{k\to\infty}\epsilon_k = 0$, and $\sum_{k=0}^{\infty}\epsilon_k < \infty$. Now, consider $M = \mathbb{R}^2_{++}$ and $X = \mathrm{diag}(x_1, x_2)$. Taking $x = (x_1, x_2) \in M$, we obtain $T_xM = \mathbb{R}^2$. Let
$$A = \big\{x \in M : (x_1 - 1)^2 + x_2^2 \le 5,\ (x_1 + 1)^2 + x_2^2 \le 5,\ x_2 \ge 1\big\},$$
which is obviously a closed and convex subset of $M = \mathbb{R}^2_{++}$, and $f, h : A \to \mathbb{R}$ are defined as
$$f(x) = 1 + x_1^2 x_2^2, \qquad h(x) = \tfrac{1}{2}\big(|x_1^2 + x_2^2 - 2|\,x_2 - 2x_2 + (x_1^2 + x_2^2)\,x_2\big) + 2x_2,$$
and then $V(x)$ and $U(x)$ are given by $V(x) = \mathrm{grad}\, f(x) = (2x_1,\ 2x_1^2x_2^3)$, and
$$U(x) = \mathrm{grad}\, h(x) = \begin{cases} (2x_1x_2,\ x_2^2 - x_1^2), & x_1^2 + x_2^2 > 2;\\ \big\{(2tx_1x_2,\ t(x_1^2 - x_2^2 + 2) - 2) : t \in [0,1]\big\}, & x_1^2 + x_2^2 = 2;\\ (0, 2), & x_1^2 + x_2^2 < 2.\end{cases}$$
We know that $V$ and $U$ are pseudomonotone vector fields. One can check that the bilevel variational inequality on $M = \mathbb{R}^2_{++}$ has a solution $(0, 2)$. Hence, the sequences $\{x_k\}$ and $\{z_k\}$ generated by the method presented in this paper converge to a solution of the bilevel variational inequality.

5. Conclusions

In this work, we introduce the concept of pseudomonotone vector fields, and delve into bilevel variational inequalities on Riemannian manifolds. Subsequently, we propose a variant of Korpelevich’s method specifically tailored to solve bilevel variational inequalities on Riemannian manifolds characterized by non-negative sectional curvature and pseudomonotone vector fields.
Furthermore, we establish the convergence of the iteration sequences produced by our proposed algorithm under mild conditions. Additionally, we provide an example to showcase the validity and convergence properties of our algorithm.

Author Contributions

The idea of the present paper was proposed by J.L. and improved by Z.W. J.L. wrote the manuscript and completed the calculations. J.L. and Z.W. checked all of the results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Foundation of the Hubei Provincial Department of Education (grant number Q20231210, Jiagen Liao) and the Natural Science Foundation of China (grant number 11871383, Zhongping Wan).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors sincerely appreciate the anonymous referees and the editors for their careful reading and constructive comments which have resulted in the present improved version of the original paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Németh, S.Z. Variational inequalities on Hadamard manifolds. Nonlinear Anal. Theory Methods Appl. 2003, 52, 1491–1498. [Google Scholar] [CrossRef]
  2. Quiroz, E.A.P.; Quispe, E.M.; Oliveira, P.R. Steepest descent method with a generalized Armijo search for quasiconvex functions on Riemannian manifolds. J. Math. Anal. Appl. 2008, 341, 467–477. [Google Scholar] [CrossRef]
  3. Barani, A.; Pouryayevali, M.R. Invariant monotone vector fields on Riemannian manifolds. Nonlinear Anal. Theory Methods Appl. 2009, 70, 1850–1861. [Google Scholar] [CrossRef]
  4. Kristàly, A. Nash-type equilibria on Riemannian manifolds: A variational approach. J. Math. Pures. Appl. 2014, 101, 660–688. [Google Scholar] [CrossRef]
  5. Wang, X.M.; López, G.; Li, C.; Yao, J.C. Equilibrium problems on Riemannian manifolds with applications. J. Math. Anal. Appl. 2019, 473, 866–891. [Google Scholar] [CrossRef]
  6. Li, S.L.; Li, C.; Liou, Y.C.; Yao, J.C. Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal-Theor. 2009, 71, 5695–5706. [Google Scholar] [CrossRef]
  7. Anh, P.N.; Kim, J.K.; Muu, L.D. An extragradient algorithm for solving bilevel pseudomonotone variational inequalities. J. Glob. Optim. 2012, 52, 627–639. [Google Scholar] [CrossRef]
  8. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
  9. Anh, T.V.; Muu, L.D. A projection-fixed point method for a class of bilevel variational inequalities with split fixed point constraints. Optimization 2016, 65, 1229–1243. [Google Scholar] [CrossRef]
  10. Anh, T.V. A strongly convergent subgradient extragradient-halpern method for solving a class of bilevel pseudomonotone variational inequalities. Vietnam J. Math. 2017, 45, 317–332. [Google Scholar] [CrossRef]
  11. Thong, D.V.; Triet, N.A.; Li, X.H.; Dong, Q.L. Strong convergence of extragradient methods for solving bilevel pseudo-monotone variational inequality problems. Numer. Algorithms 2020, 83, 1123–1143. [Google Scholar] [CrossRef]
  12. Walter, R. On the metric projection onto convex sets in Riemannian spaces. Arch. Math. 1974, 25, 91–98. [Google Scholar] [CrossRef]
  13. Udriste, C. Convex Functions and Optimization Methods on Riemannian Manifolds; Springer Science & Business Media: Dordrecht, The Netherlands, 1994; Volume 297. [Google Scholar]
  14. Li, C.; López, G.; Martín-Márquez, V.; Wang, J.H. Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Var. Anal. 2011, 19, 361–383. [Google Scholar] [CrossRef]
  15. Tang, G.J.; Huang, N.J. Korpelevich's method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 2012, 54, 493–509. [Google Scholar] [CrossRef]
  16. Liou, Y.C.; Obukhovskii, V.; Yao, J.C. On topological index of solutions for variational inequalities on Riemannian manifolds. Set-Valued Var. Anal. 2012, 20, 369–386. [Google Scholar] [CrossRef]
  17. Tang, G.J.; Zhou, L.W.; Huang, N.J. The proximal point algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Optim. Lett. 2013, 7, 779–790. [Google Scholar] [CrossRef]
  18. Tang, G.J.; Wang, X.; Liu, H.W. A projection-type method for variational inequalities on Hadamard manifolds and verification of solution existence. Optimization 2015, 64, 1081–1096. [Google Scholar] [CrossRef]
  19. Li, X.B.; Zhou, L.W.; Huang, N.J. Gap functions and global error bounds for generalized mixed variational inequalities on Hadamard manifolds. J. Optim. Theory Appl. 2016, 168, 830–849. [Google Scholar] [CrossRef]
  20. Tang, G.J.; Zhou, L.W.; Huang, N.J. Existence results for a class of hemivariational inequality problems on Hadamard manifolds. Optimization 2016, 65, 1451–1461. [Google Scholar] [CrossRef]
  21. Hung, N.V.; Tam, V.M.; Pitea, A. Global error bounds for mixed quasi-hemivariational inequality problems on Hadamard manifolds. Optimization 2020, 69, 2033–2052. [Google Scholar] [CrossRef]
  22. Ferreira, O.P.; Pérez, L.R.L.; Németh, S.Z. Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 2005, 31, 133–151. [Google Scholar] [CrossRef]
  23. Li, C.; Yao, J.C. Variational inequalities for set-valued vector fields on Riemannian manifolds: Convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 2012, 50, 2486–2514. [Google Scholar] [CrossRef]
  24. Chen, S.L.; Fang, C.J. Vector variational inequality with pseudoconvexity on Hadamard manifolds. Optimization 2016, 65, 2067–2080. [Google Scholar] [CrossRef]
  25. Batista, E.E.A.; Bento, G.C.; Ferreira, O.P. An extragradient-type algorithm for variational inequality on Hadamard manifolds. ESAIM-Control Optim. Calc. Var. 2020, 63, 1–16. [Google Scholar] [CrossRef]
  26. Ansari, Q.H.; Islam, M.; Yao, J.C. Nonsmooth variational inequalities on Hadamard manifolds. Numer. Algorithms 2020, 99, 340–358. [Google Scholar] [CrossRef]
  27. do Carmo, M.P. Riemannian Geometry; Flaherty, F., Translator; Birkhäuser: Boston, MA, USA, 1992; Volume 2. [Google Scholar]
  28. Burachik, R.; Graña Drummond, L.M.; Iusem, A.N.; Svaiter, B.F. Full convergence of the steepest descent method with inexact line searches. Optimization 1995, 32, 137–146. [Google Scholar] [CrossRef]
