Article

A Novel Inertial Viscosity Algorithm for Bilevel Optimization Problems Applied to Classification Problems

by Kobkoon Janngam 1, Suthep Suantai 2, Yeol Je Cho 3, Attapol Kaewkhao 2 and Rattanakorn Wattanataweekul 4,*
1 Graduate Ph.D. Degree Program in Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Research Center in Optimization and Computational Intelligence for Big Data Prediction, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Republic of Korea
4 Department of Mathematics, Statistics and Computer, Faculty of Science, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3241; https://doi.org/10.3390/math11143241
Submission received: 6 June 2023 / Revised: 16 July 2023 / Accepted: 19 July 2023 / Published: 24 July 2023

Abstract:
Fixed-point theory plays an important role in many real-world problems, such as image processing and classification problems. This paper introduces and analyzes a new, accelerated common-fixed-point algorithm using the viscosity approximation method and then employs it to solve convex bilevel optimization problems. The proposed method was applied to data classification with the Diabetes, Heart Disease UCI and Iris datasets. According to the data classification experiments, the proposed algorithm outperformed the other algorithms in the literature.

1. Introduction

Many real-world hierarchical problems can be modeled mathematically as bilevel problems, which appear in many practical applications. They are often encountered in the fields of production and capacity planning [1,2], traffic and transportation [3,4], chemistry [5,6] and management science [7,8], as well as energy networks and markets [9,10]. In addition, Nimana et al. [11] proposed an algorithm combining the incremental proximal gradient method with a smooth penalization technique to solve convex bilevel problems and applied it to image inpainting and binary classification problems.
Nowadays, we live in a world with various types of big data. To obtain the benefits of such data, we need to integrate advanced knowledge of both theory and methods from many areas, such as mathematics, computer science, statistics and medicine. In mathematics, optimization plays a very important role in classifying and predicting large amounts of data because it underlies machine learning algorithms with high accuracy. Among optimization models for machine learning, the bilevel optimization approach is an efficient one that makes it possible to create intelligent machine learning algorithms for data prediction and classification.
In this work, we study a bilevel problem, that is, an optimization problem where the constraint is another optimization problem. This problem is formulated as follows:
$\min_{x \in S^*} \omega(x),$ (1)
where $\omega : \mathbb{R}^n \to \mathbb{R}$ is assumed to be strongly convex and differentiable, while $S^*$ is the nonempty set of minimizers of the inner-level problem
$\min_{x \in \mathbb{R}^n} \{ f(x) + g(x) \},$ (2)
where $f : \mathbb{R}^n \to \mathbb{R}$ is a differentiable and convex function such that $\nabla f$ is $L_f$-Lipschitz-continuous and $g : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is a lower-semicontinuous, proper, convex function.
It can be observed that the above bilevel optimization model contains both inner- and outer-level minimization problems (Equations (1) and (2)). Normally, the minimization problem in Equation (2) can be applied to data prediction and classification; see [12,13]. However, among the solutions to the inner-level problem (Equation (2)), we use the outer objective function $\omega$ to select those that minimize $\omega$. This approach can provide greater accuracy for data prediction and classification than solving Equation (2) alone.
The inner-level optimization problem is a constraint on the outer-level optimization problem. There are several algorithms for solving the problem in Equation (2); see [12,14,15].
The proximal gradient (PG) method, also known as the proximal forward–backward technique, is the basic algorithm used to solve the problem in Equation (2) (see [16,17]). It is defined by
$x_{n+1} = \mathrm{prox}_{\alpha_n g}(I - \alpha_n \nabla f)(x_n),$ (3)
where $\alpha_n > 0$ is the step size, $\mathrm{prox}_g$ is the proximity operator of g and $\nabla f$ is the gradient of f. The algorithm in Equation (3), which is also known as the forward–backward splitting algorithm (FBSA) [14], is suitable for solving Equation (2) when $\nabla f$ is L-Lipschitz-continuous. The FBSA is also called an iterative denoising method [18] or a fixed-point continuation algorithm [19].
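To make the forward–backward step in Equation (3) concrete, the following minimal NumPy sketch applies it to a lasso-type instance of Equation (2), with f(x) = ||Ax − b||² and g(x) = λ||x||₁; the function names, the random test data and the fixed iteration count are illustrative choices, not part of the original paper.

```python
import numpy as np

def soft_threshold(x, t):
    # proximity operator of t*||.||_1, applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, alpha, n_iter=200):
    # Equation (3): x_{n+1} = prox_{alpha g}(x_n - alpha * grad f(x_n))
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - alpha * grad_f(x), alpha)
    return x

# Illustrative lasso instance: f(x) = ||Ax - b||_2^2, g(x) = lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: 2.0 * A.T @ (A @ x - b)          # gradient of f
L = 2.0 * np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad f
prox_g = lambda v, a: soft_threshold(v, a * lam)    # prox of a*g
x_pg = proximal_gradient(grad_f, prox_g, np.zeros(10), alpha=1.0 / L)
```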
One of the most well-known first-order optimization schemes is the fast iterative shrinkage-thresholding algorithm (FISTA). Beck and Teboulle [15] proposed the FISTA to solve the problem in Equation (2) by using an inertial technique as follows:
$w_n = \mathrm{prox}_{\frac{1}{L} g}\Big(I - \tfrac{1}{L}\nabla f\Big)(x_n), \quad p_{n+1} = \frac{1 + \sqrt{1 + 4 p_n^2}}{2}, \quad x_{n+1} = w_n + \frac{p_n - 1}{p_{n+1}}(w_n - w_{n-1}), \quad n \geq 1,$ (4)
where $x_1 = w_0 \in \mathbb{R}^N$ and $p_1 = 1$. They applied the FISTA to image restoration problems and showed that its rate of convergence was better than that of other existing algorithms. The weak convergence of the generated sequence was then proved by Liang and Schonlieb [20], who modified the FISTA by setting $p_{n+1} = \big(u + \sqrt{v + s p_n^2}\big)/2$, where $u, v > 0$ and $0 < s \leq 4$.
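A corresponding sketch of the FISTA update in Equation (4) is given below; it can be run with the grad_f, prox_g and Lipschitz constant L from the previous sketch, and the fixed iteration count is again an illustrative choice.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=200):
    # Equation (4): prox step at the extrapolated point x_n, then inertial update
    x = x0.copy()            # extrapolated point x_n (x_1 = w_0)
    w_prev = x0.copy()       # previous prox output w_{n-1}
    p = 1.0                  # p_1 = 1
    for _ in range(n_iter):
        w = prox_g(x - grad_f(x) / L, 1.0 / L)               # w_n
        p_next = (1.0 + np.sqrt(1.0 + 4.0 * p ** 2)) / 2.0   # p_{n+1}
        x = w + ((p - 1.0) / p_next) * (w - w_prev)          # inertial extrapolation
        w_prev, p = w, p_next
    return w_prev

# Example call (using grad_f, prox_g and L from the previous sketch):
# x_fista = fista(grad_f, prox_g, np.zeros(10), L)
```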
It may be noticed that the convex minimization problem and the fixed-point problem are related. If $0 < \alpha < 2/L$, then the forward–backward operator $T := \mathrm{prox}_{\alpha g}(I - \alpha \nabla f)$ is nonexpansive, and it is known that $\mathrm{Fix}(T) = \operatorname{argmin}\{f(x) + g(x)\}$. Fixed-point problems with nonexpansive mappings have been investigated by many authors using the method of viscosity approximation [21,22,23,24]. This method provides a strong convergence result and is defined by the following:
$x_{n+1} = \beta_n S(x_n) + (1 - \beta_n) T x_n, \quad n \geq 1,$ (5)
where $x_1 \in H$, $S : H \to H$ is a contraction on a Hilbert space H and $\{\beta_n\} \subset (0, 1)$. We also call Equation (5) the viscosity forward–backward algorithm if $T := \mathrm{prox}_{\alpha g}(I - \alpha \nabla f)$.
In 2014, Beck and Sabach [25] introduced a new, direct first-order method to solve the problem in Equation (1) and established its convergence under suitable conditions, as well as the rate of convergence of the sequence of function values. After that, Sabach and Shtern [26] proposed the following algorithm, called the bilevel gradient sequential averaging method (BiG-SAM), to solve the problems in Equations (1) and (2). The iterative process is defined as follows:
$u_n = \mathrm{prox}_{c g}\big(x_{n-1} - c \nabla f(x_{n-1})\big), \quad v_n = x_{n-1} - \lambda \nabla \omega(x_{n-1}), \quad x_{n+1} = \gamma_n v_n + (1 - \gamma_n) u_n, \quad n \geq 1,$ (6)
where $x_0 \in \mathbb{R}^n$, $c \in (0, 1/L_f]$ and $\lambda \in (0, 2/(L_\omega + \sigma)]$, in which $L_f$ and $L_\omega$ are the Lipschitz constants of $\nabla f$ and $\nabla \omega$, and $\{\gamma_n\}$ satisfies certain conditions from [22]. In terms of the values of the inner objective function, the authors of [26] studied and analyzed the convergence behavior of the BiG-SAM with a nonasymptotic $O(1/n)$ global rate of convergence.
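A minimal sketch of one way to implement the BiG-SAM iteration in Equation (6) follows; it is written in terms of the current iterate, and grad_omega, gamma and the other callables are placeholders for the problem data rather than quantities fixed by the paper.

```python
import numpy as np

def big_sam(grad_f, prox_g, grad_omega, x0, c, lam, gamma, n_iter=200):
    # Equation (6): average a forward-backward step on the inner problem (u)
    # with a gradient step on the outer objective omega (v)
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        u = prox_g(x - c * grad_f(x), c)       # inner-level proximal gradient step
        v = x - lam * grad_omega(x)            # outer-level gradient step
        g = gamma(n)                           # averaging parameter gamma_n
        x = g * v + (1.0 - g) * u
    return x
```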
In 2019, Shehu et al. [27] introduced an inertial extrapolation step into BiG-SAM (Equation (6)), calling the result the inertial bilevel gradient sequential averaging method (iBiG-SAM), to solve the problems in Equations (1) and (2). This iterative scheme is defined by
$s_n = x_n + \theta_n(x_n - x_{n-1}), \quad u_n = \mathrm{prox}_{c g}(I - c\nabla f)(s_n), \quad v_n = s_n - \lambda \nabla \omega(s_n), \quad x_{n+1} = \gamma_n v_n + (1 - \gamma_n) u_n, \quad n \geq 1.$ (7)
In their study, they presented a strong convergence analysis of an inertial algorithm that can be used to approximate fixed points of nonexpansive mappings in infinite-dimensional real Hilbert spaces. Furthermore, they converted the bilevel optimization problems into a fixed-point problem of nonexpansive mappings and showed its convergence under certain conditions.
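The inertial variant can be sketched in the same style; theta below stands for any admissible inertial-parameter rule (for example, the one recalled in Table 1) and is supplied by the user, so this is a sketch of the structure of Equation (7) rather than a definitive implementation.

```python
import numpy as np

def ibig_sam(grad_f, prox_g, grad_omega, x0, c, lam, gamma, theta, n_iter=200):
    # Equation (7): inertial extrapolation s_n, then a BiG-SAM-type update
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for n in range(1, n_iter + 1):
        s = x + theta(n, x, x_prev) * (x - x_prev)   # inertial step
        u = prox_g(s - c * grad_f(s), c)             # forward-backward step at s_n
        v = s - lam * grad_omega(s)                  # outer gradient step at s_n
        g = gamma(n)
        x_prev, x = x, g * v + (1.0 - g) * u
    return x
```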
In 2022, Duan and Zhang [28] introduced an alternated inertial step into BiG-SAM to create an alternated inertial bilevel gradient sequential averaging method (aiBiG-SAM) for solving convex bilevel optimization problems. It is defined as
$s_n = \begin{cases} x_n & \text{if } n \text{ is even}, \\ x_n + \theta_n(x_n - x_{n-1}) & \text{if } n \text{ is odd}, \end{cases}$ (8)
and
$u_n = \mathrm{prox}_{c g}(I - c\nabla f)(s_n), \quad v_n = s_n - \lambda \nabla \omega(s_n), \quad x_{n+1} = \gamma_n v_n + (1 - \gamma_n) u_n, \quad n \geq 1.$ (9)
They proved that the aiBiG-SAM converges strongly to a solution for the problem and extended the method into a more general alternating inertial acceleration method.
Recently, in [29,30], the authors proposed new bilevel optimization methods within the framework of Hilbert spaces and proved the strong convergence of their algorithms using the viscosity approximation technique.
In this paper, motivated by these results, we present a novel accelerated algorithm using the viscosity approximation method and the inertial parameter of the FISTA to solve the convex bilevel optimization problem. We then demonstrate the efficacy of this algorithm in solving data classification problems.
The paper is organized as follows. In Section 2, we present the preliminaries in terms of definitions, notations and lemmas needed for proving the main results. In Section 3, the new accelerated viscosity-type algorithm is introduced and studied and then applied to solve convex bilevel optimization problems. Then, in Section 4, we present mathematical models for the classification of datasets and apply the results obtained in the previous section, and we provide numerical experimental results in Section 5. Finally, we present the conclusions and future work in Section 6.

2. Preliminaries

In this section, we present fundamental ideas and principles that will be utilized in the rest of the paper.
Throughout the present paper, H denotes a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$, $\mathbb{R}$ is the set of real numbers and $\mathbb{N}$ is the set of positive integers. I denotes the identity operator on H. Let C be a nonempty subset of H and let T be a mapping of C into itself. The strong convergence of a sequence $\{x_n\}$ in H to $x \in H$ is denoted by $x_n \to x$, weak convergence by $x_n \rightharpoonup x$, and $\mathrm{Fix}(T)$ denotes the set of all fixed points of T.
For this work, we essentially need nonlinear mappings from the following classes.
Definition 1.
The mapping $T : C \to C$ is said to be L-Lipschitz with $L \geq 0$ if
$\|Tu - Tv\| \leq L\|u - v\|$
for all $u, v \in C$.
An L-Lipschitz mapping T is said to be a contraction mapping if $L \in [0, 1)$, and it is nonexpansive if $L = 1$.
Definition 2
([31]). Let T be a nonexpansive mapping of C into itself and let $T_n : C \to C$ be a family of nonexpansive mappings such that $\emptyset \neq \mathrm{Fix}(T) \subset \Gamma := \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n)$, where $\mathrm{Fix}(T_n)$ is the set of all fixed points of $T_n$ for each $n \geq 1$. Then, $\{T_n\}$ is said to satisfy the NST-condition (I) with T if, for any bounded sequence $\{x_n\} \subset C$,
$\lim_{n\to\infty} \|x_n - T_n x_n\| = 0 \ \text{implies} \ \lim_{n\to\infty} \|x_n - T x_n\| = 0.$
Definition 3
([32,33]). A family $\{T_n\}$ of nonexpansive mappings $T_n : C \to C$ with $\bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n) \neq \emptyset$ is said to satisfy the condition (Z) if, for any bounded sequence $\{u_n\}$ in H with
$\lim_{n\to\infty} \|u_n - T_n u_n\| = 0,$
every weak cluster point of $\{u_n\}$ belongs to $\bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n)$.
Using the demiclosedness of $I - T$, where $T : C \to C$ is a nonexpansive mapping, we obtain the following remark.
Remark 1.
Let T be a nonexpansive mapping. Then, { T n } satisfies the condition (Z) if { T n } is a family of nonexpansive mappings that satisfies NST-condition (I) with T.
The metric projection $P_C$ from H onto C is defined by
$P_C x = \operatorname{argmin}\{\|x - y\| : y \in C\}$
for all $x \in H$, where C is a nonempty, closed, convex subset of H. It is known that $v = P_C x$ if and only if $\langle x - v, y - v\rangle \leq 0$ for all $y \in C$.
Let us recall the definition of the proximity operator and its properties.
Definition 4
([34,35]). Let $f : H \to \mathbb{R} \cup \{\infty\}$ be a convex, proper and lower-semicontinuous function. The proximity operator of f, denoted by $\mathrm{prox}_f$, is defined as follows:
$\mathrm{prox}_f(x) = \operatorname{argmin}_{y \in H}\Big\{ f(y) + \frac{1}{2}\|x - y\|^2 \Big\},$
and it can be formulated in the equivalent form
$\mathrm{prox}_f = (I + \partial f)^{-1},$
where $\partial f$ is the subdifferential of f defined by
$\partial f(x) := \{v \in H : f(x) + \langle v, u - x\rangle \leq f(u) \ \text{for all} \ u \in H\}$
for all $x \in H$. For $\rho > 0$, we also know that $\mathrm{prox}_{\rho f}$ is firmly nonexpansive and
$\mathrm{Fix}(\mathrm{prox}_{\rho f}) = \operatorname{Argmin} f := \{v \in H : f(v) \leq f(u) \ \text{for all} \ u \in H\}.$
Let $C \subset H$ be closed and convex. In particular, if $f := i_C$, the indicator function of C defined by
$i_C(x) = \begin{cases} 0 & \text{if } x \in C, \\ \infty & \text{otherwise}, \end{cases}$
then $\mathrm{prox}_{\rho f} = P_C$.
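Two standard closed-form proximity operators illustrate Definition 4: componentwise soft-thresholding for the $\ell_1$ norm and, for an indicator function, the metric projection. The Euclidean ball used below as the set C is an illustrative choice, not one taken from the paper.

```python
import numpy as np

def prox_l1(x, rho):
    # prox of rho*||.||_1: componentwise soft-thresholding by rho
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

def project_ball(x, r=1.0):
    # prox of the indicator i_C of C = {y : ||y|| <= r}, i.e. the projection P_C
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

x = np.array([1.5, -0.2, 0.7])
print(prox_l1(x, 0.5))       # soft-thresholds each entry by 0.5
print(project_ball(x, 1.0))  # rescales x onto the unit ball since ||x|| > 1
```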
The following lemmas are well known; see [13,36,37].
Lemma 1
([13]). Let $g : H \to \mathbb{R} \cup \{\infty\}$ be a lower-semicontinuous, proper and convex function and let $f : H \to \mathbb{R}$ be differentiable and convex such that $\nabla f$ is L-Lipschitz-continuous. Let
$T_n := \mathrm{prox}_{\rho_n g}(I - \rho_n \nabla f) \quad \text{and} \quad T := \mathrm{prox}_{\rho g}(I - \rho \nabla f),$
where $\rho_n, \rho \in (0, 2/L)$ with $\rho_n \to \rho$ as $n \to \infty$. Then, $\{T_n\}$ satisfies the NST-condition (I) with T.
Lemma 2
([36]). Let $\eta, \mu \in H$ and $\zeta \in [0, 1]$. Then, the following properties hold for H:
(1)
$\|\eta + \mu\|^2 \leq \|\eta\|^2 + 2\langle\mu, \eta + \mu\rangle$;
(2)
$\|\eta \pm \mu\|^2 = \|\eta\|^2 \pm 2\langle\eta, \mu\rangle + \|\mu\|^2$;
(3)
$\|\zeta\eta + (1 - \zeta)\mu\|^2 = \zeta\|\eta\|^2 + (1 - \zeta)\|\mu\|^2 - \zeta(1 - \zeta)\|\eta - \mu\|^2$.
Lemma 3
([37]). Let $\{c_n\} \subset \mathbb{R}_+$, $\{b_n\} \subset \mathbb{R}$ and $\{t_n\} \subset (0, 1)$ be such that $\sum_{n=1}^{\infty} t_n = \infty$. Suppose that
$c_{n+1} \leq (1 - t_n)c_n + t_n b_n$
for all $n \in \mathbb{N}$. If $\limsup_{i\to\infty} b_{n_i} \leq 0$ for every subsequence $\{c_{n_i}\}$ of $\{c_n\}$ satisfying
$\liminf_{i\to\infty} (c_{n_i+1} - c_{n_i}) \geq 0,$
then $\lim_{n\to\infty} c_n = 0$.
In the next section, we introduce an inertial viscosity modified SP algorithm and its application to the convex bilevel optimization problem.

3. Proposed Method

We first present a new inertial viscosity algorithm and prove a strong convergence theorem under mild conditions as follows.
Let $C \subset H$ be closed and convex and let $S : C \to C$ be a k-contraction, where $0 < k < 1$. Let $\{T_n\}$ be a family of nonexpansive mappings of C into itself satisfying the condition (Z) such that $\Gamma := \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n) \neq \emptyset$.
Many mathematicians often use inertial-type extrapolation [38,39] in optimization problems to speed up the convergence of iterative methods by adding the term $\theta_n(x_n - x_{n-1})$. The momentum $x_n - x_{n-1}$ is controlled by the parameter $\theta_n$, also known as an inertial parameter.
In 2011, Phuengrattana and Suantai [40] introduced the SP algorithm and showed that its convergence behavior is better than that of the Mann and Ishikawa iterations [41,42]. Using the idea of the SP algorithm, in this paper, we introduce an inertial viscosity modified SP algorithm (IVMSPA) for obtaining a common fixed point of $\{T_n\}$, as follows.
The following theorem establishes strong convergence for the proposed algorithm.
Theorem 1.
A sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to an element $\breve{a} \in \Gamma$, where $\breve{a} = P_\Gamma S(\breve{a})$, provided that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ and $\{\tau_n\}$ satisfy the following conditions:
(C1)
$0 < a_1 \leq \beta_n \leq a_2 < 1$;
(C2)
$0 < \alpha_n, \gamma_n < 1$, $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(C3)
$\lim_{n\to\infty} \tau_n = 0$.
Algorithm 1 An Inertial Viscosity Modified SP Algorithm (IVMSPA)
Initialization: Let { α n } , { β n } , { γ n } and { τ n } be sequences of positive real  numbers. Take x 0 , x 1 H arbitrarily.
Iterative steps: For n 1 , calculate x n + 1 as follows:
Step 1. Compute an inertial parameter
$\theta_n = \begin{cases} \min\Big\{\dfrac{p_n - 1}{p_{n+1}}, \dfrac{\alpha_n \tau_n}{\|x_n - x_{n-1}\|}\Big\} & \text{if } x_n \neq x_{n-1}, \\[4pt] \dfrac{p_n - 1}{p_{n+1}} & \text{otherwise}, \end{cases}$
where $p_1 = 1$ and $p_{n+1} = \dfrac{1 + \sqrt{1 + 4 p_n^2}}{2}$.
Step 2. Compute
$y_n = x_n + \theta_n(x_n - x_{n-1}), \quad z_n = (1 - \alpha_n) y_n + \alpha_n S(y_n), \quad w_n = (1 - \beta_n) z_n + \beta_n T_n z_n, \quad x_{n+1} = (1 - \gamma_n) w_n + \gamma_n T_n w_n.$
Proof. 
Let $\breve{a} = P_\Gamma S(\breve{a})$. Then, $\breve{a} \in \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n)$. First of all, we show that $\{x_n\}$ is bounded. From Algorithm 1, we have
$\|w_n - \breve{a}\| \leq (1 - \beta_n)\|z_n - \breve{a}\| + \beta_n\|T_n z_n - \breve{a}\| \leq (1 - \beta_n)\|z_n - \breve{a}\| + \beta_n\|z_n - \breve{a}\| = \|z_n - \breve{a}\|$
and
$\|x_{n+1} - \breve{a}\| \leq (1 - \gamma_n)\|w_n - \breve{a}\| + \gamma_n\|T_n w_n - \breve{a}\| \leq (1 - \gamma_n)\|w_n - \breve{a}\| + \gamma_n\|w_n - \breve{a}\| = \|w_n - \breve{a}\| \leq \|z_n - \breve{a}\|.$ (10)
From the definitions of $y_n$ and $z_n$, we obtain
$\|z_n - \breve{a}\| \leq \alpha_n\|S(y_n) - \breve{a}\| + (1 - \alpha_n)\|y_n - \breve{a}\| \leq \alpha_n\|S(y_n) - S(\breve{a})\| + \alpha_n\|S(\breve{a}) - \breve{a}\| + (1 - \alpha_n)\|y_n - \breve{a}\| \leq \alpha_n k\|y_n - \breve{a}\| + \alpha_n\|S(\breve{a}) - \breve{a}\| + (1 - \alpha_n)\|y_n - \breve{a}\| = \big(1 - \alpha_n(1 - k)\big)\|y_n - \breve{a}\| + \alpha_n\|S(\breve{a}) - \breve{a}\| \leq \big(1 - \alpha_n(1 - k)\big)\big(\|x_n - \breve{a}\| + \theta_n\|x_{n-1} - x_n\|\big) + \alpha_n\|S(\breve{a}) - \breve{a}\| \leq \big(1 - \alpha_n(1 - k)\big)\|x_n - \breve{a}\| + \alpha_n\Big(\frac{\theta_n}{\alpha_n}\|x_{n-1} - x_n\| + \|S(\breve{a}) - \breve{a}\|\Big).$
From (C3), we have
$\frac{\theta_n}{\alpha_n}\|x_{n-1} - x_n\| \to 0 \quad \text{as} \quad n \to \infty.$ (11)
From Equation (11), we know that there exists $M > 0$ such that $\frac{\theta_n}{\alpha_n}\|x_{n-1} - x_n\| \leq M$ for all $n \geq 1$. Thus,
$\|z_n - \breve{a}\| \leq \big(1 - (1 - k)\alpha_n\big)\|x_n - \breve{a}\| + (1 - k)\alpha_n \frac{\|S(\breve{a}) - \breve{a}\| + M}{1 - k} \leq \max\Big\{\|x_n - \breve{a}\|, \frac{\|S(\breve{a}) - \breve{a}\| + M}{1 - k}\Big\}.$
From Equation (10) and the above inequality, we obtain
$\|x_{n+1} - \breve{a}\| \leq \max\Big\{\|x_n - \breve{a}\|, \frac{\|S(\breve{a}) - \breve{a}\| + M}{1 - k}\Big\}.$
Using mathematical induction, we have
$\|x_n - \breve{a}\| \leq \max\Big\{\|x_1 - \breve{a}\|, \frac{\|S(\breve{a}) - \breve{a}\| + M}{1 - k}\Big\}$
for all $n \geq 1$. It follows that $\{x_n\}$ is bounded and, hence, $\{z_n\}$ is bounded. According to part (3) of Lemma 2, we obtain
$\|x_{n+1} - \breve{a}\|^2 = \gamma_n\|T_n w_n - \breve{a}\|^2 + (1 - \gamma_n)\|w_n - \breve{a}\|^2 - (1 - \gamma_n)\gamma_n\|w_n - T_n w_n\|^2 \leq (1 - \gamma_n)\|w_n - \breve{a}\|^2 + \gamma_n\|w_n - \breve{a}\|^2 = \|w_n - \breve{a}\|^2$ (12)
and
$\|w_n - \breve{a}\|^2 = \beta_n\|T_n z_n - \breve{a}\|^2 + (1 - \beta_n)\|z_n - \breve{a}\|^2 - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 \leq \|z_n - \breve{a}\|^2 - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2.$ (13)
Using Lemma 2, we obtain
$\|z_n - \breve{a}\|^2 \leq \|(1 - \alpha_n)(y_n - \breve{a}) + \alpha_n(S(y_n) - S(\breve{a}))\|^2 + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle \leq (1 - \alpha_n)\|y_n - \breve{a}\|^2 + \alpha_n\|S(y_n) - S(\breve{a})\|^2 + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle \leq (1 - \alpha_n)\|y_n - \breve{a}\|^2 + \alpha_n k\|y_n - \breve{a}\|^2 + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle = \big(1 - \alpha_n(1 - k)\big)\|y_n - \breve{a}\|^2 + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle$ (14)
and
$\|y_n - \breve{a}\|^2 \leq \|x_n - \breve{a}\|^2 + 2\theta_n\|x_n - \breve{a}\|\,\|x_{n-1} - x_n\| + \theta_n^2\|x_{n-1} - x_n\|^2.$ (15)
From Equations (12)–(15), we obtain
$\|x_{n+1} - \breve{a}\|^2 \leq \|z_n - \breve{a}\|^2 - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 \leq \big(1 - (1 - k)\alpha_n\big)\|y_n - \breve{a}\|^2 + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 \leq \big(1 - (1 - k)\alpha_n\big)\big(\|x_n - \breve{a}\|^2 + 2\theta_n\|x_n - \breve{a}\|\,\|x_{n-1} - x_n\| + \theta_n^2\|x_{n-1} - x_n\|^2\big) + 2\alpha_n\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 \leq \big(1 - (1 - k)\alpha_n\big)\|x_n - \breve{a}\|^2 - (1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 + (1 - k)\alpha_n b_n,$ (16)
where
$b_n = \frac{1}{1 - k}\Big(2\langle S(\breve{a}) - \breve{a}, z_n - \breve{a}\rangle + \theta_n\|x_{n-1} - x_n\|\,\frac{\theta_n}{\alpha_n}\|x_{n-1} - x_n\| + 2\|x_n - \breve{a}\|\,\frac{\theta_n}{\alpha_n}\|x_{n-1} - x_n\|\Big).$
It follows that
$(1 - \beta_n)\beta_n\|z_n - T_n z_n\|^2 \leq \|x_n - \breve{a}\|^2 - \|x_{n+1} - \breve{a}\|^2 + (1 - k)\alpha_n B,$ (17)
where $B = \sup\{b_n : n \in \mathbb{N}\}$.
We next show that $\{x_n\}$ converges strongly to $\breve{a}$. To apply Lemma 3, we set $a_n := \|x_n - \breve{a}\|^2$ and $t_n := \alpha_n(1 - k)$. From Equation (16), we obtain
$a_{n+1} \leq (1 - t_n)a_n + t_n b_n.$
Suppose that $\{a_{n_i}\}$ is a subsequence of $\{a_n\}$ such that $\liminf_{i\to\infty}(a_{n_i+1} - a_{n_i}) \geq 0$. Using Equation (17) and (C2), we obtain
$\limsup_{i\to\infty} \beta_{n_i}(1 - \beta_{n_i})\|z_{n_i} - T_{n_i} z_{n_i}\|^2 \leq \limsup_{i\to\infty}\big(a_{n_i} - a_{n_i+1} + \alpha_{n_i}(1 - k)B\big) = -\liminf_{i\to\infty}(a_{n_i+1} - a_{n_i}) \leq 0.$ (18)
From (C1) and Equation (18), we obtain
$\lim_{i\to\infty}\|z_{n_i} - T_{n_i} z_{n_i}\| = 0.$ (19)
Next, we show that $\limsup_{i\to\infty} b_{n_i} \leq 0$. Obviously, it suffices to show that
$\limsup_{i\to\infty}\langle S(\breve{a}) - \breve{a}, z_{n_i} - \breve{a}\rangle \leq 0.$
Since $\{z_{n_i}\}$ is bounded, there exist a subsequence $\{z_{n_{i_j}}\}$ of $\{z_{n_i}\}$ and $y \in H$ such that $z_{n_{i_j}} \rightharpoonup y$ as $j \to \infty$ and
$\limsup_{i\to\infty}\langle S(\breve{a}) - \breve{a}, z_{n_i} - \breve{a}\rangle = \lim_{j\to\infty}\langle S(\breve{a}) - \breve{a}, z_{n_{i_j}} - \breve{a}\rangle.$
Since $\{T_n\}$ satisfies the condition (Z), Equation (19) implies that $y \in \Gamma$. Using $\breve{a} = P_\Gamma S(\breve{a})$, we get
$\lim_{j\to\infty}\langle S(\breve{a}) - \breve{a}, z_{n_{i_j}} - \breve{a}\rangle \leq 0.$
So, we have
$\limsup_{i\to\infty}\langle S(\breve{a}) - \breve{a}, z_{n_i} - \breve{a}\rangle \leq 0.$
Thus, in view of Lemma 3, $\{x_n\}$ converges to $\breve{a}$, as required. □
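The following NumPy sketch shows one way to implement Algorithm 1 for iterates in $\mathbb{R}^d$; the callables S and T (with T(n, ·) playing the role of $T_n$), the parameter sequences alpha, beta, gamma, tau and the fixed iteration count are placeholders to be supplied by the user, not quantities fixed by the paper.

```python
import numpy as np

def ivmspa(S, T, x0, x1, alpha, beta, gamma, tau, n_iter=200):
    # Algorithm 1 (IVMSPA): FISTA-type inertial parameter, a viscosity step with
    # the contraction S, then two SP-type steps with the mappings T_n = T(n, .)
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    p = 1.0                                            # p_1 = 1
    for n in range(1, n_iter + 1):
        p_next = (1.0 + np.sqrt(1.0 + 4.0 * p ** 2)) / 2.0
        fista_term = (p - 1.0) / p_next
        diff = np.linalg.norm(x - x_prev)
        theta = fista_term if diff == 0.0 else min(fista_term, alpha(n) * tau(n) / diff)
        y = x + theta * (x - x_prev)                   # inertial step
        z = (1.0 - alpha(n)) * y + alpha(n) * S(y)     # viscosity step
        w = (1.0 - beta(n)) * z + beta(n) * T(n, z)    # first SP-type step
        x_prev, x = x, (1.0 - gamma(n)) * w + gamma(n) * T(n, w)  # second SP-type step
        p = p_next
    return x
```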
We next establish our inertial bilevel gradient modified SP algorithm (IBiG-MSPA) to solve the convex bilevel optimization problems in Equations (1) and (2) by applying Algorithm 1 and present its strong convergence. We use the following assumptions in order to solve this problem:
• $f : H \to \mathbb{R}$ is a convex and differentiable function such that $\nabla f$ is Lipschitz-continuous with constant $L_f > 0$, and $g : H \to (-\infty, \infty]$ is a proper, lower-semicontinuous and convex function;
• $\omega : \mathbb{R}^n \to \mathbb{R}$ is strongly convex with a parameter $\sigma$ such that $\nabla\omega$ is $L_\omega$-Lipschitz-continuous, and $s \in \big(0, \frac{2}{L_\omega + \sigma}\big)$.
Our IBiG-MSPA algorithm is defined as follows.
The following useful result was proved by Sabach and Shtern [26].
Proposition 1.
Suppose that $\omega : \mathbb{R}^n \to \mathbb{R}$ is strongly convex with $\sigma > 0$ and $\nabla\omega$ is Lipschitz-continuous with constant $L_\omega$. Then, the mapping defined by $S_s = I - s\nabla\omega$ is a contraction for all $s \in \big(0, \frac{2}{\sigma + L_\omega}\big)$. Thus,
$\|u - s\nabla\omega(u) - \big(v - s\nabla\omega(v)\big)\| \leq \sqrt{1 - \frac{2s\sigma L_\omega}{\sigma + L_\omega}}\,\|u - v\|$
for all $u, v \in \mathbb{R}^n$.
Combining Theorem 1 and Proposition 1, we obtain the following result.
Theorem 2.
Let $\Lambda$ be the set of all solutions to Equation (1), let $\breve{a} = P_{S^*}(I - s\nabla\omega)(\breve{a})$ and let (C1)–(C3) in Theorem 1 hold. Then, $\{x_n\}$ generated by Algorithm 2 converges strongly to $\breve{a} \in \Lambda$.
Algorithm 2 An Inertial Bilevel Gradient Modified SP Algorithm (IBiG-MSPA)
Initialization: Let { α n } , { β n } , { γ n } , { τ n } and { c n } be sequences of positive  real numbers. Take x 0 , x 1 H arbitrarily.
Iterative steps: For n 1 , calculate x n + 1 as follows:
Step 1. Compute an inertial parameter
$\theta_n = \begin{cases} \min\Big\{\dfrac{p_n - 1}{p_{n+1}}, \dfrac{\alpha_n \tau_n}{\|x_n - x_{n-1}\|}\Big\} & \text{if } x_n \neq x_{n-1}, \\[4pt] \dfrac{p_n - 1}{p_{n+1}} & \text{otherwise}, \end{cases}$
where $p_1 = 1$ and $p_{n+1} = \dfrac{1 + \sqrt{1 + 4 p_n^2}}{2}$.
Step 2. Compute
$y_n = x_n + \theta_n(x_n - x_{n-1}), \quad z_n = (1 - \alpha_n) y_n + \alpha_n (I - s\nabla\omega)(y_n), \quad w_n = (1 - \beta_n) z_n + \beta_n\, \mathrm{prox}_{c_n g}(I - c_n \nabla f)(z_n), \quad x_{n+1} = (1 - \gamma_n) w_n + \gamma_n\, \mathrm{prox}_{c_n g}(I - c_n \nabla f)(w_n).$
Proof. 
Set $S = I - s\nabla\omega$ and $T_n = \mathrm{prox}_{c_n g}(I - c_n \nabla f)$. Then, according to Proposition 1, S is a contraction mapping. We also know that $T_n$ is nonexpansive. Using Theorem 1, we can conclude that $x_n \to \breve{a} \in \Gamma$, where $\breve{a} = P_\Gamma S(\breve{a})$. It can be noted that, in this case, $\Gamma = \bigcap_{n=1}^{\infty} \mathrm{Fix}(T_n) = S^*$. Then, for all $x \in S^*$, we have
$0 \geq \langle S(\breve{a}) - \breve{a}, x - \breve{a}\rangle = \langle \breve{a} - s\nabla\omega(\breve{a}) - \breve{a}, x - \breve{a}\rangle = -s\langle \nabla\omega(\breve{a}), x - \breve{a}\rangle.$
Dividing the above inequality by s, we obtain
$\langle \nabla\omega(\breve{a}), x - \breve{a}\rangle \geq 0$
for all $x \in S^*$. Then, $x_n \to \breve{a} \in \Lambda$. This completes the proof. □
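Algorithm 2 can then be obtained by instantiating the IVMSPA sketch above with $S = I - s\nabla\omega$ and $T_n = \mathrm{prox}_{c_n g}(I - c_n\nabla f)$. The helper below only builds these two callables; grad_omega, grad_f, prox_g, s and the step-size rule c are placeholders for the problem data, so this is a sketch of the wiring rather than a definitive implementation.

```python
def make_ibig_mspa_maps(grad_omega, grad_f, prox_g, s, c):
    # S is the outer-level step I - s*grad(omega), a contraction by Proposition 1;
    # T_n is the inner-level forward-backward operator prox_{c_n g}(I - c_n grad f)
    S = lambda y: y - s * grad_omega(y)
    T = lambda n, z: prox_g(z - c(n) * grad_f(z), c(n))
    return S, T

# Example wiring (all callables are user-supplied placeholders):
# S, T = make_ibig_mspa_maps(grad_omega, grad_f, prox_g, s=0.01, c=lambda n: 1.0 / L_f)
# x = ivmspa(S, T, x0, x1, alpha, beta, gamma, tau, n_iter=200)
```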
Using our main results (Theorems 1 and 2), we apply the IBiG-MSPA in the next section to solve a classification problem.

4. Applications with Classification Problems

There are several mathematical models used for the classification of datasets. For this paper, we use the extreme learning machine model and present the advantages of this model as follows.
The advantages of feedforward neural networks have led to their widespread use in diverse fields over the past few decades. Stated concisely, feedforward neural networks allow for the approximation of complex nonlinear mappings directly from input samples and provide models for numerous natural and artificial phenomena that are difficult to deal with using classical parametric techniques. However, training feedforward neural networks is time-consuming because the parameters of the different layers depend on one another and all of them must be tuned. One of the widely used feedforward neural networks is the single hidden layer feedforward neural network (SLFN). It has been widely studied in terms of both theory and application because of its learning ability and anti-error ability (see [43,44,45] for more detail).
In order to increase the effectiveness of SLFNs, a development model of a neural learning algorithm called the extreme learning machine (ELM) [46] was recently established. The advantage of the ELM is that hidden node learning parameters, such as input weights and biases, are generated at random and do not need to be adjusted, whereas the output weights can be obtained analytically by using a simply generalized inverse operation. The ELM has been effectively used in several real-world applications, including regression and classification problems [47,48].
We next examine some aspects of the ELM regarding the classification of data.
Let $\{(x_l, t_l) : x_l \in \mathbb{R}^n, t_l \in \mathbb{R}^m, l = 1, 2, \ldots, N\}$ be a set of training data taken from different samples with a total sample size N, where $x_l = [x_{l1}, x_{l2}, \ldots, x_{ln}] \in \mathbb{R}^n$ and $t_l = [t_{l1}, t_{l2}, \ldots, t_{lm}] \in \mathbb{R}^m$ are the input data and target data, respectively. The mathematical formula for an ELM with M hidden nodes is as follows:
$\sum_{r=1}^{M} K_r E(\langle v_r, x_s\rangle + a_r) = o_s, \quad s = 1, \ldots, N,$
where $E(x)$ represents the activation function, $K_r = [K_{r1}, K_{r2}, \ldots, K_{rm}]^T$ is the weight vector linking the r-th hidden node to the output nodes, $v_r = [v_{r1}, v_{r2}, \ldots, v_{rn}]^T$ is the weight vector linking the input nodes to the r-th hidden node and $a_r$ is the bias of the r-th hidden node.
The purpose of an SLFN is to produce outputs that satisfy $\sum_{s=1}^{N}\|o_s - t_s\| = 0$. This means that there exist $K_r$, $v_r$ and $a_r$ such that
$\sum_{r=1}^{M} K_r E(\langle v_r, x_s\rangle + a_r) = t_s, \quad s = 1, \ldots, N.$
From the above system of linear equations, we can write
$HK = T,$
where
$H = \begin{bmatrix} E(\langle v_1, x_1\rangle + a_1) & \cdots & E(\langle v_M, x_1\rangle + a_M) \\ \vdots & \ddots & \vdots \\ E(\langle v_1, x_N\rangle + a_1) & \cdots & E(\langle v_M, x_N\rangle + a_M) \end{bmatrix},$
$K = [K_1^T, \ldots, K_M^T]^T \in \mathbb{R}^{M \times m}$ and $T = [t_1^T, \ldots, t_N^T]^T \in \mathbb{R}^{N \times m}$.
If the Moore–Penrose generalized inverse $\breve{H}$ of H exists, K can be obtained from $K = \breve{H}T$ (see [46]). If $\breve{H}$ does not exist, then it could be impossible to find K using this approach. To address this issue, we determine K as a minimizer of the ordinary least squares (OLS) minimization problem:
$\min_K \|HK - T\|_2^2,$ (20)
where $H \in \mathbb{R}^{N \times M}$ is called the hidden layer output matrix, $K \in \mathbb{R}^{M \times m}$ is the weight of the output layer, $T \in \mathbb{R}^{N \times m}$ is the training data target matrix, M is the number of hidden nodes and N is the number of training samples.
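The construction of the hidden layer output matrix H and the pseudoinverse solution can be sketched as follows; the random inputs, targets and sizes are stand-ins for a real training set, not data from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_hidden_matrix(X, V, a, activation=sigmoid):
    # H[s, r] = E(<v_r, x_s> + a_r), with x_s the rows of X and v_r the columns of V
    return activation(X @ V + a)                  # shape (N, M)

rng = np.random.default_rng(0)
N, n_features, M, m = 105, 4, 30, 3               # stand-in sizes
X = rng.standard_normal((N, n_features))          # training inputs x_s
T = rng.standard_normal((N, m))                   # training targets t_s
V = rng.standard_normal((n_features, M))          # random input weights v_r (columns)
a = rng.standard_normal(M)                        # random biases a_r

H = elm_hidden_matrix(X, V, a)
K = np.linalg.pinv(H) @ T                         # K = H^+ T, least-squares output weights
```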
However, in a real situation, the use of OLS (Equation (20)) may cause an overfitting problem. To overcome such problems, the output weight K can be approximated with the least absolute shrinkage and selection operator (lasso) (see [49]):
$\min_K \|HK - T\|_2^2 + \lambda\|K\|_1,$ (21)
where $\lambda$ is a regularization parameter. Now, let $S^*$ be the set of all solutions to Equation (21). Among the solutions in $S^*$, we would like to select a solution $K^* \in S^*$ in such a way that $K^*$ is a minimizer of
$\min_{K \in S^*} \frac{1}{2}\|K\|_2^2.$ (22)
Our aim in the next section is to employ the IBiG-MSPA to solve the convex bilevel optimization problems in Equations (21) and (22) and to use the obtained optimal weight for classification of the Diabetes [50], Heart Disease UCI [51] and Iris datasets [52]. These databases are widely used as benchmarks in many research works in the area of data classification.

5. Numerical Experiments

In this section, we present the experimental results from applying our proposed algorithm to classify the Diabetes, Heart Disease UCI and Iris datasets.
We employed our algorithm (IBiG-MSPA) to solve the convex bilevel optimization problems in Equations (21) and (22) by setting $\omega(K) = \frac{1}{2}\|K\|_2^2$, $f(K) = \|HK - T\|_2^2$, $g(K) = \lambda\|K\|_1$ and $E(x)$ as the sigmoid function.
The parameters selected for this experiment are shown in Table 1, where $L_f = 2\|H\|^2$. We measured the efficiency of each algorithm using the output data accuracy, defined as follows:
$\text{accuracy} = 100 \times \frac{\text{correct predictions}}{\text{total cases}}.$
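For reference, the pieces needed to run the IBiG-MSPA on this model can be assembled as below, assuming H and T come from the ELM sketch in Section 4; the value of lam is illustrative, since the regularization parameter used in the experiments is not restated here.

```python
import numpy as np

# omega(K) = 0.5*||K||_2^2,  f(K) = ||HK - T||_2^2,  g(K) = lam*||K||_1
lam = 1e-3                                           # illustrative regularization parameter
grad_f = lambda K: 2.0 * H.T @ (H @ K - T)           # gradient of f
L_f = 2.0 * np.linalg.norm(H, 2) ** 2                # Lipschitz constant of grad f
prox_g = lambda W, c: np.sign(W) * np.maximum(np.abs(W) - c * lam, 0.0)  # prox_{c g}
grad_omega = lambda K: K                             # gradient of omega

def accuracy(H_eval, K, labels):
    # percentage of samples whose argmax network output matches the true class
    pred = np.argmax(H_eval @ K, axis=1)
    return 100.0 * np.mean(pred == labels)
```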
Next, we used the Diabetes, Heart Disease UCI and Iris datasets for classification, as described below.
Diabetes dataset [50]: The dataset has nine features. We categorized the data in this dataset into two classes.
Heart Disease UCI dataset [51]: The dataset has 14 features and presents patients' heart disease data. We wanted to categorize the data into two groups.
Iris dataset [52]: The dataset contains four features and three classes. We aimed to classify the data into three types of iris plants.
Testing and training data are shown in Table 2.
For each dataset, the numbers of iterations and hidden nodes were set as shown in Table 3.
The number of iterations for each dataset was chosen to produce the best results for each method under consideration, as can be seen in Table 3.
We conducted experiments to compare the efficiency of the IBiG-MSPA with other algorithms under consideration; namely, the BiG-SAM, iBiG-SAM and aiBiG-SAM.
As representations of the accuracy of testing and training, we use the terms Ac.Test and Ac.Train, respectively, in Table 4.
The results from Table 4 show that our proposed method, the IBiG-MSPA, performed better than the BiG-SAM, iBiG-SAM and aiBiG-SAM in terms of training and testing accuracy for each dataset. Therefore, based on our study, the IBiG-MSPA could classify the chosen datasets with greater accuracy than the other methods.
We can observe that, for the Heart Disease UCI dataset, the accuracies of the existing algorithms were around 70%, while our proposed algorithm achieved accuracy over 80%. In a practical scenario, even small improvements in classification accuracy can have significant effects. For instance, in the case of medical diagnoses, for which the Heart Disease UCI dataset is often used as a benchmark, a slight increase in accuracy can lead to more reliable predictions and better patient outcomes. It may help identify more individuals at risk or improve the overall efficiency of the classification process, leading to appropriate treatments. This observation applies equally well to the other two datasets and datasets similar to them.

6. Conclusions

We first introduced an inertial viscosity modified SP algorithm (IVMSPA). Then, the strong convergence of the IVMSPA was proved under mild conditions with the control sequence. Next, we proposed the inertial bilevel gradient modified SP algorithm (IBiG-MSPA) to solve the convex bilevel optimization problem. Finally, we applied our method to classifying the Diabetes, Heart Disease UCI and Iris datasets. The numerical experiments showed that the IBiG-MSPA had higher efficiency than the BiG-SAM, iBiG-SAM and aiBiG-SAM.
The performances of the algorithms discussed in this paper depend in part on the characteristics of the datasets. In order to improve the accuracy, one needs to address issues related to the preprocessing of the data, such as feature selection, missing data and dataset imbalances. The goal of our future research is to develop new techniques or processes that can improve the efficiency of algorithms in classifying imbalanced and big datasets.

Author Contributions

Conceptualization, R.W.; Formal analysis, K.J. and R.W.; Investigation, K.J. and R.W.; Methodology, R.W.; Software, K.J.; Supervision, S.S. and Y.J.C.; Validation, S.S., Y.J.C., A.K. and R.W.; Visualization, R.W.; Writing—original draft, K.J.; Writing—review and editing, S.S., Y.J.C., A.K. and R.W. All authors have discussed the results and approved the final manuscript.

Funding

NSRF (grant number B05F640183).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used in this paper were obtained from https://archive.ics.uci.edu (accessed on 30 October 2022).

Acknowledgments

This research received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant number B05F640183), and it was also partially supported by Chiang Mai University and Ubon Ratchathani University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Garcia-Herreros, P.; Zhang, L.; Misra, P.; Arslan, E.; Mehta, S.; Grossmann, I.E. Mixed-integer bilevel optimization for capacity planning with rational markets. Comput. Chem. Eng. 2016, 86, 33–47. [Google Scholar] [CrossRef]
  2. Maravillo, H.; Camacho-Vallejo, J.; Puerto, J.; Labbé, M. A market regulation bilevel problem: A case study of the mexican petrochemical industry. Omega 2019, 97, 102–105. [Google Scholar] [CrossRef] [Green Version]
  3. Marcotte, P. Network design problem with congestion effects: A case of bilevel programming. Math. Program. 1986, 34, 142–162. [Google Scholar] [CrossRef]
  4. Migdalas, A. Bilevel programming in traffic planning: Models, methods and challenge. J. Glob. Optim. 1995, 7, 381–405. [Google Scholar] [CrossRef]
  5. Clark, P.A.; Westerberg, A.W. Bilevel programming for steady-state chemical process design–I. Fundamentals and algorithms. Comput. Chem. Eng. 1990, 14, 87–97. [Google Scholar] [CrossRef]
  6. Dempe, S. Foundations of Bi-Level Programming, 1st ed.; Springer: New York, NY, USA, 2002. [Google Scholar]
  7. Bard, J.F. Coordination of a multidivisional organization through two levels of management. Omega 1983, 11, 457–468. [Google Scholar] [CrossRef]
  8. Dan, T.; Marcotte, P. Competitive facility location with selfish users and queues. Oper. Res. 2019, 67, 479–497. [Google Scholar] [CrossRef]
  9. Gabriel, S.A.; Conejo, A.J.; Fuller, J.D.; Hobbs, B.F.; Ruiz, C. Complementarity Modeling in Energy, 1st ed.; Springer: New York, NY, USA, 2012. [Google Scholar]
  10. Wogrin, S.; Pineda, S.; Tejada-Arango, D.A. Applications of bilevel optimization in energy and electricity markets. Bilevel Optim. 2020, 161, 139–168. [Google Scholar]
  11. Nimana, N.; Petrot, N. Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems. Optimization 2019, 70, 1307–1336. [Google Scholar] [CrossRef]
  12. Janngam, K.; Suantai, S. An inertial modified S-Algorithm for convex minimization problems with directed graphs and their applications in classification problems. Mathematics 2022, 10, 4442. [Google Scholar] [CrossRef]
  13. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward-backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44. [Google Scholar] [CrossRef]
  14. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  15. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  16. Bruck, R.E., Jr. On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 1977, 61, 159–164. [Google Scholar] [CrossRef] [Green Version]
  17. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef] [Green Version]
  18. Figueiredo, M.; Nowak, R. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916. [Google Scholar] [CrossRef] [Green Version]
  19. Hale, E.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for l1-Regularized Minimization with Applications to Compressed Sensing; Department of Computational and Applied Mathematics, Rice University: Houston, TX, USA, 2007. [Google Scholar]
  20. Liang, J.; Schonlieb, C.B. Improving fista: Faster, smarter and greedier. arXiv 2018, arXiv:1811.01430. [Google Scholar]
  21. Moudafi, A. Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef] [Green Version]
  22. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291. [Google Scholar] [CrossRef] [Green Version]
  23. Jailoka, P.; Suantai, S. and Hanjing, A. A fast viscosity forward-backward algorithm for convex minimization problems with an application in image recovery. Carpathian J. Math. 2020, 37, 449–461. [Google Scholar] [CrossRef]
  24. Tan, B.; Zhou, Z.; Li, S. Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. 2022, 68, 1387–1411. [Google Scholar] [CrossRef]
  25. Beck, A.; Sabach, S. A first order method for finding minimal norm-like solutions of convex optimization problems. Math. Program. 2014, 147, 25–46. [Google Scholar] [CrossRef]
  26. Sabach, S.; Shtern, S. A first order method for solving convex bilevel optimization problems. SIAM J. Optim. 2017, 27, 640–660. [Google Scholar] [CrossRef] [Green Version]
  27. Shehu, Y.; Vuong, P.T.; Zemkoho, A. An inertial extrapolation method for convex simple bilevel optimization. Optim. Methods Softw. 2019, 2019, 1–19. [Google Scholar] [CrossRef]
  28. Duan, P.; Zhang, Y. Alternated and multi-step inertial approximation methods for solving convex bilevel optimization problems. Optimization 2022, 2022, 1–30. [Google Scholar] [CrossRef]
  29. Jiang, R.; Abolfazli, N.; Mokhtari, A.; Hamedani, E.Y. A Conditional Gradient-based Method for Simple Bilevel Optimization with Convex Lower-level Problem. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 25–27 April 2023; pp. 10305–10323. [Google Scholar]
  30. Thongsri, P.; Panyanak, B.; Suantai, S. A New Accelerated Algorithm Based on Fixed Point Method for Convex Bilevel Optimization Problems with Applications. Mathematics 2023, 11, 702. [Google Scholar] [CrossRef]
  31. Nakajo, K.; Shimoji, K.; Takahashi, W. Strong convergence to a common fixed point of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 2007, 8, 11–34. [Google Scholar]
  32. Aoyama, K.; Kimura, Y. Strong convergence theorems for strongly nonexpansive sequences. Appl. Math. Comput. 2011, 217, 7537–7545. [Google Scholar] [CrossRef]
  33. Aoyama, K.; Kohsaka, F.; Takahashi, W. Strong convergence theorems by shrinking and hybrid projection methods for relatively nonexpansive mappings in Banach spaces. In Nonlinear Analysis and Convex Analysis, Proceedings of the 5th International Conference on Nonlinear Analysis and Convex Analysis, Hsinchu, Taiwan, 31 May–4 June 2007; Yokohama Publishers: Yokohama, Japan, 2009; pp. 7–26. [Google Scholar]
  34. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
  35. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  36. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009. [Google Scholar]
  37. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 724–750. [Google Scholar] [CrossRef]
  38. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. Ussr Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  39. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547. [Google Scholar]
  40. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef] [Green Version]
  41. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  42. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  43. Ding, S.F.; Jia, W.K.; Su, C.Y.; Zhang, L.W. Research of neural network algorithm based on factor analysis and cluster analysis. Neural Comput. Appl. 2011, 20, 297–302. [Google Scholar] [CrossRef]
  44. Razavi, S.; Tolson, B.A. A new formulation for feedforward neural networks. IEEE Trans. Neural Netw. 2011, 22, 1588–1598. [Google Scholar] [CrossRef]
  45. Chen, Y.; Zheng, W.X. Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw. 2012, 25, 14–20. [Google Scholar] [CrossRef]
  46. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  47. Rong, H.J.; Ong, Y.S.; Tan, A.H.; Zhu, Z. A fast pruned-extreme learning machine for classification problem. Neurocomputing 2008, 72, 359–366. [Google Scholar] [CrossRef]
  48. Huang, G.B.; Ding, X.; Zhou, H. Optimization method based extreme learning machine for classification. Neurocomputing 2010, 74, 155–163. [Google Scholar] [CrossRef]
  49. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  50. Smith, J.W.; Everhart, J.E.; Dickson, W.C.; Knowler, D.C.; Johannes, R.S. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Symposium on Computer Applications and Medical Care; IEEE Computer Society Press: New York, NY, USA, 1998; pp. 261–265. [Google Scholar]
  51. Lichman, M. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu (accessed on 20 October 2022).
  52. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188. [Google Scholar] [CrossRef]
Table 1. Parameter selection for the IBiG-MSPA, BiG-SAM, iBiG-SAM and aiBiG-SAM.
Methods | Settings
IBiG-MSPA | $s = 0.01$, $c_n = \frac{1}{L_f}$, $\alpha_n = \frac{1}{50n}$, $\beta_n = \gamma_n = 0.5$, $\tau_n = \frac{10^{20}}{n}$
BiG-SAM | $\lambda = 0.01$, $c = \frac{1}{L_f}$, $\gamma_n = \frac{2(0.1)}{\frac{1}{2} + \frac{c L_f}{4}}$
iBiG-SAM | $\lambda = 0.01$, $c = \frac{1}{L_f}$, $\alpha = 3$, $\gamma_n = \frac{2(0.1)}{\frac{1}{2} + \frac{c L_f}{4}}$, $\beta_n = \frac{\gamma_n}{n^{0.01}}$, $\theta_n = \begin{cases} \min\big\{\frac{n}{n + \alpha - 1}, \frac{\beta_n}{\|x_n - x_{n-1}\|}\big\} & \text{if } x_n \neq x_{n-1}, \\ \frac{n}{n + \alpha - 1} & \text{otherwise} \end{cases}$
aiBiG-SAM | $\lambda = 0.01$, $c = \frac{1}{L_f}$, $\gamma_n = \frac{2(0.1)}{\frac{1}{2} + \frac{c L_f}{4}}$, $\beta_n = \frac{\gamma_n}{n^{0.01}}$, $\theta_n$ as in the iBiG-SAM
Table 2. Diabetes, Heart Disease UCI and Iris datasets, with 30% of each dataset used for testing and 70% for training.
Datasets | Features | Testing Set | Training Set
Diabetes | 9 | 230 | 538
Heart Disease UCI | 14 | 90 | 213
Iris | 4 | 45 | 105
Table 3. Numbers of iterations and hidden nodes specified for each data collection.
Datasets | Number of Iterations ($\bar{I}$) | Number of Hidden Nodes (M)
Diabetes | 200 | 100
Heart Disease UCI | 100 | 60
Iris | 300 | 30
Table 4. Accuracy of predictions using various algorithms.
Dataset | IBiG-MSPA (Ac.Train / Ac.Test) | BiG-SAM (Ac.Train / Ac.Test) | iBiG-SAM (Ac.Train / Ac.Test) | aiBiG-SAM (Ac.Train / Ac.Test)
Diabetes | 77.11 / 81.08 | 71.98 / 76.13 | 72.34 / 76.58 | 70.88 / 73.42
Heart Disease UCI | 85.71 / 83.87 | 74.76 / 74.19 | 82.38 / 78.49 | 83.81 / 79.57
Iris | 99.05 / 100.00 | 94.29 / 95.56 | 94.29 / 95.56 | 94.29 / 95.56