Article

Novel Global Harmony Search Algorithm for General Linear Complementarity Problem

School of Mathematics and Computer Science, Shaanxi University of Technology, Hanzhong 723001, China
Axioms 2022, 11(8), 370; https://doi.org/10.3390/axioms11080370
Submission received: 2 May 2022 / Revised: 13 July 2022 / Accepted: 24 July 2022 / Published: 28 July 2022
(This article belongs to the Special Issue Fractional-Order Equations and Optimization Models in Engineering)

Abstract

The linear complementarity problem (LCP) is studied. By reformulating the general LCP as a system of nonlinear equations via an NCP-function, solving the LCP becomes equivalent to solving an unconstrained optimization model, which can be handled by a recently proposed algorithm named novel global harmony search (NGHS). The NGHS algorithm overcomes the main disadvantage of interior-point methods, namely the need for a feasible initial point. Numerical results show that the NGHS algorithm has a higher rate of convergence than the other HS variants. For an LCP with a unique solution, NGHS converges to that unique solution. For an LCP with multiple solutions, NGHS can find as many solutions as possible. Meanwhile, for an unsolvable LCP, all algorithms terminate at the point with minimum error.

1. Introduction

Consider the general linear complementarity problem (LCP):
$x \ge 0, \quad Mx + q \ge 0, \quad x^T (Mx + q) = 0,$
where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$ are given. The LCP is a fundamental problem in mathematical programming. It is known that any linear programming and quadratic programming problem can be reformulated as an LCP. The LCP also has a wide range of applications in economics and engineering [1].
Many direct methods have been proposed for its solution. The most famous among the pivotal methods for the LCP is Lemke's method; the book by Cottle et al. [2] is a good reference for pivoting methods. Interior-point methods (IPMs) are another important class of methods for the LCP. Modern interior-point methods were introduced by Karmarkar in 1984 for linear programming [3]. Kojima et al. [4] proposed a polynomial-time algorithm for the monotone LCP under the assumption that the set of feasible interior points is nonempty. The algorithms in the class of interior-point methods for the monotone LCP share the common feature of generating a sequence in the positive orthant of $\mathbb{R}^n$ under the assumption that a feasible initial point is known. However, finding a feasible initial point to start an interior-point method is a very difficult task. To overcome this difficulty, recent studies have focused on new interior-point algorithms that do not require a feasible initial point. In 1993, Kojima et al. presented the first infeasible interior-point algorithm with global convergence [5]; soon after, Zhang [6] and Wright [7] introduced this technique to the linear complementarity problem.
The study of the LCP has mainly concentrated on theory and algorithms: the former focuses on the existence and uniqueness of solutions [8,9,10,11], while the latter mainly designs efficient algorithms for the LCP, such as interior-point algorithms, smoothing function methods, neural network methods, matrix splitting iteration methods, kernel function methods, etc. [12,13,14,15,16,17]. In recent years, the stochastic LCP [18], symmetric cone complementarity problems [19], tensor complementarity problems [20,21,22], the parametric LCP [23,24], the vertical LCP [25], and sparse solutions of the LCP [26,27] have become research hotspots. Although most methods have polynomial-time complexity for the monotone LCP with a unique solution once a feasible initial point is set properly, they cannot be applied to the nonmonotone LCP. Moreover, for an LCP with multiple solutions, how to obtain as many solutions as possible has rarely been studied.
For these reasons, in this paper we use a recently proposed algorithm named novel global harmony search (NGHS) for solving general linear complementarity problems, including LCPs with a unique solution or multiple solutions and unsolvable LCPs. We reformulate the LCP as an unconstrained optimization model and then use the NGHS algorithm to solve it. If the LCP is solvable (unique solution or multiple solutions), the objective function (merit function) converges to zero. If the LCP is unsolvable, the objective function converges to its minimum value, attained at the point with minimum error. Thus, this method can also be used to test whether an LCP has a solution. Numerical results, compared with the classical HS and other HS variants, show that the NGHS method has good convergence properties.
This paper is outlined as follows. In Section 2, we give some lemmas that ensure the solution to LCP (1) exists and reform the LCP as an unconstrained optimization model. In Section 3, the NGHS is discussed and compared with other HS variants. Numerical results and comparisons are provided in Section 4 by solving some given LCPs. Section 5 contains the concluding remarks.

2. Preliminaries and LCP as Optimization Model

We briefly summarize some of the major results in existence and uniqueness of the solution for standard LCP. This research has identified a wide variety of classes of square matrices that correspond to certain properties related to the LCP [28].
In 1958, Samelson, Thrall, and Wesler established the role of the P-matrix, i.e., a square matrix all of whose principal minors are positive. For the LCP (1), this leads to the theorem that (1) has a unique solution for every $q \in \mathbb{R}^n$ if and only if M is a P-matrix. For convenience, positive (semi)definiteness is also commonly used to establish the existence of a solution of the standard LCP.
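As an aside, the P-matrix property can be checked directly for small matrices by enumerating all principal minors. The following Python sketch is our own illustration (the function name and tolerance are our choices, and the cost is exponential in n, so it is only meant for small examples):

```python
import numpy as np
from itertools import combinations

def is_p_matrix(M, tol=1e-12):
    """Brute-force P-matrix test: every principal minor (determinant of a
    principal submatrix) must be positive. Exponential in n."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            # principal submatrix with rows and columns in idx
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False
    return True
```

For instance, a symmetric positive definite matrix passes this test, while a matrix with a zero diagonal entry, such as [[0, 1], [1, 0]], fails it.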
Definition 1.
The matrix M is positive semidefinite if $d^T M d \ge 0$ for every $d \in \mathbb{R}^n$.
Definition 2.
The matrix M is positive definite if $d^T M d > 0$ for every $d \in \mathbb{R}^n$, $d \ne 0$.
Let $A \in \mathbb{R}^{n \times n}$ and split $A = H + S$, where $H = \frac{1}{2}(A + A^T)$ and $S = \frac{1}{2}(A - A^T)$. Since $H^T = H$ and $S^T = -S$, we call $A = H + S$ the Hermitian/skew-Hermitian splitting [29,30] and $H$ the symmetric component matrix. Thus, for any $A \in \mathbb{R}^{n \times n}$, if $A^T = A$ and the eigenvalues of $A$ are all greater than (or equal to) 0, then $A$ is a positive (semi)definite matrix. If $A^T \ne A$, then $A$ is positive (semi)definite if and only if $H$ is positive (semi)definite.
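This criterion is easy to apply numerically. The sketch below is an illustrative Python helper (not code from the paper) that tests positive definiteness of a possibly nonsymmetric real matrix via the eigenvalues of its symmetric component H:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """A real square matrix A is positive definite iff its symmetric
    component H = (A + A^T)/2 has only positive eigenvalues.
    eigvalsh is designed for symmetric matrices and returns real values."""
    A = np.asarray(A, dtype=float)
    H = 0.5 * (A + A.T)   # symmetric (Hermitian) component
    return bool(np.all(np.linalg.eigvalsh(H) > tol))
```

For a symmetric matrix the test reduces to checking the eigenvalues of the matrix itself; for a nonsymmetric matrix, the skew component S contributes nothing to the quadratic form and can be ignored.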
Lemma 1.
When M is a positive definite matrix, LCP has a unique solution.
Lemma 2.
When M is a positive semidefinite matrix, the solution set is nonempty and convex if the feasible set is nonempty.
Lemmas 1 and 2 give only sufficient conditions, not necessary ones. The LCP is called monotone if M is positive semidefinite.
Let $\phi : \mathbb{R}^2 \to \mathbb{R}$ be defined by
$\phi(a, b) = \sqrt{a^2 + b^2} - (a + b).$
Then
$\phi(a, b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.$
From the characterization of NCP-function [31], LCP (1) can be recast as a system of nonlinear equations defined by
$\Phi(x) \equiv \begin{pmatrix} \phi(x_1, f_1(x)) \\ \phi(x_2, f_2(x)) \\ \vdots \\ \phi(x_{n-1}, f_{n-1}(x)) \\ \phi(x_n, f_n(x)) \end{pmatrix} = 0,$
where $f_i(x) = (Mx + q)_i$, $i = 1, 2, \ldots, n$. Define a merit function (or objective function)
$\|\Phi(x)\|^2 \equiv \sum_{i=1}^{n} \phi(x_i, f_i(x))^2,$
so (1) is equivalent to solving the unconstrained optimization model
$\min_{x \in \mathbb{R}^n} \|\Phi(x)\|^2 = \sum_{i=1}^{n} \phi(x_i, f_i(x))^2.$
If the LCP is solvable (unique solution or multiple solutions), $\|\Phi(x)\|^2$ converges to zero. If the LCP is unsolvable, $\|\Phi(x)\|^2$ converges to its minimum value, which is attained at the point with minimum error.
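For concreteness, the merit function can be sketched in a few lines of Python (an illustrative helper, not the paper's MATLAB code; NumPy and the function names are our own choices):

```python
import numpy as np

def phi(a, b):
    """Fischer-Burmeister NCP-function: phi(a, b) = 0  <=>  a >= 0, b >= 0, ab = 0."""
    return np.sqrt(a**2 + b**2) - (a + b)

def merit(x, M, q):
    """Merit function ||Phi(x)||^2 = sum_i phi(x_i, (Mx+q)_i)^2;
    it is zero exactly at solutions of the LCP."""
    f = M @ x + q               # f_i(x) = (Mx + q)_i
    return float(np.sum(phi(x, f)**2))
```

For example, with $M = I$ and $q = -(1, \ldots, 1)^T$ the unique solution is $x = (1, \ldots, 1)^T$, where the merit function vanishes, while it is strictly positive elsewhere.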
In the following, we use a recently proposed algorithm named novel global harmony search (NGHS) to solve some given LCPs.

3. NGHS Algorithm

3.1. HS Algorithm

Traditional optimization methods, such as interior-point methods, smoothing function methods, neural network methods, and matrix splitting iteration methods, have taken on major roles in solving real-world optimization problems. Nevertheless, their common drawback, namely requiring an initial setting of the decision variables inside the feasible region, generates a demand for other types of algorithms, such as metaheuristic algorithms. These algorithms have been identified as efficient, intelligent techniques, and in recent years various metaheuristic algorithms have been proposed to solve a wide range of complex real-world problems.
The harmony search (HS) algorithm is a metaheuristic algorithm developed by Geem et al. [32,33], inspired by the improvisation of music players. Musicians first play music randomly with their instruments, and this harmony is kept in the musician's memory. In the next stage, according to the harmonies in memory, the musician plays new music that is changed from the previous one. The concept of HS is based on the idea that, during the improvisation process, musicians try different combinations of memorized pitches, which is analogous to the optimization process applied to most engineering problems. Therefore, a feasible solution is called a harmony, and each decision variable corresponds to a note, which generates a value for finding the global optimum.
The HS algorithm possesses several advantages over traditional optimization techniques: (1) it is a simple metaheuristic algorithm that does not require an initial setting of the decision variables; (2) it uses stochastic random searches, so derivative information is not necessary; and (3) it has few parameters. For these reasons, HS has been demonstrated in several studies to be a useful optimization algorithm for various engineering applications owing to its convenient implementation, rapid convergence, and reasonable computational cost. This can be found in the literature [34,35,36,37,38,39,40].
The steps in the procedure of the classical harmony search algorithm are as follows:
Step 1. Initialize the problem and algorithm parameters. The optimization problem is specified as follows:
Minimize $f(x)$ subject to $x_i \in X_i$, $i = 1, 2, \ldots, n$,
where $f(x)$ is an objective function; $x$ is the set of decision variables $x_i$; $n$ is the number of decision variables; and $X_i$ is the set of possible values (the search space) for each decision variable, $X_i = \{ x_i \mid x_i^L \le x_i \le x_i^U \}$. The HS algorithm parameters are also specified in this step. These are the harmony memory size (HMS), i.e., the number of solution vectors in the harmony memory; the harmony memory considering rate (HMCR); the pitch adjusting rate (PAR); and the number of improvisations (Tmax), i.e., the stopping criterion.
Step 2. Initialize the harmony memory. The HM matrix is filled with as many randomly generated solution vectors as the HMS:
$\mathrm{HM} = \begin{bmatrix} x^1 & f(x^1) \\ x^2 & f(x^2) \\ \vdots & \vdots \\ x^{\mathrm{HMS}} & f(x^{\mathrm{HMS}}) \end{bmatrix} = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_n^1 & f(x^1) \\ x_1^2 & x_2^2 & \cdots & x_n^2 & f(x^2) \\ \vdots & \vdots & & \vdots & \vdots \\ x_1^{\mathrm{HMS}} & x_2^{\mathrm{HMS}} & \cdots & x_n^{\mathrm{HMS}} & f(x^{\mathrm{HMS}}) \end{bmatrix}$
Step 3. Improvise a new harmony. Generating a new harmony is called 'improvisation'. A new harmony vector, $x' = (x'_1, x'_2, \ldots, x'_n)$, is generated based on three rules: (1) memory consideration, (2) pitch adjustment, and (3) random selection. The procedure works as shown in Figure 1.
Here, $x'_i$ ($i = 1, 2, \ldots, n$) is the $i$th component of $x'$, and $x_i^j$ ($j = 1, 2, \ldots, \mathrm{HMS}$) is the $i$th component of the $j$th candidate solution vector in HM. Both $r$ and rand() are uniformly distributed random numbers in (0, 1), and bw is an arbitrary distance bandwidth.
Step 4. Update harmony memory. If the new harmony vector $x' = (x'_1, x'_2, \ldots, x'_n)$ is better than the worst harmony in the HM, judged in terms of the objective function value, the new harmony is included in the HM and the existing worst harmony is excluded from the HM.
Step 5. Check the stopping criterion. If the stopping criterion (Tmax) is satisfied, computation is terminated. Otherwise, Steps 3 and 4 are repeated.
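Steps 1 through 5 can be condensed into a short program. The code below is an illustrative Python sketch under common conventions for the pitch-adjustment rule (the default parameter values and the function name are our own choices, not the authors' implementation):

```python
import numpy as np

def harmony_search(f, lb, ub, HMS=10, HMCR=0.85, PAR=0.35, bw=0.01,
                   Tmax=5000, rng=None):
    """Minimal sketch of the classical HS (Steps 1-5).
    f: objective to minimize; lb, ub: arrays of lower/upper bounds."""
    rng = np.random.default_rng(rng)
    n = len(lb)
    HM = rng.uniform(lb, ub, size=(HMS, n))      # Step 2: random harmony memory
    fit = np.array([f(x) for x in HM])
    for _ in range(Tmax):                        # Step 5: stopping criterion
        x = np.empty(n)
        for i in range(n):                       # Step 3: improvise a new harmony
            if rng.random() < HMCR:              #   memory consideration
                x[i] = HM[rng.integers(HMS), i]
                if rng.random() < PAR:           #   pitch adjustment
                    x[i] = np.clip(x[i] + (2*rng.random() - 1)*bw, lb[i], ub[i])
            else:                                #   random selection
                x[i] = rng.uniform(lb[i], ub[i])
        worst = np.argmax(fit)                   # Step 4: replace worst if improved
        fx = f(x)
        if fx < fit[worst]:
            HM[worst], fit[worst] = x, fx
    best = np.argmin(fit)
    return HM[best], fit[best]
```

On a simple convex test function such as the sphere function, this sketch steadily improves the best harmony in memory, reproducing the behavior described in Steps 3 and 4.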

3.2. NGHS Algorithm

Experiments with the classical HS algorithm on benchmark problems show that it suffers from premature and/or false convergence and from slow convergence, especially over multimodal fitness landscapes. To enrich the searching behavior and to avoid being trapped in local optima, several improved HS algorithms have been presented. The NGHS algorithm modifies the improvisation step of the HS such that the new harmony can mimic the global-best harmony in the HM [41,42,43,44]. In Step 3, it works as shown in Figure 2.
Here, 'best' and 'worst' are the indexes of the best harmony and the worst harmony in HM, respectively. Both $r$ and rand() are uniformly distributed random numbers in (0, 1).
The NGHS method has strong global search ability in the early stage of the optimization process and strong local search ability in the late stage. In the early stage, the solution vectors are scattered over the feasible space, so the steps are large, which benefits the global search; in the late stage, all nonbest solution vectors tend to move toward the global-best solution vector, so most solution vectors are close to each other, most steps are small, and most trust regions are narrow, which benefits the local search of the NGHS.
The NGHS and the HS are different in the following:
(1) In Step 1, the harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and adjusting step (bw) are excluded from the NGHS, and the genetic mutation probability ($p_m$) is included in the NGHS;
(2) The HS carries out pitch adjustment with probability HMCR × PAR and random selection with probability 1 − HMCR, while the NGHS carries out genetic mutation with probability $p_m$. These operations serve the same purpose: they preserve the diversity of individuals, which effectively prevents both algorithms from being trapped in local optima;
(3) In Step 4, the NGHS replaces the worst harmony $x^{\mathrm{worst}}$ in HM with the new harmony $x'$ even if $x'$ is worse than $x^{\mathrm{worst}}$.
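For comparison with the HS sketch above, the following is an illustrative Python sketch of the NGHS loop, following the position-updating and genetic-mutation rules reported for NGHS [41,42]; it is our own code, not the authors' implementation:

```python
import numpy as np

def nghs(f, lb, ub, HMS=10, pm=0.005, Tmax=5000, rng=None):
    """Sketch of NGHS: the new harmony moves from the worst harmony toward
    (and possibly past) the global-best one, rare genetic mutation keeps
    diversity, and the worst harmony is always replaced (difference (3))."""
    rng = np.random.default_rng(rng)
    n = len(lb)
    HM = rng.uniform(lb, ub, size=(HMS, n))
    fit = np.array([f(x) for x in HM])
    for _ in range(Tmax):
        best, worst = np.argmin(fit), np.argmax(fit)
        x = np.empty(n)
        for i in range(n):
            # edge of the trust region: mirror of worst through best
            xr = np.clip(2.0*HM[best, i] - HM[worst, i], lb[i], ub[i])
            x[i] = HM[worst, i] + rng.random()*(xr - HM[worst, i])
            if rng.random() < pm:                  # genetic mutation
                x[i] = rng.uniform(lb[i], ub[i])
        HM[worst], fit[worst] = x, f(x)            # always replace the worst
    best = np.argmin(fit)
    return HM[best], fit[best]
```

On a simple sphere function this sketch shows the qualitative behavior described above: fast collapse of the memory toward the best harmony, with the rare genetic mutation preserving diversity.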

4. Computational Results

In this section, we perform some numerical tests to illustrate the implementation and efficiency of the NGHS method on some linear complementarity problems. All experiments were performed in MATLAB R2009a on an Intel(R) Core(TM) quad-core 3.3 GHz CPU with 2 GB RAM.

4.1. Problems

4.1.1. LCP with a Unique Solution

LCP1. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 3 & 2 & 1 \\ 2 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad q = \begin{pmatrix} 14 \\ -11 \\ -7 \end{pmatrix}.$
The eigenvalues of the symmetric matrix M are $\lambda_1 = 0.3080$, $\lambda_2 = 0.6431$, $\lambda_3 = 5.0489$; thus, M is a (symmetric) positive definite matrix, and LCP1 has a unique solution $x^* = (0, 4, 3)^T$.
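As a quick numerical sanity check (our own illustration; the signs of q are the ones consistent with the stated unique solution), one can verify that $x^* = (0, 4, 3)^T$ satisfies all three conditions of LCP (1):

```python
import numpy as np

M = np.array([[3., 2., 1.],
              [2., 2., 1.],
              [1., 1., 1.]])
q = np.array([14., -11., -7.])     # signs consistent with the stated solution
x = np.array([0., 4., 3.])         # claimed unique solution of LCP1

w = M @ x + q                      # complementarity slack w = Mx + q
assert np.all(np.linalg.eigvalsh(M) > 0)            # M is positive definite
assert np.all(x >= 0) and np.all(w >= 0) and x @ w == 0.0
```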
LCP2. Consider the following LCP, where
$M = \begin{pmatrix} 4 & -1 & & & \\ -1 & 4 & -1 & & \\ & -1 & 4 & \ddots & \\ & & \ddots & \ddots & -1 \\ & & & -1 & 4 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}.$
Here M is a tridiagonal matrix whose eigenvalues are all greater than 0; thus, M is a (symmetric) positive definite matrix, and LCP2 has a unique solution $x^* = (0.3660, 0.4641, \ldots, 0.4641, 0.3660)^T$.
LCP3. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 2 & 2 & \cdots & 2 \\ 2 & 5 & 6 & \cdots & 6 \\ 2 & 6 & 9 & \cdots & 10 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 2 & 6 & 10 & \cdots & 4n-3 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}.$
Since the eigenvalues of the matrix M are all greater than 0, M is a (symmetric) positive definite matrix, and LCP3 has a unique solution $x^* = (1, 0, \ldots, 0)^T$.
LCP 4. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1/n & 0 & \cdots & 0 \\ 0 & 2/n & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & n/n \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}.$
Since M is a (symmetric) positive definite matrix, LCP4 has a unique solution $x^* = (n, n/2, \ldots, 1)^T$.
LCP 5. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 3 & 0 & -1 & 0 \\ 1 & 3 & 1 & 0 \\ 0 & 1 & 4 & 2 \\ 1 & 1 & 1 & 5 \end{pmatrix}, \quad q = \begin{pmatrix} -2 \\ 3 \\ -4 \\ 5 \end{pmatrix}.$
Based on the Hermitian/skew-Hermitian splitting, the eigenvalues of the symmetric component matrix $\frac{1}{2}(M + M^T)$ are all greater than 0; thus, M is a (nonsymmetric) positive definite matrix, and LCP5 has a unique solution $x^* = (1, 0, 1, 0)^T$.
LCP 6. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 2 & 2 & 2 \\ 0 & 1 & 2 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ -1 \\ -1 \end{pmatrix}.$
Since the eigenvalues of the symmetric component matrix $\frac{1}{2}(M + M^T)$ are all greater than or equal to 0, M is a (nonsymmetric) positive semidefinite matrix, and LCP6 has a unique solution $x^* = (0, 0, \ldots, 0, 1)^T$.
LCP 7. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 2 & 1 & 1 & 1 \\ 1 & 2 & 0 & 1 \\ 1 & 0 & 1 & 2 \\ -1 & -1 & -2 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} -8 \\ -6 \\ -4 \\ 3 \end{pmatrix}.$
Since the eigenvalues of the symmetric component matrix $\frac{1}{2}(M + M^T)$ are all greater than or equal to 0, M is a (nonsymmetric) positive semidefinite matrix, and LCP7 has a unique solution $x^* = (2.5, 0.5, 0, 2.5)^T$.
In fact, for any $x = (x_1, x_2, x_3, x_4)^T$,
$x^T M x = 2x_1^2 + 2x_1 x_2 + 2x_1 x_3 + 2x_2^2 + x_3^2 = (x_1 + x_2)^2 + x_2^2 + (x_1 + x_3)^2 \ge 0,$
so M is a positive semidefinite matrix, too.
LCP8. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 0 & 0.5 & 0 & 1 & 3 & 0 \\ 0 & 0.5 & 0 & 0 & 2 & 1 & 1 \\ 0.5 & 0 & 1 & 0.5 & 1 & 2 & 4 \\ 0 & 0 & 0.5 & 0.5 & 1 & 1 & 0 \\ 1 & 2 & 1 & 1 & 0 & 0 & 0 \\ 3 & 1 & 2 & 1 & 0 & 0 & 0 \\ 0 & 1 & 4 & 0 & 0 & 0 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} 1 \\ 3 \\ 1 \\ 1 \\ 5 \\ 4 \\ 1.5 \end{pmatrix}.$
Since the eigenvalues of the symmetric component matrix $\frac{1}{2}(M + M^T)$ are all greater than or equal to 0, M is a (nonsymmetric) positive semidefinite matrix, and LCP8 has a unique solution $x^* = (0.0909, 2.3636, 0, 0.1818, 0.9091, 0, 0)^T$.
In fact, for any $x = (x_1, x_2, \ldots, x_7)^T$,
$x^T M x = x_1^2 - x_1 x_3 + \tfrac{1}{2} x_2^2 + x_3^2 + x_3 x_4 + \tfrac{1}{2} x_4^2 = \left(x_1 - \tfrac{1}{2} x_3\right)^2 + \tfrac{1}{2} x_2^2 + \tfrac{3}{4}\left(x_3 + \tfrac{2}{3} x_4\right)^2 + \tfrac{1}{6} x_4^2 \ge 0,$
so M is a positive semidefinite matrix, too.
LCP 9. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & -4 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} -5 \\ -5 \\ 1 \\ 1 \end{pmatrix}.$
For arbitrary $x = (x_1, x_2, x_3, x_4)^T$,
$x^T M x = (x_1 - 2x_2)^2 - 3x_2^2.$
If $\tilde{x} = (2, 1, 0, 0)^T$, then $\tilde{x}^T M \tilde{x} = -3 < 0$, which demonstrates that M is not a positive semidefinite matrix. LCP9 has a unique solution $x^* = (1, 1, 8, 4)^T$.
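Both claims about LCP9, the indefiniteness of M and the stated solution, can be checked numerically. This snippet is our own illustration; the signs shown are the ones consistent with the quadratic form and solution given in the text:

```python
import numpy as np

M = np.array([[ 1., -4.,  1.,  0.],
              [ 0.,  1.,  0.,  1.],
              [-1.,  0.,  0.,  0.],
              [ 0., -1.,  0.,  0.]])
q = np.array([-5., -5., 1., 1.])

xt = np.array([2., 1., 0., 0.])    # witness vector for indefiniteness
assert xt @ M @ xt == -3.0         # matches (x1 - 2*x2)^2 - 3*x2^2 at xt

x = np.array([1., 1., 8., 4.])     # claimed unique solution
w = M @ x + q                      # here w = 0, so x is a solution
assert np.all(x >= 0) and np.all(w >= 0) and x @ w == 0.0
```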
LCP 10. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 7 & 0 & 4 & 2 & 2 & 4 \\ 2 & 8 & 1 & 5 & 9 & 0 \\ 1 & 0 & 3 & 0 & 3 & 2 \\ 0 & 1 & 4 & 6 & 1 & 0 \\ 8 & 0 & 1 & 2 & 0 & 3 \\ 5 & 0 & 9 & 2 & 6 & 8 \end{pmatrix}, \quad q = \begin{pmatrix} -7.3 \\ 7.9 \\ -3.4 \\ 10.2 \\ -5.9 \\ 7.2 \end{pmatrix}.$
For arbitrary $x = (x_1, x_2, \ldots, x_6)^T$,
$x^T M x = 7x_1^2 + (9x_6 + 2x_2 + 5x_3 + 10x_5 + 2x_4)x_1 + (2x_6 + 5x_2 + 6x_4 + 2x_5)x_4 + (8x_2 + x_4)x_2 + (x_5 + x_2 + 3x_3 + 4x_4 + 9x_6)x_3 + (2x_3 + 3x_5 + 8x_6)x_6 + (6x_6 + 9x_2 + 3x_3 + x_4)x_5.$
If $\tilde{x} = (0, 0, 0, 0, 8, 3)^T$, then $\tilde{x}^T M \tilde{x} = -144 < 0$, which demonstrates that M is not a positive semidefinite matrix. LCP10 has a unique solution $x^* = (0.7, 0, 0.3, 0, 0.6, 0)^T$.

4.1.2. LCP with Multiple Solutions

LCP 11. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 0 & 0 & 2 & 2 & 1 \\ 0 & 0 & 1 & 2 & 2 \\ 1 & 2 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 & 0 \\ 2 & 3 & 0 & 0 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ -1 \\ -1 \\ -1 \end{pmatrix}.$
For arbitrary $x = (x_1, x_2, x_3, x_4, x_5)^T$,
$x^T M x = 3x_1 x_3 + 5x_1 x_4 + 3x_1 x_5 + 3x_2 x_3 + 3x_2 x_4 + 5x_2 x_5.$
If $\tilde{x} = (0, 2, -1, -1, -1)^T$, then $\tilde{x}^T M \tilde{x} = -22 < 0$, which demonstrates that M is not a positive semidefinite matrix. The solution set of LCP11 is
$x^* = (\lambda, 1 - 3\lambda, 0, 0.5, 0)^T, \quad 0 \le \lambda \le 0.2.$
LCP 12. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \\ -1 \end{pmatrix}.$
For any $x = (x_1, x_2, x_3)^T$, $x^T M x = (x_1 + x_2 + x_3)^2 \ge 0$, so M is a (symmetric) positive semidefinite matrix. The solution set of LCP12 is
$x^* = (\lambda_1, \lambda_2, \lambda_3)^T, \quad \lambda_1 + \lambda_2 + \lambda_3 = 1, \quad \lambda_1 \ge 0,\ \lambda_2 \ge 0,\ \lambda_3 \ge 0.$
LCP 13. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.$
For any $x = (x_1, x_2, x_3)^T$, $x^T M x = (x_1 - x_3)^2 + 2x_2^2 \ge 0$, so M is a (symmetric) positive semidefinite matrix. The solution set of LCP13 is $x^* = (\lambda + 1, 0, \lambda)^T$, $\lambda \ge 0$.
LCP 14. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & -1 \\ 1 & 0 & 1 & 1 & -1 \\ -1 & 0 & -1 & -1 & 1 \end{pmatrix}, \quad q = \begin{pmatrix} -5 \\ 3 \\ -3 \\ -5 \\ 10 \end{pmatrix}.$
For any $x = (x_1, x_2, x_3, x_4, x_5)^T$, $x^T M x = (x_1 + x_3 + x_4 - x_5)^2 \ge 0$, so M is a (symmetric) positive semidefinite matrix. The solution set of LCP14 is $x^* = (\lambda, 0, 0, 5 - \lambda, 0)^T$, $0 \le \lambda \le 5$.

4.1.3. LCP without Solution

LCP 15. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}, \quad q = \begin{pmatrix} 2 \\ -1 \end{pmatrix}.$
LCP 15 is feasible but not solvable [45,46,47].
LCP 16. Consider the following linear complementarity problem, where
$M = \begin{pmatrix} 1 & 1 & 0 \\ 5 & 1 & 0 \\ 1 & 1 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}.$
LCP 16 is feasible but not solvable [45,46,47].

4.2. Parameters Setting

Simulations were carried out to compare the optimization capabilities of the NGHS method with: (1) the classical HS [33], (2) HSCH (harmony search algorithm with chaos [48]), and (3) HSWB (harmony search with worst and best [49]). To make the comparison fair, the populations of all HS variants were initialized using the same random seeds. All HS-variant algorithms used the same parameters: HMS = 10, HMCR = 0.85, PAR = 0.35, Tmax = 50,000. In NGHS, we set $p_m = 0.005$. In LCP4 and LCP9, we set the search space to $[-n, n]^n$ and $[-10, 10]^n$, respectively. In the other LCPs, we set the search space to $[-5, 5]^n$.

4.3. Results and Analysis

To judge the accuracy of the different algorithms, ten independent runs of the four algorithms (HS, HSCH, HSWB, NGHS) were carried out. The best, mean, and worst fitness values, the standard deviation (Std), and the mean time are recorded in Table 1. Figure 3 shows the convergence of the mean fitness value and a boxplot of the final best fitness value for the given LCPs. The values plotted for each generation are averaged over the ten independent runs. The boxplot shows the best fitness value in the final population for the different algorithms.
From Table 1, for the solvable LCPs (LCP1~LCP14), we can see that the best, mean, and worst fitness values and the standard deviation (Std) obtained by NGHS are all better than those obtained by HS, HSCH, and HSWB. NGHS thus has a very fast convergence speed compared with HS, HSCH, and HSWB, as indicated by the convergence plots of the mean fitness value in Figure 3. From these figures, we can see that NGHS finds the solutions of the LCPs at an almost constant convergence rate. Meanwhile, the mean time of NGHS is very similar to that of HS, HSCH, and HSWB.
After ten independent runs, for the LCPs with a unique solution (LCP1~LCP10), NGHS converges to the unique solution in each case, and for the LCPs with multiple solutions (LCP11~LCP14), NGHS can find as many solutions as possible. Figure 4 shows the computation results of NGHS (10 independent runs) for LCP11~LCP14. Meanwhile, for the unsolvable LCPs (LCP15~LCP16), all four algorithms terminate at the point with minimum error, and NGHS terminates earlier than the other HS variants.

5. Conclusions

The general linear complementarity problem is studied in this paper. After reformulating the LCP as an unconstrained optimization model, the LCP can be solved by the NGHS algorithm. For a solvable LCP, NGHS terminates at a solution. For an unsolvable LCP, NGHS terminates at the point with minimum error.
Thus, we can use this method to decide whether a solution of the LCP exists: if the objective function (merit function) cannot converge to zero, the LCP is probably unsolvable.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

This work is supported by the Science and Technology Innovation Team of Shaanxi University of Technology and the Key Project of Shaanxi Provincial Education Department (20JS021).

Conflicts of Interest

The author declares that there is no conflict of interest regarding the publication of this paper.

References

  1. Lemke, C.E.; Howson, J.T. Equilibrium points of bimatrix games. SIAM J. Appl. Math. 1964, 12, 413–423. [Google Scholar] [CrossRef]
  2. Cottle, R.W.; Pang, J.S.; Stone, R.E. The Linear Complementarity Problems; Academic Press: Cambridge, MA, USA, 1992. [Google Scholar]
  3. Karmarkar, N. A new polynomial-time algorithm for linear programming. Combinatorica 1984, 4, 373–395. [Google Scholar] [CrossRef]
  4. Kojima, M. A polynomial-time algorithm for a class of linear complementary problems. Math. Program. 1989, 44, 1–26. [Google Scholar] [CrossRef]
  5. Kojima, M.; Megiddo, N.; Mizuno, S. A primal-dual infeasible-interior-point algorithm for linear programming. Math. Program. 1993, 61, 263–280. [Google Scholar] [CrossRef]
  6. Zhang, Y. On the Convergence of a Class of Infeasible-Interior-Point Methods for the Horizontal Linear Complementarity Problem. SIAM J. Optim. 1994, 4, 208–227. [Google Scholar] [CrossRef]
  7. Wright, S.J. An Infeasible-Interior-Point Algorithm for Linear Complementarity Problems. Math. Program. 1994, 67, 29–52. [Google Scholar] [CrossRef]
  8. Chen, X.; Xiang, S. Computation of Error Bounds for P-matrix Linear Complementarity Problems. Math. Program. 2006, 106, 513–525. [Google Scholar] [CrossRef]
  9. Chen, J.-S. On some NCP-functions based on the generalized Fischer-burmeister function. Asia-Pac. J. Oper. Res. 2007, 24, 401–420. [Google Scholar] [CrossRef]
  10. Gao, L.; Wang, Y.; Li, C. New error bounds for the linear complementarity problem of QN-matrices. Numer. Algorithms 2017, 77, 1–14. [Google Scholar] [CrossRef]
  11. Dai, P.F. Error bounds for linear complementarity problems of DB-matrices. Linear Algebra Appl. 2016, 53, 647–657. [Google Scholar] [CrossRef]
  12. Bai, Y.Q.; Lesaja, G.; Roos, C. A new class of polynomial interior-point algorithms for P*(κ)-linear complementary problems. Pac. J. Optim. 2008, 4, 248–263. [Google Scholar]
  13. Mansouri, H.; Pirhaji, M. A Polynomial Interior-Point Algorithm for Monotone Linear Complementarity Problems. J. Optim. Theory Appl. 2013, 157, 451–461. [Google Scholar] [CrossRef]
  14. Chen, J.-S.; Ko, C.-H.; Pan, S. A neural network based on the generalized Fischer–Burmeister function for nonlinear complementarity problems. Inf. Sci. 2010, 180, 697–711. [Google Scholar] [CrossRef]
  15. Dai, P.F.; Li, J.C.; Li, Y.T.; Bai, J. A general preconditioner for linear complementarity problem with an M-matrix. J. Comput. Appl. Math. 2017, 317, 100–112. [Google Scholar] [CrossRef]
  16. Kheirfam, B.; Haghighi, M. An infeasible interior-point method for the P*-matrix linear complementarity problem based on a trigonometric kernel function with full-Newton step. Commun. Comb. Optim. 2018, 3, 51–70. [Google Scholar]
  17. Kheirfam, B.; Chitsaz, M. Polynomial convergence of two higher order interior-point methods for P*(κ)-LCP in a wide neighborhood of the central path. Period. Math. Hungar. 2018, 76, 243–264. [Google Scholar] [CrossRef]
  18. Zhang, C.; Chen, X. Smoothing Projected Gradient Method and Its Application to Stochastic Linear Complementarity Problems. SIAM J. Optim. 2009, 20, 627–649. [Google Scholar] [CrossRef]
  19. Wang, G.Q. A new polynomial interior-point algorithm for the monotone linear complementarity problem over symmetric cones with full NT-steps. Asia-Pac. J. Oper. Res. 2012, 29, 1250015. [Google Scholar] [CrossRef]
  20. Luo, Z.; Qi, L.; Xiu, N. The sparsest solutions to Z-tensor complementarity problems. Optim. Lett. 2017, 11, 471–482. [Google Scholar] [CrossRef]
  21. Song, Y.; Qi, L. Tensor Complementarity Problem and Semi-positive Tensors. J. Optim. Theory Appl. 2016, 169, 1069–1078. [Google Scholar] [CrossRef]
  22. Che, M.; Qi, L.; Wei, Y. Positive-Definite Tensors to Nonlinear Complementarity Problems. J. Optim. Theory Appl. 2016, 168, 475–487. [Google Scholar] [CrossRef]
  23. Xiao, B. The linear complementarity problem with a parametric input. Eur. J. Oper. Res. 1995, 81, 420–429. [Google Scholar] [CrossRef]
  24. Adelgren, N.; Wiecek, M.M. A two-phase algorithm for the multiparametric linear complementarity problem. Eur. J. Oper. Res. 2016, 254, 715–738. [Google Scholar] [CrossRef]
  25. Zhang, J.; He, S.X.; Wang, Q. A SAA nonlinear regularization method for a stochastic extended vertical linear complementarity problem. Appl. Math. Comput. 2014, 232, 888–897. [Google Scholar] [CrossRef]
  26. Shang, M.; Zhang, C.; Xiu, N. Minimal Zero Norm Solutions of Linear Complementarity Problems. J. Optim. Theory Appl. 2014, 163, 795–814. [Google Scholar] [CrossRef]
  27. Chen, X.; Xiang, S. Sparse solutions of linear complementarity problems. Math. Program. Ser. A 2015, 159, 539–556. [Google Scholar] [CrossRef]
  28. Billups, S.C.; Murty, K.G. Complementarity problems. J. Comput. Appl. Math. 2000, 124, 303–318. [Google Scholar] [CrossRef]
  29. Bai, Z.Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef]
  30. Bai, Z.Z.; Yang, X. On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 2009, 59, 2923–2936. [Google Scholar] [CrossRef]
  31. Jiang, H.; Qi, L. A New Nonsmooth Equations Approach To Nonlinear Complementarity Problems. SIAM J. Control Optim. 1997, 35, 178–193. [Google Scholar] [CrossRef]
  32. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  33. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  34. Bekdaş, G.; Nigdeli, S.M.; Kim, S.; Geem, Z.W. Modified Harmony Search Algorithm-Based Optimization for Eco-Friendly Reinforced Concrete Frames. Sustainability 2022, 14, 3361. [Google Scholar] [CrossRef]
  35. Yan, L.; Zhu, Z.; Kang, X.; Qu, B.; Qiao, B.; Huan, J.; Chai, X. Multi-Objective Dynamic Economic Emission Dispatch with Electric Vehicle–Wind Power Interaction Based on a Self-Adaptive Multiple-Learning Harmony-Search Algorithm. Energies 2022, 15, 4942. [Google Scholar] [CrossRef]
36. Taheri, A.; Makarian, E.; Manaman, N.S.; Ju, H.; Kim, T.-H.; Geem, Z.W.; RahimiZadeh, K. A Fully-Self-Adaptive Harmony Search GMDH-Type Neural Network Algorithm to Estimate Shear-Wave Velocity in Porous Media. Appl. Sci. 2022, 12, 6339.
37. Ocak, A.; Nigdeli, S.M.; Bekdaş, G.; Kim, S.; Geem, Z.W. Optimization of Seismic Base Isolation System Using Adaptive Harmony Search Algorithm. Sustainability 2022, 14, 7456.
38. Botella Langa, A.; Choi, Y.-G.; Kim, K.-S.; Jang, D.-W. Application of the Harmony Search Algorithm for Optimization of WDN and Assessment of Pipe Deterioration. Appl. Sci. 2022, 12, 3550.
39. Ocak, A.; Nigdeli, S.M.; Bekdaş, G.; Kim, S.; Geem, Z.W. Adaptive Harmony Search for Tuned Liquid Damper Optimization under Seismic Excitation. Appl. Sci. 2022, 12, 2645.
40. Tuo, S.; Zhang, J.; Yong, L.; Yuan, X.; Liu, B.; Xu, X. A harmony search algorithm for high-dimensional multimodal optimization problems. Digit. Signal Process. 2015, 46, 151–163.
41. Zou, D.; Gao, L.; Wu, J.; Li, S.; Li, Y. A novel global harmony search algorithm for reliability problems. Comput. Ind. Eng. 2010, 58, 307–316.
42. Zou, D.; Gao, L.; Wu, J.; Li, S. Novel global harmony search algorithm for unconstrained problems. Neurocomputing 2010, 73, 3308–3318.
43. Zou, D.; Gao, L.; Li, S.; Wu, J. Solving 0–1 knapsack problem by a novel global harmony search algorithm. Appl. Soft Comput. 2011, 11, 1556–1564.
44. Yong, L.; Ding, R.; Zhang, G. Novel Global Harmony Search Algorithm for Monotone Linear Complementarity Problem. ICIC Express Lett. Part B Appl. 2014, 5, 1513–1521.
45. Kostreva, M.M.; Wiecek, M.M. Linear complementarity problems and multiple objective programming. Math. Program. 1993, 60, 349–359.
46. Isac, G.; Kostreva, M.M.; Wiecek, M.M. Multiple-objective approximation of feasible but unsolvable linear complementarity problems. J. Optim. Theory Appl. 1995, 86, 389–405.
47. Kostreva, M.M.; Yang, X.Q. Unified approaches for solvable and unsolvable linear complementarity problems. Eur. J. Oper. Res. 2004, 158, 409–417.
48. Yong, L.; Liu, S.; Tuo, S.; Gao, K. Improved Harmony Search Algorithm with Chaos for Absolute Value Equation. TELKOMNIKA 2013, 11, 835–844.
49. Yong, L.; Liu, S.; Tuo, S. Improved harmony search algorithm for absolute value equation. J. Nat. Sci. Heilongjiang Univ. 2013, 30, 321–327.
Figure 1. Generating a new harmony by classical HS algorithm.
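Figure 1 illustrates the improvisation step of the classical HS algorithm: each component of the new harmony is either taken from harmony memory (with probability HMCR) and possibly pitch-adjusted (with probability PAR, using bandwidth bw), or drawn at random from the search range. A minimal sketch of this step (the parameter values below are illustrative defaults, not the settings used in this paper's experiments):

```python
import random

def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.01):
    """Generate one new harmony from harmony memory (classical HS step).

    memory: list of harmonies (each a list of floats);
    lower/upper: per-component bounds of the search range.
    """
    n = len(lower)
    new = [0.0] * n
    for i in range(n):
        if random.random() < hmcr:
            # memory consideration: reuse the i-th component of a stored harmony
            new[i] = random.choice(memory)[i]
            if random.random() < par:
                # pitch adjustment: small random perturbation of bandwidth bw
                new[i] += bw * (2.0 * random.random() - 1.0)
        else:
            # random selection from the search range
            new[i] = lower[i] + random.random() * (upper[i] - lower[i])
        # keep the component inside the bounds
        new[i] = min(max(new[i], lower[i]), upper[i])
    return new
```

The new harmony then replaces the worst member of memory only if it has a better fitness, which is the usual HS acceptance rule.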
Figure 2. Generating a new harmony by NGHS algorithm.
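Figure 2 illustrates the corresponding step in NGHS. Assuming the position-updating and genetic-mutation rules of Zou et al. [42], each component moves from the worst harmony toward (and possibly past) the best one, and with a small probability pm it is re-initialized at random to preserve diversity. A sketch under that assumption (pm is an illustrative value):

```python
import random

def nghs_improvise(memory, fitness, lower, upper, pm=0.005):
    """Generate one new harmony by the NGHS rule (sketch, after [42]).

    For each component: x_r = 2*best - worst, clamped to the bounds,
    then new_i = worst_i + rand * (x_r - worst_i); with probability pm
    the component is replaced by a uniformly random value (mutation).
    """
    best = min(memory, key=fitness)    # lowest fitness = best harmony
    worst = max(memory, key=fitness)   # highest fitness = worst harmony
    n = len(lower)
    new = [0.0] * n
    for i in range(n):
        x_r = 2.0 * best[i] - worst[i]             # position updating
        x_r = min(max(x_r, lower[i]), upper[i])    # clamp to the range
        new[i] = worst[i] + random.random() * (x_r - worst[i])
        if random.random() < pm:                   # genetic mutation
            new[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return new
```

In NGHS as described in [42], the new harmony replaces the worst member of memory unconditionally, which distinguishes it from the fitness-gated acceptance of classical HS.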
Figure 3. Convergence of mean fitness and boxplot of final best fitness for given LCPs.
Figure 4. Computation results of NGHS (10 independent runs) for LCP 11~LCP 14.
Table 1. Results for 10 runs on given LCPs.

| LCP | n | Algorithm | Best | Mean | Worst | Std | Mean time (s) |
|---|---|---|---|---|---|---|---|
| LCP1 | 3 | HS | 0.00012 | 0.002018 | 0.006212 | 0.002172 | 0.7930772 |
| | | HSCH | 9.53 × 10−8 | 3.17 × 10−7 | 6.51 × 10−7 | 1.89 × 10−7 | 0.8098424 |
| | | HSWB | 6.24 × 10−8 | 7.69 × 10−7 | 3.53 × 10−6 | 1.03 × 10−6 | 0.8240383 |
| | | NGHS | 4.93 × 10−30 | 7.66 × 10−29 | 2.33 × 10−28 | 7.28 × 10−29 | 0.8852437 |
| LCP2 | 10 | HS | 0.049171 | 0.077259 | 0.092279 | 0.012563 | 1.4272887 |
| | | HSCH | 1.08 × 10−7 | 5 × 10−7 | 9.78 × 10−7 | 2.53 × 10−7 | 1.4395747 |
| | | HSWB | 2.93 × 10−8 | 1.57 × 10−7 | 5.17 × 10−7 | 1.46 × 10−7 | 1.4575034 |
| | | NGHS | 1.05 × 10−27 | 7.91 × 10−26 | 3.6 × 10−25 | 1.09 × 10−25 | 1.5603642 |
| LCP3 | 10 | HS | 0.000879 | 0.007951 | 0.018899 | 0.006224 | 1.2304132 |
| | | HSCH | 2.21 × 10−6 | 7.45 × 10−6 | 1.53 × 10−5 | 4.52 × 10−6 | 1.2584141 |
| | | HSWB | 2.48 × 10−6 | 6.4 × 10−6 | 1.45 × 10−5 | 4.03 × 10−6 | 1.2722705 |
| | | NGHS | 4.44 × 10−22 | 7.55 × 10−19 | 5.88 × 10−18 | 1.81 × 10−18 | 1.3203617 |
| LCP4 | 10 | HS | 8.8 × 10−5 | 0.000458 | 0.000849 | 0.000281 | 1.2091804 |
| | | HSCH | 1.46 × 10−9 | 3.52 × 10−9 | 8.17 × 10−9 | 2.06 × 10−9 | 1.228747 |
| | | HSWB | 5.36 × 10−10 | 1.78 × 10−9 | 3.94 × 10−9 | 1.24 × 10−9 | 1.2424723 |
| | | NGHS | 0 | 0 | 0 | 0 | 1.3108552 |
| LCP5 | 4 | HS | 6.71 × 10−5 | 0.014856 | 0.03293 | 0.010225 | 0.8205359 |
| | | HSCH | 1.78 × 10−7 | 5.49 × 10−7 | 1.35 × 10−6 | 3.8 × 10−7 | 0.8371677 |
| | | HSWB | 1.95 × 10−8 | 1.79 × 10−7 | 6.17 × 10−7 | 1.74 × 10−7 | 0.8490459 |
| | | NGHS | 0 | 7.44 × 10−31 | 9.86 × 10−31 | 3.55 × 10−31 | 0.9181975 |
| LCP6 | 10 | HS | 0.000197 | 0.000531 | 0.001297 | 0.000323 | 1.2196362 |
| | | HSCH | 4.01 × 10−10 | 2.39 × 10−9 | 4.95 × 10−9 | 1.37 × 10−9 | 1.2251887 |
| | | HSWB | 2.91 × 10−10 | 1.3 × 10−9 | 2.42 × 10−9 | 6.24 × 10−10 | 1.2407602 |
| | | NGHS | 0 | 2.47 × 10−33 | 2.47 × 10−32 | 7.8 × 10−33 | 1.3161821 |
| LCP7 | 4 | HS | 0.005125 | 0.015587 | 0.030891 | 0.007932 | 0.8171942 |
| | | HSCH | 1.26 × 10−5 | 2.73 × 10−5 | 4.65 × 10−5 | 1.4 × 10−5 | 0.8416201 |
| | | HSWB | 2.94 × 10−6 | 6 × 10−5 | 0.000436 | 0.000133 | 0.8573362 |
| | | NGHS | 2.78 × 10−21 | 7.55 × 10−19 | 6.11 × 10−18 | 1.89 × 10−18 | 0.9224063 |
| LCP8 | 7 | HS | 0.003457 | 0.011776 | 0.046185 | 0.012459 | 0.9173754 |
| | | HSCH | 6.31 × 10−6 | 0.109186 | 1.091539 | 0.345164 | 0.9359316 |
| | | HSWB | 8.64 × 10−6 | 0.065945 | 0.49082 | 0.158359 | 0.9496413 |
| | | NGHS | 3.01 × 10−12 | 8.63 × 10−11 | 6.03 × 10−10 | 1.84 × 10−10 | 1.0178789 |
| LCP9 | 4 | HS | 1.38 × 10−5 | 0.002363 | 0.022127 | 0.006945 | 0.8277304 |
| | | HSCH | 9 × 10−7 | 3.13 × 10−5 | 0.000128 | 3.81 × 10−5 | 0.8443199 |
| | | HSWB | 3.46 × 10−6 | 4.87 × 10−5 | 0.000227 | 7.57 × 10−5 | 0.8582979 |
| | | NGHS | 4.19 × 10−28 | 2.24 × 10−27 | 6.97 × 10−27 | 2.14 × 10−27 | 0.9241014 |
| LCP10 | 6 | HS | 0.012854 | 0.099804 | 0.257314 | 0.088732 | 0.8767348 |
| | | HSCH | 4.5 × 10−5 | 0.000248 | 0.00062 | 0.000173 | 0.8921561 |
| | | HSWB | 1.41 × 10−5 | 0.000227 | 0.000747 | 0.000251 | 0.9055236 |
| | | NGHS | 4.6 × 10−23 | 5.11 × 10−14 | 2.94 × 10−13 | 1.07 × 10−13 | 0.9837116 |
| LCP11 | 5 | HS | 4.63 × 10−5 | 0.006906 | 0.025728 | 0.009857 | 0.8943217 |
| | | HSCH | 1.55 × 10−7 | 1.8 × 10−5 | 6.46 × 10−5 | 2.4 × 10−5 | 0.9109492 |
| | | HSWB | 7.3 × 10−8 | 6.71 × 10−5 | 0.000493 | 0.000151 | 0.9244671 |
| | | NGHS | 8.71 × 10−32 | 3.15 × 10−27 | 1 × 10−26 | 3.81 × 10−27 | 0.9889576 |
| LCP12 | 3 | HS | 4.88 × 10−14 | 1.37 × 10−6 | 9 × 10−6 | 2.98 × 10−6 | 0.8374631 |
| | | HSCH | 2.17 × 10−14 | 1.01 × 10−9 | 5.51 × 10−9 | 1.82 × 10−9 | 0.8581188 |
| | | HSWB | 3.09 × 10−13 | 9.33 × 10−10 | 7.38 × 10−9 | 2.29 × 10−9 | 0.871607 |
| | | NGHS | 0 | 0 | 0 | 0 | 0.9422746 |
| LCP13 | 3 | HS | 2.22 × 10−9 | 3.06 × 10−6 | 1.44 × 10−5 | 5.13 × 10−6 | 0.7916857 |
| | | HSCH | 3.92 × 10−11 | 6.18 × 10−10 | 1.26 × 10−9 | 4.53 × 10−10 | 0.8085951 |
| | | HSWB | 1 × 10−12 | 4.56 × 10−10 | 2.16 × 10−9 | 6.96 × 10−10 | 0.8221383 |
| | | NGHS | 0 | 0 | 0 | 0 | 0.9289744 |
| LCP14 | 5 | HS | 3.56 × 10−5 | 0.00053 | 0.001597 | 0.000515 | 0.848102 |
| | | HSCH | 3.57 × 10−9 | 2.33 × 10−8 | 5.41 × 10−8 | 1.85 × 10−8 | 0.8672321 |
| | | HSWB | 3.04 × 10−10 | 6.88 × 10−9 | 1.67 × 10−8 | 6.24 × 10−9 | 0.8818419 |
| | | NGHS | 0 | 6.51 × 10−31 | 1.23 × 10−30 | 4.8 × 10−31 | 0.9494152 |
| LCP15 | 2 | HS | 0.41112 | 0.411127 | 0.411169 | 1.49 × 10−5 | 0.7566656 |
| | | HSCH | 0.41112 | 0.41112 | 0.41112 | 4.29 × 10−10 | 0.7321389 |
| | | HSWB | 0.41112 | 0.41112 | 0.41112 | 4.43 × 10−10 | 0.7490452 |
| | | NGHS | 0.41112 | 0.41112 | 0.41112 | 6.14 × 10−17 | 0.8155493 |
| LCP16 | 3 | HS | 0.032256 | 0.040354 | 0.063529 | 0.01081 | 0.7415792 |
| | | HSCH | 0.032256 | 0.032256 | 0.032256 | 4.11 × 10−8 | 0.7572208 |
| | | HSWB | 0.032256 | 0.032256 | 0.032256 | 1.1 × 10−7 | 0.7765369 |
| | | NGHS | 0.032256 | 0.032256 | 0.032256 | 6.04 × 10−17 | 0.8379764 |
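The Best/Mean/Worst values reported in Table 1 are fitness values of the unconstrained reformulation, so a value of zero indicates an exact solution of the LCP. This excerpt does not state which NCP-function is used; assuming the common Fischer–Burmeister function φ(a, b) = √(a² + b²) − a − b, which vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0, the fitness of a candidate x could be evaluated as:

```python
import math

def lcp_fitness(x, M, q):
    """Merit value for the LCP x >= 0, Mx + q >= 0, x^T(Mx + q) = 0,
    using the Fischer-Burmeister NCP-function (an assumed choice):
    fitness = sum_i phi(x_i, (Mx + q)_i)^2, zero iff x solves the LCP.
    """
    n = len(x)
    # y = Mx + q
    y = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return sum((math.hypot(x[i], y[i]) - x[i] - y[i]) ** 2 for i in range(n))
```

For example, with M the 2 × 2 identity and q = (−1, −1), the point x = (1, 1) satisfies x ≥ 0, Mx + q = 0, so the fitness is exactly zero, while x = (0, 0) violates Mx + q ≥ 0 and gets a positive fitness.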

Yong, L. Novel Global Harmony Search Algorithm for General Linear Complementarity Problem. Axioms 2022, 11, 370. https://doi.org/10.3390/axioms11080370

