Communication

An Improved Convergence Condition of the MMS Iteration Method for Horizontal LCP of H+-Matrices

School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1842; https://doi.org/10.3390/math11081842
Submission received: 15 March 2023 / Revised: 5 April 2023 / Accepted: 10 April 2023 / Published: 13 April 2023
(This article belongs to the Special Issue Optimization Theory, Method and Application)

Abstract

In this paper, inspired by previous work in (Appl. Math. Comput. 369 (2020) 124890), we focus on the convergence condition of the modulus-based matrix splitting (MMS) iteration method for solving the horizontal linear complementarity problem (HLCP) with $H_+$-matrices. An improved convergence condition of the MMS iteration method is given, which enlarges the range of its applications and improves on the condition in the above published article.

1. Introduction

As is known, the horizontal linear complementarity problem (often abbreviated as HLCP), for given matrices $A, B \in \mathbb{R}^{n\times n}$ and a given vector $q \in \mathbb{R}^n$, is to find two vectors $z, w \in \mathbb{R}^n$ satisfying
$$Az = Bw + q,\qquad z \ge 0,\quad w \ge 0 \quad\text{and}\quad z^T w = 0. \tag{1}$$
If $A = I$ in (1), where $I$ denotes the identity matrix, the HLCP (1) is nothing other than the classical linear complementarity problem (LCP) in [1]. This implies that the HLCP (1) is a general form of the LCP.
The HLCP (1) is a useful tool that arises in a diverse range of fields, including transportation science, telecommunication systems, structural mechanics, and mechanical and electrical engineering; see [2,3,4,5,6,7]. Over the past several years, efficient algorithms have been designed to solve the HLCP (1), such as the interior point method [8], the neural network approach [9], and so on. In particular, in [10], the modulus-based matrix splitting (MMS) iteration method from [11] was adopted to solve the HLCP (1). In addition, part of the motivation of the present paper comes from complex systems with matrix formulations; see [12,13,14] for more details.
Recently, Zheng and Vong [15] further discussed the MMS method, as described below.
The MMS method [10,15]. Let $\Omega$ be a positive diagonal matrix and $r > 0$, and let $A = M_A - N_A$ and $B = M_B - N_B$ be splittings of the matrices $A$ and $B$, respectively. Assume that $(z^{(0)}, w^{(0)})$ is an arbitrary initial pair. For $k = 0, 1, 2, \ldots$, until the iteration sequence $(z^{(k)}, w^{(k)})$ converges, compute $(z^{(k+1)}, w^{(k+1)})$ by
$$z^{(k+1)} = \frac{1}{r}\bigl(|x^{(k+1)}| + x^{(k+1)}\bigr),\qquad w^{(k+1)} = \frac{1}{r}\,\Omega\bigl(|x^{(k+1)}| - x^{(k+1)}\bigr), \tag{2}$$
where $x^{(k+1)}$ is obtained from
$$(M_A + M_B\Omega)\,x^{(k+1)} = (N_A + N_B\Omega)\,x^{(k)} + (B\Omega - A)\,|x^{(k)}| + rq. \tag{3}$$
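For readers who wish to experiment with the scheme, the iteration (2)–(3) can be sketched in a few lines of Python/NumPy (the paper's own experiments use MATLAB). The 2×2 test data, the Jacobi-type splittings, and all parameter choices below are illustrative assumptions for this sketch, not the experiments of Section 3.

```python
import numpy as np

def mms(M_A, N_A, M_B, N_B, Omega, q, r=2.0, tol=1e-10, max_iter=500):
    """MMS iteration (2)-(3) for the HLCP (1) with A = M_A - N_A, B = M_B - N_B."""
    A, B = M_A - N_A, M_B - N_B
    S = M_A + M_B @ Omega                     # fixed coefficient matrix of (3)
    x = np.zeros_like(q)
    for _ in range(max_iter):
        rhs = (N_A + N_B @ Omega) @ x + (B @ Omega - A) @ np.abs(x) + r * q
        x_new = np.linalg.solve(S, rhs)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / r                   # recover z from x as in (2)
    w = Omega @ (np.abs(x) - x) / r           # recover w from x as in (2)
    return z, w

# Tiny illustrative HLCP: the data below are made up for demonstration only.
A = np.array([[6.0, -2.0], [-2.0, 6.0]])
B = np.array([[6.0, -1.0], [-1.0, 6.0]])
M_A, M_B = np.diag(np.diag(A)), np.diag(np.diag(B))   # Jacobi-type splittings
z_star, w_star = np.array([0.0, 1.0]), np.array([1.0, 0.0])
q = A @ z_star - B @ w_star                   # so (z*, w*) solves this HLCP
z, w = mms(M_A, M_A - A, M_B, M_B - B, np.eye(2), q)
```

Note that any fixed point $x$ of (3) yields a solution of (1): substituting (2) into $Az - Bw$ gives $q$, while $z \ge 0$, $w \ge 0$ and $z_i w_i \propto |x_i|^2 - x_i^2 = 0$ hold by construction.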
For the later discussion, we review some preliminaries. For a square matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, $|A| = (|a_{ij}|)$, and the comparison matrix $\langle A\rangle = (\langle a\rangle_{ij})$ is defined by $\langle a\rangle_{ii} = |a_{ii}|$ and $\langle a\rangle_{ij} = -|a_{ij}|$ for $i \ne j$. A matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ is called a non-singular M-matrix if $A^{-1} \ge 0$ and $a_{ij} \le 0$ for $i \ne j$; an H-matrix if its comparison matrix $\langle A\rangle$ is a non-singular M-matrix; an $H_+$-matrix if it is an H-matrix with positive diagonal entries; and a strictly diagonally dominant (s.d.d.) matrix if $|a_{ii}| > \sum_{j\ne i}|a_{ij}|$, $i = 1, 2, \ldots, n$. In addition, $A \ge (>)\,B$ with $A, B \in \mathbb{R}^{n\times n}$ means $a_{ij} \ge (>)\,b_{ij}$ for $i, j = 1, 2, \ldots, n$. Throughout, $D_A$ and $D_B$ denote the (positive) diagonal parts of $A$ and $B$, respectively.
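The definitions above translate directly into small checks. The following Python/NumPy helpers are an illustrative sketch (the inverse-nonnegativity test for an M-matrix is only practical for small examples like those in this paper):

```python
import numpy as np

def comparison(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    A = np.asarray(A, dtype=float)
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_sdd(A):
    """Strict diagonal dominance: |a_ii| > sum of |a_ij| over j != i."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    return bool(np.all(d > np.abs(A).sum(axis=1) - d))

def is_h_plus(A):
    """H+-matrix test for small examples: positive diagonal, and <A> a
    non-singular M-matrix, checked via <A>^{-1} >= 0."""
    A = np.asarray(A, dtype=float)
    if np.any(np.diag(A) <= 0):
        return False
    try:
        Cinv = np.linalg.inv(comparison(A))
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(Cinv >= -1e-12))
```

For instance, `is_h_plus([[6, -2], [-2, 6]])` is `True`, while `is_h_plus([[1, -2], [-2, 1]])` is `False`, since the comparison matrix of the latter has an inverse with negative entries.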
For the MMS method with $H_+$-matrices, two new convergence conditions were obtained in [15], which are weaker than the corresponding convergence conditions in [10]. One of them is given below.
Theorem 1
([15]). Assume that $A, B \in \mathbb{R}^{n\times n}$ are two $H_+$-matrices and $\Omega = \mathrm{diag}(\omega_{jj}) \in \mathbb{R}^{n\times n}$ with $\omega_{jj} > 0$, $j = 1, 2, \ldots, n$, satisfies
$$|b_{ij}|\,\omega_{jj} \le |a_{ij}| \ (i \ne j) \quad\text{and}\quad \mathrm{sign}(b_{ij}) = \mathrm{sign}(a_{ij}) \ \text{if } b_{ij} \ne 0.$$
Let $A = M_A - N_A$ be an H-splitting of $A$, $B = M_B - N_B$ be an H-compatible splitting of $B$, and $M_A + M_B\Omega$ be an $H_+$-matrix. Then the MMS method is convergent, provided one of the following conditions holds:
(a) $\Omega \ge D_A D_B^{-1}$;
(b) $\Omega < D_A D_B^{-1}$ and
$$D_B^{-1}\Bigl(D_A - \tfrac{1}{2}\,D^{-1}\bigl(\langle A\rangle + \langle M_A\rangle - |N_A|\bigr)D\Bigr)e < \Omega e < D_A D_B^{-1}e \tag{4}$$
with $\Omega = k D_1 D^{-1}$ and $kI < D_A D_B^{-1} D_1^{-1} D$, where $e = (1, 1, \ldots, 1)^T$, and $D$ and $D_1$ are positive diagonal matrices such that $(\langle M_A\rangle - |N_A|)D$ and $(\langle M_B\rangle - |N_B|)D_1$ are two strictly diagonally dominant (s.d.d.) matrices.
At present, the difficulty in Theorem 1 lies in checking condition (4). Moreover, condition (4) is limited by the parameter $k$: if the choice of $k$ is improper, condition (4) cannot be used to guarantee the convergence of the MMS method. To overcome this drawback, the purpose of this paper is to provide an improved convergence condition of the MMS method for solving the HLCP of $H_+$-matrices, which enlarges the range of its applications compared with Theorem 1 of [15].

2. An Improved Convergence Condition

In fact, by investigating condition (b) of Theorem 1, we see that the left inequality in (4) may have a flaw: when the choice of $k$ is improper, condition (b) of Theorem 1 cannot be used to guarantee the convergence of the MMS method. For instance, consider the two matrices
$$A = \begin{pmatrix} 6 & -2 \\ -2 & 6 \end{pmatrix},\qquad B = \begin{pmatrix} 6 & -1 \\ -3 & 6 \end{pmatrix}.$$
To make $A$ and $B$ satisfy the convergence conditions of Theorem 1, we take
$$M_A = \begin{pmatrix} 6 & 0 \\ -3.5 & 6 \end{pmatrix},\quad N_A = \begin{pmatrix} 0 & 2 \\ -1.5 & 0 \end{pmatrix},\quad M_B = \begin{pmatrix} 6 & 0 \\ 0 & 6 \end{pmatrix},\quad N_B = \begin{pmatrix} 0 & 1 \\ 3 & 0 \end{pmatrix}.$$
By simple computations,
$$\langle M_A\rangle - |N_A| = \begin{pmatrix} 6 & -2 \\ -5 & 6 \end{pmatrix},\qquad \bigl(\langle M_A\rangle - |N_A|\bigr)^{-1} = \frac{1}{26}\begin{pmatrix} 6 & 2 \\ 5 & 6 \end{pmatrix} \ge 0.$$
Hence, $\langle M_A\rangle - |N_A|$ is a non-singular M-matrix, so that $A = M_A - N_A$ is an H-splitting. On the other hand, $\langle B\rangle = \langle M_B\rangle - |N_B|$, so that $B = M_B - N_B$ is an H-compatible splitting.
For convenience, we take $D = D_1 = I$, where $I$ denotes the identity matrix. By simple calculations, we have
$$D_B^{-1}\Bigl(D_A - \tfrac{1}{2}\,D^{-1}\bigl(\langle A\rangle + \langle M_A\rangle - |N_A|\bigr)D\Bigr)e = \begin{pmatrix} \frac{1}{3} \\[3pt] \frac{3.5}{6} \end{pmatrix}$$
and
$$\Omega = k D_1 D^{-1} = k\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\qquad kI < D_A D_B^{-1} D_1^{-1} D = I.$$
Further, we have
$$\Omega e = k\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Obviously, when $k \le 1/3$, we do not have
$$\begin{pmatrix} \frac{1}{3} \\[3pt] \frac{3.5}{6} \end{pmatrix} < k\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
This implies that condition (b) of Theorem 1 may be invalid when it is used to judge the convergence of the MMS method for solving the HLCP. To overcome this disadvantage, we obtain an improved convergence condition for the MMS method, Theorem 2, whose proof is similar to that of Theorem 2.5 in [15].
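The computations of the 2×2 counterexample above are easy to verify numerically. The following Python/NumPy sketch (with `comparison` redefined locally for self-containedness) reproduces the left-hand side of (4) with $D = D_1 = I$ and confirms that the strict inequality fails for $k \le 1/3$:

```python
import numpy as np

A   = np.array([[6.0, -2.0], [-2.0, 6.0]])
B   = np.array([[6.0, -1.0], [-3.0, 6.0]])
M_A = np.array([[6.0,  0.0], [-3.5, 6.0]]); N_A = M_A - A
M_B = np.diag([6.0, 6.0]);                  N_B = M_B - B

def comparison(X):
    C = -np.abs(X)
    np.fill_diagonal(C, np.abs(np.diag(X)))
    return C

D_A, D_B = np.diag(np.diag(A)), np.diag(np.diag(B))
e = np.ones(2)

# <M_A> - |N_A| is a non-singular M-matrix, so A = M_A - N_A is an H-splitting
H = comparison(M_A) - np.abs(N_A)
assert np.all(np.linalg.inv(H) >= 0)

# Left-hand side of (4) with D = D_1 = I
lhs = np.linalg.inv(D_B) @ (D_A - 0.5 * (comparison(A) + comparison(M_A) - np.abs(N_A))) @ e
print(lhs)  # approximately [1/3, 3.5/6]

# For k <= 1/3 the strict inequality lhs < k*e fails, as claimed in the text
assert not np.all(lhs < (1.0 / 3.0) * e)
```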
Theorem 2.
Assume that $A, B \in \mathbb{R}^{n\times n}$ are two $H_+$-matrices and $\Omega = \mathrm{diag}(\omega_{jj}) \in \mathbb{R}^{n\times n}$ with $\omega_{jj} > 0$, $j = 1, 2, \ldots, n$, satisfies
$$|b_{ij}|\,\omega_{jj} \le |a_{ij}| \ (i \ne j) \quad\text{and}\quad \mathrm{sign}(b_{ij}) = \mathrm{sign}(a_{ij}) \ \text{if } b_{ij} \ne 0.$$
Let $A = M_A - N_A$ be an H-splitting of $A$, $B = M_B - N_B$ be an H-compatible splitting of $B$, and $M_A + M_B\Omega$ be an $H_+$-matrix. Then the MMS method is convergent, provided one of the following conditions holds:
(a) $\Omega \ge D_A D_B^{-1}$;
(b) when $\Omega < D_A D_B^{-1}$,
$$D_B^{-1}\Bigl(D_A - \tfrac{1}{2}\,D^{-1}\bigl(\langle A\rangle + \langle M_A\rangle - |N_A|\bigr)D\Bigr)e < \Omega e < D_A D_B^{-1}e, \tag{5}$$
where $D$ is a positive diagonal matrix such that $\langle M_A + M_B\Omega\rangle D$ is an s.d.d. matrix.
Proof. 
For Case (a), see the proof of Theorem 2.5 in [15].
For Case (b), by simple calculations, we have
$$\langle M_B\Omega\rangle - |N_B\Omega| = \langle M_B\rangle\Omega - |N_B|\Omega = \langle B\rangle\Omega,\qquad |B\Omega - A| = |A| - |B|\Omega \ge 0. \tag{6}$$
Making use of Equation (6), based on the proof of Theorem 2.5 in [15], we have
$$|x^{(k+1)} - x^*| \le \langle M_A + M_B\Omega\rangle^{-1}\bigl(|N_A + N_B\Omega| + |B\Omega - A|\bigr)\,|x^{(k)} - x^*| = \langle M_A + M_B\Omega\rangle^{-1}\bigl(|N_A + N_B\Omega| + |A| - |B|\Omega\bigr)\,|x^{(k)} - x^*| \le \langle M_A + M_B\Omega\rangle^{-1}\bigl(|N_A| + |N_B|\Omega + |A| - |B|\Omega\bigr)\,|x^{(k)} - x^*| = \widehat{W}\,|x^{(k)} - x^*|,$$
where
$$\widehat{W} = \widehat{S}^{-1}\widehat{T},\qquad \widehat{S} = \langle M_A + M_B\Omega\rangle \quad\text{and}\quad \widehat{T} = |N_A| + |N_B|\Omega + |A| - |B|\Omega.$$
Since $M_A + M_B\Omega$ is an $H_+$-matrix, it follows that $\widehat{S} = \langle M_A + M_B\Omega\rangle$ is a non-singular M-matrix, and there exists a positive diagonal matrix $D$ (see [16], p. 137) such that
$$\widehat{S}De = \langle M_A + M_B\Omega\rangle De > 0.$$
From the left inequality in (5), we have
$$\bigl(2D_B\Omega + \langle M_A\rangle - |N_A| - |A|\bigr)De > 0. \tag{7}$$
Further, based on inequality (7), we have
$$(\widehat{S} - \widehat{T})De = \bigl(\langle M_A + M_B\Omega\rangle - |N_A| - |N_B|\Omega - |A| + |B|\Omega\bigr)De \ge \bigl(\langle M_A\rangle + \langle M_B\Omega\rangle - |N_A| - |N_B|\Omega - |A| + |B|\Omega\bigr)De = \bigl(\langle M_A\rangle - |N_A| - |A| + \langle M_B\rangle\Omega - |N_B|\Omega + |B|\Omega\bigr)De = \bigl(\langle M_A\rangle - |N_A| - |A| + \langle B\rangle\Omega + |B|\Omega\bigr)De = \bigl(\langle M_A\rangle - |N_A| - |A| + 2D_B\Omega\bigr)De > 0.$$
Thus, based on Lemma 2.3 in [15], we have
$$\rho(\widehat{W}) = \rho\bigl(D^{-1}\widehat{W}D\bigr) \le \bigl\|D^{-1}\widehat{W}D\bigr\|_\infty = \Bigl\|\bigl(\langle M_A + M_B\Omega\rangle D\bigr)^{-1}\bigl(|N_A| + |N_B|\Omega + |A| - |B|\Omega\bigr)D\Bigr\|_\infty \le \max_{1\le i\le n}\frac{\Bigl(\bigl(|N_A| + |N_B|\Omega + |A| - |B|\Omega\bigr)De\Bigr)_i}{\bigl(\langle M_A + M_B\Omega\rangle De\bigr)_i} < 1.$$
The proof of Theorem 2 is completed. □
Comparing Theorem 2 with Theorem 1, the advantage of the former is that condition (b) of Theorem 2 is not limited by the parameter $k$ appearing in the latter. Moreover, we no longer need to find two positive diagonal matrices $D$ and $D_1$ such that $(\langle M_A\rangle - |N_A|)D$ and $(\langle M_B\rangle - |N_B|)D_1$ are s.d.d. matrices; we need only one positive diagonal matrix $D$ such that $\langle M_A + M_B\Omega\rangle D$ is an s.d.d. matrix.
Incidentally, there is a simple approach to obtain a positive diagonal matrix $D$ in Theorem 2: first, solving the system $\bar{A}x = e$ with $\bar{A} = \langle M_A + M_B\Omega\rangle$ gives a positive vector $x$; secondly, taking $D = \mathrm{diag}(x)$, i.e., $D = \mathrm{diag}(\bar{A}^{-1}e)$, makes $\langle M_A + M_B\Omega\rangle D$ an s.d.d. matrix.
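This recipe can be sketched in a few lines of Python/NumPy; the test data reuse the splitting matrices of Section 2 with an illustrative choice $\Omega = 1.5I$. The scaling works because column-scaling by $D$ turns the row sums of the Z-matrix $\langle M_A + M_B\Omega\rangle D$ into $\langle M_A + M_B\Omega\rangle x = e > 0$, which for a Z-matrix with positive diagonal is exactly strict diagonal dominance.

```python
import numpy as np

def comparison(X):
    """Comparison matrix <X>: |x_ii| on the diagonal, -|x_ij| off it."""
    C = -np.abs(X)
    np.fill_diagonal(C, np.abs(np.diag(X)))
    return C

def scaling_matrix(M_A, M_B, Omega):
    """D = diag(x) with <M_A + M_B Omega> x = e, as in the remark above."""
    Abar = comparison(M_A + M_B @ Omega)
    x = np.linalg.solve(Abar, np.ones(Abar.shape[0]))   # x = Abar^{-1} e > 0
    return np.diag(x)

# Illustration with the splitting matrices of Section 2 and Omega = 1.5 I:
M_A = np.array([[6.0, 0.0], [-3.5, 6.0]])
M_B = np.diag([6.0, 6.0])
Omega = 1.5 * np.eye(2)
D = scaling_matrix(M_A, M_B, Omega)
CD = comparison(M_A + M_B @ Omega) @ D       # should be s.d.d.
```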
In addition, if the $H_+$-matrix $M_A + M_B\Omega$ is itself an s.d.d. matrix, then we can take $D = I$ in Theorem 2. In this case, we obtain the following corollary.
Corollary 1.
Assume that $A, B \in \mathbb{R}^{n\times n}$ are two $H_+$-matrices and $\Omega = \mathrm{diag}(\omega_{jj}) \in \mathbb{R}^{n\times n}$ with $\omega_{jj} > 0$, $j = 1, 2, \ldots, n$, satisfies
$$|b_{ij}|\,\omega_{jj} \le |a_{ij}| \ (i \ne j) \quad\text{and}\quad \mathrm{sign}(b_{ij}) = \mathrm{sign}(a_{ij}) \ \text{if } b_{ij} \ne 0.$$
Let $A = M_A - N_A$ be an H-splitting of $A$, $B = M_B - N_B$ be an H-compatible splitting of $B$, and the $H_+$-matrix $M_A + M_B\Omega$ be an s.d.d. matrix. Then the MMS method is convergent, provided one of the following conditions holds:
(a) $\Omega \ge D_A D_B^{-1}$;
(b) when $\Omega < D_A D_B^{-1}$,
$$D_B^{-1}\Bigl(D_A - \tfrac{1}{2}\bigl(\langle A\rangle + \langle M_A\rangle - |N_A|\bigr)\Bigr)e < \Omega e < D_A D_B^{-1}e.$$

3. Numerical Experiments

In this section, we consider a simple example to illustrate the theoretical results of Theorem 2. All computations are performed in MATLAB R2016b.
Example 1.
Consider the HLCP $(A, B, q)$, in which $A = \bar{A} + \mu I$ and $B = \bar{B} + \nu I$, where $\bar{A} = \mathrm{Tridiag}(-I, S, -I) \in \mathbb{R}^{n\times n}$ is block tridiagonal, $\bar{B} = I \otimes S \in \mathbb{R}^{n\times n}$, $S = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m\times m}$ (so that $n = m^2$), and $\mu$, $\nu$ are real parameters. Let $q = Az^* - Bw^*$ with
$$z^* = (0, 1, 0, 1, 0, 1, \ldots)^T \in \mathbb{R}^n,\qquad w^* = (1, 0, 1, 0, 1, 0, \ldots)^T \in \mathbb{R}^n.$$
In our calculations, we take $\mu = 4$ and $\nu = 0$ for $A$ and $B$ in Example 1, and $x^{(0)} = (2, 2, \ldots, 2)^T \in \mathbb{R}^n$ is used as the initial vector. The modulus-based Jacobi (NMJ) and Gauss–Seidel (NMGS) methods, with $r = 2$, are adopted. The NMJ and NMGS methods are stopped once the number of iterations exceeds 500 or the norm of the residual vector (RES) is less than $10^{-6}$, where
$$\mathrm{RES} := \|Az^{(k)} - Bw^{(k)} - q\|_2.$$
Here, we consider the two cases of Theorem 2. When $\Omega \ge D_A D_B^{-1}$, we take $\Omega = 2I$ for the NMJ and NMGS methods; in this case, Table 1 is obtained. When $\Omega < D_A D_B^{-1}$, we take $D = I$ and obtain that $I < \Omega < 2I$ and that $\langle M_A + M_B\Omega\rangle D$ is an s.d.d. matrix. In this case, we take $\Omega = 1.5I$ and $\Omega = 1.2I$ for the NMJ and NMGS methods, and obtain Table 2 and Table 3.
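A small-scale version of this experiment can be sketched in Python/NumPy. The construction below assumes the reconstruction $\bar{A} = \mathrm{Tridiag}(-I, S, -I)$ and $\bar{B} = I \otimes S$ given in Example 1; since the problem size and software differ from the paper's MATLAB runs, iteration counts and timings will not match Tables 1–3, but the residual still drops below $10^{-6}$ well within 500 iterations.

```python
import numpy as np

def example1(m, mu=4.0, nu=0.0):
    """Build the HLCP (A, B, q) of Example 1 at block size m (n = m^2)."""
    S = 4.0 * np.eye(m) - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)
    off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    n = m * m
    A_bar = np.kron(np.eye(m), S) - np.kron(off, np.eye(m))  # Tridiag(-I, S, -I)
    B_bar = np.kron(np.eye(m), S)                            # I kron S
    A = A_bar + mu * np.eye(n)
    B = B_bar + nu * np.eye(n)
    z_star = np.tile([0.0, 1.0], n // 2 + 1)[:n]
    w_star = np.tile([1.0, 0.0], n // 2 + 1)[:n]
    return A, B, A @ z_star - B @ w_star

def nmgs(A, B, q, Omega, r=2.0, tol=1e-6, max_iter=500):
    """Modulus-based Gauss-Seidel (NMGS): M_A = tril(A), M_B = tril(B)."""
    M_A, M_B = np.tril(A), np.tril(B)
    N_A, N_B = M_A - A, M_B - B
    S = M_A + M_B @ Omega
    x = 2.0 * np.ones(len(q))                # x^(0) = (2, 2, ..., 2)^T
    for k in range(max_iter):
        rhs = (N_A + N_B @ Omega) @ x + (B @ Omega - A) @ np.abs(x) + r * q
        x = np.linalg.solve(S, rhs)
        z = (np.abs(x) + x) / r
        w = Omega @ (np.abs(x) - x) / r
        if np.linalg.norm(A @ z - B @ w - q) < tol:
            return z, w, k + 1
    return z, w, max_iter

A, B, q = example1(6)                         # small instance, n = 36
z, w, it = nmgs(A, B, q, 2.0 * np.eye(36))    # Omega = 2I, case (a) of Theorem 2
```

Dense solves are used here for simplicity; a production implementation would exploit the banded structure of $M_A + M_B\Omega$.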
The numerical results in Table 1, Table 2 and Table 3 not only further confirm that the MMS method is feasible and effective, but also show that the convergence condition in Theorem 2 is reasonable.

4. Conclusions

In this paper, the modulus-based matrix splitting (MMS) iteration method for solving the horizontal linear complementarity problem (HLCP) with $H_+$-matrices has been further considered. The main contribution is an improved convergence condition for the MMS iteration method, which enlarges the range of its applications compared with previous work [15].

Author Contributions

Conceptualization, methodology, software, S.W.; original draft preparation, C.L.; translation, editing and review, S.W.; validation, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 11961082).

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors would like to thank the three referees, whose opinions and comments greatly improved the presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cottle, R.W.; Pang, J.-S.; Stone, R.E. The Linear Complementarity Problem; Academic Press: San Diego, CA, USA, 1992.
  2. Eaves, B.C.; Lemke, C.E. Equivalence of LCP and PLS. Math. Oper. Res. 1981, 6, 475–484.
  3. Ye, Y. A fully polynomial time approximation algorithm for computing a stationary point of the generalized linear complementarity problems. Math. Oper. Res. 1993, 18, 334–345.
  4. Mangasarian, O.L.; Pang, J.S. The extended linear complementarity problem. SIAM J. Matrix Anal. Appl. 1995, 16, 359–368.
  5. Sznajder, R.; Gowda, M.S. Generalizations of P0- and P-properties; extended vertical and horizontal linear complementarity problems. Linear Algebra Appl. 1995, 223, 695–715.
  6. Ferris, M.C.; Pang, J.S. Complementarity and Variational Problems: State of the Art; SIAM: Philadelphia, PA, USA, 1997.
  7. Xiu, N.; Zhang, J. A smoothing Gauss–Newton method for the generalized horizontal linear complementarity problems. J. Comput. Appl. Math. 2001, 129, 195–208.
  8. Zhang, Y. On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem. SIAM J. Optim. 1994, 4, 208–227.
  9. Gao, X.; Wang, J. Analysis and application of a one-layer neural network for solving horizontal linear complementarity problems. Int. J. Comput. Intell. Syst. 2014, 7, 724–732.
  10. Mezzadri, F.; Galligani, E. Modulus-based matrix splitting methods for horizontal linear complementarity problems. Numer. Algorithms 2020, 83, 201–219.
  11. Bai, Z.-Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933.
  12. Zhang, J.-X.; Yang, G.-H. Low-complexity tracking control of strict-feedback systems with unknown control directions. IEEE Trans. Automat. Control 2019, 64, 5175–5182.
  13. Zhang, X.-F.; Chen, Y.Q. Admissibility and robust stabilization of continuous linear singular fractional order systems with the fractional order α: The 0 < α < 1 case. ISA Trans. 2018, 82, 42–50.
  14. Zhang, J.-X.; Wang, Q.-G.; Ding, W. Global output-feedback prescribed performance control of nonlinear systems with unknown virtual control coefficients. IEEE Trans. Automat. Control 2022, 67, 6904–6911.
  15. Zheng, H.; Vong, S. On convergence of the modulus-based matrix splitting iteration method for horizontal linear complementarity problems of H+-matrices. Appl. Math. Comput. 2020, 369, 124890.
  16. Berman, A.; Plemmons, R.J. Nonnegative Matrices in the Mathematical Sciences; Academic Press: New York, NY, USA, 1979.
Table 1. Numerical results for Ω = 2I.

        m      100              200              300
NMJ     IT     30               31               32
        CPU    0.0381           0.2120           0.4114
        RES    6.35 × 10^{-7}   6.61 × 10^{-7}   5.02 × 10^{-7}
NMGS    IT     19               20               20
        CPU    0.0314           0.0952           0.2488
        RES    6.86 × 10^{-7}   4.79 × 10^{-7}   7.30 × 10^{-7}
Table 2. Numerical results for Ω = 1.5I.

        m      100              200              300
NMJ     IT     29               30               31
        CPU    0.0379           0.1553           0.3976
        RES    9.71 × 10^{-7}   9.05 × 10^{-7}   6.65 × 10^{-7}
NMGS    IT     18               19               19
        CPU    0.0243           0.0931           0.2300
        RES    6.01 × 10^{-7}   4.00 × 10^{-7}   6.11 × 10^{-7}
Table 3. Numerical results for Ω = 1.2I.

        m      100              200              300
NMJ     IT     39               39               40
        CPU    0.0474           0.1930           0.5127
        RES    6.78 × 10^{-7}   9.78 × 10^{-7}   8.12 × 10^{-7}
NMGS    IT     20               20               21
        CPU    0.0283           0.1109           0.2595
        RES    4.76 × 10^{-7}   8.47 × 10^{-7}   4.59 × 10^{-7}

Share and Cite

Li, C.; Wu, S. An Improved Convergence Condition of the MMS Iteration Method for Horizontal LCP of H+-Matrices. Mathematics 2023, 11, 1842. https://doi.org/10.3390/math11081842