Article

Regularized Normalization Methods for Solving Linear and Nonlinear Eigenvalue Problems

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(18), 3997; https://doi.org/10.3390/math11183997
Submission received: 21 August 2023 / Revised: 6 September 2023 / Accepted: 14 September 2023 / Published: 20 September 2023

Abstract:
To solve linear and nonlinear eigenvalue problems, we develop a simple method that directly solves a nonhomogeneous system obtained by supplementing the eigen-equation with a normalization condition ensuring the uniqueness of the eigenvector. The novelty of the present paper is that we transform the original homogeneous eigen-equation into a nonhomogeneous one by a normalization technique and introduce a simple merit function whose minimum yields a precise eigenvalue. For complex eigenvalue problems, two normalization equations are derived from two different normalization conditions. Golden section search algorithms are employed to minimize the merit functions to locate real and complex eigenvalues, and simultaneously we obtain precise eigenvectors satisfying the eigen-equation. Two regularized normalization methods accelerate the convergence of two extensions of the simple method, and a derivative-free fixed-point Newton iterative scheme is developed to compute real eigenvalues; its convergence is roughly ten times faster than that of the golden section search algorithm. Newton methods are developed for solving two systems of nonlinear regularized equations, and the efficiency and accuracy are significantly improved. Over ten examples demonstrate the high performance of the proposed methods; among them, the two regularization methods are better than the simple method.

1. Introduction

It is well known that the Rayleigh quotient [1,2]
$$R(\mathbf{x}) := \frac{\mathbf{x}^{\mathrm{T}} A \mathbf{x}}{\mathbf{x}^{\mathrm{T}} \mathbf{x}} \quad (1)$$
can be used to determine the real eigenvalues of a symmetric matrix $A$.
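As a quick numerical illustration (a minimal sketch; the 2 × 2 matrix and the function name are our illustrative choices, not from the paper), evaluating the Rayleigh quotient at an eigenvector of a symmetric matrix recovers the corresponding eigenvalue:

```python
import numpy as np

def rayleigh_quotient(A, x):
    """R(x) = (x^T A x) / (x^T x), Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x) / float(x @ x)

# Hypothetical symmetric test matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# (1, 1) is the eigenvector of the eigenvalue 3, so R returns 3.
print(rayleigh_quotient(A, [1.0, 1.0]))
```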
In this paper, we derive a simple normalized condition solver to obtain the eigenvalues of
$$A\mathbf{x} = \lambda\mathbf{x}, \quad (2)$$
where A R n × n is a given matrix, x R n is an unknown vector, and λ is an unknown eigenvalue in the standard linear eigen-equation. When A is not a symmetric matrix, the Rayleigh quotient (1) cannot be used to determine the eigenvalues. Liu et al. [3] developed a new quotient to determine the eigenvalues of Equation (2).
As noticed by Liu et al. [4], it is hard to directly determine the eigenvalue and eigenvector of Equation (2) by a numerical method. In fact, from $A\mathbf{x} - \lambda\mathbf{x} = \mathbf{0}$, a numerical method always returns $\mathbf{x} = \mathbf{0}$, since the right-hand side is a zero vector. In [4], a new strategy to overcome this difficulty uses a variable transformation to a new nonhomogeneous linear system, which possesses a nonzero external excitation term on the right-hand side, so that one obtains a nonzero eigenvector when the eigen-parameter λ is an eigenvalue. We are going to propose a simple method to nonhomogenize the eigen-equation into a nonhomogeneous linear system, after which it is easy to find the eigenvalue and eigenvector by a minimization technique.
The standard free vibration model of elastic structural elements is
$$M\ddot{\mathbf{q}}(t) + C\dot{\mathbf{q}}(t) + K\mathbf{q}(t) = \mathbf{0}, \quad (3)$$
which by $\mathbf{q}(t) = e^{\lambda t}\mathbf{x}$ renders a quadratically nonlinear eigenvalue problem [5]:
$$(\lambda^2 M + \lambda C + K)\mathbf{x} = \mathbf{0}. \quad (4)$$
A lot of applications and solvers of quadratic eigenvalue problems have been proposed, e.g., stability analysis of time-delay systems [6], free vibrations of fluid–solids structures [7], a modified second-order Arnoldi method [8], the inexact residual iteration method [9], the homotopy perturbation technique [10], electromagnetic wave propagation and analysis of an acoustic fluid contained in a cavity with absorbing walls [11], and a friction-induced vibration problem under variability [12]. In addition, several applications and solvers of generalized eigenvalue problems have been addressed, e.g., the block Arnoldi-type contour integral spectral projection method [13], small-sample statistical condition estimation [14], matrix perturbation methods [15], the overlapping finite element method [16], the complex HZ method [17], the context of sensor selection [18], and a generalized Arnoldi method [19].
As done in [4], we can take
$$\mathbf{y} = \lambda\mathbf{x}, \quad (5)$$
and combine Equations (5) and (4) to obtain
$$\begin{pmatrix} 0_n & I_n \\ -K & -C \end{pmatrix}\begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix} = \lambda\begin{pmatrix} I_n & 0_n \\ 0_n & M \end{pmatrix}\begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix}. \quad (6)$$
Upon defining
$$\mathbf{X} := \begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix}, \quad \mathbf{A} := \begin{pmatrix} 0_n & I_n \\ -K & -C \end{pmatrix}, \quad \mathbf{B} := \begin{pmatrix} I_n & 0_n \\ 0_n & M \end{pmatrix}, \quad (7)$$
Equation (6) becomes a generalized eigenvalue problem for the $2n$-vector $\mathbf{X}$:
$$\mathbf{A}\mathbf{X} = \lambda\mathbf{B}\mathbf{X}, \quad (8)$$
where $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{2n \times 2n}$. Equation (8) is used to determine the eigen-pair $(\lambda, \mathbf{X})$; it is a linear eigen-equation associated with the pencil $\mathbf{A} - \lambda\mathbf{B}$, where λ is an eigen-parameter. A main drawback of this augmentation is that the dimension is doubled from n to 2n.
Equation (8) can be written as
$$(\mathbf{A} - \lambda\mathbf{B})\mathbf{X} = \mathbf{0}. \quad (9)$$
Because the right-hand side is a zero vector, solving it numerically yields only the trivial solution $\mathbf{X} = \mathbf{0}$. To avoid this situation, Liu et al. [4] introduced an external excitation method by letting $\mathbf{X} = \mathbf{Y} + \mathbf{X}_0$, such that
$$(\mathbf{A} - \lambda\mathbf{B})\mathbf{Y} = -(\mathbf{A} - \lambda\mathbf{B})\mathbf{X}_0. \quad (10)$$
Solving this equation for $\mathbf{Y}$, the eigenvector $\mathbf{X} = \mathbf{Y} + \mathbf{X}_0$ is obtained. However, how to select a proper exciting vector $\mathbf{X}_0$ remains a problem. The basic idea is the transformation from the homogeneous Equation (9) to the nonhomogeneous Equation (10). Whether a simpler method can realize this type of transformation without introducing an extra exciting vector $\mathbf{X}_0$ is an interesting problem. The present paper attempts to make this transformation very easy, which is the main motivation and the major novelty: realizing the transformation by a simple normalization technique. The present idea is simpler than that in [4], so the new technique is called a simple method; it is introduced in Section 2.
Nonlinear eigenvalue problems are important and find a lot of real applications in engineering and applied fields. Betcke et al. [20] collected 52 nonlinear eigenvalue problems in the form of a MATLAB toolbox, which contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. Recently, El-Guide et al. [21] presented two approximation methods for computing eigenfrequencies and eigenmodes of large-scale nonlinear eigenvalue problems resulting from boundary element method solutions of some types of acoustic eigenvalue problems in 3D space. We extend Equation (4) to a general nonlinear eigenvalue problem [20]:
$$N(\lambda)\mathbf{x} = \mathbf{0}, \quad (11)$$
which is a nonlinear eigen-equation in λ used to solve the eigen-pair $(\lambda, \mathbf{x})$, where $N(\lambda) \in \mathbb{R}^{n \times n}$. Equation (11) is a nonlinear eigenvalue problem because $N(\lambda)$ is a nonlinear matrix function of the eigen-parameter λ. In Equation (9), $N(\lambda) = \mathbf{A} - \lambda\mathbf{B}$ is a linear matrix function of λ, so that it is a linear eigenvalue problem.
Most numerical methods that deal with the nonlinear eigenvalue problems are Newton-type methods [22,23,24,25]. In [26], some available solution techniques for nonlinear eigenvalue problems using the Jacobi–Davidson, Arnoldi and the rational Krylov methods were presented. Zhou [27] used the Leray–Schauder fixed-point theorem to acquire the existence of positive solutions of a nonlinear eigenvalue problem. El-Ajou [28] demonstrated the general exact and numerical solutions of four significant matrix fractional differential equations, and a new computational skill was applied for obtaining the general solutions of the nonlinear issue in the Caputo sense. Jadamba et al. [29] addressed the nonlinear inverse problem of estimating the stochasticity of a random parameter in stochastic partial differential equations by using a regularized projected stochastic gradient scheme. Later, Harcha et al. [30] tackled the nonlinear eigenvalue problem with the p-Laplacian fractional involving singular weights and obtained the nonexistence of solutions by utilizing a typical version of Picone’s identity.
The nonlinear eigenvalue problem is a great challenge for developing efficient and accurate methods [31]. Even for polynomial nonlinear eigenvalue problems, the linearizations to the linear eigenvalue problems in a larger space are quite complicated and are in general not unique. The present paper intends to overcome these challenges, wherein we will directly solve the nonlinear eigenvalue problem in its nonhomogeneous form by incorporating a normalization condition in the original space.
In this paper, we will encounter the problem of solving a nonlinear equation $f(x) = 0$ when the explicit function $f(x)$ is not available. The Newton method for iteratively solving $f(x) = 0$ is given by
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots, \quad (12)$$
which needs to carry out a point-wise derivative $f'(x_n)$ in the iteration. For some problems, the explicit function $f(x)$ might not be available, which induces great difficulty in using the Newton method to solve the nonlinear equation. To overcome this inefficiency, Liu [32] derived a derivative-free iterative scheme based on a new splitting technique:
$$x_{n+1} = x_n - \frac{f(x_n)}{a + bf(x_n)}, \quad n = 0, 1, \ldots, \quad (13)$$
where a and b are constants. In Section 6.3, we will develop a derivative-free fixed-point Newton method to determine a and b. With regard to derivative-free fixed-point Newton methods, one can refer to [33] and the references therein.
In addition to the derivative-free fixed-point Newton method and the minimization techniques, we will also develop the Newton method for the nonlinear equations system by incorporating the normalization condition into the eigen-equation. Arnoldi [34] proposed that the method of minimized iterations was recommended as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix. After that, many iterative methods were surveyed in [35] at length. Argyros et al. [36] addressed a semilocal analysis of the Newton–Kurchatov method for solving nonlinear equations involving the splitting of an operator. They also acquired weaker sufficient semilocal convergence criteria and tighter error estimates than in earlier works. Argyros and Shakhno [37] employed local convergence of the combined Newton–Kurchatov method for solving Banach-space-valued equations. Further, they also mentioned that these modifications of earlier conditions resulted in tighter convergence analysis and more precise information on the location of the solution.
This paper develops several simple approaches, including two regularization methods, to solve nonlinear eigenvalue problems. The contributions and innovation points of this paper are given as follows:
  • Nonlinear eigenvalue problems can be transformed into minimization problems, for both real and complex eigenvalues.
  • For solving linear and nonlinear eigenvalue problems, this paper presents normalization techniques to create new nonhomogeneous systems and merit functions.
  • Two simple regularization methods are combined with the Newton iteration method, which results in very fast convergence to solve nonlinear eigenvalue problems.
  • Using the derivative-free fixed-point Newton method to directly solve the regularized scalar equation for nonlinear eigenvalue problems, we can quickly obtain high-precision eigenvalues.
The remainder of the paper is arranged as follows. In Section 2, we consider a normalization condition for the uniqueness of the eigenvector and derive a simple nonhomogeneous linear system to minimize the residual of the eigen-equation by using the 1D golden section search algorithm (1D GSSA) to determine the real eigenvalue, which results in a simple method (SM). Some examples of linear eigenvalue problems in Section 3 exhibit the advantages of the present methodology of the SM to find the approximate solution of Equation (2). A simple method (SM) of the nonlinear eigenvalue problem (11) is presented in Section 4, which is combined with the golden section search algorithm to be a stable solver of eigenvalues and eigenvectors. For complex eigenvalue problems, we propose two normalization equations with nonhomogeneous terms that appear on the right-hand side. Section 5 displays some examples of nonlinear eigenvalue problems solved by the SM and GSSA. In Section 6, we discuss two simple regularization methods and provide a derivative-free fixed-point Newton method for quickly finding the real eigenvalues. The combination of Newton’s method and regularized equations is carried out in this section. Finally, the conclusions are drawn in Section 7.

2. A Simple Method for Standard Linear Eigenvalue Problems

We can observe that $\mathbf{x} \ne \mathbf{0}$ in Equation (2) is not unique, because $\beta\mathbf{x}$, $\beta \ne 0$, is also a solution if $\mathbf{x}$ is a solution of Equation (2). Therefore, for the uniqueness of the eigenvector of Equation (2), an extra normalization condition
$$\mathbf{b}^{\mathrm{T}}\mathbf{x} = 1 \quad (14)$$
can be imposed, where $\mathbf{b}$ is a given nonzero vector. For example, if we take $\mathbf{b} = (1, 0, \ldots, 0)^{\mathrm{T}}$, then the first component of $\mathbf{x}$ is normalized to $x_1 = 1$. If $b_J = 1$ for a given J and $b_k = 0$ for $k \ne J$, we normalize the J-th component of $\mathbf{x}$ to $x_J = 1$.
Theorem 1.
If $\mathbf{x} \in \mathbb{R}^n$ in Equation (2) is constrained by the normalization condition (14) for the uniqueness of $\mathbf{x}$, we can derive a nonhomogeneous system to determine $\mathbf{x}$:
$$(A^{\mathrm{T}}A - \lambda A - \lambda A^{\mathrm{T}} + \lambda^2 I_n + \mathbf{b}\mathbf{b}^{\mathrm{T}})\mathbf{x} = \mathbf{b}. \quad (15)$$
Proof. 
Equations (2) and (14) yield an over-determined system:
$$\begin{pmatrix} A - \lambda I_n \\ \mathbf{b}^{\mathrm{T}} \end{pmatrix}\mathbf{x} = \begin{pmatrix} \mathbf{0}_n \\ 1 \end{pmatrix}. \quad (16)$$
Multiplying both sides by
$$\begin{pmatrix} A - \lambda I_n \\ \mathbf{b}^{\mathrm{T}} \end{pmatrix}^{\mathrm{T}} = \begin{pmatrix} A^{\mathrm{T}} - \lambda I_n & \mathbf{b} \end{pmatrix}, \quad (17)$$
we have
$$\begin{pmatrix} A^{\mathrm{T}} - \lambda I_n & \mathbf{b} \end{pmatrix}\begin{pmatrix} A - \lambda I_n \\ \mathbf{b}^{\mathrm{T}} \end{pmatrix}\mathbf{x} = \begin{pmatrix} A^{\mathrm{T}} - \lambda I_n & \mathbf{b} \end{pmatrix}\begin{pmatrix} \mathbf{0}_n \\ 1 \end{pmatrix}.$$
Expanding this, we can prove Equation (15). □
Remark 1.
If the coefficient matrix A is highly ill-conditioned, it is better to replace Equation (15) with
$$(A - \lambda I_n + \mathbf{b}\mathbf{b}^{\mathrm{T}})\mathbf{x} = \mathbf{b}, \quad (18)$$
the condition number of which is reduced to one-half. Equation (18) is easily derived by adding $\mathbf{b}^{\mathrm{T}}\mathbf{x}\,\mathbf{b}$ on both sides of Equation (2) and taking into account Equation (14) with $\mathbf{b}^{\mathrm{T}}\mathbf{x} = 1$.
If λ is an eigenvalue and $\mathbf{x} \ne \mathbf{0}$ is the corresponding eigenvector, $\|A\mathbf{x} - \lambda\mathbf{x}\| = 0$ by Equation (2); otherwise, $\|A\mathbf{x} - \lambda\mathbf{x}\| > 0$ for $\mathbf{x} \ne \mathbf{0}$. As a consequence, $\|A\mathbf{x} - \lambda\mathbf{x}\| \ge 0$ for all $\mathbf{x} \in \mathbb{R}^n$. For a given $\lambda \in \mathbb{R}$, if $\mathbf{x}$ is solved from Equation (18), then we can determine the correct value of λ by minimizing the following merit function:
$$\min_{\lambda \in [a,b]} f(\lambda) := \|A\mathbf{x} - \lambda\mathbf{x}\| \ge 0, \quad (19)$$
where $\|A\mathbf{x} - \lambda\mathbf{x}\|$ denotes the Euclidean norm of $A\mathbf{x} - \lambda\mathbf{x}$, and $\lambda \in [a,b]$ is a real eigenvalue to be sought.
The numerical procedures for solving Equation (2) are given by (i) selecting [ a , b ] and b , (ii) solving Equation (18) for each required λ i [ a , b ] , and (iii) applying the one-dimensional golden section search algorithm (1D GSSA) to Equation (19) to pick up the eigenvalue.
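Steps (i)–(iii) can be sketched in a few lines of Python (a minimal sketch: the test matrix, search interval, and function names are our illustrative choices, not from the paper). For each trial λ we solve the nonhomogeneous system (18) and minimize the merit function (19) with a 1D golden section search:

```python
import numpy as np

def sm_residual(A, b_vec, lam):
    """Solve (A - lam*I + b b^T) x = b, Eq. (18), and return the
    merit function f(lam) = ||A x - lam x|| of Eq. (19) and x."""
    n = A.shape[0]
    x = np.linalg.solve(A - lam * np.eye(n) + np.outer(b_vec, b_vec), b_vec)
    return np.linalg.norm(A @ x - lam * x), x

def golden_section(f, a, b, eps=1e-12):
    """1D golden section search for the minimizer of f on [a, b]."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > eps:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# Hypothetical 2x2 test matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b_vec = np.array([1.0, 0.0])        # normalization: x_1 = 1
lam = golden_section(lambda t: sm_residual(A, b_vec, t)[0], 2.5, 3.5)
print(lam)                          # converges to the eigenvalue 3
```

The eigenvector comes for free: at the minimizing λ, the solution of (18) is the eigenvector normalized so that its first component equals one.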
Remark 2.
This method, involving merely two equations, (18) and (19), is the simplest method to find the real eigenvalues of Equation (2) and is thus labeled a simple method (SM). When the SM is used to solve the nonhomogeneous linear system (18) for each eigen-parameter λ, we can easily compute the real eigenvalue λ and then the corresponding eigenvector $\mathbf{x}$ with the aid of Equation (19).

3. Examples of Linear Eigenvalue Problems

In order to assess the performance of the newly developed SM, we test some linear eigenvalue problems.
Example 1.
We first demonstrate the case
$$A = \begin{pmatrix} 6 & 2 & 2 \\ 2 & 5 & 0 \\ 2 & 0 & 7 \end{pmatrix},$$
and the exact eigenvalues are $\lambda(A) = \{3, 6, 9\}$.
Although this example is very simple, we adopt it to test the accuracy and efficiency of the proposed SM since the exact eigenvalues are known. By applying the SM, we take J = 1 and $\varepsilon = 10^{-15}$ as used in the 1D GSSA. We plot f(λ) with respect to the eigen-parameter over an interval, as shown in Figure 1, for which the three minimal points are the corresponding eigenvalues λ = 3, 6, 9; the corresponding eigenvectors are given as follows:
$$\mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 0.5 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 1 \\ 0.5 \\ 1 \end{pmatrix}.$$
Table 1 lists some results obtained by SM, where EE means the error of the eigenvalue and NI denotes the number of iterations.
Example 2.
We consider
$$A = \begin{pmatrix} 1.6407 & 1.0814 & 1.2014 & 1.1539 \\ 1.0814 & 4.1573 & 7.4035 & 1.0463 \\ 1.2014 & 7.4035 & 2.7890 & 1.5737 \\ 1.1539 & 1.0463 & 1.5737 & 8.6944 \end{pmatrix},$$
where $\lambda(A) = \{4, 2, 8, 12\}$.
In the SM, we take $\mathbf{b} = (1, 1, 0, 0)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$ for the 1D GSSA. We plot f(λ) in Figure 2, the four minimal points of which are the corresponding eigenvalues λ = 4, 2, 8, 12.
Table 2 lists some results obtained by the SM. The eigenvectors corresponding to λ = {4, 2, 8, 12} are given as follows:
$$\mathbf{x}^{(1)} = \begin{pmatrix} 0.134043 \\ 0.865957 \\ 0.982563 \\ 0.062616 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 1.219790 \\ 0.219790 \\ 0.017379 \\ 0.155674 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 0.335959 \\ 0.664041 \\ 0.526805 \\ 1.636074 \end{pmatrix}, \quad \mathbf{x}^{(4)} = \begin{pmatrix} 0.091303 \\ 0.908697 \\ 0.855567 \\ 0.663053 \end{pmatrix}.$$
Example 3.
A nonsymmetric matrix is given by
$$A = \begin{pmatrix} 3.5488 & 15.593 & 8.5775 & 4.0123 \\ 2.3595 & 24.526 & 14.596 & 5.8157 \\ 0.089953 & 27.599 & 21.483 & 5.8415 \\ 1.9227 & 55.667 & 39.717 & 10.558 \end{pmatrix},$$
where $\lambda(A) = \{1, 2, 6, 30\}$.
With $\mathbf{b} = (1, 1, 1, 1)^{\mathrm{T}}$, we plot f(λ) in Figure 3, the four minimal points of which are the corresponding eigenvalues λ = 1, 2, 6, 30.
Table 3 lists some results obtained by the SM. The eigenvectors corresponding to λ = {1, 2, 6, 30} are given as follows:
$$\mathbf{x}^{(1)} = \begin{pmatrix} 2.320260 \\ 2.054856 \\ 2.309442 \\ 1.574846 \end{pmatrix}, \quad \mathbf{x}^{(2)} = \begin{pmatrix} 0.271484 \\ 0.133857 \\ 0.009676 \\ 0.604335 \end{pmatrix}, \quad \mathbf{x}^{(3)} = \begin{pmatrix} 0.281715 \\ 0.120998 \\ 0.542044 \\ 0.860669 \end{pmatrix}, \quad \mathbf{x}^{(4)} = \begin{pmatrix} 0.107796 \\ 0.174668 \\ 0.238818 \\ 0.478718 \end{pmatrix}.$$
Example 4.
Consider the Frank nonsymmetric matrix [38,39,40]:
$$A = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 0 & 2 & 3 & \cdots & 3 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & n-1 & n \end{pmatrix}.$$
For n = 30, the largest eigenvalue is given by
$$\lambda = 96.200622293285.$$
In the SM, we take J = 10 and $\varepsilon = 10^{-15}$ for the 1D GSSA. The function f(λ) is plotted in Figure 4, for which the minimal points are the last nine eigenvalues. Through 103 iterations of the 1D GSSA, λ = 96.20062229328501 and $\|A\mathbf{x} - \lambda\mathbf{x}\| = 3.22 \times 10^{-16}$ are obtained.
Example 5.
Let $A = [a_{ij}]$, $i, j = 1, \ldots, n$. In Equation (2), we take the Hilbert matrix:
$$a_{ij} = \frac{1}{i+j-1}, \quad i, j = 1, \ldots, n.$$
Since the Hilbert matrix is highly ill-conditioned, we take Equation (18) instead of Equation (15) to compute the eigenvalues. In Figure 5, with $\mathbf{b} = \mathbf{1}_n$ and n = 7, we plot f(λ) with respect to the eigen-parameter over an interval, for which the seven minimal points are the corresponding eigenvalues. The largest eigenvalue lies between 1.5 and 2.
In Table 4, we list the largest eigenvalues for $n = 7, \ldots, 10$ obtained in [41] using the cyclic Jacobi method [42] and in [3] using the external excitation method.
Due to the highly ill-conditioned nature of the Hilbert matrix with n = 100, this is a quite difficult linear eigenvalue problem. For this problem, we take J = 1 to compute the largest eigenvalue, which is given by λ = 2.182696097757424. The SM converges very fast, within 69 iterations under $\varepsilon = 10^{-15}$, and the error of the eigen-equation is $\|A\mathbf{x} - \lambda\mathbf{x}\| = 2.4 \times 10^{-15}$. Notice that the smallest eigenvalue of the Hilbert matrix with a large n is very difficult to compute since it is very close to zero. However, for n = 100 and $b_{49} = b_{50} = 1$, we can obtain the smallest eigenvalue $6.18 \times 10^{-30}$, whose error is $\|A\mathbf{x} - \lambda\mathbf{x}\| = 1.41 \times 10^{-16}$.

4. A Simple Method for Nonlinear Eigenvalue Problems

If $\mathbf{x}$ is an eigenvector of Equation (11), then $\alpha\mathbf{x}$ with $\alpha \ne 0$ is also an eigenvector, which means that the eigenvector of Equation (11) is not unique. Therefore, we can impose on Equation (11) the extra normalization condition (14).
Theorem 2.
If $\mathbf{x} \in \mathbb{R}^n$ in Equation (11) is subject to the normalization condition (14) for the uniqueness of $\mathbf{x}$, we can derive a nonhomogeneous equation system to determine $\mathbf{x}$:
$$[N(\lambda) + \mathbf{b}\mathbf{b}^{\mathrm{T}}]\mathbf{x} = \mathbf{b}. \quad (29)$$
Proof. 
Equation (29) is easily derived by adding $\mathbf{b}^{\mathrm{T}}\mathbf{x}\,\mathbf{b}$ on both sides of Equation (11):
$$N(\lambda)\mathbf{x} + \mathbf{b}\mathbf{b}^{\mathrm{T}}\mathbf{x} = \mathbf{b}^{\mathrm{T}}\mathbf{x}\,\mathbf{b}, \quad (30)$$
which, with $\mathbf{b}^{\mathrm{T}}\mathbf{x} = 1$ by Equation (14) being taken on the right-hand side, yields
$$N(\lambda)\mathbf{x} + \mathbf{b}\mathbf{b}^{\mathrm{T}}\mathbf{x} = \mathbf{b}. \quad (31)$$
Thus, we prove Equation (29). □
The numerical procedures for determining the real eigenvalues of Equation (11) are summarized as follows: (i) Select [a, b] and $\mathbf{b}$. (ii) Solve Equation (29) for each required $\lambda_i \in [a, b]$. (iii) Apply the 1D GSSA to Equation (19) with $f(\lambda) = \|N(\lambda)\mathbf{x}\|$ to pick up the eigenvalue. This method is the simplest way to find the real eigenvalues of Equation (11) and is again labeled a simple method (SM).
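The three steps carry over almost verbatim from the linear case; a minimal sketch follows (the quadratic pencil below is a hypothetical toy of ours, not an example from the paper). At each trial λ we solve $[N(\lambda) + \mathbf{b}\mathbf{b}^{\mathrm{T}}]\mathbf{x} = \mathbf{b}$ of Eq. (29) and minimize $f(\lambda) = \|N(\lambda)\mathbf{x}\|$:

```python
import numpy as np

def nonlinear_sm_residual(N, b_vec, lam):
    """Solve [N(lam) + b b^T] x = b, Eq. (29), and return f(lam) = ||N(lam) x||."""
    Nl = N(lam)
    x = np.linalg.solve(Nl + np.outer(b_vec, b_vec), b_vec)
    return np.linalg.norm(Nl @ x)

def golden_section(f, a, b, eps=1e-12):
    """1D golden section search for the minimizer of f on [a, b]."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > eps:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# Hypothetical quadratic pencil N(lam) = lam^2 M + lam C + K
# with real eigenvalues ±1 and ±2.
M = np.eye(2)
C = np.zeros((2, 2))
K = np.diag([-1.0, -4.0])
N = lambda lam: lam**2 * M + lam * C + K
b_vec = np.array([1.0, 0.0])
lam = golden_section(lambda t: nonlinear_sm_residual(N, b_vec, t), 0.5, 1.5)
print(lam)                           # converges to the eigenvalue 1
```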
Theorem 3.
If $\mathbf{x} \in \mathbb{C}^n$ in Equation (11) is subject to the normalization condition (14) for the uniqueness of $\mathbf{x}$, and λ is a complex eigenvalue, we can derive a nonhomogeneous equation system to determine $\mathbf{x}$:
$$\begin{pmatrix} N_1 + \mathbf{b}\mathbf{b}^{\mathrm{T}} & -N_2 \\ N_2 & N_1 + \mathbf{b}\mathbf{b}^{\mathrm{T}} \end{pmatrix}\begin{pmatrix} \mathbf{u} \\ \mathbf{v} \end{pmatrix} = \begin{pmatrix} \mathbf{b} \\ \mathbf{0}_n \end{pmatrix}, \quad (32)$$
where
$$\lambda = \lambda_R + i\lambda_I, \quad N = N_1 + iN_2, \quad \mathbf{x} = \mathbf{u} + i\mathbf{v}. \quad (33)$$
Proof. 
Inserting Equation (33) into Equation (29) yields
$$(N_1 + \mathbf{b}\mathbf{b}^{\mathrm{T}} + iN_2)(\mathbf{u} + i\mathbf{v}) = \mathbf{b}. \quad (34)$$
Equating the real and imaginary parts of Equation (34), we have
$$(N_1 + \mathbf{b}\mathbf{b}^{\mathrm{T}})\mathbf{u} - N_2\mathbf{v} = \mathbf{b}, \quad N_2\mathbf{u} + (N_1 + \mathbf{b}\mathbf{b}^{\mathrm{T}})\mathbf{v} = \mathbf{0}_n, \quad (35)$$
which can be recast as Equation (32). □
When $\mathbf{x}$ is solved from Equation (32), we can employ the following minimization:
$$\min_{(\lambda_R, \lambda_I) \in [a,b] \times [c,d]} f(\lambda_R, \lambda_I) := \|N(\lambda_R, \lambda_I)\mathbf{x}\| \ge 0 \quad (36)$$
to determine the complex eigenvalue.
For the complex eigenvalue problem, we can also derive another normalization equation.
Theorem 4.
For $\mathbf{x} = \mathbf{u} + i\mathbf{v} \in \mathbb{C}^n$ in Equation (11), imposed by the normalization condition
$$\mathbf{c}^{\mathrm{T}}\mathbf{y} = 1, \quad \mathbf{y} := \begin{pmatrix} \mathbf{u} \\ \mathbf{v} \end{pmatrix}, \quad (37)$$
for the uniqueness of $\mathbf{x}$, with λ a complex eigenvalue, we can derive a nonhomogeneous equation system to determine $\mathbf{x} = \mathbf{u} + i\mathbf{v}$:
$$(D + \mathbf{c}\mathbf{c}^{\mathrm{T}})\mathbf{y} = \mathbf{c}, \quad (38)$$
where $\mathbf{c}$ is a 2n-dimensional constant vector and
$$D := \begin{pmatrix} N_1 & -N_2 \\ N_2 & N_1 \end{pmatrix}. \quad (39)$$
Proof. 
Inserting Equation (33) into Equation (11) and equating the real and imaginary parts, we can derive
$$D\mathbf{y} = \mathbf{0}. \quad (40)$$
Adding $\mathbf{c}\mathbf{c}^{\mathrm{T}}\mathbf{y}$ on both sides yields
$$(D + \mathbf{c}\mathbf{c}^{\mathrm{T}})\mathbf{y} = \mathbf{c}\mathbf{c}^{\mathrm{T}}\mathbf{y}, \quad (41)$$
which by using Equation (37) on the right-hand side generates Equation (38). □
Therefore, the numerical procedures for determining the complex eigenvalues of the nonlinear eigenvalue problem (11) are depicted as follows: (i) Select $[a,b] \times [c,d]$ and $\mathbf{b}$. (ii) For each required $(\lambda_R, \lambda_I) \in [a,b] \times [c,d]$, solve Equation (32). (iii) Apply the two-dimensional GSSA (2D GSSA) to Equation (36). With regard to the two-dimensional golden section search algorithm, one may refer to [43].
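To make step (ii) concrete, the following sketch assembles the real 2n × 2n block system (32) for a hypothetical linear pencil N(λ) = A − λI with known eigenvalues ±i (our toy data, not an example from the paper) and verifies that the residual of Eq. (36) vanishes at the eigenvalue λ = i:

```python
import numpy as np

# Toy pencil N(lam) = A - lam*I with A = [[0,-1],[1,0]], eigenvalues ±i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
n = 2
b = np.array([1.0, 0.0])          # normalization b^T x = 1

lam_R, lam_I = 0.0, 1.0           # candidate eigenvalue lam = i
N1 = A - lam_R * np.eye(n)        # real part of N(lam)
N2 = -lam_I * np.eye(n)           # imaginary part of N(lam)

# Assemble and solve the block system (32) for (u, v).
top = np.hstack([N1 + np.outer(b, b), -N2])
bot = np.hstack([N2, N1 + np.outer(b, b)])
uv = np.linalg.solve(np.vstack([top, bot]), np.concatenate([b, np.zeros(n)]))
x = uv[:n] + 1j * uv[n:]          # recover the complex eigenvector

residual = np.linalg.norm((N1 + 1j * N2) @ x)
print(residual)                   # vanishes at an eigenvalue
print(x)                          # eigenvector normalized so x_1 = 1
```

In the full method, this residual is the objective that the 2D GSSA minimizes over the rectangle $[a,b] \times [c,d]$ of candidate $(\lambda_R, \lambda_I)$.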
Remark 3.
Even though the proofs of Theorems 2–4 are simple and straightforward, they are crucial for the development of the proposed numerical methods for effectively and accurately solving nonlinear eigenvalue problems.

5. Examples of Nonlinear Eigenvalue Problems

Example 6.
To demonstrate the new idea in Equation (29), we consider a generalized eigenvalue problem $\mathbf{A}\mathbf{x} = \lambda\mathbf{B}\mathbf{x}$ endowed by [44]:
$$\mathbf{A} = \begin{pmatrix} 2 & 3 & 4 & 5 & 6 \\ 4 & 4 & 5 & 6 & 7 \\ 0 & 3 & 6 & 7 & 8 \\ 0 & 0 & 2 & 8 & 9 \\ 0 & 0 & 0 & 1 & 10 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
By using the SM with $\mathbf{b} = (1, 0, 0, 0, 0)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$ for the 1D GSSA, f(λ) is plotted in Figure 6, for which five eigenvalues appear as minimums. With $[a, b] = [-1, 0]$, we obtain λ = −0.1873528931969768, NI = 73 and $\|N(\lambda)\mathbf{x}\| = 1.07 \times 10^{-15}$. The eigenvalues are listed as follows:
$$\lambda_1 = -0.1873528931969768, \quad \lambda_2 = 1.313278952662423, \quad \lambda_3 = 5.537956370847891, \quad \lambda_4 = 12.0896928530668, \quad \lambda_5 = 21.24642471661987.$$
Example 7.
To display the advantage of Equation (32), we consider a standard eigenvalue problem with
$$\mathbf{A} = \begin{pmatrix} 1 & -1 \\ 1 & 2 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
which possesses the complex eigenvalues
$$\lambda = \frac{3}{2} \pm i\frac{\sqrt{3}}{2}.$$
By using the SM with $\mathbf{b} = (1, 1)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$ for the 2D GSSA and with $[a,b] \times [c,d] = [1.2, 1.9] \times [0.5, 0.9]$, we can obtain NI = 73 and $\|N(\lambda)\mathbf{x}\| = 7.22 \times 10^{-16}$, and the error of the eigenvalue is $4.44 \times 10^{-16}$. The SM is very accurate in finding the complex eigenvalue.
On the other hand, using the SM in Theorem 4 with $\mathbf{c} = (0, 1, 0, 0)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$ for the 2D GSSA and with $[a,b] \times [c,d] = [1.2, 1.9] \times [0.5, 0.9]$, we can obtain NI = 73 and $\|N(\lambda)\mathbf{x}\| = 1.67 \times 10^{-16}$, and the error of the obtained eigenvalue is zero.
Example 8.
We consider
$$N(\lambda)\mathbf{x} = (\lambda^2 M + \lambda C + K)\mathbf{x} = \mathbf{0},$$
where
$$M = \begin{pmatrix} 0 & 6 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 6 & 0 \\ 2 & 7 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad K = I_3.$$
By using the SM with $\mathbf{b} = (1, 1, 1)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$, we plot f(λ) in Figure 7a with respect to the eigen-parameter over an interval, for which two real eigenvalues are λ = 0.04080314176866112 and λ = 0.7425972620277184, where $\|N(\lambda)\mathbf{x}\| = 1.22 \times 10^{-15}$ and $2.53 \times 10^{-15}$ are obtained, respectively.
Let $\lambda = \mu^2$ with $\mu = \mu_R + i\mu_I$; we can derive
$$N_1 = [(\mu_R^2 - \mu_I^2)^2 - 4\mu_R^2\mu_I^2]M + (\mu_R^2 - \mu_I^2)C + K, \quad N_2 = 4\mu_R\mu_I(\mu_R^2 - \mu_I^2)M + 2\mu_R\mu_I C.$$
There are a total of 24 eigenvalues, as shown in Figure 7b, with respect to $\lambda_R = \mu_R^2 - \mu_I^2$ and $\lambda_I = 2\mu_R\mu_I$.
Example 9.
As an application, we consider a time-delay linear system of first-order ordinary differential equations:
$$\dot{\mathbf{q}}(t) = \mathbf{A}\mathbf{q}(t) + \mathbf{B}\mathbf{q}(t-1),$$
where $\mathbf{B}\mathbf{q}(t-1)$ is a time-delay external force. Inserting $\mathbf{q}(t) = e^{\lambda t}\mathbf{x}$ into this equation renders
$$\lambda e^{\lambda t}\mathbf{x} = \mathbf{A}e^{\lambda t}\mathbf{x} + e^{-\lambda}e^{\lambda t}\mathbf{B}\mathbf{x}.$$
By canceling $e^{\lambda t}$ on both sides, we obtain a time-delay nonlinear eigenvalue problem:
$$\mathbf{A}\mathbf{x} + e^{-\lambda}\mathbf{B}\mathbf{x} = \lambda\mathbf{x},$$
where $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$. The eigenvalues of this system are very important as they reflect the stability of the time-delay system.
We take n = 3 and consider [25]
$$N(\lambda)\mathbf{x} = [\mathbf{A} + \exp(-\lambda)\mathbf{B} - \lambda I_3]\mathbf{x} = \mathbf{0},$$
where
$$\mathbf{A} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_3 & -a_2 & -a_1 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -b_3 & -b_2 & -b_1 \end{pmatrix}.$$
This describes a time-delay system.
We take $a_1 = 1.5$, $a_2 = 1$, $a_3 = 0.5$ and $b_1 = 0.3$, $b_2 = 0.2$, $b_3 = 0.1$, and there exist four complex eigenvalues. Through some manipulations, we find that
$$N_1(\lambda_R, \lambda_I) = \mathbf{A} + \exp(-\lambda_R)\cos(\lambda_I)\mathbf{B} - \lambda_R I_3, \quad N_2(\lambda_R, \lambda_I) = -\exp(-\lambda_R)\sin(\lambda_I)\mathbf{B} - \lambda_I I_3.$$
We apply the SM to solve this problem with $\mathbf{b} = (1, 1, 1)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$. In $[a,b] \times [c,d] = [-1.5, 1.5] \times [0, 2]$, we obtain NI = 76 and $\|N(\lambda)\mathbf{x}\| = 9.76 \times 10^{-16}$; the complex eigenvalue is λ = 3.208498325636312 ± i0.6608850649667832. In $[a,b] \times [c,d] = [-1.5, -1.4] \times [0.9, 1.1]$, we obtain NI = 75 and $\|N(\lambda)\mathbf{x}\| = 1.05 \times 10^{-15}$; the complex eigenvalue is λ = −1.422926059141501 ± i1.035178169828797.
By taking the parameters $a_i, b_i$, $i = 1, 2, 3$, as those listed in [25,45], there exists a double non-semisimple eigenvalue $3\pi i$. With $[a,b] \times [c,d] = [-10^{-15}, 10^{-15}] \times [9.42, 9.45]$, we obtain NI = 62 and $\|N(\lambda)\mathbf{x}\| = 2 \times 10^{-15}$. The eigenvalue obtained is very close to $3\pi i$, with an error of $4.72 \times 10^{-8}$.
Example 10.
Consider
$$N(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0,$$
where
$$A_2 = \begin{pmatrix} 4 & 3 & 12 \\ 17 & 11 & 0 \\ 1 & 1 & 3 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 2 & 6 & 1 \\ 2 & 22 & 11 \\ 7 & 1 & 1 \end{pmatrix}, \quad A_0 = \begin{pmatrix} 16 & 4 & 7 \\ 14 & 7 & 13 \\ 6 & 8 & 7 \end{pmatrix}.$$
We can derive
$$N_1(\lambda_R, \lambda_I) = (\lambda_R^2 - \lambda_I^2)A_2 + \lambda_R A_1 + A_0, \quad N_2(\lambda_R, \lambda_I) = 2\lambda_R\lambda_I A_2 + \lambda_I A_1.$$
As shown in Figure 8, there exist two minima of f(λ) for this quadratic eigenvalue problem having two real eigenvalues. We apply the SM to solve this problem with $\mathbf{b} = (0, 1, 0)^{\mathrm{T}}$ and $\varepsilon = 10^{-15}$. The first real eigenvalue is λ = 0.2328574586400297, with NI = 71 and $\|N(\lambda)\mathbf{x}\| = 3.97 \times 10^{-15}$. The second real eigenvalue is λ = 2.355885632295363, with NI = 72 and $\|N(\lambda)\mathbf{x}\| = 8.33 \times 10^{-15}$. Then, by applying the 2D GSSA to solve this problem with $[a,b] \times [c,d] = [-1, 1] \times [0.1, 2]$, we obtain NI = 75 and $\|N(\lambda)\mathbf{x}\| = 2.95 \times 10^{-14}$, and the complex eigenvalues are λ = 0.188835959350602 ± i1.06014959301131.
Similarly, we adopt the second normalization equation in Theorem 4 with $\mathbf{c} = (1, 0, 0, 0, 0, 0)^{\mathrm{T}}$; applying the 2D GSSA with $[a,b] \times [c,d] = [-1, 1] \times [0.1, 2]$, we obtain NI = 75 and $\|N(\lambda)\mathbf{x}\| = 5.73 \times 10^{-14}$, and the complex eigenvalues λ = 0.188835959350602 ± i1.06014959301131 are equal to those obtained above with the SM derived from Theorem 3.
Example 11
(From [25]).
$$N(\lambda) = \lambda^3 A_3 + \lambda^2 A_2 + A_0,$$
where
$$A_3 = \begin{pmatrix} 4 & 3 & 12 \\ 17 & 11 & 0 \\ 1 & 1 & 3 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 2 & 6 & 1 \\ 2 & 22 & 11 \\ 7 & 1 & 1 \end{pmatrix}, \quad A_0 = \begin{pmatrix} 16 & 4 & 7 \\ 14 & 7 & 13 \\ 6 & 8 & 7 \end{pmatrix}.$$
We can derive
$$N_1(\lambda_R, \lambda_I) = (\lambda_R^3 - 3\lambda_R\lambda_I^2)A_3 + (\lambda_R^2 - \lambda_I^2)A_2 + A_0, \quad N_2(\lambda_R, \lambda_I) = (3\lambda_R^2\lambda_I - \lambda_I^3)A_3 + 2\lambda_R\lambda_I A_2.$$
We adopt the second normalization Equation (38) in Theorem 4 to solve this nonlinear eigenvalue problem. By applying the 2D GSSA with $[a,b] \times [c,d] = [0.01, 0.03] \times [0.4, 0.5]$ and $\mathbf{c} = (0, 0, 0, 0, 0, 1)^{\mathrm{T}}$, NI = 68, $\|N(\lambda)\mathbf{x}\| = 3.73 \times 10^{-15}$, and λ = 0.02570242595103074 + i0.4701394321627313 can be obtained.

6. A Derivative-Free Newton Method and Regularizations for the Simple Method

6.1. Two Regularization Methods

In the first regularization method (FRM), we take
$$\mathbf{b} = \alpha\mathbf{d}, \quad (61)$$
where α ≠ 0 is a regularization parameter and $\mathbf{d}$ is a constant vector. Inserting Equation (61) into Equations (29) and (14) generates the first regularization equation:
$$[N(\lambda) + \alpha^2\mathbf{d}\mathbf{d}^{\mathrm{T}}]\mathbf{x} = \alpha\mathbf{d}, \quad \alpha\mathbf{d}^{\mathrm{T}}\mathbf{x} = 1. \quad (62)$$
In the second regularization method (SRM), we consider another normalization condition instead of Equation (14):
$$\mathbf{d}^{\mathrm{T}}\mathbf{x} = \alpha. \quad (63)$$
If α = 1, we recover Equation (14). Then, as done in the proof of Theorem 2, we can derive the second regularization equation:
$$[N(\lambda) + \mathbf{d}\mathbf{d}^{\mathrm{T}}]\mathbf{x} = \alpha\mathbf{d}. \quad (64)$$
Equations (62) and (64) differ in that the $\alpha^2\mathbf{d}\mathbf{d}^{\mathrm{T}}$ in the first becomes $\mathbf{d}\mathbf{d}^{\mathrm{T}}$ in the second; the second regularization method is simpler than the first.

6.2. Newton Iterative Methods

The regularized Equation (62) constitutes a system of nonlinear equations for $\mathbf{x}$ and $x_m := \lambda$, $m = n + 1$, the Jacobian matrix of which at the k-th step is
$$\mathbf{J}_k = \begin{pmatrix} N(x_m^k) + \alpha^2\mathbf{d}\mathbf{d}^{\mathrm{T}} & N'(x_m^k)\mathbf{x}^k \\ \alpha\mathbf{d}^{\mathrm{T}} & 0 \end{pmatrix}. \quad (65)$$
Thus, the Newton iterative method together with the FRM is given by
$$\begin{pmatrix} \mathbf{x}^{k+1} \\ x_m^{k+1} \end{pmatrix} = \begin{pmatrix} \mathbf{x}^{k} \\ x_m^{k} \end{pmatrix} - \begin{pmatrix} N(x_m^k) + \alpha^2\mathbf{d}\mathbf{d}^{\mathrm{T}} & N'(x_m^k)\mathbf{x}^k \\ \alpha\mathbf{d}^{\mathrm{T}} & 0 \end{pmatrix}^{-1}\begin{pmatrix} N(x_m^k)\mathbf{x}^k + \alpha^2(\mathbf{d}\cdot\mathbf{x}^k)\mathbf{d} - \alpha\mathbf{d} \\ \alpha\,\mathbf{d}\cdot\mathbf{x}^k - 1 \end{pmatrix}. \quad (66)$$
To construct the Newton method, letting $x_{n+1} = \lambda$ and using Equations (64) and (63) yields the following nonlinear regularized equations of dimension m = n + 1:
$$[N(x_m) + \mathbf{d}\mathbf{d}^{\mathrm{T}}]\mathbf{x} = \alpha\mathbf{d}, \quad (67)$$
$$d_1 x_1 + \cdots + d_n x_n - \alpha = 0. \quad (68)$$
At the k-th step, the Jacobian matrix reads as
$$\mathbf{J}_k = \begin{pmatrix} N(x_m^k) + \mathbf{d}\mathbf{d}^{\mathrm{T}} & N'(x_m^k)\mathbf{x}^k \\ \mathbf{d}^{\mathrm{T}} & 0 \end{pmatrix}. \quad (69)$$
Then, the Newton iterative method together with the SRM is given by
$$\begin{pmatrix} \mathbf{x}^{k+1} \\ x_m^{k+1} \end{pmatrix} = \begin{pmatrix} \mathbf{x}^{k} \\ x_m^{k} \end{pmatrix} - \begin{pmatrix} N(x_m^k) + \mathbf{d}\mathbf{d}^{\mathrm{T}} & N'(x_m^k)\mathbf{x}^k \\ \mathbf{d}^{\mathrm{T}} & 0 \end{pmatrix}^{-1}\begin{pmatrix} N(x_m^k)\mathbf{x}^k + (\mathbf{d}\cdot\mathbf{x}^k)\mathbf{d} - \alpha\mathbf{d} \\ \mathbf{d}\cdot\mathbf{x}^k - \alpha \end{pmatrix}. \quad (70)$$
The iteration is terminated when the convergence criterion $\varepsilon = 10^{-15}$ is fulfilled.
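A compact sketch of the Newton iteration (70) with the SRM follows, applied to a hypothetical linear pencil N(λ) = A − λI (so N′(λ) = −I); the matrix, $\mathbf{d}$, α, and starting values are our illustrative choices, not from the paper:

```python
import numpy as np

def newton_srm(N, dN, d, alpha, x0, lam0, max_it=50, eps=1e-14):
    """Newton iteration for the SRM system (67)-(68):
    [N(lam) + d d^T] x = alpha*d,  d^T x = alpha."""
    x, lam = x0.astype(float).copy(), float(lam0)
    n = len(x)
    for _ in range(max_it):
        Nl = N(lam)
        F = np.concatenate([(Nl + np.outer(d, d)) @ x - alpha * d,
                            [d @ x - alpha]])
        if np.linalg.norm(F) < eps:
            break
        # Jacobian (69): blocks [[N + d d^T, N'(lam) x], [d^T, 0]].
        J = np.block([[Nl + np.outer(d, d), (dN(lam) @ x)[:, None]],
                      [d[None, :], np.zeros((1, 1))]])
        delta = np.linalg.solve(J, -F)
        x += delta[:n]
        lam += delta[n]
    return lam, x

# Toy pencil N(lam) = A - lam*I with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
alpha, d = 0.01, np.array([1.0, 0.0])
lam, x = newton_srm(lambda t: A - t * np.eye(2),
                    lambda t: -np.eye(2),
                    d, alpha, x0=alpha * np.ones(2), lam0=2.5)
print(lam)          # converges to the eigenvalue 3
```

For a nonlinear pencil, only the two callables `N` and `dN` change; the Jacobian assembly and update are identical.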
As usual, the Newton iterative method needs to invert the Jacobian matrix at each iteration step, which may require considerable computational time for a large-scale eigenvalue problem. To save computational time, we return to solving the scalar equation $F(\lambda) = \|N(\lambda)\mathbf{x}\| = 0$ given below.

6.3. A Derivative-Free Fixed-Point Newton Method

The NI of the 1D GSSA is usually over 70, as shown by Examples 6–10 in Section 5. To reduce the computational burden, a derivative-free fixed-point Newton method (DFFPNM) for solving a scalar equation F(λ) = 0 can be derived as follows [46,47]. Mathematically speaking, solving Equation (11) is equivalent to solving
$$F(\lambda) := \|N(\lambda)\mathbf{x}\| = 0, \quad (71)$$
which, however, after inserting the solution $\mathbf{x}$ obtained from Equation (62) or Equation (64), is a highly nonlinear and implicit function of λ.
Suppose that $\lambda^*$ is a root with $F(\lambda^*) = 0$. In order to get rid of the derivative term in the Newton method, we consider the Taylor expansion
$$F(\lambda_n) = F(\lambda^*) + F'(\lambda^*)(\lambda_n - \lambda^*) + \frac{1}{2}F''(\lambda^*)(\lambda_n - \lambda^*)^2 + \cdots. \quad (72)$$
Differentiating and neglecting the higher-order terms gives $F'(\lambda_n) \approx F'(\lambda^*) + F''(\lambda^*)(\lambda_n - \lambda^*)$; inserting this into the Newton iterative scheme, we have
$$\lambda_{n+1} = \lambda_n - \frac{F(\lambda_n)}{F'(\lambda^*) + F''(\lambda^*)(\lambda_n - \lambda^*)}, \quad (73)$$
which, combined with
$$F(\lambda_n) = F(\lambda^*) + F'(\lambda^*)(\lambda_n - \lambda^*) = F'(\lambda^*)(\lambda_n - \lambda^*) \quad (74)$$
from $F(\lambda^*) = 0$, yields
$$\lambda_{n+1} = \lambda_n - \frac{F(\lambda_n)}{a + bF(\lambda_n)}, \quad (75)$$
where
$$a = F'(\lambda^*), \quad b = \frac{F''(\lambda^*)}{F'(\lambda^*)}. \quad (76)$$
To determine $a$ and $b$ by a fixed-point estimation, the first step is to choose two initial guesses $\lambda_0$ and $\lambda_2$; then, we take $\lambda_1 = (\lambda_0 + \lambda_2)/2$. The approximations of $a$ and $b$ in Equation (76) can be evaluated by the technique of finite differences:
$$a = \frac{F(\lambda_2) - F(\lambda_0)}{\lambda_2 - \lambda_0}, \quad b = \frac{1}{a}\,\frac{F(\lambda_2) - 2F(\lambda_1) + F(\lambda_0)}{(\lambda_1 - \lambda_0)^2}.$$
The resulting iterative scheme (75), together with the above $a$ and $b$, constitutes the derivative-free fixed-point Newton method (DFFPNM).
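The scheme can be sketched in a few lines of Python; for a safe, self-contained illustration we apply it to a simple smooth scalar equation with root $\sqrt{2}$ rather than to the merit function itself, and the bracketing nodes are arbitrary choices.

```python
import math

def dffpnm(F, lam0, lam2, tol=1e-14, maxit=100):
    """Derivative-free fixed-point Newton method: a and b approximate
    F'(lam*) and F''(lam*)/F'(lam*) by finite differences on the
    three nodes lam0, lam1 = (lam0 + lam2)/2, lam2."""
    lam1 = 0.5 * (lam0 + lam2)
    a = (F(lam2) - F(lam0)) / (lam2 - lam0)
    b = (F(lam2) - 2.0 * F(lam1) + F(lam0)) / ((lam1 - lam0) ** 2 * a)
    lam, n = lam1, 0
    while abs(F(lam)) > tol and n < maxit:
        lam = lam - F(lam) / (a + b * F(lam))
        n += 1
    return lam, n

# Illustrative smooth scalar equation (not from the paper), root sqrt(2)
root, n = dffpnm(lambda t: t * t - 2.0, 1.3, 1.5)
print(root, n)
```

Note that $a$ and $b$ are frozen after the initial three function evaluations, so each subsequent step costs a single evaluation of $F$.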

6.4. Numerical Tests

We employ Example 6 in Section 5 to demonstrate the effectiveness of these two regularization methods. In the first regularization method (FRM), we take α = 4 and d = ( 1 , 0 , 0 , 0 , 0 ) T , and in the second regularization method (SRM), we take α = 0.1 and d = ( 1 , 0 , 0 , 0 , 0 ) T .
Comparing the two curves in Figure 9 with those in Figure 6, we see that the zero points of $F(\lambda) = 0$ are easier to locate by a Newton-like method.
Now, we solve Example 6 in Section 5 by the first regularization method (FRM). Table 5 lists the eigenvalue, $\alpha$, the error $\|\mathbf{N}\mathbf{x}\|$, the number of iterations (NI), and the interval $[\lambda_0, \lambda_2]$ used in the DFFPNM.
Next, we solve Example 6 by the second regularization method (SRM). Table 6 lists the eigenvalue, $\alpha$, $\|\mathbf{N}\mathbf{x}\|$, NI, and $[\lambda_0, \lambda_2]$ used in the DFFPNM.
In Table 7, we list the results computed from the Newton method together with the FRM, for which the initial guess is x 0 = 1 , λ 0 = c 0 , and α = 3 .
In Table 8, we list the results computed from the Newton method together with the SRM, for which the initial guess is x 0 = 1 , λ 0 = c 0 , and α = 0.01 .
We solve Example 8 by the FRM with a fixed $\mathbf{d} = (1, 1, 1)^{\rm T}$ and two values $\alpha = 1$ and $\alpha = 2$. The two curves are compared in Figure 10: the curve with $\alpha = 2$ is better than that with $\alpha = 1$. Table 9 reveals that $\alpha = 2$ converges faster than $\alpha = 1$.
Here, we apply the FRM to solve Example 10 with $\mathbf{d} = (0, 1, 0)^{\rm T}$ and $\varepsilon = 10^{-15}$. For $\alpha = 5$, the first real eigenvalue is found to be $\lambda = 0.2328574586400297$, with NI = 7 and $\|\mathbf{N}\mathbf{x}\| = 3.14 \times 10^{-16}$. Then, with $\mathbf{d} = (1, 1, 1)^{\rm T}$ and $\alpha = 10$, the second real eigenvalue is $\lambda = 2.355885632295364$, with NI = 6 and $\|\mathbf{N}\mathbf{x}\| = 9.89 \times 10^{-16}$. The two curves of $F(\lambda)$ are compared in Figure 11. Compared with the SM together with the 1D GSSA, the NI is reduced by roughly a factor of ten.
When we apply the SRM to solve Example 10 with $\alpha = 10^{-10}$, $\mathbf{d} = (0, 1, 0)^{\rm T}$, and $\varepsilon = 10^{-15}$, we obtain $\lambda = 0.2328576719480232$, NI = 10, and $\|\mathbf{N}\mathbf{x}\| = 4.65 \times 10^{-16}$. For the second real eigenvalue, we obtain $\lambda = 2.355885667978917$, NI = 7, and $\|\mathbf{N}\mathbf{x}\| = 1.95 \times 10^{-16}$.
Example 12.
This example is Example 6.2 of [48], and we have a quadratic eigenvalue problem (4) with
$$\mathbf{M} = c_{11}\mathbf{I}_m \otimes \tilde{\mathbf{M}} + c_{12}\tilde{\mathbf{M}} \otimes \mathbf{I}_m, \quad \mathbf{C} = c_{21}\mathbf{I}_m \otimes \tilde{\mathbf{C}} + c_{22}\tilde{\mathbf{C}} \otimes \mathbf{I}_m, \quad \mathbf{K} = c_{31}\mathbf{I}_m \otimes \tilde{\mathbf{K}} + c_{32}\tilde{\mathbf{K}} \otimes \mathbf{I}_m,$$
where
$$\tilde{\mathbf{M}} = \frac{1}{6}(4\mathbf{I}_m + \mathbf{B} + \mathbf{B}^{\rm T}), \quad \tilde{\mathbf{C}} = \mathbf{B} - \mathbf{B}^{\rm T}, \quad \tilde{\mathbf{K}} = \mathbf{B} + \mathbf{B}^{\rm T} - 2\mathbf{I}_m, \quad \mathbf{B} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},$$
in which $\mathbf{B}$ is the $m \times m$ matrix with ones on the subdiagonal (displayed for $m = 3$).
By taking m = 5 , we have n = 25 , and we take c 11 = 1 , c 12 = 1.3 , c 21 = 0.1 , c 22 = 1.1 , c 31 = 1 , and c 32 = 1.2 . In Table 10, we list the results computed from the Newton method and FRM, where d = ( 1 , 0 , , 0 ) T and the initial guess is x 0 = 1 , λ 0 = c 0 , and α = 2.5 .
In Table 11, we list the results computed from the Newton method and SRM, where d = ( 1 , 0 , , 0 ) T and the initial guess is x 0 = 1 , λ 0 = c 0 , and α = 0.01 .
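The test matrices above can be assembled and cross-checked numerically. The sketch below assumes the products in the definitions of $\mathbf{M}$, $\mathbf{C}$, $\mathbf{K}$ are Kronecker products and verifies, via the standard first-companion linearization (used here only as an independent check, not as the paper's method), that a computed eigenpair satisfies the quadratic eigen-equation.

```python
import numpy as np

m = 5
Im = np.eye(m)
B = np.diag(np.ones(m - 1), -1)          # ones on the subdiagonal
Mt = (4.0 * Im + B + B.T) / 6.0
Ct = B - B.T
Kt = B + B.T - 2.0 * Im

c11, c12, c21, c22, c31, c32 = 1.0, 1.3, 0.1, 1.1, 1.0, 1.2
M = c11 * np.kron(Im, Mt) + c12 * np.kron(Mt, Im)
C = c21 * np.kron(Im, Ct) + c22 * np.kron(Ct, Im)
K = c31 * np.kron(Im, Kt) + c32 * np.kron(Kt, Im)
n = M.shape[0]                            # n = m^2 = 25

# first-companion linearization of lam^2 M + lam C + K
Minv = np.linalg.inv(M)
L = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ C]])
lams, V = np.linalg.eig(L)

# residual of one computed eigenpair in the original quadratic problem
lam, v = lams[0], V[:n, 0]
res = np.linalg.norm((lam**2 * M + lam * C + K) @ v) / np.linalg.norm(v)
print(n, res)
```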
Example 13.
As a practical application, we consider a five-story shear building with [49]
$$\mathbf{M} = \begin{bmatrix} 140 & 0 & 0 & 0 & 0 \\ 0 & 120 & 0 & 0 & 0 \\ 0 & 0 & 120 & 0 & 0 \\ 0 & 0 & 0 & 120 & 0 \\ 0 & 0 & 0 & 0 & 100 \end{bmatrix} {\rm kip/g}, \quad \mathbf{K} = \begin{bmatrix} 800 & -400 & 0 & 0 & 0 \\ -400 & 600 & -200 & 0 & 0 \\ 0 & -200 & 400 & -200 & 0 \\ 0 & 0 & -200 & 300 & -100 \\ 0 & 0 & 0 & -100 & 100 \end{bmatrix} {\rm kip/in},$$
and $\mathbf{C} = \mathbf{0}$. By inserting $\mathbf{q} = e^{i\omega t}\mathbf{x}$ into Equation (3), we can obtain a nonlinear eigenvalue problem:
$$(\omega^2\mathbf{M} - \mathbf{K})\mathbf{x} = \mathbf{0}.$$
For the design of engineering structures, knowing the frequencies ω of the free vibration modes x is of utmost importance.
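As a cross-check, the natural frequencies can also be computed directly from the generalized eigenvalue problem $\mathbf{K}\mathbf{x} = \omega^2\mathbf{M}\mathbf{x}$. The sketch below simply feeds the numerical entries of $\mathbf{M}$ and $\mathbf{K}$ as listed (units absorbed into the entries) to a dense symmetric eigensolver; if the matrices are read as above, this reproduces the frequencies reported in Tables 12 and 13.

```python
import numpy as np

# mass and stiffness of the five-story shear building (kip/g, kip/in)
M = np.diag([140.0, 120.0, 120.0, 120.0, 100.0])
K = np.array([[ 800.0, -400.0,    0.0,    0.0,    0.0],
              [-400.0,  600.0, -200.0,    0.0,    0.0],
              [   0.0, -200.0,  400.0, -200.0,    0.0],
              [   0.0,    0.0, -200.0,  300.0, -100.0],
              [   0.0,    0.0,    0.0, -100.0,  100.0]])

# K x = omega^2 M x; with S = M^{-1/2}, the symmetric matrix S K S
# has eigenvalues omega^2, returned in ascending order by eigvalsh
S = np.diag(1.0 / np.sqrt(np.diag(M)))
omega = np.sqrt(np.linalg.eigvalsh(S @ K @ S))
print(omega)   # the five natural frequencies
```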
In Table 12, we list the results computed from the Newton method and FRM, where $\mathbf{d} = \mathbf{1}$ and the initial guess is $\mathbf{x}^0 = \mathbf{1}$, $\omega_0 = c_0$, and $\alpha = 2$. It can be seen that all values of $\|\mathbf{N}\mathbf{x}\|$, where $\mathbf{N} = \omega^2\mathbf{M} - \mathbf{K}$, are very small.
In Table 13, we list the results computed from the Newton method and SRM, where $d_1 = 1$, $d_j = 0$, $j = 2, \ldots, n$, and the initial guess is $\mathbf{x}^0 = \mathbf{1}$, $\omega_0 = c_0$, and $\alpha = 0.01$. The corresponding five modes of free vibration are plotted in Figure 12, wherein all the first components are normalized to one. All values of $\|\mathbf{N}\mathbf{x}\|$ are very small, which indicates the high accuracy of the proposed Newton method based on the SRM; it is slightly more accurate than the FRM in Table 12.
Indeed, the regularization parameter $\alpha$ controls the convergence speed and accuracy. In Table 14, we list NI and $\|\mathbf{N}\mathbf{x}\|$ for different values of $\alpha$. When $\alpha = 6$, the iteration does not converge within 1000 steps, and the accuracy is reduced to $5.21 \times 10^{-13}$. The best value is $\alpha = 0.01$.
When $\alpha = 1$, the normalization condition (63) in the SRM recovers the normalization condition (14) in the SM. However, $\alpha = 1$ is not the best choice, as shown in Table 14. With proper values of $\alpha$, the FRM and SRM are better than the SM.

7. Conclusions

Fast and accurate iterative methods for solving linear and nonlinear eigenvalue problems were studied in this paper. We transformed the original homogeneous eigen-equation into a nonhomogeneous linear system by imposing an extra normalization condition for the uniqueness of the eigenvector. Over a given range, the curve of the merit function is constructed, whose local minima are the real eigenvalues; in the merit function, the vector variable is solved from the newly derived nonhomogeneous linear system. Real eigenvalues can be obtained quite quickly by the 1D golden section search algorithm, and complex eigenvalues by the 2D golden section search algorithm. Very accurate eigenvalues and eigenvectors, as reflected by the very small errors of orders $10^{-15}$ and $10^{-16}$ in satisfying the eigen-equation, were available after merely a few iterations, and the evaluations of the merit function are inexpensive. Complex eigenvalue problems are more difficult than real eigenvalue problems; in Theorems 3 and 4, we explored two normalization equations for them. For real eigenvalue problems, two regularization methods were constructed, which, upon combination with the derivative-free fixed-point Newton method, can find the real eigenvalues about ten times more quickly than the 1D GSSA used in the simple method. Moreover, the combination of Newton's iterative technique with the two regularized normalization methods confirmed that suitable values of the regularization parameters enhance both the convergence speed and the accuracy of the solutions. Compared to the two Newton methods, which must invert the Jacobian matrix at each iteration step, the derivative-free fixed-point Newton method solves only a scalar equation; hence, it saves much computational time while achieving the same good convergence rate, with only a few iterations needed for the tested examples.

Author Contributions

Conceptualization, C.-S.L., C.-L.K. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., C.-L.K. and C.-W.C.; Validation, C.-S.L., C.-L.K. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L. and C.-W.C.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L. and C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-L.K. and C.-W.C.; Supervision, C.-S.L. and C.-W.C.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National United University [grant number 111I1206-8] and the National Science and Technology Council [grant number NSTC 112-2221-E-239-022].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. On the convergence of the Rayleigh quotient iteration for the computation of the characteristic roots and vectors I. Arch. Rat. Mech. 1958, 1, 233–241. [Google Scholar] [CrossRef]
  2. Parlett, B.N. The Rayleigh quotient iteration and some generalizations for nonnormal matrices. Math. Comput. 1974, 28, 679–693. [Google Scholar] [CrossRef]
  3. Liu, C.S.; Chang, J.R.; Shen, J.H.; Chen, Y.W. A new quotient and iterative detection method in an affine Krylov subspace for solving eigenvalue problems. J. Math. 2023, 2023, 9859889. [Google Scholar] [CrossRef]
  4. Liu, C.S.; Kuo, C.L.; Chang, C.W. Free vibrations of multi-degree structures: Solving quadratic eigenvalue problems with an excitation and fast iterative detection method. Vibration 2022, 5, 914–935. [Google Scholar] [CrossRef]
  5. Tisseur, F.; Meerbergen, K. The quadratic eigenvalue problem. SIAM Rev. 2001, 43, 235–286. [Google Scholar] [CrossRef]
  6. Li, T.; Chu, E.K.; Lin, W.W. A structure-preserving doubling algorithm for quadratic eigenvalue problems arising from time-delay systems. J. Comput. Appl. Math. 2010, 233, 1733–1745. [Google Scholar] [CrossRef]
  7. Kostic, A.; Sikalo, S. Definite quadratic eigenvalue problems. Procedia Eng. 2015, 100, 56–63. [Google Scholar] [CrossRef]
  8. Wang, X.; Tang, X.B.; Mao, L.Z. A modified second-order Arnoldi method for solving the quadratic eigenvalue problems. Comput. Math. Appl. 2017, 73, 327–338. [Google Scholar] [CrossRef]
  9. Yang, L.; Sun, Y.; Gong, F. The inexact residual iteration method for quadratic eigenvalue problem and the analysis of convergence. J. Comput. Appl. Math. 2018, 332, 45–55. [Google Scholar] [CrossRef]
  10. Sadet, J.; Massa, F.; Tison, T.; Turpin, I.; Lallemand, B.; Talbi, E. Homotopy perturbation technique for improving solutions of large quadratic eigenvalue problems: Application to friction-induced vibration. Mech. Syst. Signal Process. 2021, 153, 107492. [Google Scholar] [CrossRef]
  11. Hashemian, A.; Garcia, D.; Pardo, D.; Calo, V.M. Refined isogeometric analysis of quadratic eigenvalue problems. Comput. Meth. Appl. Mech. Eng. 2022, 399, 115327. [Google Scholar] [CrossRef]
  12. Sadet, J.; Massa, F.; Tison, T.; Talbi, E.; Turpin, I. Deep Gaussian process for the approximation of a quadratic eigenvalue problem application to friction-induced vibration. Vibration 2022, 5, 344–369. [Google Scholar] [CrossRef]
  13. Imakura, A.; Du, L.; Sakurai, T. A block Arnoldi-type contour integral spectral projection method for solving generalized eigenvalue problems. Appl. Math. Lett. 2014, 32, 22–27. [Google Scholar] [CrossRef]
  14. Weng, P.C.Y.; Phoa, F.K.H. Small-sample statistical condition estimation of large-scale generalized eigenvalue problems. J. Comput. Appl. Math. 2016, 298, 24–39. [Google Scholar] [CrossRef]
  15. Gorgizadeh, S.; Flisgen, T.; van Rienen, U. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems. J. Comput. Phys. 2018, 364, 347–364. [Google Scholar] [CrossRef]
  16. Lee, S.; Bathe, K.J. Solution of the generalized eigenvalue problem using overlapping finite elements. Adv. Eng. Softw. 2022, 173, 103241. [Google Scholar] [CrossRef]
  17. Hari, V. On the quadratic convergence of the complex HZ method for the positive definite generalized eigenvalue problem. Linear Alg. Appl. 2022, 632, 153–192. [Google Scholar] [CrossRef]
  18. Dan, J.; Geirnaert, S.; Bertrand, A. Grouped variable selection for generalized eigenvalue problems. Signal Process. 2022, 195, 108476. [Google Scholar] [CrossRef]
  19. Alkilayh, M.; Reichel, L.; Ye, Q. A method for computing a few eigenpairs of large generalized eigenvalue problems. Appl. Numer. Math. 2023, 183, 108–117. [Google Scholar] [CrossRef]
  20. Betcke, T.; Higham, N.; Mehrmann, V.; Schroder, C.; Tisseur, F. NLEVP: A collection of nonlinear eigenvalue problems. ACM Trans. Math. Softw. 2013, 39, 7. [Google Scholar] [CrossRef]
  21. El-Guide, M.; Miedlar, A.; Saad, Y. A rational approximation method for solving acoustic nonlinear eigenvalue problems. Eng. Anal. Bound. Elem. 2020, 111, 44–54. [Google Scholar] [CrossRef]
  22. Higham, N.J.; Kim, H. Solving a quadratic matrix equation by Newton’s method with exact line searches. SIAM J. Matrix Anal. Appl. 2001, 23, 303–316. [Google Scholar] [CrossRef]
  23. Meerbergen, K. The quadratic Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM J. Matrix Anal. Appl. 2008, 30, 1463–1482. [Google Scholar] [CrossRef]
  24. Hammarling, S.; Munro, C.J.; Tisseur, F. An algorithm for the complete solution of quadratic eigenvalue problems. ACM Trans. Math. Softw. 2013, 39, 18. [Google Scholar] [CrossRef]
  25. Jarlebring, E. Convergence factors of Newton methods for nonlinear eigenvalue problems. Linear Algebra Appl. 2012, 436, 3943–3953. [Google Scholar] [CrossRef]
  26. Mehrmann, V.; Voss, H. Nonlinear eigenvalue problems: A challenge for modern eigenvalue methods. GAMM-Mitt. 2004, 27, 121–152. [Google Scholar] [CrossRef]
  27. Zhou, Y.H. Positive solutions to a nonlinear eigenvalue problem. J. Math. Comput. Sci. 2020, 21, 18–22. [Google Scholar] [CrossRef]
  28. El-Ajou, A. Taylor’s expansion for fractional matrix functions: Theory and applications. J. Math. Comput. Sci. 2020, 21, 1–17. [Google Scholar] [CrossRef]
  29. Jadamba, B.; Khan, A.A.; Sama, M. An iteratively regularized stochastic gradient method for estimating a random parameter in a stochastic PDE. A variational inequality approach. J. Nonlinear Var. Anal. 2021, 5, 865–880. [Google Scholar]
  30. Harcha, H.; Chakrone, O.; Tsouli, N. On the nonlinear eigenvalue problems involving the fractional p-Laplacian operator with singular weight. J. Nonlinear Funct. Anal. 2022, 2022, 40. [Google Scholar]
  31. Chiappinelli, R. What do you mean by “nonlinear eigenvalue problems”? Axioms 2018, 7, 39. [Google Scholar] [CrossRef]
  32. Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
  33. Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
  34. Arnoldi, W.E. The principle of minimized iterations in the solution of the matrix eigenvalue problem. Quart. Appl. Math. 1951, 9, 17–29. [Google Scholar] [CrossRef]
  35. Magreñán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
  36. Argyros, I.K.; Shakhno, S.M.; Yarmola, H.P. Extended semilocal convergence for the Newton-Kurchatov method. Mat. Stud. 2020, 53, 85–91. [Google Scholar]
  37. Argyros, I.K.; Shakhno, S.M. Extended local convergence for the combined Newton-Kurchatov method under the generalized Lipschitz conditions. Mathematics 2019, 7, 207. [Google Scholar] [CrossRef]
  38. Bai, Z. Error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem. Math. Comput. 1994, 62, 209–226. [Google Scholar] [CrossRef]
  39. Golub, G.H.; Wilkinson, J.H. Ill-conditioned eigensystems and the computation of the Jordan canonical form. SIAM Rev. 1976, 18, 578–619. [Google Scholar] [CrossRef]
  40. Higham, N.J. Algorithm 694: A collection of test matrices in MATLAB. ACM Trans. Math. Softw. 1991, 17, 289–305. [Google Scholar] [CrossRef]
  41. Fettis, H.E.; Caslin, J.C. Eigenvalues and eigenvectors of Hilbert matrices of order 3 through 10. Math. Comput. 1967, 21, 431–441. [Google Scholar] [CrossRef]
  42. Forsythe, G.E.; Henrici, P. The cyclic Jacobi method for computing the principal values of a complex matrix. Trans. Amer. Math. Soc. 1960, 94, 1–23. [Google Scholar] [CrossRef]
  43. Rani, G.S.; Jayan, S.; Nagaraja, K.V. An extension of golden section algorithm for n-variable functions with MATLAB code. IOP Conf. Ser. Mater. Sci. Eng. 2018, 577, 012175. [Google Scholar] [CrossRef]
  44. Golub, G.H.; van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 2012. [Google Scholar]
  45. Jarlebring, E.; Michiels, W. Invariance properties in the root sensitivity of time-delay systems with double imaginary roots. Automatica 2010, 46, 1112–1115. [Google Scholar] [CrossRef]
  46. Liu, C.S.; Chang, C.W. Lie-group shooting/boundary shape function methods for solving nonlinear boundary value problems. Symmetry 2022, 14, 778. [Google Scholar] [CrossRef]
  47. Liu, C.S.; Chang, C.W. Periodic solutions of nonlinear ordinary differential equations computed by a boundary shape function method and a generalized derivative-free Newton method. Mech. Sys. Signal Proces. 2023, 184, 109712. [Google Scholar] [CrossRef]
  48. Mehrmann, V.; Watkins, D. Structure-preserving methods for computing eigenpairs of large sparse skew-Hamiltonian/Hamiltonian pencils. SIAM J. Sci. Comput. 2001, 22, 1905–1925. [Google Scholar] [CrossRef]
  49. Berg, G.V. Elements of Structural Dynamics; Prentice-Hall: Hoboken, NJ, USA, 1988. [Google Scholar]
Figure 1. Plotting the merit function with respect to the eigen-parameter, the three minimal points of which are eigenvalues 3, 6, and 9 for Example 1.
Figure 2. Plotting the merit function with respect to the eigen-parameter, the four minimal points of which are eigenvalues −4, −2, 8, 12 for Example 2.
Figure 3. Plotting the merit function with respect to the eigen-parameter, the four minimal points of which are eigenvalues 1, 2, 6, 30 for Example 3.
Figure 4. Plotting the merit function with respect to the eigen-parameter, for which the minimal points are the last nine eigenvalues for Example 4.
Figure 5. Plotting the merit function with respect to the eigen-parameter, for which the minimal points are seven eigenvalues of Example 5 with n = 7 .
Figure 6. A generalized eigenvalue problem of Example 6 showing five minima in a merit function obtained by a simple method.
Figure 7. A nonlinear eigenvalue problem of Example 8 showing two minima in a merit function obtained by a simple method for (a) real eigenvalues and (b) complex eigenvalues.
Figure 8. A nonlinear eigenvalue problem of Example 10 showing two minima in a merit function obtained by a simple method for real eigenvalues.
Figure 9. A generalized eigenvalue problem of Example 6 showing five minima in (a) the first regularization method and (b) the second regularization method.
Figure 10. Example 8 showing two minima obtained by the first regularization method with different regularization parameters.
Figure 11. Example 10 showing two minima obtained by the first regularization method with different regularization parameters.
Figure 12. Example 13 of a five-degree free vibration system displaying the five vibration modes.
Table 1. Example 1 solved by SM, listing EE, the error $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$, and NI.

| Exact $\lambda$ | 3 | 6 | 9 |
| $[a, b]$ | [2.5, 4] | [5, 7] | [8, 11] |
| EE | 0 | 0 | 0 |
| $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$ | $9.62 \times 10^{-17}$ | $2.48 \times 10^{-16}$ | 0 |
| NI | 73 | 74 | 90 |
Table 2. Example 2 solved by SM, listing EE, the error $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$, and NI.

| Exact $\lambda$ | −4 | −2 | 8 | 12 |
| $[a, b]$ | [−5, −3] | [−3, −1] | [7, 9] | [11, 13] |
| Numerical $\lambda$ | −4.0000734 | −1.9999357 | 7.99995888 | 12.0000501 |
| $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$ | $6.78 \times 10^{-16}$ | $4.55 \times 10^{-16}$ | $4.70 \times 10^{-15}$ | $2.28 \times 10^{-15}$ |
| NI | 74 | 75 | 73 | 72 |
Table 3. Example 3 solved by SM, listing EE, the error $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$, and NI.

| Exact $\lambda$ | 1 | 2 | 6 | 30 |
| $[a, b]$ | [0, 2] | [1.5, 2.5] | [5.5, 7.5] | [29, 31] |
| Numerical $\lambda$ | 1.0002159 | 1.9997765 | 6.0002064 | 29.999601 |
| $\|\mathbf{A}\mathbf{x} - \lambda\mathbf{x}\|$ | $8.01 \times 10^{-15}$ | $1.19 \times 10^{-15}$ | $1.28 \times 10^{-15}$ | $2.55 \times 10^{-15}$ |
| NI | 75 | 73 | 74 | 71 |
Table 4. For Example 5, the largest eigenvalues of the Hilbert matrices with n = 7, 8, 9, 10.

| $n$ | 7 | 8 | 9 | 10 |
| Present | 1.66088533892693 | 1.69593899692195 | 1.72588266090185 | 1.75191967026518 |
| [41] | 1.66088533892693 | 1.69593899692195 | 1.72588266090185 | 1.75191967026518 |
| [3] | 1.66088533892658 | 1.69593899692256 | 1.72588266090195 | 1.75191967026518 |
Table 5. Results of Example 6 solved by FRM and DFFPNM.

| $\lambda$ | $\alpha$ | $\|\mathbf{N}\mathbf{x}\|$ | NI | $[\lambda_0, \lambda_2]$ |
| −0.1873528931969755 | 4 | $5.93 \times 10^{-16}$ | 9 | [−0.18, −0.17] |
| 1.313278952662422 | 4 | $5.57 \times 10^{-16}$ | 12 | [1.2, 1.3] |
| 5.537956370847892 | 4 | $5 \times 10^{-16}$ | 7 | [5.51, 5.52] |
| 12.0896928530668 | 4 | $9.81 \times 10^{-16}$ | 8 | [11.8, 11.9] |
| 21.24642471661986 | 5 | $1.11 \times 10^{-16}$ | 10 | [21.1, 21.2] |
Table 6. Results of Example 6 solved by SRM and DFFPNM.

| $\lambda$ | $\alpha$ | $\|\mathbf{N}\mathbf{x}\|$ | NI | $[\lambda_0, \lambda_2]$ |
| −0.1873528931969756 | 0.1 | $2.77 \times 10^{-16}$ | 14 | [−0.18, −0.17] |
| 1.313278952662424 | 0.1 | $6.07 \times 10^{-16}$ | 13 | [1.32, 1.33] |
| 5.537956370847892 | 0.1 | $1.85 \times 10^{-16}$ | 12 | [5.54, 5.55] |
| 12.0896928530668 | 0.1 | $4.04 \times 10^{-16}$ | 11 | [12.09, 12.1] |
| 21.24642471661986 | 0.1 | $8.89 \times 10^{-17}$ | 7 | [21.247, 21.248] |
Table 7. Results of Example 6 solved by the Newton method and FRM.

| $\lambda$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| −0.1873528931969773 | −0.2 | $1.0 \times 10^{-15}$ | 5 |
| 1.313278952662422 | 1.5 | $8.05 \times 10^{-16}$ | 6 |
| 5.537956370847892 | 5 | $4.34 \times 10^{-16}$ | 7 |
| 12.0896928530668 | 12 | $4.58 \times 10^{-16}$ | 6 |
| 21.24642471661986 | 22 | $1.43 \times 10^{-17}$ | 7 |
Table 8. Results of Example 6 solved by the Newton method and SRM.

| $\lambda$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| −0.1873528931969765 | −0.2 | $9.6 \times 10^{-18}$ | 5 |
| 1.313278952662422 | 1.5 | $2.1 \times 10^{-17}$ | 6 |
| 5.537956370847892 | 5 | $7.81 \times 10^{-18}$ | 7 |
| 12.0896928530668 | 12 | $1.59 \times 10^{-17}$ | 5 |
| 21.24642471661986 | 22 | $2.78 \times 10^{-17}$ | 8 |
Table 9. Results of Example 8 solved by FRM and DFFPNM.

| $\lambda$ | $\alpha$ | $\|\mathbf{N}\mathbf{x}\|$ | NI | $[\lambda_0, \lambda_2]$ |
| 0.04080314176866105 | 1 | $4.02 \times 10^{-16}$ | 7 | [0.041, 0.042] |
| 0.7425972620277158 | 1 | $8.56 \times 10^{-16}$ | 19 | [0.72, 0.721] |
| 0.04080314176866114 | 2 | $4.42 \times 10^{-16}$ | 5 | [0.041, 0.042] |
| 0.7425972620277167 | 2 | $7.89 \times 10^{-16}$ | 13 | [0.72, 0.721] |
Table 10. Results of Example 12 solved by the combination of FRM and the Newton method.

| $\lambda$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| 0.6726432606416950 | 0.6 | $8.45 \times 10^{-16}$ | 7 |
| 0.9866442954682593 | 1 | $6.63 \times 10^{-16}$ | 7 |
| 1.068910201481607 | 1.06 | $7.15 \times 10^{-16}$ | 7 |
Table 11. Results of Example 12 solved by the combination of SRM and the Newton method.

| $\lambda$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| 0.672643260641695 | 0.6 | $2.12 \times 10^{-17}$ | 7 |
| 0.9866442954682593 | 1 | $2.2 \times 10^{-17}$ | 6 |
| 1.068910201481607 | 1.06 | $1.02 \times 10^{-17}$ | 7 |
Table 12. Results of Example 13 solved by the combination of FRM and the Newton method.

| $\omega$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| 0.451662663134403 | 0.4 | $6.32 \times 10^{-15}$ | 6 |
| 1.093583306689083 | 1.1 | $3.63 \times 10^{-14}$ | 6 |
| 1.597324416019366 | 1.7 | $8.64 \times 10^{-14}$ | 8 |
| 2.206998531216506 | 2.3 | $4.26 \times 10^{-13}$ | 8 |
| 2.953880097579612 | 3 | $6.44 \times 10^{-13}$ | 11 |
Table 13. Results of Example 13 solved by the combination of SRM and the Newton method.

| $\omega$ | $c_0$ | $\|\mathbf{N}\mathbf{x}\|$ | NI |
| 0.4516626631344028 | 0.4 | $5.58 \times 10^{-15}$ | 6 |
| 1.093583306689083 | 1 | $1.96 \times 10^{-15}$ | 7 |
| 1.597324416019366 | 1.5 | $6.28 \times 10^{-16}$ | 7 |
| 2.206998531216506 | 2.1 | $4.36 \times 10^{-15}$ | 9 |
| 2.953880097579611 | 2.9 | $1.83 \times 10^{-15}$ | 7 |
Table 14. Example 13 solved by the combination of SRM and the Newton method, listing NI and $\|\mathbf{N}\mathbf{x}\|$ for different values of $\alpha$.

| $\alpha$ | 0.01 | 0.5 | 1 | 4 | 4.8 | 6 |
| NI | 7 | 7 | 8 | 10 | 12 | >1000 |
| $\|\mathbf{N}\mathbf{x}\|$ | $1.96 \times 10^{-15}$ | $7.11 \times 10^{-14}$ | $1.17 \times 10^{-13}$ | $2.05 \times 10^{-13}$ | $2.54 \times 10^{-13}$ | $5.21 \times 10^{-13}$ |