1. Introduction
It is well known that the Rayleigh quotient [1,2]:
$$R(\mathbf{x}) = \frac{\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}}{\mathbf{x}^{\mathrm{T}}\mathbf{x}} \quad (1)$$
can be used to determine the real eigenvalues of a symmetric matrix $\mathbf{A}$.
In this paper, we derive a simple normalized condition solver to obtain the eigenvalues of
$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}, \quad (2)$$
where $\mathbf{A}$ is a given $n \times n$ matrix, $\mathbf{x}$ is an unknown vector, and $\lambda$ is an unknown eigenvalue in the standard linear eigen-equation. When $\mathbf{A}$ is not a symmetric matrix, the Rayleigh quotient (1) cannot be used to determine the eigenvalues. Liu et al. [3] developed a new quotient to determine the eigenvalues of Equation (2).
As noticed by Liu et al. [4], it is hard to directly determine the eigenvalue and eigenvector from Equation (2) by a numerical method. In fact, from $(\mathbf{A} - \lambda\mathbf{I}_n)\mathbf{x} = \mathbf{0}$, a numerical method always returns $\mathbf{x} = \mathbf{0}$, since the right-hand side is a zero vector. In [4], a new strategy to overcome this difficulty uses a variable transformation to a new nonhomogeneous linear system. It possesses a nonzero external excitation term on the right-hand side, such that one can obtain a nonzero eigenvector when the eigen-parameter $\lambda$ is an eigenvalue. We propose a simpler way to nonhomogenize the eigen-equation into a nonhomogeneous linear system, from which the eigenvalue and eigenvector are easily found by a minimization technique.
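As a small numerical illustration of the Rayleigh quotient (1) (the matrix below is chosen for this sketch and is not from the paper), evaluating $R(\mathbf{x})$ at an eigenvector of a symmetric matrix returns the corresponding real eigenvalue:

```python
import numpy as np

# Rayleigh quotient R(x) = (x^T A x) / (x^T x); at an eigenvector of a
# symmetric matrix A it equals the corresponding real eigenvalue.
def rayleigh_quotient(A, x):
    return (x @ A @ x) / (x @ x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric; eigenvalues are 1 and 3
eigvals, eigvecs = np.linalg.eigh(A)  # exact reference values

for k in range(2):
    r = rayleigh_quotient(A, eigvecs[:, k])
    print(f"lambda_{k} = {eigvals[k]:.6f}, Rayleigh quotient = {r:.6f}")
```

For a nonsymmetric matrix, as the text notes, this quotient no longer recovers the eigenvalues, which motivates the alternative quotient of [3].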
The standard free vibration model of elastic structural elements is
$$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{0}, \quad (3)$$
which, by the substitution $\mathbf{x}(t) = e^{\lambda t}\mathbf{u}$, renders a quadratically nonlinear eigenvalue problem [5]:
$$(\lambda^2\mathbf{M} + \lambda\mathbf{C} + \mathbf{K})\mathbf{u} = \mathbf{0}. \quad (4)$$
A lot of applications and solvers of quadratic eigenvalue problems have been proposed, e.g., stability analysis of time-delay systems [6], free vibrations of fluid–solid structures [7], a modified second-order Arnoldi method [8], the inexact residual iteration method [9], the homotopy perturbation technique [10], electromagnetic wave propagation and analysis of an acoustic fluid contained in a cavity with absorbing walls [11], and a friction-induced vibration problem under variability [12]. In addition, several applications and solvers of generalized eigenvalue problems have been addressed, e.g., the block Arnoldi-type contour integral spectral projection method [13], small-sample statistical condition estimation [14], matrix perturbation methods [15], the overlapping finite element method [16], the complex HZ method [17], the context of sensor selection [18], and a generalized Arnoldi method [19].
As done in [4], we can take
$$\mathbf{v} = \lambda\mathbf{u} \quad (5)$$
and combine Equations (5) and (4) to obtain
$$\lambda\mathbf{M}\mathbf{v} + \mathbf{C}\mathbf{v} + \mathbf{K}\mathbf{u} = \mathbf{0}, \qquad \mathbf{v} = \lambda\mathbf{u}. \quad (6)$$
Upon defining
$$\mathbf{z} = \begin{bmatrix} \mathbf{u} \\ \mathbf{v} \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} \mathbf{0}_n & \mathbf{I}_n \\ -\mathbf{K} & -\mathbf{C} \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} \mathbf{I}_n & \mathbf{0}_n \\ \mathbf{0}_n & \mathbf{M} \end{bmatrix}, \quad (7)$$
Equation (6) becomes a generalized eigenvalue problem for the $2n$-vector $\mathbf{z}$:
$$\mathbf{A}\mathbf{z} = \lambda\mathbf{B}\mathbf{z}, \quad (8)$$
where $\mathbf{z} \in \mathbb{R}^{2n}$. Equation (8) is used to determine the eigen-pair $(\lambda, \mathbf{z})$, which is a linear eigen-equation associated with the pencil $(\mathbf{A}, \mathbf{B})$, where $\lambda$ is an eigen-parameter. A main drawback of this argumentation is that the dimension is raised doubly from $n$ to $2n$.
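The companion linearization just described can be sketched numerically as follows; the block structure is the standard one consistent with the construction above, while the matrices $\mathbf{M}$, $\mathbf{C}$, $\mathbf{K}$ below are illustrative choices, not from the paper:

```python
import numpy as np

# From (lambda^2 M + lambda C + K) u = 0, introduce v = lambda u and solve
# the 2n-dimensional generalized eigenvalue problem
#   [ 0   I ] [u]          [ I  0 ] [u]
#   [-K  -C ] [v] = lambda [ 0  M ] [v].
n = 2
M = np.eye(n)
C = np.array([[0.1, 0.0], [0.0, 0.2]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])

Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

# eigenvalues of the pencil (A, B); B is nonsingular here
lam = np.linalg.eigvals(np.linalg.solve(B, A))

# each lambda should satisfy det(lambda^2 M + lambda C + K) = 0
for l in lam:
    assert abs(np.linalg.det(l**2 * M + l * C + K)) < 1e-8
print("all", len(lam), "eigenvalues satisfy the quadratic eigen-equation")
```

This also makes the drawback visible: the problem of size $n = 2$ becomes a linear eigenvalue problem of size $2n = 4$.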
Equation (8) can be written as
$$(\mathbf{A} - \lambda\mathbf{B})\mathbf{z} = \mathbf{0}. \quad (9)$$
Because the right-hand side is a zero vector, solving it by a numerical method yields only the trivial solution $\mathbf{z} = \mathbf{0}$. To avoid this situation, Liu et al. [4] introduced an external excitation method by placing a nonzero exciting vector $\mathbf{b}$ on the right-hand side, such that
$$(\mathbf{A} - \lambda\mathbf{B})\mathbf{z} = \mathbf{b}. \quad (10)$$
Solving this equation for $\mathbf{z}$, the eigenvector is then obtained. However, how to select a proper exciting vector $\mathbf{b}$ remains a problem. The basic idea is the transformation from the homogeneous Equation (9) to the nonhomogeneous Equation (10). It is interesting to ask whether a simpler method can realize this type of transformation without introducing an extra exciting vector $\mathbf{b}$. The present paper attempts to make this transformation very easy, which is the main motivation and the major novelty: to realize it by a simple normalization technique. Because the present idea is simpler than that in [4], the new technique is called a simple method; it is introduced in Section 2.
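A minimal sketch of the normalization idea follows; it is only an illustration, not the paper's exact formulation from Section 2, and the matrix, the choice of fixing the first component of $\mathbf{x}$, and the scanning interval are all assumptions of this sketch:

```python
import numpy as np

# Instead of solving the homogeneous system (A - lambda I) x = 0 (which
# numerically yields x = 0), impose a normalization condition x_1 = 1:
# replace the first row by this condition, which makes the right-hand side
# nonzero, and scan lambda for a minimal residual of the eigen-equation.
def residual(A, lam):
    n = A.shape[0]
    Mmat = A - lam * np.eye(n)
    Msys = Mmat.copy()
    Msys[0, :] = 0.0
    Msys[0, 0] = 1.0                    # normalization row: x_1 = 1
    rhs = np.zeros(n)
    rhs[0] = 1.0                        # nonhomogeneous right-hand side
    # lstsq tolerates the rare lambda where Msys is singular
    x = np.linalg.lstsq(Msys, rhs, rcond=None)[0]
    return np.linalg.norm(Mmat @ x)

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # illustrative; eigenvalues 1 and 3
lams = np.linspace(0.0, 4.0, 4001)
res = [residual(A, l) for l in lams]
best = float(lams[np.argmin(res)])
print("estimated eigenvalue near", best)
```

The residual vanishes exactly when $\lambda$ hits an eigenvalue, so a 1D minimizer (the paper uses the golden section search) can replace the coarse grid scan used here.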
Nonlinear eigenvalue problems are important and find a lot of real applications in engineering and applied fields. Betcke et al. [20] collected 52 nonlinear eigenvalue problems in the form of a MATLAB toolbox, which contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. Recently, El-Guide et al. [21] presented two approximation methods for computing eigenfrequencies and eigenmodes of large-scale nonlinear eigenvalue problems resulting from boundary element method solutions of some types of acoustic eigenvalue problems in 3D space. We extend Equation (4) to a general nonlinear eigenvalue problem [20]:
$$\mathbf{F}(\lambda)\mathbf{x} = \mathbf{0}, \quad (11)$$
which is a nonlinear eigen-equation used to solve the eigen-pair $(\lambda, \mathbf{x})$, where $\mathbf{F}(\lambda)$ is an $n \times n$ matrix. Equation (11) is a nonlinear eigenvalue problem because $\mathbf{F}(\lambda)$ is a nonlinear matrix function of the eigen-parameter $\lambda$. In Equation (9), $\mathbf{A} - \lambda\mathbf{B}$ is a linear matrix function of $\lambda$, so that it is a linear eigenvalue problem.
Most numerical methods that deal with nonlinear eigenvalue problems are Newton-type methods [22,23,24,25]. In [26], some available solution techniques for nonlinear eigenvalue problems using the Jacobi–Davidson, Arnoldi, and rational Krylov methods were presented. Zhou [27] used the Leray–Schauder fixed-point theorem to establish the existence of positive solutions of a nonlinear eigenvalue problem. El-Ajou [28] demonstrated the general exact and numerical solutions of four significant matrix fractional differential equations, and a new computational technique was applied to obtain the general solutions of the nonlinear problem in the Caputo sense. Jadamba et al. [29] addressed the nonlinear inverse problem of estimating the stochasticity of a random parameter in stochastic partial differential equations by using a regularized projected stochastic gradient scheme. Later, Harcha et al. [30] tackled the nonlinear eigenvalue problem with the fractional p-Laplacian involving singular weights and obtained the nonexistence of solutions by utilizing a typical version of Picone's identity.
The nonlinear eigenvalue problem is a great challenge for developing efficient and accurate methods [31]. Even for polynomial nonlinear eigenvalue problems, the linearizations to linear eigenvalue problems in a larger space are quite complicated and are, in general, not unique. The present paper intends to overcome these challenges: we directly solve the nonlinear eigenvalue problem in its nonhomogeneous form by incorporating a normalization condition in the original space.
In this paper, we will encounter the problem of solving a nonlinear equation $f(x) = 0$ in which the explicit function $f(x)$ is not available. The Newton method for iteratively solving $f(x) = 0$ is given by
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)},$$
which needs to carry out a point-wise derivative $f'(x_k)$ in the iteration. For some problems, the explicit function $f(x)$ might not be available, which induces great difficulty in using the Newton method to solve the nonlinear equation. To overcome this inefficiency, Liu [32] derived a derivative-free iterative scheme based on a new splitting technique involving two constants $a$ and $b$. In Section 6.3, we will develop a derivative-free fixed-point Newton method to determine $a$ and $b$. With regard to derivative-free fixed-point Newton methods, one can refer to [33] and the references therein.
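To illustrate the derivative-free idea in general (this is the classical Steffensen iteration, not Liu's splitting scheme from [32], whose constants $a$ and $b$ are determined in Section 6.3), the derivative $f'(x_k)$ in Newton's method can be replaced by a divided difference built from $f$ alone:

```python
# Steffensen iteration: replace f'(x_k) by the divided difference
# g(x_k) = (f(x_k + f(x_k)) - f(x_k)) / f(x_k), so no derivative is needed.
def steffensen(f, x0, tol=1e-12, max_iter=100):
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        g = (f(x + fx) - fx) / fx      # derivative-free slope estimate
        x = x - fx / g                  # Newton-like update
    return x, max_iter

# example: solve x^2 - 2 = 0 without evaluating any derivative
root, iters = steffensen(lambda x: x * x - 2.0, 1.0)
print(root, iters)
```

Like Newton's method, this iteration converges quadratically near a simple root while using only evaluations of $f$, which is exactly the situation faced when $f$ is available only point-wise.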
In addition to the derivative-free fixed-point Newton method and the minimization techniques, we will also develop the Newton method for the system of nonlinear equations obtained by incorporating the normalization condition into the eigen-equation. Arnoldi [34] proposed the method of minimized iterations as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix. Many iterative methods were later surveyed at length in [35]. Argyros et al. [36] addressed a semilocal analysis of the Newton–Kurchatov method for solving nonlinear equations involving the splitting of an operator; they also acquired weaker sufficient semilocal convergence criteria and tighter error estimates than in earlier works. Argyros and Shakhno [37] employed local convergence of the combined Newton–Kurchatov method for solving Banach-space-valued equations; they also noted that these modifications of earlier conditions resulted in a tighter convergence analysis and more precise information on the location of the solution.
This paper develops several simple approaches, including two regularization methods, to solve nonlinear eigenvalue problems. The contributions and innovation points of this paper are as follows:
Nonlinear eigenvalue problems can be transformed into minimization problems, for both real and complex eigenvalues.
For solving linear and nonlinear eigenvalue problems, this paper presents normalization techniques to create new nonhomogeneous systems and merit functions.
Two simple regularization methods are combined with the Newton iteration method, which results in very fast convergence for solving nonlinear eigenvalue problems.
Using the derivative-free fixed-point Newton method to directly solve the regularized scalar equation for nonlinear eigenvalue problems, we can quickly obtain high-precision eigenvalues.
The remainder of the paper is arranged as follows. In Section 2, we consider a normalization condition for the uniqueness of the eigenvector and derive a simple nonhomogeneous linear system; the residual of the eigen-equation is minimized by using the 1D golden section search algorithm (1D GSSA) to determine the real eigenvalue, which results in a simple method (SM). Some examples of linear eigenvalue problems in Section 3 exhibit the advantages of the SM in finding the approximate solution of Equation (2). The SM for the nonlinear eigenvalue problem (11) is presented in Section 4, where it is combined with the golden section search algorithm to be a stable solver of eigenvalues and eigenvectors. For complex eigenvalue problems, we propose two normalization equations with nonhomogeneous terms that appear on the right-hand side. Section 5 displays some examples of nonlinear eigenvalue problems solved by the SM and the GSSA. In Section 6, we discuss two simple regularization methods and provide a derivative-free fixed-point Newton method for quickly finding the real eigenvalues; the combination of Newton's method and the regularized equations is also carried out in this section. Finally, the conclusions are drawn in Section 7.
3. Examples of Linear Eigenvalue Problems
In order to assess the performance of the newly developed SM, we test some linear eigenvalue problems.
Example 1. We first demonstrate a case whose exact eigenvalues are known. Although this example is very simple, we adopt it to test the accuracy and efficiency of the proposed SM. Applying the SM, we take the parameters used in the 1D GSSA and plot the merit function with respect to the eigen-parameter over an interval, as shown in Figure 1, whose three minimal points are the corresponding eigenvalues; the corresponding eigenvectors are given as follows. Table 1 lists some results obtained by the SM, where EE means the error of the eigenvalue and NI denotes the number of iterations.
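The 1D GSSA used throughout these examples can be sketched as below; the bracketing interval, tolerance, symmetric test matrix, and the particular merit function (here the smallest singular value of $\mathbf{A} - \lambda\mathbf{I}$, which vanishes at an eigenvalue) are assumptions of this sketch and not the paper's exact settings:

```python
import numpy as np

# 1D golden section search (GSSA): shrink a bracketing interval [a, b]
# around a minimum of f by the inverse golden ratio at each iteration.
PHI = (5 ** 0.5 - 1) / 2   # inverse golden ratio ~ 0.618

def gssa(f, a, b, tol=1e-10):
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    ni = 0
    while b - a > tol:
        ni += 1
        if f(c) < f(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return 0.5 * (a + b), ni

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # illustrative; eigenvalues 1 and 3

def merit(lam):
    # smallest singular value of A - lam*I vanishes at an eigenvalue
    return np.linalg.svd(A - lam * np.eye(2), compute_uv=False)[-1]

lam_min, ni = gssa(merit, 0.5, 1.5)
print(f"eigenvalue ~ {lam_min:.12f} after NI = {ni} iterations")
```

Each eigenvalue is found by bracketing one minimal point of the merit function, which matches how the minima in Figure 1 are refined.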
Example 2. We consider another case. In the SM, we take the parameters for the 1D GSSA and plot the merit function in Figure 2, the four minimal points of which are the corresponding eigenvalues. Table 2 lists some results obtained by the SM. The eigenvectors corresponding to these eigenvalues are given as follows.
Example 3. A nonsymmetric matrix is given. With the chosen parameters, we plot the merit function in Figure 3, the four minimal points of which are the corresponding eigenvalues. Table 3 lists some results obtained by the SM. The eigenvectors corresponding to these eigenvalues are given as follows.
Example 4. Consider the Frank nonsymmetric matrix [38,39,40], for which the largest eigenvalue is known. In the SM, we take the parameters for the 1D GSSA. The merit function is plotted in Figure 4, whose minimal points are the last nine eigenvalues. Through 103 iterations of the 1D GSSA, the eigenvalue and eigenvector are obtained.
Example 5. In Equation (2), we take the Hilbert matrix. Since the Hilbert matrix is highly ill-conditioned, we take Equation (18) instead of Equation (15) to compute the eigenvalue. In Figure 5, we plot the merit function with respect to the eigen-parameter over an interval, for which the seven minimal points are the corresponding eigenvalues; the largest eigenvalue lies between 1.5 and 2. In Table 4, we list the largest eigenvalues for several dimensions obtained in [41] using the cyclic Jacobi method [42] and in [3] using the external excitation method.
Due to the highly ill-conditioned nature of the Hilbert matrix with a large n, this is quite a difficult linear eigenvalue problem. For this problem, we compute the largest eigenvalue; the SM converges very quickly, in 69 iterations under the specified convergence criterion, and the error of the eigen-equation is small. Notice that the smallest eigenvalue of the Hilbert matrix with a large n is very difficult to compute since it is very close to zero. Nevertheless, the SM can also obtain the smallest eigenvalue with a small error.
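For reference, the Hilbert matrix and its extreme eigenvalues for the seven-minima case of Figure 5 can be checked directly (the cross-check with NumPy's symmetric eigensolver is this sketch's addition, not the paper's method):

```python
import numpy as np

# Hilbert matrix H_ij = 1/(i + j - 1); for n = 7 its largest eigenvalue
# lies between 1.5 and 2, consistent with Figure 5, while the smallest
# eigenvalue is tiny, which makes the matrix severely ill-conditioned.
def hilbert(n):
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1.0)

H = hilbert(7)
eigs = np.linalg.eigvalsh(H)   # sorted ascending
print("largest eigenvalue:", eigs[-1])
print("smallest eigenvalue:", eigs[0])
print("condition number ~", eigs[-1] / eigs[0])
```

The huge ratio between the largest and smallest eigenvalues illustrates why the smallest one is so hard to resolve numerically for large n.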
5. Examples of Nonlinear Eigenvalue Problems
Example 6. To demonstrate the new idea in Equation (29), we consider a generalized eigenvalue problem taken from [44]. By using the SM with the parameters for the 1D GSSA, the merit function is plotted in Figure 6, in which five eigenvalues appear as minima. Under the given convergence criterion, we obtain NI = 73. Those eigenvalues are listed as follows.
Example 7. To display the advantage of Equation (32), we consider a standard eigenvalue problem with a matrix that possesses complex eigenvalues. By using the SM with the parameters for the 2D GSSA, we can obtain NI = 73, and the error of the eigenvalue is small; the SM is very accurate in finding the complex eigenvalue. On the other hand, using the SM in Theorem 4 with the parameters for the 2D GSSA, we can obtain NI = 73, and the error of the obtained eigenvalue is zero.
By using the SM, we plot the merit function in Figure 7a with respect to the eigen-parameter over an interval, on which two real eigenvalues appear as minimal points and are obtained, respectively. We can further derive that there are a total of 24 eigenvalues, as shown in Figure 7b.
Example 9. As an application, we consider a time-delay linear system of first-order ordinary differential equations subject to a time-delay external force. Inserting an exponential ansatz into Equation (49) and canceling the common exponential factor on both sides, we obtain a time-delay nonlinear eigenvalue problem. The eigenvalues of this system are very important, as they reflect the stability of the time-delay system. We consider the time-delay system given in [25]. With the chosen parameters, there exist four complex eigenvalues, which we find through some manipulations. We apply the SM to solve this problem; for the first complex eigenvalue, we obtain NI = 76, and for the second, NI = 75.
By taking the parameters as those listed in [25,45], there exists a double non-semisimple eigenvalue. We obtain NI = 62, and the eigenvalue obtained is very close to the exact one, with a small error.
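Since the matrices of this example are not reproduced here, a scalar delay equation can illustrate how such delay eigen-equations arise and are solved numerically; the coefficients $a$ and $\tau$ below are assumptions of this sketch:

```python
import numpy as np

# For the scalar delay equation x'(t) = -a x(t - tau), the ansatz
# x(t) = e^{lambda t} c gives, after canceling e^{lambda t}, the delay
# eigen-equation F(lambda) = lambda + a e^{-lambda tau} = 0.
# For a*tau < 1/e a real eigenvalue exists; bracket it and bisect on F.
a, tau = 0.2, 1.0

def F(lam):
    return lam + a * np.exp(-lam * tau)

lo, hi = -1.0, 0.0          # F(lo) < 0 < F(hi): a root is bracketed
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam = 0.5 * (lo + hi)
print("real delay eigenvalue ~", lam, " residual =", F(lam))
```

The transcendental term $e^{-\lambda\tau}$ is what makes delay eigen-equations genuinely nonlinear in $\lambda$, in contrast to the polynomial case of Equation (4).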
Example 10. Consider a quadratic eigenvalue problem having two real eigenvalues. As shown in Figure 8, there exist two minima of the merit function. We apply the SM to solve this problem; for the first real eigenvalue, we obtain NI = 71, and for the second, NI = 72. Then, by applying the 2D GSSA to solve this problem, we obtain NI = 75 and the complex eigenvalues.
Similarly, we adopt the second normalization equation in Theorem 4, and by applying the 2D GSSA to solve this problem, we obtain NI = 75 and the complex eigenvalues, which are equal to those obtained above with the SM derived from Theorem 3.
Example 11 (from [25]). We adopt the second normalization Equation (38) in Theorem 4 to solve this nonlinear eigenvalue problem. By applying the 2D GSSA, with NI = 68, the eigenvalue can be obtained.