1. Introduction
Direct and inverse eigenvalue problems (EVP) of linear differential operators play an important role in all vibration problems in engineering and physics [1]. The theory of direct Sturm–Liouville problems (SLP) started around the 1830s in the independent works of Sturm and Liouville. The inverse Sturm–Liouville theory originated in 1929 [2].
In this paper, we consider the $2m$th-order, nonsingular, self-adjoint eigenvalue problem:

$$(-1)^m \big(p_0(x)\, y^{(m)}\big)^{(m)} + (-1)^{m-1} \big(p_1(x)\, y^{(m-1)}\big)^{(m-1)} + \cdots - \big(p_{m-1}(x)\, y'\big)' + p_m(x)\, y = \lambda\, w(x)\, y, \quad a < x < b, \qquad (1)$$

along with $y$ satisfying some general, separated boundary conditions at $a$ and $b$ (see Equation (2)). Usually, the functions $p_0, \ldots, p_m$ and $w$ are continuous on the finite closed interval $[a, b]$, and $p_0$ has a continuous derivative. Further assumptions A1–A4 are placed on the coefficient functions in Section 2.1. For Equation (1), the direct eigenvalue problem is concerned with determining the eigenvalues $\lambda$ given the coefficient functions $p_0, \ldots, p_m$ and $w$, while the inverse eigenvalue problem is concerned with reconstructing the unknown coefficient functions $p_0, \ldots, p_m, w$ from the knowledge of suitable spectral data satisfying the equation.
There is a variety of numerical methods for the simplest case of Equation (1), $m = 1$, known as the direct Sturm–Liouville problem, the most notable ones being the finite difference method (FDM) [3], Numerov's method (NM) [4], the finite element method (FEM) [5], the modified Numerov's method (MNM) [6], FEM with trigonometric hat functions using Simpson's rule (FEMS) and using the trapezoidal rule (FEMT) [7], Lie group methods (LGM) [8], MATSLISE [9], Magnus integrators (MI) [10], and the homotopy perturbation method (HPM) [11].
For a discussion on numerical methods for the inverse SLP, refer to Freiling and Yurko [12]. McLaughlin [13] provided an overview of analytical methods for second- and fourth-order inverse problems. For the inverse SLP, some available methods are the iterative solution (IS) [14] and Numerov's method (NM) [15].
Slightly fewer techniques are available for the direct eigenvalue problem of the case $m = 2$, known as the fourth-order Sturm–Liouville problem (FSLP). For example, the Adomian decomposition method (ADM) [16], the Chebyshev spectral collocation method (CSCM) [17], the homotopy perturbation method (HPM) [11], the homotopy analysis method (HAM) [18], the differential quadrature method (DQM) and the Boubaker polynomials expansion scheme (BPES) [19], the Chebychev method (CM) [20], spectral parameter power series (SPPS) [21], Chebyshev differentiation matrices (CDM) [22], the variational iteration method (VIM) [23], matrix methods (MM) [24], and the Lie group method [25] are the prominent techniques available.
Among others, Barcilon [26] and McLaughlin [27,28] proposed numerical algorithms for solving the inverse FSLP.
Several methods are available for $m = 3$, the sixth-order SLP (SSLP): the shooting method [29], the Chebyshev spectral collocation method (CSCM) [17], the Adomian decomposition method (ADM) [30], and Chebyshev polynomials (CP) [31].
However, as the order of the direct problem increases, it becomes much harder to solve, and the accuracy of the eigenvalues decreases with the index, since only a small portion of the numerical eigenvalues are reliable [32].
All of the above-mentioned methods are applicable only to a particular order of the SLP and to specific set(s) of boundary conditions. In contrast, in this paper, the Lie group method in combination with the Magnus expansion is utilized to construct a method applicable to an SLP of any order with arbitrary boundary conditions; this approach was initially applied to solving the SLP by Moan [8] and Ledoux et al. [10], and to the FSLP by Mirzaei [25]. The proposed method is universal in the sense that it can be applied to solve an SLP of any (even) order with different types of boundary conditions (including some singular ones), subject to the computational feasibility inherent in large matrix computations.
The inverse problem is much harder, as restrictions on the spectral data must be imposed to ensure uniqueness. Barcilon [33] proved that, in general, $m + 1$ spectra associated with $m + 1$ distinct "admissible" boundary conditions are required to determine the coefficient functions $p_i$ uniquely (with $w = 1$). Later, McLaughlin and Rundell [34] proved that the measurement of a particular eigenvalue for an infinite set of different boundary conditions is sufficient to determine the unknown potential. Both of these theorems require an infinite number of accurate eigenvalue measurements, which are hard to obtain in practice, as the higher eigenvalues are usually more expensive to compute than the lower ones [35]. However, the present technique can be applied to address some of the above difficulties in finding suitable set(s) of eigenvalues so that the inverse problem can be solved effectively.
Section 2 describes the method constructed using the Lie group method in combination with the Magnus expansion, and Section 3 provides some numerical examples of direct Sturm–Liouville problems of different orders with a variety of boundary conditions (including one singular problem). Furthermore, the efficiency and accuracy of the proposed technique are illustrated using Example 1. Section 4 presents two examples of the inverse SLP of order two, with a symmetric and a non-symmetric potential. Finally, Section 5 concludes with a discussion and summary.
2. Materials and Methods
2.1. Notations
In this paper, Equation (1) is considered under the following assumptions:
- A1: All coefficient functions are real-valued.
- A2: The interval $(a, b)$ is finite.
- A3: The coefficient functions $p_1, \ldots, p_m$, $w$, and $1/p_0$ are in $L^1(a, b)$.
- A4: The infima of $p_0$ and $w$ are both positive.

(The last two assumptions, A3 and A4, are relaxed in Example 3.) By Greenberg and Marletta [29], provided the above assumptions hold, one gets:
- R1: The eigenvalues are bounded below.
- R2: The eigenvalues can be ordered: $\lambda_0 \le \lambda_1 \le \lambda_2 \le \cdots$.
- R3: $\lambda_k \to \infty$ as $k \to \infty$.
Defining the quasi-derivatives

$$y^{[k]} = y^{(k)}, \quad k = 0, \ldots, m - 1; \qquad y^{[m]} = p_0\, y^{(m)}; \qquad y^{[m+k]} = p_k\, y^{(m-k)} - \big(y^{[m+k-1]}\big)', \quad k = 1, \ldots, m,$$

the general, separated, self-adjoint boundary conditions can be written in the form [29]:

$$A_1 U(a) + A_2 V(a) = 0, \qquad B_1 U(b) + B_2 V(b) = 0, \qquad (2)$$

where $U = \big(y^{[0]}, \ldots, y^{[m-1]}\big)^T$ and $V = \big(y^{[2m-1]}, \ldots, y^{[m]}\big)^T$; $A_1, A_2, B_1, B_2$ are real $m \times m$ matrices; $A_1 A_2^T = A_2 A_1^T$ and $B_1 B_2^T = B_2 B_1^T$; and the $m \times 2m$ matrices $(A_1 : A_2)$ and $(B_1 : B_2)$ have rank $m$. Equation (1) in matrix form is:

$$\mathcal{U}' = G(x, \lambda)\, \mathcal{U}, \qquad (3)$$

with $\mathcal{U} = \big(y^{[0]}, y^{[1]}, \ldots, y^{[2m-1]}\big)^T$ and $G(x, \lambda)$ the $2m \times 2m$ coefficient matrix obtained by rewriting Equation (1) in terms of the quasi-derivatives; the boundary conditions in Equation (2) then become

$$B_a\, \mathcal{U}(a) = 0, \qquad B_b\, \mathcal{U}(b) = 0, \qquad (4)$$

where the $m \times 2m$ matrices $B_a$ and $B_b$ are built from $(A_1 : A_2)$ and $(B_1 : B_2)$, respectively, with columns ordered according to $\mathcal{U}$.
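For instance, in the classical second-order case $m = 1$, $-(p_0 y')' + p_1 y = \lambda w y$, the quasi-derivatives are $y^{[0]} = y$ and $y^{[1]} = p_0 y'$, and the matrix of Equation (3) can be coded directly. The following is a minimal MATLAB sketch; the function handles p0, p1, and w are assumptions, not fixed names from the text:

```matlab
% Coefficient matrix G(x, lambda) for -(p0*y')' + p1*y = lambda*w*y,
% written in terms of the quasi-derivatives [y; p0*y']:
%   (y)'     = (1/p0) * (p0*y')
%   (p0*y')' = (p1 - lambda*w) * y
Gfun = @(x, lambda) [ 0,                      1./p0(x);
                      p1(x) - lambda.*w(x),   0        ];
```

Note that $G$ has zero trace, which is what places the propagator of Equation (3) on the special linear group discussed next.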
2.2. Lie Algebra
The Lie group $G$ was introduced by Lie [36] as a differentiable manifold with the algebraic structure of a group. The tangent space at the identity is called the Lie algebra of $G$ and is denoted by $\mathfrak{g}$. A binary operation $[\cdot, \cdot]: \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ is called a Lie bracket. For matrices $A, B \in \mathfrak{g}$, the Lie bracket $[A, B]$ is defined as the commutator:

$$[A, B] = AB - BA.$$

The special Lie group is defined as

$$SL(n) = \{\, Y \in \mathbb{R}^{n \times n} : \det Y = 1 \,\}.$$

A system of linear ordinary differential equations on $\mathbb{R}^n$ has the form [37]:

$$y'(x) = A(x)\, y(x).$$
Letting $y(x) = Y(x)\, y_0$, where $y_0$ represents the initial data, $Y(x)$ satisfies the differential equation

$$Y'(x) = A(x)\, Y(x), \qquad Y(0) = I, \qquad (8)$$

where $I$ stands for the $n$-dimensional identity matrix [38]. Since $\operatorname{tr} A(x) = 0$ in our setting, we have $\det Y(x) = 1$, so that Equation (8) evolves on the Lie group $SL(n)$, and hence can be solved by the Magnus expansion. In the present case, Equation (8) reads

$$Y'(x) = G(x, \lambda)\, Y(x), \qquad Y(a) = I. \qquad (9)$$

Let $Y(x; \lambda)$ be a solution of Equation (9) for a given $\lambda$; the solution of the system in Equation (3) with initial condition $\mathcal{U}(a)$ is

$$\mathcal{U}(x) = Y(x; \lambda)\, \mathcal{U}(a).$$

Thus, $\mathcal{U}(b) = Y(b; \lambda)\, \mathcal{U}(a)$, and using the boundary conditions in Equation (4):

$$\begin{pmatrix} B_a \\ B_b\, Y(b; \lambda) \end{pmatrix} \mathcal{U}(a) = 0.$$

Here, $F(\lambda) = \det \begin{pmatrix} B_a \\ B_b\, Y(b; \lambda) \end{pmatrix}$. Hence, the eigenvalues $\lambda$ of the SLP in Equation (1) are the roots of the characteristic equation:

$$F(\lambda) = 0. \qquad (10)$$
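The evaluation of $F(\lambda)$ can be organized as a simple sweep of the interval. The following is a minimal MATLAB sketch under the notation above (Gfun, Ba, and Bb are the assumed handle and matrices of Equations (9) and (4); for readability, the single step shown here is the second-order midpoint Magnus rule, which Section 2.5 upgrades to sixth order):

```matlab
function Fval = charfun(lambda, Gfun, a, b, N, Ba, Bb)
% Evaluate the characteristic function F(lambda) of Equation (10).
% Gfun(x, lambda) returns the 2m x 2m matrix G; Ba, Bb are the m x 2m
% boundary condition matrices of Equation (4).
    h = (b - a)/N;
    Y = eye(size(Ba, 2));                      % Y(a) = I
    for k = 1:N
        xmid = a + (k - 1/2)*h;                % midpoint of the kth step
        Y = expm(h*Gfun(xmid, lambda))*Y;      % one (order-2) Magnus step
    end
    Fval = det([Ba; Bb*Y]);                    % zero iff lambda is an eigenvalue
end
```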
As the problem in Equation (1) is self-adjoint, the roots of Equation (10) are simple [14]; hence, they can be calculated using the bisection method or any other root-finding method. Although an explicit expression for $F(\lambda)$ cannot be found, we can find the roots by computing $F(\lambda^*)$ for any real number $\lambda^*$. Besides, one can plot the function $F(\lambda)$ to get an idea of the locations of the roots and to specify a smaller bracket for the bisection method, thus increasing the efficiency of the method. However, slow convergence and high computational time requirements make the bisection method impractical, especially for higher-order SLPs. As the explicit form of $F$ (and its derivatives) is unknown, a Newton-like method cannot be applied. To overcome these difficulties, we propose an alternative root-finding method in the following subsection.
2.3. Multisection Method
A new root-finding method, named the "multisection method", a variant of the bisection method, is proposed, which can converge to the desired root faster. The idea is to divide the root interval into $m$ subsections ($m = 2$ being the bisection method) and locate the sign-changing subinterval. This subinterval is again divided into $m$ subsections and the sign-changing subinterval is located. This continues until the desired accuracy or the maximum number of iterations is reached (see Example A1 in Appendix A, which illustrates the usage and the performance of the multisection method).

This method overcomes drawbacks of the bisection method, yet provides a simple, fast, derivative-free solution to the root-finding problem. It is able to find even multiple roots in an interval and, given a sufficiently small subsection length, it can find roots that cluster close to each other (see Example 4). Unlike the bisection method, this method does not require the bracketing interval to have different signs at the endpoints. In the present paper, the input to this method is a set of function values, as it does not require specifying the explicit functional form of $F$, which is another advantage of this method. Due to the faster convergence of the multisection method (even at the risk of more computational cost), it can be used to find the eigenvalues (or roots) of the characteristic function. A minimal sketch of the idea follows.
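The following MATLAB sketch illustrates the multisection idea (a sketch only, not the reference implementation of Listing S1). Here F is taken as a function handle, and the routine keeps every sign-changing subsection so that several roots can be bracketed at once:

```matlab
function r = multisection(F, a, b, m, L, tol)
% Multisection root finding: repeatedly split each bracket into m
% subsections and keep the sign-changing ones.
%   F    - function handle, e.g. the characteristic function F(lambda)
%   a, b - search interval;  m - subsections per refinement
%   L    - maximum refinements;  tol - bracket width tolerance
    brackets = [a, b];
    for step = 1:L
        refined = zeros(0, 2);
        for i = 1:size(brackets, 1)
            x  = linspace(brackets(i, 1), brackets(i, 2), m + 1);
            fx = arrayfun(F, x);
            % keep every subsection whose endpoint values change sign
            j = find(sign(fx(1:end-1)) .* sign(fx(2:end)) <= 0);
            refined = [refined; x(j)', x(j+1)']; %#ok<AGROW>
        end
        brackets = refined;
        if isempty(brackets) || all(brackets(:, 2) - brackets(:, 1) < tol)
            break
        end
    end
    r = mean(brackets, 2);   % midpoints of the final brackets
end
```

For instance, a call such as multisection(@(t) F(t), 0, 100, 50, 15, 1e-8) would bracket all roots of $F$ below 100, provided the initial grid is finer than the minimal gap between roots.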
2.4. Magnus Expansion
Magnus [39] proposed a solution to Equation (8) as

$$Y(x) = \exp(\Omega(x))\, Y(0),$$

and a series expansion for the exponent,

$$\Omega(x) = \sum_{k=1}^{\infty} \Omega_k(x),$$

which is called the Magnus expansion. Its first terms can be written as

$$\Omega_1(x) = \int_0^x A(t_1)\, dt_1, \qquad \Omega_2(x) = \frac{1}{2} \int_0^x \int_0^{t_1} [A(t_1), A(t_2)]\, dt_2\, dt_1,$$

where $[A, B] \equiv AB - BA$ is the matrix commutator of $A$ and $B$.
2.5. Numerical Procedure
The Magnus series only converges locally; hence, to calculate $Y(b)$, the interval $[a, b]$ should be divided into $N$ steps such that the Magnus series converges in each subinterval $[x_{n-1}, x_n]$, $n = 1, \ldots, N$, with $x_0 = a$ and $x_N = b$. Then, the solution at $x_N = b$ is represented by

$$Y(b) = \prod_{n=N}^{1} \exp\big(\Omega(x_{n-1}, x_n)\big)\, Y(a),$$

and each series $\Omega(x_{n-1}, x_n)$ has to be appropriately truncated.
The three steps in the procedure are:
- E1: The $\Omega$ series is truncated at an appropriate order: for achieving an integration method of order $p$, only terms up to $\Omega_{p-2}$ in the $\Omega$ series are required [38].
- E2: The multivariate integrals in the truncated series are replaced by conveniently chosen approximations: if their exact evaluation is not possible or is computationally expensive, a numerical quadrature may be used instead. Suppose that $b_i$ and $c_i$ ($i = 1, \ldots, s$) are the weights and nodes of a particular quadrature rule, say $X$, of order $p$, respectively. Then, the integrals are approximated by combinations of the samples $A_i = A(x_{n-1} + c_i h)$, with $h = x_n - x_{n-1}$. By Jódar and Marletta [40], systems with matrices of general size require a $\lambda$-dependent step size restriction in order for the method to be well defined.
- E3: The exponential of the matrix has to be computed. This is the most expensive step of the method. For this, the Matlab function expm() is used, which provides the value to machine accuracy.
The algorithm then provides an approximation $Y_n$ for $Y(x_n)$ starting from $Y_{n-1}$, with $Y_0 = Y(a) = I$.
In the present paper, $\Omega$ is truncated using a sixth-order Magnus series, and the integrals are approximated using the three-point Gaussian integration method [41]. The truncated series $\Omega^{[6]}$ is obtained from the following equations. Let

$$A_i = A(x_{n-1} + c_i h), \quad i = 1, 2, 3, \qquad \alpha_1 = h A_2, \qquad \alpha_2 = \frac{\sqrt{15}}{3}\, h\, (A_3 - A_1), \qquad \alpha_3 = \frac{10}{3}\, h\, (A_3 - 2A_2 + A_1).$$

Then,

$$C_1 = [\alpha_1, \alpha_2], \qquad C_2 = -\frac{1}{60}\, [\alpha_1,\, 2\alpha_3 + C_1],$$

$$\Omega^{[6]} = \alpha_1 + \frac{1}{12}\, \alpha_3 + \frac{1}{240}\, \big[-20\alpha_1 - \alpha_3 + C_1,\ \alpha_2 + C_2\big],$$

where $c_1 = \frac{1}{2} - \frac{\sqrt{15}}{10}$, $c_2 = \frac{1}{2}$, $c_3 = \frac{1}{2} + \frac{\sqrt{15}}{10}$ are the nodes of the Gaussian quadrature.
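One step of this scheme can be sketched in MATLAB as follows (a sketch assuming the standard sixth-order Magnus integrator with three Gauss–Legendre nodes; Gfun stands for the matrix function of Equation (9)):

```matlab
function Y = magnus6_step(Gfun, x0, h, Y)
% Advance Y' = G(x)Y by one step of the sixth-order Magnus method.
    c  = [1/2 - sqrt(15)/10, 1/2, 1/2 + sqrt(15)/10];   % Gauss nodes
    A1 = Gfun(x0 + c(1)*h);
    A2 = Gfun(x0 + c(2)*h);
    A3 = Gfun(x0 + c(3)*h);
    a1 = h*A2;                               % alpha_1
    a2 = (sqrt(15)/3)*h*(A3 - A1);           % alpha_2
    a3 = (10/3)*h*(A3 - 2*A2 + A1);          % alpha_3
    comm = @(X, Z) X*Z - Z*X;                % matrix commutator [X, Z]
    C1 = comm(a1, a2);
    C2 = (-1/60)*comm(a1, 2*a3 + C1);
    Om = a1 + a3/12 + (1/240)*comm(-20*a1 - a3 + C1, a2 + C2);
    Y  = expm(Om)*Y;                         % step E3: matrix exponential
end
```

Replacing the midpoint step in the earlier charfun sketch by magnus6_step yields a sixth-order evaluation of the characteristic function.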
2.6. Inverse SLP Algorithm
The Magnus method’s ability to handle various types of boundary conditions, adaptability to extend to higher-order problems, and accuracy with regard to higher index eigenvalues naturally allow it to be useful in generating suitable spectra for constructing and testing different inverse SLPs.
The iterative solution presented by Barcilon [14] was selected as the inverse SLP algorithm for two reasons: first, there is no evidence of an actual implementation of Barcilon's algorithm in the literature; second, other numerical methods available for the inverse problem are not amenable to generalization to higher-order equations or systems. The following is Barcilon's algorithm in brief:
Consider the Sturm–Liouville problem

$$-y'' + q(x)\, y = \lambda y, \quad 0 \le x \le 1, \qquad y(0) = y(1) = 0, \qquad (15)$$

for a symmetric and normalized potential $q$, i.e.,

$$q(x) = q(1 - x), \qquad \int_0^1 q(x)\, dx = 0. \qquad (16)$$

Then, the spectrum $\{\lambda_k\}_{k=1}^{\infty}$ uniquely determines the symmetric potential $q$ [42]. The eigenvalue problem in Equation (15) is equivalent to the pair of Sturm–Liouville problems on the half interval given in Equations (17) and (18). In addition, the two spectra $\{\mu_k\}$ and $\{\nu_k\}$ of Equations (17) and (18) are interlaced:

$$\mu_1 < \nu_1 < \mu_2 < \nu_2 < \cdots. \qquad (19)$$

Equations (17) and (18) are combined into the single eigenvalue problem of Equation (21), whose spectrum is the combination of the two spectra given by Equation (20). Equation (21) is not self-adjoint, and its adjoint is given by Equation (22). Then, the inverse SLP procedure presented in Algorithm 1 can be used to recover the unknown potential $q$.
For a symmetric potential, a fixed set of eigenfunctions can be used in the updating Equation (24) for $q$; for example, the eigenfunctions for the case $q = 0$ [14]. This is possible because,
"if $q$ is integrable, the $k$th eigenvalue $\lambda_k$ behaves asymptotically as

$$\lambda_k = \lambda_k^{(0)} + \bar{q} + \delta_k,$$

where $\lambda_k^{(0)}$ is the $k$th eigenvalue of the SLP with the same BCs and zero potential, $\bar{q}$ is the mean value of $q$, and $\delta_k$ is the remainder for a smooth potential. Moreover, the information that the given spectrum provides about the variation of the unknown potential is contained in the terms $\delta_k$ and, in view of their behaviour, the first eigenvalues are the most important for the reconstruction of $q$." [43]
This implies that the required set can be restricted to the first few eigenvalues. In fact, when the number of input eigenvalues increases, the error also increases, which can be attributed to the approximation errors in the higher index eigenvalues (see Figure 7).
Implementation details of the steps of Algorithm 1 are as follows:
- Line (Data:) The input eigenvalue sequence is truncated to a finite number of terms. Using the Magnus method on Equations (17) and (18), the truncated eigenvalue sequences $\{\mu_k\}$ and $\{\nu_k\}$ are obtained and, using Equation (20), they are combined to obtain the input spectrum $\{\hat{\mu}_k\}$.
- Line (Result:) The output is a finite vector of function values $\big(q(x_0), \ldots, q(x_n)\big)$, where $n$ is the number of subdivisions of the $x$-axis.
- Line (1) The initial guess is set to $q^{(0)} = 0$.
- Line (2) By setting $q = 0$ in Equations (17) and (18), closed-form expressions for the unperturbed eigenvalues and eigenfunctions are easily obtained. For the rest of the algorithm, the eigenfunctions are kept fixed at these $q = 0$ eigenfunctions.
- Line (3) The optimal while-loop condition $\|\hat{\mu} - \mu\| = 0$ may never be reached due to various errors in the numerical procedure, and is relaxed to $\|\hat{\mu} - \mu\| < \epsilon$ and/or $t \le t_{\max}$, where $\epsilon$ is a fixed tolerance and $t_{\max}$ is a maximum number of iterations.
- Line (4) The required derivatives and integrals in Equation (24) are approximated using the Matlab built-in functions gradient() and trapz(), respectively. Furthermore, the Matlab built-in function griddedInterpolant() with the linear interpolation method is used to obtain an explicit functional form of $q$ from the set of points $\big(x_i, q(x_i)\big)$. This is necessary because, in the next step, the explicit form of $q$ is the input to the Magnus method (which requires evaluating $q$ at arbitrary points). Only one of the EVPs (Equation (21) or (22)) needs to be solved using the Magnus method, since only the eigenvalue sequence is needed (as the eigenfunctions are kept fixed) in the subsequent iterations.
Algorithm 1: Solving inverse SLP.
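A skeletal MATLAB sketch of this implementation is given below. Here solve_direct() (standing in for the Magnus solver of Listing S2) and update_q() (standing in for the correction of Equation (24)) are hypothetical helper names, and muHat denotes the input spectrum from the Data line:

```matlab
% Skeletal sketch of Algorithm 1 with the implementation choices above.
n    = 100;                              % subdivisions of the x-axis
tmax = 50;  tol = 1e-6;                  % relaxed stopping rule, line (3)
xs   = linspace(0, 1, n + 1);
q    = zeros(size(xs));                  % line (1): initial guess q = 0
for t = 1:tmax
    qFun = griddedInterpolant(xs, q, 'linear');   % explicit form of q
    mu   = solve_direct(qFun, numel(muHat));      % spectrum of Equation (21)
    if norm(muHat - mu) < tol, break, end
    q    = q + update_q(xs, mu, muHat);  % line (4): update via Equation (24)
end
```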
3. Results of Direct Sturm–Liouville Problems
In this section, some numerical examples of Sturm–Liouville problems of different orders are presented. Furthermore, the efficiency and accuracy of the proposed technique outlined in the previous section are investigated numerically. Two examples are presented in the next section, illustrating the Magnus method's usefulness in solving inverse SLPs.
Here, we made use of the MATLAB 2014 package to code the three algorithms, namely, the multisection method (Listing S1), the Magnus method for solving the SLP (Listing S2), and Barcilon's method for solving the inverse SLP (Listing S3). The codes were executed on an Intel® Core™ i3 CPU at 2.40 GHz, equipped with 8 GB RAM (see Supplementary Materials).
For the numerical results (unless otherwise specified), fixed values were used for the following parameters: $n$, the number of subdivisions of the interval $[a, b]$; $m$, the number of subdivisions of the interval $[0, \lambda_{\max}]$, where $\lambda_{\max}$ is the largest eigenvalue searched for; and $L$, the number of multisection steps used to calculate each eigenvalue of the characteristic function.
The performance of the Magnus method was measured by the absolute error $E_k$, which is defined as

$$E_k = \big|\lambda_k - \tilde{\lambda}_k\big|, \qquad (26)$$

where $\tilde{\lambda}_k$ indicates the $k$th eigenvalue obtained by the Magnus method and $\lambda_k$ is the $k$th exact eigenvalue, and by the relative error $R_k$, which is defined as

$$R_k = \frac{\big|\lambda_k - \tilde{\lambda}_k\big|}{|\lambda_k|}. \qquad (27)$$
There are several parameters in the Magnus method, which affect the performance and the accuracy of the method:
- the eigenvalue index $k$, or the eigenvalue $\lambda$;
- the number of subdivisions in $\lambda$: $m$;
- the number of subdivisions in $x$: $n$;
- the number of multisection iterations: $L$.
The performance was measured by the computation time used. Here, the Matlab function timeit() was used. timeit measures the time required to run a function by calling the specified function multiple times and computing the median of the measurements, which provides a robust estimate of the execution time [44]. The accuracy was measured using the absolute and relative errors defined in Equations (26) and (27), respectively. Example 1 illustrates the effect of these variables on an SLP.
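For instance, the runtime of a single characteristic-function evaluation can be measured as follows (a hypothetical call reusing the charfun sketch of Section 2.2 and its assumed arguments):

```matlab
% Median runtime of one evaluation of F(lambda) at lambda = 10
t = timeit(@() charfun(10, Gfun, 0, pi, 64, Ba, Bb));
```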
Example 1. Consider the eigenvalue problem of Equation (28), whose exact eigenvalues are known in closed form. In Figure 1a, the absolute error increases (as the eigenvalues increase quadratically); however, the relative error (independent of $\lambda$) gradually decreases. This shows the Magnus method's ability to find higher-index eigenvalues with a small relative error (see Table A1 in Appendix A for a detailed analysis of the code). In addition, the computation time increases very slowly and linearly with the index of the eigenvalue (Figure 1b). As expected, accuracy and computation time increase with the number of subdivisions of $\lambda$ and the number of multisection iteration steps (Figure 2). After around $L = 15$ multisection iterations, the error no longer improves; thus, it is sufficient to stop at 15 iterations. As Figure 2b shows, the computation time increases with the number of subdivisions of $x$, but there is no change in the error (Figure 2a). This is expected since, in Equation (28), all coefficient functions are constant; hence, $G$ is independent of $x$ and of the number of subdivisions $n$.

Example 2 (SL Regular problem). This is an almost singular eigenvalue problem due to Paine et al. [3]. Table 1 and Table 2 list the first four eigenvalues of this problem and compare the relative errors with the FDM ([3], Table 4), NM ([4], Table 3), FEM ([5], Table 3), MNM ([6], Table 5), FEMT and FEMS ([7], Table 2), and LGM ([8], Table 4) methods. The Magnus method exhibits good relative errors compared with the other methods. Figure 3a shows the characteristic function for this problem. It can be seen that the roots (eigenvalues) are simple and that the eigenvalues lie further apart as the index increases.

Example 3 (Singular SLP)
. The Magnus method is robust and can handle even singular Sturm–Liouville problems. For example, consider a Bessel equation ([45], Test problem 19). The endpoint $x = 0$ is singular and a limit-circle non-oscillatory (LCN) point (note that Assumptions A3 and A4 mentioned in Section 2.1 no longer hold). Taking Friedrichs BCs and writing the boundary condition coefficients in the matrix form of Equation (2), the first eigenvalues are easily obtained.

Example 4. This example, a version of the Mathieu equation, is also due to (Pryce [45], Test problem 5). Here, the lower eigenvalues form clusters of 6 (see Figure 3b). Using a sufficiently small step size (<0.004, the minimum difference between two consecutive eigenvalues) for the subdivision in the $\lambda$ dimension, the Magnus method is able to find even the lower eigenvalues with an accuracy comparable to MATSLISE's (see Table 3).

Example 5 (FSLP: Simple BCs)
. This example is due to (Rattana and Böckmann [24], Example 1). By Schueller [46], the exact eigenvalues are known in closed form. From the results in Table 4 and Table 5 and Figure 4, it is clear that the Magnus method is more accurate. However, unlike the other methods, the accuracy decreases with the index of the eigenvalue due to the increasing magnitude of the eigenvalue. In contrast, for the second-order SLP using the Magnus method, the relative error decreases with the index (Figure 1a).

Example 6 (FSLP: General BCs)
. This example, involving more general boundary conditions, was taken from ([24], Example 4) to illustrate the Magnus method's versatility. As Rattana and Böckmann ([24], p. 153) claimed, the exact eigenvalues of this problem could not be computed, and there is no other method except the matrix methods (MM) proposed by them to approximate the eigenvalues. Table 6 lists the first five eigenvalues of the problem, and it can be seen that the values given by the Magnus method are comparable with the values computed by the matrix methods. Figure 4 shows that the BVM8 method with correction terms performs better than the other methods considered (except Magnus) in that example, and Table 6 clearly shows that the Magnus method's values are closest to BVM8's values for this example.

Example 7. This example illustrates the Magnus method's superiority over some of the other existing methods. The problem describes the free lateral vibration of a uniform clamped-hinged beam (CH) [17]. From [17], Equation (45), the eigenvalues are the solutions of the frequency equation $\tan \beta = \tanh \beta$, where $\beta^4 = \lambda$. As Table 7 and Table 8 point out, the Magnus method has a relative error comparable to the other methods considered (CSCM ([17], Table 4), ADM [16], HAM [18], PDQ and FDQ ([19], Table 1), CM ([20], Tables 1 and 2), CDM ([22], Table 2), VIM ([23], Table 2), and SPPS [21]) with respect to the first five eigenvalues. More examples of the Magnus method for the FSLP can be found in Mirzaei [25].
Example 8 (SSLP). This example is due to Greenberg and Marletta [29] (Problem (1)), with known exact eigenvalues $\lambda_k$, $k = 1, 2, \ldots$. Again, as shown in Table 9, the Magnus method shows excellent accuracy for this sixth-order SLP compared to the other methods, namely the shooting method ([29], Table 1), ADM ([30], Table 2), CP ([31], Table 2), and VIM ([23], Table 3). To the authors' best knowledge, there exists no algorithm for solving eighth-order SLPs in the literature. However, the proposed Magnus method can be successfully applied with literally no modification to the algorithm; only the matrix $G$ has to be modified using Equation (36).
Example 9 (Eighth-order Sturm–Liouville problem). Squaring the fourth-order SLP with known exact eigenvalues, the following eighth-order SLP is obtained, whose exact eigenvalues are the squares of the former. The first five eigenvalues have good accuracy, although the error steadily increases with the index of the eigenvalue (see Table 10). This is because the error also depends on the magnitude of the eigenvalue: the error in a $p$th-order method grows with $\lambda$, and thus one expects poor approximations for large eigenvalues [38]. It is clear that the Magnus method has the potential to find the first eigenvalues of an SLP of arbitrary (even) order with good accuracy.
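The squaring device used here can be stated generally (a generic sketch, not the specific problem data of Example 9): if $\mathcal{L}$ is the fourth-order Sturm–Liouville operator and $y$ an eigenfunction, then

$$\mathcal{L} y = \lambda y \;\Longrightarrow\; \mathcal{L}^2 y = \mathcal{L}(\mathcal{L} y) = \lambda\, \mathcal{L} y = \lambda^2 y,$$

so every eigenpair $(\lambda, y)$ of the fourth-order problem yields the eigenpair $(\lambda^2, y)$ of the eighth-order problem, provided the boundary conditions are augmented with those satisfied by $\mathcal{L} y$.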
In addition, it is observed that, for EVPs with constant coefficients (hence, a constant and mostly sparse $G$ matrix), the errors are smaller irrespective of the order (order two: Example 1; order four: Examples 5 and 7; order six: Example 8; and order eight: Example 9). However, when $G$ is no longer constant, the error is not as good even for the SLP of order two (see Example 2). This can be attributed to the matrix exponent calculation step and can be improved by using a more accurate method in place of the expm() function.
5. Discussion and Conclusions
There are several places where approximation errors occur in the Magnus method. For the direct problem, they are (see also Section 2.5):
- E1:
truncation of the Magnus series
- E2:
calculation of multivariate integrals
- E3:
computing the matrix exponent
In the case of the inverse problem (see also Section 4), they are:
- E4:
using a finite number of eigenvalues
- E5:
using approximate eigenvalues
- E6:
errors in the direct problem
- E7:
computing the explicit form of q by an interpolation method
- E8:
approximating derivatives and integrals using numerical methods
Thus, the accumulated error would be high for the direct and inverse SLPs. For the direct SLP, we can address E1 by using a higher-order Magnus method, E2 by using higher-order quadrature methods, and E3 by using a more accurate method from among those proposed by Moler and Van Loan [47]. For the inverse SLP, E4 is no longer a problem, as Aceto et al. [43] argued that the first eigenvalues are the most important for the reconstruction of the unknown potential. E5 and E6 can also be reduced by reducing the errors in the direct problem (E1–E3). E8 can be eliminated by using exact derivatives and integrals when possible. Thus, it turns out that the error in the inverse problem depends mainly on the errors in the direct problem and on E7 (interpolating $q$ from a set of points).
By construction, the Magnus method leads to a global error of order $O(h^p)$ if a $p$th-order Magnus method is applied; yet, it turns out that the error also depends on the magnitude of the eigenvalue. Specifically, the error in a $p$th-order method grows with $\lambda$, and thus one expects poor approximations for larger eigenvalues [38]. The method has the disadvantage that the cost of integration, while increasing very slowly as a function of $\lambda$, would be considerable for a problem arising from a $2m$th-order Sturm–Liouville problem with large $m$ [40].
The Magnus method was tested with finite-interval SLPs of even orders (up to eight), along with regular and one finite singular endpoint BC, and good accuracy with regard to the first few eigenvalues was obtained. One limitation of the Magnus method is that, although it is more accurate and efficient for constant-coefficient BVPs irrespective of the order, when $G$ is no longer constant, the error is larger even for the SLP of order two. This needs to be addressed in future work, as it would in turn affect the performance of the inverse SLP algorithm. It is an open problem whether this method can be extended to solve other types of EVPs, such as problems with different finite singular endpoints, infinite intervals, and multiple eigenvalues.
The present inverse SLP algorithm is less accurate for non-symmetric potentials, and it should also be improved to handle non-normalized and/or non-smooth potentials. In future work, the algorithm for the inverse SLP may be modified to include general finite-interval problems (i.e., problems in the domain $[a, b]$) for different sets of BCs (provided the spectra are interlaced), infinite intervals, and different classes of potential functions, such as non-smooth, discontinuous, and oscillating potentials. In addition, a more suitable interpolation method for reconstructing $q$ may be examined. Furthermore, Barcilon [26] generalized the inverse SLP algorithm presented in [14] to higher-order problems. This can be utilized in combination with the Magnus method so that a direct or inverse SLP of any (even) order can be solved effectively.
In conclusion, the Magnus method is able to deal with general kinds of boundary conditions, since the boundary conditions are presented in matrix form. Here, it was shown that the proposed technique has the ability to solve regular Sturm–Liouville problems of any even order efficiently. By Ledoux et al. [10], a modified Magnus method can be used with Lobatto points to improve the error. Although, in theory, this method can be used for solving higher-order SLPs, for it to be effective in practice, the various errors in the approximation steps should be addressed adequately.