
Superconvergent Nyström and Degenerate Kernel Methods for Integro-Differential Equations

Abdelmonaim Saou, Driss Sbibih, Mohamed Tahrichi and Domingo Barrera

1 Team ANAA, ANO Laboratory, Faculty of Sciences, University Mohammed First, Oujda 60000, Morocco
2 Team ANTO, ANO Laboratory, Faculty of Sciences, University Mohammed First, Oujda 60000, Morocco
3 Department of Applied Mathematics, University of Granada, Campus de Fuentenueva s/n, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(6), 893; https://doi.org/10.3390/math10060893
Submission received: 30 January 2022 / Revised: 14 February 2022 / Accepted: 21 February 2022 / Published: 11 March 2022
(This article belongs to the Special Issue Spline Functions and Applications)

Abstract

The aim of this paper is to carry out an improved analysis of the convergence of the Nyström and degenerate kernel methods, as well as their superconvergent versions, for the numerical solution of a class of linear Fredholm integro-differential equations of the second kind. By using an interpolatory projection at Gauss points onto the space of (discontinuous) piecewise polynomial functions of degree $\le r-1$, we obtain convergence order $2r$ for the degenerate kernel and Nyström methods, while, for the superconvergent and the iterated versions of these methods, the obtained convergence orders are $3r+1$ and $4r$, respectively. Moreover, we show that the optimal convergence order $4r$ is restored at the partition knots for the approximate solutions. The obtained theoretical results are illustrated by some numerical examples.

1. Introduction

Integro-differential equations emerged at the beginning of the twentieth century thanks to the work of Vito Volterra. These equations have proved effective in engineering, mechanics, physics, chemistry, astronomy, biology, economics, potential theory, electrostatics, etc. (see [1,2,3,4] and the references therein).
Many numerical methods have been developed for solving integro-differential equations. Each of them has its inherent advantages and disadvantages, and the search for simpler and more accurate methods is an ongoing process. Among the existing methods in the literature, we cite the Adomian decomposition [5], homotopy analysis [2], Chebyshev and Taylor collocation [6], Taylor series expansion [7,8], integral mean value [9], and decomposition [10] methods. For other methods to solve integro-differential equations, see [11,12,13,14].
Recently, many authors have used spline functions for the numerical solution of integro-differential equations. In particular, a semi-orthogonal spline wavelet approximation method for Fredholm integro-differential equations was proposed in [15]. In [16], the authors used a fast multiscale Galerkin method for solving second-order linear Fredholm integro-differential equations with Dirichlet boundary conditions. In [17], the authors applied the B-spline collocation method for the numerical solution of linear and nonlinear Fredholm and Volterra integro-differential equations, and in [18] an exponential spline method for approximating the solution of Fredholm integro-differential equations was studied. More recently, in [19], Kulkarni introduced an efficient method, called the modified projection method or multi-projection method, to solve Fredholm integral equations of the second kind. Inspired by Kulkarni's method, the authors of [20] introduced superconvergent Nyström and degenerate kernel methods to solve the same type of equations.
This work is concerned with numerical methods to solve a class of linear Fredholm integro-differential equations of the form
$$y'(x) + a(x)\,y(x) = \int_0^1 k(x,t)\,y(t)\,dt + f(x), \quad x \in [0,1], \qquad y(0) = y_0, \tag{1}$$
where $y_0 \in \mathbb{R}$; $a$, $f$, and $k$ are continuous functions; and $y$ is the function to be determined.
The paper is organised as follows. In Section 2, the proposed methods to solve (1) are defined along with relevant notations. In Section 3, error estimates are given and precise convergence orders are obtained. Implementation details on the linear systems are discussed in Section 4. Finally, in Section 5, we provide some numerical results that illustrate the convergence orders of the proposed methods and we give a comparison with other known approaches in the literature.

2. Methods and Notations

Consider the following partition of the interval [ 0 , 1 ]
$$0 = x_0 < x_1 < \cdots < x_n = 1.$$
Let $I_i = [x_{i-1}, x_i]$ and $h_i = x_i - x_{i-1}$, $i = 1, 2, \dots, n$, and let $h = \max_{1 \le i \le n} h_i$ be the maximum step size of the partition. We assume that $h \to 0$ as $n \to \infty$. For $r \ge 1$, we denote by $\mathcal{P}_r$ the space of all polynomials of degree at most $r-1$. Let
$$\mathcal{S}_{r,n} := \left\{ u : [0,1] \to \mathbb{R} \;:\; u|_{I_i} \in \mathcal{P}_r,\ 1 \le i \le n \right\}$$
be the space of piecewise polynomials of degree at most $r-1$, with breakpoints at $x_1, x_2, \dots, x_{n-1}$. No continuity conditions are imposed at the breakpoints. Let $\mathcal{B}_r := \{\tau_1, \dots, \tau_r\}$ be the set of $r$ Gauss points, i.e., the zeros of the Legendre polynomial $p_r(t) = \frac{d^r}{dt^r}\big[(t^2-1)^r\big]$ in $[-1,1]$. Define $f_i : [-1,1] \to [x_{i-1}, x_i]$ as follows:
$$f_i(t) = \frac{1-t}{2}\,x_{i-1} + \frac{1+t}{2}\,x_i, \quad t \in [-1,1].$$
Then
$$\mathcal{A} = \bigcup_{i=1}^{n} f_i(\mathcal{B}_r) = \big\{ \tau_{ij} = f_i(\tau_j) : 1 \le i \le n,\ 1 \le j \le r \big\} =: \{ t_i,\ i = 1, \dots, nr \}$$
is the set of $N_h := nr$ Gauss points in $[0,1]$. Let
i ( x ) : = k = 1 k i r x τ k τ i τ k , i = 1 , 2 , , r , x [ 1 , 1 ] ,
be the Lagrange polynomials of degree r 1 on [ 1 , 1 ] , which satisfy i τ j = δ i j .
Define
φ k p ( x ) : = k f p 1 ( x ) , x x p 1 , x p , 0 , otherwise .
It is easy to verify that φ k p S r , n and φ k p τ i j = δ j k δ i p , i , p = 1 , 2 , , n , j , k = 1 , 2 , , r .
Let
$$\phi_{(p-1)r+k} := \varphi_k^p, \quad k = 1, \dots, r \quad \text{and} \quad p = 1, \dots, n.$$
For a fixed $p$, the family of functions $\{\varphi_k^p : k = 1, 2, \dots, r\}$ forms a (Lagrange) basis for the space of polynomials of degree at most $r-1$ on $[x_{p-1}, x_p]$. Since no continuity conditions are imposed at the breakpoints in the space $\mathcal{S}_{r,n}$, we deduce that the set $\{\varphi_k^p : k = 1, \dots, r,\ p = 1, \dots, n\} = \{\phi_j : j = 1, \dots, nr\}$ forms a basis of this space.
Let $\pi_n : C[0,1] \to \mathcal{S}_{r,n}$ be the interpolatory operator defined by
π n u ( x ) : = i = 1 N h u t i ϕ i ( x ) .
It follows that π n u S r , n ,   π n u t i = u t i ,   i = 1 , 2 , , N h . Then π n u u as n for each u C [ 0 , 1 ] . By using a result in [21], π n can be extended to a projection from L [ 0 , 1 ] to S r , n .
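For readers who wish to experiment, the following minimal sketch (our own illustration, not code from the paper; all names are ours) builds the Gauss nodes $t_i$ on a uniform partition and evaluates $\pi_n u$ by local Lagrange interpolation. For smooth $u$, the printed error behaves like $O(h^r)$, in line with estimate (18) of Section 3.

```python
import numpy as np

def gauss_nodes(breaks, r):
    """Map the r Gauss points tau_j in [-1, 1] onto each subinterval via f_i."""
    tau, _ = np.polynomial.legendre.leggauss(r)       # zeros of the Legendre polynomial
    a, b = breaks[:-1], breaks[1:]
    # f_i(tau_j) = (1 - tau_j)/2 * x_{i-1} + (1 + tau_j)/2 * x_i
    return ((1 - tau)[None, :] * a[:, None] + (1 + tau)[None, :] * b[:, None]) / 2.0

def interpolate(u, breaks, r, x):
    """Evaluate pi_n(u) at the points x by Lagrange interpolation on each subinterval."""
    t = gauss_nodes(breaks, r)                        # shape (n, r)
    idx = np.clip(np.searchsorted(breaks, x, side="right") - 1, 0, len(breaks) - 2)
    out = np.empty_like(x)
    for i in np.unique(idx):
        sel = idx == i
        L = np.ones((sel.sum(), r))
        for j in range(r):                            # Lagrange basis at the r local nodes
            for k in range(r):
                if k != j:
                    L[:, j] *= (x[sel] - t[i, k]) / (t[i, j] - t[i, k])
        out[sel] = L @ u(t[i])
    return out

breaks = np.linspace(0.0, 1.0, 9)                     # n = 8 subintervals, h = 1/8
x = np.linspace(0.0, 1.0, 1001)
err = np.abs(interpolate(np.exp, breaks, 3, x) - np.exp(x)).max()
print(f"||(I - pi_n)u||_inf ~ {err:.2e} for u = exp, r = 3, n = 8")
```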
Equation (1) can be written as
$$y'(x) + a(x)\,y(x) = \mathcal{K}y(x) + f(x), \quad x \in [0,1], \qquad y(0) = y_0, \tag{4}$$
where $\mathcal{K}$ is the integral operator defined by
K ( u ) ( s ) : = 0 1 k ( s , t ) u ( t ) d t .
Under the regularity assumptions on $a$, $f$, and $k$, it is well known (see, e.g., [22]) that the initial value problem (4) has a unique solution $y$ that satisfies the integral equation
$$y(x) = y_0\, e^{A(0) - A(x)} + \int_0^x \big( \mathcal{K}y(t) + f(t) \big)\, e^{A(t) - A(x)}\, dt, \tag{6}$$
where $A$ is a primitive function of $a$.
We consider the following Volterra operator
V u ( x ) : = 0 x u ( t ) e A ( t ) A ( x ) d t ,
and we define
g ( x ) : = y 0 e A ( 0 ) A ( x ) + V f ( x ) .
Then, Equation (6) becomes
$$y - \mathcal{V}\mathcal{K}y = g. \tag{8}$$
In this paper, we propose to solve this equation by the following four methods, based on the projection $\pi_n$ given in (3).
(1) Degenerate kernel method, where the operator $\mathcal{K}$ is approximated by the degenerate kernel operator
$$\mathcal{K}_{n,1}u(s) := \int_0^1 k_n(s,t)\,u(t)\,dt,$$
with
k n ( s , t ) : = π n k ( s , · ) = i = 1 N h k s , t i ϕ i ( t ) .
The approximate equation of (8) is then given by
$$y_{n,1} - \mathcal{V}\mathcal{K}_{n,1}\, y_{n,1} = g. \tag{9}$$
(2) Nyström method, where the operator $\mathcal{K}$ is approximated by the Nyström operator based on $\pi_n$ and defined by
$$\mathcal{K}_{n,2}u(s) := \sum_{i=1}^{N_h} w_i\, k(s,t_i)\, u(t_i),$$
with $w_i := \int_0^1 \phi_i(t)\,dt$, $i = 1, 2, \dots, N_h$. The corresponding approximate equation of (8) is then given by
$$y_{n,2} - \mathcal{V}\mathcal{K}_{n,2}\, y_{n,2} = g. \tag{10}$$
(3) Superconvergent degenerate kernel method, where the operator $\mathcal{K}$ is approximated by the finite rank operator
$$\mathcal{K}_{n,1}^S := \pi_n \mathcal{K} + \mathcal{K}_{n,1} - \pi_n \mathcal{K}_{n,1}.$$
The corresponding approximation of (8) becomes
$$y_{n,1}^S - \mathcal{V}\mathcal{K}_{n,1}^S\, y_{n,1}^S = g. \tag{11}$$
Furthermore, we define the iterated solution by
y ˜ n , 1 S : = VK y n , 1 S + g .
(4) Superconvergent Nyström method, where the operator $\mathcal{K}$ is approximated by the finite rank operator
$$\mathcal{K}_{n,2}^S := \pi_n \mathcal{K} + \mathcal{K}_{n,2} - \pi_n \mathcal{K}_{n,2}.$$
The corresponding approximation of (8) becomes
$$y_{n,2}^S - \mathcal{V}\mathcal{K}_{n,2}^S\, y_{n,2}^S = g. \tag{13}$$
Additionally, we define the iterated solution by
y ˜ n , 2 S : = VK y n , 2 S + g .
We show later that, for i = 1 , 2 , the iterated solutions y ˜ n , i S converge to y faster than y n , i S . The reduction of (9)–(11) and (13) to systems of linear equations is presented in Section 4.
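To make the quadrature underlying $\mathcal{K}_{n,2}$ concrete, here is a small sketch (our own illustration, with a sample kernel of our choosing) that applies the Nyström operator on a uniform partition. The weights exploit the fact that $w_i = \int_0^1 \phi_i(t)\,dt$ reduces to a scaled Gauss–Legendre weight on each subinterval; the degenerate kernel operator $\mathcal{K}_{n,1}$ would instead replace $k(s,\cdot)$ by its interpolant at the same nodes.

```python
import numpy as np

def gauss_rule(n, r):
    """Gauss nodes t_i on a uniform partition of [0,1] and weights w_i = int_0^1 phi_i."""
    tau, w = np.polynomial.legendre.leggauss(r)
    h = 1.0 / n
    mids = (np.arange(n) + 0.5) * h
    t = (mids[:, None] + (h / 2.0) * tau[None, :]).ravel()
    return t, np.tile((h / 2.0) * w, n)

k = lambda s, t: np.sin(s + t)                  # sample smooth kernel (our choice)
u = np.cos

s = np.linspace(0.0, 1.0, 201)
# (K u)(s) = int_0^1 sin(s+t) cos(t) dt, available in closed form for this kernel:
exact = 0.25 * (np.cos(s) - np.cos(s + 2.0)) + 0.5 * np.sin(s)

for n in (2, 4, 8):
    t, w = gauss_rule(n, 2)                     # r = 2, so the error should be O(h^4)
    approx = k(s[:, None], t[None, :]) @ (w * u(t))
    print(f"n = {n}: max |K u - K_n,2 u| = {np.abs(approx - exact).max():.2e}")
```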

3. Convergence Analysis

In addition to the assumptions about $a$, $f$, and $k$ required previously to ensure the existence and uniqueness of the exact solution of (1), we assume in the subsequent considerations that the operator $I - \mathcal{V}\mathcal{K}$ is invertible with a bounded inverse. It is then easy to verify that, for the above four methods, the operators $I - \mathcal{V}\mathcal{K}_{n,i}$ and $I - \mathcal{V}\mathcal{K}_{n,i}^S$ are invertible for sufficiently large $n$, and we have
$$\big\| (I - \mathcal{V}\mathcal{K}_{n,i})^{-1} \big\| \le L_i < \infty \quad \text{and} \quad \big\| (I - \mathcal{V}\mathcal{K}_{n,i}^S)^{-1} \big\| \le L_i' < \infty,$$
where $L_i$ and $L_i'$ are constants independent of $n$ [20,21].
Hence, for large enough $n$, the approximate equations have unique solutions. Moreover, the following lemma gives some error estimates that are essential in the proof of the convergence orders.
Lemma 1.
For a sufficiently large integer $n$ and for $i = 1, 2$, the following estimates hold:
$$\|y - y_{n,i}\| \le L_i\, \big\| (\mathcal{V}\mathcal{K} - \mathcal{V}\mathcal{K}_{n,i})\, y \big\|, \tag{15}$$
$$\|y - y_{n,i}^S\| \le L_i'\, \big\| \mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,i})\, y \big\|, \tag{16}$$
$$\|y - \tilde{y}_{n,i}^S\| \le C_i \Big( \big\| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,i})\, y \big\| + \big\| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,i}) \big\|\, \big\| y - y_{n,i}^S \big\| \Big), \tag{17}$$
where $L_i$, $L_i'$, and $C_i$ are constants independent of $n$.
Proof. 
The proof proceeds in a similar way to that of Theorem 4 in [20]. □
In the rest of this section, the following estimates are crucial. For $y \in C^r[0,1]$ (see [23], Corollary 7.6, p. 328), it holds that
$$\|(I - \pi_n)\, y\|_\infty \le C_1 h^r\, \|y^{(r)}\|_\infty. \tag{18}$$
For $y \in C^{2r}[0,1]$ and $g \in C^r[0,1]$, we have
$$\left| \int_{x_{i-1}}^{x_i} g(t)\, (I - \pi_n)\, y(t)\, dt \right| \le C_2 h^{2r+1}\, \|g^{(r)}\|_\infty\, \|y^{(2r)}\|_\infty, \quad i = 1, \dots, n, \tag{19}$$
where $C_1$ and $C_2$ are constants independent of $n$.
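The following small experiment (ours, not from the paper) illustrates the superconvergence estimate (19) on a single subinterval: although $(I - \pi_n)y$ itself is only $O(h^r)$, its integral against a smooth $g$ gains an extra factor $h^{r+1}$.

```python
import numpy as np
from scipy.integrate import quad

r = 2
tau, _ = np.polynomial.legendre.leggauss(r)
g, y = np.cos, np.exp

for n in (4, 8, 16):
    h = 1.0 / n
    t = (tau + 1.0) * h / 2.0                   # the r Gauss points in [0, h]
    def integrand(x):
        # g(x) * (y(x) - (pi_n y)(x)) on the first subinterval
        p = sum(y(t[j]) * np.prod([(x - t[m]) / (t[j] - t[m]) for m in range(r) if m != j])
                for j in range(r))
        return g(x) * (y(x) - p)
    val, _ = quad(integrand, 0.0, h)
    print(f"n = {n:2d}: |int_0^h g (I - pi_n) y dt| = {abs(val):.3e}")
# Successive values shrink by roughly 2^(2r+1) = 32, i.e., O(h^{2r+1}).
```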
The following results provide the convergence orders associated with each approximate solution defined above.
Theorem 1.
Let $y_{n,1}$ and $y_{n,2}$ be the approximate solutions defined by (9) and (10), respectively. In the case of the degenerate kernel method, we assume that $k \in C^{r-1,2r}([0,1] \times [0,1])$, $a \in C^{r-1}[0,1]$, and $f \in C^{r-1}[0,1]$, while in the case of the Nyström method, we assume that $k \in C^{2r,2r}([0,1] \times [0,1])$, $a \in C^{2r}[0,1]$, and $f \in C^{2r}[0,1]$. Then
$$\|y - y_{n,i}\|_\infty = O(h^{2r}), \quad i = 1, 2. \tag{20}$$
Proof. 
Let $i = 1$. From (15), we find
$$\|y - y_{n,1}\| \le L_1\, \big\| \mathcal{V}(\mathcal{K} - \mathcal{K}_{n,1})\, y \big\| \le L_1\, \|\mathcal{V}\|\, \big\| (\mathcal{K} - \mathcal{K}_{n,1})\, y \big\|. \tag{21}$$
Moreover, by using (19) we have
$$\big| (\mathcal{K} - \mathcal{K}_{n,1})\, y(x) \big| = \left| \int_0^1 y(t)\, (I - \pi_n)\, k(x,\cdot)(t)\, dt \right| \le C_2 h^{2r}\, \|y^{(r)}\|_\infty\, \left\| \frac{\partial^{2r}}{\partial t^{2r}} k(x,\cdot) \right\|_\infty.$$
By taking the supremum over $x$ in the last inequality and by using (21), estimate (20) follows.
For $i = 2$, the proof is similar. □
Theorem 2.
Let $y_{n,1}^S$ and $y_{n,2}^S$ be the approximate solutions defined by (11) and (13), respectively, and let $\tilde{y}_{n,1}^S$ and $\tilde{y}_{n,2}^S$ be the iterated versions defined by (12) and (14), respectively. For both methods, we assume that $k \in C^{2r,2r}([0,1] \times [0,1])$, $a \in C^{r-1}[0,1]$, and $f \in C^{r-1}[0,1]$. Then, for $i = 1, 2$, we have
$$\|y - y_{n,i}^S\|_\infty = O(h^{3r+1}), \tag{22}$$
$$\|y - \tilde{y}_{n,i}^S\|_\infty = O(h^{4r}). \tag{23}$$
Proof. 
We only consider the case of the superconvergent degenerate kernel method ($i = 1$); for the superconvergent Nyström method ($i = 2$), the proof can be carried out in a similar way. Let $x \in [0,1]$ and let $m$ ($0 \le m \le n-1$) be an integer such that $x \in [x_m, x_{m+1}]$. We have
$$\begin{aligned}
\mathcal{V}\mathcal{K}y(x) - \mathcal{V}\mathcal{K}_{n,1}^S\, y(x) &= \mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y(x) = \int_0^x e^{A(t) - A(x)}\, (I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y(t)\, dt \\
&= \sum_{j=1}^{m} \int_{x_{j-1}}^{x_j} e^{A(t) - A(x)}\, (I - \pi_n)\, G(t)\, dt + \int_{x_m}^{x} e^{A(t) - A(x)}\, (I - \pi_n)\, G(t)\, dt,
\end{aligned} \tag{24}$$
where $G(t) := (\mathcal{K} - \mathcal{K}_{n,1})\, y(t)$.
On one hand, from (19), it follows that
$$\left| \sum_{j=1}^{m} \int_{x_{j-1}}^{x_j} e^{A(t) - A(x)}\, (I - \pi_n)\, G(t)\, dt \right| \le C_2\, C_{r,x}\, h^{2r}\, \|G^{(2r)}\|_\infty, \tag{25}$$
and using (18) yields
$$\left| \int_{x_m}^{x} e^{A(t) - A(x)}\, (I - \pi_n)\, G(t)\, dt \right| \le C_{0,x}\, h\, \|(I - \pi_n)\, G\|_\infty \le C_1\, C_{0,x}\, h^{r+1}\, \|G^{(r)}\|_\infty, \tag{26}$$
where $C_{j,x} := \sup_{t \in [0,1]} \left| \frac{\partial^j}{\partial t^j}\, e^{A(t) - A(x)} \right|$.
On the other hand, for $j = 0, \dots, 2r$ and again using (19), we find
$$\big| G^{(j)}(t) \big| = \left| \int_0^1 y(s)\, (I - \pi_n)\, \frac{\partial^j}{\partial t^j} k(t,\cdot)(s)\, ds \right| \le C_2\, C_{j,t}\, h^{2r}\, \|y^{(r)}\|_\infty, \tag{27}$$
where $C_{j,t} := \sup_{s \in [0,1]} \left| \frac{\partial^{2r}}{\partial s^{2r}} \frac{\partial^j}{\partial t^j} k(t,s) \right|$.
Taking the supremum over $x, t \in [0,1]$ in (25)–(27) and using (24), we deduce the error estimate (22).
Now, we prove (23). From (19), we can show that
$$\begin{aligned}
\big| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y(x) \big| &= \left| \int_0^1 k(x,s)\, \mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y(s)\, ds \right| \\
&= \left| \int_0^1 k(x,s) \int_0^s (I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y(t)\, e^{A(t) - A(s)}\, dt\, ds \right| \\
&= \left| \int_0^1 v_x(t)\, (I - \pi_n)\, G(t)\, dt \right| \le C h^{2r}\, \|v_x^{(r)}\|_\infty\, \|G^{(2r)}\|_\infty,
\end{aligned}$$
where $v_x(t) := \int_t^1 k(x,s)\, e^{A(t) - A(s)}\, ds$. Using (27) for $j = 2r$, we deduce that
$$\big\| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1})\, y \big\|_\infty = O(h^{4r}). \tag{28}$$
Moreover, it is easy to prove that
$$\big\| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1}) \big\| = O(h^r).$$
Then, from (22), it follows that
$$\big\| \mathcal{K}\mathcal{V}(I - \pi_n)(\mathcal{K} - \mathcal{K}_{n,1}) \big\|\, \big\| y - y_{n,1}^S \big\| = O(h^{4r+1}). \tag{29}$$
Now, by combining (17), (28), and (29), we find (23). □
In the following theorem, we give superconvergence results for the approximate solutions $y_{n,1}^S$ and $y_{n,2}^S$ at the partition knots.
Theorem 3.
Let $y_{n,1}^S$ and $y_{n,2}^S$ be the approximate solutions defined by (11) and (13), respectively. Under the same assumptions as in Theorem 2, the following superconvergence orders hold at the partition knots:
$$\big| y(x_j) - y_{n,i}^S(x_j) \big| = O(h^{4r}), \quad j = 1, \dots, n, \quad i = 1, 2. \tag{30}$$
Proof. 
Let $i = 1$. The error function $e_{n,1} := y - y_{n,1}^S$ satisfies the equation
$$e_{n,1}'(x) + a(x)\, e_{n,1}(x) = \mathcal{K}e_{n,1}(x) + \delta_{n,1}(x), \tag{31}$$
where
$$\delta_{n,1}(x) = \big( \mathcal{K} - \mathcal{K}_{n,1}^S \big)\, y_{n,1}^S(x).$$
Under the regularity assumptions on $a$, $f$, and $k$, Equation (31) has a unique solution satisfying the initial condition $e_{n,1}(0) = 0$, which is given by
$$e_{n,1}(x) = \int_0^x r(x,s)\, \delta_{n,1}(s)\, ds,$$
where $r$ is the resolvent kernel (see [22]). Then
$$e_{n,1}(x_j) = \int_0^{x_j} r(x_j,s)\, \delta_{n,1}(s)\, ds = \sum_{\ell=1}^{j} \int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \delta_{n,1}(s)\, ds.$$
Next, for $1 \le \ell \le j$, we have
$$\left| \int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \delta_{n,1}(s)\, ds \right| \le \left| \int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \big( \mathcal{K} - \mathcal{K}_{n,1}^S \big)\, y(s)\, ds \right| + \left| \int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \big( \mathcal{K} - \mathcal{K}_{n,1}^S \big)\big( y - y_{n,1}^S \big)(s)\, ds \right|. \tag{32}$$
Using (19) and the regularity of the resolvent kernel $r(x,s)$, it is easy to show that the first term on the right-hand side of (32) is $O(h^{4r+1})$. For the second term, using (18) and (22), we find
$$\left| \int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \big( \mathcal{K} - \mathcal{K}_{n,1}^S \big)\big( y - y_{n,1}^S \big)(s)\, ds \right| \le h\, \| r(x_j,\cdot) \|_\infty\, \big\| \mathcal{K} - \mathcal{K}_{n,1}^S \big\|\, \big\| y - y_{n,1}^S \big\| = O(h^{4r+2}).$$
We deduce that
$$\int_{x_{\ell-1}}^{x_\ell} r(x_j,s)\, \delta_{n,1}(s)\, ds = O(h^{4r+1}).$$
Hence, summing the $j \le n$ contributions,
$$e_{n,1}(x_j) = O(h^{4r}),$$
which proves (30). For $i = 2$, the proof is similar. □

4. Implementation Details

In this section, we consider the reduction of (9)–(11) and (13) to systems of linear equations. Let $X := L^2[0,1]$, $k_i := k(\cdot, t_i)$, and $\tilde{k}_i := k(t_i, \cdot)$, and let $\langle \cdot, \cdot \rangle$ denote the usual inner product on $X$.
  • Degenerate kernel and Nyström approximate solutions
Theorem 4.
Let $B$ and $\tilde{B}$ be the vectors with components
$$B_i := \langle g, \phi_i \rangle \quad \text{and} \quad \tilde{B}_i := g(t_i). \tag{33}$$
Let $M$ and $\tilde{M}$ be the matrices with entries
$$M_{i,j} := \langle \mathcal{V}k_j, \phi_i \rangle \quad \text{and} \quad \tilde{M}_{i,j} := w_j\, \mathcal{V}k_j(t_i). \tag{34}$$
The approximate solutions $y_{n,1}$ and $y_{n,2}$ of (9) and (10) are given by
$$y_{n,1} = g + \sum_{j=1}^{N_h} X_j\, \mathcal{V}k_j \quad \text{and} \quad y_{n,2} = g + \sum_{j=1}^{N_h} w_j\, Y_j\, \mathcal{V}k_j,$$
where $X := (X_1, \dots, X_{N_h})^T$ and $Y := (Y_1, \dots, Y_{N_h})^T$ are, respectively, the solutions of the linear systems of size $N_h$ given by
$$(I - M)\, X = B \quad \text{and} \quad (I - \tilde{M})\, Y = \tilde{B}.$$
Proof. 
From Equation (9), the approximate solution $y_{n,1}$ can be written as
$$\begin{aligned}
y_{n,1}(x) &= g(x) + \int_0^x \left( \int_0^1 y_{n,1}(s)\, \pi_n k(t,\cdot)(s)\, ds \right) e^{A(t) - A(x)}\, dt \\
&= g(x) + \sum_{j=1}^{N_h} \left( \int_0^1 y_{n,1}(s)\, \phi_j(s)\, ds \right) \int_0^x k(t,t_j)\, e^{A(t) - A(x)}\, dt = g(x) + \sum_{j=1}^{N_h} X_j\, \mathcal{V}k_j(x).
\end{aligned}$$
The coefficients $X_j$, $j = 1, \dots, N_h$, are obtained by replacing $y_{n,1}$ into Equation (9) and by identifying the coefficients of the functions $k_j$, $j = 1, \dots, N_h$, which we suppose to be linearly independent.
More precisely, we find the equations
$$X_i - \sum_{j=1}^{N_h} \left( \int_0^1 \left( \int_0^t k_j(s)\, e^{A(s) - A(t)}\, ds \right) \phi_i(t)\, dt \right) X_j = \int_0^1 \left( y_0\, e^{A(0) - A(t)} + \int_0^t f(s)\, e^{A(s) - A(t)}\, ds \right) \phi_i(t)\, dt, \quad i = 1, \dots, N_h,$$
which are expressed in matrix form as
$$(I - M)\, X = B,$$
where $B$ and $M$ are given by (33) and (34). This completes the proof for $y_{n,1}$.
By the same techniques, the form of $y_{n,2}$ and the corresponding linear system are derived. □
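As an illustration of Theorem 4, the following sketch (our own toy example, not from the paper) assembles and solves the Nyström system $(I - \tilde{M})Y = \tilde{B}$ for a problem with $a \equiv 0$, so that $\mathcal{V}u(x) = \int_0^x u(t)\,dt$, with data $k(x,t) = xt$, $f(x) = 1 - x/3$, and $y_0 = 0$, whose exact solution is $y(x) = x$; here $g$ and $\mathcal{V}k_j$ are available in closed form, which keeps the code short.

```python
import numpy as np

n, r = 4, 2
tau, wg = np.polynomial.legendre.leggauss(r)
h = 1.0 / n
t = (((np.arange(n) + 0.5) * h)[:, None] + (h / 2.0) * tau[None, :]).ravel()
w = np.tile((h / 2.0) * wg, n)                    # w_i = int_0^1 phi_i

g  = lambda x: x - x**2 / 6.0                     # g = y_0 + V f, in closed form here
Vk = lambda x, tj: tj * x**2 / 2.0                # (V k_j)(x) = int_0^x t * t_j dt

M = w[None, :] * Vk(t[:, None], t[None, :])       # M_tilde[i, j] = w_j (V k_j)(t_i)
Y = np.linalg.solve(np.eye(n * r) - M, g(t))      # solve (I - M_tilde) Y = B_tilde

x = np.linspace(0.0, 1.0, 11)
y_n2 = g(x) + (w * Y) @ Vk(x[None, :], t[:, None])  # y_{n,2} = g + sum_j w_j Y_j V k_j
print(np.abs(y_n2 - x).max())   # ~ machine precision: the rule is exact for this data
```

The error is at machine-precision level because, for this polynomial kernel, the $r = 2$ Gauss rule integrates the integrand exactly; for general smooth data the error behaves like $O(h^{2r})$, as stated in Theorem 1.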
  • Superconvergent degenerate kernel and Nyström approximate solutions
Theorem 5.
Let $B$ and $\tilde{B}$ be vectors with components
$$B_i := \langle \tilde{k}_i, g \rangle - \sum_{\ell=1}^{N_h} \langle \phi_\ell, g \rangle\, k_\ell(t_i) \quad \text{and} \quad \tilde{B}_i := \langle g, \phi_i \rangle, \tag{36}$$
and let $F$, $\tilde{F}$, $G$, and $\tilde{G}$ be matrices with entries
$$F_{i,j} := \langle \tilde{k}_i, \mathcal{V}\phi_j \rangle - \sum_{\ell=1}^{N_h} \langle \phi_\ell, \mathcal{V}\phi_j \rangle\, k_\ell(t_i) \quad \text{and} \quad \tilde{F}_{i,j} := \langle \mathcal{V}k_j, \phi_i \rangle, \tag{37}$$
$$G_{i,j} := \langle \tilde{k}_i, \mathcal{V}k_j \rangle - \sum_{\ell=1}^{N_h} \langle \phi_\ell, \mathcal{V}k_j \rangle\, k_\ell(t_i) \quad \text{and} \quad \tilde{G}_{i,j} := \langle \mathcal{V}\phi_j, \phi_i \rangle. \tag{38}$$
The approximate solution $y_{n,1}^S$ is given by
$$y_{n,1}^S = g + \sum_{i=1}^{N_h} Z_i\, \mathcal{V}\phi_i + \sum_{j=1}^{N_h} \tilde{Z}_j\, \mathcal{V}k_j,$$
where $(Z\ \tilde{Z})^T$ is the solution of the following linear system of size $2N_h$:
$$\begin{pmatrix} I - F & -G \\ -\tilde{G} & I - \tilde{F} \end{pmatrix} \begin{pmatrix} Z \\ \tilde{Z} \end{pmatrix} = \begin{pmatrix} B \\ \tilde{B} \end{pmatrix}.$$
Proof. 
From (11) and the explicit expression of $\mathcal{K}_{n,1}^S$, it is easy to prove that $y_{n,1}^S$ takes the form
$$\begin{aligned}
y_{n,1}^S(x) &= g(x) + \mathcal{V}\mathcal{K}_{n,1}^S\, y_{n,1}^S(x) = g(x) + \mathcal{V}\big( \pi_n \mathcal{K}\, y_{n,1}^S + \mathcal{K}_{n,1}\, y_{n,1}^S - \pi_n \mathcal{K}_{n,1}\, y_{n,1}^S \big)(x) \\
&= g(x) + \sum_{i=1}^{N_h} Z_i\, \mathcal{V}\phi_i(x) + \sum_{j=1}^{N_h} \tilde{Z}_j\, \mathcal{V}k_j(x),
\end{aligned} \tag{40}$$
where the coefficients $Z_j$ and $\tilde{Z}_j$, $j = 1, \dots, N_h$, are obtained by replacing $y_{n,1}^S$ given by (40) into the approximate Equation (11) and by identifying the coefficients of the family of functions $\{\phi_j, k_j\}$, $j = 1, \dots, N_h$, supposed to be linearly independent. More precisely, we find the following equations:
$$\begin{aligned}
Z_i ={} & \sum_{j=1}^{N_h} \left[ \int_0^1\!\!\int_0^t \tilde{k}_i(t)\, \phi_j(s)\, e^{A(s) - A(t)}\, ds\, dt - \sum_{\ell=1}^{N_h} \left( \int_0^1\!\!\int_0^t \phi_j(s)\, \phi_\ell(t)\, e^{A(s) - A(t)}\, ds\, dt \right) k_\ell(t_i) \right] Z_j \\
& + \sum_{j=1}^{N_h} \left[ \int_0^1\!\!\int_0^t \tilde{k}_i(t)\, k_j(s)\, e^{A(s) - A(t)}\, ds\, dt - \sum_{\ell=1}^{N_h} \left( \int_0^1\!\!\int_0^t k_j(s)\, \phi_\ell(t)\, e^{A(s) - A(t)}\, ds\, dt \right) k_\ell(t_i) \right] \tilde{Z}_j \\
& + \int_0^1 \tilde{k}_i(t)\, g(t)\, dt - \sum_{\ell=1}^{N_h} \left( \int_0^1 g(t)\, \phi_\ell(t)\, dt \right) k_\ell(t_i), \quad i = 1, \dots, N_h,
\end{aligned}$$
and
$$\tilde{Z}_i = \sum_{j=1}^{N_h} \left( \int_0^1\!\!\int_0^t k_j(s)\, e^{A(s) - A(t)}\, \phi_i(t)\, ds\, dt \right) \tilde{Z}_j + \sum_{j=1}^{N_h} \left( \int_0^1\!\!\int_0^t \phi_j(s)\, e^{A(s) - A(t)}\, \phi_i(t)\, ds\, dt \right) Z_j + \int_0^1 g(t)\, \phi_i(t)\, dt, \quad i = 1, \dots, N_h.$$
In matrix form,
$$\begin{pmatrix} I - F & -G \\ -\tilde{G} & I - \tilde{F} \end{pmatrix} \begin{pmatrix} Z \\ \tilde{Z} \end{pmatrix} = \begin{pmatrix} B \\ \tilde{B} \end{pmatrix},$$
where $B$, $\tilde{B}$, $F$, $\tilde{F}$, $G$, and $\tilde{G}$ are given by (36)–(38), respectively.
The proof is complete. □
Theorem 6.
Let $F$ and $\tilde{F}$ be the vectors with components
$$F_i := \langle \tilde{k}_i, g \rangle - \sum_{\ell=1}^{N_h} w_\ell\, \tilde{k}_i(t_\ell)\, g(t_\ell) \quad \text{and} \quad \tilde{F}_i := g(t_i),$$
and let $M$, $\tilde{M}$, $H$, and $\tilde{H}$ be the matrices with entries
$$M_{i,j} := \langle \tilde{k}_i, \mathcal{V}\phi_j \rangle - \sum_{\ell=1}^{N_h} w_\ell\, \tilde{k}_i(t_\ell)\, \mathcal{V}\phi_j(t_\ell) \quad \text{and} \quad \tilde{M}_{i,j} := w_j\, \mathcal{V}k_j(t_i),$$
$$H_{i,j} := w_j \left( \langle \tilde{k}_i, \mathcal{V}k_j \rangle - \sum_{\ell=1}^{N_h} w_\ell\, \tilde{k}_i(t_\ell)\, \mathcal{V}k_j(t_\ell) \right) \quad \text{and} \quad \tilde{H}_{i,j} := \mathcal{V}\phi_j(t_i).$$
The approximate solution $y_{n,2}^S$ is given by
$$y_{n,2}^S = g + \sum_{i=1}^{N_h} X_i\, \mathcal{V}\phi_i + \sum_{j=1}^{N_h} w_j\, \tilde{X}_j\, \mathcal{V}k_j,$$
where $(X\ \tilde{X})^T$ is the solution of the following linear system of size $2N_h$:
$$\begin{pmatrix} I - M & -H \\ -\tilde{H} & I - \tilde{M} \end{pmatrix} \begin{pmatrix} X \\ \tilde{X} \end{pmatrix} = \begin{pmatrix} F \\ \tilde{F} \end{pmatrix}.$$
Proof. 
The proof is similar to that of Theorem 5. □
Remark 1.
It should be noted that integrals appear both in setting up the above systems and in evaluating the approximate solutions and their iterated versions. These integrals are evaluated numerically by suitable quadrature rules of high accuracy, so as to imitate exact integration.
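For instance, a composite Gauss–Legendre rule of the following kind (a generic sketch of ours, with parameters chosen arbitrarily) is enough to approximate smooth entries such as $\langle \mathcal{V}k_j, \phi_i \rangle$ to near machine precision.

```python
import numpy as np

def composite_gauss(f, a, b, panels=32, order=10):
    """Approximate int_a^b f(x) dx with a composite Gauss-Legendre rule."""
    tau, w = np.polynomial.legendre.leggauss(order)
    edges = np.linspace(a, b, panels + 1)
    mid = (edges[:-1] + edges[1:]) / 2.0
    half = (edges[1:] - edges[:-1]) / 2.0
    x = mid[:, None] + half[:, None] * tau[None, :]   # mapped nodes, one row per panel
    return float(np.sum(half[:, None] * w[None, :] * f(x)))

print(composite_gauss(np.exp, 0.0, 1.0))   # 1.7182818284590453 ~ e - 1
```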

5. Numerical Results

In this section, we illustrate the accuracy and effectiveness of the theoretical results established in the previous sections for numerically solving Fredholm integro-differential equations. More precisely, we consider four numerical examples of such equations defined on $[0,1]$ and given in the following table.
| | Kernel $k$ | Function $a$ | Function $f$ | Exact solution $y$ |
| Example 1 | $\dfrac{1}{t + \exp(s)}$ | $-1$ | $-\log\dfrac{t+e}{t+1}$ | $\exp(t)$ |
| Example 2 | $\sin(t+s)$ | $\sin(t)$ | $\frac{1}{4}(\cos(t+2) - \cos(t)) - \frac{1}{2}(3\sin(t) - \sin(2t))$ | $\cos(t)$ |
| Example 3 | $ts$ | $-\frac{1}{2}$ | $\sin(x) - x(1 + 2\sin(1))$ | $\cos(x) + \sin(x)$ |
| Example 4 | $\sin(4\pi t + 2\pi s)$ | $-1$ | $-2\pi\sin(2\pi x) - \cos(2\pi x)(1 + \sin(2\pi x))$ | $\cos(2\pi t)$ |
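Such data can be checked symbolically. The following snippet (our own consistency check, not part of the original paper) verifies that the Example 2 entries satisfy Equation (1) with $y(t) = \cos(t)$: the residual of the equation must vanish identically.

```python
import sympy as sp

t, s = sp.symbols("t s")
y = sp.cos(t)
a = sp.sin(t)
k = sp.sin(t + s)
f = (sp.Rational(1, 4) * (sp.cos(t + 2) - sp.cos(t))
     - sp.Rational(1, 2) * (3 * sp.sin(t) - sp.sin(2 * t)))

# residual of y'(t) + a(t) y(t) = int_0^1 k(t,s) y(s) ds + f(t)
residual = sp.diff(y, t) + a * y - sp.integrate(k * sp.cos(s), (s, 0, 1)) - f
print(sp.simplify(residual))   # 0
```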
Firstly, for Examples 1 and 2, we consider the space of piecewise constant functions ($r = 1$) and the space of piecewise linear functions ($r = 2$) defined on the interval $[0,1]$ endowed with the uniform partition
$$0 < \frac{1}{n} < \frac{2}{n} < \cdots < \frac{n-1}{n} < 1. \tag{41}$$
For different values of $n$ and for $i = 1, 2$, we compute the maximum absolute errors
$$E_{i,\infty} := \|y - y_{n,i}\|_\infty, \qquad E_{i,\infty}^S := \|y - y_{n,i}^S\|_\infty,$$
$$\tilde{E}_{i,\infty}^S := \|y - \tilde{y}_{n,i}^S\|_\infty, \qquad E_i^S := \max_j \big| y(x_j) - y_{n,i}^S(x_j) \big|.$$
Moreover, we present the corresponding numerical convergence orders, denoted NCO, obtained as the base-2 logarithm of the ratio between two consecutive errors. The obtained results are illustrated in the following tables.
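Concretely, since $n$ doubles from one row to the next, each NCO entry is computed as follows (a one-line helper of our own):

```python
import math

nco = lambda e_n, e_2n: math.log2(e_n / e_2n)
print(f"{nco(2.37e-2, 5.82e-3):.2f}")   # ~ 2.03 for the first column of Table 1
                                        # (the table reports 2.02, from unrounded errors)
```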
Table 1, Table 2, Table 3 and Table 4 show that the superconvergent Nyström and degenerate kernel methods are more accurate than the Nyström and degenerate kernel methods, and the computed NCOs match well with the expected values.
Next, in order to give a comparison, we illustrate in Table 5 and Table 6 the pointwise errors provided by the application of the superconvergent Nyström and degenerate kernel methods, together with other known errors obtained in [24,25]. In particular, for $i = 1, 2$, we denote by
$$E_{i,j} = \big| y(x_j) - y_{n,i}^S(x_j) \big|, \quad x_j = j/10, \quad j = 0, \dots, 10,$$
the pointwise errors obtained by our methods for $r = 1$ and $n = 4$, while $E_{Sp,j}$ denotes the errors obtained in [24] by using a cubic spline interpolation, and $E_{Ad,j}$ denotes those obtained in [25] by using Adomian's decomposition with four iterations.
The results in Table 5 and Table 6 show that the errors obtained by our methods are comparable with those given in [24,25]. However, we notice that in [24] cubic spline functions (piecewise polynomials of degree three) are used, and in [25] four iterations were needed to obtain these errors, while in our case piecewise constant polynomials defined on the partition (41) with $n = 4$ were enough to obtain the same accuracy.

6. Conclusions

In this paper, we have developed Nyström and degenerate kernel methods, together with their superconvergent and iterated superconvergent versions, for the numerical solution of linear Fredholm integro-differential equations. We have proved that these methods exhibit high convergence orders. Finally, such methods turn out to be very effective, with a low computational cost, and comparable with other methods known in the literature.

Author Contributions

Conceptualization, D.S., M.T. and D.B.; methodology, M.T.; software, A.S.; validation, D.S. and D.B.; formal analysis, M.T. and D.B.; investigation, A.S. and M.T.; resources, D.S. and D.B.; data curation, A.S.; writing—original draft preparation, A.S.; writing—review and editing, A.S. and M.T.; visualization, D.S.; supervision, D.B. and D.S.; funding acquisition, D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the University of Granada.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Akyüz, A.; Sezer, M. A Taylor polynomial approach for solving high-order linear Fredholm integro-differential equations in the most general form. Int. J. Comput. Math. 2007, 84, 527–539.
2. Jaradat, H.; Alsayyed, O.; Al-Shara, S. Numerical solution of linear integro-differential equations. J. Math. Stat. 2008, 4, 250–254.
3. Nas, S.; Yalçinba, S.; Sezer, M. A Taylor polynomial approach for solving high-order linear Fredholm integro-differential equations. Int. J. Math. Educ. Sci. Technol. 2000, 31, 213–225.
4. Saadatmandia, A.; Dehghan, M. Numerical solution of the higher-order linear Fredholm integro-differential-difference equation with variable coefficients. Appl. Math. Comput. 2010, 59, 2996–3004.
5. Hashim, I. Adomian decomposition method for solving BVPs for fourth-order integro-differential equations. J. Comput. Appl. Math. 2006, 193, 658–664.
6. Yalçinbas, S.; Sezer, M. A Taylor collocation method for the approximate solution of general linear Fredholm–Volterra integro-difference equations with mixed argument. Appl. Math. Comput. 2006, 175, 675–690.
7. Darania, P.; Ebadian, A. A method for the numerical solution of the integro-differential equations. Appl. Math. Comput. 2007, 188, 657–668.
8. Kurta, N.; Sezer, M. Polynomial solution of high-order linear Fredholm integro-differential equations with constant coefficients. J. Frankl. Inst. 2008, 345, 839–850.
9. Avazzadeh, Z.; Heydari, M.; Loghmani, G. Numerical solution of Fredholm integral equations of the second kind by using integral mean value theorem. Appl. Math. Comput. 2011, 35, 2374–2383.
10. El-Sayed, S.M.; Kaya, D.; Zarea, S. The decomposition method applied to solve high-order linear Volterra–Fredholm integro-differential equations. Int. J. Nonlinear Sci. Numer. Simul. 2004, 5, 105–112.
11. El-Sayed, A.M.A.; Omar, Y.M.Y. On the Weak Solutions of a Delay Composite Functional Integral Equation of Volterra-Stieltjes Type in Reflexive Banach Space. Mathematics 2022, 10, 245.
12. Rubbioni, P. Solvability for a Class of Integro-Differential Inclusions Subject to Impulses on the Half-Line. Mathematics 2022, 10, 224.
13. Shokri, A.; Shokri, A.A. The hybrid Obrechkoff BDF methods for the numerical solution of first order initial value problems. Acta Univ. Apulensis 2014, 38, 23–33.
14. Shokri, A. The multistep multiderivative methods for the numerical solution of first order initial value problems. TWMS J. Pure Appl. Math. 2016, 7, 88–97.
15. Lakestani, M.; Razzaghi, M.; Dehghan, M. Semiorthogonal spline wavelets approximation for Fredholm integro-differential equations. Math. Probl. Eng. 2006, 2006, 096184.
16. Chen, J.; He, M.; Huang, Y. A fast multiscale Galerkin method for solving second order linear Fredholm integro-differential equation with Dirichlet boundary conditions. J. Comput. Appl. Math. 2020, 364, 112352.
17. Mahmoodi, Z.; Rashidinia, J.; Babolian, E. B-spline collocation method for linear and nonlinear Fredholm and Volterra integro-differential equations. Appl. Anal. 2013, 92, 1787–1802.
18. Jalilian, R.; Tahernezhad, T. Exponential spline method for approximation solution of Fredholm integro-differential equation. Int. J. Comput. Math. 2020, 97, 791–801.
19. Kulkarni, R.P. A superconvergence result for solutions of compact operator equations. Bull. Aust. Math. Soc. 2003, 68, 517–528.
20. Allouch, C.; Sablonnière, P.; Sbibih, D.; Tahrichi, M. Superconvergence Nyström and degenerate kernel methods for integral equations of the second kind. J. Integral Equ. Appl. 2012, 24, 463–485.
21. Atkinson, K.; Graham, I.; Sloan, I. Piecewise continuous collocation for integral equations. SIAM J. Numer. Anal. 1983, 20, 172–186.
22. Brunner, H. Collocation Methods for Volterra Integral and Related Functional Differential Equations; Cambridge University Press: Cambridge, UK, 2004.
23. Chatelin, F. Spectral Approximation of Linear Operators; Academic Press: New York, NY, USA, 1983.
24. Amirfakhrian, M.; Shakibi, K. Solving integro-differential equation by using B-spline interpolation. Int. J. Math. Model. Comput. 2013, 3, 237–244.
25. Vahidi, A.R.; Babolian, E.; Cordshooli, G.A.; Azimzadeh, Z. Numerical solution of Fredholm integro-differential equation by Adomian's decomposition method. Int. J. Math. Anal. 2009, 3, 1769–1773.
Table 1. Numerical methods based on piecewise constant functions ($r = 1$), applied to Example 1. Here $a(-b)$ stands for $a \times 10^{-b}$.

$i = 1$:

| $n$ | $E_{1,\infty}$ | NCO | $E_{1,\infty}^S$ | NCO | $\tilde{E}_{1,\infty}^S$ | NCO | $E_1^S$ | NCO |
| 2 | 2.37(-02) | | 2.22(-04) | | 1.02(-04) | | 1.62(-04) | |
| 4 | 5.82(-03) | 2.02 | 1.23(-05) | 4.17 | 4.92(-06) | 4.38 | 7.04(-06) | 4.52 |
| 8 | 1.43(-03) | 2.02 | 7.11(-07) | 4.11 | 2.80(-07) | 4.13 | 3.75(-07) | 4.23 |
| 16 | 3.21(-04) | 2.02 | 4.52(-08) | 4.00 | 1.82(-08) | 3.93 | 2.23(-08) | 4.07 |
| Theoretical order | | 2.0 | | 4.0 | | 4.0 | | 4.0 |

$i = 2$:

| $n$ | $E_{2,\infty}$ | NCO | $E_{2,\infty}^S$ | NCO | $\tilde{E}_{2,\infty}^S$ | NCO | $E_2^S$ | NCO |
| 2 | 1.81(-03) | | 2.81(-04) | | 1.20(-04) | | 2.81(-04) | |
| 4 | 4.51(-04) | 2.00 | 2.21(-05) | 3.66 | 9.66(-06) | 3.63 | 2.21(-05) | 3.66 |
| 8 | 1.12(-04) | 2.00 | 1.47(-06) | 3.90 | 6.48(-07) | 3.89 | 1.47(-06) | 3.90 |
| 16 | 2.81(-05) | 2.00 | 9.38(-08) | 3.97 | 4.16(-08) | 3.96 | 9.38(-08) | 3.97 |
| Theoretical order | | 2.0 | | 4.0 | | 4.0 | | 4.0 |
Table 2. Numerical methods based on piecewise constant functions ($r = 1$), applied to Example 2.

$i = 1$:

| $n$ | $E_{1,\infty}$ | NCO | $E_{1,\infty}^S$ | NCO | $\tilde{E}_{1,\infty}^S$ | NCO | $E_1^S$ | NCO |
| 2 | 2.85(-02) | | 1.43(-04) | | 4.55(-05) | | 5.07(-05) | |
| 4 | 7.06(-03) | 2.01 | 1.15(-05) | 3.63 | 3.25(-06) | 3.80 | 2.95(-06) | 4.10 |
| 8 | 1.74(-03) | 2.02 | 7.09(-07) | 4.02 | 2.16(-07) | 3.91 | 1.76(-07) | 4.06 |
| 16 | 3.25(-04) | 2.41 | 4.70(-08) | 3.91 | 1.54(-08) | 3.80 | 1.08(-08) | 4.01 |
| Theoretical order | | 2.0 | | 4.0 | | 4.0 | | 4.0 |

$i = 2$:

| $n$ | $E_{2,\infty}$ | NCO | $E_{2,\infty}^S$ | NCO | $\tilde{E}_{2,\infty}^S$ | NCO | $E_2^S$ | NCO |
| 2 | 4.41(-02) | | 4.98(-04) | | 2.58(-04) | | 4.98(-04) | |
| 4 | 1.11(-02) | 1.98 | 2.96(-05) | 4.07 | 1.53(-05) | 4.07 | 2.96(-05) | 4.07 |
| 8 | 2.78(-03) | 1.99 | 1.83(-06) | 4.01 | 9.44(-07) | 4.01 | 1.83(-06) | 4.01 |
| 16 | 6.69(-04) | 1.99 | 1.14(-07) | 4.00 | 5.89(-08) | 4.00 | 1.14(-07) | 4.00 |
| Theoretical order | | 2.0 | | 4.0 | | 4.0 | | 4.0 |
Table 3. Numerical methods based on piecewise linear functions ($r = 2$), applied to Example 1.

$i = 1$:

| $n$ | $E_{1,\infty}$ | NCO | $E_{1,\infty}^S$ | NCO | $\tilde{E}_{1,\infty}^S$ | NCO | $E_1^S$ | NCO |
| 2 | 7.25(-05) | | 1.60(-07) | | 7.83(-08) | | 1.60(-07) | |
| 4 | 4.51(-06) | 4.00 | 1.40(-09) | 6.83 | 4.68(-10) | 7.38 | 9.51(-10) | 7.40 |
| 8 | 2.82(-07) | 4.00 | 1.28(-11) | 6.77 | 2.07(-12) | 7.82 | 4.27(-12) | 7.79 |
| 16 | 1.76(-08) | 4.00 | 1.01(-13) | 6.97 | 8.02(-15) | 8.01 | 1.70(-14) | 7.97 |
| Theoretical order | | 4.0 | | 7.0 | | 8.0 | | 8.0 |

$i = 2$:

| $n$ | $E_{2,\infty}$ | NCO | $E_{2,\infty}^S$ | NCO | $\tilde{E}_{2,\infty}^S$ | NCO | $E_2^S$ | NCO |
| 2 | 1.94(-06) | | 9.40(-08) | | 4.19(-08) | | 9.40(-08) | |
| 4 | 1.20(-07) | 4.01 | 6.56(-10) | 7.16 | 3.07(-10) | 7.09 | 6.56(-10) | 7.16 |
| 8 | 7.50(-09) | 4.00 | 6.52(-12) | 6.65 | 1.43(-12) | 7.73 | 3.07(-12) | 7.73 |
| 16 | 4.68(-10) | 4.00 | 3.68(-14) | 7.46 | 4.88(-15) | 8.20 | 1.24(-14) | 7.95 |
| Theoretical order | | 4.0 | | 7.0 | | 8.0 | | 8.0 |
Table 4. Numerical methods based on piecewise linear functions ($r = 2$), applied to Example 2.

$i = 1$:

| $n$ | $E_{1,\infty}$ | NCO | $E_{1,\infty}^S$ | NCO | $\tilde{E}_{1,\infty}^S$ | NCO | $E_1^S$ | NCO |
| 2 | 1.97(-04) | | 8.96(-08) | | 1.14(-08) | | 1.13(-08) | |
| 4 | 1.21(-05) | 4.02 | 6.36(-10) | 7.13 | 4.29(-11) | 8.05 | 4.90(-11) | 7.85 |
| 8 | 7.52(-07) | 4.00 | 4.97(-12) | 6.99 | 1.57(-13) | 8.09 | 2.00(-13) | 7.93 |
| 16 | 4.69(-08) | 4.00 | 3.78(-14) | 7.03 | 6.02(-16) | 8.03 | 7.91(-16) | 7.98 |
| Theoretical order | | 4.0 | | 7.0 | | 8.0 | | 8.0 |

$i = 2$:

| $n$ | $E_{2,\infty}$ | NCO | $E_{2,\infty}^S$ | NCO | $\tilde{E}_{2,\infty}^S$ | NCO | $E_2^S$ | NCO |
| 2 | 2.55(-04) | | 1.05(-07) | | 1.29(-08) | | 1.42(-08) | |
| 4 | 1.56(-06) | 4.03 | 7.69(-10) | 7.10 | 4.87(-11) | 8.05 | 5.85(-11) | 7.93 |
| 8 | 9.70(-07) | 4.00 | 6.08(-12) | 6.98 | 1.81(-13) | 8.06 | 2.40(-13) | 7.92 |
| 16 | 6.05(-08) | 4.00 | 4.56(-14) | 7.05 | 3.33(-16) | 9.09 | 1.11(-15) | 7.76 |
| Theoretical order | | 4.0 | | 7.0 | | 8.0 | | 8.0 |
Table 5. Comparison with results given in [24], for Example 3. $E_{1,j}$ and $E_{2,j}$ are the errors of the present methods; $E_{Sp,j}$ is the error of the method in [24].

| $x_j$ | $E_{1,j}$ | $E_{2,j}$ | $E_{Sp,j}$ |
| 0 | 0 | 0 | 0 |
| 0.1 | $1.59 \times 10^{-5}$ | $1.71 \times 10^{-6}$ | $1.71 \times 10^{-5}$ |
| 0.2 | $1.27 \times 10^{-5}$ | $1.37 \times 10^{-6}$ | $3.27 \times 10^{-5}$ |
| 0.3 | $1.39 \times 10^{-5}$ | $1.50 \times 10^{-6}$ | $3.59 \times 10^{-5}$ |
| 0.4 | $2.12 \times 10^{-5}$ | $2.29 \times 10^{-6}$ | $4.17 \times 10^{-5}$ |
| 0.5 | $8.54 \times 10^{-6}$ | $8.87 \times 10^{-7}$ | $4.94 \times 10^{-5}$ |
| 0.6 | $2.59 \times 10^{-5}$ | $2.64 \times 10^{-6}$ | $5.88 \times 10^{-5}$ |
| 0.7 | $2.41 \times 10^{-5}$ | $2.18 \times 10^{-6}$ | $6.88 \times 10^{-5}$ |
| 0.8 | $2.65 \times 10^{-5}$ | $1.85 \times 10^{-6}$ | $8.49 \times 10^{-5}$ |
| 0.9 | $3.47 \times 10^{-5}$ | $1.59 \times 10^{-6}$ | $8.79 \times 10^{-5}$ |
| 1 | $2.20 \times 10^{-5}$ | $1.84 \times 10^{-6}$ | $1.48 \times 10^{-4}$ |
Table 6. Comparison with results given in [25], for Example 4. $E_{1,j}$ and $E_{2,j}$ are the errors of the present methods; $E_{Ad,j}$ is the error of the method in [25].

| $x_j$ | $E_{1,j}$ | $E_{2,j}$ | $E_{Ad,j}$ |
| 0.1 | $2.37502 \times 10^{-3}$ | $7.18395 \times 10^{-6}$ | $6.77227 \times 10^{-4}$ |
| 0.2 | $3.24853 \times 10^{-3}$ | $1.32702 \times 10^{-5}$ | $3.57926 \times 10^{-4}$ |
| 0.3 | $3.78369 \times 10^{-3}$ | $5.49501 \times 10^{-5}$ | $7.20389 \times 10^{-4}$ |
| 0.4 | $3.64555 \times 10^{-3}$ | $1.14361 \times 10^{-4}$ | $1.65557 \times 10^{-3}$ |
| 0.5 | $1.69840 \times 10^{-3}$ | $2.07833 \times 10^{-4}$ | $2.33402 \times 10^{-3}$ |
| 0.6 | $4.39557 \times 10^{-3}$ | $3.47537 \times 10^{-4}$ | $3.76522 \times 10^{-3}$ |
| 0.7 | $5.61879 \times 10^{-3}$ | $4.93954 \times 10^{-4}$ | $6.78844 \times 10^{-2}$ |
| 0.8 | $6.51049 \times 10^{-3}$ | $6.49503 \times 10^{-4}$ | $1.09211 \times 10^{-2}$ |
| 0.9 | $6.91467 \times 10^{-3}$ | $1.02800 \times 10^{-3}$ | $1.49581 \times 10^{-2}$ |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
