Article

Iterative Numerical Methods for a Fredholm–Hammerstein Integral Equation with Modified Argument

Department of Mathematics, Faculty of Mathematics and Computer Science, Babeş-Bolyai University, 1 M. Kogălniceanu Street, 400084 Cluj-Napoca, Romania
Symmetry 2023, 15(1), 66; https://doi.org/10.3390/sym15010066
Submission received: 28 November 2022 / Revised: 9 December 2022 / Accepted: 22 December 2022 / Published: 26 December 2022
(This article belongs to the Special Issue Symmetry and Approximation Methods II)

Abstract

Iterative processes are a powerful tool for providing numerical methods for integral equations of the second kind. Integral equations with symmetric kernels are extensively used to model problems arising, e.g., in optimization, electronics, and optics. We analyze iterative methods for Fredholm–Hammerstein integral equations with modified argument. The approximation consists of two parts, a fixed point result and a quadrature formula. We derive a method that uses a Picard iterative process and the trapezium numerical integration formula, for which we prove convergence and give error estimates. Numerical experiments show the applicability of the method and the agreement with the theoretical results.

1. Introduction

Integral equations are used in many fields of applied mathematics, physics, engineering, biology, dynamical systems, etc., to model a wide variety of applications. They also appear as reformulations of boundary value, initial value, and bilocal problems for partial differential equations. Integral equations with symmetric kernels occur frequently in optimization, spectral analysis, electronics, and optics.
A wide variety of numerical methods are used to approximate their solutions, such as projection–collocation, Galerkin methods ([1,2,3,4,5,6]), or decomposition methods ([7]). Many methods are based on searching for a solution of a certain form and then finding the values of the solution at a set of nodes by substituting that into the equation. Numerical solutions have been found using block-pulse functions ([8]), wavelets ([9,10]), Taylor series expansions ([11]), Bernoulli polynomials ([2]), and others. In general, this leads to solving a nonlinear system for the coefficients in the unknown function, a system which is often ill-conditioned, thus complicating the procedure.
In contrast, iterative numerical methods approximate the solution at each step, using the previous iteration, without the need to solve a nonlinear system. Iterative methods consist of two parts, one provided by an iterative process and the other by a numerical integration scheme. The first part is usually a fixed point result, such as Banach’s contraction principle [12], Altman’s algorithm [13], the Krasnoselskii [14], Mann [12], or Ishikawa [12] iteration, or various versions of Newton–Kantorovich theorems [15]. The quadrature formulas used in the second part can be interpolatory-based numerical schemes, Newton–Cotes formulas, or Gaussian quadratures. For iterative procedures, the number of unknowns (the approximations of the values of the solution at the nodes) does not increase at each step. They are just computed from the values obtained at the previous step, without significantly increasing the computational cost. Thus, the implementation of such methods is also simplified. Iterative numerical methods have been used successfully to approximate solutions of various integral equations of the second kind ([16,17,18,19,20,21,22]).
The composite error of the method will thus consist of two parts. Improving the speed of convergence of either will result in a faster converging method overall.
In this paper, we derive such a numerical method for Fredholm–Hammerstein integral equations of the second kind with modified argument (FHMA), using Banach’s fixed point theorem and the trapezium quadrature scheme. Nonlinear integral equations with modified argument arise in modeling population dynamics or the spread of a disease in epidemiology ([23]). The Hammerstein-type nonlinearity allows for better numerical methods, simplifying the proof of convergence, the applicability, and the implementation.
The paper is organized as follows: in Section 2, we give an overview of the iterative processes and recall some main results in fixed point theory. Section 3 contains the detailed description of the numerical method. We start with the description of the FHMA integral equation. Then, we use a local fixed point result and give conditions for the existence, uniqueness, and successive approximation of the solution. Next, we derive the conditions of convergence for a general quadrature formula. Finally, we apply the trapezium rule, which satisfies the previously derived condition, without requiring extra assumptions. Numerical examples are given in Section 4, showing the applicability of the method and the agreement of the numerical results with the theoretical findings. In Section 5, we give some concluding remarks and discuss ideas for future research.

2. Preliminaries on Iterative Processes and Integral Equations

We recall some results of iterative processes on a Banach space.
Let X and Y be two Banach spaces, A be a convex subset of X, and $F : A \to Y$ be an operator, in general nonlinear. Many problems across the sciences reduce to solving the operator equation
$F u = 0, \qquad (1)$
where 0 is the zero-vector in the space Y.
A special and important case is when Y = X and $F u = u - T u = 0$. In fact, when Y = X, in most cases, it is possible to write Equation (1) in the form
$T u = u,$
i.e., the problem is reduced to finding a fixed point $u^* \in A$ for the operator T.
Definition 1.
A sequence of the form
$u_{k+1} = T u_k, \quad k = 0, 1, \ldots, \qquad (2)$
starting with a point $u_0 \in A$, is called an iterative process.
Under certain conditions imposed on the operator T, the sequence $\{u_k\}_{k \in \mathbb{N}}$ will converge to the fixed point $u^*$ of the operator T.
We recall some well-known results in fixed point theory.
Definition 2.
Let $(X, \|\cdot\|)$ be a Banach space. A mapping $T : X \to X$ is called a contraction if there exists a constant $0 \le \alpha < 1$, such that
$\|T u - T v\| \le \alpha \|u - v\|,$
for all $u, v \in X$.
Theorem 1
(Banach’s contraction principle). Let $(X, \|\cdot\|)$ be a Banach space and $T : X \to X$ be an $\alpha$-contraction. Then,
(a) the equation $u = T u$ has exactly one solution $u^* \in X$;
(b) the iterative process $u_{k+1} = T u_k$, $k = 0, 1, \ldots$, converges to the solution $u^*$ for any arbitrary initial point $u_0 \in X$;
(c) the error estimate
$\|u_k - u^*\| \le \dfrac{\alpha^k}{1 - \alpha}\, \|T u_0 - u_0\|$
holds for each $k \in \mathbb{N}$.
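For illustration only (this snippet is not part of the original paper), the Picard iteration and the a priori bound of Theorem 1 can be tried on a simple scalar contraction, here $T(u) = \cos u$ on $[0, 1]$, where $\alpha = \sin 1$:

```python
import math

# Picard iteration for the alpha-contraction T(u) = cos u on [0, 1]; the fixed
# point is the Dottie number u* = 0.739085... and alpha = sin 1 ≈ 0.8415.
T = math.cos
alpha = math.sin(1.0)
u0, u_star = 0.5, 0.7390851332151607

u, k = u0, 30
for _ in range(k):
    u = T(u)                                           # u_{k+1} = T(u_k)

bound = alpha**k / (1.0 - alpha) * abs(T(u0) - u0)     # a priori estimate of Theorem 1(c)
print(abs(u - u_star), "<=", bound)                    # the actual error stays below the bound
```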
A stronger (faster converging) fixed point result can be formulated in the following form (see Altman [13]).
Theorem 2
(Altman’s Algorithm). Let $(X, \|\cdot\|)$ be a Banach space and $T : X \to X$ be an $\alpha$-contraction. Let $\{\varepsilon_k\}_{k=0}^{\infty}$ be a sequence of positive numbers, such that $\varepsilon_k \le 1$ and $\sum_{k=0}^{\infty} \varepsilon_k = \infty$.
Then,
(a) the equation $u = T u$ has exactly one solution $u^* \in X$;
(b) the iterative process
$u_{k+1} = (1 - \varepsilon_k) u_k + \varepsilon_k T u_k, \quad k = 0, 1, \ldots,$
converges to $u^*$ for any arbitrary initial point $u_0 \in X$;
(c) the error estimate
$\|u_k - u^*\| \le \dfrac{e^{1-\alpha}}{1 - \alpha}\, \|T u_0 - u_0\|\, e^{-(1-\alpha) v_k}$
holds for each $k \in \mathbb{N}$, where $v_0 = 0$ and $v_k = \sum_{\nu=0}^{k-1} \varepsilon_\nu$, $k \ge 1$.
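A similarly illustrative sketch (again not from the paper) of Altman’s relaxed iteration, with the admissible choice $\varepsilon_k = 1/(k+1)$ (so $\varepsilon_k \le 1$ and $\sum_k \varepsilon_k = \infty$), applied to the same contraction:

```python
import math

# Altman's iteration u_{k+1} = (1 - eps_k) u_k + eps_k T(u_k) for T(u) = cos u.
T = math.cos
u = 0.5
for k in range(200):
    eps = 1.0 / (k + 1)                 # eps_k <= 1 and the series of eps_k diverges
    u = (1.0 - eps) * u + eps * T(u)
print(u)                                # slowly approaches the fixed point u* ≈ 0.739085
```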
There are many results for the convergence of iterative processes, under certain conditions, mostly versions of Kantorovich’s Theorem ([24]). For more details, the reader is referred to [12,14,15] and the references therein.
Remark 1.
The above results remain true if instead of the entire space X, we consider any closed subset $A \subseteq X$, satisfying $T(A) \subseteq A$. For instance, such results are useful if applied on a closed ball $B_\varrho = \{ u \in X : \|u - u_0\| \le \varrho \}$, for a suitable point $u_0 \in X$.
Such iterative processes are very suitable for approximating solutions of integral equations of the second kind
$u = T u, \qquad (3)$
where $T : X \to X$ is a (nonlinear, in general) integral operator. Not only do such results state the conditions for the existence of a unique solution $u^*$ of the integral Equation (3), but they also provide a sequence of approximating iterations $u_k$. So, the first step in developing a numerical method for integral equations of the second kind is using such an iterative process. The second step consists of approximating numerically the integrals involved in the iterative method (2), i.e., finding the approximations $\tilde{u}_{k,n}$ of $u_k$ at a set of nodes $t_\nu$, $\nu = 0, \ldots, n$. Then, the composite error of the numerical method will be given by
$\|\tilde{u}_{k,n} - u^*\| \le \|u_k - u^*\| + \|\tilde{u}_{k,n} - u_k\|.$
The first part depends on the iterative process and its speed, while the second one depends on the degree of precision of the chosen quadrature formula.

3. Iterative Methods

In this paper, we consider a special type of nonlinear Fredholm integral equations with modified argument
$u(t) = \int_0^b K(t, x)\, g\big(x, u(x), u(\lambda x)\big)\, dx + f(t), \quad t \in [0, b], \qquad (4)$
where K, g, and f are continuous functions and $\lambda \in (0, 1)$. Such equations, where the kernel depends on the unknown u through the function g only, are called equations of Hammerstein type, and they can be handled better than equations with a general nonlinear kernel, in the sense that the approximation of their solutions is much simplified. Equations with modified arguments arise in modeling dynamical systems of population growth/decay or the spread of a virus ([23]).

3.1. Existence, Uniqueness, and Successive Approximation of the Solution

Let $X = C[0, b]$ be equipped with the uniform norm $\|u\| = \max_{t \in [0, b]} |u(t)|$ and consider the closed ball $B_\varrho := \{ u \in X : \|u - f\| \le \varrho \}$, for some suitable $\varrho > 0$. Then, $(X, \|\cdot\|)$ is a Banach space, and $B_\varrho$ is a closed subset of X. Define the operator $T : X \to X$ by
$T u(t) = \int_0^b K(t, x)\, g\big(x, u(x), u(\lambda x)\big)\, dx + f(t). \qquad (5)$
Then, finding a solution of the integral Equation (4) is equivalent to finding a fixed point for the operator T:
$u = T u. \qquad (6)$
Theorem 3.
Consider $T : X \to X$ defined by (5), and let $\varrho > 0$, $\varrho_1 := \min_{t \in [0, b]} f(t)$, and $\varrho_2 := \max_{t \in [0, b]} f(t)$. Assume that
(i) there exist constants $L_1, L_2 > 0$, such that
$|g(x, u_1, v_1) - g(x, u_2, v_2)| \le L_1 |u_1 - u_2| + L_2 |v_1 - v_2|, \qquad (7)$
for all $x \in [0, b]$ and all $u_1, u_2, v_1, v_2 \in [\varrho_1 - \varrho,\ \varrho_2 + \varrho]$;
(ii) the constant
$\alpha := b M_K (L_1 + L_2) < 1, \quad \text{where } M_K = \max_{t, x \in [0, b]} |K(t, x)|; \qquad (8)$
(iii) the following condition is satisfied
$b M_K M_g \le \varrho, \qquad (9)$
where $M_g := \max |g(x, u, v)|$ over all $x \in [0, b]$ and all $u, v \in [\varrho_1 - \varrho,\ \varrho_2 + \varrho]$.
Then,
(a) operator T has exactly one fixed point, i.e., the Equation (6) has exactly one solution $u^* \in B_\varrho$;
(b) the iterative process
$u_{k+1} = T u_k, \quad k = 0, 1, \ldots, \qquad (10)$
converges to the solution $u^*$ for any arbitrary initial point $u_0 \in B_\varrho$;
(c) the error estimate
$\|u_k - u^*\| \le \dfrac{\alpha^k}{1 - \alpha}\, \|T u_0 - u_0\|$
holds for each $k \in \mathbb{N}$.
Proof. 
We use the Picard iteration, i.e., Theorem 1.
Let $t \in [0, b]$ be fixed. By (7), we have
$|(T u - T v)(t)| \le \int_0^b |K(t, x)|\, \big| g\big(x, u(x), u(\lambda x)\big) - g\big(x, v(x), v(\lambda x)\big) \big|\, dx \le M_K L_1 \int_0^b |u(x) - v(x)|\, dx + M_K L_2 \int_0^b |u(\lambda x) - v(\lambda x)|\, dx \le b M_K L_1 \|u - v\| + b M_K L_2 \max_{x \in [0, b]} |u(\lambda x) - v(\lambda x)| \le b M_K (L_1 + L_2) \|u - v\|,$
because $\max_{x \in [0, b]} |u(\lambda x) - v(\lambda x)| \le \max_{x \in [0, b]} |u(x) - v(x)|$, since $0 < \lambda < 1$ (so $0 \le \lambda x < b$).
Hence,
$\|T u - T v\| = \max_{t \in [0, b]} |(T u - T v)(t)| \le \alpha \|u - v\|,$
and by (8), it follows that T is an $\alpha$-contraction.
Next, for every $u \in B_\varrho$ and $t \in [0, b]$, we have
$|T u(t) - f(t)| \le \int_0^b |K(t, x)|\, \big| g\big(x, u(x), u(\lambda x)\big) \big|\, dx \le b M_K M_g.$
Hence, by (9), it follows that $T(B_\varrho) \subseteq B_\varrho$. Now, all the conclusions follow from Theorem 1 and Remark 1. □

3.2. Numerical Integration of the Iterates

To numerically approximate the iterates in (10), we consider a numerical integration scheme
$\int_0^b h(x)\, dx = \sum_{i=0}^n a_i\, h(x_i) + R_h, \qquad (11)$
with nodes $0 = x_0 < x_1 < \cdots < x_n = b$, coefficients $a_i \in \mathbb{R}$, $i = 0, 1, \ldots, n$, and for which the remainder satisfies
$|R_h| \le M, \qquad (12)$
for some $M > 0$, with $M \to 0$ as $n \to \infty$.
Let $0 = t_0 < t_1 < \cdots < t_n = b$ be the nodes, and let $u_0 = \tilde{u}_0 \equiv f$ be the initial approximation. Then, we use the iteration (10) and the quadrature Formula (11) to approximate $u_k(t_\nu)$ and $u_k(\lambda t_\nu)$ by $\tilde{u}_{k,n}(t_\nu)$ and $\tilde{u}_{k,n}(\lambda t_\nu)$, respectively, for $\nu = \overline{0, n}$ and $k = 0, 1, \ldots$. For simplicity, we use the following notations:
$A_{\nu,i,k} := K(t_\nu, t_i)\, g\big(t_i, u_k(t_i), u_k(\lambda t_i)\big), \qquad A_{\lambda,\nu,i,k} := K(\lambda t_\nu, t_i)\, g\big(t_i, u_k(t_i), u_k(\lambda t_i)\big),$
$\tilde{A}_{\nu,i,k} := K(t_\nu, t_i)\, g\big(t_i, \tilde{u}_{k,n}(t_i), \tilde{u}_{k,n}(\lambda t_i)\big), \qquad \tilde{A}_{\lambda,\nu,i,k} := K(\lambda t_\nu, t_i)\, g\big(t_i, \tilde{u}_{k,n}(t_i), \tilde{u}_{k,n}(\lambda t_i)\big),$
where
$\tilde{u}_{k+1,n}(t_\nu) := \sum_{i=0}^n a_i\, \tilde{A}_{\nu,i,k} + f(t_\nu), \qquad \tilde{u}_{k+1,n}(\lambda t_\nu) := \sum_{i=0}^n a_i\, \tilde{A}_{\lambda,\nu,i,k} + f(\lambda t_\nu).$
We also use the notations
$\tilde{R}_{k,\nu} := u_k(t_\nu) - \tilde{u}_{k,n}(t_\nu), \qquad \tilde{R}_{\lambda,k,\nu} := u_k(\lambda t_\nu) - \tilde{u}_{k,n}(\lambda t_\nu).$
When applying Formula (11) to the functions $K(t_\nu, x)\, g\big(x, u_k(x), u_k(\lambda x)\big)$ and $K(\lambda t_\nu, x)\, g\big(x, u_k(x), u_k(\lambda x)\big)$, we denote the remainder in (12) by $R_{k,\nu}$ and $R_{\lambda,k,\nu}$, respectively, for $\nu = 0, \ldots, n$. We have:
$u_{k+1}(t_\nu) = \int_0^b K(t_\nu, x)\, g\big(x, u_k(x), u_k(\lambda x)\big)\, dx + f(t_\nu) = \sum_{i=0}^n a_i A_{\nu,i,k} + R_{k,\nu} + f(t_\nu) = \sum_{i=0}^n a_i \tilde{A}_{\nu,i,k} + \sum_{i=0}^n a_i \big( A_{\nu,i,k} - \tilde{A}_{\nu,i,k} \big) + R_{k,\nu} + f(t_\nu) = \sum_{i=0}^n a_i \tilde{A}_{\nu,i,k} + f(t_\nu) + \tilde{R}_{k+1,\nu} = \tilde{u}_{k+1,n}(t_\nu) + \tilde{R}_{k+1,\nu}.$
Similarly, we obtain
$u_{k+1}(\lambda t_\nu) = \sum_{i=0}^n a_i \tilde{A}_{\lambda,\nu,i,k} + f(\lambda t_\nu) + \tilde{R}_{\lambda,k+1,\nu} = \tilde{u}_{k+1,n}(\lambda t_\nu) + \tilde{R}_{\lambda,k+1,\nu}.$
Since we start with $u_0(t_\nu) = \tilde{u}_0(t_\nu) = f(t_\nu)$, there exists $M_n > 0$, such that
$|R_{k,\nu}|,\ |R_{\lambda,k,\nu}| \le M_n,$
where $M_n$ depends on $M_K$, $M_g$, $\varrho_1$, and $\varrho_2$, but not on $\nu$ and k, and $M_n \to 0$ as $n \to \infty$. Then, denoting by
$\mathrm{Err}(u_k, \tilde{u}_{k,n}) := \max_{t_\nu \in [0, b]} \big\{ |\tilde{R}_{k,\nu}|, |\tilde{R}_{\lambda,k,\nu}| \big\}$
and
$\tilde{R}_k := \max_{0 \le \nu \le n} \big\{ |\tilde{R}_{k,\nu}|, |\tilde{R}_{\lambda,k,\nu}| \big\},$
we have
$\mathrm{Err}(u_1, \tilde{u}_{1,n}) \le \tilde{R}_1 \le M_n.$
Let $\gamma = M_K (L_1 + L_2) \sum_{i=0}^n |a_i|$. Then, for every $\nu = 0, \ldots, n$,
$|\tilde{R}_{2,\nu}| = |u_2(t_\nu) - \tilde{u}_{2,n}(t_\nu)| = \Big| \sum_{i=0}^n a_i \big( A_{\nu,i,1} - \tilde{A}_{\nu,i,1} \big) + R_{1,\nu} \Big| \le |R_{1,\nu}| + \sum_{i=0}^n |a_i|\, |K(t_\nu, t_i)|\, \big| g\big(t_i, u_1(t_i), u_1(\lambda t_i)\big) - g\big(t_i, \tilde{u}_1(t_i), \tilde{u}_1(\lambda t_i)\big) \big| \le |R_{1,\nu}| + \sum_{i=0}^n |a_i|\, M_K \big( L_1 |u_1(t_i) - \tilde{u}_1(t_i)| + L_2 |u_1(\lambda t_i) - \tilde{u}_1(\lambda t_i)| \big) \le M_n + \tilde{R}_1\, M_K (L_1 + L_2) \sum_{i=0}^n |a_i| \le M_n + M_n\, M_K (L_1 + L_2) \sum_{i=0}^n |a_i| = M_n (1 + \gamma).$
In exactly the same way, we obtain
$|\tilde{R}_{\lambda,2,\nu}| \le M_n (1 + \gamma).$
Hence,
$\mathrm{Err}(u_2, \tilde{u}_{2,n}) \le M_n (1 + \gamma).$
Inductively, we obtain
$\mathrm{Err}(u_k, \tilde{u}_{k,n}) \le M_n \big( 1 + \gamma + \cdots + \gamma^{k-1} \big).$
We can now state the approximation result.
Theorem 4.
Under the assumptions of Theorem 3, if, in addition,
$\gamma := M_K (L_1 + L_2) \sum_{i=0}^n |a_i| < 1, \qquad (13)$
then the error estimate
$\mathrm{Err}(u_k, \tilde{u}_{k,n}) \le \dfrac{\alpha^k}{1 - \alpha}\, \|u_1 - u_0\| + \dfrac{M_n}{1 - \gamma}$
holds for every $k \in \mathbb{N}$. Thus, as $k, n \to \infty$, $\tilde{u}_{k,n} \to u^*$.
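To make the discretized recursion above concrete, a minimal Python sketch is given below (it is not the author's implementation; the name `discretized_picard` is illustrative, and `K`, `g`, `f` are assumed to be vectorized callables). It computes the nodal values $\tilde{u}_{k,n}(t_\nu)$ and $\tilde{u}_{k,n}(\lambda t_\nu)$ for a generic quadrature rule with nodes $t_i$ and coefficients $a_i$:

```python
import numpy as np

def discretized_picard(K, g, f, lam, nodes, weights, num_iter):
    """Sketch of the discretized Picard iteration of Section 3.2:
      u~_{k+1,n}(t_nu)     = sum_i a_i K(t_nu, t_i)     g(t_i, u~_{k,n}(t_i), u~_{k,n}(lam t_i)) + f(t_nu),
      u~_{k+1,n}(lam t_nu) = sum_i a_i K(lam t_nu, t_i) g(t_i, u~_{k,n}(t_i), u~_{k,n}(lam t_i)) + f(lam t_nu).
    Returns the two arrays of nodal values after num_iter iterations."""
    t = np.asarray(nodes, dtype=float)
    a = np.asarray(weights, dtype=float)
    U = f(t)            # u~_{0,n}(t_nu)     = f(t_nu)
    V = f(lam * t)      # u~_{0,n}(lam t_nu) = f(lam t_nu)
    for _ in range(num_iter):
        G = g(t, U, V)  # values g(t_i, u~_k(t_i), u~_k(lam t_i)) at the quadrature nodes
        U_new = np.array([a @ (K(tv, t) * G) for tv in t]) + f(t)
        V_new = np.array([a @ (K(lam * tv, t) * G) for tv in t]) + f(lam * t)
        U, V = U_new, V_new
    return U, V
```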

3.3. An Iterative Numerical Method Using the Trapezium Rule

Let us consider the trapezium rule:
$\int_0^b h(x)\, dx = \dfrac{b}{2n} \Big[ h(0) + 2 \sum_{j=1}^{n-1} h(x_j) + h(b) \Big] + R_h,$
where the $n + 1$ nodes are $x_j = \dfrac{b}{n}\, j$, $j = \overline{0, n}$, and the remainder is given by
$R_h = -\dfrac{b^3}{12 n^2}\, h''(\eta), \quad \eta \in (0, b).$
We use it to approximate the integrals in (10), as described above, with the initial approximation $u_0 = \tilde{u}_0 \equiv f$. Let us check the conditions in the previous section. The error in the trapezium rule depends on $\dfrac{\partial^2}{\partial x^2} \big[ K(t_\nu, x)\, g\big(x, u_k(x), u_k(\lambda x)\big) \big]$, which, in turn, depends on the second order (partial) derivatives of the functions K, g, and f. So, if we assume that K, g, and f are $C^2$ functions with bounded second order derivatives, then we have
$|R_{k,\nu}|,\ |R_{\lambda,k,\nu}| \le \dfrac{b^3}{12 n^2}\, M_0,$
where $M_0$ depends on the constants b, $L_1$, and $L_2$ and on the functions K, g, and f, but not on k or n. Thus, condition (12) is satisfied, with $M_n = \dfrac{b^3}{12 n^2}\, M_0$.
Now, for the constant γ in (13), in the case of the trapezium quadrature formula, we have
$\gamma = M_K (L_1 + L_2) \sum_{i=0}^n |a_i| = M_K (L_1 + L_2)\, \dfrac{b}{2n} \big[ 1 + 2(n - 1) + 1 \big] = b M_K (L_1 + L_2) = \alpha$
from Equation (8), which is already assumed to be less than 1 from the contraction condition. Thus, in this case, no extra assumptions are necessary for the coefficients of the quadrature formula. Then, by Theorem 4, the error estimate of our approximation is given by
$\mathrm{Err}(u_k, \tilde{u}_{k,n}) \le \dfrac{\alpha^k}{1 - \alpha}\, \|T u_0 - u_0\| + \dfrac{b^3}{12 n^2}\, \dfrac{M_0}{1 - \gamma}. \qquad (14)$
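As an illustration of how the pieces fit together, the trapezium nodes and coefficients of this subsection can be generated as below and passed to the hypothetical `discretized_picard` sketch from Section 3.2; note that the coefficients are positive and sum to b, which is exactly why $\gamma = \alpha$ here. This is a sketch, not the author's code.

```python
import numpy as np

def trapezium_nodes_weights(b, n):
    """Composite trapezium rule on [0, b]: nodes x_j = j*b/n and
    coefficients (b/(2n)) * [1, 2, ..., 2, 1], which are positive and sum to b."""
    x = np.linspace(0.0, b, n + 1)
    w = np.full(n + 1, b / n)
    w[0] = w[-1] = b / (2.0 * n)
    return x, w

# Putting the method together (names are illustrative):
#   t, a = trapezium_nodes_weights(b, n)
#   U, V = discretized_picard(K, g, f, lam, t, a, num_iter=k)
```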

4. Numerical Experiments

Example 1.
Let us consider the Fredholm–Hammerstein integral equation
$u(t) = \dfrac{1}{10} \int_0^\pi u(x)\, u\Big( \dfrac{x}{2} \Big) \sin t\, dx + \dfrac{13}{15} \sin t, \quad t \in [0, \pi]. \qquad (15)$
The exact solution of (15) is $u^*(t) = \sin t$.
We have $\lambda = \dfrac{1}{2}$, and
$K(t, x) = \dfrac{1}{10} \sin t, \qquad g(x, u, v) = u v, \qquad f(t) = \dfrac{13}{15} \sin t.$
Let $\varrho = 1$. We have
$\alpha = \dfrac{\pi}{5} \sin 1 < 1,$
and
$b M_K M_g = \dfrac{\pi}{10} < \varrho.$
Thus, all conditions of Theorem 3 are satisfied.
We used the trapezium rule with $n = 12$ and $n = 24$, with the corresponding nodes $t_\nu = \dfrac{\pi}{n}\, \nu$, $\nu = \overline{0, n}$. Table 1 contains the errors $\mathrm{Err}(u_k, \tilde{u}_{k,n})$, for various values of k, with initial approximation $u_0(t) = f(t)$. With $n = 12$ nodes, the CPU time per iteration was approximately 0.74, while for $n = 24$ nodes, it was approximately 0.89.
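A rough numerical check of Example 1 can be run with the hypothetical sketches from Section 3 (this is not the code used for Table 1); it compares the computed nodal values with the exact solution $u^*(t) = \sin t$ and is only meant as a sanity check, not to reproduce the table's exact figures.

```python
import numpy as np

# Example 1 data; assumes trapezium_nodes_weights and discretized_picard from the
# earlier sketches are in scope.
b, lam = np.pi, 0.5
K = lambda t, x: 0.1 * np.sin(t) * np.ones_like(x)
g = lambda x, u, v: u * v
f = lambda t: (13.0 / 15.0) * np.sin(t)

for n in (12, 24):
    t, a = trapezium_nodes_weights(b, n)
    U, _ = discretized_picard(K, g, f, lam, t, a, num_iter=10)
    print(n, np.max(np.abs(U - np.sin(t))))   # deviation from u*(t) = sin t at the nodes
```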
Example 2.
Next, we consider the Fredholm–Hammerstein integral equation
$u(t) = \dfrac{1}{2c} \int_0^1 (t^2 + 2)\, x\, e^{u(x)}\, u(0.8 x)^2\, dx, \quad t \in [0, 1],$
where $c = \dfrac{e}{25^2} \big( 545 \sqrt{e} - 337 \big)$, whose exact solution is $u^*(t) = \dfrac{1}{2} t^2 + 1$.
Here, $\lambda = 0.8$, and
$K(t, x) = \dfrac{1}{2c}\, x\, (t^2 + 2), \qquad g(x, u, v) = e^{u} v^2, \qquad f(t) \equiv 0.$
Choosing $\varrho = \dfrac{1}{3}$, it follows that
$\alpha = \dfrac{3}{2c}\, e^{1/3} < 1,$
and
$b M_K M_g = \dfrac{1}{18 c}\, e^{1/3} < \varrho;$
so, all theoretical assumptions are satisfied. Again, we took $n = 12$ and $n = 24$ in the trapezium rule, with the corresponding nodes $t_\nu = \dfrac{1}{n}\, \nu$, $\nu = \overline{0, n}$. The errors $\mathrm{Err}(u_k, \tilde{u}_{k,n})$ are shown in Table 2, starting with the initial approximation $u_0(t) \equiv 0$. The CPU times per iteration for $n = 12$ and $n = 24$ nodes were approximately 0.86 and 1.02, respectively.
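Since the displayed equation and the constant c of Example 2 had to be reconstructed from damaged text (with $g(x, u, v) = e^{u} v^2$ and the value of c given above assumed), the following self-contained snippet only verifies numerically that the stated exact solution $u^*(t) = \frac{1}{2} t^2 + 1$ satisfies the equation as reconstructed; it does not reproduce the iteration behind Table 2.

```python
import numpy as np

# Check that u*(t) = t^2/2 + 1 satisfies
#   u(t) = (1/(2c)) * integral_0^1 (t^2 + 2) x e^{u(x)} u(0.8 x)^2 dx,
# with c = e (545 sqrt(e) - 337) / 25^2 as reconstructed above (an assumption).
c = np.e * (545.0 * np.sqrt(np.e) - 337.0) / 25.0**2
u_star = lambda t: 0.5 * t**2 + 1.0

x = np.linspace(0.0, 1.0, 200_001)                 # fine grid for a composite trapezium sum
h = x[1] - x[0]
y = x * np.exp(u_star(x)) * u_star(0.8 * x) ** 2
integral = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

for t in (0.0, 0.5, 1.0):
    print(u_star(t), (t**2 + 2.0) * integral / (2.0 * c))   # the two values should agree
```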

5. Conclusions

In this paper, we discussed iterative numerical methods for a special type of nonlinear Fredholm integral equation with modified argument, of Hammerstein type, where the kernel can be factored into two functions K ( t , x ) and g ( x , u ( x ) , u ( λ x ) ) . Iterative methods consist of two parts, one that approximates the solution globally, by a sequence of iterates, and another that approximates the values of the solution at a set of nodes. For the first part, any iterative process can be used, while the second is determined by a numerical integration scheme.
For the first step, we used the Picard iteration provided by Banach’s contraction principle. For the second step, we gave the conditions of convergence of the numerical approximation. In particular, we used the trapezium rule, for which the assumption needed for convergence coincides with the contraction condition from the first part. The result is a numerical method with order of convergence $O(\alpha^k) + O(n^{-2})$. This method is simple to use and understand, producing good approximating results.
Other iterative processes can be used, such as the Mann, Krasnoselskii, and Ishikawa iterations (to name a few), which may improve the speed of convergence of the first part of the approximation. Moreover, for the second step, more accurate quadrature schemes can increase the convergence rate of the method.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Azizipour, G.; Shahmorad, S. A new Tau-collocation method with fractional basis for solving weakly singular delay Volterra integro-differential equations. J. Appl. Math. Comput. 2022, 68, 2435–2469. [Google Scholar] [CrossRef]
  2. Hafez, R.M.; Doha, E.H.; Bhrawy, A.H.; Băleanu, D. Numerical Solutions of Two-Dimensional Mixed Volterra-Fredholm Integral Equations Via Bernoulli Collocation Method. Rom. J. Phys. 2017, 62, 111. [Google Scholar]
  3. Ordokhani, Y.; Razzaghi, M. Solution of nonlinear Volterra–Fredholm–Hammerstein integral equations via a collocation method and rationalized Haar functions. Appl. Math. Lett. 2008, 21, 4–9. [Google Scholar] [CrossRef] [Green Version]
  4. Talaei, Y.; Micula, S.; Hosseinzadeh, H.; Noeiaghdam, S. A novel algorithm to solve nonlinear fractional quadratic integral equations. AIMS Math. 2022, 7, 13237–13257. [Google Scholar] [CrossRef]
  5. Talaei, Y.; Shahmorad, S.; Mokhtary, P.; Faghih, A. A fractional version of the recursive Tau method for solving a general class of Abel-Volterra integral equations systems. Fract. Calc. Appl. Anal. 2022, 25, 1553–1584. [Google Scholar] [CrossRef]
  6. Talaei, Y.; Shahmorad, S.; Mokhtary, P. A new recursive formulation of the Tau method for solving linear Abel-Volterra integral equations and its application to fractional differential equations. Calcolo 2019, 56, 50. [Google Scholar] [CrossRef]
  7. El-Kalla, I.L.; Abd-Eemonem, R.A.; Gomaa, A.M. Numerical Approach For Solving a Class of Nonlinear Mixed Volterra Fredholm Integral Equations. Electron. J. Math. Anal. Appl. 2016, 4, 1–10. [Google Scholar]
  8. Mashayekhi, S.; Razzaghi, M.; Tripak, O. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials. Scient. World J. 2014, 2014, 1–8. [Google Scholar] [CrossRef] [PubMed]
  9. Aziz, I.; Islam, S. New algorithms for the numerical solution of nonlinear Fredholm and Volterra integral equations using Haar wavelets. J. Comput. Appl. Math. 2013, 239, 333–345. [Google Scholar] [CrossRef]
  10. Micula, S.; Cattani, C. On a numerical method based on wavelets for Fredholm-Hammerstein integral equations of the second kind. Math. Method. Appl. Sci. 2018, 41, 9103–9115. [Google Scholar] [CrossRef]
  11. Neamprem, K.; Klangrak, A.; Kaneko, H. Taylor-Series Expansion Methods for Multivariate Hammerstein Integral Equations. Int. J. Appl. Math. 2017, 47, 10. [Google Scholar]
  12. Berinde, V. Iterative Approximation of Fixed Points; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2007. [Google Scholar]
  13. Altman, M.M. A Stronger Fixed Point Theorem for Contraction Mappings. 1981; preprint. [Google Scholar]
  14. Krasnoselski, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitski, Y.B.; Stetsenko, V.Y. Approximate Solution of Operator Equations; Nauka: Moscow, Russia, 1969. [Google Scholar]
  15. Mastroianni, G.; Milovanović, G.V. Interpolation Processes—Basic Theory and Applications; Springer Monographs in Mathematics; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  16. Bica, A.M.; Ziari, S.; Satmari, Z. An iterative method for solving linear fuzzy fractional integral equation. Soft Comput. 2022, 26, 6051–6062. [Google Scholar] [CrossRef]
  17. Micula, S. A Numerical Method for Weakly Singular Nonlinear Volterra Integral Equations of the Second Kind. Symmetry 2020, 12, 1862. [Google Scholar] [CrossRef]
  18. Micula, S. On some iterative numerical methods for a Volterra functional integral equation of the second kind. J. Fixed Point Theory Appl. 2017, 19, 1815–1824. [Google Scholar] [CrossRef]
  19. Micula, S. A fast converging iterative method for Volterra integral equations of the second kind with delayed arguments. Fixed Point Theory 2015, 16, 371–380. [Google Scholar]
  20. Micula, S. An iterative numerical method for Fredholm-Volterra integral equations of the second kind. Appl. Math. Comput. 2015, 270, 935–942. [Google Scholar] [CrossRef]
  21. Milovanović, G.V. A method to accelerate iterative processes in Banach space. Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. Fiz. 1974, 461–479, 67–71. [Google Scholar]
  22. Tufa, A.R.; Zegeye, H.; Thuto, M. Iterative Solutions of Nonlinear Integral Equations of Hammerstein Type. Int. J. Anal. Appl. 2015, 9, 129–141. [Google Scholar]
  23. Wazwaz, A.M. Linear and Nonlinear Integral Equations, Methods and Applications; Higher Education Press: Beijing, China; Springer: New York, NY, USA, 2011. [Google Scholar]
  24. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Nauka: Moscow, Russia, 1977. [Google Scholar]
Table 1. Errors for Example 1.

k     n = 12             n = 24
1     1.49671 × 10^0     6.33236 × 10^−1
5     1.63293 × 10^−2    1.13281 × 10^−3
10    1.05242 × 10^−3    7.82658 × 10^−5
Table 2. Errors for Example 2.

k     n = 12             n = 24
1     9.75026 × 10^−1    3.47286 × 10^−1
5     3.62893 × 10^−3    2.02376 × 10^−4
10    9.79542 × 10^−5    9.97458 × 10^−6