Article

A New Parameter Choice Strategy for Lavrentiev Regularization Method for Nonlinear Ill-Posed Equations

1 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(18), 3365; https://doi.org/10.3390/math10183365
Submission received: 11 August 2022 / Revised: 31 August 2022 / Accepted: 7 September 2022 / Published: 16 September 2022
(This article belongs to the Special Issue Computational Methods in Analysis and Applications 2023)

Abstract: In this paper, we introduce a new source condition and a new parameter-choice strategy which also yields the best known error estimate. To obtain the results, we use the assumptions employed in earlier studies. Further, we study the proposed parameter-choice strategy and apply it to the method (in the finite-dimensional setting) considered in George and Nair (2017).

1. Introduction

Let $H:D(H)\subseteq U\to U$ be a nonlinear monotone operator, i.e.,
$$\langle H(v)-H(w),\,v-w\rangle\ge 0,\quad\forall v,w\in D(H),$$
defined on the real Hilbert space $U$. Here and below, $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, denote the inner product and the corresponding norm in $U$; $B(u,r)$ and $\overline{B(u,r)}$, respectively, denote the open and closed balls in $U$ with center $u\in U$ and radius $r>0$. We are concerned with the finite-dimensional approximation of the ill-posed equation
$$H(u)=y,\tag{1}$$
which has a solution $\hat u$ for exact data $y$. However, only data $y^{\delta}\in U$ are available, for some $\delta>0$, such that
$$\|y-y^{\delta}\|\le\delta.\tag{2}$$
Due to the ill-posedness of (1), one has to apply a regularization method to obtain an approximation for $\hat u$. For (1) with monotone $H$, the Lavrentiev regularization (LR) method is widely used (see [1,2,3,4,5,6]). In the LR method, the solution $u_\alpha^{\delta}$ of the equation
$$H(u)+\alpha(u-u_0)=y^{\delta}\tag{3}$$
is used as an approximation for $\hat u$. Here (and below), $u_0$ is an initial approximation of $\hat u$ with $\|u_0-\hat u\|\le r_0$ for some $r_0>0$. The solution of (3) with $y$ in place of $y^{\delta}$ is denoted by $u_\alpha$, i.e., (cf. [5])
$$H(u_\alpha)+\alpha(u_\alpha-u_0)=y.\tag{4}$$
Let $u_\alpha^{\delta}$ and $u_\alpha$ be as in Equations (3) and (4), respectively. Then, we have the following inequalities (cf. [5]):
$$\|u_\alpha-\hat u\|^{2}\le\langle u_0-\hat u,\,u_\alpha-\hat u\rangle,\qquad\|u_\alpha^{\delta}-u_\alpha\|\le\frac{\delta}{\alpha},\tag{5}$$
and hence,
$$\|\hat u-u_\alpha^{\delta}\|\le\|\hat u-u_\alpha\|+\frac{\delta}{\alpha}\tag{6}$$
and
$$\|\hat u-u_\alpha\|\le\|\hat u-u_0\|.\tag{7}$$
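The LR equation (3) can be illustrated numerically. The following is a minimal sketch (ours, not the paper's example), using the scalar monotone operator $H(u)=u^3$; since $u\mapsto H(u)+\alpha(u-u_0)$ is strictly increasing for monotone $H$ and $\alpha>0$, the regularized solution is the unique root of a scalar equation and can be found by bisection. All names (`H`, `y_delta`, `alpha`, ...) are ours, chosen for the demo.

```python
# Illustrative sketch: Lavrentiev regularization (3) for the monotone
# scalar operator H(u) = u**3, solved by bisection.

def lavrentiev_solution(H, y_delta, u0, alpha, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve H(u) + alpha*(u - u0) = y_delta by bisection.

    The map u -> H(u) + alpha*(u - u0) is strictly increasing when H is
    monotone and alpha > 0, so the root is unique."""
    g = lambda u: H(u) + alpha * (u - u0) - y_delta
    assert g(lo) < 0 < g(hi), "bracket must enclose the root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

H = lambda u: u ** 3          # monotone: (u**3 - v**3)*(u - v) >= 0
u_hat = 1.0                   # exact solution of H(u) = y with y = 1
delta = 1e-3
y_delta = H(u_hat) + delta    # noisy data with |y - y_delta| = delta
alpha = delta ** 0.5          # a priori choice alpha = delta**(1/(nu+1)), nu = 1
u_alpha = lavrentiev_solution(H, y_delta, u0=0.5, alpha=alpha)
print(abs(u_alpha - u_hat))   # small regularized error
```

Consistent with (6), the error splits into a regularization part and a data-noise part $\delta/\alpha$.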
For proving our results, we assume that either $H'(u)$ is self-adjoint or $H'(u)$ is of positive type, i.e.,
$$\sigma(H'(u))\subseteq[0,\infty)$$
and
$$\|(H'(u)+sI)^{-1}\|\le\frac{c}{s},\quad\forall s>0,\ \text{for some constant } c>0,\ \forall u\in\overline{B(u_0,r)}$$
(see [7]). Here and below, $H'(u)$ is the Fréchet derivative of $H$ at $u$ (if $H'(u)$ is self-adjoint, then $c=1$).
Remark 1.
If $H'(u)$ is of positive type, then
$$\|(H'(u)+sI)^{-1}H'(u)\|=\|I-s(H'(u)+sI)^{-1}\|\le 1+c.$$
Further, as in [8] (Lemma 2.2), one can prove
$$\|(H'(u)+sI)^{-1}H'(u)^{\mu}\|=O(s^{\mu-1}),\quad 0\le\mu<1.$$
So, the results in this paper hold for a positive type operator $H'(u)$ up to a constant. Therefore, for convenience, hereafter we assume $H'(\cdot)$ is self-adjoint.
In earlier studies, such as [4,5,6,9,10], the following source conditions:
$$u_0-\hat u=H'(\hat u)^{\mu_1}z,\quad\|z\|\le\rho,\quad 0<\mu_1\le 1,\tag{8}$$
or
$$u_0-\hat u=H'(u_0)^{\mu_2}z,\quad\|z\|\le\rho,\quad 0<\mu_2\le 1\tag{9}$$
were used to obtain an estimate for $\|\hat u-u_\alpha\|$. In fact, if the source condition (8) is satisfied, then we have [5]
$$\|\hat u-u_\alpha\|=O(\alpha^{\mu_1}),$$
and if (9) is satisfied, then we have [2]
$$\|\hat u-u_\alpha\|=O(\alpha^{\mu_2}).$$
In this study, we introduce a new source condition,
$$u_0-\hat u=A^{\nu}z,\quad\|z\|\le\rho,\quad 0<\nu\le 1,\tag{10}$$
where $\rho>0$ and $A=\int_{0}^{1}H'(\hat u+t(u_0-\hat u))\,dt$. We shall use this source condition (10) to obtain a convergence rate for $\|\hat u-u_\alpha\|$ and to introduce a new parameter-choice strategy.
Remark 2.
(a) Note that in an a posteriori parameter-choice strategy, the regularization parameter $\alpha$ (depending on $\delta$ and $y^{\delta}$) is chosen at the time of computing $u_\alpha^{\delta}$ (see [11]). The new source condition (10) is used to choose the parameter $\alpha$ (depending on $\delta$ and $y^{\delta}$, but independent of $\nu$) before computing $u_\alpha^{\delta}$ (see Section 2), and it also gives the best known convergence order (see Remark 4). This is the innovation of our approach.
(b) Notice that the operators $A$ and $A^{\nu}$ are used only to obtain an estimate for $\|\hat u-u_\alpha\|$. In the actual computation of the approximation $u_{n+1,\alpha}^{h,\delta}$ (see Equation (38)) and of $\alpha$ (see Section 4), we do not require the operator $A$ or $A^{\nu}$.
The following formula ([12], p. 287) for fractional powers of a positive type operator $B$ is used in our analysis:
$$B^{-z}x=\frac{\sin\pi z}{\pi}\int_{0}^{\infty}\tau^{-z}\Big[(B+\tau I)^{-1}x-\Theta(\tau)\big(\tau^{-1}x-\tau^{-2}Bx+\cdots+(-1)^{n-1}\tau^{-n}B^{n-1}x\big)\Big]\,d\tau+\frac{\sin\pi z}{\pi}\Big[\frac{x}{z}-\frac{Bx}{z+1}+\cdots+(-1)^{n-1}\frac{B^{n-1}x}{z+n-1}\Big],\quad x\in U,\tag{11}$$
where
$$\Theta(\varsigma)=\begin{cases}0 & \text{if } 0\le\varsigma\le 1,\\ 1 & \text{if } 1<\varsigma<\infty,\end{cases}$$
and $z$ is a complex number such that $0<\operatorname{Re}z<n$.
Let $z=\nu$, $B=H'(\cdot)$ and $n=1$. Then, we have
$$H'(\cdot)^{-\nu}x=\frac{\sin\pi\nu}{\pi}\Big[\frac{x}{\nu}+\int_{0}^{\infty}\tau^{-\nu}(H'(\cdot)+\tau I)^{-1}x\,d\tau-\int_{1}^{\infty}x\,\tau^{-1-\nu}\,d\tau\Big].$$
Note that if $H'(\cdot)$ is self-adjoint, then $A$ is self-adjoint. Further, suppose $H'(\cdot)$ is of positive type; then we have
$$(A+sI)^{-1}=\Big(\int_{0}^{1}H'(\hat u+t(u_0-\hat u))\,dt+sI\Big)^{-1}=\Big(\int_{0}^{1}\big(H'(\hat u+t(u_0-\hat u))+sI\big)\,dt\Big)^{-1},$$
$$\|(A+sI)^{-1}\|\le\int_{0}^{1}\big\|\big(H'(\hat u+t(u_0-\hat u))+sI\big)^{-1}\big\|\,dt\le\frac{c}{s},$$
i.e., $A$ is of positive type.
Next, we shall prove that (10) implies
$$u_0-\hat u=\begin{cases}H'(u_0)^{\nu_1}\xi_z, & \|\xi_z\|\le\rho_0,\quad 0<\nu_1<\nu<1,\\ H'(u_0)\,\xi_z^{1}, & \|\xi_z^{1}\|\le\rho_1,\quad\nu=1,\end{cases}\tag{12}$$
for some constants $\rho_0$ and $\rho_1$. For this, we use the standard nonlinearity assumptions from the literature (cf. [4,13]).
Assumption 1.
For every $u,v\in\overline{B(u_0,r)}$ and $w\in U$, there exist $k_0>0$ and an element $\Phi(u,v,w)\in U$ with
$$[H'(u)-H'(v)]w=H'(v)\Phi(u,v,w)$$
and
$$\|\Phi(u,v,w)\|\le k_0\|w\|\,\|u-v\|.$$
Suppose (10) holds for $\nu<1$. Then
$$u_0-\hat u=A^{\nu}z=[A^{\nu}-H'(u_0)^{\nu}]z+H'(u_0)^{\nu}z=\frac{\sin\pi\nu}{\pi}\int_{0}^{\infty}\tau^{\nu}(H'(u_0)+\tau I)^{-1}\big(A-H'(u_0)\big)(A+\tau I)^{-1}z\,d\tau+H'(u_0)^{\nu}z,$$
so by the definition of $A$ and Assumption 1, we have
$$\begin{aligned}u_0-\hat u&=[A^{\nu}-H'(u_0)^{\nu}]z+H'(u_0)^{\nu}z\\
&=\frac{\sin\pi\nu}{\pi}\int_{0}^{\infty}\tau^{\nu}(H'(u_0)+\tau I)^{-1}\int_{0}^{1}\big(H'(\hat u+t(u_0-\hat u))-H'(u_0)\big)\,dt\,(A+\tau I)^{-1}z\,d\tau+H'(u_0)^{\nu}z\\
&=\frac{\sin\pi\nu}{\pi}\int_{0}^{\infty}\tau^{\nu}(H'(u_0)+\tau I)^{-1}H'(u_0)\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,(A+\tau I)^{-1}z\big)\,dt\,d\tau+H'(u_0)^{\nu}z\\
&=H'(u_0)\Big[\frac{\sin\pi\nu}{\pi}\int_{0}^{\infty}\tau^{\nu}(H'(u_0)+\tau I)^{-1}\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,(A+\tau I)^{-1}z\big)\,dt\,d\tau\Big]+H'(u_0)^{\nu}z\\
&=H'(u_0)^{\nu_1}\xi_z,\quad\nu_1<\nu,\end{aligned}$$
where
$$\xi_z=H'(u_0)^{1-\nu_1}\Big(\frac{\sin\pi\nu}{\pi}\int_{0}^{\infty}\tau^{\nu}(H'(u_0)+\tau I)^{-1}\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,(A+\tau I)^{-1}z\big)\,dt\,d\tau\Big)+H'(u_0)^{\nu-\nu_1}z.$$
Further, note that
$$\begin{aligned}\|\xi_z\|&\le\frac{1}{\pi}\int_{0}^{\infty}\tau^{\nu}\Big\|H'(u_0)^{1-\nu_1}(H'(u_0)+\tau I)^{-1}\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,(A+\tau I)^{-1}z\big)\,dt\Big\|\,d\tau+\|H'(u_0)^{\nu-\nu_1}\|\,\|z\|\\
&\le\frac{1}{\pi}\Big[\int_{0}^{1}\tau^{\nu}\|H'(u_0)^{1-\nu_1}(H'(u_0)+\tau I)^{-1}\|\,\frac{k_0\|u_0-\hat u\|}{2}\,\|(A+\tau I)^{-1}z\|\,d\tau+\int_{1}^{\infty}\tau^{\nu}\|H'(u_0)^{1-\nu_1}\|\,\|(H'(u_0)+\tau I)^{-1}\|\,\frac{k_0\|u_0-\hat u\|}{2}\,\|(A+\tau I)^{-1}z\|\,d\tau\Big]+\|H'(u_0)^{\nu-\nu_1}\|\,\rho\\
&\le\frac{1}{\pi}\Big[\int_{0}^{1}\tau^{\nu-\nu_1-1}\,d\tau\,\frac{k_0\|u_0-\hat u\|}{2}\,\|z\|+\|H'(u_0)^{1-\nu_1}\|\int_{1}^{\infty}\tau^{\nu-2}\,d\tau\,\frac{k_0\|u_0-\hat u\|}{2}\,\|z\|\Big]+\|H'(u_0)^{\nu-\nu_1}\|\,\rho\\
&\le\frac{1}{\pi}\Big[\frac{1}{\nu-\nu_1}+\frac{\|H'(u_0)^{1-\nu_1}\|}{1-\nu}\Big]\frac{k_0r_0}{2}\,\rho+\|H'(u_0)^{\nu-\nu_1}\|\,\rho=:\rho_0.\end{aligned}$$
Suppose (10) holds for $\nu=1$. Then
$$u_0-\hat u=Az=[A-H'(u_0)+H'(u_0)]z=\Big[\int_{0}^{1}\big(H'(\hat u+t(u_0-\hat u))-H'(u_0)\big)\,dt+H'(u_0)\Big]z=H'(u_0)\Big[\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,z\big)\,dt+z\Big]=H'(u_0)\,\xi_z^{1},$$
where $\xi_z^{1}=\int_{0}^{1}\Phi(\hat u+t(u_0-\hat u),u_0,z)\,dt+z$. Observe that
$$\|\xi_z^{1}\|\le\Big(\frac{k_0\|\hat u-u_0\|}{2}+1\Big)\|z\|\le\Big(\frac{k_0r_0}{2}+1\Big)\rho=:\rho_1.$$
So $u_0-\hat u=Az$ implies $u_0-\hat u=H'(u_0)\xi_z^{1}$ with $\|\xi_z^{1}\|\le\rho_1$, i.e., (10) implies (12). Similarly, one can show that (10) implies
$$u_0-\hat u=\begin{cases}H'(\hat u)^{\nu_1}\xi_z, & \|\xi_z\|\le\rho_2,\quad 0<\nu_1<\nu<1,\\ H'(\hat u)\,\xi_z^{1}, & \|\xi_z^{1}\|\le\rho_1,\quad\nu=1,\end{cases}$$
for some constant $\rho_2$. Throughout the paper, we use the relation (Fundamental Theorem of Integration)
$$H(u)-H(x)=\int_{0}^{1}H'(x+t(u-x))\,dt\,(u-x)$$
for all $x$ and $u$ in a ball contained in $D(H)$.
Remark 3.
In general, it is believed (see [5]) that an a priori parameter-choice strategy is not a good strategy for choosing $\alpha$, since the choice depends on the unknown $\nu$. In this study, we introduce a new parameter-choice strategy which does not depend on the unknown $\nu$ and gives the best known convergence order $O(\delta^{\frac{\nu}{\nu+1}})$.
In some recent papers, the first author and his collaborators considered iterative methods [14,15] for obtaining stable approximate solutions of (3) (see [8,16]). Most of these iterative methods use the Fréchet derivative of the operator involved. In [10], Semenova considered the iterative method defined, for fixed $\alpha,\delta$, by
$$u_{n+1,\alpha}^{\delta}=u_{n,\alpha}^{\delta}-\gamma\big[H(u_{n,\alpha}^{\delta})+\alpha(u_{n,\alpha}^{\delta}-u_0)-y^{\delta}\big].\tag{13}$$
Note that the above iterative method is derivative-free. The convergence analysis in [10] is based on the assumption that $H$ is Lipschitz continuous and that the constant step length $\gamma$ satisfies
$$0<\gamma<\min\Big\{\frac{1}{\alpha},\,\frac{2\alpha}{\alpha^{2}+R^{2}}\Big\},$$
where $R$ is the Lipschitz constant. Contraction mapping arguments are used to prove the convergence in [10].
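The derivative-free iteration (13) is a plain fixed-point loop. A minimal sketch (ours), again for the toy monotone operator $H(u)=u^3$; the values of `gamma`, `alpha` and `y_delta` are illustrative and not taken from [10]:

```python
# Sketch of the derivative-free iteration (13):
# u_{n+1} = u_n - gamma*[H(u_n) + alpha*(u_n - u0) - y_delta].

def fixed_point_lavrentiev(H, y_delta, u0, alpha, gamma, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u - gamma * (H(u) + alpha * (u - u0) - y_delta)
    return u

H = lambda u: u ** 3
alpha, delta = 0.1, 1e-3
y_delta = 1.0 + delta            # data for H(u) = 1, exact solution u = 1
# gamma chosen small enough for the contraction argument to apply
u = fixed_point_lavrentiev(H, y_delta, u0=0.8, alpha=alpha, gamma=0.2, n_steps=200)
residual = H(u) + alpha * (u - 0.8) - y_delta
print(u, residual)               # residual of (3) is driven to (nearly) zero
```

At the fixed point, the residual of (3) vanishes, i.e., the limit is the regularized solution $u_\alpha^{\delta}$.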
In [16], George and Nair considered the method (13), but with a step length $\beta$, independent of the regularization parameter $\alpha$ and of the Lipschitz constant $R$, instead of $\gamma$. The source condition on $u_0-\hat u$ in [16] depends on the known $u_0$, and the analysis in [16] is not based on the contraction mapping arguments of [10].
The purpose of this paper is threefold: (1) to introduce a new source condition, (2) to introduce a new parameter-choice strategy, and (3) to apply the parameter-choice strategy to the (finite-dimensional setting of the) method in [16].
The remainder of the paper is organized as follows. In Section 2, we present the error bounds under the source condition (10) and the new parameter-choice strategy. In Section 3, we present the finite-dimensional realization of method (13). In Section 4, we present the finite-dimensional realization of the new parameter-choice strategy. Section 5 contains a numerical example, and the conclusion is given in Section 6.

2. Error Bounds under (10) and a New Parameter Choice Strategy

First, we obtain an estimate for $\|\hat u-u_\alpha\|$ using (10).
Theorem 1.
Let $\frac{3}{2}k_0r_0<1$ and let Assumption 1 and (10) be satisfied. Then,
$$\|\hat u-u_\alpha\|\le\frac{2+k_0r_0}{2-3k_0r_0}\,\alpha^{\nu}\|z\|.$$
Proof. 
Since $H(\hat u)=y$ and $H(u_\alpha)+\alpha(u_\alpha-u_0)=y$, we have
$$H(u_\alpha)-H(\hat u)+\alpha(u_\alpha-u_0)=0,$$
i.e.,
$$H(u_\alpha)-H(\hat u)+\alpha(u_\alpha-\hat u)=\alpha(u_0-\hat u),$$
or
$$(M_\alpha+\alpha I)(u_\alpha-\hat u)=\alpha(u_0-\hat u),\tag{16}$$
where
$$M_\alpha=\int_{0}^{1}H'(\hat u+t(u_\alpha-\hat u))\,dt.$$
Again, (16) can be written as
$$(A_0+\alpha I)(u_\alpha-\hat u)=(A_0-M_\alpha)(u_\alpha-\hat u)+\alpha(u_0-\hat u),$$
where $A_0=H'(u_0)$. Thus, we have
$$\begin{aligned}u_\alpha-\hat u&=(A_0+\alpha I)^{-1}(A_0-M_\alpha)(u_\alpha-\hat u)+\alpha(A_0+\alpha I)^{-1}(u_0-\hat u)\\
&=-(A_0+\alpha I)^{-1}\int_{0}^{1}\big[H'(\hat u+t(u_\alpha-\hat u))-H'(u_0)\big](u_\alpha-\hat u)\,dt+\alpha(A_0+\alpha I)^{-1}(u_0-\hat u)\\
&=-(A_0+\alpha I)^{-1}A_0\int_{0}^{1}\Phi\big(\hat u+t(u_\alpha-\hat u),u_0,u_\alpha-\hat u\big)\,dt+\alpha(A_0+\alpha I)^{-1}(u_0-\hat u),\end{aligned}$$
and hence
$$\begin{aligned}\|u_\alpha-\hat u\|&\le k_0\Big[\frac{\|u_\alpha-\hat u\|}{2}+\|\hat u-u_0\|\Big]\|u_\alpha-\hat u\|+\alpha\|(A_0+\alpha I)^{-1}(u_0-\hat u)\|\\
&\le\frac{3}{2}k_0r_0\|u_\alpha-\hat u\|+\alpha\|(A+\alpha I)^{-1}(u_0-\hat u)\|+\alpha\big\|\big[(A_0+\alpha I)^{-1}-(A+\alpha I)^{-1}\big](u_0-\hat u)\big\|\quad\text{(by (7))}\\
&\le\frac{3}{2}k_0r_0\|u_\alpha-\hat u\|+\alpha\|(A+\alpha I)^{-1}(u_0-\hat u)\|+\big\|(A_0+\alpha I)^{-1}(A-A_0)\,\alpha(A+\alpha I)^{-1}(u_0-\hat u)\big\|\\
&\le\frac{3}{2}k_0r_0\|u_\alpha-\hat u\|+\alpha\|(A+\alpha I)^{-1}(u_0-\hat u)\|+\Big\|A_0(A_0+\alpha I)^{-1}\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,\alpha(A+\alpha I)^{-1}(u_0-\hat u)\big)\,dt\Big\|\\
&\le\frac{3}{2}k_0r_0\|u_\alpha-\hat u\|+\Big(1+\frac{k_0r_0}{2}\Big)\alpha\|(A+\alpha I)^{-1}(u_0-\hat u)\|,\end{aligned}$$
i.e.,
$$\Big(1-\frac{3}{2}k_0r_0\Big)\|u_\alpha-\hat u\|\le\Big(1+\frac{k_0r_0}{2}\Big)\alpha\|(A+\alpha I)^{-1}(u_0-\hat u)\|,$$
so that
$$\frac{2-3k_0r_0}{2+k_0r_0}\,\|\hat u-u_\alpha\|\le\alpha\|(A+\alpha I)^{-1}A^{\nu}z\|\quad\text{(by (10))}\le\sup_{\lambda\in\sigma(A)}\frac{\alpha\lambda^{\nu}}{\lambda+\alpha}\,\|z\|\le\alpha^{\nu}\|z\|.\tag{17}$$
□
Theorem 2.
Suppose Assumption 1 and (10) hold. Then,
$$\|u_\alpha^{\delta}-\hat u\|\le\max\Big\{1,\,\frac{2+k_0r_0}{2-3k_0r_0}\|z\|\Big\}\Big(\frac{\delta}{\alpha}+\alpha^{\nu}\Big).$$
In particular, if $\alpha=\delta^{\frac{1}{\nu+1}}$, then
$$\|u_\alpha^{\delta}-\hat u\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
Follows from (6) and Theorem 1. □
Remark 4.
Note that the best value of $\frac{\delta}{\alpha}+\alpha^{\nu}$ is attained when $\frac{\delta}{\alpha}=\alpha^{\nu}$, i.e., $\alpha=\delta^{\frac{1}{\nu+1}}$, and in this case the optimal order is $O\big(\delta^{\frac{\nu}{\nu+1}}\big)$. However, this choice of $\alpha$ depends on the unknown $\nu$. In view of this, our aim is to choose $\alpha$ (not depending on $\nu$) so that we still obtain $\|u_\alpha^{\delta}-\hat u\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big)$.
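The balancing argument in Remark 4 is easy to check numerically; a small sketch (ours, with arbitrary illustrative values of $\delta$ and $\nu$):

```python
# Sanity check: the bound delta/alpha + alpha**nu from Theorem 2 is
# minimized, up to a constant factor, at the balancing choice
# alpha = delta**(1/(nu+1)), where both terms equal delta**(nu/(nu+1)).
delta, nu = 1e-4, 0.5

def bound(alpha):
    return delta / alpha + alpha ** nu

alpha_star = delta ** (1.0 / (nu + 1.0))     # balancing choice
# scan a geometric grid of alphas around alpha_star
grid = [alpha_star * (1.1 ** k) for k in range(-40, 41)]
best = min(bound(a) for a in grid)
print(alpha_star, bound(alpha_star), best)
# bound(alpha_star) = 2 * delta**(nu/(nu+1)), i.e. the order O(delta^{nu/(nu+1)})
```

No grid value improves on the balancing choice by more than a modest constant, which is the content of the remark.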

A New Parameter Choice Strategy

For $u\in U$, define
$$\phi(\alpha,u):=\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-u)\|,$$
where $A_0=H'(u_0)$.
Theorem 3.
For each $u\in U$, the function $\alpha\mapsto\phi(\alpha,u)$, $\alpha>0$, is continuous, monotonically increasing, and
$$\lim_{\alpha\to 0}\phi(\alpha,u)=0\quad\text{and}\quad\lim_{\alpha\to\infty}\phi(\alpha,u)=\|H(u_0)-u\|.$$
Proof. 
Note that
$$\phi(\alpha,u)^{2}=\int_{0}^{\|A_0\|}\Big(\frac{\alpha}{\lambda+\alpha}\Big)^{4}\,d\|E_\lambda(H(u_0)-u)\|^{2},$$
where $\{E_\lambda\}$ is the spectral family of $A_0$. Note that, for each $\lambda>0$,
$$\alpha\mapsto\Big(\frac{\alpha}{\lambda+\alpha}\Big)^{4}$$
is strictly increasing and satisfies $\lim_{\alpha\to 0}\frac{\alpha}{\lambda+\alpha}=0$ and $\lim_{\alpha\to\infty}\frac{\alpha}{\lambda+\alpha}=1$. Hence, by the Dominated Convergence Theorem, $\phi(\alpha,u)$ is strictly increasing and continuous, with $\lim_{\alpha\to 0}\phi(\alpha,u)=0$ and $\lim_{\alpha\to\infty}\phi(\alpha,u)=\|H(u_0)-u\|$. □
In addition to (2), we assume that
$$c\,\delta\le\|H(u_0)-y^{\delta}\|\tag{19}$$
for some $c>1$. The following theorem is a consequence of the intermediate value theorem.
Theorem 4.
Let $y^{\delta}$ satisfy (2) and (19). Then,
$$\phi(\alpha,y^{\delta})=c\,\delta\tag{20}$$
has a unique solution $\alpha$.
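Since Theorem 3 gives monotonicity and the limits of $\phi(\alpha,\cdot)$, the root of Theorem 4 can be computed by bisection. A minimal sketch (ours), with $A_0$ taken diagonal so that $\phi$ is explicit; the eigenvalues `lam` and residual components `r` are made-up illustrative data:

```python
# Solve phi(alpha, y_delta) = c*delta (cf. (20)) by bisection, using the
# monotonicity of alpha -> phi(alpha, u) from Theorem 3. A_0 is diagonal,
# so phi(alpha,u) = alpha^2 * ||(A_0 + alpha I)^{-2} r|| is computable.
import math

lam = [0.01, 0.05, 0.3, 1.0]          # spectrum of A_0 (small values: ill-posed)
r = [0.4, 0.3, 0.2, 0.1]              # components of H(u_0) - y_delta

def phi(alpha):
    comps = [(alpha / (l + alpha)) ** 2 * ri for l, ri in zip(lam, r)]
    return math.sqrt(sum(c * c for c in comps))

c, delta = 2.0, 1e-2
target = c * delta                    # requires c*delta <= ||H(u_0)-y_delta||, cf. (19)
lo, hi = 1e-12, 1e6                   # phi(lo) ~ 0 < target < phi(hi) ~ ||r||
for _ in range(200):
    mid = math.sqrt(lo * hi)          # bisect on a logarithmic scale
    lo, hi = (mid, hi) if phi(mid) < target else (lo, mid)
alpha_star = math.sqrt(lo * hi)
print(alpha_star, phi(alpha_star))    # phi(alpha_star) matches c*delta
```

The log-scale bisection is convenient because $\alpha$ can range over many orders of magnitude.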
Next, we shall show that if $\alpha=\alpha(\delta,u_0)$ satisfies (20) and (10) holds, then $\|\hat u-u_\alpha\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big)$. Our proof is based on the following moment inequality for a positive type operator $B$ (see [12], p. 290):
$$\|B^{u}x\|\le\|B^{v}x\|^{\frac{u}{v}}\,\|x\|^{1-\frac{u}{v}},\quad 0\le u\le v.\tag{21}$$
Theorem 5.
Let $\frac{3}{2}k_0r_0<1$ and let Assumption 1 and (10) be satisfied. Let $\alpha=\alpha(\delta,u_0)$ be the solution of (20). Then,
$$\|\hat u-u_\alpha\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
By taking $B=\alpha(A+\alpha I)^{-1}A$ and $x=\alpha^{1-\nu}(A+\alpha I)^{-(1-\nu)}z$ in (17), and then using (21) with $u=\nu$, $v=1+\nu$, we have
$$\begin{aligned}\frac{2-3k_0r_0}{2+k_0r_0}\,\|\hat u-u_\alpha\|&\le\|B^{\nu}x\|\le\|B^{1+\nu}x\|^{\frac{\nu}{1+\nu}}\,\|x\|^{\frac{1}{1+\nu}}\\
&=\|\alpha^{2}(A+\alpha I)^{-2}A^{1+\nu}z\|^{\frac{\nu}{1+\nu}}\,\|z\|^{\frac{1}{1+\nu}}\\
&=\|\alpha^{2}(A+\alpha I)^{-2}A(u_0-\hat u)\|^{\frac{\nu}{1+\nu}}\,\|z\|^{\frac{1}{1+\nu}}\\
&=\|\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y)\|^{\frac{\nu}{1+\nu}}\,\|z\|^{\frac{1}{1+\nu}}\\
&\le\big(\|\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y^{\delta})\|+\|\alpha^{2}(A+\alpha I)^{-2}(y^{\delta}-y)\|\big)^{\frac{\nu}{1+\nu}}\,\|z\|^{\frac{1}{1+\nu}}\\
&\le(B_1+\delta)^{\frac{\nu}{1+\nu}}\,\|z\|^{\frac{1}{1+\nu}},\end{aligned}$$
where $B_1=\|\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y^{\delta})\|$ and we used the inequality
$$\|\alpha^{2}(A+\alpha I)^{-2}(y^{\delta}-y)\|\le\delta.$$
We have
$$\begin{aligned}B_1&=\|\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y^{\delta})\|\\
&=\|\alpha^{2}[(A+\alpha I)^{-2}-(A_0+\alpha I)^{-2}](H(u_0)-y^{\delta})+\alpha^{2}(A_0+\alpha I)^{-2}(H(u_0)-y^{\delta})\|\\
&\le\|\alpha^{2}[(A+\alpha I)^{-2}-(A_0+\alpha I)^{-2}](H(u_0)-y^{\delta})\|+\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-y^{\delta})\|\\
&=:D_1+\phi(\alpha,y^{\delta}),\end{aligned}$$
where $D_1=\|\alpha^{2}[(A+\alpha I)^{-2}-(A_0+\alpha I)^{-2}](H(u_0)-y^{\delta})\|$. Let $w=\alpha^{2}(A_0+\alpha I)^{-2}(H(u_0)-y^{\delta})$. Note that
$$\begin{aligned}D_1&=\|\alpha^{2}[(A+\alpha I)^{-2}-(A_0+\alpha I)^{-2}](H(u_0)-y^{\delta})\|\\
&=\|(A+\alpha I)^{-2}[A_0^{2}-A^{2}+2\alpha(A_0-A)]\,w\|\\
&=\|(A+\alpha I)^{-2}[(A+A_0)+2\alpha I](A_0-A)\,w\|\\
&=\|(A+\alpha I)^{-2}[A_0-A+2A+2\alpha I](A_0-A)\,w\|\\
&\le\|[(A+\alpha I)^{-1}(A_0-A)]^{2}w\|+2\|(A+\alpha I)^{-1}(A_0-A)w\|\\
&\le(\|\Gamma\|^{2}+2\|\Gamma\|)\|w\|=(\|\Gamma\|^{2}+2\|\Gamma\|)\,\phi(\alpha,y^{\delta}),\end{aligned}$$
where $\Gamma=(A+\alpha I)^{-1}(A_0-A)$. By Assumption 1, we obtain
$$\begin{aligned}\|\Gamma x\|&\le\|[(A+\alpha I)^{-1}-(A_0+\alpha I)^{-1}](A_0-A)x\|+\|(A_0+\alpha I)^{-1}(A_0-A)x\|\\
&=\|(A_0+\alpha I)^{-1}[A_0-A](A+\alpha I)^{-1}(A_0-A)x\|+\|(A_0+\alpha I)^{-1}(A_0-A)x\|\\
&\le\Big\|(A_0+\alpha I)^{-1}A_0\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,(A+\alpha I)^{-1}(A_0-A)x\big)\,dt\Big\|+\Big\|(A_0+\alpha I)^{-1}A_0\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,x\big)\,dt\Big\|\\
&\le\frac{k_0r_0}{2}\|\Gamma x\|+\frac{k_0r_0}{2}\|x\|,\end{aligned}$$
i.e.,
$$\Big(1-\frac{k_0r_0}{2}\Big)\|\Gamma x\|\le\frac{k_0r_0}{2}\|x\|,$$
and hence
$$B_1\le\Big[\frac{k_0r_0}{2-k_0r_0}\Big(\frac{k_0r_0}{2-k_0r_0}+2\Big)+1\Big]\phi(\alpha,y^{\delta})=O(\delta).$$
The result now follows from (22)–(26). □
Theorem 6.
Suppose Assumption 1 and (10) hold, and let $\alpha=\alpha(\delta,u_0)$ be chosen as the solution of (20). Then,
$$\frac{\delta}{\alpha}=O\big(\delta^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
By (20), we have
$$c\,\delta=\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-y^{\delta})\|\le\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-y)\|+\alpha^{2}\|(A_0+\alpha I)^{-2}(y-y^{\delta})\|\le\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-y)\|+\delta,$$
so
$$\begin{aligned}(c-1)\delta&\le\alpha^{2}\|[(A_0+\alpha I)^{-2}-(A+\alpha I)^{-2}](H(u_0)-y)\|+\alpha^{2}\|(A+\alpha I)^{-2}(H(u_0)-y)\|\\
&=\big\|(A_0+\alpha I)^{-2}\big[(A+\alpha I)^{2}-(A_0+\alpha I)^{2}\big]\,\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y)\big\|+\alpha^{2}\|(A+\alpha I)^{-2}(H(u_0)-y)\|.\end{aligned}$$
Let $w_1=\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y)$. Then, similarly to (24), we have
$$(c-1)\delta\le(\Gamma_1^{2}+2\Gamma_1+1)\|w_1\|,$$
where $\Gamma_1=\|(A_0+\alpha I)^{-1}(A-A_0)\|$. Note that
$$\|(A_0+\alpha I)^{-1}(A-A_0)x\|=\Big\|(A_0+\alpha I)^{-1}A_0\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,x\big)\,dt\Big\|\le\frac{k_0}{2}\|u_0-\hat u\|\,\|x\|\le\frac{k_0r_0}{2}\|x\|,$$
so
$$\Gamma_1\le\frac{k_0r_0}{2}.$$
Therefore, by (10), (28) and (29), we have
$$(c-1)\delta\le\Big[\Big(\frac{k_0r_0}{2}\Big)^{2}+k_0r_0+1\Big]\|w_1\|=\Big[\Big(\frac{k_0r_0}{2}\Big)^{2}+k_0r_0+1\Big]\alpha^{2}\|(A+\alpha I)^{-2}A(u_0-\hat u)\|=\Big[\Big(\frac{k_0r_0}{2}\Big)^{2}+k_0r_0+1\Big]\alpha^{2}\|(A+\alpha I)^{-2}A^{1+\nu}z\|\le\Big[\Big(\frac{k_0r_0}{2}\Big)^{2}+k_0r_0+1\Big]\alpha^{2}\|(A+\alpha I)^{-1}A^{\nu}z\|\le\Big[\Big(\frac{k_0r_0}{2}\Big)^{2}+k_0r_0+1\Big]\alpha^{1+\nu}\|z\|,$$
or
$$\alpha^{1+\nu}\ge\frac{(c-1)\,\delta}{\big[\big(\frac{k_0r_0}{2}\big)^{2}+k_0r_0+1\big]\|z\|}.$$
Thus,
$$\frac{\delta}{\alpha}=\delta^{\frac{\nu}{\nu+1}}\Big(\frac{\delta}{\alpha^{\nu+1}}\Big)^{\frac{1}{\nu+1}}=O\big(\delta^{\frac{\nu}{\nu+1}}\big).\qquad\square$$
Combining Theorems 5 and 6, we obtain:
Theorem 7.
Let Assumption 1 and (10) be satisfied and let $\alpha=\alpha(\delta,u_0)$ be the solution of (20). Then,
$$\|u_\alpha^{\delta}-\hat u\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big).$$
In [16], the following estimate was given (see [16], Theorem 2.3):
$$\|u_\alpha^{\delta}-u_{n,\alpha}^{\delta}\|\le k\,q_\alpha^{n},\tag{31}$$
where $q_\alpha=1-\beta\alpha$ and $k\le r_0+1$, with $\beta=\frac{1}{\beta_0+a}$ and $\beta_0\ge\|H'(u)\|$ for $u\in\overline{B(u_0,2(r_0+1))}$. Let
$$n_{\alpha,\delta}:=\min\{n\in\mathbb{N}:\ \alpha\,q_\alpha^{n}\le\delta\}.$$
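The stopping index $n_{\alpha,\delta}$ above can be computed either by direct search or in closed form via logarithms; a small sketch (ours, with illustrative parameter values):

```python
# Stopping index n_{alpha,delta} = min{ n : alpha * q**n <= delta },
# q = 1 - beta*alpha, computed by direct search and via logarithms.
import math

def stopping_index(alpha, beta, delta):
    q = 1.0 - beta * alpha                 # contraction factor q_alpha in (0,1)
    n = 0
    while alpha * q ** n > delta:
        n += 1
    return n

alpha, beta, delta = 1e-2, 0.5, 1e-6
n = stopping_index(alpha, beta, delta)
# closed form: smallest integer n with n >= log(delta/alpha)/log(q)
n_formula = math.ceil(math.log(delta / alpha) / math.log(1.0 - beta * alpha))
print(n, n_formula)
```

Both computations agree; the geometric decay of $q_\alpha^{n}$ makes the index grow only logarithmically in $\delta$.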
Theorem 8.
Let Assumption 1 and (10) be satisfied and let $\alpha=\alpha(\delta,u_0)$ be the solution of (20). Then,
$$\|u_{n_{\alpha,\delta},\alpha}^{\delta}-\hat u\|=O\big(\delta^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
Follows from the inequality
$$\|u_{n_{\alpha,\delta},\alpha}^{\delta}-\hat u\|\le\|u_{n_{\alpha,\delta},\alpha}^{\delta}-u_\alpha^{\delta}\|+\|u_\alpha^{\delta}-\hat u\|,$$
Equation (31), and Theorems 6 and 7. □

3. Finite Dimensional Realization of (13)

Consider a family $\{P_h\}_{h>0}$ of orthogonal projections of $U$ onto the range $R(P_h)$ of $P_h$. Let there exist $b_0>0$ such that
$$\|(I-P_h)\hat u\|=:b_h\le b_0,$$
and let
$$r\ge 2\big(2r_0+\max\{\|\hat u\|,1\}+b_h\big)\quad\text{with}\quad r_0:=\|\hat u-u_0\|.$$
We assume that:
(i)
$$\overline{B(P_hu_0,r)}\subseteq D(H);$$
(ii)
there exists $\beta_0>0$ such that
$$\|P_hH'(u)P_h\|\le\beta_0\quad\forall u\in\overline{B(P_hu_0,r)};$$
(iii)
there exists $\varepsilon_0>0$ such that
$$\|H'(u)(I-P_h)\|=:\varepsilon_h(u)\le\varepsilon_h\le\varepsilon_0\quad\forall u\in\overline{B(P_hu_0,r)}.$$
Remark 5.
(a)
Suppose $H'(u)$ is self-adjoint for $u\in\overline{B(P_hu_0,r)}$. Then, $\|H'(u)(I-P_h)\|=\|(I-P_h)H'(u)\|$, and by Assumption 1, we have $H'(u)v=H'(P_hu_0)\big(v+\Phi(u,P_hu_0,v)\big)$. Hence,
$$\|(I-P_h)H'(u)v\|=\|(I-P_h)H'(P_hu_0)\big(v+\Phi(u,P_hu_0,v)\big)\|\le\|(I-P_h)H'(P_hu_0)\|\big[\|v\|+k_0\|u-P_hu_0\|\,\|v\|\big]\le(1+k_0r)\|(I-P_h)H'(P_hu_0)\|\,\|v\|,$$
so $\|H'(u)(I-P_h)\|\le(1+k_0r)\|(I-P_h)H'(P_hu_0)\|$.
Therefore, in this case, we can take $\varepsilon_h=(1+k_0r)\|(I-P_h)H'(P_hu_0)\|$.
(b)
Suppose $H'(u)$ is not self-adjoint for $u\in\overline{B(P_hu_0,r)}$. In this case, under the additional assumption (see [17])
$$H'(u)=R_uH'(P_hu_0),\quad u\in\overline{B(P_hu_0,r)},$$
with $\|I-R_u\|\le C_R\|u-P_hu_0\|$, we have
$$\|H'(u)(I-P_h)\|=\|R_uH'(P_hu_0)(I-P_h)\|\le\|R_u\|\,\|H'(P_hu_0)(I-P_h)\|\le(1+C_Rr)\|H'(P_hu_0)(I-P_h)\|.$$
Therefore, in this case, we can take $\varepsilon_h=(1+C_Rr)\|H'(P_hu_0)(I-P_h)\|$.
From now on, we assume $\delta\in(0,d]$ and $\alpha\in[\delta+\varepsilon_h,a)$ with $a>d+\varepsilon_0$.
First, we shall prove that
$$(P_hHP_h)(u)+\alpha P_h(u-u_0)=P_hy^{\delta}\tag{34}$$
has a unique solution $u_\alpha^{h,\delta}\in R(P_h)$, under the assumption
$$R(P_h)\subseteq D(H).\tag{35}$$
Proposition 1.
Suppose (35) holds. Then, (34) has a unique solution $u_\alpha^{h,\delta}\in B(P_hu_0,r)$ for all $u_0\in U$ and $y^{\delta}\in U$.
Proof. 
Since $H$ is monotone, we have
$$\langle(P_hHP_h)(u)-(P_hHP_h)(v),\,u-v\rangle=\langle H(P_hu)-H(P_hv),\,P_hu-P_hv\rangle\ge 0,$$
so $P_hHP_h$ is monotone and $D(P_hHP_h)=U$. Hence, by the Minty–Browder theorem (see [18,19]), Equation (34) has a unique solution $u_\alpha^{h,\delta}$ for all $u_0\in U$ and $y^{\delta}\in U$.
Next, we shall prove that $u_\alpha^{h,\delta}\in B(P_hu_0,r)$. Note that, by (34), we have
$$P_hH(P_hu_\alpha^{h,\delta})+\alpha P_h(u_\alpha^{h,\delta}-\hat u)-P_hH(\hat u)=P_h(y^{\delta}-y)+\alpha P_h(u_0-\hat u).\tag{36}$$
Let $M=\int_{0}^{1}H'\big(\hat u+t(P_hu_\alpha^{h,\delta}-\hat u)\big)\,dt$. Then, by (36), we have
$$P_hM(P_hu_\alpha^{h,\delta}-\hat u)+\alpha P_h(u_\alpha^{h,\delta}-\hat u)=P_h(y^{\delta}-y)+\alpha P_h(u_0-\hat u),$$
or
$$(P_hMP_h+\alpha I)(u_\alpha^{h,\delta}-P_h\hat u)=P_h(y^{\delta}-y)+\alpha P_h(u_0-\hat u)+P_hM(I-P_h)\hat u.$$
So, we have
$$\|u_\alpha^{h,\delta}-P_h\hat u\|=\big\|(P_hMP_h+\alpha I)^{-1}\big[\alpha P_h(u_0-\hat u)+P_h(y^{\delta}-y)+P_hM(I-P_h)\hat u\big]\big\|\le\|P_h(u_0-\hat u)\|+\frac{\|P_h(y^{\delta}-y)\|}{\alpha}+\frac{\|P_hM(I-P_h)\hat u\|}{\alpha}\le r_0+\frac{\delta}{\alpha}+\frac{\varepsilon_h\|\hat u\|}{\alpha},$$
and hence
$$\|u_\alpha^{h,\delta}-P_hu_0\|\le\|u_\alpha^{h,\delta}-P_h\hat u\|+\|P_h(\hat u-u_0)\|\le 2r_0+\max\{\|\hat u\|,1\}\,\frac{\delta+\varepsilon_h}{\alpha}\le 2r_0+\max\{\|\hat u\|,1\}<r,\tag{37}$$
i.e., $u_\alpha^{h,\delta}\in B(P_hu_0,r)$. □
The method: For the rest of this section, $H'(u)$, $u\in\overline{B(P_hu_0,r)}$, is assumed to be a positive self-adjoint operator. We consider the sequence $\{u_{n,\alpha}^{h,\delta}\}$ defined iteratively by
$$u_{n+1,\alpha}^{h,\delta}=u_{n,\alpha}^{h,\delta}-\beta P_h\big[H(P_hu_{n,\alpha}^{h,\delta})+\alpha(u_{n,\alpha}^{h,\delta}-u_0)-y^{\delta}\big],\tag{38}$$
where
$$u_{0,\alpha}^{h,\delta}=P_hu_0\quad\text{and}\quad\beta:=\frac{1}{\beta_0+a}.$$
Note that if $\lim_{n\to\infty}u_{n,\alpha}^{h,\delta}$ exists, then the limit is the solution $u_\alpha^{h,\delta}$ of (34).
Theorem 9.
Let $\delta\in(0,d]$, $\alpha\in[\delta+\varepsilon_h,a)$, and let $u_\alpha^{h,\delta}$ and $u_\alpha^{\delta}$ be the solutions of (34) and (3), respectively. Then,
$$\|u_\alpha^{h,\delta}-u_\alpha^{\delta}\|\le\frac{\|\hat u\|\,\varepsilon_h}{\alpha}+b_h+2\|u_\alpha^{\delta}-\hat u\|.$$
Proof. 
Note that, by (3), we have
$$P_hH(u_\alpha^{\delta})+\alpha P_h(u_\alpha^{\delta}-u_0)=P_hy^{\delta}.\tag{39}$$
Therefore, by (34) and (39), we have
$$P_h\big(H(u_\alpha^{h,\delta})-H(u_\alpha^{\delta})\big)+\alpha P_h(u_\alpha^{h,\delta}-u_\alpha^{\delta})=0.\tag{40}$$
Let $T_h:=\int_{0}^{1}H'\big(u_\alpha^{\delta}+t(u_\alpha^{h,\delta}-u_\alpha^{\delta})\big)\,dt$. Then, by (40), we have
$$P_hT_h(u_\alpha^{h,\delta}-u_\alpha^{\delta})+\alpha P_h(u_\alpha^{h,\delta}-u_\alpha^{\delta})=0,$$
or
$$(P_hT_hP_h+\alpha I)(u_\alpha^{h,\delta}-P_hu_\alpha^{\delta})=P_hT_h(I-P_h)u_\alpha^{\delta}.\tag{41}$$
Notice that
$$\begin{aligned}\|u_\alpha^{\delta}+t(u_\alpha^{h,\delta}-u_\alpha^{\delta})-P_hu_0\|&=\|(1-t)(u_\alpha^{\delta}-\hat u+\hat u-P_hu_0)+t(u_\alpha^{h,\delta}-P_hu_0)\|\\
&=\|(1-t)[(u_\alpha^{\delta}-\hat u)+(I-P_h)\hat u+P_h(\hat u-u_0)]+t(u_\alpha^{h,\delta}-P_hu_0)\|\\
&\le(1-t)\big[\|u_\alpha^{\delta}-\hat u\|+b_h+r_0\big]+t\,\|u_\alpha^{h,\delta}-P_hu_0\|\\
&\le(1-t)\Big[\Big(\frac{\delta}{\alpha}+2r_0\Big)+b_h\Big]+t\big(2r_0+\max\{1,\|\hat u\|\}\big)\quad\text{(by (7) and (37))}\\
&\le r,\end{aligned}$$
that is, $u_\alpha^{\delta}+t(u_\alpha^{h,\delta}-u_\alpha^{\delta})\in B(P_hu_0,r)$. So, $P_hT_hP_h$ is positive self-adjoint, and hence, by (41),
$$\|u_\alpha^{h,\delta}-P_hu_\alpha^{\delta}\|=\|(P_hT_hP_h+\alpha I)^{-1}P_hT_h(I-P_h)u_\alpha^{\delta}\|\le\frac{\|P_hT_h(I-P_h)u_\alpha^{\delta}\|}{\alpha}\le\frac{\varepsilon_h}{\alpha}\|u_\alpha^{\delta}\|\le\frac{\varepsilon_h}{\alpha}\big(\|\hat u\|+\|\hat u-u_\alpha^{\delta}\|\big)\tag{42}$$
and
$$\|(I-P_h)u_\alpha^{\delta}\|\le\|(I-P_h)\hat u\|+\|u_\alpha^{\delta}-\hat u\|.\tag{43}$$
Since $\frac{\varepsilon_h}{\alpha}\le 1$, by (42) and (43), we have
$$\|u_\alpha^{h,\delta}-u_\alpha^{\delta}\|\le\|u_\alpha^{h,\delta}-P_hu_\alpha^{\delta}\|+\|(I-P_h)u_\alpha^{\delta}\|\le\frac{\|\hat u\|\,\varepsilon_h}{\alpha}+b_h+2\|u_\alpha^{\delta}-\hat u\|.\qquad\square$$
Remark 6.
If $\alpha\,b_h\le\delta+\varepsilon_h$ and $\alpha=(\delta+\varepsilon_h)^{\frac{1}{\nu+1}}$, then, by Theorems 2 and 9, we have
$$\|u_\alpha^{h,\delta}-\hat u\|=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big).$$
Theorem 10.
Let $\delta\in(0,d]$ and $\alpha\in[\delta+\varepsilon_h,a)$. Then, $\{u_{n,\alpha}^{h,\delta}\}\subset\overline{B(P_hu_0,r)}$ and $\lim_{n\to\infty}u_{n,\alpha}^{h,\delta}=u_\alpha^{h,\delta}$. Further,
$$\|u_{n,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|\le\kappa\,q_{\alpha,h}^{n},$$
where $q_{\alpha,h}:=1-\beta\alpha$, $\kappa:=2r_0+\max\{1,\|\hat u\|\}$ and $\beta:=1/(\beta_0+a)$.
Proof. 
We shall show the following by induction:
(1a)
$u_{n,\alpha}^{h,\delta}\in\overline{B(P_hu_0,r)}$;
(1b)
the operator
$$A_n^{h}:=\int_{0}^{1}H'\big(u_\alpha^{h,\delta}+t(u_{n,\alpha}^{h,\delta}-u_\alpha^{h,\delta})\big)\,dt$$
is well defined and positive self-adjoint;
(1c)
$$\|u_{n+1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|\le(1-\beta\alpha)\|u_{n,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|,\quad n=0,1,2,\ldots$$
Clearly, $u_{0,\alpha}^{h,\delta}=P_hu_0\in\overline{B(P_hu_0,r)}$. Furthermore, by Proposition 1, $u_\alpha^{h,\delta}\in\overline{B(P_hu_0,r)}$; so, by (32), $A_0^{h}$ is a well-defined, positive self-adjoint operator with $\|P_hA_0^{h}P_h\|\le\beta_0$. So, (1a) and (1b) hold for $n=0$.
Note that
$$u_{1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}=u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta}-\beta P_h\big[H(u_{0,\alpha}^{h,\delta})-H(u_\alpha^{h,\delta})+\alpha(u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta})\big].$$
Since
$$H(u_{0,\alpha}^{h,\delta})-H(u_\alpha^{h,\delta})=\int_{0}^{1}H'\big(u_\alpha^{h,\delta}+t(u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta})\big)\,dt\,(u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta})=A_0^{h}(u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta}),$$
we have
$$u_{1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}=\big[I-\beta(P_hA_0^{h}P_h+\alpha I)\big](u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta}).\tag{44}$$
Since $P_hA_0^{h}P_h$ is a positive self-adjoint operator (cf. [20]),
$$\|I-\beta(P_hA_0^{h}P_h+\alpha I)\|=\sup_{\|u\|=1}\big|\langle[(1-\beta\alpha)I-\beta P_hA_0^{h}P_h]u,\,u\rangle\big|=\sup_{\|u\|=1}\big|(1-\beta\alpha)-\beta\langle P_hA_0^{h}P_hu,\,u\rangle\big|,$$
and since $\|P_hA_0^{h}P_h\|\le\beta_0$ and $\beta=1/(\beta_0+a)$, we have
$$0\le\beta\langle P_hA_0^{h}P_hu,\,u\rangle\le\beta\|P_hA_0^{h}P_h\|\le\beta\beta_0<1-\beta\alpha\quad\forall\alpha\in(0,a).$$
Therefore,
$$\|I-\beta(P_hA_0^{h}P_h+\alpha I)\|\le 1-\beta\alpha.$$
Thus, by (44), we have
$$\|u_{1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|\le(1-\beta\alpha)\|u_{0,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|\le q_{\alpha,h}\|P_hu_0-u_\alpha^{h,\delta}\|.$$
Therefore, we have
$$\|u_{1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|\le q_{\alpha,h}\big(2r_0+\max\{1,\|\hat u\|\}\big)\quad\text{(by (37))}=\kappa\,q_{\alpha,h},$$
and
$$\|u_{1,\alpha}^{h,\delta}-P_hu_0\|\le\|u_{1,\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|+\|u_\alpha^{h,\delta}-P_hu_0\|\le 2\|P_hu_0-u_\alpha^{h,\delta}\|\le 2\big(2r_0+\max\{1,\|\hat u\|\}\big)\le r.$$
Thus, $u_{1,\alpha}^{h,\delta}\in\overline{B(P_hu_0,r)}$. So, for $n=0$, (1a)–(1c) hold. The induction for (1a)–(1c) is completed by simply replacing $u_{1,\alpha}^{h,\delta},u_{0,\alpha}^{h,\delta}$ in the preceding arguments with $u_{n+1,\alpha}^{h,\delta},u_{n,\alpha}^{h,\delta}$, respectively. The result now follows from (1c). □
Theorem 11.
Let $\delta\in(0,d]$ and $\alpha\in[\delta+\varepsilon_h,a)$ with $d+\varepsilon_0<a$. Let $u_\alpha^{\delta}$ and $u_\alpha$ be the solutions of (3) and (4), respectively, and let $\{u_{n,\alpha}^{h,\delta}\}$ be as in (38). Let
$$n_{\alpha,\delta}:=\min\{m\in\mathbb{N}:\ \alpha\,q_{\alpha,h}^{m}\le\delta+\varepsilon_h\}$$
and suppose
$$\alpha\,b_h\le\delta+\varepsilon_h.$$
Then,
$$\|u_{n_{\alpha,\delta},\alpha}^{h,\delta}-\hat u\|\le\big(\kappa+1+\max\{\|\hat u\|,3\}\big)\Big(\|\hat u-u_\alpha\|+\frac{\delta+\varepsilon_h}{\alpha}\Big).$$
Proof. 
By Theorems 9 and 10, we have
$$\begin{aligned}\|u_{n_{\alpha,\delta},\alpha}^{h,\delta}-\hat u\|&\le\|u_{n_{\alpha,\delta},\alpha}^{h,\delta}-u_\alpha^{h,\delta}\|+\|u_\alpha^{h,\delta}-u_\alpha^{\delta}\|+\|u_\alpha^{\delta}-\hat u\|\\
&\le\kappa\,q_{\alpha,h}^{n}+\frac{\varepsilon_h}{\alpha}\|\hat u\|+b_h+3\|u_\alpha^{\delta}-\hat u\|\\
&\le\kappa\,q_{\alpha,h}^{n}+\frac{\varepsilon_h}{\alpha}\|\hat u\|+b_h+3\Big(\frac{\delta}{\alpha}+\|u_\alpha-\hat u\|\Big)\\
&\le\big(\kappa+1+\max\{3,\|\hat u\|\}\big)\Big(\|\hat u-u_\alpha\|+\frac{\delta+\varepsilon_h}{\alpha}\Big).\end{aligned}$$
Here, we used the facts that $q_{\alpha,h}^{n}\le\frac{\delta+\varepsilon_h}{\alpha}$ for $n=n_{\alpha,\delta}$ and $b_h\le\frac{\delta+\varepsilon_h}{\alpha}$. Thus, we obtain the required estimate. □
The finite-dimensional realization of the parameter-choice strategy (20) is considered next.

4. Finite Dimensional Realization of the New Parameter Choice Strategy (20)

For $u\in U$, define
$$\phi_h(\alpha,u):=\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-u)\|.\tag{49}$$
The proof of the next theorem is similar to that of Theorem 3, so it is omitted.
Theorem 12.
For each $u\in U$, the function $\alpha\mapsto\phi_h(\alpha,u)$, $\alpha>0$, defined in (49), is continuous, monotonically increasing, and
$$\lim_{\alpha\to 0}\phi_h(\alpha,u)=0,\qquad\lim_{\alpha\to\infty}\phi_h(\alpha,u)=\|P_h(H(u_0)-u)\|.$$
In addition to (2), we assume that
$$c_1\delta+d_1\varepsilon_h\le\|P_h(H(u_0)-y^{\delta})\|\tag{50}$$
for some $c_1>1$ and $d_1>\frac{k_0r_0^{2}}{2}+r_0$. The proof of the following theorem follows from the intermediate value theorem.
Theorem 13.
Let $y^{\delta}$ satisfy (2) and (50). Then,
$$\phi_h(\alpha,y^{\delta})=c_1\delta+d_1\varepsilon_h\tag{51}$$
has a unique solution $\alpha=\alpha(\delta,h,u_0)$.
Next, we shall show that if $\alpha=\alpha(\delta,h,u_0)$ satisfies (51), then $\|\hat u-u_\alpha\|=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big)$. Our proof is based on the moment inequality (21).
Theorem 14.
Let Assumption 1 and (10) be satisfied and let $\alpha=\alpha(\delta,h,u_0)$ satisfy (51). Then,
$$\|\hat u-u_\alpha\|=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
By (24), the result follows once we prove $\|w\|=O(\delta+\varepsilon_h)$. This can be seen as follows:
$$\begin{aligned}\|w\|&=\alpha^{2}\|(A_0+\alpha I)^{-2}(H(u_0)-y^{\delta})\|\\
&\le\alpha^{2}\big\|\big[(A_0+\alpha I)^{-2}-(P_hA_0P_h+\alpha P_h)^{-2}\big](H(u_0)-y^{\delta})\big\|+\alpha^{2}\big\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y^{\delta})\big\|\\
&\le\big\|(A_0+\alpha I)^{-2}\big[(P_hA_0P_h+\alpha P_h)^{2}-(A_0+\alpha I)^{2}\big]\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|(A_0+\alpha I)^{-2}\big[(P_hA_0P_h)^{2}+2\alpha P_hA_0P_h-(A_0^{2}+2\alpha A_0)+\alpha^{2}(P_h-I)\big]\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|(A_0+\alpha I)^{-2}\big[(P_hA_0P_h+A_0)(P_hA_0P_h-A_0)+2\alpha(P_hA_0P_h-A_0)\big]\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|(A_0+\alpha I)^{-2}\big[(P_hA_0P_h-A_0)+2(A_0+\alpha I)\big](P_hA_0P_h-A_0)\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|\big\{[(A_0+\alpha I)^{-1}(P_hA_0P_h-A_0)]^{2}+2(A_0+\alpha I)^{-1}(P_hA_0P_h-A_0)\big\}\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|\big\{[(A_0+\alpha I)^{-1}(P_hA_0P_h-A_0P_h+A_0P_h-A_0)]^{2}+2(A_0+\alpha I)^{-1}(P_hA_0P_h-A_0P_h+A_0P_h-A_0)\big\}\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&=\big\|\big\{[(A_0+\alpha I)^{-1}(P_h-I)A_0P_h]^{2}+2(A_0+\alpha I)^{-1}(P_h-I)A_0P_h\big\}\,\alpha^{2}(P_hA_0P_h+\alpha P_h)^{-2}(H(u_0)-y^{\delta})\big\|+c_1\delta+d_1\varepsilon_h\\
&\le\Big[\Big(\frac{\varepsilon_h}{\alpha}\Big)^{2}+2\,\frac{\varepsilon_h}{\alpha}+1\Big](c_1\delta+d_1\varepsilon_h),\end{aligned}$$
where we used $(P_h-I)P_h=0$. Next, we shall show that $\frac{\varepsilon_h}{\alpha}$ is bounded. Note that
$$\begin{aligned}c_1\delta+d_1\varepsilon_h&=\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y^{\delta})\|\\
&\le\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(y-y^{\delta})\|+\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y)\|\\
&\le\delta+\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_hA(u_0-\hat u)\|\\
&\le\delta+\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(A-A_0)(u_0-\hat u)\|+\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_hA_0(u_0-\hat u)\|\\
&=\delta+\alpha^{2}\Big\|(P_hA_0P_h+\alpha I)^{-2}P_hA_0[P_h+(I-P_h)]\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,u_0-\hat u\big)\,dt\Big\|+\alpha^{2}\big\|(P_hA_0P_h+\alpha I)^{-2}P_hA_0[P_h+(I-P_h)](u_0-\hat u)\big\|\\
&\le\delta+\big(\alpha+\|A_0(I-P_h)\|\big)\Big\|\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,u_0-\hat u\big)\,dt\Big\|+\big(\alpha+\|A_0(I-P_h)\|\big)\|u_0-\hat u\|\\
&\le\delta+(\alpha+\varepsilon_h)\,\frac{k_0}{2}\|u_0-\hat u\|^{2}+(\alpha+\varepsilon_h)\|u_0-\hat u\|\\
&\le\delta+\Big(\frac{k_0r_0^{2}}{2}+r_0\Big)\varepsilon_h+\Big(\frac{k_0r_0^{2}}{2}+r_0\Big)\alpha,\end{aligned}$$
so we have
$$\Big(d_1-\Big(\frac{k_0r_0^{2}}{2}+r_0\Big)\Big)\varepsilon_h\le(c_1-1)\delta+\Big(d_1-\Big(\frac{k_0r_0^{2}}{2}+r_0\Big)\Big)\varepsilon_h\le\Big(\frac{k_0r_0^{2}}{2}+r_0\Big)\alpha,$$
and hence
$$\frac{\varepsilon_h}{\alpha}\le\frac{\frac{k_0r_0^{2}}{2}+r_0}{d_1-\big(\frac{k_0r_0^{2}}{2}+r_0\big)}=:C_{r_0}.$$
Now, the result follows from (27) and (53). □
Theorem 15.
Suppose Assumption 1 and (10) hold, and let $\alpha=\alpha(\delta,h,u_0)$ be chosen as the solution of (51). Then,
$$\frac{\delta+\varepsilon_h}{\alpha}=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big).$$
Proof. 
By (51), we have
$$c_1\delta+d_1\varepsilon_h=\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y^{\delta})\|\le\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y)\|+\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(y-y^{\delta})\|\le\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y)\|+\delta,$$
so
$$\begin{aligned}(c_1-1)\delta+d_1\varepsilon_h&\le\alpha^{2}\|(P_hA_0P_h+\alpha I)^{-2}P_h(H(u_0)-y)\|\\
&\le\alpha^{2}\big\|\big[(P_hA_0P_h+\alpha P_h)^{-2}P_h-(A+\alpha I)^{-2}\big](H(u_0)-y)\big\|+\alpha^{2}\|(A+\alpha I)^{-2}(H(u_0)-y)\|\\
&=\big\|\big[(P_hA_0P_h+\alpha P_h)^{-2}P_h(A+\alpha I)^{2}-I\big]\,\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y)\big\|+\alpha^{2}\|(A+\alpha I)^{-2}(H(u_0)-y)\|.\end{aligned}$$
Let $w_1=\alpha^{2}(A+\alpha I)^{-2}(H(u_0)-y)$. Then, similarly to (24), we have
$$(c_1-1)\delta+d_1\varepsilon_h\le(\Gamma_2^{2}+2\Gamma_2+1)\|w_1\|,$$
where $\Gamma_2=\|(P_hA_0P_h+\alpha I)^{-1}(P_hA-P_hA_0P_h)\|$. Note that
$$\begin{aligned}\|(P_hA_0P_h+\alpha I)^{-1}(P_hA-P_hA_0P_h)x\|&=\big\|(P_hA_0P_h+\alpha I)^{-1}\big[P_h(A-A_0)+P_hA_0(I-P_h)\big]x\big\|\\
&\le\|(P_hA_0P_h+\alpha I)^{-1}P_h(A-A_0)x\|+\|(P_hA_0P_h+\alpha I)^{-1}P_hA_0(I-P_h)x\|\\
&=\Big\|(P_hA_0P_h+\alpha I)^{-1}P_hA_0[P_h+(I-P_h)]\int_{0}^{1}\Phi\big(\hat u+t(u_0-\hat u),u_0,x\big)\,dt\Big\|+\|(P_hA_0P_h+\alpha I)^{-1}P_hA_0(I-P_h)x\|\\
&\le\Big(1+\frac{\varepsilon_h}{\alpha}\Big)\frac{k_0}{2}\|u_0-\hat u\|\,\|x\|+\frac{\varepsilon_h}{\alpha}\|x\|\le\Big[(1+C_{r_0})\frac{k_0r_0}{2}+C_{r_0}\Big]\|x\|,\end{aligned}$$
so
$$\Gamma_2\le(1+C_{r_0})\frac{k_0r_0}{2}+C_{r_0}=:C_{\Gamma_2}.$$
Therefore, by (10), (55) and (56), we have
$$(c_1-1)\delta+d_1\varepsilon_h\le\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\|w_1\|=\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\alpha^{2}\|(A+\alpha I)^{-2}A(u_0-\hat u)\|=\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\alpha^{2}\|(A+\alpha I)^{-2}A^{1+\nu}z\|\le\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\alpha^{2}\|(A+\alpha I)^{-1}A^{\nu}z\|\le\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\alpha^{1+\nu}\|z\|,$$
or
$$\alpha^{1+\nu}\ge\frac{\min\{c_1-1,\,d_1\}}{\big(C_{\Gamma_2}^{2}+2C_{\Gamma_2}+1\big)\|z\|}\,(\delta+\varepsilon_h).$$
Thus,
$$\frac{\delta+\varepsilon_h}{\alpha}=(\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\Big(\frac{\delta+\varepsilon_h}{\alpha^{\nu+1}}\Big)^{\frac{1}{\nu+1}}=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big).\qquad\square$$
By combining Theorems 11, 14 and 15, we obtain the following theorem.
Theorem 16.
Suppose Assumption 1 and (10) hold, and let $\alpha=\alpha(\delta,h,u_0)$ be chosen as the solution of (51). Then,
$$\|u_{n_{\alpha,\delta},\alpha}^{h,\delta}-\hat u\|=O\big((\delta+\varepsilon_h)^{\frac{\nu}{\nu+1}}\big).$$
Remark 7.
Note that in the proposed method, a system of equations is solved once to obtain the parameter $\alpha$, which is then used for computing $u_{n_{\alpha,\delta},\alpha}^{h,\delta}$; whereas in the classical discrepancy principle, one has to compute $\alpha$ and $u_{n_{\alpha,\delta},\alpha}^{h,\delta}$ in each iteration step. This is an advantage of our proposed approach.

5. Numerical Examples

The following steps are involved in the computation of $u_{n_{\alpha,\delta},\alpha}^{h,\delta}$:
Step I: Compute $\alpha = \alpha(\delta, h, u_0) =: \alpha(\delta, \varepsilon_h)$ satisfying (51).
Step II: Choose $n$ such that $q_{\alpha,h}^n = (1 - \beta\alpha(\delta,\varepsilon_h))^n \le \frac{\delta + \varepsilon_h}{\alpha(\delta,\varepsilon_h)}$.
Step III: Compute $u_{n_{\alpha,\delta},\alpha}^{h,\delta}$ using (38).
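The stopping index in Step II can be obtained in closed form, since $(1-\beta\alpha)^n \le (\delta+\varepsilon_h)/\alpha$ is solved by the smallest integer exceeding a ratio of logarithms. A minimal sketch; the numerical values of $\beta$, $\alpha$, $\delta$, $\varepsilon_h$ below are illustrative only, not taken from the paper:

```python
import math

def choose_n(beta: float, alpha: float, delta: float, eps_h: float) -> int:
    """Smallest n with (1 - beta*alpha)**n <= (delta + eps_h)/alpha.

    Assumes 0 < beta*alpha < 1 and (delta + eps_h)/alpha < 1,
    so that a finite n exists.
    """
    q = 1.0 - beta * alpha
    target = (delta + eps_h) / alpha
    # (q**n <= target)  <=>  (n >= log(target)/log(q)), since log(q) < 0
    n = math.ceil(math.log(target) / math.log(q))
    return max(n, 0)

n = choose_n(beta=0.5, alpha=0.1, delta=1e-3, eps_h=1e-4)
```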
To compute $u_{n,\alpha}^{h,\delta}$, consider a sequence $(V_m)$ of finite-dimensional subspaces, where $V_m = \mathrm{span}\{v_1, v_2, \ldots, v_{m+1}\}$ and $v_i$, $i = 1, 2, \ldots, m+1$, are the linear splines (on a uniform grid of $m+1$ points in $[0,1]$), so that $\dim V_m = m+1$. Since $u_{n,\alpha}^{h,\delta} \in V_m$, we have $u_{n,\alpha}^{h,\delta} = \sum_{i=1}^{m+1}\lambda_i^{(n)} v_i$, where $\lambda_i$, $i = 1, 2, \ldots, m+1$, are scalars. Then, from (38), we have
$$\sum_{i=1}^{m+1}\lambda_i^{(n+1)}v_i = \sum_{i=1}^{m+1}\lambda_i^{(n)}v_i - \beta P_m\Big[H\Big(\sum_{i=1}^{m+1}\lambda_i^{(n)}v_i\Big) + \alpha\sum_{i=1}^{m+1}\big(\lambda_i^{(n)} - \lambda_i^{(0)}\big)v_i - y^\delta\Big],$$
where $P_m := P_{h_m}$ is the orthogonal projection onto $V_m$ with $h_m = \frac{1}{m}$. In this case one can prove, as in [21], that $\|H'(u)(I-P_m)\| = O(\frac{1}{m^2})$, so we have taken $\varepsilon_{h_m} = \frac{1}{m^2}$ in our computations. Since $P_m H(\sum_{i=1}^{m+1}\lambda_i^{(n)}v_i) \in V_m$ and $P_m y^\delta \in V_m$, we approximate
$$P_m H\Big(\sum_{j=1}^{m+1}\lambda_j^{(n)}v_j\Big) = \sum_{i=1}^{m+1} H\Big(\sum_{j=1}^{m+1}\lambda_j^{(n)}v_j\Big)(t_i)\,v_i, \qquad P_m y^\delta = \sum_{i=1}^{m+1} y^\delta(t_i)\,v_i,$$
where $t_i$, $i = 1, 2, \ldots, m+1$, are the grid points. So $\lambda^{(n+1)} = (\lambda_1^{(n+1)}, \lambda_2^{(n+1)}, \ldots, \lambda_{m+1}^{(n+1)})^T$ satisfies (58) if it satisfies the equation
$$Q\big[\lambda^{(n+1)} - \lambda^{(n)}\big] = Q\beta\big[Y^\delta - \big(H_h + \alpha(\lambda^{(n)} - \lambda^{(0)})\big)\big],$$
where
$$Q = \big(\langle v_i, v_j\rangle\big)_{i,j}, \quad i, j = 1, 2, \ldots, m+1,$$
$$Y^\delta = \big(y^\delta(t_1), y^\delta(t_2), \ldots, y^\delta(t_{m+1})\big)^T,$$
and
$$H_h = \Big(H\Big(\sum_{i=1}^{m+1}\lambda_i^{(n)}v_i\Big)(t_1),\, H\Big(\sum_{i=1}^{m+1}\lambda_i^{(n)}v_i\Big)(t_2),\, \ldots,\, H\Big(\sum_{i=1}^{m+1}\lambda_i^{(n)}v_i\Big)(t_{m+1})\Big)^T.$$
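Since the Gram matrix $Q$ is invertible, the coefficient update above reduces to the plain vector recursion $\lambda^{(n+1)} = \lambda^{(n)} - \beta[H_h + \alpha(\lambda^{(n)} - \lambda^{(0)}) - Y^\delta]$; $Q$ only reappears when inner products are needed. A minimal sketch, using a linear stand-in for the nonlinear map $\lambda \mapsto H_h$ (the actual computation evaluates $H$ at the spline expansion):

```python
import numpy as np

def iterate_coeffs(lam0, H_h, Y_delta, alpha, beta, n_steps):
    """Run lam^{k+1} = lam^k - beta*(H_h(lam^k) + alpha*(lam^k - lam^0) - Y_delta).

    H_h: callable mapping a coefficient vector to the nodal values of H;
    any contractive placeholder works for illustration.
    """
    lam = lam0.copy()
    for _ in range(n_steps):
        lam = lam - beta * (H_h(lam) + alpha * (lam - lam0) - Y_delta)
    return lam

# Illustrative linear stand-in: H_h(lam) = 0.5*lam, with lam^(0) = 0
lam0 = np.zeros(5)
Y = np.ones(5)
lam = iterate_coeffs(lam0, lambda v: 0.5 * v, Y, alpha=0.1, beta=0.5, n_steps=200)
# Fixed point solves 0.5*lam + 0.1*lam = 1, i.e. lam = 1/0.6
```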
To compute the $\alpha$ satisfying (51), we proceed as follows. Let $z = (P_mA_0P_m + \alpha I)^{-2}P_m(H(u_0) - y^\delta)$. Then $z \in V_m$, so $z = \sum_{i=1}^{m+1}\xi_i v_i$ for some scalars $\xi_i$, $i = 1, 2, \ldots, m+1$. Note that $(P_mA_0P_m + \alpha I)^2 z = P_m(H(u_0) - y^\delta)$, or $(P_mA_0P_m + \alpha I)Z = P_m(H(u_0) - y^\delta)$, where $Z = (P_mA_0P_m + \alpha I)z$.
Since $Z \in V_m$, we have $Z = \sum_{i=1}^{m+1}\varsigma_i v_i$. Further, $\varsigma = (\varsigma_1, \varsigma_2, \ldots, \varsigma_{m+1})^T$ and $\xi = (\xi_1, \xi_2, \ldots, \xi_{m+1})^T$ satisfy the equations
( M + α Q ) ς = Q B ,
and
( M + α Q ) ξ = Q ς ,
respectively, where
$$M = \big(\langle A_0 v_i, v_j\rangle\big)_{i,j}, \quad i, j = 1, 2, \ldots, m+1,$$
and
$$B = \big((H(u_0) - y^\delta)(t_1),\, (H(u_0) - y^\delta)(t_2),\, \ldots,\, (H(u_0) - y^\delta)(t_{m+1})\big)^T.$$
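The two coefficient systems share the matrix $M + \alpha Q$, so one factorization can serve both solves, after which $\|z\|^2 = \xi^TQ\xi$. A small numpy sketch; the random symmetric positive definite matrices below are stand-ins for the actual spline Gram matrices $M$ and $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
# Symmetric positive definite stand-ins for the Gram matrices M and Q
R1, R2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
M = R1 @ R1.T + m * np.eye(m)
Q = R2 @ R2.T + m * np.eye(m)
B = rng.standard_normal(m)
alpha = 0.1

K = M + alpha * Q
sigma = np.linalg.solve(K, Q @ B)   # (M + alpha*Q) sigma = Q B
xi = np.linalg.solve(K, Q @ sigma)  # (M + alpha*Q) xi = Q sigma
norm_z_sq = xi @ Q @ xi             # ||z||^2 = xi^T Q xi
```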
We compute the $\alpha$ in (51) using Newton's method as follows. Let $f(\alpha) = \alpha^4\|z\|^2 - (c_1\delta + d_1\varepsilon_h)^2$. Then
$$f'(\alpha) = 4\alpha^3\|z\|^2 - 4\alpha^4\langle z, ZZ\rangle,$$
where $ZZ = (P_mA_0P_m + \alpha I)^{-3}P_m(H(u_0) - y^\delta)$. Let $ZZ = \sum_{i=1}^{m+1}\Theta_i v_i$.
Then $\Theta = (\Theta_1, \Theta_2, \ldots, \Theta_{m+1})^T$ satisfies the equation
( M + α Q ) Θ = Q ξ .
So,
$$f(\alpha) = \alpha^4\,\xi^TQ\xi - (c_1\delta + d_1\varepsilon_h)^2$$
and
$$f'(\alpha) = 4\alpha^3\,\xi^TQ\xi - 4\alpha^4\,\xi^TQ\Theta.$$
Then, using Newton's method, we compute the $(k+1)$-th iterate as $\alpha_{k+1} = \alpha_k - \frac{f(\alpha_k)}{f'(\alpha_k)}$. In our computations, we stop the iteration when $|\alpha_{k+1} - \alpha_k| \le 10^{-5}$.
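The Newton update with the stopping rule $|\alpha_{k+1} - \alpha_k| \le 10^{-5}$ can be sketched generically; the callables `f` and `fprime` stand for the expressions above, each of whose evaluations at a new $\alpha$ requires fresh linear solves for $\xi$ and $\Theta$. For illustration the sketch is exercised on a simple scalar equation rather than the paper's $f$:

```python
def newton_alpha(f, fprime, alpha0, tol=1e-5, max_iter=100):
    """Newton iteration alpha_{k+1} = alpha_k - f(alpha_k)/fprime(alpha_k),
    stopped when successive iterates differ by at most tol."""
    alpha = alpha0
    for _ in range(max_iter):
        alpha_new = alpha - f(alpha) / fprime(alpha)
        if abs(alpha_new - alpha) <= tol:
            return alpha_new
        alpha = alpha_new
    return alpha

# Illustrative check on f(a) = a^2 - 2, whose positive root is sqrt(2)
root = newton_alpha(lambda a: a * a - 2.0, lambda a: 2.0 * a, alpha0=1.0)
```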
We consider a simple one-dimensional example, studied in [5,7,22,23], to illustrate the results of the previous sections. We also compare our computational results with those of the adaptive method considered in [16,24]. Let us briefly describe the adaptive method of [16]. Choose $\alpha_0 = \delta + \varepsilon_h$ and $\alpha_j = \varrho^j\alpha_0$. For each $j$, find $n_j$ such that $n_j = \min\{ i : q_{\alpha_j,h}^i \le \varrho^{-j} \}$.
Then, find k such that
$$k := \max\big\{ i : \big\|u_{n_i,\alpha_i}^{h,\delta} - u_{n_j,\alpha_j}^{h,\delta}\big\| \le 4\varrho^{-j},\ j = 0, 1, \ldots, i-1 \big\}.$$
Choose $\alpha := \alpha_k$ as the regularization parameter.
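The adaptive (balancing) selection above amounts to a pairwise-comparison loop over precomputed reconstructions. A sketch, where `recon[j]` stands for $u_{n_j,\alpha_j}^{h,\delta}$ and distances are taken in the Euclidean norm of the coefficient vectors (an assumption made for illustration):

```python
import numpy as np

def adaptive_index(recon, rho):
    """Largest i with ||recon[i] - recon[j]|| <= 4*rho**(-j) for all j < i."""
    k = 0
    for i in range(len(recon)):
        if all(np.linalg.norm(recon[i] - recon[j]) <= 4.0 * rho ** (-j)
               for j in range(i)):
            k = i
    return k

# Tiny synthetic run: the last reconstruction jumps away and is rejected
recon = [np.full(3, v) for v in (0.0, 1.0, 2.0, 40.0)]
k = adaptive_index(recon, rho=2.0)
```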
Example 1.
Let $c > 0$ be a constant. Consider the inverse problem of identifying the distributed growth law $u(t)$, $t \in (0,1)$, in the initial value problem
$$\frac{dy}{dt} = u(t)y(t), \quad y(0) = c, \quad t \in (0,1),$$
from the noisy data $y^\delta \in L^2(0,1)$. One can reformulate the above problem as an (ill-posed) operator equation $H(u) = y$ with
$$[H(u)](t) = c\, e^{\int_0^t u(\theta)\,d\theta}, \quad u \in L^2(0,1),\ t \in (0,1).$$
Then the Fréchet derivative $H'$ is given by
$$[H'(u)h](t) = [H(u)](t)\int_0^t h(\theta)\,d\theta.$$
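Both the forward operator and its derivative involve only the cumulative integral of their argument, so they discretize naturally with a cumulative trapezoidal rule. A sketch, with grid size chosen for illustration; for $\hat u(t) = t$ and $c = 1$ the exact data is $y(t) = e^{t^2/2}$, which the trapezoidal rule reproduces exactly since $\hat u$ is linear:

```python
import numpy as np

def cumtrap(f_vals, t):
    """Cumulative trapezoidal integral of f from 0 to each grid point."""
    return np.concatenate(
        ([0.0], np.cumsum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(t))))

def H(u_vals, t, c=1.0):
    """[H(u)](t) = c * exp(integral_0^t u(theta) d theta)."""
    return c * np.exp(cumtrap(u_vals, t))

def H_prime(u_vals, h_vals, t, c=1.0):
    """[H'(u)h](t) = [H(u)](t) * integral_0^t h(theta) d theta."""
    return H(u_vals, t, c) * cumtrap(h_vals, t)

t = np.linspace(0.0, 1.0, 201)
y = H(t, t)   # u(t) = t gives y(t) = exp(t^2/2)
```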
It is proved in [7] that $H'$ is of positive type (sectorial) and that the spectrum of $H'(u)$ is the singleton set $\{0\}$. Further, it is proved in [5] that $H$ satisfies Assumption 1 and that $\hat u - u_0 \in R(H'(\hat u))$ provided $u^* := \hat u - u_0 \in H^1(0,1)$ and $u^*(0) = 0$. Now, since $\hat u - u_0 = H'(\hat u)w$, we have
$$
\begin{aligned}
[\hat u - u_0](t) &= [H(\hat u)](t)\int_0^t w(\theta)\,d\theta = c\, e^{\int_0^t \hat u(\theta)\,d\theta}\int_0^t w(\theta)\,d\theta \\
&= \int_0^1 c\, e^{\int_0^t [\hat u + \tau(u_0 - \hat u)](\theta)\,d\theta}\,d\tau \cdot \frac{\int_0^t w(\theta)\,d\theta}{\int_0^1 e^{\int_0^t [\tau(u_0 - \hat u)](\theta)\,d\theta}\,d\tau} = [A\bar w](t),
\end{aligned}
$$
where $\bar w = w \big/ \int_0^1 e^{\int_0^t [\tau(u_0 - \hat u)](\theta)\,d\theta}\,d\tau$. This shows that the source condition (10) is satisfied. For our computations we have taken $\hat u(t) = t$, $u_0(t) = 0$ and $y(t) = e^{\frac{t^2}{2}}$. In Table 1, we present the relative error $E_\alpha = \frac{\|u_{n_{\alpha,\delta},\alpha}^{h,\delta} - \hat u\|}{\|u_{n_{\alpha,\delta},\alpha}^{h,\delta}\|}$ and the values of $\alpha$ obtained using the new method (51) and the adaptive method considered in [16] for different values of $\delta$ and $n$. We also provide the computation time (CT) for both methods. The relative error obtained with the new method (51) is smaller than that of the adaptive method in [16] for the various $\delta$ values; as the relative error decreases, the accuracy of the reconstruction increases.
The solutions obtained for different $\delta$ values ($\delta = 0.01, 0.001, 0.0001$) with $n = 500$ are shown in Figure 1, Figure 2 and Figure 3, respectively, and those for $n = 1000$ and $\delta = 0.01, 0.001, 0.0001$ are shown in Figure 4, Figure 5 and Figure 6, respectively. The exact and noisy data are shown in subfigure (a) of these figures and the computed solution in subfigure (b) (C.S-A priori denotes the curve corresponding to the method (51)). The computed solution for the new method is closer to the actual solution.

6. Conclusions

We introduced a new source condition and a new parameter-choice strategy. The proposed parameter-choice strategy is independent of the unknown parameter $\nu$, and it provides the optimal order $O(\delta^{\frac{\nu}{\nu+1}})$ for $0 < \nu \le 1$.

Author Contributions

Conceptualization and validation by S.G., J.P., K.R. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors Santhosh George and Jidesh P wish to thank the SERB, Govt. of India for the financial support under Project Grant No. CRG/2021/004776. Krishnendu R thanks UGC India for JRF.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. George, S.; Nair, M.T. A modified Newton-Lavrentiev regularization for non-linear ill-posed Hammerstein-type operator equations. J. Complex. 2008, 24, 228–240.
2. Hofmann, B.; Kaltenbacher, B.; Resmerita, E. Lavrentiev’s regularization method in Hilbert spaces revisited. Inverse Probl. Imaging 2016, 10, 741–764.
3. Janno, J.; Tautenhahn, U. On Lavrentiev regularization for ill-posed problems in Hilbert scales. Numer. Funct. Anal. Optim. 2003, 24, 531–555.
4. Mahale, P.; Nair, M.T. Lavrentiev regularization of non-linear ill-posed equations under general source condition. J. Nonlinear Anal. Optim. 2013, 4, 193–204.
5. Tautenhahn, U. On the method of Lavrentiev regularization for non-linear ill-posed problems. Inverse Probl. 2002, 18, 191–207.
6. Vasin, V.; George, S. An analysis of Lavrentiev regularization method and Newton type process for non-linear ill-posed problems. Appl. Math. Comput. 2014, 230, 406–413.
7. Nair, M.T.; Ravishankar, P. Regularized versions of continuous Newton’s method and continuous modified Newton’s method under general source conditions. Numer. Funct. Anal. Optim. 2008, 29, 1140–1165.
8. George, S.; Sreedeep, C.D. Lavrentiev’s regularization method for nonlinear ill-posed equations in Banach spaces. Acta Math. Sci. 2018, 38B, 303–314.
9. George, S. On convergence of regularized modified Newton’s method for non-linear ill-posed problems. J. Inverse Ill-Posed Probl. 2010, 18, 133–146.
10. Semenova, E.V. Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 2010, 10, 444–454.
11. De Hoog, F.R. Review of Fredholm equations of the first kind. In The Application and Numerical Solution of Integral Equations; Anderssen, R.S., De Hoog, F.R., Luckas, M.A., Eds.; Sijthoff and Noordhoff: Alphen aan den Rijn, The Netherlands, 1980; pp. 119–134.
12. Krasnoselskii, M.A.; Zabreiko, P.P.; Pustylnik, E.I.; Sobolevskii, P.E. Integral Operators in Spaces of Summable Functions; Noordhoff International Publishing: Leyden, The Netherlands, 1976.
13. Mahale, P.; Nair, M.T. Iterated Lavrentiev regularization for non-linear ill-posed problems. ANZIAM J. 2009, 51, 191–217.
14. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Abingdon, UK, 2022.
15. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942.
16. George, S.; Nair, M.T. A derivative-free iterative method for nonlinear ill-posed equations with monotone operators. J. Inverse Ill-Posed Probl. 2017, 25, 543–551.
17. Kaltenbacher, B. Some Newton-type methods for the regularization of nonlinear ill-posed problems. Inverse Probl. 1997, 13, 729–753.
18. Deimling, K. Nonlinear Functional Analysis; Springer: New York, NY, USA, 1985.
19. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: Berlin/Heidelberg, Germany, 2006.
20. Nair, M.T. Functional Analysis: A First Course; PHI-Learning: New Delhi, India, 2002; Fourth Print, 2014.
21. Groetsch, C.W.; King, J.T.; Murio, D. Asymptotic analysis of a finite element method for Fredholm integral equations of the first kind. In Treatment of Integral Equations by Numerical Methods; Baker, C.T.H., Miller, G.F., Eds.; Academic Press: Cambridge, MA, USA, 1982; pp. 1–11.
22. Hofmann, B.; Scherzer, O. Factors influencing the ill-posedness of nonlinear inverse problems. Inverse Probl. 1994, 10, 1277–1297.
23. Groetsch, C.W. Inverse Problems in the Mathematical Sciences; Vieweg: Braunschweig, Germany, 1993.
24. Pereverzev, S.; Schock, E. On the adaptive selection of the parameter in regularization of ill-posed problems. SIAM J. Numer. Anal. 2005, 43, 2060–2076.
Figure 1. (a) Data and (b) solution with δ = 0.01 and n = 500.
Figure 2. (a) Data and (b) solution with δ = 0.001 and n = 500.
Figure 3. (a) Data and (b) solution with δ = 0.0001 and n = 500.
Figure 4. (a) Data and (b) solution with δ = 0.01 and n = 1000.
Figure 5. (a) Data and (b) solution with δ = 0.001 and n = 1000.
Figure 6. (a) Data and (b) solution with δ = 0.0001 and n = 1000.
Table 1. Relative errors using the discrepancy principle.

| Method | | n = 500, δ = 0.01 | n = 500, δ = 0.001 | n = 500, δ = 0.0001 | n = 1000, δ = 0.01 | n = 1000, δ = 0.001 | n = 1000, δ = 0.0001 |
|---|---|---|---|---|---|---|---|
| (51) | α | 4.283954 × 10⁻³ | 4.283969 × 10⁻³ | 4.283972 × 10⁻³ | 3.602506 × 10⁻³ | 3.602505 × 10⁻³ | 3.602536 × 10⁻³ |
| | E_α | 1.225477 × 10⁻² | 1.225481 × 10⁻² | 1.225482 × 10⁻² | 1.036919 × 10⁻² | 1.036919 × 10⁻² | 1.036927 × 10⁻² |
| | CT | 3.764950 × 10⁻¹ | 3.286400 × 10⁻¹ | 3.355110 × 10⁻¹ | 1.879650 | 1.870468 | 1.802014 |
| Adaptive method in [16] | α | 1.040604 × 10⁻⁴ | 1.040604 × 10⁻⁶ | 1.040604 × 10⁻⁸ | 1.040604 × 10⁻⁴ | 1.040604 × 10⁻⁶ | 1.040604 × 10⁻⁸ |
| | E_α | 2.182110 × 10⁻² | 2.173007 × 10⁻² | 2.172918 × 10⁻² | 2.183745 × 10⁻² | 2.174636 × 10⁻² | 2.174546 × 10⁻² |
| | CT | 1.246600 × 10² | 1.159500 × 10² | 4.501330 × 10¹ | 1.352600 × 10² | 1.191300 × 10² | 8.252000 × 10³ |
George, S.; Padikkal, J.; Remesh, K.; Argyros, I.K. A New Parameter Choice Strategy for Lavrentiev Regularization Method for Nonlinear Ill-Posed Equations. Mathematics 2022, 10, 3365. https://doi.org/10.3390/math10183365