Article

A Modified Asymptotical Regularization of Nonlinear Ill-Posed Problems

by Pornsarp Pornsawad 1,2,*, Nantawan Sapsakul 1,2 and Christine Böckmann 3

1 Department of Mathematics, Faculty of Science, Silpakorn University, 6 Rachamakka Nai Rd., Nakhon Pathom 73000, Thailand
2 Centre of Excellence in Mathematics, Mahidol University, Rama 6 Rd., Bangkok 10400, Thailand
3 Institut für Mathematik, Universität Potsdam, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam OT Golm, Germany
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 419; https://doi.org/10.3390/math7050419
Submission received: 19 March 2019 / Revised: 25 April 2019 / Accepted: 30 April 2019 / Published: 10 May 2019
(This article belongs to the Special Issue Numerical Analysis: Inverse Problems – Theory and Applications)

Abstract: In this paper, we investigate the continuous version of modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time $T$ is a solution of $\|F(x^\delta(T)) - y^\delta\| = \tau\delta_+$ for some $\delta_+ > \delta$, and an appropriate source condition. We obtain the optimal rate of convergence.

1. Introduction

Let $X$ and $Y$ be infinite-dimensional real Hilbert spaces with inner products $\langle\cdot,\cdot\rangle$ and norms $\|\cdot\|$. Let us consider a nonlinear operator equation:
$$F(x) = y, \tag{1}$$
where $F: D(F) \subseteq X \to Y$ is a nonlinear operator between the Hilbert spaces $X$ and $Y$. If the operator $F$ is not continuously invertible, then (1) may not have a solution; if a solution exists, arbitrarily small perturbations of the data may lead to unacceptable results. In other words, problems of the form (1) do not depend continuously on the data. It was shown by Tautenhahn [1] that asymptotic regularization, i.e., the approximation of Equation (1) by a solution of the Showalter differential equation:
$$\frac{d}{dt}x^\delta(t) = F'(x^\delta(t))^*\big[y^\delta - F(x^\delta(t))\big], \quad 0 < t \le T, \qquad x^\delta(0) = \bar{x}, \tag{2}$$
where the regularization parameter $T$ is chosen according to the discrepancy principle, $\bar{x}$ is a suitable approximation of the unknown solution $x^*$, and $y^\delta \in Y$ are the available noisy data with:
$$\|y - y^\delta\| \le \delta, \tag{3}$$
is a stable method for solving nonlinear ill-posed problems. Under the Hölder-type source condition $\bar{x} - x^* = (F'(x^*)^* F'(x^*))^\gamma \nu$, $\nu \in X$, $2\gamma \in (0,1]$, for the regularized solution in $X$, the optimal rate $\|x^\delta(T_*) - x^*\| = O\big(\delta^{2\gamma/(2\gamma+1)}\big)$ is obtained under the assumption that a bounded linear operator $R_x$ exists such that:
$$F'(x) = R_x F'(x^+), \quad x \in B_r(\bar{x}), \tag{4}$$
and:
$$\|R_x - I\| \le C\,\|x - x^+\|, \quad C \ge 0, \tag{5}$$
are satisfied; see [1,2]. Detailed studies of inverse ill-posed problems may be found, e.g., in [3,4,5,6,7].
It is well known that asymptotic regularization is a continuous version of the Landweber iteration. A forward Euler discretization of (2) gives back a damped Landweber iteration:
$$x_{k+1}^\delta = x_k^\delta - \omega F'(x_k^\delta)^*\big(F(x_k^\delta) - y^\delta\big), \tag{6}$$
for some relaxation parameter $\omega > 0$, which is convergent for exact data and stable with respect to data errors [2]. Later, Scherzer [8] observed that the term $\alpha_k(x_k^\delta - \xi)$ appears in a regularized Gauss–Newton method, i.e.:
$$x_{k+1}^\delta = x_k^\delta - \big(F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I\big)^{-1}\Big(F'(x_k^\delta)^*\big(F(x_k^\delta) - y^\delta\big) + \alpha_k(x_k^\delta - \xi)\Big).$$
To highlight the importance of this term for iterative regularization, Scherzer [8] included the term $\alpha_k(x_k^\delta - \xi)$ in the Landweber method and proved a convergence rate result under the usual Hölder-type sourcewise representation without assumptions on the nonlinearity of the operator $F$ like (4) and (5). Moreover, in [9], the additional term was included in the whole family of iterative Runge–Kutta-type methods (RKTM):
$$x_{k+1}^\delta = x_k^\delta + \tau_k\, b^T \Pi^{-1}\mathbb{1}\, F'(x_k^\delta)^*\big(y^\delta - F(x_k^\delta)\big) - \tau_k^{-1}(x_k^\delta - \xi),$$
where $\Pi^{-1}$ stands for $\big(I + \tau_k A\, F'(x_k^\delta)^* F'(x_k^\delta)\big)^{-1}$, the vector $b^T$ and the matrix $A$ are defined by the Runge–Kutta method, and $\tau_k$ is a relaxation parameter; this family includes the modified Landweber iteration. Using a priori and a posteriori stopping rules, convergence rate results for the RKTM are obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled. However, References [8,9] have to assume that the nonlinear operator $F$ is properly scaled with a Lipschitz-continuous Fréchet derivative in $B_r(x_0)$, i.e.:
$$\|F'(x) - F'(\tilde{x})\| \le \tilde{L}\,\|x - \tilde{x}\|, \quad x, \tilde{x} \in B_r(x_0),$$
with $\tilde{L} \le 1$, instead of (4) and (5). A minimal numerical sketch of the damped Landweber iteration (6) is given below.
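To make the forward Euler discretization concrete, the following sketch applies the damped Landweber iteration (6) to a toy nonlinear operator. The operator $F$, its Jacobian, and all parameter values are illustrative assumptions of ours, not taken from [1,2]:

```python
import numpy as np

# Toy nonlinear operator F: R^2 -> R^2 and its Jacobian (illustrative choice only).
def F(x):
    return np.array([x[0] * x[1], x[1] ** 2])

def F_prime(x):
    return np.array([[x[1], x[0]],
                     [0.0, 2.0 * x[1]]])

def damped_landweber(y_delta, x_bar, omega=0.05, delta=1e-3, tau=2.5, max_iter=10000):
    """Forward Euler discretization of (2):
    x_{k+1} = x_k + omega * F'(x_k)^T (y_delta - F(x_k)),
    stopped by the classical discrepancy principle ||F(x_k) - y_delta|| <= tau * delta."""
    x = x_bar.copy()
    for k in range(max_iter):
        residual = y_delta - F(x)
        if np.linalg.norm(residual) <= tau * delta:
            break
        x = x + omega * F_prime(x).T @ residual
    return x, k

# Noisy data generated from the exact solution x* = (1, 2) (assumed setup).
x_star = np.array([1.0, 2.0])
y_delta = F(x_star) + 1e-3 * np.array([0.6, -0.8])
x_rec, iters = damped_landweber(y_delta, x_bar=np.array([0.8, 1.8]))
print(iters, x_rec)
```

Stability of the explicit step requires $\omega\,\|F'(x_k)\|^2$ to stay below 2, which is why a small relaxation parameter is used in this sketch.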
Due to the minimal assumptions needed for the convergence analysis of the modified iterative RKTM, we study in detail the additional term in the continuous version, written as:
$$\dot{x}^\delta(t) = F'(x^\delta(t))^*\big[y^\delta - F(x^\delta(t))\big] - \big(x^\delta(t) - \bar{x}\big), \quad 0 < t \le T, \qquad x^\delta(0) = \bar{x}, \tag{7}$$
for the noisy case and as:
$$\dot{x}(t) = F'(x(t))^*\big[y - F(x(t))\big] - \big(x(t) - \bar{x}\big), \quad 0 < t \le T, \qquad x(0) = \bar{x}, \tag{8}$$
for the noise-free case.
Recently, a second-order asymptotic regularization for the linear problem $Ax = y$ was investigated in [10]:
$$\ddot{x}(t) + \mu\dot{x}(t) + A^*A\,x(t) = A^*y^\delta, \qquad x(0) = \bar{x}, \quad \dot{x}(0) = \dot{\bar{x}}.$$
Under a Hölder-type source condition and Morozov's discrepancy principle, the method has the same power-type convergence rate as (2) in the linear case. Furthermore, a discrete second-order iterative regularization for the nonlinear case was proposed in [11].
The paper is organized as follows. In Section 2, the assumptions and preliminary results are given. We show that if the stopping time $T$ is chosen to be a solution of $\|F(x^\delta(T)) - y^\delta\| = \tau\delta_+$ for some $\delta_+ > \delta$, then there exists a unique solution $T_* < \infty$. Section 3 is devoted to the convergence analysis of the proposed method under the tangential cone condition and, in addition, the modified discrepancy principle for the noisy case. Finally, in Section 4, we show that the rate $O\big((\delta_+)^{2\gamma/(2\gamma+1)}\big)$ is obtained under the modified source condition. Section 5 provides the conclusion.

2. Preliminaries

For an ill-posed problem, a local property of the nonlinear operator is usually used to ensure at least local convergence of the regularization method, instead of the nonexpansivity of a fixed point operator [7]. For the presented work, we can provide local convergence if the nonlinear operator fulfills the following tangential cone condition, i.e., for all $x, \tilde{x} \in B_r(\bar{x}) \subset D(F)$:
$$\|F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x)\| \le \eta\,\|F(x) - F(\tilde{x})\|, \quad \eta < 1. \tag{9}$$
Equation (9) immediately implies that, for all $x, \tilde{x} \in B_r(\bar{x}) \subset D(F)$, we have:
$$\frac{1}{1+\eta}\,\|F'(x)(x - \tilde{x})\| \le \|F(x) - F(\tilde{x})\| \le \frac{1}{1-\eta}\,\|F'(x)(x - \tilde{x})\|. \tag{10}$$
A stronger condition was used in [12] to provide the local convergence of Tikhonov regularization, i.e.:
$$\|F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x)\| \le c\,\|\tilde{x} - x\|\,\|F(x) - F(\tilde{x})\|.$$
This condition implies (9) if $\|\tilde{x} - x\|$ is sufficiently small. In addition to the local condition (9), we assume that the Fréchet derivative of $F$ is bounded, i.e., for all $x \in B_r(\bar{x})$:
$$\|F'(x)\| \le L. \tag{11}$$
Adding the term $-(x^\delta(t) - \bar{x})$ to the Showalter differential equation requires a more complicated proof. To prove the convergence of the presented method, the following assumption is needed; however, it is not necessary for the convergence rate result in Section 4 or for the discretized version [9].
Assumption 1.
For $T_0 > 0$ and $\bar{x} = x(0)$, the following properties hold:
(i) $\int_{T_0}^{\infty} \|F(x(\sigma)) - y\|^2\, d\sigma$ converges;
(ii) $\int_{T_0}^{\infty} \|x(\sigma) - \bar{x}\|\, d\sigma$ converges.
The following lemma will be useful.
Lemma 1.
For any continuous function $f$ on $(T_0, \infty)$ with $T_0 > 0$, if $\int_{T_0}^{\infty} f(s)\, ds$ converges, then:
(i) $\int_{T}^{\infty} f(s)\, ds$ converges for all $T > T_0$;
(ii) $\lim_{T \to \infty} \int_{T}^{\infty} f(s)\, ds = 0$.
Corollary 1.
Let Assumption 1 be satisfied. Then:
(i) $\lim_{T \to \infty} \int_{T}^{\infty} \|F(x(\sigma)) - y\|^2\, d\sigma = 0$;
(ii) $\lim_{T \to \infty} \int_{T}^{\infty} \|x(\sigma) - \bar{x}\|\, d\sigma = 0$.
Proof. 
The proof follows directly from Lemma 1. □
To prove the existence and uniqueness of the solution $T_*$ of the nonlinear equation in Lemma 3, we first prepare Lemma 2.
Lemma 2.
Let $x^* \in B_r(\bar{x})$ be a solution of (1), and let (3) and (9) hold. Then:
$$\begin{aligned}
\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 \le\;& -2\,\|y^\delta - F(x^\delta(T))\|^2 + 2\,\|y^\delta - F(x^\delta(T))\|\,\delta + \frac{2}{1-\eta}\,\|y^\delta - F(x^\delta(T))\|\,\|F'(x^*)(x^* - \bar{x})\| \\
&+ 2\eta\,\|y^\delta - F(x^\delta(T))\|^2 + 2\eta\,\|y^\delta - F(x^\delta(T))\|\,\delta + \frac{2\eta}{1-\eta}\,\|y^\delta - F(x^\delta(T))\|\,\|F'(x^*)(x^* - \bar{x})\| \\
&- 2\,\|x^\delta(T) - \bar{x}\|^2.
\end{aligned}\tag{12}$$
Proof. 
Using (7), we obtain:
$$\begin{aligned}
\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 &= 2\,\big\langle F'(x^\delta(T))^*[y^\delta - F(x^\delta(T))] - (x^\delta(T) - \bar{x}),\; x^\delta(T) - \bar{x}\big\rangle \\
&= 2\,\big\langle y^\delta - F(x^\delta(T)),\; F'(x^\delta(T))(x^\delta(T) - \bar{x})\big\rangle - 2\,\big\langle x^\delta(T) - \bar{x},\; x^\delta(T) - \bar{x}\big\rangle \\
&= 2\,\big\langle y^\delta - F(x^\delta(T)),\; F(x^\delta(T)) - F(\bar{x})\big\rangle + 2\,\big\langle y^\delta - F(x^\delta(T)),\; F(\bar{x}) - F(x^\delta(T)) - F'(x^\delta(T))(\bar{x} - x^\delta(T))\big\rangle \\
&\quad - 2\,\big\langle x^\delta(T) - \bar{x},\; x^\delta(T) - \bar{x}\big\rangle \\
&= 2\,\big\langle y^\delta - F(x^\delta(T)),\; F(x^\delta(T)) - y^\delta\big\rangle + 2\,\big\langle y^\delta - F(x^\delta(T)),\; y^\delta - F(\bar{x}) + y - y\big\rangle \\
&\quad + 2\,\big\langle y^\delta - F(x^\delta(T)),\; F(\bar{x}) - F(x^\delta(T)) - F'(x^\delta(T))(\bar{x} - x^\delta(T))\big\rangle - 2\,\big\langle x^\delta(T) - \bar{x},\; x^\delta(T) - \bar{x}\big\rangle.
\end{aligned}\tag{13}$$
Using (9), we rewrite (13) and obtain:
$$\begin{aligned}
\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 \le\;& 2\,\big\langle y^\delta - F(x^\delta(T)),\; F(x^\delta(T)) - y^\delta\big\rangle + 2\,\big\langle y^\delta - F(x^\delta(T)),\; y^\delta - y\big\rangle + 2\,\big\langle y^\delta - F(x^\delta(T)),\; y - F(\bar{x})\big\rangle \\
&+ 2\eta\,\|y^\delta - F(x^\delta(T))\|\,\|F(x^\delta(T)) - F(\bar{x})\| - 2\,\big\langle x^\delta(T) - \bar{x},\; x^\delta(T) - \bar{x}\big\rangle \\
\le\;& -2\,\|y^\delta - F(x^\delta(T))\|^2 + 2\,\|y^\delta - F(x^\delta(T))\|\,\|y^\delta - y\| + 2\,\|y^\delta - F(x^\delta(T))\|\,\|y - F(\bar{x})\| \\
&+ 2\eta\,\|y^\delta - F(x^\delta(T))\|\,\|F(x^\delta(T)) - y^\delta\| + 2\eta\,\|y^\delta - F(x^\delta(T))\|\,\|y - y^\delta\| + 2\eta\,\|y^\delta - F(x^\delta(T))\|\,\|y - F(\bar{x})\| \\
&- 2\,\|x^\delta(T) - \bar{x}\|^2.
\end{aligned}\tag{14}$$
Our assertion is obtained via (3), (10), and (14). □
In [1], the stopping time $T$ serves as a regularization parameter and is chosen such that the discrepancy principle is satisfied, i.e.:
$$\|F(x^\delta(T_*)) - y^\delta\| \le \tau\delta < \|F(x^\delta(T)) - y^\delta\|, \quad 0 < T \le T_*,$$
with some $\tau > (1+\eta)/(1-\eta)$. However, in our research, we use a variation of the discrepancy principle. Let $\delta_+ > 0$ be defined by:
$$\delta_+ = \delta + \frac{Lr}{1-\eta}.$$
Note that $\delta_+ > \delta$. In the presented work, the regularization parameter fulfills the following rule:
$$\|F(x^\delta(T_*)) - y^\delta\| \le \tau\delta_+ < \|F(x^\delta(T)) - y^\delta\|, \quad 0 < T < T_*, \qquad \tau > \frac{1+\eta}{1-\eta}, \tag{16}$$
where $T_*$ is a solution of the following nonlinear equation:
$$h(T) := \|F(x^\delta(T)) - y^\delta\| - \tau\delta_+ = 0. \tag{17}$$
For $\delta_+ = \delta$, Tautenhahn [1] showed that a unique solution $T_* < \infty$ of $h(T) = 0$ exists. Before turning to the proof, we illustrate the stopping rule with a small numerical sketch.
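The following throwaway snippet integrates the modified flow (7) by forward Euler and returns the first time $T$ at which $h(T) \le 0$, i.e., the stopping time of rule (16)/(17). The operator $F$ and all constants ($L$, $r$, $\eta$, $\tau$, the noise) are illustrative assumptions of ours:

```python
import numpy as np

def F(x):
    return np.array([x[0] * x[1], x[1] ** 2])

def F_prime(x):
    return np.array([[x[1], x[0]],
                     [0.0, 2.0 * x[1]]])

def stopping_time(y_delta, x_bar, delta, L, r, eta, tau, dt=0.01, t_max=200.0):
    """Forward Euler integration of the modified flow (7),
    x' = F'(x)^T (y_delta - F(x)) - (x - x_bar),
    returning the first T with h(T) = ||F(x(T)) - y_delta|| - tau * delta_plus <= 0."""
    delta_plus = delta + L * r / (1.0 - eta)      # modified noise level delta_+
    x, t = x_bar.copy(), 0.0
    while t < t_max:
        if np.linalg.norm(F(x) - y_delta) <= tau * delta_plus:
            return t, x                           # h(T) has reached zero
        x = x + dt * (F_prime(x).T @ (y_delta - F(x)) - (x - x_bar))
        t += dt
    return None, x                                # stopping rule not reached in [0, t_max]

x_star = np.array([1.0, 2.0])
delta = 1e-2
y_delta = F(x_star) + delta * np.array([0.6, -0.8])
T_star, x_T = stopping_time(y_delta, np.array([0.98, 1.98]), delta,
                            L=0.1, r=0.2, eta=0.1, tau=1.5)
print(T_star, x_T)
```

Here $\tau = 1.5 > (1+\eta)/(1-\eta) \approx 1.22$, so the chosen parameters respect the admissibility condition of the rule.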
Lemma 3.
Let (9) and (11) be fulfilled, let $x^\delta(T)$ be a solution of (7), and let $x^*$ be a solution of (1) in $B_r(\bar{x})$. If $\|F(\bar{x}) - y^\delta\| > \tau\delta_+ > 0$ with $\tau > (1+\eta)/(1-\eta)$, then there exists a unique solution $T_* < \infty$ of (17).
Proof. 
(a) Observe that $h(T)$ is continuous with $h(0) = \|F(\bar{x}) - y^\delta\| - \tau\delta_+ > 0$. Using (7), we have:
$$\begin{aligned}
\frac{d}{dT}\|F(x^\delta(T)) - y^\delta\|^2 =\;& 2\,\big\langle F'(x^\delta(T))F'(x^\delta(T))^*[y^\delta - F(x^\delta(T))],\; F(x^\delta(T)) - y^\delta\big\rangle - 2\,\big\langle F'(x^\delta(T))(x^\delta(T) - \bar{x}),\; F(x^\delta(T)) - y^\delta\big\rangle \\
=\;& -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2 - 2\,\big\langle F(\bar{x}) - F(x^\delta(T)) - F'(x^\delta(T))(\bar{x} - x^\delta(T)),\; F(x^\delta(T)) - y^\delta\big\rangle \\
&- 2\,\big\langle F(x^\delta(T)) - y^\delta,\; F(x^\delta(T)) - y^\delta\big\rangle - 2\,\big\langle y^\delta - y,\; F(x^\delta(T)) - y^\delta\big\rangle - 2\,\big\langle y - F(\bar{x}),\; F(x^\delta(T)) - y^\delta\big\rangle.
\end{aligned}\tag{18}$$
Using (3), (9), and (10), we can estimate the above derivative by:
$$\begin{aligned}
\frac{d}{dT}\|F(x^\delta(T)) - y^\delta\|^2 \le\;& -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2 + 2\eta\,\|F(x^\delta(T)) - F(\bar{x})\|\,\|F(x^\delta(T)) - y^\delta\| - 2\,\|F(x^\delta(T)) - y^\delta\|^2 \\
&+ 2\delta\,\|F(x^\delta(T)) - y^\delta\| + \frac{2}{1-\eta}\,\|F'(x^*)(x^* - \bar{x})\|\,\|F(x^\delta(T)) - y^\delta\| \\
\le\;& -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2 + 2\eta\,\|F(x^\delta(T)) - y^\delta\|^2 + 2\eta\delta\,\|F(x^\delta(T)) - y^\delta\| \\
&+ \frac{2\eta}{1-\eta}\,\|F'(x^*)(x^* - \bar{x})\|\,\|F(x^\delta(T)) - y^\delta\| - 2\,\|F(x^\delta(T)) - y^\delta\|^2 + 2\delta\,\|F(x^\delta(T)) - y^\delta\| \\
&+ \frac{2}{1-\eta}\,\|F'(x^*)(x^* - \bar{x})\|\,\|F(x^\delta(T)) - y^\delta\|.
\end{aligned}$$
Moreover, (11) together with the fact that $x^* \in B_r(\bar{x})$ yields:
$$\begin{aligned}
\frac{d}{dT}\|F(x^\delta(T)) - y^\delta\|^2 \le\;& -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2 - 2(1-\eta)\,\|F(x^\delta(T)) - y^\delta\|^2 + 2\delta(\eta+1)\,\|F(x^\delta(T)) - y^\delta\| \\
&+ 2\,\frac{\eta+1}{1-\eta}\,Lr\,\|F(x^\delta(T)) - y^\delta\| \\
\le\;& -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2 + 2(1-\eta)\,\|F(x^\delta(T)) - y^\delta\|\left[\frac{\eta+1}{1-\eta}\left(\delta + \frac{Lr}{1-\eta}\right) - \|F(x^\delta(T)) - y^\delta\|\right].
\end{aligned}\tag{19}$$
The variation of the discrepancy principle (16) makes the right-hand side of (19) negative. Thus, $h(T)$ is non-increasing.
(b) Next, we show that $\lim_{T\to\infty} h(T) < 0$. Suppose that $\lim_{T\to\infty} h(T) \ge 0$. Due to this preliminary supposition, we have $\|F(x^\delta(T)) - y^\delta\| \ge \tau\delta_+$ for all $T < \infty$. Applying (11) to (12) and using the fact that $x^* \in B_r(\bar{x})$, we get:
$$\begin{aligned}
\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 &\le 2(1-\eta)\,\|y^\delta - F(x^\delta(T))\|\left[-\|y^\delta - F(x^\delta(T))\| + \frac{1+\eta}{1-\eta}\left(\delta + \frac{Lr}{1-\eta}\right)\right] - 2\,\|x^\delta(T) - \bar{x}\|^2 \\
&\le 2(1-\eta)\,\|y^\delta - F(x^\delta(T))\|\left[-\|y^\delta - F(x^\delta(T))\| + \tau\delta_+\right].
\end{aligned}\tag{20}$$
Rearranging (20), we obtain:
$$\frac{1}{2}\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 \le -\|y^\delta - F(x^\delta(T))\|\left[(1-\eta)\,\|y^\delta - F(x^\delta(T))\| - (1+\eta)\,\delta_+\right]. \tag{21}$$
Using the discrepancy principle (16), we can rewrite (21) as:
$$\left[(1-\eta)\tau\delta_+ - (1+\eta)\delta_+\right]\|y^\delta - F(x^\delta(T))\| \le \left[(1-\eta)\,\|y^\delta - F(x^\delta(T))\| - (1+\eta)\delta_+\right]\|y^\delta - F(x^\delta(T))\| \le -\frac{1}{2}\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2. \tag{22}$$
Integrating (22) on both sides and using $c = 1/\big(2\left[(1-\eta)\tau\delta_+ - (1+\eta)\delta_+\right]\big)$ and $x^\delta(0) = \bar{x}$, we obtain:
$$\int_0^\infty \|y^\delta - F(x^\delta(T))\|\, dT \le -c\left[\lim_{T\to\infty}\|x^\delta(T) - \bar{x}\|^2 - \|x^\delta(0) - \bar{x}\|^2\right] \le 0. \tag{23}$$
It follows that $\|y^\delta - F(x^\delta(T))\| = 0$ for all $T \ge 0$. This means that $\lim_{T\to\infty}\|F(x^\delta(T)) - y^\delta\| = 0$, i.e., $\lim_{T\to\infty} h(T) = -\tau\delta_+ < 0$, which contradicts the supposition. Consequently, there is a solution $T_* < \infty$ with $h(T_*) = 0$.
(c) Finally, we show by contraposition that the solution of $h(T) = 0$ is unique. Suppose that, by (a), there is $T_0 < \infty$ with $\|F(x^\delta(T)) - y^\delta\| = \tau\delta_+$ for all $T \in [T_0, T_0 + \epsilon]$ and some $\epsilon > 0$. Thus, $(d/dT)\|F(x^\delta(T)) - y^\delta\| = 0$ for $T \in [T_0, T_0 + \epsilon]$. By (12) and (20), we have:
$$\frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2 \le 2(1-\eta)\,\|y^\delta - F(x^\delta(T))\|\left[-\|y^\delta - F(x^\delta(T))\| + \frac{1+\eta}{1-\eta}\left(\delta + \frac{rL}{1-\eta}\right)\right] - 2\,\|x^\delta(T) - \bar{x}\|^2 \le -2\,\|x^\delta(T) - \bar{x}\|^2. \tag{24}$$
Similarly, by (19), we obtain:
$$\frac{d}{dT}\|F(x^\delta(T)) - y^\delta\|^2 \le -2\,\|F'(x^\delta(T))^*(y^\delta - F(x^\delta(T)))\|^2. \tag{25}$$
The parallelogram law, (7), (24), and (25) provide:
$$\|\dot{x}^\delta(T)\|^2 \le 2\,\|F'(x^\delta(T))^*[y^\delta - F(x^\delta(T))]\|^2 + 2\,\|x^\delta(T) - \bar{x}\|^2 \le -\frac{d}{dT}\|F(x^\delta(T)) - y^\delta\|^2 - \frac{d}{dT}\|x^\delta(T) - \bar{x}\|^2. \tag{26}$$
On $[T_0, T_0 + \epsilon]$, this means that $\|\dot{x}^\delta(T)\|^2 \le 0$, and thus $\|\dot{x}^\delta(T)\|^2 = 0$. Consequently, $\frac{d}{dT}x^\delta(T) = 0$, which implies that $x^\delta(T)$ is constant. For all $T > T_0$, we have $x^\delta(T) = x^\delta(T_0)$. Therefore, $\lim_{T\to\infty}\|F(x^\delta(T)) - y^\delta\| = \tau\delta_+$, which contradicts (b). □
Remark 1.
Due to the discrepancy principle and $2AB \le A^2 + B^2$, we have:
$$\frac{d}{dT}\|x^\delta(T) - x^*\|^2 \le -2\,\|F(x^\delta(T)) - y^\delta\|\left[(1-\eta)\,\|F(x^\delta(T)) - y^\delta\| - \delta_+(1+\eta)\right] - 2\,\|x^\delta(T) - x^*\|^2 + 2\,\|x^* - \bar{x}\|\,\|x^\delta(T) - x^*\|, \tag{27}$$
where $2\,\|x^* - \bar{x}\|\,\|x^\delta(T) - x^*\| \le \|x^* - \bar{x}\|^2 + \|x^\delta(T) - x^*\|^2$. Proving by contradiction, we can show that $\|x^\delta(T) - x^*\| < \|x^* - \bar{x}\|$. This means that $x^\delta(T) \in B_r(x^*)$, and thus $x^\delta(T) \in B_{2r}(\bar{x})$. In the same manner, for the noise-free case, we obtain $x(T) \in B_{2r}(\bar{x})$.

3. Convergence Results

In this section, we first show for exact data that the solution of (8) tends to a solution of $F(x) = y$ as $T \to \infty$, and that it tends to the unique solution of minimal distance to $\bar{x} = x(0)$ under the conventional condition. At the end of this section, we show that the proposed method provides a stable approximation $x^\delta(T_*)$ of a solution of (1) if the unique stopping time $T_*$ is chosen by the discrepancy principle (16). Note that the following result is used to prove that the solution $x(T)$ of (8) converges to a solution $x^* \in B_r(\bar{x})$, provided the tangential cone condition holds.
Lemma 4.
[13] Let $x^* \in B_r(\bar{x})$ be a solution of (1). If the tangential cone condition (9) holds, then any solution $x \in B_r(\bar{x})$ of (1) satisfies:
$$x^* - x \in \mathcal{N}(F'(x^*)).$$
Remark 2.
Because of Lemma 4, Equation (1) has a unique solution $x^+$ of minimal distance to $\bar{x}$, and it holds that $x^+ - \bar{x} \in \mathcal{N}(F'(x^+))^\perp$. If $\mathcal{N}(F'(x^+)) \subset \mathcal{N}(F'(x(T)))$, we get $x(T) - \bar{x} \in \mathcal{N}(F'(x^+))^\perp$; see [2].
Next, we prove the convergence of the solution x ( T ) of (8) for the noise-free case.
Theorem 1.
Let (3) and the tangential cone condition (9) be satisfied, and let $x(T)$ be the solution of (8) for $T > 0$. If (1) is solvable in $B_r(\bar{x})$, then:
$$x(T) \to x^*, \quad T \to \infty,$$
where $x^* \in B_r(\bar{x})$ is a solution of (1). If $x^+$ denotes the unique solution of minimal distance to $\bar{x}$ and if $\mathcal{N}(F'(x^+)) \subset \mathcal{N}(F'(x))$ for all $x \in B_r(\bar{x})$, then $x(T)$ converges to $x^+$.
Proof. 
Let $\tilde{x}^*$ be any solution of (1) in $B_r(\bar{x})$ and put:
$$e(T) := \tilde{x}^* - x(T).$$
We show that $\|e(T)\| \to 0$ for $T \to \infty$. Let $s$ be an arbitrary real number with $s > T$. Thus, it holds that:
$$\|e(T) - e(s)\|^2 = 2\,\langle e(s) - e(T),\, e(s)\rangle + \|e(T)\|^2 - \|e(s)\|^2.$$
Through (27), we have:
$$\frac{d}{dT}\|x(T) - \tilde{x}^*\|^2 + \|x(T) - \tilde{x}^*\|^2 \le r^2. \tag{30}$$
Obviously, for $c_1, c_2 \in \mathbb{R}$ and $c_2 \le 1$, $\|x(T) - \tilde{x}^*\|^2 = c_1 e^{-T} + c_2 r^2$ fulfills (30). Therefore, $\frac{d}{dT}\|x(T) - \tilde{x}^*\|^2$ is negative. This means that $\|x(T) - \tilde{x}^*\|$ is non-increasing. It follows that $\|e(T)\|$ and $\|e(s)\|$ converge (for $T \to \infty$) to some $\varepsilon \ge 0$, and consequently, $\lim_{T\to\infty}\big(\|e(T)\|^2 - \|e(s)\|^2\big) = \varepsilon - \varepsilon = 0$. Next, we show that $\langle e(s) - e(T), e(s)\rangle$ also tends to zero as $T \to \infty$. Through (8), we have:
$$e(s) - e(T) = x(T) - x(s) = -\int_T^s \dot{x}(\sigma)\, d\sigma,$$
and through (10), together with the inequality $\|y - F(x(s))\| \le \|y - F(x(\sigma))\|$ for $T \le \sigma \le s$, we have:
$$\begin{aligned}
\langle e(s) - e(T),\, e(s)\rangle &= -\int_T^s \big\langle F'(x(\sigma))^*[y - F(x(\sigma))] - (x(\sigma) - \bar{x}),\; \tilde{x}^* - x(s)\big\rangle\, d\sigma \\
&\le \int_T^s \big|\big\langle F'(x(\sigma))^*[y - F(x(\sigma))],\; \tilde{x}^* - x(s)\big\rangle\big|\, d\sigma + \int_T^s \big|\big\langle \bar{x} - x(\sigma),\; \tilde{x}^* - x(s)\big\rangle\big|\, d\sigma \\
&\le \int_T^s \|y - F(x(\sigma))\|\,\Big\{\|F'(x(\sigma))(\tilde{x}^* - x(\sigma))\| + \|F'(x(\sigma))(x(\sigma) - x(s))\|\Big\}\, d\sigma + \int_T^s \|\bar{x} - x(\sigma)\|\,\|\tilde{x}^* - x(s)\|\, d\sigma \\
&\le 3(1+\eta)\int_T^s \|y - F(x(\sigma))\|^2\, d\sigma + \|\tilde{x}^* - x(s)\|\int_T^s \|\bar{x} - x(\sigma)\|\, d\sigma.
\end{aligned}\tag{31}$$
The right-hand side of (31) tends to zero as $T \to \infty$ because of Corollary 1, which implies that $\langle e(s) - e(T), e(s)\rangle \to 0$ as $T \to \infty$, and thus:
$$\|e(T) - e(s)\| \to 0 \quad \text{as} \quad T \to \infty.$$
This means that $\lim_{T\to\infty} e(T)$ exists. Consequently, for $T \to \infty$, the solution $x(T)$ of (8) converges, say, to some $x^*$. Due to the continuity of $F$, we have $\lim_{T\to\infty} F(x(T)) = F(x^*)$. By Corollary 1, we have $\lim_{T\to\infty}\|y - F(x(T))\| = 0$, and thus $x^*$ is a solution of (1).
Using Lemma 4 and the additional assumption $\mathcal{N}(F'(x^+)) \subset \mathcal{N}(F'(x(T)))$ for all $x(T) \in B_r(\bar{x})$, we know that $x(T) - \bar{x} \in \mathcal{N}(F'(x^+))^\perp$. Therefore:
$$x^+ - x^* = x^+ - \bar{x} + \bar{x} - x^* \in \mathcal{N}(F'(x^+))^\perp.$$
Since, by Lemma 4, $x^+ - x^* \in \mathcal{N}(F'(x^+))$, this means $x^* = x^+$ and $x(T) \to x^+$. □
For the noisy case, the regularization parameter $T_* = T_*(\delta)$ chosen by the discrepancy principle (16) provides a solution $x^\delta(T_*)$ of (7) that converges to $x^* \in B_r(\bar{x})$ as $\delta \to 0$; see the next theorem.
Theorem 2.
Let the tangential cone condition (9) and $\|F(\bar{x}) - y^\delta\| > \tau\delta_+ > 0$ be satisfied. Let $x^\delta(T_*)$ be the solution of (7), where $T = T_*$ is chosen by the discrepancy principle (16) with $\tau > (1+\eta)/(1-\eta)$. If (1) is solvable in $B_r(\bar{x})$ and $x^* \in B_r(\bar{x})$ is a solution of (1), then:
$$x^\delta(T_*) \to x^*, \quad \delta \to 0.$$
Proof. 
Due to the results of Theorem 1 and Corollary 1, the proof can be done along the lines of the proof of Theorem 2.4 in [2]. □

4. Convergence Rates

In this section, we prove an order-optimal error bound under a particular sourcewise representation. The Hölder-type source condition is commonly used to analyze convergence rates for many regularization methods, e.g., [1,2,8,12]. An analysis of ill-posed problems under general source conditions of the form:
$$x^+ \in \big\{x \in X : \bar{x} - x = \varphi(F'^*F')\,\nu,\; \|\nu\| \le E\big\}, \tag{32}$$
with an index function $\varphi$, i.e., $\varphi$ is continuous, strictly increasing, and $\lim_{t\to 0}\varphi(t) = 0$, was reported in [14,15,16]. For the presented work, the following source condition (33) is necessary. However, the usual assumptions on the nonlinearity of the operator $F$ are still required.
Assumption 2.
Let $x^+ \in B_r(\bar{x})$ be the unique solution of minimal distance to $\bar{x}$. There exist an element $\nu \in X$ and constants $2\gamma \in (0,1]$ and $E \ge 0$ such that:
$$\bar{x} - x^+ = e^{-F'(x^+)^*F'(x^+)T}\big(F'(x^+)^*F'(x^+)\big)^\gamma\,\nu, \qquad \|\nu\| \le E, \quad T > 0, \tag{33}$$
with:
$$e^{-K^*KT} = I + \sum_{k=1}^{\infty}\frac{(-1)^k\,T^k\,(K^*K)^k}{k!} \qquad \text{and} \qquad K = F'(x^+).$$
The series is absolutely convergent, since $K$ and $K^*$ are bounded linear operators. For a matrix operator, this is simply the matrix exponential; see the sketch below.
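For a finite-dimensional (matrix) operator $K$, the operator exponential $e^{-K^*KT}$ in (33) can be checked directly against SciPy's `expm`; the matrix below is an arbitrary stand-in for $F'(x^+)$ and is our own illustration:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.6, 0.2],
              [0.0, 0.3]])           # stand-in for K = F'(x+) (assumed)
T = 2.0
A = K.T @ K

# Partial sums of e^{-K*K T} = I + sum_{k>=1} (-1)^k T^k (K*K)^k / k!
S, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ (-T * A) / k       # term now equals (-T A)^k / k!
    S = S + term
print(np.allclose(S, expm(-T * A)))  # expected: True
```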
Assumption 3.
For all $x \in B_r(\bar{x})$, there exist a bounded linear operator $R_x : Y \to Y$ and a constant $C > 0$ such that:
(i) $F'(x) = R_x F'(x^+)$;
(ii) $\|R_x - I\| \le C\,\|x - x^+\|$.
Proposition 1.
Let (3), (9), Assumption 3, and $\|F(\bar{x}) - y^\delta\| > \tau\delta$ with $\tau > (1+\eta)/(1-\eta)$ be satisfied. Let $x = x^\delta(T)$ be the solution of (7) with $T \le T_*$, where $T_*$ is chosen according to the discrepancy principle (16). Then, we have:
$$\|(I - R_x^*)(F(x) - y^\delta)\| \le \frac{C\tau}{(\tau-1)(1-\eta)}\,\|x - x^+\|\,\|F'(x^+)(x - x^+)\|. \tag{34}$$
Proof. 
Let $x = x^\delta(T)$ be the solution of (7) with $T \le T_*$. Using (3), (10), and (16), we obtain:
$$\|y^\delta - F(x)\| \le \|y^\delta - F(x^+)\| + \|F(x^+) - F(x)\| \le \frac{1}{\tau}\,\|F(x) - y^\delta\| + \frac{1}{1-\eta}\,\|F'(x^+)(x - x^+)\|,$$
and consequently:
$$\|y^\delta - F(x)\| \le \frac{\tau}{(\tau-1)(1-\eta)}\,\|F'(x^+)(x - x^+)\|. \tag{35}$$
By Assumption 3 and (35), our assertion is obtained. □
Proposition 2.
Let $B_r(\bar{x}) \subset \mathrm{int}(D(F))$ and Assumption 3 be satisfied. Then, for all $x, x^+ \in B_r(\bar{x})$, we have:
$$\|F(x) - F(x^+) - F'(x^+)(x - x^+)\| \le \frac{1}{2}\,C\,\|x - x^+\|\,\|F'(x^+)(x - x^+)\|. \tag{36}$$
Proof. 
The proof is similar to that in [1]. □
Proposition 3.
Let $x^\delta(T)$ be the solution of (7) and let $x^+$ denote the unique solution of minimal distance to $\bar{x}$. Then:
$$\begin{aligned}
x^\delta(T) - x^+ =\;& \frac{1}{2}\big(I + e^{-K^*KT}\big)(\bar{x} - x^+) + \frac{1}{2}\int_0^T e^{-K^*K(T-s)}K^*(y^\delta - y)\, ds \\
&+ \frac{1}{2}\int_0^T e^{-K^*K(T-s)}\,w(s)\, ds + \frac{1}{2}\int_0^T e^{-K^*K(T-s)}K^*K\big(x^\delta(s) - \bar{x}\big)\, ds,
\end{aligned}\tag{37}$$
where:
$$w(s) = K^*K\big(x^\delta(s) - x^+\big) - 2F'(x^\delta(s))^*\big[F(x^\delta(s)) - y^\delta\big] + K^*(y - y^\delta). \tag{38}$$
Proof. 
Integration by parts yields:
$$\int_0^T e^{-K^*K(T-s)}\,\dot{x}^\delta(s)\, ds = x^\delta(T) - e^{-K^*KT}\bar{x} - \int_0^T e^{-K^*K(T-s)}K^*K\,x^\delta(s)\, ds,$$
and direct integration results in:
$$\int_0^T e^{-K^*K(T-s)}K^*K\,x^+\, ds = x^+ - e^{-K^*KT}x^+.$$
Combining both equations yields:
$$\begin{aligned}
x^\delta(T) - x^+ =\;& e^{-K^*KT}(\bar{x} - x^+) + \int_0^T e^{-K^*K(T-s)}K^*(y^\delta - y)\, ds \\
&+ \int_0^T e^{-K^*K(T-s)}\Big[K^*K\big(x^\delta(s) - x^+\big) - F'(x^\delta(s))^*\big[F(x^\delta(s)) - y^\delta\big] + K^*(y - y^\delta)\Big]\, ds \\
&+ \int_0^T e^{-K^*K(T-s)}\big(\bar{x} - x^\delta(s)\big)\, ds.
\end{aligned}\tag{39}$$
Integration by parts again yields:
$$\int_0^T e^{-K^*K(T-s)}K^*K\big(x^\delta(s) - \bar{x}\big)\, ds = \big(x^\delta(T) - \bar{x}\big) + \int_0^T e^{-K^*K(T-s)}\big(x^\delta(s) - \bar{x}\big)\, ds - \int_0^T e^{-K^*K(T-s)}F'(x^\delta(s))^*\big[y^\delta - F(x^\delta(s))\big]\, ds. \tag{40}$$
Applying (40) to (39), the assertion is obtained. □
Using (A1) (Appendix A), we have:
$$\sup_{0 < \lambda \le L^2} \lambda^\gamma e^{-\lambda T} \le \frac{\tilde{C}}{(1+T)^\gamma}, \tag{41}$$
with $0 < \gamma \le 1/2$ and $\tilde{C} = \max\{2, \gamma^\gamma\}$. A quick numerical sanity check of this bound is given below.
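Since (41) is used repeatedly in the following, the snippet below compares both sides of the bound on a grid; the parameter values and grid are our own illustrative choices:

```python
import numpy as np

# Check (41): sup_{0 < lambda <= L^2} lambda^gamma * exp(-lambda * T)
#             <= C_tilde / (1 + T)^gamma,  with C_tilde = max{2, gamma^gamma}.
L, gamma = 1.0, 0.25
C_tilde = max(2.0, gamma ** gamma)
lam = np.linspace(1e-6, L ** 2, 20000)
for T in [0.1, 1.0, 10.0, 100.0]:
    lhs = np.max(lam ** gamma * np.exp(-lam * T))
    rhs = C_tilde / (1.0 + T) ** gamma
    print(f"T={T:7.1f}  sup={lhs:.4f}  bound={rhs:.4f}  ok={bool(lhs <= rhs)}")
```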
In the next theorem, we estimate the functions:
$$f_1(T) = \|x^\delta(T) - x^+\|, \qquad f_2(T) = \|K(x^\delta(T) - x^+)\|. \tag{42}$$
Theorem 3.
Let (3), (9), and Assumption 2 with $\gamma \in [\bar{\gamma}, 1/2]$, $\bar{\gamma} > 0$, be satisfied, let
$$\eta < \min\left\{1,\; \frac{1 - \hat{c}_1 - c_{\gamma 4}}{1 - c_{\gamma 4}},\; \frac{1 - 2\hat{c}_2}{1 - \hat{c}_2}\right\}, \qquad \hat{c}_1 + c_{\gamma 4} < 1, \qquad \hat{c}_2 < 1/2,$$
and let $\|F(\bar{x}) - y^\delta\| > \tau\delta_+$ hold. Let $B_r(\bar{x}) \subset \mathrm{int}(D(F))$, and let $x^+$ denote the unique solution of minimal distance to $\bar{x}$. If $x^\delta(T)$ is the solution of (7) with $T \le T_*$, where $T_*$ is chosen according to the discrepancy principle (16) with
$$\tau > \max\left\{1,\; \frac{2 + (1 - c_{\gamma 4})(1-\eta)}{(1 - c_{\gamma 4})(1-\eta) - \hat{c}_1},\; \frac{1/2 + (1 - \hat{c}_2)(1-\eta)}{(1 - \hat{c}_2)(1-\eta) - \hat{c}_2}\right\},$$
then the functions $f_1$ and $f_2$ of (42) satisfy the following system of integral inequalities of the second kind:
$$f_1(T) \le \frac{\tilde{c}E}{(1+T)^\gamma} + c_1\tilde{C}\sqrt{1+T}\,f_2(T) + c_2c_3\int_0^T \frac{f_1(s)f_2(s)}{\sqrt{1+T-s}}\, ds + \tau c_1c_3\int_0^T \frac{f_2(s)}{\sqrt{1+T-s}}\, ds + c_3\int_0^T \frac{f_1(s)}{1+T-s}\, ds =: g_1(T, f_1, f_2), \tag{43}$$
and:
$$f_2(T) \le \frac{\tilde{c}E}{(1+T)^{\gamma+1/2}} + \frac{c_1}{2}\,f_2(T) + c_2c_3\int_0^T \frac{f_1(s)f_2(s)}{1+T-s}\, ds + c_3(\tau c_1 + 1)\int_0^T \frac{f_2(s)}{1+T-s}\, ds =: g_2(T, f_1, f_2), \tag{44}$$
where the constants $c_1$, $c_2$, $c_3$, and $\tilde{c} > 0$ are given by:
$$c_1 = \frac{1}{(\tau-1)(1-\eta)}, \qquad c_2 = \frac{2C\tau}{(\tau-1)(1-\eta)} + \frac{C}{2}, \qquad c_3 = \frac{\tilde{C}}{2}, \qquad \tilde{c} = P\tilde{C}, \quad \text{and} \quad P = 1 + \frac{1}{2\gamma}.$$
Proof. 
Let the terms on the right-hand side of (37) be denoted by $I_1$, $I_2$, $I_3$, and $I_4$, respectively. Thus:
$$f_1(T) \le \|I_1\| + \|I_2\| + \|I_3\| + \|I_4\|. \tag{45}$$
Applying (33) and (41) for $I_1 = \frac{1}{2}\big(I + e^{-K^*KT}\big)(\bar{x} - x^+)$, we obtain:
$$\|I_1\| \le \frac{1}{2}\sup_{0 < \lambda \le L^2}\big(1 + e^{-\lambda T}\big)\lambda^\gamma e^{-\lambda T}\,\|\nu\| \le \frac{E\tilde{C}}{(1+T)^\gamma}. \tag{46}$$
Similarly, using (3) and (41) for $I_2 = \frac{1}{2}\int_0^T e^{-K^*K(T-s)}K^*(y^\delta - y)\, ds$, we get:
$$\|I_2\| \le \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2} e^{-\lambda(T-s)}\lambda^{1/2}\, ds\;\|y^\delta - y\| \le \frac{\tilde{C}}{2}\int_0^T \frac{ds}{(1+T-s)^{1/2}}\;\delta \le \delta\,\tilde{C}\sqrt{1+T}. \tag{47}$$
The discrepancy principle (16) and (35) provide:
$$\delta \le \delta_+ \le \frac{1}{\tau}\,\|F(x^\delta(T)) - y^\delta\| \le \frac{1}{(\tau-1)(1-\eta)}\,\|K(x^\delta(T) - x^+)\| = \frac{f_2(T)}{(\tau-1)(1-\eta)}. \tag{48}$$
Applying (48) to (47), we get:
$$\|I_2\| \le c_1\tilde{C}\,f_2(T)\,\sqrt{1+T}, \tag{49}$$
with $c_1 = \frac{1}{(\tau-1)(1-\eta)}$. Observe that Assumption 3(i) yields:
$$w(s) = K^*\big[K(x^\delta(s) - x^+) - 2R_{x^\delta(s)}^*\big(F(x^\delta(s)) - y^\delta\big) + y - y^\delta\big]. \tag{50}$$
We set:
$$\begin{aligned}
z(s) &= K\big(x^\delta(s) - x^+\big) - 2R_{x^\delta(s)}^*\big(F(x^\delta(s)) - y^\delta\big) + y - y^\delta \\
&= -\big[F(x^\delta(s)) - F(x^+) - F'(x^+)\big(x^\delta(s) - x^+\big)\big] + 2\big(I - R_{x^\delta(s)}^*\big)\big(F(x^\delta(s)) - y^\delta\big) + y^\delta - F(x^\delta(s)).
\end{aligned}\tag{51}$$
Through (34) and (36), we obtain:
$$\begin{aligned}
\|z(s)\| &\le \big\|F(x^\delta(s)) - F(x^+) - F'(x^+)(x^\delta(s) - x^+)\big\| + 2\,\big\|\big(I - R_{x^\delta(s)}^*\big)\big[F(x^\delta(s)) - y^\delta\big]\big\| + \big\|y^\delta - F(x^\delta(s))\big\| \\
&\le c_2\,\|x^\delta(s) - x^+\|\,\|F'(x^+)(x^\delta(s) - x^+)\| + \tau c_1\,\|F'(x^+)(x^\delta(s) - x^+)\|,
\end{aligned}\tag{52}$$
with $c_2 = \frac{2C\tau}{(\tau-1)(1-\eta)} + \frac{C}{2}$.
Using (52) together with (41) and (42), we get:
$$\|I_3\| \le \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2} e^{-\lambda(T-s)}\lambda^{1/2}\,\|z(s)\|\, ds \le \frac{c_2\tilde{C}}{2}\int_0^T \frac{f_1(s)f_2(s)}{(1+T-s)^{1/2}}\, ds + \frac{\tau c_1\tilde{C}}{2}\int_0^T \frac{f_2(s)}{(1+T-s)^{1/2}}\, ds. \tag{53}$$
Applying (33), (41), and (42) for $I_4$, we have:
$$\begin{aligned}
\|I_4\| &\le \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2}\lambda\, e^{-\lambda(T-s)}\,\|x^\delta(s) - x^+\|\, ds + \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2}\lambda^{1+\gamma}\, e^{-\lambda(2T-s)}\,\|\nu\|\, ds \\
&\le \frac{\tilde{C}}{2}\int_0^T \frac{f_1(s)}{1+T-s}\, ds + \frac{E\tilde{C}}{2}\int_0^T \frac{ds}{(1+2T-s)^{3/2}} \le \frac{\tilde{C}}{2}\int_0^T \frac{f_1(s)}{1+T-s}\, ds + \frac{E\tilde{C}}{\sqrt{1+T}}.
\end{aligned}\tag{54}$$
Applying (46), (49), (53), and (54) to (45), the first assertion is obtained.
We note that Proposition 3 yields:
$$\begin{aligned}
K\big(x^\delta(T) - x^+\big) =\;& \frac{1}{2}\big(I + e^{-KK^*T}\big)K(\bar{x} - x^+) + \frac{1}{2}\int_0^T e^{-KK^*(T-s)}KK^*(y^\delta - y)\, ds \\
&+ \frac{1}{2}\int_0^T e^{-KK^*(T-s)}K\,w(s)\, ds + \frac{1}{2}\int_0^T e^{-KK^*(T-s)}K\,K^*K\big(x^\delta(s) - \bar{x}\big)\, ds.
\end{aligned}\tag{55}$$
Let the terms on the right-hand side of (55) be denoted by $J_1$, $J_2$, $J_3$, and $J_4$, respectively. Thus:
$$f_2(T) \le \|J_1\| + \|J_2\| + \|J_3\| + \|J_4\|. \tag{56}$$
Applying (33) and (41) for $J_1 = \frac{1}{2}\big(I + e^{-KK^*T}\big)K(\bar{x} - x^+)$, we obtain:
$$\|J_1\| \le \frac{1}{2}\sup_{0 < \lambda \le L^2}\big(1 + e^{-\lambda T}\big)e^{-\lambda T}\lambda^{\gamma+1/2}\,\|\nu\| \le \frac{E\tilde{C}}{(1+T)^{\gamma+1/2}}. \tag{57}$$
Note that, by direct integration, we get:
$$\sup_{\lambda > 0}\int_0^T e^{-\lambda(T-s)}\lambda\, ds = \sup_{\lambda > 0}\big(1 - e^{-\lambda T}\big) \le 1. \tag{58}$$
Similarly, using (3), (41), and (48) for $J_2 = \frac{1}{2}\int_0^T e^{-KK^*(T-s)}KK^*(y^\delta - y)\, ds$, we get:
$$\|J_2\| \le \frac{1}{2}\sup_{0 < \lambda \le L^2}\int_0^T e^{-\lambda(T-s)}\lambda\, ds\;\|y^\delta - y\| \le \frac{c_1}{2}\,f_2(T). \tag{59}$$
Using (41) and (52), we obtain:
$$\|J_3\| \le \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2} e^{-\lambda(T-s)}\lambda\,\|z(s)\|\, ds \le \frac{c_2\tilde{C}}{2}\int_0^T \frac{f_1(s)f_2(s)}{1+T-s}\, ds + \frac{\tau c_1\tilde{C}}{2}\int_0^T \frac{f_2(s)}{1+T-s}\, ds. \tag{60}$$
Applying (33), (41), and (42) for $J_4$, we have:
$$\begin{aligned}
\|J_4\| &\le \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2} e^{-\lambda(T-s)}\lambda\,\|K(x^\delta(s) - x^+)\|\, ds + \frac{1}{2}\int_0^T \sup_{0 < \lambda \le L^2} e^{-\lambda(2T-s)}\lambda^{\gamma+3/2}\, ds\;\|\nu\| \\
&\le \frac{\tilde{C}}{2}\int_0^T \frac{f_2(s)}{1+T-s}\, ds + \frac{E\tilde{C}}{2}\int_0^T \frac{ds}{(1+2T-s)^{\gamma+3/2}} \le \frac{\tilde{C}}{2}\int_0^T \frac{f_2(s)}{1+T-s}\, ds + \frac{E\tilde{C}}{2(\gamma+1/2)}\,\frac{1}{(T+1)^{\gamma+1/2}}.
\end{aligned}$$
Applying (57)–(60) to (56), the second assertion is obtained. □
We remark that constants with $\hat{c}_1 + c_{\gamma 4} < 1$ and $\hat{c}_2 < 1/2$ exist for $0 < T \le \tilde{T}$. It may happen that $T_* \le \tilde{T}$ does not hold for all problems.
Proposition 4.
Let the assumptions of Theorem 3 be satisfied. If the constant $E$ is sufficiently small, then there exists a constant $c^* = c^*(\tau, \gamma, \eta)$ such that the following estimates hold:
$$f_1(T) \le \frac{c^*E}{(T+1)^\gamma} =: h_1(T), \tag{61}$$
$$f_2(T) \le \frac{c^*E}{(T+1)^{\gamma+1/2}} =: h_2(T). \tag{62}$$
Proof. 
We use the estimates (A2), (A3), (A5), (A6), and (A7) to show that:
$$g_1(T, h_1, h_2) \le h_1, \qquad g_2(T, h_1, h_2) \le h_2, \tag{63}$$
hold, where $f_1 \le g_1$ and $f_2 \le g_2$ are defined by (43) and (44), respectively. The definition of $g_1$ in (43) provides:
$$g_1(T, h_1, h_2) = \frac{\tilde{c}E}{(1+T)^\gamma} + c_1\tilde{C}\sqrt{1+T}\,h_2(T) + c_2c_3\int_0^T \frac{h_1(s)h_2(s)}{\sqrt{1+T-s}}\, ds + \tau c_1c_3\int_0^T \frac{h_2(s)}{\sqrt{1+T-s}}\, ds + c_3\int_0^T \frac{h_1(s)}{1+T-s}\, ds. \tag{64}$$
Substituting $h_1(T) = c^*E(T+1)^{-\gamma}$ and $h_2(T) = c^*E(T+1)^{-(\gamma+1/2)}$ in (64) and then estimating the integrals by (A2), (A5), and (A6), we obtain:
$$\begin{aligned}
g_1(T, h_1, h_2) &= \frac{\tilde{c}E}{(1+T)^\gamma} + \frac{c_1\tilde{C}c^*E}{(1+T)^\gamma} + c_2c_3(c^*)^2E^2\int_0^T \frac{ds}{\sqrt{1+T-s}\,(s+1)^{2\gamma+1/2}} \\
&\quad + \tau c_1c_3c^*E\int_0^T \frac{ds}{\sqrt{1+T-s}\,(s+1)^{\gamma+1/2}} + c_3c^*E\int_0^T \frac{ds}{(1+T-s)(s+1)^\gamma} \\
&\le \frac{E}{(1+T)^\gamma}\Big[\tilde{c} + c_1c^*\tilde{C} + c_2c_3c_{\gamma 1}(c^*)^2E + \tau c_1c_3\hat{c}_1c^* + c_3c_{\gamma 4}c^*\Big].
\end{aligned}$$
Similarly, if the integrals in $g_2(T, h_1, h_2)$ are estimated by (A3) and (A7), then:
$$\begin{aligned}
g_2(T, h_1, h_2) &= \frac{\tilde{c}E}{(1+T)^{\gamma+1/2}} + \frac{c_1}{2}\,h_2(T) + c_2c_3\int_0^T \frac{h_1(s)h_2(s)}{1+T-s}\, ds + c_3(\tau c_1 + 1)\int_0^T \frac{h_2(s)}{1+T-s}\, ds \\
&= \frac{\tilde{c}E}{(1+T)^{\gamma+1/2}} + \frac{c_1c^*E}{2(1+T)^{\gamma+1/2}} + c_2c_3(c^*E)^2\int_0^T \frac{ds}{(1+T-s)(s+1)^{2\gamma+1/2}} + (\tau c_1 + 1)c_3c^*E\int_0^T \frac{ds}{(1+T-s)(s+1)^{\gamma+1/2}} \\
&\le \frac{E}{(1+T)^{\gamma+1/2}}\Big[\tilde{c} + c_1c^*/2 + c_2c_3c_{\gamma 2}(c^*)^2E + (\tau c_1 + 1)c_3c^*\hat{c}_2\Big].
\end{aligned}$$
Due to the assumption, we have $\tilde{C} = 2$ and $\tilde{c} = 2P$. If $\|\nu\| \le E$ is sufficiently small,
$$\tau > \max\left\{1,\; \frac{2 + (1 - c_{\gamma 4})(1-\eta)}{(1 - c_{\gamma 4})(1-\eta) - \hat{c}_1},\; \frac{1/2 + (1 - \hat{c}_2)(1-\eta)}{(1 - \hat{c}_2)(1-\eta) - \hat{c}_2}\right\}, \qquad \eta < \min\left\{1,\; \frac{1 - \hat{c}_1 - c_{\gamma 4}}{1 - c_{\gamma 4}},\; \frac{1 - 2\hat{c}_2}{1 - \hat{c}_2}\right\},$$
$\hat{c}_1 + c_{\gamma 4} < 1$, and $\hat{c}_2 < 1/2$, then there exists $c^* = c^*(\tau, \gamma, \eta)$ such that:
$$\tilde{c} + c_1c^*\tilde{C} + c_2c_3c_{\gamma 1}(c^*)^2E + \tau c_1c_3\hat{c}_1c^* + c_3c_{\gamma 4}c^* = 2P + c^*\left[\frac{2 + \tau\hat{c}_1}{(\tau-1)(1-\eta)} + c_{\gamma 4}\right] + c_2c_3c_{\gamma 1}(c^*)^2E,$$
and:
$$\tilde{c} + c_1c^*/2 + c_2c_3c_{\gamma 2}(c^*)^2E + (\tau c_1 + 1)c_3c^*\hat{c}_2 = 2P + c^*\left[\frac{1/2 + \tau\hat{c}_2}{(\tau-1)(1-\eta)} + \hat{c}_2\right] + c_2c_3c_{\gamma 2}(c^*)^2E,$$
are smaller than $c^*$. Our assertion is obtained via (63). □
Next, we provide the main result of this section.
Theorem 4.
Let the assumptions of Theorem 3 be satisfied. If the constant $E$ is sufficiently small, then there exists a constant $c = c(\tau, \gamma, \eta)$ such that:
$$\|x^\delta(T_*) - x^+\| \le c\,E^{\frac{1}{2\gamma+1}}\,(\delta_+)^{\frac{2\gamma}{2\gamma+1}}.$$
Proof. 
We observe that (33) provides:
$$x^\delta(s) - \bar{x} = x^\delta(s) - x^+ + x^+ - \bar{x} = x^\delta(s) - x^+ - e^{-K^*KT_*}(K^*K)^\gamma\nu,$$
where $T$ is replaced by $T_*$. Similarly, using (33) and (37), we get:
$$\begin{aligned}
x^\delta(T_*) - x^+ =\;& \frac{1}{2}\big(I + e^{-K^*KT_*}\big)e^{-K^*KT_*}(K^*K)^\gamma\nu + \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}K^*(y^\delta - y)\, ds \\
&+ \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}K^*z(s)\, ds + \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}K^*K\big(x^\delta(s) - \bar{x}\big)\, ds.
\end{aligned}\tag{67}$$
We define:
$$\nu^* = \frac{1}{2}\big(I + e^{-K^*KT_*}\big)e^{-K^*KT_*}\nu + \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}(K^*K)^{1/2-\gamma}z(s)\, ds + \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}(K^*K)^{1-\gamma}\big(x^\delta(s) - \bar{x}\big)\, ds, \tag{68}$$
where $z(s)$ is given by (51). Thus, (67) can be rewritten as:
$$x^\delta(T_*) - x^+ = (K^*K)^\gamma\nu^* + \frac{1}{2}\int_0^{T_*} e^{-K^*K(T_*-s)}K^*(y^\delta - y)\, ds. \tag{69}$$
Due to (41) and (52), we have:
$$\begin{aligned}
\|\nu^*\| &\le \frac{1}{2}\sup_{0 < \lambda \le L^2}\big(e^{-\lambda T_*} + e^{-2\lambda T_*}\big)\|\nu\| + \frac{1}{2}\int_0^{T_*}\sup_{0 < \lambda \le L^2} e^{-\lambda(T_*-s)}\lambda^{1/2-\gamma}\,\|z(s)\|\, ds \\
&\quad + \frac{1}{2}\int_0^{T_*}\sup_{0 < \lambda \le L^2} e^{-\lambda(T_*-s)}\lambda^{1-\gamma}\,\|x^\delta(s) - x^+\|\, ds + \frac{1}{2}\sup_{0 < \lambda \le L^2}\int_0^{T_*} e^{-\lambda(2T_*-s)}\lambda\, ds\,\|\nu\| \\
&\le \frac{3}{2}\|\nu\| + \frac{\tilde{C}}{2}\int_0^{T_*}\frac{c_2\,f_1(s)f_2(s) + \tau c_1\,f_2(s)}{(1+T_*-s)^{1/2-\gamma}}\, ds + \frac{\tilde{C}}{2}\int_0^{T_*}\frac{f_1(s)}{(1+T_*-s)^{1-\gamma}}\, ds.
\end{aligned}\tag{70}$$
Using Proposition 4, (A4), (A8), and (A9), (70) becomes:
$$\begin{aligned}
\|\nu^*\| &\le \frac{3}{2}\|\nu\| + \frac{\tilde{C}}{2}\,c_2(c^*E)^2\int_0^{T_*}\frac{ds}{(s+1)^{2\gamma+1/2}(1+T_*-s)^{1/2-\gamma}} + \frac{\tilde{C}}{2}\,\tau c_1c^*E\int_0^{T_*}\frac{ds}{(s+1)^{\gamma+1/2}(1+T_*-s)^{1/2-\gamma}} \\
&\quad + \frac{\tilde{C}}{2}\,c^*E\int_0^{T_*}\frac{ds}{(1+T_*-s)^{1-\gamma}(s+1)^\gamma} \le \frac{3}{2}E + c_2c_3(c^*E)^2c_{\gamma 3} + \tau c_1c_3c^*E\,\hat{c}_3 + c_3c^*E\,\hat{c}_4 =: \tilde{c}_1E.
\end{aligned}\tag{71}$$
Through (3), (10), and (69), we obtain:
$$\begin{aligned}
\|K(K^*K)^\gamma\nu^*\| &\le \|K(x^\delta(T_*) - x^+)\| + \frac{1}{2}\left\|\int_0^{T_*} e^{-KK^*(T_*-s)}KK^*(y^\delta - y)\, ds\right\| \\
&\le (1+\eta)\,\|F(x^\delta(T_*)) - F(x^+)\| + \frac{1}{2}\sup_{0 < \lambda \le L^2}\int_0^{T_*} e^{-\lambda(T_*-s)}\lambda\, ds\,\|y^\delta - y\| \\
&\le (1+\eta)\,\|F(x^\delta(T_*)) - y^\delta\| + (1+\eta)\delta + \frac{\delta}{2} \le (1+\eta)\tau\delta_+ + \left(\frac{3}{2}+\eta\right)\delta \le \tilde{c}_2\delta_+.
\end{aligned}\tag{72}$$
The interpolation inequality $\|B^p\nu^*\| \le \|B^q\nu^*\|^{p/q}\,\|\nu^*\|^{1-p/q}$ with $B = K^*K$, $p = \gamma$, and $q = \gamma + 1/2$, together with (71) and (72), provides:
$$\|(K^*K)^\gamma\nu^*\| \le \|(K^*K)^{\gamma+1/2}\nu^*\|^{\frac{2\gamma}{2\gamma+1}}\,\|\nu^*\|^{1-\frac{2\gamma}{2\gamma+1}} \le \big(\tilde{c}_2\delta_+\big)^{\frac{2\gamma}{2\gamma+1}}\big(\tilde{c}_1E\big)^{\frac{1}{2\gamma+1}} \le \tilde{c}\,E^{\frac{1}{2\gamma+1}}\,(\delta_+)^{\frac{2\gamma}{2\gamma+1}}. \tag{73}$$
From (48) and (62), we have:
$$(1+T_*)^{\gamma+1/2}\,\delta_+ \le (1+T_*)^{\gamma+1/2}\,\frac{f_2(T_*)}{(\tau-1)(1-\eta)} \le \frac{c^*E}{(\tau-1)(1-\eta)}.$$
Thus:
$$\sqrt{1+T_*}\;\delta_+ \le \left(\frac{c^*E}{(\tau-1)(1-\eta)}\right)^{\frac{1}{2\gamma+1}}(\delta_+)^{\frac{2\gamma}{2\gamma+1}}. \tag{74}$$
Through (41) and (69), we have:
$$\|x^\delta(T_*) - x^+\| \le \|(K^*K)^\gamma\nu^*\| + \frac{1}{2}\left\|\int_0^{T_*} e^{-K^*K(T_*-s)}K^*(y^\delta - y)\, ds\right\| \le \|(K^*K)^\gamma\nu^*\| + \tilde{C}\sqrt{1+T_*}\;\delta_+. \tag{75}$$
The assertion is obtained via (73), (74), and (75). □

5. Conclusions

In this article, an additional term was included in the Showalter differential equation in order to study the impact of this term on the classical asymptotical regularization proposed in [1]. In the presented work, the regularization parameter was chosen according to the a posteriori choice rule (16), where $\delta_+ = \delta + \frac{Lr}{1-\eta}$ is needed instead of $\delta$. It includes not only the noise level but also information on the local properties of the nonlinear operator $F$; see [12] for the analysis of Tikhonov regularization using the modified discrepancy principle. This may cause a slightly bigger residual norm than the conventional discrepancy principle. However, it still allows a stable approximation $x^\delta(T_*)$ of a solution of (1) from the noisy data $y^\delta$. To ensure the convergence of the proposed method, the additional Assumption 1 is required.
Apart from the convergence result, the proposed method attains the optimal convergence rate under the source condition (33), i.e., $\bar{x} - x^+ = e^{-F'(x^+)^*F'(x^+)T}\big(F'(x^+)^*F'(x^+)\big)^\gamma\nu$, and the assumptions on the nonlinearity of the operator $F$. Although the exponential term $e^{-F'(x^+)^*F'(x^+)T}$ in the source condition was not necessary in the classical asymptotical regularization to obtain the optimal rate [1], we discovered that the exponential term is an important key to obtaining the optimal rate for the presented method and probably also for the modified iterative RKTM studied in [9]. The modified iterative RKTM obtained the rate $O(k_*^{-\psi/2})$ under the Hölder-type source condition, where $k_*$ was chosen in accordance with the discrepancy principle and $0 < \psi < 1$ was fixed. To obtain the optimal rate of the modified iterative RKTM under the source condition (33), a detailed analysis is required.
Furthermore, numerical integration methods for solving (2) or (7), such as Runge–Kutta-type methods, can be written in the following form:
$$x_{k+1} = x_k + \omega\,\Phi_\omega(t_k, x_k),$$
where $\omega > 0$ is a relaxation parameter and $\Phi_\omega$ is an increment function [17]. Another discretization technique is based on Padé approximation, with an increment of the form [18]:
$$x_{k+1} = x_k + \omega\,\tilde{\Phi}_\omega(t_k, x_k)\,\Phi_\omega(t_k, x_k).$$
The effects of Padé integration in the study of the chaotic behavior of conservative nonlinear chaotic systems were reported by Butusov et al. [18]. Their comparative study of Runge–Kutta methods versus Padé methods shows that chaotic behavior appears in models obtained by nonlinear integration techniques where chaos does not appear with conventional methods. A regularized algorithm for computing Padé approximations in floating point arithmetic or for problems with noise was reported by Gonnet et al. [19]. However, the role and effects of Padé integration for solving (2) or (7) require a detailed study; this is an interesting task for future investigations. A sketch of a Runge–Kutta-type step for (7) in increment form is given below.
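As an illustration of the increment-function form above, a two-stage Runge–Kutta (Heun) step for the modified flow (7) might look as follows. This is our own sketch under assumed toy data, not the scheme analyzed in [9,17]:

```python
import numpy as np

def F(x):
    return np.array([x[0] * x[1], x[1] ** 2])

def F_prime(x):
    return np.array([[x[1], x[0]],
                     [0.0, 2.0 * x[1]]])

def rhs(x, y_delta, x_bar):
    # Right-hand side of the modified Showalter flow (7).
    return F_prime(x).T @ (y_delta - F(x)) - (x - x_bar)

def heun_increment(x, y_delta, x_bar, omega):
    # Two-stage Runge-Kutta (Heun) increment function Phi_omega(t_k, x_k);
    # the flow is autonomous, so the time argument t_k is not needed here.
    k1 = rhs(x, y_delta, x_bar)
    k2 = rhs(x + omega * k1, y_delta, x_bar)
    return 0.5 * (k1 + k2)

x_bar = np.array([0.98, 1.98])
y_delta = F(np.array([1.0, 2.0])) + 1e-2 * np.array([0.6, -0.8])
x, omega = x_bar.copy(), 0.05
for k in range(500):       # fixed iteration budget for the demo; a stopping
    x = x + omega * heun_increment(x, y_delta, x_bar, omega)  # rule such as
print(x)                   # (16) would replace the budget in practice
```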

Author Contributions

The authors P.P., N.S., and C.B. carried out this research work jointly and drafted the manuscript together. All authors validated the article and read the final version.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the Faculty of Science of Silpakorn University and by Centre of Excellence in Mathematics of Mahidol University. We would like to express special thanks to Assistant Professor Jittisak Rakbud for their valuable help. The authors would like to thank the reviewers for valuable hints and improvements.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

For a bounded Fréchet derivative (11), we have (see also [1]):
$$\sup_{0 < \lambda \le L^2}\lambda^\gamma e^{-\lambda T} = \gamma^\gamma(eT)^{-\gamma} \le \frac{1 + \gamma^\gamma e^{-\gamma}}{(1+T)^\gamma} \quad \text{for } 0 \le \gamma \le TL^2, \qquad \sup_{0 < \lambda \le L^2}\lambda^\gamma e^{-\lambda T} \le \frac{\max\{\gamma^\gamma, 1\}}{(1+T)^\gamma} \quad \text{for } \gamma \ge TL^2. \tag{A1}$$
Proposition A1.
Let $\gamma \in (0, 1/2]$ and $T \in (0, \bar{T}]$; then there exist constants $c_{\gamma 1}$, $c_{\gamma 2}$, $c_{\gamma 3}$, $c_{\gamma 4}$, $\hat{c}_1$, $\hat{c}_2$, $\hat{c}_3$, and $\hat{c}_4$ with:
$$\int_0^T \frac{ds}{\sqrt{T-s+1}\,(s+1)^{2\gamma+1/2}} \le \frac{c_{\gamma 1}}{(T+1)^\gamma}, \tag{A2}$$
$$\int_0^T \frac{ds}{(T-s+1)(s+1)^{2\gamma+1/2}} \le \frac{c_{\gamma 2}}{(T+1)^{\gamma+1/2}}, \tag{A3}$$
$$\int_0^T \frac{ds}{(T-s+1)^{1/2-\gamma}(s+1)^{2\gamma+1/2}} \le c_{\gamma 3}, \tag{A4}$$
$$\int_0^T \frac{ds}{(T-s+1)(s+1)^\gamma} \le \frac{c_{\gamma 4}}{(T+1)^\gamma}, \tag{A5}$$
$$\int_0^T \frac{ds}{\sqrt{T-s+1}\,(s+1)^{\gamma+1/2}} \le \frac{\hat{c}_1}{(T+1)^\gamma}, \tag{A6}$$
$$\int_0^T \frac{ds}{(T-s+1)(s+1)^{\gamma+1/2}} \le \frac{\hat{c}_2}{(T+1)^{\gamma+1/2}}, \tag{A7}$$
$$\int_0^T \frac{ds}{(T-s+1)^{1/2-\gamma}(s+1)^{\gamma+1/2}} \le \hat{c}_3, \tag{A8}$$
$$\int_0^T \frac{ds}{(T-s+1)^{1-\gamma}(s+1)^\gamma} \le \hat{c}_4. \tag{A9}$$
Proof. 
To prove (A2), we observe that the integral in (A2) is bounded above by a Riemann sum. If the interval $[0, T]$ is divided into $m$ subintervals, then, for some $\tilde{c}_{\gamma 1}, c_{\gamma 1} > 0$, we have:
$$\int_0^T \frac{ds}{\sqrt{T-s+1}\,(s+1)^{2\gamma+1/2}} \le \frac{T}{m}\,\tilde{c}_{\gamma 1}\sum_{j=0}^{m-1}\left(\frac{m-j}{m}\,T + 1\right)^{-1/2}\left(\frac{jT}{m} + 1\right)^{-(2\gamma+1/2)} \le \frac{T}{m}\sum_{j=0}^{m-1} c_{\gamma 1}\,(T+1)^{-1/2}\,(T+1)^{-(2\gamma+1/2)} \le \frac{c_{\gamma 1}}{(T+1)^\gamma}. \qquad \square$$

References

1. Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
2. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
3. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer Academic Publishers: Norwell, MA, USA, 1996.
4. Kabanikhin, S. Inverse and Ill-Posed Problems: Theory and Applications; Inverse and Ill-Posed Problems Series; De Gruyter: Berlin, Germany, 2011.
5. Hansen, P. Regularization Tools: A Matlab Package for Analysis and Solution of Discrete Ill-Posed Problems; IMM-REP; Institut for Matematisk Modellering, Danmarks Tekniske Universitet: Lyngby, Denmark, 1994.
6. Tikhonov, A.; Goncharsky, A.; Stepanov, V.; Yagola, A. Numerical Methods for the Solution of Ill-Posed Problems; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 2013.
7. Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Radon Series on Computational and Applied Mathematics; Walter de Gruyter: Berlin, Germany, 2008.
8. Scherzer, O. A modified Landweber iteration for solving parameter estimation problems. Appl. Math. Optim. 1998, 38, 45–68.
9. Pornsawad, P.; Böckmann, C. Modified iterative Runge–Kutta-type methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
10. Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018, 1–26.
11. Hubmer, S.; Ramlau, R. Convergence analysis of a two-point gradient method for nonlinear ill-posed problems. Inverse Probl. 2017, 33, 095004.
12. Jin, Q.-N. Applications of the modified discrepancy principle to Tikhonov regularization of nonlinear ill-posed problems. SIAM J. Numer. Anal. 1999, 36, 475–490.
13. Hanke, M. A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Probl. 1997, 13, 79–95.
14. Mathé, P.; Hofmann, B. How general are general source conditions? Inverse Probl. 2008, 24, 015009.
15. Hofmann, B.; Mathé, P. Analysis of profile functions for general linear regularization methods. SIAM J. Numer. Anal. 2007, 45, 1122–1141.
16. Tautenhahn, U. Optimality for ill-posed problems under general source conditions. Numer. Funct. Anal. Optim. 1998, 19, 377–398.
17. Böckmann, C.; Pornsawad, P. Iterative Runge–Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
18. Butusov, D.; Karimov, A.; Tutueva, A.; Kaplun, D.; Nepomuceno, E.G. The effects of Padé numerical integration in simulation of conservative chaotic systems. Entropy 2019, 21, 362.
19. Gonnet, P.; Güttel, S.; Trefethen, L.N. Robust Padé approximation via SVD. SIAM Rev. 2013, 55, 101–117.
