1. Introduction
Let $X$ and $Y$ be infinite-dimensional real Hilbert spaces with inner products $\langle \cdot , \cdot \rangle$ and norms $\| \cdot \|$. We consider a nonlinear operator equation
$$F(x) = y, \qquad (1)$$
where $F : \mathcal{D}(F) \subseteq X \to Y$ is a nonlinear operator between the Hilbert spaces $X$ and $Y$. If the operator $F$ is not continuously invertible, then (1) may not have a solution, and if a solution exists, arbitrarily small perturbations of the data may lead to unacceptable results. In other words, problems of the form (1) do not depend continuously on the data. It was shown in Tautenhahn (1994) [1] that asymptotic regularization, i.e., the approximation of Equation (1) by a solution of the Showalter differential equation
$$\dot{x}^{\delta}(t) = F'(x^{\delta}(t))^{*}\big(y^{\delta} - F(x^{\delta}(t))\big), \qquad 0 < t \le T, \qquad x^{\delta}(0) = \bar{x}, \qquad (2)$$
is a stable method for solving nonlinear ill-posed problems. Here, the regularization parameter $T$ is chosen according to the discrepancy principle, $x^{\delta}(T)$ is a suitable approximation to the unknown solution $x^{\dagger}$, and $y^{\delta}$ are the available noisy data with
$$\| y - y^{\delta} \| \le \delta. \qquad (3)$$
Under the Hölder-type source condition $x^{\dagger} - \bar{x} = (F'(x^{\dagger})^{*} F'(x^{\dagger}))^{\mu} v$ for the regularized solution in $X$, the optimal rate $O(\delta^{2\mu/(2\mu+1)})$ is obtained using the assumption that a bounded linear operator $R(\tilde{x}, x)$ exists such that
$$F'(\tilde{x}) = R(\tilde{x}, x)\,F'(x) \qquad (4)$$
and
$$\| R(\tilde{x}, x) - I \| \le C \| \tilde{x} - x \| \qquad (5)$$
are satisfied; see [1,2]. Detailed studies of inverse ill-posed problems may be found, e.g., in [3] and [4,5,6,7].
It is well known that asymptotic regularization is a continuous version of the Landweber iteration. A forward Euler discretization of (2) gives back a damped Landweber iteration
$$x_{k+1}^{\delta} = x_{k}^{\delta} + \omega\,F'(x_{k}^{\delta})^{*}\big(y^{\delta} - F(x_{k}^{\delta})\big)$$
for some relaxation parameter $\omega > 0$, which is convergent for exact data and stable with respect to data errors [2].
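As a concrete illustration, the following minimal sketch (not the authors' code) applies the damped Landweber iteration with discrepancy-principle stopping to an ill-conditioned linear toy problem; the operator, the noise model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: damped Landweber iteration
#   x_{k+1} = x_k + omega * F'(x_k)^* (y_delta - F(x_k))
# for a linear toy problem F(x) = A x, stopped by the discrepancy principle
# ||y_delta - F(x_k)|| <= tau * delta.

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T       # ill-conditioned operator

x_true = V[:, 0] + 0.5 * V[:, 1]                 # exact solution
y = A @ x_true                                   # exact data
delta = 1e-3                                     # noise level
y_delta = y + delta * rng.standard_normal(n) / np.sqrt(n)

omega = 0.5 / np.linalg.norm(A, 2) ** 2          # relaxation parameter
tau = 1.1                                        # discrepancy constant > 1
x = np.zeros(n)                                  # initial guess
for k in range(100_000):
    residual = y_delta - A @ x
    if np.linalg.norm(residual) <= tau * delta:  # a posteriori stopping rule
        break
    x = x + omega * (A.T @ residual)             # damped Landweber step

print(f"stopped at k = {k}, error = {np.linalg.norm(x - x_true):.3e}")
```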
Later, Scherzer [8] observed that the term $\alpha_{k}(\xi - x_{k}^{\delta})$ appears in a regularized Gauss–Newton method, i.e.,
$$x_{k+1}^{\delta} = x_{k}^{\delta} + \big(F'(x_{k}^{\delta})^{*}F'(x_{k}^{\delta}) + \alpha_{k} I\big)^{-1}\Big(F'(x_{k}^{\delta})^{*}\big(y^{\delta} - F(x_{k}^{\delta})\big) + \alpha_{k}(\xi - x_{k}^{\delta})\Big).$$
To highlight the importance of this term for iterative regularization, Scherzer [8] included it in the Landweber method and proved a convergence rate result under the usual Hölder-type sourcewise representation, without assumptions on the nonlinearity of the operator $F$ such as (4) and (5).
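The role of this term can be made explicit in code. The sketch below implements one step of the regularized Gauss–Newton update in the standard form recalled above, specialized to a linear toy problem (so that $F'(x) = A$); the function name and its arguments are hypothetical.

```python
import numpy as np

# One regularized Gauss-Newton step for a linear toy problem F(x) = A x;
# the summand alpha_k * (xi - x_k) is the additional term discussed in [8].

def irgn_step(A, y_delta, x_k, xi, alpha_k):
    n = A.shape[1]
    rhs = A.T @ (y_delta - A @ x_k) + alpha_k * (xi - x_k)  # additional term
    return x_k + np.linalg.solve(A.T @ A + alpha_k * np.eye(n), rhs)
```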
Moreover, in [9], the additional term was included in the whole family of iterative Runge–Kutta-type methods (RKTM):
where the vector $b$ and the matrix $A$ are defined by the Runge–Kutta method and $\tau_{k}$ is a relaxation parameter; this family includes the modified Landweber iteration. Using a priori and a posteriori stopping rules, the convergence rate results of the RKTM are obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled. However, References [8,9] have to take into account that the nonlinear operator $F$ is properly scaled with a Lipschitz-continuous Fréchet derivative in $B_{\rho}(x^{\dagger})$, i.e.,
$$\| F'(x) - F'(\tilde{x}) \| \le L \| x - \tilde{x} \|, \qquad x, \tilde{x} \in B_{\rho}(x^{\dagger}),$$
with some constant $L > 0$, instead of (4) and (5).
Due to the minimal assumptions needed for the convergence analysis of the modified iterative RKTM, we studied the additional term in detail in the continuous version, written as Equation (7) for the noisy case and as Equation (8) for the noise-free case.
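For illustration, a forward-Euler time stepping of such a modified flow might look as follows. The precise additional term is fixed by Equations (7) and (8); this sketch assumes the Scherzer-type term $\xi - x^{\delta}(t)$ from the Gauss–Newton update above, which is an assumption of the sketch and not necessarily the paper's exact scaling.

```python
import numpy as np

# Forward-Euler integration of a modified Showalter flow of the ASSUMED form
#   x'(t) = F'(x(t))^* (y_delta - F(x(t))) + (xi - x(t));
# F below is a toy nonlinear operator, chosen only for illustration.

A = np.diag(0.9 ** np.arange(20))          # ill-conditioned linear part

def F(x):
    return A @ x + 0.01 * x**3             # toy nonlinear operator

def F_prime(x):
    return A + np.diag(0.03 * x**2)        # its Frechet derivative (Jacobian)

def modified_flow(x0, xi, y_delta, dt=0.01, T=50.0):
    x = x0.copy()
    for _ in range(int(T / dt)):
        x = x + dt * (F_prime(x).T @ (y_delta - F(x)) + (xi - x))
    return x
```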
Recently, a second-order asymptotic regularization for the linear problem $Ax = y$ was investigated in [10]:
$$\ddot{x}^{\delta}(t) + \eta\,\dot{x}^{\delta}(t) = A^{*}\big(y^{\delta} - A x^{\delta}(t)\big).$$
Under a Hölder-type source condition and Morozov's discrepancy principle, this method achieves the same power-type convergence rate as (2) in the linear case. Furthermore, a discrete second-order iterative regularization for the nonlinear case was proposed in [11].
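A minimal time stepper for such a damped second-order flow, assuming the damped form displayed above with damping parameter $\eta$ (cf. [10]), could read:

```python
import numpy as np

# Semi-implicit Euler stepping of the damped second-order flow
#   x''(t) + eta * x'(t) = A^* (y_delta - A x(t));
# A, y_delta, and all parameters are illustrative placeholders.

def second_order_flow(A, y_delta, x0, eta=1.0, dt=0.01, T=50.0):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(int(T / dt)):
        a = A.T @ (y_delta - A @ x) - eta * v  # "acceleration" of the flow
        v = v + dt * a                         # update velocity first ...
        x = x + dt * v                         # ... then position
    return x
```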
The paper is organized as follows: In Section 2, the assumptions and preliminary results are given. We show that if the stopping time $T$ is chosen to be a solution of the nonlinear Equation (17), then this solution exists and is unique. Section 3 is devoted to the convergence analysis of the proposed method under the tangential cone condition, using, in addition, the modified discrepancy principle in the noisy case. In Section 4, we show that the optimal rate is obtained under the modified source condition. Section 5 provides the conclusion.
2. Preliminaries
For an ill-posed problem, a local property of the nonlinear operator is usually used to ensure at least local convergence of a regularization method, instead of nonexpansivity of the fixed-point operator [7]. For the presented work, we can provide local convergence if the nonlinear operator fulfills the following tangential cone condition, i.e., for all $x, \tilde{x} \in B_{\rho}(x^{\dagger})$:
$$\| F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x) \| \le \eta \| F(\tilde{x}) - F(x) \|, \qquad 0 \le \eta < 1. \qquad (9)$$
Equation (9) immediately implies that, for all $x, \tilde{x} \in B_{\rho}(x^{\dagger})$, we have:
$$\frac{1}{1+\eta}\,\| F'(x)(\tilde{x} - x) \| \le \| F(\tilde{x}) - F(x) \| \le \frac{1}{1-\eta}\,\| F'(x)(\tilde{x} - x) \|. \qquad (10)$$
A stronger condition was used in [12] to provide the local convergence of Tikhonov regularization, i.e.,
$$\| F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x) \| \le C \| \tilde{x} - x \|\,\| F(\tilde{x}) - F(x) \|.$$
This condition implies (9) if $\| \tilde{x} - x \|$ is sufficiently small. In addition to the local condition (Equation (9)), we assume that the Fréchet derivative of $F$ is bounded, i.e., for all $x \in B_{\rho}(x^{\dagger})$:
$$\| F'(x) \| \le c_{F}. \qquad (11)$$
Adding the additional term to the Showalter differential equation requires a more involved proof. To prove the convergence of the presented method, the following assumption is needed; it is, however, not necessary for the convergence rate result in Section 4 or for the discretized version [9].
Assumption 1. For and , the following properties hold:
- (i)
converges;
- (ii)
converges.
The following lemma will be useful.
Lemma 1. For any continuous function f on and , if converges, then:
- (i)
converges for all ;
- (ii)
Corollary 1. Let Assumption 1 be satisfied. Then:
- (i)
;
- (ii)
Proof. The proof follows directly from Lemma 1. □
To prove the existence and uniqueness of the solution of the nonlinear equation in Lemma 3, we first provide Lemma 2.
Lemma 2. Let be a solution of (1). Let (3) and (9) hold. Then: Proof. Using (9), we rewrite (13) and obtain:
Our assertion follows from (3), (10), and (14). □
In [1], the stopping time $T$ serves as a regularization parameter and is chosen such that the discrepancy principle is satisfied, i.e.,
$$\| y^{\delta} - F(x^{\delta}(T)) \| \le \tau\delta < \| y^{\delta} - F(x^{\delta}(t)) \|, \qquad 0 \le t < T,$$
with some $\tau > 1$. However, in our work, we use a variation of the discrepancy principle. Let
be defined by:
Note that
. In the presented work, the regularization parameter fulfills the following rule:
where
is a solution of the following nonlinear equation:
If
, Tautenhahn [1] shows that a unique solution of
exists, which is
.
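Numerically, once the solution of (7) can be evaluated up to a given time, the stopping time defined by (17) can be located by bisection: Lemma 3 below shows that the underlying function is non-increasing and has a unique root. In the following sketch, `residual` is a hypothetical callable standing for that function.

```python
# Bisection for the stopping time T defined by a monotone scalar equation,
# as guaranteed by Lemma 3; residual(t) is assumed non-increasing with
# residual(0) > 0 >= residual(t_max), e.g., evaluated by integrating (7)
# up to time t.

def find_stopping_time(residual, t_max, tol=1e-6):
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid               # threshold not yet reached: search right
        else:
            hi = mid               # below threshold: search left
    return 0.5 * (lo + hi)
```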
Lemma 3. Let (9) and (11) be fulfilled, let be a solution of (7), and let be a solution of (1) in . If with , then there exists a unique solution of (17). Proof. (a) Observe that
is continuous with
. Using (
7), we have:
Using (
3), (
9), and (
10), we can estimate the above derivative by:
Moreover, (
11) together with the fact that
yield:
The variation of the discrepancy principle (Equation (16)) ensures that the right-hand side of (19) is negative. Thus,
is non-increasing.
(b) Next, we show that
. Suppose that
. Under this supposition, we have
for all
. Applying (
11) to (
12) and using the fact that
, we get:
Rearranging (
20), we obtain:
Using the discrepancy principle (Equation (
16)), we can rewrite (
21) as:
Integrating (
22) on both sides and using
and
, we obtain:
It follows that for all . This means that or , which contradicts the assumption. Consequently, there is a solution with .
(c) Finally, we show by contradiction that the solution of
is unique. From (a), there is
with
for all
for some
. Thus,
for
. By (
12) and (
20), we have:
Similarly, by (
19), we obtain:
The parallelogram law, (
7), (
24), and (
25) provide:
This means that , and thus, . Consequently, , which implies that is a constant. For all , we have . Therefore, , which contradicts (b). □
Remark 1. Due to the discrepancy principle and , we have: Arguing by contradiction, we can show that . This means that , and thus, . In the same manner, for the noise-free case, we obtain .
3. Convergence Results
In this section, we first show for exact data that the solution of (8) tends to a solution of (1) as $t \to \infty$, and that under the conventional condition it tends to the unique solution of minimal distance to the initial guess. At the end of this section, we show that the proposed method provides a stable approximation of the exact solution if the stopping time is chosen as the unique solution given by the discrepancy principle (16). Note that the following result is used to prove that the solution of (8) converges to a solution of (1) provided the tangential cone condition holds.
Lemma 4. [13] Let be a solution of (1). If the tangential cone condition (9) holds, then any solution of (1) satisfies: Remark 2. Because of Lemma 4, Equation (1) has a unique solution of minimal distance to . It holds . If , we get , see [2]. Next, we prove the convergence of the solution
of (
8) for the noise-free case.
Theorem 1. Let (3) and the tangential cone condition (9) be satisfied, and let be the solution of (8) for . If (1) is solvable in , then:
where is a solution of (1). If denotes the unique solution of minimal distance to and if for all , then converges to . Proof. Let
be any solution of (
1) in
and put:
We show that
for
. Let
s be an arbitrary real number with
. Thus, it holds that:
Obviously, for
and
,
fulfills (
30). Therefore,
is negative. This means that
is non-increasing. It follows that
and
converge (for
) to some
, and consequently,
. Next, we show that
also tends to zero as
. Through (
8), we have:
and through (
10) together with the inequality
for
, we have:
The right-hand side of (31) tends to zero as
because of Corollary 1, which implies that
as
, and thus:
This means that
exists. Consequently, for
, the solution
of (
8) converges, say, to some
. Due to the continuity of
F, we have
. By Corollary 1 we have
, and thus,
is a solution of (
1).
Using Lemma 4 and the additional assumption
for all
, we know that
. Therefore:
This means and . □
For the noisy case, the regularization parameter
, chosen by the discrepancy principle (16), provides the solution
of (7), which converges to
as
; see the next theorem.
Theorem 2. Let the tangential cone condition (Equation (9)) and be satisfied. Let be the solution of (7), where is chosen by the discrepancy principle (Equation (16)) with . If (1) is solvable in and is a solution of (1), then: Proof. Due to the results of Theorem 1 and Corollary 1, the proof follows the method of the proof of Theorem 2.4 in [2]. □
4. Convergence Rates
In this section, we prove an order-optimal error bound under a particular sourcewise representation. The Hölder-type source condition is commonly used to analyze convergence rates for many regularization methods, e.g., [1,2,8,12]. An analysis of ill-posed problems under general source conditions of the form
$$x^{\dagger} - \bar{x} = \varphi\big(F'(x^{\dagger})^{*}F'(x^{\dagger})\big)\,v$$
with an index function $\varphi$, i.e., $\varphi$ is continuous, strictly increasing, and satisfies $\varphi(0) = 0$, was reported in [14,15,16]. For the presented work, the following source condition (Equation (33)) is necessary; however, the usual assumptions on the nonlinearity of the operator $F$ are still required.
Assumption 2. Let be the unique solution of minimal distance to . There exist an element and constants and such that: The sum is absolutely convergent, since are bounded linear operators.
Assumption 3. For all , there exist a bounded linear operator and a constant such that:
- (i)
;
- (ii)
Proposition 1. Let (3), (9), Assumption 3, and with be satisfied. Let be the solution of (7) with , where is chosen according to the discrepancy principle (Equation (16)). Then, we have: Proof. Let
be the solution of (
7) with
. Using (
3), (
10), and (
16), we obtain:
and consequently:
By Assumption 3 and (35), the assertion is obtained. □
Proposition 2. Let and Assumption 3 be satisfied. Then, for all we have: Proof. The proof is similar to that in [
1]. □
Proposition 3. Let be the solution of (7) and let denote the unique solution of minimal distance to . Then:
where:
Proof. Integration by parts yields:
and the following integration results in:
Combining both equations yields:
Integration by parts again yields:
Applying (
40) to (
39), the assertion is obtained. □
Using (A1) in Appendix A, we have:
with
and
.
In the next theorem, we estimate the functions:
Theorem 3. Let (3), (9), Assumption 2 with , , , , and be satisfied. Let , and let denote the unique solution of minimal distance to . If is the solution of (7) with , where is chosen according to the discrepancy principle (Equation (16)) with , then the functions and of (42) satisfy the following system of integral inequalities of the second kind:
and
where the constants , and are given by:
37) be denoted by
and
, respectively. Thus:
Applying (
33) and (
41) for
, we obtain:
Similarly, using (
3) and (
41) for
, we get:
The discrepancy principle (Equation (
16)) and (
35) provide:
Applying (
48) into (
47), we get:
with
. Observe that Assumption 3(i) yields:
Through (
34) and (
36), we obtain:
with
.
Using (
52) together with (
41) and (
42), we get:
Applying (
33), (
41), and (
42) for
, we have:
Applying (
46), (
49), (
53), and (
54) to (
45), the first assertion is obtained.
We note that Proposition 3 yields:
Let the terms on the right-hand side of (
55) be denoted by
and
, respectively. Thus:
Applying (
33) and (
41) for
, we obtain:
Note that by direct integration, we get:
Similarly, using (
3), (
41), and (
48) for
, we get:
Using (
41) and (
52), we obtain:
Applying (
33), (
41), and (
42) for
, we have:
Applying (
57)–(
60) to (
56), the second assertion is obtained. □
We remark that the constants and exist for . It might happen that does not hold for all problems.
Proposition 4. Let the assumptions of Theorem 3 be satisfied. If the constant E is sufficiently small, then there exists a constant such that the following estimates hold: Proof. We use the estimates (
A2), (
A3), (
A6), and (
A7) to show that:
hold with
and
, which are defined by (
43) and (
44), respectively. The definition of
in (
43) provides:
Substituting
and
in (
64) and then estimating the integral by (
A3), (
A5), and (
A6), we obtain:
Similarly, if the integral in
is estimated by (
A3) and (
A7), then:
Due to the assumption, we have
and
. If
is sufficiently small,
,
,
, and
, there exists
such that:
and:
are smaller than
. Our assertion is obtained via (63). □
Next, we provide the main result of this section.
Theorem 4. Let the assumptions of Theorem 3 be satisfied. If the constant E is sufficiently small, then there exists a constant such that: Proof. We observe that (
33) provides:
where
T is replaced by
. Similarly, using (
33) and (
37), we get:
We define:
where
is obtained by (
51). Thus, (
67) can be rewritten as:
Due to (
41) and (
52), we have:
Using Proposition 4, (
A4), (
A8), and (
A9), (
70) becomes:
Through (
3), (
10), and (
69), we obtain:
The interpolation inequality
with
,
, and
together with (
71) and (
72) provides:
From (
48) and (
62), we have:
Through (
41) and (
69), we have:
The assertion is obtained via (
73), (
74), and (
75). □
5. Conclusions
In this article, an additional term was included in the Showalter differential equation in order to study the impact of this term on the classical asymptotic regularization proposed in [1]. In the presented work, the regularization parameter was chosen according to an a posteriori choice rule (Equation (16)), where
is needed instead of using
. It includes not only the noise level but also information on the local properties of the nonlinear operator $F$; see [12] for the analysis of Tikhonov regularization using the modified discrepancy principle. This may cause a slightly larger residual norm than the conventional discrepancy principle. However, it still allows a stable approximation of the exact solution. To ensure the convergence of the proposed method, the additional Assumption 1 is required.
Apart from the convergence result, the proposed method attains the optimal convergence rate under the source condition (33), i.e.,
, and the assumptions on the nonlinearity of the operator $F$. Although the exponential term
in the source condition was not necessary for obtaining the optimal rate in the classical asymptotic regularization [1], we discovered that the exponential term is an important key to obtaining the optimal rate for the presented method, and probably also for the modified iterative RKTM studied in [9]. The modified iterative RKTM attains the rate
under the Hölder-type source condition, where
was chosen in accordance with the discrepancy principle and
was fixed. To obtain the optimal rate of the modified iterative RKTM under the source condition (Equation (33)), a detailed analysis is required.
Furthermore, a numerical integration method for solving (2) or (7), such as a Runge–Kutta-type method, can be written in the following form:
$$x_{k+1} = x_{k} + \tau_{k}\,\Phi(t_{k}, x_{k}, \tau_{k}),$$
where $\tau_{k}$ is a relaxation parameter and $\Phi$ is an increment function [17].
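For instance, with Heun's method (an explicit two-stage Runge–Kutta scheme) the increment function $\Phi$ of this one-step form can be realized as in the following sketch; `G` is a placeholder for the right-hand side of the flow, e.g., $G(x) = F'(x)^{*}(y^{\delta} - F(x))$ for (2).

```python
# One-step form x_{k+1} = x_k + tau * Phi(x_k, tau) with Heun's increment.

def heun_increment(G, x, tau):
    k1 = G(x)
    k2 = G(x + tau * k1)
    return 0.5 * (k1 + k2)                       # increment function Phi

def one_step_method(G, x0, tau, n_steps):
    x = x0
    for _ in range(n_steps):
        x = x + tau * heun_increment(G, x, tau)  # one-step update
    return x
```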
Another discretization technique is based on Padé approximation and has the following form [18]:
The effects of Padé integration in the study of the chaotic behavior of conservative nonlinear systems have been reported by Butusov et al. [18]. A comparative study of Runge–Kutta methods versus Padé methods shows that chaotic behavior appears in models obtained by nonlinear integration techniques where chaos does not appear with conventional methods. A regularized algorithm for computing Padé approximations in floating-point arithmetic or for problems with noise has been reported by Gonnet et al. [19]. However, the role and effects of Padé integration for solving (2) or (7) require a detailed study. This is an interesting task for future investigations.