Article

Parameter Choice Strategy That Computes Regularization Parameter before Computing the Regularized Solution

by Santhosh George 1,†, Jidesh Padikkal 1,†, Ajil Kunnarath 1,†, Ioannis K. Argyros 2,* and Samundra Regmi 3,*,†

1 Department of Mathematical and Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, University of Houston, Houston, TX 77204, USA
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Modelling 2024, 5(2), 530-548; https://doi.org/10.3390/modelling5020028
Submission received: 21 March 2024 / Revised: 9 May 2024 / Accepted: 10 May 2024 / Published: 13 May 2024

Abstract: The modeling of many problems of practical interest leads to nonlinear ill-posed equations (for example, the parameter identification problem; see Section 4). In this article, we introduce a new source condition (SC) and a new parameter choice strategy (PCS) for the Tikhonov regularization (TR) method for nonlinear ill-posed problems. The new PCS is built on the new SC and computes the regularization parameter (RP) before computing the regularized solution. The theoretical results are verified using a numerical example.

1. Introduction

Many problems of practical interest lead to nonlinear ill-posed equations. For example, consider the inverse problem of identifying the distributed growth law $x(t)$, $t \in (0,1)$, in the initial value problem

$$\frac{dy}{dt} = x(t)\,y(t), \qquad y(0) = c, \quad t \in (0,1), \tag{1}$$

from the noisy data $y^\delta(t) \in L^2(0,1)$.

In the noise-free case, we can separate variables and obtain $x(t) = \frac{d}{dt}\ln y$. Suppose a noise term $\delta \sin\left(\frac{t}{\delta^2}\right)$ is added to $\ln y$, so that

$$\ln y^\delta = \ln y + \delta \sin\frac{t}{\delta^2}. \tag{2}$$

Differentiating with respect to $t$ to find the new $x^\delta$, we obtain

$$x^\delta(t) = \frac{d}{dt}\ln y + \frac{1}{\delta}\cos\frac{t}{\delta^2}. \tag{3}$$

Note that the magnitude of the noise is small in (2) (if $\delta$ is small) but large in (3). This is typical of an ill-posed problem (a violation of Hadamard's criteria [1]). One can reformulate the above problem as an ill-posed operator equation $L(x) = y$ with

$$[L(x)](t) = c\,e^{\int_0^t x(\theta)\,d\theta}, \qquad x \in L^2(0,1), \ t \in (0,1). \tag{4}$$
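The loss of stability under differentiation can be seen numerically. The following minimal sketch (not the paper's code; the exact growth law $x(t) = t$, the value $c = 1$, and the grid are illustrative assumptions) adds the perturbation $\delta\sin(t/\delta^2)$ to $\ln y$ and differentiates: the data error stays below $\delta$, while the error in the recovered $x$ is of order $1/\delta$.

```python
import numpy as np

# Illustrative sketch: exact growth law x(t) = t with c = 1, so ln y(t) = t^2 / 2.
delta = 1e-2                                  # noise level
t = np.linspace(0.01, 0.99, 100_001)          # fine grid (resolves the fast oscillation)
log_y = t**2 / 2
log_y_noisy = log_y + delta * np.sin(t / delta**2)

# Differentiate both by finite differences, as in passing from (2) to (3).
x_exact = np.gradient(log_y, t)               # recovers x(t) = t
x_noisy = np.gradient(log_y_noisy, t)

data_err = np.max(np.abs(log_y_noisy - log_y))   # small: bounded by delta
sol_err = np.max(np.abs(x_noisy - x_exact))      # large: of order 1/delta
print(data_err, sol_err)
```

With $\delta = 10^{-2}$, the data error is at most $10^{-2}$, while the error in the recovered $x^\delta$ is close to $1/\delta = 100$, the amplification predicted by (3).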
The problem is to find $x$ from $y$ when $y$ is known only approximately. The modeling of problems in acoustics, electrodynamics, gravimetry, phase retrieval, etc., that leads to solving ill-posed equations can be found in [2].
Another real-life application occurs in the parameter identification problem: mathematical models used in biology, physics, economics, etc., are often defined by a Partial Differential Equation (PDE) (see Example 1) [3,4]. It is known that, in general, the solution of such a PDE need not be an elementary function, so one needs to estimate the parameters of the mathematical model from experimental data. This type of problem is known as the parameter identification problem [5].
In this paper, we consider the abstract nonlinear ill-posed equation

$$L(f) = g, \tag{5}$$

where $L : D(L) \subseteq U \to V$ is a nonlinear operator and $U, V$ are Hilbert spaces. Throughout the paper, it is assumed that $L$ is continuous and weakly (sequentially) closed, the domain $D(L)$ is a subset of $U$, $L$ has a Fréchet derivative at every $f \in D(L)$, denoted by $L'(f)$, and $L'(f)^*$ is the adjoint of the linear operator $L'(f)$. We are interested in an $f_0$-minimum norm solution $\hat f$ ($f_0$-MNS) of (5) (see [5,6]); here, $f_0$ is an a priori estimate in the interior of $D(L)$ (see [5,7,8]). Recall that a solution $\hat f$ of (5) is called an $f_0$-MNS of (5) if

$$\|\hat f - f_0\| = \min\{\|f - f_0\| : L(f) = g,\ f \in D(L)\}.$$

We assume that $\hat f$ does not depend continuously on the data $g$, and the available data are $g^\delta$ with

$$\|g - g^\delta\| \le \delta. \tag{6}$$
In such a situation, regularization methods are employed to obtain approximations for $\hat f$. Tikhonov regularization (TR) is the best-known regularization method [5,6,8,9,10,11,12,13]. In this method, the minimizer $f_\alpha^\delta$ of the Tikhonov functional

$$J_\alpha(f) = \|L(f) - g^\delta\|^2 + \alpha\|f - f_0\|^2, \qquad f \in D(L), \tag{7}$$

for some $\alpha > 0$ is taken as the approximation. It is known [5,9] that $f_\alpha^\delta$ satisfies the equation

$$L'(f_\alpha^\delta)^*(L(f_\alpha^\delta) - g^\delta) + \alpha(f_\alpha^\delta - f_0) = 0. \tag{8}$$
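For a linear operator, the connection between the minimizer of (7) and the optimality condition (8) can be checked directly, since the minimizer has a closed form. The sketch below (an illustrative toy matrix, not from the paper) verifies that the closed-form Tikhonov minimizer satisfies the corresponding normal equation.

```python
import numpy as np

# Sketch: for a *linear* operator A, the minimizer of
#   J_alpha(f) = ||A f - g_delta||^2 + alpha ||f - f_0||^2
# is f_alpha = (A^T A + alpha I)^{-1} (A^T g_delta + alpha f_0), and it satisfies
#   A^T (A f_alpha - g_delta) + alpha (f_alpha - f_0) = 0  (the linear analogue of (8)).
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) / np.arange(1, n + 1)   # decaying columns: ill-conditioned
f_true = rng.standard_normal(n)
g_delta = A @ f_true + 1e-3 * rng.standard_normal(n)    # noisy data
f0 = np.zeros(n)
alpha = 1e-2

f_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g_delta + alpha * f0)
residual = A.T @ (A @ f_alpha - g_delta) + alpha * (f_alpha - f0)
print(np.linalg.norm(residual))   # essentially zero (machine precision)
```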
The convergence and the rate of convergence of $f_\alpha^\delta \to \hat f$ are obtained [5,9,14] under so-called source conditions (SCs) on $f_0 - \hat f$. Recall that a priori assumptions on the unknown solution $\hat f$ are called source conditions [15]. The most commonly used SCs for the TR method are [5,9]

$$f_0 - \hat f = (\Gamma^*\Gamma)^\nu w \quad \text{for some } w \in N(\Gamma^*\Gamma)^\perp, \ \|w\| \le \rho, \tag{9}$$

where $\Gamma = L'(\hat f)$ and $\Gamma^*$ is the adjoint of the linear operator $\Gamma$, and [16,17]

$$f_0 - \hat f = (L'(f_0)^*L'(f_0))^\nu w \quad \text{for some } w \in N(L'(f_0)^*L'(f_0))^\perp, \ \|w\| \le \rho, \tag{10}$$

for $\rho > 0$, $0 < \nu \le 1$.

Other types of SCs are also studied in the literature, for example, the generalized source condition [16,17,18,19,20] and the variational source condition [21,22,23,24,25].

In this paper, we introduce a new SC; i.e., we assume that

$$f_0 - \hat f = (L^*L)^\nu w \quad \text{for some } w \in N(L^*L)^\perp, \ \|w\| \le \rho, \tag{11}$$

where $L = \int_0^1 L'(\hat f + t(f_0 - \hat f))\,dt$, $\rho > 0$, $0 < \nu \le 1$. It is known [5,8,14,16] that, under the SCs (9) and (10), the best possible rate of convergence of $f_\alpha^\delta \to \hat f$ is $O(\delta^{\frac{2\nu}{2\nu+1}})$. We shall prove that the SC (11) also gives the convergence rate $O(\delta^{\frac{2\nu}{2\nu+1}})$ (hereafter, we call $\nu$ the Hölder-type parameter). We formulate the new SC in order to introduce a new PCS for choosing $\alpha$ (this strategy is a priori in the sense that the RP $\alpha$ is chosen depending on $\delta$ and $g^\delta$ before computing the regularized solution $f_\alpha^\delta$). The new PCS gives the order

$$\|f_\alpha^\delta - \hat f\| = \begin{cases} O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), & 0 < \nu \le \frac{1}{2},\\[2pt] O\big(\delta^{\frac{1}{2}}\big), & \nu > \frac{1}{2}. \end{cases} \tag{12}$$
Note that most a priori PCSs depend on the unknown $\nu$ in the SC. The advantages of our proposed PCS are that (i) it is independent of the parameter $\nu$, (ii) it provides the order $O(\delta^{\frac{2\nu}{2\nu+1}})$ for $0 < \nu \le \frac12$, and (iii) it is a priori in the sense that $\alpha$ is computed before computing the regularized solution $f_\alpha^\delta$.
In earlier studies, such as [10,11,20,26,27,28], a regularization parameter $\alpha = \alpha(n,\delta)$ depending on the iteration step is computed in each iteration, and the stopping index is determined using some stopping criterion [11,20,26,27,28]. This approach is computationally very expensive, whereas our approach requires the computation of $\alpha = \alpha(\delta)$ only once (here, $\alpha$ is independent of the iteration step); hence, one can also fix the stopping index for a given tolerance level at the beginning of the computation (see the comparison table in Example 1).
The above-mentioned advantages are obtained without actually using the operator $L$ for computing $\alpha$ and $f_\alpha^\delta$ (or the iteratively regularized solution).
Another class of regularization methods is the so-called iterative regularization methods [26,27,28,29,30,31,32,33,34,35,36] (and the references therein). Since our aim in this paper is to introduce a new PCS that allows us to compute the RP $\alpha$ (depending on $g^\delta$ and $\delta$) before computing the regularized solution $f_\alpha^\delta$, we leave the details of the above-mentioned source conditions (except (11)) and iterative regularization methods to motivated readers.
The rest of the paper is organized as follows. The error analysis under the new SC is given in Section 2. The new PCS is given in Section 3, the numerical results are given in Section 4, and the paper ends with the conclusion in Section 5, followed by the Appendices.

2. Error Analysis

The proofs of our results are based on the following assumptions (cf. [5,9]).
(i)
There exist a constant $k_0 > 0$ and a continuous function $\varphi : D(L) \times D(L) \times U \to U$ such that, for $(f, z, v) \in D(L) \times D(L) \times U$, there is a $\varphi(f,z,v) \in U$ with

$$(L'(f) - L'(z))v = L'(z)\varphi(f,z,v), \tag{13}$$

where $\|\varphi(f,z,v)\| \le k_0\|f - z\|\|v\|$.
(ii)
There exist a constant $k_1 > 0$ and a continuous function $\varphi_1 : D(L) \times D(L) \times V \to V$ such that, for $(f, z, g) \in D(L) \times D(L) \times V$, there is a $\varphi_1(f,z,g) \in V$ with

$$(L'(f)^* - L'(z)^*)g = L'(z)^*\varphi_1(f,z,g), \tag{14}$$

where $\|\varphi_1(f,z,g)\| \le k_1\|f - z\|\|g\|$.
(iii)
There exist a constant $k_2 > 0$ and a continuous function $\varphi_2 : D(L) \times D(L) \times U \to U$ such that, for $(f, z, v) \in D(L) \times D(L) \times U$, there is a $\varphi_2(f,z,v) \in U$ with

$$(L'(f)^* - L'(z)^*)L'(z)v = L'(z)^*L'(z)\varphi_2(f,z,v), \tag{15}$$

where $\|\varphi_2(f,z,v)\| \le k_2\|f - z\|\|v\|$.
(iv)
There exist a constant $k_3 > 0$ and a continuous function $\varphi_3 : D(L) \times D(L) \times V \to V$ such that, for $(f, z, g) \in D(L) \times D(L) \times V$, there is a $\varphi_3(f,z,g) \in V$ with

$$(L'(z) - L'(f))L'(z)^*g = L'(f)L'(z)^*\varphi_3(f,z,g), \tag{16}$$

where $\|\varphi_3(f,z,g)\| \le k_3\|f - z\|\|g\|$.
Remark 1.
(a) Note that, by (ii) above, we have

$$L'(f)^*h = L'(z)^*R(f,z,h),$$

where $\|R(f,z,h)\| \le C_R\|h\|$ for some constant $C_R > 0$, provided $\|f - z\|$ is bounded.
(b) 
Using the above assumptions, one can prove the following identities (the proofs are given in Appendix A). Let $\Pi = \int_0^1 L'(u + t(v-u))\,dt$. Then,

$$\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*(\Pi - L'(f))\xi\| \le \begin{cases} k_0\left(\|u-f\| + \frac{\|v-u\|}{2}\right)\|\xi\|, & v \ne f,\\[2pt] k_0\frac{\|v-u\|}{2}\|\xi\|, & v = f, \end{cases} \tag{17}$$

$$\|(L'(f)^*L'(f) + \alpha I)^{-1}(\Pi^* - L'(f)^*)L'(f)\xi\| \le \begin{cases} k_2\left(\|u-f\| + \frac{\|v-u\|}{2}\right)\|\xi\|, & v \ne f,\\[2pt] k_2\frac{\|v-u\|}{2}\|\xi\|, & v = f, \end{cases} \tag{18}$$

$$\|(L'(f)^*L'(f) + \alpha I)^{-1}(\Pi^* - L'(f)^*)(\Pi - L'(f))\xi\| \le \begin{cases} k_2k_0\left(\|u-f\| + \frac{\|v-u\|}{2}\right)^2\|\xi\|, & v \ne f,\\[2pt] k_2k_0\frac{\|v-u\|^2}{4}\|\xi\|, & v = f, \end{cases} \tag{19}$$

and

$$\|\alpha[(L'(f)L'(f)^* + \alpha I)^{-1} - (LL^* + \alpha I)^{-1}]\zeta\| \le \begin{cases} (k_3C_R + k_1)\left(\|f - \hat f\| + \frac{\|f_0 - \hat f\|}{2}\right)\|\alpha(LL^* + \alpha I)^{-1}\zeta\|, & f \ne f_0,\\[2pt] (k_3C_R + k_1)\frac{\|f_0 - \hat f\|}{2}\,\|\alpha(LL^* + \alpha I)^{-1}\zeta\|, & f = f_0. \end{cases} \tag{20}$$
(c) 
We will be using the following estimates:

$$\|(L^*L + \alpha I)^{-1}(L^*L)^\nu\| \le \alpha^{\nu - 1}, \qquad 0 < \nu \le 1, \tag{21}$$

$$\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*L'(f)\| \le 1, \qquad f \in D(L), \tag{22}$$

and

$$\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*\| \le \frac{1}{\sqrt{\alpha}}, \qquad f \in D(L). \tag{23}$$
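The estimates (21)–(23) are statements about the spectrum of the self-adjoint operators involved, so they can be sanity-checked by maximizing the corresponding scalar functions over a sampled spectrum. The sketch below is a numerical check, not a proof; the sample grid is arbitrary.

```python
import numpy as np

# Spectral check: for eigenvalues s >= 0 of the self-adjoint operator,
# (21) reads sup_s s^nu / (s + alpha) <= alpha^(nu - 1),
# (22) reads sup_s s / (s + alpha)    <= 1, and
# (23) reads sup_s sqrt(s) / (s + alpha) <= 1 / sqrt(alpha).
s = np.logspace(-12, 4, 2000)     # sampled spectrum
alpha = 1e-3
for nu in (0.25, 0.5, 0.75, 1.0):
    assert np.max(s**nu / (s + alpha)) <= alpha**(nu - 1) + 1e-12   # (21)
assert np.max(s / (s + alpha)) <= 1.0                               # (22)
assert np.max(np.sqrt(s) / (s + alpha)) <= 1.0 / np.sqrt(alpha)     # (23)
print("estimates (21)-(23) verified on the sample grid")
```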
Let $r := \frac{\delta}{\sqrt{\alpha}} + 2r_0$, where $r_0 = \|f_0 - \hat f\|$. Then, since $f_\alpha^\delta$ is the minimizer of (7), we have

$$\|L(f_\alpha^\delta) - g^\delta\|^2 + \alpha\|f_\alpha^\delta - f_0\|^2 \le \|L(\hat f) - g^\delta\|^2 + \alpha\|\hat f - f_0\|^2 \le \delta^2 + \alpha\|\hat f - f_0\|^2;$$

hence,

$$\|f_\alpha^\delta - f_0\| \le \frac{\delta}{\sqrt{\alpha}} + \|\hat f - f_0\| = \frac{\delta}{\sqrt{\alpha}} + r_0.$$

Similarly, we have

$$\|f_\alpha - f_0\| \le \|f_0 - \hat f\| = r_0.$$
First, we shall prove that $f_0 - \hat f \in R(L^*L)$ implies $f_0 - \hat f \in R(\Gamma^*\Gamma)$ and that $f_0 - \hat f \in R((L^*L)^\nu)$ implies $f_0 - \hat f \in R((\Gamma^*\Gamma)^{\nu_1})$ for $0 < \nu_1 < \nu$.
Proposition 1.
Suppose (i) and (iii) hold. Then, the following hold:
(P1)
$f_0 - \hat f = L^*Lz$, $\|z\| \le \rho$ $\Rightarrow$ $f_0 - \hat f = \Gamma^*\Gamma\,\xi_z$, $\|\xi_z\| \le \rho_0$, for some $\rho_0 > 0$.
(P2)
$f_0 - \hat f = (L^*L)^\nu z$, $\|z\| \le \rho$ $\Rightarrow$ $f_0 - \hat f = (\Gamma^*\Gamma)^{\nu_1}\xi_z$, $\|\xi_z\| \le \rho_1$, for some $\rho_1 > 0$, $0 < \nu_1 < \nu < 1$.
Proof. 
The proof is given in Appendix B. □
Remark 2.
Similarly, one can prove
(P1′)
$f_0 - \hat f = L^*Lz$, $\|z\| \le \rho$ $\Rightarrow$ $f_0 - \hat f = L'(f_0)^*L'(f_0)\,\xi_z$, $\|\xi_z\| \le \rho_0$, for some $\rho_0 > 0$,
and
(P2′)
$f_0 - \hat f = (L^*L)^\nu z$, $\|z\| \le \rho$ $\Rightarrow$ $f_0 - \hat f = (L'(f_0)^*L'(f_0))^{\nu_1}\xi_z$, $\|\xi_z\| \le \rho_1$, for some $\rho_1 > 0$, $0 < \nu_1 < \nu < 1$.
Remark 3.
Proposition 1 shows that SC (11) is not a severe restriction; it almost follows from SC (9) or SC (10). The advantage of using SC (11), as mentioned in the Introduction, is that one can compute the regularization parameter $\alpha$ (depending on $g^\delta$ and $\delta$) before computing the regularized solution $f_\alpha^\delta$ (see Section 3).
Lemma 1.
Suppose that $k_0r < 2$ and that assumptions (i) and (iii) hold. Let $f_\alpha^\delta$ be as in (8) and let $f_\alpha$ be the solution of (8) with $g$ in place of $g^\delta$. Then,

$$\|f_\alpha^\delta - f_\alpha\| \le \frac{2}{2 - k_0r}\left[\frac{\delta}{\sqrt{\alpha}} + k_2r\left(k_0\left(r + \frac{r_0}{2}\right) + 1\right)\|f_\alpha - \hat f\|\right].$$
Proof. 
The proof is given in Appendix C. □
Lemma 2.
Suppose that $k_0r_0 < 2$ and that (11) and the assumptions (i)–(iii) hold. Then,

$$\|f_\alpha - \hat f\| \le \frac{2\|w\|}{2 - k_0r_0}\left[(k_3C_R + k_1)\frac{3r_0}{2} + 1\right]\alpha^\nu.$$
Proof. 
The proof is given in Appendix D. □
Next, we prove the main result of this section using Lemmas 1 and 2.
Theorem 1.
Let the assumptions of Lemmas 1 and 2 hold. Then,

$$\|f_\alpha^\delta - \hat f\| \le q\left(\frac{\delta}{\sqrt{\alpha}} + \alpha^\nu\right),$$

where $q = \frac{2}{2 - k_0r}\max\left\{1,\ k_2r\left(k_0\left(r + \frac{r_0}{2}\right) + 1\right)\frac{2\|w\|}{2 - k_0r_0}\left[(k_3C_R + k_1)\frac{3r_0}{2} + 1\right]\right\}$. In particular, for $\alpha = \delta^{\frac{2}{2\nu+1}}$, we have

$$\|f_\alpha^\delta - \hat f\| = O\big(\delta^{\frac{2\nu}{2\nu+1}}\big).$$
Proof. 
Since

$$\|f_\alpha^\delta - \hat f\| \le \|f_\alpha^\delta - f_\alpha\| + \|f_\alpha - \hat f\|,$$
the result follows from Lemma 1 and Lemma 2. □
Remark 4.
Note that the a priori parameter choice $\alpha = \delta^{\frac{2}{2\nu+1}}$ gives the order $O(\delta^{\frac{2\nu}{2\nu+1}})$ for $0 < \nu \le 1$. However, $\nu$ is unknown, so this choice is impossible in practice. Therefore, we consider a new PCS that does not require knowledge of the unknown parameter $\nu$ and provides the order $O(\delta^{\frac{2\nu}{2\nu+1}})$ for $0 < \nu \le \frac12$ and $O(\delta^{\frac12})$ for $\frac12 < \nu \le 1$.

3. New Parameter Choice Strategy

Let

$$d(\alpha, g^\delta) = \|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g^\delta)\|, \tag{24}$$

where $L_0 = L'(f_0)$.
Theorem 2.
The function $\alpha \mapsto d(\alpha, g^\delta)$, $\alpha > 0$, defined in (24) is continuous, monotonically increasing, and

$$\lim_{\alpha \to 0} d(\alpha, g^\delta) = \|P(L(f_0) - g^\delta)\|, \qquad \lim_{\alpha \to \infty} d(\alpha, g^\delta) = \|L(f_0) - g^\delta\|,$$

where $P$ is the orthogonal projection onto the null space $N(L_0^*)$ of $L_0^*$.
Proof. 
See Lemma 1 in [18]. □
Further, we assume that

$$\|P(L(f_0) - g^\delta)\| \le c\delta \le \|L(f_0) - g^\delta\| \tag{25}$$

for some $c > 1$.
An application of the intermediate value theorem gives the following theorem.
Theorem 3.
If $g^\delta$ satisfies (6) and (25), then there is a unique $\alpha$ such that

$$d(\alpha, g^\delta) = c\delta. \tag{26}$$
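Since $d(\cdot, g^\delta)$ is continuous and monotonically increasing (Theorem 2), the unique root of (26) can be computed by bisection. The sketch below does this for a toy linear surrogate: the matrix $B$ stands in for $L'(f_0)$ and the vector $r$ for $L(f_0) - g^\delta$, both invented for illustration.

```python
import numpy as np

# Sketch of the PCS: solve d(alpha) = c * delta by bisection on log(alpha), where
#   d(alpha) = || alpha (B B^T + alpha I)^{-1} r ||
# is increasing in alpha. B and r are toy stand-ins, not the paper's operator.
rng = np.random.default_rng(1)
n = 20
B = rng.standard_normal((n, n)) / np.arange(1, n + 1)
r = rng.standard_normal(n)
r /= np.linalg.norm(r)            # ||r|| = 1, so d(alpha) -> 1 as alpha -> infinity
delta, c = 1e-3, 4.0

def d(alpha):
    return np.linalg.norm(alpha * np.linalg.solve(B @ B.T + alpha * np.eye(n), r))

lo, hi = 1e-16, 1e16              # bracket: d(lo) < c*delta < d(hi)
for _ in range(200):              # bisection on the logarithmic scale
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if d(mid) < c * delta else (lo, mid)
alpha_star = np.sqrt(lo * hi)
print(alpha_star, d(alpha_star))  # d(alpha_star) matches c*delta
```

This mirrors the a priori character of the strategy: the root `alpha_star` is found from the data alone, before any regularized solution is computed.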
We will be using the following moment inequality:

$$\|B^ux\| \le \|B^vx\|^{\frac{u}{v}}\,\|x\|^{1 - \frac{u}{v}}, \qquad 0 < u \le v, \tag{27}$$

where $B$ is a positive self-adjoint operator (see [37]).
Lemma 3.
Let $\alpha = \alpha(\delta)$ be the unique solution of (26) and let $(k_3C_R + k_1)\frac{3r_0}{2} < 1$. Suppose that (11) holds and $g^\delta$ satisfies (6) and (25). Then, under the assumptions (i), (ii), (iii), and (iv),

$$\|f_\alpha - \hat f\| = O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), \qquad 0 < \nu \le 1.$$
Proof. 
The proof is given in Appendix E. □
Lemma 4.
Let $g^\delta$ satisfy (6) and (25), and let $\alpha = \alpha(\delta)$ satisfy (26). Further, suppose (11) and assumptions (i)–(iv) hold. Then,

$$\frac{\delta}{\sqrt{\alpha}} = \begin{cases} O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), & \nu \le \frac12,\\[2pt] O\big(\delta^{\frac12}\big), & \nu > \frac12. \end{cases}$$
Proof. 
The proof is given in Appendix F. □
Theorem 4.
Suppose that the assumptions of Lemmas 1–4 hold. Then,

$$\|f_\alpha^\delta - \hat f\| = \begin{cases} O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), & \nu \le \frac12,\\[2pt] O\big(\delta^{\frac12}\big), & \nu > \frac12. \end{cases}$$

Proof. 
Since

$$\|f_\alpha^\delta - \hat f\| \le \|f_\alpha^\delta - f_\alpha\| + \|f_\alpha - \hat f\|,$$
the proof follows from Lemmas 1–4. □
Remark 5.
Note that $\alpha = \alpha(\delta)$ satisfying (26) is independent of $\nu$ and gives the order $O(\delta^{\frac{2\nu}{2\nu+1}})$ for $0 < \nu \le \frac12$ and $O(\delta^{\frac12})$ for $\frac12 < \nu \le 1$. Also, observe that the PCS does not depend on the operator $L$ and that the regularization parameter $\alpha$ is computed before computing $f_\alpha^\delta$.

4. Numerical Example

Next, we provide an example satisfying the assumptions (i)–(iv).
Example 1.
Here, the problem is to find $q$ satisfying the two-point boundary value problem

$$-u'' + qu = f, \quad t \in (0,1), \qquad u(0) = g_0, \ u(1) = g_1, \tag{28}$$

where $g_0$, $g_1$, and $f \in L^2[0,1]$ are given. This problem can be written as an operator equation of the form $L(q) = u(q)$, where $L : D(L) \subseteq L^2[0,1] \to L^2[0,1]$ is a nonlinear operator and $u(q)$ satisfies (28). Here,

$$D(L) := \{q \in L^2[0,1] : \|q - q_0\| \le \epsilon \text{ for some } q_0 \in U \text{ and small enough } \epsilon > 0\},$$

where

$$U = \{q \in L^2[0,1] : q \ge 0 \ \text{a.e.}\}.$$

Then,

$$L'(q)h = -T_q^{-1}(h\,L(q)), \qquad L'(q)^*w = -L(q)\,T_q^{-1}(w)$$

for $q \in D(L)$, $h, w \in L^2[0,1]$, where $T_q : H^2(0,1) \cap H_0^1(0,1) \to L^2[0,1]$ satisfies

$$T_qu = -u'' + qu, \qquad u \in H^2 \cap H_0^1.$$
Assumptions (i) and (ii) are verified in [5]. The verification of assumptions (iii) and (iv) is given in Appendix G.
We estimate the parameter $\alpha$ using the PCS (26). To compute $f_\alpha^\delta$ in (8), we use the Gauss–Newton method, which defines the iterates $\{f_{k,\alpha}^\delta\}$ for $k = 1, 2, \dots$ by

$$f_{k+1,\alpha}^\delta = f_{k,\alpha}^\delta - (L'(f_{k,\alpha}^\delta)^*L'(f_{k,\alpha}^\delta) + \alpha I)^{-1}\left[L'(f_{k,\alpha}^\delta)^*(L(f_{k,\alpha}^\delta) - g^\delta) + \alpha(f_{k,\alpha}^\delta - f_0)\right]. \tag{29}$$
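The structure of iteration (29), with $\alpha$ frozen at a single precomputed value, can be illustrated on a small toy problem. In the sketch below, the componentwise map $L(f) = f + f^3$ is an invented stand-in for the PDE operator of Example 1, and all data are synthetic.

```python
import numpy as np

# Sketch of iteration (29) with alpha frozen (computed once, never updated).
# Toy nonlinear map L(f) = f + f^3 applied componentwise -- NOT the paper's operator.
def L(f):
    return f + f**3

def Lp(f):
    return np.diag(1 + 3 * f**2)          # Frechet derivative (Jacobian)

rng = np.random.default_rng(2)
n = 5
f_hat = rng.uniform(0.1, 0.5, n)          # "exact" solution
g_delta = L(f_hat) + 1e-4 * rng.standard_normal(n)   # noisy data
f0 = np.zeros(n)                          # a priori estimate
alpha = 1e-4                              # frozen regularization parameter

f = f0.copy()
for _ in range(30):                       # fixed iteration count; alpha never changes
    J = Lp(f)
    step = np.linalg.solve(J.T @ J + alpha * np.eye(n),
                           J.T @ (L(f) - g_delta) + alpha * (f - f0))
    f = f - step
print(np.linalg.norm(f - f_hat))          # small: the iterate approximates f_hat
```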
Since we are estimating $q$, we will use the notation $q_{k,\alpha}^\delta$ for $f_{k,\alpha}^\delta$, $\hat q$ for $\hat f$, and $u^\delta$ for $g^\delta$ in this example.
We take $f = 100e^{-10(t-0.5)^2}$ and $g_0 = 1$, $g_1 = 2$, as in [28]. Then, $\hat q = 5t^2(1-t) + \sin(2\pi t)$. For our computation, we use randomly perturbed data $u^\delta$ with $\|u - u^\delta\| \le \delta$, and we take the initial approximation $q_0 = 0$. We used a finite-difference method for solving the differential equations involved in the computation, dividing $[0,1]$ into 100 subintervals of equal length; the resulting tridiagonal systems were solved by the Thomas algorithm [38].
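The Thomas algorithm referred to above is the standard $O(n)$ forward-elimination/back-substitution solver for tridiagonal systems. A minimal sketch follows (tested on a random diagonally dominant system, not on the discretized problem itself):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a random diagonally dominant system.
rng = np.random.default_rng(3)
n = 100
a = rng.uniform(-1, 0, n); a[0] = 0.0          # a[0] unused
c = rng.uniform(-1, 0, n); c[-1] = 0.0         # c[n-1] unused
b = 4.0 + np.zeros(n)                          # diagonal dominance ensures stability
d = rng.standard_normal(n)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = thomas(a, b, c, d)
print(np.max(np.abs(T @ x - d)))               # near machine precision
```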
We have taken $c = 4$ in (26) to compute $\alpha$. Table 1 gives, for different values of $\delta$, the parameter $\alpha$ computed using (26), the error $\|q_{k,\alpha}^\delta - \hat q\|$, and the time taken to compute $\alpha$. The corresponding figures are provided in Figure 1.
We compare our method with the most widely used iterative method [26] for (5), the iteratively regularized Gauss–Newton method, in which the iterates $f_{k,\alpha_k}^\delta$ are defined for $k = 0, 1, 2, \dots$ by

$$f_{k+1,\alpha_{k+1}}^\delta = f_{k,\alpha_k}^\delta - (L'(f_{k,\alpha_k}^\delta)^*L'(f_{k,\alpha_k}^\delta) + \alpha_kI)^{-1}\left[L'(f_{k,\alpha_k}^\delta)^*(L(f_{k,\alpha_k}^\delta) - y^\delta) + \alpha_k(f_{k,\alpha_k}^\delta - f_0)\right], \tag{30}$$

where $f_{0,\alpha_0}^\delta := f_0$. Here, $(\alpha_k)$ is a given sequence of numbers such that

$$\alpha_k > 0, \qquad 1 \le \frac{\alpha_k}{\alpha_{k+1}} \le r, \qquad \lim_{k \to \infty}\alpha_k = 0$$

for some constant $r > 1$.
Stopping index: choose $k_\delta$ as the first positive integer that satisfies

$$\frac12\left(\|L(f_{k_\delta,\alpha_{k_\delta}}^\delta) - y^\delta\| + \|L(f_{k_\delta-1,\alpha_{k_\delta-1}}^\delta) - y^\delta\|\right) \le \tau\delta,$$

where $\tau > 1$ is a sufficiently large constant not depending on $\delta$. We have taken $\lambda = 1.05$ and $\alpha_k = 1/k$ in our computations.
All computations were carried out in MATLAB on a 4-core, 64-bit Windows machine with an 11th Gen Intel(R) Core(TM) i5-1135G7 CPU @ 2.40 GHz.
Table 1 clearly shows that our approach requires less computational time than method (30).

5. Conclusions

We introduced a new SC and a new PCS for the TR of nonlinear ill-posed problems. Our PCS does not require knowledge of $\nu$, and it gives the error estimate

$$\|f_\alpha^\delta - \hat f\| = \begin{cases} O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), & 0 < \nu \le \frac12,\\[2pt] O\big(\delta^{\frac12}\big), & \nu > \frac12. \end{cases}$$
The advantage of our method is that one can compute the RP α before computing the regularized solution f α δ . We also applied the method to the parameter identification problem modeled as in Example 1 and obtained favourable numerical results.

Author Contributions

Conceptualization, S.G., J.P., A.K., I.K.A. and S.R.; methodology, S.G., J.P., A.K., I.K.A. and S.R.; software, S.G., J.P., A.K., I.K.A. and S.R.; validation, S.G., J.P., A.K., I.K.A. and S.R.; formal analysis, S.G., J.P., A.K., I.K.A. and S.R.; investigation, S.G., J.P., A.K., I.K.A. and S.R.; resources, S.G., J.P., A.K., I.K.A. and S.R.; data curation, S.G., J.P., A.K., I.K.A. and S.R.; writing—original draft preparation, S.G., J.P., A.K., I.K.A. and S.R.; writing—review and editing, S.G., J.P., A.K., I.K.A. and S.R.; visualization, S.G., J.P., A.K., I.K.A. and S.R.; supervision, S.G., J.P., A.K., I.K.A. and S.R.; project administration, S.G., J.P., A.K., I.K.A. and S.R.; funding acquisition, S.G., J.P., A.K., I.K.A. and S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of the Identities (17)–(20)

Using assumption (i), we have

$$\begin{aligned} \|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*(\Pi - L'(f))\xi\| &= \left\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*L'(f)\int_0^1 \varphi(u + t(v-u), f, \xi)\,dt\right\| \\ &\le \begin{cases} k_0\left(\|u-f\| + \frac{\|v-u\|}{2}\right)\|\xi\|, & v \ne f,\\[2pt] k_0\frac{\|v-u\|}{2}\|\xi\|, & v = f, \end{cases} \end{aligned}$$

and, using (iii), we have

$$\begin{aligned} \|(L'(f)^*L'(f) + \alpha I)^{-1}(\Pi^* - L'(f)^*)L'(f)\xi\| &= \left\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*L'(f)\int_0^1 \varphi_2(u + t(v-u), f, \xi)\,dt\right\| \\ &\le \begin{cases} k_2\left(\|u-f\| + \frac{\|v-u\|}{2}\right)\|\xi\|, & v \ne f,\\[2pt] k_2\frac{\|v-u\|}{2}\|\xi\|, & v = f, \end{cases} \end{aligned}$$

and, by (i) and (iii),

$$\begin{aligned} \|(L'(f)^*L'(f) + \alpha I)^{-1}(\Pi^* - L'(f)^*)(\Pi - L'(f))\xi\| &= \left\|(L'(f)^*L'(f) + \alpha I)^{-1}(\Pi^* - L'(f)^*)L'(f)\int_0^1 \varphi(u + t(v-u), f, \xi)\,dt\right\| \\ &= \left\|(L'(f)^*L'(f) + \alpha I)^{-1}L'(f)^*L'(f)\int_0^1 \varphi_2\Big(u + \tau(v-u), f, \int_0^1 \varphi(u + t(v-u), f, \xi)\,dt\Big)\,d\tau\right\| \\ &\le k_2\left(\|u-f\| + \frac{\|v-u\|}{2}\right)\left\|\int_0^1 \varphi(u + t(v-u), f, \xi)\,dt\right\| \\ &\le \begin{cases} k_2k_0\left(\|u-f\| + \frac{\|v-u\|}{2}\right)^2\|\xi\|, & v \ne f,\\[2pt] k_2k_0\frac{\|v-u\|^2}{4}\|\xi\|, & v = f. \end{cases} \end{aligned}$$
Further, using (ii), (iv), and Remark 1 (a) and (c), we obtain

$$\begin{aligned} \alpha[(L'(f)L'(f)^* + \alpha I)^{-1} - (LL^* + \alpha I)^{-1}]\zeta &= (L'(f)L'(f)^* + \alpha I)^{-1}[LL^* - L'(f)L'(f)^*]\,\alpha(LL^* + \alpha I)^{-1}\zeta \\ &= (L'(f)L'(f)^* + \alpha I)^{-1}[(L - L'(f))L^* + L'(f)(L^* - L'(f)^*)]\,\alpha(LL^* + \alpha I)^{-1}\zeta \\ &= (L'(f)L'(f)^* + \alpha I)^{-1}\Big[(L - L'(f))L'(f)^*\int_0^1 R\big(\hat f + \tau(f_0 - \hat f), f, \alpha(LL^* + \alpha I)^{-1}\zeta\big)\,d\tau \\ &\qquad\qquad + L'(f)(L^* - L'(f)^*)\,\alpha(LL^* + \alpha I)^{-1}\zeta\Big], \end{aligned}$$

so that

$$\begin{aligned} \|\alpha[(L'(f)L'(f)^* &+ \alpha I)^{-1} - (LL^* + \alpha I)^{-1}]\zeta\| \\ &\le \|(L'(f)L'(f)^* + \alpha I)^{-1}L'(f)L'(f)^*\|\left\|\int_0^1\!\!\int_0^1 R\Big(\hat f + \tau(f_0 - \hat f), f, \varphi_3\big(\hat f + t(f_0 - \hat f), f, \alpha(LL^* + \alpha I)^{-1}\zeta\big)\Big)\,d\tau\,dt\right\| \\ &\qquad + \|(L'(f)L'(f)^* + \alpha I)^{-1}L'(f)L'(f)^*\|\left\|\int_0^1 \varphi_1\big(\hat f + t(f_0 - \hat f), f, \alpha(LL^* + \alpha I)^{-1}\zeta\big)\,dt\right\| \\ &\le \begin{cases} (k_3C_R + k_1)\left(\|f - \hat f\| + \frac{\|f_0 - \hat f\|}{2}\right)\|\alpha(LL^* + \alpha I)^{-1}\zeta\|, & f \ne f_0,\\[2pt] (k_3C_R + k_1)\frac{\|f_0 - \hat f\|}{2}\,\|\alpha(LL^* + \alpha I)^{-1}\zeta\|, & f = f_0. \end{cases} \end{aligned}$$

Appendix B. Proof of Proposition 1

Suppose $f_0 - \hat f = L^*Lz$, $\|z\| \le \rho$. Then, by (i) and (iii), we have
f 0 f ^ = L * L z = [ L * L Γ * Γ ] z + Γ * Γ z = [ ( L * Γ * ) L + Γ * ( L Γ ) ] z + Γ * Γ z , = [ ( L * Γ * ) ( L Γ + Γ ) + Γ * ( L Γ ) ] z + Γ * Γ z , = [ ( L * Γ * ) ( L Γ ) + ( L * Γ * ) Γ + Γ * ( L Γ ) ] z + Γ * Γ z , = ( L * Γ * ) Γ 0 1 φ ( f ^ + τ ( f 0 f ^ ) , f ^ , z ) d τ + 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + 0 1 Γ * Γ φ ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + Γ * Γ z , = 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + τ ( f 0 f ^ ) , f ^ , z ) d τ ) d t + 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + 0 1 Γ * Γ φ ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + Γ * Γ z , = Γ * Γ Ψ ( f ^ , z , f ^ ) ,
where Ψ ( f ^ , z , f ^ ) = 0 1 φ 2 f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + τ ( f 0 f ^ ) , f ^ , z ) d τ d t + 0 1 φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + 0 1 φ ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + z . Further, we have
Ψ ( f ^ , z , f ^ ) 0 1 φ 2 f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + τ ( f 0 f ^ ) , f ^ , z ) d τ d t + 0 1 φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + 0 1 φ ( f ^ + t ( f 0 f ^ ) , f ^ , z ) d t + z k 2 k 0 2 f 0 f ^ + ( k 2 + k 0 ) f 0 f ^ 2 + 1 z k 2 k 0 r 0 2 + ( k 2 + k 0 ) r 0 2 + 1 ρ = : ρ 0 .
This proves (P1). To prove (P2), we use the formula ([37], p. 287) for the fractional power of a positive self-adjoint operator $B$, given by
B ϱ x = sin π ϱ π 0 τ ϱ ( B + τ I ) 1 x Θ ( τ ) τ x + + ( 1 ) n Θ ( τ ) τ n B n 1 x d τ + sin π ϱ π x ϱ B x ϱ 1 + + ( 1 ) n 1 B n 1 x ϱ n + 1 , x U ,
where
$$\Theta(\varsigma) = \begin{cases} 0, & 0 \le \varsigma \le 1,\\ 1, & 1 < \varsigma < \infty, \end{cases}$$

and $\varrho$ is a complex number such that $0 < \operatorname{Re}\varrho < n$.
Suppose that f 0 f ^ = ( L * L ) ν z , 0 ν < 1 . Then, by using the above formula, we have
f 0 f ^ = [ ( L * L ) ν ( Γ * Γ ) ν ] z + ( Γ * Γ ) ν z = sin π ( ν ) π 0 τ ν ( Γ * Γ + τ I ) 1 × ( L * L Γ * Γ ) ( L * L + τ I ) 1 z d τ + ( Γ * Γ ) ν z , = sin π ( ν ) π 0 τ ν ( Γ * Γ + τ I ) 1 [ ( L * Γ * ) ( L Γ ) + ( L * Γ * ) Γ + Γ * ( L Γ ) ] ( L * L + τ I ) 1 z d τ + ( Γ * Γ ) ν z .
So, by using (i) and (iii) we have
f 0 f ^ = sin π ( ν ) π 0 τ ν ( Γ * Γ + τ I ) 1 × [ ( L * Γ * ) Γ 0 1 φ ( f ^ + s ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d s + 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t + 0 1 Γ * Γ φ ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t ] d τ + ( Γ * Γ ) ν z ( again ,   using   ( iii )   we   have ) = sin π ( ν ) π 0 τ ν ( Γ * Γ + τ I ) 1 × [ 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + s ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d s ) d t + 0 1 Γ * Γ φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t + 0 1 Γ * Γ φ ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t ] d τ + ( Γ * Γ ) ν z
so, for 0 < ν 1 < ν we have
f 0 f ^ = ( Γ * Γ ) ν 1 [ sin π ( ν ) π 0 τ ν ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 × { 0 1 φ 2 f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + s ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d s d t d τ + 0 1 φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t + 0 1 φ ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t } ] d τ + ( Γ * Γ ) ν z , = ( Γ * Γ ) ν 1 ξ z ,
where
ξ z = sin π ( ν ) π 0 τ ν ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 × { 0 1 φ 2 f ^ + t ( f 0 f ^ ) , f ^ , 0 1 φ ( f ^ + s ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d s d t + 0 1 φ 2 ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t d τ + 0 1 φ ( f ^ + t ( f 0 f ^ ) , f ^ , ( L * L + τ I ) 1 z ) d t d τ } + ( Γ * Γ ) ν ν 1 z .
Further, by (i) and (iii) we have
ξ z 1 π ( 0 τ ν ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 × k 2 k 0 4 f 0 f ^ 2 + k 2 + k 0 2 f 0 f ^ ( L * L + τ I ) 1 z d τ + ( Γ * Γ ) 1 ν 1 z .
By splitting the interval of integration and rearranging the terms, we obtain
ξ z = 1 π k 2 k 0 4 f 0 f ^ 2 + k 2 + k 0 2 f 0 f ^ × [ 0 1 τ ν ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 + ν 1 × ( Γ * Γ + τ I ) ν 1 ( L * L + τ I ) 1 z d τ + 1 τ ν ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 ( L * L + τ I ) 1 z d τ ] + ( Γ * Γ ) 1 ν 1 z .
Now, using the relations ( Γ * Γ ) 1 ν 1 ( Γ * Γ + τ I ) 1 + ν 1 1 ,   ( Γ * Γ + τ I ) 1 τ 1 ,   ( Γ * Γ + τ I ) ν 1 τ ν 1 , and ( L * L + τ I ) 1 τ 1 we have
ξ z 1 π k 2 k 0 4 f 0 f ^ 2 + k 2 + k 0 2 f 0 f ^ × 0 1 τ ν ν 1 1 d τ + 1 τ ν 2 d τ ( Γ * Γ ) 1 ν 1 z + ( Γ * Γ ) 1 ν 1 z = 1 π k 2 k 0 4 r 0 2 + k 2 + k 0 2 r 0 1 ν ν 1 + ( Γ * Γ ) 1 ν 1 1 ν ρ + ( Γ * Γ ) 1 ν 1 ρ = : ρ 1 .
This proves ( P 2 ).

Appendix C. Proof of Lemma 1

Observe that

$$L'(f_\alpha^\delta)^*[L(f_\alpha^\delta) - g^\delta] + \alpha(f_\alpha^\delta - f_0) = 0$$

and

$$L'(f_\alpha)^*[L(f_\alpha) - g] + \alpha(f_\alpha - f_0) = 0.$$

So, we have

$$L'(f_\alpha^\delta)^*L(f_\alpha^\delta) - L'(f_\alpha)^*L(f_\alpha) + \alpha(f_\alpha^\delta - f_\alpha) = L'(f_\alpha^\delta)^*g^\delta - L'(f_\alpha)^*g$$

or

$$L'(f_\alpha^\delta)^*[L(f_\alpha^\delta) - L(f_\alpha)] + [L'(f_\alpha^\delta)^* - L'(f_\alpha)^*]L(f_\alpha) + \alpha(f_\alpha^\delta - f_\alpha) = L'(f_\alpha^\delta)^*(g^\delta - g) + [L'(f_\alpha^\delta)^* - L'(f_\alpha)^*]g. \tag{A1}$$

Let

$$M_\alpha^\delta = \int_0^1 L'(f_\alpha + t(f_\alpha^\delta - f_\alpha))\,dt.$$

Then, by (A1), we have

$$\begin{aligned} f_\alpha^\delta - f_\alpha &= (L'(f_\alpha^\delta)^*L'(f_\alpha^\delta) + \alpha I)^{-1}\big[L'(f_\alpha^\delta)^*(L'(f_\alpha^\delta) - M_\alpha^\delta)(f_\alpha^\delta - f_\alpha) + L'(f_\alpha^\delta)^*(g^\delta - g) + (L'(f_\alpha^\delta)^* - L'(f_\alpha)^*)(g - L(f_\alpha))\big] \\ &= (L'(f_\alpha^\delta)^*L'(f_\alpha^\delta) + \alpha I)^{-1}\Big[L'(f_\alpha^\delta)^*(L'(f_\alpha^\delta) - M_\alpha^\delta)(f_\alpha^\delta - f_\alpha) + L'(f_\alpha^\delta)^*(g^\delta - g) \\ &\qquad + (L'(f_\alpha^\delta)^* - L'(f_\alpha)^*)\int_0^1 [L'(f_\alpha + t(\hat f - f_\alpha)) - L'(f_\alpha^\delta) + L'(f_\alpha^\delta)]\,dt\,(\hat f - f_\alpha)\Big] \\ &= (L'(f_\alpha^\delta)^*L'(f_\alpha^\delta) + \alpha I)^{-1}\Big[L'(f_\alpha^\delta)^*(L'(f_\alpha^\delta) - M_\alpha^\delta)(f_\alpha^\delta - f_\alpha) + L'(f_\alpha^\delta)^*(g^\delta - g) \\ &\qquad + (L'(f_\alpha^\delta)^* - L'(f_\alpha)^*)L'(f_\alpha^\delta)\Big(\int_0^1 \varphi(f_\alpha + t(\hat f - f_\alpha), f_\alpha^\delta, \hat f - f_\alpha)\,dt + \hat f - f_\alpha\Big)\Big]. \end{aligned}$$

By (17) (with $\Pi = M_\alpha^\delta$, i.e., $u = f_\alpha$, $f = v = f_\alpha^\delta$, $\xi = f_\alpha^\delta - f_\alpha$), (iii), (22), and (23), we have

$$\|f_\alpha^\delta - f_\alpha\| \le \frac{k_0r}{2}\|f_\alpha^\delta - f_\alpha\| + \frac{\delta}{\sqrt{\alpha}} + k_2r\left(k_0\left(r + \frac{r_0}{2}\right) + 1\right)\|f_\alpha - \hat f\| \qquad (\text{since } \|f_\alpha^\delta - f_\alpha\| \le r).$$

Therefore,

$$\left(1 - \frac{k_0r}{2}\right)\|f_\alpha^\delta - f_\alpha\| \le \frac{\delta}{\sqrt{\alpha}} + k_2r\left(k_0\left(r + \frac{r_0}{2}\right) + 1\right)\|f_\alpha - \hat f\|.$$

Appendix D. Proof of Lemma 2

Since $L'(f_\alpha)^*(L(f_\alpha) - g) + \alpha(f_\alpha - f_0) = 0$ and $L(\hat f) = g$, we have

$$\left[L'(f_\alpha)^*\int_0^1 L'(\hat f + t(f_\alpha - \hat f))\,dt + \alpha I\right](f_\alpha - \hat f) = \alpha(f_0 - \hat f)$$

and

$$[L'(f_\alpha)^*L'(f_\alpha) + \alpha I](f_\alpha - \hat f) = L'(f_\alpha)^*\int_0^1 [L'(f_\alpha) - L'(\hat f + t(f_\alpha - \hat f))]\,dt\,(f_\alpha - \hat f) + \alpha(f_0 - \hat f).$$

So,

$$f_\alpha - \hat f = (L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}\left[L'(f_\alpha)^*\int_0^1 [L'(f_\alpha) - L'(\hat f + t(f_\alpha - \hat f))]\,dt\,(f_\alpha - \hat f) + \alpha(f_0 - \hat f)\right],$$

and hence, by (17) (with $\Pi = \int_0^1 L'(\hat f + t(f_\alpha - \hat f))\,dt$), we have

$$\|f_\alpha - \hat f\| \le \frac{k_0}{2}\|f_\alpha - \hat f\|^2 + \|\alpha(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}(f_0 - \hat f)\| \le \frac{k_0r_0}{2}\|f_\alpha - \hat f\| + \|\alpha(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}(f_0 - \hat f)\|. \tag{A3}$$

Since $\|f_\alpha - \hat f\| \le r_0$, by (A3) we have

$$\left(1 - \frac{k_0r_0}{2}\right)\|f_\alpha - \hat f\| \le \|\alpha(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}(f_0 - \hat f)\|. \tag{A4}$$

Next, we shall prove that $\|\alpha(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}(f_0 - \hat f)\| = O(\alpha^\nu)$ under the assumption (11). Note that

$$\begin{aligned} \|\alpha(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1}(f_0 - \hat f)\| &\le \|\alpha[(L'(f_\alpha)^*L'(f_\alpha) + \alpha I)^{-1} - (L^*L + \alpha I)^{-1}](f_0 - \hat f)\| + \|\alpha(L^*L + \alpha I)^{-1}(f_0 - \hat f)\| \\ &\le \left[(k_3C_R + k_1)\left(\|f_\alpha - \hat f\| + \frac{\|f_0 - \hat f\|}{2}\right) + 1\right]\|\alpha(L^*L + \alpha I)^{-1}(f_0 - \hat f)\| \quad (\text{by } (20)) \\ &\le \left[(k_3C_R + k_1)\frac{3r_0}{2} + 1\right]\|\alpha(L^*L + \alpha I)^{-1}(L^*L)^\nu w\| \\ &\le \left[(k_3C_R + k_1)\frac{3r_0}{2} + 1\right]\alpha^\nu\|w\|. \tag{A5} \end{aligned}$$

Appendix E. Proof of Lemma 3

Note that, by (A4) and (A5), we have

$$\|f_\alpha - \hat f\| \le \frac{2}{2 - k_0r_0}\left[(k_3C_R + k_1)\frac{3r_0}{2} + 1\right]\|\alpha(L^*L + \alpha I)^{-1}(L^*L)^\nu w\|. \tag{A6}$$

Let $B = (L^*L)^{\frac12}$ and $x = \alpha(L^*L + \alpha I)^{-1}w$. Then, by (27), we have

$$\begin{aligned} \|B^{2\nu}x\| = \|\alpha(L^*L + \alpha I)^{-1}(f_0 - \hat f)\| &\le \|B^{2\nu+1}x\|^{\frac{2\nu}{2\nu+1}}\|x\|^{\frac{1}{2\nu+1}} \\ &\le \|\alpha(L^*L)^{\frac12}(L^*L + \alpha I)^{-1}(f_0 - \hat f)\|^{\frac{2\nu}{2\nu+1}}\|w\|^{\frac{1}{2\nu+1}} \\ &\le \|\alpha(LL^* + \alpha I)^{-1}L(f_0 - \hat f)\|^{\frac{2\nu}{2\nu+1}}\|w\|^{\frac{1}{2\nu+1}} \\ &= \|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g^\delta + g^\delta - g)\|^{\frac{2\nu}{2\nu+1}}\|w\|^{\frac{1}{2\nu+1}} \\ &\le \left(\|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| + \delta\right)^{\frac{2\nu}{2\nu+1}}\|w\|^{\frac{1}{2\nu+1}}. \tag{A7} \end{aligned}$$

Here, we have used the relations $(L^*L)^{\frac12} = U^*L$, where $U$ is the unitary operator from the polar decomposition of $L$, and $L(f_0 - \hat f) = L(f_0) - g$. Observe that

$$\begin{aligned} \|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| &\le \|\alpha[(LL^* + \alpha I)^{-1} - (L_0L_0^* + \alpha I)^{-1}](L(f_0) - g^\delta)\| + \|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| \\ &\le (k_3C_R + k_1)\frac{r_0}{2}\,\|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| + \|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| \quad (\text{by } (20)), \end{aligned}$$

so that

$$\|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| \le \frac{2}{2 - (k_3C_R + k_1)r_0}\,\|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g^\delta)\| = \frac{2}{2 - (k_3C_R + k_1)r_0}\,d(\alpha, g^\delta) = \frac{2c\delta}{2 - (k_3C_R + k_1)r_0}. \tag{A8}$$

The result now follows from (A6), (A7), and (A8).

Appendix F. Proof of Lemma 4

Note that

$$c\delta = d(\alpha, g^\delta) \le \|\alpha(L_0L_0^* + \alpha I)^{-1}(g - g^\delta)\| + \|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g)\| \le \delta + \|\alpha(L_0L_0^* + \alpha I)^{-1}(L(f_0) - g)\|.$$

So,

$$\begin{aligned} (c - 1)\delta &\le \|\alpha[(L_0L_0^* + \alpha I)^{-1} - (LL^* + \alpha I)^{-1}](L(f_0) - g)\| + \|\alpha(LL^* + \alpha I)^{-1}(L(f_0) - g)\| \\ &\le \left[(k_3C_R + k_1)\frac{r_0}{2} + 1\right]\|\alpha(L^*L + \alpha I)^{-1}(L^*L)^{\frac12+\nu}w\| \quad (\text{by } (20)) \\ &\le \left[(k_3C_R + k_1)\frac{r_0}{2} + 1\right]\begin{cases} \alpha^{\frac12+\nu}\|w\|, & \nu \le \frac12,\\[2pt] \alpha\|(L^*L)^{\nu-\frac12}w\|, & \nu > \frac12. \end{cases} \tag{A9} \end{aligned}$$

Therefore, since

$$\frac{\delta}{\sqrt{\alpha}} = \begin{cases} \dfrac{\delta}{(\alpha^{\frac12+\nu})^{\frac{1}{2\nu+1}}}, & \nu \le \frac12,\\[6pt] \dfrac{\delta}{\alpha^{\frac12}}, & \nu > \frac12, \end{cases} \tag{A10}$$

by (A9) and (A10), we have

$$\frac{\delta}{\sqrt{\alpha}} = \begin{cases} O\big(\delta^{\frac{2\nu}{2\nu+1}}\big), & \nu \le \frac12,\\[2pt] O\big(\delta^{\frac12}\big), & \nu > \frac12. \end{cases}$$

Appendix G. Verification of Assumptions (iii) and (iv)

As in [5], we use the following assumptions:
(A1)
Let $q_0 \in D(L)$, and assume that there exists $\kappa > 0$ with $|L(q_0)(t)| \ge \kappa$ for all $t \in (0,1)$. Then, there exists a neighborhood $U(q_0)$ of $q_0$ in $L^2[0,1]$ such that
(A2)
$|L(q)(t)| \ge \frac{\kappa}{2}$ for all $q \in U(q_0) \cap D(L)$ and $t \in (0,1)$.
Note that,
( T z T f ) L ( z ) L ( z ) * w + T f ( L ( z ) L ( f ) ) L ( z ) * w = ( L ( f ) L ( z ) ) L ( z ) * w ,
so, for $f, z \in U(q_0) \cap D(L)$, we have:
( L ( z ) L ( f ) ) L ( z ) * w = T f 1 ( T f T z ) L ( z ) L ( z ) * w + ( L ( f ) L ( z ) ) L ( z ) * w = T f 1 1 L ( f ) ( ( T f T z ) L ( z ) L ( z ) * w + ( L ( f ) L ( z ) ) L ( z ) * w ) L ( f ) = T f 1 [ L ( z ) T z 1 T z 1 L ( f ) L ( z ) ( ( T f T z ) L ( z ) L ( z ) * w + ( L ( f ) L ( z ) ) L ( z ) * w ) ] L ( f ) = L ( f ) L ( z ) * φ 2 ( z , f , w ) ,
where φ 2 ( z , f , w ) = 1 L ( f ) L ( z ) T z ( ( T f T z ) L ( z ) L ( z ) * w + ( L ( f ) L ( z ) ) L ( z ) * w ) . Then, as in Lemma 2.4 in [5], one can prove that φ 2 ( z , f , w ) k 2 z f w . Further, observe that
[ L ( f ) * L ( z ) * ] L ( z ) v = L ( z ) T z 1 [ T z 1 L ( z ) ( L ( f ) ( T f 1 T z 1 ) ( L ( z ) v ) + ( L ( f ) L ( z ) ) T z 1 ( L ( z ) v ) ) ] = L ( z ) T z 1 [ T z 1 L ( z ) 2 ( L ( f ) ( T f 1 T z 1 ) ( L ( z ) v ) × ( L ( f ) L ( z ) ) T z 1 ( L ( z ) v ) ) ] L ( z ) = L ( z ) * L ( z ) φ 3 ( z , f , v ) ,
where φ 3 ( z , f , v ) = 1 L ( z ) 2 L ( f ) ( T f 1 T z 1 ) ( L ( z ) v ) + ( L ( f ) L ( z ) ) T z 1 ( L ( z ) v ) . Again, as in Lemma 2.4 in [5], one can prove that φ 3 ( z , f , v ) k 3 z f v .

References

  1. Hadamard, J. Lectures on Cauchy’s Problem in Linear Partial Differential Equations; Dover Publications: New York, NY, USA, 1953. [Google Scholar]
  2. Ramm, A.G. Inverse Problems: Mathematical and Analytical Techniques with Applications to Engineering; Springer: New York, NY, USA, 2004. [Google Scholar]
  3. Akimova, E.N.; Misilov, V.E.; Sultanov, M.A. Regularized gradient algorithms for solving the nonlinear gravimetry problem for the multilayered medium. Math. Methods Appl. Sci. 2020, 21, 7012. [Google Scholar] [CrossRef]
  4. Byzov, D.; Martyshko, P. Three-Dimensional Modeling and Inversion of Gravity Data Based on Topography: Urals Case Study. Mathematics 2024, 12, 837. [Google Scholar] [CrossRef]
  5. Scherzer, O.; Engl, H.W.; Kunisch, K. Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM. J. Numer. Anal. 1993, 30, 1796–1838. [Google Scholar] [CrossRef]
  6. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer Academic Publisher: Dordrecht, The Netherlands; Boston, MA, USA; London, UK, 1996. [Google Scholar]
  7. Flemming, J. Generalized Tikhonov Regularization and Modern Convergence Rate Theory in Banach Spaces; Shaker Verlag: Aachen, Germany, 2012. [Google Scholar]
  8. Mair, B.A. Tikhonov regularization for finitely and infinitely smoothing operators. SIAM J. Math. Anal. 1994, 25, 135–147. [Google Scholar] [CrossRef]
  9. Engl, H.W.; Kunisch, K.; Neubauer, A. Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1989, 5, 523–540. [Google Scholar] [CrossRef]
10. Jin, Q.N.; Hou, Z.Y. On the choice of regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1997, 13, 815–827. [Google Scholar] [CrossRef]
11. Jin, Q.N.; Hou, Z.Y. On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 1999, 83, 139–159. [Google Scholar]
  12. Rieder, A. On the regularization of nonlinear ill-posed problems via inexact Newton iterations. Inverse Probl. 1999, 15, 309–327. [Google Scholar] [CrossRef]
13. Vasin, V.; George, S. Expanding the applicability of Tikhonov's regularization and iterative approximation for ill-posed problems. J. Inverse Ill-Posed Probl. 2014, 22, 593–607. [Google Scholar] [CrossRef]
14. Blaschke, B. Some Newton Type Methods for the Regularization of Nonlinear Ill-Posed Problems; Trauner Verlag: Linz, Austria, 1996. [Google Scholar]
  15. Nair, M.T. Linear Operator Equations: Approximation and Regularization; World Scientific: Singapore, 2009. [Google Scholar]
  16. Argyros, I.K.; George, S.; Jidesh, P. Inverse free iterative methods for nonlinear ill-posed operator equations. Int. J. Math. Math. Sci. 2014, 2014, 754154. [Google Scholar] [CrossRef]
17. George, S. On convergence of regularized modified Newton's method for nonlinear ill-posed problems. J. Inverse Ill-Posed Probl. 2010, 18, 133–146. [Google Scholar] [CrossRef]
  18. George, S.; Nair, M.T. An a posteriori parameter choice for simplified regularization of ill-posed problems. Inter. Equat. Oper. Th. 1993, 16, 392–399. [Google Scholar] [CrossRef]
  19. Hohage, T. Logarithmic convergence rates of the iteratively regularized Gauß-Newton method for an inverse potential and an inverse scattering problem. Inverse Probl. 1997, 13, 1279–1299. [Google Scholar] [CrossRef]
  20. Mahale, P.; Singh, A.; Kumar, A. Error estimates for the simplified iteratively regularized Gauss-Newton method under a general source condition. J. Anal. 2022, 31, 295–328. [Google Scholar] [CrossRef]
  21. Chen, D.; Yousept, I. Variational source condition for ill-posed backward nonlinear Maxwell’s equations. Inverse Probl. 2019, 35, 025001. [Google Scholar] [CrossRef]
  22. Hohage, T.; Weidling, F. Verification of a variational source condition for acoustic inverse medium scattering problems. Inverse Probl. 2015, 31, 075006. [Google Scholar] [CrossRef]
23. Hohage, T.; Weidling, F. Variational source condition and stability estimates for inverse electromagnetic medium scattering problems. Inverse Probl. Imaging 2017, 11, 203–220. [Google Scholar]
24. Hohage, T.; Weidling, F. Characterizations of variational source conditions, converse results, and maxisets of spectral regularization methods. SIAM J. Numer. Anal. 2017, 55, 598–620. [Google Scholar] [CrossRef]
25. Hofmann, B.; Kaltenbacher, B.; Pöschl, C.; Scherzer, O. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Probl. 2007, 23, 987–1010. [Google Scholar] [CrossRef]
  26. Jin, Q. On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 2000, 69, 1603–1623. [Google Scholar]
  27. Jin, Q. A convergence analysis of the iteratively regularized Gauss-Newton method under Lipschitz condition. Inverse Probl. 2008, 24, 045002. [Google Scholar] [CrossRef]
  28. Mahale, P.; Dixit, S.K. Convergence analysis of simplified iteratively regularized Gauss-Newton method in a Banach space setting. Appl. Anal. 2017, 97, 1386785. [Google Scholar] [CrossRef]
29. Bakushinskii, A. The problem of the convergence of the iteratively regularized Gauß-Newton method. Comput. Math. Math. Phys. 1992, 32, 1353–1359. [Google Scholar]
  30. Bakushinskii, A. Iterative methods without saturation for solving degenerate nonlinear operator equations. Dokl. Akad. Nauk. 1995, 1, 7–8. [Google Scholar]
  31. Bakushinskii, A.; Kokurin, M. Iterative Methods for Approximate Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
32. Blaschke, B.; Neubauer, A.; Scherzer, O. On convergence rates for the iteratively regularized Gauß-Newton method. IMA J. Numer. Anal. 1997, 17, 421–436. [Google Scholar] [CrossRef]
  33. Deuflhard, P.; Engl, H.W.; Scherzer, O. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14, 1081–1106. [Google Scholar] [CrossRef]
  34. Hanke, M. Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems. Numer. Funct. Anal. Optim. 1997, 18, 971–993. [Google Scholar] [CrossRef]
  35. Kaltenbacher, B. A posteriori parameter choice strategies for some Newton type methods for the regularization of nonlinear ill-posed problems. Numer. Math. 1998, 79, 501–528. [Google Scholar] [CrossRef]
36. Mahale, P. Simplified Generalized Gauss-Newton iterative method under Morozov-type stopping rule. Numer. Funct. Anal. Optim. 2015, 36, 1448–1470. [Google Scholar] [CrossRef]
37. Krasnoselskii, M.A.; Zabreiko, P.P.; Pustylnik, E.I.; Sobolevskii, P.E. Integral Operators in Spaces of Summable Functions; Noordhoff International Publ.: Leyden, The Netherlands, 1976. [Google Scholar]
  38. Ford, W. Numerical Linear Algebra with Applications; Academic Press: New York, NY, USA, 2015; pp. 163–179. [Google Scholar]
Figure 1. Exact and computed solutions for the various parameters given against each subfigure.
Table 1. Computed α and computed error.

Method | δ | α | ‖q_{k,α}^δ − q̂‖ | Elapsed time (s)
(29) | 0.01 | 3.8147 × 10⁻⁶ | 0.0255 | 0.1664
(29) | 0.001 | 3.7253 × 10⁻⁹ | 0.0081 | 0.3884
(29) | 0.05 | 3.0518 × 10⁻⁵ | 0.0298 | 0.1277
(29) | 0.005 | 9.5367 × 10⁻⁷ | 0.0178 | 0.1865
(30) | 0.01 | 1.9073 × 10⁻⁶ | 0.0138 | 0.2190
(30) | 0.001 | 3.7253 × 10⁻⁹ | 0.0086 | 0.5289
(30) | 0.05 | 6.1035 × 10⁻⁵ | 0.0404 | 0.3383
(30) | 0.005 | 4.7684 × 10⁻⁷ | 0.0143 | 0.3988
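Incidentally, every computed α above is an exact dyadic number (for instance 3.8147 × 10⁻⁶ = 2⁻¹⁸ and 3.0518 × 10⁻⁵ = 2⁻¹⁵), which is consistent with a parameter scan over the grid α = 2⁻ᵏ. The sketch below is our own toy illustration of such a dyadic scan on a synthetic linear Tikhonov problem; it uses a discrepancy-style stopping test in place of the paper's parameter choice strategy, and all names and the test problem are assumptions of ours:

```python
import numpy as np

# Toy linear ill-posed problem: symmetric A with rapidly decaying
# spectrum, and data perturbed by noise of norm delta.
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)              # decaying singular values -> ill-conditioning
A = (U * s) @ U.T                    # A = U diag(s) U^T
x_true = np.ones(n)
delta = 1e-3
noise = rng.standard_normal(n)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha):
    """Tikhonov-regularized solution (A^T A + alpha I)^{-1} A^T y_delta."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

# Dyadic scan: halve alpha until the residual drops to the noise level,
# so the accepted alpha is an exact power of two.
alpha, tau = 1.0, 2.0
while np.linalg.norm(A @ tikhonov(alpha) - y_delta) > tau * delta:
    alpha /= 2
```

Because α is only ever halved, the accepted value lies on the grid 2⁻ᵏ, mirroring the pattern visible in the table.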