1. Introduction
Numerous significant applications in cosmology [1], data science [2], remote sensing [3], medicine [4], and geophysics [5] are modeled as inverse problems involving the determination of unknown coefficients of partial differential equations from limited information about the system over a finite period. Degenerate wave models have recently attracted increasing attention in many physical applications [6,7,8]. Surveys of the numerical techniques applied to direct and inverse problems with integer- or fractional-order derivatives show a great deal of recent activity [9,10,11]. The reconstruction of a single missing term in various types of time-dependent fractional diffusion problems is treated in [12,13].
The problem under consideration extends the one proposed in [14], in which the reconstruction targets only the potential term. Here, we address the reconstruction of two factors (the initial condition and the potential) in a multidimensional wave problem with interior degeneracy.
where . Endowed with the final observation data in the domain , where and , , is a function that degenerates at a point inside the spatial domain with . The Hilbert space is defined as with the inner product
An important application of the inverse problem (1) is to distinguish between various types of seismic events, such as implosions, explosions, or earthquakes, which generate waves that propagate through the Earth and can be recorded by seismometers. In [15], a seismic source, modeled as a point moment-tensor forcing in the elastic wave equation for the displacement, was estimated by minimizing the gap between the time-dependent measured/recorded and computed waveforms (see [16]). The weak formulation of (1) is:
In the literature, the inverse problem of determining the coefficient from final-time and time-integral temperature measurements, among other data, has been studied, and the existence and uniqueness of its solution have been proved when the source term Q, the Dirichlet boundary conditions, and the initial condition are known [17,18,19,20].
In addition, several numerical methods have been proposed. Examples include the standard Tikhonov-type regularization method [21], the Armijo method combined with the finite element method [22], the NAG routine E04FCF combined with the finite difference method (FDM) [23], and the conjugate gradient method (CGM) [24], all used to reconstruct the coefficient numerically from the additional measurements.
Extensive research has been conducted on the inverse problem of determining the initial condition from time-integral temperature measurements and a final-instant observation when the reaction potential and the source term are known (see, for example, [25,26]).
In this paper, we study a generalization of the conditions used in [27], which amount to knowing the direct solution at certain instants of the time domain and at its final time. Such pointwise-in-time recordings are difficult to implement precisely in practice, whereas time-averaged recordings reduce the impact of significant measurement errors on the direct solution . More specifically, using the average weighted integral observations given in (3) and (4), we study the inverse problem of determining the pair in (1). Let be two given approximations of the delta function at such that , and let and be the average weighted integral observations, which are also given. Let
Taking into account the generalizations of the final observations , the integral observations (3) and (4) can be viewed in this way. Note, however, that the selection of the weight functions in (3) and (4) plays a crucial role in obtaining data relevant to the recovery of the two unknowns ; for more information, refer to [28]. On the other hand, in the literature, the determination of the reaction potential and the source term has been studied from a final-time observation in [27] and from a time-integral observation in [29] for non-degenerate parabolic problems. In the inverse problems (1), (3), and (4), these approaches can also be used to simultaneously compute the reaction coefficient and the initial condition .
The manuscript is organized as follows. The uniqueness of the inverse problem is established in Section 2. The stability and regularity results, the proof of the Fréchet differentiability of the objective functional, and the conjugate gradient and sensitivity problems are presented in Section 3. Using the conjugate gradient method (CGM) regularized by the discrepancy principle [30], the inverse problem is solved numerically in a stable way. Section 4 is devoted to the numerical simulations and their good agreement with the theoretical analysis. The well-posedness of the direct problem (1) is discussed in the following theorem.
Theorem 1. Assume that , and . The problem (1) has the following unique weak solution with The constant C depends on χ and S.
Proof. The proof follows the outline used to establish the existence and uniqueness theorem for the degenerate linear viscoelastic problem presented in [31]. □
2. Well-Posedness of the Inverse Problem
Firstly, we introduce the following admissible set:
To simplify the proof of existence and uniqueness, we reformulate the inverse problems (1), (3), and (4) as a nonlinear non-classical parabolic problem. Multiplying the first equation in (1) by and , respectively, integrating the resulting relations with respect to t from 0 to S, and using (3) and (4), we obtain
We introduce the following assumptions:
- (a) and ;
- (b) ;
- (c) and a.e. in , for some positive constants and .
Inserting (8) and (9) into (1), we obtain
Thus, solving the inverse problems (1), (3), and (4) is equivalent to finding the solution to the nonlinear parabolic problem (10). Following the technique in [19], we consider the following two auxiliary hyperbolic problems:
and
where c is a Lipschitz continuous function on defined by
and
Therefore, and satisfy
with
To prove the existence and uniqueness of the solution to the inverse problems (1), (3), and (4), we set
Theorem 2. Let and , and, for the functional spaces of the inverse problem solution, put
Suppose that assumptions (a)–(c) are satisfied, and assume that there exists a number satisfying Then, there exists at most one solution and a.e. to the inverse problems (1), (3), and (4). Proof. Likewise, we have
, which implies that
Using inequality (17) and the definition of the function , we obtain
Hence, problem (12) becomes
By taking , it is easy to see that satisfies the estimate
Let and be two solutions to problem (10), and let . Then, satisfies the following problem:
with
Using (17) and (18), we obtain and, since for , it follows that Using the fact that we obtain that
This implies the uniqueness of the solution to problem (10). Consequently, the potential given by (8) and the initial condition given by (9) are the unique solution to the inverse problems (1), (3), and (4). The proof is complete. □
We generalize the pointwise-in-time observations, which can be difficult to achieve precisely in practice, to mean-time records; such records smooth out potentially large measurement errors in the direct solution . More precisely, we look for the triple of problem (1) with average weighted integral observations. Let be the solution to the direct problem (1). We now reformulate our inverse problem as an optimization problem. In reality, the average weighted integral observations , given in (3) and (4), respectively, may contain noise. Owing to the ill-posedness of the inverse problem, tiny inaccuracies in the input data (3) and (4) lead to substantial errors in the output coefficients and . This is the main numerical challenge in reconstructing the solution. Numerically, we therefore seek an approximation of the solution from noisy measurements. Let us consider , satisfying
Hence, we simultaneously reconstruct the coefficient and the condition from the following noisy data:
We minimize the objective functional defined in (25) to obtain the solution to the inverse problem .
In this paper, our technique does not rely on the "regularize then discretize" approach; more precisely, we adopt the approach used in [32].
Theorem 3. The optimization problem in (25) admits at least one solution.
Proof. We know that This gives the existence of a minimizing sequence such that Hence, the sequence is uniformly bounded in , which proves the existence of a subsequence that converges weakly to in . Using estimate (5), we find that the sequence is uniformly bounded in , so we can extract a subsequence that converges weakly to in . Let be such that . The variational formulation of problem (1) for gives We can write Using estimate (5) and the fact that converges weakly to in , we find that Consequently, letting n tend to infinity in (26), we obtain and, since is embedded in and the solution to the direct problem is unique, we deduce that Now, by the weak lower semi-continuity of the norm, we have This shows that is a minimizer of (25) on the set . □
To calculate the gradient of the functional , we use the conjugate gradient method introduced in Section 3. First, we show that this functional is differentiable, as stated in the following lemma:
Lemma 1. Let the coefficient b and the initial condition be perturbed by small and , with , and let φ be the weak solution to (1) corresponding to . Then, the map is continuous, i.e., Proof. The proof is straightforward: use estimate (5) and replace with and in (1). □
We rely on the following lemma to establish the differentiability of the map
Lemma 2. Assume that The map is Fréchet differentiable; more precisely, there exist two operators such that Proof. Let , and be the solution to problem (1). One can verify that satisfies the following problem:
Then, problem (31) has a unique solution , which depends linearly on . Let . Then, satisfies the following problem
Using (31), it follows directly that z verifies
Now, applying estimate (5) to (33) gives From (27), we conclude that Hence, the map is Fréchet differentiable. This proves relation (29). Now, let be the solution to the problem In the same way, we can directly prove (30) for the coefficient b. □
To calculate the gradient of , we need the following lemma, which gives the existence and uniqueness of the solution to the adjoint problem of (1):
Lemma 3. Assume that and Based on these assumptions, problem (35) admits a unique solution. Moreover, there is such that Proof. To show the existence and uniqueness of p, we use the same approach as for problem (1). Multiplying both sides of the first equation of (35) by and integrating over , we obtain Now, we integrate over and use the fact that to obtain Since , dividing by , we find that This proves Lemma 3. □
Theorem 4. The functional is Fréchet differentiable, and its gradients are given, respectively, by where p is the solution to the adjoint problem (35). Proof. Let be a small perturbation of b such that , denoted by . Then, is the solution to the following problem Using problems (35) and (39) and integrating over I, we obtain According to an estimate similar to (5) for , we have and Using the same approach as that used to calculate the gradient with respect to b, we calculate the gradient with respect to the initial condition , where the direct solution solves the following problem so we find (38). □
3. Conjugate Gradient Method
To simultaneously reconstruct the coefficient b and the initial condition , we apply a process based on the conjugate gradient method (CGM), which consists of minimizing the functional in Equation (25). This process is given by:
where are the initial guesses for , i is the iteration number, and denotes the step size. Moreover, we define the search directions and by
Additionally, we can use the Fletcher–Reeves formula from [33] to obtain the step sizes . In the same way, we can find and .
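For readers who wish to implement the update of the search directions, a minimal sketch follows; it assumes the standard Fletcher–Reeves conjugation formula from [33], applied independently to the direction for b and the direction for the initial condition:

```python
import numpy as np

def fr_direction(g_new, g_old, d_old):
    """One Fletcher-Reeves update of a CG search direction (cf. [33]).

    Assumed standard formula: beta = ||g_new||^2 / ||g_old||^2 and
    d_new = -g_new + beta * d_old; the first iteration uses steepest descent.
    """
    if g_old is None:
        return -g_new
    beta = np.dot(g_new, g_new) / np.dot(g_old, g_old)
    return -g_new + beta * d_old
```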
According to the arguments used in [34], the Fréchet differentiability of , and the fact that the sequence of values of the functional is monotonically decreasing and convergent, we obtain the following result:
Theorem 5. The CGM (41)–(44) either terminates at a stationary point or converges in the following senses: Since errors are present in the average weighted integral observations (3) and (4), the iteration process given by (42) cannot be performed by the conjugate gradient method alone; the method is not well-posed because it lacks a regularization term. However, the method becomes well-posed if we apply a discrepancy criterion to stop the iteration procedure. This criterion is given by:
Thus, the iterations of this algorithm, based on the CGM, for the numerical reconstruction of the coefficient and the initial condition are as given in Algorithm 1.
Algorithm 1 CG algorithm for the minimizer of (25)
- 1. Set and initiate and for the coefficient b and the initial condition .
- 2. Determine numerically, using the finite difference method, the solution to the direct problem (1) and the objective functional (25).
- 3. Determine numerically, using the finite difference method, the solution to the adjoint problem (35) and the gradients of the objective functional (37) and (38). Calculate the coefficients and given in Equations (43) and (42), respectively.
- 4. Determine numerically, using the finite difference method, the solutions to the sensitivity problems (39) and (40), and use and , where the step sizes and are given in (44).
- 5. Update using (41).
- 6. If the condition (46) is satisfied, go to Step 7. Else, set and return to Step 2.
- 7. End.
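A minimal sketch of Algorithm 1 follows. The callbacks solve_direct, solve_adjoint, gradients, and step_sizes, as well as the stopping factor tau, are hypothetical placeholders for the finite-difference solvers of (1) and (35), the gradient formulas (37) and (38), and the step sizes (44); the residual is simplified here to a plain norm of the data misfit:

```python
import numpy as np

def cgm_reconstruct(solve_direct, solve_adjoint, gradients, step_sizes,
                    b0, u0, data_noisy, noise_level, tau=1.1, max_iter=200):
    """Sketch of Algorithm 1; all callbacks are hypothetical placeholders.

    solve_direct(b, u)          -> direct solution of (1)
    solve_adjoint(b, phi, data) -> adjoint solution of (35)
    gradients(phi, p)           -> gradients (37) and (38)
    step_sizes(b, u, db, du)    -> step sizes (44)
    """
    b, u = b0.copy(), u0.copy()
    gb_old = gu_old = db = du = None
    for i in range(max_iter):
        phi = solve_direct(b, u)
        # Simplified residual: L2 misfit between computed and noisy data.
        if np.linalg.norm(phi - data_noisy) <= tau * noise_level:
            break                                  # discrepancy criterion (46)
        p = solve_adjoint(b, phi, data_noisy)
        gb, gu = gradients(phi, p)
        if i == 0:                                 # steepest descent start
            db, du = -gb, -gu
        else:                                      # Fletcher-Reeves conjugation
            db = -gb + (gb @ gb) / (gb_old @ gb_old) * db
            du = -gu + (gu @ gu) / (gu_old @ gu_old) * du
        ab, au = step_sizes(b, u, db, du)          # step sizes (44)
        b, u = b + ab * db, u + au * du            # update (41)
        gb_old, gu_old = gb, gu
    return b, u
```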
4. Numerical Experiments
Here, we apply the CG method [35] in one and two dimensions in order to simultaneously identify the coefficient b and the initial condition in (1). We discretize the problem using the finite element method in space and the Crank–Nicolson scheme in time. For the two-dimensional problem, we apply the alternating direction implicit (ADI) method as described in [28]. We create the noisy data by adding a random perturbation, i.e.,
where and p represents the percentage of noise.
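As a sketch, noisy observations can be generated as follows; since the exact perturbation formula is the one displayed above, the Gaussian law rescaled to p percent of the data norm used here is an assumption, given only as one common choice:

```python
import numpy as np

def add_noise(data, p, seed=0):
    """Perturb the exact observations by p percent noise (assumed law:
    additive Gaussian noise rescaled to p/100 of the data norm)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(data.shape)
    return data + (p / 100.0) * np.linalg.norm(data) * noise / np.linalg.norm(noise)
```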
To demonstrate the precision of the numerical solution, we calculate the approximation error by the following formula:
where are the values reconstructed at the kth iteration, and are the exact values. The residual at the ith iteration is given by
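A sketch of these two quantities, assuming the relative L2 form for the errors (47) and (48) and a plain L2 data misfit for the residual, reads:

```python
import numpy as np

def relative_error(approx, exact):
    """Assumed form of the errors (47)-(48): relative L2 distance between
    the k-th reconstruction and the exact quantity."""
    return np.linalg.norm(approx - exact) / np.linalg.norm(exact)

def residual(computed_obs, measured_obs):
    """Assumed form of the residual at iteration i: L2 gap between the
    computed and the measured average weighted integral observations."""
    return np.linalg.norm(computed_obs - measured_obs)
```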
4.1. One-Dimensional Problem
We fix without loss of generality. We also fix and , respectively, and let . We choose and in (3) and (4) as
where q is a small positive constant, and . It is clear that for small values of q, where is the Dirac delta function. Then, according to the properties of the Dirac delta function, Equations (3) and (4) become
For all three numerical examples presented later, we choose the weight functions as
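A hedged illustration of such a delta-approximating weight is the box function equal to 1/q on (t* − q, t*] and 0 elsewhere; this is only one possible choice, since the actual weights are given by the displayed formulas above:

```python
import numpy as np

def box_weight(t, t_star, q):
    """Delta-approximating box weight: 1/q on (t_star - q, t_star], else 0.
    One possible choice only; the paper's actual weights are given above."""
    return np.where((t > t_star - q) & (t <= t_star), 1.0 / q, 0.0)

def weighted_observation(phi, t, t_star, q):
    """Average weighted integral observation int_0^S w(t) phi(x,t) dt,
    which tends to phi(x, t_star) as q -> 0; phi has shape (n_t, n_x)."""
    w = box_weight(t, t_star, q)
    return np.trapz(w[:, None] * phi, t, axis=0)
```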
Example 1. To validate this choice, we apply our proposed algorithm for the reconstruction of the coefficient and the initial condition defined on χ by We take the initial guesses .
Figure 1 shows the variation of the functional with the number of iterations i for the simultaneous determination of the coefficient b and the initial condition in the noise-free case and in the noisy cases , . It can be observed that, if , the functional quickly converges to a very small value, as it decreases with i. The algorithm stops after iterations in the noise-free case and iterations in the noisy case; these iteration numbers are obtained via the discrepancy criterion (46). The errors (47) and (48) associated with and were found to be and for the noise levels , respectively. Therefore, we can conclude that our numerical process is reasonably accurate in determining the coefficient and the initial condition . Similarly, the norms of the gradients of the functional are obtained for and with .
In Figure 2, we compare the recovered coefficient b and the recovered initial condition with their exact values at the noise levels . Figure 2 shows the numerical reconstruction of the coefficient b and the initial condition at the final iteration numbers given in Figure 1 for the different noise levels . From Figure 2, we can see that the exact and numerical solutions are almost equal; that is, the proposed algorithm for determining the two coefficients converges to the required solutions.
In the following example, we apply our method to reconstruct the coefficient b and the initial condition defined by
As in Example 1, using the discrepancy criterion (46), the stopping iterations are for the noise levels , and 5. The errors (47) and (48) associated with and were found to be and for the noise levels and 5, respectively. Under these stopping iterations, Figure 3 illustrates the exact and numerical solutions for Example 2 for the different noise levels ( , and 5).
4.2. Two-Dimensional Problem
The space–time region is partitioned into equidistant meshes. We test the numerical performance of Algorithm 1 for the reconstruction of the coefficient b and the initial condition with the following degeneracy
We take the initial guesses .
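For illustration only, one typical interior-degenerate principal coefficient, vanishing at an interior point (x0, y0), can be coded as below; the example's actual degeneracy is the displayed formula above, so this power-law form is an assumption:

```python
import numpy as np

def degenerate_coefficient(x, y, x0=0.5, y0=0.5, theta=1.0):
    """Hypothetical interior-degenerate principal coefficient vanishing at
    the interior point (x0, y0); the power-law form is an assumption."""
    return np.abs(x - x0) ** theta * np.abs(y - y0) ** theta
```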
Example 3. Suppose that the exact coefficient and initial condition for the degenerate wave problem (1) are given by: For this example, the number of final CGM iterations is for the noise levels , and 5 (Figure 4). The errors (47) and (48), associated with and , were found to be , and . Additionally, , and . The exact coefficient and initial condition and the recovered solutions are shown in Figure 5 and Figure 6. The absolute errors between the exact coefficient and initial condition and their numerical reconstructions are shown in Figure 7 and Figure 8. We can see that the recovered terms are very close to the exact solutions, which shows the effectiveness of our proposed method.
Table 1 and Table 2 report the different values of and for , , and .
Here, we summarize the results of the numerical experiments. For the one-dimensional problems considered in Example 1, the resulting figures can be described as follows. Figure 1a shows the graph of the functional without noise (0 percent) with iteration number , while Figure 1b shows the graph of the functional with a 1 percent noise level and iteration number , and Figure 1c shows the graph of the functional with a 5 percent noise level and iteration number . Figure 2a shows the reaction coefficient b without noise (0 percent) and with noise levels of 1 and 5 percent. Additionally, Figure 2b shows the initial condition without noise (0 percent) and with noise levels of 1 and 5 percent. For Example 2, Figure 3a shows the reaction coefficient b without noise (0 percent) and with noise levels of 1 and 5 percent, while Figure 3b shows the initial condition without noise (0 percent) and with noise levels of 1 and 5 percent.
For the two-dimensional problem of Example 3, Figure 4a shows the graph of the functional without noise (0 percent) with iteration number . Figure 4b shows the graph of the functional J with a 1 percent noise level and iteration number , and Figure 4c shows the graph of the functional J with a 5 percent noise level and iteration number . Figure 5 shows the exact reaction coefficient. Figure 9a, Figure 9b, and Figure 9c show the recovered reaction coefficient b without noise (0 percent) and with noise levels of 1 and 5 percent, respectively. Figure 7a, Figure 7b, and Figure 7c show the absolute error between the exact and numerical reaction coefficient b without noise (0 percent) and with noise levels of 1 and 5 percent, respectively. Figure 6 gives the exact initial condition. Figure 10a, Figure 10b, and Figure 10c show the recovered initial condition without noise (0 percent) and with noise levels of 1 and 5 percent, respectively. Figure 8a, Figure 8b, and Figure 8c show the absolute error between the exact and numerical initial condition without noise (0 percent) and with noise levels of 1 and 5 percent, respectively.