Article

Identification of Source Term for the Time-Fractional Diffusion-Wave Equation by Fractional Tikhonov Method

1
Faculty of Natural Sciences, Thu Dau Mot University, Thu Dau Mot City 820000, Binh Duong Province, Vietnam
2
Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
3
Faculty of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
4
Nonlinear Analysis and Applied Mathematics (NAAM) Research Group, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
5
Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2019, 7(10), 934; https://doi.org/10.3390/math7100934
Submission received: 17 August 2019 / Revised: 24 September 2019 / Accepted: 4 October 2019 / Published: 10 October 2019

Abstract

In this article, we consider an inverse problem of determining an unknown source term in a time-fractional diffusion-wave equation. Such inverse problems are typically ill-posed. By an example, we show that this problem is not well-posed in the sense of Hadamard: it violates the third condition, i.e., the solution does not depend continuously on the input data. This calls for a regularization method, and we solve the problem by the fractional Tikhonov method. In the theoretical results, we also propose a priori and a posteriori parameter choice rules and analyze them.

1. Introduction

Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ with sufficiently smooth boundary $\partial\Omega$, and let $\beta \in (1,2)$. In this paper, we consider the inverse source problem for the time-fractional diffusion-wave equation:
$$
\begin{cases}
\partial_{0+}^{\beta} u(x,t) = \Delta u(x,t) + \Xi(x), & (x,t) \in \Omega \times (0,T),\\
u(x,t) = 0, & (x,t) \in \partial\Omega \times (0,T],\\
u(x,0) = f(x), & x \in \Omega,\\
u_t(x,0) = g(x), & x \in \Omega,\\
u(x,T) = h(x), & x \in \Omega,
\end{cases}
$$
where $\partial_{0+}^{\beta} u(x,t)$ is the Caputo fractional derivative of order $\beta$, defined as [1]
$$
\partial_{0+}^{\beta} u(x,t) = \frac{1}{\Gamma(2-\beta)} \int_0^t \frac{\partial^2 u(x,s)}{\partial s^2}\, \frac{ds}{(t-s)^{\beta-1}}, \quad 1 < \beta < 2,
$$
where $\Gamma(\cdot)$ is the Gamma function.
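As a quick numerical sanity check on this definition (our own illustrative sketch, not part of the paper), the Caputo derivative of $u(t) = t^2$ of order $\beta \in (1,2)$ has the closed form $2t^{2-\beta}/\Gamma(3-\beta)$, which a midpoint-rule discretization of the integral should reproduce:

```python
import math

def caputo(u_dd, t, beta, n=100000):
    # (1/Gamma(2-beta)) * int_0^t u''(s) * (t-s)^(1-beta) ds, 1 < beta < 2,
    # approximated by the midpoint rule (which avoids the endpoint singularity)
    h = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += u_dd(s) * (t - s) ** (1.0 - beta) * h
    return total / math.gamma(2.0 - beta)

beta, t = 1.5, 1.0
approx = caputo(lambda s: 2.0, t, beta)                  # u(t) = t^2, so u'' = 2
exact = 2.0 * t ** (2.0 - beta) / math.gamma(3.0 - beta)
```

The grid size and the test function are arbitrary choices for the check; the weak singularity at $s = t$ limits the accuracy of this naive quadrature.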
It is known that the inverse source problem above is ill-posed in general: a solution does not always exist and, when it does, it does not depend continuously on the given data. In practice, the data $(h, f, g)$ are known only through noisy measurements $(h^{\varepsilon_1}, f^{\varepsilon_2}, g^{\varepsilon_3})$ with noise levels $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, $\varepsilon_3 > 0$:
$$
\|h - h^{\varepsilon_1}\|_{L^2(\Omega)} \le \varepsilon_1, \qquad \|f - f^{\varepsilon_2}\|_{L^2(\Omega)} \le \varepsilon_2, \qquad \|g - g^{\varepsilon_3}\|_{L^2(\Omega)} \le \varepsilon_3,
$$
where $\|\cdot\|_{L^2(\Omega)}$ denotes the $L^2$ norm. Even when $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$ are small, the sought source $\Xi(x)$ may have a large error. The inverse problem is to determine $\Xi(x)$ from the noisy data $(h^{\varepsilon_1}, f^{\varepsilon_2}, g^{\varepsilon_3})$.
Hence, a regularization is required. Inverse source problems for the time-fractional diffusion equation with $0 < \beta < 1$ have been studied by several authors. Tuan et al. [2] used the Tikhonov regularization method to solve the inverse source problem with final-time data and established estimates between the exact and regularized solutions under a priori and a posteriori parameter choice rules. Wei et al. [3,4,5] studied an inverse source problem in a spatial fractional diffusion equation by quasi-boundary value and truncation methods. Fan Yang et al. [6] used the Landweber iteration regularization method to determine the unknown source for the modified Helmholtz equation. To the best of our knowledge, Salir Tarta et al. [7] used these properties and the analytic Fredholm theorem to prove that the inverse source problem is well-posed, i.e., $f(t,x)$ can be determined uniquely and depends continuously on the additional data $u(T,x)$, $x \in \Omega$; see also [8,9], where the authors studied the nonlocal inverse problem in one space dimension together with a numerical algorithm. Furthermore, the backward problem for the diffusion-wave equation remains open and still receives attention. In 2017, Tuan et al. [10] considered
$$
\begin{cases}
\dfrac{\partial^{\beta}}{\partial t^{\beta}} u(x,t) = -r^{\beta} (-\Delta)^{\alpha/2} u(x,t) + h(t) f(x), & (x,t) \in \Omega_T,\\
u(-1,t) = u(1,t) = 0, & 0 < t < T,\\
u(x,0) = 0, & x \in \Omega,\\
u(x,T) = g(x), & x \in \Omega,
\end{cases}
$$
where $\Omega_T = (-1,1) \times (0,T)$; $r > 0$ is a parameter; $h \in C[0,T]$ is a given function; $\beta \in (0,1)$ and $\alpha \in (1,2)$ are the fractional orders of the time and space derivatives, respectively; and $T > 0$ is a final time. The function $u = u(x,t)$ denotes the concentration of a contaminant at position $x$ and time $t$, and $(-\Delta)^{\alpha/2}$ is the fractional Laplacian. As $\alpha$ tends to 2, the fractional Laplacian tends to the usual Laplacian; see [1,2,7,8,9,10,11,12,13,14,15,16]. In this paper, we use the fractional Tikhonov regularization method to identify the source term of the fractional diffusion-wave equation with variable coefficients in a general bounded domain. The fractional Tikhonov method itself is not new: in [16], Zhi Quan and Xiao Li Feng applied it to the Helmholtz equation. Here, we establish a convergence rate under an a priori bound assumption on the exact solution together with an a priori parameter choice rule, as well as a convergence rate under an a posteriori parameter choice rule.
In several papers, the fractional diffusion-wave equation has been shown to play an important role in describing physical phenomena, such as diffusion processes in media with fractal geometry; see [17]. Nowadays, fractional calculus receives increasing attention in the scientific community, with a growing number of applications in physics, electrochemistry, biophysics, viscoelasticity, biomedicine, control theory, signal processing, etc.; see [18]. In many papers, the Mittag–Leffler function and its properties are studied, and the results are used to model different physical phenomena; see [19,20].
The rest of this article is organized as follows. In Section 2, we introduce some preliminary results, the ill-posedness of the fractional inverse source problem (1), and a conditional stability estimate. In Section 3, we present the fractional Tikhonov regularization method. We then give two convergence estimates under an a priori assumption on the exact solution and two regularization parameter choice rules: the a priori choice in Section 4 and the a posteriori choice in Section 5.

2. Preliminary Results

In this section, we introduce a few properties of the eigenvalues of the operator $-\Delta$; see [21].
Definition 1
(Eigenvalues of the Laplacian operator).
1. 
Each eigenvalue of $-\Delta$ is real. The family of eigenvalues $\{\tilde b_i\}_{i=1}^{\infty}$ satisfies $0 < \tilde b_1 \le \tilde b_2 \le \tilde b_3 \le \cdots$, and $\tilde b_i \to \infty$ as $i \to \infty$.
2. 
We take $\{\tilde b_i, e_i\}$ to be the eigenvalues and corresponding eigenvectors of the Laplacian operator in $\Omega$ with the Dirichlet boundary condition on $\partial\Omega$:
$$
-\Delta e_i(x) = \tilde b_i\, e_i(x), \quad x \in \Omega, \qquad e_i(x) = 0, \quad x \in \partial\Omega,
$$
for $i = 1, 2, \ldots$. Then, for $u = \sum_{i=1}^{\infty} c_i e_i$, we define the operator $-\Delta$ by
$$
-\Delta u := \sum_{i=1}^{\infty} c_i \big( -\Delta e_i(x) \big) = \sum_{i=1}^{\infty} c_i\, \tilde b_i\, e_i(x),
$$
which maps $H_0^{\kappa}(\Omega)$ into $L^2(\Omega)$. Let $0 \le \kappa < \infty$. By $H^{\kappa}(\Omega)$, we denote the space of all functions $g \in L^2(\Omega)$ with the property
$$
\sum_{i=1}^{\infty} (1 + \tilde b_i)^{2\kappa} |g_i|^2 < \infty,
$$
where $g_i = \int_{\Omega} g(x) e_i(x)\, dx$. On this space, we define the norm $\|g\|_{H^{\kappa}(\Omega)} = \big( \sum_{i=1}^{\infty} (1 + \tilde b_i)^{2\kappa} |g_i|^2 \big)^{1/2}$. If $\kappa = 0$, then $H^{\kappa}(\Omega)$ is $L^2(\Omega)$.
Definition 2
(See [1]). The Mittag–Leffler function is
$$
E_{\beta,\gamma}(z) = \sum_{i=0}^{\infty} \frac{z^i}{\Gamma(\beta i + \gamma)}, \quad z \in \mathbb{C},
$$
where $\beta > 0$ and $\gamma \in \mathbb{R}$ are arbitrary constants.
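For numerical experiments, the series can be truncated directly; this naive evaluation (our sketch) is adequate in double precision for moderate $|z|$ and reproduces the classical special cases $E_{1,1}(z) = e^z$ and $E_{2,1}(z) = \cosh(\sqrt{z})$:

```python
import math

def mittag_leffler(z, beta, gamma, terms=80):
    # truncated series E_{beta,gamma}(z) = sum_i z^i / Gamma(beta*i + gamma);
    # for large |z| truncation and cancellation errors grow, so asymptotic
    # expansions (cf. Lemma 1 below) should be used there instead
    return sum(z ** i / math.gamma(beta * i + gamma) for i in range(terms))

e_val = mittag_leffler(1.0, 1.0, 1.0)     # should match exp(1)
cosh_val = mittag_leffler(1.0, 2.0, 1.0)  # should match cosh(1)
```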
Lemma 1
(See [21]). For $1 < \beta < 2$, $\gamma \in \mathbb{R}$, and $\omega > 0$, we have
$$
E_{\beta,\gamma}(-\omega) = \frac{1}{\Gamma(\gamma - \beta)\, \omega} + O\!\left( \frac{1}{\omega^2} \right), \quad \omega \to \infty.
$$
Lemma 2
(See [1]). Let $0 < \beta < 2$ and $\gamma \in \mathbb{R}$, and suppose $\zeta$ satisfies $\frac{\pi\beta}{2} < \zeta < \min\{\pi, \pi\beta\}$. Then there exists a constant $\tilde A > 0$ such that, for $\zeta \le |\arg(y)| \le \pi$,
$$
|E_{\beta,\gamma}(y)| \le \frac{\tilde A}{1 + |y|}.
$$
Lemma 3
(See [22]). The following equality holds for $\tilde b > 0$, $\alpha > 0$, and $m \in \mathbb{N}$:
$$
\frac{d^m}{dt^m} E_{\alpha,1}(-\tilde b t^{\alpha}) = -\tilde b\, t^{\alpha - m} E_{\alpha,\alpha - m + 1}(-\tilde b t^{\alpha}), \quad t > 0.
$$
Lemma 4.
For $\tilde b_i > 0$, $\beta > 0$, and $i \in \mathbb{N}$, we have
$$
(1)\ \frac{d}{dt}\big( t\, E_{\beta,2}(-\tilde b_i t^{\beta}) \big) = E_{\beta,1}(-\tilde b_i t^{\beta}), \qquad
(2)\ \frac{d}{dt} E_{\beta,1}(-\tilde b_i t^{\beta}) = -\tilde b_i\, t^{\beta-1} E_{\beta,\beta}(-\tilde b_i t^{\beta}).
$$
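Both identities can be checked against the series definition by central differences (a sketch; the values $\beta = 1.4$, $\tilde b = 2$, $t = 0.7$ are arbitrary test choices):

```python
import math

def ml(z, beta, gamma, terms=80):
    # truncated Mittag-Leffler series; fine for the small arguments used here
    return sum(z ** i / math.gamma(beta * i + gamma) for i in range(terms))

beta, b, t, h = 1.4, 2.0, 0.7, 1e-5

# (1): d/dt [ t * E_{beta,2}(-b t^beta) ] = E_{beta,1}(-b t^beta)
lhs1 = ((t + h) * ml(-b * (t + h) ** beta, beta, 2.0)
        - (t - h) * ml(-b * (t - h) ** beta, beta, 2.0)) / (2.0 * h)
rhs1 = ml(-b * t ** beta, beta, 1.0)

# (2): d/dt E_{beta,1}(-b t^beta) = -b t^(beta-1) E_{beta,beta}(-b t^beta)
lhs2 = (ml(-b * (t + h) ** beta, beta, 1.0)
        - ml(-b * (t - h) ** beta, beta, 1.0)) / (2.0 * h)
rhs2 = -b * t ** (beta - 1.0) * ml(-b * t ** beta, beta, beta)
```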
Lemma 5.
For any $\tilde b_i$ satisfying $\tilde b_i \ge \tilde b_1 > 0$, there exist positive constants $A$, $B$ such that
$$
\frac{A}{\tilde b_i T^{\beta}} \le \big| E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big| \le \frac{B}{\tilde b_i T^{\beta}}.
$$
Lemma 6
(See [21]). Let $a > 0$. For $\Re(p) > |a|^{1/\beta}$, we have
$$
\int_0^{\infty} e^{-pt}\, t^{\beta i + \gamma - 1} E^{(i)}_{\beta,\gamma}(\pm a t^{\beta})\, dt = \frac{i!\, p^{\beta - \gamma}}{(p^{\beta} \mp a)^{i+1}},
$$
where $E^{(i)}_{\beta,\gamma}(y) := \frac{d^i}{dy^i} E_{\beta,\gamma}(y)$.
Lemma 7.
For constants $\xi \ge \tilde b_1$ and $\frac12 \le \tau \le 1$, one has
$$
C(\xi) = \frac{\xi}{A^{2\tau} + \alpha^2 \xi^{2\tau}} \le \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}},
$$
where $\bar C = \bar C(\tau, A)$ is independent of $\alpha$ and $\xi$.
Proof. 
Let $\frac12 \le \tau \le 1$. Solving $C'(\xi_0) = 0$, we find the unique critical point $\xi_0 = A (2\tau - 1)^{-\frac{1}{2\tau}} \alpha^{-\frac{1}{\tau}}$, which gives
$$
C(\xi) \le C(\xi_0) \le \frac{A^{1-2\tau}}{2\tau} (2\tau - 1)^{\frac{2\tau - 1}{2\tau}}\, \alpha^{-\frac{1}{\tau}} := \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}}.
$$
 □
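The bound of Lemma 7 can be probed numerically by scanning $C(\xi)$ on a grid around the critical point $\xi_0$ (the values of $A$, $\tau$, $\alpha$ below are arbitrary test choices):

```python
A, tau, alpha = 1.3, 0.8, 1e-3

def C(xi):
    # the function of Lemma 7
    return xi / (A ** (2 * tau) + alpha ** 2 * xi ** (2 * tau))

# claimed maximizer and the bound C_bar(tau, A) * alpha^(-1/tau)
xi0 = A * (2 * tau - 1) ** (-1.0 / (2 * tau)) * alpha ** (-1.0 / tau)
C_bar = A ** (1 - 2 * tau) / (2 * tau) * (2 * tau - 1) ** ((2 * tau - 1) / (2 * tau))
bound = C_bar * alpha ** (-1.0 / tau)

# sample C on a grid from 0.001*xi0 to 4*xi0 (which contains the maximizer)
grid_max = max(C(xi0 * s / 1000.0) for s in range(1, 4001))
```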
Lemma 8.
Let $\xi \ge \tilde b_1$ and $\frac12 \le \tau \le 1$. Then
$$
D(\xi) = \frac{\alpha^2 \xi^{2\tau - j}}{A^{2\tau} + \alpha^2 \xi^{2\tau}} \le
\begin{cases}
B_1(j, \tau, A)\, \alpha^{\frac{j}{\tau}}, & 0 < j < 2\tau,\\
B_2(j, \tau, A, \tilde b_1)\, \alpha^2, & j \ge 2\tau.
\end{cases}
$$
Proof. 
  • If $j \ge 2\tau$, then from $\xi \ge \tilde b_1$ we get
$$
D(\xi) = \frac{\alpha^2 \xi^{2\tau - j}}{A^{2\tau} + \alpha^2 \xi^{2\tau}} \le \frac{\alpha^2 \xi^{2\tau - j}}{A^{2\tau}} \le \frac{\alpha^2}{A^{2\tau}\, \tilde b_1^{\, j - 2\tau}} = \frac{1}{A^{2\tau}\, \tilde b_1^{\, j - 2\tau}}\, \alpha^2.
$$
  • If $0 < j < 2\tau$, then $\lim_{\xi \to 0} D(\xi) = \lim_{\xi \to +\infty} D(\xi) = 0$. Taking the derivative of $D$ with respect to $\xi$, we obtain
$$
D'(\xi) = \frac{\alpha^2 (2\tau - j)\, \xi^{2\tau - j - 1} (A^{2\tau} + \alpha^2 \xi^{2\tau}) - 2\tau\, \alpha^4\, \xi^{4\tau - j - 1}}{(A^{2\tau} + \alpha^2 \xi^{2\tau})^2}.
$$
From (16), a simple transformation gives
$$
D'(\xi) = \frac{\alpha^2 (2\tau - j)\, A^{2\tau} \xi^{2\tau - j - 1} - \alpha^4\, j\, \xi^{4\tau - j - 1}}{(A^{2\tau} + \alpha^2 \xi^{2\tau})^2}.
$$
$D(\xi)$ attains its maximum at the point $\xi_0$ satisfying $D'(\xi_0) = 0$; solving this equation gives $\xi_0 = A (2\tau - j)^{\frac{1}{2\tau}}\, j^{-\frac{1}{2\tau}}\, \alpha^{-\frac{1}{\tau}}$.
Hence, we conclude
$$
D(\xi) \le D(\xi_0) = \frac{(2\tau - j)^{\frac{2\tau - j}{2\tau}}\, j^{\frac{j}{2\tau}}}{2\tau\, A^{j}}\, \alpha^{\frac{j}{\tau}}.
$$
 □
Lemma 9.
Let $\xi \ge \tilde b_1 > 0$ and $\frac12 \le \tau \le 1$, and let $F$ be the function defined by $F(\xi) = \frac{\alpha^2 \xi^{2\tau - (j+1)}}{A^{2\tau} + \alpha^2 \xi^{2\tau}}$. Then
$$
F(\xi) \le
\begin{cases}
B_3(j, \tau, A)\, \alpha^{\frac{j+1}{\tau}}, & 0 < j < 2\tau - 1,\\
B_4(j, \tau, A, \tilde b_1)\, \alpha^2, & j \ge 2\tau - 1,
\end{cases}
$$
where $B_3(j, \tau, A) = \frac{(2\tau - j - 1)^{\frac{2\tau - j - 1}{2\tau}}\, (j+1)^{\frac{j+1}{2\tau}}}{2\tau\, A^{j+1}}$ and $B_4(j, \tau, A, \tilde b_1) = \frac{1}{A^{2\tau}\, \tilde b_1^{\,(j+1) - 2\tau}}$.
Proof. 
  • If $j \ge 2\tau - 1$, then for $\xi \ge \tilde b_1$ we know that
$$
F(\xi) \le \frac{\alpha^2 \xi^{2\tau - (j+1)}}{A^{2\tau}} \le \frac{1}{A^{2\tau}\, \tilde b_1^{\,(j+1) - 2\tau}}\, \alpha^2 = B_4(j, \tau, A, \tilde b_1)\, \alpha^2.
$$
  • If $0 < j < 2\tau - 1$, then $\lim_{\xi \to 0} F(\xi) = \lim_{\xi \to \infty} F(\xi) = 0$, so
$$
F(\xi) \le \sup_{\xi \in (0, +\infty)} F(\xi) \le F(\xi_0).
$$
Taking the derivative of $F$ with respect to $\xi$, we obtain
$$
F'(\xi) = \frac{A^{2\tau} \alpha^2 (2\tau - j - 1)\, \xi^{2\tau - j - 2} - \alpha^4 (j+1)\, \xi^{4\tau - j - 2}}{(A^{2\tau} + \alpha^2 \xi^{2\tau})^2}.
$$
The function $F(\xi)$ attains its maximum at the point $\xi_0 \in (0, +\infty)$ satisfying $F'(\xi_0) = 0$. Solving $F'(\xi_0) = 0$, we obtain $\xi_0 = A (2\tau - j - 1)^{\frac{1}{2\tau}}\, (j+1)^{-\frac{1}{2\tau}}\, \alpha^{-\frac{1}{\tau}} > 0$, and then
$$
F(\xi) \le F(\xi_0) = \frac{(2\tau - j - 1)^{\frac{2\tau - j - 1}{2\tau}}\, (j+1)^{\frac{j+1}{2\tau}}}{2\tau\, A^{j+1}}\, \alpha^{\frac{j+1}{\tau}}.
$$
The proof of Lemma 9 is completed. □
Now, we use separation of variables to derive the solution of (1). Suppose that the solution of (1) has the Fourier expansion
$$
u(x,t) = \sum_{i=1}^{\infty} u_i(t)\, e_i(x), \quad \text{with } u_i(t) = \langle u(\cdot, t), e_i(\cdot) \rangle.
$$
Then $u_i(t)$ is the solution of the following fractional ordinary differential equation with initial conditions:
$$
\begin{cases}
\dfrac{\partial^{\beta}}{\partial t^{\beta}} u_i(t) = -\tilde b_i\, u_i(t) + \Xi_i, & t \in (0,T),\\
u_i(0) = \langle f, e_i \rangle,\\
u_i'(0) = \langle g, e_i \rangle,
\end{cases}
$$
where $\Xi_i = \langle \Xi, e_i \rangle$.
As in Sakamoto and Yamamoto [22], the solution of the initial value problem (22) is
$$
u_i(t) = t^{\beta} E_{\beta,\beta+1}(-\tilde b_i t^{\beta}) \langle \Xi, e_i \rangle + E_{\beta,1}(-\tilde b_i t^{\beta}) \langle f, e_i \rangle + t\, E_{\beta,2}(-\tilde b_i t^{\beta}) \langle g, e_i \rangle.
$$
Hence, we get
$$
u(x,t) = \sum_{i=1}^{\infty} \Big[ t^{\beta} E_{\beta,\beta+1}(-\tilde b_i t^{\beta}) \langle \Xi, e_i \rangle + E_{\beta,1}(-\tilde b_i t^{\beta}) \langle f, e_i \rangle + t\, E_{\beta,2}(-\tilde b_i t^{\beta}) \langle g, e_i \rangle \Big] e_i(x).
$$
Letting $t = T$ and using the final condition $u(x,T) = h(x)$, we obtain
$$
h(x) = \sum_{i=1}^{\infty} \Big[ T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \langle \Xi, e_i \rangle + E_{\beta,1}(-\tilde b_i T^{\beta}) \langle f, e_i \rangle + T\, E_{\beta,2}(-\tilde b_i T^{\beta}) \langle g, e_i \rangle \Big] e_i(x).
$$
By denoting $h_i = \langle h, e_i \rangle$, $f_i = \langle f, e_i \rangle$, $g_i = \langle g, e_i \rangle$, and $\Xi_i = \langle \Xi, e_i \rangle$, a simple transformation gives
$$
\Xi_i = \frac{h_i - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i}{T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})}.
$$
Then, we obtain the formula of the source function $\Xi(x)$:
$$
\Xi(x) = \sum_{i=1}^{\infty} \frac{R_i}{T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})}\, e_i(x),
$$
where $R_i = h_i - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i$.
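To make the formulas concrete, here is a round-trip check in the toy setting $\Omega = (0, \pi)$, $e_i(x) = \sqrt{2/\pi}\sin(ix)$, $\tilde b_i = i^2$ (our illustrative choice, not from the paper): generate $h_i$ from prescribed $\Xi_i$, $f_i$, $g_i$ by the final-value expansion, then recover $\Xi_i$ by the inversion formula above. The truncated-series Mittag–Leffler evaluation is a sketch, adequate for these moderate arguments:

```python
import math

def ml(z, beta, gamma, terms=80):
    # truncated Mittag-Leffler series (adequate for the moderate |z| below)
    return sum(z ** i / math.gamma(beta * i + gamma) for i in range(terms))

beta, T = 1.5, 1.0
b = [i * i for i in range(1, 4)]           # b_i = i^2 for Omega = (0, pi)
xi_true = [1.0, -0.5, 0.25]                # prescribed source coefficients Xi_i
f = [0.3, 0.1, -0.2]                       # initial value coefficients f_i
g = [0.05, -0.1, 0.2]                      # initial velocity coefficients g_i

# forward: final data h_i from (Xi_i, f_i, g_i)
h = [T ** beta * ml(-bi * T ** beta, beta, beta + 1.0) * xi
     + ml(-bi * T ** beta, beta, 1.0) * fi
     + T * ml(-bi * T ** beta, beta, 2.0) * gi
     for bi, xi, fi, gi in zip(b, xi_true, f, g)]

# inverse: recover Xi_i from the exact data (h_i, f_i, g_i)
xi_rec = [(hi - ml(-bi * T ** beta, beta, 1.0) * fi
           - T * ml(-bi * T ** beta, beta, 2.0) * gi)
          / (T ** beta * ml(-bi * T ** beta, beta, beta + 1.0))
          for bi, hi, fi, gi in zip(b, h, f, g)]
```

With exact data the recovery is exact; the difficulty discussed next arises only once the data carry noise.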
In the following Theorem, we provide the uniqueness property of the inverse source problem.
Theorem 1.
The pair of solutions $\big( u(x,t), \Xi(x) \big)$ of problem (1) is unique.
Proof. 
Assume that $\Xi^1$ and $\Xi^2$ are the source functions corresponding, via (27) and (28), to the data $(h^1, f^1, g^1)$ and $(h^2, f^2, g^2)$, respectively, so that for each $i$,
$$
R_i^1 = h_i^1 - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i^1 - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i^1, \qquad
R_i^2 = h_i^2 - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i^2 - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i^2.
$$
Suppose that $h^1 = h^2$, $f^1 = f^2$, and $g^1 = g^2$; we prove that $\Xi^1 = \Xi^2$. Indeed, using the inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$, we get
$$
\|R^1 - R^2\|^2_{L^2(\Omega)} \le 3 \|h^1 - h^2\|^2_{L^2(\Omega)} + \frac{3 B^2}{|\tilde b_1 T^{\beta}|^2} \|f^1 - f^2\|^2_{L^2(\Omega)} + \frac{3 T^2 B^2}{|\tilde b_1 T^{\beta}|^2} \|g^1 - g^2\|^2_{L^2(\Omega)}.
$$
Since $h^1 = h^2$, $f^1 = f^2$, and $g^1 = g^2$, the right-hand side vanishes, so $R^1 = R^2$, and by (28) we conclude $\Xi^1 = \Xi^2$. The proof is completed. □

2.1. The Ill-Posedness of the Inverse Source Problem

Theorem 2.
The inverse source problem is ill-posed.
Define a linear operator $K : L^2(\Omega) \to L^2(\Omega)$ as follows:
$$
K \Xi(x) = \int_{\Omega} k(x, \omega)\, \Xi(\omega)\, d\omega = R(x), \quad x \in \Omega,
$$
where $k(x, \omega)$ is the kernel
$$
k(x, \omega) = \sum_{i=1}^{\infty} T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})\, e_i(x)\, e_i(\omega),
$$
$\{e_i(x)\}$ is an orthonormal basis of $L^2(\Omega)$, and the singular values of $K$ are
$$
\xi_i = T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}).
$$
Proof. 
Due to $k(x, \omega) = k(\omega, x)$, we know $K$ is a self-adjoint operator. Next, we prove its compactness. Define the finite-rank operators $K_N$ as follows:
$$
K_N \Xi(x) = \sum_{i=1}^{N} T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})\, \langle \Xi, e_i \rangle\, e_i(x).
$$
Then, from (31) and (34), combined with Lemma 5, we have
$$
\|K_N \Xi - K \Xi\|^2_{L^2(\Omega)} = \sum_{i=N+1}^{\infty} \big| T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big|^2 |\langle \Xi, e_i \rangle|^2 \le \sum_{i=N+1}^{\infty} \frac{B^2}{\tilde b_i^2} |\langle \Xi, e_i \rangle|^2 \le \frac{B^2}{\tilde b_N^2} \sum_{i=N+1}^{\infty} |\langle \Xi, e_i \rangle|^2.
$$
Therefore, $\|K_N - K\| \to 0$ in the operator norm of $\mathcal{L}(L^2(\Omega); L^2(\Omega))$ as $N \to \infty$. Hence, $K$ is a compact self-adjoint operator, and it admits the orthonormal eigenbasis $\{e_i\}$ in $L^2(\Omega)$. From (31), the inverse source problem introduced above can be formulated as the operator equation
$$
K \Xi(x) = R(x),
$$
and by Kirsch [23], we conclude that it is ill-posed. To illustrate this, we present an example. Fix $\beta$ and choose the input data
$$
h^m(x) = \frac{e_m(x)}{\sqrt{\tilde b_m}}, \qquad g^m = \frac{T^{2-2\beta}}{B^2} \frac{e_m(x)}{\sqrt{\tilde b_m}}, \qquad f^m = \frac{T^{2-\beta}}{B^2} \frac{e_m(x)}{\sqrt{\tilde b_m}}.
$$
Due to (28) combined with (37), the source term $\Xi^m$ corresponding to $(h^m, f^m, g^m)$ is
$$
\Xi^m(x) = \sum_{i=1}^{\infty} \frac{\langle R^m, e_i \rangle}{T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})}\, e_i(x) = \frac{e_m(x)}{\sqrt{\tilde b_m}} \cdot \frac{1 - \frac{T^{2-\beta}}{B^2} E_{\beta,1}(-\tilde b_m T^{\beta}) - \frac{T^{3-2\beta}}{B^2} E_{\beta,2}(-\tilde b_m T^{\beta})}{T^{\beta} E_{\beta,\beta+1}(-\tilde b_m T^{\beta})},
$$
where $R^m = h^m - E_{\beta,1}(-\tilde b_m T^{\beta})\, f^m - T\, E_{\beta,2}(-\tilde b_m T^{\beta})\, g^m$.
Let us choose the other input data $h = f = g = 0$; by (28), the corresponding source term is $\Xi = 0$. The errors in the $L^2(\Omega)$ norm between the two sets of input data are
$$
\|h^m - h\|_{L^2(\Omega)} = \left\| \frac{e_m(x)}{\sqrt{\tilde b_m}} \right\|_{L^2(\Omega)} = \frac{1}{\sqrt{\tilde b_m}}, \qquad
\|g^m - g\|_{L^2(\Omega)} = \frac{T^{2-2\beta}}{B^2} \frac{1}{\sqrt{\tilde b_m}}, \qquad
\|f^m - f\|_{L^2(\Omega)} = \frac{T^{2-\beta}}{B^2} \frac{1}{\sqrt{\tilde b_m}},
$$
with $B$ as defined in Lemma 5. Therefore,
$$
\lim_{m \to +\infty} \|h^m - h\|_{L^2(\Omega)} = \lim_{m \to +\infty} \|g^m - g\|_{L^2(\Omega)} = \lim_{m \to +\infty} \|f^m - f\|_{L^2(\Omega)} = 0.
$$
The error in the $L^2$ norm between the two corresponding source terms is
$$
\|\Xi^m - \Xi\|_{L^2(\Omega)} = \frac{\big| 1 - \frac{T^{2-\beta}}{B^2} E_{\beta,1}(-\tilde b_m T^{\beta}) - \frac{T^{3-2\beta}}{B^2} E_{\beta,2}(-\tilde b_m T^{\beta}) \big|}{\sqrt{\tilde b_m}\, T^{\beta}\, \big| E_{\beta,\beta+1}(-\tilde b_m T^{\beta}) \big|}.
$$
Using the upper bound of Lemma 5, $|E_{\beta,\beta+1}(-\tilde b_m T^{\beta})| \le B / (\tilde b_m T^{\beta})$, we obtain
$$
\|\Xi^m - \Xi\|_{L^2(\Omega)} \ge \frac{\sqrt{\tilde b_m}}{B} \left| 1 - \frac{T^{2-\beta}}{B^2} E_{\beta,1}(-\tilde b_m T^{\beta}) - \frac{T^{3-2\beta}}{B^2} E_{\beta,2}(-\tilde b_m T^{\beta}) \right|.
$$
By Lemma 2, $|E_{\beta,1}(-\tilde b_m T^{\beta})| \le \frac{\tilde A}{1 + \tilde b_m T^{\beta}}$ and $|E_{\beta,2}(-\tilde b_m T^{\beta})| \le \frac{\tilde A}{1 + \tilde b_m T^{\beta}}$ both tend to $0$ as $m \to \infty$, hence
$$
\lim_{m \to +\infty} \|\Xi^m - \Xi\|_{L^2(\Omega)} \ge \lim_{m \to +\infty} \frac{\sqrt{\tilde b_m}}{B} \left( 1 - \frac{(T^{2-\beta} + T^{3-2\beta})\, \tilde A}{B^2 (1 + \tilde b_m T^{\beta})} \right) = +\infty.
$$
Combining (40) and (43), we conclude that the inverse source problem is ill-posed. □
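The blow-up in this example can be observed numerically in the toy setting $\tilde b_m = m^2$ (illustrative values; truncated-series Mittag–Leffler as before): the data error $1/\sqrt{\tilde b_m}$ decays, while the induced source error, of size $1/\big(\sqrt{\tilde b_m}\, T^{\beta} E_{\beta,\beta+1}(-\tilde b_m T^{\beta})\big) \approx \sqrt{\tilde b_m}$, grows:

```python
import math

def ml(z, beta, gamma, terms=80):
    # truncated Mittag-Leffler series (adequate for these arguments)
    return sum(z ** i / math.gamma(beta * i + gamma) for i in range(terms))

beta, T = 1.5, 1.0
data_err, source_err = [], []
for m in (2, 4, 8):
    bm = float(m * m)                                   # b_m = m^2
    data_err.append(1.0 / math.sqrt(bm))                # ||h^m - h|| = 1/sqrt(b_m)
    # size of the m-th source coefficient produced by this data perturbation:
    amp = 1.0 / (T ** beta * ml(-bm * T ** beta, beta, beta + 1.0))
    source_err.append(amp / math.sqrt(bm))              # ~ sqrt(b_m) -> infinity
```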

2.2. Conditional Stability of the Source Term $\Xi(x)$

In this section, we show a conditional stability estimate for the source function $\Xi(x)$.
Theorem 3.
If $\|\Xi\|_{H^{j}(\Omega)} \le M_1$ for some $M_1 > 0$, then
$$
\|\Xi\|_{L^2(\Omega)} \le \left( \frac{M_1}{A^{j}} \right)^{\frac{1}{j+1}} \|R\|_{L^2(\Omega)}^{\frac{j}{j+1}}.
$$
Proof. 
By using (28) and the Hölder inequality, we have
$$
\|\Xi\|^2_{L^2(\Omega)} = \sum_{i=1}^{\infty} \Xi_i^2 = \sum_{i=1}^{\infty} \frac{|R_i|^2}{\big| T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big|^2}
\le \left( \sum_{i=1}^{\infty} \frac{|R_i|^2}{\big| T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big|^{2(j+1)}} \right)^{\frac{1}{j+1}} \left( \sum_{i=1}^{\infty} |R_i|^2 \right)^{\frac{j}{j+1}}.
$$
Using Lemma 5, which gives $|T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})| \ge A / \tilde b_i$, leads to
$$
\frac{1}{\big| T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big|^{2j}} \le \frac{\tilde b_i^{2j}}{A^{2j}}.
$$
Combining (45) and (46), and using $|R_i|^2 / |T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})|^{2(j+1)} = \Xi_i^2 / |T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})|^{2j}$, we get
$$
\|\Xi\|^2_{L^2(\Omega)} \le \left( \sum_{i=1}^{\infty} \frac{\tilde b_i^{2j}}{A^{2j}}\, \Xi_i^2 \right)^{\frac{1}{j+1}} \|R\|^{\frac{2j}{j+1}}_{L^2(\Omega)} \le \left( \frac{\|\Xi\|^2_{H^{j}(\Omega)}}{A^{2j}} \right)^{\frac{1}{j+1}} \|R\|^{\frac{2j}{j+1}}_{L^2(\Omega)} \le \left( \frac{M_1}{A^{j}} \right)^{\frac{2}{j+1}} \|R\|^{\frac{2j}{j+1}}_{L^2(\Omega)}.
$$
Taking square roots on both sides, we have (44). □

3. Regularization of the Inverse Source Problem for the Time-Fractional Diffusion-Wave Equation by the Fractional Tikhonov Method

As mentioned above, we solve the inverse source problem by the fractional Tikhonov regularization method, based on the singular value decomposition of the compact self-adjoint operator $K$ with singular values as in (33). Suppose the measured data $h^{\varepsilon_1}(x)$, $f^{\varepsilon_2}(x)$, $g^{\varepsilon_3}(x)$ of $h(x)$, $f(x)$, $g(x)$, with noise levels $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$, satisfy
$$
\|h - h^{\varepsilon_1}\|_{L^2(\Omega)} \le \varepsilon_1, \qquad \|f - f^{\varepsilon_2}\|_{L^2(\Omega)} \le \varepsilon_2, \qquad \|g - g^{\varepsilon_3}\|_{L^2(\Omega)} \le \varepsilon_3.
$$
Then, we define a regularized solution as follows:
$$
\Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}(x) = \sum_{i=1}^{\infty} \frac{\big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau - 1}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}}\, \langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}, e_i \rangle\, e_i(x), \quad \tfrac12 \le \tau \le 1,
$$
where $\alpha$ is a regularization parameter. The corresponding regularized solution for exact data is
$$
\Xi_{\alpha,\tau}(x) = \sum_{i=1}^{\infty} \frac{\big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau - 1}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}}\, \langle R, e_i \rangle\, e_i(x), \quad \tfrac12 \le \tau \le 1,
$$
where
$$
R_i^{\varepsilon_1,\varepsilon_2,\varepsilon_3} = h_i^{\varepsilon_1} - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i^{\varepsilon_2} - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i^{\varepsilon_3}, \qquad
R_i = h_i - E_{\beta,1}(-\tilde b_i T^{\beta})\, f_i - T\, E_{\beta,2}(-\tilde b_i T^{\beta})\, g_i.
$$
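The regularized solution simply replaces the unbounded multiplier $1/\xi_i$ (with $\xi_i = T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})$) by the bounded filter $\xi_i^{2\tau-1}/(\alpha^2 + \xi_i^{2\tau})$; for $\tau = 1$ this is the classical Tikhonov filter, and as $\alpha \to 0$ it tends back to $1/\xi_i$. A small sketch with a made-up singular value:

```python
def frac_tikhonov_filter(xi, alpha, tau):
    # bounded replacement for the multiplier 1/xi in the source formula
    return xi ** (2 * tau - 1) / (alpha ** 2 + xi ** (2 * tau))

xi = 1e-4                      # a small singular value (strong noise amplification)
exact = 1.0 / xi               # unregularized multiplier: 10^4
damped = frac_tikhonov_filter(xi, alpha=1e-2, tau=0.75)
undamped = frac_tikhonov_filter(xi, alpha=1e-12, tau=0.75)
```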

4. A Priori Parameter Choice

In this section, we give an error estimate for $\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)}$ and show a convergence rate under a suitable choice of the regularization parameter.
Theorem 4.
Let $\Xi$ be as in (28) and let the noise assumption (48) hold. Then, we have the following estimates:
  • If $0 < j < 2\tau$, then by choosing $\alpha = \left( \frac{\left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{1/2}}{M_1} \right)^{\frac{\tau}{j+2}}$ we have
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac{j}{2(j+2)}} M_1^{\frac{1}{j+2}} \left( \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} + B_1(j, \tau, A) \right).
$$
  • If $j \ge 2\tau$, then by choosing $\alpha = \left( \frac{\left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{1/2}}{M_1} \right)^{\frac{\tau}{\tau+1}}$ we have
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac{\tau}{2(\tau+1)}} M_1^{\frac{1}{\tau+1}} \left( \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} + B_2(j, \tau, A, \tilde b_1) \right),
$$
where
$$
P(B, \tilde b_1, T, \beta) = 1 + \frac{B^2}{|\tilde b_1 T^{\beta}|^2} + \frac{B^2 T^{2-2\beta}}{|\tilde b_1|^2},
$$
$M_1$ is a positive number satisfying $\|\Xi\|_{H^{j}(\Omega)} \le M_1$, and
$$
B_1(j, \tau, A) = \frac{(2\tau - j)^{\frac{2\tau - j}{2\tau}}\, j^{\frac{j}{2\tau}}}{2\tau\, A^{j}}, \qquad B_2(j, \tau, A, \tilde b_1) = \frac{1}{A^{2\tau}\, \tilde b_1^{\, j - 2\tau}}.
$$
Proof. 
By the triangle inequality, we know
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \underbrace{\|\Xi_{\alpha,\tau} - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)}}_{K_1} + \underbrace{\|\Xi - \Xi_{\alpha,\tau}\|_{L^2(\Omega)}}_{K_2}.
$$
The proof falls naturally into two steps.
Step 1: Estimation of $K_1$. We have
$$
\Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}(x) - \Xi_{\alpha,\tau}(x) = \sum_{i=1}^{\infty} \frac{\big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau - 1}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} \Big[ (h_i^{\varepsilon_1} - h_i) - E_{\beta,1}(-\tilde b_i T^{\beta}) (f_i^{\varepsilon_2} - f_i) - T\, E_{\beta,2}(-\tilde b_i T^{\beta}) (g_i^{\varepsilon_3} - g_i) \Big] e_i(x).
$$
By (50), (51), and Lemma 5, it is easily seen that $|T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})| \ge A / \tilde b_i$. From (58), applying the inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$ and Lemmas 5 and 7, we know that
$$
K_1^2 \le \left( \sup_{i \in \mathbb{N}} \frac{\tilde b_i}{A^{2\tau} + \alpha^2 \tilde b_i^{2\tau}} \right)^2 \left( 3 \varepsilon_1^2 + 3 \sup_{i} |E_{\beta,1}(-\tilde b_i T^{\beta})|^2\, \varepsilon_2^2 + 3 T^2 \sup_{i} |E_{\beta,2}(-\tilde b_i T^{\beta})|^2\, \varepsilon_3^2 \right).
$$
Using the result of Lemma 7 above, together with the bounds $|E_{\beta,1}(-\tilde b_i T^{\beta})| \le B / (\tilde b_1 T^{\beta})$ and $|E_{\beta,2}(-\tilde b_i T^{\beta})| \le B / (\tilde b_1 T^{\beta})$, we receive
$$
K_1^2 \le \left( \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}} \right)^2 \left( 3 \varepsilon_1^2 + \frac{3 B^2 \varepsilon_2^2}{|\tilde b_1 T^{\beta}|^2} + \frac{3 B^2 T^2 \varepsilon_3^2}{|\tilde b_1 T^{\beta}|^2} \right) \le \left( \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}} \right)^2\, 3 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \left( 1 + \frac{B^2}{|\tilde b_1 T^{\beta}|^2} + \frac{B^2 T^{2-2\beta}}{|\tilde b_1|^2} \right).
$$
Therefore, we have concluded
$$
K_1 \le \sqrt{3}\, \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12} P(B, \tilde b_1, T, \beta)^{\frac12},
$$
where
$$
P(B, \tilde b_1, T, \beta) = 1 + \frac{B^2}{|\tilde b_1 T^{\beta}|^2} + \frac{B^2 T^{2-2\beta}}{|\tilde b_1|^2}.
$$
Step 2: Next, we estimate $K_2$. From (28) and (50), and using the Parseval equality, we get
$$
K_2^2 = \sum_{i=1}^{\infty} \left( \frac{\big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau - 1}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} - \frac{1}{T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})} \right)^2 |\langle R, e_i \rangle|^2 = \sum_{i=1}^{\infty} \left( \frac{\alpha^2}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} \right)^2 |\langle \Xi, e_i \rangle|^2.
$$
From (63), inserting the factor $\tilde b_i^{-j}\, \tilde b_i^{\, j}$, we have the estimate
$$
K_2^2 = \sum_{i=1}^{\infty} \left( \frac{\alpha^2\, \tilde b_i^{-j}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} \right)^2 \tilde b_i^{2j}\, |\langle \Xi, e_i \rangle|^2 \le \sup_{i \in \mathbb{N}} |D(i)|^2 \sum_{i=1}^{\infty} \tilde b_i^{2j}\, |\langle \Xi, e_i \rangle|^2 \le \sup_{i \in \mathbb{N}} |D(i)|^2\, \|\Xi\|^2_{H^{j}(\Omega)},
$$
where
$$
D(i) = \frac{\alpha^2\, \tilde b_i^{-j}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}}.
$$
Next, using Lemmas 5 and 8, we continue to estimate $D(i)$. In fact, since $T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \ge A / \tilde b_i$, we get
$$
D(i) \le \frac{\alpha^2\, \tilde b_i^{\, 2\tau - j}}{A^{2\tau} + \alpha^2 \tilde b_i^{2\tau}} \le
\begin{cases}
B_1(j, \tau, A)\, \alpha^{\frac{j}{\tau}}, & 0 < j < 2\tau,\\
B_2(j, \tau, A, \tilde b_1)\, \alpha^2, & j \ge 2\tau.
\end{cases}
$$
Combining (64) to (66), we receive
$$
K_2 \le
\begin{cases}
B_1(j, \tau, A)\, M_1\, \alpha^{\frac{j}{\tau}}, & 0 < j < 2\tau,\\
B_2(j, \tau, A, \tilde b_1)\, M_1\, \alpha^2, & j \ge 2\tau.
\end{cases}
$$
Next, combining the above two inequalities, we obtain
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \sqrt{3}\, \bar C(\tau, A)\, \alpha^{-\frac{1}{\tau}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12} P(B, \tilde b_1, T, \beta)^{\frac12} +
\begin{cases}
B_1(j, \tau, A)\, M_1\, \alpha^{\frac{j}{\tau}}, & 0 < j < 2\tau,\\
B_2(j, \tau, A, \tilde b_1)\, M_1\, \alpha^2, & j \ge 2\tau.
\end{cases}
$$
Choose the regularization parameter $\alpha$ as follows:
$$
\alpha =
\begin{cases}
\left( \dfrac{\left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{1/2}}{M_1} \right)^{\frac{\tau}{j+2}}, & 0 < j < 2\tau,\\[3mm]
\left( \dfrac{\left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{1/2}}{M_1} \right)^{\frac{\tau}{\tau+1}}, & j \ge 2\tau.
\end{cases}
$$
Hence, we conclude the following.
Case 1: If $0 < j < 2\tau$, with $\alpha = \left( \frac{(\max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\})^{1/2}}{M_1} \right)^{\frac{\tau}{j+2}}$ we have
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac{j}{2(j+2)}} M_1^{\frac{1}{j+2}} \left( \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} + B_1(j, \tau, A) \right).
$$
Case 2: If $j \ge 2\tau$, with $\alpha = \left( \frac{(\max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\})^{1/2}}{M_1} \right)^{\frac{\tau}{\tau+1}}$ we have
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac{\tau}{2(\tau+1)}} M_1^{\frac{1}{\tau+1}} \left( \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} + B_2(j, \tau, A, \tilde b_1) \right). \quad \square
$$
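Under our reading of this rule, the parameter follows directly from the noise levels and the a priori bound (the numbers below are illustrative):

```python
def a_priori_alpha(eps1, eps2, eps3, M1, tau, j):
    # a priori parameter choice of Theorem 4 (as reconstructed here)
    eps = max(eps1 ** 2, eps2 ** 2, eps3 ** 2) ** 0.5
    exponent = tau / (j + 2.0) if j < 2.0 * tau else tau / (tau + 1.0)
    return (eps / M1) ** exponent

# alpha shrinks as the noise level decreases
alphas = [a_priori_alpha(e, e, e, M1=10.0, tau=0.8, j=1.0) for e in (1e-2, 1e-4, 1e-6)]
```

Note that this choice requires knowing the smoothness bound $M_1$, which is exactly what the a posteriori rule of the next section avoids.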

5. A Posteriori Parameter Choice

In this section, we consider an a posteriori choice of the regularization parameter based on Morozov's discrepancy principle (see [21]). We use the discrepancy principle in the following form: choose $\alpha$ such that
$$
\left\| \sum_{i=1}^{\infty} \left( \frac{\big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} - 1 \right) \langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}, e_i \rangle\, e_i \right\|_{L^2(\Omega)} = k \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12},
$$
where $\frac12 \le \tau \le 1$, $k > 1$ is a constant, and $\alpha$ is the regularization parameter.
Lemma 10.
Let
$$
\rho(\alpha) = \sum_{i=1}^{\infty} \left( \frac{\alpha^2}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}} \right)^2 |\langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}, e_i \rangle|^2.
$$
If $0 < k \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12} < \|R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}\|_{L^2(\Omega)}$, then the following results hold:
(a) 
ρ ( α ) is a continuous function;
(b) 
$\rho(\alpha) \to 0$ as $\alpha \to 0$;
(c) 
$\rho(\alpha) \to \|R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}\|^2_{L^2(\Omega)}$ as $\alpha \to \infty$;
(d) 
ρ ( α ) is a strictly increasing function.
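Properties (a)–(d) guarantee that the discrepancy equation $\rho(\alpha) = k^2 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\}$ has a unique solution, which can be found by bisection. A sketch with synthetic singular values and data coefficients (all values are illustrative):

```python
def rho(alpha, xis, r_coeffs, tau=0.75):
    # the discrepancy functional of Lemma 10
    return sum((alpha ** 2 / (alpha ** 2 + xi ** (2 * tau))) ** 2 * r * r
               for xi, r in zip(xis, r_coeffs))

xis = [1.0 / (i * i) for i in range(1, 51)]        # decaying singular values
r_coeffs = [1.0 / i for i in range(1, 51)]         # synthetic data coefficients
target = (1.5 * 1e-3) ** 2                         # (k * eps)^2 with k = 1.5, eps = 1e-3

lo, hi = 0.0, 10.0                 # rho(0) = 0 and rho(10) > target: a bracket
for _ in range(200):               # bisection, valid since rho is increasing
    mid = 0.5 * (lo + hi)
    if rho(mid, xis, r_coeffs) < target:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)
```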
Lemma 11.
Let $\alpha$ be the solution of (72). Then
$$
\frac{1}{\alpha^{1/\tau}} \le
\begin{cases}
\left( \dfrac{\sqrt{2}\, B\, B_3(j, \tau, A)\, M_1}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{j+1}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{-\frac{1}{2(j+1)}}, & 0 < j < 2\tau - 1,\\[3mm]
\left( \dfrac{\sqrt{2}\, B\, B_4(j, \tau, A, \tilde b_1)\, M_1}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{2\tau}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{-\frac{1}{4\tau}}, & j \ge 2\tau - 1.
\end{cases}
$$
Proof. 
Step 1: First of all, we estimate the error between $R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ and $R$. Indeed, using the inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$, $a, b, c \ge 0$, we get
$$
\|R^{\varepsilon_1,\varepsilon_2,\varepsilon_3} - R\|^2_{L^2(\Omega)} \le 3 \varepsilon_1^2 + 3 \sup_{i} |E_{\beta,1}(-\tilde b_i T^{\beta})|^2\, \varepsilon_2^2 + 3 T^2 \sup_{i} |E_{\beta,2}(-\tilde b_i T^{\beta})|^2\, \varepsilon_3^2 \le 3 \varepsilon_1^2 + \frac{3 B^2 \varepsilon_2^2}{|\tilde b_1 T^{\beta}|^2} + \frac{3 B^2 T^2 \varepsilon_3^2}{|\tilde b_1 T^{\beta}|^2} \le 3 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\}\, P(B, \tilde b_1, T, \beta).
$$
Step 2: Writing $s_i = T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})$ for brevity, and using the inequality $(a+b)^2 \le 2(a^2+b^2)$, $a, b \ge 0$, we can receive the following estimation:
$$
k^2 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} = \sum_{i=1}^{\infty} \left( \frac{\alpha^2}{\alpha^2 + s_i^{2\tau}} \right)^2 |\langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}, e_i \rangle|^2
\le 2 \sum_{i=1}^{\infty} \left( \frac{\alpha^2}{\alpha^2 + s_i^{2\tau}} \right)^2 |\langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3} - R, e_i \rangle|^2 + 2 \sum_{i=1}^{\infty} \left( \frac{\alpha^2\, s_i\, \tilde b_i^{-j}}{\alpha^2 + s_i^{2\tau}} \right)^2 \frac{\tilde b_i^{2j}\, |\langle R, e_i \rangle|^2}{s_i^2}.
$$
From (76), we get
$$
k^2 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \le 6 \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\}\, P(B, \tilde b_1, T, \beta) + 2 \sum_{i=1}^{\infty} |H_i|^2\, \frac{\tilde b_i^{2j}\, |\langle R, e_i \rangle|^2}{s_i^2},
$$
whereby
$$
H_i = \frac{\alpha^2\, s_i\, \tilde b_i^{-j}}{\alpha^2 + s_i^{2\tau}} = \frac{\alpha^2\, T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta})\, \tilde b_i^{-j}}{\alpha^2 + \big( T^{\beta} E_{\beta,\beta+1}(-\tilde b_i T^{\beta}) \big)^{2\tau}}.
$$
By Lemma 5, $A / \tilde b_i \le s_i \le B / \tilde b_i$, so we can estimate $H_i$ as follows:
$$
H_i \le \frac{\alpha^2\, B\, \tilde b_i^{-(j+1)}}{\alpha^2 + (A / \tilde b_i)^{2\tau}} = B\, \frac{\alpha^2\, \tilde b_i^{\, 2\tau - (j+1)}}{A^{2\tau} + \alpha^2 \tilde b_i^{2\tau}}.
$$
Using Lemma 9, we therefore have
$$
H_i \le
\begin{cases}
B\, B_3(j, \tau, A)\, \alpha^{\frac{j+1}{\tau}}, & 0 < j < 2\tau - 1,\\
B\, B_4(j, \tau, A, \tilde b_1)\, \alpha^2, & j \ge 2\tau - 1.
\end{cases}
$$
Therefore, combining the estimates above and using $\sum_{i=1}^{\infty} \tilde b_i^{2j}\, |\langle R, e_i \rangle|^2 / s_i^2 = \sum_{i=1}^{\infty} \tilde b_i^{2j}\, \Xi_i^2 \le \|\Xi\|^2_{H^{j}(\Omega)} \le M_1^2$, we know that
$$
\big( k^2 - 6 P(B, \tilde b_1, T, \beta) \big) \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \le
\begin{cases}
2 B^2 B_3^2(j, \tau, A)\, M_1^2\, \alpha^{\frac{2(j+1)}{\tau}}, & 0 < j < 2\tau - 1,\\
2 B^2 B_4^2(j, \tau, A, \tilde b_1)\, M_1^2\, \alpha^4, & j \ge 2\tau - 1.
\end{cases}
$$
So,
$$
\frac{1}{\alpha^{1/\tau}} \le
\begin{cases}
\left( \dfrac{\sqrt{2}\, B\, B_3(j, \tau, A)\, M_1}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{j+1}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{-\frac{1}{2(j+1)}}, & 0 < j < 2\tau - 1,\\[3mm]
\left( \dfrac{\sqrt{2}\, B\, B_4(j, \tau, A, \tilde b_1)\, M_1}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{2\tau}} \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{-\frac{1}{4\tau}}, & j \ge 2\tau - 1,
\end{cases}
$$
which gives the required results. The estimation of $\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)}$ is established in our next Theorem. □
Theorem 5.
Assume the a priori condition $\|\Xi\|_{H^{j}(\Omega)} \le M_1$ and the noise assumption (48) hold, and let $k > 1$ be such that $0 < k \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12} < \|R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}\|_{L^2(\Omega)}$. Then we have the following convergence estimates between the exact solution and the regularized solution:
  • If $0 < j < 2\tau - 1$, we have the convergence estimate
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le Q(j, T, A, B, \beta, k, \tilde b_1) \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac{j}{2(j+1)}} M_1^{\frac{1}{j+1}}.
$$
  • If $j \ge 2\tau - 1$, we have the convergence estimate
$$
\|\Xi - \Xi^{\varepsilon_1,\varepsilon_2,\varepsilon_3}_{\alpha,\tau}\|_{L^2(\Omega)} \le L(\bar C, T, A, B, \beta, k, j, \tilde b_1) \left( \max\{\varepsilon_1^2, \varepsilon_2^2, \varepsilon_3^2\} \right)^{\frac12 \left( 1 - \frac{1}{2\tau} \right)} M_1^{\frac{1}{2\tau}},
$$
whereby
$$
Q(j, T, A, B, \beta, k, \tilde b_1) = \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} \left( \frac{\sqrt{2}\, B\, B_3(j, \tau, A)}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{j+1}} + \left( \frac{\sqrt{3}\, P(B, \tilde b_1, T, \beta)^{\frac12} + k}{A} \right)^{\frac{j}{j+1}},
$$
$$
P(B, \tilde b_1, T, \beta) = 1 + \frac{B^2}{|\tilde b_1 T^{\beta}|^2} + \frac{B^2 T^{2-2\beta}}{|\tilde b_1|^2},
$$
$$
L(\bar C, T, A, B, \beta, k, j, \tilde b_1) = \sqrt{3}\, \bar C(\tau, A)\, P(B, \tilde b_1, T, \beta)^{\frac12} \left( \frac{\sqrt{2}\, B\, B_4(j, \tau, A, \tilde b_1)}{\sqrt{k^2 - 6 P(B, \tilde b_1, T, \beta)}} \right)^{\frac{1}{2\tau}} + \left( \frac{\sqrt{3}\, P(B, \tilde b_1, T, \beta)^{\frac12} + k}{A} \right)^{1 - \frac{1}{2\tau}} \tilde b_1^{\, 2\tau - j - 1}.
$$
Proof. 
Applying the triangle inequality, we get
Ξ ( x ) Ξ α , τ ε 1 , ε 2 , ε 3 ( x ) L 2 ( Ω ) Ξ ( x ) Ξ α , τ L 2 ( Ω ) + Ξ α , τ ( x ) Ξ α , τ ε 1 , ε 2 , ε 3 ( x ) L 2 ( Ω ) .
Case 1: If 0 < j 2 τ 1 . First of all, we recalled estimation from (61) and, by Lemma 9 Part (a), we have
Ξ α , τ ( x ) Ξ α , τ ε 1 , ε 2 , ε 3 ( x ) L 2 ( Ω ) Q ( j , T , A , B , β , k , b ˜ 1 ) max ε 1 2 , ε 2 2 , ε 3 2 j 2 ( j + 1 ) M 1 1 j + 1 .
Next, we have estimate Ξ ( x ) Ξ α , τ ( x ) L 2 ( Ω ) . From (28) and (50), and using Parseval equality, we get
Ξ ( x ) Ξ α , τ ( x ) L 2 ( Ω ) = i = 1 + T β E β , β + 1 ( b ˜ i T β ) 2 τ 1 α 2 + T β E β , β + 1 ( b ˜ i T β ) 2 τ 1 T β E β , β + 1 ( b ˜ i T β ) R ( x ) , e i ( x ) e i ( x ) L 2 ( Ω ) = i = 1 + α 2 α 2 + T β E β , β + 1 ( b ˜ i T β ) 2 τ Ξ ( x ) , e i ( x ) e i ( x ) L 2 ( Ω ) = i = 1 + α 2 T β E β , β + 1 ( b ˜ i T β ) α 2 + T β E β , β + 1 ( b ˜ i T β ) 2 τ Ξ ( x ) , e i ( x ) T β E β , β + 1 ( b ˜ i T β ) e i ( x ) L 2 ( Ω ) .
Using Hölder's inequality, we obtain
$$\|\Xi(x)-\Xi_{\alpha,\tau}(x)\|_{L^2(\Omega)} \le \underbrace{\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\langle R(x),e_i(x)\rangle e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{j}{j+1}}}_{A_1}\times\underbrace{\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\frac{\langle\Xi(x),e_i(x)\rangle}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{j}}e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{j+1}}}_{A_2}.$$
From (89) and (75), using Lemma 11, one has
$$A_1 \le \Bigg(\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}+\alpha^{2}}\langle R(x)-R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x),e_i(x)\rangle e_i(x)\Bigg\|_{L^2(\Omega)}+\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}+\alpha^{2}}\langle R^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x),e_i(x)\rangle e_i(x)\Bigg\|_{L^2(\Omega)}\Bigg)^{\frac{j}{j+1}}$$
$$\le \Big(3\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12}P(B,\tilde b_1,T,\beta)^{\frac12}+k\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12}\Big)^{\frac{j}{j+1}}=\Big(3P(B,\tilde b_1,T,\beta)^{\frac12}+k\Big)^{\frac{j}{j+1}}\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac{j}{2(j+1)}}.$$
Next, using the a priori condition, we have
$$A_2=\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\frac{\langle\Xi(x),e_i(x)\rangle}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{j}}e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{j+1}}\le\Bigg\|\sum_{i=1}^{+\infty}\frac{\langle\Xi(x),e_i(x)\rangle}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{j}}e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{j+1}}\le\Bigg\|\sum_{i=1}^{+\infty}\Big(\frac{\tilde b_i}{A}\Big)^{j}\Xi_i e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{j+1}}\le\Big(\frac{1}{A^{j}}\Big)^{\frac{1}{j+1}}M_1^{\frac{1}{j+1}}.$$
Combining (88)–(91), we conclude that
$$\|\Xi(x)-\Xi_{\alpha,\tau}(x)\|_{L^2(\Omega)}\le\left(\frac{3P(B,\tilde b_1,T,\beta)^{\frac12}+k}{A}\right)^{\frac{j}{j+1}}\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac{j}{2(j+1)}}M_1^{\frac{1}{j+1}}.$$
Combining (87)–(92), we know that
$$\|\Xi(x)-\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x)\|_{L^2(\Omega)}\le Q(j,T,A,B,\beta,k,\tilde b_1)\,\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac{j}{2(j+1)}}\,M_1^{\frac{1}{j+1}},$$
whereby
$$Q(j,T,A,B,\beta,k,\tilde b_1)=\bar C(\tau,A)\big(2B_3(j,\tau,A)\big)^{\frac{1}{j+1}}\frac{P(B,\tilde b_1,T,\beta)^{\frac{1}{2}}k}{2}\big(6P(B,\tilde b_1,T,\beta)\big)^{\frac{1}{2(j+1)}}+\left(\frac{3P(B,\tilde b_1,T,\beta)^{\frac{1}{2}}+k}{A}\right)^{\frac{j}{j+1}},\qquad P(B,\tilde b_1,T,\beta)=1+\frac{B^{2}}{|\tilde b_1 T^{\beta}|^{2}}+\frac{B^{2}T^{2-2\beta}}{|\tilde b_1|^{2}}.$$
Case 2: Our next goal is to estimate $\|\Xi_{\alpha,\tau}(x)-\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x)\|_{L^2(\Omega)}$ when $j > 2\tau-1$. We get
$$\|\Xi_{\alpha,\tau}(x)-\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x)\|_{L^2(\Omega)}\le\bar C(\tau,A)\big(2B_4(j,\tau,A,\tilde b_1)\big)^{\frac{1}{2\tau}}\frac{P(B,\tilde b_1,T,\beta)^{\frac{1}{2}}k}{2}\big(6P(B,\tilde b_1,T,\beta)\big)^{\frac{1}{4\tau}}\times M_1^{\frac{1}{2\tau}}\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12\left(1-\frac{1}{2\tau}\right)}.$$
Next, for $\|\Xi(x)-\Xi_{\alpha,\tau}(x)\|_{L^2(\Omega)}$, we get
$$\|\Xi(x)-\Xi_{\alpha,\tau}(x)\|_{L^2(\Omega)}=\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\langle\Xi(x),e_i(x)\rangle e_i(x)\Bigg\|_{L^2(\Omega)}$$
$$\le\underbrace{\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}\,T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\langle\Xi(x),e_i(x)\rangle e_i(x)\Bigg\|_{L^2(\Omega)}^{1-\frac{1}{2\tau}}}_{B_1}\times\underbrace{\Bigg\|\sum_{i=1}^{+\infty}\frac{\alpha^{2}\,T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}\frac{\langle\Xi(x),e_i(x)\rangle}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{2\tau}}}_{B_2}.$$
From (96), applying Lemma 11, Part (b) to $B_1$, it is easy to check that
$$B_1\le\Big(3\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12}P(B,\tilde b_1,T,\beta)^{\frac12}+k\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12}\Big)^{1-\frac{1}{2\tau}}=\Big(3P(B,\tilde b_1,T,\beta)^{\frac12}+k\Big)^{1-\frac{1}{2\tau}}\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12\left(1-\frac{1}{2\tau}\right)}.$$
In the same way as for $A_2$, since $\frac{\alpha^{2}}{\alpha^{2}+\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau}}<1$, we obtain
$$B_2\le\Bigg\|\sum_{i=1}^{+\infty}\frac{\langle\Xi(x),e_i(x)\rangle}{\big(T^{\beta}E_{\beta,\beta+1}(-\tilde b_iT^{\beta})\big)^{2\tau-1}}e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{2\tau}}\le\Bigg\|\sum_{i=1}^{+\infty}\Big(\frac{\tilde b_i}{A}\Big)^{2\tau-1}\tilde b_i^{-j}\,\tilde b_i^{j}\,\Xi_i e_i(x)\Bigg\|_{L^2(\Omega)}^{\frac{1}{2\tau}}\le A^{\frac{1}{2\tau}-1}\,\tilde b_1^{\,2\tau-j-1}\,M_1^{\frac{1}{2\tau}}.$$
Combining (86) and (95)–(98), it may be concluded that
$$\|\Xi(x)-\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x)\|_{L^2(\Omega)}\le L(\bar C,T,A,B,\beta,k,j,\tilde b_1)\times\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12\left(1-\frac{1}{2\tau}\right)}M_1^{\frac{1}{2\tau}},$$
whereby
$$L(\bar C,T,A,B,\beta,k,j,\tilde b_1)=\bar C(\tau,A)\big(2B_4(j,\tau,A,\tilde b_1)\big)^{\frac{1}{2\tau}}\frac{P(B,\tilde b_1,T,\beta)^{\frac{1}{2}}k}{2}\big(6P(B,\tilde b_1,T,\beta)\big)^{\frac{1}{4\tau}}+\left(\frac{3P(B,\tilde b_1,T,\beta)^{\frac{1}{2}}+k}{A}\right)^{1-\frac{1}{2\tau}}\tilde b_1^{\,2\tau-j-1}.$$
The proof is completed. □

6. Simulation Example

In this section, we present a numerical example to illustrate the theoretical results. To this end, we consider the following problem:
$$\partial_{0+}^{\beta}u(x,t)=\frac{\partial^{2}}{\partial x^{2}}u(x,t)+\Xi(x),\quad (x,t)\in(0,\pi)\times(0,1),$$
where the Caputo fractional derivative of order $\beta$ is defined as
$$\partial_{0+}^{\beta}u(x,t)=\frac{1}{\Gamma(2-\beta)}\int_{0}^{t}\frac{\partial^{2}u(x,s)}{\partial s^{2}}\,\frac{ds}{(t-s)^{\beta-1}},\quad 1<\beta<2,$$
where Γ ( . ) is the Gamma function.
We consider the operator $\Delta u=\frac{\partial^{2}u}{\partial x^{2}}$ on the domain $\Omega=(0,\pi)$ with the Dirichlet boundary condition $u(0,t)=u(\pi,t)=0$ for $t\in(0,1)$; the eigenvalues of $-\Delta$ and the corresponding eigenfunctions are $\tilde b_i=i^{2}$, $i=1,2,\ldots$, and $e_i(x)=\sqrt{2/\pi}\,\sin(ix)$, respectively.
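As a quick sanity check, the eigenfunctions can be verified numerically; the sketch below (the grid size N and helper names are our own choices) confirms that the $e_i$ are orthonormal in $L^2(0,\pi)$:

```python
import numpy as np

# Numerical check that e_i(x) = sqrt(2/pi) * sin(i x) are orthonormal
# in L2(0, pi); the quadrature grid size N is arbitrary.
N = 2000
x = np.linspace(0.0, np.pi, N)
dx = np.pi / (N - 1)

def e(i):
    return np.sqrt(2.0 / np.pi) * np.sin(i * x)

def inner(u, v):
    # simple quadrature for the L2(0, pi) inner product
    # (endpoint weights are irrelevant here since sin vanishes at 0 and pi)
    return np.sum(u * v) * dx
```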
In addition, problem (101) satisfies the conditions
$$u(x,0)=f(x),\qquad \partial_t u(x,0)=g(x),\qquad x\in(0,\pi),$$
and the final condition
$$u(x,1)=h(x),\qquad x\in(0,\pi).$$
We consider the following assumptions:
$$f(x)=\sqrt{\tfrac{2}{\pi}}\sin(2x),\qquad g(x)=\sqrt{\tfrac{2}{\pi}}\sin(x),\qquad h(x)=\sqrt{\tfrac{2}{\pi}}\Big(E_{\beta,2}(-1)\sin(x)+E_{\beta,1}(-2)\sin(2x)+E_{\beta,\beta+1}(-3)\sin(3x)\Big).$$
In this example, we choose the following solution
$$u(x,t)=\sqrt{\tfrac{2}{\pi}}\Big(t\,E_{\beta,2}(-t^{\beta})\sin(x)+E_{\beta,1}(-2t^{\beta})\sin(2x)+t^{\beta}E_{\beta,\beta+1}(-3t^{\beta})\sin(3x)\Big).$$
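The Mittag-Leffler values appearing in $h$ and $u$ can be computed from the defining power series $E_{\beta,\gamma}(z)=\sum_{k\ge 0}z^{k}/\Gamma(\beta k+\gamma)$. A minimal Python sketch (truncated series; adequate for the moderate arguments used here, while large $|z|$ requires dedicated algorithms):

```python
import math

def mittag_leffler(z, beta, gamma, n_terms=80):
    """Truncated power series E_{beta,gamma}(z) = sum_{k>=0} z^k / Gamma(beta*k + gamma).

    n_terms = 80 is an arbitrary truncation; it is more than enough
    for the arguments -1, -2, -3 used in this example."""
    return sum(z ** k / math.gamma(beta * k + gamma) for k in range(n_terms))
```

For instance, with $\beta=1.5$ this yields the three coefficients $E_{\beta,2}(-1)$, $E_{\beta,1}(-2)$, and $E_{\beta,\beta+1}(-3)$ of $h$.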
Before giving the main results of this section, we recall the numerical approximation tools used below.
  • Composite Simpson's rule: Suppose that the interval $[a,b]$ is split into $n$ sub-intervals, with $n$ even. Then the composite Simpson's rule is given by
$$\int_a^b \varphi(z)\,dz \approx \frac{h}{3}\sum_{j=1}^{n/2}\big[\varphi(z_{2j-2})+4\varphi(z_{2j-1})+\varphi(z_{2j})\big]=\frac{h}{3}\Big[\varphi(z_0)+2\sum_{j=1}^{n/2-1}\varphi(z_{2j})+4\sum_{j=1}^{n/2}\varphi(z_{2j-1})+\varphi(z_n)\Big],$$
where $z_j=a+jh$ for $j=0,1,\ldots,n$ with $h=\frac{b-a}{n}$; in particular, $z_0=a$ and $z_n=b$.
  • Let $X$ and $T$ be two given positive integers. We use the finite difference method to discretize the time and spatial variables for $(x,t)\in(0,\pi)\times(0,1)$ as follows:
$$x_p=p\,\Delta x,\quad t_q=q\,\Delta t,\quad 0\le p\le X,\quad 0\le q\le T,\qquad \Delta x=\frac{\pi}{X},\quad \Delta t=\frac{1}{T}.$$
  • Finite difference approximations: Let $u_p^q=u(x_p,t_q)$; then the second derivatives are approximated by the central differences
$$\frac{\partial^{2}u(x_p,t_q)}{\partial x^{2}}\approx\frac{u_{p+1}^{q}-2u_{p}^{q}+u_{p-1}^{q}}{\Delta x^{2}},\qquad \frac{\partial^{2}u(x_p,t_q)}{\partial t^{2}}\approx\frac{u_{p}^{q+1}-2u_{p}^{q}+u_{p}^{q-1}}{\Delta t^{2}}.$$
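The composite Simpson's rule above can be sketched in code as follows (a generic implementation, not the authors' script):

```python
import math

def composite_simpson(phi, a, b, n):
    """Composite Simpson's rule on [a, b] with an even number n of sub-intervals."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    z = [a + j * h for j in range(n + 1)]   # z_0 = a, ..., z_n = b
    return (h / 3.0) * (phi(z[0])
                        + 2.0 * sum(phi(z[2 * j]) for j in range(1, n // 2))
                        + 4.0 * sum(phi(z[2 * j - 1]) for j in range(1, n // 2 + 1))
                        + phi(z[n]))

# Example: the integral of sin over [0, pi] is exactly 2,
# and n = 40 sub-intervals already reproduce it to about 1e-6.
approx = composite_simpson(math.sin, 0.0, math.pi, 40)
```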
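The uniform grid and the central difference formulas above can likewise be sketched in NumPy (the paper uses Matlab; names here are illustrative):

```python
import numpy as np

# Uniform grid on (0, pi) x (0, 1) with X = T = 40, as in the experiments.
X, T = 40, 40
dx, dt = np.pi / X, 1.0 / T
x = np.arange(X + 1) * dx        # x_p = p * dx
t = np.arange(T + 1) * dt        # t_q = q * dt

def u_xx(u, p, q):
    """Central difference approximation of the second x-derivative at (x_p, t_q)."""
    return (u[p + 1, q] - 2.0 * u[p, q] + u[p - 1, q]) / dx ** 2

def u_tt(u, p, q):
    """Central difference approximation of the second t-derivative at (x_p, t_q)."""
    return (u[p, q + 1] - 2.0 * u[p, q] + u[p, q - 1]) / dt ** 2
```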
Instead of the exact data $(h,f,g)$, we only have the noisy observations $(h^{\varepsilon_1},f^{\varepsilon_2},g^{\varepsilon_3})$ with noise levels $\varepsilon_1,\varepsilon_2,\varepsilon_3>0$, which satisfy
$$h^{\varepsilon_1}=h+\varepsilon_1\big(\mathrm{rand}(\cdot)-1\big),\qquad f^{\varepsilon_2}=f+\varepsilon_2\big(2\,\mathrm{rand}(\cdot)+1\big),\qquad g^{\varepsilon_3}=g+\varepsilon_3\big(\mathrm{rand}(\cdot)-2\big),$$
where, in Matlab, the $\mathrm{rand}(\cdot)$ function generates arrays of random numbers whose elements are uniformly distributed on the interval $(0,1)$.
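The same perturbation can be sketched in NumPy (the seed and the function name `perturb` are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2019)   # seeded only for reproducibility

def perturb(h, f, g, eps1, eps2, eps3):
    """Noisy observations built as in the Matlab construction above,
    with each rand(.) draw uniform on (0, 1)."""
    h_eps = h + eps1 * (rng.random(h.shape) - 1.0)
    f_eps = f + eps2 * (2.0 * rng.random(f.shape) + 1.0)
    g_eps = g + eps3 * (rng.random(g.shape) - 2.0)
    return h_eps, f_eps, g_eps
```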
The absolute error estimation is defined by
$$\mathrm{Error}_{\beta,\varepsilon,\alpha,\tau}=\Bigg(\frac{1}{X-1}\sum_{p=1}^{X-1}\big|\Xi(x_p)-\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}(x_p)\big|^{2}\Bigg)^{1/2},$$
where $\frac{1}{2}<\tau\le 1$ and $\alpha=\Big(\frac{\max\{\varepsilon_1^2,\varepsilon_2^2,\varepsilon_3^2\}^{\frac12}}{M_1}\Big)^{\frac{\tau}{\tau+2}}$.
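The error above is a discrete root mean square over the interior nodes $x_1,\ldots,x_{X-1}$; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def abs_error(xi_exact, xi_reg):
    """Discrete root mean square error over the interior nodes x_1, ..., x_{X-1}.

    Both arguments are the source values sampled at those X - 1 nodes."""
    d = np.asarray(xi_exact, dtype=float) - np.asarray(xi_reg, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```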
From the above analysis, we present some results as follows.
In Table 1, we report the error between $\Xi$ and $\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ under the a priori and a posteriori parameter choice rules. From the table we can conclude that the approximation is acceptable. Moreover, we also plot the source functions for the different noise levels of the input data, together with the corresponding errors (see Figure 1, Figure 2 and Figure 3). In addition, the solution $u(x,t)$ is shown in Figure 4 for $0\le x\le\pi$ and $0\le t\le 1$.

7. Conclusions

In this study, we used the fractional Tikhonov method to regularize the inverse problem of determining an unknown source term in a space-time-fractional diffusion equation. By an example, we proved that this problem is ill-posed in the sense of Hadamard. Under a priori and a posteriori parameter choice rules, we established convergence estimates between the exact solution and the regularized solution. Finally, we presented a numerical example to illustrate the proposed regularization.

Author Contributions

Project administration, Y.Z.; Resources, L.D.L.; Methodology, N.H.L.; Writing—review, editing and software, C.N.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. A comparison between $\Xi$ and $\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ for $\beta=1.5$, $X=T=40$, $\{\varepsilon_1,\varepsilon_2,\varepsilon_3\}:=\{9\times10^{-2},\,2\times10^{-2},\,1\times10^{-3}\}$, $\tau=\frac{4}{5}$.
Figure 2. A comparison between $\Xi$ and $\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ for $\beta=1.5$, $X=T=40$, $\{\varepsilon_1,\varepsilon_2,\varepsilon_3\}:=\{1\times10^{-3},\,2\times10^{-3},\,3\times10^{-3}\}$, $\tau=\frac{4}{5}$.
Figure 3. A comparison between $\Xi$ and $\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ for $\beta=1.5$, $X=T=40$, $\{\varepsilon_1,\varepsilon_2,\varepsilon_3\}:=\{3\times10^{-1},\,2\times10^{-2},\,5\times10^{-1}\}$, $\tau=\frac{4}{5}$.
Figure 4. The solution u ( x , t ) for ( x , t ) ( 0 , π ) × ( 0 , 1 ) .
Table 1. The error estimates between $\Xi$ and $\Xi_{\alpha,\tau}^{\varepsilon_1,\varepsilon_2,\varepsilon_3}$ at $\beta=1.5$ with $X=T=40$, $\tau=\frac{4}{5}$.

$\{\varepsilon_1,\varepsilon_2,\varepsilon_3\}$                         | $\mathrm{Error}^{\mathrm{priori}}_{\beta,\varepsilon,\alpha,\tau}$ | $\mathrm{Error}^{\mathrm{posteriori}}_{\beta,\varepsilon,\alpha,\tau}$
$\{3\times10^{-1},\,2\times10^{-2},\,5\times10^{-1}\}$ | 0.164478172012052 | 0.182258736154960
$\{9\times10^{-2},\,2\times10^{-2},\,1\times10^{-3}\}$ | 0.031066747441897 | 0.030595088570760
$\{1\times10^{-3},\,2\times10^{-3},\,3\times10^{-3}\}$ | 0.014676586512256 | 0.015071362259137
