Article

Regularization and Error Estimate for the Poisson Equation with Discrete Data

by Nguyen Anh Triet 1,†, Nguyen Duc Phuong 2,†, Van Thinh Nguyen 3,† and Can Nguyen-Huu 4,*,†
1 Faculty of Natural Sciences, Thu Dau Mot University, Thu Dau Mot City 820000, Binh Duong Province, Vietnam
2 Faculty of Fundamental Science, Industrial University of Ho Chi Minh City, Ho Chi Minh City 700000, Vietnam
3 Department of Civil and Environmental Engineering, Seoul National University, Seoul 08826, South Korea
4 Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(5), 422; https://doi.org/10.3390/math7050422
Submission received: 1 April 2019 / Revised: 30 April 2019 / Accepted: 5 May 2019 / Published: 10 May 2019

Abstract: In this work, we focus on the Cauchy problem for the Poisson equation in a two-dimensional domain, where the initial data are disturbed by random noise. In general, the problem is severely ill-posed in the sense of Hadamard, i.e., the solution does not depend continuously on the data. To regularize the unstable solution of the problem, we apply a nonparametric regression associated with the truncation method. Finally, a numerical example is carried out; the results show that our regularization method converges, and that the error decreases as the number of observation points increases.

1. Introduction

The Cauchy problem for the Poisson equation of elliptic type has been applied to a wide range of fields in science and technology. It arises, for instance, in describing phenomena related to physics [1,2] and biology [3]. In this paper, we are interested in finding a solution $u(x,y,t)$ of the Cauchy problem for the Poisson equation
$u_{tt} + u_{xx} + u_{yy} = F(x,y,t), \quad (x,y,t) \in \Omega \times (0,T),$
$u(x,y,0) = g(x,y), \quad (x,y) \in \Omega,$   (1)
$u_t(x,y,0) = h(x,y), \quad (x,y) \in \Omega,$
where $\Omega = (0,\pi) \times (0,\pi)$ and $T > 0$ is a given real number.
In reality, the exact values of the functions $F, g, h$ are not available; we only know their observations $\widehat{F}, \widehat{g}, \widehat{h}$. As is well known, observational data are always discrete sets obtained from temporal and spatial measurements, and they contain errors. Many previous studies assumed that the error is deterministic, $\|\widehat{F} - F\| + \|h - \widehat{h}\| + \|g - \widehat{g}\| \le \varepsilon$; see, e.g., [4,5]. However, errors that come from uncontrollable sources, such as volcanic activity, wind, rain, snow, humidity, radiation, etc., are random. It is therefore reasonable to assume that the observed values of $F, g, h$ follow the random models
$\widehat{F}_{ij}(t) = F(x_i, y_j, t) + \tau B_{ij}(t),$   (2)
$\widehat{g}_{ij} = g(x_i, y_j) + \sigma_{ij} \xi_{ij},$   (3)
$\widehat{h}_{ij} = h(x_i, y_j) + \upsilon_{ij} \chi_{ij},$   (4)
where $(x_i, y_j) = \left( \pi \frac{2i-1}{2n}, \pi \frac{2j-1}{2m} \right)$, $i = 1, \dots, n$, $j = 1, \dots, m$, are grid points in $\Omega$. Here $\tau$ is a positive constant, the $B_{ij}(t)$ are Brownian motions, and the random variables $\xi_{ij}$ and $\chi_{ij}$ are mutually independent and identically distributed, $\xi_{ij}, \chi_{ij} \stackrel{i.i.d.}{\sim} N(0,1)$; $\sigma_{ij}$ and $\upsilon_{ij}$ are unknown positive constants bounded by constants $V_{\sigma,\max}$ and $V_{\upsilon,\max}$, respectively, i.e., $0 < \sigma_{ij} \le V_{\sigma,\max}$ and $0 < \upsilon_{ij} \le V_{\upsilon,\max}$.
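For intuition, the random models (2)–(4) are easy to simulate. The sketch below is our own illustration (not from the paper): it draws one realization of the noisy data on the uniform grid, using the fact that $B_{ij}(t) \sim N(0,t)$ at a fixed time $t$, together with the hypothetical choices $\tau = 1$ and $\sigma_{ij} = \upsilon_{ij} = 0.1$.

```python
import numpy as np

def noisy_observations(F, g, h, n, m, t, tau=1.0, sigma=0.1, upsilon=0.1, seed=0):
    """Draw one realization of the random models for F, g, h at time t
    on the uniform grid x_i = pi(2i-1)/(2n), y_j = pi(2j-1)/(2m)."""
    rng = np.random.default_rng(seed)
    x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)
    y = np.pi * (2 * np.arange(1, m + 1) - 1) / (2 * m)
    X, Y = np.meshgrid(x, y, indexing="ij")
    # B_ij(t) ~ N(0, t) at fixed t; xi_ij, chi_ij ~ N(0, 1), all independent.
    F_hat = F(X, Y, t) + tau * np.sqrt(t) * rng.standard_normal((n, m))
    g_hat = g(X, Y) + sigma * rng.standard_normal((n, m))
    h_hat = h(X, Y) + upsilon * rng.standard_normal((n, m))
    return F_hat, g_hat, h_hat
```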
A small perturbation in the given Cauchy data $F, g, h$ may cause a very large error in the solution of Problem (1). This implies that problem (1) associated with the random models (2)–(4) is ill-posed in the sense of Hadamard. Hence, a regularization method should be applied. This problem has not been studied before, which is the motivation for this paper. Our main goal is to apply the nonparametric regression method to approximate the functions $F, g, h$. We then produce an estimate of $u(x,y,t)$ for $t \in (0,T]$.
This paper is organized as follows. In Section 2, we define some notations. The discretization form of the Fourier coefficients is introduced in Section 3. Section 4 is devoted to the ill-posedness of the problem. In Section 5, we construct an estimator, called the regularized solution, and describe an upper bound of the estimation error. Finally, a numerical result is presented in Section 6.

2. Some Notions and the Solution of the Problem

2.1. Some Notions

In this section, we present some notions which are useful for our main results. We work in the space $L^2(\Omega)$. The negative Dirichlet Laplacian is defined by
$\mathcal{A} f := -\Delta f = -\left( \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \right).$
The operator $\mathcal{A}$ is a linear, densely defined, self-adjoint, and positive definite elliptic operator on the connected bounded domain $\Omega$ with Dirichlet boundary condition. By spectral theory, it is easy to show that the eigenvalues of $\mathcal{A}$ are given by $\lambda_{p,q} = p^2 + q^2$. The corresponding eigenfunctions are denoted by $\varphi_{p,q}(x,y) = \varphi_p(x) \varphi_q(y)$ for all $p, q \in \mathbb{Z}^+$, where $\varphi_p(x) = \sqrt{2/\pi} \sin(px)$. Thus the eigenpairs $(\lambda_{p,q}, \varphi_{p,q})$, $p, q \in \mathbb{Z}^+$, satisfy
$\mathcal{A} \varphi_{p,q}(x,y) = \lambda_{p,q} \varphi_{p,q}(x,y), \quad (x,y) \in \Omega, \qquad \varphi_{p,q}(x,y) = 0, \quad (x,y) \in \partial\Omega.$
The functions $\varphi_{p,q}$ are normalized so that $\{\varphi_{p,q}\}_{p,q \in \mathbb{Z}^+}$ is an orthonormal basis of $L^2(\Omega)$. According to the characterization of a Hilbert basis, for any integers $N, M$, the set
$S_{N,M} := \operatorname{span} \{ \varphi_{p,q} : p = 1, \dots, N; \; q = 1, \dots, M \}$
is an $N \times M$-dimensional subspace of $L^2(\Omega)$. From [6] (Lemma 1.10), the projection of $f \in L^2(\Omega)$ onto the finite dimensional subspace $S_{N,M}$ is
$P_{S_{N,M}} f = \sum_{p=1}^{N} \sum_{q=1}^{M} \langle f, \varphi_{p,q} \rangle \varphi_{p,q}(x,y).$   (5)
For any positive numbers $\alpha, \beta$ and $E$, we denote the Sobolev class of functions by
$S_{\alpha,\beta}^{E}(\Omega) = \left\{ f \in L^2(\Omega) : \sum_{p=1}^{\infty} \sum_{q=1}^{\infty} p^{2\alpha} q^{2\beta} \langle f, \varphi_{p,q} \rangle^2 < E^2 \right\}.$   (6)

2.2. The Solution of the Problem

Lemma 1.
If (1) has a solution, then it has the form
$u(x,y,t) = \sum_{p=1}^{\infty} \sum_{q=1}^{\infty} u_{p,q}(t) \varphi_{p,q}(x,y),$   (7)
with the Fourier coefficients
$u_{p,q}(t) = \cosh\left(t\sqrt{\lambda_{p,q}}\right) g_{p,q} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} h_{p,q} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} F_{p,q}(s) \, ds,$   (8)
where $g_{p,q} = \langle g, \varphi_{p,q} \rangle$, $h_{p,q} = \langle h, \varphi_{p,q} \rangle$ and $F_{p,q}(t) = \langle F(\cdot,\cdot,t), \varphi_{p,q} \rangle$.
Proof. 
Suppose that the solution of (1) has the Fourier series
$u(x,y,t) = \sum_{p=1}^{\infty} \sum_{q=1}^{\infty} u_{p,q}(t) \varphi_{p,q}(x,y), \quad \text{where } u_{p,q}(t) = \langle u(\cdot,\cdot,t), \varphi_{p,q} \rangle.$   (9)
Multiplying both sides of the equation $u_{tt} + u_{xx} + u_{yy} = F(x,y,t)$ by $\varphi_{p,q}(x,y)$ and integrating over $\Omega$, we get
$\frac{d^2}{dt^2} u_{p,q} - \lambda_{p,q} u_{p,q} = F_{p,q}(t).$   (10)
Multiplying equation (10) by $\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right) / \sqrt{\lambda_{p,q}}$ and integrating both sides from 0 to $t$, we obtain
$u_{p,q}(t) = \cosh\left(t\sqrt{\lambda_{p,q}}\right) g_{p,q} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} h_{p,q} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} F_{p,q}(s) \, ds.$   (11)
Now we insert $\lambda_{p,q} = p^2 + q^2$ into (11), which completes the proof. □

3. Discretization Form of the Fourier Coefficients

In this section, we consider the discretization form of the Fourier coefficients. We first state the Lemmas below (see [7], page 145), which derive certain orthogonality results concerning the sine functions over a uniform design.
Lemma 2.
Let $i = 1, \dots, n$ with $x_i = \pi \frac{2i-1}{2n}$ and $\varphi_p(x) = \sqrt{\frac{2}{\pi}} \sin(px)$. Then for all $p = 1, \dots, n-1$ we have
$d_{p,r} = \frac{1}{n} \sum_{i=1}^{n} \varphi_p(x_i) \varphi_r(x_i) = \begin{cases} (-1)^k / \pi, & \text{if } r - p = 2kn, \\ -(-1)^k / \pi, & \text{if } r + p = 2kn, \\ 0, & \text{otherwise.} \end{cases}$
If $r = 1, \dots, n-1$, then we have
$d_{p,r} = \frac{1}{n} \sum_{i=1}^{n} \varphi_p(x_i) \varphi_r(x_i) = \begin{cases} 1/\pi, & \text{if } p = r, \\ 0, & \text{if } p \ne r. \end{cases}$
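Lemma 2 is easy to verify numerically. The following sketch (a sanity check of ours, not part of the paper) evaluates $d_{p,r}$ on the uniform design:

```python
import numpy as np

def d(p, r, n):
    """Discrete product d_{p,r} = (1/n) * sum_i phi_p(x_i) phi_r(x_i)."""
    x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)
    phi = lambda k: np.sqrt(2 / np.pi) * np.sin(k * x)
    return np.mean(phi(p) * phi(r))
```

For example, with n = 8 one finds d(3, 3, 8) = 1/π and d(2, 5, 8) = 0, while the aliased frequency r = p + 2n gives d(3, 19, 8) = −1/π (the case r − p = 2kn with k = 1).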
Lemma 3.
For $p = 1, \dots, n-1$ and $q = 1, \dots, m-1$, with $x_i = \pi \frac{2i-1}{2n}$ and $y_j = \pi \frac{2j-1}{2m}$, if we denote
$D_{p,r;q,s} = d_{p,r} \, d_{q,s} = \left( \frac{1}{n} \sum_{i=1}^{n} \varphi_p(x_i) \varphi_r(x_i) \right) \left( \frac{1}{m} \sum_{j=1}^{m} \varphi_q(y_j) \varphi_s(y_j) \right),$
then we have
$D_{p,r;q,s} = \begin{cases} (-1)^{k+l} / \pi^2, & \text{if } r - p = 2kn, \; s - q = 2lm \ \text{or} \ r + p = 2kn, \; s + q = 2lm, \\ -(-1)^{k+l} / \pi^2, & \text{if } r - p = 2kn, \; s + q = 2lm \ \text{or} \ r + p = 2kn, \; s - q = 2lm, \\ 0, & \text{otherwise.} \end{cases}$   (12)
For $r = 1, \dots, n-1$ and $s = 1, \dots, m-1$, we have
$D_{p,r;q,s} = \begin{cases} 1/\pi^2, & \text{if } (r,s) = (p,q), \\ 0, & \text{if } (r,s) \ne (p,q). \end{cases}$   (13)
Proof. 
This Lemma is derived from Lemma 2. □
Lemma 4.
Assume that $f \in C^1(\overline{\Omega})$. For $p = 1, \dots, n-1$ and $q = 1, \dots, m-1$, we set
$f_{n,m;p,q} := \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j) \varphi_{p,q}(x_i, y_j),$
where $(x_i, y_j) \in \Omega$ are the grid points of $\Omega$. The Fourier coefficients $f_{p,q} = \langle f, \varphi_{p,q} \rangle$ of the function $f(x,y)$ can be represented as
$f_{p,q} = f_{n,m;p,q} - \Theta_{n,m;p,q}^{f},$
where $\Theta_{n,m;p,q}^{f}$ consists of the terms
$\Theta_{n,m;p,q}^{f} = \Theta_{n;p,q}^{1} + \Theta_{m;p,q}^{2} + \Theta_{n,m;p,q}^{3},$   (14)
which are defined as
$\Theta_{n;p,q}^{1} = \sum_{k=1}^{\infty} (-1)^k \left( f_{2kn+p,q} - f_{2kn-p,q} \right), \qquad \Theta_{m;p,q}^{2} = \sum_{l=1}^{\infty} (-1)^l \left( f_{p,2lm+q} - f_{p,2lm-q} \right),$
and
$\Theta_{n,m;p,q}^{3} = \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} (-1)^{k+l} \left( f_{2kn+p,2lm+q} - f_{2kn+p,2lm-q} - f_{2kn-p,2lm+q} + f_{2kn-p,2lm-q} \right).$
Proof. 
The function $f(x,y)$ has the expansion
$f(x,y) = \sum_{r=1}^{\infty} \sum_{s=1}^{\infty} \langle f, \varphi_{r,s} \rangle \varphi_{r,s}(x,y).$   (15)
For $p = 1, \dots, n-1$ and $q = 1, \dots, m-1$, at the grid points $(x_i, y_j) \in \Omega$, we have
$\frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j) \varphi_{p,q}(x_i, y_j) = \sum_{r=1}^{\infty} \sum_{s=1}^{\infty} \langle f, \varphi_{r,s} \rangle \left( \frac{1}{n} \sum_{i=1}^{n} \varphi_p(x_i) \varphi_r(x_i) \right) \left( \frac{1}{m} \sum_{j=1}^{m} \varphi_q(y_j) \varphi_s(y_j) \right)$
$= \sum_{r=1}^{n-1} \sum_{s=1}^{m-1} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} + \sum_{r=1}^{n-1} \sum_{s=m}^{\infty} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} + \sum_{r=n}^{\infty} \sum_{s=1}^{m-1} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} + \sum_{r=n}^{\infty} \sum_{s=m}^{\infty} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s}.$
Formula (13) gives us
$\sum_{r=1}^{n-1} \sum_{s=1}^{m-1} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} = \frac{1}{\pi^2} \langle f, \varphi_{p,q} \rangle,$   (16)
and from (12), we obtain
$\sum_{r=1}^{n-1} \sum_{s=m}^{\infty} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} = \frac{1}{\pi^2} \sum_{l=1}^{\infty} (-1)^l \left( f_{p,2lm+q} - f_{p,2lm-q} \right),$   (17)
$\sum_{r=n}^{\infty} \sum_{s=1}^{m-1} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} = \frac{1}{\pi^2} \sum_{k=1}^{\infty} (-1)^k \left( f_{2kn+p,q} - f_{2kn-p,q} \right),$   (18)
and
$\sum_{r=n}^{\infty} \sum_{s=m}^{\infty} \langle f, \varphi_{r,s} \rangle D_{p,r;q,s} = \frac{1}{\pi^2} \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} (-1)^{k+l} \left( f_{2kn+p,2lm+q} - f_{2kn+p,2lm-q} - f_{2kn-p,2lm+q} + f_{2kn-p,2lm-q} \right).$   (19)
Combining (15)–(19), we get
$\frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j) \varphi_{p,q}(x_i, y_j) = \frac{1}{\pi^2} \langle f, \varphi_{p,q} \rangle + \frac{1}{\pi^2} \Theta_{n,m;p,q}^{f}.$
This completes the proof of Lemma 4. □
Remark 1.
Let us now consider the term
$\Theta_{n,m;p,q}^{f} = f_{n,m;p,q} - f_{p,q}$
in Lemma 4. The idea hidden inside is that the Fourier coefficient $f_{p,q}$ is approximated by $f_{n,m;p,q}$, which is a function of the data set. So $\Theta_{n,m;p,q}^{f}$ is the difference between them; $\Theta_{n,m;p,q}^{f}$ is also called a residual. Naturally, it must converge to zero as $n, m$ tend to infinity. We obtain an upper bound for $\Theta_{n,m;p,q}^{f}$ in the following Lemma.
Lemma 5.
Suppose that $f \in S_{\alpha,\beta}^{E}$, $\alpha, \beta > 1$. Then there exists a constant $C$ independent of $n, m, p, q$ such that
$\left| \Theta_{n,m;p,q}^{f} \right| \le C \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right).$   (20)
Proof. 
From (14), the triangle inequality implies
$\left| \Theta_{n,m;p,q}^{f} \right| \le \left| \Theta_{n;p,q}^{1} \right| + \left| \Theta_{m;p,q}^{2} \right| + \left| \Theta_{n,m;p,q}^{3} \right|.$
Since $f \in S_{\alpha,\beta}^{E}$, we have $|f_{p,q}| \le E p^{-\alpha} q^{-\beta}$. The first term is therefore bounded by
$\left| \Theta_{n;p,q}^{1} \right| \le \sum_{k=1}^{\infty} \left( |f_{2kn+p,q}| + |f_{2kn-p,q}| \right) \le \sum_{k=1}^{\infty} \left( \frac{E}{(2kn+p)^{\alpha} q^{\beta}} + \frac{E}{(2kn-p)^{\alpha} q^{\beta}} \right) \le \sum_{k=1}^{\infty} \frac{2E}{(kn)^{\alpha} q^{\beta}} \le \frac{1}{n^{\alpha}} \sum_{k=1}^{\infty} \frac{2E}{k^{\alpha}},$
and the second term is estimated as follows:
$\left| \Theta_{m;p,q}^{2} \right| \le \sum_{l=1}^{\infty} \left( |f_{p,2lm+q}| + |f_{p,2lm-q}| \right) \le \sum_{l=1}^{\infty} \left( \frac{E}{p^{\alpha} (2lm+q)^{\beta}} + \frac{E}{p^{\alpha} (2lm-q)^{\beta}} \right) \le \sum_{l=1}^{\infty} \frac{2E}{p^{\alpha} (lm)^{\beta}} \le \frac{1}{m^{\beta}} \sum_{l=1}^{\infty} \frac{2E}{l^{\beta}}.$
Similarly, we also have
$\left| \Theta_{n,m;p,q}^{3} \right| \le \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} \left( |f_{2kn+p,2lm+q}| + |f_{2kn+p,2lm-q}| + |f_{2kn-p,2lm+q}| + |f_{2kn-p,2lm-q}| \right) \le \frac{1}{n^{\alpha} m^{\beta}} \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} \frac{4E}{k^{\alpha} l^{\beta}}.$
Therefore, since $\frac{1}{n^{\alpha} m^{\beta}} \le \frac{1}{2} \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right)$, the constant
$C = \frac{3}{2} \max \left\{ \sum_{k=1}^{\infty} \frac{2E}{k^{\alpha}} ; \; \sum_{l=1}^{\infty} \frac{2E}{l^{\beta}} ; \; \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} \frac{4E}{k^{\alpha} l^{\beta}} \right\}$
is such that (20) is verified. □
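The decay of the residual in Lemma 5 can be observed numerically. In the sketch below (our illustration; the test function $f(x,y) = x(\pi - x)\, y(\pi - y)$ is a hypothetical choice whose sine coefficients decay like $p^{-3} q^{-3}$), the residual $|f_{n,n;1,1} - \langle f, \varphi_{1,1} \rangle|$ shrinks as the grid is refined.

```python
import numpy as np

def discrete_coeff(f, p, q, n, m):
    """f_{n,m;p,q} = pi^2/(n*m) * sum_ij f(x_i, y_j) * phi_pq(x_i, y_j)."""
    x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)
    y = np.pi * (2 * np.arange(1, m + 1) - 1) / (2 * m)
    X, Y = np.meshgrid(x, y, indexing="ij")
    phi_pq = (2 / np.pi) * np.sin(p * X) * np.sin(q * Y)
    return np.pi**2 / (n * m) * np.sum(f(X, Y) * phi_pq)

f = lambda X, Y: X * (np.pi - X) * Y * (np.pi - Y)
# Exact coefficient: int_0^pi t(pi-t) sin(t) dt = 4, hence
# <f, phi_11> = (sqrt(2/pi) * 4)**2 = 32/pi.
exact_11 = 32 / np.pi
residual = lambda n: abs(discrete_coeff(f, 1, 1, n, n) - exact_11)
```

Doubling the grid reduces the residual by roughly the factor $2^3$ expected from the cubic coefficient decay.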
Lemma 6.
Assume that $f \in C\left( [0,T]; C^1(\overline{\Omega}) \right)$. For $p = 1, \dots, n-1$ and $q = 1, \dots, m-1$, we set
$f_{n,m;p,q}(t) := \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} f(x_i, y_j, t) \varphi_{p,q}(x_i, y_j),$
where $(x_i, y_j) \in \Omega$ are the grid points of $\Omega$. The Fourier coefficients $f_{p,q}(t) = \langle f(\cdot,\cdot,t), \varphi_{p,q} \rangle$ of the function $f(x,y,t)$ can be represented as
$f_{p,q}(t) = f_{n,m;p,q}(t) - \Theta_{n,m;p,q}^{f}(t),$   (21)
where $\Theta_{n,m;p,q}^{f}(t)$ is
$\Theta_{n,m;p,q}^{f}(t) = \Theta_{n;p,q}^{1}(t) + \Theta_{m;p,q}^{2}(t) + \Theta_{n,m;p,q}^{3}(t),$   (22)
and
$\Theta_{n;p,q}^{1}(t) = \sum_{k=1}^{\infty} (-1)^k \left( f_{2kn+p,q}(t) - f_{2kn-p,q}(t) \right),$
$\Theta_{m;p,q}^{2}(t) = \sum_{l=1}^{\infty} (-1)^l \left( f_{p,2lm+q}(t) - f_{p,2lm-q}(t) \right),$
and
$\Theta_{n,m;p,q}^{3}(t) = \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} (-1)^{k+l} \left( f_{2kn+p,2lm+q}(t) - f_{2kn+p,2lm-q}(t) - f_{2kn-p,2lm+q}(t) + f_{2kn-p,2lm-q}(t) \right).$
Lemma 7
(Upper bound of the residual $\Theta_{n,m;p,q}^{f}(t)$). Suppose that $f(\cdot,\cdot,t) \in S_{\alpha,\beta}^{E}$ for all $t \in [0,T]$, $\alpha, \beta > 1$, and that $p = 1, \dots, n-1$; $q = 1, \dots, m-1$. Let us recall that $\Theta_{n,m;p,q}^{f}(t)$ is defined by (22). Then there exists a generic constant $C$ which is independent of $n, m, p, q$ such that
$\left| \Theta_{n,m;p,q}^{f}(t) \right| \le C \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right).$   (23)
Lemma 8.
For any integers $N, M$ such that $0 < N < n$ and $0 < M < m$, the solution $u$ can be represented as follows:
$u(x,y,t) = \sum_{p=1}^{N} \sum_{q=1}^{M} \Bigg[ \cosh\left(t\sqrt{\lambda_{p,q}}\right) g_{n,m;p,q} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} h_{n,m;p,q} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} F_{n,m;p,q}(s) \, ds$
$\qquad - \cosh\left(t\sqrt{\lambda_{p,q}}\right) \Theta_{n,m;p,q}^{g} - \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \Theta_{n,m;p,q}^{h} - \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \Theta_{n,m;p,q}^{F}(s) \, ds \Bigg] \varphi_{p,q}(x,y) + \left( u - P_{S_{N,M}} u \right)(x,y,t).$
Proof. 
According to Lemma 4, we can represent the Fourier coefficients as
$g_{p,q} = g_{n,m;p,q} - \Theta_{n,m;p,q}^{g}$
and
$h_{p,q} = h_{n,m;p,q} - \Theta_{n,m;p,q}^{h}.$
In addition, Lemma 6 gives us
$F_{p,q}(t) = F_{n,m;p,q}(t) - \Theta_{n,m;p,q}^{F}(t).$
Moreover, from (5), the projection of $u$ onto the subspace $S_{N,M}$ can be written as
$P_{S_{N,M}} u = \sum_{p=1}^{N} \sum_{q=1}^{M} u_{p,q} \varphi_{p,q}(x,y),$
where $u_{p,q}$ is defined in (8). Substituting $g_{p,q}$, $h_{p,q}$ and $F_{p,q}(t)$ into $u_{p,q}$, and using $u(x,y,t) = P_{S_{N,M}} u(x,y,t) + \left( u - P_{S_{N,M}} u \right)(x,y,t)$, the proof is completed. □

4. The Ill-Posedness of the Problem

Based on the above preliminaries, we can now consider the ill-posedness of our problem. We investigate a model with concrete data and prove the instability of the solution in the case of random noise data. We divide our task into two parts.
Part 1. Small change in the data: If $F(x,y,t) = 0$, $g(x,y) = 0$, and $h(x,y) = 0$, then problem (1) has the unique solution $u(x,y,t) = 0$. Let us take the following random observation data:
$\widehat{F}_{ij}(t) = \frac{1}{\sqrt{nm}} B_{ij}(t), \qquad \widehat{g}_{ij} = \frac{1}{\sqrt{nm}} \xi_{ij}, \qquad \widehat{h}_{ij} = \frac{1}{\sqrt{nm}} \chi_{ij}, \qquad i = 1, \dots, n, \; j = 1, \dots, m,$
where the $B_{ij}(t)$ are Brownian motions and $\xi_{ij}, \chi_{ij} \stackrel{i.i.d.}{\sim} N(0,1)$. Next, we consider the mean square error between $g$ and its estimation $\widehat{g}_{n,m}^{N,M}$ (see Lemma 9), which is characterized by $\mathbb{E} \| \widehat{g}_{n,m}^{N,M} - g \|^2$. Firstly, Parseval's identity gives us
$\left\| \widehat{g}_{n,m}^{N,M} - g \right\|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} \xi_{ij} \varphi_{p,q}(x_i, y_j) \right)^2.$   (24)
Secondly, since $\xi_{ij} \stackrel{i.i.d.}{\sim} N(0,1)$, we have $\mathbb{E} \xi_{ij} \xi_{lk} = 0$ for all $i \ne l$ or $j \ne k$. Hence, taking the expectation of both sides of (24), we obtain
$\mathbb{E} \left\| \widehat{g}_{n,m}^{N,M} - g \right\|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\pi^4}{n^2 m^2} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{nm} \mathbb{E} \xi_{ij}^2 \, \varphi_{p,q}^2(x_i, y_j).$
On the other hand, $\mathbb{E} \xi_{ij}^2 = 1$, and (13) gives $\frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \varphi_{p,q}^2(x_i, y_j) = \frac{1}{\pi^2}$. Thus
$\mathbb{E} \left\| \widehat{g}_{n,m}^{N,M} - g \right\|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\pi^2}{n^2 m^2} = \frac{\pi^2 N M}{n^2 m^2}.$
Finally, if we set $N = n-1$ and $M = m-1$, then
$\lim_{n,m \to \infty} \mathbb{E} \left\| \widehat{g}_{n,m}^{n-1,m-1} - g \right\|_{L^2(\Omega)}^2 = \lim_{n,m \to \infty} \frac{\pi^2 (n-1)(m-1)}{n^2 m^2} = 0.$
In the same manner as above, we also have
$\lim_{n,m \to \infty} \mathbb{E} \left\| \widehat{h}_{n,m}^{n-1,m-1} - h \right\|_{L^2(\Omega)}^2 = \lim_{n,m \to \infty} \frac{\pi^2 (n-1)(m-1)}{n^2 m^2} = 0,$
and
$\lim_{n,m \to \infty} \mathbb{E} \left\| \widehat{F}_{n,m}^{n-1,m-1} - F \right\|_{L^2(\Omega)}^2 = \lim_{n,m \to \infty} \frac{\pi^2 t (n-1)(m-1)}{n^2 m^2} = 0.$
Part 2. Large change in the solution: We shall construct the solution of problem (1) with respect to the random data. Note that $U^{n-1,m-1}$ is a trigonometric polynomial of order less than $n$ (with respect to the variable $x$) and less than $m$ (with respect to the variable $y$). Using (31) with $N = n-1$, $M = m-1$, we have
$U^{n-1,m-1}(x,y,t) = \sum_{p=1}^{n-1} \sum_{q=1}^{m-1} \left[ \cosh\left(t\sqrt{\lambda_{p,q}}\right) \widehat{g}_{n,m;p,q}^{\,n-1,m-1} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \widehat{h}_{n,m;p,q}^{\,n-1,m-1} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \widehat{F}_{n,m;p,q}^{\,n-1,m-1}(s) \, ds \right] \varphi_{p,q}(x,y),$
from which we deduce
$\left\| U^{n-1,m-1} \right\|_{L^2(\Omega)}^2 \ge \left( \cosh\left(t\sqrt{\lambda_{n-1,m-1}}\right) \widehat{g}_{n,m;n-1,m-1}^{\,n-1,m-1} + \frac{\sinh\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\sqrt{\lambda_{n-1,m-1}}} \widehat{h}_{n,m;n-1,m-1}^{\,n-1,m-1} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{n-1,m-1}}(t-s)\right)}{\sqrt{\lambda_{n-1,m-1}}} \widehat{F}_{n,m;n-1,m-1}^{\,n-1,m-1}(s) \, ds \right)^2 =: \left( K_1 + K_2 + K_3 \right)^2.$
Since $2(a+b)^2 \ge a^2 - 2b^2$ for all $a, b \in \mathbb{R}$, we deduce
$2 \left\| U^{n-1,m-1} \right\|_{L^2(\Omega)}^2 \ge 2 \left( K_1 + (K_2 + K_3) \right)^2 \ge K_1^2 - 2 \left( K_2 + K_3 \right)^2 \ge K_1^2 - 4 K_2^2 - 4 K_3^2.$
Therefore, we get the inequality in expectation
$2 \, \mathbb{E} \left\| U^{n-1,m-1} \right\|_{L^2(\Omega)}^2 \ge \mathbb{E} K_1^2 - 4 \, \mathbb{E} K_2^2 - 4 \, \mathbb{E} K_3^2.$   (25)
Here, we need to estimate the terms $\mathbb{E} K_1^2$, $\mathbb{E} K_2^2$ and $\mathbb{E} K_3^2$. By using Parseval's identity, we first see that
$K_1^2 = \cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right) \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} \xi_{ij} \varphi_{n-1,m-1}(x_i, y_j) \right)^2.$
Since the noises $\xi_{ij}$ are mutually independent and identically distributed, $\xi_{ij} \stackrel{i.i.d.}{\sim} N(0,1)$, we obtain $\mathbb{E} \xi_{ij} \xi_{lk} = 0$ for all $i \ne l$ or $j \ne k$. In addition, (13) gives
$\frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \varphi_{n-1,m-1}^2(x_i, y_j) = \frac{1}{\pi^2}.$
Hence
$\mathbb{E} K_1^2 = \frac{\pi^4}{n^2 m^2} \cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right) \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{nm} \mathbb{E} \xi_{ij}^2 \, \varphi_{n-1,m-1}^2(x_i, y_j) = \frac{\pi^2}{n^2 m^2} \cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right).$   (26)
Next, we estimate $\mathbb{E} K_2^2$. Applying a similar transformation as in the previous step, we get
$K_2^2 = \frac{\sinh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} \chi_{ij} \varphi_{n-1,m-1}(x_i, y_j) \right)^2,$
and we also have
$\mathbb{E} K_2^2 = \frac{\pi^2}{n^2 m^2} \frac{\sinh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}} \le \frac{\pi^2}{n^2 m^2} \frac{\cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}}.$   (27)
Finally, Parseval's identity implies that
$K_3^2 = \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{n-1,m-1}}(t-s)\right)}{\sqrt{\lambda_{n-1,m-1}}} \widehat{F}_{n,m;n-1,m-1}^{\,n-1,m-1}(s) \, ds \right)^2 = \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{n-1,m-1}}(t-s)\right)}{\sqrt{\lambda_{n-1,m-1}}} \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} B_{ij}(s) \varphi_{n-1,m-1}(x_i, y_j) \, ds \right)^2,$
and using Hölder's inequality, we obtain
$K_3^2 \le \int_0^t ds \int_0^t \left( \frac{\sinh\left(\sqrt{\lambda_{n-1,m-1}}(t-s)\right)}{\sqrt{\lambda_{n-1,m-1}}} \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} B_{ij}(s) \varphi_{n-1,m-1}(x_i, y_j) \right)^2 ds \le \frac{\pi^4 T}{n^2 m^2} \frac{\sinh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}} \int_0^t \left( \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{\sqrt{nm}} B_{ij}(s) \varphi_{n-1,m-1}(x_i, y_j) \right)^2 ds.$
From the properties of Brownian motion, we know that $\mathbb{E} B_{ij}(s) B_{lk}(s) = 0$ for all $i \ne l$ or $j \ne k$, and $\mathbb{E} B_{ij}^2(s) = s$. Thus
$\mathbb{E} K_3^2 \le \frac{\pi^4 T}{n^2 m^2} \frac{\sinh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}} \int_0^t \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{1}{nm} \mathbb{E} B_{ij}^2(s) \, \varphi_{n-1,m-1}^2(x_i, y_j) \, ds \le \frac{\pi^2 T^3}{n^2 m^2} \frac{\cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}}.$   (28)
Substituting (26)–(28) into (25), we have
$2 \, \mathbb{E} \left\| U^{n-1,m-1} \right\|_{L^2(\Omega)}^2 \ge \frac{\pi^2}{n^2 m^2} \cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right) - \frac{4\pi^2}{n^2 m^2} \frac{\cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}} - \frac{4\pi^2 T^3}{n^2 m^2} \frac{\cosh^2\left(t\sqrt{\lambda_{n-1,m-1}}\right)}{\lambda_{n-1,m-1}},$
and then, taking the supremum over $t \in [0,T]$,
$2 \, \mathbb{E} \left\| U^{n-1,m-1} \right\|_{C([0,T],L^2(\Omega))}^2 \ge \cosh^2\left(T\sqrt{\lambda_{n-1,m-1}}\right) \left( \frac{\pi^2}{n^2 m^2} - \frac{4\pi^2}{n^2 m^2 \lambda_{n-1,m-1}} - \frac{4\pi^2 T^3}{n^2 m^2 \lambda_{n-1,m-1}} \right).$
As $n, m$ tend to infinity, this shows that
$\lim_{n,m \to \infty} \mathbb{E} \left( \left\| \widehat{g}_{n,m}^{n-1,m-1} - g \right\|_{L^2(\Omega)}^2 + \left\| \widehat{h}_{n,m}^{n-1,m-1} - h \right\|_{L^2(\Omega)}^2 + \left\| \widehat{F}_{n,m}^{n-1,m-1} - F \right\|_{L^2(\Omega)}^2 \right) = 0,$
whereas
$\lim_{n,m \to \infty} \mathbb{E} \left\| U^{n-1,m-1} - u \right\|_{C([0,T],L^2(\Omega))}^2 = \infty.$
From the above, we see that a small change in the data leads to a significant change in the solution of the problem. Thus, we can conclude that problem (1) is ill-posed in the sense of Hadamard. Hence, a regularization is in order.

5. Regularization and Error Estimate

5.1. Nonparametric Estimate for Functional Data

In this section, we consider a nonparametric regression model, which is developed when the predictor is a function-valued random variable $\widehat{f}_{ij}$, $i = 1, \dots, n$; $j = 1, \dots, m$, based on a representation of the unknown regression function $f(x,y)$ by the model
$\widehat{f}_{ij} = f(x_i, y_j) + X_{ij},$
where $(x_i, y_j)$ are as in the introduction, and the $X_{ij}$ are independent and identically distributed random variables with distribution $N(0,1)$. Our goal in this section is to find an estimated function for $f(x,y)$ from the random data $\widehat{f}_{ij}$ at the grid points $(x_i, y_j)$.
The finite dimensional subspace $S_{N,M}$, $0 < N < n$, $0 < M < m$, of $L^2(\Omega)$ is closed [8] (page 362). Moreover, the projection Theorem [9] (page 130) leads to the following Corollary.
Corollary 1.
Let $S_{N,M}$ be a closed subspace of the Hilbert space $L^2(\Omega)$. For each $f \in L^2(\Omega)$ there is a unique closest function $\widehat{f}_{n,m}^{N,M} \in S_{N,M}$ such that
$\left\| f - \widehat{f}_{n,m}^{N,M} \right\|_{L^2} = \min_{\xi \in S_{N,M}} \| f - \xi \|_{L^2}.$
In the statistical sense, the function $f$ is unknown; we only know its observed values $\widehat{f}_{ij}$ at the discrete points $(x_i, y_j)$. The estimated function for $f$ in the subspace $S_{N,M}$ is
$\widehat{f}_{n,m}^{N,M} = \arg\min_{\xi \in S_{N,M}} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \widehat{f}_{ij} - \xi(x_i, y_j) \right)^2.$   (29)
The Lemma below characterizes the function $\widehat{f}_{n,m}^{N,M} = \xi$ which satisfies (29).
Lemma 9.
Problem (29) has a unique solution
$\widehat{f}_{n,m}^{N,M} = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{f}_{ij} \varphi_{p,q}(x_i, y_j) \right) \varphi_{p,q}.$
Proof. 
Since $\xi \in S_{N,M}$, its Fourier expansion has the form
$\xi(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \xi_{p,q} \varphi_{p,q}(x,y),$
and Equation (29) becomes
$\widehat{f}_{n,m}^{N,M} = \arg\min_{\xi \in S_{N,M}} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \widehat{f}_{ij} - \sum_{p=1}^{N} \sum_{q=1}^{M} \xi_{p,q} \varphi_{p,q}(x_i, y_j) \right)^2.$
Consider the function of the $N \times M$ variables $\xi_{l,k}$, $l = 1, \dots, N$, $k = 1, \dots, M$,
$\Xi\left( \xi_{1,1}, \dots, \xi_{N,M} \right) = \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \widehat{f}_{ij} - \sum_{p=1}^{N} \sum_{q=1}^{M} \xi_{p,q} \varphi_{p,q}(x_i, y_j) \right)^2.$
Here, our goal is to find the $\xi_{l,k}$ at which $\Xi$ attains its minimum value. A necessary and sufficient condition for that point is
$\frac{\partial}{\partial \xi_{l,k}} \Xi = -2 \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \widehat{f}_{ij} - \sum_{p=1}^{N} \sum_{q=1}^{M} \xi_{p,q} \varphi_{p,q}(x_i, y_j) \right) \varphi_{l,k}(x_i, y_j) = 0,$
for all $l = 1, \dots, N$, $k = 1, \dots, M$. Based on (13), we obtain
$\xi_{l,k} = \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{f}_{ij} \varphi_{l,k}(x_i, y_j).$
Hence, the estimated function for $f$ is
$\widehat{f}_{n,m}^{N,M} = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{f}_{ij} \varphi_{p,q}(x_i, y_j) \right) \varphi_{p,q},$
as desired. □
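In matrix form, the estimator of Lemma 9 is a truncated discrete sine transform of the data array. A compact numpy sketch (function names are ours):

```python
import numpy as np

def sine_design(n, K):
    """Rows are phi_1, ..., phi_K sampled at the uniform design points."""
    x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)
    return np.sqrt(2 / np.pi) * np.sin(np.outer(np.arange(1, K + 1), x))

def estimate_coeffs(f_hat, N, M):
    """Coefficients of the Lemma 9 estimator for an n-by-m data array f_hat."""
    n, m = f_hat.shape
    Px, Py = sine_design(n, N), sine_design(m, M)
    return np.pi**2 / (n * m) * Px @ f_hat @ Py.T      # (N, M) array

def evaluate(coeffs, n, m):
    """Evaluate the estimated function back on the n-by-m grid."""
    N, M = coeffs.shape
    return sine_design(n, N).T @ coeffs @ sine_design(m, M)
```

On noiseless data that is a trigonometric polynomial of degree below the grid size, the estimator is exact; with noise, it averages the nm observations into NM coefficients, which is the source of the variance control in Theorem 1.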

5.2. Regularized Solution

In reality, we only have the $n \times m$ observed values $(\widehat{g}_{ij}, \widehat{h}_{ij}, \widehat{F}_{ij})$ of the functions $(g, h, F)$ at the grid points $(x_i, y_j)$, instead of knowing $(g, h, F)$ themselves. Using Lemma 9, the estimates of $g, h, F$ are
$\widehat{g}_{n,m}^{N,M}(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \widehat{g}_{n,m;p,q}^{N,M} \varphi_{p,q}(x,y), \quad \text{where } \widehat{g}_{n,m;p,q}^{N,M} = \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{g}_{ij} \varphi_{p,q}(x_i, y_j),$
$\widehat{h}_{n,m}^{N,M}(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \widehat{h}_{n,m;p,q}^{N,M} \varphi_{p,q}(x,y), \quad \text{where } \widehat{h}_{n,m;p,q}^{N,M} = \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{h}_{ij} \varphi_{p,q}(x_i, y_j),$
$\widehat{F}_{n,m}^{N,M}(x,y,t) = \sum_{p=1}^{N} \sum_{q=1}^{M} \widehat{F}_{n,m;p,q}^{N,M}(t) \varphi_{p,q}(x,y), \quad \text{where } \widehat{F}_{n,m;p,q}^{N,M}(t) = \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \widehat{F}_{ij}(t) \varphi_{p,q}(x_i, y_j).$
Choosing two integers $N, M$ such that $1 < N < n$, $1 < M < m$, we will find a solution $U^{N,M}$ of the problem on the subspace $S_{N,M}$ such that
$U_{tt}^{N,M} + U_{xx}^{N,M} + U_{yy}^{N,M} = \widehat{F}_{n,m}^{N,M}(x,y,t), \quad (x,y,t) \in \Omega \times (0,T),$
$U^{N,M}(x,y,0) = \widehat{g}_{n,m}^{N,M}(x,y), \quad (x,y) \in \Omega,$   (30)
$U_t^{N,M}(x,y,0) = \widehat{h}_{n,m}^{N,M}(x,y), \quad (x,y) \in \Omega.$
Using Lemma 8, we deduce that the system (30) has the unique solution
$U^{N,M}(x,y,t) = \sum_{p=1}^{N} \sum_{q=1}^{M} \left[ \cosh\left(t\sqrt{\lambda_{p,q}}\right) \widehat{g}_{n,m;p,q}^{N,M} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \widehat{h}_{n,m;p,q}^{N,M} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \widehat{F}_{n,m;p,q}^{N,M}(s) \, ds \right] \varphi_{p,q}(x,y).$   (31)

5.3. Convergence Estimate

The main result of this section is the convergence rate of the estimator. We recall the regularized solution of problem (1) as constructed above:
$U^{N,M}(x,y,t) = \sum_{p=1}^{N} \sum_{q=1}^{M} \left[ \cosh\left(t\sqrt{\lambda_{p,q}}\right) \widehat{g}_{n,m;p,q}^{N,M} + \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \widehat{h}_{n,m;p,q}^{N,M} + \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \widehat{F}_{n,m;p,q}^{N,M}(s) \, ds \right] \varphi_{p,q}(x,y),$
which is constructed from the observation data $\widehat{g}_{ij}, \widehat{h}_{ij}, \widehat{F}_{ij}$ at the grid points $(x_i, y_j)$, $i = 1, \dots, n$; $j = 1, \dots, m$, of the unknown functions $g, h, F$, respectively. The theorem below shows that a suitable choice of the regularization parameter is necessary.
Theorem 1 (Convergence estimate for the regularized solution).
Let $E > 0$, $\alpha, \beta > 1$, $0 < N < n$, $0 < M < m$, and $g \in C^1(\overline{\Omega}) \cap S_{\alpha,\beta}^{E}(\Omega)$, $h \in C^1(\overline{\Omega}) \cap S_{\alpha,\beta}^{E}(\Omega)$, $F \in C\left( [0,T]; C^1(\overline{\Omega}) \cap S_{\alpha,\beta}^{E}(\Omega) \right)$. Assume that problem (1) has a unique solution $u \in C\left( [0,T]; L^2(\Omega) \right)$ with $u(\cdot,\cdot,t) \in S_{\alpha,\beta}^{E}(\Omega)$ for all $t \in [0,T]$. For $n, m$ large enough that
$\left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right)^2 \le \frac{1}{nm},$   (33)
there is a constant $\Psi$ such that
$\mathbb{E} \left\| U^{N,M} - u \right\|_{C([0,T];L^2(\Omega))}^2 \le \Psi \left[ \frac{1}{nm} N M e^{2T\sqrt{\lambda_{N,M}}} + 3 \left( \frac{1}{N^{2\alpha}} + \frac{1}{M^{2\beta}} \right) E^2 \right].$
Remark 2.
Let us consider the equality $e^{2t\sqrt{N_{n,m}^2 + M_{n,m}^2}} = n^a m^b$, $0 < a, b < 1$, and choose the integers $N_{n,m} = M_{n,m}$ such that
$N_{n,m} = M_{n,m} := \left\lfloor \frac{1}{2\sqrt{2}\, t} \ln\left( n^a m^b \right) \right\rfloor,$
where $\lfloor x \rfloor$ is the integer part (truncation) of the real number $x$. The mean squared error $\mathbb{E} \| U^{N_{n,m},M_{n,m}} - u \|_{L^2(\Omega)}^2$ is then of order $n^{a-1} m^{b-1}$.
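The truncation level of Remark 2 is straightforward to compute; a minimal sketch of ours, under the remark's assumption $N_{n,m} = M_{n,m}$:

```python
import math

def truncation_level(n, m, t, a=0.5, b=0.5):
    """N_{n,m} = M_{n,m} = floor( ln(n^a * m^b) / (2*sqrt(2)*t) )."""
    return math.floor((a * math.log(n) + b * math.log(m)) / (2 * math.sqrt(2) * t))
```

For example, n = m = 50, t = 1/4 and a = b = 1/2 give N_{n,m} = 5; the level grows only logarithmically in the number of observation points, which is what keeps the amplification factor $e^{2t\sqrt{\lambda_{N,M}}}$ under control.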
Proof. 
We have
$U^{N,M} - u = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh\left(t\sqrt{\lambda_{p,q}}\right) \left( \widehat{g}_{n,m;p,q}^{N,M} - g_{n,m;p,q} \right) \varphi_{p,q}(x,y) + \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \left( \widehat{h}_{n,m;p,q}^{N,M} - h_{n,m;p,q} \right) \varphi_{p,q}(x,y)$
$+ \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \left( \widehat{F}_{n,m;p,q}^{N,M}(s) - F_{n,m;p,q}(s) \right) ds \right) \varphi_{p,q}(x,y) + \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh\left(t\sqrt{\lambda_{p,q}}\right) \Theta_{n,m;p,q}^{g} \varphi_{p,q}(x,y)$
$+ \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \Theta_{n,m;p,q}^{h} \varphi_{p,q}(x,y) + \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \Theta_{n,m;p,q}^{F}(s) \, ds \right) \varphi_{p,q}(x,y) - \left( u - P_{S_{N,M}} u \right)(x,y,t)$
$:= \Upsilon_1 + \Upsilon_2 + \Upsilon_3 + \Upsilon_4 + \Upsilon_5 + \Upsilon_6 - \Upsilon_7.$
Here, we find upper bounds for $\mathbb{E} \|\Upsilon_1\|^2$, $\mathbb{E} \|\Upsilon_2\|^2$, $\mathbb{E} \|\Upsilon_3\|^2$ and for $\|\Upsilon_4\|^2$ to $\|\Upsilon_7\|^2$ in the six steps below. We first proceed to Step 1.
Step 1. Estimate $\mathbb{E} \| \Upsilon_1 \|_{L^2(\Omega)}^2$: Using the fact that
$\Upsilon_1 = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh\left(t\sqrt{\lambda_{p,q}}\right) \left( \widehat{g}_{n,m;p,q}^{N,M} - g_{n,m;p,q} \right) \varphi_{p,q}(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh\left(t\sqrt{\lambda_{p,q}}\right) \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \widehat{g}_{ij} - g(x_i, y_j) \right) \varphi_{p,q}(x_i, y_j) \, \varphi_{p,q}(x,y)$
$= \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh\left(t\sqrt{\lambda_{p,q}}\right) \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \sigma_{ij} \xi_{ij} \varphi_{p,q}(x_i, y_j) \, \varphi_{p,q}(x,y),$
Parseval's identity implies that
$\| \Upsilon_1 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh^2\left(t\sqrt{\lambda_{p,q}}\right) \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \sigma_{ij} \xi_{ij} \varphi_{p,q}(x_i, y_j) \right)^2.$
Since the noises $\xi_{ij}$ are mutually independent and identically distributed, $\xi_{ij} \stackrel{i.i.d.}{\sim} N(0,1)$, we obtain $\mathbb{E} \xi_{ij} \xi_{lk} = 0$ for all $i \ne l$ or $j \ne k$. Hence
$\mathbb{E} \| \Upsilon_1 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh^2\left(t\sqrt{\lambda_{p,q}}\right) \frac{\pi^4}{n^2 m^2} \sum_{i=1}^{n} \sum_{j=1}^{m} \sigma_{ij}^2 \, \mathbb{E} \xi_{ij}^2 \, \varphi_{p,q}^2(x_i, y_j).$
In addition, we also have
$\frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \varphi_{p,q}^2(x_i, y_j) = \frac{1}{\pi^2}$
and $0 < \sigma_{ij} \le V_{\sigma,\max}$. Thus
$\mathbb{E} \| \Upsilon_1 \|_{L^2(\Omega)}^2 \le \frac{V_{\sigma,\max}^2 \pi^2}{nm} \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh^2\left(t\sqrt{\lambda_{p,q}}\right) \le \frac{\pi^2}{nm} V_{\sigma,\max}^2 \sum_{p=1}^{N} \sum_{q=1}^{M} e^{2t\sqrt{\lambda_{N,M}}} \le \frac{\pi^2}{nm} V_{\sigma,\max}^2 N M e^{2t\sqrt{\lambda_{N,M}}}.$
Step 2. Estimate $\mathbb{E} \| \Upsilon_2 \|_{L^2(\Omega)}^2$: In a similar manner as in Step 1, we have
$\Upsilon_2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \left( \widehat{h}_{n,m;p,q}^{N,M} - h_{n,m;p,q} \right) \varphi_{p,q}(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh\left(t\sqrt{\lambda_{p,q}}\right)}{\sqrt{\lambda_{p,q}}} \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \upsilon_{ij} \chi_{ij} \varphi_{p,q}(x_i, y_j) \, \varphi_{p,q}(x,y),$
and Parseval's identity gives us
$\| \Upsilon_2 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh^2\left(t\sqrt{\lambda_{p,q}}\right)}{\lambda_{p,q}} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \upsilon_{ij} \chi_{ij} \varphi_{p,q}(x_i, y_j) \right)^2.$
Because the $\chi_{ij}$ are mutually independent and identically distributed, we have
$\mathbb{E} \| \Upsilon_2 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh^2\left(t\sqrt{\lambda_{p,q}}\right)}{\lambda_{p,q}} \frac{\pi^4}{n^2 m^2} \sum_{i=1}^{n} \sum_{j=1}^{m} \upsilon_{ij}^2 \, \mathbb{E} \chi_{ij}^2 \, \varphi_{p,q}^2(x_i, y_j) \le \frac{V_{\upsilon,\max}^2 \pi^2}{nm} \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh^2\left(t\sqrt{\lambda_{p,q}}\right)}{\lambda_{p,q}} \le \frac{\pi^2}{nm} V_{\upsilon,\max}^2 N M \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}}.$
Step 3. Estimate $\mathbb{E} \| \Upsilon_3 \|_{L^2(\Omega)}^2$:
$\Upsilon_3 = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \left( \widehat{F}_{n,m;p,q}^{N,M}(s) - F_{n,m;p,q}(s) \right) ds \right) \varphi_{p,q}(x,y) = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \tau B_{ij}(s) \varphi_{p,q}(x_i, y_j) \, ds \right) \varphi_{p,q}(x,y).$
From Parseval's identity, we obtain
$\| \Upsilon_3 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \tau B_{ij}(s) \varphi_{p,q}(x_i, y_j) \, ds \right)^2,$
and using Hölder's inequality, we get
$\| \Upsilon_3 \|_{L^2(\Omega)}^2 \le \sum_{p=1}^{N} \sum_{q=1}^{M} \int_0^t ds \int_0^t \frac{\sinh^2\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\lambda_{p,q}} \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \tau B_{ij}(s) \varphi_{p,q}(x_i, y_j) \right)^2 ds \le \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}} \, T \int_0^t \left( \frac{\pi^2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \tau B_{ij}(s) \varphi_{p,q}(x_i, y_j) \right)^2 ds.$
From the properties of Brownian motion, we know that $\mathbb{E} B_{ij}(s) B_{lk}(s) = 0$ for all $i \ne l$ or $j \ne k$, and $\mathbb{E} B_{ij}^2(s) = s$. Hence
$\mathbb{E} \| \Upsilon_3 \|_{L^2(\Omega)}^2 \le \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}} \, T \int_0^t \frac{\pi^4}{n^2 m^2} \sum_{i=1}^{n} \sum_{j=1}^{m} \tau^2 \, \mathbb{E} B_{ij}^2(s) \, \varphi_{p,q}^2(x_i, y_j) \, ds \le \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}} \, T \int_0^t \frac{\pi^2}{nm} \tau^2 s \, ds \le \frac{\pi^2}{nm} T^3 \tau^2 N M \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}}.$
Step 4. Estimate $\| \Upsilon_4 \|_{L^2(\Omega)}^2$ and $\| \Upsilon_5 \|_{L^2(\Omega)}^2$: Parseval's identity implies that
$\| \Upsilon_4 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \cosh^2\left(t\sqrt{\lambda_{p,q}}\right) \left( \Theta_{n,m;p,q}^{g} \right)^2.$
Moreover, Lemma 5 gives us the upper bound; we deduce
$\| \Upsilon_4 \|_{L^2(\Omega)}^2 \le C^2 \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right)^2 N M e^{2t\sqrt{\lambda_{N,M}}}.$
By a similar transformation as above, we get
$\| \Upsilon_5 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \frac{\sinh^2\left(t\sqrt{\lambda_{p,q}}\right)}{\lambda_{p,q}} \left( \Theta_{n,m;p,q}^{h} \right)^2 \le C^2 \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right)^2 N M \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}}.$
Step 5. Estimate $\| \Upsilon_6 \|_{L^2(\Omega)}^2$: Applying Parseval's identity, we have
$\| \Upsilon_6 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=1}^{M} \left( \int_0^t \frac{\sinh\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\sqrt{\lambda_{p,q}}} \Theta_{n,m;p,q}^{F}(s) \, ds \right)^2,$
and using Hölder's inequality, we have
$\| \Upsilon_6 \|_{L^2(\Omega)}^2 \le \sum_{p=1}^{N} \sum_{q=1}^{M} \int_0^t ds \int_0^t \frac{\sinh^2\left(\sqrt{\lambda_{p,q}}(t-s)\right)}{\lambda_{p,q}} \left( \Theta_{n,m;p,q}^{F}(s) \right)^2 ds.$
Lemma 7 gives us the upper bound of $\Theta_{n,m;p,q}^{F}(s)$. Hence
$\| \Upsilon_6 \|_{L^2(\Omega)}^2 \le C^2 \left( \frac{1}{n^{\alpha}} + \frac{1}{m^{\beta}} \right)^2 T^2 N M \frac{e^{2t\sqrt{\lambda_{N,M}}}}{\lambda_{1,1}}.$
Step 6. Estimate $\| \Upsilon_7 \|_{L^2(\Omega)}^2$:
$\| \Upsilon_7 \|_{L^2(\Omega)}^2 = \sum_{p=1}^{N} \sum_{q=M+1}^{\infty} \langle u, \varphi_{p,q} \rangle^2 + \sum_{p=N+1}^{\infty} \sum_{q=1}^{M} \langle u, \varphi_{p,q} \rangle^2 + \sum_{p=N+1}^{\infty} \sum_{q=M+1}^{\infty} \langle u, \varphi_{p,q} \rangle^2 \le 3 \left( \frac{1}{N^{2\alpha}} + \frac{1}{M^{2\beta}} \right) E^2.$
If condition (33) holds, then by combining the six steps above, we obtain
$\mathbb{E} \left\| U^{N,M} - u \right\|_{C([0,T];L^2(\Omega))}^2 \le \Psi \left[ \frac{1}{nm} N M e^{2T\sqrt{\lambda_{N,M}}} + 3 \left( \frac{1}{N^{2\alpha}} + \frac{1}{M^{2\beta}} \right) E^2 \right],$
where
$\Psi = \pi^2 V_{\sigma,\max}^2 + \frac{\pi^2 V_{\upsilon,\max}^2}{\lambda_{1,1}} + \frac{\pi^2 T^3 \tau^2}{\lambda_{1,1}} + C^2 + \frac{C^2}{\lambda_{1,1}} + \frac{C^2 T^2}{\lambda_{1,1}}.$
This completes the proof. □

6. Numerical Example

6.1. The Ill-Posedness of the Problem

In this subsection, we implement the numerical experiment to illustrate the ill-posedness of the problem which presented in Section 4. For convenience, let us assign the change in the data as following
D ( t ) : = E g ^ n , m N , M g L 2 ( Ω ) 2 + h ^ n , m N , M h L 2 ( Ω ) 2 + F ^ n , m N , M F L 2 ( Ω ) 2 ,
and the change in the solution as
$$S(t) := \mathbf{E}\,\big\|U^{N,M}(\cdot,\cdot,t) - u(\cdot,\cdot,t)\big\|^2_{L^2(\Omega)}.$$
In this experiment, we set $t = 1/4$, $N = n-1$, $M = m-1$, and $n = m$. The observed values of $F$, $g$, $h$ follow the random models (see Figure 1):
$$\widehat{F}_{ij}(t) = \frac{1}{nm}B_{ij}(t), \qquad \widehat{g}_{ij} = \frac{1}{nm}\xi_{ij}, \qquad \widehat{h}_{ij} = \frac{1}{nm}\chi_{ij}, \qquad i=1,\dots,n,\ j=1,\dots,m.$$
The values of $D(t)$ and $S(t)$ are given in Table 1. These results clearly show that, as $n = m$ increases from 5 to 50, the data perturbation $D(1/4)$ becomes smaller while the solution change $S(1/4)$ grows dramatically.
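The noise model above is easy to simulate. The following Python sketch is an illustration under the stated model only, not the authors' code; the helper name `data_perturbation` is hypothetical, and the mean square over the grid points is used as a simple discrete proxy for the data perturbation $D(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def data_perturbation(n, m):
    # Pure-noise observations g_hat_ij = xi_ij / (n*m) with xi_ij ~ N(0, 1),
    # following the random model of Section 6.1; the mean square over the
    # n x m grid points serves as a discrete proxy for D(t).
    g_hat = rng.standard_normal((n, m)) / (n * m)
    return np.mean(g_hat**2)

coarse = data_perturbation(5, 5)    # analogous to the n = m = 5 column of Table 1
fine = data_perturbation(50, 50)    # analogous to the n = m = 50 column
```

Refining the grid shrinks this data perturbation, yet Table 1 shows the corresponding solution change $S(1/4)$ blowing up at the same time, which is precisely the ill-posedness.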

6.2. Convergence Behavior of the Regularized Solution

In this subsection, we present an example to illustrate the efficiency of the proposed method. We consider the following Cauchy problem:
$$\begin{cases} u_{tt} + u_{xx} + u_{yy} = F(x,y,t), & (x,y,t)\in\Omega\times(0,1),\\ u(x,y,0) = g(x,y), & (x,y)\in\Omega,\\ u_t(x,y,0) = h(x,y), & (x,y)\in\Omega, \end{cases}$$
where $F(x,y,t) = -2t\sin x\sin y$, $g(x,y) = 0$, and $h(x,y) = \sin x\sin y$. This system has the exact solution $u(x,y,t) = t\sin x\sin y$.
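As a quick consistency check, substituting the exact solution into the equation pins down the source term, whose sign must be negative:

```latex
u = t\sin x\sin y
\;\Longrightarrow\;
u_{tt} = 0,\qquad u_{xx} = u_{yy} = -t\sin x\sin y,
\;\Longrightarrow\;
u_{tt} + u_{xx} + u_{yy} = -2t\sin x\sin y = F(x,y,t),
```

together with $u(x,y,0) = 0 = g(x,y)$ and $u_t(x,y,0) = \sin x\sin y = h(x,y)$.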
The observation data at the grid points $(x_i,y_j) = \big(\pi\frac{2i-1}{2n},\, \pi\frac{2j-1}{2m}\big)$, $i=1,\dots,n$, $j=1,\dots,m$, of the functions $F$, $h$, $g$ are
$$\begin{aligned} \widehat{F}_{ij}(t) &= F(x_i,y_j,t) + \alpha B_{ij}(t), & \alpha &= 1,\ B_{ij}(t)\ \text{i.i.d.}\ N(0,1),\\ \widehat{g}_{ij} &= g(x_i,y_j) + \sigma_{ij}\xi_{ij}, & \sigma_{ij} &= 0.1,\ \xi_{ij}\ \text{i.i.d.}\ N(0,1),\\ \widehat{h}_{ij} &= h(x_i,y_j) + \upsilon_{ij}\chi_{ij}, & \upsilon_{ij} &= 0.1,\ \chi_{ij}\ \text{i.i.d.}\ N(0,1), \end{aligned}$$
for all $i=1,\dots,n$; $j=1,\dots,m$. For fixed $t$, the matrix form of the regularized solution is
$$U^{N,M} = \begin{pmatrix} U^{N,M}(x_1,y_1,t) & U^{N,M}(x_1,y_2,t) & \cdots & U^{N,M}(x_1,y_m,t)\\ U^{N,M}(x_2,y_1,t) & U^{N,M}(x_2,y_2,t) & \cdots & U^{N,M}(x_2,y_m,t)\\ \vdots & \vdots & \ddots & \vdots\\ U^{N,M}(x_n,y_1,t) & U^{N,M}(x_n,y_2,t) & \cdots & U^{N,M}(x_n,y_m,t) \end{pmatrix},$$
whose elements are computed as
$$U^{N,M}(x_i,y_j,t) = \sum_{p=1}^{N}\sum_{q=1}^{M}\bigg[\cosh\!\big(t\sqrt{\lambda_{p,q}}\big)\,\widehat{g}^{\,N,M}_{n,m;p,q} + \frac{\sinh\!\big(t\sqrt{\lambda_{p,q}}\big)}{\sqrt{\lambda_{p,q}}}\,\widehat{h}^{\,N,M}_{n,m;p,q} + \int_0^t \frac{\sinh\!\big(\sqrt{\lambda_{p,q}}(t-s)\big)}{\sqrt{\lambda_{p,q}}}\,\widehat{F}^{\,N,M}_{n,m;p,q}(s)\,ds\bigg]\varphi_{p,q}(x_i,y_j).$$
The errors are estimated by
$$\mathrm{Err}^{N,M}_{n,m}(t) = \frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}\big|U^{N,M}(x_i,y_j,t) - u(x_i,y_j,t)\big|^2.$$
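This experiment can be sketched in a few lines of Python. The sketch below assumes the standard eigen-system on $\Omega = (0,\pi)^2$, namely $\lambda_{p,q} = p^2 + q^2$ and $\varphi_{p,q}(x,y) = \frac{2}{\pi}\sin(px)\sin(qy)$, and approximates each discrete Fourier coefficient by the quadrature $\frac{\pi^2}{nm}\sum_{i,j}(\text{data})_{ij}\,\varphi_{p,q}(x_i,y_j)$; these formulas and the helper name `regularized_error` are assumptions for illustration, not the authors' exact implementation, and the Duhamel integral is evaluated with a simple trapezoid rule.

```python
import numpy as np

def regularized_error(n, m, N, M, t, noise=0.0, seed=0, n_s=64):
    """Build U^{N,M} on the midpoint grid and return Err_{n,m}^{N,M}(t)."""
    rng = np.random.default_rng(seed)
    x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)   # x_i = pi(2i-1)/(2n)
    y = np.pi * (2 * np.arange(1, m + 1) - 1) / (2 * m)   # y_j = pi(2j-1)/(2m)
    X, Y = np.meshgrid(x, y, indexing="ij")
    SS = np.sin(X) * np.sin(Y)

    # Observations of g = 0, h = sin x sin y, F = -2 s sin x sin y.
    g_hat = noise * rng.standard_normal((n, m))
    h_hat = SS + noise * rng.standard_normal((n, m))
    s = np.linspace(0.0, t, n_s)                          # time quadrature nodes
    F_hat = -2.0 * s[:, None, None] * SS[None, :, :]
    if noise:
        F_hat = F_hat + noise * rng.standard_normal((n_s, n, m))

    ds = s[1] - s[0]
    w = np.pi**2 / (n * m)                                # spatial quadrature weight
    U = np.zeros((n, m))
    for p in range(1, N + 1):
        for q in range(1, M + 1):
            rt = np.sqrt(p * p + q * q)                   # sqrt(lambda_{p,q})
            phi = (2.0 / np.pi) * np.sin(p * X) * np.sin(q * Y)
            g_c = w * np.sum(g_hat * phi)                 # discrete Fourier coefficients
            h_c = w * np.sum(h_hat * phi)
            F_c = w * np.sum(F_hat * phi[None, :, :], axis=(1, 2))
            integrand = np.sinh(rt * (t - s)) / rt * F_c  # Duhamel term, trapezoid rule
            conv = ds * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
            U += (np.cosh(rt * t) * g_c + np.sinh(rt * t) / rt * h_c + conv) * phi

    u_exact = t * SS
    return np.mean((U - u_exact) ** 2)                    # Err_{n,m}^{N,M}(t)

err = regularized_error(n=50, m=50, N=3, M=3, t=0.25)     # noise-free sanity check
```

With noise present, refining the grid averages the noise out of each coefficient, which is the mechanism behind the error reduction from $n = m = 500$ to $n = m = 1000$ reported in Table 2.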
In this numerical example, we fix the regularization parameters at $N = M = 3$ and carry out the calculations for two cases with different numbers of observation points: $n = m = 500$ and $n = m = 1000$. Table 2 shows that the error decreases as the number of spatial observation points increases; moreover, the convergence rate improves as the time $t$ tends to 0. This reflects the rate of convergence established in Theorem 1. For a more intuitive view, Figure 2 shows 3D graphs of the exact solution and the regularized solution at $t = 1/4$, with $N = M = 3$ and $n = m = 1000$.

7. Conclusions

In this work, we considered the problem of finding the solution of the two-dimensional inhomogeneous elliptic equation with initial data and source term perturbed by random noise. We first estimated the initial functions and the source function by the nonparametric least-squares regression method. Then we approximated the Fourier coefficients $f_{p,q}$ by $f_{n,m;p,q}$, which is a function of the data set. Since the problem is ill-posed, a regularization is needed. Finally, we carried out a numerical example and validated the regularized solution against the exact solution; the results show that the numerical error decreases as the number of spatial observation points increases, and that the convergence rate improves as $t$ tends to zero.

Author Contributions

All authors contributed equally to this work, as well as to its preparation. They have read and approved the final manuscript.

Funding

This research was funded by the National Research Foundation of Korea, grant number NRF-2015R1D1A1A01060880. The third author gratefully acknowledges this support.

Acknowledgments

Many thanks to Professor Nguyen Huy Tuan for discussions and for helping to develop this paper. The authors also thank the anonymous reviewers for their valuable and constructive comments, which improved our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The exact data $g(x,y) = h(x,y) = F(x,y,t)$ and their observed values at time $t = 1/4$.
Figure 2. 3D graphs of the exact solution at time $t = 1/4$ and of the regularized solution for the regularization parameters $N = M = 3$ and $n = m = 1000$.
Table 1. A small change in the data results in a large change in the solution.

            n = m = 5    n = m = 10       n = m = 20        n = m = 50
  D(1/4)    0.5685       0.1799           0.0501            0.0085
  S(1/4)    0.5800       1.7333 × 10^3    1.9078 × 10^10    1.7587 × 10^35
Table 2. Numerical errors.

  Err_{n,m}^{N,M}(t)    t = 1/4         t = 1/2         t = 3/4         t = 1
  n = m = 500           7.92 × 10^-4    1.51 × 10^-3    3.60 × 10^-3    1.27 × 10^-2
  n = m = 1000          4.91 × 10^-4    7.75 × 10^-4    1.50 × 10^-3    3.31 × 10^-3
