Article

Fractal Perturbation of the Nadaraya–Watson Estimator

1 Department of Data Science and Analytics, School of Intelligent Science and Technology, I-Shou University, Dashu District, Kaohsiung City 84001, Taiwan
2 Department of Applied Science, School of Academic Studies, R.O.C. Naval Academy, Zuoying District, Kaohsiung City 813000, Taiwan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Fractal Fract. 2022, 6(11), 680; https://doi.org/10.3390/fractalfract6110680
Submission received: 8 October 2022 / Revised: 6 November 2022 / Accepted: 15 November 2022 / Published: 17 November 2022

Abstract

One of the main tasks in machine learning and curve fitting is to develop suitable models for given data sets. This requires generating a function that approximates the data arising from some unknown function. The class of kernel regression estimators is one of the main types of nonparametric curve estimation. On the other hand, fractal theory provides new techniques for modeling complicated irregular curves in many practical problems. In this paper, we investigate fractal curve-fitting problems with the help of kernel regression estimators. For a given data set that arises from an unknown function $m$, one of the well-known kernel regression estimators, the Nadaraya–Watson estimator $\hat m$, is applied. We consider the case that $m$ is Hölder-continuous of exponent $\beta$ with $0<\beta\le 1$ and the graph of $m$ is irregular. An estimate for the expectation of $|\hat m-m|^2$ is established. Then a fractal perturbation $f[\hat m]$ corresponding to $\hat m$ is constructed to fit the given data. The expectations of $\|f[\hat m]-\hat m\|^2$ and $\|f[\hat m]-m\|^2$ are also estimated.

1. Introduction

One of the main tasks in machine learning, curve fitting, signal analysis, and many statistical applications is to develop suitable models for given data sets. In many real-world applications, one must generate a function that interpolates or approximates the data arising from some unknown function. In data-fitting problems, interpolation is usually applied when the data are noise-free, and regression is considered when the observations are noisy.
The theory of nonparametric modeling of a regression has been developed by many researchers. Several types of estimators and their statistical properties have been studied in the literature. The class of kernel estimators is one of the main types of nonparametric curve estimation, and the Nadaraya–Watson estimator, the Priestley–Chao estimator, and the Gasser–Müller estimator are widely used in applications; see [1,2,3,4,5,6] and the references given in these books. In [7,8], the authors investigated the differences between several types of kernel regression estimators; there is no definitive answer as to which of these estimators is best, since each has advantages and disadvantages.
Fractal theory provides another technology for making complicated curves and fitting experimental data. A fractal interpolation function (FIF) is a continuous function interpolating a given set of points, and the graph of a FIF is the attractor of an iterated function system. The concept of FIFs was introduced by Barnsley ([9,10]), and it has been developed to be the basis of an approximation theory for nondifferentiable functions. FIFs can also be applied to model discrete sequences ([11,12,13]). Various types of FIFs and their approximation properties were discussed in [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44], and the references given in the literature. See also the book [45] for recent developments. In [46,47,48,49,50], the construction of FIFs for random data sets is given, and some statistical properties of such FIFs were investigated. In [51], the authors made a topological–geometric contribution for the development and applications of fractal models, which present periodic changes.
For a given data set that arises from an unknown function $m$, the purpose of this paper is not to establish a fractal function that interpolates the points in the data set; rather, we aim to find a fractal function that approximates these data points well. In [52], the authors trained an SVM on chosen training data and then applied the SVM model to compute the interpolation points used to construct a linear FIF. In this paper, we consider the Nadaraya–Watson estimator $\hat m$ for sample data chosen from a given data set and establish an estimate for the expectation of $|\hat m-m|^2$. Then a FIF $f[\hat m]$ corresponding to $\hat m$ is constructed to fit the given data set, and the expectations of $\|f[\hat m]-\hat m\|^2$ and $\|f[\hat m]-m\|^2$ are also estimated.
Throughout this paper, let $D=\{(t_i,y_i)\in\mathbb{R}\times\mathbb{R}: i=0,1,\dots,N\}$ be a given data set, where $N$ is an integer greater than or equal to 2 and $t_0<t_1<\cdots<t_N$. We take $t_0=0$ and $t_N=1$ for convenience. Let $I=[0,1]$ and $I_i=[t_{i-1},t_i]$ for $i=1,\dots,N$. Let $C[I]$ denote the set of all real-valued continuous functions defined on $I$. The set of functions in $C[I]$ that interpolate all points in $D$ is denoted by $C_D[I]$. Define $\|f\|=\max_{t\in I}|f(t)|$ for $f\in C[I]$. It is known that $(C[I],\|\cdot\|)$ is a Banach space and that $C_D[I]$ is a complete metric space under the metric induced by $\|\cdot\|$.

2. Construction of Fractal Interpolation Functions

In this section, we establish a fractal perturbation of a given function in $C[I]$. The construction given here has been treated in the literature (see [47]); we include the details to keep the paper self-contained.
Let $u\in C[I]$ and $D=\{(t_i,y_i): y_i=u(t_i),\ i=0,1,\dots,N\}$, where $0=t_0<t_1<\cdots<t_N=1$. Assume that the data points in $D$ are non-collinear. For $i=1,\dots,N$, let $L_i: I\to I_i$ be a homeomorphism such that $L_i(0)=t_{i-1}$ and $L_i(1)=t_i$. Define $M_i: I\times\mathbb{R}\to\mathbb{R}$ by

$$M_i(t,y)=s_i\,y+u(L_i(t))-s_i\,p(t), \qquad (1)$$

where $-1<s_i<1$ and $p$ is a continuous function on $I$ such that $p(0)=u(0)$ and $p(1)=u(1)$. Then $M_i(0,u(0))=y_{i-1}$, $M_i(1,u(1))=y_i$, and

$$|M_i(t,y)-M_i(t,y^*)|=|s_i|\,|y-y^*| \quad\text{for all } t\in I \text{ and } y,y^*\in\mathbb{R}. \qquad (2)$$

Define $W_i: I\times\mathbb{R}\to I_i\times\mathbb{R}$ by $W_i(t,y)=(L_i(t),M_i(t,y))$ for $i=1,\dots,N$. For $h\in C_D[I]$, let $G_h=\{(t,h(t)): t\in I\}$. Then $W_i(G_h)=\{(L_i(t),M_i(t,h(t))): t\in I\}$. Since $L_i: I\to I_i$ is a homeomorphism, $W_i(G_h)$ can be written as

$$W_i(G_h)=\{(t,\,M_i(L_i^{-1}(t),h(L_i^{-1}(t)))): t\in I_i\}.$$

Hence $W_i(G_h)$ is the graph of the continuous function $h_i: I_i\to\mathbb{R}$ defined by $h_i(t)=M_i(L_i^{-1}(t),h(L_i^{-1}(t)))$. Define a mapping $T: C_D[I]\to C_D[I]$ by

$$T(h)(t)=h_i(t)=s_i\,h(L_i^{-1}(t))+u(t)-s_i\,p(L_i^{-1}(t)),\quad t\in I_i. \qquad (3)$$

By (3) we see that, for $g,h\in C_D[I]$ and $t\in I_i$,

$$|T(g)(t)-T(h)(t)|\le |s_i|\,|g(L_i^{-1}(t))-h(L_i^{-1}(t))|.$$

Then

$$\|T(g)-T(h)\|\le \Big(\max_{i=1,\dots,N}|s_i|\Big)\max_{z\in I}|g(z)-h(z)|\le s\,\|g-h\|.$$

Here $s=\max\{|s_1|,\dots,|s_N|\}$. Since $0\le s<1$, we have the following theorem ([47], Theorem 2.1).
Theorem 1. 
The operator $T$ given by (3) is a contraction mapping on $C_D[I]$.
Definition 1. 
The fixed point $f[u]$ of $T$ in $C_D[I]$ is called a fractal interpolation function (FIF) on $I$ corresponding to the continuous function $u$.
The FIF $f[u]$ given in Definition 1 satisfies the following equation for $i=1,\dots,N$:

$$f[u](t)=s_i\big(f[u](L_i^{-1}(t))-p(L_i^{-1}(t))\big)+u(t),\quad t\in I_i. \qquad (4)$$

If $s_i=0$ for all $i$, then $f[u]=u$. Therefore, $f[u]$ can be treated as a fractal perturbation of $u$.
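Since $T$ is a contraction with factor $s$, the FIF can be approximated by iterating $T$: after $k$ iterations the error is $O(s^k)$. The following is a minimal Python sketch, assuming (for illustration only) a uniform partition $t_i=i/N$ and linear maps $L_i(t)=(t+i-1)/N$; these choices are not required by the construction.

```python
import math

def make_T(u, p, s, N):
    """One application of the operator T from (3), assuming the uniform
    partition t_i = i/N and the linear maps L_i(t) = (t + i - 1)/N."""
    def T(h):
        def Th(t):
            i = min(int(t * N) + 1, N)   # index of the subinterval I_i containing t
            z = t * N - (i - 1)          # z = L_i^{-1}(t) in [0, 1]
            return s[i - 1] * (h(z) - p(z)) + u(t)
        return Th
    return T

# Illustrative data: u(0) = u(1) = 0, so the linear p through the endpoints is ~0.
u = lambda t: math.sin(2 * math.pi * t)
p = lambda t: u(0) + (u(1) - u(0)) * t
N, s = 4, [0.3, 0.3, 0.3, 0.3]
T = make_T(u, p, s, N)

f = u
for _ in range(30):                      # error after k steps is O(s^k)
    f = T(f)

# The approximate fixed point interpolates u at the knots t_i = i/N:
assert all(abs(f(i / N) - u(i / N)) < 1e-9 for i in range(N + 1))
```

Each evaluation of the iterated function unwinds the self-referential Equation (4) thirty levels deep, which is more than enough here since $0.3^{30}$ is negligible.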

3. The Nadaraya–Watson Estimator

Let $D=\{(t_i,y_i)\in\mathbb{R}\times\mathbb{R}: i=0,1,\dots,N\}$ be a given data set, where $0=t_0<t_1<\cdots<t_N=1$. Suppose that

$$Y_i=m(t_i)+\epsilon_i,\quad i=0,1,\dots,N, \qquad (5)$$

where $m:[0,1]\to\mathbb{R}$ is an unknown function and each $y_i$ is an observation of $Y_i$. Here, the $\epsilon_i$ are independent stochastic disturbance terms with zero expectation, $E[\epsilon_i]=0$, and finite variance, $\mathrm{Var}[\epsilon_i]\le\sigma^2<\infty$. In this section, we consider the Nadaraya–Watson estimator $\hat m$ for $D$ and establish an estimate for the expectation of $|\hat m-m|^2$.

Consider the case that $m$ is Hölder continuous of exponent $\beta$ with $0<\beta\le 1$ and the graph of $m$ is irregular. Then $m$ satisfies the following inequality for some $0<\beta\le 1$ and $\lambda>0$:

$$|m(t)-m(t')|\le\lambda\,|t-t'|^{\beta},\quad t,t'\in I. \qquad (6)$$

The Nadaraya–Watson estimator $\hat m$ of $m$ is defined by

$$\hat m(t)=\frac{\sum_{i=0}^{N}k_d(t-t_i)\,Y_i}{\sum_{j=0}^{N}k_d(t-t_j)},\quad\text{where } k_d(z)=\frac{1}{d}\,k\!\left(\frac{z}{d}\right). \qquad (7)$$

Here $d>0$ is a bandwidth and $k$ is an integrable function defined on $\mathbb{R}$.

The function $k$ is called a kernel and is usually assumed to be bounded and to satisfy certain integrability conditions. Some widely used kernels are given in ([2], p. 41) and ([5], p. 3), and estimates obtained with different kernels are usually numerically similar (see [6]). In this paper, we assume that there are positive numbers $C_1$, $C_2$, $\eta$, and $R$ such that the kernel $k$ satisfies the condition

$$C_1\,\chi_{[-\eta,\eta]}(z)\le k(z)\le C_2\,\chi_{[-R,R]}(z),\quad z\in\mathbb{R}. \qquad (8)$$

Condition (8) and its multidimensional form were considered in ([5], Theorem 1.7) and ([1], Theorem 5.1).
A new estimate for the bias of $\hat m$ was obtained in [53]. Here, we give an estimate for $E[(\hat m(t)-m(t))^2]$ in Theorem 2 below. Similar results were studied in [1,2,5] and elsewhere in the literature; the convergence rate of the upper bound obtained in Theorem 2 matches the known results.
The Nadaraya–Watson estimator $\hat m$ given in (7) can be written in the form

$$\hat m(t)=\sum_{i=0}^{N}W_i(t)\,Y_i,\quad\text{where } W_i(t)=\frac{k_d(t-t_i)}{\sum_{j=0}^{N}k_d(t-t_j)}. \qquad (9)$$

Then $\sum_{i=0}^{N}W_i(t)=1$ for all $t$, and

$$E[\hat m(t)]=\sum_{i=0}^{N}W_i(t)\,E[Y_i]=\sum_{i=0}^{N}W_i(t)\,m(t_i). \qquad (10)$$

In the following lemma, we give a lower bound for $\sum_{j=0}^{N}k_d(t-t_j)$. Define

$$a_N=\min_{1\le k\le N}(t_k-t_{k-1}),\qquad A_N=\max_{1\le k\le N}(t_k-t_{k-1}). \qquad (11)$$
Lemma 1. 
Let $0=t_0<t_1<\cdots<t_N=1$. Suppose that $k:\mathbb{R}\to\mathbb{R}$ and that there are positive numbers $C_1$ and $\eta$ such that $C_1\,\chi_{[-\eta,\eta]}(z)\le k(z)$ for $z\in\mathbb{R}$. Let $d>0$ and let $A_N$ and $k_d$ be defined in (11) and (7), respectively. Assume that $A_N<2d\eta$ and $A_N\le\alpha/N$ for some $\alpha>0$. Then for $0\le t\le 1$,

$$\sum_{j=0}^{N}k_d(t-t_j)\ge\frac{C_1\eta N}{\alpha}. \qquad (12)$$
Proof. 
For $0\le t\le 1$, the condition $C_1\,\chi_{[-\eta,\eta]}(z)\le k(z)$ implies that

$$\sum_{j=0}^{N}k_d(t-t_j)=\frac{1}{d}\sum_{j=0}^{N}k\!\left(\frac{t-t_j}{d}\right)\ge\frac{C_1}{d}\sum_{j=0}^{N}\chi_{[-\eta,\eta]}\!\left(\frac{t-t_j}{d}\right)=\frac{C_1}{d}\,|E_\eta(t)|,$$

where $E_\eta(t)=\{t_j: j=0,1,\dots,N \text{ and } |\frac{t-t_j}{d}|\le\eta\}$ and $|E_\eta(t)|$ is the number of elements of $E_\eta(t)$. Since $|\frac{t-t_j}{d}|\le\eta$ if and only if $t_j\in[t-d\eta,\,t+d\eta]\cap[0,1]$, we have $E_\eta(t)=\{t_j: j=0,1,\dots,N \text{ and } t_j\in[t-d\eta,\,t+d\eta]\cap[0,1]\}$.

For $t\in[d\eta,\,1-d\eta]$, we have $[t-d\eta,\,t+d\eta]\subseteq[0,1]$, and by the condition $A_N<2d\eta$ we see that $|E_\eta(t)|\ge[\frac{2d\eta}{A_N}]-1$, which implies $|E_\eta(t)|\ge\frac{d\eta}{A_N}$. For $t\in[0,\,d\eta)$, we have $[t-d\eta,\,t+d\eta]\cap[0,1]=[0,\,t+d\eta]$ and $t_0=0\in E_\eta(t)$. Hence $|E_\eta(t)|\ge[\frac{t+d\eta}{A_N}]+1\ge 1$ and $|E_\eta(t)|\ge\frac{d\eta}{A_N}$. For $t\in(1-d\eta,\,1]$, we have $[t-d\eta,\,t+d\eta]\cap[0,1]=[t-d\eta,\,1]$ and $t_N=1\in E_\eta(t)$. Hence $|E_\eta(t)|\ge[\frac{1-t+d\eta}{A_N}]+1\ge 1$ and $|E_\eta(t)|\ge\frac{d\eta}{A_N}$. Then the condition $A_N\le\alpha/N$ implies (12). □
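The counting bound (12) is easy to verify numerically. The Python sketch below uses illustrative values: uniform knots $t_j=j/N$, so that $A_N=1/N$ and we may take $\alpha=1$, together with the Epanechnikov kernel, which dominates $0.5\,\chi_{[-1/3,1/3]}$.

```python
N, d, C1, eta, alpha = 100, 0.05, 0.5, 1 / 3, 1.0
t_knots = [j / N for j in range(N + 1)]          # uniform knots: A_N = 1/N, alpha = 1

epa = lambda z: 0.75 * (1 - z * z) if abs(z) <= 1 else 0.0   # >= 0.5 on [-1/3, 1/3]
assert 1 / N < 2 * d * eta                       # hypothesis A_N < 2*d*eta of Lemma 1

def kd_sum(t):
    """The denominator sum_j k_d(t - t_j), with k_d(z) = k(z/d)/d."""
    return sum(epa((t - tj) / d) / d for tj in t_knots)

# Conclusion (12): the sum is at least C1 * eta * N / alpha everywhere on [0, 1],
# including near the endpoints, where the window [t - d*eta, t + d*eta] is clipped.
assert all(kd_sum(t) >= C1 * eta * N / alpha for t in [0.0, 0.013, 0.5, 0.987, 1.0])
```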
Theorem 2. 
Let $D$ be a given data set and assume that $m$ satisfies (6). Suppose that $k$ satisfies (8) and $\hat m$ is defined by (7). Assume that $A_N<2d\eta$ and $A_N\le\alpha/N$ for some $\alpha>0$. Then we have

$$E[(\hat m(t)-m(t))^2]\le\lambda^2R^{2\beta}d^{2\beta}+\frac{\alpha C_2\sigma^2}{C_1\eta}\cdot\frac{1}{Nd}. \qquad (13)$$
Proof. 
We see that

$$E[(\hat m(t)-m(t))^2]=\{E[\hat m(t)]-m(t)\}^2+E[\hat m(t)^2]-(E[\hat m(t)])^2. \qquad (14)$$

By (6), (9), and (10), we have

$$|E[\hat m(t)]-m(t)|=\Big|\sum_{i=0}^{N}W_i(t)\,(m(t_i)-m(t))\Big|\le\lambda\sum_{i=0}^{N}W_i(t)\,|t_i-t|^{\beta}.$$

Condition (8) implies that $k(\frac{t-t_i}{d})=0$ if $|\frac{t-t_i}{d}|>R$. Therefore,

$$|E[\hat m(t)]-m(t)|\le\lambda d^{\beta}\,\frac{\sum_{i=0}^{N}k(\frac{t-t_i}{d})\,|\frac{t-t_i}{d}|^{\beta}}{\sum_{j=0}^{N}k(\frac{t-t_j}{d})}\le\lambda R^{\beta}d^{\beta}. \qquad (15)$$

On the other hand, by (8) and (12), we also have

$$\sup_{i,t}W_i(t)=\sup_{i,t}\frac{k(\frac{t-t_i}{d})}{\sum_{j=0}^{N}k(\frac{t-t_j}{d})}\le\frac{\alpha C_2}{C_1\eta Nd}. \qquad (16)$$

By (9), (10), and (5), we have

$$E[\hat m(t)^2]-(E[\hat m(t)])^2=E[(\hat m(t)-E[\hat m(t)])^2]=E\Big[\Big(\sum_{i=0}^{N}W_i(t)\,\epsilon_i\Big)^2\Big].$$

Since the $\epsilon_i$ are independent and satisfy $E[\epsilon_i]=0$ and $\mathrm{Var}[\epsilon_i]\le\sigma^2<\infty$, the condition $\sum_{i=0}^{N}W_i(t)=1$ and estimate (16) imply that

$$E[\hat m(t)^2]-(E[\hat m(t)])^2=\sum_{i=0}^{N}W_i(t)^2\,E[\epsilon_i^2]\le\sigma^2\Big(\sup_{i,t}W_i(t)\Big)\sum_{i=0}^{N}W_i(t)\le\frac{\alpha C_2\sigma^2}{C_1\eta}\cdot\frac{1}{Nd}.$$

Then by (14) and (15), we have (13). □
For a given kernel $k$ satisfying (8), estimate (13) shows that $C_1$ and $\eta$ should be chosen so that $C_1\eta$ is as large as possible. The minimizer $d^*$ of the right-hand side of (13) with respect to $d$ can be obtained by setting $E(d)=\lambda^2R^{2\beta}d^{2\beta}+\frac{\alpha C_2\sigma^2}{C_1\eta N}\,d^{-1}$ and solving the equation

$$E'(d)=2\beta\lambda^2R^{2\beta}d^{2\beta-1}-\frac{\alpha C_2\sigma^2}{C_1\eta N}\,d^{-2}=0.$$

We obtain

$$d^*=\Big(\frac{\alpha C_2\sigma^2}{2\beta C_1\eta\lambda^2R^{2\beta}}\Big)^{\frac{1}{2\beta+1}}N^{-\frac{1}{2\beta+1}}, \qquad (17)$$

and the upper bound in (13) reduces to $C^*N^{-2\beta/(2\beta+1)}$, where $C^*$ depends on $\alpha$, $\beta$, $\lambda$, $\sigma^2$, $\eta$, $R$, $C_1$, and $C_2$.
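This bandwidth selection can be sketched numerically. In the Python snippet below, all parameter values are illustrative; the check confirms that the bound (13) is larger at bandwidths slightly away from $d^*$.

```python
def d_star(alpha, beta, lam, sigma2, C1, C2, eta, R, N):
    """The minimizer (17) of the bound (13): it balances the bias term
    lam^2 * R^(2 beta) * d^(2 beta) against the variance term."""
    c = alpha * C2 * sigma2 / (2 * beta * C1 * eta * lam ** 2 * R ** (2 * beta))
    return c ** (1 / (2 * beta + 1)) * N ** (-1 / (2 * beta + 1))

def bound(d, alpha, beta, lam, sigma2, C1, C2, eta, R, N):
    """Right-hand side of (13)."""
    return (lam ** 2 * R ** (2 * beta) * d ** (2 * beta)
            + alpha * C2 * sigma2 / (C1 * eta) / (N * d))

args = dict(alpha=1.0, beta=0.5, lam=2.0, sigma2=1.0, C1=0.5, C2=1.0,
            eta=1 / 3, R=1.0, N=200)
d0 = d_star(**args)
# d0 minimizes the bound: nearby bandwidths give a larger value.
assert bound(d0, **args) <= bound(0.9 * d0, **args)
assert bound(d0, **args) <= bound(1.1 * d0, **args)
```

Note that for $\beta=0.5$ the optimal bandwidth scales as $N^{-1/2}$, so quadrupling $N$ halves $d^*$.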

4. Fractal Perturbation of the Nadaraya–Watson Estimator

In this section, we consider the FIF $f[\hat m]$ corresponding to the function $\hat m$, and we establish estimates for the expectations of $\|f[\hat m]-\hat m\|^2$ and $\|f[\hat m]-m\|^2$. Suppose that $k$ is continuous and replace each $Y_i$ in (7) by $y_i$; then $\hat m\in C[I]$. By the construction given in Section 2 with $u=\hat m$, we have a FIF $f[\hat m]$ on $I$ that satisfies, for $i=1,\dots,N$,

$$f[\hat m](t)=s_i\big(f[\hat m](L_i^{-1}(t))-p(L_i^{-1}(t))\big)+\hat m(t),\quad t\in I_i. \qquad (18)$$

Here, $p$ is chosen to be the linear polynomial such that $p(0)=\hat m(0)$ and $p(1)=\hat m(1)$. We then replace $y_i$ by $Y_i$ for each $i$ and consider $f[\hat m](t)$ a random variable for every $t\in I$. We are interested in estimates for $E[\|f[\hat m]-m\|^2]$.
Theorem 3. 
Suppose that $k$ is continuous and satisfies (8) with $R=1$ and $C_2=1$. Suppose that $m$ satisfies (6) and $\hat m$ is defined by (7). Let $M=\max\{|m(t_i)|: i=0,1,\dots,N\}$. Assume that $A_N<2d\eta$, $A_N\le\alpha/N$, and $a_N\ge\tau/N$ for some $\alpha>0$ and $\tau>0$, where $A_N$ and $a_N$ are defined in (11). Suppose that $0<s=\max\{|s_1|,\dots,|s_N|\}<2^{-1/2}$ and $E[\|f[\hat m]-\hat m\|^2]<\infty$. Then we have

$$E[\|f[\hat m]-\hat m\|^2]\le\frac{72\,s^2\alpha^2(M^2+\sigma^2)}{(1-2s^2)\,C_1^2\eta^2\tau^2}\cdot\frac{(Nd+\tau)^2}{(Nd)^2}, \qquad (19)$$

$$E[\|f[\hat m]-m\|^2]\le\frac{144\,s^2\alpha^2(M^2+\sigma^2)}{(1-2s^2)\,C_1^2\eta^2\tau^2}\cdot\frac{(Nd+\tau)^2}{(Nd)^2}+2\lambda^2d^{2\beta}+\frac{2\alpha\sigma^2}{C_1\eta}\cdot\frac{1}{Nd}. \qquad (20)$$
Proof. 
For $t\in I_i$, (18) implies

$$|f[\hat m](t)-\hat m(t)|^2\le 2s_i^2\big\{|f[\hat m](L_i^{-1}(t))-\hat m(L_i^{-1}(t))|^2+|\hat m(L_i^{-1}(t))-p(L_i^{-1}(t))|^2\big\},$$

and we have

$$\sup_{t\in I_i}E[|f[\hat m](t)-\hat m(t)|^2]\le 2s_i^2\Big\{\sup_{z\in I}E[|f[\hat m](z)-\hat m(z)|^2]+\sup_{z\in I}E[|\hat m(z)-p(z)|^2]\Big\}.$$

Then

$$E[\|f[\hat m]-\hat m\|^2]\le 2s^2\big\{E[\|f[\hat m]-\hat m\|^2]+E[\|\hat m-p\|^2]\big\}$$

and therefore

$$E[\|f[\hat m]-\hat m\|^2]\le\frac{2s^2}{1-2s^2}\,E[\|\hat m-p\|^2]. \qquad (21)$$

Since $p$ is the linear polynomial with $p(0)=\hat m(0)$ and $p(1)=\hat m(1)$, we have

$$p(t)=\hat m(0)+(\hat m(1)-\hat m(0))\,t,\quad t\in I, \qquad (22)$$

and then

$$|\hat m(t)-p(t)|=|(\hat m(t)-\hat m(0))(1-t)+(\hat m(t)-\hat m(1))\,t|.$$

The convexity of the square function $x\mapsto x^2$ implies that

$$|\hat m(t)-p(t)|^2\le(1-t)\,|\hat m(t)-\hat m(0)|^2+t\,|\hat m(t)-\hat m(1)|^2,$$

and therefore

$$E[|\hat m(t)-p(t)|^2]\le(1-t)\,E[|\hat m(t)-\hat m(0)|^2]+t\,E[|\hat m(t)-\hat m(1)|^2],\quad t\in I. \qquad (23)$$

By (9), $\hat m(t)-\hat m(1)=\sum_{r=0}^{N}(W_r(t)-W_r(1))\,Y_r$. By (8) with $R=1$, we see that if $t_r<1-d$, then $\frac{1-t_r}{d}>1$ and $k(\frac{1-t_r}{d})=0$; this implies $W_r(1)=0$. For $t\in I$, if $t_r\notin[t-d,\,t+d]$, then $|\frac{t-t_r}{d}|>1$ and $k(\frac{t-t_r}{d})=0$; this implies $W_r(t)=0$. Then

$$\hat m(t)-\hat m(1)=\sum_{r\in B_t}(W_r(t)-W_r(1))\,Y_r, \qquad (24)$$

where $B_t=\{r: t_r\in[t-d,\,t+d] \text{ or } t_r\in[1-d,\,1]\}$. Let $\xi=[\frac{d}{a_N}]$. Then the number of elements in $B_t$ is less than $3(\xi+1)$.

By (12) and (8) with $C_2=1$, we have

$$|W_r(t)-W_r(1)|\le\frac{k_d(t-t_r)}{\sum_{j=0}^{N}k_d(t-t_j)}+\frac{k_d(1-t_r)}{\sum_{j=0}^{N}k_d(1-t_j)}\le\frac{2\alpha}{C_1\eta Nd}. \qquad (25)$$

By (5) we also have $E[Y_r^2]=m(t_r)^2+\sigma^2$ for $r=0,1,\dots,N$. Condition (6) shows that $m$ is continuous and therefore bounded on $I$. Then for $t\in I$,

$$E[|\hat m(t)-\hat m(1)|^2]\le\sum_{r\in B_t}(W_r(t)-W_r(1))^2\sum_{r\in B_t}E[Y_r^2]\le\Big(\frac{2\alpha}{C_1\eta Nd}\Big)^2(M^2+\sigma^2)(3\xi+3)^2.$$

We have the same estimate for $E[|\hat m(t)-\hat m(0)|^2]$.

By the condition $a_N\ge\tau/N$, we have $\xi\le\frac{d}{a_N}\le\frac{Nd}{\tau}$, and then (23) can be reduced to

$$E[|\hat m(t)-p(t)|^2]\le\frac{36\,\alpha^2(M^2+\sigma^2)}{C_1^2\eta^2\tau^2}\cdot\frac{(Nd+\tau)^2}{(Nd)^2},\quad t\in I. \qquad (26)$$

Thus, (19) can be obtained from (21) and (26). Moreover, we obtain (20) from (13), (19), and the inequality

$$E[\|f[\hat m]-m\|^2]\le 2E[\|f[\hat m]-\hat m\|^2]+2E[\|\hat m-m\|^2]. \qquad \square$$

For a given kernel $k$ satisfying condition (8), estimate (20) shows that $C_1$ and $\eta$ should be chosen so that $C_1\eta$ is as large as possible. If we choose $d=d^*$, where $d^*$ is given by (17) with $C_2=1$ and $R=1$, then (20) reduces to

$$E[\|f[\hat m]-m\|^2]\le A\Big(1+D\,N^{-\frac{2\beta}{2\beta+1}}\Big)^2+C^*N^{-\frac{2\beta}{2\beta+1}}, \qquad (27)$$

where $A=\frac{144\,s^2\alpha^2(M^2+\sigma^2)}{(1-2s^2)\,C_1^2\eta^2\tau^2}$, $D$ depends on $\lambda$, $\alpha$, $\beta$, $C_1$, $\eta$, $\tau$, $\sigma^2$, and $C^*$ depends on $\lambda$, $\alpha$, $\beta$, $C_1$, $\eta$, $\sigma^2$. Moreover, the constant $M$ can be estimated by $\tilde M=\max\{|y_0|,|y_1|,\dots,|y_N|\}$.

The right-hand side of (27) tends to $A$ as $N\to\infty$. In fact, if $d$ is chosen so that $d\to 0$ and $Nd\to\infty$ as $N\to\infty$, then the right-hand side of (20) tends to $A$ as $N\to\infty$. Moreover, $A\to 0$ as $s\to 0$.
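This limiting behavior can be checked numerically. The Python sketch below uses purely illustrative parameter values (with $s=0.3<2^{-1/2}$ as Theorem 3 requires) and takes $d=N^{-1/(2\beta+1)}$, so that $d\to 0$ and $Nd\to\infty$.

```python
# Illustrative parameters; s = 0.3 < 2**-0.5 as required by Theorem 3.
s, alpha, beta, lam, sigma2, M = 0.3, 1.0, 0.5, 1.0, 1.0, 1.0
C1, eta, tau = 0.5, 1 / 3, 1.0

# The limiting constant A of (27).
A = 144 * s**2 * alpha**2 * (M**2 + sigma2) / ((1 - 2 * s**2) * C1**2 * eta**2 * tau**2)

def rhs20(N):
    """Right-hand side of (20) with d = N^(-1/(2*beta+1)), R = 1, C2 = 1."""
    d = N ** (-1 / (2 * beta + 1))
    return (A * (N * d + tau) ** 2 / (N * d) ** 2
            + 2 * lam**2 * d ** (2 * beta)
            + 2 * alpha * sigma2 / (C1 * eta) / (N * d))

# The bound decreases monotonically to A as N grows.
assert rhs20(10**4) > rhs20(10**6) > rhs20(10**8) > A
assert abs(rhs20(10**8) - A) / A < 1e-3
```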
Example 1. 
The data set used in this example consists of the Crude Oil WTI Futures daily highest prices from 2021/7/19 to 2022/8/17. The data are publicly available at https://www.investing.com/commodities/crude-oil-historical-data. There are 287 raw data points, and we chose 11 of them as our sample subset $S$. These data points are shown in Figure 1. We set $S=\{(t_i,w_i): i=0,1,\dots,10\}$, where $t_i=i/10$ and the $w_i$ are the Crude Oil WTI Futures daily highest prices on 2021/7/19, 8/26, 10/6, 11/16, 12/28 and 2022/2/2, 3/9, 4/19, 5/30, 7/5, 8/17.
Let $\hat m$ be defined by (7) with each $Y_i$ replaced by $w_i$,

$$\hat m(t)=\frac{\sum_{i=0}^{10}k\big(\frac{t-0.1i}{d}\big)\,w_i}{\sum_{j=0}^{10}k\big(\frac{t-0.1j}{d}\big)},$$

and choose $k$ to be the Epanechnikov kernel $k(t)=0.75(1-t^2)\,\chi_{\{|t|\le 1\}}$. Let $N=10$ and choose $R=1$, $C_2=1$, $\eta=1/3$, and $C_1=0.5$ in (8). We estimate $M$ by $\max\{w_0,w_1,\dots,w_{10}\}$ and set $\alpha=1$ and $\tau=1$ in Theorem 3. Assume that $\beta=0.5$ in this example. The values of $\sigma^2$ and $\lambda$ are estimated by the sample variance and by $\max\{|w_i-w_j|/|t_i-t_j|: i,j=0,1,\dots,10,\ i\ne j\}$, respectively. By (17), we set $d=0.092$.

We construct a FIF $f[\hat m]$ by the method given in Section 2 with linear maps $L_i$ and the linear polynomial $p$ such that $L_i(0)=\frac{i-1}{10}$, $L_i(1)=\frac{i}{10}$, $p(0)=\hat m(0)$, and $p(1)=\hat m(1)$. The chosen values $s_1,\dots,s_{10}$ are given in Table 1.
The graphs of the raw data and $\hat m$ are shown in Figure 2, those of $\hat m$ and $f[\hat m]$ in Figure 3, and those of the raw data and $f[\hat m]$ in Figure 4.
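The construction of Example 1 can be sketched end to end in Python. The price values below are hypothetical placeholders (the actual sampled WTI prices are not reproduced here); the kernel, bandwidth $d=0.092$, partition $t_i=i/10$, and the scaling factors of Table 1 follow the example.

```python
# HYPOTHETICAL stand-ins for the 11 sampled prices w_0, ..., w_10.
w = [72.0, 69.5, 79.0, 81.0, 77.0, 89.0, 126.0, 109.0, 119.0, 111.0, 88.0]
s = [0.02, -0.03, 0.08, -0.16, 0.05, -0.26, -0.36, -0.06, -0.14, 0.06]  # Table 1
N, d = 10, 0.092
epa = lambda z: 0.75 * (1 - z * z) if abs(z) <= 1 else 0.0

def mhat(t):
    """Nadaraya-Watson estimate (7) built from the 11 sample points."""
    ws = [epa((t - 0.1 * i) / d) for i in range(N + 1)]
    return sum(wi * yi for wi, yi in zip(ws, w)) / sum(ws)

p = lambda t: mhat(0) + (mhat(1) - mhat(0)) * t   # linear p, p(0)=mhat(0), p(1)=mhat(1)

def fif(t, depth=25):
    """f[mhat](t) via the self-referential Equation (18), truncating the
    recursion at a fixed depth (the neglected tail is O(max|s_i|^depth))."""
    if depth == 0:
        return mhat(t)
    i = min(int(t * N) + 1, N)        # subinterval I_i containing t
    z = t * N - (i - 1)               # z = L_i^{-1}(t)
    return s[i - 1] * (fif(z, depth - 1) - p(z)) + mhat(t)

# f[mhat] agrees with mhat at the knots and deviates fractally in between.
assert abs(fif(0.5) - mhat(0.5)) < 1e-9
```

Evaluating `fif` on a fine grid reproduces the qualitative behavior of Figures 3 and 4: a curve following $\hat m$ globally with irregular fractal detail governed by the $s_i$.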

5. Conclusions

The purpose of this paper is to construct a fractal interpolation function (FIF) that approximates a given data set well. We consider the Nadaraya–Watson estimator $\hat m$ for sample data chosen from a given data set and then apply $\hat m$ to construct a FIF $f[\hat m]$ that fits the given set of data points. The Nadaraya–Watson estimator is widely used in data-fitting problems, and its fractal perturbation is considered in our paper. The expectations of the mean squared errors of such approximations are also estimated. The figures given in Example 1 illustrate the quality of curve fitting by a FIF constructed from $\hat m$ with 11 sample points to fit the 287 raw data points. The approximation error can be decreased by choosing more sample data.
In this paper, we construct a FIF to fit a given data set with the help of the Nadaraya–Watson estimator. In fact, the Priestley–Chao estimator, the Gasser–Müller estimator, and other types of kernel regression estimators can also be used in our approach. Nonparametric regression has been studied for a long time, and several types of models, together with their theoretical results and applications, have been developed by many researchers. Fractal perturbations of these models are worth investigating in the field of fractal curve fitting.

Author Contributions

Conceptualization, D.-C.L.; methodology, D.-C.L.; software, C.-W.L.; validation, D.-C.L. and C.-W.L.; formal analysis, D.-C.L.; investigation, C.-W.L.; resources, C.-W.L.; data curation, C.-W.L.; writing—original draft preparation, C.-W.L.; writing—review and editing, D.-C.L.; project administration, D.-C.L.; funding acquisition, D.-C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ministry of Science and Technology, R.O.C. grant number MOST 110-2115-M-214-002.

Data Availability Statement

The data set used in this paper can be obtained from the webpage https://www.investing.com/commodities/crude-oil-historical-data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Györfi, L.; Kohler, M.; Krzyzak, A.; Walk, H. A Distribution-Free Theory of Nonparametric Regression; Springer: New York, NY, USA, 2002.
2. Härdle, W.; Müller, M.; Sperlich, S.; Werwatz, A. Nonparametric and Semiparametric Models; Springer: New York, NY, USA, 2004.
3. Hart, J.D. Nonparametric Smoothing and Lack-of-Fit Tests; Springer: New York, NY, USA, 1997.
4. Li, Q.; Racine, J.S. Nonparametric Econometrics; Princeton University Press: Princeton, NJ, USA, 2007.
5. Tsybakov, A.B. Introduction to Nonparametric Estimation; Springer: New York, NY, USA, 2009.
6. Wasserman, L. All of Nonparametric Statistics; Springer: New York, NY, USA, 2006.
7. Chu, C.-K.; Marron, J.S. Choosing a kernel regression estimator. Stat. Sci. 1991, 6, 404–436.
8. Jones, M.C.; Davies, S.J.; Park, B.U. Versions of kernel-type regression estimators. J. Amer. Statist. Assoc. 1994, 89, 825–832.
9. Barnsley, M.F. Fractal functions and interpolation. Constr. Approx. 1986, 2, 303–329.
10. Barnsley, M.F. Fractals Everywhere; Academic Press: Orlando, FL, USA, 1988.
11. Marvasti, M.A.; Strahle, W.C. Fractal geometry analysis of turbulent data. Signal Process. 1995, 41, 191–201.
12. Mazel, D.S. Representation of discrete sequences with three-dimensional iterated function systems. IEEE Trans. Signal Process. 1994, 42, 3269–3271.
13. Mazel, D.S.; Hayes, M.H. Using iterated function systems to model discrete sequences. IEEE Trans. Signal Process. 1992, 40, 1724–1734.
14. Balasubramani, N. Shape preserving rational cubic fractal interpolation function. J. Comput. Appl. Math. 2017, 319, 277–295.
15. Balasubramani, N.; Guru Prem Prasad, M.; Natesan, S. Shape preserving α-fractal rational cubic splines. Calcolo 2020, 57, 21.
16. Barnsley, M.F.; Elton, J.; Hardin, D.; Massopust, P. Hidden variable fractal interpolation functions. SIAM J. Math. Anal. 1989, 20, 1218–1242.
17. Barnsley, M.F.; Massopust, P.R. Bilinear fractal interpolation and box dimension. J. Approx. Theory 2015, 192, 362–378.
18. Chand, A.K.B.; Kapoor, G.P. Generalized cubic spline fractal interpolation functions. SIAM J. Numer. Anal. 2006, 44, 655–676.
19. Chand, A.K.B.; Navascués, M.A. Natural bicubic spline fractal interpolation. Nonlinear Anal. 2008, 69, 3679–3691.
20. Chand, A.K.B.; Navascués, M.A. Generalized Hermite fractal interpolation. Rev. Real Acad. Cienc. Zaragoza 2009, 64, 107–120.
21. Chand, A.K.B.; Tyada, K.R. Constrained shape preserving rational cubic fractal interpolation functions. Rocky Mt. J. Math. 2018, 48, 75–105.
22. Chand, A.K.B.; Vijender, N.; Viswanathan, P.; Tetenov, A.V. Affine zipper fractal interpolation functions. BIT Numer. Math. 2020, 60, 319–344.
23. Chand, A.K.B.; Viswanathan, P. A constructive approach to cubic Hermite fractal interpolation function and its constrained aspects. BIT Numer. Math. 2013, 53, 841–865.
24. Chandra, S.; Abbas, S.; Verma, S. Bernstein super fractal interpolation function for countable data systems. Numer. Algorithms 2022.
25. Dai, Z.; Wang, H.-Y. Construction of a class of weighted bivariate fractal interpolation functions. Fractals 2022, 30, 2250034.
26. Katiyar, S.K.; Chand, A.K.B. Shape preserving rational quartic fractal functions. Fractals 2019, 27, 1950141.
27. Katiyar, S.K.; Chand, A.K.B.; Kumar, G.S. A new class of rational cubic spline fractal interpolation function and its constrained aspects. Appl. Math. Comput. 2019, 346, 319–335.
28. Luor, D.-C. Fractal interpolation functions with partial self similarity. J. Math. Anal. Appl. 2018, 464, 911–923.
29. Massopust, P.R. Fractal Functions, Fractal Surfaces, and Wavelets; Academic Press: San Diego, CA, USA, 1994.
30. Massopust, P.R. Interpolation and Approximation with Splines and Fractals; Oxford University Press: New York, NY, USA, 2010.
31. Miculescu, R.; Mihail, A.; Pacurar, C.M. A fractal interpolation scheme for a possible sizeable set of data. J. Fractal Geom. 2022.
32. Navascués, M.A. Fractal approximation. Complex Anal. Oper. Theory 2010, 4, 953–974.
33. Navascués, M.A. Fractal bases of Lp spaces. Fractals 2012, 20, 141–148.
34. Navascués, M.A.; Chand, A.K.B. Fundamental sets of fractal functions. Acta Appl. Math. 2008, 100, 247–261.
35. Navascués, M.A.; Pacurar, C.; Drakopoulos, V. Scale-free fractal interpolation. Fractal Fract. 2022, 6, 602.
36. Prasad, S.A. Super coalescence hidden-variable fractal interpolation functions. Fractals 2021, 29, 2150051.
37. Ri, S.; Drakopoulos, V. Generalized fractal interpolation curved lines and surfaces. Nonlinear Stud. 2021, 28, 427–488.
38. Tyada, K.R.; Chand, A.K.B.; Sajid, M. Shape preserving rational cubic trigonometric fractal interpolation functions. Math. Comput. Simul. 2021, 190, 866–891.
39. Vijender, N. Fractal perturbation of shaped functions: Convergence independent of scaling. Mediterr. J. Math. 2018, 15, 211.
40. Viswanathan, P. A revisit to smoothness preserving fractal perturbation of a bivariate function: Self-referential counterpart to bicubic splines. Chaos Solitons Fractals 2022, 157, 111885.
41. Viswanathan, P.; Chand, A.K.B. Fractal rational functions and their approximation properties. J. Approx. Theory 2014, 185, 31–50.
42. Viswanathan, P.; Chand, A.K.B. α-fractal rational splines for constrained interpolation. Electron. Trans. Numer. Anal. 2014, 41, 420–442.
43. Viswanathan, P.; Navascués, M.A.; Chand, A.K.B. Associate fractal functions in Lp-spaces and in one-sided uniform approximation. J. Math. Anal. Appl. 2016, 433, 862–876.
44. Wang, H.-Y.; Yu, J.-S. Fractal interpolation functions with variable parameters and their analytical properties. J. Approx. Theory 2013, 175, 1–18.
45. Banerjee, S.; Gowrisankar, A. Frontiers of Fractal Analysis: Recent Advances and Challenges; CRC Press: Boca Raton, FL, USA, 2022.
46. Kumar, M.; Upadhye, N.S.; Chand, A.K.B. Linear fractal interpolation function for data set with random noise. Fractals 2022, accepted.
47. Luor, D.-C. Fractal interpolation functions for random data sets. Chaos Solitons Fractals 2018, 114, 256–263.
48. Luor, D.-C. Statistical properties of linear fractal interpolation functions for random data sets. Fractals 2018, 26, 1850009.
49. Luor, D.-C. Autocovariance and increments of deviation of fractal interpolation functions for random datasets. Fractals 2018, 26, 1850075.
50. Luor, D.-C. On the distributions of fractal functions that interpolate data points with Gaussian noise. Chaos Solitons Fractals 2020, 135, 109743.
51. Caldarola, F.; Maiolo, M. On the topological convergence of multi-rule sequences of sets and fractal patterns. Soft Comput. 2020, 24, 17737–17749.
52. Wang, H.-Y.; Li, H.; Shen, J.-Y. A novel hybrid fractal interpolation-SVM model for forecasting stock price indexes. Fractals 2019, 27, 1950055.
53. Tosatto, S.; Akrour, R.; Peters, J. An upper bound of the bias of Nadaraya–Watson kernel regression under Lipschitz assumptions. Stats 2021, 4, 1–17.
Figure 1. Raw data and sample data.
Figure 2. Raw data and $\hat m$.
Figure 3. $\hat m$ and $f[\hat m]$.
Figure 4. Raw data and $f[\hat m]$.
Table 1. The values of $s_k$.

k:    1     2      3     4      5     6      7      8      9      10
s_k:  0.02  -0.03  0.08  -0.16  0.05  -0.26  -0.36  -0.06  -0.14  0.06
Luor, D.-C.; Liu, C.-W. Fractal Perturbation of the Nadaraya–Watson Estimator. Fractal Fract. 2022, 6, 680. https://doi.org/10.3390/fractalfract6110680