Article

Overview of One-Dimensional Continuous Functions with Fractional Integral and Applications in Reinforcement Learning

1 College of Command Information System, Army Engineering University of PLA, Nanjing 210001, China
2 Troops of 78092, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(2), 69; https://doi.org/10.3390/fractalfract6020069
Submission received: 10 January 2022 / Revised: 20 January 2022 / Accepted: 22 January 2022 / Published: 27 January 2022
(This article belongs to the Topic Fractional Calculus: Theory and Applications)

Abstract: One-dimensional continuous functions are an important foundation for studying other, more complex functions. Many theories and methods used to study one-dimensional continuous functions can also be applied to investigate the properties of multi-dimensional functions. The properties of one-dimensional continuous functions, such as dimensionality, continuity, and boundedness, have been discussed from multiple perspectives. Therefore, the existing conclusions are systematically sorted out according to bounded variation, unbounded variation, and Hölder continuity. At the same time, unbounded variation points are used to analyze continuous functions and to construct unbounded variation functions in a novel way. Possible applications of fractals and the fractal dimension in reinforcement learning are predicted.

1. Introduction

It is a widely held view that dimensionality is an important indicator for describing functions, but different functions have many disparate internal structures and properties. The traditional topological dimension does not deal well with some characteristics of intricate functions. In recent years, a growing body of literature has recognised the importance of using the fractal dimension instead of the topological dimension to describe such functions. The fractal dimension is an extension of the topological dimension: it reflects how effectively a complex set occupies space and serves as a measure of the set's irregularity. It cross-combines with the chaos theory of dynamical systems, and the two complement each other. It admits that a part of the world may, under certain conditions or processes, show similarity with the whole in some aspect. The value of the fractal dimension can be not only an integer but also a fraction, so the fractal dimension can measure complex sets like the Cantor ternary set. From the point of view of measure theory, the fractal dimension is the jump point at which the measure of the set changes from infinity to zero. Fractal dimensions include the Hausdorff dimension, the Box dimension and the Packing dimension. Each dimension has its own definition and many calculation methods. The tool for studying the fractal dimension is no longer just classical calculus, and a full discussion of the properties of continuous functions lies beyond the scope of classical calculus. Fractional calculus (FC) has gradually become the main method [1,2,3]. Since classical calculus is a special case of fractional calculus [4], many problems that cannot be handled by classical calculus can be solved by fractional calculus, such as studying the properties of functions that are continuous but nowhere differentiable [5,6].
The most widely used forms of FC are the Riemann-Liouville fractional calculus and the Weyl-Marchaud fractional calculus.
Recent work has established that one-dimensional continuous functions have significant and useful properties [7]. For instance, the Box dimension of bounded variation functions, and of their Riemann-Liouville fractional integrals, is one, while the Box dimension of a continuous function is never less than one. The fractional integral does not increase the dimensionality of a function, and this special operator makes the fractal dimension satisfy a special linear relationship.
However, some issues are still worth considering and discussing. For example, is the Hausdorff dimension of a continuous function with bounded variation equal to one? What are the Hausdorff dimension and the Box dimension of functions satisfying the Hölder condition? Is there a one-dimensional unbounded variation function? Can functions of unbounded variation and bounded variation be converted into each other under special prerequisites? Are there other ways to explore unbounded variation functions more effectively? It is these original questions that promote the emergence of new concepts and many new analytical tools. A few years ago, scholars always defined the unbounded variation function through the definition of bounded variation, a definition that is not conducive to exploring its nature. Once unbounded variation functions were defined directly through unbounded variation points, a new perspective for studying them gradually emerged. At the same time, the relevant conclusions about unbounded variation points have been rigorously proved. For example, the Box dimension of an unbounded variation function with only one unbounded variation point is one; if such a function also has self-similarity, its Hausdorff dimension is one as well. A more interesting topic is to investigate the changes between some classic functions and the functions after fractional calculus. These changes usually involve the fractal dimension [8,9,10], continuity [11,12], boundedness [13,14] and the type of fractional calculus [15,16].
After concentrated theoretical discussions of some special functions [17,18], scholars often still lack any visual information about them [19,20]. The most obvious counter-evidence is the Weierstrass function: researchers not only know its functional properties but also know clearly what its image looks like. Nevertheless, scholars are far less familiar with the image of a one-dimensional continuous function with an unbounded variation point. Therefore, several attempts have been made to construct such special functions [21], for example one-dimensional continuous functions with finitely or infinitely many unbounded variation points, and unbounded variation functions that satisfy the Hölder condition. The construction of these special functions mainly uses compression, translation, and symmetric transformations. There are also some special unbounded variation functions obtained by special operations on the basis of the devil's staircase function [22].
So far, there exist many research angles on, and conclusions about, one-dimensional continuous functions and their fractional calculus [23]. In order to provide a comprehensive understanding, this paper systematically sorts out the current research results from the perspectives of bounded variation, unbounded variation, and the Hölder condition. A more detailed analysis of unbounded variation functions through the unbounded variation point is also elaborated. Combined with reinforcement learning, currently very popular in machine learning, some interesting practical applications are predicted, for example an evaluation model based on the fractal dimension and a random search method based on the fractal structure. The advantage of the evaluation model based on the fractal dimension is that, using only local information, it can evaluate the distance from any state to the equilibrium state; this distance can speed up the calculation process of algorithms. At the same time, evaluating the current state during the training process also allows algorithms to be optimized and improved reasonably. The fractal random search method makes full use of self-similarity to reduce the search time as much as possible while still ensuring that the entire space is probed. Finally, a framework for proving the convergence of reinforcement learning algorithms is introduced using fractal attractors.
The main innovations of this manuscript are as follows. First, the existing conclusions about one-dimensional continuous functions are summarized through three different classification methods, which is helpful for studying other complex functions. Second, the concept of the unbounded variation point is introduced to study unbounded variation functions directly; this concept effectively grasps the essence of such functions. At the same time, some special unbounded variation functions can be constructed from unbounded variation points, and the images of these complex functions can be obtained easily. Third, by combining reinforcement learning with fractal theory, some possible application directions are predicted and a unique fractal evaluation model is proposed. These results can provide new ideas for other researchers.
Section 2 recalls some basic concepts, such as the definitions of the fractal dimension, bounded variation functions, unbounded variation points, and fractional calculus. Section 3 discusses the bounded variation function and its fractional calculus. Section 4 focuses on the correlation between Hölder continuity and variation. Section 5 explores the unbounded variation function through the unbounded variation point and gives the construction process of one-dimensional continuous unbounded variation functions. Section 6 forecasts some applications of fractals and fractal functions in reinforcement learning and analyzes the advantages and disadvantages of these methods. The logical structure of this paper is shown in Figure 1.

2. Basic Concepts

Among fractal dimensions, the Box dimension is the most widely used. However, other dimensions are still mentioned in some engineering problems, such as the modified Box dimension and the Packing dimension. At the same time, the relationships between these dimensions are often analyzed and compared in theoretical research. Most of the definitions are based on measure theory, and there are also some interrelationships between the various dimensions. Typical definitions of the fractal dimension are as follows.
Definition 1.
([24,25]) Let $F$ be a non-empty bounded subset of $\mathbb{R}^n$ and let $N_\delta(F)$ be the smallest number of sets of diameter at most $\delta$ which can cover $F$. The lower and upper Box dimensions of $F$ are defined respectively as
$$\underline{\dim}_B(F) = \varliminf_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}, \tag{1}$$
and
$$\overline{\dim}_B(F) = \varlimsup_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}. \tag{2}$$
If (1) and (2) are equal, the common value is the Box dimension of $F$:
$$\dim_B(F) = \lim_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}.$$
Suppose $F$ can be decomposed into a countable number of pieces $F_1, F_2, \ldots$ in such a way that the dimension of the largest piece is as small as possible. This idea leads to the following modified Box-counting dimension:
$$\underline{\dim}_{MB}(F) = \inf\Big\{\sup_i \underline{\dim}_B F_i : F \subset \bigcup_{i=1}^{\infty} F_i\Big\}, \tag{3}$$
$$\overline{\dim}_{MB}(F) = \inf\Big\{\sup_i \overline{\dim}_B F_i : F \subset \bigcup_{i=1}^{\infty} F_i\Big\}. \tag{4}$$
If (3) and (4) are equal, the common value is the modified Box-counting dimension of $F$. Let
$$\mathcal{P}^s(F) = \inf\Big\{\sum_i \mathcal{P}_0^s(F_i) : F \subset \bigcup_{i=1}^{\infty} F_i\Big\}.$$
It may be shown that $\mathcal{P}^s(F)$ is the $s$-dimensional Packing measure. The Packing dimension [26] is then defined in the usual way:
$$\dim_P F = \sup\{s : \mathcal{P}^s(F) = \infty\} = \inf\{s : \mathcal{P}^s(F) = 0\}.$$
The above dimensions were put forward for specific problems. In the research process, the appropriate fractal dimension should be selected according to need: for example, the Hausdorff dimension measures more accurately, while the Box dimension is simpler to calculate by program.
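As a quick numerical companion to these definitions (a sketch, not part of the original paper; the test function, sample density, and mesh sizes are all illustrative choices), the Box dimension can be estimated by counting occupied $\delta$-mesh cells at several scales and fitting the slope of $\log N_\delta(F)$ against $-\log \delta$:

```python
import numpy as np

def box_counting_dimension(points, deltas):
    """Estimate the Box dimension of a planar set from point samples.

    points: (N, 2) array of samples of the set (e.g. a function graph).
    deltas: decreasing sequence of mesh sizes.
    Returns the fitted slope of log N_delta against -log delta.
    """
    counts = []
    for d in deltas:
        # Count the distinct delta-mesh squares hit by at least one sample.
        cells = {(int(x // d), int(y // d)) for x, y in points}
        counts.append(len(cells))
    slope, _ = np.polyfit(-np.log(deltas), np.log(counts), 1)
    return slope

# The graph of a smooth function should give a dimension close to 1.
xs = np.linspace(0.0, 1.0, 100_000)
graph = np.column_stack([xs, np.sin(2 * np.pi * xs)])
dim = box_counting_dimension(graph, deltas=[0.1, 0.05, 0.02, 0.01, 0.005])
print(dim)
```

The sample spacing must be much finer than the smallest mesh size, otherwise cells that the true graph crosses are missed and the estimate is biased downward.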
The Jordan decomposition theorem is widely applied in the proofs of various problems, and the core concept of the theorem is the function of bounded variation. The definition of the bounded variation function is given in Definition 2. The unbounded variation function can be defined as the complement of the bounded variation functions, but this paper will study unbounded variation functions through the unbounded variation point, which can be found in Definition 3.
Definition 2.
([27]) Let $f(x)$ be defined on $I = [0, 1]$. A set of points $P = \{x_0, x_1, \ldots, x_n\}$ satisfying the inequalities $0 = x_0 < x_1 < \cdots < x_{n-1} < x_n = 1$ is called a partition of $I$. Write $\Delta f_k = f(x_k) - f(x_{k-1})$ for $k = 1, 2, \ldots, n$. If there exists a positive number $M$ such that
$$\sum_{k=1}^{n} |\Delta f_k| \le M$$
for all partitions of $I$, then $f(x)$ is said to be of bounded variation on $I$.
Bounded variation functions have many important properties [28,29]. For example, a monotonic function is a bounded variation function; the sum, difference, and product of finitely many bounded variation functions are still of bounded variation; and an absolutely continuous function must be of bounded variation.
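These properties can be illustrated numerically (a sketch under assumed sample functions, not part of the paper): the variation sum of Definition 2 telescopes to $f(1) - f(0)$ for a monotonic function, while for the classic oscillating example $x \sin(1/x)$ it grows without bound as the partition is refined:

```python
import numpy as np

def total_variation(values):
    """Variation sum over consecutive partition points: sum |f(x_k) - f(x_{k-1})|."""
    return float(np.sum(np.abs(np.diff(values))))

# A monotonic function is of bounded variation: over any partition the
# variation sum telescopes to f(1) - f(0), which is 1 for f(x) = x**2.
xs = np.linspace(0.0, 1.0, 10_001)
tv_monotone = total_variation(xs**2)
print(tv_monotone)

# x*sin(1/x) oscillates ever faster near 0: refining the partition makes
# the variation sums grow without bound, so it is of unbounded variation
# (and (0, 0) is its unbounded variation point).
def osc(n):
    t = np.linspace(0.0, 1.0, n)
    v = np.zeros_like(t)
    v[1:] = t[1:] * np.sin(1.0 / t[1:])
    return total_variation(v)

v1, v2, v3 = osc(10_001), osc(100_001), osc(1_000_001)
print(v1, v2, v3)
```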
Definition 3.
(UV point) Let $f(x)$ be a continuous function on $I$.
(1) For $p \in (0, 1)$: if there exists a closed subinterval $Q = [q_1, q_2]$ $(0 \le q_1 < p < q_2 \le 1)$ of $I$ such that the variation of $f(x)$ on $Q$ is finite, then $(p, 0)$ is called a bounded variation point of $f(x)$; otherwise $(p, 0)$ is an unbounded variation point of $f(x)$.
(2) For $p = 0$ or $p = 1$: if there is a closed subinterval $Q = [0, q_1]$ $(0 < q_1 \le 1)$ or $Q = [q_1, 1]$ $(0 \le q_1 < 1)$ of $I$ such that the variation of $f(x)$ on $Q$ is finite, then $(p, 0)$ is a bounded variation point of $f(x)$; otherwise $(p, 0)$ is an unbounded variation point of $f(x)$.
Due to the complexity of their structure, functions of unbounded variation are often non-differentiable on the interval where they are defined. The concept of the UV point grasps the essence of unbounded variation functions and cleverly transforms their complex structure. Classical calculus can hardly analyse the properties of unbounded variation functions, but the properties of some special unbounded variation functions can be investigated by fractional calculus [30,31]. This article mainly utilizes the Riemann-Liouville fractional integral and the Weyl fractional integral [32] to study unbounded variation functions; their definitions are given in Definition 4.
Definition 4.
([33,34]) (1) Let $f(x) \in C_I$ and $\nu > 0$. Set $D^{-\nu} f(0) = 0$; for $x \in (0, 1]$,
$$D^{-\nu} f(x) = \frac{1}{\Gamma(\nu)} \int_0^x (x - t)^{\nu - 1} f(t)\, dt$$
is the Riemann-Liouville fractional integral of $f(x)$ of order $\nu$.
(2) Let $f(x)$ be a continuous function defined on $(-\infty, +\infty)$ and $0 < \nu < 1$. Then
$$W^{-\nu} f(x) = \frac{1}{\Gamma(\nu)} \int_x^{\infty} (t - x)^{\nu - 1} f(t)\, dt$$
is called the Weyl fractional integral of $f(x)$ of order $\nu$.
Throughout, $C_I$ and $BV_I$ denote the continuous functions and the bounded variation functions defined on $I$, respectively. Denote by $G(f, I)$ the image (graph) of $f(x)$ on $I$. Bounded variation functions and unbounded variation functions are abbreviated BVF and UVF, respectively. $C_0$ is the Cantor set.
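For intuition, the Riemann-Liouville integral of Definition 4 can be evaluated numerically; the sketch below (not from the paper; the substitution and step count are implementation choices) checks the quadrature against the known closed form $D^{-\nu}[t](x) = x^{1+\nu}/\Gamma(2+\nu)$:

```python
import math
import numpy as np

def riemann_liouville(f, x, nu, n=200_001):
    """Numerically evaluate D^{-nu} f(x) = (1/Gamma(nu)) * int_0^x (x-t)^(nu-1) f(t) dt.

    The substitution u = (x - t)**nu absorbs the endpoint singularity:
    the integral becomes (1/nu) * int_0^{x**nu} f(x - u**(1/nu)) du,
    which a plain trapezoidal rule handles well for smooth f.
    """
    u = np.linspace(0.0, x**nu, n)
    g = f(x - u ** (1.0 / nu))           # smooth integrand after substitution
    h = u[1] - u[0]
    integral = h * (g.sum() - 0.5 * (g[0] + g[-1]))  # uniform trapezoidal rule
    return integral / (nu * math.gamma(nu))

# Sanity check against the closed form D^{-nu}[t](x) = x**(1+nu) / Gamma(2+nu).
x, nu = 0.8, 0.5
numeric = riemann_liouville(lambda t: t, x, nu)
exact = x ** (1 + nu) / math.gamma(2 + nu)
print(abs(numeric - exact) < 1e-8)  # True
```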

3. Bounded Variation Functions and Their Fractional Integral

The structure of a bounded variation function is not complex, and simple calculations show that its Box dimension is one [35,36]. Furthermore, a bounded variation function remains of bounded variation after the Weyl fractional integral, so its Box dimension is still one. The relationship between them is shown in Figure 2.
The proofs of the related conclusions above will be given in detail. First of all, a frequently occurring lemma needs to be displayed.
Lemma 1.
Given a function $f(x)$ and an interval $[a, b]$, let $R_f[a, b]$ denote the maximum range of $f(x)$ over $[a, b]$, i.e.,
$$R_f[a, b] = \sup_{a \le x, y \le b} |f(x) - f(y)|.$$
Let $f(x) \in C_I \cap BV_I$. Suppose that $0 < \delta < 1$ and $m$ is the least integer greater than or equal to $\delta^{-1}$. If $N_\delta$ is the number of squares of the $\delta$-mesh that intersect $G(f, I)$, then
$$\delta^{-1} \sum_{i=0}^{m-1} R_f[i\delta, (i+1)\delta] \le N_\delta \le 2m + \delta^{-1} \sum_{i=0}^{m-1} R_f[i\delta, (i+1)\delta].$$
Proof of Lemma 1. 
The number of $\delta$-mesh squares in the column above the interval $[i\delta, (i+1)\delta]$ that intersect $G(f, I)$ lies in $\big[ R_f[i\delta, (i+1)\delta]/\delta,\ 2 + R_f[i\delta, (i+1)\delta]/\delta \big]$. Summing over all such columns proves the lemma. □
Theorem 1.
• (1) If $f(x)$ is a continuous function, then $\underline{\dim}_B G(f, I) \ge 1$ and $\overline{\dim}_B G(f, I) \le 2$.
• (2) If $f(x) \in C_I \cap BV_I$, then $\dim_B G(f, I) = 1$.
Proof of Theorem 1. 
By using Definition 1,
$$\underline{\dim}_B G(f, I) \ge \varliminf_{\delta \to 0} \frac{\log C\delta^{-1}}{-\log \delta} = 1, \qquad \overline{\dim}_B G(f, I) \le \varlimsup_{\delta \to 0} \frac{\log C\delta^{-2}}{-\log \delta} = 2.$$
Let $\{x_i\}_{i=1}^{n}$ be arbitrary points satisfying $0 = x_0 < x_1 < x_2 < \cdots < x_n = 1$; then
$$\sup_{(x_0, x_1, \ldots, x_n)} \sum_{k=1}^{n} |f(x_k) - f(x_{k-1})| < C.$$
Let $m$ be the least integer greater than or equal to $\frac{1}{\delta}$ and let $N_\delta$ be the number of squares of the $\delta$-mesh that intersect $G(f, I)$. Combining Lemma 1,
$$N_\delta \le 2m + \delta^{-1} \sum_{i=1}^{m} R_f[(i-1)\delta, i\delta].$$
For $1 \le i \le m - 1$, $x_{i,0} = i\delta$, $x_{i,3} = (i+1)\delta$ and $x_{i,1}, x_{i,2} \in (i\delta, (i+1)\delta)$,
$$R_f[i\delta, (i+1)\delta] \le \sup_{x_{i,0} < x_{i,1} < x_{i,2} < x_{i,3}} \sum_{k=1}^{3} |f(x_{i,k}) - f(x_{i,k-1})|.$$
Hence there exists a positive constant $C$ such that $N_\delta \le C\delta^{-1}$ and
$$\overline{\dim}_B G(f, I) \le 1.$$
Simultaneously, since the graph of a continuous function has topological dimension no less than 1,
$$\underline{\dim}_B G(f, I) \ge 1.$$
Thus, $\dim_B G(f, I) = 1$. □
If non-negative constants $C$ and $\alpha$ can be found satisfying the inequality
$$|f(x) - f(y)| \le C |x - y|^{\alpha},$$
then $f(x)$ is a Hölder continuous function [37]. When $\alpha = 1$, $f(x)$ is a Lipschitz continuous function. Throughout this paper, $f(x) \in Lip\,C$ means that $f(x)$ is a Lipschitz continuous function on $I$ with Lipschitz constant $C$.
Corollary 1.
If $f(x) \in Lip\,C$, then $\dim_B G(f, I) = 1$.
Proof of Corollary 1. 
For $f(x) \in Lip\,C$ and $x, y \in I$,
$$|f(x) - f(y)| \le C |x - y|.$$
Let $\{x_i\}_{i=1}^{n}$ be arbitrary points satisfying $0 = x_0 < x_1 < x_2 < \cdots < x_n = 1$. Since
$$\sup_{(x_0, x_1, \ldots, x_n)} \sum_{k=1}^{n} |f(x_k) - f(x_{k-1})| \le C \sum_{k=1}^{n} |x_k - x_{k-1}| \le C,$$
$f(x) \in BV_I$ and $\dim_B G(f, I) = 1$. □
Corollary 1 shows that a function satisfying the Lipschitz condition must be a BVF. However, a function satisfying the Hölder condition is not necessarily a BVF [38,39], and, conversely, a BVF need not satisfy the Hölder condition of any order. A counter-example for the converse direction is
$$f(x) = \begin{cases} -1/\ln x, & 0 < x \le 0.5, \\ 0, & x = 0. \end{cases}$$
Obviously, since this function is monotonically increasing on $[0, 0.5]$, it is a BVF. But for any $\alpha > 0$, this function does not satisfy the Hölder condition of order $\alpha$.
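The blow-up of the Hölder quotient at the origin can be observed numerically (an illustrative sketch, not part of the paper; the sample points are chosen to show the slow divergence):

```python
import math

def f(x):
    """The counter-example: f(x) = -1/ln(x) for 0 < x <= 0.5 and f(0) = 0."""
    return 0.0 if x == 0.0 else -1.0 / math.log(x)

# f is increasing on (0, 0.5], hence of bounded variation. But the Hölder
# quotient at the origin, |f(x) - f(0)| / x**alpha = -1 / (x**alpha * ln x),
# tends to infinity as x -> 0 for every alpha > 0. The divergence is slow:
# x**(-alpha) only eventually outgrows the logarithm, so very small x are
# needed to see it.
alpha = 0.1
ratios = [f(x) / x**alpha for x in (1e-2, 1e-20, 1e-100, 1e-300)]
print(ratios)
```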
Theorem 2.
If $f(x) \in C_I \cap BV_I$, then $\dim_B G(W^{-\nu} f, I) = 1$.
Proof of Theorem 2. 
Since $f(x) \in C_I$ and $f(x)$ is of bounded variation on $I$, by the Jordan decomposition theorem $f(x)$ can be written as the difference of two monotone increasing continuous functions $g_1(x)$ and $g_2(x)$: $f(x) = g_1(x) - g_2(x)$, where $g_1(x) = h_1(x) - c$, $g_2(x) = h_2(x) - c$, and $h_1(x) = h_2(x) = c$ on $[1, +\infty)$. Then $h_1(x)$ and $h_2(x)$ are also monotone increasing continuous functions.
(1) If $f(0) \ge 0$, let $g_1(0) \ge 0$ and $g_2(0) = 0$. By Definition 4,
$$G_1(x) = W^{-\nu} g_1(x) = \frac{1}{\Gamma(\nu)} \int_x^{\infty} \frac{h_1(t) - c}{(t - x)^{1 - \nu}}\, dt, \quad 0 < \nu < 1,$$
and $G_1(x)$ is still a continuous function on $I$ when $g_1(x)$ is continuous. Let $0 \le x_1 \le x_2 \le 1$ and $0 < \nu < 1$; then
$$\begin{aligned} G_1(x_2) - G_1(x_1) &= \frac{1}{\Gamma(\nu)} \int_{x_2}^{\infty} (t - x_2)^{\nu - 1} (h_1(t) - c)\, dt - \frac{1}{\Gamma(\nu)} \int_{x_1}^{\infty} (t - x_1)^{\nu - 1} (h_1(t) - c)\, dt \\ &= \frac{1}{\Gamma(\nu)} \int_{x_2}^{1} (t - x_2)^{\nu - 1} (h_1(t) - c)\, dt - \frac{1}{\Gamma(\nu)} \int_{x_1}^{1} (t - x_1)^{\nu - 1} (h_1(t) - c)\, dt \\ &= \frac{1}{\Gamma(\nu)} \Big( \int_{x_2}^{1} (t - x_2)^{\nu - 1} h_1(t)\, dt - \int_{x_1}^{1} (t - x_1)^{\nu - 1} h_1(t)\, dt \Big) + \frac{1}{\Gamma(\nu)} \Big( \int_{x_1}^{1} (t - x_1)^{\nu - 1} c\, dt - \int_{x_2}^{1} (t - x_2)^{\nu - 1} c\, dt \Big) \\ &= \frac{1}{\Gamma(\nu)} \int_{x_1}^{1 - x_2 + x_1} (t - x_1)^{\nu - 1} \big( h_1(t - x_1 + x_2) - h_1(t) \big)\, dt + \frac{1}{\Gamma(\nu)} \int_{1 + x_1 - x_2}^{1} (t - x_1)^{\nu - 1} \big( c - h_1(t) \big)\, dt \ge 0. \end{aligned}$$
Thus, $G_1(x)$ is still a monotone increasing continuous function on $I$. Similarly, if
$$G_2(x) = W^{-\nu} g_2(x) = \frac{1}{\Gamma(\nu)} \int_x^{\infty} \frac{h_2(t) - c}{(t - x)^{1 - \nu}}\, dt, \quad 0 < \nu < 1,$$
then $G_2(x)$ is also a monotone increasing continuous function on $I$.
(2) If $f(0) < 0$, let $g_1(0) = 0$ and $g_2(0) > 0$. In a similar way, both $W^{-\nu} g_1(x)$ and $W^{-\nu} g_2(x)$ are monotone increasing continuous functions on $I$. So $W^{-\nu} f(x)$ is still a BVF on $I$ and
$$\dim_B G(W^{-\nu} f, I) = 1. \qquad \square$$

4. Unbounded Variation Functions (UVFs)

4.1. A Special UVF

The construction process of the devil's staircase function $d(x)$ will be elaborated first. Then a peculiar continuous function $D(x)$ of unbounded variation on $I$ will be constructed on the basis of $d(x)$.
If $x \in (\frac{1}{3}, \frac{2}{3})$, $d_1(x) = \frac{1}{2}$. Let $d_1(0) = 0$ and $d_1(1) = 1$. $d_1(x)$ can be exhibited on $I$ by connecting $d_1(0)$, $d_1(\frac{1}{3})$, $d_1(\frac{2}{3})$ and $d_1(1)$ with line segments.
If $x \in (\frac{1}{9}, \frac{2}{9})$, $d_2(x) = \frac{1}{4}$. If $x \in (\frac{7}{9}, \frac{8}{9})$, $d_2(x) = \frac{3}{4}$. Connecting $d_1(0)$, $d_2(\frac{1}{9})$, $d_2(\frac{2}{9})$, $d_1(\frac{1}{3})$, $d_1(\frac{2}{3})$, $d_2(\frac{7}{9})$, $d_2(\frac{8}{9})$ and $d_1(1)$ with line segments forms $d_2(x)$ on $I$.
By induction, $d_n(x)$ $(n \ge 3)$ can be constructed. Let $d(x) = \lim_{n \to \infty} d_n(x)$.
The construction of $D_1(x)$ is based on $d_1(x)$ with two more line segments, each of length 1, added. These segments and the part of $d_1(x)$ with $x \in (\frac{1}{3}, \frac{2}{3})$ make up an isosceles triangle; in $D_1(x)$ the triangle is shown without its base line.
The construction of $D_2(x)$ is based on $d_2(x)$ and $D_1(x)$. Simultaneously, for $x \in (0, \frac{1}{3})$ and $x \in (\frac{2}{3}, 1)$, $D_2(x)$ is constructed in the same way as in the step from $d_1(x)$ to $D_1(x)$; however, the length of the added line segments is $\frac{1}{2 \cdot 2^{1}}$.
The construction of $D_3(x)$ is based on $d_3(x)$ and $D_2(x)$. Simultaneously, for $x \in (0, \frac{1}{9})$, $x \in (\frac{2}{9}, \frac{1}{3})$, $x \in (\frac{2}{3}, \frac{7}{9})$, or $x \in (\frac{8}{9}, 1)$, similar steps as from $d_1(x)$ to $D_1(x)$ are used; the only difference is that the length of the added line segments is $\frac{1}{3 \cdot 2^{2}}$.
By induction, the construction of $D_n(x)$ is based on $d_n(x)$ and $D_{n-1}(x)$, with added segments of length $\frac{1}{n \cdot 2^{n-1}}$. Then $D(x) = \lim_{n \to \infty} D_n(x)$. Images of $d(x)$ and $D(x)$ are given in Figure 3.
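The staircase $d(x)$ that underlies this construction can be evaluated directly from the ternary expansion of $x$; a sketch (not from the paper; the digit-depth cutoff is an implementation choice):

```python
def devil_stair(x, depth=40):
    """Approximate the devil's staircase d(x) on [0, 1].

    Read ternary digits of x; once a digit 1 appears, x lies in a removed
    middle third, so d(x) is the plateau value reached there. Digits 0
    and 2 map to binary digits 0 and 1 of d(x).
    """
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = min(int(x), 2)    # clamp guards the endpoint x = 1
        x -= digit
        if digit == 1:
            return value + scale  # plateau reached
        value += scale * (digit // 2)
        scale *= 0.5
    return value

print(devil_stair(0.5))   # on the plateau (1/3, 2/3), so 0.5
print(devil_stair(0.25))  # 1/4 = 0.0202..._3, so d(1/4) = 0.0101..._2 = 1/3
```

Floating-point ternary digits degrade after roughly 30 multiplications, so `depth=40` is only meaningful for inputs whose expansion cycles exactly, as in the examples above.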
Based on the construction process of $D(x)$, the properties of the function will now be investigated.
Property 1.
The length of $G(D, I)$ is infinite on $I$. The Lebesgue measure of the set of differentiable points on $I$ is one.
Proof of Property 1.
The length of $G(D, I)$ is no less than
$$1 \cdot 2 \cdot 1 + 2 \cdot 2 \cdot \frac{1}{2 \cdot 2} + 4 \cdot 2 \cdot \frac{1}{3 \cdot 2^2} + \cdots + 2^{n-1} \cdot 2 \cdot \frac{1}{n \cdot 2^{n-1}} + \cdots = 2 \sum_{n=1}^{\infty} \frac{1}{n} = \infty.$$
Thus, the length of $G(D, I)$ is infinite on $I$. Let $A$ be the set of differentiable points of $D(x)$ on $I$. Then
$$m(A) = \frac{1}{3} + 2 \cdot \frac{1}{9} + 4 \cdot \frac{1}{27} + \cdots + 2^{n-1} \cdot \frac{1}{3^n} + \cdots = 1.$$
Denote by $B$ the set of non-differentiable points of $D(x)$ on $I$; then
$$m(B) = 1 - 1 = 0. \qquad \square$$
Property 2.
The Box dimension of $D(x)$ is one and $D(x)$ has uncountably many unbounded variation points on $I$.
Proof of Property 2.
Since $D(x)$ is a continuous function, $\underline{\dim}_B G(D, I) \ge 1$. Let $0 < \delta < 1$ and $\frac{1}{\delta} \le n \le 1 + \frac{1}{\delta}$. The number of squares of the $\delta$-mesh that intersect $G(D, I)$ is less than
$$2n + \frac{1}{\delta} \sum_{i=1}^{n} \frac{1}{i} + \frac{2}{\delta}.$$
Thus,
$$\overline{\dim}_B G(D, I) \le \varlimsup_{\delta \to 0} \frac{\log \big[\, 2n + \frac{1}{\delta} \sum_{i=1}^{n} \frac{1}{i} + \frac{2}{\delta} \,\big]}{-\log \delta} \le \varlimsup_{\delta \to 0} \frac{\log \big[\, 2n + 2\delta^{-1} (\log(n+1) + 1) \,\big]}{-\log \delta} \le 1.$$
Further analysis shows that $\dim_B G(D, I) = 1$.
If $x \in C_0$, a positive number $N_0$ can be found such that the variation of $D(x)$ on any subinterval $I_x \subset I$ containing $x$ is at least
$$\frac{1}{2^{N_0}} \cdot \frac{1}{N_0} + 2 \cdot \frac{1}{2^{N_0 + 1}} \cdot \frac{1}{N_0 + 1} + 2^2 \cdot \frac{1}{2^{N_0 + 2}} \cdot \frac{1}{N_0 + 2} + \cdots = \frac{1}{2^{N_0}} \sum_{n=1}^{\infty} \frac{1}{N_0 + n - 1} = \frac{1}{2^{N_0}} \Big( \sum_{n=1}^{\infty} \frac{1}{n} - \sum_{n=1}^{N_0 - 1} \frac{1}{n} \Big) = \infty.$$
Thus, $(x, 0)$ is an unbounded variation point of $D(x)$ on $I$. By the arbitrariness of $x$, the number of unbounded variation points of $D(x)$ on $I$ is uncountable. □
Now the construction of $H(x)$, which contains uncountably many UV points, will be displayed. Divide $I$ into three equal intervals,
$$I_{1,1} = [0, \tfrac{1}{3}], \quad I_{1,2} = [\tfrac{1}{3}, \tfrac{2}{3}], \quad I_{1,3} = [\tfrac{2}{3}, 1].$$
Two line segments of length 1 are added so as to constitute an isosceles triangle with $I_{1,2}$; then $I_{1,2}$ is removed. $I_{1,1}$ and $I_{1,3}$ are each divided into three equal intervals,
$$I_{1,1} = I_{2,1} \cup I_{2,2} \cup I_{2,3}, \quad I_{1,3} = I_{2,4} \cup I_{2,5} \cup I_{2,6}.$$
Four line segments of length $\frac{1}{4}$ are added so as to constitute isosceles triangles with $I_{2,2}$ and $I_{2,5}$; then $I_{2,2}$ and $I_{2,5}$ are deleted. $H_3$ and $H_4$ are obtained in a similar way, and in general $H_n$ is obtained from $H_{n-1}$: divide
$$I_{n-1,1}, I_{n-1,3}, I_{n-1,4}, I_{n-1,6}, \ldots, I_{n-1,\, 3 \cdot 2^{n-2} - 1}, I_{n-1,\, 3 \cdot 2^{n-2}}$$
into three equal intervals each,
$$I_{n-1,1} = I_{n,1} \cup I_{n,2} \cup I_{n,3}, \quad \ldots, \quad I_{n-1,\, 3 \cdot 2^{n-2}} = I_{n,\, 3 \cdot 2^{n-1} - 2} \cup I_{n,\, 3 \cdot 2^{n-1} - 1} \cup I_{n,\, 3 \cdot 2^{n-1}}.$$
Then $2^n$ line segments of length $\frac{1}{n \cdot 2^{n-1}}$ are added so as to constitute isosceles triangles with
$$I_{n,2}, I_{n,5}, \ldots, I_{n,\, 3 \cdot 2^{n-1} - 1},$$
and then $I_{n,2}$, $I_{n,5}$, ⋯, $I_{n,\, 3 \cdot 2^{n-1} - 1}$ are deleted. The image of $H(x)$ is shown in Figure 4.
Obviously, $H(x)$ is a continuous function. Firstly, the length of $H(x)$ on $I$ is at least $\sum_{n=1}^{\infty} \frac{1}{n} = \infty$, so the variation of $H(x)$ on $I$ is infinite. Secondly, the number of $\delta$-mesh squares that intersect $G(H, I)$ is at most $\delta^{-1} \sum_{n=1}^{m} \frac{1}{n} + 2\delta^{-1}$, where $m$ is the least integer no less than $\delta^{-1}$, and
$$\dim_B G(H, I) = \lim_{\delta \to 0} \frac{\log \big( \delta^{-1} \sum_{n=1}^{m} \frac{1}{n} + 2\delta^{-1} \big)}{-\log \delta} = 1.$$
Finally, for $x_0 \in C_0$ and any $[a, b]$ containing $x_0$, the variation of $H(x)$ on $[a, b]$ is at least $\frac{1}{2^{N_0 - 1}} \sum_{n=N_0}^{\infty} \frac{1}{n} = \infty$, where $N_0$ is a positive integer. So $H(x)$ contains uncountably many UV points.
A function satisfying the Lipschitz condition must be a BVF, but a function satisfying the Hölder condition is not necessarily a BVF [40,41]. The following two special functions are the best evidence for this conclusion.

4.2. UVF Satisfying the Hölder Condition of Order $\alpha$ $(0 < \alpha < 1)$

Let $A = a_1 + a_2 + \cdots + a_n + \cdots$ be a convergent series of positive terms which decrease monotonically, and let $s$ be its sum. The construction process of the function $f_\alpha(x)$ on $[0, s]$ is as follows:
$$f_\alpha(x) = 0, \ x \in \{0, a_1, a_1 + a_2, a_1 + a_2 + a_3, \ldots\}; \quad f_\alpha(x) = \frac{1}{n}, \ x = a_1 + a_2 + \cdots + a_{n-1} + \frac{a_n}{2} \ (n = 1, 2, \ldots); \quad f_\alpha(s) = 0.$$
$f_\alpha(x)$ is linear on the intervals $[a_1 + \cdots + a_{n-1},\ a_1 + \cdots + a_{n-1} + \frac{a_n}{2}]$ and $[a_1 + \cdots + a_{n-1} + \frac{a_n}{2},\ a_1 + \cdots + a_n]$, $n = 1, 2, \ldots$. The specific image of $f_\alpha(x)$ is shown in Figure 5.
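The construction can be carried out numerically; the sketch below (not from the paper) uses the concrete choice $a_n = n^{-1/\alpha}$ suggested in the proof of Theorem 4, and exhibits both the diverging variation sums and the uniformly bounded Hölder quotients of the linear pieces:

```python
import numpy as np

alpha = 0.5  # target Hölder order; a_n = n**(-1/alpha) makes sum(a_n) converge

def f_alpha_knots(alpha, n_teeth):
    """Knot points of the piecewise-linear f_alpha with a_n = n**(-1/alpha).

    Tooth n rises from 0 to 1/n over [s_{n-1}, s_{n-1} + a_n/2] and falls
    back to 0 over [s_{n-1} + a_n/2, s_n], where s_n = a_1 + ... + a_n.
    """
    xs, ys, s = [0.0], [0.0], 0.0
    for n in range(1, n_teeth + 1):
        a = n ** (-1.0 / alpha)
        xs += [s + a / 2.0, s + a]
        ys += [1.0 / n, 0.0]
        s += a
    return np.array(xs), np.array(ys)

xs, ys = f_alpha_knots(alpha, n_teeth=2000)

# Variation over the first n teeth is 2*(1 + 1/2 + ... + 1/n) ~ 2*ln(n):
# it grows without bound, so the limit function is of unbounded variation.
variation = np.sum(np.abs(np.diff(ys)))
print(variation)

# Yet every linear piece obeys |dy| <= C * |dx|**alpha with a uniform C:
# for alpha = 0.5 the chord quotient is sqrt(2) on every tooth.
holder_ratios = np.abs(np.diff(ys)) / np.abs(np.diff(xs)) ** alpha
print(holder_ratios.max())
```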
Theorem 3.
$f_\alpha(x)$ is a continuous function on $[0, s]$ $(0 < s \le 1)$ and the total variation of $f_\alpha(x)$ on the interval $[0, s]$ is infinite.
Proof of Theorem 3.
From the specific construction process, $f_\alpha(x)$ is obviously a continuous function on $[0, s]$. The proof that its total variation is infinite is given next. Consider the following partition: $0 < \frac{a_1}{2} < a_1 < a_1 + \frac{a_2}{2} < a_1 + a_2 < a_1 + a_2 + \frac{a_3}{2} < a_1 + a_2 + a_3 < \cdots < a_1 + a_2 + \cdots + a_k$. Then
$$V_0^s(f_\alpha) \ge \Big| f_\alpha\big(\tfrac{a_1}{2}\big) - f_\alpha(0) \Big| + \Big| f_\alpha(a_1) - f_\alpha\big(\tfrac{a_1}{2}\big) \Big| + \Big| f_\alpha\big(a_1 + \tfrac{a_2}{2}\big) - f_\alpha(a_1) \Big| + \cdots + \Big| f_\alpha(a_1 + \cdots + a_k) - f_\alpha\big(a_1 + \cdots + a_{k-1} + \tfrac{a_k}{2}\big) \Big| = 1 + 1 + \frac{1}{2} + \frac{1}{2} + \cdots + \frac{1}{k} + \frac{1}{k}.$$
The conclusion is $\lim_{k \to +\infty} V_0^s(f_\alpha) = +\infty$. Thus, $f_\alpha(x)$ is a UVF on $[0, s]$. □
Theorem 4.
$f_\alpha(x)$ satisfies the Hölder condition of a given order $\alpha$ $(0 < \alpha < 1)$.
Proof of Theorem 4.
Case one: two points $P_1(x_1, y_1)$ and $P_2(x_2, y_2)$ on the interval are selected arbitrarily, with both points in the same linear interval, $a_1 + \cdots + a_{n-1} \le x_1 < x_2 \le a_1 + \cdots + a_{n-1} + \frac{a_n}{2}$. The specific image of Case one is shown in Figure 6. Then
$$|y_2 - y_1| = \frac{2}{n a_n} |x_2 - x_1| = \frac{2}{n a_n} |x_2 - x_1|^{1 - \alpha} |x_2 - x_1|^{\alpha} < \frac{2}{n a_n} a_n^{1 - \alpha} |x_2 - x_1|^{\alpha} = \frac{2}{n a_n^{\alpha}} |x_2 - x_1|^{\alpha}.$$
Therefore, it suffices to select an appropriate sequence $a_n$ that keeps $\frac{2}{n a_n^{\alpha}}$ bounded. A sequence satisfying this requirement is easily found, for example $a_n = n^{-1/\alpha}$.
Case two: if the two points $P_1(x_1, y_1)$ and $P_2(x_2, y_2)$ are not in the same linear interval, move $P_1$ to $P_3(x_3, y_3)$ through a translation transformation. The specific image of Case two is shown in Figure 7.
Combined with the proof of Case one, $|y_2 - y_1| = |y_2 - y_3| \le C |x_2 - x_3|^{\alpha}$. □
Since $f_\alpha(x)$ is a continuous function, the lower Box dimension of $G(f_\alpha, [0, s])$ is at least 1. The number of $\delta$-mesh squares that intersect $G(f_\alpha, [0, s])$ is at most $\delta^{-1} \sum_{n=1}^{m} \frac{1}{n} + 2\delta^{-1}$, where $m$ is the least integer no less than $\delta^{-1}$, so
$$\dim_B G(f_\alpha, [0, s]) = \lim_{\delta \to 0} \frac{\log \big( \delta^{-1} \sum_{n=1}^{m} \frac{1}{n} + 2\delta^{-1} \big)}{-\log \delta} = 1.$$

4.3. UVF Not Satisfying the Hölder Condition of Any Order $\alpha$ $(\alpha > 0)$

A UVF $g(x)$ that does not satisfy the Hölder condition of any order $\alpha$ $(\alpha > 0)$ will be constructed on the basis of $f_\alpha(x)$. Although $f_\alpha(x)$ satisfies the Hölder condition of order $\alpha$ $(0 < \alpha < 1)$ on $[0, s]$, for $\alpha^* > \alpha$, taking $x = a_1 + a_2 + \cdots + a_{n-1} + \frac{a_n}{2}$ and $y = a_1 + a_2 + \cdots + a_{n-1} + a_n$,
$$\lim_{n \to +\infty} \frac{f_\alpha(x) - f_\alpha(y)}{|y - x|^{\alpha^*}} = \lim_{n \to +\infty} \frac{\frac{1}{n}}{\big(\frac{a_n}{2}\big)^{\alpha^*}} = \lim_{n \to +\infty} \frac{\frac{1}{n}}{\big(\frac{1}{2} n^{-1/\alpha}\big)^{\alpha^*}} = \lim_{n \to +\infty} 2^{\alpha^*} n^{\frac{\alpha^*}{\alpha} - 1} = +\infty.$$
Thus, $f_\alpha(x)$ does not satisfy the Hölder condition of any order $\alpha^*$ $(\alpha^* > \alpha)$ on $[0, s]$.
Denote $\sigma_n = \sum_{k=1}^{\infty} \frac{1}{k^n}$ and divide the interval $I$ as follows:
$$0 = \beta_2 < \beta_3 < \beta_4 < \cdots < \beta_n < \cdots \quad (\beta_n \to 1, \ n \to +\infty).$$
(1) If $n$ is an even number, $g(x)$ on $[\beta_n, \beta_{n+1}]$ is obtained by compressing $f_{1/n}(x)$ by $n$ times along the ordinate, compressing it by $\frac{\sigma_n}{\beta_{n+1} - \beta_n}$ times along the abscissa, and moving it right by $\beta_n$ along the abscissa:
$$g(x) = \frac{1}{n} f_{1/n}\Big[ \frac{\sigma_n (x - \beta_n)}{\beta_{n+1} - \beta_n} \Big].$$
(2) If $n$ is an odd number,
$$g(x) = \frac{1}{n} f_{1/n}\Big[ \frac{\sigma_n (\beta_{n+1} - x)}{\beta_{n+1} - \beta_n} \Big].$$
In addition to the above construction process, the supplementary definition $g(1) = 0$ is reasonable. The specific image of $g(x)$ is shown in Figure 8.
From the construction process, $g(x)$ is defined everywhere on the interval $I$ and $g(x)$ is a continuous function. A similar calculation shows that the total variation of this function is also infinite, so $g(x)$ is also a UVF.
However, on the interval $[\beta_n, \beta_{n+1}]$, $g(x)$ satisfies the Hölder condition of order $\frac{1}{n}$ and does not satisfy the Hölder condition of order $\frac{1}{n-1}$. Therefore, the function $g(x)$ does not satisfy the Hölder condition of any order $\alpha$ $(\alpha > 0)$. Since $g(x)$ is a continuous function, the Box dimension of $g(x)$ is no less than one.

4.4. UVFs Containing Finitely Many UV Points

The introduction of the unbounded variation points gives a new way to study unbounded variation functions [42]. Many conclusions about unbounded variation functions can be obtained by analyzing the number and location of unbounded variation points. At the same time, if the function has self-similarity, some remarkable conclusions can be strictly demonstrated, such as Corollary 2 and Theorem 8.
Lemma 2.
([24]) If $F \subset \mathbb{R}^n$, then $\dim_P F = \overline{\dim}_{MB} F$.
Researchers have established the following relations for $F \subset \mathbb{R}^n$:
$$\dim_H F \le \underline{\dim}_{MB} F \le \overline{\dim}_{MB} F = \dim_P F \le \overline{\dim}_B F.$$
Theorem 5.
If $f(x)$ is a continuous function on $I$ and $(1, 0)$ is the only UV point of $f(x)$, then
$$\dim_H G(f, I) = \dim_P G(f, I) = \dim_{MB} G(f, I) = 1.$$
Proof of Theorem 5.
Since $f(x)$ is a continuous function on $I$,
$$1 \le \dim_H G(f, I) \le \underline{\dim}_B G(f, I).$$
For any $\delta > 0$, write $I = \big( \bigcup_{i=1}^{\infty} E_i \big) \cup [1 - \delta, 1]$, where the $E_i$ are subsets of $I$. Then
$$\dim_H G(f, [1 - \delta, 1]) \le \overline{\dim}_B G(f, [1 - \delta, 1]) \le \varlimsup_{\delta \to 0} \frac{\log M \delta^{-1}}{-\log \delta} = 1,$$
where $M$ is a positive constant, and
$$\overline{\dim}_{MB} G(f, I) = \inf \Big\{ \sup_{\delta} \overline{\dim}_B G\big(f, (\bigcup_{i=1}^{\infty} E_i) \cup [1 - \delta, 1]\big) \Big\} = 1.$$
Thus,
$$1 \le \dim_H G(f, I) \le \overline{\dim}_{MB} G(f, I) = 1.$$
It is already becoming apparent that
$$\dim_H G(f, I) = \dim_P G(f, I) = \dim_{MB} G(f, I) = 1. \qquad \square$$
Theorem 6.
If $f(x)$ is a continuous function containing at most finitely many UV points on $I$, then
$$\dim_H G(f, I) = \dim_P G(f, I) = \dim_{MB} G(f, I) = 1.$$
Proof of Theorem 6.
Let $x_1 < x_2 < \cdots < x_n$ be the UV points of $f(x)$. Then $n$ disjoint intervals $[a_i, x_i] \subset I$ can be found, where $i = 1, 2, \ldots, n$. Denote $A = \bigcup_{i=1}^{n} [a_i, x_i]$. By Lemma 2,
$$\dim_H G(f,[a_i,x_i]) = \dim_P G(f,[a_i,x_i]) = \dim_{MB} G(f,[a_i,x_i]) = 1.$$
Since the Hausdorff dimension has the property of countable stability,
$$\dim_H G(f,I) = \dim_H G(f, A \cup (I \setminus A)) = \max\{\dim_H G(f,A), \dim_H G(f, I \setminus A)\} = 1.$$
Given $\varepsilon = \min_{1 \le i < j \le n} |x_i - x_j|$, let $C_i = [a_i - \frac{\varepsilon}{2}, a_i + \frac{\varepsilon}{2}]$ and $C_{n+1} = I \setminus (\bigcup_{i=1}^{n} C_i)$. Then
$$\overline{\dim}_B G(f, C_i) = 1,$$
where $i = 1, 2, \ldots, n+1$. Combining this with the definition of the modified Box-counting dimension,
$$\overline{\dim}_{MB} G(f,I) = \inf\left\{\sup_i \overline{\dim}_B G(f, C_i) : I \subset \bigcup_{i=1}^{n+1} C_i\right\} = 1.$$
It is easy to check that
$$\dim_H G(f,I) = \dim_P G(f,I) = \dim_{MB} G(f,I) = 1.$$
Corollary 2.
If a continuous function $f(x)$ is self-similar on $I$ and $(1,0)$ is the only UV point of $f(x)$, then
$$\dim_H G(f,I) = \dim_B G(f,I) = 1.$$
Proof of Corollary 2.
Since $f(x)$ is self-similar on $I$, $G(f,I)$ is compact and
$$\overline{\dim}_B (G(f,I) \cap V) = \overline{\dim}_B G(f,I)$$
for all open sets $V$ that intersect $G(f,I)$; hence $\overline{\dim}_B G(f,I) = \overline{\dim}_{MB} G(f,I)$. Thus,
$$\dim_H G(f,I) = \dim_P G(f,I) = \dim_{MB} G(f,I) = \dim_B G(f,I) = 1.$$

4.5. UVF Contained Infinite UV Points

Theorem 7.
Let $f(x)$ be a continuous function on $I$ with countably infinitely many UV points and only one accumulation point. Then
$$\dim_H G(f,I) = 1.$$
Proof of Theorem 7.
Since $f(x)$ is a continuous function on $I$,
$$1 \le \dim_H G(f,I) \le \underline{\dim}_B G(f,I).$$
(1) $(0,0)$ is the accumulation point: denote the countable UV points as
$$x_1 > x_2 > x_3 > \cdots > x_n > \cdots.$$
For any $\delta > 0$, $\dim_H G(f,[0,\delta]) = 1$, and there is no accumulation point elsewhere. Thus there exist sets $E_i \subset I$ such that each $E_i$ contains exactly one UV point $x_i$ and $E_i \cap E_j = \emptyset$ when $i \ne j$. Since $f(x)$ has only one UV point on $E_i$,
$$\dim_H G(f,E_i) = 1.$$
Denote $E = \bigcup_{i=1}^{\infty} E_i$. By the countable stability of the Hausdorff dimension,
$$\dim_H G(f,I) = \dim_H (G(f,E) \cup G(f,[0,\delta])) = \sup\{\dim_H G(f,E), \dim_H G(f,[0,\delta])\} = 1.$$
Thus,
$$\dim_H G(f,I) = 1.$$
(2) $(1,0)$ is the accumulation point: denote the countable UV points as
$$x_1 < x_2 < x_3 < \cdots < x_n < \cdots.$$
For any $\delta > 0$, $\dim_H G(f,[1-\delta,1]) = 1$, and there is no accumulation point elsewhere. There exist sets $E_i \subset I$ such that each $E_i$ contains exactly one UV point $x_i$ and $E_i \cap E_j = \emptyset$ when $i \ne j$. Since $f(x)$ has only one UV point on $E_i$,
$$\dim_H G(f,E_i) = 1.$$
Denote $E = \bigcup_{i=1}^{\infty} E_i$. Then
$$\dim_H G(f,I) = \dim_H (G(f,E) \cup G(f,[1-\delta,1])) = \sup\{\dim_H G(f,E), \dim_H G(f,[1-\delta,1])\} = 1.$$
Thus,
$$\dim_H G(f,I) = 1.$$
(3) $x_n \in (0,1)$ and $(x_n, 0)$ is the accumulation point: for any $\delta > 0$, $\dim_H G(f,[x_n - \delta, x_n + \delta]) = 1$. By the above discussions,
$$\dim_H G(f,I) = 1.$$
Theorem 8.
Let $f(x)$ be a continuous function containing countably many UV points with only one accumulation point on $I$. If $f(x)$ is self-similar, then
$$\dim_H G(f,I) = \dim_B G(f,I) = 1.$$
Proof. 
Since $f(x)$ is a continuous function on $I$,
$$1 \le \dim_H G(f,I) \le \underline{\dim}_B G(f,I).$$
Denote the countable UV points as $x_1, x_2, x_3, \ldots$. There exist intervals $[a_i, x_i]$ with $[a_i, x_i] \cap [a_j, x_j] = \emptyset$ when $i \ne j$. Thus $f(x)$ has only one UV point on each $[a_i, x_i]$ and
$$\dim_B G(f,[a_i,x_i]) = 1.$$
Denote $E = \bigcup_{i=1}^{N-1} [a_i, x_i]$, $F = [a_N, 1]$ and $H = \bigcup_{i=1}^{N-1} [x_i, a_{i+1}]$, where $a_1 = 0$. Thus,
$$\overline{\dim}_{MB} G(f,E) = \inf\left\{\sup_i \overline{\dim}_B G(f,[a_i,x_i]) : E = \bigcup_{i=1}^{N-1} [a_i, x_i]\right\} = 1.$$
Further inference shows that $f(x)$ is a BVF on $H$ and
$$\dim_H G(f,H) = \dim_B G(f,H) = 1.$$
A similar calculation gives
$$\overline{\dim}_{MB} G(f,I) = \overline{\dim}_{MB} (G(f,E) \cup G(f,F) \cup G(f,H)) = \inf\{\sup\{\dim_H G(f,E), \dim_H G(f,F), \dim_H G(f,H)\}\} = 1.$$
Since $f(x)$ is self-similar on $I$, $G(f,I)$ is compact and
$$\overline{\dim}_B (G(f,I) \cap V) = \overline{\dim}_B G(f,I)$$
for all open sets $V$ that intersect $G(f,I)$. Thus,
$$\overline{\dim}_B G(f,I) = \overline{\dim}_{MB} G(f,I).$$
Notice that the conclusion $\underline{\dim}_B G(f,I) \ge 1$ remains true. Therefore,
$$\dim_H G(f,I) = \dim_P G(f,I) = \dim_{MB} G(f,I) = \dim_B G(f,I) = 1.$$

5. Possible Applications in Reinforcement Learning

Since AlphaGo demonstrated amazing abilities in Go [43,44], reinforcement learning has gradually attracted the attention of many scholars [45,46,47,48]. The core idea of reinforcement learning is to use the continuous interaction between the agent and the environment to maximize the expected long-term cumulative return. The agent learns the optimal strategy through trial and error. Maximizing the expected return gives reinforcement learning "foresight": it does not focus only on the immediate situation, so the strategies obtained through reinforcement learning are well founded. Since the optimal strategy can be learned, reinforcement learning has become an emerging method for studying decision theory. At the same time, the learning process of the agent is dynamic, and the required data are generated through interaction with the environment, so a large amount of labeled data is not required.
With the advent of deep neural networks, deep reinforcement learning can solve many complex problems. Seemingly complex fractal sets also possess a special regularity (self-similarity). Can fractals and the fractal dimension therefore be used in the learning process of the agent to speed up learning or improve the search efficiency of algorithms? This section introduces several possible applications of fractals and the fractal dimension in reinforcement learning.

5.1. The Evaluation Model Based on Fractal Dimension

The main basis of the fractal evaluation model is the fractal dimension, an important indicator of system stability. A multi-dimensional vector can be formed from parameters such as the actions and states of the agents, and many such vectors constitute a special set. The fractal dimension of this set can determine the distance between the current state and the equilibrium state, in which every agent is stable and has no incentive to change its current strategy. The main operational flow of the fractal evaluation model is as follows.
Step one: data standardization. The numbers of states and agents are $K$ and $N$, respectively. Each state is $S = (s_1, s_2, \ldots, s_N, a_1, a_2, \ldots, a_N, r_1, r_2, \ldots, r_N)$. Standardization eliminates the differences caused by the kind of each datum. The standardized data are $S' = (y_{ij})$, $i = 1, 2, \ldots, K$, $j = 1, 2, \ldots, 3N$.
Step two: weighting. $w_j = d_j / \sum_{j=1}^{N} d_j$, where $d_j = \max_{1 \le i, k \le K} |y_{ij} - y_{kj}|$, $j = 1, 2, \ldots, N$.
Step three: calculate $N(r)$. The distance used in the algorithm is the Euclidean distance. The $3N$ data of each state can be regarded as the coordinates of a point in $3N$-dimensional space, and these points constitute a subset of the $3N$-dimensional Euclidean space $E^{3N}$. The distance from each point to the origin is $d_{ij}$; let $R = \max(d_{ij})$, $i = 1, 2, \ldots, K$, $j = 1, 2, \ldots, 3N$. For a specific state, $N(r)$ is the number of all points satisfying $d_{ij} < r$, where $r$ is the radius of a hypersphere. Keep adjusting the radius $r$ until $r = R$ and $N(r) = N$. When the radius is $r$, the number of points contained in the hypersphere is $N(r) = \sum_{i=0}^{3N} \mathrm{sgn}(r - d_{ij})$, where $\mathrm{sgn}(x)$ is the sign function,
$$\mathrm{sgn}(x) = \begin{cases} 1, & x > 0, \\ 0, & x \le 0. \end{cases}$$
Step four: calculate the fractal dimension $D = \log N(r) / \log r$.
From the above calculation process, the number of sample points contained in the hypersphere of radius $r$ changes continuously as the radius varies. At the same time, the graph of the function formed by the standardized data points is usually non-linear, so the fractal dimension $D$ in this step can be fitted by the least-squares method,
$$D = \frac{3N \sum_{i=1}^{3N} \log N(r_i) \log r_i - \sum_{i=1}^{3N} \log N(r_i) \sum_{i=1}^{3N} \log r_i}{3N \sum_{i=1}^{3N} (\log r_i)^2 - \left( \sum_{i=1}^{3N} \log r_i \right)^2}.$$
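The four steps above can be sketched as follows. The column-wise min-max standardization, the choice of radii, and the function name are illustrative assumptions; the text fixes only the weight rule, the counting rule for $N(r)$, and the least-squares fit of $D$:

```python
import numpy as np

def fractal_dimension_of_state_set(S):
    """Sketch of the evaluation model (steps one to four), assuming each
    row of S is one observed state vector (s_1..s_N, a_1..a_N, r_1..r_N)."""
    # Step one: standardize each column to [0, 1] to remove unit differences
    # (min-max scaling is an assumed choice of standardization).
    lo, hi = S.min(axis=0), S.max(axis=0)
    Y = (S - lo) / np.where(hi > lo, hi - lo, 1.0)
    # Step two: weight each coordinate by its range d_j, matching
    # w_j = d_j / sum(d_j) in the text.
    d = Y.max(axis=0) - Y.min(axis=0)
    Y = Y * (d / d.sum())
    # Step three: Euclidean distances to the origin, and N(r) for a grid
    # of radii growing up to R = max distance.
    dist = np.linalg.norm(Y, axis=1)
    R = dist.max()
    radii = np.linspace(R / 10, R, 10)
    N_r = np.array([(dist < r).sum() for r in radii])
    # Step four: fit D as the least-squares slope of log N(r) vs log r.
    mask = N_r > 0
    D, _ = np.polyfit(np.log(radii[mask]), np.log(N_r[mask]), 1)
    return D
```

Because $N(r)$ is non-decreasing in $r$, the fitted slope is non-negative; comparing the slopes of two state sets then gives the stability ranking described in the text.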
At present, most reinforcement learning algorithms are based on global information. However, due to the limitations of communication and observation, the agent cannot obtain all the information in practice. Therefore, the MDP (Markov decision process) model used to solve basic reinforcement learning problems is not applicable. Researchers established the POMDP (partially observable Markov decision process) model [49,50] to solve partially observable reinforcement learning problems. The main solutions include function approximation, opponent modeling, and graph theory.
The fractal dimension offers another new idea for optimizing POMDP methods. The theoretical foundation of using the fractal dimension to evaluate stability rests mainly on Lyapunov stability theory. The larger the fractal dimension of a set, the more stable points it contains; therefore, under the same disturbance, a set with a larger fractal dimension is more stable than one with a smaller fractal dimension. The advantage of this method is that the agent does not need to know global information. The strategy selection of agents can be guided by the fractal dimension, and the correct strategy direction can optimize the algorithm. At the same time, even for a game that has no pure-strategy Nash equilibrium, it is still possible to compare the distance between any two situations and the equilibrium state by calculating the fractal dimension.

5.2. The Convergence Model Based on Fractal Attractor

At present, the convergence of most reinforcement learning algorithms lacks rigorous proof. Owing to the powerful fitting ability of deep neural networks [51,52,53], algorithms converge well in various experimental environments, but convergence observed only in experiments neither reveals the essence of the problem nor helps to optimize existing algorithms. Mathematically, the convergence of an algorithm is a fixed point of a particular function, and the fixed-point problem can in turn be expressed through attractors in fractal theory. Therefore, the convergence of an algorithm can be verified by establishing the existence of an attractor, and the calculation of attractors has theoretical guarantees. Can the Bellman equation in reinforcement learning be regarded as an iterated function system, whose solution is then the attractor of that system? The idea of the model is shown in Figure 9.
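A minimal numerical illustration of this fixed-point view, on a small made-up MDP: the Bellman optimality operator is a $\gamma$-contraction in the sup norm, so iterating it from any starting point converges to the same $V^*$, the "attractor" of the iteration. All transition probabilities and rewards below are random placeholders:

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(1)
# P[a, s, s'] : transition probabilities; R[s, a] : rewards (made up here).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

def bellman(V):
    # (TV)(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
    return (R + gamma * np.einsum("ast,t->sa", P, V)).max(axis=1)

# Iterate from zero until the sup-norm change is negligible.
V = np.zeros(n_states)
for _ in range(500):
    V_next = bellman(V)
    if np.max(np.abs(V_next - V)) < 1e-10:
        break
    V = V_next

# A very different starting point contracts to the same fixed point.
W = np.full(n_states, 100.0)
for _ in range(500):
    W = bellman(W)
print(np.max(np.abs(W - V)))  # prints a value near zero
```

The contraction property guarantees a unique fixed point by the Banach fixed-point theorem, which is exactly the kind of theoretical guarantee the attractor view would bring to convergence proofs.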
The advantage of this convergence model lies in its versatility: it can prove the convergence of a whole class of similar algorithms. A theoretical proof helps to reveal the essence of the problem and thus provides different ideas for optimizing the algorithms.

5.3. The Random Search Algorithm Based on Fractal

Exploration and exploitation form one of the important research directions in deep reinforcement learning. The goal of exploration is to find more information about the environment, while the purpose of exploitation is to use the known environmental information to maximize the reward. In short, exploration tries behaviors that have not yet been tried, while exploitation chooses the next action from the knowledge the agent has already mastered. The balance between exploration and exploitation is a basic issue of reinforcement learning. In deep reinforcement learning tasks, obtaining the best strategy often requires collecting more information. Researchers have proposed many classic methods for this problem; the ϵ-greedy method is a commonly used greedy exploration strategy.
However, the exploration efficiency of this method is not good. Fractals generally have two characteristics: both the whole and the parts are irregular, and the internal structure is self-similar and uneven. A search method based on the fractal structure can reduce the search time as much as possible while ensuring that the whole space is explored. Due to the self-similar structure of the fractal, the algorithm need not always repeat previous training during the training process, which can eliminate a lot of unnecessary training time. Whether these characteristics of fractals can be used to achieve efficient search awaits follow-up research and discussion. At present, there has already been considerable research on using fractals to improve search efficiency [54,55,56], but these algorithms can still be optimized further.
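For reference, the ϵ-greedy rule mentioned above can be written in a few lines (a standard textbook formulation, not code from the paper):

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon explore a uniformly random action;
    otherwise exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

The weakness discussed in the text is visible here: exploration is blind, drawing actions uniformly with no memory of what has already been tried, which is exactly what a fractal, self-similar search structure would aim to avoid.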

6. Conclusions

This manuscript systematically sorts out the conclusions about one-dimensional continuous functions. The Box dimension of bounded variation functions, and of their Weyl fractional integrals, is one; the Box dimension of continuous functions satisfying the Lipschitz condition is also one. These results show that fractional calculus does not increase the dimension of a function. This conclusion seems simple, but a rigorous proof appears to have been absent from the literature. The structure of unbounded variation functions is more complicated. The construction of several special unbounded variation functions is presented first, and many general conclusions about unbounded variation functions are proved using UV points. Combined with self-similarity, the fractal dimensions of some special functions are also strictly determined. These conclusions are very helpful for perfecting the theory of unbounded variation. At the same time, to increase the practical significance of the above conclusions, some applications of fractals and the fractal dimension in reinforcement learning are introduced. On the one hand, this work sorts out the current results; on the other hand, it offers useful ideas and research directions to other researchers. The evaluation model based on the fractal dimension proposed in this manuscript can accelerate the convergence of many reinforcement learning algorithms by using the fractal dimension to judge the stability of any state. This model is an important result of combining the two theories, and it is believed that more fractal theory will be applied to reinforcement learning.
However, the research on one-dimensional continuous functions is far from over. In particular: What are the necessary and sufficient conditions for the conversion between unbounded variation and bounded variation? Are there other theories and tools that can be used to study one-dimensional continuous functions? Can existing conclusions about one-dimensional continuous functions be extended to multi-dimensional continuous functions? Can the conclusions about unbounded variation functions be used in other fields?

Author Contributions

Conceptualization, W.J. and C.L.; methodology, W.J.; data curation, W.B.; formal analysis, G.H.; writing, original draft, W.J. and T.W. All authors have read and agreed to the published version of the manuscript.

Funding

Research was funded by National Natural Science Foundation of China (grant number 61806221387 and 12071218).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

Research is supported by National Natural Science Foundation of China (61806221 and 12071218).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Srivastava, H.M.; Kashuri, A.; Mohammed, P.O.; Nonlaopon, K. Certain inequalities pertaining to some new generalized fractional integral operators. Fractal Fract. 2021, 5, 160. [Google Scholar] [CrossRef]
  2. Khan, M.B.; Noor, M.A.; Abdeljawad, T.; Mousa, A.A.A.; Abdalla, B.; Alghamdi, S.M. LR-preinvex interval-valued functions and Riemann–Liouville fractional integral inequalities. Fractal Fract. 2021, 5, 243. [Google Scholar] [CrossRef]
  3. Machado, J.T.; Mainardi, F.; Kiryakova, V. Fractional calculus: Quo vadimus? (Where are we going?). Fract. Calc. Appl. Anal. 2015, 18, 495–526. [Google Scholar] [CrossRef]
  4. Butera, S.; Paola, M.D. A physically based connection between fractional calculus and fractal geometry. Ann. Phys. 2014, 350, 146–158. [Google Scholar] [CrossRef]
  5. Kolwankar, K.M.; Gangal, A.D. Fractional differentiability of nowhere differentiable functions and dimensions. Chaos Solitons Fractals 1996, 6, 505–513. [Google Scholar] [CrossRef] [Green Version]
  6. Kolwankar, K.M.; Gangal, A.D. Hölder exponent of irregular signals and local fractional derivatives. Pramana J. Phys. 1997, 48, 49–68. [Google Scholar] [CrossRef] [Green Version]
  7. Nigmatullin, R.R.; Baleanu, D. Relationships between 1D and space fractals and fractional integrals and their applications in physics. In Applications in Physics, Part A; De Gruyter: Berlin, Germany, 2019; Volume 4, pp. 183–220. [Google Scholar]
  8. Tatom, F.B. The relationship between fractional calculus and fractals. Fractals 1995, 3, 217–229. [Google Scholar] [CrossRef]
  9. Zähle, M.; Ziezold, H. Fractional derivatives of Weierstrass-type functions. J. Comput. Appl. Math. 1996, 76, 265–275. [Google Scholar] [CrossRef] [Green Version]
  10. Liang, Y.S. The relationship between the Box dimension of the Besicovitch functions and the orders of their fractional calculus. Appl. Math. Comput. 2008, 200, 197–207. [Google Scholar] [CrossRef]
  11. Ruan, H.J.; Su, W.Y.; Yao, K. Box dimension and fractional integral of linear fractal interpolation functions. J. Approx. Theory 2009, 161, 187–197. [Google Scholar] [CrossRef] [Green Version]
  12. Liang, Y.S. Box dimensions of Riemann-Liouville fractional integrals of continuous functions of bounded variation. Nonlinear Anal. 2010, 72, 4304–4306. [Google Scholar] [CrossRef]
  13. Liang, Y.S. Fractal dimension of Riemann-Liouville fractional integral of 1-dimensional continuous functions. Fract. Calc. Appl. Anal. 2018, 21, 1651–1658. [Google Scholar] [CrossRef]
  14. Wu, J.R. On a linearity between fractal dimensions and order of fractional calculus in Hölder space. Appl. Math. Comput. 2020, 385, 125433. [Google Scholar] [CrossRef]
  15. Verma, S.; Viswanathan, P. A note on Katugampola fractional calculus and fractal dimensions. Appl. Math. Comput. 2018, 339, 220–230. [Google Scholar] [CrossRef]
  16. Verma, S.; Viswanathan, P. Bivariate functions of bounded variation: Fractal dimension and fractional integral. Indag. Math. 2020, 31, 294–309. [Google Scholar] [CrossRef] [Green Version]
  17. Bush, K.A. Continuous functions without derivatives. Am. Math. Mon. 1952, 59, 222–225. [Google Scholar] [CrossRef]
  18. Shen, W.X. Hausdorff dimension of the graphs of the classical Weierstrass functions. Math. Z. 2018, 289, 223–266. [Google Scholar] [CrossRef] [Green Version]
  19. Su, W.Y. Construction of fractal calculus. Sci. China Math. Chin. Ser. 2015, 45, 1587–1598. [Google Scholar] [CrossRef]
  20. Xie, T.F.; Zhou, S.P. On a class of fractal functions with graph box dimension 2. Chaos Solitons Fractals 2004, 22, 135–139. [Google Scholar] [CrossRef]
  21. Liang, Y.S.; Su, W.Y. Von Koch curves and their fractional calculus. Acta Math. Sin. Chin. Ser. 2011, 54, 227–240. [Google Scholar]
  22. Wang, J.; Yao, K. Construction and analysis of a special one-dimensional continuous functions. Fractals 2017, 25, 1750020. [Google Scholar] [CrossRef]
  23. Wang, J.; Yao, K.; Liang, Y.S. On the connection between the order of Riemann-Liouvile fractional falculus and Hausdorff dimension of a fractal function. Anal. Theory Appl. 2016, 32, 283–290. [Google Scholar] [CrossRef]
  24. Falconer, K.J. Fractal Geometry: Mathematical Foundations and Applications; John Wiley & Sons Inc.: Chichester, UK, 1990. [Google Scholar]
  25. Wen, Z.Y. Mathematical Foundations of Fractal Geometry; Science Technology Education Publication House: Shanghai, China, 2000. (In Chinese) [Google Scholar]
  26. Hu, T.Y.; Lau, K.S. Fractal dimensions and singularities of the Weierstrass type functions. Trans. Am. Math. Soc. 1993, 335, 649–665. [Google Scholar] [CrossRef]
  27. Zheng, W.X.; Wang, S.W. Real Function and Functional Analysis; High Education Publication House: Beijing, China, 1980. (In Chinese) [Google Scholar]
  28. Tian, L. The estimates of Hölder index and the Box dimension for the Hadamard fractional integral. Fractals 2021, 29, 2150072. [Google Scholar] [CrossRef]
  29. Wang, C.Y. R-L Algorithm: An approximation algorithm for fractal signals based on fractional calculus. Fractals 2020, 24, 2150243. [Google Scholar] [CrossRef]
  30. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  31. Teodoro, G.S.; Machado, J.A.; Oliveira, E.C. A review of definitions of fractional derivatives and other operators. J. Comput. Phys. 2019, 388, 195–208. [Google Scholar] [CrossRef]
  32. Kiryakova, V.S. Generalized Fractional Calculus and Applications; CRC Press: Boca Raton, FL, USA, 1993. [Google Scholar]
  33. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; John Wiley & Sons Inc.: New York, NY, USA, 1976. [Google Scholar]
  34. Podlubny, I. Geometric and physical interpretation of fractional integration and fractional differentiation. Fract. Calc. Appl. Anal. 2002, 5, 367–386. [Google Scholar]
  35. Mu, L.; Yao, K.; Wang, J. Box dimension of Weyl fractional integral of continuous functions with bounded variation. Anal. Theory Appl. 2016, 32, 174–180. [Google Scholar] [CrossRef]
  36. Kilbas, A.A.; Titioura, A.A. Nonlinear differential equations with Marchaud-Hadamard-type fractional derivative in the weighted space of summable functions. Math. Model. Anal. 2007, 12, 343–356. [Google Scholar] [CrossRef]
  37. Tian, L. Hölder continuity and box dimension for the Weyl fractional integral. Fractals 2020, 28, 2050032. [Google Scholar] [CrossRef]
  38. Yao, K.; Liang, Y.S.; Su, W.Y.; Yao, Z.Q. Fractal dimension of fractional derivative of self-affine functions. Acta Math. Sin. Chin. Ser. 2013, 56, 693–698. [Google Scholar]
  39. Xu, Q. Fractional integrals and derivatives to a class of functions. J. Xuzhou Norm. Univ. 2006, 24, 19–23. [Google Scholar]
  40. Stein, E.M. Singular Integrals and Differentiability Properties of Functions; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  41. Liang, Y.S.; Su, W.Y. Fractal dimensions of fractional integral of continuous functions. Acta Math. Sin. 2016, 32, 1494–1508. [Google Scholar] [CrossRef]
  42. Liang, Y.S.; Zhang, Q. 1-dimensional continuous functions with uncountable unbounded variation points. Chin. J. Contemporary Math. 2018, 39, 129–136. [Google Scholar]
  43. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of go without human knowledge. Nature 2017, 550, 354–359. [Google Scholar] [CrossRef]
  44. Magnani, L. AlphaGo, Locked Strategies, and Eco-Cognitive Openness; Eco-Cognitive Computationalism Springer: Cham, Switzerland, 2021; pp. 45–71. [Google Scholar]
  45. Liu, S.; Pan, Z.; Cheng, X. A novel fast fractal image compression method based on distance clustering in high dimensional sphere surface. Fractals 2017, 25, 1740004. [Google Scholar] [CrossRef] [Green Version]
  46. Li, S.; Wu, Y.; Cui, X.; Dong, H.; Fang, F.; Russell, S. Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient. Proc. Aaai Conf. Artif. Intell. 2019, 33, 4213–4220. [Google Scholar] [CrossRef] [Green Version]
  47. Li, G.; Jiang, B.; Zhu, H.; Che, Z.; Liu, Y. Generative attention networks for multi-agent behavioral modeling. Proc. Aaai Conf. Artif. Intell. 2020, 34, 7195–7202. [Google Scholar] [CrossRef]
  48. Liu, S.; Wang, S.; Liu, X.Y.; Gandomi, A.H.; Daneshmand, M.; Muhammad, K.; de Albuquerque, V.H.C. Human Memory Update Strategy: A multi-layer template update mechanism for remote visual monitoring. IEEE Trans. Multimed. 2021, 23, 2188–2198. [Google Scholar] [CrossRef]
  49. Hoerger, M.; Kurniawati, H. An on-line POMDP solver for continuous observation spaces. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 7643–7649. [Google Scholar]
  50. Igl, M.; Zintgraf, L.; Le, T.A.; Wood, F.; Whiteson, S. Deep variational reinforcement learning for POMDPs. Int. Conf. Mach. Learn. 2018, 16, 2117–2126. [Google Scholar]
  51. Zhou, Z.H. Neural Networks. In Machine Learning; Springer: Singapore, 2021. [Google Scholar]
  52. Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. Simam: A simple parameter-free attention module for convolutional neural networks. Int. Conf. Mach. Learn. 2021, 26, 11863–11874. [Google Scholar]
  53. Almatroud, A.O. Extreme multistability of a fractional-order discrete-time neural network. Fractal Fract. 2021, 5, 202. [Google Scholar] [CrossRef]
  54. Alomoush, M.I. Optimal combined heat and power economic dispatch using stochastic fractal search algorithm. J. Mod. Power Syst. Clean Energy 2020, 8, 276–286. [Google Scholar] [CrossRef]
  55. Tran, T.T.; Truong, K.H. Stochastic fractal search algorithm for reconfiguration of distribution networks with distributed generations. Ain Shams Eng. J. 2020, 11, 389–407. [Google Scholar] [CrossRef]
  56. Pham, L.H.; Duong, M.Q.; Phan, V.D.; Nguyen, T.T.; Nguyen, H.N. A high-performance stochastic fractal search algorithm for optimal generation dispatch problem. Energies 2019, 12, 1796. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The logical structure of the paper.
Figure 2. The properties of bounded variation functions.
Figure 3. The image of $d(x)$ and $D(x)$.
Figure 4. The image of $H(x)$.
Figure 5. The image of $f_{\alpha}(x)$.
Figure 6. Case one.
Figure 7. Case two.
Figure 8. The image of $g(x)$.
Figure 9. The frame of the convergence model.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
