Article

Fractional Derivative Gradient-Based Optimizers for Neural Networks and Human Activity Recognition

by
Oscar Herrera-Alcántara
Departamento de Sistemas, Universidad Autónoma Metropolitana, Mexico City 02200, Mexico
Appl. Sci. 2022, 12(18), 9264; https://doi.org/10.3390/app12189264
Submission received: 3 August 2022 / Revised: 8 September 2022 / Accepted: 12 September 2022 / Published: 15 September 2022
(This article belongs to the Special Issue Deep Learning for Signal Processing Applications)

Abstract

In this paper, fractional calculus principles are considered to implement fractional derivative gradient optimizers for the Tensorflow backend. The performance of these fractional derivative optimizers is compared with that of other well-known ones. Our experiments consider some human activity recognition (HAR) datasets, and the results show that there is a subtle difference between the performance of the proposed method and that of existing ones. The main conclusion is that fractional derivative gradient descent optimizers can help to improve the performance of training and validation tasks, and they open the possibility of incorporating more fractional calculus concepts into neural networks applied to HAR.

1. Introduction

In the context of machine learning, neural networks are one of the most popular and efficient techniques to model data, and gradient descent methods are widely used to optimize them. The fundamental gradient descent optimizer uses an update term that points in the direction opposite to the gradient of the objective function. Other optimizers consider momentum and velocity analogies to improve the training convergence and the generalization capacity.
Effectively, starting with the basic update rule of gradient descent optimizers, the fundamental factor updates the free parameters in the direction opposite to the gradient $g_t$ on the approximation error surface, and the learning rate $\eta$ modulates the feedback that moves the parameters toward a minimum [1].
The batched, vanilla gradient descent (GD) version updates the parameters considering all the training samples, but it is impractical for large datasets. The GD update formula is
$\Delta \theta_t = -\eta\, g_t$. (1)
Alternatively, a stochastic gradient descent (SGD) version updates the parameters for each i-th training sample. Hence, the SGD update formula is
$\Delta \theta_{t,i} = -\eta\, g_{t,i}$ (2)
and although it introduces fluctuations, on one hand they can be useful to explore the optimization space, but on the other hand they add unnecessary variance to the parameter updates and make the learning rate a critical factor. Considering this, a mixed, minibatch gradient descent version proposes splitting the dataset into subsets to deal with this tradeoff [2].
Adagrad [3] is another evolved version that adapts the learning rate based on the memory of past gradients and aims to give helpful feedback for sparse features of the input data. Adagrad accumulates the sum of squared gradients in the diagonal entries $G_{t,ii}$ of a historical matrix to modulate the adjustment of each parameter $\theta_i$, which aims to deal with the disparity between frequent and infrequent features of the training samples. The Adagrad update formula is
$\Delta \theta_{t,i} = -\dfrac{\eta}{\sqrt{G_{t,ii} + \epsilon}}\, g_{t,i}$ (3)
that considers $\epsilon > 0$ in the denominator to avoid a zero division.
Adadelta [4] is a variant of Adagrad that aims to avoid accumulating the squared gradients in $G_{ii}$ over all time; instead, it defines an averaging window with $0 \le \gamma \le 1$ that weights the current and the previous squared gradients according to
$E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma)\, g_t^2$ (4)
so it can be conceived as the root mean square (RMS) of the gradients:
$RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}$ (5)
where $\epsilon > 0$ is also included to avoid a division by zero, given that the original update formula for Adadelta is
$\Delta \theta_t = -\dfrac{\eta}{RMS[g]_t}\, g_t$. (6)
To preserve the same “unit of measure”, the learning rate $\eta$ is replaced by the RMS of the parameter updates in this way:
$RMS[\Delta\theta]_t = \sqrt{E[\Delta\theta^2]_t + \epsilon} = \sqrt{\gamma E[\Delta\theta^2]_{t-1} + (1-\gamma)\,\Delta\theta_t^2 + \epsilon}$ (7)
computed up to $t-1$, since $\Delta\theta_t$ is still being calculated. Therefore, the Adadelta update rule is
$\Delta \theta_t = -\dfrac{RMS[\Delta\theta]_{t-1}}{RMS[g]_t}\, g_t$. (8)
RMSProp [5] is considered an extension of Adagrad that maintains a moving average of the squared gradients instead of the full history and divides the gradient by the root mean square of that average. In this sense, it closely resembles Equation (6), presented above as the original Adadelta rule.
Other optimizers consider momentum (update memory) based on the update of the previous iteration, analogous to the physical concept of particle inertia: when “the ball moves” in the same direction from one update to the next, convergence is accelerated, and when the direction changes, the momentum term opposes the change, providing more stability and better convergence.
Adam [6] mixes the average of past gradients $m_t$ and of past squared gradients $v_t$ together with a momentum approach: $m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t$ contains a previous memory value followed by a second term based on the gradient. Similarly, $v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$ first involves a memory update term followed by a squared-gradient term. Since $m_t$ and $v_t$ are initialized to zero, and to avoid a tendency toward zero bias, bias-corrected versions are used for the first moment, $\hat{m}_t = m_t/(1-\beta_1^t)$, and for the second moment, $\hat{v}_t = v_t/(1-\beta_2^t)$. The Adam update formula is
$\Delta \theta_{t,i} = -\dfrac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t$ (9)
that also considers ϵ > 0 in the denominator to avoid a zero division.
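As a concrete illustration of these formulas (an illustrative sketch, not part of the paper's implementation), one Adam update step can be written in NumPy as follows, using the usual default hyperparameters from [6]:

import numpy as np

def adam_step(theta, g, m, v, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return the updated (theta, m, v) for iteration t (t starts at 1)."""
    m = beta1 * m + (1.0 - beta1) * g        # average of past gradients
    v = beta2 * v + (1.0 - beta2) * g ** 2   # average of past squared gradients
    m_hat = m / (1.0 - beta1 ** t)           # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** t)           # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v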
Some other gradient descent optimizers have been proposed, but only one more is discussed here, the AdamP optimizer [7]. AdamP is a variant of Adam that appeals to weight normalization and an effective automatic step-size adjustment over time to stabilize the overall training procedure, which improves generalization. The normalization considers projections in the weight space via the projection operator $\Pi_{\theta}(x) := x - (\theta \cdot x)\,\theta$ applied to the momentum update $m_t = \beta m_{t-1} + g_t$, so the AdamP rule is $\Delta \theta_t = -\eta\, q_t$, where
$q_t = \begin{cases} \Pi_{\theta_t}(m_t), & \text{if } \cos(\theta_t, g_t) < \delta/\sqrt{\dim(\theta)} \\ m_t, & \text{otherwise} \end{cases}$ (10)
for $\delta > 0$, where $\cos(a, b)$ is the cosine similarity between two vectors.
Table 1 summarizes the previously described optimizers. In addition, it also includes a version of SGD with momentum $\gamma$, as well as SGDP [7], which adds weight normalization via projections on top of SGD, analogous to how AdamP is built on top of Adam. The first column indicates the name, the second column the update rule, and the third column a comment. It is emphasized that the update rule for some optimizers, such as Adagrad, Adadelta and Adam, involves $\epsilon > 0$ to avoid a division by zero.
All previous optimizers consider the gradient g t as the cornerstone update factor that comes from a first-order derivative of the objective function. The purpose of this work is to present optimizers that introduce a fractional derivative gradient in the update rule, as well as an implementation for the Tensorflow backend. This proposal is mainly based on the fractional differential calculus theory [8,9,10,11] and on previous works [12,13].
Fractional calculus is not a novel topic [14], but it has recently gained relevance in several fields, including linear viscoelasticity [15], fractional control [16], partial differential equations [17], signal processing [18], image processing [19], time series prediction [20], and mathematical economics [21], among others, and of course neural networks in the age of deep learning [12,13,22]. Given that neural network architectures face several challenges, such as generalization enhancement, gradient vanishing problems, regularization and overfitting, it seems that fractional calculus still has a lot to contribute.
The rest of the paper is organized as follows. In Section 2, the details of the proposed fractional derivative gradient update rule are presented. In Section 3, experiments are described that compare its performance with that of known optimizers; they support the main conclusion regarding the improvement of the proposed method over existing ones. In Section 4, some discussion based on the experiments is presented, and some future work directions are commented on.

2. Materials

In this section, the Caputo fractional derivative definition is reviewed, as well as its relationship with the backpropagation algorithm for neural networks.

2.1. Fractional Derivatives

There is no unified theory of fractional calculus, and evidence of this is that there is no single definition of the fractional derivative; see, for example, the Grünwald–Letnikov, the Riemann–Liouville and the Caputo definitions [10,13,23]. The Caputo fractional derivative, for $a, x \in \mathbb{R}$, $\nu > 0$ and $n = [\nu + 1]$, is defined as
${}^{C}_{a}D_{x}^{\nu} f(x) = \dfrac{1}{\Gamma(n-\nu)} \displaystyle\int_{a}^{x} (x-y)^{n-\nu-1} f^{(n)}(y)\, dy$ (11)
and it seems to be the most popular since, in contrast to the Grünwald–Letnikov and Riemann–Liouville definitions, the Caputo fractional derivative of Equation (11) is zero for $f(x) = C$, with $C \in \mathbb{R}$, which matches the integer-order derivative [12]. In Equation (11), a kernel $(x-y)^{n-\nu-1}$ can be identified that convolves with $f^{(n)}$. The application and study of other kernels and their properties to define further fractional derivatives is an open research area.
An interesting property of the fractional $\nu$-order derivative operator $D_{x}^{\nu}$ applied to $x^{p}$ is that [24]
$D_{x}^{\nu} x^{p} = \dfrac{\Gamma(p+1)}{\Gamma(p-\nu+1)}\, x^{p-\nu}$ (12)
and, in particular for $\nu = \frac{1}{2}$ and $p = 1$, it allows the $\frac{1}{2}$-order derivative of $x$ to be calculated:
$D_{x}^{1/2}\, x = \dfrac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2}$, (13)
moreover, if the $\nu = \frac{1}{2}$ derivative is calculated again with $p = \frac{1}{2}$, i.e., if $D_{x}^{1/2}$ is applied to Equation (13),
$D_{x}^{1/2 + 1/2}\, x = D_{x}^{1/2}\left(D_{x}^{1/2}\, x\right) = D_{x}^{1/2}\left(\dfrac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2}\right) = \dfrac{\Gamma(2)}{\Gamma(3/2)} \cdot \dfrac{\Gamma(3/2)}{\Gamma(1)}\, x^{0} = 1,$ (14)
which is consistent with the first-order derivative $D_{x}^{(1)} x = 1$.
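The following short Python check (illustrative only; not part of the paper's code) evaluates the power rule of Equation (12) numerically and verifies the composition of Equations (13) and (14):

from math import gamma

def frac_deriv_power(x, p, nu):
    """nu-order derivative of x**p via the power rule of Eq. (12)."""
    return gamma(p + 1) / gamma(p - nu + 1) * x ** (p - nu)

x = 2.0
half = frac_deriv_power(x, p=1.0, nu=0.5)     # D^(1/2) x = Gamma(2)/Gamma(3/2) * x^(1/2)
# apply D^(1/2) once more to Gamma(2)/Gamma(3/2) * x^(1/2), as in Eq. (14)
again = gamma(2) / gamma(1.5) * frac_deriv_power(x, p=0.5, nu=0.5)
print(half, again)  # 'again' evaluates to 1.0, matching the first-order derivative of x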

2.2. Backpropagation for Neural Networks

In supervised learning, given an input data set $X$ and the corresponding desired outputs $O$, the training sample set can be expressed as $\{X_i, O_i\}_{i=1}^{N}$, where $N$ is the number of samples.
For a neural network with an architecture composed of a single input layer $X$, followed by $L = H + 1$ layers comprising $H$ hidden layers and an output layer $O$ with activation functions $\varphi(x)$, the matrix of synaptic weights $w_{kj}^{l}$ indicates the connection between neuron $k$ of layer $l+1$ and neuron $j$ of the current layer, $l \in [1, L-1]$. A special case is $l = 0$, where the weights connect the input data $X$ with the neurons of the first hidden layer.
The error of neuron $k$ at the output layer is $e_{ki} = a_{ki}^{L} - o_{ki}$, where the subindex $i$ indicates that the neural network receives the $i$-th input pattern. Consequently, given the $i$-th training sample, the error $E_i$ of the output layer considering all its $n_L$ neurons is
$E_i = \frac{1}{2}\sum_{k=1}^{n_L} e_{ki}^{2} = \frac{1}{2}\sum_{k=1}^{n_L}\left(a_{ki}^{L} - o_{ki}\right)^{2}$ (15)
then, the total error E over all the N training samples is
$E = \sum_{i=1}^{N} E_i = \frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{n_L}\left(a_{ki}^{L} - o_{ki}\right)^{2}$ (16)
and the learning process via the backpropagation algorithm aims to minimize E by adjusting the free parameters of the weight matrix.
Essentially, backpropagation training consists of repeated forward and backward steps. The forward step progressively evaluates the induced local fields $V^{l}$, multiplying the inputs $I^{l}$ of the $l$-th layer by the corresponding synaptic weights $W^{l} = \{w_{kj}^{l}\}$. For the first layer, $I^{1} = X$, so the induced local field vector at layer $l$ can be expressed as the dot product $V^{l} = I^{l} \cdot W^{l}$, where $l \in [1, L]$.
The output of neuron $k$ at layer $l$ is $a_{k}^{l} = \varphi(v_{k}^{l})$, where $v_{k}^{l}$ is the $k$-th induced local field of $V^{l}$, and by convention, for $l = 0$, the “output vector” $a^{0}$ is equal to $I = X$, the input data set. Of course, the activation function $\varphi$ can be different for each layer.
For the backward step, once the outputs $a^{L}$ of the $L$-th layer have been calculated, the local gradients $\delta_{k}^{l}$ are evaluated, which allows the gradient descent updates to be obtained in reverse order for $l = L, L-1, \ldots, 1$.
Indeed, for the gradient descent optimizer, the weight updates $\Delta w_{kj}^{l}$ are given by
$\Delta w_{kj}^{l} = -\eta\, \dfrac{\partial E_i}{\partial w_{kj}^{l}}$ (17)
seeking a direction for the weight change that reduces the value of $E_i$ [1].
Since the local gradient is
$\delta_{k}^{l} = \dfrac{\partial E_i}{\partial v_{k}^{l}}$ (18)
and considering that
$\dfrac{\partial E_i}{\partial w_{kj}^{l}} = \dfrac{\partial E_i}{\partial v_{k}^{l}} \cdot \dfrac{\partial v_{k}^{l}}{\partial w_{kj}^{l}} = \dfrac{\partial E_i}{\partial v_{k}^{l}} \cdot a_{j}^{l-1} = \delta_{k}^{l}\, a_{j}^{l-1}$ (19)
then, $\Delta w_{kj}^{l}$ can be expressed as
$\Delta w_{kj}^{l} = -\eta \cdot \delta_{k}^{l} \cdot a_{j}^{l-1}$. (20)
At the output layer, $\delta_{k}^{L}$ involves two factors, the error $e_{ki}$ and the derivative of the activation function, as follows:
$\delta_{k}^{L} = e_{ki} \cdot \varphi'(v_{k}^{L})$ (21)
whereas for a hidden layer $l$, the local gradient considers the contribution of the errors via the $k$ neurons of the $l+1$ layer, hence
$\delta_{j}^{l} = \varphi'(v_{j}^{l}) \cdot \sum_{k=1}^{n_{l+1}} \delta_{k}^{l+1} \cdot w_{kj}^{l+1}$. (22)
To be consistent with the nomenclature of Section 1, let $g_t = \delta_{k}^{l} \cdot a_{j}^{l-1}$, where $a_{j}^{l-1}$ is the output of neuron $j$ of the previous layer, i.e., an input to layer $l$. Additionally, let $\Delta\theta_{t,i} = \Delta w_{kj}^{l}$ when the $i$-th training sample is presented to the neural network.
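To make the backward step concrete, the following is a compact NumPy sketch of Equations (19)–(22) for a single training sample with one hidden layer; the layer sizes, the random weights and the sigmoid activation are illustrative choices, not taken from the experiments of this paper:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # input sample (by convention, a^0 = x)
W1 = rng.normal(size=(4, 3))                # weights input -> hidden layer
W2 = rng.normal(size=(3, 2))                # weights hidden -> output layer
o = np.array([0.0, 1.0])                    # desired output

# forward step: induced local fields and activations
v1 = x @ W1;  a1 = sigmoid(v1)
v2 = a1 @ W2; a2 = sigmoid(v2)

# backward step
e = a2 - o                                  # output error e_k = a_k^L - o_k
delta2 = e * a2 * (1 - a2)                  # Eq. (21): delta^L = e * phi'(v^L), phi' = a(1-a) for sigmoid
delta1 = (a1 * (1 - a1)) * (delta2 @ W2.T)  # Eq. (22): backpropagated local gradients
eta = 0.1
dW2 = -eta * np.outer(a1, delta2)           # Eq. (20): Delta w = -eta * delta * a_prev
dW1 = -eta * np.outer(x, delta1)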

2.3. Fractional Derivative and Gradient Descent

Essentially, the same approach as in gradient descent for first-order derivatives is applied to the fractional gradient $D_{w_{kj}^{l}}^{\nu} E_i$. In this case, the weight updates are
$\Delta w_{kj}^{l} = -\eta\, D_{w_{kj}^{l}}^{\nu} E_i$ (23)
and the main difference comes when applying the chain rule, as follows:
$D_{w_{kj}^{l}}^{\nu} E_i = \dfrac{\partial E_i}{\partial w_{kj}^{l}} \cdot D_{w_{kj}^{l}}^{\nu} w_{kj}^{l} = \delta_{k}^{l} \cdot a_{j}^{l-1} \cdot \dfrac{(w_{kj}^{l})^{1-\nu}}{\Gamma(2-\nu)}$, (24)
which is identical to that of the integer derivative but multiplied by the fractional factor $\dfrac{(w_{kj}^{l})^{1-\nu}}{\Gamma(2-\nu)}$.
Note that the property of Equation (12) with $p = 1$ is applied to obtain the fractional $\nu$-order derivative of $w_{kj}^{l}$. Additionally, note that in the case of $\nu = 1$, it reduces to the already known integer case, since the factor $(w_{kj}^{l})^{1-\nu} = 1$ and $\Gamma(2-\nu) = 1$; hence, Equation (24) can be conceived as a generalization of integer-order gradient descent for $\nu > 0$.

2.4. Tensorflow Implementation of Fractional Gradient Optimizers

Tensorflow is a platform for machine learning, and it has been widely used by the deep learning community since it provides open-source Python libraries to train and deploy many applications [25]. Tensorflow also includes efficient support for GPU devices, as well as integration with high-level APIs, such as Keras [26]. Tensorflow is available for several operating systems and is also available through Jupyter notebook cloud services, such as Google Colab [27].
The module tf.optimizers contains classes for gradient descent optimizers, such as SGD, Adadelta, Adagrad and Adam, among others. For example, the SGD optimizer is located in the Tensorflow–Keras module tf.keras.optimizers.SGD and accepts several parameters, as shown in the following fragment of code.
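For reference, the constructor of this optimizer in the TF 2.x Keras API looks approximately as follows (default values may differ slightly between versions):

import tensorflow as tf

opt = tf.keras.optimizers.SGD(
    learning_rate=0.01,   # step size eta
    momentum=0.0,         # gamma; 0 disables the velocity term
    nesterov=False,       # whether to use Nesterov momentum
    name="SGD",
)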
These parameters have default values, such as momentum = 0, which means that the default update rule is Δw = −learning_rate × gradient. Given a positive value of momentum, the update rule according to the API documentation is Δw = velocity, where the velocity is defined as velocity = momentum × velocity − learning_rate × gradient. Thus, the velocity stores a single-slot memory value, as described in Section 1, and it corresponds to the $\Delta\theta_{t,i}$ update in the SGD with momentum row of Table 1.
Since the main goal is to introduce the fractional factor of Equation (24) into the gradient descent optimizers, a simple and elegant solution is to multiply the current gradient by this factor. However, there are some aspects to be considered. First, note that Equation (24) involves a power $1-\nu$ that becomes negative for $\nu > 1$ and could consequently produce a division by zero (in practice, Tensorflow yields NaN values). A possible solution is to add $\epsilon > 0$, as was shown in Section 1. However, there is a second consideration: when $1-\nu = \frac{p}{q}$ and $q$ is even (for example, $\nu = \frac{1}{2}$ or $\nu = \frac{3}{4}$), negative values of $w_{kj}^{l}$ generate complex values. To deal with these two situations and to preserve real values, $w_{kj}^{l}$ was replaced by $|w_{kj}^{l}| + \epsilon$, so the proposed fractional gradient factor $f_w^{\nu}$ is
$f_w^{\nu} := \dfrac{(|w_{kj}^{l}| + \epsilon)^{1-\nu}}{\Gamma(2-\nu)}$. (25)
A strong motivation to replace $w_{kj}^{l}$ by $|w_{kj}^{l}| + \epsilon$ is that the limit of $f_w^{\nu}$ then exists when $\nu \to 1$. In such a case,
$\displaystyle\lim_{\nu \to 1^{-}} \dfrac{(|w_{kj}^{l}| + \epsilon)^{1-\nu}}{\Gamma(2-\nu)} = \lim_{\nu \to 1^{+}} \dfrac{(|w_{kj}^{l}| + \epsilon)^{1-\nu}}{\Gamma(2-\nu)} = 1,$ (26)
which supports the idea of conceiving Equation (24) as a more general case of the integer-order gradient descent update rule.
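A quick numeric check of this limit (illustrative only) can be done with the gamma function of the Python standard library:

from math import gamma

def f_w(w, nu, eps=1e-7):
    # fractional factor of Eq. (25) for a single weight w
    return (abs(w) + eps) ** (1.0 - nu) / gamma(2.0 - nu)

for nu in (0.999, 1.0, 1.001):
    print(nu, f_w(0.37, nu))   # all three values are approximately 1.0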
For the Tensorflow implementation, a new class FSGD with a fractional gradient was defined based on the SGD optimizer, and its update_step method was modified to multiply the incoming gradient by the fractional factor of Equation (25).
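A minimal sketch of this modification is shown below. It is not the released source code: it assumes a Keras/TF version in which SGD exposes update_step(gradient, variable), whereas older versions expose _resource_apply_dense instead, so the override name may need to be adapted.

import tensorflow as tf

class FSGDSketch(tf.keras.optimizers.SGD):
    """SGD whose gradient is scaled by the fractional factor f_w^nu of Eq. (25)."""

    def __init__(self, learning_rate=0.01, nu=1.5, f_epsilon=1e-7, **kwargs):
        super().__init__(learning_rate=learning_rate, **kwargs)
        self.nu = nu                  # fractional derivative order
        self.f_epsilon = f_epsilon    # keeps |w| + eps > 0 so the power stays real and finite

    def _fractional_factor(self, variable):
        # f_w^nu = (|w| + eps)^(1 - nu) / Gamma(2 - nu)
        gamma = tf.exp(tf.math.lgamma(2.0 - self.nu))
        return tf.pow(tf.abs(variable) + self.f_epsilon, 1.0 - self.nu) / gamma

    def update_step(self, gradient, variable):
        # scale the gradient, then let the parent SGD perform its usual update
        return super().update_step(gradient * self._fractional_factor(variable), variable)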
The same procedure applies to the other gradient descent optimizers listed in Table 1, and each fractional version uses the prefix “F”. For example, FAdam is the fractional version of Adam, and it was obtained by modifying the _resource_apply_dense method of the Adam class so that the gradient is scaled by the same fractional factor.
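A framework-version-independent way to approximate the same behavior with any stock optimizer (again, a sketch rather than the paper's implementation) is to scale each gradient by $f_w^{\nu}$ before calling apply_gradients:

import tensorflow as tf
from math import gamma

def fractional_scale(grads_and_vars, nu=1.5, eps=1e-7):
    # multiply every gradient by f_w^nu = (|w| + eps)^(1 - nu) / Gamma(2 - nu)
    g = gamma(2.0 - nu)
    return [(grad * tf.pow(tf.abs(var) + eps, 1.0 - nu) / g, var)
            for grad, var in grads_and_vars]

# inside a custom training step:
# grads = tape.gradient(loss, model.trainable_variables)
# optimizer.apply_gradients(fractional_scale(zip(grads, model.trainable_variables), nu=1.4))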
The source code for AdamP was adapted from [7]; although it includes modifications for weight normalization via projections, the section of interest for updating the gradient is identical to that of Adam. Thus, the same modifications apply to the fractional version named FAdamP. In a similar manner, they also apply to FSGDP as the fractional version of SGDP.
The source code of all fractional optimizers FSGD, FAdagrad, FAdadelta, FRMSProp, FAdam, FSGDP and FAdamP is available for download.

3. Results

Once the fractional optimizers FSGD, FAdagrad, FAdadelta, FRMSProp, FAdam, FSGDP and FAdamP were implemented, they were compared with their counterparts available in Tensorflow–Keras, as well as with SGDP and AdamP obtained from [7].
The fractional versions with prefix “F” and $\nu = 1.0$ coincide with the original non-fractional versions, since according to Equation (26) the latter are special cases of the fractional derivatives, and this was corroborated experimentally.
The comparisons were organized in three experiments. The first experiment considers the well-known dataset MNIST [28], whereas Experiments 2 and 3 use the HAR datasets [29,30].

3.1. Experiment 1

Experiment 1 uses MNIST with 10-fold cross-validation, 15 epochs, an architecture of 3 dense layers with ReLU, and an output layer with softmax for 10 classes.
Three subexperiments are described below.
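A hedged Keras sketch of this architecture is shown below; the layer widths (128, 64, 32) are illustrative, since they are not specified here, and in the actual experiments the stock SGD optimizer is replaced by the corresponding fractional version (e.g., FSGD):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),     # MNIST images
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])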

3.1.1. Experiment 1.1

It considers a learning rate $\eta = 0.001$ and momentum $\gamma = 0$ for FSGD, because the main idea is to evaluate the fundamental effect of the fractional factor $f_w^{\nu}$. In this case, $\nu = 0.1, 0.2, \ldots, 1.9$, since the experiments show that larger values of $\nu$ have worse performance. Obviously, it includes the case $\nu = 1.0$, whose results match those of SGD, which was corroborated by obtaining a correlation of 0.999.
The results of the cross-folding accuracy are shown in Table 2, where the rows correspond to folds 1 to 10. It is possible to appreciate that small values of $\nu$ close to zero produce low accuracies, and the worst case is $\nu = 0.1$, which reports an accuracy of 12.3% at the third fold. Conversely, as $\nu$ increases, so does the accuracy, which reaches a maximum and then begins to decrease slowly.
In Figure 1, the boxplots of all the data of Table 2 are shown. In both of them, it is possible to appreciate the optimal performance for $\nu = 1.7$ (the average accuracy is 95.85 and the standard deviation is 0.36). The improvement is about 4% over SGD ($\nu = 1.0$), and these values are highlighted in bold in Table 2 for comparison purposes.
From the results of Experiment 1, the importance of the fractional gradient factor $f_w^{\nu}$ stands out, since the best performance is achieved for $\nu > 1.0$ instead of the traditional $\nu = 1$. This shows that $f_w^{\nu}$ provides an additional degree of freedom to optimize the neural network parameters.

3.1.2. Experiment 1.2

This experiment considers FSGD with equal values for the learning rate $\eta$ and the momentum $\gamma$. The learning rate is increased 100 times with respect to Experiment 1.1, so that $\eta = \gamma = 0.1$, which aims to balance their contributions to the weight updates. The fractional order varies from $\nu = 0.1$ to $1.9$ with a step of $0.2$, plus $\nu = 1.0$, which corresponds to SGD as a special case.
The results of Experiment 1.2 are shown in Table 3 and Figure 2, where it is possible to see (highlighted in bold in Table 3) that the cases $\nu = 1.1$ and $\nu = 1.9$ perform better than the others, including $\nu = 1.0$, which corresponds to SGD with momentum $\gamma = 0.1$. Although in the last fold these cases (see the last row of columns $\nu = 1.1$ and $\nu = 1.9$) have slightly smaller values than $\nu = 1.0$, the remaining folds show a consistent enhancement, as illustrated in the boxplots of Figure 2, where the boxplots for $\nu = 1.1$ and $\nu = 1.9$ are positioned above that of $\nu = 1.0$.
From Experiment 1.2, it is deduced that the use of momentum contributes to better performance, close to 99%, whereas FSGD without momentum in Experiment 1.1 barely reaches about 95.6% in the last fold.

3.1.3. Experiment 1.3

Another experiment considers FSGD with $\eta = 0.001$ and a high momentum value, $\gamma = 0.9$. The results are shown in Table 4 together with the boxplots of Figure 3. Additionally, Table 5 shows the correlations between the columns of Table 4, and the high correlation between all columns for the different values of $\nu = 0.1$ to $1.9$ is remarkable, which means that a high momentum value and a low learning rate diminish the effect of the fractional factor $f_w^{\nu}$. In fact, the correlation matrix of Table 5 makes this evident, since all the correlation values are 0.90 or higher, regardless of the value of $\nu$. Moreover, the performance for $\gamma = 0.9$ decreases by about 2% with respect to Experiment 1.2 with the lower momentum $\gamma = 0.1$, as can be appreciated when comparing Table 3 and Table 4.

3.2. Experiment 2

Experiment 2 uses the HAR dataset Actitracker [29]. It was released by the Wireless Sensor Data Mining (WISDM) lab and refers to 36 users carrying a smartphone in their pocket, sampled at a rate of 20 samples per second. The dataset contains acceleration values for the x, y and z axes while the user performs six different activities in a controlled environment: downstairs, jogging, sitting, standing, upstairs, and walking. The number of samples is 1,098,209, originally split into 80% for training and 20% for testing.
To obtain better experimental support, these data were merged, and cross-validation with K = 4 folds was applied with shuffle.
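A sketch of this cross-validation setup using scikit-learn is shown below (the paper does not state which library performed the split, so this is only illustrative):

from sklearn.model_selection import KFold
import numpy as np

X = np.zeros((1098209, 3))        # placeholder with the dataset's shape (x, y, z accelerations)
kf = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    # ... build the 2D-CNN, train on the training fold, evaluate on the test fold ...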
The source code was adapted from [31], and it considers a 2D convolutional neural network (2D-CNN) with two dense layers and ReLU activation functions, followed by a softmax layer.
The fractional optimizers were studied in two groups based on their performance: the first group comprises FSGD, FSGDP, FAdagrad and FAdadelta, and the second group comprises FRMSProp, FAdam and FAdamP. This gives rise to Experiments 2.1 and 2.2.

3.2.1. Experiment 2.1

The source code of [31] was modified to include FSGD, FSGDP, FAdagrad and FAdadelta.
To save space, the boxplots of the accuracies for the K = 4 folds are illustrated in Figure 4, Figure 5, Figure 6 and Figure 7, but tables with the numerical data are not included. The following observations can be made:
  • Figure 4. The highest score for FSGD is 82.8 % at ν = 1.7 .
  • Figure 5. The highest score for FSGDP is 82.71 % at ν = 1.7 (a marginal difference with respect to FSGD).
  • Figure 6. FAdagrad just reaches its maximum 84.64 % at ν = 1.6 .
  • Figure 7. The worst performance of these four optimizers is for FAdadelta with a maximum of 63.11 % at ν = 1.8 .
In this experiment, the influence of the fractional factor $f_w^{\nu}$ in enhancing the performance compared with the traditional first-order case is evident. However, these results are not the best possible, since they are improved upon by other optimizers, as shown in the next experiment.

3.2.2. Experiment 2.2

In Experiment 2.2, the modification to the source code of [31] was to include the fractional FRMSProp, FAdam and FAdamP versions.
Figure 8, Figure 9 and Figure 10 show the boxplots of the accuracies of the K = 4 folds for FRMSProp, FAdam and FAdamP, respectively, with $\nu$ in the interval 0.1 to 1.9 and a step of 0.1.
In Figure 8, a few candidates can be identified as “the winners”, although there is no strongly dominant case. The most relevant cases of Figure 8 are
  • FRMSProp at $\nu = 0.5$. This case has a superior and more compact behavior compared with the integer case $\nu = 1.0$.
  • FRMSProp at $\nu = 1.4$. This case has a performance similar to that of RMSProp ($\nu = 1.0$, average 89% and standard deviation 1.68), but the case $\nu = 1.4$ is slightly higher (average 89.5% and standard deviation 1.52).
In Figure 9, FAdam with $\nu = 1.1$ can be considered better than $\nu = 1.0$, since the former has an average accuracy of 87.7% (1.15% above the case $\nu = 1.0$), although it has a slightly larger standard deviation (+0.43).
According to Figure 10, FAdamP with $\nu = 1$ (i.e., AdamP) can be considered the best within the FAdamP category, since it is not possible to identify a convincingly better case than its accuracy of 88% with a standard deviation of 0.84. However, within group 2, FAdamP is outperformed by FRMSProp at $\nu = 1.4$.
In general, for Experiment 2.2, the best cases correspond to FRMSProp, but it is also important to mention that group 2 outperformed the group 1 accuracies of Experiment 2.1.

3.3. Experiment 3

In Experiment 3, the dataset “Human Activity Recognition Using Smartphones” [30] was used. It contains data from 30 subjects performing one of six activities: walking, walking upstairs, walking downstairs, sitting, standing and laying. The data were collected while the subjects wore a waist-mounted smartphone, and the movement labels were obtained manually from videos.
Originally, the dataset was split into 70% for training and 30% for testing. Both the training and testing subsets were mixed to obtain a unified dataset and to apply cross-validation with K = 4 folds and shuffle, which produces a fold size of 25% of the whole dataset, yielding 75% for training and 25% for testing in each fold and approximately matching the original 70%/30% split.
The optimizers were studied in the same groups as in Experiment 2, given their similar behavior.

3.3.1. Experiment 3.1

The source code was adapted from [32] to include the optimizers FSGD, FSGDP, FAdagrad and FAdadelta. Cross-validation was applied for each optimizer, varying $\nu$ from 0.1 to 1.9 with steps of 0.1.
The learning rate was 0.001 and the momentum $\gamma$ was equal to 0.1. The results for FSGD, FSGDP, FAdagrad and FAdadelta are shown in Figure 11, Figure 12, Figure 13 and Figure 14 as boxplots, where it is possible to see essentially the same tendency for each optimizer: the accuracy increases as $\nu$ increases, reaches a maximum and then decreases. FSGD and FSGDP decrease abruptly for $\nu \ge 1.7$ (see Figure 11 and Figure 12). The highest average accuracies are at $\nu = 1.6$ for FSGD and FSGDP, with accuracies of 76.3% and 76.4%, respectively.
In the case of Figure 13 for FAdagrad, the maximum average accuracy is 80.9% at $\nu = 1.7$. FAdadelta in Figure 14 presents the worst performance, given that its maximum is 53.5% at $\nu = 1.8$.
Again, in this experiment, it is possible to appreciate the influence of the fractional factor $f_w^{\nu}$ on the accuracy.

3.3.2. Experiment 3.2

In Figure 15, Figure 16 and Figure 17, boxplots are shown for the accuracies of the K = 4 folds of FRMSProp, FAdam and FAdamP, respectively.
From these three figures, the following relevant cases were observed for each optimizer independently:
  • FRMSProp. In Figure 15, the case $\nu = 1.0$ is improved upon by the rest of the cases, except $\nu = 1.8$.
  • FAdam. In Figure 16, the case $\nu = 1.0$ is improved upon by the accuracy at $\nu = 1.2$.
  • FAdamP. In Figure 17, the case $\nu = 1.0$ is improved upon by the accuracy at $\nu = 0.9$.
In this experiment, FRMSProp does not have better performance than FAdam and FAdamP; however, the fractional order seems to slightly modify the performance, and in most cases, a value other than 1.0 provides better accuracy.

3.3.3. Experiment 3.3

Finally, the same Experiments 3.1 and 3.2 were repeated with 10 folds, and similar results were obtained. An overview of the accuracies of both groups, group 1 (FSGD, FSGDP, FAdagrad, FAdadelta) and group 2 (FRMSProp, FAdam, FAdamP), is shown as boxplots in Figure 18 and Figure 19. The boxplots correspond to each optimizer listed in groups 1 and 2, for $\nu \in [0.1, 1.9]$ with increments of 0.1.
In Figure 18, it is possible to appreciate the highest accuracy for FAdagrad at ν = 1.7 , whereas in Figure 19, the highest accuracy is for FAdam at ν = 1.2 .
Once the experiments have been carried out, it is convenient to mention that, similarly to the integer case, a geometric interpretation can be given to Equation (24) as a gradient in the region of interest on the error surface. The main benefit of the fractional $\nu$-derivative over the integer version is the expansion of the search region around the test point, determined by $\nu$ and the weights $w_{kj}^{l}$ [33]. In this sense, the benefit is provided by the factor $f_w^{\nu}$ of Equation (25), and to explore the effect of non-locality, a 3D plot is shown in Figure 20. It is possible to identify four regions with different behavior:
  • Region A: for $\nu \to 0^{+}$. The factor $f_w^{\nu}$ follows $|w_{kj}^{l}|$, and it could produce divergence for $w_{kj}^{l}$ and, consequently, for $f_w^{\nu}$ and for the fractional gradient $D_{w_{kj}^{l}}^{\nu} E_i$.
  • Region B: for $\nu = 1$. The integer case corresponds to the red line $f_w^{\nu} = 1$. No context information is considered, just the local point.
  • Region C: for $1 < \nu < 2$. The surface $f_w^{\nu}$ reaches a local maximum $f_w^{\nu} = 8.75$ at $\nu^{*} = 1.77$ for $\epsilon = 0.01$.
  • Region D: for $\nu \to 2$. $f_w^{\nu}$ tends to zero quickly as $|w_{kj}^{l}|$ increases. This encourages small weights to increase their values and move toward the flatter region, where they stabilize.
The Cartesian axes in Figure 20 are as follows:
  • x-axis: the weights $w_{kj}^{l}$.
  • y-axis: the fractional order $\nu \in (0, 2)$.
  • z-axis: the fractional factor $f_w^{\nu}$ that modifies the integer-order gradient.
In Figure 20, there is a yellow plane at $f_w^{\nu} = 1$ used as a reference for the red line of the integer case.
It is noteworthy that, in region C, the experimental accuracies of the fractional optimizers (depending on $\nu$) follow a behavior similar to that of $f_w^{\nu}$. For now, this is just an observation that merits exploring the possible relationship between the optimal fractional order $\nu^{*}$ and the region of best accuracy for the fractional optimizers.
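A surface similar to the one in Figure 20 can be reproduced with the following short sketch (the axis ranges are illustrative):

import numpy as np
import matplotlib.pyplot as plt
from math import gamma

eps = 0.01
w = np.linspace(-1.0, 1.0, 201)
nu = np.linspace(0.05, 1.95, 191)
W, NU = np.meshgrid(w, nu)
G = np.array([gamma(2.0 - n) for n in nu])[:, None]   # Gamma(2 - nu), broadcast over w
F = (np.abs(W) + eps) ** (1.0 - NU) / G               # fractional factor f_w^nu of Eq. (25)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(W, NU, F, cmap="viridis")
ax.set_xlabel("w"); ax.set_ylabel("nu"); ax.set_zlabel("f_w^nu")
plt.show()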

4. Discussion

Unlike other proposed optimizers, which appeal mainly to concepts such as momentum or velocity, in this paper gradient descent variants are proposed based on fractional calculus concepts, specifically on the Caputo fractional derivative.
The proposed fractional optimizers add the prefix “F” to the original names, and their update formulas essentially incorporate the $f_w^{\nu}$ factor, defined in such a way that the limit exists when the $\nu$-order derivative tends to 1, which leads to a more general formula that includes the integer order as a special case.
Fractional optimizers are slightly more expensive computationally because they require computing the $f_w^{\nu}$ factor. However, they are still very competitive because the computation is performed efficiently through Tensorflow, and the additional advantage is that the fractional factor transfers non-local exploring properties, rather than just considering an infinitely small neighborhood around a point on the error surface as in the integer case, which experimentally translates into improved performance of the fractional optimizers.
The fractional factor f w ν provides an additional degree of freedom to the backpropagation algorithm, and consequently, to the learning capacity of neural networks, as was shown in several experiments.
The fractional optimizers were successfully implemented in Tensorflow–Keras with modifications to the original source code to obtain FSGD, FSGDP, FAdagrad, FAdadelta, FRMSProp, FAdam and FAdamP classes. Everything indicates that it is possible to apply the same methodology to modify other gradient-based optimizers, as well as making implementations in other frameworks.
Three experiments were carried out with MNIST and two HAR datasets. The results on crossfolding show that in all the experiments, a fractional order provides better performance than the first order for the same neural network architectures.
In the experiments, FSGD, FSGDP, FAdagrad and FAdadelta (group 1) basically follow the same pattern of increasing their performance as the $\nu$-order increases, reaching a maximum and then decreasing.
Other optimizers, such as FRMSProp, FAdam and FAdamP (group 2), do not follow the same pattern, and seem to be less susceptible to the fractional order change. From the experiments, it can be said that FRMSProp has an “intermediate” pattern between the optimizers of group 1 and group 2.
Even so, essentially in all the experiments, the best performing derivative order was a fractional value.
Therefore, based on the results, it is possible to affirm that fractional derivative gradient optimizers can help to improve the performance of training and validation tasks, and they open the possibility of incorporating more fractional calculus concepts into neural networks applied to HAR.
The Tensorflow–Keras implementations of this work are available in a repository to contribute to the deep learning and HAR communities to improve and apply these techniques based on fractional calculus.
Future work includes exploring more fractional derivative definitions and HAR datasets with fractional regularization factors, as well as studying the effects on vanishing gradient problems with other neural network architectures.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Tensorflow-Keras implementations of this work are available at http://ia.azc.uam.mx/.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Haykin, S.S. Neural Networks and Learning Machines, 3rd ed.; Pearson Education: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
  2. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar]
  3. Duchi, J.; Hazan, E.; Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159. [Google Scholar]
  4. Zeiler, M.D. ADADELTA: An Adaptive Learning Rate Method. arXiv 2012, arXiv:1212.5701. [Google Scholar]
  5. Tieleman, T.; Hinton, G. Neural Networks for Machine Learning; Technical Report; COURSERA: Mountain View, CA, USA, 2012. [Google Scholar]
  6. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization, 2014. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  7. Heo, B.; Chun, S.; Oh, S.J.; Han, D.; Yun, S.; Kim, G.; Uh, Y.; Ha, J.W. AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, 3–7 May 2021. [Google Scholar]
  8. Podlubny, I. Mathematics in Science and Engineering. In Fractional Differential Equations; Academic Press: Cambridge, MA, USA, 1999; Volume 198, p. 340. [Google Scholar]
  9. Oustaloup, A. La dérivation non Entière: Théorie, Synthèse et Applications; Hermes Science Publications: New Castle, PA, USA, 1995; p. 508. [Google Scholar]
  10. Luchko, Y. Fractional Integrals and Derivatives: “True” versus “False”; MDPI: Basel, Switzerland, 2021. [Google Scholar] [CrossRef]
  11. Miller, K.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: Hoboken, NJ, USA, 1993. [Google Scholar]
  12. Bao, C.; Pu, Y.; Zhang, Y. Fractional-Order Deep Backpropagation Neural Network. Comput. Intell. Neurosci. 2018, 2018, 7361628. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, J.; Wen, Y.; Gou, Y.; Ye, Z.; Chen, H. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Neural Netw. 2017, 89, 19–30. [Google Scholar] [CrossRef] [PubMed]
  14. Machado, J.T.; Kiryakova, V.; Mainardi, F. Recent history of fractional calculus. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 1140–1153. [Google Scholar] [CrossRef]
  15. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity, 2nd ed.; World Scientific: Singapore, 2022; Number 2; p. 628. [Google Scholar]
  16. Muresan, C.I.; Birs, I.; Ionescu, C.; Dulf, E.H.; De Keyser, R. A Review of Recent Developments in Autotuning Methods for Fractional-Order Controllers. Fractal Fract. 2022, 6, 37. [Google Scholar] [CrossRef]
  17. Yousefi, F.; Rivaz, A.; Chen, W. The construction of operational matrix of fractional integration for solving fractional differential and integro-differential equations. Neural Comput. Appl. 2019, 31, 1867–1878. [Google Scholar] [CrossRef]
  18. Gonzalez, E.A.; Petráš, I. Advances in fractional calculus: Control and signal processing applications. In Proceedings of the 2015 16th International Carpathian Control Conference (ICCC), Szilvasvarad, Hungary, 27–30 May 2015; pp. 147–152. [Google Scholar] [CrossRef]
  19. Henriques, M.; Valério, D.; Gordo, P.; Melicio, R. Fractional-Order Colour Image Processing. Mathematics 2021, 9, 457. [Google Scholar] [CrossRef]
  20. Shoaib, B.; Qureshi, I.M.; Shafqatullah; Ihsanulhaq. Adaptive step-size modified fractional least mean square algorithm for chaotic time series prediction. Chin. Phys. B 2014, 23, 050503. [Google Scholar] [CrossRef]
  21. Tarasov, V.E. On History of Mathematical Economics: Application of Fractional Calculus. Mathematics 2019, 7, 509. [Google Scholar] [CrossRef] [Green Version]
  22. Alzabut, J.; Tyagi, S.; Abbas, S. Discrete Fractional-Order BAM Neural Networks with Leakage Delay: Existence and Stability Results. Asian J. Control 2020, 22, 143–155. [Google Scholar] [CrossRef]
  23. Ames, W.F. Chapter 2—Fractional Derivatives and Integrals. In Fractional Differential Equations; Podlubny, I., Ed.; Mathematics in Science and Engineering; Elsevier: Amsterdam, The Netherlands, 1999; Volume 198, pp. 41–119. [Google Scholar] [CrossRef]
  24. Garrappa, R.; Kaslik, E.; Popolizio, M. Evaluation of Fractional Integrals and Derivatives of Elementary Functions: Overview and Tutorial. Mathematics 2019, 7, 407. [Google Scholar] [CrossRef]
  25. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 8 September 2022).
  26. Chollet, F.; Zhu, Q.S.; Rahman, F.; Lee, T.; Marmiesse, G.; Zabluda, O.; Qian, C.; Jin, H.; Watson, M.; Chao, R.; et al. Keras. 2015. Available online: https://keras.io/ (accessed on 4 July 2022).
  27. Google Inc. Google Colab. 2015. Available online: https://colab.research.google.com (accessed on 4 July 2022).
  28. Deng, L. The mnist database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 2012, 29, 141–142. [Google Scholar] [CrossRef]
  29. Actitracker. Available online: http://www.cis.fordham.edu/wisdm/dataset.php (accessed on 4 July 2022).
  30. Reyes-Ortiz, J.L.; Anguita, D.; Ghio, A.; Oneto, L.; Parra, X. Human Activity Recognition Using Smartphones Dataset. Available online: https://archive.ics.uci.edu/ml/machine-learning-databases/00240 (accessed on 4 July 2022).
  31. HAR Using CNN in Keras. Available online: https://github.com/Shahnawax/HAR-CNN-Keras (accessed on 4 July 2022).
  32. Jason, B. How to Model Human Activity from Smartphone Data. Available online: https://machinelearningmastery.com/how-to-model-human-activity-from-smartphone-data/ (accessed on 4 July 2022).
  33. Khan, S.; Ahmad, J.; Naseem, I.; Moinuddin, M. A Novel Fractional Gradient-Based Learning Algorithm for Recurrent Neural Networks. Circuits Syst. Signal Process. 2018, 37, 593–612. [Google Scholar] [CrossRef]
Figure 1. Experiment 1.1: Boxplots for accuracies of FSGD with 10-cross folding and ν = 0.1, 0.2, …, 1.9.
Figure 2. Experiment 1.2: Boxplots of accuracies for FSGD with η = γ = 0.1, ν = 0.1, 0.3, …, 1.9 and 1.0.
Figure 3. Accuracy boxplots for FSGD with 10-cross folding, momentum = 0.9 and ν = 0.1, 0.2, …, 1.9.
Figure 4. Experiment 2.1: FSGD cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 5. Experiment 2.1: FSGDP cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 6. Experiment 2.1: FAdagrad cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 7. Experiment 2.1: FAdadelta cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 8. Experiment 2.2: FRMSProp cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 9. Experiment 2.2: FAdam cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 10. Experiment 2.2: FAdamP cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 11. Experiment 3.1: FSGD cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 12. Experiment 3.1: FSGDP cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 13. Experiment 3.1: FAdagrad cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 14. Experiment 3.1: FAdadelta cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 15. Experiment 3.2: FRMSProp cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 16. Experiment 3.2: FAdam cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 17. Experiment 3.2: FAdamP cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 18. Experiment 3.2: Group 1, cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 19. Experiment 3.2: Group 2, cross-folding accuracies, K = 4 folds, ν ∈ [0.1, 1.9].
Figure 20. Plot for the fractional factor f_w^ν with ε = 0.01.
Table 1. Update rules for several gradient descent optimizers.
Name | Update Rule | Comment for the Update
GD | $\Delta\theta_t = -\eta\, g_t$ | It is opposed to the gradient.
SGD | $\Delta\theta_{t,i} = -\eta\, g_{t,i}$ | It is opposed to the gradient for each training sample.
Adagrad | $\Delta\theta_{t,i} = -\frac{\eta}{\sqrt{G_{t,ii}+\epsilon}}\, g_{t,i}$ | It is opposed to the gradient with an adaptive, decreasing learning rate for each sample.
Adadelta | $\Delta\theta_t = -\frac{RMS[\Delta\theta]_{t-1}}{RMS[g]_t}\, g_t$ | It is opposed to the gradient with an adaptive learning rate for each sample.
RMSProp | $\Delta\theta_t = -\frac{\eta}{RMS[g]_t}\, g_t$ | It is opposed to the gradient and divides η by the RMS of the average of the squared windowed gradients.
SGD with momentum γ | $\Delta\theta_{t,i} = \gamma\,\Delta\theta_{t-1} - \eta\, g_{t,i}$ | It uses a one-slot memory of parameter updates and a direction opposed to the gradient for each training sample.
Adam | $\Delta\theta_{t,i} = -\frac{\eta}{\sqrt{\hat{v}_t}+\epsilon}\, \hat{m}_t$ | It is opposed to the gradient and combines the average of past gradients $m_t$ as well as the average of past squared gradients $v_t$.
AdamP | $\Delta\theta_t = -\eta\,\Pi_{\theta_t}(m_t)$ or $-\eta\, m_t$ | It is opposed to the gradient and also considers weight normalization via the projection of $m_t$.
SGDP | $\Delta\theta_{t,i} = -\eta\,\Pi_{\theta_{t,i}}(\gamma)$ or $-\eta\,\gamma$ | It is opposed to the gradient and also considers weight normalization via the projection of γ.
Table 2. Comparison between SGD and FSGD; accuracies (%) per fold (rows are folds 1–10). The case ν = 1.0 matches with SGD.
FSGD (ν = 0.1, …, 1.9)
0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0  1.1  1.2  1.3  1.4  1.5  1.6  1.7  1.8  1.9
18.7 24.7 33.8 43.2 55.9 69.6 81.9 87.2 89.3 90.9 92.3 93.2 94.0 94.7 95.2 95.1 95.3 94.2 92.2
13.7 17.0 23.3 30.1 44.2 74.5 83.0 87.9 90.4 92.1 93.7 94.6 95.1 95.4 95.6 95.6 95.8 95.8 94.3
12.3 13.7 18.2 23.0 37.9 73.3 83.1 88.0 90.2 91.7 92.8 93.8 94.3 94.8 94.9 95.1 94.5 94.5 91.8
19.7 25.0 32.3 42.4 53.8 71.3 83.4 87.8 90.0 91.2 92.6 93.7 94.5 95.0 95.3 95.5 95.4 94.7 93.3
22.8 26.9 33.7 54.4 68.3 80.5 86.9 89.6 91.0 92.5 93.2 93.9 94.5 94.9 95.3 95.5 95.5 95.2 93.8
14.8 16.7 18.4 21.0 24.7 61.2 82.1 87.7 89.9 91.3 92.2 93.2 94.3 94.7 95.1 95.2 95.4 94.4 92.8
19.2 22.9 29.8 42.9 55.5 77.8 85.4 88.0 90.2 91.5 92.6 93.4 94.1 94.6 94.9 95.1 95.0 94.4 93.0
19.5 26.2 34.5 40.0 45.6 73.1 84.0 88.3 90.0 91.6 92.6 93.5 94.3 94.9 95.4 95.6 95.9 95.3 94.0
22.7 29.0 36.4 43.2 46.0 49.9 78.2 86.6 89.5 90.8 92.0 92.9 93.6 94.2 94.7 95.1 95.3 94.5 93.5
25.4 32.9 42.6 51.5 58.6 77.0 85.6 88.4 90.4 91.6 92.8 93.7 94.3 94.9 95.4 95.6 95.5 95.0 93.3
Table 3. Experiment 1.2: Comparison between SGD and FSGD with η = 0.1 and momentum γ = 0.1; accuracies (%) per fold. The case ν = 1.0 corresponds to SGD.
        ν
Fold   0.1   0.3   0.5   0.7   0.9   1.0   1.1   1.3   1.5   1.7   1.9
1      99.0  99.0  99.0  98.9  99.1  98.9  99.0  99.0  98.9  99.0  99.0
2      98.6  98.9  98.9  98.9  98.9  98.9  99.0  99.0  99.0  98.8  99.1
3      98.9  98.6  96.1  99.0  98.9  96.7  99.0  99.0  98.8  98.8  99.0
4      98.9  98.8  98.9  98.9  98.8  98.8  98.8  98.8  98.6  98.9  98.9
5      98.7  98.7  98.7  98.8  98.7  98.7  98.8  98.7  98.7  98.8  98.8
6      98.9  98.9  99.1  98.9  98.8  96.6  99.0  98.8  98.8  98.4  99.0
7      99.1  99.0  99.0  99.1  98.9  99.0  99.0  98.9  99.1  98.7  99.2
8      99.3  99.2  98.9  99.2  98.5  99.1  99.2  99.3  99.1  99.1  99.1
9      98.7  98.7  98.8  98.8  98.8  98.8  98.8  98.8  98.7  98.7  98.9
10     98.8  98.9  99.0  98.9  99.0  99.1  98.9  99.0  98.9  99.0  98.8
Table 4. Comparison between SGD and FSGD with momentum; accuracies (%) per fold. The case ν = 1.0 reduces to SGD.
FSGD with momentum = 0.9 (ν = 0.1, …, 1.9)
0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0  1.1  1.2  1.3  1.4  1.5  1.6  1.7  1.8  1.9
97.3 97.4 97.4 97.3 97.4 97.4 97.3 97.3 97.3 97.4 97.5 97.3 97.4 97.3 97.2 97.3 97.3 97.3 97.4
97.6 97.6 97.5 97.6 97.6 97.6 97.5 97.6 97.6 97.6 97.6 97.6 97.6 97.5 97.5 97.5 97.5 97.6 97.6
97.4 97.4 97.4 97.4 97.4 97.3 97.4 97.4 97.3 97.4 97.4 97.4 97.4 97.4 97.4 97.3 97.3 97.4 97.4
97.5 97.3 97.4 97.4 97.4 97.4 97.5 97.6 97.4 97.6 97.5 97.4 97.4 97.4 97.5 97.4 97.4 97.4 97.4
97.6 97.7 97.6 97.8 97.7 97.7 97.7 97.7 97.7 97.7 97.7 97.7 97.8 97.7 97.8 97.7 97.7 97.8 97.6
97.7 97.8 97.7 97.7 97.8 97.8 97.7 97.7 97.8 97.8 97.8 97.7 97.8 97.7 97.7 97.7 97.7 97.8 97.7
97.6 97.6 97.6 97.6 97.6 97.6 97.6 97.5 97.6 97.6 97.5 97.6 97.5 97.6 97.5 97.6 97.7 97.7 97.6
98.1 98.0 98.0 97.9 98.0 97.9 98.0 98.0 98.1 98.0 97.9 98.1 98.0 98.0 98.0 98.0 98.0 98.0 98.0
97.2 97.2 97.2 97.2 97.3 97.1 97.1 97.1 97.3 97.2 97.2 97.2 97.2 97.1 97.2 97.2 97.1 97.2 97.2
97.8 97.8 97.7 97.7 97.7 97.7 97.8 97.7 97.8 97.7 97.8 97.7 97.7 97.7 97.7 97.7 97.8 97.7 97.7
Table 5. Correlation matrix for FSGD: learning rate = 0.001, momentum = 0.9 and ν = 0.1, 0.2, …, 1.9.
ν      0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0   1.1   1.2   1.3   1.4   1.5   1.6   1.7   1.8   1.9
0.1    1
0.2    0.96  1
0.3    0.97  0.99  1
0.4    0.93  0.96  0.95  1
0.5    0.95  0.98  0.98  0.94  1
0.6    0.95  0.98  0.97  0.97  0.97  1
0.7    0.99  0.97  0.98  0.96  0.95  0.96  1
0.8    0.98  0.94  0.95  0.96  0.92  0.94  0.98  1
0.9    0.96  0.97  0.98  0.95  0.98  0.96  0.96  0.94  1
1.0    0.97  0.94  0.95  0.95  0.94  0.96  0.97  0.98  0.95  1
1.1    0.94  0.95  0.96  0.94  0.94  0.94  0.94  0.97  0.94  0.95  1
1.2    0.96  0.98  0.98  0.96  0.98  0.96  0.97  0.94  0.98  0.94  0.93  1
1.3    0.94  0.98  0.98  0.98  0.98  0.97  0.96  0.95  0.97  0.95  0.96  0.98  1
1.4    0.98  0.97  0.98  0.97  0.95  0.97  0.99  0.98  0.97  0.98  0.96  0.97  0.97  1
1.5    0.92  0.92  0.92  0.97  0.92  0.91  0.95  0.95  0.94  0.92  0.90  0.94  0.94  0.94  1
1.6    0.96  0.97  0.98  0.97  0.97  0.96  0.97  0.95  0.98  0.94  0.92  0.98  0.97  0.98  0.96  1
1.7    0.96  0.96  0.97  0.97  0.94  0.96  0.98  0.95  0.96  0.95  0.92  0.95  0.94  0.98  0.95  0.98  1
1.8    0.95  0.97  0.97  0.96  0.98  0.99  0.96  0.93  0.97  0.95  0.91  0.97  0.97  0.96  0.94  0.98  0.97  1
1.9    0.97  0.98  0.98  0.96  0.97  0.97  0.97  0.96  0.98  0.96  0.94  0.98  0.96  0.99  0.92  0.98  0.97  0.97  1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
