Article

OPT-FRAC-CHN: Optimal Fractional Continuous Hopfield Network

by Karim El Moutaouakil 1,*,†, Zakaria Bouhanch 1,†, Abdellah Ahourag 1, Ahmed Aberqi 2 and Touria Karite 3
1 Laboratory of Engineering Sciences, Multidisciplinary Faculty of Taza, Sidi Mohamed Ben Abdellah University, Taza 35000, Morocco
2 Laboratory of Mathematics Analysis and Applications (LAMA), National School of Applied Sciences, Sidi Mohammed Ben Abdellah University, Fez 30022, Morocco
3 Laboratory of Engineering Systems and Applications (LISA), National School of Applied Sciences, Sidi Mohammed Ben Abdellah University, Fez 30022, Morocco
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2024, 16(7), 921; https://doi.org/10.3390/sym16070921
Submission received: 6 June 2024 / Revised: 3 July 2024 / Accepted: 15 July 2024 / Published: 18 July 2024

Abstract: The continuous Hopfield network (CHN) is a common recurrent neural network. It can be used to solve ranking and optimization problems, where the equilibrium states of the ordinary differential equation (ODE) associated with the CHN provide the solution to the problem at hand. Because of the non-local character of the "infinite memory" effect, fractional-order (FO) systems have been shown to describe the behavior of real dynamical systems more accurately than integer-order ODE models. In this paper, a fractional-order variant of the Hopfield neural network, the fractional CHN (FRAC-CHN), is introduced to solve the Quadratic Knapsack Problem (QKSP). When this system is integrated with the quadratic method for fractional-order equations, its trajectories exhibit erratic paths and jumps to other basins of attraction. To avoid these drawbacks, a new algorithm for obtaining an equilibrium point of a CHN is introduced, namely the optimal fractional CHN (OPT-FRAC-CHN). This is a variable time-step method that converges to a good local minimum in just a few iterations. Compared with the fixed time-step CHN, the optimal time-step CHN (OPT-CHN), and the FRAC-CHN, the OPT-FRAC-CHN produces the best local minima for random CHN instances and for the optimal feeding problem.

1. Introduction

A Hopfield network is a spin-glass system used to model neural networks, developed from Ernst Ising and Wilhelm Lenz's work on the Ising model of magnetic materials [1]. Numerous hard-constrained engineering challenges in a variety of fields have been expressed in terms of Hopfield energy functions: associative memory systems [2], analog-to-digital conversion [3], the job-shop scheduling problem [4], quadratic assignment and other related NP-complete problems [5,6], the channel allocation problem in wireless networks [7], the mobile ad hoc network routing problem [8], image restoration [9], system identification [10], portfolio management [11,12], classification problems [13,14], and the clustering problem [15,16].
This paper introduces a new version of the Hopfield recurrent neural network, called the optimal fractional continuous Hopfield network (OPT-FRAC-CHN).
The continuous Hopfield network (CHN) is a form of recurrent artificial neural network introduced by John Hopfield in 1984 [17]. It is composed of n fully connected neurons and serves as an associative memory system with continuous activation units; see Figure 1.
The dynamics of this neural network are governed by the following system of differential equations:
$\frac{du}{dt} = T v + I \qquad (1)$
where $T \in \mathbb{R}^{n \times n}$ is the matrix of connection weights between the neurons, $I \in \mathbb{R}^n$ is the vector of biases, $u$ is the vector of neuron states, and $v$ is the vector of neuron outputs. The output $v_i$ of neuron $i$ is deduced from its state $u_i$ by $v_i = g(u_i)$, where the activation function $g$ is built from the hyperbolic tangent; in the scaled form $g(u) = \frac{1}{2}\big(1 + \tanh(\frac{u}{u_0})\big)$, the outputs lie in $[0,1]$ and $u_0$ controls the slope. The dynamics of the CHN are associated with an energy function (a Lyapunov function) that decreases until an equilibrium point is reached. If the matrix $T$ is symmetric, then the CHN has an equilibrium point [17].
The CHN solves optimization problems that can be expressed as the constrained minimization of
$E_{Lyap}(v) = -\frac{1}{2}\, v^t T v - I^t v \qquad (2)$
The minima of $E_{Lyap}$ lie at the corners of the hypercube $[0,1]^n$; see [17,18,19,20,21].
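To make the dynamics (1) and the energy decrease concrete, here is a minimal NumPy sketch. It assumes the scaled activation $g(u) = \frac{1}{2}(1+\tanh(u/u_0))$, whose outputs lie in $[0,1]$, and a small random symmetric instance; none of the numerical values are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, u0, dt = 8, 0.5, 1e-3
T = rng.uniform(-10, 0, (n, n))
T = (T + T.T) / 2                      # symmetric weights, so E is a Lyapunov function
I = rng.uniform(0, 5, n)

def energy(v):
    """Lyapunov energy E(v) = -1/2 v^t T v - I^t v (Eq. (2))."""
    return -0.5 * v @ T @ v - I @ v

u = rng.normal(0.0, 0.1, n)            # initial neuron states
E_prev = np.inf
for _ in range(5000):
    v = (1 + np.tanh(u / u0)) / 2      # outputs in [0, 1]
    u += dt * (T @ v + I)              # Euler step of Eq. (1): du/dt = Tv + I
    E_now = energy(v)
    assert E_now <= E_prev + 1e-8      # the energy never increases along the flow
    E_prev = E_now
print("equilibrium outputs:", np.round(v, 3), "energy:", round(E_now, 3))
```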
To solve a Quadratic Knapsack Problem (QKSP) using a CHN, it is necessary to build a suitable energy function obtained by aggregating all the QKSP components (the objective function and the constraints) using penalty parameters to balance the objective function against the constraints [22,23,24]. The local minima of this energy function correspond to the optimal solutions of the problem to be solved [25,26].
Several researchers have investigated constrained optimization problems in the context of this neural approach [17,18,19,20,21,27]. In these papers, however, the feasibility of the equilibrium point obtained for the optimization problem is not necessarily assured. Moreover, when constructing the energy function, how the objective function and the constraints are weighted remains a major challenge. A synthetic study of the internal workings and of the energy functions introduced to solve the QKSP enabled the authors of [28] to suggest a general formulation that allows QKSP-type problems to be solved via a CHN while ensuring the feasibility of the equilibrium points. To guarantee the feasibility of the CHN equilibrium point, these authors decomposed the solution space into suitable subsets and guided the convergence of the CHN toward a feasible solution. Subsequently, several authors have used this energy function to solve QKSP-type problems [5,15]. To solve the underlying system with the Euler method, each equation is discretized using a fixed time step [29]. However, this method has a large local truncation error, thereby requiring a very small step size at the cost of slower convergence and larger rounding errors in floating-point computations; it is also sensitive to scale changes. Moreover, the procedure is very sensitive to the initial states and is very time-consuming. To overcome these drawbacks, a variable time-step method that reduces the convergence time was introduced in [30]. Many authors have since used this method to solve constrained optimization problems [13,16]. However, these papers describe the dynamics of a CHN based on the ordinary derivative, which does not account for the memory effect and prevents the CHN from capturing interesting details of the optimization problem studied, due to insufficient complexity. As is well known, fractional-order systems are a generalization of classical integer-order systems; they incorporate derivatives and integrals of non-integer order [31,32]. They have recently gained significant attention in various fields due to their ability to model complex, real-world phenomena accurately [14,33]. Their key feature is that they capture the memory and hereditary properties of processes, which leads to a more faithful representation. In this sense, combining fractional-order systems with continuous neural networks yields a powerful framework for solving complex optimization problems. The authors of [27] used a Hopfield network model with fractional-order neurons for parameter estimation, solving the problem with a domain decomposition method. However, choosing an appropriate decomposition remains a challenge, and a poor choice of the three terms of such a decomposition can lead to very poor local minima. In addition, the time step is chosen manually, which can lead to very high processing times and sensitivity.
To improve the capacity of the dynamic Equation (1) to describe the behavior of the dynamical systems associated with the QKSP more accurately, we use the fractional order (FO) in this paper to capture the infinite memory effect, yielding the fractional CHN (FRAC-CHN) [32,33]. To avoid the problems encountered with the decomposition method, the FRAC-CHN system is integrated with the quadratic method for fractional-order equations [34]. To select the best time step at each iteration, a new algorithm for obtaining an equilibrium point of the CHN is introduced, namely the optimal fractional CHN (OPT-FRAC-CHN). This is a variable time-step method that converges to a promising local minimum in just a few iterations. We used this method to find the equilibrium points of random CHN instances and to solve the optimal diet problem by building a suitable energy function. Compared to a CHN with a fixed time step, a CHN with an optimal time step (OPT-CHN) [30], and the FRAC-CHN, the OPT-FRAC-CHN produces the best local minima for random CHN instances and for the optimal diet problem (minimum glycemic load and minimum gaps in the positive and negative nutrient requirements). This can be explained by the increased memory of the CHN thanks to the fractional description of the evolution of this recursive neural network. This memory is extended by the quadratic scheme, presented in Section 3.2, which updates the future state using the expertise of four past steps. In addition, the OPT-FRAC-CHN gets closer to the global minimum thanks to the optimal selection of time steps, enabling the energy function to be optimally reduced at each iteration.
The main contributions of this work are summarized as follows:
  • Introduction of the fractional version of the continuous Hopfield network (CHN);
  • Memory augmentation of the CHN by using a quadratic fractional numerical scheme with a cubic ($O(h^3)$) local truncation error;
  • Introduction of an optimal time-step algorithm to solve the fractional CHN model differential equation;
  • Solution of the optimal diet problem using the FRAC-CHN neural network, the quadratic fractional scheme, and the optimal time-step algorithm.
The rest of the document is organized as follows: Section 2 presents the methodology adopted to realize this work. Section 3 introduces fractional calculus. Section 4 presents the proposed OPT-FRAC-CHN. Section 5 gives the experimental results. Section 6 provides some conclusions and perspectives.

2. Methodology

In this section, we present the methodology used in conducting this study, as shown in Figure 2. Furthermore, we outline the key notations employed in this paper.
  • Fractional calculus basics: First, we define the fractional derivative in the Riemann–Liouville sense [32]. Next, we present the quadratic scheme for fractional derivative approximation [34]. Finally, we provide the estimation of the approximation error.
  • Optimal fractional continuous Hopfield network: First, we give the fractional state differential equation that generalizes the ordinary model of Equation (1) introduced in [17,18,19,20,21]. Second, we use the quadratic scheme to approximate the fractional derivative of the proposed model; see Equation (12). Third, to ensure a maximal decrease of the fractional energy function, we calculate the optimal step using an explicit formula; see Equation (13). Then, we bound the approximation error in the case of the fractional CHN; see Theorems 1 and 3. Fourth, we build a suitable energy function (see Equation (15)) to solve the optimal diet problem using the optimal fractional recurrent neural network [5,15]. In this regard, the objective function and the constraints of the mathematical model (see Equation (14)), introduced in [35,36,37,38,39,40], are combined using penalty parameters to control the feasibility and optimality of the resulting regime. Fifth, we give the algorithms associated with the CHN (Algorithm 1), OPT-CHN (Algorithm 2), FRAC-CHN (Algorithm 3), and OPT-FRAC-CHN (Algorithm 4).
  • Experimentation and implementation details: At this stage, the energy-decrease and convergence properties were verified through a first series of computational experiments based on 100 random CHNs of different sizes [30]. In addition, we used the OPT-FRAC-CHN to solve the optimal feeding problem [35,36,37]:
    - Constraint and objective function parameters are extracted from a set of 177 Moroccan foods described by 20 nutrients;
    - Positive and negative nutrient requirements are extracted from the recommendations of the World Health Organization (WHO) and the Food and Agriculture Organization of the United Nations (FAO) [38,39,40].
Algorithm 1 CHN algorithm
   Time step: $\delta t$;
   Initial output: $v_0$;
   while $i < MaxIter$ do
       $v_{i+1} = v_i - \delta t\, \nabla E(v_i)$
   end while
   return equilibrium point $v^{MaxIter,*}$
Algorithm 2 OPT-CHN algorithm
   Initial time step: $\delta t_0$;
   Initial output: $v_0$;
   while $i < MaxIter$ do
       $A_i = v_i$;
       $B_i = \frac{2}{u_0}\, v_i \odot (1 - v_i) \odot (T v_i + I)$;
       $\delta t_i^* = -\frac{B_i^t (T A_i + I)}{B_i^t T B_i}$;
       $v_{i+1} = A_i + \delta t_i^*\, B_i$
   end while
   return equilibrium point $v^{MaxIter,*}$
Algorithm 3 FRAC-CHN algorithm
   Fractional order: α;
   Time step: $\delta t$;
   Initial outputs: $v_0$, $v_1$, $v_2$, $v_3$;
   while $i < MaxIter$ do
       $A_i = \alpha v_i + \frac{\alpha}{2} v_{i-1} + \frac{\alpha(1-\alpha)}{6} v_{i-2} + \frac{\alpha(1-\alpha)(2-\alpha)}{24} v_{i-3}$;
       $B_i = \frac{2}{u_0}\, v_i \odot (1 - v_i) \odot (T v_i + I)$;
       $v_{i+1} = A_i + \delta t\, B_i$
   end while
   return equilibrium point $v^{MaxIter,*}$
Algorithm 4 OPT-FRAC-CHN algorithm
   Fractional order: α;
   Initial time step: $\delta t_0$;
   Initial outputs: $v_0$, $v_1$, $v_2$, $v_3$;
   while $i < MaxIter$ do
       $A_i = \alpha v_i + \frac{\alpha}{2} v_{i-1} + \frac{\alpha(1-\alpha)}{6} v_{i-2} + \frac{\alpha(1-\alpha)(2-\alpha)}{24} v_{i-3}$;
       $B_i = \frac{2}{u_0}\, v_i \odot (1 - v_i) \odot (T v_i + I)$;
       $\delta t_i^* = -\frac{B_i^t (T A_i + I)}{B_i^t T B_i}$;
       $v_{i+1} = A_i + \delta t_i^*\, B_i$
   end while
   return equilibrium point $v^{MaxIter,*}$
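Since Algorithms 1–4 differ only in whether the history term $A_i$ is fractional and whether the time step is optimized, they can share one implementation. The following is a minimal NumPy sketch under the reconstructions above: all four variants are driven by the output-space direction $B_i = \frac{2}{u_0} v_i \odot (1-v_i) \odot (T v_i + I)$ (so the plain CHN here is the Euler discretization of Equation (9) rather than of Equation (1)), the optimal step comes from Equation (13) and is clipped to $[-0.1, 0.1]$ as in the proof of Theorem 4, and the clipping of outputs to $[0,1]$ and the fallback step are safeguards of our own.

```python
import numpy as np

def frac_history(vs, alpha):
    """A_i from the last four outputs [v_i, v_{i-1}, v_{i-2}, v_{i-3}] (Eq. (12))."""
    w = [alpha,
         alpha / 2,
         alpha * (1 - alpha) / 6,
         alpha * (1 - alpha) * (2 - alpha) / 24]
    return sum(wk * vk for wk, vk in zip(w, vs))

def direction(v, T, I, u0):
    """B_i = (2/u0) v ⊙ (1 - v) ⊙ (T v + I) = -(2/u0) v ⊙ (1 - v) ⊙ ∇E(v)."""
    return (2 / u0) * v * (1 - v) * (T @ v + I)

def optimal_step(A, B, T, I, fallback=1e-3):
    """delta_t_i^* = -B^t (T A + I) / (B^t T B), Eq. (13)."""
    denom = B @ T @ B
    if abs(denom) < 1e-12:              # near an equilibrium; fall back to a fixed step
        return fallback
    return -(B @ (T @ A + I)) / denom

def run(T, I, u0=0.5, alpha=0.7, dt=1e-3, iters=500,
        fractional=False, optimal=False, seed=0):
    """CHN (False/False), OPT-CHN (False/True), FRAC-CHN (True/False), OPT-FRAC-CHN (True/True)."""
    rng = np.random.default_rng(seed)
    vs = [rng.uniform(0.45, 0.55, len(I)) for _ in range(4)]   # v_i, ..., v_{i-3}
    for _ in range(iters):
        A = frac_history(vs, alpha) if fractional else vs[0]
        B = direction(vs[0], T, I, u0)
        step = optimal_step(A, B, T, I) if optimal else dt
        step = float(np.clip(step, -0.1, 0.1))   # the proof of Theorem 4 searches [-0.1, 0.1]
        v_new = np.clip(A + step * B, 0.0, 1.0)  # clipping to [0, 1] is our own safeguard
        vs = [v_new] + vs[:3]
    return vs[0]

# Usage: OPT-FRAC-CHN (Algorithm 4) on one random instance.
rng = np.random.default_rng(1)
n = 10
T = rng.uniform(-10, 0, (n, n)); T = (T + T.T) / 2
I = rng.uniform(0, 5, n)
v_star = run(T, I, fractional=True, optimal=True)
print("energy:", -0.5 * v_star @ T @ v_star - I @ v_star)
```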

3. Fractional Calculus Basics

In this section, we present the Riemann–Liouville fractional integral [32]. Then, we give some very easy-to-use quadratic schemes for fractional derivative approximation [34].

3.1. Fractional Derivative

The Riemann–Liouville [32] fractional integral of order α > 0 is defined as
$I^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t-\tau)^{\alpha-1} f(\tau)\, d\tau \qquad (3)$
and $I^0 f(t) = f(t)$.
The Riemann–Liouville fractional derivative of order $n-1 < \alpha \le n$ is defined as
$D^\alpha f(t) = \frac{1}{\Gamma(n-\alpha)} \left(\frac{d}{dt}\right)^n \int_a^t (t-\tau)^{n-\alpha-1} f(\tau)\, d\tau \qquad (4)$
where n is an integer. Another definition of the fractional derivative, introduced by Caputo [33], of order $m-1 < \alpha \le m$, is
$D^\alpha f(t) = \frac{1}{\Gamma(m-\alpha)} \int_a^t (t-\tau)^{m-\alpha-1} f^{(m)}(\tau)\, d\tau \qquad (5)$
where m is an integer. In the next subsection, we will give a quadratic scheme to approximate the fractional derivative given by Equation (5).
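As a quick sanity check of the Caputo definition (5), the following sketch compares a weighted-quadrature evaluation against the known closed form $D^\alpha t^2 = \frac{2}{\Gamma(3-\alpha)} t^{2-\alpha}$ for $0 < \alpha < 1$ (so m = 1); the test function and the values of α and t are arbitrary choices.

```python
# SciPy's 'alg' quadrature weight absorbs the singular kernel (t - tau)^(-alpha).
from math import gamma
from scipy.integrate import quad

alpha, t = 0.7, 1.5
# Caputo: (1/Gamma(1 - alpha)) * int_0^t (t - tau)^(-alpha) f'(tau) dtau, with f(t) = t^2
integral, _ = quad(lambda tau: 2 * tau, 0, t, weight='alg', wvar=(0, -alpha))
numeric = integral / gamma(1 - alpha)
exact = 2 * t ** (2 - alpha) / gamma(3 - alpha)
print(numeric, exact)   # the two values agree to quadrature accuracy
```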

3.2. Quadratic Scheme for Fractional Derivative Approximation

In this subsection, the interval $[0, t]$ is divided into an even number $N = 2n$ ($n \ge 1$) of subintervals of equal width $h = \frac{t}{2n}$, so that the node points are $t_i = i h$, $i = 0, 1, 2, \ldots, 2n$.
We write $I^\alpha f(t) \approx I_Q(f,h,\alpha)$ and $D^\alpha f(t) \approx D_Q(f,h,\alpha)$, where $I_Q(f,h,\alpha)$ and $D_Q(f,h,\alpha)$ are the quadratic approximations of $I^\alpha f(t)$ and $D^\alpha f(t)$, respectively, with the associated approximation errors
$E_{I_Q}(f,t,h,\alpha) = I^\alpha f(t) - I_Q(f,h,\alpha)$ and $E_{D_Q}(f,t,h,\alpha) = D^\alpha f(t) - D_Q(f,h,\alpha) \qquad (6)$
The function $f(\tau)$ is approximated over the interval $[t_{2i}, t_{2i+2}]$ by the quadratic interpolant [34]
$f_i^2(\tau) = \frac{\tau - t_{2i+1}}{2h}\left[\frac{\tau - t_{2i+1}}{h} - 1\right] f_{2i} + \left[1 - \left(\frac{\tau - t_{2i+1}}{h}\right)^2\right] f_{2i+1} + \frac{\tau - t_{2i+1}}{2h}\left[\frac{\tau - t_{2i+1}}{h} + 1\right] f_{2i+2} \qquad (7)$
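A small check that the interpolant (7), written in terms of $s = (\tau - t_{2i+1})/h$, is the Lagrange quadratic through the three panel nodes and hence reproduces any quadratic exactly; the panel and the test function below are arbitrary.

```python
import numpy as np

def panel_interp(tau, t_mid, h, f0, f1, f2):
    """Lagrange quadratic through (t_mid - h, f0), (t_mid, f1), (t_mid + h, f2)."""
    s = (tau - t_mid) / h
    return 0.5 * s * (s - 1) * f0 + (1 - s ** 2) * f1 + 0.5 * s * (s + 1) * f2

f = lambda x: 3 * x ** 2 - 2 * x + 1               # an arbitrary quadratic
t0, h = 0.4, 0.1                                   # panel [0.4, 0.6] with midpoint 0.5
taus = np.linspace(t0, t0 + 2 * h, 11)
approx = panel_interp(taus, t0 + h, h, f(t0), f(t0 + h), f(t0 + 2 * h))
assert np.allclose(approx, f(taus))                # reproduced exactly (up to round-off)
```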
We suppose that $f \in C^{m+3}[0,\delta]$ and that the interval $[0,\delta]$ is divided into an even number of subintervals $[t_{2i}, t_{2i+2}]$ such that $t_i = ih$ with $h = \frac{\delta}{2n}$, $i = 0, 1, 2, \ldots, 2n$; then, the quadratic approximation $D_Q(f,h,\alpha)$ of the D-operator is given by [41]
$D_Q(f,h,\alpha) = \sum_{i=0}^{n-1} \left( A_m f^{(m)}(t_{2i}) + B_m f^{(m)}(t_{2i+1}) + C_m f^{(m)}(t_{2i+2}) \right) \qquad (8)$
where
$A_m = \frac{2^{m-\alpha}\, h^{m-\alpha}}{\Gamma(m-\alpha+3)} \Big[ (n-i-1)^{m-\alpha+1}\,(2-m+\alpha+4i-4n) + (n-i)^{m-\alpha}\,\big( 2 + (m-\alpha)^2 + 4i^2 + i(6-8n) + 3(m-\alpha)(1+i-n) - 6n + 4n^2 \big) \Big]$
$B_m = \frac{2^{m-\alpha+2}\, h^{m-\alpha}}{\Gamma(m-\alpha+3)} \Big[ (n-i-1)^{m-\alpha+1}\,(m-\alpha-2i+2n) + (n-i)^{m-\alpha+1}\,(2+m-\alpha+2i-2n) \Big]$
$C_m = \frac{2^{m-\alpha}\, h^{m-\alpha}}{\Gamma(m-\alpha+3)} \Big[ (n-i)^{m-\alpha+1}\,(2+m-\alpha+4i-4n) + (n-i-1)^{m-\alpha}\,\big( (m-\alpha)^2 + 2i - 3(m-\alpha)i + 4i^2 - 2n + 3(m-\alpha)n - 8in + 4n^2 \big) \Big]$
Theorem 1.
The approximation error satisfies
$|E_{D_Q}(f,t,h,\alpha)| \le C_\alpha\, \|f^{(m+3)}\|_\infty\, t^{m-\alpha}\, h^3$
where $C_\alpha$ is a constant that depends solely on α [41].

4. Optimal Fractional Continuous Hopfield Network

In this section, we give the fractional continuous Hopfield network, which involves only the neuron outputs. Then, we demonstrate that its governing equation has a unique solution. In addition, we introduce our method, the OPT-FRAC-CHN. Finally, we build a specific energy function to solve the optimal diet problem.

4.1. Fractional Continuous Hopfield Network

The dynamical system (1) can also be expressed in terms of the outputs as
$\frac{dv_i}{dt} = \frac{2}{u_0}\, v_i(1 - v_i)\Big(\sum_{j=1}^n T_{ij} v_j + I_i\Big), \quad v_i(0) = v_i^0 \in [0,1], \quad i = 1, \ldots, n \qquad (9)$
The dynamics of the FRAC-CHN are governed by
$\frac{d^\alpha v_i}{dt^\alpha} = \frac{2}{u_0}\, v_i(1 - v_i)\Big(\sum_{j=1}^n T_{ij} v_j + I_i\Big), \quad v_i(0) = v_i^0 \in [0,1], \quad i = 1, \ldots, n \qquad (10)$
We define the following function:
$F_i(t,v) = \frac{2}{u_0}\, v_i(1 - v_i)\Big(\sum_{j=1}^n T_{ij} v_j + I_i\Big), \quad i = 1, \ldots, n \qquad (11)$
Theorem 2.
The system of Equation (10) has a unique solution.
Proof. 
For the system (10) to have a unique solution, we must show that the function $F_i$ is Lipschitzian with respect to $v$:
$|F_i(t,v) - F_i(t,w)| = \frac{2}{u_0} \left| v_i(1-v_i)\Big(\sum_{j=1}^n T_{ij}v_j + I_i\Big) - w_i(1-w_i)\Big(\sum_{j=1}^n T_{ij}w_j + I_i\Big) \right|.$
By adding and subtracting the term $v_i(1-v_i)\big(\sum_{j=1}^n T_{ij}w_j + I_i\big)$ and using the triangle inequality together with the fact that $v_i \in [0,1]$ (thus $|v_i(1-v_i)| = v_i(1-v_i)$), we obtain
$|F_i(t,v) - F_i(t,w)| \le \frac{2}{u_0}\, v_i(1-v_i) \Big| \sum_{j=1}^n T_{ij}(v_j - w_j) \Big| + \frac{2}{u_0} \Big| \sum_{j=1}^n T_{ij}w_j + I_i \Big|\, |v_i(1-v_i) - w_i(1-w_i)|.$
As $w_j \in [0,1]$ for $j = 1, \ldots, n$, we have $\big|\sum_{j=1}^n T_{ij}w_j + I_i\big| \le \sum_{j=1}^n |T_{ij}| + |I_i|$ and $\big|\sum_{j=1}^n T_{ij}w_j\big| \le \sum_{j=1}^n |T_{ij}|$, $i = 1, \ldots, n$.
We set $S_1 = \max\big\{\sum_{j=1}^n |T_{ij}| + |I_i|,\; i = 1, \ldots, n\big\}$ and $S_2 = \max\big\{\sum_{j=1}^n |T_{ij}|,\; i = 1, \ldots, n\big\}$; since $\max\{a(1-a) : a \in [0,1]\} = \frac{1}{4}$, we get the following inequality:
$|F_i(t,v) - F_i(t,w)| \le \frac{S_2}{2u_0}\, \|v - w\| + \frac{2S_1}{u_0}\, |v_i(1-v_i) - w_i(1-w_i)|.$
Moreover, $|v_i(1-v_i) - w_i(1-w_i)| \le |v_i - w_i|\,(1 + v_i + w_i) \le 3\,|v_i - w_i|$.
Then, we get the following result:
$|F_i(t,v) - F_i(t,w)| \le \frac{1}{u_0}\Big(\frac{S_2}{2} + 6S_1\Big)\, \|v - w\|, \quad i = 1, \ldots, n.$ □
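The Lipschitz bound established above can be probed numerically. The sketch below checks $|F_i(t,v) - F_i(t,w)| \le \frac{1}{u_0}\big(\frac{S_2}{2} + 6 S_1\big)\, \|v-w\|$ on a random instance, reading $\|\cdot\|$ as the sup norm; the instance itself is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, u0 = 6, 0.5
T = rng.uniform(-10, 0, (n, n)); I = rng.uniform(0, 5, n)

F = lambda v: (2 / u0) * v * (1 - v) * (T @ v + I)     # Eq. (11), componentwise
S1 = np.max(np.abs(T).sum(axis=1) + np.abs(I))
S2 = np.max(np.abs(T).sum(axis=1))
L = (S2 / 2 + 6 * S1) / u0                             # Lipschitz constant from Theorem 2

for _ in range(1000):
    v, w = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    assert np.max(np.abs(F(v) - F(w))) <= L * np.max(np.abs(v - w)) + 1e-12
```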
Lemma 1.
For all $i = 1, \ldots, n$ and all $d \in \mathbb{N}$, $\frac{d^d v_i}{dt^d}$ is a polynomial in the vector $v$.
Proof. 
For $i = 1, \ldots, n$, $\frac{dv_i}{dt} = \frac{2}{u_0}\, v_i(1-v_i)\big(\sum_{j=1}^n T_{ij}v_j + I_i\big)$; thus $\frac{dv_i}{dt}$ is a polynomial of degree 3.
Suppose that $\frac{d^d v_i}{dt^d}$ is a polynomial $p(v)$ of degree $q$ in the vector $v$:
$p(v) = \sum_{k=1}^m a_k \prod_{i=1}^n v_i^{n_{i,k}}, \quad \text{where } \sum_{i=1}^n n_{i,k} \le q \;\; \forall k.$
Then
$\frac{d^{d+1} v_i}{dt^{d+1}} = \frac{dp(v)}{dt} = \sum_{k=1}^m a_k \sum_{j=1}^n n_{j,k}\, \frac{dv_j}{dt}\, v_j^{n_{j,k}-1} \prod_{i \ne j} v_i^{n_{i,k}} = \sum_{k=1}^m \frac{2 a_k}{u_0} \sum_{j=1}^n n_{j,k}\, v_j(1-v_j)\Big(\sum_{i=1}^n T_{ji}v_i + I_j\Big)\, v_j^{n_{j,k}-1} \prod_{i \ne j} v_i^{n_{i,k}}.$
Thus, $\frac{d^{d+1} v_i}{dt^{d+1}}$ is a polynomial of degree $\max_k \big\{ 2 + \sum_{j=1}^n n_{j,k} \big\} = 2 + q$. □
Theorem 3.
The approximation error satisfies
$\max\big\{ |E_{D_Q}(v_i, t, h, \alpha)|,\; i = 1, \ldots, n \big\} \le C_\alpha\, C_{m,T,I,u_0}\, t^{m-\alpha}\, h^3$
where $C_\alpha$ is a constant depending only on α, and $C_{m,T,I,u_0}$ is a constant depending only on m, T, I, and $u_0$.
Proof. 
By Theorem 1 and Lemma 1, we have $|E_{D_Q}(v_i,t,h,\alpha)| \le C_\alpha^i\, \|v_i^{(m+3)}\|\, t^{m-\alpha}\, h^3$, where $C_\alpha^i$ is a constant depending only on α.
As $v_i^{(m+3)}$ is a polynomial in $v$ and $v_i \in [0,1]$, we have $\|v_i^{(m+3)}\| \le C_{m,T,I,u_0}$, where $C_{m,T,I,u_0}$ is a constant depending only on m, T, I, and $u_0$ (the slope parameter of the neuron activation function). Defining $C_\alpha = \max\{C_\alpha^i,\; i = 1, \ldots, n\}$ gives the desired result. □

4.2. Fractional Continuous Hopfield Network with Optimal Time Step

In the OPT-FRAC-CHN, the update process is based on fractional calculus (FC). The derivative in Equation (10) is discretized by keeping the first four terms of the fractional expansion:
$v_{i+1} = \alpha v_i + \frac{\alpha}{2} v_{i-1} + \frac{\alpha(1-\alpha)}{6} v_{i-2} + \frac{\alpha(1-\alpha)(2-\alpha)}{24} v_{i-3} + \delta t\, \frac{2}{u_0}\, v_i(1-v_i)(T v_i + I) \qquad (12)$
where α is a constant in the interval $[0,1]$, $\delta t$ is the time step, $v_i$ denotes the neuron outputs at iteration i, $v_{i-1}$ those at iteration $i-1$, and so on; note that $\frac{2}{u_0} v_i(1-v_i)(T v_i + I) = -\frac{2}{u_0} v_i(1-v_i) \odot \nabla E(v_i)$, where the products are taken componentwise. Figure 3 gives an electronic representation of the discrete fractional continuous Hopfield lattice. In this sense, to compute $v_t$, we use $v_{t-1}$, $v_{t-2}$, and $v_{t-3}$ together with the appropriate weights obtained from the discrete approximation of the fractional derivative, namely α, $\frac{\alpha}{2}$, $\frac{\alpha(1-\alpha)}{6}$, and $\frac{\alpha(1-\alpha)(2-\alpha)}{24}$, respectively.
To determine the equilibrium point of the suggested recurrent neural network, we use Algorithm 4, whose optimal time step is given by the following result:
Theorem 4.
Let α be a constant in the interval $[0,1]$ and let $v_i$, $v_{i-1}$, $v_{i-2}$, and $v_{i-3}$ be the outputs of the FRAC-CHN at iterations i, $i-1$, $i-2$, and $i-3$, respectively. Then, the optimal fractional time step is given by
$\delta t_i^* = -\frac{B_i^t (T A_i + I)}{B_i^t T B_i} \qquad (13)$
where $B_i = \frac{2}{u_0}\, v_i(1-v_i)(T v_i + I)$ and $A_i = \alpha v_i + \frac{\alpha}{2} v_{i-1} + \frac{\alpha(1-\alpha)}{6} v_{i-2} + \frac{\alpha(1-\alpha)(2-\alpha)}{24} v_{i-3}$.
Proof. 
First, we set $\phi(\delta t) = E(A_i + \delta t\, B_i)$, with $A_i$ and $B_i$ as above.
$\delta t_i^*$ is the solution of $\min_{\delta t \in [-0.1,\, +0.1]} \phi(\delta t)$ provided $\frac{d\phi}{d\delta t}(\delta t_i^*) = 0$.
Thus $B_i^t\, \nabla E(A_i + \delta t_i^*\, B_i) = 0$; since $\nabla E(v) = -(Tv + I)$, this gives $B_i^t \big( T(A_i + \delta t_i^*\, B_i) + I \big) = 0$, and therefore
$\delta t_i^* = -\frac{B_i^t (T A_i + I)}{B_i^t T B_i}.$ □
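A numerical sanity check of Equation (13): at $\delta t_i^*$, the derivative of $\phi(\delta t) = E(A_i + \delta t\, B_i)$ vanishes. Since the argument only uses the quadratic form of E, the sketch below takes A and B to be arbitrary vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
T = rng.uniform(-10, 0, (n, n)); T = (T + T.T) / 2
I = rng.uniform(0, 5, n)
A, B = rng.uniform(0, 1, n), rng.normal(0, 1, n)

E = lambda v: -0.5 * v @ T @ v - I @ v            # Lyapunov energy, Eq. (2)
dt_star = -(B @ (T @ A + I)) / (B @ T @ B)        # Eq. (13)

eps = 1e-6                                        # central difference for phi'(dt*)
dphi = (E(A + (dt_star + eps) * B) - E(A + (dt_star - eps) * B)) / (2 * eps)
assert abs(dphi) < 1e-4                           # phi'(dt*) = 0 up to round-off
print("optimal step:", dt_star)
```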

4.3. Application: OPT-FRAC-CHN to Optimal Diet Problem

The fuzzy quadratic optimization program (P), which minimizes the total glycemic load together with the favorable and unfavorable nutrient gaps [42,43], is given by Equation (14) [35,36,37,38,39,40]:
$(P): \quad \min\; g^t x + \frac{\mu}{2}\,\|A x - b\|^2 + \frac{\beta}{2}\,\|E x - f\|^2 \quad \text{subject to } x \in \{0,1\}^d \qquad (14)$
where the terms are defined as follows:
  • d is the number of foods;
  • $x = (x_j)_{j=1:d}$ is the vector of food serving sizes;
  • g is the vector formed by the foods' glycemic loads;
  • A is the matrix of the favorable nutrients and b is the vector of the favorable nutrient requirements; E is the matrix of the unfavorable nutrients and f is the vector of the maximum tolerated amounts of unfavorable nutrients;
  • μ and β are penalty parameters that balance the components of the objective function. In practice, if β ≫ μ, the CHN will focus on the objective function to the detriment of the constraints; otherwise, the constraints will attract more of the CHN's attention [13,16,35,37,38]. $\mathbf{1}$ denotes the vector of ones in $\mathbb{R}^d$.
To solve the problem (P) using the CHN, we introduce the following energy function:
$E(x) = g^t x + \frac{\mu}{2}\,\|A x - b\|^2 + \frac{\beta}{2}\,\|E x - f\|^2 + \frac{\gamma}{2}\, x^t(\mathbf{1} - x) \qquad (15)$
where γ is a penalty parameter that controls the binarity of the outputs of the CHN units. In this sense, the differential equation that governs the dynamics of the diet is given by
$\frac{dv_i}{dt} = \frac{2}{u_0}\, v_i(1 - v_i)\Big(\sum_{j=1}^n T^D_{ij} v_j + I^D_i\Big), \quad v_i(0) = v_i^0 \in [0,1], \quad i = 1, \ldots, n \qquad (16)$
The dynamics of the FRAC-CHN related to the optimal diet problem are governed by
$\frac{d^\alpha v_i}{dt^\alpha} = \frac{2}{u_0}\, v_i(1 - v_i)\Big(\sum_{j=1}^n T^D_{ij} v_j + I^D_i\Big), \quad v_i(0) = v_i^0 \in [0,1], \quad i = 1, \ldots, n \qquad (17)$
To calculate $T^D$ and $I^D$, we compute the first and second derivatives of the energy function (15):
$\frac{dE}{dx} = g + \mu A^t(Ax - b) + \beta E^t(Ex - f) + \frac{\gamma}{2}(\mathbf{1} - 2x), \qquad \frac{d^2E}{dx^2} = \mu A^t A + \beta E^t E - \gamma\, \mathrm{Id}_d$
The bias and the weights of the built CHN are then
$I^D = -\frac{dE}{dx}(0) = -g + \mu A^t b + \beta E^t f - \frac{\gamma}{2}\mathbf{1}, \qquad T^D = -\frac{d^2E}{dx^2} = -\mu A^t A - \beta E^t E + \gamma\, \mathrm{Id}_d \qquad (18)$
where $\mathrm{Id}_d$ is the $d \times d$ identity matrix.
To prevent the CHN from stabilizing in the interior of $[0,1]^d$, we impose the following constraint on the penalty parameters [15,16]:
$\gamma \ge \mu\, \max(\mathrm{diag}(A^t A)) + \beta\, \max(\mathrm{diag}(E^t E)) \qquad (19)$
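To illustrate how the diet CHN is assembled, the sketch below builds $T^D$ and $I^D$ from Equation (18) and takes the smallest γ admitted by the feasibility condition (19). The data matrices are synthetic stand-ins for the 177-food, 20-nutrient dataset; only μ = 1.25 and β = 1 are taken from this paper (Section 5.2).

```python
import numpy as np

rng = np.random.default_rng(4)
d, p, q = 12, 5, 3                      # foods, favorable and unfavorable nutrients (toy sizes)
g = rng.uniform(0, 30, d)               # glycemic load of each food
A = rng.uniform(0, 1, (p, d)); b = rng.uniform(5, 10, p)     # favorable nutrients / requirements
E_mat = rng.uniform(0, 1, (q, d)); f = rng.uniform(1, 3, q)  # unfavorable nutrients / upper limits
mu, beta = 1.25, 1.0                    # penalty parameters from Section 5.2

# Feasibility condition (19): the smallest admissible gamma
gamma = mu * np.max(np.diag(A.T @ A)) + beta * np.max(np.diag(E_mat.T @ E_mat))

# Equation (18): bias and weights of the diet CHN
I_D = -g + mu * A.T @ b + beta * E_mat.T @ f - gamma / 2 * np.ones(d)
T_D = -mu * A.T @ A - beta * E_mat.T @ E_mat + gamma * np.eye(d)
print(T_D.shape, I_D.shape, round(float(gamma), 3))
```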

4.4. Proposed Algorithm

The algorithms associated with the CHN, OPT-CHN, FRAC-CHN, and OPT-FRAC-CHN will be used to find the equilibrium point of different instances. It will therefore be of great educational interest to recall the main instructions of these algorithms.
Algorithm 1 presents the main steps of the basic CHN [5,17,18,19]. This algorithm builds a trajectory based on the same time step over the course of iterations.
Algorithm 2 presents the main steps of the OPT-CHN [15,26,28]. This algorithm builds a trajectory based on an optimal time step computed at each iteration, which accelerates convergence. However, this method only takes into account the neuron outputs of the previous iteration, $v_i$.
Algorithm 3 presents the main steps of the FRAC-CHN. This algorithm builds a trajectory based on the same time step throughout iterations [34], which causes fluctuation in the trajectory to the equilibrium point. However, this method takes into account the neuron outputs of the four previous iterations v i , v i 1 , v i 2 , and v i 3 , which permits the construction and correction of the trajectory to the equilibrium point.
Algorithm 4 presents the main steps of the OPT-FRAC-CHN. This algorithm builds a trajectory based on optimal time steps over the course of the iterations to correct the fluctuation of the FRAC-CHN. In addition, this method takes into account the neuron outputs of the four previous iterations $v_i$, $v_{i-1}$, $v_{i-2}$, and $v_{i-3}$, which permits the construction and correction of the trajectory to the equilibrium point using its long memory.

5. Experimentation and Implementation Details

To test the algorithms introduced in this paper, computer programs were designed to calculate the equilibrium points of several randomly generated CHNs and to solve the optimal diet problem; see Section 4.3. Two programs, coded in Matlab, computed an equilibrium point of the CHN by the Euler method and by the Euler method with optimal step, respectively. Two other programs calculated a FRAC-CHN equilibrium point using the quadratic method of Section 3.2, with a fixed time step and with the sequence of optimal steps given by Equation (13), respectively.

5.1. Testing and Comparison on Random Instances

The energy-decrease and convergence properties were verified through a first set of computational experiments based on 100 random CHNs of different sizes. As in the study [30], the parameters (weights and biases) of the CHNs and FRAC-CHNs were randomly generated as follows:
$T = U[-10, 0]^{d \times d}$, $I = U[0, 5]^d$, and $u_0 \in [0.02, 2]$, where U is the uniform distribution.
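Assuming the run() helper from the sketch in Section 2 is in scope, one possible way to reproduce this comparison on a single random instance drawn as above:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 20
T = rng.uniform(-10, 0, (d, d)); T = (T + T.T) / 2   # symmetric weights in U[-10, 0]
I = rng.uniform(0, 5, d)                              # biases in U[0, 5]
energy = lambda v: -0.5 * v @ T @ v - I @ v

for name, frac, opt in [("CHN", False, False), ("OPT-CHN", False, True),
                        ("FRAC-CHN", True, False), ("OPT-FRAC-CHN", True, True)]:
    v = run(T, I, u0=0.5, alpha=0.7, fractional=frac, optimal=opt)
    print(f"{name:13s} energy = {energy(v):9.2f}")
```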
Figure 4, Figure 5, Figure 6 and Figure 7 present the energy function of the continuous Hopfield network (Algorithm 1), the optimal continuous Hopfield network (Algorithm 2), the fractional continuous Hopfield network (α = 0.7, Algorithm 3), and the optimal fractional continuous Hopfield network (Algorithm 4) vs. iterations, respectively. We remark that the energy functions of the different CHNs decrease with the iterations, but the OPT-FRAC-CHN reaches the best equilibrium points. Admittedly, the OPT-CHN escaped the valley that attracted the CHN, thanks to the optimal step calculated at each iteration, but the new minimum remains mediocre compared with that obtained by the FRAC-CHN. Compared with the CHN and OPT-CHN, sampling over α enabled us to find a population of fractional orders for which the FRAC-CHN performs better than the CHN and OPT-CHN. This can be explained by the increased memory of the CHN thanks to the fractional description of the evolution of this recursive neural network. This memory is extended by the quadratic scheme, presented in Section 3.2, which updates the future state using the expertise of four past steps. In addition, for almost all instances, the FRAC-CHN was attracted by a local minimum of average quality due to the non-variability of the manually chosen time step. In this context, it is very difficult to analyze the CHN input data (weights and biases) of each instance in order to choose an appropriate time step; furthermore, a time step may be adequate for one instance but produce a very bad solution for a new instance. On the contrary, the OPT-FRAC-CHN gets closer to the global minimum thanks to the optimal selection of time steps, enabling the energy function to be optimally reduced at each iteration. The other advantage, of a practical nature, is that, once initialized, the evolution of the OPT-FRAC-CHN is automatic, because the optimal step is given by Formula (13) as a function of the current neuron activations, the fractional order α, the weights, and the biases.
To highlight the trajectory generated when calculating the equilibrium points of the CHN, OPT-CHN, FRAC-CHN, and OPT-FRAC-CHN, we have displayed the isolines of the energy function of these CHNs.
Figure 8, Figure 9, Figure 10 and Figure 11 display the isolines of the energy function of one CHN instance. Starting from the same initial states, the trajectories of the different methods converge to their equilibrium points. For the classic CHN, the optimal step only removes a few stations from the trajectory followed when searching for the minimum; for instances with several local minima, this reduction can be more remarkable.
Given the need for four initializations, the FRAC-CHN reaches the best equilibrium point rather late; indeed, this version of the CHN generates several intermediate points before reaching the minimum of the energy function. It is possible to generate the three required vectors of neuron activations using the CHN and then continue the search using the FRAC-CHN, which improves the behavior of the latter. In our case, the optimal choice of time step, using Formula (13), enabled us to considerably reduce the number of intermediate points in the search trajectory. In fact, by selecting the optimal time step at each iteration, the OPT-FRAC-CHN ensures the maximum reduction of the CHN's energy function, avoiding the exploration of unpromising regions.

5.2. Optimal Diet Using OPT-FRAC-CHN

We used the OPT-FRAC-CHN to solve the optimal diet problem based on the energy function (15) introduced in Section 4.3 and by considering the parameter constraints Equation (19) to ensure the feasibility of the equilibrium points.
A feasible solution of Equation (19) is μ = 1.25, β = 1, and γ = 0.125. The maximum number of iterations is 200. The initial state is randomly generated following the uniform distribution U[0, 1].
In the optimal diet problem, researchers adopt three performance measures [35,36,37]: the glycemic load, the positive nutrient requirement gap, and the negative nutrient requirement gap.
Table 1 gives the total glycemic load, positive nutrient gap, and negative nutrient gap of the optimal diets obtained by the CHN, OPT-CHN, FRAC-CHN, and OPT-FRAC-CHN for eight fractional orders (0.5, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, and 0.61). We remark that the OPT-FRAC-CHN produces optimal regimes with a very low glycemic load and low positive and negative nutrient requirement gaps for the different fractional orders. Compared to the CHN, OPT-CHN, and FRAC-CHN, the proposed method produces the best diets. In fact, the CHN and OPT-CHN produce zero-glycemic-load diets, which is appealing, but these diets show very large discrepancies in terms of positive and negative nutrient requirements. Compared with the diets produced by the CHN and OPT-CHN, and for almost all values of the fractional order α, the FRAC-CHN produces diets that strike a good compromise between the three criteria: glycemic load, positive nutrient requirement gap, and negative nutrient requirement gap. This can be explained by the increased memory of the CHN thanks to the fractional description of the evolution of this recursive neural network, a memory that is extended by the quadratic scheme, which updates the future state using the expertise of four past steps. However, these diets are still poor compared to the ones produced by the OPT-FRAC-CHN with its optimal step. Indeed, the selection of the optimal time step at each iteration permits the OPT-FRAC-CHN to find better equilibrium points than the FRAC-CHN.
To illustrate the behavior of the energy function given by Equation (15), we plot its dynamics vs. time for the CHN (time step = 0.001), OPT-CHN, FRAC-CHN (time step = 0.001, α = 0.7), and OPT-FRAC-CHN (α = 0.7); see Figure 12, Figure 13, Figure 14 and Figure 15, where bold distinguishes the model with the best performance. We remark that the CHN and OPT-CHN were attracted by a mediocre local minimum from the beginning until the end of the iterations. We also remark that the FRAC-CHN exhibits a periodic behavior, which may prevent the system from reaching an equilibrium point because of the non-variability of the time step. This inconvenience was addressed by the OPT-FRAC-CHN thanks to the optimal time steps calculated at each iteration. We tested the OPT-FRAC-CHN for different α values in $[0, 0.5[\,\cup\,]0.61, 1]$, but the regimes obtained were not satisfactory.

5.3. Limitations

In the course of our experimental work, we identified certain limitations, which we discuss below.
Memory complexity: considering four vectors of neuron activations at each iteration increases the memory used by an additional $3 \times \mathrm{size(problem)} \times \mathrm{size(float)}$.
Optimal fraction choice: to find out which values of the fractional order produce good solutions, we discretized the interval [0, 1] and searched for an equilibrium point for each of these values, which is costly and may still miss the best fractional order.
Initialization of neuron activation vectors: it is always difficult to choose a good initialization for a single activation vector, and the problem becomes increasingly difficult when four activation vectors are involved, as this has a strong influence on the quality of the equilibrium points produced.
Stochastic nature of the problem: the parameters of the food model given by Equation (14) are stochastic, because the nutrient content of 100 g of a given food changes according to several factors (temperature, age, speed …). However, the CHN is not complex enough to capture this phenomenon.

6. Conclusions

Due to the non-local character of the "infinite memory" effect, fractional-order (FO) models have been proven to describe the behavior of real dynamical systems more accurately than ordinary differential models. A new fractional model of the CHN, namely the FRAC-CHN, that involves only the neuron outputs was introduced in this paper. To calculate an equilibrium point of the FRAC-CHN, a quadratic integration scheme was used; thanks to its multistep character, this method permits correction of the trajectory to the equilibrium point. However, with a fixed time step the trajectory still fluctuates; to address this shortcoming, a quadratic integration with an optimal time step was introduced, namely the OPT-FRAC-CHN. This method was used to calculate the equilibrium points of several randomly generated CHN instances and to solve the optimal diet problem. Compared with the CHN, OPT-CHN, and FRAC-CHN, and thanks to its long memory, its multistep character, and the optimal time step, the OPT-FRAC-CHN produced the best equilibrium points (with a very low energy), corrected the fluctuation of the trajectories to the equilibrium points, and needed fewer iterations to reach them. In short, this superiority can be explained by the increased memory of the CHN due to the fractional description of the evolution of this network. This memory is enlarged through the quadratic scheme, which updates the coming state using the expertise of four previous states. In addition, the OPT-FRAC-CHN moves closer to the global minimum by selecting the optimal time steps, thereby optimally decreasing the energy function at each iteration.
In conducting our experimental phase, we noted several drawbacks (discussed in Section 5.3): (a) the increase in memory complexity, (b) the difficulty of initializing the four neuron activation vectors, and (c) the inability of the CHN to capture the stochastic nature of optimization problems. To overcome these limitations, we can (a) decrease the order of the numerical method used to solve Equation (10), (b) use the classical CHN to obtain the necessary initializations, and (c) use fuzzy logic to describe the evolution of the CHN.
The equilibrium points produced by the OPT-FRAC-CHN can be used to solve Quadratic Knapsack Problems from different fields, in particular the optimal diet problem, image restoration, image segmentation, the portfolio management problem, and the channel allocation problem.
Given that the energy function does not increase, the suggested algorithm consistently reached a local minimum, functioning as an intelligent local search algorithm. Therefore, to prevent being drawn towards local minima, a fuzzy adaptation of the OPT-FRAC-CHN could be a viable solution. Furthermore, the OPT-FRAC-CHN can be expanded to address non-quadratic combinatorial optimization problems.

Author Contributions

Conceptualization, K.E.M.; Methodology, K.E.M.; Investigation, A.A. (Abdellah Ahourag), A.A. (Ahmed Aberqi) and T.K.; Resources, T.K.; Writing—original draft, K.E.M.; Writing—review & editing, Z.B. and A.A. (Abdellah Ahourag); Supervision, K.E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data will be made available to authors with permission.

Acknowledgments

(a) This work was supported by the Ministry of National Education, Professional Training, Higher Education and Scientific Research (MENFPESRS), and the Digital Development Agency (DDA) and CNRST of Morocco (Nos. Alkhawarizmi/2020/23); (b) The authors thank the reviewers and appreciate all valuable comments and suggestions, which helped to improve the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following are the main notations used in this document:
EP — Equilibrium point
QKSP — Quadratic Knapsack Problem
FC — Fractional calculus
CHN — Continuous Hopfield network
T, I — Parameters (weights and biases) of the CHN
OPT-CHN — Optimal continuous Hopfield network
FRAC-CHN — Fractional continuous Hopfield network
OPT-FRAC-CHN — Optimal fractional continuous Hopfield network
WHO — World Health Organization
FAO — Food and Agriculture Organization of the United Nations
A — Matrix of positive nutrients of the 177 foods
E — Matrix of negative nutrients of the 177 foods
g — Vector of the glycemic loads of the 177 foods
b — Vector of positive nutrient requirements
f — Vector of negative nutrient requirements

References

  1. Zhou, Y.; Pang, T.; Liu, K.; Mahoney, M.W.; Yang, Y. Temperature balancing, layer-wise weight analysis, and neural network training. arXiv 2023, arXiv:2312.00359. [Google Scholar]
  2. Du, M.; Behera, A.K.; Vaikuntanathan, S. Active oscillatory associative memory. J. Chem. Phys. 2024, 160, 055103. [Google Scholar] [CrossRef] [PubMed]
  3. Abdulrahman, A.; Sayeh, M.; Fadhil, A. Enhancing the analog to digital converter using proteretic hopfield neural network. Neural Comput. Appl. 2024, 36, 5735–5745. [Google Scholar] [CrossRef]
  4. Rbihou, S.; Haddouch, K.; El Moutaouakil, K. Optimizing hyperparameters in Hopfield neural networks using evolutionary search. OPSEARCH 2024, 1–29. [Google Scholar] [CrossRef]
  5. El Alaoui, M.; El Moutaouakil, K.; Ettaouil, M. A multi-step method to calculate the equilibrium point of the Continuous Hopfield Networks: Application to the max-stable problem. Wseas Trans. Syst. Control 2017, 12, 418–425. [Google Scholar]
  6. Uykan, Z. On the Working Principle of the Hopfield Neural Networks and its Equivalence to the GADIA in Optimization. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3294–3304. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  7. Del Re, E.; Fantacci, R.; Ronga, L. A dynamic channel allocation technique based on Hopfield neural networks. IEEE Trans. Veh. Technol. 1996, 45, 26–32. [Google Scholar] [CrossRef]
  8. Kumar, A.; Shukla, R.K.; Shukla, R.S. Enhancement of Energy Optimization in Semi Joint Multipath Routing Protocol using QoS Based on Mobile Ad-Hoc Networks. In Proceedings of the 2023 2nd Edition of IEEE Delhi Section Flagship Conference (DELCON), Rajpura, India, 24–26 February 2023; pp. 1–5. [Google Scholar] [CrossRef]
  9. Hong, Q.; Fu, H.; Liu, Y.; Zhang, J. In-memory computing circuit implementation of complex-valued hopfield neural network for efficient portrait restoration. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 3338–3351. [Google Scholar] [CrossRef]
  10. Wang, H.; Long, A.; Yu, L.; Zhou, H. An efficient approach of graph isomorphism identification using loop theory and hopfield neural networks. Multimed. Tools Appl. 2024, 83, 22545–22566. [Google Scholar] [CrossRef]
  11. Ziane, M.; Sara, C.; Fatima, B.; Abdelhakim, C.; EL Moutaouakil, K. Portfolio selection problem: Main knowledge and models (A systematic review). Stat. Optim. Inf. Comput. 2024, 12, 799–816. [Google Scholar] [CrossRef]
  12. Senhaji, K.; Moutaouakil, K.E.; Ettaouil, M. Portfolio selection problem: New multicriteria approach for the mean-semivariance model. In Proceedings of the 2016 3rd International Conference on Logistics Operations Management (GOL), Fez, Morocco, 23–25 May 2016; pp. 1–6. [Google Scholar] [CrossRef]
  13. El Moutaouakil, K.; El Ouissari, A.; Olaru, A.; Palade, V.; Ciorei, M. OPT-RNN-DBSVM: OPTimal Recurrent Neural Network and Density-Based Support Vector Machine. Mathematics 2023, 11, 3555. [Google Scholar] [CrossRef]
  14. El Moutaouakil, K.; El Ouissari, A. Opt-RNN-DBFSVM: Optimal recurrent neural network density based fuzzy support vector machine. Rairo Oper. Res. 2023, 57, 2804–7303. [Google Scholar] [CrossRef]
  15. Moutaouakil, K.E.; Touhafi, A. A New Recurrent Neural Network Fuzzy Mean Square Clustering Method. In Proceedings of the 2020 5th International Conference on Cloud Computing and Artificial Intelligence: Technologies and Applications (CloudTech), Marrakesh, Morocco, 24–26 November 2020; pp. 1–5. [Google Scholar]
  16. El Moutaouakil, K.; Yahyaouy, A.; Chellak, S.; Baizri, H. An Optimized Gradient Dynamic-Neuro-Weighted-Fuzzy Clustering Method: Application in the Nutrition Field. Int. J. Fuzzy Syst. 2022, 24, 3731–3744. [Google Scholar] [CrossRef]
  17. Hopfield, J.J. Neurons with graded response have collective computational properties like those of two-states neurons. Proc. Natl. Acad. Sci. USA 1984, 81, 3088–3092. [Google Scholar] [CrossRef] [PubMed]
  18. Ghosh, A.; Pal, N.R.; Pal, S.K. Object background classification using Hopfield type neural networks. Int. J. Pattern Recognit. Artif. Intell. 1992, 6, 989–1008. [Google Scholar] [CrossRef]
  19. Nasrabadi, N.M.; Choo, C.Y. Hopfield network for stereo vision correspondence. IEEE Trans. Neural Netw. 1992, 3, 5–13. [Google Scholar] [CrossRef] [PubMed]
  20. Wasserman, P.D. Neural Computing: Theory and Practice; Van Nostrand Reinhold: New York, NY, USA, 1989. [Google Scholar]
  21. Wu, J.K. Neural Networks and Simulation Methods; Marcel Dekker: New York, NY, USA, 1994. [Google Scholar]
  22. Smith, K.A. Neural networks for combinatorial optimization: A review of more than a decade of research. Informs J. Comput. 1999, 11, 15–34. [Google Scholar] [CrossRef]
  23. Joya, G.; Atencia, M.A.; Sandoval, F. Hopfield neural networks for optimization: Study of the different dynamics. Neurocomputing 2002, 43, 219–237. [Google Scholar] [CrossRef]
  24. Wang, L. On the dynamics of discrete-time, continuous-state Hopfield neural networks. IEEE Trans. Circuits Syst. Analog. Digit. Signal Process. 1998, 45, 747–749. [Google Scholar] [CrossRef]
  25. Hopfield, J.J.; Tank, D.W. Neural computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 1–25. [Google Scholar] [CrossRef]
  26. Talavan, P.M.; Yanez, J. Parameter setting of the Hopfield network applied to TSP. Neural Netw. 2002, 15, 363–373. [Google Scholar] [CrossRef] [PubMed]
  27. Fazzino, S.; Caponetto, R.; Patanè, L. A new model of Hopfield network with fractional-order neurons for parameter estimation. Nonlinear Dyn. 2021, 104, 2671–2685. [Google Scholar] [CrossRef] [PubMed]
  28. Talaván, P.M.; Yáñez, J. The generalized quadratic knapsack problem. A neuronal network approach. Neural Netw. 2006, 19, 416–428. [Google Scholar] [CrossRef] [PubMed]
  29. Demidowitsch, B.P.; Maron, I.A.; Schuwalowa, E.S. Metodos Numericos de Analisis; Paraninfo: Madrid, Spain, 1980. [Google Scholar]
  30. Talaván, P.M.; Yáñez, J. A continuous Hopfield network equilibrium points algorithm. Comput. Oper. Res. 2005, 32, 2179–2196. [Google Scholar] [CrossRef]
  31. Danca, M.-F. Hopfield neuronal network of fractional order: A note on its numerical integration. Chaos Solitons Fractals 2021, 151, 111219. [Google Scholar] [CrossRef]
  32. An, T.V.; Phu, N.D.; Van Hoa, N. The stabilization of uncertain dynamic systems involving the generalized Riemann–Liouville fractional derivative via linear state feedback control. Fuzzy Sets Syst. 2023, 472, 108697. [Google Scholar] [CrossRef]
  33. Shana, Y.; Lva, G. New criteria for blow up of fractional differential equations. Filomat 2024, 38, 1305–1315. [Google Scholar]
  34. Pandey, R.K.; Agrawal, O.P. Comparison of four numerical schemes for isoperimetric constraint fractional variational problems with A-operator. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 2–5 August 2015; American Society of Mechanical Engineers: New York, NY, USA, 2015; Volume 57199, pp. 317–324. [Google Scholar]
  35. Donati, M.; Menozzi, D.; Zighetti, C.; Rosi, A.; Zinetti, A.; Scazzina, F. Towards a sustainable diet combining economic, environmental and nutritional objectives. Appetite 2016, 106, 48–57. [Google Scholar] [CrossRef]
  36. Bas, E. A robust optimization approach to diet problem with overall glycemic load as objective function. Appl. Math. Model. 2014, 38, 4926–4940. [Google Scholar] [CrossRef]
  37. El Moutaouakil, K.; Ahourag, A.; Chakir, S.; Kabbaj, Z.; Chellack, S.; Cheggour, M.; Baizri, H. Hybrid firefly genetic algorithm and integral fuzzy quadratic programming to an optimal Moroccan diet. Math. Model. Comput. 2023, 10, 338–350. [Google Scholar] [CrossRef]
  38. Ahourag, A.; Chellak, S.; Cheggour, M.; Baizri, H.; Bahri, A. Quadratic Programming and Triangular Numbers Ranking to an Optimal Moroccan Diet with Minimal Glycemic Load. Stat. Optim. Inf. Comput. 2023, 11, 85–94. [Google Scholar]
  39. El Moutaouakil, K.; Ahourag, A.; Chellak, S.; Baïzri, H.; Cheggour, M. Fuzzy Deep Daily Nutrients Requirements Representation. Rev. Intell. Artif. 2022, 36. [Google Scholar] [CrossRef]
  40. El Moutaouakil, K.; Baizri, H.; Chellak, S. Optimal fuzzy deep daily nutrients requirements representation: Application to optimal Morocco diet problem. Math. Model. Comput. 2022, 9, 607–615. [Google Scholar] [CrossRef]
  41. Kumar, K.; Pandey, R.K.; Sharma, S. Approximations of fractional integrals and Caputo derivatives with application in solving Abel’s integral equations. J. King Saud Univ.-Sci. 2019, 31, 692–700. [Google Scholar]
  42. World Health Organization. Diet and Physical Activity: A Public Health Priority; World Health Organization: Geneva, Switzerland, 2021. [Google Scholar]
  43. World Health Organization. WHO and FAO Announce Global Initiative to Promote Consumption of Fruit and Vegetables; World Health Organization: Geneva, Switzerland, 2003. [Google Scholar]
Figure 1. Electronic diagram of the equilibrium continuous Hopfield network.
Figure 2. The methodology adopted to carry out this work.
Figure 3. Electronic diagram of a discrete-time Hopfield lattice of order α.
Figure 4. CHN energy vs. iterations.
Figure 5. OPT-CHN energy vs. iterations.
Figure 6. FRAC-CHN energy vs. iterations for α = 0.7.
Figure 7. OPT-FRAC-CHN energy vs. iterations for α = 0.7.
Figure 8. Isolines of the energy function and the CHN trajectory.
Figure 9. Isolines of the energy function and the OPT-CHN trajectory.
Figure 10. Isolines of the energy function and the FRAC-CHN trajectory for α = 0.7.
Figure 11. Isolines of the energy function and the OPT-FRAC-CHN trajectory for α = 0.7.
Figure 12. The diet CHN energy vs. iteration.
Figure 13. The diet OPT-CHN energy vs. iteration.
Figure 14. The diet FRAC-CHN energy vs. iteration for α = 0.7.
Figure 15. The diet OPT-FRAC-CHN energy vs. iteration for α = 0.7.
Table 1. Total glycemic load, positive nutrient gap, and negative nutrient gap of optimal diets obtained by the CHN, OPT-CHN, FRAC-CHN, and OPT-FRAC-CHN.

Method        | Time Step | Fraction Order | Glycemic Load | Positive Gap (µg) | Negative Gap (µg)
FRAC-CHN      | 0.001     | 0.50           | 367.17        | 4400.03           | 2261.45
OPT-FRAC-CHN  | Optimal   | 0.50           | 23.96         | 2039.13           | 609.57
FRAC-CHN      | 0.001     | 0.55           | 415.42        | 5853.80           | 2899.12
OPT-FRAC-CHN  | Optimal   | 0.55           | 37.65         | 1190.53           | 342.13
FRAC-CHN      | 0.001     | 0.56           | 442.66        | 6554.35           | 4837.73
OPT-FRAC-CHN  | Optimal   | 0.56           | 42.78         | 902.58            | 244.40
FRAC-CHN      | 0.001     | 0.57           | 1093.55       | 24404.71          | 15717.35
OPT-FRAC-CHN  | Optimal   | 0.57           | 43.99         | 864.32            | 244.17
FRAC-CHN      | 0.001     | 0.58           | 43.04         | 3366.09           | 720.05
OPT-FRAC-CHN  | Optimal   | 0.58           | 47.84         | 732.78            | 203.43
FRAC-CHN      | 0.001     | 0.59           | 1642.23       | 2508.25           | 1764.04
OPT-FRAC-CHN  | Optimal   | 0.59           | 54.82         | 562.62            | 148.48
FRAC-CHN      | 0.001     | 0.60           | 23.92         | 3385.86           | 1046.27
OPT-FRAC-CHN  | Optimal   | 0.60           | 49.58         | 459.62            | 138.06
FRAC-CHN      | 0.001     | 0.61           | 899.69        | 2037.15           | 8098.13
OPT-FRAC-CHN  | Optimal   | 0.61           | 73.66         | 359.97            | 98.16
CHN           | 0.001     | Ordinary       | 0.00          | 5036.28           | 1795.06
OPT-CHN       | Optimal   | Ordinary       | 0.00          | 5036.28           | 1795.06