Article

The Neural Network Classifier Works Efficiently on Searching in DQN Using the Autonomous Internet of Things Hybridized by the Metaheuristic Techniques to Reduce the EVs’ Service Scheduling Time

1 Department of Industrial Engineering, College of Engineering, Prince Sattam Bin Abdulaziz University, Alkharj 16273, Saudi Arabia
2 Industrial Engineering Department, Zagazig University, Zagazig 44519, Egypt
* Author to whom correspondence should be addressed.
Energies 2022, 15(19), 6992; https://doi.org/10.3390/en15196992
Submission received: 24 August 2022 / Revised: 15 September 2022 / Accepted: 19 September 2022 / Published: 23 September 2022
(This article belongs to the Section E: Electric Vehicles)

Abstract

Because rules and regulations increasingly emphasize environmental preservation and the reduction of greenhouse gas (GHG) emissions, researchers have observed a progressive shift in transportation toward electromobility. Several challenges must be resolved to deploy EVs, beginning with improving network accessibility and bidirectional interoperability, reducing the uncertainty about the availability of suitable charging stations along the trip path, and reducing the total service time. It is therefore crucial, particularly for long travel distances, to propose a DQN supported by AIoT that pairs EVs' requests with stations' invitations to reduce idle queueing time. The proposed methodology was written in MATLAB and addresses significant parameters such as the battery charge level, trip distance, nearby charging stations, and average service time. Its effectiveness derives from hybridizing meta-heuristic techniques into the search over DQN learning steps, which yields a solution quickly and improves the servicing time by 34%, after solving various EV charging scheduling difficulties, handling congestion control, and enabling EV drivers to plan extended trips. The results obtained from more than 2145 hypothetical training examples of EVs' requests were compared with the Bayesian normalized neural network (BASNNC) algorithm, which hybridizes the Beetle Antennae Search with a neural network classifier, and with other methods such as Grey Wolf Optimization (GWO) and the hybrid Sine-Cosine and Whale optimization, revealing mean overall comparative efficiencies in error reduction of 72.75%, 58.7%, and 18.2%, respectively.

1. Introduction

There are many differences between the planning and scheduling policy paradigms, but the two are complementary. A pure policy assignment can be defined by its applicability in the relevant disciplines: "The difficulty of policy lies in identifying a set of actions that will transform the existing circumstance into one in which the target description is correct" [1,2]. Marsay, D. J. focuses on moving the policy world from specified beginning circumstances to a specified target state through a sequence of actions, which is the crux of the AI policy issue [3]. Another viewpoint treats the planning policy task as a design or synthesis activity, which differs considerably from the usual approaches. According to the dictionary, a policy can be defined as inventing, styling, or forming something, such as ranking the pieces of a thing through the execution of actions [4]. After examining the aforementioned definitions, a few inferences may be made about the nature of a policy assignment. When policy is a job, the primary focus is developing a series of actions that will enable the necessary objectives to be achieved.
The many constraints that restrict the range of potential solutions frequently impose restrictions on the generated actions.
Moreover, a policy is not merely a synthesis of a sequence of activities from the known possible actions, because the time element that defines the interdependence between these actions is unknown.
One of the critical ways that manufacturers promote and differentiate their EVs is by providing a vast network of charging stations throughout the marketed region or by reducing the scheduled charging time. Rapid technological advancements over the past several decades have led to huge improvements in daily living but also to rising pollution levels. Extensive and radical sustainability policies have been adopted worldwide to simplify electromobility, avoid polluting transportation emissions through secure actions (e.g., electric vehicles, EVs), and create clean cities, as discussed at the Global Economic Summit on energy futures and mobility innovation in January 2018. This desired transformation pushes researchers to enhance deep learning mechanisms so as to reduce the error in obtaining the best solution (i.e., reward) when assigning an object with low consumption cost to its destination, here pairing an EV with a grid charging station $\delta_i$ [5,6], where a reinforcement learning-based/model-free energy management system was suggested. The Markov decision process has been used to represent the state space (i.e., the number of EVs), transition probability, action space (i.e., assignment condition), and reward function of the energy management system as a scheduling issue. Clean cities require physical infrastructure layers (e.g., electric cars, charging stations, transformers, electric lines) and cyberinfrastructure layers (e.g., IoT devices, sensor nodes, meters, monitoring devices). These layers make up the majority of the electric vehicle system [7,8], using the successive relations sketched in Figure 1 and tackled through one of the branches shown in Figure 2.
The authors emphasize that supporting the charging infrastructure layers in congestion-ridden urban areas relies on understanding the long-term change in mobility in order to mitigate pollution risk. To close the gap between EV charging demands (i.e., requests) [9] and charging station availability (i.e., invitations), named the bidirectional connectivity network (biCN), a smart reinforcement-learning (RL) scheduling policy is needed; RL is one of the three key machine learning paradigms, alongside supervised and unsupervised learning, for tackling the network elements [10]. RL can be written in many programming languages; the authors formulate the desired biCN in MATLAB and describe its relations as follows. The operator (O; assigning) learns and trains to accomplish a goal by interacting with the environment through a selective policy (π; function steps) that manages the back-propagation for the operator's series of actions ($a_t$; preferred assigning index). A policy is a function that, when represented by deep neural networks, maps every action an operator takes to the anticipated result or reward. The action space is discrete or continuous, combining all appropriate actions in a certain group that allows different actions; the action space is often described via the number of moves or the sequence of states (i.e., the policy that guides the trajectory τ, here an electric vehicle) available to the operator, represented by real-valued vectors. An operator (O) uses a rule known as a policy to determine which actions to select; the policy may be deterministic, in which case the symbol μ is typically used, or stochastic, usually denoted by π. Therefore, Figure 1 presents the state, action, and parameter architecture relationships based on $a_t = \mu(s_t)$ or $a_t \sim \pi_\theta(a_t \mid O_t)$ (the tilde denotes sampling from the stochastic process) and $\pi_\theta(a_t \mid s_t)$, where a fully observed policy bases its actions on the states rather than on the observations. A state $s_t$ is an exhaustive account of how things are in the world; as a result, every state has a place in the ecosystem under study, while the observation $O_t$ only partially describes this state and may omit important details. The operator may be able to see a state completely or only in part; if only a portion is visible, the operator creates an internal state (or state estimate). In order to observe states, the deep RL approach always uses an array, vector, or polynomial tensor. The states carry the significant features (parameters) that affect achieving the objective in minimum time and with minimum error; policies that depend on a set of parameters θ, as mentioned above, are called parameterized policies. State transitions depend exclusively on the most recent state and action and describe how the environment changes between time ($s_t$) and time ($s_{t+1}$), whether deterministic, $s_{t+1} = f(s_t, a_t)$, or stochastic, $s_{t+1} \sim P(\cdot \mid s_t, a_t)$, and the operator creates the valid actions according to its policy.
The reward function R, which pulls the solution along specific paths (i.e., trajectories), is critically important in searching the route by giving back a reward $r_t = R(s_t, a_t, s_{t+1})$, which can be simplified to $r_t = R(s_t)$ or $R(s_t, a_t)$, i.e., the value of one movement in the propagation of the state, usually represented as a real positive or negative number; the cumulative $r_t$ over a whole trajectory τ can be expressed as follows:
$$R(\tau) = \sum_{t=0}^{T-1} r_t = \sum_{t=0}^{T-1} R(s_t, a_t, s_{t+1})$$
The selection policy frequently applies a discount factor $\gamma \in (0,1)$ to the rewards received along the candidate trajectory, expressed as $R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t = \gamma^0 r_0 + \gamma^1 r_1 + \gamma^2 r_2 + \dots$.
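As a concrete illustration of the two return definitions above, the short sketch below computes the undiscounted and discounted returns of a finite reward sequence; the reward values are arbitrary placeholders, not data from the paper.

```python
# Minimal sketch: finite-horizon and discounted returns for one trajectory.
# The reward sequence below is an arbitrary illustration, not paper data.

def undiscounted_return(rewards):
    """R(tau) = sum_t r_t over the finite horizon."""
    return sum(rewards)

def discounted_return(rewards, gamma=0.95):
    """R(tau) = sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [1.0, 0.5, -0.2, 2.0]          # r_0 ... r_{T-1}
print(undiscounted_return(rewards))       # 3.3
print(discounted_return(rewards, 0.95))   # weighted sum with gamma^t factors
```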
Nevertheless, the operator's ultimate objective is to choose an ideal policy of actions that optimizes the expected reward over all trajectories; assuming a stochastic policy and stochastic environment changes, the probability distribution over trajectories can be expressed as:
$$P(\tau \mid \pi) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} \mid s_t, a_t)\, \pi(a_t \mid s_t)$$
where $\rho_0$ is the initial state probability, $P(s_{t+1} \mid s_t, a_t)$ is the state-transition probability of the environment [11,12], and $\pi(a_t \mid s_t)$ is the action probability of the agent. The expected return can be denoted by $J(\pi)$, which represents the charging stations' actions $\delta_i^a$, and can be expressed by
$$J(\pi) = \sum_{\tau} P(\tau \mid \pi)\, R(\tau) = \mathbb{E}_{\tau \sim \pi}[R(\tau)]$$
From there, as in the illustrative case study in Section 3, the primary optimization issue in RL may be articulated as finding $\pi^* = \arg\max_\pi J(\pi)$, where $\pi^*$ is the optimal scheduling policy; this emphasizes the importance of the value function (i.e., the selection criteria index discussed in Equation (8)) for a state-action pair $Q(s_t, a_t)$, which represents the EVs' requests and the stations' invitations. This issue has several sources of uncertainty because of the interconnections across different operators, such as the EVs, $\delta_i$, the electric power grid, the connectivity network (biCN), and the electricity supplier. The applications of deep RL in this field focus on operational and demand-response control between the pair $Q$ above and their energy-consumption management [13] with regard to the electric power grid [14]. Therefore, the researchers classify the policies into four functions to reach the destination through the minimum route while consuming the least time. The first, called the action-value function, provides the anticipated return if starting from the provided state ($s_t \in \tau$), performing an arbitrary action ($a_t$), and acting according to policy π forever afterwards; the second, the state-value function, delivers the return if starting from the given state ($s_t \in \tau$) and following policy π directly, without conditioning on the action. The first function is expressed by $Q^\pi(s_t, a_t) = \mathbb{E}_{\tau \sim \pi}[R(\tau) \mid s_0 = s_t, a_0 = a_t]$ and the second by $V^\pi(s) = \mathbb{E}_{\tau \sim \pi}[R(\tau) \mid s_0 = s]$. If the functions follow the optimal policy, whether on values or on action values, they are written $V^*(s)$ and $Q^*(s,a)$, where both act according to the optimal policy $\pi^*$ as in Equations (4) and (5), with the Bellman forms in Equations (4a) and (5a).
$$V^*(s) = \max_\pi \mathbb{E}_{\tau \sim \pi}[\,R(\tau) \mid s_0 = s\,] \tag{4}$$
$$V^*(s) = \max_a \mathbb{E}_{s' \sim P}[\,r(s,a) + \gamma\, V^*(s')\,] \tag{4a}$$
$$Q^*(s,a) = \max_\pi \mathbb{E}_{\tau \sim \pi}[\,R(\tau) \mid s_0 = s,\ a_0 = a\,] \tag{5}$$
$$Q^*(s,a) = \mathbb{E}_{s' \sim P}\Big[\,r(s,a) + \gamma \max_{a'} Q^*(s',a')\,\Big] \tag{5a}$$
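To make the Bellman optimality backup in Equations (4a) and (5a) concrete, the following sketch runs tabular value iteration on a tiny, made-up MDP; the transition table and rewards are illustrative only and are not the charging environment studied here.

```python
# Minimal sketch: tabular value iteration implementing the Bellman optimality
# backup Q*(s,a) = E_{s'~P}[ r(s,a) + gamma * max_a' Q*(s',a') ].
# The toy MDP below (two states, two actions) is purely illustrative.

gamma = 0.9
# P[s][a] = list of (probability, next_state, reward)
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}

Q = {s: {a: 0.0 for a in P[s]} for s in P}
for _ in range(200):                       # sweep until approximately converged
    for s in P:
        for a in P[s]:
            Q[s][a] = sum(p * (r + gamma * max(Q[s2].values()))
                          for p, s2, r in P[s][a])

V = {s: max(Q[s].values()) for s in Q}     # V*(s) = max_a Q*(s,a)
print(Q, V)
```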
Its fundamental tenet is that the value of the starting state sums the reward expected from being in a specific position plus the value of wherever the operator will arrive next. The authors note that the connection between $V^{\pi\text{ or }*}$ and $Q^{\pi\text{ or }*}$ is as follows: $V^\pi(s) = \mathbb{E}_{a \sim \pi}[Q^\pi(s,a)]$ and $V^*(s) = \max_a Q^*(s,a)$, and the discussed functions obey special self-consistency equations, called Bellman equations, according to the relative action $a_t$. Therefore, the proportionate benefit (advantage) of such an action is expressed by:
$$A^\pi(s,a) = Q^\pi(s,a) - V^\pi(s)$$
The taxonomy of reinforcement learning illustrates several algorithms and shows the position of the proposed methodology, as in Figure 2.
Figure 2 shows that when the operator has access to a model of the environment, forecasting state transitions via specific functions and gaining rewards [15], it aims to learn an optimal policy as described by the model-based RL algorithms, without actually executing the policy, by assigning some of the movement states that request certain actions $Q(s_t^\tau, a_i^\delta)$ [16]. According to Zhang H. et al., AlphaZero addresses chess piece-movement issues, as discussed in the Google DeepMind project (its predecessor played Go to improve its predictions, which is considered a scheduling issue) [17]. The lack of scheduling in this setting means that an instantaneous request, such as a mayday case, demands immediate behavior yet consumes a long time to generate a solution; the scheduling time is therefore a critical indicator [18]. With more EVs on the road, it will be harder to locate connectors. Although services such as ChargePoint or ChargeHub (e.g., cloud services) give EV drivers real-time information about available $\delta_i$ stations, the capability to reserve connectors for a later time has not yet been implemented [19]. Therefore, using the autonomous Internet of Things (AIoT) mitigates this drawback. The stations were selected according to several models that help reduce the route; for example, AlphaZero's zero-only actions differ from prior iterations of AlphaGo [20,21] in that neither the Go nor the shogi rules were initially known to the neural network during training. The AI used reinforcement learning to practice playing against itself (i.e., attempting to understand itself) until it could foresee its own moves and how they would impact the result of the game; in contrast, AlphaGo involves slow training and consumes a long time to reach the same level [22]. In the model-free branch on the left side of Figure 2, the operators often do not have access to a ground-truth representation of the environment's model, which is the fundamental disadvantage of these algorithms; they then need experience gathered by behaving, sometimes erratically, in the public environment. Therefore, the operators need learning to optimize the selected route, or they use so-called Q-learning. The functions of the model-free branch are enhanced directly by optimizing the parameters (θ) by gradient ascent toward $J(\pi_\theta)$, while Q-learning focuses on the value functions and their policies $\pi_\theta(a \mid s)$, which are managed by the neural network rules discussed in Equations (6) and (7).
$$V^\pi(s) = \mathbb{E}_{a \sim \pi,\, s' \sim P}[\,r(s,a) + \gamma\, V^\pi(s')\,] \tag{6}$$
$$Q^\pi(s,a) = \mathbb{E}_{s' \sim P}\Big[\,r(s,a) + \gamma\, \mathbb{E}_{a' \sim \pi}[Q^\pi(s',a')]\,\Big] \tag{7}$$
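Equations (6) and (7) are expectation (on-policy) backups; in a model-free setting they are typically approximated from samples. The sketch below shows sample-based TD(0)/SARSA-style updates for $V^\pi$ and $Q^\pi$ on placeholder transitions; the transition tuples are illustrative, not data from the charging environment.

```python
# Minimal sketch: sample-based (TD) approximations of the on-policy Bellman
# backups in Equations (6) and (7). Transition tuples are placeholders.

gamma, alpha = 0.9, 0.1
V = {}                       # state -> value estimate
Q = {}                       # (state, action) -> value estimate

def td0_update(s, r, s_next):
    """V(s) <- V(s) + alpha * (r + gamma*V(s') - V(s))."""
    V.setdefault(s, 0.0); V.setdefault(s_next, 0.0)
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

def sarsa_update(s, a, r, s_next, a_next):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma*Q(s',a') - Q(s,a))."""
    Q.setdefault((s, a), 0.0); Q.setdefault((s_next, a_next), 0.0)
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

# Illustrative transitions (state, action, reward, next_state, next_action)
for s, a, r, s2, a2 in [(0, 1, 1.0, 1, 1), (1, 1, 2.0, 1, 0), (1, 0, 0.0, 0, 1)]:
    td0_update(s, r, s2)
    sarsa_update(s, a, r, s2, a2)
print(V, Q)
```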
Moreover, implementations of the deep RL technique have been described for a variety of new issues, including managing the rate of aggregated data, converting the data into arguments, flexible dynamic network access, bidirectional wireless caching in real time (as illustrated on the right side of Figure 3), network security, and other communication and networking concerns. Lei, L. et al. also discuss the management of the DRL environment, which consists of three layers (the perception, the network, and the application), by autonomous IoT (AIoT) systems, the next generation of IoT systems [23]. The main advantage of these model-free RL models [24] is that they are principled policy-optimization techniques that target the goal directly and are therefore more trustworthy and stable in deducing the results. Therefore, the authors integrate Q-learning and use the policy-optimization branch of this taxonomy to build the proposed methodology. The proposed methodology relies on the policy gradient (e.g., stochastic or deterministic, evaluated by Monte-Carlo or multi-agent Actor-Critic) [25] and calculates $Q_\theta(s,a)$, focusing on increasing the performance of the indexing function by minimizing the error when treated as the Bellman equation for $Q^*(s,a)$ (the gap between the destination and the obtained position), and performs gradient ascent to directly maximize performance, following Equation (8) in Section 3. If the policy optimization is proximal, maximizing a surrogate objective function, it is called PPO (proximal policy optimization). This integration aims to reduce failures in the behavior of the DQN (deep Q-learning network) mechanism when obtaining the results needed to reach $Q^*$, similar to the improvements attempted in the C51 method. The hybridization occurs when DDPG is implemented. The main objective is to obtain the shortest route between the selected starting position and the nearest destination (i.e., electric charging station), which helps save service time. Therefore, the methods suggested by Lee et al. and Ahmed M. Abed support the paper's objective and reduce the total trip time while accounting for the dynamic nature of traffic and unforeseen future charging requirements [26,27]. They return the optimum route ($r_t$) and charging stations, which is considered a challenging issue. The operator's ultimate objective is to create a policy that yields the highest reward over an extended period of time [28]. For EVs to charge properly under the policy, it is suggested to use RL to learn about government-generated energy usage, which decreases energy-consumption costs as well as encouraging EV marketing [29,30]. Therefore, many researchers discuss ways to increase power-grid utilization to reach a steady load state and to prevent electric-potential fluctuations by using RL approaches [31,32,33]. The famous model for selecting the shortest route between a source position and a destination is the Markov model, which is mimicked by the proposed methodology that schedules the assignment of states (EVs) to the preferred charging stations ($\delta_\pi$). Therefore, Wang, K. et al. and Wan, Y. et al. suggest tackling the intended scheduling as an MDP because of its random behavior, especially for the traffic parameter [12,34], while the uncertain EV arrival time is handled based on the recommendations for using DRL [30] to overcome the shortcomings of the conventional model-based branch, which requires a smart model to be efficient, and to deliver an EV charging approach that is scalable and adaptable [35]. A model-free real-time scheduling of electric-car charging, as illustrated in Figure 2, relies on DRL [36] and is developed under uncertain transition probabilities. The performance of the proposed policy relies on a forecasting method that predicts the readiness of $\delta_i$ for the EV using an LSTM neural network to create an intelligent schedule [37], presenting promising results. Therefore, the authors verify those results [38], after mimicking their function-formulation recommendations to adjust the operators and reproduce the methodology.
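As a rough illustration of the Q-learning branch described above (not the authors' MATLAB implementation), the sketch below combines an epsilon-greedy policy, a replay buffer, and the DQN-style target $y = r + \gamma \max_{a'} Q(s', a')$ with a linear function approximator; all environment details (state size, actions, rewards) are placeholders.

```python
# Minimal DQN-style sketch (epsilon-greedy + replay + bootstrapped target)
# with a linear Q-function approximator. Environment details are placeholders,
# not the EV-charging environment of the paper.
import random
import numpy as np

n_features, n_actions = 4, 3             # illustrative sizes
W = np.zeros((n_actions, n_features))     # Q(s, a) = W[a] @ s
gamma, lr, eps = 0.95, 0.01, 0.1
replay = []

def q_values(s):
    return W @ s

def act(s):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.randrange(n_actions)
    return int(np.argmax(q_values(s)))

def train_step(batch_size=8):
    """One gradient step on (y - Q(s,a))^2 with y = r + gamma*max_a' Q(s',a')."""
    if len(replay) < batch_size:
        return
    for s, a, r, s_next, done in random.sample(replay, batch_size):
        y = r + (0.0 if done else gamma * np.max(q_values(s_next)))
        td_error = y - q_values(s)[a]
        W[a] += lr * td_error * s          # gradient of the linear Q w.r.t. W[a]

# Illustrative interaction loop with random placeholder transitions
for _ in range(100):
    s = np.random.rand(n_features)
    a = act(s)
    r, s_next, done = np.random.rand(), np.random.rand(n_features), False
    replay.append((s, a, r, s_next, done))
    train_step()
```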

2. The Policy Formulation and Research Contribution

Most current research relies on inaccurate traditional econometric or time-series approaches with uncertain behavior, which motivated the authors to study the feasibility of EV stations and their usability management via tuple parameters (state, action, transition function, reward). The review points to using RL with a motivated searching mechanism, preferably a heuristic, to expand the solution space [39,40]. Under the adopted policy, the rewarding argument is the value gained from the transition function $\sum_{t=1}^{n} J(\pi)_\theta$ through sequences of decisions (i.e., Markov principles) enhanced by a metaheuristic technique. The possibility of obliteration and inflation makes a future value less valuable than a reward now, $r_{t_0} \ge r_{t_n}$. The movement of a state (EV) relies on the available locations $J(\pi)_t$ in the backlog list (e.g., the layout map), which has two cluster directions, unidirectional or bidirectional. The gained reward $r_t$ tends to support policy $\pi_A$ or policy $\pi_B$ when $J(\pi)_s$ was not on the proposed routes. The charter of the work is illustrated in Table 1. Improving the searching mechanism across various computations is among the significant steps in finding the best hybridization for the optimization process; five different metaheuristic algorithms were considered [41]. The first is Scatter search, made famous by Tabu search, hybridized with the immune algorithm. The second is the differential evolution (DE) algorithm, a form of evolutionary computation whose behavior relies on iteratively improving a candidate solution, used to solve complex parameter optimization problems and optimize profitability regardless of time. As a third trial, researchers hybridized the Grey Wolf Optimizer (GWO) with the immune algorithm to minimize the total service time [42]. The fourth hybridization combines the Sine-Cosine and Whale Optimization Algorithms, which create initial random agents and require them to fluctuate outwards or toward the best possible solution based on sine and cosine functions [43,44,45,46]. Finally, the process parameters were optimized using reinforcement learning and a Bayesian normalized NN utilizing the beetle antennae search (BASNNC) algorithm, which outperformed the previously discussed processes [38]. Therefore, the proposed methodology selects the "need-based" branch, as illustrated in Figure 2 above, which matches understanding the behavior of the tackled parameters through servicing (i.e., EV charging operations), mainly when multiple constraints are considered. This work tries to enhance this mechanism to find a preferred solution in minimum time and reduce the service time under the abovementioned considerations. The proposed design consists of two recruitment networks; a deep Q network represents the relation between the states and actions $Q(s_t^\tau, a_t^i)$ for the best action-value function, indexing discriminative features for the idle time. In contrast to optimization-based techniques, the suggested system does not need to know variables such as arrival and departure times or power usage in advance, because it can be managed by the autonomous clouding technique and the neural network can estimate the right choice based on the present parameters ($\theta_i$).
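Since the Grey Wolf Optimizer is one of the hybridized metaheuristics listed above, a minimal sketch of its standard position-update rule is shown below; the objective function is a stand-in surrogate for total service time, not the paper's actual cost model, and this is the plain GWO, not the authors' hybridized variant.

```python
# Minimal sketch of the standard Grey Wolf Optimizer (GWO) update, applied to a
# placeholder objective; it is not the authors' hybridized variant.
import numpy as np

def objective(x):
    # Stand-in surrogate for "total service time"; purely illustrative.
    return np.sum((x - 2.0) ** 2)

dim, n_wolves, iters = 5, 20, 100
rng = np.random.default_rng(0)
wolves = rng.uniform(-10, 10, size=(n_wolves, dim))

for t in range(iters):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best wolves
    a = 2 - 2 * t / iters                                  # linearly decreasing coefficient
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0              # average of the three guided moves
        wolves[i] = new_pos

best = wolves[np.argmin([objective(w) for w in wolves])]
print(best, objective(best))
```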
The paper reviews the reinforcement-learning mechanisms for optimization algorithms and constructs the problem formulation that targets reducing the total service time of EVs at suitable stations while taking into account the minimum route distance to pair them. Section 2 provides the paper charter and expounds the evaluation indicator, which highlights the contribution presented in Section 3. Section 4 discusses how to analyze the DQN via fuzzy scheduling iterations to pick good distributions of EVs around the stations on the road using the proposed metaheuristic methodology. The results are discussed in Section 5. Finally, the conclusions are presented, the drawbacks are discussed, and future work directions are envisaged; the conclusion also presents the comparative results against the mimicked outputs.

3. Contribution of This Study

The contribution is based on the inquiry into the advantages of hybrid approaches used to maximize the research functions in uncertain networks through selective policy systems. This policy aims to determine the minimum route length between the EVs and candidate stations and to achieve the related objective, the minimum service time.
  • A new routing metaheuristic approach is suggested in order to save expenses and time and to eliminate the hardship of travel.
  • Analyzing how the suggested stations affect the effectiveness of routing heuristics.
$\lambda_t^Q$ and $\lambda_t^V$ are specified as the target values of the Q-functions and value functions, respectively, to derive the loss functions. When using value-based approaches, the regression loss may be used to assess how successfully the NN mimics the Q-functions or value functions.

The Environment Layout Map Description

The proposed policy seeks to select the shortest route length between the main successive EV requests and their expected service time. Five sequential actions are discussed through the Markov model to identify the shortest route in the lowest time when modeling the RL: traveling (state $s_t$) to a desired place, looking up pricing information, serviceable actions ($a_t$), setting up if necessary, and making other quick decisions. The effectiveness and execution of any subsequent action policy have four tactical choices affecting the result [48]. The analyzed environment (i.e., the layout design and its chosen locations), sketched as a rectangular-like illustration, is presented in Figure 3.
Shape of charger station | Rectangular
Number of stations | $2 \le J_\pi \le 2100$
Main starting position | Bottom, Right, Up
Street span | As safety instructions require
Policy of service | $\sum_{t=0}^{n} \sum_{a=1}^{m} \pi_t^a$
Proposed system | Dynamic and stochastic
In the assignment policy, priority facility movement and routing policy are considered the basis of the suggested methodology. The layout design is stressed as an essential component impacting process performance and routing distance in the proposed partitioning of the map according to the studied characteristics, where the time the policy consumes to complete assigning n locations to $\delta_i$ equals the number of desired selected locations. The rear and front walls of the borders are close by and parallel to the two end streets. Each place on the layout has a definition for each symbol, which is also represented in Table 2. Numerous potential stations are suggested on the right or left of the roadway in the proposed layout plan, which was created using Google Maps and is framed in a fixed two-dimensional rectangular interior [19], as shown in Figure 4. "Zap Map," a website that displays the charging stations accessible in several European nations together with a wealth of useful information, assists EV drivers in charging their vehicles quickly and securely by creating the most preferred scheduling map according to all requests made on the same road at the same time, in order to locate the station that best fits the driver's needs, such as minimum charging time, minimum waiting time, and proximity to the main trip route.
The charging stations $\delta_i(\pi)$, with $i \in \{1, 2, \dots, m\}$, are numbered in ascending order as $(m-1)N + 1$; the second station may be available as $\delta_2 = (J-1)N + 3, \dots, (J-1)N + i$, and so on. The distance between two stations $\delta_i$ and $\delta_{i+1}$ must be the shortest and is calculated by Equation (8).
$$\lambda_i = \min_{j=1,\dots,n}\Big[\, l\,(2N - \delta_i^x - \delta_i^y) + 2B + |\delta^x - \delta^y|\,A + 2D,\;\; l\,(\delta_i^x + \delta_i^y - 2) + 2F + |\delta^x - \delta^y|\,A \,\Big] \tag{8}$$
where $\delta_i^x = \lfloor x/m \rfloor$, $i^x = x - \lfloor x/m \rfloor N$, $\delta_j^y = \lfloor y/m \rfloor$, $i^y = y - \lfloor y/m \rfloor N$.
At the start of each period ($t$), when requested, the drivers must determine the number of serviceable stations ($\delta$) and whether they are serviceable.
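As a simplified stand-in for the grid-based distance index in Equation (8) (whose exact coefficients depend on the layout constants $A$, $B$, $D$, $F$, and $l$ defined by the authors), the sketch below merely ranks serviceable stations by rectilinear distance from an EV on a grid map; the coordinates and availability flags are invented for illustration.

```python
# Minimal sketch: rank serviceable stations by rectilinear (grid) distance from
# an EV position. This is a simplification, not Equation (8) of the paper;
# the coordinates and availability flags below are invented.

def grid_distance(p, q):
    """Rectilinear distance on the layout grid."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def nearest_serviceable(ev_pos, stations):
    """stations: list of (station_id, (x, y), serviceable_flag)."""
    candidates = [(grid_distance(ev_pos, pos), sid)
                  for sid, pos, ok in stations if ok]
    return sorted(candidates)            # closest serviceable station first

stations = [("d1", (2, 5), True), ("d2", (7, 1), False), ("d3", (4, 4), True)]
print(nearest_serviceable((3, 3), stations))   # [(2, 'd3'), (3, 'd1')]
```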
The originality of the suggested method lies in creating the recruitment datasets needed to train the algorithm via a simulator, because of the dearth of datasets [27,32,49]. The sequential stages of the proposed methodology are illustrated in Figure 5.

4. The Methodology Description

The forward propagation of aggregated data inputs and the backward propagation of errors are the two bidirectional processes that make up the classic gradient-based BP neural-network learning process. Training is repeated, and deviations and network-weight changes are continually computed in the direction of the error-function gradient, so approaching the objective takes time.
Traditional gradient-based BP neural networks do, however, have certain intrinsic drawbacks, such as slow convergence and a propensity toward local optima. Researchers often experiment with different activation functions, change the network layout, and enhance weight-determination approaches. The proposed methodology works when many stations are requested at the same time. The aim is to select the five EVs closest to a specific station according to the distance on the main road trip and their requested service. The procedures are analyzed to reduce the total idle time of the EVs and increase the utilization of the charging stations, which is calculated by the OEE indicator. Since there is an enormous number of possible scenarios, it is impossible to keep the ideal answer for each one of them in a cloud database. This issue served as the impetus to create the AIoT-DQN algorithm, which employs a function to identify the optimal course of action in each situation and is controlled by the autonomous Internet of Things (AIoT) response [39,50]; this suggests the weight-and-structure-determination (WASD) algorithms, i.e., techniques employing linearly independent or orthogonal polynomials as activation functions, among several other approaches. The relative gradient-descent technique used by the heuristic random-search algorithm gives it a powerful global-search capability, integrating the APSO method with a neural network to improve the network parameters [40,51]. The proposed model is based on hybridizing three model-free rules to gain an advantage, tailoring some of the heuristic steps to empower the searching mechanism, namely DQN, DDPG, and the policy gradient, as illustrated in Figure 2. The proposed AIoT-DQN methodology was written in MATLAB to tackle the drawbacks of native DQN through two sequential phases [52]. The first phase builds the network that mimics the environment illustrated in Figure 5 when selecting the EVs and serviceable time at available stations via specific policies, to obtain the minimum route between the car's position and destination [53]. The second phase is concerned with cost analysis. The different zones of any charging-request procedure shown on the left side of Figure 5, represented by ($J_i$), have a sequence of actions ($a_i$). Zones (a, b, and d) are the BVA actions (i.e., idle time minus service time), represented by selecting among many available stations, inspection procedures before charging, average setup-time procedures, arrival distance, and waiting time. Zone (c) is the VA activity (i.e., service time), which must be scheduled in a minimum period. These zones were deduced from previous studies. Because of the large solution space, the authors resort to a smart scheduling solution managed by the AIoT. If there are $N$ EV requests and $m$ stations $\delta_i$, the total number of potentially feasible solutions equals $(N!)^m$. After analyzing all of the feasible alternatives, an optimal solution for a certain performance metric may be identified among these possibilities. For instance, if five EVs make a request at the same time, $Q(s_t^5, a_t^5) = 2.48 \times 10^8$ alternatives arise, which consumes a long computational time.
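The growth of the $(N!)^m$ solution space mentioned above can be checked with a few lines of arithmetic; the EV and station counts below are arbitrary examples.

```python
# Minimal sketch: size of the feasible-schedule space (N!)^m for N EV requests
# and m stations. The counts below are arbitrary examples.
from math import factorial

def schedule_space(n_evs, n_stations):
    return factorial(n_evs) ** n_stations

for n_evs, n_stations in [(3, 2), (5, 3), (6, 4)]:
    print(n_evs, n_stations, schedule_space(n_evs, n_stations))
# Even modest sizes explode combinatorially, which motivates the
# metaheuristic-guided search instead of exhaustive enumeration.
```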
The proposed smart scheduling helps EV drivers plan their trip by selecting suitable stations on their roads and desired charging levels. These stations are not conditioned on full charging but on charging to a particular level, which reduces the queue time; the remaining charge can be compensated on the road from the scheduled stations. The drivers can accept this plan or cancel it to select another station under another charging policy (i.e., dataset list). The proposed methodology is identified through the parameters and decision variables shown in Table 3.

4.1. Phase I: The Smart Fuzzy Scheduling Formulation, Decrease EVs Service Time

Many authors have integrated heuristic steps to enhance their founded solution with minimum errors. A back-propagation neural network based on particle swarm optimization was suggested after investigating how to carefully choose input parameters to obtain the desired outcomes [54]. The conjugate gradient approach [55], the least-squares method, and improved methods based on numerical optimization also exist in addition to those mentioned above. Therefore, the heuristic step is an important approach for enhancing the solution. The smart fuzzy scheduling for the DQN managed by the autonomous Internet of Things (AIoT) depends on applying the proposed fuzzy metaheuristic steps expressed in Table 4, called decrease EVs service time ($dEVsS_t$); it arranges the output index $[\alpha]$ and relative index $[\alpha_r]$ in descending order as expressed in Equation (8), and then constructs the Gantt chart in its first construction. This schedule presents a more effective service-span path than some of the other rules published in the same context. Equation (9) covers all possible solutions for assigning different EVs in a pairwise sequence. The first step computes the result of this rule and determines its priority (priority index). The second step arranges these indexes in descending order for groups of stations, aiming to reduce waiting time. If two consecutive EVs in the arrangement are loaded onto one station, the best starting EV, the one that saves time, is tested; this testing is the third step. After that, the EVs at every station are rescheduled individually within their waiting time by sliding EVs so that two processes finish simultaneously, and rescheduling continues until it stops reducing the total idle time. Optimality is achieved when rescheduling the EVs by the same route with the next assumption. Under the acronym used in the proposed equation, ($dEVsS_t$), this rule is compared with Lee et al. and other published rules; this model is effective in most examples. The formula relies on six parameters:
ES_finalEVs | Earliest start request time of the final EV estimated for a certain station.
ET_finalEVs | Executing request time of the final EV.
ES_firstEVs | Earliest start request time of the first EV estimated to be assigned to a station.
ES_predecessorEVs | Earliest start request time of the predecessor of the first EV (estimated time).
ET_predecessorEVs | Executing request time of the predecessor EV selected.
ET_firstEVs | Executing request time of the first EV (estimated chosen time).
$$(dEVsS_t)_\alpha = ES_{finalEVs} + ET_{finalEVs} - ES_{firstEVs} - ES_{predecessorEVs} + ET_{predecessorEVs} + ET_{firstEVs}$$
The methodology's pseudo-code is composed of ten sequential steps, as follows:
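The original ten-step pseudo-code appears as a figure; a minimal Python sketch of the priority-index scheduling loop described above (compute the $dEVsS_t$ index, sort descending, assign, then proceed with the local rescheduling passes) is shown below. Field names, station identifiers, and the timing data are invented placeholders, not the authors' exact steps.

```python
# Minimal sketch of a dEVsSt-style priority scheduling loop as described in
# Section 4.1. Field names and timing data are invented placeholders; the
# authors' actual ten-step pseudo-code appears as a figure in the original.

def priority_index(ev):
    """Equation-(9)-style index built from the six timing parameters."""
    return (ev["es_final"] + ev["et_final"] - ev["es_first"]
            - ev["es_pred"] + ev["et_pred"] + ev["et_first"])

def schedule(requests):
    # Steps 1-2: compute the index for each request and sort in descending order.
    ordered = sorted(requests, key=priority_index, reverse=True)
    # Later steps: assign requests greedily to the station that becomes free
    # first, a simple stand-in for the paper's idle-time-reducing rescheduling.
    station_free_at = {"d1": 0.0, "d2": 0.0}            # invented stations
    plan = []
    for ev in ordered:
        sid = min(station_free_at, key=station_free_at.get)
        start = max(station_free_at[sid], ev["es_first"])
        station_free_at[sid] = start + ev["service_time"]
        plan.append((ev["id"], sid, start))
    return plan

requests = [
    {"id": "EV1", "es_final": 5, "et_final": 2, "es_first": 1,
     "es_pred": 0, "et_pred": 1, "et_first": 2, "service_time": 1.5},
    {"id": "EV2", "es_final": 3, "et_final": 1, "es_first": 2,
     "es_pred": 1, "et_pred": 2, "et_first": 1, "service_time": 1.0},
]
print(schedule(requests))
```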

4.2. Phase II: Cost Analysis Formulation

According to the action distribution shown in Figure 6, the priority for requests sequenced to $\delta_i$ is their servicing time, where the request with the shortest time finishes its service earlier. One drawback is that the EV request with the longest service time will be serviced last in the schedule, even though it may have a high priority. Therefore, the makespan index $C_{max}(\pi)$ of the processing actions, expressed in Equation (10), is found to choose the serviced EVs.
$$C_{max}(\pi) = \max_{0 \le t_1 \le t_2 \le \dots \le t_{m-1} \le N} \left( \sum_{j=0}^{t_1} p_{\pi(a_j)}^{1} + \sum_{j=t_1}^{t_2} p_{\pi(a_j)}^{2} + \dots + \sum_{j=t_{m-1}}^{N} p_{\pi(a_j)}^{m} \right) \tag{10}$$
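Equation (10) is the familiar permutation (flow-shop style) makespan over $m$ stages; the short sketch below computes it with the equivalent completion-time recursion $C_{i,k} = \max(C_{i-1,k}, C_{i,k-1}) + p_{i,k}$ for an invented processing-time matrix.

```python
# Minimal sketch: makespan C_max of a permutation schedule over m stages,
# computed with the completion-time recursion equivalent to Equation (10).
# The processing-time matrix below is invented for illustration.

def makespan(order, p):
    """order: job sequence; p[j][k] = processing time of job j at stage k."""
    m = len(p[0])
    completion = [0.0] * m                       # completion time per stage
    for job in order:
        for k in range(m):
            earlier = completion[k - 1] if k > 0 else 0.0
            completion[k] = max(completion[k], earlier) + p[job][k]
    return completion[-1]

p = [[2, 3, 1],      # job 0 times at stages 1..3 (invented)
     [4, 1, 2],
     [3, 2, 2]]
print(makespan([0, 1, 2], p))   # makespan of the sequence 0 -> 1 -> 2
```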
The route length under the cost consideration of a specific service span generated by the policy function can be expressed by Equation (11), where the cost follows branch ($\pi_A$) but not the proposed trip route, plus all $\delta_{i,j}$ that are serviceable, or whether one of them is discarded due to time considerations.
$$\lambda_i^d = \min_{1 \le t \le T} \left[ h^{\pi_A}\!\left( I_0^S + \sum_{i=1}^{n} r_i^B\,(C_i^{\pi_A} - C_i^{\pi_B}) \right) + I_t^S + C_i^{\pi_A}\lambda_i S_t^{\pi_A} + C_i^{\pi_B}\lambda_i S_t^{\pi_B} + C_d^{\pi_B} X_t^D \right] + h^{\pi_A}\beta_t \tau_t^r + T\, h^{\pi_B} I_0^R + \sum_{i=1}^{t} (T - t + 1)\,\bar{r}_t^{AB}$$
$$S_t^{\pi_A} = S_0^{\pi_A} + \sum_{k=1}^{t-1} (d_k + \bar{r}_t^{AB})\, S_{t-k}^{\pi_A}, \quad 1 < t < T$$
$$S_t^{\pi_B} = S_0^{\pi_B} + \sum_{k=1}^{t-1} (d_k + \bar{r}_t^{AB})\, S_{t-k}^{\pi_B}, \quad 1 < t < T$$
$$X_t^D = X_0^D + \sum_{k=1}^{t-1} (d_k + \bar{r}_t^{AB})\, X_{t-k}^D, \quad 1 < t < T$$
$$I_t^S = I_{t_0}^S + \sum_{k=1}^{t-1} (d_k + \bar{r}_t^{AB})\, I_{t-k}^{\pi_B}, \quad 1 < t < T$$
Subject to:
$$I_t^S \ge h^{\pi_A}\left[ I_0^S + \sum_{i=1}^{t} (S_i^{\pi_A} + S_i^{\pi_B} - \bar{d}_i) + q_t \tau_t^d + \sum_{i=1}^{t} \vartheta_{it} \right], \quad 1 \le t \le T,\; d \in \lambda_i^d$$
$$I_t^S \ge b\left[ I_0^S + \sum_{i=1}^{t} (S_i^{\pi_A} + S_i^{\pi_B} - \bar{d}_i) + q_t \tau_t^d + \sum_{i=1}^{t} \vartheta_{it} \right], \quad 1 \le t \le T,\; d \in \lambda_i^d$$
$$SI_i = \sum_{\delta=1}^{m} \left[ \delta - (2J - 1) \right] t_{ij}^2$$
where $S_t^{\pi_A}, S_t^{\pi_B}, X_t^D > 0$ and $SI_i \ge SI_{i+1} \ge SI_{i+2} \ge \dots \ge SI_N$.
For each potential realization of $\delta$, Constraints (12) and (16) maintain track until the end of period $t$ or the return to the station at the end of period $t$. The block $\sum_{i=1}^{t} (S_i^{\pi_A} + S_i^{\pi_B} - \bar{d}_i) = \tau_t^d / I_0^{R|S}$ represents the capacity of the requested service at the end of the schedule time $t$, which follows Exponential and Weibull behavior throughout the day from 5 a.m. to 11 a.m. and from 5 p.m. to 11 p.m., respectively. Constraint (13) generates a priority index SI, as in Palmer's algorithm, i.e., a job ordering. The policy mechanism shown in Equations (11)-(16) should be reformulated as a tractable model according to the index α value, and the suggested cost analysis should be used to regulate the uncertain behavior of the right-hand sides of the variables $S_t^{\pi_A}$, $S_t^{\pi_B}$, $X_t^D$, $I_t^S$, as discussed in Equation (17).
$$\alpha_0 + \sum_{t=1}^{T} (\alpha_t a_t + \gamma_t \hat{a}_t) \le 0, \quad -\gamma_t \le a_t \le \gamma_t, \quad 1 \le t < T \tag{17}$$
Some researchers reported that an ANN alone, due to the trajectory, tends to become trapped in a local optimum and is therefore not used for training by itself. This is the main reason some researchers show that hybrid approaches perform better than the traditional ANN [28,56]. This paper observed the Zap Map for a long time and generated 120 random examples of requests from three to six EVs simultaneously at a specific station, selected according to Equations (9)-(16). The proposed methodology explains an illustrative example of serving six actions for six different EVs by tackling the DQN network to serve the EVs in minimum time. The proposed methodology focuses on the significant parameters illustrated in Figure 7 and Figure 8 and the requested actions according to their policy π, which guides the driver along the shortest path to the station. The AIoT handles this hybridization to enhance the efficiency of the biCN, the bidirectional relationship between the EVs and the available stations. The BASNNC search algorithm, in contrast, was used to optimize the process parameters only via reinforcement learning and was superior to the previous mechanisms mentioned above [38]. Therefore, the illustrative methodology examples are compared with it to gauge the study's efficiency via the OEE indicator.
The time needed to obtain results for examples of EVs that require some of the stations' actions ranges between 60 and 488 s according to their complexity. The analysis of the results, discussed in the next section, shows that significant factors affect the response of reducing the total service time, as illustrated in Figure 7 and Figure 8 above; actions such as the charging-point deviation, inspection time, and battery size must be controlled to manage the average service span time illustrated in Figure 9. The recharge service (RcS) problem is modeled in this work and mimicked as Markov movement steps with uncertain transition probabilities. A deep $Q(s_t^n, a_t^\tau)$ network with the function ranking discussed in Equation (8) as approximation has been utilized to find the optimum EVCS selection policy.
The fuzzy group $FG_i = \{\alpha_{a_1 a_3} = J(EVs)_1 = 6.25,\; \alpha_{a_4 a_2} = 3.8,\; \alpha_{a_4 a_6} = 2\}$ is arranged in descending order; this means that $a_1 \sim J(EVs)_1$ precedes $a_4 \sim J(EVs)_4 \equiv J_4$, and $a_4$ precedes $a_6 \sim J(EVs)_6 \equiv J_6$ and $a_2 \sim J(EVs)_2 \equiv J_2$. On the other hand, the remaining actions are tested in $FG_j$, which is arranged in ascending order, where $\alpha_{a_3 a_6} = 0.3142$ and $\alpha_{a_6 a_3} = 0.44$. Therefore, $a_6 \sim J(EVs)_6 \equiv J_6$ precedes $a_3 \sim J(EVs)_3 \equiv J_3$, and so on, as shown in Table 5, which expounds 180 direct potential relations and 6! indirect relations.
{1.0/1} | {0.9/4, 1.0/5} | {1.0/2, 0.9/3}
{1.0/2, 0.2/4} | {1.0/5} | {0.7/2, 1.0/3}
{0.5/4, 1.0/5} | {1.0/7} | {1.0/6}
{1.0/4} | {1.0/9} | {1.0/3, 0.9/4}
{1.0/5, 0.9/6} | {1.0/2, 0.8/3} | {1.0/4}

Service after added $\lambda_1^d$ | Service after added $\lambda_2^d$ | Service after added $\lambda_3^d$
{1.0/1} | {0.9/5, 1.0/6} | {0.9/7, 1.0/8, 0.9/9}
{1.0/3, 0.2/5} | {0.9/10, 1.0/11} | {0.7/12, 0.9/13, 1.0/14}
{0.5/7, 1.0/8, 0.2/9, 0.2/10} | {0.9/15, 1.0/16} | {0.9/21, 1.0/22}
{0.5/11, 1.0/12, 0.2/13, 0.2/14} | {0.9/22, 1.0/23} | {0.9/25, 1.0/26, 0.9/27}
{0.5/6, 1.0/17, 0.9/18, 0.2/19, 0.2/20} | {0.9/24, 1.0/25, 0.8/26} | {0.2/29, 1.0/30, 0.9/31}
Therefore, requests from more than $N$ EVs form a complex problem and need to be programmed in a suitable mobile application. The fuzzy intervals for the expected service time of the actions requested by the EVs (e.g., inspection time), according to the bidirectional relationship between the stations and EVs on the road for the illustrative case study, are discussed in Table 5 with the aggregated data, which constructs the deep Q network illustrated in Figure 11; the other actions also have their intervals.
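A minimal sketch of the ordering step described above, sorting the $FG_i$ indices in descending order and the $FG_j$ indices in ascending order to derive a precedence list, is shown below; the numeric values come from the illustrative example in the text, and the variable names are invented.

```python
# Minimal sketch: derive the precedence ordering from the fuzzy priority
# indices discussed above. The values come from the illustrative example in
# the text; variable names are invented for this sketch.

fg_i = {("a1", "a3"): 6.25, ("a4", "a2"): 3.8, ("a4", "a6"): 2.0}   # to sort descending
fg_j = {("a3", "a6"): 0.3142, ("a6", "a3"): 0.44}                    # to sort ascending

descending = sorted(fg_i.items(), key=lambda kv: kv[1], reverse=True)
ascending = sorted(fg_j.items(), key=lambda kv: kv[1])

print([pair for pair, _ in descending])   # pairs ordered by decreasing index
print([pair for pair, _ in ascending])    # pairs ordered by increasing index
```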

5. Results Analysis

In this work, we use data aggregation to train the reinforcement learning of the suggested approach using counts of EV charging data from the city of Dundee's open data site, which presents statistics on various EV requests [53,57]. Each charging request provides details about the charger's moniker, start and end times, energy consumption, power output, and physical location, as well as the shortest distance, reaching time, and idle time required to accomplish the charging operation and the various six actions. Every studied station has three distinct types of chargers: slow (7 kW), fast (22 kW), and rapid (>43 kW) [49,57]. In the present study, a four-month dataset of observed requests aggregated across 127 days is used to produce the descriptive statistics. The excluded outliers, or 0.83%, are the charging times at the rapid type that differ by ±3σ from the median (the total number of rapid-charger charging requests used in this investigation is 4645). According to Table 6, the rapid chargers' standard deviation is 24.08 min, whereas their typical charging time is 516.5 min. Five main factors affect the charging-sequence acceleration of EVs: battery size (the larger the battery capacity, the longer it takes to charge), battery status (empty vs. full, or half full), the vehicle's charging rate, the charging point's rate, and the weather (charging time tends to be longer at lower temperatures, especially when using a fast charger). Moreover, EVs are less efficient at lower temperatures, so too much travel distance cannot be added for a given charging time. The descriptive statistics of the EV charging requests used in the training study region over the four months are shown in Table 6. The preferred scheduling obtained from ($dEVsS_t$) takes from 59 iterations at a 1-EV size to 9116 iterations at a 6-EV size to advise the EV drivers and stations $\delta_i$, paired together by their shared beneficial interests through the AIoT managing the deep Q network illustrated in Figure 10, according to the utilization percentage illustrated in Figure 11. Assigning more than 150 EVs at the same time to a specific station generates unavailable time. Moreover, if the problem size increases beyond 559 EVs, no scheduling solution is obtained by the (BASNNC) algorithm, while ($dEVsS_t$) generates a solution in just over 5.7 min and only fails beyond 1374 EVs. The absolute average ideal EV service time is (644.58/52) = 10.73 h/EV. The scheduling of the different actions for the requested charging orders according to the Bayesian-regularized BASNNC search algorithm, illustrated in Figure 12, yields an average charging time of 55 h per six EVs (9.16 h/EV), while the proposed AIoT scheduling, when integrated to manage the DQN, reduces the idle time of the stations and the waiting time of the EVs to gain their requested actions, as illustrated in Figure 13, to an average of 41.25 h per six EVs (6.875 h/EV). Both are affected by the behavior of the index $\tau_t^d / I_0^{R|S}$ throughout the day every 10 min. Over the four months, the averages are $\tau_t^d = 35$ and $I_0^{R|S} = 12$ in the period 5 a.m. to 11 a.m., and $\tau_t^d = 45$ and $I_0^{R|S} = 8$ in the period 5 p.m. to 11 p.m.
Tracing the EVs' request behavior reveals that the exponential distribution followed in the morning changes to a Weibull distribution in the afternoon along the studied interval, which gives the lowest error in the service-time expectation, as illustrated in Figure 14 and Figure 15.
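A minimal sketch of checking such a morning/afternoon split, fitting an exponential and a Weibull distribution to inter-request samples with SciPy, is shown below; the samples are synthetic stand-ins, not the Dundee data.

```python
# Minimal sketch: fit exponential and Weibull distributions to inter-request
# samples, mirroring the morning/afternoon split described above.
# The samples below are synthetic, not the Dundee open data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
morning = rng.exponential(scale=10.0, size=500)            # synthetic morning gaps
afternoon = rng.weibull(1.8, size=500) * 12.0              # synthetic afternoon gaps

loc_e, scale_e = stats.expon.fit(morning, floc=0)          # exponential fit
c_w, loc_w, scale_w = stats.weibull_min.fit(afternoon, floc=0)

print("exponential scale (morning):", scale_e)
print("weibull shape/scale (afternoon):", c_w, scale_w)
```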
Part of the results of the 2145 generated hypothetical examples dealing with the EVs' requests or stations' invitations through the biCN, managed by the AIoT over four months, for all EVs executing six potential actions at specific stations, checks the condition of the shortest route between the EVs' requests and the stations on the three main roads stored in the downtown dataset, and is listed in Table 7.
MATLAB is used to predict the total service time for the proposed methodology and BASNNC. The aggregated data are classified into four groups according to the problem size; the first covers three to six EVs, each executing all main actions at the assigned stations. The average ideal time approximates 9.167 h, while the proposed approach averages approximately 8.562 h and BASNNC averages 10.42 h. The OEE indicates that the ($dEVsS_t$) methodology reaches 72.4% for the 2145 (i.e., 43 × 50) generated hypothetical examples, versus 59% for the other algorithm. The worst-performing hypothetical examples in Table 7 under the proposed methodology ($dEVsS_t$) were chosen and tackled using Grey Wolf Optimization (GWO), and another four groups ($FG_{31}$ to $FG_{34}$) were extracted to resolve their scheduling, giving the solutions shown in Table 8. The worst results of the hypothetical examples shown in Table 8, plus three further large-scale examples (more than eight EVs) under the proposed algorithm ($dEVsS_t$), were then tackled using a third metaheuristic optimization, the Sine-Cosine and Whale (SCW) algorithm, to check its efficiency [58] and obtain the solutions shown in Table 9. The average service time of the proposed ($dEVsS_t$), compared with GWO, is superior by only 4%.
Table 7, Table 8 and Table 9 report the test of the average service time of the $Q(s,a)$ groups, each consisting of 50 hypothetical examples with the same number of EVs and requested actions in different arrangements, waiting for the solution to be extracted within 60 s or less of running the MATLAB code. The authors noticed that SCW failed to obtain solutions in time, taking up to 26 min for all the examples with more than seventeen simultaneous EV requests. While the solutions of GWO and SCW are close for problem sizes of fewer than six EVs per charging point per station, the proposed methodology is superior to SCW and GWO for six and up to eight EVs; there, the average service time of the proposed ($dEVsS_t$), compared with GWO, is superior by 15%.

6. Conclusions

Many authors have tried to integrate heuristic steps to enhance their founded solutions with minimum errors [39,54,55]. This work modifies the DQN searching mechanism for tackling the uncertain EVs' requests and the electric charging stations' invitations to find the preferred assignment in minimum time, relieving the EV by guiding it to the closest electrical charging station. This bidirectional connectivity is managed by the AIoT, which tries to achieve two objectives. The first is to reduce the scheduled service time, which must lie on the trip's road, by analyzing the route distance length. Therefore, the proposed methodology consists of two sequential phases; the first constructs fuzzy metaheuristic scheduling steps that enhance the search in the DQN for the best action-value function (Equation (8)). Figure 9 discusses the network managed by the autonomous Internet of Things (AIoT) to handle the requests and invitations of the EVs during their trips for decreasing the EVs' service time ($dEVsS_t$) [29]; an important gained feature is that variables such as arrival and departure times or power usage are not required in advance, in contrast to optimization-based techniques, because the system can be managed by the autonomous clouding technique and the neural network can estimate the right choice based on the present parameters ($\theta_i$). The second phase carried out the cost analysis for assigning a specific number $N$ of EVs to the preferred $m$ stations and extracting the Gantt chart for distributing the EVs simultaneously to receive their requested actions, especially when taking advantage of the exponential or Weibull distribution of the EVs' servicing for the charging-action rate during service, as illustrated in Figure 14 and Figure 15. The proposed method is verified by computing the OEE for the proposed approach and the comparison presented by Qing Wu [38]. Because of the large solution space $(N!)^m$, the authors resort to smart scheduling programming. The proposed AIoT-DQN methodology was written in MATLAB to tackle the drawbacks of native DQN [52,59]; the metaheuristic was therefore the preferred mechanism for tackling the problem. The aggregated data are classified into four groups according to the problem size; the first covers three to six EVs, each executing all main actions at the assigned station. The average ideal time approximates 9.167 h, the proposed method averages approximately 6.875 h, and BASNNC averages 10.47 h. The OEE indicator equals 79.06% for the ($dEVsS_t$) methodology and 58.7% for the other algorithms discussed above. Table 10 shows some comparative indicators to check the scope of superiority of ($dEVsS_t$) over the BASNNC algorithm in finding a scheduling distribution for different EVs that need some service actions at the same time.

Author Contributions

Conceptualization, A.M.A.; Data curation, A.M.A. and A.A.; Formal analysis, A.M.A.; Funding acquisition, A.M.A.; Investigation, A.M.A.; Methodology, A.M.A.; Project administration, A.M.A. and A.A.; Software, A.A.; Supervision, A.M.A.; Validation, A.M.A.; Visualization, A.M.A. and A.A.; Writing—original draft, A.M.A.; Writing—review & editing, A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Allen, J.; Koomen, J. Planning using a temporal world model. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, Germany, 8–12 August 1983; Volume 2, pp. 741–747. [Google Scholar]
  2. Bartak, B. Slot Models for Schedulers Enhanced by planning Capabilities. In Proceedings of the 19th Workshop of the UK Planning and Scheduling Special Interest Group, Milton Keynes, UK, 14–15 December 2000; pp. 11–24. [Google Scholar]
  3. Marsay, D.J. Uncertainty in Planning: Adapting the framework of Game Theory. In Proceedings of the 19th Workshop of the UK Planning and Scheduling Special Interest Group, Milton Keynes, UK, 14–15 December 2000; pp. 101–107. [Google Scholar]
  4. Noronha, S.J.; Sarma, V.V.S. Knowledge-Based Approaches for Scheduling Problems: A Survey. IEEE Trans. Knowl. Data Eng. 1991, 3, 160–171. [Google Scholar] [CrossRef]
  5. Kim, S.; Lim, H. Reinforcement Learning Based Energy Management Algorithm for Smart Energy Buildings. Energies 2018, 11, 2010. [Google Scholar] [CrossRef]
  6. Qian, T.; Shao, C.; Wang, X.; Shahidehpour, M. Deep Reinforcement Learning for EV Charging Navigation by Coordinating Smart Grid and Intelligent Transportation System. IEEE Trans. Smart Grid 2020, 11, 1714–1723. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Yu, X.; Guo, D.; Yin, Y.; Zhang, Z. Weights and structure determination of multiple-input feed-forward neural network activated by Chebyshev polynomials of Class 2 via cross-validation. Neural Comput. Appl. 2014, 25, 1761–1770. [Google Scholar] [CrossRef]
  8. Silva, F.C.; Ahmed, M.A.; Martínez, J.M.; Kim, Y.-C. Design and Implementation of a Blockchain-Based Energy Trading Platform for Electric Vehicles in Smart Campus Parking Lots. Energies 2019, 12, 4814. [Google Scholar] [CrossRef]
  9. Schwemmle, N. Short-Term Spatio-Temporal Demand Pattern Predictions of Trip Demand. Master’s Thesis, Katholieke Universiteit Leuven, Leuven, Belgium, 2021. Available online: https://zenodo.org/record/4514435#.YRZTNYgzbIU (accessed on 6 February 2021).
  10. Wang, R.; Chen, Z.; Xing, Q.; Zhang, Z.; Zhang, T. A Modified Rainbow-Based Deep Reinforcement Learning Method for Optimal Scheduling of Charging Station. Sustainability 2022, 14, 1884. [Google Scholar] [CrossRef]
  11. Soldan, F.; Bionda, E.; Mauri, G.; Celaschi, S. Short-term forecast of EV charging stations occupancy probability using big data streaming analysis. arXiv 2021, arXiv:2104.12503. [Google Scholar]
  12. Wan, Y.; Qin, J.; Ma, Q.; Fu, W.; Wang, S. Multi-agent DRL-based data-driven approach for PEVs charging/discharging scheduling in smart grid. J. Frankl. Inst. 2022, 359, 1747–1767. [Google Scholar] [CrossRef]
  13. Lee, K.-B.; Ahmed, M.A.; Kang, D.-K.; Kim, Y.-C. Deep Reinforcement Learning Based Optimal Route and Charging Station Selection. Energies 2020, 13, 6255. [Google Scholar] [CrossRef]
  14. Yang, J.-Y.; Chou, L.-D.; Chang, Y.-J. Electric-Vehicle Navigation System Based on Power Consumption. IEEE Trans. Veh. Technol. 2016, 65, 5930–5943. [Google Scholar] [CrossRef]
  15. Motz, M.; Huber, J.; Weinhardt, C. Forecasting BEV charging station occupancy at work places. In Informatik 2020; Reussner, R.H., Koziolek, A., Heinrich, R., Eds.; Gesellschaft für Informatik: Bonn, Germany, 2021; p. 771e81. [Google Scholar] [CrossRef]
  16. Schrittwieser, J.; Antonoglou, I.; Hubert, T.; Simonyan, K.; Sifre, L.; Schmitt, S.; Guez, A.; Lockhart, E.; Hassabis, D.; Graepel, T.; et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature 2020, 588, 604–609. [Google Scholar] [CrossRef] [PubMed]
  17. Zhang, H.; Yu, T. AlphaZero. In Deep Reinforcement Learning; Dong, H., Ding, Z., Zhang, S., Eds.; Springer: Singapore, 2020. [Google Scholar] [CrossRef]
  18. Engel, H.; Hensley, R.; Knupfer, S.; Sahdev, S. Charging Ahead: Electric-Vehicle Infrastructure Demand; McKinsey Center for Future Mobility: New York, NY, USA, 2018. [Google Scholar]
  19. Sawers, P. Google Maps Will Now Show Real-Time Availability of Electric Vehicle Charging Stations. 2019. Available online: https://venturebeat.com/2019/04/23/google-maps-will-now-show-real-time-availability-of-charging-stations-for-electric-cars/ (accessed on 1 April 2022).
  20. Shioda, M.; Ito, T. Learning of Evaluation Functions on Mini-Shogi Using Self-playing Game Records. In Proceedings of the International Conference on Technologies and Applications of Artificial Intelligence (TAAI), Taipei, Taiwan, 3–5 December 2020; pp. 41–46. [Google Scholar] [CrossRef]
  21. Amara-Ouali, Y.; Goude, Y.; Massart, P.; Poggi, J.M.; Yan, H. A review of electric vehicle load open data and models. Energies 2021, 14, 2233. [Google Scholar] [CrossRef]
  22. François-Lavet, V.; Henderson, P.; Islam, R.; Bellemare, M.G.; Pineau, J. An Introduction to Deep Reinforcement Learning. Found. Trends Mach. Learning 2018, 11, 219–354, arXiv:1811.12560. Available online: https://arxiv.org/abs/1811.12560 (accessed on 30 November 2018). [CrossRef]
  23. Ji, Y.; Wang, J.; Xu, J.; Fang, X.; Zhang, H. Real-time energy management of a microgrid using deep reinforcement learning. Energies 2019, 12, 2291. [Google Scholar] [CrossRef]
  24. Sadeghianpourhamami, N.; Deleu, J.; Develder, C. Definition and Evaluation of Model-Free Coordination of Electrical Vehicle Charging with Reinforcement Learning. IEEE Trans. Smart Grid 2020, 11, 203–214. [Google Scholar] [CrossRef]
  25. Gu, S.; Lillicrap, T.; Ghahramani, Z.; Turner, R.E.; Levine, S. Qprop: Sample-efficient policy gradient with an off-policy critic. arXiv 2016, arXiv:1611.02247. [Google Scholar]
  26. Lei, L.; Tan, Y.; Zheng, K.; Liu, S.; Zhang, K.; Shen, X. Deep Reinforcement Learning for Autonomous Internet of Things: Model, Applications and Challenges. IEEE Commun. Surv. Tutor. 2020, 22, 1722–1760. [Google Scholar] [CrossRef]
  27. Abed, A.M.; Elattar, S. Minimize the Route Length Using Heuristic Method Aided with Simulated Annealing to Reinforce Lean Management Sustainability. Processes 2020, 8, 495. [Google Scholar] [CrossRef]
  28. Subramanian, A.; Chitlangia, S.; Baths, V. Reinforcement learning and its connections with neuroscience and psychology. Neural Netw. 2022, 145, 271–287. [Google Scholar] [CrossRef]
  29. Lee, S.; Choi, D.-H. Energy Management of Smart Home with Home Appliances, Energy Storage System and Electric Vehicle: A Hierarchical Deep Reinforcement Learning Approach. Sensors 2020, 20, 2157. [Google Scholar] [CrossRef]
  30. Abdullah, H.M.; Gastli, A.; Ben-Brahim, L. Reinforcement Learning Based EV Charging Management Systems–A Review. IEEE Access 2021, 9, 41506–41531. [Google Scholar] [CrossRef]
  31. Shibl, M.; Ismail, L.; Massoud, A. Machine Learning-Based Management of Electric Vehicles Charging: Towards Highly-Dispersed Fast Chargers. Energies 2020, 13, 5429. [Google Scholar]
  32. Liu, Y.; Chen, W.; Huang, Z. Reinforcement Learning-Based Multiple Constraint Electric Vehicle Charging Service Scheduling. Math. Probl. Eng. 2021, 2021, 1401802. [Google Scholar] [CrossRef]
  33. Valogianni, K.; Ketter, W.; Collins, J. Smart Charging of Electric Vehicles Using Reinforcement Learning. In Proceedings of the Workshops at the Twenty-Seventh AAAI Conference on Artificial Intelligence, Bellevue, WA, USA, 14–18 July 2013; pp. 41–48. Available online: https://www.researchgate.net/publication/286726772_Smart_charging_of_electric_vehicles_using_reinforcement_learning (accessed on 24 February 2022).
  34. Wang, K.; Wang, H.; Yang, J.; Feng, J.; Li, Y.; Zhang, S.; Okoye, M.O. Electric vehicle clusters scheduling strategy considering real-time electricity prices based on deep reinforcement learning. Energy Rep. 2022, 8 (Suppl. 4), 695–703. [Google Scholar] [CrossRef]
  35. Tuchnitz, F.; Ebell, N.; Schlund, J.; Pruckner, M. Development and Evaluation of a Smart Charging Strategy for an Electric Vehicle Fleet Based on Reinforcement Learning. Appl. Energy 2021, 285, 116382. [Google Scholar] [CrossRef]
  36. Wan, Z.; Li, H.; He, H.; Prokhorov, D. Model-Free Real-Time EV Charging Scheduling Based on Deep Reinforcement Learning. IEEE Trans. Smart Grid 2019, 10, 5246–5257. [Google Scholar] [CrossRef]
  37. Ma, T.; Faye, S. Multistep electric vehicle charging station occupancy prediction using hybrid LSTM neural networks. Energy 2022, 244 Pt B, 123217. [Google Scholar] [CrossRef]
  38. Wu, Q.; Ma, Z.; Xu, G.; Li, S.; Chen, D. A Novel Neural Network Classifier Using Beetle Antennae Search Algorithm for Pattern Classification. IEEE Access 2019, 7, 64686–64696. [Google Scholar] [CrossRef]
  39. Zhang, L.; Li, K.; Bai, E.W.; Irwin, G.W. Two-stage orthogonal least squares methods for neural network construction. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1608–1621. [Google Scholar] [CrossRef]
  40. Han, H.G.; Lu, W.; Hou, Y.; Qiao, J.F. An adaptive-PSO-based self organizing RBF neural network. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 104–117. [Google Scholar] [CrossRef]
  41. Yıldız, A.R. A novel hybrid immune algorithm for global optimization in design and manufacturing. Robot. Comput. Manuf. 2009, 25, 261–270. [Google Scholar] [CrossRef]
  42. Khalilpourazari, S.; Khalilpourazary, S. Optimization of production time in the multi-pass milling process via a robust grey wolf optimizer. Neural Comput. Appl. 2018, 29, 1321–1336. [Google Scholar] [CrossRef]
  43. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  44. Nguyen, T.T.; Nguyen, T.A.; Trinh, Q.H. Optimization of milling parameters for energy savings and surface quality. Arab. J. Sci. Eng. 2020, 45, 9111–9125. [Google Scholar] [CrossRef]
  45. Kaur, G.; Dhillon, J. Economic power generation scheduling exploiting hill-climbed Sine–Cosine algorithm. Appl. Soft Comput. 2021, 111, 107690. [Google Scholar] [CrossRef]
  46. World Economic Forum. Electric Vehicles for Smarter Cities: The Future of Energy and Mobility; World Economic Forum: Cologny, Switzerland, 2018; p. 32. Available online: https://www3.weforum.org/docs/WEF_2018_%20Electric_For_Smarter_Cities.pdf (accessed on 28 January 2022).
  47. Ghosh, A. Possibilities and Challenges for the Inclusion of the Electric Vehicle (EV) to Reduce the Carbon Footprint in the Transport Sector: A Review. Energies 2020, 13, 2602. [Google Scholar] [CrossRef]
  48. Blair, E.H. Safety Culture. Prof. Saf. 2013, 58, 59–65. [Google Scholar]
  49. EU Science Hub. Electric Vehicles: A New Model to Reduce Time Wasted at Charging Points. 2019. Available online: https://ec.europa.eu/jrc/en/news/electric-vehicles-newmodel-reduce-time-wasted-charging-points (accessed on 28 January 2022).
  50. Zhang, J.; Yan, J.; Liu, Y.; Zhang, H.; Lv, G. Daily electric vehicle charging load profiles considering demographics of vehicle users. Appl. Energy 2020, 274, 115063. [Google Scholar] [CrossRef]
  51. Zhang, X.; Peng, L.; Cao, Y.; Liu, S.; Zhou, H.; Huang, K. Towards holistic charging management for urban electric taxi via a hybrid deployment of battery charging and swap stations. Renew. Energy 2020, 155, 703–716. [Google Scholar] [CrossRef]
  52. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.A.; Fidjeland, A.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  53. Wang, J.Q.; Du, Y.; Wang, J. LSTM based long-term energy consumption prediction with periodicity. Energy 2020, 197, 117197. [Google Scholar] [CrossRef]
  54. Ren, C.; An, N.; Wang, J.; Li, L.; Hu, B.; Shang, D. Optimal parameters selection for BP neural network based on particle swarm optimization: A case study of wind speed forecasting. Knowl.-Based Syst. 2014, 56, 226–239. [Google Scholar] [CrossRef]
  55. Khadse, C.B.; Chaudhari, M.A.; Borghate, V.B. Conjugate gradient back-propagation based artificial neural network for real time power quality assessment. Int. J. Electr. Power Energy Syst. 2016, 82, 197–206. [Google Scholar] [CrossRef]
  56. Schwemmle, N.; Ma, T.Y. Hyper parameter optimization for neural network based taxi demand prediction. In Proceedings of the BIVEC-GIBET Transport Research Days 2021, Delft, The Netherlands, 27–28 May 2021. [Google Scholar]
  57. Guo, Q.; Xin, S.; Sun, H.; Li, Z.; Zhang, B. Rapid-Charging Navigation of Electric Vehicles Based on Real-Time Power Systems and Traffic Data. IEEE Trans. Smart Grid 2014, 5, 1969–1979. [Google Scholar] [CrossRef]
  58. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  59. Yang, H.; Deng, Y.; Qiu, J.; Li, M.; Lai, M.; Dong, Z.Y. Electric Vehicle Route Selection and Charging Navigation Strategy Based on Crowd Sensing. IEEE Trans. Ind. Inform. 2017, 13, 2214–2226. [Google Scholar] [CrossRef]
Figure 1. The architecture of the deep learning network.
Figure 2. The taxonomy of machine learning models and methodologies.
Figure 3. The map layout of the state's available directions and electric charger stations.
Figure 4. The spatial distribution of chargers in the city of Dundee and EVs allocation via Zap Map (source: https://data.dundeecity.gov.uk/dataset/ev-charging-data, accessed on 28 January 2022).
Figure 5. The stages of the methodology framework.
Figure 6. The procedures of the EV charging process upon request.
Figure 7. The significant factors affecting the service span time.
Figure 8. The significant factors affecting utilization.
Figure 9. The queue of the EVs and their service time. The DQN network was built for a specific station δ1 out of three different stations δi selected on the Zap Map, one of which meets the distance considerations discussed in Equations (4)–(10) and illustrated in Figure 10, to distribute the six different EVs to be served by the six actions ai until charging is complete, according to a specific policy π that selects a specific station δi.
Figure 10. The DQN network for a specific station out of three different stations.
Figure 11. The iterations to obtain the scheduling by the proposed methodology.
Figure 12. The solution of the service span time by the BASNNC algorithm.
Figure 13. The solution of the service span time by the proposed methodology aided by AIoT management.
Figure 14. The lowest-error distribution of the expected EVs' service time is exponential during the period [5 a.m.–11 a.m.].
Figure 15. The lowest-error distribution of the expected EVs' service time is Weibull during the period [5 p.m.–11 p.m.].
Table 1. The work charter design sheet.
Title: Scheduling EVs' Charging Stations on Their Trip Route Using AIoT
Goals
  • Reduce idle time actions (e.g., setup time, inspection time, response time, etc.)
  • Reduce the service time by dividing the charging span into fuzzy zones.
  • Select the preferred station that is closer to the main trip route.
Problem Scope: The work focuses on scheduling the different selections of EVs' station requests to achieve the goals above by modelling a fuzzy network trained with bidirectional reinforcement learning (DQN) and managing it through the AIoT. The Q, V loss function is used to generate the initial searching values, formulated as follows (a numerical sketch of these terms appears after Table 1):
λ_t^Q = (Q(s_t, a_t; θ_{λ_t^Q}))^2 and λ_t^V = (V(s_t, a_t; θ_{λ_t^V}))^2
The OEE is a verification indicator revealing the effectiveness of the proposed scheduling by three terms:
Availability = Station Run Time/Planned Service Time.
Performance = (Ideal Charging Cycle Time × Total Satisfied EVs Count)/Station Run Time.
Quality = Good EVs servicing Count/Total serviced Count.
Government/Customer Impact:
  • Deployment of EVs to help clean the environment
  • Smart Scheduling Q Network Map
  • Transportation without hindrances.
Business Impact: The expected time savings are assessed by comparing the results with Qing Wu (2019) [38] and Lee et al. (2020) [29], reducing the trip time and increasing the stations' utilization as measured by the OEE [47].
Current State (Baseline) | Objective and Stretch Target
OEE = [(Satisfied EVs × Ideal Charging Cycle Time)/Planned Service Time]
OEE = Availability × Performance × Quality
Metrics | "As-is" | "Should-be" | Tolerance | Set Mfg. condition
Standardization | 63% | 97% | No control | No procedures
Idle-Time | 23% | 6% | Control | Fixed procedures
Construct cause and effect | Construct FMEA
Policy function: Y = f (battery charge level, route length, trip distance, the role on queue list L)
Major contribution of this work: smart fuzzy scheduling for DQN managed by AIoT
  • Zap Map on the Google search engine
  • Pareto Chart
  • Proposed Solutions
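The charter terms above can be made concrete with a short numerical sketch. The MATLAB snippet below is illustrative only and is not the article's implementation: every numeric value is an assumed placeholder, and the Q/V losses are interpreted here as squared gaps between assumed targets and the network estimates.

```matlab
% Minimal MATLAB sketch of the Table 1 verification terms (illustrative only).
% The Q/V losses are interpreted as squared gaps between assumed targets and
% the network estimates; every numeric value below is a placeholder.

% --- Squared Q and V losses used to seed the DQN search ---
q_target = 7.4;  q_est = 6.9;       % assumed target and estimated Q(s_t, a_t)
v_target = 7.1;  v_est = 6.8;       % assumed target and estimated V(s_t, a_t)
lambda_Q = (q_target - q_est)^2;    % lambda_t^Q
lambda_V = (v_target - v_est)^2;    % lambda_t^V

% --- OEE verification indicator ---
planned_service_time = 16;          % planned service time [h] (assumed)
station_run_time     = 14;          % station run time [h] (assumed)
ideal_cycle_time     = 0.75;        % ideal charging cycle time [h/EV] (assumed)
satisfied_EVs        = 15;          % total satisfied EV count (assumed)
good_EVs             = 14;          % EVs serviced within tolerance (assumed)

availability = station_run_time / planned_service_time;
performance  = (ideal_cycle_time * satisfied_EVs) / station_run_time;
quality      = good_EVs / satisfied_EVs;
OEE          = availability * performance * quality;

fprintf('lambda_Q = %.3f, lambda_V = %.3f, OEE = %.1f%%\n', ...
        lambda_Q, lambda_V, 100*OEE);
```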
Table 2. Symbols describing the illustrated map discussed in Figure 3.
x, y: Coordinates of available serviceable stations. | A: The gap between two streets.
δi,j: The number of stations tallied on the map (1, 2, 3, …, i, …, m). | F: The separation between the major thoroughfare and the first candidate station.
N: The number of EV places on each street, regardless of whether πA or πB is used. | B: The distance between the last i, j on the map's vertical street and the rear street.
FGi: The fuzzy group for the selected δi, i = 1, …, N = 3. | l: The separation between two nearby stations Ji and Ji+1.
λi^d: The route length under cost consideration. | L: The list of suggested stations.
Table 3. The AIoT-DQN equation parameters and decision variables (a data-structure sketch follows the table).
Parameters
CπA: The distance cost for an EV to arrive at δi as suggested via πA.
CπB: The distance cost for an EV to arrive at δi as suggested via πB.
CdπB: The cost of discarding stations δj owing to distance, or because the EV driver canceled and switched to another πB.
hπA: Processing fees for a usable station to be on the available list of EVs in (πA) at period (t).
hπB: Processing fees for a usable station to be on the available list of EVs in (πB) at period (t).
I0S: The number of candidate stations δi at time (t) according to (πA).
I0R: The number of candidate stations δi at time (t) according to (πB).
rtAB: The uncertain serviceable number of δj in (πB) and/or hybridized with (πA) in period t + Δt.
r̄tAB: The estimated average number of di + dj sites upgraded to serviceable during period t.
r̂tAB: The maximum variation from the mean of the number of δi upgraded to serviceable in period t.
τtd: The maximum number of uncertain requests and (re)requests for the Σi=1…n δi along the route that can diverge from the mean concurrently until the end of period (t).
dt: The uncertain service time of δi requested for the list L.
b: The EV backlogging cost to a serviceable δj station at (πB) after finishing (πA).
γ: The discount parameter controlling the reward gained.
Decision Variables
StπA: The number of serviceable δi distributed in period t of (πA).
StπB: The number of serviceable δj at (πB) that is updated from (πA) in period (t).
XtD: The number of δi that form list L but were eliminated during period t, owing to cancellation or to violating the cost-analysis step, and were delayed until the route actions were complete.
ItS: The number of backlogged EVs i + j at the end of period (t) for (πA & πB).
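For readability, the parameters and decision variables of Table 3 can be gathered into a single MATLAB structure. The sketch below is a hypothetical container whose field names simply mirror the symbols above; all values are placeholders, not data from the study.

```matlab
% Hypothetical container for the AIoT-DQN parameters of Table 3.
% Field names mirror the listed symbols; every value is a placeholder.
P = struct( ...
    'C_piA',  3.2, ...   % distance cost to reach delta_i via pi_A
    'C_piB',  4.1, ...   % distance cost to reach delta_i via pi_B
    'Cd_piB', 1.5, ...   % cost of discarding delta_j (canceled / switched policy)
    'h_piA',  0.6, ...   % processing fee per period t on the pi_A list
    'h_piB',  0.8, ...   % processing fee per period t on the pi_B list
    'I0_S',   3,   ...   % candidate stations delta_i under pi_A at time t
    'I0_R',   2,   ...   % candidate stations delta_i under pi_B at time t
    'b',      0.9, ...   % EV backlogging cost after finishing pi_A
    'gamma',  0.95);     % discount parameter for the gained reward

% Decision variables updated each period t (placeholders).
D = struct('S_t_piA', 2, 'S_t_piB', 1, 'X_t_D', 0, 'I_t_S', 1);

fprintf('Discounted backlog cost carried to t+1: %.3f\n', P.gamma * P.b * D.I_t_S);
```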
Table 4. The (dEVsSt) pseudo steps to construct the DQ Network (a minimal routing sketch follows the table).
Step 1: Construct the serviceable network that represents the EVs requesting the charger stations.
Step 2: Sort the (πA)-generated request list L of available stations δi in ascending order, from nearest to farthest.
Step 3: The route begins at the request point and continues until all stations δi generated by the policy πA|B are reached.
Step 4: All serviceable δid and δjd generated by the (πA) and (πB) lists, for stations on the road and on the opposite side respectively, are subject to the constraints in Equation (7), which lead to selecting the shortest route λ = di + dj; then go to Step 8.
Step 5: λim and λjn are the Euclidean distances between the current δn and δm, where δm and δn are the first and last stations according to the list L of (πB).
Step 6: Choose the serviceable stations δj along the suggested route so that they are the closest, serve in the least amount of time, and save money in (πB); take the last (i) as the starting point to search for the others, keeping the previously calculated route distance as λi.
Step 7: If the route cost of the (πA) branch (costdπA) does not exceed that of the (πB) branch (costdπB), select the πA branch for all δi, neglect πB for all δj, and save it in the backlogging list to check the objective's attainability from πA; otherwise, hybridize the EVs up to the first six objects on the route.
Step 8: If min[(λim > λjn + F)] < y, arrange the stations in list L in descending order and follow the indexing function of Equation (8) to classify the rtτ.
Step 9: If all subsequent items in L are serviced, repeat Steps 3 and 4.
Step 10: Return to the main street before proceeding to the feeding place.
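A minimal MATLAB sketch of the list-building logic in Table 4 is given below. The coordinates, costs, and the offset F are assumed values, and Steps 7 and 8 are interpreted freely where the original conditions are compact; this is an illustration, not the authors' code.

```matlab
% Minimal MATLAB sketch of the Table 4 list-building steps (assumptions throughout).

request_xy = [0, 0];                        % EV request point (Step 1)
stations_A = [2 1; 5 3; 9 2];               % pi_A candidates (x, y), assumed
stations_B = [3 -2; 7 -1];                  % pi_B candidates (x, y), assumed
cost_A     = [0; 4; 6];                     % detour/disposal cost per pi_A station, assumed
F          = 1.5;                           % offset of the first candidate from the route

% Step 2: sort the pi_A list L from nearest to farthest (Euclidean distance).
d_A = sqrt(sum((stations_A - request_xy).^2, 2));
[d_A, orderA] = sort(d_A);
stations_A = stations_A(orderA, :);
cost_A     = cost_A(orderA);

% Step 5: Euclidean distances of the pi_B (opposite-side) candidates.
d_B = sqrt(sum((stations_B - request_xy).^2, 2));

% Step 7 (interpreted): if a zero-cost pi_A station exists, keep the pi_A
% branch only; otherwise hybridize pi_A with the best pi_B station.
if any(cost_A == 0)
    lambda = d_A(find(cost_A == 0, 1));     % nearest zero-cost pi_A station
    chosen = 'pi_A only';
else
    [lambda, k] = min(d_A(1) + d_B);        % shortest lambda = d_i + d_j (Step 4)
    chosen = sprintf('hybrid pi_A + pi_B(%d)', k);
end

% Step 8 (interpreted): if the pi_A leg exceeds the pi_B leg plus F,
% re-rank the list L in descending order before classification.
if min(d_A) > min(d_B) + F
    stations_A = flipud(stations_A);
end
fprintf('Selected route: %s, lambda = %.2f\n', chosen, lambda);
```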
Table 5. The {FGi} set: the fuzzy index for the groups of jobs (set of actions); a minimal lookup sketch follows the table.
The Fuzzy Index for the Groups of Jobs {Ui} Set
J_act(EVs): EV(1) | EV(2) | EV(3) | EV(4) | EV(5) | EV(6)
a1 then a210.50.4132.20.50.8a4 then a1−ve2−ve0.1120.31−ve
a1 then a36.252.35−ve0.54−vea4 then a23.81.60.340.780.190.2857
a1 then a42.250.1150.470.2310.9a4 then a32.15.4−ve−ve0.61−ve
a1 then a55.750.890.88−ve0.2a4 then a51.92.48−ve0.56−ve−ve
a1 then a66−ve30−ve0.35−vea4 then a620.61.34−ve0.42−ve
a2 then a1−ve1.25−ve−ve0.57−vea5 then a1ve0.0830.112−ve0.820.143
a2 then a3−ve3.375−ve−ve0.9−vea5 then a20.42−ve1.670.920.650.762
a2 then a4−ve0.6250.47−ve0.5670.364a5 then a30.211.50.656−ve1.3−ve
a2 then a5−ve1.50.07−ve0.174−vea5 then a4−ve−ve1.67−ve0.820.857
a2 then a6−ve0.3751.47−ve−ve−vea5 then a6−ve−ve3.34−ve1−ve
a3 then a1−ve−ve0.21.34−ve0.647a6 then a1−ve3.34−ve2.340.030.934
a3 then a21−ve2.64.7−ve1.411a6 then a21.122.67−ve7.34−ve1.802
a3 then a4−ve−ve31.45−ve1.529a6 then a30.449−ve10.310.734
a3 then a50.274−ve1.82.3−ve0.705a6 then a4−ve1.67−ve2.50.031.933
a3 then a60.312−ve60.110.0510.353a6 then a50.364−ve3.67ve1.267
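To illustrate how a fuzzy-index table such as Table 5 could drive the pairing of successive actions, the following MATLAB sketch looks up the best follow-up action for a single EV. The matrix values are hypothetical, NaN stands in for the "−ve" (infeasible) entries, and the max-index selection rule is an assumption rather than the article's procedure.

```matlab
% Hypothetical 6x6 excerpt of a fuzzy-index matrix J(i, j): the index of
% taking action a_j immediately after a_i for one EV. NaN marks '-ve'
% (infeasible) transitions. Values and selection rule are illustrative only.
J = [ NaN  1.00 6.25 2.25 5.75 6.00;
      NaN  NaN  NaN  NaN  NaN  NaN ;
      NaN  1.00 NaN  NaN  0.27 0.31;
      NaN  3.80 2.10 NaN  1.90 2.00;
      NaN  0.42 0.21 NaN  NaN  NaN ;
      NaN  1.12 0.45 NaN  0.36 NaN ];

current = 1;                                  % last executed action a_1
[bestJ, nextAction] = max(J(current, :));     % max ignores NaN entries in MATLAB
if isnan(bestJ)
    fprintf('No feasible follow-up action after a_%d\n', current);
else
    fprintf('After a_%d choose a_%d (fuzzy index %.2f)\n', current, nextAction, bestJ);
end
```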
Table 6. The descriptive analysis according to training data.
Seven Charger Point Types in Station | # of Studied Charging Points in Selected Stations | # of Charging Requests via Bidirectional Cloud Management | # of Requests per Day per Charging Point over Horizon Period t | Service Span Time (h): μ | σ
Slow (7 kW) | 36 (69.2%) | 2145 | 0.446875 | 194.13 | 19.7
Fast (22 kW) | 9 (17.3%) | 716 | 0.745833 | 426.4 | 195.38
Rapid (43 kW or up) | 7 (13.5%) | 4645 | 1.523148 | 24.08 | 7.62
Total | 52 | 7506 |  | 644.58 |
Table 7. Illustrative solutions for the first group FG for a size of four EVs or fewer, in [hour].
No. | (dEVsSt) | BASNNC || No. | (dEVsSt) | BASNNC || No. | (dEVsSt) | BASNNC || No. | (dEVsSt) | BASNNC || No. | (dEVsSt) | BASNNC
1 | 12.48 | 13 || 7 | 9.48 | 12.725 || 13 | 7.91 | 10.78 || 19 | 7.217 | 10.462 || 25 | 6.364 | 8.784
2 | 8.217 | 10.627 || 8 | 7.217 | 10.737 || 14 | 8.219 | 11.799 || 20 | 9.219 | 12.929 || 26 | 6.969 | 10.679
3 | 8.15 | 12.395 || 9 | 9.15 | 12.67 || 15 | 8.364 | 7.654 || 21 | 12.219 | 12.089 || 27 | 11.05 | 10.92
4 | 12.55 | 12.03 || 10 | 9.55 | 12.96 || 16 | 7.969 | 10.099 || 22 | 12.15 | 11.63 || 28 | 8.364 | 8.234
5 | 10.695 | 10.175 || 11 | 10.695 | 10.94 || 17 | 11.05 | 10.63 || 23 | 9.55 | 12.602 || 29 | 6.969 | 10.389
6 | 9.48 | 12.89 || 12 | 7.91 | 11.33 || 18 | 10.91 | 11.62 || 24 | 7.695 | 10.285 || 30 | 7.05 | 11.76
Table 8. Another verification of the proposed methodology against GWO scheduling, in [hour].
No. | (dEVsSt) | GWO || No. | (dEVsSt) | GWO || No. | (dEVsSt) | GWO || No. | (dEVsSt) | GWO || No. | (dEVsSt) | GWO || No. | (dEVsSt) | GWO
4 | 12.55 | 14.1 || 15 | 8.364 | 9.65 || 21 | 12.219 | 13.089 || 27 | 11.05 | 11.92 || 31 | 11.62 | 11.98 || 33 | 11.05 | 13.1
5 | 10.695 | 10.32 || 17 | 11.05 | 11.6 || 22 | 12.15 | 12.63 || 28 | 8.364 | 8.173 || 32 | 12.78 | 11.72 || 34 | 8.31 | 9.14
Table 9. Another verification of the proposed methodology against Sine-cosine and Whale (SCW) optimization scheduling, in [hour].
No. | (dEVsSt) | SCW || No. | (dEVsSt) | SCW || No. | (dEVsSt) | SCW || No. | (dEVsSt) | SCW || No. | (dEVsSt) | SCW || No. | (dEVsSt) | SCW
5 | 10.695 | 11.52 || 28 | 8.364 | 8.37 || 32 | 12.78 | 13.01 || 35 | 13.04 | 13.3 || 36 | 12.7 | 12.7 || 37 | 11.05 | -----
38 | 7.62 | 9.1 || 39 | 7.92 | 11.6 || 40 | 7.6 | 12.7 || 41 | 8.04 | 11.78 || 42 | 7.54 | 10.65 || 43 | 9.67 | -----
Table 10. The comparative average of thirty examples for four different problem sizes.
 | Three EVs | Four EVs | Five EVs | Six EVs | Eight EVs | Up to
Extracting solution time [s] | 6.6 | 8.1 | 21.3 | 34.7 | 58.6 | mins
Average distance among the perfect three stations and the EVs' request position [km] | 7.8 | 8.1 | 8.4 | 10.6 | 16.1 | 21
Ideal servicing time [h] | 7.121 | 7.535 | 7.918 | 8.075 | 10.2 | Over a day
Average span time of (dEVsSt) [h] | 6.921 | 7.143 | 7.474 | 6.876 | 7.9 | infeasible
Average span time of BASNNC [h] | 9.006 | 9.221 | 10.743 | 11.761 | 16.1 | -------
Average span time of GWO [h] | 9.215 | 9.7 | 10.74 | 11.361 | 18.7 | infeasible
Average span time of SCW [h] | 9.153 | 9.825 | 10.241 | 11.162 | 14.6 | -------
OEE of (dEVsSt) | 78.1% | 75.2% | 72.1% | 76.5% | 54.1% | -------
OEE of BASNNC | 30.1% | 29.1% | 43.7% | 71.0% | 52.4% | -------
OEE of GWO | 33.1% | 35.8% | 43.7% | 65.2% | 51.2% | -------
OEE of SCW | 32.2% | 37.5% | 37.0% | 62.3% | 51.1% | -------
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
