1. Introduction
The transformation from fuel-based vehicles to electric vehicles (EVs) can be catalyzed by proper charging station infrastructure. The demand for EVs has risen over the last five years, which poses the issue of crowded charging stations, with users facing increased waiting times. The installation of EV charging stations at optimal locations could partly alleviate this problem. At the same time, the charging infrastructure becomes even more crucial because it must cater to the three major upcoming technologies in transportation, namely, connected mobility, autonomous driving, and electrification [1], and it must not subject the grid to the adverse effects of imbalance [2].
The work reported in [3] suggests using an Urgency First Charging (UFC) policy to prioritize EV charging based on charging demand and remaining parking time, together with a reservation-based CS-selection scheme to choose the charging station associated with the minimum travel and charging time. Simulations show that this approach improves the user experience, shortens trip duration, and fully charges more EVs before the departure deadline. The increasing use of battery-powered electric vehicles has brought attention to the limited number of charging stations, which causes longer wait times and pricing constraints. This results in a degraded quality of experience for drivers and congestion-related problems for charging point providers. To address this, Ref. [4] proposes ChaseMe, a heuristic scheme based on two soft-computing techniques, to optimize charging station management and improve charging metrics for EVs. However, the ChaseMe method assumes that EV drivers will always follow the recommendations provided by the algorithm, which may not be the case in practice. If drivers do not follow the recommended routes or charging schedules, the performance of the algorithm may be degraded. Furthermore, the paper does not consider the impact of the method on the overall energy consumption and sustainability of the transportation system. The research carried out in [5] proposes an optimal configuration method for fast-charging stations using the fast power-supplement mode for electric vehicles. The method considers the charging characteristics of EV batteries, different types of EVs, and the spatial–temporal distribution of charging demand predicted through GPS data and the K-means method. One potential drawback of the proposed method is that it assumes users will always prioritize fast charging over slower, more conventional charging methods, which may not always be the case. Additionally, the proposed method may require significant infrastructure investments, which may be a barrier to adoption in some regions.
The EV charging navigation scheme proposed in [6] is a smart and efficient solution that utilizes Vehicular Edge Computing (VEC) networks to optimize charging schedules and minimize charging times. By taking into account real-time traffic conditions, energy prices, and battery SOC, the scheme is able to provide personalized charging plans for individual EV drivers. The work presented in [7] proposes a hybrid Long Short-Term Memory (LSTM) neural network-based approach to predict the occupancy of EV charging stations. The proposed method combines the strengths of the LSTM model and the autoregressive integrated moving average (ARIMA) model to improve the accuracy of the occupancy prediction. The approach is evaluated using real-world data from a public charging station in Hong Kong, and the results show that the hybrid LSTM neural network model outperforms the standalone LSTM and ARIMA models in terms of prediction accuracy. The study also highlights the importance of considering multistep predictions in EV charging station occupancy prediction to improve the reliability of the prediction results.
Intensive research has been undertaken in the area of the development of sustainable and easily accessible charging stations. Ref. [8] analyzes charging control strategies under conditions of power limitations caused by grid stability requirements. The optimal power allocation problem has been studied in various ways, including the use of game theory and evolutionary optimization strategies. The literature shows numerous studies relevant to the allocation of charging stations and optimal routing to charging stations amidst congestion management. The work proposed in the present paper introduces a novel approach for optimizing the allocation of energy-efficient charging nodes within a specific charging station, taking into consideration the presence of multiple vehicles simultaneously requesting charging services.
A Battery Digital Twin (BDT) model serves as a digital replica of a battery, facilitating precise modeling, simulation, and optimization of battery performance [9]. Widely employed in industries such as aerospace, automotive, and manufacturing, digital-twin technology has proven highly effective for tasks like predictive maintenance, design refinement, and performance enhancement. Developing a BDT necessitates the use of advanced numerical models capable of capturing the intricate physics and electrochemistry of batteries. These models can be integrated with real-time sensor data to offer dynamic and real-time representations of battery states [10]. In the renewable-energy storage sector, BDTs optimize energy management systems, ensuring efficient operation [11]. Ref. [12] provides a comprehensive comparison between on-board Battery Management Systems and the application of digital twins for batteries, highlighting the numerous advantages of BDTs in terms of prolonging battery service life and considering the potential for a second life for batteries.
Digital-twin technologies for battery systems, especially those associated with electric vehicles (EVs) and energy storage, incorporate Unscented Kalman Filters (UKF) for state-of-charge (SOC) estimation. This aspect plays a vital role in optimizing battery performance and prolonging its lifecycle. The digital-twin concept facilitates the development of real-time models for parameter estimation and optimization within battery management systems (BMS) [13]. An innovative approach utilizing the Unscented Kalman Filter in conjunction with neural networks has been introduced for simultaneously estimating the State of Charge (SOC) and the State of Health (SoH) of lithium-ion batteries. This method, which demonstrates high precision across diverse operating conditions, is implemented in electric vehicles and renewable-energy applications [14].
Cloud-based BMS solutions offer several advantages, including improved monitoring and diagnostics, online learning capabilities, and enhanced visualization of battery data. These systems can aggregate data from multiple vehicles, enabling a more comprehensive analysis and optimization of battery performance at the fleet level. Despite the potential benefits, cloud-based BMS face challenges such as connectivity issues, security concerns, and the need for reliable internet access. Future research in this field should focus on addressing these challenges and exploring the integration of emerging technologies such as edge computing and fog computing to further enhance the capabilities of cloud-based BMS [15].
The knowledge gaps identified in the present literature are addressed in this work, as described in Table 1.
The proposed system comprises several distinct sections:
The initial part deals with leveraging the pivotal role of BDT technology in modeling electric vehicle (EV) batteries. In the context of EV batteries, BDTs empower researchers to model battery behavior under diverse operational conditions and loads, enabling performance prediction and early issue identification. This knowledge, in turn, aids in the optimization of battery management systems, enhancing battery performance and extending battery lifespan. BDTs also enable the assessment of the impact of factors like temperature on battery performance and the optimization of battery designs for specific applications. The incorporation of advanced visualization tools and machine learning algorithms provides further insights into battery behavior, allowing for continual enhancement of the digital twin. The second section delves into the proposed algorithm, which is designed to allocate charging nodes and intelligently distribute energy based on the energy availability of the charging station and the EV energy demand. Detailed discussions of the optimization techniques and machine learning algorithms utilized in both the digital-twin implementation and the intelligent energy distribution are presented in subsequent sections. The concluding section underscores the advantages of incorporating the BDT into the work, specifically with respect to reducing EV user wait times and enhancing energy conservation.
2. Methodology
The architecture of the proposed system is shown in Figure 1. The first part involves the implementation of the DT of the Li-ion battery to obtain optimized battery parameters, including the SOC and time-to-charge. The BDT for every user is created based on the parameters uploaded to the cloud, including the internal resistance of the battery, which varies over drive cycles towards its End-of-Life (EOL). The BDT receives the cloud data associated with the physical battery, as uploaded by the on-board BMS of the user. The second part of the functional architecture is the machine learning block, which takes optimized parameters from the BDT as inputs and provides the time-to-charge and the charging energy required by the incoming vehicle, based on its current SOC.
The energy required to charge the EV (ε_vi) and the time-to-charge for the EV (τ_ch) are fed from the BDT to the novel ENDEAVOR algorithm to compute the Energy Distribution Ratio (EDR) for optimized energy distribution. Based on the energy available at the charging station and the energy demand from incoming users, the EDR is dynamically updated. These parameters control the charging current made available at each node. A mobile app for raising requests and receiving information regarding the status of node allocation was developed using Collab. The data from the user are fetched from the cloud. The charging station continuously updates its data regarding the influx of users and available energy to the cloud. The trend of the change in the EDR is monitored and an estimate of the EDR is made for faster computation. With the use of the BDT, along with the proposed node allocation and EDR-based charging, the waiting time of the users is decreased.
3. Implementation of Battery Digital Twin
A battery digital twin can be used to evaluate the degradation of the battery over drive cycles and to optimize battery usage, as well as to identify the possibility of a second-life application of the battery [17]. The use of the proposed decision tree in conjunction with the UKF for SOC estimation, together with support vector machine (SVM) regression for the digital-twin parameter update, enables the accurate estimation of the time-to-charge. To implement the digital twin, it is necessary to collect data on the battery's behavior, such as its voltage and current during charging and discharging cycles. These data can be obtained using sensors or through data-logging systems. The data are then used to calibrate the Thevenin 3RC model, which is then used to simulate the behavior of the battery under different conditions. The BDT implementation steps are elaborated in the sections below.
3.1. Experimental Data for the Battery
In this work, the experimental data associated with an INR 18650-20R lithium-ion battery cell are obtained from the CALCE dataset [18]. The test set-up includes the Arbin BT2000 Battery Testing System, used for controlling the battery's charging and discharging processes, and a temperature chamber used to control the ambient temperature of the battery.
The specifications of the battery are summarized in Table 2. The dataset comprises test data obtained at three temperatures: 0 °C, 25 °C, and 45 °C. In this test, the battery is charged to the cut-off voltage of 4.2 V at a constant current of 1 C-rate; this is followed by constant-voltage charging until the current is brought down to 0.01 C. The C-rate of a battery is a measure of its charging or discharging rate relative to its capacity: it indicates the rate at which current is applied to or withdrawn from the battery in relation to its nominal capacity, and it is typically expressed as a multiple of the battery's capacity (C). A comparison of battery technologies, with their relevant advantages and disadvantages, along with the applications in which they are used, can be found in Table 3.
This step is then followed by a discharge cycle at a constant rate of 0.05 C until the voltage is reduced to 2.5 V. The charging is performed at a constant rate of 0.05 C until the voltage reaches 4.2 V. The average of the charging and discharging curves is recorded as the open circuit voltage (OCV) for 0 °C, 25 °C, and 45 °C. In order to obtain the OCV–SOC relationship, the incremental-current OCV (I-OCV) test data obtained at 25 °C are considered.
The input (current) and output (voltage) data from the CALCE dataset are used to obtain a second-order autoregressive model. The response of this model is then compared with that of the originally identified dataset model.
3.2. OCV–SOC Curve Estimation
The OCV–SOC curve is a crucial aspect of battery modelling and simulation and can provide valuable insights into battery behavior, which can be applied in the areas of electric vehicles, renewable-energy storage, and portable electronics. The SOC of a battery can be estimated from the battery's OCV using an empirical relationship. This relationship can vary depending on the battery chemistry and the specific battery design.
For a typical lithium-ion battery, the relationship between SOC and OCV can be approximated by Equation (1):

SOC = (V_OC − V_OC,min)/(V_OC,max − V_OC,min) × 100%,  (1)

where V_OC,min is the minimum open circuit voltage and V_OC,max is the maximum open circuit voltage.
Equation (1) assumes a linear relationship between SOC and OCV, an assumption which may not always be accurate. In practice, battery management systems may use more sophisticated algorithms to estimate SOC based on a combination of factors, including OCV, current and voltage measurements, temperature, and other battery characteristics. Curve fitting is a method commonly used to obtain the relationship between the OCV and SOC of a battery. It involves fitting a mathematical model to a set of data points obtained experimentally or through simulations. In this study, the OCV–SOC curve of a lithium-ion battery was modelled and fitted using exponential and polynomial functions in MATLAB-R2023a.
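As an illustrative counterpart to the MATLAB workflow, the short Python sketch below fits a ninth-degree polynomial to an OCV–SOC curve and reports the RMSE. The data and cell behavior here are synthetic placeholders, not the CALCE measurements, and the normalization simply uses the sample statistics of the synthetic SOC vector.

```python
import numpy as np

# Illustrative OCV-SOC samples (placeholders, not the CALCE data):
# SOC in percent, OCV in volts for a generic Li-ion cell.
soc = np.linspace(0, 100, 25)
ocv = 3.0 + 1.2 * (soc / 100.0) ** 0.5 + 0.05 * np.sin(soc / 10.0)

# Normalize the regressor (the paper reports a mean of 39.68 and a
# standard deviation of 30.97 for its data; here we use our own).
z = (soc - soc.mean()) / soc.std()

# Ninth-degree polynomial fit, analogous to MATLAB's "poly9".
coeffs = np.polyfit(z, ocv, deg=9)
ocv_fit = np.polyval(coeffs, z)

# Goodness of fit: root-mean-squared error, as reported in Table 4.
rmse = np.sqrt(np.mean((ocv - ocv_fit) ** 2))
print(f"Poly9 RMSE: {rmse:.5f} V")
```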
Figure 2 shows the Poly9 OCV–SOC curve fit for the battery charging characteristics. The results were compared with Poly8 and second-order exponential curve fits on the basis of the goodness-of-fit parameters tabulated in Table 4.
It was observed that the exponential model was a good fit for the OCV–SOC curve, with a Root Mean Squared Error (RMSE) of 0.07; however, it was not accurate near the lower and higher OCV values. The ninth-degree polynomial model was the best fit, with an RMSE of 0.067. The fitted curves were compared with the experimental data, and the accuracy of the models was evaluated.
The ninth-degree polynomial equation is given by

OCV(z) = p_1·z⁹ + p_2·z⁸ + … + p_9·z + p_10,  (2)

where z = (x − 39.68)/30.97, i.e., x is normalized by a mean of 39.68 and a standard deviation of 30.97. The coefficients p_1, …, p_10 of the equation are as shown in Table 5.
Furthermore, the fitted exponential model was used to predict the voltage of the battery under different operating conditions, such as discharge and charge. The results were in good agreement with the experimental data, demonstrating the usefulness of the fitted OCV–SOC curve in predicting battery behavior.
3.3. Parameter Estimation and Battery Modelling
The internal resistance of the battery (R_0) is estimated, using the pulse-discharge method, from the battery current and voltage data obtained from CALCE. The 3RC battery model suits the dataset best. The values of R_1, R_2, R_3, C_1, C_2, C_3, and R_0 obtained from parameter estimation are stored in look-up tables and updated from time to time. The most common models used for lithium-ion batteries broadly include the empirical model, semi-empirical model, physical model, and data-driven model. The equivalent circuit model (ECM) is typically a resistor–capacitor network, an internal resistance model, or a PNGV model. In this work, the 3RC network has been used for battery modelling. The 3RC battery model is widely used for predicting the behavior of batteries under different operating conditions. The model is based on three RC circuits, which represent the different time constants of the battery. The pulse-discharge method is a technique commonly used for parameter estimation in the 3RC battery model. The method involves discharging the battery using a short, high-current pulse and measuring the voltage response. By analyzing the voltage response, the parameters of the 3RC battery model can be estimated.
To use the pulse-discharge method for parameter estimation, the battery is first charged to a known SOC. Then, a high-current pulse is applied to the battery, and the voltage response is measured over time. The internal resistance, mainly contributed by R_0, can be obtained from Equation (3) [19,20,21,22,23]:

R_0 = (V_1 − V_2)/I_pulse,  (3)

where V_1 and V_2 are the pulse edge voltages and I_pulse is the pulse current. Similarly, the branch resistances R_1, R_2, and R_3, and the corresponding capacitances C_1, C_2, and C_3, are obtained from the relaxation time constants τ_1, τ_2, and τ_3 of the pulse-discharge curve, using τ_i = R_i·C_i. The MATLAB Simulink Design Optimization and Curve Fitting toolboxes are used for parameter estimation and for obtaining the OCV–SOC curve.
Figure 3 shows the sample pulse-discharge voltage, with the data points marking the pulse edge voltages V_1 and V_2.
Figure 4 illustrates the relationships between various SOC values and different internal resistance values in a 3RC battery model. Additionally, it demonstrates how charging times vary depending on the internal resistance values.
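The following sketch illustrates the pulse-discharge identification described above, under assumed (not measured) pulse values: R_0 is taken from the instantaneous voltage step at the pulse edge, and one RC branch is identified from an exponential fit of the relaxation. The paper's model uses three such branches; the single-branch case shown here generalizes directly.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Hypothetical pulse-discharge record (not the CALCE measurements) ---
i_pulse = 2.0                     # pulse current in A (assumed)
v_before, v_after = 3.92, 3.86    # pulse edge voltages V1, V2 (assumed)

# Ohmic resistance from the instantaneous voltage step at the pulse edge.
r0 = (v_before - v_after) / i_pulse
print(f"R0 ~ {r0*1000:.1f} mOhm")

# --- Relaxation fit for one RC branch (the paper uses three branches) ---
t = np.linspace(0, 300, 200)                     # relaxation time, s
v_relax = 3.86 + 0.04 * (1 - np.exp(-t / 60.0))  # synthetic recovery curve

def recovery(t, v_inf, dv, tau):
    """Single-exponential voltage recovery after the pulse."""
    return v_inf - dv * np.exp(-t / tau)

popt, _ = curve_fit(recovery, t, v_relax, p0=[3.9, 0.04, 50.0])
v_inf, dv, tau = popt
r1 = dv / i_pulse          # branch resistance from the recovered amplitude
c1 = tau / r1              # capacitance from tau = R1 * C1
print(f"R1 ~ {r1*1000:.1f} mOhm, C1 ~ {c1:.0f} F, tau ~ {tau:.0f} s")
```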
3.4. SOC Estimation and Evaluation of Time-to-Charge (τ_ch)
SOC estimation is a critical aspect of battery management systems, particularly in applications such as electric vehicles and renewable-energy storage. The traditional method of SOC estimation using Coulomb counting has limitations, such as the need for an accurate measurement of battery current and for a determination of the effects of changes in the battery's internal resistance. To overcome these limitations, various advanced algorithms have been proposed, including the Unscented Kalman Filter (UKF) method [24,25,26,27,28]. The UKF is a nonlinear state-estimation algorithm that can effectively estimate the SOC of a lithium-ion battery by incorporating information from multiple sources, such as the battery voltage, current, and temperature. The algorithm uses a set of sigma points to represent the nonlinear state equations of the battery. These sigma points are propagated through the battery model and used to estimate the SOC.
The UKF algorithm consists of the following steps.
Prediction of sigma points is expressed by

χ_0 = x̂,  χ_i = x̂ + (√((n + λ)P))_i,  χ_{i+n} = x̂ − (√((n + λ)P))_i,  i = 1, …, n.

Propagation of sigma points is determined through the system model:

χ_i⁻ = f(χ_i, u),

where f is the system model and u is the control input.
Correction of sigma points is then effected based on the measurements:

ŷ = Σ_i W_i h(χ_i⁻),  P_yy = Σ_i W_i (h(χ_i⁻) − ŷ)(h(χ_i⁻) − ŷ)ᵀ + R,  K = P_xy P_yy⁻¹,  x̂ ← x̂⁻ + K(y − ŷ),

where χ_i = sigma points, y = measurement, h = measurement model, x̂ = current state estimate, R = measurement noise covariance matrix, P = covariance matrix, K = Kalman gain, n = dimension of the state space, and λ = scaling factor.
The new state estimate and covariance matrix are calculated based on the corrected sigma points. The UKF method provides improved accuracy compared to traditional Coulomb counting by considering the impacts of battery temperature and state of health on the SOC estimate. It also has the advantage of being computationally efficient and robust against measurement noise. From the estimated SOC-versus-time plot shown in Figure 5, the τ_ch for a given SOC can be obtained and uploaded to the cloud.
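For concreteness, the sketch below implements a scalar unscented filter for SOC under strong simplifying assumptions: coulomb-counting dynamics and a linear OCV measurement model, rather than the 3RC model used in this work. All constants are illustrative, and sigma points are not redrawn after the prediction step.

```python
import numpy as np

# Minimal scalar UKF for SOC; not the paper's 3RC implementation.
dt, Q, R = 1.0, 1e-7, 1e-4        # step (s), process and measurement noise
cap_As = 2.0 * 3600               # 2 Ah cell capacity in ampere-seconds
alpha, beta, kappa, n = 1e-3, 2.0, 0.0, 1
lam = alpha**2 * (n + kappa) - n

def f(soc, i_amp):                # state model: SOC decreases with discharge
    return soc - i_amp * dt / cap_As

def h(soc):                       # measurement model: simple linear OCV(SOC)
    return 3.0 + 1.2 * soc

def ukf_step(x, P, i_amp, v_meas):
    # 1) sigma points around the current estimate
    s = np.sqrt((n + lam) * P)
    chi = np.array([x, x + s, x - s])
    wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    wc = wm.copy(); wc[0] += 1 - alpha**2 + beta
    # 2) propagate the sigma points through the system model
    chi_p = f(chi, i_amp)
    x_p = wm @ chi_p
    P_p = wc @ (chi_p - x_p)**2 + Q
    # 3) correct with the voltage measurement
    y = h(chi_p)
    y_hat = wm @ y
    P_yy = wc @ (y - y_hat)**2 + R
    P_xy = wc @ ((chi_p - x_p) * (y - y_hat))
    K = P_xy / P_yy               # Kalman gain
    x_new = x_p + K * (v_meas - y_hat)
    return x_new, P_p - K * P_yy * K

# Usage: track SOC over a constant 1 A discharge with noisy voltage readings.
rng = np.random.default_rng(0)
soc_true, x, P = 0.9, 0.8, 0.01
for _ in range(600):
    soc_true = f(soc_true, 1.0)
    v = h(soc_true) + rng.normal(0, 0.01)
    x, P = ukf_step(x, P, 1.0, v)
print(f"true SOC {soc_true:.3f}, UKF estimate {x:.3f}")
```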
3.5. Evaluation of Battery Capacity
The preliminary estimation of capacity can be performed using the Ampere-hour Integral method given by Equation (15):

SOC_i = SOC_0 + (1/C_N) ∫ I(t) dt,  (15)

where C_N is the rated capacity and I(t) is the battery current at time t. To estimate the capacity, the start SOC (SOC_0) and the SOC at the ith charging instant (SOC_i) are considered.
Figure 5 shows the SOC estimated using the UKF for drive-cycle data recorded at 25 °C.
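A minimal numeric illustration of the Ampere-hour Integral method of Equation (15) follows; the current trace and the two SOC readings are invented for the example.

```python
import numpy as np

# Charge moved between two SOC readings yields a capacity estimate.
t = np.linspace(0, 3600, 3601)          # one hour, 1 s resolution
i_chg = np.full_like(t, 1.0)            # constant 1 A charging current
soc_start, soc_i = 0.20, 0.70           # SOC at start and at the ith instant

# Integral of current (rectangle rule), converted from A*s to Ah.
charge_Ah = float(np.sum(i_chg[:-1] * np.diff(t))) / 3600.0
capacity_Ah = charge_Ah / (soc_i - soc_start)
print(f"Estimated capacity: {capacity_Ah:.2f} Ah")   # 1 Ah / 0.5 = 2 Ah
```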
3.6. Parameter Update
The internal resistance of a Li-ion battery varies with the SOC of the battery and can be modeled as a curve. However, over time, the internal resistance curve may deviate due to aging effects such as capacity fade and electrode degradation [29]. This can lead to errors in the estimation of the SOC and time-to-charge of the battery. To address this issue, the least-squares (LS) method is used to update the digital twin with the deviation in the internal resistance curve observed after multiple drive cycles. The LS method is a statistical technique used to find the best-fit line that minimizes the sum of the squared distances between the observed data points and the predicted values. The goal of the LS method is to find the values of the parameters that minimize the sum of the squared errors between the observed deviations and the model predictions, as in Equation (16):

J(r) = Σ_i (y_i − f(x_i, r))²,  (16)

where r is the internal resistance parameter of the DT to be estimated, x_i is the input, and y_i is the observed deviation in the internal resistance curve at that input. The optimal parameter r is obtained by minimizing the objective function of Equation (16). The deviation in the internal resistance curve can be obtained by comparing the voltage and current data collected during the drive cycles with the predicted values from the digital twin. The LS method can then be used to adjust the internal resistance curve in the digital twin, which can improve the accuracy of the SOC and time-to-charge estimations [30,31,32].
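The least-squares update of Equation (16) can be sketched as follows, assuming the deviation of the R_0(SOC) curve is modeled as a quadratic in SOC; the data and the stored look-up curve are synthetic stand-ins.

```python
import numpy as np

# Synthetic "observed" drift of R0 (ohms) at a set of SOC inputs x_i.
soc = np.linspace(5, 95, 19)                       # input points x_i (% SOC)
true_dev = 0.002 + 0.00001 * (soc - 50)**2         # underlying drift
y = true_dev + np.random.default_rng(1).normal(0, 1e-4, soc.size)

# Design matrix for a quadratic model of the deviation; lstsq minimizes
# the sum of squared errors between observations and model predictions.
A = np.vstack([np.ones_like(soc), soc, soc**2]).T
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the fitted correction to the digital twin's stored R0 curve.
r0_lookup = 0.070 + 0.0002 * (100 - soc) / 100     # placeholder DT curve
r0_updated = r0_lookup + A @ theta
print("fitted deviation coefficients:", theta)
```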
Battery testing and SOC estimation rely heavily on drive cycles, which replicate real-world driving conditions. Various cycle types, including NEDC, UDDS, and WLTP, are employed to evaluate vehicle performance, fuel efficiency, and emissions under standardized circumstances.
Integration of drive-cycle data offers several advantages:
- Drive-cycle information in the UKF algorithm enhances state-estimation accuracy in electric vehicles by utilizing vehicle velocity and acceleration patterns. This could improve range predictions and energy management.
- Drive-cycle data improve SOC estimation accuracy by analyzing diverse real-world driving data to account for various scenarios, leading to better range estimation and battery management in electric vehicles.
4. Proposed Algorithm
Figure 6 is the flow diagram for the ENDEAVOR algorithm; Figure 7 depicts the UKF step evaluations in pictorial form.
Energy Distribution and Node Allocation using Evolutionary and Resourceful Optimization (ENDEAVOR) is a novel algorithm designed to perform two primary tasks:
1. Determine the optimal charging current for each node, based on the values of SOC, DOD, and battery capacity obtained from the digital twin of the battery deployed in the cloud. This determination also depends on the energy availability of the charging station at the time a request is raised.
2. Allot an available node to the incoming user in a manner such that the waiting time of the user is optimized.
The grid-connected charging station sources energy from hybrid PV and wind energy sources. The energy fed to or from the grid and its usage details are uploaded to the cloud from time to time. These data are then used to predict the energy that will be available at the charging station over a period of 24 h, based on historical data. The actual level of energy available at the charging station is obtained from the sensors at the charging station. The deviation between the predicted and the actual available energy is measured in terms of the mean square error (MSE), and the cloud prediction algorithm is tuned to minimize the MSE.
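As a toy illustration of this tuning loop, the snippet below computes the MSE between a forecast of hourly available energy and the sensed values; the numbers are placeholders, not station data.

```python
import numpy as np

# Hourly available energy: sensed at the station vs. cloud forecast (kWh).
e_actual = np.array([52.0, 48.5, 47.0, 55.0, 60.0, 58.5])
e_pred   = np.array([50.0, 49.0, 45.5, 56.0, 61.0, 57.0])

mse = float(np.mean((e_pred - e_actual) ** 2))
print(f"MSE = {mse:.2f} (kWh)^2")   # the forecaster is retuned to reduce this
```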
4.1. Smart Energy Distribution Based on EDR Computation
The user raises a request for charging through the developed mobile application by entering their EV's unique ID. The algorithm polls for incoming requests and keeps a count of the un-serviced requests. The SOC_i, DOD_i, and C_i are obtained from the BDT deployed in the cloud, using the user's unique vehicle ID. The amount of energy required to charge the vehicle is then computed as in Equation (17):

ε_vi = C_i (100 − SOC_i)/100,  (17)

where C_i is the battery capacity of the ith vehicle, SOC_i is the State of Charge of the ith vehicle, and ε_vi is the energy required to charge the ith vehicle.
The net amount of energy required to charge the incoming EVs that have raised requests and been allotted a node is obtained as in Equation (18):

ε_net = Σ_{i=1}^{m} ε_vi.  (18)

If the energy demand of the incoming vehicles (ε_net) is less than the energy available at the charging station (ε_av), then the available energy is used to meet the energy requirements of the users. If the energy available at the charging station (ε_av) is less than ε_net, then the EDR (β) is computed as

β = ε_av/ε_net.  (19)

The energy with which each EV that is allotted a node will be supplied is then given by Equation (20):

ε_vi(new) = β·ε_vi.  (20)

The time-to-charge for vehicle i is obtained from the charging characteristics of the DT of the battery, corresponding to ε_vi(new). The charging current pertaining to this energy is also obtained from the SOC curve of the battery DT.
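A compact sketch of Equations (17)–(20) follows; the vehicle capacities and SOC values are examples rather than the entries of Table 6.

```python
# EDR-based energy distribution; parameters are illustrative.

def energy_required(capacity_kwh: float, soc_pct: float) -> float:
    """Equation (17): energy needed to fill the ith vehicle from SOC_i."""
    return capacity_kwh * (100.0 - soc_pct) / 100.0

requests = [(30.0, 75.0), (40.0, 40.0), (50.0, 60.0)]   # (C_i kWh, SOC_i %)
e_vi = [energy_required(c, soc) for c, soc in requests]
e_net = sum(e_vi)                                        # Equation (18)

e_av = 40.0                                              # station energy, kWh
if e_av >= e_net:
    e_new = e_vi                       # demand fully met, no scaling needed
else:
    beta = e_av / e_net                # Equation (19): the EDR
    e_new = [beta * e for e in e_vi]   # Equation (20): scaled allocations

print(e_vi, round(e_net, 1), [round(e, 2) for e in e_new])
```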
4.2. Node Allocation
Node availability is checked at the charging station from time to time. If a node is free, the ith vehicle with the maximum energy requirement (max ε_vi) is allotted the empty node. The node n is then blocked for the time T_n given by Equation (21):

T_n = t_alloc + τ_ch,i,  (21)

where t_alloc is the instant of allocation and τ_ch,i is the time-to-charge for vehicle i. If no node is available, then the ith vehicle is allotted the node with the minimum blocked time, min(T_n), and must wait until that node is released.
After the charging of a vehicle is complete, the node becomes available for the next incoming vehicle. The value of n, the number of available nodes, is updated before the next request is processed. Whenever a new request is raised at the charging station, the energy demand, the energy availability, and β are computed, and the node charging current is updated dynamically. The node is released after charging is complete and is then ready for the next vehicle to be plugged in. The flowchart of the algorithm is shown in Figure 8.
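One way to realize this allocation policy is with a priority queue keyed on the node release times T_n, as sketched below; the request times and charging durations are illustrative.

```python
import heapq

# Nodes tracked by their release times T_n (Equation (21)).

def allocate(nodes: list, t_now: float, tau_ch: float) -> tuple:
    """Return (node_id, start_time); nodes is a heap of (T_n, node_id)."""
    T_n, node_id = heapq.heappop(nodes)       # node that frees up soonest
    start = max(t_now, T_n)                   # wait if all nodes are busy
    heapq.heappush(nodes, (start + tau_ch, node_id))  # block until charged
    return node_id, start

# Three nodes, all initially free (release time 0).
nodes = [(0.0, 1), (0.0, 2), (0.0, 3)]
heapq.heapify(nodes)
for t_req, tau in [(0.0, 2.77), (0.4, 2.0), (0.4, 1.5), (2.5, 1.2)]:
    node, start = allocate(nodes, t_req, tau)
    print(f"request at t={t_req} h -> node {node}, charging starts at {start} h")
```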
4.3. ENDEAVOR Algorithm
Perform the following for every request raised by an incoming EV (Algorithm 1):
Algorithm 1. ENDEAVOR
Read ← ε_av, energy available at the charging station
Read ← n, node availability in the charging station
Read ← m, number of EVs raising requests
Minimize MSE(ε_pred, ε_av)
For each i in EVs raising requests (m):
  Node allocation ()
  Compute ε_vi = (Battery Discharged() × C_i)/100
  Energy Distribution ()
End for
Update n
Energy Distribution () {
  β = ε_av/ε_net
  ε_vi(new) = β × ε_vi   // energy provided for each EV allotted a node
}
Battery Discharged () {
  Battery Discharged = 1 − SOC_i
  Remaining % of energy = DOD_i − Battery Discharged %
}
Node allocation () {
  If a node n is available then
    Allot it to the ith vehicle with max(ε_vi)
  Else
    Allot the node with min(T_n)
}
Release node once charging is complete.
Update the values for requests serviced and nodes available.
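Tying the pieces together, the following simplified driver mirrors the flow of Algorithm 1 under two stated assumptions: demands are scaled by the EDR whenever supply falls short, and, when nodes are scarce, vehicles with the larger energy requirement are served first (our reading of the max ε_vi rule).

```python
# Compact driver for Algorithm 1; all functions are simplified stand-ins
# for the cloud services described in the paper.

def battery_discharged(soc_pct: float) -> float:
    return 100.0 - soc_pct                      # % of capacity discharged

def endeavor(requests, e_av, n_nodes):
    """requests: list of (vehicle_id, C_i kWh, SOC_i %). Returns allocations."""
    demands = {vid: c * battery_discharged(soc) / 100.0
               for vid, c, soc in requests}
    e_net = sum(demands.values())
    beta = min(1.0, e_av / e_net)               # EDR; 1.0 when supply suffices
    # Serve the highest-demand vehicles first, up to the available nodes.
    served = sorted(demands, key=demands.get, reverse=True)[:n_nodes]
    return {vid: beta * demands[vid] for vid in served}

print(endeavor([("EV1", 30, 75), ("EV2", 40, 40), ("EV3", 50, 60)],
               e_av=40.0, n_nodes=3))
```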
5. Results
The experiment employed a set of sample data associated with five EV users, comprising IDs, SOC, DOD, and capacity, as shown in Table 6. The charging station is assumed to have three charging nodes and an energy availability of 60 kWh. The efficiency of the charging system at the charging station is taken to be 95%. This parameter is used to compute the energy drawn from the charging station after the losses of the system are incurred, as ε_vi/η; for example, delivering 7.5 kWh to a battery at η = 0.95 draws 7.5/0.95 ≈ 7.89 kWh from the station.
Case I:
This scenario considers the arrival of the first EV. The energy required to charge EV1 is computed and determined to be 7.5 kWh, as shown in Table 7. Since all nodes are initially free, EV1 is allotted the first node.
The energy available at the charging station is greater than the demand from user EV1. Thus, the EDR computation step is not performed; the user is allotted a node and the vehicle is charged, with a time-to-charge of 2.77 h.
Case II:
EV2 and EV3 raise simultaneous requests for charging after 24 min. The charge available at the charging station is computed and the energy demand is obtained, as tabulated in Table 8. The energy expended in charging EV1 is obtained from the characteristic-curve data in the cloud. The time-to-charge for each EV is computed, and the available nodes are allotted for the computed amounts of time.
The total energy required from the charging station is 42.74 kWh.
Case III:
In this scenario, EV4 arrives after 2 h and 30 min. The amount of energy that EV2 and EV3 have each consumed in the intervening 2 h and 6 min is obtained from their respective battery characteristics, as tabulated in Table 9.
The node allocation is as shown in Table 10. Since EV1 has completed charging, node 1 becomes available and is allotted to EV4.
The total energy required from the charging station is computed and determined to be 26.07 kWh, while the available energy at the CS is 18.2 kWh. The energy availability of the charging station is thus less than the energy demand of the vehicles requesting charge. The EDR is therefore computed as β = 18.2/26.07 ≈ 0.70, as tabulated in Table 11. EV2 and EV3 attain 98.74% SOC and EV4 attains 74% SOC.
In the absence of the ENDEAVOR algorithm, the energy available to charge EV4 would be as shown in Table 12.
It is therefore observed that the use of ENDEAVOR for node allocation and energy distribution leads to energy savings and thereby allows for the optimized charging of EVs.
6. Discussion
In this system, the step-by-step implementation of Li-ion battery behavior tracking using a digital twin of the battery has been discussed. The experimental data required for battery modelling and parameter estimation were acquired from CALCE. The behavior of a battery varies over its complete life-cycle owing to electrochemical degradation; this is modeled through the variation of the battery's internal resistance. The use of a DT deployed at the edge of the cloud allows parameters like the SOC, the internal resistance, and the battery model parameters to be updated. This work uses the 3RC model and the least-squares method to update the internal resistance of the real-time battery and its digital twin. Hence, a single digital twin suffices to replicate the battery over its entire lifetime. Poly9 curve fitting was used to obtain the OCV–SOC relationship of the battery and was observed to model that relationship accurately, with an RMSE of 0.06791. The UKF technique was used to perform SOC estimation and obtain the variation of SOC with respect to time, using CALCE data for a single DST drive cycle at 25 °C. The experimental voltage pulses used for the incremental OCV test were used to obtain the internal resistance R_0 via the pulse-discharge method. These data were then used to model the relationship between R_0 and SOC using second-order exponential curve fitting.
Using a BDT of the incoming EV, accurate estimations of the energy required to charge the battery as well as the time-to-charge can be determined based on the data available in the cloud. The ENDEAVOR algorithm ensures an optimized charging-node allocation for EV users who raise requests for charging at a particular charging station. Since it is a grid-connected, renewable-energy charging station, optimized energy distribution based on computation of the EDR parameter leads to a threefold advantage: avoiding the frequent loading of the distribution grid; reducing the waiting time for the EV users; and optimizing the usage of resources and distribution of energy.
When compared to existing methods, the ENDEAVOR algorithm demonstrated superior performance in terms of charging-node allocation and energy distribution. It excelled in reducing grid loading, minimizing the wait times for users, and optimizing resource utilization. The ability to use a single digital twin to model the battery's behavior over its lifetime is a significant advantage. The energy availability for EV4 is improved by 0.59 kWh and the improvement in SOC is observed to be nearly 3% for the given scenario. This implies that, annually, the savings in energy amount to around 182.5 units, assuming an average improvement of 0.5 units of energy availability per day. The comparative analysis of the performance of the charging station with and without the ENDEAVOR algorithm integrated with the BDT is tabulated in Table 13, based on the above computation.
Figure 9 shows the comparative analysis of the performance of the charging station with and without the ENDEAVOR algorithm incorporated.
7. Sending Slot Details to Charging Station
Figure 10 represents the individual battery parameters, as sensed through the sensor, and the subsequent data exchange between a single node and the cloud; this is one node of the multi-node experimental formation shown in Figure 11. From this point, slot details are fetched from the Google Sheet and relayed to the hardware, as described below, with LEDs corresponding to the various slot numbers turned on and off according to the slot allocated.
Slot details will be fetched from the Google Sheet, as shown in Figure 12, using the Python code running on a local machine; the local machine will then communicate with the Arduino through serial communication. This process involves establishing a connection between the Python script and the Arduino board to facilitate the exchange of data. For instance, the Python code can retrieve information such as slot numbers, availability status, and corresponding LED indicators from the Google Sheet.
Once the data is retrieved, the Python script sends commands to the Arduino through serial communication to control the LEDs based on the slot allocation. For example, if a particular slot is assigned to a user, the corresponding LED will be turned on to indicate its occupied status. On the other hand, if a slot is vacant, the LED will remain off. This synchronization between the Python code and the Arduino enables real-time updates and visual cues for users.
To achieve this seamless interaction, two separate pieces of code need to be developed, involving both the Python and Arduino platforms. The Python script should be designed to read data from the Google Sheet and send instructions to the Arduino via serial communication. Similarly, the Arduino code should be programmed to receive commands from the Python script and control the LEDs accordingly. By establishing a communication protocol over the same serial port, data transfer between the Python and Arduino systems can be efficiently managed.
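A hedged sketch of the PC-side script is given below. The sheet layout (columns "slot" and "occupied"), the serial port name, and the one-byte LED protocol are assumptions for illustration; the gspread and pySerial calls shown are standard, but the credentials and port must be configured for a real set-up, and the loop polls until interrupted.

```python
import time
import gspread          # Google Sheets client (pip install gspread)
import serial           # pySerial (pip install pyserial)

PORT = "/dev/ttyACM0"   # adjust to the Arduino's serial port (assumed)
SHEET = "slot-allocations"   # hypothetical sheet name

gc = gspread.service_account(filename="credentials.json")
ws = gc.open(SHEET).sheet1
ser = serial.Serial(PORT, 9600, timeout=1)
time.sleep(2)           # allow the Arduino to reset after the port opens

while True:
    # Expected columns (assumed): "slot" (1..N) and "occupied" (0/1).
    for row in ws.get_all_records():
        slot, occupied = int(row["slot"]), int(row["occupied"])
        # One byte per slot: 'A'+slot switches an LED on, 'a'+slot off;
        # the Arduino sketch decodes this byte and drives the LED pins.
        base = ord("A") if occupied else ord("a")
        ser.write(bytes([base + slot]))
    time.sleep(5)       # poll the sheet every few seconds
```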
In conclusion, the integration of Python and Arduino through serial communication allows for the dynamic control of LEDs based on the slot information retrieved from a Google Sheet. This not only enhances the visual representation of the slot status but also demonstrates the seamless coordination between software and hardware components in a practical application scenario.
The potentiometer, acting as a Proof of Concept (POC), plays a crucial role in sending values to the Google Cloud database in real-time. These data serve a vital purpose in the slot allotment process when users make requests through the mobile application. Another database is responsible for handling the commands received from users. When a user requests slot allotment via the app, this database is promptly updated, triggering the activation of the backend priority algorithm. Consequently, the slot is allocated using a priority-based algorithm. The updated slot allocation information is then reflected in the Google Cloud database and promptly communicated to the user through the mobile application. For instance, when a user requests a specific time slot for a particular activity, such as booking a fitness class, the priority algorithm ensures fair and efficient allocation. This streamlined process enhances user experience and operational efficiency.
The Feedback System is developed as described in the following:
- Implement a feedback mechanism in the mobile application for users to rate their slot-allocation experiences;
- Use feedback data to continuously improve the priority algorithm and enhance user satisfaction.
Data Analysis involves the following considerations:
- Utilize data analytics tools to analyze trends in slot-allocation requests and user preferences;
- Optimize the allocation process based on data-driven insights to maximize efficiency.
Scalability involves the following considerations:
- Ensure the system is sufficiently scalable to accommodate an increasing number of users and slot-allocation requests;
- Implement cloud-based solutions for flexibility and seamless expansion as the user base grows.
Table 14 compares the proposed UKF-based ENDEAVOR algorithm with other existing algorithms, highlighting the key features, positive aspects, and negative aspects relative to the applications in which the algorithms are mostly employed.
The outlined approach demonstrates a cloud-based algorithm for distributing energy to electric vehicles (EVs). This method encompasses crucial stages, including the acquisition of charging station information, node assignment, energy demand calculation, and power allocation based on existing resources. The flowchart provides a visual representation of the algorithm's sequence and reasoning. The algorithm begins by analyzing real-time data from charging stations to determine their current capacity and the associated demand. Next, it allocates nodes to represent individual EVs and their charging needs within the network. Finally, the system calculates an optimal energy distribution based on available resources, prioritizing critical charging requests while balancing overall grid stability.
8. Conclusions
The implementation of Li-ion battery behavior tracking using a digital twin offers a dynamic approach for modeling batteries’ changes across their lifetimes, particularly with respect to electrochemical degradation, through the tracking of internal resistance variations. By leveraging real-world data and sophisticated techniques, a model has been created that accurately reflects the battery’s state and behavior, enabling precise predictions of charging times and energy requirements. The introduction of the novel ENDEAVOR algorithm has proven to be a significant leap in optimizing the charging process for EV users at specific charging stations. Through smart charging-node allocation and intelligent energy distribution, substantial reductions in grid loading, minimized wait times for users, and resource optimization have been demonstrated. This algorithm aligns well with the increasing demand for grid-connected, renewable-energy-powered charging stations, fostering both energy efficiency and user satisfaction.
However, the study did not consider the broader impact of the method on overall energy consumption and the sustainability of the transportation system. In the future, it would be beneficial to incorporate real-world driver behavior into the algorithm and conduct a more comprehensive sustainability analysis. Future work could also explore the integration of machine learning techniques for more accurate predictions and real-time adjustments. Additionally, the scalability and applicability of the proposed method in different geographical and infrastructure-related settings should be investigated to aid broader adoption.
This system represents a vital step forward in the realm of electric vehicle charging and will contribute to the development of more efficient, sustainable, and user-friendly electric transportation systems, furthering the transition towards a greener and smarter future.