1. Introduction
Supercapacitors offer a superior power density, a long cycle life, fast charge–discharge rates, maintenance-free operation, simple packaging, and compatibility with integrated circuits [1,2,3,4]. Therefore, supercapacitors have proven their feasibility as excellent storage components. Integrated into self-powered systems, they have been widely employed in a range of portable microelectronic devices and sensing applications. The performance of supercapacitors is heavily influenced by the structural characteristics and the chemical and physical properties of the electrode materials used. Thus, the design and screening of high-performance supercapacitor materials is a fundamental necessity for supercapacitor development [4,5,6].
Recent advances in nanomaterials have shown that the electrochemical performance of materials can be significantly improved by tuning their nanostructure and synthesis conditions [7,8]. Many types of nanomaterials have been used in electrochemical energy storage devices, including nanoparticles, nanorods, nanotubes, thin films, and nanofibers. Nanofibers, in particular, have been used as electrodes owing to their high porosity, flexibility, light weight, large surface area, and resistance to aggregation [7,9].
Many classes of materials have been explored for supercapacitor electrodes, including carbon [10,11], metal oxides [12,13], conductive polymers [14], metal sulfides, metal nitrides, and metal oxynitrides [15]. Metal oxides and metal nitrides are well suited for supercapacitor electrode production, and oxynitrides and related compounds are also increasingly being investigated as supercapacitor electrodes [15]. Among the transition metal oxides, ruthenium oxide (RuO₂), manganese dioxide (MnO₂), nickel oxide (NiO), and cobalt oxide (Co₃O₄) have been studied as electrode materials to date [16,17]. At the same time, because of their tunable surface area, chemical stability, high electrical conductivity, and outstanding mechanical performance, graphene oxide (GO) and functionalized GO have been studied for supercapacitor applications [17,18].
Graphene consists of hexagonally arranged carbon atoms and has a large theoretical surface area of 2630 m² g⁻¹, a very high electrical conductivity (on the order of 10⁶ S m⁻¹), and a theoretical gravimetric capacitance of 550 F g⁻¹. Owing to these properties, graphene has shown great potential as an electrode material, and graphene-based supercapacitors exhibit excellent cycling stability and rate capability [19]. Although the surface area of graphene oxide is lower than that of graphene, graphene oxide exhibits a higher capacitance, which is attributed to the oxygen-containing functional groups on its surface [20].
It is desirable to obtain high specific capacitance from low-cost electrode materials. CeO₂ nanostructures with high porosity and surface area can be produced, and their morphology is easy to modify; moreover, their nanocomposites can act synergistically to enhance electrochemical performance [21]. On the other hand, cobalt exhibits good electrocatalytic activity when cobalt nanocrystals are combined with various rare earth elements and other oxides, owing to its superior properties [22]. Pure CeO₂ has a low capacitance (<100 F g⁻¹) in different electrolytic environments. The specific capacitance of CeO₂ can be improved with conductive carbon material supports: the excellent conductivity, porosity, and large surface area of carbon materials can effectively compensate for a number of cerium dioxide’s shortcomings as an electrode material. For this purpose, we aimed to create a hybrid material class with excellent physicochemical properties by adding rGO to the composite.
Wang et al. prepared a new electrode with high specific capacity and excellent cycling performance from Flammulina-velutipes-like CeO₂/Co₃O₄/rGO nanoparticles on a nickel foam substrate (CCGN) by hydrothermal synthesis and annealing [23]. Afza et al. proposed reduced graphene oxide (rGO) nanocomposites prepared by the hydrothermal method and reported that rGO-based CeO₂ nanomaterials have an excellent charge-carrying capacity for dual applications in photocatalysis and electrocatalysis [24]. Veeresha et al. created highly scalable porous hierarchical microspheres composed of polyaniline nanofibers (PANIs), reduced graphene oxide (rGO), and cerium oxide nanorods (CNRs), synthesized by the spray drying method to obtain a functional and structural synergistic effect. They proposed a composite supercapacitor material with superior electrochemical properties by combining the three components in a 3D porous hierarchical structure that maximizes ion and charge transport while reducing agglomeration [25].
Graphene-based materials are functional in many different respects and are widely used in related fields; reduced graphene oxide, in particular, has a broad range of applications. In the study [26], the authors synthesized ternary MoS₂–rGO–Cu₂O (MG-Cu) composites with NO₂-sensing properties. An efficient supercapacitor electrode material exploiting the synergistic effects of molybdenum disulfide (MoS₂) and rGO was designed in [27]. Farshadnia et al. first synthesized a spongy graphene oxide nanostructure to increase porosity and surface area, then anchored CoNi₂S₄ and MoS₂ nanocomposites on the porous graphene oxide to increase its capacity and improve its performance as a substrate, and finally integrated all the components to produce the final nanocomposite. The presence of metal sulfides as electroactive materials promises a synergistic effect for supercapacitor use by accelerating ion/electron diffusion and expanding the number of active sites [28].
The fractal theory is a mathematical concept that describes the repetition of patterns on different scales in a self-similar manner. It is often used to describe complex systems with a hierarchical structure, where smaller patterns are similar to larger ones. In the field of materials science, the fractal theory has been used to understand the microstructures of materials, including fractal nanocomposites [29].
Fractal nanocomposites are a new class of materials that are characterized by their hierarchical structures, which are composed of multiple levels of fractal patterns. They are often made by incorporating nanoparticles into a polymer matrix, and their properties are influenced by both the size and distribution of the nanoparticles. In the context of supercapacitor design, fractal nanocomposites have been explored as potential electrode materials, since they have several desirable properties that make them attractive for supercapacitor applications, such as high surface area, good electrical conductivity, and excellent mechanical stability [30].
One of the main advantages of fractal nanocomposites in supercapacitor design is their high surface area, which arises from the fractal structure of the nanoparticles. This high surface area provides a large number of active sites for charge storage, which is important for improving the capacitance of the electrode. Another advantage is their good electrical conductivity, achieved through the use of nanoparticles with high electrical conductivity; this allows efficient charge transport within the electrode, which is essential for achieving high power and energy density in supercapacitors. As a consequence, fractal nanocomposites have shown promise in supercapacitor design owing to their high surface area, good electrical conductivity, and excellent mechanical stability. Further research is needed to fully understand their potential for supercapacitor applications and to optimize their performance for practical use [31]. In our study, we considered a fractal nanocomposite in terms of electrode design and the performance measurement model; its fabrication and the details of the synthesis parameters were outside the scope of the proposed study. Because the focus is on the machine learning model, the following sections of the article do not dwell on the fractal design itself.
The behavior of a supercapacitor system is so complicated that it limits the possibility of obtaining the desired behavior through basic intuition and an Edisonian trial-and-error approach. With its transient flows, the power supply analysis of supercapacitors requires tools such as numerical models, simulations, and machine learning beyond trial-and-error methods [32,33]. Accurate application of these technologies to the modeling of storage behavior could result in more effective devices with high energy capability. Thanks to the superior modeling characteristics of artificial-intelligence (AI)-based techniques, the performance of supercapacitors, covering efficiency, reliability, and safety, can be better quantified. Indeed, in nanotechnology, the physical, chemical, and electrical characteristics of nanomaterials can be better described with the help of AI [34,35]. Besides being increasingly preferred for supercapacitors, AI-supported methods also play an important role in the formulation of material structures in materials engineering, as reflected in the related literature [36,37]. At the center of AI-based supercapacitor performance measurement tools lie artificial neural network (ANN) methods. Conventional ANN prediction and classification models, in their many architectural forms, are widely used for complex analysis in fields such as power electronics, energy management, and data-driven solutions to black-box estimation problems. However, ANNs are black boxes that find solutions on their own: once a solution is discovered, it is impossible to determine how that solution was obtained, and the ANN itself does not guarantee prediction or solution convergence. A learning model based on an ANN architecture can approximate a specific function with the correct parameters (also known as hyper-parameters) until it achieves a sufficient output, but such a remedy is not always attainable [38]. Modern machine learning techniques overcome these disadvantages of conventional algorithms. Amid growing interest in the machine learning techniques used in practice, gradient tree boosting excels in a variety of applications; it is utilized not just as a standalone predictor but is also integrated into real-world production workflows. Chen and Guestrin described the extreme gradient boosting algorithm, a scalable machine learning system for tree boosting, which they named XGBoost [39]. On a single machine, their system runs more than ten times faster than existing popular solutions and scales to billions of samples in distributed or memory-restricted scenarios; the most significant component of XGBoost’s success is its scalability in all settings [39,40,41]. The widespread usage of the XGBoost model inspired us to investigate its capability for predicting supercapacitor performance conditions. To the best of the authors’ knowledge, no study has been reported that aims at predicting the performance of energy storage devices, including batteries, supercapacitors, and similar devices, with the help of the XGBoost model [39,42,43].
Supercapacitors are gaining attention as a new form of energy storage, but the main challenge is developing high-performance electrode materials. The proposed paper focuses on analyzing carbon-material-based nanocomposites for use in supercapacitors, which can be improved by integrating additional materials. The analysis suggests that a specially designed electrode material is a suitable alternative for excellent performance [44]. The article also discusses new challenges and trends in supercapacitor design using artificial intelligence and machine learning methods.
Cyclic voltammetry (CV) is an important technique in the performance analysis of supercapacitors because it provides information on the electrochemical behavior of the device, including its capacitance, charge–discharge behavior, and kinetics. In a supercapacitor, the charge is stored on the surface of the electrodes rather than in the bulk of the material, as in a traditional battery. This means that the capacitance of the supercapacitor depends on the surface area of the electrodes, the nature of the electrode–electrolyte interface, and the kinetics of the charge–discharge process [45]. CV can be used to study the capacitance of a supercapacitor by measuring the current response to a potential sweep. The shape of the CV curve provides information on the capacitance and the rate of charge–discharge, and the capacitance can be calculated from the area under the CV curve, which corresponds to the stored charge. In addition, CV can be used to study the kinetics of the charge–discharge process by measuring the rate of the current response to a potential sweep. This can be used to identify the rate-limiting steps in the charge–discharge process and to optimize the performance of the supercapacitor [46].
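To make the link between the CV curve and the stored charge concrete, the following short Python sketch (our own illustration, not part of the cited studies; the function name and the assumption of a single full forward–reverse sweep supplied as arrays are ours) integrates one CV cycle to estimate a specific capacitance:

```python
import numpy as np

def specific_capacitance_from_cv(voltage, current, scan_rate, mass):
    """Estimate specific capacitance (F/g) from one full CV cycle.

    voltage   : potentials (V) sampled over the forward and reverse sweeps
    current   : measured currents (A) at those potentials
    scan_rate : sweep rate nu (V/s)
    mass      : active electrode mass (g)
    """
    # Line integral of i dV around the closed loop equals the enclosed CV
    # area (the charge-storage signature); the trapezoidal rule is enough.
    loop_area = abs(np.trapz(current, voltage))   # units: A * V
    delta_v = voltage.max() - voltage.min()       # potential window (V)
    # C_s = A / (2 * m * nu * dV); the factor 2 averages the two sweeps.
    return loop_area / (2.0 * mass * scan_rate * delta_v)

# Synthetic check: a purely capacitive (rectangular) voltammogram of a
# 10 mF, 1 mg electrode swept at 50 mV/s over a 1 V window gives ~10 F/g.
v_fwd = np.linspace(0.0, 1.0, 500)
v = np.concatenate([v_fwd, v_fwd[::-1]])
i = np.concatenate([np.full(500, 5e-4), np.full(500, -5e-4)])  # i = C * nu
print(specific_capacitance_from_cv(v, i, scan_rate=0.05, mass=1e-3))
```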
In cyclic voltammetry, an electrode is scanned through a potential range while the current is measured, and then, the potential is scanned in the opposite direction. The resulting plot of the current vs. potential is called a voltammogram, and it can provide information about the kinetics and thermodynamics of the redox reactions taking place [45]. The relationship between fractal structures and cyclic voltammetry is that fractals can be used to model the electrochemical behavior of a system. For example, a fractal model can be used to describe the mass transport of redox species to an electrode surface during a cyclic voltammetry experiment. This can be used to understand the kinetics of the electrochemical reaction and to optimize the conditions for the experiment. The proposed study deals with the CV performance analysis of such a fractal carbon electrode, whose synthesis process is described in the study [34].
Machine learning (ML) is used in the modeling of experimental data because it provides a flexible and efficient way to analyze large and complex datasets. ML algorithms can identify patterns and relationships in the data that might not be immediately apparent, which helps to extract meaningful information and to understand the underlying mechanisms of the system being studied. One of the main advantages of using ML in the modeling of experimental data is that it can handle a large number of variables and can automatically identify important features and interactions. This is particularly useful when the data are high-dimensional and the underlying relationships are not well understood [47]. Another advantage is that ML models can be easily updated and refined as new data become available, which is important for applications such as predictive modeling and control systems. ML can also be used to optimize experimental conditions to improve the performance of a system, for example through Bayesian optimization. Overall, ML offers a powerful and flexible approach to analyzing experimental data, allowing researchers to extract insights and knowledge that would be difficult or impossible to obtain with traditional modeling techniques [46].
In this study, we took advantage of the effectiveness and robustness of a modern machine learning method, XGBoost, to build an intelligent model describing the cyclic voltammetry (CV) behavior of a nanostructured supercapacitor. The supercapacitor CV dataset is a public experimental dataset obtained from a published study [34], which also proposed a model design for CV behavior prediction. The investigated dataset consists of three subsets containing experimental data of supercapacitors with different structures. Inspired by the study of Parwaiz et al., our aim was to enhance the data analysis and modeling strategy on the experimental supercapacitor dataset to reveal distinctive outcomes for research experts in their production stages. The benchmark study focused on the design of a Co-CeO₂/rGO nanocomposite-structured supercapacitor and the prediction of several responses using conventional ANN and random forest models. Our study also includes an exploratory data analysis (EDA) in addition to a comprehensive XGBoost prediction model with superior performance compared to the benchmark study [34]. The importance and originality of this study lie in deeply exploring the experimental data for supercapacitor CV behavior prediction as a performance measurement tool along with an AI-based model.
Figure 1 shows the general framework of our study. In this holistic view, we can track the dataset handling, the core process of the study (eXplainable AI), and the decision-making process. There are several ways in which this study makes an original contribution:
Instead of designing complex methods for supercapacitor performance measurement models, we take a practical approach and consider a robust yet effective solution as a profitable enhancement in this field.
Aiming to increase the predictive power for CV behavior, the proposed model is fast and relies on decision trees reinforced by gradient boosting (an enhanced version of conventional gradient-boosted trees), with a low computational cost and few parameters.
The reader should bear in mind that the study was based on an enhanced AI prediction model for supercapacitors’ performance measurement instead of focusing on fabrication conditions. The remaining parts of the text are arranged as follows: Section 2 briefly explains the relationship between fractals and graphene-based nanocomposites. Section 3 defines the data and includes the EDA for the experimental dataset used. Extreme gradient boosting trees are explained in the methodology, Section 4. Section 5 investigates the experimental findings of the three XGBoost models, one for each dataset. Conclusions and future work are presented in Section 6, along with suggestions and the limitations of the study.
2. Fractals and Graphene-Based Nanocomposites
To achieve the optimal technological development and energy gains, materials must be tailored in a way that maximizes their property–structure relationship. Increasing the defect density of graphene through a transition from 1D linear edges to fractal edges is one way to accomplish this. This approach highlights the fractal nature inherent in graphene structures, which is important to consider when designing graphene-based materials [48,49].
In addition to fractalization, incorporating graphene into nanocomposites is another promising avenue for improving the properties and functionalities of graphene-based materials. For instance, the high surface area of graphene nanocomposites allows for increased interaction with other materials, resulting in improved mechanical, electrical, and thermal properties. Furthermore, the addition of graphene to polymers and other materials can enhance their strength, stiffness, and thermal conductivity [49].
As a comprehensive framework, combining the fractalization of graphene edges with the incorporation of graphene into nanocomposites holds great potential for advancing a wide range of technological applications, including energy storage, catalysis, and electronics. By leveraging the unique properties of graphene and tailoring its structure in a strategic manner, we can unlock new possibilities for material design and innovation.
In the literature, there are studies discussing the use of fractal-like structures for supercapacitor applications [50,51]. In the study [50], a metal–oxide electrode was synthesized in three different morphologies with similar specific surface areas: fern, flake, and microsphere. The fractal dimensions of these morphologies were estimated from electrochemical impedance spectroscopy. The capacitive surface charge storage contribution from cyclic voltammetry increased with the fractal dimension from the microspheres to the ferns. The study suggested that fractal-like structures benefit supercapacitor applications by promoting capacitive surface charge storage; its experiments aimed to investigate the effect of the fractal dimension on the charge storage performance of the related metal–oxide electrodes.
In the study [51], the preparation of a nanocomposite and its application in a two-electrode supercapacitor were described. Upon the formation of the composite, nanomaterials with a flower-petal-like shape were produced. The symmetric two-electrode supercapacitor exhibited excellent properties suitable for supercapacitor applications. Thus, fractal-structured nanocomposites are of great importance to supercapacitor design.
In addition to these fractal studies in the supercapacitor literature, cyclic voltammetry itself is also significant for the calculation of fractal dimensions from I–V signals. In several studies, the fractal dimension of a surface is computed by analyzing the cyclic voltammograms obtained when that surface is used as the working electrode in an electrochemical cell [52,53]. The separation between the peaks observed in such voltammograms is strongly influenced by the fractal dimension of the electrode. This relationship between fractals and cyclic voltammetry is significant, as it enables researchers to obtain information about the surface properties of electrodes from the voltammograms produced during electrochemical experiments [53].
Our study focused on developing models for cyclic voltammetry (CV) data, which plays a crucial role in calculating the fractal structures of electrodes. Specifically, we gathered a dataset from a graphene-based fractal electrode design, which will be discussed in the following section. By proposing CV prediction models, our study makes a significant contribution to researchers investigating fractal electrodes. This work will be particularly relevant for those seeking to better understand the behavior and properties of such electrodes in electrochemical systems.
4. Machine Learning Model—The XGBoost Algorithm
The XGBoost algorithm, which holds a very popular place in machine learning applications, mainly relies on decision tree logic; as an enhanced variant, its main advantage comes from the tree boosting methodology. As a further step, Chen and Guestrin [39] enriched gradient tree boosting with a greedy function-addition strategy for optimization. XGBoost is an ensemble learning method: it merges multiple learning models so that the integrated model has a stronger generalization capability and achieves more reliable modeling outcomes. XGBoost is an amendment to the boosting algorithm founded on gradient-descent trees and is composed of multiple decision tree iterations. It initially forms multiple classification and regression tree (CART) models to predict the data and then combines those trees into a single model. The model continues to update iteratively, and the new tree formed in each iteration fits the residual of the earlier trees. As the number of trees increases, the complexity of the integrated model progressively increases until it matches the complexity of the data themselves, at which point training yields the most reliable results. To provide a brief explanation of the XGBoost methodology, we follow the definitions of the studies [39,55] in the following lines. Equation (1) is the basis of the XGBoost model:

$$\hat{y}_i = \phi(x_i) = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}, \qquad (1)$$

where $\mathcal{F} = \{ f(x) = w_{q(x)} \}$ denotes the CART space, $w_{q(x)}$ is the score assigned to sample $x$, the model prediction is obtained by accumulating the scores of all trees, $q$ describes the structure of each tree, $K$ is the number of trees, and each $f_k$ corresponds to an independent tree structure $q$ with its leaf weights $w$.
In XGBoost, the inner decision tree is a conventional regression tree. For the squared loss function, each split node of the regression tree fits the residual directly; for a general loss function (trained by gradient descent), the split node fits an estimate of the residual. Hence, the accuracy of XGBoost is higher. Equation (2) states the iterative residual-fitting scheme:

$$\hat{y}_i^{(0)} = 0, \qquad \hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i), \qquad (2)$$

where $\hat{y}_i^{(t)}$ is the prediction for the $i$-th sample after $t$ iterations and $\hat{y}_i^{(0)}$ is its initial value.
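As a purely illustrative companion to Equation (2) (our own sketch, not the implementation used in the experiments), the residual-fitting loop for a squared loss can be written in a few lines of Python, with scikit-learn regression trees standing in for the boosted CARTs:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_trees(X, y, n_trees=50, learning_rate=0.3, max_depth=3):
    """Toy residual-fitting loop of Eq. (2) for a squared loss:
    y_hat^(t) = y_hat^(t-1) + f_t(x), each f_t being a small regression
    tree fitted to the current residuals (no regularization here)."""
    y_hat = np.zeros_like(y, dtype=float)          # y_hat^(0) = 0
    trees = []
    for _ in range(n_trees):
        residuals = y - y_hat                      # negative gradient of the squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        y_hat += learning_rate * tree.predict(X)   # additive update of Eq. (2)
        trees.append(tree)
    return trees, y_hat
```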
The objective optimization function of XGBoost, namely the loss function (3), is minimized through this iterative residual fitting:

$$\mathcal{L}(\phi) = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k} \Omega(f_k). \qquad (3)$$

For a general loss function, XGBoost applies a second-order Taylor expansion to extract more information from the gradient and simultaneously eliminates the constant term, so that the model can be trained thoroughly by gradient descent. Equations (4) and (5) state the loss function at the $t$-th step:

$$\mathcal{L}^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t), \qquad (4)$$

$$\mathcal{L}^{(t)} \simeq \sum_{i=1}^{n} \left[ l\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^{2}(x_i) \right] + \Omega(f_t), \qquad (5)$$

where $g_i$ and $h_i$ stand for the first and second derivatives of the loss with respect to $\hat{y}_i^{(t-1)}$.
Unlike other methods, XGBoost uses a regularization term $\Omega$ (6) to prevent over-fitting and considerably increase the accuracy of the model. The $\Omega$ function describes the model complexity of a tree: the smaller its value, the more powerful the generalization capability of the tree,

$$\Omega(f) = \gamma T + \tfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^{2}, \qquad (6)$$

where $w_j$ stands for the weight of the $j$-th leaf node of the tree model $f$, $T$ states the total number of leaf nodes of the tree, $\gamma$ is the penalty on the number of leaves, and $\lambda$ is the L2 regularity penalty on the leaf weights; both are design parameters of the algorithm. Consequently, the objective function definitions (7)–(9) are obtained, where $I_j = \{ i \mid q(x_i) = j \}$ denotes the sample set on the $j$-th leaf node [39,55]:

$$\tilde{\mathcal{L}}^{(t)} = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \tfrac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^{2} \right] + \gamma T, \qquad (7)$$

$$w_j^{*} = - \frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad (8)$$

$$\tilde{\mathcal{L}}^{(t)}(q) = - \tfrac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^{2}}{\sum_{i \in I_j} h_i + \lambda} + \gamma T. \qquad (9)$$
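To make Equations (7)–(9) concrete, the short Python sketch below (our own illustration, not code from the benchmark study or from the H2O implementation) evaluates the optimal leaf weight and the greedy split gain from the per-sample gradient statistics $g_i$ and $h_i$:

```python
import numpy as np

def leaf_weight_and_score(g, h, lam=1.0, gamma=0.0):
    """Optimal weight w* and objective contribution of a single leaf,
    given the first/second-order gradients of the samples routed to it
    (Eqs. (8) and (9)). lam is the L2 penalty, gamma the per-leaf penalty."""
    G, H = np.sum(g), np.sum(h)
    w_star = -G / (H + lam)                    # Eq. (8)
    score = -0.5 * G ** 2 / (H + lam) + gamma  # this leaf's share of Eq. (9)
    return w_star, score

def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Objective reduction obtained by splitting a node into two children;
    XGBoost grows each tree by greedily taking the split with maximal gain."""
    def term(g, h):
        return np.sum(g) ** 2 / (np.sum(h) + lam)
    parent_g = np.concatenate([np.ravel(g_left), np.ravel(g_right)])
    parent_h = np.concatenate([np.ravel(h_left), np.ravel(h_right)])
    return 0.5 * (term(g_left, h_left) + term(g_right, h_right)
                  - term(parent_g, parent_h)) - gamma
```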
Figure 17 presents a brief, single-view flow chart of the XGBoost algorithm. Once the K trees have been built during training, each sample to be predicted is routed to one leaf node in every tree, and each leaf node carries a score. Finally, the scores from all trees are summed to build the predicted value for that sample [56].
5. Experimental Results
In this section, we present and discuss the experimental results for the CV prediction of supercapacitor performance measurement on the three datasets of the study. In our code laboratory setup, we used the Canonical Ubuntu 20.04 operating system (OS) installed on a workstation. Ubuntu is an open-source OS using the Linux kernel and based on Debian. The workstation had an Intel Xeon E5-2620 v4 2.1 GHz processor with 32 GB of RAM and an Nvidia Quadro M4000 8 GB GPU.
In our study, the model design was coded on a state-of-the-art AI platform named H2O [57] using the Jupyter Lab environment [58]. The programming language was based on the H2O Python module [59] and the necessary Python 3.8 packages.
The H2O platform makes AI and machine learning research rapid and effective. It makes the researcher’s work time- and cost-efficient, and with its built-in grid search algorithm, the designed models’ hyper-parameters are optimized automatically. Even when a train–test split is preferred in the experiments, H2O models use k-fold cross-validation in their training procedure, pick the best model, and run the test operation to generate the final performance metrics. In our experiments, we performed these complex processes in only 60 s. The run duration of 60 s was chosen intentionally to emphasize our algorithm’s fast and cost-effective structure relative to the benchmark study: the proposed XGBoost model completed both the training and testing phases within this 60 s budget to achieve the performance metrics demonstrated below. Before discussing the performance of our proposed prediction model for supercapacitor CV behavior, we provide the selected model’s unique parameters as set up in the code design of the H2O estimator models.
To describe the XGBoost model parameters, it is logical to start with the booster parameter, which establishes the learner type. Ordinarily, this is either a tree or a linear function: with trees, the model is composed of an ensemble of trees, whereas with the linear booster it is designed as a weighted sum of linear functions. In our case, the model was optimized with a dropouts meet multiple additive regression trees (DART) booster [60]. The normalization type of our model was set to “tree”, which gives new trees the same weight as each of the dropped trees in the DART process. The seed option for the randomization of the datasets was set to “1234”. The models solved our problems with 50 trees.
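For illustration, a minimal sketch of how such a DART-boosted XGBoost regressor can be configured through the H2O Python API is given below. The file name and the feature/response column names are hypothetical placeholders; only the booster, normalization type, seed, tree count, fold count, and 60 s runtime budget reflect the settings described above.

```python
import h2o
from h2o.estimators import H2OXGBoostEstimator

h2o.init()

# Hypothetical file and column names -- replace them with the actual CV dataset.
data = h2o.import_file("dataset_A.csv")
train, test = data.split_frame(ratios=[0.8], seed=1234)  # 80%/20% split

predictors = ["Volt", "ScanRate", "Concentration"]  # illustrative feature names
response = "Current"

model = H2OXGBoostEstimator(
    booster="dart",          # dropouts meet multiple additive regression trees
    normalize_type="tree",   # dropped trees weighted like ordinary trees
    ntrees=50,               # 50 trees, as in the final models
    nfolds=5,                # 5-fold cross-validation during training
    seed=1234,               # seed used for dataset randomization
    max_runtime_secs=60,     # the 60 s budget used in the experiments
)
model.train(x=predictors, y=response, training_frame=train)
print(model.model_performance(test_data=test))  # MSE, RMSE, MAE, R^2 on the test set
```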
To provide a balanced evaluation and a level comparison across the separate datasets, we used the mean-squared error (MSE), root-mean-squared error (RMSE), mean absolute error (MAE), and the coefficient of determination, R-squared [15,34]. The metrics are defined in (10):

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^{2}, \quad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \quad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |e_i|, \quad R^{2} = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^{2}}{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}, \qquad (10)$$

where $n$ is the number of samples and $e_i = y_i - \hat{y}_i$ is the error between the actual value $y_i$ and the model prediction $\hat{y}_i$; in the R-squared metric, $\bar{y}$ denotes the mean of the observed data. Outliers might have an undesirable impact on the value of these evaluation criteria; to mitigate this issue, the MAE was used as a balanced measure that treats all errors uniformly [34]. The closer the MSE, RMSE, and MAE are to zero, the better the performance, whereas for R-squared a great performance is indicated by values near 1.
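For completeness, the metrics in (10) can be reproduced with a few lines of NumPy; the helper below is our own convenience function (H2O reports the same scores automatically):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and R-squared as defined in Eq. (10)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": 1.0 - ss_res / ss_tot}
```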
In Table 5, we can examine the training results for the three datasets along with the achieved performance values. The table also lists the number of training samples; an 80%/20% train–test split was used for all the datasets. In the training stage, the R-squared metric was not investigated. Meanwhile, readers should keep in mind that our training procedure also included five-fold cross-validation, which means the model was validated while it was being trained.
Table 5 shows that, in the training stage, Dataset B had the best performance on every metric, followed by Dataset C in second position. This supports the earlier interpretations of the data distributions drawn from the EDA.
In Table 6, the test results are presented along with the R-squared metric and the number of samples in the testing dataset. Judging by the MSE, RMSE, and MAE metrics, Dataset B and Dataset C exhibited the best scores among the datasets. However, when considering the R-squared metric, Dataset A outperformed the other datasets; this difference can be attributed to the larger sample size of Dataset A. Nevertheless, it can be concluded that our proposed intelligent model for predicting the CV behavior of supercapacitors demonstrated superior performance and achieved the objectives outlined in our motivation.
To provide a more comprehensive analysis of our results, additional visual explanations were prepared and are presented in Figure 18. This figure shows the distribution of the actual vs. predicted values and the residual error plots for all sub-datasets included in our study. The visual representation gives a clear picture of the performance of the proposed model and highlights its strengths and limitations.
The results obtained from the proposed machine-learning-based prediction model for the output current of the supercapacitors clearly demonstrate the effectiveness of the model, as can be readily seen from the visual presentations in Figure 18.
Upon close examination of the figures, it becomes apparent that the model was able to accurately predict the output current of the supercapacitors. This can be seen from the strong correlation between the actual and predicted values, as well as the low residual errors. In other words, the model produced results with a high degree of accuracy and low prediction errors.
This level of performance is a testament to the robustness of the machine learning algorithms and techniques used in the model, as well as the quality of the data used for training and testing. The high correlation and low errors indicate that the model was able to capture the underlying patterns and relationships in the data and to generalize these patterns to make accurate predictions on new, unseen data.
In conclusion, the results obtained from the proposed model demonstrated that it is an effective solution for predicting the output current of supercapacitors. The model’s accuracy and low prediction errors make it a valuable tool for researchers, engineers, and other professionals in the field of energy storage and power electronics.
Figure 19 provides a visual representation of the actual output currents and the predicted output currents on the same plot. This presentation is useful for evaluating the performance of the prediction model and for clearly showing how the predicted outputs track the actual data.
From Figure 19, it can be observed that, for all datasets, the predicted output currents closely track the actual data. This indicates that the model was able to effectively capture the underlying patterns and relationships between the input features and the output currents and to use these patterns to make accurate predictions on new data.
In order to gain a deeper understanding of the model’s performance, an additional test was conducted using a single-input prediction model that included only the Volt feature as its input. The EDA had shown that the Volt feature has a significant impact on the output compared with the other two features; to investigate the effect of the less dominant features on the outputs, this additional test was conducted on Dataset A with the single-input model.
This additional test provided valuable insights into the performance of the prediction model and highlighted the importance of considering all relevant features in the training and testing of machine learning models. The results of this test can be used to further optimize the model and improve its performance, making it an even more effective tool for predicting the output currents of supercapacitors. Please see Table 7 for the metrics and detailed results.
Contrary to the commonly held belief that the Volt feature alone would provide sufficient performance, the results of the single-input model test showed that this is not the case. The R-squared value was negative, indicating that the model’s performance was poor when only the Volt feature was considered.
To further understand the performance of the proposed model, a comparison was made with the benchmark study [34]. In this comparison, the mean values of the RMSE and MAE metrics were calculated from the results reported in the benchmark study, and the results from the testing stage of our proposed model were then compared against these mean values.
The comparison with the benchmark study provided valuable insights into the performance of the proposed model and highlighted its strengths and weaknesses. The comparison also helped to validate the results obtained from the model and demonstrate its effectiveness compared to other similar studies in the field.
In conclusion, the results of the comparison with the benchmark study and the single input model test provided a clear picture of the performance of the proposed model. While the model may have some limitations, it still represents a valuable tool for predicting the output currents of supercapacitors and provides a solid foundation for future research and development in the field.
The comparison between the proposed XGBoost-based machine learning prediction model and the benchmark study is presented in Table 8. The results showed that the proposed model outperformed the benchmark study in terms of the RMSE and MAE metrics, commonly used for evaluating the performance of supercapacitor CV behavior prediction models.
The benchmark study proposed a conventional artificial neural network (ANN) method and a random forest algorithm for the same datasets. While the ANN is well known for its iterative learning algorithm, the random forest is a bagged ensemble of basic decision tree models. The proposed model, in contrast, uses the state-of-the-art XGBoost algorithm, which offers a low-computational-cost implementation. With very few parameters and a 60 s runtime, the proposed model outperformed the benchmark study on almost all datasets in terms of its RMSE and MAE values. The only exception was the RMSE value for Dataset B, where the proposed model fell just behind the ANN, by a very small margin.
The results of the comparison demonstrate the strong performance of the proposed method and its superiority over conventional methods in the design of supercapacitor performance measurement tools. The proposed model is more efficient and offers a rapid process with few design parameters, making it an attractive solution for supercapacitor performance prediction.
The current study successfully demonstrated the implementation of a data-driven method for predicting the electrode effects on supercapacitor performance and revealed the most-prominent parameters that affect the behavior of supercapacitors. In the future, the proposed model may facilitate further analyses and potential works on the optimization of carbon-based electrodes in various supercapacitor applications. Overall, the results of this study make a valuable contribution to the field of supercapacitor performance prediction and open up new avenues for further research and development.