Article

Parameter Identification of Solid Oxide Fuel Cell Using Elman Neural Network and Dynamic Fitness Distance Balance-Manta Ray Foraging Optimization Algorithm

1
Shanghai KeLiang Information Technology Co. Ltd., Shanghai 201103, China
2
Faculty of Electric Power Engineering, Kunming University of Science and Technology, Kunming 650500, China
*
Author to whom correspondence should be addressed.
Processes 2024, 12(11), 2504; https://doi.org/10.3390/pr12112504
Submission received: 10 October 2024 / Revised: 7 November 2024 / Accepted: 9 November 2024 / Published: 11 November 2024
(This article belongs to the Topic Advances in Power Science and Technology, 2nd Edition)

Abstract

An accurate solid oxide fuel cell model is a prerequisite for optimizing the operation and state estimation of subsequent cell systems. Hence, this work aimed to utilize a powerful algorithmic tool, i.e., the Elman neural network, for data prediction to enrich cell measurement data and to employ the trained network model for noise reduction of voltage–current data. Furthermore, to obtain reliable cell parameters, a novel parameter identification model based on the dynamic fitness distance balance-manta ray foraging optimization (dFDB-MRFO) algorithm is proposed. Two datasets were applied to extract the electrochemical model and simple electrochemical model parameters of the solid oxide fuel cell. To adequately verify the superiority of this method, it was compared with seven conventional heuristic algorithms using four performance indicators as evaluation criteria. Comprehensive case studies demonstrated that data processing can effectively heighten the precision and robustness of identification. In general, the model fitting data obtained via parameter identification using dFDB-MRFO show excellent fitting precision compared with the measured voltage–current data. Notably, the fitting degree obtained by dFDB-MRFO in the simple electrochemical model reached 99.95% and 99.91% under the two datasets, respectively.

1. Introduction

The widespread consumption and continuous rise in prices of fossil fuels accelerate resource depletion and trigger severe energy crises and environmental pollution problems [1,2]. These two challenges are jointly driving humanity to accelerate the search for clean, renewable, and low-cost new energy sources [3,4]. So far, many renewable energy sources have been developed and utilized to a certain extent [5,6,7], such as solar energy, wind energy, wave energy, and hydrogen energy. Hydrogen energy is currently one of the cleanest, most effective, and most promising renewable energy sources. In particular, as an outstanding energy storage carrier in the power grid system, hydrogen can effectively mitigate the inherent intermittency and volatility of renewable energy [8]. The solid oxide fuel cell (SOFC) is an outstanding representative of the numerous hydrogen energy applications; it is an electrochemical device that directly converts the chemical energy of fuel into electrical energy [9,10,11], in which H2 is generally used as fuel, while electrical energy, H2O, and heat are the outputs of the internal power generation reaction in the cell. In addition, SOFC has advantages such as high power conversion efficiency, low pollution, and a high waste heat utilization rate [12,13], which give it bright development prospects and application value.
It is worth noting that dependable SOFC models are crucial for achieving optimal control of cells and improving cell performance. Currently, models for SOFC mainly include the electrochemical model (ECM), simple electrochemical model (SECM), steady-state model, and dynamic model. Among these, ECM and SECM have been widely used due to their simple structure and high accuracy [14]. However, due to the nonlinearity, strong coupling, and multivariable nature of SOFC, it is not easy for analytical methods and traditional optimization algorithms to identify the optimal cell model parameters, making it difficult to achieve accurate cell modeling [15]. Heuristic algorithms, which use random search for iterative optimization, can solve these problems quickly and accurately. An SOFC parameter identification model based on a cooperation search algorithm was proposed by Islam et al. [16], which studied the parameter identification of the steady-state and dynamic models. Wang et al. [17] proposed an identification method based on a modified version of gray wolf optimization (MGWO). A parameter identification study was conducted under various temperature and pressure conditions, and the results obtained by MGWO were compared with those of four other algorithms; the comparison showed that the error of MGWO was minimal in all cases. A novel fractional-order dragonfly algorithm model was designed by Guo et al. [18], which uses pressure and temperature as variables to study their impact on the final parameter identification results. Yousri et al. [19] proposed a parameter identification model based on a comprehensive learning dynamic multi-swarm marine predator algorithm, which utilizes dynamic multi-group and comprehensive learning strategies to improve the exploration and exploitation process of the marine predator algorithm; it was applied to the steady-state model under different temperature conditions.
The experimental results show that it has fast convergence speed and high optimization accuracy. Fathy et al. [20] used a political optimizer to study parameter identification of the steady-state and dynamic models of SOFC. The sum of mean square errors between the experimental voltage and the model-calculated voltage data was used as the objective function, and the results showed that the political optimizer had better robustness and accuracy. An SOFC parameter identification model based on artificial ecosystem-based optimization (AEO) and manta ray foraging optimization (MRFO) was proposed by Chen et al. [21], which utilizes a fusion mechanism to significantly improve the algorithm's global search ability. The impact of temperature and pressure changes on the parameter identification results was also studied, and the results showed that the new model can achieve fast, accurate, and stable parameter identification. However, these methodologies did not consider the impact of the quantity and quality of experimental data on the identification outcomes, confining their investigations to a single experimental dataset. This study provides a new direction for SOFC parameter identification, fully considering both the identification method and the impact of the experimental data on the identification results. Overall, the contributions and innovations of this study can be summarized as follows:
  • Two SOFC models were established and studied for parameter identification, and two experimental datasets of SOFC were collected and organized
  • The Elman neural network (ENN) was applied to process the two datasets, namely for data prediction and data denoising, and the processed data were applied to the parameter identification of the two models
  • The fitness and distance balance criteria were applied to the MRFO algorithm to design an SOFC parameter identification model based on the dynamic fitness distance balance-manta ray foraging optimization (dFDB-MRFO) algorithm, which was compared with seven other typical heuristic algorithms under four performance indicators
  • The performance of dFDB-MRFO was proven via comprehensive comparison with the seven heuristic algorithms; it has higher accuracy, better robustness, and faster convergence speed. In addition, it was also proven that data prediction and noise reduction processing can effectively improve identification accuracy and the stability of the identification results.

2. SOFC Modelling

In this section, the principle of SOFC and two SOFC models are introduced.

2.1. Mathematical Model

The electrochemical reaction principle inside SOFC is shown in Figure 1, where the anode and cathode chemical reaction mechanisms of the cell are as follows [22]:
Anode:
$$2\mathrm{H}_2 + 2\mathrm{O}^{2-} \rightarrow 2\mathrm{H}_2\mathrm{O} + 4e^-$$
Cathode:
$$\mathrm{O}_2 + 4e^- \rightarrow 2\mathrm{O}^{2-}$$
Here, two types of SOFC models were mainly studied, namely ECM and SECM. SECM is a simplified version of ECM, and the mathematical expressions and parameters to be identified for both models are summarized in Table 1. Therein, $N_{cell}$ represents the number of single cells connected in series in the stack; $E_o$ represents the open-circuit voltage of the cell, V; $V_{act}$ represents the activation polarization voltage loss of the cell, V; $V_{ohm}$ represents the ohmic voltage loss, V; $V_{con}$ represents the concentration polarization voltage loss, V; $I_0$ represents the cell exchange current density, mA/cm2.
In addition, activation voltage loss V a c t can be formulated as below [25,26]:
$$V_{act} = \begin{cases} A\sinh^{-1}\left(\dfrac{I}{2I_{0,a}}\right) + A\sinh^{-1}\left(\dfrac{I}{2I_{0,c}}\right), & \text{for ECM} \\ A\sinh^{-1}\left(\dfrac{I}{2I_{0}}\right), & \text{for SECM} \end{cases}$$
where $A$ represents the slope of the Tafel curve, mV/decade; $I$ represents the load current density, mA/cm2; $I_{0,a}$ represents the anode exchange current density, mA/cm2; $I_{0,c}$ represents the cathode exchange current density, mA/cm2; $I_0$ is defined as the exchange current density for SECM. Moreover, the ohmic voltage loss can be described by
$$V_{ohm} = I \cdot R_{ohm}$$
where $R_{ohm}$ represents the equivalent resistance inside the cell, kΩ·cm2.
Finally, concentration voltage loss V c o n can be calculated by
$$V_{con} = -B \cdot \ln\left(1 - \frac{I}{I_L}\right)$$
where $B$ represents a constant; $I_L$ represents the limiting current density, mA/cm2.
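Combining the three loss terms with the open-circuit voltage gives the SECM stack voltage $V = N_{cell}\left(E_o - V_{act} - V_{ohm} - V_{con}\right)$. The following minimal sketch evaluates this model; the parameter values used in the usage example are illustrative assumptions, not identified values from this work:

```python
import numpy as np

def secm_voltage(I, E_o, A, R_ohm, B, I_0, I_L, N_cell=1):
    """Stack voltage of the simple electrochemical model (SECM).

    I : load current density (mA/cm^2), scalar or array; the other
    arguments follow the symbol definitions in the text.
    """
    V_act = A * np.arcsinh(I / (2.0 * I_0))   # activation loss
    V_ohm = I * R_ohm                         # ohmic loss
    V_con = -B * np.log(1.0 - I / I_L)        # concentration loss
    return N_cell * (E_o - V_act - V_ohm - V_con)
```

At zero load current all three losses vanish and the stack voltage equals $N_{cell} \cdot E_o$; as $I$ approaches $I_L$, the concentration term drives the voltage down sharply.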

2.2. Objective Function

To achieve accurate extraction of cell parameters, it is necessary to minimize the error between fitting data and actual data. Therefore, this work used root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) [27] as fitness functions for SOFC cell parameter identification as follows:
$$RMSE_{min}(x) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(V_{f,m} - V_{a,m}\right)^{2}}$$
$$MAE_{min}(x) = \frac{1}{M}\sum_{m=1}^{M}\left|V_{f,m} - V_{a,m}\right|$$
$$MAPE_{min}(x) = \frac{100\%}{M}\sum_{m=1}^{M}\left|\frac{V_{f,m} - V_{a,m}}{V_{a,m}}\right|$$
$$x = \begin{cases} \left[E_o,\ A,\ R_{ohm},\ B,\ I_{0,a},\ I_{0,c},\ I_L\right], & \text{ECM} \\ \left[E_o,\ A,\ R_{ohm},\ B,\ I_{0},\ I_L\right], & \text{SECM} \end{cases}$$
where $x$ represents the vector of variables to be identified; $M$ is the size of the actual current and voltage dataset; $V_{f,m}$ means the $m$th output voltage in the fitted data; $V_{a,m}$ indicates the $m$th output voltage in the actual data. Therefore, $V_{f,m}$ can be calculated based on the mathematical equations of the cell models in Table 1.
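Given fitted and measured voltage vectors, the three error indicators can be computed directly; this sketch simply mirrors the definitions above:

```python
import numpy as np

def identification_errors(V_fit, V_act):
    """RMSE, MAE, and MAPE between fitted and measured voltage data."""
    V_fit = np.asarray(V_fit, dtype=float)
    V_act = np.asarray(V_act, dtype=float)
    err = V_fit - V_act
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(100.0 * np.mean(np.abs(err / V_act)))  # in percent
    return rmse, mae, mape
```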

3. ENN for Data Process

This section discusses the principle of ENN and the data preprocessing via ENN.

3.1. Principle of ENN

ENN is a special type of artificial neural network that contains an input layer, a hidden layer, and an output layer, as well as a feedback layer, which is its main difference from traditional feedforward neural networks. In ENN, the neurons in the hidden layer receive not only signals from the input layer but also feedback signals from the state of the hidden layer at the previous moment. The structure of ENN [28] is shown in Figure 2. The feedback layer memorizes the previous hidden-layer state and feeds it back to the hidden layer, forming a self-connection that greatly improves the dynamic and time-varying characteristics of the network [29].
The nonlinear state space model of ENN is as follows [30,31,32]:
$$\begin{cases} y_k = g\left(w_3 \cdot x_k\right) \\ x_k = f\left(w_1 \cdot x_{c,k} + w_2 \cdot u_{k-1}\right) \\ x_{c,k} = x_{k-1} \end{cases}$$
where $y_k$ represents the output layer vector; $x_k$ represents the hidden layer vector; $u_k$ represents the input layer vector; $x_{c,k}$ represents the feedback layer vector; $w_1$ represents the connection weights from the feedback layer to the hidden layer; $w_2$ represents the connection weights from the input layer to the hidden layer; $w_3$ represents the connection weights from the hidden layer to the output layer; $g(\cdot)$ represents the transfer function of the output layer; $f(\cdot)$ represents the transfer function of the hidden layer.
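One time step of the state-space model above can be sketched as follows; the choice of tanh for the hidden transfer function $f$ and a linear output $g$ is a common default and an assumption here, since the text does not fix them:

```python
import numpy as np

def elman_step(u_prev, x_prev, w1, w2, w3):
    """One time step of an Elman network.

    The context (feedback) layer copies the previous hidden state,
    x_c(k) = x(k-1); w1, w2, w3 are the feedback-to-hidden,
    input-to-hidden, and hidden-to-output weight matrices.
    """
    x_c = x_prev                          # feedback layer
    x = np.tanh(w1 @ x_c + w2 @ u_prev)   # hidden layer, f = tanh (assumed)
    y = w3 @ x                            # output layer, g = identity (assumed)
    return y, x
```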

3.2. ENN for V-I Data Preprocessing

The ENN algorithm is based on the nearest neighbor (NN) classification method: for each instance within the undersampled category, its nearest neighbor samples are computed, and whether the current instance is retained is determined by the category of these nearest neighbors. First, for each sample in the dataset, its k nearest neighbors are calculated. Subsequently, based on the categories of these nearest neighbor data points, a decision is made regarding the retention of the current data point. Finally, through these steps, the ENN algorithm eliminates instances that are discordant with their neighboring instances, resulting in a refined and purer dataset.

3.2.1. ENN for V-I Data Prediction

Parameter identification heavily relies on the output V-I data of SOFC, while newly manufactured cells often have low parameter identification accuracy and modeling accuracy due to insufficient V-I data provided by the manufacturers, which affects the subsequent study on cells. Moreover, in addition to the lack of data, the direct loss of data and the incompleteness of records will also significantly affect the accuracy of parameter identification. Therefore, this study proposed the use of ENN for predicting and processing current and voltage data to increase the amount of data used and improve the accuracy and reliability of parameter identification.

3.2.2. ENN for V-I Data Denoising

In addition, the measured SOFC current and voltage data will be affected by factors such as operating conditions and environmental conditions during actual operation, resulting in many abnormal values in the measured V-I data, seriously reducing the accuracy of SOFC parameter identification. Therefore, this work studied the parameter identification results under noisy data and utilized ENN to achieve data denoising processing.
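As a simplified, self-contained stand-in for the trained-network denoising described here, the sketch below fits a smooth reference curve to the V-I data (a polynomial instead of an ENN, purely for illustration) and replaces samples whose residual is abnormally large:

```python
import numpy as np

def denoise_vi(I, V, degree=3, k=2.5):
    """Replace outlier voltage samples with values from a smooth fit.

    A polynomial stands in for the trained network's smooth output;
    samples deviating by more than k standard deviations of the
    residual are treated as abnormal values and replaced.
    """
    I = np.asarray(I, dtype=float)
    V = np.asarray(V, dtype=float)
    coeffs = np.polyfit(I, V, degree)           # smooth reference curve
    V_fit = np.polyval(coeffs, I)
    resid = V - V_fit
    mask = np.abs(resid) > k * np.std(resid)    # flag abnormal samples
    V_clean = np.where(mask, V_fit, V)          # replace them with the fit
    return V_clean, mask
```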

4. dFDB-MRFO Based SOFC Parameter Identification

In this section, the parameter identification method is established.

4.1. dFDB-MRFO Algorithm

dFDB-MRFO improves the foraging search method of manta ray based on the MRFO algorithm [33]. The algorithm process is as below.

4.1.1. Population Initialization

In the algorithm, the variable of the problem is the position of the manta ray in space, and the position matrix is the following [34]:
$$P = \begin{bmatrix} P_{1}^{1} & \cdots & P_{1}^{d} \\ \vdots & \ddots & \vdots \\ P_{N}^{1} & \cdots & P_{N}^{d} \end{bmatrix}_{N \times d}$$
where $N$ denotes the number of individual manta rays; $d$ represents the individual dimension of manta rays, i.e., the number of variables; $P_{N}^{d}$ indicates the position of the $N$th manta ray individual in the $d$th dimension.
The fitness of candidate solutions for manta ray individuals is the following:
$$F = \begin{bmatrix} F_{1} \\ \vdots \\ F_{N} \end{bmatrix}_{N \times 1}$$
where $F_N$ represents the fitness value of the $N$th manta ray individual.

4.1.2. Foraging Search

There are three main foraging behaviors of manta ray individuals, including chain foraging, cyclone foraging, and somersault foraging, as shown in Figure 3.

dFDB Strategy

dFDB selects among the candidate solutions based on a total score in which the fitness value and the distance of each manta ray individual are combined [35]. The mathematical model is described below.
Suppose that the j th solution is P j , the optimal candidate solution selected based on fitness can be represented as P b e s t , then the distance between candidate solutions selected based on dFDB strategy is calculated using Euclidean distance, as follows:
$$P_j = \left[P_{1j}, P_{2j}, \ldots, P_{Nj}\right]$$
$$P_{best} = \left[P_{1(best)}, P_{2(best)}, \ldots, P_{N(best)}\right]$$
$$D_{P_j} = \sqrt{\left(P_{1j} - P_{1(best)}\right)^2 + \left(P_{2j} - P_{2(best)}\right)^2 + \cdots + \left(P_{Nj} - P_{N(best)}\right)^2}, \quad j = 1, 2, \ldots, d$$
where $j$ represents the dimension index; $P_{N(best)}$ represents the $N$th component of the position of the best manta ray candidate solution selected based on the fitness criterion.
In addition, it is first necessary to normalize the fitness and distance, and then perform weighted summation to obtain the final optimal solution selection criteria, as follows:
$$S_{P_j} = w \cdot norm\left(F_j\right) + (1 - w) \cdot norm\left(D_{P_j}\right), \quad j = 1, 2, \ldots, d$$
where $S_{P_j}$ is the total score using dFDB as the standard; $w$ represents the weight of fitness and distance in the total score; $F_j$ represents the fitness value of the $j$th solution; $D_{P_j}$ denotes the Euclidean distance calculated from the $j$th solution; $norm(\cdot)$ denotes normalization.
In dFDB strategy, the selection of w is the following:
$$w = \frac{t}{T} \cdot (1 - lb) + lb$$
where $t$ is the current number of iterations; $T$ represents the maximum number of iterations; $lb$ represents the lower limit value of $w$, with a range of $0 < lb < 0.5$.
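The dFDB selection above can be sketched as follows for a minimization problem; inverting the normalized fitness so that lower (better) fitness scores higher is an assumption about the sign convention, since the text normalizes both terms without stating it:

```python
import numpy as np

def dfdb_select(P, F, t, T, lb=0.2):
    """Index of the dFDB-selected guide individual.

    P : (N, d) population positions; F : (N,) fitness values (minimization).
    """
    best = P[np.argmin(F)]                  # fitness-best candidate
    D = np.linalg.norm(P - best, axis=1)    # Euclidean distance to it

    def norm(v):                            # min-max normalization
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

    w = (t / T) * (1.0 - lb) + lb           # weight grows with iterations
    S = w * (1.0 - norm(F)) + (1.0 - w) * norm(D)
    return int(np.argmax(S))
```

Early in the run ($w \approx lb$) the distance term dominates and a diverse individual is favored; late in the run ($w \to 1$) the fitness term dominates and the selection collapses onto the fitness-best individual.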

Chain Foraging

During chain foraging, the manta ray population arranges itself into a chain-like foraging line. In this case, the mathematical model for updating the individual positions of manta rays is as follows [36]:
$$P_i^d(t+1) = \begin{cases} P_i^d(t) + r \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right) + \alpha \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right), & i = 1 \\ P_i^d(t) + r \cdot \left(P_{i-1}^d(t) - P_i^d(t)\right) + \alpha \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right), & i = 2, \ldots, N \end{cases}$$
$$\alpha = 2r \cdot \sqrt{\left|\log(r)\right|}$$
where $P_i^d(t)$ represents the position of the $i$th individual in the $d$th dimension of the manta ray population during the $t$th iteration; $P_{dFDB}^d(t)$ means the position in the $d$th dimension of the optimal candidate individual selected by the improved algorithm in the $t$th iteration; $r$ is a uniform random number on the interval [0, 1]; $\alpha$ represents the weight coefficient.

Cyclone Foraging

Cyclone foraging mainly involves the population of manta rays forming a foraging chain and spiraling towards the target food [37]. The mathematical model is as follows:
$$P_i^d(t+1) = \begin{cases} P_{dFDB}^d(t) + r \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right) + \beta \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right), & i = 1 \\ P_{dFDB}^d(t) + r \cdot \left(P_{i-1}^d(t) - P_i^d(t)\right) + \beta \cdot \left(P_{dFDB}^d(t) - P_i^d(t)\right), & i = 2, \ldots, N \end{cases}$$
$$\beta = 2 e^{r_1 \frac{T - t + 1}{T}} \cdot \sin\left(2\pi r_1\right)$$
$$P_{rand}^d = LB^d + r \cdot \left(UB^d - LB^d\right)$$
$$P_i^d(t+1) = \begin{cases} P_{rand}^d + r \cdot \left(P_{rand}^d - P_i^d(t)\right) + \beta \cdot \left(P_{rand}^d - P_i^d(t)\right), & i = 1 \\ P_{rand}^d + r \cdot \left(P_{i-1}^d(t) - P_i^d(t)\right) + \beta \cdot \left(P_{rand}^d - P_i^d(t)\right), & i = 2, \ldots, N \end{cases}$$
where $\beta$ means the weight coefficient of cyclone foraging; $r_1$ represents a random number on [0, 1]; $P_{rand}^d$ is a random position in the $d$th dimension; $LB$ represents the lower limit of the variable to be solved; $UB$ represents the upper limit of the variable to be solved.

Somersault Foraging

Somersault foraging mainly uses food sources as a fulcrum, and manta rays use this fulcrum to roll and find better foraging positions [38]. The mathematical model is as follows:
$$P_i^d(t+1) = P_i^d(t) + S \cdot \left(r_2 \cdot P_{dFDB}^d(t) - r_3 \cdot P_i^d(t)\right), \quad i = 1, 2, \ldots, N$$
where $S$ represents the somersault factor of the manta ray; $r_2$ and $r_3$ are random numbers on [0, 1].

4.1.3. Update Iteration

The updated iterative mathematical model is shown in the following equation:
$$\begin{cases} P_i(t) = P_i(t+1), & F_i(t+1) < F_i(t) \\ P_i(t) = P_i(t), & F_i(t+1) \ge F_i(t) \\ P_{best}(t) = P_i(t), & F_i(t) < F_{best}(t) \end{cases}$$
where $F_i(t)$ represents the fitness value of the $i$th manta ray individual (smaller is better for the minimization objective); $P_i(t)$ is the position of the $i$th manta ray individual.
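The foraging moves and the greedy update can be combined into a compact optimizer sketch. For brevity it uses the fitness-best individual as the guide instead of the full dFDB selection and omits the random-position exploration branch of cyclone foraging, so it illustrates the update structure rather than reproducing the complete dFDB-MRFO:

```python
import numpy as np

def mrfo_sketch(f, lb, ub, N=30, T=200, seed=0):
    """Minimal manta-ray-style minimizer with chain, cyclone, and
    somersault moves plus greedy replacement."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    P = lb + rng.random((N, d)) * (ub - lb)          # initial population
    F = np.array([f(p) for p in P])
    best = P[np.argmin(F)].copy()
    for t in range(1, T + 1):
        for i in range(N):
            r = rng.random(d)
            ref = P[i - 1] if i > 0 else best        # chain predecessor
            if rng.random() < 0.5:                   # cyclone foraging
                r1 = rng.random(d)
                beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)
                cand = best + r * (ref - P[i]) + beta * (best - P[i])
            else:                                    # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(1.0 - rng.random(d))))
                cand = P[i] + r * (ref - P[i]) + alpha * (best - P[i])
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < F[i]:                            # greedy replacement
                P[i], F[i] = cand, fc
        S = 2.0                                      # somersault factor
        for i in range(N):
            r2, r3 = rng.random(d), rng.random(d)
            cand = np.clip(P[i] + S * (r2 * best - r3 * P[i]), lb, ub)
            fc = f(cand)
            if fc < F[i]:
                P[i], F[i] = cand, fc
        best = P[np.argmin(F)].copy()                # update the guide
    return best, float(F.min())
```

Because individuals are only ever replaced by strictly better candidates, the population fitness is monotonically non-increasing over the iterations.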

4.2. dFDB-MRFO Based SOFC Parameter Identification Design

4.2.1. Fitness Function

The goal of using this algorithm for parameter identification is to minimize the difference between the fitted output voltage and the actual output voltage. The fitness function of this algorithm uses the three objective functions established in the cell modeling section, namely RMSE, MAE, and MAPE, as measurement indicators, with the time required for a single run as an additional performance indicator.

4.2.2. Constraints

This study combines the variables to be solved into vectors as the positions of manta rays in the algorithm and then performs iterative optimization. Therefore, each parameter needs to be limited within its upper and lower boundaries as follows:
$$x_{min}^h \le x^h \le x_{max}^h, \quad h = 1, 2, \ldots, H$$
$$x = \begin{cases} \left[E_o,\ A,\ R_{ohm},\ B,\ I_{0,a},\ I_{0,c},\ I_L\right], & \text{ECM} \\ \left[E_o,\ A,\ R_{ohm},\ B,\ I_{0},\ I_L\right], & \text{SECM} \end{cases}$$
where $x^h$ denotes the $h$th SOFC parameter to be identified; $x_{min}^h$ and $x_{max}^h$ are the lower and upper limits of the $h$th parameter to be identified, respectively; $H$ represents the number of parameters to be identified.

4.2.3. Execution Process

The complete process of parameter identification can be divided into three parts: data collection, data processing, and parameter extraction, respectively, as shown in Figure 4.

5. Case Studies

The dFDB-MRFO algorithm was applied to two datasets for a parameter identification study on two models of SOFC, namely ECM and SECM. The search range of each parameter in both models is shown in Table 2. The datasets come from a 1 kW flat stack cell developed by Huazhong University of Science and Technology (HUST) [39] and a single cell developed by the CEREL company in Poland [40,41], referred to as HUST and Poland in this work, respectively.
In addition, ENN was applied to the two datasets for data processing, namely data prediction and data denoising. The parameter settings of ENN and the details of each dataset are shown in Table 3, and the data processing results are shown in Figure 5. The results show that the predicted and denoised data curves are basically consistent with the original data curves, which fully demonstrates that ENN has high prediction and fitting accuracy.
Moreover, the dFDB-MRFO algorithm was compared with seven other heuristic algorithms, namely artificial bee colony (ABC) [42], ant lion optimizer (ALO) [43], bird swarm algorithm (BSA) [44], gray wolf optimization (GWO) [45,46], improved whale optimization algorithm (IWOA) [47], moth swarm algorithm (MSA) [48], and whale optimization algorithm (WOA) [49]. ABC is a swarm intelligence algorithm that imitates the honey-harvesting behavior of bees and finds the optimal solution through the collaborative work of harvesting, observing, and reconnaissance bees. ALO is an optimization algorithm based on the hunting behavior of ant lions, using their traps and hunting strategies to search for the optimal solution. BSA mimics the foraging and migration behavior of birds and seeks the optimal solution through their flight and gathering behavior. GWO is based on the predation behavior of grey wolves and searches for the optimal solution through their social hierarchy and predation strategy. IWOA simulates the hunting and bubble-net attack behavior of whales with improved search performance and convergence speed. MSA imitates the behavior of moths using light for navigation and seeks the optimal solution through their flight and search strategies. WOA is based on whale hunting behavior and searches for the optimal solution by simulating whale hunting strategies and bubble-net attack behavior [50]. These seven algorithms all offer strong global search ability, high parallel computing efficiency, and good robustness, and are suitable for solving complex optimization problems. A total of eight algorithms were applied to the two models of SOFC, using the two datasets for the parameter identification study, in four cases.
To compare the algorithms fairly, the maximum number of iterations and the population size of each algorithm were set to the same values. Specifically, the maximum number of iterations was set to 1000, the population size was set to 50, and all eight algorithms underwent 10 independent runs [51]. All case studies were undertaken in SimuNPS, released by Shanghai KeLiang Information Technology Co., Ltd., Shanghai, China, on a personal computer with an Intel(R) Core (TM) i5 CPU at 3.0 GHz and 64 GB of RAM.

5.1. ECM Parameter Identification

5.1.1. Results of ECM Based on Prediction Data

Table 4 presents the ECM parameter identification and RMSE results obtained from the various algorithms on the two datasets, where 'O' and 'P' represent the original data and predicted data, respectively. Evidently, dFDB-MRFO has the highest accuracy in all situations, with a minimum RMSE of 9.5081 × 10−4. Data prediction effectively improves the accuracy of parameter identification and reduces the performance indicator RMSE. In particular, WOA shows the largest RMSE decrease in the Poland dataset, at 64.5413%. However, it is worth noting that in the HUST dataset, there are anomalies in BSA and MSA, resulting in an increase in RMSE values and a decrease in accuracy after data prediction.
The iterative curves obtained by the eight algorithms on two datasets are shown in Figure 6. dFDB-MRFO has the smallest RMSE, the highest accuracy, and the fastest convergence speed in all cases, while ALO has the slowest convergence speed among the eight algorithms. The box diagram of RMSE obtained by the eight algorithms under 10 independent running conditions is shown in Figure 7. Obviously, the results obtained by dFDB-MRFO have the highest accuracy, stability, and robustness among all the algorithms. In addition, after data prediction, the RMSE of each algorithm showed a certain degree of decrease, and GWO eliminated outliers in RMSE after data prediction. Overall, the values of RMSE obtained by each algorithm decrease as the amount of data increases.
Table 5 presents the performance indicators obtained by each algorithm after prediction under two datasets. It shows that dFDB-MRFO yields the smallest MAE and MAPE results, with the highest identification accuracy. However, its disadvantage is that it takes a long time, while WOA takes the shortest time, with a single run time of only 17.8711 s.
The identified parameters of dFDB-MRFO are substituted into the SOFC model to obtain the fitted curve, as shown in Figure 8. In the prediction dataset, the fitting accuracy is very high. Specifically, in the HUST prediction dataset, R2 reaches 99.90%, while in the Poland prediction dataset, R2 is 99.86%. Figure 9 shows the radar diagrams of the worst and best results of the eight algorithms for RMSE, MAE, MAPE, and time under 10 independent operating conditions. One can easily observe that the first three indicators of dFDB-MRFO are the most stable, while WOA has the shortest time consumption, as shown in Figure 9d. It is evident that the various indicators of IWOA show the worst robustness and the highest fluctuation in algorithm performance.

5.1.2. Results of ECM Based on Denoising Data

Table 6 presents the parameter identification results of ECM obtained after data denoising processing on the two datasets, where 'N' and 'DN' represent noise data and denoised data, respectively. It shows that the accuracy of each algorithm has been significantly improved after data denoising. In particular, dFDB-MRFO obtained the smallest RMSE after denoising in the Poland dataset, only 1.2957 × 10−3. At the same time, the largest decrease in RMSE is obtained by dFDB-MRFO in the HUST dataset, from 2.4767 × 10−2 to 1.4120 × 10−3, a decrease of 94.2989%. The second-largest decrease, 93.8329%, is again obtained by dFDB-MRFO, on the Poland dataset, while the smallest decrease, 36.7625%, is observed for the results obtained by IWOA on the HUST dataset.
Figure 10 shows the iterative curves of the eight algorithms obtained under the two datasets. It is evident that after data denoising, the RMSE of each algorithm has significantly decreased, and among the eight algorithms, dFDB-MRFO always obtains the smallest RMSE value and the highest accuracy. In addition, ALO has the slowest convergence speed among the eight algorithms. The boxplot results obtained after 10 independent runs with noise reduction processing are shown in Figure 11. dFDB-MRFO has the smallest fluctuation in the results under multiple running conditions, while BSA, WOA, and IWOA have larger fluctuations, as shown in Figure 10a. It is also very clear that the value of RMSE significantly decreases after data denoising, indicating that denoising can significantly improve the accuracy of parameter identification.
The fitting V-I curve obtained by dFDB-MRFO is shown in Figure 12, with R2 at 99.93% and 99.82%, respectively, indicating high fitting accuracy. Figure 13 shows the radar diagrams of each performance indicator obtained by the eight algorithms. It shows that dFDB-MRFO obtains the best results under noise reduction data conditions, and its worst- and best-case fluctuations are small, making the algorithm relatively stable. However, its disadvantage is that it takes a relatively long time.

5.2. SECM Parameter Identification

5.2.1. Results of SECM Based on Prediction Data

Table 7 presents the SECM parameter identification results obtained from two datasets. It shows that dFDB-MRFO outperforms other algorithms, with a minimum RMSE of only 1.0670 × 10−3. After data prediction, RMSE has been reduced to a certain extent, and the accuracy of parameter identification has been further improved.
Figure 14 compares the iteration curves of SECM under two datasets, and the results show that dFDB-MRFO performs better than other algorithms, and its convergence speed is also faster. Figure 15 shows the boxplot obtained from 10 independent runs, fully demonstrating the better accuracy and robustness of dFDB-MRFO compared to other approaches.
Table 8 presents MAE, MAPE, and Tcost obtained by each algorithm for SECM in the two predicted datasets. The results show that the accuracy indicators MAE and MAPE of dFDB-MRFO are both optimal, but the disadvantage is that the running time is relatively long, requiring 38.7129 s in the HUST dataset and 34.0141 s in the Poland dataset. In the same situation, WOA requires the shortest time, only 18.9393 s in HUST data, and only 17.7537 s in Poland data, reducing the time by half.
Figure 16 shows V-I fitting curves obtained by the dFDB-MRFO algorithm under the two types of data, with high fitting accuracy of 99.41% and 99.91%, respectively. Figure 17 shows the optimal and worst performance indicators under 10 independent operating conditions. WOA, BSA, and IWOA show significant fluctuations in RMSE, MAE, and MAPE, indicating the poor robustness of these three algorithms. In addition, IWOA also requires the longest time among the eight methods.

5.2.2. Results of SECM Based on Denoising Data

Table 9 presents the parameter identification and RMSE results obtained by SECM under data denoising processing. For dFDB-MRFO, the RMSE obtained is the smallest among the eight algorithms. In addition, after data denoising, the RMSE of all eight algorithms decreased to a certain extent. Specifically, dFDB-MRFO decreased from 2.1003 × 10−2 to 1.1003 × 10−3 on the Poland dataset, a decrease of 94.7612%, while the result obtained by WOA on the HUST dataset showed the smallest decrease, at 49.4918%.
Figure 18 depicts the iteration curves obtained by each algorithm before and after data denoising processing. One can easily observe that dFDB-MRFO has good convergence speed and accuracy. Figure 19 shows the boxplot obtained by each algorithm. Among the eight algorithms, dFDB-MRFO has the best robustness and accuracy, while BSA has very poor robustness, and there are many RMSE outliers after 10 runs, as shown in Figure 19b.
Figure 20 shows V-I curve fitting obtained by dFDB-MRFO, with fitting accuracy of 99.95% and 99.91% for both datasets, respectively. Figure 21 shows the optimal and worst results obtained by the eight algorithms for the four performance indicators. It is evident that BSA, IWOA, and WOA exhibit significant volatility, while dFDB-MRFO is more stable.

5.3. Discussions

Overall, data processing and the algorithmic improvement both raised the effectiveness of parameter identification. However, it should not be overlooked that, owing to the random-search characteristics of heuristic algorithms, the identification accuracy of some algorithms unexpectedly decreased after data processing. For instance, on the HUST dataset, the RMSEs obtained by BSA and MSA applied to the ECM increased compared with those before data prediction, as shown in Table 4. In addition, although the dFDB strategy improved the performance of the algorithm, its runtime was not the shortest among the eight algorithms, as shown in Figure 21d.
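The runtime cost discussed above stems from the population-based search loop itself: every candidate parameter vector requires a full model evaluation over the V-I data. A minimal random-search stand-in makes this structure explicit (illustrative only, not dFDB-MRFO; the function and names are ours):

```python
import numpy as np

def fit_random_search(model, bounds, i_data, v_data, n_iter=2000, seed=0):
    """Keep the lowest-RMSE parameter vector among uniform samples drawn
    inside the given (lower, upper) bounds. Every candidate costs one full
    model evaluation over the V-I data, which is why population-based
    searches spend most of their runtime in this loop."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    i_data = np.asarray(i_data, dtype=float)
    v_data = np.asarray(v_data, dtype=float)
    best_p, best_rmse = None, np.inf
    for _ in range(n_iter):
        p = rng.uniform(lo, hi)                       # one candidate vector
        err = v_data - model(i_data, *p)              # one full model evaluation
        rmse = np.sqrt(np.mean(err ** 2))
        if rmse < best_rmse:
            best_p, best_rmse = p, rmse
    return best_p, best_rmse
```

A real metaheuristic such as dFDB-MRFO replaces the uniform sampling with guided update rules, but the per-candidate model-evaluation cost is the same, which explains why the number of data points directly drives Tcost.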

6. Conclusions

This work adopted a novel hybrid method for SOFC parameter identification, with the following three outcomes/novelties:
  • A comprehensive modeling study was conducted on the SOFC, resulting in the establishment of two cell models: the SECM and the ECM. In addition, cell-related experimental data were fully investigated; two V-I datasets of the SOFC were collected and organized, and both datasets were applied to the two cell models for the parameter identification study.
  • The ENN was used to predict and denoise the datasets, and the effects of the number and quality of cell data on the final parameter identification results were compared and analyzed. The case results indicate that more reliable and stable identification results can be extracted after data prediction and denoising. In particular, after data prediction, WOA applied to the ECM achieved a 64.5413% accuracy improvement on the Poland data, and after data denoising, dFDB-MRFO achieved a 94.7612% reduction in the RMSE of the SECM on the Poland data.
  • The weighted combination of fitness and distance was selected as the criterion for choosing the optimal individual, improving the optimization strategy of MRFO. The results obtained by dFDB-MRFO were compared with those of seven other heuristic algorithms; a comprehensive comparative analysis proved that dFDB-MRFO offers faster convergence, higher identification accuracy, and better robustness. In particular, the parameters identified by dFDB-MRFO achieve the highest fitting accuracies of 99.93% for the V-I data of the ECM and 99.95% for the V-I data of the SECM.

7. Future Work Outlook

Future work can focus on the following two aspects.
The work on the SOFC should be deepened by using the established model to study cell state-of-charge and state-of-health estimation.
Owing to their random-search characteristic, heuristic algorithms optimize slowly and require long runtimes when handling large amounts of data. Therefore, it is necessary to develop more parameter identification models based on neural networks and machine learning in the future.

Author Contributions

Methodology, L.S.; software, F.Z.; validation, D.G.; writing—original draft preparation, H.L.; writing—review and editing, B.Y.; funding acquisition, B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (62263014) and Yunnan Provincial Basic Research Project (202401AT070344, 202301AT070443).

Data Availability Statement

The fuel cell data in this article were obtained from references [39,40,41], with the DOI numbers in order as follows: 10.7666/d.D01311535, 10.1002/er.4424, 10.1016/j.ijhydene.2016.12.087.

Conflicts of Interest

Authors Hongbiao Li, Dengke Gao, Linlong Shi and Fei Zheng were employed by the company Shanghai KeLiang Information Technology Co. Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The Shanghai KeLiang Information Technology Co. Ltd. had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Nomenclature

Abbreviations
ABC  artificial bee colony
AEO  artificial ecosystem-based optimization
ALO  ant lion optimizer
BSA  bird swarm algorithm
dFDB-MRFO  dynamic fitness distance balance-manta ray foraging optimization
ECM  electrochemical model
ENN  Elman neural network
GWO  gray wolf optimization
IWOA  improved whale optimization algorithm
MAE  mean absolute error
MAPE  mean absolute percentage error
MGWO  modified version of gray wolf optimization
MRFO  manta ray foraging optimization
MSA  moth swarm algorithm
RMSE  root mean square error
SECM  simple electrochemical model
SOFC  solid oxide fuel cell
WOA  whale optimization algorithm
Model parameters
V  the output voltage of the cell (V)
N_cell  the number of single cells
V_con  the concentration polarization voltage loss (V)
V_act  the activation polarization voltage loss (V)
V_ohm  the ohmic voltage loss (V)
E_o  the open-circuit voltage of the cell (V)
A  the slope of the Tafel curve (V)
I_{0,a}  the anode exchange current density (mA/cm2)
I  the load current density (mA/cm2)
B  the constant in the cell model (V)
I_L  the limiting current density (mA/cm2)
R_ohm  the equivalent resistance inside the cell (kΩ·cm2)
I_{0,c}  the cathode exchange current density (mA/cm2)
I_0  the cell exchange current density (mA/cm2)

References

  1. Sivalingam, P.; Gurusamy, M. Momentum search algorithm for analysis of fuel cell vehicle-to-grid system with large-scale buildings. Prot. Control. Mod. Power Syst. 2024, 9, 147–160. [Google Scholar] [CrossRef]
  2. Hu, Q.; Bu, S.; Su, W.; Terzija, V. A privacy-preserving energy management system based on homomorphic cryptosystem for iot-enabled active distribution network. J. Mod. Power Syst. Clean Energy 2023, 12, 167–178. [Google Scholar] [CrossRef]
  3. Sharma, P.; Saini, K.K.; Mathur, H.D.; Mishra, P. Improved energy management strategy for prosumer buildings with renewable energy sources and battery energy storage systems. J. Mod. Power Syst. Clean Energy 2024, 12, 381–392. [Google Scholar] [CrossRef]
  4. Tan, S.; Liu, J.; Du, X.; Su, J.; Fan, L. Multi-port network modeling and stability analysis of VSC-MTDC systems. J. Mod. Power Syst. Clean Energy 2024, 12, 1666–1667. [Google Scholar] [CrossRef]
  5. Kiruthiga, B.; Karthick, R.; Manju, I. Optimizing harmonic mitigation for smooth integration of renewable energy: A novel approach using atomic orbital search and feedback artificial tree control. Prot. Control Mod. Power Syst. 2024, 9, 160–176. [Google Scholar] [CrossRef]
  6. Xu, H.; Feng, B.; Huang, G.; Sun, M.; Xiong, H.; Guo, C. Convex Hull Based Economic Operating Region for Power Grids Considering Uncertainties of Renewable Energy Sources. J. Mod. Power Syst. Clean Energy 2024, 12, 1419–1430. [Google Scholar] [CrossRef]
  7. Zhao, P.Y.; Li, Z.S.; Gao, H.; Yang, C. Review on collaborative scheduling optimization of electricity-gas-heat integrated energy system. Shandong Electr. Power 2024, 51, 1–11. [Google Scholar]
  8. Sun, C.; Negro, E.; Vezzù, K.; Pagot, G.; Cavinato, G.; Nale, A.; Bang, Y.H.; Di Noto, V. Hybrid inorganic-organic proton-conducting membranes based on SPEEK doped with WO3 nanoparticles for application in vanadium redox flow batteries. Electrochim. Acta 2019, 309, 311–325. [Google Scholar] [CrossRef]
  9. Yang, B.; Wang, J.; Zhang, M.; Shu, H.; Yu, T.; Zhang, X.; Yao, W.; Sun, L. A state-of-the-art survey of solid oxide fuel cell parameter identification: Modelling, methodology, and perspectives. Energy Convers. Manag. 2020, 213, 112856. [Google Scholar] [CrossRef]
  10. Alaedini, A.H.; Tourani, H.K.; Saidi, M. A review of waste-to-hydrogen conversion technologies for solid oxide fuel cell (SOFC) applications: Aspect of gasification process and catalyst development. J. Environ. Manag. 2023, 329, 117077. [Google Scholar] [CrossRef]
  11. Codony, A.C.; Olmos, R.G.; Martin, M.J. Regeneration of siloxane-exhausted activated carbon by advanced oxidation processes. J. Hazard. Mater. 2015, 285, 501–508. [Google Scholar] [CrossRef] [PubMed]
  12. Papurello, D.; Tomasi, L.; Silvestri, S. Proton transfer reaction mass spectrometry for the gas cleaning using commercial and waste-derived materials: Focus on the siloxane removal for SOFC applications. Int. J. Mass Spectrom. 2018, 430, 69–79. [Google Scholar] [CrossRef]
  13. Din, Z.U.; Zainal, Z.A. Biomass integrated gasification-SOFC systems: Technology overview. Renew. Sustain. Energy Rev. 2016, 53, 1356–1376. [Google Scholar] [CrossRef]
  14. Wang, J.; Yang, B.; Chen, Y.; Zeng, K.; Zhang, H.; Shu, H.; Chen, Y. Novel phasianidae inspired peafowl (pavo muticus/cristatus) optimization algorithm: Design, evaluation, and SOFC models parameter estimation. Sustain. Energy Technol. Assess. 2022, 50, 101825. [Google Scholar] [CrossRef]
  15. Yang, B.; Chen, Y.; Guo, Z.; Wang, J.; Zeng, C.; Li, D.; Shu, H.; Shan, J.; Fu, T.; Zhang, X. Levenberg-Marquardt backpropagation algorithm for parameter identification of solid oxide fuel cells. Int. J. Energy Res. 2021, 45, 17903–17923. [Google Scholar] [CrossRef]
  16. Ismael, I.; El-Fergany, A.A.; Gouda, E.A.; Kotb, M.F. Cooperation search algorithm for optimal parameters identification of SOFCs feeding electric vehicle at steady and dynamic modes. Int. J. Hydrogen Energy 2023, 50, 1395–1407. [Google Scholar] [CrossRef]
  17. Wang, J.; Xu, Y.-P.; She, C.; Xu, P.; Bagal, H.A. Optimal parameter identification of SOFC model using modified gray wolf optimization algorithm. Energy 2022, 240, 122800. [Google Scholar] [CrossRef]
  18. Guo, H.; Gu, W.; Khayatnezhad, M.; Ghadimi, N. Parameter extraction of the SOFC mathematical model based on fractional order version of dragonfly algorithm. Int. J. Hydrogen Energy 2022, 47, 24059–24068. [Google Scholar] [CrossRef]
  19. Yousri, D.; Hasanien, H.M.; Fathy, A. Parameters identification of solid oxide fuel cell for static and dynamic simulation using comprehensive learning dynamic multi-swarm marine predators algorithm. Energy Convers. Manag. 2021, 228, 113692. [Google Scholar] [CrossRef]
  20. Fathy, A.; Hegazy, R. Political optimizer based approach for estimating SOFC optimal parameters for static and dynamic models. Energy 2022, 238, 122031. [Google Scholar] [CrossRef]
  21. Chen, Y.J.; Yang, B.; Guo, Z.X.; Shu, H.C.; Cao, P.L. Parameter identification of solid oxide fuel cell based on AEO-MRFO. Power Syst. Technol. 2022, 46, 1382–1391. [Google Scholar]
  22. Wei, Y.; Stanford, R.J. Parameter identification of solid oxide fuel cell by chaotic binary shark smell optimization method. Energy 2019, 188, 115770. [Google Scholar] [CrossRef]
  23. Masadeh, M.A.; Kuruvinashetti, K.; Shahparnia, M.; Pillay, P.; Packirisamy, M. Electrochemical modeling and equivalent circuit representation of a microphotosynthetic power cell. IEEE Trans. Ind. Electron. 2016, 64, 1561–1571. [Google Scholar] [CrossRef]
  24. Bavarian, M.; Soroush, M.; Kevrekidis, I.G.; Benziger, J.B. Mathematical modeling, steady-state and dynamic behavior, and control of fuel cells: A review. Ind. Eng. Chem. Res. 2010, 49, 7922–7950. [Google Scholar] [CrossRef]
  25. Zhu, L.; Zhang, L.; Virkar, A.V. A parametric model for solid oxide fuel cells based on measurements made on cell materials and components. J. Power Sources 2015, 291, 138–155. [Google Scholar] [CrossRef]
  26. Ding, A.; Sun, H.; Zhang, S.; Dai, X.; Pan, Y.; Zhang, X.; Rahman, E.; Guo, J. Thermodynamic analysis and parameter optimization of a hybrid system based on SOFC and graphene-collector thermionic energy converter. Energy Convers. Manag. 2023, 291, 117327. [Google Scholar] [CrossRef]
  27. Li, K.; Hu, L.; Song, T.T. Health state estimation of lithium-ion batteries based on CNN-Bi-LSTM. Shandong Electr. Power 2023, 50, 66–72. [Google Scholar]
  28. Fetanat, M.; Stevens, M.; Jain, P.; Hayward, C.; Meijering, E.; Lovell, N.H. Fully Elman neural network: A novel deep recurrent neural network optimized by an improved Harris hawks algorithm for classification of pulmonary arterial wedge pressure. IEEE Trans. Biomed. Eng. 2022, 69, 1733–1743. [Google Scholar] [CrossRef]
  29. Sriram, L.M.K.; Gilanifar, M.; Zhou, Y.; Ozguven, E.E.; Arghandeh, R. Causal Markov Elman network for load forecasting in multinetwork systems. IEEE Trans. Ind. Electron. 2019, 66, 1434–1442. [Google Scholar] [CrossRef]
  30. Jia, H.L.; Taheri, B. Model identification of solid oxide fuel cell using hybrid Elman neural network/quantum pathfinder algorithm. Energy Rep. 2021, 7, 3328–3337. [Google Scholar] [CrossRef]
  31. Guo, C.; Lu, J.; Tian, Z.; Guo, W.; Darvishan, A. Optimization of critical parameters of PEM fuel cell using TLBO-DE based on Elman neural network. Energy Convers. Manag. 2019, 183, 149–158. [Google Scholar] [CrossRef]
  32. Ding, L.; Bai, Y.; Liu, M.-D.; Fan, M.-H.; Yang, J. Predicting short wind speed with a hybrid model based on a piecewise error correction method and Elman neural network. Energy 2022, 244, 122630. [Google Scholar] [CrossRef]
  33. Tang, A.-D.; Zhou, H.; Han, T.; Xie, L. A modified manta ray foraging optimization for global optimization problems. IEEE Access 2021, 9, 128702–128721. [Google Scholar] [CrossRef]
  34. Jian, X.Z.; Wang, P.; Wang, R.Z. Parameter identification model of photovoltaic module based on improved manta ray optimization algorithm. Acta Metrol. Sin. 2023, 44, 109–119. [Google Scholar]
  35. Kahraman, H.T.; Bakir, H.; Duman, S.; Katı, M.; Aras, S.; Guvenc, U. Dynamic FDB selection method and its application: Modeling and optimizing of directional overcurrent relays coordination. Appl. Intell. 2022, 52, 4873–4908. [Google Scholar] [CrossRef]
  36. Tao, Z.; Zhang, C.; Xiong, J.; Hu, H.; Ji, J.; Peng, T.; Nazir, M.S. Evolutionary gate recurrent unit coupling convolutional neural network and improved manta ray foraging optimization algorithm for performance degradation prediction of PEMFC. Appl. Energy 2023, 336, 120821. [Google Scholar] [CrossRef]
  37. Guo, Y.; Yang, D.; Zhang, Y.; Wang, L.; Wang, K. Online estimation of SOH for lithium-ion cell based on SSA-Elman neural network. Prot. Control Mod. Power Syst. 2022, 7, 1–17. [Google Scholar] [CrossRef]
  38. Mohamed, A.E.H.; Mahmoud, M.E.; Attia, A.E.F. Three-diode model for characterization of industrial solar generating units using Manta-rays foraging optimizer: Analysis and validations. Energy Convers. Manag. 2020, 219, 113048. [Google Scholar]
  39. Jing, S.W. Research on Testing and Fault Diagnosis of Solid Oxide Fuel Cells. Master’s Thesis, Huazhong University of Science and Technology, Wuhan, China, 2017. [Google Scholar] [CrossRef]
  40. Dhruv, K.; Kanwar, P.S.R.; Vineet, K. Parameter extraction of fuel cells using hybrid interior search algorithm. Int. J. Energy Res. 2019, 43, 2855–2880. [Google Scholar]
  41. Molenda, J.; Kupecki, J.; Baron, R.; Blesznowski, M.; Brus, G.; Brylewski, T.; Bucko, M.; Chmielowiec, J.; Cwieka, K.; Gazda, M.; et al. Status report on high temperature fuel cells in Poland-recent advances and achievements. Int. J. Hydrogen Energy 2017, 42, 4366–4403. [Google Scholar] [CrossRef]
  42. Oussama, H.; Attia, A.E.F. Efficient PEM fuel cells parameters identification using hybrid artificial bee colony differential evolution optimizer. Energy 2022, 250, 123830. [Google Scholar]
  43. Wu, Z.Q.; Yu, D.Q.; Kang, X.H. Parameter identification of photovoltaic cell model based on improved ant lion optimizer. Energy Convers. Manag. 2017, 151, 107–115. [Google Scholar] [CrossRef]
  44. Huang, C.Z.; Sheng, X.X. Data-driven model identification of boiler-turbine coupled process in 1000 MW ultra-supercritical unit by improved bird swarm algorithm. Energy 2020, 205, 118009. [Google Scholar] [CrossRef]
  45. Bai, Q.C.; Li, H. The application of hybrid cuckoo search-grey wolf optimization algorithm in optimal parameters identification of solid oxide fuel cell. Int. J. Hydrogen Energy 2022, 47, 6200–6216. [Google Scholar] [CrossRef]
  46. Zhang, H.; Chen, H.X.; Liu, C. Optimal reconfiguration of active distribution network based on improved differential grey wolf algorithm. Shandong Electr. Power 2023, 50, 7–13. [Google Scholar]
  47. Liu, W.; Guo, Z.Q.; Wang, D. Improved whale optimization algorithm and its weights and thresholds optimization in shallow neural architecture search. Control Decis. 2023, 38, 1144–1152. [Google Scholar]
  48. Thuan, T.N.; Thanh, L.D.; Thanh, Q.N. Network reconfiguration and distributed generation placement for multi-goal function based on improved moth swarm algorithm. Math. Probl. Eng. 2022, 2022, 5015771. [Google Scholar]
  49. Dey, S.; Roy, P.K.; Sarkar, K. Adaptive IIR model identification using chaotic opposition-based whale optimization algorithm. J. Electr. Syst. Inf. Technol. 2023, 10, 1–23. [Google Scholar] [CrossRef]
  50. Mei, J.; Meng, X.; Tang, X.; Li, H.; Hasanien, H.; Alharbi, M.; Dong, Z.; Shen, J.; Sun, C.; Fan, F.; et al. An accurate parameter estimation method of the voltage model for proton exchange membrane fuel cells. Energies 2024, 17, 2917. [Google Scholar] [CrossRef]
  51. Yang, B.; Guo, Z.; Yang, Y.; Chen, Y.; Zhang, R.; Su, K.; Shu, H.; Yu, T.; Zhang, X. Extreme learning machine based meta-heuristic algorithms for parameter extraction of solid oxide fuel cells. Appl. Energy 2021, 303, 117630. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of SOFC reaction principle.
Figure 2. The structure of ENN.
Figure 3. Schematic diagram of foraging strategy for manta ray.
Figure 4. The execution process of SOFC parameter identification.
Figure 5. Results of two datasets processed by ENN: (a) predicted data of HUST; (b) denoised data of HUST; (c) predicted data of Poland; (d) denoised data of Poland.
Figure 6. Convergence curves of RMSE obtained by various algorithms for ECM under predicted data and original data. (a) Original data of HUST; (b) predicted data of HUST; (c) original data of Poland; (d) predicted data of Poland.
Figure 7. Boxplot of RMSE obtained by various algorithms for ECM under two datasets after prediction processing: (a) data of HUST and (b) data of Poland.
Figure 8. Fitting curve of dFDB-MRFO for ECM under two datasets after prediction processing: (a) V-I curve under predicted data of HUST and (b) V-I curve under predicted data of Poland.
Figure 9. Comparison of evaluation metrics of eight algorithms for ECM under predicted data: (a) RMSE; (b) MAE; (c) MAPE; (d) time required for a single run.
Figure 10. Convergence curves of RMSE obtained by various algorithms for ECM under denoised data and noised data. (a) Noised data of HUST; (b) denoised data of HUST; (c) noised data of Poland; (d) denoised data of Poland.
Figure 11. Boxplot of RMSE obtained by various algorithms for ECM under two datasets after denoising processing: (a) data of HUST and (b) data of Poland.
Figure 12. Fitting curve of dFDB-MRFO for ECM under two datasets after denoising processing: (a) V-I curve under denoised data of HUST and (b) V-I curve under denoised data of Poland.
Figure 13. Comparison of evaluation metrics of eight algorithms for ECM under denoised data: (a) RMSE; (b) MAE; (c) MAPE; (d) time required for a single run.
Figure 14. Convergence curves of RMSE obtained by various algorithms for SECM under predicted data and original data. (a) Original data of HUST; (b) predicted data of HUST; (c) original data of Poland; (d) predicted data of Poland.
Figure 15. Boxplot of RMSE obtained by various algorithms for SECM under two datasets after prediction processing: (a) data of HUST and (b) data of Poland.
Figure 16. Fitting curve of dFDB-MRFO for SECM under two datasets after prediction processing: (a) V-I curve under predicted data of HUST and (b) V-I curve under predicted data of Poland.
Figure 17. Comparison of evaluation metrics of eight algorithms for SECM under predicted data: (a) RMSE; (b) MAE; (c) MAPE; (d) time required for a single run.
Figure 18. Convergence curves of RMSE obtained by various algorithms for SECM under denoised data and noised data. (a) Noised data of HUST; (b) denoised data of HUST; (c) noised data of Poland; (d) denoised data of Poland.
Figure 19. Boxplot of RMSE obtained by various algorithms for SECM under two datasets after denoising processing: (a) data of HUST and (b) data of Poland.
Figure 20. Fitting curve of dFDB-MRFO for SECM under two datasets after denoising processing: (a) V-I curve under denoised data of HUST and (b) V-I curve under denoised data of Poland.
Figure 21. Comparison of evaluation metrics of eight algorithms for SECM under denoised data: (a) RMSE; (b) MAE; (c) MAPE; (d) time required for a single run.
Table 1. Models of SOFC.
Model | Mathematical Equations | Identification Parameters
ECM [23] | $V = N_{cell}(E_o - V_{act} - V_{ohm} - V_{con}) = N_{cell}\big[E_o - A\sinh^{-1}\big(\tfrac{I}{2I_{0,a}}\big) - A\sinh^{-1}\big(\tfrac{I}{2I_{0,c}}\big) + B\ln\big(1-\tfrac{I}{I_L}\big) - I\,R_{ohm}\big]$ | $E_o$, $A$, $R_{ohm}$, $B$, $I_{0,a}$, $I_{0,c}$, $I_L$
SECM [24] | $V = N_{cell}(E_o - V_{act} - V_{ohm} - V_{con}) = N_{cell}\big[E_o - A\sinh^{-1}\big(\tfrac{I}{2I_0}\big) + B\ln\big(1-\tfrac{I}{I_L}\big) - I\,R_{ohm}\big]$ | $E_o$, $A$, $R_{ohm}$, $B$, $I_0$, $I_L$
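As an illustration of the SECM equation in Table 1, the stack voltage can be evaluated as follows (a sketch, assuming the unit conventions of Table 2: I, I_0, and I_L in mA/cm2 and R_ohm in kΩ·cm2, so that I·R_ohm comes out directly in volts; the function and argument names are ours):

```python
import numpy as np

def secm_voltage(i, e0, a, r_ohm, b, i0, i_l, n_cell=1):
    """Stack voltage of the SECM (Table 1): activation, concentration, and
    ohmic losses subtracted from the open-circuit voltage E_o."""
    i = np.asarray(i, dtype=float)
    v_act = a * np.arcsinh(i / (2.0 * i0))   # activation polarization loss
    v_con = -b * np.log(1.0 - i / i_l)       # concentration polarization loss
    v_ohm = i * r_ohm                        # ohmic loss
    return n_cell * (e0 - v_act - v_con - v_ohm)
```

At zero load current all three losses vanish and the model returns N_cell·E_o, which is a quick sanity check on any identified parameter set.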
Table 2. The searching range of each unknown parameter for SOFC models.
ECM | Eo (V) | A (V) | Rohm (kΩ·cm2) | B (V) | I0,a (mA/cm2) | I0,c (mA/cm2) | IL (mA/cm2)
Lower | 0 | 0 | 0 | 0 | 0 | 0 | 0
Upper | 1.2 | 1 | 1 | 1 | 300 | 300 | 2000
SECM | Eo (V) | A (V) | Rohm (kΩ·cm2) | B (V) | I0 (mA/cm2) | IL (mA/cm2)
Lower | 0 | 0 | 0 | 0 | 0 | 0
Upper | 1.2 | 1 | 1 | 1 | 300 | 2000
Table 3. Model and algorithm parameter setting.
ENN: number of neurons = 6; training function = Trainscg; training times = 6000.
ABC: search range expansion factor = 1; maximum iterations = 1000; population size = 50.
BSA: perception coefficient = 1.5; social acceleration coefficient = 1.5; flight interval = 5; maximum iterations = 1000; population size = 50.
ALO, dFDB-MRFO, GWO, IWOA, MSA, WOA: maximum iterations = 1000; population size = 50.
HUST dataset: number of series cells = 26; pairs of V-I data = 18; pairs of predicted V-I data = 180.
Poland dataset: number of series cells = 1; pairs of V-I data = 20; pairs of predicted V-I data = 190.
Table 4. Identified parameter and RMSE results obtained by various algorithms for ECM based on predicted data and original data.
Dataset | Algorithm | Data | Eo | A | Rohm | B | I0,a | I0,c | IL | RMSE (O = original data, P = predicted data)
HUST | ABC | O | 1.0084 | 0.0476 | 1.8374 × 10−5 | 0.3334 | 217.1714 | 20.7065 | 1980.7297 | 6.0511 × 10−3
HUST | ABC | P | 0.9886 | 0.0506 | 9.3017 × 10−5 | 0.1906 | 152.9825 | 50.9524 | 1981.6448 | 4.8217 × 10−3
HUST | ALO | O | 1.0121 | 0.0310 | 3.2092 × 10−5 | 0.3981 | 46.1898 | 20.7702 | 2000.0000 | 6.8041 × 10−3
HUST | ALO | P | 0.9841 | 0.1057 | 6.4114 × 10−5 | 0.0006 | 247.6706 | 122.8542 | 1754.8986 | 3.1884 × 10−3
HUST | BSA | O | 1.0066 | 0.0676 | 7.0196 × 10−5 | 0.1317 | 227.2606 | 42.4770 | 1994.5771 | 7.8553 × 10−3
HUST | BSA | P | 0.9808 | 0.1342 | 0.0000 | 0.0000 | 261.1027 | 256.4412 | 1549.1421 | 7.9009 × 10−3
HUST | dFDB-MRFO | O | 1.0299 | 0.0301 | 2.6629 × 10−4 | 0.0280 | 130.1199 | 5.5840 | 1356.1667 | 1.7875 × 10−3
HUST | dFDB-MRFO | P | 1.0280 | 0.0341 | 2.9020 × 10−4 | 0.0036 | 249.3177 | 7.4399 | 1172.6976 | 1.4748 × 10−3
HUST | GWO | O | 1.0254 | 0.0302 | 0.0000 | 0.4172 | 39.2815 | 9.9196 | 1999.9289 | 3.2237 × 10−3
HUST | GWO | P | 1.0192 | 0.0429 | 0.0000 | 0.3557 | 146.2351 | 14.9770 | 1967.2670 | 2.3444 × 10−3
HUST | IWOA | O | 0.9873 | 0.1355 | 0.0000 | 0.0000 | 259.2276 | 246.1912 | 864.0600 | 1.2561 × 10−2
HUST | IWOA | P | 0.9804 | 0.0490 | 0.0000 | 0.3948 | 284.5482 | 47.1109 | 2000.0000 | 7.9059 × 10−3
HUST | MSA | O | 1.0043 | 0.0329 | 0.0000 | 0.3851 | 41.8693 | 33.9236 | 1937.6914 | 6.2021 × 10−3
HUST | MSA | P | 0.9854 | 0.0949 | 0.0000 | 0.1068 | 153.7583 | 133.1701 | 1997.2129 | 7.9444 × 10−3
HUST | WOA | O | 0.9642 | 0.0195 | 3.9581 × 10−4 | 0.0304 | 250.5678 | 159.2426 | 1546.5298 | 9.9934 × 10−3
HUST | WOA | P | 0.9641 | 0.0365 | 0.0000 | 0.4497 | 162.8437 | 159.9484 | 2000.0000 | 4.1312 × 10−3
Poland | ABC | O | 1.0109 | 0.0153 | 2.2785 × 10−4 | 0.1363 | 253.5995 | 156.2469 | 715.2902 | 6.0578 × 10−3
Poland | ABC | P | 1.0115 | 0.0190 | 1.8986 × 10−4 | 0.1577 | 259.2224 | 185.6259 | 710.6529 | 4.6889 × 10−3
Poland | ALO | O | 1.0095 | 0.0000 | 4.4685 × 10−5 | 0.3629 | 300.0000 | 24.4399 | 925.8585 | 4.8371 × 10−3
Poland | ALO | P | 1.0061 | 0.0023 | 1.9289 × 10−4 | 0.1852 | 104.3663 | 101.2819 | 787.5287 | 3.7879 × 10−3
Poland | BSA | O | 1.0208 | 0.0012 | 0.0000 | 0.3169 | 299.9991 | 0.0000 | 845.6171 | 6.8994 × 10−3
Poland | BSA | P | 1.0042 | 0.0000 | 2.9484 × 10−5 | 0.3710 | 202.0059 | 116.3580 | 933.7275 | 6.8444 × 10−3
Poland | dFDB-MRFO | O | 1.0219 | 0.0209 | 4.5817 × 10−5 | 0.1327 | 48.7848 | 44.4692 | 665.4088 | 1.5902 × 10−3
Poland | dFDB-MRFO | P | 1.0233 | 0.0259 | 2.5100 × 10−5 | 0.1256 | 76.6912 | 41.0389 | 656.2669 | 9.5081 × 10−4
Poland | GWO | O | 1.0048 | 0.0033 | 0.0000 | 0.3469 | 266.6240 | 169.0980 | 887.0722 | 8.2074 × 10−3
Poland | GWO | P | 1.0069 | 0.0074 | 0.0000 | 0.3056 | 191.8814 | 49.7938 | 856.1800 | 7.1003 × 10−3
Poland | IWOA | O | 1.0086 | 0.0925 | 0.0000 | 0.0987 | 289.9562 | 259.9562 | 668.2762 | 8.5109 × 10−3
Poland | IWOA | P | 1.0196 | 0.0218 | 0.0000 | 0.1678 | 131.7244 | 29.4241 | 703.2403 | 7.8439 × 10−3
Poland | MSA | O | 1.0214 | 0.0898 | 0.0000 | 0.0643 | 299.9952 | 132.7677 | 617.7042 | 9.7095 × 10−3
Poland | MSA | P | 1.0257 | 0.1653 | 0.0000 | 0.0294 | 300.0000 | 106.7103 | 593.0896 | 7.0223 × 10−3
Poland | WOA | O | 1.0268 | 0.0201 | 0.0000 | 0.1655 | 151.1938 | 14.6126 | 691.9322 | 2.1817 × 10−2
Poland | WOA | P | 1.0089 | 0.0000 | 0.0000 | 0.4771 | 233.5462 | 47.9121 | 1079.1425 | 7.7361 × 10−3
Table 5. Evaluation metrics of eight algorithms for ECM based on predicted data.
Dataset | Algorithm | Data | MAE | MAPE | Tcost (s)
HUST | ABC | P | 2.2783 × 10−3 | 0.2418 | 37.3324
HUST | ALO | P | 2.4007 × 10−3 | 0.2889 | 23.8300
HUST | BSA | P | 1.9064 × 10−3 | 0.3349 | 20.6537
HUST | dFDB-MRFO | P | 1.2302 × 10−3 | 0.1460 | 34.1251
HUST | GWO | P | 1.8876 × 10−3 | 2.0263 | 18.8555
HUST | IWOA | P | 2.2625 × 10−3 | 0.3925 | 53.6705
HUST | MSA | P | 2.2570 × 10−3 | 0.2330 | 19.4584
HUST | WOA | P | 4.2883 × 10−3 | 0.5380 | 17.8711
Poland | ABC | P | 2.2092 × 10−3 | 0.3374 | 40.3677
Poland | ALO | P | 3.4368 × 10−3 | 0.4518 | 26.5260
Poland | BSA | P | 5.7945 × 10−3 | 0.2202 | 19.8924
Poland | dFDB-MRFO | P | 6.8940 × 10−4 | 0.0894 | 37.3125
Poland | GWO | P | 1.2923 × 10−3 | 0.1528 | 19.9486
Poland | IWOA | P | 4.0978 × 10−3 | 0.3130 | 55.8479
Poland | MSA | P | 1.1075 × 10−3 | 0.2375 | 20.3565
Poland | WOA | P | 3.2596 × 10−3 | 0.3648 | 19.3635
Table 6. Identified parameters and RMSE results obtained by various algorithms for ECM based on noised data and denoised data.
Dataset | Algorithm | Data | Eo | A | Rohm | B | I0,a | I0,c | IL | RMSE (N = noised data, DN = denoised data)
HUST | ABC | N | 1.0408 | 0.0480 | 1.7318 × 10−4 | 0.0473 | 243.0569 | 8.8039 | 1932.7709 | 2.5196 × 10−2
HUST | ABC | DN | 1.0173 | 0.0372 | 1.0597 × 10−4 | 0.2728 | 188.9808 | 49.8484 | 2000.0000 | 6.3903 × 10−3
HUST | ALO | N | 1.0531 | 0.0480 | 8.2077 × 10−5 | 0.0311 | 242.9147 | 15.0355 | 2000.0000 | 2.6823 × 10−2
HUST | ALO | DN | 1.0221 | 0.0375 | 7.4828 × 10−5 | 0.2839 | 126.1736 | 12.3680 | 2000.0000 | 6.5182 × 10−3
HUST | BSA | N | 0.9407 | 0.0000 | 0.0000 | 0.7348 | 221.2304 | 220.4400 | 2000.0000 | 2.6894 × 10−2
HUST | BSA | DN | 1.0062 | 0.0630 | 0.0000 | 0.2332 | 261.6800 | 35.2368 | 1744.5576 | 1.0173 × 10−2
HUST | dFDB-MRFO | N | 1.0556 | 0.0376 | 2.2173 × 10−4 | 0.0032 | 104.5354 | 78.4485 | 1712.0942 | 2.4767 × 10−2
HUST | dFDB-MRFO | DN | 1.0290 | 0.0329 | 2.3280 × 10−4 | 0.0918 | 176.7367 | 171.1254 | 1976.2655 | 1.4120 × 10−3
HUST | GWO | N | 1.0577 | 0.0410 | 0.0000 | 0.2977 | 95.2575 | 65.3310 | 1999.6485 | 2.6910 × 10−2
HUST | GWO | DN | 1.0145 | 0.0446 | 0.0000 | 0.1703 | 124.2762 | 68.1133 | 1318.8597 | 7.7041 × 10−3
HUST | IWOA | N | 1.0378 | 0.0545 | 1.1626 × 10−5 | 0.0999 | 57.9673 | 20.3646 | 1375.4261 | 4.6882 × 10−2
HUST | IWOA | DN | 0.9793 | 0.0659 | 0.0000 | 0.3245 | 154.7226 | 126.7479 | 2000.0000 | 2.9647 × 10−2
HUST | MSA | N | 1.0105 | 0.0919 | 5.3483 × 10−5 | 0.0033 | 248.0689 | 95.9777 | 923.7004 | 2.6578 × 10−2
HUST | MSA | DN | 1.0107 | 0.0518 | 0.0000 | 0.2929 | 180.8079 | 25.2453 | 1880.1309 | 4.9883 × 10−3
HUST | WOA | N | 1.0204 | 0.0826 | 0.0000 | 0.0758 | 281.3075 | 33.4079 | 1399.6655 | 4.6882 × 10−2
HUST | WOA | DN | 0.9469 | 0.0000 | 0.0000 | 0.7489 | 171.1936 | 142.3863 | 2000.0000 | 2.9645 × 10−2
Poland | ABC | N | 1.0293 | 0.0829 | 1.5714 × 10−4 | 0.0463 | 260.5604 | 206.0742 | 623.6771 | 2.2675 × 10−2
Poland | ABC | DN | 1.0077 | 0.0104 | 3.1564 × 10−4 | 0.0825 | 296.8653 | 151.6166 | 651.2173 | 5.1109 × 10−3
Poland | ALO | N | 1.0194 | 0.0239 | 0.0000 | 0.1540 | 92.4391 | 52.9801 | 683.0828 | 2.3778 × 10−2
Poland | ALO | DN | 1.0248 | 0.1088 | 3.6119 × 10−5 | 0.0560 | 275.6218 | 191.8501 | 647.4461 | 1.0263 × 10−2
Poland | BSA | N | 1.0382 | 0.0120 | 3.8280 × 10−4 | 0.0254 | 299.9480 | 54.5671 | 594.9236 | 2.1734 × 10−2
Poland | BSA | DN | 1.0142 | 0.1055 | 4.6119 × 10−5 | 0.0665 | 289.5618 | 272.4108 | 623.9358 | 4.1625 × 10−3
Poland | dFDB-MRFO | N | 1.0228 | 0.0311 | 3.5181 × 10−4 | 0.0179 | 261.3744 | 88.7520 | 593.3863 | 2.1010 × 10−2
Poland | dFDB-MRFO | DN | 1.0215 | 0.0196 | 8.9138 × 10−5 | 0.1187 | 49.1925 | 46.8012 | 657.3215 | 1.2957 × 10−3
Poland | GWO | N | 1.0124 | 0.0045 | 0.0000 | 0.5224 | 280.8711 | 261.7358 | 1196.1545 | 2.4765 × 10−2
Poland | GWO | DN | 1.0171 | 0.0259 | 1.1441 × 10−5 | 0.1481 | 93.9606 | 75.6367 | 683.7487 | 2.4633 × 10−3
Poland | IWOA | N | 1.0198 | 0.0679 | 0.0000 | 0.2447 | 249.5798 | 225.9068 | 975.1523 | 2.4780 × 10−2
Poland | IWOA | DN | 1.0133 | 0.0041 | 0.0000 | 0.5743 | 256.7416 | 229.6859 | 1251.0259 | 8.7935 × 10−3
Poland | MSA | N | 1.0210 | 0.0150 | 0.0000 | 0.2187 | 49.1191 | 32.6179 | 779.7619 | 2.4618 × 10−2
Poland | MSA | DN | 1.0180 | 0.0784 | 0.0000 | 0.0726 | 148.1870 | 117.0644 | 623.1194 | 3.2388 × 10−3
Poland | WOA | N | 1.0190 | 0.0065 | 0.0000 | 0.3209 | 164.8314 | 121.9037 | 900.9027 | 2.4201 × 10−2
Poland | WOA | DN | 1.0277 | 0.1151 | 0.0000 | 0.0382 | 300.0000 | 149.1398 | 601.0619 | 1.1927 × 10−2
Table 7. Identified parameters and RMSE results obtained by various algorithms for SECM based on predicted data and original data. Data: O = original data; P = predicted data.

| Dataset | Algorithm | Data | Eo | A | Rohm | B | I0 | IL | RMSE |
|---|---|---|---|---|---|---|---|---|---|
| (HUST) | ABC | O | 0.9959 | 0.0644 | 1.8789 × 10⁻⁴ | 0.0853 | 37.9933 | 1572.5825 | 8.6550 × 10⁻³ |
| | | P | 0.9875 | 0.1246 | 3.1201 × 10⁻⁵ | 0.1663 | 95.7661 | 1688.1018 | 5.7883 × 10⁻³ |
| | ALO | O | 0.9824 | 0.1749 | 2.2731 × 10⁻⁵ | 0.0701 | 130.4238 | 1549.5708 | 1.2283 × 10⁻² |
| | | P | 1.0214 | 0.0295 | 2.9226 × 10⁻⁴ | 0.2421 | 64.9809 | 1985.9896 | 6.2325 × 10⁻³ |
| | BSA | O | 0.9940 | 0.2286 | 0.0000 | 0.0002 | 146.9997 | 864.0600 | 8.0427 × 10⁻³ |
| | | P | 0.9892 | 0.1053 | 0.0000 | 0.1857 | 74.4544 | 1446.7452 | 4.9445 × 10⁻³ |
| | dFDB-MRFO | O | 1.0299 | 0.0359 | 3.3483 × 10⁻⁴ | 0.0092 | 27.3804 | 1963.3014 | 2.8946 × 10⁻³ |
| | | P | 1.0165 | 0.0428 | 2.8037 × 10⁻⁴ | 0.0605 | 14.9765 | 1914.9696 | 2.1036 × 10⁻³ |
| | GWO | O | 0.9971 | 0.1864 | 0.0000 | 0.0118 | 112.5108 | 874.7816 | 1.0391 × 10⁻² |
| | | P | 0.9993 | 0.0953 | 0.0000 | 0.2028 | 56.6599 | 1483.3104 | 4.2223 × 10⁻³ |
| | IWOA | O | 0.9872 | 0.2084 | 0.0000 | 0.0094 | 144.8862 | 875.5106 | 1.1768 × 10⁻² |
| | | P | 0.9760 | 0.1839 | 0.0000 | 0.1063 | 155.5571 | 1562.4918 | 7.1351 × 10⁻³ |
| | MSA | O | 0.9907 | 0.2229 | 1.3483 × 10⁻⁵ | 0.0038 | 151.9466 | 864.6546 | 1.1419 × 10⁻² |
| | | P | 0.9820 | 0.1691 | 0.0000 | 0.0768 | 130.5388 | 1236.5841 | 6.2959 × 10⁻³ |
| | WOA | O | 0.9759 | 0.2689 | 0.0000 | 0.0945 | 243.3425 | 1999.6929 | 1.4692 × 10⁻² |
| | | P | 0.9851 | 0.1288 | 0.0000 | 0.1839 | 99.6173 | 1631.8595 | 5.6719 × 10⁻³ |
| (Poland) | ABC | O | 1.0087 | 0.0820 | 2.0200 × 10⁻⁴ | 0.0939 | 283.1375 | 672.0733 | 6.0210 × 10⁻³ |
| | | P | 1.0143 | 0.0549 | 2.1321 × 10⁻⁴ | 0.0858 | 181.8114 | 642.7615 | 4.4861 × 10⁻³ |
| | ALO | O | 1.0387 | 0.2627 | 1.1876 × 10⁻⁴ | 0.0088 | 243.3413 | 592.8920 | 6.2782 × 10⁻³ |
| | | P | 1.0343 | 0.3179 | 8.3912 × 10⁻⁵ | 0.0175 | 300.0000 | 589.8790 | 6.6270 × 10⁻³ |
| | BSA | O | 1.0389 | 0.4277 | 0.0000 | 0.0000 | 300.0000 | 611.0538 | 8.3311 × 10⁻³ |
| | | P | 1.0282 | 0.0231 | 0.0000 | 0.1968 | 16.0398 | 722.5594 | 1.5154 × 10⁻² |
| | dFDB-MRFO | O | 1.0219 | 0.0499 | 6.6134 × 10⁻⁶ | 0.1350 | 54.4361 | 665.8446 | 1.4992 × 10⁻³ |
| | | P | 1.0221 | 0.0432 | 9.6881 × 10⁻⁵ | 0.1067 | 51.9844 | 643.9199 | 1.0670 × 10⁻³ |
| | GWO | O | 1.0054 | 0.0142 | 0.0000 | 0.3221 | 162.6233 | 866.1572 | 3.6775 × 10⁻³ |
| | | P | 1.0047 | 0.0034 | 0.0000 | 0.3648 | 75.2659 | 920.9167 | 2.4511 × 10⁻³ |
| | IWOA | O | 1.0369 | 0.4125 | 0.0000 | 0.0032 | 299.5569 | 592.8920 | 1.7263 × 10⁻² |
| | | P | 1.0158 | 0.1303 | 0.0000 | 0.2504 | 299.4474 | 895.9101 | 7.8751 × 10⁻³ |
| | MSA | O | 1.0186 | 0.0128 | 0.0000 | 0.2486 | 10.7416 | 780.4319 | 4.9739 × 10⁻³ |
| | | P | 1.0098 | 0.0155 | 0.0000 | 0.2661 | 43.1013 | 806.2505 | 4.2821 × 10⁻³ |
| | WOA | O | 1.0138 | 0.0911 | 0.0000 | 0.1212 | 134.2589 | 662.8850 | 3.2853 × 10⁻³ |
| | | P | 1.0122 | 0.0182 | 3.3970 × 10⁻⁵ | 0.2303 | 41.8487 | 773.8104 | 1.8440 × 10⁻³ |
Table 8. Evaluation metrics of eight algorithms for SECM based on predicted data.

| Dataset | Algorithm | Data | MAE | MAPE | Tcost (s) |
|---|---|---|---|---|---|
| (HUST) | ABC | P | 2.1547 × 10⁻³ | 0.2634 | 42.3649 |
| | ALO | P | 3.1916 × 10⁻³ | 0.3798 | 24.9545 |
| | BSA | P | 2.0090 × 10⁻³ | 0.2902 | 19.8749 |
| | dFDB-MRFO | P | 1.4111 × 10⁻³ | 0.1825 | 38.7129 |
| | GWO | P | 2.0460 × 10⁻³ | 0.2288 | 19.1030 |
| | IWOA | P | 2.1170 × 10⁻³ | 0.3501 | 54.0887 |
| | MSA | P | 2.0387 × 10⁻³ | 0.2538 | 19.0055 |
| | WOA | P | 2.3336 × 10⁻³ | 0.3328 | 18.9393 |
| (Poland) | ABC | P | 3.5170 × 10⁻³ | 0.3277 | 38.7902 |
| | ALO | P | 4.4878 × 10⁻³ | 0.3709 | 23.6014 |
| | BSA | P | 6.1218 × 10⁻³ | 0.7347 | 18.1037 |
| | dFDB-MRFO | P | 6.6031 × 10⁻⁴ | 0.0823 | 34.0141 |
| | GWO | P | 1.9230 × 10⁻³ | 0.1560 | 18.1561 |
| | IWOA | P | 3.7055 × 10⁻³ | 0.4304 | 52.9035 |
| | MSA | P | 1.4135 × 10⁻³ | 0.1892 | 18.4856 |
| | WOA | P | 3.7348 × 10⁻³ | 0.1491 | 17.7537 |
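The MAE and MAPE columns above (and the RMSE columns in the identification tables) follow their standard definitions over the measured and model-fitted voltage samples. A minimal sketch in Python, with made-up voltage values purely for illustration:

```python
import math

def rmse(measured, fitted):
    # Root-mean-square error between measured and model-fitted voltages
    return math.sqrt(sum((m - f) ** 2 for m, f in zip(measured, fitted)) / len(measured))

def mae(measured, fitted):
    # Mean absolute error
    return sum(abs(m - f) for m, f in zip(measured, fitted)) / len(measured)

def mape(measured, fitted):
    # Mean absolute percentage error, expressed in percent
    return 100.0 * sum(abs((m - f) / m) for m, f in zip(measured, fitted)) / len(measured)

# Hypothetical measured vs. fitted cell voltages (V), not taken from the paper
v_meas = [0.98, 0.95, 0.91, 0.86]
v_fit = [0.97, 0.95, 0.92, 0.85]
print(rmse(v_meas, v_fit), mae(v_meas, v_fit), mape(v_meas, v_fit))
```

The same three functions reproduce the style of Tables 7–9: each algorithm's identified parameter vector is plugged into the model, and the metrics are computed against the corresponding dataset.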
Table 9. Identified parameters and RMSE results obtained by various algorithms for SECM based on denoised data and noised data. Data: N = noised data; DN = denoised data.

| Dataset | Algorithm | Data | Eo | A | Rohm | B | I0 | IL | RMSE |
|---|---|---|---|---|---|---|---|---|---|
| (HUST) | ABC | N | 1.0239 | 0.0902 | 8.0400 × 10⁻⁵ | 0.1574 | 33.6532 | 2000.0000 | 2.5354 × 10⁻² |
| | | DN | 0.9950 | 0.1191 | 2.3572 × 10⁻⁵ | 0.1073 | 79.3750 | 1214.9105 | 9.1871 × 10⁻³ |
| | ALO | N | 1.0285 | 0.0856 | 8.6930 × 10⁻⁵ | 0.1039 | 30.8879 | 1444.4458 | 2.6887 × 10⁻² |
| | | DN | 1.0084 | 0.1618 | 3.1818 × 10⁻⁵ | 0.0187 | 89.0017 | 902.6166 | 1.1141 × 10⁻² |
| | BSA | N | 1.0490 | 0.0691 | 0.0000 | 0.3324 | 13.2586 | 2000.0000 | 1.4015 × 10⁻¹ |
| | | DN | 0.9944 | 0.2134 | 0.0000 | 0.0025 | 134.9176 | 864.0600 | 2.9648 × 10⁻² |
| | dFDB-MRFO | N | 1.0533 | 0.0548 | 2.6043 × 10⁻⁴ | 0.0000 | 8.6001 | 1651.7166 | 2.4958 × 10⁻² |
| | | DN | 1.0290 | 0.0371 | 3.3480 × 10⁻⁴ | 0.0032 | 8.2064 | 1883.1519 | 1.5600 × 10⁻³ |
| | GWO | N | 1.0488 | 0.0690 | 5.0449 × 10⁻⁵ | 0.2628 | 13.5532 | 2000.0000 | 2.6694 × 10⁻² |
| | | DN | 1.0203 | 0.0674 | 5.3987 × 10⁻⁵ | 0.1741 | 23.9954 | 1373.0948 | 4.6941 × 10⁻³ |
| | IWOA | N | 1.0214 | 0.0932 | 0.0000 | 0.2639 | 35.9610 | 2000.0000 | 2.6214 × 10⁻² |
| | | DN | 0.9725 | 0.0270 | 3.9130 × 10⁻⁴ | 0.0256 | 65.2928 | 2000.0000 | 1.2417 × 10⁻² |
| | MSA | N | 1.0137 | 0.1386 | 1.0114 × 10⁻⁵ | 0.0432 | 61.8185 | 1134.2961 | 2.7332 × 10⁻² |
| | | DN | 1.0023 | 0.0937 | 0.0000 | 0.2338 | 53.3175 | 1628.9630 | 7.2302 × 10⁻³ |
| | WOA | N | 1.0330 | 0.0848 | 0.0000 | 0.2694 | 24.9538 | 1958.2301 | 2.8336 × 10⁻² |
| | | DN | 0.9831 | 0.0255 | 3.5701 × 10⁻⁴ | 0.0760 | 28.9481 | 2000.0000 | 1.4312 × 10⁻² |
| (Poland) | ABC | N | 1.0183 | 0.0361 | 3.8249 × 10⁻⁴ | 0.0332 | 188.5794 | 603.8424 | 2.2084 × 10⁻² |
| | | DN | 1.0152 | 0.1057 | 1.5161 × 10⁻⁴ | 0.1030 | 209.2736 | 668.6288 | 5.9881 × 10⁻³ |
| | ALO | N | 1.0299 | 0.2928 | 1.2785 × 10⁻⁴ | 0.0113 | 300.0000 | 592.8920 | 2.1858 × 10⁻² |
| | | DN | 1.0099 | 0.0184 | 2.0317 × 10⁻⁴ | 0.1557 | 183.2716 | 741.5057 | 6.9660 × 10⁻³ |
| | BSA | N | 1.0444 | 0.0429 | 3.5072 × 10⁻⁵ | 0.1004 | 18.4335 | 635.7112 | 2.4820 × 10⁻² |
| | | DN | 1.0198 | 0.0000 | 0.0000 | 0.8499 | 254.9711 | 1632.9096 | 1.1887 × 10⁻² |
| | dFDB-MRFO | N | 1.0213 | 0.2069 | 1.2547 × 10⁻⁴ | 0.0198 | 238.8162 | 593.5600 | 2.1003 × 10⁻² |
| | | DN | 1.0217 | 0.0526 | 0.0000 | 0.1339 | 57.2269 | 664.4808 | 1.1003 × 10⁻³ |
| | GWO | N | 1.0242 | 0.3572 | 0.0000 | 0.0106 | 300.0000 | 592.8920 | 2.4745 × 10⁻² |
| | | DN | 1.0055 | 0.0015 | 0.0000 | 0.3552 | 35.1242 | 893.9047 | 8.0012 × 10⁻³ |
| | IWOA | N | 1.0183 | 0.1337 | 0.0000 | 0.0910 | 165.2074 | 648.4477 | 2.2751 × 10⁻² |
| | | DN | 1.0071 | 0.0320 | 0.0000 | 0.2930 | 196.4814 | 848.1346 | 7.6441 × 10⁻³ |
| | MSA | N | 1.0249 | 0.1512 | 1.5676 × 10⁻⁵ | 0.0412 | 132.3149 | 599.2817 | 2.1464 × 10⁻² |
| | | DN | 1.0163 | 0.0024 | 4.1476 × 10⁻⁵ | 0.2973 | 14.1800 | 842.3375 | 4.5383 × 10⁻³ |
| | WOA | N | 1.0165 | 0.0464 | 0.0000 | 0.3960 | 159.9633 | 1093.5445 | 2.8834 × 10⁻² |
| | | DN | 1.0087 | 0.0002 | 3.3680 × 10⁻⁴ | 0.0902 | 70.9730 | 656.2933 | 4.5140 × 10⁻³ |
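All of the tables above report parameter vectors found by minimizing the voltage RMSE over bounded parameter ranges. The overall pattern can be sketched with a plain bounded random search standing in for dFDB-MRFO, applied to a deliberately simplified polarization curve (open-circuit voltage minus activation and ohmic losses only); the model form, bounds, and data here are illustrative assumptions, not the paper's SECM:

```python
import math
import random

def model_voltage(i, e0, a, r_ohm):
    # Simplified stand-in polarization curve: OCV minus a logarithmic
    # activation term and a linear ohmic term. The paper's SECM also
    # includes exchange-current and limiting-current terms omitted here.
    return e0 - a * math.log(i) - r_ohm * i

def rmse(params, currents, voltages):
    # Fitting objective: RMSE between measured and model-fitted voltages
    return math.sqrt(sum((v - model_voltage(i, *params)) ** 2
                         for i, v in zip(currents, voltages)) / len(currents))

def random_search(currents, voltages, bounds, iters=20000, seed=1):
    # Bounded random search: any metaheuristic (dFDB-MRFO, ABC, GWO, ...)
    # minimizing RMSE over a boxed parameter vector follows this pattern,
    # differing only in how candidate solutions are generated.
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        err = rmse(cand, currents, voltages)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Synthetic I-V data generated from known parameters (E0=1.02, A=0.05, Rohm=2e-4)
currents = [5.0, 20.0, 60.0, 120.0, 200.0]
voltages = [model_voltage(i, 1.02, 0.05, 2e-4) for i in currents]
params, err = random_search(currents, voltages,
                            bounds=[(0.8, 1.2), (0.0, 0.2), (0.0, 1e-3)])
print(params, err)
```

With noise-free synthetic data the recovered parameters approach the generating values; the paper's denoising step (Elman network) plays the same role for measured data, which is why the DN rows above consistently show lower RMSE than the N rows.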
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, H.; Gao, D.; Shi, L.; Zheng, F.; Yang, B. Parameter Identification of Solid Oxide Fuel Cell Using Elman Neural Network and Dynamic Fitness Distance Balance-Manta Ray Foraging Optimization Algorithm. Processes 2024, 12, 2504. https://doi.org/10.3390/pr12112504