Article

Electrical Power Prediction through a Combination of Multilayer Perceptron with Water Cycle Ant Lion and Satin Bowerbird Searching Optimizers

1 Informetrics Research Group, Ton Duc Thang University, Ho Chi Minh City, Vietnam
2 Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam
3 School of Economics and Business, Norwegian University of Life Sciences, 1430 Ås, Norway
4 School of the Built Environment, Oxford Brookes University, Oxford OX3 0BP, UK
5 John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
6 Department of Informatics, Selye Janos University, 94501 Komarom, Slovakia
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(4), 2336; https://doi.org/10.3390/su13042336
Submission received: 25 January 2021 / Revised: 9 February 2021 / Accepted: 10 February 2021 / Published: 21 February 2021

Abstract

Predicting the electrical power (PE) output is a significant step toward the sustainable development of combined cycle power plants. Due to the effect of several parameters on the simulation of PE, utilizing a robust method is of high importance. Hence, in this study, a potent metaheuristic strategy, namely, the water cycle algorithm (WCA), is employed to solve this issue. First, a nonlinear neural network framework is formed to link the PE with influential parameters. Then, the network is optimized by the WCA algorithm. A publicly available dataset is used to feed the hybrid model. Since the WCA is a population-based technique, its sensitivity to the population size is assessed by a trial-and-error effort to attain the most suitable configuration. The results in the training phase showed that the proposed WCA can find an optimal solution for capturing the relationship between the PE and influential factors with less than 1% error. Likewise, examining the test results revealed that this model can forecast the PE with high accuracy. Moreover, a comparison with two powerful benchmark techniques, namely, ant lion optimization and a satin bowerbird optimizer, pointed to the WCA as a more accurate technique for the sustainable design of the intended system. Lastly, two potential predictive formulas, based on the most efficient WCAs, are extracted and presented.

1. Introduction

The accurate forecast of power generation capacity is a significant task for power plants [1]. This task concerns the efficiency of plants toward an economically beneficial performance [2]. Due to the nonlinear effect of several factors on thermodynamic systems [3,4] and related parameters like electrical power (PE), many scholars have updated earlier solutions by using machine learning. There are diverse types of machine learning methods (e.g., regression [5], neural systems [6,7], and fuzzy-based approaches [8]) that have presented reliable solutions to various problems. Liao [9] successfully predicted the output power of a plant using a regression model. The model attained 99% accuracy and was introduced as a promising approach for this purpose. Wood [10] employed a transparent open box algorithm for the PE output approximation of a combined cycle power plant (CCPP). The evaluations revealed the suitability of this algorithm, as it provided an efficient and optimal prediction. Besides, as discussed by many scholars, intelligence techniques have a high capability to undertake nonlinear and complicated calculations [11,12,13,14,15,16]. A large number of artificial intelligence-based practices have been studied, for example, in the subjects of environmental concerns [17,18,19,20,21], pan evaporation and soil precipitation prediction [22,23], sustainability [24], water and groundwater supply chains [25,26,27,28,29,30,31,32], natural gas consumption [33], optimizing energy systems [34,35,36,37,38,39,40,41,42,43,44,45], air quality [46], image classification and processing [47,48,49], face or particular pattern recognition [50,51,52], structural health monitoring [53], target tracking and computer vision [54,55,56], building and structural design analysis [57,58,59], soil-pile analysis and landslide assessment [60,61,62,63,64], quantifying climatic contributions [65], structural material (e.g., steel and concrete) behaviors [66,67,68,69,70,71], or even some complex concerns such as signal processing [72,73] as well as feature selection and extraction problems [74,75,76,77,78]. Similar to deep learning-based applications [79,80,81,82,83,84], many decision-making applications are related to complicated engineering problems as well [85,86,87,88,89,90,91]. The artificial neural network (ANN), in turn, is a sophisticated nonlinear processor that has attracted massive attention for sensitive engineering modeling [92]. In this sense, the multi-layer perceptron (MLP) [93,94] is composed of a minimum of three layers, each of which contains a number of neurons for handling the computations; a more complicated ANN-based solution is known as deep learning [95]. For instance, Chen et al. [96], Hu et al. [97], Wang et al. [98], and Xia et al. [99] employed extreme learning machine techniques in the field of medical sciences. As new advanced prediction techniques, hybrid searching algorithms have been widely developed to achieve more accurate predictions; namely, Harris hawks optimization [100,101,102], fruit fly optimization [103], the multi-swarm whale optimizer [104,105], ant colony optimization [57,106], the grasshopper optimizer [107], bacterial foraging optimization [108], many-objective optimization [109,110], and chaos-enhanced grey wolf optimization [111,112].
In machine learning, ANNs have been widely used for analyzing diverse energy-related parameters in power plants [113,114,115]. Akdemir [116], for example, suggested the use of ANNs for predicting the hourly power of combined gas and steam turbine power plants. With a coefficient of determination (R2) of nearly 0.97, the products of the ANN were found to be in close agreement with the real data. The successful use of machine learning for the same problem was also reported by Bandić et al. [117], who applied three popular approaches, namely, random forest, random tree, and an adaptive neuro-fuzzy inference system (ANFIS). Their findings indicated that the random forest outperforms the other models. They also carried out a feature selection step; the original and modified data led to root mean square errors (RMSEs) of 3.0271 and 3.0527 MW, respectively. Mohammed et al. [118] used an ANFIS to find the thermal efficiency and optimal power output of combined cycle gas turbines, which were 61% and 1540 MW, respectively.
Metaheuristic techniques have effectively assisted engineers and scholars in optimizing diverse problems [23,119,120,121,122,123,124,125,126,127,128], especially energy-related parameters such as solar energy [129], building thermal load [130], wind turbine interconnections [131], and green computing awareness [132]. Seyedmahmoudian et al. [133] used a differential evolution and particle swarm optimization (DEPSO) method to analyze the output power of a building-integrated photovoltaic system. These algorithms have also gained a lot of attention for optimally supervising conventional predictors like ANNs. Hu et al. [134] proposed a sophisticated hybrid that combines an ANN with a genetic algorithm (GA) and PSO for predicting short-term electric load. With a relative error of 0.77%, this model performed better than the GA-ANN and PSO-ANN. Another application of the GA was studied by Lorencin et al. [135], who tuned an ANN to estimate the PE output of a CCPP. Since the proposed model achieved a noticeably smaller error than a typical ANN, it was concluded that the GA is a suitable optimizer for this system. Ghosh et al. [136] used a metaheuristic algorithm called beetle antennae search (BAS) to train a cascade feed-forward neural network for simulating the PE output of a CCPP. Due to the suitable performance of the developed model, they introduced it as an effective method for PE analysis. Chatterjee et al. [137] combined the ANN with cuckoo search (CS) and the PSO for electrical energy modeling at a combined cycle gas turbine. Their findings showed the superiority of the CS-trained ANN (with an average RMSE of approximately 2.6%) over the conventional ANN and the PSO-trained version.
Due to the crucial role of power generation forecasting in the sustainability of systems like gas turbines [138], selecting an appropriate predictive model is of great importance. The above literature reflects the high potential of metaheuristic algorithms for supervising the ANN. However, a significant knowledge gap emerges because the literature on PE analysis relies mostly on the first generation of these techniques (e.g., the PSO and GA). Hence, this study is concerned with the application of a novel metaheuristic technique, namely, the water cycle algorithm (WCA), to the accurate prediction of the PE of a base-load-operated CCPP. Moreover, the performance of this algorithm is comparatively validated against ant lion optimization (ALO) and the satin bowerbird optimizer (SBO) as benchmarks. These techniques are applied to this problem through a neural network framework. Previous studies have shown the competency of the WCA [139], ALO [140], and SBO [141] in optimizing intelligent models like ANNs and ANFIS. The main contribution of these algorithms to PE estimation lies in finding the optimal relationship between this parameter and its influential factors.

2. Materials and Methods

2.1. Data Provision

When it comes to intelligent learning, models acquire knowledge by mining the data. In ANN-based models, this knowledge is stored in a group of tunable weights and biases. The data should represent records of one (or several) input parameter(s) and their corresponding target(s).
In this work, the data are downloaded from a publicly available repository at http://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant, based on studies by Tüfekci [138] and Kaya et al. [142]. Six years of records (2006–2011) of a CCPP working at full load (nominal generating capacity of 480 MW, made up of 2 × 160 MW ABB 13E2 gas turbines, 2 × dual-pressure heat recovery steam generators, and 1 × 160 MW ABB steam turbine) form this dataset [138]. It gives the full-load electrical power output as the target parameter, along with four input parameters, namely, ambient temperature (AT), exhaust steam pressure (vacuum, V), atmospheric pressure (AP), and relative humidity (RH). Figure 1 shows the relationship between the PE and the input parameters. According to the drawn trendlines, a meaningful correlation can be seen in the PE-AT and PE-V plots (R2 of 0.8989 and 0.7565, respectively), while the values of AP and RH do not indicate an explicit correlation. Both AT and V are inversely related to the PE.
Table 1 describes the dataset statistically. The values of AT, V, AP, and RH range in [1.8, 37.1] °C, [25.4, 81.6] cm Hg, [992.9, 1033.3] mbar, and [25.6, 100.2] % with average values of 19.7 °C, 54.3 cm Hg, 1013.3 mbar, and 73.3%, respectively. Additionally, the minimum and maximum recorded PEs are 420.3 and 495.8 MW. The dataset comprises a total of 9568 samples, out of which 7654 samples are selected as training data and the remaining 1914 samples form the testing data. To do this, a random selection with an 80:20 ratio is applied.
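For readers who wish to reproduce this partition, the snippet below is a minimal Python sketch. It assumes the UCI spreadsheet has been downloaded locally and that its file and column names follow the repository description (Folds5x2_pp.xlsx with columns AT, V, AP, RH, and PE); the random seed is arbitrary and only illustrates an 80:20 split of the 9568 records.

```python
# Minimal sketch: load the UCI CCPP data and make the 80:20 random split.
# Assumes the spreadsheet (e.g., "Folds5x2_pp.xlsx") has been downloaded from
# http://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant and that its
# columns are named AT, V, AP, RH, and PE, as in the repository description.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_excel("Folds5x2_pp.xlsx")       # 9568 records in total
X = data[["AT", "V", "AP", "RH"]].to_numpy()   # four input parameters
y = data["PE"].to_numpy()                      # full-load electrical power (MW)

# Random 80:20 split -> 7654 training and 1914 testing samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_test.shape)             # (7654, 4) (1914, 4)
```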

2.2. Methodology

The overall methodology used in this study is shown in Figure 2.
The description of the used algorithms is presented below.

2.2.1. The WCA

Simulating the water cycle process is the main idea of the WCA, which was designed by Eskandar et al. [143]. In studies like [144], scholars have used this algorithm for sustainable energy issues. When the algorithm starts, a population of raindrops with the size Npop is generated. Among the individuals, the best one is designated as the sea, whose solution is denoted by Xsea. Additionally, individuals with promising solutions (Xr) are considered rivers. The number of rivers is determined by the parameter Nsr, which gives the number of rivers plus the single sea. The remaining individuals form the stream group (Xs). The number of streams is the difference between Npop and Nsr.
The population can be expressed as follows:
$$\begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^D \\ x_2^1 & x_2^2 & \cdots & x_2^D \\ \vdots & \vdots & \ddots & \vdots \\ x_{N_{pop}}^1 & x_{N_{pop}}^2 & \cdots & x_{N_{pop}}^D \end{bmatrix} = \begin{bmatrix} X_{sea} \\ X_{r_1} \\ X_{r_2} \\ \vdots \\ X_{s_1} \\ \vdots \\ X_{s_{N_{pop}-N_{sr}}} \end{bmatrix}, \quad (1)$$
Based on the initial function values of Xr and Xsea, a number of streams Xs are assigned to each Xr and to Xsea through the following relationships:
$$C_n = f(n) - f(X_{s_1}), \quad (2)$$
$$NS(n) = \mathrm{round}\left\{ \left| \frac{C_n}{\sum_{j=1}^{N_{sr}} C_j} \right| \times (N_{pop} - N_{sr}) \right\}, \quad (3)$$
in which f stands for the function value and $n = X_{sea}, X_{r_1}, \ldots, X_{r_{N_{sr}-1}}$.
Although the typical path in nature is stream → river → sea, some streams may flow straight to the sea. The new values of Xr and Xs are obtained from the following equations:
$$X_r^{t+1} = X_r^{t} + rand \times cons \times (X_{sea}^{t} - X_r^{t}), \quad (4)$$
$$X_s^{t+1} = X_s^{t} + rand \times cons \times (X_r^{t} - X_s^{t}), \quad (5)$$
$$X_s^{t+1} = X_s^{t} + rand \times cons \times (X_{sea}^{t} - X_s^{t}), \quad (6)$$
where rand is a random number in [0, 1], cons is a positive constant in [1, 2], and t signifies the iteration number. Xr and Xs are evaluated and compared. If the quality of Xs is better than that of Xr, they exchange their positions. A similar process happens between Xr and Xsea [145,146]. By performing the evaporation part of the water cycle, the algorithm repeats these steps to improve the solution iteratively.
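To make the movement rules of Equations (4) and (5) more concrete, the following Python sketch implements one simplified WCA iteration for a generic minimization task. It is an illustrative reduction, not the exact implementation used in this study: the stream-to-river assignment of Equation (3) is replaced by a random assignment, the direct stream-to-sea moves of Equation (6) and the evaporation step are omitted, and the function and variable names are ours.

```python
import numpy as np

def wca_step(pop, objective, n_sr=4, cons=2.0, rng=None):
    """One simplified WCA movement step (Equations (4) and (5)) for minimization.

    pop       : (N_pop, D) array of candidate solutions
    objective : callable returning the cost of a single solution
    n_sr      : number of rivers plus the single sea
    cons      : positive constant in [1, 2]
    """
    if rng is None:
        rng = np.random.default_rng()
    # Sort so that row 0 is the sea, rows 1..n_sr-1 are rivers, the rest streams.
    pop = pop[np.argsort([objective(x) for x in pop])]
    sea, rivers, streams = pop[0], pop[1:n_sr], pop[n_sr:]

    # Rivers flow toward the sea, Equation (4).
    rivers += rng.random(rivers.shape) * cons * (sea - rivers)

    # Streams flow toward a river, Equation (5); here the stream-to-river
    # assignment is random instead of following NS(n) from Equation (3).
    owners = rng.integers(0, len(rivers), size=len(streams))
    streams += rng.random(streams.shape) * cons * (rivers[owners] - streams)

    # Position exchanges: a stream better than its river replaces it,
    # and a river better than the sea becomes the new sea.
    for i, j in enumerate(owners):
        if objective(streams[i]) < objective(rivers[j]):
            streams[i], rivers[j] = rivers[j].copy(), streams[i].copy()
    for j in range(len(rivers)):
        if objective(rivers[j]) < objective(sea):
            rivers[j], sea = sea.copy(), rivers[j].copy()

    return np.vstack([sea, rivers, streams])

# Toy usage: minimize the sphere function in five dimensions with N_pop = 20.
rng = np.random.default_rng(0)
population = rng.uniform(-5.0, 5.0, size=(20, 5))
for _ in range(200):
    population = wca_step(population, lambda x: float(np.sum(x ** 2)), rng=rng)
print(float(np.sum(population[0] ** 2)))   # cost of the best (sea) solution
```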

2.2.2. The Benchmarks

The first benchmark algorithm is the ALO. Mirjalili [147] designed this algorithm as a robust nature-inspired strategy. It has also attracted the attention of experts for tasks like load shifting in the analysis of sustainable renewable resources [148]. The pivotal idea of this algorithm is simulating the idealized hunting behavior of antlions, which dig a cone-shaped pit and wait for prey (often ants) to fall into the trap. The prey makes random movements to escape, while antlions are selected through a roulette wheel operator according to their fitness; in this sense, the fitter the hunter, the better the prey it catches [149]. The details of the ALO and its application to optimizing intelligent models like ANNs can be found in the earlier literature [150].
The SBO is considered the second benchmark for the WCA. Inspired by the lifestyle of satin bowerbirds, Moosavi and Bardsiri [141] developed the SBO. Scholars like Zhang et al. [151] and Chintam and Daniel [152] have confirmed the successful performance of this algorithm in dealing with structural and energy-related optimization issues. In this strategy, there is a bower-making competition among male birds to attract a mate. The population is randomly created and the fitness of each bower is calculated. Through an elitism mechanism, the most promising individual is kept as the best solution. After determining the changes in the positions, a mutation operation is applied, followed by a step that combines the solutions of the old and new (updated) populations [153]. A mathematical description of the SBO can be found in studies like [154].

3. Results and Discussion

3.1. Accuracy Assessment Measures

Two essential error criteria, namely, the RMSE and mean absolute error (MAE), are defined to return different forms of the prediction error. Another error indicator called mean absolute percentage error (MAPE) is also defined to report the relative (percentage) error. Given $PE_i^{expected}$ and $PE_i^{predicted}$ as the expected and predicted electrical power outputs, Equations (7) to (9) denote the calculation of these indicators.
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(PE_i^{expected} - PE_i^{predicted}\right)^2}, \quad (7)$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|PE_i^{expected} - PE_i^{predicted}\right|, \quad (8)$$
$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{PE_i^{expected} - PE_i^{predicted}}{PE_i^{expected}}\right| \times 100, \quad (9)$$
where the number of samples (i.e., 7654 and 1914 in the training and testing groups, respectively) is signified by N.
Moreover, a correlation indicator called the Pearson correlation coefficient (R) is used. According to Equation (10), it reports the consistency between $PE^{expected}$ and $PE^{predicted}$. Note that the ideal value for this indicator is 1.
$$R = \frac{\sum_{i=1}^{N}\left(PE_i^{predicted} - \overline{PE}^{predicted}\right)\left(PE_i^{expected} - \overline{PE}^{expected}\right)}{\sqrt{\sum_{i=1}^{N}\left(PE_i^{predicted} - \overline{PE}^{predicted}\right)^2}\sqrt{\sum_{i=1}^{N}\left(PE_i^{expected} - \overline{PE}^{expected}\right)^2}}, \quad (10)$$
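The four indicators can be computed with a few lines of code; the sketch below is a straightforward NumPy implementation of Equations (7) to (10). The function name and the tiny demo values are ours and serve only as an illustration.

```python
import numpy as np

def evaluation_metrics(pe_expected, pe_predicted):
    """RMSE, MAE, MAPE (%), and Pearson R, as in Equations (7)-(10)."""
    pe_expected = np.asarray(pe_expected, dtype=float)
    pe_predicted = np.asarray(pe_predicted, dtype=float)
    error = pe_expected - pe_predicted

    rmse = np.sqrt(np.mean(error ** 2))                  # Equation (7)
    mae = np.mean(np.abs(error))                         # Equation (8)
    mape = np.mean(np.abs(error / pe_expected)) * 100.0  # Equation (9)
    r = np.corrcoef(pe_predicted, pe_expected)[0, 1]     # Equation (10)
    return rmse, mae, mape, r

# Tiny illustrative call with made-up values (not the study's data):
print(evaluation_metrics([430.0, 470.0, 450.0], [432.5, 468.0, 451.0]))
```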

3.2. Hybridizing and Training

It was stated earlier that this study pursues a novel forecasting method for the problem of PE modeling. To this end, the water cycle algorithm explores the relationship between this parameter and the four inputs through an MLP neural network. This framework is used to establish nonlinear equations between the mentioned parameters. A three-layer MLP is considered, wherein the numbers of neurons lying in the first, second, and third layers (also known as the input, hidden, and output layers) equal four (the number of inputs), nine (obtained by trial and error), and one (the number of outputs), respectively. Figure 3 shows this structure.
There are two kinds of tunable computational parameters in an MLP: (a) weights (W), which are assigned to each input factor, and (b) bias terms. Equation (11) shows the calculation of a neuron for a given input (I).
$$Response = Tansig(W \times I + b), \quad (11)$$
where Tansig signifies an activation function which is defined as follows:
$$Tansig(x) = \frac{2}{1 + e^{-2x}} - 1, \quad (12)$$
Each neuron of the ANN applies an activation function to a linear combination of the inputs and network parameters (i.e., W and b) to give its specific response. There are a number of functions (e.g., Logsig, Purelin, etc.) that can be used for this purpose. However, many studies have stated the superiority of Tansig for hidden neurons [155,156,157].
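For clarity, the forward pass of the 4-9-1 network of Figure 3, with Tansig hidden neurons (Equation (12)) and a linear output neuron, can be sketched as follows; the weight and bias arrays in the shape check are random placeholders, not the tuned values reported later.

```python
import numpy as np

def tansig(x):
    """Hyperbolic-tangent sigmoid activation, Equation (12)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of the 4-9-1 MLP of Figure 3.

    x  : (4,) input vector [AT, V, AP, RH]
    W1 : (9, 4) hidden-layer weights, b1 : (9,) hidden-layer biases
    W2 : (1, 9) output-layer weights, b2 : (1,) output-layer bias
    """
    hidden = tansig(W1 @ x + b1)          # Equation (11) for each hidden neuron
    return (W2 @ hidden + b2).item()      # linear output neuron -> predicted PE

# Shape check with random placeholder parameters:
rng = np.random.default_rng(1)
print(mlp_forward(rng.random(4), rng.random((9, 4)), rng.random(9),
                  rng.random((1, 9)), rng.random(1)))
```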
The WCA finds the optimal values of the parameters in Equation (11) through an iterative procedure. In each iteration, the suitability of each response is reported by an objective function (OF); this study uses the RMSE of the training data for this purpose. So, the lower the OF is, the better the optimization is. Figure 4a shows the optimization curves of the WCA for the given problem. The decline of the OF in this figure shows that the RMSE is reduced iteration by iteration.
It is well known that the population size can greatly affect the quality of the optimization. The convergence curves are therefore plotted for seven different WCA-NN networks distinguished by different population sizes (PS of 10, 50, 100, 200, 300, 400, and 500). As can be seen, the curve for PS = 400 ends up below the others. Therefore, this network is taken as the representative WCA-NN for further evaluations. Note that a total of 1000 iterations was considered for all tested PSs.
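The way a metaheuristic candidate is scored can be sketched as follows: each candidate is a flat vector of the 55 weights and biases of the 4-9-1 MLP, which is decoded and evaluated by the training RMSE (the OF). The optimizer call itself is indicated only as a commented placeholder, since the WCA, ALO, and SBO implementations are not reproduced here, and the demo data are random placeholders.

```python
import numpy as np

N_IN, N_HID = 4, 9                        # the 4-9-1 network of Figure 3
DIM = N_HID * N_IN + N_HID + N_HID + 1    # 55 tunable weights and biases

def decode(v):
    """Unpack one flat candidate solution into the MLP weights and biases."""
    w1 = v[:36].reshape(N_HID, N_IN)
    b1 = v[36:45]
    w2 = v[45:54].reshape(1, N_HID)
    b2 = v[54:55]
    return w1, b1, w2, b2

def objective(v, X_train, y_train):
    """OF = RMSE of the training data for the MLP encoded by the vector v."""
    w1, b1, w2, b2 = decode(v)
    hidden = np.tanh(X_train @ w1.T + b1)   # Tansig is equivalent to tanh
    pred = hidden @ w2.T + b2               # linear output neuron
    return float(np.sqrt(np.mean((y_train - pred.ravel()) ** 2)))

# The metaheuristic (WCA, ALO, or SBO) then searches this 55-dimensional space,
# e.g., for population sizes PS in {10, 50, ..., 500} over 1000 iterations:
#   best_vector = some_optimizer(lambda v: objective(v, X_train, y_train),
#                                dim=DIM, pop_size=400, iterations=1000)

# Quick check with random placeholder data (not the CCPP records):
rng = np.random.default_rng(0)
Xd, yd = rng.random((100, 4)), rng.random(100) * 75 + 420
print(objective(rng.standard_normal(DIM), Xd, yd))
```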
The same strategy (i.e., the same PSs and number of iterations) was executed for the benchmark models. It was shown that ALO-NN and SBO-NN with PSs of 400 and 300 are superior. Figure 4b depicts and compares the convergence behavior of the selected networks. According to this figure, all three algorithms have a similar performance in dealing with error minimization. The OF is chiefly reduced over the initial iterations.
Figure 4b also shows that the OF of the WCA-NN falls below those of both benchmarks. In this sense, RMSEs of 4.1468, 4.2656, and 4.2484 are calculated for the WCA-NN, ALO-NN, and SBO-NN, respectively. Additionally, the corresponding MAEs (3.2112, 3.3389, and 3.3075) support this claim.
Subtracting $PE^{predicted}$ from $PE^{expected}$ returns an error value for each sample. Figure 5 shows these errors. It can be seen that close-to-zero values are obtained for the majority of training samples. Concerning peak values, the errors lie in the ranges [−18.4548, 42.4231], [−18.9855, 43.2264], and [−19.1242, 42.8160]. With respect to the range of PE (Table 1), these values indicate a very good prediction for all models. Moreover, the calculated MAPEs report less than 1% relative errors (0.7076%, 0.7359%, and 0.7289%).
Moreover, the R values of 0.96985, 0.96807, and 0.96834 indicate an excellent correlation between the products of the used models and the observed PE. This favorable performance means that the WCA, ALO, and SBO have properly captured the dependence of the PE on AT, V, AP, and RH and, accordingly, have optimally tuned the parameters of the MLP system.

3.3. Testing Performance

The testing ability of a forecasting model illustrates the generalizability of the captured knowledge for unfamiliar conditions. The weights and bias terms tuned by the WCA, ALO, and SBO created three separate methods that predicted the PE for testing samples. The quality of the results is assessed in this section.
Figure 6 presents two charts for each model. First, the correlation between $PE^{expected}$ and $PE^{predicted}$ is graphically shown. Along with it, the frequency of errors ($PE^{expected} - PE^{predicted}$) is shown in the form of histogram charts. At a glance, the results of all three models demonstrate promising generalizability, due to the aggregation of points around the ideal line (i.e., x = y) in Figure 6a,c,e. Additionally, as a general trend in Figure 6b,d,f, small errors (zero and close-to-zero ranges) have a higher frequency compared to large values. Remarkably, testing errors range within [−16.6585, 44.7929], [−15.8225, 45.7482], and [−16.3683, 45.8428].
The RMSE and MAE of the WCA-NN, ALO-NN, and SBO-NN were 4.0852 and 3.1996, 4.1719 and 3.3028, and 4.1614 and 3.2802, respectively. These values are close to those of the training phase. Hence, all three models enjoy high accuracy in dealing with out-of-sample conditions. Furthermore, a desirable level of relative error is represented by the MAPEs of 0.7045%, 0.7272%, and 0.7221%.
According to the obtained R values (0.97164, 0.97040, and 0.97061), all three hybrids are able to predict the PE of a CCPP with highly reliable accuracy. In all regression charts, there is an outlying sample, PE = 435.58 MW (obtained for AT = 7.14 °C, V = 41.22 cm Hg, AP = 1016.6 mbar, and RH = 97.09%), that is predicted to be 480.3728513, 481.3282482, and 481.4228308.

3.4. WCA vs. ALO and SBO

The quality of the results showed that the WCA, ALO, and SBO metaheuristic algorithms benefit from powerful search strategies for exploring and mapping the PE pattern. However, a comparative evaluation using the RMSE, MAE, MAPE, and R pointed out noticeable distinctions in the performance of these algorithms.
Figure 7 depicts and compares the accuracies in the form of radar charts. The shape of the produced triangles indicates the superiority of the WCA-based model over the benchmark algorithms in both the training and testing phases. In terms of all four indicators, this model predicted the PE with the best quality, meaning that the ANN supervised by the WCA is constructed from more promising parameters. After the proposed algorithm, the SBO outperformed the ALO. It is noteworthy that the accuracies of these two algorithms were closer in the testing phase than in the training phase.
From the time-efficiency point of view, the computation of the ALO was shorter than that of the two other methods. The elapsed times for tuning the ANN parameters were nearly 14,261.1, 12,928.1, and 14,871.3 s for the WCA, ALO, and SBO, respectively. It should also be noted that the WCA and ALO used PS = 400, while this value was 300 for the SBO.
According to the above results, the WCA provides both an accurate and an efficient solution to the problem of PE approximation and, thus, to the sustainable development of CCPPs. It is true that the ALO could optimize the neural network in a shorter time, but smaller PSs of the WCA (i.e., 300, 200, ...) were far faster. Referring back to Figure 4, the PS of 300 produced a solution almost as good as that of 400; interestingly, its prediction was even slightly better (testing RMSEs of 4.0760 vs. 4.0852). The computation time of this configuration was around 3186.9 s, which is considerably shorter than those of the two other algorithms. Thus, for time-sensitive projects, less complex configurations of the WCA are efficiently applicable.

3.5. Predictive Formulas

Based on the comparisons in the previous section, the solutions found by the WCAs with PSs of 300 and 400 are presented here in the form of two separate formulas for forecasting the electrical power. Equations (13) and (14) give the PE as a linear combination of the hidden-neuron outputs.
$$PE_{PS=300} = 0.814 \times Y_1 - 0.543 \times Y_2 + 0.825 \times Y_3 - 0.584 \times Y_4 - 0.509 \times Y_5 - 0.220 \times Y_6 + 0.296 \times Y_7 + 0.039 \times Y_8 + 0.542 \times Y_9 - 0.076, \quad (13)$$
$$PE_{PS=400} = 0.782 \times Z_1 + 0.627 \times Z_2 - 0.569 \times Z_3 - 0.594 \times Z_4 - 0.891 \times Z_5 - 0.548 \times Z_6 + 0.661 \times Z_7 + 0.416 \times Z_8 + 0.383 \times Z_9 - 0.696, \quad (14)$$
where $Y_i$ and $Z_i$ (i = 1, 2, ..., 9) symbolize the outputs of the hidden neurons. These parameters are calculated using a generic equation as follows:
$$Y_i \ \text{and} \ Z_i = Tansig\left(W_{i1} \times AT + W_{i2} \times V + W_{i3} \times AP + W_{i4} \times RH + b_i\right), \quad (15)$$
and with the help of Table 2.
According to the above formulas, calculating the PE consists of two steps. First, recalling the MLP structure (Figure 3) and Equation (11) from Section 3.2, Equation (15) is applied to produce the responses of the nine hidden neurons (e.g., $Y_1, Y_2, \ldots, Y_9$ for the formula corresponding to PS = 300). For instance, $W_{32}$ represents the weight of the 3rd neuron applied to the 2nd input (i.e., V); it equals 1.152 in Table 2 and is used for calculating $Y_3$. Next, these values are used by the output neuron (Equation (13)) to yield the PE. The same procedure applies to the formula corresponding to PS = 400 ($Z_1, Z_2, \ldots, Z_9$ and Equation (14)).
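The two-step calculation can be sketched as follows. Note that the hidden-layer weights and biases must be taken from Table 2 (not reproduced here), so the arrays below are zero placeholders except for the single value quoted in the text (W32 = 1.152), and the output-layer coefficients follow Equation (13) as reconstructed above.

```python
import numpy as np

def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

# Hidden-layer parameters of the PS = 300 network. These are PLACEHOLDERS:
# the real values must be copied from Table 2 (for example, W[2, 1] = 1.152,
# i.e., the weight of the 3rd neuron on the 2nd input, V).
W = np.zeros((9, 4))   # W[i-1, j-1] = weight of hidden neuron i on input j
b = np.zeros(9)        # b[i-1] = bias of hidden neuron i
W[2, 1] = 1.152        # the single value quoted in the text

# Output-layer coefficients of Equation (13) (PS = 300 formula).
c = np.array([0.814, -0.543, 0.825, -0.584, -0.509, -0.220, 0.296, 0.039, 0.542])
c0 = -0.076

def predict_pe(at, v, ap, rh):
    """Two-step evaluation: Equation (15) for Y1..Y9, then Equation (13) for PE."""
    y = tansig(W @ np.array([at, v, ap, rh]) + b)
    return float(c @ y + c0)

# Returns a meaningless value until the real Table 2 entries are filled in:
print(predict_pe(19.7, 54.3, 1013.3, 73.3))   # average-condition inputs (Table 1)
```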

4. Conclusions

This paper investigated the efficiency of three capable metaheuristic approaches for the accurate analysis of electrical power output. The water cycle algorithm was used to supervise the learning process of an ANN. This algorithm was compared with two other techniques, namely, ant lion optimization and the satin bowerbird optimizer. The results showed the superiority of the WCA in all cases and in terms of all accuracy indicators; for example, it achieved RMSEs of 4.1468 vs. 4.2656 and 4.2484 in the training phase and 4.0852 vs. 4.1719 and 4.1614 in the prediction phase. Nevertheless, all three hybrids could capture and reproduce the PE pattern with less than 1% error. All in all, a significant sustainability issue was efficiently managed and solved by metaheuristic techniques. Thus, the presented hybrid models can be practically employed to forecast the electrical power output of combined cycle power plants from records of AT, V, AP, and RH. They can also be appropriate substitutes for time-consuming and costly methods. However, further efforts are recommended for future projects to compare the applicability of different metaheuristic techniques and to develop innovative measures that may improve the efficiency of the existing models in terms of both time and accuracy.

Author Contributions

H.M. and A.M., methodology; software validation, writing—original draft preparation. H.M. and A.M., writing—review and editing, visualization, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lee, J.H.; Kim, T.S.; Kim, E.-H. Prediction of power generation capacity of a gas turbine combined cycle cogeneration plant. Energy 2017, 124, 187–197.
2. Sun, W.; Zhang, J.; Wang, R. Predicting electrical power output by using Granular Computing based Neuro-Fuzzy modeling method. In Proceedings of the 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, China, 23–25 May 2015; pp. 2865–2870.
3. Han, X.; Chen, N.; Yan, J.; Liu, J.; Liu, M.; Karellas, S. Thermodynamic analysis and life cycle assessment of supercritical pulverized coal-fired power plant integrated with No. 0 feedwater pre-heater under partial loads. J. Clean. Prod. 2019, 233, 1106–1122.
4. Han, X.; Zhang, D.; Yan, J.; Zhao, S.; Liu, J. Process development of flue gas desulphurization wastewater treatment in coal-fired power plants towards Zero Liquid Discharge: Energetic, economic and environmental analyses. J. Clean. Prod. 2020, 261, 121144.
5. Xu, X.; Chen, L. Projection of long-term care costs in China, 2020–2050: Based on the Bayesian quantile regression method. Sustainability 2019, 11, 3530.
6. Shi, K.; Wang, J.; Tang, Y.; Zhong, S. Reliable asynchronous sampled-data filtering of T–S fuzzy uncertain delayed neural networks with stochastic switched topologies. Fuzzy Sets Syst. 2020, 381, 1–25.
7. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C.J.N. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84.
8. Chen, H.; Qiao, H.; Xu, L.; Feng, Q.; Cai, K. A Fuzzy Optimization Strategy for the Implementation of RBF LSSVR Model in Vis–NIR Analysis of Pomelo Maturity. IEEE Trans. Ind. Inform. 2019, 15, 5971–5979.
9. Liao, Y. Linear Regression and Gradient Descent Method for Electricity Output Power Prediction. J. Comput. Commun. 2019, 7, 31–36.
10. Wood, D.A. Combined cycle gas turbine power output prediction and data mining with optimized data matching algorithm. SN Appl. Sci. 2020, 2, 1–21.
11. Liu, Z.; Shao, J.; Xu, W.; Chen, H.; Zhang, Y. An extreme learning machine approach for slope stability evaluation and prediction. Nat. Hazards 2014, 73, 787–804.
12. Chen, Y.; He, L.; Li, J.; Zhang, S. Multi-criteria design of shale-gas-water supply chains and production systems towards optimal life cycle economics and greenhouse gas emissions under uncertainty. Comput. Chem. Eng. 2018, 109, 216–235.
13. Zhu, J.; Shi, Q.; Wu, P.; Sheng, Z.; Wang, X. Complexity analysis of prefabrication contractors’ dynamic price competition in mega projects with different competition strategies. Complexity 2018, 2018, 5928235.
14. Hu, X.; Chong, H.-Y.; Wang, X. Sustainability perceptions of off-site manufacturing stakeholders in Australia. J. Clean. Prod. 2019, 227, 346–354.
15. He, L.; Shao, F.; Ren, L. Sustainability appraisal of desired contaminated groundwater remediation strategies: An information-entropy-based stochastic multi-criteria preference model. Environ. Dev. Sustain. 2020, 1–21.
16. Li, C.; Hou, L.; Sharma, B.Y.; Li, H.; Chen, C.; Li, Y.; Zhao, X.; Huang, H.; Cai, Z.; Chen, H. Developing a new intelligent system for the diagnosis of tuberculous pleural effusion. Comput. Methods Programs Biomed. 2018, 153, 211–225.
17. Liu, J.; Liu, Y.; Wang, X. An environmental assessment model of construction and demolition waste based on system dynamics: A case study in Guangzhou. Environ. Sci. Pollut. Res. 2020, 27, 37237–37259.
18. Liu, L.; Li, J.; Yue, F.; Yan, X.; Wang, F.; Bloszies, S.; Wang, Y. Effects of arbuscular mycorrhizal inoculation and biochar amendment on maize growth, cadmium uptake and soil cadmium speciation in Cd-contaminated soil. Chemosphere 2018, 194, 495–503.
19. Yang, Y.; Liu, J.; Yao, J.; Kou, J.; Li, Z.; Wu, T.; Zhang, K.; Zhang, L.; Sun, H. Adsorption behaviors of shale oil in kerogen slit by molecular simulation. Chem. Eng. J. 2020, 387, 124054.
20. Feng, S.; Lu, H.; Tian, P.; Xue, Y.; Lu, J.; Tang, M.; Feng, W. Analysis of microplastics in a remote region of the Tibetan Plateau: Implications for natural environmental response to human activities. Sci. Total Environ. 2020, 739, 140087.
21. Liu, J.; Yi, Y.; Wang, X. Exploring factors influencing construction waste reduction: A structural equation modeling approach. J. Clean. Prod. 2020, 276, 123185.
22. Zhang, B.; Xu, D.; Liu, Y.; Li, F.; Cai, J.; Du, L. Multi-scale evapotranspiration of summer maize and the controlling meteorological factors in north China. Agric. For. Meteorol. 2016, 216, 1–12.
23. Chao, L.; Zhang, K.; Li, Z.; Zhu, Y.; Wang, J.; Yu, Z. Geographically weighted regression based methods for merging satellite and gauge precipitation. J. Hydrol. 2018, 558, 275–289.
24. Keshtegar, B.; Heddam, S.; Sebbar, A.; Zhu, S.-P.; Trung, N.-T. SVR-RSM: A hybrid heuristic method for modeling monthly pan evaporation. Environ. Sci. Pollut. Res. 2019, 26, 35807–35826.
25. He, L.; Chen, Y.; Zhao, H.; Tian, P.; Xue, Y.; Chen, L. Game-based analysis of energy-water nexus for identifying environmental impacts during Shale gas operations under stochastic input. Sci. Total Environ. 2018, 627, 1585–1601.
26. Chen, Y.; Li, J.; Lu, H.; Yan, P. Coupling system dynamics analysis and risk aversion programming for optimizing the mixed noise-driven shale gas-water supply chains. J. Clean. Prod. 2021, 278, 123209.
27. Cheng, X.; He, L.; Lu, H.; Chen, Y.; Ren, L. Optimal water resources management and system benefit for the Marcellus shale-gas reservoir in Pennsylvania and West Virginia. J. Hydrol. 2016, 540, 412–422.
28. Li, X.; Zhang, R.; Zhang, X.; Zhu, P.; Yao, T. Silver-Catalyzed Decarboxylative Allylation of Difluoroarylacetic Acids with Allyl Sulfones in Water. Chem. Asian J. 2020, 15, 1175–1179.
29. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
30. Qian, J.; Feng, S.; Li, Y.; Tao, T.; Han, J.; Chen, Q.; Zuo, C. Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry. Opt. Lett. 2020, 45, 1842–1845.
31. Lyu, Z.; Chai, J.; Xu, Z.; Qin, Y.; Cao, J. A Comprehensive Review on Reasons for Tailings Dam Failures Based on Case History. Adv. Civ. Eng. 2019, 2019, 4159306.
32. Feng, W.; Lu, H.; Yao, T.; Yu, Q. Drought characteristics and its elevation dependence in the Qinghai–Tibet plateau during the last half-century. Sci. Rep. 2020, 10, 14323.
33. Su, Z.; Liu, E.; Xu, Y.; Xie, P.; Shang, C.; Zhu, Q. Flow field and noise characteristics of manifold in natural gas transportation station. Oil Gas Sci. Technol. Rev. D'ifp Energ. Nouv. 2019, 74, 70.
34. Chen, Y.; He, L.; Guan, Y.; Lu, H.; Li, J. Life cycle assessment of greenhouse gas emissions and water-energy optimization for shale gas supply chain planning based on multi-level approach: Case study in Barnett, Marcellus, Fayetteville, and Haynesville shales. Energy Convers. Manag. 2017, 134, 382–398.
35. He, L.; Shen, J.; Zhang, Y. Ecological vulnerability assessment for ecological conservation and environmental management. J. Environ. Manag. 2018, 206, 1115–1125.
36. Lu, H.; Tian, P.; He, L. Evaluating the global potential of aquifer thermal energy storage and determining the potential worldwide hotspots driven by socio-economic, geo-hydrologic and climatic conditions. Renew. Sustain. Energy Rev. 2019, 112, 788–796.
37. Wang, Y.; Yao, M.; Ma, R.; Yuan, Q.; Yang, D.; Cui, B.; Ma, C.; Liu, M.; Hu, D. Design strategy of barium titanate/polyvinylidene fluoride-based nanocomposite films for high energy storage. J. Mater. Chem. A 2020, 8, 884–917.
38. Zhao, X.; Ye, Y.; Ma, J.; Shi, P.; Chen, H. Construction of electric vehicle driving cycle for studying electric vehicle energy consumption and equivalent emissions. Environ. Sci. Pollut. Res. 2020, 27, 37395–37409.
39. Zhu, L.; Kong, L.; Zhang, C. Numerical Study on Hysteretic Behaviour of Horizontal-Connection and Energy-Dissipation Structures Developed for Prefabricated Shear Walls. Appl. Sci. 2020, 10, 1240.
40. Deng, Y.; Zhang, T.; Sharma, B.K.; Nie, H. Optimization and mechanism studies on cell disruption and phosphorus recovery from microalgae with magnesium modified hydrochar in assisted hydrothermal system. Sci. Total Environ. 2019, 646, 1140–1154.
41. Zhang, T.; Wu, X.; Fan, X.; Tsang, D.C.W.; Li, G.; Shen, Y. Corn waste valorization to generate activated hydrochar to recover ammonium nitrogen from compost leachate by hydrothermal assisted pretreatment. J. Environ. Manag. 2019, 236, 108–117.
42. Peng, S.; Zhang, Z.; Liu, E.; Liu, W.; Qiao, W. A new hybrid algorithm model for prediction of internal corrosion rate of multiphase pipeline. J. Nat. Gas Sci. Eng. 2021, 85, 103716.
43. Peng, S.; Chen, Q.; Zheng, C.; Liu, E. Analysis of particle deposition in a new-type rectifying plate system during shale gas extraction. Energy Sci. Eng. 2020, 8, 702–717.
44. Liu, E.; Wang, X.; Zhao, W.; Su, Z.; Chen, Q. Analysis and Research on Pipeline Vibration of a Natural Gas Compressor Station and Vibration Reduction Measures. Energy Fuels 2020.
45. Liu, E.; Guo, B.; Lv, L.; Qiao, W.; Azimi, M. Numerical simulation and simplified calculation method for heat exchange performance of dry air cooler in natural gas pipeline compressor station. Energy Sci. Eng. 2020, 8, 2256–2270.
46. Wang, Y.; Yuan, Y.; Wang, Q.; Liu, C.; Zhi, Q.; Cao, J. Changes in air quality related to the control of coronavirus in China: Implications for traffic and industrial emissions. Sci. Total Environ. 2020, 731, 139133.
47. Xu, M.; Li, C.; Zhang, S.; Callet, P.L. State-of-the-Art in 360° Video/Image Processing: Perception, Assessment and Compression. IEEE J. Sel. Top. Signal Process. 2020, 14, 5–26.
48. Zhang, X.; Wang, T.; Luo, W.; Huang, P. Multi-level Fusion and Attention-guided CNN for Image Dehazing. IEEE Trans. Circuits Syst. Video Technol. 2020.
49. Zhang, X.; Wang, T.; Wang, J.; Tang, G.; Zhao, L. Pyramid Channel-based Feature Attention Network for image dehazing. Comput. Vision Image Underst. 2020, 197, 103003.
50. Shi, K.; Wang, J.; Zhong, S.; Tang, Y.; Cheng, J. Non-fragile memory filtering of T-S fuzzy delayed neural networks based on switched fuzzy sampled-data control. Fuzzy Sets Syst. 2020, 394, 40–64.
51. Mi, C.; Cao, L.; Zhang, Z.; Feng, Y.; Yao, L.; Wu, Y. A port container code recognition algorithm under natural conditions. J. Coast. Res. 2020, 103, 822–829.
52. Salari, N.; Shohaimi, S.; Najafi, F.; Nallappan, M.; Karishnarajah, I. Application of pattern recognition tools for classifying acute coronary syndrome: An integrated medical modeling. Theor. Biol. Med. Model. 2013, 10, 57.
53. Zhang, C.-W.; Ou, J.-P.; Zhang, J.-Q. Parameter optimization and analysis of a vehicle suspension system controlled by magnetorheological fluid dampers. Struct. Control Health Monit. 2006, 13, 885–896.
54. Xu, S.; Wang, J.; Shou, W.; Ngo, T.; Sadick, A.-M.; Wang, X. Computer Vision Techniques in Construction: A Critical Review. Arch. Comput. Methods Eng. 2020.
55. Yan, J.; Pu, W.; Zhou, S.; Liu, H.; Bao, Z. Collaborative detection and power allocation framework for target tracking in multiple radar system. Inf. Fusion 2020, 55, 173–183.
56. Liu, D.; Wang, S.; Huang, D.; Deng, G.; Zeng, F.; Chen, H. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns. Comput. Biol. Med. 2016, 72, 185–200.
57. Wang, B.; Zhang, B.F.; Liu, X.W.; Zou, F.C. Novel infrared image enhancement optimization algorithm combined with DFOCS. Optik 2020, 224, 165476.
58. Abedini, M.; Mutalib, A.A.; Zhang, C.; Mehrmashhadi, J.; Raman, S.N.; Alipour, R.; Momeni, T.; Mussa, M.H. Large deflection behavior effect in reinforced concrete columns exposed to extreme dynamic loads. Front. Struct. Civ. Eng. 2020, 14, 532–553.
59. Mou, B.; Li, X.; Bai, Y.; Wang, L. Shear behavior of panel zones in steel beam-to-column connections with unequal depth of outer annular stiffener. J. Struct. Eng. 2019, 145, 04018247.
60. Wang, S.; Zhang, K.; van Beek, L.P.H.; Tian, X.; Bogaard, T.A. Physically-based landslide prediction over a large region: Scaling low-resolution hydrological model results for high-resolution slope stability assessment. Environ. Model. Softw. 2020, 124, 104607.
61. Zhang, K.; Wang, Q.; Chao, L.; Ye, J.; Li, Z.; Yu, Z.; Yang, T.; Ju, Q. Ground observation-based analysis of soil moisture spatiotemporal variability across a humid to semi-humid transitional zone in China. J. Hydrol. 2019, 574, 903–914.
62. Zhang, S.; Zhang, J.; Ma, Y.; Pak, R.Y.S. Vertical dynamic interactions of poroelastic soils and embedded piles considering the effects of pile-soil radial deformations. Soils Found. 2020, 61, 16–34.
63. Pourya, K.; Abdolreza, O.; Brent, V.; Arash, H.; Hamid, R. Feasibility Study of Collapse Remediation of Illinois Loess Using Electrokinetics Technique by Nanosilica and Salt. In Geo-Congress 2020; American Society of Civil Engineers: Reston, VA, USA, 2020; pp. 667–675.
64. Baziar, M.H.; Rostami, H. Earthquake Demand Energy Attenuation Model for Liquefaction Potential Assessment. Earthq. Spectra 2017, 33, 757–780.
65. Chao, M.; Kai, C.; Zhiwei, Z. Research on tobacco foreign body detection device based on machine vision. Trans. Inst. Meas. Control 2020, 42, 2857–2871.
66. Abedini, M.; Zhang, C. Performance Assessment of Concrete and Steel Material Models in LS-DYNA for Enhanced Numerical Simulation, A State of the Art Review. Arch. Comput. Methods Eng. 2020.
67. Gholipour, G.; Zhang, C.; Mousavi, A.A. Numerical analysis of axially loaded RC columns subjected to the combination of impact and blast loads. Eng. Struct. 2020, 219, 110924.
68. Mou, B.; Zhao, F.; Qiao, Q.; Wang, L.; Li, H.; He, B.; Hao, Z. Flexural behavior of beam to column joints with or without an overlying concrete slab. Eng. Struct. 2019, 199, 109616.
69. Zhang, C.; Abedini, M.; Mehrmashhadi, J. Development of pressure-impulse models and residual capacity assessment of RC columns using high fidelity Arbitrary Lagrangian-Eulerian simulation. Eng. Struct. 2020, 224, 111219.
70. Sun, Y.; Wang, J.; Wu, J.; Shi, W.; Ji, D.; Wang, X.; Zhao, X. Constraints hindering the development of high-rise modular buildings. Appl. Sci. 2020, 10, 7159.
71. Liu, C.; Huang, X.; Wu, Y.-Y.; Deng, X.; Liu, J.; Zheng, Z.; Hui, D. Review on the research progress of cement-based and geopolymer materials modified by graphene and graphene oxide. Nanotechnol. Rev. 2020, 9, 155–169.
72. Xiong, Z.; Xiao, N.; Xu, F.; Zhang, X.; Xu, Q.; Zhang, K.; Ye, C. An Equivalent Exchange Based Data Forwarding Incentive Scheme for Socially Aware Networks. J. Signal Process. Syst. 2020.
73. Zenggang, X.; Zhiwen, T.; Xiaowen, C.; Xue-min, Z.; Kaibin, Z.; Conghuan, Y. Research on Image Retrieval Algorithm Based on Combination of Color and Shape Features. J. Signal Process. Syst. 2019, 1–8.
74. Yue, H.; Wang, H.; Chen, H.; Cai, K.; Jin, Y. Automatic detection of feather defects using Lie group and fuzzy Fisher criterion for shuttlecock production. Mech. Syst. Signal Process. 2020, 141, 106690.
75. Zhu, G.; Wang, S.; Sun, L.; Ge, W.; Zhang, X. Output Feedback Adaptive Dynamic Surface Sliding-Mode Control for Quadrotor UAVs with Tracking Error Constraints. Complexity 2020, 2020, 8537198.
76. Xiong, Q.; Zhang, X.; Wang, W.-F.; Gu, Y. A Parallel Algorithm Framework for Feature Extraction of EEG Signals on MPI. Comput. Math. Methods Med. 2020, 2020, 9812019.
77. Zhang, J.; Liu, B. A review on the recent developments of sequence-based protein feature extraction methods. Curr. Bioinform. 2019, 14, 190–199.
78. Zhao, X.; Li, D.; Yang, B.; Chen, H.; Yang, X.; Yu, C.; Liu, S. A two-stage feature selection method with its application. Comput. Electr. Eng. 2015, 47, 114–125.
79. Chen, H.; Chen, A.; Xu, L.; Xie, H.; Qiao, H.; Lin, Q.; Cai, K. A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources. Agric. Water Manag. 2020, 240, 106303.
80. Li, T.; Xu, M.; Zhu, C.; Yang, R.; Wang, Z.; Guan, Z. A Deep Learning Approach for Multi-Frame In-Loop Filter of HEVC. IEEE Trans. Image Process. 2019, 28, 5663–5678.
81. Qian, J.; Feng, S.; Tao, T.; Hu, Y.; Li, Y.; Chen, Q.; Zuo, C. Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement. APL Photonics 2020, 5, 046105.
82. Qiu, T.; Shi, X.; Wang, J.; Li, Y.; Qu, S.; Cheng, Q.; Cui, T.; Sui, S. Deep Learning: A Rapid and Efficient Route to Automatic Metasurface Design. Adv. Sci. 2019, 6, 1900128.
83. Xu, M.; Li, T.; Wang, Z.; Deng, X.; Yang, R.; Guan, Z. Reducing Complexity of HEVC: A Deep Learning Approach. IEEE Trans. Image Process. 2018, 27, 5044–5059.
84. Zhu, Q. Research on Road Traffic Situation Awareness System Based on Image Big Data. IEEE Intell. Syst. 2020, 35, 18–26.
85. Liu, S.; Chan, F.T.S.; Ran, W. Decision making for the selection of cloud vendor: An improved approach under group decision-making with integrated weights and objective/subjective attributes. Expert Syst. Appl. 2016, 55, 37–47.
86. Tian, P.; Lu, H.; Feng, W.; Guan, Y.; Xue, Y. Large decrease in streamflow and sediment load of Qinghai–Tibetan Plateau driven by future climate change: A case study in Lhasa River Basin. CATENA 2020, 187, 104340.
87. Yang, W.; Pudasainee, D.; Gupta, R.; Li, W.; Wang, B.; Sun, L. An overview of inorganic particulate matter emission from coal/biomass/MSW combustion: Sampling and measurement, formation, distribution, inorganic composition and influencing factors. Fuel Process. Technol. 2020, 106657.
88. Cao, B.; Dong, W.; Lv, Z.; Gu, Y.; Singh, S.; Kumar, P. Hybrid Microgrid Many-Objective Sizing Optimization with Fuzzy Decision. IEEE Trans. Fuzzy Syst. 2020, 28, 2702–2710.
89. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626.
90. Qu, S.; Han, Y.; Wu, Z.; Raza, H. Consensus Modeling with Asymmetric Cost Based on Data-Driven Robust Optimization. Group Decis. Negot. 2020, 1–38.
91. Wu, C.; Wu, P.; Wang, J.; Jiang, R.; Chen, M.; Wang, X. Critical review of data-driven decision-making in bridge operation and maintenance. Struct. Infrastruct. Eng. 2020, 1–24.
  92. Adeli, H. Neural networks in civil engineering: 1989–2000. Comput. Aided Civ. Infrastruct. Eng. 2001, 16, 126–142. [Google Scholar] [CrossRef]
  93. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  94. Lv, Z.; Qiao, L. Deep belief network and linear perceptron based cognitive computing for collaborative robots. Appl. Soft Comput. 2020, 92, 106300. [Google Scholar] [CrossRef]
  95. Yang, J.; Li, S.; Wang, Z.; Dong, H.; Wang, J.; Tang, S. Using Deep Learning to Detect Defects in Manufacturing: A Comprehensive Survey and Current Challenges. Materials 2020, 13, 5755. [Google Scholar] [CrossRef] [PubMed]
  96. Chen, H.-L.; Wang, G.; Ma, C.; Cai, Z.-N.; Liu, W.-B.; Wang, S.-J. An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson's disease. Neurocomputing 2016, 184, 131–144. [Google Scholar] [CrossRef] [Green Version]
  97. Hu, L.; Hong, G.; Ma, J.; Wang, X.; Chen, H. An efficient machine learning approach for diagnosis of paraquat-poisoned patients. Comput. Biol. Med. 2015, 59, 116–124. [Google Scholar] [CrossRef]
  98. Wang, S.-J.; Chen, H.-L.; Yan, W.-J.; Chen, Y.-H.; Fu, X. Face recognition and micro-expression recognition based on discriminant tensor subspace analysis plus extreme learning machine. Neural Process. Lett. 2014, 39, 25–43. [Google Scholar] [CrossRef]
  99. Xia, J.; Chen, H.; Li, Q.; Zhou, M.; Chen, L.; Cai, Z.; Fang, Y.; Zhou, H. Ultrasound-based differentiation of malignant and benign thyroid Nodules: An extreme learning machine approach. Comput. Methods Programs Biomed. 2017, 147, 37–49. [Google Scholar] [CrossRef]
  100. Chen, H.; Heidari, A.A.; Chen, H.; Wang, M.; Pan, Z.; Gandomi, A.H. Multi-population differential evolution-assisted Harris hawks optimization: Framework and case studies. Future Gener. Comput. Syst. 2020, 111, 175–198. [Google Scholar] [CrossRef]
  101. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2020, 25, 26. [Google Scholar] [CrossRef]
  102. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl. Based Syst. 2016, 96, 61–75. [Google Scholar] [CrossRef]
  103. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. J. 2020, 88, 105946. [Google Scholar] [CrossRef]
  104. Tu, J.; Chen, H.; Liu, J.; Heidari, A.A.; Zhang, X.; Wang, M.; Ruby, R.; Pham, Q.-V.J.K.-B.S. Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowl. Based Syst. 2021, 212, 106642. [Google Scholar] [CrossRef]
  105. Zhao, X.; Li, D.; Yang, B.; Ma, C.; Zhu, Y.; Chen, H. Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton. Appl. Soft Comput. 2014, 24, 585–596. [Google Scholar] [CrossRef]
  106. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Liang, G.; Muhammad, K.; Chen, H.J.K.-B.S. Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowl. Based Syst. 2020, 106510. [Google Scholar] [CrossRef]
  107. Yu, C.; Chen, M.; Cheng, K.; Zhao, X.; Ma, C.; Kuang, F.; Chen, H. SGOA: Annealing-behaved grasshopper optimizer for global tasks. Eng. Comput. 2021, 1–28. [Google Scholar] [CrossRef]
  108. Xu, X.; Chen, H.-L. Adaptive computational chemotaxis based on field in bacterial foraging optimization. Soft Comput. 2014, 18, 797–807. [Google Scholar] [CrossRef]
  109. Cao, B.; Wang, X.; Zhang, W.; Song, H.; Lv, Z. A Many-Objective Optimization Model of Industrial Internet of Things Based on Private Blockchain. IEEE Netw. 2020, 34, 78–83. [Google Scholar] [CrossRef]
  110. Cao, B.; Fan, S.; Zhao, J.; Yang, P.; Muhammad, K.; Tanveer, M. Quantum-enhanced multiobjective large-scale optimization via parallelism. Swarm Evol. Comput. 2020, 57, 100697. [Google Scholar] [CrossRef]
  111. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z.J.K.-B.S. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl. Based Syst. 2020, 213, 106684. [Google Scholar] [CrossRef]
  112. Zhao, X.; Zhang, X.; Cai, Z.; Tian, X.; Wang, X.; Huang, Y.; Chen, H.; Hu, L. Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients. Comput. Biol. Chem. 2019, 78, 481–490. [Google Scholar] [CrossRef] [PubMed]
  113. Zhang, X.; Wang, D.; Zhou, Z.; Ma, Y. Robust low-rank tensor recovery with rectification and alignment. IEEE Trans. Pattern Anal. Mach. Intell. 2019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Zhang, X.; Jiang, R.; Wang, T.; Wang, J. Recursive neural network for video deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020. [Google Scholar] [CrossRef]
  115. Zhang, Y.; Liu, R.; Heidari, A.A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H.J.N. Towards augmented kernel extreme learning models for bankruptcy prediction: Algorithmic behavior and comprehensive analysis. Neurocomputing 2020. [Google Scholar] [CrossRef]
116. Akdemir, B. Prediction of Hourly Generated Electric Power Using Artificial Neural Network for Combined Cycle Power Plant. Int. J. Electr. Energy 2016, 4, 91–95.
117. Bandić, L.; Hasičić, M.; Kevrić, J. Prediction of Power Output for Combined Cycle Power Plant Using Random Decision Tree Algorithms and ANFIS. In International Symposium on Innovative and Interdisciplinary Applications of Advanced Technologies; Springer: Berlin/Heidelberg, Germany, 2019; pp. 406–416.
118. Mohammed, M.K.; Awad, O.I.; Rahman, M.; Najafi, G.; Basrawi, F.; Abd Alla, A.N.; Mamat, R. The optimum performance of the combined cycle power plant: A comprehensive review. Renew. Sustain. Energy Rev. 2017, 79, 459–474.
119. Moayedi, H.; Mehrabi, M.; Kalantar, B.; Abdullahi Mu’azu, M.; Rashid, A.S.A.; Foong, L.K.; Nguyen, H. Novel hybrids of adaptive neuro-fuzzy inference system (ANFIS) with several metaheuristic algorithms for spatial susceptibility assessment of seismic-induced landslide. Geomat. Nat. Hazards Risk 2019, 10, 1879–1911.
120. Moayedi, H.; Mehrabi, M.; Mosallanezhad, M.; Rashid, A.S.A.; Pradhan, B. Modification of landslide susceptibility mapping using optimized PSO-ANN technique. Eng. Comput. 2019, 35, 967–984.
121. Liu, J.; Wu, C.; Wu, G.; Wang, X. A novel differential search algorithm and applications for structure design. Appl. Math. Comput. 2015, 268, 246–269.
122. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2019, 1–20.
123. Fu, X.; Pace, P.; Aloi, G.; Yang, L.; Fortino, G. Topology optimization against cascading failures on wireless sensor networks using a memetic algorithm. Comput. Netw. 2020, 177, 107327.
124. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl. Based Syst. 2020, 214, 106728.
125. Yu, H.; Li, W.; Chen, C.; Liang, J.; Gui, W.; Wang, M.; Chen, H. Dynamic Gaussian bare-bones fruit fly optimizers with abandonment mechanism: Method and analysis. Eng. Comput. 2020, 1–29.
126. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203.
127. Zhang, X.; Wang, J.; Wang, T.; Jiang, R.; Xu, J.; Zhao, L. Robust Feature Learning for Adversarial Defense via Hierarchical Feature Alignment. Inf. Sci. 2020.
128. Zhang, X.; Fan, M.; Wang, D.; Zhou, P.; Tao, D. Top-k feature selection framework using robust 0-1 integer programming. IEEE Trans. Neural Netw. Learn. Syst. 2020.
129. Abedinia, O.; Amjady, N.; Ghadimi, N. Solar energy forecasting based on hybrid neural network and improved metaheuristic algorithm. Comput. Intell. 2018, 34, 241–260.
130. Zhou, G.; Moayedi, H.; Bahiraei, M.; Lyu, Z. Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings. J. Clean. Prod. 2020, 254, 120082.
131. El Mokhi, C.; Addaim, A. Optimization of Wind Turbine Interconnections in an Offshore Wind Farm Using Metaheuristic Algorithms. Sustainability 2020, 12, 5761.
132. Okewu, E.; Misra, S.; Maskeliūnas, R.; Damaševičius, R.; Fernandez-Sanz, L. Optimizing green computing awareness for environmental sustainability and economic security as a stochastic optimization problem. Sustainability 2017, 9, 1857.
133. Seyedmahmoudian, M.; Jamei, E.; Thirunavukkarasu, G.S.; Soon, T.K.; Mortimer, M.; Horan, B.; Stojcevski, A.; Mekhilef, S. Short-term forecasting of the output power of a building-integrated photovoltaic system using a metaheuristic approach. Energies 2018, 11, 1260.
134. Hu, Y.; Li, J.; Hong, M.; Ren, J.; Lin, R.; Liu, Y.; Liu, M.; Man, Y. Short term electric load forecasting model and its verification for process industrial enterprises based on hybrid GA-PSO-BPNN algorithm—A case study of papermaking process. Energy 2019, 170, 1215–1227.
135. Lorencin, I.; Anđelić, N.; Mrzljak, V.; Car, Z. Genetic algorithm approach to design of multi-layer perceptron for combined cycle power plant electrical power output estimation. Energies 2019, 12, 4352.
136. Ghosh, T.; Martinsen, K.; Dan, P.K. Data-Driven Beetle Antennae Search Algorithm for Electrical Power Modeling of a Combined Cycle Power Plant. In World Congress on Global Optimization; Springer: Berlin/Heidelberg, Germany, 2019; pp. 906–915.
137. Chatterjee, S.; Dey, N.; Ashour, A.S.; Drugarin, C.V.A. Electrical energy output prediction using cuckoo search based artificial neural network. In Smart Trends in Systems, Security and Sustainability; Springer: Berlin/Heidelberg, Germany, 2018; pp. 277–285.
138. Tüfekci, P. Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods. Int. J. Electr. Power Energy Syst. 2014, 60, 126–140.
139. Foong, L.K.; Moayedi, H.; Lyu, Z. Computational modification of neural systems using a novel stochastic search scheme, namely evaporation rate-based water cycle algorithm: An application in geotechnical issues. Eng. Comput. 2020, 1–12.
140. Moayedi, H.; Tien Bui, D.; Anastasios, D.; Kalantar, B. Spotted hyena optimizer and ant lion optimization in predicting the shear strength of soil. Appl. Sci. 2019, 9, 4738.
141. Moosavi, S.H.S.; Bardsiri, V.K. Satin bowerbird optimizer: A new optimization algorithm to optimize ANFIS for software development effort estimation. Eng. Appl. Artif. Intell. 2017, 60, 1–15.
142. Kaya, H.; Tüfekci, P.; Gürgen, F.S. Local and global learning methods for predicting power of a combined gas & steam turbine. In Proceedings of the International Conference on Emerging Trends in Computer and Electronics Engineering ICETCEE, Dubai, UAE, 24 March 2012; pp. 13–18.
143. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166.
144. Mohamed, A.-A.A.; Ali, S.; Alkhalaf, S.; Senjyu, T.; Hemeida, A.M. Optimal Allocation of Hybrid Renewable Energy System by Multi-Objective Water Cycle Algorithm. Sustainability 2019, 11, 6550.
145. Chen, C.; Wang, P.; Dong, H.; Wang, X. Hierarchical Learning Water Cycle Algorithm. Appl. Soft Comput. 2020, 86, 105935.
146. M’zoughi, F.; Bouallègue, S.; Garrido, A.J.; Garrido, I.; Ayadi, M. Water cycle algorithm–based airflow control for oscillating water column–based wave energy converters. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2020, 234, 118–133.
147. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
148. Gajula, V.; Rajathy, R. An agile optimization algorithm for vitality management along with fusion of sustainable renewable resources in microgrid. Energy Sources Part A Recovery Util. Environ. Eff. 2020, 42, 1580–1598.
149. Moayedi, H.; Kalantar, B.; Foong, L.K.; Tien Bui, D.; Motevalli, A. Application of three metaheuristic techniques in simulation of concrete slump. Appl. Sci. 2019, 9, 4340.
150. Heidari, A.A.; Faris, H.; Mirjalili, S.; Aljarah, I.; Mafarja, M. Ant lion optimizer: Theory, literature review, and application in multi-layer perceptron neural networks. In Nature-Inspired Optimizers; Springer: Berlin/Heidelberg, Germany, 2020; pp. 23–46.
151. Zhang, S.; Zhou, G.; Zhou, Y.; Luo, Q. Quantum-inspired satin bowerbird algorithm with Bloch spherical search for constrained structural optimization. J. Ind. Manag. Optim. 2017, 13.
152. Chintam, J.R.; Daniel, M. Real-power rescheduling of generators for congestion management using a novel satin bowerbird optimization algorithm. Energies 2018, 11, 183.
153. Chen, W.; Chen, X.; Peng, J.; Panahi, M.; Lee, S. Landslide susceptibility modeling based on ANFIS with teaching-learning-based optimization and Satin bowerbird optimizer. Geosci. Front. 2020, 12, 93–107.
154. Mostafa, M.A.; Abdou, A.F.; Abd El-Gawad, A.F.; El-Kholy, E. SBO-based selective harmonic elimination for nine levels asymmetrical cascaded H-bridge multilevel inverter. Aust. J. Electr. Electron. Eng. 2018, 15, 131–143.
155. Zhou, G.; Moayedi, H.; Foong, L.K. Teaching–learning-based metaheuristic scheme for modifying neural computing in appraising energy performance of building. Eng. Comput. 2020, 1–12.
156. Seyedashraf, O.; Mehrabi, M.; Akhtari, A.A. Novel approach for dam break flow modeling using computational intelligence. J. Hydrol. 2018, 559, 1028–1038.
157. Guo, Z.; Moayedi, H.; Foong, L.K.; Bahiraei, M. Optimal modification of heating, ventilation, and air conditioning system performances in residential buildings using the integration of metaheuristic optimization and neural computing. Energy Build. 2020, 214, 109866.
Figure 1. PE plotted against (a) AT, (b) V, (c) AP, and (d) RH.
Figure 2. The overall workflow of the study.
Figure 3. The artificial neural network (ANN) used in this study.
Figure 4. (a) Convergence curves for the tested population sizes (PS) of the WCA-ANN and (b) comparison of the convergence behavior of the selected networks.
Figure 5. Error magnitudes over the training dataset obtained by (a) WCA-NN, (b) ALO-NN, and (c) SBO-NN.
Figure 6. Testing results in terms of (a,c,e) correlation and (b,d,f) error histograms for the WCA-NN, ALO-NN, and SBO-NN, respectively.
Figure 7. Radar charts comparing the calculated (a) RMSE, (b) MPE, (c) MAPE, and (d) R.
Table 1. Descriptive statistics of the PE and input parameters.

Factor | Unit | Mean | Std. Error | Std. Deviation | Sample Variance | Minimum | Maximum
AT | °C | 19.7 | 0.1 | 7.5 | 55.5 | 1.8 | 37.1
V | cm Hg | 54.3 | 0.1 | 12.7 | 161.5 | 25.4 | 81.6
AP | mbar | 1013.3 | 0.1 | 5.9 | 35.3 | 992.9 | 1033.3
RH | % | 73.3 | 0.1 | 14.6 | 213.2 | 25.6 | 100.2
PE | MW | 454.4 | 0.2 | 17.1 | 291.3 | 420.3 | 495.8
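Statistics of the kind reported in Table 1 can be recomputed directly from the publicly available CCPP dataset. The snippet below is a minimal sketch (not the authors' code); the file name "ccpp.csv" and the column labels AT, V, AP, RH, PE are assumptions about how the data are stored.

```python
# Minimal sketch: Table 1-style descriptive statistics for the CCPP dataset.
# Assumptions: the data sit in "ccpp.csv" with columns AT, V, AP, RH, PE.
import pandas as pd

df = pd.read_csv("ccpp.csv")  # one row per hourly observation

stats = pd.DataFrame({
    "Mean": df.mean(),
    "Std. Error": df.sem(),
    "Std. Deviation": df.std(),
    "Sample Variance": df.var(),
    "Minimum": df.min(),
    "Maximum": df.max(),
}).round(1)

print(stats.loc[["AT", "V", "AP", "RH", "PE"]])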
Table 2. The optimized parameters of the WCA configurations.

  | For Zi (PS = 400) | | | | | For Yi (PS = 300) | | | |
i | Wi1 | Wi2 | Wi3 | Wi4 | bi | Wi1 | Wi2 | Wi3 | Wi4 | bi
1 | −1.238 | 0.344 | 1.240 | −1.640 | 2.425 | 0.887 | −1.670 | 1.517 | 0.068 | −2.425
2 | 1.482 | −1.851 | 0.311 | 0.399 | −1.819 | −0.042 | 2.181 | −0.983 | −0.395 | 1.819
3 | −0.870 | 1.152 | −1.755 | −0.847 | 1.212 | 1.035 | 1.770 | 0.848 | 0.979 | −1.212
4 | −0.830 | 0.172 | 1.716 | 1.489 | 0.606 | 0.639 | 1.690 | 1.572 | −0.378 | −0.606
5 | 0.864 | −1.691 | −1.343 | 0.685 | 0.000 | −1.587 | −1.512 | −1.016 | −0.213 | 0.000
6 | −1.394 | −1.677 | −1.052 | −0.136 | −0.606 | 1.256 | 1.282 | −1.204 | 1.100 | 0.606
7 | −2.004 | −1.261 | 0.276 | −0.446 | −1.212 | −0.313 | 0.385 | −1.739 | −1.615 | −1.212
8 | 1.609 | 0.883 | 1.532 | 0.402 | 1.819 | 1.277 | 0.190 | −1.739 | −1.090 | 1.819
9 | −1.876 | −0.740 | 0.819 | −1.069 | −2.425 | −0.514 | −1.679 | 1.003 | −1.339 | −2.425
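To illustrate how the Table 2 parameters enter the predictive formulas, the sketch below evaluates a 4-9-1 MLP using the Zi (PS = 400) hidden-layer weights and biases from the table. The tanh hidden activation, the input scaling, and the output-layer weights lw and bias b_out are assumptions or placeholders, since those quantities are not part of this table.

```python
# Sketch of how the Table 2 parameters would feed a 4-9-1 MLP predicting PE.
# Hidden-layer weights/biases are the Zi (PS = 400) values from Table 2; the
# tanh activation, input scaling, and output-layer parameters (lw, b_out)
# are assumptions/placeholders, not values reported in this table.
import numpy as np

W = np.array([  # rows: hidden neurons i = 1..9; columns: [Wi1, Wi2, Wi3, Wi4]
    [-1.238,  0.344,  1.240, -1.640],
    [ 1.482, -1.851,  0.311,  0.399],
    [-0.870,  1.152, -1.755, -0.847],
    [-0.830,  0.172,  1.716,  1.489],
    [ 0.864, -1.691, -1.343,  0.685],
    [-1.394, -1.677, -1.052, -0.136],
    [-2.004, -1.261,  0.276, -0.446],
    [ 1.609,  0.883,  1.532,  0.402],
    [-1.876, -0.740,  0.819, -1.069],
])
b = np.array([2.425, -1.819, 1.212, 0.606, 0.000, -0.606, -1.212, 1.819, -2.425])

lw = np.full(9, 0.1)   # placeholder output-layer weights (not given in Table 2)
b_out = 0.0            # placeholder output-layer bias

def predict_pe(x_scaled):
    """Evaluate the MLP for one scaled input vector [AT, V, AP, RH]."""
    hidden = np.tanh(W @ x_scaled + b)   # tansig-style hidden activation (assumed)
    return float(lw @ hidden + b_out)

print(predict_pe(np.array([0.2, -0.1, 0.05, 0.3])))
```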
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
