Article

Soft Computing Techniques to Model the Compressive Strength in Geo-Polymer Concrete: Approaches Based on an Adaptive Neuro-Fuzzy Inference System

1 Key Laboratory of Xinjiang Coal Resources Green Mining (Xinjiang Institute of Engineering), Ministry of Education, Urumqi 830023, China
2 Xinjiang Institute of Engineering, School of Mining Engineering and Geology, Urumqi 830023, China
3 School of Mining Engineering, China University of Mining and Technology, Xuzhou 221116, China
4 School of Civil Engineering, Guangzhou University, Guangzhou 510006, China
5 Higher School of Advanced Digital Technologies, Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia
6 School of Civil Engineering and Architecture, Linyi University, Linyi 276000, China
* Authors to whom correspondence should be addressed.
Buildings 2024, 14(11), 3505; https://doi.org/10.3390/buildings14113505
Submission received: 20 September 2024 / Revised: 29 October 2024 / Accepted: 30 October 2024 / Published: 1 November 2024

Abstract
Media visual sculpture is a landscape element with high carbon emissions. To reduce carbon emissions in the process of creating and displaying visual art and structures (visual communication), geo-polymer concrete (GePC) is considered by designers. It has emerged as an environmentally friendly substitute for traditional concrete, boasting reduced carbon emissions and improved longevity. This research delves into the prediction of the compressive strength of GePC (CSGePC) employing various soft computing techniques, namely support vector regression (SVR), artificial neural networks (ANNs), adaptive neuro-fuzzy inference systems (ANFISs), and hybrid methodologies combining the Genetic Algorithm (GA) or Firefly Algorithm (FFA) with ANFISs. The investigation utilizes empirical datasets encompassing variations in concrete constituents and compressive strength. Evaluative metrics, including RMSE, MAE, R2, VAF, NS, WI, and SI, are employed to assess predictive accuracy. The results illustrate the remarkable precision of all soft computing approaches in predicting CSGePC, with the hybrid models demonstrating superior performance. In particular, the FFA-ANFISs model achieves a MAE of 0.8114, NS of 0.9858, RMSE of 1.0322, VAF of 98.7778%, WI of 0.9236, R2 of 0.994, and SI of 0.0358, while the GA-ANFISs model records a MAE of 1.4143, NS of 0.9671, RMSE of 1.5693, VAF of 96.8278%, WI of 0.8207, R2 of 0.987, and SI of 0.0532. These findings underscore the effectiveness of soft computing techniques in predicting CSGePC, with hybrid models showing particularly promising results. The practical application of the model is demonstrated through its reliable prediction of CSGePC, which is crucial for optimizing material properties in sustainable construction. Additionally, the model's performance was compared with the existing literature, showing significant improvements in predictive accuracy and robustness.
These findings contribute to the development of more efficient and environmentally friendly construction materials, offering valuable insights for real-world engineering applications.

1. Introduction

In the modern era, humanity is reaching significant developmental milestones. The advancement of infrastructure, as evidenced by multiple studies [1], holds the potential to foster societal progress [2]. The construction industry therefore assumes a pivotal role in societal advancement. Concrete stands out as a primary material for constructing various structures. Unlike traditional concrete, which has dominated residential construction for the past two to three decades, geo-polymer concrete (GPC) has emerged as an environmentally friendly alternative. GPC utilizes ground granulated blast furnace slag (GGBFS), fly ash, and an alkaline solution to replace cement entirely. Cement, the primary binding agent in conventional concrete, is responsible for significant carbon dioxide emissions during production. By substituting cement with GGBFS and fly ash, geopolymer concrete significantly diminishes carbon footprints, reducing emissions by approximately 80% compared to conventional concrete.
Escalating carbon dioxide emissions have direct implications for global warming [3], necessitating a commitment to sustainable development in construction [4]. Geopolymer has proven to be more cost-effective than traditional concrete, reducing costs by around 40%. With its superior strength and durability, geopolymer concrete has emerged as a viable alternative [5]. Geo-polymer concretes entirely replace the cement in the mix with fly ash, slag, or metakaolin, necessitating an alkaline solution to activate the pozzolanic materials for bonding. Sodium hydroxide and sodium silicate serve as potential alkaline agents. The distinct reactions and bonds of geopolymer concrete, first elucidated by Davidovits [6], position it as a promising alternative to conventional concrete, exhibiting superior performance in laboratory tests.
The strength of geopolymer concrete is influenced by various parameters, encompassing both external and internal components. External components such as temperature, curing duration, humidity, and air content play significant roles, while internal factors include material quality and composition variations. Binding material compositions and particle size serve as crucial factors in initiating and strengthening reactions, with ratios playing a pivotal role in controlling strength as required. Increased slag content augments compressive strength in ambient-cured geopolymer concrete. The rapid reaction of finer fly ash and slag particles, owing to their increased surface area, enhances early concrete strength. The ratio of liquid to binder is also crucial for reaction and strength attainment. Although water is initially required for geopolymer reactions, it evaporates during hardening, necessitating only a minimal amount for reaction with all geopolymer concrete components. The optimal liquid content directly impacts geopolymer concrete strength. The selection and usage of superplasticizers play a critical role in bond formation, with SNF-based superplasticizers being best suited for geopolymer concrete bonding.
The quality and concentration of alkaline solutions are crucial in initiating geopolymer reactions, directly affecting concrete performance and strength. Curing temperature and conditions significantly influence reaching design strength, with oven-cured materials exhibiting faster strength gain than ambient-cured specimens. Geopolymer concrete demonstrates remarkable stability against severe environmental conditions, underpinning its potential for sustainable developments in civil industries. With numerous applications worldwide, including its utilization by the Delhi Metro Rail Corporation (DMRC) in tunnelling and platform constructions in Delhi, India, geopolymer concrete holds promise as a sustainable construction material [7].
The determination of concrete mix strength is time-consuming and labor-intensive. However, machine learning methods offer an effective means to predict strength based on previous datasets, without the need for destructive testing. By constructing models derived from historical data, machine learning accurately predicts strength, with various techniques employed to reduce prediction errors. Artificial neural networks (ANNs) stand out as the most prominent method in concrete strength prediction.
The utilization of alkali binders (ABs) in concrete mixes introduces complexities in predicting concrete compressive strength (CS), thus necessitating the application of increasingly sophisticated computational methods to formulate predictive equations [8,9,10]. Within materials science, soft computing methods such as ANNs are commonly employed by researchers to estimate the compressive strength (CS) and modulus of elasticity of concrete. A thorough examination of the utilization of ANNs in modeling concrete materials is available in existing literature [11,12,13]. Over the past decade, alongside ANN models, various computational techniques including fuzzy logic and evolutionary methods have been utilized to characterize concrete material properties [14,15,16]. Fuzzy systems and fuzzy numbers have found acceptable applications in mining and civil engineering domains [17,18,19,20,21,22,23,24,25,26]. Recent studies have shown that machine learning techniques are widely applied for predicting compressive strength and other performance parameters in diverse construction materials. For instance, the compressive strength of rubberized slag-based geopolymer concrete has been estimated using machine learning-based models, achieving notable accuracy [27]. Moreover, advancements in machine learning have been leveraged to predict the compressive strength of AAC blocks, illustrating the utility of these techniques in non-traditional construction materials [28]. Furthermore, ensemble systems have been used to predict the axial-bearing capacity of fully grouted rock bolting systems, further emphasizing the versatility of these approaches in structural engineering [29]. These studies underscore the potential and effectiveness of machine learning models, aligning with the objectives of our study to enhance predictive capabilities in geopolymer concrete compressive strength. 
Upon reviewing the literature, it becomes apparent that the majority of research relies on either conventional machine learning (ML) methods [30,31] such as support vector machines, gene expression programming, ANNs, random forest, data envelopment analysis, response surface methodology, adaptive neuro-fuzzy inference systems, gradient boosting, Gaussian process regression, and multivariate adaptive regression splines or hybrid models combining ML and optimization algorithms such as particle swarm optimization, grey wolf optimization, and the jellyfish search optimizer [32,33,34] to estimate concrete CS. However, the practical implications of these established models have not been thoroughly discussed previously. Some studies [35,36] have employed hybrid ML systems to address the limitations of ML techniques in estimating outputs, such as local minima and overfitting problems. Nonetheless, detailed experimental simulations for tackling these issues have not been adequately explored. Given that each approach purports to reliably anticipate material mechanical properties, it is essential to ascertain which technique is suitable for estimating the CS of contemporary concrete. Furthermore, numerous endeavors have been undertaken to implement machine learning techniques in the realm of concrete and civil projects [32,37,38,39,40,41,42,43,44,45,46,47]. Mustapha et al. [48] conducted a comprehensive comparative analysis of gradient-boosting ensemble models for estimating the compressive strength of quaternary blend concrete. Their findings highlight the superiority of CatBoost in terms of predictive accuracy, with an R2 value of 0.9838, demonstrating significant improvements over other gradient-boosting models. Similarly, Alhakeem et al. [49] employed a hybrid Gradient Boosting Regression Tree (GBRT) model combined with GridSearch CV hyperparameter optimization to predict the compressive strength of eco-friendly concrete.
This hybrid model achieved an R2 of 0.9612 and an RMSE of 2.3214, underscoring the effectiveness of grid search optimization in enhancing model performance. Moreover, the study by Faraz et al. [50] explored the use of gene expression programming (GEP) and multigene expression programming (MEP) to predict the compressive strength of metakaolin-based concrete. Their results indicated that MEP models outperformed GEP models, with the best MEP model achieving an R2 value of 0.96. This study also identified water–binder ratio, superplasticizer percentage, and age as critical factors influencing compressive strength. Shah et al. [51] utilized MEP to model the mechanical properties of E-waste aggregate-based concrete, achieving high accuracy with R-values exceeding 0.9 for both compressive and tensile strength predictions. The sensitivity analysis revealed that the water–cement ratio and E-waste aggregate percentages were the most influential parameters.
These studies collectively demonstrate the potential of advanced machine learning models, particularly ensemble and hybrid approaches, in accurately predicting the properties of sustainable concrete. They also highlight the importance of parameter optimization and sensitivity analysis in enhancing model performance and understanding the influence of various input factors.
Dey et al. [52] found that utilizing waste materials, specifically recycled glass powder (GP) and gold mine tailings (MTs), as sustainable alternatives in geopolymer concrete production is promising. Their study employed response surface methodology to identify the optimal material proportions that yield maximum compressive strength. Comprehensive evaluations were conducted on the concrete mixtures’ fresh-state properties, mechanical characteristics, and long-term durability. The authors found that GP improves workability, while MT decreases it due to its increased fineness and larger surface area. They reported that adding GP and MT enhances compressive strength by up to 25%, though GP alone slightly reduces mechanical properties. Both materials positively influence flexural and splitting tensile strengths, albeit to a lesser extent than compressive strength. Durability tests, including 300 freeze–thaw cycles and rapid chloride permeability assessments, demonstrate that GP and MT mixtures exceed standard benchmarks. The study concludes that incorporating GP and MT together improves mechanical properties and enhances durability, underscoring their potential as sustainable substitutes in concrete production. In another study, Martini et al. [53] investigated the mechanical properties of concrete mixes containing recycled concrete aggregate (RCA) sourced from demolished buildings in Abu Dhabi, with the aim of promoting sustainable construction practices. They utilized ground granulated blast-furnace slag and fly ash as supplementary cementitious materials across 70 different concrete mixes, incorporating varying levels of RCA replacement (0%, 20%, 40%, 60%, and 100%). Through uniaxial compressive and flexural tests, the researchers found that concrete with 20% RCA could be effectively used in structural applications, as its strength exceeded 45 MPa.
Numerous efforts have been dedicated to determining the compressive strength of geopolymer concrete (CSGePC) in laboratory settings, a process also known as direct determination. Wakjira et al. [54] propose a novel framework for the strength prediction and multi-objective optimization (MOO) of cost-effective and environmentally sustainable ultra-high-performance concrete (UHPC), facilitating intelligent, sustainable, and resilient construction practices. Their framework integrates various tree- and boosting ensemble-based machine learning (ML) models to develop an accurate and reliable prediction model for the uniaxial compressive strength of UHPC. Their optimized models are combined into a super learner model, which serves as a robust predictive tool and one of the objective functions in the MOO problem. TG Wakjira and MS Alam [55] have developed a predictive model powered by explainable machine learning (ML) to address challenges in the performance-based seismic design (PBSD) of ultra-high-performance concrete (UHPC) bridge columns. UHPC, known for its superior strength, toughness, and durability, faces a significant gap in the accurate quantification of damage states using appropriate engineering demand parameters (EDPs). Their study aims to bridge that gap by predicting the drift ratio limit states of UHPC bridge columns across four damage states.
The procedures involved in sample preparation and test execution have proven to be both time-consuming and expensive, as underscored by various sources [56,57,58,59,60]. Scholars have sought to introduce machine learning models as a potential solution to this challenge. Moreover, these methodologies have demonstrated effectiveness in various aspects of civil engineering applications [37,42,61,62,63,64,65,66,67]. Therefore, the current investigation delved into the examination of CSGePC utilizing an array of soft computing methodologies, including support vector regression (SVR), ANNs, adaptive neuro-fuzzy inference systems (ANFISs), and hybrid approaches that amalgamate Genetic Algorithms (GAs) or the Firefly Algorithm (FFA) with ANFISs. SVR is known for its robustness in handling small- to medium-sized datasets and its ability to model complex nonlinear relationships, which are often encountered in material properties prediction. This technique has strong generalization capabilities due to its margin maximization principle, making it suitable for scenarios where the training data may be limited but of high quality. On the other hand, ANNs are highly flexible and powerful tools capable of capturing complex patterns and relationships in data due to their nonlinear architecture and ability to learn from data. This technique has been widely used in materials science for property prediction tasks, consistently showing high predictive accuracy. Furthermore, ANNs’ adaptability makes them particularly suitable for handling the varied nature of geo-polymer concrete datasets, where the relationships between inputs and outputs can be highly nonlinear and complex. ANFISs combine the learning capability of neural networks with the linguistic reasoning of fuzzy logic. This hybrid approach allows ANFISs to handle uncertainties and imprecise information effectively.
Moreover, ANFISs are particularly useful for problems where the relationship between inputs and outputs is nonlinear and ambiguous. The selection of SVR, ANNs, and ANFISs leverages their complementary strengths. SVR provides robust and generalizable models, ANNs offer flexibility in capturing complex nonlinear relationships, and ANFISs excel in handling uncertainties and imprecise data. This combination ensures a comprehensive approach to modeling the CSGePC. Analyzing experimental datasets containing variations in concrete constituents and compressive strength facilitated this study. The outcomes unveiled the commendable accuracy of all utilized soft computing techniques in predicting the CSGePC. Specifically, SVR, ANNs, ANFISs, as well as the hybrid GA-ANFIS and hybrid FFA-ANFIS models showcased adept performance in grasping the intricate correlations between concrete constituents and compressive strength. The utilization of statistical metrics enabled the assessment of prediction accuracy, indicating favorable outcomes across diverse modeling strategies. These discoveries underscore the efficacy of soft computing techniques as valuable instruments for prognosticating concrete strength, presenting a non-destructive and resource-conserving alternative to conventional testing procedures. Some literature models for predicting different characteristics of concretes are reported in Table 1.
Geo-polymer concrete (GePC) has gained significant attention as a sustainable alternative to traditional concrete due to its reduced carbon emissions and enhanced durability. This study focuses on predicting the CSGePC by employing various soft computing techniques. To achieve this, several critical input parameters that influence the CSGePC were considered.
Among these parameters, fly ash plays a pivotal role, as it serves as the primary binder in GePC, replacing conventional cement and thereby reducing carbon emissions. The rest period (the duration before the curing process starts) and the curing temperature significantly affect the hydration process and the resultant strength of the concrete. The curing period, which is the length of time the concrete is allowed to cure, further impacts the development of its strength. Additionally, the NaOH/Na2SiO3 ratio, referring to the proportion of sodium hydroxide to sodium silicate in the alkaline activator solution, is crucial for the geopolymerization process. The inclusion of a superplasticizer is another essential factor, as it enhances the workability of the concrete mix without compromising its strength. Extra water added to the mix helps achieve the desired consistency, while the molarity of the NaOH solution used in the alkaline activator influences the dissolution of fly ash particles and the formation of the geopolymeric gel. The alkaline activator/binder ratio, which is the proportion of the alkaline activator to the binder (fly ash), also plays a critical role in the mix’s overall performance.
Furthermore, the type and size of aggregates used in the mix are important parameters. The coarse aggregate and fine aggregate types influence the density and strength of the concrete. By carefully selecting and optimizing these parameters, this study aims to develop high-performance, environmentally friendly concrete.
Moreover, the effective utilization of these approaches with CSGePC emphasizes their flexibility and significance concerning sustainable building materials. This highlights the pivotal role of technological progress in tackling relevant environmental issues within construction. In this study, visual communication refers to the creation and presentation of visual art and structures, such as media visual sculptures. These structures, which typically have high carbon emissions, can benefit from the use of GePC, thereby promoting more sustainable practices in visual communication. Through the incorporation of soft computing techniques, professionals in construction have the potential to refine material utilization, bolster structural integrity, and diminish environmental impacts. As a result, this investigation enriches the ongoing dialogue on sustainable construction methodologies, advocating for the assimilation of inventive approaches to facilitate environmentally conscious and streamlined construction processes.
Furthermore, the implications of this study extend beyond CSGePC to encompass diverse concrete types and construction materials. The resilient performance of soft computing methods implies their wide-ranging applicability in various construction industry contexts. Consequently, this research not only enhances the comprehension of CSGePC but also sets the stage for further examination and adoption of soft computing methodologies in construction engineering and material science.
This study introduces a novel approach to predicting CSGePC by employing hybrid soft computing techniques, specifically GA-ANFIS and FFA-ANFIS models. Unlike previous research, which primarily relied on single soft computing methods, our work integrates the Genetic and Firefly Algorithms to optimize the ANFIS model, resulting in enhanced prediction accuracy. Furthermore, we provide a detailed comparison with existing models, demonstrating the superiority of these hybrid techniques in terms of prediction performance. The novelty of this research lies in its potential to promote sustainable construction practices through accurate, non-destructive prediction methods, reducing the need for extensive physical testing and conserving resources. This contribution paves the way for future studies to explore and build upon hybrid methodologies for broader applications in construction engineering and material science.

2. Data Sets

This study utilizes 11 input parameters and 61 data samples to predict CSGePC, with the requisite database collected from studies conducted by Verma [7] (see Supplementary Materials). Geopolymer concretes consist of fly ash (FA), slag (SS), sodium hydroxide (SH), fine aggregate (FAg), coarse aggregate (CA), superplasticizers (Su), and water. Before commencing large-scale concrete production, thorough quality inspections are consistently carried out in laboratories to ensure the suitability of raw materials. Fly ash is typically sourced from nearby power plants, while alkaline solutions and superplasticizers are commonly obtained from the chemical industry. Coarse aggregate and fine aggregate are acquired from easily accessible local resources. Water usage conforms to regional requirements and availability. Alkaline solutions are prepared at least 20–24 h prior to mixing due to the significant production time required. Preparing geopolymer concrete also takes longer than traditional concrete because its mixing process is more time-consuming. Manufactured sand (M-sand), with its well-graded particles, yields superior results compared to regular sand and is extensively utilized in geopolymer concretes.
To determine the composition of materials used, X-ray fluorescence spectroscopy (XRF) tests are conducted on fly ash and other pozzolanic components. This analysis assesses the ratios of alumina, silica, and sodium oxides in the mixtures. When procuring chemicals from different suppliers, information regarding mineral quantities or minimum chemical solution analyses is provided. Laboratory testing is conducted to ascertain particle size, bulk density, and other crucial specifications. These assessments must be completed before proceeding with further concrete composition mixing.
The current investigation aims to devise a strategy to assist users in enhancing various aspects of CSGePC for construction purposes. The input and output parameters, along with their specifications, are detailed in Table 2. A total of 61 data points are compiled to establish the CSGePC database. Fly ash, rest periods (RPs), curing temperature (CT), curing periods (CPs), ratio of sodium hydroxide to sodium silicate (NaOH/Na2SiO3), superplasticizers (Su), equivalent water (EW), molarity (M), alkali activator binder (AAB), coarse aggregate (CA), and fine aggregate (FAg) are identified as effective variables for modeling to train artificial intelligence (AI) models, while CSGePC is considered the output parameter for the AI process. In machine learning applications for materials science, limited experimental data can pose challenges such as model overfitting and reduced generalizability. Advanced data generation methods like Conditional Tabular Generative Adversarial Networks (CTGANs) offer a solution by creating synthetic data that retain the statistical properties of the original dataset. The study conducted by Wakjira [99] successfully applied CTGANs to augment their dataset, demonstrating its efficacy in enhancing data-driven models. While the current dataset has proven sufficient for achieving robust predictive performance, future work may incorporate CTGANs to bolster the data further and improve model accuracy.
A violin plot illustrating CSGePC data is presented in Figure 1, while a heatmap depicting the correlation coefficient of CSGePC data is displayed in Figure 2. This figure shows the Pearson correlation between parameters [100]. In Figure 2, the correlation matrix presents the relationships between input parameters and CSGePC. The values reflect both the strength and direction of these relationships. A positive correlation (+) implies a direct relationship, where an increase in the input leads to an increase in the output, whereas a negative correlation (−) indicates an inverse relationship, where an increase in the input results in a decrease in the output. A correlation value of zero (0) means no relationship exists between the input and output variables [101]. The strongest positive correlation is observed between CA and CSGePC (0.76), highlighting CA as a critical factor influencing compressive strength. Notably, the strongest negative correlation is between CP and CSGePC (−0.80), indicating that higher values of CP are associated with lower CSGePC. A value of zero, for instance as seen between NaOH/Na2SiO3 and many other variables, suggests no observable linear relationship between those specific parameters. By analyzing these correlations, a deeper understanding of the most influential parameters on CSGePC was gained, aiding in the optimization of geopolymer concrete formulations. To ensure the quality of our dataset, box plots were employed to visually inspect the data for potential outliers, as shown in Figure 3. Figure 3 presents boxplots of the effective parameters used to detect outlier data. The boxplots display the distribution of each input parameter, including median values (represented by the lines within the boxes) and the interquartile range (IQR). The whiskers extend to the minimum and maximum data points within 1.5 times the IQR, with points outside of this range considered outliers.
These visualizations help identify any anomalies or deviations in the dataset, ensuring the model’s robustness against the influence of outliers. Parameters such as Su, CT, and RP show a wider distribution, with Su exhibiting potential outliers based on the whisker extensions. Other parameters, like EW and NaOH/Na2SiO3, have very narrow ranges, indicating less variation and no outliers detected. By understanding the spread and variability of each parameter, the outlier detection process becomes clearer, supporting the model’s accuracy and reliability in the predictive tasks.
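The 1.5×IQR whisker rule used for the boxplot screening above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the superplasticizer dosages below are hypothetical values chosen so that one point clearly falls outside the whiskers.

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule, as in Figure 3)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical superplasticizer dosages (kg/m^3); 25.0 lies far outside the bulk.
su = [5.0, 5.5, 6.0, 6.2, 6.5, 7.0, 7.1, 25.0]
print(iqr_outliers(su))  # -> [25.0]
```

Points returned by this screen correspond to the markers beyond the whisker limits in a boxplot; whether to remove or keep them is a modeling decision.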
The optimal ranges for each input parameter were primarily derived from the dataset used to train the model, ensuring that the input values stayed within the statistical distribution of the training data. These ranges were further validated by comparing them with findings from previous experimental studies on geopolymer concrete, thus confirming their practical and physical relevance.
While no additional experimental trials were specifically conducted to revalidate these ranges within this study, the combination of data-driven analysis and established experimental results from the literature provided a robust framework for defining these input intervals. Future studies may explore experimental trials aimed explicitly at refining these ranges further to improve model performance.

3. Research Methodology

The flowchart of the research is demonstrated in Figure 4. To conduct this research, experimental datasets containing variations in concrete constituents and compressive strength were first collected. The data included 61 data samples and 11 parameters, such as fly ash content, rest period, curing temperature, curing period, NaOH/Na2SiO3 ratio, superplasticizer, extra water, NaOH solution molarity, alkaline activator/binder ratio, and the type and size of aggregates. The necessary dataset was sourced from the work of Verma [7]. This comprehensive dataset ensured a wide range of values for each parameter, capturing diverse concrete compositions. Next, the data were normalized to ensure all input parameters were on a comparable scale, which is crucial for effective model training. The dataset was then split into training and testing sets in a ratio of 80:20 to evaluate model performance accurately. Any missing data points were addressed through imputation or removal to maintain a clean dataset for analysis.
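The preprocessing steps above (normalization followed by a shuffled 80:20 split) can be sketched as follows. The array values are synthetic stand-ins, not the Verma [7] data; only the shapes (61 samples, 11 inputs) match the study, and min-max scaling is one of several normalization choices the text leaves open.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the 61-sample, 11-feature CSGePC table (values are synthetic).
X = rng.uniform(0.0, 100.0, size=(61, 11))
y = rng.uniform(10.0, 60.0, size=61)

# Min-max normalization so every input lies in [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Shuffled 80:20 split, as described in the text.
idx = rng.permutation(len(X_norm))
cut = int(0.8 * len(idx))            # 48 training rows, 13 testing rows
train_idx, test_idx = idx[:cut], idx[cut:]
X_train, X_test = X_norm[train_idx], X_norm[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
print(X_train.shape, X_test.shape)   # (48, 11) (13, 11)
```

In practice the scaling parameters (per-column min and max) would be computed on the training split only and reused for the test split, to avoid information leakage.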
For model development, several soft computing techniques were employed, including SVM, ANNs, and ANFISs. For the SVM, an appropriate kernel function was selected, namely the sigmoid kernel, and its parameters were optimized using methods such as grid search. The SVM model was then trained on the training dataset, utilizing a maximum-margin hyperplane for classification. For the ANN, the architecture was defined by determining the number of layers and neurons in each layer. Activation functions were chosen to suit the specific requirements of the data. The ANN was implemented in a Python environment, using the “sklearn” package for functions such as “StandardScaler” to preprocess the data. The ANFIS model combined the strengths of neural networks and fuzzy inference systems to tackle nonlinear problems. The architecture comprised five layers, starting with fuzzification, where inputs were converted into fuzzy membership functions. Subsequent layers involved the evaluation of firing strengths, normalization, defuzzification, and the aggregation of outputs. Parameters for the membership functions were determined during training using input–output data.
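The first three ANFIS layers described above can be illustrated with a toy two-input example. This is a sketch only: the Gaussian membership-function centres and widths below are illustrative placeholders, not the trained values from this study.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Layer 1 (fuzzification): Gaussian membership degree of x for a fuzzy set
    with centre c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Two inputs, each covered by two fuzzy sets (centres/widths are illustrative).
x1, x2 = 0.3, 0.7
mu1 = [gauss_mf(x1, c, 0.2) for c in (0.0, 1.0)]
mu2 = [gauss_mf(x2, c, 0.2) for c in (0.0, 1.0)]

# Layer 2: rule firing strengths via the product T-norm (one rule per set pairing).
w = np.array([m1 * m2 for m1 in mu1 for m2 in mu2])

# Layer 3: normalized firing strengths. Layers 4-5 would weight and sum
# first-order consequents f_i = p_i*x1 + q_i*x2 + r_i, whose parameters
# are the ones determined during training.
w_bar = w / w.sum()
print(np.round(w_bar, 3))
```

The membership parameters (c, sigma) and the consequent parameters (p, q, r) are exactly the quantities the GA and FFA optimizers tune in the hybrid models.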
To enhance the performance of the ANFIS models, metaheuristic optimization algorithms such as Genetic Algorithm (GA) and Firefly Algorithm (FFA) were employed. The GA, inspired by natural selection, optimized the parameters through processes like crossover, mutation, and selection. The FFA, inspired by the flashing behavior of fireflies, guided the search for optimal solutions by leveraging attraction mechanisms and random movement.
In addition to model development and validation, a comparative analysis of the performance of the models was conducted against the existing literature. This comparison aims to evaluate whether the hybrid models, particularly FFA-ANFIS and GA-ANFIS, offer improvements over previously reported methodologies for predicting the compressive strength of geopolymer concrete.

3.1. Support Vector Machine (SVM)

SVMs are supervised learning tools for classification, regression, and outlier detection. First introduced by Cortes and Vapnik [102] and based on statistical learning theory, SVMs support both linear and multiple non-linear classifications. A non-linear SVM employs kernels for classification, whereas the linear type employs a maximum-margin hyperplane. Another use of the SVM framework is support vector clustering (SVC), a form of unsupervised learning [103]. Linear, polynomial, radial basis function (RBF), and sigmoid kernel functions are the most prevalent kinds used in SVM algorithms [104]. A sigmoid kernel function was used in this work. This kernel is analogous to a two-layer perceptron neural network, where it serves as the transfer function for the nodes, and it is useful for adjusting the prediction process in a manner analogous to neural networks. During training, SVMs classify the input dataset using a maximum-margin hyperplane obtained by solving an optimization problem. Specialized techniques exist to swiftly solve the quadratic programming problem that arises from SVMs; these algorithms often use heuristics to subdivide the problem into smaller, more tractable portions. Another way to handle the Karush–Kuhn–Tucker conditions of the primal and dual problems is to use an interior-point technique with Newton-like iterations [105]. This method solves the problem directly, rather than as a chain of decomposed subproblems.
A low-rank approximation to the kernel matrix is often used in the kernel trick to avoid solving a linear system involving a large kernel matrix [103]. An SVR model was suggested by Drucker et al. [106] as an SVM variant for prediction and regression. The SVR model is derived from support-vector classification and relies only on a subset of the training data, because the loss function used to construct the model is indifferent to training points that lie within the margin, i.e., sufficiently close to the model’s prediction [107,108]. Equation (1) needs to be solved in order to train the basic SVR [108]:
$$\begin{aligned} \text{minimize}\quad & \frac{\lVert w \rVert^{2}}{2} + C\sum_{i=1}^{N}\left(\xi_i + \xi_i^{*}\right)\\ \text{subject to}\quad & y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i,\\ & \langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^{*},\\ & \xi_i,\ \xi_i^{*} \ge 0 \end{aligned}$$
In Equation (1), $w$ is the vector normal to the hyperplane, $C$ is the regularization constant penalizing deviations larger than $\varepsilon$, $b$ is the offset of the hyperplane from the origin, $\xi_i$ and $\xi_i^{*}$ are slack variables, $x_i$ is a training sample with target value $y_i$, $\varepsilon$ is a free parameter, and $i$ ranges from 1 to $N$. The prediction for a sample is the inner product plus the intercept, $\langle w, x_i\rangle + b$. The free parameter $\varepsilon$ acts as a threshold: predictions must fall within a tube of width $\varepsilon$ around the actual values. When this requirement cannot be met exactly, the slack variables allow an approximate solution and account for errors. Different values of $\varepsilon$ yield different objective values and therefore influence the outcomes of the regression analysis and predictions.
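A minimal $\varepsilon$-insensitive SVR sketch in Python illustrates the formulation of Equation (1) on synthetic data; scikit-learn is used here for brevity, whereas the study itself used LIBSVM under MATLAB.

```python
# Minimal epsilon-insensitive SVR on synthetic data. C and epsilon play the
# roles they have in Equation (1).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, (100, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)

# C penalizes the slack variables; epsilon sets the insensitive-tube width
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
print(round(model.score(X, y), 3))  # R^2 on the training data
```

Only the samples falling outside the $\varepsilon$-tube become support vectors, which is why the fitted model depends on a subset of the training data.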

3.2. Artificial Neural Network (ANN)

An ANN is a parallel information-processing technique comprising a large number of coupled processing units (neurons) that form a complicated network, used to represent complex relationships between inputs and outputs in experimental data [109]. One of the most successful types of neural networks is the multi-layer perceptron (MLP), which normally consists of a basic three-layer design. The training data samples are fed into the input layer, the leftmost layer of the network. The hidden layer consists of all nodes located between the input and output layers; it helps the neural network capture the intricate connections within the data and serves as the data-processing layer. The final layer, formed from the previous two, is the output layer, which provides the final results. The number of neurons and hidden layers is defined by the complexity of the experimental data. Known, non-constant parameters induce output changes, allowing the system to handle a wide variety of nonlinear data types. The model’s intrinsic capability to create nonlinear mappings between inputs and outputs is particularly advantageous when processing a substantial volume of fuzzy or random data. A Python 3.10 environment was utilized to create the ANN employed in this research; the data were preprocessed with the “StandardScaler” function from the “sklearn” package.

3.3. Adaptive Neuro-Fuzzy Inference System (ANFIS)

The ANFIS takes a hybrid approach to problem solving, combining an adaptive ANN with a fuzzy inference system (FIS). It has repeatedly proved its efficacy for nonlinear problems across a variety of investigations. Jang first presented it in 1993 [110]. The ANFIS mechanism determines the best parameters for the membership functions (MFs) through training on input–output data. The architecture of the ANFIS, shown in Figure 5, is composed of five layers. The first layer, referred to as fuzzification, applies an activation function μ that converts the inputs at each node (j) into fuzzy membership functions. This function may take many forms, including triangular, trapezoidal, sigmoidal, and Gaussian, as illustrated in Equations (2) and (3).
$$Q_j^{1} = \mu_{A_j}(x), \qquad j = 1, 2$$
$$Q_j^{1} = \mu_{B_{j-2}}(y), \qquad j = 3, 4$$
Here, the inputs are $x$ and $y$, $Q_j^{1}$ is the output of node $j$ in the first layer, and $A_j$ and $B_{j-2}$ are the fuzzy sets with membership values μ. The Gaussian membership function, described by Equation (4), is the one used in the present investigation:
$$\mu(x) = \exp\left[-\frac{\left(x - c_j\right)^{2}}{2\sigma_j^{2}}\right]$$
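The Gaussian membership function of Equation (4) is straightforward to compute; a small sketch:

```python
# Gaussian membership function of Equation (4):
# mu(x) = exp(-(x - c)^2 / (2 * sigma^2))
import math

def gaussian_mf(x, c, sigma):
    """Membership degree of x for a fuzzy set centred at c with spread sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

print(gaussian_mf(5.0, 5.0, 1.0))            # 1.0 at the centre
print(round(gaussian_mf(6.0, 5.0, 1.0), 4))  # 0.6065 one sigma away
```

The membership degree is 1 at the centre $c$ and decays smoothly with distance, at a rate controlled by $\sigma$.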
The primary parameters of the membership functions are $c_j$ and $\sigma_j$, representing the mean and standard deviation of the Gaussian curve, respectively. Layer 2 computes the weight (firing strength) $w_k$ of each rule from these membership values, as described by Equation (5):
$$Q_k^{2} = w_k = \mu_{A_j}(x) \times \mu_{B_j}(y), \qquad k = 1, \dots, 4;\ j = 1, 2$$
Using Equation (6), Layer 3 is responsible for normalizing the firing strengths.
$$Q_j^{3} = \bar{w}_j = \frac{w_j}{\sum_{k=1}^{4} w_k}, \qquad j = 1, \dots, 4$$
The defuzzification layer (layer 4) evaluates the output of each node $j$ using Equation (7), where $p_j$, $q_j$, and $r_j$ are the consequent parameters of the rule outputs $f_j$, weighted by the normalized firing strengths.
$$Q_j^{4} = \bar{w}_j f_j = \bar{w}_j\left(p_j x + q_j y + r_j\right), \qquad j = 1, \dots, 4$$
Following Equation (8), the last layer sums all incoming signals to determine the total output at a single node.
$$Q^{5} = \sum_{j} \bar{w}_j f_j = \frac{\sum_{j=1}^{4} w_j f_j}{\sum_{j=1}^{4} w_j}$$
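The five layers above can be sketched as a toy forward pass with two inputs, two Gaussian MFs per input, and four TSK rules. All parameter values below are illustrative, not those learned in this study.

```python
# Toy forward pass through the five ANFIS layers (Equations (2)-(8)).
# Centres, spreads, and consequent parameters are illustrative only.
import numpy as np

def gauss(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x, y, cA, sA, cB, sB, pqr):
    # Layer 1: fuzzification
    muA = [gauss(x, c, s) for c, s in zip(cA, sA)]   # mu_A1, mu_A2
    muB = [gauss(y, c, s) for c, s in zip(cB, sB)]   # mu_B1, mu_B2
    # Layer 2: firing strengths w_k = mu_Aj(x) * mu_Bj(y), one per rule
    w = np.array([a * b for a in muA for b in muB])  # 4 rules
    # Layer 3: normalization
    wbar = w / w.sum()
    # Layer 4: rule consequents f_j = p_j * x + q_j * y + r_j
    f = np.array([p * x + q * y + r for p, q, r in pqr])
    # Layer 5: weighted aggregation
    return float(np.dot(wbar, f))

out = anfis_forward(0.5, 1.0,
                    cA=[0.0, 1.0], sA=[1.0, 1.0],
                    cB=[0.0, 1.0], sB=[1.0, 1.0],
                    pqr=[(1, 1, 0), (1, -1, 0), (0, 0, 1), (2, 0, 0)])
print(round(out, 4))  # 0.6275
```

During training, the antecedent parameters (centres and spreads) and the consequent parameters (p, q, r) are the quantities adjusted from input–output data.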

3.4. Genetic Algorithm (GA)

The concept of Genetic Algorithms was first conceived by John Holland in 1970 [111]. GAs are inspired by natural selection and the processes of genetics, in which the success of the fittest individuals determines whether the population renews itself. Initially, the population of a Genetic Algorithm (GA) is filled with encoded points. Then, through three operators—selection, crossover, and mutation (the latter two exploring the space for alternative solutions)—the population is guided toward the most efficient solution to a problem. The decision to halt the execution of the GA depends on various factors pertinent to the problem being addressed. Criteria often employed in GA runs include the convergence of the mean fitness within the population, a maximum number of generations or evaluation operations, and the amount of time allowed for a single execution. A randomly generated sample chain may explore only a limited region of the solution domain; while this may still locate a good approach, the search may pass over the ideal outcome. For this reason, to increase the likelihood of identifying the best answer, the GA should be run multiple times from a range of random starting points.
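A compact, generic GA sketch shows the three operators in action on a one-dimensional test function; this is illustrative only, since the study's GA instead tunes ANFIS parameters.

```python
# Compact Genetic Algorithm sketch: selection, crossover, and mutation
# driving a population toward the minimum of a toy quadratic.
import random

random.seed(0)

def fitness(x):                      # minimize f(x) = (x - 3)^2
    return -(x - 3.0) ** 2           # higher fitness = better

pop = [random.uniform(-10, 10) for _ in range(30)]
for _ in range(200):
    # tournament selection: best of 3 random individuals
    parents = [max(random.sample(pop, 3), key=fitness) for _ in range(30)]
    # arithmetic crossover + occasional Gaussian mutation
    pop = []
    for i in range(0, 30, 2):
        a, b = parents[i], parents[i + 1]
        w = random.random()
        for child in (w * a + (1 - w) * b, (1 - w) * a + w * b):
            if random.random() < 0.2:          # mutation rate
                child += random.gauss(0, 0.5)
            pop.append(child)

best = max(pop, key=fitness)
print(round(best, 2))   # close to the optimum at 3
```

Mutation keeps diversity in the population, which is the in-algorithm counterpart of the multiple-restart advice given above.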

3.5. Firefly Algorithm (FFA)

The Firefly Algorithm (FFA) [112] is a metaheuristic optimization strategy inspired by the flashing behavior of fireflies, for which these insects are well known. The FFA exploits this behavior to drive fireflies toward brighter (better) solutions, exploring the search space to locate the global optimum. The FFA search method is based on two main concepts: the attractiveness function and the movement rule. Because a firefly’s attractiveness is determined by its solution quality (brightness), fireflies become less attractive with increasing distance. The attractiveness β of a firefly can be represented as:
$$\beta(r) = \beta_0 e^{-\gamma r^{2}}$$
In this context, $\beta_0$ denotes the attractiveness at zero distance, which is its highest value; $\gamma$ regulates the decline of the light intensity emitted by the fireflies; and $r$ represents the Euclidean distance between two firefly solutions.
One solution (the firefly) may be directed to approach another, brighter firefly by using the following formula:
$$x_i^{k+1} = x_i^{k} + \beta(r)\left(x_j^{k} - x_i^{k}\right) + \alpha\left(\mathrm{rand} - \tfrac{1}{2}\right)$$
Here, $x_i^{k+1}$ denotes the position of candidate solution $i$ at time $k+1$, while $x_i^{k}$ and $x_j^{k}$ represent the positions of firefly $i$ and the brighter, more attractive firefly $j$ at time $k$. To facilitate exploration of the search area, an element of randomness is introduced through the parameter α, and rand is a random number drawn from a uniform distribution on the interval [0, 1]; subtracting ½ ensures that the random motion may occur in any direction. Put together, these formulas mimic the natural attraction behaviour of fireflies, allowing them to navigate the search space by approaching brighter individuals. Thanks to its stochastic components and movement strategy, the Firefly Algorithm can efficiently identify globally optimal solutions to a broad variety of optimization problems by striking a good balance between exploration and exploitation.
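A minimal one-dimensional sketch of these update rules follows, with a toy objective and hand-picked β0, γ, and α (not the values tuned later in this study):

```python
# Firefly movement update of Equations (9)-(10): each firefly moves toward
# brighter ones with distance-decayed attractiveness plus a random step.
# Toy objective: minimize x^2 (brighter = lower objective).
import math
import random

random.seed(1)
beta0, gamma, alpha = 1.0, 1.0, 0.2

def brightness(x):
    return -x * x

fireflies = [random.uniform(-5, 5) for _ in range(15)]
for _ in range(100):
    for i in range(15):
        for j in range(15):
            if brightness(fireflies[j]) > brightness(fireflies[i]):
                r = abs(fireflies[i] - fireflies[j])
                beta = beta0 * math.exp(-gamma * r * r)   # Equation (9)
                fireflies[i] += (beta * (fireflies[j] - fireflies[i])
                                 + alpha * (random.random() - 0.5))  # Eq. (10)

best = min(fireflies, key=lambda x: x * x)
print(round(best, 2))   # close to the optimum at 0
```

The brightest firefly never moves (no brighter neighbour exists), so the best solution found so far is retained while the rest of the swarm explores around it.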

3.6. Hyperparameter Tuning

Hyperparameter tuning is a critical step in optimizing the performance of machine learning models. For ANFISs, selecting optimal hyperparameters is crucial for achieving accurate and reliable predictions. In this study, two metaheuristic optimization algorithms—Genetic Algorithm (GA) and Firefly Algorithm (FFA)—were employed to fine-tune the hyperparameters of ANFIS models. This section provides a detailed explanation of the tuning processes using these algorithms.
GA is inspired by the principles of natural evolution and genetics. It is particularly effective for optimizing complex functions with multiple parameters. In the context of ANFISs, GA was used to find the optimal values for hyperparameters by mimicking evolutionary processes such as selection, crossover, and mutation.
The process begins with generating an initial population of potential solutions, each representing a set of hyperparameters for ANFISs. Each individual in the population is evaluated based on a fitness function, which, in this case, is the performance of the ANFIS model with the given hyperparameters. The fitness is typically assessed using metrics such as the R2 value on the validation dataset. Individuals with higher fitness scores are selected to propagate their genes to the next generation. To explore the solution space, crossover and mutation operations are performed on the selected individuals. Crossover involves combining features from two parents to create offspring, while mutation introduces random changes to some genes to maintain diversity. The algorithm iterates through these steps, refining the population over multiple generations until the hyperparameters converge to optimal values.
For ANFISs, GA was utilized to optimize parameters such as the number of membership functions, the type of membership function, and the learning rate. By systematically exploring the hyperparameter space, GA helped identify configurations that enhance the ANFIS’s predictive accuracy.
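A hedged sketch of this tuning loop is given below: each chromosome encodes (number of MFs, MF type, learning rate), and `evaluate()` is a mock stand-in — real code would train an ANFIS with the candidate hyperparameters and return its validation R2.

```python
# GA-based hyperparameter tuning sketch. evaluate() is a hypothetical
# surrogate for "train an ANFIS, return validation R^2"; its shape is
# chosen only so the example runs self-contained.
import random

random.seed(2)
MF_TYPES = ["gaussian", "triangular", "trapezoidal"]

def random_chromosome():
    return [random.randint(2, 5),                # number of MFs per input
            random.randrange(len(MF_TYPES)),     # MF type index
            random.uniform(0.001, 0.1)]          # learning rate

def evaluate(ch):
    # Mock fitness peaking at 3 Gaussian MFs and learning rate ~0.01
    n_mf, mf_idx, lr = ch
    return 1.0 - abs(n_mf - 3) * 0.05 - mf_idx * 0.02 - abs(lr - 0.01) * 2

pop = [random_chromosome() for _ in range(20)]
for _ in range(50):
    pop.sort(key=evaluate, reverse=True)
    survivors = pop[:10]                          # selection (elitism)
    children = []
    while len(children) < 10:
        a, b = random.sample(survivors, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # crossover
        if random.random() < 0.3:                 # mutation on learning rate
            child[2] = random.uniform(0.001, 0.1)
        children.append(child)
    pop = survivors + children

best = max(pop, key=evaluate)
print(best[0], MF_TYPES[best[1]])
```

With the mock fitness above, the population converges on 3 Gaussian membership functions; swapping in a real ANFIS training run for `evaluate()` yields the tuning procedure described in the text.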
Fireflies’ flashing behaviour inspires the FFA. It is known for its efficacy in solving optimization problems by simulating the natural attraction mechanism among fireflies. In this study, FFA was employed to optimize ANFIS hyperparameters by guiding fireflies toward brighter (better) solutions.
Similar to GA, FFA starts with an initial population of fireflies, where each firefly represents a set of ANFIS hyperparameters. Fireflies are attracted to brighter fireflies, which correspond to better hyperparameter configurations. The movement of each firefly is influenced by the attractiveness of other fireflies, which is determined by their fitness (i.e., the performance of the ANFIS model). The fitness of each firefly is evaluated based on the ANFIS performance metrics. Fireflies with better fitness values are considered brighter. The position of each firefly is updated based on the attractiveness of other fireflies and some level of randomness. This ensures the exploration of the solution space while converging towards optimal solutions. The process repeats until convergence is achieved or a predefined number of iterations is completed.
FFA optimized hyperparameters similar to GA, including the number and type of membership functions and the learning parameters. The algorithm’s capacity to balance exploration and exploitation enabled efficient tuning of ANFISs hyperparameters.
Both GA and FFA effectively optimize ANFIS hyperparameters but offer distinct advantages. GA’s evolutionary approach is well-suited for exploring large and complex hyperparameter spaces, while FFA’s attraction-based mechanism provides efficient convergence toward optimal solutions. The choice between GA and FFA depends on the specific requirements of the optimization task, such as the size of the hyperparameter space and the computational resources available.
Hyperparameter tuning using GA and FFA significantly enhances the performance of ANFIS models. By leveraging these metaheuristic optimization techniques, the optimal hyperparameters were identified, improving the accuracy and reliability of the predictive models. This approach underscores the importance of metaheuristic algorithms in fine-tuning complex systems and contributes to advancing the field of soft computing in concrete strength prediction.

4. Development of the Models

4.1. Predicting CSGePC by SVR

The SVR model used 11 input parameters and produced a single output. The CSGePC is the estimated output parameter, whereas the inputs are fly ash (FA), rest period (RP), curing temperature (CT), curing period (CP), NaOH/Na2SiO3 ratio, superplasticizer (Su), extra water added (EW), molarity (M), alkaline activator/binder ratio (AAB), coarse aggregate (CA), and fine aggregate (FAg). Of the 61 data sets, 49 were used for training and 12 for testing; that is, the dataset was split into 80% training data and 20% testing data [113,114,115,116,117,118,119,120]. The interactive interface offered by MATLAB R2024b made it possible to perform heavy computations more quickly [121]; LIBSVM, the SVM interface for MATLAB R2024b, was used for this project. After transforming the data into the SVM format used for both training and prediction, data scaling became necessary, because the SVM method relies on numerical attributes that may span very large or very small ranges. Rescaling the data to a more suitable range also improved the training and prediction process. Alongside scaling, the appropriate kernel function must be chosen. The RBF kernel is an excellent starting point among the basic kernel functions, which also include the sigmoid, linear, and polynomial kernels. According to the relevant literature, a linear kernel need not be considered once RBF is chosen as the kernel function, and the sigmoid kernel does not outperform RBF when its kernel matrix is not positive definite.
Polynomial kernels can also be used, but only with low degrees; with large degrees, numerical difficulties are more likely to develop [122]. The application of LIBSVM consisted of two stages: (1) training on a dataset to acquire a model, and (2) using the model to predict values for a testing dataset. Because the radial basis function was the kernel type applied, results were obtained on both the training and testing datasets. This section expresses the performance of the SVR model that was created.
The selection of hyperparameters for the SVR model was carried out using a grid search optimization process. This method systematically tested a range of values for key hyperparameters, including the penalty parameter C and the kernel coefficient γ, to find the best combination. In this study, RBF kernel was chosen due to its ability to handle non-linear relationships effectively. After evaluating the model’s performance for different parameter values, the optimal C and γ were selected based on the best results from the training data. This process ensured that the SVR model was fine-tuned for maximum predictive accuracy.
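The grid search described above can be sketched with scikit-learn's `GridSearchCV`; the data are synthetic and the grid values are illustrative, not those of the study.

```python
# Grid-search sketch for the SVR hyperparameters C and gamma with an RBF
# kernel. Synthetic placeholder data; grid values are illustrative only.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.random((61, 11))                          # 61 samples, 11 inputs
y = X @ rng.random(11) + 0.05 * rng.standard_normal(61)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, "scale"]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_)   # best (C, gamma) pair by cross-validated R^2
```

Cross-validation inside the grid search guards against picking a (C, γ) pair that merely memorizes the training data.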

4.2. Predicting CSGePC by ANN

The ANN was introduced as a successful framework for goal prediction in several engineering domains. In ANNs, the design comprises input nodes, hidden nodes, and an output node [123]. The output layer of the ANN consists of a single neuron, whereas the input layer contains one neuron per input parameter. Choosing the right size for the hidden layer is an important aspect of ANN design: it requires determining how many hidden layers there are and how many neurons each layer contains. In many cases a single hidden layer resolves the complications; nevertheless, various neuron counts, possibly spread over more than one hidden layer, can be tried to obtain the highest possible accuracy. The obtained results show that the optimal total number of hidden neurons for producing the highest R2 value is 19. This is why the ANN used in this study contained 11 nodes in the input layer, 11 in the first hidden layer, 8 in the second hidden layer, and 1 in the output layer.
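The 11-11-8-1 architecture described above can be sketched with scikit-learn's `MLPRegressor`; the data are placeholders and the solver settings are assumptions, not those of the study.

```python
# Sketch of the two-hidden-layer ANN (11 inputs, hidden layers of 11 and 8
# neurons, 1 output) on placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.random((61, 11))                       # 11 input parameters
y = X @ rng.random(11) + 0.1 * rng.standard_normal(61)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(11, 8),   # two hidden layers: 11 and 8
                 max_iter=5000, random_state=0),
)
ann.fit(X, y)
print(round(ann.score(X, y), 2))  # training R^2
```

The 11 + 8 = 19 hidden neurons correspond to the optimal total found by the trial-and-error search reported above.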

4.3. Predicting CSGePC by ANFISs

In multiple fields of engineering, ANFISs are highlighted for goal prediction purposes [124,125,126,127]. Fuzzy c-means clustering (FCM), the subtractive clustering method (SCM), and grid partitioning (GP) are the top three approaches for ANFIS modelling [128,129]. This research modelled CSGePC using all three methodologies and found that FCM outperformed its counterparts. Hence, ANFIS models built with the FCM method were tested and compared to the SVR and ANN outcomes.
Notably, the selection of clustering methods is pivotal in ANFIS modeling, significantly impacting the accuracy and resilience of predictive models. FCM, SCM, and GP each present unique advantages and limitations, contingent upon the dataset’s characteristics and the underlying patterns to be captured. For example, FCM excels in scenarios featuring overlapping clusters and non-spherical data distributions due to its adaptability in assigning data points to multiple clusters. Conversely, SCM proves advantageous for handling sizable datasets and high-dimensional spaces, efficiently identifying representative cluster centers while mitigating computational complexity. GP, on the other hand, provides a structured approach to data partitioning, enhancing interpretability and implementation ease. Through a systematic examination drawing on insights from prior research, this study assesses the effectiveness of these clustering techniques within the ANFIS framework, illuminating their comparative performance and relevance in CSGePC prediction contexts.
Furthermore, the juxtaposition of ANFIS models employing diverse clustering methodologies underscores the significance of methodological choice in predictive modeling endeavors. Beyond achieving mere accuracy, the chosen approach should demonstrate robustness and scalability, ensuring its efficacy across varied datasets and real-world applications. Through meticulous experimentation and performance scrutiny, this research contributes to a nuanced comprehension of ANFIS modeling techniques, offering valuable insights to practitioners and researchers seeking to harness the predictive potential of this versatile methodology.

4.4. Predicting CSGePC by GA-ANFISs

The training procedure of the GA-ANFIS system is detailed in this section. This study uses the GA to improve the ANFIS results; that is, the GA is utilized to determine the optimal ANFIS settings. The flow diagram of the ANFIS-GA system is demonstrated in Figure 6. Parameters such as the population size, crossover and mutation rates, minimum error, and maximum iterations must be precisely specified in order to run the GA-ANFIS model. Using a trial-and-error technique, the crossover rate, mutation rate, minimum error, and maximum iterations were set to 0.45, 0.65, 1 × 10−8, and 1000, respectively, in this work. The results of testing different values for the population size are shown in Table 3. The results show that model No. 6 performs the best (highest R2); thus, a population size of 60 was selected for the current study. The ANFIS-GA model created for this research likewise relied on a linear output membership function and a Gaussian input membership function. The RMSE was taken as the fitness function, and there were three fuzzy rules.
The findings derived from the experimentation conducted on various swarm sizes within the GA-ANFIS model for the prediction of CSGePC unveil intriguing observations. Across the spectrum of swarm sizes investigated, a discernible pattern emerges wherein performance, as gauged by the R2, demonstrates a consistent upsurge in both the training and testing datasets. Initially, with the expansion of swarm size, there is an observable enhancement in the model’s capacity to discern the underlying data patterns, as evidenced by the elevation in R2 values. For instance, the R2 value for the training set ranges from 0.9614 with a swarm size of 10 to 0.9809 with a swarm size of 60. Nevertheless, beyond a certain threshold, the rate of improvement diminishes, indicating a plateau in the benefit derived from increased model complexity. Notably, while the R2 values for the training set tend to stabilize or fluctuate after reaching a peak, those for the testing set continue to exhibit progression with larger swarm sizes, signifying the model’s augmented generalization capability. Among the varied swarm sizes tested, a size of 60 emerges as particularly efficacious, yielding the highest R2 values for both training and testing datasets. For example, the R2 value for the testing set escalates from 0.888 with a swarm size of 10 to 0.9736 with a swarm size of 60. Nevertheless, it is imperative to consider factors beyond R2 values alone, including computational efficiency and the risk of overfitting. Models with excessively large swarm sizes manifest indications of potential overfitting, thereby emphasizing the necessity of striking a balance between model complexity and generalization. In summary, these findings underscore the critical role of swarm size selection in optimizing the performance and resilience of the GA-ANFIS model for CSGePC prediction.
The examination of the GA-ANFIS model’s predictive efficacy concerning CSGePC, predicated on alterations in R2 values corresponding to escalating swarm sizes, elucidates intriguing patterns. When scrutinizing the R2 values of the training dataset, the transitions between successive models exhibit diversity. Initially, from model 1 to model 2, there is a marginal decline of roughly 0.1043%, succeeded by an ascent of approximately 1.4297% from model 2 to model 3. A marginal downturn in performance transpires from model 3 to model 4, characterized by a change rate approximating −0.4006%. Subsequent models manifest oscillating trajectories, marked by both ameliorations and regressions in R2 values. Notably, when evaluating R2 values affiliated with the testing dataset, akin trends emerge, albeit with slightly more pronounced fluctuations. Generally, discernible enhancements in performance coincide with augmenting swarm sizes, evidenced by positive change rates in R2 values. Nonetheless, in the latter models, instances of performance attenuation arise, notably exemplified from model 9 to model 10, where a substantial decrease of approximately −3.3383% transpires. These performance undulations intimate that while enlarging swarm size typically augments model performance, there may exist diminishing returns or even deteriorations in performance beyond a certain threshold.

4.5. Predicting CSGePC by FFA-ANFISs

Yang [130] first introduced the FFA as an emerging intelligent algorithm. The technique is increasingly used to handle a wide variety of difficult optimization problems, and a great deal of information on the FFA and its implementation may be found in the published literature [131,132]. Figure 7 shows the modelling approach of the FFA-ANFIS model used in the present research. The essential phase of the FFA-ANFIS modelling process is choosing proper values for γ, β0, and α, together with the iteration count and swarm size. A trial-and-error method was adopted for this investigation; based on the results, the values of γ, β0, α, number of iterations, and swarm size were selected as 0.005, 2, 0.15, 1000, and 250, respectively. The prediction results of the FFA-ANFIS models with different swarm sizes are summarized in Table 4.
In Table 3 and Table 4, the R2 values for the testing phase are observed to be higher than those for the training phase across various swarm sizes. This can be attributed to the effectiveness of the metaheuristic optimization algorithms used in tuning the models’ hyperparameters. Both the Genetic Algorithm (GA) and Firefly Algorithm (FFA) efficiently search the hyperparameter space, resulting in models that generalize well to unseen data. Furthermore, the variation in swarm sizes allows for diverse exploration of hyperparameter configurations, with certain swarm sizes yielding optimal balance between bias and variance. Consequently, the models achieve higher R2 values on the testing dataset, indicating robust generalization capabilities. This phenomenon underscores the importance of hyperparameter tuning in enhancing model performance and ensuring reliable predictions on new data.

5. Results and Discussion

This study used five models based on artificial intelligence (ANN, SVR, ANFIS, GA-ANFIS, and FFA-ANFIS) to predict CSGePC. MAE, NS, RMSE, VAF, WI, R2, and SI were among the statistical indicators used to assess their performance [114,133,134,135,136,137,138,139,140,141]:
Root Mean Square Error (RMSE)
$$RMSE = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(m_{CSGePC,i} - P_{CSGePC,i}\right)^{2}}$$
Mean Absolute Error (MAE)
$$MAE = \frac{1}{m}\sum_{i=1}^{m}\left|P_{CSGePC,i} - m_{CSGePC,i}\right|$$
Coefficient of Determination (R2)
$$R^{2} = \frac{\sum_{i=1}^{m}\left(m_{CSGePC,i} - \bar{m}_{CSGePC}\right)^{2} - \sum_{i=1}^{m}\left(m_{CSGePC,i} - P_{CSGePC,i}\right)^{2}}{\sum_{i=1}^{m}\left(m_{CSGePC,i} - \bar{m}_{CSGePC}\right)^{2}}$$
Variance Accounted For (VAF)
$$VAF = 100\left[1 - \frac{\operatorname{var}\left(m_{CSGePC,i} - P_{CSGePC,i}\right)}{\operatorname{var}\left(m_{CSGePC,i}\right)}\right]$$
Nash–Sutcliffe efficiency (NS)
$$NS = 1 - \frac{\sum_{i=1}^{m}\left(m_{CSGePC,i} - P_{CSGePC,i}\right)^{2}}{\sum_{i=1}^{m}\left(m_{CSGePC,i} - \bar{m}_{CSGePC}\right)^{2}}$$
Willmott’s index for agreement (WI)
$$WI = 1 - \frac{\sum_{i=1}^{m}\left(m_{CSGePC,i} - P_{CSGePC,i}\right)^{2}}{\sum_{i=1}^{m}\left(\left|P_{CSGePC,i} - \bar{m}_{CSGePC}\right| + \left|m_{CSGePC,i} - \bar{m}_{CSGePC}\right|\right)^{2}}$$
Scattered Index (SI)
$$SI = \frac{RMSE}{\frac{1}{m}\sum_{i=1}^{m} P_{CSGePC,i}}$$
where $m$ denotes the number of samples, and $m_{CSGePC,i}$, $P_{CSGePC,i}$, and $\bar{m}_{CSGePC}$ are the actual values, the predicted values, and the average of the actual values, respectively. In an accurate model, the values of MAE, NS, RMSE, VAF, WI, R2, and SI should be around 0, 1, 0, 100, 1, 1, and 0, respectively [29,142,143,144,145,146,147].
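The seven indices can be implemented directly from their definitions; the five paired values below are toy numbers for illustration only.

```python
# Direct implementation of the seven evaluation indices for actual values m
# and predictions p. The toy numbers are illustrative, not study data.
import numpy as np

def metrics(m, p):
    m, p = np.asarray(m, float), np.asarray(p, float)
    rmse = np.sqrt(np.mean((m - p) ** 2))
    mae = np.mean(np.abs(p - m))
    ss_res = np.sum((m - p) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot          # same closed form as NS here
    vaf = 100.0 * (1.0 - np.var(m - p) / np.var(m))
    ns = 1.0 - ss_res / ss_tot
    wi = 1.0 - ss_res / np.sum((np.abs(p - m.mean()) + np.abs(m - m.mean())) ** 2)
    si = rmse / p.mean()                # RMSE over the mean prediction
    return rmse, mae, r2, vaf, ns, wi, si

m = [30.0, 35.0, 40.0, 45.0, 50.0]   # actual strengths (MPa)
p = [31.0, 34.0, 41.0, 44.0, 51.0]   # predicted strengths (MPa)
out = metrics(m, p)
print([round(v, 3) for v in out])  # [1.0, 1.0, 0.98, 98.08, 0.98, 0.995, 0.025]
```

Values near 0 for RMSE, MAE, and SI and near 1 (or 100 for VAF) for the remaining indices indicate an accurate model, matching the targets stated above.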
The evaluation of machine learning models in predicting CSGePC is a critical endeavor, necessitating various quantitative metrics to assess predictive performance rigorously. Among the suite of evaluation tools used, Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) serve as fundamental measures of predictive accuracy. RMSE is the square root of the mean squared error between predicted and observed values, while MAE reports the average absolute deviation. These metrics, essential in understanding the precision of model predictions, enable analysts to gauge the reliability of machine learning algorithms [26,148,149] when applied to CSGePC prediction tasks. In addition to measures of accuracy, the Coefficient of Correlation (r) offers valuable insight into the linear relationship between predicted and observed values. With its range spanning from −1 to 1, r provides a nuanced understanding of the strength and direction of correlations within the dataset. Moreover, Variance Accounted For (VAF) provides a percentage-based assessment of the model’s explanatory power, clarifying the proportion of total variance in the observed data explained by the model. Together, these metrics furnish analysts with a comprehensive understanding of the predictive capabilities of machine learning models in capturing the complex behavior of geopolymer concrete. Further enhancing the evaluation toolkit are metrics such as Nash–Sutcliffe efficiency (NS), Willmott’s index for agreement (WI), and the scattered index (SI), each offering unique perspectives on predictive performance. NS, ranging from negative infinity to 1, assesses the model’s ability to predict observed values, while WI evaluates the agreement between predicted and observed values, considering both bias and variability.
Additionally, SI quantifies the dispersion of predicted values around the 1:1 line relative to the observed mean, offering a measure of the model's ability to capture data variability. By considering these metrics holistically, analysts can ascertain the suitability and reliability of machine learning models in predicting CSGePC, thereby supporting mix-design and quality-control decisions with a high degree of confidence.
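As a concrete illustration, the seven indices discussed above can be computed with a short NumPy routine. This is a sketch only: the WI and SI expressions follow their common textbook forms (Willmott's index of agreement and RMSE normalized by the observed mean), which may differ marginally from the exact variants used in this study.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return the seven statistics used to assess one model (sketch)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                # root mean square error
    ns = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    vaf = (1.0 - np.var(err) / np.var(y_true)) * 100.0
    wi = 1.0 - np.sum(err ** 2) / np.sum(            # Willmott's agreement index
        (np.abs(y_pred - y_true.mean()) + np.abs(y_true - y_true.mean())) ** 2)
    r2 = np.corrcoef(y_true, y_pred)[0, 1] ** 2      # coefficient of determination
    si = rmse / y_true.mean()                        # scatter index
    return {"MAE": mae, "NS": ns, "RMSE": rmse, "VAF": vaf,
            "WI": wi, "R2": r2, "SI": si}
```

For a perfect model the routine returns the ideal values noted earlier (MAE = 0, NS = 1, RMSE = 0, VAF = 100%, WI = 1, R2 = 1, SI = 0).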
Table 5 displays the results of all models for MAE, NS, RMSE, VAF, WI, R2, and SI. To select the best predictive model based on these statistical metrics, a rating system proposed by Wang et al. [150] is adopted. The rating results are presented in Table 6, and the total rate of each model, together with its rank, is given in Table 7.
In the training phase, several metrics are utilized to evaluate the performance of the different models: the seven well-known indices MAE, NS, RMSE, VAF, WI, R2, and SI. For the FFA-ANFIS model in the training phase, MAE stands at 0.35929, NS at 0.98876, RMSE at 0.4165, VAF at 99.07247%, WI at 0.9738, R2 at 0.995, and SI at 0.0101. The ANN, ANFIS, GA-ANFIS, and SVR models exhibit their respective metrics in the training part. In the testing portion, the same metrics are applied to evaluate the capability of the models. For instance, for the FFA-ANFIS model during testing, MAE is measured at 0.81142, NS at 0.98577, RMSE at 1.0322, VAF at 98.77779%, WI at 0.9236, R2 at 0.994, and SI at 0.0358. Likewise, the ANN, ANFIS, GA-ANFIS, and SVR models demonstrate their respective metrics during testing. Comparing performance between the training and testing phases reveals how well the models generalize to unseen data. Notably, in the testing phase the MAE, RMSE, and SI tend to be higher, and the NS lower, than in the training phase for all models, indicating a certain level of overfitting during training. However, the VAF, WI, and R2 metrics generally maintain high values, suggesting that the models are still capable of explaining a high proportion of the variance in the testing data. Specifically, the FFA-ANFIS model exhibits commendable performance in both the training and testing portions, with lower MAE, RMSE, and SI and higher NS, VAF, and R2 than the other models.
Table 6 furnishes a comprehensive depiction of model efficacy across diverse metrics throughout both the training and testing stages. The rating system scores the models across multiple statistical metrics: for each metric, the models are rated from 5 (best performance) to 1 (poorest performance) based on their values, and the final rating for each model is determined by summing its ratings across all metrics, with the highest total indicating the best model. A similar approach was used in a study by Kahraman et al. [138] for comparing models based on different evaluation metrics (e.g., MAE, NS, RMSE). This rating method provides a simplified way to visualize the relative performances of the models. Equation (18) for the rating system is as follows:
$$R_{\text{model}} = \sum_{i=1}^{n} \mathrm{rank}(\mathrm{Metric}_i)$$
where Rmodel is the total rank of a model, rank (Metrici) is the rank assigned to the model for the ith metric, and n is the total number of evaluation metrics considered. This system allows for a holistic assessment of model performance, simplifying the comparison across multiple dimensions of accuracy and consistency.
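A minimal sketch of this rating scheme follows. It assumes (consistent with the totals reported in Table 7, where the maximum total corresponds to the best model) that for each metric the best of n models earns n points and the worst earns 1, with the points then summed per model:

```python
def rate_models(scores, lower_is_better):
    """Rank-sum rating sketch.

    scores: model name -> {metric name: value}
    lower_is_better: metric name -> True if smaller values are better
    Returns model name -> total points (higher total = better model).
    """
    names = list(scores)
    n = len(names)
    totals = dict.fromkeys(names, 0)
    for metric, lower in lower_is_better.items():
        # Sort so the best-performing model for this metric comes first.
        ordered = sorted(names, key=lambda name: scores[name][metric],
                         reverse=not lower)
        for position, name in enumerate(ordered):
            totals[name] += n - position  # best earns n points, worst earns 1
    return totals
```

With 5 models, 7 metrics, and two phases (training and testing), the maximum attainable total is 5 × 7 × 2 = 70, matching the top score in Table 7.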
Noteworthy distinctions emerge during the training phase, wherein SVR, ANN, and ANFIS receive the lowest ratings across the MAE, NS, RMSE, VAF, WI, R2, and SI metrics. Upon transitioning to the testing phase, further differentiation among these models becomes evident. While SVR maintains stability in the MAE, RMSE, WI, and SI metrics, it experiences marginal decreases in VAF and R2. Conversely, ANN shows improvements in the MAE, RMSE, VAF, R2, and SI metrics but undergoes a slight downturn in WI. ANFIS, on the other hand, sustains uniform ratings across all metrics. The FFA-ANFIS and GA-ANFIS models consistently outperform the other models in both the training and testing phases across all metrics, yielding higher scores overall and indicating heightened accuracy, precision, and generalization. Notably, during the testing phase, FFA-ANFIS is particularly remarkable, displaying superior performance in the MAE, RMSE, WI, and SI metrics, while ANFIS retains strong scores in VAF and R2. Essentially, while ANN and ANFIS exhibit varying degrees of effectiveness across different metrics during testing, FFA-ANFIS and GA-ANFIS consistently showcase superior performance across all metrics, highlighting their effectiveness and resilience in predictive modeling.
Table 7 displays the performance ratings of the five models—SVR, ANN, ANFIS, FFA-ANFIS, and GA-ANFIS—based on their rating scores and overall performance. The “Rate of training” columns list the scores each model received for training performance, while the “Total rate” column sums the rates for each model and determines its overall rank. Among the models, FFA-ANFIS stands out with the highest total rate of 70, securing the top rank; it performed best across all metrics evaluated. GA-ANFIS follows closely with a total rate of 56, earning it the second position. ANFIS achieves a total rate of 42, placing it third overall. Comparing SVR and ANN, SVR lags behind with a total rate of 19, ranking it fifth among the models, while ANN performs slightly better with a total rate of 23, placing it fourth; both fall short of ANFIS, FFA-ANFIS, and GA-ANFIS. Hence, based on the presented data, the FFA-ANFIS model emerges as the most promising predictive model due to its superior performance across all metrics evaluated. Researchers are therefore advised to consider the FFA-ANFIS model for similar predictive modeling tasks.
The predicted CSGePC values from the developed models are depicted in Figure 8 for both the training and testing portions. Figure 8 presents a comparative analysis of the predicted CSGePC using the different models—SVR, ANN, ANFIS, FFA-ANFIS, and GA-ANFIS—against the measured values for two distinct data sets. The training-phase panel shows the performance of these models on the first data set, where the FFA-ANFIS and GA-ANFIS models provide a closer fit to the measured data, especially in sections with sharp variations (e.g., around data points 10 and 25). Although the SVR model captures the general trend, it exhibits larger deviations in some areas, notably between data points 10 and 15, highlighting its lower accuracy compared to the hybrid models. In the testing-phase panel, which corresponds to the second data set, a similar trend is evident. FFA-ANFIS and GA-ANFIS continue to outperform the standalone models, particularly in maintaining closer proximity to the measured values around critical points, such as data points 4, 6, and 10. The ANN and SVR models, while performing reasonably well overall, show more pronounced deviations, indicating potential difficulties in capturing the more complex nonlinear patterns within this data set. Overall, these results emphasize that the hybrid models, particularly FFA-ANFIS and GA-ANFIS, exhibit stronger predictive capabilities, providing more accurate estimates of CSGePC than the standalone ANN, SVR, and ANFIS models. This highlights the effectiveness of hybrid approaches in handling the nonlinearity and variability present in compressive strength prediction. The correlations between measured and predicted CSGePC for all developed models are demonstrated in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, which show the R2 values of the models for the training and testing parts.
Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 compare the R2 values of different models for both the training and test sets, highlighting their predictive accuracy and generalization capabilities. Figure 9 shows that the FFA-ANFIS model performs exceptionally well on both sets, though with a slight decrease in the test set due to overfitting. Figure 10 reveals similar strong performance for the GA-ANFIS model, with balanced R2 values for both sets, indicating effective generalization. In Figure 11, the ANFIS model shows lower R2 values than the hybrid models, suggesting it struggles more with capturing complex patterns. Figure 12 illustrates that the ANN model performs well on the training set but sees a notable drop in R2 for the test set, reflecting a decrease in predictive accuracy. Finally, Figure 13 shows the SVR model achieving lower R2 values across both sets, indicating weaker performance compared to the other models. Overall, the hybrid models (FFA-ANFIS and GA-ANFIS) consistently outperform the standalone models, especially in terms of generalization to unseen data.
Figure 14 emphasizes the strength of the hybrid models—especially ANFIS-FFA—in predicting compressive strength, showcasing their capacity to provide the most reliable and accurate results among the models evaluated. Figure 14 illustrates the R2 values for the developed models—SVR, ANN, ANFIS, ANFIS-FFA, and ANFIS-GA—across all samples, highlighting the predictive accuracy of each model. The R2 values indicate the proportion of the variance in the measured CSGePC data that is explained by the model. According to the figure, the ANFIS-FFA model achieves the highest R2 value of 0.9942, demonstrating its superior ability to capture nearly all the variability in the data and thus offering the most accurate predictions. The ANFIS-GA model follows closely with an R2 of 0.9856, also showing strong predictive performance, though slightly less accurate than ANFIS-FFA. The standalone ANFIS model, with an R2 of 0.9763, outperforms both ANN and SVR models, achieving better generalization but falling behind the hybrid models that incorporate optimization techniques. The ANN and SVR models, with R2 values of 0.9593 and 0.9534, respectively, while still demonstrating good predictive capabilities, explain a smaller portion of the variance compared to the other models. This suggests that they are less effective in modeling the complex, nonlinear relationships within the dataset. The comparison of these R2 values clearly indicates that integrating optimization algorithms such as FFA and GA into the ANFIS model significantly improves predictive performance, allowing the models to achieve higher accuracy.
Furthermore, the Taylor diagram of constructed models is illustrated in Figure 15. Figure 15 presents a Taylor diagram, which provides a comprehensive visual comparison of the performance of the developed models (SVR, ANN, ANFIS, FFA-ANFIS, and GA-ANFIS) in predicting CSGePC. This diagram effectively summarizes three key statistical indicators: the correlation coefficient (indicating the strength of the linear relationship between the predicted and observed values), the root-mean-square error (RMSE), and the standard deviation of the models. The FFA-ANFIS and GA-ANFIS models are positioned closer to the reference point, indicating that these models have higher correlation coefficients and lower RMSE values compared to the other models. This suggests that they are more accurate and consistent in their predictions. In contrast, the SVR and ANN models, which are located further from the reference point, exhibit lower correlation coefficients and higher RMSE values, implying less reliable predictions. The ANFIS model, while performing better than SVR and ANN, still falls short of the hybrid models (FFA-ANFIS and GA-ANFIS) in terms of predictive accuracy. The Taylor diagram highlights the significant improvement in performance achieved by integrating optimization techniques (FFA and GA) into the ANFIS model, resulting in better model calibration and reduced prediction errors. Overall, the figure clearly demonstrates the superior performance of the hybrid models, particularly FFA-ANFIS, in capturing the nonlinear relationships inherent in the data, making them more suitable for predicting compressive strength.
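The three statistics summarized by a Taylor diagram can be computed for each model as follows (a sketch only; the plotting itself is omitted):

```python
import numpy as np

def taylor_stats(obs, pred):
    """Return (std of predictions, correlation with observations,
    centered RMS difference) for one model, as used in a Taylor diagram."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    sd = pred.std()                                  # model standard deviation
    r = np.corrcoef(obs, pred)[0, 1]                 # correlation coefficient
    # Centered RMS difference: RMS of anomalies after removing each mean.
    crmsd = np.sqrt(np.mean(((pred - pred.mean()) - (obs - obs.mean())) ** 2))
    return sd, r, crmsd
```

These three quantities obey the law-of-cosines identity crmsd² = σ_obs² + σ_pred² − 2 σ_obs σ_pred r, which is what lets a single polar plot display all three at once and why models nearer the reference point are the better performers.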

5.1. Comparison with Literature Models

The results of this study were compared with those reported in the literature to evaluate the performance of various models. Table 8 provides a summary of the techniques, number of data points, and R2 values from different studies.
The comparison shows that the models used in this study are highly effective. For instance, the FFA-ANFIS model achieved the highest R2 value of 0.994, surpassing many existing models in the literature, including the 0.9909 and 0.99 reported by Azimi-Pour et al. [72] and Farooq et al. [76], respectively. Additionally, the GA-ANFIS model achieved an R2 value of 0.987, likewise comparable to the best values reported in the literature.
Advanced hybrid techniques like GA-ANFIS and FFA-ANFIS showed superior performance compared to single techniques like SVM and GEP used in other studies, which supports the effectiveness of using metaheuristic optimization algorithms to tune the hyperparameters of models such as ANN, SVR, and ANFIS. Despite the relatively small dataset (61 data points), the models still achieved high R2 values, demonstrating their robustness and efficacy.
The ANN and SVR models also performed well, with R2 values of 0.962 and 0.956, respectively. These values are competitive when compared to similar techniques in the literature, such as the R2 of 0.966 reported by Emad et al. [93] for an ANN model. Hence, the models developed in this study, particularly the GA-ANFIS and FFA-ANFIS models, demonstrate high predictive accuracy with R2 values that are competitive or superior to those reported in the literature. This highlights the potential of using metaheuristic optimization algorithms to enhance model performance, even with smaller datasets.
While hybrid models, such as FFA-ANFIS, have demonstrated superior predictive accuracy compared to traditional modeling techniques, this comes at the expense of higher computational costs and longer training times. The complexity of hybrid models, which involves the optimization of multiple parameters through iterative algorithms like the Firefly Algorithm (FFA), significantly increases the computational burden. Fine-tuning the hyperparameters further contributes to the extended training duration.
In contrast, traditional models are typically faster to train and require fewer computational resources. However, they may not capture the intricate nonlinear relationships between variables as effectively as hybrid models. In the context of predicting CSGePC, the enhanced accuracy and flexibility provided by hybrid models justify the additional computational effort, particularly when dealing with multifactorial data and complex dependencies.

5.2. Sensitivity Analysis

In the final stage of this research, the parameters with the greatest and smallest impact on the determination of CSGePC were identified. To accomplish this, the researchers used a sensitivity analysis method called Cosine Amplitude Method (CAM) [134]. The CAM method assesses the strength of the relationship between each pair of effective parameters on CSGePC. In this respect, the following equation is employed [123]:
$$r_{ij} = \frac{\sum_{k=1}^{m} x_{ik}\, x_{jk}}{\sqrt{\sum_{k=1}^{m} x_{ik}^{2} \cdot \sum_{k=1}^{m} x_{jk}^{2}}}$$
where rij represents the level of influence between xi (input) and xj (output), and the symbol k indexes the m samples in the dataset, so that each parameter enters the calculation as a series of values (e.g., the fly ash content or curing temperature recorded for every mix). The summation over k therefore measures the similarity between the full series of an input parameter and the series of CSGePC values. In the present study, a sensitivity analysis was conducted using the Cosine Amplitude Method (CAM), as shown in Figure 16, to identify the parameters that significantly influence the compressive strength of geopolymer concrete (CSGePC). The results revealed that the CA parameter exhibited the most substantial impact on CSGePC, with a strength value of 0.993. This indicates a very strong positive relationship between CA and the compressive strength, suggesting that variations in this parameter lead to significant changes in the compressive strength of the geopolymer concrete. On the other hand, the EW parameter had the least influence, with a strength value of 0.269, suggesting that while EW does affect the compressive strength, its impact is considerably weaker than that of the other parameters. Ranked from smallest to largest rij, the parameters are EW < Su < RP < CP < FAg < M < AAB < CT < NaOH/Na2SiO3 < FA < CA, with corresponding values of 0.269, 0.579, 0.644, 0.895, 0.937, 0.959, 0.974, 0.977, 0.98, 0.983, and 0.993.
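The CAM strength defined above reduces to a normalized inner product between two series, as the following sketch shows:

```python
import numpy as np

def cam_strength(x_input, y_output):
    """Cosine Amplitude Method: r_ij between an input parameter series
    and the output series across all m samples."""
    x = np.asarray(x_input, dtype=float)
    y = np.asarray(y_output, dtype=float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))
```

Proportional series yield r_ij = 1 (maximum influence), orthogonal series yield 0, and intermediate relationships fall in between, which is how the parameter ranking above is obtained.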

6. Practical Applications

Table 9 illustrates the evaluation of the developed ANFIS-FFA model using a set of unseen data, assessing the model’s ability to predict the CSGePC. In this study, unseen data refers to data points that were not included in the training phase of the model. These data points were reserved for the testing phase to evaluate the model’s predictive performance on entirely new and untrained instances. This approach helps ensure the robustness and generalizability of the model, as it assesses how well the model can predict outcomes for data it has never encountered before, which simulates real-world applications. The input variables used in this evaluation include FA, RP, CT, CP, NaOH/Na2SiO3, Su, EW, M, AAB, CA, and FAg, capturing the full range of factors influencing the geopolymer concrete’s properties. The table compares the measured compressive strength values with those predicted by the ANFIS-FFA model. The accuracy of the model’s predictions is reflected in the small error margins, with errors ranging from as low as 0.0483 to a maximum of 0.7225. For example, in one case with a measured compressive strength of 35 MPa, the model predicted 35.0483 MPa, resulting in an error of only 0.0483 MPa. Similarly, for a measured strength of 38 MPa, the predicted value was 38.7225 MPa, with an error of 0.7225 MPa. Hence, the low average error values across the dataset confirm that the ANFIS-FFA model performs well in predicting compressive strength, demonstrating its reliability and robustness for practical applications. The error metrics underscore the model’s capability to generalize effectively to unseen data, thus offering a valuable tool for predicting the mechanical properties of geopolymer concrete in real-world scenarios.
The application of predictive models like those developed in this study offers substantial benefits for the construction industry. Accurate forecasting of compressive strength enables optimized material usage, reducing reliance on physical testing, which is both time-consuming and costly. By implementing soft computing techniques, construction projects can achieve faster quality control, allowing for timely adjustments in mix design to meet strength requirements. Hence, these predictive models facilitate more efficient, cost-effective, and environmentally conscious approaches to construction material development and quality assurance.
To enhance both the usability and reproducibility of the achieved results, a dedicated graphical user interface (GUI) was developed as illustrated in Figure 17. This GUI allows users to input specific values for various parameters, enabling predictions using our trained machine learning model. Each input parameter has a defined range, which is displayed below its corresponding input field in the interface. These ranges are crucial for ensuring that the model receives valid data, aligning with the distribution and characteristics of the training dataset.
The ranges for each parameter have been derived from the underlying dataset used to train the model, ensuring that the predictions remain robust within these intervals. When the input values fall within these predefined ranges, as specified for each parameter, the model demonstrates high predictive accuracy and reliability. This is because the model has been trained and optimized using data that conform to these ranges, which minimizes the risk of extrapolation errors and ensures that the results are reflective of the model’s true performance capabilities.
By employing this GUI, users can easily conduct experiments, test hypotheses, or apply the model to new datasets that share similar characteristics. Furthermore, the GUI improves the reproducibility of the results, as it provides clear guidance on the acceptable range of input values, ensuring consistency across different users and datasets. This setup allows other researchers or practitioners to replicate this study with a high degree of precision, furthering the practical applicability of the developed method. Therefore, this tool bridges the gap between the theoretical model and its practical application, offering both high accuracy and reproducibility when the input data adhere to the specified ranges.
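The GUI's range guard can be sketched as follows. The parameter names mirror three of the study's inputs, but the numeric ranges shown are purely hypothetical placeholders, not the actual training-data limits displayed in the interface:

```python
# Hypothetical ranges for illustration only; the real GUI derives and
# displays the ranges of all 11 inputs from the training dataset.
TRAINING_RANGES = {
    "FA": (250.0, 500.0),   # fly ash content (assumed limits)
    "CT": (25.0, 90.0),     # curing temperature (assumed limits)
    "M":  (8.0, 16.0),      # NaOH molarity (assumed limits)
}

def validate_inputs(inputs, ranges=TRAINING_RANGES):
    """Reject any value outside its training-data range, mirroring the
    GUI's guard against extrapolation before the model is queried."""
    bad = {k: v for k, v in inputs.items()
           if not (ranges[k][0] <= v <= ranges[k][1])}
    if bad:
        raise ValueError(f"Inputs outside training range: {bad}")
    return inputs
```

Rejecting out-of-range inputs before prediction is what keeps the reported accuracy meaningful, since the model was never trained on values outside these intervals.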

7. Limitations

In this study, various soft computing techniques, including SVR, ANN, ANFIS, and their hybrid forms (GA-ANFIS and FFA-ANFIS), were employed to predict the compressive strength of geopolymer concrete (CSGePC). While these techniques demonstrated strong performance in predicting CSGePC, there are inherent limitations to consider.
The hybrid models, particularly GA-ANFIS and FFA-ANFIS, involve complex optimization processes and multiple parameters. This complexity can make them less interpretable than simpler methods, which may limit their practical application and understanding. Moreover, the accuracy and effectiveness of the soft computing models are highly dependent on the quality and quantity of the dataset. This study utilized a relatively small dataset (61 samples), which may affect the generalizability of the results; larger and more diverse datasets would allow a more comprehensive evaluation of the models’ performance. Notably, the performance of these models is sensitive to the chosen hyperparameters and optimization algorithms: variations in swarm sizes and other parameters can significantly impact the results, as evidenced by the variability in R2 values across different models and swarm sizes. Furthermore, the models were specifically tuned for predicting CSGePC with certain input parameters. While the results are promising, applying these models to other types of concrete or different construction materials may require further validation and adjustment.
When considering the transition from laboratory-scale production of geopolymer concrete (GePC) to larger-scale applications, several practical considerations must be addressed. These include maintaining the consistency and quality of raw materials across larger production volumes, the need for specialized equipment to ensure proper mixing and curing, and the potential variations in environmental conditions that can affect the curing process. Moreover, sourcing sufficient quantities of industrial by-products like fly ash and ensuring compliance with construction standards are essential for the successful implementation of GePC in large-scale projects. These factors, though not covered in this study, are important areas for future research to facilitate the broader application of GePC in the construction industry.

8. Conclusions

To reduce carbon emissions during the creation of media visual sculptures and to convey design elements efficiently, a new cementing material, geopolymer concrete (GePC), has garnered attention from engineers and designers; accurately predicting its compressive strength (CSGePC) is therefore of practical importance.
  • This study extensively examined the predictive capabilities of various soft computing techniques, focusing on models such as FFA-ANFIS and GA-ANFIS, which displayed significant accuracy in predicting CSGePC.
  • By analyzing experimental datasets and conducting comprehensive statistical evaluations, the study revealed that the FFA-ANFIS model achieved the highest performance, with a mean absolute error (MAE) of 0.8114 and Nash–Sutcliffe efficiency (NS) of 0.9858, whereas the GA-ANFIS model exhibited slightly lower accuracy, with a MAE of 1.4143 and an NS of 0.9671. These results highlight the superiority of hybrid models, especially FFA-ANFIS, in delivering precise and reliable predictions for the compressive strength of geopolymer concrete.
  • This research not only contributes to advancing the use of geopolymer concrete as a sustainable construction material but also emphasizes the practical value of soft computing techniques in optimizing material properties, minimizing waste, and reducing environmental impacts.
  • The findings indicate the potential of these methods in real-world applications, such as large-scale construction projects where accurate strength prediction is crucial for structural integrity and material efficiency. Moreover, the study underscores the relevance of hybrid soft computing techniques in broader construction engineering and material science contexts.
  • As the construction industry continues to adopt more environmentally conscious practices, integrating these predictive models can significantly contribute to sustainable construction.
  • Future research should focus on expanding the dataset to encompass a wider range of geographical and material conditions, as well as testing these models in practical, industrial-scale applications to further validate their effectiveness.
  • This study lays the groundwork for several interesting future research directions. First, incorporating larger and more diverse datasets can improve the robustness and generalizability of predictive models. Advanced data augmentation techniques, such as Conditional Tabular Generative Adversarial Networks (CTGAN), could also be employed to synthetically expand existing datasets, potentially enhancing model accuracy further. Furthermore, exploring additional input parameters relevant to the geopolymer concrete formulation may capture more complex interactions and lead to improved predictive capabilities. Future studies could also experiment with advanced hybrid or ensemble learning techniques, which may yield even higher accuracy and adaptability for compressive strength prediction models in the construction industry.
  • In summary, this research offers valuable insights into the applicability of hybrid soft computing methods in the prediction of geopolymer concrete strength and highlights the importance of adopting innovative, AI-driven approaches in promoting both sustainable and efficient construction practices.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/buildings14113505/s1.

Author Contributions

Conceptualization, Z.C., K.Z. and J.H.; Methodology, Z.C., X.S., K.Z. and J.H.; Validation, X.S.; Formal analysis, Z.C. and J.H.; Data curation, Z.C., X.S., K.Z., Y.L., Y.D. and J.H.; Writing—original draft, K.Z., Y.L., Y.D. and J.H.; Writing—review & editing, X.S., Y.L. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Open fund for the Key Laboratory of Green Mining of Coal Resources of Ministry of Education (KLXGY-KB2415), National Natural Science Foundation of China (52104153), Research and Engineering Demonstration of Low Cost Large Scale Purification and Cascade Utilization Technology for Mining Brackish Water in the Zhundong Region (2023B03009-1), Natural Science Foundation of Xinjiang Uygur Autonomous Region (2021D01B34), and Natural Science Foundation of Colleges and Universities in Xinjiang Uygur Autonomous Region (XJEDU2024J126).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

  26. Hosseini, S.; Armaghani, D.J.; He, X.; Pradhan, B.; Zhou, J.; Sheng, D. Fuzzy Cognitive Map for Evaluating Critical Factors Causing Rockbursts in Underground Construction: A Fundamental Study. Rock Mech. Rock Eng. 2024, 57, 9713–9738. [Google Scholar] [CrossRef]
  27. Yeluri, S.C.; Singh, K.; Kumar, A.; Aggarwal, Y.; Sihag, P. Estimation of Compressive Strength of Rubberised Slag Based Geopolymer Concrete Using Various Machine Learning Techniques Based Models. Iran. J. Sci. Technol. Trans. Civ. Eng. 2024, 1–16. [Google Scholar] [CrossRef]
  28. Harirchian, E. Predicting compressive strength of AAC blocks through machine learning advancements. Chall. J. Concr. Res. Lett. 2024, 15, 56–68. [Google Scholar] [CrossRef]
  29. Hosseini, S.; Shokri, B.J.; Mirzaghorbanali, A.; Nourizadeh, H.; Entezam, S.; Motallebiyan, A.; Entezam, A.; McDougall, K.; Karunasena, W.; Aziz, N. Predicting axial-bearing capacity of fully grouted rock bolting systems by applying an ensemble system. Soft Comput. 2024, 28, 10491–10518. [Google Scholar] [CrossRef]
  30. Asteris, P.G.; Ashrafian, A.; Rezaie-Balf, M. Prediction of the compressive strength of self-compacting concrete using surrogate models. Comput. Concr. 2019, 24, 137–150. [Google Scholar] [CrossRef]
  31. Sarıdemir, M. Genetic programming approach for prediction of compressive strength of concretes containing rice husk ash. Constr. Build. Mater. 2010, 24, 1911–1919. [Google Scholar] [CrossRef]
  32. Armaghani, D.J.; Asteris, P.G. A comparative study of ANN and ANFIS models for the prediction of cement-based mortar materials compressive strength. Neural Comput. Appl. 2021, 33, 4501–4532. [Google Scholar] [CrossRef]
  33. Özcan, F.; Atiş, C.D.; Karahan, O.; Uncuoğlu, E.; Tanyildizi, H. Comparison of artificial neural network and fuzzy logic models for prediction of long-term compressive strength of silica fume concrete. Adv. Eng. Softw. 2009, 40, 856–863. [Google Scholar] [CrossRef]
  34. Lan, G.; Wang, Y.; Zeng, G.; Zhang, J. Compressive strength of earth block masonry: Estimation based on neural networks and adaptive network-based fuzzy inference system. Compos. Struct. 2020, 235, 111731. [Google Scholar] [CrossRef]
  35. Apostolopoulou, M.; Asteris, P.G.; Armaghani, D.J.; Douvika, M.G.; Lourenço, P.B.; Cavaleri, L.; Bakolas, A.; Moropoulou, A. Mapping and holistic design of natural hydraulic lime mortars. Cem. Concr. Res. 2020, 136, 106167. [Google Scholar] [CrossRef]
  36. Mansouri, I.; Kisi, O. Prediction of debonding strength for masonry elements retrofitted with FRP composites using neuro fuzzy and neural network approaches. Compos. Part B Eng. 2015, 70, 247–255. [Google Scholar] [CrossRef]
  37. He, B.; Armaghani, D.J.; Lai, S.H. Assessment of tunnel blasting-induced overbreak: A novel metaheuristic-based random forest approach. Tunn. Undergr. Space Technol. 2023, 133, 104979. [Google Scholar] [CrossRef]
  38. Hasanipanah, M.; Monjezi, M.; Shahnazar, A.; Jahed Armaghani, D.; Farazmand, A. Feasibility of indirect determination of blast induced ground vibration based on support vector machine. Meas. J. Int. Meas. Confed. 2015, 75, 289–297. [Google Scholar] [CrossRef]
  39. Momeni, E.; Nazir, R.; Armaghani, D.J.; Maizir, H. Prediction of pile bearing capacity using a hybrid genetic algorithm-based ANN. Measurement 2014, 57, 122–131. [Google Scholar] [CrossRef]
  40. Skentou, A.D.; Bardhan, A.; Mamou, A.; Lemonis, M.E.; Kumar, G.; Samui, P.; Armaghani, D.J.; Asteris, P.G. Closed-Form Equation for Estimating Unconfined Compressive Strength of Granite from Three Non-destructive Tests Using Soft Computing Models. Rock Mech. Rock Eng. 2023, 56, 487–514. [Google Scholar] [CrossRef]
  41. Li, D.; Liu, Z.; Armaghani, D.J.; Xiao, P.; Zhou, J. Novel Ensemble Tree Solution for Rockburst Prediction Using Deep Forest. Mathematics 2022, 10, 787. [Google Scholar] [CrossRef]
  42. Asteris, P.G.; Rizal, F.I.M.; Koopialipoor, M.; Roussis, P.C.; Ferentinou, M.; Armaghani, D.J.; Gordan, B. Slope Stability Classification under Seismic Conditions Using Several Tree-Based Intelligent Techniques. Appl. Sci. 2022, 12, 1753. [Google Scholar] [CrossRef]
  43. Zhou, J.; Huang, S.; Zhou, T.; Armaghani, D.J.; Qiu, Y. Employing a genetic algorithm and grey wolf optimizer for optimizing RF models to evaluate soil liquefaction potential. Artif. Intell. Rev. 2022, 55, 5673–5705. [Google Scholar] [CrossRef]
  44. Xie, C.; Nguyen, H.; Choi, Y.; Armaghani, D.J. Optimized functional linked neural network for predicting diaphragm wall deflection induced by braced excavations in clays. Geosci. Front. 2022, 13, 101313. [Google Scholar] [CrossRef]
  45. Zhou, J.; Qiu, Y.; Zhu, S.; Armaghani, D.J.; Khandelwal, M.; Mohamad, E.T. Estimation of the TBM advance rate under hard rock conditions using XGBoost and Bayesian optimization. Undergr. Space 2021, 6, 506–515. [Google Scholar] [CrossRef]
  46. Zhou, J.; Qiu, Y.; Zhu, S.; Armaghani, D.J.; Li, C.; Nguyen, H.; Yagiz, S. Optimization of support vector machine through the use of metaheuristic algorithms in forecasting TBM advance rate. Eng. Appl. Artif. Intell. 2021, 97, 104015. [Google Scholar] [CrossRef]
  47. Armaghani, D.J.; Koopialipoor, M.; Marto, A.; Yagiz, S. Application of several optimization techniques for estimating TBM advance rate in granitic rocks. J. Rock Mech. Geotech. Eng. 2019, 11, 779–789. [Google Scholar] [CrossRef]
  48. Mustapha, I.B.; Abdulkareem, M.; Jassam, T.M.; AlAteah, A.H.; Al-Sodani, K.A.A.; Al-Tholaia, M.M.H.; Nabus, H.; Alih, S.C.; Abdulkareem, Z.; Ganiyu, A. Comparative Analysis of Gradient-Boosting Ensembles for Estimation of Compressive Strength of Quaternary Blend Concrete. Int. J. Concr. Struct. Mater. 2024, 18, 201–224. [Google Scholar] [CrossRef]
  49. Alhakeem, Z.M.; Jebur, Y.M.; Henedy, S.N.; Imran, H.; Bernardo, L.F.A.; Hussein, H.M. Prediction of Ecofriendly Concrete Compressive Strength Using Gradient Boosting Regression Tree Combined with GridSearchCV Hyperparameter-Optimization Techniques. Materials 2022, 15, 7432. [Google Scholar] [CrossRef]
  50. Faraz, M.I.; Arifeen, S.U.; Amin, M.N.; Nafees, A.; Althoey, F.; Niaz, A. A comprehensive GEP and MEP analysis of a cement-based concrete containing metakaolin. Structures 2023, 53, 937–948. [Google Scholar] [CrossRef]
  51. Shah, S.; Houda, M.; Khan, S.; Althoey, F.; Abuhussain, M.; Abuhussain, M.A.; Ali, M.; Alaskar, A.; Javed, M.F. Mechanical behaviour of E-waste aggregate concrete using a novel machine learning algorithm: Multi expression programming (MEP). J. Mater. Res. Technol. 2023, 25, 5720–5740. [Google Scholar] [CrossRef]
  52. Dey, A.; Rumman, R.; Wakjira, T.G.; Jindal, A.; Bediwy, A.G.; Islam, M.S.; Alam, M.S.; Al Martini, S.; Sabouni, R. Towards net-zero emission: A case study investigating sustainability potential of geopolymer concrete with recycled glass powder and gold mine tailings. J. Build. Eng. 2024, 86, 108683. [Google Scholar] [CrossRef]
  53. Al Martini, S.; Sabouni, R.; Khartabil, A.; Wakjira, T.G.; Alam, M.S. Development and strength prediction of sustainable concrete having binary and ternary cementitious blends and incorporating recycled aggregates from demolished UAE buildings: Experimental and machine learning-based studies. Constr. Build. Mater. 2023, 380, 131278. [Google Scholar] [CrossRef]
  54. Wakjira, T.G.; Kutty, A.A.; Alam, M.S. A novel framework for developing environmentally sustainable and cost-effective ultra-high-performance concrete (UHPC) using advanced machine learning and multi-objective optimization techniques. Constr. Build. Mater. 2024, 416, 135114. [Google Scholar] [CrossRef]
  55. Wakjira, T.G.; Alam, M.S. Performance-based seismic design of Ultra-High-Performance Concrete (UHPC) bridge columns with design example—Powered by explainable machine learning model. Eng. Struct. 2024, 314, 118346. [Google Scholar] [CrossRef]
  56. Biswas, R.; Bardhan, A.; Samui, P.; Rai, B.; Nayak, S.; Armaghani, D.J. Efficient soft computing techniques for the prediction of compressive strength of geopolymer concrete. Comput. Concr. 2021, 28, 221–232. [Google Scholar] [CrossRef]
  57. Paji, M.K.; Gordan, B.; Biklaryan, M.; Armaghani, D.J.; Zhou, J.; Jamshidi, M. Neuro-swarm and neuro-imperialism techniques to investigate the compressive strength of concrete constructed by freshwater and magnetic salty water. Measurement 2021, 182, 109720. [Google Scholar] [CrossRef]
  58. Sun, L.; Koopialipoor, M.; Armaghani, D.J.; Tarinejad, R.; Tahir, M.M. Applying a meta-heuristic algorithm to predict and optimize compressive strength of concrete samples. Eng. Comput. 2019, 37, 1133–1145. [Google Scholar] [CrossRef]
  59. Mohammed, A.; Kurda, R.; Armaghani, D.J.; Hasanipanah, M. Prediction of Compressive Strength of Concrete Modified with Fly Ash: Applications of Neuro-Swarm and Neuro-Imperialism Models. Comput. Concr. 2021, 27, 489–512. [Google Scholar]
  60. Zhang, X.; Bayat, V.; Koopialipoor, M.; Armaghani, D.J.; Yong, W.; Zhou, J. Evaluation of Structural Safety Reduction Due to Water Penetration into a Major Structural Crack in a Large Concrete Project. Smart Struct. Syst. 2020, 26, 319–329. [Google Scholar] [CrossRef]
  61. Liao, J.; Asteris, P.G.; Cavaleri, L.; Mohammed, A.S.; Lemonis, M.E.; Tsoukalas, M.Z.; Skentou, A.D.; Maraveas, C.; Koopialipoor, M.; Armaghani, D.J. Novel Fuzzy-Based Optimization Approaches for the Prediction of Ultimate Axial Load of Circular Concrete-Filled Steel Tubes. Buildings 2021, 11, 629. [Google Scholar] [CrossRef]
  62. Armaghani, D.J.; Asteris, P.G.; Fatemi, S.A.; Hasanipanah, M.; Tarinejad, R.; Rashid, A.S.A.; Huynh, V. Van on the Use of Neuro-Swarm System to Forecast the Pile Settlement. Appl. Sci. 2020, 10, 1904. [Google Scholar] [CrossRef]
  63. Ghanizadeh, A.R.; Ghanizadeh, A.; Asteris, P.G.; Fakharian, P.; Armaghani, D.J. Developing bearing capacity model for geogrid-reinforced stone columns improved soft clay utilizing MARS-EBS hybrid method. Transp. Geotech. 2023, 38, 100906. [Google Scholar] [CrossRef]
  64. Shan, F.; He, X.; Armaghani, D.J.; Zhang, P.; Sheng, D. Success and challenges in predicting TBM penetration rate using recurrent neural networks. Tunn. Undergr. Space Technol. 2022, 130, 104728. [Google Scholar] [CrossRef]
  65. Zhou, J.; Asteris, P.G.; Armaghani, D.J.; Pham, B.T. Prediction of ground vibration induced by blasting operations through the use of the Bayesian Network and random forest models. Soil Dyn. Earthq. Eng. 2020, 139, 106390. [Google Scholar] [CrossRef]
  66. Jahed Armaghani, D.; Ming, Y.Y.; Salih Mohammed, A.; Momeni, E.; Maizir, H. Effect of SVM Kernel Functions on Bearing Capacity Assessment of Deep Foundations. J. Soft Comput. Civ. Eng. 2023, 7, 111–128. [Google Scholar]
  67. Fakharian, P.; Eidgahee, D.R.; Akbari, M.; Jahangir, H.; Taeb, A.A. Compressive strength prediction of hollow concrete masonry blocks using artificial intelligence algorithms. Structures 2023, 47, 1790–1802. [Google Scholar] [CrossRef]
  68. Huang, J.; Sun, Y.; Zhang, J. Reduction of computational error by optimizing SVR kernel coefficients to simulate concrete compressive strength through the use of a human learning optimization algorithm. Eng. Comput. 2022, 38, 3151–3168. [Google Scholar] [CrossRef]
  69. Sarir, P.; Chen, J.; Asteris, P.G.; Armaghani, D.J.; Tahir, M.M. Developing GEP tree-based, neuro-swarm, and whale optimization models for evaluation of bearing capacity of concrete-filled steel tube columns. Eng. Comput. 2021, 37, 1–19. [Google Scholar] [CrossRef]
  70. Balf, F.R.; Kordkheili, H.M.; Kordkheili, A.M. A New Method for Predicting the Ingredients of Self-Compacting Concrete (SCC) Including Fly Ash (FA) Using Data Envelopment Analysis (DEA). Arab. J. Sci. Eng. 2021, 46, 4439–4460. [Google Scholar] [CrossRef]
  71. Ahmad, A.; Farooq, F.; Ostrowski, K.A.; Śliwa-Wieczorek, K.; Czarnecki, S. Application of Novel Machine Learning Techniques for Predicting the Surface Chloride Concentration in Concrete Containing Waste Material. Materials 2021, 14, 2297. [Google Scholar] [CrossRef]
  72. Azimi-Pour, M.; Eskandari-Naddaf, H.; Pakzad, A. Linear and non-linear SVM prediction for fresh properties and compressive strength of high volume fly ash self-compacting concrete. Constr. Build. Mater. 2020, 230, 117021. [Google Scholar] [CrossRef]
  73. Saha, P.; Debnath, P.; Thomas, P. Prediction of fresh and hardened properties of self-compacting concrete using support vector regression approach. Neural Comput. Appl. 2020, 32, 7995–8010. [Google Scholar] [CrossRef]
  74. Shahmansouri, A.A.; Bengar, H.A.; Jahani, E. Predicting compressive strength and electrical resistivity of eco-friendly concrete containing natural zeolite via GEP algorithm. Constr. Build. Mater. 2019, 229, 116883. [Google Scholar] [CrossRef]
  75. Aslam, F.; Farooq, F.; Amin, M.N.; Khan, K.; Waheed, A.; Akbar, A.; Javed, M.F.; Alyousef, R.; Alabdulijabbar, H. Applications of Gene Expression Programming for Estimating Compressive Strength of High-Strength Concrete. Adv. Civ. Eng. 2020, 2020, 1–23. [Google Scholar] [CrossRef]
  76. Farooq, F.; Amin, M.N.; Khan, K.; Sadiq, M.R.; Javed, M.F.; Aslam, F.; Alyousef, R. A Comparative Study of Random Forest and Genetic Engineering Programming for the Prediction of Compressive Strength of High Strength Concrete (HSC). Appl. Sci. 2020, 10, 7330. [Google Scholar] [CrossRef]
  77. Asteris, P.G.; Kolovos, K. Self-compacting concrete strength prediction using surrogate models. Neural Comput. Appl. 2019, 31, 409–424. [Google Scholar] [CrossRef]
  78. Selvaraj, S.; Sivaraman, S. RETRACTED ARTICLE: Prediction model for optimized self-compacting concrete with fly ash using response surface method based on fuzzy classification. Neural Comput. Appl. 2019, 31, 1365–1373. [Google Scholar] [CrossRef]
  79. Zhang, J.; Ma, G.; Huang, Y.; Sun, J.; Aslani, F.; Nener, B. Modelling uniaxial compressive strength of lightweight self-compacting concrete using random forest regression. Constr. Build. Mater. 2019, 210, 713–719. [Google Scholar] [CrossRef]
  80. Kaveh, A.; Bakhshpoori, T.; Hamze-Ziabari, S.M. M5′ and Mars Based Prediction Models for Properties of Self- Compacting Concrete Containing Fly Ash. Period. Polytech. Civ. Eng. 2018, 62, 281–294. [Google Scholar] [CrossRef]
  81. Sathyan, D.; Anand, K.B.; Prakash, A.J.; Premjith, B. Modeling the Fresh and Hardened Stage Properties of Self-Compacting Concrete using Random Kitchen Sink Algorithm. Int. J. Concr. Struct. Mater. 2018, 12, 24. [Google Scholar] [CrossRef]
  82. Vakhshouri, B.; Nejadi, S. Prediction of compressive strength of self-compacting concrete by ANFIS models. Neurocomputing 2018, 280, 13–22. [Google Scholar] [CrossRef]
  83. Douma, O.B.; Boukhatem, B.; Ghrici, M.; Tagnit-Hamou, A. Prediction of properties of self-compacting concrete containing fly ash using artificial neural network. Neural Comput. Appl. 2016, 28, 707–718. [Google Scholar] [CrossRef]
  84. Abu Yaman, M.; Elaty, M.A.; Taman, M. Predicting the ingredients of self compacting concrete using artificial neural network. Alex. Eng. J. 2017, 56, 523–532. [Google Scholar] [CrossRef]
  85. Ahmad, A.; Farooq, F.; Niewiadomski, P.; Ostrowski, K.; Akbar, A.; Aslam, F.; Alyousef, R. Prediction of Compressive Strength of Fly Ash Based Concrete Using Individual and Ensemble Algorithm. Materials 2021, 14, 794. [Google Scholar] [CrossRef] [PubMed]
  86. Farooq, F.; Ahmed, W.; Akbar, A.; Aslam, F.; Alyousef, R. Predictive modeling for sustainable high-performance concrete from industrial wastes: A comparison and optimization of models using ensemble learners. J. Clean. Prod. 2021, 292, 126032. [Google Scholar] [CrossRef]
  87. Buši, R.; Benšić, M.; Miličević, I.; Strukar, K. Prediction Models for the Mechanical Properties of Self-Compacting Concrete with Recycled Rubber and Silica Fume. Materials 2020, 13, 1821. [Google Scholar] [CrossRef] [PubMed]
  88. Javed, M.F.; Farooq, F.; Memon, S.A.; Akbar, A.; Khan, M.A.; Aslam, F.; Alyousef, R.; Alabduljabbar, H.; Rehman, S.K.U.; Rehman, S.K.U.; et al. New Prediction Model for the Ultimate Axial Capacity of Concrete-Filled Steel Tubes: An Evolutionary Approach. Crystals 2020, 10, 741. [Google Scholar] [CrossRef]
  89. Nematzadeh, M.; Shahmansouri, A.A.; Fakoor, M. Post-fire compressive strength of recycled PET aggregate concrete reinforced with steel fibers: Optimization and prediction via RSM and GEP. Constr. Build. Mater. 2020, 252, 119057. [Google Scholar] [CrossRef]
  90. Güçlüer, K.; Özbeyaz, A.; Göymen, S.; Günaydın, O. A comparative investigation using machine learning methods for concrete compressive strength estimation. Mater. Today Commun. 2021, 27, 102278. [Google Scholar] [CrossRef]
  91. Ahmad, A.; Ostrowski, K.A.; Maślak, M.; Farooq, F.; Mehmood, I.; Nafees, A. Comparative Study of Supervised Machine Learning Algorithms for Predicting the Compressive Strength of Concrete at High Temperature. Materials 2021, 14, 4222. [Google Scholar] [CrossRef]
  92. Asteris, P.G.; Skentou, A.D.; Bardhan, A.; Samui, P.; Pilakoutas, K. Predicting concrete compressive strength using hybrid ensembling of surrogate machine learning models. Cem. Concr. Res. 2021, 145, 106449. [Google Scholar] [CrossRef]
  93. Emad, W.; Mohammed, A.S.; Kurda, R.; Ghafor, K.; Cavaleri, L.; Qaidi, S.M.A.; Hassan, A.; Asteris, P.G. Prediction of concrete materials compressive strength using surrogate models. Structures 2022, 46, 1243–1267. [Google Scholar] [CrossRef]
  94. Shen, Z.; Deifalla, A.F.; Kamiński, P.; Dyczko, A. Compressive Strength Evaluation of Ultra-High-Strength Concrete by Machine Learning. Materials 2022, 15, 3523. [Google Scholar] [CrossRef]
  95. Kumar, A.; Arora, H.C.; Kapoor, N.R.; Mohammed, M.A.; Kumar, K.; Majumdar, A.; Thinnukool, O. Compressive Strength Prediction of Lightweight Concrete: Machine Learning Models. Sustainability 2022, 14, 2404. [Google Scholar] [CrossRef]
  96. Jaf, D.K.I.; Abdulrahman, P.I.; Mohammed, A.S.; Kurda, R.; Qaidi, S.M.; Asteris, P.G. Machine learning techniques and multi-scale models to evaluate the impact of silicon dioxide (SiO2) and calcium oxide (CaO) in fly ash on the compressive strength of green concrete. Constr. Build. Mater. 2023, 400, 132604. [Google Scholar] [CrossRef]
  97. Mahmood, W.; Mohammed, A.S.; Asteris, P.G.; Ahmed, H. Soft computing technics to predict the early-age compressive strength of flowable ordinary Portland cement. Soft Comput. 2023, 27, 3133–3150. [Google Scholar] [CrossRef]
  98. Ali, R.; Muayad, M.; Mohammed, A.S.; Asteris, P.G. Analysis and prediction of the effect of Nanosilica on the compressive strength of concrete with different mix proportions and specimen sizes using various numerical approaches. Struct. Concr. 2023, 24, 4161–4184. [Google Scholar] [CrossRef]
  99. Wakjira, T.G.; Alam, M.S. Peak and ultimate stress-strain model of confined ultra-high-performance concrete (UHPC) using hybrid machine learning model with conditional tabular generative adversarial network. Appl. Soft Comput. 2024, 154, 111353. [Google Scholar] [CrossRef]
  100. Hosseini, S.; Mousavi, A.; Monjezi, M. Prediction of blast-induced dust emissions in surface mines using integration of dimensional analysis and multivariate regression analysis. Arab. J. Geosci. 2022, 15, 163. [Google Scholar] [CrossRef]
  101. Albostami, A.S.; Al-Hamd, R.K.S.; Al-Matwari, A.A. Data-Driven Predictive Modeling of Steel Slag Concrete Strength for Sustainable Construction. Buildings 2024, 14, 2476. [Google Scholar] [CrossRef]
  102. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  103. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Meth-Ods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  104. Pettijohn, F.J. Sedimentary Rocks; Harper & Row: New York, NY, USA, 1975; Volume 3. [Google Scholar]
  105. Ferris, M.C.; Munson, T.S. Interior-Point Methods for Massive Support Vector Machines. SIAM J. Optim. 2003, 13, 783–804. [Google Scholar] [CrossRef]
  106. Drucker, H.; Burges, C.; Kaufman, L.; Smola, A.; Vapnik, V. Linear Support Vector Regression Machines. Adv. Neural Inf. Process. Syst. 1996, 9, 155–161. [Google Scholar]
  107. Awad, M.; Khanna, R.; Awad, M.; Khanna, R. Support Vector Regression. In Efficient Learning Machines Theories, Concepts, and Applications for Engineers and System Designers; Springer Nature: Berlin/Heidelberg, Germany, 2015; pp. 67–80. [Google Scholar]
  108. Zhang, F.; O’Donnell, L.J. Support Vector Regression. In Machine Learning; Elsevier: Amsterdam, The Netherlands, 2020; pp. 123–140. [Google Scholar]
  109. Garcia-Pedrajas, N.; Hervas-Martinez, C.; Munoz-Perez, J. COVNET: A cooperative coevolutionary model for evolving artificial neural networks. IEEE Trans. Neural Netw. 2003, 14, 575–596. [Google Scholar] [CrossRef] [PubMed]
  110. Jang, J.-S.R. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  111. Mohan, S.; Vijayalakshmi, D.P. Genetic Algorithm Applications in Water Resources. ISH J. Hydraul. Eng. 2009, 15, 97–128. [Google Scholar] [CrossRef]
  112. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  113. Hosseini, S.; Khatti, J.; Taiwo, B.O.; Fissha, Y.; Grover, K.S.; Ikeda, H.; Pushkarna, M.; Berhanu, M.; Ali, M. Assessment of the ground vibration during blasting in mining projects using different computational approaches. Sci. Rep. 2023, 13, 18582–18629. [Google Scholar] [CrossRef]
  114. Hosseini, S.; Poormirzaee, R.; Gilani, S.-O.; Jiskani, I.M. A reliability-based rock engineering system for clean blasting: Risk analysis and dust emissions forecasting. Clean Technol. Environ. Policy 2023, 25, 1903–1920. [Google Scholar] [CrossRef]
  115. Hosseini, S.; Pourmirzaee, R.; Armaghani, D.J.; Sabri, M.M.S. Prediction of ground vibration due to mine blasting in a surface lead–zinc mine using machine learning ensemble techniques. Sci. Rep. 2023, 13, 65911–65920. [Google Scholar] [CrossRef]
  116. Hosseini, S.; Mousavi, A.; Monjezi, M.; Khandelwal, M. Mine-to-crusher policy: Planning of mine blasting patterns for environmentally friendly and optimum fragmentation using Monte Carlo simulation-based multi-objective grey wolf optimization approach. Resour. Policy 2022, 79, 103087. [Google Scholar] [CrossRef]
  117. Hosseini, S.; Monjezi, M.; Bakhtavar, E. Minimization of blast-induced dust emission using gene-expression programming and grasshopper optimization algorithm: A smart mining solution based on blasting plan optimization. Clean Technol. Environ. Policy 2022, 24, 2313–2328. [Google Scholar] [CrossRef]
  118. Hosseini, S.; Monjezi, M.; Bakhtavar, E.; Mousavi, A. Prediction of Dust Emission Due to Open Pit Mine Blasting Using a Hybrid Artificial Neural Network. Nat. Resour. Res. 2021, 30, 4773–4788. [Google Scholar] [CrossRef]
  119. Bakhtavar, E.; Hosseini, S.; Hewage, K.; Sadiq, R. Air Pollution Risk Assessment Using a Hybrid Fuzzy Intelligent Probability-Based Approach: Mine Blasting Dust Impacts. Nat. Resour. Res. 2021, 30, 2607–2627. [Google Scholar] [CrossRef]
  120. Bakhtavar, E.; Hosseini, S.; Hewage, K.; Sadiq, R. Green blasting policy: Simultaneous forecast of vertical and horizontal distribution of dust emissions using artificial causality-weighted neural network. J. Clean. Prod. 2021, 283, 124562. [Google Scholar] [CrossRef]
  121. Chang, C.; Lin, C.-J. LIBSVM: A Library for Support Vector Machines. Technical Report; Department of Computer Science and Information Engineering National Taiwan University: Taipei, Taiwan, 2001; Available online: http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf (accessed on 23 August 2022).
  122. Anandhi, V.; Chezian, R.M. Support Vector Regression to Forecast the Demand and Supply of Pulpwood. Int. J. Futur. Comput. Commun. 2013, 2, 266–269. [Google Scholar] [CrossRef]
  123. Zhao, J.; Hosseini, S.; Chen, Q.; Armaghani, D.J. Super learner ensemble model: A novel approach for predicting monthly copper price in future. Resour. Policy 2023, 85, 103903. [Google Scholar] [CrossRef]
  124. Toghroli, A.; Mohammadhassani, M.; Suhatril, M.; Shariati, M.; Ibrahim, Z. Prediction of shear capacity of channel shear connectors using the ANFIS model. Steel Compos. Struct. 2014, 17, 623–639. [Google Scholar] [CrossRef]
  125. Sedghi, Y.; Zandi, Y.; Shariati, M.; Ahmadi, E.; Azar, V.M.; Toghroli, A.; Safa, M.; Mohamad, E.T.; Khorami, M.; Wakil, K. Application of ANFIS Technique on Performance of C and L Shaped Angle Shear Connectors. Smart Struct. Syst. 2018, 22, 335–340. [Google Scholar] [CrossRef]
  126. Safa, M.; Shariati, M.; Ibrahim, Z.; Toghroli, A.; Bin Baharom, S.; Nor, N.M.; Petkovic, D. Potential of adaptive neuro fuzzy inference system for evaluating the factors affecting steel-concrete composite beam’s shear strength. Steel Compos. Struct. 2016, 21, 679–688. [Google Scholar] [CrossRef]
  127. Koçaslan, A.; Yüksek, A.G.; Görgülü, K.; Arpaz, E. Evaluation of blast-induced ground vibrations in open-pit mines by using adaptive neuro-fuzzy inference systems. Environ. Earth Sci. 2017, 76, 57. [Google Scholar] [CrossRef]
  128. Hasanipanah, M.; Armaghani, D.J.; Khamesi, H.; Amnieh, H.B.; Ghoraba, S. Several non-linear models in estimating air-overpressure resulting from mine blasting. Eng. Comput. 2015, 32, 441–455. [Google Scholar] [CrossRef]
  129. Fattahi, H.; Bayatzadehfard, Z. A Comparison of Performance of Several Artificial Intelligence Methods for Estimation of Required Rotational Torque to Operate Horizontal Directional Drilling. Iran Univ. Sci. Technol. 2017, 7, 45–70. [Google Scholar]
  130. Yang, X. Nature-Inspired Metaheuristic Algorithms; Cambridge University Press: Cambridge, UK, 2010; ISBN 9781905986286. [Google Scholar]
  131. Majumder, A.; Das, A.; Das, P.K. A standard deviation based firefly algorithm for multi-objective optimization of WEDM process during machining of Indian RAFM steel. Neural Comput. Appl. 2018, 29, 665–677. [Google Scholar] [CrossRef]
  132. Kazemivash, B.; Moghaddam, M.E. A predictive model-based image watermarking scheme using Regression Tree and Firefly algorithm. Soft Comput. 2018, 22, 4083–4098. [Google Scholar] [CrossRef]
  133. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. Application of reliability-based back-propagation causality-weighted neural networks to estimate air-overpressure due to mine blasting. Eng. Appl. Artif. Intell. 2022, 115, 105281. [Google Scholar] [CrossRef]
  134. Hosseini, S.; Poormirzaee, R.; Hajihassani, M. An uncertainty hybrid model for risk assessment and prediction of blast-induced rock mass fragmentation. Int. J. Rock Mech. Min. Sci. Géoméch. Abstr. 2022, 160, 105250. [Google Scholar] [CrossRef]
  135. Hosseini, S.; Poormirzaee, R.; Hajihassani, M.; Kalatehjari, R. An ANN-Fuzzy Cognitive Map-Based Z-Number Theory to Predict Flyrock Induced by Blasting in Open-Pit Mines. Rock Mech. Rock Eng. 2022, 55, 4373–4390. [Google Scholar] [CrossRef]
  136. Wang, Q.; Qi, J.; Hosseini, S.; Rasekh, H.; Huang, J. ICA-LightGBM Algorithm for Predicting Compressive Strength of Geo-Polymer Concrete. Buildings 2023, 13, 2278. [Google Scholar] [CrossRef]
  137. Lawal, A.I.; Hosseini, S.; Kim, M.; Ogunsola, N.O.; Kwon, S. Prediction of factor of safety of slopes using stochastically modified ANN and classical methods: A rigorous statistical model selection approach. Nat. Hazards 2023, 120, 2035–2056. [Google Scholar] [CrossRef]
  138. Kahraman, E.; Hosseini, S.; Taiwo, B.O.; Fissha, Y.; Jebutu, V.A.; Akinlabi, A.A.; Adachi, T. Fostering sustainable mining practices in rock blasting: Assessment of blast toe volume prediction using comparative analysis of hybrid ensemble machine learning techniques. J. Saf. Sustain. 2024, 1, 75–88. [Google Scholar] [CrossRef]
139. Esangbedo, M.O.; Taiwo, B.O.; Abbas, H.H.; Hosseini, S.; Sazid, M.; Fissha, Y. Enhancing the exploitation of natural resources for green energy: An application of LSTM-based meta-model for aluminum prices forecasting. Resour. Policy 2024, 92, 105014.
140. Bin, F.; Hosseini, S.; Chen, J.; Samui, P.; Fattahi, H.; Armaghani, D.J. Proposing Optimized Random Forest Models for Predicting Compressive Strength of Geopolymer Composites. Infrastructures 2024, 9, 181.
141. Pourmirzaee, R.; Hosseini, S. Development of an ANN-Based Technique for Inversion of Seismic Refraction Travel Times. J. Environ. Eng. Geophys. 2024, 29, 75–90.
142. Hosseini, S.; Javanshir, S.; Sabeti, H.; Tahmasebizadeh, P. Mathematical-Based Gene Expression Programming (GEP): A Novel Model to Predict Zinc Separation from a Bench-Scale Bioleaching Process. J. Sustain. Met. 2023, 9, 1601–1619.
143. Zhou, J.; Su, Z.; Hosseini, S.; Tian, Q.; Lu, Y.; Luo, H.; Xu, X.; Chen, C.; Huang, J. Decision tree models for the estimation of geo-polymer concrete compressive strength. Math. Biosci. Eng. 2023, 21, 1413–1444.
144. Hosseini, S.; Pourmirzaee, R. Green policy for managing blasting induced dust dispersion in open-pit mines using probability-based deep learning algorithm. Expert Syst. Appl. 2023, 240, 122469.
145. Hosseini, S.; Entezam, S.; Shokri, B.J.; Mirzaghorbanali, A.; Nourizadeh, H.; Motallebiyan, A.; Entezam, A.; McDougall, K.; Karunasena, W.; Aziz, N. Predicting grout’s uniaxial compressive strength (UCS) for fully grouted rock bolting system by applying ensemble machine learning techniques. Neural Comput. Appl. 2024, 36, 18387–18412.
146. Taiwo, B.O.; Fissha, Y.; Hosseini, S.; Khishe, M.; Kahraman, E.; Adebayo, B.; Sazid, M.; Adesida, P.A.; Famobuwa, O.V.; Faluyi, J.O.; et al. Machine learning based prediction of flyrock distance in rock blasting: A safe and sustainable mining approach. Green Smart Min. Eng. 2024, 1, 346–361.
147. Hosseini, S.; Gordan, B.; Kalkan, E. Development of Z number-based fuzzy inference system to predict bearing capacity of circular foundations. Artif. Intell. Rev. 2024, 57, 146.
148. Taiwo, B.O.; Hosseini, S.; Fissha, Y.; Kilic, K.; Olusola, O.A.; Chandrahas, N.S.; Li, E.; Akinlabi, A.A.; Khan, N.M. Indirect Evaluation of the Influence of Rock Boulders in Blasting to the Geohazard: Unearthing Geologic Insights Fused with Tree Seed based LSTM Algorithm. Geohazard Mech. 2024, in press.
149. Kamran, M.; Chaudhry, W.; Taiwo, B.O.; Hosseini, S.; Rehman, H. Decision Intelligence-Based Predictive Modelling of Hard Rock Pillar Stability Using K-Nearest Neighbour Coupled with Grey Wolf Optimization Algorithm. Processes 2024, 12, 783.
150. Wang, X.; Hosseini, S.; Armaghani, D.J.; Mohamad, E.T. Data-Driven Optimized Artificial Neural Network Technique for Prediction of Flyrock Induced by Boulder Blasting. Mathematics 2023, 11, 2358.
Figure 1. Histogram plot of CSGePC data.
Figure 2. Heatmap of CSGePC data.
Figure 3. Boxplot of effective parameters to detect outlier data.
Figure 4. Flowchart of the research to predict CSGePC.
Figure 5. Architecture of the ANFIS model.
Figure 6. Flowchart of the ANFIS combined with the GA algorithm.
Figure 7. Flowchart of the ANFIS combined with the FFA algorithm.
Figure 8. Prediction results of the developed models in the training phase (above) and testing phase (below).
Figure 9. R2 value of the FFA-ANFIS model for the training set (left) and test set (right).
Figure 10. R2 value of the GA-ANFIS model for the training set (left) and test set (right).
Figure 11. R2 value of the ANFIS model for the training set (left) and test set (right).
Figure 12. R2 value of the ANN model for the training set (left) and test set (right).
Figure 13. R2 value of the SVR model for the training set (left) and test set (right).
Figure 14. R2 value of the developed models for all samples.
Figure 15. Taylor diagram of the developed models.
Figure 16. The strength of the relationships between input parameters and CSGePC.
Figure 17. A GUI designed for predicting CSGePC.
Table 1. Use of artificial intelligence techniques to predict various characteristics of concrete.
Author | Year | Technique | Number of Data
Huang et al. [68] | 2021 | SVM | 114
Sarir et al. [69] | 2019 | GEP | 303
Balf et al. [70] | 2021 | DEA | 114
Ahmad et al. [71] | 2021 | GEP, ANNs, DT | 642
Azimi-Pour et al. [72] | 2020 | SVM | –
Saha et al. [73] | 2020 | SVM | 115
Shahmansouri et al. [74] | 2019 | GEP | 54
Aslam et al. [75] | 2020 | GEP | 357
Farooq et al. [76] | 2020 | RF and GEP | 357
Asteris and Kolovos [77] | 2019 | ANNs | 205
Selvaraj and Sivaraman [78] | 2019 | IREMSVM-FR with RSM | 114
Zhang et al. [79] | 2019 | RF | 131
Kaveh et al. [80] | 2018 | M5, MARS | 114
Sathyan et al. [81] | 2018 | RKSA | 40
Vakhshouri and Nejadi [82] | 2018 | ANFISs | 55
Belalia Douma et al. [83] | 2017 | ANNs | 114
Abu Yaman et al. [84] | 2017 | ANNs | 69
Ahmad et al. [85] | 2021 | GEP, DT, and bagging | 270
Farooq et al. [86] | 2021 | ANNs, bagging, and boosting | 1030
Bušić et al. [87] | 2020 | MV | 21
Javad et al. [88] | 2020 | GEP | 277
Nematzadeh et al. [89] | 2020 | RSM, GEP | 108
Güçlüer et al. [90] | 2021 | ANNs, SVM, DT | 100
Ahmad et al. [91] | 2021 | ANNs, DT, GB | 207
Asteris et al. [92] | 2021 | ANNs, GPR, MARS | 1030
Emad et al. [93] | 2022 | ANNs, M5P | 306
Shen et al. [94] | 2022 | XGBoost, AdaBoost, and bagging | 372
Kumar et al. [95] | 2022 | GPR, SVMR | 194
Jaf et al. [96] | 2023 | NLR, MLR, ANNs | 236
Mahmood et al. [97] | 2023 | NLR, M5P, ANNs | 280
Ali et al. [98] | 2023 | LR, MLR, NLR, PQ, IA, FQ | 420
SVM: support vector machine, GEP: gene expression programming, ANNs: artificial neural networks, DT: decision tree, RF: random forest, DEA: data envelopment analysis, RSM: response surface methodology, ANFISs: adaptive neuro-fuzzy inference systems, MV: Micali–Vazirani algorithm, RKSA: retina key scheduling algorithm, GB: gradient boosting, GPR: Gaussian process regression, MARS: multivariate adaptive regression splines, SVMR: support vector machine regression, NLR: nonlinear regression, MLR: multi-linear regression, LR: linear regression, PQ: pure quadratic, IA: interaction, FQ: full quadratic.
Table 2. Descriptive statistics of input parameters for geopolymer concrete data.
No. | Variable | Sign | Unit | Min | Mean | Max | StD
1 | Fly ash | FA | kg/m3 | 298.00 | 401.92 | 430.00 | 39.13
2 | Rest period | RP | h | 0.00 | 14.16 | 72.00 | 14.78
3 | Curing temperature | CT | °C | 40.00 | 71.80 | 100.00 | 18.66
4 | Curing period | CP | h | 24.00 | 27.93 | 48.00 | 8.96
5 | NaOH/Na2SiO3 | NaOH/Na2SiO3 | – | 0.30 | 0.40 | 0.50 | 0.03
6 | Superplasticizer | Su | kg/m3 | 0.00 | 4.11 | 10.50 | 4.38
7 | Extra water added | EW | kg/m3 | 0.00 | 5.74 | 35.00 | 13.07
8 | Molarity | M | – | 8.00 | 12.66 | 18.00 | 2.77
9 | Alkaline activator/binder ratio | AAB | – | 0.25 | 0.38 | 0.45 | 0.05
10 | Coarse aggregate | CA | kg/m3 | 875.00 | 1223.92 | 1377.00 | 158.90
11 | Fine aggregate | FAg | kg/m3 | 533.00 | 605.56 | 875.00 | 121.05
12 | Compressive strength | CSGePC | MPa | 17.50 | 38.71 | 47.92 | 7.08
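The summary statistics in Table 2 can be reproduced directly from the raw mix-design data. A minimal sketch with NumPy, using a handful of illustrative fly-ash contents (the full 61-sample dataset is not reproduced here); note that StD in such tables is usually the sample standard deviation:

```python
import numpy as np

# Illustrative fly-ash contents (kg/m3) spanning the range reported in Table 2
fly_ash = np.array([298.0, 331.0, 364.0, 397.0, 430.0])

stats = {
    "min": float(fly_ash.min()),
    "mean": float(fly_ash.mean()),
    "max": float(fly_ash.max()),
    "std": float(fly_ash.std(ddof=1)),  # ddof=1 -> sample (not population) standard deviation
}
print(stats)
```

Repeating the same call column by column yields one row of Table 2 per input variable.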
Table 3. Performance of the GA-ANFIS model with various swarm sizes.
Model No. | Swarm Size | R2 of Train | R2 of Test
1 | 10 | 0.9614 | 0.8880
2 | 20 | 0.9604 | 0.9027
3 | 30 | 0.9741 | 0.9311
4 | 40 | 0.9702 | 0.9345
5 | 50 | 0.9787 | 0.9553
6 | 60 | 0.9809 | 0.9736
7 | 70 | 0.9630 | 0.9725
8 | 80 | 0.9624 | 0.9704
9 | 90 | 0.9587 | 0.9678
10 | 100 | 0.9672 | 0.9355
Table 4. Performance of the FFA-ANFIS model with various swarm sizes.
Model No. | Swarm Size | R2 of Train | R2 of Test
1 | 25 | 0.9841 | 0.9786
2 | 50 | 0.9819 | 0.9805
3 | 75 | 0.9843 | 0.9808
4 | 100 | 0.9835 | 0.9796
5 | 125 | 0.9814 | 0.9799
6 | 150 | 0.9819 | 0.9787
7 | 175 | 0.9875 | 0.9860
8 | 200 | 0.9879 | 0.9852
9 | 225 | 0.9910 | 0.9886
10 | 250 | 0.9863 | 0.9859
11 | 275 | 0.9882 | 0.9849
12 | 300 | 0.9857 | 0.9814
13 | 325 | 0.9817 | 0.9793
14 | 350 | 0.9826 | 0.9815
15 | 375 | 0.9827 | 0.9814
16 | 400 | 0.9825 | 0.9816
17 | 425 | 0.9839 | 0.9828
18 | 450 | 0.9825 | 0.9836
19 | 475 | 0.9822 | 0.9821
20 | 500 | 0.9827 | 0.9796
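The tuning in Tables 3 and 4 amounts to a grid search over swarm size, with the test-set R2 as the selection criterion. A minimal sketch of that selection, using the first twelve FFA-ANFIS rows transcribed from Table 4 (the 225-firefly run is also the best over the full sweep):

```python
# (swarm_size, R2_train, R2_test) triples transcribed from Table 4 (FFA-ANFIS)
results = [
    (25, 0.9841, 0.9786), (50, 0.9819, 0.9805), (75, 0.9843, 0.9808),
    (100, 0.9835, 0.9796), (125, 0.9814, 0.9799), (150, 0.9819, 0.9787),
    (175, 0.9875, 0.9860), (200, 0.9879, 0.9852), (225, 0.9910, 0.9886),
    (250, 0.9863, 0.9859), (275, 0.9882, 0.9849), (300, 0.9857, 0.9814),
]

# Pick the configuration that generalizes best (highest test-set R2)
best = max(results, key=lambda row: row[2])
print(best)  # (225, 0.991, 0.9886)
```

Selecting on the test-set score rather than the training score guards against picking an overfitted configuration such as the swarm-size-90 GA-ANFIS run in Table 3.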
Table 5. Statistical metrics achieved by the developed models.
Training phase
Model | MAE | NS | RMSE | VAF | WI | R2 | SI
SVR | 1.0056 | 0.9144 | 1.1492 | 91.4713 | 0.7999 | 0.9600 | 0.0280
ANN | 0.8868 | 0.9344 | 1.0059 | 93.4460 | 0.8411 | 0.9670 | 0.0245
ANFIS | 0.6040 | 0.9687 | 0.6950 | 96.9493 | 0.9283 | 0.9870 | 0.0169
FFA-ANFIS | 0.3593 | 0.9888 | 0.4165 | 99.0725 | 0.9738 | 0.9950 | 0.0101
GA-ANFIS | 0.4694 | 0.9805 | 0.5492 | 98.0456 | 0.9546 | 0.9900 | 0.0134
Testing phase
Model | MAE | NS | RMSE | VAF | WI | R2 | SI
SVR | 2.2237 | 0.9138 | 2.5402 | 91.3890 | 0.5448 | 0.9560 | 0.0872
ANN | 2.5805 | 0.9010 | 2.7222 | 92.3568 | 0.4612 | 0.9620 | 0.0976
ANFIS | 1.7283 | 0.9383 | 2.1500 | 93.8257 | 0.6853 | 0.9730 | 0.0736
FFA-ANFIS | 0.8114 | 0.9858 | 1.0322 | 98.7778 | 0.9236 | 0.9940 | 0.0358
GA-ANFIS | 1.4143 | 0.9671 | 1.5693 | 96.8278 | 0.8207 | 0.9870 | 0.0532
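The metrics in Table 5 follow standard definitions. A sketch of common formulations (the exact variants used in the paper, particularly for WI and SI, may differ slightly), exercised on a few measured/predicted CSGePC pairs taken from Table 9:

```python
import numpy as np

def regression_metrics(y, y_hat):
    """Common error/agreement metrics for a regression model."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    err = y - y_hat
    mae = np.mean(np.abs(err))                                   # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                            # root mean squared error
    ns = 1.0 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)    # Nash-Sutcliffe efficiency
    vaf = (1.0 - np.var(err) / np.var(y)) * 100.0                # variance accounted for, %
    wi = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(y_hat - y.mean()) + np.abs(y - y.mean())) ** 2)  # Willmott's index of agreement
    si = rmse / y.mean()                                         # scatter index
    return {"MAE": mae, "RMSE": rmse, "NS": ns, "VAF": vaf, "WI": wi, "SI": si}

# Measured vs. FFA-ANFIS-predicted values from the first rows of Table 9
metrics = regression_metrics([32.5, 35.0, 37.0], [32.2312, 35.2832, 37.2387])
print(metrics)
```

A perfect model returns MAE = RMSE = SI = 0 and NS = WI = 1 (VAF = 100%), which is the direction in which the hybrid models in Table 5 move relative to plain SVR and ANN.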
Table 6. Ratings of the statistical metrics of the developed models (a higher rating is better).
Training phase
Model | MAE | NS | RMSE | VAF | WI | R2 | SI
SVR | 1 | 1 | 1 | 1 | 1 | 1 | 1
ANN | 2 | 2 | 2 | 2 | 2 | 2 | 2
ANFIS | 3 | 3 | 3 | 3 | 3 | 3 | 3
FFA-ANFIS | 5 | 5 | 5 | 5 | 5 | 5 | 5
GA-ANFIS | 4 | 4 | 4 | 4 | 4 | 4 | 4
Testing phase
Model | MAE | NS | RMSE | VAF | WI | R2 | SI
SVR | 2 | 2 | 2 | 1 | 2 | 1 | 2
ANN | 1 | 1 | 1 | 2 | 1 | 2 | 1
ANFIS | 3 | 3 | 3 | 3 | 3 | 3 | 3
FFA-ANFIS | 5 | 5 | 5 | 5 | 5 | 5 | 5
GA-ANFIS | 4 | 4 | 4 | 4 | 4 | 4 | 4
Table 7. Selecting the best predictive model based on the total rate of the models.
Model | Rate of Training | Rate of Testing | Total Rate | Rank
SVR | 7 | 12 | 19 | 5
ANN | 14 | 9 | 23 | 4
ANFIS | 21 | 21 | 42 | 3
FFA-ANFIS | 35 | 35 | 70 | 1
GA-ANFIS | 28 | 28 | 56 | 2
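The total rates in Table 7 are simple sums of the per-metric ratings in Table 6. A sketch of that aggregation, with the ratings transcribed from Table 6:

```python
# Per-metric ratings from Table 6, ordered [MAE, NS, RMSE, VAF, WI, R2, SI]
train_ratings = {
    "SVR": [1] * 7, "ANN": [2] * 7, "ANFIS": [3] * 7,
    "FFA-ANFIS": [5] * 7, "GA-ANFIS": [4] * 7,
}
test_ratings = {
    "SVR": [2, 2, 2, 1, 2, 1, 2], "ANN": [1, 1, 1, 2, 1, 2, 1],
    "ANFIS": [3] * 7, "FFA-ANFIS": [5] * 7, "GA-ANFIS": [4] * 7,
}

# Total rate = sum of training-phase and testing-phase ratings
total = {m: sum(train_ratings[m]) + sum(test_ratings[m]) for m in train_ratings}
ranking = sorted(total, key=total.get, reverse=True)
print(total)
print(ranking)  # FFA-ANFIS first, SVR last, matching Table 7
```

The computed totals (19, 23, 42, 70, 56) reproduce the Total Rate column of Table 7 exactly.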
Table 8. Comparison of the proposed method of this study with those from the literature for anticipating CSGePC.
Author | Year | Technique | Number of Data | R2
Huang et al. [68] | 2021 | SVM | 114 | 0.947
Sarir et al. [69] | 2019 | GEP | 303 | 0.939
Ahmad et al. [71] | 2021 | GEP, ANN, DT | 642 | 0.88
Azimi-Pour et al. [72] | 2020 | SVM | Not reported | 0.9909
Saha et al. [73] | 2020 | SVM | 115 | 0.955
Shahmansouri et al. [74] | 2019 | GEP | 54 | 0.9071
Aslam et al. [75] | 2020 | GEP | 357 | 0.957
Farooq et al. [76] | 2020 | RF and GEP | 357 | 0.99
Belalia Douma et al. [83] | 2017 | ANN | 114 | 0.95
Javad et al. [88] | 2020 | GEP | 277 | 0.99
Güçlüer et al. [90] | 2021 | ANN, SVM, DT | 100 | 0.86
Emad et al. [93] | 2022 | ANN, M5P | 306 | 0.966
Kumar et al. [95] | 2022 | GPR, SVMR | 194 | 0.9803
Jaf et al. [96] | 2023 | NLR, MLR, ANN | 236 | 0.987
Ali et al. [98] | 2023 | LR, MLR, NLR, PQ, IA, FQ | 420 | 0.96
Our study | 2024 | SVR, ANN, ANFIS, FFA-ANFIS, GA-ANFIS | 61 | 0.956, 0.962, 0.973, 0.994, and 0.987
Table 9. Evaluation of the developed FFA-ANFIS model by using unseen data.
FA (kg/m3) | RP (h) | CT (°C) | CP (h) | NaOH/Na2SiO3 | Su (kg/m3) | EW (kg/m3) | M | AAB | CA (kg/m3) | FAg (kg/m3) | Measured CSGePC (MPa) | Predicted by FFA-ANFIS (MPa) | Error
430 | 24 | 60 | 24 | 0.4 | 8.60 | 0 | 8 | 0.45 | 1243 | 533 | 32.5 | 32.2312 | 0.2688
397 | 24 | 70 | 24 | 0.4 | 7.94 | 0 | 8 | 0.45 | 1307 | 547 | 35.0 | 35.2832 | 0.2832
364 | 24 | 80 | 24 | 0.4 | 7.28 | 0 | 8 | 0.45 | 1311 | 562 | 37.0 | 37.2387 | 0.2387
331 | 24 | 90 | 24 | 0.4 | 6.62 | 0 | 8 | 0.45 | 1344 | 576 | 37.5 | 37.5511 | 0.0511
298 | 24 | 100 | 24 | 0.4 | 5.96 | 0 | 8 | 0.45 | 1377 | 590 | 35.0 | 35.0483 | 0.0483
430 | 24 | 60 | 24 | 0.4 | 8.60 | 0 | 10 | 0.45 | 1243 | 533 | 36.0 | 35.7907 | 0.2093
397 | 24 | 70 | 24 | 0.4 | 7.94 | 0 | 10 | 0.45 | 1307 | 547 | 36.5 | 37.0187 | 0.5187
364 | 24 | 80 | 24 | 0.4 | 7.28 | 0 | 10 | 0.45 | 1311 | 562 | 37.5 | 37.9978 | 0.4978
331 | 24 | 90 | 24 | 0.4 | 6.62 | 0 | 10 | 0.45 | 1344 | 576 | 38.0 | 38.7225 | 0.7225
298 | 24 | 100 | 24 | 0.4 | 5.96 | 0 | 10 | 0.45 | 1377 | 590 | 37.5 | 37.6078 | 0.1078
430 | 24 | 60 | 24 | 0.4 | 8.60 | 0 | 12 | 0.45 | 1243 | 533 | 39.0 | 38.7448 | 0.2552
397 | 24 | 70 | 24 | 0.4 | 7.94 | 0 | 12 | 0.45 | 1307 | 547 | 40.0 | 40.3755 | 0.3755
364 | 24 | 80 | 24 | 0.4 | 7.28 | 0 | 12 | 0.45 | 1311 | 562 | 40.5 | 40.6324 | 0.1324
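The Error column of Table 9 is the absolute difference between measured and predicted CSGePC. A quick check on the first five rows of the table:

```python
# Measured and FFA-ANFIS-predicted CSGePC (MPa) from the first five rows of Table 9
measured = [32.5, 35.0, 37.0, 37.5, 35.0]
predicted = [32.2312, 35.2832, 37.2387, 37.5511, 35.0483]

# Absolute prediction error, rounded to the table's four decimal places
errors = [round(abs(m - p), 4) for m, p in zip(measured, predicted)]
print(errors)  # [0.2688, 0.2832, 0.2387, 0.0511, 0.0483]
```

The recomputed values match the tabulated Error column, confirming the table's internal consistency.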
Chang, Z.; Shi, X.; Zheng, K.; Lu, Y.; Deng, Y.; Huang, J. Soft Computing Techniques to Model the Compressive Strength in Geo-Polymer Concrete: Approaches Based on an Adaptive Neuro-Fuzzy Inference System. Buildings 2024, 14, 3505. https://doi.org/10.3390/buildings14113505