Article

Artificial Neural Network (ANN) Modelling for Biogas Production in Pre-Commercialized Integrated Anaerobic-Aerobic Bioreactors (IAAB)

1 Department of Chemical and Environmental Engineering, University of Nottingham Malaysia, 43500 Semenyih, Selangor, Malaysia
2 Department of Fundamental and Applied Sciences, HICoE-Centre for Biofuel and Biochemical Research, Institute of Self-Sustainable Building, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak Darul Ridzuan, Malaysia
3 Faculty of Bioengineering and Technology, Universiti Malaysia Kelantan, Jeli Campus, 17600 Jeli, Kelantan, Malaysia
4 Department of Chemical and Materials Engineering, Tamkang University, Tamsui, New Taipei 251, Taiwan
5 Department of Chemistry, Faculty of Science, Universiti Brunei Darussalam, Jalan Tungku Link, Gadong BE1410, Brunei
6 Residues and Resource Reclamation Centre, Nanyang Environment and Water Research Institute, Nanyang Technological University, 1 Cleantech Loop, CleanTech One, Singapore 637141, Singapore
7 School of Civil and Environmental Engineering, College of Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
8 Department of Biotechnology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
9 Chemistry Section, School of Distance Education, Universiti Sains Malaysia, 11800 Minden, Penang, Malaysia
* Author to whom correspondence should be addressed.
Water 2022, 14(9), 1410; https://doi.org/10.3390/w14091410
Submission received: 2 March 2022 / Revised: 7 April 2022 / Accepted: 7 April 2022 / Published: 28 April 2022
(This article belongs to the Special Issue Water Quality Engineering and Wastewater Treatment II)

Abstract

The use of an integrated anaerobic-aerobic bioreactor (IAAB) to treat palm oil mill effluent (POME) has shown promising results, successfully overcoming the large space requirement of the conventional treatment method. An understanding of the synergism between the anaerobic digestion and the aerobic process is required to achieve maximum biogas production and COD removal. Hence, this work presents the use of artificial neural networks (ANNs) to predict the COD removal (%), purity of methane (%), and methane yield (LCH4/gCODremoved) of anaerobic digestion, and the COD removal (%), biochemical oxygen demand (BOD) removal (%), and total suspended solids (TSS) removal (%) of the aerobic process in a pre-commercialized IAAB located in Negeri Sembilan, Malaysia. MATLAB R2019b was used to develop the two ANN models. Bayesian regularization backpropagation (BR) showed the best performance among the 12 training algorithms tested. The trained ANN models showed high accuracy (R2 > 0.997) and demonstrated good alignment with the industrial data obtained from the pre-commercialized IAAB over a 6-month period. The developed ANN models were subsequently used to determine the optimal operating conditions that maximize the output parameters. The COD removal (%) was improved by 33.9% (from 68.7% to 92%), while the methane yield was improved by 13.4% (from 0.23 LCH4/gCODremoved to 0.26 LCH4/gCODremoved). Sensitivity analysis shows that the inlet COD is the most influential input parameter affecting the methane yield and the anaerobic COD, BOD, and TSS removals, while for the aerobic process, COD removal is most affected by the mixed liquor suspended solids (MLSS). The trained ANN models can be utilized as a decision support system (DSS) for operators to predict the behavior of the IAAB system and to solve the problems of instability and inconsistent biogas production in the anaerobic digestion process. This is of utmost importance for the successful commercialization of this IAAB technology. Additional input parameters such as the mixing time, reaction time, nutrients (ammonium nitrogen and total phosphorus), and concentration of microorganisms could be considered to further improve the ANN models.

1. Introduction

The palm oil industry in Malaysia is substantial, contributing nearly 3.6% to Malaysia’s gross domestic product (GDP) in 2020 [1]. Moreover, Malaysia’s global market shares in palm oil production and export are 25.8% and 34.3%, respectively [2]. Hence, it is important that the waste produced by the palm oil industry, namely palm oil mill effluent (POME), is regulated and treated safely before being discharged into the environment. POME contains several pollutants, including phosphorus, sulphate, volatile solids, and total organic carbon [3]. It is estimated that 2.5 to 3.5 tons of POME are produced for every 1 ton of crude palm oil produced locally [4]. Therefore, a properly operated wastewater treatment plant (WWTP) is essential [5].
There are many methods of treating POME, the most common being anaerobic and facultative digestion [6]. Some recent studies suggest more innovative ways of treating POME. A study by Chong et al. [7] highlights the benefit of combining the aerobic and anaerobic processes in a single bioreactor, namely the integrated anaerobic-aerobic bioreactor (IAAB), which performs much better than conventional POME treatment methods. Simulation of the IAAB shows that removal efficiencies of chemical oxygen demand (COD) and biochemical oxygen demand (BOD) of up to 99% can be achieved while net expenditure can be reduced by 5.8%. However, two main concerns have to be addressed to convince palm oil millers to adopt this newly invented IAAB technology for generating biogas and producing high-quality treated effluent simultaneously: the stability of the anaerobic digestion process and the consistency of biogas generation. POME treatment is prone to stability issues because the characteristics of POME vary with the crop season and the loading rate of the milling process. Moreover, a lack of skilled personnel for monitoring and controlling the anaerobic digester can slow the response to potential instability [8]. Therefore, the development of a prediction model of the anaerobic digestion of POME in the IAAB is crucial to address these issues.
Despite the development of several mechanistic models such as the International Water Association (IWA) Anaerobic Digestion Model No. 1 (ADM1) [9], the large number of parameters and estimations required [10] makes it difficult to quantify the organic composition of the effluent feed stream, which is essential information for utilizing such models. In recent years, machine learning techniques such as artificial neural networks (ANNs) have increasingly been used to model and optimize the process as an alternative to mechanistic models [11]. ANN modelling can serve as a monitoring framework for WWTP operation that supports cost optimization and identification of the quality or stability of the wastewater effluent. The practical benefit of using an ANN in a WWTP is its ability to identify the complex non-linear relationships between input and output parameters, which is useful for predicting and estimating WWTP effluent properties without prior knowledge of the theoretical and physical laws governing the biological process. These benefits do not apply, however, if the simulation is done solely with engineering software such as HYSYS or SuperPro. ANN modelling can predict the performance of a WWTP with a high degree of accuracy, provided the quality of the historical data is fairly good [12]. Numerous studies using ANNs to optimize anaerobic digestion have been reported, with R2 values of up to 0.998. For instance, Güçlü, Yılmaz and Ozkan-Yucel [13] achieved R2 = 0.89 using an ANN model trained on pH value, gas flow rate, VFAs, temperature, organic matter, dry matter, and alkalinity. Similarly, Sathish and Vivekanandan [14] used an ANN to optimize the anaerobic digestion process with four input variables: temperature, agitation time, pH level, and substrate concentration. They obtained a satisfactory result, with an accuracy of R2 = 0.998 for the prediction of biogas production. Therefore, the ANN approach is preferred in this study due to its faster and simpler computational modelling, particularly for large-scale operations such as the IAAB.
However, the potential use of ANNs to accurately model the methane production and chemical oxygen demand (COD) removal of an IAAB treating POME has not been reported. This is necessary because POME has varying characteristics (COD of 44,300–112,000 mg/L), so close monitoring and efficient feedback are required to ensure stable operation [8]. Most importantly, with the aid of a prediction model, process optimization can be conducted at a lower cost and in a shorter time prior to the real application of the IAAB at industrial scale. Therefore, this work aims to model the POME treatment process using ANNs to further optimize biogas production and COD removal efficiencies. MATLAB software is used to develop an artificial neural network based on the feedforward neural network (FFNN) principle. A set of training data (96 data sets) obtained from the pre-commercialized IAAB over a six-month period is used to train the feedforward network using 12 different backpropagation training algorithms. It is hypothesized that this ANN model can serve as a platform to estimate the behavior of the IAAB in treating POME and provide decisive information to plant engineers for applying suitable control approaches and preventing digester upset.

2. Methodology

The ANN is trained using 12 different training algorithms in MATLAB R2019b. The best training algorithm is determined by considering several performance criteria. Two ANN models are developed, one for the anaerobic digestion and one for the aerobic process. The training data set for the ANN models was obtained from the pre-commercialized IAAB located at Negeri Sembilan, Malaysia; the operation of the IAAB can be found elsewhere [7]. The ANN model with the highest prediction accuracy and acceptable correlation coefficient (R), mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) is used. Sensitivity analysis is conducted to evaluate the relative importance of the input parameters on the output parameters. The first ANN model is used to determine the optimum operating conditions for the COD removal, purity of methane, and methane yield of the anaerobic digestion. The second ANN model is used to determine the optimum operating conditions for the COD removal, biochemical oxygen demand (BOD) removal, and total suspended solids (TSS) removal of the aerobic process. The industrial data (experimental data) are compared with the predicted data obtained from the ANN models.

2.1. Artificial Neural Network Model (ANN)

Artificial neural networks (ANNs), or neural networks, are computational models used in machine learning to predict the output of a process. The structure of an ANN is similar to that of the biological neural networks in the human brain in the way that signals are transferred from one node to another [15]. Numerous studies have applied ANNs to improve anaerobic digestion. For instance, a feed-forward backpropagation ANN was used to investigate the influence of substrates such as food waste and vegetable waste and of the organic loading rate on methane generation, achieving a methane purity of around 60 to 70% [16]. Moreover, an ANN model with an R value of 0.997 was developed by Mougari et al. [17] to predict the production of methane and biogas from the anaerobic digestion of organic wastes. A typical ANN architecture is made up of a single input layer, one or more hidden layers, and a single output layer. Each layer contains several nodes (neurons, in biological terms) that connect to the next layer. The nodes are given weights that are adjusted during the training process: the weights are increased or decreased to strengthen or weaken the signal of the interconnected neurons.
In a single layer architecture, the connection between the nodes can be shown in Equation (1) [18].
$$Y_i = f\left(\sum_{j=1}^{N} W_{ij} X_j + b_i\right) \qquad (1)$$
where Y is the output value, X is the input value, and W and b represent the weights and bias, respectively. The subscripts i and j denote the current and previous layers, respectively. The function f is the activation (or transfer) function of the single-layer architecture. Multiple types of activation functions can be used to model a single neuron; the three most commonly used are the log-sigmoid, tan-sigmoid, and linear transfer functions. When modelling a neural network for pattern recognition problems, sigmoid output neurons are the most suitable, whereas linear output neurons are more suitable for function fitting problems [19]. A non-linear activation function is needed for the network to learn non-linear relationships between the input and output variables. Between the tan-sigmoid (tansig) and log-sigmoid (logsig) functions, tansig is preferred because its [−1, 1] output range, compared with [0, 1] for logsig, provides a stronger gradient. Building an ANN model involves three phases: training, testing, and validation. The dataset is therefore typically divided into three categories, where 70% to 80% is used for training and the remaining data are used for testing and validation. Training a feedforward neural network involves two main processes: (a) the feed-forward process and (b) the back-propagation process. In the feed-forward process, the input is propagated through the hidden layer to the output layer, where randomly initialized weights, biases, and the activation function produce a predicted value [20]. In the back-propagation phase, the weights are adjusted to minimize the error between the predicted value and the experimental value from the training data. This is done by a specific backpropagation training algorithm.
The data are normalized before training the network so that each input contributes equally and is scaled to a standard range, which improves the efficiency of the learning process [21]. The input and output data can be normalized using Equation (2).
$$y = y_{min} + \frac{(x - x_{min})(y_{max} - y_{min})}{x_{max} - x_{min}} \qquad (2)$$
where xmin and xmax represent the minimum and maximum values in the dataset, while ymin and ymax represent the normalization range. Finally, y is the normalized value of x. The normalization range here is between −1 and 1.
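For illustration, MATLAB’s mapminmax function (Deep Learning Toolbox) applies exactly this linear mapping row by row. The sketch below uses a small made-up input matrix to show how Equation (2) would be applied in practice and how the stored settings are reused for new data; in this work, the inputs would be the matrix of recorded plant parameters.

```matlab
% Minimal sketch of Equation (2) using mapminmax, which rescales each row
% to [ymin, ymax] (here [-1, 1]). xRaw is a made-up 2 x 3 example matrix.
xRaw = [10 20 30;
         1  2  3];
[xNorm, ps] = mapminmax(xRaw, -1, 1);   % ps stores xmin and xmax per row

% New data must be scaled with the SAME settings, and model outputs can be
% mapped back to engineering units with the inverse transform:
xNewNorm = mapminmax('apply', [15; 1.5], ps);
xBack    = mapminmax('reverse', xNewNorm, ps);
```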
The hidden layers comprise several hidden neurons, the number of which is adjustable. The learning performance of an ANN usually improves as the number of hidden neurons increases. However, too many hidden neurons can degrade performance, as the weights are given too much freedom to adjust and overfitting problems may arise. On the other hand, too few hidden neurons will limit the performance of the network.
Anaerobic digestion remains something of a black box: the process is difficult to control because of its complex mechanism and its many, highly variable process parameters [22]. Mechanistic modelling of anaerobic digestion is therefore too simplistic and unreliable, as there is limited information on the gas-liquid mass transfer coefficients relevant to biogas formation [23]. The complexity of the anaerobic digestion mechanism makes black-box modelling such as an artificial neural network a more attractive option than mechanistic modelling. The advantage of an ANN is that it can provide accurate predictions without any prior information about the relationships between the process parameters and the output [24]. Moreover, the learning abilities of ANNs allow them to adapt to the complex non-linear behavior of the many processes in a wastewater treatment plant [25].
There are different types of ANN models, which can be classified according to their purpose and relevant features. For example, the Hopfield network is a type of recurrent ANN best used for image recognition and detection, enhancement of X-ray images, etc. [26]. The Kohonen network is a two-layer (input and output) neural network that forms data clusters and is also referred to as a self-organizing map; it is typically used to compress higher-dimensional data into a lower-dimensional representation while maintaining the content [27]. Both the Kohonen and Hopfield networks use unsupervised learning algorithms. An unsupervised learning algorithm does not need labelled or classified input data, as it identifies patterns in the data itself. Supervised learning algorithms, on the other hand, require labelled and structured input data that correspond to the output data. The most common neural network using a supervised learning algorithm is the feedforward backpropagation (BP) network, also referred to as the multi-layer perceptron (MLP). The BP network is made up of three layers, namely the input layer (independent variables), hidden layer, and output layer (dependent variables). The number of hidden layers can be varied, and their function is to help capture the non-linear relationship between the input and output data through supervised learning. A BP network is versatile and flexible and can be used for classification, data modelling, pattern recognition, and forecasting [28]. Hence, a feedforward BP network is used here, as the parameters (input data) in a wastewater treatment plant are structured and labelled and have an effect on the output data. However, it should be acknowledged that the accuracy of an ANN model depends on the quality of the training data; increasing the amount of good-quality training data will improve the model’s accuracy.

2.1.1. Activation Function

In a neural network, the activation function determines how the weighted sum of the input neuron signals is transformed into an output node. Activation functions fall into two categories: linear and non-linear. The linear function (purelin), shown in Equation (3), simply returns its input.
$$f(x) = \mathrm{purelin}(x) = x \qquad (3)$$
The linear activation function is used at the output layer to find a linear approximation to a non-linear function. A non-linear function is required to introduce non-linearity to the network [29]; as such, a non-linear function is used in the hidden layer. There is a greater variety of non-linear activation functions, such as the radial basis (radbas) function, tangent sigmoid (tansig) function, logistic sigmoid (logsig) function, rectified linear unit (ReLU) function, and symmetric hard-limit (hardlims) function. Non-linearity is essential to allow the model to generate a complex mapping between the input and output layers, so that the network can predict a target variable that varies non-linearly with its input variables. A literature study [30] comparing different combinations of activation functions in the hidden and output layers concluded that a tansig non-linear activation function in the hidden layer with a purelin linear activation function in the output layer produces the most accurate predictions of wastewater effluent. The tansig activation function has also shown the highest accuracy in ANN modelling for predicting the susceptibility of shallow landslides, with an accuracy of 90% [20]. The tan-sigmoid function shown in Equation (4) has an output range of −1 to 1. A hyperbolic tangent function is preferred as it gives a stronger gradient and both positive and negative output values.
$$\mathrm{tansig}(x) = \frac{2}{1 + e^{-2x}} - 1 \qquad (4)$$
The linear activation function was chosen for the output layer as it is suitable for continuous-valued targets such as the chemical oxygen demand (COD) and biogas production.
Hence, the final form of the proposed feedforward neural network model can be written as Equation (5).
$$\mathrm{output}_k = \sum_{j=1}^{8}\left[LW_{k,j}\left(\frac{2}{1+\exp\left(-2\left(\sum_{i=1}^{5} IW_{j,i}\,x_i + b1_j\right)\right)} - 1\right)\right] + b2_k \qquad (5)$$
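As an illustrative sketch of Equation (5), the snippet below propagates one normalized input vector through a single-hidden-layer tansig/purelin network. The matrices IW, b1, LW, and b2 are random stand-ins for trained values such as those reported in Tables A1 and A2, and the 6-12-3 sizing follows the anaerobic architecture developed later in this work.

```matlab
% Forward pass of Equation (5) for one (already normalized) input record.
% Random weights are used purely for illustration; after training, these
% would come from net.IW{1,1}, net.b{1}, net.LW{2,1}, and net.b{2}.
rng(0);                           % reproducible stand-in weights
nIn = 6; nHid = 12; nOut = 3;     % the 6-12-3 anaerobic architecture
IW = randn(nHid, nIn);  b1 = randn(nHid, 1);
LW = randn(nOut, nHid); b2 = randn(nOut, 1);

x      = rand(nIn, 1)*2 - 1;         % one input vector scaled to [-1, 1]
hidden = tansig(IW*x + b1);          % Equation (4): 2./(1+exp(-2*z)) - 1
y      = purelin(LW*hidden + b2);    % Equation (3): linear output layer
```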

2.1.2. Training Algorithms

Among the most important values in an artificial neural network model are the weights and biases. A training algorithm is used to determine the values of the weights and biases that best predict the output. The most commonly used training algorithm is the Levenberg-Marquardt (LM) method, owing to its lower computational time and better performance [31,32].
The LM training algorithm is a variant of the Gauss-Newton method, an optimization technique that also utilizes steepest descent for complex non-linear patterns. Generally, training algorithms based on quasi-Newton methods require less computational time. However, the drawback of the LM second-order optimization technique is that it requires storing and loading the inverse Hessian approximation matrix, meaning extra computer memory is needed. Combining the Hessian approximation in Equation (6) with the Gauss-Newton update in Equation (7), the LM algorithm can be represented as Equation (8).
$$H = J_k^T J_k + \mu I \qquad (6)$$
where μ represents the combination coefficient and I is the identity matrix.
$$w_{k+1} = w_k - \left(J_k^T J_k\right)^{-1} J_k^T e_k \qquad (7)$$
$$w_{k+1} = w_k - \left(J_k^T J_k + \mu I\right)^{-1} J_k^T e_k \qquad (8)$$
The combination of the Gauss-Newton and steepest descent algorithms enables LM to train ANN models with high efficiency and short computation times. It is typically used for training small networks, where its computation time is the shortest, rather than for large networks such as those for image recognition, where the computation time becomes long.
The BR training algorithm can help overcome the problem of overfitting during the training of the neural network, and hence enhances the prediction accuracy. The adjustment of weights and biases is based on LM optimization [33]; BR determines the weight combination that minimizes the squared error while producing a network that generalizes well. A BR-trained neural network is almost identical to a back-propagation network, the difference being an additional ridge parameter included in the objective function. There are many advantages of using BR as the training algorithm: the probability of the neural network being over-trained or over-fitted is low, performance is not strongly affected by the size of the network, and only a minimal dataset is required, as Bayesian neural networks generate consistent results [34]. Hence, the BR training algorithm was chosen to develop the ANN models for the anaerobic digestion and the aerobic process.
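A minimal MATLAB sketch of this setup is given below, assuming normalized inputs X (6 x N) and targets T (3 x N); random placeholder data are used here so the snippet runs on its own. Note that trainbr disables validation-based early stopping, since the Bayesian regularization itself controls overfitting.

```matlab
% Hedged sketch: a 6-12-3 feedforward network trained with Bayesian
% regularization (trainbr), mirroring the configuration described above.
X = rand(6, 96)*2 - 1;                 % placeholder inputs (6 x 96 records)
T = rand(3, 96);                       % placeholder targets (3 outputs)

net = feedforwardnet(12, 'trainbr');   % 12 hidden neurons, BR algorithm
net.layers{1}.transferFcn = 'tansig';  % non-linear hidden layer
net.layers{2}.transferFcn = 'purelin'; % linear output layer
net.divideParam.trainRatio = 0.70;     % 70/15/15 split (Section 2.3);
net.divideParam.valRatio   = 0.15;     % trainbr ignores validation stops
net.divideParam.testRatio  = 0.15;
net.trainParam.showWindow  = false;    % suppress the training GUI

[net, tr] = train(net, X, T);          % tr records the training history
Y = net(X);                            % predictions for all records
```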
The performance of the ANN model is measured using the mean squared error (MSE), coefficient of correlation (R), mean absolute error (MAE), and mean absolute percentage error (MAPE), shown in Equations (9)–(12).
$$\mathrm{MSE} = \frac{\sum_{i=1}^{N}\left(E_i - P_i\right)^2}{N} \qquad (9)$$
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|E_i - P_i\right| \qquad (10)$$
$$\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|E_i - P_i\right|}{\left|E_i\right|} \qquad (11)$$
$$R = \frac{\sum_{i=1}^{N}\left(P_i - \bar{P}\right)\left(E_i - \bar{E}\right)}{\sqrt{\sum_{i=1}^{N}\left(E_i - \bar{E}\right)^2\,\sum_{i=1}^{N}\left(P_i - \bar{P}\right)^2}} \qquad (12)$$
where Ei is the experimental value, Pi is the predicted value, P̄ is the mean of the predicted values, Ē is the mean of the experimental values, and N is the number of data points.
The MSE value provides information on the accuracy of the ANN model and is hence a crucial performance factor in deciding whether the model can be regarded as usable; it indicates the difference between the predicted values and the experimental data. The lower the MSE value, the more accurate the fit. The R value is used to evaluate the closeness of the data fit to the regression line. R ranges from 0 to 1, and a value closer to 1 indicates a smaller difference between the observed data and the fitted values. An R2 value of more than 0.8 is considered satisfactory.
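The sketch below computes Equations (9)–(12) directly for a pair of short, made-up vectors; with a trained network, E and P would be the measured outputs and the corresponding ANN predictions.

```matlab
% Performance criteria of Equations (9)-(12); E and P are hypothetical
% example vectors standing in for experimental data and ANN predictions.
E = [68.7 70.2 71.5 69.9];
P = [68.9 70.0 71.8 69.5];
N = numel(E);

MSE  = sum((E - P).^2) / N;                          % Equation (9)
MAE  = sum(abs(E - P)) / N;                          % Equation (10)
MAPE = sum(abs(E - P) ./ abs(E)) / N;                % Equation (11)
R    = sum((P - mean(P)).*(E - mean(E))) / ...       % Equation (12)
       sqrt(sum((E - mean(E)).^2) * sum((P - mean(P)).^2));
```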

2.2. Selection of Input and Output Parameters

Several factors greatly affect the performance of the anaerobic digester. These factors are useful in determining the biogas production and the corresponding efficiency, which ultimately helps in further understanding and optimizing the anaerobic digestion process. In addition, with an artificial neural network, the process parameters can be adjusted to maximize the methane yield while minimizing cost.
The most critical process parameters affecting the efficiency of anaerobic digestion include the organic loading rate (OLR), pH level, presence of toxic compounds, temperature, and hydraulic retention time (HRT), among others [35]. The input parameters used to train the artificial neural network for the anaerobic digestion were the inlet flowrate (Qin), OLR, CODin, BODin, TSSin, and pH inlet, which are recorded periodically by the plant operators. These parameters can represent the real situation in the IAAB.
The input parameters used for the aerobic process were OLR, CODin, BODin, TSSin, MLSS, DO, and F/M ratio. A number of parameters could affect the performance of the aerobic reaction. One is the operating temperature of the aerobic process, as it affects the ability of the aerobic bacteria to digest the organic waste, and different species of aerobic bacteria have different optimum temperatures. For instance, the optimum temperature for psychrophilic microorganisms is between 12 °C and 18 °C, for mesophilic microorganisms between 25 °C and 40 °C, and for thermophilic microorganisms between 55 °C and 65 °C [36].
The second factor is the food-to-microorganism (F/M) ratio, i.e., the amount of substrate available relative to the number of microorganisms present in the aeration tank, expressed as kg COD per kg MLSS per day. A high F/M ratio indicates a large amount of food compared to the number of microorganisms, which leads to increased growth of aerobic bacteria in the bioreactor. A high level of aerobic bacterial activity is not always desirable, as excessive food degrades the flocculation process, causing the sludge not to settle easily [37]. On the other hand, a low F/M ratio is not preferable either, due to the rapid settling rate of the sludge, which would lead to an increased waste rate [37]. To ensure that the bacteria function optimally, the operating pH level must be adjusted to promote microbial growth; typical bacteria show optimum growth at pH levels between 6.5 and 7.5 [36]. As mentioned, aerobic bacteria require oxygen to grow, so dissolved oxygen (DO) is one of the critical parameters to tune; the optimum DO concentration is around 2 mg/L and above [7].
For an anaerobic digestion reactor, the most important output parameters are the production of biogas and the removal of COD. This will reflect the profitability and sustainability of the IAAB plant. Hence, the output for the ANN model will be COD removal (%), percentage of methane (CH4) in the biogas, and methane yield (LCH4/gCODremoved). The aerobic process is typically the secondary treatment to remove the remaining biodegradable organic matter. Hence, the output parameters for the aerobic process would be the percentages of COD, BOD, and TSS removal. These parameters are critical to ensure the final treated effluent is able to meet the discharge limit.

2.3. Optimization of Artificial Neural Network

Four characteristics of the ANN architecture are important in optimizing the ANN model: the number of hidden layers, the number of neurons in the input, hidden, and output layers, the type of activation function in each layer, and the training algorithm used to train the model. It is also worth noting that the quality and quantity of the available training data can affect the performance of the ANN model. The typical way to determine the best ANN architecture is a trial-and-error process in which different architectures are compared with one another. A general rule of thumb is to increase the number of neurons in the hidden layer when training takes long and performance measures such as the mean squared error remain large. There is no definitive number of neurons or hidden layers needed to produce an ANN model with high predictive value. The numbers of neurons in the input and output layers are determined by the numbers of input and output parameters, respectively.
The activation functions are selected based on the types of data present in each layer. For example, an identity function is usually utilized in the input layer, while a non-linear activation function such as the hyperbolic tangent sigmoid function is used in the hidden layer. The performance of the ANN is affected by the type of training algorithm, as it adjusts the weights and biases to generate the ANN model; the training speed is also affected by the training algorithm. In general, the influencing factors of an ANN model are the number of neurons in the hidden layer and the type of training algorithm. The network is trained on 70% of the data, with 15% used as testing data and the remaining 15% as validation data. There are a total of 96 datasets for the anaerobic and aerobic processes, where each dataset contains six (anaerobic) or seven (aerobic) input parameters and three output parameters.

2.4. Determination of the Number of Hidden Neurons

One of the key network parameters in the development of an artificial neural network (ANN) model is the number of hidden neurons. A high number of hidden neurons can cause overfitting, in which the model fits the training data too closely and misestimates the target values because of the excess model complexity [38]. The number of hidden neurons is determined through trial and error. Several equations have been proposed by various authors to estimate the optimum number of hidden neurons; the findings are summarized in Table 1.
Nevertheless, the equations mentioned may not be accurate in determining the optimum number of hidden neurons required. Another method that is helpful is by calculating the mean squared error (MSE) of the training and validation data at varying numbers of hidden neurons.
As shown in Figure 1, the training and validation sets both showed the lowest MSE values when the number of hidden neurons was 10 or 12. Nonetheless, the 12 training algorithms were trained by trial and error, changing the number of hidden neurons, to further verify the optimum number of hidden neurons.
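A sketch of this trial-and-error sweep is shown below: it retrains the network at each candidate hidden-layer size and records the MSE on the validation subset, as in Figure 1. X and T are placeholder data, and any of the 12 training algorithms could be slotted in for 'trainlm'.

```matlab
% Trial-and-error sweep over hidden-layer sizes, recording validation MSE.
X = rand(6, 96)*2 - 1;   T = rand(3, 96);     % placeholder data
maxNeurons = 20;
valMSE = nan(1, maxNeurons);
for h = 1:maxNeurons
    net = feedforwardnet(h, 'trainlm');       % swap in any training algorithm
    net.trainParam.showWindow = false;
    [net, tr] = train(net, X, T);
    Y = net(X);
    valMSE(h) = perform(net, T(:, tr.valInd), Y(:, tr.valInd));
end
[~, bestH] = min(valMSE);   % candidate optimum number of hidden neurons
```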

2.5. Sensitivity Analysis

Two methods are employed in this work to evaluate the relative importance of the input parameters: the connection weight approach and the Garson algorithm [46,47].

2.5.1. Connection Weight Approach

The equation used for the connection weight approach is shown below:
$$V_i = \sum_{j=1}^{N}\left(W_{ij} \times W_{jo}\right)$$
where W represents the value of the connection weight, and the subscripts i, j, and o represent the input, hidden, and output neurons, respectively.
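In matrix form this reduces to a single product of the trained weight matrices, as the sketch below shows; IW and LW are random stand-ins for a trained network’s net.IW{1,1} and net.LW{2,1}.

```matlab
% Connection weight approach: importance of input i on output k is the sum
% over hidden neurons j of W_ij * W_jo, i.e., one matrix multiplication.
rng(1);
IW = randn(12, 6);        % stand-in input-to-hidden weights (12 x 6)
LW = randn(3, 12);        % stand-in hidden-to-output weights (3 x 12)
importance = LW * IW;     % signed relative importance, outputs x inputs
```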

2.5.2. Garson Algorithm Method

The equation used for the Garson algorithm method is shown below:
$$I_j = \frac{\sum_{m=1}^{N_h}\left(\frac{\left|W_{jm}^{ih}\right|}{\sum_{k=1}^{N_i}\left|W_{km}^{ih}\right|} \times \left|W_{mn}^{ho}\right|\right)}{\sum_{k=1}^{N_i}\left\{\sum_{m=1}^{N_h}\left(\frac{\left|W_{km}^{ih}\right|}{\sum_{k=1}^{N_i}\left|W_{km}^{ih}\right|} \times \left|W_{mn}^{ho}\right|\right)\right\}}$$
where Ij represents the relative importance of the jth input parameter on the output parameter, and the numbers of input and hidden neurons are represented by Ni and Nh, respectively. The connection weight is represented by W, and the superscripts o, h, and i denote the output, hidden, and input layers, respectively.
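A corresponding sketch of Garson’s algorithm for a single output neuron is given below, reusing the same stand-in weight matrices as the previous sketch: the absolute input-hidden weights are first normalized within each hidden neuron and then weighted by the absolute hidden-output weights.

```matlab
% Garson's algorithm for output neuron n (assumed n = 1 for illustration).
rng(1);
IW = randn(12, 6); LW = randn(3, 12);    % stand-in trained weights
n = 1;                                   % output neuron of interest
absIW = abs(IW);                         % |W_ih|, hidden x inputs
absLW = abs(LW(n, :))';                  % |W_ho| for output n, hidden x 1
share   = absIW ./ sum(absIW, 2);        % each input's share per hidden neuron
contrib = share .* absLW;                % weighted by hidden-output weight
Ij = sum(contrib, 1);                    % raw importance per input
Ij = Ij / sum(Ij);                       % normalized; sums to 1
```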

3. Results and Discussion

3.1. Performance of ANN Model

The performance of all 12 training algorithms is analyzed for the anaerobic and aerobic processes. The results are discussed in the following sections.

3.1.1. Anaerobic Digestion

Tables A7–A18 (Appendix A) summarize the values of R, MAE, MAPE, and MSE for the anaerobic digestion.
Table A7 shows the highest R value achieved by the trained ANN models for the COD removal response, while Tables A8–A10 show the lowest MAE, MAPE, and MSE values for the COD removal response across the different training algorithms and numbers of hidden neurons. The training algorithm that achieves the highest R value (0.998) for the COD removal is the Bayesian regularization backpropagation (BR) algorithm. The lowest MAE value of 0.211 was obtained from BR using 12 hidden neurons, the lowest MAPE value of 0.009 from BR using 10 hidden neurons, and the lowest MSE value of 0.43 from BR using 12 hidden neurons. In general, LM, BR, RP, and BFG performed much better than the eight other training algorithms, with LM and BR consistently ranking highest in producing the best-performing ANN models. Moreover, using the BR training algorithm, the MAPE never exceeded 10% from one to twenty hidden neurons. On the other hand, GD had the worst performance among the training algorithms, with the highest MAE value of 95.015, MAPE value of 0.77, and MSE of 9769.
The values of R, MAE, MAPE, and MSE for the methane purity response are summarized in Tables A11–A14 (Appendix A), respectively. The highest R value achieved after training the neural network is 0.997, using the BR training algorithm with 13 hidden neurons. Interestingly, both the highest and the lowest MAE values were obtained from the BR training algorithm, at 13 hidden neurons and 1 hidden neuron, respectively. For the MAPE criterion, the BR training algorithm clearly performs far better than the others, as the error at every number of neurons was less than 1%. Again, BR and LM produced the best ANN models, with the lowest MAE, MAPE, and MSE values. The MSE values for BR were less than 1 from 3 to 20 hidden neurons, indicating good performance.
The highest and lowest values of R, MAE, MAPE, and MSE for the methane yield response can be found in Tables A15–A18 (Appendix A). The highest R value across all training algorithms is again obtained from BR, with a value of 0.996 at 15 hidden neurons, while GD performed the worst, with an R value of 0.267 using 11 hidden neurons. Table A16 shows that the lowest MAE is from the BR training algorithm, which achieves values of less than 0.001 in three different cases, while GDX has the highest MAE value at 1.389. In general, all the training algorithms predicted the methane yield well, with MSE values ranging from 0.0001 to 0.04. For the results shown below, six input parameters and three output parameters were used to train the ANN model. Using the Bayesian regularization (BR) backpropagation training algorithm, the input and output weights and biases are shown in Tables A1 and A2 (Appendix A).

3.1.2. Aerobic Process

Following the consistently better performance of BR and LM compared to the 10 other training algorithms in predicting the outputs of the anaerobic digestion, only the BR and LM training algorithms were used to train the data for the aerobic process. The performance criteria are summarized in Tables A19–A21 (Appendix A) for the aerobic process. The use of 6 hidden neurons gives overall lower MSE values for the three output parameters: 1.0288, 0.6047, and 0.5560 for the prediction of COD removal, BOD removal, and TSS removal, respectively.

3.2. Optimization of ANN Model Parameters

To determine the final ANN model parameters, the MSE and R values between the experimental and predicted values are considered [48]. The MSE values obtained using 12 neurons are 0.48, 0.43, and 2.06 × 10−5 for the prediction of COD removal, purity of methane, and methane yield, respectively. The R values achieved are 0.998, 0.983, and 0.990 for the prediction of COD removal, purity of methane, and methane yield, respectively, which indicates a good fit between the predicted values and the experimental data. As shown in Figure 2a, an ANN architecture of 6-12-3 was developed, where the biases of the hidden and output layers are represented by b1 and b2, respectively.
For the aerobic process, the MSE values obtained using 6 neurons are 1.0288, 0.6047, and 0.5560 for the prediction of COD removal, BOD removal, and TSS removal, respectively. The R values achieved are 0.997, 0.981, and 0.995 for the prediction of COD removal, BOD removal, and TSS removal, respectively. The high R values and low MSE values using 6 hidden neurons indicate a good fit and minimal deviation between the predicted values and the experimental data. As shown in Figure 2b, an ANN architecture of 7-6-3 was developed, where the biases of the hidden and output layers are represented by b1 and b2, respectively.

3.3. Validation of the ANN Model

Figures 3–8 depict the correlations and the corresponding visual agreement between the experimental data and the BR-ANN outputs. The proposed BR-ANN models demonstrated very satisfactory performance in predicting the COD removal, methane yield, and methane purity for the anaerobic process and the COD, BOD, and TSS removals for the aerobic process.
The R value for predicting the COD removal for each number of hidden neurons can be found in Table A7. A high R value close to 1 indicates good prediction accuracy for the COD removal. Using 12 hidden neurons, the highest R value of 0.998 can be achieved compared to other combinations. An R2 value of 0.997 was obtained for predicting the COD removal using 12 hidden neurons and the BR training algorithm (Figure 3a). The result is comparable with the neural network of Dibaba et al. [49], who obtained R2 = 0.906, and with the R2 = 0.87 obtained by Antwi et al. [50] for COD removal during up-flow anaerobic sludge blanket (UASB) bioreactor operation. As seen in Figure 3b, the predicted COD removal obtained from the ANN model deviates only slightly from the experimental values, demonstrating the capability of the ANN model to predict the outcome of the anaerobic digestion.
The R value for predicting the methane purity for each number of hidden neurons can be found in Table A11. The R2 value obtained from the BR-trained ANN model with 12 hidden neurons is 0.972 (Figure 4a), which is slightly lower than the R2 = 0.985 achieved by Yu, Jaroenpoj and Griffith [51] using 5 input neurons and 8 hidden neurons. Similar to the COD removal, the predicted methane purity from the ANN model shows only a slight deviation from the experimental values (Figure 4b).
The R value for predicting the methane yield for each number of hidden neurons can be found in Table A15 (Appendix A). The BR algorithm produced the best-performing ANN model for predicting the methane yield, with an R value of 0.990 and the lowest MAE, MAPE, and MSE values at 0.001, 0.002, and 2.06 × 10−5, respectively (Figure 5). These results are better than those of the ANN developed by Xu, Wang and Li [52], which achieved an R value of 0.937 using 8 input neurons and 6 hidden neurons to predict the methane yield from the anaerobic digestion of plant biomass.
For the ANN models predicting the COD, BOD, and TSS removal of the aerobic process, the two best training algorithms from the anaerobic simulations, BR and LM, were chosen to train the dataset. The values of the performance criteria R, MAE, MAPE, and MSE can be found in Tables A19–A21.
The overall performance of BR is better than that of LM in terms of R, MAE, MAPE, and MSE values across the three output parameters. The BR training algorithm with 6 hidden neurons was chosen, as it gives overall lower MAE, MAPE, and MSE values at 0.495, 0.005, and 1.0288, respectively, for the prediction of COD removal. As shown in Figure 6, the R2 value of 0.995 indicates a good fit between the experimental and predicted COD values.
The high determination coefficient, R2 (Figure 7a), indicates that the ANN model learned the non-linear relationship between the input and output parameters well. In Figure 7b, minimal deviation can be observed between the experimental and predicted BOD removal values.
A high R2 value and minimal deviation between the predicted and experimental TSS removal are observed in Figure 8. As all three R2 values for the COD, BOD, and TSS removals are greater than 0.9, the ANN model can be used to predict the output parameters accurately.

3.4. Sensitivity Analysis

The connection weight approach and the Garson method are used to perform the sensitivity analysis, and the results are summarized in Tables 2 and 3, respectively. It can be seen from Table 2 that the inlet biochemical oxygen demand (BODin) and organic loading rate (OLR) are the least influential parameters on the COD removal, while the inlet chemical oxygen demand (CODin), inlet flowrate (Qin), pH inlet, and inlet total suspended solids (TSSin) have more influence on the COD removal in the anaerobic digestion. Moreover, CODin is the most influential factor in determining both the COD removal and the methane yield according to Table 2; the methane yield in anaerobic digestion is known to increase with increasing COD strength [53]. The strength of influence of the input parameters on the purity of methane is as follows: Qin > CODin > pH inlet > BODin > TSSin > OLR. As for the methane yield, CODin ranked highest in relative importance. The organic waste, in which the COD level is high, is the source of the methane production [54]; hence, CODin is one of the more important chemical properties impacting the methane yield.
As for the aerobic process, the COD removal efficiency is most affected by the mixed liquor suspended solids (MLSS) value and least affected by the F/M ratio. Basim [55] showed that increasing the quantity of MLSS leads to a higher concentration of COD in the sludge, increasing the COD removal efficiency. For the BOD removal (%), the DO concentration is the most influential parameter, while the F/M ratio is relatively the least influential. The amount of DO required by the aerobic microorganisms corresponds to the biochemical oxygen demand (BOD) [56]; therefore, there is a strong relationship between the two parameters. Finally, MLSS is the most influential parameter on the TSS removal efficiency, and BODin is the least influential parameter in the aerobic process.
However, it is worth noting that the importance of TSSin and pH inlet is the highest in the anaerobic digestion when using the Garson method (Table 3). The deviation between the two sensitivity analysis methods can be explained by the poor estimation of the Garson algorithm. According to Olden, Joy and Death [57], the connection weight approach is superior to the Garson algorithm: the mean similarity between the estimated and true variable ranks was 92%, compared to less than 50% for Garson’s algorithm.

3.5. Optimum Operating Conditions

The developed ANN model for the anaerobic digestion is used to determine the optimal operating conditions that maximize the output parameters. Under the optimal values, the COD removal (%) increases from 68.7% to 92%, as shown in Table 4. The maximum methane yield of 0.26 LCH4/gCODremoved was achieved, up from 0.23 LCH4/gCODremoved, which represents a 13.4% increase in the methane yield, as shown in Table 5.
The optimum operating values are shown in Table 6, where the aerobic COD removal (%) increases from 85% to 99% using the developed ANN model. The recommended operating conditions for the IAAB based on the literature are summarized in Table 7. The operating conditions based on the ANN model were within the range of recommended operating conditions provided by Chan et al. [58]. The purity of methane obtained by Chan et al. [58] was 63%, while the ANN-based operating conditions achieved a purity of 66.4%, indicating that a higher methane purity is possible for the IAAB configuration. Finally, the most important parameter determining the quality of the final discharge is the COD level; hence, it is important to achieve maximum COD removal (%) in the aerobic process of this IAAB plant. The developed ANN model predicted a COD removal (%) of 99% based on the operating conditions in Table 6; similar results (85–99.6% COD removal) were achieved in the literature [58].
In summary, the CODin of the raw POME (the most influential parameter in the anaerobic process) needs to be closely monitored and maintained in the range of 50,000–97,000 mg/L to achieve high COD removal and methane yield, based on the results presented in the sensitivity analysis (Section 3.4). For the aerobic process, it is essential to maintain the most influential parameters, i.e., MLSS and DO, in the ranges of 37,000–40,500 mg/L and 2.0–7.3 mg/L, respectively, to ensure compliance with the discharge limit. This shows that the synergism created between the anaerobic and aerobic processes is the key to achieving maximum biogas production and overall COD removal in the IAAB plant.

4. Conclusions

Two ANN models were developed to predict the effluent parameters of the anaerobic digestion and aerobic processes in a pre-commercialized IAAB. A total of six input parameters, namely Qin, OLR, CODin, BODin, TSSin, and pH inlet, were used to predict the three output parameters for the anaerobic digestion. For the aerobic process, seven input parameters, namely OLR, CODin, BODin, TSSin, MLSS, DO, and F/M ratio, were used to predict its three output parameters. The R values obtained for COD removal (%), purity of methane (%), and methane yield were 0.998, 0.983, and 0.990, respectively, for the anaerobic digestion. As for the aerobic process, accurate predictions of COD removal, BOD removal, and TSS removal were obtained, with high R values of 0.997, 0.981, and 0.995, respectively. The ANN architecture used for the anaerobic reaction was 6-12-3, and 7-6-3 for the aerobic process, both using the BR training algorithm and the tansig activation function. The developed ANN models successfully predicted the effluent properties of the IAAB plant with minimal errors and were used to optimize the output parameters. Under optimum operating conditions, the anaerobic COD removal (%) and methane yield were improved by 33.9% and 13.4%, respectively. Sensitivity analysis shows that CODin is the most influential input parameter affecting the anaerobic COD removal (%) and methane yield. The trained ANN models can be utilized as a decision support system (DSS) for operators to predict the behavior of the IAAB system, enabling users to perform cost analysis and achieve optimum performance, bringing this IAAB technology one step closer to successful commercialization. For further studies, input parameters such as the mixing time, reaction time, nutrients (C:N:P ratio), and concentration of microorganisms can be used to further develop the ANN models.

Author Contributions

Conceptualization, W.-Y.C.; methodology, C.S.L. and J.W.L.; software, W.-Y.C.; validation, M.M.; formal analysis, C.-D.H.; investigation, A.U.; resources, G.L.; data curation, H.H. and W.-N.T.; writing—original draft preparation, W.-Y.C.; writing—review and editing, Y.J.C.; visualization, W.-Y.C.; supervision, Y.J.C.; project administration, Y.J.C.; funding acquisition, J.W.L. All authors have read and agreed to the published version of the manuscript.

Funding

The financial support received from the following funders is gratefully acknowledged: Yayasan Universiti Teknologi PETRONAS (YUTP) with the cost center 015LC0-126 and Universitas Muhammadiyah Surakarta, Indonesia via an External Grant with the cost center 015ME0-246.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Tables

Table A1. Input weights and biases of the ANN model for anaerobic digestion.

Position of Neurons | Qin | OLR | CODin | BODin | TSSin | pH Inlet | Bias (b1)
1 | 1.024 | 1.272 | 0.888 | 1.098 | −0.962 | −0.508 | 0.115
2 | −0.186 | −0.093 | 0.101 | 0.162 | 0.072 | 0.234 | −0.224
3 | −0.627 | −0.517 | −0.073 | 0.099 | 0.851 | 0.721 | 0.497
4 | −0.253 | −0.793 | −0.902 | −0.717 | 0.150 | 1.080 | −0.482
5 | −0.237 | 0.126 | 0.127 | −0.082 | −1.021 | 1.599 | −0.095
6 | 0.979 | 0.100 | −0.352 | −0.397 | −1.760 | 0.567 | 0.255
7 | −0.196 | −0.654 | −1.175 | −1.024 | −0.655 | 1.396 | 0.233
8 | 0.205 | 0.211 | −0.338 | −0.379 | 0.307 | −0.493 | 0.476
9 | −1.154 | −1.319 | −0.189 | −0.074 | 0.527 | 0.516 | −0.981
10 | −0.708 | −0.976 | −0.560 | −0.472 | −0.145 | −0.142 | 0.711
11 | −0.140 | 0.124 | −0.275 | −0.279 | −0.464 | −1.125 | −0.515
12 | 0.811 | 0.624 | −0.171 | −0.124 | −0.659 | 0.274 | −0.308
Table A2. Output weights and biases of the ANN model for anaerobic digestion.

Outputs | W1 | W2 | W3 | W4 | W5 | W6 | W7 | W8 | W9 | W10 | W11 | W12 | Bias (b2)
COD removal (%) | −1.063 | 0.429 | 1.101 | −0.703 | −0.120 | 1.323 | −0.969 | −0.919 | −1.369 | 0.268 | 0.140 | −0.356 | −0.582
Methane purity | 1.264 | 0.305 | 0.734 | 1.137 | −1.341 | 0.053 | 0.194 | −0.322 | −0.805 | 1.082 | −0.448 | 0.025 | −0.642
Methane yield (m3CH4/gCODremoved) | −0.681 | 0.022 | −0.423 | 0.088 | −0.078 | 1.028 | −1.140 | −0.306 | −0.788 | 0.716 | −0.929 | −0.979 | −0.515
Table A3. Best performance result of the ANN model for anaerobic digestion.

% of Separation | Number of Samples | Type of Sample | MSE
70% | 16 | Training | 2.26 × 10−14
15% | 4 | Validation | 1.246496
15% | 4 | Testing | N/A
100% | 24 | All | -
Table A4. Input weights and biases of the ANN model for aerobic process.

Position of Neurons | OLR | CODin | BODin | TSSin | MLSS (mg/L) | DO (mg/L) | F/M (kgCOD/kg MLVSS·day) | Bias (b1)
1 | −2.047 | 0.130 | −1.591 | −0.541 | 0.631 | 0.595 | −0.882 | −1.786
2 | −1.069 | 0.913 | −0.580 | 0.076 | −1.382 | −0.366 | 0.373 | −1.803
3 | 0.149 | −0.504 | 0.333 | −1.105 | 1.741 | −2.891 | −0.174 | −1.809
4 | −0.021 | 0.591 | 0.022 | 0.758 | −1.117 | 0.761 | −0.692 | 0.722
5 | 0.024 | 1.695 | 0.313 | −0.086 | −1.043 | 0.041 | 0.665 | 0.793
6 | 0.875 | −0.339 | −1.303 | 0.010 | 1.688 | 0.684 | 0.585 | 0.510
Table A5. Output weights and biases of the ANN model for aerobic process.

Outputs | W1 | W2 | W3 | W4 | W5 | W6 | Bias (b2)
COD removal | −1.67677 | 0.391637 | −2.03913 | −0.07522 | −1.51773 | −0.8974 | −0.35335
BOD removal | −1.40607 | −1.66764 | −1.98104 | −0.33949 | −1.19136 | −1.31442 | −1.69595
TSS removal | 1.159768 | −2.19573 | −0.8436 | −1.68153 | 0.573482 | −0.69684 | −0.45734
Table A6. Best performance result of the ANN model for aerobic process.

% of Separation | Number of Samples | Type of Sample | MSE
70% | 16 | Training | 0.115
15% | 4 | Validation | 3.804727
15% | 4 | Testing | N/A
100% | 24 | All | -
Table A7. R value for COD removal (%) for different training algorithms at different numbers of hidden neurons.

Hidden Neurons | LM | GDX | CGP | SCG | BFG | OSS | RP | CGB | CGF | GD | BR | GDm
1 | 0.823 | 0.657 | 0.759 | 0.767 | 0.820 | 0.773 | 0.732 | 0.775 | 0.770 | 0.523 | 0.762 | 0.733
2 | 0.940 | 0.922 | 0.930 | 0.900 | 0.925 | 0.938 | 0.912 | 0.935 | 0.900 | 0.795 | 0.962 | 0.635
3 | 0.953 | 0.863 | 0.930 | 0.930 | 0.909 | 0.901 | 0.915 | 0.928 | 0.938 | 0.565 | 0.958 | 0.658
4 | 0.908 | 0.943 | 0.951 | 0.896 | 0.903 | 0.884 | 0.931 | 0.895 | 0.886 | 0.534 | 0.962 | 0.649
5 | 0.964 | 0.891 | 0.948 | 0.935 | 0.935 | 0.929 | 0.967 | 0.921 | 0.908 | 0.721 | 0.988 | 0.651
6 | 0.953 | 0.832 | 0.943 | 0.891 | 0.941 | 0.939 | 0.924 | 0.946 | 0.897 | 0.618 | 0.981 | 0.773
7 | 0.973 | 0.924 | 0.918 | 0.826 | 0.962 | 0.914 | 0.960 | 0.904 | 0.954 | 0.393 | 0.994 | 0.634
8 | 0.899 | 0.950 | 0.903 | 0.906 | 0.912 | 0.924 | 0.936 | 0.921 | 0.962 | 0.452 | 0.990 | 0.542
9 | 0.953 | 0.917 | 0.852 | 0.872 | 0.899 | 0.918 | 0.901 | 0.940 | 0.927 | 0.531 | 0.994 | 0.625
10 | 0.921 | 0.762 | 0.863 | 0.944 | 0.936 | 0.904 | 0.946 | 0.873 | 0.922 | 0.574 | 0.987 | 0.698
11 | 0.945 | 0.848 | 0.940 | 0.874 | 0.862 | 0.899 | 0.946 | 0.902 | 0.881 | 0.734 | 0.994 | 0.568
12 | 0.916 | 0.924 | 0.924 | 0.880 | 0.930 | 0.863 | 0.910 | 0.922 | 0.922 | 0.682 | 0.998 | 0.419
13 | 0.960 | 0.819 | 0.930 | 0.903 | 0.880 | 0.849 | 0.964 | 0.882 | 0.830 | 0.599 | 0.989 | 0.711
14 | 0.973 | 0.878 | 0.888 | 0.914 | 0.960 | 0.928 | 0.905 | 0.889 | 0.900 | 0.510 | 0.995 | 0.871
15 | 0.964 | 0.821 | 0.933 | 0.931 | 0.894 | 0.928 | 0.883 | 0.885 | 0.904 | 0.498 | 0.996 | 0.574
16 | 0.933 | 0.824 | 0.887 | 0.907 | 0.906 | 0.911 | 0.954 | 0.841 | 0.846 | 0.554 | 0.989 | 0.615
17 | 0.959 | 0.891 | 0.956 | 0.818 | 0.928 | 0.795 | 0.919 | 0.905 | 0.935 | 0.626 | 0.995 | 0.543
18 | 0.955 | 0.880 | 0.909 | 0.928 | 0.972 | 0.946 | 0.945 | 0.963 | 0.783 | 0.603 | 0.998 | 0.344
19 | 0.944 | 0.910 | 0.923 | 0.884 | 0.967 | 0.892 | 0.910 | 0.955 | 0.943 | 0.346 | 0.992 | 0.495
20 | 0.968 | 0.873 | 0.921 | 0.840 | 0.930 | 0.935 | 0.928 | 0.895 | 0.905 | 0.499 | 0.997 | 0.693
Bolded numbers are the lowest and highest R values.
Table A8. MAE for COD removal (%) for different training algorithms at different numbers of hidden neurons.

Hidden Neurons | LM | GDX | CGP | SCG | BFG | OSS | RP | CGB | CGF | GD | BR | GDm
1 | 5.208 | 0.026 | 4.252 | 4.403 | 5.083 | 4.442 | 5.338 | 4.740 | 4.819 | 7.765 | 4.123 | 4.973
2 | 3.224 | 3.386 | 3.248 | 3.797 | 3.716 | 2.843 | 3.758 | 3.333 | 4.105 | 29.537 | 2.098 | 18.369
3 | 12.120 | 3.730 | 3.140 | 3.111 | 3.524 | 3.853 | 3.531 | 3.285 | 2.890 | 15.200 | 1.616 | 9.097
4 | 2.487 | 3.071 | 2.970 | 3.263 | 4.319 | 4.150 | 2.985 | 3.497 | 4.189 | 10.127 | 1.057 | 19.697
5 | 2.119 | 3.668 | 2.931 | 3.372 | 3.260 | 3.351 | 2.047 | 2.611 | 3.665 | 37.161 | 0.656 | 23.856
6 | 1.981 | 4.685 | 2.948 | 3.829 | 2.337 | 2.797 | 2.668 | 2.828 | 4.045 | 34.257 | 0.834 | 9.163
7 | 1.345 | 3.238 | 3.529 | 4.521 | 2.331 | 3.031 | 2.349 | 3.569 | 1.939 | 50.897 | 0.453 | 11.841
8 | 2.320 | 2.243 | 3.802 | 3.058 | 2.952 | 2.905 | 2.657 | 3.143 | 2.295 | 40.457 | 0.501 | 15.996
9 | 1.880 | 3.135 | 4.485 | 4.250 | 4.225 | 3.290 | 3.294 | 3.008 | 2.555 | 49.306 | 0.476 | 14.285
10 | 2.004 | 5.698 | 4.169 | 2.488 | 3.124 | 3.494 | 2.146 | 3.763 | 3.135 | 15.346 | 0.741 | 6.472
11 | 2.132 | 3.821 | 1.961 | 2.839 | 3.695 | 3.822 | 1.982 | 2.877 | 3.206 | 34.591 | 0.434 | 9.620
12 | 1.850 | 2.769 | 3.516 | 3.171 | 2.648 | 3.366 | 3.274 | 2.918 | 2.670 | 27.077 | 0.211 | 27.139
13 | 1.476 | 3.947 | 2.105 | 3.805 | 4.438 | 4.195 | 1.869 | 3.096 | 2.781 | 32.379 | 0.509 | 11.513
14 | 1.320 | 3.029 | 3.929 | 2.710 | 2.245 | 2.352 | 2.505 | 3.755 | 3.744 | 23.346 | 0.416 | 5.579
15 | 1.690 | 4.082 | 2.656 | 1.918 | 2.808 | 3.195 | 3.229 | 2.735 | 2.554 | 32.215 | 0.358 | 11.765
16 | 2.286 | 3.770 | 3.555 | 2.533 | 3.784 | 3.000 | 1.962 | 4.028 | 3.515 | 17.154 | 0.644 | 23.895
17 | 2.152 | 3.646 | 2.431 | 3.573 | 2.869 | 4.038 | 2.861 | 2.435 | 2.818 | 51.956 | 0.377 | 7.989
18 | 1.140 | 3.748 | 3.107 | 2.687 | 1.528 | 2.428 | 1.902 | 1.888 | 4.336 | 41.445 | 0.220 | 30.201
19 | 2.307 | 3.223 | 2.828 | 3.514 | 2.118 | 3.643 | 3.570 | 2.399 | 2.661 | 95.015 | 0.462 | 30.494
20 | 1.454 | 3.149 | 2.186 | 3.469 | 2.399 | 2.721 | 2.535 | 3.099 | 2.803 | 32.193 | 0.238 | 31.910
Bolded numbers are the lowest and highest MAE values.
Table A9. MAPE for COD removal (%) for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.07  0.10  0.06  0.06  0.07  0.06  0.08  0.07  0.07  0.11  0.066  0.07
2   0.04  0.05  0.04  0.05  0.05  0.04  0.06  0.05  0.06  0.41  0.034  0.27
3   0.16  0.05  0.04  0.04  0.05  0.05  0.05  0.04  0.04  0.21  0.025  0.13
4   0.04  0.04  0.04  0.05  0.06  0.06  0.04  0.05  0.06  0.15  0.022  0.26
5   0.03  0.05  0.04  0.05  0.04  0.05  0.03  0.03  0.05  0.51  0.014  0.33
6   0.03  0.07  0.04  0.05  0.03  0.04  0.04  0.04  0.06  0.47  0.016  0.13
7   0.02  0.04  0.05  0.06  0.03  0.04  0.03  0.05  0.02  0.68  0.011  0.17
8   0.03  0.03  0.05  0.04  0.04  0.04  0.04  0.04  0.03  0.55  0.017  0.22
9   0.02  0.04  0.06  0.06  0.06  0.04  0.05  0.04  0.03  0.68  0.018  0.19
10  0.03  0.08  0.06  0.03  0.04  0.05  0.03  0.05  0.04  0.21  0.009  0.09
11  0.03  0.05  0.03  0.03  0.05  0.05  0.03  0.04  0.05  0.48  0.006  0.14
12  0.02  0.04  0.05  0.04  0.04  0.04  0.04  0.04  0.04  0.39  0.003  0.37
13  0.02  0.06  0.03  0.05  0.06  0.06  0.02  0.05  0.04  0.45  0.006  0.16
14  0.02  0.05  0.05  0.03  0.03  0.03  0.03  0.05  0.05  0.32  0.006  0.07
15  0.02  0.06  0.04  0.03  0.04  0.04  0.04  0.04  0.03  0.46  0.005  0.15
16  0.03  0.06  0.05  0.03  0.05  0.04  0.02  0.05  0.05  0.23  0.009  0.32
17  0.03  0.05  0.03  0.05  0.04  0.06  0.04  0.04  0.04  0.71  0.005  0.12
18  0.01  0.05  0.04  0.03  0.02  0.03  0.03  0.02  0.06  0.56  0.003  0.39
19  0.03  0.04  0.04  0.05  0.03  0.05  0.05  0.03  0.04  0.77  0.006  0.42
20  0.02  0.04  0.03  0.05  0.03  0.04  0.03  0.04  0.03  0.44  0.003  0.42
Bolded numbers are the lowest MAPE values.
Table A10. MSE for COD removal (%) for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   39.37  72.07  50.14  51.13  41.25  49.97  55.26  47.20  48.22  97.30    52.79  59.20
2   14.22  18.03  16.17  23.44  19.69  15.09  26.29  15.00  22.39  924.88   9.30   426.35
3   13.95  30.34  16.70  16.75  21.25  22.39  22.69  16.65  15.44  321.74   10.81  125.63
4   21.99  16.24  11.56  25.80  23.18  30.01  16.00  24.50  26.58  185.53   9.61   477.13
5   9.69   26.75  12.21  15.85  14.75  17.01  7.80   18.43  21.22  1442.88  2.83   799.09
6   13.95  38.86  13.58  24.97  15.91  15.68  17.47  13.06  24.18  1257.65  5.13   112.85
7   7.90   20.31  20.34  37.70  9.08   20.92  10.47  23.30  10.98  2773.36  1.70   199.71
8   23.24  11.57  22.00  21.45  21.46  17.36  14.91  18.08  9.08   2060.30  2.53   441.06
9   12.52  19.59  32.82  31.10  25.68  19.67  25.95  14.15  16.82  2551.04  1.47   432.19
10  18.66  55.96  31.23  13.72  16.06  21.73  52.87  28.60  21.58  337.34   4.03   67.22
11  13.24  33.81  15.46  31.13  32.03  23.43  12.64  24.86  29.99  1251.08  1.45   156.54
12  21.69  17.52  19.06  27.49  16.14  30.96  20.70  18.28  17.75  1251.11  0.43   1106.73
13  11.95  40.31  16.34  23.61  43.13  33.02  8.37   27.08  39.50  1151.90  2.63   215.13
14  7.09   29.06  25.47  20.52  10.34  16.63  23.32  27.41  22.65  841.85   1.17   49.48
15  9.64   39.72  16.80  15.95  26.48  17.25  26.72  27.73  21.89  1571.34  1.05   259.51
16  17.96  38.19  25.85  23.69  24.83  20.15  11.35  38.12  38.28  390.73   2.88   831.74
17  11.02  25.80  10.52  42.45  17.09  47.26  19.29  22.07  15.62  2797.18  1.27   99.43
18  11.08  27.99  22.01  16.75  6.70   12.45  12.59  9.00   81.80  1933.79  0.59   982.16
19  13.09  20.65  19.75  26.84  7.95   26.28  21.74  10.74  13.23  9769.54  2.20   1096.88
20  8.43   30.78  18.04  39.58  19.50  15.82  19.11  23.55  23.10  1232.83  0.70   1261.02
Bolded numbers are the lowest MSE values.
Table A11. R value for CH4 percentage in biogas for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.571  0.653  0.565  0.542  0.568  0.529  0.600  0.517  0.517  0.523  0.569  0.531
2   0.810  0.710  0.720  0.794  0.810  0.764  0.809  0.745  0.789  0.795  0.824  0.691
3   0.901  0.852  0.801  0.761  0.836  0.755  0.890  0.779  0.807  0.565  0.904  0.424
4   0.909  0.739  0.802  0.846  0.813  0.835  0.856  0.798  0.814  0.534  0.915  0.634
5   0.864  0.745  0.834  0.816  0.783  0.813  0.852  0.845  0.711  0.721  0.989  0.454
6   0.901  0.826  0.785  0.769  0.865  0.845  0.857  0.799  0.823  0.618  0.989  0.671
7   0.831  0.849  0.804  0.785  0.841  0.828  0.790  0.810  0.843  0.393  0.985  0.714
8   0.936  0.737  0.713  0.821  0.846  0.870  0.816  0.633  0.827  0.452  0.992  0.684
9   0.907  0.806  0.808  0.775  0.763  0.770  0.863  0.734  0.872  0.531  0.984  0.707
10  0.954  0.724  0.812  0.826  0.883  0.816  0.861  0.632  0.852  0.574  0.993  0.573
11  0.909  0.744  0.900  0.870  0.716  0.703  0.920  0.889  0.847  0.734  0.985  0.428
12  0.971  0.767  0.751  0.770  0.884  0.717  0.933  0.879  0.788  0.682  0.983  0.704
13  0.964  0.693  0.887  0.804  0.772  0.772  0.873  0.815  0.848  0.599  0.997  0.560
14  0.965  0.655  0.811  0.773  0.859  0.812  0.908  0.850  0.634  0.510  0.984  0.637
15  0.975  0.749  0.923  0.912  0.812  0.732  0.892  0.822  0.789  0.498  0.977  0.571
16  0.952  0.817  0.817  0.808  0.644  0.687  0.850  0.778  0.872  0.554  0.990  0.601
17  0.931  0.752  0.731  0.842  0.890  0.784  0.883  0.907  0.809  0.626  0.984  0.606
18  0.949  0.780  0.874  0.741  0.821  0.705  0.852  0.759  0.890  0.603  0.982  0.581
19  0.907  0.868  0.775  0.907  0.887  0.697  0.838  0.746  0.619  0.346  0.976  0.574
20  0.908  0.608  0.876  0.824  0.820  0.595  0.804  0.815  0.791  0.499  0.991  0.256
Bolded numbers are the lowest and highest R values.
Table A12. MAE for percentage of CH4 in biogas for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   1.642  6.822  1.397  1.378  1.621  1.462  1.525  1.488  1.49   2.392   1.337  1.538
2   1.111  1.75   1.414  1.431  1.399  1.365  1.257  1.839  1.551  4.669   1.129  2.918
3   2.86   1.121  1.08   1.675  1.122  1.488  0.915  1.29   1.174  2.392   0.831  2.025
4   0.806  1.435  1.157  1.032  1.536  1.242  1.028  1.22   1.109  3.127   0.606  1.673
5   0.937  1.322  1.246  1.281  1.719  1.189  1.032  0.944  1.562  1.902   0.114  3.519
6   0.624  1.11   1.235  1.27   0.93   1.066  1.046  1.176  1.093  8.924   0.134  1.414
7   0.587  1.085  1.057  1.333  1.106  1.196  1.224  1.336  1.046  6.414   0.142  1.393
8   0.408  1.294  1.445  1.126  0.939  0.983  1.103  1.456  1.147  10.763  0.105  4.515
9   0.463  1.197  1.95   1.265  1.336  1.074  0.965  1.326  0.982  14.349  0.143  15.518
10  0.332  1.294  1.276  1.073  0.922  1.067  0.952  1.552  1.058  8.583   0.088  4.126
11  0.648  1.433  0.863  0.731  1.34   1.343  0.648  0.92   0.966  4.209   0.162  3.029
12  0.358  1.308  1.332  1.122  0.92   1.146  0.646  1.078  1.519  3.296   0.147  3.621
13  0.334  1.508  0.882  1.208  1.222  1.32   0.853  1.174  1.009  8.283   0.072  4.256
14  0.422  1.633  1.173  1.357  1.056  1.15   0.878  1.496  1.498  5.281   0.179  2.824
15  0.331  1.13   0.725  0.84   1.231  1.348  0.888  1.071  1.267  12.657  0.201  4.762
16  0.384  1.197  1.167  0.95   1.211  1.301  0.915  1.201  0.955  5.027   0.13   8.688
17  0.912  1.219  1.566  0.991  0.945  1.152  0.939  0.768  1.273  1.66    0.144  3.503
18  0.37   1.336  0.905  1.178  1.084  1.381  0.816  1.211  0.868  12.714  0.145  3.806
19  0.731  0.896  1.351  0.801  1.119  1.61   1.093  1.186  1.863  5.1     0.199  3.972
20  0.45   1.454  0.978  1.238  0.951  1.495  1.084  1.182  1.07   5.171   0.119  19.23
Bolded numbers are the lowest and highest MAE values.
Table A13. MAPE for percentage of CH4 in biogas for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.03  0.02  0.02  0.02  0.03  0.02  0.03  0.02  0.02  0.04  0.020  0.03
2   0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.03  0.03  0.08  0.020  0.05
3   0.05  0.02  0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.04  0.010  0.03
4   0.01  0.02  0.02  0.02  0.03  0.02  0.02  0.02  0.02  0.05  0.010  0.03
5   0.02  0.02  0.02  0.02  0.03  0.02  0.02  0.02  0.03  0.03  0.001  0.06
6   0.01  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.15  0.003  0.02
7   0.01  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.11  0.001  0.02
8   0.01  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.18  0.001  0.07
9   0.01  0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.02  0.24  0.002  0.25
10  0.01  0.02  0.02  0.02  0.02  0.02  0.02  0.03  0.02  0.14  0.001  0.07
11  0.01  0.02  0.01  0.01  0.02  0.02  0.01  0.02  0.02  0.07  0.003  0.05
12  0.01  0.02  0.02  0.02  0.02  0.02  0.01  0.02  0.02  0.05  0.001  0.06
13  0.01  0.02  0.01  0.02  0.02  0.02  0.01  0.02  0.02  0.14  0.001  0.07
14  0.01  0.03  0.02  0.02  0.02  0.02  0.01  0.02  0.02  0.09  0.003  0.05
15  0.01  0.02  0.01  0.01  0.02  0.02  0.01  0.02  0.02  0.21  0.003  0.08
16  0.01  0.02  0.02  0.02  0.02  0.02  0.01  0.02  0.02  0.08  0.002  0.14
17  0.01  0.02  0.03  0.02  0.02  0.02  0.02  0.01  0.02  0.03  0.002  0.06
18  0.01  0.02  0.01  0.02  0.02  0.02  0.01  0.02  0.01  0.21  0.002  0.06
19  0.01  0.01  0.02  0.01  0.02  0.03  0.02  0.02  0.03  0.08  0.003  0.07
20  0.01  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.02  0.09  0.002  0.32
Bolded numbers are the lowest and highest MAPE values.
Table A14. MSE for percentage of CH4 in biogas for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   3.77  3.14  3.76  3.95  3.77  4.35  3.52  4.03  4.10  9.00    3.77  4.70
2   1.94  4.52  2.69  3.15  2.77  2.89  2.02  5.23  3.22  25.78   1.78  11.64
3   1.16  1.76  1.98  3.90  1.69  3.44  1.31  2.32  1.91  8.57    0.99  6.17
4   0.97  2.81  2.04  1.55  3.89  2.34  1.49  2.09  1.90  14.69   0.98  4.64
5   1.39  2.78  2.49  2.13  4.25  2.10  1.54  1.62  3.78  5.66    0.13  23.74
6   1.16  1.75  2.10  2.42  1.58  1.74  1.48  2.01  1.76  87.02   0.13  3.83
7   1.71  1.54  1.98  2.72  1.70  2.18  2.16  2.42  1.68  46.60   0.19  2.85
8   0.80  2.64  3.61  1.77  1.57  1.37  1.89  3.48  1.91  123.63  0.09  30.70
9   0.97  2.02  6.12  2.66  2.56  2.30  1.42  3.17  1.44  209.59  0.19  267.10
10  0.55  2.99  2.75  1.87  1.37  1.85  7.03  3.78  1.56  87.74   0.08  20.69
11  1.14  3.25  1.04  1.43  2.74  3.11  0.85  1.16  1.63  23.20   0.17  12.12
12  0.37  2.31  2.50  2.53  1.20  2.86  0.88  1.74  3.32  13.03   0.19  18.95
13  0.46  3.22  1.26  2.41  2.30  2.49  1.51  2.05  1.66  87.05   0.03  26.11
14  0.49  4.17  2.24  3.63  1.98  1.86  1.03  3.93  4.19  36.49   0.20  10.95
15  0.32  2.44  0.83  1.00  2.56  2.93  1.13  1.85  2.09  174.16  0.29  29.62
16  0.53  2.05  1.92  2.06  3.34  2.91  1.59  2.31  1.40  32.86   0.14  92.72
17  1.18  2.89  3.43  1.61  1.24  2.18  1.36  0.98  2.25  4.54    0.18  16.35
18  0.57  2.45  1.32  2.46  1.93  3.04  1.66  2.44  1.27  210.13  0.20  22.09
19  1.27  1.39  2.82  0.98  1.61  3.81  1.71  2.45  4.66  37.64   0.28  26.60
20  0.97  4.41  1.31  1.95  1.94  3.53  2.24  2.35  2.19  34.99   0.10  398.13
Bolded numbers are the lowest and highest MSE values.
Table A15. R value for methane yield in biogas for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.741  0.594  0.665  0.644  0.769  0.665  0.614  0.693  0.679  0.467  0.659  0.603
2   0.926  0.644  0.908  0.879  0.926  0.844  0.834  0.834  0.779  0.683  0.935  0.548
3   0.930  0.797  0.817  0.886  0.814  0.888  0.870  0.840  0.671  0.477  0.894  0.845
4   0.790  0.842  0.827  0.857  0.752  0.796  0.913  0.800  0.839  0.629  0.872  0.530
5   0.838  0.626  0.733  0.704  0.881  0.692  0.915  0.915  0.757  0.496  0.941  0.563
6   0.930  0.639  0.670  0.915  0.666  0.676  0.855  0.858  0.727  0.420  0.978  0.529
7   0.914  0.505  0.645  0.684  0.716  0.551  0.936  0.663  0.612  0.653  0.989  0.314
8   0.949  0.427  0.713  0.692  0.780  0.779  0.895  0.822  0.770  0.560  0.990  0.457
9   0.925  0.417  0.807  0.640  0.678  0.724  0.899  0.655  0.650  0.554  0.992  0.494
10  0.925  0.584  0.692  0.642  0.771  0.617  0.933  0.841  0.622  0.577  0.989  0.534
11  0.956  0.497  0.706  0.579  0.777  0.794  0.912  0.581  0.665  0.267  0.993  0.664
12  0.914  0.473  0.555  0.628  0.691  0.622  0.886  0.641  0.706  0.549  0.990  0.531
13  0.958  0.578  0.585  0.570  0.721  0.644  0.839  0.653  0.704  0.491  0.988  0.450
14  0.891  0.662  0.675  0.708  0.556  0.602  0.902  0.573  0.742  0.495  0.990  0.587
15  0.943  0.476  0.649  0.576  0.667  0.603  0.873  0.662  0.600  0.597  0.996  0.485
16  0.963  0.530  0.603  0.637  0.771  0.593  0.912  0.618  0.669  0.409  0.984  0.404
17  0.964  0.673  0.591  0.658  0.638  0.638  0.882  0.506  0.519  0.443  0.992  0.480
18  0.953  0.544  0.624  0.631  0.567  0.644  0.933  0.512  0.578  0.514  0.993  0.539
19  0.958  0.427  0.698  0.640  0.634  0.672  0.906  0.601  0.666  0.721  0.994  0.384
20  0.961  0.692  0.512  0.669  0.647  0.675  0.834  0.642  0.647  0.438  0.995  0.629
Bolded numbers are the lowest and highest R values.
Table A16. MAE for methane yield for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.015  1.389  0.042  0.020  0.024  0.020  0.016  0.061  0.044  0.046  0.033  0.077
2   0.048  0.031  0.016  0.065  0.097  0.016  0.011  0.082  0.064  0.042  0.033  0.073
3   0.086  0.068  0.027  0.080  0.024  0.115  0.012  0.012  0.051  0.079  0.028  0.021
4   0.014  0.050  0.020  0.073  0.071  0.128  0.010  0.055  0.028  0.106  0.021  0.052
5   0.038  0.025  0.024  0.025  0.012  0.078  0.009  0.040  0.023  0.034  0.007  0.052
6   0.008  0.072  0.023  0.037  0.061  0.068  0.012  0.018  0.108  0.119  0.004  0.116
7   0.009  0.048  0.033  0.017  0.023  0.078  0.008  0.061  0.092  0.030  0.004  0.072
8   0.007  0.028  0.071  0.034  0.027  0.034  0.011  0.160  0.103  0.095  0.003  0.057
9   0.009  0.086  0.068  0.099  0.077  0.021  0.010  0.130  0.053  0.083  0.002  0.103
10  0.008  0.047  0.094  0.025  0.119  0.065  0.008  0.035  0.081  0.042  0.001  0.058
11  0.007  0.059  0.033  0.070  0.033  0.182  0.010  0.109  0.018  0.146  0.002  0.052
12  0.008  0.067  0.081  0.058  0.119  0.060  0.013  0.155  0.123  0.100  0.001  0.058
13  0.005  0.103  0.059  0.018  0.023  0.094  0.015  0.097  0.097  0.132  0.003  0.052
14  0.011  0.075  0.023  0.071  0.057  0.055  0.011  0.068  0.163  0.062  0.002  0.149
15  0.006  0.030  0.091  0.033  0.160  0.032  0.012  0.108  0.141  0.066  0.002  0.089
16  0.004  0.104  0.061  0.086  0.019  0.060  0.012  0.057  0.175  0.124  0.002  0.047
17  0.009  0.050  0.087  0.100  0.141  0.104  0.013  0.083  0.054  0.053  0.003  0.092
18  0.006  0.105  0.032  0.177  0.091  0.045  0.008  0.186  0.084  0.066  0.003  0.083
19  0.006  0.167  0.057  0.050  0.039  0.074  0.011  0.142  0.056  0.047  0.002  0.078
20  0.005  0.103  0.062  0.089  0.107  0.025  0.012  0.043  0.121  0.049  0.001  0.043
Bolded numbers are the lowest and highest MAE values.
Table A17. MAPE for methane yield for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.08  0.22  0.19  0.10  0.12  0.10  0.09  0.31  0.20  0.21  0.160  0.34
2   0.22  0.16  0.08  0.32  0.47  0.09  0.06  0.41  0.29  0.22  0.160  0.36
3   0.39  0.33  0.14  0.40  0.14  0.55  0.06  0.07  0.25  0.39  0.140  0.12
4   0.08  0.26  0.10  0.36  0.35  0.62  0.05  0.27  0.14  0.51  0.110  0.26
5   0.18  0.14  0.12  0.12  0.06  0.39  0.05  0.20  0.11  0.17  0.040  0.26
6   0.04  0.32  0.13  0.18  0.28  0.31  0.06  0.08  0.52  0.57  0.020  0.55
7   0.05  0.25  0.16  0.09  0.11  0.39  0.04  0.29  0.44  0.16  0.020  0.36
8   0.03  0.15  0.34  0.18  0.13  0.17  0.06  0.77  0.50  0.46  0.010  0.27
9   0.04  0.41  0.33  0.47  0.38  0.10  0.05  0.62  0.25  0.38  0.010  0.50
10  0.04  0.22  0.46  0.12  0.57  0.31  0.04  0.17  0.40  0.21  0.006  0.28
11  0.04  0.29  0.16  0.34  0.15  0.85  0.05  0.53  0.10  0.69  0.012  0.26
12  0.04  0.32  0.37  0.27  0.56  0.27  0.06  0.75  0.59  0.46  0.007  0.29
13  0.02  0.48  0.30  0.10  0.11  0.44  0.07  0.47  0.46  0.64  0.014  0.25
14  0.06  0.35  0.12  0.34  0.27  0.26  0.05  0.32  0.75  0.29  0.008  0.68
15  0.03  0.14  0.44  0.17  0.74  0.15  0.06  0.52  0.67  0.31  0.009  0.42
16  0.02  0.50  0.30  0.39  0.09  0.29  0.06  0.28  0.82  0.58  0.011  0.23
17  0.04  0.23  0.40  0.46  0.68  0.48  0.06  0.39  0.27  0.25  0.014  0.44
18  0.03  0.50  0.15  0.82  0.45  0.22  0.04  0.88  0.41  0.32  0.013  0.39
19  0.03  0.79  0.28  0.24  0.19  0.36  0.06  0.69  0.26  0.22  0.010  0.39
20  0.02  0.47  0.29  0.43  0.51  0.12  0.06  0.20  0.58  0.23  0.007  0.19
Bolded numbers are the lowest and highest MAPE values.
Table A18. MSE for methane yield for different training algorithms at different number of hidden neurons.
Hidden Neurons  LM  GDX  CGP  SCG  BFG  OSS  RP  CGB  CGF  GD  BR  GDm
1   0.0004  0.0023  0.0019  0.0008  0.0007  0.0008  0.0006  0.0043  0.0021  0.0023  0.0014        0.0066
2   0.0025  0.0015  0.0003  0.0045  0.0095  0.0005  0.0003  0.0072  0.0046  0.0023  0.0014        0.0061
3   0.0001  0.0050  0.0011  0.0069  0.0011  0.0135  0.0002  0.0003  0.0032  0.0073  0.0011        0.0010
4   0.0004  0.0030  0.0006  0.0055  0.0054  0.0166  0.0002  0.0034  0.0011  0.0126  0.0008        0.0036
5   0.0018  0.0011  0.0007  0.0011  0.0002  0.0066  0.0002  0.0018  0.0007  0.0020  0.0001        0.0032
6   0.0001  0.0057  0.0010  0.0015  0.0042  0.0053  0.0003  0.0005  0.0121  0.0190  2.32 × 10^-5  0.0158
7   0.0002  0.0030  0.0019  0.0006  0.0007  0.0070  0.0001  0.0047  0.0095  0.0013  3.22 × 10^-5  0.0072
8   0.0001  0.0012  0.0055  0.0015  0.0009  0.0015  0.0002  0.0261  0.0111  0.0108  2.16 × 10^-5  0.0041
9   0.0001  0.0103  0.0050  0.0119  0.0065  0.0007  0.0002  0.0177  0.0036  0.0076  1.45 × 10^-5  0.0125
10  0.0001  0.0027  0.0093  0.0013  0.0146  0.0064  0.0006  0.0015  0.0071  0.0028  2.27 × 10^-5  0.0038
11  0.0001  0.0047  0.0017  0.0070  0.0013  0.0346  0.0002  0.0132  0.0005  0.0259  1.39 × 10^-5  0.0037
12  0.0002  0.0070  0.0071  0.0039  0.0161  0.0041  0.0002  0.0246  0.0160  0.0109  2.06 × 10^-5  0.0040
13  0.0001  0.0120  0.0043  0.0007  0.0008  0.0099  0.0003  0.0101  0.0104  0.0197  2.21 × 10^-5  0.0043
14  0.0002  0.0084  0.0010  0.0059  0.0048  0.0038  0.0002  0.0071  0.0298  0.0057  2.11 × 10^-5  0.0336
15  0.0001  0.0014  0.0089  0.0017  0.0311  0.0013  0.0002  0.0123  0.0219  0.0057  7.39 × 10^-6  0.0096
16  0.0001  0.0169  0.0045  0.0115  0.0006  0.0044  0.0002  0.0041  0.0329  0.0197  3.30 × 10^-5  0.0031
17  0.0001  0.0031  0.0114  0.0162  0.0220  0.0172  0.0002  0.0087  0.0041  0.0038  1.62 × 10^-5  0.0104
18  0.0001  0.0164  0.0014  0.0404  0.0108  0.0039  0.0001  0.0407  0.0083  0.0060  1.36 × 10^-5  0.0086
19  0.0001  0.0358  0.0045  0.0041  0.0027  0.0058  0.0002  0.0211  0.0052  0.0035  1.10 × 10^-5  0.0080
20  0.0001  0.0155  0.0068  0.0104  0.0128  0.0012  0.0003  0.0023  0.0155  0.0040  8.63 × 10^-6  0.0043
Table A19. R, MAE, MAPE, MSE value for COD removal for different training algorithms at different number of hidden neurons (aerobic process).
Hidden Neuron   LM: R   MAE    MAPE   MSE        BR: R   MAE    MAPE   MSE
1   0.923  4.401  0.061  31.3975    0.845  5.963  0.080  61.8890
2   0.938  3.420  0.048  26.3683    0.960  3.146  0.040  16.2590
3   0.982  1.920  0.021  7.3187     0.904  2.575  0.030  47.6270
4   0.887  3.365  0.038  58.1403    0.975  1.497  0.020  10.5095
5   0.955  2.851  0.036  27.3267    0.995  0.651  0.010  2.2123
6   0.935  2.827  0.032  44.2447    0.997  0.459  0.005  1.0288
7   0.909  3.131  0.036  60.1302    0.997  0.321  0.004  1.0767
8   0.963  1.525  0.016  19.0559    0.982  0.956  0.010  7.9566
9   0.966  1.954  0.022  16.4031    0.980  1.070  0.016  12.4715
10  0.943  2.503  0.028  27.9288    0.996  0.605  0.007  1.9727
11  0.967  2.754  0.034  15.7244    0.991  0.674  0.010  4.7788
12  0.916  4.015  0.044  46.5166    0.993  0.601  0.007  2.9810
13  0.875  4.688  0.059  91.5153    0.991  0.664  0.008  3.5952
14  0.939  2.823  0.034  27.3849    0.978  1.136  0.017  13.6653
15  0.922  4.054  0.053  30.8332    0.997  0.359  0.004  1.1977
16  0.954  2.748  0.035  19.0488    0.997  0.372  0.004  1.5340
17  0.913  3.774  0.042  47.5349    0.983  1.026  0.012  7.4674
18  0.866  3.264  0.036  74.7672    0.997  0.528  0.006  1.0595
19  0.898  4.005  0.046  54.9089    0.991  0.639  0.007  4.0884
20  0.929  2.818  0.03   33.8239    0.989  0.830  0.012  5.6022
Bolded numbers are the highest R and lowest MAE, MAPE and MSE values.
Table A20. R, MAE, MAPE, MSE value for BOD removal for different training algorithms at different number of hidden neurons (aerobic process).
Hidden Neuron   LM: R   MAE    MAPE   MSE        BR: R   MAE    MAPE   MSE
1   0.447  2.918  0.031  12.5023    0.502  2.727  0.030  11.8510
2   0.875  1.235  0.013  4.6747     0.674  2.565  0.030  9.3357
3   0.929  1.260  0.013  2.2124     0.948  0.829  0.010  1.6992
4   0.939  0.930  0.010  1.9114     0.920  1.230  0.010  2.3544
5   0.927  1.172  0.012  2.2400     0.956  0.833  0.010  1.3128
6   0.935  0.874  0.009  2.3717     0.981  0.459  0.005  0.6047
7   0.928  0.848  0.009  3.4287     0.984  0.406  0.004  0.5496
8   0.957  0.578  0.006  1.3668     0.993  0.324  0.003  0.2235
9   0.936  0.889  0.009  2.2842     0.995  0.247  0.003  0.1734
10  0.945  0.606  0.006  1.6660     0.968  0.730  0.008  1.2808
11  0.944  0.709  0.007  1.8058     0.990  0.329  0.004  0.3182
12  0.932  1.056  0.011  2.1750     0.991  0.387  0.004  0.3320
13  0.908  1.240  0.013  6.0826     0.986  0.378  0.004  0.5745
14  0.938  0.735  0.008  2.3268     0.990  0.321  0.003  0.3284
15  0.938  0.975  0.010  1.9248     0.985  0.261  0.003  0.4613
16  0.933  0.961  0.010  2.2338     0.976  0.362  0.004  0.8479
17  0.876  1.203  0.012  3.7892     0.981  0.418  0.005  0.6142
18  0.929  0.603  0.006  2.3706     0.964  0.814  0.009  1.1946
19  0.942  0.734  0.008  1.7587     0.984  0.463  0.005  0.5261
20  0.960  0.612  0.006  1.5370     0.986  0.338  0.004  0.5504
Bolded numbers are the highest R and lowest MAE, MAPE and MSE values.
Table A21. R, MAE, MAPE, MSE value for TSS removal for different training algorithms at different number of hidden neurons (aerobic process).
Hidden Neuron   LM: R   MAE    MAPE   MSE        BR: R   MAE    MAPE   MSE
1   0.638  4.535  0.050  27.0476    0.637  4.629  0.050  27.6880
2   0.682  4.265  0.047  24.6278    0.833  3.284  0.040  14.1102
3   0.918  1.930  0.021  7.2376     0.918  1.867  0.020  7.5477
4   0.925  1.707  0.018  7.5925     0.982  0.712  0.010  1.6731
5   0.935  1.667  0.018  6.9112     0.981  0.720  0.010  1.7804
6   0.983  0.583  0.007  1.8520     0.995  0.399  0.004  0.5560
7   0.923  1.189  0.013  6.7818     0.991  0.408  0.005  0.9309
8   0.959  0.880  0.009  3.9917     0.984  0.503  0.006  1.4665
9   0.992  0.382  0.004  0.7541     0.978  0.576  0.007  2.1969
10  0.937  1.210  0.013  6.2074     0.974  0.944  0.011  2.4801
11  0.890  1.720  0.019  9.4141     0.984  0.525  0.006  1.7226
12  0.891  1.739  0.019  10.1281    0.97   0.784  0.009  3.1270
13  0.945  1.057  0.012  5.7302     0.977  0.594  0.007  2.6534
14  0.943  1.276  0.014  6.0531     0.985  0.524  0.006  1.7528
15  0.951  1.637  0.018  5.3287     0.981  0.459  0.005  2.0162
16  0.957  1.584  0.018  4.8823     0.965  0.687  0.008  3.4741
17  0.930  1.633  0.018  7.2215     0.99   0.482  0.005  1.0659
18  0.891  1.419  0.015  10.3910    0.975  0.915  0.010  2.3832
19  0.893  1.400  0.015  9.8483     0.984  0.569  0.006  1.6107
20  0.903  1.654  0.018  9.9505     0.983  0.496  0.006  1.5933
Bolded numbers are the highest R and lowest MAE, MAPE and MSE values.
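The error metrics in the tables above follow their standard definitions, restated here for reference (assuming $n$ data points, with $y_i$ the measured value and $\hat{y}_i$ the ANN prediction; the script in Appendix A.2 hard-codes dataset-specific sample counts in place of $n$):

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|, \qquad \mathrm{MAPE}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i-\hat{y}_i}{y_i}\right|, \qquad \mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2},$$

and R is the correlation coefficient between predicted and target values, as returned by MATLAB's regression function.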

Appendix A.2. MATLAB CODE

clear all
load('finalone.mat')                 % training data: 'Input' and 'Output' matrices
x = Input';                          % one sample per column
t = Output';

for k = 1:20                         % sweep from 1 to 20 hidden neurons
load('finalone.mat')                 % reload data (workspace is cleared at the end of each sweep)
x = Input';
t = Output';
trainFcn = 'trainlm';                % Levenberg-Marquardt backpropagation

% Create and train a fitting network 1000 times per neuron count to
% average out random weight initializations and data divisions
for i = 1:1000
hiddenLayerSize = k;
net = fitnet(hiddenLayerSize,trainFcn);

% Choose input and output pre/post-processing functions
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};

% Set up division of data for training, validation, testing
net.divideFcn = 'dividerand';        % divide data randomly
net.divideMode = 'sample';           % divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Choose a performance function
net.performFcn = 'mse';              % mean squared error

% Choose plot functions
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression','plotfit'};

% Train the network
[net,tr] = train(net,x,t);

% Test the network
y = net(x);
e = gsubtract(t,y);                  % prediction errors
performance = perform(net,t,y);

% Recalculate training, validation and test performance
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,y);
valPerformance = perform(net,valTargets,y);
testPerformance = perform(net,testTargets,y);

% Extraction of weights and biases (cell-array contents, hence braces)
w1{i} = net.IW{1,1};                 % input-to-hidden weights
w2{i} = net.LW{2,1};                 % hidden-to-output weights
b1{i} = net.b{1};                    % hidden-layer biases
b2{i} = net.b{2};                    % output-layer biases
Rval(:,i) = regression(t,y);
Predval(i) = {y};
MSEtrain(i) = trainPerformance;
MSEval(i) = valPerformance;
MSEtest(i) = testPerformance;
% Error metrics per run; 96 and 24 are dataset-specific sample counts
MAE(i) = {1/96*sum(abs(Output-y'))};
MAPE(i) = {1/96*(sum(rdivide(abs(Output-y'),Output)))};
MSE(i) = {(sum((Output-y').^2))/24};

end

% R value of the final run
R = regression(t,y);

% Extract predicted values and metrics from the cell arrays
m = cell2mat(Predval);
p = cell2mat(MAE);
c = cell2mat(MAPE);
d = cell2mat(MSE);

% Export to Excel; the strings written below are Excel formulas that
% locate the best of the 1000 runs (max R, min MAE/MAPE/MSE) per sheet
filename = ['gdm','.xlsx'];
Sheet = sprintf('neuron%d',k);
col_header = {'R value','','','Sum of R2 value','Average of R2 value','Best R2 value','','MAE value','','','','Sum of MAE','','MAPE value','','','','Sum of MAPE','','MSE value','','','','Sum of MSE','','Predvalue','','','Position','','Expval'};
sumofr2 = {'=SUM(A2:C2)','','=MAX(D:D)'};
match1 = {'=MATCH(F2,D:D,0)'};
maeoffset = {'=OFFSET($H$2,COLUMNS($H2:H2)-1+(ROWS($2:2)-1)*3,0)','','','=SUM(I2:K2)','=MIN(L2:L1001)'};
match2 = {'=MATCH(M2,L:L,0)'};
mapeoffset1 = {'=OFFSET($N$2,COLUMNS($N2:N2)-1+(ROWS($2:2)-1)*3,0)','','','=SUM(O2:Q2)','=MIN(R2:R1001)'};
match3 = {'=MATCH(S2,R:R,0)'};
mseoffset1 = {'=OFFSET($T$2,COLUMNS($T2:T2)-1+(ROWS($2:2)-1)*3,0)','','','=SUM(U2:W2)','=MIN(X2:X1001)'};
match4 = {'=MATCH(Y2,X:X,0)'};
position1 = {'=INT((ROW(E1)-1)/24)+1'};
xlswrite(filename,col_header,Sheet,'A1');
xlswrite(filename,sumofr2,Sheet,'D2');
xlswrite(filename,match1,Sheet,'F3');
xlswrite(filename,maeoffset,Sheet,'I2');
xlswrite(filename,match2,Sheet,'M3');
xlswrite(filename,mapeoffset1,Sheet,'O2');
xlswrite(filename,match3,Sheet,'S3');
xlswrite(filename,mseoffset1,Sheet,'U2');
xlswrite(filename,match4,Sheet,'Y3');
xlswrite(filename,position1,Sheet,'AC2');

xlswrite(filename,Rval',Sheet,'A2')
xlswrite(filename,p',Sheet,'H2')
xlswrite(filename,c',Sheet,'N2')
xlswrite(filename,d',Sheet,'T2')
xlswrite(filename,m',Sheet,'Z2')
xlswrite(filename,t',Sheet,'AE2')
save(['workspacefor' num2str(k)])    % save the full workspace for this neuron count
clear all                            % clear before the next sweep (data are reloaded above)

end
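As a usage sketch (not part of the original script): once a sweep has finished, a saved workspace such as workspacefor10.mat contains the last network trained with that neuron count, which can be reloaded and applied to a new operating point. The file name follows the save call above; the input values below are hypothetical and follow the anaerobic input order used in this work (Qin, OLR, CODin, BODin, TSSin, pH inlet).

load('workspacefor10.mat','net')               % network trained with 10 hidden neurons
xnew = [150; 5.0; 60000; 30000; 25000; 4.8];   % hypothetical POME feed (column vector)
ypred = net(xnew)                              % predicted COD removal (%), CH4 purity (%), CH4 yield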

References

1. Department of Statistics Malaysia. Department of Statistics Malaysia Press Release. 2021. Available online: https://www.dosm.gov.my/v1/index.php?r=column/cthemeByCat&cat=72&bul_id=TDV1YU4yc1Z0dUVyZ0xPV0ptRlhWQT09&menu_id=Z0VTZGU1UHBUT1VJMFlpaXRRR0xpdz09 (accessed on 1 March 2022).
2. Malaysian Palm Oil Council. Malaysian Palm Oil Industry. 2020. Available online: https://mpoc.org.my/malaysian-palm-oil-industry/#:~:text=In2020%2CMalaysiaaccountedfor,fatsinthesameyear (accessed on 1 March 2022).
3. Poh, P.E.; Yong, W.-J.; Chong, M.F. Palm Oil Mill Effluent (POME) Characteristic in High Crop Season and the Applicability of High-Rate Anaerobic Bioreactors for the Treatment of POME. Ind. Eng. Chem. Res. 2010, 49, 11732–11740.
4. Madaki, Y.S.; Seng, L.G. Palm oil mill effluent (POME) from Malaysia palm oil mills: Waste or resource. Int. J. Sci. Environ. Technol. 2013, 2, 1138–1155.
5. Ge, D.; Yuan, H.; Xiao, J.; Zhu, N. Insight into the enhanced sludge dewaterability by tannic acid conditioning and pH regulation. Sci. Total Environ. 2019, 679, 298–306.
6. Ahmad, A.L.; Ismail, S.; Bhatia, S. Water recycling from palm oil mill effluent (POME) using membrane technology. Desalination 2003, 157, 87–95.
7. Chong, J.; Chan, Y.; Chong, S.; Ho, Y.; Mohamad, M.; Tan, W.; Cheng, C.; Lim, J. Simulation and Optimisation of Integrated Anaerobic-Aerobic Bioreactor (IAAB) for the Treatment of Palm Oil Mill Effluent. Processes 2021, 9, 1124.
8. Tan, H.M.; Poh, P.E.; Gouwanda, D. Resolving stability issue of thermophilic high-rate anaerobic palm oil mill effluent treatment via adaptive neuro-fuzzy inference system predictive model. J. Clean. Prod. 2018, 198, 797–805.
9. Batstone, D.; Keller, J.; Angelidaki, I.; Kalyuzhnyi, S.; Pavlostathis, S.; Rozzi, A.; Sanders, W.; Siegrist, H.; Vavilin, V. Anaerobic digestion model No 1 (ADM1). Water Sci. Technol. 2002, 45, 65–73.
10. Güçlü, D.; Dursun, S. Amelioration of Carbon Removal Prediction for an Activated Sludge Process using an Artificial Neural Network (ANN). CLEAN Soil Air Water 2008, 36, 781–787.
11. Yang, S.-S.; Yu, X.-L.; Ding, M.-Q.; He, L.; Cao, G.-L.; Zhao, L.; Tao, Y.; Pang, J.-W.; Bai, S.-W.; Ding, J.; et al. Simulating a combined lysis-cryptic and biological nitrogen removal system treating domestic wastewater at low C/N ratios using artificial neural network. Water Res. 2021, 189, 116576.
12. Manu, D.S.; Thalla, A.K. Artificial intelligence models for predicting the performance of biological wastewater treatment plant in the removal of Kjeldahl Nitrogen from wastewater. Appl. Water Sci. 2017, 7, 3783–3791.
13. Güçlü, D.; Yılmaz, N.; Ozkan-Yucel, U.G. Application of neural network prediction model to full-scale anaerobic sludge digestion. J. Chem. Technol. Biotechnol. 2011, 86, 691–698.
14. Sathish, S.; Vivekanandan, S. Parametric optimization for floating drum anaerobic bio-digester using Response Surface Methodology and Artificial Neural Network. Alex. Eng. J. 2016, 55, 3297–3307.
15. Park, Y.-S.; Lek, S. Artificial Neural Networks. In Developments in Environmental Modelling; Elsevier: Amsterdam, The Netherlands, 2016; pp. 123–140.
16. Nair, V.V.; Dhar, H.; Kumar, S.; Thalla, A.K.; Mukherjee, S.; Wong, J.W. Artificial neural network based modeling to evaluate methane yield from biogas in a laboratory-scale anaerobic bioreactor. Bioresour. Technol. 2016, 217, 90–99.
17. Mougari, N.E.; Largeau, J.F.; Himrane, N.; Hachemi, M.; Tazerout, M. Application of artificial neural network and kinetic modeling for the prediction of biogas and methane production in anaerobic digestion of several organic wastes. Int. J. Green Energy 2021, 18, 1584–1596.
18. Khalil, A.; Almasri, M.N.; McKee, M.; Kaluarachchi, J.J. Applicability of statistical learning algorithms in groundwater quality modeling. Water Resour. Res. 2005, 41, 1–16.
19. LeCun, Y.; Bottou, L.; Müller, K. Efficient BackProp. 2000, pp. 1–44. Available online: http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf (accessed on 1 March 2022).
20. Lee, D.-H.; Kim, Y.-T.; Lee, S.-R. Shallow Landslide Susceptibility Models Based on Artificial Neural Networks Considering the Factor Selection Method and Various Non-Linear Activation Functions. Remote Sens. 2020, 12, 1194.
21. Alver, A.; Kazan, Z. Prediction of full-scale filtration plant performance using artificial neural networks based on principal component analysis. Sep. Purif. Technol. 2020, 230, 115868.
22. Roopnarain, A.; Rama, H.; Ndaba, B.; Bello-Akinosho, M.; Bamuza-Pemu, E.; Adeleke, R. Unravelling the anaerobic digestion 'black box': Biotechnological approaches for process optimization. Renew. Sustain. Energy Rev. 2021, 152, 111717.
23. Merkel, W.; Krauth, K. Mass transfer of carbon dioxide in anaerobic reactors under dynamic substrate loading conditions. Water Res. 1999, 33, 2011–2020.
24. Roopnarain, A.; Adeleke, R. Current status, hurdles and future prospects of biogas digestion technology in Africa. Renew. Sustain. Energy Rev. 2017, 67, 1162–1179.
25. Sebti, A.; Boutra, B.; Trari, M.; Aoudjit, L.; Igoud, S. Application of Artificial Neural Network for Modeling Wastewater Treatment Process. In Lecture Notes in Networks and Systems; Springer: Berlin, Germany, 2019; pp. 143–154.
26. Chen, M.; Chen, C.-J.; Bao, B.-C.; Xu, Q. Multi-stable patterns coexisting in memristor synapse-coupled Hopfield neural network. In Mem-Elements for Neuromorphic Circuits with Artificial Intelligence Applications; Elsevier: Amsterdam, The Netherlands, 2021; pp. 439–459.
27. Miner, G.; Elder, J.; Fast, A.; Hill, T.; Nisbet, R.; Delen, D. Practical Text Mining and Statistical Analysis for Non-Structured Text Data Applications; Elsevier: Amsterdam, The Netherlands, 2012; pp. 1007–1016.
28. Thanki, R.; Borra, S. Application of Machine Learning Algorithms for Classification and Security of Diagnostic Images. In Machine Learning in Bio-Signal Analysis and Diagnostic Imaging; Elsevier: Amsterdam, The Netherlands, 2019; pp. 273–292.
29. Schmitt, F.; Banu, R.; Yeom, I.-T.; Do, K.-U. Development of artificial neural networks to predict membrane fouling in an anoxic-aerobic membrane bioreactor treating domestic wastewater. Biochem. Eng. J. 2018, 133, 47–58.
30. Bekkari, N.; Zeddouri, A. Using artificial neural network for predicting and controlling the effluent chemical oxygen demand in wastewater treatment plant. Manag. Environ. Qual. Int. J. 2019, 30, 593–608.
31. Madaeni, S.S.; Shiri, M.; Kurdian, A.R. Modeling, Optimization, and Control of Reverse Osmosis Water Treatment in Kazeroon Power Plant Using Neural Network. Chem. Eng. Commun. 2015, 202, 6–14.
32. Jawad, J.; Hawari, A.H.; Zaidi, S.J. Artificial neural network modeling of wastewater treatment and desalination using membrane processes: A review. Chem. Eng. J. 2021, 419, 129540.
33. Kaur, H.; Salaria, D.S. Bayesian Regularization based Neural Network Tool for Software Effort Estimation. Glob. J. Comput. Sci. Technol. 2013, 13, 1–7.
34. Burden, F.; Winkler, D. Bayesian Regularization of Neural Networks. Methods Mol. Biol. 2008, 458, 23–42.
35. Bajpai, P. Basics of Anaerobic Digestion Process. In Gigaseal Formation in Patch Clamping; Springer: Berlin, Germany, 2017; pp. 7–12.
36. Buchanan, J.R.; Seabloom, R.W. Aerobic Treatment of Wastewater and Aerobic Treatment Units; University Curriculum Development for Decentralized Wastewater Management: Fayetteville, AR, USA, 2004; pp. 1–27. Available online: http://onsite.tennessee.edu/Aerobic%20Treatment%20&%20ATUs.pdf (accessed on 21 December 2021).
37. Li, J.-Y.; Chow, T.; Yu, Y.-L. The estimation theory and optimization algorithm for the number of hidden units in the higher-order feedforward neural network. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA; pp. 1229–1233.
38. Ke, J.; Liu, X. Empirical Analysis of Optimal Hidden Neurons in Neural Network Modeling for Stock Prediction. In Proceedings of the 2008 IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application; IEEE: Piscataway, NJ, USA, 2008; Volume 2, pp. 828–832.
39. Sheela, K.G.; Deepa, S.N. Review on Methods to Fix Number of Hidden Neurons in Neural Networks. Math. Probl. Eng. 2013, 2013, 425740.
40. Hunter, D.; Yu, H.; Pukish, M.S., III; Kolbusz, J.; Wilamowski, B.M. Selection of Proper Neural Network Sizes and Architectures—A Comparative Study. IEEE Trans. Ind. Inform. 2012, 8, 228–240.
41. Shibata, K.; Ikeda, Y. Effect of number of hidden neurons on learning in large-scale layered neural networks. In Proceedings of the ICROS-SICE International Joint Conference, Fukuoka, Japan, 18–21 August 2009; pp. 5008–5013.
42. Xu, S.; Chen, L. A Novel Approach for Determining the Optimal Number of Hidden Layer Neurons for FNN's and Its Application in Data Mining. 2008, pp. 683–686. Available online: https://eprints.utas.edu.au/6995/1/02-au-xu.pdf (accessed on 1 March 2022).
43. Zhang, Z.; Ma, X.; Yang, Y. Bounds on the number of hidden neurons in three-layer binary neural networks. Neural Netw. 2003, 16, 995–1002.
44. Fujita, O. Statistical estimation of the number of hidden units for feedforward neural networks. Neural Netw. 1998, 11, 851–859.
45. Tamura, S.; Tateishi, M. Capabilities of a four-layered feedforward neural network: Four layers versus three. IEEE Trans. Neural Netw. 1997, 8, 251–255.
46. Garson, D.G. Interpreting neural-network connection weights. AI Expert 1991, 6, 46–51.
47. Aleboyeh, A.; Kasiri, M.; Olya, M. Prediction of azo dye decolorization by UV/H2O2 using artificial neural networks. Dye. Pigment. 2008, 77, 288–294.
48. Yetilmezsoy, K.; Demirel, S. Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells. J. Hazard. Mater. 2008, 153, 1288–1300.
49. Dibaba, O.R.; Lahiri, S.K.; T'jonck, S.; Dutta, A. Experimental and Artificial Neural Network Modeling of an Upflow Anaerobic Contactor (UAC) for Biogas Production from Vinasse. Int. J. Chem. React. Eng. 2016, 14, 1241–1254.
50. Antwi, P.; Li, J.; Meng, J.; Deng, K.; Quashie, F.K.; Li, J.; Boadi, P.O. Feedforward neural network model estimating pollutant removal process within mesophilic upflow anaerobic sludge blanket bioreactor treating industrial starch processing wastewater. Bioresour. Technol. 2018, 257, 102–112.
51. Jaroenpoj, S.; Yu, Q.J.; Ness, J. Development of Artificial Neural Network Models for Biogas Production from Co-Digestion of Leachate and Pineapple Peel. Glob. Environ. Eng. 2014, 1, 42–47.
52. Xu, F.; Wang, Z.-W.; Li, Y. Predicting the methane yield of lignocellulosic biomass in mesophilic solid-state anaerobic digestion based on feedstock characteristics and process parameters. Bioresour. Technol. 2014, 173, 168–176.
53. Ghani, W.; Idris, A. Preliminary study on biogas production from municipal solid waste (MSW) leachate. J. Eng. Sci. Technol. 2009, 4, 374–380.
54. Patel, P.; Modi, A.; Minipara, D.; Kumar, A. Microbial biosurfactants in management of organic waste. In Sustainable Environmental Clean-Up; Elsevier: Amsterdam, The Netherlands, 2021; pp. 211–230.
55. Basim, K. The effect of MLSS value on removal of COD and phosphorus control method of return activated sludge concentration. J. Eng. Appl. Sci. 2018, 13, 9730–9734.
56. Senapati, T.; Samanta, P.; Roy, R.; Sasmal, T.; Ghosh, A.R. Artificial neural network: An alternative approach for assessment of biochemical oxygen demand of the Damodar River, West Bengal, India. In Intelligent Environmental Data Monitoring for Pollution Management; Elsevier: Amsterdam, The Netherlands, 2020; pp. 231–240.
57. Olden, J.D.; Joy, M.K.; Death, R.G. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecol. Model. 2004, 178, 389–397.
58. Chan, Y.J.; Foo Seng, H.; Chong, M.F.; Ng, D.; Lim, D.L.K. Pre-commercialized integrated anaerobic-aerobic bioreactor (IAAB) for palm oil mill effluent (POME) treatment & biogas generation. Environ. Health 2020, 40, 57–66.
Figure 1. MSE vs. number of hidden neurons.
Figure 2. Neural network architecture for (a) anaerobic digestion, (b) aerobic process.
Figure 3. (a) Correlation between experimental data and predicted value of COD removal. (b) Prediction value and measured data by ANN model for COD removal of anaerobic digestion.
Figure 4. (a) Correlation between experimental data and predicted value of percentage of CH4 in biogas. (b) Prediction value and measured data by ANN model for percentage of CH4 in biogas.
Figure 5. (a) Correlation between experimental data and predicted value of methane yield. (b) Prediction value and measured data by ANN model for methane yield (anaerobic digestion).
Figure 6. (a) Correlation between experimental data and predicted value of COD removal. (b) Prediction value and measured data by ANN model for COD removal (aerobic process).
Figure 7. (a) Correlation between experimental data and predicted value of BOD removal. (b) Prediction value and measured data by ANN model for BOD removal (aerobic process).
Figure 8. (a) Correlation between experimental data and predicted value of TSS removal. (b) Prediction value and measured data by ANN model for TSS removal (aerobic process).
Table 1. Equations to determine the number of hidden neurons required.
Equation | Reference
$N_h = \frac{4n^2 + 3}{n^2 - 8}$ | [39]
$N_h = 2^n - 1$ | [40]
$N_h = \sqrt{N_i N_o}$ | [41]
$N_h = C_f \left( \frac{N}{d \log N} \right)^{0.5}$ | [42]
$N_h = \frac{N_{in} + \sqrt{N_p}}{L}$ | [38]
$N_h = \frac{2^n}{n + 1}$ | [43]
$N_h = \frac{K \log (P_c Z)}{\log S}$ | [44]
$N_h = N - 1$ | [45]
$N_h = \frac{\sqrt{1 + 8n} - 1}{2}$ | [37]
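As an illustrative check (not from the original study), several of the heuristics in Table 1 can be evaluated in MATLAB for the anaerobic model; the sizes assumed below (n = Ni = 6 input variables, No = 3 outputs, Np = 96 training samples, L = 1 hidden layer) are assumptions for this sketch only:

n = 6; Ni = 6; No = 3; Np = 96; L = 1;   % assumed network and data sizes
Nh1 = (4*n^2 + 3)/(n^2 - 8);             % [39] -> 5.25
Nh2 = 2^n - 1;                           % [40] -> 63
Nh3 = sqrt(Ni*No);                       % [41] -> 4.24
Nh4 = (Ni + sqrt(Np))/L;                 % [38] -> 15.80
Nh5 = 2^n/(n + 1);                       % [43] -> 9.14
Nh6 = (sqrt(1 + 8*n) - 1)/2;             % [37] -> 3
fprintf('Candidate hidden neuron counts: %.2f %d %.2f %.2f %.2f %.2f\n', ...
    Nh1, Nh2, Nh3, Nh4, Nh5, Nh6);

The wide spread of the candidates (roughly 3 to 63) is consistent with scanning 1 to 20 hidden neurons empirically, as done in the appendix tables.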
Table 2. Influence of input factors on the anaerobic and aerobic processes using the connection weight approach.
Reaction             Output              Input                        Connection Weight   Rank
Anaerobic reaction   COD removal         Qin                          0.726               2
                                         OLR                          0.491               5
                                         CODin                        0.752               1
                                         BODin                        0.320               6
                                         TSSin                        −0.558              4
                                         pH inlet                     −0.668              3
                     Purity of methane   Qin                          1.001               1
                                         OLR                          −0.094              6
                                         CODin                        −0.568              2
                                         BODin                        0.380               4
                                         TSSin                        0.263               5
                                         pH inlet                     −0.559              3
                     Methane yield       Qin                          0.466               3
                                         OLR                          −0.332              4
                                         CODin                        0.592               1
                                         BODin                        0.135               6
                                         TSSin                        −0.209              5
                                         pH inlet                     −0.575              2
Aerobic process      COD removal         OLR                          1.892               5
                                         CODin                        −1.146              6
                                         BODin                        2.454               4
                                         TSSin                        3.256               3
                                         MLSS (mg/L)                  −4.998              1
                                         DO (mg/L)                    4.020               2
                                         F/M (kg COD/kg MLVSS·day)    0.497               7
                     BOD removal         OLR                          3.197               3
                                         CODin                        −2.482              6
                                         BODin                        3.877               2
                                         TSSin                        2.656               4
                                         MLSS (mg/L)                  −2.630              5
                                         DO (mg/L)                    4.293               1
                                         F/M (kg COD/kg MLVSS·day)    −0.364              7
                     TSS removal         OLR                          −0.712              5
                                         CODin                        −1.215              3
                                         BODin                        0.197               7
                                         TSSin                        −1.193              4
                                         MLSS (mg/L)                  2.400               1
                                         DO (mg/L)                    2.198               2
                                         F/M (kg COD/kg MLVSS·day)    −0.558              6
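A minimal MATLAB sketch of the connection weight approach of Olden et al. [57], assuming a trained single-hidden-layer fitnet object net as produced by the script in Appendix A.2 (variable names are illustrative):

W_ih = net.IW{1,1};   % hidden-by-input weight matrix
W_ho = net.LW{2,1};   % output-by-hidden weight matrix
cw = W_ho * W_ih;     % output-by-input matrix of summed weight products
% cw(k,i) is the connection weight of input i for output k: the sign
% gives the direction of influence and the magnitude gives the rank.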
Table 3. Relative importance of input variables for the anaerobic and aerobic processes using the Garson method.
Reaction              Output              Input      Importance (%)
Anaerobic digestion   COD removal         Qin        18.0
                                          OLR        18.0
                                          CODin      12.8
                                          BODin      12.5
                                          TSSin      20.3
                                          pH inlet   18.4
                                          Total      100.0
                      Purity of methane   Qin        15.8
                                          OLR        19.8
                                          CODin      13.1
                                          BODin      12.6
                                          TSSin      16.4
                                          pH inlet   22.3
                                          Total      100.0
                      Methane yield       Qin        17.6
                                          OLR        17.2
                                          CODin      13.3
                                          BODin      12.6
                                          TSSin      20.3
                                          pH inlet   19.0
                                          Total      100.0
Aerobic process       COD removal         OLR        13.2
                                          CODin      12.0
                                          BODin      13.8
                                          TSSin      9.0
                                          MLSS       22.0
                                          DO         20.6
                                          F/M        9.4
                                          Total      100.0
                      BOD removal         OLR        14.0
                                          CODin      12.3
                                          BODin      13.6
                                          TSSin      7.9
                                          MLSS       24.0
                                          DO         19.1
                                          F/M        9.1
                                          Total      100.0
                      TSS removal         OLR        15.1
                                          CODin      13.1
                                          BODin      12.4
                                          TSSin      8.4
                                          MLSS       24.4
                                          DO         15.7
                                          F/M        10.9
                                          Total      100.0
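For comparison, a sketch of Garson's algorithm [46] in one common formulation, again assuming a trained network net from Appendix A.2:

W = abs(net.IW{1,1});                 % |input-to-hidden| weights (H x I)
V = abs(net.LW{2,1});                 % |hidden-to-output| weights (K x H)
imp = zeros(size(V,1), size(W,2));
for k = 1:size(V,1)
    Q = (W ./ sum(W,2)) .* V(k,:)';   % input share per hidden neuron, scaled by output weight
    s = sum(Q,1);                     % total contribution of each input
    imp(k,:) = 100*s/sum(s);          % relative importance (%), rows sum to 100
end

Unlike the signed connection weights in Table 2, Garson's measure uses absolute values and so reports magnitude only, which is why Tables 2 and 3 can rank the inputs differently.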
Table 4. Optimal operating condition for maximum COD removal (%) from anaerobic digestion.
Parameter            Optimal Value
Qin (m3/h)           153.81
OLR (g COD/L day)    7.00
CODin (mg/L)         97,200.06
BODin (mg/L)         49,090.82
TSSin (mg/L)         37,399.8
pH inlet             4.54
Table 5. Optimal operating condition for maximum methane yield from anaerobic digestion.
Parameter            Optimal Value
Qin (m3/h)           154.27
OLR (g COD/L day)    7.00
CODin (mg/L)         50,500.08
BODin (mg/L)         25,250.07
TSSin (mg/L)         29,200.28
pH inlet             4.00
Table 6. Optimum operating condition for maximum COD removal (%) from aerobic process.
Parameter                    Optimal Value
OLR (g COD/L day)            5.8
CODin (mg/L)                 30,351.00
BODin (mg/L)                 19,103.93
TSSin (mg/L)                 17,971.41
MLSS (mg/L)                  37,007.56
DO (mg/L)                    7.27
F/M (kg COD/kg MLVSS·day)    0.2776
Table 7. Recommended operating condition for anaerobic digestion and aerobic process [58].
Operating Condition   Anaerobic     Aerobic
OLR (g COD/L day)     0–20          0–9.5
HRT (day)             4.59–27.7     4.1–22.7
MLSS (mg/L)           9000–49,600   9000–40,500
DO (mg/L)             –             ≥2
pH                    6.5–7.4       7.5–8.5