Article

Pyrolysis Study of Mixed Polymers for Non-Isothermal TGA: Artificial Neural Networks Application

Ibrahim Dubdub
Department of Chemical Engineering, King Faisal University, Al-Hassa 31982, Saudi Arabia
Polymers 2022, 14(13), 2638; https://doi.org/10.3390/polym14132638
Submission received: 11 June 2022 / Revised: 22 June 2022 / Accepted: 27 June 2022 / Published: 28 June 2022
(This article belongs to the Special Issue Advanced Polymer Simulation and Processing)

Abstract

Pure polymers of polystyrene (PS), low-density polyethylene (LDPE), and polypropylene (PP) are the main representatives of plastic wastes. Thermal cracking of mixed polymers consisting of PS, LDPE, and PP was carried out by thermogravimetric analysis (TGA) over a heating rate range of 5–40 K/min for two sets of samples: a 1:1 mixture of PS and PP, and a 1:1:1 mixture of PS, LDPE, and PP. The TGA data were used to apply one of the machine learning methods, the artificial neural network (ANN). A feed-forward ANN with the Levenberg–Marquardt (LM) learning algorithm in the backpropagation scheme was applied to both sets in order to predict the weight fraction of the mixed polymers. Temperature and heating rate are the two input variables of the current ANN model. For both sets, two hidden layers of 10 neurons each, with logsig and tansig transfer functions, gave the best architecture, with R values approaching 1 (R > 0.99999). The results showed close agreement between the actual and predicted values, and the model also predicts very efficiently when simulated with new data.

1. Introduction

Recently, many researchers have turned to machine learning methods such as artificial neural networks (ANNs) for forecasting different kinds of data, since ANNs have proven effective at capturing non-linear relationships. ANN is therefore considered an alternative approach for handling TGA data.
The literature surveyed below is limited to papers applying ANN to TGA data [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18].
Conesa et al. [1] were the first to explore ANN in thermal analysis, proposing a way to treat the pyrolysis kinetics of different samples for non-isothermal runs. Bezerra et al. [2] applied an ANN model to the thermal cracking of a carbon fiber/phenolic resin composite laminate. Yıldız et al. [3] examined the oxidation of blends with different mixing ratios using ANN. Çepelioğullar et al. [4] developed an ANN to predict the pyrolysis of refuse-derived fuel. Ahmad et al. [5] established an ANN for the pyrolysis of Typha latifolia, collecting 1021 data points for a feed-forward Levenberg–Marquardt backpropagation algorithm. Çepelioğullar et al. [6] built ANN models for lignocellulosic forest residue (LFR) and olive oil residue (OOR) in two different configurations: (i) two separate networks, one for each sample, and (ii) one network for both samples. Later, Chen et al. [7] studied the co-combustion characteristics of sewage sludge and coffee grounds (CG) mixtures. Naqvi et al. [8] proposed an ANN to describe the thermal cracking of one type of sludge and reported strong agreement between the predicted and experimental values; in the present paper, a similarly powerful ANN model (R ≈ 1.0) predicts the pyrolytic behavior of mixed polymers. Ahmad et al. [9] validated the pyrolysis of Staghorn Sumac with an ANN model.
Bi et al. [10] investigated the co-combustion and co-pyrolysis of sewage sludge and peanut shell with an ANN model. Bong et al. [11] applied an ANN model to the catalytic pyrolysis of pure microalgae, peanut shell wastes, and their binary mixtures, with microalgae ash as the catalyst. In addition, Bi et al. [12] repeated the study for the co-pyrolysis of coal gangue and peanut shell. In both papers, they found consistency between the experimental results and the ANN model results. Liew et al. [13] predicted the co-pyrolysis of corn cob and high-density polyethylene (HDPE) mixtures, with chicken and duck egg shells as catalysts. Zaker et al. [14] investigated the effects of two catalysts (HZSM5 and sludge-derived activated char) on the pyrolysis of sewage sludge. Dubdub and Al-Yaari [15,16] and Al-Yaari and Dubdub [17,18] used ANN to predict the pyrolysis performance of different polymer samples. They used a feed-forward network trained with the LM backpropagation algorithm and two hidden layers. In the first paper, they applied two input variables, temperature and heating rate, and one output variable, weight left %, while in the second paper, the catalyst/polymer weight ratio was added as a third input.
Almost all of the above-mentioned studies report good agreement between the collected experimental data and the ANN predictions. The architecture details of the papers above that are similar to this work (non-isothermal TGA data) are summarized in Table 1. Most of these papers used temperature and heating rate as the input variables, with weight left % as the only output. The table confirms that applying ANN to predict TGA data is feasible and a promising research direction. The novelty of this work lies in applying ANN to two new polymer mixtures (PS, LDPE, and PP) and in using the final best architecture efficiently in the simulation of new input data.

2. Materials and Methods

2.1. Thermal Decomposition

Pyrolysis experiments were conducted under nitrogen with different compositions of three polymers: PP, PS, and LDPE. Table 2 shows the six tests of the two sets: tests 1–3, a 1:1 binary mixture of PS and PP, and tests 4–6, a 1:1:1 mixture of PS, LDPE, and PP. A 10 mg powder sample was used throughout the study. The proximate and ultimate analyses performed to characterize the polymer samples can be found in reference [16]. Thermal decomposition experiments were conducted under N2 (99.999%) flowing at 100 cm3/min using a thermogravimetric analyzer (TGA-7, PerkinElmer, Shelton, CT, USA) [16].

2.2. Structure of ANNs

The common procedure for modelling engineering units is to develop a model based on the basic principles of physics and chemistry and then estimate the values of the model parameters from experimental data using numerical techniques. However, formulating the model and finding the parameter values are, in most cases, the most difficult tasks, especially when the final model is very complicated, with non-linear relations among the variables. In these cases, an ANN may become the alternative option. One of the strengths of ANN is its ability to model non-linear functions and complex processes by mapping these relations with approximating functions. Moreover, ANNs can deal with noisy data.
An ANN architecture is arranged in three consecutive layers: input, hidden layer(s), and output. Each layer possesses a number of neurons, weights, biases, and outputs [19]. Initially, one must identify all the variables that affect the main process. The data collection, normally completed before the ANN steps, becomes the mirror of the problem domain. The best ANN architecture is subject to learning quality and generalization ability, which rely on whether the collected data fall within the variation range of the variables and are sufficiently large in size [8].
The type of task to be handled by the ANN is crucial in finding the best architecture. For better ANN performance, parameters such as the number of neurons in the hidden layer(s), the number of hidden layers, the momentum, and the learning rates should be optimized.
The performance of an ANN model in predicting the output can be checked and assessed using five statistical measures [3,5,7,10,20,21]:
Average correlation factor:
$$R^2 = 1 - \frac{\sum \left( (W\,\%)_{est} - (W\,\%)_{exp} \right)^2}{\sum \left( (W\,\%)_{est} - \overline{(W\,\%)_{exp}} \right)^2}$$
Root mean square error:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum \left( (W\,\%)_{est} - (W\,\%)_{exp} \right)^2}$$
Mean absolute error:
$$\mathrm{MAE} = \frac{1}{N} \sum \left| (W\,\%)_{est} - (W\,\%)_{exp} \right|$$
Mean bias error:
$$\mathrm{MBE} = \frac{1}{N} \sum \left( (W\,\%)_{est} - (W\,\%)_{exp} \right)$$
Correlation coefficient:
$$R = \frac{\sum_{m=1}^{n} \left( (W\,\%)_{exp,m} - \overline{(W\,\%)_{exp}} \right)\left( (W\,\%)_{est,m} - \overline{(W\,\%)_{est}} \right)}{\sqrt{\sum_{m=1}^{n} \left( (W\,\%)_{exp,m} - \overline{(W\,\%)_{exp}} \right)^2 \sum_{m=1}^{n} \left( (W\,\%)_{est,m} - \overline{(W\,\%)_{est}} \right)^2}}$$
where
  • (W %)est is the value of the weight left % estimated by the ANN model;
  • (W %)exp is the experimental value of the weight left %; and
  • the overbar denotes the average value of the weight left %.
In order to obtain the best ANN model, the target is the lowest errors (RMSE, MAE, and MBE) together with the highest correlations (R² and R) [10]. In this investigation, the weight left % of mixed polymers has been predicted by an ANN model. There are advantages and disadvantages to using ANN. Among the advantages, ANNs work easily with linear and non-linear relationships and learn these relationships directly from the data, while a disadvantage is that the fitting requires large memory and computational effort [22].
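For illustration, the following MATLAB sketch (a hypothetical helper, not part of the original work) computes the five measures defined above from paired experimental and ANN-estimated weight-left values; the function name tga_ann_metrics and the variable names are assumptions.

```matlab
% Minimal sketch: compute R2, RMSE, MAE, MBE, and R for a vector of
% experimental weight-left values (Wexp) and ANN estimates (West).
function S = tga_ann_metrics(Wexp, West)
    Wexp = Wexp(:);  West = West(:);
    N = numel(Wexp);
    e = West - Wexp;                                          % prediction error
    S.R2   = 1 - sum(e.^2) / sum((West - mean(Wexp)).^2);     % average correlation factor
    S.RMSE = sqrt(sum(e.^2) / N);                             % root mean square error
    S.MAE  = sum(abs(e)) / N;                                 % mean absolute error
    S.MBE  = sum(e) / N;                                      % mean bias error
    S.R    = sum((Wexp - mean(Wexp)).*(West - mean(West))) / ...
             sqrt(sum((Wexp - mean(Wexp)).^2) * sum((West - mean(West)).^2));  % correlation coefficient
end
```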

3. Results and Discussion

3.1. TGA of Mixed Polymers

TGA provides the thermogravimetric (TG) and derivative thermogravimetric (DTG) curves at different heating rates for the pyrolysis of the two sets at different polymer compositions; these are shown in Figure 1 and Figure 2, respectively [16].

3.2. Pyrolysis Prediction by ANN Model

A feed-forward back-propagation neural network (FFBPNN) was established with the "nntool" function in MATLAB® R2020a, based on 358 and 752 data points for the two sets, respectively. This type of ANN model is widely used because it is efficient and simple [3]. Usually, for the TGA instrument, the raw signal (weight left %) is the output of the ANN model, and the independent variables (temperature and heating rate in non-isothermal TGA data) are the inputs.
The collected data are divided into three subsets: the training set is used to establish network learning and adjust the weights by minimizing the error function; the validation set checks the performance of the network during training; and the test set assesses the generalization of the network [23].
The whole dataset, comprising 358 and 752 points for the two sets, is shown in Table 3 and was randomly divided as follows: 70% for training, 15% for validation, and 15% for testing. Osman and Aggour [24] noted that collecting large datasets helps the model achieve high accuracy.
Table 4 lists the parameters of the ANN "nntool" model, and Table 5 compares the performance of different ANN structures with different numbers of hidden layers and different numbers of neurons and transfer functions in each hidden layer. Usually, the best architecture is found by a trial-and-error process [8]. The value of R is used as the criterion for judging the most efficient network architecture for predicting the weight loss %. Values of the four statistical measures are tabulated only for the final best-selected architecture.
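The trial-and-error search summarized in Table 5 can be sketched in MATLAB as follows; this is an assumed reconstruction of the workflow (the paper used the "nntool" interface), and the function name select_tga_ann, the candidate lists, and the inputs X (temperature and heating rate) and T (weight left %) are illustrative assumptions.

```matlab
% Sketch of a trial-and-error architecture search over the Table 5 candidates.
% X: 2-by-N matrix [temperature; heating rate], T: 1-by-N vector of weight left %.
function [bestNet, bestR] = select_tga_ann(X, T)
    candidates = {5, 10, 15, [10 10]};                     % hidden-layer sizes to try
    tfuncs     = {{'tansig'}, {'logsig'}, {'logsig','tansig'}};
    bestR = -Inf;
    for c = 1:numel(candidates)
        for f = 1:numel(tfuncs)
            tf = tfuncs{f};
            if numel(tf) ~= numel(candidates{c}), continue; end   % match network depth
            net = fitnet(candidates{c}, 'trainlm');               % LM learning algorithm
            for k = 1:numel(tf), net.layers{k}.transferFcn = tf{k}; end
            net.divideParam.trainRatio = 0.70;                    % 70-15-15 random split
            net.divideParam.valRatio   = 0.15;
            net.divideParam.testRatio  = 0.15;
            net.trainParam.showWindow  = false;
            net = train(net, X, T);
            R = regression(T, net(X));                            % overall correlation R
            if R > bestR, bestR = R; bestNet = net; end
        end
    end
end
```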
The final and best ANN architectures are AN7-A and AN7-B, shown in Figure 3 for the two sets; these networks are used in the subsequent simulation step. The architecture consists of 10 neurons in each of the two hidden layers, with logsig and tansig transfer functions, and a linear transfer function for the output layer. Hidden layers with non-linear functions are used to handle complex functions [2]. A linear function is usually not recommended in the hidden layers, in order to avoid a merely linearly separable prediction, while tansig is preferable since it has a larger output range [11]. Most of the researchers listed in Table 1 employed more than one hidden layer [11]. The number of neurons in the hidden layers is a crucial parameter for the efficiency and accuracy of the ANN output. To avoid underfitting and overfitting (too many neurons), the number of neurons should be selected so that the performance function eventually reaches its optimum value [6,23,25]. There are different supervised learning algorithms, such as Levenberg–Marquardt (LM), Bayesian Regularization, and Scaled Conjugate Gradient, but LM is used here because of its superior performance and suitability for this number of data points [8,10,26]. The LM optimization algorithm updates the weights and biases to bring the calculated output close to the target [5,10].
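A minimal MATLAB sketch of the final 2-10-10-1 configuration, assuming the Table 4 settings, is given below; build_final_tga_ann is a hypothetical name, and the script form is itself an assumption, since the original model was set up through the "nntool" GUI.

```matlab
% Sketch of the final 2-10-10-1 network configured with the Table 4 settings.
% X: 2-by-N matrix [temperature; heating rate], T: 1-by-N weight left %.
function [net, tr] = build_final_tga_ann(X, T)
    net = feedforwardnet([10 10], 'trainlm');   % two hidden layers, LM algorithm
    net.layers{1}.transferFcn = 'logsig';       % 1st hidden layer
    net.layers{2}.transferFcn = 'tansig';       % 2nd hidden layer
    net.layers{3}.transferFcn = 'purelin';      % linear output layer
    net.divideFcn              = 'dividerand';  % random data division
    net.divideParam.trainRatio = 0.70;          % 70%-15%-15% split (Table 4)
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;
    net.performFcn             = 'mse';         % performance function
    net.trainParam.max_fail    = 6;             % validation checks
    [net, tr] = train(net, X, T);               % LM backpropagation training
end
```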
Figure 4 shows that all the results fall close to the diagonal, which confirms strong agreement and good correlation between the ANN predictions and the experimental values, with minimum mean square error (MSE) values of 2.1275 × 10−7 and 4.58 × 10−8 for the two sets, respectively (Figure 5). These MSE values are very small (2.1275 × 10−7 or lower), which shows that the prediction of the system is very reliable [8]. Naqvi et al. [8] also pointed out that, for a good ANN prediction, output values should be close to the target values, and that an ANN model is a good fit for TGA data.
The performance of the current AN7-A and AN7-B models in predicting the weight left % was measured by calculating the four statistical measures listed in Table 6. Note that the values of RMSE, MAE, and MBE are notably low. Consequently, this model can powerfully predict the output within an acceptable limit of error.
After checking the ANN for the two sets, the final architecture was simulated with new input data. Table 7 presents the simulation stage with nine datasets each for AN7-A and AN7-B containing only new input data; the final network produces the simulated output according to the final AN7-A and AN7-B architectures. Figure 6 compares the simulated network output with the actual output and indicates very high performance of the selected networks. In addition, Table 8 lists all statistical parameters for each set, AN7-A and AN7-B. As presented, the value of R is very high (>0.9900), and RMSE, MAE, and MBE have reasonably low values. Finally, Figure 7 shows the error histograms for the two sets, which are approximately normally distributed around zero error [11]. The error lies in a very small range (−0.00085 to 0.002678) for the first set and (−0.00123 to 0.000489) for the second set, which indicates very good performance of the proposed ANN model.
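The simulation step can be illustrated with the following MATLAB sketch, reusing the hypothetical network and metric helper from the earlier sketches and the AN7-A values listed in Table 7 (copied here only as the reference for comparison).

```matlab
% Hedged usage sketch: query the trained AN7-A network with the nine new
% input pairs of Table 7 and compare against the tabulated weight fractions.
% net is assumed to be the network returned by build_final_tga_ann(X, T).
Tnew  = [690 668 634 716 698 672 718 700 658];    % temperature, K (Table 7)
HRnew = [  5   5   5  20  20  20  40  40  40];    % heating rate, K/min (Table 7)
Wtab  = [0.11471 0.41012 0.70892 0.21154 0.51639 ...
         0.80757 0.32648 0.62535 0.90289];        % weight fractions (Table 7)
Wsim  = net([Tnew; HRnew]);                       % simulated network output
S     = tga_ann_metrics(Wtab, Wsim);              % R2, RMSE, MAE, MBE, R
histogram(Wsim - Wtab);                           % error histogram (cf. Figure 7)
```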

4. Conclusions

Thermal cracking of mixed polymers consisting of PS, LDPE, and PP was carried out using TGA over a heating rate range of 5–40 K/min for two sets of samples: a 1:1 mixture of PS and PP, and a 1:1:1 mixture of PS, LDPE, and PP. The TGA data were used to build ANN models for the two sets in order to predict the weight left %.
An efficient ANN model has been created to predict the thermal decomposition of these two sets separately. The best architecture, 2-10-10-1 with logsig-tansig-purelin transfer functions, was adopted as the most efficient model; it could predict the output very precisely, with a high regression coefficient. The best model was then simulated with untrained input data, and its calculated output showed close agreement with the actual values (R > 0.9999).

Funding

This research and the APC were funded by the Deanship of Scientific Research at King Faisal University (Saudi Arabia).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author gratefully thanks the Deanship of Scientific Research at King Faisal University (Saudi Arabia) for supporting this research as part of the Research Grants Program (Old No.: NA000169, New No.: GRANT963).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Conesa, J.A.; Caballero, J.A.; Labarta, J.A. Artificial neural network for modelling thermal decompositions. J. Anal. Appl. Pyrolysis 2004, 71, 343–352.
  2. Bezerra, E.; Bento, M.; Rocco, J.; Iha, K.; Lourenço, V.; Pardini, L. Artificial neural network (ANN) prediction of kinetic parameters of (CRFC) composites. Comput. Mater. Sci. 2008, 44, 656–663.
  3. Yıldız, Z.; Uzun, H.; Ceylan, S.; Topcu, Y. Application of artificial neural networks to co-combustion of hazelnut husk–lignite coal blends. Bioresour. Technol. 2016, 200, 42–47.
  4. Çepelioğullar, Ö.; Mutlu, I.; Yaman, S.; Haykiri-Acma, H. A study to predict pyrolytic behaviors of refuse-derived fuel (RDF): Artificial neural network application. J. Anal. Appl. Pyrolysis 2016, 122, 84–94.
  5. Ahmad, M.S.; Mehmood, M.A.; Taqvi, S.T.H.; Elkamel, A.; Liu, C.-G.; Xu, J.; Rahimuddin, S.A.; Gull, M. Pyrolysis, kinetics analysis, thermodynamics parameters and reaction mechanism of Typha latifolia to evaluate its bioenergy potential. Bioresour. Technol. 2017, 245, 491–501.
  6. Çepelioğullar, Ö.; Mutlu, I.; Yaman, S.; Haykiri-Acma, H. Activation energy prediction of biomass wastes based on different neural network topologies. Fuel 2018, 220, 535–545.
  7. Chen, J.; Xie, C.; Liu, J.; He, Y.; Xie, W.; Zhang, X.; Chang, K.; Kuo, J.; Sun, J.; Zheng, L.; et al. Co-combustion of sewage sludge and coffee grounds under increased O2/CO2 atmospheres: Thermodynamic characteristics, kinetics and artificial neural network modeling. Bioresour. Technol. 2018, 250, 230–238.
  8. Naqvi, S.R.; Tariq, R.; Hameed, Z.; Ali, I.; Taqvi, S.A.; Naqvi, M.; Niazi, M.B.; Noor, T.; Farooq, W. Pyrolysis of high-ash sewage sludge: Thermo-kinetic study using TGA and artificial neural networks. Fuel 2018, 233, 529–538.
  9. Ahmad, M.S.; Liu, H.; Alhumade, H.; Tahir, M.H.; Çakman, G.; Yıldız, A.; Ceylan, S.; Elkamel, A.; Shen, B. A modified DAEM: To study the bioenergy potential of invasive Staghorn Sumac through pyrolysis, ANN, TGA, kinetic modeling, FTIR and GC–MS analysis. Energy Convers. Manag. 2020, 221, 113173.
  10. Bi, H.; Wang, C.; Lin, Q.; Jiang, X.; Jiang, C.; Bao, L. Combustion behavior, kinetics, gas emission characteristics and artificial neural network modeling of coal gangue and biomass via TG-FTIR. Energy 2020, 213, 118790.
  11. Bong, J.T.; Loy, A.C.M.; Chin, B.L.F.; Lam, M.K.; Tang, D.K.H.; Lim, H.Y.; Chai, Y.H.; Yusup, S. Artificial neural network approach for co-pyrolysis of Chlorella vulgaris and peanut shell binary mixtures using microalgae ash catalyst. Energy 2020, 207, 118289.
  12. Bi, H.; Wang, C.; Lin, Q.; Jiang, X.; Jiang, C.; Bao, L. Pyrolysis characteristics, artificial neural network modeling and environmental impact of coal gangue and biomass by TG-FTIR. Sci. Total Environ. 2020, 751, 142293.
  13. Liew, J.X.; Loy, A.C.M.; Chin, B.L.F.; AlNouss, A.; Shahbaz, M.; Al-Ansari, T.; Govindan, R.; Chai, Y.H. Synergistic effects of catalytic co-pyrolysis of corn cob and HDPE waste mixtures using weight average global process model. Renew. Energy 2021, 170, 948–963.
  14. Zaker, A.; Chen, Z.; Zaheer-Uddin, M. Catalytic pyrolysis of sewage sludge with HZSM5 and sludge-derived activated char: A comparative study using TGA-MS and artificial neural networks. J. Environ. Chem. Eng. 2021, 9, 105891.
  15. Dubdub, I.; Al-Yaari, M. Pyrolysis of Low Density Polyethylene: Kinetic Study Using TGA Data and ANN Prediction. Polymers 2020, 12, 891.
  16. Dubdub, I.; Al-Yaari, M. Thermal Behavior of Mixed Plastics at Different Heating Rates: I. Pyrolysis Kinetics. Polymers 2021, 13, 3413.
  17. Al-Yaari, M.; Dubdub, I. Application of Artificial Neural Networks to Predict the Catalytic Pyrolysis of HDPE Using Non-Isothermal TGA Data. Polymers 2020, 12, 1813.
  18. Al-Yaari, M.; Dubdub, I. Pyrolytic Behavior of Polyvinyl Chloride: Kinetics, Mechanisms, Thermodynamics, and Artificial Neural Network Application. Polymers 2021, 13, 4359.
  19. Quantrille, T.E.; Liu, Y.A. Artificial Intelligence in Chemical Engineering; Elsevier Science: Amsterdam, The Netherlands, 1992.
  20. Halali, M.A.; Azari, V.; Arabloo, M.; Mohammadi, A.H.; Bahadori, A. Application of a radial basis function neural network to estimate pressure gradient in water–oil pipelines. J. Taiwan Inst. Chem. Eng. 2016, 58, 189–202.
  21. Govindan, B.; Jakka, S.C.B.; Radhakrishnan, T.K.; Tiwari, A.K.; Sudhakar, T.M.; Shanmugavelu, P.; Kalburgi, A.K.; Sanyal, A.; Sarkar, S. Investigation on Kinetic Parameters of Combustion and Oxy-Combustion of Calcined Pet Coke Employing Thermogravimetric Analysis Coupled to Artificial Neural Network Modeling. Energy Fuels 2018, 32, 3995–4007.
  22. Bar, N.; Bandyopadhyay, T.K.; Biswas, M.N.; Das, S.K. Prediction of pressure drop using artificial neural network for non-Newtonian liquid flow through piping components. J. Pet. Sci. Eng. 2010, 71, 187–194.
  23. Al-Wahaibi, T.; Mjalli, F.S. Prediction of Horizontal Oil-Water Flow Pressure Gradient Using Artificial Intelligence Techniques. Chem. Eng. Commun. 2013, 201, 209–224.
  24. Osman, E.-S.A.; Aggour, M.A. Artificial Neural Network Model for Accurate Prediction of Pressure Drop in Horizontal and Near-Horizontal-Multiphase Flow. Pet. Sci. Technol. 2002, 20, 1–15.
  25. Qinghua, W.; Honglan, Z.; Wei, L.; Junzheng, Y.; Xiaohong, W.; Yan, W. Experimental Study of Horizontal Gas-liquid Two-phase Flow in Two Medium-diameter Pipes and Prediction of Pressure Drop through BP Neural Networks. Int. J. Fluid Mach. Syst. 2018, 11, 255–264.
  26. Beale, M.H.; Hagan, M.T.; Demuth, H.B. Neural Network Toolbox User's Guide; MathWorks: Natick, MA, USA, 2018.
Figure 1. TG curves of binary mixtures of PP and PS with DTG curves inside.
Figure 2. TG curves of ternary mixtures of PP, PS, and LDPE with DTG curves inside.
Figure 3. Topology of the selected AN7-A and AN7-B network.
Figure 4. Regression of training, validation, and test plots for the selected (i) AN7-A, (ii) AN7-B.
Figure 5. Mean square error for training, validation, and test plots for the selected (i) AN7-A, (ii) AN7-B.
Figure 6. Regression of simulated data for (i) AN7-A, (ii) AN7-B.
Figure 7. Error histogram of simulated data for (i) AN7-A, (ii) AN7-B.
Table 1. Literature summary of ANN applications for non-isothermal TGA data.
Author | Input Variables | Output Variable | Architecture Model | No. of Hidden Layers | Transfer Function for Hidden Layers | Data Points
Bezerra et al. [2] | temperature, heating rate | mass retained | 2-21-21-1 | 2 | – | 1941
Yıldız et al. [3] | temperature, heating rate, blend ratio | mass loss % | 3-5-15-1 | 2 | tansig-tansig | –
Ahmad et al. [5] | temperature, heating rate | weight loss | – | 2 | – | 1021
Çepelioğullar et al. [6] (individual) | temperature, heating rate | weight loss | 2-20-20-1 (LFR), 2-19-16-1 (OOR) | 2 | tansig-logsig | 4000
Çepelioğullar et al. [6] (combined) | temperature, heating rate | weight loss | 2-7-6-1 | 2 | – | 8000
Chen et al. [7] | temperature, heating rate, mixing ratio | mass loss % | 3-3-19-1 | 2 | tansig-tansig | –
Naqvi et al. [8] | temperature, heating rate | weight loss | 2-5-1 | 1 | tansig | 1400
Ahmad et al. [9] | temperature, heating rate | weight loss | 2-10-1 | 1 | – | 1155
Bi et al. [10] (combustion; pyrolysis) | temperature, mixing ratio | residual mass | 2-3-18-1; 2-3-15-1 | 2 | tansig-tansig | –
Bong et al. [11] | temperature, heating rate | weight loss % | 2-(9-12)-(9-12)-1 | 2 | tansig-tansig and logsig-tansig | –
Bi et al. [12] | temperature, heating rate, mixing ratio | remaining mass % | 3-5-10-1 | 2 | tansig-tansig | 5000
Zaker et al. [14] | temperature, heating rate | weight loss % | 2-7-1 | 1 | tansig | –
Al-Yaari and Dubdub [17] | temperature, heating rate, mass ratio | mass left % | 3-10-10-1 | 2 | tansig-logsig | 900
Table 2. List of six runs of different PS, LDPE, and PP polymer compositions.
Set No. | Test No. | Heating Rate (K/min) | PP (wt %) | PS (wt %) | LDPE (wt %) | Comment
1 | 1 | 5 | 50 | 50 | 0 | mixture of PS and PP
1 | 2 | 20 | 50 | 50 | 0 |
1 | 3 | 40 | 50 | 50 | 0 |
2 | 4 | 5 | 33.3 | 33.3 | 33.3 | mixture of PS, LDPE, and PP
2 | 5 | 20 | 33.3 | 33.3 | 33.3 |
2 | 6 | 40 | 33.3 | 33.3 | 33.3 |
Table 3. Data set numbers of the six tests.
Set No. | Test No. | Heating Rate (K/min) | Data Set Number | Total
1 | 1 | 5 | 126 | 358
1 | 2 | 20 | 101 |
1 | 3 | 40 | 131 |
2 | 4 | 5 | 251 | 752
2 | 5 | 20 | 251 |
2 | 6 | 40 | 250 |
Table 4. Main parameters of the ANN "nntool" model.
Number of inputs | 2 (Temperature (K), Heating rate (K/min))
Number of outputs | 1 (Mass left %)
Number of hidden layers | 1–2
Transfer functions of hidden layers | logsig-tansig
Number of neurons in hidden layers | 10-10
Transfer function of output layer | purelin
Data division function | dividerand
Learning algorithm | Levenberg-Marquardt (TRAINLM)
Data division (Training-Validation-Testing) | 70%-15%-15%
Data number (Training-Validation-Testing) | 250-54-54 = 358; 526-113-113 = 752
Data number (Simulation) | 9; 9
Performance function | MSE
Validation checks | 6
Table 5. Comparison between different ANN structures for the two sets: (i) mixtures of PS and PP, (ii) mixtures of PS, LDPE, and PP.
Model | Network Topology (No. of Neurons): 2 Inputs-Hidden Layer(s) (1 or 2)-1 Output | 1st Hidden Layer Transfer Function | 2nd Hidden Layer Transfer Function | R
(i)
AN1-A | 2-5-1 | tansig | – | 0.99881
AN2-A | 2-5-1 | logsig | – | 0.99972
AN3-A | 2-10-1 | tansig | – | 0.99995
AN4-A | 2-10-1 | logsig | – | 0.99997
AN5-A | 2-15-1 | tansig | – | 0.99997
AN6-A | 2-15-1 | logsig | – | 0.99999
AN7-A | 2-10-10-1 | logsig | tansig | 1.00000
(ii)
AN1-B | 2-5-1 | tansig | – | 0.99976
AN2-B | 2-5-1 | logsig | – | 0.99997
AN3-B | 2-10-1 | tansig | – | 0.99999
AN4-B | 2-10-1 | logsig | – | 0.99999
AN5-B | 2-15-1 | tansig | – | 0.99999
AN6-B | 2-15-1 | logsig | – | 0.99999
AN7-B | 2-10-10-1 | logsig | tansig | 1.00000
Table 6. Statistical parameters of the AN7-A and AN7-B models.
Set | AN7-A R² | AN7-A RMSE | AN7-A MAE | AN7-A MBE | AN7-B R² | AN7-B RMSE | AN7-B MAE | AN7-B MBE
Training | 1.0 | 0.00055 | 0.00030 | −0.00001 | 1.0 | 0.00044 | 0.00016 | 1.49 × 10−6
Validation | 1.0 | 0.00046 | 0.00029 | −0.00001 | 1.0 | 0.00021 | 0.00012 | −1.74 × 10−6
Test | 1.0 | 0.00058 | 0.00032 | 0.000018 | 1.0 | 0.00024 | 0.00014 | 0.000034
All | 1.0 | 0.00054 | 0.00030 | −0.000012 | 1.0 | 0.000389 | 0.000154 | 6.018 × 10−6
Table 7. Simulation input and output data: mixtures of PS and PP (AN7-A) and mixtures of PS, LDPE, and PP (AN7-B).
No. | AN7-A Heating Rate (K/min) | AN7-A Temperature (K) | AN7-A Weight Fraction | AN7-B Heating Rate (K/min) | AN7-B Temperature (K) | AN7-B Weight Fraction
1 | 5 | 690 | 0.11471 | 5 | 731 | 0.10335
2 | 5 | 668 | 0.41012 | 5 | 697 | 0.40892
3 | 5 | 634 | 0.70892 | 5 | 669 | 0.70090
4 | 20 | 716 | 0.21154 | 20 | 731 | 0.20736
5 | 20 | 698 | 0.51639 | 20 | 705 | 0.51387
6 | 20 | 672 | 0.80757 | 20 | 669 | 0.80014
7 | 40 | 718 | 0.32648 | 40 | 741 | 0.30962
8 | 40 | 700 | 0.62535 | 40 | 717 | 0.60931
9 | 40 | 658 | 0.90289 | 40 | 671 | 0.90323
Table 8. Statistical parameters for the simulated data of AN7-A and AN7-B.
Model | R² | RMSE | MAE | MBE
AN7-A | 0.99999 | 0.00144 | 0.00123 | −0.00052
AN7-B | 0.99999 | 0.00062 | 0.00049 | 0.00026
