Base Oil Process Modelling Using Machine Learning
Abstract
1. Introduction
2. Machine Learning Models
3. Datasets and Methods
3.1. Base Oil Processing Plant Product Sampling and Laboratory Analysis
3.2. Training, Validation and Plant Testing and Model Deployment Data
- Training and validation datasets (for model development)
  - Data collected from 1 January 2016 to 29 June 2020, comprising the 54 input variables and 2 output variables (base oil viscosity and viscosity index). The data are split into a 70% training set and a 30% validation set.
- Plant testing and model deployment (for out-of-sample model testing)
  - To assess predictive model performance on real-time data, the models are deployed in a platform application connected directly to the plant information system. The testing period runs from 29 June 2020 until 13 January 2021. Production ran on one product grade continuously for several days and then changed to another grade, as scheduled by the production planner.
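The 70/30 split described above can be sketched with scikit-learn; here synthetic data stands in for the 54 plant input variables and the 2 laboratory-measured outputs, so the column counts (not the values) mirror the paper.

```python
# Illustrative sketch of the 70/30 training/validation split.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 54))   # 54 process and feed-quality inputs
y = rng.normal(size=(1000, 2))    # base oil viscosity and viscosity index

X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=0.70, random_state=42
)
print(X_train.shape, X_val.shape)  # (700, 54) (300, 54)
```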
3.3. Machine Learning Models Development
3.3.1. Support Vector Regression (SVR) Model Development
3.3.2. Decision Tree Regression (DTR) Model Development
3.3.3. Random Forest Regression (RFR) Model Development
3.3.4. Extreme Gradient Boosting (XGBoost) Model Development
4. Results and Discussion
4.1. Model Training and Validation Performance
4.2. Physicochemical Insights from the Machine Learning Activities
4.3. Plant Testing and Model Deployment
4.4. Product Recovery Using Prediction Model
5. Limitations and Assumptions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
No. | Category | Variable |
---|---|---|
1 | Input | Hydrotreating reactor temperature, °C |
2 | Input | Hydrotreating reactor inlet pressure, barg |
3 | Input | Hydrotreating reactor outlet pressure, barg |
4 | Input | Hydroisomerization reactor temperature, °C |
5 | Input | Hydroisomerization reactor inlet pressure, barg |
6 | Input | Hydroisomerization reactor outlet pressure, barg |
7 | Input | Hydrodearomatization reactor inlet temperature, °C |
8 | Input | Hydrodearomatization reactor inlet pressure, barg |
9 | Input | Hydrodearomatization reactor outlet pressure, barg |
10 | Input | Feed flowrate, m3/h |
11 | Input | Unconverted oil flowrate, m3/h |
12 | Input | Days on stream |
13 | Input | Product fractionation overhead pressure, barg |
14 | Input | Steam flowrate to ejector, m3/h |
15 | Input | Product fractionation overhead temperature, °C |
16 | Input | Light gas oil flowrate, m3/h |
17 | Input | Dewaxed feed viscosity index |
18 | Input | Dewaxed feed kinematic viscosity at 100 °C, cSt |
19 | Input | Dewaxed feed density, g/cm3 |
20 | Input | Feed wax content, % |
21 | Input | Dewaxed feed pour point, °C |
22 | Input | Feed nitrogen content, ppm |
23 | Input | Feed sulfur content, ppm |
24 | Input | Feed kinematic viscosity at 100 °C, cSt |
25 | Input | Feed density at 15 °C, g/cm3 |
26 | Input | Feed simulated distillation boiling point at 5% weight, °C |
27 | Input | Feed simulated distillation boiling point at 10% weight, °C |
28 | Input | Feed simulated distillation boiling point at 20% weight, °C |
29 | Input | Feed simulated distillation boiling point at 30% weight, °C |
30 | Input | Feed simulated distillation boiling point at 40% weight, °C |
31 | Input | Feed simulated distillation boiling point at 50% weight, °C |
32 | Input | Feed simulated distillation boiling point at 60% weight, °C |
33 | Input | Feed simulated distillation boiling point at 70% weight, °C |
34 | Input | Feed simulated distillation boiling point at 80% weight, °C |
35 | Input | Feed simulated distillation boiling point at 90% weight, °C |
36 | Input | Feed simulated distillation boiling point at 95% weight, °C |
37 | Input | Hydrotreater product separator temperature, °C |
38 | Input | Product fractionation bed 1 temperature, °C |
39 | Input | Product fractionation furnace temperature, °C |
40 | Input | Product fractionation overhead temperature, °C |
41 | Input | Product fractionation overhead flowrate, m3/h |
42 | Input | Heavy diesel draw temperature, °C |
43 | Input | Steam temperature, °C |
44 | Input | Light diesel draw flowrate, m3/h |
45 | Input | Light diesel pump around flowrate, m3/h |
46 | Input | Heavy diesel pump around flowrate, m3/h |
47 | Input | Heavy diesel from product fractionation flowrate, m3/h |
48 | Input | Product fractionation bed 2 temperature, °C |
49 | Input | Light gas oil draw temperature, °C |
50 | Input | Product fractionation bed 3 temperature, °C |
51 | Input | Heavy gas oil draw flowrate, m3/h |
52 | Input | Product fractionation bed 4 temperature, °C |
53 | Input | Base oil draw flowrate, m3/h |
54 | Input | Base oil product flowrate, m3/h |
55 | Output | Base oil kinematic viscosity at 100 °C, cSt |
56 | Output | Base oil viscosity index |
Rank | Hyperparameters {C, γ, Kernel} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {10.0, 0.01, RBF} | 0.994110 | 0.992482 | 0.993983 | 0.991555 | 0.992792 | 0.992984 | 0.348 |
2 | {100.0, 0.01, RBF} | 0.993922 | 0.992371 | 0.993965 | 0.991438 | 0.992586 | 0.992856 | 0.359 |
3 | {100.0, 0.001, RBF} | 0.993965 | 0.992730 | 0.993041 | 0.991773 | 0.992250 | 0.992751 | 0.658 |
4 | {1.0, 0.01, RBF} | 0.993257 | 0.990981 | 0.992991 | 0.990072 | 0.990845 | 0.991629 | 0.260 |
5 | {10.0, 0.001, RBF} | 0.991811 | 0.989462 | 0.991479 | 0.987730 | 0.989280 | 0.989952 | 0.321 |
Rank | Hyperparameters {C, γ, Kernel} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {10.0, 0.01, RBF} | 0.935744 | 0.935028 | 0.942827 | 0.929901 | 0.929151 | 0.934530 | 1.131 |
2 | {100.0, 0.01, RBF} | 0.936181 | 0.932619 | 0.940660 | 0.924958 | 0.934157 | 0.933715 | 4.000 |
3 | {1.0, 0.01, RBF} | 0.913299 | 0.917684 | 0.922385 | 0.911537 | 0.907999 | 0.914581 | 0.712 |
4 | {100.0, 0.001, RBF} | 0.909918 | 0.912505 | 0.922980 | 0.907889 | 0.909430 | 0.912545 | 1.465 |
5 | {10.0, 0.1, RBF} | 0.911189 | 0.905840 | 0.910973 | 0.896035 | 0.877023 | 0.900212 | 0.840 |
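The SVR searches summarized in the tables above (5-fold cross-validated R2 over C and γ with an RBF kernel) can be sketched with scikit-learn's GridSearchCV. The grid values match the tables; the data is synthetic, so the scores will not reproduce the reported ones.

```python
# Sketch of the SVR hyperparameter grid search with 5-fold CV.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                         # stand-in inputs
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=300)    # stand-in target

search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1.0, 10.0, 100.0], "gamma": [0.001, 0.01, 0.1]},
    cv=5,
    scoring="r2",
)
search.fit(X, y)
print(search.best_params_)
```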
Rank | Hyperparameters {Minimum Samples Leaf, Minimum Samples Split, Splitter} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {4, 2, Random} | 0.994501 | 0.995087 | 0.993551 | 0.993826 | 0.994481 | 0.994289 | 0.027 |
2 | {4, 2, Best} | 0.994856 | 0.993041 | 0.993626 | 0.992474 | 0.994793 | 0.993758 | 0.185 |
3 | {4, 3, Best} | 0.994835 | 0.993492 | 0.993640 | 0.992008 | 0.994761 | 0.993747 | 0.182 |
4 | {4, 4, Best} | 0.994826 | 0.993065 | 0.993648 | 0.992080 | 0.994852 | 0.993694 | 0.160 |
5 | {2, 3, Random} | 0.993585 | 0.994857 | 0.993089 | 0.993378 | 0.992137 | 0.993409 | 0.038 |
Rank | Hyperparameters {Minimum Samples Leaf, Minimum Samples Split, Splitter} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {4, 4, Random} | 0.879111 | 0.858396 | 0.871642 | 0.879224 | 0.886705 | 0.875016 | 0.026 |
2 | {4, 2, Random} | 0.878089 | 0.855507 | 0.880610 | 0.885057 | 0.865756 | 0.873004 | 0.036 |
3 | {4, 3, Random} | 0.866830 | 0.846901 | 0.864142 | 0.887466 | 0.885432 | 0.870154 | 0.030 |
4 | {2, 3, Random} | 0.861392 | 0.846726 | 0.880559 | 0.875025 | 0.867032 | 0.866147 | 0.044 |
5 | {2, 4, Random} | 0.857238 | 0.865266 | 0.859714 | 0.889441 | 0.856261 | 0.865584 | 0.043 |
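The decision tree grids above sweep the minimum samples per leaf, the minimum samples per split, and the splitter strategy. A minimal scikit-learn sketch (synthetic data, so scores differ from the tables):

```python
# Sketch of the DTR hyperparameter grid search with 5-fold CV.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=400)

search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={
        "min_samples_leaf": [1, 2, 4],
        "min_samples_split": [2, 3, 4],
        "splitter": ["best", "random"],
    },
    cv=5,
    scoring="r2",
)
search.fit(X, y)
```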
Rank | Hyperparameters {Bootstrap Sampling, Minimum Samples Leaf, Minimum Samples Split, Number of Estimators} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {bootstrap sampling, 2, 4, 200} | 0.997034 | 0.996949 | 0.996602 | 0.995643 | 0.995952 | 0.996436 | 26.814 |
2 | {bootstrap sampling, 2, 3, 200} | 0.997009 | 0.996960 | 0.996571 | 0.995662 | 0.995953 | 0.996431 | 28.347 |
3 | {bootstrap sampling, 1, 2, 150} | 0.997064 | 0.996942 | 0.996507 | 0.995642 | 0.995970 | 0.996425 | 19.739 |
4 | {bootstrap sampling, 2, 4, 150} | 0.997022 | 0.996962 | 0.996497 | 0.99569 | 0.995942 | 0.996423 | 20.594 |
5 | {bootstrap sampling, 1, 2, 200} | 0.997003 | 0.996929 | 0.996570 | 0.995644 | 0.995943 | 0.996418 | 29.455 |
Rank | Hyperparameters {Bootstrap Sampling, Minimum Samples Leaf, Minimum Samples Split, Number of Estimators} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {bootstrap sampling, 1, 2, 150} | 0.925393 | 0.913473 | 0.922048 | 0.936699 | 0.929408 | 0.925404 | 19.341 |
2 | {bootstrap sampling, 1, 3, 200} | 0.924188 | 0.912819 | 0.921509 | 0.936269 | 0.931697 | 0.925296 | 26.816 |
3 | {bootstrap sampling, 1, 2, 200} | 0.92423 | 0.911964 | 0.922398 | 0.935298 | 0.931607 | 0.925100 | 27.201 |
4 | {bootstrap sampling, 2, 3, 200} | 0.924849 | 0.911072 | 0.921152 | 0.936610 | 0.929965 | 0.924729 | 27.705 |
5 | {bootstrap sampling, 1, 4, 200} | 0.92416 | 0.912197 | 0.922115 | 0.935300 | 0.929705 | 0.924695 | 27.177 |
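The random forest grids above use bootstrap sampling with 150-200 trees. The sketch below uses the same scikit-learn search pattern but a smaller forest so it runs quickly; data is synthetic.

```python
# Sketch of the RFR hyperparameter grid search with 5-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

search = GridSearchCV(
    RandomForestRegressor(bootstrap=True, random_state=0),
    param_grid={
        "min_samples_leaf": [1, 2],
        "min_samples_split": [2, 3, 4],
        "n_estimators": [50],          # the paper's grid uses 150 and 200
    },
    cv=5,
    scoring="r2",
)
search.fit(X, y)
```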
Rank | Hyperparameters {Learning Rate, Maximum Depth of Tree, Number of Estimators, Regularization Term (Lambda)} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {0.1, None, 1000, 0.1} | 0.997007 | 0.997051 | 0.996499 | 0.996288 | 0.996719 | 0.996713 | 47.156 |
1 | {0.1, None, 1500, 0.1} | 0.997007 | 0.997051 | 0.996499 | 0.996288 | 0.996719 | 0.996713 | 49.561 |
1 | {0.1, None, 2000, 0.1} | 0.997007 | 0.997051 | 0.996499 | 0.996288 | 0.996719 | 0.996713 | 55.937 |
4 | {0.2, None, 1000, 0.1} | 0.997081 | 0.996983 | 0.996539 | 0.99626 | 0.996445 | 0.996661 | 33.471 |
4 | {0.2, None, 1500, 0.1} | 0.997081 | 0.996983 | 0.996539 | 0.99626 | 0.996445 | 0.996661 | 32.596 |
Rank | Hyperparameters {Learning Rate, Maximum Depth of Tree, Number of Estimators, Regularization Term (Lambda)} | Cross-Validation Split 1 Score (R2) | Cross-Validation Split 2 Score (R2) | Cross-Validation Split 3 Score (R2) | Cross-Validation Split 4 Score (R2) | Cross-Validation Split 5 Score (R2) | Mean Test Score (R2) | Time (s) |
---|---|---|---|---|---|---|---|---|
1 | {0.1, None, 2000, 0.5} | 0.931619 | 0.93024 | 0.928867 | 0.941385 | 0.939479 | 0.934318 | 84.943 |
2 | {0.1, None, 1500, 0.5} | 0.93162 | 0.930239 | 0.928869 | 0.941381 | 0.939475 | 0.934317 | 76.110 |
3 | {0.1, None, 1000, 0.5} | 0.931582 | 0.930205 | 0.928839 | 0.941363 | 0.939454 | 0.934288 | 49.925 |
4 | {0.1, None, 1500, 1} | 0.930846 | 0.929399 | 0.930767 | 0.939366 | 0.936341 | 0.933344 | 74.179 |
5 | {0.1, None, 2000, 1} | 0.930842 | 0.929395 | 0.930769 | 0.939366 | 0.936344 | 0.933343 | 89.165 |
Algorithm | RMSE, cSt | MAE, cSt | MSLE, (log cSt)2 | MAPE, % | R2 | Adjusted R2 |
---|---|---|---|---|---|---|
SVR | 0.176318 | 0.129536 | 0.000122 | 2.280873 | 0.993983 | 0.993510 |
DTR | 0.178782 | 0.102995 | 0.000083 | 0.065393 | 0.993728 | 0.999882 |
RFR | 0.134087 | 0.076117 | 0.000051 | 1.223567 | 0.996471 | 0.996193 |
XGBoost | 0.129602 | 0.076019 | 0.000050 | 1.231804 | 0.996698 | 0.996438 |
Algorithm | RMSE | MAE | MSLE | MAPE, % | R2 | Adjusted R2 |
---|---|---|---|---|---|---|
SVR | 1.466030 | 1.060731 | 0.000025 | 0.839677 | 0.944306 | 0.939928 |
DTR | 2.273253 | 1.588505 | 0.000060 | 1.260725 | 0.858878 | 0.847786 |
RFR | 1.545438 | 1.135266 | 0.000028 | 0.898492 | 0.933187 | 0.927935 |
XGBoost | 1.451323 | 1.070453 | 0.000024 | 0.847400 | 0.940982 | 0.936344 |
Algorithm | RMSE, cSt | MAE, cSt | MSLE, (log cSt)2 | MAPE, % | R2 | Adjusted R2 |
---|---|---|---|---|---|---|
SVR | 0.322116 | 0.221127 | 0.000317 | 3.501471 | 0.976023 | 0.971925 |
DTR | 0.165008 | 0.121092 | 0.000108 | 2.165202 | 0.993958 | 0.992925 |
RFR | 0.149435 | 0.112607 | 0.000087 | 1.901727 | 0.995136 | 0.994305 |
XGBoost | 0.158008 | 0.117923 | 0.000096 | 2.083482 | 0.994867 | 0.993989 |
Algorithm | RMSE | MAE | MSLE | MAPE, % | R2 | Adjusted R2 |
---|---|---|---|---|---|---|
SVR | 3.226487 | 2.561265 | 0.000117 | 1.883010 | 0.754189 | 0.712183 |
DTR | 3.447437 | 2.794253 | 0.000134 | 2.072352 | 0.661024 | 0.603097 |
RFR | 2.485591 | 1.980499 | 0.000069 | 1.458827 | 0.818735 | 0.787760 |
XGBoost | 2.441857 | 1.819650 | 0.000066 | 1.404457 | 0.840025 | 0.812687 |
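The metrics reported in the performance tables above can be computed with scikit-learn; adjusted R2 additionally corrects for the number of predictors p (54 inputs here). The predictions below are synthetic, so the values illustrate the calculation only.

```python
# Sketch of the regression metrics used in the tables:
# RMSE, MAE, MSLE, MAPE, R2, and adjusted R2.
import numpy as np
from sklearn.metrics import (
    mean_squared_error, mean_absolute_error, mean_squared_log_error,
    mean_absolute_percentage_error, r2_score,
)

rng = np.random.default_rng(0)
y_true = rng.uniform(4.0, 11.0, size=200)            # stand-in viscosities, cSt
y_pred = y_true + rng.normal(scale=0.15, size=200)

rmse = mean_squared_error(y_true, y_pred) ** 0.5
mae = mean_absolute_error(y_true, y_pred)
msle = mean_squared_log_error(y_true, y_pred)
mape = 100.0 * mean_absolute_percentage_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

n, p = len(y_true), 54                               # samples, predictors
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)    # never exceeds R2
```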
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Fadzil, M.A.M.; Zabiri, H.; Razali, A.A.; Basar, J.; Syamzari Rafeen, M. Base Oil Process Modelling Using Machine Learning. Energies 2021, 14, 6527. https://doi.org/10.3390/en14206527