A Highly Accurate NILM: With an Electro-Spectral Space That Best Fits Algorithm’s National Deployment Requirements
Abstract
1. Introduction
2. Materials and Methods
2.1. Basic Definitions for Algorithm Presentation
2.2. Suggested Architecture
2.3. A Background on Datasets—Which Datasets Fit High-Sampling Rate Algorithms and Which Low-Sampling Rate Algorithms
2.4. A Survey of Algorithms in Order to Comprehend How to Approach the Problem
2.5. Spectral Theory of Non-Intrusive Load Monitoring—A Front-End Chest of Drawers
2.5.1. Simultaneous Spectra and Slow Time Representation
2.5.2. Automatic Characterization of the Spectra Following Bottleneck’s Identification
2.5.3. Construction of High-Order Dimensional Space
2.6. Proposed Theory of NILM Electro-Spectral Multidimensional Space—In Light of Insights from Separated Device Signatures vs. Collaborative Signature Theory
2.6.1. Electricity Knowledge-Based Model Construction
2.6.2. Visualization of Eighteen-Order Dimensional-Space Spectra—Release of Bottleneck Pointed-Out by Signature Separateness
2.6.3. Architecture Presentation
- (1) Data acquisition module: The algorithm starts by sampling voltage and current waveforms with a Satec™ PM 180 smart meter. At least two of the three phases are required, and two are sufficient. The PM 180 has its own driver (the “PAS” software) along with an API to the local database of the meter data management (MDM) system, which is called “Expert Power Pro”.
- (2) FFT module: The time-series waveforms are fed into the FFT module, which converts them into spectral signals.
- (3) Electro-spectral preprocessor: This module feeds the clustering AI illustrated in Figure 1 and implements Equation (7). Using expert electricity knowledge, it constructs the high-order dimensional feature space used by the AI.
- (4) Placement of separate device signatures in the feature space: (4.1) The module takes tagged recording segments comprising 80% of the universal dataset and translates them into points in the proposed high-order dimensional feature space. A sub-module applies 2D and 3D PCA dimensionality reduction so that the separate device-signature clusters can be inspected by human cognition. (4.2) The module then receives recordings made by the PM 180 at local residential premises; these are currently tagged semi-automatically and will, in the future, be tagged by an automatic recording card. (A minimal Python sketch of modules (2)–(4) follows this list.)
- (5) Clustering AI core module: Ensemble learning then uses five clustering/classification cores: K-NN, decision tree, logistic classifier, ridge, and random forest. The accompanying toolset comprises (i) the classification report, used most notably for precision and recall; (ii) the AUC-ROC curve, which gives a fuller picture than the F1-score alone; (iii) the confusion matrix; and (iv) the Pearson correlation heatmap. The entire toolset operates on the high-order feature space to yield class identification.
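As a minimal illustrative sketch of modules (2)–(4) (a toy, not the production pipeline: the sampling rate, window length, synthetic waveforms, and the reduced feature count are assumptions, and the real space is the eighteen-dimensional one of Section 2.6.2), the following Python code converts sampled voltage/current segments into spectral features via an FFT and projects the labelled feature vectors to 2D with PCA so that cluster separateness can be inspected visually:

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

FS = 10_000          # assumed sampling rate [Hz]
WINDOW = 2048        # assumed samples per recording segment


def spectral_features(voltage: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Toy stand-in for the electro-spectral preprocessor: FFT both waveforms
    and keep a few harmonic magnitudes plus crude power estimates."""
    v_spec = np.abs(np.fft.rfft(voltage * np.hanning(len(voltage))))
    i_spec = np.abs(np.fft.rfft(current * np.hanning(len(current))))
    fundamental = np.argmax(i_spec[1:]) + 1               # dominant current bin
    harmonics = i_spec[fundamental * np.arange(1, 6)]     # first five harmonics
    p_active = np.mean(voltage * current)                 # crude active power
    s_apparent = np.sqrt(np.mean(voltage**2) * np.mean(current**2))
    return np.concatenate([harmonics,
                           v_spec[fundamental:fundamental + 3],
                           [p_active, s_apparent]])


# Synthetic tagged segments standing in for the 80% training split.
rng = np.random.default_rng(0)
t = np.arange(WINDOW) / FS
X, y = [], []
for label, (h1, h3) in enumerate([(1.0, 0.0), (1.0, 0.3), (0.7, 0.6)]):
    for _ in range(50):
        v = 230 * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, WINDOW)
        i = (h1 * np.sin(2 * np.pi * 50 * t)
             + h3 * np.sin(2 * np.pi * 150 * t)
             + rng.normal(0, 0.05, WINDOW))
        X.append(spectral_features(v, i))
        y.append(label)
X, y = np.array(X), np.array(y)

# 2D PCA so the separateness of the device-signature clusters can be seen.
X2 = PCA(n_components=2).fit_transform(X)
plt.scatter(X2[:, 0], X2[:, 1], c=y, cmap="tab10", s=10)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Device signature clusters (sketch)")
plt.show()
```

In the actual architecture these features are produced by the electro-spectral preprocessor implementing Equation (7) and are then handed to the clustering AI core of module (5).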
2.6.4. Electric Sensor Used for Some of the Testing Dataset
2.6.5. K-Nearest Neighbor (KNN) in Light of PCA 2D Image and Distance Theorems
2.6.6. Ridge Classifier in Light of PCA 2D Image and Distance Theorems
2.6.7. Random Forest Classifier in Light of PCA 2D Image and Distance Theorems
2.6.8. Logistic Classifier in Light of PCA 2D Image and Distance Theorems
2.6.9. Decision Tree Classifier
2.6.10. Scoring Methods for the Supervised Learning Algorithm
Comparative Tool #1: Computation of Classification Report: Accuracy, Precision, Recall, F-Measure and Support
Comparative Tool #2: ROC-AUC Curve
Comparative Tool #3: Confusion Matrix over the Supervised Learning Algorithms
Comparative Tool #4: Heatmap
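Below is a hedged sketch of how the five classifier cores (Sections 2.6.5–2.6.9) and the four comparative tools of Section 2.6.10 can be combined with scikit-learn; the synthetic 18-feature dataset, the hyperparameters, and the softmax fallback used for the ridge classifier's ROC-AUC are assumptions, not the paper's exact configuration:

```python
import pandas as pd
from scipy.special import softmax
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic 18-feature, multi-class dataset standing in for the electro-spectral
# feature space and its device tags (assumption; not the paper's data).
X, y = make_classification(n_samples=2000, n_features=18, n_informative=12,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Ridge": RidgeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Logistic": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    print(f"=== {name} ===")
    # Comparative tool #1: classification report (precision, recall, F1, support).
    print(classification_report(y_te, y_hat, digits=3))
    # Comparative tool #2: one-vs-rest ROC-AUC; RidgeClassifier has no
    # predict_proba, so its decision values are softmaxed into pseudo-probabilities.
    if hasattr(clf, "predict_proba"):
        scores = clf.predict_proba(X_te)
    else:
        scores = softmax(clf.decision_function(X_te), axis=1)
    print(f"ROC-AUC (OvR): {roc_auc_score(y_te, scores, multi_class='ovr'):.3f}")
    # Comparative tool #3: confusion matrix.
    print(confusion_matrix(y_te, y_hat))

# Comparative tool #4: Pearson correlation heatmap of the feature space
# (printed here; plot with seaborn.heatmap(corr) if seaborn is available).
corr = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])]).corr()
print(corr.round(2))
```

In the proposed flow, X and y would instead be the eighteen-dimensional electro-spectral feature vectors and device tags produced by modules (3) and (4), and the printed reports correspond to the four comparative tools listed above.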
2.7. A Proof That Electro-Spectral Dimensional Space Is Potentially Increasing the Separability of Individual Device Signatures
2.8. Theoretical Computation of the Effect of Distance between Electric Device Signatures over Mix-Up Probability between Electrical Devices
3. Results
3.1. Comparative Study of the Algorithms
3.1.1. Specific Test Equipment Used and Project Software Language
3.1.2. Classifier 1: KNN Classification Algorithm
3.1.3. Classifier 2: Ridge Classifier
3.1.4. Classifier 3: Random Forest Classification
3.1.5. Classifier 4: Decision Tree Classifier
3.1.6. Classifier 5: Logistic Classifier
3.2. Comparative Algorithms Results Based on References and Results, and Experiment of Training Required Scenarios Count
4. A Discussion on Future Research Implied by the Presented Research
- Replacing the machine learning classifier with a CNN deep learning classifier would most probably yield a strong NILM, though one requiring much more data. Such research, if successful, would provide evidence on the value of ground-level electricity knowledge and human-crafted separating features, especially on whether they can redirect the self-generated features and improve the separability of the device clusters.
- This paper discussed the training time and the number of scenarios required for training. Model training still calls for a comparative study against other algorithms and, in addition, for training on a dataset containing more than 13 device types. The exact order of magnitude of the mix-up probability, as a function of the distance between electrical devices in the feature space, was computed and shown.
5. Conclusions
- (1) Referring to the first objective, the algorithm achieved 98% accuracy for each and every device, compared to low-sampling-rate NILM, which achieved 22–92% accuracy. The accuracy of the compared low-sampling-rate algorithms was device dependent, with an average of about 70%. This article also compared the five algorithms quantitatively and showed that nonlinear classifiers are much more accurate (98%) than linear classifiers such as the logistic classifier (75%). The evidence indicates that architecture selection matters and that the cluster shapes are highly curved (“curly”). Beyond the implementation code and experimental testing, the result was also explained theoretically: using vector algebra and knowledge of electromagnetic and electricity theory, it was shown that the proposed features significantly increase the distance between separate device signatures in the feature space, and the appendix referred to in Section 2.8 proves that the mix-up probability between two devices A and B decreases with that distance. The intuition is simple: greater distance means little or no overlap between the signature clusters, and less overlap means less mix-up (a standard worked example follows this list). It was shown using 2D and 3D PCA dimensionality reduction that the thirteen devices are separable, as their differently colored clusters do not overlap. Finally, the entire Results chapter spans the quantitative study and uses standard accuracy measures: true positives, false positives, AUC-ROC, the confusion matrix, and the classification report together provide a 360° comparative view of the proposed preprocessor combined with the various AI classifier/clustering algorithms.
- (2) Regarding the second objective, training-time acceleration: the sampling rate of the IoT sensor is far higher than that of low-sampling-rate NILM algorithms and energy-load-profile sensors at roughly 0.001 Hz (one reading every fifteen minutes). Training on new devices is therefore accelerated. There is more to the acceleration than the sampling rate, but that is outside the scope of this paper and may be introduced in a future paper; accelerated data generation is sufficient to explain the training improvement. Observing the second and minute counts of the Belkin dataset used here, it is evident that far less recording time is required than with low-sampling-rate datasets.
- (3) In terms of computer resources, the entire code processed a single very large dataset containing thirteen devices in no more than ten minutes on a Core i7 CPU; one terabyte of RAM was required for constructing the feature space. In industrial premises there are many profile types, so training time is multiplied accordingly. Training deep learning algorithms takes on the order of an hour per 1000 epochs when executed on a 28 TFLOPS GPU. As noted in the Introduction, the entry of NILM into industrial premises remains a challenge.
- (4) The difficulty of training over larger device counts is widely reported in previous work. Here, it was empirically demonstrated that the algorithm easily identifies thirteen devices collaboratively, whereas most, if not all, previous works report on five devices collaboratively.
- (5) A comparative study was conducted with a large variety of quantitative tools applied to five different clustering algorithms, and the results show a wide spectrum of behavior and a non-uniform performance. The standard toolset included precision, recall, AUC-ROC, the confusion matrix, and the Pearson correlation coefficient heatmap. The comparative study extended further to a comparison, over the same parameters, with previous works. The conclusion is that the presented algorithm yields more accurate results for all devices and over a larger device count.
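To make the distance-versus-mix-up argument of conclusion (1) and Section 2.8 concrete, here is a standard worked example under an assumed two-class model (isotropic Gaussian signature clusters with equal priors); it is an illustration of the intuition, not the paper's exact derivation in the external appendix:

```latex
% Signatures of devices A and B modelled as isotropic Gaussians with means
% \mu_A, \mu_B, common standard deviation \sigma and equal priors.
% The Bayes-optimal mix-up probability depends only on the inter-signature
% distance d = \lVert \mu_A - \mu_B \rVert:
\[
  P_{\text{mix-up}}(d) \;=\; Q\!\left(\frac{d}{2\sigma}\right)
  \;=\; \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{d}{2\sqrt{2}\,\sigma}\right),
\]
% which is strictly decreasing in d: for example, Q(1) \approx 0.159 at
% d = 2\sigma versus Q(2) \approx 0.023 at d = 4\sigma, roughly a seven-fold
% reduction when the feature space doubles the distance between clusters.
```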
6. Patents
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhuang, M.; Shahidehpour, M.; Li, Z. An Overview of Non-Intrusive Load Monitoring: Approaches, Business Applications, and Challenges. In Proceedings of the 2018 International Conference on Power System Technology, POWERCON 2018—Proceedings, Guangzhou, China, 6–8 November 2018; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2019; pp. 4291–4299. [Google Scholar]
- Gupta, S.; Reynolds, M.S.; Patel, S.N. ElectriSense: Single-Point Sensing Using EMI for Electrical Event Detection and Classification in the Home. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Copenhagen, Denmark, 26–29 September 2010; pp. 139–148. [Google Scholar]
- Singer, S.; Ozeri, S.; Shmilovitz, D. A pure realization of Loss-Free Resistor. IEEE Trans. Circuits Syst. 2004, 51, 1639–1647. [Google Scholar] [CrossRef]
- Xiaohan, S.; Rafiq, H. Non-Intrusive Load Monitoring. Scholarly Community Encyclopedia. Available online: https://encyclopedia.pub/1384 (accessed on 28 October 2020).
- Verma, A.; Anwar, A.; Mahmud, M.A.P.; Ahmed, M.; Kouzani, A. A Comprehensive Review on the NILM Algorithms for Energy Disaggregation. arXiv 2021, arXiv:2102.12578. [Google Scholar]
- Jiang, L.; Li, J.; Luo, S.; Jin, J.; West, S. Literature review of power disaggregation. In Proceedings of the 2011 International Conference on Modelling, Identification and Control, Innsbruck, Austria, 24–26 October 2011; pp. 38–42. [Google Scholar] [CrossRef]
- Hosseini, S.S.; Agbossou, K.; Kelouwani, S.; Cardenas, A. Non-intrusive load monitoring through home energy management systems: A comprehensive review. Renew. Sustain. Energy Rev. 2017, 79, 1266–1274. [Google Scholar] [CrossRef]
- Herrero, J.R.; Murciego, Á.L.; Barriuso, A.L.; de La Iglesia, D.H.; González, G.V.; Rodríguez, J.M.C.; Carreira, R. Non-intrusive load monitoring (nilm): A state of the art. In Proceedings of the International Conference on Practical Applications of Agents and Multi-Agent Systems, Porto, Portugal, 21–23 June 2017; Springer: Porto, Portugal, 2017; pp. 125–138. [Google Scholar]
- Ruano, A.; Hernandez, A.; Ureña, J.; Ruano, M.; Garcia, J. NILM Techniques for Intelligent Home Energy Management and Ambient Assisted Living: A Review. Energies 2019, 12, 2203. [Google Scholar] [CrossRef] [Green Version]
- Shin, C.; Rho, S.; Lee, H.; Rhee, W. Data requirements for applying machine learning to energy disaggregation. Energies 2019, 12, 1696. [Google Scholar] [CrossRef] [Green Version]
- Energy Disaggregation from Non-Intrusive Load Monitoring: AM207. Available online: https://www.youtube.com/watch?v=9a8dR9NEe6w (accessed on 6 May 2016).
- D’Incecco, M.; Squartini, S.; Zhong, M. Transfer Learning for Non-Intrusive Load Monitoring. IEEE Trans. Smart Grid 2020, 11, 1419–1429. [Google Scholar] [CrossRef] [Green Version]
- Delfosse, A.; Hebrail, G.; Zerroug, A. Deep Learning Applied to NILM: Is Data Augmentation Worth for Energy Disaggregation? In Proceedings of the ECAI 2020, Santiago de Compostela, Spain, 29 August–8 September 2020; Volume 325, pp. 2972–2977. [Google Scholar]
- Renaux, D.P.B.; Pottker, F.; Ancelmo, H.C.; Lazzaretti, A.E.; Lima, C.R.E.; Linhares, R.R.; Oroski, E.; Nolasco, L.; Lima, L.T.; Mulinari, B.M.; et al. A Dataset for Non-Intrusive Load Monitoring: Design and Implementation. Energies 2020, 13, 5371. [Google Scholar] [CrossRef]
- Machlev, R.; Belikov, J.; Beck, Y.; Levron, Y. MO-NILM: A multi-objective evolutionary algorithm for NILM classification. Energy Build. 2019, 199, 134–144. [Google Scholar] [CrossRef]
- Machlev, R.; Tolkachov, D.; Levron, Y.; Beck, Y. Dimension reduction for NILM classification based on principle component analysis. Electr. Power Syst. Res. 2020, 187, 134–144. [Google Scholar] [CrossRef]
- Aho, A.V.; Hopcroft, J.E.; Ullman, J.D. The Design and Analysis of Computer Algorithms; Addison Wesley: London, UK; Amsterdam, The Netherlands; Don Mills, ON, Canada; Sydney, Australia, 1974. [Google Scholar]
- Arora, S.; Barak, B. Computational Complexity: A Modern Approach; Cambridge University Press: Cambridge, UK, 2009; ISBN 978-0-521-42426-4. [Google Scholar]
- Belkin Residential Apartments Waveforms Dataset. Available online: https://www.kaggle.com/c/belkin-energy-disaggregation-competition/data (accessed on 26 February 2013).
- REDD Residential Dataset. Available online: http://redd.csail.mit.edu/ (accessed on 26 February 2019).
- Paper’s Code Location. Available online: https://www.kaggle.com/moshedo500/nilm-project (accessed on 26 February 2019).
- Kahl, M.; Haq, A.; Kriechbaumer, T.; Jacobsen, H.A. WHITED—A Worldwide Household and Industry Transient Energy Data Set. In Proceedings of the 3rd International Workshop on Non-Intrusive Load Monitoring, Vancouver, BC, Canada, 14–15 May 2016. [Google Scholar]
- Kriechbaumer, T.; Jacobsen, H.A. BLOND, a building-level office environment dataset of typical electrical appliances. Sci. Data 2018, 5, 180048. [Google Scholar] [CrossRef]
- Calamaro, N.; Beck, Y.; Ben Melech, R.; Shmilovitz, D. An Energy-Fraud Detection-System Capable of Distinguishing Frauds from Other Energy Flow Anomalies in an Urban Environment. Sustainability 2021, 13, 10696. [Google Scholar] [CrossRef]
- Singhal, V.; Maggu, J.; Majumdar, A. Simultaneous Detection of Multiple Appliances From Smart-Meter Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning. IEEE Trans. Smart Grid 2019, 3, 2969–2978. [Google Scholar] [CrossRef] [Green Version]
- Beckel, C.; Kleiminger, W.; Cicchetti, R.; Staake, T.; Santini, S. The ECO data set and the performance of non-intrusive load monitoring algorithms. In Proceedings of the BuildSys’14: 1st ACM International Conference on Embedded Systems for Energy-Efficient Buildings, Memphis, TN, USA, 5–6 November 2014; Srivastava, M., Ed.; Association for Computing Machinery: New York, NY, USA; pp. 80–89. [Google Scholar]
- Stanford University. Residential Energy Disaggregation Data Set. Initial REDD. Available online: https://peec.stanford.edu/research/residential-energy-disaggregation-dataset-redd (accessed on 26 April 2021).
- MIT. Reference Energy Disaggregation Data Set (REDD). Available online: https://energy.duke.edu/content/reference-energy-disaggregation-data-set-redd (accessed on 26 April 2012).
- Kolter, J.Z.; Johnson, M.J. Redd: A public data set for energy disaggregation research. Artif. Intell. 2011, 25, 59–62. [Google Scholar]
- ECO Dataset. Available online: https://www.vs.inf.ethz.ch/res/show.html?what=eco-data (accessed on 26 February 2019).
- UK Domestic Appliance Level Electricity (UK-DALE-2017)-Disaggregated Appliance/Whole House Power. Available online: https://ukerc.rl.ac.uk/DC/cgi-bin/edc_search.pl/?WantComp=138 (accessed on 26 April 2017).
- Kelly, J.; Knottenbelt, W. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. Sci. Data 2015, 2, 150007. [Google Scholar] [CrossRef] [Green Version]
- NILMTK: An Open-Source-Code NILM Library. Available online: https://nilmtk.github.io/ (accessed on 26 April 2014).
- Batra, N.; Kelly, J.; Parson, O.; Dutta, H.; Knottenbelt, W.; Rogers, A.; Singh, A.; Srivastava, M. NILMTK: An open-source toolkit for non-intrusive load monitoring. In Proceedings of the 5th International Conference on Future Energy Systems, Cambridge, UK, 11–13 June 2014. [Google Scholar]
- Bonfigli, R.; Felicetti, A.; Principi, E.; Fagiani, M.; Squartini, S.; Piazza, F. Denoising autoencoders for non-intrusive load monitoring: Improvements and comparative evaluation. Energy Build. 2018, 158, 1461–1474. [Google Scholar] [CrossRef]
- García-Pérez, D.; Pérez-López, D.; Díaz-Blanco, I.; González-Muñiz, A.; Domínguez-González, M.; Vega, A.A.C. Fully-Convolutional Denoising Auto-Encoders for NILM in Large Non-Residential Buildings. IEEE Trans. Smart Grid 2021, 12, 2722–2731. [Google Scholar] [CrossRef]
- Rafiq, H.; Shi, X.; Zhang, H.; Li, H.; Ochani, M.K. A Deep Recurrent Neural Network for Non-Intrusive Load Monitoring Based on Multi-Feature Input Space and Post-Processing. Energies 2020, 13, 2195. [Google Scholar] [CrossRef]
- Jia, R.; Gao, Y.; Spanos, C.J. A fully unsupervised non-intrusive load monitoring framework. In Proceedings of the 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, USA, 2–5 November 2015; pp. 872–878. [Google Scholar] [CrossRef]
- Zhao, W.; Wang, C.; Peng, W.; Liu, W.; Zhang, H. Non-intrusive load monitoring using factorial hidden markov model based on adaptive density peak clustering. Energy Build. 2021, 244, 111025. [Google Scholar] [CrossRef]
- Flores, M.J.R.; Wittmann, M.F. Optimization Applied to Residential Non-Intrusive Load Monitoring. Master’s Thesis, Universidade Estadual de Campinas, Campinas, Brazil, 2017. [Google Scholar]
- Hart, G.W. Nonintrusive appliance load monitoring. Proc. IEEE 1992, 80, 1870–1891. [Google Scholar] [CrossRef]
- Holweger, J.; Dorokhova, M.; Bloch, L.; Ballif, C.; Wyrsch, N. Unsupervised algorithm for disaggregating low-sampling-rate electricity consumption of households. Sustain. Energy Grids Netw. 2019, 19, 100244. [Google Scholar] [CrossRef] [Green Version]
- Chen, U.; Wang, Q.; He, Z.; Chen, K.; Hu, J.; He, J. Convolutional sequence to sequence non-intrusive load monitoring. J. Eng. 2018, 17, 1860–1864. [Google Scholar] [CrossRef]
- Zhou, G.; Li, Z.; Fu, M.; Feng, Y.; Wang, X.; Huang, C. Sequence-to-Sequence Load Disaggregation Using Multiscale Residual Neural Network. IEEE Trans. Instrum. Meas. 2021, 70, 1–10. [Google Scholar] [CrossRef]
- Kelly, J.; Knottenbelt, W. Neural NILM: Deep neural networks applied to energy disaggregation. In Proceedings of the ACM BuildSys, Seoul, Korea, 4–5 November 2015; pp. 55–64. [Google Scholar]
- Rafiq, H.; Zhang, H.; Li, H.; Ochani, M.K. Regularized LSTM Based Deep Learning Model: First Step towards Real-Time Non-Intrusive Load Monitoring. In Proceedings of the 2018 IEEE International Conference on Smart Energy Grid Engineering (SEGE), Oshawa, ON, Canada, 12–15 August 2018; pp. 234–239. [Google Scholar] [CrossRef]
- Lin, G.Y.; Lee, S.C.; Hsu, Y.J.; Jih, W.R. Applying Power Meters for Appliance Recognition on the Electric Panel. In Proceedings of the 5th IEEE Conference on Industrial Electronics and Applications, Melbourne, Australia, 15–17 June 2010; pp. 2254–2259. [Google Scholar]
- Reynolds, D. Gaussian Mixture Models. In Encyclopedia of Biometrics; Springer Science Business Media: New York, NY, USA, 2009; pp. 659–663. [Google Scholar]
- Zucchini, W.; Berzel, A.; Nenadic, O. Applied Smoothing Techniques. Part I: Kernel Density Estimation. 2003, pp. 5–19. Available online: https://docplayer.net/60547422-Applied-smoothing-techniques-part-1-kernel-density-estimation-walter-zucchini.html (accessed on 26 February 2017).
- Smith, L.I.A. Tutorial on Principal Components Analysis; University of Otago: Dunedin, New Zealand, 2002; Volume 51, pp. 5–24. [Google Scholar]
- Marler, R.T.; Arora, J.S. Survey of multi-objective optimization methods for engineering. Struct. Multidiscipl. Optim. 2004, 26, 369–395. [Google Scholar] [CrossRef]
- Deb, K.; Gupta, H. Introducing robustness in multi-objective optimization. J. Evol. Comput. 2006, 14, 463–494. [Google Scholar] [CrossRef]
- Sutton, O. Introduction to k Nearest Neighbor Classification and Condensed Nearest Neighbor Data Reduction; University of Leicester: Leicester, UK, 2012. [Google Scholar]
- Oladunni, O.O.; Trafalis, T.B. A pairwise reduced kernel-based multi-classification Tikhonov regularization machine. In Proceedings of the International Joint Conference on Neural Networks (IJCNN’06), Vancouver, BC, Canada, 16–21 July 2006; pp. 130–137. [Google Scholar] [CrossRef]
- Machine Learning Basics: Random Forest Classification. Available online: https://towardsdatascience.com/machine-learning-basics-random-forest-classification-499279bac51e?gi=dff15d2726cc (accessed on 26 June 2021).
- Ng, A.Y.; Jordan, M.I. On discriminative versus generative classifiers: A comparison of logistic regression and naive Bayes. Adv. Neural Inform. Process. Syst. 2001, 14, 605–610. [Google Scholar]
- Afanador, N.L.; Smolinska, A.; Tran, T.; Blanchet, L. Unsupervised random forest: A tutorial with case studies. J. Chemometr. 2016, 30, 232–241. [Google Scholar] [CrossRef]
- Understanding AUC-ROC Curve. Available online: https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5 (accessed on 26 June 2018).
- Massidda, L.; Marrocu, M.; Manca, S. Non-Intrusive Load Disaggregation by Convolutional Neural Network and Multilabel Classification. Appl. Sci. 2020, 10, 1454. [Google Scholar] [CrossRef] [Green Version]
- External Appendix: File Appendix_Proof_Computational_Complexity_Post_Review1.docx. Available online: https://www.kaggle.com/moshedo500/nilm-project (accessed on 26 June 2021).
- Kim, J.; Le, T.T.H.; Kim, H. Nonintrusive load monitoring based on advanced deep learning and novel signature. Comput. Intell. Neurosci. 2017, 2017, 1–23. [Google Scholar] [CrossRef]
- Powers, D.M.W. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation. J. Mach. Learn. Tech. 2011, 2, 37–63. [Google Scholar]
- Myles, A.J.; Feudale, R.N.; Liu, Y.; Woody, N.A.; Brown, S.D. An introduction to decision tree modeling. J. Chemometr. 2004, 18, 275–285. [Google Scholar] [CrossRef]
- Li, J.; Wang, F. Non-Technical Loss Detection in Power Grids with Statistical Profile Images Based on Semi-Supervised Learning. Sensors 2020, 20, 236. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Hu, T.; Guo, Q.; Shen, X.; Sun, H.; Wu, R.; Xi, H. Utilizing unlabeled data to detect electricity fraud in AMI: A semi-supervised deep learning approach. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Alaton, C.; Tounquet, F.; Directorate-General for Energy; European Commission; Tractebel Impact ENGIE. Benchmarking Smart Metering Deployment in the EU-28 Final Report; European Union: Brussel, Belgium, 2020. [Google Scholar]
| Algorithm | f1-Score | Accuracy Range (E.A.) | Precision | Recall | AUC-ROC | Training Time (*) over 133 Days of Data (min) | Quality of Data Inserted to Core | Required Data (Days) | Theoretical Speed of Stream Data (Count Every 15 min) |
|---|---|---|---|---|---|---|---|---|---|
| Proposed electro-spectral ensemble learning | 0.98 | 0.94–0.98 | 0.025 | 0.98 | | 5 | ×50 more and processed data | 1–2 | |
| Electro-spectral with KNN | 0.98 | 0.94–0.98 | 0.025 | 0.98 | Mostly 0.98–1.00; worst: 0.95 | | | | |
| Electro-spectral with RF | 0.98 | 0.94–0.98 | 0.025 | 0.98 | Mostly 1.00; worst: 0.99 | | | | |
| Electro-spectral with Ridge | 0.98 | 0.94–0.98 | 0.025 | 0.98 | Mostly 0.98–1.00; worst: 0.93 | | | | |
| Electro-spectral with DT | 0.98 | 0.94–0.98 | 0.025 | 0.98 | Mostly 0.83–0.94; worst: 0.75 | | | | |
| DAE | 0.679 | 0.518–0.888 | -- | -- | | 6 | Raw | 133 | 1 |
| MFS LSTM | 0.887 | 0.856–0.964 | 0.05 | 0.62 | | 21.3 | Raw | 133 | 1 |
| FHMM | 0.259 | 0.536–0.813 | 0.06 | 0.87 | | 2.76 | Raw | 133 | 1 |
| Neural-LSTM | 0.619 | 0.289–0.891 | 0.05 | 0.62 | | 15.13 | Raw | 133 | 1 |
| CNN (S-S) | 0.848 | 0.633–0.924 | 0.05 | 0.75 | | 31.65 | Raw | 133 | 1 |
| CO | 0.259 | 0.544–0.907 | 0.08 | 0.56 | | 0.168 (11 s) | Raw | 133 | 1 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).