A Review of the Enabling Methodologies for Knowledge Discovery from Smart Grids Data
Abstract
1. Introduction
2. Knowledge Discovery from Massive Data
- Definition of the KDD process goal from the customer's point of view, together with an understanding of the application domain and of the relevant a priori knowledge;
- Selection of the target data, from those available, on which the KDD process will be performed;
- Data cleaning and preprocessing: this includes the basic operations of noise removal, the collection and merging of samples, and the handling of date and time information;
- Data reduction and projection: the features of the samples are processed by adopting cardinality reduction or feature selection techniques, aimed at either reducing the dataset to its most relevant features or finding invariant transformations of the data;
- Matching of the KDD process goal to the choice of a particular data mining method (e.g., clustering, regression, classification, etc.);
- Data mining algorithm selection, to find patterns in the data consistent with the goal and the available data;
- Execution of the data mining algorithm to search for patterns in the data;
- Interpretation of the mined patterns: this involves visualizing the mining results and possibly returning to previous steps to adjust the patterns or select a different algorithm to improve the results;
- Knowledge consolidation: processing the discovered knowledge into the form most suitable for either successive KDD processes or the generation of visual reports for the customer.
- Prediction: the goal is the development of patterns for predicting the behavior of certain features over a given forecasting horizon;
- Description: the goal is the development of patterns aimed at presenting the data in a more understandable form.
- Classification: learning a function that maps a data sample to one of a set of classes;
- Regression: learning a function that relates an observed set of input–output data, discovering possible functional relations;
- Clustering: grouping the data in a given set based on their similarity, by identifying samples (or patterns) with similar features;
- Summarizing: finding a compact representation of multivariate data;
- Dependency Modeling: learning a model that describes the dependencies between variables in probabilistic and graphical terms;
- Change and Deviation Detection: learning a model that finds differences or strong deviations measured in a flow process.
- model representation;
- model evaluation criteria;
- search method.
- Decision Trees and Rules;
- Nonlinear Regression and Classification methods;
- Data-driven models;
- Probabilistic Graphic Dependency models;
- Relation Learning Models.
Research and Application Challenges in Smart Grids
- Raw waveform data (voltages and currents, exchanged active and reactive power at buses, conductor temperatures, etc.);
- Preprocessed waveforms (voltage and currents, weather parameters over the grid);
- Status variables of system components;
- Consumer consumption/distributed generation data;
- Power Plants operation and energy bidding data;
- Electricity Market data.
3. Cardinality Reduction and Data Compression
- sampling;
- squashing;
- clustering;
- binning.
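Of these four approaches, sampling and binning are the simplest to illustrate. Below is a minimal numpy sketch on a synthetic measurement matrix; the matrix size, the subset size, and the number of bins are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100_000, 4))        # synthetic measurement matrix

# Sampling: keep a random subset of the rows.
idx = rng.choice(X.shape[0], size=5_000, replace=False)
X_sampled = X[idx]

# Binning: quantize each column into equal-width bins and replace
# every value with the centre of its bin (a lossy compression).
n_bins = 16
mins, maxs = X.min(axis=0), X.max(axis=0)
width = (maxs - mins) / n_bins
bins = np.clip(((X - mins) / width).astype(int), 0, n_bins - 1)
X_binned = mins + (bins + 0.5) * width
```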
- Normalize the data matrix $X$ so that each column of $X$ assumes null mean and unitary variance;
- Compute the Singular Value Decomposition of $X$: $X = U \Sigma V^{T}$;
- The new variables in the lower-dimensional space are computed by choosing the first $k$ columns of matrix $V$, i.e., $Z = X V_{k}$. There are many ways to choose the optimal number of principal components; one of them is to take into account the percentage of total variance explained by the chosen components, where a value greater than a given threshold is considered satisfactory. These steps are condensed in the sketch below.
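A minimal numpy sketch of the three steps above; the 0.9 explained-variance threshold is an illustrative choice, not a value prescribed here.

```python
import numpy as np

def pca_reduce(X, var_threshold=0.9):
    """Reduce X to its first k principal components, with k chosen as
    the smallest number of components whose cumulative explained
    variance exceeds var_threshold."""
    # Step 1: normalize each column to zero mean and unit variance.
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    # Step 2: Singular Value Decomposition, Xn = U * diag(s) * Vt.
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    # Step 3: cumulative explained-variance ratio of the components.
    var_ratio = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(var_ratio, var_threshold)) + 1
    # Scores in the k-dimensional principal-component space.
    return Xn @ Vt[:k].T

Z = pca_reduce(np.random.default_rng(0).normal(size=(500, 12)))
```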
4. Proposed Methodology
- Time-referenced datasets about load consumption, acquired by smart meters and customer substations, are precious information sources to mine in order to capture the user behavior profile. Generally, electric load trajectory shapes exhibit similar patterns according to the season, the day type (workweek, weekend, and annual holidays), and the load type (household, tertiary sector, industrial, etc.). Electricity prices, weather conditions, and one-off social events complete the list of phenomena affecting the electric load. Much of this information, such as the date–time fields, is available in string or character format and needs adequate transformation before regression models can be applied. Given a date–time sample, a simple preprocessing step allows for extracting several useful codified variables from its type and timestamp, such as the season, month, day of the week, day of the month, and so on; a minimal example is given in the first snippet after this list.
- A raw time series matrix $X$, characterized by $N$ samples and $v$ variables (or features), is often affected by noise or chaotic behavior, which does not allow a clear understanding of the signal trajectory over time. Excessive volatility needs to be managed in order to obtain more stable signals that are able to capture the time series trend. Feature engineering moves in this direction by allowing the extraction of a large number of hidden features and smooth signals from the original time series, producing the matrix $X'$ of dimensions $N \times c$, with $c = q \cdot v > v$, where $q$ is the number of features made per raw variable. Table 1 summarizes the main smoothing variables used in the literature and the corresponding equations. For the sake of clarity, matrix dimensions are summarized in Table 2; the second snippet after this list sketches this step.
- The supervised learning approach to time series forecasting requires a transformation of the data, which are usually arranged in matrix form. Preparing data for this approach requires producing an input–output pair for each sample $t$, which considers a portion of the predictor trajectories (how many samples in the past are considered as process memory) and the forecasting horizon of the target variables (how many samples ahead we want to predict) (Figure 2). The embedding procedure is a map between the samples of a time series which, given an input matrix $X'_i$ and once an auto-regressive lag $L$, a delay $d$, and a forecasting horizon $H$ are assigned, produces two matrices: the predictor matrix $P$, of dimensions $n' \times p$, and the target matrix $T$, of dimensions $n' \times r$. The parameter $r$ is computed as the product of $v_t$ and $H$, where $v_t$ is the number of variables in $X'_i$ to predict. The delay is crucial since it shifts the most recent available sample into the past with respect to time $t$. A rough indication of the value of $L$ can be obtained from the auto-correlation analysis of the signal. $P$ and $T$ are consequently split into $P_{tr}$, $P_{ts}$, $T_{tr}$, and $T_{ts}$, the training and test sets of the predictor and target matrices. For the sake of clarity, the variable list is summarized in Table 2; the embedding is sketched in the third snippet after this list.
- The previous steps cause a huge increase in the number of variables; indeed, $L$ new predictors (the lagged variables in the past) are produced for each starting variable (each column of $X'_i$), so that $p = L \cdot c$. Unfortunately, this cardinality growth may have collateral effects on the prediction accuracy, since a large data dimension leads to the previously described "curse of dimensionality", which critically hinders the proper operation of learning models. For this reason, techniques for cardinality reduction, such as PCA, and for feature selection, such as mRMR, were considered. As described, the main difference between them is that the former produces a new set of uncorrelated variables in the PCA domain, whilst the latter extracts the variables that are most correlated with, and least redundant with respect to, a target variable, without transforming the original dataset. The reduced predictor training and test matrices are denoted by $\tilde{P}_{tr}$ and $\tilde{P}_{ts}$; the fourth snippet after this list sketches the selection.
- Two machine-learning models, Lazy Learning [60] and Random Forest regression [61], are assessed in this methodology. Random Forest (RF) originates from bootstrap aggregation (bagging), a technique aimed at reducing the variance of the prediction function by averaging several prediction functions trained on samples randomly extracted from the dataset. RF extends this concept to the features in order to build decorrelated trees, where a random subset of variables is considered at each split. On the contrary, a Lazy Learning model such as K-Nearest Neighbors is based on local regression, where the predictor training set is used to extract the nearest neighbor samples given a query sample. These neighbors and the corresponding targets are then used to build a local learner that supplies the prediction. Since the nearest neighbors are chosen according to a distance metric, cardinality reduction is crucial to reduce the number of dimensions (features) entering the distance computation. According to the multi-step nature of the problem, a direct strategy was applied which, even if it requires more computational effort than an iterative approach, is less subject to error accumulation. Hence, the multi-step load forecasting problem was decomposed into $H$ MISO problems, one for each time step ahead; the fifth snippet after this list sketches this strategy.
- An exhaustive validation of the proposed methodology requires testing on a large number of cases in order to appreciate the spread of the accuracy performance as the training and test sets change. For this reason, a time-rolling window validation was employed to slice the dataset into the ith training and test sets, according to a sequence of splitting points; the sixth snippet after this list sketches the splitting scheme.
- The model performance data were analyzed in order to assess the effectiveness of the proposed methodology, with a Naive model considered as a benchmark. The tests were performed by progressively increasing the forecasting horizon. The MSE was computed for each jth sample (row of $T_{ts}$) and each wth target variable over the considered forecasting horizon span, according to (5):
$$\mathrm{MSE}_{j,w} = \frac{1}{H}\sum_{h=1}^{H}\left(y_{j,w,h} - \hat{y}_{j,w,h}\right)^{2} \quad (5)$$
where $y_{j,w,h}$ and $\hat{y}_{j,w,h}$ denote the measured and predicted values $h$ steps ahead.
- The aggregated results are analyzed by means of statistical tests, such as the Friedman test [62]. The aim is to assess whether the models perform differently or not. In particular, the Friedman test is a non-parametric randomized-block analysis of variance, where the null hypothesis is that all methods have the same error distribution. The test does not make any assumption about the data distribution. If the test rejects the null hypothesis, a Tukey-based post hoc test is performed in order to analyze the difference in performance between each pair of models. In particular, Tukey's test supplies an upper triangular matrix whose elements are sorted by an accuracy rank. This information is processed to produce useful visualizations for the choice of the best model. The last snippet after this list sketches the error computation and the Friedman test.
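The following snippets illustrate the steps above, in order. First, the date–time preprocessing: a minimal pandas example that decodes a timestamp column into codified calendar predictors. The file name `load.csv` and the column names are hypothetical.

```python
import pandas as pd

# Hypothetical file and column names: a "timestamp" string column
# plus one or more load columns.
df = pd.read_csv("load.csv", parse_dates=["timestamp"])

# Decode the date-time string into numeric calendar predictors.
df["month"] = df["timestamp"].dt.month
df["day_of_week"] = df["timestamp"].dt.dayofweek          # 0 = Monday
df["day_of_month"] = df["timestamp"].dt.day
df["hour"] = df["timestamp"].dt.hour
df["is_weekend"] = (df["day_of_week"] >= 5).astype(int)
df["season"] = df["month"].map(lambda m: (m % 12) // 3)   # 0 = winter
```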
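Second, the feature engineering step: a sketch that expands each raw column into the smoothed variables of Table 1. The rolling upper and lower bounds are taken here as the window maximum and minimum, which is an assumption; the window size m = 24 and the quantile p = 0.9 are illustrative choices.

```python
import pandas as pd

def engineer_features(df, m=24, p=0.9):
    """Expand each raw column of df into the smoothed variables of
    Table 1; m and p are illustrative choices."""
    out = {}
    for col in df.columns:
        roll = df[col].rolling(window=m)
        out[f"{col}_mean"] = roll.mean()            # smoothing average
        out[f"{col}_upper"] = roll.max()            # rolling upper bound
        out[f"{col}_lower"] = roll.min()            # rolling lower bound
        out[f"{col}_diff"] = df[col].diff()         # 1st order difference
        out[f"{col}_absdiff"] = df[col].diff().abs()
        out[f"{col}_quantile"] = roll.quantile(p)   # p-quantile
    # X' gains q = 6 engineered columns per raw variable.
    return pd.concat([df, pd.DataFrame(out)], axis=1).dropna()
```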
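Third, the embedding procedure, sketched for a single target variable (so that r = H); the index arithmetic is one plausible reading of the lag L, delay d, and horizon H defined above.

```python
import numpy as np

def embed(X, y, L, d, H):
    """Build predictor/target matrices from a series.
    X: (N, v) predictor series, y: (N,) single target series,
    L: auto-regressive lag, d: delay, H: forecasting horizon."""
    P, T = [], []
    for t in range(L - 1 + d, X.shape[0] - H):
        window = X[t - d - L + 1 : t - d + 1]   # L past samples, delayed by d
        P.append(window.ravel())                # p = L * v predictors
        T.append(y[t + 1 : t + 1 + H])          # r = H targets
    return np.array(P), np.array(T)

# Example: 24-sample process memory, no delay, 6 steps ahead.
# P, T = embed(values, values[:, 0], L=24, d=0, H=6)
```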
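Fourth, cardinality reduction. PCA was sketched in Section 3; for feature selection, the snippet below is a simplified greedy approximation of the mRMR criterion of [52]: relevance is estimated with mutual information, while redundancy, which mRMR also measures with mutual information, is replaced here by absolute correlation for brevity.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(P, y, f):
    """Greedily rank f features by relevance to y minus the mean
    redundancy with the features already selected."""
    relevance = mutual_info_regression(P, y)
    selected, remaining = [], list(range(P.shape[1]))
    while len(selected) < f:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = np.mean([abs(np.corrcoef(P[:, j], P[:, s])[0, 1])
                                  for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected   # column indices of the reduced predictor matrix
```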
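Fifth, the learning stage: a sketch of the direct multi-step strategy with Random Forest, one independent MISO model per forecasting step. The number of trees is an illustrative hyper-parameter; the Lazy Learning alternative would swap in a KNN-based local regressor such as scikit-learn's KNeighborsRegressor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_direct(P_train, T_train):
    """One independent MISO Random Forest per step h = 1..H."""
    return [RandomForestRegressor(n_estimators=100).fit(P_train, T_train[:, h])
            for h in range(T_train.shape[1])]

def predict_direct(models, P_test):
    # Column h of the result is the (h+1)-step-ahead forecast.
    return np.column_stack([m.predict(P_test) for m in models])
```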
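Sixth, the time-rolling window validation: a generator of successive training/test index ranges along the splitting-point sequence. The window sizes and step in the example are assumptions.

```python
def rolling_splits(n, n_train, n_test, step):
    """Yield (train, test) index ranges of a time-rolling window that
    slides forward by `step` samples at every split point."""
    start = 0
    while start + n_train + n_test <= n:
        yield (range(start, start + n_train),
               range(start + n_train, start + n_train + n_test))
        start += step

# Example: splits = list(rolling_splits(n=10_000, n_train=2_000,
#                                       n_test=500, step=500))
```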
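Finally, the performance assessment: the per-sample MSE of Equation (5) followed by the Friedman test on aggregated errors. The error vectors below are synthetic stand-ins for real results, and the pairwise post hoc step is only indicated in a comment, since the paper relies on a Tukey-based test.

```python
import numpy as np
from scipy.stats import friedmanchisquare

def mse_over_horizon(T_true, T_hat):
    """Per-sample MSE over the H-step horizon, as in Equation (5);
    T_true and T_hat have shape (n_test, H) for one target variable."""
    return np.mean((T_true - T_hat) ** 2, axis=1)

# Synthetic stand-ins for the per-case errors of three models
# (e.g., Naive, Random Forest, Lazy Learning), aligned case by case.
rng = np.random.default_rng(1)
e_naive, e_rf, e_ll = (rng.gamma(2.0, s, 50) for s in (1.2, 0.8, 0.9))

stat, p_value = friedmanchisquare(e_naive, e_rf, e_ll)
if p_value < 0.05:
    # Equal-error-distribution hypothesis rejected: run a pairwise
    # post hoc analysis (the paper uses a Tukey-based test).
    print("Models differ significantly; proceed with post hoc ranking.")
```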
5. Case Study
6. Experimental Results
6.1. Case A
6.2. Case B
7. Critical Discussion
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
AI | Artificial Intelligence
AMI | Advanced Metering Infrastructure
ANN | Artificial Neural Network
CNN | Convolutional Neural Network
DG | Distributed Generation
ESN | Echo State Network
GRU | Gated Recurrent Unit
KDD | Knowledge Discovery in Databases
LSTM | Long Short-Term Memory unit
mRMR | Minimum Redundancy Maximum Relevance
PCA | Principal Component Analysis
PMU | Phasor Measurement Unit
PSOPE | Power System Operation, Planning, and Economics
RNN | Recurrent Neural Network
SCADA | Supervisory Control And Data Acquisition
WAMS | Wide Area Measurement System
References
- Madani, V.; King, R.L. Strategies and roadmaps to meet grid challenges for safety and reliability. In Innovations in Power Systems Reliability; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–11. [Google Scholar]
- King, R.L. Information services for smart grids. In Proceedings of the 2008 IEEE Power and Energy Society General Meeting-Conversion and Delivery of Electrical Energy in the 21st Century, Pittsburgh, PA, USA, 20–24 July 2008; pp. 1–5. [Google Scholar]
- Zobaa, A.F.; Vaccaro, A.; Lai, L.L. Guest editorial enabling technologies and methodologies for knowledge discovery and data mining in smart grids. IEEE Trans. Ind. Inform. 2016, 12, 820–823. [Google Scholar] [CrossRef] [Green Version]
- Loia, V.; Furno, D.; Vaccaro, A. Decentralised smart grids monitoring by swarm-based semantic sensor data analysis. Int. J. Syst. Control. Commun. 2013, 5, 1–14. [Google Scholar] [CrossRef]
- Vaccaro, A.; Cañizares, C.A. A Knowledge-Based Framework for Power Flow and Optimal Power Flow Analyses. IEEE Trans. Smart Grid 2018, 9, 230–239. [Google Scholar] [CrossRef]
- Gu, Y.; Jiang, H.; Zhang, Y.; Zhang, J.J.; Gao, T.; Muljadi, E. Knowledge discovery for smart grid operation, control, and situation awareness—A big data visualization platform. In Proceedings of the 2016 North American Power Symposium (NAPS), Denver, CO, USA, 18–20 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Xu, Y.; Zhang, Y.; Dong, Z.Y.; Zhang, R. Intelligent Systems for Stability Assessment and Control of Smart Power Grids: Security Analysis, Optimization, and Knowledge Discovery; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
- Rodrigues, P.P.; Gama, J. Holistic distributed stream clustering for smart grids. In Proceedings of the Workshop on Ubiquitous Data Mining, Montpellier, France, 27 August 2012; p. 18. [Google Scholar]
- Shanmuganathan, S. From data mining and knowledge discovery to big data analytics and knowledge extraction for applications in science. J. Comput. Sci. 2014, 10, 2658–2665. [Google Scholar] [CrossRef] [Green Version]
- Green, R.C.; Wang, L.; Alam, M. Applications and trends of high performance computing for electric power systems: Focusing on smart grid. IEEE Trans. Smart Grid 2013, 4, 922–931. [Google Scholar] [CrossRef]
- Soroudi, A.; Amraee, T. Decision making under uncertainty in energy systems: State of the art. Renew. Sustain. Energy Rev. 2013, 28, 376–384. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, N.; Kang, C.; Miao, M.; Shi, R.; Xia, Q. An efficient approach to power system uncertainty analysis with high-dimensional dependencies. IEEE Trans. Power Syst. 2017, 33, 2984–2994. [Google Scholar] [CrossRef]
- Bhattarai, B.P.; Paudyal, S.; Luo, Y.; Mohanpurkar, M.; Cheung, K.; Tonkoski, R.; Hovsapian, R.; Myers, K.S.; Zhang, R.; Zhao, P.; et al. Big data analytics in smart grids: State-of-the-art, challenges, opportunities, and future directions. IET Smart Grid 2019, 2, 141–154. [Google Scholar] [CrossRef]
- Allam, Z.; Dhunny, Z.A. On big data, artificial intelligence and smart cities. Cities 2019, 89, 80–91. [Google Scholar] [CrossRef]
- Piatetsky-Shapiro, G.; Fayyad, U.; Smyth, P. From data mining to knowledge discovery: An overview. Adv. Knowl. Discov. Data Min. 1996, 1, 35. [Google Scholar]
- Kamiński, B.; Jakubczyk, M.; Szufel, P. A framework for sensitivity analysis of decision trees. Cent. Eur. J. Oper. Res. 2018, 26, 135–159. [Google Scholar] [CrossRef] [PubMed]
- Elith, J.; Leathwick, J.R.; Hastie, T. A working guide to boosted regression trees. J. Anim. Ecol. 2008, 77, 802–813. [Google Scholar] [CrossRef] [PubMed]
- Bontempi, G.; Ben Taieb, S. Statistical Foundations of Machine Learning; Université Libre de Bruxelles: Bruxelles, Belgium, 2008. [Google Scholar]
- Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Pearson Education, Inc.: Hoboken, NJ, USA, 2009. [Google Scholar]
- Jang, J.S.; Sun, C.T.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence; Prentice-Hall: Upper Saddle River, NJ, USA, 1997. [Google Scholar]
- Rosato, A.; Rosa, A.; Araneo, R.; Panella, M. Prediction in Photovoltaic Power by Neural Networks. Energies 2017, 10, 1003. [Google Scholar] [CrossRef] [Green Version]
- Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chollet, F.J.; Allaire, J. Deep Learning with R; Manning Publication: Shelter Island, NY, USA, 2018. [Google Scholar]
- Zhang, D.; Han, X.; Deng, C. Review on the research and practice of deep learning and reinforcement learning in smart grids. CSEE J. Power Energy Syst. 2018, 4, 362–370. [Google Scholar] [CrossRef]
- Dong, X.; Qian, L.; Huang, L. Short-term load forecasting in smart grid: A combined CNN and K-means clustering approach. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju Island, Korea, 13–16 February 2017; pp. 119–125. [Google Scholar]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Cogn. Model. 1988, 5, 1. [Google Scholar] [CrossRef]
- Jaeger, H. A Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the “Echo State Network” Approach; Technical report; German National Research Center for Information Technology: Sankt Augustin, Germany, 2005. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Kolodner, J. Case-Based Reasoning; Morgan Kaufmann: Burlington, MA, USA, 2014. [Google Scholar]
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
- Bellman, R.E. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
- Jensen, F.V. An Introduction to Bayesian Networks; UCL Press: London, UK, 1996; Volume 210. [Google Scholar]
- De Raedt, L. A perspective on inductive logic programming. In The Logic Programming Paradigm; Springer: Berlin/Heidelberg, Germany, 1999; pp. 335–346. [Google Scholar]
- Arghandeh, R.; Zhou, Y. Big Data Application in Power Systems; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
- Cai, J.Y.; Huang, Z.; Hauer, J.; Martin, K. Current status and experience of WAMS implementation in North America. In Proceedings of the 2005 IEEE/PES Transmission & Distribution Conference & Exposition: Asia and Pacific, Dalian, China, 15–18 August 2005; pp. 1–7. [Google Scholar]
- Zhou, Y.; Arghandeh, R.; Konstantakopoulos, I.; Abdullah, S.; von Meier, A.; Spanos, C.J. Abnormal event detection with high resolution micro-PMU data. In Proceedings of the 2016 Power Systems Computation Conference (PSCC), Genoa, Italy, 20–24 June 2016; pp. 1–7. [Google Scholar]
- Zhang, J.; Chen, Z. The impact of AMI on the future power system. Autom. Electr. Power Syst. 2010, 34, 20–23. [Google Scholar]
- Azimi, R.; Ghofrani, M.; Ghayekhloo, M. A hybrid wind power forecasting model based on data mining and wavelets analysis. Energy Convers. Manag. 2016, 127, 208–225. [Google Scholar] [CrossRef]
- De Caro, F.; Vaccaro, A.; Villacci, D. Spatial and Temporal Wind Power Forecasting by Case-Based Reasoning Using Big-Data. Energies 2017, 10, 252. [Google Scholar] [CrossRef] [Green Version]
- Quan, H.; Srinivasan, D.; Khambadkone, A.M.; Khosravi, A. A computational framework for uncertainty integration in stochastic unit commitment with intermittent renewable energy sources. Appl. Energy 2015, 152, 71–82. [Google Scholar] [CrossRef]
- Singh, C.; Wang, L. Role of artificial intelligence in the reliability evaluation of electric power systems. Turk. J. Electr. Eng. Comput. Sci. 2008, 16, 189–200. [Google Scholar]
- Tso, S.; Lin, J.; Ho, H.; Mak, C.; Yung, K.; Ho, Y. Data mining for detection of sensitive buses and influential buses in a power system subjected to disturbances. IEEE Trans. Power Syst. 2004, 19, 563–568. [Google Scholar] [CrossRef]
- Rosato, A.; Panella, M.; Araneo, R. A Distributed Algorithm for the Cooperative Prediction of Power Production in PV Plants. IEEE Trans. Energy Convers. 2019, 34, 497–508. [Google Scholar] [CrossRef]
- Rosato, A.; Panella, M.; Araneo, R.; Andreotti, A. A Neural Network Based Prediction System of Distributed Generation for the Management of Microgrids. IEEE Trans. Ind. Appl. 2019, 55, 7092–7102. [Google Scholar] [CrossRef]
- Deka, D.; Chertkov, M. Topology Learning in Radial Distribution Grids. In Big Data Application in Power Systems; Elsevier: Amsterdam, The Netherlands, 2018; pp. 261–279. [Google Scholar]
- Wang, Y.; Zhang, N.; Chen, Q.; Kirschen, D.S.; Li, P.; Xia, Q. Data-driven probabilistic net load forecasting with high penetration of behind-the-meter PV. IEEE Trans. Power Syst. 2017, 33, 3255–3264. [Google Scholar] [CrossRef]
- Chicco, G. Overview and performance assessment of the clustering methods for electrical load pattern grouping. Energy 2012, 42, 68–80. [Google Scholar] [CrossRef]
- Lu, X.; Dong, Z.Y.; Li, X. Electricity market price spike forecast with data mining techniques. Electr. Power Syst. Res. 2005, 73, 19–29. [Google Scholar] [CrossRef]
- García, S.; Luengo, J.; Herrera, F. Data Preprocessing in Data Mining; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
- Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
- Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
- DuMouchel, W.; Volinsky, C.; Johnson, T.; Cortes, C.; Pregibon, D. Squashing flat files flatter. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 15–18 August 1999; pp. 6–15. [Google Scholar]
- Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
- Cai, L.; Thornhill, N.F.; Kuenzel, S.; Pal, B.C. Wide-Area Monitoring of Power Systems Using Principal Component Analysis and k-Nearest Neighbor Analysis. IEEE Trans. Power Syst. 2018, 33, 4913–4923. [Google Scholar] [CrossRef] [Green Version]
- Qiao, S.; Wang, P.; Tao, T.; Shrestha, G. Maximizing profit of a wind genco considering geographical diversity of wind farms. IEEE Trans. Power Syst. 2014, 30, 2207–2215. [Google Scholar] [CrossRef]
- Cover, T.M. The Best Two Independent Measurements Are Not the Two Best. IEEE Trans. Syst. Man, Cybern. 1974, SMC-4, 116–117. [Google Scholar] [CrossRef]
- Jain, A.K.; Duin, R.P.W.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 4–37. [Google Scholar] [CrossRef] [Green Version]
- Birattari, M.; Bontempi, G.; Bersini, H. Lazy learning meets the recursive least squares algorithm. Adv. Neural Inf. Process. Syst. 1999, 11, 375–381. [Google Scholar]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar]
- Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
- Jeng, J.C. Adaptive process monitoring using efficient recursive PCA and moving window PCA algorithms. J. Taiwan Inst. Chem. Eng. 2010, 41, 475–481. [Google Scholar] [CrossRef]
- De Caro, F.; De Stefani, J.; Bontempi, G.; Vaccaro, A.; Villacci, D. Robust Assessment of Short-Term Wind Power Forecasting Models on Multiple Time Horizons. Technol. Econ. Smart Grids Sustain. Energy 2020, 5, 1–15. [Google Scholar] [CrossRef]
Feature | Equation | Notes |
---|---|---|
smoothing average | $\frac{1}{m}\sum_{i=0}^{m-1} x_{t-i,z}$ | $t$ and $z$ are the generic time sample and raw variable, respectively; $m$ is the size of the rolling window |
rolling upper bound | | |
rolling lower bound | | |
1st order difference | $x_{t,z} - x_{t-1,z}$ | |
absolute 1st order difference | $\lvert x_{t,z} - x_{t-1,z} \rvert$ | |
p-quantile | $Q_p(X)$ | where $X$ is the population of $x_{z}$ within the rolling window |
Matrix | No. of Samples (Rows) | No. of Variables (Columns) | Notes |
---|---|---|---|
$X$ | $N$ | $v$ | raw signal matrix |
$X'$ | $N$ | $c$ | signal matrix after the feature engineering process; $q$ is the number of features made per variable of $X$, so $c = q \cdot v$ |
$X'_i$ | $n$ | $c$ | slice of $X'$ used in the ith case test |
$P$ | $n'$ | $p$ | predictor matrix; $p = L \cdot c$ |
$T$ | $n'$ | $r$ | target matrix; $r = v_t \cdot H$, where $v_t$ is the number of target variables |
$P_{tr}$ | $n_{tr}$ | $p$ | training predictor matrix |
$P_{ts}$ | $n_{ts}$ | $p$ | test predictor matrix |
$T_{tr}$ | $n_{tr}$ | $r$ | training target matrix |
$T_{ts}$ | $n_{ts}$ | $r$ | test target matrix |
$\tilde{P}_{tr}$ | $n_{tr}$ | $f$ | reduced training predictor matrix; $f$ is the number of features/Principal Components selected by applying mRMR/PCA |
$\tilde{P}_{ts}$ | $n_{ts}$ | $f$ | reduced test predictor matrix |
Parameter | Value | Parameter | Value |
---|---|---|---|
H | | d | 0 |
L | | f | 5 |
 | 5 | | ∼8800 |
 | 20 | c | ∼100 |
 | 1 | p | |
 | ∼2000 | | ∼4 |
Parameter | Value | Parameter | Value |
---|---|---|---|
H | | d | 0 |
L | 5 | | |
 | 5 | | ∼600 |
 | 20 | c | ∼100 |
 | 1 | p | |
 | ∼700 | | ∼4 |