Article

Comparative Evaluation of Non-Intrusive Load Monitoring Methods Using Relevant Features and Transfer Learning

by Sarra Houidi 1, Dominique Fourer 1,*, François Auger 2, Houda Ben Attia Sethom 3 and Laurence Miègeville 2
1 Laboratoire IBISC (Informatique, BioInformatique, Systèmes Complexes), EA 4526, University Evry/Paris-Saclay, 91020 Evry-Courcouronnes, France
2 Institut de Recherche en Energie Electrique de Nantes Atlantique (IREENA), EA 4642, University of Nantes, 44602 Saint-Nazaire, France
3 Laboratoire des Systèmes Electriques, Université de Tunis El Manar, Tunis 1002, Tunisia
* Author to whom correspondence should be addressed.
Energies 2021, 14(9), 2726; https://doi.org/10.3390/en14092726
Submission received: 16 March 2021 / Revised: 16 April 2021 / Accepted: 23 April 2021 / Published: 10 May 2021
(This article belongs to the Special Issue Data-Driven Energy-Cost Analysis of HVAC System for Buildings)

Abstract:
Non-Intrusive Load Monitoring (NILM) refers to the analysis of the aggregated current and voltage measurements of Home Electrical Appliances (HEAs) recorded by the house electrical panel. Such methods aim to identify each HEA for a better control of the energy consumption and for future smart grid applications. Here, we are interested in an event-based NILM pipeline, and particularly in the HEAs’ recognition step. This paper focuses on the selection of relevant and understandable features for efficiently discriminating distinct HEAs. Our contributions are manifold. First, we introduce a new publicly available annotated dataset of individual HEAs described by a large set of electrical features computed from current and voltage measurements in steady-state conditions. Second, we investigate through a comparative evaluation a large number of new methods resulting from the combination of different feature selection techniques with several classification algorithms. To this end, we also investigate an original feature selection method based on a deep neural network architecture. Then, through a machine learning framework, we study the benefits of these methods for improving Home Electrical Appliance (HEA) identification in a supervised classification scenario. Finally, we introduce new transfer learning results, which confirm the relevance and the robustness of the selected features learned from our proposed dataset when they are transferred to a larger dataset. As a result, the best investigated methods outperform the previous state-of-the-art results and reach a maximum recognition accuracy above 99% on the PLAID evaluation dataset.

1. Introduction

Over the last decades, electricity consumption in the residential sector has increased steadily with the worldwide population growth and has become a major ecological issue. Prior studies show that real-time feedback down to the HEA level can help to effectively reduce consumption, with almost 15% of energy savings [1]. For consumers, the main advantages are the control and the understanding of their electricity consumption through transparent access and promptly forwarded information. For utilities, it can improve the load-forecasting accuracy and provide a basic scheme to set up energy management strategies [2]. In this context, Non-Intrusive Load Monitoring (NILM) methods offer an efficient answer, since they provide a breakdown of the residential energy consumption without instrumenting each HEA.
Here, we are interested in event-based NILM systems, where the current and voltage measurements are recorded using a single sensor connected to the house electrical panel [3,4,5,6,7]. An event detection method is used to detect the changes in the aggregated power signals that occur at each HEA operating state change [6]. Then, relevant features that meet the additivity criterion [8] (which is required for the subsequent steps) are computed to recognize, using a pattern matching method, the electrical signature of the HEA that triggered the event.
NILM encounters several challenges that concern the correct identification of HEAs. Indeed, a house may contain a broad range of HEAs, and many of them can exhibit the same electrical behavior [9,10,11]. Hence, discerning the most relevant and informative features is of paramount importance for any NILM application, whose performance depends on the uniqueness of the HEA signature [4,9,12]. This is why an HEA must be described by a reduced number of relevant features. To date, prior studies have focused on HEA recognition performance and often involve features that are not physics-related and are difficult to interpret. This is the case for the features provided by deep convolutional neural networks (CNN), which can suffer from robustness issues with adversarial examples [13,14] and require the use of attention mechanisms [15]. However, only a few works investigate in detail the role and the meaning of Feature Selection (FS) methods in NILM problems when addressed through a pattern recognition approach [16,17,18,19,20].
This study aims at filling this gap by investigating different FS techniques to tackle the HEA identification problem in a supervised machine learning scenario, when a large number of electrical features are available and when HEAs from distinct manufacturers belonging to the same device category are considered. The goal is to show the benefits provided by FS in terms of HEA classification performance and interpretability, and in terms of generalization capability through reduced overfitting of the trained models. Transfer learning [21], which deals with the generalization capability of the selected features across different datasets, is therefore also an important NILM challenge investigated in this study.
This paper is organized as follows. In Section 2, the addressed problem is formulated. In Section 3, we introduce the two HEA datasets considered in this study. In particular, we present a novel dataset of HEA current and voltage measurements in steady-state conditions, and we detail the electrical features computed from these measurements. In Section 4, several FS methods are presented; two novel ones are detailed: a heuristic forward method and a method based on a trained dense neural network. Finally, in Section 5, the selected features are used in combination with several classification algorithms on both considered datasets to demonstrate the importance of selecting a suitable combination of features and classification algorithm. The paper is concluded with future work directions in Section 6.

2. Problem Statement

2.1. Supervised HEA Identification

From the values of a set of features describing a unique HEA signature, we aim at identifying the closest pre-registered HEA, referred to (according to the machine learning literature) as a class. To this end, we use a supervised machine learning framework trained on an annotated dataset. The overall HEAs’ recognition problem can be illustrated in the flowchart of Figure 1, which also includes references to the paper sections where each method is detailed. During the training step, several features (or descriptors) deduced from voltage and current HEAs’ measurements are first computed and used to derive a discriminative model for each HEA. Then, the most discriminant features are automatically selected to obtain the optimal classification accuracy. During the test and operating procedures, the selected features are computed from the observed measurements to predict the class of the corresponding HEA.
Since this work is motivated by HEA recognition for energy consumption estimation and prediction, the same appliance type can be considered as a different class when the HEA is recorded at different power levels, or when its energy consumption sufficiently differs due to a different manufacturing brand. Hence, we use a larger and more complicated classification taxonomy than those commonly used for HEA identification in the literature. On the other hand, we only consider loads in steady-state condition, because switching on (or off) introduces transient signals or fluctuations which are often not sufficient to accurately characterize an HEA. In fact, transients are also affected by HEA-independent factors such as the instant on the voltage waveform at which the HEA is switched on/off, the network impedance, the supply voltage distortion, the sampling frequency or the switching on/off mechanism [22]. To deal with transient signals and state change detection for multiple HEA recognition, the reader can for example refer to a multivariate statistical approach recently proposed in [6,23].

2.2. Features Selection for HEA Identification

Let $F = \{f_1, f_2, \ldots, f_p\}$, with $f_i \in \mathbb{R}$, $i \in [1, p]$, be the overall set of $p$ features. The FS process aims at finding the optimal subset of features $F^\star \subseteq F$ that maximizes the identification accuracy, such that its cardinality verifies $\mathrm{card}(F^\star) = d \leq p$ [24]. This process induces several benefits, such as avoiding the curse of dimensionality and the overfitting phenomenon [25], improving the classification performance with the removal of non-discriminating features and decreasing the computational cost by only collecting the needed features. In contrast to other dimensionality reduction techniques [24], FS does not change the original meaning of the features and therefore allows further interpretation by a domain expert. Our work consists in evaluating several feature selection techniques applied to an HEA recognition scenario, in terms of recognition accuracy and in terms of robustness, by investigating two distinct datasets and several classification methods.

2.3. Transfer Learning

The last challenge addressed in this study concerns transfer learning [21], which consists in using the knowledge extracted from a given dataset to tackle a new problem based on a different dataset with a different setup and classification taxonomy. The investigated datasets can be of different natures, since they use different recording protocols, different grid properties and different annotation taxonomies. The main motivation is to show the validity and the generalization capability of the features selected from one dataset to another. To this end, we introduce a novel dataset recorded on a French grid (utility frequency of 50 Hz), which differs from the other existing publicly available datasets such as PLAID [26], recorded on the US grid with a utility frequency of 60 Hz. Moreover, our new proposed dataset contains a large variety of HEA types, including new ones such as LCD and plasma TVs, coffee makers, and ovens.

3. Materials

3.1. HEAs Datasets

To test the performance of an NILM technique, it is important to consider real data. Due to the challenges in the creation of such datasets, mainly related to the required time and the high costs involved, we investigated in this study two datasets that are freely available: the PLAID dataset [26], which is shared online and used as a common reference, and our novel publicly available dataset, freely accessible at http://dx.doi.org/10.21227/ww76-d733. Both datasets contain individual HEA measurements, which are convenient for extracting features, training models, conducting performance evaluation and performing benchmarking on a common basis. Some existing datasets include scenarios of multiple simultaneous loads [27]. However, before conducting disaggregation (i.e., decomposing the whole energy consumption of a dwelling into the energy usage of individual HEAs), it is important to build an initial signature database, which is key to many NILM techniques.
For both datasets considered in this study, a class of HEA corresponds to a brand of a category of HEA, e.g., the class “Incandescent light bulb-Electrix-soft white”, which is distinct from the class “Incandescent light bulb-Philips Duramax”. Furthermore, we only consider steady-state conditions, and we extracted for each class of HEA in both datasets several periods of the current and voltage steady-state waveforms. Indeed, when an HEA is switched on or off, momentary fluctuations of the current and voltage signals occur before settling to a steady-state value. These fluctuations are called transients and can characterize a given HEA. However, one major drawback of switching transients is their poor reproducibility, since they can be affected by HEA-independent factors.

3.1.1. PLAID Dataset

This dataset [26] contains current and voltage measurements sampled at 30 kHz from 11 different HEA types present in more than 60 households in Pittsburgh, Pennsylvania, USA. The goal of this dataset is to provide a public library of high-frequency measurements that can be used to assess existing or novel HEA classification algorithms. The 11 categories of HEAs are: air conditioner, compact fluorescent lamp, fridge, hairdryer, laptop, microwave, washing machine, bulb, vacuum, fan, and heater. Each category of HEA is represented by more than ten different instances. For each HEA, three to six measurements are collected for each state transition (i.e., on/off changing state). As mentioned, we only consider the steady-state operations, and we extracted for each class of HEA in the PLAID dataset several periods of the current and voltage steady-state waveforms, such that we have a total of 71 HEAs (with different categories and brands) and n = 36,720 distinct recordings (also called individuals in the statistical terminology).

3.1.2. New Proposed Dataset

We introduce a novel publicly available dataset containing 24 categories of HEAs (e.g., fans, fridges, washers, etc.) considered with distinct brands (35 types) and recorded at several power levels, leading to a total of 61 HEAs. For HEAs with a wide variety of operational programs or adjustable settings, such as temperature or intensity, we recorded all power consumption patterns and considered that an HEA power level corresponds to a specific class of HEA, although the power consumption patterns refer to the same device. Figure 2 lists the considered categories of HEAs.
The HEAs have been recorded in steady-state conditions on a French 50 Hz electrical grid. The measurement setup consists of an AC current probe (E3N Chauvin Arnoux) with a 10 mV/A sensitivity and a differential voltage probe with a 1/100 attenuation. Voltage and current waveforms were captured by an 8-bit resolution digital oscilloscope (RIGOL DS1104Z). The sampling rate was set to $f_{s1} = 250$ kHz for some of the recordings and to $f_{s2} = 50$ kHz for the others. The electrical assembly and instrumentation are depicted in Figure 3. Each HEA of this dataset is described by 8 periods of the current and voltage steady-state waveforms, resulting in a set of $n = 8 \times 61 = 488$ distinct individuals.

3.2. Electrical Features Computed from Current and Voltage Measurements

We can describe the different HEAs using 90 features extracted at each voltage period and summarized in Table 1. The detail of their computation was introduced in [19], based on the latest IEEE 1459-2010 standard for the definition of single-phase physical components under non-sinusoidal conditions [28,29]. From the voltage $v(t)$ and current $i(t)$ signals, we compute the Fourier coefficients $v_{a_k}$, $v_{b_k}$, $i_{a_k}$ and $i_{b_k}$, using the following formulas:
$$x_{a_k} = \frac{2}{M} \sum_{m=0}^{M-1} x[m] \cos\!\left(2\pi m k f_0 / f_s\right), \qquad x_{b_k} = \frac{2}{M} \sum_{m=0}^{M-1} x[m] \sin\!\left(2\pi m k f_0 / f_s\right)$$
where $x[m] = x(m/f_s)$ is the sampled discrete-time signal, $f_s$ is the sampling frequency, $f_0$ the utility frequency (e.g., 50 Hz in France) and $M = f_s/f_0$. From these Fourier coefficients, the following features can be computed for any harmonic rank $k \in \mathbb{N}$.
  • The Root Mean Squared (RMS) value of the $k$-th harmonic component of the voltage $V_k$ and current $I_k$, and their sums $V_H$ and $I_H$:
    $$V_k = \sqrt{\frac{v_{a_k}^2 + v_{b_k}^2}{2}}, \quad I_k = \sqrt{\frac{i_{a_k}^2 + i_{b_k}^2}{2}}, \quad V_H = \sqrt{\sum_{k=2}^{15} V_k^2}, \quad I_H = \sqrt{\sum_{k=2}^{15} I_k^2}$$
  • The RMS voltage $V$ and current $I$:
    $$V = \sqrt{V_1^2 + V_H^2}, \quad I = \sqrt{I_1^2 + I_H^2}$$
  • The $k$-th harmonic component of the active, reactive, and apparent powers $P_k$, $Q_k$, $S_k$, and their sums $P_H$, $Q_H$, $S_H$:
    $$P_k = \frac{1}{2}\left(v_{a_k} i_{a_k} + v_{b_k} i_{b_k}\right), \quad P_H = \sum_{k=2}^{15} P_k$$
    $$Q_k = \frac{1}{2}\left(v_{a_k} i_{b_k} - v_{b_k} i_{a_k}\right), \quad Q_H = \sum_{k=2}^{15} Q_k$$
    $$S_k = V_k \times I_k, \quad S_H = V_H \times I_H$$
  • The active, reactive, apparent, and distortion powers $P$, $Q$, $S$, and $D$:
    $$P = P_1 + P_H, \quad Q = Q_1 + Q_H, \quad S = V \times I, \quad D = \sqrt{S^2 - P^2 - Q^2}$$
  • The voltage and current total harmonic distortions $\mathrm{THD}_V$ and $\mathrm{THD}_I$:
    $$\mathrm{THD}_V = \frac{V_H}{V_1}, \quad \mathrm{THD}_I = \frac{I_H}{I_1}$$
  • The voltage and current distortion powers $D_V$ and $D_I$:
    $$D_V = V_H I_1, \quad D_I = V_1 I_H$$
  • The non-fundamental apparent power $S_N$:
    $$S_N = \sqrt{D_I^2 + D_V^2 + S_H^2}$$
  • The voltage and current crest factors, for $m \in [0, M-1]$:
    $$F_{C_V} = \frac{\max |v[m]|}{V}, \quad F_{C_I} = \frac{\max |i[m]|}{I}$$
  • Finally, the global and harmonic power factors $F_p$ and $F_{p_k}$:
    $$F_p = \frac{P}{S}, \quad F_{p_k} = \frac{P_k}{S_k}$$
Since the NILM problem consists in the breakdown of an unknown mixture of HEAs into a set of identifiable HEA signatures possibly belonging to a database, it is important to consider the electrical features that meet the additivity criterion [8], such that:
$$f(v, i) = \sum_{n=1}^{N_s} f(v, i_n) \quad \text{for} \quad i = \sum_{n=1}^{N_s} i_n$$
where $f(v, i)$ is an electrical feature, $v$ is a vector of voltage samples, $i$ a vector of current samples acquired during one voltage period and $N_s$ is the number of HEAs that are simultaneously switched on. Thus, when an HEA is connected (resp. disconnected) to the power network, an “additive” feature is increased (resp. decreased) by an amount equal to that produced by this HEA operating individually. Among the 90 features, $p = 34$ features meet the additivity criterion and are reported in red in Table 1. This property is required for extracting HEA features from an aggregated signal and for comparing them with the dataset of individual HEA signatures. For example, a change detection method as proposed in [6] can be used in order to separate the distinct contributions of each HEA when several of them are switched on simultaneously. Hence, this study only focuses on the additive features computed for a unique observed HEA. Both datasets are stored in an $n \times p$ normalized matrix $\bar{X}$, where the $p = 34$ features detailed earlier are in columns and have been normalized to zero mean and unit standard deviation.
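To make the feature extraction step concrete, the following minimal NumPy sketch computes the Fourier coefficients above and a few of the additive features (harmonic active and reactive powers, harmonic RMS currents) from one period of voltage and current samples. The function names and default parameters (f0 = 50 Hz, fs = 50 kHz, 15 harmonic ranks) are illustrative choices and not taken from the original implementation.

import numpy as np

def fourier_coefficients(x, k, f0=50.0, fs=50e3):
    # Fourier coefficients x_ak, x_bk of harmonic rank k over one utility period
    M = int(fs / f0)                       # samples per utility period
    m = np.arange(M)
    x = np.asarray(x[:M], dtype=float)
    x_ak = (2.0 / M) * np.sum(x * np.cos(2 * np.pi * m * k * f0 / fs))
    x_bk = (2.0 / M) * np.sum(x * np.sin(2 * np.pi * m * k * f0 / fs))
    return x_ak, x_bk

def additive_features(v, i, f0=50.0, fs=50e3, k_max=15):
    # A few additive features (P_k, Q_k, I_k, P, Q) from one voltage/current period;
    # parameters are illustrative defaults, not the paper's exact implementation
    feats, P_H, Q_H = {}, 0.0, 0.0
    for k in range(1, k_max + 1):
        v_ak, v_bk = fourier_coefficients(v, k, f0, fs)
        i_ak, i_bk = fourier_coefficients(i, k, f0, fs)
        feats[f"P_{k}"] = 0.5 * (v_ak * i_ak + v_bk * i_bk)    # harmonic active power
        feats[f"Q_{k}"] = 0.5 * (v_ak * i_bk - v_bk * i_ak)    # harmonic reactive power
        feats[f"I_{k}"] = np.sqrt((i_ak ** 2 + i_bk ** 2) / 2.0)  # harmonic RMS current
        if k >= 2:
            P_H += feats[f"P_{k}"]
            Q_H += feats[f"Q_{k}"]
    feats["P"] = feats["P_1"] + P_H        # total active power
    feats["Q"] = feats["Q_1"] + Q_H        # total reactive power
    return feats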

4. Feature Selection

Existing approaches for feature selection can be categorized into filter-based methods and wrapper-based methods [30,31]. Filter-based methods perform FS independently of the classification process: each feature is individually assigned a relevance score, which is assumed to reflect its usefulness in the classification task, and the features are then sorted by descending order of relevance [32]. Filter-based methods are often computationally faster but are known to be less accurate than other approaches. On the other hand, wrapper-based methods use the classifier of interest to score feature subsets according to the classification accuracy. This allows selecting an optimal subset of features that maximizes the classification accuracy, at the expense of a higher computational cost [32].

4.1. Investigated Feature Selection Methods

This paper investigates the methods listed below, which are comparatively assessed in the remainder of the paper. More details are given for the sequential forward FS method and for our original contribution based on a DNN, while the other methods are briefly reviewed in light of the literature.

4.1.1. Existing Methods

  • Principal Component Analysis (PCA) can be used as a filter-based method based on the maximization of the dataset dispersion [33,34].
  • Linear Discriminant Analysis (LDA) can be used as a supervised filter-based method maximizing the separation between classes (see Section 5.1).
  • Mutual Information (MI) is a filter-based method measuring the amount of information each feature conveys from the class labels [35].
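As an illustration of the filter-based ranking principle, the short sketch below scores each feature with scikit-learn's mutual information estimator and sorts the features by descending relevance. This is a generic illustration of the MI filter rather than the exact implementation used here; the function name and estimator settings are assumptions.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features_by_mi(X, y, feature_names):
    # Generic MI filter illustration: relevance of each feature w.r.t. the class labels
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]          # descending order of relevance
    return [(feature_names[i], scores[i]) for i in order]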

4.1.2. New Proposed Sequential Forward Method

This approach is an iterative wrapper-based FS method that aims at maximizing the classifier accuracy by adding features one by one [30,36,37,38,39]. The first iteration starts with an empty set of selected features, and each feature $f \in F$, where $F$ is the set of $p$ features, is tested individually using a given classifier (in this study, we used LDA and K-nearest-neighbor (KNN), presented in Section 5). The feature that maximizes the classification accuracy (Acc) [40] on the training dataset is selected first. Then, each remaining feature is added to the set of previously selected features, and the same process is applied to find again the highest accuracy. The iterative feature selection ends once further feature additions yield no accuracy improvement. The proposed method is presented in Algorithm 1.
Algorithm 1: Sequential forward FS algorithm
Input: dataset $X_{n,p}$, whole set of features $F$, ground-truth labels $Y$
Output: set of sorted features $F_p$
Initialization: $F_0 \leftarrow \emptyset$
for $k \leftarrow 0$ to $p-1$ do
  for $f \in F \setminus F_k$ do
    $F^+ \leftarrow F_k \cup \{f\}$
    evaluate (LDA or KNN classifier, $X_n(F^+)$, $Y$)
    compute $Acc(F^+)$
  end for
  Select the best remaining feature: $f_k \leftarrow \arg\max_{f \in F \setminus F_k} Acc(F_k \cup \{f\})$
  Update feature set: $F_{k+1} \leftarrow F_k \cup \{f_k\}$
end for
return $F_p$
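A minimal Python transcription of Algorithm 1 is sketched below, using a scikit-learn KNN classifier as the wrapped evaluator. The accuracy estimate (a small internal cross-validation instead of raw training accuracy), the function name and the returned full sorted list (the cut-off where accuracy stops improving is left to the caller) are assumptions rather than the exact implementation used in this work.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_neighbors=7):
    # Greedy wrapper FS: at each step, add the feature that maximizes accuracy
    selected, accuracies = [], []
    remaining = list(range(X.shape[1]))
    while remaining:
        best_acc, best_f = -np.inf, None
        for f in remaining:
            cols = selected + [f]
            clf = KNeighborsClassifier(n_neighbors=n_neighbors)
            # Accuracy estimated by a 3-fold CV here (an assumption; the paper scores the training set)
            acc = cross_val_score(clf, X[:, cols], y, cv=3).mean()
            if acc > best_acc:
                best_acc, best_f = acc, f
        selected.append(best_f)
        remaining.remove(best_f)
        accuracies.append(best_acc)
    return selected, accuracies   # features in order of selection, with their accuracies

The retained subset then corresponds to the prefix of the selected list up to the point where the accuracy curve stops improving.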

4.1.3. New Proposed Deep Neural Network (DNN) Feature Selection Method

This method, inspired by [41], is a filter-based approach built on our proposed neural architecture: the sum of the trained weights of the first-layer neurons is used as a relevance score for the input features.
Deep learning uses a combination of agents (the neurons) to learn high-level non-linear relationships and correlations in the analyzed data. Such methods can tackle complicated problems and have become a promising approach for smart grid applications [42]. Here, we use a dense fully connected DNN architecture [43], where each neuron of the input layer is associated with a feature $f_i$, as presented in Figure 4. Our architecture is made of one input layer with $p$ neurons and two hidden blocks, each containing a dense layer of 128 neurons with Batch Normalization (BN) combined with a dropout layer. The output $y$ of each neuron can be expressed as a function of an input vector $x \in \mathbb{R}^N$ as:
$$y = g\left(w_0 + \sum_{i=1}^{N} w_i x_i\right)$$
where $g$ is the neuron activation function, chosen as the REctified Linear Unit (RELU) defined by $g(x) = \max(0, x)$, except for the last output layer, which uses the softmax activation function [44]. The $w_i$ coefficients ($w_0$ being the bias) are the synaptic weights, which are learned during the training process. In our implementation, we choose $w_0 = 0$ and we use a BN of the output, which was shown to improve the training performance [45] when applied to the activation of each neuron. Thus, we have $\mathrm{BN}(y) = \frac{y - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and the standard deviation computed from the processed training batch, whose size is set to 64. Each dropout layer randomly discards 10% of the outputs of the preceding layer and is used in order to reduce overfitting [46]. The last layer of our proposed DNN model applies a softmax activation function to each output neuron (each output neuron being associated with a prediction class), returning output values in $[0, 1]$. This value corresponds to the probability of predicting the analyzed input as a member of class $i$. Hence, the final output label corresponds to the class index that maximizes the resulting probability of the last layer, such that $\hat{y} = \arg\max_i y_i$, $y_i$ being the output of neuron $i$ of the last layer. The proposed DNN is designed for classification or regression problems; however, we propose here a new feature selection method based on the trained weights of each neuron. Following this idea, we consider the sum of the weights of all the neurons of the first layer linked to the same input feature as a relevance score for this feature. The DNN is trained on the studied datasets using cross-entropy [43] as a loss function.
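A compact Keras sketch of this architecture and of the weight-based relevance score is given below. It follows the settings stated above (two Dense(128) + BN + 10% dropout blocks, softmax output, RMSprop with a 10^-3 learning rate); aggregating the first-layer weights by their absolute sum and using the sparse categorical cross-entropy variant are assumptions, as are the layer and function names.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dnn(n_features, n_classes):
    # Dense DNN: input layer, two hidden blocks (Dense 128 + BN + Dropout 0.1), softmax output
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(128, activation="relu", name="dense_in"),
        layers.BatchNormalization(),
        layers.Dropout(0.1),
        layers.Dense(128, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.1),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def dnn_feature_scores(model):
    # Relevance score of each input feature from the trained first-layer weights;
    # aggregating with the absolute sum is our assumption
    W = model.get_layer("dense_in").get_weights()[0]   # shape: (n_features, 128)
    return np.abs(W).sum(axis=1)

Training then follows model.fit(X_train, y_train, batch_size=64, epochs=350), and the features are ranked by descending dnn_feature_scores(model).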

4.2. Feature Selection Results

In the context of event-based NILM [6], the selection of features meeting the additivity criterion matters. Each of the five FS methods previously presented in Section 4.1 is then applied to the two datasets considered in this study (see Section 3.1), where each HEA is represented by a vector of $p = 34$ features meeting the additivity criterion (see Section 3.2). The datasets are centered and reduced in order to get an $(n \times p)$ matrix $\bar{X}$ with zero mean and unit standard deviation. This is obtained by applying the same operation as used for the Batch Normalization described above (subtracting the mean and dividing by the standard deviation of each feature). For the FS methods based on a feature relevance score, we sort the features by descending order of relevance, and we then select the subset before the highest decrease in the relevance score. The results obtained on our own dataset and on the PLAID dataset are reported in Table 2 and Table 3.
The following observations can be made according to the obtained results:
  • For both datasets, some features such as $P$, $P_1$, $P_H$, $Q$, $Q_1$, or $Q_H$ are selected regardless of the FS method used;
  • For both datasets, the features selected by the DNN method for FS and by the PCA method are diversified in terms of harmonic orders;
  • For both datasets, the features selected by the MI and LDA methods are related to odd-order harmonics, which describe the power supply structures included in most of the HEAs;
  • For the sequential forward FS method, our experiments compare the results provided by the Euclidean-based KNN classifier (where the neighborhood parameter is set to K = 7) with those of the LDA classifier. The number of nearest neighbors is set to 7 because it is the closest odd number to the number of instances per class in the proposed dataset, so that each neighborhood yields a majority vote. The selected features are those that reach the maximum accuracy [47] reported in Table 2 and Table 3. For our dataset, only 12 features maximize the KNN classifier accuracy, and 18 features maximize the accuracy of the LDA classifier (see Figure 5). For the PLAID dataset, 25 features maximize the KNN classifier accuracy, and 33 features maximize the LDA classifier accuracy (see Figure 6). The low accuracy reached by the LDA classifier on the PLAID dataset can be explained by the unbalanced training sets (where one or several classes outnumber the other classes) [48]. Indeed, LDA is known to perform poorly in this setting, since classification is generally biased towards the majority classes. It is also observed that, for the LDA classifier, the accuracy reaches a plateau less rapidly than for the KNN. Indeed, the KNN is known to be affected by the overfitting phenomenon [49,50], which increases the distance between individuals of the same class and decreases the accuracy. This classifier cannot weight the relevance of features and is therefore more sensitive to FS than other classifiers such as LDA, which can handle irrelevant features [51].

5. Home Electrical Appliances Classification Results

5.1. Investigated Classification Methods

We investigate four classification methods that can be combined with the different feature subsets provided by the FS methods presented in Section 4.
  • The KNN method is widely used by the NILM community for HEAs’ identification [12,52]. We use the Euclidean distance and K = 7 , which corresponds to the closest odd number to the number of instances in a class in the proposed dataset. Hence, the predicted class corresponds to the most represented one in the neighborhood through majority voting.
  • The LDA method estimates the optimal linear combination of features using the eigenvectors of the projection matrix $(B+S)^{-1}B$ of dimension $p \times p$, where $B = \frac{1}{n}\sum_{k=1}^{K} n_k (g_k - g)(g_k - g)^T$ and $S = \frac{1}{n}\sum_{k=1}^{K} n_k V_k$, where $V_k$ are the covariance matrices built from the corresponding $n_k$ individuals (number of individuals of class $k \in \{1, \ldots, K\}$); $g = \frac{1}{n}\sum_{i=1}^{n} \bar{x}_i$ and $g_k = \frac{1}{n_k}\sum_{i=1}^{n_k} \bar{x}_i$ correspond to the mean over all the individuals of the whole dataset $\bar{X}$ and the mean over the individuals of class $k$, respectively. Then, the tested individuals are projected into the discriminative linear space before being assigned to the class whose centroid is the closest in terms of the Euclidean distance.
  • The proposed DNN classification method uses the same fully connected DNN architecture as presented in Section 4.1.3 for FS. Our implementation is based on TensorFlow/Keras (https://keras.io/). The training is performed with a batch size of 64 and a maximum of 350 epochs (one epoch is completed each time the whole training dataset has been processed once). The optimization uses the RMSprop algorithm [43] with a learning rate set to $\eta = 10^{-3}$.
  • The Random Forest (RF) classification method creates a set of decision trees and aggregates the votes from the decision trees to predict the class of the tested individuals [53]. The number of trees was set to 5 after experimental tuning to get the best results.
Other classification methods such as Support Vector Machine (SVM) [8] or Adaboost [54] were not evaluated in this study because they require a very high computation cost, and our preliminary results did not reveal a significant accuracy improvement in comparison to the four investigated methods.
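For reference, a minimal scikit-learn configuration of the three non-DNN classifiers with the hyper-parameters stated above (K = 7 Euclidean KNN, LDA, RF with five trees) could look as follows; the scikit-learn estimators and the random seed are illustrative choices, and the LDA used in this work is the projection-based formulation described above rather than this off-the-shelf class.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

# Hyper-parameters follow the values given in Section 5.1; estimator classes are our choice
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=7, metric="euclidean"),
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=5, random_state=0),
}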

5.2. Classification Test Procedure

The evaluation of the classification performances uses an 8-fold cross-validation methodology, which randomly splits the dataset into eight equal partitions; each partition is tested in turn using the seven remaining partitions for training. This process is repeated eight times, until all subsamples have been used for testing [55]. The number of folds was arbitrarily set to 8 so that 12.5% of the studied dataset corresponds to the test set and the remaining 87.5% to the training set. The final classification performance metrics were then computed by merging the results of each partition.
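A minimal sketch of this protocol is given below, assuming scikit-learn's KFold splitter; the shuffling seed and function name are illustrative. The predictions of the eight test folds are pooled before computing the metrics, as described above.

import numpy as np
from sklearn.model_selection import KFold

def pooled_cv_predictions(clf, X, y, n_splits=8, seed=0):
    # 8-fold cross validation: each fold is tested once, the other 7 are used for training
    # (KFold with shuffling is an assumption; the exact splitting may differ)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    y_pred = np.empty_like(y)
    for train_idx, test_idx in kf.split(X):
        clf.fit(X[train_idx], y[train_idx])
        y_pred[test_idx] = clf.predict(X[test_idx])
    return y_pred   # merged predictions, used to build the confusion matrix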
Our results are expressed in terms of the classical classification metrics used for NILM—Accuracy (Acc), F-Measure (FM), Recall (Rec) and Precision (Prec) [40,47]—which are deduced from the computed confusion matrices. Thus, if we denote $C$ a resulting confusion matrix of dimension $I \times I$ ($I$ being the considered number of classes), where $C_{ij}$ corresponds to the number of individuals of the true class $i$ (row) predicted as being in the class $j$ (column), the Accuracy, Precision, and Recall are computed as:
$$\mathrm{Acc} = \frac{\sum_{i} C_{ii}}{\sum_{i}\sum_{j} C_{ij}}, \qquad \mathrm{Prec} = \frac{1}{I}\sum_{i} \frac{C_{ii}}{\sum_{j} C_{ji}}, \qquad \mathrm{Rec} = \frac{1}{I}\sum_{i} \frac{C_{ii}}{\sum_{j} C_{ij}}$$
where $n = \sum_{i}\sum_{j} C_{ij}$ is the total number of individuals. The F-measure (also called F- or $F_1$-score) is the harmonic mean of the Precision and Recall, computed as:
$$FM = \frac{2\,\mathrm{Prec}\cdot\mathrm{Rec}}{\mathrm{Prec}+\mathrm{Rec}}$$
We also present the ratio Acc/#features, which measures efficiency and allows us to judge the right combination of FS method and classifier. Indeed, the goal is to achieve the highest accuracy with a small number of descriptors: a large ratio means that a high accuracy is reached with a small number of selected features. Each method is evaluated on both datasets.
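The following NumPy sketch computes these four metrics from a confusion matrix built on the pooled predictions; the function name is illustrative and no protection against empty classes is included.

import numpy as np

def nilm_metrics(C):
    # C[i, j]: number of individuals of true class i predicted as class j
    C = np.asarray(C, dtype=float)
    acc = np.trace(C) / C.sum()
    prec = np.mean(np.diag(C) / C.sum(axis=0))   # column sums: per predicted class
    rec = np.mean(np.diag(C) / C.sum(axis=1))    # row sums: per true class
    fm = 2 * prec * rec / (prec + rec)           # harmonic mean of Prec and Rec
    return {"Acc": acc, "Prec": prec, "Rec": rec, "FM": fm}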

5.3. Self-Database Results

Section 5.3.1 and Section 5.3.2 present the classification success rate as a function of the subset of selected features used to describe each HEA. The overall results obtained with our new proposed dataset are summarized in Table 4, with details provided in Table 5. The results obtained with the PLAID dataset are summarized in Table 6, with details presented in Table 7. The confusion matrices of the best methods are also presented in Appendix A. We also compare the results with the active and reactive powers (P, Q), which are the usual features proposed in the NILM literature [56]. The results show the clear improvement provided by the FS methods on each evaluated dataset. In addition, to study how noise influences the training, we have trained the studied classifiers using the selected features and Data Augmentation (DA) [57]. For this, both studied datasets are 100% augmented by adding a white Gaussian noise to the current signals of each HEA class, to obtain a Signal-to-Noise Ratio (SNR) of 20 dB (defined as $10 \log_{10} \frac{\|x\|^2}{\|b\|^2}$, with $x$ the original signal and $b$ the noise signal). This leads to new augmented datasets containing noisy current signals, from which we compute the 34 features meeting the additivity criterion. Then, for each original dataset, we apply cross-validation for classification evaluation using the features selected in Section 4.2 and the classification methods presented in Section 5. The experimental setup also consists of an 8-fold cross-validation, where the original datasets were partitioned into eight folds. For each of the eight simulations, 20% of each HEA class of the noisy generated dataset was added to the training set. No noise was added to the test part used to assess generalization in each of the eight simulations. The success rate of a classification method for a specific subset of selected features was obtained by averaging the performance metrics used in Section 5. All the results presented in the following tables are ranked in descending order of the best FM scores.
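A minimal sketch of this augmentation step is shown below: white Gaussian noise is scaled so that the resulting SNR, as defined above, equals 20 dB before being added to a current waveform. The function name and the random generator choice are illustrative.

import numpy as np

def add_noise_at_snr(current, snr_db=20.0, seed=None):
    # Scale white Gaussian noise so that 10*log10(||x||^2 / ||b||^2) = snr_db
    rng = np.random.default_rng(seed)        # generator choice is illustrative
    noise = rng.standard_normal(current.shape)
    scale = np.linalg.norm(current) / (np.linalg.norm(noise) * 10 ** (snr_db / 20.0))
    return current + scale * noise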

5.3.1. Proposed Dataset

Table 5 shows that the RF classifier outperforms all the other classifiers. This is consistent with the findings of X. Wu et al. in [53], where an accuracy of 98.0% was obtained using eight steady-state features (which do not meet the additivity criterion) and where the advantages of the RF classifier over KNN were shown. In our experiment, the RF classifier obtains the best recognition rate, equal to 99.18%, with the features selected by the MI method and DA, and 98.15% of accuracy when combined with the LDA feature selection method without DA. This leads to a ratio (Acc/# feat.) of 7.01 and a very low computational time. It can be observed that the classifier performances are also better when the data are artificially augmented during training. The confusion matrix depicted in Figure A1 shows that 20% of the tested individuals that belong to class index 4 “Fan-Coala level 1” are classified as class index 5 “Fan-Coala Level 2”, and 10% of the tested individuals belonging to class index 1 “Electric mixer Moulinex” are classified as class index 7 “Fan-Coala Level 3”. Interestingly, the performance of the DNN method is improved by a suitable choice of relevant descriptors, as confirmed by the usage of the MI and DNN feature selection methods. Indeed, the DNN classifier usually obtains the best results when using all the considered features. This point is of interest for developing new strategies to improve the training efficiency of DNNs when used on a small training dataset. Finally, it can be observed that the results obtained with the DNN classifier are lower than those obtained for the PLAID dataset in Table 7. This can be explained by the small size of the training dataset: indeed, DNNs are known to require a large amount of data to be trained efficiently. It can also be observed that, for most of the classifiers, data augmentation by the addition of white Gaussian noise can significantly improve the results. Table 4 reports the average FM scores obtained for each FS method considering all the studied classifiers. It can be seen that the odd-order harmonic features selected by the MI method are the ones for which the best average FM score is reached.

5.3.2. PLAID Dataset

In Table 7, we can notice the improvement brought by the FS approaches combined with the KNN classifier. Indeed, the obtained ratios (Acc/# feat.) are all greater, which indicates that the KNN classifier needs only a small number of features to reach excellent accuracy. The best ratio (Acc/# feat.) of 47.29 is obtained when using KNN with the P, Q features, which are usually considered for HEA identification in the NILM literature, but the reached accuracy is only 94.58%. The best accuracy of 99.19% is reached with only 25 features selected by the KNN-based sequential forward FS method, with and without data augmentation (according to the confusion matrix depicted in Figure A2, 10% of the tested individuals belonging to class index 7 “Incandescent light bulb-Electrix-soft white” are classified as class index 8 “Incandescent light bulb-Philips Duramax”). However, an accuracy of 99.13% with a very good ratio Acc/#feat. = 4.96 is obtained when considering the subset of 20 features selected by the MI method with the KNN classifier. As we seek to achieve the best identification rates (close to 99% accuracy) for the smallest number of selected features, this combination of classifier and selected features is a good trade-off. It allows the KNN classifier to slightly exceed the performance reached by the RF classifier with the 20 features selected by the MI method (98.91% accuracy).
In contrast to our dataset, the high number of individuals (n = 36,720) makes it easier for the DNN to learn the features from the dataset and to determine that a large number of features are reliable and should be used. Our results outperform the previous ones reported in [58] for the PLAID dataset using the VI-trajectory features, where an F-measure of approximately 77% is given. This validates the efficiency of our approach, which consists in carefully selecting features before classification. Combining the RF classifier with the features selected by the KNN-based sequential forward FS method also allows us to exceed the performance obtained in [18], where the authors also used an RF classifier applied to the PLAID dataset with an optimal subset of 20 steady-state and transient features (selected through a systematic feature elimination process) and reached an accuracy of 93.2%. The performances obtained by the LDA are the worst ones. As mentioned, this can be explained by the fact that the LDA classifier struggles when the number of individuals is very large, resulting in individuals overlapping between classes. In almost all the cases, data augmentation improves or slightly modifies the classification performance for all methods (except for the KNN classifier). Finally, Table 6 reports the average FM scores obtained for each FS method considering all the studied classifiers. The same observation as the one made for the proposed dataset can be made: the odd-order harmonic features selected by the MI method are the ones for which the best average FM score is reached.

5.4. Transfer Learning Results

We now evaluate whether the features selected from one dataset can be transferred to another dataset. For this, a cross-learning strategy is adopted for both studied datasets. The goal is to study to what extent the features selected for a particular dataset are invariant across HEAs and can be used to obtain good classification performances on another dataset. Indeed, as several common features are selected from each dataset separately, we assume that they convey common information that can be transferred from one dataset to another. This approach allows us to reduce the number of training samples needed for unknown HEAs. First, the subsets of features selected from the proposed dataset are tested using the several classifiers presented previously, applied to the PLAID dataset. The results are presented in Table 8. Very good results are obtained with the KNN, the DNN, and the RF classifiers for the different subsets of selected features. The best ones are obtained with the KNN classifier, particularly when combined with the MI feature selection method, which yields the best performances with only 20 features. Figure A3 shows the tested individuals that were misclassified, for example: 10% of the individuals belonging to class index 7 “Incandescent light bulb-Electrix soft white” were classified as class index 8 “Incandescent light bulb-Philips Duramax”, and 10% of the individuals in class index 39 “Laptop-HP-C24” were classified as class index 40 “Laptop-Apple macbook air”. Second, the subsets of features selected from the PLAID dataset were tested using the several classifiers presented previously, applied to the proposed dataset. The results are presented in Table 9. Very good results were obtained with the RF and the LDA classifiers for the different subsets of selected features. An accuracy of 98.77% was obtained when combining the RF classifier with the subset of features selected by the LDA-based sequential forward FS method. The confusion matrix in Figure A4 shows that 50% of the individuals belonging to class index 5 “Fan-Coala Level 2” were classified as class index 4 “Fan-Coala Level 1”, and 50% of the individuals that belong to class index 53 “Washing machine-LG state (b)” were classified as class index 58 “Washing machine-LG state (g)”. Through this cross transfer learning, we show that, in both situations, the features selected from another dataset improve the classification rate. There is therefore a transfer of knowledge about the discriminating power of the selected features.
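The cross-learning protocol can be summarized by the small sketch below, which selects a feature subset on a source dataset (with the MI filter, chosen here only as an example) and evaluates it on a target dataset with a KNN classifier. It assumes that both datasets expose the same p = 34 feature columns; the function name and cross-validation settings are illustrative.

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def transfer_selected_features(X_src, y_src, X_tgt, y_tgt, n_keep=20):
    # 1) Select a feature subset on the source dataset (MI ranking, as an example)
    scores = mutual_info_classif(X_src, y_src, random_state=0)
    selected = np.argsort(scores)[::-1][:n_keep]
    # 2) Evaluate the same feature columns on the target dataset
    #    (both datasets are assumed to share the same 34 feature columns)
    clf = KNeighborsClassifier(n_neighbors=7)
    acc = cross_val_score(clf, X_tgt[:, selected], y_tgt, cv=8).mean()
    return selected, acc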

6. Conclusions

In this paper, we addressed one of the main challenges of the NILM problem consisting in the HEA identification from electrical measurements. To this end, we covered a broad sweep of existing supervised FS and classification methods to show the efficiency of this approach for identifying distinct HEAs using the suitable set of relevant features.
As a first contribution, in addition to a novel publicly available dataset, we introduced a comparative evaluation of a large number of methods in an event-based NILM context, involving all the possible combinations of these techniques applied to two distinct annotated HEA datasets, where each HEA signature, made of relevant features, is extracted from appliances of different categories and manufacturers.
Second, thanks to our proposed data augmentation and feature selection, we have improved the best HEA identification results on the PLAID dataset by obtaining a resulting classification rate above 99%. To our knowledge, this result outperforms the best available results obtained with the PLAID dataset using state-of-the-art methods. Furthermore, in this regard, validating our solution using two datasets has helped in (i) showing its high performance despite the data collection procedure being different and (ii) proving its capability to give very good results even if HEAs are from different categories and manufacturers.
Moreover, our results show that the number of extracted features can be significantly reduced while still performing HEA recognition efficiently. A cross transfer learning strategy was adopted by using the subsets of features selected by the FS approaches applied to our dataset to classify the HEAs of the PLAID dataset, and conversely. Very good results were obtained, which confirms that the features selected on one dataset can be transferred to another one.
Several conclusions can be safely drawn from this study. First, the selected electrical features can be justified by the power supply topologies included in an HEA (the front-end circuitry that connects it to the power grid), which affect its current waveforms. Each subset of selected features mostly contains odd-order harmonics related to power components. The performance of a designed classifier can be improved by the use of an optimal subset of features. Some features, such as $P$, $P_1$, $P_H$, $Q$, $Q_1$, and $Q_H$, are retrieved in most of the subsets of selected features, which shows their importance for HEA identification. Secondly, several observations can be made according to the chosen classifier: DNN requires a lot of training data and works poorly on the small proposed dataset (and better on the PLAID dataset); LDA is sensitive to unbalanced datasets and works less well on PLAID; the KNN method is efficient but sensitive to the choice of descriptors; RF, a state-of-the-art method in automatic classification before the arrival of deep learning, is robust and gives good results in most cases. In addition, data augmentation improves classification performance in almost all cases for all methods (except for PLAID with the KNN classifier). Finally, the features selected through FS methods on one dataset could be used to correctly identify unknown HEAs from another dataset using, for example, an unsupervised statistical modeling approach [6]. This addresses one of the biggest NILM challenges: generalization.

Author Contributions

Investigation, S.H., D.F. and F.A.; Methodology, S.H. and D.F.; Supervision, D.F., F.A., H.B.A.S. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by Tunisian Ministry of Higher Education and Scientific Research (LSE-ENIT-LR11ES15) of University of Tunis El Manar and the French ANR (investissements d’avenir) ORACLES project with Univ. Paris-Saclay.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The MATLAB and Python codes used in this research are available for academic purposes only by contacting the corresponding author (D. Fourer). The new dataset used in this study is freely available at https://ieee-dataport.org/documents/home-electrical-appliances-recordings-nilm.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BN	Batch Normalization
DA	Data Augmentation
DNN	Deep Neural Network
FS	Feature Selection
HEA(s)	Home Electrical Appliance(s)
KNN	K-nearest-neighbor
LDA	Linear Discriminant Analysis
MI	Mutual Information
NILM	Non-Intrusive Load Monitoring
PCA	Principal Component Analysis
PCC	Point of Common Coupling
PLAID	Public Dataset of High Resolution for Load Identification Research
RELU	REctified Linear Unit
RF	Random Forest
SNR	Signal-to-Noise Ratio

Appendix A. Confusion Matrices

Figure A1. Confusion matrix from the proposed dataset obtained with RF classifier using the subset of 20 features selected with the MI FS method.
Figure A2. Confusion matrix from PLAID dataset obtained with KNN classifier using the subset of 25 features selected with the KNN based Seq. forw. FS method.
Figure A3. Confusion matrix from the PLAID dataset obtained with KNN classifier using the subset of 20 features selected with the MI FS method from the proposed dataset.
Figure A4. Confusion matrix from the proposed dataset obtained with the RF classifier using the subset of 33 features selected with the LDA based Seq. forw. FS method from the PLAID dataset.

References

  1. Darby, S. The Effectiveness of Feedback on Energy Consumption. In A Review for DEFRA of the Literature on Metering, Billing and direct Displays; University of Oxford: Oxford, UK, 2006; Volume 486. [Google Scholar]
  2. Wang, Y.; Chen, Q.; Hong, T.; Kang, C. Review of smart meter data analytics: Applications, methodologies, and challenges. IEEE Trans. Smart Grid 2018, 10, 3125–3148. [Google Scholar] [CrossRef] [Green Version]
  3. Hart, G.W. Non-intrusive appliance load monitoring. Proc. IEEE 1992, 80, 1870–1891. [Google Scholar] [CrossRef]
  4. Zoha, A.; Gluhak, A.; Imran, M.A.; Rajasegarar, S. Non-intrusive load monitoring approaches for disaggregated energy sensing: A survey. Sensors 2012, 12, 16838–16866. [Google Scholar] [CrossRef] [Green Version]
  5. Faustine, A.; Mvungi, N.H.; Kaijage, S.; Michael, K. A survey on non-intrusive load monitoring methodies and techniques for energy disaggregation problem. arXiv 2017, arXiv:1703.00785. [Google Scholar]
  6. Houidi, S.; Auger, F.; Sethom, H.B.A.; Fourer, D.; Miègeville, L. Multivariate event detection methods for non-intrusive load monitoring in smart homes and residential buildings. Energy Build. 2020, 208, 109624. [Google Scholar] [CrossRef]
  7. Houidi, S.; Fourer, D.; Auger, F. On the Use of Concentrated Time-Frequency Representations as Input to a Deep Convolutional Neural Network: Application to Non Intrusive Load Monitoring. Entropy 2020, 22, 911. [Google Scholar] [CrossRef]
  8. Liang, J.; Ng, S.K.K.; Kendall, G.; Cheng, J.W.M. Load Signature Study Part I: Basic Concept, Structure, and Methodology. IEEE Trans. Power Del. 2010, 25, 551–560. [Google Scholar] [CrossRef]
  9. Zeifman, M.; Roth, K. Non intrusive appliance load monitoring: Review and outlook. IEEE Trans. Consum. Electron. 2011, 57, 76–84. [Google Scholar] [CrossRef]
  10. Kalogridis, G.; Efthymiou, C.; Denic, S.; Cepeda, R. Privacy for smart meters: Towards undetectable appliance load signatures. In Proceedings of the First IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, 4–6 October 2010; pp. 232–237. [Google Scholar]
  11. Ruano, A.; Hernandez, A.; Ureña, J.; Ruano, M.; Garcia, J. NILM Techniques for Intelligent Home Energy Management and Ambient Assisted Living: A Review. Energies 2019, 12. [Google Scholar] [CrossRef] [Green Version]
  12. Figueiredo, M.; De Almeida, A.; Ribeiro, B. An Experimental Study on Electrical Signature Identification of Non-Intrusive Load Monitoring (NILM) Systems. In Proceedings of the 10th international Conference on Adaptive and Natural Computing Algorithms (ICANNGA), Ljubljana, Slovenia, 14–16 April 2011; pp. 31–40. [Google Scholar]
  13. Do Nascimento, P.P.M. Applications of Deep Learning Techniques on NILM. Ph.D. Thesis, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil, 2016. [Google Scholar]
  14. De Paiva Penha, D.; Castro, A.R.G. Home appliance identification for NILM systems based on deep neural networks. Int. J. Artif. Intell. Appl. 2018, 9, 69–80. [Google Scholar] [CrossRef]
  15. Piccialli, V.; Sudoso, A.M. Improving Non-Intrusive Load Disaggregation through an Attention-Based Deep Neural Network. Energies 2021, 14, 847. [Google Scholar] [CrossRef]
  16. Cannas, B.; Carcangiu, S.; Carta, D.; Fanni, A.; Muscas, C. Selection of Features Based on Electric Power Quantities for Non-Intrusive Load Monitoring. Appl. Sci. 2021, 11. [Google Scholar] [CrossRef]
  17. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn.Res. 2003, 3, 1157–1182. [Google Scholar]
  18. Sadeghianpourhamami, N.; Ruyssinck, J.; Deschrijver, D.; Dhaene, T.; Develder, C. Comprehensive feature selection for appliance classification in NILM. Energy Build. 2017, 151, 98–106. [Google Scholar] [CrossRef] [Green Version]
  19. Houidi, S.; Auger, F.; Ben Attia Sethom, H.; Fourer, D.; Miègeville, L. Relevant feature selection for home appliances recognition. In Proceedings of the Electrimacs 2017 Conference, Nancy, France, 4–6 July 2017. [Google Scholar]
  20. Kato, T.; Cho, H.S.; Lee, D.; Toyomura, T.; Yamazaki, T. Appliance Recognition from Electric Current Signals for Information-Energy Integrated Network in Home Environments. In Proceedings of the Smart Homes and Health Telematics (ICOST), Tours, France, 1–3 July 2009; pp. 150–157. [Google Scholar]
  21. D’Incecco, M.; Squartini, S.; Zhong, M. Transfer learning for non-intrusive load monitoring. IEEE Trans. Smart Grid 2019, 11, 1419–1429. [Google Scholar] [CrossRef] [Green Version]
  22. Houidi, S. Classification des Charges Électriques Résidentielles en vue de Leur Gestion Intelligente et de Leur comptabilisation. Ph.D. Thesis, University of Nantes, Saint-Nazaire, France, 2020. [Google Scholar]
  23. Houidi, S.; Auger, F.; Sethom, H.B.A.; Miègeville, L.; Fourer, D.; Jiang, X. Statistical Assessment of Abrupt Change Detectors for Non Intrusive Load Monitoring. In Proceedings of the 2018 IEEE International Conference on Industrial Technology (ICIT), Lyon, France, 20–22 February 2018. [Google Scholar]
  24. Molina, L.C.; Belanche, L.; Nebot, À. Feature selection algorithms: A survey and experimental evaluation. In Proceedings of the IEEE International Conference on Data Mining (ICDM), IEEE, Lyon, France, 20–22 February 2002; pp. 306–313. [Google Scholar]
  25. Jain, A.; Zongker, D. Feature selection: Evaluation, application, and small sample performance. IEEE Trans. Pattern Anal. 1997, 19, 153–158. [Google Scholar] [CrossRef] [Green Version]
  26. Gao, J.; Giri, S.; Kara, E.C.; Bergés, M. PLAID: A Public Dataset of High-resolution Electrical Appliance Measurements for Load Identification Research: Demo Abstract. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments, Seoul, Korea, 4–5 November 2014; pp. 198–199. [Google Scholar] [CrossRef]
  27. Renaux, D.P.B.; Pottker, F.; Ancelmo, H.C.; Lazzaretti, A.E.; Lima, C.R.E.; Linhares, R.R.; Oroski, E.; Nolasco, L.d.S.; Lima, L.T.; Mulinari, B.M.; et al. A Dataset for Non-Intrusive Load Monitoring: Design and Implementation. Energies 2020, 13. [Google Scholar] [CrossRef]
  28. Langella, R.; Testa, A. IEEE standard definitions for the measurement of electric power quantities under sinusoidal, nonsinusoidal, balanced, or unbalanced conditions. Rev. IEEE Std. 1459–2000 2010, 1–40. [Google Scholar] [CrossRef] [Green Version]
  29. Eigeles, E.A. On the Assessment of Harmonic Pollution. IEEE Trans. Power Del. 1995, 10, 693–698. [Google Scholar]
  30. Aha, D.W.; Bankert, R.L. A comparative evaluation of sequential feature selection algorithms. In Learning From Data; Springer: New York, NY, USA, 1996; pp. 199–206. [Google Scholar]
  31. Yu, L.; Liu, H. Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res. 2004, 5, 1205–1224. [Google Scholar]
  32. Cadenas, J.M.; Garrido, M.C.; Martínez, R. Feature subset selection filter—wrapper based on low quality data. Expert Syst. Appl. 2013, 40, 6241–6252. [Google Scholar] [CrossRef]
  33. Anderson, T.W. An Introduction to Multivariate Statistical Analysis; John Wiley and Sons Inc.: New York, NY, USA, 1958; Volume 2. [Google Scholar]
  34. Boutsidis, C.; Mahoney, M.W.; Drineas, P. Unsupervised feature selection for principal components analysis. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), Las Vegas, NV, USA, 24–27 August 2008; pp. 61–69. [Google Scholar]
  35. Pohjalainen, J.; Rasanen, O.; Kadioglu, S. Feature selection methods and their combinations in high-dimensional classification of speaker likability, intelligibility and personality traits. Comput. Speech Lang. 2015, 29, 145–171. [Google Scholar] [CrossRef]
  36. Blanchet, F.G.; Legendre, P.; Borcard, D. Forward selection of explanatory variables. Ecology 2008, 89, 2623–2632. [Google Scholar] [CrossRef]
  37. Marcano-Cedeño, A.; Quintanilla-Domínguez, J.; Cortina-Januchs, M.; Andina, D. Feature selection using sequential forward selection and classification applying artificial metaplasticity neural network. In Proceedings of the IECON 2010—36th Annual Conference on IEEE Industrial Electronics Society, Glendale, AZ, USA, 7–10 November 2010; pp. 2845–2850. [Google Scholar]
  38. Kozbur, D. Testing-Based Forward Model Selection. Am. Econ. Rev. 2017, 107, 266–269. [Google Scholar] [CrossRef] [Green Version]
  39. Le, T.T.H.; Kim, Y.; Kim, H. Network intrusion detection based on novel feature selection model and various recurrent neural networks. Appl. Sci. 2019, 9, 1392. [Google Scholar] [CrossRef] [Green Version]
  40. Pereira, L.; Nunes, N. Performance evaluation in non-intrusive load monitoring: Datasets, metrics, and tools-A review. Wiley Interdiscip. Rev. Data Min. Know. Discov. 2018, 1–17. [Google Scholar] [CrossRef] [Green Version]
  41. Gevrey, M.; Dimopoulos, I.; Lek, S. Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecol. Modell. 2003, 160, 249–264. [Google Scholar] [CrossRef]
  42. Peng, C.; Lin, G.; Zhai, S.; Ding, Y.; He, G. Non-Intrusive Load Monitoring via Deep Learning Based User Model and Appliance Group Model. Energies 2020, 13, 5629. [Google Scholar] [CrossRef]
  43. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  44. Gao, B.; Pavel, L. On the properties of the softmax function with application in game theory and reinforcement learning. arXiv 2017, arXiv:1704.00805. [Google Scholar]
  45. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  46. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  47. Pereira, L.; Nunes, N. A comparison of performance metrics for event classification in Non-Intrusive Load Monitoring. In Proceedings of the 2017 IEEE International Conference on Smart Grid Communications (SmartGridComm), Dresden, Germany, 23–27 October 2017; pp. 159–164. [Google Scholar]
  48. Xue, J.; Titterington, M. Do unbalanced data have a negative effect on Linear Discriminant Analysis? Pattern Recognit. 2008, 41, 1558–1571. [Google Scholar] [CrossRef] [Green Version]
  49. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley and Sons Inc.: Hoboken, NJ, USA, 2001. [Google Scholar]
  50. Theodoridis, S.; Koutroumbas, K. Pattern Recognition, 2nd ed.; Academic Press: Cambridge, MA, USA, 2003. [Google Scholar]
  51. Lee, Y.; Lin, Y.; Wahba, G. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. J. Am. Stat. Assoc. 2004, 99, 67–81. [Google Scholar] [CrossRef] [Green Version]
  52. Basu, K.; Debusschere, V.; Bacha, S.; Maulik, U.; Bondyopadhyay, S. Non intrusive load monitoring: A temporal multi-label classification approach. IEEE Trans. Indust. Inform. 2015, 11, 262–270. [Google Scholar] [CrossRef]
  53. Wu, X.; Gao, Y.; Jiao, D. Multi-Label Classification Based on Random Forest Algorithm for Non-Intrusive Load Monitoring System. Processes 2019, 7, 337. [Google Scholar] [CrossRef] [Green Version]
  54. Murata, H.; Onoda, T. Applying Kernel Based Subspace Classification to a Non-intrusive Monitoring for Household Electric Appliances. In Proceedings of the International Conference on Artificial Neural Networks (ICANN), Vienna, Austria, 21–25 August 2001. [Google Scholar]
  55. Makonin, S.; Popowich, F. Non-intrusive load monitoring performance evaluation. Energy Effic. 2015, 8, 809–814. [Google Scholar] [CrossRef]
  56. Saitoh, T.; Osaki, T.; Konishi, R.; Sugahara, K. Current sensor based home appliance and state of appliance recognition. SICE J. Control Meas. Sys. Integrat. 2010, 3, 86–93. [Google Scholar] [CrossRef]
  57. Mignot, R.; Peeters, G. An Analysis of the Effect of Data Augmentation Methods: Experiments for a Musical Genre Classification Task. Trans. Int. Soc. Mus. Inform. Ret. 2019. [Google Scholar] [CrossRef] [Green Version]
  58. De Baets, L.; Ruyssinck, J.; Develder, C.; Dhaene, T.; Deschrijver, D. Appliance classification using VI trajectories and convolutional neural networks. Energy Build. 2018, 158, 32–36. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart of the general process for supervised HEA identification.
Figure 2. Measured appliances included in our new proposed dataset.
Figure 3. Electrical assembly and instrumentation.
Figure 4. Diagram of the proposed deep neural network architecture with L hidden layers.
Figure 5. Classification success rate as a function of the number of features on our dataset, using KNN and LDA classifiers with the subset of p = 34 features meeting the additivity criterion.
Figure 6. Classification success rate as a function of the number of features on the PLAID dataset, using KNN and LDA classifiers with the subset of p = 34 features meeting the additivity criterion.
Table 1. Summary of the proposed electrical features [19]. The features colored in red meet the additivity criterion given by Equation (13).

Electrical Features Name | Number of Computed Features
Effective current I and its harmonics I_k for k ∈ {1, …, 15}, and I_H (A) | 17
Active power P and its harmonics P_k for k ∈ {1, …, 15}, and P_H (W) | 17
Reactive power Q and its harmonics Q_k for k ∈ {1, …, 15}, and Q_H (VAR) | 17
Apparent power S and its harmonics S_k for k ∈ {1, …, 15}, and S_H, S_N (VA) | 18
Current harmonic distortion THD_I | 1
Distortion powers D, D_I, D_V (VAD) | 3
Power factor F_p and F_p,k for k ∈ {1, …, 15} | 16
Current crest factor F_CI | 1
Total | 90
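As a companion to Table 1, the following Python sketch illustrates how a few of these steady-state quantities (RMS current, per-harmonic active and reactive powers, THD, power factor, crest factor) can be obtained from a window of sampled voltage and current, in the spirit of the IEEE Std 1459 definitions [28,29]. It is an illustrative sketch rather than the implementation used in this work: the function names, the 50 Hz fundamental, and the assumption that the window spans an integer number of fundamental periods are ours.

```python
import numpy as np

def harmonic_phasors(x, fs, f0=50.0, n_harm=15):
    """RMS magnitude and phase of harmonics 1..n_harm of x, assuming the
    window covers an integer number of fundamental periods."""
    n = len(x)
    m = int(round(n * f0 / fs))               # number of periods in the window
    spec = np.fft.rfft(x)
    mags, phases = [], []
    for k in range(1, n_harm + 1):
        c = spec[k * m]                        # harmonic k sits at bin k*m
        mags.append(np.sqrt(2) * np.abs(c) / n)  # RMS value of harmonic k
        phases.append(np.angle(c))
    return np.array(mags), np.array(phases)

def steady_state_features(u, i, fs, f0=50.0, n_harm=15):
    """A few of the Table 1 features from one steady-state window of
    voltage u and current i (same length, integer number of periods)."""
    V_rms, I_rms = np.sqrt(np.mean(u**2)), np.sqrt(np.mean(i**2))
    Vk, phi_v = harmonic_phasors(u, fs, f0, n_harm)
    Ik, phi_i = harmonic_phasors(i, fs, f0, n_harm)
    Pk = Vk * Ik * np.cos(phi_v - phi_i)       # per-harmonic active power
    Qk = Vk * Ik * np.sin(phi_v - phi_i)       # per-harmonic reactive power
    P = np.mean(u * i)                         # total active power
    S = V_rms * I_rms                          # apparent power
    return {
        "I": I_rms, "I_k": Ik, "P": P, "P_k": Pk, "Q_k": Qk, "S": S,
        "THD_I": np.sqrt(np.sum(Ik[1:]**2)) / Ik[0],  # current THD
        "F_p": P / S,                                  # power factor
        "F_CI": np.max(np.abs(i)) / I_rms,             # current crest factor
    }
```

The real FFT is used so that, for a window containing m fundamental periods, harmonic k is read directly at bin k·m; any window that does not satisfy this assumption would require interpolation or resampling first.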
Table 2. Results of the feature selection methods applied on the proposed dataset.

Method | # Features | Selected Features (Meeting the Additivity Criterion)
KNN-based sequential forward FS method | 12 | P_1, Q_1, P_7, Q_3, Q, P, P_3, P_H, P_5, Q_5, Q_H, Q_9
LDA-based sequential forward FS method | 18 | P_1, Q, P, Q_9, Q_3, P_3, P_2, P_10, Q_4, P_4, P_6, P_9, Q_8, P_13, P_8, Q_15, Q_5, Q_11
MI | 20 | P_1, P, P_5, Q, Q_1, Q_H, P_H, P_7, P_15, P_9, Q_5, Q_7, P_13, Q_3, Q_9, P_11, Q_13, P_3, Q_15, Q_11
PCA | 17 | Q, Q_1, P_3, P_9, P_7, P_H, Q_5, P_11, Q_9, P_6, Q_3, Q_11, Q_7, P_15, P_12, Q_6, P_8
LDA | 14 | P_7, Q, Q_1, P_9, P_3, Q_5, P_H, Q_7, P_11, P_5, Q_9, P, P_1, Q_11
DNN | 27 | Q_9, P_3, P_H, P_12, Q_15, Q_5, Q_12, P_6, P_9, Q_8, P_7, Q_4, P_5, Q_H, P_11, Q_2, Q_3, Q_11, Q_14, P_10, P_1, P, P_2, P_4, P_8, P_13, P_14
Table 3. Results of the feature selection methods applied on the PLAID dataset.

Method | # Features | Selected Features (Meeting the Additivity Criterion)
KNN-based sequential forward FS method | 25 | P_H, Q, P_1, P_3, Q_H, Q_5, P_5, Q_3, Q_1, Q_7, P_15, Q_2, P, Q_13, P_12, P_7, Q_15, Q_9, P_13, P_9, P_10, Q_10, Q_11, Q_12, Q_8
LDA-based sequential forward FS method | 33 | All features except Q_9
MI | 20 | P_3, P_H, P_1, P, Q_1, Q, Q_9, Q_7, P_7, P_5, Q_5, Q_H, Q_3, P_9, Q_11, P_11, Q_13, Q_15, P_15, P_13
PCA | 31 | All features except P_14, Q, Q_1
LDA | 12 | P_3, P_H, P_15, Q_13, Q_5, Q_H, Q_15, P_9, P_7, Q_3, Q_9, Q_7
DNN | 17 | P_H, Q_11, P_3, P_9, Q_9, P_15, P_5, Q_7, P_7, P_10, Q_3, Q_H, Q_12, Q_13, Q_15, P_13, P_6
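Tables 2 and 3 list the feature subsets retained by each selection strategy. As a purely illustrative sketch of the wrapper family evaluated here, the code below implements a generic KNN-based sequential forward selection with scikit-learn; the cross-validation setting, the plain accuracy score, and the explicit n_select budget are assumptions for the example and may differ from the stopping rule actually used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sequential_forward_selection(X, y, n_select, k_neighbors=7, cv=5):
    """Greedy wrapper FS: at each step, add the feature whose inclusion
    maximizes the cross-validated accuracy of a KNN classifier."""
    selected, remaining = [], list(range(X.shape[1]))
    clf = KNeighborsClassifier(n_neighbors=k_neighbors)
    while len(selected) < n_select and remaining:
        scores = [
            (cross_val_score(clf, X[:, selected + [j]], y, cv=cv).mean(), j)
            for j in remaining
        ]
        best_score, best_j = max(scores)   # best candidate at this step
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Replacing KNeighborsClassifier by LinearDiscriminantAnalysis would give an LDA-based wrapper variant analogous to the one reported in the same tables.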
Table 4. Average F_M scores over all considered classifiers for the different feature subsets selected on the proposed dataset (61 classes, n = 488). Results are sorted in descending order of F-measure.

Feature Selection Method | Average F_M (%)
MI | 92.96
KNN-based Seq. forw. FS method | 92.58
LDA | 92.42
All features | 90.92
LDA-based Seq. forw. FS method | 90.14
PCA | 87.64
P,Q features | 86.62
DNN | 84.87
Table 5. Performance (in percentage) of the classification methods applied on the proposed dataset using different feature subsets from the additive feature set (61 classes, n = 488). Results are sorted in descending order of F-measure.

Feat. | # feat. | Classifier | D.A. | Acc. | F_M | Rec. | Pre. | Acc./# feat.
MI | 20 | R.F | Yes | 99.18 | 99.17 | 99.18 | 99.30 | 4.95
LDA | 14 | R.F | Yes | 98.56 | 98.52 | 98.56 | 98.71 | 7.04
P,Q features | 2 | R.F | Yes | 98.15 | 98.15 | 98.15 | 98.24 | 49.07
All features | 34 | R.F | Yes | 98.15 | 98.14 | 98.15 | 98.28 | 2.88
KNN-based Seq. forw. FS (K = 7) | 12 | R.F | Yes | 97.74 | 97.71 | 97.74 | 97.99 | 8.14
LDA | 14 | R.F | No | 98.15 | 97.67 | 97.43 | 98.15 | 7.01
KNN-based Seq. forw. FS (K = 7) | 12 | R.F | No | 98.15 | 97.60 | 97.33 | 98.15 | 8.17
MI | 20 | R.F | No | 98.15 | 97.54 | 97.23 | 98.15 | 4.90
DNN | 27 | R.F | Yes | 97.34 | 97.32 | 97.34 | 97.60 | 3.60
P,Q features | 2 | R.F | No | 97.74 | 97.06 | 96.72 | 97.74 | 48.87
MI | 20 | LDA | Yes | 96.92 | 96.83 | 96.92 | 97.55 | 4.84
LDA-based Seq. forw. FS | 18 | LDA | No | 97.54 | 96.72 | 96.31 | 97.54 | 5.41
DNN | 27 | LDA | Yes | 96.72 | 96.62 | 96.72 | 97.49 | 3.58
LDA-based Seq. forw. FS | 18 | LDA | Yes | 96.72 | 96.57 | 96.72 | 97.43 | 5.37
LDA-based Seq. forw. FS | 18 | R.F | Yes | 96.31 | 96.29 | 96.31 | 96.77 | 5.35
PCA | 17 | R.F | Yes | 96.31 | 96.25 | 96.31 | 96.70 | 5.66
All features | 34 | LDA | Yes | 96.31 | 96.18 | 96.31 | 96.84 | 2.83
DNN | 27 | LDA | No | 96.93 | 96.14 | 95.77 | 96.91 | 3.59
LDA | 14 | LDA | Yes | 95.69 | 95.59 | 95.69 | 96.32 | 6.83
KNN-based Seq. forw. FS (K = 7) | 12 | KNN | No | 96.72 | 95.76 | 95.29 | 96.72 | 8.06
All features | 34 | KNN | Yes | 95.69 | 95.53 | 95.69 | 96.82 | 2.81
MI | 20 | KNN | Yes | 95.49 | 95.27 | 95.49 | 96.73 | 4.77
DNN | 27 | R.F | No | 96.31 | 95.22 | 94.67 | 96.31 | 3.57
MI | 20 | LDA | No | 96.31 | 95.21 | 94.67 | 96.31 | 4.81
LDA-based Seq. forw. FS | 18 | R.F | No | 96.31 | 95.21 | 94.67 | 96.31 | 5.35
MI | 20 | DNN | No | 96.31 | 95.08 | 94.47 | 96.31 | 4.81
KNN-based Seq. forw. FS (K = 7) | 12 | KNN | Yes | 95.28 | 95.02 | 95.32 | 96.68 | 7.94
LDA | 14 | KNN | Yes | 95.28 | 95.01 | 95.28 | 96.65 | 6.80
LDA-based Seq. forw. FS | 18 | KNN | Yes | 95.27 | 95.01 | 95.28 | 96.49 | 5.29
P,Q features | 2 | KNN | Yes | 95.28 | 94.97 | 95.28 | 96.66 | 47.64
All features | 34 | LDA | No | 96.10 | 94.94 | 94.36 | 96.10 | 2.82
All features | 34 | R.F | No | 96.10 | 94.91 | 94.33 | 96.10 | 2.82
PCA | 17 | R.F | No | 95.90 | 94.87 | 94.39 | 95.90 | 5.64
KNN-based Seq. forw. FS (K = 7) | 12 | LDA | Yes | 95.28 | 94.86 | 95.28 | 96.34 | 7.94
KNN-based Seq. forw. FS (K = 7) | 12 | LDA | No | 95.90 | 94.60 | 93.95 | 95.90 | 7.99
PCA | 17 | LDA | Yes | 94.46 | 94.37 | 94.46 | 95.48 | 5.55
LDA | 14 | LDA | No | 95.69 | 94.33 | 93.65 | 95.69 | 6.83
P,Q features | 2 | KNN | No | 95.08 | 93.44 | 92.62 | 95.08 | 47.54
PCA | 17 | LDA | No | 94.67 | 93.06 | 92.28 | 94.67 | 5.57
LDA | 14 | KNN | No | 94.46 | 92.96 | 92.21 | 94.46 | 6.74
PCA | 17 | DNN | No | 93.64 | 92.15 | 91.48 | 93.65 | 5.51
P,Q features | 2 | LDA | Yes | 93.23 | 91.85 | 93.23 | 93.69 | 46.61
All features | 34 | DNN | No | 93.24 | 91.34 | 90.44 | 93.24 | 2.74
P,Q features | 2 | LDA | No | 93.23 | 90.98 | 89.85 | 93.23 | 46.61
KNN-based Seq. forw. FS (K = 7) | 12 | DNN | No | 92.62 | 90.40 | 89.34 | 92.62 | 7.72
MI | 20 | KNN | No | 91.60 | 89.21 | 88.01 | 91.59 | 4.58
LDA-based Seq. forw. FS | 18 | DNN | No | 90.98 | 88.48 | 87.39 | 90.98 | 5.05
LDA | 14 | DNN | No | 92.01 | 89.89 | 88.99 | 92.01 | 6.57
DNN | 27 | KNN | Yes | 87.09 | 86.18 | 87.09 | 89.46 | 3.23
LDA-based Seq. forw. FS | 18 | KNN | No | 88.11 | 84.35 | 82.51 | 88.11 | 4.89
DNN | 27 | DNN | No | 86.27 | 82.70 | 88.99 | 92.01 | 3.20
DNN | 27 | KNN | No | 85.86 | 81.39 | 79.17 | 85.86 | 3.18
All features | 34 | KNN | No | 84.43 | 80.31 | 78.36 | 84.43 | 2.48
PCA | 17 | KNN | Yes | 81.14 | 79.87 | 81.14 | 82.58 | 4.77
PCA | 17 | KNN | No | 82.37 | 78.18 | 76.25 | 82.37 | 4.84
All features | 34 | DNN | Yes | 77.05 | 76.07 | 77.05 | 79.51 | 2.27
LDA | 14 | DNN | Yes | 76.84 | 75.40 | 76.84 | 77.41 | 5.49
MI | 20 | DNN | Yes | 76.84 | 75.33 | 76.84 | 77.18 | 3.84
KNN-based Seq. forw. FS (K = 7) | 12 | DNN | Yes | 76.02 | 74.70 | 76.02 | 77.51 | 6.33
PCA | 17 | DNN | Yes | 74.59 | 72.35 | 74.59 | 75.47 | 4.39
LDA-based Seq. forw. FS | 18 | DNN | Yes | 70.70 | 68.45 | 70.70 | 70.50 | 3.93
P,Q features | 2 | DNN | No | 71.52 | 66.43 | 64.28 | 71.52 | 35.76
P,Q features | 2 | DNN | Yes | 63.32 | 60.05 | 63.32 | 59.19 | 31.66
DNN | 27 | DNN | Yes | 45.08 | 43.36 | 45.08 | 45.34 | 1.67
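The performance columns in Table 5 (and in Tables 6–9 below) report accuracy, F-measure (F_M), recall, precision, and an accuracy-per-feature ratio. A minimal sketch of how such scores can be computed with scikit-learn is given below; macro averaging over appliance classes is assumed here, in line with the metrics discussed in [40,47], and the helper name is ours rather than part of the original evaluation code.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def table_metrics(y_true, y_pred, n_features):
    """Accuracy, macro-averaged F-measure/recall/precision, and the
    accuracy-per-feature ratio reported in the last column."""
    acc = accuracy_score(y_true, y_pred)
    pre, rec, f_m, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"Acc.": 100 * acc, "F_M": 100 * f_m, "Rec.": 100 * rec,
            "Pre.": 100 * pre, "Acc./# feat.": 100 * acc / n_features}
```

Macro averaging weights every appliance class equally, which matters for class-imbalanced datasets such as PLAID; a micro-averaged variant would instead track the overall accuracy.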
Table 6. Average F_M scores over all considered classifiers for the different feature subsets selected on the PLAID dataset (71 classes, n = 36,720). Results are sorted in descending order of F-measure.

Feature Selection Method | Average F_M (%)
MI | 85.80
LDA-based Seq. forw. FS method | 85.07
KNN-based Seq. forw. FS method | 85.05
All features | 84.56
PCA | 83.53
DNN | 82.52
LDA | 81.61
P,Q features | 58.31
Table 7. Performance (in percentage) of the classification methods applied on the PLAID dataset using different feature subsets from the additive feature set (71 classes, n = 36,720). Results are sorted in descending order of F-measure.

Feat. | # feat. | Classifier | D.A. | Acc. | F_M | Rec. | Pre. | Acc./# feat.
KNN-based Seq. forw. FS (K = 7) | 25 | KNN | No | 99.19 | 98.63 | 98.81 | 98.56 | 3.97
KNN-based Seq. forw. FS (K = 7) | 25 | KNN | Yes | 99.09 | 98.54 | 98.25 | 98.90 | 3.96
LDA-based Seq. forw. FS | 33 | KNN | No | 99.07 | 98.50 | 98.71 | 98.39 | 3.00
LDA-based Seq. forw. FS | 33 | KNN | Yes | 99.02 | 98.50 | 98.21 | 98.83 | 3.00
All features | 34 | KNN | No | 99.07 | 98.49 | 98.69 | 98.40 | 2.91
PCA | 30 | KNN | No | 99.00 | 98.46 | 98.60 | 98.41 | 3.30
DNN | 17 | KNN | No | 98.99 | 98.45 | 98.62 | 98.41 | 5.82
MI | 20 | KNN | No | 99.13 | 98.43 | 98.43 | 98.53 | 4.96
DNN | 17 | KNN | Yes | 98.94 | 98.41 | 98.14 | 98.74 | 5.82
All features | 34 | KNN | Yes | 99.07 | 98.38 | 98.09 | 98.81 | 2.91
MI | 20 | KNN | Yes | 99.03 | 98.33 | 98.23 | 98.48 | 4.95
PCA | 30 | KNN | Yes | 98.95 | 98.28 | 98.00 | 98.69 | 3.30
MI | 20 | R.F | No | 98.91 | 98.27 | 98.50 | 98.18 | 4.95
LDA | 12 | KNN | No | 98.92 | 98.09 | 98.06 | 98.18 | 8.24
LDA-based Seq. forw. FS | 33 | R.F | No | 98.75 | 97.97 | 98.20 | 97.91 | 2.99
All features | 34 | R.F | No | 98.79 | 97.88 | 98.09 | 97.89 | 2.91
KNN-based Seq. forw. FS (K = 7) | 25 | R.F | No | 98.80 | 97.85 | 98.00 | 97.85 | 3.95
LDA | 12 | KNN | Yes | 98.76 | 97.84 | 97.64 | 98.06 | 8.23
DNN | 17 | R.F | No | 98.58 | 97.74 | 98.14 | 97.54 | 5.80
PCA | 30 | R.F | No | 98.67 | 97.66 | 97.84 | 97.60 | 3.29
LDA | 12 | R.F | No | 98.60 | 97.62 | 97.99 | 97.51 | 8.22
LDA-based Seq. forw. FS | 33 | R.F | Yes | 98.23 | 97.22 | 96.88 | 97.70 | 2.98
KNN-based Seq. forw. FS (K = 7) | 25 | R.F | Yes | 98.19 | 97.14 | 96.69 | 97.78 | 3.93
All features | 34 | R.F | Yes | 98.25 | 97.04 | 96.67 | 97.53 | 2.89
PCA | 30 | R.F | Yes | 98.16 | 96.90 | 95.53 | 97.43 | 3.27
MI | 20 | R.F | Yes | 98.19 | 96.89 | 96.48 | 97.34 | 4.91
LDA | 12 | R.F | Yes | 97.88 | 96.86 | 96.43 | 97.40 | 8.16
MI | 20 | DNN | Yes | 98.32 | 96.78 | 96.38 | 97.63 | 4.92
DNN | 17 | R.F | Yes | 97.96 | 96.61 | 96.07 | 97.62 | 5.76
All features | 34 | DNN | No | 98.04 | 96.35 | 96.64 | 96.67 | 2.88
KNN-based Seq. forw. FS (K = 7) | 25 | DNN | No | 97.56 | 96.30 | 96.59 | 96.58 | 3.90
MI | 20 | DNN | No | 96.26 | 96.05 | 96.99 | 96.28 | 4.81
LDA-based Seq. forw. FS | 33 | DNN | No | 96.29 | 94.40 | 95.42 | 94.85 | 2.89
LDA | 12 | DNN | No | 95.52 | 94.10 | 95.10 | 94.09 | 7.96
LDA | 12 | DNN | Yes | 95.51 | 92.88 | 91.57 | 95.88 | 7.96
DNN | 17 | DNN | No | 94.55 | 92.41 | 94.03 | 92.52 | 5.56
PCA | 30 | DNN | No | 94.45 | 92.15 | 92.85 | 92.63 | 3.15
LDA-based Seq. forw. FS | 33 | DNN | Yes | 94.81 | 91.66 | 90.72 | 94.43 | 2.87
KNN-based Seq. forw. FS (K = 7) | 25 | DNN | Yes | 94.87 | 91.22 | 90.29 | 93.27 | 3.79
P,Q features | 2 | KNN | Yes | 94.43 | 90.91 | 91.75 | 91.68 | 47.22
P,Q features | 2 | KNN | No | 94.58 | 90.82 | 91.39 | 90.86 | 47.29
PCA | 30 | DNN | Yes | 91.19 | 90.82 | 88.99 | 94.33 | 3.04
DNN | 17 | DNN | Yes | 93.28 | 90.52 | 88.14 | 94.84 | 5.49
P,Q features | 2 | R.F | No | 93.79 | 90.31 | 90.50 | 90.60 | 46.90
All features | 34 | DNN | Yes | 90.33 | 86.33 | 85.00 | 89.60 | 2.66
P,Q features | 2 | R.F | Yes | 90.63 | 85.84 | 86.14 | 85.70 | 45.32
LDA-based Seq. forw. FS | 33 | LDA | No | 45.52 | 53.46 | 59.45 | 54.25 | 1.38
All features | 34 | LDA | No | 45.31 | 53.25 | 59.42 | 54.05 | 1.33
MI | 20 | LDA | No | 44.55 | 53.11 | 58.49 | 53.84 | 2.23
KNN-based Seq. forw. FS (K = 7) | 25 | LDA | No | 44.37 | 51.50 | 57.62 | 52.32 | 1.77
P,Q features | 2 | DNN | Yes | 65.56 | 50.48 | 51.27 | 55.29 | 32.79
P,Q features | 2 | DNN | No | 65.52 | 49.75 | 52.58 | 52.45 | 32.76
PCA | 30 | LDA | No | 42.54 | 48.58 | 54.28 | 49.23 | 1.42
LDA-based Seq. forw. FS | 33 | LDA | Yes | 42.89 | 48.87 | 49.68 | 57.03 | 1.30
All features | 34 | LDA | Yes | 42.91 | 48.78 | 49.75 | 56.29 | 1.26
KNN-based Seq. forw. FS (K = 7) | 25 | LDA | Yes | 42.72 | 48.67 | 49.61 | 55.40 | 1.71
MI | 20 | LDA | Yes | 42.45 | 48.50 | 49.49 | 53.77 | 2.12
DNN | 17 | LDA | No | 40.65 | 46.79 | 54.21 | 46.85 | 2.39
PCA | 30 | LDA | Yes | 40.59 | 45.42 | 45.64 | 52.09 | 1.35
LDA | 12 | LDA | No | 38.66 | 40.40 | 47.43 | 40.75 | 3.22
DNN | 17 | LDA | Yes | 37.09 | 39.20 | 39.66 | 45.50 | 2.18
LDA | 12 | LDA | Yes | 36.08 | 35.88 | 35.49 | 41.93 | 3.00
P,Q features | 2 | LDA | Yes | 12.12 | 4.29 | 7.11 | 4.19 | 6.06
P,Q features | 2 | LDA | No | 11.59 | 4.05 | 3.65 | 7.55 | 5.80
Table 8. Performance (in percentage) of the classification methods applied on the PLAID dataset using the different feature subsets selected on the proposed dataset. Results are sorted in descending order of F-measure.

Feat. | # feat. | Classifier | Acc. | F_M | Rec. | Pre. | Acc./# feat.
MI | 20 | KNN | 99.12 | 98.42 | 98.40 | 98.53 | 4.96
LDA | 14 | KNN | 99.10 | 98.36 | 98.53 | 98.31 | 7.08
KNN-based Seq. forw. FS (K = 7) | 12 | R.F | 98.99 | 98.36 | 98.48 | 98.35 | 8.25
KNN-based Seq. forw. FS (K = 7) | 12 | KNN | 99.04 | 98.29 | 98.39 | 98.30 | 8.25
PCA | 17 | KNN | 98.95 | 98.28 | 98.54 | 98.17 | 5.82
DNN | 27 | KNN | 98.95 | 98.20 | 98.42 | 98.09 | 3.66
LDA-based Seq. forw. FS | 18 | KNN | 98.92 | 98.14 | 98.33 | 98.13 | 5.49
LDA | 14 | R.F | 98.96 | 97.95 | 98.09 | 97.94 | 7.07
MI | 20 | R.F | 98.89 | 97.86 | 97.90 | 97.91 | 4.94
PCA | 17 | R.F | 98.77 | 97.84 | 98.04 | 97.79 | 5.81
LDA-based Seq. forw. FS | 18 | R.F | 98.76 | 97.64 | 97.79 | 97.62 | 5.49
DNN | 27 | R.F | 98.68 | 97.61 | 97.84 | 97.53 | 3.65
KNN-based Seq. forw. FS (K = 7) | 12 | DNN | 98.08 | 96.83 | 97.40 | 96.87 | 8.17
LDA | 14 | DNN | 97.85 | 96.35 | 97.10 | 96.32 | 6.99
MI | 20 | DNN | 94.85 | 94.82 | 95.86 | 95.10 | 4.74
PCA | 17 | DNN | 96.87 | 94.68 | 95.55 | 94.89 | 5.70
DNN | 27 | DNN | 93.76 | 93.66 | 94.86 | 94.17 | 3.47
LDA-based Seq. forw. FS | 18 | DNN | 95.76 | 93.18 | 94.22 | 93.43 | 6.32
MI | 20 | LDA | 44.14 | 52.43 | 58.21 | 53.18 | 2.21
DNN | 27 | LDA | 42.36 | 48.29 | 54.22 | 48.48 | 1.57
PCA | 17 | LDA | 41.84 | 43.63 | 50.10 | 43.87 | 2.46
LDA | 14 | LDA | 40.75 | 43.54 | 48.95 | 44.3 | 2.46
KNN-based Seq. forw. FS (K = 7) | 12 | LDA | 37.73 | 38.23 | 42.81 | 40.30 | 3.14
LDA-based Seq. forw. FS | 18 | LDA | 38.65 | 37.70 | 43.20 | 38.74 | 2.15
Table 9. Performance (in percentage) of the classification methods applied on the proposed dataset using the different feature subsets selected on the PLAID dataset. Results are sorted in descending order of F-measure.

Feat. | # feat. | Classifier | Acc. | F_M | Rec. | Pre. | Acc./# feat.
LDA-based Seq. forw. FS | 33 | R.F | 98.77 | 98.36 | 98.15 | 98.77 | 2.99
MI | 20 | R.F | 98.76 | 98.36 | 98.15 | 98.77 | 4.93
KNN-based Seq. forw. FS (K = 7) | 25 | R.F | 97.74 | 96.99 | 96.61 | 97.74 | 3.90
PCA | 30 | LDA | 96.51 | 95.46 | 94.95 | 96.51 | 3.21
LDA | 12 | R.F | 96.51 | 95.45 | 94.94 | 96.51 | 8.04
LDA-based Seq. forw. FS | 33 | LDA | 96.31 | 95.18 | 94.63 | 96.31 | 2.92
KNN-based Seq. forw. FS (K = 7) | 25 | LDA | 96.31 | 95.15 | 94.56 | 96.31 | 3.85
MI | 20 | LDA | 96.31 | 95.15 | 94.56 | 96.31 | 4.81
DNN | 17 | R.F | 95.70 | 94.33 | 93.65 | 95.70 | 5.63
PCA | 30 | R.F | 95.49 | 93.98 | 93.23 | 95.49 | 3.18
MI | 20 | DNN | 93.85 | 92.14 | 91.32 | 93.85 | 4.69
LDA | 12 | LDA | 93.64 | 92.07 | 91.29 | 93.64 | 7.80
DNN | 17 | LDA | 93.44 | 91.67 | 90.78 | 93.44 | 5.50
LDA | 12 | LDA | 92.62 | 90.50 | 89.48 | 92.62 | 7.72
KNN-based Seq. forw. FS (K = 7) | 25 | DNN | 93.24 | 91.02 | 89.92 | 93.24 | 3.73
LDA-based Seq. forw. FS | 33 | DNN | 92.62 | 90.67 | 89.75 | 92.62 | 2.80
MI | 20 | KNN | 91.80 | 89.27 | 88.01 | 91.80 | 4.59
DNN | 17 | DNN | 90.98 | 88.38 | 87.16 | 90.98 | 5.35
PCA | 30 | DNN | 90.37 | 87.56 | 86.27 | 90.37 | 3.01
PCA | 30 | KNN | 85.04 | 80.66 | 78.58 | 85.04 | 2.83
LDA-based Seq. forw. FS | 33 | KNN | 84.42 | 79.84 | 77.66 | 84.42 | 2.55
LDA | 12 | KNN | 83.81 | 79.57 | 77.52 | 83.81 | 6.98
KNN-based Seq. forw. FS (K = 7) | 25 | KNN | 84.01 | 79.37 | 77.11 | 84.01 | 3.36
DNN | 17 | KNN | 77.87 | 72.96 | 70.65 | 77.87 | 4.58
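Tables 8 and 9 correspond to the transfer setting, in which a feature subset selected on one dataset is reused to train and evaluate classifiers on the other. The sketch below outlines one way such a protocol can be scripted with scikit-learn; the random forest configuration, the 10-fold cross-validation, and all variable names are illustrative assumptions rather than the exact experimental setup of this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import cross_val_predict

def transfer_evaluation(X_target, y_target, source_selected_idx, cv=10):
    """Evaluate, on a target dataset, a classifier restricted to a feature
    subset that was selected on a different (source) dataset."""
    X = np.asarray(X_target)[:, source_selected_idx]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    y_pred = cross_val_predict(clf, X, y_target, cv=cv)  # out-of-fold predictions
    return {
        "Acc. (%)": 100 * accuracy_score(y_target, y_pred),
        "F_M (%)": 100 * f1_score(y_target, y_pred, average="macro"),
    }

# Hypothetical usage: reuse the 20 MI features selected on the proposed dataset
# (Table 2) to classify PLAID samples, as in the first rows of Table 8.
# X_plaid, y_plaid and mi_selected_idx are placeholders; the column indices
# depend on how the 34 additive features are ordered in the feature matrix.
# scores = transfer_evaluation(X_plaid, y_plaid, mi_selected_idx)
```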
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
