Article

Revealing Household Characteristics from Electricity Meter Data with Grade Analysis and Machine Learning Algorithms

by Krzysztof Gajowniczek 1,*, Tomasz Ząbkowski 1 and Mariya Sodenkamp 2
1 Department of Informatics, Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences SGGW, Warsaw 02-776, Poland
2 Information Systems and Energy Efficient Systems Group, Information Systems and Applied Computer Sciences, University of Bamberg, Bamberg 96047, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(9), 1654; https://doi.org/10.3390/app8091654
Submission received: 22 August 2018 / Revised: 9 September 2018 / Accepted: 12 September 2018 / Published: 14 September 2018
(This article belongs to the Special Issue Intelligent Energy Management of Electrical Power Systems)

Abstract

In this article, Grade Correspondence Analysis (GCA) with posterior clustering and visualization is introduced and applied to extract important features that reveal household characteristics from electricity usage data. The main goal of the analysis is to extract automatically, in a non-intrusive way, a number of socio-economic household properties, including family type, age of inhabitants, employment type, house type, and number of bedrooms. Knowledge of these properties enables energy utilities to develop targeted energy conservation tariffs and to assure balanced operation management. In particular, classification of households based on electricity usage delivers value-added information that allows accurate demand planning with the goal of enhancing the overall efficiency of the network. The approach was evaluated by analyzing smart meter data collected from 4182 households in Ireland over a period of 1.5 years. The analysis shows that revealing characteristics from smart meter data is feasible, with the proposed machine learning methods yielding an accuracy of approximately 90% and an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.82.

1. Introduction

Electricity providers are currently driving the deployment of smart electricity meters in households worldwide to collect fine-grained electricity usage data. The changes taking place in the electricity industry require effective methods to provide end users with feedback on their electricity usage, which is in turn used by network operators for formulating pricing strategies, constructing tariffs, and undertaking actions to improve the efficiency and reliability of the distribution grid. Despite high expectations towards smart metering adoption and its influence on households, the utilization of information from fine-grained consumption profiles is still in its initial stage. This is because the consumption patterns of individual residential customers vary considerably as a function of the number of inhabitants, their activity, age, and lifestyle [1]. Various techniques for customer classification are discussed in the literature, with a focus on the electricity usage behavior of the customers [2,3,4,5]. These works contribute to higher energy awareness by providing input for demand response systems in homes and supporting accurate usage forecasting at the household level [6,7,8].
Recently, a new and relevant research stream has emerged with the underlying idea of identifying important household characteristics and leveraging them for energy efficiency. It focuses on the application of supervised machine learning techniques for inferring household properties such as the number of inhabitants (including children), family type, size of the house, and many other characteristics [9,10]. In particular, this work builds upon the works of Beckel et al. and Hopf et al. and enhances their approach by extending the methodology for feature selection. Therefore, this paper applies the GCA segmentation approach to derive important features describing the electricity usage patterns of households. Knowledge of the load profiles captured by smart meters can help to reveal relevant household characteristics. These customer insights can be further utilized to optimize energy efficiency programs in many ways, including the introduction of flexible tariff plans and an enhanced feedback loop [11,12]. The latter applies to feedback programs that engage households in energy-saving behaviors and helps to recognize what actions inhabitants are undertaking to turn the feedback into energy savings [13].
In particular, this paper enhances the methodology for customer classification, taking into account historical electricity consumption data captured by a large set of 91 attributes tailored specifically to describe various aspects of the behaviors typical for different types of households. Therefore, the scope of the paper is threefold:
(1)
Extraction of a comprehensive set of behavioral features to capture different aspects of household characteristics;
(2)
Application of grade cluster analysis to identify the important attributes that capture distinct consumption patterns of the customers and, further, to reveal socio-demographic characteristics of the households using only a subset of relevant features for classification;
(3)
Classification of households’ properties using three machine learning algorithms and three feature selection techniques.
The proposed research fits into the effort to leverage smart meter data to support energy efficiency at the individual user level. This poses novel research challenges in usage monitoring, data gathering, and non-intrusive inference from data, since customer classification and profiling is methodically sound and offers a variety of potential applications within the energy industry [14,15,16]. In the attempt to reduce electricity consumption in buildings, identification of the important features responsible for the specific consumption patterns of different customer groups is key to improving the efficiency of energy usage.
In this context, the proposed approach is, to some extent, similar to non-intrusive load monitoring (NILM) or non-intrusive appliance load monitoring (NIALM) [17,18,19]. However, the difference is that our goal is to extract high-level household characteristics from the electricity consumption instead of disaggregating the consumption of individual appliances. Nevertheless, both approaches, NILM/NIALM and the proposed approach for detecting household characteristics, deliver interesting knowledge that has implications for households and utility providers. It may help them to understand the key drivers responsible for the electricity consumption and, finally, the associated costs.
In the following sections we characterize the data used in the experiments and introduce the idea of grade analysis. Subsequently, we describe the technical and methodological realization of the classification as well as the evaluation of the results. The final section provides a summary and an outlook on further application scenarios.

2. Smart Meter Data Used

2.1. The CER Data Set

This research is conducted based on the Irish Commission for Energy Regulation (CER) data set. The CER initiated a Smart Metering Project in 2007 with the purpose of undertaking trials to assess the performance of smart meters and their impact on consumer behavior. The data set contains measurements of electricity consumption gathered from 4182 households between July 2009 and December 2010 (75 weeks in total, with 30-min data granularity). Each participating household was asked to fill out a questionnaire before and after the study. The questionnaire contained inquiries regarding the consumption behavior of the occupants, the household's socio-economic status, properties of the dwelling and the appliance stock [20].
Some characteristics of the underlying data are presented in Figure 1, where the normalized consumption observed at different aggregation levels is visualized. Aggregation reduces the variability in electricity consumption resulting in increasingly smooth load shapes when at least 100 households are considered.
The CER data set, to the best of our knowledge, does not account for energy that is consumed by heating and cooling systems. The heating systems of the participating households either use oil or gas as a source of energy or their consumption is measured by a separate electricity meter. The households registered in the project were reported to have no cooling system installed [20].

2.2. Features

The definition of the feature vector is crucial to the success of any classifier based on a machine learning algorithm. To make the high-volume time series data applicable to the classification problem, they have to be transformed into a number of representative variables. As suggested in [10,20], features can be divided into four groups: consumption features, ratios, temporal features, and statistics. This set of features especially considers the relation between the consumption on weekdays and on the weekend, parameters of seasonal and trend decomposition, estimation of the base load and some statistical features (please refer to Table 1). Altogether, the attributes describe consumption characteristics (such as mean consumption at different times of the day and on different days), ratios (e.g., daytime ratios and ratios between different days), statistical aspects (e.g., the variance, the auto-correlation and other statistical numbers) and, finally, different temporal aspects (such as consumption levels, peaks, important moments, temporal deviations, and values from time series analysis) [10,20].
All attributes were created based on the time series; we did not apply any dimensionality reduction technique (e.g., Principal Component Analysis) in order not to reduce the interpretability of particular variables and to prevent information loss. After the feature extraction, the values are normalized. To evaluate the algorithms, we separated the data into training and testing datasets at a 70%:30% ratio.
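To make this preprocessing step concrete, the following minimal Python sketch illustrates min-max normalization of an extracted feature matrix and the 70%/30% split; the random data, column names and the use of scikit-learn are illustrative assumptions, not the original R-based workflow.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative feature matrix: one row per household, one column per extracted feature.
rng = np.random.default_rng(0)
features = pd.DataFrame(rng.random((4182, 91)),
                        columns=[f"feature_{i}" for i in range(91)])

# Min-max normalization of every feature to the [0, 1] interval.
normalized = (features - features.min()) / (features.max() - features.min())

# 70%/30% split into training and testing sets.
train, test = train_test_split(normalized, test_size=0.3, random_state=42)
print(train.shape, test.shape)  # (2927, 91) (1255, 91)
```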

3. Grade Data Analysis

In the following lines, Grade Data Analysis is presented. It is a technique that works on variables measured on any scale, including categorical. The method uses dissimilarity measures including concentration curves and measures of monotonic dependence. The framework is based on the grade transformation proposed by [21] and developed by [22]. The general idea is to transform any bivariate distribution into a structure that enables capturing the underlying dependencies of the so-called grade distribution. In practical applications, the grade data approach consists of analyzing a two-way table with rows and columns, preceded by a proper recoding of the variable values and providing the values of monotone dependence measures such as Spearman's $\rho^*$ and Kendall's $\tau$.
The main component of the grade methods is Grade Correspondence Analysis (GCA), which stems from classical correspondence analysis. Importantly, Grade Data Analysis goes significantly beyond the correspondence approach thanks to the grade transformation. An important feature of GCA is that it does not create a new measure but takes into account the original structure of the underlying phenomenon. GCA performs multiple ordering iterations on both the columns and the rows of the table, in such a way that neighboring rows are more similar than those further apart and, at the same time, neighboring columns are more similar than those further apart. Once the optimal structure is found, it is possible to combine neighboring rows and neighboring columns and, therefore, to build clusters representing similar distributions. Spearman's $\rho^*$ was originally proposed for continuous distributions; however, it may also be defined as Pearson's correlation applied to the distribution after the grade transformation. Importantly, the grade distribution is applicable to discrete distributions too, and it is possible to calculate Spearman's $\rho^*$ for a probability table $P$ with $m$ rows and $k$ columns, where $p_{is}$ is the probability in the $i$-th row and $s$-th column:
$$\rho^*(P) = 3 \sum_{i=1}^{m} \sum_{s=1}^{k} p_{is}\,\big(2 S_{row}(i) - 1\big)\big(2 S_{col}(s) - 1\big),$$
where
$$S_{row}(i) = \Big(\sum_{j=1}^{i-1} p_{j+}\Big) + \frac{1}{2}\, p_{i+}, \qquad S_{col}(s) = \Big(\sum_{t=1}^{s-1} p_{+t}\Big) + \frac{1}{2}\, p_{+s},$$
and $p_{j+}$ and $p_{+t}$ are the marginal sums defined as $p_{j+} = \sum_{s=1}^{k} p_{js}$ and $p_{+t} = \sum_{i=1}^{m} p_{it}$.
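A minimal numerical sketch of this $\rho^*$ computation, assuming a small probability table whose entries sum to one, could look as follows (the toy table and variable names are illustrative):

```python
import numpy as np

def grade_spearman_rho(P):
    """Grade Spearman's rho* for a probability table P (rows x columns, entries sum to 1)."""
    P = np.asarray(P, dtype=float)
    p_row = P.sum(axis=1)               # marginal row sums p_{i+}
    p_col = P.sum(axis=0)               # marginal column sums p_{+s}
    # S_row(i) = sum of preceding row marginals + half of the current one (analogously for columns).
    S_row = np.cumsum(p_row) - 0.5 * p_row
    S_col = np.cumsum(p_col) - 0.5 * p_col
    return 3.0 * np.sum(P * np.outer(2 * S_row - 1, 2 * S_col - 1))

# Toy example: a 3 x 3 probability table.
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.30]])
print(grade_spearman_rho(P))
```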
GCA maximizes $\rho^*$ by ordering the columns and the rows according to their grade regression values, which represent the gravity center of each column or row. The grade regression for the rows is defined as:
$$regr_{row}(i) = \frac{\sum_{s=1}^{k} p_{is}\, S_{col}(s)}{p_{i+}}$$
and, similarly, for the columns:
$$regr_{col}(s) = \frac{\sum_{i=1}^{m} p_{is}\, S_{row}(i)}{p_{+s}}.$$
The idea behind the algorithm is to compute the grade regression for the columns and to sort the columns by these values. At the same time, the regression for the rows changes as well; similarly, if the rows are sorted by their regression, the regression for the columns changes. As evidenced in [23], each sorting iteration with respect to the grade regression values in fact increases the value of Spearman's $\rho^*$. The number of possible row and column permutations is finite and equal to $k!\,m!$. With the increasing value of Spearman's $\rho^*$, the last sorting iteration produces the largest $\rho^*$, called a local maximum of Spearman's $\rho^*$.
In consecutive steps, GCA randomly permutes rows and columns and reorders them so that a local maximum can be achieved. In practical applications, when the data volume and dimensionality are large, the search over all possible combinations of rows and columns is computationally demanding and time-consuming. Therefore, in order to find the global maximum of $\rho^*$, Monte Carlo simulations are used. To achieve this, the algorithm iteratively searches for a representation in which $\rho^*$ reaches a local maximum, starting from randomly ordered rows and columns. From the whole set of local maxima, the highest value of $\rho^*$ is chosen and assumed to be close to the global maximum, which usually happens after 100 iterations of the algorithm. Importantly, the calculation of the grade regression requires a non-zero sum for each and every row and column in the table, and this requirement applies to GCA as well. A more detailed description of the grade transformation mechanics can be found in [22,24].
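As an illustration of this procedure, the sketch below (reusing grade_spearman_rho from the previous sketch) alternately sorts columns and rows by their grade regressions until $\rho^*$ stops improving and repeats from random starting permutations; it is a simplified stand-in for the GradeStat implementation, not the authors' code.

```python
import numpy as np

def grade_regressions(P):
    """Grade regression values (gravity centres) for rows and columns of a probability table.
    Every row and column of P must have a non-zero sum."""
    p_row, p_col = P.sum(axis=1), P.sum(axis=0)
    S_row = np.cumsum(p_row) - 0.5 * p_row
    S_col = np.cumsum(p_col) - 0.5 * p_col
    regr_row = (P @ S_col) / p_row          # regr_row(i) = sum_s p_is S_col(s) / p_i+
    regr_col = (P.T @ S_row) / p_col        # regr_col(s) = sum_i p_is S_row(i) / p_+s
    return regr_row, regr_col

def gca_reorder(P, n_restarts=100, seed=0):
    """Monte Carlo GCA sketch: sort rows/columns by grade regression until rho* stops growing."""
    rng = np.random.default_rng(seed)
    m, k = P.shape
    best_rho, best_perm = -np.inf, None
    for _ in range(n_restarts):
        rows, cols = rng.permutation(m), rng.permutation(k)   # random starting permutation
        prev_rho = -np.inf
        while True:
            # One GCA sweep: sort columns by their grade regression, then rows by theirs.
            _, regr_col = grade_regressions(P[np.ix_(rows, cols)])
            new_cols = cols[np.argsort(regr_col)]
            regr_row, _ = grade_regressions(P[np.ix_(rows, new_cols)])
            new_rows = rows[np.argsort(regr_row)]
            rho = grade_spearman_rho(P[np.ix_(new_rows, new_cols)])  # from the earlier sketch
            if rho <= prev_rho + 1e-12:
                break                                          # local maximum of rho* reached
            rows, cols, prev_rho = new_rows, new_cols, rho
        if prev_rho > best_rho:
            best_rho, best_perm = prev_rho, (rows.copy(), cols.copy())
    return best_rho, best_perm
```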
As far as grade cluster analysis (GCCA) is concerned, its framework is based on the optimal permutations provided by GCA. The following assumptions are associated with the cluster analysis: the number of clusters is given, and the rows and columns of the data table (variables, say $X$ and $Y$) are optimally ordered. The aggregated probabilities in the table for cluster analysis are derived as sums of the component probabilities found in the initial, optimally ordered table, and the number of rows in the aggregated table equals the specified number of clusters. The optimal clustering is achieved when $\rho^*(X, Y)$ is maximal over the aggregations of rows and/or columns that are adjacent in the optimal permutations. The rows and the columns may be combined either separately (by maximizing $\rho^*$ for aggregated $X$ and non-aggregated $Y$, or for non-aggregated $X$ and aggregated $Y$) or simultaneously. Details of the maximization procedure can be found in [23].
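A brute-force sketch of the row aggregation step, feasible only for small tables and given here purely for illustration (the actual GCCA procedure uses a more efficient search), might look like this:

```python
from itertools import combinations
import numpy as np

def best_row_clusters(P_ordered, n_clusters):
    """Aggregate adjacent rows of a GCA-ordered table into n_clusters groups maximizing rho*."""
    m = P_ordered.shape[0]
    best_rho, best_bounds = -np.inf, None
    # Enumerate all ways to place n_clusters-1 cut points between adjacent rows.
    for cuts in combinations(range(1, m), n_clusters - 1):
        bounds = (0,) + cuts + (m,)
        agg = np.vstack([P_ordered[a:b].sum(axis=0) for a, b in zip(bounds[:-1], bounds[1:])])
        rho = grade_spearman_rho(agg)     # grade_spearman_rho() from the earlier sketch
        if rho > best_rho:
            best_rho, best_bounds = rho, bounds
    return best_rho, best_bounds
```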
Finally, grade analysis is strongly supported by visualizations using over-representation maps. The maps act as a very convenient tool for plotting both the source and the transformed data structures, where the idea is to show the various structures in the data with respect to the average values. Every cell in the data table is covered by a respective rectangle in the $[0,1] \times [0,1]$ space and is visualized using shades of grey corresponding to the level of the randomized grade density. The scale of the grade density is divided into several intervals, and the respective colors represent particular intervals, with black corresponding to the highest values and white to the lowest. With the grade density used to measure the deviation from independence of variables $X$ and $Y$, the dark colors indicate over-representation while the white ones show under-representation.

4. GCA Clustering Experiments

The starting point for the experiments was to prepare the initial matrix with normalized features $(x_i - \min(x))/(\max(x) - \min(x))$ in the columns and the rows representing each of the households. The structure of the dataset is presented in Table 2.
The data structure presented in Table 2 has been analyzed using the GradeStat software [25], a tool developed at the Institute of Computer Science of the Polish Academy of Sciences.
The next step was to compute over-representation ratios for each field (cell) of the table with households and the attributes describing them. For a given $m \times k$ data matrix with non-negative values, a visualization using an over-representation map is possible in the same way as for a contingency table. However, instead of a frequency, the value $n_{ij}$ of the $j$-th feature for the $i$-th household is used. Subsequently, it is compared with the corresponding neutral or fair representation $n_{i\cdot}\, n_{\cdot j} / n$, where $n_{i\cdot} = \sum_j n_{ij}$, $n_{\cdot j} = \sum_i n_{ij}$ and $n = \sum_{i,j} n_{ij}$. The ratio of these two quantities is called the over-representation (a small numerical sketch follows the color legend below). The over-representation surface over a unit square is then divided into $m \times k$ cells situated in $m$ rows and $k$ columns, and the area of the cell placed in row $i$ and column $j$ is assumed to be equal to the fair representation of the normalized $n_{ij}$. Based on the over-representation ratios, the over-representation map for the initial raw data can be constructed. The color intensity of each cell in the map is the result of the comparison between two values: (1) the real value of the measure connected to the underlying cell; (2) the expected value of the measure. Figure 2 presents the initial over-representation map for the analyzed data. The colors of the cells in the map are grouped into three classes representing different properties:
  • gray–the feature for the element (household) is neutral (ratio between 0.99 and 1.01), which means that the real value of the feature is equal to its expected value;
  • black or dark gray–the feature for the element (household) is over-represented (between 1.01 and 1.5 for weak over-representation and more than 1.5 for strong) which means that the real value of the feature is greater than the expected one;
  • light gray or white–the feature for the element (household) is under-represented (between 0.66 and 0.99 for weak under-representation and less than 0.66 for strong under-representation), which means that the real value of feature is less than the expected one.
Besides the differences in the color scale on the map, its rows and columns can be of different sizes. A row's height depends on the evaluation of the element (household) in comparison to the entire population, so households with a higher evaluation are represented by taller rows. A column's width depends on the evaluation of the element (feature) in comparison to the evaluation of all the features from the set, so features with a higher evaluation are represented by wider columns.
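Numerically, the over-representation ratio reduces to dividing each cell by its independence baseline; a small sketch with an illustrative 2 × 2 table is given below, to which the color thresholds above would then be applied.

```python
import numpy as np

def over_representation(table):
    """Ratio of each cell to its 'fair' (independence) representation n_i. * n_.j / n."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    fair = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return table / fair

# Toy example: 1.0 means neutral, >1 over-representation, <1 under-representation.
counts = np.array([[30.0, 10.0],
                   [10.0, 30.0]])
print(over_representation(counts))
# [[1.5 0.5]
#  [0.5 1.5]]
```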
In order to reveal the structural trends in the data, the following step was to apply the grade analysis to measure the dissimilarity between the analyzed data distributions (households and feature dimensions). The grade analysis was conducted based on Spearman's $\rho^*$, used as the total diversity index. The value of $\rho^*$ strongly depends on the mutual order of the rows and the columns; therefore, to calculate $\rho^*$, the concentration indexes of differentiation between the distributions were used. The basic GCA procedure is executed by permuting the rows and columns of the table in order to maximize the value of $\rho^*$. After each sorting, the $\rho^*$ value increases and the map becomes more similar to the ideal one. As presented in the maps, the darkest fields are placed in the upper-left and lower-right corners, while the remaining fields follow the property that the farther from the diagonal towards the two other map corners (the lower-left and the upper-right), the lighter gray (or white) the fields are.
The result of the GCA procedure is presented in Figure 3. The rows represent households and the columns represent the features describing the households. The resulting order presents the structure of underlying trends in the data. The analysis of the map reveals that two groups of features can be distinguished: the features which do not differentiate the population of households (the middle columns of the map) and those which differentiate the households (the leftmost and rightmost columns).
Four vertical clusters are marked in Figure 3 (C1, C2, C3 and C4); these show typical behaviors of the households in terms of electricity usage, characterized by the respective numbers of features (in brackets).
Finally, the aggregation of the rows representing individual households was performed. The optimal number of four clusters was obtained when the changes of the subsequent $\rho^*$ values became negligible, as described in [22]. Figure 4 presents the chart with the $\rho^*$ values as a function of the number of clusters: the OX axis corresponds to the number of clusters and the OY axis to the values of $\rho^*$.
The proposed GCA method applied for the clustering enables identification of the features describing different aspects of the consumption behaviors. The clusters are further utilized to select representative features within each cluster to be used for revealing selected households’ characteristics.

5. Classification of Selected Household Characteristics

5.1. Problem Statement

In the following lines we present and assess a classification system that applies supervised machine learning algorithms to automatically reveal specific patterns or characteristics of the households, having their aggregated electricity consumption as an input. The patterns/characteristics are related to the socio-economic status of a particular household and its dwelling. In particular, the following properties are explored:
  • Family type;
  • Number of bedrooms;
  • Number of appliances;
  • Employment;
  • Floor area;
  • House type;
  • House age;
  • Householder age.
Along with the detailed smart metering data, the data set provides information on the characteristics of each household collected through the questionnaires. Such information provides the true outcomes for classification, used to validate the proposed models. Table 3 presents the eight questionnaire questions that were used as the target features for classification (true outcome).
For classification of the households’ properties, three experimental feature setups were considered:
  • All the variables (91) were used in the algorithms;
  • Eight variables based on GCA, selected as the representatives of each cluster with the highest AUC measure (please refer to Appendix A, Table A1); a sketch of this selection step is given after the list;
  • Eight variables selected with the Boruta package, a feature selection algorithm for finding relevant variables [26].
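For the second setup, the representatives were chosen per GCA cluster according to their single-feature AUC against the target (Table A1). A minimal sketch of such a selection step is shown below; the data, cluster assignment and use of scikit-learn are illustrative assumptions rather than the exact procedure used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def select_cluster_representatives(X, y, cluster_of, per_cluster=2):
    """Pick the per_cluster features with the highest single-feature AUC within each GCA cluster."""
    rows = []
    for feature, cluster in cluster_of.items():
        auc = roc_auc_score(y, X[feature])   # single-feature AUC of this variable against the target
        rows.append((cluster, feature, auc))
    ranking = pd.DataFrame(rows, columns=["cluster", "feature", "auc"])
    top = ranking.sort_values("auc", ascending=False).groupby("cluster").head(per_cluster)
    return list(top["feature"])

# Illustrative usage with random data and a made-up cluster assignment.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.random((500, 6)), columns=[f"f{i}" for i in range(6)])
y = rng.integers(0, 2, size=500)
clusters = {"f0": 1, "f1": 1, "f2": 2, "f3": 2, "f4": 3, "f5": 3}
print(select_cluster_representatives(X, y, clusters, per_cluster=1))
```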

5.2. Accuracy Measures

For the purpose of model evaluation, four performance measures were used, i.e., classification accuracy, sensitivity, specificity and area under the ROC curve (AUC) [27]. For the binary classification problem, i.e., having positive class and negative class, four possible outcomes exist, as shown in Table 4.
Based on Table 4, the accuracy (AC) measure can be computed, which is the proportion of the total number of predictions that were correct:
$$AC = \frac{TP + TN}{TP + FP + TN + FN}.$$
AUC estimation requires two indicators: the true positive rate $T_{pr} = \frac{TP}{TP + FN}$ and the false positive rate $F_{pr} = \frac{FP}{FP + TN} = 1 - T_{nr}$. These measures can be calculated for different decision threshold values. Increasing the threshold from 0 to 1 yields a series of points $(F_{pr}, T_{pr})$ constructing the ROC curve, with $F_{pr}$ on the horizontal axis and $T_{pr}$ on the vertical axis. In general form, the value of AUC is given by $AUC = \int_0^1 ROC(u)\, du$.
From another point of view, AUC can be understood as $P(X_p > X_n)$, where $X_p$ and $X_n$ denote the markers for positive and negative cases; it can be interpreted as the probability that, in a randomly drawn pair of positive and negative cases, the classifier probability is higher for the positive one.
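The two measures can be illustrated with a short sketch; the probabilistic AUC estimator below uses the standard rank-based counting of positive-negative pairs and is given for illustration only.

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    """AC = (TP + TN) / (TP + FP + TN + FN)."""
    return (tp + tn) / (tp + fp + tn + fn)

def auc_probabilistic(scores_pos, scores_neg):
    """AUC as P(X_p > X_n): probability that a random positive scores above a random negative.
    Ties are counted as one half, matching the usual rank-based estimator."""
    scores_pos = np.asarray(scores_pos)[:, None]
    scores_neg = np.asarray(scores_neg)[None, :]
    return np.mean((scores_pos > scores_neg) + 0.5 * (scores_pos == scores_neg))

print(accuracy(tp=40, tn=45, fp=5, fn=10))                      # 0.85
print(auc_probabilistic([0.9, 0.8, 0.7, 0.4], [0.6, 0.5, 0.3]))
```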

5.3. Classification Algorithms

Building predictive models involves complex algorithms; therefore, R-CRAN was used as the computing environment. In this research, all the numerical calculations were performed on a personal computer equipped with an Intel Core i5-2430M 2.4 GHz processor (2 CPU × 2 cores), 8 GB RAM and the Ubuntu 16.04 LTS operating system. To obtain predictive models with good generalization abilities, a special learning process incorporating the AUC measure was performed. The best parameters of each algorithm are those maximizing the following function:
$$f(AUC_T, AUC_V) = -\frac{1}{2}\,|AUC_T - AUC_V| + \frac{1}{2}\, AUC_V,$$
where $AUC_T$ stands for the AUC on the training sample and $AUC_V$ for the AUC on the validation sample.
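A one-line sketch of this criterion, as reconstructed above (adding a constant offset does not change which model is selected), is shown below.

```python
def selection_score(auc_train, auc_valid):
    """Criterion (6): reward high validation AUC while penalizing the train/validation gap."""
    return 0.5 * auc_valid - 0.5 * abs(auc_train - auc_valid)

# A model that generalizes well can beat a slightly "better" but overfitted one.
print(selection_score(auc_train=0.85, auc_valid=0.83))   # 0.405
print(selection_score(auc_train=0.95, auc_valid=0.80))   # 0.325
```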

5.3.1. Artificial Neural Networks

Artificial neural networks (ANN) are mathematical objects in the form of equations or systems of equations, usually nonlinear, used for data analysis and processing. The purpose of neural networks is to convert input data into output data with a specific characteristic, or to modify such systems of equations so that useful information can be read from their structure and parameters. From a statistical point of view, selected types of neural networks can be interpreted as general non-linear regression models [28].
In studies related to forecasting in power engineering, multilayer feedforward artificial neural networks without feedback connections are most commonly used. Multilayer Perceptron (MLP) networks are one of the most popular types of supervised neural networks. For example, the MLP network (3, 4, 1) denotes a neural network with three inputs, four neurons in the hidden layer and one neuron in the output layer. In general, the three-layer MLP neural network $(P, M, K)$ is described by the expression:
$$f(x_i, w) = h_2\big(W_2\,[\,h_1(W_1 x_i + b_1)\,] + b_2\big),$$
where $x_i = (x_1, \ldots, x_P)^T$ represents the input data, $W_1$ is the matrix of first-layer weights with dimensions $M \times P$, $W_2$ is the matrix of second-layer weights with dimensions $K \times M$, and $h_i(u)$ and $b_i$ are the nonlinearities (neuron activation functions, e.g., the logistic function) and the constant terms in the subsequent layers, respectively [28].
The goal of supervised learning of the neural network is to search for network parameters that minimize the error between the desired values $L_i$ and the values $P_i$ received at the output of the network. The most frequently minimized error function is the sum of squared differences between the actual value of the explained variable and its theoretical value determined by the model, for the current setting of the synaptic weight vector:
$$E(w) = \frac{1}{2}\sum_{k=1}^{K} e(k) = \frac{1}{2}\sum_{k=1}^{K}\left(\sum_{i=1}^{n}\big(P_i(k) - L_i(k)\big)^2\right),$$
where $n$ is the size of the training sample, $P_i(k)$ and $L_i(k)$ are the predicted and reference values, and $K$ is the number of training epochs of the neural network [28].
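For illustration, a minimal sketch of the forward pass of such an MLP(3, 4, 1) with logistic activations and of the summed-squared-error function is given below; the weights are random and purely illustrative.

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def mlp_forward(x, W1, b1, W2, b2):
    """Three-layer MLP: f(x, w) = h2(W2 [h1(W1 x + b1)] + b2) with logistic activations."""
    hidden = logistic(W1 @ x + b1)
    return logistic(W2 @ hidden + b2)

def sse_error(predictions, targets):
    """Half the sum of squared differences between predicted and reference values."""
    return 0.5 * np.sum((np.asarray(predictions) - np.asarray(targets)) ** 2)

# MLP(3, 4, 1): three inputs, four hidden neurons, one output neuron; random illustrative weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
x = np.array([0.2, -0.5, 1.0])
print(mlp_forward(x, W1, b1, W2, b2))
print(sse_error([0.8, 0.2], [1, 0]))   # 0.04
```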
The neural network learning process involves the iterative modification of the values of the synaptic weight vector $w$ (all weights are collected in one vector) in iteration $k + 1$:
$$w_{k+1} = w_k + \eta_k\, p_k,$$
where $p_k$ is the direction of minimization of the function $E(w)$ and $\eta$ is the learning step size. The most popular optimization methods are undoubtedly gradient methods, which are based on knowledge of the function gradient:
$$p_k = -[H(w_k)]^{-1} g(w_k),$$
where $g$ and $H$ denote the gradient and the Hessian at the last known solution $w_k$, respectively [28].
In practical implementations of the algorithm, the exact determination of the Hessian $H(w_k)$ is abandoned and its approximation $G(w_k)$ is used instead. One of the most popular methods of training neural networks is the variable metric algorithm. In this method, the Hessian (or its inverse) is modified in each step by a correction relative to the previous step. Let $c_k$ and $r_k$ denote the increments of the vector $w$ and the gradient $g$ in two successive iterative steps, $c_k = w_k - w_{k-1}$, $r_k = g_k - g_{k-1}$, and let $V_k$ denote the inverse of the approximate Hessian, $V_k = [G(w_k)]^{-1}$, $V_{k-1} = [G(w_{k-1})]^{-1}$. Then, according to the most effective formula of Broyden-Fletcher-Goldfarb-Shanno (BFGS), the process of updating the $V_k$ matrix is described by the recursive relationship:
$$V_k = V_{k-1} + \left(1 + \frac{r_k^T V_{k-1} r_k}{c_k^T r_k}\right)\frac{c_k c_k^T}{c_k^T r_k} - \frac{c_k r_k^T V_{k-1} + V_{k-1} r_k c_k^T}{c_k^T r_k}.$$
As a starting value, $V_0 = \mathbf{I}$ (the identity matrix) is usually assumed, and the first iteration is carried out in accordance with the steepest descent algorithm [28].
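A direct transcription of this update into code might look as follows; the vectors c and r are illustrative and the sketch performs a single update step starting from the identity matrix.

```python
import numpy as np

def bfgs_update(V_prev, c, r):
    """BFGS update of the approximate inverse Hessian V_k from V_{k-1},
    with c = w_k - w_{k-1} and r = g_k - g_{k-1}."""
    c, r = c.reshape(-1, 1), r.reshape(-1, 1)        # column vectors
    cr = float(c.T @ r)
    term1 = (1.0 + float(r.T @ V_prev @ r) / cr) * (c @ c.T) / cr
    term2 = (c @ r.T @ V_prev + V_prev @ r @ c.T) / cr
    return V_prev + term1 - term2

V0 = np.eye(2)                   # starting approximation: the identity matrix
c = np.array([0.4, -0.2])        # step taken in weight space
r = np.array([0.8, -0.1])        # corresponding change of the gradient
print(bfgs_update(V0, c, r))
```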
Artificial neural networks are often used to estimate or approximate functions that can depend on a large number of inputs. In contrast to the other machine learning algorithms considered in these experiments, the ANN required the input data to be specially prepared. The vector of continuous variables was standardized, whereas the binary variables were converted such that 0 s were transformed into values of −1 [3,5,29].
To train the neural networks, we used the BFGS algorithm implemented in the nnet library. The network had an input layer with 91 neurons and a hidden layer with 1, 2, 3, ..., 15 neurons. A logistic function was used to activate all of the neurons in the network. To achieve a robust estimation of the neural network error, 10 different neural networks were trained with different initial weight vectors. The final estimation of the error was computed as the average value over the 10 networks [3,5,29].
In each experiment, 15 neural networks were trained with various parameters (the number of neurons in the hidden layer). To avoid overfitting, after each learning iteration had finished (with a maximum of 50 iterations), the models were checked using the measure defined in (6). Finally, out of the 15 trained networks, the one with the highest value of (6) was chosen as the best for prediction [3,5,29].

5.3.2. K-Nearest Neighbors Classification

The $k$-nearest neighbors (KNN) regression [30] is a non-parametric method, which means that no assumptions are made regarding the model that generates the data. Its main advantages are the simplicity of the design and low computational complexity. The prediction of the value of the explained variable $L_i$ on the basis of the vector of explanatory variables $x_i$ is determined as:
$$P_i = \frac{\sum_{k=1}^{K} L_k\, I(x_i, x_k)}{K},$$
where:
$$I(x_i, x_k) = \begin{cases} 1, & \text{if } x_k \text{ is one of the } k \text{ nearest neighbors of } x_i,\\ 0, & \text{otherwise,} \end{cases}$$
where $x_k$ is one of the $k$ nearest neighbors of $x_i$ if the distance $d(x_i, x_k)$ belongs to the $k$ smallest distances between $x_i$ and the observations from the set $X$. The most commonly used distance is the Euclidean distance [3,5,29,30].
To improve the algorithm, we normalized the explanatory variables (standardization for quantitative variables and replacement of 0 by −1 for binary variables). The normalization ensures that all dimensions for which the Euclidean distance is calculated have the same importance. Otherwise, a single dimension could dominate the other dimensions [3,5,29].
The algorithm was trained with knn implemented in the caret library. Different values of k were investigated in the experiments: {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 250, 300}. The optimal value, and thus the final form of the model, was determined as that giving the maximum value according to (6) [3,5,29].
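The original models were trained with the knn implementation in the R caret library; purely as an illustration of the grid search over k combined with the selection criterion, a Python sketch using scikit-learn (with synthetic data standing in for the household features) is shown below.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# Illustrative data standing in for the normalized household features and a binary target.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=1000) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

best = None
for k in (5, 10, 25, 50, 100, 200):                       # subset of the grid used in the paper
    model = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    auc_va = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    score = 0.5 * auc_va - 0.5 * abs(auc_tr - auc_va)     # criterion (6), as reconstructed
    if best is None or score > best[0]:
        best = (score, k)
print("chosen k:", best[1])
```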

5.3.3. Support Vector Classification

Support Vector learning is based on simple ideas which originated in statistical learning theory [31]. The simplicity comes from the fact that Support Vector Machines (SVMs) apply a simple linear method to the data but in a high-dimensional feature space non-linearly related to the input space. Moreover, even though we can think of SVMs as a linear algorithm in a high-dimensional space, in practice, it does not involve any computations in that high-dimensional space [28].
SVMs use an implicit mapping $\Phi$ of the input data into a high-dimensional feature space defined by a kernel function, i.e., a function returning the inner product $\langle \Phi(x_i), \Phi(x_{i'}) \rangle$ between the images of two data points $x_i, x_{i'}$ in the feature space. The learning then takes place in the feature space, and the data points only appear inside dot products with other points [32]. More precisely, if a projection $\Phi: X \to H$ is used, the dot product $\langle \Phi(x_i), \Phi(x_{i'}) \rangle$ can be represented by a kernel function $k$, which is computationally simpler than explicitly projecting $x_i$ and $x_{i'}$ into the feature space $H$ [28].
Training an SVM involves solving a quadratic optimization problem. Using a standard quadratic problem solver would involve solving a large QP problem even for a moderately sized data set, including the computation of an $n \times n$ matrix in memory ($n$ being the number of training points). In general, predictions correspond to the decision function:
$$P_i = \operatorname{sign}\big(\langle w, \Phi(x_i)\rangle\big),$$
where the solution $w$ has an expansion $w = \sum_i \alpha_i \Phi(x_i)$ in terms of a subset of training patterns that lie on the margin [25].
In the case of the L2-norm soft margin classification, the primal optimization problem takes the form:
$$\text{minimize} \quad t(w, \xi) = \frac{1}{2}\,\|w\|^2 + \frac{C}{n}\sum_{i=1}^{n} \xi_i$$
$$\text{subject to} \quad L_i\big(\langle \Phi(x_i), w\rangle + b\big) \geq 1 - \xi_i, \qquad \xi_i \geq 0 \quad (i = 1, \ldots, n),$$
where $n$ is the number of training patterns, $L_i \in \{-1, 1\}$, and $C$ is the cost parameter that controls the penalty paid by the SVM for misclassifying a training point and thus the complexity of the prediction function. A high cost value $C$ will force the SVM to create a prediction function complex enough to misclassify as few training points as possible, while a lower cost parameter will lead to a simpler prediction function.
To construct the support vector machine model, C-SVR from the kernlab library with sequential minimal optimization (SMO) was used to solve the quadratic programming problem. Linear, polynomial (of degree 1, 2 and 3) and radial ($\gamma$ from 0.1 to 1 in steps of 0.2) kernel functions were used, and $\varepsilon$ (which defines the margin width for which the error function is zero) was taken from the set {0.1, 0.3, 0.5, 0.7, 0.9}. The regularization parameter $C$ that controls overfitting was set to one of the following values: {0, 0.2, 0.4, 0.6, 0.8, 1}. Finally, as in all previous cases, the model that maximized function (6) was chosen [29].
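Analogously to the KNN sketch, the following Python fragment (reusing the synthetic data defined there) illustrates a small grid over kernels and cost values scored with the same criterion; it uses scikit-learn's SVC as an illustrative stand-in for the kernlab implementation and omits C = 0, which is not admissible there.

```python
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Reusing X_tr, X_va, y_tr, y_va from the KNN sketch above.
candidates = []
for kernel, params in [("linear", {}),
                       ("poly", {"degree": 2}),
                       ("rbf", {"gamma": 0.3})]:
    for C in (0.2, 0.6, 1.0):                       # subset of the cost grid used in the paper
        model = SVC(kernel=kernel, C=C, probability=True, **params).fit(X_tr, y_tr)
        auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
        auc_va = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
        score = 0.5 * auc_va - 0.5 * abs(auc_tr - auc_va)   # criterion (6), as reconstructed
        candidates.append((score, kernel, C))
print("best configuration:", max(candidates))
```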

5.4. Classification Results

This section presents the application of the classification algorithms described in Section 5.3. For the sake of clarity and synthesis, the results are visualized and provided for the testing dataset only. However, detailed results for each algorithm and for the three feature sets are presented in Appendix B.
Additionally, Appendix C provides the final set of independent variables used in the classification models for each dependent variable.
As far as the summary results are concerned, Figure 5 shows the accuracy achieved by the algorithms (KNN, NNET and SVM), broken down by the three feature selection techniques: all variables, 8 GCA and 8 Boruta. From left to right are the results for family type, number of bedrooms, employment type, floor area, house type, number of appliances, householder age and house age. The whiskers represent standard deviations.
It can be observed that the methods achieve approximately 90% accuracy for the classification of the number of appliances and the age of the house, regardless of the classification algorithm. Family type is classified with nearly 75% accuracy. On the other hand, the most difficult characteristic for the algorithms to discover is the number of bedrooms, with the accuracy reaching only 50%.
In terms of the different approaches to feature selection, it was observed that the proposed GCA-based procedure (8-GCA), used for clustering the variables and selecting only two representatives per cluster, worked well and can be considered a viable feature selection technique. The broader set of all variables was relevant only for the classification of floor area.
The next figure, Figure 6, illustrates the AUC values for the classifiers. The AUC values across the analyzed household characteristics range from 0.52 (for the age of the house using KNN) to 0.82 (for family type, regardless of the classification algorithm). Overall, all variables are necessary to obtain a high AUC only for the classification of the main inhabitant's age and the floor area. For the other characteristics, using eight variables, from either GCA or Boruta, resulted in equally good classification as measured by AUC.
In general, the results indicate that the choice of a classification model should depend on the specific target application. In the experiment, it was observed that SVM and NNET stand out as the classifiers that achieve the best performance. However, the results may vary depending on the variable selection mechanism.

6. Conclusions

The approach presented in this paper shows that classification of households’ socio-demographic and dwelling characteristics based on the electricity consumption is feasible and gives the opportunity to derive additional knowledge about the customers.
In practice, such knowledge can motivate electricity providers to offer new and more customer-oriented energy services. With the growing liberalization of the energy market, premium and non-standard services may represent a competitive advantage in serving both existing and new customers.
The experimental results reported in Section 5 show that the selected classification algorithms can reveal household characteristics from electricity consumption data with fair accuracy. In general, the choice of a particular classifier should depend on the specific target application. In the experiment, it was observed that SVM and NNET delivered equally good performance; however, the results varied depending on the variable selection procedure. For six out of eight household characteristics, using only eight variables, from either GCA or Boruta, resulted in a satisfactory level of accuracy.
The GCA proposed in this article allowed general trends in the data to be grasped quickly and the attributes to be clustered, taking into account historical electricity usage. It is worth underlining that the method was competitive with the Boruta algorithm, which has its roots in random forests. The results obtained by grade analysis might be the basis not only for feature selection but also for customer segmentation.
Since the results are promising, we aim, as an extension to this research, to focus on a broader set of variables including external factors like weather information (including humidity, temperature, sunrises and sunsets) as well as holidays and observances (including school holidays). The other direction for future research may involve application of selected segmentation algorithms to extract homogeneous groups of customers and to look for specific socio-demographic characteristics within the clusters.

Author Contributions

K.G. prepared the simulation and analysis and wrote the 2nd, 5th and 6th section of the manuscript; M.S. wrote the 2nd section of the manuscript; T.Z. coordinated the main theme of the research and wrote the 1st, 3rd, 4th and 6th section of the manuscript. All authors have read and approved the final manuscript.

Funding

This study was co-funded by the Polish National Science Centre (NCN), Grant No. 2016/21/N/ST8/02435.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. AUC values for variables grouped into four clusters.
Cluster: 1
Variable | Family | Bedrooms | Age_Person | Employ | House_Type | Age_House | Appliances | Floor_Area
r_var_wd_we0.4670.4950.4950.4820.5080.4880.4680.498
number_zeros0.5130.5060.5060.4820.5080.5030.490.5
r_morning_noon_no_min0.5660.5390.5390.5090.4850.5170.5380.555
r_wd_morning_noon0.5680.520.520.6080.4870.5230.5420.546
r_wd_evening_noon0.490.4990.4990.5950.5050.5460.5260.521
r_evening_noon_no_min0.5250.510.510.6380.4930.540.5130.519
r_we_morning_noon0.610.5180.5180.6440.5490.5170.4950.533
width_peaks0.5140.5380.5380.5210.5770.470.4190.514
r_max_wd_we0.530.5080.5080.5130.5230.5010.4910.522
r_morning_noon0.5890.520.520.4940.5270.5210.5380.552
const_time0.5470.5420.5420.5620.5110.5410.5570.537
r_evening_noon0.4960.5010.5010.530.5080.5090.530.52
r_min_mean0.5130.5530.5530.6330.5280.4870.5810.537
r_we_evening_noon0.5150.4950.4950.4840.5060.5120.5180.487
r_mean_max_no_min0.5540.5790.5790.5660.5720.5460.5620.536
r_wd_night_day0.5840.5190.5190.550.5570.5210.6310.514
value_min_guess0.4810.5330.5330.5020.5790.5160.5770.509
first_above_base0.5350.5150.5150.5310.5240.5680.5520.522
r_night_day0.5880.520.520.4990.5590.5190.5660.521
dist_big_v0.5440.5260.5260.5020.5180.5120.540.522
r_noon_wd_we0.5120.5020.5020.5140.4950.4960.5370.536
r_we_night_day0.5740.5180.5180.5650.550.5150.5690.542
r_afternoon_wd_we0.5080.4950.4950.5010.5020.5050.5370.485
time_above_base20.510.5230.5230.5570.5510.5380.5670.547
number_big_peaks0.6180.5410.5410.5040.510.5330.4810.512
r_evening_wd_we0.4780.5080.5080.50.5110.50.5060.537
r_night_wd_we0.5040.50.50.5080.5330.5030.4880.554
Cluster: 2
Variable | Family | Bedrooms | Age_Person | Employ | House_Type | Age_House | Appliances | Floor_Area
number_small_peaks0.5690.5460.5460.5220.5160.5270.5260.508
s_num_peaks0.5690.5460.5460.5220.5160.5270.5260.508
r_min_wd_we0.5080.510.510.5110.5520.5120.5030.537
r_morning_wd_we0.5670.4830.4830.5790.5440.5090.4670.499
t_daily_max0.5020.5060.5060.4940.5070.5230.5160.55
s_cor_we0.5250.5190.5190.5330.5060.5230.540.515
s_cor_wd_we0.5480.5410.5410.5620.5110.5030.5440.542
percent_above_base0.630.5850.5850.4990.5420.5110.5710.547
s_cor_wd0.5490.5450.5450.550.5220.5210.5840.517
t_above_mean0.5690.5560.5560.5580.5230.5410.5420.516
ts_acf_mean3h0.5470.5610.5610.5180.5610.520.5650.527
t_daily_min0.5440.5490.5490.530.5350.5070.5550.548
ts_acf_mean3h_weekday0.5890.5650.5650.4970.520.510.5740.526
Cluster: 3
Variable | Family | Bedrooms | Age_Person | Employ | House_Type | Age_House | Appliances | Floor_Area
r_mean_max0.570.590.590.5270.5870.5280.5980.539
t_above_base0.7180.5840.5840.5190.5020.5160.5520.528
r_day_night_no_min0.5820.5290.5290.5530.5120.5390.5680.506
wide_peaks0.4570.5410.5410.4970.5770.5080.5880.515
c_max0.7580.6350.6350.6360.5450.5460.6340.557
c_wd_max0.7570.630.630.6420.5380.5570.6290.544
c_we_max0.7460.6340.6340.6230.5460.5380.6160.541
s_max_avg0.7830.6470.6470.650.5510.5490.6520.543
value_above_base0.7790.650.650.6090.550.5590.6320.54
c_sm_max0.7660.6460.6460.6390.5640.5510.6530.551
c_min0.6580.6410.6410.5740.5780.520.6350.533
sm_variety0.7310.6310.6310.5840.5750.5170.6120.56
Cluster: 4
Variable | Family | Bedrooms | Age_Person | Employ | House_Type | Age_House | Appliances | Floor_Area
c_wd_min0.6610.6470.6470.5720.5850.5190.6610.514
c_we_evening0.7370.6490.6490.630.5730.5410.6190.554
c_evening0.7640.6610.6610.6450.5830.5410.6450.557
c_wd_evening0.7650.6570.6570.6450.5810.5410.6410.553
c_evening_no_min0.7610.650.650.6450.5710.5420.6240.548
b_day_diff0.7440.640.640.610.5660.5450.6470.552
c_wd_night0.6610.6330.6330.5760.6110.5050.6540.54
c_afternoon0.6540.6330.6330.5780.6150.50.6520.522
c_night0.6540.6330.6330.5780.6150.50.6520.522
b_day_weak0.7140.6310.6310.6050.5650.5430.630.557
c_wd_morning0.6840.6280.6280.5990.5870.4980.6220.55
c_morning0.6730.6290.6290.5850.6010.4980.6240.55
c_weekend0.7420.6570.6570.6030.5970.4890.6410.528
c_we_morning0.6270.6170.6170.5410.6150.5040.6120.535
c_we_min0.6480.640.640.5720.6060.490.6290.549
c_we_night0.6390.6280.6280.5790.6060.4940.6380.494
c_we_afternoon0.7490.6350.6350.6030.5590.5350.6110.529
s_min_avg0.6650.6550.6550.5750.6170.4920.6590.526
c_week0.7610.6690.6690.6030.5990.4960.670.555
c_night_no_min0.6380.6060.6060.5660.5940.5120.6230.521
s_diff0.7650.6680.6680.60.5940.4990.6680.554
c_weekday0.7650.6680.6680.60.5940.4990.6680.554
bg_variety0.8060.6570.6570.6030.5610.5250.6310.546
n_d_diff0.6360.6030.6030.5670.590.5070.6240.512
c_morning_no_min0.6760.6120.6120.5810.5780.5010.590.548
s_q10.7150.6650.6650.5730.6120.4980.6570.515
c_we_noon0.710.6250.6250.5570.5710.5170.610.541
c_wd_afternoon0.7660.6460.6460.5740.5620.5410.6480.553
s_q30.7430.6620.6620.5880.60.4940.660.554
c_noon0.730.6440.6440.5360.5810.5050.630.538
s_q20.7590.6620.6620.5650.5890.4860.6510.532
c_wd_noon0.7170.6360.6360.5180.5730.50.6220.532
c_noon_no_min0.7150.6240.6240.5130.5570.4980.6040.528
ts_stl_varRem0.7480.6320.6320.6460.5560.4940.6160.541
s_var_we0.7350.6350.6350.6230.5530.4910.610.535
t_above_1kw0.740.6550.6550.6050.5950.4990.6680.549
s_variance0.750.6410.6410.6350.5630.4990.6450.545
s_var_wd0.7520.6370.6370.6340.5590.5040.6440.545
t_above_2kw0.7450.6510.6510.6320.5730.4960.6570.53

Appendix B

Table A2. Classification results for each dependent variable.
Training/Validation Sample
AC | AUC
Model for familyAll variablesANN (iteration = 17, neurons = 9)0.766 (±0.015)/0.731 (±0.025)0.854 (±0.016)/0.822 (±0.029)
KNN (k = 260)0.722 (±0.017)/0.701 (±0.026)0.806 (±0.019)/0.787 (±0.031)
SVM (kernel = polynomial, degree = 1, C = 0.3, gamma = 0.1)0.778 (±0.016)/0.735 (±0.026)0.825 (±0.017)/0.808 (±0.033)
8 best variables based on AUC and GCAANN (iteration = 28, neurons = 7)0.755 (±0.016)/0.736 (±0.025)0.834 (±0.019)/0.812 (±0.031)
KNN (k = 280)0.776 (±0.016)/0.759 (±0.025)0.831 (±0.019)/0.811 (±0.033)
SVM (kernel = sigmoid, degree = 1, C = 0.9, gamma = 0.1)0.673 (±0.018)/0.668 (±0.027)0.798 (±0.019)/0.794 (±0.025)
8 best variables based on BorutaANN (iteration = 28, neurons = 14)0.769 (±0.016)/0.740 (±0.025)0.847 (±0.016)/0.817 (±0.031)
KNN (k = 160)0.754 (±0.016)/0.737 (±0.025)0.833 (±0.017)/0.803 (±0.031)
SVM (kernel = sigmoid, degree = 1, C = 0.3, gamma = 0.1)0.761 (±0.016)/0.750 (±0.025)0.826 (±0.018)/0.800 (±0.032)
Training/Validation Sample
AC | AUC
Model for bedroomsAll variablesANN (iteration = 2217, neurons = 4)0.493 (±0.018)/0.509 (±0.028)0.700 (±0.013)/0.674 (±0.024)
KNN (k = 250)0.494 (±0.019)/0.496 (±0.028)0.668 (±0.012)/0.660 (±0.024)
SVM (kernel = sigmoid, degree = 1, C = 0.1, gamma = 0.1)0.494 (±0.018)/0.508 (±0.028)0.674 (±0.014)/0.656 (±0.023)
8 best variables based on AUC and GCAANN (iteration = 19, neurons = 6)0.482 (±0.019)/0.492 (±0.028)0.683 (±0.013)/0.669 (±0.025)
KNN (k = 300)0.491 (±0.018)/0.505 (±0.028)0.685 (±0.013)/0.657 (±0.025)
SVM (kernel = polynomial, degree = 1, C = 0.9, gamma = 0.9)0.490 (±0.018)/0.504 (±0.028)0.667 (±0.012)/0.664 (±0.025)
8 best variables based on BorutaANN (iteration = 26, neurons = 9)0.494 (±0.018)/0.507 (±0.028)0.683 (±0.013)/0.665 (±0.025)
KNN (k = 300)0.492 (±0.019)/0.514 (±0.028)0.687 (±0.013)/0.667 (±0.024)
SVM (kernel = polynomial, degree = 3, C = 0.7, gamma = 0.7)0.486 (±0.018)/0.512 (±0.028)0.679 (±0.011)/0.667 (±0.021)
Training/Validation Sample
AC | AUC
Model for age_personAll variablesANN (iteration = 16, neurons = 3)0.678 (±0.017)/0.683 (±0.027)0.708 (±0.016)/0.690 (±0.026)
KNN (k = 90)0.670 (±0.017)/0.670 (±0.027)0.713 (±0.017)/0.673 (±0.028)
SVM (kernel = polynomial, degree = 1, C = 0.1, gamma = 0.9)0.674 (±0.017)/0.678 (±0.027)0.726 (±0.013)/0.691 (±0.023)
8 best variables based on AUC and GCAANN (iteration = 27, neurons = 4)0.666 (±0.017)/0.674 (±0.027)0.666 (±0.019)/0.625 (±0.029)
KNN (k = 260)0.665 (±0.018)/0.680 (±0.026)0.663 (±0.021)/0.614 (±0.030)
SVM (kernel = polynomial, degree = 2, C = 0.93, gamma = 0.1)0.666 (±0.018)/0.680 (±0.027)0.639 (±0.023)/0.613 (±0.036)
8 best variables based on BorutaANN (iteration = 23, neurons = 9)0.666 (±0.017)/0.669 (±0.027)0.699 (±0.017)/0.670 (±0.028)
KNN (k = 300)0.665 (±0.017)/0.671 (±0.027)0.698 (±0.019)/0.660 (±0.025)
SVM (kernel = polynomial, degree = 3, C = 0.5, gamma = 0.1)0.662 (±0.017)/0.671 (±0.027)0.659 (±0.016)/0.658 (±0.025)
Training/Validation Sample
AC | AUC
Model for employAll variablesANN (iteration = 13, neurons = 8)0.696 (±0.017)/0.676 (±0.027)0.754 (±0.017)/0.732 (±0.027)
KNN (k = 140)0.674 (±0.017)/0.655 (±0.027)0.734 (±0.017)/0.711 (±0.025)
SVM (kernel = linear, degree = 1, C = 1, gamma = 1)0.703 (±0.017)/0.663 (±0.027)0.758 (±0.018)/0.728 (±0.034)
8 best variables based on AUC and GCAANN (iteration = 6, neurons = 11)0.678 (±0.018)/0.671 (±0.027)0.713 (±0.018)/0.712 (±0.030)
KNN (k = 260)0.682 (±0.018)/0.671 (±0.027)0.734 (±0.018)/0.713 (±0.027)
SVM (kernel = polynomial, degree = 3, C = 0.1, gamma = 0.5)0.672 (±0.017)/0.663 (±0.027)0.726 (±0.020)/0.713 (±0.031)
8 best variables based on BorutaANN (iteration = 5, neurons = 13)0.652 (±0.018)/0.655 (±0.027)0.704 (±0.018)/0.702 (±0.030)
KNN (k = 300)0.677 (±0.017)/0.662 (±0.027)0.723 (±0.019)/0.703 (±0.031)
SVM (kernel = sigmoid, degree = 1, C = 0.9, gamma = 0.9)0.678 (±0.017)/0.666 (±0.027)0.718 (±0.021)/0.704 (±0.030)
Training/Validation Sample
AC | AUC
Model for floor_areaAll variablesANN (iteration = 17, neurons = 9)0.622 (±0.018)/0.587 (±0.028)0.604 (±0.033)/0.594 (±0.038)
KNN (k = 260)0.598 (±0.018)/0.585 (±0.028)0.681 (±0.031)/0.573 (±0.055)
SVM (kernel = sigmoid, degree = 1, C = 0.7, gamma = 0.1)0.609 (±0.018)/0.598 (±0.028)0.587 (±0.053)/0.627 (±0.057)
8 best variables based on AUC and GCAANN (iteration = 28, neurons = 7)0.613 (±0.018)/0.578 (±0.028)0.580 (±0.033)/0.566 (±0.063)
KNN (k = 280)0.585 (±0.018)/0.571 (±0.028)0.692 (±0.026)/0.560 (±0.064)
SVM (kernel = polynomial, degree = 1, C = 0.9, gamma = 0.1)0.599 (±0.018)/0.583 (±0.028)0.574 (±0.044)/0.575 (±0.061)
8 best variables based on BorutaANN (iteration = 28, neurons = 14)0.603 (±0.017)/0.592 (±0.027)0.583 (±0.034)/0.583 (±0.049)
KNN (k = 160)0.591 (±0.018)/0.571 (±0.028)0.625 (±0.032)/0.576 (±0.083)
SVM (kernel = polynomial, degree = 1, C = 0.9, gamma = 0.5)0.593 (±0.018)/0.586 (±0.027)0.579 (±0.032)/0.584 (±0.055)
Training/Validation Sample
AC | AUC
Model for appliancesAll variablesANN (iteration = 19, neurons = 1)0.908 (±0.011)/0.905 (±0.017)0.686 (±0.048)/0.566 (±0.088)
KNN (k = 40)0.908 (±0.011)/0.905 (±0.017)0.784 (±0.023)/0.591 (±0.126)
SVM (kernel = polynomial, degree = 1, C = 0.3, gamma = 0.9)0.908 (±0.011)/0.905 (±0.017)0.596 (±0.060)/0.616 (±0.078)
8 best variables based on AUC and GCAANN (iteration = 12, neurons = 2)0.908 (±0.011)/0.905 (±0.017)0.605 (±0.055)/0.566 (±0.111)
KNN (k = 70)0.908 (±0.011)/0.905 (±0.017)0.766 (±0.022)/0.606 (±0.125)
SVM (kernel = polynomial, degree = 1, C = 0.5, gamma = 0.3)0.908 (±0.011)/0.905 (±0.017)0.659 (±0.049)/0.654 (±0.099)
8 best variables based on BorutaANN (iteration = 11, neurons = 7)0.908 (±0.011)/0.905 (±0.017)0.650 (±0.056)/0.607 (±0.080)
KNN (k = 120)0.908 (±0.011)/0.905 (±0.017)0.740 (±0.024)/0.594 (±0.092)
SVM (kernel = radial, degree = 1, C = 1, gamma = 0.9)0.908 (±0.011)/0.905 (±0.017)0.666 (±0.068)/0.667 (±0.041)
Training/Validation Sample
AC | AUC
Model for age_houseAll variablesANN (iteration = 17, neurons = 9)0.900 (±0.011)/0.899 (±0.018)0.563 (±0.029)/0.564 (±0.042)
KNN (k = 260)0.876 (±0.012)/0.870 (±0.020)0.616 (±0.035)/0.525 (±0.047)
SVM (kernel = sigmoid, degree = 1, C = 0.3, gamma = 0.5)0.900 (±0.011)/0.899 (±0.018)0.548 (±0.033)/0.558 (±0.038)
8 best variables based on AUC and GCAANN (iteration = 28, neurons = 15)0.900 (±0.011)/0.899 (±0.018)0.593 (±0.032)/0.575 (±0.045)
KNN (k = 280)0.871 (±0.013)/0.878 (±0.019)0.625 (±0.030)/0.570 (±0.046)
SVM (kernel = polynomial, degree = 3, C = 0.3, gamma = 0.5)0.900 (±0.011)/0.899 (±0.018)0.586 (±0.029)/0.583 (±0.049)
8 best variables based on BorutaANN (iteration = 28, neurons = 2)0.900 (±0.011)/0.899 (±0.018)0.581 (±0.033)/0.568 (±0.045)
KNN (k = 160)0.873 (±0.013)/0.865 (±0.021)0.606 (±0.029)/0.561 (±0.051)
SVM (kernel = polynomial, degree = 1, C = 0.1, gamma = 0.1)0.900 (±0.011)/0.899 (±0.018)0.558 (±0.028)/0.563 (±0.047)
Training/Validation Sample
AC | AUC
Model for house_typeAll variablesANN (iteration = 10, neurons = 13)0.611 (±0.017)/0.606 (±0.028)0.650 (±0.020)/0.616 (±0.029)
KNN (k = 300)0.598 (±0.018)/0.559 (±0.027)0.626 (±0.020)/0.587 (±0.032)
SVM (kernel = sigmoid, degree = 1, C = 0.1, gamma = 0.5)0.590 (±0.018)/0.597 (±0.028)0.606 (±0.020)/0.596 (±0.038)
8 best variables based on AUC and GCAANN (iteration = 13, neurons = 7)0.600 (±0.018)/0.596 (±0.028)0.632 (±0.019)/0.604 (±0.029)
KNN (k = 210)0.602 (±0.018)/0.590 (±0.027)0.628 (±0.023)/0.597 (±0.031)
SVM (kernel = polynomial, degree = 3, C = 0.1, gamma = 0.5)0.615 (±0.018)/0.620 (±0.028)0.679 (±0.025)/0.648 (±0.031)
8 best variables based on BorutaANN (iteration = 25, neurons = 2)0.603 (±0.018)/0.590 (±0.028)0.628 (±0.021)/0.595 (±0.030)
KNN (k = 240)0.602 (±0.018)/0.590 (±0.028)0.627 (±0.019)/0.600 (±0.027)
SVM (kernel = sigmoid, degree = 1, C = 0.5, gamma = 0.7)0.599 (±0.018)/0.590 (±0.027)0.619 (±0.021)/0.585 (±0.034)

Appendix C

Table A3. Final set of independent variables for classification models for each dependent variable.
Family | Bedrooms
Variable | AUC | Cluster | Variable | AUC | Cluster
number_big_peaks0.6181r_mean_max_no_min0.5781
r_we_morning_noon0.6091r_min_mean0.5531
percent_above_base0.6302percent_above_base0.5842
ts_acf_mean3h_weekday0.582ts_acf_mean3h_weekday0.5652
s_max_avg0.7823value_above_base0.6493
c_min0.6583c_min0.6403
bg_variety0.8064c_week0.6684
c_wd_min0.6604c_wd_min0.6464
Age_Person | Employ
Variable | AUC | Cluster | Variable | AUC | Cluster
r_mean_max_no_min0.5781r_evening_noon_no_min0.6431
r_min_mean0.5531r_wd_morning_noon0.5951
percent_above_base0.5842r_morning_wd_we0.5782
ts_acf_mean3h_weekday0.5652s_cor_wd_we0.5622
value_above_base0.6493s_max_avg0.6493
c_min0.6403sm_variety0.5833
c_week0.6684ts_stl_varRem0.6464
c_wd_min0.6464c_wd_morning0.5984
House_Type | Age_House
Variable | AUC | Cluster | Variable | AUC | Cluster
value_min_guess0.5781first_above_base0.5671
width_peaks0.5771r_wd_evening_noon0.5461
ts_acf_mean3h0.5612t_above_mean0.5402
r_min_wd_we0.5522number_small_peaks0.5272
r_mean_max0.5863value_above_base0.5593
c_min0.5783r_day_night_no_min0.5393
s_min_avg0.6164b_day_diff0.5444
s_q30.6004c_we_evening0.5404
Appliances | Floor_Area
Variable | AUC | Cluster | Variable | AUC | Cluster
r_wd_night_day0.6311r_morning_noon_no_min0.551
r_min_mean0.5801time_above_base20.541
s_cor_wd0.5842t_daily_max0.5492
percent_above_base0.5712t_daily_min0.5482
c_sm_max0.6533sm_variety0.5603
c_min0.6343c_max0.5563
c_week0.6694c_evening0.5574
c_wd_min0.6614c_wd_morning0.5494

References

  1. Chicco, G. Overview and performance assessment of the clustering methods for electrical load pattern grouping. Energy 2012, 421, 68–80. [Google Scholar] [CrossRef]
  2. Chicco, G.; Napoli, R.; Piglione, F.; Postolache, P.; Scutariu, M.; Toader, C. Load pattern-based classification of electricity customers. IEEE Trans. Power Syst. 2004, 192, 1232–1239. [Google Scholar] [CrossRef]
  3. Gajowniczek, K.; Ząbkowski, T. Short term electricity forecasting based on user behavior using individual smart meter data. Intell. Fuzzy Syst. 2015, 30, 223–234. [Google Scholar] [CrossRef]
4. Haben, S.; Singleton, C.; Grindrod, P. Analysis and clustering of residential customers energy behavioral demand using smart meter data. IEEE Trans. Smart Grid 2016, 7, 136–144.
5. Gajowniczek, K.; Ząbkowski, T. Electricity forecasting on the individual household level enhanced based on activity patterns. PLoS ONE 2017, 12, e0174098.
6. Sial, A.; Singh, A.; Mahanti, A.; Gong, M. Heuristics-Based Detection of Abnormal Energy Consumption. In International Conference on Smart Grid Inspired Future Technologies; Chong, P., Seet, B.C., Chai, M., Eds.; Springer: Cham, Switzerland, 2018; pp. 21–31.
7. Batra, N.; Singh, A.; Whitehouse, K. Creating a Detailed Energy Breakdown from just the Monthly Electricity Bill. In Proceedings of the 3rd International NILM Workshop, San Francisco, CA, USA, 14–15 May 2016.
8. Rashid, H.; Arjunan, P.; Singh, P.; Singh, A. Collect, compare, and score: A generic data-driven anomaly detection method for buildings. In Proceedings of the Seventh International Conference on Future Energy Systems Poster Sessions, Waterloo, ON, Canada, 21–24 June 2016.
9. Beckel, C.; Sadamori, L.; Santini, S. Automatic socio-economic classification of households using electricity consumption data. In Proceedings of the Fourth International Conference on Future Energy Systems, Waterloo, ON, Canada, 15 January 2013.
10. Hopf, K.; Sodenkamp, M.; Kozlovkiy, I.; Staake, T. Feature extraction and filtering for household classification based on smart electricity meter data. Comput. Sci. Res. Dev. 2016, 31, 141–148.
11. Poortinga, W.; Steg, L.; Vlek, C.; Wiersma, G. Household preferences for energy-saving measures: A conjoint analysis. J. Econ. Psychol. 2003, 24, 49–64.
12. Vassileva, I.; Campillo, J. Increasing energy efficiency in low-income households through targeting awareness and behavioral change. Renew. Energy 2014, 67, 59–63.
13. Ehrhardt-Martinez, K. Changing habits, lifestyles and choices: The behaviours that drive feedback-induced energy savings. In Proceedings of the 2011 ECEEE Summer Study on Energy Efficiency in Buildings, Toulon, France, 1–6 June 2011.
14. Chicco, G.; Napoli, R.; Postolache, P.; Scutariu, M.; Toader, C. Customer characterization options for improving the tariff offer. IEEE Trans. Power Syst. 2003, 18, 381–387.
15. Carroll, J.; Lyons, S.; Denny, E. Reducing household electricity demand through smart metering: The role of improved information about energy saving. Energy Econ. 2014, 45, 234–243.
16. Anda, M.; Temmen, J. Smart metering for residential energy efficiency: The use of community based social marketing for behavioural change and smart grid introduction. Renew. Energy 2014, 67, 119–127.
17. Hart, G.W. Nonintrusive Appliance Load Monitoring; IEEE: New York, NY, USA, 1992.
18. Zeifman, M.; Roth, K. Nonintrusive appliance load monitoring: Review and outlook. IEEE Trans. Consum. Electron. 2011, 57, 76–84.
19. Zoha, A.; Gluhak, A.; Imran, M.A.; Rajasegarar, S. Non-intrusive load monitoring approaches for disaggregated energy sensing: A survey. Sensors 2012, 12, 16838–16866.
20. Beckel, C.; Sadamori, L.; Staake, T.; Santini, S. Revealing household characteristics from smart meter data. Energy 2014, 78, 397–410.
21. Szczesny, W. On the performance of a discriminant function. J. Classif. 1991, 8, 201–215.
22. Kowalczyk, T.; Pleszczynska, E.; Ruland, F. Grade Models and Methods for Data Analysis: With Applications for the Analysis of Data Populations; Springer: Berlin/Heidelberg, Germany, 2004; Volume 151.
23. Ciok, A.; Kowalczyk, T.; Pleszczyńska, E. How a new statistical infrastructure induced a new computing trend in data analysis. In International Conference on Rough Sets and Current Trends in Computing; Springer: Berlin/Heidelberg, Germany, 1998.
24. Szczesny, W. Grade correspondence analysis applied to contingency tables and questionnaire data. Intell. Data Anal. 2002, 6, 17–51.
25. Program for Grade Data Analysis. Available online: gradestat.ipipan.waw.pl (accessed on 9 June 2018).
26. Kursa, M.B.; Rudnicki, W.R. Feature selection with the Boruta package. J. Stat. Softw. 2010, 36, 1–13.
27. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874.
28. Gajowniczek, K.; Ząbkowski, T. Simulation study on clustering approaches for short-term electricity forecasting. Complexity 2018, 2018, 3683969.
29. Gajowniczek, K.; Ząbkowski, T. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms. Energies 2017, 10, 1547.
30. Nguyen, B.; Morell, C.; De Baets, B. Large-scale distance metric learning for k-nearest neighbours regression. Neurocomputing 2016, 214, 805–814.
31. Davò, F.; Vespucci, M.T.; Gelmini, A.; Grisi, P.; Ronzio, D. Forecasting Italian electricity market prices using a Neural Network and a Support Vector Regression. In Proceedings of the 2016 AEIT International Annual Conference (AEIT), Capri, Italy, 5–7 October 2016.
32. Muandet, K.; Fukumizu, K.; Sriperumbudur, B.; Schölkopf, B. Kernel mean embedding of distributions: A review and beyond. Found. Trends Mach. Learn. 2017, 10, 1–141.
Figure 1. Hourly electricity consumption for various aggregation levels.
Figure 2. The initial over-representation map.
Figure 3. The final over-representation map with four clusters.
Figure 4. The ρ* values for different number of clusters.
Figure 5. Classification accuracy.
Figure 6. Area Under Receiver Operating Curve (AUC) values.
Table 1. List of 91 features used in the analysis.

Consumption (29): c_week, c_weekday, c_weekend, c_evening, c_morning, c_night, c_noon, c_min, c_max, c_we_max, c_we_evening, c_wd_evening, c_we_night, c_wd_night, c_we_morning, c_wd_morning, c_we_noon, c_wd_noon, c_we_afternoon, c_wd_afternoon, c_afternoon, c_we_min, c_wd_max, c_wd_min, c_sm_max, c_evening_no_min, c_morning_no_min, c_night_no_min, c_noon_no_min.

Ratios (23): r_night_day, r_morning_noon, r_evening_noon, r_mean_max, r_min_mean, r_evening_wd_we, r_night_wd_we, r_morning_wd_we, r_noon_wd_we, r_afternoon_wd_we, r_min_wd_we, r_max_wd_we, r_var_wd_we, r_we_night_day, r_wd_night_day, r_we_morning_noon, r_wd_morning_noon, r_we_evening_noon, r_wd_evening_noon, r_mean_max_no_min, r_evening_noon_no_min, r_morning_noon_no_min, r_day_night_no_min.

Statistical (15): s_variance, s_cor_wd, s_num_peaks, s_diff, s_q1, s_q2, s_q3, s_min_avg, s_max_avg, s_var_we, s_var_wd, s_cor_wd_we, s_cor_we, n_d_diff, number_zeros.

Temporal (24): t_above_base, t_above_1kw, t_above_2kw, t_above_mean, t_daily_max, t_daily_min, ts_acf_mean3h, ts_acf_mean3h_weekday, ts_stl_varRem, b_day_diff, b_day_weak, wide_peaks, width_peaks, sm_variety, bg_variety, time_above_base2, percent_above_base, value_above_base, const_time, value_min_guess, first_above_base, number_big_peaks, number_small_peaks, dist_big_v.
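Most of the features in Table 1 are simple aggregates and ratios of the half-hourly load series. The snippet below sketches how a few of them (c_week, c_night, r_night_day, s_variance, t_above_mean) might be derived with pandas; the data, the weekly horizon, and the day/night windows are assumptions for illustration, since the paper's exact window definitions are not restated here.

```python
# Sketch of computing a few Table 1 features from a half-hourly load series
# (assumed time windows; the paper's exact definitions may differ).
import numpy as np
import pandas as pd

# Placeholder half-hourly consumption for one household over one week (kWh).
idx = pd.date_range("2010-01-04", periods=7 * 48, freq="30min")
load = pd.Series(np.random.default_rng(2).random(len(idx)), index=idx)

night = load.between_time("01:00", "05:00")    # assumed night window
day = load.between_time("08:00", "20:00")      # assumed day window

features = {
    "c_week": load.sum(),                         # total weekly consumption
    "c_night": night.sum(),                       # consumption at night
    "r_night_day": night.mean() / day.mean(),     # night-to-day ratio
    "s_variance": load.var(),                     # variance of the series
    "t_above_mean": (load > load.mean()).mean(),  # share of time above the mean
}
print(features)
```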
Table 2. The sample matrix with the features extracted for each of the households.

Household | Feature_1 | Feature_2 | … | Feature_91
1 | 0.23 | 0.57 | … | 0.85
2 | 0.64 | 0.77 | … | 0.27
… | … | … | … | …
4182 | 0.51 | 0.73 | … | 0.63
Table 3. Questionnaire questions and their corresponding category labels.

Category | Person's Age | Number of Appliances
 | What age were you on your last birthday? | Approximately how many appliances are in your home?
1 | 18–35 | ≤8 appliances
2 | 36–65 | between 9 and 11
3 | 65+ | >11 appliances

Category | Number of Bedrooms | Floor Area
 | How many bedrooms are in your home? | Approximately what is the area of your home?
1 | ≤2 bedrooms | Not available
2 | 3 bedrooms | <100 m²
3 | 4 bedrooms | between 100 m² and 200 m²
4 | ≥5 bedrooms | >200 m²

Category | Employment | Family type
 | What is the employment status of the chief income earner in your household? | What best describes the people you live with?
1 | An employee, Self-employed (with employees), Self-employed (with no employees) | I live alone
2 | Unemployed (actively seeking work), Unemployed (not actively seeking work), Retired, Carer: Looking after relative family | All people in my home are over 15 years of age, both adults and children under 15 years of age live in my home

Category | House Age | House Type
 | Approximately how old is your home? | Which best describes your home?
1 | ≤30 years | Semi-detached house, Terraced house
2 | >30 years | Apartment, Detached house, Bungalow
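For modelling, the questionnaire answers in Table 3 are collapsed into the category labels shown above. A minimal sketch of such a mapping for two of the targets is given below; the function names and thresholds simply restate the table and are illustrative only.

```python
# Illustrative mapping of two questionnaire answers to Table 3 category labels.
def age_category(age: int) -> int:
    """Person's age -> category (1: 18-35, 2: 36-65, 3: 65+)."""
    if age <= 35:
        return 1
    if age <= 65:
        return 2
    return 3

def bedrooms_category(bedrooms: int) -> int:
    """Number of bedrooms -> category (1: <=2, 2: 3, 3: 4, 4: >=5)."""
    if bedrooms <= 2:
        return 1
    if bedrooms == 3:
        return 2
    if bedrooms == 4:
        return 3
    return 4

print(age_category(42), bedrooms_category(5))   # -> 2 4
```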
Table 4. Confusion matrix for binary classification.

 | Predicted Positive (P) | Predicted Negative (N)
Real Positive (P) | True positive (TP) | False negative (FN)
Real Negative (N) | False positive (FP) | True negative (TN)
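From the confusion matrix in Table 4, accuracy is (TP + TN) / (TP + TN + FP + FN), while the AUC [27] is computed from the ranking of predicted scores rather than from a single matrix. The short sketch below uses placeholder labels and scores to show both calculations.

```python
# Sketch: accuracy from the Table 4 confusion matrix and AUC from scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # placeholder labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])   # placeholder scores
y_pred = (y_score >= 0.5).astype(int)                          # 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
print("accuracy:", accuracy, "AUC:", roc_auc_score(y_true, y_score))
```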

