Article

Indirect Recognition of Predefined Human Activities

Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB—Technical University of Ostrava, 70833 Ostrava-Poruba, Czech Republic
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4829; https://doi.org/10.3390/s20174829
Submission received: 31 July 2020 / Revised: 19 August 2020 / Accepted: 25 August 2020 / Published: 26 August 2020
(This article belongs to the Special Issue Body Worn Sensors and Related Applications)

Abstract
The work investigates the application of artificial neural networks and logistic regression for the recognition of activities performed by room occupants. KNX (Konnex) standard-based devices were selected for smart home automation and data collection. The obtained data from these devices (Humidity, CO2, temperature) were used in combination with two wearable gadgets to classify specific activities performed by the room occupant. The obtained classifications can benefit the occupant by monitoring the wellbeing of elderly residents and providing optimal air quality and temperature by utilizing heating, ventilation, and air conditioning control. The obtained results yield accurate classification.

Graphical Abstract

1. Introduction

The availability of various affordable and cost-effective technologies for automation drives the rapid increase in smart homes. Such technologies provide the possibility of monitoring and tracking events such as unauthorized entry detection, the status of doors and windows, and presence monitoring. An increase in the number of sensors and integration with the Internet of Things (IoT) within smart homes creates new possibilities for improving the daily life of the residents, such as monitoring the activity and well-being of disabled people or seniors [1].
In recent years, health care and assisted living have gained much attention among researchers. In a case study, Panagopoulos et al. [2] presented a usability assessment of “Heart Around”, an integrated homecare solution incorporating communication functionalities, as well as health monitoring and emergency response features. Loukatos et al. [3] investigated educationally fruitful speech-based methods to assist people with special needs in caring for potted plants. Wiljer et al. [4] suggested improving health care by developing an artificial intelligence-enabled healthcare practice. Many works in the field of activity recognition emphasize fall detection [5,6,7]. Sadreazami et al. [5] proposed using standoff radar and a time series-based method for detecting fall incidents in human daily activities. A time series was obtained by summing all the range bins corresponding to the ultra-wideband radar return signals. Ahamed et al. [6] investigated accelerometer-based fall detection, applying Feed-Forward Neural Network and Long Short-Term Memory deep learning networks to detect falls. Dhiraj et al. [7] proposed two vision-based solutions for human fall detection using 360-degree videos: one using convolutional neural networks in 3D mode and another using a hybrid approach combining convolutional neural networks and long short-term memory networks.
On a larger scale, Hsueh et al. [8] adopted deep learning techniques to learn long-term dependencies from videos for human behavior recognition in a multi-view detection framework. However, camera-based solutions often create concerns regarding security and privacy. Therefore, several works are based on indirect occupancy monitoring. Szczurek et al. [9] investigated occupancy determination based on time series of CO2 concentration, temperature, and relative humidity. Other works [1] monitor daily living activities in smart home care using CO2 concentration. Vanus et al. [10] designed an indirect method for human presence monitoring in an intelligent building. Vanus et al. [11] used the IBM SPSS Modeler tool and neural networks for CO2 prediction within smart home care. Vanus et al. [12] compared neural networks, random trees, and linear regression for the purpose of indirect occupancy recognition in intelligent buildings. This paper builds on the above contributions by employing an identical KNX-based setup, with the significant difference of expanding occupancy monitoring to activity recognition.
The indirect recognition of human activity is one of the most highly anticipated research topics. Albert et al. [13] used mobile phones for activity recognition in Parkinson’s patients. Nweke et al. [14] reviewed deep learning algorithms for human activity recognition using mobile and wearable sensor networks. Lara et al. [15] reviewed human activity recognition using wearable sensors, and Yousefi et al. [16] reviewed behavior recognition using Wi-Fi channel state information. Minarno et al. [17] compared the performance of Logistic Regression and Support Vector Machine in recognizing activities such as lying, standing, sitting, walking, and walking upstairs or downstairs. Kwapisz et al. [18] proposed using logistic regression and multilayer perceptron with data obtained from cell phone accelerometers to recognize similar human activities. In a similar study, Bayat et al. [19] proposed using accelerometer data from smartphones to recognize more complex human activities such as running and dancing. Trost et al. [20] compared results obtained from hip- and wrist-worn accelerometer data for the recognition of seven classes of activities.
This study is aimed at taking the data analysis within smart homes beyond occupancy monitoring and fall detection. Although there are a few available works in the field of activity recognition, this study targets new types of recognizable activities beyond common walking, running, and climbing stairs. The proposed method employs KNX standard-based devices to obtain room air quality data (Humidity, CO2, temperature) and combines the obtained data with two wearable gadgets that provide movement-related data. KNX-based devices were selected due to properties such as cost-effectiveness, compatibility and wide availability within locations such as smart homes, office buildings, shopping centers, medical facilities, and industrial locations.
Initially, logistic regression-based models are developed (using IBM SPSS Statistics 26) to classify the obtained datasets. Logistic regression is one of the most used methods in the field of activity recognition. Therefore, it provides a good reference for the evaluation of the method using artificial neural networks. Ultimately, the article proposes to use artificial neural networks and the obtained datasets to classify several types of human daily activities: relaxing, eating, cleaning, exercising using a stationary bike, and using a computer. IBM SPSS Statistics 26 and IBM SPSS Modeler 18 were selected as suitable data analysis platforms to develop the required logistic regression and artificial neural network predictive models. In addition to monitoring the wellbeing of elderly residents, the obtained predictions can benefit the occupant by providing optimal air quality and temperature by utilizing heating, ventilation, and air conditioning control. The obtained models yield highly accurate predictions.

2. Materials and Methods

The proposed method contains three main stages: data collection, pre-processing, and predictive analytics (Figure 1). In the first stage, the KNX devices were employed to monitor the air quality of the room in terms of room temperature (°C), humidity level (%), and CO2 concentration level (ppm). The movements of the room occupant were monitored using two individual wearable gadgets based on an Inertial Measurement Unit (IMU). After data synchronization and dealing with the missing data, predictive analytics were applied. Figure 1 shows the application of logistic regression using IBM SPSS Statistics 26. A separate predictive model with binary output was dedicated to each activity class, where 0 represents false and 1 represents true. Since logistic regression is commonly used in this particular field of research, it provides a good benchmark for the evaluation of the artificial neural network-based method. Figure 2 shows the application of artificial neural networks using IBM SPSS Modeler 18. It can be observed that in the second approach a single output was used to determine the outcome of the predictive model.

2.1. Data Collection

The data collection was performed in laboratory EB312 at the new Faculty of Electrical Engineering and Computer Science building of the VSB—Technical University of Ostrava. The data collection was performed on 19 July 2019 (08:28:00 to 10:31:00) and 26 July 2019 (08:09:00 to 10:10:00). The activities were performed by a single occupant present in the room. The performed activities were divided into five classes, described in Table 1. These classes can simulate part of the daily activities performed in a single-occupant room.

2.1.1. KNX Technology

A KNX (Konnex) setup was used to monitor the experiment room. In general, KNX is an open standard (EN 50090 [21], ISO/IEC 14543 [22]) for commercial and domestic building automation in a variety of locations such as office buildings, shopping centers, medical facilities, and industrial locations. It can be used to control functions such as heating, cooling, ventilation, energy management, and lighting. The KNX bus system is a decentralized system with multi-master communication. KNX modules are commonly commissioned using the Engineering Tool Software (ETS). In addition to ETS, a .NET-based software application was developed [12] to connect the KNX-based devices with IBM cloud storage technology, enabling communication between the IBM Watson IoT platform and the KNX smart installation. The measurements of CO2 accumulation, indoor temperature, and humidity were performed using the MTN6005-0001 module. The measuring ranges of this device are listed in Table 2.

2.1.2. Wearable Gadgets

Two wearable gadgets were used to monitor the experimenter’s movements [23,24]. One was worn on the right hand and the other on the right leg (Figure 3). The wearable gadgets were based on the new generation of the Inertial Measurement Unit (IMU) developed by x-io Technologies, UK. The IMU is a compact data acquisition platform that combines diverse onboard sensors (as displayed in Table 3), and it is largely used for the evaluation of gait variability [24,25]. For this study, it comprises an 8-channel analog input and an SD card to store the data. The analog input of the IMU is equipped with a 10-bit AD converter that allows signals from a variety of modules to be acquired and converted. Table 3 shows the measured parameters and their units.

2.2. Pre-Processing

The wearable gadgets use an approximate data collection rate of 30 to 60 samples per second, while the KNX-based data collection rate is between 1 and 10 samples per minute. This large difference creates a database synchronization problem. Therefore, the data collected from the KNX devices were upsampled to match the fast rate of the wearable gadgets. A .NET-based script was used to perform the data synchronization. Missing data could result in algorithm failure or decrease the accuracy of the analysis. Therefore, the IBM SPSS software tool was set to automatically remove all records with missing data from the analysis. Using the IBM SPSS software tool, time-related variables were removed and the correct variable types (continuous and binary) were assigned to each parameter.
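The synchronization step described above can be sketched as follows. This is an illustrative Python/pandas sketch (the original work used a .NET script); the column names, sample values, and the 10 Hz IMU rate are hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical KNX samples: only a few samples per minute
knx = pd.DataFrame(
    {"co2_ppm": [610.0, 625.0, 640.0], "temp_c": [23.1, 23.2, 23.2]},
    index=pd.to_datetime(
        ["2019-07-19 08:28:00", "2019-07-19 08:28:30", "2019-07-19 08:29:00"]
    ),
)

# Hypothetical IMU samples: tens of samples per second (10 Hz here for brevity)
imu_index = pd.date_range("2019-07-19 08:28:00", periods=600, freq="100ms")
imu = pd.DataFrame({"acc_x": [0.01] * 600}, index=imu_index)

# Upsample the slow KNX data to the fast IMU timestamps by carrying the
# last known KNX reading forward (merge_asof matches the nearest previous record).
merged = pd.merge_asof(imu, knx, left_index=True, right_index=True)

# Drop any records that still contain missing values, mirroring the
# automatic removal of incomplete records performed in IBM SPSS.
merged = merged.dropna()
```

Carrying the last KNX reading forward is one reasonable interpretation of "expanding" the slow data; interpolation would be an alternative design choice.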

2.3. Predictive Analytics

Predictive modeling is the general concept of using large amounts of data to build models capable of making reliable predictions. In general, these models are based on variables (also known as predictors) that are most likely to influence the outcome [26]. Predictive models are widely applied in various applications such as weather forecasting [27,28,29], Bayesian spam filters [30,31,32,33], business [34,35,36,37], and fraud detection [38,39,40]. Predictive models typically include a machine learning algorithm that learns certain properties from a training dataset. The learning process can be applied using supervised learning [41,42], unsupervised learning [42], semi-supervised learning [43], or active learning. In the proposed method, supervised learning was employed by presenting a set of solved (labeled) examples to the model for training. Once the model has learned the pattern between the predictors and the outcome, it can solve similar predictions on its own.

2.3.1. Logistic Regression

Regression is one of the oldest and most often used algorithms in machine learning with a supervised learning strategy [44,45]. Linear regression and logistic regression are the two best-known types of regression. In general, linear regression is used for solving regression problems, whereas logistic regression is used for solving classification problems, such as predicting a categorical dependent variable with the help of independent variables or where the probabilities between two classes are required [45]. Logistic regression is used in various fields, including machine learning, most medical fields, and social sciences [46,47,48,49,50]. The weighted sum of inputs passes through the logistic function, Equation (1), which maps values into the interval between 0 and 1. The logistic function is a sigmoid function [51], and the obtained curve is called a sigmoid curve or S-curve (Figure 4).
The output of the binary logistic regression model can only be binary (either 0 or 1). Outputs with more than two values are modeled by multinomial logistic regression, and if the multiple categories are ordered, by ordinal logistic regression. Logistic regression is not a classifier by itself; it simply provides the probability of the output in terms of the inputs. However, it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as 1 and below the cutoff as 0; this is a common way to make a binary classifier. The general equation of logistic regression is provided by Equation (2).
f(x) = 1 / (1 + e^(−k(x − x₀)))    (1)

y = 1 / (1 + e^(−(β₀ + β₁x₁ + β₂x₂ + β₃x₃ + … + βₙxₙ)))    (2)
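As a minimal numerical illustration of Equations (1) and (2), the following sketch evaluates the logistic model and applies a 0.5 cutoff to obtain a binary class. The coefficient and input values are assumed for illustration, not those fitted in this study:

```python
import math

def logistic(z: float) -> float:
    """Sigmoid function mapping any real value into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_probability(betas, xs):
    """Equation (2): P(y = 1) given intercept betas[0] and coefficients betas[1:]."""
    z = betas[0] + sum(b * x for b, x in zip(betas[1:], xs))
    return logistic(z)

def classify(prob: float, cutoff: float = 0.5) -> int:
    """Turn the predicted probability into a binary class label."""
    return 1 if prob > cutoff else 0

# Hypothetical two-predictor model: intercept -1.0, coefficients 2.0 and 0.5
p = predict_probability([-1.0, 2.0, 0.5], [1.0, 2.0])
label = classify(p)  # p = logistic(2.0), above the 0.5 cutoff
```

Note that the cutoff of 0.5 matches the classification cutoff used in the SPSS analysis later in the article.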
Regression models can be created using multiple algorithms; these algorithms specify how independent variables are entered into the model [52,53,54,55,56]. The common algorithms are Enter (Regression) [56,57], Stepwise [58], Backward Elimination [59], and Forward Selection [60,61]. The Hosmer–Lemeshow test and the omnibus test are among the most common statistical tests used to examine the goodness of fit of logistic regression. The Hosmer–Lemeshow test compares the observed and expected event rates in subgroups of the model population, typically identifying the subgroups as the deciles of fitted risk values. Well-calibrated models are those with similar expected and observed event rates in their subgroups, where the expected probability of success is given by the logistic regression model equation. In general, the Hosmer–Lemeshow test is useful to determine whether the lack of fit (poor prediction) is significant, but it does not properly take overfitting into account. The omnibus test is a likelihood-ratio chi-square test of the current model versus the null (in this case, intercept-only) model. Generally, a significance value of less than 0.05 indicates that the current model outperforms the null model. The odds ratio is often used to quantify the strength of the association between two events. In logistic regression, the odds ratio shows the amount of increase in the output variable with every unit increase in a specific input variable. The odds ratio for a continuous independent variable can be defined as in Equation (3). This exponential relationship provides an interpretation for β₁: the odds are multiplied by e^β₁ for every 1-unit increase in x [62]. If a, b, c, and d are cells in a 2 × 2 contingency table, then Equation (4) describes the odds ratio for a binary independent variable.
odds ratio (OR) = odds(x + 1) / odds(x) = e^(β₀ + β₁(x + 1)) / e^(β₀ + β₁x) = e^β₁    (3)

odds ratio (OR) = ad / bc    (4)
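Equations (3) and (4) can be checked numerically. The sketch below uses assumed values (β₁ = 0.3 and an illustrative 2 × 2 table), not figures from this study:

```python
import math

def odds(beta0: float, beta1: float, x: float) -> float:
    """Odds of success at predictor value x for a one-predictor logistic model."""
    return math.exp(beta0 + beta1 * x)

# Equation (3): the odds ratio for a 1-unit increase in x equals e^beta1,
# regardless of the value of x at which it is evaluated.
beta0, beta1 = -2.0, 0.3  # hypothetical coefficients
or_continuous = odds(beta0, beta1, 5.0) / odds(beta0, beta1, 4.0)

# Equation (4): odds ratio from a 2x2 contingency table with cells a, b, c, d.
a, b, c, d = 20, 10, 5, 15  # hypothetical cell counts
or_binary = (a * d) / (b * c)
```

The key property shown is that the continuous odds ratio depends only on β₁, which is why Table 6 can report one odds ratio per parameter.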

2.3.2. Artificial Neural Network

Due to their power, flexibility, and ease of use, artificial neural networks are widely used [63,64,65,66,67,68,69]. Artificial neural networks obtain their knowledge from the learning process and then use interneuron connection strengths (known as synaptic weights) to store the obtained knowledge [70,71]. One of the most used classes of artificial neural networks is the multilayer perceptron, a feedforward neural network that belongs to deep learning. Deep learning utilizes a hierarchical arrangement of artificial neural network layers to carry out the process of machine learning. Unlike traditional programs, the hierarchical function of deep learning systems enables machines to process data with a nonlinear approach. The multilayer perceptron utilizes backpropagation for training [72,73,74]. Due to its multiple layers and nonlinear activation, a multilayer perceptron can distinguish data that are not linearly separable [75].
In deep learning, in addition to the input and output layers, the neural network contains multiple hidden layers, each of which can contain multiple neurons. The first layer of the neural network processes the raw input data and passes the result to the next layer as output. The second layer processes the previous layer’s information by incorporating additional information. This continues across all levels of the neural network, with each layer building on its previous layer. A multilayer perceptron artificial neural network with two hidden layers was chosen as a suitable deep learning method for this article (Figure 5).
The multilayer perceptron artificial neural network was implemented in the IBM SPSS Modeler 18 software. The IBM SPSS Modeler algorithms guide mathematically describes its multilayer perceptron model as follows [76]:
Input layer: J₀ = P units, a(0:1), …, a(0:J₀), with a(0:j) = x_j, where j indexes the units within the layer and x is the input vector.

ith hidden layer: Jᵢ units, a(i:1), …, a(i:Jᵢ), with a(i:k) = γᵢ(c(i:k)) and c(i:k) = Σ_{j=0}^{J(i−1)} ω(i:j,k) · a(i−1:j), where a(i−1:0) = 1, γᵢ is the activation function for layer i, and ω(i:j,k) is the weight leading from unit j of layer i − 1 to unit k. The hidden layers use the hyperbolic tangent as the activation function, γ(c) = tanh(c) = (e^c − e^(−c)) / (e^c + e^(−c)).

Output layer: J_I = R units, a(I:1), …, a(I:J_I), with a(I:k) = γ_I(c(I:k)) and c(I:k) = Σ_{j=0}^{J(I−1)} ω(I:j,k) · a(I−1:j), where a(I−1:0) = 1. The softmax function, γ(c_k) = e^(c_k) / Σ_j e^(c_j), is used as the output activation function.
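The forward pass described above can be sketched in NumPy as follows. The weights here are random placeholders and the layer sizes (21 inputs, two hidden layers of 10 and 8 units, 5 output classes) are assumed for illustration; this is not the trained SPSS model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(c):
    """Numerically stable softmax used at the output layer."""
    e = np.exp(c - c.max())
    return e / e.sum()

def mlp_forward(x, weights):
    """Forward pass: tanh activation in the hidden layers, softmax at the output.
    Each weight matrix has a leading row for the bias unit a(i-1:0) = 1."""
    a = x
    for i, w in enumerate(weights):
        c = np.concatenate(([1.0], a)) @ w  # prepend bias unit, then weighted sum
        a = np.tanh(c) if i < len(weights) - 1 else softmax(c)
    return a

# Hypothetical architecture: 21 inputs -> 10 -> 8 -> 5 activity classes
sizes = [21, 10, 8, 5]
weights = [rng.normal(size=(m + 1, n)) for m, n in zip(sizes[:-1], sizes[1:])]
probs = mlp_forward(rng.normal(size=21), weights)  # one probability per class
```

The softmax output yields one probability per activity class, which matches the single multi-class output used in the second approach (Figure 2).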
To evaluate the performance of predictive models, the methods of partitioning and scoring are commonly used. In the partitioning method, the dataset is randomly divided into training, testing, and validation partitions, so the models are trained, tested, and evaluated using different segments of the same dataset. Partitioning is mostly recommended for very large datasets. The scoring method uses entirely different datasets for training and evaluation: one dataset is used solely for training and a separate dataset for evaluation. Therefore, it provides a better indication of the real accuracy of the models.

3. Implementation and Results

This section discusses the implementation and results of the classifications performed by logistic regression and artificial neural networks (multilayer perceptron). Logistic regression is a commonly used classification method in the field of activity recognition. Therefore, it provides a good comparison point for the main proposed method using artificial neural networks.

3.1. Logistic Regression

The data obtained from measurements performed on 19 July 2019 (dataset A) and 26 July 2019 (dataset B) were analyzed using the IBM SPSS Statistics 26 software tool. Since IBM SPSS Modeler 18 does not natively include logistic regression, IBM SPSS Statistics 26 was used to perform the logistic regression analysis. In the first stage, datasets A and B were classified individually. The logistic regression models were developed using the Enter configuration, a classification cutoff of 0.5, and a maximum of 20 iterations.
The goodness of fit describes how well a statistical model fits a set of observations. The Hosmer–Lemeshow and omnibus tests were used to determine the goodness of fit. For a good fit, the Hosmer–Lemeshow test significance value should be more than 0.05 and the omnibus test significance value less than 0.05. These conditions were satisfied with large margins across all models. Table 4 and Table 5 show the classification accuracy for data obtained from measurements of dataset A (a total of 296,188 records) and dataset B (a total of 290,174 records). The results show that all models obtained classification accuracy above 91.2%. In the analysis performed on the measurement interval of dataset A (Table 4), activity Class 3 shows almost complete accuracy (only two wrong predictions in 296,188 records) and activity Class 4 shows the lowest accuracy (97.4%).
Similar characteristics can be observed in the dataset B results (Table 5), where Class 3 yields the highest accuracy (99.9%) and Class 4 the lowest (91.2%). Table 1 indicates that Class 4 is dedicated to cleaning activities such as wiping tables and vacuum cleaning. Therefore, the lower accuracy could be the direct result of less consistent movement during this activity class. Using a stationary bicycle (Class 5 activity) is a high-energy activity, and on the contrary, relaxing with minimal movements (Class 1) is a low-energy activity. Regardless of energy levels, both of these activities provide consistent movements that directly translate to a more recognizable pattern within the data. This can be easily observed in the classification results (99.3% and 98.9% for Class 5 and 98.9% and 99.5% for Class 1). Summing up the classification accuracy resulted in 97.8% of correctly classified records.
Table 6 shows the odds ratios of the different parameters in the developed models. The odds ratio shows the amount of increase in the output variable with every unit increase in a specific input variable. Simply put, the output variable is more strongly associated with changes in parameters with a larger absolute odds ratio. Table 6 shows consistent odds ratios for the gyroscope and magnetometer (both devices and across all three axes) and CO2 across all models. Therefore, these parameters affect all models with similar significance.
By comparing each model with its alternative-interval counterpart, it can be observed that, except for Model 5, the temperature has a similar odds-ratio range on both datasets. Models 1, 2, and 4 share a very high odds ratio for temperature, while model 3 shows a null effect. Table 6 also shows that this null effect is shared with the models based on dataset A. It is also apparent that the KNX-based data do not influence the recognition of Class 3 activity. On the other hand, the accelerometer y-axis (both devices) shares a similarly large odds ratio across both models, and in the case of exercising using a stationary bicycle (Class 5 activity), this large effect can be observed on the x-axis of the leg accelerometer.
With few exceptions, the odds ratios of both datasets remain within a similar range, which indicates the consistency of the analysis. Overall, it can be observed that activity Class 1 is mainly affected by temperature, and activity Classes 2, 4, and 5 are mostly affected by temperature and accelerometer-based data. The conclusions obtained from the odds ratios were verified by the regression weights, the test of significance, and the Wald statistic. In the last stage of the analysis, the developed models were further evaluated with alternative datasets (scoring), resulting in a significant drop in prediction accuracy (up to a 50% decrease). This is an indication of overfitting. Although the Hosmer–Lemeshow and omnibus tests are good indicators of the goodness of fit, they do not detect overfitting.

3.2. Artificial Neural Network

The IBM SPSS Modeler 18 software tool was used to create the multilayer perceptron artificial neural networks. Figure 6 displays the data stream developed to train, test, and validate the predictive models. In the first stage, datasets A and B were imported into the data stream. To maintain the integrity of the analysis, the Excel node was configured to exclude (delete) the records containing missing values. The Filter and Type nodes were utilized to select the relevant input data, assign the correct variable types (continuous, categorical, etc.), and predefine the inputs and outputs.
The data stream uses a Partition node with a partitioning ratio of 40% training, 30% testing, and 30% validation. The random seed of 229176228 was set automatically by the Partition node. The stream continues with the artificial neural network modeling node (training), which generates the predictive model (displayed as a nugget gem). In the artificial neural network training settings, the stopping rules “use minimum accuracy” and “customized number of maximum training cycles” were disabled, and the maximum training time per component model was set to 15 min. Additionally, overfit prevention was set to 30%. The predictive model (nugget gem) can be connected to additional nodes to export its predictions to Excel files (using the Excel node) or to analyze them using built-in functions. Table 7 and Table 8 show the results of the training stage.
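The 40/30/30 partitioning with a fixed random seed can be reproduced outside SPSS roughly as follows. This is an illustrative Python sketch; SPSS Modeler's internal random number generator will produce a different assignment for the same seed, so only the proportions and reproducibility are mirrored here:

```python
import numpy as np

def partition(n_records: int, seed: int = 229176228):
    """Randomly assign record indices to training (40%), testing (30%),
    and validation (30%) partitions, reproducibly for a given seed."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_records)
    n_train = int(0.4 * n_records)
    n_test = int(0.3 * n_records)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    validation = idx[n_train + n_test:]
    return train, test, validation

# Dataset A record count from the logistic regression analysis
train, test, validation = partition(296188)
```

Fixing the seed makes the partition reproducible across runs, which is why the seed value set by the Partition node is reported in the text.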
Table 7 and Table 8 show the accuracy of the validation partitions for each class, in addition to the overall accuracy, which is based on all partitions and parameters. Table 7 is based on training results using dataset A and Table 8 on training results using dataset B. Similar results can be observed for both training intervals. Model 3 shows the highest and model 11 the lowest accuracy levels. It can also be observed that the accuracies of models 1 to 6 are above 99.50%, while models 7 to 11 show poor results. Further investigation showed that the models with a lower neuron count are mainly based on KNX-based parameters (CO2, temperature, humidity), which change at a slower rate. An example of the predictor importance for model 3 trained with dataset A is provided in Figure 7.
Meanwhile, the models with a higher neuron count are mainly based on wearable gadget data (accelerometer sensors and gyroscopes), which change at a much faster pace. An example of the predictor importance for model 11 trained with dataset A is provided in Figure 8. Additionally, Figure 8 shows that the predictor importance is more balanced in comparison with Figure 7.
As mentioned earlier, partitioning is a commonly used evaluation method. However, scoring is commonly applied to obtain a better understanding and estimate of the real performance of the models. In scoring, two different datasets are introduced: one dataset is used solely for training and a separate dataset for evaluation, i.e., the predictive model never sees the input data used for evaluation. Figure 9 shows the scoring diagram implemented in IBM SPSS Modeler 18. The top row of nodes is used for the training data and the bottom row for the evaluation data.
Table 9 and Table 10 present the results obtained from the scoring stage, where Table 9 shows the models trained with dataset A and evaluated with dataset B, and Table 10 shows the models trained with dataset B and evaluated with dataset A. In Table 9, for models 1 to 6 and activity Classes 1, 3, and 5, consistent and acceptable results can be observed. The highest accuracy for activity Class 2 was achieved by model 1 (94.26%). Meanwhile, model 2 showed the highest accuracy (64%) for activity Class 4. In general, the accuracies of Class 2 and Class 4 are inconsistent. Table 10 shows that models 2 and 3 have better overall accuracy. In particular, model 2 has the highest accuracy for activity Classes 1, 2, 3, and 5.

4. Discussion

This article proposes recognizing human activities in a single-occupant room using room air quality data (humidity, CO2, temperature) in combination with movement-based data (accelerometer, gyroscope, magnetometer). The measured data were classified using logistic regression and multilayer perceptron artificial neural networks. Logistic regression is commonly used in this particular field of research. Therefore, it provides a good reference point for the evaluation of the artificial neural network-based method. The Hosmer–Lemeshow and omnibus tests showed a good fit for the models. The results showed an average classification accuracy of 97.8% and a minimum accuracy of 91.2%. The accuracy of the models based on dataset A ranged between 97.4% and 100%, and for dataset B, the accuracy ranged between 91.2% and 99.9%. In both datasets, Class 3 yields the highest accuracy and Class 4 the lowest. The main contributor to the reduced classification accuracy was identified as the less consistent movements during the cleaning activity. On the contrary, relaxing (Class 1) and using a stationary bike (Class 5) are relatively consistent activities (with regard to movement patterns); hence, higher accuracy was observed. To develop a better understanding of the results, the obtained models were examined in terms of the odds ratio, regression weights, test of significance, and Wald statistic. With some exceptions, the odds ratios of both datasets remained within a similar range, which is a good indication of consistency within the analysis results. Further investigation showed that activity Class 1 is mainly affected by temperature, and activity Classes 2, 4, and 5 are mostly affected by temperature and accelerometer-based data.
Once logistic regression set an accuracy reference point, the multilayer perceptron artificial neural networks were implemented in the IBM SPSS Modeler 18 software. Initially, the models were developed and evaluated using a partitioning method, resulting in relatively similar validation results for both datasets. Model 3 achieved the highest accuracy and model 11 the lowest; models 1 to 6 were all above 99.50%, while models 7 to 11 showed less impressive results. Deeper investigation showed that the models with a lower neuron count (models 1 to 6) are mainly influenced by the room measurement data (CO2, temperature, humidity), whereas the models with a higher number of neurons relied mainly on the wearable gadget data (accelerometers and gyroscopes), which change at a much faster pace. In the latter case, the important predictors were more balanced and spread out. The validation results of multilayer perceptron models 1 to 6 surpassed the accuracy of logistic regression by staying above 98.92% in all classes, whereas logistic regression scored an average of 97.8%.
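A two-hidden-layer perceptron comparable to the article's model 3 (16 and 32 hidden neurons) can be sketched outside SPSS Modeler with scikit-learn. The synthetic data and hold-out split below are illustrative stand-ins for the study's measured data and partitioning method:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 24))       # 24 inputs, as in the Figure 5 example
y = X[:, :5].argmax(axis=1)           # synthetic 5-class "activity" target

# Hold out a validation partition, analogous to the partitioning method.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=1)

mlp = MLPClassifier(hidden_layer_sizes=(16, 32),  # model 3 topology
                    max_iter=1000, random_state=1).fit(X_train, y_train)
print(f"validation accuracy: {mlp.score(X_val, y_val):.3f}")
```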
The superiority of the artificial neural networks became significantly more apparent during model scoring, where a significant decrease in accuracy was observed with the logistic regression models. In scoring, some decrease in prediction accuracy is always expected, and increasing the size of the training datasets closes the gap between validation and scoring accuracy. In the case of logistic regression, the gap was large enough to conclude that the models were overfitting. In the case of the artificial neural networks, the results were more consistent and acceptable for activity Classes 1, 2, 3, and 5. As in the previous cases, Class 4 performed poorly due to inconsistencies within the movements during this activity class.
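The validation-versus-scoring gap described above can be illustrated by training on one day's data and scoring on another day whose conditions have drifted. The data generator below is a hypothetical stand-in for datasets A and B, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def make_day(n, shift):
    # Same underlying activity rule, slightly shifted room conditions per day.
    X = rng.normal(loc=shift, size=(n, 6))
    y = (X[:, 0] - shift > 0).astype(int)
    return X, y

X_a, y_a = make_day(1500, shift=0.0)   # "dataset A" (training day)
X_b, y_b = make_day(1500, shift=0.3)   # "dataset B" (scoring day)

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
val = clf.score(X_a, y_a)                        # optimistic in-sample accuracy
score = accuracy_score(y_b, clf.predict(X_b))    # scoring on the unseen day
print(f"validation {val:.3f} vs scoring {score:.3f}")
```

A large drop from `val` to `score` is the kind of gap the article interprets as overfitting in the logistic regression models.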
Overall, the obtained results showed that the proposed method provides a promising outcome. Although there are many contributions in the field of human activity recognition, this study holds its novelty in terms of methodology, measurement techniques, and predefined classes. In terms of activity recognition accuracy, this study is on par with or surpasses most similar previous works. In terms of predefined activities, this article studied new classes of activities, whereas similar works mainly focus on stair climbing, running, walking, and fall detection. The predefined activities in this article do not represent all possible daily activities performed by humans; however, the highly accurate results obtained here show a promising path for expansion. In future work, the study will be expanded in terms of the number of recognizable activities as well as the possibility of recognition within multiple-occupant rooms.

5. Conclusions

This article proposed recognizing human activities in a single-occupant room using room air quality data (humidity, CO2, temperature) in combination with movement-based data (accelerometer, gyroscope, magnetometer). The measured data were used to recognize five predefined human activity classes: relaxing, using a computer, eating, cleaning, and exercising. The classification was performed using logistic regression and artificial neural networks (multilayer perceptron), where logistic regression served as a reference for evaluating the main proposed method based on artificial neural networks. In comparison with similar studies, this study holds its novelty in terms of methodology, measurement techniques, and predefined classes.
The neural networks showed more consistent and acceptable results. Although the obtained classification accuracy varied depending on the type of performed activity, the overall results were highly accurate, which shows a promising path for expanding the number of recognizable activity classes and for recognition within multiple-occupant rooms.

Author Contributions

Conceptualization, O.M.G., A.P. and J.V.; Data curation, O.M.G. and A.P.; Formal analysis, O.M.G.; Funding acquisition, P.B.; Investigation, O.M.G.; Methodology, O.M.G.; Project administration, J.V. and P.B.; Resources, O.M.G., A.P. and P.B.; Software, O.M.G.; Supervision, O.M.G., J.V. and P.B.; Validation, O.M.G.; Visualization, O.M.G.; Writing—original draft, O.M.G. and A.P.; Writing—review & editing, O.M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the European Regional Development Fund in the Research Centre of Advanced Mechatronic Systems project, project number CZ.02.1.01/0.0/0.0/16019/0000867 within the Operational Programme Research, Development and Education.

Acknowledgments

This work was supported by the European Regional Development Fund in the Research Centre of Advanced Mechatronic Systems project, project number CZ.02.1.01/0.0/0.0/16019/0000867 within the Operational Programme Research, Development, and Education. This work was supported by the Student Grant System of VSB Technical University of Ostrava, grant number SP2020/151.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Vanus, J.; Belesova, J.; Martinek, R.; Nedoma, J.; Fajkus, M.; Bilik, P.; Zidek, J. Monitoring of the daily living activities in smart home care. Hum. Cent. Comput. Inf. Sci. 2017, 7, 30. [Google Scholar] [CrossRef] [Green Version]
  2. Panagopoulos, C.; Menychtas, A.; Tsanakas, P.; Maglogiannis, I. Increasing Usability of Homecare Applications for Older Adults: A Case Study. Designs 2019, 3, 23. [Google Scholar] [CrossRef] [Green Version]
  3. Loukatos, D.; Arvanitis, K.G.; Armonis, N. Investigating Educationally Fruitful Speech-Based Methods to Assist People with Special Needs to Care Potted Plants. In International Conference on Human Interaction and Emerging Technologies; Springer: Cham, Switzerland, 2019; pp. 157–162. [Google Scholar]
  4. Wiljer, D.; Hakim, Z. Developing an artificial intelligence–enabled health care practice: Rewiring health care professions for better care. J. Med. Imaging Radiat. Sci. 2019, 50, S8–S14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Sadreazami, H.; Bolic, M.; Rajan, S. Fall Detection Using Standoff Radar-Based Sensing and Deep Convolutional Neural Network. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 197–201. [Google Scholar] [CrossRef]
  6. Ahamed, F.; Shahrestani, S.; Cheung, H. Intelligent Fall Detection with Wearable IoT. Adv. Intell. Syst. Comput. 2020, 993, 391–401. [Google Scholar] [CrossRef]
  7. Dhiraj; Manekar, R.; Saurav, S.; Maiti, S.; Singh, S.; Chaudhury, S.; Neeraj; Kumar, R.; Chaudhary, K. Activity Recognition for Indoor Fall Detection in 360-Degree Videos Using Deep Learning Techniques. Adv. Intell. Syst. Comput. 2020, 1024, 417–429. [Google Scholar] [CrossRef]
  8. Hsueh, Y.-L.; Lie, W.-N.; Guo, G.-Y. Human Behavior Recognition from Multiview Videos. Inf. Sci. 2020, 517, 275–296. [Google Scholar] [CrossRef]
  9. Szczurek, A.; Maciejewska, M.; Pietrucha, T. Occupancy determination based on time series of CO2 concentration, temperature and relative humidity. Energy Build. 2017, 147, 142–154. [Google Scholar] [CrossRef]
  10. Vanus, J.; Machac, J.; Martinek, R.; Bilik, P.; Zidek, J.; Nedoma, J.; Fajkus, M. The design of an indirect method for the human presence monitoring in the intelligent building. Hum. Cent. Comput. Inf. Sci. 2018, 8, 28. [Google Scholar] [CrossRef]
  11. Vanus, J.; Kubicek, J.; Gorjani, O.M.; Koziorek, J. Using the IBM SPSS SW Tool with Wavelet Transformation for CO2 Prediction within IoT in Smart Home Care. Sensors 2019, 19, 1407. [Google Scholar] [CrossRef] [Green Version]
  12. Vanus, J.; Gorjani, O.M.; Bilik, P. Novel Proposal for Prediction of CO2 Course and Occupancy Recognition in Intelligent Buildings within IoT. Energies 2019, 12, 4541. [Google Scholar] [CrossRef] [Green Version]
  13. Albert, M.V.; Toledo, S.; Shapiro, M.; Koerding, K. Using mobile phones for activity recognition in Parkinson’s patients. Front. Neurol. 2012, 3, 158. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261. [Google Scholar] [CrossRef]
  15. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2012, 15, 1192–1209. [Google Scholar] [CrossRef]
  16. Yousefi, S.; Narui, H.; Dayal, S.; Ermon, S.; Valaee, S. A survey on behavior recognition using wifi channel state information. IEEE Commun. Mag. 2017, 55, 98–104. [Google Scholar] [CrossRef]
  17. Minarno, A.E.; Kusuma, W.A.; Wibowo, H. Performance Comparisson Activity Recognition using Logistic Regression and Support Vector Machine. In Proceedings of the 2020 3rd International Conference on Intelligent Autonomous Systems (ICoIAS), Singapore, 26–29 February 2020; pp. 19–24. [Google Scholar]
  18. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SIGKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
  19. Bayat, A.; Pomplun, M.; Tran, D.A. A study on human activity recognition using accelerometer data from smartphones. Procedia Comput. Sci. 2014, 34, 450–457. [Google Scholar] [CrossRef] [Green Version]
  20. Trost, S.G.; Zheng, Y.; Wong, W.K. Machine learning for activity recognition: Hip versus wrist data. Physiol. Meas. 2014, 35, 2183. [Google Scholar] [CrossRef]
  21. European Committee for Standards. Home and Building Electronic System (HBES); Cenelec EN50090; European Committee for Standards: Brussels, Belgium, 2005. [Google Scholar]
  22. International Organization for Standardization. KNX Standard ISO/IEC 14543-3; International Organization for Standardization: Geneva, Switzerland, 2006. [Google Scholar]
  23. NGIMU X-Io Technologies. Available online: https://x-io.co.uk/ngimu/ (accessed on 13 August 2020).
  24. Rantalainen, T.; Karavirta, L.; Pirkola, H.; Rantanen, T.; Linnamo, V. Gait Variability Using Waist- and Ankle-Worn Inertial Measurement Units in Healthy Older Adults. Sensors 2020, 20, 2858. [Google Scholar] [CrossRef]
  25. Fida, B.; Bernabucci, I.; Bibbo, D.; Conforto, S.; Schmid, M. Pre-processing effect on the accuracy of event-based activity segmentation and classification through inertial sensors. Sensors 2015, 15, 23095–23109. [Google Scholar] [CrossRef] [Green Version]
  26. Alankar, B.; Yousf, N.; Ahsaan, S.U. Predictive Analytics for Weather Forecasting using Back Propagation and Resilient Back Propagation Neural Networks. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1030, pp. 99–115. [Google Scholar] [CrossRef]
  27. Poornima, S.; Pushpalatha, M. Prediction of rainfall using intensified LSTM based recurrent Neural Network with Weighted Linear Units. Atmosphere 2019, 10, 668. [Google Scholar] [CrossRef] [Green Version]
  28. Pooja, S.B.; Siva Balan, R.V. Linear program boosting classification with remote sensed big data for weather forecasting. Int. J. Adv. Trends Comput. Sci. Eng. 2019, 8, 1405–1415. [Google Scholar] [CrossRef]
  29. Yang, Y.; Hu, R.; Sun, G.; Qiu, C. Chinese Spam Data Filter Model in Mobile Internet. In Proceedings of the International Conference on Advanced Communication Technology ICACT, Xi’an, China, 17–20 February 2019; pp. 339–343. [Google Scholar] [CrossRef]
  30. Maguluri, L.P.; Ragupathy, R.; Buddi, S.R.K.; Ponugoti, V.; Kalimil, T.S. Adaptive Prediction of Spam Emails Using Bayesian Inference. In Proceedings of the 3rd International Conference on Computing Methodologies and Communication ICCMC, Erode, India, 27–29 March 2019; pp. 628–632. [Google Scholar] [CrossRef]
  31. Mansourbeigi, S.M.-H. Stochastic Methods to Find Maximum Likelihood for Spam E-mail Classification. Adv. Intell. Syst. Comput. 2019, 927, 623–632. [Google Scholar] [CrossRef] [Green Version]
  32. Mallik, R.; Sahoo, A.K. A novel approach to spam filtering using semantic based naive bayesian classifier in text analytics. Adv. Intell. Syst. Comput. 2019, 813, 301–309. [Google Scholar] [CrossRef]
  33. Van Nguyen, T.; Zhou, L.; Chong, A.Y.L.; Li, B.; Pu, X. Predicting customer demand for remanufactured products: A data-mining approach. Eur. J. Oper. Res. 2019, 281, 543–558. [Google Scholar] [CrossRef]
  34. Liu, C.; Lei, Z.; Morley, D.; Abourizk, S.M. Dynamic, Data-Driven Decision-Support Approach for Construction Equipment Acquisition and Disposal. J. Comput. Civ. Eng. 2020, 34, 34. [Google Scholar] [CrossRef]
  35. Wang, J.; Lai, X.; Zhang, S.; Wang, W.M.; Chen, J. Predicting customer absence for automobile 4S shops: A lifecycle perspective. Eng. Appl. Artif. Intell. 2020, 89. [Google Scholar] [CrossRef]
  36. Park, G.; Song, M. Predicting performances in business processes using deep neural networks. Decis. Support. Syst. 2020, 129, 113191. [Google Scholar] [CrossRef]
  37. Sarno, R.; Sinaga, F.; Sungkono, K.R. Anomaly detection in business processes using process mining and fuzzy association rule learning. J. Big Data 2020, 7, 5. [Google Scholar] [CrossRef]
  38. Matos, T.; Macedo, J.A.; Lettich, F.; Monteiro, J.M.; Renso, C.; Perego, R.; Nardini, F.M. Leveraging feature selection to detect potential tax fraudsters. Expert Syst. Appl. 2020, 145, 113128. [Google Scholar] [CrossRef]
  39. Shi, C.; Li, X.; Lv, J.; Yin, J.; Mumtaz, I. Robust geodesic based outlier detection for class imbalance problem. Pattern Recognit. Lett. 2020, 131, 428–434. [Google Scholar] [CrossRef]
  40. Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: Waltham, MA, USA, 2011. [Google Scholar]
  41. Kachalsky, I.; Zakirzyanov, I.; Ulyantsev, V. Applying Reinforcement Learning and Supervised Learning Techniques to Play Hearthstone. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 1145–1148. [Google Scholar]
  42. Nijhawan, R.; Srivastava, I.; Shukla, P. Land cover classification using supervised and unsupervised learning techniques. In Proceedings of the 2017 International Conference on Computational Intelligence in Data Science (ICCIDS), Chennai, India, 2–3 June 2017; pp. 1–6. [Google Scholar]
  43. Liu, Q.; Liao, X.; Carin, L. Semi-Supervised Life-Long Learning with Application to Sensing. In Proceedings of the 2007 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, St. Thomas, VI, USA, 12–14 December 2007; pp. 1–4. [Google Scholar]
  44. Yan, X. Linear Regression Analysis: Theory and Computing; World Scientific: Hackensack, NJ, USA, 2009; pp. 1–2. ISBN 9789812834119. [Google Scholar]
  45. Rencher, A.C.; Christensen, W.F. Chapter 10, Multivariate regression—Section 10.1 Introduction Methods of Multivariate Analysis. In Wiley Series in Probability and Statistics, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 709, p. 19. ISBN 9781118391679. [Google Scholar]
  46. Tolles, J.; Meurer, W.J. Logistic regression: Relating patient characteristics to outcomes. JAMA 2016, 316, 533–534. [Google Scholar] [CrossRef] [PubMed]
  47. Boyd, C.R.; Tolson, M.A.; Copes, W.S. Evaluating trauma care: The TRISS method Trauma Score and the Injury Severity Score. J. Trauma 1987, 27, 370–378. [Google Scholar] [CrossRef] [PubMed]
  48. Le Gall, J.R.; Lemeshow, S.; Saulnier, F. A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. JAMA 1993, 270, 2957–2963. [Google Scholar] [CrossRef] [PubMed]
  49. Berry, M.J.; Linoff, G.S. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  50. Truett, J.; Cornfield, J.; Kannel, W. A multivariate analysis of the risk of coronary heart disease in Framingham. J. Chronic Dis. 1967, 20, 511–524. [Google Scholar] [CrossRef]
  51. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  52. Freedman, D.A. Statistical Models: Theory and Practice; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  53. Efroymson, M.A. Multiple Regression Analysis. Mathematical Methods for Digital Computers; Ralston, A., Wilf, H.S., Eds.; Wiley: New York, NY, USA, 1960. [Google Scholar]
  54. Hocking, R.R. The Analysis and Selection of Variables in Linear Regression. Biometrics 1976, 32, 1–49. [Google Scholar] [CrossRef]
  55. Draper, N.; Smith, H. Applied Regression Analysis, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 1981. [Google Scholar]
  56. SAS Institute. SAS/STAT User’s Guide, 4th ed.; Version 6; SAS Institute Inc.: Cary, NC, USA, 1989; Volume 2. [Google Scholar]
  57. Knecht, W.R. Pilot Willingness to Take off into Marginal Weather, Part II: Antecedent Overfitting with forward Stepwise Logistic Regression. In Technical Report DOT/FAA/AM-O5/15; Federal Aviation Administration: Oklahoma City, OK, USA, 2005. [Google Scholar]
  58. Flom, P.L.; Cassell, D.L. Stopping Stepwise: Why Stepwise and Similar Selection Methods are Bad, and What You Should Use; NESUG: Corvallis, OR, USA, 2007. [Google Scholar]
  59. Myers, R.H.; Myers, R.H. Classical and Modern Regression with Applications; Duxbury Press: Belmont, CA, USA, 1990; Volume 2. [Google Scholar]
  60. Bendel, R.B.; Afifi, A.A. Comparison of stopping rules in forward “stepwise” regression. J. Am. Stat. Assoc. 1977, 72, 46–53. [Google Scholar]
  61. Kubinyi, H. Evolutionary variable selection in regression and PLS analyses. J. Chemom. 1996, 10, 119–133. [Google Scholar] [CrossRef]
  62. Szumilas, M. Explaining odds ratios. J. Can. Acad. Child. Adolesc. Psychiatry 2010, 19, 227. [Google Scholar]
  63. Moosavi, S.R.; Wood, D.A.; Ahmadi, M.A.; Choubineh, A. ANN-Based Prediction of Laboratory-Scale Performance of CO2-Foam Flooding for Improving Oil Recovery. Nat. Resour. Res. 2019, 28, 1619–1637. [Google Scholar] [CrossRef]
  64. Zarei, T.; Behyad, R. Predicting the water production of a solar seawater greenhouse desalination unit using multi-layer perceptron model. Sol. Energy 2019, 177, 595–603. [Google Scholar] [CrossRef]
  65. Yilmaz, I.; Kaynar, O. Multiple regression, ANN (RBF, MLP) and ANFIS models for prediction of swell potential of clayey soils. Expert Syst. Appl. 2011, 38, 5958–5966. [Google Scholar] [CrossRef]
  66. Heidari, E.; Sobati, M.A.; Movahedirad, S. Accurate prediction of nanofluid viscosity using a multilayer perceptron artificial neural network (MLP-ANN). Chemom. Intell. Lab. Syst. 2016, 155, 73–85. [Google Scholar] [CrossRef]
  67. Abdi-Khanghah, M.; Bemani, A.; Naserzadeh, Z.; Zhang, Z. Prediction of solubility of N-alkanes in supercritical CO2 using RBF-ANN and MLP-ANN. J. CO2 Util. 2018, 25, 108–119. [Google Scholar] [CrossRef]
  68. Ahmed, S.A.; Dey, S.; Sarma, K.K. Image texture classification using Artificial Neural Network (ANN). In Proceedings of the 2011 2nd National Conference on Emerging Trends and Applications in Computer Science, Shillong, India, 4–5 March 2011; pp. 1–4. [Google Scholar]
  69. Zarei, F.; Baghban, A. Phase behavior modelling of asphaltene precipitation utilizing MLP-ANN approach. Pet. Sci. Technol. 2017, 35, 2009–2015. [Google Scholar] [CrossRef]
  70. Behrang, M.; Assareh, E.; Ghanbarzadeh, A.; Noghrehabadi, A. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data. Sol. Energy 2010, 84, 1468–1480. [Google Scholar] [CrossRef]
  71. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Macmillan College Publishing: New York, NY, USA, 1998. [Google Scholar]
  72. Ripley, B.D. Pattern Recognition and Neural Networks; Cambridge University Press (CUP): Cambridge, UK, 1996. [Google Scholar]
  73. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009. [Google Scholar]
  74. Rosenblatt, F. Principles of Neurodynamics. Perceptrons and the Theory of Brain Mechanisms; Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, 1961. [Google Scholar]
  75. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation; The MIT Press: Cambridge, MA, USA, 1985. [Google Scholar]
  76. IBM SPSS Modeler 18 Algorithms Guide. Available online: http://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/18.0/en/AlgorithmsGuide.pdf (accessed on 1 November 2019).
Figure 1. Block diagram of the proposed method using logistic regression.
Figure 2. Block diagram of the proposed method using artificial neural networks.
Figure 3. Inertial Measurement Unit (IMU) worn on a leg.
Figure 4. Example of the logistic function.
Figure 5. Example of the developed multilayer perceptron artificial neural network model with a 24-neuron input layer, eight neurons in the first hidden layer, four neurons in the second hidden layer, and a five-neuron output layer.
Figure 6. Training and validation stream developed in IBM SPSS Modeler.
Figure 7. Predictor importance for model 3 trained with dataset A (19 July 2019 interval).
Figure 8. Predictor importance for model 11 trained with dataset A (19 July 2019 interval).
Figure 9. Scoring stream built in IBM SPSS Modeler.
Table 1. Description of activity categories.

| Activity Class | Description |
|---|---|
| Class 1 | Relaxing with minimal movements |
| Class 2 | Using the computer for checking emails and web surfing |
| Class 3 | Preparing tea and a sandwich; eating breakfast |
| Class 4 | Cleaning the room by wiping the tables and vacuum cleaning |
| Class 5 | Exercising using a stationary bicycle |
Table 2. List of measured parameters and their units.

| Sensor | Unit | Range |
|---|---|---|
| CO2 | ppm | 300 to 9999 |
| Temperature | °C | 0 to +40 |
| Relative humidity | % | 20 to 100 |
Table 3. List of measured parameters using wearable gadgets.

| Parameter | Unit |
|---|---|
| Gyroscope X, Y, Z | deg/s |
| Accelerometer X, Y, Z | g |
| Magnetometer X, Y, Z | μT |
| Barometer | hPa |
Table 4. Classification table using dataset A (19 July 2019 interval).

| Class | Observed | Predicted 0 | Predicted 1 | Percentage Correct | Overall Accuracy |
|---|---|---|---|---|---|
| Class 1 | 0 | 273,204 | 2758 | 99.9% | 98.9% |
| | 1 | 422 | 19,804 | 97.9% | |
| Class 2 | 0 | 213,908 | 3764 | 98.3% | 97.4% |
| | 1 | 3877 | 74,639 | 95.1% | |
| Class 3 | 0 | 220,320 | 0 | 100.0% | 100.0% |
| | 1 | 2 | 75,866 | 100.0% | |
| Class 4 | 0 | 243,244 | 5327 | 97.9% | 95.4% |
| | 1 | 8209 | 39,408 | 82.8% | |
| Class 5 | 0 | 223,644 | 1096 | 99.5% | 99.3% |
| | 1 | 869 | 70,579 | 98.8% | |
Table 5. Classification table using dataset B (26 July 2019 interval).

| Class | Observed | Predicted 0 | Predicted 1 | Percentage Correct | Overall Accuracy |
|---|---|---|---|---|---|
| Class 1 | 0 | 270,378 | 867 | 99.7% | 99.5% |
| | 1 | 640 | 18,829 | 96.7% | |
| Class 2 | 0 | 213,182 | 3791 | 98.3% | 97.0% |
| | 1 | 5068 | 68,673 | 93.1% | |
| Class 3 | 0 | 216,883 | 95 | 100% | 99.9% |
| | 1 | 51 | 73,685 | 99.9% | |
| Class 4 | 0 | 235,594 | 9914 | 96.0% | 91.2% |
| | 1 | 15,720 | 29,486 | 65.2% | |
| Class 5 | 0 | 212,940 | 1653 | 99.2% | 98.9% |
| | 1 | 1535 | 74,586 | 98.0% | |
Table 6. Odds ratios (columns 1–5 are the activity classes for dataset A and dataset B, respectively).

| Source | Predictor | A-1 | A-2 | A-3 | A-4 | A-5 | B-1 | B-2 | B-3 | B-4 | B-5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| KNX | Humidity | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | 1.59 × 10^3 | 1.03 × 10^2 | 0.00 | 0.37 | 1.45 × 10^8 |
| | Temperature | 2.29 × 10^62 | 1.40 × 10^25 | 0.00 | 3.29 × 10^3 | 6.66 × 10^36 | 1.55 × 10^45 | 5.29 × 10^28 | 0.00 | 4.45 × 10^3 | 1.03 × 10^11 |
| | CO2 | 0.94 | 0.89 | 0.31 | 0.86 | 0.71 | 0.12 | 0.99 | 0.00 | 0.96 | 1.05 |
| Leg | Gyroscope x | 1.00 | 0.99 | 0.94 | 1.00 | 1.00 | 0.99 | 1.00 | 1.05 | 1.00 | 0.99 |
| | Gyroscope y | 0.99 | 0.99 | 1.05 | 1.00 | 1.02 | 0.99 | 0.99 | 0.92 | 1.00 | 1.01 |
| | Gyroscope z | 1.00 | 0.99 | 0.96 | 1.00 | 0.99 | 0.99 | 0.99 | 1.02 | 1.00 | 1.01 |
| | Accelerometer x | 4.20 | 0.55 | 7.44 | 1.57 | 0.27 | 4.98 | 0.79 | 0.00 | 1.35 | 0.34 |
| | Accelerometer y | 0.43 | 5.32 | 5.68 × 10^4 | 2.16 | 0.37 | 2.52 | 1.62 | 5.75 × 10^5 | 1.23 | 0.28 |
| | Accelerometer z | 0.09 | 0.18 | 0.00 | 3.17 | 4.97 × 10 | 0.02 | 0.34 | 0.01 | 1.61 | 2.14 |
| | Magnetometer x | 0.99 | 0.99 | 1.65 | 1.01 | 0.99 | 0.99 | 0.95 | 1.17 | 1.01 | 1.04 |
| | Magnetometer y | 1.05 | 1.02 | 0.67 | 1.02 | 0.95 | 1.08 | 1.00 | 0.90 | 1.03 | 0.89 |
| | Magnetometer z | 1.02 | 0.89 | 0.37 | 1.06 | 1.17 | 0.92 | 0.93 | 0.92 | 1.02 | 1.24 |
| Hand | Gyroscope x | 1.00 | 1.00 | 1.03 | 1.00 | 1.00 | 1.00 | 1.00 | 1.02 | 1.00 | 1.00 |
| | Gyroscope y | 1.00 | 1.00 | 1.07 | 1.00 | 1.00 | 1.01 | 1.00 | 1.03 | 1.00 | 1.00 |
| | Gyroscope z | 0.99 | 0.99 | 1.09 | 1.00 | 1.00 | 0.99 | 1.00 | 1.03 | 1.00 | 1.00 |
| | Accelerometer x | 0.06 | 0.67 | 0.35 | 0.08 | 8.07 × 10 | 3.53 | 8.58 | 3.09 | 0.07 | 1.79 × 10 |
| | Accelerometer y | 3.30 | 1.50 × 10 | 1.67 × 10 | 0.14 | 0.14 | 0.01 | 0.19 | 8.01 × 10^2 | 0.68 | 1.85 |
| | Accelerometer z | 9.42 | 0.04 | 0.00 | 2.78 × 10 | 1.13 | 0.87 | 3.99 × 10 | 0.08 | 0.05 | 4.28 |
| | Magnetometer x | 0.97 | 0.97 | 0.44 | 0.97 | 1.08 | 1.01 | 1.05 | 1.98 | 1.00 | 1.05 |
| | Magnetometer y | 1.08 | 1.067 | 0.235 | 0.967 | 0.939 | 1.024 | 0.969 | 0.662 | 0.987 | 1.01 |
| | Magnetometer z | 1.00 | 0.96 | 0.86 | 1.01 | 1.08 | 1.11 | 1.07 | 1.66 | 0.99 | 0.95 |
Table 7. Validation result of training using dataset A (19 July 2019 interval).

| Model | Hidden Layer 1 | Hidden Layer 2 | Overall Accuracy | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 |
|---|---|---|---|---|---|---|---|---|
| 1 | 8 | 4 | 99.80% | 99.05% | 99.99% | 99.85% | 99.95% | 99.96% |
| 2 | 16 | 8 | 99.90% | 99.79% | 99.99% | 99.88% | 99.96% | 99.97% |
| 3 | 16 | 32 | 99.99% | 99.81% | 99.99% | 99.91% | 99.98% | 99.98% |
| 4 | 64 | 32 | 99.90% | 99.76% | 99.97% | 99.91% | 99.93% | 99.94% |
| 5 | 64 | 128 | 99.50% | 99.22% | 99.63% | 99.67% | 99.38% | 99.26% |
| 6 | 128 | 64 | 99.60% | 98.92% | 99.83% | 99.68% | 99.74% | 99.64% |
| 7 | 128 | 256 | 96.40% | 96.02% | 96.59% | 96.89% | 97.12% | 94.97% |
| 8 | 256 | 128 | 98.50% | 98.44% | 98.93% | 98.87% | 98.85% | 97.29% |
| 9 | 256 | 512 | 76.40% | 76.48% | 77.04% | 78.08% | 76.81% | 73.26% |
| 10 | 512 | 256 | 82.80% | 83.98% | 82.30% | 83.84% | 82.94% | 80.06% |
| 11 | 512 | 512 | 73.10% | 74.52% | 71.98% | 73.63% | 71.9% | 73.4% |
Table 8. Validation result of training using dataset B (26 July 2019 interval). Class columns give the accuracy of the validation partition.

| Model | Hidden Layer 1 | Hidden Layer 2 | Overall Accuracy | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 |
|---|---|---|---|---|---|---|---|---|
| 1 | 8 | 4 | 99.70% | 99.20% | 99.97% | 99.78% | 99.71% | 99.73% |
| 2 | 16 | 8 | 99.80% | 99.81% | 99.90% | 99.91% | 99.71% | 99.63% |
| 3 | 16 | 32 | 99.90% | 99.70% | 99.97% | 99.92% | 99.94% | 99.95% |
| 4 | 64 | 32 | 99.80% | 99.66% | 99.89% | 99.91% | 99.76% | 99.67% |
| 5 | 64 | 128 | 99.50% | 99.56% | 99.62% | 99.79% | 99.43% | 99.02% |
| 6 | 128 | 64 | 99.50% | 99.43% | 99.78% | 99.7% | 99.45% | 99.27% |
| 7 | 128 | 256 | 96.70% | 97.50% | 96.55% | 96.74% | 97.21% | 95.38% |
| 8 | 256 | 128 | 98.50% | 98.53% | 98.69% | 98.95% | 99.07% | 97.04% |
| 9 | 256 | 512 | 76.40% | 77.13% | 81.02% | 73.32% | 75.54% | 75.47% |
| 10 | 512 | 256 | 83.30% | 84.31% | 85.28% | 80.45% | 84.27% | 82.52% |
| 11 | 512 | 512 | 74.00% | 73.75% | 75.70% | 70.20% | 75.69% | 74.00% |
Table 9. Scoring result of training dataset A (19 July 2019 interval) and evaluation dataset B (26 July 2019 interval). Class columns give the accuracy of each activity class.

| Model | Hidden Layer 1 | Hidden Layer 2 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 |
|---|---|---|---|---|---|---|---|
| 1 | 8 | 4 | 93.30% | 94.26% | 73.82% | 47.23% | 84.45% |
| 2 | 16 | 8 | 93.30% | 25.36% | 73.82% | 64.37% | 84.45% |
| 3 | 16 | 32 | 93.30% | 25.36% | 73.82% | 25.37% | 84.45% |
| 4 | 64 | 32 | 93.30% | 74.64% | 73.82% | 25.53% | 84.42% |
| 5 | 64 | 128 | 93.30% | 77.17% | 75.37% | 25.47% | 84.44% |
| 6 | 128 | 64 | 93.30% | 71.41% | 73.02% | 25.37% | 84.45% |
| 7 | 128 | 256 | 6.81% | 46.69% | 73.64% | 25.58% | 30.11% |
| 8 | 256 | 128 | 93.30% | 82.13% | 41.33% | 25.39% | 84.45% |
| 9 | 256 | 512 | 89.21% | 40.60% | 50.34% | 34.60% | 40.89% |
| 10 | 512 | 256 | 92.94% | 25.42% | 68.58% | 41.13% | 84.26% |
| 11 | 512 | 512 | 30.37% | 30.83% | 74.14% | 35.67% | 63.93% |
Table 10. Scoring result of training dataset B (26 July 2019 interval) and evaluation dataset A (19 July 2019 interval). Class columns give the accuracy of each activity class.

| Model | Hidden Layer 1 | Hidden Layer 2 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 |
|---|---|---|---|---|---|---|---|
| 1 | 8 | 4 | 50.83% | 88.64% | 75.88% | 63.5% | 83.92% |
| 2 | 16 | 8 | 93.17% | 98.51% | 75.88% | 67.11% | 96.2% |
| 3 | 16 | 32 | 81.21% | 88.59% | 75.88% | 55.81% | 76.11% |
| 4 | 64 | 32 | 27.6% | 88.79% | 75.88% | 66.03% | 68.66% |
| 5 | 64 | 128 | 82.01% | 82.30% | 75.88% | 77.77% | 44.14% |
| 6 | 128 | 64 | 77.94% | 92.65% | 75.88% | 73.02% | 50.21% |
| 7 | 128 | 256 | 17.71% | 49.58% | 75.71% | 26.85% | 44.48% |
| 8 | 256 | 128 | 22.60% | 85.72% | 75.87% | 64.71% | 30.62% |
| 9 | 256 | 512 | 40.22% | 44.92% | 50.45% | 49.82% | 63.95% |
| 10 | 512 | 256 | 33.51% | 13.1% | 77.95% | 52.3% | 29.81% |
| 11 | 512 | 512 | 70.2% | 78.37% | 48.53% | 56.77% | 33.39% |
