Article

Activity Recognition for IoT Devices Using Fuzzy Spatio-Temporal Features as Environmental Sensor Fusion

by
Miguel Ángel López Medina
1,†,
Macarena Espinilla
2,
Cristiano Paggeti
3 and
Javier Medina Quero
2,*,†
1
Council of Health for the Andalusian Health Service, Av. de la Constitución 18, 41071 Sevilla, Spain
2
Department of Computer Science, Campus Las Lagunillas, 23071 Jaén, Spain
3
I + Srl, Piazza G.Puccini, 26, 50144 Firenze, Italy
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2019, 19(16), 3512; https://doi.org/10.3390/s19163512
Submission received: 2 June 2019 / Revised: 25 July 2019 / Accepted: 7 August 2019 / Published: 11 August 2019

Abstract: The IoT is a development field where new approaches and trends are in constant change. In this scenario, new devices and sensors offer higher precision in everyday life in an increasingly less invasive way. In this work, we propose the use of spatial-temporal features computed by means of fuzzy logic as a general descriptor for heterogeneous sensors. This fuzzy sensor representation is highly efficient and enables devices with low computing power to carry out learning and evaluation tasks in activity recognition using light and efficient classifiers. To show the methodology's potential in real applications, we deploy an intelligent environment where new UWB location devices, inertial objects, wearable devices, and binary sensors are connected with each other and describe daily human activities. We then apply the proposed fuzzy logic-based methodology to obtain spatial-temporal features that fuse the data from the heterogeneous sensor devices. A case study developed in the UJAmI Smart Lab of the University of Jaen (Jaen, Spain) shows the encouraging performance of the methodology when recognizing the activity of an inhabitant using efficient classifiers.

1. Introduction

Activity Recognition (AR) defines models able to detect human actions and their goals in smart environments with the aim of providing assistance. Such methods have increasingly been adopted in smart homes [1] and healthcare applications [2] aiming both at improving the quality of care services and allowing people to stay independent in their own homes for as long as possible [3]. In this way, AR has become an open field of research where approaches based on different types of sensors have been proposed [4]. In the first stages, binary sensors were proposed as suitable devices for describing daily human activities within a smart environment setting [5,6]. More recently, wearable devices have been used to analyze activities and gestures in AR [7].
Furthermore, recent paradigms such as edge computing [8] or fog computing [9] place the data and services within the devices where data are collected, providing virtualized resources and engaged location-based services at the edge of the mobile networks [10]. In this new perspective on the Internet of Things (IoT) [11], the focus shifts from cloud computing with centralized processing [12] to collaborative networks where smart objects interact with each other and cooperate with their neighbors to reach common goals [13,14]. In particular, fog computing has had a great impact on the integration of ambient devices [15] and wearable devices [16].
In this context, the proposed work presents a methodology for activity recognition that: (i) integrates interconnected IoT devices that share environmental data and (ii) learns from the heterogeneous data from sensors using a fuzzy approach, which extracts spatial-temporal features. The outcome of this methodology is recognizing daily activities by means of an efficient and lightweight model, which can be included in the future generation of smart objects.
The remainder of the paper is structured as follows: the following subsection reviews works related to our proposal, emphasizing the main novelties we propose. Section 2 presents the proposed methodology for learning daily activities from heterogeneous sensors in a smart environment. Section 3 introduces a case study to show the utility and applicability of the proposed model for AR in the smart environment of the University of Jaen. Finally, in Section 4, conclusions and ongoing works are discussed.

1.1. Related Works

Connectivity plays an important role in Internet of Things (IoT) solutions [17]. Fog computing approaches require the real-time distribution of collaborative information and knowledge [18] to provide a scalable approach in which the heterogeneous sensors are distributed to dynamic subscribers in real time. In this contribution, smart objects are defined as both sources and targets of information using a publish-subscribe model [19]. In the proposed methodology, we define a fog computing approach to: (i) distribute and aggregate information from sensors, which are defined by spatial-temporal features with fuzzy logic, using middleware based on the publish-subscribe model, and (ii) learn from the distributed feature sensors with efficient classifiers, which enable AR within IoT devices.
In turn, in the context of intelligent environments, a new generation of non-invasive devices is combined with the use of traditional sensors. For example, the use of new location devices based on UWB is allowing us to reach extremely high accuracy in indoor contexts [20], which has increased the performance of previous indoor positioning systems based on BLE devices [21]. At the same time, the use of inertial sensors in wearable devices has been demonstrated to enhance activity recognition [22]. These devices coexist with traditional binary sensors, which have been widely used to describe daily user activities from initial AR works [23] to more recent literature [24]. These heterogeneous sensors require integrating several sources: wearable, binary, and location sensors in smart environments [25], to enable rich AR by means of sensor fusion [26].
Traditionally, the features used to describe sensors under data-driven approaches have depended on the type of sensors, whether accelerometers [27] or binary sensors [28]. In previous works, deep learning has also been shown as a suitable approach in AR to describe heterogeneous features from sensors in smart environments [5,6,29]. However, it is proving hard to include learning capabilities in miniature boards or mobile devices integrated in smart objects [30]. First, we note that deep learning requires huge amounts of data [31]. Second, learning, and in some cases, evaluating, under deep learning approaches within low computing boards requires the adaptation of models and the use of costly high-performance embedded boards. In line with this, we highlight the work [32], where a new form of compression models was proposed in several areas to deploy deep neural networks in high-performance embedded processors, such as the ARM Cortex M3. Advancement across a range of interdependent areas, including hardware, systems, and learning algorithms, is still necessary [33].
To bring the capabilities needed to develop general features from sensors to low-computing devices, we propose using spatial-temporal feature extraction based on fuzzy logic with minimal human configuration together with light and efficient classifiers. On the one hand, fuzzy logic has been proposed in sensor fusion [34] and the aggregation of heterogeneous data in distributed architectures [35]. For example, fuzzy temporal windows have increased performance in several datasets [5,6,29], extracting several temporal features from sensors, which have been demonstrated as a suitable representation for AR from binary [36] and wearable [37] sensors.
On the other hand, several efficient classifiers have been successfully proposed for AR [38] using devices with limited computing power. For example, decision trees, k-nearest neighbor, and support vector machines have enabled AR in ubiquitous devices by processing the embedded sensors of mobile devices [39,40].
Taking this research background into consideration, we defined the following key points to include in our approach in order to address the limitations of previous models:
  • To share and distribute data from environmental, wearable, binary, and location sensors among each other using open-source middleware based on MQTT [17].
  • To extract spatial features from sensors using fuzzy logic by means of fuzzy scales [41] with multi-granular modeling [42].
  • To extract temporal features in the short- and middle-term using incremental fuzzy temporal windows [5].
  • To learn from a small amount of data, to avoid the dependency of deep learning on a large amount of data [31].
  • To evaluate the performance of AR with efficient and lightweight classifiers [40], which are compatible with computing in miniature boards [38].

2. Methodology

In this section, we present the proposed methodology for learning daily activities from heterogeneous sensors in a smart environment. As the main aim of this work is integrating and learning the information from sensors in real time, we first describe them formally. A given sensor $s_i$ provides information from a data stream $S_i(t^*) = \{v^i_{t_0}, v^i_{t_1}, \ldots, v^i_{t_N}\}$, where $v^i_t$ represents a measurement of the sensor $s_i$ at the timestamp $t$. Under real-time conditions, $t^*$ represents the current time and $S_i(t^*)$, with $t \leq t^*$, the status of the data stream at that point in time.
In this work, in order to increase scalability and modularity in the deployment of sensors, each sensor s i publishes the data stream independently of the other sensors in real time. For this, a collecting rate Δ t is defined in order to describe the data stream constantly and symmetrically over time:
$S_i(t^*) = \{v^i_{t^*}, v^i_{t^* - \Delta t}, \ldots, v^i_{t^* - \Delta t \cdot j}\}$
Further details on the deployment of sensors in real time are presented in Section 2.1, where a new generation of smart objects and devices is connected using publish-subscribe-based middleware.
Next, in order to relate the data stream to the activities performed by the inhabitant, it is necessary to describe the information from the sensor stream with a set of features $F = \{F_1, \ldots, F_{|F|}\}$, where a given feature $F_m$ is defined by a function $F_m(S_i, t^*)$ that aggregates the values $v^i_{t_j}$ of the sensor stream $S_i$ up to the current time $t^*$:
$F_m(S_i, t^*) = \bigoplus_{t_j < t^*} v^i_{t_j}$
where $\bigoplus$ denotes the aggregation operator. Since our model is based on a data-driven supervised approach, the features that describe the sensors are related to a given label $L(t^*)$ for each given time $t^*$:
$F_1(S_1, t^*), \ldots, F_m(S_i, t^*), \ldots, F_{|F|}(S_N, t^*) \rightarrow L(t^*)$
where $L(t^*)$ takes a discrete value from $L = \{L_1, \ldots, L_l, \ldots, L_{|L|}\}$ and $L_l$ identifies the labeled activity performed by the inhabitant at the given time $t^*$.
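As a concrete illustration of these definitions, the following sketch shows one possible way to hold a sensor stream and pair a feature vector with its activity label at a given time $t^*$. The structures and names (SensorStream, last_value, build_sample) are hypothetical helpers, not part of the deployed system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class SensorStream:
    """Data stream S_i of one sensor: (timestamp, value) pairs collected every delta_t seconds."""
    delta_t: float
    values: List[Tuple[float, float]] = field(default_factory=list)

    def up_to(self, t_star: float) -> List[float]:
        """Values v_t observed up to the current time t*."""
        return [v for (t, v) in self.values if t <= t_star]

# A feature F_m(S_i, t*) aggregates the past values of one stream at time t*.
FeatureFn = Callable[[SensorStream, float], float]

def last_value(stream: SensorStream, t_star: float) -> float:
    """Example feature: the most recent value of the stream (0.0 if empty)."""
    past = stream.up_to(t_star)
    return past[-1] if past else 0.0

def build_sample(streams: Dict[str, SensorStream],
                 features: List[FeatureFn],
                 t_star: float,
                 label: str) -> Tuple[List[float], str]:
    """Pair the feature vector F_1..F_|F| evaluated at t* with the activity label L(t*)."""
    x = [f(s, t_star) for s in streams.values() for f in features]
    return x, label
```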
In Section 2.2, we describe a formal methodology based on fuzzy logic to obtain spatial-temporal features to fuse the data from the heterogeneous sensors.
Next, we describe the technical and methodological aspects.

2.1. Technical Approach

In this section, we present sensors and smart objects that have been recently proposed as non-invasive data sources for describing daily human activities, followed by the middleware used to interconnect these different devices.

2.1.1. Smart Object and Devices

As mentioned in Section 1.1, the aim of this work is to enable the interaction between new generations of smart objects. In this section, we present the use of sensors and smart objects, which have been recently proposed as non-invasive data sources for describing daily human activities. These devices have been included in the case study presented in this work, deployed at the UJAmI Smart Lab of the University of Jaen (Jaen, Spain) [25] (http://ceatic.ujaen.es/ujami/en/smartlab).
First, in order to gather data on certain objects for AR, we included an inertial miniature board in some daily-use objects, which describes their movement and orientation. To collect this information, we attached a Tactigon board [43] to them, which reads inertial data from the accelerometer and sends them in real time under the BLE protocol. For prototyping purposes, in Figure 1, we show the integration of the inertial miniature board in some objects.
Second, we acquired indoor location data by means of UWB devices, which offer high performance with a location accuracy close to centimeters [44], using wearable devices carried by the inhabitants of the smart environment [44]. In this work, we integrated Decawave's EVK1000 device [45], which is configured with at least three anchors located in the environment and one tag for each inhabitant.
Third, as combining inertial sensors from wearable devices on the user enhances activity recognition [22], we collected inertial information from a wristband device worn by an inhabitant. In this case, we developed an Android Wear application deployed in a Polar M600, which runs on Android Wear [46]. The application allowed us to send data from the accelerometer sensor in real time through a WiFi connection.
Fourth, we included binary sensors in some static objects that the inhabitant interacts with while performing his/her daily activities, such as the microwave or the bed. For this purpose, we integrated several SmartThings devices [47] in the UJAmI Smart Lab, which transmit the activation of a pressure mat or the opening and closing of a door through the Z-Wave protocol. These four types of sensors represent a new trend of high-capability devices for AR. In the next section, we describe the architecture to connect these heterogeneous sources and distribute sensor data in real time.

2.1.2. Middleware for Connecting Heterogeneous Devices

In this section, we describe a distributed architecture for IoT devices using MQTT, where aggregated data from sensors are shared under the publish-subscribe model. The development of an architecture to collect and distribute information from heterogeneous sensors in smart environments has become a key aspect, as well as a prolific research field, due to the lack of standardization in the integration of IoT devices [48]. In this section, we present the middleware deployed at the UJAmI Smart Lab of the University of Jaen (Spain) [25] based on the following points:
  • We include connectivity for devices in a transparent way, including BLE, TCP, and Z-Wave protocols.
  • The data collected from heterogeneous devices (without WiFi capabilities) are sent to a given gateway, which reads the raw data in a given protocol, aggregates them, and sends them by MQTT under TCP.
  • The representation of data includes the timestamp for when the data were collected together with the given value of the sensor. The messages in MQTT describe the data in JSON format, a lightweight, text-based, language-independent data interchange format [49].
Next, we describe the specific configuration for each sensor deployed in this work. First, the inertial data from the miniature boards located in smart objects are sent under BLE in raw format at a frequency close to 100 samples per second. A Raspberry Pi is configured as a BLE gateway, reading information from the Tactigon boards, aggregating the inertial data into one-second batches and sending a JSON message in MQTT on a given topic for each sensor.
Second, data from the Decawave UWB devices are collected in a gateway at a frequency close to 1 sample per second, reading the location of the tag devices by means of a USB connection. The open-source software (https://www.decawave.com/software/) from Decawave was used to read the information and then publish a JSON message in MQTT with the location on a given topic for each tag.
Third, an Android Wear application was developed in order to collect the information from the inertial sensors of the smart wristband devices. The application obtains acceleration samples at a frequency close to 100 per second, collecting a batch of aggregated samples and publishing a JSON message in MQTT on a given topic for each wearable device.
Fourth, a Raspberry Pi is configured as a Z-Wave gateway using a Z-Wave card connected to the GPIO and the software (https://z-wave.me/). In this way, the Raspberry Pi gateway is connected to smart things devices in real time, receiving the raw data and translating them to JSON format to be published in real time using MQTT on a given topic that identifies the device.
In Figure 2, we show the architecture of the hardware devices and software components that configure the middleware for distributing the heterogeneous data from sensors in real time with MQTT in JSON format.
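As an illustration of the gateway role described above, the following sketch aggregates raw samples into one-second batches and publishes them as JSON messages over MQTT using the paho-mqtt client. The broker address and topic name are placeholders, not the ones used in the UJAmI deployment.

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "192.168.1.10"           # placeholder broker address
TOPIC = "ujami/tactigon/cup"      # illustrative topic; the deployment uses one topic per sensor

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

def publish_batch(samples):
    """Publish a one-second batch of raw accelerometer samples as a JSON message."""
    message = {
        "timestamp": time.time(),  # collection time of the batch
        "values": samples,         # e.g., the (ax, ay, az) tuples read over BLE in the last second
    }
    client.publish(TOPIC, json.dumps(message))
```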

2.2. Fuzzy Fusion of Sensor Spatial-Temporal Features

After detailing the technical configuration of the devices and middleware involved in this work, we present a methodology used to extract spatial-temporal features and represent the heterogeneous data from sensors using fuzzy logic in a homogeneous way in order to learn and evaluate tasks in activity recognition using light and efficient classifiers.
The proposed methodology is based on the following stages:
  • Describing the spatial representation of sensors by means of fuzzy linguistic scales composed of ordered terms, which provide high interpretability and require minimal expert knowledge.
  • Aggregating and describing the temporal evolution of the terms from linguistic scales by means of fuzzy temporal windows including a middle-to-short temporal evaluation.
  • Predicting AR from the fused sensor features by means of light classifiers, which can be trained and evaluated in devices with low computing power.
In Figure 3, we show the components involved in fusing the spatial-temporal features of heterogeneous sensors.

2.2.1. Spatial Features with Fuzzy Scales

In this section, we detail how the data from heterogeneous sensors are described using fuzzy scales, requiring minimal expert knowledge. A fuzzy scale $\overline{|L_i|}$ of granularity $g$ describes the values of an environmental sensor $s_i$ and is defined by the terms $A^i_l$, $l \in [1, g]$. The terms within the fuzzy linguistic scale (i) fit naturally and equally ordered within the domain of discourse of the sensor data stream $S_i$, given by the interval values $[L_1, L_g]$, and (ii) fulfill the principle of overlapping to ensure a smooth transition [50].
$\overline{|L_i|} = \{\overline{A^i_1}, \ldots, \overline{A^i_l}, \ldots, \overline{A^i_g}\}$
Each term $A^i_l$, $l \in [1, g]$, is characterized by using a triangular membership function as detailed in Appendix A. Therefore, the terms $A^i_l$, which describe the sensor $s_i$, configure the fuzzy spatial features of the sensor from its values:
$A^i_l(t^*) = A^i_l(S_i(t^*)) = \{A^i_{l,t^*}(v^i_{t^*}), \ldots, A^i_{l,j}(v^i_j)\}$
To give a graphical description of the use of fuzzy linguistic scales in describing sensor values, we provide an example for a location sensor in Figure 4. In the example, the location sensor $s$ measures the distance in meters to the inhabitant within a maximum of 6 meters (we omit the sensor superscript for the sake of simplicity), and its sensor stream $S(t^*) = \{v_{t_1} = 0.5\ \mathrm{m}, v_{t_2} = 5.0\ \mathrm{m}\}$ is defined at the two points of time $t_1$ and $t_2$. First, we describe a fuzzy scale $\overline{|L|}$ of granularity $g = 3$, which determines the membership functions of the terms $A_1, A_2, A_3$. Second, from the values of the sensor stream, which define the distances to the location sensor, we computed the degrees for each term $A_1, A_2, A_3$ in the fuzzy scale, obtaining $A_1(t^*) = \{A_{1,t_1} = 0, A_{1,t_2} = 0.83\}$, $A_2(t^*) = \{A_{2,t_1} = 0.33, A_{2,t_2} = 0.16\}$, $A_3(t^*) = \{A_{3,t_1} = 0.66, A_{3,t_2} = 0\}$.
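A minimal sketch of this spatial fuzzification follows, assuming equally spaced, overlapping triangular terms over the sensor domain as in Appendix A; the function names are illustrative.

```python
from typing import List

def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Triangular membership TRS(x)[left, peak, right] as in Appendix A."""
    if x == peak:
        return 1.0
    if x <= left or x >= right:
        return 0.0
    if x < peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_scale_degrees(x: float, lo: float, hi: float, g: int) -> List[float]:
    """Degrees of the g equally spaced, overlapping terms of a fuzzy scale over [lo, hi]."""
    step = (hi - lo) / (g - 1)
    centers = [lo + l * step for l in range(g)]
    # The first and last terms are clipped (shouldered) at the interval limits.
    return [triangular(x, max(lo, c - step), c, min(hi, c + step)) for c in centers]

# Distance of 0.5 m on a scale of granularity g = 3 over [0, 6] m:
print(fuzzy_scale_degrees(0.5, 0.0, 6.0, 3))  # ~[0.83, 0.17, 0.0]
```

For a distance of 0.5 m this yields degrees of approximately 0.83, 0.17, and 0 for the three terms, which matches the degrees listed in the example above.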

2.2.2. Temporal Features with Fuzzy Temporal Windows

In this section, the use of multiple Fuzzy Temporal Windows (FTWs) and fuzzy aggregation methods [35] is proposed to enable the short- and middle-term representation [5,6] of the temporal evolution of the degrees of the terms $A^i_l(t^*)$.
The FTWs are described straightforwardly according to the distance from the current time $t^*$ to a given timestamp $t_j$, $\Delta t_j = t^* - t_j$, using the membership function $\mu_{T_k}(\Delta t_j)$. Therefore, a given FTW $T_k$ is defined by the values $L_{k_1}, L_{k_2}, L_{k_3}, L_{k_4}$, which determine a trapezoidal membership function (referred to in Appendix C), as:
$T_k = T_k(\Delta t_j)\,[L_{k_1}, L_{k_2}, L_{k_3}, L_{k_4}]$
Next, the aggregation degree of the relevant terms $A^i_l(t_j)$ within the temporal window $T^i_k$ of a sensor $s_i$ is computed using a max-min operator [35] (detailed in Appendix B). This aggregation degree is defined as $A^i_l(t^*) \otimes T^i_k(t^*)$, which represents the aggregation degree of the FTW $T^i_k$ over the degrees of the term $A^i_l(t^*)$ at a given time $t^*$.
We provide an example in Figure 5 to show a graphical description of the use of an FTW $T_1(\Delta t_j)$ in aggregating the degrees of a term $A_1$ in the sensor stream, given by $A_1(t^*) = \{A_{1,t_1} = 0.7, A_{1,t_2} = 0.2, A_{1,t_3} = 0.4, A_{1,t_4} = 0.3, A_{1,t_5} = 0.5, A_{1,t_6} = 0.9\}$ (we omit the sensor superscript for the sake of simplicity). First, we define an FTW as $T_1 = T_1(\Delta t_j)$ [1 s, 2 s, 4 s, 5 s] in magnitude of seconds. Second, we compute the degrees of the temporal window $T_1(t^*) = \{t_1 = 0, t_2 = 0.5, t_3 = 1, t_4 = 1, t_5 = 0.5, t_6 = 0\}$, whose aggregation degree $A_1(t^*) \otimes T_1(t^*)$ is computed by the max-min operator and determines the value of the spatial-temporal feature defined by the pair $T_1, A_1$. Therefore, we define a given feature $F_m = A^i_l(t^*) \otimes T^i_k(t^*)$ for each pair of fuzzy term $A^i_l$ and FTW $T^i_k$ of a sensor stream $S_i$ at the current time $t^*$.
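The following sketch reproduces the aggregation of this example using the trapezoidal membership of Appendix C and the max-min operator of Appendix B. The absolute timestamps and the evaluation time (chosen half a second after the last sample so that the window degrees match {0, 0.5, 1, 1, 0.5, 0}) are illustrative assumptions.

```python
def trapezoidal(x: float, l1: float, l2: float, l3: float, l4: float) -> float:
    """Trapezoidal membership TS(x)[l1, l2, l3, l4] as in Appendix C."""
    if x <= l1 or x >= l4:
        return 0.0
    if x < l2:
        return (x - l1) / (l2 - l1)
    if x <= l3:
        return 1.0
    return (l4 - x) / (l4 - l3)

def ftw_feature(term_degrees, timestamps, t_star, ftw):
    """Max-min aggregation of the degrees of a term A_l within an FTW T_k (Appendix B)."""
    return max(min(a, trapezoidal(t_star - t, *ftw))
               for a, t in zip(term_degrees, timestamps))

# Figure 5 example: degrees of A_1 at t_1..t_6 and the FTW [1 s, 2 s, 4 s, 5 s].
degrees = [0.7, 0.2, 0.4, 0.3, 0.5, 0.9]
timestamps = [1, 2, 3, 4, 5, 6]   # hypothetical one-second timestamps
t_star = 6.5                      # assumed evaluation time, so delta_t = 5.5, 4.5, ..., 0.5
print(ftw_feature(degrees, timestamps, t_star, (1, 2, 4, 5)))  # 0.5
```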

3. Results

In this section, we describe the experimental setup and results of a case study developed at the UJAmI Smart Lab of the University of Jaen (Spain), which were gathered in order to evaluate the proposed methodology for AR.

3.1. Experimental Setup

The devices defined in Section 2.1.1 were previously deployed at the UJAmI Smart Lab of the University of Jaen. The middleware based on MQTT and JSON messages integrated: (i) UWB Decawave location devices, (ii) Tactigon inertial devices, (iii) SmartThings binary sensors, (iv) wearable devices (Polar M600) with Android Wear, and (v) Raspberry Pi gateways. The middleware allowed us to collect data from environmental sensors in real time: location and acceleration data from inhabitants; acceleration data from three smart objects: a cup, a toothbrush, and a fork; and binary activation from nine static objects: bathroom faucet, toilet flush, bed, kitchen faucet, microwave, TV, phone, closet, and main door.
In the case study, 5 scenes were collected while the inhabitant performed 10 activities: sleep, toileting, prepare lunch, eat, watch TV, phone, dressing, toothbrush, drink water and enter-leave house. A scene consisted of a coherent sequence of human actions in daily life, such as: waking up, preparing breakfast, and getting dressed to leave home. In the 5 scenes, a total of 842 samples for each one of the 26 sensors were recorded in one-second time-steps. Due to the high inflow of data from the inertial sensors, which were configured to 50 Hz, we aggregated the data in one-second batches within the gateways. Other sensors sent the last single value for each one-second step from the gateways where they were connected. In Table 1, we provide a description of the case scenes and in Table 2 the frequency of activities.
The information from all sensors was distributed in real time by means of MQTT messages and topics in one-second time-steps. An MQTT subscriber collected and recorded the sensor data from the streaming topics within a database. The collection of data was managed by MQTT messaging, enabling us to start or stop data collection in the database in real time. We note that at the same point of time, each board or computer could produce different timestamps, since the clocks did not have to be synchronized. To synchronize all the devices (within the one-second interval), we collected the timestamp of the first value for each sensor from the initial message for collecting data, which determined the reference time $t_0$ for this sensor. Therefore, all the following timestamps for each sensor were computed relative to this starting time as $t_i - t_0$. Some examples of data collected from different sources are shown in Figure 6.
During the case study, an external observer labeled the timeline with the activity carried out by the inhabitant in real time. For training and evaluation purposes, a cross-validation was carried out with the 5 scenes (each scene was used in turn for testing while the remaining scenes were used for training). The evaluation of the AR was developed in streaming for each second under real-time conditions, without explicit segmentation of the activities performed. Next, we merged all time-steps from the 5 test scenes, configuring a full timeline test, which could be analyzed according to the metrics. The metrics used to evaluate the models were precision, recall, and F1-score, which were computed for each activity. In turn, we allowed an error margin of one second, since the human labeling of the scenes may be slightly displaced at this speed.
Finally, as light and efficient classifiers, we evaluated kNN, decision tree (C4.5), and SVM, whose implementations in Java and C++ [51,52,53] enable learning and evaluation capabilities in miniature boards or mobile devices. We evaluated the approach in a mid-range mobile device (Samsung Galaxy J7), where the classifiers were integrated using Weka [52] and the learning time of each classifier was measured.
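A minimal sketch of this leave-one-scene-out evaluation is given below. The paper trains the classifiers with Weka on Android; here scikit-learn is used as a stand-in (an assumption), the number of neighbors is a placeholder, and the one-second tolerance margin is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_recall_fscore_support

def leave_one_scene_out(scenes):
    """scenes: list of (X, y) pairs, one per recorded scene (hypothetical data layout)."""
    y_true, y_pred = [], []
    for i, (X_test, y_test) in enumerate(scenes):
        X_train = np.vstack([X for j, (X, _) in enumerate(scenes) if j != i])
        y_train = np.concatenate([y for j, (_, y) in enumerate(scenes) if j != i])
        clf = KNeighborsClassifier(n_neighbors=1)  # light, lazy learner; k is a placeholder
        clf.fit(X_train, y_train)
        y_true.extend(y_test)
        y_pred.extend(clf.predict(X_test))
    # Per-activity precision, recall and F1-score over the merged test timeline.
    return precision_recall_fscore_support(y_true, y_pred, average=None)
```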

3.2. Baseline Features

In this section, we present the results of baseline features in AR using the raw data provided by the sensors. Therefore, first we applied the classification of raw data collected by the middleware for each second and activity label. Second, in order to evaluate the impact of aggregating raw data in a temporal range, we included an evaluation of several sizes of temporal windows, which summarized the sensor data using maximal aggregation. The configurations were: (i) $[t^+, t^-] = [0, 1]$, (ii) $[t^+, t^-] = [1, 3]$, and (iii) $[t^+, t^-] = [2, 5]$, where $[t^+, t^-]$ configures the temporal window $[t^* + t^+, t^* - t^-]$ for each evaluation time $t^*$ in the timeline. The number of features corresponds to the number of sensors $|S|$. Results and learning time in the mobile device for each activity and classifier are shown in Table 3; the confusion matrix for the best configuration, $[t^+, t^-] = [0, 1]$ and SVM, is shown in Figure 7.
We can observe that the use of one temporal window with baseline features was only suitable when the window size fit the short-term sensor activation, $[t^+, t^-] = [0, 1]$.
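A minimal sketch of this crisp-window baseline follows, computing one max-aggregated value per sensor over the window $[t^* - t^-, t^* + t^+]$; the data layout is a hypothetical dictionary of timestamped values.

```python
import numpy as np

def baseline_features(streams, t_star, t_plus, t_minus):
    """One max-aggregated value per sensor over the crisp window [t* - t_minus, t* + t_plus].

    streams: dict sensor_id -> list of (timestamp, value); returns |S| features.
    """
    lo, hi = t_star - t_minus, t_star + t_plus
    features = []
    for values in streams.values():
        in_window = [v for (t, v) in values if lo <= t <= hi]
        features.append(max(in_window) if in_window else 0.0)
    return np.array(features)
```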

3.3. Fuzzy Spatial-Temporal Features

In this section, we detail the extraction of fuzzy spatial-temporal features from the sensors of the case study. First, in order to process the data from the UWB and acceleration sensors (in wearable devices and inertial objects), we applied a normalized linguistic scale $\overline{|L_i|}$ with granularity $g = 3$, where the proposed linguistic terms fit naturally ordered within the domain of discourse of the environmental sensor.
The number of features corresponds to the number of sensors times the granularity, $|S| \times g$. The linguistic scale of the UWB location was defined in the domain $[0, 6]$ m, since the smart lab is less than six meters in size, and the linguistic scale of the acceleration data was defined between the normalized angles in $[-1, 1]$. Binary sensors, which are represented by the values $\{0, 1\}$ (1 in the case of activation), have the same straightforward representation as either a fuzzy or a crisp value.
In Table 4, we present the results and learning time in the mobile device for each activity and classifier; the confusion matrix for the best configuration, $[t^+, t^-] = [2, 5]$ and kNN, is shown in Figure 7. We note the stability of the results across the different windows compared to the previous results without fuzzy processing.
Second, we applied two configurations of FTWs to represent the middle- and short-term activation of sensors: (i) $[t^+, t^-] = [3, 5]$ with $T_1^- = \{5, 5, 3, 2\}$, $T_0 = \{3, 2, 2, 3\}$, $T_1^+ = \{0, 0, 2, 3\}$ and (ii) $[t^+, t^-] = [8, 13]$ with $T_1^- = \{13, 13, 3, 2\}$, $T_0 = \{3, 2, 2, 3\}$, $T_1^+ = \{0, 0, 3, 8\}$, where $T_1^-$ represents a past fuzzy temporal window, $T_0$ a fuzzy temporal window closer to the current time $t^*$, and $T_1^+$ a delayed temporal window from the current time $t^*$. The first and second configurations covered a total temporal evaluation of 8 s and 21 s, respectively. The number of features corresponds to the number of sensors times the granularity and the number of temporal windows, $|S| \times g \times |T|$.
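To make the resulting feature space concrete, the sketch below combines the fuzzy_scale_degrees() and ftw_feature() helpers from the earlier sketches to produce one feature per (sensor, term, window) triple, i.e., $|S| \times g \times |T|$ values; the data layout and the ascending trapezoid parameters are illustrative assumptions.

```python
def fuzzy_spatio_temporal_features(streams, scales, ftws, t_star):
    """One feature per (sensor, scale term, FTW) triple: |S| x g x |T| values in [0, 1].

    streams: dict sensor_id -> list of (timestamp, value)
    scales:  dict sensor_id -> (lo, hi, g), the fuzzy linguistic scale of that sensor
    ftws:    list of trapezoidal FTW parameters (l1, l2, l3, l4) over delta_t
    Reuses fuzzy_scale_degrees() and ftw_feature() from the previous sketches.
    """
    features = []
    for sensor_id, values in streams.items():
        lo, hi, g = scales[sensor_id]
        timestamps = [t for (t, _) in values]
        for l in range(g):
            # Degrees of the l-th scale term for every sample of the stream.
            term_degrees = [fuzzy_scale_degrees(v, lo, hi, g)[l] for (_, v) in values]
            for ftw in ftws:
                features.append(ftw_feature(term_degrees, timestamps, t_star, ftw))
    return features
```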
Finally, in Table 5, we present the results and learning time in the mobile device for each activity and classifier; the kNN confusion matrix is shown in Figure 7. We note the increase in performance when including several fuzzy temporal windows, highlighting the learning time, efficiency, and F1-score of kNN.

3.4. Representation with Extended Baseline Features

In this section, we evaluate the impact of including an advanced representation of the sensors as baseline features. For the inertial sensors, we included the aggregation functions maximal, minimal, average, and standard deviation, which have been identified as a strong representation of acceleration data [27]. In the case of binary sensors, we included the last activation of the sensor and the current activation to represent the last status of the smart environment, which has brought about encouraging results in activity recognition [28,54]. These new features were computed to obtain an extended sensor representation.
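A minimal sketch of this extended representation follows; the interpretation of the last activation as the time elapsed since the most recent activation is our assumption, and the data layouts are illustrative.

```python
import numpy as np

def extended_inertial_features(window_values):
    """Maximal, minimal, average and standard deviation of an acceleration window."""
    a = np.asarray(window_values, dtype=float)
    return [a.max(), a.min(), a.mean(), a.std()]

def extended_binary_features(events, t_star):
    """Current activation plus the time elapsed since the last activation of a binary sensor.

    events: list of (timestamp, state) with state 1 on activation, 0 otherwise.
    """
    current = events[-1][1] if events else 0
    activations = [t for (t, s) in events if s == 1 and t <= t_star]
    time_since_last = t_star - max(activations) if activations else float("inf")
    return [current, time_since_last]
```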
First, we computed the performance of the extended representation when used as baseline features within one-second windows, $[t^+, t^-] = [0, 1]$, which can be compared with Table 3 to see how the results correspond with the non-extended features. Second, we evaluated the impact of applying the fuzzy spatial-temporal methodology to the extended features with the FTW configurations $[t^+, t^-] = [8, 13]$ and $[t^+, t^-] = [3, 5]$, which can be compared with Table 5 to see how the results correspond with the non-extended features. The results with the performance of the extended representation are shown in Table 6.

3.5. Impact on Selection by Type of Sensor

In this section, we evaluate the impact of selecting a subset of the sensors of the case study on activity recognition. For this, we started with the best configuration, which utilized fuzzy spatial-temporal extended features with the FTW configuration $[t^+, t^-] = [8, 13]$. From this configuration, we evaluated four subsets of sensors by type:
  • (S1) removing binary (using inertial and location) sensors.
  • (S2) removing location (using binary and inertial) sensors.
  • (S3) removing inertial (using binary and location) sensors.
  • (S4) removing binary and location (using only inertial) sensors.
In Table 7, we show the results of selecting the four subsets of sensors by type.

3.6. Discussion

Based on the results shown in the case study, we argue that the use of fuzzy logic to extract spatial-temporal features from heterogeneous sensors constitutes a suitable model for representation and learning purposes in AR. First, the spatial representation based on fuzzy scales increased performance with respect to crisp raw values. Second, the inclusion of multiple fuzzy temporal windows as features, which enables the middle- and short-term representation of sensor data, brought about a relevant increase in performance. Moreover, the two configurations of FTWs evaluated showed similar results, suggesting that window size definition is not critical when modeling FTW parameters, unlike with crisp windows and baseline features. Third, the fuzzy spatial-temporal features showed encouraging performance from raw sensor data; in addition, we evaluated an advanced representation for inertial and binary sensors, and the use of these extended features increased performance slightly, by around 1–2%. Fourth, we evaluated the impact of removing some types of sensors from the deployment of the smart lab. The combination of all types of sensors provided the best configuration, and we note that: (i) using only the inertial sensors of the wearable and the smart objects reduced performance notably; (ii) the combination of binary sensors with location or inertial sensors was close to the best approach, which featured all of them. Finally, it is noteworthy that kNN showed encouraging results, together with SVM. The shorter learning time and the high F1-score of kNN in AR suggest that it is the best option as a classifier for learning AR within miniature boards. Decision trees had lower performance due to their poor capabilities in analyzing continuous data.

4. Conclusions and Ongoing Works

The aim of this work was to describe and fuse the information from heterogeneous sensors in an efficient and lightweight manner in order to enable IoT devices to compute spatial-temporal features for AR, which can be deployed in fog computing architectures. On the one hand, a case study with a combination of location, inertial, and binary sensors was performed in a smart lab where an inhabitant carried out 10 daily activities. We included the integration of inertial sensors in daily objects and high-precision location sensors as novel aspects, using middleware based on MQTT. On the other hand, we showed the capabilities of fuzzy scales and fuzzy temporal windows to increase the spatial-temporal representation of sensors. We highlight that the results showed stable performance with fuzzy temporal windows, which alleviates the window size selection problem. For the spatial features, we applied the same general method based on linguistic scales to fuse and describe heterogeneous sensors. We evaluated the impact of removing sensors by type (binary, location, and inertial), which provided relevant feedback on which ones performed better for activity recognition in a smart lab setting. Finally, we note the high representativeness of fuzzy logic in describing features, which was exploited by straightforward and efficient classifiers, among which the performance of kNN stood out.

Author Contributions

Conceptualization: all authors; M.E. and J.M.Q.: methodology, software, and validation.

Funding

This research received funding from the REMIND project under the Marie Sklodowska-Curie EU Framework for Research and Innovation Horizon 2020, Grant Agreement No. 734355. Furthermore, this contribution was supported by the Spanish government by means of the projects RTI2018-098979-A-I00, PI-0387-2018, and CAS17/00292.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UWB	Ultra-Wide-Band
IoT	Internet of Things
BLE	Bluetooth Low Energy
MQTT	Message Queue Telemetry Transport
JSON	JavaScript Object Notation
AR	Activity Recognition
FTW	Fuzzy Temporal Window

Appendix A. Linguistic Scale of Fuzzy Terms by Means of Triangular Membership Functions

A linguistic scale for a given environmental sensor $s_i$ is defined by: (i) the interval values $[L_1, L_g]$ and (ii) the granularity $g$. Each term $A^i_l$, $l \in [1, g]$, is characterized by a triangular membership function $\mu_{\overline{A^i_l}}(x)$ [55], which is defined by the interval values $L_{l-1}, L_l, L_{l+1}$ as $\mu_{\overline{A^i_l}}(x) = TRS(x)[L_{l-1}, L_l, L_{l+1}]$, where:
$$TRS(x)[l_{i-1}, l_i, l_{i+1}] = \begin{cases} 0 & x \leq l_{i-1} \\ (x - l_{i-1})/(l_i - l_{i-1}) & l_{i-1} < x < l_i \\ (l_{i+1} - x)/(l_{i+1} - l_i) & l_i < x < l_{i+1} \\ 0 & l_{i+1} \leq x \end{cases}$$

Appendix B. Aggregating Fuzzy Temporal Windows and Terms

For a given fuzzy term $V_r$ and a fuzzy temporal window $T_k$ defined over a sensor stream $S_i = \{v^i_t\}$, we define the aggregation $V_r \otimes T_k$ at a given current time $t^*$ as:
$$V_r \otimes T_k(v^i_t, t^*) = V_r(v^i_t) \otimes T_k(\Delta t^*), \quad \Delta t^* = t^* - t, \quad \in [0, 1]$$
$$V^i_r \otimes T^i_k(t^*) = V^i_r \otimes T^i_k(S_i, t^*) = \bigoplus_{v^i_t \in S_i} V_r \otimes T_k(v^i_t, t^*) \in [0, 1]$$
where $\otimes$ denotes the t-norm and $\bigoplus$ the co-norm aggregation over the stream.
Using max-min [35] as an operation to model the t-norm and co-norm, we obtain:
$$V^i_r \otimes T^i_k(t^*) = \max_{t \in S_i}\left(\min\left(V_r(v^i_t), T_k(\Delta t)\right)\right) \in [0, 1]$$

Appendix C. Representation of Fuzzy Temporal Windows using Trapezoidal Membership Functions

Each FTW $T_k$ is described by a trapezoidal function of the time interval from a previous time $t_j$ to the current time $t^*$, $T_k(\Delta t_j)[l_1, l_2, l_3, l_4]$, that is, a fuzzy set characterized by a membership function whose shape corresponds to a trapezoidal function. The well-known trapezoidal membership function is defined by a lower limit $l_1$, an upper limit $l_4$, a lower support limit $l_2$, and an upper support limit $l_3$ (refer to Equation (A4)):
$$TS(x)[l_1, l_2, l_3, l_4] = \begin{cases} 0 & x \leq l_1 \\ (x - l_1)/(l_2 - l_1) & l_1 < x < l_2 \\ 1 & l_2 \leq x \leq l_3 \\ (l_4 - x)/(l_4 - l_3) & l_3 < x < l_4 \\ 0 & l_4 \leq x \end{cases}$$

References

  1. Bravo, J.; Fuentes, L.; de Ipina, D.L. Theme issue: Ubiquitous computing and ambient intelligence. Pers. Ubiquitous Comput. 2011, 15, 315–316. [Google Scholar] [CrossRef]
  2. Bravo, J.; Hervas, R.; Fontecha, J.; Gonzalez, I. m-Health: Lessons Learned by m-Experiences. Sensors 2018, 18, 1569. [Google Scholar] [CrossRef] [PubMed]
  3. Rashidi, P.; Mihailidis, A. A Survey on Ambient Assisted Living Tools for Older Adults. IEEE J. Biomed. Health Inform. 2013, 17, 579–590. [Google Scholar] [CrossRef] [PubMed]
  4. De-la-Hoz, E.; Ariza-Colpas, P.; Medina, J.; Espinilla, M. Sensor-based datasets for Human Activity Recognition—A Systematic Review of Literature. IEEE Access 2018, 6, 59192–59210. [Google Scholar] [CrossRef]
  5. Medina-Quero, J.; Zhang, S.; Nugent, C.; Espinilla, M. Ensemble classifier of long short-term memory with fuzzy temporal windows on binary sensors for activity recognition. Expert Syst. Appl. 2018, 114, 441–453. [Google Scholar] [CrossRef]
  6. Ali-Hamad, R.; Salguero, A.; Bouguelia, M.H.; Espinilla, M.; Medina-Quero, M. Efficient activity recognition in smart homes using delayed fuzzy temporal windows on binary sensors. IEEE J. Biomed. Health Inform. 2019. [Google Scholar] [CrossRef]
  7. Ordoñez, F.J.; Roggen, D. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [PubMed]
  8. Garcia Lopez, P.; Montresor, A.; Epema, D.; Datta, A.; Higashino, T.; Iamnitchi, A.; Barcellos, M.; Felber, P.; Riviere, E. Edge-centric computing: Vision and challenges. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 37–42. [Google Scholar] [CrossRef]
  9. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, 17 August 2012; pp. 13–16. [Google Scholar]
  10. Luan, T.H.; Gao, L.; Li, Z.; Xiang, Y.; Wei, G.; Sun, L. Fog computing: Focusing on mobile users at the edge. arXiv 2015, arXiv:1502.01815. [Google Scholar]
  11. Kopetz, H. Internet of Things. In Real-Time Systems; Springer: New York, NY, USA, 2011; pp. 307–323. [Google Scholar]
  12. Chen, L.W.; Ho, Y.F.; Kuo, W.T.; Tsai, M.F. Intelligent file transfer for smart handheld devices based on mobile cloud computing. Int. J. Commun. Syst. 2015, 30, e2947. [Google Scholar] [CrossRef]
  13. Atzori, L.; Iera, A.; Morabito, G. The Internet of Things: A survey. Comput. Netw. 2010, 54, 2787–2805. [Google Scholar] [CrossRef]
  14. Kortuem, G.; Kawsar, F.; Sundramoorthy, V.; Fitton, D. Smart objects as building blocks for the internet of things. IEEE Internet Comput. 2010, 14, 44–51. [Google Scholar] [CrossRef]
  15. Kim, J.E.; Boulos, G.; Yackovich, J.; Barth, T.; Beckel, C.; Mosse, D. Seamless integration of heterogeneous devices and access control in smart homes. In Proceedings of the 2012 Eighth International Conference on Intelligent Environments, Guanajuato, Mexico, 26–29 June 2012; pp. 206–213. [Google Scholar]
  16. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
  17. Luzuriaga, J.E.; Cano, J.C.; Calafate, C.; Manzoni, P.; Perez, M.; Boronat, P. Handling mobility in IoT applications using the MQTT protocol. In Proceedings of the 2015 Internet Technologies and Applications (ITA), Wrexham, UK, 8–11 September 2015; pp. 245–250. [Google Scholar]
  18. Shi, H.; Chen, N.; Deters, R. Combining mobile and fog computing: Using coap to link mobile device clouds with fog computing. In Proceedings of the 2015 IEEE International Conference on Data Science and Data Intensive Systems, Sydney, NSW, Australia, 11–13 December 2015; pp. 564–571. [Google Scholar]
  19. Henning, M. A new approach to object-oriented middleware. IEEE Internet Comput. 2004, 8, 66–75. [Google Scholar] [CrossRef]
  20. Ruiz, A.R.J.; Granja, F.S. Comparing ubisense, bespoon, and decawave uwb location systems: Indoor performance analysis. IEEE Trans. Instrum. Meas. 2017, 66, 2106–2117. [Google Scholar] [CrossRef]
  21. Lin, X.Y.; Ho, T.W.; Fang, C.C.; Yen, Z.S.; Yang, B.J.; Lai, F. A mobile indoor positioning system based on iBeacon technology. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milano, Italy, 25–29 August 2015; pp. 4970–4973. [Google Scholar]
  22. Fiorini, L.; Bonaccorsi, M.; Betti, S.; Esposito, D.; Cavallo, F. Combining wearable physiological and inertial sensors with indoor user localization network to enhance activity recognition. J. Ambient Intell. Smart Environ. 2018, 10, 345–357. [Google Scholar] [CrossRef] [Green Version]
  23. Singla, G.; Cook, D.J.; Schmitter-Edgecombe, M. Tracking activities in complex settings using smart environment technologies. Int. J. Biosci. Psychiatry Technol. IJBSPT 2009, 1, 25. [Google Scholar]
  24. Yan, S.; Liao, Y.; Feng, X.; Liu, Y. Real time activity recognition on streaming sensor data for smart environments. In Proceedings of the 2016 International Conference on Progress in Informatics and Computing (PIC), Shanghai, China, 23–25 December 2016; pp. 51–55. [Google Scholar]
  25. Espinilla, M.; Martínez, L.; Medina, J.; Nugent, C. The experience of developing the UJAmI Smart lab. IEEE Access 2018, 6, 34631–34642. [Google Scholar] [CrossRef]
  26. Hong, X.; Nugent, C.; Mulvenna, M.; McClean, S.; Scotney, B.; Devlin, S. Evidential fusion of sensor data for activity recognition in smart homes. Pervasive Mob. Comput. 2009, 5, 236–252. [Google Scholar] [CrossRef]
  27. Espinilla, M.; Medina, J.; Salguero, A.; Irvine, N.; Donnelly, M.; Cleland, I.; Nugent, C. Human Activity Recognition from the Acceleration Data of a Wearable Device. Which Features Are More Relevant by Activities? Proceedings 2018, 2, 1242. [Google Scholar] [CrossRef]
  28. Ordonez, F.; de Toledo, P.; Sanchis, A. Activity recognition using hybrid generative/discriminative models on home environments using binary sensors. Sensors 2013, 13, 5460–5477. [Google Scholar] [CrossRef] [PubMed]
  29. Quero, J.M.; Medina, M.Á.L.; Hidalgo, A.S.; Espinilla, M. Predicting the Urgency Demand of COPD Patients From Environmental Sensors Within Smart Cities With High-Environmental Sensitivity. IEEE Access 2018, 6, 25081–25089. [Google Scholar] [CrossRef]
  30. Rajalakshmi, A.; Shahnasser, H. Internet of Things using Node-Red and alexa. In Proceedings of the 2017 17th International Symposium on Communications and Information Technologies (ISCIT), Cairns, Australia, 25–27 September 2017; pp. 1–4. [Google Scholar]
  31. Yamashita, T.; Watasue, T.; Yamauchi, Y.; Fujiyoshi, H. Improving Quality of Training Samples Through Exhaustless Generation and Effective Selection for Deep Convolutional Neural Networks. VISAPP 2015, 2, 228–235. [Google Scholar]
  32. Lane, N.D.; Bhattacharya, S.; Mathur, A.; Georgiev, P.; Forlivesi, C.; Kawsar, F. Squeezing deep learning into mobile and embedded devices. IEEE Pervasive Comput. 2017, 16, 82–88. [Google Scholar] [CrossRef]
  33. Lane, N.D.; Warden, P. The deep (learning) transformation of mobile and embedded computing. Computer 2018, 51, 12–16. [Google Scholar] [CrossRef]
  34. Le Yaouanc, J.M.; Poli, J.P. A fuzzy spatio-temporal-based approach for activity recognition. In International Conference on Conceptual Modeling; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  35. Medina-Quero, J.; Martinez, L.; Espinilla, M. Subscribing to fuzzy temporal aggregation of heterogeneous sensor streams in real-time distributed environments. Int. J. Commun. Syst. 2017, 30, e3238. [Google Scholar] [CrossRef]
  36. Espinilla, M.; Medina, J.; Hallberg, J.; Nugent, C. A new approach based on temporal sub-windows for online sensor-based activity recognition. J. Ambient Intell. Humaniz. Comput. 2018, 1–13. [Google Scholar] [CrossRef] [Green Version]
  37. Banos, O.; Galvez, J.M.; Damas, M.; Guillen, A.; Herrera, L.J.; Pomares, H.; Rojas, I.; Villalonga, C.; Hong, C.S.; Lee, S. Multiwindow fusion for wearable activity recognition. In Proceedings of the International Work-Conference on Artificial Neural Networks, Palma de Mallorca, Spain, 10–12 June 2015; Springer: Cham, Switzerland, 2015; pp. 290–297. [Google Scholar]
  38. Grokop, L.H.; Narayanan, V.U.S. Device Position Estimates from Motion and Ambient Light Classifiers. U.S. Patent No. 9,366,749, 14 June 2016. [Google Scholar]
  39. Akhavian, R.; Behzadan, A.H. Smartphone-based construction workers’ activity recognition and classification. Autom. Constr. 2016, 71, 198–209. [Google Scholar] [CrossRef]
  40. Martin, H.; Bernardos, A.M.; Iglesias, J.; Casar, J.R. Activity logging using lightweight classification techniques in mobile devices. Pers. Ubiquitous Comput. 2013, 17, 675–695. [Google Scholar] [CrossRef]
  41. Chen, S.M.; Hong, J.A. Multicriteria linguistic decision making based on hesitant fuzzy linguistic term sets and the aggregation of fuzzy sets. Inf. Sci. 2014, 286, 63–74. [Google Scholar] [CrossRef]
  42. Morente-Molinera, J.A.; Pérez, I.J.; Ureña, R.; Herrera-Viedma, E. On multi-granular fuzzy linguistic modeling in decision making. Procedia Comput. Sci. 2015, 55, 593–602. [Google Scholar] [CrossRef]
  43. The Tactigon. 2019. Available online: https://www.thetactigon.com/ (accessed on 8 August 2019).
  44. Zafari, F.; Papapanagiotou, I.; Christidis, K. Microlocation for internet-of-things-equipped smart buildings. IEEE Internet Things J. 2016, 3, 96–112. [Google Scholar] [CrossRef]
  45. Kulmer, J.; Hinteregger, S.; Großwindhager, B.; Rath, M.; Bakr, M.S.; Leitinger, E.; Witrisal, K. Using DecaWave UWB transceivers for high-accuracy multipath-assisted indoor positioning. In Proceedings of the 2017 IEEE International Conference on Communications Workshops (ICC Workshops), Paris, France, 21–25 May 2017; pp. 1239–1245. [Google Scholar]
  46. Mishra, S.M. Wearable Android: Android Wear and Google Fit App Development; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  47. Smartthings. 2019. Available online: https://www.smartthings.com/ (accessed on 8 August 2019).
  48. Al-Qaseemi, S.A.; Almulhim, H.A.; Almulhim, M.F.; Chaudhry, S.R. IoT architecture challenges and issues: Lack of standardization. In Proceedings of the 2016 Future Technologies Conference (FTC), San Francisco, CA, USA, 6–7 December 2016; pp. 731–738. [Google Scholar]
  49. Bray, T. The Javascript Object Notation (Json) Data Interchange Format (No. RFC 8259). 2017. Available online: https://buildbot.tools.ietf.org/html/rfc7158 (accessed on 8 August 2019).
  50. Markowski, A.S.; Mannan, M.S.; Bigoszewska, A. Fuzzy logic for process safety analysis. J. Loss Prev. Process. Ind. 2009, 22, 695–702. [Google Scholar] [CrossRef]
  51. Beck, J. Implementation and Experimentation with C4.5 Decision Trees. Bachelor's Thesis, University of Central Florida, Orlando, FL, USA, 2007. [Google Scholar]
  52. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  53. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. TIST 2011, 2, 27. [Google Scholar] [CrossRef]
  54. Kasteren, T.L.; Englebienne, G.; Krose, B.J. An activity monitoring system for elderly care using generative and discriminative models. Pers. Ubiquitous Comput. 2010, 14, 489–498. [Google Scholar] [CrossRef] [Green Version]
  55. Chang, D.Y. Applications of the extent analysis method on fuzzy AHP. Eur. J. Oper. Res. 1996, 95, 649–655. [Google Scholar] [CrossRef]
Figure 1. Prototyping of smart objects (a cup, a toothbrush, and a fork), whose orientation and movement data are collected and sent in real time by an inertial miniature board (The Tactigon).
Figure 2. Architecture for connecting heterogeneous devices. Binary, location, and inertial board sensors send raw data to gateways, which collect, aggregate, and publish data with MQTT in JSON format. The Android Wear application collects, aggregates, and publishes the data directly using MQTT through WiFi connection.
Figure 3. Fuzzy fusion of spatial-temporal features of sensors: (i) data from the heterogeneous sensors are distributed in real time; (ii) fuzzy logic processes spatial-temporal features; (iii) a light and efficient classifier learns activities from the features.
Figure 4. Example of the fuzzy scale defined for $g = 3$ on the distance to the location sensor. Example degrees for the distances $S(t^*) = \{v_{t_1} = 0.5\ \mathrm{m}, v_{t_2} = 5.0\ \mathrm{m}\}$: $A_1(t^*) = \{A_{1,t_1} = 0, A_{1,t_2} = 0.83\}$, $A_2(t^*) = \{A_{2,t_1} = 0.33, A_{2,t_2} = 0.16\}$, $A_3(t^*) = \{A_{3,t_1} = 0.66, A_{3,t_2} = 0\}$.
Figure 5. Example of temporal aggregation of the FTW $T_1(\Delta t_j)$ [1 s, 2 s, 4 s, 5 s] (in magnitude of seconds) for the degrees of the term $A_1(t^*) = \{0.7, 0.2, 0.4, 0.3, 0.5, 0.9\}$. The aggregation degree $A_1(t^*) \otimes T_1(t^*) = 0.5$ is determined by the max-min operator. The value 0.5 defines a fuzzy spatial-temporal feature of the sensor stream.
Figure 6. Data from heterogeneous sensors. The top-left shows the location in meters from a UWB device. The top-right shows acceleration from a wearable device. The bottom-left shows acceleration in the inhabitant’s cup. The bottom-right shows the activation of the microwave. Some inhabitant behaviors and the impact on sensors are indicated in the timelines.
Figure 7. Confusion matrices for the best classifiers. (A) SVM + $[t^+, t^-] = [0, 1]$ with baseline features, (B) kNN + $[t^+, t^-] = [2, 4]$ with fuzzy spatial features, and (C) kNN + $[t^+, t^-] = [8, 15]$ with fuzzy spatial-temporal features.
Table 1. Sequence of activities of the case scenes.
Scene 1: Sleep → Toilet → Prepare lunch → Eat → Watch TV → Phone → Dressing → Toothbrush → Exit
Scene 2: Enter → Drinking → Toilet → Phone → Exit
Scene 3: Enter → Drinking → Toilet → Dressing → Cooking → Eat → Sleep
Scene 4: Enter → Toilet → Dressing → Watching TV → Cooking → Eat → Toothbrush → Sleep
Scene 5: Enter → Drinking → Toilet → Dressing → Cooking → Eat → Phone → Toothbrush → Sleep
Table 2. Frequency (number of time-steps) for each activity and scene.
Activity | Scene 1 | Scene 2 | Scene 3 | Scene 4 | Scene 5
Sleep | 10 | 0 | 16 | 11 | 11
Toilet | 13 | 10 | 6 | 10 | 14
Cooking | 29 | 0 | 27 | 17 | 24
Eat | 36 | 0 | 40 | 46 | 41
Watch TV | 20 | 0 | 0 | 16 | 0
Phone | 14 | 17 | 0 | 0 | 15
Dressing | 15 | 0 | 21 | 16 | 17
Toothbrush | 21 | 0 | 0 | 18 | 18
Exit | 3 | 2 | 0 | 0 | 0
Enter | 0 | 3 | 4 | 2 | 3
Drinking | 0 | 16 | 7 | 0 | 12
Table 3. Results with baseline features: precision (Pre), recall (Rec) and F1-score (F1-sc).

[t+, t−] = [0, 1]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 94.73 | 77.08 | 85.00 | 94.73 | 77.08 | 85.00 | 65.45 | 83.33 | 73.31
Toilet | 95.83 | 54.71 | 69.66 | 68.75 | 88.67 | 77.45 | 86.27 | 90.56 | 88.36
Cooking | 66.38 | 89.69 | 76.29 | 58.02 | 75.25 | 65.52 | 50.48 | 67.01 | 57.08
Eating | 81.43 | 92.02 | 86.40 | 80.36 | 93.86 | 86.59 | 76.97 | 77.30 | 77.13
Watching TV | 100 | 94.44 | 97.14 | 94.73 | 94.44 | 94.59 | 91.17 | 91.66 | 91.42
Phone | 100 | 93.47 | 96.62 | 100 | 100 | 100 | 97.22 | 95.65 | 96.43
Dressing | 90.00 | 94.20 | 92.05 | 93.24 | 97.10 | 95.13 | 88.00 | 76.81 | 82.02
Brushing Teeth | 97.36 | 73.68 | 83.88 | 76.27 | 89.47 | 82.34 | 70.96 | 50.87 | 59.26
Drinking | 56.25 | 45.71 | 50.43 | 32.25 | 57.14 | 41.23 | 32.85 | 80.00 | 46.58
Enter/Exit | 100 | 88.23 | 93.75 | 78.94 | 94.11 | 85.86 | 86.66 | 94.11 | 90.23
Average | 88.20 | 80.32 | 83.12 | 77.73 | 86.71 | 81.37 | 74.60 | 80.73 | 76.23

[t+, t−] = [1, 3]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 90.47 | 77.08 | 83.24 | 90.47 | 77.08 | 83.24 | 75.00 | 56.25 | 64.28
Toilet | 66.67 | 64.15 | 65.38 | 72.13 | 88.67 | 79.55 | 64.06 | 83.01 | 72.31
Cooking | 80.95 | 87.62 | 84.15 | 63.88 | 61.85 | 62.85 | 63.63 | 56.70 | 59.96
Eating | 84.65 | 93.86 | 89.02 | 82.53 | 92.02 | 87.01 | 88.28 | 71.16 | 78.80
Watching TV | 94.87 | 97.22 | 96.03 | 94.87 | 97.22 | 96.03 | 47.05 | 36.11 | 40.86
Phone | 94.44 | 100 | 97.14 | 97.72 | 93.47 | 95.55 | 97.82 | 93.47 | 95.60
Dressing | 94.73 | 100 | 97.29 | 90.36 | 100 | 94.93 | 84.61 | 68.11 | 75.47
Brushing Teeth | 82.22 | 68.42 | 74.68 | 73.91 | 71.92 | 72.90 | 93.87 | 84.21 | 88.78
Drinking | 34.61 | 28.57 | 31.30 | 22.85 | 28.57 | 25.39 | 38.15 | 85.71 | 52.80
Enter/Exit | 89.47 | 82.35 | 85.76 | 89.47 | 100 | 94.44 | 76.00 | 100 | 86.36
Average | 81.31 | 79.92 | 80.40 | 77.82 | 81.08 | 79.19 | 72.85 | 73.47 | 71.52

[t+, t−] = [2, 5]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 90.47 | 77.08 | 83.24 | 90.90 | 77.08 | 83.42 | 92.85 | 29.16 | 44.39
Toilet | 66.10 | 71.69 | 68.78 | 67.18 | 86.79 | 75.74 | 82.05 | 64.15 | 72.00
Cooking | 78.21 | 85.56 | 81.72 | 73.03 | 79.38 | 76.07 | 45.78 | 52.57 | 48.94
Eating | 89.01 | 94.47 | 91.66 | 87.57 | 93.25 | 90.32 | 91.83 | 58.89 | 71.76
Watching TV | 88.09 | 100 | 93.67 | 77.78 | 77.78 | 77.78 | 79.16 | 100 | 88.37
Phone | 95.74 | 93.47 | 94.59 | 97.56 | 86.95 | 91.95 | 85.41 | 86.95 | 86.17
Dressing | 92.95 | 94.20 | 93.57 | 93.05 | 95.65 | 94.33 | 75.00 | 62.31 | 68.07
Brushing Teeth | 86.20 | 87.71 | 86.95 | 80.00 | 77.19 | 78.57 | 64.38 | 84.21 | 79.97
Drinking | 58.33 | 40.00 | 47.45 | 48.14 | 40.00 | 43.69 | 30.00 | 74.28 | 42.73
Enter/Exit | 0 | 0 | 0 | 71.42 | 64.70 | 67.90 | 69.23 | 47.05 | 56.03
Average | 74.51 | 74.42 | 74.46 | 78.67 | 77.87 | 78.27 | 71.57 | 65.96 | 68.65
Learning time (average in the mobile device, in ms) | SVM: 998 | kNN: 25 | C4.5: 1980
Table 4. Results with fuzzy features (spatial): precision (Pre), recall (Rec) and F1-score (F1-sc).

[t+, t−] = [0, 1]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 90.24 | 77.08 | 83.14 | 92.30 | 77.08 | 84.01 | 79.62 | 91.66 | 85.22
Toilet | 84.90 | 86.79 | 85.83 | 89.58 | 84.90 | 87.18 | 88.00 | 92.45 | 90.17
Cooking | 63.02 | 86.59 | 72.95 | 60.97 | 73.19 | 66.52 | 44.71 | 75.25 | 56.09
Eating | 90.38 | 91.41 | 90.89 | 82.05 | 92.63 | 87.02 | 85.08 | 71.77 | 77.86
Watching TV | 100 | 94.44 | 97.14 | 97.05 | 94.44 | 95.73 | 96.96 | 91.66 | 94.24
Phone | 100 | 91.30 | 95.45 | 100 | 95.65 | 97.77 | 87.17 | 93.47 | 90.21
Dressing | 93.15 | 98.55 | 95.77 | 83.75 | 97.10 | 89.93 | 73.21 | 73.91 | 73.56
Brushing Teeth | 95.74 | 82.45 | 88.60 | 86.27 | 91.22 | 88.68 | 83.67 | 82.45 | 83.06
Drinking | 50.00 | 71.42 | 58.82 | 44.73 | 62.85 | 52.27 | 32.14 | 77.14 | 45.37
Enter/Exit | 100 | 88.23 | 93.75 | 82.35 | 94.11 | 87.84 | 55.55 | 94.11 | 69.86
Average | 86.74 | 86.83 | 86.23 | 81.90 | 86.32 | 83.69 | 72.61 | 84.39 | 76.56

[t+, t−] = [1, 3]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 92.68 | 77.08 | 84.16 | 86.66 | 77.08 | 81.59 | 75.00 | 68.75 | 71.73
Toilet | 71.66 | 83.01 | 76.92 | 91.11 | 79.24 | 84.76 | 88.09 | 71.69 | 79.05
Cooking | 79.79 | 91.75 | 85.35 | 79.74 | 83.50 | 81.58 | 54.71 | 40.20 | 46.35
Eating | 94.80 | 92.63 | 93.70 | 87.80 | 93.86 | 90.73 | 87.96 | 62.57 | 73.12
Watching TV | 94.87 | 97.22 | 96.03 | 94.87 | 97.22 | 96.03 | 97.22 | 94.44 | 95.81
Phone | 95.65 | 93.47 | 94.55 | 95.45 | 91.30 | 93.33 | 97.95 | 100 | 98.96
Dressing | 89.61 | 100 | 94.52 | 87.65 | 98.55 | 92.78 | 68.25 | 69.56 | 68.90
Brushing Teeth | 88.46 | 84.21 | 86.28 | 85.45 | 85.96 | 85.70 | 78.00 | 68.42 | 72.89
Drinking | 57.14 | 62.85 | 59.86 | 58.06 | 60.00 | 59.01 | 24.07 | 80.00 | 37.01
Enter/Exit | 90.00 | 100 | 94.73 | 87.50 | 82.35 | 84.84 | 77.27 | 100 | 87.17
Average | 85.46 | 88.22 | 86.61 | 85.43 | 84.90 | 85.03 | 74.85 | 75.56 | 73.10

[t+, t−] = [2, 5]
Activity | SVM Pre | SVM Rec | SVM F1-sc | kNN Pre | kNN Rec | kNN F1-sc | C4.5 Pre | C4.5 Rec | C4.5 F1-sc
Sleep | 87.17 | 72.91 | 79.41 | 92.30 | 95.83 | 94.03 | 81.39 | 72.91 | 76.92
Toilet | 63.79 | 69.81 | 66.66 | 77.04 | 90.56 | 83.26 | 66.66 | 75.47 | 70.79
Cooking | 81.72 | 88.65 | 85.04 | 80.88 | 73.19 | 76.84 | 61.67 | 45.36 | 52.27
Eating | 94.00 | 90.18 | 92.05 | 90.96 | 93.25 | 92.09 | 88.88 | 55.21 | 68.11
Watching TV | 92.68 | 100 | 96.20 | 88.88 | 100 | 94.11 | 92.85 | 100 | 96.29
Phone | 97.91 | 97.82 | 97.87 | 97.95 | 100 | 98.96 | 100 | 67.39 | 80.51
Dressing | 92.75 | 95.65 | 94.18 | 90.27 | 95.65 | 92.88 | 88.09 | 55.07 | 67.77
Brushing Teeth | 86.95 | 73.68 | 79.77 | 84.74 | 91.22 | 87.86 | 72.72 | 19.29 | 30.50
Drinking | 64.28 | 80.00 | 71.28 | 59.09 | 74.28 | 65.82 | 28.43 | 82.85 | 42.33
Enter/Exit | 81.25 | 94.11 | 87.21 | 88.88 | 94.11 | 91.42 | 80.95 | 88.23 | 84.43
Average | 84.25 | 86.28 | 85.25 | 85.10 | 90.81 | 87.86 | 76.16 | 66.18 | 70.82
Learning Time | SVM | kNN | C4.5
Average time in mobile device (in ms) | 2676 | 235 | 382
Table 5. Results with fuzzy features (spatial and temporal): precision (Pre), recall (Rec) and F1-score (F1-sc).

[t+, t-] = [3, 5] | SVM | kNN | C4.5
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 90.24 | 77.08 | 83.14 | 92.30 | 77.08 | 84.01 | 79.62 | 91.66 | 85.22
Toilet | 84.90 | 86.79 | 85.83 | 89.58 | 84.90 | 87.18 | 88.00 | 92.45 | 90.17
Cooking | 63.02 | 86.59 | 72.95 | 60.97 | 73.19 | 66.52 | 44.71 | 75.25 | 56.09
Eating | 90.38 | 91.41 | 90.89 | 82.05 | 92.63 | 87.02 | 85.08 | 71.77 | 77.86
Watching TV | 100 | 94.44 | 97.14 | 97.05 | 94.44 | 95.73 | 96.96 | 91.66 | 94.24
Phone | 100 | 91.30 | 95.45 | 100 | 95.65 | 97.77 | 87.17 | 93.47 | 90.21
Dressing | 93.15 | 98.55 | 95.77 | 83.75 | 97.10 | 89.93 | 73.21 | 73.91 | 73.56
Brushing Teeth | 95.74 | 82.45 | 88.60 | 86.27 | 91.22 | 88.68 | 83.67 | 82.45 | 83.06
Drinking | 50.00 | 71.42 | 58.82 | 44.73 | 62.85 | 52.27 | 32.14 | 77.14 | 45.37
Enter/Exit | 100 | 88.23 | 93.75 | 82.35 | 94.11 | 87.84 | 55.55 | 94.11 | 69.86
Average | 86.74 | 86.83 | 86.23 | 81.90 | 86.32 | 83.69 | 72.61 | 84.39 | 76.56

[t+, t-] = [8, 13] | SVM | kNN | C4.5
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 93.02 | 89.58 | 91.27 | 91.83 | 93.75 | 92.78 | 59.25 | 66.66 | 62.74
Toilet | 84.09 | 75.47 | 79.54 | 85.10 | 77.35 | 81.04 | 71.87 | 47.16 | 56.95
Cooking | 94.38 | 91.75 | 93.04 | 94.56 | 93.81 | 94.18 | 72.09 | 40.20 | 51.62
Eating | 97.87 | 86.50 | 91.83 | 90.11 | 93.25 | 91.65 | 83.69 | 58.89 | 69.13
Watching TV | 90.00 | 75.00 | 81.81 | 95.23 | 100 | 97.56 | 91.66 | 55.55 | 69.18
Phone | 97.82 | 95.65 | 96.72 | 92.45 | 100 | 96.07 | 100 | 100 | 100
Dressing | 95.65 | 97.10 | 96.37 | 93.93 | 92.75 | 93.34 | 83.67 | 57.97 | 68.49
Brushing Teeth | 87.23 | 73.68 | 79.88 | 89.28 | 89.47 | 89.37 | 35.96 | 68.42 | 47.14
Drinking | 75.67 | 77.14 | 76.40 | 80.55 | 80.00 | 80.27 | 27.38 | 65.71 | 38.65
Enter/Exit | 100 | 100 | 100 | 73.07 | 100 | 84.44 | 100 | 82.35 | 90.32
Average | 91.57 | 86.18 | 88.80 | 88.61 | 92.04 | 90.29 | 72.56 | 64.29 | 68.17
Learning Time | SVM | kNN | C4.5
Average time in mobile device (in ms) | 2128 | 244 | 308.5
Table 6. Results with extended baseline features: precision (Pre), recall (Rec) and F1-score (F1-sc).

[t+, t-] = [0, 1] | SVM | kNN | C4.5
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 84.44 | 77.08 | 80.59 | 77.08 | 77.08 | 77.08 | 80.64 | 56.25 | 66.27
Toilet | 78.57 | 62.26 | 69.47 | 67.85 | 79.24 | 73.11 | 93.02 | 86.79 | 89.79
Cooking | 76.04 | 84.53 | 80.06 | 68.57 | 69.07 | 68.82 | 55.20 | 82.47 | 66.13
Eating | 85.22 | 95.09 | 89.88 | 81.81 | 93.25 | 87.16 | 85.71 | 77.30 | 81.29
Watching TV | 94.73 | 94.44 | 94.59 | 85.71 | 94.44 | 89.86 | 97.14 | 94.44 | 95.77
Phone | 100 | 91.30 | 95.45 | 100 | 93.47 | 96.62 | 78.18 | 93.47 | 85.14
Dressing | 92.10 | 100 | 95.89 | 86.58 | 100 | 92.81 | 86.41 | 100 | 92.71
Brushing Teeth | 88.63 | 74.54 | 80.98 | 69.64 | 78.18 | 73.66 | 66.67 | 47.27 | 55.31
Drinking | 74.19 | 76.47 | 75.31 | 61.22 | 94.11 | 74.18 | 43.28 | 91.17 | 58.70
Enter/Exit | 84.61 | 88.23 | 86.38 | 84.21 | 94.11 | 88.88 | 80.00 | 94.11 | 86.48
Average | 85.85 | 84.39 | 84.86 | 78.27 | 87.29 | 82.22 | 76.62 | 82.33 | 77.76

[t+, t-] = [5, 3] | SVM | kNN | C4.5
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 86.66 | 87.50 | 87.08 | 82.69 | 91.66 | 86.94 | 76.74 | 68.75 | 72.52
Toilet | 83.78 | 58.49 | 68.88 | 93.61 | 83.01 | 88.00 | 80.48 | 62.26 | 70.21
Cooking | 88.37 | 86.59 | 87.47 | 91.02 | 82.47 | 86.53 | 57.30 | 65.97 | 61.33
Eating | 95.27 | 90.79 | 92.98 | 88.63 | 95.09 | 91.75 | 85.57 | 61.96 | 71.88
Watching TV | 85.00 | 94.44 | 89.47 | 85.71 | 97.22 | 91.10 | 82.85 | 86.11 | 84.45
Phone | 97.72 | 93.47 | 95.55 | 93.75 | 95.65 | 94.69 | 100 | 100 | 100
Dressing | 94.20 | 98.55 | 96.32 | 85.18 | 98.55 | 91.38 | 94.24 | 94.20 | 93.30
Brushing Teeth | 91.48 | 90.90 | 91.19 | 86.67 | 87.27 | 87.97 | 87.17 | 69.09 | 77.08
Drinking | 85.71 | 94.11 | 89.71 | 86.48 | 97.05 | 91.46 | 55.31 | 91.17 | 68.85
Enter/Exit | 95.00 | 100 | 97.43 | 79.16 | 100 | 88.37 | 100 | 100 | 100
Average | 90.32 | 89.48 | 89.90 | 87.49 | 92.80 | 90.07 | 81.78 | 79.95 | 80.86

[t+, t-] = [15, 8] | SVM | kNN | C4.5
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 84.61 | 68.75 | 75.86 | 88.67 | 95.83 | 92.11 | 64.00 | 66.66 | 65.30
Toilet | 92.50 | 71.69 | 80.78 | 90.69 | 75.47 | 82.38 | 84.44 | 77.35 | 80.74
Cooking | 95.45 | 89.69 | 92.48 | 95.23 | 86.59 | 90.71 | 49.07 | 56.70 | 52.71
Eating | 96.07 | 90.18 | 93.03 | 89.28 | 92.02 | 90.63 | 71.83 | 37.42 | 49.20
Watching TV | 94.44 | 94.44 | 94.44 | 86.04 | 100 | 92.49 | 100 | 44.44 | 61.53
Phone | 100 | 93.47 | 96.62 | 90.38 | 100 | 94.94 | 100 | 100 | 100
Dressing | 94.52 | 97.10 | 95.79 | 89.85 | 92.75 | 91.28 | 95.23 | 62.31 | 75.33
Brushing Teeth | 89.74 | 63.63 | 74.46 | 92.45 | 90.90 | 91.67 | 73.80 | 56.36 | 63.91
Drinking | 97.05 | 100 | 98.50 | 81.17 | 100 | 93.15 | 76.19 | 94.11 | 84.21
Enter/Exit | 100 | 100 | 100 | 82.60 | 100 | 90.47 | 100 | 82.35 | 90.32
Average | 94.44 | 86.89 | 90.51 | 89.24 | 93.35 | 91.25 | 81.45 | 67.74 | 73.98
Table 7. S1: non-binary sensors (inertial + location); S2: non-location (binary + inertial); S3: non-inertial (binary + location); S4: only inertial. Precision (Pre), recall (Rec) and F1-score (F1-sc).

[t+, t-] = [8, 13] | S1 | S2
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 91.30 | 89.58 | 90.43 | 85.71 | 91.66 | 88.59
Toilet | 76.47 | 66.03 | 70.98 | 90.90 | 77.35 | 83.58
Cooking | 92.68 | 82.47 | 87.28 | 93.75 | 80.41 | 86.57
Eating | 91.19 | 90.79 | 90.99 | 90.06 | 92.63 | 91.33
Watching TV | 92.30 | 97.22 | 94.70 | 83.33 | 97.22 | 89.74
Phone | 90.19 | 97.82 | 93.85 | 87.03 | 100 | 93.06
Dressing | 71.01 | 79.71 | 75.11 | 88.00 | 97.10 | 92.32
Brushing Teeth | 92.59 | 90.90 | 91.74 | 91.11 | 81.81 | 86.21
Drinking | 72.34 | 100 | 83.95 | 84.61 | 100 | 91.66
Enter/Exit | 73.07 | 100 | 84.44 | 82.60 | 100 | 90.47
Average | 84.34 | 89.45 | 86.82 | 87.71 | 91.82 | 89.72

[t+, t-] = [8, 13] | S3 | S4
Activity | Pre | Rec | F1-sc | Pre | Rec | F1-sc
Sleep | 79.62 | 85.41 | 82.42 | 66.12 | 89.58 | 76.08
Toilet | 76.92 | 75.47 | 76.19 | 47.54 | 60.37 | 53.19
Cooking | 97.77 | 94.84 | 96.28 | 66.66 | 61.85 | 64.17
Eating | 87.57 | 93.86 | 90.60 | 91.91 | 82.20 | 86.78
Watching TV | 82.22 | 97.22 | 89.09 | 50.00 | 52.77 | 51.35
Phone | 96.00 | 95.65 | 95.82 | 87.50 | 91.30 | 89.36
Dressing | 88.31 | 97.10 | 92.49 | 38.59 | 39.13 | 38.86
Brushing Teeth | 70.00 | 85.45 | 76.95 | 95.45 | 78.18 | 85.95
Drinking | 100 | 97.05 | 98.50 | 49.18 | 97.05 | 65.28
Enter/Exit | 90.47 | 100 | 94.99 | 81.25 | 82.35 | 81.79
Average | 86.89 | 92.20 | 89.47 | 67.42 | 73.48 | 70.32
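The Average rows in Tables 3-7 appear to be unweighted (macro) averages of the per-activity scores, so frequent activities such as Eating do not dominate the summary. A minimal sketch reproducing one of them from the SVM precision column of Table 3, [t+, t-] = [0, 1] (differences of a few hundredths elsewhere come from rounding of the per-activity values):

    # Macro (unweighted) average of the per-activity SVM precision values
    # from Table 3, [t+, t-] = [0, 1].
    precisions = [94.73, 95.83, 66.38, 81.43, 100.0, 100.0, 90.00, 97.36, 56.25, 100.0]

    macro_precision = sum(precisions) / len(precisions)
    print(f"macro precision = {macro_precision:.2f}")   # 88.20, matching the Average row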
