Article

Assisted Living System with Adaptive Sensor’s Contribution

by Magdalena Smoleń * and Piotr Augustyniak
Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30 Mickiewicza Avenue, 30-059 Kraków, Poland
* Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5278; https://doi.org/10.3390/s20185278
Submission received: 30 July 2020 / Revised: 9 September 2020 / Accepted: 10 September 2020 / Published: 15 September 2020
(This article belongs to the Special Issue Sensors for Gait, Posture, Cognition and Health Monitoring)

Abstract

Multimodal sensing and data processing have become a common approach in modern assisted living systems. This is widely justified by the complementary properties of sensors based on different sensing paradigms. However, all previous proposals assume data fusion to be made based on fixed criteria. We proved that particular sensors show different performance depending on the subject’s activity and consequently present the concept of an adaptive sensor’s contribution. In the proposed prototype architecture, the sensor information is first unified and then modulated to prefer the most reliable sensors. We also take into consideration the dynamics of the subject’s behavior and propose two algorithms for the adaptation of sensors’ contribution, and discuss their advantages and limitations based on case studies.

1. Introduction

Nowadays, significant population aging is observed in developed countries: the percentage of elderly people in the population is higher than the percentage of young people. It is expected that in these countries the current proportion of about 20% of people aged 60 years and above will increase to 32% by the year 2050. Over the 50 years between 1950 and 2000, the median age increased from 29.0 to 37.3 years, and its continued growth is estimated to reach 45.5 years by 2050 [1].
These figures force the governments of developed countries to take adequate actions. These mainly consist of monitoring health parameters and physical activity for the prevention of diseases and life risks such as falls and frailty resulting from the absence of systematic, individually selected physical exercise. Taking care of people who need special treatment (older people, people with disabilities, or those recovering after injuries, accidents, or serious illnesses) is not limited to satisfying their physiological or material needs, but first of all involves physical, psychological, and social stimulation [2]. As early as in ancient times, not without reason, Aristotle said that “movement is life—life is movement”. Thus, all attempts and efforts towards practically supporting such people by encouraging their psychomotor autonomy are of great importance.
To meet the above needs, technical solutions proposed worldwide aim at the non-invasive, convenient, and secure monitoring of the vital signs of supervised humans [1,3]. Such monitoring is expected to reduce the costs of expensive medical equipment and specialized medical and rehabilitation staff and to assist non-professional individuals in taking continuous care of ill people.
Every approach to an assisted living system raises three issues:
  • Adequacy of the applied sensor set;
  • Intrusion of measurement devices in the subject’s environment and behavior;
  • Violation of the subject’s privacy and vulnerability of the collected data.
With the rising demand for applying new technical solutions in the field of ambient assisted living, scientific works and their outcomes are widely reported [4]. Various approaches to ambient sensor-based monitoring technologies detecting elderly events (activities of daily living and falls) can be found in the current literature, such as non-contact sensor technologies (motion, pressure, video, object contact, and sound sensors), multicomponent technologies (combinations of ambient sensors with wearable sensors), smart technologies, and sensors in robot-based elderly care.
With the aim of non-intrusively monitoring human wellbeing at home, domestic energy supplies can also be disaggregated in order to detect appliance usage by means of machine learning and signal processing [5]. This enables identifying behavioral routines, detecting anomalies in human behavior, and facilitating early intervention.
To support the independent life of seniors and people with chronic conditions and potential health-related emergencies, an Internet of Things (IoT) network can be implemented for continuous monitoring [6]. The solution is based on a network including mobile phones and unknown third-party mobile relays that transmit the data generated by the IoT sensors to the cloud server.
Since the home environment is usually monitored by sensors collecting a vast volume of data, the computational methods should process it in an appropriate time [7]. This implies the need for an event-driven framework in order to detect unusual patterns in such environments.
Another important point is designing and implementing an indoor location and motion tracking system in a smart home setup [8]. The role of such a system is to track human location based on the room in which the supervised person is located at a given time and to recognize the current activity.
Since in real daily life human behavior is not so predictable, a hybrid framework for human behavior modeling could play a great role in managing the changing nature of activity and behavior. A feedback-based mechanism could be used to recursively append new events and behaviors and classify them as normal or abnormal human behavior [9].
Due to the rapid evolvement of Ambient Assisted Living (AAL), there is also the necessity of standardization, uniformities, and facilitation in the system design [10]. The paper presents the latest survey of the AAL system’s models and architectures. The authors investigated the AAL system requirements and implementation challenges, Reference Models (RM) and Reference Architectures (RA) definitions, demands, and specifications.
Simple unimodal approaches propose using a motor signal that adequately describes the state and behavior of the monitored person. This type of measurement allows not only initiating an alarm in a dangerous or unusual situation [11,12] but also specifying the degree [13] and type of daily physical activity [14]. It was also found helpful in the evaluation of rehabilitation progress and in providing biofeedback to support psychological motivation and engagement in physical exercises [15].
In multimodal approaches, the activity sensors use various physical measurements and data fusion methods to provide consistent information about the subject’s activity. This usually raises a question about the adequate usage of particular sensor types according to their advantages in specific scenarios. Studying numerous papers on ambient assisted living, considering personal long-standing experience, and being inspired by the rules of nerve sensitivity modulation in humans, we were motivated to propose a multisensor system with an adaptive contribution of particular sensors to the final behavior classification according to the present and most probable future actions. The scope of the reported research includes the analysis of the performance of the four most commonly applied assisted living sensors (three of them wearable) in six elementary reversible activity types of the human. Based on this analysis, background rules of sensor contribution have been proposed and applied to build an auto-optimizing multimodal surveillance system. The main purpose of the work is to confirm the complementary competencies of the sensors and the benefits resulting from their adaptive contribution in realistic assisted-living scenarios.
Consequently, the main novelty presented in this paper is the concept of a system for the recognition of human daily activity that adapts the process of multimodal data fusion following the criteria of sensitive, selective, non-intrusive, and privacy-protective measurements (Section 3).
To this point, we tested basic behavioral measurements with a custom-built multimodal surveillance system (Section 4), registered and interpreted many different vital signs from supervised people with low-cost and easy-to-use sensors, and compared their sensitivity and selectivity of action recognition. Elements of this system have been developed as the result of previous projects focused on single sensing modalities such as control of the living environment with eye movements [16], motor cortex rhythm [17], facial information [18], and sound recognition [19]. The cooperation of several sensors with different characteristics has been proposed in two other projects dedicated to the supervision of humans during sleep [20,21]. We also contributed to research aimed at the development of sensor networks for supervising the human in motion based on motion patterns from wall-mounted cameras [12,22] or data from wearable devices [23,24]. Finally, two approaches to sensor data fusion from multimodal sensing systems have been proposed in [25,26].
This research, summarized in Section 5, paved the way to propose two algorithms for continuous modulation of the extent of influence from each particular sensor on the final recognition (Section 6). Section 7 presents the case studies, Section 8 contains the discussion, and Section 9 the concluding remarks.

2. State-of-the-Art

Multimodal systems used in home monitoring of people can be considered in the context of simultaneous acquisition of either a variety of human biomedical signals—motion, EEG, ECG, acoustic [20,21,23,24,25], or the same signal (motor) but using different sensing methods. In this paper, we focus on the motor activity of the subject and propose a multimodal motion recording system. Consequently, the review below includes basic approaches to human motion sensing.

2.1. Behavior Sensing Techniques

The scientific work of Suh and Park [27] presented a monitoring system based on motion sensors of various types: inertial (built of a three-axis gyroscope and a three-axis accelerometer, attached to the back of a foot) and pressure (FlexForce A201 from Tekscan, positioned under a heel). Eight ADL states were analyzed: sitting, walking, walking up and down (walking on an uphill and downhill road), running, running up, running down, and standing. For the estimation of the above states, a Hidden Markov Model (HMM) filter was proposed. The 802.15.4 wireless modules were used to identify where the activities were taking place. During the detection of activities such as walking or running, the measurements were performed using inertial navigation algorithms.
In [28], surface electromyography was compared with accelerometry in the detection of eleven Functional Motor Activities (FMAs). The sensors were placed on the limbs and trunk. The feature vectors, extracted in the signal processing, were used as input data to a Multilayer Feedforward Neural Network (MFNN) with two hidden layers (of 44 and 22 neurons). With a classification error of 10% for both types of sensor, the sensitivity was over 80%, and the specificity was over 97%. The sensitivity for signals received from the ACC was almost 5% higher than from the EMG. Further analysis showed that for some activities the classification based on the EMG sensor is much more sensitive. Across persons, the ACC signal was characterized by less diversity than the EMG. These results motivated the authors to carry out preliminary tests with a hybrid system consisting of five ACC and three EMG sensors. With a classification error below 10%, this combination of sensors brought slightly better results than those given separately by the eight-element sets of ACC and EMG.
Hsieh et al. [29] designed and built an independent system (an exoskeleton) for the monitoring and analysis of particular gait events and phases. In order to measure the plantar force distribution and the angles of the hip and knee joints, four FlexiForce force sensors (under the first metatarsal head, fourth metatarsal head, hallux, and heel) and two angle sensors (potentiometers, in the hip and knee joints) were used. The results obtained from the proposed system and the reference systems (Vicon and a dynamometer platform) were similar.
Mizuno et al. [30] introduced a multimodal system for the recognition of ADL activities of monitored persons. The system integrates piezoresistive pressure sensors, a motion detector placed in a watch, a sound sensor in glasses, an ultrasonic sensor (closed in a pen) measuring a distance from a ceiling, and a position sensor (Bluetooth and GPS). The proposed system enables the detection of walking, running, standing, eating, talking, and office work.
In [31], the physical activity of persons during rehabilitation after a stroke was monitored. For this purpose, integrated three-axis accelerometric and one-axis gyroscopic sensors (positioned bilaterally at the subjects’ ankles and wrists) were used. Two accelerometers and a pressure sensor were attached to the cane used by the examined person. Measurement data were recorded during level walking, walking carrying an object, walking on an uneven surface, walking up a ramp, walking down a ramp, walking up a flight of stairs, walking down a flight of stairs, walking over an object, pivoting, and opening a door. Each motor activity was identified by a neural network. For all activities, at an average specificity of 95%, the sensitivity ranged from 75.1% to 97.4%. Then, the use of the cane was studied in the context of particular types of activity, based on the measurement data from the sensors located on the cane.
An extensive review of methods used in ambient assisted living systems was provided in [32].
The variety of methods used for sensing particular behavior patterns (e.g., fall detectors) raises the question of their substitution or complementary use. This issue was studied in our group [33] and several other authors provided comparative results for the efficiency and accuracy of different sensor types in specific everyday living events. These findings paved the way to a concept of multimodal sensing where sensors of different types are used in the following scenarios:
  • Simultaneous: information from both sensors are gathered concurrently and fused together to yield features of higher sensitivity and specificity;
  • Complementary: information from sensors is switched selecting the best sensor accordingly to the changes in recording conditions (e.g., indoor/outdoor).
While the simultaneous scenario has been applied in numerous proposals, the complementary scenario is also worth studying in the pursuit of continuous surveillance of a mobile human. Consequently, the surveillance of physiological parameters may be employed in the healthy population as an essential part of prevention programs, and on the other hand, ill or disabled people will not be confined to their beds or premises without the chance of physical exercise or a social life.
An alternative concept was proposed in [34]. Five sensors: pulse, chest accelerometer, limb accelerometers, camera, and microphone were used in pairs for the detection of seven elementary poses, which in turn contributed to the representation of actual behavior. In that previous work, we used graph representation with node values standing for pose contribution and edge flow representing the activity in time. This approach used complementary premise-fixed and wearable sensors, simple yet reliable algorithms for recognition of elementary poses, and a concise representation of any behavior, even unknown at the setup stage.

2.2. Data Fusion Techniques

One of the most cited works is by Boonma and Suzuki [35], which presents the basics of a biologically-inspired architecture for Sensor Networks (BiSNET) implementing key biological mechanisms such as energy exchange, pheromone emission, replication, and migration. The authors evaluate BiSNET for oil spill detection in the coastal environment. The network is based on agents without a centralized coordination service; thus it is lightweight, scalable, and self-healing. This means the sensor nodes autonomously adapt their states and data transmission according to dynamic changes of conditions, retain their power efficiency as the network size grows (up to 600 nodes), and collectively detect and eliminate false-positive data. Cohen and Edan [36] propose a sensor fusion framework that adaptively selects the most reliable sensor set and the most suitable algorithm. To this point, the algorithm implements measures continuously quantifying sensor performance. The concept has been simulated in software with a grid-map paradigm, logical sensors, and performance measures to allow the random setup of sensors producing multiple data types. The performance was measured as the difference between each particular setup and the final fused map, which has to be known beforehand. The sensor re-configuration procedure is applied once a low-performing sensor is detected.
The system presented by Marti et al. [37] is built with several sensors and a centralized automatic reasoning module that integrates partial descriptions with contextual information of the system and combines the available sensor data to produce a fused output that best satisfies the goals following a given ontology. The system is robust to temporary sensor unavailability and variable reliability of sensor information, and supports redefining its goals on the fly. The proposal has been implemented and tested in ground vehicle navigation.
A comprehensive review of the state-of-the-art techniques of multi-sensor fusion in the area of body sensor networks can be found in [38]. The paper particularly focuses on physical activity recognition and widely discusses the pros and cons of data fusion at the levels of data (suitable for a homogeneous sensor set), features, and decisions (allowing for the combination of data from heterogeneous sensors). Moreover, centralized, distributed, and hybrid approaches to collective decision making are studied. Although a vast literature review is presented, only one example of context-adaptive fusion was provided, in the work by Cook et al. [39].
Köping, Shirahama, and Grzegorzek [40] address the need for a general data fusion framework for a specific smartphone-based multi-sensor body area network. Since the framework is dedicated to a general-purpose surveillance system, it supports a heterogeneous sensor set, and the data fusion is performed on the feature vector level through codebook-based learning. Specific signals are first processed at the sensors with adequate feature extraction algorithms. This approach is also used in the proposed solution; however, we do not follow the static data fusion paradigm.
Very recently, Lin et al. [41] proposed a smart sensor data fusion system targeted at supporting stable, safe, and efficient medical patient-robot interaction. The medical services provided by autonomous robots require real-time monitoring of the state of both users. To this point, various sensor, communication, robot, and data processing technologies have been applied. The proposed hybrid body sensor network architecture is based on multi-sensor fusion employing an interpretable neural network. However, the data integration process seems to be fixed for a given patient. Bazo et al. [42] propose the combination of radiofrequency-based positioning and computer vision-based human pose estimation as a tool for behavioral analysis and activity recognition. The two subsystems have complementary properties, i.e., the radiofrequency localizer solves the occlusions that may occur in the computer vision detector, and the computer vision subsystem increases the accuracy of positions measured with the radiofrequency localizer. This model falls in the larger category of bimodal position and activity sensing systems also developed by other authors for the analysis of shoppers [43,44], pedestrians [45,46], or just human pose recognition [47]. Both subsystems are independent and separately process the data produced by the RF and RGBD sensors. The sensor fusion module uses the tag and skeleton data and iteratively seeks a stable state expressed by maximizing data persistence. The priority of visual or radiofrequency data is used solely to avoid ghosting.
He et al. [48] give a critical review of state-of-the-art solutions for scalable fault-tolerant information fusion in a distributed wireless sensor network. The authors indicate the most challenging areas in sensor applications: different sensing modalities, which enrich robustness but demand more than the simple fusion of homogeneous data, and a wide range of uncertainties in sensing and communication (misdetection, false alarms, unavailability, or delays). The paper also highlights several interesting areas of future improvement, such as mutual calibration and verification of data consistency.
Proper instrumentation and interpretation software enable detecting particular events and classifying human behavior into several categories of risk. Extending this scope leads to a continuous predict-and-verify scenario, where the detection of unexpected behavior provides signs of possible health setbacks [49]. In that previous research, the information on the currently identified pose was utilized neither to improve the sensing performance for the current state nor to prepare the sensing system for the most probable subject pose. A novel concept stemming from our previous studies is presented in this paper. It combines behavior prediction and sensor reconfigurability schemes into a behavior tracking system that continuously adapts the sensor contribution to the present and most probable future activity of the supervised subject.

3. Concept of Adaptive Sensing

The concept of continuous adaptation of the sensors’ contribution in a multimodal system originates from the rules of information propagation in living neural systems. Let us briefly recall two different types of chemical synapses: ionotropic, with a quick and short synaptic response, specialized in fast sensory or executory, excitatory or inhibitory pulse messaging, and metabotropic, with a delayed and long-standing response, being primarily responsible for the modulation of pulse conduction. Thanks to these two complementary types of synaptic junctions, all mammals select the dominating and auxiliary senses they actually use to perceive the surroundings.
Mimicking the above-mentioned natural rule of neural modulation in a technical multisensor assisted living environment requires solving two issues:
  • Determining competence areas and performance hierarchy in a given sensor set;
  • Specifying data stream modulation rules, allowing to adapt each sensor’s contribution to a final decision.
Initially, we assume each sensor to have an exclusive sector of competence area, where no other sensor is applicable, and its complementary sector, where it competes with one or more other sensors. Although the accuracy and reliability are most naturally selected as competence criteria, a variety of other parameters are applicable in a real surveillance system: availability, intrusiveness, energy consumption, etc. Moreover, the cooperation of two sensors in a common competence sector yields valuable information about the coherence of their data streams, which may be useful in other scenarios to assess the quality of measurements relying only on the auxiliary sensor (e.g., when the principal sensor data are unavailable).
In the following sections, we develop this concept by examining the sensor set and sensor-specific preprocessing software (Section 4) in an experimental detection of human motor activities (Section 5). The discussion of the experiment outcome is followed by a proposal of two data stream adaptation algorithms (Section 6) and the presentation of a use case (Section 7). The discussion and future remarks (Section 8) conclude the paper.

4. Experimental Examination of the Sensor Set

4.1. Components of the Sensor Set

All experiments were carried out indoors in a large room (approx. 150 m2) by means of four different motion measurement devices (Figure 1): a wireless (WLAN) EMG biopotential amplifier ME6000 (Mega Electronics) with MegaWin software (B), a wireless foot pressure measurement system ParoLogg with Parologg software (C), an ACC Revitus module with dedicated software (D), and a digital video camera Sony HDR-FX7E (E) [50]. Table 1 lists the sampling frequency of each of the used sensors.
Eight-channel electromyographic signals were recorded from the surface of the muscles of both lower limbs: the vastus lateralis of the quadriceps (1), biceps femoris (2), tibialis anterior (3), and the medial head of the gastrocnemius (4). Time-series foot pressure signals were obtained from insoles with 64 built-in pressure sensors (32 independent sensors per foot insole). A three-dimensional accelerometric signal was recorded with the Revitus module located on the human sternum, while for video measurements a digital camera (720 × 576 pixels) was set up on the left side of the examined person.

4.2. Preprocessing of the Measurement Data

The successive steps of processing the measurement data from each of the sensors B ÷ E were presented and described in detail in [50]. The scheme in Figure 2 illustrates the main parts of the proposed signal processing.
The sensors were used individually and in sets of two to four sensors. The classification of motor activities was based on feature vectors recorded by one to four sensors simultaneously. The feature vectors for each setup are presented in Table 2. In the case of multiple sensors, we simply concatenated the feature vectors of the individual sensors.
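As an illustration only, the sketch below shows how such a combination could look in practice: per-sensor feature vectors are concatenated in a fixed sensor order. The sensor labels and feature dimensions are hypothetical and do not reproduce the exact feature definitions of Table 2.

```python
import numpy as np

def combine_feature_vectors(feature_vectors):
    """Concatenate per-sensor feature vectors into one multi-sensor vector.

    feature_vectors: dict mapping a sensor label ('B', 'C', 'D', 'E')
    to a 1-D array of normalized features for one activity sample.
    """
    # A fixed sensor order keeps the combined vector layout reproducible.
    order = sorted(feature_vectors.keys())
    return np.concatenate([feature_vectors[s] for s in order])

# Hypothetical sample: EMG (B), pressure (C), and accelerometer (D) features.
sample = {
    "B": np.array([0.42, 0.10, 0.77]),  # e.g., per-muscle activation features
    "C": np.array([0.55, 0.31]),        # e.g., fore/rear foot pressure ratio
    "D": np.array([0.12, 0.95, 0.08]),  # e.g., 3-axis acceleration features
}
print(combine_feature_vectors(sample))  # 8-element combined "BCD" vector
```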

4.3. Materials

In the experiment, 20 volunteers performed 12 selected physical activities (1a ÷ 6b, Figure 3), each with about 30 repetitions (19 ÷ 46):
  • Squat (1a) and getting up (1b) from a stand position;
  • Sitting on a chair (2a) and getting up from a chair (2b) to a stand position;
  • Reaching the upper limb forward in the sagittal plane (3a) and returning from reaching (3b), both while standing;
  • Reaching the upper limb upwards in the sagittal plane (4a) and returning from reaching (4b), both while standing;
  • Bending the trunk forward from a standing pose in the sagittal plane (5a) and straightening the trunk (5b);
  • A single step with the right (6a) and the left (6b) lower limb (stance phase).

4.4. Feature Classification Methods

Supervised classification of the selected motor activities was performed with the use of the k-NN (k-Nearest Neighbors) method and the Manhattan metric. The sizes of the learning and test sets were in the ratio of 1:3. With the final presentation of the results in mind, several variables were introduced [50]:
  • Correctness of recognition for all volunteers, Rs_a;
  • Calculation error of Rs_a, denoted Us_a: a measure of the dispersion of results coming from inter-subject differences (weighted standard deviation due to the different numbers of activity repetitions for each volunteer);
  • Percentage of correct recognitions for all activities and all volunteers, Rs_ALL;
  • Calculation error of Rs_ALL, denoted Us_ALL;
  • Percent recognition for all activities, Rs_V;
  • Calculation error of Rs_V, denoted Us_V: a measure of the dispersion of results arising from differences between activities (weighted standard deviation due to the different number of repetitions of each activity for each volunteer);
  • Calculation error of Rs_V, denoted Us_ALL: a measure of the dispersion of the results due to recognitions of the individual activities.
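A minimal sketch of the classification step described in this subsection (k-NN with the Manhattan metric and a 1:3 learning-to-test ratio) is given below. The use of scikit-learn and the synthetic feature data are illustrative assumptions, not the original implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def recognition_correctness(features, labels, k=3, seed=0):
    """k-NN recognition of motor activities with the Manhattan (L1) metric."""
    # Learning and test sets in the ratio 1:3, as stated in Section 4.4.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.75, random_state=seed, stratify=labels)
    classifier = KNeighborsClassifier(n_neighbors=k, metric="manhattan")
    classifier.fit(X_train, y_train)
    # Percentage of correct recognitions in the test set.
    return 100.0 * classifier.score(X_test, y_test)

# Hypothetical data: 240 samples of 8-element feature vectors, 12 activity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 8))
y = np.repeat([f"{i}{s}" for i in range(1, 7) for s in "ab"], 20)
print(f"Correctness of recognition: {recognition_correctness(X, y):.1f}%")
```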

5. Sensor Set Performance Results

Based on the data presented in Table 3 and Table 4 and Figure 4 and Figure 5, we concluded that measurements carried out simultaneously with two, three, or four sensors lead to a significant improvement in recognition reliability.
Matrices of the recognition errors (in %) of the individual motor activities 1a ÷ 6b in the test set for all people together for sets of sensors B ÷ E (BC, BD, BE, CD, CE, DE, BCD, BCE, CDE, BDE, BCDE) are shown in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10 and Table A11.
The experiment results prove that the overall activity recognition performance (right columns of Table 3 and Table 4) can be improved by adapting the sensor set and the features used to the particular action and to the particular subject. This statement is a background of the proposed adaptation algorithms presented in Section 6.

6. Reliability-Driven Sensor Data Fusion

6.1. General Assumptions and System Design

The general architecture of a multisensory environment for assisted living consists of sensors, dedicated feature extraction methods, and modality selectors. The proposed innovation replaces the selector with a modulator using weight coefficients Wk (Figure 6) to prefer the most pertinent features while discriminating against the others. As the sensors use specific signals (muscular, pressure, acceleration, and video), one of the consequences of replacing the feature selector with a modulator is the necessity of a uniform representation of all features. To this point, the feature calculation step unifies the information update rate and normalizes the feature values. The output of each sensor is given as a probability-ordered list of activities {Ai, pi} (see Figure 6 and Figure 7).
Three coefficients are proposed to modulate the influence of each sensor on the final decision about the detected activity. These are listed and briefly explained below.
Hk is an activity-independent coefficient characterizing each sensor’s cost, including hardware, installation, and maintenance, as well as human factors like the acceptance of each particular sensor (cameras at home, accelerometer belt or bands, electrodes, etc.); all these factors are considered constant in time, thus these values need to be evaluated once per subject. In order to efficiently adapt the sensor choice, extreme values of Hk should be avoided.
Rk(A) is an activity-dependent reliability factor; as demonstrated in Section 5, sensors show different performance in the detection of basic daily activities of the human; accordingly, in the system paradigm, Rk(A) is the primary factor adapting the contribution from sensors to the current activity of the monitored subject.
L(n) is a penalty factor that discriminates the influence from sensors depending on their position n in the reliability ranking for determining the activity A by sensor k; the actual penalty factor is calculated based on a coefficient p: low values of p equalize the ranking list, which makes the system work mostly with multiple sensors while avoiding the worst one, whereas a high value of p makes the winner the unique working sensor:
$$ L(n) = n^{-p}, \quad n \in \{1, \dots, 4\}, \quad p \in (0.1, 10) $$
The contribution of each sensor k may be thus determined as:
$$ C_k = H_k \cdot R_k(A) \cdot L(n_k, A) $$
and normalized over the whole set of sensor weighting coefficients:
$$ W_k = \frac{C_k}{\sum_n C_n} $$
According to the currently detected subject’s action, the system automatically adapts the feature set (Table 2) to optimally detect the present action. The optimization criteria may be freely selected from the variables presented in Section 4.4 and used jointly with other attributes (including non-technical ones such as acceptance, usage cost, etc.). To keep the presentation simple, we use the correctness of recognition (given in Table 3). In a real system, besides the subject’s action, the selection of sensors also takes into account constant factors like costs and the availability or acceptance of a sensor by individual subjects.
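The following sketch illustrates the weighting scheme defined by the equations above, under the assumption that the penalty factor has the reconstructed form L(n) = n^(-p). The numerical values of Hk and Rk(A) are hypothetical.

```python
def sensor_weights(H, R, p=2.0):
    """Normalized sensor contributions W_k = C_k / sum(C_n).

    H: dict sensor -> activity-independent cost/acceptance coefficient H_k
    R: dict sensor -> activity-dependent reliability R_k(A) for the current
       (or predicted) activity A, e.g., the correctness values of Table 3.
    p: penalty exponent; a small p keeps several sensors active, a large p
       concentrates the weight on the top-ranked sensor.
    """
    # Rank sensors by reliability (rank n = 1 for the most reliable one).
    ranking = sorted(R, key=R.get, reverse=True)
    rank = {s: i + 1 for i, s in enumerate(ranking)}
    # Raw contributions C_k = H_k * R_k(A) * L(n_k), with L(n) = n ** -p.
    C = {s: H[s] * R[s] * rank[s] ** -p for s in R}
    total = sum(C.values())
    return {s: C[s] / total for s in C}

# Hypothetical coefficients for sensors B (EMG), C (pressure), D (accelerometer).
H = {"B": 0.8, "C": 1.0, "D": 0.9}
R = {"B": 0.99, "C": 0.95, "D": 0.97}  # reliability for the current activity
print(sensor_weights(H, R, p=2.0))     # most weight on B, least on C
```

Raising p toward 10 makes the system effectively single-sensor, whereas p close to 0.1 spreads the weights almost uniformly, as discussed for L(n) above.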
Instead of applying the recognition correctness generalized for all volunteers, an individual table, equivalent to Table 3, may be built for each supervised subject. The personalization of the multisensor environment improves the individual performance (compare columns in Table 4) but requires a set of exercises performed under the supervision of a human assistant who annotates the activities and checks the recognition correctness (or other optimization criteria).
Based on the selected optimization criterion (in our example: the generalized correctness of recognition, Table 3), a hierarchy of feature vectors is built for each detected activity. Taking the action “bending” (5a) as an example, we have the following sensor set hierarchy:
(highest) BDE;
(BD, BE, DE, CDE);
(BCD, BCDE);
(CE, BCE);
CD;
(lowest) BC.
It is noteworthy that BD yields better results than BCD; therefore, the use of more sensors does not always lead to better results, and adding a sensor (C in this case) may degrade the recognition correctness.
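A short sketch of how such an activity-specific hierarchy can be derived from a correctness table is given below. The two- and three-sensor values correspond to the 5a diagonal entries of the matrices in Appendix A; the values for BDE, CDE, and BCDE are placeholders chosen only to reproduce the ordering listed above.

```python
from collections import defaultdict

def sensor_set_hierarchy(correctness):
    """Group sensor sets with equal correctness and order the groups from best to worst."""
    groups = defaultdict(list)
    for sensor_set, value in correctness.items():
        groups[value].append(sensor_set)
    return [sorted(groups[v]) for v in sorted(groups, reverse=True)]

# Correctness of recognition (in %) of activity 5a ("bending") per sensor set.
correctness_5a = {"BC": 97.3, "BD": 99.5, "BE": 99.5, "CD": 98.6, "CE": 98.9,
                  "DE": 99.5, "BCD": 99.3, "BCE": 98.9,
                  "BDE": 99.8, "CDE": 99.5, "BCDE": 99.3}  # BDE/CDE/BCDE: placeholders
for level in sensor_set_hierarchy(correctness_5a):
    print(level)  # ['BDE'], ['BD', 'BE', 'CDE', 'DE'], ..., ['BC']
```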
The modulation of the sensors’ contribution presented above is confirmative: the detection is first roughly made with a possibly non-optimal sensor set and then confirmed with an adapted set. The modification closes the information loop and, like all kinds of feedback, raises a stability issue if the action detected with the adapted features does not match the one initially detected. The other drawback of confirmative detection is that a possible erroneous first detection leads to an even less optimal sensor set and confirms the erroneous decision.

6.2. Stability Condition for Modulated Sensor Set

The stability issue in a sensor set with modulated contribution can be solved by limiting the weight modulation range. Let f be a function A = f(Sk; Wk) assigning a unique subject’s action A to specific sensor outputs Sk modulated by Wk. This means all probability values pi of given activities Ai from sensor k are multiplied by Wk:
$$ A = f(S_k;\ W_k) = \max_{i,k}\left(\{A_i;\ p_{i,k} \cdot W_k\}\right) $$
Let m be a function Wk = m(A) modulating the contributions from sensors Sk to maximize the reliability of the recognition of A. Therefore, the modulator is stable if:
$$ f(S_k, W_k) = f(S_k, m(A)) $$
which means the modulation does not influence the current recognition result.
Since we cannot expect the recognition result to be a linear function of the modulation depth, we propose an iterative try-and-fail algorithm finding the modulation limits. To find the value of Wk between the original Wk1 and the desired target Wk2, the algorithm repeatedly bisects the interval and then selects for further processing the subinterval whose ends yield different actions.
$$ W_k = W_{k1} \quad \text{when} \quad f(S_k, W_k) = f(S_k, W_{k1}) $$
$$ W_k = W_{k2} \quad \text{when} \quad f(S_k, W_k) = f(S_k, W_{k2}) $$
All necessary steps of the modulation algorithm are performed within the subject state sampling interval. New data gathered from the sensors are processed with optimized sensors’ contribution and confirm the detected subject’s action.
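A minimal sketch of the try-and-fail bisection described above follows; f stands for any recognition function of the form A = f(Sk, Wk), the weights are assumed to be dictionaries indexed by sensor, and the tolerance and iteration limit are arbitrary assumptions.

```python
def modulation_limit(f, S, W1, W2, tol=1e-3, max_iter=32):
    """Find the weights closest to the target W2 that still preserve the
    recognition result obtained with the original weights W1.

    f(S, W) returns the recognized activity for sensor outputs S and weights W.
    W1 and W2 are the current and the desired weight dictionaries.
    """
    baseline = f(S, W1)
    if f(S, W2) == baseline:
        return W2                      # full modulation keeps the decision stable
    mix = lambda t: {k: (1 - t) * W1[k] + t * W2[k] for k in W1}
    lo, hi = 0.0, 1.0                  # interpolation factor between W1 and W2
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        # Keep the subinterval whose ends still yield different actions.
        if f(S, mix(mid)) == baseline:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return mix(lo)                     # last weights preserving the recognition
```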
The stability issue can also be avoided by applying a sensor set consistency rule. This rule uses the past sensor set as a reference and requires the new set to be as similar as possible. Continuing the example given in Section 6.1, if “bending” has been detected with the BE sensors and “straightening the trunk” (5b) occurs thereafter, the sensor set hierarchy is the following:
(BD, BDE);
(BE, DE);
(CE, BCD, BCE, CDE, BCDE);
CD;
BC.
Maintaining the BE configuration is preferred over changing to DE, despite their equal performance, for the stability reason.

6.3. Predictive Modulation of Sensors’ Contribution

One may question the purpose of an optimization that only confirms the result of a recognition already made. Fortunately, in most assisted living environments, where the prevention of dangerous events is stressed as a primary goal, the architecture usually includes an artificial intelligence-based system for learning the subject’s habits and detecting unusual behavior as a potential sign of danger. Such systems gather the information on individual habits in the form of a database learned and updated from real past behavior records. Such a database provides activity statistics but, more interestingly, for each given activity the most probable next activity can be determined. We propose to use the information from the individual’s habits database to predict the subject’s upcoming action and adjust the sensors’ contribution accordingly (Figure 7); a minimal sketch of such a database follows the list below. The modulation is still made according to the stability requirements (see Section 6.2), but the sensors’ contribution now adapts to the most probable next subject’s action.
Introducing the habits database in the feedback path has two benefits:
  • Prediction of the upcoming action takes into account multimodal time series instead of single points, which stabilizes the prediction in case of a singular recognition error;
  • Focusing on optimal recognition of the current action makes the system conservative (i.e., expecting a stable status), whereas optimizing for the future action makes it progressive (i.e., awaiting changes of the status).
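A sketch of a first-order habits database (counts of observed transitions between elementary activities) used to predict the next action is given below. The class name, the transition statistics, and the single-step prediction horizon are illustrative assumptions, consistent with the limitation discussed in Section 8.

```python
from collections import Counter, defaultdict

class HabitsDatabase:
    """First-order model of the subject's habits: counts of observed
    activity transitions, learned and updated from past behavior records."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def update(self, previous_activity, current_activity):
        self.transitions[previous_activity][current_activity] += 1

    def predict_next(self, current_activity):
        """Return the most probable next activity, or None if unknown."""
        candidates = self.transitions[current_activity]
        return candidates.most_common(1)[0][0] if candidates else None

# Learning phase: transitions observed in the book-searching scenario (Section 7.1).
habits = HabitsDatabase()
for prev, cur in [("2b", "3a"), ("3a", "3b"), ("3b", "5a"), ("5a", "5b")]:
    habits.update(prev, cur)

predicted = habits.predict_next("3a")   # -> "3b"
# Predictive adaptation: weights tuned for the expected next action, e.g.,
# weights = sensor_weights(H, R_table[predicted], p=2.0)   # see the Section 6.1 sketch
```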

7. Case Studies

7.1. A Compound Action

The proposed sensor contribution modulation technique was analyzed in a previously proposed multisensor environment for assisted living [33]. We also used previously recorded data from 20 volunteers (8 women and 12 men, aged between 22 and 61 years), acting according to predefined realistic scenarios. Table 5 presents an example compound action of searching for a book on a wall-mounted shelf, consisting of elementary poses (defined in Section 4): squatting (1a, 1b), reaching forward (3a, 3b), reaching upward (4a, 4b), and bending (5a, 5b).
Multiple repetitions of patterns in the habits learning phase and the opposed directions of the elementary poses labeled a and b facilitate correct prediction of subsequent poses and the respective adaptation of the sensors’ contribution. In the studied case, no abrupt corrections of the sensor set were necessary; consequently, the changes of the weighting coefficients were linear and not restricted by stability limits. The smoothing influence of prediction on the sensors’ modulation is also revealed in Table 5. Nevertheless, studying the correct operation of the system for unexpected activities and possible errors of the stabilizing algorithm requires recordings of human performance according to purposely designed misbehavior.

7.2. Change of Environment

In this scenario, we assume that a walking subject (alternating activities 6a and 6b) goes outdoors and sensor E (the video system) no longer provides reliable data. The sensor set hierarchies (Table 3) are the following:
  • For 6a:
    • DE;
    • (BCE, BCD, BDE, BCDE);
    • (BD, BE);
    • BC;
    • CDE;
    • (CD, CE).
  • For 6b:
    • (BC, BD, BCD, BCE, BDE, BCDE);
    • CDE;
    • (BE, DE);
    • CD;
    • CE.
The most reliable sets common to both activities are BCE, BCD, and BDE; after the elimination of sensor E data, recognition relying on sensor D (accelerometer) data has equivalent correctness. However, if the subject changes the activity, the equivalence of data from E and D is no longer guaranteed (see 1a in Table 3 as an example). For this reason, sensor B starts to be taken into consideration and the system prefers using BD.
In the case of the opposite event (i.e., the subject goes back indoors), switching back to video-based sensing is not justified by a possible improvement of recognition correctness, but the video sensor will be more comfortable for the subject than the first-choice accelerometer, as there is one sensor less to wear. In case the subject decides to take off the accelerometer belt, the persistent consistency of information from B and E will cause a fast return to the BE sensors instead of DE.
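The set-selection logic of this scenario can be sketched as follows: given an activity-specific hierarchy, pick the best-ranked set that avoids the unavailable sensors and, within a group, prefer the previously used set (the consistency rule of Section 6.2). The hierarchy for activity 6b is taken from the list above; the availability flags are illustrative.

```python
def select_sensor_set(hierarchy, unavailable=frozenset(), previous=None):
    """Pick a sensor set from a ranked hierarchy (best group first).

    hierarchy: list of groups, each group listing equally reliable sensor sets.
    unavailable: sensors that must not appear in the chosen set (e.g., 'E'
                 when the video system stops providing reliable data outdoors).
    previous: previously used set, preferred within a group for consistency.
    """
    for group in hierarchy:
        usable = [s for s in group if not set(s) & set(unavailable)]
        if usable:
            return previous if previous in usable else usable[0]
    return None

hierarchy_6b = [["BC", "BD", "BCD", "BCE", "BDE", "BCDE"],
                ["CDE"], ["BE", "DE"], ["CD"], ["CE"]]
# Outdoors (no video): the system keeps the previously used BD set.
print(select_sensor_set(hierarchy_6b, unavailable={"E"}, previous="BD"))  # -> 'BD'
```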

7.3. Cooperation of Sensors

The cases presented in Sections 7.1 and 7.2 assume the presence or absence of a sensor and do not exploit the full potential offered by the modulation of contributions from multiple sensors to the activity recognition. Here we assume that: (1) all sensors are available but attributed with a quantitative cost variable and (2) the subject performs a compound action. The modulation is then expected to continuously calculate and maximize the correctness-to-cost ratio. To this point, the recognition correctness data given in Table 3 are considered to be discrete samples in a continuous space of possible actions. The system is then expected to detect the actual behavior as composed of simultaneously occurring elementary poses (see [34]), and the pose contributions are taken into account to select the best sensor set. Adopting the data from the experiment (Table 3), we assume the subject is simultaneously getting up from a chair (2b) to a standing position and reaching the upper limb forward in the sagittal plane (3a). In the first part of the action, 2b dominates; in the middle part, the contribution of 3a takes over, and it dominates in the terminal part (e.g., reaching a book on the shelf). To show the modulation process, we assume that only the BCD sensors are present (no video sensor) and only two of them are available at a time. According to the data in Table 3, we have the following hierarchies:
  • For 2b:
    • BC;
    • BD;
    • CD...
  • For 3a:
    • BD;
    • (BC, CD)...
Since the CD sensor pair is the least favorable, we are going to use sensor B (EMG signal) and, in the course of the action, modulate the contributions from C (pressure) and D (acceleration). Sensor C is then successively replaced by D, and BC becomes BD as the action, initially resembling 2b, becomes more and more similar to 3a.
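A sketch of this gradual hand-over is given below, assuming the compound action is represented as a weighted mixture of the elementary poses 2b and 3a and that the per-pose correctness of the two-sensor sets follows the diagonal entries of the matrices in Appendix A; the mixing weights are illustrative.

```python
def blended_reliability(pose_weights, correctness):
    """Reliability of each sensor set for a compound action, computed as the
    pose-weighted average of its per-pose correctness."""
    sensor_sets = next(iter(correctness.values()))
    return {s: sum(w * correctness[p][s] for p, w in pose_weights.items())
            for s in sensor_sets}

# Per-pose correctness (%) of the two-sensor sets built from B, C, and D.
correctness = {"2b": {"BC": 99.5, "BD": 99.2, "CD": 98.2},
               "3a": {"BC": 99.3, "BD": 99.5, "CD": 99.3}}

# As the action evolves, pose 2b hands over to 3a and the preferred set BC becomes BD.
for share_3a in (0.0, 0.5, 1.0):
    rel = blended_reliability({"2b": 1.0 - share_3a, "3a": share_3a}, correctness)
    print(f"3a share {share_3a:.1f}: preferred set {max(rel, key=rel.get)}")
```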

7.4. Unexpected Change of Action

The last presented case assumes that the subject stands up from the chair (2b), reaches forward (3a) and, instead of returning from reaching (3b), which was the most probable action, bends (5a), searching for the book on a lower shelf; then, instead of returning from bending (5b), he or she directly sits back on the chair (2a). Therefore:
  • In action 2b, the sensor priority set is:
    • BC;
    • BD;
    • CD...
  • In action 3a, the sensor priority set is:
    • BD;
    • (BC, CD)...
  • In the foreseen but not actually performed action 3b, the sensor priority set is:
    • (BE, BD, BDE);
    • BCD;
    • (BCE, BCDE);
    • BC...
  • In action 5a performed instead, the sensor priority set is:
    • BDE;
    • (BD, BE, DE, CDE);
    • (BCE, BCDE)...
  • In the foreseen but not actually performed action 5b, the sensor priority set is:
    • (BD, BDE);
    • (BE, DE);
    • (CE, BCD, BCE, CDE, BCDE)...
  • In action 2a the sensor priority set is:
    • (BCE, CDE, BDE, BCDE);
    • (BE, CE, DE, BCD);
    • BD;
    • BC;
    • CD...
The process of sensor selection in this case may be presented as a tree, as in Figure 8.

8. Discussion

The results showed that it is possible to recognize the selected motor activities of everyday life with high reliability using different kinds of individual sensors as well as their 2-, 3-, or 4-element sets. Although some activities are recognized less reliably with some sensors, in such cases it is possible to successfully use the data from other sensors (see the discussion and conclusions in [50]) or from sensor sets for which the outcome is more reliable. As can be observed from Table 3 and Figure 4, recognition with the use of sensor sets very often reaches higher values (94.1–100%) than with the use of the individual sensors, for any type of activity. The same observation can be drawn from Table 4a, Table 4b, and Figure 5, which often show better results for sensor sets (88.3–100%) than for the individual sensors, for any volunteer. There are sometimes opposite cases, but only when an individual sensor (with lower recognition for some activity or some volunteer) is added to a sensor set. In such a situation, this sensor decreases the recognition of the sensor set below that of the other individual sensor (with higher recognition).
To sum up, the individual sensors have complementary scopes of competence, and their mutual exchange depending on the current situation yields better results than the usage of a rigidly defined sensor set.
Studying the sensors’ performance in the recognition of six elementary daily living activities, we confirmed that particular sensors show their optimal recognition accuracy for different movements (Table 3). Consequently, due to the complementary competencies of the sensors, combining information from multiple different sensors is expected to give more reliable recognition. Unfortunately, in compound actions, the true recognition falls into the border area or actually moves from the area of competence of one sensor to another. This remark was the foundation of the presented concept, design, and prototype of an assisted living system with an adaptive sensor contribution.
Based on the comparison of the accuracy of activity recognition by four different assisted living sensors, we built activity-specific sensor priority lists and proposed a multimodal surveillance system with an adaptive sensor contribution. We used the setup as a model of a sensorized environment in which multiple sensors of possibly different paradigms and performance cooperate in the surveillance of a human. We assumed that sensors not only differ in reliability depending on the subject’s action but may also give consistent or contradictory results. We proved this assumption in experiments showing that adding sensors may decrease the correctness of recognition (Table 3).
Since the sensor data differ in form and refresh rate, sensor-specific data processing was applied first to provide data in a uniform format before fusion. The sensor-independent format was a list of activities ordered by descending detection probability. Activity data matching and fusion are made on the list level, which also allows for continuous adaptation of the sensors’ contribution to the final result of the network. This proposal has been inspired by the neuromodulatory mechanism, which, although far more complicated, also leads to modulation of the information flow from the senses to the brain.
Biomimetic modulation of the sensors’ contribution in a multisensory assisted living environment puts forward their advantages according to the subject’s behavior. Being aware of the limitations present in any human behavior model, we took selected daily living activities as samples in a continuous space of possible behaviors and tried to represent the actual behavior with a measure of similarity to these primitives [34]. In this paper, we showed that sensors, due to the specificity of their working principles, are somewhat ‘specialized’ in the recognition of particular poses or activities. Consequently, if a compound activity is represented by a set of elementary poses of varying contributions (see Section 7.3), the surveillance system, besides other limitations (see Section 7.2), should optimize the flow of sensor data seamlessly.
Regarding the related works, the main novelty of this paper is the ongoing adaptation of the sensor set depending on the subject’s behavior. Since the range of activities is virtually unlimited and the prediction of the most probable future action is uncertain, optimization rules had to be proposed; they were implemented as:
  • Sensor cost—to balance the sensor usage;
  • Penalty factor—to balance between multimodal and single mode-switching system;
  • Stability check—to maintain decision on detected activity while modifying sensors’ contribution.
Since human activity is a dynamic process, the contribution of the sensors needs to be considered as time-varying. To this point, in the design of the multimodal assisted living system with adaptive sensor contribution, we proposed to consider conservative and predictive adaptation. The conservative adaptation assumes the sensor contribution is adapted after the activity recognition and, in case different results are issued by the adapted system, raises a stability issue, which can be solved in several ways (e.g., see Section 6.2). The predictive adaptation requires the use of the subject’s habits database, which has to be created and trained, but which already contains a personalized factor. Moreover, the prediction of behavior is never 100% accurate, which needs to be taken into consideration in the design of the adaptation rules.
We used four different sensors with quite good performance in the given experimental setup. However, one should consider more difficult or unstable conditions (e.g., lighting) and simplified sensors (e.g., when energy consumption is taken into consideration). The maximum error the system will make in activity recognition is expected to equal the error of the second-best sensor.
Conservative adaptation in the two-sensor mode (p > 1) may give erroneous recognition, which (according to Table 3) may be inaccurate in 5.9% of cases (activity 4b, sensors C and E). The stability check in conservative adaptation prevents the system from changing the recognition decision based on an inappropriate change of sensors. The proposed new sensor set is applied in a subsequent sensing step, and if the previous activity is maintained and the new settings are appropriate, a more accurate recognition will be issued.
In predictive adaptation, unexpected behavior may affect the sensor set adaptation, making the newly proposed set inappropriate. In this case, again, one should consider that a less accurate sensor will be proposed and the overall reliability will decrease. Unlike in the conservative case, the subject’s history (represented in Figure 7 as the “habits” database) helps to avoid the adaptation mismatch. However, it is worth noting that we used only a single-step prediction (i.e., the next most probable activity was taken as the background for the sensor adaptation), and future studies are necessary to potentially extend the prediction range to a tree of n future activities.
Our studies presented here were performed with data recorded from specific sensors (including custom sensor-specific software, Figure 2) in the given test environment described by Smoleń [50]. With different sensors, particular findings (such as Table 3) may differ significantly, but the general rule of building sensor set hierarchies is universal and worth following up by other scientists developing multimodal human activity sensing systems. Therefore, we found it more reasonable to present the system operation in four case studies than to give a quantitative evaluation of setup-specific activity detection efficiency.
The building of such a prototype system combining wearable and infrastructural sensors is the aim of our next project. Also, the question of initial personalization of recognition and data flow rules needs to be considered again in the context of a working prototype.

9. Conclusions

Based on the analysis of the performance of four different assisted living sensors in six elementary reversible activity types of the human, we proposed analysis rules with an adaptive sensor contribution. We applied them to the design of an auto-optimizing multimodal surveillance system and studied its behavior in true-to-life assisted-living scenarios, including compound activities. We pointed out the possible advantages of the complementary competences of the sensors and confirmed the benefits resulting from their adaptive contribution.
The building of such a prototype system combining wearable and infrastructural sensors is the aim of our next project. Also, the question of initial personalization of the recognition and data flow rules needs to be considered again in the context of a working prototype.

Author Contributions

Conceptualization, P.A. and M.S.; methodology, M.S.; software, M.S.; validation, M.S. and P.A.; formal analysis, P.A.; investigation, M.S. and P.A.; resources, P.A.; data curation, M.S.; writing—original draft preparation, M.S. and P.A.; writing—review and editing, M.S. and P.A.; visualization, M.S.; supervision, P.A.; project administration, M.S.; funding acquisition, P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This scientific work is supported by the AGH University of Science and Technology in the year 2020 as a research project No. 16.16.120.773.

Acknowledgments

The authors wish to thank Adam Gacek and Paweł Kowalski from the Institute of Medical Technology and Equipment (ITAM) in Zabrze, Poland, for providing the prototype of the Revitus measurement device and software free of charge. The authors also wish to thank Beata Przybyłowska-Stanek, the director of the Gym Center “Basen AGH” in Kraków, Poland, for providing the gym area for the experiments free of charge.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Matrices of the recognition errors (in %) of the individual motor activities 1a ÷ 6b in the test set for all people together for sets of sensors B ÷ E (BC, BD, BE, CD, CE, DE, BCD, BCE, CDE, BDE, BCDE) are shown in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10 and Table A11. The first column (1a ÷ 6b) lists the types of activities recognized by the classifier, and the first row describes the performed activities. The percentage of correct recognition for the individual activities is therefore placed on the matrix diagonal.
Table A1. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor set BC.
BC (recognized activity in rows, performed activity in columns)
        1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a    99.2     0.0     0.0     0.0     0.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b     0.0   100.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a     0.5     0.0    99.2     0.5     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b     0.0     0.0     0.3    99.5     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a     0.0     0.0     0.0     0.0    99.3     0.2     0.0     0.0     0.0     0.0     0.0     0.0
3b     0.0     0.0     0.0     0.0     0.0    99.0     0.0     0.0     0.0     0.0     0.0     0.0
4a     0.3     0.0     0.0     0.0     0.0     0.2    98.4     1.9     2.1     0.5     0.3     0.0
4b     0.0     0.0     0.3     0.0     0.0     0.2     1.6    98.1     0.0     2.1     0.0     0.3
5a     0.0     0.0     0.0     0.0     0.5     0.0     0.0     0.0    97.3     0.0     0.0     0.0
5b     0.0     0.0     0.3     0.0     0.0     0.2     0.0     0.0     0.7    97.5     0.0     0.0
6a     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0    96.0     1.9
6b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.7    97.9
Table A2. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BD.
BD (recognized activity in rows, performed activity in columns)
        1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a    98.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b     0.0   100.0     0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a     1.0     0.0    99.5     0.8     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b     0.0     0.0     0.0    99.2     0.0     0.0     0.0     0.0     0.2     0.0     0.0     0.0
3a     0.0     0.0     0.0     0.0    99.5     0.0     0.2     0.0     0.0     0.0     0.0     0.0
3b     0.0     0.0     0.0     0.0     0.0    99.8     0.0     0.2     0.0     0.0     0.0     0.0
4a     0.3     0.0     0.0     0.0     0.2     0.0    99.3     0.2     0.2     0.0     0.0     0.0
4b     0.0     0.0     0.0     0.0     0.2     0.2     0.5    99.5     0.0     0.0     0.3     0.0
5a     0.5     0.0     0.3     0.0     0.0     0.0     0.0     0.0    99.5     0.0     0.0     0.0
5b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0   100.0     0.0     0.0
6a     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0    96.3     2.1
6b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.5    97.9
Table A3. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BE.
BE (recognized activity in rows, performed activity in columns)
        1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a    99.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b     0.0   100.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a     0.5     0.0    99.7     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b     0.0     0.0     0.0   100.0     0.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a     0.0     0.0     0.0     0.0    99.5     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b     0.0     0.0     0.0     0.0     0.0    99.8     0.0     0.0     0.0     0.0     0.0     0.0
4a     0.0     0.0     0.0     0.0     0.2     0.2   100.0     3.3     0.5     0.0     0.0     0.0
4b     0.5     0.0     0.0     0.0     0.0     0.0     0.0    96.5     0.0     0.2     0.0     0.0
5a     0.0     0.0     0.3     0.0     0.0     0.0     0.0     0.0    99.5     0.0     0.0     0.0
5b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0    99.8     0.0     0.0
6a     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0    96.3     2.9
6b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.7    97.1
Table A4. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor CD.
CD (recognized activity in rows, performed activity in columns)
        1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a    96.4     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0     0.0
1b     0.5    99.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0
2a     0.3     0.3    97.5     1.5     0.0     0.0     0.0     0.0     0.5     0.0     0.0     0.0
2b     0.5     0.0     2.0    98.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a     0.0     0.0     0.0     0.0    99.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b     0.0     0.0     0.0     0.0     0.0    98.3     0.0     0.0     0.0     0.0     0.0     0.0
4a     0.3     0.0     0.3     0.0     0.7     0.0    97.9     2.1     0.5     0.0     0.0     0.0
4b     0.5     0.3     0.3     0.0     0.0     1.7     2.1    97.9     0.0     0.5     0.0     0.3
5a     1.3     0.0     0.0     0.3     0.0     0.0     0.0     0.0    98.6     0.0     0.0     0.0
5b     0.3     0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.2    99.3     0.0     0.0
6a     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0    95.2     2.9
6b     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     4.8    96.8
Table A5. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor CE. Rows: recognized activity; columns: performed activity.
CE      1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.5    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0     0.0
1b      0.0     99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.0     0.0     99.7    0.0     0.0     0.2     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     97.3    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.5     0.0     0.0     0.0     0.5     0.5     98.8    5.9     0.5     0.0     0.0     0.0
4b      0.0     0.3     0.3     0.0     0.0     2.0     1.2     94.1    0.0     0.5     0.0     0.0
5a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     98.9    0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.5     99.5    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     95.2    3.5
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     4.8     96.5
Table A6. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor DE. Rows: recognized activity; columns: performed activity.
DE      1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0
2a      0.0     0.0     99.7    0.0     0.0     0.2     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.8    0.0     0.7     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     98.0    0.0     0.2     0.0     0.0     0.0     0.0
4a      0.0     0.0     0.0     0.0     0.0     0.0     99.1    3.5     0.2     0.0     0.0     0.0
4b      0.0     0.0     0.0     0.0     0.2     1.7     0.2     96.2    0.0     0.0     0.0     0.0
5a      0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0
5b      0.0     0.0     0.3     0.0     0.0     0.0     0.0     0.0     0.0     99.8    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     97.6    2.9
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     2.4     97.1
Table A7. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BCD. Rows: recognized activity; columns: performed activity.
BCD     1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.0    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.3     0.0     99.7    0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.3     0.0     0.0     0.0     0.5     0.0     98.6    0.2     0.7     0.0     0.0     0.0
4b      0.0     0.0     0.3     0.0     0.0     0.5     1.4     99.8    0.0     0.5     0.3     0.3
5a      0.5     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.3    0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.5    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     96.5    1.9
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.2     97.9
Table A8. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BCE. Rows: recognized activity; columns: performed activity.
BCE     1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      98.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.3     0.0     100.0   0.0     0.2     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.8    0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     99.3    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.5     0.0     0.0     0.3     0.0     0.2     99.5    2.8     0.7     0.0     0.0     0.0
4b      0.3     0.0     0.0     0.0     0.0     0.5     0.5     97.2    0.2     0.5     0.0     0.0
5a      0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0     98.9    0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     99.5    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     96.5    2.1
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.5     97.9
Table A9. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor CDE. Rows: recognized activity; columns: performed activity.
CDE     1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.3     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     98.0    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.0     0.0     0.0     0.0     0.5     0.2     99.3    4.0     0.2     0.0     0.0     0.0
4b      0.0     0.0     0.0     0.0     0.0     1.7     0.7     96.0    0.0     0.5     0.0     0.0
5a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.2     99.5    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     95.7    2.7
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     4.3     97.3
Table A10. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BDE. Rows: recognized activity; columns: performed activity.
BDE     1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.2    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.3     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2b      0.0     0.0     0.0     99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     99.8    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.3     0.0     0.0     0.3     0.0     0.0     100.0   1.6     0.0     0.0     0.0     0.0
4b      0.3     0.0     0.0     0.0     0.0     0.2     0.0     98.4    0.0     0.0     0.0     0.0
5a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     100.0   0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     100.0   0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     96.5    2.1
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.5     97.9
Table A11. Matrix of recognition errors (in %) of 12 motor activities 1a ÷ 6b in test set for all volunteers together for sensor BCDE. Rows: recognized activity; columns: performed activity.
BCDE    1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b
1a      99.2    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
1b      0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
2a      0.0     0.0     100.0   0.0     0.0     0.0     0.0     0.0     0.2     0.0     0.0     0.0
2b      0.0     0.0     0.0     99.7    0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
3a      0.0     0.0     0.0     0.0     99.5    0.0     0.0     0.0     0.0     0.0     0.0     0.0
3b      0.0     0.0     0.0     0.0     0.0     99.3    0.0     0.0     0.0     0.0     0.0     0.0
4a      0.3     0.0     0.0     0.3     0.5     0.2     99.8    1.9     0.5     0.0     0.0     0.0
4b      0.3     0.0     0.0     0.0     0.0     0.5     0.2     98.1    0.0     0.5     0.0     0.0
5a      0.3     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.3    0.0     0.0     0.0
5b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     99.5    0.0     0.0
6a      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     96.5    2.1
6b      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     3.5     97.9

References

  1. Hijaz, F.; Afzal, N.; Ahmad, T.; Hasan, O. Survey of fall detection and daily activity monitoring techniques. In Proceedings of the Conference on Information and Emerging Technologies (ICIET), Karachi, Pakistan, 14–16 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–6. [Google Scholar] [CrossRef]
  2. Smoleń, M. Kawiarnia naukowa—Jej rola i perspektywy rozwoju w odniesieniu do inżynierii biomedycznej. Inżynieria Biomed. 2008, 14, 68–74. [Google Scholar]
  3. Rashidi, P.; Mihailidis, A. A survey on ambient-assisted living tools for older adults. IEEE J. Biomed. Health Inf. 2013, 17, 579–590. [Google Scholar] [CrossRef] [PubMed]
  4. Uddin, Z.; Khaksar, W.; Torresen, J. Ambient sensors for elderly care and independent living: A survey. Sensors 2018, 18, 2027. [Google Scholar] [CrossRef] [Green Version]
  5. Chalmers, C.; Fergus, P.; Montanez, C.C.; Sikdar, S.; Ball, F.; Kendall, B. Detecting activities of daily living and routine behaviours in dementia patients living alone using smart meter load disaggregation. IEEE Trans. Emerg. Top. Comput. 2020, 1. [Google Scholar] [CrossRef]
  6. Sandeepa, C.; Moremada, C.; Dissanayaka, N.; Gamage, T.; Liyanage, M. An emergency situation detection system for ambient assisted living. In Proceedings of the IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, 7–11 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
  7. Larcher, L.; Ströele, V.; Dantas, M.; Bauer, M. Event-driven framework for detecting unusual patterns in AAL environments. In Proceedings of the International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 309–314. [Google Scholar] [CrossRef]
  8. Tabbakha, N.E.; Tan, W.; Ooi, C. Indoor location and motion tracking system for elderly assisted living home. In Proceedings of the International Conference on Robotics, Automation and Sciences (ICORAS), Melaka, Malaysia, 27–29 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4. [Google Scholar] [CrossRef]
  9. Patel, A.; Shah, J. Real-time human behaviour monitoring using hybrid ambient assisted living framework. J. Reliab. Intell. Environ. 2020, 6, 95–106. [Google Scholar] [CrossRef]
  10. El Murabet, A.; Abtoy, A.; Touhafi, A.; Tahiri, A. Ambient assisted living system’s models and architectures: A survey of the state of the art. J. King Saud Univ. Comput. Inf. Sci. 2020, 32, 1–10. [Google Scholar] [CrossRef]
  11. Bourke, A.K.; Lyons, G.M. A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor. Med. Eng. Phys. 2008, 30, 84–90. [Google Scholar] [CrossRef] [PubMed]
  12. Mikrut, Z.; Pleciak, P.; Smoleń, M. Combining pattern matching and optical flow methods in home care vision system. In Information Technologies in Biomedicine. Lecture Notes in Computer Science; Piętka, E., Kawa, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7339, pp. 537–548. [Google Scholar] [CrossRef]
  13. Atallah, L.; Lo, B.; Ali, R.; King, R.; Yang, G. Real-time activity classification using ambient and wearable sensors. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 1031–1039. [Google Scholar] [CrossRef]
  14. Fleury, A.; Noury, N.; Vacher, M. Supervised classification of activities of daily living in health smart homes using SVM. In Proceedings of the Annual International Conference of the IEEE EMBS, Minneapolis, MN, USA, 1 January 2009; pp. 6099–6102. [Google Scholar] [CrossRef] [Green Version]
  15. Phinyomark, A.; Chujit, G.; Phukpattaranont, P.; Limsakul, C.; Hu, H. A preliminary study assessing time-domain EMG features of classifying exercises in preventing falls in the elderly. In Proceedings of the International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, Phetchaburi, Thailand, 16–18 May 2012; pp. 1–4. [Google Scholar] [CrossRef]
  16. Lewandowski, T.; Augustyniak, P. The system of a touchfree personal computer navigation by using the information on the human eye movements. In Proceedings of the Human System Interactions, Rzeszów, Poland, 13–15 May 2010; pp. 674–677. [Google Scholar]
  17. Broniec, A. Control of cursor movement based on EEG motor cortex rhythm using autoregressive spectral analysis. Automatyka 2011, 15, 321–329. [Google Scholar]
  18. Przybyło, J. Vision based facial action recognition system for people with disabilities. In Information Technologies in Biomedicine. Lecture Notes in Computer Science; Piętka, E., Kawa, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7339, pp. 577–588. [Google Scholar] [CrossRef]
  19. Czopek, K. Cardiac activity based on acoustic signal properties. Acta Phys. Pol. 2012, 121, A42–A45. [Google Scholar] [CrossRef]
  20. Smoleń, M.; Czopek, K.; Augustyniak, P. Non-invasive sensors based human state in nightlong sleep analysis for home-care. In Proceedings of the Computing in Cardiology, Belfast, UK, 26–29 September 2010; Volume 37, pp. 45–48. [Google Scholar]
  21. Smoleń, M.; Czopek, K.; Augustyniak, P. Sleep evaluation device for home-care. In Information Technologies in Biomedicine. Advances in Intelligent and Soft Computing; Piȩtka, E., Kawa, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 69, pp. 367–378. [Google Scholar] [CrossRef]
  22. Mikrut, Z.; Smoleń, M. A neural network approach to recognition of the selected human motion patterns. Automatyka 2011, 15, 535–543. [Google Scholar]
  23. Kańtoch, E.; Smoleń, M.; Augustyniak, P.; Kowalski, P. Wireless body area network system based on ECG and accelerometer pattern. In Proceedings of the Computing in Cardiology, Hangzhou, China, 18–21 September 2011; pp. 245–248. [Google Scholar]
  24. Smoleń, M.; Kańtoch, E.; Augustyniak, P.; Kowalski, P. Wearable patient home monitoring based on ECG and ACC sensors. In Proceedings of the European Conference of the International Federation for Medical and Biological Engineering, Budapest, Hungary, 14–18 September 2011; Jobbágy, Á., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 37, pp. 941–944. [Google Scholar] [CrossRef]
  25. Augustyniak, P.; Smoleń, M.; Broniec, A.; Chodak, J. Data integration in multimodal home care surveillance and communication system. In Information Technologies in Biomedicine. Advances in Intelligent and Soft Computing; Piȩtka, E., Kawa, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 69, pp. 391–402. [Google Scholar] [CrossRef]
  26. Kańtoch, E.; Augustyniak, P. Wearable mobile network as an integrated part of assisted living technologies. In Information Technologies in Biomedicine. Lecture Notes in Computer Science; Piętka, E., Kawa, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7339, pp. 549–559. [Google Scholar] [CrossRef]
  27. Suh, Y.; Park, S. Monitoring of basic daily activities with inertial sensors and wireless modules. In Proceedings of the IEEE Sensors Applications Symposium, Limerick, Ireland, 15–18 April 2009; pp. 200–205. [Google Scholar]
  28. Sherrill, D.; Bonato, P.; Luca, C. A neural network approach to monitor motor activities. In Proceedings of the Second Joint EMBS/BMES Conference, Houston, TX, USA, 23–26 October 2002; pp. 52–53. [Google Scholar]
  29. Hsieh, T.; Tsai, A.; Chang, C.; Ho, K.; Hsu, W.; Lin, T. A wearable walking monitoring system for gait analysis. In Proceedings of the Annual International Conference of the IEEE EMBS, San Diego, CA, USA, 28 August–1 September 2012; pp. 6772–6775. [Google Scholar] [CrossRef]
  30. Mizuno, H.; Nagai, H.; Sasaki, K.; Hosaka, H.; Sugimoto, C.; Khalil, K.; Tatsuta, S. Wearable sensor system for human behavior recognition. In Proceedings of the International Solid-State Sensors, Actuators and Microsystems Conference, Lyon, France, 10–14 June 2007; pp. 435–438. [Google Scholar]
  31. Boissy, P.; Hester, T.; Sherrill, D.; Corriveau, H.; Bonato, P. Monitoring mobility assistive device use in post-stroke patients. In Proceedings of the Annual International Conference of the IEEE EMBS, Lyon, France, 22–26 August 2007; pp. 4372–4375. [Google Scholar]
  32. Gomaa, W.; Elbasiony, R. Comparative study of different approaches for modeling and analysis of activities of daily living. In Proceedings of the International Conference on Informatics and Systems, Cairo, Egypt, 10–12 December 2018. [Google Scholar] [CrossRef]
  33. Augustyniak, P.; Smoleń, M.; Mikrut, Z.; Kańtoch, E. Seamless tracing of human behavior using complementary wearable and house-embedded sensors. Sensors 2014, 14, 7831–7856. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Augustyniak, P.; Ślusarczyk, G. Graph-based representation of behavior in detection and prediction of daily living activities. Comput. Biol. Med. 2018, 95, 261–270. [Google Scholar] [CrossRef] [PubMed]
  35. Boonma, P.; Suzuki, J. An adaptive, scalable and self-healing sensor network architecture for autonomous coastal environmental monitoring. In Proceedings of the IEEE Conference on Technologies For Homeland Security, Woburn, MA, USA, 16–17 May 2007; pp. 1–8. [Google Scholar]
  36. Cohen, O.; Edan, Y. A sensor fusion framework for online sensor and algorithm selection. Robot. Auton. Syst. 2008, 56, 762–776. [Google Scholar] [CrossRef]
  37. Marti, E.; Garcia, J.; Molina, J.M. Adaptive sensor fusion architecture through ontology modeling and automatic reasoning. In Proceedings of the International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015. [Google Scholar]
  38. Gravina, R.; Alinia, P.; Ghasemzadeh, H.; Fortino, G. Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges. Inf. Fusion 2017, 35, 68–80. [Google Scholar] [CrossRef]
  39. Cook, D.; Feuz, K.D.; Krishnan, N.C. Transfer learning for activity recognition: A survey. Knowl. Inf. Syst. 2012, 36, 537–556. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Koping, L.; Shirahama, K.; Grzegorzek, M. A general framework for sensor-based human activity recognition. Comput. Biol. Med. 2018, 95, 248–260. [Google Scholar] [CrossRef]
  41. Lin, K.; Li, Y.; Sun, J.; Zhou, D.; Zhang, Q. Multi-sensor fusion for body sensor network in medical human–robot interaction scenario. Inf. Fusion 2020, 57, 15–26. [Google Scholar] [CrossRef]
  42. Bazo, R.; Reis, E.; Seewald, L.A.; Rodrigues, V.F.; Da Costa, C.A.; Gonzaga, L.; Antunes, R.S.; Righi, R.D.R.; Maier, A.; Eskofier, B.M.; et al. Baptizo: A sensor fusion based model for tracking the identity of human poses. Inf. Fusion 2020, 62, 1–13. [Google Scholar] [CrossRef]
  43. Liciotti, D.; Contigiani, M.; Frontoni, E.; Mancini, A.; Zingaretti, P.; Placidi, V. Shopper analytics: A customer activity recognition system using a distributed RGB-D camera network. In Video Analytics for Audience Measurement. Lecture Notes in Computer Science; Distante, C., Battiato, S., Cavallaro, A., Eds.; Springer: Cham, Switzerland, 2014; Volume 8811, pp. 146–157. [Google Scholar] [CrossRef] [Green Version]
  44. Sturari, M.; Liciotti, D.; Pierdicca, R.; Frontoni, E.; Mancini, A.; Contigiani, M.; Zingaretti, P. Robust and affordable retail customer profiling by vision and radio beacon sensor fusion. Pattern Recognit. Lett. 2016, 81, 30–40. [Google Scholar] [CrossRef]
  45. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef] [Green Version]
  46. Teng, J.; Zhang, B.; Zhu, J.; Li, X.; Xuan, D.; Zheng, Y.F. Ev-Loc: Integrating electronic and visual signals for accurate localization. IEEE/ACM Trans. Netw. 2014, 22, 1285–1296. [Google Scholar] [CrossRef] [Green Version]
  47. Xiu, Y.; Li, J.; Wang, H.; Fang, Y.; Lu, C. Pose flow: Efficient online pose tracking. In Proceedings of the British Machine Vision Conference, Newcastle-upon-Tyne, UK, 3–6 September 2018. [Google Scholar]
  48. He, S.; Shin, H.-S.; Xu, S.; Tsourdos, A. Distributed estimation over a low-cost sensor network: A Review of state-of-the-art. Inf. Fusion 2020, 54, 21–43. [Google Scholar] [CrossRef]
  49. Augustyniak, P. Sensorized elements of a typical household in behavioral studies and prediction of a health setback. In Proceedings of the International Conference on Human System Interfaces, Warsaw, Poland, 25–27 June 2015. [Google Scholar] [CrossRef]
  50. Smoleń, M. Consistency of outputs of the selected motion acquisition methods for human activity recognition. J. Healthc. Eng. 2019. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Motion signals measurement devices: (a) sensor B—a wireless (WLAN) EMG biopotentials amplifier ME6000 (Mega Electronics) with MegaWin software; (b) sensor C—a wireless foot pressure measurement system ParoLogg with Parologg software; (c) sensor D—ACC Revitus module with a dedicated software; (d) sensor E—a digital video camera Sony HDR-FX7E.
Figure 2. Steps of processing the measurement data from sensors: (a) B; (b) C; (c) D; (d) E. L—left, R—right.
Figure 3. Selected video frames (first, middle, and last, respectively) presenting the investigated movements (activities 1a ÷ 6b). Transitions between the specific successive activities are named 1a1b ÷ 5a5b.
Figure 4. Chart of recognition Rs_a (in %) for activities 1a ÷ 6b (numbers 1 ÷ 12) and for sensors B ÷ E and their sets (numbers 1 ÷ 15).
Figure 5. Chart of recognition Rs_V (in %) for volunteers V1 ÷ V20 (numbers 1 ÷ 20) and for sensors B ÷ E and their sets (numbers 1 ÷ 15).
Figure 6. Flow of sensors’ data modulator adapting a multimodal assisted living system to presently detected behavior.
Figure 7. Flow of sensors’ data modulator adapting a multimodal assisted living system to most probable future behavior.
Figure 8. Activities performed in example 7.4 (activities are listed in circles and sensors in grayed rectangles) with unexpected activities and system decisions: at (5a) sensor E is included additionally and at (2a) the usage of sensor E is maintained.
Sensors 20 05278 g008
Table 1. Sampling frequency (Fs) for sensors B ÷ E.
           B (EMG)   C (Pressure)   D (Accelerometer)   E (Video)
Fs (Hz)    200       100            100                 25
Table 2. Vectors for classification of the selected motor activities.
Sensor             L (vector length)   Vector structure
B (EMG)            320                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4]
C (pressure)       240                 [L1 L2 L3 P1 P2 P3]
D (accelerometer)  120                 [X Y Z]
E (video)          320                 [B1 B2 B3 B4 B5 B6 B7 B8]
BC                 560                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4 L1 L2 L3 P1 P2 P3]
BD                 440                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4 X Y Z]
BE                 640                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4 B1 B2 B3 B4 B5 B6 B7 B8]
CD                 360                 [L1 L2 L3 P1 P2 P3 X Y Z]
CE                 560                 [L1 L2 L3 P1 P2 P3 B1 B2 B3 B4 B5 B6 B7 B8]
DE                 440                 [X Y Z B1 B2 B3 B4 B5 B6 B7 B8]
BCD                680                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4 L1 L2 L3 P1 P2 P3 X Y Z]
BCE                880                 [EL1 EL2 EL3 EL4 EP1 EP2 EP3 EP4 L1 L2 L3 P1 P2 P3 B1 B2 B3 B4 B5 B6 B7 B8]
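The vector lengths for the sensor sets in Table 2 follow directly from concatenating the single-sensor vectors (e.g., 320 + 240 = 560 for BC, and 320 + 240 + 120 = 680 for BCD). A minimal sketch of this composition is given below; the per-sensor arrays are hypothetical placeholders with the lengths listed above, not the actual feature extraction.

```python
import numpy as np

# Hypothetical per-sensor feature vectors with the lengths listed in Table 2.
FEATURES = {
    "B": np.zeros(320),  # EMG channels EL1..EL4, EP1..EP4
    "C": np.zeros(240),  # foot pressure L1..L3, P1..P3
    "D": np.zeros(120),  # accelerometer axes X, Y, Z
    "E": np.zeros(320),  # video blocks B1..B8
}

def compose(sensor_set: str) -> np.ndarray:
    """Concatenate single-sensor vectors for a sensor set, e.g. 'BC' or 'BCD'."""
    return np.concatenate([FEATURES[s] for s in sensor_set])

assert compose("BC").size == 560
assert compose("BCD").size == 680
assert compose("BCE").size == 880
```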
Table 3. Table of recognition Rs_a (in %) of motion activities 1a ÷ 6b in the test set for all volunteers together for sensors B ÷ E and their sets. Calculation errors Us_a are given in the rows labeled Us_a.
Sensor         1a      1b      2a      2b      3a      3b      4a      4b      5a      5b      6a      6b      ALL
B      Rs_a    96.9    100.0   99.5    98.5    99.0    99.3    99.1    98.6    97.9    98.2    96.0    97.6    98.4
       Us_a    3.9     0.0     1.5     3.7     3.4     1.8     3.2     3.1     3.7     3.5     12.7    7.5     1.8
C      Rs_a    90.8    91.8    95.7    96.7    94.9    92.4    95.3    93.6    87.0    88.2    94.9    97.1    93.1
       Us_a    10.0    10.8    12.4    15.5    6.9     6.4     10.2    7.6     15.0    14.6    14.1    8.2     5.7
D      Rs_a    95.2    97.2    95.5    94.2    96.6    95.1    98.4    97.6    97.9    99.3    96.5    96.0    96.7
       Us_a    17.2    11.2    15.8    15.4    6.6     8.8     3.2     4.1     2.8     1.9     12.3    12.1    5.2
E      Rs_a    99.7    99.5    95.5    95.5    99.3    97.6    96.0    79.8    99.3    99.3    91.7    92.3    95.5
       Us_a    1.1     1.6     17.7    18.8    1.8     4.4     7.9     25.0    1.7     1.7     9.1     8.4     4.4
BC     Rs_a    99.2    100.0   99.2    99.5    99.3    99.0    98.4    98.1    97.3    97.5    96.0    97.9    98.4
       Us_a    1.9     0.0     1.9     3.1     2.3     2.0     5.2     5.2     4.5     5.9     12.3    5.7     2.3
BD     Rs_a    98.2    100.0   99.5    99.2    99.5    99.8    99.3    99.5    99.5    100.0   96.3    97.9    99.1
       Us_a    3.4     0.0     1.5     2.4     2.2     1.1     2.2     1.4     1.4     0.0     12.8    7.0     1.6
BE     Rs_a    99.0    100.0   99.7    100.0   99.5    99.8    100.0   96.5    99.5    99.8    96.3    97.1    99.0
       Us_a    2.1     0.0     1.1     0.0     1.5     1.1     0.0     11.4    1.4     0.9     11.7    10.1    2.1
CD     Rs_a    96.4    99.2    97.5    98.2    99.3    98.3    97.9    97.9    98.6    99.3    95.2    96.8    97.9
       Us_a    8.6     2.7     8.3     6.3     1.8     4.8     5.4     5.5     2.5     2.0     14.0    8.5     3.1
CE     Rs_a    99.5    99.7    99.7    100.0   99.5    97.3    98.8    94.1    98.9    99.5    95.2    96.5    98.3
       Us_a    1.6     1.1     1.1     0.0     1.5     4.9     2.6     14.9    2.8     1.8     12.5    10.5    2.5
DE     Rs_a    99.7    100.0   99.7    100.0   99.8    98.0    99.1    96.2    99.5    99.8    97.6    97.1    98.9
       Us_a    1.1     0.0     1.1     0.0     1.1     4.0     2.5     11.4    1.4     0.8     5.8     6.4     1.2
BCD    Rs_a    99.0    100.0   99.7    99.7    99.5    99.5    98.6    99.8    99.3    99.5    96.5    97.9    99.1
       Us_a    2.6     0.0     1.2     1.5     1.5     1.5     4.2     1.0     1.7     1.2     12.0    5.7     1.5
BCE    Rs_a    98.7    100.0   100.0   99.7    99.8    99.3    99.5    97.2    98.9    99.5    96.5    97.9    98.9
       Us_a    2.8     0.0     0.0     1.1     1.1     1.8     1.2     9.2     2.5     1.8     10.6    5.7     1.6
CDE    Rs_a    99.7    100.0   100.0   100.0   99.5    98.0    99.3    96.0    99.5    99.5    95.7    97.3    98.7
       Us_a    1.1     0.0     0.0     0.0     1.5     4.6     1.6     14.3    1.3     1.8     11.1    7.3     2.1
BDE    Rs_a    99.2    100.0   100.0   99.7    100.0   99.8    100.0   98.4    100.0   100.0   96.5    97.9    99.3
       Us_a    1.8     0.0     0.0     1.1     0.0     1.1     0.0     7.1     0.0     0.0     11.7    7.0     1.4
BCDE   Rs_a    99.2    100.0   100.0   99.7    99.5    99.3    99.8    98.1    99.3    99.5    96.5    97.9    99.1
       Us_a    1.8     0.0     0.0     1.1     1.5     1.8     1.0     7.1     1.7     1.8     10.6    6.1     1.6
Table 4. Table of recognition Rs_V (in %) of all motor activities for volunteers V1 ÷ V20 in the test set for sensors B ÷ E and their sets. Calculation errors Us_V are given in the rows labeled Us_V.
Sensor         V1      V2      V3      V4      V5      V6      V7      V8      V9      V10     V11     V12     V13     V14     V15     V16     V17     V18     V19     V20     ALL
B      Rs_V    99.1    94.8    98.4    98.3    98.7    99.1    99.6    97.7    98.3    98.0    98.4    99.6    92.6    100.0   99.6    99.1    97.3    99.6    99.6    100.0   98.4
       Us_V    2.0     7.8     2.4     4.6     3.1     1.9     1.5     4.1     3.3     4.5     3.7     1.4     15.8    0.0     1.4     2.9     4.3     1.1     1.1     0.0     1.1
C      Rs_V    98.2    94.8    98.0    97.9    92.3    97.4    97.3    94.9    95.7    91.2    94.4    88.0    85.3    97.1    87.3    88.9    75.2    94.7    93.5    98.4    93.1
       Us_V    2.5     5.6     3.3     2.2     11.6    5.9     5.0     7.8     4.1     12.5    7.2     17.2    15.9    3.3     13.4    13.6    22.7    13.1    11.8    4.4     3.3
D      Rs_V    96.9    93.1    99.2    96.9    98.3    98.7    100.0   76.2    93.1    97.6    99.2    99.2    91.1    98.2    100.0   98.3    94.6    100.0   100.0   100.0   96.7
       Us_V    3.9     11.1    1.9     6.5     2.5     3.2     0.0     33.8    11.3    4.1     2.7     1.9     17.2    2.9     0.0     2.5     9.1     0.0     0.0     0.0     1.5
E      Rs_V    97.4    95.3    95.9    88.8    97.9    99.1    99.6    97.7    94.4    93.6    99.2    81.0    96.5    94.9    99.2    98.3    92.3    97.3    97.7    94.8    95.5
       Us_V    6.4     9.7     7.4     23.4    3.6     1.9     2.0     3.1     10.5    14.7    1.9     31.8    5.3     16.2    1.9     3.9     16.1    4.7     4.5     11.2    5.8
BC     Rs_V    99.6    96.6    99.2    99.7    99.6    99.1    99.6    98.6    98.7    96.0    98.8    99.2    91.1    100.0   99.2    100.0   94.1    100.0   100.0   99.6    98.4
       Us_V    1.6     5.9     1.9     1.1     1.5     1.9     1.5     3.3     3.1     8.8     3.6     2.0     15.1    0.0     1.9     0.0     6.8     0.0     0.0     1.4     1.1
BD     Rs_V    99.1    95.7    99.2    99.0    100.0   100.0   100.0   98.6    99.1    99.2    99.6    99.6    93.8    100.0   100.0   100.0   99.1    100.0   100.0   100.0   99.1
       Us_V    1.7     8.4     1.9     2.9     0.0     0.0     0.0     2.4     2.9     2.6     1.4     1.4     15.4    0.0     0.0     0.0     3.0     0.0     0.0     0.0     1.1
BE     Rs_V    99.1    96.1    99.6    99.7    100.0   99.6    100.0   99.1    99.6    95.6    100.0   100.0   91.9    100.0   100.0   100.0   99.5    99.6    100.0   100.0   99.0
       Us_V    2.0     8.2     1.4     1.1     0.0     1.4     0.0     2.1     1.5     14.4    0.0     0.0     16.4    0.0     0.0     0.0     1.5     1.1     0.0     0.0     1.4
CD     Rs_V    99.6    97.0    99.6    99.3    98.7    97.4    99.6    92.1    99.1    97.2    100.0   97.1    93.0    100.0   100.0   100.0   88.3    99.6    99.2    100.0   97.9
       Us_V    1.6     6.9     1.4     1.7     2.3     6.3     1.5     12.5    1.9     6.8     0.0     5.8     15.8    0.0     0.0     0.0     13.3    1.1     1.9     0.0     1.2
CE     Rs_V    99.1    97.0    99.6    99.7    100.0   100.0   100.0   99.5    98.7    94.0    98.8    97.1    94.2    98.5    99.6    98.7    90.5    99.6    100.0   100.0   98.3
       Us_V    2.0     6.0     1.4     1.0     0.0     0.0     0.0     1.6     3.1     18.3    3.0     6.2     11.4    4.2     1.4     3.1     16.6    1.1     0.0     0.0     2.0
DE     Rs_V    99.1    96.1    99.2    98.6    99.1    100.0   100.0   99.1    98.7    95.6    100.0   99.2    98.8    98.5    99.6    99.6    99.5    100.0   100.0   97.2    98.9
       Us_V    1.8     8.4     1.9     4.1     2.1     0.0     0.0     2.3     2.3     14.4    0.0     2.9     2.1     4.1     1.4     1.5     1.4     0.0     0.0     6.5     1.3
BCD    Rs_V    99.6    97.4    99.6    99.7    100.0   99.6    100.0   99.1    98.7    98.0    100.0   100.0   94.6    100.0   100.0   100.0   95.9    100.0   100.0   100.0   99.1
       Us_V    1.6     6.0     1.4     1.1     0.0     1.5     0.0     2.1     3.1     5.3     0.0     0.0     14.2    0.0     0.0     0.0     4.8     0.0     0.0     0.0     1.0
BCE    Rs_V    99.6    96.6    99.6    99.3    99.6    100.0   99.6    99.5    98.7    96.0    100.0   99.6    94.6    99.6    100.0   100.0   96.4    100.0   100.0   100.0   98.9
       Us_V    1.5     8.2     1.4     1.5     1.5     0.0     1.5     1.6     3.1     11.8    0.0     1.4     11.4    1.4     0.0     0.0     5.7     0.0     0.0     0.0     1.1
CDE    Rs_V    100.0   97.0    99.6    99.7    100.0   100.0   100.0   99.5    99.6    94.0    100.0   97.9    94.6    100.0   100.0   99.6    93.7    99.6    100.0   100.0   98.7
       Us_V    0.0     7.4     1.4     1.0     0.0     0.0     0.0     1.6     1.4     18.3    0.0     5.8     11.4    0.0     0.0     1.5     9.8     1.1     0.0     0.0     1.6
BDE    Rs_V    100.0   97.0    99.6    99.7    100.0   100.0   100.0   99.5    99.6    97.2    100.0   100.0   94.6    100.0   100.0   100.0   99.5    100.0   100.0   100.0   99.3
       Us_V    0.0     8.3     1.4     1.3     0.0     0.0     0.0     1.6     1.5     9.2     0.0     0.0     14.3    0.0     0.0     0.0     1.5     0.0     0.0     0.0     1.1
BCDE   Rs_V    100.0   96.1    99.6    99.7    100.0   100.0   100.0   99.5    99.1    96.8    100.0   99.6    94.6    100.0   100.0   100.0   96.8    100.0   100.0   100.0   99.1
       Us_V    0.0     9.3     1.4     1.3     0.0     0.0     0.0     1.6     1.9     9.1     0.0     1.4     11.4    0.0     0.0     0.0     3.5     0.0     0.0     0.0     1.0
Table 5. Contribution from the sensors (in %) to the compound action recognition "searching on the shelf".
Time (s)   Pose   B (EMG)   C (Pressure)   D (Accelerometer)   E (Video)
0          4a     60.0      7.5            7.5                 25.0
1.4        4b     42.5      7.5            35.0                17.5
1.9        3a     42.5      5.0            42.5                10.0
3.2        3b     42.5      5.0            42.5                10.0
3.7        5a     17.5      5.0            60.0                17.5
4.9        5b     17.5      5.0            60.0                17.5
5.1        1a     42.5      5.0            42.5                10.0
6.7        1b     60.0      5.0            25.0                10.0
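Table 5 can be read as a set of per-pose sensor weights. The sketch below is a minimal illustration under our own assumptions, not the system's implementation: it shows how such percentage contributions could be normalized and used to modulate unified sensor outputs before fusion. The numeric weights are copied from the first row of Table 5 (pose 4a); the unified output values are hypothetical.

```python
# Percentage contributions for pose 4a, taken from the first row of Table 5.
CONTRIBUTIONS_4A = {"B": 60.0, "C": 7.5, "D": 7.5, "E": 25.0}

def modulate(outputs: dict, contributions: dict) -> float:
    """Weighted combination of unified sensor outputs (each scaled to 0..1)."""
    total = sum(contributions.values())
    return sum(outputs[s] * contributions[s] / total for s in outputs)

# Hypothetical unified outputs reported by sensors B..E for the current window.
unified = {"B": 0.92, "C": 0.40, "D": 0.75, "E": 0.85}
print(round(modulate(unified, CONTRIBUTIONS_4A), 3))  # -> 0.851
```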
