Article

Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization

1 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Faculty of Computing and AI, Air University, E-9, Islamabad 44000, Pakistan
3 Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia
4 Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia
5 Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Saudi Arabia
6 Cognitive Systems Lab, University of Bremen, 28359 Bremen, Germany
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(10), 3032; https://doi.org/10.3390/s24103032
Submission received: 20 March 2024 / Revised: 26 April 2024 / Accepted: 29 April 2024 / Published: 10 May 2024
(This article belongs to the Special Issue Sensors for Human Activity Recognition II)

Abstract

The domain of human locomotion identification through smartphone sensors is witnessing rapid expansion within the realm of research. This domain boasts significant potential across various sectors, including healthcare, sports, security systems, home automation, and real-time location tracking. Despite the considerable volume of existing research, the greater portion of it has concentrated primarily on locomotion activities; comparatively less emphasis has been placed on the recognition of human localization patterns. In the current study, we introduce a system that recognizes both human physical and location-based patterns using the capabilities of smartphone sensors. Our goal is to develop a system that can accurately identify different human physical and localization activities, such as walking, running, jumping, indoor, and outdoor activities. To achieve this, we preprocess the raw sensor data using a Butterworth filter for inertial sensors and a median filter for Global Positioning System (GPS) data, and then apply the Hamming windowing technique to segment the filtered data. We then extract features from the raw inertial and GPS sensors and select relevant features using the variance threshold feature selection method. The Extrasensory dataset exhibits an imbalanced number of samples for certain activities; to address this issue, a permutation-based data augmentation technique is employed. The augmented features are optimized using the Yeo–Johnson power transformation algorithm before being sent to a multi-layer perceptron for classification. We evaluate our system using the K-fold cross-validation technique. The datasets used in this study are the Extrasensory and Sussex Huawei Locomotion (SHL) datasets, which contain both physical and localization activities. Our experiments demonstrate that our system achieves high accuracy, with 96% and 94% on Extrasensory and SHL for physical activities and 94% and 91% on Extrasensory and SHL for location-based activities, outperforming previous state-of-the-art methods in recognizing both types of activities.

1. Introduction

Human locomotion activity recognition is a rapidly emerging field that analyzes and classifies many types of physical and locomotion activity using smartphone sensors [1]. Smartphones are an ideal platform for this research because they are widely available, equipped with a variety of sensors [2], and commonly carried by individuals. In modern smartphones, a rich array of sensors, including accelerometers, gyroscopes, magnetometers, GPS, light sensors, barometers, and microphones, is utilized for comprehensive human locomotion activity recognition. These sensors allow for the accurate detection and analysis of user movements and environmental interactions, enhancing the performance of activity recognition systems. Human activity recognition using smartphone sensors has a wide range of applications in various fields. The most common include fitness and health monitoring; elderly care, where the movements of elderly individuals are monitored to detect falls or other accidents [3], providing peace of mind for caregivers and family members and triggering an alert if assistance is needed; transportation; environmental monitoring; sports; marketing, where the movements of individuals in retail environments are tracked and analyzed to better understand consumer behavior and inform decisions about product placement and advertising; and safety and security [4].
Researchers have used different machine-learning approaches to recognize human activity through smartphone sensors. These approaches have several advantages, including the ability to recognize complex patterns and classify multiple activities simultaneously. Additionally, machine learning algorithms can improve over time as they are fed more data, making them more accurate and reliable [5,6]. However, there are also some disadvantages to using machine learning for activity recognition. One of the main disadvantages is the requirement for large amounts of labeled data to train the algorithms effectively. This can be time-consuming and expensive, especially when dealing with numerous activities or users. Another potential issue is the sensitivity of the algorithms to changes in sensor placement or sensor data quality. If the sensor is not positioned correctly or the data are noisy, the accuracy of the machine-learning algorithm can be significantly reduced [7].
The field of HAR is impeded by a variety of challenges, including sensor heterogeneity across devices [8]. Different smartphone models are equipped with various types of sensors with differing specifications and capabilities; this variation complicates the development of universal applications that perform consistently across all devices. Another big challenge researchers face is noise in the raw sensor data [9]. Smartphone sensors frequently capture data contaminated with noise, which can significantly affect accuracy. The precision of sensors varies widely between devices, often depending on the hardware quality and sensor calibration. Additionally, sensor sampling frequencies vary [10]: the rate at which sensors sample data can fluctuate depending on other processes running on the device. This inconsistency can lead to challenges in capturing the real-time, high-resolution data necessary for precise motion recognition. Similarly, sensor characteristics can change over time due to aging hardware or software updates, leading to data drift. This phenomenon can degrade the performance of motion recognition algorithms that were trained on data from newer or different sensors. Continuous sensor data collection is resource-intensive, consuming significant battery life and processing power. Managing these resources efficiently while maintaining accurate motion detection is a major challenge. Moreover, different users may carry their smartphones in various positions (e.g., pocket, hand, or bag) [11], leading to vastly different data profiles. Algorithms must be robust enough to handle these variations to ensure accurate motion recognition. Another challenge is user privacy [12]: collecting and analyzing sensor data raises privacy concerns, as such data can inadvertently reveal sensitive information about a user’s location and activities. Ensuring data privacy and security while collecting and processing sensor data is critical. Lastly, the complexity of human activities and the limited spatial coverage of sensors add to the difficulty of capturing a comprehensive range of motions, highlighting the multifaceted nature of these technological hurdles. In this study, we developed a system that recognizes human movements along with location and, ultimately, can provide valuable insights into an individual’s physical and localization activity levels. We processed the raw sensor data for physical and location-based activity separately. In the first stage, we denoised the data using the Butterworth filter [13] and median filter [14]. In the second stage, we segmented the long sequence signal data into small pieces using the Hamming windowing technique [15]. After that, features were extracted. The variance threshold method [16,17] is used for feature selection, the selected feature vectors are balanced using a data augmentation technique, and the augmented data are then optimized before classification using the Yeo–Johnson power transformation technique. Finally, physical and localization activity classification is performed by a multi-layer perceptron (MLP). The contributions of this research are described below:
  • Implemented separate denoising filters for inertial and GPS sensors, significantly enhancing data cleanliness and accuracy.
  • Developed a robust methodology for concurrent feature extraction from human locomotion and localization data, improving processing efficiency and reliability.
  • Established dedicated processing streams for localization and locomotion activities, allowing for more precise activity recognition by reducing computational interference.
  • Applied a novel data augmentation technique to substantially increase the dataset size of activity samples, enhancing the robustness and generalizability of the recognition algorithms.
  • Utilized an advanced feature optimization algorithm to adjust the feature vector distribution towards normality, significantly improving the accuracy of activity recognition.
The remainder of this paper is organized as follows: Section 2 reviews related work in the field of human activity recognition. Section 3 presents the materials and methods, including noise removal, signal windowing and segmentation, feature extraction, feature selection, and optimization. Section 4 describes the experimental setup and presents the results. Section 5 analyzes the computational complexity of the proposed system, and Section 6 presents the discussion and limitations. Finally, Section 7 concludes the research study.

2. Related Work

Scientists have explored different approaches to analyzing human motion, both indoors and outdoors [18]. These approaches can be divided into two main groups: sensor-based methods and vision-based methods [19,20]. Sensor-based methods make use of sensors such as accelerometers, gyroscopes, GPS, light sensors, microphones, magnetometers, mechanomyography, ECG (electrocardiogram), EMG (electromyogram), and geomagnetic sensors. Vision-based methods, on the other hand, use cameras such as the Microsoft Kinect [21], Intel RealSense, video cameras, and dynamic vision sensors [22].
The related work by Hsu et al. [23] involved a method of human activity recognition that utilized a pair of wearable inertial sensors. One sensor was mounted on the subject’s wrist, while the other was mounted on the ankle. The collected sensor data, including accelerations and angular velocities, were wirelessly sent to a central computer for processing. Using a nonparametric weighted feature extraction algorithm and principal component analysis, the system could differentiate between various activities. While the method offered the advantages of portability and wireless data transmission, it was limited by the use of only two sensors, potentially missing out on capturing the full spectrum of human movement and requiring a dependable wireless connection to function effectively. To improve upon this, the proposed solution in the research includes the deployment of additional sensors on various body parts such as the torso, backpack, hand, and pocket to provide a more comprehensive capture of human motion. Additionally, integrating sensors embedded within smartphones eliminates the need for a continuous wireless connection, facilitating the recognition of human activities and locations with enhanced reliability and context awareness. In the research by Abdel-Basset et al. [24], a novel approach to human activity recognition is introduced where sensor data are treated as visual information. Their method considers human activity recognition as analogous to image classification, converting sensor data into an RGB image format for processing. This enables the use of multiscale hierarchical feature extraction and a channel-wise attention mechanism to classify activities. The strength of this system lies in its innovative interpretation of sensor data, which allows for the application of image classification techniques. However, its reliance on small datasets for training raises questions about how well it can be generalized to real-world situations. The uncertainty regarding the computational and space complexity also poses concerns about the system’s scalability. The proposed enhancement of this system involves training on larger and more diverse datasets to enhance the system’s capacity to generalize across various scenarios. By ensuring that the system is robust when handling larger datasets, the solution seeks to maintain computational efficiency while scaling up to more complex applications. Konak et al.’s [25] method for evaluating human activity recognition performance employs accelerometer data, which are categorized into three distinct classes based on motion, orientation, and rotation. The system utilizes these categories either individually or in combination to assess activity recognition performance and employs a variety of classification techniques, such as decision trees, naive Bayes, and random forests. The primary limitation of this method is its training on a dataset derived from only 10 subjects, raising concerns about its generalizability to a broader population. Additionally, the study relies on common machine learning classifiers, which may not be as effective as more advanced models. In contrast, the proposed model in the research under discussion utilizes the Extrasensory and Huawei datasets, which include data from more subjects, thus providing a more robust and generalizable system that achieves state-of-the-art performance. The research by Chetty et al.
[26] presents an innovative data analytic method for human activity recognition using smartphone inertial sensors, utilizing machine learning classifiers like random forests, ensemble learning, and lazy learning. The system distinguishes itself through its feature ranking process informed by information theory, which optimizes for the most relevant features in activity recognition. Despite the innovative approach, the system’s reliance on a single dataset for training is its primary limitation. This constraint could hinder the model’s ability to generalize to unobserved scenarios and potentially lead to degraded performance in real-world applications. The proposed solution to these limitations involves a system trained on two benchmark datasets that encompass a wider variety of activities. It includes the Extrasensory dataset, which is notable for being collected in uncontrolled, real-world environments without restrictions on participant behavior. This approach is intended to enhance the system’s reliability and applicability to a broader range of real-life situations, thereby making it a more robust solution for activity recognition.
The study by Ehtisham-ul-Haq et al. [27] introduced an innovative context recognition framework that interprets human activity by leveraging physical activity recognition (PAR) and learning patterns from various behavioral scenarios. Their system correlated fourteen different behaviors, including phone positions, with five daily living activities, using random forest and other machine learning classifiers for performance evaluation. The strengths of this method are its use of human activity recognition to infer context and its integration of additional information such as the subject’s location and secondary activities. Nonetheless, the system’s primary reliance on accelerometer data makes it less adept at recognizing complex activities, and it lacks more comprehensive data sources like GPS and microphone inputs for enhanced location estimation. The proposed enhancement to this framework includes a more integrated sensor approach, utilizing not only the smartphone’s accelerometer, magnetometer, and gyroscope, but also the smartwatch’s accelerometer and compass, along with smartphone GPS and microphone data. This integration promises increased robustness and accuracy in activity recognition and localization.

2.1. Activity Recognition Using Inertial Sensors

Smartphone sensors are widely used for activity recognition. By using these sensors, human activity can be easily detected. The most significant feature of the smartphone is its portability, which means it can be carried easily anywhere. In [28], different supervised machine learning algorithms were used to classify human activity. The classification precision was tested using 5-fold cross-validation, and a good accuracy rate was achieved for all classifiers. Ref. [29] presented trends in human activity recognition. The survey discussed different solutions proposed for each phase in human activity classification, i.e., preprocessing, windowing and segmentation, feature extraction, and classification. All the solutions are analyzed, and their weaknesses and strengths are described. The paper also presented how to evaluate the quality of a classifier. In [30], a new method was proposed for recognizing human activity with a multi-class SVM using integer parameters. The method used in the research consumes less memory, processor time, and power. The authors in [31] analyzed the performance of two classifiers, KNN and clustered KNN. The classifiers were evaluated with an online activity recognition system running on the Android operating system. The system supports online training and classification by collecting data from a single sensor, the accelerometer. They started with KNN and then clustered it; the main rationale for clustering was to reduce the computational complexity of KNN. The major goal of the article was to examine the performance of the algorithms on the phone with limited training data and memory.

2.2. Activity Recognition Using Computer Vision and Image Processing Techniques

As previously noted, identifying human activity via smartphone is a convenient method because the smartphone is a portable device that can be readily carried anywhere. The use of an RGB camera for activity recognition [32,33] has some limits and constraints. To monitor a person’s activity with a camera, for example, the individual must be within the camera’s field of view. Nighttime (changing lighting conditions) is the second most prevalent challenge that researchers face while tracking human activities through a webcam. However, advancements in multimedia tools have mitigated these issues to some extent. To recognize human movement from 2D/3D videos and images, many computer-vision and image-processing algorithms have been utilized [34,35]. Researchers can recognize human activities more easily by employing techniques such as segmentation [36], filtering [37], saliency map detection [38], skeleton extraction [39], and so on. The work described in [40] investigated human activity recognition using a depth camera. The camera first acquired the skeleton data, and then several spatial and temporal features were retrieved. A CNN (Convolutional Neural Network) was employed to classify the activities. The issue with a depth camera is that noise can occur, leading to misclassification.

3. Proposed System Methodology

Data were collected from various raw sensors. The data were denoised in the first step using the Butterworth filter [40]. The Hamming windowing and segmentation approach [41] is then applied. During the third step, we worked with the data from the inertial and GPS sensors. We picked out various features for each of them. To determine the significance of features, we employed the Variance Threshold for feature selection. We noted that certain activities in the Extrasensory dataset had a limited number of samples. To address this issue, we applied data augmentation and subsequently optimized using the Yeo–Johnson power transformation technique before conducting activity recognition. Finally, the activity recognition was performed by the MLP. The flow diagram of the suggested human physical and localization activity model is shown in Figure 1.

3.1. Signal Denoising

There is a risk of noise during data collection. Noise is the undesirable portion of data that we do not need to process. Unwanted data processing lengthens and complicates model training. It also reduces the learning model’s performance. Therefore, noise removal is crucial in data preprocessing. For this reason, we employed a noise-removal filter.
To get rid of the unwanted disruptions that can happen when collecting data, we applied a low-pass Butterworth filter [42,43,44,45] to the inertial sensors. This filter is used in signal processing and aims to make the frequency response as even as possible in the part where it passes signals through. That is why it is called the maximally flat magnitude filter. Equation (1) depicts the general frequency response of the Butterworth filter.
$\left|H(j\omega)\right| = \dfrac{1}{\sqrt{1 + \left(\frac{\omega}{\omega_c}\right)^{2n}}}$ (1)
where n is the order of the filter, ω is the passband frequency (also known as the operational frequency), ωc is the filter’s cut-off frequency, and j is the imaginary unit, used to denote the complex frequency. In Figure 2, the original vs. filtered signal for the inertial sensor is shown. Similarly, for processing our GPS data and to enhance its clarity, we used the median filter [46,47,48,49], a robust nonlinear digital filtering technique. The median filter operates by moving a sliding window across each data point in our GPS sequence. Within this window, the data values are arranged in ascending order. The central value, or median, of this sorted list is then used to replace the current data point. Mathematically, for a given signal, S, and a window of size n, at each point xi in the signal, we consider:
$W = \left\{\, x_{i-(n-1)/2},\ \ldots,\ x_{i},\ \ldots,\ x_{i+(n-1)/2} \,\right\}$ (2)
The median of this set W becomes the new value at xi in the filtered signal. In our experiment on the GPS data, we selected a window size of 3. The selected window size ensures that the filter assesses each data point while considering itself and one neighboring point on either side. This particular size strikes a balance by being large enough to effectively suppress noise, yet compact enough to preserve important details and transitions in the GPS data. It is important to note, however, that the GPS signal contained less noise than the inertial sensor signals.
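To make the two denoising steps concrete, the following sketch applies a low-pass Butterworth filter to an inertial axis and a window-3 median filter to a GPS sequence using SciPy. The sampling rate, cut-off frequency, and filter order are illustrative assumptions, not values reported in this paper; only the median window size of 3 follows the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def denoise_inertial(signal, fs=50.0, cutoff=3.0, order=4):
    """Low-pass Butterworth filter (assumed fs, cutoff, and order for illustration)."""
    b, a = butter(order, cutoff, btype="low", fs=fs)   # design the filter
    return filtfilt(b, a, signal)                      # zero-phase filtering

def denoise_gps(gps_sequence, kernel_size=3):
    """Median filter with the window size of 3 used in the paper."""
    return medfilt(gps_sequence, kernel_size=kernel_size)

# Toy usage with synthetic data standing in for one accelerometer axis and a latitude track
acc_x = np.sin(np.linspace(0, 10, 500)) + 0.1 * np.random.randn(500)
lat = 24.7 + np.cumsum(0.0001 * np.random.randn(200))
acc_x_clean = denoise_inertial(acc_x)
lat_clean = denoise_gps(lat)
```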

3.2. Signal Windowing and Segmentation

Segmentation is an important concept used in signal processing. The concept of windowing and segmentation [50,51,52,53,54] involves dividing signals into smaller windows instead of processing complete or long sequences. The advantage of windowing is that it allows for easier data processing, reducing complexity and processing time. This makes it more manageable for machine learning or deep learning models to process. We turned to the Hamming windows technique to modulate the signal. Hamming windows, known for their capacity to reduce spectral leakage [55] during frequency domain operations like the Fourier Transform, effectively tackle the side effects that often arise during such analyses. The principle behind the Hamming window is a simple point-wise multiplication of the signal with the window function, which curtails the signal values at both the start and end of a segment. This modulation ensures a minimized side lobe in the frequency response, which is crucial for accurate spectral analyses.
Mathematically, the Hamming window is represented in Equation (3).
$w(n) = 0.54 - 0.46\cos\!\left(\dfrac{2\pi n}{N-1}\right)$ (3)
where w(n) represents the window function, N signifies the total points in the window, and n spans from 0 to N − 1. We utilized a window size of 5 s [56,57]. After generating the Hamming window values based on the aforementioned formula, we multiplied each point in our data segments with its corresponding Hamming value. In Figure 3, we visualized the results through distinct line plots, with each of the five windows represented in a unique color.
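A minimal sketch of this step is given below: the signal is split into 5 s segments and each segment is multiplied point-wise by a Hamming window as in Equation (3). The 50 Hz sampling rate and the non-overlapping segmentation are assumptions for illustration.

```python
import numpy as np

def hamming_segments(signal, fs=50, win_seconds=5):
    """Split a 1-D signal into 5 s segments and apply a Hamming window to each."""
    win_len = int(fs * win_seconds)                 # samples per window
    n_windows = len(signal) // win_len
    window = np.hamming(win_len)                    # w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))
    segments = []
    for i in range(n_windows):
        seg = signal[i * win_len:(i + 1) * win_len]
        segments.append(seg * window)               # point-wise multiplication
    return np.array(segments)

windows = hamming_segments(np.random.randn(5 * 50 * 5))   # five 5 s windows at 50 Hz
print(windows.shape)  # (5, 250)
```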

3.3. Feature Extraction

In this section, we list the features used in the study, aligned with each type of sensor data. We extract separate features for the physical and localization activities. The subsequent subsections present each feature comprehensively.

3.3.1. Feature Extraction for Physical Activity

For physical recognition, we processed data from three sensors: magnetometer, gyroscope, and accelerometer [58,59,60,61]. Various statistical features were extracted.

Shannon Entropy

Shannon entropy is first extracted, as seen in Figure 4. Shannon entropy [62,63] measures the unpredictability [64,65,66] or randomness of a signal. Mathematically, it can be calculated as:
$H(P) = -\sum_{i} p_i \log_2(p_i)$ (4)
where pi represents the probability of occurrence of the different outcomes.
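To make the computation concrete, the sketch below estimates the probabilities pi from a histogram of a windowed segment and evaluates Equation (4); the number of histogram bins is an illustrative assumption.

```python
import numpy as np

def shannon_entropy(segment, bins=16):
    """Shannon entropy of a signal segment (Equation (4)), using histogram probabilities."""
    counts, _ = np.histogram(segment, bins=bins)
    p = counts / counts.sum()                 # probability of each bin
    p = p[p > 0]                              # ignore empty bins (0*log 0 treated as 0)
    return -np.sum(p * np.log2(p))

print(shannon_entropy(np.random.randn(250)))
```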

Linear Prediction Cepstral Coefficients (LPCCs)

To extract LPCCs from accelerometer signals [67], the primary step involves the application of linear predictive analysis (LPA). Given s(n) as the accelerometer signal, it can be modeled by the relation
$s(n) = \sum_{k=1}^{p} a_k\, s(n-k) + e(n)$ (5)
where p represents the order of the linear prediction, ak are the linear prediction coefficients, and e(n) denotes the prediction error. The linear prediction coefficients, ak, derived by minimizing the prediction’s mean square error, are commonly obtained using the Levinson–Durbin algorithm. After obtaining these coefficients, the transition to cepstral coefficients begins. This conversion entails taking the inverse Fourier transform of the logarithm of the signal’s power spectrum. Specifically, the cepstral coefficients are determined through a recurrence relation, where the initial coefficient is the logarithm of the zeroth linear prediction coefficient, and subsequent coefficients are derived using the linear prediction coefficients and previous cepstral coefficients. The LPCCs calculated for different activities can be seen in Figure 4.
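As a rough illustration of this pipeline, the sketch below obtains the coefficients ak of Equation (5) by solving the Yule–Walker (autocorrelation) equations rather than running the Levinson–Durbin recursion explicitly, and then applies the standard cepstral recurrence. The prediction order of 10 is an assumed value, and the gain term c0 is omitted.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpcc(segment, order=10):
    """LPCCs from one accelerometer segment: LPC via Yule-Walker, then cepstral recursion."""
    x = np.asarray(segment, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]      # autocorrelation r[0], r[1], ...
    a = solve_toeplitz(r[:order], r[1:order + 1])         # prediction coefficients a_1..a_p
    c = np.zeros(order)
    c[0] = a[0]
    for m in range(1, order):                             # c_m = a_m + sum_k (k/m) c_k a_{m-k}
        c[m] = a[m] + sum((k + 1) / (m + 1) * c[k] * a[m - 1 - k] for k in range(m))
    return c

print(lpcc(np.random.randn(250)))
```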

Skewness

In the context of signal processing for accelerometer data, skewness is a crucial statistical measure that captures the asymmetry [68,69,70] of the signal distribution. To compute the skewness of an accelerometer signal s(n), where n represents the discrete time index, we first calculate the mean (μ) and standard deviation (σ) of the signal. Following this, the skewness (S) is obtained using the formula.
$S = \dfrac{1}{X}\sum_{x=1}^{X}\left(\dfrac{s(x)-\mu}{\sigma}\right)^{3}$ (6)
Here, X is the total number of data points in the signal. The formula essentially quantifies the degree to which the signal’s distribution deviates from a normal distribution. A skewness value of zero signifies a symmetric distribution. Positive skewness indicates a distribution with an asymmetric tail extending towards more positive values, while negative skewness indicates a tail extending towards more negative values. Computing the skewness of an accelerometer signal can provide insights into the distribution characteristics of the signal. Figure 5 shows the skewness for different locomotion activities.

Kurtosis

Kurtosis is a statistical measure used to describe the distribution of observed data around the mean. Specifically, it quantifies the probability distribution of a real-valued random variable. In the context of signals, kurtosis [71] can be particularly informative as it can capture the sharpness of the distribution’s peak and the heaviness of its tails. This, in turn, can indicate the occurrence of abrupt or high-magnitude changes in the acceleration data, which may be characteristic of specific activities or movements. The formula for kurtosis is given by:
$\mathrm{Kurtosis}(x) = F\!\left[\left(\dfrac{x-\mu}{\sigma}\right)^{4}\right] - 3$ (7)
where F denotes the expected value, μ is the mean, and σ is the standard deviation. A kurtosis value greater than zero indicates that the distribution has heavier tails and a sharper peak compared to a normal distribution. Conversely, a value that is less than zero suggests that the distribution has lighter tails. In our analysis, we extracted the kurtosis from the accelerometer signals corresponding to different activities. This enabled us to discern and distinguish the nature of signal distributions for activities such as cooking, sitting, or cleaning. For instance, a sudden or vigorous activity might exhibit a distribution with a higher kurtosis value, indicating rapid changes in acceleration, whereas more steady or uniform activities might have a lower kurtosis value. In Figure 6, the kurtosis plot is presented.
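Both moment-based features map directly onto SciPy routines. In the sketch below, kurtosis is computed with Fisher’s definition so that the result already includes the −3 term of Equation (7); the random segment is a stand-in for one windowed accelerometer segment.

```python
import numpy as np
from scipy.stats import skew, kurtosis

segment = np.random.randn(250)                    # stand-in for one windowed accelerometer segment
s = skew(segment)                                 # Equation (6)
k = kurtosis(segment, fisher=True)                # Equation (7): excess kurtosis (minus 3)
print(f"skewness={s:.3f}, kurtosis={k:.3f}")
```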

3.3.2. Feature Extraction for Localization Activity

For localization activities, we try to capture the complicated movement patterns by extracting a set of distinct features. We extracted the Total Distance, Average Speed, Maximum Displacement, Direction Change features, heading angles, skewness, kurtosis, step detection, and MFCCs [72].

Mel-Frequency Cepstral Coefficients (MFCCs)

In human localization using audio signals, MFCCs [73] play a pivotal role in determining the direction, proximity, and potential movement patterns. We begin with the pre-emphasis of the signal s(n), which accentuates its high frequencies, a step mathematically represented as:
$s'(n) = s(n) - \alpha \times s(n-1)$ (8)
where α is commonly set to 0.97. This amplification aids in emphasizing the subtle changes in audio signals that may result from human movement or orientation changes. The signal is then split into overlapping frames to analyze temporal variations, and each frame is windowed, often using the Hamming window, to mitigate spectral leakage. The short-time Fourier transform (STFT) offers a frequency domain representation of each frame, and its squared magnitude delivers the power spectrum. As human auditory perception is nonlinear, this spectrum is translated to the Mel scale using triangular filters. This transformation is governed by:
$m = 2595 \times \log_{10}\!\left(1 + \dfrac{f}{700}\right)$ (9)
The mathematics above ensures that the extracted features align with human auditory perception. The logarithm of this Mel spectrum undergoes the Discrete Cosine Transform (DCT), producing the MFCCs. By retaining only the initial coefficients, one captures the essential spectral shape, pivotal for discerning sound characteristics that aid in human localization. The MFCCs calculated for localization activities can be seen in Figure 7.
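One compact way to reproduce this chain (pre-emphasis, framing and windowing, Mel filter bank, and DCT) is through librosa’s MFCC routine. In the sketch below, the pre-emphasis follows Equation (8) with α = 0.97, while the 16 kHz sampling rate, the synthetic audio, and the 13 retained coefficients are illustrative assumptions rather than settings reported here.

```python
import numpy as np
import librosa

sr = 16000                                                   # assumed audio sampling rate
audio = np.random.randn(sr * 5).astype(np.float32)           # 5 s stand-in for a microphone segment
pre_emphasized = np.append(audio[0], audio[1:] - 0.97 * audio[:-1])   # Equation (8)
mfccs = librosa.feature.mfcc(y=pre_emphasized, sr=sr, n_mfcc=13)      # framing, Mel scale (Eq. (9)), DCT
print(mfccs.shape)   # (13, n_frames)
```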

Step Detection

To detect steps [74,75,76,77,78] from accelerometer data, we harness the magnitude of the acceleration vector. This magnitude is essentially a scalar representation of the combined accelerations in the x, y, and z axes. Mathematically, given the acceleration values ax, ay, az in the respective axes, the magnitude M is calculated using the formula:
$M = \sqrt{a_x^{2} + a_y^{2} + a_z^{2}}$ (10)
Once we have the magnitude of acceleration, the periodic nature of walking, whether indoors or outdoors, produces recognizable peaks in this signal. Each peak can correspond to a step, and by detecting these peaks, we can estimate the number of steps taken. The peak detection is anchored on identifying local maxima in the magnitude signal that stand out from their surroundings. The steps detected [79] for indoor and outdoor activities can be seen in Figure 8.
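A minimal peak-based step counter over the magnitude of Equation (10) might look like the following sketch; the height and distance thresholds passed to find_peaks are illustrative assumptions that would need tuning for real data.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(ax, ay, az, fs=50):
    """Count steps as prominent local maxima of the acceleration magnitude (Equation (10))."""
    magnitude = np.sqrt(ax**2 + ay**2 + az**2)
    # assumed thresholds: peaks above the mean magnitude, at most ~2 steps per second
    peaks, _ = find_peaks(magnitude, height=magnitude.mean(), distance=fs // 2)
    return len(peaks)

# toy usage: 1.5 Hz synthetic "walking" bounce on one axis for 10 s at 50 Hz
t = np.linspace(0, 10, 500)
ax = 1.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t)
print(count_steps(ax, np.zeros_like(t), np.zeros_like(t)))   # roughly 15 detected peaks
```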

Heading Angle

The heading angle [80,81] plays a pivotal role in determining the orientation or direction a person is facing. As humans navigate through environments, whether they are indoor spaces like shopping malls or outdoor terrains like city streets, understanding their heading is crucial for applications ranging from pedestrian navigation systems to augmented reality. The heading angle, often termed the azimuth, denotes the angle between the North direction (assuming a geomagnetic North) and the projection of the magnetometer’s reading onto the ground plane. Mathematically, the heading angle θ can be calculated using the magnetic field components A and B as:
$\theta = \operatorname{arctan2}(B, A)$ (11)
where arctan2 is the two-argument arctangent function, ensuring the angle [82] lies in the correct quadrant and providing a result in the range [−180, 180]. The heading for indoor and outdoor activity can be seen in Figure 9.
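The azimuth of Equation (11) reduces to a single call to the two-argument arctangent; the sketch below returns the heading in degrees in the range [−180, 180], with toy magnetometer components used for illustration.

```python
import numpy as np

def heading_angle(B, A):
    """Heading (azimuth) in degrees from horizontal magnetic field components, Equation (11)."""
    return np.degrees(np.arctan2(B, A))

print(heading_angle(0.2, 0.5))   # toy magnetometer components, about 21.8 degrees
```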

3.4. Feature Selection Using Variance Threshold

In this experiment, we applied the variance threshold method [83,84,85,86] to the feature vector. The goal was to identify and retain only those features that showed significant variation across all samples, ensuring that our dataset was as informative as possible. The mean, standard deviation, and total distance features exhibited relatively low variance and were therefore removed, while all other features were retained. Variance threshold is a simple filter-based feature selection method. It removes all features whose variance across all samples does not meet a specific threshold. The rationale behind this approach is straightforward: features that do not vary much within themselves are less likely to be informative. The variance of a feature X is given by:
$\mathrm{Var}(X) = \dfrac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}$ (12)
where n is the number of samples; xi is the value of the feature X for the ith sample; and $\bar{x}$ is the mean value of the feature X across all samples. In the context of the variance threshold method, we compare the variance of each feature against a pre-defined threshold. Features with variances below this threshold are considered non-informative and are removed. The working of the variance threshold method is shown in Algorithm 1.
Algorithm 1: Variance Threshold Feature Selection
1: Input: Dataset D with m features: f1, f2, …, fm; variance threshold value τ.
2: Output: A subset of features whose variance is above τ.
3: Initialization: Create an empty list R to store the retained features.
4: Feature Selection:
  For each feature fi in D:
   Compute the variance vi of fi
   If vi ≥ τ, add fi to the list R
  end for
5: Return: Return the list R as the subset of features with variance above τ.
6: End
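The same selection rule is available as scikit-learn’s VarianceThreshold. The sketch below is illustrative: the threshold value τ and the toy feature matrix are assumptions, not values from this study.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.random.randn(100, 12)                   # toy feature matrix: 100 segments x 12 features
X[:, 3] = 0.001                                # a constant (zero-variance) feature
selector = VarianceThreshold(threshold=1e-3)   # assumed threshold tau
X_selected = selector.fit_transform(X)         # drops features whose variance does not exceed tau
print(X.shape, "->", X_selected.shape)         # (100, 12) -> (100, 11)
```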

3.5. Feature Optimization via Yeo–Johnson Power Transformation

We perform feature optimization before moving on to classification. In simple terms, feature optimization makes the feature clearer to the model. We opted to optimize the specified feature vector after selecting relevant features for the model using the Variance Threshold. For this purpose, we utilized the Yeo–Johnson power transformation method. The Yeo–Johnson power transformation [87] is a statistical method used to transform non-normally distributed data into a normal or Gaussian-like distribution. This method is highly valuable in machine learning, as many algorithms assume that the data follow a normal distribution. By transforming the data using the Yeo–Johnson method, we can enhance the performance of these algorithms and make the results more reliable. The method uses a power transformation to map the original data into a new distribution, with the power being a parameter that is estimated from the data. Mathematically, the Yeo–Johnson optimization is given in Equation (13).
$\psi(x,\lambda) = \begin{cases} \dfrac{(x+1)^{\lambda}-1}{\lambda}, & \lambda \neq 0,\ x \geq 0 \\ \ln(x+1), & \lambda = 0,\ x \geq 0 \\ -\dfrac{(-x+1)^{2-\lambda}-1}{2-\lambda}, & \lambda \neq 2,\ x < 0 \\ -\ln(-x+1), & \lambda = 2,\ x < 0 \end{cases}$ (13)
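In practice, the transformation of Equation (13), with λ estimated from the data by maximum likelihood, can be applied through scikit-learn’s PowerTransformer; the sketch below uses a toy skewed feature matrix for illustration.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

X_selected = np.random.exponential(size=(100, 8))         # toy right-skewed feature matrix
pt = PowerTransformer(method="yeo-johnson", standardize=True)
X_optimized = pt.fit_transform(X_selected)                 # per-feature lambda fitted by maximum likelihood
print(pt.lambdas_[:3])                                     # estimated lambda for the first features
```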

3.6. Data Augmentation

In addressing the challenge of class imbalance in datasets, the permutation technique [88,89,90] emerges as a novel data augmentation method, which is particularly effective for sequential or time-series data. At its core, the permutation technique involves dividing a signal into multiple non-overlapping segments and then rearranging these segments in various orders to generate new samples. For example, given a time-series signal divided into three segments, A, B, and C, permutations can produce sequences such as B–A–C, C–B–A, or even B–C–A. This method capitalizes on the inherent structure and patterns within the data, creating diverse samples that maintain the original signal’s fundamental characteristics. When applied to the minority class in an imbalanced dataset, the permutation technique can artificially expand the number of samples, thus bridging the gap between the majority and minority classes. This ensures that the learning algorithm is exposed to a broader spectrum of data variations from the minority class, potentially enhancing its ability to generalize and reducing the bias towards the majority class.
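A minimal sketch of the segment-permutation idea is given below: a signal is split into three non-overlapping segments (the A, B, C of the example above) and reassembled in a random order. Applying this repeatedly to minority-class samples would yield the augmented data; everything beyond the three-segment split is an illustrative assumption.

```python
import numpy as np

def permute_segments(signal, n_segments=3, rng=None):
    """Create one augmented sample by shuffling non-overlapping segments of a signal."""
    if rng is None:
        rng = np.random.default_rng()
    segments = np.array_split(signal, n_segments)     # e.g. A, B, C
    order = rng.permutation(n_segments)               # e.g. B, A, C
    return np.concatenate([segments[i] for i in order])

minority_sample = np.sin(np.linspace(0, 6 * np.pi, 250))   # stand-in for a minority-class segment
augmented = permute_segments(minority_sample)
```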

3.7. Proposed Multi-Layer Perceptron Architecture

Our proposed MLP architecture [91,92,93,94] was designed to handle the complexity and variability inherent in the sensor data collected. With the manual feature extraction and subsequent optimization processes we employed, our MLP [95,96,97,98] was strategically tasked with classifying a refined feature vector that encapsulates essential information for robust activity recognition.

3.7.1. Architecture Overview

  • Input Layer: The size of the input layer directly corresponds to the number of features extracted and optimized from the sensor data. In our study, the dimensionality of the input layer was adjusted based on the dataset being processed, aligning it with the feature vector size derived after optimization.
  • Hidden Layers: We include three hidden layers. The first and second hidden layers are each composed of 64 neurons, while the third hidden layer contains 32 neurons. We utilized the ReLU (rectified linear unit) activation function across these layers to introduce necessary nonlinearity into the model, which is crucial for learning the complex patterns present in the activity data.
  • Output Layer: The size of the output layer varies with the dataset; it comprises nine neurons for the Extrasensory dataset and 10 neurons for the Huawei dataset, each representing the number of activity classes within these datasets. The softmax activation function is employed in the output layer to provide a probability distribution over the predicted activity classes, facilitating accurate activity classification.

3.7.2. Training Process

We trained the MLP using a backpropagation algorithm with a stochastic gradient descent optimizer [99,100]. A categorical cross-entropy [101,102,103] loss function was employed, suitable for the multi-class classification challenges presented by our datasets. The key elements of our training process included the following (a brief illustrative sketch combining these settings appears after the list):
  • Batch Size: We processed 32 samples per batch, optimizing the computational efficiency without sacrificing the ability to learn complex patterns.
  • Epochs: The network was trained for up to 100 epochs. To combat overfitting, we implemented early stopping, which halted training if the validation loss did not improve for 10 consecutive epochs.
  • Validation Split: To ensure robust model evaluation and tuning, 20% of our training data were set aside as a validation set. This allowed us to monitor the model’s performance and make necessary adjustments to the hyperparameters in real-time.
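The Keras-style sketch below combines the architecture and training settings listed above: three hidden layers of 64, 64, and 32 ReLU units, a softmax output, SGD with categorical cross-entropy, batch size 32, up to 100 epochs, early stopping with patience 10, and a 20% validation split. The input dimensionality, the number of classes, and the learning rate are placeholders or assumptions, not values fixed by the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_classes = 40, 9            # placeholders: optimized feature-vector size and activity classes

model = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),   # first hidden layer
    layers.Dense(64, activation="relu"),   # second hidden layer
    layers.Dense(32, activation="relu"),   # third hidden layer
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),   # assumed learning rate
              loss="categorical_crossentropy",
              metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)

# X_train: optimized feature vectors; y_train: one-hot activity labels (random stand-ins here)
X_train = np.random.randn(1000, n_features)
y_train = keras.utils.to_categorical(np.random.randint(n_classes, size=1000), n_classes)
history = model.fit(X_train, y_train, epochs=100, batch_size=32,
                    validation_split=0.2, callbacks=[early_stop])
```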

3.7.3. Model Application and Evaluation

Following the rigorous training phase, we applied the trained MLP model to the test sets from both the Extrasensory and Huawei datasets to critically assess its effectiveness in real-world scenarios. Our evaluation strategy was comprehensive, focusing on a range of metrics that capture the accuracy and robustness of the model.
  • Performance Metrics: We evaluated the model based on accuracy, precision, recall, and the F1-score [104,105]. These metrics were calculated to assess the overall effectiveness of the models in correctly classifying the activities.
  • Confusion matrix: For each dataset, a confusion matrix was generated to visually represent the performance of the model across all activity classes. The confusion matrix [106,107] helps in identifying not only the instances of correct predictions but also the types of errors made by the model, such as false positives and false negatives. This detailed view allows us to identify specific activities where the model may require further tuning.
  • ROC Curves: We also plotted receiver operating characteristic (ROC) curves for each class within the datasets. The ROC curves provide a graphical representation of the trade-off between the true positive rate and the false positive rate at various threshold settings. The area under the ROC curve (AUC) was calculated to quantify the model’s ability to discriminate between the classes under study.
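For completeness, per-class ROC curves and AUC values of this kind can be computed with scikit-learn in a one-vs-rest fashion. In the sketch below, y_true and y_score are stand-ins for the test labels and the MLP’s softmax outputs.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

n_classes = 9
y_true = np.random.randint(n_classes, size=200)           # stand-in for test labels
y_score = np.random.dirichlet(np.ones(n_classes), 200)    # stand-in for softmax probabilities

y_bin = label_binarize(y_true, classes=np.arange(n_classes))   # one-vs-rest binarization
for c in range(n_classes):
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```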

4. Experimental Setup

Evaluation of the proposed system was performed on two benchmark datasets: the Extrasensory dataset and the Sussex Huawei Locomotion (SHL) dataset. The experiment was performed on a 2017 Mac with a 3.2 GHz Core i5 processor, 16 GB of RAM, and a 512 GB SSD.

4.1. Datasets Descriptions

In this section, we delve into the specifics of each dataset, highlighting their diversity and how they reflect real-world scenarios.

4.1.1. The Extrasensory Dataset

The Extrasensory dataset was compiled through the utilization of a variety of sensors, including inertial, GPS, compass, and audio sensors. The data collection process was facilitated by an extra-sensory smartphone app, which aimed to monitor human physical and locomotion activities. The dataset comprises information derived from 36 individual users, with each user contributing a substantial number of instances. Data were collected through both Android and iPhone smartphones, and the dataset includes a comprehensive set of 116 labels for user-reported activities. The details of the dataset are also given in Table 1.

4.1.2. The Sussex Huawei Dataset (SHL)

The Sussex Huawei Locomotion (SHL) dataset [108] is a comprehensive collection of data designed to support research in mobile sensing, particularly for the recognition of human activities and modes of transportation. It was created through a collaboration between the University of Sussex and Huawei Technologies Co., Ltd. The dataset consists of recordings from smartphone sensors, such as accelerometers, gyroscopes, magnetometers, and barometers. These sensors capture movements and environmental characteristics as people go about various activities, including walking, running, cycling, and traveling by car, bus, or train. Participants carrying smartphones equipped with these sensors went through a series of movements in real-world settings, ensuring that the data were as realistic and varied as possible.

4.2. First Experiment: Confusion Matrix

We perform the activity classification using MLP. To evaluate the performance, we plotted the confusion matrix. In simple words, a confusion matrix is a table used for classification problems. It is used to see where the model made an error. The confusion matrices calculated for physical and localization activity for both datasets are shown in Table 2, Table 3, Table 4 and Table 5.

4.3. Second Experiment: Precision, Recall, and F1-Score

In this experiment, we evaluated our system by reporting the precision, recall, and F1-score for each individual activity. In Table 6 and Table 7, the evaluation for physical and localization activities can be seen.

4.4. Third Experiment: Receiver Operating Characteristics (ROC Curve)

To further assess the performance and robustness of our system, we employed the ROC curve, a well-established graphical tool that illustrates the diagnostic ability of a classification system. The ROC curve visualizes the trade-offs between the true positive rate (sensitivity) and false positive rate (1 − specificity) across various threshold settings. The area under the ROC curve (AUC) serves as a single scalar value summarizing the overall performance of the classifier. A model with perfect discriminatory power has an AUC of 1, while a model with no discriminatory power (akin to random guessing) has an AUC of 0.5. In Figure 10 and Figure 11, the ROC curves are plotted.

4.5. Fourth Experiment: Comparison with Other Techniques

In the last experiment, the proposed system was compared with state-of-the-art techniques. Table 8 shows the comparison of the proposed model with other state-of-the-art techniques.

5. Computational Analysis

The comparative analysis of time consumption and memory usage between the Extrasensory and Huawei datasets reveals significant differences in efficiency and resource demands. These disparities suggest diverse applicability in real-world scenarios. Specifically, the Extrasensory dataset, with its higher time and memory requirements, is best suited for environments where detailed and complex activity recognition is crucial, and computational resources are less constrained, such as in clinical or controlled research settings. On the other hand, the Huawei dataset, with its lower resource demands, demonstrates suitability for consumer electronics and real-time applications, such as smartphones and wearable devices that require efficient processing capabilities. The findings show that, while the system exhibits robust performance, its deployment in resource-limited environments such as low-end smartphones or IoT devices might be challenging. Thus, our system is ideal for scenarios where precision and detailed activity recognition outweigh the need for low resource consumption, and less so for applications requiring minimal power usage and rapid processing. Figure 12 shows the analysis visually.

6. Discussion and Limitations

Our research has successfully demonstrated the utilization of smartphone and smartwatch sensors to accurately identify human movements and locations. By methodically cleaning, segmenting, and extracting features from raw sensor data, and employing a multi-layer perceptron for classification, our system achieved high accuracy rates. Specifically, we observed success rates of 96% and 94% for identifying physical activities over the Extrasensory and SHL datasets, respectively, and 94% (Extrasensory) and 91% (SHL) for localization activities. These results represent a significant improvement over many existing methods and underscore the potential of our approach in applications where precise activity recognition is crucial.
  • Detailed Analysis of Key Findings
The high accuracy rates in physical activity recognition demonstrate the efficacy of the proposed system’s feature extraction and machine learning workflow. For localization activities, although slightly lower, the success rates are still competitive, emphasizing our system’s capability in varied contexts. These findings suggest that our approach could be particularly beneficial in health monitoring, urban navigation, and other IoT applications that demand reliable human activity and location data.
While our proposed system offers a promising approach for biosensor-driven IoT wearables in human motion tracking and localization, we recognize several inherent challenges that could impact its broader application and effectiveness. These limitations, if not addressed, may curtail the system’s reliability and versatility in diverse environments:
  • GPS limitations: The GPS technology we utilize, while generally effective, can suffer from significant inaccuracies in environments such as urban canyons or indoors due to signal blockage and multipath interference. These environmental constraints can affect the system’s ability to precisely track and localize activities, particularly in complex urban settings.
  • Data diversity and completeness: The dataset employed for training our system, though extensive, does not encompass the entire spectrum of human activities, particularly those that are irregular or occur less frequently. This limitation could reduce the model’s ability to generalize to activities not represented in the training phase, potentially impacting its applicability in varied real-world scenarios.
  • Performance across different hardware: Our system was primarily tested and optimized on a specific computational setup. When considering deployment across diverse real-world devices such as smartphones, smartwatches, or other IoT wearables, variations in processing power, storage capacity, and sensor accuracy must be addressed. The heterogeneity of these devices could result in inconsistent performance, with higher-end devices potentially delivering more accurate results than lower-end counterparts.
  • Scalability and real-time processing: Scaling our system to handle real-time data processing across multiple devices simultaneously presents another significant challenge. The computational demands of processing large volumes of sensor data in real time necessitate not only robust algorithms but also hardware capable of efficiently supporting these operations.
  • Privacy and security concerns: As with any system handling sensitive personal data, ensuring privacy and security is paramount. Our current model must incorporate more advanced encryption methods and privacy-preserving techniques to safeguard user data against potential breaches or unauthorized access.

7. Conclusions and Future Work

In this study, we successfully developed a comprehensive system capable of effectively recognizing human physical activities and localization through a combination of inertial and GPS sensor data. Our system initiates with denoising the raw signals using Butterworth and median filters to reduce noise while preserving essential signal characteristics. This is followed by the Hamming windowing technique and segmentation processes that structure the data for more effective analysis. Subsequently, we extract and optimize statistical features using the variance threshold selection method and Yeo–Johnson power transformation, respectively, significantly enhancing the relevance and performance of these features in the activity classification process. The final classification of activities is executed through a multilayer perceptron (MLP), which provides a robust model capable of predicting various types of human movements and positions. The findings from our research offer significant implications for the development of smarter, more responsive wearable and mobile technology. By showcasing high accuracy in activity recognition, our system lays a foundation for improved user interaction and monitoring across various applications, spanning from personal fitness tracking to patient health monitoring in medical settings. The successful integration of sensor data for precise activity and location recognition paves the way for more intuitive and context-aware devices.
Moving forward, several enhancements and extensions are proposed to further enrich the capabilities of our system and its applicability to a broader range of real-world scenarios. First, integrating additional types of sensor data, such as environmental and biometric sensors, could provide a more complex understanding of the context and improve the accuracy and reliability of activity recognition. Second, developing adaptive algorithms that can dynamically adjust to changes in the environment or user behavior would make the system more responsive and versatile. Additionally, scalability improvements are crucial, and future work will focus on optimizing the system to more efficiently handle larger, more diverse datasets. This will involve refining our algorithms to manage increased computational demands while enhancing performance. Another important direction for future research involves enhancing the real-time processing capabilities of our system, which is essential for applications requiring immediate responses, such as emergency services or live health monitoring. Furthermore, given the sensitive nature of the data involved in our system, advancing data privacy and security measures will be a priority. We plan to explore sophisticated encryption methods and privacy-preserving data analytics to ensure the security and privacy of user data.

Author Contributions

Conceptualization: D.K., N.A.M. and A.A. (Asaad Algarni); methodology: D.K. and M.A.; software: D.K. and N.A.A.; validation: N.A.M., M.A. and A.A. (Abdulwahab Alazeb); formal analysis: N.A.M.; resources: N.A.M., A.A. (Asaad Algarni) and A.A. (Abdulwahab Alazeb); writing—review and editing: N.A.M., A.J. and H.L.; funding acquisition: N.A.M., A.A. (Asaad Algarni), H.L., A.A. (Abdulwahab Alazeb) and A.J. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. This research is supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding program grant code (NU/GP/SERC/13/30). This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1445).

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Qi, M.; Cui, S.; Chang, X.; Xu, Y.; Meng, H.; Wang, Y.; Yin, T. Multi-region Nonuniform Brightness Correction Algorithm Based on L-Channel Gamma Transform. Secur. Commun. Netw. 2022, 2022, 2675950. [Google Scholar] [CrossRef]
  2. Li, R.; Peng, B. Implementing Monocular Visual-Tactile Sensors for Robust Manipulation. Think. Ski. Creat. 2022, 2022, 9797562. [Google Scholar] [CrossRef] [PubMed]
  3. Babaei, N.; Hannani, N.; Dabanloo, N.J.; Bahadori, S. A Systematic Review of the Use of Commercial Wearable Activity Trackers for Monitoring Recovery in Individuals Undergoing Total Hip Replacement Surgery. Think. Ski. Creat. 2022, 2022, 9794641. [Google Scholar] [CrossRef] [PubMed]
  4. Zhao, Q.; Yan, S.; Zhang, B.; Fan, K.; Zhang, J.; Li, W. An On-Chip Viscoelasticity Sensor for Biological Fluids. Think. Ski. Creat. 2023, 4, 6. [Google Scholar] [CrossRef] [PubMed]
  5. Qu, J.; Mao, B.; Li, Z.; Xu, Y.; Zhou, K.; Cao, X.; Fan, Q.; Xu, M.; Liang, B.; Liu, H.; et al. Recent Progress in Advanced Tactile Sensing Technologies for Soft Grippers. Adv. Funct. Mater. 2023, 33, 2306249. [Google Scholar] [CrossRef]
  6. Khan, D.; Alonazi, M.; Abdelhaq, M.; Al Mudawi, N.; Algarni, A.; Jalal, A.; Liu, H. Robust human locomotion and localization activity recognition over multisensory. Front. Physiol. 2024, 15, 1344887. [Google Scholar] [CrossRef] [PubMed]
  7. Jalal, A.; Nadeem, A.; Bobasu, S. Human Body Parts Estimation and Detection for Physical Sports Movements. In Proceedings of the 2019 2nd International Conference on Communication, Computing and Digital systems (C-CODE), Islamabad, Pakistan, 6–7 March 2019; pp. 104–109. [Google Scholar]
  8. Arshad, M.H.; Bilal, M.; Gani, A. Human Activity Recognition: Review, Taxonomy and Open Challenges. Sensors 2022, 22, 6463. [Google Scholar] [CrossRef] [PubMed]
  9. Elbayoudi, A.; Lotfi, A.; Langensiepen, C.; Appiah, K. Modelling and Simulation of Activities of Daily Living Representing an Older Adult’s Behaviour. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA ’15), Corfu, Greece, 1–3 July 2015; Article 67. Association for Computing Machinery: New York, NY, USA, 2015; pp. 1–8. [Google Scholar]
  10. Azmat, U.; Jalal, A. Smartphone Inertial Sensors for Human Locomotion Activity Recognition based on Template Matching and Codebook Generation. In Proceedings of the 2021 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 21 September 2021; pp. 109–114. [Google Scholar]
  11. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
  12. Serpush, F.; Menhaj, M.B.; Masoumi, B.; Karasfi, B. Wearable Sensor-Based Human Activity Recognition in the Smart Healthcare System. Comput. Intell. Neurosci. 2022, 2022, 1–31. [Google Scholar] [CrossRef]
  13. Yan, L.; Shi, Y.; Wei, M.; Wu, Y. Multi-feature fusing local directional ternary pattern for facial expressions signal recognition based on video communication system. Alex. Eng. J. 2023, 63, 307–320. [Google Scholar] [CrossRef]
  14. Cai, L.; Yan, S.; Ouyang, C.; Zhang, T.; Zhu, J.; Chen, L.; Ma, X.; Liu, H. Muscle synergies in joystick manipulation. Front. Physiol. 2023, 14, 1282295. [Google Scholar] [CrossRef] [PubMed]
  15. Li, J.; Li, J.; Wang, C.; Verbeek, F.J.; Schultz, T.; Liu, H. Outlier detection using iterative adaptive mini-minimum spanning tree generation with applications on medical data. Front. Physiol. 2023, 14, 1233341. [Google Scholar] [CrossRef] [PubMed]
  16. Wang, F.; Ma, M.; Zhang, X. Study on a Portable Electrode Used to Detect the Fatigue of Tower Crane Drivers in Real Construction Environment. IEEE Trans. Instrum. Meas. 2024, 73, 1–14. [Google Scholar] [CrossRef]
  17. Yu, J.; Dong, X.; Li, Q.; Lu, J.; Ren, Z. Adaptive Practical Optimal Time-Varying Formation Tracking Control for Disturbed High-Order Multi-Agent Systems. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 2567–2578. [Google Scholar] [CrossRef]
  18. He, H.; Chen, Z.; Liu, H.; Liu, X.; Guo, Y.; Li, J. Practical Tracking Method based on Best Buddies Similarity. Think. Ski. Creat. 2023, 4, 50. [Google Scholar] [CrossRef] [PubMed]
  19. Hou, X.; Zhang, L.; Su, Y.; Gao, G.; Liu, Y.; Na, Z.; Xu, Q.; Ding, T.; Xiao, L.; Li, L.; et al. A space crawling robotic bio-paw (SCRBP) enabled by triboelectric sensors for surface identification. Nano Energy 2023, 105, 108013. [Google Scholar] [CrossRef]
  20. Hou, X.; Xin, L.; Fu, Y.; Na, Z.; Gao, G.; Liu, Y.; Xu, Q.; Zhao, P.; Yan, G.; Su, Y.; et al. A self-powered biomimetic mouse whisker sensor (BMWS) aiming at terrestrial and space objects perception. Nano Energy 2023, 118, 109034. [Google Scholar] [CrossRef]
  21. Ma, S.; Chen, Y.; Yang, S.; Liu, S.; Tang, L.; Li, B.; Li, Y. The Autonomous Pipeline Navigation of a Cockroach Bio-robot with Enhanced Walking Stimuli. Think. Ski. Creat. 2023, 4, 0067. [Google Scholar] [CrossRef] [PubMed]
  22. Bahadori, S.; Williams, J.M.; Collard, S.; Swain, I. Can a Purposeful Walk Intervention with a Distance Goal Using an Activity Monitor Improve Individuals’ Daily Activity and Function Post Total Hip Replacement Surgery. A Randomized Pilot Trial. Think. Ski. Creat. 2023, 4, 0069. [Google Scholar] [CrossRef]
  23. Hsu, Y.-L.; Yang, S.-C.; Chang, H.-C.; Lai, H.-C. Human Daily and Sport Activity Recognition Using a Wearable Inertial Sensor Network. IEEE Access 2018, 6, 31715–31728. [Google Scholar] [CrossRef]
  24. Abdel-Basset, M.; Hawash, H.; Chang, V.; Chakrabortty, R.K.; Ryan, M. Deep Learning for Heterogeneous Human Activity Recognition in Complex IoT Applications. IEEE Internet Things J. 2022, 9, 5653–5665. [Google Scholar] [CrossRef]
  25. Konak, S.; Turan, F.; Shoaib, M.; Incel, Ö.D. Feature Engineering for Activity Recognition from Wrist-worn Motion Sensors. In Proceedings of the International Conference on Pervasive and Embedded Computing and Communication Systems, Lisbon, Portugal, 25–27 July 2016. [Google Scholar]
  26. Chetty, G.; White, M.; Akther, F. Smart Phone Based Data Mining for Human Activity Recognition. Procedia Comput. Sci. 2016, 46, 1181–1187. [Google Scholar] [CrossRef]
  27. Ehatisham-ul-Haq, M.; Azam, M.A. Opportunistic sensing for inferring in-the-wild human contexts based on activity pattern recognition using smart computing. Future Gener. Comput. Syst. 2020, 106, 374–392. [Google Scholar] [CrossRef]
  28. Zhang, X.; Huang, D.; Li, H.; Zhang, Y.; Xia, Y.; Liu, J. Self-training maximum classifier discrepancy for EEG emotion recognition. CAAI Trans. Intell. Technol. 2023, 8, 1480–1491. [Google Scholar] [CrossRef]
  29. Wen, C.; Huang, Y.; Zheng, L.; Liu, W.; Davidson, T.N. Transmit Waveform Design for Dual-Function Radar-Communication Systems via Hybrid Linear-Nonlinear Precoding. IEEE Trans. Signal Process. 2023, 71, 2130–2145. [Google Scholar] [CrossRef]
  30. Wen, C.; Huang, Y.; Davidson, T.N. Efficient Transceiver Design for MIMO Dual-Function Radar-Communication Systems. IEEE Trans. Signal Process. 2023, 71, 1786–1801. [Google Scholar] [CrossRef]
  31. Yao, Y.; Shu, F.; Li, Z.; Cheng, X.; Wu, L. Secure Transmission Scheme Based on Joint Radar and Communication in Mobile Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10027–10037. [Google Scholar] [CrossRef]
  32. Jalal, A.; Quaid, M.A.K.; Kim, K. A Wrist Worn Acceleration Based Human Motion Analysis and Classification for Ambient Smart Home System. J. Electr. Eng. Technol. 2019, 14, 1733–1739. [Google Scholar] [CrossRef]
  33. Hu, Z.; Ren, L.; Wei, G.; Qian, Z.; Liang, W.; Chen, W.; Lu, X.; Ren, L.; Wang, K. Energy Flow and Functional Behavior of Individual Muscles at Different Speeds During Human Walking. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 294–303. [Google Scholar] [CrossRef]
  34. Wang, K.; Boonpratatong, A.; Chen, W.; Ren, L.; Wei, G.; Qian, Z.; Lu, X.; Zhao, D. The Fundamental Property of Human Leg During Walking: Linearity and Nonlinearity. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 4871–4881. [Google Scholar] [CrossRef]
  35. Jalal, A.; Quaid, M.A.K.; Hasan, A.S. Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 105–110. [Google Scholar]
  36. Zhao, Z.; Xu, G.; Zhang, N.; Zhang, Q. Performance analysis of the hybrid satellite-terrestrial relay network with opportunistic scheduling over generalized fading channels. IEEE Trans. Veh. Technol. 2022, 71, 2914–2924. [Google Scholar] [CrossRef]
  37. Zhu, T.; Ding, H.; Wang, C.; Liu, Y.; Xiao, S.; Yang, G.; Yang, B. Parameters Calibration of the GISSMO Failure Model for SUS301L-MT. Chin. J. Mech. Eng. 2023, 36, 1–12. [Google Scholar] [CrossRef]
  38. Qu, J.; Yuan, Q.; Li, Z.; Wang, Z.; Xu, F.; Fan, Q.; Zhang, M.; Qian, X.; Wang, X.; Wang, X.; et al. All-in-one strain-triboelectric sensors based on environment-friendly ionic hydrogel for wearable sensing and underwater soft robotic grasping. Nano Energy 2023, 111, 108387. [Google Scholar] [CrossRef]
  39. Zhao, S.; Liang, W.; Wang, K.; Ren, L.; Qian, Z.; Chen, G.; Lu, X.; Zhao, D.; Wang, X.; Ren, L. A Multiaxial Bionic Ankle Based on Series Elastic Actuation with a Parallel Spring. IEEE Trans. Ind. Electron. 2023, 71, 7498–7510. [Google Scholar] [CrossRef]
  40. Liang, X.; Huang, Z.; Yang, S.; Qiu, L. Device-Free Motion & Trajectory Detection via RFID. ACM Trans. Embed. Comput. Syst. 2018, 17, 1–27. [Google Scholar] [CrossRef]
  41. Liu, C.; Wu, T.; Li, Z.; Ma, T.; Huang, J. Robust Online Tensor Completion for IoT Streaming Data Recovery. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10178–10192. [Google Scholar] [CrossRef]
  42. Nadeem, A.; Jalal, A.; Kim, K. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimed. Tools Appl. 2021, 80, 21465–21498. [Google Scholar] [CrossRef]
  43. Yu, J.; Lu, L.; Chen, Y.; Zhu, Y.; Kong, L. An Indirect Eavesdropping Attack of Keystrokes on Touch Screen through Acoustic Sensing. IEEE Trans. Mob. Comput. 2021, 20, 337–351. [Google Scholar] [CrossRef]
  44. Bashar, S.K.; Al Fahim, A.; Chon, K.H. Smartphone-Based Human Activity Recognition with Feature Selection and Dense Neural Network. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 5888–5891. [Google Scholar]
  45. Xie, L.; Tian, J.; Ding, G.; Zhao, Q. Human activity recognition method based on inertial sensor and barometer. In Proceedings of the 2018 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Lake Como, Italy, 26–29 March 2018; pp. 1–4. [Google Scholar]
  46. Lee, S.-M.; Yoon, S.M.; Cho, H. Human activity recognition from accelerometer data using Convolutional Neural Network. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Republic of Korea, 13–16 February 2017; pp. 131–134. [Google Scholar]
  47. Mekruksavanich, S.; Jitpattanakul, A. Recognition of Real-life Activities with Smartphone Sensors using Deep Learning Approaches. In Proceedings of the 2021 IEEE 12th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 20–22 August 2021; pp. 243–246. [Google Scholar]
  48. Cong, R.; Sheng, H.; Yang, D.; Cui, Z.; Chen, R. Exploiting Spatial and Angular Correlations with Deep Efficient Transformers for Light Field Image Super-Resolution. IEEE Trans. Multimed. 2024, 26, 1421–1435. [Google Scholar] [CrossRef]
  49. Liu, H.; Yuan, H.; Liu, Q.; Hou, J.; Zeng, H.; Kwong, S. A Hybrid Compression Framework for Color Attributes of Static 3D Point Clouds. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1564–1577. [Google Scholar] [CrossRef]
  50. Liu, Q.; Yuan, H.; Hamzaoui, R.; Su, H.; Hou, J.; Yang, H. Reduced Reference Perceptual Quality Model with Application to Rate Control for Video-Based Point Cloud Compression. IEEE Trans. Image Process. 2021, 30, 6623–6636. [Google Scholar] [CrossRef]
  51. Mutegeki, R.; Han, D.S. A CNN-LSTM Approach to Human Activity Recognition. In Proceedings of the International Conference on Artificial Intelligence and Information Communications (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 362–366. [Google Scholar]
  52. Liu, A.-A.; Zhai, Y.; Xu, N.; Nie, W.; Li, W.; Zhang, Y. Region-Aware Image Captioning via Interaction Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3685–3696. [Google Scholar] [CrossRef]
  53. Jaramillo, I.E.; Jeong, J.G.; Lopez, P.R.; Lee, C.-H.; Kang, D.-Y.; Ha, T.-J.; Oh, J.-H.; Jung, H.; Lee, J.H.; Lee, W.H.; et al. Real-Time Human Activity Recognition with IMU and Encoder Sensors in Wearable Exoskeleton Robot via Deep Learning Networks. Sensors 2022, 22, 9690. [Google Scholar] [CrossRef]
  54. Hussain, I.; Jany, R.; Boyer, R.; Azad, A.; Alyami, S.A.; Park, S.J.; Hasan, M.; Hossain, A. An Explainable EEG-Based Human Activity Recognition Model Using Machine-Learning Approach and LIME. Sensors 2023, 23, 7452. [Google Scholar] [CrossRef]
  55. Garcia-Gonzalez, D.; Rivero, D.; Fernandez-Blanco, E.; Luaces, M.R. New machine learning approaches for real-life human activity recognition using smartphone sensor-based data. Knowl. Based Syst. 2023, 262, 110260. [Google Scholar] [CrossRef]
  56. Zhang, J.; Zhu, C.; Zheng, L.; Xu, K. ROSEFusion: Random optimization for online dense reconstruction under fast camera motion. ACM Trans. Graph. 2021, 40, 1–17. [Google Scholar] [CrossRef]
  57. Zhang, J.; Tang, Y.; Wang, H.; Xu, K. ASRO-DIO: Active Subspace Random Optimization Based Depth Inertial Odometry. IEEE Trans. Robot. 2022, 39, 1496–1508. [Google Scholar] [CrossRef]
  58. She, Q.; Hu, R.; Xu, J.; Liu, M.; Xu, K.; Huang, H. Learning High-DOF Reaching-and-Grasping via Dynamic Representation of Gripper-Object Interaction. ACM Trans. Graph. 2022, 41, 1–14. [Google Scholar] [CrossRef]
  59. Xu, J.; Zhang, X.; Park, S.H.; Guo, K. The Alleviation of Perceptual Blindness During Driving in Urban Areas Guided by Saccades Recommendation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16386–16396. [Google Scholar] [CrossRef]
  60. Xu, J.; Park, S.H.; Zhang, X.; Hu, J. The Improvement of Road Driving Safety Guided by Visual Inattentional Blindness. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4972–4981. [Google Scholar] [CrossRef]
  61. Mao, Y.; Sun, R.; Wang, J.; Cheng, Q.; Kiong, L.C.; Ochieng, W.Y. New time-differenced carrier phase approach to GNSS/INS integration. GPS Solutions 2022, 26, 122. [Google Scholar] [CrossRef]
  62. Jalal, A.; Kim, Y. Dense depth maps-based human pose tracking and recognition in dynamic scenes using ridge data. In Proceedings of the 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, Republic of Korea, 26–29 August 2014; pp. 119–124. [Google Scholar]
  63. Mahmood, M.; Jalal, A.; Kim, K. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors. Multimed. Tools Appl. 2020, 79, 6919–6950. [Google Scholar] [CrossRef]
  64. Chen, Z.; Cai, C.; Zheng, T.; Luo, J.; Xiong, J.; Wang, X. RF-Based Human Activity Recognition Using Signal Adapted Convolutional Neural Network. IEEE Trans. Mob. Comput. 2023, 22, 487–499. [Google Scholar] [CrossRef]
  65. Batool, M.; Alotaibi, S.S.; Alatiyyah, M.H.; Alnowaiser, K.; Aljuaid, H.; Jalal, A.; Park, J. Depth Sensors-Based Action Recognition using a Modified K-Ary Entropy Classifier. IEEE Access 2023, 11, 58578–58595. [Google Scholar] [CrossRef]
  66. Xu, J.; Pan, S.; Sun, P.Z.H.; Park, S.H.; Guo, K. Human-Factors-in-Driving-Loop: Driver Identification and Verification via a Deep Learning Approach using Psychological Behavioral Data. IEEE Trans. Intell. Transp. Syst. 2022, 24, 3383–3394. [Google Scholar] [CrossRef]
  67. Xu, J.; Guo, K.; Sun, P.Z. Driving Performance under Violations of Traffic Rules: Novice vs. Experienced Drivers. IEEE Trans. Intell. Veh. 2022, 7, 908–917. [Google Scholar] [CrossRef]
  68. Liu, H.; Xu, Y.; Chen, F. Sketch2Photo: Synthesizing photo-realistic images from sketches via global contexts. Eng. Appl. Artif. Intell. 2023, 117, 105608. [Google Scholar] [CrossRef]
  69. Pazhanirajan, S.; Dhanalakshmi, P. EEG Signal Classification using Linear Predictive Cepstral Coefficient Features. Int. J. Comput. Appl. 2013, 73, 28–31. [Google Scholar] [CrossRef]
  70. Fausto, F.; Cuevas, E.; Gonzales, A. A New Descriptor for Image Matching Based on Bionic Principles. Pattern Anal. Appl. 2017, 20, 1245–1259. [Google Scholar] [CrossRef]
  71. Alonazi, M.; Ansar, H.; Al Mudawi, N.; Alotaibi, S.S.; Almujally, N.A.; Alazeb, A.; Jalal, A.; Kim, J.; Min, M. Smart healthcare hand gesture recognition using CNN-based detector and deep belief network. IEEE Access 2023, 11, 84922–84933. [Google Scholar] [CrossRef]
  72. Jalal, A.; Mahmood, M. Students’ behavior mining in e-learning environment using cognitive processes with information technologies. Educ. Inf. Technol. 2019, 24, 2797–2821. [Google Scholar] [CrossRef]
  73. Quaid, M.A.K.; Jalal, A. Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm. Multimed. Tools Appl. 2020, 79, 6061–6083. [Google Scholar] [CrossRef]
  74. Pervaiz, M.; Jalal, A. Artificial Neural Network for Human Object Interaction System Over Aerial Images. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6. [Google Scholar]
  75. Jalal, A.; Kim, J.T.; Kim, T.-S. Development of a life logging system via depth imaging-based human activity recognition for smart homes. In Proceedings of the International Symposium on Sustainable Healthy Buildings, Seoul, Republic of Korea, 19 September 2012; pp. 91–95. [Google Scholar]
  76. Jalal, A.; Rasheed, Y. Collaboration achievement along with performance maintenance in video streaming. In Proceedings of the IEEE Conference on Interactive Computer Aided Learning, Villach, Austria, 23 December 2007; pp. 1–8. [Google Scholar]
  77. Muneeb, M.; Rustam, H.; Jalal, A. Automate Appliances via Gestures Recognition for Elderly Living Assistance. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6. [Google Scholar]
  78. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 3. [Google Scholar]
  79. Azmat, U.; Ghadi, Y.Y.; al Shloul, T.; Alsuhibany, S.A.; Jalal, A.; Park, J. Smartphone Sensor-Based Human Locomotion Surveillance System Using Multilayer Perceptron. Appl. Sci. 2022, 12, 2550. [Google Scholar] [CrossRef]
  80. Jalal, A.; Batool, M.; Kim, K. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Appl. Sci. 2020, 10, 7122. [Google Scholar] [CrossRef]
  81. Tan, T.-H.; Wu, J.-Y.; Liu, S.-H.; Gochoo, M. Human Activity Recognition Using an Ensemble Learning Algorithm with Smartphone Sensor Data. Electronics 2022, 11, 322. [Google Scholar] [CrossRef]
  82. Hartmann, Y.; Liu, H.; Schultz, T. High-Level Features for Human Activity Recognition and Modeling. In Biomedical Engineering Systems and Technologies, Proceedings of the BIOSTEC 2022, Virtual Event, 9–11 February 2022; Roque, A.C.A., Gracanin, D., Lorenz, R., Tsanas, A., Bier, N., Fred, A., Gamboa, H., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2023; Volume 1814. [Google Scholar] [CrossRef]
  83. Khalid, N.; Gochoo, M.; Jalal, A.; Kim, K. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability 2021, 13, 970. [Google Scholar] [CrossRef]
  84. Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402. [Google Scholar] [CrossRef] [PubMed]
  85. Jalal, A.; Sharif, N.; Kim, J.T.; Kim, T.-S. Human activity recognition via recognized body parts of human depth silhouettes for residents monitoring services at smart homes. Indoor Built Environ. 2013, 22, 271–279. [Google Scholar] [CrossRef]
  86. Manos, A.; Klein, I.; Hazan, T. Gravity-based methods for heading computation in pedestrian dead reckoning. Sensors 2019, 19, 1170. [Google Scholar] [CrossRef]
  87. Jalal, A.; Batool, M.; Kim, K. Sustainable Wearable System: Human Behavior Modeling for Life-logging Activities Using K-Ary Tree Hashing Classifier. Sustainability 2020, 12, 10324. [Google Scholar] [CrossRef]
  88. Cruciani, F.; Vafeiadis, A.; Nugent, C.; Cleland, I.; McCullagh, P.; Votis, K.; Giakoumis, D.; Tzovaras, D.; Chen, L.; Hamzaoui, R. Feature learning for human activity recognition using convolutional neural networks: A case study for inertial measurement unit and audio data. CCF Trans. Pervasive Comput. Interact. 2020, 2, 18–32. [Google Scholar] [CrossRef]
  89. Jalal, A.; Ahmed, A.; Rafique, A.A.; Kim, K. Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations. IEEE Access 2021, 9, 27758–27772. [Google Scholar] [CrossRef]
  90. Won, Y.-S.; Jap, D.; Bhasin, S. Push for More: On Comparison of Data Augmentation and SMOTE with Optimised Deep Learning Architecture for Side-Channel Information Security Applications. In Proceedings of the Information Security Applications: 21st International Conference, WISA 2020, Jeju Island, Republic of Korea, 26–28 August 2020; Volume 12583, ISBN 978-3-030-65298-2. [Google Scholar]
  91. Hartmann, Y.; Liu, H.; Schultz, T. Interactive and Interpretable Online Human Activity Recognition. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 20–25 March 2022; pp. 109–111. [Google Scholar]
  92. Jalal, A.; Khalid, N.; Kim, K. Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors. Entropy 2020, 22, 817. [Google Scholar] [CrossRef] [PubMed]
  93. Vaizman, Y.; Ellis, K.; Lanckriet, G. Recognizing Detailed Human Context in the Wild from Smartphones and Smartwatches. IEEE Pervasive Comput. 2017, 16, 62–74. [Google Scholar] [CrossRef]
  94. Sztyler, T.; Stuckenschmidt, H. Online personalization of cross-subjects based activity recognition models on wearable devices. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), Kona, HI, USA, 13–17 March 2017; pp. 180–189. [Google Scholar]
  95. Garcia-Gonzalez, D.; Rivero, D.; Fernandez-Blanco, E.; Luaces, M.R. A public domain dataset for real-life human activity recognition using smartphone sensors. Sensors 2020, 20, 2200. [Google Scholar] [CrossRef]
  96. Jalal, A.; Kim, Y.-H.; Kim, Y.-J.; Kamal, S.; Kim, D. Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognit. 2017, 61, 295–308. [Google Scholar] [CrossRef]
  97. Sheng, H.; Wang, S.; Yang, D.; Cong, R.; Cui, Z.; Chen, R. Cross-View Recurrence-Based Self-Supervised Super-Resolution of Light Field. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 7252–7266. [Google Scholar] [CrossRef]
  98. Wang, L.; Ciliberto, M.; Gjoreski, H.; Lago, P.; Murao, K.; Okita, T.; Roggen, D. Locomotion and Transportation Mode Recognition from GPS and Radio Signals: Summary of SHL Challenge 2021. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers (UbiComp/ISWC ‘21 Adjunct), Virtual, 21–26 September 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar]
  99. Fu, C.; Yuan, H.; Xu, H.; Zhang, H.; Shen, L. TMSO-Net: Texture adaptive multi-scale observation for light field image depth estimation. J. Vis. Commun. Image Represent. 2023, 90, 103731. [Google Scholar] [CrossRef]
  100. Luo, G.; Xie, J.; Liu, J.; Luo, Y.; Li, M.; Li, Z.; Yang, P.; Zhao, L.; Wang, K.; Maeda, R.; et al. Highly Stretchable, Knittable, Wearable Fiberform Hydrovoltaic Generators Driven by Water Transpiration for Portable Self-Power Supply and Self-Powered Strain Sensor. Small 2023, 20, 2306318. [Google Scholar] [CrossRef]
  101. Feng, Y.; Pan, R.; Zhou, T.; Dong, Z.; Yan, Z.; Wang, Y.; Chen, P.; Chen, S. Direct joining of quartz glass and copper by nanosecond laser. Ceram. Int. 2023, 49, 36056–36070. [Google Scholar] [CrossRef]
  102. Miao, Y.; Wang, X.; Wang, S.; Li, R. Adaptive Switching Control Based on Dynamic Zero-Moment Point for Versatile Hip Exoskeleton Under Hybrid Locomotion. IEEE Trans. Ind. Electron. 2022, 70, 11443–11452. [Google Scholar] [CrossRef]
  103. Xu, C.; Jiang, Z.; Wang, B.; Chen, J.; Sun, T.; Fu, F.; Wang, C.; Wang, H. Biospinning of hierarchical fibers for a self-sensing actuator. Chem. Eng. J. 2024, 485, 150014. [Google Scholar] [CrossRef]
  104. Liu, Y.; Fang, Z.; Cheung, M.H.; Cai, W.; Huang, J. Mechanism Design for Blockchain Storage Sustainability. IEEE Commun. Mag. 2023, 61, 102–107. [Google Scholar] [CrossRef]
  105. Fu, X.; Pace, P.; Aloi, G.; Guerrieri, A.; Li, W.; Fortino, G. Tolerance Analysis of Cyber-Manufacturing Systems to Cascading Failures. ACM Trans. Internet Technol. 2023, 23, 1–23. [Google Scholar] [CrossRef]
  106. Wang, S.; Sheng, H.; Yang, D.; Zhang, Y.; Wu, Y.; Wang, S. Extendable Multiple Nodes Recurrent Tracking Framework with RTU++. IEEE Trans. Image Process. 2022, 31, 5257–5271. [Google Scholar] [CrossRef]
  107. Yang, D.; Zhu, T.; Wang, S.; Wang, S.; Xiong, Z. LFRSNet: A Robust Light Field Semantic Segmentation Network Combining Contextual and Geometric Features. Front. Environ. Sci. 2022, 10, 1443. [Google Scholar] [CrossRef]
  108. Asim, Y.; Azam, M.A.; Ehatisham-Ul-Haq, M.; Naeem, U.; Khalid, A. Context-Aware Human Activity Recognition (CAHAR) in-the-Wild Using Smartphone Accelerometer. IEEE Sens. J. 2020, 20, 4361–4371. [Google Scholar] [CrossRef]
  109. Vaizman, Y.; Weibel, N.; Lanckriet, G. Context Recognition In-the-Wild: Unified Model for Multi-Modal Sensors and Multi-Label Classification. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 168. [Google Scholar] [CrossRef]
  110. Sharma, A.; Singh, S.K.; Udmale, S.S.; Singh, A.K.; Singh, R. Early Transportation Mode Detection Using Smartphone Sensing Data. IEEE Sens. J. 2021, 21, 15651–15659. [Google Scholar] [CrossRef]
  111. Akbari, A.; Jafari, R. Transition-Aware Detection of Modes of Locomotion and Transportation through Hierarchical Segmentation. IEEE Sens. J. 2020, 21, 3301–3313. [Google Scholar] [CrossRef]
  112. Brimacombe, O.; Gonzalez, L.C.; Wahlstrom, J. Smartphone-Based CO2e Emission Estimation Using Transportation Mode Classification. IEEE Access 2023, 11, 54782–54794. [Google Scholar] [CrossRef]
Figure 1. The proposed system architecture.
Figure 2. The accelerometer x-axis signal: noisy vs. filtered.
Figure 3. The first three Hamming windows applied to the accelerometer data.
Figure 4. LPCCs calculated for different activities.
Figure 5. Skewness calculated for different activities.
Figure 6. Kurtosis calculated for different activities.
Figure 7. MFCCs calculated for (a) indoor and (b) outdoor activity.
Figure 8. Steps detected for (a) indoor and (b) outdoor activity.
Figure 9. Heading angle calculated for (a) indoor and (b) outdoor activity.
Figure 10. ROC curves: (a) physical and (b) localization activity over the Extrasensory dataset.
Figure 11. ROC curves: (a) physical and (b) localization activity over the SHL dataset.
Figure 12. Time and memory usage analysis of the proposed system.
Table 1. Description of the Extrasensory dataset.

Sensor          Signal Type           Sampling Rate (Hz)   Duration (s)   Number of Recordings
Accelerometer   Acceleration          32                   2              308,306
Gyroscope       Angular velocity      32                   2              291,883
Magnetometer    Magnetic field        32                   2              282,527
Location        Latitude, longitude   1                    2              273,737
Table 2. Confusion matrix over the Extrasensory dataset for physical activity.

Obj. Classes   Sitting   Eating   Cooking   Bicycle
Sitting        0.95      0.01     0.03      0.00
Eating         0.00      1.00     0.00      0.00
Cooking        0.00      0.00     1.00      0.00
Bicycle        0.03      0.00     0.00      0.97
Mean accuracy = 96.61%
Table 3. Confusion matrix over the Extrasensory dataset for localization activity.

Obj. Classes   Indoors   Outdoors   Home   School   Car
Indoors        1.00      0.00       0.00   0.00     0.00
Outdoors       0.00      1.00       0.00   0.00     0.00
Home           0.05      0.06       0.80   0.02     0.07
School         0.02      0.02       0.03   0.90     0.03
Car            0.00      0.00       0.00   0.00     1.00
Mean accuracy = 94.28%
Table 4. Confusion matrix over the SHL dataset for physical activity.

Obj. Classes   Sit    Walk   Stand   Run
Sit            0.96   0.00   0.04    0.00
Walk           0.03   0.97   0.00    0.00
Stand          0.03   0.03   0.92    0.02
Run            0.02   0.01   0.03    0.94
Mean accuracy = 94.75%
Table 5. Confusion matrix over the SHL dataset for localization activity.

Obj. Classes   Indoor   Outdoor   In Train   In Car   In Bus   In Subway
Indoor         0.93     0.00      0.05       0.02     0.00     0.00
Outdoor        0.00     0.95      0.04       0.00     0.00     0.01
In train       0.01     0.03      0.89       0.02     0.05     0.00
In car         0.00     0.01      0.01       0.94     0.00     0.04
In bus         0.03     0.02      0.07       0.00     0.88     0.00
In subway      0.03     0.00      0.03       0.00     0.02     0.92
Mean accuracy = 91.83%
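
For readers who wish to reproduce the summary figures in Tables 2–5, the short sketch below illustrates one way to derive per-class and mean accuracy from a row-normalized confusion matrix. It is an illustrative sketch only: the per-class sample counts are hypothetical placeholders (the reported mean accuracies appear to weight classes by their sample counts rather than simply averaging the diagonal), and the matrix values are copied from Table 2.

```python
import numpy as np

# Row-normalized confusion matrix for the Extrasensory physical activities (Table 2).
# Rows are the true classes, columns are the predicted classes.
labels = ["Sitting", "Eating", "Cooking", "Bicycle"]
cm = np.array([
    [0.95, 0.01, 0.03, 0.00],
    [0.00, 1.00, 0.00, 0.00],
    [0.00, 0.00, 1.00, 0.00],
    [0.03, 0.00, 0.00, 0.97],
])

# Hypothetical per-class sample counts (placeholders, not taken from the study).
support = np.array([5200, 3100, 2800, 4100])

per_class_acc = np.diag(cm)                                 # recall of each class
macro_acc = per_class_acc.mean()                            # unweighted mean of the diagonal
weighted_acc = np.average(per_class_acc, weights=support)   # support-weighted overall accuracy

for name, acc in zip(labels, per_class_acc):
    print(f"{name:8s} accuracy: {acc:.2f}")
print(f"Macro-average accuracy:    {macro_acc:.4f}")
print(f"Support-weighted accuracy: {weighted_acc:.4f}")
```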
Table 6. Precision, recall, and F1-score over physical activity.

Classes       Extrasensory                        SHL
Activities    Precision   Recall   F1-Score      Precision   Recall   F1-Score
Sitting       0.95        1.00     0.92          -           -        -
Eating        1.00        0.80     0.90          -           -        -
Cooking       1.00        0.89     0.95          -           -        -
Bicycle       0.97        0.95     0.96          -           -        -
Sit           -           -        -             0.92        0.96     0.94
Stand         -           -        -             0.94        0.92     0.93
Walking       -           -        -             0.96        0.97     0.97
Run           -           -        -             0.95        0.94     0.92
Table 7. Precision, recall, and F1-score over localization activity.

Classes       Extrasensory                        SHL
Activities    Precision   Recall   F1-Score      Precision   Recall   F1-Score
Indoors       1.00        0.94     0.91          -           -        -
Outdoors      1.00        1.00     0.95          -           -        -
School        0.84        1.00     0.92          -           -        -
Home          0.90        0.85     0.88          -           -        -
Car           1.00        1.00     1.00          -           -        -
Indoor        -           -        -             0.93        0.93     0.93
Outdoor       -           -        -             0.94        0.95     0.94
In train      -           -        -             0.82        0.89     0.85
In car        -           -        -             0.96        0.94     0.95
In subway     -           -        -             0.95        0.92     0.93
In bus        -           -        -             0.93        0.88     0.90
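
The per-class precision, recall, and F1-scores reported in Tables 6 and 7 follow the standard definitions (precision = TP/(TP + FP), recall = TP/(TP + FN), and F1 is their harmonic mean). The snippet below is a minimal sketch of how such scores can be computed from predicted labels with scikit-learn; the label vectors are invented placeholders rather than the study's data.

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical ground-truth and predicted activity labels (placeholders for illustration only).
y_true = ["Sit", "Walk", "Walk", "Stand", "Run", "Sit", "Stand", "Run", "Walk", "Sit"]
y_pred = ["Sit", "Walk", "Stand", "Stand", "Run", "Sit", "Stand", "Walk", "Walk", "Sit"]

classes = ["Sit", "Walk", "Stand", "Run"]
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=classes, zero_division=0
)

for cls, p, r, f, s in zip(classes, precision, recall, f1, support):
    print(f"{cls:6s} precision={p:.2f} recall={r:.2f} F1={f:.2f} (n={s})")
```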
Table 8. Comparison of the proposed MLP with other methods.

Method                      Accuracy
                            Extrasensory   SHL
Vaizman et al. [109]        0.83           -
Vaizman et al. [98]         0.83           -
Asim et al. [108]           0.87           -
Sharma et al. [110]         -              0.92
Akbari et al. [111]         -              0.92
Brimacombe et al. [112]     -              0.79
Proposed                    0.94           0.91
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
