Article

Forward Fall Detection Using Inertial Data and Machine Learning

by
Cristian Tufisi
1,2,
Zeno-Iosif Praisach
1,2,*,
Gilbert-Rainer Gillich
1,2,3,
Andrade Ionuț Bichescu
4 and
Teodora-Liliana Heler
2
1
Department of Engineering Science, Babeș-Bolyai University, Str. M. Kogălniceanu 1, 400084 Cluj-Napoca, Romania
2
Doctoral School of Engineering, Babeș-Bolyai University, Str. M. Kogălniceanu 1, 400084 Cluj-Napoca, Romania
3
Technical Science Academy of Romania, Bd Dacia 26, 030167 București, Romania
4
Department of Physical Education and Sport, Babeș-Bolyai University, Str. M. Kogălniceanu 1, 400084 Cluj-Napoca, Romania
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(22), 10552; https://doi.org/10.3390/app142210552
Submission received: 24 September 2024 / Revised: 11 November 2024 / Accepted: 12 November 2024 / Published: 15 November 2024

Abstract:
Fall risk assessment is becoming an important concern, with the realization that falls, and fainting events in particular, usually require immediate medical attention and can pose serious health risks as well as financial and social burdens. The development of an accurate inertial sensor-based fall risk assessment tool combined with machine learning algorithms could significantly advance healthcare. This research investigates the development of a machine learning approach for fall and faint detection using wearable sensors, with an emphasis on forward falls. In the current paper, we address the lack of inertial time-series data needed to differentiate the forward fall event from normal activities, as such data are difficult to obtain from real subjects. To solve this problem, we propose a forward dynamics method to generate the necessary training data using the OpenSim software, version 4.5. To develop a model as close to the real world as possible, anthropometric data taken from the literature were used. The raw X- and Y-axis acceleration data were generated using OpenSim, and ML fall prediction methods were trained. The machine learning (ML) accuracy was validated by testing with data acquired from six volunteers, considering the forward fall type.

1. Introduction

To efficiently target the resources needed for preventing health risks associated with undetected fall events in elderly persons or in patients with various health issues, fast and accurate detection methods are mandatory to ensure effective intervention before the fall event leads to irreversible health consequences or death. According to the World Health Organization (WHO), fall-related injuries are more common in older persons and represent a major cause of pain, disability, loss of independence, and even premature death: more than 28% of people over 65 have at least one fall event per year, a percentage that increases to at least 32% for the elderly over 70 years of age [1]. These events burden health systems by increasing the cost of care at the global level, while also having a negative impact on the patient, the patient’s family, and society. The timely detection of a fall is a particularly important aspect of increasing the chances of a patient’s recovery after an event.
Fall risk assessment is crucial for senior patients, particularly those with chronic diseases such as neurological disorders, to reduce casualties or recuperation time [2]. There are several research studies in the domain of fall detection, some using contact and noncontact-based sensors and others using deep learning and machine learning techniques [3]. In paper [3], the authors used two methods for detecting falls: one based on a video camera and one based on accelerometer sensors of the kind commonly found in smartphones. For the contact-based method, a 1D Convolutional Neural Network (CNN) and machine learning (ML) methods were used, and for the video recordings, 3D ConvNets with three different architectures were used: VGG-16, Xception, and DenseNet, which obtained an accuracy of 99.85% using the UP-Fall detection dataset [4] for training and testing. However, this study had the drawback that the methods were not tested against new experimental data.
Fall detection using inertial sensors has emerged as a practical tool for ongoing fall risk assessment in different environments. In article [5], the authors showcased the use of inertial data coupled with deep learning models, specifically Long Short-Term Memory (LSTM) neural networks by considering the spatio-temporal gait parameters as input, obtaining a good classification accuracy.
In paper [6], the authors conducted a comprehensive study on developing a fall detection system using artificial intelligence which was tailored for telemedicine applications, specifically targeting elderly care, using a Kinect 2.0 camera, manufactured by Microsoft, Redmond, WA, USA, and an AI model that analyzes images in real-time. The initial approach used libraries such as Bounding Boxes to determine falls based on the lengths of the sides. However, this approach faced challenges due to the inclusion of extraneous reference points in the generated point cloud, which led to inaccurate classifications. Additionally, the current model relies on a laptop for processing and the Kinect camera, which presents some limitations in terms of portability and usability in real-world applications.
Another important aspect is that falls can occur in different ways [7], which can be forward, sideward, backward, damped, and undamped. From the existing studies [7,8,9,10,11], it was concluded that forward falls are the most prominent.
In paper [12], the author describes the detection of falls with a 3-axis digital accelerometer by successfully classifying acceleration change characteristics while falling from different activities like walking downstairs, upstairs, sitting down, and standing up.
To develop an effective classification or prediction model for falls and faints, it is necessary to have a sizeable participant database. However, collecting data from many participants is time-consuming, error-prone, and expensive due to the requirement for continuous data collection and long-term follow-up. In paper [13], the authors present a method of acquiring training data using a forward kinematic approach with markerless 3D motion capture technology and the biomechanical simulation platform OpenSim [14]. After training an LSTM model, accuracies of 91.99% and 86.62% were obtained on two different datasets of real fall-related inertial measurement unit (IMU) data. The paper demonstrates that an insufficient sample size can be overcome by developing a digital model in order to extend both the testing scenarios and the training data, using specialized software tools designed for human body dynamics simulation, such as OpenSim. These tools allow users to create digital models of the human body and simulate movements and interactions with the environment, including scenarios such as falls [15].
Paper [16] aims to enhance fall detection systems, which are crucial for timely medical intervention. Unlike many studies that rely on simulated falls in controlled environments, this research uses real-world fall data from the FARSEEING repository, the largest collection of real-world falls to date. Acceleration signals were recorded using an inertial sensor placed on the lower back of individuals with a moderate-to-high risk of falling. Results showed a sensitivity higher than 80%, a false alarm rate per hour of 0.56, and an F-measure of 64.6%.
As there are many different approaches to fall detection [17], the current paper focuses on the detection of forward falls with an emphasis on the type of fall, mainly if it was a sudden undamped fall or a damped fall.
The current paper presents a method for classifying fall events, such as forward falls and forward damped falls, from normal day-to-day activities by combining normal gait acceleration data from real persons and fall acceleration data from digital models as inputs to train supervised learning models.
To develop an accurate method to establish if a fall has occurred or normal activities have been carried out by the monitored person, in the current paper, the inertial data acquired using an accelerometer on the X and Y axes, mounted in the pelvis area, will be used for both digital and real-world measurements. Due to the high risk of injury as well as the long period of time required for volunteers to simulate real falls, as well as the large number of volunteers required to generate the necessary input data to train supervised learning models to detect falls, this paper validates the possibility of using digital models to generate training data.
The digital model is generated starting from the four-link walker model in the passive dynamic simulation example from OpenSim [18], which is further developed to simulate a real person. The digital model has specific contact joints and forces so that it can reproduce real-world movements [19,20]. The model is built from body segments with inertial data taken from the literature [21]. Head and torso geometries are defined as spheres, and the upper arm, forearm, thigh, and calf are defined as ellipses, with inertial properties taken from those presented by R. F. Chandler et al. in report [21], a thorough investigation of the physical characteristics connected to the dynamics of the human body, with an emphasis on inertial properties. The data provided in [21] are important because they clarify how the human body reacts to motion and force, which matters in domains like biomechanics, ergonomics, automobile safety, and robotics. The study examines the distribution of mass in the human body and how it influences the responses of various body segments to external motions and forces, and it details the methods used to collect the data, including experimental setups, measurement techniques, and analytical models for defining the inertial parameters of several subjects. The authors of that study analyzed the inertial properties of six cadavers. The digital model developed in the current paper uses the arithmetic mean of the anthropometric parameters of the six measured subjects and is considered symmetrical; according to [21], the inertial parameters of the main body segments are shown in Table 1.
After the model is generated, the forward dynamics tool is used to record the model’s acceleration on the X and Y axes. The raw acceleration data is normalized and used to train and test an artificial neural network (ANN) based on the LSTM architecture and a supervised learning classifier, k-nearest neighbors.
To test the trained machine learning models, the raw acceleration values corresponding to forward falls, both damped and undamped, are recorded by mounting an accelerometer sensor on the pelvis area of six volunteers. The data is fed into both developed models to evaluate the fall detection accuracy, also considering the type of activity, such as normal gait, sitting on a bed, falling forward onto the knees, and falling forward with no damping.

2. Materials and Methods

2.1. Measurement of the Two-Axis Acceleration from Real Subjects

To measure the acceleration data from real subjects, a Raspberry Pi Pico W with an MPU6050 accelerometer (Raspberry Pi, Bucharest, Romania) was used to develop a portable inertial measurement unit (IMU).
The Raspberry Pi Pico is a commonly used, low-cost, high-performance microcontroller board which provides the 3.3 V power output and digital interface required to operate the MPU-6050 accelerometer. The MPU-6050 is an electronic circuit with a 3-axis gyroscope, a 3-axis accelerometer, a Digital Motion Processor, and a temperature sensor. It requires just two connections to communicate over the I2C interface. The setup is presented in Figure 1.
To power on the sensor, the 3V3 PIN on the Raspberry-Pi is connected to the VCC of the MPU6050, and the GND pin to the GND of the MPU6050.
For signal transmission, the Serial Data Line pin I2C0 SDA (Pin 1) and the Serial Clock Line pin I2C0 SCL (Pin 2) are connected to the MPU6050’s SDA and SCL pins.
Python 3.12 was used to interface the MPU-6050 module with the Raspberry Pi. The MPU6050 module (Figure 2) was used to read the acceleration values. The board is programmed to start recordings consisting of the acceleration values and time in ms as soon as the power is on. For every iteration or stop/start loop, a new file is saved, with the name acceleration followed by the date and time.
The developed IMU is intended to be mounted on the belt of subjects and is made portable by connecting it to an external 5 V, 2 A battery by fixing all the components in a 3D printed PLA case with sleeves for mounting. All wires are soldered before mounting inside the case. The case was printed using an Ender-type V2 3D printer with a nozzle of 0.4 mm and a PLA filament of 1.75 mm. The extruder temperature was set to 210 °C, building plate temperature to 63 °C, a layer thickness of 0.2 mm, and an infill ratio of 40%.
The analyzed method involves writing a script in MicroPython 1.24.0, loaded through the Thonny IDE, to initialize the sensor and to read and process the raw data, thus determining the acceleration on the X, Y, and Z axes relative to time. The testing and validation process was performed by first continuously monitoring the sensor values and calculated angles displayed in the Thonny IDE console. After the sensor was calibrated, it was mounted on the belt (Figure 3) of Subject 1 and the normal gait acceleration values were recorded with a time resolution of Δt = 0.02 s.
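As a rough illustration of this processing step, the snippet below converts raw accelerometer readings to units of g and derives a tilt angle; the ±2 g full-scale range (16,384 LSB/g) and the helper names are assumptions made for this sketch, not values stated in the paper.

```python
import math

# Hypothetical post-processing of raw MPU6050 readings: the sensor returns
# signed 16-bit integers; at the default +/-2 g range the scale factor is
# 16384 LSB/g (an assumption -- the range actually configured may differ).
SCALE_LSB_PER_G = 16384.0

def raw_to_g(raw: int) -> float:
    """Convert a signed 16-bit accelerometer reading to units of g."""
    return raw / SCALE_LSB_PER_G

def tilt_angle_deg(ax_g: float, ay_g: float, az_g: float) -> float:
    """Tilt of the X axis relative to the gravity vector, in degrees."""
    return math.degrees(math.atan2(ax_g, math.sqrt(ay_g**2 + az_g**2)))

# A stationary sensor lying flat reads roughly (0, 0, 1 g):
print(raw_to_g(16384))                # 1.0
print(tilt_angle_deg(0.0, 0.0, 1.0))  # 0.0
```

The same conversion applies per sample to each recorded axis before the values are logged against the Δt = 0.02 s timestamps.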
The X, Y, and Z axes were considered according to the OpenSim model: the X-axis in the walking direction, the Y-axis in the vertical direction, and the Z-axis in the lateral direction. The acceleration on the Z-axis was not considered in the current study.
The gait event acceleration (Figure 4) values were measured only for Subject 1 for 25 s, and the data was used together with the fall events acceleration values obtained through simulations to train the supervised learning models to distinguish between fall events and normal activities.

2.2. Generating the Fall Event Training Values Using a Digital Model

To generate the data necessary for training the artificial neural networks, the OpenSim program was used in this study, starting from the passive dynamic walking simulation model provided by [19,20,21,23].
The necessary steps for creating the digital model are illustrated in the following flowchart:
The first step was to generate the models’ body geometries starting from a rigid platform and a four-link walker [19], which was further developed into an 11-segment model, by following the steps presented in Figure 5, using MATLAB and anthropometric data from the literature [21].
The model is presented in Figure 6 and its constituting body segments are defined as simple geometries.
The pelvis and torso are considered a single body from the anthropometric point of view but are still differentiated in order to measure the acceleration in the pelvis zone.
The moments and products of inertia of each body about its center of mass are expressed in the body frame; as an example, the mass center of the thigh is at the midpoint between the hip and knee joints, and the mass center of the shank is at the center of the shank.
Joints are necessary to unite the bodies of the model into a multibody graph. In OpenSim, every model starts with a Ground body. The kinematic relationship between two bodies, referred to as the parent and child bodies, is described by a joint, which is a collection of degrees of freedom [23].
The Frame class hierarchy was introduced in OpenSim 4.0 to provide flexibility in the definition of joints. Ground is the basic type of frame, followed by Body and PhysicalOffsetFrame. Since each of these three frames is either a rigid body or is fixed to a rigid body, they are collectively referred to as PhysicalFrames. A joint links two PhysicalFrames (parent and child) in OpenSim 4.0.
A non-linear contact force model, the Hunt–Crossley force model [22], is used to simulate how two bodies interact when they come into contact. It considers the contact’s elastic and damping properties simultaneously. In the Hunt–Crossley model, the force F is given by:
F = kδⁿ + cδⁿδ̇
where:
k is the stiffness coefficient;
δ is the penetration depth (deformation);
n is the exponent related to the nature of the contact (typically between 1.5 and 2 for biological tissues);
c is the damping coefficient;
δ̇ is the rate of change of the deformation (penetration velocity).
In human movement dynamics, the Hunt–Crossley force principle [24] is mainly used to model and simulate contact forces between the human body and external surfaces.
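A minimal numeric sketch of this force law may help; the parameter values below are purely illustrative and are not the OpenSim defaults or the values used in the paper.

```python
def hunt_crossley_force(k, delta, n, c, delta_dot):
    """Hunt-Crossley contact force F = k*delta^n + c*delta^n*delta_dot.

    k: stiffness coefficient, delta: penetration depth (m),
    n: contact exponent (~1.5-2 for biological tissue),
    c: damping coefficient, delta_dot: penetration velocity (m/s).
    """
    if delta <= 0.0:          # bodies not in contact -> no force
        return 0.0
    return k * delta**n + c * delta**n * delta_dot

# Example with illustrative (made-up) parameters:
print(hunt_crossley_force(k=1e5, delta=0.001, n=1.5, c=50.0, delta_dot=0.2))
```

Note that the damping term is scaled by the same δⁿ factor as the elastic term, so the damping force vanishes smoothly as the bodies separate, which is the property that makes this model popular for biomechanical contact.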
To define the Coordinate Limit Forces in the dynamic simulation environment, the class that captures the parameters defining these forces is created with the help of MATLAB 2021, with the values shown in Table 2. The values were tuned empirically so that the digital model can fall forward in either a damped or an undamped manner.
For the left-hip coordinate-limit force the setup ensures that the hip joints are constrained within specified angular limits, with appropriate stiffness and damping applied as they approach these limits, which helps to prevent unrealistic or damaging joint movements during the simulation. Coordinate limit forces are also added to the right hip, left knee, torso, head, and arms.
Contact geometries are defined in the simulation model according to the topology view shown in Figure 7. This involves creating a contact sphere for each set of contacts and setting its properties, such as radius, location, and frame, before adding it to the simulation model.
Hunt–Crossley contact forces are applied to the pelvis, specifically between the pelvis contact sphere and a platform contact surface in the simulation model. The parameters influence the behavior of the contact force during the simulation.
The right thigh, left thigh, and torso contact forces are also set up between their contact spheres and pelvis. The upper right and left arms have contact forces to the torso and forearms to the upper-arms contact spheres.
The angle of the platform is used to control the amount of potential energy converted to kinetic energy with each stride. Energy must be added to the system to offset losses from ground contact to maintain a stable motion. A planar joint, which permits three degrees of freedom between the pelvis and the platform, connects the walker’s pelvis to the platform. The shanks and thighs are joined by pin joints, as are the thighs and pelvis, and arms and torso.
After setting up the appropriate contact forces, the rigid platform is inclined at a 2-degree angle to control the amount of potential energy converted to kinetic energy. Because the knee upper and lower stiffness values are set to 0.7 and 0.4 Nm/rad, respectively, the knees bend during the fall and touch the platform first, resulting in a damped forward fall (Figure 8).
By using the forward dynamics tool with the run time set to 4 s, the simulation is completed and the pelvis velocity values relative to the platform on the X and Y axes are recorded over time with a step of Δt = 0.02 s, resulting in 201 data samples. The values obtained are illustrated in Figure 9a, which shows that the velocity increases as soon as the model starts to fall. Depending on the type of fall (damped or undamped), there will be one or two spikes. The minus sign of the velocity is considered in relation to the platform, in the vertical direction.
The speed values are exported and converted to acceleration data, as shown in Figure 9.
a = Δv/Δt [m/s²]
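The conversion can be sketched as a backward finite difference over the Δt = 0.02 s step; the velocity samples below are illustrative, not actual simulation output.

```python
# Sketch of converting the exported pelvis velocity samples to acceleration
# via a = dv/dt, with the simulation step dt = 0.02 s.
DT = 0.02

def velocity_to_acceleration(v, dt=DT):
    """Backward finite differences: a[i] = (v[i] - v[i-1]) / dt."""
    return [(v[i] - v[i - 1]) / dt for i in range(1, len(v))]

v = [0.0, 0.1, 0.3, 0.6]            # m/s, illustrative values
print(velocity_to_acceleration(v))  # approximately [5.0, 10.0, 15.0] m/s^2
```

The differenced series is one sample shorter than the velocity series, which is why the 201 velocity samples yield 200 acceleration values before normalization.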
To obtain the acceleration values for the undamped forward fall, the coordinate limit force parameters upperStiffness and lowerStiffness are set to 5 Nm/rad, the kneeUpperLimit to 5°, and the kneeLowerLimit to −5°. The rigid platform remains inclined at a 2-degree angle and the simulation is run for 4 s using the OpenSim forward dynamics tool, as shown in Figure 10.
By placing the digital model in the correct position, several types of falls can be obtained, including damped falls with hand contact. The scope of the current simulation is to obtain a pure forward fall that imitates a fainting event, during which the person falls head-first without any damping.
After the velocity values are recorded, the acceleration data is extracted, as shown in Figure 11, and normalized for all cases.
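The paper does not spell out the normalization formula; a common min-max scaling to [0, 1] is sketched below as one plausible interpretation, not as the authors’ exact procedure.

```python
def minmax_normalize(samples):
    """Scale a sequence to [0, 1]. The exact normalization used in the
    paper is not stated; min-max scaling is assumed here for illustration."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return [0.0 for _ in samples]   # constant signal: nothing to scale
    return [(s - lo) / (hi - lo) for s in samples]

print(minmax_normalize([-2.0, 0.0, 2.0]))  # [0.0, 0.5, 1.0]
```

Whatever scheme is used, applying the same normalization to both the simulated training data and the measured test data is what makes the two comparable.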

3. Development of the Fall Detection Models

After the acceleration data is obtained and normalized for the normal gait, damped forward fall, and undamped forward fall events, the data is used as input to train a deep learning model, based on an LSTM sequential neural network, and then a KNN supervised classifier.
Long Short-Term Memory Neural Networks (LSTM) represent the predominant type of recurrent neural network (RNN) utilized in fall detection [25,26]. As demonstrated in [25], the basic topology of an LSTM network was developed using a 3-neuron input layer, two Gate Recurrent Unit (GRU) layers of 20 neurons each, and a final stage of two neuron Soft-Max layers [25]. The RNN is trained using raw acceleration data and the results obtained showcase a better performance than traditional models such as Naive Bayes and Support Vector Machines.
As a second approach for detecting falls, we chose the k-NN classifier, which is among the simplest classifiers [27] yet has performance comparable to far more complicated ones. At its core, this classifier measures the similarity, or distance, between the training and test data. Finding the query’s nearest neighbors is the primary goal of k-NN; therefore, distance is its main parameter, and the way distances are calculated greatly affects prediction accuracy. For the k-NN classifier, we used 5-fold cross-validation, meaning that the dataset is split into 5 subsets; in each iteration, the model is trained on 4 subsets (80% of the data) and tested on the remaining subset (20%).
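The fold construction described above can be sketched as follows; this index-based splitter is a simplified stand-in for MATLAB’s built-in cross-validation, shown only to make the 80/20 split per iteration concrete.

```python
# 5-fold cross-validation: the dataset is split into 5 folds; each iteration
# trains on 4 folds (80% of the data) and tests on the remaining one (20%).
def kfold_indices(n_samples, n_folds=5):
    """Yield (train_idx, test_idx) pairs, one pair per fold."""
    indices = list(range(n_samples))
    fold_size = n_samples // n_folds
    for f in range(n_folds):
        start = f * fold_size
        stop = (f + 1) * fold_size if f < n_folds - 1 else n_samples
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test

for train, test in kfold_indices(10, n_folds=5):
    print(len(train), len(test))  # 8 2 on every fold
```

Every sample appears in exactly one test fold, so the reported cross-validation score averages over predictions made on data the model did not see during that iteration.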
The obtained training data is labeled by considering the gait output as 0 or “No fall”, the damped forward fall as 1 or Damped_Fall, and the forward fall as 2 or Forward_Fall. The number outputs are used for the LSTM network and the string outputs for the KNN model.

3.1. Training the LSTM Neural Network

An LSTM network is a kind of recurrent neural network (RNN). Because LSTMs can recognize long-term dependencies between the time steps of the data, they are mostly utilized for learning, processing, and classifying sequential data.
The LSTM neural network is developed using MATLAB’s Deep Learning Toolbox [28]. The data is split into train, test, and validation sets, where 70% of the data is used for training the model, 15% for testing and 15% for validation.
The neural network is set to accept sequences with an input feature size of 2, consisting of the acceleration data on the X and Y axes. The network’s architecture contains an LSTM layer with 512 hidden units, designed to capture temporal dependencies in sequence data, and a fully connected layer that maps the LSTM output to the desired number of responses, which is set to 1. A dropout layer, which randomly sets 20% of the input units to zero at each update during training, helps prevent overfitting.
A regression layer is used to compute the mean squared error loss. Even though the task involves classifying data into three distinct classes, we have opted for a regression approach because the model doesn’t treat the classes as completely distinct entities, but rather learns to predict a continuous value that reflects the inherent ordering of the classes. Using regression allows the model to make more nuanced predictions, which can be converted to class labels later.
To train the network, the Adam optimization algorithm is used, and the number of training epochs is set to 1000, the gradient threshold to 0.001, and the initial learning rate to 0.0001. The training data is shuffled by batches of 60 samples. The number of samples per batch is chosen empirically for graphical visualization purposes. This approach means that instead of shuffling individual samples randomly, we shuffle predefined chunks (batches) of data by applying shuffled indices.
During training, the network’s performance is evaluated by plotting the root mean squared error (RMSE), as shown in Figure 12, and the loss curve, as shown in Figure 13.
The root mean squared error (RMSE) is a metric commonly used to evaluate the performance of regression models. It is the square root of the mean squared error, as shown in Equation (3).
RMSE = √((1/n) Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)²)
where yᵢ are the actual values and ŷᵢ are the predicted values.
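The RMSE definition translates directly into code:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired samples."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# One prediction off by 2 classes out of four samples:
print(rmse([0, 1, 2, 2], [0, 1, 2, 0]))  # sqrt(4/4) = 1.0
```

Because the errors are squared before averaging, RMSE penalizes large deviations (e.g., confusing class 0 with class 2) more heavily than small ones.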
The term “loss” refers to the value given by the loss function, which measures the difference between the predictions and the actual target values.
For the network to correspond to the best validation loss, we adjusted the training options to keep track of the best validation model. MATLAB’s trainingOptions allows the ValidationPatience option to stop training early if the validation loss does not improve after 50 validations, with OutputNetwork set to ‘best-validation-loss’.
As observed in Figure 12 and Figure 13, the loss value is around 0.1 and the validation RMSE is around 0.3, suggesting that the LSTM model achieves a low average error in the predicted values. While this RMSE value indicates good general performance, it does not guarantee that all predictions are within a 30% deviation of the true values; thus, in the next section, the LSTM network is also tested using unseen experimental data. The network first undergoes an initial evaluation of its performance on unseen data by plotting the Predicted vs. Original graph in Figure 14, considering the 15% test split of unseen data from a total of 17,220 samples. The red lines with asterisks represent the predicted values and the green plots the actual test values.
In the training phase for the LSTM network, the outputs have a value of 0 for normal activity, meaning normal gait, an output value of 1 for damped forward fall events, and an output value of 2 for forward fall events without damping. Considering the data is composed of the raw normalized acceleration on the X and Y axes, the predicted values offer very good accuracy, as illustrated in Figure 13.
The regression layer has one output spanning the three target values. Values below 0.5 are considered a no-fall event, values from 0.5 to 1.5 a forward damped fall event, and values above 1.5 a forward undamped fall event. We converted the continuous output into discrete class labels by rounding to the nearest integer. Once the predictions were mapped to the nearest class label, we computed the confusion matrix and metrics; the results obtained for the trained LSTM network were Accuracy: 84%, Precision: 96%, Recall: 98%, and F1 Score: 97%, suggesting that the model performed well on the test dataset.
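The rounding-and-clamping step described above can be sketched as follows; the prediction values used here are illustrative, not the network’s actual outputs.

```python
# Map the regression output to the nearest class label 0, 1 or 2
# (0 = no fall, 1 = damped forward fall, 2 = undamped forward fall).
def to_class(y_continuous):
    """Round to the nearest integer and clamp into the valid label range."""
    return max(0, min(2, round(y_continuous)))

def accuracy(y_true, y_pred_continuous):
    labels = [to_class(y) for y in y_pred_continuous]
    correct = sum(t == p for t, p in zip(y_true, labels))
    return correct / len(y_true)

preds = [0.1, 0.9, 1.6, 2.2, 0.4]        # illustrative network outputs
print([to_class(p) for p in preds])      # [0, 1, 2, 2, 0]
print(accuracy([0, 1, 2, 2, 1], preds))  # 0.8
```

The clamp ensures that outputs outside the trained range (e.g., 2.2 or −0.3) still map to a valid class before the confusion matrix is computed.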
For a better understanding of the network’s performance, the confusion matrix was plotted, as shown in Figure 15. The confusion matrix considered all data, consisting of the 70% used for training, the 15% for validation, and the 15% for testing.
The primary confusion matrix illustrated in Figure 15 reflects that the network was generally reliable in distinguishing between the classes.
The network achieved a True Positive Rate (TPR) of 99.2%, with a False Negative Rate (FNR) of 0.8% indicating that Class 0 was detected accurately with very few instances misclassified into other classes.
Class 1 had a TPR of 99.0%, showing good sensitivity by having the FNR at 1.0%, meaning the network rarely misclassifies Class 1.
Class 2 showed a slightly lower performance, with a TPR of 89.7% and an FNR of 10.3%. While the network still performed reasonably well for Class 2, it occasionally confused it with Class 1, which is also a fall event.

3.2. Training the KNN Classifier

After generating the dataset using the digital model and MATLAB programming tools, we imported the data into the classifiers. The KNN classifier was chosen for training [29]. Setting the k-fold cross-validation to 5, each method was trained independently on the imported data.
The model type was set to a Fine KNN with 3 neighbors, using the city block distance metric and squared inverse distance weights. All features were used in the model, and PCA was disabled. The Fine KNN in MATLAB refers to a K-Nearest Neighbors (KNN) classifier preset that uses a small number of neighbors and makes finely detailed distinctions between classes. We employed 5-fold cross-validation to ensure reliable and unbiased performance evaluation across different subsets of the data. Several trials were made considering 2-fold to 10-fold cross-validation; after training, 5-fold exhibited the best balance of performance and computational efficiency.
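A plain-Python sketch of this configuration (3 neighbors, city block distance, squared inverse weights) is shown below for illustration; the paper itself used MATLAB’s Classification Learner, and the sample points and labels here are made up.

```python
def cityblock(a, b):
    """City block (Manhattan, L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(X_train, y_train, query, k=3):
    """Weighted vote among the k nearest neighbors (weight = 1/d^2)."""
    dists = sorted((cityblock(x, query), label)
                   for x, label in zip(X_train, y_train))
    votes = {}
    for d, label in dists[:k]:
        w = 1.0 / (d * d) if d > 0 else float("inf")  # exact match dominates
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy 2D feature vectors and labels (illustrative only):
X = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 0.9)]
y = ["No_fall", "No_fall", "Forward_Fall", "Forward_Fall"]
print(knn_predict(X, y, (0.05, 0.0)))  # No_fall
```

The squared inverse weighting means a single very close neighbor can outvote two more distant ones, which is what gives the Fine KNN its locally detailed decision boundary.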
A model validation technique called cross-validation was used to calculate the model’s performance. True positive, false positive, true negative, and false negative values are documented in the confusion matrix that was produced and shown in Figure 16.
Figure 16 illustrates, in the form of a confusion matrix, the performance of the Fine KNN classifier on the raw normalized acceleration data, which was used both for training and as a test set. Of the 17,268 test sample pairs, for the damped fall class the accuracy was 98%, with 0.3% predicted as forward fall and 1.6% as normal activity. For the forward fall class, 99.1% accuracy was obtained, with 0.9% wrong predictions, of which 0.3% were classified as normal activities. The results show that the proposed classifier was very sensitive to falls and had high recognition accuracy, achieving a True Positive Rate (TPR) of 98% for damped falls, 99.1% for forward falls, and 92.6% for normal activities.
From the confusion matrix, the results obtained for the trained KNN network were Accuracy: 97.33%, Precision: 97.11%, Recall: 97.5% and F1 Score: 97.3%, comparable to the results obtained after training the LSTM network.

4. Testing the Developed Fall-Detection Models Using Experimental Measurements

To test the proposed method of training on digital fall simulations, as well as the developed deep learning and classification models, experimental measurements were taken from six male volunteers. The developed sensor was strapped tightly, using a belt, to the pelvis area of each volunteer, either at the front or at the back. The physical characteristics of the volunteers, as well as the placement position of the inertial measurement unit (front or back), are shown in Table 3.
Subjects 1 to 6 simulated forward falls on a mattress, both by falling first on their knees, thus damping the fall, and by falling forward directly onto their elbows, without any damping. Additionally, Subject 3 was recorded doing activities like normal walking and sitting on a bed (Figure 17) to evaluate how the developed models predict new data.
All the fall simulations were performed starting from a distance of two steps away from the mattress. The subject would walk two steps and perform the fall.
The fall event was timed, with the start defined as the moment the subject began walking. The obtained acceleration values were normalized and aligned to the start and end times of the timer. The acceleration data was recorded using the same time resolution, Δt = 0.02 s. The acceleration values for Subject 1 are presented in Figure 18.
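The paper does not specify its normalization scheme; the sketch below assumes max-absolute scaling of each recording into [−1, 1], using the stated Δt = 0.02 s sampling interval and a synthetic trace in place of the recorded data:

```python
import numpy as np

def normalize(signal):
    """Scale a signal into [-1, 1] by its maximum absolute value.
    The paper does not state which normalization it uses; max-absolute
    scaling is one common choice for acceleration data."""
    signal = np.asarray(signal, dtype=float)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

dt = 0.02                                   # sampling interval from the paper [s]
t = np.arange(0, 2.0, dt)                   # a 2 s recording window
accel = 9.81 * np.sin(2 * np.pi * 1.5 * t)  # synthetic acceleration trace
norm = normalize(accel)
assert np.max(np.abs(norm)) <= 1.0
```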
For a better understanding of the correlation between the simulated and real fall events, the data was normalized and compared in Figure 19.
Figure 19 illustrates how the simulated fall event compares with the real recorded data of six different subjects by considering only the actual fall event. The simulated fall (orange line) shows a relatively smooth pattern, with peaks occurring primarily between 0.2 and 1.4 s. Similarly, Subjects 1, 3, and 4 exhibit more prominent peaks around 0–0.2 and 0.8–1.4 s. Subject 2 (magenta) has more noise and lower overall values. The peaks for the subjects tend to occur between 0.1 and 1.5 s, with at least two prominent maximum spikes 0.3 to 0.5 s apart, and with some differences in the magnitude and timing of acceleration across the subjects, especially for Subject 6.
The discrepancies between simulated and recorded data could reduce model accuracy. Addressing these differences through model refinement or calibration would likely enhance the robustness and applicability of the predictive model. Additionally, in future studies, considering additional signal metrics for acceleration values, such as the Root Mean Square (RMS) of Acceleration, Peak Acceleration Values, and Frequency Analysis, would be a solid approach to bridge the gap between simulated and recorded data.
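The signal metrics named above are straightforward to compute. The sketch below (illustrative, run on a synthetic trace rather than the recorded data) shows RMS, peak acceleration, and a simple FFT-based dominant-frequency estimate:

```python
import numpy as np

def rms(signal):
    """Root Mean Square of a signal."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

def peak(signal):
    """Peak (maximum absolute) value of a signal."""
    return np.max(np.abs(np.asarray(signal, dtype=float)))

def dominant_frequency(signal, dt=0.02):
    """Dominant frequency [Hz] via FFT, excluding the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]

t = np.arange(0, 2.0, 0.02)
accel = np.sin(2 * np.pi * 3.0 * t)   # synthetic 3 Hz acceleration trace
print(rms(accel), peak(accel), dominant_frequency(accel))
```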

4.1. Testing the LSTM Neural Network

After the acceleration data was recorded for all six volunteers while performing both undamped and damped forward falls, the values were fed into the LSTM network and the accuracy of the predictions was evaluated. In Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24, the number of samples reflects the time recorded for each participant to perform the simulated fall and ranges between 240 and 450 samples.
For all six subjects, the predictions given by the trained LSTM are presented in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25. Output 1 is given for damped fall, output 2 for forward undamped fall, and output 0 for normal activities. The mean predicted value of each case is shown in the title of the graph.
Based on the results presented in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25, it can be concluded that fall events were accurately detected across all cases, and the majority were correctly classified according to fall type. However, some classification errors occurred in subjects 1 to 4, where forward falls were misclassified as damped falls. This misclassification may be partially attributed to the 10.2% False Negative Rate for Class 2, which indicates a tendency to confuse certain fall types, particularly forward falls, with other categories. These errors can also be attributed to the subjects’ tendency to instinctively dampen their falls, despite being instructed otherwise.
For all six volunteers, the acceleration values during normal gait movements were recorded, normalized, and combined into one single test sample. The results in Figure 26 illustrate that the predictions were very close to the expected target of 0, indicating that the network can distinguish normal walking activities from fall events.
Furthermore, to test whether the developed LSTM model is capable of distinguishing between fall events and normal activities like sitting in bed, the acceleration was measured on Subject 1 as he performed a 5-step walk and sat down in bed, first assuming a sitting position and afterwards lying down on his back. The results are presented in Figure 27.
Because the maximum outputs did not exceed 0.6, as illustrated in the predictions in Figure 27, an appropriate threshold can be set so that the network does not indicate a fall event.
The network correctly detected forward fall events and distinguished between the two types of falls correctly.
The network predicted normal walking activities accurately, with values close to the expected target of 0, and no false positives were obtained for normal walking activities.
For Subject 1, during a 5-step walk and sitting in bed, the network’s predictions did not exceed 0.42, indicating no fall event; sitting in bed and walking were thus correctly identified as non-falls.
These findings lead to the conclusion that with the LSTM network, the events were correctly detected, there were no false positives, the non-fall activities were correctly identified as non-falls, and there were no false negative results.
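The thresholding logic described above can be sketched as follows; the 0.6 threshold and the target values (0 for normal activity, 1 for damped fall, 2 for undamped forward fall) come from the text, while the nearest-target rounding rule is an illustrative assumption:

```python
def classify_output(mean_output, fall_threshold=0.6):
    """Map a mean LSTM output to an event label.
    Targets from the paper: 0 = normal activity, 1 = damped fall,
    2 = undamped forward fall. The decision rule below (threshold plus
    nearest-target rounding) is an illustrative assumption."""
    if mean_output < fall_threshold:
        return "normal activity"
    # Round to the nearest fall target (1 or 2).
    return "damped fall" if abs(mean_output - 1) < abs(mean_output - 2) else "forward fall"

assert classify_output(0.04) == "normal activity"   # e.g., sitting in bed
assert classify_output(1.1) == "damped fall"
assert classify_output(1.9) == "forward fall"
```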

4.2. Testing the KNN Classifier

To test the prediction accuracy of the developed Fine KNN model, the experimental recorded data was labeled accordingly, and the prediction accuracy was evaluated by plotting the confusion matrix for all measured scenarios.
The first test was performed for Subject 1 by combining, in one single test file, the damped fall (labeled Damped_Fall), the forward undamped fall (labeled Forward_Fall), and the normal gait and sitting-in-bed events (labeled No). The results are shown in the confusion matrix in Figure 28, where a red cell in a Predicted Class row for Damped_Fall, Forward_Fall, or No indicates that the model failed to make the right prediction, classifying those samples incorrectly. For the remaining subjects, the data consists of damped and undamped forward falls and gait events; Figure 29, Figure 30, Figure 31, Figure 32 and Figure 33 show the KNN network predictions for Subjects 2 to 6.
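MATLAB’s Fine KNN preset corresponds to a single nearest neighbour (k = 1). The sketch below implements that decision rule on toy normalized (X, Y) acceleration pairs, as an assumption about how the classifier operates rather than a reproduction of the trained model:

```python
import numpy as np

def fine_knn_predict(X_train, y_train, X_test):
    """1-nearest-neighbour prediction (MATLAB's 'Fine KNN' preset uses k = 1):
    each test sample takes the label of its closest training sample
    by Euclidean distance."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    # Pairwise squared distances, shape (n_test, n_train).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return np.asarray(y_train)[np.argmin(d2, axis=1)]

# Toy normalized (X, Y) acceleration pairs; labels 0 = No,
# 1 = Damped_Fall, 2 = Forward_Fall (synthetic, not the paper's data).
X_train = [[0.1, 0.0], [0.8, 0.6], [0.9, 0.95]]
y_train = [0, 1, 2]
print(fine_knn_predict(X_train, y_train, [[0.05, 0.1], [0.85, 0.9]]))
```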
The accuracy of detecting the fall event without considering the type of fall, for each person, is presented in Table 4.
From the presented results, in the case of a fall event, the classifier was capable of recognizing from the data that a fall had taken place with a minimum True Positive Rate of 92.5%, illustrating that the model was highly capable of detecting a fall event.
Regarding the type of fall event, the highest accuracy obtained was for Subject 1, where the TPR values were 98.7% for a damped fall and 98.9% for a forward fall.
For the damped fall event, the biggest classification error was obtained in the case of Subject 4, where the damped fall had a TPR value of 49.9% and a False Negative Rate (FNR) of 50.1%, of which 49.2% were predicted as forward falls and just 0.9% as normal activities.
For the forward fall event, the biggest classification error was obtained in the case of Subject 2, where the forward fall had a TPR value of 89.7% and an FNR of 10.3%, of which 2.8% were predicted as damped falls and 7.4% as normal activities.
The normal activity events were predicted correctly, achieving a maximum TPR of 100% for Subjects 2, 3, 4, and 6, and a minimum of 72.6% for Subject 5.
The models generally performed well across subjects, with high fall detection accuracy. However, there were variations in prediction accuracy, such as 100% for Subject 5 and 92.5% for Subject 2, suggesting that factors like physical differences or sensor placement may have influenced performance. Subject 2’s recordings had more noise and lower acceleration values, and Subject 6 showed variations in acceleration spikes. These inconsistencies affected the model’s performance; the type of sensor and its placement will therefore receive special consideration in future studies.

5. Discussion

In this paper, a digital model was developed and used to simulate forward damped and undamped falls, creating a database composed of simulated normalized acceleration values. The generated data, consisting of simulated falls and experimental gait acceleration values, were used to train LSTM and Fine KNN models with the main aim of detecting fall events and classifying the type of fall.
For testing the models in real-world scenarios, six male volunteers with diverse physical characteristics (ages ranging from 25 to 52, weights from 68 to 94 kg, and heights from 173 to 184 cm, per Table 3) participated in the experiments.
The sensor placement (front or back of the pelvis) and the type of fall simulation (damped vs. undamped) provided the dataset for testing the models.
The LSTM network demonstrated very good accuracy in predicting the fall event from normal activity. The classification results indicate that forward falls were misclassified as damped falls for subjects 1 to 4. This suggests that while the network is effective at reliably detecting fall events overall, it struggles to accurately differentiate between specific types of falls. Therefore, the model performs well in identifying that a fall has occurred, but its accuracy in distinguishing between fall types (such as forward vs. damped) is less reliable.
For normal walking activities, the LSTM network also demonstrated high accuracy, with predictions closely matching the expected target and a mean predicted value of 0, showing that the model can successfully distinguish normal activities from fall events.
In distinguishing falls from normal activities like sitting in bed, the LSTM network maintained a low maximum output, having a mean predicted value of 0.04, which would allow for a threshold to be set that prevents false positives for non-fall activities.
The Fine KNN model was tested using labeled experimental data for various scenarios, including damped falls, forward undamped falls, normal gait, and sitting in bed.
From the presented results, the KNN classifier showed a recurring, though not predominant, misclassification of damped falls as forward falls. This indicates the model’s sensitivity to damping actions, possibly due to subjects’ natural tendency to dampen their falls with their hands or knees even when instructed otherwise.
Confusion matrices for each subject illustrated the model’s prediction accuracy.
The results showed that the KNN model could distinguish falls from normal activities with good accuracy.
Fall detection models combining the LSTM neural network and the KNN classifier produce more accurate predictions, even when trained with digital models. The LSTM excels at capturing temporal relationships and patterns in sequential data, such as acceleration profiles during falls and regular activities.
An innovative advantage of the presented method is the use of digital simulation to generate training data. This approach is easy to implement, carries no risk to subjects, and, to our knowledge, has seen limited use in existing research. The presented method’s accuracy and practicality make it a promising candidate for real-world applications, particularly in wearable devices. The classification between damped and undamped falls is of high importance in determining if the person has suffered a potentially fatal fall or not.
In reference [9], the LSTM model trained on synthetic IMU data from the shank achieves an accuracy of 91.99% on Dataset A, which is significantly higher than the model trained on real data (87.5%). This synthetic data-based model also outperforms the real data-trained model across other performance metrics. In contrast, the current paper’s results demonstrate that, aside from the instances with 92.5% and 93.4% accuracy, the model performs exceptionally well, often achieving accuracy rates above 95%. This strong performance suggests that the current study’s model is highly reliable and compares favorably with the models discussed in reference [9], where the synthetic data-based models also showed strong but somewhat varying accuracy across different datasets.
In contrast, existing studies [16,27], which often rely on additional devices or fixed sensors to detect falls, may not be as practical for widespread use. For instance, the real-time fall detection system (FDS) proposed in other research [27] leverages smartphones and Google’s 3D mapping services for location tracking, achieving an accuracy of within 9 m. While this system offers excellent location tracking in areas with high Wi-Fi density, it faces limitations in terms of additional equipment requirements and the need for fixed target environments, which can hinder its real-world applicability.
The current study’s use of digital simulation for training data enhances the system’s safety and ease of deployment and provides a scalable solution for improving fall detection systems.
Scenarios like sitting in bed and normal walking were also tested, demonstrating the models’ ability to distinguish falls from routine activities. Prediction accuracy could be improved further by focusing on signal processing rather than using only raw normalized acceleration data, and by training with a larger, more diverse dataset: developing digital models of varying sizes, a more diverse set of fall scenarios, or different body types could further enhance the models’ accuracy and generalizability in real-world applications.

6. Conclusions

Fall risk assessment is an important concern for providing immediately needed assistance before a fall poses a serious health risk. Detecting falls and their type as promptly as possible can contribute significantly to shortening recuperation time and improving a person’s overall health. Special devices could be developed that deploy protection for the vital parts of the body when a fall event is detected and inform specialized personnel that the person needs immediate assistance.
In this work, emphasis is placed on generating training data by introducing simulation procedures with the help of specialized programs as well as anthropometric data obtained from literature. The paper highlights the advantages and availability of data that can be generated using digital models.
Using the OpenSim software, a model of a person consisting of 11 segments, with a height of 170.52 cm and a weight of 62.25 kg, was generated. The digital model can simulate forward falls both with and without damping, making it simpler to replicate real fall events of different people.
Using the data generated through simulation, LSTM and Fine KNN models were trained to detect falls with very good accuracy, based on the raw acceleration data. In the training process, the models were evaluated using 20% of the generated data for testing, with the results illustrating a very good performance.
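A hold-out evaluation like the 20% split mentioned above can be sketched as follows (illustrative; the paper does not state whether its split was shuffled or stratified):

```python
import numpy as np

def train_test_split(X, y, test_fraction=0.2, seed=0):
    """Shuffle the dataset and hold out a fraction for testing, mirroring
    the paper's use of 20% of the generated data for evaluation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], X[test], y[train], y[test]

X = np.arange(100).reshape(50, 2).astype(float)  # 50 toy samples
y = np.arange(50) % 3                            # toy 3-class labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y)
assert len(X_te) == 10 and len(X_tr) == 40
```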
For the experimental measurements, the LSTM model detected the fall events with very high accuracy, but did not successfully differentiate the type of fall event.
Both developed models detected the fall event with good accuracy, which leads to the conclusion that a fall detection model could take advantage of both methods by preprocessing the output of the LSTM and feeding it into a KNN classifier, which is competent at handling high-dimensional data and identifying local patterns. By further conducting exhaustive research considering different sensor positions and several types of falls, the presented methods can make use of portable technologies such as mobile phones or smart watches to predict fall events in real time, thus becoming a reliable fall detection system.
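The LSTM-into-KNN combination proposed above is not implemented in the paper; one hypothetical way to realize it is to summarize the LSTM’s per-sample outputs into a small feature vector and let a 1-NN (Fine KNN) classifier make the final decision. All feature choices and prototypes below are illustrative assumptions:

```python
import numpy as np

def hybrid_predict(lstm_outputs, X_train, y_train):
    """Hypothetical hybrid pipeline suggested in the conclusions: summarize
    the LSTM's per-sample outputs into a small feature vector, then let a
    1-NN (Fine KNN) classifier make the final decision. The feature set
    and training prototypes here are illustrative assumptions."""
    seq = np.asarray(lstm_outputs, dtype=float)
    features = np.array([seq.mean(), seq.max(), seq.std()])  # LSTM output summary
    X_train = np.asarray(X_train, dtype=float)
    d2 = ((X_train - features) ** 2).sum(axis=1)             # 1-NN on the summary
    return np.asarray(y_train)[np.argmin(d2)]

# Toy prototypes: 0 = normal activity, 1 = damped fall, 2 = forward fall.
X_train = [[0.05, 0.3, 0.1], [1.0, 1.3, 0.2], [1.9, 2.2, 0.3]]
y_train = [0, 1, 2]
print(hybrid_predict([1.8, 2.0, 1.9, 2.1], X_train, y_train))
```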
The presented results showed a high accuracy despite both the generated and the measured data being taken directly into the network without prior filtering and processing.

Author Contributions

Conceptualization, C.T. and G.-R.G.; methodology, C.T. and Z.-I.P.; software, C.T.; validation, C.T. and G.-R.G.; formal analysis, G.-R.G. and A.I.B.; investigation, C.T.; resources, C.T. and T.-L.H.; data curation, C.T.; writing—original draft preparation, C.T.; writing—review and editing, G.-R.G.; visualization, A.I.B.; supervision, G.-R.G.; project administration, C.T.; funding acquisition, C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Babeş-Bolyai University, grant number SRG-UBB 32911/22.06.2023.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Subcommittee of the Babeș-Bolyai University of Cluj-Napoca, according to Research Ethics Approval no. 15.796/16 July 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. Ageing and Health (AAH), Maternal, Newborn, Child & Adolescent Health & Ageing (MCA); World Health Organization: Geneva, Switzerland, 2015; ISBN 9789241563536. [Google Scholar]
  2. Tinetti, M.E.; Williams, C.S. Falls, injuries due to falls, and the risk of admission to a nursing home. N. Engl. J. Med. 1997, 337, 1279–1284. [Google Scholar] [CrossRef] [PubMed]
  3. Chandak, A.; Chaturvedi, N.; Dhiraj. Machine-Learning-Based Human Fall Detection Using Contact- and Non-contact-Based Sensors. Comput. Intell. Neurosci. 2022, 2022, 9626170. [Google Scholar] [CrossRef] [PubMed]
  4. Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C. UP-fall detection dataset: A multimodal approach. Sensors 2019, 19, 1988. [Google Scholar] [CrossRef] [PubMed]
  5. Tunca, C.; Salur, G.; Ersoy, C. Deep Learning for Fall Risk Assessment With Inertial Sensors: Utilizing Domain Knowledge in Spatio-Temporal Gait Parameters. IEEE J. Biomed. Health Inform. 2020, 24, 1994–2005. [Google Scholar] [CrossRef]
  6. Villegas-Ch., W.; Barahona-Espinosa, S.; Gaibor-Naranjo, W.; Mera-Navarrete, A. Model for the Detection of Falls with the Use of Artificial Intelligence as an Assistant for the Care of the Elderly. Computation 2022, 10, 195. [Google Scholar] [CrossRef]
  7. Attar, M.; Alsinnari, Y.M.; Alqarni, M.S.; Bukhari, Z.M.; Alzahrani, A.; Abukhodair, A.W.; Qadi, A.; Alotibi, M.; Jastaniah, N.A. Common Types of Falls in the Elderly Population, Their Associated Risk Factors and Prevention in a Tertiary Care Center. Cureus 2021, 13, e14863. [Google Scholar] [CrossRef]
  8. Gratza, S.K.; Chocano-Bedoya, P.O.; Orav, E.J.; Fischbacher, M.; Freystätter, G.; Theiler, R.; Egli, A.; Kressig, R.W.; Kanis, J.A.; Bischoff-Ferrari, H.A. Influence of fall environment and fall direction on risk of injury among pre-frail and frail adults. Osteoporos. Int. 2019, 30, 2205–2215. [Google Scholar] [CrossRef]
  9. Borrelli, J.; Creath, R.A.; Rogers, M.W. A method for simulating forward falls and controlling impact velocity. MethodsX 2023, 11, 102399. [Google Scholar] [CrossRef]
  10. Crenshaw, J.R.; Bernhardt, K.A.; Achenbach, S.J.; Atkinson, E.J.; Khosla, S.; Kaufman, K.R.; Amin, S. The circumstances, orientations, and impact locations of falls in community-dwelling older women. Arch. Gerontol. Geriatr. 2017, 73, 240–247. [Google Scholar] [CrossRef]
  11. Schonnop, R.; Yang, Y.; Feldman, F.; Robinson, E.; Loughin, M.; Robinovitch, S.N. Prevalence of and factors associated with head impact during falls in older adults in long-term care. Can. Med. Assoc. J. 2013, 185, E803–E810. [Google Scholar] [CrossRef]
  12. Jia, N. Detecting Human Falls with a 3-Axis Digital Accelerometer. Analog Dialogue 2009, 43, 1–7. [Google Scholar]
  13. Tang, J.; He, B.; Xu, J.; Tan, T.; Wang, Z.; Zhou, Y.; Jiang, S. Synthetic IMU Datasets and Protocols Can Simplify Fall Detection Experiments and Optimize Sensor Configuration. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 1233–1245. [Google Scholar] [CrossRef] [PubMed]
  14. Geyer, H.; Herr, H. A muscle-reflex model that encodes principles of legged mechanics produces human walking dynamics and muscle activities. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 263–273. [Google Scholar] [CrossRef] [PubMed]
  15. OpenSim. How Inverse Kinematics Works. Available online: https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/overview (accessed on 10 June 2024).
  16. Palmerini, L.; Klenk, J.; Becker, C.; Chiari, L. Accelerometer-Based Fall Detection Using Machine Learning: Training and Testing on Real-World Falls. Sensors 2020, 20, 6479. [Google Scholar] [CrossRef]
  17. Lee, Y.; Yeh, H.; Kim, K.-H.; Choi, O. A real-time fall detection system based on the acceleration sensor of smartphone. Int. J. Eng. Bus. Manag. 2018, 10, 315–326. [Google Scholar] [CrossRef]
  18. Zhang, B. Research on biomechanical simulation and simulation of badminton splitting and hanging action based on edge computing. Mob. Inf. Syst. 2021, 2021, 5527879. [Google Scholar] [CrossRef]
  19. OpenSim. Building a Dynamic Walker in Matlab. OpenSim Confluence Wiki. Available online: https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53084230/Building+a+Dynamic+Walker+in+Matlab (accessed on 27 June 2024).
  20. Sherman, M.A.; Seth, A.; Delp, S.L. Simbody: Multibody dynamics for biomedical research. Procedia IUTAM 2011, 2, 241–261. [Google Scholar] [CrossRef]
  21. Chandler, R.; Clauser, C.; McConville, J.; Reynolds, H.; Young, J. Investigation of Inertial Properties of the Human Body; Wright-Patterson Air Force Base: Dayton, OH, USA, 1975.
  22. da Silva, M.R.; Marques, F.; Tavares da Silva, M.; Flores, P. A compendium of contact force models inspired by Hunt and Crossley’s cornerstone work. Mech. Mach. Theory 2022, 167, 104501. [Google Scholar] [CrossRef]
  23. Seth, A.; Sherman, M.A.; Eastman, P.; Delp, S.L. Minimal formulation of joint motion for biomechanisms. Nonlinear Dyn. 2010, 62, 291–303. [Google Scholar] [CrossRef]
  24. Carvalho, A.S.; Martins, J.M. Exact restitution and generalizations for the Hunt–Crossley contact model. Mech. Mach. Theory 2019, 139, 174–194. [Google Scholar] [CrossRef]
  25. Mauldin, T.R.; Canby, M.E.; Metsis, V.; Ngu, A.H.; Rivera, C.C. Smartfall: A smartwatch-based fall detection system using deep learning. Sensors 2018, 18, 3363. [Google Scholar] [CrossRef] [PubMed]
  26. Theodoridis, T.; Solachidis, V.; Vretos, N.; Daras, P. Human fall detection from acceleration measurements using a recurrent neural network. Int. Fed. Med. Biol. Eng. (IFMBE) Proc. 2018, 66, 145–149. [Google Scholar]
  27. Xing, W.; Bei, Y. Medical Health Big Data Classification Based on k-NN Classification Algorithm. IEEE Access 2020, 8, 28808–28819. [Google Scholar] [CrossRef]
  28. MathWorks. Long Short-Term Memory Networks (LSTM). Deep Learning Toolbox Documentation. Available online: https://www.mathworks.com/help/deeplearning/ug/long-short-term-memory-networks.html (accessed on 27 June 2024).
  29. MathWorks. Classification KNN. Statistics and Machine Learning Toolbox Documentation. Available online: https://www.mathworks.com/help/stats/classificationknn.html (accessed on 27 June 2024).
Figure 1. Connecting the Raspberry Pi Pico and MPU-6050 sensor.
Figure 2. Developed IMU sensor: (a) CAD model of the developed IMU; (b) IMU sensor.
Figure 3. Setup for gait acceleration measurement.
Figure 4. Recorded normalized acceleration values on the Gait event of Person 1.
Figure 5. Flowchart for developing the digital model.
Figure 6. Developed model: (a) schematic representation of the segmented model; (b) Open Sim developed digital model.
Figure 7. Topology view of the digital model.
Figure 8. Damped forward fall simulation.
Figure 9. Training data acquired through simulation for the forward damped fall: (a) Recorded speed on the X and Y axis [m/s]; (b) Recorded acceleration on the X and Y axis.
Figure 10. Forward fall simulation.
Figure 11. Training data acquired through simulation for the forward fall: (a) Recorded speed on the X and Y axis [m/s]; (b) Recorded acceleration on the X and Y axis.
Figure 12. Plotted RMSE curve.
Figure 13. Plotted Loss curve.
Figure 14. Testing performance of the trained LSTM network.
Figure 15. Training performance of the trained LSTM network (a) Confusion matrix; (b) True positive (TPR) and false negative rates (FNR).
Figure 16. Confusion matrix for the trained Fine KNN.
Figure 17. Forward fall simulation of Subject 3.
Figure 18. Training data acquired through experimental measurements of fall events for Subject 1: (a) Recorded acceleration on the X and Y axes [m/s2] for the damped forward fall; (b) Recorded acceleration on the X and Y axes [m/s2] for the undamped forward fall.
Figure 19. Comparison of the recorded Y-axis normalized acceleration values for the forward damped fall event between simulated and experimental measurements for the 5 subjects.
Figure 20. LSTM network fall event predictions for Subject 1: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 21. LSTM network fall event predictions for Subject 2: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 22. LSTM network fall event predictions for Subject 3: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 23. LSTM network fall event predictions for Subject 4: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 24. LSTM network fall event predictions for Subject 5: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 25. LSTM network fall event predictions for Subject 6: (a) Predictions for the damped forward fall; (b) Predictions for the undamped forward fall.
Figure 26. LSTM network prediction for normal walking of the six participants.
Figure 27. LSTM network prediction for sitting in bed event for Subject 1.
Figure 28. KNN network prediction for Subject 1.
Figure 29. KNN network prediction for Subject 2.
Figure 30. KNN network prediction for Subject 3.
Figure 31. KNN network prediction for Subject 4.
Figure 32. KNN network prediction for Subject 5.
Figure 33. KNN network prediction for Subject 6.
Table 1. Mean values of anthropometry data for the 6 subjects [22].

Parameter | Head | Torso | Thigh | Calf | Upper Arm | Forearm
Weight [kg] | 3.94 | 33.99 | 6.52 | 2.69 | 1.84 | 1.11
Length [cm] | - | 66.45 | 45.82 | 37.53 | 30.52 | 26.3
Width/Diameter [cm] | 20.72 | 28.62 | 13.71 | 9.96 | 10.44 | 9.06
Principal Moments of Inertia [kgm2], Ixx | 0.00001708 | 0.00161937 | 0.00011514 | 0.000003362 | 0.00001330 | 0.00000647
Iyy | 0.0000164 | 0.00108763 | 0.00012212 | 0.00000304 | 0.00001327 | 0.00000630
Izz | 0.00002008 | 0.00037851 | 0.00002125 | 0.000000701 | 0.00000220 | 0.00000086
Table 2. Coordinate Limit Force Parameters.

Parameter | Value | Parameter | Value
upperStiffness | 0.7 [Nm/rad] | damping | 0.8 [Nm/(rad/s)]
lowerStiffness | 0.4 [Nm/rad] | transition | 5 [°]
kneeUpperLimit | 0.7 [Nm/rad] | torsoUpperLimit | 0 [rad]
kneeLowerLimit | −140 [°] | torsoLowerLimit | 0 [rad]
hipUpperLimit | 100 [°] | HeadUpperLimit | 0 [rad]
hipLowerLimit | −100 [°] | HeadLowerLimit | 0 [rad]
Table 3. Subjects’ physical data.

Subject Number | Weight [kg] | Height [cm] | Age [Years] | Sensor Mounting
Subject 1 | 91 | 173 | 36 | Front
Subject 2 | 93 | 180 | 25 | Front
Subject 3 | 89 | 176 | 50 | Back
Subject 4 | 94 | 184 | 50 | Back
Subject 5 | 80 | 175 | 52 | Front
Subject 6 | 68 | 180 | 38 | Back
Table 4. Obtained results (fall detection accuracy per subject).

Type of Fall | Subject 1 | Subject 2 | Subject 3 | Subject 4 | Subject 5 | Subject 6
Damped | 98.8% | 98.2% | 99.6% | 99.1% | 93.4% | 98.5%
Undamped | 99.7% | 92.5% | 98% | 97.1% | 100% | 98.4%