Article

Machine Learning-Based Approach to Identifying Fall Risk in Seafarers Using Wearable Sensors

1 Institute of Well-Aging Medicare & Chosun University LAMP Center, Chosun University, Gwangju 61452, Republic of Korea
2 Department of Biomechanics, University of Nebraska at Omaha, Omaha, NE 68182, USA
3 Department of Computer Science, University of Nebraska at Omaha, Omaha, NE 68182, USA
4 Department of Computer Science and Statistics, Chosun University, Gwangju 61452, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(2), 356; https://doi.org/10.3390/jmse12020356
Submission received: 16 January 2024 / Revised: 14 February 2024 / Accepted: 17 February 2024 / Published: 19 February 2024
(This article belongs to the Special Issue Application of Advanced Technologies in Maritime Safety)

Abstract:
Falls on a ship can cause severe injuries, and falling off board, referred to as “man overboard” (MOB), can lead to death. Thus, it is crucial to detect the risk of falling accurately and in a timely manner. Unlike camera and radar sensors, wearable sensors are affordable and easily usable regardless of weather conditions. This study aimed to identify the fall risk level (i.e., high or low risk) among individuals on board using wearable sensors. We collected walking data from accelerometers while simulating a ship’s rolling motions using a computer-assisted rehabilitation environment (CAREN). With the best features selected by LASSO, eight machine learning (ML) models were implemented with the synthetic minority oversampling technique (SMOTE) and the best-tuned hyperparameters. All ML models classified fall risk with good overall accuracy (0.7778 to 0.8519), sensitivity (0.7556 to 0.8667), specificity (0.7778 to 0.8889), and AUC (0.7673 to 0.9204). Logistic regression showed the best performance in terms of the AUC for both training (0.9483) and testing (0.9204). We anticipate that this study will help identify the risk of falls on ships and aid in developing a monitoring system capable of averting falls and detecting MOB situations.

1. Introduction

Human falls cause serious injuries, and falls are especially likely to occur on ships because of their motion. The loss of balance caused by a ship’s movements at sea leads to falls, and, owing to the iron structure of ships, falls on board can result in much more severe injuries than falls on land. In the worst-case scenario, falling off a ship, called “man overboard” (MOB), can lead to death. In fact, 22 people fall off cruise ships each year, of whom 79 percent either go missing or do not survive [1]. From a broader perspective, an estimated 1000 or more people are involved in MOB incidents, and the survival rate is meager [2]. This low survival rate can be attributed to the unnoticed nature of MOB situations, in which the occurrence (e.g., time and location) is not promptly identified, leading to a drastic drop in the faller’s body temperature (i.e., hypothermia) [3]. An MOB can be handled far more effectively if the situation is immediately recognized by another crew member on the ship [4]. However, because of the complex structure of a ship’s deck, the coverage of closed-circuit television (CCTV) surveillance cameras is limited, and blind spots increase with the size of the ship. Direct observation of falls is also difficult because the number of crew members is limited. Therefore, it is essential to detect human falls on ships accurately and in a timely manner, which first requires identifying the risk of falling while walking on a vessel in motion.
Traditional techniques for fall risk classification often rely on clinical assessments and scoring systems based on observations, questionnaires, and physical examinations conducted by healthcare professionals. Healthcare providers use various assessment tools, such as the Timed Up and Go (TUG) test, Berg Balance Scale (BBS), Tinetti Performance-Oriented Mobility Assessment (POMA), and Functional Reach Test (FRT) to evaluate balance, gait, mobility, and other factors related to fall risk [5,6,7,8]. Patients may be asked to complete questionnaires, such as the Falls Efficacy Scale-International (FES-I), Falls Risk Assessment Tool (FRAT), or other self-reported surveys to assess their perception of their own fall risk and related factors [9,10]. However, these traditional techniques rely heavily on subjective assessments and observations by healthcare professionals and may have limitations in predicting fall risk accurately, especially in individuals with complex medical conditions or those at higher risk for falls. Therefore, integrating machine learning (ML) techniques into fall risk classification can provide additional insight and improve the accuracy of predictive models.
In recent years, ML techniques have been widely applied in many research domains. Specifically, ML techniques such as classification and clustering tackle many automated recognition or prediction problems. Many researchers have utilized ML to predict or detect falls and identify the risk of falls [11,12,13,14]. Thakur and Han [11] proposed an optimal ML approach to improve fall detection in assisted living by comparing 19 different ML methods. Usmani et al. [12] explored the latest research trends in fall detection and prevention using ML algorithms. Noh et al. [13] developed an extreme gradient boosting (XGB) model to predict the fall risk level in older adults by identifying the optimal gait features. Chakraborty and Sorwar [14] discriminated between fallers and non-fallers based on the long-term monitoring of natural fall data using three different ML approaches. These studies primarily examined older adults in the biomedical or healthcare domains; far fewer studies have addressed the prediction or detection of seafarers’ falls in the maritime field. Only a few previous studies have attempted to detect MOBs through ML technologies [15,16,17,18,19]. Tsekenis et al. [15] implemented an ML-based system to detect MOBs using radar sensors, achieving the highest accuracy (97.24%) with a random forest algorithm. Bakalos et al. investigated identifying MOB events using simple RGB streams and thermal imagery through convolutional spatiotemporal autoencoders, which can detect a fall as an anomaly [16,17]. Armeniakos et al. [18] built a human fall detection system using multiple long-range millimeter-wave band radar sensors and emphasized that the system can detect and track real human fall scenarios. Gürüler et al. [19] designed an MOB detection system module using GPS, radio frequency, and a mobile ad hoc network to warn of an MOB situation, including the location and information of the individual involved in the MOB.
However, these approaches require expensive devices, such as radar sensors and video cameras, and some challenges remain: video cameras are affected by environmental conditions (e.g., fog, rain, and snow), and radar sensors can be disrupted by obstacles or reflective objects. To address these limitations, this study used a wearable sensor with ML algorithms to identify fall risk levels, since wearable sensors are inexpensive, lightweight, convenient to use in both laboratory and real-world settings, and unaffected by weather conditions. Recent advancements in wearable sensor technology have led to its versatile application in various research domains, particularly in healthcare. The flexible nature of these sensors has facilitated their widespread adoption, contributing to innovative solutions and improvements in diverse fields, including fall detection methods [20,21,22].
The purpose of this study was to classify the risk of human falls in a rolling situation of a ship. Our specific aims for this study were:
  • To see whether an ML approach can be applied to identify fall risks with a wearable sensor;
  • To identify the best gait features for the prediction of the fall risk level (high or low) during a ship’s rolling conditions;
  • To examine which ML models perform best for fall risk classifications under a ship’s roll motions.
To achieve these goals, a computer-assisted rehabilitation environment (CAREN) was used to systematically simulate the rolling motions of a ship. In this study, we simulated ship roll motions of up to 20 degrees. We also implemented eight ML classification models with the best feature set and hyperparameters. The detailed experimental design is described in Section 3.
The main contributions of this study are summarized as follows:
  • To the best of our knowledge, this study marks the initial endeavor to detect fall risks in the maritime field using wearable sensors, as the majority of the previous studies used video cameras or radar sensors, often focusing on older adults in biomedical and healthcare fields;
  • We comprehensively analyzed eight ML models for fall risk classification implemented with a synthetic minority oversampling technique (SMOTE) and hyperparameters tuning;
  • The findings of this study can be applied to prevent seafarers or passengers from falls and MOBs by determining the risk of falls during a ship’s rolling motions.
The remainder of this paper is structured as follows: Section 2 reviews related work on fall detection. Section 3 describes this study’s experimental design and methodology, including data collection, data preprocessing, ML techniques, and overall implementation. The results of the study are presented in Section 4. Finally, Section 5 discusses the findings and provides conclusions and future research directions.

2. Related Work

Fall detection is a critical area of research in healthcare and assistive technology, aiming to prevent fall-related injuries among vulnerable populations. Over the years, researchers have explored various methodologies and technologies to develop effective fall detection systems. In this section, we review the existing literature on fall detection, focusing on recent advancements and key findings in the field.

2.1. Traditional Sensor-Based Approaches to Fall Detection

Early efforts in fall detection primarily relied on traditional approaches, including rule-based systems, wearable sensors, and ambient sensors [23,24,25]. These systems often involved threshold-based algorithms to detect abrupt changes in motion patterns indicative of a fall. While effective to some extent, traditional approaches face challenges, such as high false alarm rates and limited adaptability to diverse environments and user behaviors. In particular, ships are affected by continuous movements, including rolling, pitching, and heaving, which can cause significant variability and noise in sensor data. However, many existing fall detection studies using sensors have been conducted in static environments [11,21,26,27]. Therefore, distinguishing between normal ship movement and fall events requires sophisticated algorithms that can robustly detect falls under dynamic movement patterns. The present study accounts for ship movement, to some extent, by applying rolling motions.

2.2. Hidden Markov Model (HMM) for Fall Detection

Many studies have reported high recognition rates for Hidden Markov Models (HMMs) in human activity recognition (HAR), particularly in fall detection using wearable sensors [26,28,29,30]. Moreover, HMMs exhibit superior efficiency, interpretability, and scalability owing to their innate and robust modeling capabilities for time series data [29,31]. However, HMMs require a predefined number of states, which makes it difficult for them to handle complex situations, and they are mainly suited to sequential data, especially time series; applying them to other data types and problems is therefore difficult. On the other hand, ML can provide more flexible models because it can handle different types of data, such as images, text, and speech, and it can model complex relationships among different variables. The present study evaluates eight ML models with data obtained from wearable sensors, helping researchers conducting similar studies in the future decide which models to explore further and potentially adopt for fall risk classification tasks.

3. Materials and Methods

This study includes data collection, data preprocessing, feature extraction/selection, and classification. We benchmarked a well-structured research pipeline for wearable-based HAR research by Liu et al. [31]. Figure 1 depicts a framework of the whole process.

3.1. Data Collection

We recruited 30 healthy participants for this study. In Table 1, the participants’ demographics are summarized. All participants read and signed a consent form approved by the Institutional Review Board at the University of Nebraska Medical Center (IRB 141-21-EP). A general inclusion criterion was that participants should be between 19 and 55 years old. Participants were excluded if they had:
  • A major lower extremity injury or surgery;
  • Known cardiovascular conditions that make it unsafe for them to exercise;
  • A history of dizziness due to vestibular disorders, such as Meniere’s disease and vertigo;
  • Any difficulty in walking in unstable, moving environments.
We recorded the participants’ movements at 100 Hz with ten cameras using a 3D motion capture system (Vicon Motion System Ltd., Oxford, UK). Anatomical landmarks were marked with 37 reflective markers using the Plug-In Gait full-body model [32]: four markers were applied to the head, five to the torso, 12 to the upper limbs, four to the pelvis, and 12 to the lower limbs. In addition, we placed seven accelerometers (Xsens, Enschede, The Netherlands) on the pelvis, feet, shanks, and thighs to obtain three-dimensional accelerations, sampled at 100 Hz. This study analyzed acceleration data from the pelvis because upper body motion is more appropriate for measuring balance [33]. As shown in Figure 2a, the reflective markers and accelerometers were placed accordingly. The ship’s roll motion was simulated up to 20 degrees using the CAREN system (Motek, Amsterdam, The Netherlands). Participants walked for two minutes at their own pace using the CAREN system with a split-belt treadmill, as shown in Figure 2b. All participants were fitted with safety harnesses to prevent accidents on the moving platform. Nine different conditions were applied: no rolling and 5-, 10-, 15-, and 20-degree rolling with slow (12 s) and fast (6 s) rolling cycles. Previous studies used similar incline degrees (5, 10, 15, and 20 degrees) to determine the evacuation walking time in an emergency at sea [34,35,36], so we chose the same rolling angles for our experiments. We picked the 12 s rolling cycle of a passenger ship and the 6 s rolling cycle of a general cargo ship for the slow- and fast-rolling cycles, respectively [37]. We conducted the nine walking trials in random order to prevent learning effects.
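As an illustrative sketch only (not the CAREN system’s actual control code), the experimental rolling conditions can be modeled as a sinusoidal roll profile; the sinusoidal shape is an assumption, while the amplitudes (up to 20 degrees), cycle lengths (12 s and 6 s), trial duration (two minutes), and sampling rate (100 Hz) come from the protocol above:

```python
import math

def roll_angle(t, amplitude_deg, period_s):
    """Roll angle (degrees) at time t, assuming a sinusoidal rolling profile."""
    return amplitude_deg * math.sin(2 * math.pi * t / period_s)

def roll_series(amplitude_deg, period_s, duration_s=120, fs=100):
    """Sample a roll-angle time series at fs Hz over duration_s seconds."""
    n = int(duration_s * fs)
    return [roll_angle(i / fs, amplitude_deg, period_s) for i in range(n)]

# Example: the 20-degree condition with the fast (6 s) rolling cycle
fast_roll = roll_series(20, 6)
```

The slow condition would use `roll_series(20, 12)`; the no-rolling condition is simply a zero amplitude.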

3.2. Data Preprocessing

Using the collected accelerations from the pelvis for nine different walking trials, we first labeled the data in terms of the fall risk as high or low. Choi et al. found significant balance and stability variations in rolling above 15 degrees [38]. Thus, the data on the walking trials in 15 and 20 degrees of rolling for both the slow and fast cycles were labeled as “high risk”, and the remaining data (i.e., no rolling and 5 and 10 degrees in slow and fast cycles) were labeled as “low risk”. We also randomly divided the data into training (70%) and test (30%) datasets, as shown in Table 2.
In [38], we calculated the center of mass excursion (COME) and the variability in the margin of stability (vMOS) with the data collected from the motion capture system, since these two variables represent the balance or stability during walking and have been proven to be reasonable predictors of falls in many studies [39,40,41,42,43,44,45,46,47]. To verify the determination of data divided into high and low risk, we compared the difference between the two groups using an independent samples t-test for the four variables: mediolateral and anterior–posterior directional COMEs and vMOSs, denoted by ML-COME, AP-COME, ML-vMOS, and AP-vMOS, respectively. The t-test analysis revealed a statistically significant difference in the two groups’ risk of falling movements (p < 0.001), as shown in Table 3 and Figure 3.
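For illustration, the group comparison can be sketched with the independent samples t statistic; this sketch uses Welch’s unequal-variance form, which may differ from the exact variant and statistical package used in the study:

```python
import math
from statistics import mean, variance

def welch_t(sample1, sample2):
    """Welch's t statistic for two independent samples (unequal variances)."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = mean(sample1), mean(sample2)
    v1, v2 = variance(sample1), variance(sample2)  # sample variances (n - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical COME values for high- and low-risk trials (illustrative numbers)
t_stat = welch_t([5.0, 6.0, 7.0, 8.0], [1.0, 2.0, 3.0, 4.0])
```

A large |t| (with the corresponding p-value below 0.001) indicates that the high- and low-risk groups differ, supporting the labeling scheme.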
To extract gait features from the pelvis accelerations, the initial step involved identifying each step event. The methods employed for detecting step events and extracting features are consistent with the peak detection method in our previous works [48,49]. Table 4 lists the twenty gait features extracted from the pelvis. Each feature was also calculated as its average value (denoted by a lowercase “a”), symmetry value (denoted by a lowercase “s”), and variability value (denoted by a lowercase “v”), yielding 60 features in total. Detailed methods for step detection and feature extraction can be found in [48,49], respectively. We normalized the features to have zero mean and unit variance.
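The two preprocessing steps above can be sketched in Python. The threshold-based local-maximum detector below is a simplification of the peak detection method in [48,49], not a reproduction of it, and the threshold and separation values are illustrative:

```python
def detect_peaks(signal, threshold, min_separation):
    """Return indices of local maxima above `threshold`, spaced at least
    `min_separation` samples apart (a simple step-event detector)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_separation:
                peaks.append(i)
    return peaks

def zscore(values):
    """Normalize a feature vector to zero mean and unit variance."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return [(v - m) / sd for v in values]
```

For example, `detect_peaks([0, 1, 0, 0, 2, 0, 0, 3, 0], 0.5, 2)` finds the three peaks, and `zscore` would be applied column-wise to the 60 extracted features.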

3.3. Feature Selection Using LASSO

Feature selection is a key part of this work. Feature engineering in HAR encompasses various techniques, including feature stacking, feature space reduction, and the design of high-level HAR features [51,52,53]. We applied the least absolute shrinkage and selection operator (LASSO) to select a subset of relevant features, since LASSO was the best-performing feature selection method for the same data in a previous study [50]. LASSO minimizes the residual sum of squares subject to a constraint on the L1-norm of the regression coefficients [54], which shrinks the coefficients of less important variables to zero and yields a sparse model. LASSO is defined as:
$$\min_{\beta}\;\sum_{i=1}^{n}\left(y_i-\sum_{j=1}^{p}x_{ij}\beta_j\right)^{2}+\lambda\sum_{j=1}^{p}\left|\beta_j\right|$$
where $y_i$ is the outcome of the $i$-th subject and $x_{ij}$ is its $j$-th variable; $\lambda$ is a non-negative hyperparameter; and $\beta$ is the vector of regression coefficients. The best $\lambda$ was chosen to minimize the mean squared error (MSE) based on 10-fold cross-validation (CV). We repeated this step 100 times with the training data and, using a cut-off threshold (selected at least 50 times), derived the most frequently selected features across the 100 iterations.
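A minimal coordinate-descent solver for the LASSO objective above, for illustration only (the study used the glmnet package in R, and this sketch omits the CV-based tuning of lambda):

```python
def soft_threshold(a, b):
    """S(a, b) = sign(a) * max(|a| - b, 0)."""
    if a > b:
        return a - b
    if a < -b:
        return a + b
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_beta sum_i (y_i - sum_j x_ij * b_j)^2 + lam * sum_j |b_j|.
    X is a list of feature rows, assumed roughly standardized."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the residual excluding feature j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam / 2) / z
    return beta
```

With a sufficiently large penalty, coefficients of weakly informative features are driven exactly to zero, which is the sparsity property the selection procedure exploits.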

3.4. ML Classification Models

3.4.1. SMOTE Resampling

A majority class is a class with a larger number of samples, and a minority class is a class with a smaller number of samples. Based on the label distribution, our datasets are slightly skewed, as high-risk labels form the minority class. With an imbalanced dataset, it is harder for a model to learn the high-risk group, which is the class of interest: the accuracy of a traditional classifier is biased toward the majority class when there are not enough minority samples, and even a high overall accuracy does not guarantee that minority class samples are classified correctly. Resampling techniques can mitigate the problem of imbalanced classes by adjusting the class distribution in the training data. In general, there are two resampling methods, namely oversampling and undersampling.
For this study, we selected a popular resampling method called synthetic minority oversampling technique (SMOTE), which was proposed by Chawla et al. [55] in 2002. With SMOTE, synthetic examples of the minority class are created to add instances to the minority class to balance the classes [55]. The SMOTE method creates new samples by linearly interpolating two minority samples. By doing so, we are alleviating the overfitting problems caused by random oversampling, making class distributions more balanced and improving the generalization capabilities of the classifier.
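The core interpolation step of SMOTE can be sketched as follows; this is an illustrative simplification (the study used an existing SMOTE implementation, and details such as neighbor selection may differ):

```python
import random

def smote(minority, n_new, k=5, seed=42):
    """Generate n_new synthetic minority samples by linear interpolation
    between a minority sample and one of its k nearest minority neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (excluding x itself), by squared distance
        neighbors = sorted((m for m in minority if m is not x),
                           key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))[:k]
        nb = rng.choice(neighbors)
        u = rng.random()  # interpolation factor in [0, 1]
        synthetic.append([a + u * (b - a) for a, b in zip(x, nb)])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the new points stay inside the minority region rather than duplicating existing samples, which is what distinguishes SMOTE from random oversampling.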

3.4.2. ML Algorithms and Hyperparameters Tuning

In this study, we evaluated six ML algorithms to classify “high” or “low” fall risk: logistic regression (LR) [56], decision tree (DT) [57], k-nearest neighbors (KNN) [58], random forest (RF) [59], extreme gradient boosting (XGB) [60], and support vector machine (SVM) [61]; since the SVM was trained with three different kernels, this yielded eight models in total. These algorithms are commonly used in supervised binary classification problems. We built and tested these models using the caret package [62] in R (version 4.2.1).
For binary classification, LR is one of the most popular methods [63]. The LR uses the maximum likelihood to find the regression coefficients for each feature so that the predicted probability of each class is as close to the actual class as possible. The estimated coefficients can calculate the probability of a given observation falling into each class [64].
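A minimal sketch of maximum-likelihood fitting for LR via gradient ascent (illustrative only; the study used caret’s LR implementation in R, which relies on different numerical methods):

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, n_iter=3000):
    """Fit LR weights and intercept by gradient ascent on the log-likelihood.
    y entries are 0 (e.g., low risk) or 1 (e.g., high risk)."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(n_iter):
        preds = [sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) for xi in X]
        errs = [yi - pi for yi, pi in zip(y, preds)]  # ascent direction
        for j in range(p):
            w[j] += lr * sum(errs[i] * X[i][j] for i in range(n)) / n
        b += lr * sum(errs) / n
    return w, b
```

The fitted coefficients then give the class probability for a new observation as `sigmoid(b + w · x)`, with a 0.5 threshold separating the two classes.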
A DT performs classification using recursive binary splitting. A tree is constructed from a root node, and splitting occurs in each node until it reaches the minimum size of a class subgroup or a stop condition. During the construction of a tree, the Gini index or entropy is used to assess each split’s quality [65].
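The Gini index used to assess split quality can be sketched as follows (a generic illustration of the criterion, not the caret/rpart internals):

```python
def gini(labels):
    """Gini impurity of a node: 1 - sum_k p_k^2 over class proportions p_k."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_quality(left, right):
    """Weighted Gini impurity of a binary split (lower is better)."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A perfectly pure split (all “high” on one side, all “low” on the other) scores 0, so the tree greedily prefers splits that separate the classes.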
A KNN classifier constructs a decision boundary by identifying the k samples closest to a given observation [66]. The KNN assigns a class to the observation based on the simple majority vote of its k nearest neighbors [65]. We used the default configuration for the KNN classifier to calculate the distance, known as the Minkowski distance metric.
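A minimal Python sketch of KNN majority voting with the Minkowski distance (here with p = 2, i.e., Euclidean, matching the default configuration described above):

```python
from collections import Counter

def minkowski(a, b, p=2):
    """Minkowski distance; p = 2 gives the Euclidean distance."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def knn_predict(train_X, train_y, query, k=5, p=2):
    """Classify `query` by majority vote among its k nearest training samples."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: minkowski(train_X[i], query, p))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

For instance, a query close to a cluster of “low”-labeled training points is assigned the “low” class by the vote of its k neighbors.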
An RF solves classification tasks by building many decision trees. Unlike DT, an RF is composed of a large number of individual trees [59,67,68]. Because of the sensitivity of each tree to its training data, each tree’s structure changes when given slightly different data each time. Each tree is constructed using a subset of the training data and splitting each node according to the best randomly selected feature set. A final classification is determined based on the majority vote from the decision trees. The RF is less prone to overfitting [64].
XGB is an ensemble learning algorithm developed by Chen and Guestrin in 2016 [60]. The XGB algorithm has been applied in many fields because it is fast, accurate, and robust. It optimizes and enhances a gradient-boosted DT algorithm, which can parallelize computations, build approximate trees, and process sparse data efficiently. In addition, XGB optimizes CPU and memory usage, making it well suited to recognizing and classifying multidimensional data features [69].
An SVM maps the features onto high-dimensional space using kernels to accommodate nonlinear class boundaries and then constructs a hyperplane that effectively separates observations [61,70,71]. As new observations are provided, they will be mapped into high-dimensional space to assign a class based on the hyperplane [64]. We built SVM classifiers with three kernels: a linear kernel (SVM-L), a radial basis function kernel (SVM-RBF), and a polynomial kernel (SVM-Poly).
ML models require optimized hyperparameters to achieve robust performance results. Default hyperparameter settings cannot optimize ML techniques, and this crucial step requires additional attention [72,73]. The hyperparameters of each model had to be tuned during the training phase to construct a model that performed relatively well. We adjusted the hyperparameters of each method as specified in Table 5 using a grid search with a 10-fold CV. The best hyperparameters were set according to the area under the receiver operating characteristic (ROC) curve (AUC).
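The grid search with k-fold CV can be sketched generically as follows; the pluggable `fit`/`score` functions are illustrative stand-ins (the study tuned real models by AUC through caret, not this code):

```python
import random
import statistics

def kfold_indices(n, k=10, seed=1):
    """Split sample indices into k roughly equal CV folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def grid_search(X, y, grid, fit, score, k=10):
    """Return the hyperparameter value (and mean CV score) that scores best."""
    folds = kfold_indices(len(X), k)
    best, best_score = None, float("-inf")
    for param in grid:
        scores = []
        for fold in folds:
            train = [i for i in range(len(X)) if i not in fold]
            model = fit([X[i] for i in train], [y[i] for i in train], param)
            scores.append(score(model, [X[i] for i in fold], [y[i] for i in fold]))
        m = statistics.mean(scores)
        if m > best_score:
            best, best_score = param, m
    return best, best_score
```

Each candidate hyperparameter value is evaluated on every held-out fold, and the value with the highest mean score across folds is retained, exactly the selection logic applied (with AUC) to the models in Table 5.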

3.5. Evaluation Metrics

The testing dataset was used to evaluate the predictive performance of the ML classification models during the model evaluation phase. Performance comparisons were made using the accuracy, sensitivity, specificity, and AUC metrics. Accuracy refers to the proportion of true positives and true negatives among all cases. Sensitivity (also known as the true positive rate) is the probability of a positive test, assuming that the results are genuinely positive. Specificity (also known as true negative rate) refers to the probability of a negative test, assuming that the results are truly negative. The accuracy, sensitivity, and specificity are defined as:
$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}$$
$$\mathrm{Sensitivity}=\frac{TP}{TP+FN}$$
$$\mathrm{Specificity}=\frac{TN}{TN+FP}$$
where $TP$ (true positive) is a positive case correctly predicted as positive; $TN$ (true negative) is a negative case correctly predicted as negative; $FP$ (false positive) is a negative case incorrectly predicted as positive; and $FN$ (false negative) is a positive case incorrectly predicted as negative.
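The three metrics follow directly from the four confusion-matrix counts; a short sketch with illustrative counts (not the study’s results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Example with hypothetical counts
metrics = classification_metrics(tp=8, tn=7, fp=2, fn=1)
```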
This study mainly used the AUC to evaluate ML classifiers as the ROC curve provides valuable insight into the classifier’s performance across the entire range of possible operating points, helping us understand its strengths and limitations. Also, the AUC is less affected by class imbalance compared to other metrics like accuracy [74]. The ROC curve captures the trade-off between the true positive rate and the false positive rate for all decision boundaries [55] and offers a straightforward interpretation. The AUC is a valuable measure of a binary classifier’s performance that summarizes the ROC curve into a single value between 0 and 1, with 1 representing the best result.
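The AUC can be computed without tracing the ROC curve, via its equivalent Mann–Whitney interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A sketch (illustrative; the study computed AUCs through caret):

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) score pairs ranked
    correctly; tied pairs count as 0.5."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation of the two classes gives an AUC of 1.0, and a scoreless (random) classifier gives 0.5, matching the interpretation in the text.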

3.6. Software

All analyses were performed using R statistical software (version 4.2.1). The glmnet package [75] was used to perform the LASSO for feature selection. All ML classification models, including the hyperparameter tuning process, were implemented and tested using the caret package [62].

4. Results

4.1. Feature Selection Results

In this study, the LASSO method was employed for feature selection. We repeated the LASSO 100 times with randomly picked training data to find the most frequently selected features. The smallest MSE determined the best λ of the LASSO for each iteration with a 10-fold CV. Figure 4 shows an example of the λ tuning process for the LASSO. Based on the 100 iterations, we found the most frequently selected features for fall risk classification, as presented in Table 6. We chose only the features selected more than 50 times to build the ML classifiers. The initial step-event-relevant features (i.e., aLHM, sLHS, sAHS, aAHS, and vAHS) and double limb support-relevant features (i.e., sAMD and aAMD) were mainly selected. We also found that most of the selected features were mediolateral and anterior–posterior directional features.

4.2. Hyperparameter Tuning Results

Using a grid search method with a 10-fold CV, we tuned the hyperparameters for the ML models based on the highest AUCs to achieve the best performances. Figure 5 illustrates examples of the hyperparameter tuning process. The hyperparameters for DT, KNN, and RF were set as cp = 0.0001, k = 5, and mtry = 2, respectively. For the XGB, the hyperparameters were determined as nround = 90, max_depth = 3, and eta = 0.1. For the SVM, we set the hyperparameters for three different kernels: (1) C = 0.1 for SVM-L; (2) C = 1 and sigma = 0.1 for SVM-RBF; and (3) C = 0.561, scale = 0.135, and degree = 2 for SVM-Poly. Table 7 summarizes the list of the best hyperparameters for all of the ML classification models.

4.3. Classification Results

The performance results of the fall risk classifications for each ML model are shown in Table 8, with the four metrics (i.e., accuracy, sensitivity, specificity, and AUC) used in this study. The result indicates that the XGB and SVM-Poly had the highest accuracy (0.8519) among all models. The SVM-RBF performed better in specificity (0.8889). The LR performed best in terms of the sensitivity (0.8667) and AUC (0.9204). Figure 6 shows the binary classification confusion matrix for the eight models to illustrate how the classifiers predict fall risk. The ROC curves, including the AUCs for all methods, are shown in Figure 7. The LR outperformed the other classifiers for both the training (AUC = 0.9483) and testing (AUC = 0.9204) datasets. Overall, the results show that the LR was the best classification model for identifying the fall risk in this study.

5. Discussion

Many researchers have studied the assessment of fall risk [11,12,13,14], but most of them examined older adults, and little research has been conducted on fall risk identification for seafarers on a ship. In addition, existing studies related to classifying fall risks have been undertaken via different devices like radars and cameras [15,16,17,18,19]. However, wearable sensor-based fall risk evaluation studies have been relatively insufficient. To the best of our knowledge, this study represents the first use of machine learning approaches to identify fall risks using wearable sensors in the context of a ship’s rolling situations. Various ML models (LR, DT, KNN, RF, XGB, SVM-L, SVM-RBF, and SVM-Poly) were applied and evaluated for fall risk classifications. The performance of each model was compared based on the evaluation metrics (i.e., accuracy, sensitivity, specificity, and AUC). The results show that the overall accuracies for all ML models were 0.7778 or greater, and the XGB and SVM-Poly had the highest accuracies (0.8519) among all models. Regarding the AUC metric, the LR performed best, with the highest AUCs for training (0.9483) and testing (0.9204). The results of this study demonstrate that an ML approach can be applied to identify fall risks with a wearable sensor. The performance in classifying fall risk levels by the proposed models (accuracy: 0.7778~0.8519; sensitivity: 0.7556~0.8667; specificity: 0.7778~0.8889; and AUC: 0.7673~0.9204) outperformed that of a previous study on older adults (accuracy: 0.67~0.70; sensitivity: 0.43~0.53; specificity: 0.77~0.84; and AUC: 0.71~0.72) [13]. The evaluation of eight ML models serves the purpose of providing researchers with a comprehensive understanding of the performance and characteristics of various approaches to fall risk classification.
By assessing multiple models, researchers gain insight into the strengths, weaknesses, and suitability of each method for their specific datasets and requirements. This approach empowers researchers to make informed decisions about which models to explore further and potentially adopt for their own fall risk classification tasks. This study also contributed to building strong ML classification models with advanced techniques like SMOTE and tuning the best hyperparameters using novel frameworks for ML-based fall risk classification.
In addition, the study exhibited the best feature for predicting the risk of falls. The LASSO selected the best features, as shown in Table 5. We found that the initial step-event relevant features (i.e., aLHM, sLHS, sAHS, aAHS, and vAHS) and double limb support relevant features (i.e., sAMD and aAMD) were primarily selected. When walking, the body system is translated mechanically, with the center of mass (COM) moving forward and recovering dynamic balance by moving another foot forward to avoid falls [76]. Since the initial contact and double limb support features might be associated with recovering balance mechanics, these features are mostly selected for detecting fall risks. Furthermore, the ship’s rolling motion may alter the COM motion and reduce dynamic stability during walking [77]. In a previous study [50], these gait features successfully predicted COM motion, which can be said to be effective in detecting fall risks and be closely related to dynamic stability. We also found that the mediolateral and anterior–posterior directional features were mainly selected. This is because the rolling motion can affect the dynamic instability by moving the body forward or left and right [38].
There are several limitations to this study. First, the sample size was small: a total of 270 data samples, split into 189 training and 81 testing samples, which makes it difficult to confirm that the data were sufficient, as machine learning generally builds better models from larger training sets. We mitigated this limitation through SMOTE oversampling, feature engineering and regularization with LASSO, and 10-fold cross-validation. Second, because the participants were not seafarers, the dataset may not fully capture the walking characteristics of seafarers. However, while some crew members are highly experienced, ships also carry trainees and novice sailors, and passenger ships carry more passengers than crew; thus, our proposed model can be used to assess fall risk among inexperienced seafarers or to ensure the safety of passengers on board. Third, while the actual motion of a ship at sea has six degrees of freedom, including rolling and pitching, the experiment in this study considered only the ship's rolling motion, which could affect the selected features. In addition, although a vessel can roll more than 20 degrees in rough seas, only up to 20 degrees of rolling was tested, because the CAREN system supports a maximum of 20 degrees; this study focused on roll, as it is the primary motion of a ship. Finally, environmental risk factors, such as weather conditions that affect a ship's movement (e.g., wind, waves, swell, rain, or snow), the type of footwear worn by the seafarers, and the friction of the walking surface (e.g., deck floor material or a wet floor), were not considered; since this study was conducted through simulations, such external risk factors could not be incorporated. 
Moreover, because more extreme motions could pose a risk of injury to the subjects, we conducted the experiment with the participants' safety as the utmost priority. To address these limitations, future studies should develop a more robust fall risk evaluation model through experiments conducted on an actual ship and involving more experienced seafarers.
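The stratified 10-fold cross-validation mentioned above splits the data so that every fold preserves the overall class proportions, which matters for a small, imbalanced dataset. A minimal fold-assignment sketch in plain Python (model fitting omitted; the 150/120 split mirrors the study's low/high-risk counts, but the labels here are synthetic):

```python
import random

def stratified_kfold(labels, k=10, seed=0):
    """Assign sample indices to k folds so that each fold preserves the
    overall class proportions (stratification)."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for cls in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        for pos, i in enumerate(idx):
            folds[pos % k].append(i)  # deal round-robin within the class
    return folds

# Hypothetical labels mirroring the study's 150 low-risk / 120 high-risk samples.
labels = ["low"] * 150 + ["high"] * 120
folds = stratified_kfold(labels, k=10)
print([len(f) for f in folds])  # [27, 27, ..., 27]: 15 low + 12 high per fold
```

Each model is then trained on nine folds and validated on the held-out fold, rotating through all ten, so every sample is used for validation exactly once.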

6. Conclusions

The objective of this study was to assess whether a wearable sensor could detect fall risk under a simulated ship's rolling motion. The proposed ML models effectively classified high- and low-fall-risk conditions. Using LASSO, we also investigated the best feature set for fall risk classification; the results showed that mediolateral and anterior–posterior directional features drive the identification of fall risk under rolling conditions. Through this study, we developed a model that reliably detects seafarers' fall risk and opened the possibility of an effective monitoring system that protects a ship's crew and passengers from falls and MOB accidents. Future research should focus on reducing computational time to enable a real-time fall prediction or detection system in a natural marine environment.

Author Contributions

Conceptualization, J.C. and J.-H.Y.; methodology, J.C.; software, J.C.; validation, J.C., B.A.K. and J.-H.Y.; formal analysis, J.C. and K.Y.S.; investigation, J.C.; data curation, J.C.; writing—original draft preparation, J.C.; writing—review and editing, J.C., B.A.K., J.-H.Y. and K.Y.S.; visualization, J.C. and K.Y.S.; supervision, B.A.K. and J.-H.Y.; funding acquisition, J.C., J.-H.Y. and K.Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a Learning & Academic Research Institution for Master’s and PhD students, and Postdocs (LAMP) Program of the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (No. RS-2023-00285353), and partly funded by a Graduate Research and Creative Activity (GRACA) Award from the Office of Research and Creative Activity (ORCA) of the University of Nebraska at Omaha (grant number: 42-1209-1223).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of Nebraska Medical Center (protocol code: 141-21-EP, approved 21 May 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the subjects to publish this paper.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors express their gratitude to all individuals who participated in the study. Special thanks are extended to Namwoong Kim for assisting with data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Örtlund, E.; Larsson, M. Man Overboard Detecting Systems Based on Wireless Technology; Chalmers University of Technology: Gothenburg, Sweden, 2018. [Google Scholar]
  2. Feraru, V.A.; Andersen, R.E.; Boukas, E. Towards an Autonomous UAV-Based System to Assist Search and Rescue Operations in Man Overboard Incidents. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020; IEEE: Piscataway, NJ, USA; pp. 57–64. [Google Scholar]
  3. Sevin, A.; Bayilmiş, C.; Erturk, İ.; Ekiz, H.; Karaca, A. Design and Implementation of a Man-Overboard Emergency Discovery System Based on Wireless Sensor Networks. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 762–773. [Google Scholar] [CrossRef]
  4. Hunter, F.; Hunter, T. Autonomous Man Overboard Rescue Equipment (AMORE). Bachelor’s Thesis, Worcester Polytechnic Institute, Worcester, MA, USA, 2013. [Google Scholar]
  5. Podsiadlo, D.; Richardson, S. The Timed “Up & Go”: A Test of Basic Functional Mobility for Frail Elderly Persons. J. Am. Geriatr. Soc. 1991, 39, 142–148. [Google Scholar] [CrossRef] [PubMed]
  6. Berg, K.; Wood-Dauphine, S.; Williams, J.I.; Gayton, D. Measuring Balance in the Elderly: Preliminary Development of an Instrument. Physiother. Can. 1989, 41, 304–311. [Google Scholar] [CrossRef]
  7. Tinetti, M.E.; Franklin Williams, T.; Mayewski, R. Fall Risk Index for Elderly Patients Based on Number of Chronic Disabilities. Am. J. Med. 1986, 80, 429–434. [Google Scholar] [CrossRef] [PubMed]
  8. Duncan, P.W.; Weiner, D.K.; Chandler, J.; Studenski, S. Functional Reach: A New Clinical Measure of Balance. J. Gerontol. 1990, 45, M192–M197. [Google Scholar] [CrossRef] [PubMed]
  9. Yardley, L.; Beyer, N.; Hauer, K.; Kempen, G.; Piot-Ziegler, C.; Todd, C. Development and Initial Validation of the Falls Efficacy Scale-International (FES-I). Age Ageing 2005, 34, 614–619. [Google Scholar] [CrossRef] [PubMed]
  10. Nandy, S.; Parsons, S.; Cryer, C.; Underwood, M.; Rashbrook, E.; Carter, Y.; Eldridge, S.; Close, J.; Skelton, D.; Taylor, S.; et al. Development and Preliminary Examination of the Predictive Validity of the Falls Risk Assessment Tool (FRAT) for Use in Primary Care. J. Public Health 2004, 26, 138–143. [Google Scholar] [CrossRef] [PubMed]
  11. Thakur, N.; Han, C.Y. A Study of Fall Detection in Assisted Living: Identifying and Improving the Optimal Machine Learning Method. J. Sens. Actuator Netw. 2021, 10, 39. [Google Scholar] [CrossRef]
  12. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest Research Trends in Fall Detection and Prevention Using Machine Learning: A Systematic Review. Sensors 2021, 21, 5134. [Google Scholar] [CrossRef]
  13. Noh, B.; Youm, C.; Goh, E.; Lee, M.; Park, H.; Jeon, H.; Kim, O.Y. XGBoost Based Machine Learning Approach to Predict the Risk of Fall in Older Adults Using Gait Outcomes. Sci. Rep. 2021, 11, 12183. [Google Scholar] [CrossRef]
  14. Chakraborty, P.R.; Sorwar, G. A Machine Learning Approach to Identify Fall Risk for Older Adults. Smart Health 2022, 26, 100303. [Google Scholar] [CrossRef]
  15. Tsekenis, V.; Armeniakos, C.K.; Nikolaidis, V.; Bithas, P.S.; Kanatas, A.G. Machine Learning-Assisted Man Overboard Detection Using Radars. Electronics 2021, 10, 1345. [Google Scholar] [CrossRef]
  16. Bakalos, N.; Katsamenis, I.; Voulodimos, A. Man Overboard: Fall Detection Using Spatiotemporal Convolutional Autoencoders in Maritime Environments. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Virtual, 29 June–2 July 2021; ACM: New York, NY, USA, 2021; pp. 420–425. [Google Scholar]
  17. Bakalos, N.; Katsamenis, I.; Karolou, E.E.; Doulamis, N. Unsupervised Man Overboard Detection Using Thermal Imagery and Spatiotemporal Autoencoders. In Novelties in Intelligent Digital Systems: Proceedings of the 1st International Conference (NIDS 2021), Athens, Greece, 30 September–1 October 2021; Frasson, C., Kabassi, K., Voulodimos, A., Eds.; IOS Press: Amsterdam, The Netherlands, 2021; Volume 338, p. 256. [Google Scholar]
  18. Armeniakos, C.K.; Nikolaidis, V.; Tsekenis, V.; Maliatsos, K.; Bithas, P.S.; Kanatas, A.G. Human Fall Detection Using MmWave Radars: A Cluster-Assisted Experimental Approach. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 11657–11669. [Google Scholar] [CrossRef]
  19. Gürüler, H.; Altun, M.; Khan, F.; Whangbo, T. Man Overboard Detection System Using IoT for Navigation Model. Comput. Mater. Contin. 2022, 71, 4955–4969. [Google Scholar] [CrossRef]
  20. El Attaoui, A.; Largo, S.; Kaissari, S.; Benba, A.; Jilbab, A.; Bourouhou, A. Machine Learning-based Edge-computing on a Multi-level Architecture of WSN and IoT for Real-time Fall Detection. IET Wirel. Sens. Syst. 2020, 10, 320–332. [Google Scholar] [CrossRef]
  21. Lee, J.-S.; Tseng, H.-H. Development of an Enhanced Threshold-Based Fall Detection System Using Smartphones with Built-In Accelerometers. IEEE Sens. J. 2019, 19, 8293–8302. [Google Scholar] [CrossRef]
  22. Nho, Y.-H.; Ryu, S.; Kwon, D.-S. UI-GAN: Generative Adversarial Network-Based Anomaly Detection Using User Initial Information for Wearable Devices. IEEE Sens. J. 2021, 21, 9949–9958. [Google Scholar] [CrossRef]
  23. Howcroft, J.; Kofman, J.; Lemaire, E.D. Prospective Fall-Risk Prediction Models for Older Adults Based on Wearable Sensors. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1812–1820. [Google Scholar] [CrossRef]
  24. Patel, M.; Pavic, A.; Goodwin, V.A. Wearable Inertial Sensors to Measure Gait and Posture Characteristic Differences in Older Adult Fallers and Non-Fallers: A Scoping Review. Gait Posture 2020, 76, 110–121. [Google Scholar] [CrossRef]
  25. Drover, D.; Howcroft, J.; Kofman, J.; Lemaire, E.D. Faller Classification in Older Adults Using Wearable Sensors Based on Turn and Straight-Walking Accelerometer-Based Features. Sensors 2017, 17, 1321. [Google Scholar] [CrossRef]
  26. Tong, L.; Song, Q.; Ge, Y.; Liu, M. HMM-Based Human Fall Detection and Prediction Method Using Tri-Axial Accelerometer. IEEE Sens. J. 2013, 13, 1849–1856. [Google Scholar] [CrossRef]
  27. Shoaib, M.; Dragon, R.; Ostermann, J. View-Invariant Fall Detection for Elderly in Real Home Environment. In Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology, PSIVT 2010, Singapore, 14–17 November 2010; pp. 52–57. [Google Scholar] [CrossRef]
  28. Liu, H.; Hartmann, Y.; Schultz, T. Motion Units: Generalized Sequence Modeling of Human Activities for Sensor-Based Activity Recognition. In Proceedings of the 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1506–1510. [Google Scholar]
  29. Xue, T.; Liu, H. Hidden Markov Model and Its Application in Human Activity Recognition and Fall Detection: A Review. In International Conference in Communications, Signal Processing, and Systems; Springer: Singapore, 2022; pp. 863–869. [Google Scholar]
  30. Iloga, S.; Bordat, A.; Le Kernec, J.; Romain, O. Human Activity Recognition Based on Acceleration Data From Smartphones Using HMMs. IEEE Access 2021, 9, 139336–139351. [Google Scholar] [CrossRef]
  31. Liu, H.; Hartmann, Y.; Schultz, T. A Practical Wearable Sensor-Based Human Activity Recognition Research Pipeline. In Proceedings of the 15th International Joint Conference on Biomedical Engineering Systems and Technologies, SCITEPRESS-Science and Technology Publications. Virtual, 9–11 February 2022; pp. 847–856. [Google Scholar]
  32. Vicon Motion System Ltd. Plug-in Gait Reference Guide. Available online: https://docs.vicon.com/display/Nexus26/PDF+downloads+for+Vicon+Nexus?preview=/42696722/42697399/Plug-in%20Gait%20Reference%20Guide.pdf (accessed on 3 March 2022).
  33. Van Criekinge, T.; Saeys, W.; Hallemans, A.; Velghe, S.; Viskens, P.-J.; Vereeck, L.; De Hertogh, W.; Truijen, S. Trunk Biomechanics during Hemiplegic Gait after Stroke: A Systematic Review. Gait Posture 2017, 54, 133–143. [Google Scholar] [CrossRef] [PubMed]
  34. Wang, X.; Liu, Z.; Wang, J.; Loughney, S.; Yang, Z.; Gao, X. Experimental Study on Individual Walking Speed during Emergency Evacuation with the Influence of Ship Motion. Phys. A Stat. Mech. Its Appl. 2021, 562, 125369. [Google Scholar] [CrossRef]
  35. Sun, J.; Guo, Y.; Li, C.; Lo, S.; Lu, S. An Experimental Study on Individual Walking Speed during Ship Evacuation with the Combined Effect of Heeling and Trim. Ocean Eng. 2018, 166, 396–403. [Google Scholar] [CrossRef]
  36. Lee, D.; Park, J.-H.; Kim, H. A Study on Experiment of Human Behavior for Evacuation Simulation. Ocean Eng. 2004, 31, 931–941. [Google Scholar] [CrossRef]
  37. Barrass, B. Ship Stability: Notes and Examples, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2000. [Google Scholar]
  38. Choi, J.; Knarr, B.A.; Youn, J.-H. The Effects of Ship’s Roll Motion on the Center of Mass and Margin of Stability During Walking: A Simulation Study. IEEE Access 2022, 10, 102432–102439. [Google Scholar] [CrossRef]
  39. Jansen, K.; De Groote, F.; Duysens, J.; Jonkers, I. How Gravity and Muscle Action Control Mediolateral Center of Mass Excursion during Slow Walking: A Simulation Study. Gait Posture 2014, 39, 91–97. [Google Scholar] [CrossRef] [PubMed]
  40. Kuo, A.D. Stabilization of Lateral Motion in Passive Dynamic Walking. Int. J. Rob. Res. 1999, 18, 917–930. [Google Scholar] [CrossRef]
  41. O’Connor, S.M.; Kuo, A.D. Direction-Dependent Control of Balance during Walking and Standing. J. Neurophysiol. 2009, 102, 1411–1419. [Google Scholar] [CrossRef]
  42. Bauby, C.E.; Kuo, A.D. Active Control of Lateral Balance in Human Walking. J. Biomech. 2000, 33, 1433–1440. [Google Scholar] [CrossRef]
  43. Dean, J.C.; Alexander, N.B.; Kuo, A.D. The Effect of Lateral Stabilization on Walking in Young and Old Adults. IEEE Trans. Biomed. Eng. 2007, 54, 1919–1926. [Google Scholar] [CrossRef]
  44. Sinitksi, E.H.; Terry, K.; Wilken, J.M.; Dingwell, J.B. Effects of Perturbation Magnitude on Dynamic Stability When Walking in Destabilizing Environments. J. Biomech. 2012, 45, 2084–2091. [Google Scholar] [CrossRef]
  45. Hof, A.L.; Gazendam, M.G.J.; Sinke, W.E. The Condition for Dynamic Stability. J. Biomech. 2005, 38, 1–8. [Google Scholar] [CrossRef]
  46. Hak, L.; Houdijk, H.; Steenbrink, F.; Mert, A.; van der Wurff, P.; Beek, P.J.; van Dieën, J.H. Stepping Strategies for Regulating Gait Adaptability and Stability. J. Biomech. 2013, 46, 905–911. [Google Scholar] [CrossRef]
  47. Hak, L.; Houdijk, H.; Steenbrink, F.; Mert, A.; van der Wurff, P.; Beek, P.J.; van Dieën, J.H. Speeding up or Slowing down?: Gait Adaptations to Preserve Gait Stability in Response to Balance Perturbations. Gait Posture 2012, 36, 260–264. [Google Scholar] [CrossRef] [PubMed]
  48. Choi, J.; Youn, J.-H.; Haas, C. Machine Learning Approach for Foot-Side Classification Using a Single Wearable Sensor. In Proceedings of the 40th International Conference on Information Systems, ICIS 2019, Munich, Germany, 15–18 December 2019; Association for Information Systems: Munich, Germany, 2019. [Google Scholar]
  49. Choi, J.; Parker, S.M.; Knarr, B.A.; Gwon, Y.; Youn, J.H. Wearable Sensor-Based Prediction Model of Timed up and Go Test in Older Adults. Sensors 2021, 21, 6831. [Google Scholar] [CrossRef] [PubMed]
  50. Choi, J.; Knarr, B.A.; Gwon, Y.; Youn, J.-H. Prediction of Stability during Walking at Simulated Ship’s Rolling Motion Using Accelerometers. Sensors 2022, 22, 5416. [Google Scholar] [CrossRef] [PubMed]
  51. Hartmann, Y.; Liu, H.; Lahrberg, S.; Schultz, T. Interpretable High-Level Features for Human Activity Recognition. In Proceedings of the 15th International Joint Conference on Biomedical Engineering Systems and Technologies—BIOSIGNALS, Online, 9–11 February 2022; INSTICC. SciTePress: Setubal, Portugal, 2022; pp. 40–49. [Google Scholar]
  52. Hartmann, Y.; Liu, H.; Schultz, T. Interactive and Interpretable Online Human Activity Recognition. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 21–25 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 109–111. [Google Scholar]
  53. Hartmann, Y.; Liu, H.; Schultz, T. High-Level Features for Human Activity Recognition and Modeling. In International Joint Conference on Biomedical Engineering Systems and Technologies; Springer Nature: Cham, Switzerland, 2023; pp. 141–163. [Google Scholar]
  54. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  55. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-Sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  56. Wright, R.E. Logistic Regression. In Reading and Understanding Multivariate Statistics; American Psychological Association: Washington, DC, USA, 1995; pp. 217–244. [Google Scholar]
  57. Rokach, L.; Maimon, O. Decision Trees. In Data Mining and Knowledge Discovery Handbook; Springer: New York, NY, USA, 2005; pp. 165–192. [Google Scholar]
  58. Cover, T.; Hart, P. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  59. Ho, T.K. Random Decision Forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 278–282. [Google Scholar]
  60. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  61. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  62. Kuhn, M. Caret: Classification and Regression Training [R Package Caret Version 6.0-93]. Available online: https://CRAN.R-project.org/package=caret (accessed on 22 November 2022).
  63. Nusinovici, S.; Tham, Y.C.; Chak Yan, M.Y.; Wei Ting, D.S.; Li, J.; Sabanayagam, C.; Wong, T.Y.; Cheng, C.-Y. Logistic Regression Was as Good as Machine Learning for Predicting Major Chronic Diseases. J. Clin. Epidemiol. 2020, 122, 56–69. [Google Scholar] [CrossRef]
  64. Hastie, T.; Friedman, J.; Tibshirani, R. The Elements of Statistical Learning; Springer Series in Statistics; Springer: New York, NY, USA, 2001; ISBN 978-1-4899-0519-2. [Google Scholar]
  65. Gareth, J.; Daniela, W.; Trevor, H.; Robert, T. An Introduction to Statistical Learning: With Applications in R; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  66. Cunningham, P.; Delany, S.J. K-Nearest Neighbour Classifiers—A Tutorial. ACM Comput. Surv. 2022, 54, 1–25. [Google Scholar] [CrossRef]
  67. Piryonesi, S.M.; El-Diraby, T.E. Using Machine Learning to Examine Impact of Type of Performance Indicator on Flexible Pavement Deterioration Modeling. J. Infrastruct. Syst. 2021, 27, 04021005. [Google Scholar] [CrossRef]
  68. Piryonesi, S.M.; El-Diraby, T.E. Role of Data Analytics in Infrastructure Asset Management: Overcoming Data Size and Quality Problems. J. Transp. Eng. Part B Pavements 2020, 146, 04020022. [Google Scholar] [CrossRef]
  69. Qu, Y.; Lin, Z.; Li, H.; Zhang, X. Feature Recognition of Urban Road Traffic Accidents Based on GA-XGBoost in the Context of Big Data. IEEE Access 2019, 7, 170106–170115. [Google Scholar] [CrossRef]
  70. Ben-Hur, A.; Horn, D.; Siegelmann, H.T.; Vapnik, V. Support Vector Clustering. J. Mach. Learn. Res. 2001, 2, 125–137. [Google Scholar] [CrossRef]
  71. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory—COLT ’92, Pittsburgh, PA, USA, 27–29 July 1992; ACM Press: New York, NY, USA, 1992; pp. 144–152. [Google Scholar]
  72. Schratz, P.; Muenchow, J.; Iturritxa, E.; Richter, J.; Brenning, A. Hyperparameter Tuning and Performance Assessment of Statistical and Machine-Learning Algorithms Using Spatial Data. Ecol. Modell. 2019, 406, 109–120. [Google Scholar] [CrossRef]
  73. Elgeldawi, E.; Sayed, A.; Galal, A.R.; Zaki, A.M. Hyperparameter Tuning for Machine Learning Algorithms Used for Arabic Sentiment Analysis. Informatics 2021, 8, 79. [Google Scholar] [CrossRef]
  74. Jeni, L.A.; Cohn, J.F.; De La Torre, F. Facing Imbalanced Data-Recommendations for the Use of Performance Metrics. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2–5 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 245–251. [Google Scholar]
  75. Hastie, T.; Qian, J.; Tay, K. An Introduction to Glmnet. Available online: https://glmnet.stanford.edu/articles/glmnet.html (accessed on 10 March 2022).
  76. Tesio, L.; Rota, V. The Motion of Body Center of Mass during Walking: A Review Oriented to Clinical Applications. Front. Neurol. 2019, 10, 999. [Google Scholar] [CrossRef] [PubMed]
  77. Meyer, G.; Ayalon, M. Biomechanical Aspects of Dynamic Stability. Eur. Rev. Aging Phys. Act. 2006, 3, 29–33. [Google Scholar] [CrossRef]
Figure 1. Framework for the fall risk classification model based on machine learning.
Figure 2. Experimental settings: (a) location of the reflective markers and accelerometers; (b) example of the rolling simulations using the CAREN.
Figure 3. Boxplots for the mean differences between the high and low risks: (a) ML-COME (mediolateral center of mass excursion); (b) AP-COME (anterior-posterior center of mass excursion); (c) ML-vMOS (mediolateral variability for margin of stability); (d) AP-vMOS (anterior-posterior variability for margin of stability).
Figure 4. Example of the best parameter λ tuning process: cross-validation plot for LASSO.
Figure 5. Examples of the hyperparameter tuning process: (a) DT; (b) RF; (c) XGB.
Figure 6. Confusion matrix: (a) LR; (b) DT; (c) KNN; (d) RF; (e) XGB; (f) SVM-L; (g) SVM-RBF; (h) SVM-Poly.
Figure 7. AUC values of the eight classification models: (a) training data; (b) testing data. The LR model performed best in both the training (AUC = 0.9483) and testing (AUC = 0.9204).
Table 1. Summary of the participants’ demographics.
Characteristics | Mean ± Standard Deviation
--- | ---
Gender (male/female) | 20/10
Age (years) | 30.3 ± 6.1
Height (cm) | 173.0 ± 9.4
Weight (kg) | 71.9 ± 14.5
Body Mass Index (BMI) (kg/m²) | 23.8 ± 3.4
Table 2. Distribution of the training and test datasets for classification.
Label of Fall Risk | Training Set | Test Set | Total
--- | --- | --- | ---
Low | 105 | 45 | 150
High | 84 | 36 | 120
Table 3. The results of the independent samples t-test between low and high risks for each variable (*** p < 0.001).
Variable | Group | N | Mean | SD | t | p-Value
--- | --- | --- | --- | --- | --- | ---
ML-COME | Low | 150 | −0.5378 | 0.7503 | −12.354 | 0.000 ***
 | High | 120 | 0.6722 | 0.8576 | |
AP-COME | Low | 150 | −0.1892 | 0.8710 | −3.461 | 0.001 ***
 | High | 120 | 0.2365 | 1.0996 | |
ML-vMOS | Low | 150 | −0.6667 | 0.6248 | −18.381 | 0.000 ***
 | High | 120 | 0.8334 | 0.7149 | |
AP-vMOS | Low | 150 | −0.4087 | 0.8463 | −8.430 | 0.000 ***
 | High | 120 | 0.5109 | 0.9434 | |
Table 4. List of extracted gait features [50] (* SD means standard deviation).
Feature | Description
--- | ---
M | Vector magnitude of the entire step
M10 | Vector magnitude at initial 10% of the step
LM | Lateral directional vector magnitude of the entire step
VM | Vertical directional vector magnitude of the entire step
AM | Anterior–posterior directional vector magnitude of the entire step
MD | Vector magnitude at double limb support
LMD | Lateral directional vector magnitude at double limb support
VMD | Vertical directional vector magnitude at double limb support
AMD | Anterior–posterior directional vector magnitude at double limb support
M30 | Vector magnitude at single limb support
LM30 | Lateral directional vector magnitude at single limb support
VM30 | Vertical directional vector magnitude at single limb support
AM30 | Anterior–posterior directional vector magnitude at single limb support
LHM | Maximum value of lateral accelerations at heel-strike
LHS | SD * of lateral accelerations at the initial 10% of the step
VHM | Maximum value of vertical accelerations at heel-strike
VHS | SD * of vertical accelerations at the initial 10% of the step
AHM | Maximum value of anterior–posterior accelerations at heel-strike
AHS | SD * of anterior–posterior accelerations at the initial 10% of the step
ST | Time from heel-strike to heel-strike
Table 5. List of hyperparameters for each model.
Model | Parameter | Description
--- | --- | ---
DT | cp | Complexity parameter
KNN | k | Number of neighbors
RF | mtry | Number of randomly selected predictors
XGB | nround | Number of boosting iterations
 | max_depth | Max tree depth
 | eta | Shrinkage
SVM-L | C | Cost
SVM-RBF | C | Cost
 | sigma | Sigma
SVM-Poly | C | Cost
 | scale | Scale
 | degree | Polynomial degree
Table 6. The most selected features by LASSO. The cut-off value for frequency was 50.
Rank | Feature | Frequency
--- | --- | ---
1 | aLHM | 100
1 | sLHS | 100
1 | sAHS | 100
4 | sAMD | 98
5 | aAHS | 94
6 | aAMD | 92
7 | vAHS | 91
8 | aLM30 | 86
9 | vAM30 | 81
10 | aVM30 | 74
Table 7. Best hyperparameters for each model.
Model | Best Hyperparameters
--- | ---
DT | cp = 0.0001
KNN | k = 5
RF | mtry = 2
XGB | nround = 90, max_depth = 3, eta = 0.1
SVM-L | C = 0.1
SVM-RBF | C = 1, sigma = 0.1
SVM-Poly | C = 0.561, scale = 0.135, degree = 2
Table 8. Results of the performance in classification for each model.
Model | Accuracy | Sensitivity | Specificity | AUC
--- | --- | --- | --- | ---
LR | 0.8272 | 0.8667 | 0.7778 | 0.9204
DT | 0.7778 | 0.7778 | 0.7778 | 0.7673
KNN | 0.8148 | 0.8222 | 0.8056 | 0.8444
RF | 0.8148 | 0.8000 | 0.8333 | 0.9015
XGB | 0.8519 | 0.8444 | 0.8611 | 0.9173
SVM-L | 0.7901 | 0.7556 | 0.8333 | 0.9093
SVM-RBF | 0.8272 | 0.7778 | 0.8889 | 0.9068
SVM-Poly | 0.8519 | 0.8444 | 0.8611 | 0.9198
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Choi, J.; Knarr, B.A.; Youn, J.-H.; Song, K.Y. Machine Learning-Based Approach to Identifying Fall Risk in Seafarers Using Wearable Sensors. J. Mar. Sci. Eng. 2024, 12, 356. https://doi.org/10.3390/jmse12020356
