Article

Compensation Method for Missing and Misidentified Skeletons in Nursing Care Action Assessment by Improving Spatial Temporal Graph Convolutional Networks

1
Faculty of Engineering, Yamaguchi University Graduate School of Sciences and Technology for Innovation, 2-16-1 Tokiwadai, Ube City 755-0097, Yamaguchi Prefecture, Japan
2
Department of Orthopedic Surgery, Yamaguchi University Graduate School of Medicine, 1-1-1 Minamikogushi, Ube City 755-8505, Yamaguchi Prefecture, Japan
*
Author to whom correspondence should be addressed.
Bioengineering 2024, 11(2), 127; https://doi.org/10.3390/bioengineering11020127
Submission received: 2 January 2024 / Revised: 24 January 2024 / Accepted: 24 January 2024 / Published: 29 January 2024

Abstract

With the increasing aging population, nursing care providers face a substantial risk of work-related musculoskeletal disorders (WMSDs). Vision-based pose estimation methods, such as OpenPose, are commonly used for ergonomic posture risk assessment. However, these methods struggle with the overlapping and interactive postures typical of nursing tasks, producing missing and misidentified skeletons. To address this, we propose a skeleton compensation method using improved spatial temporal graph convolutional networks (ST-GCN), which integrates kinematic chain and action features to assess skeleton integrity and compensate for missing or misidentified keypoints. The results verified the effectiveness of our approach in reducing skeleton loss and misidentification in nursing care tasks, improving the accuracy of both skeleton joint angle calculation and REBA scoring. Moreover, comparative analysis against other skeleton compensation methods demonstrated the superior performance of our approach, which achieved a REBA scoring accuracy of 87.34%. Collectively, our method holds promise for mitigating skeleton loss and misidentification in nursing care tasks.

1. Introduction

The nursing industry has consistently exhibited a high prevalence of work-related musculoskeletal disorders (WMSDs) [1]. Among nursing professionals, the incidence of work-related musculoskeletal disorders is even more pronounced, particularly in rehabilitation and geriatric care settings, reaching a staggering 92% [2,3]. The most effective preventive approach lies in conducting ergonomic posture risk assessments for nursing personnel and promptly addressing high-risk postures through corrective measures [4,5].
The predominant methods for assessing ergonomic posture typically rely on field observation or video monitoring to measure joint angles. These joint angles are then fed into scoring tools, such as the Rapid Upper Limb Assessment (RULA) [6] and Rapid Entire Body Assessment (REBA) [7], to determine the level of postural risk and guide the implementation of suitable intervention measures. Nevertheless, posture assessment through field observation has limitations. Firstly, subjective judgments made by assessors are prone to biases influenced by viewing angles and fatigue [8,9]. Secondly, manual observation is time-consuming and inefficient. As a result, researchers have sought to develop machine-based automated assessment methods to replace manual evaluation. Initially, some researchers employed contact-based sensors to capture human posture movements. While this method provides high accuracy and frequently serves as a validation benchmark for emerging recognition techniques [10,11], it requires a large number of sensors, increasing equipment costs and demanding extensive calibration. Moreover, the sensors themselves may impede the normal work of healthcare personnel [12,13]. In contrast, vision-based motion capture methods offer a non-contact approach that does not disrupt the tasks of healthcare providers [14]. Currently, this approach primarily relies on machine learning algorithms to recognize pose keypoints from images or videos [15,16], enabling the automatic calculation of the REBA posture score from these keypoints. Compared to the Microsoft Kinect camera [17] and various pose estimation networks (e.g., PoseNet [18], DensePose [19], HRNet [20]), OpenPose [21] is presently recognized as a widely utilized and reliable algorithm for human pose estimation, demonstrating stable skeletal tracking capabilities even in non-frontal views and video sequences.
We endeavored to incorporate OpenPose into the automatic REBA assessment of caregiver postures. However, our findings revealed significant discrepancies in the REBA scores and substantial fluctuations in joint angles. To explore the underlying reasons for this issue, we conducted an analysis of caregiver postures. The results revealed that when healthcare professionals were involved in posture estimation, the overlapping of limbs between nurses and patients not only led to the loss of skeletal information but also introduced complexities in distinguishing the skeletal structures of both parties. Consequently, this significantly compromised the accuracy of OpenPose in estimating caregiver postures, resulting in considerable fluctuations and errors in both REBA scores and joint angles. The simultaneous estimation of poses for multiple individuals presents inherent challenges that may compromise the accuracy of joint angle calculations and lead to inaccurate REBA scores, particularly in scenarios involving overlapping, occlusion, and intricate interactions among various body parts.
To improve the pose estimation deficiencies caused by body occlusion in nursing interactions, researchers have utilized the principle of left–right symmetry to compensate for missing skeleton keypoints [22]. However, this approach is applicable only to poses captured from a frontal camera perspective, and deviations in camera angle cause the corrected skeletal keypoints to be positioned outside the body. To overcome this limitation, the Mask RCNN method has been utilized to detect human boundaries, thereby constraining the skeletal keypoints within the body’s boundaries [23]. Nonetheless, compensating for skeletal keypoints using the symmetry principle often encounters challenges when dealing with complex movements. To restore occluded keypoints, researchers have explored the utilization of unoccluded skeletal keypoints in a Euclidean distance matrix [24]. This skeleton compensation method has proven successful in mitigating skeletal occlusion issues. However, ignoring temporal attributes and their association with skeletal motion trends leads to disparities between the compensated skeleton and the action dynamics. Furthermore, certain approaches have introduced the concept of “Human Dynamics” [25], which predicts future body poses based on multiple frames in the current video, even in the absence of subsequent frames. This method has demonstrated remarkable effectiveness in compensating for missing skeletal keypoints. However, limitations still persist regarding skeletal misidentification.
To tackle the challenges of skeleton loss and misidentification caused by body contact in nursing tasks, we proposed an enhanced spatial temporal graph convolutional network (ST-GCN) method that incorporated action feature weighting for skeleton time series. Additionally, we introduced a skeleton discrimination method based on kinematic chains, which identified skeletal loss and misidentification by combining skeleton and action features. This information was then utilized to provide feedback to the skeleton interpolation compensation network and skeleton correction network, enabling the reconstruction of missing and misidentified skeletal structures. The following are the main contributions of this study:
(1) An improved ST-GCN framework is proposed for skeleton action prediction.
(2) A kinematic-chain-based method for missing and misidentified skeletons is proposed for skeleton compensation in scenes with limb overlapping.
(3) Our results illustrate that the skeleton compensation and correction methods can effectively improve the calculation accuracy of skeleton joint angles and REBA score.

2. Methods

2.1. Overview

In our study, we introduced a novel kinematic chain skeleton discrimination method to assess the integrity of the pose skeleton, distinguishing loss and misidentification. By analyzing the heterogeneity of action features obtained from the ST-GCN network and their corresponding skeleton mappings within a predefined temporal threshold, we identified instances of skeleton misidentification from a pose-based kinematic chain perspective. To optimize skeletal loss, we proposed a temporal-based skeleton interpolation compensation method. This involved utilizing temporal features, traversing complete skeletons preceding and subsequent to the temporal sequence, and employing interpolation algorithms to rectify missing skeleton data. In cases of skeleton misidentification, we presented a method to optimize action feature heterogeneity. This technique involved optimizing action features with lower weights within the predefined temporal range, compensating for gaps by utilizing consistent action features from previous and subsequent temporal sequences, and updating the corresponding skeletons mapped with the action features to rectify misidentification of the pose skeleton. The overview of our skeleton compensation method is shown in Figure 1. The following supporting information can be downloaded at: https://github.com/Nicxhan/Skeleton-compensation-and-correction (accessed on 1 January 2024).

2.2. ST-GCN

The ST-GCN has demonstrated an extraordinary ability to extract dynamic skeletal features in both spatial and temporal dimensions by capitalizing on a sequence of skeletal graphs [26]. Our adjusted ST-GCN structure comprises a spatial feature layer and a spatial–temporal feature layer (Figure 2a). By fusing the spatial and temporal features of the skeleton, it allocates distinct action labels and weights to the temporal variations of skeletal features, redefining postures in terms of actions.
The construction of the Spatial Feature layer entailed the integration of multiple Spatial Conv layers through residual structures. Each Spatial Conv layer was complemented by batch normalization (BN) and ReLU modules (Figure 2b), thereby bolstering the stability and facilitating the capture of intricate non-linear linkages among joints. The Spatial Feature layer aimed to discern the interconnected features that manifested between skeletal nodes and their neighboring counterparts, originating from the spatial information encapsulated within the pivotal nodes of the skeletal graph. Consequently, it exerted a discernible influence on the estimation of human poses by representing localized attributes of individual skeletal joints alongside the distinctive characteristics exhibited by adjacent nodes [27]. The Spatial–Temporal Feature layer, constructed by intricately interweaving multiple spatial temporal feature extraction units, manifested as a dense connection structure [28]. Encompassing a stack of Temporal Conv and Spatial Conv (Figure 2c), each Spatial–Temporal Conv aimed to extract motion trend features from skeletal joint nodes that exhibited correspondence across frames in the skeletal graph. This extraction process facilitated the depiction of motion trends between matched joint nodes in consecutive frames. By acquiring a comprehensive understanding of these features, the prediction of pose actions within the skeletal structure was enhanced.
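As a rough illustration of the spatial aggregation step described above, the sketch below applies one normalized-adjacency graph convolution to a toy four-joint skeleton. This is a minimal NumPy sketch of the generic spatial graph convolution idea, not the authors' trained network; the function names, toy skeleton, and random weights are our own assumptions.

```python
import numpy as np

def normalized_adjacency(edges, num_joints):
    """Build the symmetrically normalized adjacency (with self-loops)
    used by a spatial graph convolution over the skeleton graph."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A @ D_inv_sqrt

def spatial_graph_conv(X, A_hat, W):
    """One spatial conv step: aggregate features from each joint and its
    neighbors, project them, and apply a ReLU non-linearity.
    X: (num_joints, in_ch), W: (in_ch, out_ch)."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy 4-joint chain skeleton: 0-1-2-3
edges = [(0, 1), (1, 2), (2, 3)]
A_hat = normalized_adjacency(edges, 4)
X = np.random.randn(4, 2)   # 2-D joint coordinates as input features
W = np.random.randn(2, 8)   # learnable projection (random here)
out = spatial_graph_conv(X, A_hat, W)
```

In the full network, stacks of such spatial layers alternate with temporal convolutions over matched joints across frames, which is what yields the per-frame action labels and weights used later for skeleton discrimination.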

2.3. Kinematic Chain for Skeleton Discrimination

The integration of spatial and temporal features within the label mapping framework enables the determination of action weights for postures, with the highest-weighted action label signifying each unique posture. To address challenges related to missing or misidentified skeletons in complex scenarios, we introduced a Kinematic Chain Skeleton Discrimination Network in an extra layer of the ST-GCN. This novel approach evaluated both skeletal pose completeness and the comparison of fused action weight features, distinct from prior research [29]. Anomalous action weights within a defined temporal sequence were identified as misidentified actions and skeletons, and corrective feedback was provided for both. Skeletal connections, denoting the links between adjacent keypoints in the human skeletal structure, form a 2 × M matrix $K$, where M represents the predefined number of skeletal keypoints. The matrix $\Psi = K^{T}K$ acts as a feature for discriminating skeletal integrity: the diagonal elements of $\Psi$ represent squared joint lengths, while the remaining elements signify weighted angles between pairs of skeletal keypoints, serving as internal indicators. Inspired by kinematic chains, we introduced a temporal kinematic chain, defined as Equation (1).
$\Phi = K_{t+i}^{T}K_{t+i} - K_{t}^{T}K_{t}$   (1)
where i represents the temporal interval between successive frames within the temporal kinematic chain. The diagonal elements within matrix Φ depict alterations in skeletal joint lengths, while the remaining elements signify changes in angles between pairs of skeletal keypoints.
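To make the matrix definitions concrete, the sketch below computes $\Psi = K^{T}K$ and the temporal kinematic chain $\Phi$ for a toy planar arm. For a rigid motion (a pure rotation of the whole chain), $\Phi$ vanishes because inner products, and hence bone lengths and inter-bone angles, are preserved. This is a minimal NumPy sketch under our own naming, not the paper's implementation.

```python
import numpy as np

def chain_matrix(joints, bones):
    """Stack each bone (vector between adjacent keypoints) as a column,
    giving the 2 x M connection matrix K from the text."""
    return np.stack([joints[j] - joints[i] for i, j in bones], axis=1)

def integrity_feature(K):
    """Psi = K^T K: diagonal entries are squared bone lengths,
    off-diagonal entries encode angles between pairs of bones."""
    return K.T @ K

def temporal_kinematic_chain(K_t, K_ti):
    """Phi = K_{t+i}^T K_{t+i} - K_t^T K_t (Equation (1)): changes in
    bone lengths (diagonal) and inter-bone angles (off-diagonal)."""
    return integrity_feature(K_ti) - integrity_feature(K_t)

# Toy 3-joint arm: shoulder -> elbow -> wrist
joints_t = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
bones = [(0, 1), (1, 2)]
K_t = chain_matrix(joints_t, bones)

# Rigidly rotate the whole arm by 30 degrees: Phi should be ~zero.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
K_rot = chain_matrix(joints_t @ R.T, bones)
phi = temporal_kinematic_chain(K_t, K_rot)
```

Conversely, a lost or misidentified keypoint distorts the corresponding bone vectors, producing large entries in $\Phi$ within the temporal window, which is the signal the discrimination network keys on.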
We established the prediction of temporal kinematic chains by connecting the coordinates of skeletal keypoints, which were subsequently input into a Temporal Convolutional Network (TCN) to construct a posture discrimination network. This methodology not only accounted for the integrity of posture skeletons across frames but also ensured that the weights of action features varied coherently across frames. It optimized abnormal action weights and provided feedback for skeleton compensation or correction. Building upon the framework of a Generative Adversarial Network [30], we constructed the posture discrimination network and employed this framework to generate a regularization loss for pose estimation.

2.4. Skeleton Interpolation Compensation

In the case of missing skeleton states detected in the pose estimation results, the skeleton interpolation compensation network initiated the process by considering the current time sequence of the missing skeleton as the starting point. Subsequently, it traversed the skeletal information of the preceding and succeeding time sequences to identify complete skeletons. The complete skeletons temporally nearest to the missing skeleton, one preceding and one succeeding, were chosen as references for interpolating the missing skeleton. Based on the spatial and temporal features offered by the complete skeletons, a linear interpolation algorithm was employed to fill in the missing skeletal keypoints. Simultaneously, the motion characteristics of the temporal sequence were taken into account to ensure alignment between the generated skeleton and the actual kinematic features; the process of skeleton compensation is depicted in Figure 3. To determine the temporal features within the interpolation compensation process, the traversal range for the preceding and succeeding temporal skeletons was set to 10 frames. This 10-frame range, sampled at a frequency of 50 Hz, provided the optimal data for motion skeleton interpolation [31].
Assuming that the motion velocity of each skeletal keypoint remained independent and constant within the missing region, suppose there were $n$ missing skeletal keypoints between the temporal sequences. Let $P_s(x_s, y_s)$ and $P_e(x_e, y_e)$ represent the starting and ending points of the complete skeletal information, separated by a temporal distance of 10 frames, and let the missing points be denoted as $P_1(x_1, y_1), P_2(x_2, y_2), \ldots, P_n(x_n, y_n)$. The interpolated compensatory coordinates of the missing skeleton keypoints were determined by Equations (2)–(4).
$x_i = (1 - t)x_s + t x_e$   (2)
$y_i = (1 - t)y_s + t y_e$   (3)
$t = i/(n + 1), \quad i = 1, 2, \ldots, n$   (4)
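Under the constant-velocity assumption above, Equations (2)–(4) reduce to simple linear interpolation per coordinate. A minimal sketch; the function name and plain-tuple interface are our own:

```python
def interpolate_missing(p_start, p_end, n_missing):
    """Fill n missing keypoint positions between the nearest complete
    skeletons P_s(x_s, y_s) and P_e(x_e, y_e) via Equations (2)-(4)."""
    xs, ys = p_start
    xe, ye = p_end
    points = []
    for i in range(1, n_missing + 1):
        t = i / (n_missing + 1)                  # Equation (4)
        points.append(((1 - t) * xs + t * xe,    # Equation (2)
                       (1 - t) * ys + t * ye))   # Equation (3)
    return points

# Four missing frames between a keypoint at (0, 0) and one at (10, 20)
filled = interpolate_missing((0.0, 0.0), (10.0, 20.0), 4)
```

Here $t$ runs over 0.2, 0.4, 0.6, 0.8, so `filled[0]` is approximately (2.0, 4.0) and `filled[3]` approximately (8.0, 16.0), i.e., the gap is bridged at constant velocity.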

2.5. Skeleton Correction

In the case of pose estimation results indicating skeletal misidentification states, we proposed a novel approach termed heterogeneous action feature optimization. By leveraging the inherent action features associated with each stage of the skeleton, we could rectify the misidentified skeleton by focusing on the correction of action features. The process of skeleton correction is depicted in Figure 4. The skeleton correction network commenced the process using the current time sequence of the misidentified skeleton as the starting point. It subsequently traversed the action features of the preceding and succeeding 10 frames within the temporal sequence. Following this, the weight proportions of the action features were calculated within the predefined time thresholds. For example, if the skeleton action features were denoted as A and B, a comparison was made between the weights of action features A and B within the specified time threshold. Dominant action features were identified as those with a weight proportion exceeding 60%, while the remaining action features were considered heterogeneous. Consequently, the heterogeneous features were replaced with the dominant features, and the skeleton was updated accordingly. This approach effectively rectified the misidentified skeleton, demonstrating its efficacy in practice.
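Our reading of the dominance rule can be sketched as a majority vote over a ±10-frame window, replacing any label whose window-wide proportion of a competing label exceeds the 60% dominance threshold. The real network operates on soft action weights from the ST-GCN rather than the hard labels used here; the function and parameter names are our own.

```python
from collections import Counter

def correct_action_labels(labels, window=10, dominance=0.6):
    """Replace heterogeneous action labels with the dominant label when
    one label's proportion inside the +/- window exceeds the dominance
    threshold (0.6 per the text)."""
    corrected = list(labels)
    for k in range(len(labels)):
        lo, hi = max(0, k - window), min(len(labels), k + window + 1)
        counts = Counter(labels[lo:hi])
        label, freq = counts.most_common(1)[0]
        if freq / (hi - lo) > dominance and labels[k] != label:
            corrected[k] = label   # heterogeneous -> dominant feature
    return corrected

# A single spurious 'B' inside a run of 'A' frames is corrected,
# and the skeleton mapped to that frame would be updated accordingly.
labels = ['A'] * 9 + ['B'] + ['A'] * 10
corrected = correct_action_labels(labels)
```

When no label reaches the dominance threshold within the window (e.g., a genuine action transition), the frame is left unchanged, which keeps legitimate action boundaries intact.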
To prevent the disregard of preceding and succeeding frames due to estimation errors in the current frame, we incorporated the Kalman filtering algorithm to smooth the noise in the time series of coordinates for each skeletal point [32]. This procedure enhanced the congruity between the corrected skeleton and the actual movement. We assumed the independent calculation of each skeletal point, without considering skeletal constraints, and observed a natural correlation between the horizontal and vertical motions of the skeleton. Additionally, when disregarding action trends, the preceding and subsequent temporal states exhibited the same characteristics. Hence, Equations (5)–(9) held.
$\hat{x}_k^{-} = A\hat{x}_{k-1} + Bu_k$   (5)
$P_k^{-} = AP_{k-1}A^{T} + Q$   (6)
$K_k = P_k^{-}C^{T}\left(CP_k^{-}C^{T} + R\right)^{-1}$   (7)
$\hat{x}_k = \hat{x}_k^{-} + K_k\left(y_k - C\hat{x}_k^{-}\right)$   (8)
$P_k = \left(I - K_kC\right)P_k^{-}$   (9)
where $\hat{x}_{k-1}$ and $\hat{x}_k$ represent the posterior state estimates of the skeleton point at time steps k − 1 and k, respectively. $\hat{x}_k^{-}$ represents the prior state estimate of the skeleton point at time step k. $P_{k-1}$ and $P_k$ represent the posterior estimated covariance values at time steps k − 1 and k, respectively. $P_k^{-}$ represents the prior estimated covariance value at time step k. $C$ represents the transformation matrix from state variables to measured values. $y_k$ represents the measurement input. $K_k$ represents the Kalman gain. $A$ represents the state transition matrix. $B$ represents the control input matrix. $Q$ represents the process noise covariance. $R$ represents the measurement noise covariance.
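For a single keypoint coordinate, Equations (5)–(9) collapse to a scalar filter when, as assumed above, each point is computed independently and consecutive states share the same characteristics (so $A = C = 1$ and the control term $Bu_k$ drops out). A minimal sketch; the noise covariances `q` and `r` are illustrative values, not the paper's settings.

```python
import numpy as np

def kalman_smooth(y, q=1e-3, r=1e-1):
    """Scalar Kalman filter over one keypoint coordinate series,
    following Equations (5)-(9) with A = C = 1 and no control input."""
    x_est, p_est = y[0], 1.0
    out = [x_est]
    for z in y[1:]:
        # predict (Eqs. 5-6)
        x_prior = x_est
        p_prior = p_est + q
        # update (Eqs. 7-9)
        k_gain = p_prior / (p_prior + r)
        x_est = x_prior + k_gain * (z - x_prior)
        p_est = (1.0 - k_gain) * p_prior
        out.append(x_est)
    return np.array(out)

# A keypoint sitting at x = 5 with alternating +/- 1 detection noise
y = [5.0 + (-1.0) ** i for i in range(50)]
smoothed = kalman_smooth(y)
```

With a small process noise `q` relative to the measurement noise `r`, the gain settles near 0.1, so single-frame estimation errors are damped rather than propagated into the corrected skeleton.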

2.6. Study Design

The data used in this study were acquired by recruiting volunteers to simulate a patient transfer task. The recruited volunteers had no history of musculoskeletal disorders in the preceding year. Volunteers were tasked with transferring a standardized patient from the bed to a wheelchair.
A single monocular RGB camera was employed for recording the nursing care task videos. A motion capture system comprising multiple inertial sensors was utilized to measure the angles of various joints in the body [33], with a high correlation observed between the results obtained from this system and those obtained from optical motion capture systems, making it suitable for joint angle measurement research. Additionally, inertial sensors possess strong occlusion resistance and find extensive application in fields like rehabilitation medicine and ergonomic analysis [34,35]. Hence, the joint angle measurements obtained from the inertial sensors can be employed as a ground truth value to assess the precision of visually based angle measurements [36].
Statistical analysis was conducted using SPSS v27 software (SPSS Inc., Chicago, IL, USA) and GraphPad Prism 9 (GraphPad Inc., San Diego, CA, USA). Paired t-tests were employed for paired continuous data, mean values and standard deviations were reported for all statistical tests. A p-value less than 0.05 was considered statistically significant.

2.7. Joint Angle and Scoring Tool

The nursing task videos were processed by OpenPose and our method to predict the human body skeleton and compute the skeleton joint angles. A total of 25 skeletal keypoints were identified for each participant (Figure 5), and based on the scoring criteria of the REBA, a total of eight joint angles were calculated. The computation of joint angles and their corresponding skeletal keypoints were summarized in Table 1. Due to the wrist being in a nearly fixed position during the nursing tasks, the wrist angle was considered constant for the purpose of angle measurement and posture risk assessment in this study.
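The joint angles in Table 1 are, in essence, angles at a vertex keypoint between two adjacent body segments. A minimal sketch of that computation for 2-D keypoints; the function name and the keypoint triples are our own illustration, not the exact Table 1 definitions:

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by segments b->a and b->c,
    computed from 2-D skeleton coordinates via the dot product."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_t))

# e.g., elbow angle from shoulder, elbow, and wrist keypoints:
elbow = joint_angle((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # 90 degrees
```

Such angles, computed per frame for the eight REBA-relevant joints, are what feed the scoring tool described next.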
The REBA method was chosen as a tool for evaluating ergonomic risks in the workplace. Its objective was to swiftly assess the WMSD risk of postures to determine which work positions require additional attention and improvement, thereby reducing the risk of bodily discomfort and injury associated with work. The REBA algorithm involved evaluating the angle changes of key joints (trunk, neck, legs, upper arms, lower arms, wrists), external loads, and hand coupling capability. REBA scores range from 1 to 12, with higher scores indicating greater WMSD risk (Table 2).

2.8. Accuracy Verification

To validate the accuracy of our approach in posture risk assessment, a comparison was conducted among OpenPose, the inertial sensors, and our method in terms of joint angles and REBA scores. The nursing task videos were separated into individual frames, and for each frame, the joint angles and REBA scores were calculated independently, as shown in Table 3. The mean absolute error (MAE) of the joint angles and the precision of the REBA scores were used to assess the performance of our method. The MAE measured the absolute difference between the joint angles computed by different methods. Although it did not distinguish between positive and negative errors, this value represented the actual magnitude of the error. The MAE was determined by Equations (10) and (11).
$\mathrm{MAE}_1 = \frac{1}{n}\sum_{i=1}^{n}\left|A_i - A_{s,i}\right|$   (10)
$\mathrm{MAE}_2 = \frac{1}{n}\sum_{i=1}^{n}\left|A_{o,i} - A_{s,i}\right|$   (11)
where $A_i$, $A_{o,i}$, and $A_{s,i}$ denote the joint angles measured by our method, OpenPose, and the inertial sensors, respectively; $\mathrm{MAE}_1$ compares our method against the inertial sensors, and $\mathrm{MAE}_2$ compares OpenPose against the inertial sensors. Assuming the number of frames with consistent REBA scores between the inertial sensors and our method was denoted as $F_m$, and the total number of frames was denoted as $F$, the REBA precision was determined by Equation (12).
$Acc = F_m / F \times 100\%$   (12)
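Equations (10)–(12) amount to a per-frame mean absolute error and an exact-match rate. A minimal sketch; the frame values below are hypothetical, for illustration only:

```python
def mean_absolute_error(pred, truth):
    """MAE between predicted joint angles and the inertial-sensor
    ground truth (Equations (10) and (11))."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def reba_accuracy(scores, truth_scores):
    """Percentage of frames whose REBA score matches the ground truth
    (Equation (12)): Acc = F_m / F * 100%."""
    matches = sum(1 for s, t in zip(scores, truth_scores) if s == t)
    return matches / len(scores) * 100.0

# Hypothetical per-frame values for illustration only
mae = mean_absolute_error([10.0, 12.0, 15.0], [11.0, 14.0, 15.0])  # 1.0
acc = reba_accuracy([3, 4, 5, 6], [3, 4, 5, 7])                    # 75.0
```

Because REBA scores are small integers, an exact-match rate is a natural precision measure, whereas the continuous joint angles call for MAE.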

3. Results

3.1. Missing and Misidentified Skeletons

During the application of OpenPose for posture risk assessment in nursing tasks, notable challenges arise from complex interactions and overlapping body configurations between nurses and patients. These challenges often lead to incomplete or erroneous skeletal estimations, resulting in deviations and fluctuations in joint angles (Figure 6a). For instance, as depicted in Figure 6b, when a skeleton corresponding to the upper arm was misidentified, substantial fluctuations in the upper arm angle occurred, resulting in discontinuous states. In contrast, our method optimized the misidentification problem (Figure 6c), maintaining a stable and continuous state for the joint angles of the upper arm. Likewise, in scenarios where the skeleton was missing, such as the legs, there might be deviations or even a complete absence of leg angles. However, our method optimized the identification of the skeleton, achieving the continuity of leg angle measurements.
We compared the overall skeleton missing rate and misidentification rate across all frames (Table 4). The results revealed that our approach achieved a skeletal misidentification rate of 2.18%. Regarding the skeleton missing rate, except for the right lower arm (Lower arm-R), which was affected by limb occlusion, significant skeleton compensation effects were observed for all other missing skeletons. These outcomes highlighted the efficacy and potential of our approach in mitigating missing and misidentified skeletons in skeletal analysis.

3.2. Joint Angles Error

To assess the accuracy of our approach in measuring joint angles, we conducted a comparative analysis of angle errors among the various methods. The analysis involved three distinct groups, each focused on evaluating the errors within a specific context: $E_{angle1} = A_{o,i} - A_{s,i}$ represented the error between the joint angles obtained from OpenPose and the ground truth values; $E_{angle2} = A_i - A_{s,i}$ represented the error between our method and the ground truth values; and $E_{angle3} = A_i - A_{o,i}$ represented the difference in joint angles between our method and OpenPose (Table 5).
We presented a detailed analysis of joint angle errors based on comprehensive experimental results (Table 5). When comparing joint angle errors between OpenPose and ground truth values (Eangle1), all angles, except Trunk angles (p1 = 0.628), displayed significant statistical differences (p1 < 0.001), indicating substantial joint angle deviations. Conversely, our method exhibited minimal errors compared to ground truth values (Eangle2), with significant statistical differences observed only in Upper arm-R (p2 = 0.025) and Lower arm-R (p2 = 0.006) joint angles. This highlighted the reliability of our method in calculating skeletal joint angles. Additionally, significant differences were found in joint angle errors (p3 < 0.001) between our method and OpenPose (Eangle3), except for Trunk (p3 = 0.961) and Lower arm-R angles (p3 = 0.752), demonstrating the effectiveness of our approach in enhancing pose estimation accuracy and improving the precision of skeletal joint angle calculation.
MAE was employed to evaluate the stability and accuracy of measuring joint angles. A smaller MAE value indicated better measurement accuracy. Our method consistently achieved an overall MAE (MAE1) below 10°, demonstrating superior accuracy in measuring joint angles (Figure 7). In contrast, OpenPose exhibited an MAE exceeding 10° for all joints, except the trunk, indicating significant error fluctuations. Both MAE1 and MAE2 showed statistically significant differences across all joint angles (p < 0.05). These discrepancies could be attributed to the skeleton loss and misidentification issues encountered in OpenPose during estimation of nursing care poses, resulting in frequent variations in angle differences and increased error fluctuation. In contrast, our proposed method addressed these challenges by optimizing skeleton loss and misidentification and reducing error fluctuations. This significantly enhanced the accuracy of joint angle calculations, as evidenced by the lower MAE values and reduced error fluctuations observed in Figure 7.

3.3. REBA Score Error

To verify the performance of our method in REBA scoring, we conducted a comparative analysis of the error in REBA scores among different skeletal joints. $E_{REBA1} = R_{o,i} - R_{s,i}$ denoted the error between OpenPose and the ground truth values, while $E_{REBA2} = R_i - R_{s,i}$ signified the error between our method and the ground truth values. The results, in accordance with the REBA scoring rules, are presented in Table 6.
Based on the comprehensive results presented in Table 6, notable differences (p < 0.001) were observed in the joints scores and REBA scores between the OpenPose and the ground truth values (EREBA1), except for Trunk (p = 0.788) and Neck (p = 0.124). These observations indicated that the reliability of REBA scores derived from the OpenPose method for assessing nursing care task postures was suboptimal, with considerable deviations. Conversely, when considering the REBA scores obtained through our proposed method (EREBA2), a significant difference was only observed for the Lower arm-R score (p < 0.001) compared to the ground truth values, while no significant differences were detected for other joint scores. Moreover, the final REBA scores showed no significant discrepancy compared to the ground truth values (p = 0.373). These outcomes demonstrated that the REBA scores computed using our method closely aligned with the ground truth values, highlighting the substantial feasibility and reliability of our approach for assessing nursing task posture.
Moreover, to evaluate the effectiveness of our method in tackling the issues of skeleton loss and misidentification within nursing care task scenarios, we conducted a comprehensive performance comparison against several existing methods, including that of Tsai et al. [23], a left–right skeletal symmetry skeleton compensation method; Guo et al. [24], a Euclidean distance matrix skeleton compensation method; and Kanazawa et al. [25], a Human-Dynamics-based temporal skeleton compensation method. The evaluation metric employed for this analysis was the precision of REBA scores. To uphold the scientific integrity of the comparative results, all assessments of the methods were conducted using standardized hardware configurations and nursing care posture datasets. Nonetheless, it was vital to exercise caution when interpreting these findings, as discrepancies in algorithmic parameters and model metrics might introduce variations that require careful consideration [37]. The summarized results of this comparative evaluation can be found in Table 7.
The findings in Table 7 indicated that OpenPose achieved an accuracy exceeding 90% for specific skeletal joints, yet its final accuracy in REBA scoring remained at 58.33%. This shortfall was associated with skeleton loss and misidentification, which lowered the REBA accuracy. In contrast, our approach attained an accuracy of 87.34%, outperforming the alternative methods and mitigating skeleton loss and misidentification in nursing care tasks. Importantly, our method exhibited promising potential for pose assessment in interaction-based nursing tasks.

4. Discussion

4.1. Main Findings and Contributions

In this study, we identified concerning accuracy issues in the integration of OpenPose with the REBA assessment for nursing postures. This inadequacy stemmed from the inherent challenges posed by motion interactions and limb occlusions in nursing tasks, resulting in skeleton missing and misidentification in the OpenPose pose estimation. Consequently, these deviations and fluctuations in skeletal joint angles had a direct impact on the accuracy of REBA scoring. To address this problem, we have devised an innovative method that built upon the ST-GCN framework by incorporating action feature inverse skeleton compensation and correction. Hence, we enhanced the tracking of pose skeletons in scenarios involving overlapping bodies and interactive movements during nursing tasks. This improvement ensured the continuity and stability of skeletal joint angle calculations, ultimately resulting in an enhanced accuracy of REBA scoring.
To validate the reliability and feasibility of our proposed method, we conducted a comprehensive comparison of the skeleton missing rate, skeleton misidentification rate, joint angles, REBA scores, and REBA scoring accuracy. We identified significant differences between the joint angles and scores obtained from OpenPose and from the inertial sensors, primarily due to the influence of skeleton loss and misidentification. In contrast, our method yielded joint angles and scores that did not differ significantly from the ground truth values, demonstrating the effectiveness of our approach in mitigating skeleton loss and misidentification (Table 5 and Table 6). Furthermore, it is important to highlight that substantial angle errors were observed in the right upper and lower arm joints (Table 5, Upper arm-R (p2 = 0.025), Lower arm-R (p2 = 0.006)). This discrepancy could be attributed to the interaction between the arms and patients during the caregiving process, resulting in the loss of arm joint tracking features. Such limitations are commonly encountered in vision-based pose estimation algorithms. They could be overcome by employing marker-based wearable sensor measurements, but the sensors themselves may impede the normal work of healthcare personnel [12]; improving the performance of pose estimation algorithms therefore appears more convenient and effective [10]. While our method showed smaller error fluctuations (Figure 7), improvements could be made in future studies, particularly in addressing errors related to the Leg, Upper arm, and Lower arm joints on the side occluded by the limb. These joints face significant skeleton loss challenges during pose estimation within multi-person interactive nursing care tasks. Therefore, future research efforts should prioritize enhancing the recognition accuracy of these specific joints.
While numerous studies have demonstrated the reliability of OpenPose in calculating joint angles for simple poses [38,39], its performance in complex scenarios involving overlapping bodies and interactions among multiple individuals remains suboptimal. Skeletal compensation methods that rely on left–right skeletal symmetry have proved highly dependent on camera perspective settings [22]. Likewise, when Mask RCNN is employed to confine the boundaries of compensated skeletal points in multi-person scenes, the accuracy of pose skeleton estimation remains unsatisfactory [23]. Existing methods that compensate for occluded skeletons based on a Euclidean distance matrix [24] or that predict future pose skeletons using Human Dynamics [25] share a common limitation: they do not address skeletal misidentification, applying a uniform compensation to both correctly identified and misidentified skeletons. Consequently, the compensated skeletons fail to match the target pose skeleton, exacerbating differences in pose skeleton angles and REBA scores. Taking inspiration from skeleton kinematics, we proposed a skeleton discrimination method based on skeleton kinematic chains, which effectively distinguished different states of skeletal misidentification. Furthermore, we introduced a heterogeneous action feature optimization method that updates heterogeneous action features at the temporal-sequence level. Leveraging the ST-GCN network's ability to assign action labels to different temporal skeletons, we could focus on updating the action features to correct misidentified skeletons. Comparative analysis of REBA scoring accuracy demonstrated the distinct advantages of our method over alternative approaches (Table 7).
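The kinematic-chain discrimination idea can be illustrated with a minimal sketch. The chain definitions follow the OpenPose BODY_25 indices shown in Figure 5; the function names and the tolerance threshold are illustrative assumptions, not the paper's implementation. The intuition: a limb's bone lengths should stay nearly constant across adjacent frames, so an abrupt relative change suggests the joints were assigned to the wrong person rather than simply lost.

```python
import numpy as np

# Limb kinematic chains over OpenPose BODY_25 keypoint indices
# (right arm: 2-3-4, left arm: 5-6-7, right leg: 9-10-11, left leg: 12-13-14).
CHAINS = {
    "arm_r": [2, 3, 4],
    "arm_l": [5, 6, 7],
    "leg_r": [9, 10, 11],
    "leg_l": [12, 13, 14],
}

def bone_lengths(frame, chain):
    """Segment lengths along one kinematic chain for a (25, 2) keypoint frame."""
    pts = frame[chain]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)

def flag_misidentified(prev_frame, cur_frame, tol=0.3):
    """Flag chains whose bone lengths change implausibly between frames.

    A sudden large relative change in a limb's bone lengths suggests the
    joints were matched to the wrong skeleton (misidentification), which
    calls for correction rather than plain interpolation.
    """
    flags = {}
    for name, chain in CHAINS.items():
        prev_len = bone_lengths(prev_frame, chain)
        cur_len = bone_lengths(cur_frame, chain)
        rel = np.abs(cur_len - prev_len) / np.maximum(prev_len, 1e-6)
        flags[name] = bool(np.any(rel > tol))
    return flags
```

In this sketch, a flagged chain would be routed to the correction branch, while unflagged gaps would go to interpolation compensation.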
Furthermore, the primary objective of this study was a comparative analysis between our method and OpenPose in terms of the predictive accuracy of skeletal joint angles at the algorithmic level of 2D pose estimation. Notably, the REBA scoring criteria encompass not only joint angle assessment but also additional scores for joint rotation and other extra points; to ensure consistency across all methods, we manually defined the parameters for rotation and extra-point interventions. While previous research has explored posture risk assessment based on monocular-camera 3D pose estimation [40,41] and achieved good recognition accuracy, it is essential to recognize the inherent limitations of 3D pose evaluation. The computational demands of 3D pose estimation make it less suitable for real-time use, and the reliance on depth cameras or specialized sensors to capture depth data adds complexity to hardware and data collection. In contrast, 2D pose estimation algorithms are more resilient to challenging conditions such as lighting variations and occlusions. Significantly, most existing monocular-camera 3D pose estimation techniques focus on simple pose estimation scenarios, whereas multi-person interactions and limb occlusions present more substantial obstacles to accurate 3D pose estimation.
Collectively, our approach initially explored solutions for multi-person pose estimation from a 2D perspective before transitioning to 3D pose estimation research. The current research findings underscored the feasibility of our method, which might hold wide-ranging applicability in popular mobile devices or surveillance cameras through the utilization of lightweight models. Moreover, our method could be integrated into Internet of Things (IoT) devices equipped with RGB cameras, including smartphones and surveillance systems. Leveraging neural network models and image processing techniques, our method enables the inference of posture information, facilitating risk assessment and visual guidance for WMSDs associated with nursing postures. Looking ahead, the realization of an integrated intelligent nursing posture assessment system becomes a tangible possibility, driven by the advancements achieved through our method.

4.2. Limitations

It is important to acknowledge that our skeletal compensation and correction mechanisms rely on traversing temporal features over a span of 10 frames. Any instance of skeleton loss beyond this range may increase the skeleton miss rate of our method, which limits its REBA score accuracy to 87.34%. Future investigations should therefore focus on mitigating this limitation and exploring a suitable temporal traversal scope for improving accuracy. Furthermore, applying monocular-camera 3D pose evaluation to caregiving would be worth exploring to improve performance in limb occlusion scenarios, as comparing the effectiveness of 3D and 2D approaches would carry significant implications for the field.
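The 10-frame windowed interpolation can be sketched as follows. This is a simplified stand-in for the skeletal interpolation compensation network, not the trained model; the array layout and function name are assumptions. Missing joints are filled linearly only when both enclosing valid frames lie within the window, which is exactly why losses longer than the traversal span remain uncompensated.

```python
import numpy as np

def compensate_missing(seq, window=10):
    """Fill missing 2D keypoints (NaN) by linear interpolation in time.

    seq: array of shape (T, J, 2); a NaN coordinate marks a lost detection.
    Only gaps whose enclosing valid frames lie within `window` frames of
    each other are filled, mirroring the 10-frame traversal span.
    """
    out = seq.astype(float).copy()
    T = out.shape[0]
    idx = np.arange(T)
    for j in range(out.shape[1]):
        for c in range(out.shape[2]):
            x = out[:, j, c]
            valid = ~np.isnan(x)
            if valid.sum() < 2 or valid.all():
                continue
            interp = np.interp(idx, idx[valid], x[valid])
            # nearest valid frame at or before / after each position
            prev_valid = np.maximum.accumulate(np.where(valid, idx, -1))
            next_valid = np.minimum.accumulate(np.where(valid, idx, T)[::-1])[::-1]
            fill = (~valid) & (prev_valid >= 0) & (next_valid < T) \
                   & (next_valid - prev_valid <= window)
            x[fill] = interp[fill]
    return out
```

Gaps wider than the window are left as NaN, reflecting the limitation described above.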

4.3. Directions for Further Research

Although capturing temporal features over a 10-frame interval proved beneficial for nursing care interaction actions, the accuracy of skeleton compensation within this temporal range depends on the speed and complexity of the actions across diverse application scenarios. Future research should therefore investigate the intricacy of pose actions and determine the optimal time span required to match these actions accurately. A model that relates action complexity to time span would significantly enhance the efficiency and effectiveness of skeleton compensation, unlocking substantial potential for intelligent selection of time intervals in various pose estimation scenarios. Furthermore, improving the precision of monocular-camera-based 3D techniques for multi-person pose skeleton estimation is pivotal for caregiving posture assessment, particularly in scenarios involving rotational movements and changes in perspective. Integrating the skeleton compensation and correction techniques derived from 2D approaches into 3D scenes represents a promising avenue, as it addresses the challenge of compensating for skeleton occlusion during rotational maneuvers and visual alterations. Additionally, integrating our approach into Internet of Things (IoT) devices equipped with RGB cameras, such as smartphones and monitoring systems, holds substantial potential: leveraging neural network models and image processing techniques to infer pose information can facilitate risk assessment and visual guidance for work-related musculoskeletal disorders (WMSDs), offering significant opportunities for integrated intelligent pose assessment systems.

5. Conclusions

This study introduced an enhanced ST-GCN-based skeletal compensation method that effectively mitigates skeletal occlusion and misidentification in nursing care tasks. Our approach integrates distinct action features and weights for posture skeletons, using a skeletal discrimination network to evaluate skeleton integrity. To mitigate occlusion, a skeletal interpolation compensation network exploits adjacent temporal contexts; in instances of misidentification, a skeletal correction network optimizes abnormal action features and updates the skeletons accordingly. Our method improved joint angle calculations and achieved higher REBA scoring accuracy than standard OpenPose for nursing task postures. Such improvements are crucial for mitigating the risk of WMSDs in the nursing profession.

Supplementary Materials

A demo can be found at https://github.com/Nicxhan/Skeleton-compensation-and-correction, accessed on 1 January 2024.

Author Contributions

Conceptualization, X.H., N.N. and Z.J.; methodology, X.H., N.N. and Z.J.; software, X.H.; validation, X.H., M.M. and T.S.; formal analysis, X.H., M.M. and T.S.; investigation, X.H., M.M. and T.S.; writing—original draft preparation, X.H.; writing—review and editing, N.N. and Z.J.; visualization, X.H.; supervision, N.N. and Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee at the Center for Clinical Research of the co-authors' hospital (H2019-182).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the motion capture system and related supporting equipment provided by the Micro Mechatronics Laboratory of Yamaguchi University Graduate School of Sciences and Technology for Innovation. The authors would also like to thank the professional medical staff of the Department of Orthopedic Surgery, Yamaguchi University Graduate School of Medicine, for their enthusiastic assistance and guidance.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jacquier-Bret, J.; Gorce, P. Prevalence of Body Area Work-Related Musculoskeletal Disorders among Healthcare Professionals: A Systematic Review. Int. J. Environ. Res. Public Health 2023, 20, 841. [Google Scholar] [CrossRef]
  2. Heuel, L.; Lübstorf, S.; Otto, A.-K.; Wollesen, B. Chronic stress, behavioral tendencies, and determinants of health behaviors in nurses: A mixed-methods approach. BMC Public Health 2022, 22, 624. [Google Scholar] [CrossRef]
  3. Naidoo, R.N.; Haq, S.A. Occupational use syndromes. Best Pract. Res. Clin. Rheumatol. 2008, 22, 677–691. [Google Scholar] [CrossRef]
  4. Asuquo, E.G.; Tighe, S.M.; Bradshaw, C. Interventions to reduce work-related musculoskeletal disorders among healthcare staff in nursing homes; An integrative literature review. Int. J. Nurs. Stud. Adv. 2021, 3, 100033. [Google Scholar] [CrossRef]
  5. Xu, D.; Zhou, H.; Quan, W.; Gusztav, F.; Wang, M.; Baker, J.S.; Gu, Y. Accurately and effectively predict the ACL force: Utilizing biomechanical landing pattern before and after-fatigue. Comput. Meth. Programs Biomed. 2023, 241, 107761. [Google Scholar] [CrossRef] [PubMed]
  6. McAtamney, L.; Corlett, E.N. RULA: A survey method for the investigation of work-related upper limb disorders. Appl. Ergon. 1993, 24, 91–99. [Google Scholar] [CrossRef] [PubMed]
  7. Hignett, S.; McAtamney, L. Rapid entire body assessment (REBA). Appl. Ergon. 2000, 31, 201–205. [Google Scholar] [CrossRef] [PubMed]
  8. Graben, P.R.; Schall, M.C., Jr.; Gallagher, S.; Sesek, R.; Acosta-Sojo, Y. Reliability Analysis of Observation-Based Exposure Assessment Tools for the Upper Extremities: A Systematic Review. Int. J. Environ. Res. Public Health 2022, 19, 10595. [Google Scholar] [CrossRef] [PubMed]
  9. Kee, D. Comparison of OWAS, RULA and REBA for assessing potential work-related musculoskeletal disorders. Int. J. Ind. Ergon. 2021, 83, 103140. [Google Scholar] [CrossRef]
  10. Kim, W.; Sung, J.; Saakes, D.; Huang, C.; Xiong, S. Ergonomic postural assessment using a new open-source human pose estimation technology (OpenPose). Int. J. Ind. Ergon. 2021, 84, 103164. [Google Scholar] [CrossRef]
  11. Xu, D.; Zhou, H.; Quan, W.; Jiang, X.; Liang, M.; Li, S.; Ugbolue, U.C.; Baker, J.S.; Gusztav, F.; Ma, X.; et al. A new method proposed for realizing human gait pattern recognition: Inspirations for the application of sports and clinical gait analysis. Gait Posture 2024, 107, 293–305. [Google Scholar] [CrossRef]
  12. Lind, C.M.; Abtahi, F.; Forsman, M. Wearable Motion Capture Devices for the Prevention of Work-Related Musculoskeletal Disorders in Ergonomics—An Overview of Current Applications, Challenges, and Future Opportunities. Sensors 2023, 23, 4259. [Google Scholar] [CrossRef]
  13. Kalasin, S.; Surareungchai, W. Challenges of Emerging Wearable Sensors for Remote Monitoring toward Telemedicine Healthcare. Anal. Chem. 2023, 95, 1773–1784. [Google Scholar] [CrossRef] [PubMed]
  14. Han, X.; Nishida, N.; Morita, M.; Mitsuda, M.; Jiang, Z. Visualization of Caregiving Posture and Risk Evaluation of Discomfort and Injury. Appl. Sci. 2023, 13, 12699. [Google Scholar] [CrossRef]
  15. Yu, Y.; Umer, W.; Yang, X.; Antwi-Afari, M.F. Posture-related data collection methods for construction workers: A review. Autom. Constr. 2021, 124, 103538. [Google Scholar] [CrossRef]
  16. Xu, D.; Quan, W.; Zhou, H.; Sun, D.; Baker, J.S.; Gu, Y. Explaining the differences of gait patterns between high and low-mileage runners with machine learning. Sci. Rep. 2022, 12, 2981. [Google Scholar] [CrossRef] [PubMed]
  17. Clark, R.A.; Mentiplay, B.F.; Hough, E.; Pua, Y.H. Three-dimensional cameras and skeleton pose tracking for physical function assessment: A review of uses, validity, current developments and Kinect alternatives. Gait Posture 2019, 68, 193–200. [Google Scholar] [CrossRef] [PubMed]
  18. Kendall, A.; Grimes, M.; Cipolla, R. Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2938–2946. [Google Scholar]
  19. Güler, R.A.; Neverova, N.; Kokkinos, I. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7297–7306. [Google Scholar]
  20. Huang, J.; Zhu, Z.; Huang, G. Multi-stage HRNet: Multiple stage high-resolution network for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  21. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  22. Huang, C.C.; Nguyen, M.H. Robust 3D skeleton tracking based on openpose and a probabilistic tracking framework. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 4107–4112. [Google Scholar]
  23. Tsai, M.F.; Huang, S.H. Enhancing accuracy of human action Recognition System using Skeleton Point correction method. Multimed. Tools Appl. 2022, 81, 7439–7459. [Google Scholar] [CrossRef]
  24. Guo, X.; Dai, Y. Occluded joints recovery in 3d human pose estimation based on distance matrix. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 1325–1330. [Google Scholar]
  25. Kanazawa, A.; Zhang, J.Y.; Felsen, P.; Malik, J. Learning 3d human dynamics from video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5614–5623. [Google Scholar]
  26. Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; p. 32. [Google Scholar]
  27. Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; Hu, W. Channel-wise topology refinement graph convolution for skeleton-based action recognition. In Proceedings of the IEEE/CVF international conference on computer vision, Montreal, BC, Canada, 11–17 October 2021; pp. 13359–13368. [Google Scholar]
  28. Li, G.; Zhang, M.; Li, J.; Lv, F.; Tong, G. Efficient densely connected convolutional neural networks. Pattern Recognit. 2021, 109, 107610. [Google Scholar] [CrossRef]
  29. Wandt, B.; Ackermann, H.; Rosenhahn, B. A kinematic chain space for monocular motion capture. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
  30. Natarajan, B.; Elakkiya, R. Dynamic GAN for high-quality sign language video generation from skeletal poses using generative adversarial networks. Soft Comput. 2022, 26, 13153–13175. [Google Scholar] [CrossRef]
  31. Howarth, S.J.; Callaghan, J.P. Quantitative assessment of the accuracy for three interpolation techniques in kinematic analysis of human movement. Comput. Methods Biomech. Biomed. Eng. 2010, 13, 847–855. [Google Scholar] [CrossRef] [PubMed]
  32. Gauss, J.F.; Brandin, C.; Heberle, A.; Löwe, W. Smoothing skeleton avatar visualizations using signal processing technology. SN Comput. Sci. 2021, 2, 429. [Google Scholar] [CrossRef]
  33. Miyajima, S.; Tanaka, T.; Imamura, Y.; Kusaka, T. Lumbar joint torque estimation based on simplified motion measurement using multiple inertial sensors. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 6716–6719. [Google Scholar]
  34. Liang, F.Y.; Gao, F.; Liao, W.H. Synergy-based knee angle estimation using kinematics of thigh. Gait Posture 2021, 89, 25–30. [Google Scholar] [CrossRef]
  35. Figueiredo, L.C.; Gratão, A.C.M.; Barbosa, G.C.; Monteiro, D.Q.; Pelegrini, L.N.d.C.; Sato, T.d.O. Musculoskeletal symptoms in formal and informal caregivers of elderly people. Rev. Bras. Enferm. 2021, 75, e20210249. [Google Scholar] [CrossRef]
  36. Yu, Y.; Li, H.; Yang, X.; Kong, L.; Luo, X.; Wong, A.Y.L. An automatic and non-invasive physical fatigue assessment method for construction workers. Autom. Constr. 2019, 103, 1–12. [Google Scholar] [CrossRef]
  37. Li, L.; Martin, T.; Xu, X. A novel vision-based real-time method for evaluating postural risk factors associated with musculoskeletal disorders. Appl. Ergon. 2020, 87, 103138. [Google Scholar] [CrossRef] [PubMed]
  38. Li, Z.; Zhang, R.; Lee, C.-H.; Lee, Y.-C. An evaluation of posture recognition based on intelligent rapid entire body assessment system for determining musculoskeletal disorders. Sensors 2020, 20, 4414. [Google Scholar] [CrossRef]
  39. Xu, D.; Zhou, H.; Quan, W.; Gusztav, F.; Baker, J.S.; Gu, Y. Adaptive neuro-fuzzy inference system model driven by the non-negative matrix factorization-extracted muscle synergy patterns to estimate lower limb joint movements. Comput. Meth. Programs Biomed. 2023, 242, 107848. [Google Scholar] [CrossRef]
  40. Yuan, H.; Zhou, Y. Ergonomic assessment based on monocular RGB camera in elderly care by a new multi-person 3D pose estimation technique (ROMP). Int. J. Ind. Ergon. 2023, 95, 103440. [Google Scholar] [CrossRef]
  41. Liu, P.L.; Chang, C.C. Simple method integrating OpenPose and RGB-D camera for identifying 3D body landmark locations in various postures. Int. J. Ind. Ergon. 2022, 91, 103354. [Google Scholar] [CrossRef]
Figure 1. Overview of our skeleton compensation method.
Figure 2. (a) Spatial temporal graph convolutional network structure. (b) Spatial Feature layer and Spatial Conv structure. (c) Spatial Temporal Feature layer and Spatial–Temporal Conv structure.
Figure 3. Skeleton compensation for missing frames (left to right: skeleton loss in OpenPose, missing skeleton frame, complete skeleton traverse, skeleton interpolation compensation, compensated skeleton).
Figure 4. Skeleton correction for misidentified frames, accomplished by employing action features and weights when skeleton misidentification was detected; A and B represent the skeleton action features.
Figure 5. Pose estimation skeleton key point numbers. OpenPose detects 25 key skeletal points on the human body for joint construction and skeleton analysis; numbers 0 to 24 denote the different skeletal points.
Figure 6. (a) The utilization of OpenPose for pose estimation in the nursing task gave rise to missing and misidentified skeletons. (b) The variations in the angles of the upper arm and leg in the presence of skeleton loss and misidentification (orange: angle data obtained by OpenPose) and after skeleton compensation (green: angle data obtained by our method). (c) The effect of our skeleton compensation method.
Figure 7. MAE of different joint angles. * p < 0.05, ** p < 0.01.
Table 1. Joint angles list.
Joint Angle | Involved Skeletal Points
Trunk flexion angle | ∠1, 8, 8′
Neck flexion angle | ∠0, 1, 1′
Left leg flexion angle | ∠12, 13, 14
Right leg flexion angle | ∠9, 10, 11
Left upper arm flexion angle | ∠5′, 5, 6
Right upper arm flexion angle | ∠2′, 2, 3
Left lower arm flexion angle | ∠5, 6, 7
Right lower arm flexion angle | ∠2, 3, 4
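Each angle in Table 1 is the angle at the middle keypoint of its three listed skeletal points. A minimal sketch of that vector-based computation (the function name is ours, not the paper's):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at vertex b formed by keypoints a-b-c, e.g. the
    right lower arm flexion angle from OpenPose points 2, 3, 4."""
    ba = np.asarray(a, float) - np.asarray(b, float)
    bc = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    # clip guards against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
```

For example, `joint_angle((0, 1), (0, 0), (1, 0))` evaluates to approximately 90 degrees.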
Table 2. REBA risk level list.
Action Level | REBA Score | Risk Level | Correction Suggestion
0 | 1 | Negligible | None necessary
1 | 2–3 | Low | Maybe necessary
2 | 4–7 | Medium | Necessary
3 | 8–10 | High | Necessary soon
4 | 11–15 | Very high | Necessary now
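The mapping in Table 2 amounts to a simple band lookup; a sketch (function name ours):

```python
def reba_action(score):
    """Map a REBA score (1-15) to its action level, risk level, and
    correction suggestion, following Table 2."""
    bands = [
        (1, 1, 0, "Negligible", "None necessary"),
        (2, 3, 1, "Low", "Maybe necessary"),
        (4, 7, 2, "Medium", "Necessary"),
        (8, 10, 3, "High", "Necessary soon"),
        (11, 15, 4, "Very high", "Necessary now"),
    ]
    for lo, hi, level, risk, advice in bands:
        if lo <= score <= hi:
            return level, risk, advice
    raise ValueError("REBA score must lie between 1 and 15")
```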
Table 3. Accuracy calculation parameters.
Nursing Task Video | Frame 1 | Frame 2 | Frame i | Frame n
OpenPose: Joint angle | Ao1 | Ao2 | Aoi | Aon
OpenPose: REBA | Ro1 | Ro2 | Roi | Ron
Inertial sensors: Joint angle | As1 | As2 | Asi | Asn
Inertial sensors: REBA | Rs1 | Rs2 | Rsi | Rsn
Ours: Joint angle | A1 | A2 | Ai | An
Ours: REBA | R1 | R2 | Ri | Rn
Accuracy: Joint angle error | [Ao1, As1, A1] | [Ao2, As2, A2] | [Aoi, Asi, Ai] | [Aon, Asn, An]
Accuracy: REBA score error | [Ro1, Rs1, R1] | [Ro2, Rs2, R2] | [Roi, Rsi, Ri] | [Ron, Rsn, Rn]
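From per-frame values like those in Table 3, the errors can be summarized, for instance, as the mean absolute joint-angle error against the inertial-sensor reference and as the fraction of frames whose REBA score matches the sensor-derived score. The paper's exact accuracy formula is not reproduced here, so this sketch is only one plausible reading (function names ours):

```python
import numpy as np

def angle_mae(pred_angles, sensor_angles):
    """Mean absolute error of per-frame joint angles vs. the sensor reference."""
    pred = np.asarray(pred_angles, float)
    ref = np.asarray(sensor_angles, float)
    return float(np.mean(np.abs(pred - ref)))

def reba_agreement(pred_scores, sensor_scores):
    """Fraction of frames whose REBA score matches the sensor-derived score."""
    pred = np.asarray(pred_scores)
    ref = np.asarray(sensor_scores)
    return float(np.mean(pred == ref))
```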
Table 4. Overall skeleton missing rate and misidentification rate for all frames.
Joints | Skeleton Missing Rate (OpenPose / Ours) | Skeleton Misidentification Rate (OpenPose / Ours)
Trunk | 0.18% / 0.07% | 20.60% / 2.18%
Leg-R | | 16.79% / 5.96%
Upper arm-R | | 22.42% / 10.36%
Lower arm-R | | 64.68% / 51.67%
Neck | | 22.06% / 7.01%
Leg-L | | 8.47% / 1.78%
Upper arm-L | | 11.19% / 0.29%
Lower arm-L | | 12.75% / 0.58%
Table 5. Errors between different joint angles.
Joints | Eangle1 (N = 8) | p1 | Eangle2 (N = 8) | p2 | Eangle3 (N = 8) | p3
Trunk | −0.166 ± 18.526 | p = 0.628 | −0.019 ± 2.345 | p = 0.659 | −0.017 ± 18.800 | p = 0.961
Leg-R | 3.880 ± 18.591 | p < 0.001 | −0.060 ± 2.324 | p = 0.160 | 0.882 ± 6.090 | p < 0.001
Upper arm-R | 3.145 ± 10.742 | p < 0.001 | −0.186 ± 4.475 | p = 0.025 | 0.755 ± 10.136 | p < 0.001
Lower arm-R | 3.969 ± 30.840 | p < 0.001 | −0.226 ± 4.427 | p = 0.006 | −0.108 ± 18.481 | p = 0.752
Neck | −1.956 ± 14.891 | p < 0.001 | −0.072 ± 2.281 | p = 0.087 | 1.963 ± 14.436 | p < 0.001
Leg-L | −1.069 ± 7.174 | p < 0.001 | −0.125 ± 4.512 | p = 0.134 | −4.098 ± 30.771 | p < 0.001
Upper arm-L | −1.014 ± 10.605 | p < 0.001 | −0.059 ± 2.292 | p = 0.165 | 0.773 ± 9.903 | p < 0.001
Lower arm-L | 2.473 ± 27.971 | p < 0.001 | 0.006 ± 4.586 | p = 0.942 | −3.001 ± 27.793 | p < 0.001
Table 6. Errors between joint angle score and REBA score.
Joints | EREBA1 (N = 8) | p-Value | EREBA2 (N = 8) | p-Value
Trunk | −0.001 ± 0.207 | p = 0.788 | 0 ± 0.159 | p = 1
Leg-R | 0.255 ± 0.568 | p < 0.001 | 0.015 ± 0.465 | p = 0.066
Upper arm-R | −0.176 ± 0.644 | p < 0.001 | −0.005 ± 0.302 | p = 0.296
Lower arm-R | −0.154 ± 0.635 | p < 0.001 | 0.235 ± 0.448 | p < 0.001
Neck | 0.003 ± 0.132 | p = 0.124 | −0.003 ± 0.395 | p = 0.638
Leg-L | −0.027 ± 0.282 | p < 0.001 | 0.012 ± 0.506 | p = 0.186
Upper arm-L | 0.013 ± 0.282 | p = 0.013 | 0.001 ± 0.186 | p = 0.619
Lower arm-L | 0.098 ± 0.309 | p < 0.001 | 0.234 ± 0.508 | p = 0.325
REBA | 0.116 ± 1.128 | p < 0.001 | −0.003 ± 0.208 | p = 0.373
Table 7. Accuracy of REBA score by different methods in nursing care tasks.
Joints | OpenPose | Tsai et al. [23] | Guo et al. [24] | Kanazawa et al. [25] | Ours
Trunk | 91.92% | 90.34% | 92.36% | 95.32% | 95.65%
Leg-R | 81.43% | 86.61% | 86.42% | 88.33% | 87.47%
Upper arm-R | 71.61% | 72.41% | 72.98% | 75.79% | 76.95%
Lower arm-R | 47.76% | 59.87% | 60.14% | 62.87% | 64.31%
Neck | 76.96% | 82.86% | 87.95% | 86.97% | 87.96%
Leg-L | 82.94% | 83.14% | 89.76% | 91.61% | 90.81%
Upper arm-L | 80.25% | 85.27% | 92.31% | 91.89% | 92.13%
Lower arm-L | 84.26% | 87.35% | 91.14% | 95.57% | 91.68%
REBA | 58.33% | 63.29% | 76.63% | 80.46% | 87.34%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Han, X.; Nishida, N.; Morita, M.; Sakai, T.; Jiang, Z. Compensation Method for Missing and Misidentified Skeletons in Nursing Care Action Assessment by Improving Spatial Temporal Graph Convolutional Networks. Bioengineering 2024, 11, 127. https://doi.org/10.3390/bioengineering11020127

