A Posture Training System Based on Therblig Analysis and YOLO Model—Taking Erhu Bowing as an Example
Abstract
1. Introduction
2. Related Work
2.1. YOLO Models
- Backbone: The backbone is a stack of convolutional layers (a CNN) that extracts useful features from the input image.
- Neck: The neck, which connects the backbone to the head, aggregates and refines features at different scales.
- Head: The head comprises the output prediction layers. In these layers, the number of classes can be modified according to the application and data set.
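As a concrete illustration of how these components are used in practice, the following minimal sketch fine-tunes a pretrained detector on a custom data set via the Ultralytics API cited below; the `erhu.yaml` data set configuration (which would declare, e.g., "bow" and "right hand" classes) and `frame.jpg` input are hypothetical placeholders. When training starts, the head's prediction layers are rebuilt to match the class count declared in the data set configuration, which is the modification described above.

```python
# Minimal sketch: fine-tuning a pretrained YOLO detector on custom
# classes with the Ultralytics API. "erhu.yaml" and "frame.jpg" are
# hypothetical placeholders, not files from this study.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained backbone/neck/head weights

# Training rebuilds the head's output layers to match the number of
# classes declared in the data set configuration file.
model.train(data="erhu.yaml", epochs=100, imgsz=640)

# Inference: each result carries boxes with class IDs and confidences.
results = model("frame.jpg")
for box in results[0].boxes:
    cls_id = int(box.cls)                  # predicted class index
    conf = float(box.conf)                 # confidence score
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding-box corners
    print(cls_id, conf, (x1, y1, x2, y2))
```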
2.2. Therblig Analysis
2.3. Computer-Assisted Learning for Musical Instruments
3. Overview of the Proposed System
4. Posture Requirement Analysis
4.1. Steps Involved in Erhu Bowing
- The erhu player must select and pick up an erhu.
- Although the erhu can be played in a standing position, learners are advised to sit on a chair when playing the instrument. In addition, the bottom of the erhu should rest on top of the player's left leg (Figure 2).
- After the player is seated, they must hold the stem of the erhu with their left hand and then grasp the bow head with their right hand.
- With the right hand grasping the bow head, the player places the bow hair flat on top of the erhu barrel.
- The bow is then pulled across the strings from left to right (bow pulling).
- When the bow head reaches the right boundary or the player wishes to return to the left side, they move the bow from right to left (bow pushing). The cycle of bow pulling and bow pushing is called erhu bowing.
- The player can stop the bow movement at any time to end the practice.
- After the end of the practice, the player must fold the bow and the erhu body together.
- The player must then stand up and go to a suitable location to place the erhu.
- Finally, the player sets the erhu down at the identified location.
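To relate these steps to the Therblig analysis introduced in Section 2.2, the sketch below encodes the bowing procedure as a sequence of Therblig codes (see the Therblig table in this article). The particular step-to-code mapping shown here is an illustrative assumption, not the paper's exact decomposition.

```python
# Illustrative encoding of the erhu bowing steps as Therblig sequences.
# The mapping is an assumed example for demonstration purposes only.
ERHU_BOWING_STEPS = [
    ("select and pick up the erhu",     ["SH", "ST", "G"]),  # Search, Select, Grasp
    ("sit; rest erhu on the left leg",  ["TL", "P"]),        # Move, Position
    ("hold stem; grasp bow head",       ["H", "G"]),         # Hold, Grasp
    ("lay bow hair flat on the barrel", ["P"]),              # Position
    ("pull the bow (left to right)",    ["U"]),              # Use
    ("push the bow (right to left)",    ["U"]),              # Use
    ("fold bow and erhu body together", ["A"]),              # Assemble
    ("carry erhu to storage location",  ["TL"]),             # Move
    ("place the erhu down",             ["P", "RL"]),        # Position, Release load
]

# Flatten into the Therblig sequence a detector could try to verify.
sequence = [code for _, codes in ERHU_BOWING_STEPS for code in codes]
print(" -> ".join(sequence))
```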
4.2. Selection of Critical Objects
4.3. Associations Between the Erhu Player and Critical Objects
5. Posture Understanding
5.1. Development of the YOLO-OD Model
5.2. Detection Algorithms
5.3. Detection of Posture and Movement
5.3.1. Bow Level
5.3.2. Bow Straightness
6. Posture Training System
6.1. System Design
6.2. Scoring Methods
6.2.1. Scoring Method for Bow Level
6.2.2. Scoring Method for Bow Straightness
- Trajectory of the bow's slope during bow movement: We applied the algorithm discussed in Section 5.2 to determine the learner's bow-pull and bow-push phases; in the visualization, green marks bow pulls and blue marks bow pushes. Plotting the bow's up-and-down movement frame by frame in these colors makes changes in bowing direction easy to follow. These data not only help professionals visualize changes in the learner's bow movements but also enable future automated analysis to provide suggestions.
- Offset position of the BSB relative to the baseline: The upper or lower boundary range of the BSB indicates the learner’s bowing habits. This chart allows for the identification of shortcomings in the learner’s habits, facilitating appropriate adjustments.
- BSB overlap range between bow pull and bow push: We calculate the BSB separately for the bow-pull and bow-push phases. The greater the overlap between the upper and lower boundaries of these two bands, the more stable the learner's bow-handling habits; we call this the overlap range ratio. Because higher (wider) BSBs tend to overlap more, a second metric normalizes the overlap by the union of the two BSB ranges at each time, yielding the IoU-style ratio reported in the results; a sketch of this computation follows this list.
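The following is a minimal sketch of these overlap metrics, under the assumption that a BSB is represented by the (minimum, maximum) range of the bow's slope observed during one phase; the function names and sample slope values are illustrative.

```python
# Sketch of the BSB overlap metrics: band intersection, union, and
# their IoU-style ratio, assuming a BSB is the (min, max) slope range
# observed over one bowing phase.
def bsb(slopes):
    """Bow-slope band for one phase: (lower bound, upper bound)."""
    return min(slopes), max(slopes)

def bsb_iou(pull, push):
    """Overlap of two bands normalized by their union (IoU-style)."""
    inter = max(0.0, min(pull[1], push[1]) - max(pull[0], push[0]))
    union = max(pull[1], push[1]) - min(pull[0], push[0])
    return inter / union if union > 0 else 0.0

# Hypothetical per-frame slope values for each phase:
pull_band = bsb([0.10, 0.14, 0.12, 0.13])
push_band = bsb([0.11, 0.15, 0.13, 0.14])
print(f"IoU of pull/push BSBs: {bsb_iou(pull_band, push_band):.2f}")
```

The normalization by the union is what keeps a learner with a wide, sloppy slope range from scoring well merely because wide bands overlap easily.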
6.3. Interactive Learning Outcomes with System Assistance
- Group 1: Initially used the system before discontinuing its use. Learners A, B, and C were in group 1.
- Group 2: Began without the system and later incorporated it into their practice. Learners D, E, and F were in group 2.
7. Discussion
- If the performer in the video is not directly facing the camera, or if the shooting angle differs from the images in the YOLO-OD model's training data set, the proposed system may yield errors in assessing bow posture. Similarly, variations in the camera's tilt angle (e.g., high or low angles) can distort the spatial relationships between key points, further reducing the accuracy of posture assessment. To address these issues, posture key points can be used to calculate the body's angles, allowing the bow's posture to be inferred as if the player were facing the camera. Future iterations of the model could incorporate methods for correcting or normalizing data captured from tilted perspectives. The model's performance may also be influenced by environmental conditions: differences between indoor and outdoor settings, changes in natural or artificial lighting, and background complexity could introduce inconsistencies in data quality, reducing the model's robustness in diverse scenarios. Expanding the training data set to include diverse camera angles and environmental conditions, as well as exploring normalization techniques, could enhance the system's adaptability and reliability.
- Maintaining a relaxed right wrist while holding the bow is another fundamental requirement. When pulling the bow, the player's wrist bends inward; when pushing the bow, it extends outward. However, wrist flexion is difficult to discern from a frontal view because the palm obscures the wrist. This issue can be addressed with IoT devices such as accelerometers, or by training a YOLO-pose model to identify the key points of the right hand. Alternatively, a YOLO segmentation model could analyze the proportions of the inner and outer areas of the right palm to determine the wrist's flexion state.
- The section of the bow being used, together with the angle of the player's right upper limb relative to the body, indicates whether the right arm is moving into the correct position. For example, the angle between the performer's right arm and the body should not exceed 45 degrees when using the right half of the bow, or 30 degrees when using the left half; a keypoint-based sketch of this check follows this list.
- Another prevalent issue for beginners of the erhu is that the bow hair occasionally leaves the sound barrel. Some studies have used magnetometers to detect this occurrence. Our YOLO models, in conjunction with our bowing detection algorithm, can calculate the movement of the intersection point between the bowing center line and the bow posture line in the vertical direction. This analysis might help in determining whether the bow is being lifted.
- By detecting which sections of the bow the player uses most often and calculating the length of bow utilized, the system can remind the erhu player to make use of the full bow. It can also characterize the player's technique by measuring the length of each bow stroke and converting frame counts to elapsed time via the video frame rate, enabling the identification of various bowing methods, such as the long bow, short bow, jumping bow, and throwing bow.
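For the arm-angle check in the third item above, a hedged sketch follows. It assumes COCO-ordered keypoints from a YOLO pose model (index 6: right shoulder, index 8: right elbow) and an approximately vertical torso in a frontal view; the indices, threshold handling, and sample coordinates are assumptions, not the system's implementation.

```python
# Sketch: estimating the angle between the right upper arm and the body
# from 2D pose keypoints. Assumes COCO ordering (6: right shoulder,
# 8: right elbow) and a roughly vertical torso in a frontal view.
import math

def angle_deg(v1, v2):
    """Angle between two 2D vectors, in degrees."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def right_arm_body_angle(kpts):
    """kpts maps keypoint index to (x, y) in image coordinates."""
    sx, sy = kpts[6]            # right shoulder
    ex, ey = kpts[8]            # right elbow
    upper_arm = (ex - sx, ey - sy)
    torso_down = (0.0, 1.0)     # straight down in image space
    return angle_deg(upper_arm, torso_down)

# Hypothetical keypoints; flag the 45-degree rule for the right half-bow.
kpts = {6: (320.0, 200.0), 8: (390.0, 260.0)}
if right_arm_body_angle(kpts) > 45:
    print("Right arm opens more than 45 degrees from the body")
```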
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
YOLO | You Only Look Once
CV | Computer vision
OD | Object detection
IoU | Intersection over union
mAP | Mean average precision
References
- Chao, Z.; Karin, K. An Analysis of the Problems and Difficulties in Erhu Teaching. J. Educ. Mahasarakham Univ. 2019, 13, 184–192.
- Araújo, M.V.; Hein, C.F. A survey to investigate advanced musicians' flow disposition in individual music practice. Int. J. Music Educ. 2019, 37, 107–117.
- Motokawa, Y.; Saito, H. Support system for guitar playing using augmented reality display. In Proceedings of the 2006 IEEE/ACM International Symposium on Mixed and Augmented Reality, Santa Barbara, CA, USA, 22–25 October 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 243–244.
- Burloiu, G. Interactive Learning of Microtiming in an Expressive Drum Machine. In Proceedings of the 2020 Joint Conference on AI Music Creativity, Stockholm, Sweden, 22–24 October 2020.
- Johnson, D.; Damian, D.; Tzanetakis, G. Detecting Hand Posture in Piano Playing Using Depth Data. Comput. Music J. 2020, 43, 59–78.
- Chen, Y. Interactive piano training using augmented reality and the Internet of Things. Educ. Inf. Technol. 2023, 28, 6373–6389.
- Lei, Y.; Long, Z.; Liang, S.; Zhong, T.; Xing, L.; Xue, X. Self-powered piezoelectric player-interactive patch for guitar learning assistance. Sci. China Technol. Sci. 2022, 65, 2695–2702.
- Kikukawa, F.; Soga, M.; Taki, H. Development of a gesture learning environment for novices' erhu bow strokes. Procedia Comput. Sci. 2014, 35, 1323–1332.
- Pardue, L.S.; Harte, C.; McPherson, A.P. A low-cost real-time tracking system for violin. J. New Music Res. 2015, 44, 305–323.
- Sun, S.W.; Liu, B.Y.; Chang, P.C. Deep learning-based violin bowing action recognition. Sensors 2020, 20, 5732.
- Jaouedi, N.; Boujnah, N.; Bouhlel, M.S. A new hybrid deep learning model for human action recognition. J. King Saud Univ.-Comput. Inf. Sci. 2020, 32, 447–453.
- Provenzale, C.; Di Stefano, N.; Noccaro, A.; Taffoni, F. Assessing the bowing technique in violin beginners using MIMU and optical proximity sensors: A feasibility study. Sensors 2021, 21, 5817.
- Zhang, J.; Wang, P.; Gao, R.X. Hybrid machine learning for human action recognition and prediction in assembly. Robot. Comput.-Integr. Manuf. 2021, 72, 102184.
- Shah, K.; Shah, A.; Lau, C.P.; de Melo, C.M.; Chellappa, R. Multi-View Action Recognition Using Contrastive Learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 3381–3391.
- The Yomiuri Shimbun. More Japan Firms Turning to VR for Customer Service Training. 2023. Available online: https://japannews.yomiuri.co.jp/business/companies/20230508-108386/ (accessed on 8 May 2023).
- Eschen, H.; Kötter, T.; Rodeck, R.; Harnisch, M.; Schüppstuhl, T. Augmented and Virtual Reality for Inspection and Maintenance Processes in the Aviation Industry. Procedia Manuf. 2018, 19, 156–163.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13029–13038.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696.
- Nelson, J.; Solawetz, J. YOLOv5 Is Here: State-of-the-Art Object Detection at 140 FPS. 2020. Available online: https://blog.roboflow.com/yolov5-is-here/ (accessed on 15 May 2023).
- Ultralytics. Ultralytics YOLOv8: The State-of-the-Art YOLO Model. 2023. Available online: https://ultralytics.com/yolov8 (accessed on 15 May 2023).
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of YOLO algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073.
- Terven, J.; Cordova-Esparza, D. A Comprehensive Review of YOLO: From YOLOv1 to YOLOv8 and Beyond. arXiv 2023, arXiv:2304.00501.
- Wang, J. Introduction to Erhu Music in China. Int. J. Res. Publ. Rev. 2022, ISSN 2582-7421.
- Zhao, J. Technical Analysis and Method Research of Erhu Bowing. Can. Soc. Sci. 2018, 14, 13–16.
- Yao, W. Chapter Two: The Skill of the Right Hand. In Follow Me to Learn the Erhu; Anhui Literature and Art Publishing House: Hefei, China, 2011; pp. 13–16.
- Lavorini, F.; Levy, M.L.; Corrigan, C.; Crompton, G. The ADMIT series-issues in inhalation therapy. 6) Training tools for inhalation devices. Prim. Care Respir. J. 2010, 19, 335–341.
- Healthdirect Australia. How to Use an Asthma Inhaler. 2021. Available online: https://www.healthdirect.gov.au/how-to-use-an-asthma-inhaler (accessed on 10 May 2023).
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976.
- Ferguson, D. Therbligs: The Keys to Simplifying Work. 2000. Available online: https://gilbrethnetwork.tripod.com/therbligs.html (accessed on 28 March 2023).
- Yen, Y.R. A study of developing the procedural logic learning system using the concept of Therbligs. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6.
- Oyekan, J.; Hutabarat, W.; Turner, C.; Arnoult, C.; Tiwari, A. Using Therbligs to embed intelligence in workpieces for digital assistive assembly. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 2489–2503.
- Dessalene, E.; Maynord, M.; Fermuller, C.; Aloimonos, Y. Therbligs in Action: Video Understanding through Motion Primitives. arXiv 2023, arXiv:2304.03631.
- Young, D. The hyperbow controller: Real-time dynamics measurement of violin performance. In Proceedings of the 2002 Conference on New Interfaces for Musical Expression, Dublin, Ireland, 24–26 May 2002; pp. 1–6.
- Lu, B.; Dow, C.R.; Peng, C.J. Bowing Detection for Erhu Learners Using YOLO Deep Learning Techniques. In HCI International 2020-Posters: 22nd International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020, Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2020; pp. 193–198.
- Enkhbat, A.; Shih, T.K.; Gochoo, M.; Cheewaprakobkit, P.; Aditya, W.; Duy Quy, T.; Lin, H.; Lin, Y.T. Using Hybrid Models for Action Correction in Instrument Learning Based on AI. IEEE Access 2024, 12, 125319–125331.
- Larkin, O.; Koerselman, T.; Ong, B.; Ng, K. Sonification of bowing features for string instrument training. In Proceedings of the 14th International Conference on Auditory Display (ICAD2008), Paris, France, 24–27 June 2008; International Community for Auditory Display: Troy, NY, USA, 2008.
- Wang, L. An empirical study of interactive experience and teaching effect of erhu performance in virtual reality environment. Appl. Math. Nonlinear Sci. 2024, 9.
- Lv, W.; Xu, S.; Zhao, Y.; Wang, G.; Wei, J.; Cui, C.; Du, Y.; Dang, Q.; Liu, Y. DETRs Beat YOLOs on Real-time Object Detection. arXiv 2023, arXiv:2304.08069.
Therblig | Code | Description |
---|---|---|
1. Search | SH | Seeking an object using the eyes and hands |
2. Select | ST | Choosing among several objects |
3. Grasp | G | Grasping an object with a hand |
4. Transport Empty (Reach) | TE | Reaching toward an object with an empty hand |
5. Transport Loaded (Move) | TL | Moving an object using a hand motion |
6. Hold | H | Holding an object |
7. Release load | RL | Releasing control of an object |
8. Position | P | Positioning or orienting an object in the defined location |
9. Preposition | PP | Positioning or orienting an object for the next operation, in an approximate location |
10. Inspect | I | Determining the quality or the characteristics of an object using the eyes and/or other senses |
11. Assemble | A | Joining multiple components together |
12. Disassemble | DA | Separating multiple components that were joined |
13. Use | U | Manipulating a tool in the intended way during working |
14. Unavoidable Delay | UD | Waiting due to factors beyond the worker’s control |
15. Avoidable Delay | AD | Waiting within the worker’s control that causes idleness |
16. Plan | PN | Deciding on a course of action |
17. Rest | R | Resting to overcome fatigue (e.g., pausing during a motion) |
18. Find | F | A momentary mental reaction at the end of the search |
Feature | Existing Algorithm | Proposed YOLO-Based Approach |
---|---|---|
Detection Method | Optical motion capture (e.g., VICON) or GCN+TCN models | YOLO models for object detection and pose estimation |
Cost | High (specialized hardware required) | Low (requires only a camera) |
Real-Time Feedback | Limited or post-analysis | Immediate feedback with sound and visual alerts |
Scalability | Limited (requires specific setups) | High (can be deployed in home or classroom environments) |
Precision | Extremely high, for professional use | Sufficient for educational purposes |
Ease of Use | Complex setup and high computational requirements | Simple deployment and low computational overhead |
Learning Progress Analysis | Detailed error analysis and progress tracking | Quantitative metrics (e.g., BLS, BSS) for tracking progress |
Target Users | Professionals and researchers | Beginners and general learners |
Performance | YOLOv3-spp | YOLOv4 | YOLOv5m6 | YOLOv6m | YOLOv7 | YOLOv8n | RTDETR-Large [42]
---|---|---|---|---|---|---|---
mAP—Bow | 95.3% | 99.5% | 100.0% | 99.9% | 99.5% | 100.0% | 99.6% |
mAP—Right Hand | 99.5% | 100.0% | 100.0% | 99.9% | 99.5% | 99.9% | 99.6% |
mAP@0.5 | 97.4% | 99.8% | 99.5% | 99.5% | 100.0% | 99.5% | 99.5%
avgIoU | 76.3% | 84.5% | 90.1% | 88.8% | 90.5% | 90.1% | 89.3% |
F1 score | 0.96 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
Recall | 0.94 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
Layers | 114 | 162 | 323 | 194 | 314 | 168 | 498 |
Params (M) | 134.2 | 134.2 | 41.1 | 51.9 | 36.4 | 3.0 | 31.9 |
FLOPs (G) | 140.3 | 127.2 | 65.1 | 161.1 | 103.2 | 8.1 | 110 |
Performance | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 |
---|---|---|---|---|---|---|---|---|---|---|
mAP—Bow | 98.9% | 98.4% | 100.0% | 100.0% | 99.3% | 100.0% | 91.9% | 99.5% | 90.0% | 100.0% |
mAP—Right Hand | 88.4% | 99.5% | 100.0% | 99.5% | 98.4% | 100.0% | 99.4% | 100.0% | 100.0% | 90.0% |
mAP@0.5 | 98.2% | 99.5% | 99.5% | 99.5% | 99.5% | 99.5% | 99.5% | 97.3% | 96.0% | 95.0%
avgIoU—Bow | 81.5% | 94.8% | 85.8% | 89.2% | 86.1% | 84.7% | 94.5% | 83.3% | 74.3% | 90.5% |
avgIoU—RH | 60.5% | 74.6% | 82.9% | 72.1% | 80.7% | 85.8% | 82.6% | 82.7% | 82.5% | 71.8% |
F1 score | 0.83 | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 0.98 | 0.97 | 0.95 | 0.95 |
Recall | 0.95 | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 | 0.95 | 0.95 | 0.95 |
Total frames | 812 | 447 | 1941 | 1947 | 827 | 841 | 866 | 1462 | 3145 | 1499 |
Bow detected | 806 (99%) | 447 (100%) | 1941 (100%) | 1930 (99%) | 798 (96%) | 756 (89%) | 866 (100%) | 1448 (99%) | 2987 (94%) | 1499 (100%)
RH detected | 630 (77%) | 443 (99%) | 1940 (99%) | 1932 (99%) | 806 (97%) | 841 (100%) | 853 (98%) | 1434 (98%) | 3026 (96%) | 1498 (99%)
Both detected | 626 (77%) | 443 (99%) | 1940 (99%) | 1915 (98%) | 777 (93%) | 756 (89%) | 853 (98%) | 1421 (97%) | 2893 (91%) | 1498 (99%)
Performance | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 |
---|---|---|---|---|---|---|---|---|---|---|
mAP—Bow | 98.5% | 100.0% | 98.5% | 99.5% | 97.9% | 98.6% | 92.6% | 99.0% | 99.3% | 98.4% |
mAP—Right Hand | 99.6% | 99.5% | 100.0% | 99.5% | 90.6% | 100.0% | 100.0% | 99.2% | 98.4% | 100.0% |
mAP@0.5 | 99.5% | 99.5% | 99.5% | 99.5% | 98.6% | 99.5% | 99.5% | 96.8% | 99.5% | 99.5%
avgIoU—Bow | 89.8% | 95.1% | 84.9% | 88.5% | 86.9% | 85.8% | 93.9% | 81.3% | 86.5% | 90.4% |
avgIoU—RH | 92.5% | 76.9% | 76.6% | 74.3% | 82.6% | 87.2% | 81.7% | 74.8% | 90.4% | 67.8% |
F1 score | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 1.00 | 0.98 | 0.97 | 1.00 | 1.00 |
Recall | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.95 | 1.00 | 1.00 |
Total frames | 812 | 447 | 1941 | 1947 | 827 | 841 | 866 | 1462 | 3145 | 1499 |
Bow detected | 812 (100%) | 447 (100%) | 1941 (100%) | 1935 (99%) | 803 (97%) | 791 (94%) | 866 (100%) | 1440 (98%) | 3131 (99%) | 1499 (100%)
RH detected | 807 (99%) | 447 (100%) | 1931 (99%) | 1931 (99%) | 788 (95%) | 841 (100%) | 866 (100%) | 1439 (98%) | 3140 (99%) | 1456 (97%)
Both detected | 626 (99%) | 447 (100%) | 1931 (99%) | 1919 (98%) | 764 (92%) | 791 (94%) | 866 (100%) | 1429 (97%) | 3126 (99%) | 1456 (97%)
Group | Learner | Assist. | BLS | BSB Pull | BSB Push | BSB Inter | BSB Union | IOU | BSS
---|---|---|---|---|---|---|---|---|---
G1 | A | Yes | 90.7% | 0.32 | 0.27 | 0.27 | 0.32 | 84% | 2.64
G1 | A | No | 72.5% | 0.21 | 0.19 | 0.18 | 0.22 | 82% | 3.72
G1 | B | Yes | 98.1% | 0.20 | 0.17 | 0.17 | 0.20 | 85% | 4.25
G1 | B | No | 88.9% | 0.16 | 0.18 | 0.16 | 0.18 | 89% | 4.94
G1 | C | Yes | 92.3% | 0.37 | 0.32 | 0.32 | 0.37 | 86% | 2.34
G1 | C | No | 72.7% | 0.50 | 0.47 | 0.47 | 0.50 | 94% | 1.88
G2 | D | No | 34.7% | 0.51 | 0.52 | 0.50 | 0.53 | 94% | 1.78
G2 | D | Yes | 96.4% | 0.28 | 0.26 | 0.26 | 0.28 | 93% | 3.32
G2 | E | No | 35.1% | 0.28 | 0.24 | 0.24 | 0.28 | 86% | 3.06
G2 | E | Yes | 86.9% | 0.27 | 0.27 | 0.27 | 0.27 | 100% | 3.70
G2 | F | No | 77.8% | 0.36 | 0.31 | 0.30 | 0.37 | 81% | 2.19
G2 | F | Yes | 89.9% | 0.26 | 0.23 | 0.22 | 0.27 | 81% | 3.02