Eye Aspect Ratio for Real-Time Drowsiness Detection to Improve Driver Safety
Abstract
1. Introduction
2. Materials and Methods
2.1. Facial Landmarks for Eye Blink Detection
2.2. Eye Aspect Ratio (EAR)
2.3. Research Workflow
2.4. Eye Blink Detection Flowchart
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Video Info | Video 1 | Video 2 | Video 3 | Talking Face | Eyeblink8 Video 8 |
---|---|---|---|---|---|
FPS | 29.97 | 24 | 29.97 | 30 | 30 |
Frame Count | 1829 | 1302 | 2195 | 5000 | 10,712 |
Durations (s) | 61.03 | 54.25 | 73.24 | 166.67 | 357.07 |
Size (MB) | 29.6 | 12.4 | 38.6 | 22 | 18.6 |
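The durations in the table above follow directly from the other two rows: duration (s) = frame count / FPS. A quick sketch verifying the reported values (numbers copied from the table):

```python
# Sanity-check the reported video durations: duration (s) = frame count / FPS.
videos = {
    "Video 1": (29.97, 1829),
    "Video 2": (24.0, 1302),
    "Video 3": (29.97, 2195),
    "Talking Face": (30.0, 5000),
    "Eyeblink8 Video 8": (30.0, 10712),
}

for name, (fps, frames) in videos.items():
    print(f"{name}: {frames / fps:.2f} s")  # matches the Durations row
```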
No | Description | Features |
---|---|---|
1 | Frame counter; alternatively, a timestamp stored in a separate file may be used. | frame ID
2 | A single blink is annotated as a sequence of consecutive frames that all share the same blink ID; the interval between two blinks is measured between such sequences. | blink ID
3 | The flag changes from X to N when the subject blinks while looking sideways (non-frontal pose). | non frontal face (NF)
4 | Left eye. | left eye (LE)
5 | Right eye. | right eye (RE)
6 | Face. | face (F)
7 | The given flag will transition from X to C if the subject’s eye closure percentage is between 90% and 100%. | eye fully closed (FC) |
8 | This variable changes from X to N when the subject’s eye is covered (by the subject’s hand, by low lighting, or by the subject’s excessive head movement). | eye not visible (NV) |
9 | x and y coordinates, width, height. | face bounding box (F_X, F_Y, F_W, F_H) |
10 | x and y coordinates of the right (RX, RY) and left (LX, LY) eye corners. | left and right eye corners positions
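To make the flag semantics in the table concrete, here is a hypothetical parser for one annotation line. The colon-separated layout and the field order are illustrative assumptions based on the table, not the datasets' documented format:

```python
from dataclasses import dataclass

@dataclass
class BlinkAnnotation:
    frame_id: int       # frame counter
    blink_id: int       # identical across all frames of one blink
    non_frontal: bool   # NF flag: X -> N while blinking sideways
    fully_closed: bool  # FC flag: X -> C at 90-100% eye closure
    not_visible: bool   # NV flag: X -> N when the eye is occluded

def parse_line(line):
    # Assumed field order: frameID:blinkID:NF:FC:NV (illustrative only).
    f = line.strip().split(":")
    return BlinkAnnotation(int(f[0]), int(f[1]),
                           f[2] == "N", f[3] == "C", f[4] == "N")

print(parse_line("120:7:X:C:X"))
```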
Dataset | Video 1 | Video 2 | Video 3 | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
EAR Threshold (t) | 0.18 | 0.2 | 0.225 | 0.25 | 0.18 | 0.2 | 0.225 | 0.25 | 0.18 | 0.2 | 0.225 | 0.25 |
Statistics on the prediction set ||||||||||||
Total Number of Frames Processed | 1829 | 1829 | 1829 | 1829 | 1302 | 1302 | 1302 | 1302 | 2192 | 2192 | 2192 | 2192 |
Number of Closed Frames | 23 | 56 | 131 | 281 | 182 | 342 | 614 | 884 | 232 | 440 | 791 | 1177 |
Number of Blinks | 2 | 6 | 9 | 16 | 18 | 39 | 73 | 65 | 25 | 49 | 79 | 89 |
Statistics on the test set ||||||||||||
Total Number of Frames Processed | 1829 | 1829 | 1829 | 1829 | 1302 | 1302 | 1302 | 1302 | 2192 | 2192 | 2192 | 2192 |
Number of Closed Frames | 58 | 58 | 58 | 58 | 35 | 35 | 35 | 35 | 61 | 61 | 61 | 61 |
Number of Blinks | 14 | 14 | 14 | 14 | 9 | 9 | 9 | 9 | 10 | 10 | 10 | 10 |
Frame-by-frame eye closeness test scores ||||||||||||
Accuracy | 0.955 | 0.938 | 0.897 | 0.820 | 0.861 | 0.74 | 0.543 | 0.340 | 0.890 | 0.797 | 0.645 | 0.475 |
AUC | 0.613 | 0.581 | 0.528 | 0.501 | 0.732 | 0.692 | 0.654 | 0.591 | 0.664 | 0.641 | 0.626 | 0.594 |
Dataset | Talking Face | Eyeblink8 Video 8 | ||||||
---|---|---|---|---|---|---|---|---|
EAR Threshold (t) | 0.18 | 0.2 | 0.225 | 0.25 | 0.18 | 0.2 | 0.225 | 0.25 |
Statistics on the prediction set ||||||||
Total Number of Frames Processed | 5000 | 5000 | 5000 | 5000 | 10,663 | 10,663 | 10,663 | 10,663 |
Number of Closed Frames | 227 | 292 | 352 | 484 | 404 | 529 | 1055 | 2002 |
Number of Blinks | 31 | 42 | 49 | 59 | 37 | 43 | 85 | 126 |
Statistics on the test set ||||||||
Total Number of Frames Processed | 5000 | 5000 | 5000 | 5000 | 10,663 | 10,663 | 10,663 | 10,663 |
Number of Closed Frames | 153 | 153 | 153 | 153 | 107 | 107 | 107 | 107 |
Number of Blinks | 61 | 61 | 61 | 61 | 30 | 30 | 30 | 30 |
Frame-by-frame eye closeness test scores ||||||||
Accuracy | 0.971 | 0.968 | 0.959 | 0.933 | 0.970 | 0.959 | 0.911 | 0.911 |
AUC | 0.974 | 0.968 | 0.953 | 0.946 | 0.963 | 0.961 | 0.955 | 0.955 |
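The frame-by-frame results in the tables above rest on a single rule: a frame is labeled "closed" when its eye aspect ratio falls below the threshold t. A minimal sketch using the standard six-landmark EAR formula; the consecutive-frame rule for grouping closed frames into blinks is an illustrative assumption, and the paper's exact blink-grouping parameters may differ:

```python
import math

def ear(eye):
    """eye: six (x, y) landmarks p1..p6 in the standard EAR ordering
    (p1/p4 are the horizontal corners; p2-p6 and p3-p5 are vertical pairs)."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, t=0.18, min_consecutive=2):
    """Count one blink per run of >= min_consecutive frames with EAR < t.
    min_consecutive is an illustrative choice, not a value from the paper."""
    blinks, run = 0, 0
    for e in ear_series:
        if e < t:
            run += 1
        else:
            if run >= min_consecutive:
                blinks += 1
            run = 0
    if run >= min_consecutive:
        blinks += 1
    return blinks

# A wide-open eye yields a high EAR; frames below t count as closed.
print(ear([(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]))  # ~0.667
```

As the tables show, raising t labels more frames as closed, which inflates the blink count on the prediction set and trades precision for recall on the closed class.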
Evaluation | Video 1 | Video 2 | Video 3 | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Precision | Recall | F1-Score | Support | Precision | Recall | F1-Score | Support | Precision | Recall | F1-Score | Support | |
EAR Threshold (t) = 0.18 | EAR Threshold (t) = 0.18 | EAR Threshold (t) = 0.18 | ||||||||||
0 | 0.97 | 0.99 | 0.98 | 1771 | 0.98 | 0.87 | 0.92 | 1267 | 0.98 | 0.90 | 0.94 | 2131 |
1 | 0.00 | 0.00 | 0.00 | 58 | 0.10 | 0.51 | 0.17 | 35 | 0.11 | 0.43 | 0.18 | 61 |
Macro avg | 0.48 | 0.49 | 0.49 | 1829 | 0.54 | 0.69 | 0.55 | 1302 | 0.55 | 0.66 | 0.56 | 2192 |
Weight avg | 0.94 | 0.96 | 0.95 | 1829 | 0.96 | 0.86 | 0.90 | 1302 | 0.96 | 0.89 | 0.92 | 2192 |
Accuracy | 0.96 | 1829 | 0.86 | 1302 | 0.89 | 2192 | ||||||
EAR Threshold (t) = 0.2 | EAR Threshold (t) = 0.2 | EAR Threshold (t) = 0.2 | ||||||||||
0 | 0.97 | 0.97 | 0.97 | 1771 | 0.99 | 0.75 | 0.85 | 1267 | 0.98 | 0.81 | 0.89 | 2131 |
1 | 0.02 | 0.02 | 0.02 | 58 | 0.07 | 0.71 | 0.13 | 35 | 0.07 | 0.48 | 0.12 | 61 |
Macro avg | 0.49 | 0.49 | 0.49 | 1829 | 0.53 | 0.73 | 0.49 | 1302 | 0.52 | 0.64 | 0.50 | 2192 |
Weight avg | 0.94 | 0.94 | 0.94 | 1829 | 0.96 | 0.75 | 0.83 | 1302 | 0.96 | 0.80 | 0.86 | 2192 |
Accuracy | 0.94 | 1829 | 0.75 | 1302 | 0.80 | 2192 | ||||||
EAR Threshold (t) = 0.225 | EAR Threshold (t) = 0.225 | EAR Threshold (t) = 0.225 | ||||||||||
0 | 0.97 | 0.93 | 0.95 | 1771 | 0.99 | 0.54 | 0.70 | 1267 | 0.98 | 0.65 | 0.78 | 2131 |
1 | 0.01 | 0.02 | 0.01 | 58 | 0.04 | 0.77 | 0.08 | 35 | 0.05 | 0.61 | 0.09 | 61 |
Macro avg | 0.49 | 0.47 | 0.48 | 1829 | 0.52 | 0.65 | 0.39 | 1302 | 0.51 | 0.63 | 0.43 | 2192 |
Weight avg | 0.94 | 0.90 | 0.92 | 1829 | 0.96 | 0.54 | 0.68 | 1302 | 0.96 | 0.65 | 0.76 | 2192 |
Accuracy | 0.90 | 1829 | 0.54 | 1302 | 0.65 | 2192 | ||||||
EAR Threshold (t) = 0.25 | EAR Threshold (t) = 0.25 | EAR Threshold (t) = 0.25 | ||||||||||
0 | 0.97 | 0.84 | 0.90 | 1771 | 0.99 | 0.33 | 0.49 | 1267 | 0.98 | 0.47 | 0.63 | 2131 |
1 | 0.02 | 0.09 | 0.03 | 58 | 0.03 | 0.85 | 0.07 | 35 | 0.04 | 0.72 | 0.07 | 61 |
Macro avg | 0.49 | 0.47 | 0.47 | 1829 | 0.51 | 0.59 | 0.28 | 1302 | 0.51 | 0.59 | 0.35 | 2192 |
Weight avg | 0.94 | 0.82 | 0.87 | 1829 | 0.96 | 0.34 | 0.48 | 1302 | 0.96 | 0.48 | 0.62 | 2192 |
Accuracy | 0.82 | 1829 | 0.34 | 1302 | 0.48 | 2192 |
Evaluation | Talking Face | Eyeblink8 Video 8 | ||||||
---|---|---|---|---|---|---|---|---|
Precision | Recall | F1-Score | Support | Precision | Recall | F1-Score | Support | |
EAR Threshold (t) = 0.18 | EAR Threshold (t) = 0.18 | |||||||
0 | 0.99 | 0.98 | 0.99 | 4847 | 1.00 | 0.97 | 0.98 | 10,556 |
1 | 0.52 | 0.77 | 0.62 | 153 | 0.24 | 0.92 | 0.38 | 107 |
Macro avg | 0.76 | 0.87 | 0.80 | 5000 | 0.62 | 0.94 | 0.68 | 10,663 |
Weight avg | 0.98 | 0.97 | 0.97 | 5000 | 0.99 | 0.97 | 0.98 | 10,663 |
Accuracy | 0.97 | 5000 | 0.97 | 10,663 | ||||
EAR Threshold (t) = 0.2 | EAR Threshold (t) = 0.2 | |||||||
0 | 1.00 | 0.97 | 0.98 | 4847 | 1.00 | 0.96 | 0.98 | 10,556 |
1 | 0.49 | 0.93 | 0.64 | 153 | 0.19 | 0.96 | 0.32 | 107 |
Macro avg | 0.74 | 0.95 | 0.81 | 5000 | 0.60 | 0.96 | 0.65 | 10,663 |
Weight avg | 0.98 | 0.97 | 0.97 | 5000 | 0.99 | 0.96 | 0.97 | 10,663 |
Accuracy | 0.97 | 5000 | 0.96 | 10,663 | ||||
EAR Threshold (t) = 0.225 | EAR Threshold (t) = 0.225 | |||||||
0 | 1.00 | 0.96 | 0.98 | 4847 | 1.00 | 0.91 | 0.95 | 10,556 |
1 | 0.43 | 0.99 | 0.60 | 153 | 0.10 | 1.00 | 0.18 | 107 |
Macro avg | 0.71 | 0.97 | 0.79 | 5000 | 0.55 | 0.96 | 0.57 | 10,663 |
Weight avg | 0.98 | 0.96 | 0.97 | 5000 | 0.99 | 0.91 | 0.95 | 10,663 |
Accuracy | 0.96 | 5000 | 0.91 | 10,663 | ||||
EAR Threshold (t) = 0.25 | EAR Threshold (t) = 0.25 | |||||||
0 | 1.00 | 0.93 | 0.96 | 4847 | 1.00 | 0.91 | 0.95 | 10,556 |
1 | 0.32 | 1.00 | 0.48 | 153 | 0.10 | 1.00 | 0.18 | 107 |
Macro avg | 0.66 | 0.97 | 0.72 | 5000 | 0.55 | 0.96 | 0.57 | 10,663 |
Weight avg | 0.98 | 0.93 | 0.95 | 5000 | 0.99 | 0.91 | 0.95 | 10,663 |
Accuracy | 0.93 | 5000 | 0.91 | 10,663 |
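The per-class rows and the macro/weighted averages in the tables above follow the usual classification-report definitions: macro averaging treats both classes equally, while weighted averaging scales each class by its support. A small sketch of the computation from frame-by-frame labels, mirroring scikit-learn's `classification_report`:

```python
def report(y_true, y_pred, classes=(0, 1)):
    """Per-class (precision, recall, f1, support) plus macro and weighted averages."""
    stats = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        stats[c] = (prec, rec, f1, sum(t == c for t in y_true))
    n = len(y_true)
    # Macro: unweighted mean over classes; weighted: mean scaled by class support.
    macro = tuple(sum(s[i] for s in stats.values()) / len(classes) for i in range(3))
    weighted = tuple(sum(s[i] * s[3] / n for s in stats.values()) for i in range(3))
    return stats, macro, weighted
```

With a heavily imbalanced closed class (e.g. 153 of 5000 frames), the weighted average is dominated by the open class, which is why the weighted rows stay high even when closed-class precision collapses at larger thresholds.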
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dewi, C.; Chen, R.-C.; Chang, C.-W.; Wu, S.-H.; Jiang, X.; Yu, H. Eye Aspect Ratio for Real-Time Drowsiness Detection to Improve Driver Safety. Electronics 2022, 11, 3183. https://doi.org/10.3390/electronics11193183