Objective Classes for Micro-Facial Expression Recognition
Abstract
1. Introduction
2. Background
2.1. CASME II
2.2. SAMM
2.3. Related Work
3. Methodology
4. Results
5. Limitations
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Ekman, P. Emotions Revealed: Understanding Faces and Feelings; Phoenix: Nairobi, Kenya, 2004.
2. Ekman, P. Lie Catching and Microexpressions. In The Philosophy of Deception; Martin, C.W., Ed.; Oxford University Press: New York, NY, USA, 2009; pp. 118–133.
3. Matsumoto, D.; Yoo, S.H.; Nakagawa, S. Culture, emotion regulation, and adjustment. J. Pers. Soc. Psychol. 2008, 94, 925.
4. Shen, X.B.; Wu, Q.; Fu, X.L. Effects of the duration of expressions on the recognition of microexpressions. J. Zhejiang Univ. Sci. B 2012, 13, 221–230.
5. Yan, W.J.; Wu, Q.; Liang, J.; Chen, Y.H.; Fu, X. How Fast are the Leaked Facial Expressions: The Duration of Micro-Expressions. J. Nonverbal Behav. 2013, 37, 217–230.
6. Ekman, P.; Friesen, W.V. Nonverbal leakage and clues to deception. Psychiatry 1969, 32, 88–106.
7. Ekman, P. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage; Norton: New York, NY, USA, 2001.
8. Ekman, P.; Rosenberg, E.L. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS); Series in Affective Science; Oxford University Press: New York, NY, USA, 2005.
9. Frank, M.G.; Maccario, C.J.; Govindaraju, V. Behavior and Security. In Protecting Airline Passengers in the Age of Terrorism; Greenwood Pub. Group: Santa Barbara, CA, USA, 2009.
10. Ojala, T.; Pietikainen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
11. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
12. Zhao, G.; Pietikainen, M. Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 915–928.
13. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893.
14. Chaudhry, R.; Ravichandran, A.; Hager, G.; Vidal, R. Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, 20–25 June 2009; pp. 1932–1939.
15. O’Sullivan, M.; Frank, M.G.; Hurley, C.M.; Tiwana, J. Police lie detection accuracy: The effect of lie scenario. Law Hum. Behav. 2009, 33, 530.
16. Frank, M.; Herbasz, M.; Sinuk, K.; Keller, A.M.; Kurylo, A.; Nolan, C. I See How You Feel: Training Laypeople and Professionals to Recognize Fleeting Emotions; International Communication Association: New York, NY, USA, 2009.
17. Yap, M.H.; Ugail, H.; Zwiggelaar, R. A database for facial behavioural analysis. In Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China, 22–26 April 2013; pp. 1–6.
18. Hopf, H.C.; Muller-Forell, W.; Hopf, N.J. Localization of emotional and volitional facial paresis. Neurology 1992, 42, 1918.
19. Cohn, J.F.; Kruez, T.S.; Matthews, I.; Yang, Y.; Nguyen, M.H.; Padilla, M.T.; Zhou, F.; De La Torre, F. Detecting depression from facial actions and vocal prosody. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), Amsterdam, The Netherlands, 10–12 September 2009; pp. 1–7.
20. Yap, M.H.; Ugail, H.; Zwiggelaar, R. Facial Behavioral Analysis: A Case Study in Deception Detection. Br. J. Appl. Sci. Technol. 2014, 4, 1485.
21. Li, X.; Pfister, T.; Huang, X.; Zhao, G.; Pietikäinen, M. A Spontaneous Micro-expression Database: Inducement, Collection and Baseline. In Proceedings of the 10th IEEE International Conference on Automatic Face and Gesture Recognition, Shanghai, China, 22–26 April 2013.
22. Yan, W.J.; Wu, Q.; Liu, Y.J.; Wang, S.J.; Fu, X. CASME Database: A dataset of spontaneous micro-expressions collected from neutralized faces. In Proceedings of the IEEE Conference on Automatic Face and Gesture Recognition, Shanghai, China, 22–26 April 2013.
23. Yan, W.J.; Li, X.; Wang, S.J.; Zhao, G.; Liu, Y.J.; Chen, Y.H.; Fu, X. CASME II: An Improved Spontaneous Micro-Expression Database and the Baseline Evaluation. PLoS ONE 2014, 9, e86041.
24. Davison, A.K.; Lansley, C.; Costen, N.; Tan, K.; Yap, M.H. SAMM: A Spontaneous Micro-Facial Movement Dataset. IEEE Trans. Affect. Comput. 2018, 9, 116–129.
25. Polikovsky, S.; Kameda, Y.; Ohta, Y. Facial micro-expressions recognition using high speed camera and 3D-gradient descriptor. In Proceedings of the 3rd International Conference on Imaging for Crime Detection and Prevention (ICDP 2009), London, UK, 3 December 2009; pp. 16–21.
26. Shreve, M.; Godavarthy, S.; Goldgof, D.; Sarkar, S. Macro- and micro-expression spotting in long videos using spatio-temporal strain. In Proceedings of the 2011 IEEE International Conference on Automatic Face Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA, 21–25 March 2011; pp. 51–56.
27. Ekman, P.; Friesen, W.V. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Palo Alto, CA, USA, 1978.
28. Yan, W.J.; Wang, S.J.; Liu, Y.J.; Wu, Q.; Fu, X. For micro-expression recognition: Database and suggestions. Neurocomputing 2014, 136, 82–87.
29. Ekman, P.; Friesen, W.V. Facial Action Coding System: Investigator’s Guide; Consulting Psychologists Press: Palo Alto, CA, USA, 1978.
30. Davison, A.K.; Yap, M.H.; Lansley, C. Micro-Facial Movement Detection Using Individualised Baselines and Histogram-Based Descriptors. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Kowloon, China, 9–12 October 2015; pp. 1864–1869.
31. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297.
32. Davison, A.K.; Yap, M.H.; Costen, N.; Tan, K.; Lansley, C.; Leightley, D. Micro-facial Movements: An Investigation on Spatio-Temporal Descriptors. In 13th European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2014.
33. Huang, X.; Wang, S.J.; Zhao, G.; Pietikainen, M. Facial Micro-Expression Recognition Using Spatiotemporal Local Binary Pattern With Integral Projection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Santiago, Chile, 7–13 December 2015.
34. Huang, X.; Wang, S.; Liu, X.; Zhao, G.; Feng, X.; Pietikainen, M. Spontaneous Facial Micro-Expression Recognition using Discriminative Spatiotemporal Local Binary Pattern with an Improved Integral Projection. arXiv 2016, arXiv:1608.02255.
35. Liu, Y.J.; Zhang, J.K.; Yan, W.J.; Wang, S.J.; Zhao, G.; Fu, X. A Main Directional Mean Optical Flow Feature for Spontaneous Micro-Expression Recognition. IEEE Trans. Affect. Comput. 2016, 7, 299–310.
36. Li, X.; Hong, X.; Moilanen, A.; Huang, X.; Pfister, T.; Zhao, G.; Pietikäinen, M. Reading Hidden Emotions: Spontaneous Micro-expression Spotting and Recognition. arXiv 2015, arXiv:1511.00423.
37. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization. In Advances in Neural Information Processing Systems 22; Bengio, Y., Schuurmans, D., Lafferty, J.D., Williams, C.K.I., Culotta, A., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2009; pp. 2080–2088.
38. Wang, S.J.; Yan, W.J.; Zhao, G.; Fu, X.; Zhou, C.G. Micro-Expression Recognition Using Robust Principal Component Analysis and Local Spatiotemporal Directional Features. In Workshop at the European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 325–338.
39. Wang, S.J.; Yan, W.J.; Li, X.; Zhao, G.; Zhou, C.G.; Fu, X.; Yang, M.; Tao, J. Micro-Expression Recognition Using Color Spaces. IEEE Trans. Image Process. 2015, 24, 6034–6047.
40. Huang, X.; Zhao, G.; Hong, X.; Zheng, W.; Pietikainen, M. Spontaneous facial micro-expression analysis using Spatiotemporal Completed Local Quantized Patterns. Neurocomputing 2016, 175, 564–578.
41. Wang, S.J.; Yan, W.J.; Sun, T.; Zhao, G.; Fu, X. Sparse tensor canonical correlation analysis for micro-expression recognition. Neurocomputing 2016, 214, 218–232.
42. Liong, S.T.; See, J.; Phan, R.C.W.; Wong, K. Less is More: Micro-expression Recognition from Video using Apex Frame. arXiv 2016, arXiv:1606.01721.
43. Xu, F.; Zhang, J.; Wang, J.Z. Microexpression Identification and Categorization Using a Facial Dynamics Map. IEEE Trans. Affect. Comput. 2017, 8, 254–267.
44. Wang, S.J.; Wu, S.; Qian, X.; Li, J.; Fu, X. A main directional maximal difference analysis for spotting facial movements from long-term videos. Neurocomputing 2017, 230, 382–389.
45. Ekman, P.; Friesen, W.V. Measuring facial movement. Environ. Psychol. Nonverbal Behav. 1976, 1, 56–75.
46. Platt, J.C. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods; The MIT Press: Cambridge, MA, USA, 1999; pp. 185–208.
47. Davison, A.K.; Lansley, C.; Ng, C.C.; Tan, K.; Yap, M.H. Objective Micro-Facial Movement Detection Using FACS-Based Regions and Baseline Evaluation. arXiv 2016, arXiv:1612.05038.
48. Ng, C.C.; Yap, M.H.; Costen, N.; Li, B. Wrinkle detection using hessian line tracking. IEEE Access 2015, 3, 1079–1088.
49. Wang, Y.; See, J.; Phan, R.C.W.; Oh, Y.H. Efficient Spatio-Temporal Local Binary Patterns for Spontaneous Facial Micro-Expression Recognition. PLoS ONE 2015, 10, e0124674.
50. Bengio, Y. Learning Deep Architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127.
51. Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends Signal Process. 2014, 7, 197–387.
52. Alarifi, J.; Goyal, M.; Davison, A.; Dancey, D.; Khan, R.; Yap, M.H. Facial Skin Classification Using Convolutional Neural Networks. In Proceedings of the 14th International Conference on Image Analysis and Recognition, ICIAR 2017, Montreal, QC, Canada, 5–7 July 2017; Springer: Cham, Switzerland, 2017; Volume 10317, p. 479.
53. Soomro, K.; Zamir, A.R.; Shah, M. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. arXiv 2012, arXiv:1212.0402.
54. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale Video Classification with Convolutional Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014.
| Feature | CASME II [23] | SAMM [24] |
|---|---|---|
| Micro-Movements | 247 * | 159 |
| Participants | 35 | 32 |
| Resolution | 640 × 480 | 2040 × 1088 |
| Facial Resolution | 280 × 340 | 400 × 400 |
| FPS | 200 | 200 |
| Spontaneous/Posed | Spontaneous | Spontaneous |
| FACS Coded | Yes | Yes |
| No. Coders | 2 | 3 |
| Emotion Classes | 5 | 7 |
| Mean Age (SD) | 22.03 (SD = 1.60) | 33.24 (SD = 11.32) |
| Ethnicities | 1 | 13 |
| Class | Action Units |
|---|---|
| I | AU6, AU12, AU6+AU12, AU6+AU7+AU12, AU7+AU12 |
| II | AU1+AU2, AU5, AU25, AU1+AU2+AU25, AU25+AU26, AU5+AU24 |
| III | AU23, AU4, AU4+AU7, AU4+AU5, AU4+AU5+AU7, AU17+AU24, AU4+AU6+AU7, AU4+AU38 |
| IV | AU10, AU9, AU4+AU9, AU4+AU40, AU4+AU5+AU40, AU4+AU7+AU9, AU4+AU9+AU17, AU4+AU7+AU10, AU4+AU5+AU7+AU9, AU7+AU10 |
| V | AU1, AU15, AU1+AU4, AU6+AU15, AU15+AU17 |
| VI | AU1+AU2+AU4, AU20 |
| VII | Others |
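Read as a lookup, the grouping above maps each FACS Action Unit combination to one of seven objective classes, with Class VII collecting everything else. The Python sketch below illustrates that mapping; the names (`OBJECTIVE_CLASSES`, `objective_class`) and the rule that any unlisted combination falls back to Class VII are assumptions made for illustration, not the authors' released code.

```python
# A minimal sketch of the objective-class lookup implied by the table above.
# The fallback to Class VII ("Others") for unlisted combinations is an
# illustrative assumption, not a rule stated by the authors.
OBJECTIVE_CLASSES = {
    "I":   ["AU6", "AU12", "AU6+AU12", "AU6+AU7+AU12", "AU7+AU12"],
    "II":  ["AU1+AU2", "AU5", "AU25", "AU1+AU2+AU25", "AU25+AU26", "AU5+AU24"],
    "III": ["AU23", "AU4", "AU4+AU7", "AU4+AU5", "AU4+AU5+AU7", "AU17+AU24",
            "AU4+AU6+AU7", "AU4+AU38"],
    "IV":  ["AU10", "AU9", "AU4+AU9", "AU4+AU40", "AU4+AU5+AU40", "AU4+AU7+AU9",
            "AU4+AU9+AU17", "AU4+AU7+AU10", "AU4+AU5+AU7+AU9", "AU7+AU10"],
    "V":   ["AU1", "AU15", "AU1+AU4", "AU6+AU15", "AU15+AU17"],
    "VI":  ["AU1+AU2+AU4", "AU20"],
}

# Invert to a lookup keyed by the *set* of AUs so the order in which
# the coder wrote the combination (e.g., "AU12+AU6") does not matter.
_LOOKUP = {frozenset(combo.split("+")): cls
           for cls, combos in OBJECTIVE_CLASSES.items()
           for combo in combos}

def objective_class(au_codes):
    """Map an iterable of AU codes, e.g. ["AU6", "AU12"], to a class label."""
    return _LOOKUP.get(frozenset(au_codes), "VII")
```

For example, `objective_class(["AU12", "AU6"])` returns `"I"` regardless of the order in which the AUs were coded.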
| Class | CASME II | SAMM | Total |
|---|---|---|---|
| I | 25 | 24 | 49 |
| II | 15 | 13 | 28 |
| III | 99 | 20 | 119 |
| IV | 26 | 8 | 34 |
| V | 20 | 3 | 23 |
| VI | 1 | 7 | 8 |
| VII | 69 | 84 | 153 |
| Total | 255 | 159 | 415 |
Ten-fold cross-validation and leave-one-subject-out (LOSO) results:

| Feature | Class | Ten-fold Accuracy (%) | Ten-fold TPR | Ten-fold FPR | Ten-fold F-Measure | Ten-fold AUC | LOSO Accuracy (%) | LOSO TPR | LOSO FPR | LOSO F-Measure | LOSO AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LBP-TOP | Original | 77.17 | 0.56 | 0.22 | 0.53 | 0.74 | 68.24 | 0.49 | 0.17 | 0.48 | 0.63 |
| LBP-TOP | I–V | 77.94 | 0.63 | 0.33 | 0.58 | 0.70 | 67.80 | 0.54 | 0.14 | 0.51 | 0.44 |
| LBP-TOP | I–VI | 76.84 | 0.59 | 0.32 | 0.55 | 0.69 | 67.94 | 0.53 | 0.14 | 0.51 | 0.44 |
| LBP-TOP | I–VII | 76.13 | 0.50 | 0.23 | 0.45 | 0.70 | 61.92 | 0.39 | 0.17 | 0.35 | 0.63 |
| HOOF | Original | 78.83 | 0.61 | 0.19 | 0.60 | 0.78 | 68.36 | 0.51 | 0.24 | 0.49 | 0.61 |
| HOOF | I–V | 82.70 | 0.69 | 0.22 | 0.67 | 0.80 | 69.64 | 0.59 | 0.18 | 0.56 | 0.47 |
| HOOF | I–VI | 82.41 | 0.68 | 0.23 | 0.66 | 0.79 | 73.52 | 0.62 | 0.18 | 0.60 | 0.47 |
| HOOF | I–VII | 83.94 | 0.64 | 0.14 | 0.63 | 0.79 | 76.60 | 0.57 | 0.14 | 0.55 | 0.72 |
| HOG3D | Original | 80.93 | 0.62 | 0.14 | 0.62 | 0.79 | 59.59 | 0.38 | 0.24 | 0.35 | 0.50 |
| HOG3D | I–V | 86.35 | 0.72 | 0.13 | 0.72 | 0.84 | 69.53 | 0.56 | 0.18 | 0.51 | 0.40 |
| HOG3D | I–VI | 83.49 | 0.68 | 0.16 | 0.67 | 0.80 | 69.87 | 0.56 | 0.18 | 0.51 | 0.40 |
| HOG3D | I–VII | 82.59 | 0.58 | 0.12 | 0.58 | 0.79 | 61.33 | 0.39 | 0.30 | 0.31 | 0.51 |
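The LOSO columns are obtained by holding out every sample from one participant per fold, so the classifier never sees the test subject during training. The sketch below outlines that protocol for pre-computed descriptor vectors (LBP-TOP, HOOF or HOG3D histograms) with a linear SVM in scikit-learn; the weighted-average treatment of TPR, FPR and F-measure is an assumption chosen to mirror the style of reporting above, not the authors' exact implementation.

```python
# A minimal sketch of leave-one-subject-out (LOSO) evaluation for pre-computed
# micro-movement descriptors. X, y and subjects are placeholders: X is an
# (n_samples, n_features) array of descriptor vectors, y the objective class
# labels (I-VII), and subjects the participant ID of each sample.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, recall_score, f1_score

def loso_evaluate(X, y, subjects):
    X, y = np.asarray(X), np.asarray(y)
    y_true, y_pred = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC(kernel="linear", C=1.0)   # libsvm's SMO-style linear SVM
        clf.fit(X[train_idx], y[train_idx])
        y_true.extend(y[test_idx])
        y_pred.extend(clf.predict(X[test_idx]))
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)

    cm = confusion_matrix(y_true, y_pred)
    accuracy = np.trace(cm) / cm.sum()
    # Per-class false positive rate, averaged with class support as weights
    # (comparable to the weighted-average style of the TPR/FPR columns above).
    fp = cm.sum(axis=0) - np.diag(cm)
    tn = cm.sum() - (cm.sum(axis=1) + fp)
    fpr = np.average(fp / (fp + tn), weights=cm.sum(axis=1))
    return {
        "accuracy": 100.0 * accuracy,
        "tpr": recall_score(y_true, y_pred, average="weighted"),
        "fpr": fpr,
        "f_measure": f1_score(y_true, y_pred, average="weighted"),
    }
```

The AUC column would additionally need per-class scores, e.g. `SVC(probability=True)` together with `roc_auc_score(..., multi_class="ovr")`.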
Ten-fold cross-validation and leave-one-subject-out (LOSO) results:

| Feature | Class | Ten-fold Accuracy (%) | Ten-fold TPR | Ten-fold FPR | Ten-fold F-Measure | Ten-fold AUC | LOSO Accuracy (%) | LOSO TPR | LOSO FPR | LOSO F-Measure | LOSO AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LBP-TOP | I–V | 79.21 | 0.54 | 0.16 | 0.51 | 0.74 | 44.70 | 0.38 | 0.19 | 0.35 | 0.31 |
| LBP-TOP | I–VI | 81.93 | 0.55 | 0.13 | 0.52 | 0.74 | 45.89 | 0.34 | 0.17 | 0.31 | 0.36 |
| LBP-TOP | I–VII | 79.52 | 0.57 | 0.18 | 0.56 | 0.74 | 54.93 | 0.42 | 0.22 | 0.39 | 0.40 |
| HOOF | I–V | 78.95 | 0.56 | 0.16 | 0.55 | 0.74 | 42.17 | 0.32 | 0.06 | 0.33 | 0.32 |
| HOOF | I–VI | 79.53 | 0.52 | 0.15 | 0.51 | 0.73 | 40.89 | 0.28 | 0.07 | 0.27 | 0.35 |
| HOOF | I–VII | 72.80 | 0.52 | 0.32 | 0.50 | 0.65 | 60.06 | 0.49 | 0.25 | 0.48 | 0.30 |
| HOG3D | I–V | 77.18 | 0.51 | 0.17 | 0.49 | 0.74 | 34.16 | 0.22 | 0.15 | 0.22 | 0.24 |
| HOG3D | I–VI | 79.41 | 0.48 | 0.15 | 0.45 | 0.71 | 36.39 | 0.19 | 0.14 | 0.19 | 0.26 |
| HOG3D | I–VII | 79.09 | 0.59 | 0.25 | 0.55 | 0.71 | 63.93 | 0.50 | 0.22 | 0.44 | 0.30 |
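For contrast, the ten-fold columns shuffle all samples together, so movements from the same participant can appear in both the training and test folds, which helps explain why the ten-fold figures sit consistently above their LOSO counterparts. A stratified ten-fold sketch under the same assumptions as the LOSO example:

```python
# A minimal sketch of the ten-fold protocol, for contrast with the LOSO loop
# above. StratifiedKFold keeps class proportions roughly balanced per fold,
# but does not separate participants, so subject identity can leak between
# training and test folds. X and y are the same placeholder descriptor matrix
# and objective-class labels as before.
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

def tenfold_evaluate(X, y, seed=0):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    # Classes with very few samples (e.g., Class VI) trigger a stratification
    # warning but the evaluation still runs.
    y_pred = cross_val_predict(SVC(kernel="linear", C=1.0), X, y, cv=cv)
    return {
        "accuracy": 100.0 * accuracy_score(y, y_pred),
        "f_measure": f1_score(y, y_pred, average="weighted"),
    }
```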
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).