A Case Study of Facial Emotion Classification Using Affdex
Abstract
1. Introduction
2. Related Work
- face detection phase,
- feature extraction,
- classification of emotions according to the selected model.
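As a rough illustration, the three phases above can be wired into a minimal pipeline. The stage implementations below are placeholder stubs of our own invention (a real system would use, e.g., a Haar cascade or CNN detector and a trained classifier), so they show only the data flow, not any actual method:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Face:
    bbox: tuple  # (x, y, w, h) region returned by the detector

def detect_faces(frame) -> List[Face]:
    # Placeholder: a real detector (Haar cascade, CNN, ...) goes here.
    return [Face(bbox=(0, 0, len(frame), len(frame[0])))]

def extract_features(frame, face: Face) -> List[float]:
    # Placeholder: PCA projection, Gabor responses, etc. of the face crop.
    x, y, w, h = face.bbox
    crop = [row[y:y + h] for row in frame[x:x + w]]
    return [sum(row) / len(row) for row in crop]  # toy per-row means

def classify_emotion(features: List[float]) -> str:
    # Placeholder: an SVM / neural network trained on labelled expressions.
    return "neutral" if sum(features) == 0 else "happiness"

def recognize(frame) -> List[str]:
    """Run the three phases in sequence for every detected face."""
    return [classify_emotion(extract_features(frame, f))
            for f in detect_faces(frame)]

print(recognize([[0, 0], [0, 0]]))  # -> ['neutral']
```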
- principal component analysis,
- neural network-based methods,
- active shape modeling,
- active appearance model,
- Gabor wavelet.
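Of these methods, principal component analysis is the simplest to illustrate: face images are treated as vectors, and the dominant eigenvectors of their covariance matrix (the "eigenfaces") serve as features. A toy sketch on 2-D points (standing in for high-dimensional face vectors), using power iteration to find the leading component:

```python
import math

# Toy 2-D data standing in for flattened face images.
points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
          (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]

n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
dev = [(x - mx, y - my) for x, y in points]          # mean-centre

# Sample covariance matrix (denominator n - 1).
cxx = sum(dx * dx for dx, _ in dev) / (n - 1)
cyy = sum(dy * dy for _, dy in dev) / (n - 1)
cxy = sum(dx * dy for dx, dy in dev) / (n - 1)
cov = [[cxx, cxy], [cxy, cyy]]

# Power iteration: repeatedly applying cov converges to its
# dominant eigenvector -- the first principal component.
v = [1.0, 1.0]
for _ in range(100):
    w = [cov[0][0] * v[0] + cov[0][1] * v[1],
         cov[1][0] * v[0] + cov[1][1] * v[1]]
    norm = math.hypot(w[0], w[1])
    v = [w[0] / norm, w[1] / norm]

# Rayleigh quotient gives the eigenvalue (variance along the component).
eigval = (v[0] * (cov[0][0] * v[0] + cov[0][1] * v[1])
          + v[1] * (cov[1][0] * v[0] + cov[1][1] * v[1]))
print(round(eigval, 5), [round(c, 4) for c in v])
```

In the eigenfaces setting, each image is projected onto the top few such components, and the resulting low-dimensional coordinates are the extracted features.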
- the facial position and an incorrect angle of the scanning device, where certain parts of the face may be out of focus (for example, part of the mouth, the nose, or an eye),
- an incorrect distance between the face and the sensor, causing a loss of facial feature information,
- illumination of the scene,
- smaller obstacles that temporarily cover a certain part of the face (for example, long hair), which can cause a partial loss of information or act as a disturbing element,
- accessories and additional features such as glasses, a mustache, etc.; since these elements are often permanent features characterizing a face, it is necessary to account for them during detection,
- ethnicity, age, sex, appearance, and other attributes, by which the faces of various people differ.
3. Materials and Methods
- using a camera to capture all the students’ faces at the same time,
- using multiple cameras, each capturing only one student’s face at a time.
- It is assumed that emotional facial expressions (facial emotions) can be correctly identified and classified using Ekman’s classification and the selected face analysis system.
- It is assumed that the subconscious reaction to the positive or negative emotional states correlates with the facial expressions (facial emotions) that are recognizable by the face analysis system.
- face and facial landmark detection,
- face texture feature extraction,
- facial action classification,
- emotion expression modeling.
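The last two stages rest on the FACS/EMFACS idea that each basic emotion corresponds to a characteristic combination of facial action units (AUs). Affdex's exact rule set is proprietary; the AU combinations below are commonly cited EMFACS-style prototypes and serve only as an illustrative sketch of how classified AUs could be mapped to an emotion label:

```python
# Commonly cited EMFACS-style prototypes: basic emotion -> action units.
# Illustrative only; production systems use richer, weighted rules.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},         # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},      # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},   # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":   {9, 15},         # nose wrinkler + lip corner depressor
    "fear":      {1, 2, 4, 5, 20, 26},
}

def classify_expression(active_aus: set) -> str:
    """Return the emotion whose AU prototype is fully present, else 'neutral'.
    Ties are broken in favour of the larger (more specific) prototype."""
    matches = [(len(aus), emo) for emo, aus in EMOTION_PROTOTYPES.items()
               if aus <= active_aus]
    return max(matches)[1] if matches else "neutral"

print(classify_expression({6, 12}))        # happiness
print(classify_expression({1, 2, 5, 26}))  # surprise
print(classify_expression(set()))          # neutral
```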
4. Results
5. Discussion
6. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
References
Valence\Specified Valence | Negative | Neutral | Positive
---|---|---|---
Negative | 322 (23.23%) | 238 (25.21%) | 727 (22.60%)
Neutral | 975 (70.35%) | 636 (67.37%) | 2248 (69.88%)
Positive | 89 (6.42%) | 70 (7.42%) | 242 (7.52%)
∑ | 1386 (100%) | 944 (100%) | 3217 (100%)
Pearson | Chi-square = 4.647288; df = 4; p = 0.32544 | |
Con. Coef. C | 0.0289327 | |
Cramér’s V | 0.0204671 | |
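The statistics reported in the table follow directly from the observed counts: the Pearson chi-square sums (O−E)²/E over all cells, the contingency coefficient is C = √(χ²/(χ²+N)), and Cramér's V is √(χ²/(N·(min(r,c)−1))). A short script reproducing them:

```python
import math

# Observed counts (rows: measured valence, cols: specified valence).
observed = [
    [322, 238, 727],   # negative
    [975, 636, 2248],  # neutral
    [89, 70, 242],     # positive
]

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
n = sum(row_tot)

# Pearson chi-square: sum of (observed - expected)^2 / expected.
chi2 = sum((observed[i][j] - row_tot[i] * col_tot[j] / n) ** 2
           / (row_tot[i] * col_tot[j] / n)
           for i in range(len(observed)) for j in range(len(observed[0])))

df = (len(observed) - 1) * (len(observed[0]) - 1)
c_coef = math.sqrt(chi2 / (chi2 + n))                              # contingency coefficient C
cramers_v = math.sqrt(chi2 / (n * (min(len(observed), len(observed[0])) - 1)))

print(f"chi2={chi2:.6f} df={df} C={c_coef:.7f} V={cramers_v:.7f}")
# prints values matching the table: chi2 ≈ 4.6473, df = 4, C ≈ 0.02893, V ≈ 0.02047
```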
Negative Specified Valence | |
---|---|
Pearson | Chi-square = 84.17117; df = 66; p = 0.06515 |
Con. Coef. C | 0.2392752 |
Cramér’s V | 0.1742549 |
Neutral Specified Valence | |
---|---|
Pearson | Chi-square = 68.07593; df = 50; p = 0.04537 |
Con. Coef. C | 0.2593524 |
Cramér’s V | 0.1898872 |
Positive Specified Valence | |
---|---|
Pearson | Chi-square = 105.8141; df = 102; p = 0.37815 |
Con. Coef. C | 0.1784509 |
Cramér’s V | 0.1282423 |
Emotion Valence\Specified Valence | Negative | Neutral | Positive
---|---|---|---
Negative | 39 (2.81%) | 37 (3.92%) | 105 (3.26%)
Neutral | 1316 (94.95%) | 886 (93.86%) | 3032 (94.25%)
Positive | 31 (2.24%) | 21 (2.22%) | 80 (2.49%)
∑ | 1386 (100%) | 944 (100%) | 3217 (100%)
Pearson | Chi-square = 2.554037; df = 4; p = 0.63499 | |
Con. Coef. C | 0.0214528 | |
Cramér’s V | 0.0151729 | |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Magdin, M.; Benko, Ľ.; Koprda, Š. A Case Study of Facial Emotion Classification Using Affdex. Sensors 2019, 19, 2140. https://doi.org/10.3390/s19092140