Article

Smart Classroom Monitoring Using Novel Real-Time Facial Expression Recognition System

1. Department of Computer Science and IT, University of Balochistan, Quetta 87300, Pakistan
2. Department of Computer Engineering, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta 87300, Pakistan
3. Department of Software Engineering, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta 87300, Pakistan
4. Department of Electrical Engineering Fundamentals, Faculty of Electrical Engineering, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
5. Department of Electrical Power Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, 708-00 Ostrava, Czech Republic
6. Department of Operations Research and Business Intelligence, Wroclaw University of Science and Technology, 50-370 Wroclaw, Poland
7. Department of Computer Science, MNS-University of Agriculture, Multan 60000, Pakistan
8. Department of Computer Science, Sardar Bahadur Khan Women’s University, Quetta 87300, Pakistan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12134; https://doi.org/10.3390/app122312134
Submission received: 20 October 2022 / Revised: 20 November 2022 / Accepted: 22 November 2022 / Published: 27 November 2022
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)

Featured Application

The proposed automatic emotion recognition system has been deployed in a classroom environment (education), but it can be used anywhere human emotions need to be monitored, e.g., health, banking, industry, and social welfare.

Abstract

Emotions play a vital role in education. Technological advancement in computer vision using deep learning models has improved automatic emotion recognition. In this study, a real-time automatic emotion recognition system incorporating novel salient facial features is developed for classroom assessment using a deep learning model. The proposed novel facial features for each emotion are initially detected using HOG for face recognition, and automatic emotion recognition is then performed by training a convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom. The proposed emotion recognition system analyzes the facial expressions of each student during learning. The selected emotional states are happiness, sadness, and fear, along with the cognitive–emotional states of satisfaction, dissatisfaction, and concentration. The selected emotional states are tested against the variables of gender, department, lecture time, seating position, and difficulty of the subject. The proposed system contributes to improving classroom learning.

1. Introduction

Emotions are a very important factor in decision-making, interaction, and perception. Human emotions are diverse, and we therefore have different definitions of emotions from different perspectives. Emotions are the expressed internal states of an individual that are evoked as a reaction to, and an interaction with, certain stimuli. The emotional states of an individual are affected by the person’s intentions, norms, background, cognitive capabilities, physical or psychological states, action tendencies, environmental conditions, appraisals, expressive behavior, and subjective feelings [1].
The researcher Tomkins conducted the first study demonstrating that facial expressions were reliably associated with certain emotional states [2]. Later, Tomkins recruited Paul Ekman and Carroll Izard to work on this phenomenon; they performed comprehensive studies of facial expressions and emotions, which became known as the “universality studies”. Ekman proposed universal facial expressions (happiness, sadness, disgust, anger, fear, and neutral) and claimed that these were present in every human belonging to any culture; these expressions were later accepted as universal expressions [3].
Facial expressions are powerful tools to analyze emotions due to the expressive behavior of the face, with the possibility of capturing rich content in multi-dimensional views. Humans can use the movement of facial features to express their desired inner feelings and mental states.
Emotions play a vital role in all human activities, and recent developments in neurology have shown that there is a connection between emotions, cognition, and audio functions [4]. Emotions have a strong impact on learning and thus play a vital role in education [5,6,7]. Emotions do not just impact traditional classroom learning but influence various types of learning, such as language learning, skills learning, and ethics learning.
A human possesses different emotions, and each has a different role in different situations; it can therefore be said that there is no limit to the emotions affecting a classroom environment. An educator needs to analyze their students’ facial expressions with a deep understanding of the surrounding environment during learning in order to improve the learning process [8,9].
Research studies have depicted the relationship between students’ emotions, explicitly expressed through the face, and their academic performance, and have demonstrated the importance of considering the quadratic relationship between students’ positive emotions during learning and their academic grades [10,11,12,13,14,15,16]. Good learning can have a positive impact on the academic grades of the learner [17], while difficulty in learning can lower a student’s academic grades and can lead to dropout [18].

Automatic Emotion Recognition System

The most significant components of human emotions are provided as input to machines so that they can automatically detect the emotions expressed by a human through their facial features. The advent of, and advances in, automatic facial expression recognition (FER) systems can tremendously increase the amount of data that can be processed, and real-time implementation of FER can benefit society. FER can be applied in various areas such as mental disease diagnosis, detection of human social/physiological interaction, criminology, security systems, customer satisfaction, and human–computer interaction. There are many computing techniques used to automate facial expression recognition [19]. The most widely implemented and successful technique is deep learning [20,21,22,23,24]. Deep learning is a subfield of machine learning within artificial intelligence that imitates the biological processes of the human brain and deals with algorithms inspired by the structure and function of the human brain, known as artificial neural networks. Jeremy Howard, in his 2014 TEDx Brussels talk, stated that computers trained using deep learning techniques have been able to achieve some effective computing processes that are similar to human emotion recognition and that are necessary for machines to better serve their purpose [25].
Facial expression recognition systems use either static images or videos. Most applications of facial expression recognition systems use image datasets [26,27]. It has been observed that an image dataset requires a trained person to label the images with discrete emotions and that, moreover, these discrete emotions do not cover the micro-expressions of the individual [28]. Research argues that analyzing facial expressions using a huge image dataset with unnecessary input dimensions can decrease the efficiency of an FER system. Similarly, another study concluded that images taken from different angles, at low resolution, or with noisy backgrounds can be problematic for automatic facial expression recognition [29,30,31,32,33]. Other work has argued that static images are not sufficient for automatic facial expression recognition; the authors conducted a study using recorded video of the classroom to improve the accuracy of FER [34,35,36,37].
One research group divided their proposed emotion detection system into positive and negative emotions [38]; they used two positive facial expressions, happiness and surprise, and five negative expressions, anger, contempt, disgust, fear, and sadness, to analyze the impact of negative and positive emotions on students’ learning. Another study analyzed students’ facial expressions in the classroom using deep learning, focusing on the concentration level and interest of learners using eye and head movement [39].
There are many deep learning techniques used in building automatic facial expression recognition systems. The convolutional neural network (CNN) is one of the most efficient and widely used models for developing an automatic emotion recognition system [40,41,42,43,44]. Table 1 below lists existing facial expression recognition systems.
Figure 1 shows the general composition of existing FER systems. It has been observed that the performance of automatic facial expression recognition is determined by feature selection and by the movements associated with each facial feature used to identify certain expressions [65,66]. A research study focusing on the selection and extraction of facial features for an automatic facial expression recognition system emphasized the selection of geometric facial features such as the eyebrows and eyes and used local binary patterns. The authors modified the hidden Markov model to mitigate changes in the distances between local facial landmarks [67,68,69,70,71,72,73].
It has been observed that even a very minor misalignment can displace the sub-regional location of a facial feature and lead to misleading placement in classification. Research focusing on the selection of patches of facial features has demonstrated high accuracy for automatic facial expression recognition. We have therefore selected salient facial features in our proposed automatic facial expression recognition system and use the histogram of oriented gradients (HOG) technique to accurately identify the facial features.
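As an illustration of how HOG descriptors can be computed over salient facial patches, the minimal Python sketch below uses scikit-image; the face size, patch locations, and patch dimensions are illustrative assumptions, not the values used in the proposed system.

```python
# Minimal sketch: HOG descriptors extracted from salient facial patches.
# Patch coordinates and sizes are illustrative, not the paper's actual values.
import numpy as np
from skimage import io, color
from skimage.feature import hog

def patch_hog(gray_face, top_left, size=(32, 32)):
    """Crop one salient patch from an aligned grayscale face and return its HOG descriptor."""
    y, x = top_left
    h, w = size
    patch = gray_face[y:y + h, x:x + w]
    return hog(patch,
               orientations=9,           # 9 gradient-orientation bins
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys')

# Hypothetical 128x128 aligned face and illustrative patch locations (row, col).
face = color.rgb2gray(io.imread('student_face.png'))
salient_patches = {'left_eyebrow': (20, 20), 'right_eyebrow': (20, 76),
                   'left_mouth_corner': (88, 30), 'right_mouth_corner': (88, 66)}
feature_vector = np.concatenate([patch_hog(face, loc) for loc in salient_patches.values()])
print('Combined HOG feature length:', feature_vector.shape[0])
```

Concatenating per-patch descriptors, rather than describing the whole face, keeps the feature vector focused on the regions that carry expression information.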

2. Materials and Methods

2.1. Research Questions

The following research questions are addressed:
  • Does the use of salient facial patches improve the performance of the automatic facial expression recognition system?
  • Do students’ emotions during lectures change according to the following:
    • Department
    • Gender
    • Difficulty of subject
    • Lecture duration
    • Seating position in the class
  • Do students’ facial expressions during the lecture affect their performance?
  • Are students’ facial expressions related to the difficulty of the subject?
  • How strong is the impact of lecture duration on students’ facial expressions?

2.2. Proposed Automatic Facial Expression Recognition System

In this study, we developed an automatic emotion recognition system using facial expressions that incorporates novel salient facial features. The facial features for each emotion are initially detected using HOG for face recognition, and automatic emotion recognition is then performed by training a convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom.
The selected emotional states are happiness, sadness, and fear, along with the cognitive–emotional states of satisfaction, dissatisfaction, and concentration. These facial expressions are identified using Python APIs. Figure 2 shows the facial features selected to identify each facial expression.
The proposed facial expression recognition system works in a real-time classroom environment. A camera is installed in the classroom to capture input for the proposed FER. Preprocessing is performed on the input, and HOG is used for face localization during this preprocessing. The proposed FER system then performs feature extraction of the proposed novel salient facial features using the histogram of oriented gradients (HOG). A deep learning model, the convolutional neural network (CNN), is used for the classification of the selected emotions. The proposed FER is capable of displaying the identified emotion and storing it in its database.
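The sketch below outlines this pipeline (HOG-based face localization followed by CNN classification of the six selected expressions). It assumes dlib’s HOG face detector and a Keras CNN; the input size, layer configuration, and training details are assumptions, since the exact architecture is not specified here.

```python
# Illustrative sketch of the pipeline described above: HOG-based face
# localization followed by a small CNN classifier for the six selected
# expressions. Layer sizes and the 48x48 input are assumptions.
import cv2
import dlib
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ['happy', 'sad', 'fear', 'satisfied', 'dissatisfied', 'concentration']

hog_face_detector = dlib.get_frontal_face_detector()   # dlib's HOG + SVM detector

def localize_faces(frame_bgr):
    """Return normalized 48x48 grayscale face crops detected in a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    crops = []
    for rect in hog_face_detector(gray, 1):
        face = gray[max(rect.top(), 0):rect.bottom(), max(rect.left(), 0):rect.right()]
        if face.size:
            crops.append(cv2.resize(face, (48, 48)) / 255.0)
    return np.array(crops)[..., np.newaxis]

def build_cnn(n_classes=len(EMOTIONS)):
    """A small CNN classifier; the architecture is illustrative only."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation='relu', input_shape=(48, 48, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation='softmax'),
    ])

model = build_cnn()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```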
The selected factors or variables used in this study are the duration of the lecture, difficulty of the subject, gender, department, and seating position in the classroom. The variable lecture duration is divided into three time slots of 15, 30, and 45 min. Similarly, the difficulty of a subject is determined on the basis of students’ grades in the subject: a subject with a greater average of lower grades is selected as a difficult subject. The subjects selected as difficult include theory of automata, multivariable calculus, general science, and modern programming languages. The variable gender consists of male and female. The variable seating position is divided into three categories: students seated in the first rows, students seated in the middle rows, and students seated in the last rows of the lecture room. The variable department considers four departments: computer science, education, information technology, and computer engineering.
The proposed automatic facial expression recognition system is installed in the classroom, and a camera captures the face of each student as input for the system. The camera and the system are set up and ready before the beginning of each lecture. The system identifies the facial expressions of the students with the help of the facial expressions used to train the system. The system continuously saves the facial expressions of each student, and the changes in facial expression during the lecture, in a database. Figure 3 explains the automatic facial expression recognition system, while the proposed FER system is depicted in Figure 4.
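A minimal sketch of the real-time capture-and-log loop implied above is given below, assuming a local SQLite database; the table layout is hypothetical, and classify_face() is a placeholder standing in for the trained CNN of the previous sketch.

```python
# Sketch of the real-time loop: read camera frames, detect faces, classify each
# face, and log the predicted expression with a timestamp to a SQLite database.
import sqlite3
import time
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def classify_face(gray_face):
    """Placeholder for the trained CNN; returns (emotion_label, confidence)."""
    return 'concentration', 0.0                # stub value; replace with model.predict

db = sqlite3.connect('classroom_expressions.db')
db.execute('CREATE TABLE IF NOT EXISTS expressions '
           '(ts REAL, face_id INTEGER, emotion TEXT, confidence REAL)')

cap = cv2.VideoCapture(0)                      # classroom camera
while cap.isOpened():                          # runs for the duration of the lecture
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for i, rect in enumerate(detector(gray, 1)):
        face = gray[max(rect.top(), 0):rect.bottom(), max(rect.left(), 0):rect.right()]
        if face.size == 0:
            continue
        emotion, conf = classify_face(cv2.resize(face, (48, 48)))
        db.execute('INSERT INTO expressions VALUES (?, ?, ?, ?)',
                   (time.time(), i, emotion, conf))
    db.commit()
cap.release()
db.close()
```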

2.2.1. Sample Group

The sample group consisted of 100 students, 20 females and 80 males, from Balochistan University of Information Technology, Engineering and Management Sciences (BUITEMS), University of Balochistan (UoB), and Sardar Bahadur Khan Women’s University (SBKWU), Quetta, Balochistan. Figure 5 shows the working of the automatic facial expression recognition system.

2.2.2. Data Collection and Analysis

The system runs in the real-time classroom environment, taking input from the camera to automatically identify the facial expressions of students in the classroom. The proposed system stores the data in a database, which is used to perform the statistical analysis. The accuracy of the proposed system is compared with well-known FERs such as Azure, Face++, and FaceReader. Moreover, a senior psychologist was provided with a recording of the same classroom lecture analyzed by the proposed system and was asked to assess the selected facial expressions. This helped us to evaluate the accuracy of the proposed system.
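One simple way to quantify agreement between the system’s labels and the psychologist’s labels is sketched below; the file name, column layout, and the use of Cohen’s kappa are assumptions rather than the study’s stated procedure.

```python
# Sketch: comparing the system's frame-level labels with the psychologist's
# labels for the same recorded lecture. CSV layout is hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

labels = pd.read_csv('lecture_labels.csv')     # columns: frame, system, psychologist
print('Agreement (accuracy):', accuracy_score(labels['psychologist'], labels['system']))
print("Cohen's kappa:", cohen_kappa_score(labels['psychologist'], labels['system']))
print(confusion_matrix(labels['psychologist'], labels['system']))
```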

2.2.3. Constraints/Limitations

It was observed that, among the pool of 100 selected students, some students were not clearly visible to the camera due to their seating positions or lighting issues in the classroom, or because they frequently kept their hands over their mouths or wore a mask or glasses. Similarly, the facial expressions of some students could not be captured for several minutes at a time because their heads were down while they wrote important points of the lecture or used a cell phone for a long period. Therefore, the exact number of students with complete data is 80.

2.2.4. Statistical Tests

Statistical tests were performed in order to analyze the effects of the selected variables on the selected facial expressions of the students during the lecture. The multivariate analysis of variance (MANOVA) test was performed in order to analyze the effect of multiple selected variables on the selected facial expressions. MANOVA is a technique for analyzing the effect of independent categorical variables on multiple continuous dependent variables; it essentially tests whether or not the independent grouping variable causes significant variance in the dependent variables. For a particular p-variable multivariate test, assume that the hypothesis and error matrices H and E have h and e degrees of freedom, respectively. Four tests may be defined as follows. Let θi, φi, and λi be the eigenvalues of H(E + H)⁻¹, HE⁻¹, and E(E + H)⁻¹, respectively. Note that these eigenvalues are related as follows:
θi = 1 − λi = φi / (1 + φi)
φi = θi / (1 − θi) = (1 − λi) / λi
λi = 1 − θi = 1 / (1 + φi)
The MANOVA explains how the students’ facial expressions change with respect to gender, difficulty of subject, lecture duration, and department. We calculated skewness and kurtosis to check the normal distribution of the data. According to the guidelines of Kline (2005), skewness and kurtosis are used to analyze the normal distribution of data: if the skewness coefficient lies between ±1, the data are considered suitable for assuming a normal distribution, while some other studies consider kurtosis and skewness together and state that the data can be treated as normally distributed if the values of kurtosis and skewness lie between ±2 [74]. The kurtosis and skewness coefficients of the data collected in this study lie between ±2, and the data can therefore be treated as normally distributed.
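The normality screening described above can be reproduced with a few lines of SciPy, as in the sketch below; the file and column names are placeholders for the study’s stored expression scores.

```python
# Sketch of the normality screening using the +/-2 rule of thumb for skewness
# and kurtosis. Column names are placeholders.
import pandas as pd
from scipy.stats import skew, kurtosis

data = pd.read_csv('expression_scores.csv')    # one column per expression (hypothetical)
for col in ['happy', 'sad', 'fear', 'satisfied', 'dissatisfied', 'concentration']:
    s = skew(data[col])
    k = kurtosis(data[col])                    # Fisher's definition (normal -> 0)
    ok = abs(s) <= 2 and abs(k) <= 2
    print(f'{col}: skewness={s:.2f}, kurtosis={k:.2f}, approximately normal: {ok}')
```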
Furthermore, Box’s M statistic and Levene’s test were used to test the homogeneity of covariance and variance, respectively, of the dependent variables [75,76].
Moreover, the Bonferroni correction was used in order to analyze the effect on the six selected facial expressions with respect to the five selected variables without pre-planned hypotheses. The p value threshold is adjusted using the Bonferroni correction to reduce the family-wise error rate; with six dependent variables, the per-test alpha becomes 0.05/6 ≈ 0.0083.
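A sketch of this analysis chain (MANOVA across the six expressions, Levene’s test, and Bonferroni-adjusted univariate follow-ups) using statsmodels and SciPy is given below; the data layout and column names are assumptions, and gender is used as the example grouping variable.

```python
# Sketch: MANOVA over the six expressions, Levene's test per expression, and
# Bonferroni-adjusted follow-up ANOVAs. Column names are placeholders.
import pandas as pd
from scipy.stats import levene, f_oneway
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv('expression_scores.csv')      # expression columns plus 'gender'
expressions = ['happy', 'sad', 'fear', 'satisfied', 'dissatisfied', 'concentration']

# Multivariate test: do the six expressions differ jointly by gender?
manova = MANOVA.from_formula(' + '.join(expressions) + ' ~ gender', data=df)
print(manova.mv_test())                        # includes Wilks' lambda

# Homogeneity of variances and Bonferroni-adjusted univariate ANOVAs.
alpha_adjusted = 0.05 / len(expressions)       # 0.05 / 6 ~= 0.0083, as used in Section 3
groups = [g for _, g in df.groupby('gender')]
for col in expressions:
    _, p_levene = levene(*[g[col] for g in groups])
    f_stat, p_anova = f_oneway(*[g[col] for g in groups])
    print(f'{col}: Levene p={p_levene:.3f}, F={f_stat:.2f}, '
          f'significant at Bonferroni alpha: {p_anova < alpha_adjusted}')
```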

3. Results

The study focused on analyzing the relationship between facial expressions and learning. It was observed from the data that the facial expressions of students change frequently (within seconds) during a lecture, which indicates that learning and facial expressions are directly related to each other.
The six selected facial expressions (dissatisfied, sad, happy, fear, satisfied, and concentration) of 100 students were analyzed against the five selected variables (duration of lecture, difficulty of subject, gender, seating position in the classroom, and department).
The facial expressions were first analyzed against the variable “gender”. The homogeneity of the covariance matrix (Box’s M = 135, p > 0.05) and Levene’s test for homogeneity (equality) of variance for the selected facial expressions (p < 0.05) indicate the homogeneity of the data. The MANOVA test was carried out for the variance of students’ facial expressions for the variable “gender”. There was a significant difference between male and female students’ facial expressions when the six selected facial expressions during classroom learning were considered collectively: Wilks’ Lambda (Λ) = 0.926, F(6, 506) = 6.694, p < 0.001, partial η2 = 0.74. Separate ANOVAs were conducted for each dependent variable, with each ANOVA evaluated at an alpha level of 0.0083 after the Bonferroni adjustment. The variables showed the following effects:
  • Happy had no significant effect, F(1,511) = 0.003, partial η2 < 0.001
  • Sad had no significant effect, F(1,511) = 4.483, partial η2 = 0.009
  • Satisfied had a significant effect, F(1,511) = 15.370, partial η2 = 0.029
  • Dissatisfied had a significant effect, F(1,511) = 23.026, partial η2 = 0.043
  • Concentration did not have a significant effect, F(1,511) = 4.176, partial η2 = 0.008
  • Fear did not have a significant effect, F(1,511) = 3.446, partial η2 = 0.007
Similarly, using MANOVA to test the effect of the variable “department”, we analyzed the six selected facial expressions mentioned above. The homogeneity of the covariance matrix (Box’s M = 141, p < 0.05) and Levene’s test for homogeneity (equality) of variance for the selected facial expressions (p < 0.05) indicate the homogeneity of the data. Wilks’ Lambda (Λ) = 0.972, F(18, 1426) = 0.792, p = 0.712, partial η2 = 0.009 shows that the variable “department” did not have a significant impact on the facial expressions of the students. The statistical analysis is shown in Table 2.
“Seating position” was another selected variable; it was analyzed against the selected facial expressions to determine its effect. The homogeneity of the covariance matrix (Box’s M = 240, p < 0.05), the homogeneity of variance against the selected facial expressions (p < 0.05), and Wilks’ Lambda (Λ) = 0.877, F(6, 506) = 11.741, p < 0.05 indicate an effect of seating position on the students’ facial expressions. Furthermore, the seating position was divided into three groups (first row, middle row, and last row), and the facial expressions were analyzed by row in order to understand the effect clearly by applying the MANOVA test. The students in the first row had Wilks’ Lambda (Λ) = 0.969, F(6, 506) = 2.681, p < 0.05, partial η2 = 0.031, and after applying an ANOVA for each variable, p < 0.05, indicating a strong impact. Furthermore, in order to identify the most frequent facial expressions among the students sitting in the front row, the means and standard deviations were examined; these show that those students displayed satisfaction, concentration, and happiness. The results are given in Table 3.
The ANOVA conducted for students sitting in the middle row shows that these students were mostly satisfied, as shown in Table 4.
Similarly, the students sitting in the last row were also evaluated using ANOVA to determine the impact of each variable. The results in Table 5 show that the students in the last row seemed dissatisfied.
The variable “lecture duration” was considered and its effect on the students’ facial expressions was analyzed. The lecture duration was divided into three sections: 15, 30, and 45 min. The 15 min portion of the lecture included the overview of the topic, the 30 min portion included the definitions and details of the topic, and the 45 min portion included the background, definition, and comprehensive detail of the topic. The means and standard deviations given in Table 6 show the facial expression that lasted the longest for an individual in each time division of the lecture duration.
The facial expressions of satisfaction, concentration, and happiness were high during the 15 min portion of the lecture, while satisfaction and concentration were high during the 30 min portion. Dissatisfaction and sadness were found during the 45 min portion of the lecture.
Another selected variable, “difficulty of subject”, was analyzed against the selected facial expressions. The difficulty level of a subject was determined from the students’ grade performance: the more failures or lower grades the students attain in a subject, the more difficult the subject is considered. Theory of automata, multivariable calculus, general science, and modern programming languages were considered the higher-difficulty subjects. The results show that the difficulty level of the subject has a huge impact on the facial expressions of the students. Table 7 below shows that the mean value of the dissatisfied expression (x = 0.78) was higher in the subjects with a higher difficulty level. Similarly, the mean of another facial expression, sad, is x = 0.63, while that of the satisfied expression is lower, x = 0.19, indicating that the difficulty level of the subjects had a great impact on the facial expressions of the students.
Moreover, the selected subjects with a higher difficulty level were analyzed against the time duration of the lecture, and the values indicate that the dissatisfied facial expression was at its peak when the lecture duration was 15 min and was lower at 30 min and 45 min, as shown in Table 7.
The research question “Does the use of salient facial patches improve the performance of the automatic facial expression recognition system?” can be answered by stating that adding the salient facial patches increased the accuracy of the automatic facial expression detection system considerably. The statistical measure area under the curve (AUC) and the evaluation metric of accuracy showed improved results when comparing the proposed system with well-known FERs such as Azure, Face++, and FaceReader. Figure 6 and Figure 7 show a comparative analysis of the proposed automatic emotion recognition system using facial expressions with other well-known FERs on the market.
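The accuracy and AUC comparison can be computed as in the following sketch; the stored label and probability arrays are hypothetical, and the macro-averaged one-vs-rest AUC is one reasonable multi-class choice assumed here, not necessarily the exact procedure used for Figures 6 and 7.

```python
# Sketch: overall accuracy and a one-vs-rest multi-class AUC computed from
# held-out labels and the classifier's predicted probabilities.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.load('test_labels.npy')            # hypothetical integer labels (0..5)
y_prob = np.load('test_probabilities.npy')     # shape (n_samples, 6)

accuracy = accuracy_score(y_true, y_prob.argmax(axis=1))
auc = roc_auc_score(y_true, y_prob, multi_class='ovr')   # macro-averaged OvR AUC
print(f'Accuracy: {accuracy:.3f}, AUC (OvR): {auc:.3f}')
```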
The proposed system was developed to work in a real-time classroom environment, but it is capable of processing images and videos as well. In order to compare the performance of the proposed system with existing systems, the proposed system was provided with images as input, because the existing systems accept only images. In Table 8, commercial FER systems are considered and the F1 score of each emotion is listed using the FER-2013 dataset.
In Table 9, the accuracy of existing FER models is compared with the proposed system using the well-known datasets FER13, JAFFE, and CK+.
In Table 10, the accuracy of FERs that can work in a real-time environment is compared with that of the proposed system.
The statistics above show the higher accuracy of the proposed system compared with existing FERs. Moreover, the proposed FER is trained to analyze emotions with the help of the proposed novel facial feature movements. Each emotion addressed in the study is composed of multiple micro muscle movements of the facial features, which are analyzed to improve the accuracy of emotion recognition.
Moreover, the lectures analyzed by our system were recorded. The videos of the recorded lectures were presented to a senior psychologist, Dr. Inam Shabir, an employee of the federal government of Pakistan, to evaluate the facial expressions of the students during the lectures. The assessments of our proposed automatic emotion recognition system and of this senior psychologist were compared. He observed the facial expressions of students against the variable “gender” and noted that the facial expressions of female students changed most frequently during the lecture. For the variable “seating position”, he concluded that the students sitting in the first row were mostly happy and concentrating on the lecture, that the students in the middle row seemed satisfied and showed concentration during the lecture, and that the students in the last row seemed a little bored, concentrating, and dissatisfied. He suggested that the behavior of the students in the last row was affected by factors such as poor visibility of the board, the voice of the instructor, and the speed of teaching, and that improving the seating arrangement could change their facial expressions. He furthermore observed, for the variable “difficulty of subject”, that subject difficulty increased sadness, concentration, and dissatisfaction among the students. For the variable “lecture time”, the psychologist observed that boredom and dissatisfaction occurred most frequently during the 45 min portion of the lecture. He stated that the increase in lecture duration caused the students to lose focus and become tired. It is therefore suggested that lecture time should not be increased, or that students should be given breaks during the lecture to refresh their minds for better focus and good results. He suggested that this situation can be mitigated if the instructor uses interactive, thought-provoking, and engaging lecture techniques.
The second research question, “Do students’ facial expressions during the lecture change according to the variables ‘department’, ‘gender’, ‘difficulty of subject’, ‘lecture duration’, or ‘seating position’?”, was answered as our research study explored the effect of each variable on students’ facial expressions and concluded that the external variables associated with the learning environment can affect the facial expressions of the students during learning.
The third research question addressed was “Do students’ facial expressions during the lecture affect their performance?”
The facial expressions of each student evaluated by the proposed system were compared with their grade performance, and it was observed that the students’ grade performance was directly connected to their facial expressions during learning. The fourth research question addressed was “Are students’ facial expressions related to the difficulty of the subject?”. The statistics from the research show that when the students were taught a subject with higher difficulty, they displayed higher dissatisfaction and sadness. Similarly, the study analyzed the difficulty of the subject against students’ grades, and the results show that the difficulty level of the subject also has a huge impact on the students’ performance. The fifth research question addressed was “What is the impact of lecture duration on students’ facial expressions?”. It was observed in this research study that, as the lecture duration increases, the facial expressions of the majority of students change from satisfied, concentrated, and happy to dissatisfied and sad. We can therefore conclude that lecture duration should not be very long, or that lecture breaks should be included to overcome this issue.

4. Conclusions and Discussion

This study explores the effects of facial expressions on classroom learning using a proposed deep learning-based automatic emotion recognition system incorporating novel facial features. The facial expressions of happy, sad, satisfied, concentration, dissatisfied, and fear were selected, and novel facial feature movements were introduced to correctly identify each facial expression. A group of 100 students from three universities (SBKWU, BUITEMS, and UoB) and four departments (CS, CE, Education, and IT) was selected to analyze the effect of the selected variables (department, lecture duration, gender, difficulty of subject, and seating position) on facial expressions. It was observed that the facial expressions of students fluctuated throughout the lecture, as shown in Figure 8, which indicates that facial expressions are directly connected to classroom learning.
Moreover, each selected variable was analyzed against the facial expressions. The variable “department” had no significant effect on the facial expressions of the students during classroom learning. The variable “seating position” showed a significant effect: the seating positions were divided into three groups (front row, middle row, and last row), and the results of the study indicate that the students in the first row were more satisfied than the students sitting in the middle and last rows. Similarly, the variable “difficulty of subject” showed a significant impact, as the statistics indicate that subjects with a higher difficulty level result in negative facial expressions of the students during learning. In order to assess the results of the proposed emotion recognition system using facial expressions, the accuracy of the system was compared with the well-known commercial FERs Face++, FaceReader, Emotient, Affectiva, MorphCast, and Azure, and with other existing FERs. Moreover, a senior psychologist evaluated the classroom recordings and discussed his observations. He suggested some points to improve the learning process: adopting interactive teaching techniques, which can improve learning for higher-difficulty subjects; keeping lectures from becoming very long, or including a break during long lectures to refresh the minds of students; and improving the seating positions, which can also help improve learning.

Author Contributions

Conceptualization, J.B.; methodology, S.F., S.U.B.; software, S.F., S.H.; validation, S.U.B., E.J.; formal analysis, M.U.C., S.F., S.M.; investigation, S.F., M.J., E.J.; data curation, S.F., S.H., S.M.; writing—original draft preparation, S.F.; writing—review and editing, J.B., S.U.B., M.U.C., Z.L.; visualization, S.F., S.U.B.; supervision, J.B.; project administration, M.J., Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

SGS Grant from VSB-Technical University of Ostrava under grant number SP2022/21.

Institutional Review Board Statement

The office of research, innovation and communication’s Ethics Research Committee at University of Balochistan approved the conduct of this study under letter No. RUB:/Estt/T-08:/498-10.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Data will be made available when required.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jeffrey, C. Foundations of Human Computing Facial Expression and Emotion. In Proceedings of the ICMI 2006 and IJCAI 2007 International Workshops, Banff, AB, Canada, 3 November 2006. [Google Scholar]
  2. Tomkins, S.S.; McCarter, R. What and where are the primary affects? Some evidence for a theory. Am. Psychol. Assoc. 1964, 18, 119–158. [Google Scholar] [CrossRef] [PubMed]
  3. Ekman, P.; Friesen, W.V.; O’Sullivan, M.; Chan, A.; Diacoyanni-Tarlatzis, I.; Heider, K.; Krause, R.; LeCompte, W.A.; Pitcairn, T.; Ricci-Bitti, P.E.; et al. Universals and cultural differences in the judgments of facial expressions of emotion. J. Personal. Soc. Psychol. 1987, 53, 712–717. [Google Scholar] [CrossRef] [PubMed]
  4. Immordino-Yang, M.H.; Damasio, A. We Feel, Therefore We Learn: The Relevance of Affective and Social Neuroscience to Education. Int. Mind Brain Educ. Soc. 2007, 1, 3–10. [Google Scholar] [CrossRef]
  5. Zembyl, M.; Schutz, P.A. Introduction to Methodological Advances in Research on Emotion in Education. In Methodological Advances in Research on Emotion and Education; Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–14. [Google Scholar]
  6. Chubb, J.; Watermeyer, R.; Wakeling, P. Fear and loathing in the academy? The role of emotion in response to an impact agenda in the UK and Australia. J. High. Educ. Res. Dev. 2017, 36, 555–568. [Google Scholar] [CrossRef]
  7. Demetriou, H. Empathy and Emotion in Education and Beyond. In Empathy, Emotion and Education; Palgrave Macmillan: London, UK, 2018; pp. 279–306. [Google Scholar]
  8. Sgariboldi, A.R.; Puggina, A.C.G.; da Silva, M.J.P. Professors perception of students’ feelings in the classroom an analysis. SciELO Sci. Electron. Libr. Online 2011, 45, 1206–1212. [Google Scholar] [CrossRef] [Green Version]
  9. Kärner, T.; Kögler, K. Emotional States during Learning Situations and Students’ Self-Regulation: Process-Oriented Analysis of Person-Situation Interactions in the Vocational Classroom. Empir. Res. Vocat. Educ. Train. 2016, 8, 12. [Google Scholar] [CrossRef] [Green Version]
  10. Altrabsheh, N.; Cocea, M.; Fallahkhair, S. Learning sentiment from students’ feedback for real time interventions in classrooms. In Proceedings of the Third International Conference, ICAIS 2014, Bournemouth, UK, 8–9 September 2014. [Google Scholar]
  11. Dennis, T.A.; Hong, M.; Solomon, B. Do the associations between exuberance and emotion regulation depend on effortful control? Int. J. Behav. Dev. 2010, 34, 462–472. [Google Scholar] [CrossRef]
  12. Hascher, T. Learning and Emotion: Perspectives for Theory and Research. Eur. Res. J. 2010, 9, 13–28. [Google Scholar] [CrossRef] [Green Version]
  13. Lei, H.; Cui, Y. Effects of academic emotions on achievement among mainland Chinese students: A meta-analysis. Soc. Behav. Personal. 2016, 44, 1541–1554. [Google Scholar] [CrossRef]
  14. Mega, C.; Ronconi, L.; Beni, R.D. What Makes a Good Student? How Emotions, Self-Regulated Learning, and Motivation Contribute to Academic Achievement. J. Educ. Psychol. 2014, 106, 121–131. [Google Scholar] [CrossRef]
  15. Oliver, E.; Archambault, I.; del Clercq, M.; Galand, B. Student Self-Efficacy, Classroom Engagement, and Academic Achievement: Comparing Three Theoretical Frameworks. J. Youth Adolesc. 2019, 48, 326–340. [Google Scholar] [CrossRef] [PubMed]
  16. Sainio, P.J.; Eklund, K.M.; Ahonen, T.P.S.; Kiuru, N.H. The Role of Learning Difficulties in Adolescents’ Academic Emotions and Academic Achievement. J. Learn. Disabil. 2019, 52, 287–298. [Google Scholar] [CrossRef] [PubMed]
  17. Hakkarainen, M.; Halopainen, L.K.; Savolainen, H.K. A Five-Year Follow-Up on the Role of Educational Support in Preventing Dropout From Upper Secondary Education in Finland. J. Learn. Disabil. 2013, 48, 408–421. [Google Scholar] [CrossRef] [PubMed]
  18. Lodge, J.M.; Kennedy, G.; Lockyer, L. Understanding Difficulties and Resulting Confusion in Learning: An Integrative Review. Educ. Psychol. 2018, 3, 49. [Google Scholar] [CrossRef] [Green Version]
  19. Srivastava, S. Real time facial expression recognition using a novel method. Int. J. Multimed. Its Appl. 2012, 4, 49–57. [Google Scholar] [CrossRef]
  20. Abdullah, S.M.S.; Ameen, S.Y.; Sadeeq, M.A.; Zeebaree, S.R.M. Multimodal Emotion Recognition using Deep Learning. J. Appl. Sci. Technol. Trends 2021, 2, 52–58. [Google Scholar] [CrossRef]
  21. Miao, Y.; Dong, H.; al Jaam, J.M.; Siddik, A.E. A Deep Learning System for Recognizing Facial Expression in Real-Time. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–20. [Google Scholar] [CrossRef]
  22. Wang, X.; Huang, J.; Zhu, J.; Yang, M.; Yang, F. Facial expression recognition with deep learning. In Proceedings of the 10th International Conference on Internet Multimedia Computing and Service, Nanjing, China, 17–19 August 2018; pp. 1–4. [Google Scholar]
  23. Ma, Y.; Huang, C. Facial Expression Recognition Based on Deep Learning and Attention Mechanism. In Proceedings of the 3rd International Conference on Advanced Information Science and System (AISS 2021), Sanya, China, 26–28 November 2021; pp. 1–6. [Google Scholar]
  24. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.-R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Sainath, N.; Kingsbury, B. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  25. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, 3–6 December 2012. [Google Scholar]
  27. Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar]
  28. Ekman, P. The argument and evidence about universals in facial expressions of emotion. In Hand Book of Social Psychology; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1989; pp. 143–164. [Google Scholar]
  29. Bodavarapu, P.N.R.; Sriniva, P.V.V.S. Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques. Indian J. Sci. Technol. 2021, 14, 971–983. [Google Scholar] [CrossRef]
  30. Barsoum, E.; Zhang, C.; Ferrer, C.C.; Zhang, Z. Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan, 12–16 November 2016. [Google Scholar]
  31. D’Amario, V.; Srivastava, S.; Sasaki, T.; Boix, X. The Data Efficiency of Deep Learning Is Degraded by Unnecessary Input Dimensions. Front. Comput. Neurosci. 2022, 16, 760085. [Google Scholar] [CrossRef]
  32. Kumar, A.; Jani, K.; Jishu, A.K.; Patel, H.; Sharma, A.K.; Khare, M. Evaluation of Deep Learning Based Human Expression Recognition on Noisy Images. In Proceedings of the 7th International Conference on Soft Computing & Machine Intelligence (ISCMI), Stockholm, Sweden, 14–15 November 2020. [Google Scholar]
  33. Kathavarayan, R.S.; Murugesan, K. Preserving Global and Local Features for Robust Face Recognition under Various Noisy Environments. Int. J. Image Process. (IJIP) 2010, 3, 328–340. [Google Scholar]
  34. Wang, Y.; Xu, X.; Zhuang, Y. Learning Dynamics for Video Facial Expression Recognition. In Proceedings of the 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 22–24 December 2021. [Google Scholar]
  35. Yu, J.; Wang, Z. A monocular video-based facial expression recognition system by combining static and dynamic knowledge. In Proceedings of the 9th International Conference on Utility and Cloud Computing, Shanghai, China, 6–9 December 2016. [Google Scholar]
  36. Fan, Y.; Lam, J.C.K.; Li, V.O. Video-based Emotion Recognition Using Deeply-Supervised Neural Networks. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018. [Google Scholar]
  37. Zhen, P.; Chen, H.-B.; Cheng, Y.; Ji, Z.; Liu, B.; Yu, H. Fast Video Facial Expression Recognition by a Deeply Tensor-Compressed LSTM Neural Network for Mobile Devices. ACM Trans. Internet Things 2021, 2, 1–26. [Google Scholar] [CrossRef]
  38. Oo, T.; Boonroungrut, C.; One, K. Exploring Classroom Emotion with Cloud-Based Facial Recognizer in the Chinese Beginning Class: A Preliminary Study. Int. J. Instr. 2019, 12, 947–958. [Google Scholar]
  39. Tonguç, G.; Ozkara, B.O. Automatic recognition of student emotions from facial expressions during a lecture. Comput. Educ. 2020, 148, 103797. [Google Scholar] [CrossRef]
  40. Minaee, S.; Bouazizi, I.; Kolan, P.; Najafzadeh, H. Ad-Net: Audio-Visual Convolutional Neural Network for Advertisement Detection In Videos. arXiv 2018, arXiv:1806.08612. [Google Scholar]
  41. Li, K.; Jin, Y.; Akram, M.W.; Han, R.; Chen, J. Facial expression recognition with convolutional neural networks via a new face cropping and rotation strategy. Vis. Comput. 2019, 36, 391–404. [Google Scholar] [CrossRef]
  42. Wang, Y.; Li, Y.; Song, Y.; Rong, X. The Influence of the Activation Function in a Convolution Neural Network Model of Facial Expression Recognition. Appl. Sci. 2020, 10, 1897. [Google Scholar] [CrossRef] [Green Version]
  43. Sajjad, M.; Zahir, S.; Ullah, A.; Akhtar, Z.; Muhammad, K. Human Behavior Understanding in Big Multimedia Data Using CNN based Facial Expression Recognition. Mob. Netw. Appl. 2020, 25, 1611–1621. [Google Scholar] [CrossRef]
  44. Lopes, T.; de Aguiar, E.; Souza, A.F.D.; Oliveria-Santos, T. Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order. Pattern Recognit. 2017, 61, 610–628. [Google Scholar] [CrossRef]
  45. Wang, H.; Huang, H.; Hu, Y.; Anderson, M.; Rollins, P.; Makedon, F. Emotion detection via discriminative kernel method. In Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments, Samos Greece, 23–25 June 2010. [Google Scholar]
  46. Zhang, L.; Tjondronegoro, D. Facial Expression Recognition Using Facial Movement Features. IEEE Trans. Affect. Comput. 2011, 2, 219–229. [Google Scholar] [CrossRef] [Green Version]
  47. Poursaberi, A.; Noubari, H.A.; Gavrilova, M.; Yanushkevich, S.N. Gauss–Laguerre wavelet textural feature fusion with geometrical information for facial expression identification. EURASIP J. Image Video Process. 2012, 2012, 17. [Google Scholar] [CrossRef] [Green Version]
  48. Owusu, E.; Zhan, Y.; Mao, Q.R. A neural-AdaBoost based facial expression recognition system. Expert Syst. Appl. 2014, 41, 3383–3390. [Google Scholar] [CrossRef] [Green Version]
  49. Dahmane, M.; Meunier, J. Prototype-Based Modeling for Facial Expression Analysis. IEEE Trans. Multimed. 2014, 16, 1574–1584. [Google Scholar] [CrossRef]
  50. Biswas, S.; Sil, J. An efficient face recognition method using contourlet and curvelet transform. J. King Saud Univ.—Comput. Inf. Sci. 2020, 32, 718–729. [Google Scholar] [CrossRef]
  51. Hegde, G.P.; Seetha, M.; Hegde, N. Kernel Locality Preserving Symmetrical Weighted Fisher Discriminant Analysis based subspace approach for expression recognition. Eng. Sci. Technol. Int. J. 2016, 19, 1321–1333. [Google Scholar] [CrossRef] [Green Version]
  52. Kumar, S.; B, M.; Chakraborty, B.K. Extraction of informative regions of a face for facial expression recognition. IET Comput. Vis. 2016, 10, 567–576. [Google Scholar] [CrossRef]
  53. Siddiqi, M.H.; Ali, R.; Khan, A.M.; Park, Y.-T.; Lee, S. Human Facial Expression Recognition Using Stepwise Linear Discriminant Analysis and Hidden Conditional Random Fields. IEEE Trans. Image Process. 2015, 24, 1386–1398. [Google Scholar] [CrossRef]
  54. Kim, S.; An, G.H.; Kang, S.-j. Facial expression recognition system using machine learning. In Proceedings of the 2017 International SoC Design Conference (ISOCC), Seoul, Korea, 5–8 November 2017; pp. 266–267. [Google Scholar]
  55. Makhmudkhujaev, F.; Al-Wadud, M.A.; Ryu, M.T.B.I.B.; Chae, O. Facial expression recognition with local prominent directional pattern. Signal Process. Image Commun. 2019, 74, 1–12. [Google Scholar] [CrossRef]
  56. Liu, K.-C.; Hsu, C.-C.; Wang, W.-Y.; Chiang, H.-H. Real-Time Facial Expression Recognition Based on CNN. In Proceedings of the 2019 International Conference on System Science and Engineering (ICSSE), Dong Hoi, Vietnam, 19–21 July 2019. [Google Scholar]
  57. Niu, B.; Gao, Z.; Guo, B. Facial Expression Recognition with LBP and ORB Features. Comput. Intell. Neurosci. 2021, 2021, 8828245. [Google Scholar] [CrossRef]
  58. Liang, L.; Lang, C.; Li, Y.; Feng, S. Fine-Grained Facial Expression Recognition in the Wild. IEEE Trans. Inf. Secur. 2021, 16, 482–494. [Google Scholar] [CrossRef]
  59. Zhang, F.; Zhang, T.; Mao, Q.; Xu, C. Geometry Guided Pose-Invariant Facial Expression Recognition. IEEE Trans. Image Process. 2020, 29, 4445–4460. [Google Scholar] [CrossRef] [PubMed]
  60. Appa, R.; Borgalli, S.; Surve, S. Deep learning for facial emotion recognition using custom CNN architecture. In Proceedings of the 2nd International Conference on Computational Intelligence & IoT (ICCIIoT) 2021, Online, 23–24 February 2022. [Google Scholar]
  61. Borgalli, R.A.; Surve, S. Deep Learning Framework for Facial Emotion Recognition using CNN Architectures. In Proceedings of the 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 16–18 March 2022. [Google Scholar]
  62. Fang, B.; Chen, G.; He, J. Ghost-based Convolutional Neural Network for Effective Facial Expression Recognition. In Proceedings of the 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE), Guilin, China, 25–27 February 2022. [Google Scholar]
  63. Khattak, K.; Asghar, M.Z.; Ali, M.; Batool, U. An efficient deep learning technique for facial emotion recognition. Multimed. Tools Appl. 2022, 81, 1649–1683. [Google Scholar] [CrossRef]
  64. Liu, X.; Kumar, B.; You, J.; Jia, P. Adaptive Deep Metric Learning for Identity-Aware Facial Expression Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  65. Choudhary, A.; Shukla, J. Feature Extraction and Feature Selection for Emotion Recognition using Facial Expression. In Proceedings of the IEEE International Conference on Multimedia Big Data (BigMM), New Delhi, India, 24–26 September 2020. [Google Scholar]
  66. Lajevardi, S.M.; Hussain, Z.M. Automatic facial expression recognition: Feature extraction and selection. Signal Image Video Process. 2010, 6, 159–169. [Google Scholar] [CrossRef]
  67. Chouhayebi, H.; Riffi, J.; Mahraz, M.A.; Yahyaouy, A.; Tairi, H.; Alioua, N. Facial expression recognition based on geometric features. In Proceedings of the International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020. [Google Scholar]
  68. Chen, B.; Guan, W.; Li, P.; Ikeda, N.; Hirasawa, K.; Lu, H. Residual multi-task learning for facial landmark localization and expression recognition. Pattern Recognit. 2021, 115, 107893. [Google Scholar] [CrossRef]
  69. Khan, A. Facial Expression Recognition using Facial Landmark Detection and Feature Extraction via Neural Networks. arXiv 2020, arXiv:1812.04510. [Google Scholar]
  70. Li, Q.; Zhan, S.; Xu, L.; Wu, C. Facial micro-expression recognition based on the fusion of deep learning and enhanced optical flow. Multimed. Tools Appl. 2019, 78, 29307–29322. [Google Scholar] [CrossRef]
  71. Agarwal, R.; Kohli, N.; Rahul, M. Facial Expression Recognition using Local Multidirectional Score Pattern (LMSP) descriptor and Modified Hidden Markov Model. Int. J. Adv. Intell. Paradig. 2021, 18, 538–551. [Google Scholar] [CrossRef]
  72. Ekweariri, N.; Yurtkan, K. Facial expression recognition using enhanced local binary patterns. In Proceedings of the 9th International Conference on Computational Intelligence and Communication Networks (CICN), Girne, Northern Cyprus, 16–17 September 2017. [Google Scholar]
  73. Lakshmi, D.; Ponnusamy, R. Facial emotion recognition using modified HOG and LBP features with deep stacked autoencoders. Microprocess. Microsyst. 2021, 82, 103834. [Google Scholar] [CrossRef]
  74. George, D.; Mallery, P. SPSS for Windows Step by Step: A Simple Guide and Reference. In SPSS for Windows Step by Step: A Simple Guide and Reference, 4th ed.; Allyn and Bacon: Boston, MA, USA, 2003; p. 368. [Google Scholar]
  75. Green, S.B.; Salkind, N.J. Using SPSS for Windows and Macintosh: Analyzing and Understanding Data, 6th ed.; Pearson: London, UK, 2011. [Google Scholar]
  76. Raza, M.A.; Bazai, S.; Li, Z.; Wagan, R.A.; Nizamani, M.M.; Khokhar, A.A. Analysis of Change in Diversity Pattern Due to Environmental Change to Improve the Conservation of Species. Pol. J. Environ. Stud. 2022, 31, 1305–1316. [Google Scholar] [CrossRef]
  77. Minaee, S.; Minaei, M.; Abdolrashidi, A. Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network. Sensors 2021, 21, 3046. [Google Scholar] [CrossRef]
  78. Shin, M.; Kim, M.; Kwon, D.-S. Baseline CNN structure analysis for facial expression recognition. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016. [Google Scholar]
  79. Wang, X.; Wang, X.; Ni, Y. Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks. Comput. Intell. Neurosci. 2018, 2018, 7208794. [Google Scholar] [CrossRef] [PubMed]
  80. Solari, A.; Chen, R.; Tong, Y. Local Dominant Directional Symmetrical Coding Patterns for Facial Expression Recognition. Comput. Intell. Neurosci. 2019, 2019, 3587036. [Google Scholar]
  81. Happy, S.L.; Routray, A. Automatic facial expression recognition using features of salient facial patches. IEEE Trans. Affect. Comput. 2015, 6, 1–12. [Google Scholar] [CrossRef] [Green Version]
  82. Li, Y.; Zeng, J.; Shan, S.; Chen, X. Mechanism, Occlusion Aware Facial Expression Recognition Using CNN With Attention. IEEE Trans. Image Process. 2018, 28, 2439–2450. [Google Scholar] [CrossRef] [PubMed]
  83. Quan, C.; Qian, Y.; Ren, F. Dynamic facial expression recognition based on K-order emotional intensity model. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bali, Indonesia, 5–10 December 2014. [Google Scholar]
  84. Kamarol, S.K.A.; Jaward, M.H.; Kälviäinen, H.; Parkkinen, J.; Parthiban, R. Joint facial expression recognition and intensity estimation based on weighted votes of image sequences. Pattern Recognit. Lett. 2017, 92, 25–32. [Google Scholar] [CrossRef]
  85. Walecki, R.; Rudovic, O.O.; Pavlovic, V.; Pantic, M. Variable-state latent conditional random fields for facial expression recognition and action unit detection. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015. [Google Scholar]
  86. Wu, H. Real Time Facial Expression Recognition for Online Lecture. Next-Gener. Wirel. Netw. (NGWN) Auton. Intell. Commun. 2022, 2022, 9684264. [Google Scholar] [CrossRef]
  87. Magdin, M. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva. Int. J. Interact. Multimed. Artif. Intell. 2021, 2021, 7–15. [Google Scholar] [CrossRef]
  88. Suk, M.; Prabhakaran, B. Real-Time Mobile Facial Expression Recognition System—A Case Study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
Figure 1. Stages of existing FER systems.
Figure 2. Facial features used for automatic facial expression recognition in classroom.
Figure 3. Proposed salient facial features.
Figure 4. Diagram of the proposed FER system.
Figure 5. The proposed automatic facial expression recognition system working in classroom.
Figure 6. Accuracy of proposed FER compared with well-known FER systems.
Figure 7. Accuracy of the proposed system.
Figure 8. Changes in facial expression with respect to time.
Table 1. Existing FER.
| Author | Technique | Dataset |
| Wang et al. [45] | SVM | JAFFE |
| Zhang et al. [46] | Patch based, SVM | JAFFE |
| Poursaberi et al. [47] | GL Wavelet, KNN | JAFFE, CK, MMI |
| Owusu et al. [48] | GF, MFFNN | JAFFE, Yale |
| Dahmane et al. [49] | HOG, SVM | JAFFE |
| Biswas et al. [50] | DCT, SVM | JAFFE, CK |
| Hedge et al. [51] | KLSWFDA, SVM | JAFFE, Yale, FD |
| Kumar et al. [52] | WPLBP | JAFFE, CK+, MMI |
| Siddiqi et al. [53] | SWLDA, HRCF | JAFFE, CK+, MMI, Yale |
| Kim et al. [54] | HOG, SVM | Not mentioned |
| Makhmudkhujaev et al. [55] | Local prominent directional pattern (LPDP) | CK+, MMI, BU-3DFE, ISED, GEMEP-FERA, FACES |
| Liu et al. [56] | Average weighting + CNN | Real time |
| Niu et al. [57] | Feature extraction, SVM | CK+, JAFFE, MMI |
| Liang et al. [58] | Multi-scale action unit (AU)-based network (MSAU-Net) | Self-generated 10,371 images and 1491 video clips |
| Zhang et al. [59] | GAN | Multi-PIE, BU-3DFE, and SFEW |
| Appa et al. [60] | CNN | FER13, CK+, JAFFE |
| Borgalli et al. [61] | CNN | KDEF, RAFD, RAF-DB, SFEW, and AMFED+ |
| Fang et al. [62] | Ghost-CNN | RAF-DB, FER2013 and FERPlus |
| Khattak et al. [63] | CNN | JAFFE, CK, UTKFace |
| Liu et al. [64] | CNN | CK+ |
Table 2. The statistical analysis.
| Dependent Variable | Source | Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared |
| Happy | Contrast | 0.000 | 1 | 0.000 | 0.003 | 0.958 | 0.000 |
| Happy | Error | 59.719 | 511 | 0.117 | | | |
| Sad | Contrast | 0.364 | 1 | 0.364 | 4.483 | 0.035 | 0.009 |
| Sad | Error | 41.511 | 511 | 0.081 | | | |
| Satisfied | Contrast | 3.268 | 1 | 3.268 | 15.370 | 0.000 | 0.029 |
| Satisfied | Error | 108.661 | 511 | 0.213 | | | |
| Dissatisfied | Contrast | 3.310 | 1 | 3.310 | 23.026 | 0.000 | 0.043 |
| Dissatisfied | Error | 73.465 | 511 | 0.144 | | | |
| Concentration | Contrast | 0.745 | 1 | 0.745 | 4.176 | 0.042 | 0.008 |
| Concentration | Error | 91.185 | 511 | 0.178 | | | |
| Fear | Contrast | 0.129 | 1 | 0.129 | 3.446 | 0.064 | 0.007 |
| Fear | Error | 19.092 | 511 | 0.037 | | | |
Table 3. Statistical analysis of variable seating position in front rows.
| Dependent Variable | Seating Position Front Row | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound |
| Happy | Yes | 0.127 | 0.023 | 0.078 | 0.167 |
| Sad | Yes | 0.087 | 0.019 | 0.050 | 0.124 |
| Satisfied | Yes | 0.358 | 0.031 | 0.297 | 0.419 |
| Dissatisfied | Yes | 0.122 | 0.025 | 0.077 | 0.177 |
| Concentration | Yes | 0.284 | 0.028 | 0.229 | 0.339 |
| Fear | Yes | 0.022 | 0.012 | −0.003 | 0.046 |
Table 4. Statistical analysis of variable seating position in middle rows.
| Dependent Variable | Seating Position Middle Row | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound |
| Happy | Yes | 0.156 | 0.025 | 0.108 | 0.205 |
| Sad | Yes | 0.057 | 0.021 | 0.017 | 0.098 |
| Satisfied | Yes | 0.365 | 0.034 | 0.298 | 0.431 |
| Dissatisfied | Yes | 0.151 | 0.028 | 0.096 | 0.206 |
| Concentration | Yes | 0.245 | 0.031 | 0.184 | 0.305 |
| Fear | Yes | 0.026 | 0.014 | −0.001 | 0.053 |
Table 5. Statistical analysis of variable seating position in last rows.
| Dependent Variable | Seating Position Last Row | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound |
| Happy | Yes | 0.121 | 0.036 | 0.050 | 0.191 |
| Sad | Yes | 0.165 | 0.030 | 0.106 | 0.223 |
| Satisfied | Yes | 0.143 | 0.048 | 0.048 | 0.238 |
| Dissatisfied | Yes | 0.396 | 0.039 | 0.318 | 0.473 |
| Concentration | Yes | 0.099 | 0.044 | 0.012 | 0.186 |
| Fear | Yes | 0.099 | 0.020 | 0.060 | 0.137 |
Table 6. Statistical analysis of variable lecture time.
| Lecture Time | Statistic | Happy | Sad | Satisfied | Dissatisfied | Concentration | Fear |
| 15 min | Mean | 0.11 | 0.04 | 0.38 | 0.07 | 0.37 | 0.04 |
| 15 min | N | 105 | 105 | 105 | 105 | 105 | 105 |
| 15 min | Std. Deviation | 0.320 | 0.192 | 0.488 | 0.251 | 0.486 | 0.192 |
| 30 min | Mean | 0.16 | 0.06 | 0.43 | 0.04 | 0.31 | 0.01 |
| 30 min | N | 221 | 221 | 214 | 221 | 221 | 221 |
| 30 min | Std. Deviation | 0.370 | 0.236 | 0.496 | 0.187 | 0.464 | 0.095 |
| 45 min | Mean | 0.13 | 0.19 | 0.15 | 0.39 | 0.09 | 0.07 |
| 45 min | N | 180 | 180 | 179 | 180 | 179 | 180 |
| 45 min | Std. Deviation | 0.335 | 0.363 | 0.379 | 0.490 | 0.286 | 0.260 |
Table 7. Statistical analysis of dissatisfied facial expression against the variables “difficulty of subject” and “lecture time”.
| Facial Expression | Difficulty of Subject | Lecture Time | Mean |
| Dissatisfied | Theory of automata | 15 min | 0.78 |
| Dissatisfied | Theory of automata | 30 min | 0.55 |
| Dissatisfied | Theory of automata | 45 min | 0.51 |
Table 8. Comparison of proposed FER with commercial FER using the FER-2013 dataset.
| Facial Emotions | Emotient | Affectiva | MorphCast | Proposed System |
| Anger | 0.62 | 0.33 | 0.30 | 0.35 |
| Disgust | 0.65 | 0.40 | 0.49 | 0.76 |
| Fear | 0.40 | 0.24 | 0.36 | 0.44 |
| Happy | 0.75 | 0.75 | 0.76 | 0.90 |
| Sad | 0.70 | 0.38 | 0.54 | 0.89 |
| Surprise | 0.81 | 0.57 | 0.69 | 0.91 |
| Concentration | - | - | - | 0.83 |
| Dissatisfied | - | - | - | 0.57 |
| Satisfied | - | - | - | 0.81 |
Table 9. Comparison of proposed FER with existing FER using well-known datasets.
| Model | FER13 | JAFFE | CK+ |
| Deep-Emotion [77] | - | - | - |
| CNN [78] | 58.96% | - | - |
| LBP+ORB features [57] | - | 88.50% | - |
| Domain Adaptation [79] | 63.50% | - | - |
| Local Dominant Pattern [80] | - | 87.60% | - |
| Salient Patches [81] | - | 91.80% | - |
| Occlusion-aware FER [82] | - | - | 97.03% |
| K-Mean Cluster [83] | - | - | 88.32% |
| Weighted Vote [84] | - | - | 82.3% |
| VLS-CRF [85] | - | - | 96.7% |
| Proposed FER | 87.05% | 99% | 98.79% |
Table 10. Comparison of proposed FER with existing real-time FER.
| Proposed Facial Emotions | Accuracy | FER [86] | Accuracy | FER [87] | Accuracy | FER [88] | Accuracy |
| Concentration | 87.28% | Happy | 80% | Happy | 84.2% | Fear | 72% |
| Dissatisfied | | Surprise | | Wink | | Anger | |
| Satisfied | | Neutral | | Normal | | Happy | |
| Fear | | Enlightened | | Sad | | Sad | |
| Happy | | Confuse | | Sleepy | | Disgust | |
| Sad | | Boredom | | Surprise | | Surprise | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
