Article

Augmented Reality for Autistic Children to Enhance Their Understanding of Facial Expressions

1 Department of Autonomous Systems, Faculty of Artificial Intelligence, Al-Balqa Applied University, Al-Salt 19117, Jordan
2 Department of Information Science, College of Computer and Information Systems, Umm Al Qura University, P.O. Box 715, Makkah 21961, Saudi Arabia
3 ICAM (Interdisciplinary Centre for the Artificial Mind), FSD, Bond University, Robina, QLD 4226, Australia
4 Department of Computer Information Systems, King Abdullah II School of Information Technology, The University of Jordan, Amman 11942, Jordan
5 Department of Software Engineering, Information Technology College, Al-Hussein Bin Talal University, Ma’an 71111, Jordan
6 School of Computing and Mathematics, Charles Sturt University, Wagga Wagga, NSW 2650, Australia
7 College of Computing, Fahad Bin Sultan University, Tabuk 47721, Saudi Arabia
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2021, 5(8), 48; https://doi.org/10.3390/mti5080048
Submission received: 27 June 2021 / Revised: 4 August 2021 / Accepted: 9 August 2021 / Published: 23 August 2021
(This article belongs to the Special Issue Theoretical and Pedagogical Perspectives on Augmented Reality)

Abstract

Difficulty in understanding the feelings and behavior of other people is considered one of the main symptoms of autism. Computer technology, especially augmented reality (AR), has increasingly been used in interventions for Autism Spectrum Disorder (ASD), either to treat or to alleviate ASD symptomatology. AR is an engaging type of technology that helps children interact easily, understand, and remember information, and it is not limited to one age group or level of education. This study used AR to display faces with the six basic facial expressions—happiness, sadness, surprise, fear, disgust, and anger—to help children recognize facial features and associate facial expressions with the corresponding emotional state. Most importantly, children can interact with the system in a friendly and safe way. Our results showed that the system enhanced social interactions, talking, and facial expressions for both autistic and typically developing children. AR might therefore have a significant future role in addressing the therapeutic needs of children with ASD. This paper presents evidence for the feasibility of one such specialized AR system.

1. Introduction

Autism is characterized by difficulties in social and emotional communication and by repetitive and stereotypical behavior [1,2]. The inability to recognize the feelings of an interaction partner clearly reduces the degree of social interaction [3]. Griffiths et al. found that both children and adolescents with autism show impaired recognition of emotions from facial expressions, at both high and more subtle expression intensities [4,5].
The engaging, entertainment-oriented technology of augmented reality (AR) might play an important role in treating autistic children [6,7,8]: it can target specific symptoms of autism because it creates controlled environments that reduce the anxiety generated by real social situations [9]. Many of the technologies currently used in the treatment and education of autistic children show how smart environments can positively impact current therapeutic practice [10].
Previous studies have explored many applications of technology-based interventions with autistic children. The significance of such studies lies in providing support for the effectiveness of these interventions by examining the following themes: (1) tactile and auditory prompting devices (e.g., [11]), (2) video-based instruction and feedback (e.g., [12]), (3) computer-aided instruction (e.g., [13]), and (4) robotics (e.g., [14]). A few researchers have also categorized the research on human–robot interaction (HRI) and human–computer interaction (HCI) [15,16].
The current study aims to investigate the recognition of facial expressions in children by comparing two groups: one containing participants with ASD and the other containing typically developing (TD) control participants. It also aims to design an AR system that enhances facial expression recognition for both groups. The study is distinguished from earlier work in that the AR system was trialled over two sessions, and its effectiveness and ease of use were demonstrated by the results achieved and by feedback from the caregivers.
Based on previous research [17,18,19], a moderate to large difference in performance was anticipated between the children with autism and the children without autism. To achieve a sufficient level of statistical power (80%) under the assumption of a 0.65 effect size difference in performance scores, at least 12 participants per group were required (i.e., at least 24 participants in total). Statistical power is the probability of correctly rejecting a false null hypothesis; for example, if an experiment has high power but H0 is retained, there is only a small chance that a Type II error has occurred.
In terms of effect size, strong phenomena are easier to detect than weak ones. For instance, if exercise has a dramatic effect on mood, an experiment testing this hypothesis will have high power because the effect will be easy to find; if the effect of exercise is real but tiny, power will be low, and the experiment is unlikely to yield statistically significant results. In addition, because the prevalence of autism in children is less than 3%, and because previous research [17,18,19] used statistical analyses that yielded sample sizes of 12–15, we conducted the same analysis in the preparation stage and obtained the same result.
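For reference, the per-group sample size for a two-group comparison at significance level α and power 1 − β follows the standard relation below; this is the textbook formula rather than the authors’ own computation, and the resulting n is highly sensitive to the assumed effect size d and study design:

\[
n = \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{d^{2}},
\]

where z denotes the standard-normal quantile (for a two-sided α = 0.05, \(z_{1-\alpha/2} \approx 1.96\); for 80% power, \(z_{1-\beta} \approx 0.84\)) and d is the standardized effect size.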
AR is defined as a 3D technology that combines the physical and digital worlds in real time [20]. AR intervention presents a chance to promote social communication in Autism Spectrum Disorder (ASD), and this paper provides preliminary evidence of the accessibility and feasibility of AR technology [21]. We introduce an AR application for both autistic and typically developing (TD) children to enhance their understanding of facial expressions. We review some of the related literature and then describe the method, including the participants’ characteristics, the application, and the experimental design and procedures. We then present and discuss the results before concluding the study.

2. Goals of AR Systems for Autistic Children

A number of attributes of AR make it useful in the treatment of children with autism. Multiple tasks can be performed in a single session using this technology, and the child learns certain skills and behavioral patterns through games and exercises. AR can also be used to reinforce a desired response to a certain task or situation. One or more of the following aspects are targeted when designing AR systems for intervention with autistic patients:
  • Imagination: The aim is to create a graphical environment that reproduces the imagination that would normally exist in the child’s mind. Children express imagination through play [22]; however, autistic children face a number of difficulties in this regard [22,23], and their actions often seem purposeless and repetitive [23]. Despite this, these children have a desire to play and be accepted by their peers [24], and how their peers react to them significantly affects the level of loneliness they experience [23].
  • Attention: Autistic children tend to focus on certain objects and exclude others within their environment. Although this is not diagnostic of autism, it is one of the first symptoms of this condition [25]. Eye motion records have shown promising potential in the identification and addressing of autism [26]. Computer-based technology can be of use in the enhancement of concentration, and on occasion, it can lead to increased learning compared with traditional educational methods [27].
  • Social skills: Social skills are behaviors that, in given situations, predict significant social outcomes for children and youths [28]. Meeting the requirements of people with social skill deficits through AR is still challenging. Previous studies have focused on behaviors such as making eye contact and exchanging hugs, and some have demonstrated promising potential in helping children with autism to acquire social skills and develop a more natural social attitude, allowing for better interaction with their peers. Examples of these interventions include adult- and peer-mediated interventions, peer modeling and initiation by the autistic child, class-wide teaching and interaction, and the use of scripts [29].
  • Emotion: In this aspect, AR serves as a method that allows for flexible adaptation. The aim is for the individual to be able to rapidly produce suitable reactions, with pauses for re-evaluation and deliberate contact that help improve the response [30]. Children with autism struggle to recognize the emotions experienced by neurotypical people, such as happiness, sadness, surprise, anger, disgust, and fear [31,32]. The role of AR in this area is to enable children with autism to recognize these emotions [32], as it has previously been demonstrated that, after training, they can identify these emotions in others as well as in themselves (e.g., [32]).
  • Navigation skills: Humans use navigation skills to move from one location to another. Having these skills improves the quality of life and happiness of autistic children [33]; for example, they enable them to use public transport when needed. Mobile devices with navigation applications can serve this purpose: they are widely available, have high rates of acceptance among people with disabilities, and can serve multiple purposes in addition to navigation [34].

3. Related Work

Videos and photos have been used to enhance the understanding of facial emotions and communication skills in people with ASD [35]. These allow autistic children to observe an event, produce the targeted reaction in response to that suggestive event, and have the communication partner provide the outcome to the child. Newer systems have used AR instead. For example, AR has been designed as a gamebook [36] to help children with ASD recognize and acquire emotions by attracting their attention and motivation, as well as improving their performance in this impaired area. The story consists of five scenarios, and the user engages in real-world situations involving imaginative content linked with feelings.
AR has been employed for various purposes [37] and has been shown to help autistic people [38]. For example, McMahon et al. compared the impact of three navigation aids: paper maps, Google Maps on a smartphone, and an AR navigation application. The independent-navigation checks of the participants with autism improved to a mean of 95%, and the autistic participants navigated with 100% independence throughout the last three AR sessions [34].
The significance of [39] lies in estimating the efficiency of an AR training program, based on the visual sense of autistic students, for enhancing their social skills. The investigation employed a quantitative approach with a quasi-experimental method and a pre-test/post-test design with a control group. The participants (10 males and 1 female) were divided into a control group (n = 5) and an experimental group (n = 6) using a non-probabilistic purposive sampling method. The experimental group worked through various activities using AR, such as being a football player who has to score a goal or playing with an animal, whereas the control group received a similar intervention without the tool; for example, they had to catch objects following the therapists’ instructions. The intervention lasted 15 min, twice per week, for 20 weeks. The data collection tool was Riviere’s Autistic Spectrum Inventory, and the Quicker Vision application was employed as the AR-based intervention method. The findings showed no statistically significant differences between the two groups, despite minor improvements in some items, such as resiliency and limitation. The overall aim was to reveal the positive effect of AR for all children, both neurotypical and autistic.
In [40], researchers examined the ability of participants with autism to recognize facial expressions by displaying a face, asking participants to imitate the displayed expression, and comparing them with control participants. Participants with autism imitated the facial expressions when asked to do so, but their imitations were slower and less accurate than those of neurotypical individuals; this is consistent with our finding that participants with autism are less accurate at recognizing facial expressions. In [41], children were divided into two groups, with and without autism, and their facial expressions were recorded while they watched videos. The children with autism showed largely neutral facial expressions, while the children in the other group visibly reacted. A recent review [42] surveyed several studies on the relationship between autism and the ability to understand and perceive facial expressions, confirming the difficulties in this population; it also reviewed intervention proposals for teaching and improving the understanding of these expressions using images and pictures, which is the approach taken, and shown to be effective, in our study.
Similarly, in [43], six facial gestures expressing basic feelings were incorporated using AR-based 3D facial modeling, with the feelings integrated into scenarios that the children watched. The mean correct-estimation rates for the three groups of facial expressions improved after training, and in the monitoring phase the children retained the emotional expression and social skills they had acquired during the intervention stage.

4. Method

4.1. Participants

The application was tested on 30 children in total, aged from 6 to 9 years, comparing 15 typical children with 15 children of the same age who had a specialist clinical diagnosis of ASD according to the DSM-5 criteria and who did not suffer from any other organic diseases. Caregivers accompanied the participants to the intervention sessions. Table 1 displays the children’s characteristics before participating in the study. Signed parental consent forms were obtained, and all participants agreed to engage in this study after being asked through their teachers. The experiment was conducted under ethical approval number ETH18-2710, granted by the University of Technology Sydney (Sydney, New South Wales, Australia); all study staff were professionals, trained children’s teachers assisted, and all of this is explained in the ethical approval consent. The two groups were largely equivalent: both spoke the same language (Arabic), their IQ levels are shown in Table 1, their ages were relatively similar, the distribution of males and females was also relatively similar, and no child suffered from other medical problems such as vision or hearing impairments. The groups were matched on mental age (p > 0.05). We also tried to balance the genders across the two groups, with 7 females in the TD group and 8 females among the ASD participants, but IQ was not balanced between the groups (p < 0.001).

4.2. The Application

The AR application was designed using Unity, a cross-platform development tool initially created for game development but now employed in various domains, such as education, entertainment, the military, medicine, architecture, art, information management, children’s apps, simulations, marketing, physical installations, training, and so forth [44].
The application included three data sets of basic facial expressions: the first was a set of graphical facial expressions (shown in Figure 1), the second was a set of real facial expressions of the same person (shown in Figure 2), and the third was a set of real facial expressions of a different person (shown in Figure 3). A facial expression appeared, and the child then selected the right expression name from several different options in real time, with their caregiver’s guidance (visual and pointing guidance). In this system scenario, each participant and coach interacted with the system through graphical user interfaces.
The app displayed the facial expression dataset within the same environment as the child, to achieve greater interaction than simply viewing it on a display screen, and the child chose the appropriate facial expression image using the joint place technique, which is explained in detail in our earlier paper [7]. This made selection easy and fast, as it required only proximity to the image rather than touching, as in traditional systems.
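As a rough illustration of the proximity-based idea behind the joint place technique, the following Unity (C#) sketch treats an expression card as selected when a tracked hand stays within a threshold distance of it for a short dwell time; the class name, field names, and threshold values are illustrative assumptions, not the actual implementation from [7].

```csharp
using UnityEngine;

// Hedged sketch of proximity-based ("joint place") selection: an expression
// card counts as chosen when the tracked hand remains near it briefly.
// All names and threshold values are assumptions for illustration.
public class ProximitySelectable : MonoBehaviour
{
    public Transform hand;                // object that follows the tracked hand joint
    public float selectionRadius = 0.25f; // assumed selection threshold, in metres
    public float dwellTime = 1.0f;        // assumed time the hand must stay near, in seconds

    private float dwell;                  // how long the hand has been within range

    void Update()
    {
        float distance = Vector3.Distance(hand.position, transform.position);
        dwell = distance < selectionRadius ? dwell + Time.deltaTime : 0f;

        if (dwell >= dwellTime)
        {
            dwell = 0f;
            // In the real application this would notify the assessment logic.
            Debug.Log($"Selected expression card: {name}");
        }
    }
}
```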
The system worked efficiently when all of its software and hardware requirements were met. The hardware used was the Kinect, a Microsoft motion-sensing input device that can be employed on different platforms, including Windows, Xbox 360, and Xbox One [45]. Kinect detects human motions and gestures and provides interaction with software systems or games without a controller. The Kinect for Xbox One, released in 2013, was used throughout this project; at the time of the experiment, it was the latest sensor in the Kinect series. The Kinect resembles a horizontal-bar webcam and is composed of three main parts: an RGB camera, a depth sensor, and a multi-array microphone. The AR application was built in Unity, which hides much of the underlying complexity, so designers and developers can focus on their systems. This complexity includes creating and designing graphics and handling the interaction between virtual and physical objects, which is managed in Unity in two stages: the Unity editor and code, specifically C#. The Kinect sensor was located at a height of 1.2 m above the floor and 2.7 m away from the child, and the user’s body faced the sensor during the experiment to prevent the arm joints from intersecting with the body joints.
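For concreteness, a minimal sketch of reading a hand joint from the Kinect inside Unity is shown below, assuming Microsoft’s Kinect for Windows v2 Unity plug-in (the Windows.Kinect namespace); it is an illustrative outline under those assumptions, not the authors’ code.

```csharp
using UnityEngine;
using Windows.Kinect; // Kinect for Windows v2 Unity plug-in (assumed)

// Minimal sketch: polls the Kinect body stream each frame and exposes the
// first tracked body's right-hand position in camera space (metres).
public class HandTracker : MonoBehaviour
{
    private KinectSensor sensor;
    private BodyFrameReader reader;
    private Body[] bodies;

    public Vector3 RightHandPosition { get; private set; }

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        if (sensor == null) return;
        reader = sensor.BodyFrameSource.OpenReader();
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        if (!sensor.IsOpen) sensor.Open();
    }

    void Update()
    {
        if (reader == null) return;
        using (BodyFrame frame = reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);
            foreach (Body body in bodies)
            {
                if (body == null || !body.IsTracked) continue;
                CameraSpacePoint p = body.Joints[JointType.HandRight].Position;
                RightHandPosition = new Vector3(p.X, p.Y, p.Z);
                break; // a single child faces the sensor in this setup
            }
        }
    }

    void OnApplicationQuit()
    {
        if (reader != null) reader.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```

An object of this kind could supply the hand position consumed by the proximity-selection sketch above, which is consistent with the setup described here: the child faces the sensor so that the arm joints are not confused with the body joints.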

4.3. Experimental Design

The experiment was conducted over 2 days. All children—those with ASD and the typical children—had individual coaching sessions in two parts: assessment and training.
The training session began with a preliminary guidance period that enabled both the children and the caregivers to become familiar with the system and to test the Facial Expressions Training Application (FETA). Once familiar with the FETA, the child and the caregiver had a separate period in which to learn how to play the facial expression game. Each participant, caregiver, and staff member could take a break and pause the game at any time for any reason, such as bathroom or food breaks, a behavioral change, or a loss of tolerance for the application. Throughout the intervention, additional external audio and video observation was in place. The procedure of the experiment is illustrated in Figure 4.
From the first image collection, the child was asked to identify each image and its expression. For example, the caregiver read the statement from the screen and asked the child where the happy face was; the child pointed to his or her answer, and if it was correct, the caregiver put a (✔) on the assessment form shown in Figure 5, making the initial assessment.
At this stage of the study, we had two goals. The first was to measure the degree to which the children in the two groups could distinguish between the six facial expressions: the caregiver put a (✔) if the child answered correctly (as evaluated by the system) and a (o) if he or she could not. The second goal was to evaluate our system: the caregiver observed whether the system’s reaction (evaluation) matched the child’s actual answer, to guard against the system assessing a correct answer as wrong or a wrong answer as correct. Our system had an accuracy of 100%, consistent with our previous study [7], which used the same techniques. The child was then trained on these images. From the second image collection, the child was asked to identify the expressions for a second assessment, and the same procedure was repeated with the third image collection for the final assessment.
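To make this dual bookkeeping concrete, one trial could be logged with a record like the hypothetical C# sketch below, which keeps the caregiver’s mark and the system’s automatic evaluation side by side so the two can be cross-checked; the type and field names are our illustrative assumptions, not part of the described application.

```csharp
// Hypothetical record of a single trial. Storing both judgments lets the
// system's accuracy be computed as its agreement rate with the caregiver.
public class TrialRecord
{
    public int ChildId;                  // participant identifier
    public int ImageCollection;          // 1, 2, or 3
    public string TargetExpression;      // e.g., "happiness"
    public bool CaregiverMarkedCorrect;  // caregiver's mark: true = (✔), false = (o)
    public bool SystemJudgedCorrect;     // the application's automatic evaluation

    // True when the system's evaluation matches the caregiver's judgment.
    public bool SystemAgreesWithCaregiver =>
        CaregiverMarkedCorrect == SystemJudgedCorrect;
}
```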
In the training phase, the trainer taught the children to understand the expressions and faces shown in the three image collections. The caregivers instructed the children on how to play with the system and how to perceive its cues, ensuring that they felt comfortable using the AR technology and enabling self-training. The instruction time was 15–20 min. The six basic emotions were used as the measurement in this study, and the practical content depicted in the scenes was designed to train both the TD children and the children with ASD. The system was evaluated by the teacher filling in the assessment form for each emotion used (shown in Figure 5).

5. Results

This section presents the results for both the TD and ASD groups after the children and caregivers had finished the training and assessment sessions. These sessions were completed without any side effects being observed by the study staff or reported by the participants or caregivers.

5.1. Typical Children

In this section, we present the results for the TD children. The caregivers filled out the following tables after the system intervention to assess both the children’s reactions and the quality and functionality of the application. The application was highly user-friendly, and the caregivers believed that all the children were entertained by and enjoyed the system. Table 2 and Figure 6 display the results for each session on the first day for each image collection, and Table 3 and Figure 7 display the results for each session on the second day.
As shown in Table 2 and Figure 6, the maximum number of correct answers in the first image collection was 5, achieved by children 5, 7, and 12, while the minimum was 3, for children 3 and 8. Overall, the average number of correct answers in the first image collection for the TD children was 4.1. In the second image collection, children 7, 8, and 13 achieved the highest number of correct answers (5), whereas children 2, 3, 6, and 11 achieved the lowest (3); the average score for the second image collection was 3.9. Interestingly, the minimum number of correct answers in the third image collection for the TD children was 4, and the average score across all the children was 4.5.
As illustrated in Table 4 and Figure 8, the correct answers provided by the children for the six facial expressions—happiness, sadness, surprise, fear, disgust, and anger—differed for each image collection. The findings revealed that the expression most often answered correctly in the first, second, and third image collections was happiness. The fewest correct answers were for disgust in the first image collection and for disgust and anger in the second. This shows that the majority of the children were able to identify the facial expressions for happiness and sadness.
Table 5 shows the caregivers’ assessments of the application for the TD children. Notably, the first caregiver gave the first criterion a relatively low score, which indicates that not all of the children were completely engaged with the application; both caregivers rated the levels of enjoyment, safety, and ease of use highly. Table 4, in turn, shows the TD children’s performance in distinguishing between the six facial expressions: the main facial expressions (happiness and sadness) had high recognition results, and the lowest results were for disgust and anger. Overall, Table 5 illustrates the caregivers’ evaluation of the AR system and shows its advantages and ease of use.

5.2. Autistic Children

The same procedures applied to the typical children group were reapplied to the autistic children group. The results are displayed in Table 6, Table 7, Table 8 and Table 9 and Figure 9, Figure 10 and Figure 11.
These results show a slight improvement in the children’s ability to interact with the three face collections on the second day. As illustrated in Table 7 and Figure 10, the average number of correct answers on the second day was 2.7 for the first image collection and 3.0 for the second, with a small further improvement to 3.2 for the third image collection. Comparing Table 6 with Table 7 reveals a slight decrease of 0.1 in the third image collection average from the first day to the second (from 3.3 to 3.2).
As indicated in Table 8 and Figure 11, the correct answers provided by the children for the six facial expressions differed for each image collection. Notably, the expression most often answered correctly in the second image collection was happiness, while the fewest correct answers in the first image collection were for disgust and anger. These results show that the majority of the children were able to identify the happiness facial expression easily across the first, second, and third image collections, but they struggled to identify the anger facial expression.
Table 9 shows the caregivers’ assessments of the application for the autistic children. The first caregiver gave the second criterion the lowest score, which indicates that not all of the children found the application completely tolerable; however, the second caregiver rated the level of engagement with the application and the ease of use highly.
Table 6 and Figure 9 display the results for each session on the first day for each image collection, and Table 7 displays the results for each session on the second day. Additionally, Table 8 and Figure 11 show the children’s performance in distinguishing between the six facial expressions: the main facial expressions—happiness and sadness—had the highest recognition results, with the lowest results being for disgust and anger. Finally, Table 9 shows the caregivers’ evaluation of the AR system and illustrates its advantages and ease of use.
Figure 12 highlights several important points: the overall performance of the children on both days, the lower performance of the autistic children, the efficiency of the system in improving performance on the second day compared with the first for both groups, and the improvement in performance for each image collection separately.

6. Discussion

This study compared two groups (the first comprising participants with autism and the second comprising non-autistic participants) in the recognition of facial expressions. The results showed a clear difference between the two groups. The study is also distinguished from other work in that the designed AR system improved facial expression recognition for both groups over the two sessions conducted, and the results and caregiver feedback demonstrated the system’s effectiveness and ease of use.
The results show that the neurotypical children’s ability to recognize facial expressions was better than that of the autistic children, which agrees with most studies [40,41]; researchers consider the failure to recognize emotions one of the most common symptoms of autism, as many people with autism are unable to recognize the mental state of a neurotypical interaction partner. The proposed system was designed with AR after consulting specialists in the field of childcare, whose many notes and instructions helped make the system attractive and easy to use; this is consistent with [42], which recommends teaching children facial expressions through images and pictures.
AR intervention presents a chance to enhance social interaction in both children with ASD and typical children by developing their comprehension of facial gestures, and this paper provides introductory evidence of both the usability and quality of AR technology. Initially, the children could not easily recognize the six basic facial expressions represented in the system, particularly fear and disgust. After the system’s intervention, the children’s ability to recognize the expressions improved, along with their social skills and their ability to distinguish between basic emotional facial expressions.
Our AR application offers some distinct advantages over traditional teaching. It enables children to interact and learn in an exciting, safe, and engaging hands-free manner, leaving their hands free for non-verbal interaction and for educational tasks. Such simple mobile technology allows users to train themselves in the privacy of their own homes, whenever and wherever it is appropriate, and it can be instantly updated to meet future requirements.

7. Conclusions

This study used AR to display faces with the six basic expressions—happiness, sadness, surprise, fear, disgust, and anger—to help children recognize facial features and associate facial expressions with the corresponding emotional state. Most importantly, the children interacted with the system in a friendly and safe manner. The system also enhanced social interactions, talking, and facial expressions for both the autistic and typical children. AR might therefore play a significant role in addressing the therapeutic needs of children with ASD, and this paper presents evidence for the feasibility of a specialized AR system as a treatment method.
Moreover, the ASD community faces significant difficulty in receiving efficient and timely therapeutic intervention. AR may therefore be considered highly efficient, since it can deliver visual and auditory signals while the user is simultaneously engaged in spontaneous and structured social communication. This paper highlights the need for additional research into the use of AR technology as a therapeutic device for people with ASD. In the reported findings, the AR system displayed high quality, ease of use, effectiveness, usability, and tolerability. Generally speaking, the findings are encouraging, but they should be regarded within the context of the identified constraints. The major limitation of our study is the small sample size; therefore, the results should be replicated with a larger sample to support our methodology.

Author Contributions

Conceptualization, M.W., A.A.-J. and J.F.; methodology, A.A.-J.; software, M.W. and R.A.; validation, S.F.M.A., I.G. and O.E.; formal analysis, M.W. and A.A.-J.; investigation, O.E.; resources, M.W. and A.A.-J.; data curation, S.F.M.A. and R.A.; writing—original draft preparation, I.G.; writing—review and editing, I.G., J.F. and A.A.-J.; visualization, M.W., J.F.; supervision, A.A.-J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This report was conducted as part of the “Implementation of an AR Game to Track Upper Limb Movement in Autistic Children” project (number ETH18-2710), approved by the University of Technology Sydney (Sydney, New South Wales, Australia). The children’s involvement in the study was discussed with their legal guardians, whose approval was obtained. The guardians were told that they could withdraw their approval at any time and for any reason.

Informed Consent Statement

Informed consent was obtained from all subjects and from their legal guardians involved in the study.

Data Availability Statement

All gathered data are reported in the Results section of this paper and are available on request; we plan to make them available online soon.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baumer, N.; Spence, S.J. Evaluation and management of the child with autism spectrum disorder. Contin. Lifelong Learn. Neurol. 2018, 24, 248–275. [Google Scholar] [CrossRef] [PubMed]
  2. Wedyan, M.; Al-Jumaily, A. Early diagnosis autism based on upper limb motor coordination in high risk subjects for autism. In Proceedings of the 2016 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Tokyo, Japan, 17–20 December 2016; pp. 13–18. [Google Scholar]
  3. Brewer, R.; Biotti, F.; Catmur, C.; Press, C.; Happé, F.; Cook, R.; Bird, G. Can neurotypical individuals read autistic facial expressions? Atypical production of emotional facial expressions in autism spectrum disorders. Autism Res. 2016, 9, 262–271. [Google Scholar] [CrossRef] [Green Version]
  4. Griffiths, S.; Jarrold, C.; Penton-Voak, I.S.; Woods, A.T.; Skinner, A.L.; Munafò, M.R. Impaired recognition of basic emotions from facial expressions in young people with autism spectrum disorder: Assessing the importance of expression intensity. J. Autism Dev. Disord. 2019, 49, 2768–2778. [Google Scholar] [CrossRef] [Green Version]
  5. Wedyan, M.; Al-Jumaily, A.; Crippa, A. Early Diagnose of Autism Spectrum Disorder Using Machine Learning Based on Simple Upper Limb Movements. In Proceedings of the International Conference on Hybrid Intelligent Systems, Porto, Portugal, 13–15 December 2018; pp. 491–500. [Google Scholar]
  6. Munson, J.; Pasqual, P. Using technology in autism research: The promise and the perils. Computer 2012, 45, 89–91. [Google Scholar] [CrossRef]
  7. Wedyan, M.; Al-Jumaily, A.; Dorgham, O. The use of augmented reality in the diagnosis and treatment of autistic children: A review and a new system. Multimed. Tools Appl. 2020, 79, 18245–18291. [Google Scholar] [CrossRef]
  8. Wedyan, M. Augmented Reality and Novel Virtual Sample Generation Algorithm Based Autism Diagnosis System. Ph.D. Thesis, University of Technology Sydney, Sydney, Australia, 2020. [Google Scholar]
  9. Aresti-Bartolome, N.; Garcia-Zapirain, B. Technologies as support tools for persons with autistic spectrum disorder: A systematic review. Int. J. Environ. Res. Public Health 2014, 11, 7767–7802. [Google Scholar] [CrossRef]
  10. Tentori, M.; Escobedo, L.; Balderas, G. A smart environment for children with autism. IEEE Pervasive Comput. 2015, 14, 42–50. [Google Scholar] [CrossRef]
  11. Mason, R.A.; Gregori, E.; Wills, H.P.; Kamps, D.; Huffman, J. Covert Audio Coaching to Increase Question Asking by Female College Students with Autism: Proof of Concept. J. Dev. Phys. Disabil. 2020, 32, 75–91. [Google Scholar] [CrossRef]
  12. Plavnick, J.B.; Ingersoll, B. Video-based group instruction for adolescents with autism spectrum disorders: A case of intervention development. In International Review of Research in Developmental Disabilities; Elsevier: Amsterdam, The Netherlands, 2017; Volume 52, pp. 109–139. [Google Scholar]
  13. Bai, Z.; Blackwell, A.F.; Coulouris, G. Using Augmented Reality to Elicit Pretend Play for Children with Autism. Vis. Comput. Graph. IEEE Trans. 2015, 21, 598–610. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [Green Version]
  15. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  16. Mythili, M.; Shanavas, A.M. A study on Autism spectrum disorders using classification techniques. Ijcsit 2014, 5, 7288–7291. [Google Scholar]
  17. Crippa, A.; Salvatore, C.; Perego, P.; Forti, S.; Nobile, M.; Molteni, M.; Castiglioni, I. Use of machine learning to identify children with autism and their motor abnormalities. J. Autism Dev. Disord. 2015, 45, 2146–2156. [Google Scholar] [CrossRef]
  18. Taffoni, F.; Focaroli, V.; Keller, F.; Iverson, J.M. A technological approach to studying motor planning ability in children at high risk for ASD. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3638–3641. [Google Scholar]
  19. Li, K.-H.; Lou, S.-J.; Tsai, H.-Y.; Shih, R.-C. The Effects of Applying Game-Based Learning to Webcam Motion Sensor Games for Autistic Students’ Sensory Integration Training. Turk. Online J. Educ. Technol.-TOJET 2012, 11, 451–459. [Google Scholar]
  20. Ibáñez, M.-B.; Delgado-Kloos, C. Augmented reality for STEM learning: A systematic review. Comput. Educ. 2018, 123, 109–123. [Google Scholar] [CrossRef]
  21. Liu, R.; Salisbury, J.P.; Vahabzadeh, A.; Sahin, N.T. Feasibility of an autism-focused augmented reality smartglasses system for social communication and behavioral coaching. Front. Pediatr. 2017, 5, 145. [Google Scholar] [CrossRef] [Green Version]
  22. Baron-Cohen, S. Theory of mind in normal development and autism. Prisme 2001, 34, 74–183. [Google Scholar]
  23. Wolfberg, P.; Bottema-Beutel, K.; DeWitt, M. Including children with autism in social and imaginary play with typical peers: Integrated play groups model. Am. J. Play 2012, 5, 55–80. [Google Scholar]
  24. Boucher, J.; Wolfberg, P. Editorial: Aims and design of the special issue. Autism 2003, 7, 339–346. [Google Scholar] [CrossRef]
  25. Pashler, H.E.; Sutherland, S. The Psychology of Attention; MIT Press: Cambridge, MA, USA, 1998; Volume 15. [Google Scholar]
  26. Ames, C.; Fletcher-Watson, S. A review of methods in the study of attention in autism. Dev. Rev. 2010, 30, 52–73. [Google Scholar] [CrossRef] [Green Version]
  27. Michel, P. The use of technology in the study, diagnosis and treatment of autism. In Final Term Paper for CSC350: Autism and Associated Developmental Disorders; Yale University: New Haven, CT, USA, 2004; pp. 1–26. [Google Scholar]
  28. Gresham, F.M.; Elliott, S.N. The relationship between adaptive behavior and social skills issues in definition and assessment. J. Spec. Educ. 1987, 21, 167–181. [Google Scholar] [CrossRef]
  29. Weiss, M.J.; Harris, S.L. Teaching social skills to people with autism. Behav. Modif. 2001, 25, 785–802. [Google Scholar] [CrossRef]
  30. Scherer, K.R. Toward a dynamic theory of emotion. Geneva Stud. Emot. 1987, 1, 1–96. [Google Scholar]
  31. Ashwin, C.; Chapman, E.; Colle, L.; Baron-Cohen, S. Impaired recognition of negative basic emotions in autism: A test of the amygdala theory. Soc. Neurosci. 2006, 1, 349–363. [Google Scholar] [CrossRef]
  32. Hadwin, J.; Baron-Ohen, S.; Howlin, P.; Hill, K. Can we teach children with autism to understand emotions, belief, or pretence? Dev. Psychopathol. 1996, 8, 345–366. [Google Scholar] [CrossRef]
  33. Karimi, H.A. Introduction to Navigation; Springer Science & Business Media: Pittsburgh, PA, USA, 2011; pp. 1–16. [Google Scholar]
  34. McMahon, D.D.; Smith, C.C.; Cihak, D.F.; Wright, R.; Gibbons, M.M. Effects of Digital Navigation Aids on Adults With Intellectual Disabilities Comparison of Paper Map, Google Maps, and Augmented Reality. J. Spec. Educ. Technol. 2015, 30, 157–165. [Google Scholar] [CrossRef]
  35. Akmanoglu, N. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling. Educ. Sci. Theory Pract. 2015, 15, 519–537. [Google Scholar]
  36. Cunha, P.; Brandão, J.; Vasconcelos, J.; Soares, F.; Carvalho, V. Augmented reality for cognitive and social skills improvement in children with ASD. In Proceedings of the 2016 13th International Conference on Remote Engineering and Virtual Instrumentation (REV), Madrid, Spain, 24–26 February 2016; pp. 334–335. [Google Scholar]
  37. Aung, Y.M.; Al-Jumaily, A. AR based upper limb rehabilitation system. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 213–218. [Google Scholar]
  38. Bhatt, S.; De Leon, N.; Al-Jumaily, A. Augmented reality game therapy for children with autism spectrum disorder. Int. J. Smart Sens. Intell. Syst. 2017, 7, 519–536. [Google Scholar] [CrossRef] [Green Version]
  39. Lorenzo, G.; Gómez-Puerta, M.; Arráez-Vera, G.; Lorenzo-Lledó, A. Preliminary study of augmented reality as an instrument for improvement of social skills in children with autism spectrum disorder. Educ. Inf. Technol. 2019, 24, 181–204. [Google Scholar] [CrossRef]
  40. Drimalla, H.; Baskow, I.; Behnia, B.; Roepke, S.; Dziobek, I. Imitation and recognition of facial emotions in autism: A computer vision approach. Mol. Autism 2021, 12, 27. [Google Scholar] [CrossRef] [PubMed]
  41. Carpenter, K.L.; Hahemi, J.; Campbell, K.; Lippmann, S.J.; Baker, J.P.; Egger, H.L.; Espinosa, S.; Vermeer, S.; Sapiro, G.; Dawson, G. Digital behavioral phenotyping detects atypical pattern of facial expression in toddlers with autism. Autism Res. 2021, 14, 488–499. [Google Scholar] [CrossRef] [PubMed]
  42. Webster, P.J.; Wang, S.; Li, X. Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front. Psychol. 2021, 12, 2540. [Google Scholar] [CrossRef] [PubMed]
  43. Chen, C.-H.; Lee, I.-J.; Lin, L.-Y. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders. Res. Dev. Disabil. 2015, 36, 396–403. [Google Scholar] [CrossRef] [PubMed]
  44. Freire, M.; Serrano-Laguna, Á.; Iglesias, B.M.; Martínez-Ortiz, I.; Moreno-Ger, P.; Fernández-Manjón, B. Game learning analytics: Learning analytics for serious games. In Learning, Design, and Technology: An International Compendium of Theory, Research, Practice, and Policy; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–29. [Google Scholar]
  45. Chang, X.; Ma, Z.; Lin, M.; Yang, Y.; Hauptmann, A.G. Feature interaction augmented sparse learning for fast kinect motion detection. IEEE Trans. Image Process. 2017, 26, 3911–3920. [Google Scholar] [CrossRef] [PubMed]
  46. Sensum, B.A. A Primer on Emotion Science for Our Autonomous Future. Available online: https://medium.com/@ben.bland/a-primer-on-emotion-science-for-our-autonomous-future-5b0369babf2a (accessed on 6 August 2021).
  47. Gudipati, V.K.; Barman, O.R.; Gaffoor, M.; Abuzneid, A. Efficient facial expression recognition using adaboost and haar cascade classifiers. In Proceedings of the 2016 Annual Connecticut Conference on Industrial Electronics, Technology & Automation (CT-IETA), Bridgeport, CT, USA, 14–15 October 2016; pp. 1–4. [Google Scholar]
  48. Uçar, A.; Demir, Y.; Güzeliş, C. A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering. Neural Comput. Appl. 2016, 27, 131–142. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Graphics of the six basic facial expressions (collection one) [46].
Figure 2. Six real basic facial expressions (collection two) [47].
Figure 3. Six real basic facial expressions (collection three) [48].
Figure 4. Experimental design steps.
Figure 5. Assessment form.
Figure 6. Correct answers in the TD children’s interaction with the three face collections on the first day.
Figure 7. TD children’s interaction with the three face collections on the second day.
Figure 8. TD children’s interaction with the three face collections.
Figure 9. Autistic children’s interaction with the three face collections on the first day.
Figure 10. Autistic children’s interaction with the three face collections on the second day.
Figure 11. Autistic children’s interaction with the three face collections.
Figure 12. Comparison of the performance (correct answers) of the two groups on the first and second day.
Table 1. Participant characteristics.
Characteristic | Typical Children | Autistic Children
Number | 15 | 15
Male:Female | 8:7 | 9:6
Age | Mean: 5.5 (SD: 0.27) | Mean: 5.7 (SD: 0.28)
IQ | 95 ± 12.8 | 125 ± 12.9
Table 2. Caregivers’ reports showing the correct answers for the TD children’s interaction with the three face collections on the first day.
Child Number | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Child 1 | 4 | 4 | 5
Child 2 | 4 | 3 | 4
Child 3 | 3 | 3 | 4
Child 4 | 4 | 4 | 5
Child 5 | 5 | 4 | 5
Child 6 | 4 | 3 | 4
Child 7 | 5 | 5 | 5
Child 8 | 3 | 5 | 4
Child 9 | 4 | 4 | 4
Child 10 | 4 | 4 | 5
Child 11 | 4 | 3 | 4
Child 12 | 5 | 4 | 5
Child 13 | 4 | 5 | 4
Child 14 | 4 | 4 | 4
Child 15 | 4 | 4 | 5
Average | 4.1 | 3.9 | 4.5
Table 3. Caregivers’ reports showing the correct answers (0–6) for the TD children’s interaction with the three face collections on the second day.
Child Number | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Child 1 | 5 | 5 | 5
Child 2 | 5 | 4 | 5
Child 3 | 5 | 5 | 5
Child 4 | 5 | 5 | 5
Child 5 | 6 | 6 | 6
Child 6 | 6 | 5 | 5
Child 7 | 6 | 5 | 5
Child 8 | 5 | 5 | 5
Child 9 | 6 | 6 | 6
Child 10 | 6 | 5 | 6
Child 11 | 5 | 5 | 5
Child 12 | 5 | 4 | 5
Child 13 | 5 | 5 | 5
Child 14 | 5 | 5 | 6
Child 15 | 6 | 6 | 6
Average | 5.4 | 5.1 | 5.3
Table 4. Caregivers’ reports showing the correct answers for the children’s interaction with the three face collections for both days, sorted by facial expression.
Facial Expression | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Happiness | 30 | 30 | 30
Sadness | 28 | 28 | 28
Surprise | 24 | 24 | 26
Fear | 25 | 26 | 27
Disgust | 20 | 21 | 24
Anger | 22 | 22 | 23
Table 5. The assessment of the application by caregivers for TD children.
Criteria | The First Caregiver (out of 10) | The Second Caregiver (out of 10)
Level of engagement with the application | 7 | 9
Level of tolerability | 8 | 9
Level of enjoyment | 10 | 10
Ease of use | 10 | 10
Level of interaction with the application | 8 | 10
The AR application can be used repeatedly | 9 | 10
Level of safety | 10 | 10
The child learned to use the AR system quickly | 8 | 9
The system is recommended by caregivers | 10 | 9
The child felt confident in dealing with the system | 10 | 10
Table 6. Caregivers’ reports showing the correct answers (0–6) for the ASD children’s interaction with the three face collections on the first day.
Child Number | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Child 1 | 1 | 2 | 2
Child 2 | 0 | 2 | 3
Child 3 | 1 | 3 | 3
Child 4 | 1 | 3 | 3
Child 5 | 2 | 3 | 4
Child 6 | 1 | 3 | 4
Child 7 | 0 | 3 | 3
Child 8 | 1 | 2 | 3
Child 9 | 1 | 3 | 4
Child 10 | 2 | 3 | 4
Child 11 | 1 | 3 | 3
Child 12 | 1 | 3 | 3
Child 13 | 2 | 3 | 4
Child 14 | 1 | 3 | 4
Child 15 | 1 | 2 | 3
Average | 1.1 | 2.7 | 3.3
Table 7. Caregivers’ reports showing the correct answers (0–6) for the ASD children’s interaction with the three face collections on the second day.
Child Number | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Child 1 | 2 | 2 | 3
Child 2 | 3 | 3 | 3
Child 3 | 3 | 4 | 3
Child 4 | 2 | 2 | 5
Child 5 | 4 | 3 | 4
Child 6 | 3 | 4 | 2
Child 7 | 2 | 2 | 3
Child 8 | 1 | 3 | 2
Child 9 | 4 | 4 | 3
Child 10 | 2 | 2 | 4
Child 11 | 4 | 3 | 4
Child 12 | 3 | 4 | 2
Child 13 | 2 | 2 | 3
Child 14 | 2 | 3 | 3
Child 15 | 4 | 4 | 3
Average | 2.7 | 3.0 | 3.2
Table 8. Caregivers’ reports showing the correct answers (0–30) for the ASD children’s interaction with the facial expressions of the three face collections over the two days.
Facial Expression | Correct Answers in the First Image Collection | Correct Answers in the Second Image Collection | Correct Answers in the Third Image Collection
Happiness | 20 | 21 | 21
Sadness | 11 | 13 | 14
Surprise | 7 | 11 | 12
Fear | 7 | 11 | 12
Disgust | 5 | 8 | 11
Anger | 6 | 11 | 13
Table 9. The assessment of the application by caregivers for ASD children.
Criteria | The First Caregiver (out of 10) | The Second Caregiver (out of 10)
Level of engagement with the application | 9 | 10
Level of tolerability | 8 | 10
Level of enjoyment | 9 | 9
Ease of use | 9 | 9
Level of interaction with the application | 9 | 9
The AR application can be used repeatedly | 9 | 9
Level of safety | 10 | 10
The child learned to use the AR system quickly | 7 | 8
The system is recommended by caregivers | 10 | 10
The child felt confident in dealing with the system | 9 | 8
