Article

Can You Dance? A Study of Child–Robot Interaction and Emotional Response Using the NAO Robot

Vid Podpečan
Department of Knowledge Technologies, Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
Multimodal Technol. Interact. 2023, 7(9), 85; https://doi.org/10.3390/mti7090085
Submission received: 19 July 2023 / Revised: 8 August 2023 / Accepted: 21 August 2023 / Published: 30 August 2023
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction - 2nd Edition)

Abstract

This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. Over a span of eight years, we engaged, by a conservative estimate, at least two thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interaction are considered and evaluated in light of our experience, and we discuss their weak points and potential improvements. We examine the response of audiences confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, and the relevance of non-verbal cues, and we analyze drawings made by preschool children both before and after their interaction with the NAO robot.

1. Introduction

Human–robot interaction (HRI) is the interdisciplinary study of interaction dynamics between humans and machines (robots). The field has received an enormous amount of attention in the last few years, but its foundational concepts were articulated in the literature decades before the technical constraints were lifted and robots of sufficient complexity were developed. Isaac Asimov’s three laws of robotics are often cited as the original guidelines for HRI, but there are other historical examples as well [1,2].
Child–robot interaction (CRI) has emerged as an important subfield of HRI. It recognizes the challenges and special requirements of interacting with children, given their ongoing neurophysical, physical, and cognitive development [3]. The NAO robot has emerged as one of the most popular robots used in CRI research, with Amirova et al. [4] providing an overview of about 300 research works that focus on the use of NAO, mostly with children, although other age groups are considered as well.
NAO is a typical example of a modern humanoid social robot. In terms of socially assistive robotics [5], NAO is typically used in tasks such as tutoring and emotional expression, but is less used in direct physical therapy due to its design and relatively fragile construction. However, indirect physical therapy, such as repetitive demonstration and coaching, is possible, as demonstrated by Matić and Kovačić [6] and Assad-Uz-Zaman et al. [7] in laboratory conditions.
The integration of social robots into tutoring and education [8] has enabled new approaches in pedagogical work; the positive effect can often be attributed to the novelty of the approach as well as the physical embodiment of the robot [9,10]. The latter was shown by Alimardani et al. [10] to be important, as evidenced by measuring the EEG engagement index of two groups of pupils during the same language learning task. In general, measuring engagement in child–robot interaction is difficult and often biased. Lytridis et al. [9] review several approaches to measuring the engagement levels of children during child–robot interactions with an emphasis on educational and therapeutic settings.
Elements of affective computing, especially emotion recognition from video and sound, as well as gesture recognition, are important aspects of socially assistive robotics in learning and education as they provide unique experiences by allowing affects to enter the CRI process [11,12]. In this way, the otherwise fixed course of interaction can be changed by the emotional responses of participants, similar to human–human interactions [11]. The NAO robot offers the built-in ALMood module, which attempts to estimate the mood of the focused user, but only when the autonomous mode (ALAutonomousLife) is active, thus limiting its use in custom applications. To this end, Filippini et al. developed a facial expression recognition module for NAO, which can be embedded into the platform; it generally performs better and faster than NAO’s built-in user mood estimation [12].
Social and socially assistive robots also play important roles in interventions tailored for children with special needs, especially different types of intellectual disabilities [13,14,15]. Most research is geared toward specific learning disabilities and autism spectrum disorder (ASD). Different robots, such as QTrobot, Orbit, ZECA, NAO, Milo, Kaspar, SPELTRA, Iromec, Moxie, Jibo, MiRo, Cozmo, Leka, InMoov, Rero, Ifbot, and Paro, as examples, have been developed or programmed, especially for these specific applications [13]. According to several studies, social robots allow for beneficial outcomes for children with ASD by fostering increased engagement, bolstering social skills, and mitigating social anxiety [16,17].
The rapid progress made in the field of artificial intelligence has also brought about the need to introduce AI-related topics and content into pedagogical work. However, this progress also brings with it dangers, with children being particularly exposed. The European Commission, UNICEF, and other governing bodies have already prepared guidelines and regulations that determine the safe use of AI-related technologies for children [18,19]. In addition to security and privacy provisions, it is also very important to prepare children for the current and future development of AI, as this is the only way they will be able to function competently in society in the future. Moreover, it has been demonstrated that interactions with AI toys can improve creativity, collaborative inquiries, and related literacy skills [20,21]. One of the best and easiest options to introduce AI-related technology (as well as science and technology in general) is through the use of robots or other intelligent agents. Humanoid robots are particularly suitable due to their complexity and human-likeness, but other robotic platforms are suitable as well [22,23,24,25]. The theory of embodied learning provides a theoretical foundation for such an approach [26], and social robots are appropriate tools for its realization [20,24]. In a recent study, Baumann et al. [27] discovered that 3-year-old children trusted robots and humans equally and 5-year-old children preferred to learn from a competent robot. The embodied AI curriculum proposed by Yang [20] offers the synthesis of knowledge on the “Why”, “What”, and “How” of AI education for young children, and provides a new way to engage children in STEM, helping them understand the modern digital world. On the other hand, Lindsay and Hounsell propose adaptations of the robotics program to enhance participation and interest in STEM among children with disabilities [28].
The aim of this retrospective study is to present and discuss the key factors and questions that arose during our work with the NAO robot, in light of the aforementioned aspects. We are especially interested in the impact of physical embodiment on social interactions, the role of anthropomorphism in CRI, and the emotional aspects of interactions between robots and humans. These aspects are approached empirically: we present our case in sufficient detail and discuss the observed responses. Since children’s drawings are rich and interesting sources of data [29,30], we present and comment on selected drawings of the NAO robot made by children before and after the performance. We note our observations but also attempt to draw more general conclusions in the hope that they will inspire further research. Because the work presented here was not designed as a controlled study but is a retrospective account of a long-term effort to popularize STEM, and robotics in particular, quantitative data are not available and statistical analysis cannot be performed. Nevertheless, the scale of the engagement and the breadth of the target population give the observations a weight that is rare in the comparable literature. In addition, the work on the popularization of robotics among younger populations, which is partially presented here, received the “Prometheus of Science” award from the Slovene Science Foundation. Finally, as pointed out by Belpaeme et al. [3], the analysis of CRI data gathered with questionnaires and self-reports is difficult, as children tend to try to please the experimenter and extreme responses are typically observed.
In general, robotic projects for children and the popularization of technology and AI are very important. Weinberg et al. [31] provide evidence that participating in a robotics project may help to reduce the gender gap in science and engineering and increase positive attitudes about engineering and science. A systematic review by Pedersen et al. [32] supports these findings and provides collected recommendations, such as using a humanoid robot, human–robot interactions, teaching activities that foster collaboration, and avoiding competitive settings. Even though the two studies mostly involved older children of one gender, the conclusions can, to a certain extent, be applied to the other gender and different age groups as well. The recommendations presented by Pedersen et al. [32] have been followed quite closely in our work, while taking into account different environments and age groups.
This paper is structured as follows. The following section presents our applications for the NAO robot; it is organized into five groups, according to the content, target audience, and the main goal. In the discussion section, we present and summarize our experience when demonstrating the presented applications to different audiences. Finally, we briefly summarize our observations and draw general conclusions, which should be taken into account in future work.

2. Developed Applications for the NAO Robot

In this section, we present the applications for the NAO robot that were developed over several years and performed or demonstrated many times to different audiences. They are divided into five groups according to the main topic and the programming techniques used. We provide only short descriptions of the applications and relevant implementation details, while the discussion and evaluation are presented in Section 3.

2.1. Motor Development Activities and Games

The applications in this category are especially targeted at children who are still developing and improving their gross and fine motor skills, and for whom physical contact with a foreign object of interest is still of major importance. Applications in this category can also serve as the basis for indirect physical therapy for impaired individuals through demonstrations and coaching [6,7].

2.1.1. Aerobics

The aerobics application is a simple see-and-repeat game, where the robot performs sequences of increasingly difficult moves and poses that are to be repeated by the audience. The application features a few classic stretching exercises and yoga positions, which are simple enough to be programmed on NAO.
Repetitive moves, such as squats and balance-keeping yoga poses, are challenging for the participants. The application is intended to raise awareness of both the strengths and the shortcomings of the human body compared with a robot’s body. Aside from tactile commands, the application does not feature other means of interaction.

2.1.2. Finger Grabbing

Finger grabbing is a very simple but extremely popular application, designed especially for preschoolers. The NAO robot is programmed to respond to touches on two of its head tactile sensors, which open and close the fingers on both hands. The instructions for using the application are simple enough to be understood by 1.5-year-old children. The application allows participants to study the mechanics of the robot’s fingers, feel the strength of its grip, and observe the robot’s immediate response to touch. Figure 1 shows a group of children testing the finger-grabbing application during NAO’s visit to a kindergarten. Taking into account the immense popularity of the application and its striking similarity to the palmar grasp reflex of newborns, we hypothesize that children associate NAO’s grip with this reflex, promoting a protective attitude and positive emotions.
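Illustratively, the core of this behavior fits into a short loop written against the NAOqi Python SDK. The sketch below is a simplification under stated assumptions: the robot address is a placeholder, and a polling loop stands in for the event-driven implementation actually deployed on the robot.

```python
# Minimal sketch of the finger-grabbing behavior using the NAOqi Python SDK.
import time
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # hypothetical robot address

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

FRONT = "Device/SubDeviceList/Head/Touch/Front/Sensor/Value"
REAR = "Device/SubDeviceList/Head/Touch/Rear/Sensor/Value"

while True:
    if memory.getData(FRONT) > 0.5:    # front head sensor touched: open hands
        motion.openHand("LHand")
        motion.openHand("RHand")
    elif memory.getData(REAR) > 0.5:   # rear head sensor touched: close hands
        motion.closeHand("LHand")
        motion.closeHand("RHand")
    time.sleep(0.1)                    # poll the sensors at roughly 10 Hz
```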

2.1.3. Dance

Dance is one of the most interesting physical activities that a robot can perform. It is typically used to demonstrate the precision of the robot’s movements, its number of DOFs, and its ability to maintain balance. The NAO robot offers a user-friendly motion editor and several useful functions for fluid motion and motion editing, which can be used to develop dance applications. We programmed and animated several dances, both simple and complex, which are often used as icebreakers when presenting the robot to younger audiences. Tai Chi, the Gangnam Style dance, the Macarena, Rasputin, and other fast disco-style dances are the most typical examples. It is worth noting that fast dances are the most popular and that children are always very eager to imitate NAO’s moves, which are often perceived as funny (this can likely be attributed to the unnaturally flawless repetition of moves). We observed that children spontaneously danced alongside the robot shortly after the application started.
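To illustrate the programming model rather than an actual choreography, the fragment below sketches how such keyframe animations look in the NAOqi Python SDK; the joint angles, timings, and robot address are illustrative values only, not an excerpt from one of the dances.

```python
# Toy keyframe animation in the style used by the dance applications.
# Each joint gets a list of target angles (radians) and key times (seconds).
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # hypothetical address
motion.wakeUp()                                     # stiffen joints, stand up

names = ["RShoulderPitch", "RElbowRoll"]
angles = [[1.0, -0.5, 1.0],   # keyframes for RShoulderPitch
          [0.3, 1.0, 0.3]]    # keyframes for RElbowRoll
times = [[1.0, 2.0, 3.0],     # key times for each joint, in seconds
         [1.0, 2.0, 3.0]]
motion.angleInterpolation(names, angles, times, True)  # True = absolute angles
```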

2.2. Children’s Games

Play is an essential component of a child’s development as it shapes the way children experience the world and, therefore, presents opportunities and challenges for those devising robot-involved games [33,34]. This category features a few classic children’s games, adapted for the NAO robot. The participants can play with the robot without supervision once the robot’s operational principles and game modifications are explained by the robot’s operator.

2.2.1. Pantomime

The pantomime application allows a group of participants to engage in pantomime play with the robot. The application features several animated moves, gestures, and accompanying sounds, which are performed one after another by repeatedly touching NAO’s head tactile sensor. The game is typically very well received by the participants because of the robot’s fluent, complex, human-like gestures and animation sequences, as well as the anticipation of the next guessing challenge. The fact that the game can only be played unilaterally—as the robot cannot guess the gestures of the participants—does not reduce the interest in playing pantomime with NAO. As the number of pre-programmed guessing challenges is typically smaller than the number of participants (so not everyone has the chance to interact with NAO), we added a final animation in which the robot goes to sleep and starts snoring. Interestingly, this is usually accepted as a good excuse for why the game cannot continue indefinitely.

2.2.2. The Day and Night Game

The day and night game for NAO is a modern implementation of the well-known children’s game. The game is played as follows. The leader names the time of the day (day or night) and the participants respond by standing up (day) or sitting down (night). The leader can increase the speed and/or repeat the same word many times in order to confuse the participants. The NAO robot is programmed to compete against the participants by recognizing the two words and performing the required moves. The program uses NAO’s built-in single-word speech recognition engine, which is quite robust, and turns the robot into a skilled player. Interestingly, this game can also be played in the Slovene language because the corresponding Slovene translations for “day” and “night” (“dan” and “noč”) have very similar pronunciations to the English words “done” and “notch”, which can be recognized using NAO’s English speech recognition engine. This is a special case of cross-language speech recognition, where the whole word is recognized instead of a list of phonemes, which is the case with cross-language phoneme mapping. When demonstrated by a skilled operator, the robot can play the game with almost 100% accuracy, which is on par with the best human performers.
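The essence of this setup can be sketched as follows with NAO’s built-in word recognition; the robot address and confidence threshold are assumptions, and the goToPosture calls stand in for the actual stand-up/sit-down animations.

```python
# Sketch of the day-and-night player: the Slovene "dan"/"noc" are spotted by
# the English engine as "done"/"notch".
import time
from naoqi import ALProxy

IP, PORT = "192.168.1.10", 9559  # hypothetical robot address
asr = ALProxy("ALSpeechRecognition", IP, PORT)
posture = ALProxy("ALRobotPosture", IP, PORT)
memory = ALProxy("ALMemory", IP, PORT)

asr.setLanguage("English")
asr.setVocabulary(["done", "notch"], False)  # "done" ~ "dan", "notch" ~ "noc"
asr.subscribe("DayNightGame")

try:
    while True:
        data = memory.getData("WordRecognized")  # ["<word>", confidence, ...]
        if data and len(data) >= 2 and data[1] > 0.4:  # confidence threshold
            if data[0] == "done":
                posture.goToPosture("Stand", 0.8)    # day: stand up
            else:
                posture.goToPosture("Crouch", 0.8)   # night: crouch down
        time.sleep(0.05)
finally:
    asr.unsubscribe("DayNightGame")
```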

2.2.3. Football

Football for NAO is a simple ball-kicking game based on the built-in red ball tracking algorithm. The application works as follows. The robot is programmed to search for and walk toward a red ball. When the robot is close enough, it performs a kick and repeats the search-and-approach loop. If the ball moves out of its field of view, the robot stops and attempts to locate it by turning around and looking in all directions. The audience can participate by kicking the ball, causing the robot to change direction or stop and locate the ball, or by giving the ball back to the robot to be kicked.
During several performances, we discovered that the built-in red ball tracker is often too sensitive: red objects of any shape can be recognized as the target when their size approximately matches the specified target size (a parameter of the built-in function). It is, thus, recommended to remove or hide red-colored objects when playing the game and/or to explain the robot’s seemingly erratic behavior to the participants.
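A sketch of the search-and-approach loop on top of the built-in tracker is given below; the ball diameter plays the role of the size parameter mentioned above, while the robot address, the distance threshold, and the omitted kick animation are assumptions.

```python
# Sketch of the football loop using NAO's built-in red ball tracking.
import math
import time
from naoqi import ALProxy

IP, PORT = "192.168.1.10", 9559  # hypothetical robot address
motion = ALProxy("ALMotion", IP, PORT)
tracker = ALProxy("ALTracker", IP, PORT)

motion.wakeUp()
tracker.registerTarget("RedBall", 0.06)  # ball diameter in meters (the size parameter)
tracker.setMode("Move")                  # walk toward the target, not just look at it
tracker.track("RedBall")                 # start the search-and-approach behavior

while True:
    position = tracker.getTargetPosition(0)  # ball position in the torso frame
    if position and math.hypot(position[0], position[1]) < 0.25:
        tracker.stopTracker()                # close enough: kick the ball
        # kick() would be a custom keyframe animation; omitted here
        tracker.track("RedBall")             # resume searching and approaching
    time.sleep(0.2)
```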

2.3. Theatrical Performances

A robot acting in a theater play is still quite a rare sight. While robots are increasingly performing character versions of themselves, there are many open-ended questions and limitations to consider [35]. In 2009, Lin et al. reported on the realization of one of the earliest robot theater performances with a cast of two biped androids and two twin-wheeled two-armed humanoid robots [36]. The following adaptations of literary works—where NAO assumes the lead role—constitute our modest contribution to this interesting field.

2.3.1. Mr. Cheerful

Mr. Cheerful is a book in the Mr. Men series created by Roger Hargreaves [37]. It tells the story of Mr. Cheerful, a smiling, hat-wearing individual who is hiding one little secret, which only becomes known when he meets Miss Splendid.
We adapted the story into a play in which NAO plays the role of Mr. Cheerful, while a human actor plays the role of Miss Splendid and narrates the story. The robot wears a little hat and performs actions as instructed by a hidden human operator. For the sake of simplicity, the interaction between the robot and the actor is only simulated, and no speech recognition or other advanced programming is used.

2.3.2. Robonocchio

Robonocchio is an adaptation of the well-known novel, The Adventures of Pinocchio, by the Italian writer Carlo Collodi [38]. The main character, Pinocchio, is a wooden puppet who dreams of becoming a human child. He often lies and is deceived many times by a pair of beggars, a fox, and a cat.
Our adaptation is significantly shorter than the original and translated into Slovene. The lead role of Robonocchio is played by the robot, while the supporting role of his father, Geppetto, is played by a human. Two string-animated plush toys play the roles of the fox and the cat. Because the NAO robot does not support speech synthesis in Slovene, the dialogues were prerecorded and synchronized with gestures using our own animation engine (https://github.com/vpodpecan/nao-gesturesync, accessed on 19 July 2023). The adaptation features a limited number of interactions with the robot, and a hidden robot operator is required to start the acts.

2.3.3. “O barvici, ki je hotela plesati”

This is an adaptation of a short story in the Slovene language, “O barvici, ki je hotela plesati” (“About the coloring pencil that wanted to dance”), written by choreographer and dance teacher Jasna Knez. It tells the story of a lively orange coloring pencil that wanted to dance. It was performed in cooperation with a local social welfare institution for children, adolescents, and adults with special needs, CUDV Draga (Education, Work and Care Center Draga). Several children and teenagers with developmental disorders participated in the performance with the NAO robot. The use of colors (LEDs in NAO’s eyes, ears, and torso) and fluent motions were NAO’s two key contributions. The play was performed at a local festival with great success.

2.4. Artificial Intelligence Applications

This category features robot applications that are based on machine learning algorithms and programming techniques that implement or mimic intelligent behavior. They are primarily intended for demonstrations in front of audiences interested in the advances in artificial intelligence and robotics, but also for the learning and discovery of AI. It has been shown that interactions with AI toys and social robots are beneficial [21,24], but the first and most important requirement is safety. When developing AI applications for children, it is recommended to follow the UNICEF policy guidance on AI for children to ensure safety, protection of data and privacy, and non-discrimination [18].

2.4.1. Facenao

Facenao (https://github.com/vpodpecan/facenao, accessed on 19 July 2023) is an application that connects the NAO robot with emotion recognition software, thus allowing the robot to recognize the emotions of people around it. As accurate emotion recognition poses hardware requirements that exceed those available on NAO, relying on an external computing service is currently the only feasible way to enable fast and accurate emotion recognition on NAO. The first version of Facenao was based on Microsoft’s Cognitive Services API, but the latest version uses the open-source PAZ library [39] (Perception for Autonomous Systems), which allows the Facenao application to work without internet access, as the recognition runs on the computer connected to the NAO robot. The PAZ library implements an optimization of Google’s Xception architecture, called mini-Xception, for emotion and gender recognition, and achieves 81% accuracy for emotion recognition on the FER+ dataset [40]. The following emotions are recognized: anger, disgust, fear, happiness, neutrality, sadness, and surprise.
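For reference, the recognition step with PAZ amounts to only a few lines. The sketch below runs the same mini-Xception pipeline on a single image file; the filename is a placeholder, as Facenao itself feeds in frames from NAO’s camera via its JavaScript API.

```python
# Sketch of the Facenao recognition backend using the open-source PAZ library.
from paz.backend.image import load_image
from paz.pipelines import DetectMiniXceptionFER

detect = DetectMiniXceptionFER([0.1, 0.1])   # offsets enlarge the face crop
image = load_image("frame_from_nao.png")     # placeholder for a camera frame
results = detect(image)                      # face detection + mini-Xception

for box in results["boxes2D"]:               # one Box2D per detected face
    print(box.class_name, box.score)         # e.g. "happy" 0.93
```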
Facenao is implemented as a web application that runs on a PC and connects to the robot via its JavaScript API. The application can capture and display a picture from NAO’s camera, detect and extract all faces captured by the camera, and display them along with the recognized emotions. The hardware setup (PC and NAO) allows the application to run in real time as a typical computation time on a modern laptop is less than 0.1 s. However, NAO only captures the image and performs emotion recognition when a keyboard or mouse event is registered. Aside from limiting the expensive calls to the emotion recognition function, this also ensures that the participant is facing the robot.
In addition, there is a “hall of fame” gallery where the faces with the strongest detected emotions are exhibited. We quickly discovered that the mere existence of this gallery encourages participants to produce the most extreme facial expressions they can in order to be featured. It is also worth noting that the recognition accuracy is not the same for all emotions and that lighting conditions are of crucial importance. Using a state-of-the-art deep neural network emotion recognition method [41] instead of the portable PAZ library could improve accuracy but would also make the system more complex and less portable.

2.4.2. LiveChat

The LiveChat application was developed with the aim of equipping the robot with conversational skills, allowing it to chat with its users, ask and answer questions, and be knowledgeable about many different topics. We achieved this by using the publicly available ALICE AIML knowledge base (https://github.com/drwallace/aiml-en-us-foundation-alice, accessed on 19 July 2023), NAO’s built-in speech synthesis, animated speech, and state-of-the-art speech recognition services. Although much better conversational intelligence could be achieved by using state-of-the-art large language models [42] instead of ALICE, this could introduce inappropriate content and create a potentially harmful environment.
Two versions of LiveChat were developed. The first is more advanced and enables the robot to recognize continuous speech, making the conversation very natural (Google’s cloud-based speech-to-text service is used). However, the cloud-based service introduces an inevitable delay of a few seconds in communication. The second features a web application running on a computer that connects to NAO via its JavaScript API and allows the user to type questions on the connected personal computer, which are then answered by the robot using its animated speech functionality. The advanced version is less reliable in an uncontrolled environment because of the noise picked up by the robot’s microphones and the segmentation of the live audio stream. On the other hand, while the simpler version is very reliable, it usually fails to evoke the emotional response that can be observed with the advanced version, where the robot answers verbal questions with its own voice and uses appropriate gestures.
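The essence of the simpler variant can be sketched with the python-aiml package and NAO’s animated speech; the AIML directory, the fallback answer, and the robot address below are assumptions rather than excerpts from the actual application.

```python
# Sketch of the simpler LiveChat variant: an AIML kernel produces the answer
# and NAO speaks it with automatically generated gestures (ALAnimatedSpeech).
import glob
import aiml                    # the python-aiml package
from naoqi import ALProxy

kernel = aiml.Kernel()
for path in glob.glob("aiml-en-us-foundation-alice/*.aiml"):
    kernel.learn(path)         # load the ALICE knowledge base

speech = ALProxy("ALAnimatedSpeech", "192.168.1.10", 9559)  # hypothetical IP

while True:
    question = raw_input("You: ")     # NAOqi's Python SDK runs on Python 2
    answer = kernel.respond(question) or "I have no answer to that."
    speech.say(answer)                # spoken aloud with matching gestures
```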

2.5. Data Harvesting Applications

The presented data-harvesting applications were created with the specific goal of collecting data, such as pictures, sound recordings, answers, and opinions, during human–robot communication or interaction. In robot research scenarios, the collected data are most typically used to train or improve machine learning models [43], to learn directly from observed data by demonstration [44], or to collect and store data for later analysis (environmental data, for example) [45,46]. In spite of the widespread collection of various types of data during HRI, an approach to formalizing the HRI data collection process has only recently been proposed [47]. Finally, unlike the rest of the presented applications, data harvesting may raise privacy issues, which must be addressed as required by law.

2.5.1. Face Detection

Face detection is a simple application based on NAO’s built-in face recognition. It implements a simple, closed dialogue loop in which the robot detects a face, remembers it, asks for the person’s name, and stores both in an internal database. When a face is recognized again, the corresponding recording is played back to address the participant by his or her name. The application can be used to collect pictures of human faces along with speech samples. However, unless specifically allowed by the developer (when the legal requirements are met), the application purges its internal database upon exit. The application was developed to demonstrate the learning abilities of the NAO robot to primary school pupils; a similar approach (without audio recording) was used by Ismail et al. to measure the concentration levels of children with ASD in social interactions and communication [48].
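A condensed sketch of this dialogue loop is given below; the indexing of the FaceDetected memory key follows its documented layout, while the robot address, the placeholder face label, and the omitted audio-recording step are assumptions.

```python
# Sketch of the face-detection dialogue loop using NAO's built-in recognition.
import time
from naoqi import ALProxy

IP, PORT = "192.168.1.10", 9559  # hypothetical robot address
faces = ALProxy("ALFaceDetection", IP, PORT)
memory = ALProxy("ALMemory", IP, PORT)
tts = ALProxy("ALTextToSpeech", IP, PORT)

faces.subscribe("FaceGreeter")
while True:
    data = memory.getData("FaceDetected")
    if data and len(data) >= 2 and data[1]:
        face_info = data[1][0]           # first detected face
        label = face_info[1][2]          # faceLabel field; "" if unknown
        if label:
            tts.say("Hello again, %s!" % label)
        else:
            tts.say("I do not know you yet. What is your name?")
            # the spoken name would be recorded here (omitted for brevity)
            faces.learnFace("participant")  # placeholder label
    time.sleep(0.5)
```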

2.5.2. WhimBot

The WhimBot (the What-If Machine robot interface) application was developed as part of a computational creativity project, where the aim was to collect evaluations (scores) of computer-generated storytelling ideas (what-if sentences) in order to improve the algorithms that produce the ideas. The application collects scores through a pleasant, guided dialogue between the robot and the participant. It can run without supervision and can be demonstrated during conferences and conventions. It was successfully launched and exhibited at two large public events. We discovered that even the simplest human–robot dialogue is far superior to questionnaires and that the novelty of the approach ensures that people are willing to participate, regardless of the topic.

3. Discussion

The applications presented in the previous section were demonstrated, improved, and evaluated many times in real-world settings, which allowed us to gain a significant amount of experience and deeper insight into several aspects of human–robot interaction, including technological obstacles, emotions, expectations, and fears. Our first (and most general) observation is the almost universally positive emotional response toward the NAO robot in all age groups. Although this could possibly be explained by a general interest in technology, robotics, and novelties, we believe that the “uncanny valley” phenomenon might also provide a plausible explanation.
As proposed by Masahiro Mori more than 50 years ago, the (still disputed) relation between the human likeness of an entity and the perceiver’s affinity for it features a valley in the graph of affinity versus human likeness: robots with slightly imperfect human likeness sit near the bottom of this valley because they are simultaneously perceived as familiar and scary. The two peaks of the graph correspond to a healthy person and to a humanoid with little resemblance to humans beyond the general anthropomorphic design of the body and body parts [49]. If movement is also taken into account, the valley and peaks become more extreme. However, a preliminary study by Destephe et al. [50] suggests that people with ASD react differently to the uncanny valley and rate a robot as much more attractive when it is perceived as more human, possibly because of their difficulty in understanding body language and the emotional content of gestures. Finally, the research by Brink et al. [51] suggests that the uncanny valley effect is acquired over the course of development: their experiments show that a human-like robot does not become creepier than a machine-like robot until approximately 9 years of age.
The designers and engineers who developed the NAO robot and the newer Pepper robot took the uncanny valley into account, and the results are nearly optimal according to Mori’s hypothesis: a too-exact human likeness was avoided and user feedback was taken into account [52]. The height and weight of NAO also closely match those of a 1–2-year-old child. Moreover, the experiments of Mathur et al. [53] confirmed that an artificially constructed robot head closely resembling NAO’s ensures high likeability (second only to humans). On the other hand, von der Pütten and Krämer argue that perceived human-likeness is not always linked to perceived likeability; rather, likeability depends on the overall design of the robot, including characteristics such as height, color, form, and facial features [54]. Based on our experience with the NAO robot, we can confirm that characteristics such as height, weight, shape, color, unobtrusiveness, and a clean design are important, if not decisive, and that the NAO robot arouses feelings that people usually experience toward toddlers. In addition, younger children generally perceive NAO as a living being not very different from themselves; the robot’s operator is often asked questions such as: Can he dance? Can he talk? What does he eat? Can he get fatter? Does he sleep? Where does he live? Does he have parents? Why does he have only three fingers? The use of the pronoun “he” is the result of translation from Slovene, which has grammatical gender and in which the word robot is masculine. The research of Brink et al. [51] and Belpaeme et al. [3] supports our observations about how younger children perceive NAO.
During our performances with NAO, we identified three exceptions to the generally positive initial response. The first involves adults and seniors who cultivate a general dislike of technology and innovation. Aside from open dislike of, or hostility toward, the robot itself, they typically express a dislike of artificial intelligence, computers, and industrial robots. We were unable to mitigate these negative emotions through open debate, argumentation, or explanation, and the only change of opinion achieved was the dismissal of robots and AI in general as unnecessary and potentially dangerous. The second exception involves otherwise normally developed and healthy children who refuse to touch the robot or come close to it. They often describe the robot as terrible and ugly. They are usually willing to watch the performance from a distance but always seek the support and physical closeness of an adult they trust. It is worth mentioning that such cases are exceedingly rare. The third exception involves very young children (less than 2 years old); this is the youngest kindergarten group in Slovenia, comprising children aged 11–18 months. During our performances, we discovered that they were initially afraid of an unknown mechanism attempting to invade their personal space. This observation is consistent with the findings of Kozima and Nakagawa [55], who distinguish three consecutive phases in the child–robot interaction process: the neophobia phase, the exploration phase, and the interaction phase. We were able to completely disperse the fear by allowing the children to freely explore the unknown object and to observe, touch, and inspect the inactivated robot in its open transport case. Later, when the activated robot started to blink its LEDs, produce sounds, and move, the fear may have returned briefly but was quickly replaced by curiosity, enthusiasm, and the desire for interaction. It is also worth noting that, in the presence of the robot, the children’s attention spans increased enormously: it was not uncommon for them to participate voluntarily and actively for up to 30 min, whereas the typical attention span of 1.5-year-old toddlers is 2–3 min, making the observed increase remarkable. A detailed analysis of focused attention in toddlers is presented by Gaertner et al. [56].
An important case that deserves attention involves people with special needs. Our performances with the NAO robot also included children and teenagers with developmental and behavioral/emotional disorders. The theatrical performance described in Section 2.3.3, where the NAO robot performed together with children with developmental disorders, is especially worth mentioning. There were also a few cases of ASD and ADHD among the participants in our other numerous NAO performances. In general, a positive attitude and, often, even great enthusiasm can be expected. Pinto-Bernal et al. [57] reported that children with ASD felt safe, calm, and comfortable in the robot’s presence, and the anxiety levels of some children were reduced. However, the applications have to be selected carefully, taking into account their dynamics, emotional aspects, and the types of human–robot and human–human interactions involved. The robot operator should also have a basic understanding of the specificities of the more common behavioral and emotional disorders. Generally, the operator needs to allocate more time for each planned robot activity and be aware of the importance of the physical distance between the robot and participants with special needs, which varies on a case-by-case basis. Virnes [14] emphasized the importance of physical accessibility to the robot, as it affects the child’s sense of emotional ownership of and connection to the robot. Nonverbal communication between the operator and the participants (especially eye contact) also deserves special attention, and the operator must realize that apparent non-cooperation is not an indication of disinterest.
The majority of the presented applications, except those centered on data harvesting and AI, were presented to children of preschool and primary school age (3–10 years). These applications feature only a small number of human–robot interactions, limited to selected instances of touch, vision, and sound. Nevertheless, the mere fact that interacting with a robot is possible is enough to stimulate one’s imagination and make technological obstacles easy to overlook. The easiest and most robust way of implementing human–robot interaction between children of this age group and NAO is through its tactile sensors and speech synthesis. Tactile sensors can be used to start an activity and confirm the user’s choice, but they can also trigger the robot’s response to touch. To complete a bidirectional interaction, speech synthesis can be used to present choices, confirm or deny actions, and report errors. In general, we found that physical contact with the robot is immensely important and that NAO would benefit from additional tactile sensors or possibly even artificial skin. In this way, the robot could respond more accurately to human touch, thus improving its tactile communication and social interactivity. This observation is supported by the pioneering work of Andreasson et al. [58] on the tactile conveyance of positive and negative emotions (affective touch) on the NAO robot.
The almost universally positive emotions and impact on children within the 3–10-year age group can also be observed by analyzing their drawings [30,59]. On several occasions, the children were asked to draw the robot before and after the event (they did not know any details about the NAO robot beforehand). Four samples, drawn by a 3-year-old and a 5-year-old, are shown in Figure 2. Figure 2b clearly shows that the robot’s movements left a deep impression on the 3-year-old child. The wavy line encircling the figure can be interpreted as the expansive movement of the robot (which indeed occurred during the session). Similarly, short and long circular lines inside the body indicate motions of short and long durations. NAO’s fingers and their movement also left an impression: they are present in the drawing, although their number is not accurate, and two vertical lines stemming from the hands indicate the movement. There is also an emphasized line on top of the head, corresponding to the hat that the robot wore during the theatrical performance of Mr. Cheerful.
Figure 2d, drawn by a 5-year-old child, also exhibits strong emotions, expressed as strong, dark lines covering the head, arms, and legs; the hat and the torso are less prominent. The robot’s loudspeakers (ears) are also drawn, and the hands have the correct number of fingers. As in Figure 2b, the eyes are strongly emphasized, which is related to the use of the LEDs in NAO’s eyes during the performance to blink and express emotions. In general, when comparing the drawings made before and after the performance, one can immediately detect strong emotions and identify the parts of the robot’s body and/or performance that left a lasting impression.
The adult population is more diverse with respect to the general impression and the emotional factors involved. The actual physical appearance is still important, but we also observed that the quality and complexity of the software, the level and quality of human–robot communication, and unpredictability play significant roles. The variability is further increased by factors such as age, education, background knowledge, and character traits. For example, the LiveChat application, where the robot is able to produce a more-or-less appropriate answer to almost any question and say it aloud, is sometimes perceived as proof of the robot’s intelligence, and sometimes, because of generic answers to certain types of questions and a few inevitable grammatical mistakes, as an example of poor programming. Using large language models trained on dialogue, such as ChatGPT or similar models [42], could turn the NAO robot into a know-it-all. However, we believe that conversational agents of this type and capability are incompatible with the NAO robot, its design, and its aim. A “personal intelligence” chatbot such as Inflection AI’s Pi [60] would be more appropriate, as it is designed to be supportive, playful, kind, and fun.
The Facenao application is generally considered intelligent because of the accuracy of its emotion recognition and the fun factor involved in attempting to produce extreme facial expressions. Demonstrations of the robot’s physical abilities are also received favorably; one of the most common reactions during a first encounter with the robot is an attempt to shake its hand. In summary, during a human–robot interaction session featuring the latest advances in speech and vision recognition, natural language processing, and fluid whole-body motion, adults are much less forgiving of mistakes, inaccuracies, and limitations than children; they are also less adaptable and usually unable to escape established behavioral patterns during their interactions with the robot.

4. Conclusions

We presented a retrospective, long-term study of child–robot interaction using the NAO robot across several developed applications. Based on the significant amount of experience gained during numerous performances and demonstrations to diverse audiences, a few important conclusions can be drawn. First, the design of the NAO robot can be considered a textbook example of likable robot design. There seems to be no single decisive design feature that makes NAO likable, but there is clearly a winning combination. As also observed by Pinto-Bernal et al. [57], a robot-like appearance is preferred over a human-like or other appearance, especially when working with people with ASD. Second, when developing human–robot interfaces and components for human–robot interaction, the age of the target audience is an important factor; simplicity and reliability should be the main guidelines, even at the expense of limited functionality. Finally, we believe that non-verbal communication modalities should play an important role in the future of HRI, and especially in CRI research. Research in socially assistive robotics [61] suggests that even seemingly unimportant non-verbal inserts can lead to significant changes in communication and perception. Non-verbal communication supplements and augments spoken communication, making human–robot communication easier and richer and allowing emotions to enter the human–robot interaction loop.
In future work, we would like to conduct a controlled experiment, examining the emotions involved during child–robot interactions between NAO and preschool children. Drawings are excellent and potentially unbiased sources of information and our goal is to conduct an in-depth, large-scale analysis of image data of this type. The presented NAO applications are being extended with simple, non-verbal communication features, and we will study the effects in real-world settings in our future engagements with NAO.

Funding

This work was partially supported by the EU FP7 project WHIM (the what-if machine) under grant agreement no. 611560, and by the Slovenian Research Agency research core funding under the Knowledge Technologies program (no. P2-0103).

Institutional Review Board Statement

Ethical review and approval were waived for this retrospective study due to its observational nature.

Informed Consent Statement

Informed consent was obtained from all subjects (or their legal guardians) involved in the study.

Data Availability Statement

Data sharing is not applicable in this article as no datasets were generated or analyzed during the current study.

Acknowledgments

The author is grateful to his wife, Katarina Podpečan, a professionally trained kindergarten teacher, for her assistance and support with the NAO robot, insights into the analysis of children’s emotional responses, expertise in interpreting children’s drawings of the NAO robot, and guidance in developing NAO applications, games, and theatrical performances for children.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
STEM: science, technology, engineering, mathematics
HRI: human–robot interaction
CRI: child–robot interaction
AI: artificial intelligence
LED: light-emitting diode
DOF: degree of freedom
API: application programming interface
ASD: autism spectrum disorder
ADHD: attention deficit hyperactivity disorder
SAR: socially assistive robotics

References

1. Goodrich, M.; Schultz, A. Human-Robot Interaction: A Survey. Found. Trends Hum.-Comput. Interact. 2007, 1, 203–275.
2. Soegaard, M.; Dam, R.F. (Eds.) The Encyclopedia of Human-Computer Interaction, 2nd ed.; The Interaction Design Foundation: Aarhus, Denmark, 2012.
3. Belpaeme, T.; Baxter, P.; de Greeff, J.; Kennedy, J.; Read, R.; Looije, R.; Neerincx, M.; Baroni, I.; Zelati, M.C. Child-Robot Interaction: Perspectives and Challenges. In Proceedings of the Social Robotics, Bristol, UK, 27–29 October 2013; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer: Cham, Switzerland, 2013; pp. 452–459.
4. Amirova, A.; Rakhymbayeva, N.; Yadollahi, E.; Sandygulova, A.; Johal, W. 10 Years of Human-NAO Interaction Research: A Scoping Review. Front. Robot. AI 2021, 8, 744526.
5. Feil-Seifer, D.; Mataric, M. Defining socially assistive robotics. In Proceedings of the 9th International Conference on Rehabilitation Robotics (ICORR), Chicago, IL, USA, 28 June–1 July 2005; pp. 465–468.
6. Matić, D.; Kovačić, Z. NAO Robot as Demonstrator of Rehabilitation Exercises after Fractures of Hands. In Proceedings of the 2019 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 19–21 September 2019; pp. 1–6.
7. Assad-Uz-Zaman, M.; Islam, M.R.; Miah, M.S.; Rahman, M. NAO robot for cooperative rehabilitation training. J. Rehabil. Assist. Technol. Eng. 2019, 6, 2055668319862151.
8. Miglino, O.; Lund, H.H.; Cardaci, M. Robotics as an educational tool. J. Interact. Learn. Res. 1999, 10, 25–47.
9. Lytridis, C.; Bazinas, C.; Papakostas, G.A.; Kaburlasos, V. On Measuring Engagement Level During Child-Robot Interaction in Education. In Proceedings of the Robotics in Education, Vienna, Austria, 10–12 April 2019; Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R., Obdržálek, D., Eds.; Springer: Cham, Switzerland, 2020; pp. 3–13.
10. Alimardani, M.; van den Braak, S.; Jouen, A.L.; Matsunaka, R.; Hiraki, K. Assessment of Engagement and Learning During Child-Robot Interaction Using EEG Signals. In Proceedings of the Social Robotics: 13th International Conference, ICSR 2021, Singapore, 10–13 November 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 671–682.
11. Valagkouti, I.A.; Troussas, C.; Krouska, A.; Feidakis, M.; Sgouropoulou, C. Emotion Recognition in Human–Robot Interaction Using the NAO Robot. Computers 2022, 11, 72.
12. Filippini, C.; Perpetuini, D.; Cardone, D.; Merla, A. Improving Human–Robot Interaction by Enhancing NAO Robot Awareness of Human Facial Expression. Sensors 2021, 21, 6438.
13. Papakostas, G.A.; Sidiropoulos, G.K.; Papadopoulou, C.I.; Vrochidou, E.; Kaburlasos, V.G.; Papadopoulou, M.T.; Holeva, V.; Nikopoulou, V.A.; Dalivigkas, N. Social Robots in Special Education: A Systematic Review. Electronics 2021, 10, 1398.
14. Virnes, M. Robotics in Special Needs Education. In Proceedings of the 7th International Conference on Interaction Design and Children (IDC ’08), New York, NY, USA, 11–13 June 2008; pp. 29–32.
15. Syriopoulou-Delli, C.; Gkiolnta, E. Robotics and inclusion of students with disabilities in special education. Res. Soc. Dev. 2021, 10, e36210918238.
16. Freitas, H.; Costa, P.; Silva, V.; Silva Pereira, A.; Soares, F.; Esteves, J.S. Using a humanoid robot as the promoter of the interaction with children in the context of educational games. Int. J. Mechatronics Appl. Mech. 2017.
17. Rakhymbayeva, N.; Amirova, A.; Sandygulova, A. A Long-Term Engagement with a Social Robot for Autism Therapy. Front. Robot. AI 2021, 8, 669972.
18. Dignum, V.; Penagos, M.; Pigmans, K.; Vosloo, S. Policy Guidance on AI for Children; Report; UNICEF Office of Global Insight & Policy, United Nations Children’s Fund: New York, NY, USA, 2021.
19. Charisi, V.; Chaudron, S.; Di Gioia, R.; Vuorikari, R.; Planas, M.E. Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy; Scientific Analysis or Review KJ-NA-31048-EN-N; Publications Office of the European Union: Luxembourg, 2022.
20. Yang, W. Artificial Intelligence education for young children: Why, what, and how in curriculum design and implementation. Comput. Educ. Artif. Intell. 2022, 3, 100061.
21. Kewalramani, S.; Kidman, G.; Palaiologou, I. Using Artificial Intelligence (AI)-interfaced robotic toys in early childhood settings: A case for children’s inquiry literacy. Eur. Early Child. Educ. Res. J. 2021, 29, 652–668.
22. Parsons, S.; Sklar, E. Teaching AI using LEGO Mindstorms. In Proceedings of the 2004 AAAI Spring Symposium: Accessible Hands-on AI and Robotics Education, Palo Alto, CA, USA, 22–24 March 2004.
23. Williams, R.; Park, H.W.; Oh, L.; Breazeal, C. PopBots: Designing an Artificial Intelligence Curriculum for Early Childhood Education. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9729–9736.
24. Williams, R.; Park, H.W.; Breazeal, C. A is for Artificial Intelligence: The Impact of Artificial Intelligence Activities on Young Children’s Perceptions of Robots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–11.
25. Bertel, L.; Hannibal, G. The NAO robot as a Persuasive Educational and Entertainment Robot (PEER)—A case study on children’s articulation, categorization and interaction with a social robot for learning. Tidsskr. Læring Medier (LOM) 2015, 8.
26. Macedonia, M. Embodied Learning: Why at School the Mind Needs the Body. Front. Psychol. 2019, 10.
27. Baumann, A.E.; Goldman, E.J.; Meltzer, A.; Poulin-Dubois, D. People Do Not Always Know Best: Preschoolers’ Trust in Social Robots. J. Cogn. Dev. 2023, 24, 532–562.
28. Lindsay, S.; Hounsell, K.G. Adapting a robotics program to enhance participation and interest in STEM among children with disabilities: A pilot study. Disabil. Rehabil. Assist. Technol. 2017, 12, 694–704.
29. Jolley, R.P. Children and Pictures: Drawing and Understanding; Understanding Children’s Worlds; Wiley: Hoboken, NJ, USA, 2009.
30. Serjouie, A. Children’s Understanding of Pictures and Expression of Emotion in their Drawings. Ph.D. Thesis, Art Education Faculty, University of Erfurt, Erfurt, Germany, 2012.
31. Weinberg, J.B.; Pettibone, J.C.; Thomas, S.; Stephen, M.L.; Stein, C. The Impact of Robot Projects on Girls’ Attitudes Toward Science and Engineering. In Proceedings of the 2007 RSS Robotics in Education Workshop, Atlanta, GA, USA, 27–30 June 2007.
32. Pedersen, B.K.M.K.; Weigelin, B.C.; Larsen, J.C.; Nielsen, J. Using educational robotics to foster girls’ interest in STEM: A systematic review. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 865–872.
33. Torpegaard, J.; Knudsen, L.S.; Linnet, M.P.; Skov, M.B.; Merritt, T. Preschool children’s social and playful interactions with a play-facilitating cardboard robot. Int. J. Child-Comput. Interact. 2022, 31, 100435.
34. Metatla, O.; Bardot, S.; Cullen, C.; Serrano, M.; Jouffrais, C. Robots for Inclusive Play: Co-Designing an Educational Game with Visually Impaired and Sighted Children. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13.
35. LePage, L. Performing Robots: Some Reflections. In Proceedings of the TaPRA Interim Event: Performing Robots, Arena Theatre, Wolverhampton, UK, 2 March 2017.
36. Lin, C.Y.; Tseng, C.K.; Teng, W.C.; Lee, W.C.; Kuo, C.H.; Gu, H.Y.; Chung, K.L.; Fahn, C.S. The realization of robot theater: Humanoid robots and theatric performance. In Proceedings of the 2009 International Conference on Advanced Robotics, Munich, Germany, 22–26 June 2009; pp. 1–6.
37. Hargreaves, R. Mr. Cheerful; Egmont Books Ltd.: London, UK, 2014.
38. Collodi, C. The Adventures of Pinocchio (Oxford World’s Classics); Oxford University Press: New York, NY, USA, 2009.
39. Arriaga, O.; Valdenegro-Toro, M.; Muthuraja, M.; Devaramani, S.; Kirchner, F. Perception for Autonomous Systems (PAZ). arXiv 2020, arXiv:2010.14541.
40. Arriaga, O.; Valdenegro-Toro, M.; Plöger, P.G. Real-time Convolutional Neural Networks for emotion and gender classification. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2019), Bruges, Belgium, 24–26 April 2019.
41. Mellouk, W.; Handouzi, W. Facial emotion recognition using deep learning: Review and insights. Procedia Comput. Sci. 2020, 175, 689–694.
42. Editorial: What’s the next word in large language models? Nat. Mach. Intell. 2023, 5, 331–332.
43. Endrawis, S.; Leibovich, G.; Jacob, G.; Novik, G.; Tamar, A. Efficient Self-Supervised Data Collection for Offline Robot Learning. arXiv 2021, arXiv:2105.04607.
44. Ravichandar, H.; Polydoros, A.S.; Chernova, S.; Billard, A. Recent Advances in Robot Learning from Demonstration. Annu. Rev. Control Robot. Auton. Syst. 2020, 3, 297–330.
45. Dunbabin, M.; Marques, L. Robots for Environmental Monitoring: Significant Advancements and Applications. IEEE Robot. Autom. Mag. 2012, 19, 24–39.
46. Tekdas, O.; Isler, V.; Lim, J.H.; Terzis, A. Using mobile robots to harvest data from sensor fields. IEEE Wirel. Commun. 2009, 16, 22–28.
47. Han, Z.; Williams, T. Towards Formalizing HRI Data Collection Processes. In Proceedings of the 4th Annual Workshop on Novel and Emerging Test Methods & Metrics for Effective HRI, Online, 11 March 2022.
48. Ismail, L.; Shamsuddin, S.; Yussof, H.; Hashim, H.; Bahari, S.; Jaafar, A.; Zahari, I. Face detection technique of Humanoid Robot NAO for application in robotic assistive therapy. In Proceedings of the 2011 IEEE International Conference on Control System, Computing and Engineering, Penang, Malaysia, 25–27 November 2011; pp. 517–521.
49. Mori, M. Bukimi no tani [The uncanny valley]. Energy 1970, 7, 33–35.
50. Destephe, M.; Zecca, M.; Hashimoto, K.; Takanishi, A. Uncanny valley, robot and autism: Perception of the uncanniness in an emotional gait. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 1152–1157.
51. Brink, K.A.; Gray, K.; Wellman, H.M. Creepiness Creeps In: Uncanny Valley Feelings Are Acquired in Childhood. Child Dev. 2019, 90, 1202–1214.
52. Pandey, A.K.; Gelin, R. A Mass-Produced Sociable Humanoid Robot: Pepper: The First Machine of Its Kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48.
53. Mathur, M.B.; Reichling, D.B. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition 2016, 146, 22–32.
54. von der Pütten, A.M.R.; Krämer, N.C. How design characteristics of robots determine evaluation and uncanny valley related responses. Comput. Hum. Behav. 2014, 36, 422–439.
55. Kozima, H.; Nakagawa, C. Interactive Robots as Facilitators of Children’s Social Development. In Mobile Robots: Towards New Applications; Lazinica, A., Ed.; IntechOpen: Rijeka, Croatia, 2006; Chapter 14.
56. Gaertner, B.M.; Spinrad, T.L.; Eisenberg, N. Focused Attention in Toddlers: Measurement, Stability, and Relations to Negative Emotion and Parenting. Infant Child Dev. 2008, 17, 339–363.
57. Pinto-Bernal, M.J.; Sierra, S.D.M.; Munera, M.; Casas, D.; Villa-Moreno, A.; Frizera-Neto, A.; Stoelen, M.F.; Belpaeme, T.; Cifuentes, C.A. Do different robot appearances change emotion recognition in children with ASD? Front. Neurorobot. 2023, 17, 11.
58. Andreasson, R.; Alenljung, B.; Billing, E.; Lowe, R. Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot. Int. J. Soc. Robot. 2018, 10, 473–491.
59. Winston, A.S.; Kenyon, B.; Stewardson, J.; Lepine, T. Children’s Sensitivity to Expression of Emotion in Drawings. Vis. Arts Res. 1995, 21, 1–14.
60. Konrad, A. Inflection AI, Startup from Ex-DeepMind Leaders, Launches Pi—A Chattier Chatbot. Forbes, 2023.
61. Admoni, H. Nonverbal Communication in Socially Assistive Human-Robot Interaction. AI Matters 2016, 2, 9–10.
Figure 1. Finger grabbing is a very popular application where the robot responds to touch by opening or closing its palms. The picture was taken during a visit to a kindergarten and anonymized in order to protect identities.
Figure 2. Drawings of the NAO robot before (a,c) and after (b,d) the session, which included a theatrical performance, dance, pantomime, finger grabbing, and physical contact with the robot. Figures (a,b) were drawn by a 3-year-old child while figures (c,d) were drawn by a 5-year-old child.