Human Computer Interaction and Its Future

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 82400

Special Issue Editor


Dr. Michael Wehner
Guest Editor
Electrical and Computer Engineering, University of California Santa Cruz, Santa Cruz, CA, USA
Interests: robotics; human-machine interaction; soft systems; wearables; rehabilitation

Special Issue Information

Dear Colleagues,

Humans and computers/machinery have had a complex relationship throughout history. From can openers to online shopping, we see countless areas in which electromechanical devices assist us every day. However, our interactions with these systems have been fraught with difficulty. From repetitive stress injuries to the 2008 mortgage meltdown, our relationships with computers, and the difficulties within those relationships, have proven complex and multifaceted. As our computers and mechatronic systems become more sophisticated, our interactions, our problems, and the solutions to those problems grow in complexity.

A myriad of technical, personal, and societal difficulties has given rise to an equally broad range of solutions and areas of research, which in turn often produce entirely novel approaches and even new fields of study.

The main aim of this Special Issue is to seek high-quality submissions that highlight emerging methods of identifying the nature of human–machine interaction, quantitatively and qualitatively evaluating the relative risks and merits of that interaction, and exploring possible solutions.

Topics of interest include, but are not limited to, the following:

- Direct human–machine interface: ergonomics, safety, and emerging solutions

- Assistive technologies

- Augmentative technologies

- Robotics: companion robots, workplace robots, healthcare robots, and soft robots

- Wearables: active and passive orthotics, prosthetics, exoskeletons, and wearable sensors

- Gesture recognition and virtual reality

- Social issues in human–computer interactions

Dr. Michael Wehner
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Research


16 pages, 2596 KiB  
Article
VR-PEER: A Personalized Exer-Game Platform Based on Emotion Recognition
by Yousra Izountar, Samir Benbelkacem, Samir Otmane, Abdallah Khababa, Mostefa Masmoudi and Nadia Zenati
Electronics 2022, 11(3), 455; https://doi.org/10.3390/electronics11030455 - 3 Feb 2022
Cited by 14 | Viewed by 3430
Abstract
Motor rehabilitation exercises require recurrent repetitions to enhance patients' gestures. However, these repetitive gestures usually decrease patients' motivation and stress them. Virtual Reality (VR) exer-games (serious games in general) could be an alternative solution to this problem: the technology encourages patients to train different gestures with less effort, since they are totally immersed in an easy-to-play exer-game. Despite this evolution, patients using the available exer-games still struggle to perform their gestures correctly and without pain, because the developed applications do not consider patients' psychological states while playing. We therefore believe it is necessary to develop personalized, adaptive exer-games that take patients' emotions into consideration during rehabilitation exercises. This paper proposes VR-PEER, an adaptive exer-game system based on emotion recognition. The platform contains three main modules: (1) a computing and interpretation module, (2) an emotion recognition module, and (3) an adaptation module. Furthermore, a virtual-reality-based serious game was developed as a case study; it uses continuously updated facial expression data to dynamically select the appropriate game for the patient during rehabilitation exercises. An experimental study was conducted on fifteen subjects, who expressed the usefulness of the proposed system in the motor rehabilitation process.
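To make the adaptation idea concrete, here is a minimal, hypothetical Python sketch of an emotion-driven adaptation module; the parameter names, emotion labels, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical emotion-driven adaptation loop for an exer-game.
from dataclasses import dataclass

@dataclass
class GameParams:
    speed: float        # target-gesture speed multiplier
    range_scale: float  # fraction of full joint range required

def adapt(params: GameParams, emotion: str) -> GameParams:
    """Relax the exercise on stress/frustration; raise the challenge on
    positive engagement; otherwise keep the current difficulty."""
    if emotion in ("stress", "frustration"):
        return GameParams(max(0.5, params.speed * 0.9),
                          max(0.6, params.range_scale * 0.95))
    if emotion in ("joy", "engagement"):
        return GameParams(min(1.5, params.speed * 1.05),
                          min(1.0, params.range_scale * 1.02))
    return params

params = GameParams(speed=1.0, range_scale=0.8)
for detected in ["neutral", "stress", "stress", "joy"]:  # labels from a facial-expression classifier
    params = adapt(params, detected)
print(params)
```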

19 pages, 6253 KiB  
Article
Soft Robotic Sensing, Proprioception via Cable and Microfluidic Transmission
by Keng-Yu Lin, Arturo Gamboa-Gonzalez and Michael Wehner
Electronics 2021, 10(24), 3166; https://doi.org/10.3390/electronics10243166 - 19 Dec 2021
Cited by 4 | Viewed by 4358
Abstract
Current challenges in soft robotics include sensing and state awareness. Modern soft robotic systems require many more sensors than traditional robots to estimate pose and contact forces. Existing soft sensors include resistive, conductive, optical, and capacitive sensing, with each sensor requiring electronic circuitry and connection to a dedicated line to a data acquisition system, creating a rapidly increasing burden as the number of sensors increases. We demonstrate a network of fiber-based displacement sensors to measure robot state (bend, twist, elongation) and two microfluidic pressure sensors to measure overall and local pressures. These passive sensors transmit information from a soft robot to a nearby display assembly, where a digital camera records displacement and pressure data. We present a configuration in which one camera tracks 11 sensors consisting of nine fiber-based displacement sensors and two microfluidic pressure sensors, eliminating the need for an array of electronic sensors throughout the robot. Finally, we present a Cephalopod-chromatophore-inspired color cell pressure sensor. While these techniques can be used in a variety of soft robot devices, we present fiber and fluid sensing on an elastomeric finger. These techniques are widely suitable for state estimation in the soft robotics field and will allow future progress toward robust, low-cost, real-time control of soft robots. This increased state awareness is necessary for robots to interact with humans, potentially the greatest benefit of the emerging soft robotics field.
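As a rough sketch of the camera-side readout idea (not the authors' code), the following Python/OpenCV snippet tracks a colored fiber-tip marker and converts its pixel displacement into millimeters; the HSV bounds and calibration constant are assumptions.

```python
# Track one colored fiber-tip marker in the display assembly and report
# its displacement from a reference origin, in millimeters.
import cv2
import numpy as np

LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # green marker (assumed)
PX_PER_MM = 12.0  # from a one-time calibration (assumed)

def marker_displacement_mm(frame, origin_px):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # marker centroid [px]
    return ((cx - origin_px[0]) / PX_PER_MM, (cy - origin_px[1]) / PX_PER_MM)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(marker_displacement_mm(frame, origin_px=(320, 240)))
cap.release()
```

In the paper's configuration, one camera views all eleven sensors at once, so a loop over eleven such markers replaces the full array of electronic sensor lines.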

15 pages, 2037 KiB  
Article
Advanced Alarm Method Based on Driver’s State in Autonomous Vehicles
by Ji-Hyeok Han and Da-Young Ju
Electronics 2021, 10(22), 2796; https://doi.org/10.3390/electronics10222796 - 15 Nov 2021
Cited by 8 | Viewed by 2637
Abstract
In autonomous driving vehicles, the driver can engage in non-driving-related tasks and does not have to pay attention to the driving conditions or engage in manual driving. If an unexpected situation arises that the autonomous vehicle cannot manage, the vehicle should notify the driver and help them prepare to retake manual control. Several effective notification methods based on multimodal warning systems have been reported. In this paper, we propose an advanced method that employs alarms for specific conditions by analyzing the differences in drivers' responses to visual and auditory alarms in autonomous vehicles, based on their specific situation. Using a driving simulation, we carried out human-in-the-loop experiments that included 38 drivers and two scenarios (drowsiness and distraction), each of which included a control-switching stage for implementing an alarm during autonomous driving. Reaction time, gaze indicator, and questionnaire data were collected, and electroencephalography measurements were performed to verify drowsiness. Based on the experimental results, the drivers exhibited high alertness to the auditory alarms in both the drowsy and distracted conditions, and the change in the gaze indicator was higher in the distraction condition. The results of this study show a distinct difference between drivers' responses to alarms in the drowsy and distracted conditions. Accordingly, we propose an advanced notification method and future goals for further investigation of vehicle alarms.
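As a rough illustration of the kind of reaction-time comparison such a study reports, a Welch t-test on synthetic data (these numbers are invented, not the study's):

```python
# Synthetic comparison of reaction times to two alarm modalities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_auditory = rng.normal(1.8, 0.4, 19)  # seconds, synthetic
rt_visual = rng.normal(2.4, 0.5, 19)

t, p = stats.ttest_ind(rt_auditory, rt_visual, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```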

21 pages, 5877 KiB  
Article
Exploring the Effect of Robot-Based Video Interventions for Children with Autism Spectrum Disorder as an Alternative to Remote Education
by Diego Antonio Urdanivia Alarcon, Sandra Cano, Fabian Hugo Rucano Paucar, Ruben Fernando Palomino Quispe, Fabiola Talavera-Mendoza and María Elena Rojas Zegarra
Electronics 2021, 10(21), 2577; https://doi.org/10.3390/electronics10212577 - 21 Oct 2021
Cited by 6 | Viewed by 2730
Abstract
Education systems are currently in a state of uncertainty in the face of the changes and complexities that have accompanied SARS-CoV-2, leading to new directions in educational models and curricular reforms. Video-based interventions (VBIs) are a form of observational learning based on social learning theory. This study makes use of a humanoid robot called NAO, which has been used in educational interventions for children with autism spectrum disorder (ASD), integrating it into video-based interventions. The aim is to characterize, in an everyday context, the mediating role of the NAO robot, presented in group videoconferences, in stimulating video-based observational learning for children with cognitive and social-communication deficits. The children in the study demonstrated a minimal ability to understand simple instructions. This qualitative study involved three children with ASD, level III special education students at a Center for Special Basic Education (CEBE) in the city of Arequipa, Perú. An instrument was also designed for the assessment of the VBIs by a group of psychologists. The results showed that the presence of the NAO robot in the VBIs successfully stimulated the children's interaction capabilities.

11 pages, 1560 KiB  
Article
Factors Contributing to Korean Older Adults’ Acceptance of Assistive Social Robots
by Lin Wang, Jia Chen and Da-Young Ju
Electronics 2021, 10(18), 2204; https://doi.org/10.3390/electronics10182204 - 9 Sep 2021
Cited by 4 | Viewed by 2219
Abstract
This study investigated the factors contributing to older adults' acceptance of assistive social robots. A survey was conducted to identify factors explaining and predicting older adults' acceptance behavior toward assistive social robots. Three factors of older adults' needs for assistive social robots were found (advanced needs, social needs, and physiological needs), which integrate Maslow's five levels of basic human needs. According to older adults' self-reported scores, the most important needs were physiological needs, followed by advanced needs and social needs. A regression analysis showed that advanced needs and social needs significantly influence older adults' intention to use assistive social robots. The results can assist in the future design of assistive social robot functions and features targeting the older population.
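A sketch of the reported analysis style: an ordinary least squares regression predicting use intention from the three need factors, on synthetic placeholder data (not the survey's data).

```python
# Synthetic OLS regression: use intention ~ three need factors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
advanced = rng.normal(4.0, 0.8, n)       # factor scores on a 5-point scale (synthetic)
social = rng.normal(3.5, 0.9, n)
physiological = rng.normal(4.3, 0.7, n)
intention = 0.5 * advanced + 0.3 * social + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([advanced, social, physiological]))
model = sm.OLS(intention, X).fit()
print(model.summary(xname=["const", "advanced", "social", "physiological"]))
```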

23 pages, 5602 KiB  
Article
Sensory Substitution for the Visually Impaired: A Study on the Usability of the Sound of Vision System in Outdoor Environments
by Otilia Zvorișteanu, Simona Caraiman, Robert-Gabriel Lupu, Nicolae Alexandru Botezatu and Adrian Burlacu
Electronics 2021, 10(14), 1619; https://doi.org/10.3390/electronics10141619 - 6 Jul 2021
Cited by 9 | Viewed by 3657
Abstract
For most visually impaired people, simple tasks such as understanding the environment or moving safely around it represent huge challenges. The Sound of Vision system was designed as a sensory substitution device, based on computer vision techniques, that encodes any environment into a naturalistic representation delivered through audio and haptic feedback. The present paper presents a study on the usability of this system for visually impaired people in relevant environments. The aim of the study is to assess how well the system helps the perception and mobility of visually impaired participants in real-life environments and circumstances. The testing scenarios were devised to allow assessment of the added value of the Sound of Vision system compared to traditional assistive instruments, such as the white cane. Various data were collected during the tests to allow a better evaluation of performance: system configuration, completion times, electro-dermal activity, video footage, and user feedback. With minimal training, the system could be successfully used in outdoor environments to perform various perception and mobility tasks. The participants and the evaluation results confirmed the benefits of the Sound of Vision device compared to the white cane: it provides early feedback about static and dynamic objects, as well as feedback about elevated objects, walls, negative obstacles (e.g., holes in the ground), and signs.

16 pages, 1106 KiB  
Article
Mitigating Children’s Pain and Anxiety during Blood Draw Using Social Robots
by Matthijs H. J. Smakman, Koen Smit, Lotte Buser, Tom Monshouwer, Nigel van Putten, Thymen Trip, Coen Schoof, Daniel F. Preciado, Elly A. Konijn, Esther M. van der Roest and Wouter M. Tiel Groenestege
Electronics 2021, 10(10), 1211; https://doi.org/10.3390/electronics10101211 - 19 May 2021
Cited by 13 | Viewed by 5832
Abstract
Young pediatric patients who undergo venipuncture or capillary blood sampling often experience high levels of pain and anxiety. This often results in distressed young patients and their parents, increased treatment times, and a higher workload for healthcare professionals. Social robots are a new and promising tool for mitigating children's pain and anxiety. This study aims to purposefully design and test a social robot for mitigating stress and anxiety during blood draw in children. We first programmed a social robot based on the requirements expressed by experienced healthcare professionals during focus group sessions. Next, we designed a randomized controlled experiment in which the social robot was applied as a distraction method to measure its capacity to mitigate pain and anxiety in children during blood draw in a children's hospital setting. Children who interacted with the robot showed significantly lower levels of anxiety before actual blood collection, compared to children who received regular medical treatment. Children in the middle classes of primary school (aged 6–9) seemed especially sensitive to the robot's ability to mitigate pain and anxiety before blood draw. Children's parents overall expressed strong positive attitudes toward the use and effectiveness of the social robot for mitigating pain and anxiety. The results of this study demonstrate that social robots can be considered a new and effective tool for lowering children's anxiety prior to the distressing medical procedure of blood collection.

30 pages, 8799 KiB  
Article
Spatial Components Guidelines in a Face-to-Face Seating Arrangement for Flexible Layout of Autonomous Vehicles
by Ju Yeong Kwon and Da Young Ju
Electronics 2021, 10(10), 1178; https://doi.org/10.3390/electronics10101178 - 14 May 2021
Cited by 2 | Viewed by 4422
Abstract
Fully autonomous vehicles are not yet available for consumers to experience; however, as experts predict they will be ready for the consumer market in the not-too-distant future, it is important to consider the spatial design of such vehicles. As the interior of a vehicle is a confined space, it is important to design a layout that is flexible across different aspects of the overall space. Therefore, this study aimed to analyze the relationships among various elements related to the use of space in a face-to-face seating arrangement. Using a mock-up, observational surveys, questionnaires, and the think-aloud method within an ethnographic observation framework, we conducted experiments with three participants who were aware of the changing concept of autonomous vehicles. One key finding of our analysis is that a wide variety of activities and actions can occur in a face-to-face seating arrangement: such arrangements do not merely facilitate conversation but can be seen as environments in which each passenger can conduct other in-vehicle activities individually. Based on these findings, we recommend that passengers' activity needs be considered when designing spatial components for a face-to-face seating arrangement.

13 pages, 2887 KiB  
Article
Position Control for Soft Actuators, Next Steps toward Inherently Safe Interaction
by Dongshuo Li, Vaishnavi Dornadula, Kengyu Lin and Michael Wehner
Electronics 2021, 10(9), 1116; https://doi.org/10.3390/electronics10091116 - 9 May 2021
Cited by 10 | Viewed by 3225
Abstract
Soft robots present an avenue toward unprecedented societal acceptance, utility in populated environments, and direct interaction with humans. However, the compliance that makes them attractive also makes soft robots difficult to control. We present two low-cost approaches to controlling the motion of soft actuators in applications common to human-interaction tasks. First, we present a passive impedance approach, which employs restriction of the pneumatic channels to regulate the inflation/deflation rate of a pneumatic actuator and eliminate the overshoot and oscillation seen in many underdamped silicone-based soft actuators. Second, we present a visual servoing feedback control approach. We present an elastomeric pneumatic finger as an example system on which both methods are evaluated and compared to an uncontrolled, underdamped actuator. We perturb the actuator and demonstrate its ability to increase distal curvature around an obstacle while maintaining the desired end position. In this approach, we use the continuum deformation characteristic of soft actuators as an advantage for control rather than a problem to be minimized. With their low cost and complexity, these techniques present a great opportunity for soft robots to improve human–robot interaction.
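To illustrate the feedback idea on a toy model (this is not the authors' controller), the following runnable sketch applies PD-style visual-servo feedback to an invented underdamped second-order actuator model; all plant constants and gains are assumptions.

```python
# Toy closed-loop simulation: camera-style curvature feedback with PD
# control on a lightly damped pneumatic-finger model.
import numpy as np

def simulate(kp=6.0, kd=0.4, target=1.0, dt=0.002, t_end=3.0,
             wn=8.0, zeta=0.15):              # lightly damped plant (assumed)
    x, v = 0.0, 0.0                           # curvature and its rate
    history = []
    for _ in range(int(t_end / dt)):
        u = kp * (target - x) - kd * v        # feedback from "camera" curvature estimate
        a = wn**2 * (u - x) - 2 * zeta * wn * v  # toy second-order actuator dynamics
        v += a * dt
        x += v * dt
        history.append(x)
    return np.array(history)

curv = simulate()
print(f"final curvature: {curv[-1]:.3f}, peak: {curv.max():.3f}")
```

The derivative term plays the role of the added damping that, in the paper's passive approach, comes from restricting the pneumatic channels.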

17 pages, 4858 KiB  
Article
Human Signature Identification Using IoT Technology and Gait Recognition
by Mihaela Hnatiuc, Oana Geman, Andrei George Avram, Deepak Gupta and K. Shankar
Electronics 2021, 10(7), 852; https://doi.org/10.3390/electronics10070852 - 2 Apr 2021
Cited by 21 | Viewed by 3873
Abstract
This study aimed to develop an autonomous system for recognizing a subject by gait posture. Gait posture is a type of non-verbal communication characteristic of each person and can be considered a signature used in identification. The system can also be used for diagnosis, helping aging or disabled subjects identify incorrect posture and recover their gait. Gait posture provides information for subject identification, using leg movements and step distance as characteristic parameters. In the current study, the inertial measurement units (IMUs) of a mobile phone were used to provide information about the movement of the upper and lower parts of the leg. A resistive flex sensor (RFS) was used to obtain information about the foot's contact with the ground. The data were collected from a target group comprising subjects of different ages, heights, and masses. A comparative study was undertaken to identify the subject from the gait posture. Statistical analysis and a machine learning algorithm were used for data processing. The errors obtained after training are presented at the end of the paper, and the results are encouraging. This article proposes a method of acquiring data that is available to anyone, using ubiquitous devices such as mobile phones.
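As an illustration of the classification stage only (the paper's exact feature set differs), here is a sketch that trains a standard classifier on synthetic per-stride features; the feature names are illustrative.

```python
# Subject identification from synthetic per-stride gait features
# (e.g., step length, cadence, thigh/calf angle ranges, stance time).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_subjects, strides = 5, 40
X = np.vstack([rng.normal(loc=s, scale=0.3, size=(strides, 5))
               for s in range(n_subjects)])      # one feature cluster per subject
y = np.repeat(np.arange(n_subjects), strides)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"subject-ID accuracy: {scores.mean():.2%} ± {scores.std():.2%}")
```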

18 pages, 2819 KiB  
Article
Optimization of Dynamic Task Location within a Manipulator’s Workspace for the Utilization of the Minimum Required Joint Torques
by Adam Wolniakowski, Charalampos Valsamos, Kanstantsin Miatliuk, Vassilis Moulianitis and Nikos Aspragathos
Electronics 2021, 10(3), 288; https://doi.org/10.3390/electronics10030288 - 26 Jan 2021
Cited by 4 | Viewed by 2135
Abstract
The determination of the optimal position of a robotic task within a manipulator's workspace is crucial for the manipulator to achieve high performance regarding selected aspects of its operation. In this paper, a method for determining the optimal task placement for a serial manipulator is presented, such that the required joint torques are minimized. The task considered comprises the exertion of a given force in a given direction along a 3D path followed by the end effector. Given that many such tasks are usually conducted by human workers, and the utilized trajectories are therefore quite complex to model, a Human Robot Interaction (HRI) approach was chosen to define the task: the robot is taught the task trajectory by a human operator. Furthermore, the presented method considers singularity-free paths of the manipulator's end-effector motion in the configuration space. Simulation results are used to set up a physical execution of the task at the optimal derived position within a UR-3 manipulator's workspace. For reference, the task is also placed at an arbitrary "bad" location in order to validate the simulation results. Experimental results verify that positioning the task at the optimal location derived by the presented method allows the task to be executed with minimum joint torques, as opposed to the arbitrary position.
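The central quantity is the static joint torque tau = J(q)^T F. A toy sketch for a planar two-link arm (not the paper's method; link lengths, force, path, and grid are invented) shows how candidate task placements can be searched for the minimum peak torque:

```python
# Search task placements for a planar 2-link arm exerting a fixed force
# along a short straight path, minimizing peak |tau| via tau = J(q)^T F.
import numpy as np

L1, L2 = 0.4, 0.3           # link lengths [m] (assumed)
F = np.array([0.0, -20.0])  # task force [N], pressing downward (assumed)

def ik(x, y):
    """Elbow-up inverse kinematics; returns None if unreachable."""
    c2 = (x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        return None
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

def peak_torque(x0, y0):
    taus = []
    for s in np.linspace(0, 0.1, 20):          # 10 cm straight-line task
        q = ik(x0 + s, y0)
        if q is None:
            return np.inf                      # placement leaves the workspace
        q1, q2 = q
        J = np.array([
            [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
            [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
        ])
        taus.append(np.abs(J.T @ F).max())     # worst joint torque at this point
    return max(taus)

grid = [(x, y) for x in np.linspace(0.1, 0.6, 26) for y in np.linspace(-0.3, 0.5, 33)]
best = min(grid, key=lambda p: peak_torque(*p))
print(f"best task origin: {best}, peak |tau| = {peak_torque(*best):.2f} N*m")
```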

15 pages, 2618 KiB  
Article
Modeling the Conditional Distribution of Co-Speech Upper Body Gesture Jointly Using Conditional-GAN and Unrolled-GAN
by Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi and Hiroshi Ishiguro
Electronics 2021, 10(3), 228; https://doi.org/10.3390/electronics10030228 - 20 Jan 2021
Cited by 26 | Viewed by 3253
Abstract
Co-speech gestures are a crucial, non-verbal modality for humans to communicate. Social agents also need this capability to be more human-like and comprehensive. This study aims to model the distribution of gestures conditioned on human speech features. Unlike previous studies that try to find injective functions that map speech to gestures, we propose a novel, conditional-GAN-based generative model that not only converts speech into gestures but also approximates the distribution of gestures conditioned on speech through parameterization. An objective evaluation and a user study show that the proposed model outperformed an existing deterministic model, indicating that generative models can better approximate real patterns of co-speech gestures. Our results suggest that it is critical to consider the nature of randomness when modeling co-speech gestures.
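A minimal conditional-GAN sketch in PyTorch conveys the core idea: noise injected alongside the speech condition lets one speech input map to many gestures. Dimensions and architecture are illustrative assumptions, and the paper's unrolled-GAN discriminator updates are omitted here.

```python
# Toy conditional GAN: generate a gesture pose vector conditioned on a
# speech-feature vector, trained here on random stand-in data.
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, POSE_DIM = 32, 16, 30

G = nn.Sequential(nn.Linear(SPEECH_DIM + NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, POSE_DIM))
D = nn.Sequential(nn.Linear(SPEECH_DIM + POSE_DIM, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

for step in range(200):
    speech = torch.randn(64, SPEECH_DIM)
    real_pose = torch.randn(64, POSE_DIM)    # stand-in for motion-capture gestures
    fake_pose = G(torch.cat([speech, torch.randn(64, NOISE_DIM)], dim=1))

    # Discriminator: real (speech, pose) pairs vs. generated ones.
    d_loss = bce(D(torch.cat([speech, real_pose], 1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([speech, fake_pose.detach()], 1)), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator for the same conditioning.
    g_loss = bce(D(torch.cat([speech, fake_pose], 1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```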

18 pages, 1493 KiB  
Article
A Novel Motion Intention Recognition Approach for Soft Exoskeleton via IMU
by Lu Zhu, Zhuo Wang, Zhigang Ning, Yu Zhang, Yida Liu, Wujing Cao, Xinyu Wu and Chunjie Chen
Electronics 2020, 9(12), 2176; https://doi.org/10.3390/electronics9122176 - 18 Dec 2020
Cited by 46 | Viewed by 5712
Abstract
To address the complexity of traditional motion intention recognition methods, which use multi-modal sensor signals, and the lag of the recognition process, this paper proposes an inertial-sensor-based motion intention recognition method for a soft exoskeleton. Compared with traditional motion recognition, in addition to the classic five types of terrain, the recognition of transitional terrain is also added. For mode acquisition, sensor data from the thigh and calf in different motion modes are collected. After a series of preprocessing steps, such as filtering and normalization, a sliding window is used to augment the data, so that each frame of inertial measurement unit (IMU) data keeps the last half of the previous frame's history. Finally, we designed a deep convolutional neural network that learns to extract discriminative features from the temporal gait period to classify different terrain. The experimental results show that the proposed method can recognize the pose of the soft exoskeleton on different terrain, including walking on flat ground, going up and down stairs, and walking up and down slopes, with a recognition accuracy of 97.64%. In addition, the recognition delay for transitions between the five modes accounts for only 23.97% of a gait cycle. Finally, oxygen consumption was measured with a wearable metabolic system (COSMED K5, The Metabolic Company, Rome, Italy) and compared with that measured without the recognition method; net metabolism was reduced by 5.79%. The method in this paper can greatly improve the control performance of flexible lower-extremity exoskeleton systems and realize natural, seamless state switching of the exoskeleton between multiple motion modes according to the human motion intention.
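The sliding-window augmentation described above is easy to picture in code. A sketch with assumed window length and channel count (50% overlap gives each frame the last half of the previous frame's history):

```python
# 50%-overlapping sliding windows over a normalized IMU stream.
import numpy as np

def sliding_windows(imu: np.ndarray, win: int = 200) -> np.ndarray:
    """imu: (T, C) stream -> (N, win, C) frames with hop = win // 2."""
    hop = win // 2
    n = (imu.shape[0] - win) // hop + 1
    return np.stack([imu[i * hop : i * hop + win] for i in range(n)])

stream = np.random.randn(2000, 12)  # e.g., 2 IMUs x (acc + gyro) x 3 axes (assumed)
frames = sliding_windows(stream)
print(frames.shape)  # (19, 200, 12), ready for a CNN classifier
```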

25 pages, 2972 KiB  
Article
Comparing VR- and AR-Based Try-On Systems Using Personalized Avatars
by Yuzhao Liu, Yuhan Liu, Shihui Xu, Kelvin Cheng, Soh Masuko and Jiro Tanaka
Electronics 2020, 9(11), 1814; https://doi.org/10.3390/electronics9111814 - 2 Nov 2020
Cited by 31 | Viewed by 11355
Abstract
Despite the convenience offered by e-commerce, online apparel shopping presents various product-related risks, as consumers can neither physically see nor try products on themselves. Augmented reality (AR) and virtual reality (VR) technologies have been used to improve the online shopping experience. We therefore propose an AR- and VR-based try-on system that provides users with a novel shopping experience in which they can view garments fitted onto their personalized virtual body. Recorded personalized motions are used to allow users to dynamically interact with their dressed virtual body in AR. We conducted two user studies to compare the different roles of VR- and AR-based try-on and to validate the impact of personalized motions on the virtual try-on experience. In the first user study, a mobile application with AR- and VR-based try-on was compared to a traditional e-commerce interface. In the second user study, personalized avatars with pre-defined motion and with personalized motion were compared to a personalized no-motion avatar in AR-based try-on. The results show that AR- and VR-based try-on can positively influence the shopping experience compared with the traditional e-commerce interface. Overall, AR-based try-on provides a better and more realistic garment visualization than VR-based try-on. In addition, we found that personalized motions do not directly affect the user's shopping experience.

11 pages, 408 KiB  
Article
Motor-Imagery Classification Using Riemannian Geometry with Median Absolute Deviation
by Abu Saleh Musa Miah, Md Abdur Rahim and Jungpil Shin
Electronics 2020, 9(10), 1584; https://doi.org/10.3390/electronics9101584 - 27 Sep 2020
Cited by 25 | Viewed by 3142
Abstract
Motor imagery (MI) from human brain signals can diagnose or aid specific physical activities for rehabilitation, recreation, device control, and technology assistance. It is a dynamic state in learning and practicing movement tracking, in which a person mentally imitates a physical activity. It has been established that a brain–computer interface (BCI) can support this kind of neurological rehabilitation or mental practice of action. In this context, MI data are captured via non-invasive electroencephalography (EEG), and EEG-based BCIs are expected to become clinically and recreationally ground-breaking technology. However, determining a set of efficient and relevant features for the classification step remains a challenge. In this paper, we specifically focus on feature extraction, feature selection, and classification strategies based on MI-EEG data. In the MI-based BCI domain, covariance matrices play an important role in extracting discriminatory features from EEG datasets. To explore efficient and discriminatory features for the enhancement of MI classification, we introduce a median absolute deviation (MAD) strategy that operates on the averaged sample covariance matrices (SCMs) to select an optimal, accurate reference matrix for tangent space mapping (TSM) of MI-EEG. All SCMs are then projected via TSM with respect to the reference matrix, yielding the feature vectors. To increase performance, we reduce the dimensionality and select an optimal number of features using principal component analysis (PCA) together with analysis of variance (ANOVA). The selected features are then used to train a linear discriminant analysis (LDA) classifier. Benchmark datasets were used for the evaluation, and the results show that the approach provides better accuracy than more sophisticated methods.
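A sketch of the tangent space mapping step only (the MAD-based reference selection is the paper's contribution and is not reproduced; the arithmetic mean stands in for the reference here): whiten each SCM by the reference matrix, take the matrix logarithm, and vectorize the upper triangle.

```python
# Tangent space mapping of SPD sample covariance matrices.
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def tangent_space_features(scms, c_ref):
    """scms: (n, d, d) SPD matrices -> (n, d*(d+1)/2) feature vectors."""
    w = fractional_matrix_power(c_ref, -0.5)     # whitening by the reference
    iu = np.triu_indices(c_ref.shape[0])
    feats = []
    for s in scms:
        t = logm(w @ s @ w)                      # projection to the tangent space
        t = t * (np.sqrt(2) - (np.sqrt(2) - 1) * np.eye(len(t)))  # weight off-diagonals
        feats.append(t[iu].real)
    return np.array(feats)

rng = np.random.default_rng(3)
trials = rng.standard_normal((20, 8, 100))       # 20 trials, 8 EEG channels (synthetic)
scms = np.array([x @ x.T / x.shape[1] for x in trials])
feats = tangent_space_features(scms, scms.mean(axis=0))
print(feats.shape)  # (20, 36): inputs for PCA/ANOVA selection and LDA
```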

15 pages, 1821 KiB  
Article
Hand Movement Activity-Based Character Input System on a Virtual Keyboard
by Md Abdur Rahim and Jungpil Shin
Electronics 2020, 9(5), 774; https://doi.org/10.3390/electronics9050774 - 8 May 2020
Cited by 13 | Viewed by 4349
Abstract
Gesture-based technology is revolutionizing how users communicate, secure information, and carry out day-to-day operations. Hand movement information provides an alternative way for users to interact with people, machines, or robots. This paper therefore presents a character input system using a virtual keyboard, based on the analysis of hand movements. We analyzed the signals of an accelerometer, a gyroscope, and electromyography (EMG) for movement activity, and removed noise from the input signals using a wavelet denoising technique. The envelope spectrum is used to analyze the accelerometer and gyroscope signals, and the cepstrum is used for the EMG signal. A support vector machine (SVM) is then trained to detect the gestures that drive character input. To validate the proposed model, signal information was obtained for the predefined gestures "double-tap", "hold-fist", "wave-left", "wave-right", and "spread-finger" from different respondents, for input actions such as "input a character", "change character", "delete a character", "line break", and "space character". The experimental results show the superiority of the hand gesture recognition and the accuracy of character input compared to state-of-the-art systems.
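The wavelet-denoising front end can be sketched as follows; the db4 wavelet and the universal-threshold rule are common defaults assumed here, not necessarily the paper's choices.

```python
# Soft-threshold wavelet denoising of a noisy 1-D signal (synthetic).
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest detail
    thresh = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0, 1, 1024)
clean_ref = np.sin(2 * np.pi * 5 * t)
raw = clean_ref + 0.4 * np.random.default_rng(4).standard_normal(1024)
denoised = wavelet_denoise(raw)
print(f"residual noise power: {np.var(raw - clean_ref):.3f} -> "
      f"{np.var(denoised - clean_ref):.3f}")
```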

19 pages, 1643 KiB  
Article
A CNN Based Automated Activity and Food Recognition Using Wearable Sensor for Preventive Healthcare
by Ghulam Hussain, Mukesh Kumar Maheshwari, Mudasar Latif Memon, Muhammad Shahid Jabbar and Kamran Javed
Electronics 2019, 8(12), 1425; https://doi.org/10.3390/electronics8121425 - 29 Nov 2019
Cited by 26 | Viewed by 4301
Abstract
Recent developments in the field of preventive healthcare have received considerable attention due to the effective management of various chronic diseases, including diabetes, heart stroke, obesity, and cancer. Various automated systems are used for activity and food recognition in preventive healthcare, but they lack sophisticated segmentation techniques and contain multiple sensors that are inconvenient to wear in real-life settings. To monitor activity and food together, our work presents a novel wearable system that employs the motion sensors in a smartwatch together with a piezoelectric sensor embedded in a necklace. The motion sensor generates distinct patterns for eight different physical activities, including eating. The piezoelectric sensor generates different signal patterns for six food types, as the ingestion of each food differs from the others owing to differences in hardness, crunchiness, and tackiness. For effective representation of the signal patterns of the activities and foods, we employ dynamic segmentation. A novel algorithm called event similarity search (ESS) is developed to choose a segment of dynamic length, which represents signal patterns of different complexities equally well. Amplitude-based features and spectrogram-generated images from the activity and food segments are fed to convolutional neural network (CNN)-based activity and food recognition networks, respectively. Extensive experimentation showed that the proposed system performs better than state-of-the-art methods, recognizing eight activity types and six food categories with accuracies of 94.3% and 91.9% using a support vector machine (SVM) and CNN, respectively.
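A sketch of the spectrogram-to-CNN path (the ESS segmentation itself is the paper's own algorithm and is not reproduced): turn a piezo segment into a log-spectrogram image and classify it with a small CNN. Sampling rate, shapes, and layer sizes are illustrative.

```python
# Spectrogram image from one sensor segment, fed to a tiny CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 100.0                                   # assumed sampling rate [Hz]
segment = np.random.randn(600)               # one dynamically chosen piezo segment
_, _, sxx = spectrogram(segment, fs=fs, nperseg=64, noverlap=32)
img = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]  # (1, 1, F, T)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 6),                        # six food classes
)
print(cnn(img).softmax(dim=1))               # class probabilities (untrained)
```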

Review


40 pages, 3988 KiB  
Review
Remote Eye Gaze Tracking Research: A Comparative Evaluation on Past and Recent Progress
by Ibrahim Shehi Shehu, Yafei Wang, Athuman Mohamed Athuman and Xianping Fu
Electronics 2021, 10(24), 3165; https://doi.org/10.3390/electronics10243165 - 19 Dec 2021
Cited by 16 | Viewed by 8664
Abstract
Several decades of eye-related research have shown how valuable eye gaze data are for applications that are essential to human daily life. Eye gaze data in a broad sense have been used in research and systems for eye movements, eye tracking, and eye gaze tracking. Since the early 2000s, eye gaze tracking systems have emerged as interactive gaze-based systems that can be remotely deployed and operated, known as remote eye gaze tracking (REGT) systems. Estimating the drop point of visual attention, known as the point of gaze (PoG), and the direction of visual attention, known as the line of sight (LoS), are important tasks of REGT systems. In this paper, we present a comparative evaluation of REGT systems intended for PoG and LoS estimation, covering past to recent progress. Our literature evaluation presents promising insights into key concepts and changes recorded over time in the hardware setup, software process, application, and deployment of REGT systems. In addition, we present current issues in REGT research for future work.
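As one concrete example of a classic PoG estimation technique covered by surveys of this kind, here is a sketch of polynomial calibration: fitting a quadratic map from pupil-glint vectors to screen coordinates over a calibration grid. All data below are synthetic.

```python
# Quadratic polynomial mapping from pupil-glint vectors to screen PoG.
import numpy as np

def design(v):
    """v: (n, 2) pupil-glint vectors -> quadratic feature matrix."""
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])

rng = np.random.default_rng(5)
vecs = rng.uniform(-30, 30, (9, 2))                  # 9-point calibration grid (synthetic)
true_map = rng.uniform(-2, 2, (6, 2))
screen = design(vecs) @ true_map + rng.normal(0, 0.5, (9, 2))  # target PoG [px]

coef, *_ = np.linalg.lstsq(design(vecs), screen, rcond=None)   # least-squares fit
gaze = design(np.array([[5.0, -12.0]])) @ coef       # estimate PoG for a new sample
print(gaze)
```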
