
Advances in Human–Machine Systems, Human–Machine Interfaces and Human Wearable Device Performance

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 April 2025 | Viewed by 15084

Special Issue Editors


Guest Editor
Department of Industrial Management, Chung Hua University, Hsin-Chu, Taiwan
Interests: drone ergonomics; human–virtual object interactions; physical ergonomics; human movement science

Guest Editor
Department of Engineering and Management, Nanjing Agricultural University, Nanjing 210095, China
Interests: human–machine interactions; safety and health at work; physical work assessments

Special Issue Information

Dear Colleagues,

The human–machine system (HMS), wherein the functions of humans and machines are integrated, is one of the core issues in human-factors engineering and ergonomics. Its function and performance depend on human capability, the function of the machine, and how the two are integrated. HMSs exist wherever people are using or operating something, from using a screwdriver to navigating a commercial jet or cargo vessel. The human–machine interface (HMI), on the other hand, emphasizes the interactions of humans and machines. The communication between humans and machines via displays and control devices is a common HMI issue that includes the design and layout of control devices, the ways in which humans interact with the input devices, and human responses to the outputs of machines or devices. For this Special Issue, we welcome submissions related to HMSs and HMIs, functional allocations of humans and machines, and methods of system integration. We especially welcome submissions focusing on the design and layout of machine or system control panels (such as in a nuclear power plant control room); the human usage of wearable devices (such as augmented or virtual reality, exoskeletons, and special-purpose sensors); the operation of manned and unmanned vehicles (automobiles, vessels, aircraft, robots, etc.); human interactions with material-handling aids (such as carts, trolleys, and forklifts); and the environmental implications of human–machine systems.

Prof. Dr. Kaiway Li
Dr. Lu Peng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–machine system
  • human–machine interface
  • human–computer interactions
  • wearable devices
  • manned and unmanned system operation
  • augmented reality
  • virtual reality
  • control room and control panel design and assessment
  • transportation safety
  • human–robot interactions
  • exoskeletons
  • mental workload
  • vigilance

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

28 pages, 1866 KiB  
Article
Human Operator Mental Fatigue Assessment Based on Video: ML-Driven Approach and Its Application to HFAVD Dataset
by Walaa Othman, Batol Hamoud, Nikolay Shilov and Alexey Kashevnik
Appl. Sci. 2024, 14(22), 10510; https://doi.org/10.3390/app142210510 - 14 Nov 2024
Viewed by 606
Abstract
The detection of the human mental fatigue state holds immense significance due to its direct impact on work efficiency, specifically in system operation control. Numerous approaches have been proposed to address the challenge of fatigue detection, aiming to identify signs of fatigue and alert the individual. This paper introduces an approach to human mental fatigue assessment based on the application of machine learning techniques to video of a working operator. For validation purposes, the approach was applied to a dataset, "Human Fatigue Assessment Based on Video Data" (HFAVD), which integrates video data with features computed by our computer-vision deep learning models. The incorporated features encompass head movements represented by Euler angles (roll, pitch, and yaw), vital signs (blood pressure, heart rate, oxygen saturation, and respiratory rate), and eye and mouth states (blinking and yawning). The integration of these features eliminates the need for the manual calculation or detection of these parameters, and it obviates the requirement for the sensors and external devices commonly employed in existing datasets. The main objective of our work is to advance research in fatigue detection, particularly in work and academic settings. For this reason, we conducted a series of experiments utilizing machine learning techniques to analyze the dataset and assess the fatigue state based on the features predicted by our models. The results reveal that the random forest technique consistently achieved the highest accuracy and F1-score across all experiments, predominantly exceeding 90%. These findings suggest that random forest is a highly promising technique for this task and indicate a strong association between the predicted features used to annotate the videos and the fatigue state.
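The classification setup the abstract describes — a random forest over per-frame tabular features such as head pose, vital signs, and blink/yawn states — can be sketched as follows. This is an illustrative sketch with synthetic placeholder data, not the authors' code or the HFAVD dataset; the feature layout and labels are assumptions.

```python
# Hedged sketch: random-forest fatigue classification over tabular features
# like those in the abstract (roll, pitch, yaw, heart rate, SpO2, respiratory
# rate, blink rate, yawn rate). All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Synthetic feature matrix with 8 columns standing in for the features above.
X = rng.normal(size=(n, 8))
# Synthetic binary label (0 = alert, 1 = fatigued), loosely driven by the
# "blink rate" and "yawn rate" columns plus noise.
y = (X[:, 6] + X[:, 7] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f}  F1={f1_score(y_te, pred):.2f}")
```

On real video-derived features the same pipeline applies once each frame (or window) is reduced to a fixed-length feature vector.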

24 pages, 4287 KiB  
Article
Brainwaves in the Cloud: Cognitive Workload Monitoring Using Deep Gated Neural Network and Industrial Internet of Things
by Muhammad Abrar Afzal, Zhenyu Gu, Syed Umer Bukhari and Bilal Afzal
Appl. Sci. 2024, 14(13), 5830; https://doi.org/10.3390/app14135830 - 3 Jul 2024
Cited by 1 | Viewed by 983
Abstract
Monitoring and classifying cognitive workload in real time is vital for optimizing human–machine interactions and enhancing performance while ensuring safety, particularly in industrial scenarios. Considering this significance, the authors aim to formulate a cognitive workload monitoring system (CWMS) by leveraging the deep gated neural network (DGNN), a hybrid model integrating bi-directional long short-term memory (Bi-LSTM) and gated recurrent unit (GRU) networks. In our experimental setup, each of the four virtual users is equipped with a Raspberry Pi Zero W module to ensure efficient data transmission, thereby enhancing the reliability and efficacy of the monitoring process. This seamless monitoring framework utilizes the constrained application protocol (CoAP) and the ThingsBoard platform to evaluate cognitive workload in real time. A popular EEG benchmark dataset, STEW, is utilized for workload classification in this study. We employ the short-time Fourier transform (STFT) to extract frequency bands corresponding to users in both high and low cognitive workload modes. The proposed DGNN models achieve an accuracy of 99.45%, outperforming previous state-of-the-art models. We meticulously monitored critical parameters, including latency, classification processing time, and cognitive workload levels. This research demonstrates the importance of continuous monitoring for increasing productivity and safety in industry by introducing a novel method of real-time cognitive workload monitoring. The implementation code for each experiment is documented and made available for reproducibility.
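The STFT-based feature-extraction step mentioned in the abstract can be illustrated in a few lines. This is a minimal sketch, not the paper's implementation: the EEG signal is synthetic, the sampling rate is assumed, and the band edges are the conventional EEG bands rather than whatever the authors used.

```python
# Hedged sketch: extracting EEG band power with the short-time Fourier
# transform (scipy.signal.stft). The input here is a synthetic channel, not
# STEW data; an assumed sampling rate of 128 Hz is used.
import numpy as np
from scipy.signal import stft

fs = 128                       # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of data
# Synthetic channel: a 10 Hz (alpha) tone plus a weaker 20 Hz (beta) tone
# plus Gaussian noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 20 * t)
     + 0.1 * np.random.default_rng(0).normal(size=t.size))

# 1 s windows give 1 Hz frequency resolution.
f, _, Z = stft(x, fs=fs, nperseg=fs)
power = np.abs(Z) ** 2

def band_power(lo, hi):
    """Mean spectral power in [lo, hi) Hz, averaged over time windows."""
    return power[(f >= lo) & (f < hi)].mean()

bands = {"theta": band_power(4, 8),
         "alpha": band_power(8, 13),
         "beta": band_power(13, 30)}
print(bands)
```

Band powers like these, computed per channel and per window, form the kind of frequency-domain feature vector a recurrent classifier (Bi-LSTM/GRU) can consume.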

23 pages, 2709 KiB  
Article
Motion Sickness in Mixed-Reality Situational Awareness System
by Rain Eric Haamer, Nika Mikhailava, Veronika Podliesnova, Raido Saremat, Tõnis Lusmägi, Ana Petrinec and Gholamreza Anbarjafari
Appl. Sci. 2024, 14(6), 2231; https://doi.org/10.3390/app14062231 - 7 Mar 2024
Viewed by 1205
Abstract
This research focuses on enhancing the user experience within a Mixed-Reality Situational Awareness System (MRSAS). The study employed the Simulator Sickness Questionnaire (SSQ) to gauge and quantify the user experience and to compare the effects of changes to the system. As SSQ results are highly dependent on inherent motion sickness susceptibility, the Motion Sickness Susceptibility Questionnaire (MSQ) was used to normalize the results. The experimental conditions were tested on a simulated setup, which was also compared to its real-life counterpart. This simulated setup was adjusted to best match the conditions found in the real system by using post-processing effects. The test subjects primarily consisted of university students aged 17–28, both male and female, along with a secondary set covering a larger age range but predominantly male. In total, there were 41 unique test subjects in this study. The parameters analyzed were the Field of View (FoV) of the headset, the effects of peripheral and general blurring, camera distortions, camera white balance, and users' adaptability to VR over time. All results are presented as the average of multiple user results, scaled by user MSQ. The findings suggest that SSQ scores increase rapidly in the first 10–20 min of testing and level off at around 40–50 min, that repeated exposure to VR reduces MS buildup, and that a FoV of 49–54° is ideal for an MRSAS setup. Additionally, camera-based effects such as lens distortion and automatic white balance had negligible effects on MS. In this study, a new MSQ-based SSQ normalization technique was also developed and utilized for comparison. While the experiments were conducted primarily with the goal of improving the physical Vegvisir system, the results may be applicable to a broader array of VR/MR awareness systems and can help improve the UX of future applications.

34 pages, 3694 KiB  
Article
Impact of Navigation Aid and Spatial Ability Skills on Wayfinding Performance and Workload in Indoor-Outdoor Campus Navigation: Challenges and Design
by Rabail Tahir and John Krogstie
Appl. Sci. 2023, 13(17), 9508; https://doi.org/10.3390/app13179508 - 22 Aug 2023
Cited by 1 | Viewed by 5962
Abstract
Wayfinding is important for everyone on a university campus to understand where they are and get to where they want to go, whether to attend a meeting or a class. This study explores the effects of mobile navigation apps and individuals' spatial ability skills on wayfinding performance and perceived workload in university campus wayfinding, including indoor-outdoor navigation, by focusing on three research objectives: (1) compare the effectiveness of Google Maps (an outdoor navigation app) and MazeMap (an indoor-outdoor navigation app) on wayfinding performance and perceived workload in university campus wayfinding; (2) investigate the impact of participants' spatial ability skills on their wayfinding performance and perceived workload regardless of the navigation app used; (3) highlight the challenges in indoor-outdoor university campus wayfinding using mobile navigation apps. To achieve this, a controlled experiment was conducted with 22 participants divided into a control group (using Google Maps) and an experiment group (using MazeMap). Participants were required to complete a time-bound wayfinding task of navigating to meeting rooms in different buildings within the Gløshaugen campus of the Norwegian University of Science and Technology in Trondheim, Norway. Participants were assessed on spatial ability tests, mental workload, and wayfinding performance using a questionnaire, observation notes, and a short follow-up interview about the challenges they faced in the task. The findings reveal a negative correlation between overall spatial ability score (spatial reasoning, spatial orientation, and sense of direction) and perceived workload (NASA TLX score and Subjective Workload Rating), and a negative correlation between sense of direction score and total hesitation during the wayfinding task. However, no significant difference was found between the Google Maps and MazeMap groups in wayfinding performance and perceived workload. The qualitative analysis resulted in five key challenge categories in university campus wayfinding, providing implications for designing navigation systems that better facilitate indoor-outdoor campus navigation.

17 pages, 21306 KiB  
Article
Presenting Job Instructions Using an Augmented Reality Device, a Printed Manual, and a Video Display for Assembly and Disassembly Tasks: What Are the Differences?
by Halimoh Dorloh, Kai-Way Li and Samsiya Khaday
Appl. Sci. 2023, 13(4), 2186; https://doi.org/10.3390/app13042186 - 8 Feb 2023
Cited by 9 | Viewed by 2992
Abstract
Component assembly and disassembly are fundamental tasks in manufacturing and the product service industry. Job instructions are required for novice and inexperienced workers to perform such tasks. Conventionally, job instructions may be presented via a printed manual or a video display. Augmented reality (AR) devices have recently emerged as an alternative for conveying such information. This research compared job instructions presented via an AR display, a video display, and a printed manual for computer component assembly and disassembly tasks in terms of efficiency, quality, and usability. A Microsoft® HoloLens 2 device and a laptop computer were adopted to present the job instructions for the AR and video conditions, respectively. A total of 21 healthy adults, including 11 males and 10 females, participated in the study. Our findings were that the AR display led to the lowest efficiency but the best quality of the task being performed. The differences in overall usability scores among the three job instruction types were not significant. The participants felt that the support from a technical person required for the AR device was significantly greater than for the printed manual. Male participants were more likely than female participants to find the AR display easy to use.

15 pages, 3412 KiB  
Article
Movement Time for Pointing Tasks in Real and Augmented Reality Environments
by Caijun Zhao, Kai Way Li and Lu Peng
Appl. Sci. 2023, 13(2), 788; https://doi.org/10.3390/app13020788 - 5 Jan 2023
Cited by 7 | Viewed by 1890
Abstract
Human–virtual target interactions are becoming more and more common due to the emergence and application of augmented reality (AR) devices, and they differ from interactions with real objects. Quantification of movement time (MT) for human–virtual target interactions is essential for AR-based interface/environment design. This study aims to investigate the MT when people interact with virtual targets and to compare the differences in MT between real and AR environments. An experiment was conducted to measure the MT of pointing tasks on both a physical and a virtual calculator panel. A total of 30 healthy adults, 15 male and 15 female, participated. Each participant performed pointing tasks on both physical and virtual panels under varied panel inclination angle, hand movement direction, target key, and handedness conditions. The participants wore an AR headset (Microsoft HoloLens 2) when pointing on the virtual panel; when pointing on the physical panel, they pointed at a panel drawn on a board. The results showed that the type of panel, inclination angle, gender, and handedness had significant (p < 0.0001) effects on the MT. A new finding of this study was that the MT of the pointing task on the virtual panel was significantly (p < 0.0001) higher than that on the physical one; users of the HoloLens 2 AR device performed pointing tasks worse than on a physical panel. A novel revised Fitts's model was proposed to incorporate both the physical–virtual component and the inclination angle of the panel in estimating the MT. The index of difficulty and throughput of the pointing tasks using the physical and virtual panels were compared and discussed. The information in this paper is beneficial to AR designers in improving the usability of their designs and the user experience of their products.
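The revised model the abstract describes extends Fitts's law, whose classic Shannon form is MT = a + b · log2(D/W + 1), with D the movement distance, W the target width, and log2(D/W + 1) the index of difficulty (ID). The sketch below illustrates that structure only: the additive terms for panel type and inclination, and every coefficient value, are hypothetical placeholders, not the fitted model from the paper.

```python
# Hedged sketch of a Fitts's-law-style MT model extended with panel-type and
# inclination terms, as the abstract describes. Coefficients are made up for
# illustration; they are NOT the paper's fitted values.
import math

def index_of_difficulty(d_mm: float, w_mm: float) -> float:
    """Shannon-form index of difficulty, in bits: log2(D/W + 1)."""
    return math.log2(d_mm / w_mm + 1)

def movement_time(d_mm: float, w_mm: float, virtual: bool, angle_deg: float,
                  a=0.2, b=0.15, c_virtual=0.25, c_angle=0.002) -> float:
    """Estimated MT in seconds: classic Fitts term plus hypothetical additive
    offsets for a virtual panel and for panel inclination."""
    mt = a + b * index_of_difficulty(d_mm, w_mm)
    if virtual:
        mt += c_virtual   # virtual targets took longer in the study
    mt += c_angle * angle_deg
    return mt

# Same target geometry, physical vs virtual panel at 45 degrees:
print(movement_time(100, 10, virtual=False, angle_deg=45))
print(movement_time(100, 10, virtual=True, angle_deg=45))
```

Whatever the true coefficients, the structure makes the paper's headline result explicit: for identical geometry, the virtual-panel term lifts the predicted MT above the physical-panel prediction.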
