
Human Activity Detection and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 48952

Special Issue Editors


Dr. Mario Martínez-Zarzuela
Guest Editor
Universidad de Valladolid, Valladolid, Spain
Interests: computer vision; human activity detection and recognition; machine learning; deep learning; GPU computing; physical and cognitive rehabilitation applications; human body tracking; consumer depth cameras; IMUs for motion acquisition; virtual and augmented reality

Dr. David González Ortega
Guest Editor
Universidad de Valladolid, Valladolid, Spain
Interests: computer vision; machine learning; driving simulation; human body detection and tracking; consumer depth cameras; neural networks; deep learning

Special Issue Information

Dear Colleagues,

Systems for human activity detection and recognition are becoming more and more sophisticated every day. On the one hand, many different kinds of sensors for human action acquisition and tracking are available, ranging from conventional vision-based solutions (RGB, depth, and multi-view cameras) to wearable devices (smartwatches and smartphones, as well as more specific commercial IMU-based sensors for movement tracking), and including many other kinds of IoT devices. On the other hand, the number of proposed techniques for data processing and analysis grows every day. In particular, newer approaches using deep learning alongside classical machine learning techniques are gaining significant interest. It is also worth noting that the situation arising from the COVID-19 pandemic has raised the need for physical telerehabilitation systems and for tools to monitor social distancing. This Special Issue is intended to present a collection of scientific papers that represent the current state of the art in methods and sensors for the detection, tracking, and recognition of all forms of human activity.

Dr. Mario Martínez-Zarzuela
Dr. David González Ortega
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multisensor human activity detection, tracking, and recognition
  • Deep learning and classical machine learning techniques for human activity analysis
  • Human activity detection and recognition (e.g., pose estimation, activity classification)
  • Human activity in health applications (e.g., physical and cognitive rehabilitation, remote motion tracking)
  • Human activity in safety applications (e.g., driver monitoring, ergonomics)
  • Human activity monitoring and surveillance (e.g., COVID-19 social distance control, risk alerts, terrorism prevention)
  • Human activity for computer interaction (e.g., natural user interfaces, virtual and augmented reality)
  • Human activity recording, simulation, and generation (e.g., database acquisition, synthetic data)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research

17 pages, 3957 KiB  
Article
Physical Distancing Device with Edge Computing for COVID-19 (PADDIE-C19)
by Chun Hoe Loke, Mohammed Sani Adam, Rosdiadee Nordin, Nor Fadzilah Abdullah and Asma Abu-Samah
Sensors 2022, 22(1), 279; https://doi.org/10.3390/s22010279 - 30 Dec 2021
Cited by 2 | Viewed by 2779
Abstract
The most effective methods of preventing COVID-19 infection include maintaining physical distancing and wearing a face mask while in close contact with people in public places. However, densely populated areas have a greater incidence of COVID-19 dissemination, caused by people who do not comply with standard operating procedures (SOPs). This paper presents a prototype called PADDIE-C19 (Physical Distancing Device with Edge Computing for COVID-19) that implements physical distancing monitoring on a low-cost edge computing device. PADDIE-C19 provides real-time results and responses, as well as notifications and warnings, to anyone who violates the 1-m physical distance rule. In addition, PADDIE-C19 includes temperature screening using an MLX90614 thermometer and ultrasonic sensors to restrict the number of people on specified premises. The Neural Network Processor (KPU) in the Grove Artificial Intelligence Hardware Attached on Top (AI HAT), an edge computing unit, is used to accelerate the neural network model for person detection and achieves up to 18 frames per second (FPS). The results show that person detection with the Grove AI HAT achieves an accuracy of 74.65% and that the average absolute error between measured and actual physical distance is 8.95 cm. Furthermore, the MLX90614 thermometer readings are guaranteed to differ by less than 0.5 °C from the more common Fluke 59 thermometer. Experimental results also show that, compared with cloud computing, edge computing on the Grove AI HAT achieves an average performance of 18 FPS for the person detector (kmodel) with an average execution time of 56 ms, regardless of the network connection type or speed.
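The paper itself does not include source code, but the distancing rule it evaluates can be illustrated with a short sketch: given person bounding boxes from an edge detector, estimate each person's position with a pinhole-camera approximation and flag pairs closer than 1 m. All constants below (focal length, assumed person height, 640-px image width) are illustrative assumptions, not values from the paper.

```python
import itertools
import math

# Illustrative constants -- not taken from the PADDIE-C19 paper.
FOCAL_LENGTH_PX = 600.0    # camera focal length in pixels (assumed)
PERSON_HEIGHT_M = 1.7      # assumed average person height
DISTANCE_LIMIT_M = 1.0     # 1-m physical distancing rule from the paper

def estimate_position(box, focal=FOCAL_LENGTH_PX):
    """Approximate a person's position from a bounding box (x, y, w, h)
    using a pinhole-camera model: depth scales inversely with box height."""
    x, y, w, h = box
    depth = focal * PERSON_HEIGHT_M / h          # distance from camera
    cx = x + w / 2.0                             # horizontal pixel center
    lateral = (cx - 320) * depth / focal         # offset from 640-px image center
    return lateral, depth

def distancing_violations(boxes, limit=DISTANCE_LIMIT_M):
    """Return index pairs of detected persons closer than the limit."""
    positions = [estimate_position(b) for b in boxes]
    violations = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        gap = math.hypot(p[0] - q[0], p[1] - q[1])
        if gap < limit:
            violations.append((i, j, gap))
    return violations

# Example: two detections whose estimated gap violates the 1-m rule.
print(distancing_violations([(100, 80, 60, 220), (180, 90, 58, 215)]))
```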

25 pages, 3327 KiB  
Article
Custom IMU-Based Wearable System for Robust 2.4 GHz Wireless Human Body Parts Orientation Tracking and 3D Movement Visualization on an Avatar
by Javier González-Alonso, David Oviedo-Pastor, Héctor J. Aguado, Francisco J. Díaz-Pernas, David González-Ortega and Mario Martínez-Zarzuela
Sensors 2021, 21(19), 6642; https://doi.org/10.3390/s21196642 - 6 Oct 2021
Cited by 12 | Viewed by 6112
Abstract
Recent studies confirm the applicability of Inertial Measurement Unit (IMU)-based systems for human motion analysis. Notwithstanding, high-end IMU-based commercial solutions are still too expensive and complex to democratize their use among a wide range of potential users. Less featured entry-level commercial solutions are being introduced in the market, trying to fill this gap, but they still present some limitations that need to be overcome. At the same time, a growing number of scientific papers use not commercial but custom do-it-yourself IMU-based systems in medical and sports applications. Even though these solutions can help to popularize the use of this technology, they have more limited features, and descriptions of how to design and build them from scratch are still scarce in the literature. The aim of this work is two-fold: (1) proving the feasibility of building an affordable custom solution aimed at simultaneous multiple body parts orientation tracking, while providing a detailed bottom-up description of the required hardware, tools, and mathematical operations to estimate and represent 3D movement in real time; and (2) showing how the introduction of a custom 2.4 GHz communication protocol, including a channel-hopping strategy, can address some of the current communication limitations of entry-level commercial solutions. The proposed system can be used for wireless real-time human body parts orientation tracking with up to 10 custom sensors, at least at 50 Hz. In addition, it provides more reliable motion data acquisition in Bluetooth- and Wi-Fi-crowded environments, where the use of entry-level commercial solutions might be unfeasible. This system can be used as a groundwork for developing affordable human motion analysis solutions that do not require an accurate kinematic analysis.
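The paper gives a detailed bottom-up account of the sensor-fusion math; as a much-reduced stand-in for that pipeline, the sketch below fuses gyroscope and accelerometer readings from a single IMU with a complementary filter at the paper's 50 Hz rate. The blending factor and the roll/pitch-only state are simplifying assumptions.

```python
import numpy as np

DT = 1.0 / 50.0  # 50 Hz sampling, matching the rate reported in the paper
ALPHA = 0.98     # gyro/accelerometer blending factor (assumed)

def complementary_filter(gyro, accel, dt=DT, alpha=ALPHA):
    """Estimate roll/pitch (radians) from gyro rates (rad/s) and
    accelerometer samples (g). A simplified stand-in for the full
    sensor-fusion pipeline described in the paper."""
    roll = pitch = 0.0
    angles = []
    for (gx, gy, _), (ax, ay, az) in zip(gyro, accel):
        # Integrate gyroscope rates (responsive, but drifts over time).
        roll += gx * dt
        pitch += gy * dt
        # Gravity direction from the accelerometer (noisy, but drift-free).
        roll_acc = np.arctan2(ay, az)
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
        # Blend: trust the gyro short-term, the accelerometer long-term.
        roll = alpha * roll + (1 - alpha) * roll_acc
        pitch = alpha * pitch + (1 - alpha) * pitch_acc
        angles.append((roll, pitch))
    return np.array(angles)
```

A real multi-sensor deployment would track full 3D orientation (e.g., with quaternions) rather than roll and pitch alone.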

20 pages, 2729 KiB  
Article
Extended-Range Prediction Model Using NSGA-III Optimized RNN-GRU-LSTM for Driver Stress and Drowsiness
by Kwok Tai Chui, Brij B. Gupta, Ryan Wen Liu, Xinyu Zhang, Pandian Vasant and J. Joshua Thomas
Sensors 2021, 21(19), 6412; https://doi.org/10.3390/s21196412 - 25 Sep 2021
Cited by 21 | Viewed by 3085
Abstract
Road traffic accidents have been listed among the top 10 global causes of death for decades. Traditional measures such as education and legislation have yielded only limited improvements in reducing accidents caused by people driving in undesirable states, such as when suffering from stress or drowsiness. Attention is therefore drawn to predicting drivers' future status so that precautions can be taken in advance as effective preventative measures. Common prediction algorithms include recurrent neural networks (RNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) networks. To benefit from the advantages of each algorithm, nondominated sorting genetic algorithm-III (NSGA-III) can be applied to merge the three; the result is named NSGA-III-optimized RNN-GRU-LSTM. An analysis compares the proposed prediction algorithm with the individual RNN, GRU, and LSTM algorithms. Our proposed model improves the overall accuracy by 11.2–13.6% and 10.2–12.2% in driver stress prediction and driver drowsiness prediction, respectively. Likewise, it improves the overall accuracy by 6.9–12.7% and 6.9–8.9%, respectively, compared with boosting learning with multiple RNNs, multiple GRUs, and multiple LSTMs. Compared with existing works, this proposal enhances performance by taking key factors into account, namely, a real-world driving dataset, a greater sample size, hybrid algorithms, and cross-validation. Future research directions are suggested for further exploration and performance enhancement.
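A hedged sketch of the hybrid structure the abstract describes: three recurrent branches (RNN, GRU, LSTM) whose outputs are merged by branch weights that a multi-objective search such as NSGA-III could tune. Layer sizes, the softmax merge, and treating the weights as searchable genes are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class HybridRecurrentModel(nn.Module):
    """Three recurrent branches (RNN, GRU, LSTM) merged by weights of the
    kind a genetic search could optimize. Sizes are illustrative."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.branch_weights = nn.Parameter(torch.ones(3) / 3)  # candidate genes
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, features)
        r, _ = self.rnn(x)
        g, _ = self.gru(x)
        l, _ = self.lstm(x)
        w = torch.softmax(self.branch_weights, dim=0)
        merged = w[0] * r[:, -1] + w[1] * g[:, -1] + w[2] * l[:, -1]
        return self.head(merged)

model = HybridRecurrentModel(n_features=6, n_classes=3)
logits = model(torch.randn(8, 100, 6))   # 8 windows of 100 time steps
print(logits.shape)                      # torch.Size([8, 3])
```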

18 pages, 2079 KiB  
Article
EduNet: A New Video Dataset for Understanding Human Activity in the Classroom Environment
by Vijeta Sharma, Manjari Gupta, Ajai Kumar and Deepti Mishra
Sensors 2021, 21(17), 5699; https://doi.org/10.3390/s21175699 - 24 Aug 2021
Cited by 17 | Viewed by 7733
Abstract
Human action recognition in videos has become a popular research area in artificial intelligence (AI) technology. In the past few years, this research has accelerated in areas such as sports, daily activities, and kitchen activities, due to the development of benchmark datasets for human action recognition in these areas. However, there is little research into benchmark datasets for human activity recognition in educational environments. Therefore, we developed a dataset of teacher and student activities to expand research in the education domain. This paper proposes a new dataset, called EduNet, as a novel approach towards developing human action recognition datasets in classroom environments. EduNet has 20 action classes, containing 7851 manually annotated clips extracted from YouTube videos and recorded in an actual classroom environment. Each action category has a minimum of 200 clips, and the total duration is approximately 12 h. To the best of our knowledge, EduNet is the first dataset specially prepared for classroom monitoring of both teacher and student activities. It is also a challenging action dataset owing to its large number of clips and their unconstrained nature. We compared the performance of the EduNet dataset with the benchmark video datasets UCF101 and HMDB51 on a standard I3D-ResNet-50 model, which resulted in 72.3% accuracy. The development of a new benchmark dataset for the education domain will benefit future research on classroom monitoring systems. The EduNet dataset is a collection of classroom activities from schools covering standards 1 to 12.
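The dataset itself is the contribution here; as a generic companion, the snippet below shows how top-1 clip accuracy (the kind of figure reported above) is typically computed for a video action classifier such as I3D-ResNet-50. The loader contract and tensor layout are assumptions, not part of the paper.

```python
import torch

@torch.no_grad()
def clip_accuracy(model, loader, device="cpu"):
    """Top-1 clip accuracy for a video action classifier. `loader` is
    assumed to yield (clips, labels) batches with clips shaped
    (batch, channels, frames, height, width)."""
    model.eval().to(device)
    correct = total = 0
    for clips, labels in loader:
        logits = model(clips.to(device))
        correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)
```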

26 pages, 9823 KiB  
Article
Using Privacy Respecting Sound Analysis to Improve Bluetooth Based Proximity Detection for COVID-19 Exposure Tracing and Social Distancing
by Gernot Bahle, Vitor Fortes Rey, Sizhen Bian, Hymalai Bello and Paul Lukowicz
Sensors 2021, 21(16), 5604; https://doi.org/10.3390/s21165604 - 20 Aug 2021
Cited by 15 | Viewed by 3007
Abstract
We propose to use ambient sound as a privacy-aware source of information for COVID-19-related social distance monitoring and contact tracing. The aim is to complement the currently dominant Bluetooth Low Energy Received Signal Strength Indicator (BLE RSSI) approaches. These often struggle with the complexity of Radio Frequency (RF) signal attenuation, which is strongly influenced by the characteristics of the surroundings. This in turn renders the relationship between signal strength and the distance between transmitter and receiver highly non-deterministic. We analyze spatio-temporal variations in what we call “ambient sound fingerprints”. We leverage the fact that ambient sound received by a mobile device is a superposition of sounds from sources at many different locations in the environment. Such a superposition is determined by the relative position of those sources with respect to the receiver. We present a method that uses this general idea to classify proximity between pairs of users based on the Kullback–Leibler distance between sound intensity histograms. The method is based on intensity analysis only and does not require the collection of any privacy-sensitive signals. Further, we show how this information can be fused with BLE RSSI features using adaptive weighted voting. We also take into account that sound is not available in all windows. Our approach is evaluated in elaborate experiments in real-world settings. The results show that both Bluetooth and sound can be used to differentiate users within and beyond a critical distance (1.5 m) with high accuracies of 77% and 80%, respectively. Their fusion, however, improves this to 86%, making evident the merit of augmenting BLE RSSI with sound. We conclude by discussing the strengths and limitations of our approach and highlighting directions for future work.
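The core measure is standard enough to sketch: build sound-intensity histograms per window, compare them with a symmetrized Kullback–Leibler distance, and fuse the resulting sound vote with a BLE-based score by weighted voting. The bin count, threshold, and weights below are illustrative; the paper adapts its weights per window.

```python
import numpy as np
from scipy.stats import entropy

def intensity_histogram(frames, bins=32):
    """Histogram of short-term sound intensity (RMS per frame), normalized
    to a probability distribution. The bin count is an assumption."""
    rms = np.sqrt(np.mean(np.asarray(frames, dtype=float) ** 2, axis=1))
    hist, _ = np.histogram(rms, bins=bins, range=(0.0, 1.0))
    return (hist + 1e-9) / (hist.sum() + 1e-9 * bins)

def sound_proximity_score(frames_a, frames_b):
    """Symmetrized Kullback-Leibler distance between two devices'
    intensity histograms; small values suggest a shared sound environment."""
    p, q = intensity_histogram(frames_a), intensity_histogram(frames_b)
    return 0.5 * (entropy(p, q) + entropy(q, p))

def fused_decision(kl_score, ble_score, w_sound=0.6, kl_thresh=0.5):
    """Weighted vote over the sound and BLE RSSI channels. Weights and
    threshold are illustrative; the paper adapts them per window."""
    sound_vote = 1.0 if kl_score < kl_thresh else 0.0   # 1 = within 1.5 m
    return w_sound * sound_vote + (1 - w_sound) * ble_score >= 0.5
```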

28 pages, 12173 KiB  
Article
Deep Learning for Classifying Physical Activities from Accelerometer Data
by Vimala Nunavath, Sahand Johansen, Tommy Sandtorv Johannessen, Lei Jiao, Bjørge Herman Hansen, Sveinung Berntsen and Morten Goodwin
Sensors 2021, 21(16), 5564; https://doi.org/10.3390/s21165564 - 18 Aug 2021
Cited by 11 | Viewed by 5575
Abstract
Physical inactivity increases the risk of many adverse health conditions, including the world’s major non-communicable diseases, such as coronary heart disease, type 2 diabetes, and breast and colon cancers, shortening life expectancy. Medical carers and personal trainers have few methods for monitoring the types of physical activity a patient actually performs. To improve activity monitoring, we propose an artificial-intelligence-based approach to classify physical movement activity patterns. In more detail, we employ two deep learning (DL) methods, namely a deep feed-forward neural network (DNN) and a deep recurrent neural network (RNN), for this purpose. We evaluate the two models on two physical movement datasets collected from several volunteers who carried tri-axial accelerometer sensors. The first dataset is from the UCI machine learning repository, contains 14 activities of daily living (ADL), and was collected from 16 volunteers who carried a single wrist-worn tri-axial accelerometer. The second dataset includes ten other ADLs and was gathered from eight volunteers who placed the sensors on their hips. Our experimental results show that the RNN model performs accurately, compared with state-of-the-art methods, in classifying the fundamental movement patterns, with an overall accuracy of 84.89% and an overall F1-score of 82.56%. The results indicate that our method provides medical doctors and trainers with a promising way to track and understand a patient’s physical activities precisely for better treatment.
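As a hedged illustration of the paper's recurrent route, the sketch below windows a tri-axial accelerometer stream and classifies each window with a small LSTM. Window length, overlap, and layer sizes are assumptions; the 14 classes match the first dataset's ADL count.

```python
import numpy as np
import torch
import torch.nn as nn

def window(signal, size=128, step=64):
    """Slice a (samples, 3) tri-axial accelerometer stream into
    overlapping windows; 128 samples with 50% overlap are assumptions."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

class ADLRecurrentNet(nn.Module):
    """A small LSTM classifier over accelerometer windows, in the spirit
    of the paper's RNN model (layer sizes are illustrative)."""
    def __init__(self, n_classes=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(3, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, 3)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])        # classify from the last time step

stream = np.random.randn(1000, 3).astype(np.float32)
batch = torch.from_numpy(window(stream))
print(ADLRecurrentNet()(batch).shape)     # (n_windows, 14)
```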

29 pages, 932 KiB  
Article
Activity-Aware Vital Sign Monitoring Based on a Multi-Agent Architecture
by Todor Ivașcu and Viorel Negru
Sensors 2021, 21(12), 4181; https://doi.org/10.3390/s21124181 - 18 Jun 2021
Cited by 5 | Viewed by 3099
Abstract
Vital sign monitoring outside the clinical environment based on wearable sensors ensures better support in assessing a patient’s health condition, and in case of health deterioration, automatic alerts can be sent to the care providers. In everyday life, the users can perform different physical activities, and considering that vital sign measurements depend on the intensity of the activity, we proposed an architecture based on the multi-agent paradigm to handle this issue dynamically. Different types of agents were proposed that processed different sensor signals and recognized simple activities of daily living. The system was validated using a real-life dataset where subjects wore accelerometer sensors on the chest, wrist, and ankle. The system relied on ontology-based models to address the data heterogeneity and combined different wearable sensor sources in order to achieve better performance. The results showed an accuracy of 95.25% on intersubject activity classification. Moreover, the proposed method, which automatically extracted vital sign threshold ranges for each physical activity recognized by the system, showed promising results for remote health status evaluation.
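The automatic threshold extraction mentioned in the abstract can be sketched as a per-activity percentile range over historical vital-sign readings; the percentile bounds and the dictionary interface below are assumptions, not the paper's agent design.

```python
import numpy as np

def activity_thresholds(heart_rate, activity_labels, low_pct=5, high_pct=95):
    """Derive per-activity vital-sign threshold ranges from monitoring
    data. Returns {activity: (low, high)}; a reading outside the range for
    the currently recognized activity would trigger an alert."""
    hr = np.asarray(heart_rate, dtype=float)
    labels = np.asarray(activity_labels)
    return {a: (np.percentile(hr[labels == a], low_pct),
                np.percentile(hr[labels == a], high_pct))
            for a in np.unique(labels)}

hr = [68, 72, 70, 115, 122, 118, 90, 95]
acts = ["rest", "rest", "rest", "run", "run", "run", "walk", "walk"]
print(activity_thresholds(hr, acts))
```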

17 pages, 475 KiB  
Communication
Mass Tracking in Cellular Networks for the COVID-19 Pandemic Monitoring
by Emil J. Khatib, María Jesús Perles Roselló, Jesús Miranda-Páez, Victoriano Giralt and Raquel Barco
Sensors 2021, 21(10), 3424; https://doi.org/10.3390/s21103424 - 14 May 2021
Cited by 9 | Viewed by 2725
Abstract
The year 2020 was marked by the emergence of the COVID-19 pandemic. After months of uncontrolled spread worldwide, a clear conclusion is that controlling the mobility of the general population can slow down the propagation of the pandemic. Tracking the location of the population enables better use of mobility limitation policies and the prediction of potential hotspots, as well as improved alert services for individuals who may have been exposed to the virus. With mobility at the core of their functionality and a high degree of penetration of mobile devices within the general population, cellular networks are an invaluable asset for this purpose. This paper presents an overview of the possibilities offered by cellular networks for the mass tracking of the population at different levels. The major privacy concerns are also reviewed, and a specific use case is shown, correlating mobility and the number of cases in the province of Málaga (Spain).
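The use case boils down to correlating a mobility signal with case counts. A minimal sketch, assuming a fixed reporting lag (the lag value and toy data below are illustrative), uses Pearson's r:

```python
import numpy as np
from scipy.stats import pearsonr

def mobility_case_correlation(mobility_index, daily_cases, lag_days=7):
    """Correlate an aggregate mobility index (e.g., from cellular-network
    data) with case counts reported `lag_days` later. The lag stands in
    for incubation plus reporting delay. Returns Pearson's r and p-value."""
    m = np.asarray(mobility_index, dtype=float)
    c = np.asarray(daily_cases, dtype=float)
    if lag_days:
        m, c = m[:-lag_days], c[lag_days:]
    return pearsonr(m, c)

mobility = [1.0, 0.9, 0.4, 0.3, 0.35, 0.5, 0.7, 0.8, 0.85, 0.9]
cases = [120, 130, 125, 110, 80, 60, 55, 70, 90, 105]
r, p = mobility_case_correlation(mobility, cases, lag_days=2)
print(f"r={r:.2f}, p={p:.3f}")
```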

20 pages, 2844 KiB  
Article
Smartphone-Based Activity Recognition in a Pedestrian Navigation Context
by Robert Jackermeier and Bernd Ludwig
Sensors 2021, 21(9), 3243; https://doi.org/10.3390/s21093243 - 7 May 2021
Cited by 2 | Viewed by 2598
Abstract
In smartphone-based pedestrian navigation systems, detailed knowledge about user activity and device placement is key information. Landmarks such as staircases or elevators can help the system determine the user’s position inside buildings, and navigation instructions can be adapted to the current context in order to provide more meaningful assistance. Typically, most human activity recognition (HAR) approaches distinguish between general activities such as walking, standing, or sitting. In this work, we investigate more specific activities that are tailored towards the use case of pedestrian navigation, including different kinds of stationary and locomotion behavior. We first collect a dataset of 28 combinations of device placements and activities, in total consisting of over 6 h of data from three sensors. We then use LSTM-based machine learning (ML) methods to train hierarchical classifiers that can distinguish between these placements and activities. Test results show that the accuracy of device placement classification (97.2%) is on par with a state-of-the-art benchmark on this dataset while being less resource-intensive on mobile devices. Activity recognition performance highly depends on the classification task and ranges from 62.6% to 98.7%, once again performing close to the benchmark. Finally, we demonstrate in a case study how to apply the hierarchical classifiers to experimental and naturalistic datasets in order to analyze activity patterns during the course of a typical navigation session and to investigate the correlation between user activity and device placement, thereby gaining insights into real-world navigation behavior.
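The hierarchical scheme is easy to sketch: a first classifier predicts the device placement, and the window is then routed to a placement-specific activity classifier. The scikit-learn-style predict() interface and the placement label are assumptions for illustration.

```python
def hierarchical_predict(window, placement_clf, activity_clfs):
    """Two-stage inference: classify device placement first, then route
    the feature window to the activity classifier trained for that
    placement. `activity_clfs` maps placement label -> classifier."""
    placement = placement_clf.predict([window])[0]   # e.g. "trouser_pocket" (hypothetical label)
    activity = activity_clfs[placement].predict([window])[0]
    return placement, activity
```

Training one activity model per placement keeps each classifier's task narrow, which is one plausible reason the cascade stays competitive while remaining light enough for mobile devices.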

16 pages, 4736 KiB  
Article
Human Motion Tracking with Less Constraint of Initial Posture from a Single RGB-D Sensor
by Chen Liu, Anna Wang, Chunguang Bu, Wenhui Wang and Haijing Sun
Sensors 2021, 21(9), 3029; https://doi.org/10.3390/s21093029 - 26 Apr 2021
Cited by 9 | Viewed by 2738
Abstract
High-quality and complete 4D reconstruction of human motion is of great significance for immersive VR and even human operation. However, it has inevitable self-scanning constraints, and tracking under monocular settings also has strict restrictions. In this paper, we propose a human motion capture system, combining human priors and performance capture, that uses only a single RGB-D sensor. To break the self-scanning constraint, we generate a complete mesh using only the front-view input to initialize the geometric capture. In order to construct a correct warping field, most previous methods initialize their systems in a strict way. To maintain high fidelity while making the system easier to use, we update the model while capturing motion. Additionally, we blend in human priors in order to improve the reliability of model warping. Extensive experiments demonstrate that our method can be used more comfortably while maintaining credible geometric warping, and it remains free of self-scanning constraints.

18 pages, 2064 KiB  
Article
Ensemble CNN to Detect Drowsy Driving with In-Vehicle Sensor Data
by Yongsu Jeon, Beomjun Kim and Yunju Baek
Sensors 2021, 21(7), 2372; https://doi.org/10.3390/s21072372 - 29 Mar 2021
Cited by 20 | Viewed by 3570
Abstract
Drowsy driving is a major threat to the safety of drivers and road traffic. Accurate and reliable drowsy driving detection technology can reduce the accidents it causes. In this study, we present a new method to detect drowsy driving using vehicle sensor data obtained from the steering wheel and pedal pressure. Based on our empirical study, we categorize drowsy driving into long-duration and short-duration drowsy driving. Furthermore, we propose an ensemble network model composed of convolutional neural networks that can detect each type of drowsy driving. Each subnetwork is specialized to detect long- or short-duration drowsy driving using a fusion of features obtained through time-series analysis. To train the proposed network efficiently, we propose an imbalanced-data-handling method that adjusts the ratio of normal driving data to drowsy driving data in the dataset by partially removing normal driving data. A dataset comprising 198.3 h of in-vehicle sensor data was acquired through a driving simulation that includes a variety of road environments, such as urban environments and highways. The performance of the proposed model was evaluated with this dataset, achieving drowsy driving detection with an accuracy of up to 94.2%.
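The imbalanced-data handling described, partially removing normal-driving windows, amounts to majority-class downsampling. A minimal sketch, with the target class ratio as an assumption:

```python
import numpy as np

def rebalance(X, y, normal_label=0, target_ratio=1.0, seed=0):
    """Partially remove majority-class (normal driving) windows so that
    len(normal) <= target_ratio * len(drowsy), in the spirit of the
    paper's method. The 1:1 target ratio is an assumption."""
    rng = np.random.default_rng(seed)
    normal = np.flatnonzero(y == normal_label)
    drowsy = np.flatnonzero(y != normal_label)
    keep_n = min(len(normal), int(target_ratio * len(drowsy)))
    kept = rng.choice(normal, size=keep_n, replace=False)
    idx = rng.permutation(np.concatenate([kept, drowsy]))
    return X[idx], y[idx]

X = np.random.randn(1000, 128)
y = np.r_[np.zeros(900), np.ones(100)]
Xb, yb = rebalance(X, y)
print(yb.mean())   # ~0.5 after balancing
```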

18 pages, 14736 KiB  
Article
Low-Asymmetry Interface for Multiuser VR Experiences with Both HMD and Non-HMD Users
by Qimeng Zhang, Ji-Su Ban, Mingyu Kim, Hae Won Byun and Chang-Hun Kim
Sensors 2021, 21(2), 397; https://doi.org/10.3390/s21020397 - 8 Jan 2021
Cited by 10 | Viewed by 3461
Abstract
We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users’ perception of the VR environment is nearly the same; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display, providing a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor, enabling the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as the main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world; moreover, our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.
