Advanced Virtual, Augmented, and Mixed Reality: Immersive Applications and Innovative Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 November 2024) | Viewed by 20875

Special Issue Editor


Dr. Chaman Sabharwal
Guest Editor
College of Engineering and Computing, Missouri University of Science and Technology, Rolla, MO 63128, USA
Interests: virtual reality; augmented reality; computer graphics; robotics; machine learning; data mining; qualitative spatial reasoning

Special Issue Information

Dear Colleagues,

AR/VR has grown beyond its science fiction roots to become a powerful visualization medium built on immersive software and hardware technology. It offers a more immersive experience than traditional vignette-based approaches, evoking instant perception and comprehension. The COVID-19 pandemic has accelerated the need for such research and immersive tools: Zoom, for instance, became a primary platform for remote learning, conferencing, and collaboration, while VR combined with remote sensing devices allows products and projects to be inspected, diagnosed, and maintained remotely. The technology has not yet reached maturity, largely because of equipment costs, but this has not deterred content developers, and research aimed at overcoming these barriers continues.

Research interest in virtual reality education is growing rapidly across many areas, leading to innovative applications of value to both researchers and the general public. These span industries such as automotive, healthcare, psychology, education and training, tourism, manufacturing, civil engineering, commerce (advertising and retail sales), the military, architecture, and research and development. VR is closely tied to perception, immersion, and visualization, and innovators in each of these industries continue to explore ways to tap its potential in both abstract and real environments.

In 2023, the trend is anticipated to involve exploring and exploiting AR as fully as possible. AR is increasingly being adopted in the automotive, healthcare, marketing, engineering, and education sectors, and there is rapidly growing demand for professionals proficient in virtual reality (VR) and augmented reality (AR). In the context of these recent innovations, AR and VR will soon be integral aspects of society. MDPI welcomes original papers in all areas of virtual reality and augmented reality applications in the natural and social sciences.

Dr. Chaman Sabharwal
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • augmented reality
  • digital
  • immersion
  • AR and AI
  • 3D graphics
  • animation
  • emerging
  • interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)

Research

24 pages, 9477 KiB  
Article
Development of a VR360 Ecological System for Learning Indigenous Cultures and Environmental Conservation
by Wernhuar Tarng and Jen-Chu Hsu
Appl. Sci. 2024, 14(22), 10582; https://doi.org/10.3390/app142210582 - 16 Nov 2024
Viewed by 481
Abstract
The cultures and religious beliefs of Taiwanese indigenous peoples are deeply rooted in ecological protection and environmental ethics. Indigenous peoples emphasize reverence for nature, ecological diversity, sustainable living, resource sharing, and sanctity of nature. Integrating environmental education with indigenous culture can promote biodiversity and ecological conservation while preserving indigenous traditions and fostering sustainable development. This study combined Virtual Reality 360-degree (VR360) technology with indigenous culture to develop a virtual ecological system as a learning tool for environmental education in indigenous elementary schools. The VR360 system simulates the ecological environments of Chichiawan Creek and the Atayal Nanshan Tribe in the mountainous regions of northern Taiwan to provide students with immersive experiences that enhance their learning interest and motivation. Through interactive operations, they can observe the appearance, characteristics, and habitual behaviors of Formosan Landlocked Salmon and other conservation animals to understand the relationship between maintaining biodiversity and ecological balance. The VR360 ecological system enhances learning effectiveness and motivation using low-cost cardboard glasses, making it suitable for promoting indigenous culture and environmental education while reducing the digital divide in remote tribal areas. Full article

15 pages, 1778 KiB  
Article
The Effects of the Complexity of 3D Virtual Objects on Visual Working Memory Capacity in AR Interface for Mobile Phones
by Xingcheng Di, Jing Zhang, Shangsong Jiang, Wei Xu and Nuowen Zhang
Appl. Sci. 2024, 14(21), 9776; https://doi.org/10.3390/app14219776 - 25 Oct 2024
Viewed by 532
Abstract
The current study aims to investigate the effects of 3D virtual object complexity on visual working memory capacity in mobile augmented reality (MAR) interfaces. With the popularity of augmented reality technology in mobile applications, 3D virtual elements play a key role in interaction. However, prior studies ignored the correlation between virtual object presentation and visual working memory (VWM). Given that visual working memory capacity is closely related to overall cognitive ability, the current study explored the relationship between the complexity of 3D virtual objects and VWM capacity in AR interfaces. Sixty volunteers participated in a 5-point Likert scale rating to eliminate the interference factors of familiarity, concreteness and similarity in 3D virtual material objects. Then, we further conducted an MAR change detection paradigm experiment and successfully manipulated the complexity of 3D virtual objects and the set size. Thirty-one subjects completed the formal experiment. Three measurements (reaction time, proportion correct, and Cowan’s K) were analyzed for nine experimental conditions (three object complexity levels and three object set sizes). The results confirmed that the visual working memory capacity in mobile AR interfaces is modulated by the complexity of the 3D virtual objects and set size, which decreases with the increase in complexity and set size of the displayed 3D virtual objects. As the complexity of the 3D virtual objects increases, the amount of resources allocated to each item decreases, resulting in a decrease in memory capacity and memory accuracy. This study highlights the effectiveness of VWM capacity in MAR interface design research and provides valuable insights into determining the best way to present 3D virtual objects. Full article
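The capacity measure reported above, Cowan's K, is commonly estimated from change-detection accuracy as K = N × (hit rate − false-alarm rate), where N is the set size. The following minimal sketch (Python) illustrates that calculation; the trial counts are hypothetical and not taken from the study.

```python
def cowans_k(hits: int, misses: int, false_alarms: int,
             correct_rejections: int, set_size: int) -> float:
    """Estimate visual working memory capacity (Cowan's K) for one
    change-detection condition: K = N * (hit rate - false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical trial counts for one participant in one condition
# (e.g., high-complexity 3D objects, set size 6).
k = cowans_k(hits=38, misses=10, false_alarms=9, correct_rejections=39, set_size=6)
print(f"Estimated capacity: {k:.2f} items")
```

Comparing K across the nine complexity-by-set-size conditions yields the kind of capacity modulation the authors report.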

11 pages, 1130 KiB  
Article
Improving Exposure Therapy Through Projection-Based Augmented Reality for the Treatment of Cockroach Phobia: A Feasibility, Multiple-Baseline, Single-Case Study
by María Palau-Batet, Juana Bretón-López, Jorge Grimaldos, Carlos Suso-Ribera, Diana Castilla, Azucena García-Palacios and Soledad Quero
Appl. Sci. 2024, 14(20), 9581; https://doi.org/10.3390/app14209581 - 21 Oct 2024
Viewed by 707
Abstract
Augmented Reality (AR) is helpful for overcoming the challenges of in vivo exposure therapy for Specific Phobia (SP). Specifically, Projection-based AR exposure therapy (P-ARET) allows the individual to face the feared animal without intrusive hardware, the phobic stimulus can be controlled, and it can maximize “variability”, producing a positive effect in the generalization of the results. The goal of this work is to assess the feasibility of P-ARET for SP, comparing multiple stimuli (MS) versus single stimulus (SS) conditions and evaluating the participants’ user experience. Adherence to a daily monitoring app (Emotional Monitor) and preliminary efficacy of the P-ARET treatment were assessed. Four participants diagnosed with SP of cockroaches (DSM-5) were randomly assigned to different baselines. Episodic and daily evaluations were performed. Participants considered the MS condition more aversive but more effective than the SS condition. Adherence to the mobile app was 83% for three participants and 55% for the remaining person. Analyses of non-overlap of all pairs and changes in the functionality levels showed a decrease in the SP symptoms at post-treatment and follow-ups. This study offers preliminary feasibility results for a novel form of P-ARET to treat participants with cockroach phobia, which may also apply to other phobias. Full article
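The effect-size analysis mentioned above, non-overlap of all pairs (NAP), compares every baseline observation with every treatment observation. The sketch below (Python) shows the standard calculation, under the assumption that lower fear ratings indicate improvement; the data are hypothetical.

```python
from itertools import product

def nap(baseline, treatment, lower_is_better=True):
    """Non-overlap of All Pairs: proportion of baseline/treatment pairs
    showing improvement, with ties counted as 0.5."""
    pairs = list(product(baseline, treatment))
    improved = sum(1 for b, t in pairs if (t < b if lower_is_better else t > b))
    ties = sum(1 for b, t in pairs if t == b)
    return (improved + 0.5 * ties) / len(pairs)

# Hypothetical daily fear ratings (0-10) for one participant.
baseline_phase = [8, 7, 8, 9, 8]
treatment_phase = [6, 5, 4, 4, 3, 2]
print(f"NAP = {nap(baseline_phase, treatment_phase):.2f}")
```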

19 pages, 10602 KiB  
Article
Effects of Gradual Spatial and Temporal Cues Provided by Synchronized Walking Avatar on Elderly Gait
by Dane A. L. Miller, Hirotaka Uchitomi and Yoshihiro Miyake
Appl. Sci. 2024, 14(18), 8374; https://doi.org/10.3390/app14188374 - 18 Sep 2024
Viewed by 498
Abstract
Aging often leads to elderly gait characterized by slower speeds, shorter strides, and increased cycle time; improving gait can significantly enhance the quality of life. Early gait training can help reduce gait impairment later on. Augmented reality (AR) technologies have shown promise in gait training, providing real-time feedback and guided exercises to improve walking patterns and gait parameters. The aim of this study was to observe the effects of gradual spatial and temporal cues provided by a synchronized walking avatar on the gait of elderly participants. This experiment involved 19 participants aged over 70 years, who walked while interacting with a synchronized walking avatar that provided audiovisual spatial and temporal cues. Spatial cueing and temporal cueing were provided through distance changes and phase difference changes, respectively. The WalkMate AR system was used to synchronize the avatar’s walking cycle with the participants’, delivering auditory cues matched to foot contacts. This study assessed the immediate and carry-over effects of changes in distance and phase difference on stride length, cycle time, and gait speed. The results indicate that gradual spatial and temporal cueing significantly influences elderly gait parameters, with potential applications in gait rehabilitation and training. Full article
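For readers unfamiliar with the gait measures analyzed above, the following sketch (Python) shows how cycle time, stride length, and gait speed can be derived from successive foot-contact events of the same foot; the contact data are hypothetical and the computation is illustrative, not the authors' exact pipeline.

```python
import numpy as np

def gait_parameters(contact_times, contact_positions):
    """Derive basic gait parameters from successive contacts of the same foot:
    mean cycle time (s), stride length (m), and gait speed (m/s).

    contact_times:     sequence of heel-strike timestamps in seconds.
    contact_positions: sequence of (x, y) heel positions in metres.
    """
    times = np.asarray(contact_times, dtype=float)
    pos = np.asarray(contact_positions, dtype=float)
    cycle_times = np.diff(times)                              # time per gait cycle
    stride_lengths = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    gait_speeds = stride_lengths / cycle_times
    return cycle_times.mean(), stride_lengths.mean(), gait_speeds.mean()

# Hypothetical right-foot contacts recorded during a walking trial.
t = [0.0, 1.12, 2.26, 3.41, 4.58]
p = [(0.0, 0.0), (1.02, 0.0), (2.01, 0.01), (3.05, 0.0), (4.02, 0.02)]
cycle, stride, speed = gait_parameters(t, p)
print(f"cycle time {cycle:.2f} s, stride {stride:.2f} m, speed {speed:.2f} m/s")
```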

27 pages, 20444 KiB  
Article
Investigating User Experience of an Immersive Virtual Reality Simulation Based on a Gesture-Based User Interface
by Teemu H. Laine and Hae Jung Suk
Appl. Sci. 2024, 14(11), 4935; https://doi.org/10.3390/app14114935 - 6 Jun 2024
Viewed by 1560
Abstract
The affordability of equipment and availability of development tools have made immersive virtual reality (VR) popular across research fields. Gesture-based user interface has emerged as an alternative method to handheld controllers to interact with the virtual world using hand gestures. Moreover, a common goal for many VR applications is to elicit a sense of presence in users. Previous research has identified many factors that facilitate the evocation of presence in users of immersive VR applications. We investigated the user experience of Four Seasons, an immersive virtual reality simulation where the user interacts with a natural environment and animals with their hands using a gesture-based user interface (UI). We conducted a mixed-method user experience evaluation with 21 Korean adults (14 males, 7 females) who played Four Seasons. The participants filled in a questionnaire and answered interview questions regarding presence and experience with the gesture-based UI. The questionnaire results indicated high ratings for presence and gesture-based UI, with some issues related to the realism of interaction and lack of sensory feedback. By analyzing the interview responses, we identified 23 potential presence factors and proposed a classification for organizing presence factors based on the internal–external and dynamic–static dimensions. Finally, we derived a set of design principles based on the potential presence factors and demonstrated their usefulness for the heuristic evaluation of existing gesture-based immersive VR experiences. The results of this study can be used for designing and evaluating presence-evoking gesture-based VR experiences. Full article

18 pages, 7366 KiB  
Article
Realistic Texture Mapping of 3D Medical Models Using RGBD Camera for Mixed Reality Applications
by Cosimo Aliani, Alberto Morelli, Eva Rossi, Sara Lombardi, Vincenzo Yuto Civale, Vittoria Sardini, Flavio Verdino and Leonardo Bocchi
Appl. Sci. 2024, 14(10), 4133; https://doi.org/10.3390/app14104133 - 13 May 2024
Cited by 3 | Viewed by 933
Abstract
Augmented and mixed reality in the medical field is becoming increasingly important. The creation and visualization of digital models similar to reality could be a great help to increase the user experience during augmented or mixed reality activities like surgical planning and educational, training and testing phases of medical students. This study introduces a technique for enhancing a 3D digital model reconstructed from cone-beam computed tomography images with its real coloured texture using an Intel D435 RGBD camera. This method is based on iteratively projecting the two models onto a 2D plane, identifying their contours and then minimizing the distance between them. Finally, the coloured digital models were displayed in mixed reality through a Microsoft HoloLens 2 and an application to interact with them using hand gestures was developed. The registration error between the two 3D models evaluated using 30,000 random points indicates values of: 1.1 ± 1.3 mm on the x-axis, 0.7 ± 0.8 mm on the y-axis, and 0.9 ± 1.2 mm on the z-axis. This result was achieved in three iterations, starting from an average registration error on the three axes of 1.4 mm to reach 0.9 mm. The heatmap created to visualize the spatial distribution of the error shows how it is uniformly distributed over the surface of the pointcloud obtained with the RGBD camera, except for some areas of the nose and ears where the registration error tends to increase. The obtained results indicate that the proposed methodology seems effective. In addition, since the used RGBD camera is inexpensive, future approaches based on the simultaneous use of multiple cameras could further improve the results. Finally, the augmented reality visualization of the obtained result is innovative and could provide support in all those cases where the visualization of three-dimensional medical models is necessary. Full article
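The per-axis registration errors quoted above (mean ± standard deviation in millimetres) can be computed once corresponding points between the two aligned models are available. The sketch below (Python, using SciPy's k-d tree) illustrates one way to do this with nearest-neighbour correspondences on a 30,000-point random sample; this pairing strategy is an assumption for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def per_axis_registration_error(model_points, scan_points, n_samples=30000, seed=0):
    """Per-axis registration error (mean and std of absolute differences, in mm)
    between two aligned point clouds, using nearest-neighbour correspondences
    on a random subset of the scan."""
    model = np.asarray(model_points, dtype=float)
    scan = np.asarray(scan_points, dtype=float)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(scan), size=min(n_samples, len(scan)), replace=False)
    sample = scan[idx]
    _, nn = cKDTree(model).query(sample)     # nearest model point for each sample
    diffs = np.abs(sample - model[nn])       # per-axis absolute differences
    return diffs.mean(axis=0), diffs.std(axis=0)

# Usage (point clouds in millimetres, already roughly aligned):
# mean_xyz, std_xyz = per_axis_registration_error(cbct_points, rgbd_points)
# print(mean_xyz, std_xyz)   # length-3 arrays for the x, y, and z axes
```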

14 pages, 3573 KiB  
Article
Participatory Exhibition-Viewing Using Augmented Reality and Analysis of Visitor Behavior
by Chun-I Lee, Yen-Hsi Pan and Brian Chen
Appl. Sci. 2024, 14(9), 3579; https://doi.org/10.3390/app14093579 - 24 Apr 2024
Viewed by 1225
Abstract
Augmented reality (AR) is rapidly becoming a popular technology for exhibitions. The extended content provided through virtual elements offers a higher level of interactivity and can increase the appeal of the exhibition for younger viewers, in particular. However, AR technology in exhibition settings is typically utilized to extend the effects of exhibits, focusing solely on individual experiences and lacking in shared social interactions. In order to address this limitation, in this study, we used AR technology to construct a participatory exhibition-viewing system in the form of an AR mobile application (app), “Wander Into Our Sea”. This system was developed as a component of the 2022 Greater Taipei Biennial of Contemporary Art exhibition titled “Log Into Our Sea”. The app features two modes: exhibition-viewing mode and message mode. The first embodies passive exhibition-viewing while the second offers channels for active participation. The app has three functions: (1) in exhibition mode, visitors passively view the exhibition content through the AR lens, (2) in message mode, visitors can use the AR lens to leave messages in the 3D space of the exhibition to become part of the exhibit, and (3) during the use of either mode, the app collects data on visitor behavior and uploads it to a cloud to create a research database. The third function allowed us to compare the behaviors of exhibition visitors while they used the two modes. Results revealed that without restricting the ways and sequences in which AR content was viewed, there were no significant differences in the duration of viewing, or the distance covered by visitors between the two modes. However, the paths they took were more concentrated in the exhibition-viewing mode, which indicates that this mode encouraged visitors to view the exhibit in accordance with the AR content. In contrast, in message mode, visitors were encouraged to leave text messages and read those left by others, which created disorganized unpredictable paths. Our study demonstrates an innovative application of AR positioning within an interactive exhibition-viewing system, showcasing a novel way to engage visitors and enrich their experience. Full article
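The behavioral comparison described above relies on metrics such as viewing duration and path distance derived from logged visitor positions. A minimal sketch (Python) of how such metrics could be computed from a timestamped position log follows; the log format and values are hypothetical, not the app's actual schema.

```python
import math

def visit_metrics(track):
    """Compute viewing duration (s) and path distance (m) from a visitor's
    timestamped position log: a list of (t_seconds, x, y) tuples."""
    duration = track[-1][0] - track[0][0]
    distance = sum(
        math.dist((x1, y1), (x2, y2))
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    return duration, distance

# Hypothetical log uploaded by the app for one visitor in exhibition mode.
log = [(0.0, 0.0, 0.0), (12.5, 1.8, 0.4), (30.2, 3.1, 2.0), (55.9, 3.0, 4.6)]
dur, dist = visit_metrics(log)
print(f"duration {dur:.1f} s, path length {dist:.1f} m")
```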

11 pages, 3044 KiB  
Article
Towards the Emergence of the Medical Metaverse: A Pilot Study on Shared Virtual Reality for Orthognathic–Surgical Planning
by Jari Kangas, Jorma Järnstedt, Kimmo Ronkainen, John Mäkelä, Helena Mehtonen, Pertti Huuskonen and Roope Raisamo
Appl. Sci. 2024, 14(3), 1038; https://doi.org/10.3390/app14031038 - 25 Jan 2024
Cited by 2 | Viewed by 1053
Abstract
Three-dimensional (3D) medical images are used for diagnosis and in surgical operation planning. Computer-assisted surgical simulations (CASS) are essential for complex surgical procedures that are often performed in an interdisciplinary manner. Traditionally, the participants study the designs on the same display. In 3D virtual reality (VR) environments, the planner is wearing a head-mounted display (HMD). The designs can be then examined in VR by other persons wearing HMDs, which is a practical use case for the medical metaverse. A multi-user VR environment was built for the planning of an orthognathic–surgical (correction of facial skeleton) operation. Four domain experts (oral and maxillofacial radiologists) experimented with the pilot system and found it useful. It enabled easier observation of the model and a better understanding of the structures. There was a voice connection and co-operation during the procedure was natural. The planning task is complex, leading to a certain level of complexity in the user interface. Full article

13 pages, 1288 KiB  
Article
Attentional Bias Modification Training in Virtual Reality: Evaluation of User Experience
by María Teresa Mendoza-Medialdea, Ana Carballo-Laza, Mariarca Ascione, Franck-Alexandre Meschberger-Annweiler, Bruno Porras-Garcia, Marta Ferrer-Garcia and José Gutiérrez-Maldonado
Appl. Sci. 2024, 14(1), 222; https://doi.org/10.3390/app14010222 - 26 Dec 2023
Viewed by 1083
Abstract
Recent technological advances have paved the way for incorporating virtual reality (VR) into attentional bias modification training (ABMT) for the treatment of eating disorders. An important consideration in this therapeutic approach is ensuring the ease and comfort of users of the hardware and software, preventing them from becoming additional obstacles during treatment. To assess this, 68 healthy participants engaged in an ABMT experiment aimed at evaluating various factors, including usability as well as the participants’ comfort while using the VR equipment, task-induced fatigue, and attitudes towards the technology. Our results indicated a favorable usability level for the ABMT proposed in this study. While their discomfort, anxiety, and fatigue increased during the task, these did not significantly impact its execution. However, heightened anxiety and fatigue were linked to lower evaluations of software usability. Other variables considered in the experiment did not notably affect the task. Full article

21 pages, 3273 KiB  
Article
Design of an Immersive Virtual Reality Framework to Enhance the Sense of Agency Using Affective Computing Technologies
by Amalia Ortiz and Sonia Elizondo
Appl. Sci. 2023, 13(24), 13322; https://doi.org/10.3390/app132413322 - 17 Dec 2023
Cited by 3 | Viewed by 1857
Abstract
Virtual Reality is expanding its use to several fields of application, including health and education. The continuous growth of this technology comes with new challenges related to the ways in which users feel inside these virtual environments. There are various guidelines on ways to enhance users’ virtual experience in terms of immersion or presence. Nonetheless, there is no extensive research on enhancing the sense of agency (SoA), a phenomenon which refers to the self-awareness of initiating, executing, and controlling one’s actions in the world. After reviewing the state of the art of technologies developed in the field of Affective Computing (AC), we propose a framework for designing immersive virtual environments (IVE) to enhance the users’ SoA. The framework defines the flow of interaction between users and the virtual world, as well as the AC technologies required for each interactive component to recognise, interpret and respond coherently within the IVE in order to enhance the SoA. Full article

16 pages, 391 KiB  
Article
Performance, Emotion, Presence: Investigation of an Augmented Reality-Supported Concept for Flight Training
by Birgit Moesl, Harald Schaffernak, Wolfgang Vorraber, Reinhard Braunstingl and Ioana Victoria Koglbauer
Appl. Sci. 2023, 13(20), 11346; https://doi.org/10.3390/app132011346 - 16 Oct 2023
Cited by 1 | Viewed by 1512
Abstract
Augmented reality (AR) could be a means for a more sustainable education of the next generation of pilots. This study aims to assess an AR-supported training concept for approach to landing, which is the riskiest phase of flying an aircraft and the most difficult to learn. The evaluation was conducted with 59 participants (28 women and 31 men) in a pretest–post-test control group design. No significant effect of the AR-supported training was observed when comparing the experimental and the control groups. However, the results show that for the experimental group that trained with AR, higher performance in post-test was associated with higher AR presence and comfort with AR during training. Although both gender groups improved their approach quality after training, the improvement was larger in women as compared to men. Trainees’ workload, fear of failure, and negative emotions decreased in post-test as compared to pre-test, but the decrease was significantly larger in women than in men. The experimental group who used AR support during training showed improved performance despite the absence of AR support in post-test. However, the AR-based training concept had a similar effect to conventional simulator training. Although more research is necessary to explore the training opportunities in AR and mixed reality, the results of this study indicate that such an application would be beneficial to bridge the gap between theoretical and practical instruction. Full article

18 pages, 14314 KiB  
Article
Wine Production through Virtual Environments with a Focus on the Teaching–Learning Process
by Danis Tapia, Diego Illescas, Walter Santamaría and Jessica S. Ortiz
Appl. Sci. 2023, 13(19), 10823; https://doi.org/10.3390/app131910823 - 29 Sep 2023
Cited by 3 | Viewed by 1512
Abstract
This paper focuses on the application of the hardware-in-the-loop (HIL) technique in the winemaking process. The HIL technique provides an effective methodology to test and verify the automatic control of industrial processes in 3D laboratory environments. Two parts are considered: (i) software, which consists of the virtualization of the wine process in order to generate a realistic work environment that allows the student to manipulate the system while visualizing the changes in the process; and (ii) hardware, through which the process control is implemented in ladder language in a PLC S7 1200 AC/DC/RLY (programmable logic controller). Bidirectional Ethernet TCP/IP communication is established, achieving a client–server architecture. This article highlights the main advantages of the HIL technique, such as its ability to simulate complex and extreme scenarios that would be difficult or expensive to recreate in a real environment. In addition, real-time testing of the hardware and software to implement the control system is performed, allowing for fast and accurate responses. Finally, a usability table is obtained that demonstrates the benefits of performing industrial process control work in virtual work environments, focusing the development on meaningful learning processes for engineering students. Full article
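The client–server exchange described above, between the virtualized process and the S7-1200 PLC over Ethernet TCP/IP, could be prototyped with the third-party python-snap7 library, as in the sketch below. The IP address, data-block number, and byte offsets are hypothetical, and the library choice is an assumption rather than the authors' implementation.

```python
import snap7
from snap7.util import get_bool, set_real

PLC_IP = "192.168.0.10"   # hypothetical address of the S7-1200
DB_NUMBER = 1             # hypothetical data block holding process I/O

client = snap7.client.Client()
client.connect(PLC_IP, rack=0, slot=1)

# Send the simulated fermentation-tank level (REAL at offset 0) to the PLC.
buffer = bytearray(4)
set_real(buffer, 0, 63.5)                  # level in percent, from the 3D simulation
client.db_write(DB_NUMBER, 0, buffer)

# Read back the valve command (BOOL at byte 4, bit 0) computed by the ladder program.
data = client.db_read(DB_NUMBER, 4, 1)
valve_open = get_bool(data, 0, 0)
print("Drain valve open:", valve_open)

client.disconnect()
```

In a HIL loop, this read/write cycle would run continuously so that the ladder logic in the PLC drives the virtual actuators while the simulation feeds back sensor values in real time.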

15 pages, 1212 KiB  
Article
Augmented Reality Applications for Synchronized Communication in Construction: A Review of Challenges and Opportunities
by Rita El Kassis, Steven K. Ayer and Mounir El Asmar
Appl. Sci. 2023, 13(13), 7614; https://doi.org/10.3390/app13137614 - 28 Jun 2023
Cited by 7 | Viewed by 2187
Abstract
Many researchers in the construction field have explored the utilization of augmented reality (AR) and its impact on the industry. Previous studies have shown potential uses for AR in the construction industry. However, a comprehensive critical review exploring the ways in which AR supports synchronized communication is still missing. This paper aims to fill this gap by examining trends identified in the literature and by analyzing both beneficial and challenging attributes. This work was performed by collecting numerous journal and conference papers, using keywords including “augmented reality”, “construction”, and “synchronous communication”. The papers were then categorized based on the reported attributes that were indicated to be challenges or benefits. Throughout the analysis, several benefits were consistently reported, including training, visualization, instantly sharing information, decision making, and intuitive interaction. Similarly, several challenges were consistently reported, such as difficulty in manipulation, unfriendly interface, device discomfort, and sun brightness. Regarding other attributes, such as field of view, cost, safety hazards, and hands-free mode, researchers provided divergent reports regarding whether they were beneficial or detrimental to AR communication. These findings provide valuable guidance for future researchers and practitioners, enabling them to leverage AR for synchronized communication in ways that consistently offer value. Full article

22 pages, 7522 KiB  
Article
Upper Body Pose Estimation Using Deep Learning for a Virtual Reality Avatar
by Taravat Anvari, Kyoungju Park and Ganghyun Kim
Appl. Sci. 2023, 13(4), 2460; https://doi.org/10.3390/app13042460 - 14 Feb 2023
Cited by 3 | Viewed by 3869
Abstract
With the popularity of virtual reality (VR) games and devices, demand is increasing for estimating and displaying user motion in VR applications. Most pose estimation methods for VR avatars exploit inverse kinematics (IK) and online motion capture methods. In contrast to existing approaches, we aim for a stable process with less computation, usable in a small space. Therefore, our strategy has minimum latency for VR device users, from high-performance to low-performance, in multi-user applications over the network. In this study, we estimate the upper body pose of a VR user in real time using a deep learning method. We propose a novel method inspired by a classical regression model and trained with 3D motion capture data. Thus, our design uses a convolutional neural network (CNN)-based architecture from the joint information of motion capture data and modifies the network input and output to obtain input from a head and both hands. After feeding the model with properly normalized inputs, a head-mounted display (HMD), and two controllers, we render the user’s corresponding avatar in VR applications. We used our proposed pose estimation method to build single-user and multi-user applications, measure their performance, conduct a user study, and compare the results with previous methods for VR avatars. Full article
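The regression approach described above maps tracked head and hand signals to upper-body joint positions. The sketch below (Python/PyTorch) shows a minimal 1D-convolutional regressor over a short temporal window of tracker inputs; the window length, channel widths, and number of output joints are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class UpperBodyPoseRegressor(nn.Module):
    """1D-CNN regressor from HMD + two-controller tracking to upper-body joints.

    Input:  (batch, 21, window)   - 3 trackers x (position 3 + quaternion 4) per frame
    Output: (batch, num_joints*3) - 3D positions of the predicted joints
    """
    def __init__(self, in_channels: int = 21, num_joints: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the temporal dimension
        )
        self.head = nn.Linear(128, num_joints * 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

# One batch of normalized tracker histories (batch=8, 21 channels, 16 frames).
model = UpperBodyPoseRegressor()
poses = model(torch.randn(8, 21, 16))
print(poses.shape)   # torch.Size([8, 30])
```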