LifeXR: Concepts, Technology and Design for Everyday XR

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 37283

Special Issue Editors

Guest Editor
School of IT Convergence, University of Ulsan, Ulsan 44610, Republic of Korea
Interests: virtual/mixed reality; human–computer interaction; virtual human

Guest Editor
Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea
Interests: virtual/mixed reality; human–computer interaction

Guest Editor
Center for Imaging Media Research, Korea Institute of Science and Technology, Seoul 136-791, Korea
Interests: virtual/mixed reality; computer graphics; human–computer interaction

Special Issue Information

Dear Colleagues,

With the introduction of inexpensive head-mounted displays circa 2012, the craze surrounding virtual reality (VR), long touted as the most promising and exciting future media form, was rekindled yet again. It has also sparked huge investments and rapid development of related products, software, content, devices, and sensors, and has even helped instigate a similar phenomenon for augmented reality (AR).

After about a decade, while VR/AR may have become important media technologies in the industrial sector, consumer-level mass-market VR/AR has not become a reality. This Special Issue explores the concept of "LifeXR": asking where the current state of XR (i.e., VR/AR/AVR) stands in the everyday life of the average consumer, and exploring ways to make everyday XR happen. For example, leveraging smartphones and extending mobile XR platforms to be more usable and inexpensive would be one important way to realize LifeXR. Taking advantage of now-ubiquitous 5G communication and IoT devices to secure the computational power needed to support realism and an immersive user experience on LifeXR platforms could be another. The types of content should go beyond typical areas such as games, training, and medicine so that they can be used on a daily basis by average users; their themes should be found in the midst of our daily lives.

In essence, we ask: how can VR/AR, or LifeXR, improve the quality of human lives and make our mundane daily routines more exciting?

We invite papers that address the conceptualization of LifeXR, technologies specific to LifeXR, and design solutions for LifeXR, including but not limited to:

  •  New mobile XR/LifeXR platforms;
  •  Usability of mobile XR/LifeXR;
  •  User experience of mobile XR/LifeXR;
  •  Sickness in mobile XR/LifeXR;
  •  Interaction design for mobile XR/LifeXR;
  •  Interfaces for mobile XR/LifeXR;
  •  Tracking for mobile XR/LifeXR;
  •  Context awareness for mobile XR/LifeXR;
  •  5G for mobile XR/LifeXR;
  •  IoT for mobile XR/LifeXR;
  •  Multimode XR and LifeXR;
  •  Multitasking LifeXR;
  •  LifeXR applications;
  •  LifeXR in everyday spaces (vehicle, office, kitchen, living room, restaurant, bed, etc.).

Prof. Dr. Dongsik Jo
Prof. Dr. Gerard J. Kim
Dr. Jae-in Hwang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • New mobile XR/LifeXR platforms
  • Usability of mobile XR/LifeXR
  • User experience of mobile XR/LifeXR
  • Sickness in mobile XR/LifeXR
  • Interaction design for mobile XR/LifeXR
  • Interfaces for mobile XR/LifeXR
  • Tracking for mobile XR/LifeXR
  • Context awareness for mobile XR/LifeXR
  • 5G for mobile XR/LifeXR
  • IoT for mobile XR/LifeXR
  • Multimode XR and LifeXR
  • Multitasking LifeXR
  • LifeXR applications
  • LifeXR in everyday spaces (vehicle, office, kitchen, living room, restaurant, bed, etc.)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

27 pages, 3042 KiB  
Article
Spectare: Re-Designing a Stereoscope for a Cultural Heritage XR Experience
by Daniel Taipina and Jorge C. S. Cardoso
Electronics 2022, 11(4), 620; https://doi.org/10.3390/electronics11040620 - 17 Feb 2022
Cited by 2 | Viewed by 3282
Abstract
Stereoscopic photography was one of the main forms of visual communication in the second half of the 19th century. The experience of viewing stereoscopic photographs using stereoscopes is described as evoking memories of the past and feelings of presence in the depicted scenes, but also as fun and magical. The fact that these devices generate such impactful experiences is relevant for Cultural Heritage (CH), where we want visitors to have memorable experiences. Since classic stereoscopes are similar to contemporary smartphone-based Virtual Reality (VR) viewers, we questioned how the original viewing experience could be re-imagined to take advantage of current technologies. We have designed a new smartphone-based VR device—Spectare—targeted towards experiencing CH content (2D or 360° photos or videos, soundscapes, or other immersive content), while still maintaining a user experience close to the original. In this paper, we describe the design process and operation of the Spectare device. We also report on a usability evaluation with 20 participants and on field testing in which we applied the device to the visualization of CH content resulting from a digital reconstruction of the monastery of Santa Cruz in Coimbra, Portugal. The evaluations uncovered issues with the smartphone support piece of the device, but overall the device received a high usability score. Participants also classified the device as innovative, creative, impressive, and fun.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

14 pages, 2204 KiB  
Article
Effects on Co-Presence of a Virtual Human: A Comparison of Display and Interaction Types
by Daehwan Kim and Dongsik Jo
Electronics 2022, 11(3), 367; https://doi.org/10.3390/electronics11030367 - 26 Jan 2022
Cited by 13 | Viewed by 5107
Abstract
Recently, artificial intelligence (AI)-enabled virtual humans have been widely used in various fields in our everyday lives, such as for museum exhibitions and as information guides. Given the continued technological innovations in extended reality (XR), immersive display devices and interaction methods are evolving to provide a feeling of togetherness with a virtual human, termed co-presence. With regard to such technical developments, one main concern is how to improve the experience through the sense of co-presence as felt by participants. However, virtual human systems still have limited guidelines on effective methods, and there is a lack of research on how to visualize and interact with virtual humans. In this paper, we report a novel method to support a strong sense of co-presence with a virtual human, and we investigated its effects on co-presence through a comparison of display and interaction types. We conducted the experiment according to a specified scenario between the participant and the virtual human, and our experimental study showed that subjects who participated in an immersive 3D display with non-verbal interaction felt the greatest co-presence. Our results are expected to provide guidelines for constructing AI-based interactive virtual humans.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

26 pages, 7388 KiB  
Article
SEOUL AR: Designing a Mobile AR Tour Application for Seoul Sky Observatory in South Korea
by Soomin Shin and Yongsoon Choi
Electronics 2021, 10(20), 2552; https://doi.org/10.3390/electronics10202552 - 19 Oct 2021
Cited by 2 | Viewed by 2844
Abstract
Skyscrapers are symbolic local landmarks, and their prevalence is increasing across the world owing to recent advances in architectural technology. In Korea, the Lotte World Tower, now the tallest skyscraper in Seoul, was constructed in 2017. It also has an observatory deck called Seoul Sky, which is currently in operation. This study focuses on the design of Seoul AR, a mobile augmented reality (AR) tour application. Visitors can use Seoul AR when visiting the Seoul Sky Observatory, one of the representative landmarks of Seoul, and enjoy a 360° view of the entire landscape of Seoul from the observatory space. With Seoul AR, they can identify tourist attractions in Seoul through simple mission games. Users are also provided with information regarding the specific attraction they are viewing, as well as other information on transportation, popular restaurants, shopping places, etc., in order to increase the satisfaction of tourists visiting the Seoul Sky Observatory. The final design is revised through heuristic evaluation, and a study of users' satisfaction with Seoul AR is conducted through surveys completed by visitors to the Seoul Sky Observatory.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

21 pages, 22784 KiB  
Article
Multi-User Drone Flight Training in Mixed Reality
by Yong-Guk Go, Ho-San Kang, Jong-Won Lee, Mun-Su Yu and Soo-Mi Choi
Electronics 2021, 10(20), 2521; https://doi.org/10.3390/electronics10202521 - 15 Oct 2021
Cited by 5 | Viewed by 2901
Abstract
The development of services and applications involving drones is promoting the growth of the unmanned-aerial-vehicle industry. Moreover, the supply of low-cost compact drones has greatly contributed to the popularization of drone flying. However, flying first-person-view (FPV) drones requires considerable experience because the remote pilot views a video transmitted from a camera mounted on the drone. In this paper, we propose a remote training system for FPV drone flying in mixed reality, so that beginners who are inexperienced in FPV drone flight control can practice under the guidance of remote experts.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

16 pages, 2160 KiB  
Article
Integration of Extended Reality and a High-Fidelity Simulator in Team-Based Simulations for Emergency Scenarios
by Youngho Lee, Sun-Kyung Kim, Hyoseok Yoon, Jongmyung Choi, Hyesun Kim and Younghye Go
Electronics 2021, 10(17), 2170; https://doi.org/10.3390/electronics10172170 - 6 Sep 2021
Cited by 11 | Viewed by 3428
Abstract
Wearable devices such as smart glasses are considered promising assistive tools for information exchange in healthcare settings. We aimed to evaluate the usability and feasibility of smart glasses for team-based simulations constructed using a high-fidelity simulator. Two scenarios of patients with arrhythmia were developed to establish a procedure for interprofessional interactions via smart glasses using 15 h of simulation training. Three to four participants formed a team and played the roles of remote supporter or bedside trainee with smart glasses. Usability, attitudes towards the interprofessional health care team, and learning satisfaction were assessed. Using a 5-point Likert scale, from 1 (strongly disagree) to 5 (strongly agree), 31 participants reported that the smart glasses were easy to use (3.61 ± 0.95), that they felt confident during use (3.90 ± 0.87), and that they responded positively to long-term use (3.26 ± 0.89) and low levels of physical discomfort (1.96 ± 1.06). Learning satisfaction was high (4.65 ± 0.55), and most (84%) participants found the experience favorable. Key challenges included an unstable internet connection, poor resolution and display, and physical discomfort while using the smart glasses with accessories. We determined the feasibility and acceptability of smart glasses for interprofessional interactions within a team-based simulation. Participants responded favorably toward a smart glass-based simulation learning environment that would be applicable in clinical settings.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

14 pages, 7754 KiB  
Article
Production of Mobile English Language Teaching Application Based on Text Interface Using Deep Learning
by Yunsik Cho and Jinmo Kim
Electronics 2021, 10(15), 1809; https://doi.org/10.3390/electronics10151809 - 28 Jul 2021
Cited by 16 | Viewed by 2590
Abstract
This paper proposes a novel text interface using deep learning in a mobile platform environment and presents English language teaching applications created based on our interface. First, an interface for handwriting text is designed with a simple structure based on the touch-based input method of mobile platform applications. This input method is easier and more convenient than the existing graphical user interface (GUI), in which menu items such as buttons are selected repeatedly or step by step. Next, an interaction that intuitively facilitates behavior and decision making from the input text is proposed. We propose an interaction technique that recognizes text handwritten on the text interface through the Extended Modified National Institute of Standards and Technology (EMNIST) dataset and a convolutional neural network (CNN) model, and connects the text to a behavior. Finally, using the proposed interface, we create English language teaching applications that can effectively facilitate learning alphabet writing and words through handwriting. User satisfaction with the interface during the educational process is then analyzed and verified through a survey experiment.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)
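The recognition step in the abstract above rests on a standard CNN image-classification pipeline over 28×28 EMNIST glyphs. As a rough, self-contained illustration of the convolution operation such a network is built on (a sketch only, not the authors' implementation; the kernel values and random input are assumptions for demonstration):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation -- the core operation of a CNN layer."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# EMNIST glyphs are 28x28 grayscale; a 3x3 kernel yields a 26x26 feature map.
rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = np.ones((3, 3)) / 9.0  # simple averaging filter (illustrative choice)
feature_map = conv2d_valid(image, kernel)
```

A real classifier stacks several such (learned) filters with nonlinearities and pooling before a final fully connected layer over the EMNIST classes; deep learning frameworks perform this operation in optimized batched form.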

25 pages, 20051 KiB  
Article
Planar-Equirectangular Image Stitching
by Muhammad-Firdaus Syawaludin, Seungwon Kim and Jae-In Hwang
Electronics 2021, 10(9), 1126; https://doi.org/10.3390/electronics10091126 - 10 May 2021
Cited by 3 | Viewed by 4410
Abstract
360° cameras have served as a convenient tool for people to record their special moments or everyday lives. The supported panoramic view allows for an immersive experience with a virtual reality (VR) headset, thus adding viewer enjoyment. Nevertheless, these cameras cannot deliver the best angular resolution that a perspective camera may support. We put forward a solution by placing the perspective camera's planar image onto the pertinent region of interest (ROI) of the 360° camera's equirectangular image through planar-equirectangular image stitching. The proposed method includes (1) a tangent image-based stitching pipeline to handle the equirectangular image's spherical distortion, (2) a feature matching scheme to increase the correct feature match count, (3) ROI detection to find the relevant ROI on the equirectangular image, and (4) human visual system (HVS)-based image alignment to tackle the parallax error. Qualitative and quantitative experiments on a collected dataset showed improvement of the proposed planar-equirectangular image stitching over existing approaches: (1) less distortion in the stitching result, (2) a 29.0% increase in correct matches, (3) a 5.72° ROI position error from the ground truth, and (4) a lower aggregated alignment-distortion error than existing alignment approaches. We discuss possible improvement points and future research directions.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)
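The tangent-image pipeline mentioned in this abstract hinges on treating each equirectangular pixel as a direction on the viewing sphere, from which local tangent images can be rendered without the projection's polar distortion. A minimal sketch of that pixel-to-ray mapping (the axis and orientation conventions here are assumptions for illustration, not taken from the paper):

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular image to a unit ray on the
    viewing sphere. Assumed convention: u in [0, width) spans longitude
    [-pi, pi), and v in [0, height] spans latitude [pi/2, -pi/2]
    (the top row of the image looks 'up')."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# Under this convention the image centre maps to the forward direction.
ray = equirect_to_ray(2048, 1024, 4096, 2048)
```

Projecting rays like this onto a plane tangent to the sphere at a chosen centre (a gnomonic projection) yields locally perspective-like patches on which ordinary planar feature matching can operate.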

15 pages, 1766 KiB  
Article
CIRO: The Effects of Visually Diminished Real Objects on Human Perception in Handheld Augmented Reality
by Hanseob Kim, Taehyung Kim, Myungho Lee, Gerard Jounghyun Kim and Jae-In Hwang
Electronics 2021, 10(8), 900; https://doi.org/10.3390/electronics10080900 - 9 Apr 2021
Cited by 3 | Viewed by 2864
Abstract
Augmented reality (AR) scenes often inadvertently contain real-world objects that are not relevant to the main AR content, such as arbitrary passersby on the street. We refer to these real-world objects as content-irrelevant real objects (CIROs). CIROs may distract users from focusing on the AR content and bring about perceptual issues (e.g., depth distortion or physicality conflict). In a prior work, we carried out a comparative experiment investigating the effects of the degree of visual diminishment of such a CIRO on user perception of the AR content. Our findings revealed that the diminished representation had positive impacts on human perception, such as reducing distraction and increasing the presence of the AR objects in the real environment. However, in that work, the ground truth test was staged with perfect, artifact-free diminishment. In this work, we applied an actual real-time object diminishment algorithm on the handheld AR platform, which cannot be completely artifact-free in practice, and evaluated its performance both objectively and subjectively. We found that imperfect diminishment and visual artifacts can negatively affect the subjective user experience.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

14 pages, 7715 KiB  
Article
DeepHandsVR: Hand Interface Using Deep Learning in Immersive Virtual Reality
by Taeseok Kang, Minsu Chae, Eunbin Seo, Mingyu Kim and Jinmo Kim
Electronics 2020, 9(11), 1863; https://doi.org/10.3390/electronics9111863 - 6 Nov 2020
Cited by 25 | Viewed by 4735
Abstract
This paper proposes a deep learning-based hand interface that provides easy and realistic hand interactions in immersive virtual reality. The proposed interface is designed to provide a real-to-virtual direct hand interface that uses a controller to map a real hand gesture to a virtual hand in an easy and simple structure. In addition, we propose a gesture-to-action interface that expresses the process from gesture to action in real time, without the graphical user interface (GUI) used in existing interactive applications. This interface captures a 3D virtual hand gesture model as a 2D image and applies an image classification training process using a deep learning model, a convolutional neural network (CNN). The key objective of this process is to provide users with intuitive and realistic interactions that feature convenient operation in immersive virtual reality. To achieve this, an application that can compare and analyze the proposed interface and the existing GUI was developed. A survey experiment was then conducted to statistically analyze and evaluate the positive effects on the sense of presence through user satisfaction with the interface experience.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

14 pages, 8214 KiB  
Article
Exploring the Effects of Scale and Color Differences on Users’ Perception for Everyday Mixed Reality (MR) Experience: Toward Comparative Analysis Using MR Devices
by Kwang-seong Shin, Howon Kim, Jeong gon Lee and Dongsik Jo
Electronics 2020, 9(10), 1623; https://doi.org/10.3390/electronics9101623 - 2 Oct 2020
Cited by 10 | Viewed by 3069
Abstract
With continued technological innovations in the field of mixed reality (MR), wearable MR devices, such as head-mounted displays (HMDs), have been released and are frequently used in various fields, such as entertainment, training, education, and shopping. However, because each product has different parts and specifications in terms of design and manufacturing process, users find that virtual objects overlaid on real environments in MR are visualized differently depending on the scale and color rendered by the MR device. In this paper, we compare the effect of scale and color parameters on users' perceptions across different types of MR devices in order to improve their MR experiences in real life. We conducted two experiments (scale and color), and our experimental study showed that subjects in the scale perception experiment clearly tended to underestimate the size of virtual objects in comparison with real objects, and to overestimate color in MR environments.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)
