Topic Editors

Prof. Dr. Moldoveanu Alin
Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania

Dr. Anca Morar
Associate Professor, Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania

Dr. Robert Gabriel Lupu
Associate Professor, Faculty of Automatic Control and Computer Engineering, “Gheorghe Asachi” Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania

Extended Reality: Models and Applications

Abstract submission deadline
30 April 2025
Manuscript submission deadline
31 October 2025
Viewed by
1287

Topic Information

Dear Colleagues,

Virtual Reality and Augmented Reality have enjoyed exponential growth over the last decade, and forecasts predict similar growth ahead. VR is already a consumer technology with hundreds of millions of users, and AR is following closely.

Under the umbrella term Extended Reality (which also encompasses Mixed Reality), these technologies are poised to become an essential part of computing and human society, covering all types of human–computer interactions and human–human computer-mediated interactions.

Although over half a century old, these technologies were until recently restricted, due to cost and performance constraints, to only a handful of application types. Their recent explosion in the diversity and number of applications, comparable only to that of generic software in the 1960s or the internet boom, has exposed a clear lack of understanding of both their fundamental concepts and inner workings (such as immersion, presence, perception, and interaction modalities) and their development as complex software systems.

Thus, this Topic aims to advance the state of the art in XR through a focus on sound, well-designed models, experiments, and evaluation methods, covering a wide range of aspects: from core technologies and concepts to design, user experience, application development, and interdisciplinary aspects.

We welcome submissions presenting original research concepts, experiments, and results, as well as high-quality, rigorous, and useful reviews, on a wide variety of topics. The audience for this Topic includes VR, AR, and MR researchers, developers, industry experts, and end users. Considering the recent growth of XR, we expect a widely multidisciplinary audience—many readers might be unfamiliar with the domain; thus, all papers are expected to be highly accessible in terms of structure, terminology, and gradual introduction of their advanced elements.

Prof. Dr. Moldoveanu Alin
Dr. Anca Morar
Dr. Robert Gabriel Lupu
Topic Editors

Keywords

  • extended reality (virtual reality, augmented reality, mixed reality)
  • models and methods for XR
  • XR applications (in medicine, education, industry, arts, entertainment, etc.)
  • immersion
  • 3D simulations, visualizations, modeling, animations, procedural generation
  • ergonomics and usability in XR
  • UX/UI in XR
  • advanced interactions in XR (body, hands, facial, and eye tracking, gestures, touch and tangibles, localization and tracking, biosensors, wearables, BCI, etc.)
  • locomotion and navigation in XR
  • acoustics in XR
  • haptics in XR
  • evaluation, metrics, and analytics for XR
  • embodiment, avatars, virtual humans, perception and cognition, transhumanism
  • XR related technologies and fields (games, communications, multi-user, AI, cloud, IoT, big data, security and privacy, blockchain, GPUs, software architectures, 360 videos, 3D scanning and reconstruction, user experience, collaborative work, etc.)
  • interdisciplinary aspects of XR (social, psychological, medical, ethical, legal, economic, human behaviour, etc.)

Participating Journals

Journal Name   Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Electronics    2.6             5.3         2012            16.8 days                 CHF 2400
Information    2.4             6.9         2010            14.9 days                 CHF 1600
Mathematics    2.3             4.0         2013            17.1 days                 CHF 2600
Sensors        3.4             7.3         2001            16.8 days                 CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (1 paper)

27 pages, 28326 KiB  
Article
Full-Body Pose Estimation of Humanoid Robots Using Head-Worn Cameras for Digital Human-Augmented Robotic Telepresence
by Youngdae Cho, Wooram Son, Jaewan Bak, Yisoo Lee, Hwasup Lim and Youngwoon Cha
Mathematics 2024, 12(19), 3039; https://doi.org/10.3390/math12193039 - 28 Sep 2024
Cited by 1 | Viewed by 701
Abstract
We envision a telepresence system that enhances remote work by facilitating both physical and immersive visual interactions between individuals. However, during robot teleoperation, communication often lacks realism, as users see the robot’s body rather than the remote individual. To address this, we propose a method for overlaying a digital human model onto a humanoid robot using XR visualization, enabling an immersive 3D telepresence experience. Our approach employs a learning-based method to estimate the 2D poses of the humanoid robot from head-worn stereo views, leveraging a newly collected dataset of full-body poses for humanoid robots. The stereo 2D poses and sparse inertial measurements from the remote operator are optimized to compute 3D poses over time. The digital human is localized from the perspective of a continuously moving observer, utilizing the estimated 3D pose of the humanoid robot. Our moving camera-based pose estimation method does not rely on any markers or external knowledge of the robot’s status, effectively overcoming challenges such as marker occlusion, calibration issues, and dependencies on headset tracking errors. We demonstrate the system in a remote physical training scenario, achieving real-time performance at 40 fps, which enables simultaneous immersive and physical interactions. Experimental results show that our learning-based 3D pose estimation method, which operates without prior knowledge of the robot, significantly outperforms alternative approaches requiring the robot’s global pose, particularly during rapid headset movements, achieving markerless digital human augmentation from head-worn views.
(This article belongs to the Topic Extended Reality: Models and Applications)
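As an illustrative aside on the stereo-to-3D step the abstract describes (lifting matched 2D keypoints from a head-worn stereo pair into 3D), the standard geometric building block is linear (DLT) triangulation. The sketch below is not the paper's implementation; the function name, toy camera matrices, and baseline are our own illustrative assumptions for a calibrated, noise-free stereo rig.

```python
import numpy as np

def triangulate_point(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one keypoint from a stereo pair.

    P_left, P_right: 3x4 camera projection matrices.
    uv_left, uv_right: (u, v) image coordinates of the same joint in each view.
    Returns the 3D point in the shared reference frame.
    """
    u1, v1 = uv_left
    u2, v2 = uv_right
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The least-squares solution of A X = 0 is the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: left camera at the origin, right camera offset by a 0.1 m baseline.
P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

point = np.array([0.2, 0.1, 2.0])          # ground-truth 3D joint position
h = np.append(point, 1.0)
uv_l = (P_l @ h)[:2] / (P_l @ h)[2]        # project into each view
uv_r = (P_r @ h)[:2] / (P_r @ h)[2]

recovered = triangulate_point(P_l, P_r, uv_l, uv_r)
```

In a real pipeline such as the one the paper describes, these per-frame triangulated joints would feed a temporal optimization together with the inertial measurements, rather than being used directly.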
