Novel Advances in Collaborative Environments for Virtual, Augmented, Mixed and Extended Reality

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: closed (30 October 2024) | Viewed by 6145

Special Issue Editors


Guest Editor
Department of Computer Science, Sapienza University of Rome, 00185 Rome, Italy
Interests: computer vision (feature extraction and pattern analysis); scene and event understanding (by people and/or vehicles and/or objects); human–computer interaction (pose estimation and gesture recognition by hands and/or body); sketch-based interaction (handwriting and freehand drawing); human–behaviour recognition (actions, emotions, feelings, affects, and moods by hands, body, facial expressions, and voice); biometric analysis (person re-identification by body visual features and/or gait and/or posture/pose); artificial intelligence (machine/deep learning); medical image analysis (MRI, ultrasound, X-rays, PET, and CT); multimodal fusion models; brain–computer interfaces (interaction and security systems); signal processing; visual cryptography (by RGB images); smart environments and natural interaction (with and without virtual/augmented reality); robotics (monitoring and surveillance systems with PTZ cameras, UAVs, AUVs, rovers, and humanoids)
Special Issues, Collections and Topics in MDPI journals

Guest Editor
CNR, ISPC, 34900 Rome, Italy
Interests: WebXR; immersive VR; 3D interfaces; spatial user interfaces; social VR
Special Issues, Collections and Topics in MDPI journals

Guest Editor Assistant
Department of Computer Science, Sapienza University, 00198 Rome, Italy
Interests: computer science; computer vision; virtual reality; multimodal interaction; human–computer interaction

Special Issue Information

Dear Colleagues,

Recent technological advancements have brought intense growth in computational power and sensor accuracy, reducing production costs and encouraging researchers and developers to work with advanced immersive devices, e.g., head-mounted displays (HMDs). Moreover, machine learning and deep learning (ML and DL) are now exploited in almost every application, since a considerable number of advanced tasks can be completed with remarkable results, such as hand tracking, natural interaction, redirection/reorientation in virtual locomotion, and action recognition. In this context, multi-user environments present numerous challenges and extensive growth potential. Various directions can be explored, including the latest networking technologies that provide high-speed communication to reduce latency and increase data transfer; however, the synchronization of virtual environments still has room to grow, and concurrency management remains an open problem. Moreover, multi-user environments facilitate the interconnection of multidisciplinary applications, from social studies to simulative experience analysis.

Topics of interest in this Special Issue include, but are not limited to, the following:

  • AI in data segmentation, detection, classification, or analysis;
  • Network-based applications for VR, AR, MR, or XR environments;
  • Multidisciplinary applications exploiting multi-user VR/AR/MR/XR environments;
  • AI for interactive VR, AR, MR, or XR visualization;
  • Interactive web-based VR, AR, MR, or XR applications, tools, or services;
  • Interaction design process and methods for VR, AR, MR, or XR;
  • Human–computer interaction models for VR, AR, MR, or XR;
  • Spatial computing and 3D interfaces for VR, AR, MR, or XR;
  • Neural radiance fields (NeRF) in virtual spaces.

Dr. Danilo Avola
Dr. Bruno Fanini
Dr. Marco Raoul Marini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • augmented reality
  • mixed reality
  • extended reality
  • human–computer interface
  • human–computer interaction
  • gesture recognition
  • virtual collaborative environment
  • virtual shared environment
  • machine learning
  • deep learning
  • artificial intelligence
  • network-based virtual environment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

23 pages, 5077 KiB  
Article
Collaboration in a Virtual Reality Serious Escape Room in the Metaverse Improves Academic Performance and Learners’ Experiences
by Antonia-Maria Pazakou, Stylianos Mystakidis and Ioannis Kazanidis
Future Internet 2025, 17(1), 21; https://doi.org/10.3390/fi17010021 - 6 Jan 2025
Viewed by 784
Abstract
The evolving potential of virtual reality and the Metaverse to create immersive, engaging learning experiences and of digital escape room games to provide opportunities for active, autonomous, personalised learning has brought both to the forefront for educators seeking to transform traditional educational settings. This study investigated the impact of collaboration within a virtual reality serious escape room game in the Metaverse that was designed for English as a Foreign Language (EFL) learners to explore how this approach influences their academic performance and overall learning experience. A comparative research approach was adopted using twenty (n = 20) adult learners divided into two equal-sized groups; the experimental group completed the virtual reality escape room in pairs, while the control group completed it individually. Mixed methods were employed, utilising a pre- and post-test to measure academic performance, as well as a questionnaire and two focus groups to evaluate participants' learning experiences. Results indicated a trend of learners working collaboratively showing better learning outcomes and experience, offering valuable insights regarding the integration of serious Metaverse games in language-focused educational contexts.

15 pages, 8557 KiB  
Article
Personalized Visualization of the Gestures of Parkinson’s Disease Patients with Virtual Reality
by Konstantinos Sakkas, Eirini Georgia Dimitriou, Niki Eleni Ntagka, Nikolaos Giannakeas, Konstantinos Kalafatakis, Alexandros T. Tzallas and Evripidis Glavas
Future Internet 2024, 16(9), 305; https://doi.org/10.3390/fi16090305 - 23 Aug 2024
Viewed by 1200
Abstract
Parkinson’s disease is a neurological disorder characterized by motor and non-motor symptoms. Assessment methods, despite the many years of existence of the disease, lack individualized visualization. On the other hand, virtual reality promises immersion and realism. In this paper, we develop an integrated system for visualizing the gestures of Parkinson’s disease patients in a virtual reality environment. With this application, clinicians will have information about the unique motor patterns and challenges they must address in each individual patient’s case, while the collected data can travel and be easily and instantly visualized in any location. At the beginning of this research, the current terms of immersive technologies in conjunction with data visualization and Parkinson’s disease are described. Through an extensive systematic literature review, the technological developments in the field of Parkinson’s data visualization are presented. The findings of the review lead to the experimental procedure and implementation of the application. The conclusions drawn from this work fuel future extensions on the contribution of immersive technologies to various diseases.

15 pages, 2087 KiB  
Article
Exploring Data Input Problems in Mixed Reality Environments: Proposal and Evaluation of Natural Interaction Techniques
by Jingzhe Zhang, Tiange Chen, Wenjie Gong, Jiayue Liu and Jiangjie Chen
Future Internet 2024, 16(5), 150; https://doi.org/10.3390/fi16050150 - 27 Apr 2024
Cited by 1 | Viewed by 1503
Abstract
Data input within mixed reality environments poses significant interaction challenges, notably in immersive visual analytics applications. This study assesses five numerical input techniques: three benchmark methods (Touch-Slider, Keyboard, Pinch-Slider) and two innovative multimodal techniques (Bimanual Scaling, Gesture and Voice). An experimental design was employed to compare these techniques’ input efficiency, accuracy, and user experience across varying precision and distance conditions. The findings reveal that multimodal techniques surpass slider methods in input efficiency yet are comparable to keyboards; the voice method excels in reducing cognitive load but falls short in accuracy; and the scaling method marginally leads in user satisfaction but imposes a higher physical load. Furthermore, this study outlines these techniques’ pros and cons and offers design guidelines and future research directions.

25 pages, 13896 KiB  
Article
A New Generation of Collaborative Immersive Analytics on the Web: Open-Source Services to Capture, Process and Inspect Users’ Sessions in 3D Environments
by Bruno Fanini and Giorgio Gosti
Future Internet 2024, 16(5), 147; https://doi.org/10.3390/fi16050147 - 25 Apr 2024
Cited by 1 | Viewed by 1834
Abstract
Recording large amounts of users’ sessions performed through 3D applications may provide crucial insights into interaction patterns. Such data can be captured from interactive experiences in public exhibits, remote motion tracking equipment, immersive XR devices, lab installations or online web applications. Immersive analytics (IA) deals with the benefits and challenges of using immersive environments for data analysis and related design solutions to improve the quality and efficiency of the analysis process. Today, web technologies allow us to craft complex applications accessible through common browsers, and APIs like WebXR allow us to interact with and explore virtual 3D environments using immersive devices. These technologies can be used to access rich, immersive spaces but present new challenges related to performance, network bottlenecks and interface design. WebXR IA tools are still quite new in the literature: they present several challenges and leave quite unexplored the possibility of synchronous collaborative inspection. The opportunity to share the virtual space with remote analysts in fact improves sense-making tasks and offers new ways to discuss interaction patterns together, while inspecting captured records or data aggregates. Furthermore, with proper collaborative approaches, analysts are able to share machine learning (ML) pipelines and constructively discuss the outcomes and insights through tailored data visualization, directly inside immersive 3D spaces, using a web browser. Under the H2IOSC project, we present the first results of an open-source pipeline involving tools and services aimed at capturing, processing and inspecting interactive sessions collaboratively in WebXR with other analysts. The modular pipeline can be easily deployed in research infrastructures (RIs), remote dedicated hubs or local scenarios. The developed WebXR immersive analytics tool specifically offers advanced features for volumetric data inspection, query, annotation and discovery, alongside spatial interfaces. We assess the pipeline through users’ sessions captured during two remote public exhibits, by a WebXR application presenting generative AI content to visitors. We deployed the pipeline to assess the different services and to better understand how people interact with generative AI environments. The obtained results can be easily adopted for a multitude of case studies, interactive applications, remote equipment or online applications, to support or accelerate the detection of interaction patterns among remote analysts collaborating in the same 3D space.
