Review

A Review of Immersive Technologies, Knowledge Representation, and AI for Human-Centered Digital Experiences

by Nikolaos Partarakis * and Xenophon Zabulis

Institute of Computer Science, Foundation for Research and Technology Hellas, N. Plastira 100, Vassilika Vouton, GR-70013 Heraklion, Crete, Greece

* Author to whom correspondence should be addressed.
Electronics 2024, 13(2), 269; https://doi.org/10.3390/electronics13020269
Submission received: 7 December 2023 / Revised: 1 January 2024 / Accepted: 5 January 2024 / Published: 7 January 2024

Abstract

The evolution of digital technologies has resulted in the emergence of diverse interaction technologies. In this paper, we conducted a review of seven domains under a human-centric approach: user interface design, human-centered web-based information systems, semantic knowledge representation, X-reality applications, human motion and 3D digitization, serious games, and AI. In this review, we studied these domains with respect to their impact on the way we interact with digital interfaces, process information, and engage in immersive experiences. As such, we highlighted the shifts in design paradigms, user-centered principles, and the rise of web-based information systems. The results of such shifts are materialized in modern immersive technologies, semantic knowledge representation, serious games, and the use of artificial intelligence in interaction. Through this exploration, we aimed to deepen our understanding of the challenges that lie ahead. The seamless integration of technologies, ethical considerations, accessibility, education for technological literacy, interoperability, user trust, environmental sustainability, and regulatory frameworks are becoming significant. These challenges present opportunities for the future to enrich human experiences while addressing societal needs. This paper lays the groundwork for thoughtful and innovative approaches to the challenges that will define the future of human–computer interaction and information technologies.

1. Introduction and Orientation

Human–computer interaction (HCI) and information and communication technologies (ICT) are experiencing a transition due to the emergence of innovative design paradigms that are affecting the way we interact with digital information. Immersive technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), are considered the building blocks of modern digital experiences. Knowledge representation in a machine-interpretable format supports reasoning on knowledge sources to shape intelligent, user-centric information systems. From a human-centered perspective, these systems have the potential to dynamically adapt to the diverse and evolving needs of users. Simultaneously, the integration of artificial intelligence (AI) into ICT creates the foundation for new interaction technologies, information processing, and information visualization.
Our motivation for providing this review stems from the rapidly evolving nature of these developments, which inevitably creates gaps in understanding the implications of immersive technologies, knowledge representation, and AI for user-centered digital experiences. Bridging these gaps and providing insights into this journey can offer a roadmap for future research endeavors. In the same context, it is important to highlight that research gaps exist both in the technologies themselves and in the way these modern developments become integrated, tested, and evaluated. In general, the transformative impact of innovative design paradigms is not sufficiently addressed. Such considerations should be studied in terms of cognitive aspects relevant to the interaction itself and with regard to novel visualization paradigms and information processing principles. Towards supporting such interaction, the need for semantic knowledge representation, both on the knowledge side and on the user data side, has not yet been adequately explored. The same can be said for the effects of immersive technologies on education and training, since existing novel approaches have not been sufficiently validated. As for the integration of 3D digitization and motion-driven interaction, these remain at the stage of proposing new paradigms rather than integrating them into common practice. In this domain, perhaps the most successful attempts were the Microsoft Kinect gaming series on the Xbox 360 and Xbox One [1] and the PlayStation Move controller variations on the PS3 and PS4 VR [2]. Similarly, for AI, isolated methods are currently provided for different application domains without horizontal integration. Concurrently, ethical aspects in AI research have been neither adequately studied nor integrated into development approaches.
In this paper, an analysis was conducted by reviewing recent literature from seven application domains, each representing a crucial aspect of the evolution of HCI and ICT. The choice of these domains was not arbitrary; they were selected for their role in shaping human-centered experiences and for their contributions toward innovative design paradigms. The emphasis was on highlighting the importance of these selected domains and their specific contributions to the transformation of HCI and ICT. It is crucial to acknowledge, however, that the focus on these specific domains does not diminish the importance of other potential domains. The rationale for selecting these particular domains was rooted in their demonstrable impact and relevance to the broader landscape, recognizing that other domains may also play significant roles in the transformative journey of HCI and ICT. To this end, in this work, we argue that user interface design, knowledge representation, semantic knowledge, X-reality applications, human motion and 3D digitization, serious games, and AI approaches are not only individually important but also collectively essential to comprehending the broader landscape of HCI and ICT.
Starting from user interface design, explicit and implicit interactions were discussed, exploring the evolution of intuitive gestural interfaces, conversational agents, and the broader landscape that redefines user experiences. This section focuses on the dynamic nature of user interfaces, where human–computer interactions are evolving towards seamless and intuitive engagements. Modern interaction technologies can be traced back to the advent of user interfaces (UIs), where command-line interactions marked the early stages of HCI [3]. As technology matured, graphical user interfaces (GUIs) emerged, introducing a visual paradigm that laid the foundation for intuitive interactions [4]. Desktop computing followed, allowing users to navigate through digital realms via a mouse and keyboard [5].
Both implicit and explicit forms of interaction assume that the user drives the interaction and is thus the one targeted by a computing application. Since no single solution fits all users, a user-centered design (UCD) paradigm emerged. UCD places user experience at the forefront of technological development [6] and involves integrating end users, their needs, and their expectations into all phases of UI development, from user requirement analysis to design and implementation. Human-centered principles today affect not just the design of the interaction and the UI but also the nature of the facilities offered by information systems. To support each individual user in such a dynamic and responsive environment, the importance of adaptivity, adaptability [7,8], and accessibility [9] was emphasized in UCD. Touchscreens and gestures were proposed [10,11] as new interaction paradigms, while feedback mechanisms were integrated into UIs to enhance user satisfaction and engagement [12,13].
Today, the majority of time spent on computers worldwide is linked to the World Wide Web and is supported, in broad terms, by some form of web-based information system [14]. Such systems provide a seamless integration of information from databases and support interconnectedness [15,16]. Knowledge representation [17,18], data visualization [19], and data mining [20] have become components of these new developments, transforming them into hubs of information dissemination and retrieval [21]. In such systems, semantic knowledge is important because it can enhance user interaction, support active participation in information processing, and enrich user experiences through the understanding of both context and content. In the domain of knowledge representation, the Semantic Web [22] brought representation technologies [23,24] capable of providing meaning to data. The goal was to make data easier to interpret by machines [25]. Ontologies and semantics were the building blocks for representing relationships and context, leading to more sophisticated user interactions [26].
The interactions described above take place in the computing universe we are all aware of today. In parallel, though, a more disruptive interaction paradigm is emerging, rooted in immersive technologies such as augmented, virtual, and mixed reality (AR, VR, and MR). Apart from gaming and special-purpose applications (e.g., in museums, science centers, and 4D cinemas), these interaction paradigms exist in a nascent universe. In this review paper, we discuss the domains where these have started to prove their worth and potential for exploitation, with a special focus on their applications for vocational education and training. In these domains, a truly transformative journey is currently being undertaken, shaping new ways of using these technologies as a disruptive innovation. The advent of immersive technologies [27,28], namely AR [29], VR [30], and MR [31], made possible the transition from traditional screens to digital three-dimensional space, supporting both hybrid and purely digital training experiences. XR applications in vocational education and training exemplify the capacity of these technologies to revolutionize experiential learning [32,33,34].
Key building blocks of the immersive technologies discussed are human motion capture and 3D digitization. The focus is on how these technologies redefine digital experiences and introduce novel interaction metaphors. Our analysis spans from virtual embodiment, where users project themselves into digital avatars, to lifelike simulations that bridge the gap between the physical and digital worlds. In such applications, human motion capture [35,36] and 3D digitization [37,38,39,40] enable users to interact with digital environments in ways that mirror real-world movements and real-world environments [41,42,43]. Such realistic simulations [44] blur the boundaries between physical and digital realities, fostering a deeper level of engagement and personalization in digital experiences.
Serious games can be considered the domain where user-centric principles and the innovations described for AR, VR, and MR join forces to repurpose gaming and address educational, training, and societal challenges. Serious games offer purposeful play that transcends entertainment, becoming powerful tools for knowledge and skill acquisition and thus addressing broader societal issues. Serious games employ XR technologies and game design principles to address educational, training, and societal challenges [45]. These games are based on a paradigm shift from entertainment-focused gaming to skill development, knowledge acquisition, and societal awareness [46]. At the same time, the integration of artificial intelligence (AI) into UIs [47,48] empowers systems to understand user intent; process information efficiently; and offer personalized, anticipatory digital experiences [49].
The forthcoming artificial intelligence (AI) revolution [50] has started by providing tools that are already altering the way we work by introducing a novel form of AI–human collaboration [51,52]. In the future, we expect that the way we interact with technology will be greatly affected by AI-driven approaches, which will be used as the building blocks of new interaction paradigms. As a result, this subject is discussed through the prism of the changes it produces, which may alter interaction as we perceive it today.
The main body of this work is dedicated to presenting a short review of the state of the art of these application domains and discussing the reviewed literature. Then, it concludes with a presentation of the challenges, including ethical considerations in design and AI practices and the continuous adaptation to evolving user behaviors. Looking ahead, future directions encompass the human-centric integration of AI, the development of immersive and multisensory interfaces, and the enhanced accessibility of digital platforms. Ethical design principles and responsible AI practices are expected to become increasingly integral, as cross-disciplinary collaboration enriches the design process. The adoption of natural and intuitive interactions, real-time user feedback mechanisms, and personalized experiences represent the evolving landscape, indicating a future where technology harmoniously aligns with human needs, preferences, and ethical standards.

2. Advances in User Interface Design, Development, and Evaluation, including New Approaches for Explicit and Implicit Interaction

UI design, development, and evaluation have made possible the adoption of new technologies and supported a deeper understanding of user behavior. This has manifested through a combination of explicit and implicit interaction approaches.
Explicit interaction can be perceived as a form of communication with a computing device where explicit user input is directly connected to the issuing of a command. Traditionally, explicit interaction involved direct user inputs, such as with a mouse, a keyboard, or touch gestures. However, recent advancements have elevated explicit interaction by introducing gesture-based interfaces (e.g., [53,54,55,56,57,58]) and kinesthetic interaction paradigms (e.g., [59,60]). These forms use technologies like computer vision, depth sensing, IR sensing, and tracking devices (RGB-D sensors, RGB cameras, infrared cameras) and enable users to communicate with systems through intuitive hand movements [61,62]. The advancements discussed in these works include tracking hand movements instead of just static poses [53], optimizing gesture recognition from a monocular camera to support gaming in public spaces [55], using specialized wearable devices for gesture recognition [57], optimizing traditional RGB-D approaches with AI [58,61], etc. These novel forms of interaction allow for more immersive experiences and support natural interaction. Especially in VR, new interaction paradigms can replace classic VR controllers and support more natural interaction through gestures and real-time hand tracking to augment the feeling of presence in a virtual space (e.g., [63,64]). Similar effects can be achieved in AR by enhancing interaction with digitally augmented physical objects (e.g., [65,66]).
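To make the monocular, camera-based approach concrete, below is a minimal sketch of real-time hand tracking from a single RGB camera. It uses the open-source MediaPipe and OpenCV libraries as stand-ins; the cited works rely on their own pipelines, so the library choice and the fingertip-reporting logic are assumptions made purely for illustration.

```python
# Minimal sketch: real-time hand tracking from a monocular RGB camera.
# MediaPipe Hands is used here only as an illustrative stand-in for the
# gesture-recognition pipelines of the cited works.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
capture = cv2.VideoCapture(0)  # default webcam

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # landmark 8 is the index fingertip
            print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

capture.release()
cv2.destroyAllWindows()
```

A real interface would map such landmark streams to gestures (e.g., a pinch or swipe) rather than printing coordinates.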
Voice-activated interfaces [67,68] represent another advancement in explicit interaction, implemented through the integration of advanced natural language processing (NLP) algorithms, thus supporting more than traditional voice command recognition [69]. Today, intelligent voice assistants are capable of doing much more, such as comprehending context, discerning user intent, and being of assistance in typical daily tasks (e.g., [70,71]), making digital systems more inclusive for users with diverse abilities.
Implicit interaction is defined as “an action performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input” [72]. In this domain, machine learning algorithms empower systems to discern user preferences, anticipate actions, and adapt interfaces and robots in real time [73,74,75]. Predictive modeling, driven by user behavior analysis, enables interfaces to become more personalized, offering a tailored experience that aligns with individual needs and preferences [76,77].
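The following is an illustrative sketch of this predictive-modeling idea: a classifier trained on implicit behavioral signals estimates the probability of a user's next action so the interface can adapt before any explicit command. The features, the synthetic labeling rule, and the data are assumptions invented for the example.

```python
# Illustrative sketch: predicting a user's next action from implicit
# behavioral signals (dwell time, scroll depth, hour of day).
# All data here are synthetic; real systems would log actual interactions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Feature columns: dwell_time_s, scroll_depth in [0, 1], hour_of_day
X = np.column_stack([
    rng.exponential(30, 500),
    rng.uniform(0, 1, 500),
    rng.integers(0, 24, 500),
])
# Synthetic label: 1 if the user went on to open the item, 0 otherwise.
y = ((X[:, 0] > 25) & (X[:, 1] > 0.5)).astype(int)

model = LogisticRegression().fit(X, y)

# At run time, the interface can adapt before the user acts explicitly.
p_open = model.predict_proba([[40.0, 0.8, 21]])[0, 1]
print(f"probability the user will open the item: {p_open:.2f}")
```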
The fusion of explicit and implicit interaction is evident in the rise of anticipatory design [78]. Interfaces are becoming increasingly adept at predicting user actions, streamlining workflows, and minimizing decision fatigue [79]. Through the seamless integration of explicit inputs and implicit learning, systems can offer a more fluid and intuitive user experience.
As UI paradigms evolve, so too must the methods for evaluating their effectiveness. Traditional usability testing [80,81] and heuristic evaluations [82] are now complemented by sophisticated analytics and user feedback mechanisms [83,84,85,86]. A holistic understanding of user experience requires a multidimensional approach that considers not only task completion efficiency but also emotional engagement, accessibility, and inclusivity [87]. Eye-tracking technology and neuroscientific methods are emerging as powerful tools for evaluating implicit interactions [88]. By examining gaze patterns and neural responses, designers gain insights into user attention, emotional responses, and cognitive load, providing valuable feedback for refining UI designs [89,90].
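As one concrete example of processing such gaze data, the sketch below implements a simplified dispersion-threshold (I-DT) fixation detector, a classical algorithm for grouping raw gaze samples into fixations; the thresholds and the sample stream are illustrative assumptions.

```python
# Simplified dispersion-threshold (I-DT) fixation detection: consecutive
# gaze samples form a fixation when their spatial dispersion stays below a
# threshold for a minimum number of samples. Thresholds are illustrative.
def detect_fixations(gaze, max_dispersion=0.05, min_samples=6):
    """gaze: list of (x, y) tuples in normalized screen coordinates."""
    fixations, start = [], 0
    while start <= len(gaze) - min_samples:
        xs, ys = zip(*gaze[start:start + min_samples])
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            end = start + min_samples
            # Grow the window while dispersion stays under the threshold.
            while end < len(gaze):
                xs, ys = zip(*gaze[start:end + 1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
            cx = sum(x for x, _ in gaze[start:end]) / (end - start)
            cy = sum(y for _, y in gaze[start:end]) / (end - start)
            fixations.append((cx, cy, end - start))  # centroid + duration
            start = end
        else:
            start += 1
    return fixations

# Example: gaze dwells near (0.3, 0.4) before jumping away.
samples = [(0.30, 0.40), (0.31, 0.41), (0.30, 0.39), (0.32, 0.40),
           (0.31, 0.40), (0.30, 0.41), (0.70, 0.20), (0.71, 0.22)]
print(detect_fixations(samples))  # one fixation spanning six samples
```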

3. Human-Centered Web-Based Information Systems

Today, web-based information systems try to integrate the principles of human-centered design [91]. To this end, a combination of ICT advances is being integrated into such systems, including knowledge representation approaches, data visualization paradigms, data mining methodologies, and big data analytics technologies [92,93]. This evolution supports more dynamic information delivery on the one hand and user-centric experiences that generate insights from large datasets on the other. The main objective is to enhance the visualization capacity over immense amounts of information, rendering large datasets more readable for humans to understand and work with. The core of such approaches is the deployment of knowledge representation techniques [94,95]. Semantic web technologies, ontologies, and graph databases organize and structure information in a manner that is both human- and machine-interpretable. At the same time, these advancements further pose the need to introduce cognitive approaches in the semantic web, enabling systems not only to store and retrieve data but also to infer relationships, fostering a more accurate understanding of context [96,97]. Visual interfaces should help users locate information based on meaning while keeping the complexity of the semantic implementation hidden. For example, using similarities and comparisons can make it easier to navigate through large amounts of information. Instead of having fixed representations of data, cognitive approaches should support selecting information based on user needs.
Such approaches are a prerequisite to reducing the threat of information overload, which is also addressed through effective data visualization. Modern web-based systems deploy interactive and immersive visualizations to present complex datasets via usable representations [98]. From interactive charts and graphs to VR-enhanced visualizations, the emphasis is on empowering users to explore and understand information intuitively, enhancing the overall user experience [99,100].
To achieve intuitive big data visualization, sophisticated tools for analysis and interpretation are needed. Data mining techniques augment web-based information systems, facilitating the discovery of patterns, trends, and anomalies within large datasets [101]. Machine learning algorithms enable systems to autonomously uncover hidden knowledge [102], providing users with recommendations and insights. Human-centered web-based systems are thus capable of employing distributed computing frameworks and cloud technologies to process datasets in real time. The synergy of big data analysis and visualization empowers users to promptly gain meaningful insights, fostering informed decision-making [103].
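As an illustration of the data-mining layer described above, the following sketch clusters synthetic user-session logs to surface behavioral segments; the session features, the cluster count, and the data itself are assumptions made for the example rather than elements of the cited systems.

```python
# Illustrative sketch: unsupervised pattern discovery over synthetic
# user-interaction logs with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic session features: pages_visited, session_minutes, searches_run
sessions = np.column_stack([
    rng.poisson(8, 300),
    rng.exponential(12, 300),
    rng.poisson(2, 300),
])

# Standardize so each feature contributes comparably to the distance metric.
scaled = StandardScaler().fit_transform(sessions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Summarize each discovered segment by its average behavior.
for k in range(3):
    centroid = sessions[labels == k].mean(axis=0)
    print(f"segment {k}: pages={centroid[0]:.1f}, "
          f"minutes={centroid[1]:.1f}, searches={centroid[2]:.1f}")
```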
From a design perspective today, information systems’ interfaces are crafted with a deep understanding of user needs, preferences, and cognitive processes [104,105]. Personalization algorithms, informed by user interactions and feedback, ensure that the information presented is not only relevant but also delivered in a format that resonates with the user’s mental model. At the same time, continuous evaluation and iterative design are integral components of human-centered web-based information systems [106,107]. Analytics tools track user interactions, enabling designers to refine interfaces and functionalities based on real-world usage patterns [108,109,110]. This iterative process ensures that systems remain adaptive, responsive, and aligned with evolving user expectations [111].

4. Semantic Knowledge to Enhance User Interaction with Information, User Participation in Information Processing, and User Experience

Semantic knowledge representation provides meaning to the data processed by an information system and can thus support a more intelligent and intuitive interaction between users and information [112,113]. In this context, semantic technologies are used to represent knowledge in a machine-understandable format. This format can make various information systems semantically interoperable [114]. Ontologies, linked data, and semantic graphs provide a rich framework for expressing relationships between concepts [115], allowing systems to infer and connect pieces of data, creating a web of contextual relevance. Semantic knowledge representation lays the groundwork for a more intuitive and context-aware service provision [116]. NLP algorithms, powered by semantic models, enable systems to comprehend user queries in a more human-like manner [117]. Conversational interfaces, driven by semantic understanding, facilitate seamless interactions, allowing users to communicate with systems more naturally and dynamically [118]. A key advancement is the empowerment of users in the information processing chain. Collaborative knowledge creation and annotation, supported by semantic frameworks, enable users to contribute to the refinement and enrichment of data. This participatory approach not only enhances the accuracy of information but also fosters a sense of ownership and engagement among users in the information ecosystem [119,120].
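A minimal sketch of these building blocks follows: a few concepts and relationships are expressed as an RDF graph and queried with SPARQL using the rdflib library. The ex: vocabulary is hypothetical and serves only to illustrate how such a web of contextual relevance can be represented and traversed.

```python
# Minimal sketch of semantic knowledge representation with RDF and SPARQL.
# The http://example.org/ vocabulary is hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/")
g = Graph()

# Concepts and relationships: a small web of contextual relevance.
g.add((EX.Weaving, RDF.type, EX.Craft))
g.add((EX.Weaving, EX.usesTool, EX.Loom))
g.add((EX.Loom, RDFS.label, Literal("loom")))
g.add((EX.Craft, RDFS.subClassOf, EX.Activity))

# Query: which tools are used by which crafts? The graph structure lets the
# system connect pieces of data rather than treating them as flat records.
query = """
SELECT ?craft ?tool WHERE {
    ?craft a <http://example.org/Craft> .
    ?craft <http://example.org/usesTool> ?tool .
}"""
for craft, tool in g.query(query):
    print(f"{craft} uses {tool}")
```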
Beyond representation, the presentation of information is an important part of user experience. Semantic technologies should influence how information is visually and contextually communicated to users [121,122], ensuring that users receive information in a format that aligns with their cognitive processes and preferences. In the same context, personalization algorithms, leveraging semantic understanding, deliver content that is not only relevant but also anticipates user needs [123,124]. The seamless integration of diverse datasets, facilitated by semantic frameworks, has the potential to provide a more coherent and holistic user experience, reducing information overload and enhancing overall satisfaction. The iterative nature of semantic knowledge representation ensures continuous improvement through feedback loops, driven by user interactions and system analytics, to enable adaptive learning and refinement.

5. X-Reality Applications (AR, VR, MR) for Immersive Human-Centered Experiences

X-reality applications today offer novel forms of immersion. In this paper, we consider two main application domains: cultural heritage and vocational training.

5.1. X-Reality Applications in Cultural Heritage

The utilization of virtual reality (VR) in cultural heritage (CH) is not a novel concept. Initial approaches, such as CAVE-based VR, integrated immersive presentations and haptic-based manipulations of heritage objects [125,126]. A synergy of 3D reconstruction technologies with VR emerged, creating realistic digital replicas of CH objects [127,128]. In earlier methods, where digitization was constrained by technology immaturity, scenes from archaeological sites were manually modeled in 3D [129,130]. While resulting in lower-quality models, this allowed researchers to digitally restore monuments by complementing structural remains with digitally manufactured structures [131,132]. Advancements extended to simulating weather and daily life in ancient CH sites through a graphics-based rendering of nature and autonomous virtual humans.
The evolution of VR devices, particularly commercial VR headsets and controllers [133,134], simplified the implementation of VR-based experiences. Concurrently, the advent of 360° photography and videos enabled a different VR approach with inexpensive headsets, augmenting experiences with information points and interactive spots [135,136,137]. Studies have addressed resource-demanding tasks like streaming 360° videos to these headsets [138,139]. From a sustainability perspective, VR was proposed to divert visits from endangered CH sites to digital media [140].
In the domain of augmented reality (AR) and cultural heritage, ongoing research has shown its potential to enhance learning by providing a more comprehensive educational experience [141]. AR applications have been explored in school subjects like chemistry and cultural heritage sites [142,143]. Stakeholder studies indicate perceived value dimensions of AR in cultural heritage tourism, encompassing economic, experiential, social, epistemic, historical, cultural, and educational aspects [144].
Mobile AR research began with feature extraction in mobile phones for image acquisition [145]. More advanced mobile devices incorporated virtual humans [146], while modern mobile phones empowered various AR forms, such as the augmentation of camera images with information [147,148] and the interpolation of 3D digitization with camera input [149]. Some approaches replace physical remains with digitally enhanced versions from the time of creation [150], and physical objects aid visualization and interaction with archaeological artifacts in AR [151,152].
The fusion of augmented and virtual reality has given rise to “AR Portals”. Mobile devices, supporting larger AR scenes, allow users to spawn portals to alternate worlds [152]. “The Historical Figures AR” application exemplifies this, enabling users to walk through portals to historically themed sites [153]. Other approaches augment physical places with digital information, supporting alternative interactions through the manipulation of physical objects [154].

5.2. X-Reality Applications in Vocational Training and Education

Vocational training and education are challenging research topics because the subjects to be taught integrate several aspects of human perception and are closely bound to the human senses and to skillful interaction with tools and materials. This is highlighted through mapping sequences and networks of physical and cognitive activities and working materials in design and workmanship, involving stages of perception, problem understanding, thinking, acting, planning, executing plans, and reflecting upon collected experiences [155]. In process training, part of thinking and planning is implemented by the mind, using mental simulation that produces mental imagery [156]. This modeling approach is also found in cognitive robotics (e.g., [157,158]). In [159], crafting processes are modeled as having schemas or plans, and their execution is modeled as individual events. Studies on the negotiation between the maker and the material have provided interesting data regarding how makers think between things, people, space, and time and develop their practices accordingly [160].
Owing to this nature, X-reality (XR) applications have emerged as transformative tools in vocational education and training, redefining the way individuals acquire and apply skills, based on the fact that these technologies can mimic, enhance, or alter the physical environment, seamlessly integrating physical and digital realms [161,162]. At the same time, these technologies can engage with the complex cognitive aspects described above by being capable of simulating reality and integrating cognitive cues in process training.
VR in training provides the capability of immersing learners in simulated environments that are digital twins of real-world environments, offering training scenarios in a controlled, risk-free setting [163,164]. VR-based vocational training programs have been employed in domains ranging from pilot training to surgical procedures. The ability to practice and refine skills in VR enhances muscle memory and improves confidence. Examples of such training environments include woodworking and blacksmithing simulators [165,166,167].
AR in vocational education takes the form of overlaying digital information onto the physical environment, offering learners real-time, context-sensitive assistance. From hands-on equipment maintenance simulations to interactive instructional overlays, AR facilitates learning by doing, enriching the educational experience. By extending reality through informative digital overlays, AR provides a bridge between theoretical knowledge and practical application, unifying the physical and digital learning domains. For example, a mobile traditional craft presentation system using AR technology has been proposed to superimpose 3D objects of traditional crafts on the real room space captured by the camera of the mobile terminal [168,169,170].
MR blends virtual and physical elements, allowing learners to interact with digital and real-world objects simultaneously [171]. This is beneficial in vocational education, where practical, hands-on experience is important [172]. MR can be employed to integrate virtual equipment into physical training spaces, providing a hybrid learning experience [173]. The learner interacts with digital elements as an extension of their physical surroundings, bridging the gap between the two realities.
The added value of XR applications for vocational education lies in their ability to create immersive, human-centered learning experiences. By simulating authentic workplace scenarios, XR technologies engage learners on a deeper level, promoting active participation and knowledge retention [174]. XR applications also introduce adaptive learning environments, tailoring experiences to individual learner needs. Machine learning algorithms analyze user interactions and performance, allowing the system to dynamically adjust the difficulty and content of simulations [175]. This personalized approach ensures that learners progress at their own pace, addressing diverse skill levels and learning styles while seamlessly blending physical and digital realities.
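As a concrete illustration of such dynamic adjustment, a training loop can raise or lower simulation difficulty from a rolling window of learner scores, as in the sketch below; the thresholds and step sizes are illustrative assumptions, not values drawn from the cited systems.

```python
# Hedged sketch of an adaptive learning loop: simulation difficulty is
# adjusted from rolling learner performance. All thresholds are illustrative.
from collections import deque

class AdaptiveDifficulty:
    def __init__(self, level=0.5, window=5):
        self.level = level                  # 0.0 (easiest) .. 1.0 (hardest)
        self.scores = deque(maxlen=window)  # rolling task scores in [0, 1]

    def record(self, score):
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        if avg > 0.8:    # learner is comfortable: raise the challenge
            self.level = min(1.0, self.level + 0.1)
        elif avg < 0.5:  # learner is struggling: ease off
            self.level = max(0.0, self.level - 0.1)
        return self.level

trainer = AdaptiveDifficulty()
for s in [0.9, 0.85, 0.95, 0.4, 0.3]:
    print(f"score={s:.2f} -> difficulty={trainer.record(s):.2f}")
```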
Assessment in vocational education extends beyond traditional exams to immersive, performance-based evaluations. XR applications, by extending reality, enable instructors to assess not just theoretical knowledge but also the application of skills in realistic scenarios. Real-time feedback mechanisms enhance the learning loop, providing constructive insights to learners and facilitating continuous improvement within the extended reality of vocational training [32].

6. Human Motion and 3D Digitization for Enhanced Interactive Digital Experiences

Human motion capture and 3D digitization are reshaping how users interact with and perceive digital content in digital environments. More specifically, advancements in human motion capture technologies have altered the way digital systems interpret and respond to user movements [35,36]. High-fidelity sensors, wearable devices (e.g., [176,177]), computer vision, and AI enable the precise tracking of body movements, transforming them into explicit or implicit input (e.g., [178,179,180,181,182]). This has wide applicability in fields such as gaming, VR, and AR, where users can navigate, control, and manipulate digital environments through natural movements.
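As a small example of turning captured motion into an interaction signal, the sketch below computes a joint angle from 3D keypoints, which an application could map to an explicit command or log as implicit input; the coordinates and the gesture threshold are illustrative assumptions.

```python
# Minimal sketch: deriving an interaction signal (elbow flexion angle)
# from captured 3D joint positions. Coordinates are illustrative.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.25, 1.15, 0.0), (0.45, 1.3, 0.1)
angle = joint_angle(shoulder, elbow, wrist)
print(f"elbow flexion: {angle:.1f} degrees")
if angle < 90:  # e.g., treat a sharply bent arm as a hypothetical "grab"
    print("gesture: grab")
```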
At the same time, sophisticated 3D digitization techniques make possible the seamless transition between real and digital world objects and spaces. From 3D scanning technologies that capture intricate details of physical objects [183,184,185] to depth-sensing cameras that create digital representations of physical spaces, the process of digitization extends beyond traditional boundaries [186,187,188]. This capability lays the foundation for more immersive and realistic digital experiences. In VR environments, for example, users can not only see but also physically interact with digital objects by leveraging natural hand gestures, body movements, and haptic interfaces [189,190,191]. This enhanced level of interaction fosters a deeper sense of presence and engagement, making digital experiences more lifelike and compelling [192,193].
Additionally, human motion and 3D digitization contribute to the emergence of novel interaction metaphors. Gesture-based interfaces, where a wave of the hand or a nod of the head translates into meaningful digital commands, exemplify the shift towards more intuitive interactions. This departure from conventional input methods introduces a new language of interaction, bridging the gap between the physical and digital worlds and giving rise to the concept of virtual embodiment [194,195]. Users can now project themselves into digital avatars or representations that mimic their real-world movements [196]. This not only adds a layer of personalization to digital interactions but also enables a more immersive and empathetic form of virtual presence [197].
The discussed impact may affect various industries. In healthcare, for instance, surgeons can practice complex procedures in a VR setting that replicates real-world conditions [198,199]. In education, students can engage with historical artifacts through detailed 3D models. In the entertainment industry, these technologies can be used to create interactive storytelling experiences, blurring the lines between the observer and the observed [200,201].

7. Serious Game Design and Development

Serious games are transforming learning into an immersive and interactive experience [202]. Virtual scenarios, historical recreations, and problem-solving challenges become dynamic lessons, allowing students to explore, experiment, and learn through experience. The introduction of adaptability into these games caters to diverse learning styles, making education more accessible and engaging [203]. For occupational training, serious games offer a dynamic platform for skill development and scenario-based learning [204]. Simulations designed for various industries, such as healthcare, aviation, and emergency response, provide trainees with realistic environments to test their skills [205,206,207,208,209]. The interactive nature of these games fosters hands-on experience, allowing individuals to practice and refine their abilities in a controlled, risk-free setting.
Serious games extend their influence beyond traditional education and training contexts, addressing broader societal challenges. Games designed for public awareness campaigns, health promotion, and social issues provide a unique avenue for communication [210,211]. These games leverage storytelling, empathy-building narratives, and decision-making scenarios to raise awareness and prompt action on critical societal topics such as environmental conservation, public health, and social justice.
The design and development of serious games often involve collaboration across disciplines: educational psychologists, game designers, subject matter experts, and technologists work together to create holistic learning experiences. This interdisciplinary approach ensures that serious games not only convey information effectively but also align with pedagogical principles, maximizing their educational impact.
The landscape of serious game design has been significantly shaped by technological advancements. VR, AR, and advancements in graphics rendering have elevated the level of immersion and realism in these games [212,213,214]. This not only enhances the overall gaming experience but also contributes to the effectiveness of the learning and training outcomes. The integration of adaptive learning algorithms and analytics further personalizes the experience, tailoring content to individual needs and tracking progress.
These games, often delivered through digital platforms, have the potential to reach a global audience, making education and training more accessible across geographical boundaries. This democratization of learning resources addresses disparities in educational opportunities and ensures that individuals, regardless of their location, can benefit from engaging and purposeful learning experiences.

8. AI Approaches in User Interfaces, Information Processing, and Information Visualization

In the rapidly evolving landscape of technology, the integration of AI is manifested through intelligent systems that, in the future, will be able to understand user intent and enhance the extraction and presentation of valuable insights from vast datasets. AI-driven user interfaces have redefined the way individuals interact with digital systems. Language model-based AI and NLP technologies enable interfaces to comprehend and respond to user inputs in a more human-like fashion [215]. Chatbots and virtual assistants leverage these advancements, providing users with intuitive and conversational interactions. Intelligent user interfaces in the AI era promise personalized user experiences based on historical interactions, preferences, and context [216].
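To ground the intent-understanding step, the sketch below trains a classical NLP baseline (TF-IDF features with a linear classifier) to map user utterances to intents; the utterances, intent labels, and tiny training set are invented for the example, and production assistants rely on far larger language models.

```python
# Illustrative sketch: a classical baseline for user-intent recognition
# behind a conversational interface. Training data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "turn on the lights", "switch the lamp on", "lights off please",
    "what's the weather today", "will it rain tomorrow",
    "set a timer for ten minutes", "remind me at noon",
]
intents = ["lights", "lights", "lights", "weather", "weather",
           "reminder", "reminder"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, intents)

# The predicted intent would drive the assistant's next action.
print(classifier.predict(["could you switch on the lights"])[0])
```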
AI has revolutionized information processing, especially in the context of handling vast and complex datasets. Machine learning algorithms, ranging from supervised to unsupervised models, facilitate data classification, pattern recognition, and predictive modeling. This capability enhances the efficiency and accuracy of information processing, allowing systems to uncover hidden patterns, trends, and correlations that may elude traditional analytical approaches. AI-driven data analytics also contribute to real-time decision-making, providing valuable insights for businesses and organizations.
Advancements in AI have led to more sophisticated approaches in knowledge representation and semantic processing. Ontologies and semantic graphs enable systems to organize information in a manner that aligns with human understanding. This not only enhances the retrieval and interpretation of data but also supports more intelligent reasoning and inferencing. AI-driven semantic processing can contribute to a more nuanced understanding of context, facilitating more meaningful interactions between users and information systems [217,218,219,220].
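The sketch below illustrates such inferencing in miniature: applying RDFS reasoning to a small rdflib graph materializes a fact that was never stated explicitly. The ex: vocabulary is hypothetical, and the owlrl reasoner stands in for the heavier semantic machinery of the cited systems.

```python
# Hedged sketch of semantic inferencing: RDFS reasoning derives facts that
# were never asserted. The http://example.org/ vocabulary is hypothetical.
import owlrl
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Glassblowing, RDF.type, EX.Craft))
g.add((EX.Craft, RDFS.subClassOf, EX.Activity))

# Before reasoning, the graph does not state that glassblowing is an activity.
print((EX.Glassblowing, RDF.type, EX.Activity) in g)  # False

# Apply RDFS entailment rules; the subclass relation propagates type facts.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
print((EX.Glassblowing, RDF.type, EX.Activity) in g)  # True
```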

9. Discussion

In the domain of user interfaces, spanning from explicit to implicit interactions, a paradigm shift towards more seamless and intuitive engagements is evident in the literature. This shift, marked by the emergence of gestural interfaces and conversational agents, highlights the growing emphasis on enhancing user experience. The practicality of this shift is, however, not yet evident, since the majority of our interactions with the digital world currently still follow the explicit, action-based paradigm.
The more crucial impact of human-centered principles is witnessed in the core functionalities of information systems, since intelligent, user-centric information systems require advanced knowledge representation, data visualization, data mining, and big data analytics. Evidence of this evolution exists, but the need to support the adaptability of these systems to diverse user needs underscores the importance of a flexible and context-aware approach to the processing and presentation of information.
Semantic knowledge representation emerges as a crucial element, offering a means to enhance user interaction. The current state of the art is far from supporting this vision, mainly due to the fact that semantic technologies have not yet sufficiently targeted user interaction. Understanding context, content, and user semantic knowledge may empower systems to provide more meaningful and personalized interactions, aligning closely with the evolving expectations of users.
X-reality has proven its worth in offering simulated environments for various application domains, among which cultural heritage and occupational training were discussed in this work. At the same time, through the integration of human motion capture and 3D digitization, the elements of immersion that contribute to a sense of presence in virtual environments are beginning to fall into place. Parallel to these advances, serious games benefit from all of the above-mentioned progress, providing novel experiences that address educational goals. Of course, all this still remains in a state of flux, and real-life adoption in various contexts has not yet reached a significant degree of integration.
Of course, all of the above are going to be affected by the AI revolution, which has so far contributed only sporadically to advancements in the various application domains. Among the directly exploitable technologies are text models and computer vision methods for motion capture, gesture recognition, feature extraction, 3D reconstruction, and view synthesis.
Each of the domains discussed is poised for growth. Along this route, challenges emerge both for technical development and for maintaining an ethical and human-centered approach to innovation. We conclude with a presentation of future directions and challenges in the next and final section of this work.

10. Challenges, Future Directions, and Conclusions

The current landscape of immersive technologies, knowledge representation, and AI for human-centered digital experiences constantly integrates innovations towards an even more transformative future, but this also poses immense challenges. These challenges demand a collective and forward-thinking approach and a commitment to ethical practices, inclusive design, education for technological literacy, and the establishment of frameworks that balance innovation with responsibility.
The integration of technologies, from XR to AI-driven interfaces, requires seamlessly combining these diverse elements to create harmonized ecosystems that support holistic, user-centric experiences. Interdisciplinary collaboration becomes important to bridge gaps and ensure a unified approach to technology integration. The integration of AI in immersive technologies brings forth ethical considerations such as user privacy, data security, and algorithmic biases. Maintaining a balance between innovation and ethical responsibility is important. Safeguarding user data, addressing biases in algorithms, and ensuring transparent and accountable practices are crucial to building trust in these evolving technologies. Furthermore, AI-driven systems bring forth the challenge of building and maintaining user trust. The “black box” nature of complex algorithms necessitates efforts to enhance explainability, ensuring that users can understand and trust the decisions made by intelligent systems. Ethical AI practices that prioritize transparency and user understanding are pivotal to overcoming this challenge. The environmental impact of AI should also be considered. Striking a balance between innovation and sustainability entails developing eco-friendly technologies, optimizing energy usage, and adopting practices that minimize the carbon footprint of emerging interaction technologies.
Another challenge falls into the domain of accessibility and inclusivity and requires designing interfaces and systems that cater to diverse user needs, including those with disabilities. This entails a commitment to universal design principles, making technology accessible to all, and mitigating potential disparities in digital access.
At the same time, education and training require effort to improve technological literacy and ensure that individuals across demographics have the skills to use new technologies. This involves developing comprehensive educational programs and fostering a culture of lifelong learning.
Interoperability and standardization also pose a challenge: creating frameworks that facilitate seamless communication between different technologies, ensuring compatibility, and establishing industry-wide standards are vital.
Finally, regulatory frameworks are required to develop policies that foster innovation while safeguarding against misuse.
Looking towards the future, it can be foreseen that human-centric AI integration will make it possible to develop AI systems that not only understand user behavior but also proactively enhance user experiences based on context and preferences. The evolution of interfaces will move towards immersive and multisensory experiences. Virtual and augmented reality, combined with haptic feedback and other sensory inputs, will redefine how users interact with digital content. Interfaces will increasingly leverage AI to offer intelligent personalization, adapting in real time to user preferences, behavior, and evolving needs. This anticipatory design approach will enhance user satisfaction and engagement. In this respect, these advances will make it possible for future interfaces to prioritize enhanced accessibility solutions, ensuring that users of all abilities can seamlessly engage with digital platforms. This involves not only meeting existing accessibility standards but also pushing the boundaries of inclusivity based on a heightened commitment to ethical design principles and responsible AI practices. Transparency, user control over data, and strategies to mitigate biases will become integral components of interface development.
Interdisciplinary interface design should be emphasized in the future, requiring collaboration between designers, psychologists, technical experts, and experts on ethics. Cross-disciplinary approaches have the potential to enrich the design process, ensuring that interfaces are not only technologically advanced but also psychologically and ethically sound. Interfaces will increasingly adopt natural and intuitive interaction paradigms, using voice recognition, gesture control, and gaze tracking. This shift aims to reduce cognitive load and enhance user experience. A greater emphasis on real-time user feedback and iterative design methodologies, combined with continuous user testing and feedback loops, will be a prerequisite.
This work aimed to review the rapidly evolving nature of immersive technologies, knowledge representation, and AI for user-centered digital experiences. To this end, starting from the latest developments, this work identified research gaps and provided insights into future challenges. The overall goal is to support a comprehensive roadmap for future research by summarizing significant advances in seven application domains that the authors consider closely bound to novel user-centered digital experiences.
The analysis of these advancements highlighted the current state of the art and assisted in the identification of research gaps that are expected to drive new developments in the future. Of course, the nature of these application domains poses new challenges by themselves. The integration of AI-based approaches opens an entirely new round of possibilities and another discussion on important considerations that include ethical and responsible AI practices.

Author Contributions

Conceptualization, N.P. and X.Z.; methodology, N.P. and X.Z.; validation, N.P. and X.Z.; formal analysis, N.P. and X.Z.; investigation, N.P. and X.Z.; resources, N.P. and X.Z.; writing—original draft preparation, N.P. and X.Z.; writing—review and editing, N.P. and X.Z.; visualization, N.P. and X.Z.; supervision, N.P. and X.Z.; project administration, N.P. and X.Z.; funding acquisition, N.P. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was implemented under the project Craeft, which received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No. 101094349.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for contributing to the enhancement of the quality of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, Z. Microsoft Kinect sensor and its effect. IEEE Multimedia 2012, 19, 4–10. [Google Scholar] [CrossRef]
  2. Teixeira, A.; Assena, A.; Santos, A.; Moura, M.; Gomes, N.; Orvalho, J. Usability evaluation of the PlayStation Move motion controller. In Proceedings of the International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing, Lisbon, Portugal, 17–19 July 2014; pp. 276–280. [Google Scholar]
  3. Myers, B.A. A brief history of human-computer interaction technology. Interactions 1998, 5, 44–54. [Google Scholar] [CrossRef]
  4. Jansen, B.J. The graphical user interface. ACM SIGCHI Bull. 1998, 30, 22–26. [Google Scholar] [CrossRef]
  5. Campbell-Kelly, M.; Aspray, W.F.; Yost, J.R.; Tinn, H.; Díaz, G.C. Computer: A History of the Information Machine; Taylor & Francis: London, UK, 2023. [Google Scholar]
  6. Abras, C.; Maloney-Krichmar, D.; Preece, J. User-centered design. In Encyclopedia of Human-Computer Interaction; Bainbridge, W., Ed.; Sage Publications: Thousand Oaks, CA, USA, 2004; Volume 37, pp. 445–456. [Google Scholar]
  7. Van Velsen, L.; Van Der Geest, T.; Klaassen, R.; Steehouder, M. User-centered evaluation of adaptive and adaptable systems: A literature review. Knowl. Eng. Rev. 2008, 23, 261–281. [Google Scholar] [CrossRef]
  8. Doulgeraki, C.; Partarakis, N.; Mourouzis, A.; Stephanidis, C. Adaptable Web-based user interfaces: Methodology and practice. eMinds Int. J. Hum. Comput. Interact. 2009, 1, 79–110. [Google Scholar]
  9. Stephanidis, C.; Akoumianakis, D.; Sfyrakis, M.; Paramythis, A. Universal accessibility in HCI: Process-oriented design guidelines and tool requirements. In Proceedings of the 4th ERCIM Workshop on User Interfaces for All, Stockholm, Sweden, 19–21 October 1998; pp. 19–21. [Google Scholar]
  10. Saffer, D. Designing Gestural Interfaces: Touchscreens and Interactive Devices; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008. [Google Scholar]
  11. Bhalla, M.R.; Bhalla, A.V. Comparative study of various touchscreen technologies. Int. J. Comput. Appl. 2010, 6, 12–18. [Google Scholar] [CrossRef]
  12. Quesenbery, W. The five dimensions of usability. In Content and Complexity; Routledge: London, UK, 2014; pp. 93–114. [Google Scholar]
  13. O’Brien, H.L.; Toms, E.G. What is user engagement? A conceptual framework for defining user engagement with technology. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 938–955. [Google Scholar] [CrossRef]
  14. Isakowitz, T.; Bieber, M.; Vitali, F. Web information systems. Commun. ACM 1998, 41, 78–80. [Google Scholar] [CrossRef]
  15. Seymour, T.; Frantsvog, D.; Kumar, S. History of search engines. Int. J. Manag. Inf. Syst. (IJMIS) 2011, 15, 47–58. [Google Scholar] [CrossRef]
  16. Schwartz, C. Web search engines. J. Am. Soc. Inf. Sci. 1998, 49, 973–982. [Google Scholar] [CrossRef]
  17. Mylopoulos, J. An overview of knowledge representation. ACM SIGART Bulletin 1980, 74, 5–12. [Google Scholar]
  18. Partarakis, N.; Doulgeraki, V.; Karuzaki, E.; Galanakis, G.; Zabulis, X.; Meghini, C.; Bartalesi, V.; Metilli, D. A Web-Based Platform for Traditional Craft Documentation. Multimodal Technol. Interact. 2022, 6, 37. [Google Scholar] [CrossRef]
  19. Healy, K. Data Visualization: A Practical Introduction; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  20. Mughal, M.J.H. Data mining: Web data mining techniques, tools and algorithms: An overview. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 208–215. [Google Scholar] [CrossRef]
  21. Kobayashi, M.; Takeda, K. Information retrieval on the web. ACM Comput. Surv. (CSUR) 2000, 32, 144–173. [Google Scholar] [CrossRef]
  22. Berners-Lee, T.; Hendler, J.; Lassila, O. The semantic web. Sci. Am. 2001, 284, 34–43. [Google Scholar] [CrossRef]
  23. Horrocks, I.; Patel-Schneider, P.F. KR and Reasoning on the Semantic Web: OWL. In Handbook of Semantic Web Technologies; Domingue, J., Fensel, D., Hendler, J.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar] [CrossRef]
  24. Bonatti, P.A.; Decker, S.; Polleres, A.; Presutti, V. Knowledge graphs: New directions for knowledge representation on the semantic web (Dagstuhl Seminar 18371). In Dagstuhl Reports; Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Wadern, Germany, 2019; Volume 8. [Google Scholar]
  25. Uschold, M. Where are the semantics in the semantic web? AI Mag. 2003, 24, 25. [Google Scholar]
  26. Breitman, K.K.; Casanova, M.A.; Truszkowski, W. Ontology in computer science. In Semantic Web: Concepts, Technologies and Applications; Springer: Berlin/Heidelberg, Germany, 2007; pp. 17–34. [Google Scholar]
  27. Suh, A.; Prophet, J. The state of immersive technology research: A literature analysis. Comput. Hum. Behav. 2018, 86, 77–90. [Google Scholar] [CrossRef]
  28. Handa, M.; Aul, E.G.; Bajaj, S. Immersive technology–uses, challenges and opportunities. Int. J. Comput. Bus. Res. 2012, 6, 1–11. [Google Scholar]
  29. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  30. Burdea, G.C.; Coiffet, P. Virtual Reality Technology; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  31. Speicher, M.; Hall, B.D.; Nebeling, M. What is mixed reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–15. [Google Scholar]
  32. Doolani, S.; Wessels, C.; Kanal, V.; Sevastopoulos, C.; Jaiswal, A.; Nambiappan, H.; Makedon, F. A review of extended reality (xr) technologies for manufacturing training. Technologies 2020, 8, 77. [Google Scholar] [CrossRef]
  33. Ringas, C.; Tasiopoulou, E.; Kaplanidi, D.; Partarakis, N.; Zabulis, X.; Zidianakis, E.; Patakos, A.; Patsiouras, N.; Karuzaki, E.; Foukarakis, M.; et al. Traditional Craft Training and Demonstration in Museums. Heritage 2022, 5, 431–459. [Google Scholar] [CrossRef]
  34. Hauser, H.; Beisswenger, C.; Partarakis, N.; Zabulis, X.; Adami, I.; Zidianakis, E.; Patakos, A.; Patsiouras, N.; Karuzaki, E.; Foukarakis, M.; et al. Multimodal narratives for the presentation of silk heritage in the museum. Heritage 2022, 5, 461–487. [Google Scholar] [CrossRef]
  35. Moeslund, T.B.; Granum, E. A survey of computer vision-based human motion capture. Comput. Vis. Image Underst. 2001, 81, 231–268. [Google Scholar] [CrossRef]
  36. Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126. [Google Scholar] [CrossRef]
  37. Geiger, A.; Ziegler, J.; Stiller, C. Stereoscan: Dense 3d reconstruction in real-time. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), IEEE, Baden-Baden, Germany, 5–9 June 2011; pp. 963–968. [Google Scholar]
  38. Kang, Z.; Yang, J.; Yang, Z.; Cheng, S. A review of techniques for 3d reconstruction of indoor environments. ISPRS Int. J. Geo-Inf. 2020, 9, 330. [Google Scholar] [CrossRef]
  39. Zollhöfer, M.; Stotko, P.; Görlitz, A.; Theobalt, C.; Nießner, M.; Klein, R.; Kolb, A. State of the art on 3D reconstruction with RGB-D cameras. Comput. Graph. Forum 2018, 37, 625–652. [Google Scholar] [CrossRef]
  40. Pollefeys, M.; Nistér, D.; Frahm, J.M.; Akbarzadeh, A.; Mordohai, P.; Clipp, B.; Engels, C.; Gallup, D.; Kim, S.-J.; Merrell, P.; et al. Detailed real-time urban 3d reconstruction from video. Int. J. Comput. Vis. 2008, 78, 143–167. [Google Scholar] [CrossRef]
  41. Hasan, S.M.; Lee, K.; Moon, D.; Kwon, S.; Jinwoo, S.; Lee, S. Augmented reality and digital twin system for interaction with construction machinery. J. Asian Archit. Build. Eng. 2022, 21, 564–574. [Google Scholar] [CrossRef]
42. Ma, X.; Tao, F.; Zhang, M.; Wang, T.; Zuo, Y. Digital twin enhanced human-machine interaction in product lifecycle. Procedia CIRP 2019, 83, 789–793. [Google Scholar] [CrossRef]
  43. Onaji, I.; Tiwari, D.; Soulatiantork, P.; Song, B.; Tiwari, A. Digital twin in manufacturing: Conceptual framework and case studies. Int. J. Comput. Integr. Manuf. 2022, 35, 831–858. [Google Scholar] [CrossRef]
  44. Boschert, S.; Rosen, R. Digital twin—The simulation aspect. In Mechatronic Futures: Challenges and Solutions for Mechatronic Systems and Their Designers; Springer: Cham, Switzerland, 2016; pp. 59–74. [Google Scholar]
  45. Susi, T.; Johannesson, M.; Backlund, P. Serious Games: An Overview; School of Humanities and Informatics, University of Skövde: Skövde, Sweden, 2007. [Google Scholar]
  46. Connolly, T.M.; Boyle, E.A.; MacArthur, E.; Hainey, T.; Boyle, J.M. A systematic literature review of empirical evidence on computer games and serious games. Comput. Educ. 2012, 59, 661–686. [Google Scholar] [CrossRef]
  47. Planas, E.; Daniel, G.; Brambilla, M.; Cabot, J. Towards a model-driven approach for multiexperience AI-based user interfaces. Softw. Syst. Model. 2021, 20, 997–1009. [Google Scholar] [CrossRef]
  48. Sousa, R.; Miranda, R.; Moreira, A.; Alves, C.; Lori, N.; Machado, J. Software tools for conducting real-time information processing and visualization in industry: An up-to-date review. Appl. Sci. 2021, 11, 4800. [Google Scholar] [CrossRef]
  49. Escotet, M.Á. The optimistic future of Artificial Intelligence in higher education. Prospects 2023, 1–10. [Google Scholar] [CrossRef]
  50. Makridakis, S. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 2017, 90, 46–60. [Google Scholar] [CrossRef]
  51. Waldherr, S.; Romero, R.; Thrun, S. A gesture based interface for human-robot interaction. Auton. Robot. 2000, 9, 151–173. [Google Scholar] [CrossRef]
  52. Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. J. Inf. Technol. Case Appl. Res. 2023, 25, 277–304. [Google Scholar] [CrossRef]
  53. Roccetti, M.; Marfia, G.; Semeraro, A. Playing into the wild: A gesture-based interface for gaming in public spaces. J. Vis. Commun. Image Represent. 2012, 23, 426–440. [Google Scholar] [CrossRef]
  54. Bhuiyan, M.; Picking, R. Gesture-controlled user interfaces, what have we done and what’s next. In Proceedings of the Fifth Collaborative Research Symposium on Security, E-Learning, Internet and Networking (SEIN 2009), Darmstadt, Germany, 26–27 November 2009; pp. 26–27. [Google Scholar]
  55. Kim, J.; He, J.; Lyons, K.; Starner, T. The gesture watch: A wireless contact-free gesture based wrist interface. In Proceedings of the 2007 11th IEEE International Symposium on Wearable Computers, IEEE, Boston, MA, USA, 11–13 October 2007; pp. 15–22. [Google Scholar]
  56. Shin, S.; Kim, W.Y. Skeleton-based dynamic hand gesture recognition using a part-based GRU-RNN for gesture-based interface. IEEE Access 2020, 8, 50236–50243. [Google Scholar] [CrossRef]
  57. Fogtmann, M.H.; Fritsch, J.; Kortbek, K.J. Kinesthetic interaction: Revealing the bodily potential in interaction design. In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, Cairns, Australia, 8–12 December 2008; pp. 89–96. [Google Scholar]
  58. Koutsabasis, P.; Vosinakis, S. Kinesthetic interactions in museums: Conveying cultural heritage by making use of ancient tools and (re-) constructing artworks. Virtual Real. 2018, 22, 103–118. [Google Scholar] [CrossRef]
  59. Tran, D.S.; Ho, N.H.; Yang, H.J.; Baek, E.T.; Kim, S.H.; Lee, G. Real-time hand gesture spotting and recognition using RGB-D camera and 3D convolutional neural network. Appl. Sci. 2020, 10, 722. [Google Scholar] [CrossRef]
  60. Oudah, M.; Al-Naji, A.; Chahl, J. Hand gesture recognition based on computer vision: A review of techniques. J. Imaging 2020, 6, 73. [Google Scholar] [CrossRef] [PubMed]
  61. Sarma, D.; Bhuyan, M.K. Methods, databases and recent advancement of vision-based hand gesture recognition for hci systems: A review. SN Comput. Sci. 2021, 2, 436. [Google Scholar] [CrossRef] [PubMed]
  62. Vosinakis, S.; Koutsabasis, P. Evaluation of visual feedback techniques for virtual grasping with bare hands using Leap Motion and Oculus Rift. Virtual Real. 2018, 22, 47–62. [Google Scholar] [CrossRef]
  63. Kim, H.I.; Woo, W. Smartwatch-assisted robust 6-DOF hand tracker for object manipulation in HMD-based augmented reality. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), IEEE, Greenville, SC, USA, 19–20 March 2016; pp. 251–252. [Google Scholar]
  64. Leibe, B.; Starner, T.; Ribarsky, W.; Wartell, Z.; Krum, D.; Singletary, B.; Hodges, L. The perceptive workbench: Toward spontaneous and natural interaction in semi-immersive virtual environments. In Proceedings of the IEEE Virtual Reality 2000 (Cat. No. 00CB37048), New Brunswick, NJ, USA, 18–22 March 2000; pp. 13–20. [Google Scholar]
  65. Monteiro, P.; Gonçalves, G.; Coelho, H.; Melo, M.; Bessa, M. Hands-free interaction in immersive virtual reality: A systematic review. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2702–2713. [Google Scholar] [CrossRef] [PubMed]
  66. Cohen, M.H.; Giangola, J.P.; Balogh, J. Voice User Interface Design; Addison-Wesley Professional: Boston, MA, USA, 2004. [Google Scholar]
  67. Pearl, C. Designing Voice User Interfaces: Principles of Conversational Experiences; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016. [Google Scholar]
  68. Terzopoulos, G.; Satratzemi, M. Voice assistants and smart speakers in everyday life and in education. Inform. Educ. 2020, 19, 473–490. [Google Scholar] [CrossRef]
  69. McLean, G.; Osei-Frimpong, K. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 2019, 99, 28–37. [Google Scholar] [CrossRef]
  70. Natale, S.; Cooke, H. Browsing with Alexa: Interrogating the impact of voice assistants as web interfaces. Media Cult. Soc. 2021, 43, 1000–1016. [Google Scholar] [CrossRef]
  71. Rzepka, C. Examining the use of voice assistants: A value-focused thinking approach. In Proceedings of the Twenty-fifth Americas Conference on Information Systems, Cancún, Mexico, 15–17 August 2019. [Google Scholar]
  72. Schmidt, A. Implicit human computer interaction through context. Pers. Technol. 2000, 4, 191–199. [Google Scholar] [CrossRef]
  73. Rani, P.; Liu, C.; Sarkar, N.; Vanman, E. An empirical study of machine learning techniques for affect recognition in human–robot interaction. Pattern Anal. Appl. 2006, 9, 58–69. [Google Scholar] [CrossRef]
  74. Papatheocharous, E.; Belk, M.; Germanakos, P.; Samaras, G. Towards implicit user modeling based on artificial intelligence, cognitive styles and web interaction data. Int. J. Artif. Intell. Tools 2014, 23, 1440009. [Google Scholar] [CrossRef]
  75. Ju, W.; Leifer, L. The design of implicit interactions: Making interactive systems less obnoxious. Des. Issues 2008, 24, 72–84. [Google Scholar] [CrossRef]
  76. Agichtein, E.; Brill, E.; Dumais, S.; Ragno, R. Learning user interaction models for predicting web search result preferences. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, USA, 6–11 August 2006; pp. 3–10. [Google Scholar]
  77. Ntalianis, K.S.; Doulamis, A.D.; Tsapatsoulis, N.; Doulamis, N. Human action annotation, modeling and analysis based on implicit user interaction. Multimed. Tools Appl. 2010, 50, 199–225. [Google Scholar] [CrossRef]
  78. Zamenopoulos, T.; Alexiou, K. Towards an anticipatory view of design. Des. Stud. 2007, 28, 411–436. [Google Scholar] [CrossRef]
  79. van Bodegraven, J. How anticipatory design will challenge our relationship with technology. In Proceedings of the 2017 AAAI Spring Symposium Series, Stanford, CA, USA, 27–29 March 2017. [Google Scholar]
  80. Dumas, J.F.; Redish, J.C. A Practical Guide to Usability Testing; Greenwood Publishing Group Inc.: Westport, CT, USA, 1993. [Google Scholar]
  81. Lewis, J.R. Usability testing. In Handbook of Human Factors and Ergonomics; Wiley: Hoboken, NJ, USA, 2012; pp. 1267–1312. [Google Scholar]
  82. Nielsen, J.; Molich, R. Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Gaithersburg, MD, USA, 15–17 March 1990; pp. 249–256. [Google Scholar]
  83. González, M.P.; Lorés, J.; Granollers, A. Enhancing usability testing through datamining techniques: A novel approach to detecting usability problem patterns for a context of use. Inf. Softw. Technol. 2008, 50, 547–568. [Google Scholar] [CrossRef]
  84. Eloff, J.H.; De Bruin, J.A.; Malan, K.M. Semi-automated usability analysis through eye tracking. South Afr. Comput. J. 2018, 30, 66–84. [Google Scholar]
  85. Vargas, A.; Weffers, H.; da Rocha, H.V. A method for remote and semi-automatic usability evaluation of web-based applications through users behavior analysis. In Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research, Eindhoven, The Netherlands, 24–27 August 2010; pp. 1–5. [Google Scholar]
  86. Muhi, K.; Szőke, G.; Fülöp, L.J.; Ferenc, R.; Berger, Á. A semi-automatic usability evaluation framework. In Computational Science and Its Applications—ICCSA 2013, Proceedings of the 13th International Conference on Computational Science and Its Applications, Ho Chi Minh City, Vietnam, 24–27 June 2013; Proceedings, Part II 13; Springer: Berlin/Heidelberg, Germany, 2013; pp. 529–542. [Google Scholar]
  87. Petrie, H.; Bevan, N. The evaluation of accessibility, usability, and user experience. In The Universal Access Handbook; CRC Press: Boca Raton, FL, USA, 2009; Volume 1, pp. 1–16. [Google Scholar]
  88. Wang, J.; Antonenko, P.; Celepkolu, M.; Jimenez, Y.; Fieldman, E.; Fieldman, A. Exploring relationships between eye tracking and traditional usability testing data. Int. J. Hum.-Comput. Interact. 2019, 35, 483–494. [Google Scholar] [CrossRef]
  89. Brocke, J.V.; Riedl, R.; Léger, P.M. Application strategies for neuroscience in information systems design science research. J. Comput. Inf. Syst. 2013, 53, 1–13. [Google Scholar] [CrossRef]
  90. Alfimtsev, A.N.; Basarab, M.A.; Devyatkov, V.V.; Levanov, A.A. A new methodology of usability testing on the base of the analysis of user’s electroencephalogram. J. Comput. Sci. Appl. 2015, 3, 105–111. [Google Scholar]
  91. Gasson, S. Human-centered vs. user-centered approaches to information system design. J. Inf. Technol. Theory Appl. (JITTA) 2003, 5, 5. [Google Scholar]
  92. Zhang, J.; Johnson, K.A.; Malin, J.T.; Smith, J.W. Human-centered information visualization. In Proceedings of the International Workshop on Dynamic Visualizations and Learning, Tubingen, Germany, 18–19 July 2002. [Google Scholar]
  93. Aragon, C.; Guha, S.; Kogan, M.; Muller, M.; Neff, G. Human-Centered Data Science: An Introduction; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
  94. Hall, D.L.; Jordan, J.M. Human-Centered Information Fusion; Artech House: Norwood, MA, USA, 2010. [Google Scholar]
  95. Rinkus, S.; Walji, M.; Johnson-Throop, K.A.; Malin, J.T.; Turley, J.P.; Smith, J.W.; Zhang, J. Human-centered design of a distributed knowledge management system. J. Biomed. Inform. 2005, 38, 4–17. [Google Scholar] [CrossRef] [PubMed]
  96. Gentner, D.; van Harmelen, F.; Hitzler, P.; Janowicz, K.; Kuhnberger, K.U. Cognitive approaches for the semantic web. Dagstuhl Rep. 2012, 2, 93–116. [Google Scholar]
  97. Raubal, M.; Adams, B. The semantic web needs more cognition. Semant. Web 2010, 1, 69–74. [Google Scholar] [CrossRef]
  98. McCosker, A.; Wilken, R. Rethinking ‘big data’as visual knowledge: The sublime and the diagrammatic in data visualisation. Vis. Stud. 2014, 29, 155–164. [Google Scholar] [CrossRef]
  99. Donalek, C.; Djorgovski, S.G.; Cioc, A.; Wang, A.; Zhang, J.; Lawler, E.; Yeh, S.; Mahabal, A.; Graham, M.; Drake, A.; et al. Immersive and collaborative data visualization using virtual reality platforms. In Proceedings of the 2014 IEEE International Conference on Big Data (Big Data), IEEE, Washington, DC, USA, 27–30 October 2014; pp. 609–614. [Google Scholar]
  100. Olshannikova, E.; Ometov, A.; Koucheryavy, Y.; Olsson, T. Visualizing Big Data with augmented and virtual reality: Challenges and research agenda. J. Big Data 2015, 2, 1–27. [Google Scholar] [CrossRef]
  101. Abbasi, A.; Sarker, S.; Chiang, R.H. Big data research in information systems: Toward an inclusive research agenda. J. Assoc. Inf. Syst. 2016, 17, 3. [Google Scholar] [CrossRef]
  102. Franke, B.; Plante, J.F.; Roscher, R.; Lee, E.S.A.; Smyth, C.; Hatefi, A.; Chen, F.; Gil, E.; Schwing, A.; Selvitella, A.; et al. Statistical inference, learning and models in big data. Int. Stat. Rev. 2016, 84, 371–389. [Google Scholar] [CrossRef]
103. De Bra, P.M.E.; Aroyo, L.M.; Chepegin, V. The next big thing: Adaptive web-based systems. J. Digit. Inf. 2004, 5, Article No. 247. [Google Scholar]
  104. Clarke, S.; Lehaney, B. (Eds.) Human Centered Methods in Information Systems: Current Research and Practice; IGI Global: Hershey, PA, USA, 1999. [Google Scholar]
105. Rahmayani, M.T.I.; Firdaus, R.; Tekwana, P. Implementation of human centered design (HCD) models in designing web-based information systems. J. Mantik 2023, 6, 3818–3826. [Google Scholar]
  106. van Velsen, L.S.; van der Geest, T.M.; Klaassen, R.F. User-centered evaluation of adaptive and adaptable systems. In Proceedings of the Fifth Workshop on User-Centred Design and Evaluation of Adaptive Systems, Dublin, Ireland, 20 June 2006. [Google Scholar]
  107. Chen, H.M.; Cooper, M.D. Using clustering techniques to detect usage patterns in a Web-based information system. J. Am. Soc. Inf. Sci. Technol. 2001, 52, 888–904. [Google Scholar] [CrossRef]
  108. Chen, H.M.; Cooper, M.D. Stochastic modeling of usage patterns in a web-based information system. J. Am. Soc. Inf. Sci. Technol. 2002, 53, 536–548. [Google Scholar] [CrossRef]
  109. De Guinea, A.O.; Webster, J. An investigation of information systems use patterns: Technological events as triggers, the effect of time, and consequences for performance. MIS Q. 2013, 37, 1165–1188. [Google Scholar] [CrossRef]
  110. Ramirez, A.J.; Cheng, B.H. Design patterns for developing dynamically adaptive systems. In Proceedings of the 2010 ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems, New York, NY, USA, 3–4 May 2010; pp. 49–58. [Google Scholar]
  111. Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, Banff, AB, Canada, 8–12 May 2007; pp. 697–706. [Google Scholar]
  112. Kabir, N. The Impact of Semantic Knowledge Management System on Firms’ Innovation and Competitiveness. Doctoral Dissertation, Newcastle University, Newcastle, UK, 2017. [Google Scholar]
  113. Guido, A.L.; Paiano, R. Semantic integration of information systems. Int. J. Comput. Netw. Commun. (IJCNC) 2010, 2, 48–64. [Google Scholar]
114. Tummarello, G.; Delbru, R.; Oren, E. Sindice.com: Weaving the open linked data. In Proceedings of the Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, 11–15 November 2007; Proceedings; Springer: Berlin/Heidelberg, Germany, 2007; pp. 552–565. [Google Scholar]
  115. Patkos, T.; Bikakis, A.; Antoniou, G.; Papadopouli, M.; Plexousakis, D. A semantics-based framework for context-aware services: Lessons learned and challenges. In Proceedings of the International Conference on Ubiquitous Intelligence and Computing, Hong Kong, China, 11–13 July 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 839–848. [Google Scholar]
  116. Sangers, J.; Frasincar, F.; Hogenboom, F.; Chepegin, V. Semantic web service discovery using natural language processing techniques. Expert Syst. Appl. 2013, 40, 4660–4671. [Google Scholar] [CrossRef]
  117. Kocaballi, A.B.; Laranjo, L.; Coiera, E. Understanding and measuring user experience in conversational interfaces. Interact. Comput. 2019, 31, 192–207. [Google Scholar] [CrossRef]
  118. Gruber, T. Collective knowledge systems: Where the social web meets the semantic web. J. Web Semant. 2008, 6, 4–13. [Google Scholar] [CrossRef]
  119. Grassi, M.; Morbidoni, C.; Nucci, M. A collaborative video annotation system based on semantic web technologies. Cogn. Comput. 2012, 4, 497–514. [Google Scholar] [CrossRef]
  120. Albertoni, R.; Bertone, A.; De Martino, M. Information Search: The Challenge of Integrating Information Visualization and Semantic Web. In Proceedings of the 16th International Workshop on Database and Expert Systems Applications (DEXA’05), Copenhagen, Denmark, 22–26 August 2005; pp. 529–533. [Google Scholar] [CrossRef]
  121. Benjamins, R.; Contreras, J.; Corcho, O.; Gómez-Pérez, A. The six challenges of the Semantic Web. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning, KR2002, Toulouse, France, 22–25 April 2002; ISBN 9781558608474. [Google Scholar]
  122. Baldoni, M.; Baroglio, C.; Henze, N. Personalization for the semantic web. In Reasoning Web: First International Summer School 2005, Msida, Malta, July 25–29, 2005, Revised Lectures; Springer: Berlin/Heidelberg, Germany, 2005; pp. 173–212. [Google Scholar]
  123. Lilis, Y.; Zidianakis, E.; Partarakis, N.; Antona, M.; Stephanidis, C. Personalizing HMI elements in ADAS using ontology meta-models and rule based reasoning. In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, Proceedings of the 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, 9–14 July 2017; Proceedings, Part I 11; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 383–401. [Google Scholar]
124. Christou, C.; Angus, C.; Loscos, C.; Dettori, A.; Roussou, M. A versatile large-scale multimodal VR system for cultural heritage visualization. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Limassol, Cyprus, 1–3 November 2006; pp. 133–140. [Google Scholar]
125. Gaitatzes, A.; Christopoulos, D.; Roussou, M. Reviving the past: Cultural heritage meets virtual reality. In Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage, Glyfada, Greece, 28–30 November 2001; pp. 103–110. [Google Scholar]
126. Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.L.; Mancuso, S.; Muzzupappa, M. From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition. J. Cult. Herit. 2010, 11, 42–49. [Google Scholar] [CrossRef]
127. Gonizzi Barsanti, S.; Caruso, G.; Micoli, L.L.; Covarrubias Rodriguez, M.; Guidi, G. 3D visualization of cultural heritage artefacts with virtual reality devices. In Proceedings of the 25th International CIPA Symposium 2015, Taipei, Taiwan, 31 August–4 September 2015; Copernicus Gesellschaft mbH: Göttingen, Germany, 2015; Volume 40, pp. 165–172. [Google Scholar]
128. Foni, A.; Papagiannakis, G.; Magnenat-Thalmann, N. A Virtual Heritage Case Study: A Modern Approach to the Revival of Ancient Historical or Archeological Sites through Application of 3D Real-Time Computer Graphics. In Proceedings of AVIR '03, 2003. Available online: https://api.semanticscholar.org/CorpusID:12528723 (accessed on 5 January 2024).
129. Papagiannakis, G.; Ponder, M.; Molet, T.; Kshirsagar, S.; Cordier, F.; Magnenat-Thalmann, M.; Thalmann, D. LIFEPLUS: Revival of life in ancient Pompeii. In Proceedings of Virtual Systems and Multimedia, 2002. Available online: https://www.researchgate.net/publication/37444098_LIFEPLUS_Revival_of_life_in_ancient_Pompeii_Virtual_Systems_and_Multimedia (accessed on 5 January 2024).
  130. Magnenat-Thalmann, N.; Foni, A.E.; Papagiannakis, G.; Cadi-Yazli, N. Real Time Animation and Illumination in Ancient Roman Sites. Int. J. Virtual Real. 2007, 6, 11–24. [Google Scholar]
  131. Foni, A.E.; Papagiannakis, G.; Cadi-Yazli, N.; Magnenat-Thalmann, N. Time-dependent illumination and animation of virtual Hagia-Sophia. Int. J. Archit. Comput. 2007, 5, 283–301. [Google Scholar] [CrossRef]
132. Skovfoged, M.M.; Viktor, M.; Sokolov, M.K.; Hansen, A.; Nielsen, H.H.; Rodil, K. The tales of the Tokoloshe: Safeguarding intangible cultural heritage using virtual reality. In Proceedings of the Second African Conference for Human Computer Interaction: Thriving Communities, New York, NY, USA, 3–7 December 2018; pp. 1–4. [Google Scholar]
133. Cao, D.; Li, G.; Zhu, W.; Liu, Q.; Bai, S.; Li, X. Virtual reality technology applied in digitalization of cultural heritage. Clust. Comput. 2019, 22, 10063–10074. [Google Scholar]
134. Oculus Quest. Available online: https://www.oculus.com/experiences/quest/?locale=el_GR (accessed on 10 January 2023).
  135. Argyriou, L.; Economou, D.; Bouki, V. Design methodology for 360 immersive video applications: The case study of a cultural heritage virtual tour. Pers. Ubiquitous Comput. 2020, 24, 843–859. [Google Scholar] [CrossRef]
  136. Argyriou, L.; Economou, D.; Bouki, V. 360-degree interactive video application for cultural heritage education. In Proceedings of the 3rd Annual International Conference of the Immersive Learning Research Network, Verlag der Technischen Universität Graz, Coimbra, Portugal, 26–29 June 2017. [Google Scholar]
137. Škola, F.; Rizvić, S.; Cozza, M.; Barbieri, L.; Bruno, F.; Skarlatos, D.; Liarokapis, F. Virtual reality with 360-video storytelling in cultural heritage: Study of presence, engagement, and immersion. Sensors 2020, 20, 5851. [Google Scholar] [CrossRef] [PubMed]
138. Zhou, C.; Li, Z.; Liu, Y. A measurement study of Oculus 360-degree video streaming. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 27–37. [Google Scholar]
139. Lo, W.C.; Fan, C.L.; Lee, J.; Huang, C.Y.; Chen, K.T.; Hsu, C.H. 360° video viewing dataset in head-mounted virtual reality. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 211–216. [Google Scholar]
140. Hajirasouli, A.; Banihashemi, S.; Kumarasuriyar, A.; Talebi, S.; Tabadkani, A. Virtual reality-based digitisation for endangered heritage sites: Theoretical framework and application. J. Cult. Herit. 2021, 49, 140–151. [Google Scholar] [CrossRef]
  141. Pribeanu, C.; Balog, A.; Iordache, D.D. Measuring the perceived quality of an AR-based learning application: A multidimensional model. Interact. Learn. Environ. 2017, 25, 482–495. [Google Scholar] [CrossRef]
  142. Irwansyah, F.S.; Yusuf, Y.M.; Farida, I.; Ramdhani, M.A. Augmented reality (AR) technology on the android operating system in chemistry learning. IOP Conf. Ser. Mater. Sci. Eng. 2018, 288, 012068. [Google Scholar] [CrossRef]
  143. Moorhouse, N.; Jung, T. Augmented reality to enhance the learning experience in cultural heritage tourism: An experiential learning cycle perspective. eReview Tour. Res. 2017, 8. [Google Scholar]
  144. Dieck, M.C.T.; Jung, T.H. Value of augmented reality at cultural heritage sites: A stakeholder approach. J. Destin. Mark. Manag. 2017, 6, 110–117. [Google Scholar]
  145. Choudary, O.; Charvillat, V.; Grigoras, R.; Gurdjos, P. MARCH: Mobile augmented reality for cultural heritage. In Proceedings of the 17th ACM International Conference on Multimedia, Vancouver, BC, Canada, 19–24 October 2009; pp. 1023–1024. [Google Scholar]
146. Vlahakis, V.; Karigiannis, J.; Tsotros, M.; Gounaris, M.; Almeida, L.; Stricker, D.; Gleue, T.; Christou, I.T.; Carlucci, R.; Ioannidis, N.; et al. Archeoguide: First results of an augmented reality, mobile computing system in cultural heritage sites. In Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage, Glyfada, Greece, 28–30 November 2001. [Google Scholar]
  147. Chung, N.; Lee, H.; Kim, J.Y.; Koo, C. The role of augmented reality for experience-influenced environments: The case of cultural heritage tourism in Korea. J. Travel Res. 2018, 57, 627–643. [Google Scholar] [CrossRef]
  148. Deliyiannis, I.; Papaioannou, G. Augmented reality for archaeological environments on mobile devices: A novel open framework. Mediterr. Archaeol. Archaeom. 2014, 14, 1–10. [Google Scholar]
149. Pierdicca, R.; Frontoni, E.; Zingaretti, P.; Malinverni, E.S.; Colosi, F.; Orazi, R. Making visible the invisible. Augmented reality visualization for 3D reconstructions of archaeological sites. In Proceedings of the Augmented and Virtual Reality: Second International Conference, AVR 2015, Lecce, Italy, 31 August–3 September 2015; Springer International Publishing: Berlin/Heidelberg, Germany, 2015. Proceedings 2. pp. 25–37. [Google Scholar]
  150. Panou, C.; Ragia, L.; Dimelli, D.; Mania, K. An architecture for mobile outdoors augmented reality for cultural heritage. ISPRS Int. J. Geo-Inf. 2018, 7, 463. [Google Scholar] [CrossRef]
  151. Fernández-Palacios, B.J.; Nex, F.; Rizzi, A.; Remondino, F. ARCube—The Augmented Reality Cube for Archaeology. Archaeometry 2015, 1, 250–262. [Google Scholar] [CrossRef]
  152. Fernández-Palacios, B.J.; Rizzi, A.; Nex, F. Augmented reality for archaeological finds. In Proceedings of the Cultural Heritage Preservation: 4th International Conference, EuroMed 2012, Limassol, Cyprus, 29 October–3 November 2012; Springer: Berlin/Heidelberg, Germany, 2012. Proceedings 4. pp. 181–190. [Google Scholar]
  153. The Historical Figures AR. Available online: https://play.google.com/store/apps/details?id=ca.altkey.thehistoricalfiguresar (accessed on 31 October 2022).
  154. Carre, A.L.; Dubois, A.; Partarakis, N.; Zabulis, X.; Patsiouras, N.; Mantinaki, E.; Zidianakis, E.; Cadi, N.; Baka, E.; Thalmann, N.M.; et al. Mixed-reality demonstration and training of glassblowing. Heritage 2022, 5, 103–128. [Google Scholar] [CrossRef]
  155. Gedenryd, H. How Designers Work—Making Sense of Authentic Cognitive Activities. Ph.D. Thesis, Lund University, Lund, Sweden, 1998. [Google Scholar]
  156. Keller, C.M.; Keller, J.D. Imagery in cultural tradition and innovation. Mind Cult. Act. 1999, 6, 3–32. [Google Scholar] [CrossRef]
  157. Di Nuovo, A.; De La Cruz, V.M.; Marocco, D. Special issue on artificial mental imagery in cognitive systems and robotics. Adapt. Behav. 2013, 21, 217–221. [Google Scholar] [CrossRef]
  158. Di Nuovo, A.; Marocco, D.; Di Nuovo, S.; Cangelosi, A. Embodied mental imagery in cognitive robots. In Springer Handbook of Model-Based Science; Springer: Berlin/Heidelberg, Germany, 2017; pp. 619–637. [Google Scholar]
  159. Zabulis, X.; Meghini, C.; Dubois, A.; Doulgeraki, P.; Partarakis, N.; Adami, I.; Karuzaki, E.; Carre, A.; Patsiouras, N.; Kaplanidi, D.; et al. Digitisation of traditional craft processes. J. Comput. Cult. Herit. 2022, 15, 1–24. [Google Scholar] [CrossRef]
  160. Aktas, B.; Mäkelä, M.; Laamanen, T.K. Material connections in craft making: The case of felting. In Proceedings of the Design Research Society International Conference, Brisbane, Australia, 11–14 August 2020; Design Research Society: Brisbane, Australia; pp. 2326–2343. [Google Scholar]
  161. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Zikas, P.; Papagiannakis, G.; Magnenat Thalmann, N. TooltY: An approach for the combination of motion capture and 3D reconstruction to present tool usage in 3D environments. In Intelligent Scene Modeling and Human-Computer Interaction; Springer International Publishing: Cham, Switzerland, 2021; pp. 165–180. [Google Scholar]
  162. Stefanidi, E.; Partarakis, N.; Zabulis, X.; Papagiannakis, G. An approach for the visualization of crafts and machine usage in virtual environments. In Proceedings of the 13th International Conference on Advances in Computer-Human Interactions, Valencia, Spain, 21–25 November 2020; pp. 21–25. [Google Scholar]
  163. Bouloukakis, M.; Partarakis, N.; Drossis, I.; Kalaitzakis, M.; Stephanidis, C. Virtual reality for smart city visualization and monitoring. In Mediterranean Cities and Island Communities: Smart, Sustainable, Inclusive and Resilient; Springer: Cham, Switzerland, 2019; pp. 1–18. [Google Scholar]
  164. Rossau, I.G.; Skovfoged, M.M.; Czapla, J.J.; Sokolov, M.K.; Rodil, K. Dovetailing: Safeguarding traditional craftsmanship using virtual reality. Int. J. Intang. Herit. 2019, 14, 104–120. [Google Scholar]
165. The Irregular Corporation. Available online: https://theirregularcorporation.com/ (accessed on 27 November 2023).
  166. Woodwork Simulator. Available online: https://www.igdb.com/games/woodwork-simulator (accessed on 27 November 2023).
  167. Murray, J.; Sawyer, W. Virtual Crafting Simulator: Teaching Heritage through Simulation. In Proceedings of EDULEARN15; University of Lincoln: Lincoln, UK, 2015; pp. 7668–7675. [Google Scholar]
  168. Iyobe, M.; Ishida, T.; Miyakawa, A.; Shibata, Y. Kansei retrieval method by principal component analysis of Japanese traditional crafts. In Proceedings of the 23rd International Symposium on Artificial Life and Robotics, Beppu, Japan, 18–20 January 2018; pp. 588–591. [Google Scholar]
  169. Iyobe, M.; Ishida, T.; Miyakawa, A.; Sugita, K.; Uchida, N.; Shibata, Y. Development of a mobile virtual traditional crafting presentation system using augmented reality technology. Int. J. Space-Based Situated Comput. (IJSSC) 2017, 6, 239–251. [Google Scholar] [CrossRef]
  170. Iyobe, M.; Ishida, T.; Miyakawa, A.; Shibata, Y. Implementation of a mobile traditional crafting application using kansei retrieval method. IT CoNvergence PRAct. (INPRA) 2017, 5, 15–44. [Google Scholar]
  171. Kaplan, A.D.; Cruit, J.; Endsley, M.; Beers, S.M.; Sawyer, B.D.; Hancock, P.A. The effects of virtual reality, augmented reality, and mixed reality as training enhancement methods: A meta-analysis. Hum. Factors 2021, 63, 706–726. [Google Scholar] [CrossRef] [PubMed]
  172. Gonzalez-Franco, M.; Pizarro, R.; Cermeron, J.; Li, K.; Thorn, J.; Hutabarat, W.; Tiwari, A.; Bermell-Garcia, P. Immersive mixed reality for manufacturing training. Front. Robot. AI 2017, 4, 3. [Google Scholar] [CrossRef]
  173. Verhey, J.T.; Haglin, J.M.; Verhey, E.M.; Hartigan, D.E. Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, e2067. [Google Scholar] [CrossRef] [PubMed]
  174. Alnagrat, A.J.A. Virtual Transformations in Human Learning Environment: An Extended Reality Approach. J. Hum. Centered Technol. 2022, 1, 116–124. [Google Scholar] [CrossRef]
  175. Pretolesi, D.; Zechner, O. Persuasive XR Training: Improving Training with AI and Dashboards. In Proceedings of the 18th International Conference on Persuasive Technology (PERSUASIVE 2023), Eindhoven, The Netherlands, 19–21 April 2023; p. 8. [Google Scholar]
176. Nansense. Available online: https://www.nansense.com (accessed on 7 December 2023).
177. Rokoko. Available online: https://www.rokoko.com (accessed on 7 December 2023).
  178. Mehta, D.; Sotnychenko, O.; Mueller, F.; Xu, W.; Elgharib, M.; Fua, P.; Seidel, H.-P.; Rhodin, H.; Pons-Moll, G.; Theobalt, C. XNect: Real-time multi-person 3D motion capture with a single RGB camera. ACM Trans. Graph. (TOG) 2020, 39, 82:1–82:17. [Google Scholar] [CrossRef]
  179. Laraba, S.; Brahimi, M.; Tilmanne, J.; Dutoit, T. 3D skeleton-based action recognition by representing motion capture sequences as 2D-RGB images. Comput. Animat. Virtual Worlds 2017, 28, e1782. [Google Scholar] [CrossRef]
  180. Qammaz, A.; Argyros, A.A. MocapNET: Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images. In Proceedings of the BMVC, Cardiff, UK, 9–12 September 2019; p. 46. [Google Scholar]
  181. Qammaz, A.; Argyros, A. Occlusion-tolerant and personalized 3D human pose estimation in RGB images. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, Milan, Italy, 10–15 January 2021; pp. 6904–6911. [Google Scholar]
  182. Qammaz, A.; Argyros, A.A. A Unified Approach for Occlusion Tolerant 3D Facial Pose Capture and Gaze Estimation Using MocapNETs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 3178–3188. [Google Scholar]
  183. D’Apuzzo, N. Overview of 3D surface digitization technologies in Europe. In Three-Dimensional Image Capture and Applications VII; SPIE: Wallisellen, Switzerland, 2006; Volume 6056, pp. 42–54. [Google Scholar]
  184. Durou, J.D.; Falcone, M.; Quéau, Y.; Tozza, S. (Eds.) Advances in Photometric 3d-Reconstruction; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–29. [Google Scholar]
185. Daneshmand, M.; Helmi, A.; Avots, E.; Noroozi, F.; Alisinanoglu, F.; Arslan, H.S.; Gorbova, J.; Haamer, R.E.; Ozcinar, C.; Anbarjafari, G. 3d scanning: A comprehensive survey. arXiv 2018, arXiv:1801.08863. [Google Scholar]
  186. Xiong, Z.; Zhang, Y.; Wu, F.; Zeng, W. Computational depth sensing: Toward high-performance commodity depth cameras. IEEE Signal Process. Mag. 2017, 34, 55–68. [Google Scholar] [CrossRef]
  187. Zhang, T.; Nakamura, Y. Hrpslam: A benchmark for rgb-d dynamic slam and humanoid vision. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), IEEE, Naples, Italy, 25–27 February 2019; pp. 110–116. [Google Scholar]
  188. Aguilar, W.G.; Rodríguez, G.A.; Álvarez, L.; Sandoval, S.; Quisaguano, F.; Limaico, A. Visual SLAM with a RGB-D camera on a quadrotor UAV using on-board processing. In Proceedings of the Advances in Computational Intelligence: 14th International Work-Conference on Artificial Neural Networks, IWANN 2017, Cadiz, Spain, 14–16 June 2017; Proceedings, Part II 14. Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 596–606. [Google Scholar]
  189. Benko, H.; Holz, C.; Sinclair, M.; Ofek, E. Normaltouch and texturetouch: High-fidelity 3d haptic shape rendering on handheld virtual reality controllers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 717–728. [Google Scholar]
  190. Choi, I.; Ofek, E.; Benko, H.; Sinclair, M.; Holz, C. Claw: A multifunctional handheld haptic controller for grasping, touching, and triggering in virtual reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–13. [Google Scholar]
  191. Whitmire, E.; Benko, H.; Holz, C.; Ofek, E.; Sinclair, M. Haptic revolver: Touch, shear, texture, and shape rendering on a reconfigurable virtual reality controller. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. [Google Scholar]
  192. Coelho, C.; Tichon, J.; Hine, T.J.; Wallis, G.; Riva, G. Media presence and inner presence: The sense of presence in virtual reality technologies. In From Communication to Presence: Cognition, Emotions and Culture towards the Ultimate Communicative Experience; IOS Press: Amsterdam, The Netherlands, 2006; Volume 11, pp. 25–45. [Google Scholar]
  193. North, M.M.; North, S.M. A comparative study of sense of presence of traditional virtual reality and immersive environments. Australas. J. Inf. Syst. 2016, 20, 1–15. [Google Scholar]
  194. Argelaguet, F.; Hoyet, L.; Trico, M.; Lécuyer, A. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In Proceedings of the 2016 IEEE Virtual Reality (VR), IEEE, Greenville, SC, USA, 19–23 March 2016; pp. 3–10. [Google Scholar]
  195. Kilteni, K.; Groten, R.; Slater, M. The sense of embodiment in virtual reality. Presence Teleoperators Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef]
  196. Genay, A.; Lécuyer, A.; Hachet, M. Being an avatar “for real”: A survey on virtual embodiment in augmented reality. IEEE Trans. Vis. Comput. Graph. 2021, 28, 5071–5090. [Google Scholar] [CrossRef] [PubMed]
  197. Hassan, R. Digitality, virtual reality and the ‘empathy machine’. Digit. Journal. 2020, 8, 195–212. [Google Scholar] [CrossRef]
  198. Kenanidis, E.; Boutos, P.; Voulgaris, G.; Zgouridou, A.; Gkoura, E.; Gamie, Z.; Papagiannakis, G.; Tsiridis, E. Effectiveness of virtual reality compared to video training on acetabular cup and femoral stem implantation accuracy in total hip arthroplasty among medical students: A randomised controlled trial. Int. Orthop. 2023, 1–9. [Google Scholar] [CrossRef] [PubMed]
  199. Zikas, P.; Protopsaltis, A.; Lydatakis, N.; Kentros, M.; Geronikolakis, S.; Kateros, S.; Kamarianakis, M.; Evangelou, G.; Filippidis, A.; Grigoriou, E.; et al. MAGES 4.0: Accelerating the world’s transition to VR training and democratizing the authoring of the medical metaverse. IEEE Comput. Graph. Appl. 2023, 43, 43–56. [Google Scholar] [CrossRef]
200. Saccoccio, S. Towards Enabling Storyliving Experiences: How XR Technologies Can Enhance Brand Storytelling. Master's Thesis, University of Milan, Milan, Italy, 2022. [Google Scholar]
  201. Miller, C.H. Digital Storytelling 4e: A Creator’s Guide to Interactive Entertainment; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  202. Raybourn, E.M. A new paradigm for serious games: Transmedia learning for more effective training and education. J. Comput. Sci. 2014, 5, 471–481. [Google Scholar] [CrossRef]
  203. Streicher, A.; Smeddinck, J.D. Personalized and adaptive serious games. In Proceedings of the Entertainment Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, 5–10 July 2015; Revised Selected Papers. Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 332–377. [Google Scholar]
  204. Williams-Bell, F.M.; Kapralos, B.; Hogue, A.; Murphy, B.M.; Weckman, E.J. Using serious games and virtual simulation for training in the fire service: A review. Fire Technol. 2015, 51, 553–584. [Google Scholar] [CrossRef]
  205. Arnab, S. (Ed.) Serious Games for Healthcare: Applications and Implications; IGI Global: Hershey, PA, USA, 2012. [Google Scholar]
  206. Chittaro, L.; Buttussi, F. Assessing knowledge retention of an immersive serious game vs. a traditional education method in aviation safety. IEEE Trans. Vis. Comput. Graph. 2015, 21, 529–538. [Google Scholar] [CrossRef]
  207. Chittaro, L.; Sioni, R. Serious games for emergency preparedness: Evaluation of an interactive vs. a non-interactive simulation of a terror attack. Comput. Hum. Behav. 2015, 50, 508–519. [Google Scholar] [CrossRef]
  208. Mystakidis, S.; Besharat, J.; Papantzikos, G.; Christopoulos, A.; Stylios, C.; Agorgianitis, S.; Tselentis, D. Design, development, and evaluation of a virtual reality serious game for school fire preparedness training. Educ. Sci. 2022, 12, 281. [Google Scholar] [CrossRef]
  209. Checa, D.; Miguel-Alonso, I.; Bustillo, A. Immersive virtual-reality computer-assembly serious game to enhance autonomous learning. Virtual Real. 2021, 27, 3301–3318. [Google Scholar] [CrossRef]
  210. Rebolledo-Mendez, G.; Avramides, K.; De Freitas, S.; Memarzia, K. Societal impact of a serious game on raising public awareness: The case of FloodSim. In Proceedings of the 2009 ACM SIGGRAPH Symposium on Video Games, New Orleans, LA, USA, 3–7 August 2009; pp. 15–22. [Google Scholar]
  211. De Jans, S.; Van Geit, K.; Cauberghe, V.; Hudders, L.; De Veirman, M. Using games to raise awareness: How to co-design serious mini-games? Comput. Educ. 2017, 110, 77–87. [Google Scholar] [CrossRef]
  212. Zarzuela, M.M.; Pernas, F.J.D.; Martínez, L.B.; Ortega, D.G.; Rodríguez, M.A. Mobile serious game using augmented reality for supporting children’s learning about animals. Procedia Comput. Sci. 2013, 25, 375–381. [Google Scholar] [CrossRef]
  213. Checa, D.; Bustillo, A. A review of immersive virtual reality serious games to enhance learning and training. Multimed. Tools Appl. 2020, 79, 5501–5527. [Google Scholar] [CrossRef]
  214. Avola, D.; Cinque, L.; Foresti, G.L.; Marini, M.R. An interactive and low-cost full body rehabilitation framework based on 3D immersive serious games. J. Biomed. Inform. 2019, 89, 81–100. [Google Scholar] [CrossRef] [PubMed]
  215. Schmidt, A.; Mayer, S.; Buschek, D. Introduction to Intelligent User Interfaces. In Proceedings of the Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2021; pp. 1–4. [Google Scholar]
  216. Hitzler, P.; Bianchi, F.; Ebrahimi, M.; Sarker, M.K. Neural-symbolic integration and the semantic web. Semant. Web 2020, 11, 3–11. [Google Scholar] [CrossRef]
  217. Shadbolt, N.; Berners-Lee, T.; Hall, W. The semantic web revisited. IEEE Intell. Syst. 2006, 21, 96–101. [Google Scholar] [CrossRef]
  218. Ławrynowicz, A. Creative AI: A new avenue for the Semantic Web? Semant. Web 2020, 11, 69–78. [Google Scholar] [CrossRef]
  219. Card, S. Information visualization. In Human-Computer Interaction; CRC Press: Boca Raton, FL, USA, 2009; pp. 199–234. [Google Scholar]
  220. Nakao, Y.; Strappelli, L.; Stumpf, S.; Naseer, A.; Regoli, D.; Gamba, G.D. Towards responsible AI: A design space exploration of human-centered artificial intelligence user interfaces to investigate fairness. Int. J. Hum. Comput. Interact. 2023, 39, 1762–1788. [Google Scholar] [CrossRef]