Article

The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support

by Emmanouil Zidianakis 1,*, Nikolaos Partarakis 1, Stavroula Ntoa 1, Antonis Dimopoulos 1, Stella Kopidaki 1, Anastasia Ntagianta 1, Emmanouil Ntafotis 1, Aldo Xhako 1, Zacharias Pervolarakis 1, Eirini Kontaki 1, Ioanna Zidianaki 1, Andreas Michelakis 1, Michalis Foukarakis 1 and Constantine Stephanidis 1,2

1 Institute of Computer Science, Foundation for Research and Technology—Hellas (FORTH), 70013 Heraklion, Crete, Greece
2 Computer Science Department, School of Sciences & Engineering, University of Crete, 70013 Heraklion, Crete, Greece
* Author to whom correspondence should be addressed.
Electronics 2021, 10(3), 363; https://doi.org/10.3390/electronics10030363
Submission received: 31 December 2020 / Revised: 21 January 2021 / Accepted: 28 January 2021 / Published: 2 February 2021
(This article belongs to the Section Computer Science & Engineering)

Abstract:
With the ever-advancing availability of digitized museum artifacts, the question of how to make the vast collection of exhibits accessible and explorable beyond what museums traditionally offer via their websites and exposed databases has recently gained increased attention. This research work introduces the Invisible Museum: a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. The platform itself was designed following a Human-Centered Design approach, with the active participation of museum curators and end-users. Content representation adheres to domain standards such as the Conceptual Reference Model of the International Committee for Documentation of the International Council of Museums (CIDOC-CRM) and the Europeana Data Model, and exploits state-of-the-art deep learning technologies to assist curators by generating ontology bindings for textual data. The platform enables the formulation and semantic representation of narratives that guide storytelling experiences and bind the presented artifacts with their socio-historic context. The main contributions are pertinent to the fields of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in web-based 3D/VR technologies, and (d) immersive navigation and interaction. The Invisible Museum has been evaluated using a combination of different methodologies, ensuring the delivery of a high-quality user experience and leading to valuable lessons learned, which are discussed in the article.

1. Introduction

Virtual Museums (VMs) aim to provide the means to establish access, context, and outreach by using information technology [1]. In the past decade, VMs have evolved from digital duplicates of “real” museums or online museums into complex communication systems, which are strongly connected with narratives in 3D reconstructed scenarios [2]. This evolution has produced a wide variety of VM instantiations delivered through multiple platforms and technologies, aiming to visually explain history, architecture, and/or artworks (indoor virtual archaeology, embedded virtual reconstructions, etc.). For example, on-site interactive installations aim at conserving the collective experience of a typical visit to a museum [3]. Web-delivered VMs, on the other hand, provide content through the Web and are facilitated by a wide variety of 3D viewers that have been developed to provide 3D interactive applications “embedded” in browsers. A common characteristic in contemporary VMs is the use of 3D models reconstructing monuments, sites, landscapes, etc., which can in most cases be explored in real time, either directly or through a guide [4].
The evolving technologies of Virtual Reality (VR), Augmented Reality (AR), and the Web have reached a level of maturity that enables them to contribute significantly to the creation of VMs, usually as an extension of physical museums. However, existing VMs are implemented as ad hoc instantiations of physical museums, thus setting various obstacles toward a unified user-centric approach, such as (a) lack of tools for writing stories that use and connect knowledge with digital representations of objects, (b) lack of a unified platform for the presentation of virtual exhibitions on any device, and (c) lack of mechanisms for personalized interaction with knowledge and digital information.
In this context, this research work presents the Invisible Museum: a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. The platform constitutes a VM repository incorporating several technologies that allow novice and expert users to create interactive and unique virtual exhibitions collaboratively. It builds upon Web-based 3D/VR technologies, aiming to provide access to the available content to anyone around the world. In this way, digital collections of exhibits can be promoted to different target groups to foster knowledge, promote cultural heritage, and enhance education, history, science, or art in a user-friendly and interactive way. At the same time, by adopting existing standards for the representation of information and museum knowledge, virtual exhibitions become easily accessible on multiple platforms and presentation devices. Furthermore, it supports content personalization facilities that allow adaptation to diverse end-users, thus supporting alternative forms of virtual exhibitions. Last, the platform itself was designed following a Human-Centered Design (HCD) approach with the collaboration of end-users as well as museum curators and personnel.
The ambition of this research work is to provide a useful and meaningful medium for creating and experiencing cultural content, setting the foundation for a framework that applies research outcomes in the museum context with direct benefits for museums, Cultural and Creative Industries, and society at large. As such, the main impact of this work stems from its core scientific objective, which is to develop an integrated environment for the representation and presentation of museum collections in a VM context. In this work, several research challenges are addressed, including the semantic representation of narratives, the generation of ontology bindings from text, and the web (co-)authoring of virtual exhibitions that can be experienced both in 3D and VR.
The remainder of this article is structured as follows: background and related work in the field of technologies for VMs are discussed in Section 2; the methodology followed for the development of the Invisible Museum is presented in Section 3; the user requirements elicitation process and analysis are presented in Section 4; the iterative design and evaluation results are presented in Section 5; the platform itself is presented in Section 6 along with its architecture, structure, and implementation details; challenging technical aspects are discussed in Section 7; and the results of the platform's evaluation are presented in Section 8. Section 9 includes a discussion on the impact, lessons learned, and limitations of this work. Finally, the concluding Section 10 summarizes the key points of this work and highlights directions for future work.

2. Background and Related Work

2.1. Virtual Museums

One of the earliest definitions [1] of a VM identifies a VM as “a means to establish access, context, and outreach by using information technology. The Internet opens the VM to an interactive dialogue with virtual visitors and invites them to make a VM experience that is related to a real museum experience”. Recently, VMs have been envisioned as virtual places that provide access to structured museum narratives and immersive experiences [2]. This evolution has provided a wide variety of VM instantiations delivered through multiple platforms and technologies.
Mobile VMs or Micro Museums exploit VR/AR mobile applications to visually explain history, architecture, and/or other artworks (indoor virtual archaeology, embedded virtual reconstructions, etc.). On-site interactive installations focus on augmenting existing museum collections with digital technologies [3]. Web-based VMs provide digitized museum content online. Examples include Google Arts and Culture [5], Inventing Europe [6], MUSEON [7], DynaMus [8], Scan4Reco [9], VIRTUE [10], etc. Multimedia VMs involve interactive experiences blending video, audio, and interactive technologies, usually delivered via CD-ROM or DVD, such as “Medieval Dublin: From Vikings to Tudors” [11]. In parallel with the evolution of virtual museums, Digital Archives are becoming increasingly popular, as the amount of digital information to be indexed increases, together with the public's wish to gain access to it. In the past decade, Digital Archives have evolved and today provide thesauruses of terms, digital repositories considering different metadata schemes, intelligent searching/browsing systems, etc.
Existing approaches to the provision of VM experiences can be classified by (a) content, referring to the actual theme and exhibits of a VM (history, archaeology, natural history, art, etc.); (b) interaction technology, related to users' capability of modifying the environment and receiving feedback to their actions (both immersion and interaction concur to realize the belief of actually being in a virtual space [12]); (c) duration, referring to the timing of a VM and its consequences in terms of technology, content, installation, and sustainability of the projects (periodic, permanent, temporary); (d) communication, referring to the type of communication style used to create a VM (descriptive, narrative, or dramatization-based VMs); (e) level of user immersion (immersive or non-immersive [12]); (f) sustainability level, i.e., the extent to which the museum software, digital content, setup, and/or metadata are reusable, portable, maintainable, and exchangeable; (g) type of distribution, referring to the extent to which the VM can be moved from one location to another; and (h) scope of the VM, including educational, entertainment, promotional, and research purposes.

2.2. Technologies for Virtual Museums

Considering the multiple definitions of a VM, a plethora of technologies are employed today to provide access to such applications and services. Without being exhaustive, the background and related work presented in this article focus on technologies relevant to the proposed solution, as presented in the following subsections.

2.2.1. Semantic Web Technologies

In the Cultural Heritage (CH) domain, the use of ontologies for describing, classifying, and contextualizing objects is now a well-established practice. The Conceptual Reference Model (CRM) of the International Committee for Documentation of the International Council of Museums (ICOM-CIDOC) [13] has emerged as a conceptual basis for reconciling different metadata schemas. CRM is an ISO standard (21127:2014 [14]) that has been integrated with the Functional Requirements for Bibliographic Records (FRBR) and the Europeana Data Model (EDM) [15], which are currently used to describe and contextualize the largest European collection of CH resources, featuring more than 30 million objects. The EDM plays the role of an upper ontology for integrating the metadata schemes of libraries, archives, and museums. Furthermore, linked data endpoints have been published under the Creative Commons license, such as collections from the British Museum, Europeana, the Tate Gallery, etc.
In the same context, the generation of top-level ontologies for narratives on top of the aforementioned semantic models has been proposed to support novel forms of representation of CH resources. Computational narratology [16] studies narratives from a computational perspective. The concept of the event is a core element of narratology theory and of narratives themselves. People conventionally refer to an event as an occurrence taking place at a certain time at a specific location. Various models have been developed for representing events on the Semantic Web, e.g., the Event Ontology [17], Linking Open Descriptions of Events (LODE) [18], and the F-Model [13]. Some more general models for semantic data organization are (a) the CIDOC-CRM [19], (b) the ABC ontology [20], and (c) the EDM [15]. Narratives have recently been proposed to enhance the information content and functionality of Digital Libraries, with special emphasis on information discovery and exploration. For example, in the Personalising Access to cultural Heritage Spaces (PATHS) project [21], a system was created that acts as an interactive personalized tour guide through existing digital library collections. In the Agora project [22], methods and techniques to support the narrative understanding of objects in VM collections were developed. The Storyspace system [23] allows describing stories that span museum objects. Recently, the Mingei Craft Ontology (CrO) [24] and the Mingei Online Platform [25] have been proposed to provide a holistic ecosystem for the representation of the socio-historical context of a museum collection and its presentation through multimodal event-based narratives.
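To make the event-centric representation concrete, the following minimal sketch shows how a digitized exhibit could be bound to a narrative event using CIDOC-CRM classes and properties via Python's rdflib. The ex: URIs are hypothetical placeholders for illustration only, not identifiers used by the Invisible Museum.

```python
# Minimal sketch: binding a museum exhibit to a narrative event with
# CIDOC-CRM classes in rdflib. The ex: URIs are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/invisible-museum/")

g = Graph()
g.bind("crm", CRM)
g.bind("ex", EX)

exhibit = EX["exhibit/amphora-042"]        # a digitized artifact
event = EX["event/pottery-firing"]         # an event in its narrative

g.add((exhibit, RDF.type, CRM["E22_Man-Made_Object"]))
g.add((exhibit, RDFS.label, Literal("Minoan amphora", lang="en")))

g.add((event, RDF.type, CRM["E5_Event"]))
g.add((event, CRM["P12_occurred_in_the_presence_of"], exhibit))
g.add((event, CRM["P7_took_place_at"], EX["place/knossos"]))

print(g.serialize(format="turtle"))
```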

2.2.2. Text Analysis for Semantic Web Representation

The quality of Automatic Speech Recognition (ASR) and surface-oriented language analysis has been improved by deep learning (neural network)-based sequence-to-sequence models (such as [26,27]). Deep language analysis that produces content representations out of the analyzed textual input has recently attracted research attention. To date, semantic parsing techniques produce either “abstract meaning representations”, which are abstractions of syntactic dependency trees, or semantic role structures [28,29]. Relevant to this work is the production of abstract semantic structures that can be mapped onto ontological representations. The extraction of predefined domain-dependent concepts and relation instantiations is currently application-specific, as for example with Wikipedia relations [30]. Other approaches focus on the identification of generic, schema-level knowledge, again concerning predefined types of information, such as IS-A relations [31] or part–whole relations [32]. Recently, research has shifted to open-domain approaches that capture the entirety of textual inputs, in terms of entities and their interrelations, as structured Resource Description Framework (RDF)/Web Ontology Language (OWL) representations, e.g., Semantic Web Machine Reading with FRED [33] and the JAMR parser [34].
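As a toy illustration of the open-domain direction (and not of any specific system cited above), the sketch below lifts spaCy named entities into coarse RDF resources; readers such as FRED produce far richer OWL structures. It assumes the en_core_web_sm model is installed, and the ex: namespace is a placeholder.

```python
# Toy sketch: deriving coarse RDF "ontology bindings" from free text by
# lifting spaCy named entities into resources. Illustrative only.
import spacy
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/entities/")   # hypothetical namespace
nlp = spacy.load("en_core_web_sm")               # assumes the model is installed

def text_to_triples(text: str) -> Graph:
    g = Graph()
    for ent in nlp(text).ents:
        node = EX[ent.text.replace(" ", "_")]
        g.add((node, RDF.type, EX[ent.label_]))   # e.g. ex:GPE, ex:DATE
        g.add((node, EX.mentionedIn, Literal(text)))
    return g

g = text_to_triples("The amphora was excavated at Knossos in 1901.")
print(g.serialize(format="turtle"))
```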

2.2.3. Virtual Reality

VR aims at total sensory immersion, combining immersive displays, tracking, and sensing technologies. Common visualization displays include head-mounted displays and 3D polarizing stereoscopic glasses, while inertial and magnetic trackers are the most popular position and orientation devices. As far as sensing is concerned, 3D mice and gloves can be used to create a feeling of control of an actual space. An example of a high-immersion VR environment is Kivotos, which uses the CAVE® system in a room of 3 by 3 meters, where the walls and the floor act as projection screens and visitors take off on a journey thanks to stereoscopic 3D glasses [35]. As mentioned earlier, virtual exhibitions can be visualized in the web browser in the form of 3D galleries, but they can also be used as a standalone interface. In addition, several commercial VR software tools and libraries exist, such as Cortona [36], which can be used to generate fast and effective VM environments. However, the cost of creating and storing the content (i.e., 3D galleries) is considerably high for the medium- and small-sized museums that represent the majority of Cultural Heritage Institutions (CHIs). More recently, the evolution of headset technology has made VR equipment more widely available and less demanding in terms of equipment costs for museums. This has led to the creation of several VR-based VM experiences (e.g., [37]) built on mainstream hardware such as the HTC Vive [38] and the Oculus Quest 2 [39].

2.2.4. Visualization Technologies

The most popular technologies for 3D visualization on the World Wide Web (WWW) include Web3D [40], which offers standards and tools such as VRML and X3D that can be used for the creation of an interactive VM. Many museum applications based on VRML have been developed for the Web [41,42]. However, VRML can be excessively labor-intensive, time-consuming, and expensive. QuickTime VR (QTVR) and panoramas that allow animation and provide dynamic and continuous 360° views might represent an alternative solution for museums, such as in Hughes et al. [43]. As with VRML, the image allows panning and high-quality zooming. Furthermore, hotspots that connect the QTVR and panoramas with other files can be added [44].

In contrast, X3D is an open-standard, XML-enabled 3D file format offering real-time communication of 3D data across all applications and network applications. Although X3D is sometimes considered an Application Programming Interface (API) or a file format for geometry interchange, its main characteristic is that it combines both geometry and runtime behavioral descriptions into a single file. Moreover, X3D is considered to be the next revision of the VRML97 ISO specification, incorporating the latest advances in commercial graphics hardware features, as well as improvements based on years of feedback from the VRML97 development community. Another 3D graphics format is COLLAborative Design Activity (COLLADA) [45], which defines an open-standard XML schema for exchanging digital assets among various graphics software applications that might otherwise store their assets in incompatible formats. One of the main advantages of COLLADA is that it includes more advanced physics functionality, such as collision detection and friction, which is not supported by Web3D. A recently published royalty-free specification for the efficient transmission and loading of 3D scenes and models is glTF (a derivative short form of Graphics Language Transmission Format or GL Transmission Format) [46]. The file format was conceived in 2012 by members of the COLLADA working group. It is intended to be a streamlined, interoperable format for the delivery of 3D assets that minimizes file size and runtime processing by apps; as such, its creators have described it as the "JPEG of 3D". The binary version of the format is called GLB, in which all assets are stored in a single file.

Moreover, powerful technologies have been used in museum environments, including OpenSceneGraph (OSG) [47] and a variety of 3D game engines [48,49]. OSG is an open-source, multi-platform, high-performance 3D graphics toolkit used by museums [35,50] to generate powerful VR applications, especially in terms of immersion and interactivity, since it combines text, video, audio, and 3D scenes into a single 3D environment. Game engines, on the other hand, are very powerful and provide superior visualization and physics support.
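Returning to glTF: to make the single-file nature of GLB concrete, the short sketch below uses the trimesh library (one of several possible choices) to open a binary glTF and enumerate the meshes it bundles. The file path is a placeholder, not an asset of the platform.

```python
# A GLB file bundles geometry, materials, and textures in one binary
# container; libraries such as trimesh can unpack it directly.
import trimesh

scene = trimesh.load("model.glb", force="scene")   # placeholder path
for name, mesh in scene.geometry.items():
    print(f"{name}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```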
This work builds on the aforementioned advancements in the state-of-the-art and provides the details of the following main technical contributions in the field of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in Web-based 3D/VR, and (d) immersive navigation and interaction in photorealistic renderings.

2.3. Content Provision

Apart from the technologies used to access a VM, recent research has exploited the concept of content personalization. In this context, content personalization based on user profile information has been proposed to support information provision by multi-user interactive museum exhibits [51]. Such systems detect the user currently interacting with the exhibit and adapt the provision of information based on user profile data such as age, expertise, interests, language, etc. In the same context, to provide access to the widest possible population, including people with disabilities, similar approaches have been employed affecting both the personalization of information and the interaction metaphors used for providing access to it [52]. More recently, research has extended the concept of a user profile to include further user-related information such as opinions, interaction behavior, user ratings, and feedback. Using this knowledge, recommendation systems have increased the accuracy of the content provided in social media [53]. Extensions of this approach in music recommendation systems employ techniques for the identification of personality traits, moods, and emotions to improve the quality of recommendations [54]. In this research work, an alternative approach is exploited, focusing on providing alternative forms of curated knowledge at the narrative level, considering the need for museums to provide curated information to all user groups.

3. Methodology

The methodology followed for the development of the Invisible Museum is rooted in the HCD Process [55], actively involving User Experience (UX) experts, domain experts (archaeologists and museum curators), as well as representative end-users. HCD is an approach to the software development process that focuses on the user, rather than the technology. The participation of end-users starts from the first stages of the design and continues until the end of the development. In addition, the method of iterative design and development is followed, which includes multiple evaluations by experts and end-users, at various stages of the application design and development, which may lead to refining the prototypes, the identified requirements, or the specifications of the context of use. The process is based on the use of techniques for communication, interaction, empathy, and stimulation of participants, thus gaining an understanding of their needs, desires, and experiences. Thus, HCD focuses the questions, ideas, and activities on the people whom the system addresses, rather than on the designers’ creative process or the characteristics of the technology [56]. In particular, four main phases are involved in the process (central circle in Figure 1): (a) understanding and specifying the context of use, (b) specifying user requirements, (c) producing design solutions and prototypes, and (d) evaluating the solutions.
Invisible Museum applies different methods in the context of each phase, aiming to ensure diversity and elicit the best possible results in terms of quality and validity. In particular:
  • For the specification of the context of use and user requirements, semi-structured interviews [57] with museum curators were conducted, as well as co-creation workshops [58] with end-users, archaeologists, and curators. Following the initial requirements, a use case modeling approach [59] was adopted, during which user types and roles (actors) were identified, and use cases were specified and described in detail. The results of the user requirements methods are presented in Section 4.
  • The mockup design phase followed, during which initial mockups were created, which were evaluated following the heuristic evaluation [60] by UX experts. Based on the feedback acquired, an extensive set of mockups for the Web and VR environment were designed, which were in turn evaluated by end-users, as well as by domain and UX experts following a group inspection approach [61]. The evaluation results were addressed by redesigning the mockups, which were used for the implementation of the Invisible Museum. Results from the mockup design and evaluation are reported in Section 5.
  • The implementation phase followed, during which the platform was developed, adopting the designed mockups and fully addressing functional requirements. The platform itself is presented in Section 6, and the entailed challenging technical aspects are discussed in Section 7.
  • Finally, the implemented system was evaluated by experts following the cognitive walkthrough approach [62], with the aim of identifying whether platform users will be able to determine the correct action to take at each interaction step and appropriately interpret system output. The results of this evaluation are described in detail in Section 8.
It is noted that all the activities involving human subjects, namely interviews, co-creation workshops, and evaluation activities, have received the approval of the Ethics Committee of the Foundation for Research and Technology–Hellas (approval date: 12 April 2019; reference number: 40/12-4-2019). Prior to their participation in each research activity, all participants carefully reviewed and signed the user consent forms that they were given. Informed consent forms were prepared in accordance with the General Data Protection Regulation of the European Union [63] and were approved by the Data Protection Officer of the Foundation for Research and Technology–Hellas.

4. User Requirements

4.1. Interviews with Curators

To understand the context of use and identify the habits, procedures, and preferences of museum archaeologists and curators in the preparation of museum exhibitions and exhibits, semi-structured interviews with five (5) representative participants were carried out, using a questionnaire created for this purpose. The questionnaire was initially designed based on a literature review and research of best practices in the field. Then, a pilot interview was conducted, whose conclusions informed the final questionnaire that was used for the interviews.
In the context of the interview, experts were invited to envision a system for the creation of VMs and to point out their needs and expectations through responding to various questions. A top–down approach was followed for ordering the questions, moving from more general to more specific issues. The questionnaire involved both closed-ended and open-ended questions, to prioritize preferences, but also to allow interviewees to develop spontaneous responses [64]. The questionnaire was structured in three main sections:
  • Background and demographic information of the participant.
  • Digital museum exhibitions, asking participants to describe how the content of a digital museum exhibition should be organized, how digital exhibition rooms should be set up, and how a digital exhibit should be presented.
  • Digital tours, exploring how tours to the digital museum should be organized and delivered to end-users.
An unstructured discussion followed each interview, allowing participants to further elaborate on topics of their preference (for example by expressing concerns or additional requirements), as well as on topics that were not fully clear or which raised the interest of the interviewer. The analysis of results, as presented below, identified preferences for the information to be included concerning digital exhibits and digital exhibitions, as well as regarding the creation of digital rooms and pertinent tours.
The information that all participants identified as necessary for the representation of a digital exhibit (Figure 2) was its title, material, dimensions of the respective physical object, chronological period, creator/artist/holder (depending on the exhibit type), multimedia content (e.g., images, videos, audio), as well as a narration. It is worth mentioning that 80% of the participants identified as important information: the thematic category of the exhibit, its type, maintenance status, donor/owner, as well as a representative image of the exhibit to be used as a cover of its digital presentation. Other useful information suggested included a description of the exhibit (60%) and the location of the exhibit in the physical museum (if any) (20%).
As depicted in Figure 3a, information to include to appropriately represent a digital exhibition was unanimously agreed by all participants (100%) to be its title and an introductory text, while a considerable majority (80%) responded that a digital exhibition is also characterized by one or more thematic categories and time periods. Other suggested information regarding a digital exhibition included curators’ names (60%), multimedia (20%), and the image of a representative exhibit (20%). As illustrated in Figure 3b, when it comes to organizing digital exhibitions, participants identified chronological and thematic as the two most appealing options. However, additional options suggested were as follows: according to the user (e.g., age group, interests), according to a narrative that the curator would like to present and which may change from time to time, or according to the exhibits’ location in the physical museum.
Concerning the creation of digital rooms, all participants identified that the following information should always be included: exhibition title, descriptive text, thematic sections, and a multimedia narrative of the exhibition. Additional information suggested for describing a digital room included the chronological periods of the contained exhibits (80%), a room identifier (60%), as well as a musical background to be reproduced for visitors of the digital room (20%). Furthermore, participants identified as useful functionality the inclusion of ready-to-use templates of digital rooms, as well as the option of reusing features from existing digital rooms. Finally, the option of creating digital rooms from scratch was also considered a mandatory functionality.
Finally, for digital tours, participants identified that the creation of such tours should support virtual routes per thematic category (100%), per chronological period (100%), selected highlights by the curator(s) of a museum (80%), tours according to the available time of the visitor (80%), tours according to the age of the visitor (80%), free tour to the entire digital space (20%), and tours according to the personal interests, cognitive background, and cultural background of the visitor (20%).
Further open-ended questions explored how curators would prefer to collaborate to create digital content. Participants’ responses identified the need for supporting different access levels for the various user types and stages of creating digital content. In particular, it turned out that one person should be identified as the main content creator for a digital exhibit and a digital exhibition, having full ownership; however, they could be supported by co-creators who should only have edit rights. Users of the platform should not have access to the pertinent digital contents unless these have been marked by the owner as ready to publish.

4.2. Co-Creation Workshops

Co-creation workshops are a participatory design method that involves end-users of the system in the design process [65]. In such workshops, participants do not have the role of the service user but rather that of its designer. Thus, each participant, as an “expert of their expertise” plays an important role in the development of knowledge, in the production of ideas, and in the creation of concepts [66].
Three co-creation workshops were organized, involving a total of 20 participants, trying to achieve a balanced distribution between ages and genders (see Table 1). All participants were familiar with the internet and smart mobile devices, while several were familiar with virtual reality applications. In addition, six of the participants were domain experts (historians, archaeologists, and curators). The workshops were facilitated by a UX expert experienced in user research methodologies, as well as participatory design and evaluation with end-users.
Each workshop was structured into three main sections:
  • Introduction, during which the participants were informed about the objectives of the workshop.
  • Goal Setting, in the context of which the main objectives that the Invisible Museum platform should address were identified by the participants.
  • Invisible Museum Services, during which a detailed discussion on the specific services and functionalities that should be supported was conducted.
During the Goal Setting activity, participants were asked to present through short stories the main objectives of the Invisible Museum platform, making sure to discuss whom it addresses and what needs it serves. The activity was individual and each participant was given a printed form to keep notes. At the end of the activity, each participant was asked to describe their scenarios, while the facilitator kept notes in the form of keywords on a board so that they would be visible to the entire group. Overall, throughout all workshops, a total of 52 mission statements were collected, which were then clustered into groups by assigning relevant or similar statements into one single group. After their classification, the resulting objectives of the Invisible Museum are as follows:
  • To convey to users the feeling of visiting a real museum.
  • To provide easy and fast access to exhibitions and exhibits, with concise yet sufficient information.
  • To constitute a tool that can be used to make culture and history universally accessible to a wide audience, in a contemporary and enjoyable approach.
  • To support storytelling through the combination of representations of physical objects enhanced with digital information and assets.
  • To keep users engaged by promoting content personalization according to their interests.
  • To make museums accessible to new audiences (e.g., individuals from other countries, individuals with disabilities, etc.).
  • To facilitate museums in making available to the public items that may be currently unavailable even in the physical museum (e.g., exhibits under maintenance, small items potentially kept in museum warehouses).
  • To support a more permanent presentation and access to temporary exhibitions hosted by museums from time to time.
  • To promote local culture, as well as the work of less known artists, and help them with further promoting it and disseminating to a wider audience.
  • To support educational efforts toward making children and young people more actively involved in cultural undertakings by making them creators themselves.
  • To constitute a tool for multidisciplinary collaboration, between researchers, teachers, curators, and historians.
For the Invisible Museum Services activity, participants were asked to think about and write down specific functionalities that they would want the system to support for (a) museums, cultural institutions, or individuals who will use the system as content providers, that is, by creating digital content, namely exhibits, exhibitions, virtual rooms, and tours; and (b) end-users who will act as content consumers by viewing the provided content. Participants were asked to record their ideas about functionality on post-it notes. Each participant presented their ideas to the group, while the facilitator collected the post-it notes, attached them to a whiteboard, and clustered them into groups according to how relevant they were to each other. The suggested functionalities from each workshop were recorded, and a final list of functionalities that should be supported by the system was created, making sure to remove duplicates and cluster similar suggestions under one entry. This final list, featuring 37 entries of desired functionality, was used to drive the use case analysis toward recording the functional and non-functional requirements of the system, which is summarized in the following subsection.

4.3. Use Case Analysis: User Groups and Consolidated Requirements

Taking into consideration the motivating scenarios, as well as the outcomes of the aforementioned interviews with curators and co-creation workshops, a use case analysis approach was followed to fully describe the functionality of the platform and the target users. A use case model comprises use cases, which are sequences of actions required of the system; actors, which are users or other systems that exchange information with the system being analyzed; and relationships, which link two elements showing how they interact [59].
The main actors identified for the Invisible Museum are (see also Figure 4):
  • Creators, who create interactive and immersive virtual 3D/VR exhibitions in a collaborative way.
  • Co-creators, who contribute to the creation of digital content.
  • Visitors, who browse the content of the platform and can be divided into two types: (a) Web visitors who navigate in the available digital exhibitions through a Web browser, and (b) VR visitors who navigate in the available virtual museums using a VR headset.
Furthermore, a total of 39 use cases were developed to describe the functionality supported by the system for all the aforementioned user roles. Use cases were clustered into categories, as summarized in Table 2.
For each category, a use case model was developed, illustrating how the use cases and actors are related (see Figure 5 for an example). This detailed use case analysis was used to drive the design of mockups as well as the development of the Invisible Museum.

4.4. Motivating Scenarios

To better illustrate the overall user experience for each one of the identified user roles, motivating scenarios were developed, employing the results of the use case analysis. Two of these scenarios are provided in the following subsections, highlighting the main functions of the Invisible Museum for visitors, creators, and co-creators.

4.4.1. Exploring Virtual Exhibition of Historical Museum of Crete

Ismini is a 43-year-old craftswoman who likes the traditional art of pottery. Very recently, she heard about the Invisible Museum platform, which hosts many virtual exhibitions with ceramics. Therefore, she decides to sign up and discover all exhibitions with ceramics provided by individuals or authenticated providers, such as museums and galleries. She purposely applies some filters to see only exhibitions with ceramics officially provided by museums. She finds several very interesting exhibitions of pottery provided by the Historical Museum of Crete. Although she has had the chance to visit the museum multiple times in the past, she notices that one of the available virtual exhibitions consists of artifacts not displayed in the actual museum. She wonders why and realizes that this may be due to the fact that the museum has limited available space, which cannot simultaneously host the numerous collections of artifacts in its possession. Taking that into consideration, she is extremely happy to have the chance to browse the unique collection of ceramics of the Historical Museum of Crete through a virtual tour.
Firstly, Ismini chooses to navigate through the virtual pottery collection using her web browser. Among several available tours, she prefers to freely navigate through the whole collection of exhibits. The platform provides instructions on how to navigate through the virtual exhibition to gain the most out of the provided information. She can focus on the displayed artifacts, view related information and narratives, and interact with 3D models where available. In addition, the Web-based 3D/VR interface provides supplementary information, such as the actual time she has spent during her tour and the number of exhibits visited.
Since Ismini is equipped with a state-of-the-art VR headset, she feels eager to experience an immersive virtual tour through her Oculus Quest VR Headset and interact with the virtual exhibits using the accompanying controllers. While being at home, she travels in the virtual world of pottery that enlightens her on the treasures of tangible and intangible cultural heritage.

4.4.2. A Visual Artist Creates a Virtual Exhibition

Manolis is an acknowledged artist who operates a studio aiming to teach art through interactive workshops, allowing his students to freely explore the different perspectives and interpretations of art. However, the restrictive measures against the pandemic of COVID-19 led him to spend creative time in his studio on his own, creating a large collection of paintings and sculptures, using various methods and materials. Having heard of the Invisible Museum platform, he considers taking the occasion to create a virtual exhibition of his artworks to share with his students and to stimulate discussions as part of a remotely held workshop that he plans to organize.
After signing up, he starts adding new exhibits to the platform, accompanied by descriptive information regarding the methods and materials he used, as well as his interpretation of each work. At this point, he finds out that, apart from the ability to upload high-definition images and videos, he also has the chance to upload 3D models of his sculptures. Thus, using a third-party mobile application for 3D scanning, he quickly and easily converts his sculptures into detailed 3D models. He then uploads the corresponding 3D model files to the platform and previews the generated results. Next, he needs to embed them in a virtual exhibition space. To do so, he creates a new virtual exhibition room from scratch and selects the option to render the scene in 3D. He then activates the Exhibition Designer tool, enabling him to insert virtual exhibits in the room, as well as adjust the surroundings accordingly to highlight his artworks. To achieve this, he adds the showcases that will enclose the exhibits, the appropriate lighting to emphasize their details, as well as decorative elements to create a more immersive experience reminiscent of his atelier.
Once Manolis has formed his virtual exhibition, he wants to give all his students access to the project. Thus, he enters their email addresses so that the system sends an invitation authorizing them to access the project. Lastly, Manolis, who is eager to exploit the capabilities offered by the platform, requests his students to add, as co-creators, the works that they have created themselves. Manolis, being the owner of the project, will be notified every time a new action takes place and will be able to approve or edit their entries before updating the exhibition in the platform. Eventually, after finalizing the virtual exhibition, they all decide to publish the collectively generated virtual exhibition to the general public through the Invisible Museum platform, allowing visitors to view all the artworks and gain valuable insights through a virtual reality tour.

5. Iterative Design and Evaluation

The iterative design of the Invisible Museum involved two main evaluation methods: (a) heuristic evaluation of the designed mockups, applied iteratively from the first set of mockups that were designed until the final extensive set of mockups, and (b) group inspection of the final implemented mockups. This section presents these preliminary evaluation rounds in more detail.

5.1. Heuristic Evaluation

Heuristic evaluation is a usability inspection method, during which evaluators go through a user interface (UI), examining its compliance with usability guidelines, which are known as heuristics [60]. Heuristic evaluation is conducted by usability experts and is very beneficial in the early stages of the design of a UI, leading to interfaces free from major usability problems that can then be tested by users. The combination of heuristic evaluation with user testing has the potential to identify most of the usability problems in an interface [60].
For the evaluation of the mockups developed throughout the design phase of the project, three usability experts carried out a heuristic evaluation, pointing out usability issues that should be addressed. The process was iterative, leading to consecutive refinements of the designed mockups. Overall, the design process resulted in 111 final mockups, after producing a total of 284 mockup alternatives during the various design and evaluation iterations. Figure 6 depicts two versions of the screen for creating a digital exhibit, which went through six design iterations in total. Namely, the first (Figure 6a) and the final mockup (Figure 6b) illustrate how a creator can add multimedia content and create a narration for the exhibit.
Overall, throughout the iterative evaluation process, evaluators focused on suggesting improvements to the UIs regarding:
  • Addressing the user requirements and goals, as these had been defined in the previous phases of the methodology that was followed
  • Achieving an aesthetic and minimalistic design
  • Minimizing the potential for user error
  • Ensuring that users would be able to easily understand the current status of the system and the displayed information
  • Supporting content creators in creating and managing the content of digital exhibits and exhibitions in a flexible manner, adopting practices that they follow in the real world
  • Following best practices and guidelines for the design of UIs

5.2. Group Inspection

The user-based evaluation was performed following the method of group usability inspection [61]. In particular, three evaluation sessions were conducted, during which mockups of the system were presented to the group of participating users, who evaluated them through group discussions. Aiming to assess whether the requirements defined during previous phases (with regard to the aims and objectives of the system, as well as the functionalities that it should provide) were met by the final design, participants in the group inspection evaluation were the same as those in the co-creation workshops. Each evaluation session was facilitated by two usability experts, who were responsible for coordinating discussions and recording, in the form of handwritten notes, the problems identified by users. At a later stage, the facilitators went through the entire list of problems identified throughout the three sessions to eliminate duplicates and prioritize findings.
In each session, the evaluation was performed using specific usage scenarios, according to which participants were guided through the functionality of the system via high-fidelity interactive mockups, following a logical flow of steps. The evaluation process was organized into three steps for each scenario:
  • Introduction: The facilitator presented the context and objectives of the scenario. Then, all the mockups/steps of the scenario were presented, asking participants to simply observe, without however taking any further action.
  • Commenting: The facilitator went through the mockups one by one, this time asking participants to keep individual notes on any usability problems and possible design improvements.
  • Discussion: A final round of going through the mockups followed, this time asking each participant to share their observations and comments with the group for each mockup presented. A short discussion was held for each identified problem, exploring potential solutions that would improve the system.
In total, 5 scenarios were examined, involving 14 focal mockups representing the most important system screens and functions. More specifically, the following mockups were evaluated:
  • Home page for three different user types: visitor, registered user, and content creator
  • Digital exhibit screens for visitors and registered users
  • Digital exhibition screens for visitors and registered users
  • Screens for creating a digital exhibit addressing content creators
  • Screens for creating a digital exhibition addressing content creators
All users were satisfied with regard to the potential of the system to address the aims and objectives that they had identified in the requirements elicitation phase, as well as with the integration of desired functionality. In total, 62 unique usability evaluation problems were identified, pertaining to four main pillars:
  • Participants’ preferences concerning the conveyed aesthetics.
  • Disambiguation of the terms that were used in the UI.
  • Suggestions about the content that the system should promote through the home page.
  • Suggestions for functions that would improve user efficiency (e.g., when creating a digital exhibition).

6. The Invisible Museum Platform

The Invisible Museum constitutes an integrated solution providing the potential to extend and enrich the overall museum experience, particularly in the areas of personal engagement, participation, and community involvement, as well as the co-creation of cultural values (Figure 7, online video: https://youtu.be/5nJ5Cewqngc). The presented platform is a complete framework and suite of tools designed to aid museum curators and individual users with no previous expertise, in delivering customized museum experiences. More specifically, it provides curators with a set of tools to (a) create collections of digital exhibits and narratives, (b) design custom-made virtual exhibitions to display their collections of artifacts, (c) promote participatory content creation, (d) support personalized experiences for visitors, and (e) visualize digital museums as immersive VR environments, allowing visitors to navigate and interact with the content.

6.1. Creating Exhibits and Narratives

The Invisible Museum aims to attract a wide range of audiences, from museum and gallery curators to any individual inspired to display their collection of artifacts, and other objects of artistic, cultural, historical, or scientific importance to the public through immersive VR experiences. In this context, to get full access to the content of the platform, users can register as individuals or professional organizations (Figure 8a). In the latter case, they have to submit appropriate official documents, which are assessed by system administrators before granting professional accounts. As a result, all the exhibits and exhibitions uploaded by professional organizations receive a verification badge to assist the platform visitors in identifying the pertinent content.
To create an exhibit, a curator has to upload its digital representation, add useful metadata, and create a narrative about it. More specifically, the required fields include the title, a short and a long description, associated categories, the dimensions of the corresponding physical artifact, the type of the virtual exhibit (e.g., text, 3D model, image, sound, or video), and so on. In addition to the predefined information categories, users can specify custom attributes by defining key-value pairs to describe any information they wish about the exhibit. Users are also able to create a narrative, comprising for example a sequence of historical events, to enrich the original document with contextual information. Curators can also define the visibility of the exhibit as private or public; in the latter case, it becomes readily visible to all platform users.
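The fields listed above suggest a record along the following lines. This dataclass is a hypothetical illustration for clarity, not the platform's published data model.

```python
# Hypothetical sketch of an exhibit record with the fields described above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Exhibit:
    title: str
    short_description: str
    long_description: str
    categories: List[str]
    dimensions: str                    # of the corresponding physical artifact
    media_type: str                    # e.g. "text", "3d-model", "image", "video"
    narrative: List[str] = field(default_factory=list)       # e.g. historical events
    custom_attributes: Dict[str, str] = field(default_factory=dict)  # key-value pairs
    visibility: str = "private"        # "private" or "public"

amphora = Exhibit(
    title="Minoan amphora",
    short_description="Late Minoan storage vessel",
    long_description="A storage vessel excavated at Knossos...",
    categories=["Culture", "History"],
    dimensions="45 x 30 x 30 cm",
    media_type="3d-model",
    custom_attributes={"clay source": "Knossos region"},
)
```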
To facilitate the exhibit creation process, the platform offers intelligent mechanisms, such as automated text completion and semantic knowledge-based recommendations about existing entries in the platform's data storage. In addition, the platform incorporates all the necessary mechanisms to facilitate content translation based on popular third-party web services (i.e., Google Cloud Translation Services [67]), while at the same time allowing users to intervene by editing or adding translations manually. Moreover, curators who are native speakers are invited to contribute to content translation, acquiring in return greater visibility and better chances of appearing in the top search results.
Through the user profile screen (Figure 8b), curators have access to their account settings and profile information along with the profile analytics dashboard that includes graphical representations referring to the exhibitions’ popularity (e.g., number of created exhibitions, views, shares, etc.).

6.2. Designing Dynamic Virtual Exhibitions

In addition to the aforementioned features, the Invisible Museum incorporates a flexible set of tools that allows individuals, from non-professionals to professionals, to unleash their creativity and create tailor-made virtual exhibitions without programming or 3D design skills. For that purpose, the platform provides the Exhibition Designer, an integrated tool that enables users to create new virtual spaces in an intuitive and user-friendly way.
The Exhibition Designer is a visualization tool for 3D virtual spaces that enables curators to design virtual exhibitions based on existing ready-to-use templates, on exhibition rooms that they have created in the past, or even from scratch. Users first design and create the exhibition rooms by adjusting the walls in a two-dimensional floorplan. Then, the system generates a 3D representation of the exhibition, allowing users to add doors, windows, and decorative elements, such as furniture, floor/wall textures, and carpets. To add an exhibit to the 3D virtual space, users select the exhibit of their preference in one of the supported formats (i.e., 2D images, 3D models, and videos) and position it in the exhibition space through the ray casting technique, as depicted in Figure 9. The available exhibits to select from include one's own exhibits and exhibits that are publicly shared by other users. Moreover, users may configure the exhibits' position, rotation, and scaling factors in all three dimensions. It is possible to select among different showcases to enclose the artifacts, such as glass showcases, stands, frames, etc. In addition, users can create the lighting scheme using different types of lights (i.e., ceiling/floor/wall lighting) and customize characteristics such as the color, intensity, distance, and angle of each light. Furthermore, the Exhibition Designer provides free navigation across the exhibition rooms using either the keyboard or the mouse, and it also enables creators to preview the VR environment of the exhibition as a simulation of the actual VR tour that will be generated after finalizing the virtual exhibition.
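The ray casting placement mentioned above boils down to intersecting a pick ray (from the camera through the cursor) with the exhibition geometry. The following engine-agnostic sketch covers the simplest case, dropping an exhibit onto the floor plane:

```python
# Sketch of the geometry behind ray-cast placement: intersect a pick ray
# (camera origin, direction through the cursor) with the floor plane y = 0.
import numpy as np

def ray_floor_intersection(origin, direction, floor_y=0.0):
    """Return the point where the ray meets the plane y = floor_y,
    or None if the ray is parallel to it or points away from it."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[1]) < 1e-9:
        return None                    # ray is parallel to the floor
    t = (floor_y - origin[1]) / direction[1]
    if t < 0:
        return None                    # intersection lies behind the camera
    return origin + t * direction

# Camera at eye height, looking forward and slightly downward:
print(ray_floor_intersection([0.0, 1.7, 0.0], [0.0, -0.5, -1.0]))
```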

6.3. Co-Creation of Virtual Exhibitions

The Invisible Museum facilitates the creation of virtual exhibitions in a collaborative fashion. In detail, the curator who creates a new exhibition, namely the owner, invites potential co-creators in the corresponding field of the “Create Exhibition” form, as long as they own an account on the platform. If they do not own an account yet, they receive an invitation link via email to create one.
Co-creators automatically receive a notification, either in the form of a pop-up message in the web browser or an email, to confirm their participation in the development of an exhibition (Figure 10a). Co-creators have editing permissions in exhibitions; however, their actions do not become available to the public unless the owner approves them. The owner is responsible for reviewing all the entries, suggestions, and modifications made, and eventually decides whether to approve, edit, or discard them, as illustrated in Figure 10b. This feature acts as a safety net for creating consistent, high-quality content on the platform. More specifically, a notification mechanism informs the owner when edits or adjustments have been submitted by co-creators. Lastly, the platform allows users to exchange private messages, thus facilitating the collaboration between two or more parties (i.e., museums, organizations, etc.) through integrated communication channels.
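The review loop described above behaves like a small state machine over co-creator edits. The following hypothetical sketch captures that logic; the platform's actual implementation is not described at this level of detail.

```python
# Hypothetical sketch of the owner-review loop: co-creator edits remain
# pending until the exhibition owner approves, edits, or discards them.
from enum import Enum

class EditStatus(Enum):
    PENDING = "pending"        # not yet visible to the public
    APPROVED = "approved"      # published with the exhibition
    DISCARDED = "discarded"

class ExhibitionEdit:
    def __init__(self, author: str, change: str):
        self.author = author
        self.change = change
        self.status = EditStatus.PENDING

    def review(self, reviewer: str, owner: str, approve: bool) -> None:
        if reviewer != owner:
            raise PermissionError("only the exhibition owner can review edits")
        self.status = EditStatus.APPROVED if approve else EditStatus.DISCARDED

edit = ExhibitionEdit("co-creator@example.org", "reworded exhibit label")
edit.review(reviewer="owner@example.org", owner="owner@example.org", approve=True)
```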

6.4. Personalized Suggestions and Exhibition Tours

One of the features of the platform is content personalization, which aims to attract and engage users by delivering targeted information according to their interests. Employing various recommendation algorithms based on collaborative filtering models, the platform provides its members with personalized content, reducing the time and frustration required to find an exhibition of interest to visit.
As depicted in Figure 11a, users receive personalized suggestions based on their activity on the platform and their preferences (i.e., content that they mark as favorite). To avoid the cold-start problem of such algorithms, newly registered members are asked to select the thematic areas of their interest (i.e., Arts, Culture, etc.), as depicted in Figure 11b. The content classification emerged after several interview iterations with domain experts, resulting in a small set of dominant domains, namely Arts, Culture, Sports, Science, History, and Technology. These are further subdivided into categories to allow users to better identify their interests; for instance, the domain of Culture includes content related to Costumes, Diet, Ethics & Customs, Law, Writing, Folklore, Religion, Sociology, and Education.
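As a rough, self-contained illustration of this approach (user-based collaborative filtering with a cold-start fallback to the declared thematic interests), consider the sketch below; the platform's actual recommendation engine is not specified at this level of detail.

```python
# Sketch: user-based collaborative filtering with a cold-start fallback
# to declared thematic interests. Illustrative only.
import numpy as np

def recommend(ratings, user, declared_interests, item_topics, k=3):
    """ratings: users x items matrix (0 = unseen); returns top-k item indices."""
    r_u = ratings[user]
    if not r_u.any():                              # cold start: no activity yet
        scores = np.array([len(declared_interests & t) for t in item_topics],
                          dtype=float)
    else:
        norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(r_u) + 1e-9
        sims = ratings @ r_u / norms               # cosine similarity to each user
        sims[user] = 0.0                           # ignore self-similarity
        scores = sims @ ratings                    # similarity-weighted ratings
        scores[r_u > 0] = -np.inf                  # do not re-recommend seen items
    return np.argsort(scores)[::-1][:k]

ratings = np.array([[5, 0, 3, 0],
                    [4, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=float)    # user 2 just registered
topics = [{"Arts"}, {"History"}, {"Arts", "Culture"}, {"Science"}]
print(recommend(ratings, user=2, declared_interests={"Arts"}, item_topics=topics))
```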
Users can select an exhibition not only to view relevant information but also to experience a virtual museum tour (Figure 12). A virtual tour of an exhibition can consist of a creator-defined path displaying a subset of the overall exhibits. Multiple tours can be laid out to highlight and focus on different aspects of a specific exhibition. Indicative examples of virtual tours include (a) a free tour of the entire digital space displaying all the available exhibits, (b) the exhibition’s highlights selected by the curator(s) of a museum, (c) tours according to the age of the visitor, and (d) tours per thematic category, per chronological period, and so on. The current version of the platform automatically creates at least one free virtual tour for each exhibition. To create a new virtual tour, the platform provides the option “Create tour”, where curators enter information such as language, title, a short description, the starting point of the tour, and the sequence of the exhibits, and may upload audio files with music to accompany users during the tour.
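For illustration, a curator-defined tour could be represented by a record along the following lines; the field names are hypothetical and do not reflect the platform’s actual schema:

```python
# Hypothetical shape of a curator-defined virtual tour record.
tour = {
    "language": "en",
    "title": "Highlights of the Traditional Cretan House",
    "description": "A short walk through the curator's selected exhibits.",
    "starting_point": {"room": "room-1", "position": [0.0, 1.6, 2.0]},
    "exhibit_sequence": ["exhibit-12", "exhibit-7", "exhibit-3"],
    "audio_files": ["ambience.mp3"],
}
```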
Moreover, the platform enables users to search for a specific entry (i.e., exhibit, exhibition, creator, etc.) through a dedicated search field in the header menu of the web interface. Results are automatically provided through a mini popup window that displays entries containing the given keywords, grouped by exhibits and exhibitions. Users can view all the returned results categorized, filtered, and sorted according to their preferences (e.g., date of creation, views, etc.).

6.5. Visualization in VR, Navigation and Interaction

In the context of VM applications, VR technology is essential for establishing an immersive virtual environment. Visitors of the Invisible Museum can navigate in 3D virtual exhibitions and interact with the exhibits using different means, namely (a) a web browser, and (b) any VR headset that consists of a head-mounted display, stereo sound, and tracking sensors (e.g., Oculus Quest).
Navigation in VR is readily available upon the selection of a virtual tour. By initiating a virtual tour, the Exhibition Viewer, a Web-based 3D/VR application, provides the 3D construction of the exhibition area, giving prominence to the virtual exhibits. As illustrated in Figure 13a, the current exhibit of the virtual tour is indicated by a green-colored label, while the virtual exhibits that belong to the selected tour are highlighted in yellow and are numbered according to their order. Exhibits that do not belong to a specific tour are indicated with an informative icon. Visitors can interact with any exhibit they prefer and retrieve related information through the corresponding exhibit narrative, as provided by the exhibition creator (Figure 13b).
The Exhibition Viewer detects whether a VR headset is connected to the computer and allows users to navigate through the virtual tour by activating their VR headset. If no VR headset is connected, the application loads the virtual exhibition in the web browser. If a VR headset is connected but the visitor wishes to proceed through a web browser, the Exhibition Viewer offers the option to enable the VR tour later on. In addition, the Exhibition Viewer provides a menu with additional information about the progress of the virtual tour, i.e., the total number of exhibits, the elapsed time of the tour, the option to switch the current tour to the VR headset, settings, and lastly the full-screen view option.
The way visitors can interact during a virtual tour depends on the selected navigation method (i.e., web browser or VR headset). In the case of web browser navigation, interaction is achieved by clicking on each exhibit. In the case of a VR headset, the interaction remains similar, although more immersive, using the headset’s tracking sensors. Tracking the motion of the user’s hands in the physical world translates their movements into the virtual world, allowing interaction with virtual exhibits and navigation in the virtual environment. In some VR headsets, tracking the user’s hand motion can be achieved even without a controller. Additionally, the visitor can interact using an illustrative dot, which operates similarly to the mouse pointer.

7. System Architecture, Structure, and Implementation

7.1. Implementation Overview

The Invisible Museum was implemented following the Representational State Transfer (REST) architectural style, exposing the necessary endpoints that any client application (web or mobile) can consume to exchange information with the system and interface with its resources. The back-end API was developed using the NestJS framework, a progressive Node.js back-end framework that combines robust design patterns with best practices, offering the fundamental infrastructure for building and deploying highly scalable and performant server-side applications. User authentication and authorization services were built upon the OAuth 2.0 industry-standard authorization protocol with the use of JSON Web Tokens. At the deployment level, the Web Services are packaged as containerized Docker applications.
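As a rough illustration of how a client consumes such endpoints, the following Python sketch uses placeholder routes, host, and credentials; the platform’s actual endpoint paths and OAuth 2.0 grant flow are not reproduced here:

```python
import requests

BASE_URL = "https://api.example.org"  # placeholder host, not the platform's

# Hypothetical login endpoint returning a JSON Web Token.
token = requests.post(
    f"{BASE_URL}/auth/login",
    json={"username": "curator", "password": "secret"},
).json()["access_token"]

# Any client (web or mobile) then consumes the REST resources
# by sending the token as an OAuth 2.0 bearer credential.
exhibitions = requests.get(
    f"{BASE_URL}/exhibitions",
    headers={"Authorization": f"Bearer {token}"},
).json()
```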
The Invisible Museum is based on MongoDB [68], which allows developers to work with data naturally and intuitively, mainly due to its JSON data format. This approach ensures fast application development cycles, lowering the time needed to develop new features or address potential issues, thus making the platform itself very flexible to changes and upgrades. The main resources stored in the core database are the Users and their Exhibitions. Apart from these, resources such as Languages, Locations, and Files are also stored in the database and are referenced by the main models. The back-end services use libraries such as Mongoose, an Object Data Modeling (ODM) library for MongoDB, together with NestJS, to apply schemas to these entities, providing software-constrained resource specifications that prevent data inconsistency or, in some cases, even data loss. Resources are modeled after these specifications, and their models hold the attributes that describe them. Some of the primary attributes are shown in Figure 14.
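The following sketch illustrates the idea of schema-constrained resources at the database level using MongoDB’s $jsonSchema validation; the field names are illustrative, and the platform’s actual schemas (applied through Mongoose) are not reproduced here:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["invisible_museum"]

# Server-side schema constraint, analogous to what Mongoose enforces
# at the application level; field names are illustrative.
db.create_collection("exhibitions", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["title", "owner"],
        "properties": {
            "title": {"bsonType": "string"},
            "owner": {"bsonType": "objectId"},   # reference to a User document
            "coCreators": {"bsonType": "array"}, # references to User documents
        },
    }
})
```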

7.2. Analyzing Narratives Using Natural Language Processing

Analyzing free-text data for each exhibit and compiling them in a database, along with the information extracted from other entries, contributes to the creation of a comprehensive knowledge repository. In addition, effectively expressing the semantics of information and resources allows data to be shared and reused across application, enterprise, and community boundaries. In this context, the Invisible Museum initially focuses on (a) analyzing the textual input narratives of exhibits and (b) structuring the extracted information and insights to represent them in a specific logical way. To this end, a Natural Language Processor (NLP) was developed partially using spaCy [69], while the extracted information is structured according to the CIDOC-CRM.

7.2.1. Architecture and Process

The NLP consists primarily of two parts: (a) the statistical model and (b) the rule-based algorithms. One of spaCy’s main features is the inclusion of statistical models used for text analysis. These statistical models identify named entities in the text, i.e., groups of words that have a specific meaning as a collective, for example, “World War II”. Since the default models of the library proved to be insufficient for this project, lacking both accuracy and terms that could be recognized, a new model had to be developed. Using the training techniques provided by spaCy, a series of new models is being developed to identify key phrases, terms, and other entities that fit the content of the platform more accurately. The set of sample texts on which the models are trained is composed of pieces and excerpts written both by individuals and professionals, to cover as much of the expected user spectrum as possible.
The second part of the NLP is a ruleset on which the extraction of relationships from the data is based. The statistical model by itself requires a large sample to be trained to recognize all the required entities in the text accurately, and it does not have the infrastructure required to draw the relationships between entities. Thus, the rules serve to complement the model and connect the separate pieces of information annotated by it. The ruleset is designed according to patterns commonly observed in similar written pieces and takes into account multiple factors, such as the syntactic position of a word/entity, its lemmatized form, and the part of speech attributed to it. After a piece of information has been successfully codified, it is added to the platform’s database, from where it can be retrieved.

7.2.2. Training a Statistical Model

First, a statistical model needs to be trained to identify all the different entities in a text, such as (a) PERSON: a person, real or fictional (e.g., Winston Churchill), (b) MONUMENT: a physical monument, either constructed or natural (e.g., the Parthenon), (c) DATE: a date or time period of any format (e.g., 1 January 2000, 4th century B.C.), (d) NORP: a nationality, a religious or a political group (e.g., Japanese), (e) EVENT: an “instantaneous” change of state (e.g., the birth of Cleopatra, the Battle of Stalingrad, World War II), (f) SOE: “Start of existence”, meaning a verb or phrase that signifies the birth, construction, or production of a person or object (e.g., the phrase “was born” in the sentence “Alexander was born in 1986.”), and (g) EOE: “End of existence”, meaning a verb or phrase that signifies the death, destruction, or dismantling of a person or object (e.g., the verb “died” in the sentence “Jonathan died during the Second World War”).
Having established the list of entity types for the Invisible Museum, the training text sample was prepared to train the statistical model. To that end, the text was annotated manually and then inserted into the training program. A sample annotated text is presented in Figure 15.
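A minimal, spaCy 2.x-style sketch of such a training loop is shown below; the single annotated sample and the output path are illustrative and are not part of the platform’s actual training corpus:

```python
import random
import spacy

# One illustrative training sample: text plus character-offset annotations.
TRAIN_DATA = [
    ("The Parthenon was built in the 5th century B.C.",
     {"entities": [(4, 13, "MONUMENT"), (31, 47, "DATE")]}),
]

nlp = spacy.blank("en")
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
for label in ("PERSON", "MONUMENT", "DATE", "NORP", "EVENT", "SOE", "EOE"):
    ner.add_label(label)  # the project's custom entity types

optimizer = nlp.begin_training()
for _ in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)

nlp.to_disk("museum_ner_model")  # export the trained model for later use
```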
Training a model requires as many texts with different expressions and content as possible, to cover the diverse and extreme cases that may be encountered in the platform. Afterwards, the model can be exported and is ready for use. The ruleset part of the NLP remains constant regardless of the statistical model. For instance, when a curator adds a narrative to an exhibit (Figure 16), the textual input from all steps is automatically combined and passed to the NLP for analysis.
First, the statistical model reviews the input and identifies the various entities contained therein. For this particular case, the results are presented in Figure 17.
In this particular instance, the model performed perfectly, identifying all the entities in the text. It is worth mentioning that the phrase “The Greek” normally signifies a nationality and would be identified as NORP; in this particular context, however, it refers to a nickname given to Domenikos Theotokopoulos and is therefore correctly identified as PERSON. After the entities have been recognized and highlighted in the text, the set of rules draws the relationships between them, producing the information in the form in which it will be stored. In this example, the rules cover the birth and death of Domenikos Theotokopoulos. Specifically, the relevant rule is the one covering text of the format PERSON (SOE DATE–EOE DATE): the NLP is instructed that the most probable date of birth is the first DATE entity, and the most probable date of death is the second one. It is worth noting that the SOE and EOE entities are optional, since the format PERSON (DATE–DATE) is also common among relevant texts and is thus covered by the rules.
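The following sketch shows how such a rule could be expressed with spaCy’s Matcher (spaCy 2.x-style API), assuming the trained pipeline `nlp` from the previous section; the pattern is an approximation of the rule described above, not the platform’s actual ruleset:

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
lifespan = [
    {"ENT_TYPE": "PERSON", "OP": "+"},
    {"ORTH": "("},
    {"ENT_TYPE": "SOE", "OP": "?"},   # optional "start of existence" phrase
    {"ENT_TYPE": "DATE", "OP": "+"},  # most probable date of birth
    {"ORTH": {"IN": ["-", "–"]}},
    {"ENT_TYPE": "EOE", "OP": "?"},   # optional "end of existence" phrase
    {"ENT_TYPE": "DATE", "OP": "+"},  # most probable date of death
    {"ORTH": ")"},
]
matcher.add("LIFESPAN", None, lifespan)

doc = nlp("Domenikos Theotokopoulos (1541 - 1614), known as El Greco.")
for match_id, start, end in matcher(doc):
    print("lifespan:", doc[start:end].text)
```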
With that information extracted, the next step is to structure it according to the subset of the CIDOC model presented in Figure 18. In detail, for every piece of information, a new instance of the E1 class is created, since every other entity type is a subclass of E1. Afterwards, new inner classes are instantiated following the hierarchy of Figure 19, until the point is reached where all relevant data of the piece of information can be codified in the properties of the branch. Returning to the example at hand, the birth of Domenikos Theotokopoulos would be an entity of type “E67 Birth”, with the property “P98 brought into life” having the value “Domenikos Theotokopoulos”. Finally, the date is stored in the property “P117 occurs during”, which is inherited from the superclass “E2 Temporal Entity”. The complete structure is sent in JSON format to the back-end, where it is processed and stored in the semantic graph database (see Section 7.3). The JSON formatting of the aforementioned example is presented in Figure 19.
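As an approximation, the codified structure for this example could resemble the following dictionary; the exact serialization is the one shown in Figure 19:

```python
# Approximate shape of the codified birth event sent to the back-end.
birth_event = {
    "class": "E67 Birth",                      # subclass of E1 CRM Entity
    "properties": {
        "P98 brought into life": "Domenikos Theotokopoulos",
        "P117 occurs during": "1541",          # inherited from E2 Temporal Entity
    },
}
```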
As the NLP is still under development in the context of the Invisible Museum platform, the scope of the information that can be efficiently extracted and structured is limited and has not yet reached the desired success rate. The models are being improved with new example texts, and the ruleset is being refined to include more types of information and to portray the ones already included more accurately. When completed, the NLP’s scope is expected to cover the model presented in Figure 19. Users of the platform will then be able to enhance exhibits’ narratives by receiving automatic suggestions to include more information relevant to their exhibits, based on the contributions of other users.

7.3. Semantic Graph Database

The Invisible Museum platform supports a wide range of user-submitted exhibits. These exhibits can be supplemented with important metadata that enrich the overall information about them. One of the aims of the platform is to support and promote the use of Linked Open Data (LOD). To address this requirement, data about the exhibits are stored following the latest archival standards for cultural objects, so that they appear in a meaningful way in a cross-cultural, multilingual context. As illustrated in Figure 20, each exhibit is based on the EDM [15]. The accompanying narratives are modeled after the CIDOC-CRM (see Section 7.2).
Data modeled after the EDM empower the Semantic Web initiative and provide links to Europe’s cultural heritage data, with references to persons, places, subjects, etc. The EDM can also show multiple views of an object, including information about the actual physical object and its digital representations. As already mentioned, the exhibits’ narratives are modeled based on the CIDOC-CRM, a theoretical model for information integration in the field of cultural heritage that provides definitions and a formal structure for describing concepts and their relationships. More precisely, narratives abide by the reduced CRM-compatible form. Both models are inherently graph representations of objects and concepts. This led to the decision to use the most appropriate tool for handling this kind of information, a graph database, and in particular, the Neo4j graph database [70]. Leveraging a graph database provides an efficient way of representing semantic data at the implementation level and contributes to LOD initiatives. Neo4j is an open-source NoSQL native graph database that is ACID-compliant, making it suitable for production environments requiring high availability. It implements constant-time graph traversals regardless of data size, is highly performant and scalable, and includes a rich and expressive query language (Cypher). Neo4j was specifically designed to store and manage interconnected data, making it a natural fit for the interconnected nature of the Invisible Museum platform’s exhibit metadata. Exhibits in the database are represented as nodes with attributes based on their metadata. Each exhibit has a digital representation implemented as a separate node that has a relationship with the root exhibit. Other children of an exhibit’s root node can be its translations and narrations. Access to the graph database takes place in real time with fast response times to adequately support the NLP services described in Section 7.2, a feat made possible by Neo4j’s high-performance architecture.
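As an illustration of this graph structure, the following sketch uses the official Neo4j Python driver with placeholder labels, properties, and credentials; it is not the platform’s actual data access layer:

```python
from neo4j import GraphDatabase

# Placeholder connection details for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # An exhibit node linked to its digital representation node,
    # mirroring the root-exhibit/child-node structure described above.
    session.run(
        """
        MERGE (e:Exhibit {id: $id, title: $title})
        MERGE (d:DigitalRepresentation {url: $url})
        MERGE (e)-[:HAS_REPRESENTATION]->(d)
        """,
        id="exhibit-42",
        title="Cretan loom",
        url="https://example.org/loom.glb",
    )
driver.close()
```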

7.4. High-Performance File Storage

Apart from the exhibits’ metadata, there are also digital representations of the actual physical exhibits. These representations can be videos (mp4), photos (jpg, png), text, sounds (mp3, wav), or 3D files (fbx, glb) that need to be stored effectively, as they constitute the core content of the platform. To accommodate the multimedia file storage needs, the MinIO object storage suite (MinIO, Inc., Palo Alto, CA, USA) was chosen as a high-performance, software-defined, open-source distributed object storage system [71]. MinIO protects data from silent data corruption, a problem faced by disk drives, using the HighwayHash algorithm, preventing corrupted data reads and writes. This approach ensures the integrity of the data uploaded to the platform’s repositories. Apart from data integrity, MinIO also uses sophisticated server-side encryption schemes to protect data, assuring confidentiality and authenticity. MinIO can run in the form of distributed containers, providing an effective way to scale the system depending on the demand for its data resources. Each file uploaded to MinIO is assigned an identifier and can be referenced through it from the main database’s models, eliminating the need to rely on fragile file system structures that cannot easily adapt to future changes and are not developer-friendly. Every file is also accessible via a system-generated URL that is stored in the corresponding database model.
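The upload flow can be sketched with the MinIO Python SDK as follows; the endpoint, credentials, bucket, and object names are placeholders:

```python
from minio import Minio

# Placeholder endpoint and credentials for illustration only.
client = Minio("minio.example.org", access_key="ACCESS", secret_key="SECRET")

bucket = "exhibit-media"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a digital representation; the object name serves as the identifier
# referenced by the exhibit's database model.
client.fput_object(bucket, "exhibit-42/loom.glb", "/tmp/loom.glb")

# A URL through which the stored file can be retrieved.
url = client.presigned_get_object(bucket, "exhibit-42/loom.glb")
```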

7.5. Delivering Web-Based 3D/VR Experiences

The Web portal of the Invisible Museum is its main access point, where users can view or create content. The Web application was built using the Angular front-end framework, which supports building features quickly with simple, declarative templates [72]. It can achieve high speed and performance, as well as meet large data requirements, offering a scalable web app solution. The front-end architecture takes advantage of the Angular module system to create a maintainable code-base. The main modules are derived from the major resources of the platform, namely its Users, Exhibits, and Exhibitions, while separate modules handle Authentication and User Profiles. Modules implement their business logic in the form of Angular components and their services. These components are reusable by any service via dependency injection, minimizing duplicate code. Some of the functions these services are responsible for include user registration and authentication, exhibit and exhibition creation, and user profile customization (profile picture, name, email, password, etc.). The Web applications delivered by the Invisible Museum support responsive design principles by incorporating the Bootstrap front-end framework and use Sassy CSS (SCSS), an extension of basic CSS, to style and customize the UI components. Thus, the user experience on both desktop and mobile devices is equally pleasing and intuitive.
Taking into consideration that visiting a virtual exhibition is an experience best fitted to a six-DOF (degrees of freedom) headset, where users can navigate the environment as freely as possible while maintaining most of their field of vision, it was crucial to support both desktop and VR headset implementations of the Invisible Museum. To deliver Web-based 3D/VR experiences, the Invisible Museum is based on A-Frame [73], a Web framework for building 3D/AR/VR applications, for both the Exhibition Designer and the Exhibition Viewer. To this end, the Invisible Museum supports all WebXR browsers available for both desktops and standalone VR headsets (e.g., Oculus Quest, HTC Vive) [74].

7.6. Enhancing Photorealistic Renderings

To deliver reasonably immersive virtual tours, high-fidelity visuals were of the utmost importance. This was all the more difficult given the Web-based 3D/VR nature of the platform, which, on the one hand, enables multi-platform access with minimal requirements, while on the other, is not as powerful as native 3D applications built with popular game engines.
A-Frame [73], the Web framework for building 3D/AR/VR applications based on HTML/JavaScript, is useful for multi-platform applications, but this comes at a cost. Performance tests revealed that there is an upper limit on the number of light sources a scene can have before the performance of the application degrades and the user experience in VR deteriorates (e.g., frame drops and laggy hand tracking). Making matters worse, its rendering engine calculates only direct lighting. However, the absence of any kind of indirect lighting can disengage users from the immersive experience that the presented platform aims to achieve, simply because a scene that tends to be flat-shaded falls into the uncanny valley of computer graphics. An indicative example is illustrated in Figure 21a.
Nevertheless, pushing the framework to its limits to fully explore its weaknesses proved challenging, so different approaches to lighting the scene were followed. After a period of experimentation, some basic guidelines emerged for making the visualization of an exhibition room more attractive while at the same time saving performance. These guidelines include the following principles:
  • The less ambient lighting, the more realistic the scene looks,
  • Windowless interior spaces are preferable, since windows increase the cognitive expectations of lighting with no payoff,
  • Use point lights or spotlights in combination with ambient lighting,
  • Having more than 5–6 lights should be avoided in the case of standalone VR headsets, since it significantly reduces performance.
After following the aforementioned guidelines, the scene looked considerably more appealing, as depicted in Figure 21b. In this setup, only one point light was used, constantly following the user, combined with ambient lighting. In a further experiment, an ambient light source and three to four point lights were used to further enhance the scene (Figure 22).
The latter setup offered the best performance attainable with the resources available in the real-time lighting pipeline of A-Frame. However, it entails some drawbacks, since it adds limitations and increases the complexity of the final VR application. For example, it would not be feasible for large exhibitions requiring numerous point lights, one for each exhibit. In addition, this approach does not scale appropriately for 3D objects (e.g., statues, pottery) that need to receive and cast shadows (Figure 23a).
To boost graphics performance, it was important to enable baked lighting on dynamic A-Frame scenes, since it is common practice to integrate the static lighting of a scene into the textures of static objects and then remove it from the light calculations (Figure 23b). Since, in most museum exhibitions, the objects are static, the lighting and shading in the scene should be baked. In this way, the only performance cost would be the rendering itself, drastically increasing the frame rate. Given that A-Frame does not support baking functionality, a new pipeline, namely the Blender Bakery pipeline, was integrated into the system. Blender is a free, open-source 3D computer graphics application. Its main use in the industry is 3D modeling and animation, but it also supports baking and scripting, making it a perfect match for this case.
The pipeline is activated automatically once a curator finishes the design of a virtual exhibition space using the Exhibition Designer. The latter serves real-time low-fidelity graphics rendered by A-Frame and saves user changes continuously to the database. On completion, the Blender Bakery pipeline is triggered. The baking process takes place in Blender, using its ray-tracing rendering engine, “Cycles”, and produces as output a .glb file that is stored back in the database. As a result, when visitors start an exhibition tour, a photorealistic virtual space is activated. The latter is characterized by high performance, since there is no need to simulate illumination in the scene; the number of polygons becomes the only performance limitation.
As illustrated in Figure 24, the Exhibition Designer stores in the database a 2D tilemap, the position, rotation, and scale of each object (i.e., exhibit), and information about the light sources. The tilemap is a two-dimensional array containing integers, and it is used to represent the building of the museum.
For example, walls are represented by the value “1”, while the floor and ceiling are represented by the value “0”. In the next step of the presented pipeline, the Blender Bakery service receives the raw data and recreates the virtual space, as depicted in Figure 25a.
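A minimal illustration of such a tilemap follows; the array values and the rendering are purely illustrative:

```python
# A minimal tilemap as described above: "1" marks a wall tile, "0" a
# floor/ceiling tile. The Blender Bakery service walks this array to
# recreate the exhibition geometry.
tilemap = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

for row in tilemap:
    print("".join("#" if tile else " " for tile in row))
```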
Major factors affecting the baking time should be considered, such as (a) the sample count and (b) the number of objects in the scene. The sample count is the number of light-calculation iterations that Blender executes: the higher this number, the more detail and the less noise observed in the bakes. These iterations are repeated for every object in the scene, yielding a time complexity of O(sampleCount × objectCount). This means that both the sample count and the object count should be reduced as much as possible without losing detail or introducing too much noise. Taking the above into consideration, to reduce the total object count, all the objects that fall into the same category, such as floor tiles or wall tiles, are merged into one object. This is illustrated in Figure 25: (a) shows the result of the “Blender scene recreator”, where every wall, floor, and ceiling tile is a unique object generated by the tilemap, while (b) shows the result of the “SceneMerger”, where all the objects of the same category are merged into one. In general, fewer objects mean fewer textures and thus a smaller file size. However, results show that not all walls should be merged into one object: the more objects are merged into one, the less space each face has in the UV map of the texture, and hence the less detail. It has been estimated that a balance of 30–40 objects per merge in the same category provides satisfactory results.
The baking process was developed in Python [75] using the Blender 2.91 API [76]. The process starts by creating a new UV map (“bakedUV”) for each object and uses Blender’s Smart UV unwrapping with an island margin of 0.1 to automatically create its UV map. Smart UV unwrapping is not required for exhibits, which should already be UV unwrapped if they have a material applied to them; to identify whether an object is an exhibit, it is properly named in step 1 of the Blender Bakery pipeline. Afterwards, the material of the object is created, in case it does not yet exist. In the material node network of each object, an image texture node is added, and a texture image is created, named after the object. For example, if the object is called “Museum_Wall”, an image called “Museum_Wall.png” will be created, where the baked texture will be stored. Textures of 2048 × 2048 pixels have been estimated to strike a good balance between file size, detail, and render time. Next in the pipeline, Blender is instructed to bake the diffuse lighting (direct, indirect, and color) to the texture. When the loop is over, the pipeline ends up with one image per object that includes the appropriate lighting baked in. Depending on the sample count, the final texture may exhibit anything from little to plenty of salt-and-pepper noise. As a countermeasure, after the bake operation is complete, the images are denoised using a medianBlur filter with a kernel size of 3; median filters are considered ideal for this kind of noise, and using OpenCV makes them easy to apply. The textures then pass through a second filter, a box filter, for finer removal of noise artifacts, at the cost of slightly blurring the texture. After denoising, the scene is exported as a single GLB file. Note that when exporting the model from Blender, all UV maps should be deleted except for the one on which the bake was based.
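The following condensed sketch illustrates these steps for a single object using the Blender Python API and OpenCV; object names, paths, and the sample count are illustrative, the real pipeline iterates over all (merged) objects, and it assumes OpenCV is available in Blender’s bundled Python:

```python
import bpy
import cv2

# Use Cycles with a fixed sample count (detail/noise vs. bake time trade-off).
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 64

obj = bpy.data.objects["Museum_Wall"]  # illustrative object name
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Create the "bakedUV" map and unwrap it with Smart UV Project (margin 0.1).
uv = obj.data.uv_layers.new(name="bakedUV")
obj.data.uv_layers.active = uv
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(island_margin=0.1)
bpy.ops.object.mode_set(mode='OBJECT')

# Create the target image, named after the object, and make its image
# texture node active so Cycles bakes into it.
image = bpy.data.images.new(obj.name + ".png", width=2048, height=2048)
nodes = obj.active_material.node_tree.nodes
tex_node = nodes.new("ShaderNodeTexImage")
tex_node.image = image
nodes.active = tex_node

# Bake diffuse lighting (direct, indirect, and color) into the texture.
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'DIRECT', 'INDIRECT', 'COLOR'})
image.filepath_raw = "/tmp/Museum_Wall.png"
image.file_format = 'PNG'
image.save()

# Denoise salt-and-pepper artifacts: median filter first, then a box filter.
baked = cv2.imread("/tmp/Museum_Wall.png")
baked = cv2.medianBlur(baked, 3)
baked = cv2.boxFilter(baked, -1, (3, 3))
cv2.imwrite("/tmp/Museum_Wall.png", baked)

# Export the scene as a single GLB file (after removing all UV maps
# except "bakedUV", on which the bake was based).
bpy.ops.export_scene.gltf(filepath="/tmp/museum_baked.glb")
```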
The exported file is stored in the database, and the curator is notified that the process has been completed. The results as rendered in A-Frame are shown in Figure 26a. To demonstrate how promising this technique is, a comparison of a ray-traced render with the baked version is provided in Figure 26b.

8. Evaluation Results

A cognitive walkthrough for the evaluation of the Invisible Museum was conducted with the participation of three (3) UX experts and one (1) domain expert. During a cognitive walkthrough evaluation, the evaluator uses the interface to perform tasks that a typical user will need to accomplish, aiming to assess system actions and responses with respect to the user’s goals and knowledge [62]. The evaluation is driven by a set of questions to which the evaluator aims to respond at each interaction step. In particular, for the evaluation of the Invisible Museum, the Enhanced Cognitive Walkthrough method was followed, according to which the evaluator has to respond to the following questions [77]:
  • Will the user know that the evaluated function is available?
  • Will the user try to achieve the right effect?
  • Will the user interface give clues that show that the function is available?
  • Will the user be able to notice that the correct action is available?
  • Will the user associate the right clue with the desired function?
  • Will the user associate the correct action with the desired effect?
  • Will the user get sufficient feedback to understand that the desired function has been chosen?
  • If the correct action is performed, will the user see that progress being made toward the solution of the task?
  • Will the user get sufficient feedback to understand that the desired function has been performed?
Each question is answered with a grade from 1 to 5, where 1 corresponds to a very small chance of success and 5 corresponds to a very good chance of success. If a problem with a seriousness from 1 to 4 is identified, the problem needs to be classified into one of the following types:
  • User: the problem is due to the user experience and knowledge.
  • Hidden: the UI gives no indications of a function or how it should be used.
  • Text/Icon: a text element or icon can easily be misinterpreted or not understood.
  • Sequence: an unnatural sequence of functions and operations is provided.
  • Feedback: insufficient feedback is provided by the system.
Before each evaluation session, the facilitator introduced the evaluator to the aims and objectives of the Invisible Museum, as well as to who its users are. To assist evaluators, a set of predefined tasks was given to them to execute. In particular, the tasks were as follows:
  • Task 1: Log in to the system and view the home page (user type: visitor).
  • Task 2: Search for an exhibition (user type: visitor).
  • Task 3: View an exhibition (user type: visitor).
  • Task 4: Take a tour of an exhibition (user type: visitor).
  • Task 5: Add a digital exhibit (user type: curator).
  • Task 6: Create a digital exhibition (user type: curator).
  • Task 7: Create a tour for an exhibition (user type: curator).
Evaluators carried out the cognitive walkthrough in separate one-hour sessions. A facilitator observed the evaluators while they were interacting with and commenting on the system and kept notes on whether they attempted and achieved the desired outcome. In addition, the facilitator recorded the steps that seemed to trouble the evaluators throughout the experiment, based on their comments and actions.
In total, thirty-five (35) problems were identified for all the tasks that evaluators carried out. The results of the evaluation per task are presented in the tables that follow. Table 3 presents the number of problems identified per task according to their seriousness, while Table 4 presents the number of problems identified per problem type.
In brief, the most severe problems identified referred to the lack of adequate information about the exhibits included in an exhibition and its tours, which evaluators assessed as troublesome for users. The lack of more detailed guidance during a tour was also assessed as an issue that may confuse users, particularly those who are novices in such VR environments. For curators, evaluators identified that the functionality for creating and editing a 3D virtual space can present difficulties, especially since some functionality appears to be hidden (e.g., a collection of tools to pan, grab, or measure the virtual space is only available by hovering over the selected tool).
Overall, evaluators praised the clear design, the UI consistency, and the provision of feedback at all times, as well as the fact that the functions seemed to be well integrated into a logical flow. They pointed out that some improvements could be made concerning the terminology used, which might be particularly beneficial for users who are not domain experts (e.g., non-professional content creators). Finally, they noted that, despite the errors identified, they expect users to be able to overcome them easily once they become familiar with the system.

9. Discussion

Overall, the design, implementation, and validation of this work have provided sufficient evidence of the appropriateness of the conducted research. The Invisible Museum addresses several limitations of state-of-the-art solutions and is based on open standards and technologies popular in the digital CH sector, ensuring its applicability and further exploitation. Of course, several issues remain open, as addressing them would require research efforts that exceed the scope of this work and the acquired funding. This section discusses the outcomes of this work and attempts to provide reusable knowledge in three directions: (a) potential guidelines that follow from the lessons learned and are generalizable to the domain, (b) limitations that should be noted to drive further research endeavors, and (c) technical limitations and the need for new technologies to fully implement the vision of this research work.

9.1. Lessons Learned

Through the iterative evaluation involving both experts and end-users, several conclusions were drawn concerning the design of virtual museum environments, addressing not only cultural institutions but non-professional content creators, as well. These conclusions, which are based on observations and suggestions that were made both by experts and users, highlight the following lessons learned:
  • User experience in such environments is heavily content-driven: no matter how good a UI is, it is the content itself that is crucial for delivering a high-quality user experience.
  • Minimalistic UI is of utmost importance: given that the content to be delivered includes compelling graphics exhibiting a wide variety, UI needs to be minimal to support the content in the most unobtrusive way.
  • Content personalization is a must-have: for virtual spaces that accumulate content of such a wide variety, it is necessary to support personalization, providing to each user content that is relevant to their interests.
  • Flexibility in data structures is the way to go: given the diversity of content, it is impossible to provide a fixed classification suitable for every digital exhibit to be added. Supporting content description by user-defined key-value pairs is a good solution to achieve universality.
  • Presentation through narratives is a win–win solution: given that narratives guide storytelling experiences and bind the presented artifacts with their socio-historic context, they are highly beneficial for end-users; at the same time, analyzing free-text narrative data for each exhibit can contribute to the creation of a comprehensive knowledge repository, which is compliant with standards for content classification and therefore valuable for information discovery and exploration.
  • Step-by-step functions can be a life-saver for complicated procedures: creating a digital exhibition featuring virtual tours is a quite complicated procedure, and it was the one that required most of the design iterations. Breaking the process into concrete steps turned out to be the most appealing solution, approved by all evaluators.
  • 3D environments disrupt the sequential nature of tours: this can be highly beneficial, since it allows users to experience tours as in physical spaces (by deviating from predetermined routes); however, it needs to be designed with caution, since the user may get lost in such virtual 3D environments. To address this challenge, shortcuts for returning to the tour, as well as information regarding one’s whereabouts in the virtual environment are useful features.
Process-wise, the iterative approach followed throughout the project, as well as the emphasis on involving domain experts from the earliest stages, was crucial for the prompt delivery of results and for minimizing the need for major changes at later stages of the platform. Therefore, although considerable effort was required for the iterative heuristic evaluation of the designed mockups, this resulted in a rather small number of problems identified at later stages, which were easier to address and did not require major redesign in terms of UI, system architecture, or software algorithms.

9.2. Limitations

With regard to the requirements elicitation, a limitation is that the participating curators came from one single museum, a historical museum. As such, different museums might require additional functions from such a platform. To address this limitation, the authors carried out, besides the interviews and workshops, a detailed analysis of related efforts in the field. As a result, the developed platform satisfies the requirements identified by users and also provides the necessary infrastructure to dynamically support any type of content, facilitating its proper classification and supporting a personalized user experience for all Invisible Museum visitors.

9.3. Further Technical Considerations

Considering technical limitations, the scope of the information that the NLP can efficiently distinguish and codify is limited, and it has not yet achieved the desired success rate. The statistical models and rulesets have to be trained with new example texts in order to include more types of information and portray the ones already included more accurately. Eventually, users will be able to include more information relevant to their exhibits in their narratives, based on the contributions of other users on similar subjects.
The adopted web framework for building 3D/AR/VR applications based on HTML/JavaScript is useful for multi-platform applications but sets a limitation regarding the number of light sources a scene can have before the performance of the application and the user experience in VR degrade. To address this limitation, a baking service was developed, as presented in Section 7.6. As a result, reasonably immersive virtual tours with high-fidelity visuals can be delivered to museum visitors.

10. Conclusions and Future Work

This work presented the Invisible Museum, a user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. In summary, the Invisible Museum offers (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in Web-based 3D/VR technologies, and (d) immersive navigation and interaction in photorealistic renderings.
The platform differs from previous similar works in that it is a generic technological framework not tied to a specific real-world museum. Thus, its main ambition is to act as a generic platform supporting the representation and presentation of virtual exhibitions. The representation adheres to domain standards such as CIDOC-CRM and EDM and exploits state-of-the-art deep learning technologies to assist curators by generating ontology bindings for textual data. At the same time, the virtual museum authoring environment was co-designed with museum experts and provides the entire toolchain for moving from knowledge and exhibit representation to the authoring of collections and virtual museum exhibitions. An important aspect of the authoring part is the semantic representation of narratives that guide storytelling experiences and bind the presented artifacts with their socio-historic context. The platform transforms this rich representation of knowledge into multimodal representations, currently supporting Web-based 3D/VR immersive visiting experiences.
The platform itself was designed following a Human-Centered Design approach with the collaboration of museum curators and personnel of the Historical Museum of Crete, as well as a number of end-users. Different evaluation methodologies were applied throughout the development cycle of the Invisible Museum, thus ensuring that the platform serves user needs in the best possible way and provides an engaging experience both for content creators and content consumers. In particular, the following evaluation iterations were carried out: (a) heuristic evaluation of the designed mockups, applied iteratively from the first set of mockups until the final extensive set, (b) group inspection of the final implemented mockups, ensuring that they are usable for the target users, and (c) a cognitive walkthrough carried out on the implemented system, to assess whether users will know what to do at each step of the interaction. Several conclusions were drawn concerning the design of virtual museum environments, addressing not only cultural institutions but individual content creators as well. Future evaluation efforts will target larger numbers of end-users, including professional and non-professional content creators, as well as museum visitors.
With regard to future improvements, further emphasis will be placed on the experiential part of the visit, focusing both on AR augmentation of physical exhibitions and on more immersive representations empowered by multichannel audio and interactive narrations. Emphasis will also be given to the design and development of hybrid exhibition tours that combine physical exhibitions with AR augmentation enhanced with digital artifacts from virtual ones.
The outcomes of this research work will be exploited in the context of the reformulation of the virtual exhibitions provided by the Historical Museum of Crete such as the Ethnographic Collection with representative items mainly from the 19th and the 20th century, collected from villages across the island, particularly East and Central Crete.

Author Contributions

Conceptualization, N.P., E.Z. and C.S.; methodology, S.N.; software, A.D., S.K., E.N., A.X., Z.P., A.M. and M.F.; visualization, validation, S.N. and A.N.; investigation, N.P.; writing—original draft preparation, E.Z., E.K., I.Z., Z.P., A.D. and E.N.; writing—review and editing, N.P., S.N. and E.Z.; supervision, E.Z.; project administration, E.Z.; All authors have read and agreed to the published version of the manuscript.

Funding

This work has been conducted in the context of the Unveiling the Invisible Museum research project (http://invisible-museum.gr), and has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-02725).

Institutional Review Board Statement

The study was approved by the Ethics Committee of the Foundation for Research and Technology–Hellas (Approval date: 12 April 2019/Reference number: 40/12-4-2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank: (a) Eleftherios Fthenos, Undergraduate Student | University of Crete, for his contribution in the design process of the Exhibition Designer and the Exhibition Viewer, (b) Michalis Sifakis, Content Editor | FORTH, for his contribution in thematic areas classification, (c) Argyrw Petraki, Philologist | FORTH, for her contribution in content translation, (d) Manolis Apostolakis, Visual Artist | Alfa3 art workshop, for his kind offer of 3D reconstructed artworks, and, (e) Aggeliki Mpaltatzi, Curator, Ethnographic Collections | Head of Communications in Historical Museum of Crete, for her contribution in the realization of the virtual exhibition «Traditional Cretan House», part of the platform’s demo video presentation https://youtu.be/5nJ5Cewqngc. The authors also would like to thank all the employees of the Historical Museum of Crete, as well as all end-users who participated in the co-creation and evaluation of the Invisible Museum platform, providing valuable feedback and insights.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schweibenz, W. The “Virtual Museum”: New Perspectives for Museums to Present Objects and Information Using the Internet as a Knowledge Base and Communication System. In Proceedings of the Knowledge Management und Kommunikationssysteme, Workflow Management, Multimedia, Knowledge Transfer, Prague, Czech Republic, 3–7 November 1998; pp. 185–200. [Google Scholar]
  2. Ferdani, D.; Pagano, A.; Farouk, M. Terminology, Definitions and Types for Virtual Museums. V-Must.net del. Collections. 2014. Available online: https://www.academia.edu/6090456/Terminology_definitions_and_types_of_Virtual_Museums (accessed on 28 October 2018).
  3. Partarakis, N.; Grammenos, D.; Margetis, G.; Zidianakis, E.; Drossis, G.; Leonidis, A.; Metaxakis, G.; Antona, M.; Stephanidis, C. Digital Cultural Heritage Experience in Ambient Intelligence. In Mixed Reality and Gamification for Cultural Heritage; Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 473–505. [Google Scholar]
  4. Forte, M.; Siliotti, A. Virtual Archaeology: Great Discoveries Brought to Life through Virtual Reality; Thames and Hudson: London, UK, 1997. [Google Scholar]
  5. Google Arts & Culture. Available online: https://artsandculture.google.com/ (accessed on 21 December 2020).
  6. Inventing Europe: European Digital Science & Technology Museum. Available online: http://www.inventingeurope.eu/ (accessed on 21 December 2020).
  7. Ontdek de wereld in het Museon. Available online: https://www.museon.nl/nl (accessed on 21 December 2020).
  8. Kiourt, C.; Koutsoudis, A.; Pavlidis, G. DynaMus: A fully dynamic 3D virtual museum framework. J. Cult. Herit. 2016, 22, 984–991. [Google Scholar] [CrossRef]
  9. Tsita, C.; Drosou, A.; Karageorgopoulou, A.; Tzovaras, D. The Scan4Reco Virtual Museum. In Proceedings of the 14th annual EuroVR conference, Laval, France, 12–14 December 2017. [Google Scholar]
  10. Giangreco, I.; Sauter, L.; Parian, M.A.; Gasser, R.; Heller, S.; Rossetto, L.; Schuldt, H. VIRTUE: A virtual reality museum Experience. In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion (IUI 19), Marina del Ray, CA, USA, 17–20 March 2019; pp. 119–120. [Google Scholar]
  11. O’hOisin, N.; O’Malley, B. The Medieval Dublin Project: A Case Study. Virtual Archaeol. Rev. 2010, 1, 45–49. [Google Scholar] [CrossRef] [Green Version]
  12. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult. Herit. 2010, 11, 452–458. [Google Scholar] [CrossRef]
  13. Scherp, A.; Franz, T.; Saathoff, C.; Staab, S. F—A model of events based on the foundational ontology dolce+DnS ultralight. In Proceedings of the fifth international conference on Knowledge capture (K-CAP 09), Redondo Beach, CA, USA, 1–4 September 2009; pp. 137–144. [Google Scholar]
  14. ISO 21127:2014: Information and documentation—A reference ontology for the interchange of cultural heritage information. Available online: https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/05/78/57832.html (accessed on 21 December 2020).
  15. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; van de Sompel, H. The Europeana Data Model (EDM). In Proceedings of the World Library and Information Congress: 76th IFLA General Conference and Assembly, Gothenburg, Sweden, 10–15 August 2010; pp. 10–15. [Google Scholar]
  16. Mani, I. Computational Modeling of Narrative. Synth. Lect. Hum. Lang. Technol. 2012, 5, 1–142. [Google Scholar] [CrossRef]
  17. Raimond, Y.; Abdallah, S. The Event Ontology. Available online: http://motools.sourceforge.net/event/event.html (accessed on 24 December 2020).
  18. Shaw, R.; Troncy, R.; Hardman, L. LODE: Linking Open Descriptions of Events. In Proceedings of the Fourth Asian Semantic Web Conference (ASWC 2009), Shanghai, China, 6–9 December 2009; pp. 153–167. [Google Scholar]
  19. Doerr, M. The CIDOC Conceptual Reference Module: An Ontological Approach to Semantic Interoperability of Metadata. AIMag 2003, 24, 75. [Google Scholar] [CrossRef]
  20. Lagoze, C.; Hunter, J. The ABC Ontology and Model. In Proceedings of the International Conference on Dublin Core and Metadata Applications, Tokyo, Japan, 24–26 October 2001; pp. 160–176. [Google Scholar]
  21. Fernie, K.; Griffiths, J.; Stevenson, M.; Clough, P.; Goodale, P.; Hall, M.; Archer, P.; Chandrinos, K.; Agirre, E.; de Lacalle, O.L.; et al. PATHS: Personalising access to cultural heritage spaces. In Proceedings of the 18th International Conference on Virtual Systems and Multimedia, Milan, Italy, 2–5 September 2012; pp. 469–474. [Google Scholar]
  22. van den Akker, C.M.; van Erp, M.G.J.; Aroyo, L.M.; Segers, R.; van der Meij, L.; Schreiber, G.; Legêne, S. Understanding Objects in Online Museum Collections by Means of Narratives. In Proceedings of the Third Workshop on Computational Models of Narrative, Istanbul, Turkey, 26–27 May 2012. [Google Scholar]
  23. Wolff, A.; Mulholland, P.; Collins, T. Storyspace: A story-driven approach for creating museum narratives. In Proceedings of the 23rd ACM conference on Hypertext and social media (HT 12), Milwaukee, WI, USA, 25–28 June 2012; pp. 89–98. [Google Scholar]
  24. Meghini, C.; Bartalesi, V.; Metilli, D.; Partarakis, N.; Zabulis, X. Mingei Ontology. Available online: https://zenodo.org/record/3742829#.YBK2ExZS9PY (accessed on 2 February 2021).
  25. Zabulis, X.; Meghini, C.; Partarakis, N.; Beisswenger, C.; Dubois, A.; Fasoula, M.; Nitti, V.; Ntoa, S.; Adami, I.; Chatziantoniou, A.; et al. Representation and Preservation of Heritage Crafts. Sustainability 2020, 12, 1461. [Google Scholar] [CrossRef] [Green Version]
  26. Chiu, C.; Sainath, T.N.; Wu, Y.; Prabhavalkar, R.; Nguyen, P.; Chen, Z.; Kannan, A.; Weiss, R.J.; Rao, K.; Gonina, E.; et al. State-of-the-Art Speech Recognition with Sequence-to-Sequence Models. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 4774–4778. [Google Scholar]
  27. Jaf, S.; Calder, C. Deep Learning for Natural Language Parsing. IEEE Access 2019, 7, 131363–131373. [Google Scholar] [CrossRef]
  28. Lyu, C.; Titov, I. AMR Parsing as Graph Prediction with Latent Alignment. arXiv 2018, arXiv:1805.05286. Available online: https://arxiv.org/abs/1805.05286 (accessed on 2 February 2021).
  29. Gildea, D.; Jurafsky, D. Automatic Labeling of Semantic Roles. Comput. Linguist. 2002, 28, 245–288. [Google Scholar] [CrossRef]
  30. Exner, P.; Nugues, P. Using Semantic Role Labeling to Extract Events from Wikipedia. In Proceedings of the Detection, Representation, and Exploitation of Events in the Semantic Web (DeRiVE 2011), Bonn, Germany, 23 October 2011; pp. 38–47. [Google Scholar]
  31. Choi, D.; Kim, E.-K.; Shim, S.-A.; Choi, K.-S. Intrinsic Property-based Taxonomic Relation Extraction from Category Structure. In Proceedings of the 6th Workshop on Ontologies and Lexical Resources, Beijing, China, 23–27 August 2010; pp. 48–57. [Google Scholar]
  32. Girju, R.; Badulescu, A.; Moldovan, D. Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 03), Edmonton, AB, Canada, 27 May–1 June 2003; pp. 80–87. [Google Scholar]
  33. Gangemi, A.; Presutti, V.; Reforgiato Recupero, D.; Nuzzolese, A.G.; Draicchio, F.; Mongiovì, M. Semantic Web Machine Reading with FRED. Semant. Web 2017, 8, 873–893. [Google Scholar] [CrossRef]
34. Flanigan, J.; Dyer, C.; Smith, N.A.; Carbonell, J. CMU at SemEval-2016 Task 8: Graph-based AMR Parsing with Infinite Ramp Loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), San Diego, CA, USA, 16–17 June 2016; pp. 1202–1206.
35. Foundation of the Hellenic World. Available online: http://www.ime.gr/ (accessed on 21 December 2020).
36. Cortona3D Viewers. Available online: http://www.cortona3d.com/en/products/authoring-publishing-solutions/cortona3d-viewers (accessed on 21 December 2020).
37. Kersten, T.P.; Tschirschwitz, F.; Deggim, S. Development of a virtual museum including a 4D presentation of building history in virtual reality. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W3, 361–367.
38. VIVE United States: Discover Virtual Reality Beyond Imagination. Available online: https://www.vive.com/us/ (accessed on 21 December 2020).
39. Oculus Quest 2. Available online: https://www.oculus.com/quest-2/ (accessed on 21 December 2020).
40. Web3D Consortium: Open Standards for Real-Time 3D Communication. Available online: https://www.web3d.org/ (accessed on 21 December 2020).
41. Sinclair, P.A.S.; Martinez, K.; Millard, D.E.; Weal, M.J. Augmented reality as an interface to adaptive hypermedia systems. New Rev. Hypermedia Multimed. 2003, 9, 117–136.
42. Goodall, S.; Lewis, P.; Martinez, K.; Sinclair, P.; Addis, M.; Lahanier, C.; Stevenson, J. Knowledge-based exploration of multimedia museum collections. In Proceedings of the European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology (EWIMT), London, UK, 25–26 November 2004.
43. Hughes, C.E.; Stapleton, C.B.; Hughes, D.E.; Smith, E.M. Mixed reality in education, entertainment, and training. IEEE Comput. Graph. Appl. 2005, 25, 24–30.
44. Museum het Rembrandthuis. Available online: https://www.rembrandthuis.nl/ (accessed on 21 December 2020).
45. Barnes, M.; Levy Finch, E. COLLADA–Digital Asset Schema Release 1.5.0. Available online: https://www.khronos.org/files/collada_spec_1_5.pdf (accessed on 27 December 2020).
46. glTF Overview: The Khronos Group Inc. Available online: https://www.khronos.org/gltf/ (accessed on 27 December 2020).
47. OpenSceneGraph-3.6.5 Released. Available online: http://www.openscenegraph.org/index.php/8-news/238-openscenegraph-3-6-5-released (accessed on 21 December 2020).
48. Unity Technologies. Unity Real-Time Development Platform: 3D, 2D, VR & AR Engine. Available online: https://unity.com/ (accessed on 21 December 2020).
49. Second Life: Virtual Worlds, Virtual Reality, VR, Avatars, Free 3D Chat. Available online: https://www.secondlife.com/ (accessed on 21 December 2020).
50. Looser, J.; Grasset, R.; Seichter, H.; Billinghurst, M. OSGART–A Pragmatic Approach to MR. In Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2006), Santa Barbara, CA, USA, 22–25 October 2006.
51. Partarakis, N.; Antona, M.; Zidianakis, E.; Stephanidis, C. Adaptation and Content Personalization in the Context of Multi User Museum Exhibits. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI 2016), Bari, Italy, 7–10 June 2016.
52. Partarakis, N.; Klironomos, I.; Antona, M.; Margetis, G.; Grammenos, D.; Stephanidis, C. Accessibility of Cultural Heritage Exhibits. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction (UAHCI 2016), Toronto, ON, Canada, 17–22 July 2016; pp. 444–455.
53. Amato, F.; Moscato, V.; Picariello, A.; Sperlì, G. Recommendation in Social Media Networks. In Proceedings of the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017.
54. Moscato, V.; Picariello, A.; Sperlì, G. An emotional recommender system for music. IEEE Intell. Syst. 2020, 1.
55. ISO 9241-210:2019: Ergonomics of Human-System Interaction—Part 210: Human-Centred Design for Interactive Systems. Available online: https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/75/77520.html (accessed on 18 December 2020).
56. Giacomin, J. What Is Human Centred Design? Des. J. 2014, 17, 606–623.
57. Schmidt, C. The analysis of semi-structured interviews. In A Companion to Qualitative Research; Flick, U., von Kardoff, E., Steinke, I., Eds.; SAGE Publications Ltd.: Newcastle upon Tyne, UK, 2004.
58. Prahalad, C.K.; Ramaswamy, V. Co-creation experiences: The next practice in value creation. J. Interact. Mark. 2004, 18, 5–14.
59. Armour, F.; Miller, G. Advanced Use Case Modeling: Software Systems; Pearson Education: Upper Saddle River, NJ, USA, 2000.
60. Nielsen, J.; Mack, R.L. Heuristic Evaluation. In Usability Inspection Methods; John Wiley & Sons: Hoboken, NJ, USA, 1994.
61. Følstad, A. The effect of group discussions in usability inspection: A pilot study. In Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges (NordiCHI 2008), Lund, Sweden, 20–22 October 2008; pp. 467–470.
62. Mahatody, T.; Sagar, M.; Kolski, C. State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. Int. J. Hum. Comput. Interact. 2010, 26, 741–785.
63. General Data Protection Regulation–EU 2016/679. Available online: https://gdpr-info.eu/ (accessed on 18 December 2020).
64. Pohl, K. Requirements Engineering: Fundamentals, Principles, and Techniques, 1st ed.; Springer Publishing Company: New York, NY, USA, 2010.
65. Spikol, D.; Milrad, M.; Maldonado, H.; Pea, R. Integrating Co-design Practices into the Development of Mobile Science Collaboratories. In Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies, Riga, Latvia, 15–17 July 2009; pp. 393–397.
66. Sanders, E.B.-N.; Stappers, P.J. Co-creation and the new landscapes of design. CoDesign 2008, 4, 5–18.
67. Cloud Translation. Available online: https://cloud.google.com/translate?hl=el (accessed on 28 December 2020).
68. MongoDB: The Most Popular Database for Modern Apps. Available online: https://www.mongodb.com (accessed on 28 December 2020).
69. spaCy: Industrial-Strength Natural Language Processing in Python. Available online: https://spacy.io/ (accessed on 28 December 2020).
70. Neo4j Graph Platform–The Leader in Graph Databases. Available online: https://neo4j.com/ (accessed on 28 December 2020).
71. MinIO: Kubernetes Native, High Performance Object Storage. Available online: https://min.io/ (accessed on 28 December 2020).
72. Angular. Available online: https://angular.io/ (accessed on 30 December 2020).
73. A-Frame–Make WebVR. Available online: https://aframe.io (accessed on 28 December 2020).
74. Immersive Web Developer Home. Available online: https://immersiveweb.dev/ (accessed on 28 December 2020).
75. Welcome to Python.org. Available online: https://www.python.org/ (accessed on 1 February 2021).
76. Blender Foundation. blender.org–Home of the Blender Project–Free and Open 3D Creation Software. Available online: https://www.blender.org/ (accessed on 1 February 2021).
77. Bligård, L.-O.; Osvalder, A.-L. An Analytical Approach for Predicting and Identifying Use Error and Usability Problem. In Proceedings of the Symposium of the Austrian HCI and Usability Engineering Group (USAB 2007), Graz, Austria, 22 November 2007; pp. 427–440.
Figure 1. The methodology followed for the design and development of the Invisible Museum platform.
Figure 2. Digital exhibit-related information.
Figure 3. (a) Basic information for the representation of a digital exhibition; (b) Organizing digital exhibitions.
Figure 4. Actors (user roles) of the Invisible Museum platform.
Figure 5. Use case model for digital exhibits.
Figure 6. "Add digital exhibit" mockups. (a) The first mockup that was designed; (b) The final mockup.
Figure 7. The Invisible Museum platform.
Figure 8. (a) The sign-up screen of the Invisible Museum platform; (b) User profile screen.
Figure 9. Exhibition Designer, a 3D design tool for creating virtual museums.
Figure 10. (a) A review example within the Exhibition Designer; (b) Overview of modifications made by co-creators pending approval.
Figure 11. (a) Homepage: preview of recommended exhibitions; (b) Users initiate the recommendation algorithms by selecting the classified thematic areas of their interest.
Figure 12. Exhibition main screen.
Figure 13. Virtual Reality (VR) tour. (a) VR exhibition tour; (b) Providing information about a selected exhibit.
Figure 14. Indicative database models.
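The database models indicated in Figure 14 are persisted in MongoDB [68]. As a rough illustration only, a digital-exhibit document could take a shape like the sketch below; every field name here is hypothetical, inferred from the exhibit information of Figure 2 and the use cases of Table 2, and is not the platform's actual schema.

```python
# Hypothetical sketch of a digital-exhibit document in MongoDB; the database,
# collection, and field names are illustrative, not the platform's real schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["invisible_museum"]

exhibit = {
    "title": "Minoan clay vessel",
    "narrative": "A short textual description of the exhibit.",
    "creator_id": "user-42",           # the registered user who created it
    "co_creators": ["user-7"],         # users allowed to edit (cf. Table 2)
    "media": [                         # multimedia content attached to the exhibit
        {"type": "model/gltf-binary", "object_key": "exhibits/vessel.glb"},
        {"type": "image/jpeg", "object_key": "exhibits/vessel.jpg"},
    ],
    "thematic_areas": ["archaeology", "Minoan Crete"],
}
exhibit_id = db.exhibits.insert_one(exhibit).inserted_id
```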
Figure 15. Sample annotated text.
Figure 16. Natural Language Processing (NLP) of an exhibit's textual narrative.
Figure 17. Entities identified by the NLP pipeline.
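The entity extraction illustrated in Figures 15–17 builds on spaCy [69]. The following minimal sketch shows the general pattern of running a pretrained pipeline over an exhibit's narrative and collecting named entities with their character offsets; the specific model and sample text are illustrative.

```python
# Minimal named-entity extraction over an exhibit narrative with spaCy;
# the pipeline name and the sample text are illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")  # any pretrained pipeline with an NER component
narrative = (
    "The vessel was excavated at Knossos in 1901, "
    "during the campaigns led by Arthur Evans."
)
doc = nlp(narrative)

# Each entity exposes its surface text, label, and character offsets,
# which is sufficient to produce annotations like those in Figure 15.
for ent in doc.ents:
    print(ent.text, ent.label_, ent.start_char, ent.end_char)
```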
Figure 18. Narratives structured according to a subset of CIDOC-CRM (Conceptual Reference Model).
Figure 19. Structuring extracted information using the CIDOC (International Committee for Documentation of the International Council of Museums) model.
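To make the structuring of Figures 18 and 19 concrete, the sketch below expresses one narrative fragment as CIDOC-CRM triples using rdflib. The classes and properties (E22, E12, E52, P108i, P4) are standard CIDOC-CRM terms, but the resource URIs and the particular subset chosen here are assumptions rather than the platform's exact mapping.

```python
# A small CIDOC-CRM fragment expressed with rdflib; resource URIs and the
# selection of classes/properties are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/invisible-museum/")  # hypothetical base URI

g = Graph()
g.bind("crm", CRM)

vessel = EX["exhibit/vessel-001"]
production = EX["event/production-of-vessel-001"]
timespan = EX["timespan/minoan-period"]

g.add((vessel, RDF.type, CRM["E22_Man-Made_Object"]))
g.add((vessel, RDFS.label, Literal("Minoan clay vessel")))
g.add((vessel, CRM["P108i_was_produced_by"], production))
g.add((production, RDF.type, CRM["E12_Production"]))
g.add((production, CRM["P4_has_time-span"], timespan))
g.add((timespan, RDF.type, CRM["E52_Time-Span"]))

print(g.serialize(format="turtle"))
```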
Figure 20. Exhibits are modeled according to the Europeana Data Model (EDM).
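In the same spirit, the EDM mapping of Figure 20 can be pictured as a JSON-LD record pairing an edm:ProvidedCHO with its ore:Aggregation; the identifiers and values below are hypothetical.

```python
# Hypothetical JSON-LD sketch of an exhibit modeled as an EDM ProvidedCHO plus
# its Aggregation; all URIs and literal values are illustrative.
import json

edm_record = {
    "@context": {
        "edm": "http://www.europeana.eu/schemas/edm/",
        "dc": "http://purl.org/dc/elements/1.1/",
        "ore": "http://www.openarchives.org/ore/terms/",
    },
    "@graph": [
        {
            "@id": "http://example.org/cho/vessel-001",
            "@type": "edm:ProvidedCHO",
            "dc:title": "Minoan clay vessel",
            "dc:subject": "Minoan Crete",
        },
        {
            "@id": "http://example.org/aggregation/vessel-001",
            "@type": "ore:Aggregation",
            "edm:aggregatedCHO": {"@id": "http://example.org/cho/vessel-001"},
            "edm:isShownBy": {"@id": "http://example.org/media/vessel-001.glb"},
        },
    ],
}
print(json.dumps(edm_record, indent=2))
```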
Figure 21. (a) High ambient lighting and a directional light used as a sun; (b) Low ambient lighting with a point light centered on the user.
Figure 22. Low ambient lighting, one point light per two exhibits, and one centered on the user.
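The lighting setups of Figures 21 and 22 map onto A-Frame's standard light component. As a sketch of the Figure 22 configuration, the snippet below generates the corresponding A-Frame entities; the intensities, distances, and exhibit positions are assumed values, not the platform's actual tuning.

```python
# Emits A-Frame light entities for the Figure 22 setup: low ambient light,
# one point light per two exhibits, and one light following the user.
# All numeric values and the exhibit layout are illustrative.

def point_light(x, y, z, intensity=0.6):
    return (f'<a-entity light="type: point; intensity: {intensity}; distance: 20" '
            f'position="{x} {y} {z}"></a-entity>')

exhibit_positions = [(-4, 2, -3), (-4, 2, 0), (4, 2, -3), (4, 2, 0)]  # assumed layout
lights = ['<a-entity light="type: ambient; intensity: 0.2"></a-entity>']

# One point light midway between each pair of exhibits.
for (x1, y1, z1), (x2, y2, z2) in zip(exhibit_positions[::2], exhibit_positions[1::2]):
    lights.append(point_light((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2))

# A point light nested under the camera entity follows the user around.
lights.append('<a-entity camera look-controls>' + point_light(0, 0, 0) + '</a-entity>')

print("\n".join(lights))
```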
Figure 23. (a) Real-time rendering in A-Frame; (b) A-Frame running with baked textures and no actual lights in the scene.
Figure 24. Blender's Bakery pipeline.
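Figures 23 and 24 illustrate the texture-baking pipeline: lighting is precomputed with Blender's Cycles ray tracer [76] and exported as baked textures, so the A-Frame scene can run without any real-time lights. A minimal sketch of the core bake step in Blender's Python API [75] is given below; the object and image names are hypothetical, and a full pipeline would also handle UV unwrapping and texture export.

```python
# Minimal sketch of baking combined lighting for one object via Blender's
# Python API (bpy); object/image names and the sample count are illustrative.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # baking requires the Cycles engine
scene.cycles.samples = 512       # cf. Figure 26b

obj = bpy.data.objects["RoomGeometry"]  # hypothetical merged room object with UVs
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Cycles bakes into the image of the material's active Image Texture node,
# so create a target image and make its node active first.
image = bpy.data.images.new("RoomGeometry_baked", width=2048, height=2048)
mat = obj.active_material                       # assumes a material is assigned
node = mat.node_tree.nodes.new("ShaderNodeTexImage")
node.image = image
mat.node_tree.nodes.active = node

bpy.ops.object.bake(type='COMBINED')
image.save_render("room_baked.png")
```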
Figure 25. (a) The result of the "Blender scene recreator": every wall, floor, and ceiling tile is a unique object generated from the tilemap; (b) The result of "SceneMerger": all objects belonging to the same category are merged into one.
Figure 26. (a) Image rendered using Cycles, Blender's ray-tracing engine; (b) Blender's Bakery pipeline with 512 samples.
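The "SceneMerger" step of Figure 25b reduces the object count (and hence draw calls) by joining all generated objects of one category into a single mesh. A rough bpy sketch of that idea follows, assuming a naming convention in which generated objects are prefixed by their category:

```python
# Rough sketch of merging all objects of one category into a single mesh with
# bpy; assumes generated objects are named by category, e.g., "Wall.001".
import bpy

def merge_category(prefix):
    objs = [o for o in bpy.data.objects
            if o.type == 'MESH' and o.name.startswith(prefix)]
    if len(objs) < 2:
        return
    bpy.ops.object.select_all(action='DESELECT')
    for o in objs:
        o.select_set(True)
    bpy.context.view_layer.objects.active = objs[0]
    bpy.ops.object.join()  # joins all selected meshes into the active object

for category in ("Wall", "Floor", "Ceiling"):
    merge_category(category)
```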
Table 1. Demographic information of participants in the co-creation workshops.

| Gender: Male | Gender: Female | Age: 20–30 | Age: 30–40 | Age: 40–50 | Age: 50–60 |
|---|---|---|---|---|---|
| 10 | 10 | 8 | 6 | 4 | 2 |
Table 2. List of use cases and pertinent actors, organized in categories.

| Category | Use Case | User Roles |
|---|---|---|
| User management | Registration | Non-registered user |
| | Login | Registered user |
| | Forgot password | Registered user |
| | Change password | Registered user |
| | View user profile | Registered user |
| | Edit user profile | Registered user (for own profile), System administrator |
| | Delete user | System administrator |
| Digital exhibits | View list of exhibits | User |
| | View exhibit details | User |
| | Create new exhibit | Creator |
| | Edit exhibit details | Creator, Co-Creator |
| | Delete exhibit | Creator |
| | Add multimedia content to exhibit | Creator, Co-Creator |
| | Delete multimedia content from exhibit | Creator |
| Digital exhibitions | View exhibition list | User |
| | View exhibition details | User |
| | Create a new exhibition | Creator |
| | Edit exhibition | Creator, Co-Creator |
| | Delete exhibition | Creator |
| | Cooperative creation of exhibitions | Creator, Co-Creator |
| Digital tours | View list of digital tours | User |
| | View tour details | User |
| | Create a tour | Creator |
| | Edit a tour | Creator, Co-Creator |
| | Delete a tour | Creator |
| | Start a tour | User |
| | Free navigation to an exhibition | User |
| User group management (for content co-creators) | View user groups | Group member |
| | View the details of a user group | Group member |
| | Create a user group | Group administrator |
| | Edit a user group | Group administrator |
| | Delete a user group | Group administrator |
| | Add a member to a user group | Group administrator |
| | Remove a member from a user group | Group administrator |
| Search | Search for exhibits | User |
| | Search for exhibitions | User |
| | Search for users | User |
| VR navigation | VR navigation via a guided tour | User |
| | VR navigation through free exploration | User |
Table 3. Problem seriousness per task.

| Task | Seriousness 1 | Seriousness 2 | Seriousness 3 | Seriousness 4 | Seriousness 5 |
|---|---|---|---|---|---|
| Task 1 | 1 | 2 | 1 | 0 | 0 |
| Task 2 | 0 | 2 | 0 | 0 | 0 |
| Task 3 | 1 | 2 | 1 | 1 | 0 |
| Task 4 | 1 | 4 | 2 | 3 | 1 |
| Task 5 | 1 | 1 | 1 | 0 | 0 |
| Task 6 | 1 | 2 | 2 | 0 | 0 |
| Task 7 | 2 | 2 | 1 | 0 | 0 |
Table 4. Problem type per task.

| Task | User | Hidden | Text/Icon | Sequence | Feedback |
|---|---|---|---|---|---|
| Task 1 | 1 | 0 | 3 | 0 | 0 |
| Task 2 | 0 | 0 | 2 | 0 | 0 |
| Task 3 | 1 | 2 | 1 | 1 | 0 |
| Task 4 | 2 | 3 | 4 | 1 | 1 |
| Task 5 | 2 | 1 | 0 | 0 | 0 |
| Task 6 | 1 | 2 | 1 | 1 | 0 |
| Task 7 | 2 | 1 | 2 | 0 | 0 |