1. Introduction
The last two decades have seen a remarkable increase in the use of Information and Communication Technology (ICT) in museum and heritage sites. These sites have provided fertile ground where ICT has created more accessible connections with social and cultural audiences, personalizing the visitor experience and enhancing communication [
1,
2]. Technology has proved to be an essential tool for reinventing the role and relevance of museums and heritage institutions and for reaching new and wider audiences [
3,
4,
5].
Recent museological practice has begun to use online digital resources as a means of enhancing the technological interface between artifacts and audiences. Economic and cultural pressures have made reinventing the role and relevance of museums and heritage institutions an imperative in the search for new and wider audiences, especially young audiences, who are normally detached from museums. Technology has proved to be an essential tool in making that happen.
Many cultural institutions have increased their spending on digitizing collections to improve archiving, conservation, presentation, and accessibility. Whilst this has reduced some of the audience’s logistical barriers, such as the time and cost of travelling to the museum site, it has done little to break down more complex and intransigent barriers, such as cultural and educational ones, that must be overcome in order to widen audiences [
6]. Hence, something more has to be done if genuine audience engagement with culture is to be achieved.
A new and more interactive museology is required to go beyond linear exhibitions based on classical narratives and to motivate wider audience exploration and understanding. Consequently, new presentation methodologies are emerging, creating novel, sensitive narratives that engage with the practice of storytelling and make for a more socially inclusive interaction [
7].
Engaging audiences through tablets and smartphones has boosted new storytelling techniques, making information more accessible. Nevertheless, recent developments are fueling museum practice with innovative storytelling based on virtual reality (VR), augmented reality (AR), Big Data (BD), and Artificial Intelligence (AI).
1.1. Motivation
Traditionally, museums have used printed drawings, labels, models, or digital infographics to add information about the exhibits or ruins; however, in the last decade, different virtual- and augmented-reality devices have appeared that are radically changing this situation. Commercial investment has transformed VR and AR from science fiction into reality, making smart headsets an everyday object in information technology (IT) culture and society. In particular, 3D stereoscopic visual systems that track the user’s head, hand, and eye movements in space, accompanied by fully immersive sound environments, have been very successful in creating unreal or virtual worlds [
8].
AR systems, which incorporate digital data into the real environment, allow users to perceive digital recreations without losing their perception of the physical world. The AR approach offers less experienced IT users a more familiar way to interact with data, since it allows information to be presented in layers. In this way, information is adapted to fit into real space and time without overloading users, while maintaining the integrity of the original work within a more natural presentation of related data.
The first version of the Microsoft HoloLens glasses, introduced in 2016, has been used in different contexts, including education, tourism, entertainment, medicine, architecture, and manufacturing. This head-mounted display (HMD) can combine tangible physical aspects with virtual elements while scanning the space, thus allowing free movement while keeping the virtual objects correctly placed in the real space. The glasses are portable, relatively light, and self-sufficient, since they do not need to be connected to a PC. The device has two key features: on the one hand, it can track the space and the user’s hands, enabling virtual contents to be placed and manipulated with simple gestures; on the other hand, stereoscopic images are generated in the central part of the transparent visor to blend the user’s natural vision with digital objects. In addition to gestures, the system can be controlled with voice commands and, indirectly, by the user’s location in space.
In 2019, the second generation of Microsoft HoloLens glasses appeared, and news about new AR devices and prototypes continuously appears in the media, from small and large companies such as Apple, Google, Facebook, Magic Leap, and Nreal, demonstrating that the competition for AR has just begun. The challenge for these companies is to offer more affordable devices for a broader audience while maintaining the same capacity as the Microsoft HoloLens glasses to anchor digital contents to our natural visual perception.
Superimposing information on the real world stimulates museological practice to develop a more accessible interface with audiences, encouraging visitors to achieve a better understanding of archives, objects, places, and their history and making the real world more enchanted [
9].
As pointed out before, there is a need for museums to engage new audiences, especially the younger generations. Technology can help with this purpose, but it is not enough by itself. There is a need to include an emotional component, applying scenography and theatrical techniques, interactivity, and empathy, to create the experience as a whole. With this, museums and heritage sites could offer a new manner of showing and presenting their findings, giving the visitor a reason not only to repeat the experience but also to recommend it to others.
In order to address this need, this work presents a new methodology for designing museum visitor interfaces based on augmented-reality devices combined with scenography and theatrical techniques.
In particular, this article explains the challenges encountered while designing and implementing the experience and creating the scenarios and dynamics to amaze visitors and engage them with complex concepts of our history. The results of the research study indicate that the application is especially effective in involving audiences with the museum’s contents. A novel manner of storytelling related to heritage has been specified, testing different interactivity mechanics that allow content to be customized according to the visitor’s profile and interests while taking a trip back in time. A natural interaction with the contents has also been developed, adding some gamification techniques to encourage visitors to draw connections between the real and the virtual.
1.2. Related Work
One of the first museums to incorporate VR technology into its educational program was the British Museum. It offered the possibility of exploring a reconstruction of the Bronze Age from 3D-scanned pieces in their collection [
10]. In 2017, VR was ready to be implemented in the physical and real museum. An example is the experience developed at London’s Tate Modern, focused on Modigliani’s studio [
11].
Similarly, AR has been used successfully in conjunction with tablets and smartphones in projects that demonstrate how museums can benefit from this technology. The use of the Microsoft
HoloLens glasses to enhance visitor experiences has begun to flourish in museum presentations and heritage centers. For instance, Leiden’s Rijksmuseum collaborated with the University of Delft to exhibit the Egyptian Temple of Teffeh [
12,
Bovington’s UK Tank Museum showed a mixture of missing (digitally recreated) and actual conserved German tanks from the Second World War [
14].
Smart headsets are appropriate for use in science museums, since they can help to explain complex processes and techniques. They are also of interest in historical and archaeological museums because they are able to reconstruct contexts. Recently, a few articles on this topic have been published; an interesting example is the experience designed for the Egyptian Museum in Cairo [
15,
16]. That research focuses on the idea of replacing the museum guide, analyzing the technical issues of displaying contents and interacting with digital data when using head-mounted displays such as the Microsoft HoloLens glasses. Other similar articles merely reproduce the classical museum visit with the new medium, for example, presenting a prototype focused on displaying information about a single object, without benefiting from the possibilities of this new medium [
17,
18,
19,
20].
The continuous evolution of technology will enable museums to steadily improve the interface between audience and artifact. If museums can develop a multimedia approach that exploits the new generation’s familiarity with IT devices, they could create an evolutionary leap in the way museums reach out to new audiences. With augmented-reality headsets, the ruins of buildings can be used as a foundation on which to overlay digital information about them. Normally, in excavations, a series of data is obtained from which interpretations are made. Consequently, in most cases, an “approximate” and provisional reconstruction of the buildings is produced, which gives an idea of what they were like [
21]. This is why an archaeological museum was selected and why an immersive storytelling experience was created for the smart headsets, taking the user on an emotional journey and exploiting all the capabilities of the device.
1.3. Aims and Contributions
The main objective of this work is to present a novel methodology based on augmented-reality techniques combined with stage and theatrical techniques to produce a more emotive, intuitive, and natural manner to explore heritage. Although the proposed methodology is general and can be applied to other areas, it is specifically designed to enhance the museum visit experience.
Without loss of generality, the proposed methodology is applied to a particular case study: the Almoina archaeological museum (https://cultural.valencia.es/es/museu/la-almoina-centro-arqueologico/, accessed on 25 November 2021). The Almoina museum is located in the city center of Valencia, Spain, and holds archaeological findings of great value dating from the Roman period to the Middle Ages. This paper fully describes the human-machine interface designed for the Almoina museum following the proposed methodology and using the first-generation Microsoft HoloLens glasses as the augmented-reality device. The effectiveness of the proposed approach is shown with several usability tests as well as observational studies carried out with visitors to the Almoina museum.
The main contributions of this work are:
To determine what type of storytelling can meet the requirements of Museography 4.0, which are that it has to be immersive, experiential, naturalized, narrative, interactive, intelligent, gamified, transmedia, and social.
To design a general methodology to produce a more emotive, intuitive, and natural manner to explore heritage.
To develop a methodology to present contents with augmented-reality HMD and integrate virtual and physical information of objects in a user-friendly environment.
To present all the steps followed to develop a functional prototype for the Almoina archaeological museum based on the proposed methodology.
To study the usability and effectiveness of the developed prototype based on the opinion of the museum’s visitors.
1.4. Content of the Article
This paper is organized as follows:
Section 2 fully describes the augmented-reality-based user interface and methodology used to develop the proposed application. Next, the feasibility of the proposal is proved in
Section 3 with usability tests. Finally,
Section 4 presents a discussion about the results obtained, and
Section 5 presents the conclusion of this work.
2. Proposed Approach
This work presents a novel methodology based on augmented-reality techniques combined with stage and theatrical techniques to produce an emotive, intuitive, and natural way to explore heritage. The interface is specially designed to enhance the museum visit experience. The methodology incorporates a digital audiovisual production pipeline to generate an emotional reaction in visitors, who can engage with a virtual guide that comes close to a human presence in space.
The proposed methodology is applied to develop a novel human-machine interface (HMI) for the Almoina archaeological museum, located in Valencia (Spain). However, this methodology is extensible to other types of museums, such as science museums, history museums, or natural history museums, among others.
Next, a complete description of the methodology and the developed HMI is given.
2.1. Methodology
From the beginning, it was important to find out what AR headsets could contribute to the museum visit that previous solutions, such as augmented reality on mobile devices or a simple poster placed on the museum wall, could not provide. Therefore, it was important to design an application that went further, being sensitive and impressive while incorporating digital data in a natural manner.
“Immersive design” was used to meet storytelling requirements such as media convergence and a more natural way of relating to digital data. Immersive design is a process normally used in fields such as architecture, video games, art, or education [
22] and was adapted here to the museum context.
The original idea was to implement a non-linear immersive experience in a real museum. The design had to be adjusted to the museological requirements, so it was important to determine what those qualities could be; this set of qualities has been named Museography 4.0 [
23]. Museography 4.0 is the set of techniques and practices relating to the functioning of the museum, which have evolved from analogue museography towards the natural, immersive, and intuitive integration of digital data in the exhibition context. It can be immersive, experiential, naturalized, narrative, interactive, intelligent, gamified, transmedia, and social.
It must be connected to a fully immersive and natural experience facilitated by augmented-reality devices. Narrative can be used to create innovative storytelling, allowing interactivity and the personalization of information. It can use artificial intelligence to anticipate the user’s needs and gamification techniques to initiate and sustain user motivation for a deeper exploration of the information and contents of presentations.
In our analysis, Museography 4.0 can benefit from the use of transmedia, where the main narrative is articulated and adapted to different media in changing situations. It can also be social and culturally responsive.
2.1.1. Production Scheme
Based on the proposed objectives, several blocks were created in the phases of the project: analysis, design, development, and implementation, as seen in the following diagram in
Figure 1.
In this scheme, sections that were completed are marked in green, and actions related to the future official implementation are colored in blue.
Following the production scheme outlined above, the resulting actions are listed:
Analysis: Analyze the contents to be represented and determine how they will be presented.
Design: Design a narrative script that contains museography 4.0 requirements, with the appropriate interactivity. Design the environmental sound and architectural elements and objects to be recreated in 3D and 2D, as well as the figuration of human characters.
Development: Create a technical script that develops the storytelling. Create the 3D assets, images, and videos. Choose and prepare an actress to act as a guide. Record the scripts on a chroma set. Post-produce the videos to be integrated into the computer program. Choose the music and sound effects.
Implementation: Coordinate and guide the programming of the application with the programmer. Run the first integration and functionality tests. Draw conclusions and analyze the results.
2.1.2. Storytelling Requirements
After several meetings with the museum’s director, it was decided to bring the virtual city to life in the Republican period, at the time of the first settlement of the city in the 2nd century BC. This made it possible to focus the discourse on the first civilizations, their religious rites, commerce, etc.
Most city reconstructions focus on architectural representation, in which the volumes of the buildings are shown without attending to the life of the people. For this reason, it was agreed to show what the city was like at that time, what people inhabited it, and what kind of life they led, in order to try to reproduce the feeling of having traveled through time.
2.1.3. Contents and Interaction Design
The application was focused on the visitor’s interaction. For this reason, each user could decide what content they were interested in and could go deeper into it, according to their interest. Different narratives were presented for each building to expand the information on the chosen topics. The main topics were: the foundation of the city, the Horreum and commerce, the sanctuary and religious life, and the thermal baths and social life (see
Figure 2).
2.2. Augmented-Reality-Based Interface
It was determined that each scene of the tour should have a brief introduction conducted by a human guide in the form of a video hologram, complemented with short audio messages of less than a minute each and some extra visual material (photographs and illustrations) that help visitors to understand the concepts explained by the guide. All the contents were corrected and confirmed by the director of the museum.
The interaction of the application follows the principles of natural interaction to avoid problems derived from the use of computers, such as pressing buttons or choosing items from menus. The application for the AR headsets was designed with a short introduction that teaches how to operate the device during the experience. At the beginning of the experience, a life-size video image of a person, the virtual guide, was the first thing users could see and hear when they put on the headset. The guide invited the visitor to come closer to a specific location in the museum to continue the tutorial. After that, the program measured the approach distance until the user was close enough, and then it activated the next part of the multimedia sequence, welcoming the visitor, presenting the historical visit, and teaching them how to activate the virtual objects in order to discover each of the sequences of the tour.
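As a rough illustration only, the distance-based activation described above could be scripted in Unity C# along the following lines; the component name, field names, and the 1.5 m threshold are illustrative assumptions and not the published code of the Almoina application.

```csharp
using UnityEngine;

// Hypothetical sketch: activates the next part of the multimedia sequence once
// the visitor (the headset camera) walks close enough to the virtual guide.
public class ProximityTrigger : MonoBehaviour
{
    [SerializeField] private float activationDistance = 1.5f; // metres, assumed value
    [SerializeField] private AudioSource welcomeAudio;        // guide's welcome message
    [SerializeField] private GameObject welcomeSequence;      // objects of the next sequence

    private bool activated;

    private void Update()
    {
        if (activated) return;

        // Camera.main is the HoloLens viewpoint, i.e., the visitor's head position.
        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        if (distance <= activationDistance)
        {
            activated = true;
            welcomeSequence.SetActive(true);
            welcomeAudio.Play();
        }
    }
}
```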
In previous experiments, it was noticed that some users had difficulty commanding the headset with gestures such as the “air tap”, a finger tap in the air [
23]. Implementation of voice commands was also problematic in the museum because of noise, so it was decided that an interaction be designed that depended exclusively on the visitor’s position and the place where they looked. This way of operating, already common in many virtual-reality systems, works with a timer that counts the seconds an object is marked by the gaze and triggers the animation of a circle of light to show the visitor the progression of the activation. In this sense, it was decided that there would be only two activators: one in the form of a medallion/banner to activate each of the four sequences into which the tour was divided, and the other a coin that opened specific pieces of information for each sequence.
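A minimal sketch of such a gaze-dwell activator, assuming a plain Unity raycast from the head rather than any particular toolkit, an illustrative two-second dwell time, and a Collider on the target object, might look as follows.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: fires an event after the visitor has kept a banner or coin
// under their gaze for a fixed dwell time, while a simple progress indicator
// (the "circle of light") grows to give visual feedback.
public class GazeDwellActivator : MonoBehaviour
{
    [SerializeField] private float dwellSeconds = 2f;    // assumed dwell time
    [SerializeField] private Transform progressCircle;   // visual feedback element
    [SerializeField] private UnityEvent onActivated;      // e.g., start the scene animation

    private float gazeTime;
    private bool done;

    private void Update()
    {
        if (done) return;

        // Cast a ray from the head position along the gaze direction;
        // the object needs a Collider for the ray to hit it.
        Transform head = Camera.main.transform;
        bool gazed = Physics.Raycast(head.position, head.forward, out RaycastHit hit)
                     && hit.collider.gameObject == gameObject;

        gazeTime = gazed ? gazeTime + Time.deltaTime : 0f;

        // Drive the progress feedback (scaled here; a radial fill could be used instead).
        float progress = Mathf.Clamp01(gazeTime / dwellSeconds);
        progressCircle.localScale = Vector3.one * progress;

        if (progress >= 1f)
        {
            done = true;
            onActivated.Invoke();
        }
    }
}
```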
The diagram shown in
Figure 3 presents the flow of the interaction, divided into a first part where the application is presented and where the user is trained to activate banners and coins, and a second part, free movement, where the visitor is given freedom to move through space to discover the sequences marked in space.
2.2.1. Characters and Voices
It was decided that the guide character be based on the myth of Clelia (Latin Cloelia), one of the most recognized heroines of Rome during the Republic. Storytelling was conducted by a real actress playing the role of Clelia, using highly effective theatrical techniques to humanize the virtual experience. Thus, the Roman woman invited the user to begin the visit by approaching her, and then spent around three minutes teaching them how to activate the holographic contents arranged in the space.
Clelia was the presenter for the four scenes and the main narrative, and a male voiceover appeared when an informative coin was activated to complete the presentation (see
Figure 4).
2.2.2. Emplacement
The storytelling was linked to the different areas of the museum already discussed, related to the Republican Roman ruins: the foundation of Valentia, the Horreum and commerce, the sanctuary and the water, and the thermal baths and social life, marked in blue on the museum map; see
Figure 5.
Each of the information points, shaped as coins, was built as a synchronized animation about two minutes long, composed of 3D figures, pictures, and diagrams that appeared during the voice explanation, exactly in the proper place in the museum, to help visitors understand the value of the ruins.
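To give an idea of how such a synchronized sequence might be assembled, the following Unity C# sketch reveals pictures, diagrams, and 3D figures at given offsets of the voice-over; the component and field names are hypothetical and only approximate the behaviour described above.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: a coin's content as a timed sequence in which visual
// elements are revealed at given time stamps of the voice-over explanation.
public class CoinSequence : MonoBehaviour
{
    [System.Serializable]
    public struct TimedReveal
    {
        public float secondsFromStart; // when the element should appear
        public GameObject element;     // picture, diagram, or 3D figure
    }

    [SerializeField] private AudioSource voiceOver;   // the narrator's explanation
    [SerializeField] private TimedReveal[] reveals;    // elements ordered by time

    public void Play() => StartCoroutine(RunSequence());

    private IEnumerator RunSequence()
    {
        voiceOver.Play();
        foreach (TimedReveal reveal in reveals)
        {
            // Wait until the voice-over playback reaches the element's time stamp.
            yield return new WaitUntil(() => voiceOver.time >= reveal.secondsFromStart);
            reveal.element.SetActive(true);
        }
    }
}
```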
2.2.3. Display Banners and Associated Narratives
To differentiate content levels, different types of interactive labels were designed. On the one hand, four Roman banners indicated each of the scenes in the museum: the Crossroads, with a tale of the origin of the city; the Sanctuary of Asclepios, telling the religious traditions of the Romans; the Horreum, introducing the way of trading in this period; and the baths, with a story about the social life in this building. Clelia briefly explained the related contents of each area marked by a banner, accompanied synchronously by a multimedia sequence that showed the digital reconstruction of that area as it was in the Roman period (see example in
Figure 6a).
Once the contents of each banner had been activated and the reconstruction of the building was finished, the user could continue enjoying the scene by discovering some coins spread in the area (see
Figure 6b). These coins, based on the coin minted in the city of Valentia during the Republic, marked different points of interest that the user could trigger by gazing at them. For example, in the case of the Sanctuary, after listening to Clelia’s introduction and watching the reconstruction of the building, the visitor had the chance to activate each of the three coins suspended in the space, one at a time. The small stories of each coin helped to explain the importance and symbology of the god Asclepios in Roman times.
2.2.4. Video Production and Postproduction
The videos of Clelia were produced on a chroma set at the Polytechnic University of Valencia. After a casting process, which lasted several days, it was decided that the role of Clelia be played by a young local actress. The costumes and jewels were purchased for the occasion based on the characteristics described in the history books and expert advice (see
Figure 7).
During filming, a teleprompter was used to make it easier for the actress to read the texts. The camera used was a Lumix 4G, and the footage was recorded in 4K resolution to maximize the possibilities of re-framing and zooming the character in post-production. Audio was captured with a Sennheiser professional wireless lapel microphone concealed in the actress’s neckline.
Post-production was carried out with Adobe After Effects CC 2017, where the figure was isolated from the environment to be inserted into the program (see
Figure 8). The presentation sequence was especially complex, since it had to be divided into six moments in which Clelia instructs the visitor; three of these are character loops that allow the system to wait for the visitor’s reaction. In the image below, the sections defined as loops are marked in pink and those played only once in green.
The videos were exported with a square resolution of 1000 × 1000 pixels in a special format, WebM, which was the only one able to preserve a transparency channel when the figure was integrated into the video game engine. We found that this transparency is essential to create a credible integration of Clelia with the 3D virtual objects that appear around her; otherwise, she would look like a rectangular cut-out, like a TV screen, with no natural relation to the rest of the virtual set.
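A minimal sketch of this setup in Unity C#, assuming the WebM clip has been imported with its alpha channel preserved and that the quad uses an unlit, alpha-blended material, could look as follows; the component name is illustrative.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Rough sketch (assumed setup): plays the chroma-keyed guide video, exported as
// a WebM clip with an alpha channel, on a quad so that Clelia blends with the
// surrounding holograms instead of appearing as an opaque rectangle.
[RequireComponent(typeof(VideoPlayer))]
public class GuideVideo : MonoBehaviour
{
    [SerializeField] private VideoClip clip;   // e.g., one of Clelia's loops

    private void Start()
    {
        VideoPlayer player = GetComponent<VideoPlayer>();
        player.clip = clip;
        player.isLooping = true;
        // Write the decoded frames (including alpha) into the quad's material.
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
        player.Play();
    }
}
```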
2.3. Application Development
2.3.1. Application Programming
Programming was done with the Unity video game engine, an authoring tool for creating programs for the Microsoft HoloLens glasses. With this tool, all the materials and assets, such as photos, videos, music, 3D objects, and animations, were integrated to create the interactive scenes. The programming language used was C#, and some specialized libraries for AR applications, such as the Microsoft Mixed Reality Toolkit, were used to accelerate production, making it easy to handle sensor input, detect the user’s interactions, and react accordingly (see
Figure 9).
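As an example of how such a toolkit can simplify input handling, the sketch below (assuming MRTK v2 is imported into the project) reacts to the user's gaze or pointer focus through the toolkit's focus events instead of manual raycasting; the component and field names are hypothetical.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Illustrative sketch: shows a highlight while a banner or coin is focused by
// the user's gaze or pointer, using the Mixed Reality Toolkit's focus events.
public class BannerFocusHighlight : MonoBehaviour, IMixedRealityFocusHandler
{
    [SerializeField] private GameObject highlight; // visual cue shown while focused

    public void OnFocusEnter(FocusEventData eventData) => highlight.SetActive(true);
    public void OnFocusExit(FocusEventData eventData) => highlight.SetActive(false);
}
```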
During this phase, the integration of the audio-visual elements in the computer program was coordinated, and many graphic development questions could only be solved through experimentation with the video game engine with which these types of applications are programmed.
After integrating all the elements and programming their behavior, the program was uploaded into the Microsoft HoloLens glasses to carry out the functional tests in the real space.
2.3.2. Asset Development
For the design of the buildings and stages, it was decided that the existing images from the virtual reproductions made by the Almoina museum be used, to which some three-dimensional elements, such as plants or mosaics on the walls, were added to give a feeling of life in the buildings.
The appearance of the three-dimensional elements in the museum space was programmed to occur gradually, so that the scene was composed while Clelia commented on the contents associated with that building over background music. In this sense, it was decided that materials be created that could be shown as dissolutions of matter in space, instead of simply popping into view, since the virtual elements must appear organically to enhance the magic of the moment of their appearance, as if rebuilt by traveling back in time. The materials were created with a special shader programmed with a variable that can be animated to make the objects appear or disappear when necessary, following an organic pattern that helps the visitor gradually perceive the correspondence between the position of the virtual objects and the ruins.
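The C# side of such an effect can be sketched as follows: a coroutine animates a float property of the dissolve material over a few seconds. The property name "_DissolveAmount", the custom shader itself, and the duration are assumptions, since the actual shader used in the project is not published.

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: gradually "materialises" a reconstructed building by animating
// a float property of its (assumed) dissolve shader from fully dissolved to visible.
public class DissolveIn : MonoBehaviour
{
    [SerializeField] private float duration = 4f;   // seconds for the building to fully appear

    public void Appear() => StartCoroutine(Animate(1f, 0f)); // 1 = dissolved, 0 = visible

    private IEnumerator Animate(float from, float to)
    {
        Material material = GetComponent<Renderer>().material;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            material.SetFloat("_DissolveAmount", Mathf.Lerp(from, to, t / duration));
            yield return null;
        }
        material.SetFloat("_DissolveAmount", to);
    }
}
```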
2.3.3. Integration and Functionality Tests
To check the feasibility of the application, different tests were carried out. Specifically, three formal sessions were conducted. First, two integration tests were developed to check the proper functioning of the program (see
Figure 10). These types of tests are normally used in the design of computer applications. After some adjustments were made, a usability test was applied to validate the first alpha version of the application in the museum. In addition, an observational study was carried out to complement the data needed to check the feasibility of the application.
Each integration test involved five people. In both sessions, the objectives to be analyzed were clearly defined. Likewise, in both tests, the focus was placed on assessing whether the environment felt natural to the user, as if it were an experience like any other visit to a museum.
The first session focused on testing the interactive possibilities of executing the contents through the gaze. For this reason, a first test was carried out before definitively settling on the interactivity mode. The second session focused on analyzing the functionality offered by the narrative and the audio according to the visitor’s location within the Almoina space. Subsequently, the pertinent improvements were made so that the buildings were adjusted to the plan of the ruins and the contents were placed in the corresponding space.
The first integration test was carried out with a set of non-definitive materials to speed up the verification of functionality in the headsets. For this task, the necessary shots were recorded to assemble the introduction and the tutorial without the help of the actress. The banners and coins, created with a 3D modeling program for this test, were integrated in Unity, and the materials of these objects were assembled with images previously treated with Adobe Photoshop. These functions were first tested within the video game editor itself, in simulation mode, with a joystick connected to the development PC, with which the headset wearer’s movement and gaze direction can be simulated.
At this stage, it was agreed to review the literary script to adjust the narrative elements to integrate the desired animations and rhythm. Likewise, the three-dimensional elements necessary to reconstruct the scenes were integrated from a combination of objects obtained from free-use libraries and models created by ourselves, all adapted to follow the plan of physical reconstruction and historically consistent with the guidelines of the Museum.
Finally, a functional alpha version was generated on Microsoft
HoloLens glasses, with the final footage shot with the actress. The animations and the reconstructions of the presentation and tutorial were then developed (see
Figure 11).
3. Experimental Results
In accordance with [
24,
25,
26], different methodologies such as usability tests of applications, which are commonly used in the verification of hardware and software, along with presence questionnaires, surveys, and in-depth interviews, were used to evaluate and validate the effectiveness of the proposed AR-based interface for guided tours in museums.
Ten people were invited to visit the museum and test the application on site. They had different profiles (5 women and 5 men, aged from 18 to 58 years). Additionally, they were given the opportunity to provide comments and suggestions in order to obtain extra information beyond the usability and presence questionnaires.
Specifically, the System Usability Scale (SUS) questionnaire was carried out to check the usability of the application (see
Table 1) [
27]. This test took an average of 50 min, including the training with the virtual guide Clelia (approx. 8 min), the visit to the content (approx. 30 min), and finally some time to respond to the questionnaire and interview (8 to 10 min).
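For reference, the standard SUS score on the 0–100 scale is obtained from the ten item responses $s_1,\dots,s_{10}$ (each rated from 1 to 5) as

$$\mathrm{SUS} = 2.5\left[\sum_{i\in\{1,3,5,7,9\}}(s_i-1) \;+\; \sum_{i\in\{2,4,6,8,10\}}(5-s_i)\right],$$

so that higher agreement with the positively worded (odd) items and disagreement with the negatively worded (even) items both raise the score.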
As a result, the overall perceived usability was 84.67 out of 100 (min: 79.9; max: 100; SD: 17.41) for the ten participants. This result means that the proposed interface reached a high level of usability. In addition,
Figure 12 shows the results obtained for each question of the SUS questionnaire. It is remarkable that most of the participants indicated that they would use this interface frequently and found the interface easy to use. The participants also indicated that all the interface functionalities were well integrated and that the proposed interface was consistent. Moreover, participants felt confident with the interface. An initial tutorial might be needed to learn the interactivity, but once it is explained, the system is easy to use.
In addition, the presence questionnaire (PQ) was filled by visitors [
28,
29,
30,
31,
32,
33]. PQ contained 24 items in the form of closed-ended questions on a scale of 1 (“not at all”) to 7 (“completely”).
Figure 13 shows the results of the PQ. Specifically, the “realism” score obtained a mean of 6.60 out of 7 with a standard deviation of 0.44, while the “possibility to act” score obtained a mean of 6.15 out of 7 with a standard deviation of 0.55. The “quality of interface” score obtained a mean of 6 out of 7 with a standard deviation of 0.95, while the “possibility to examine” score obtained a mean of 6.47 out of 7 with a standard deviation of 0.12. The “self-evaluation performance” score obtained a mean of 6.25 out of 7 with a standard deviation of 0.07, while the “sounds” score obtained a mean of 6.73 out of 7 with a standard deviation of 0.25. These results demonstrate that the objectives presented in the preparation of the methodology were achieved. All users demonstrated engagement with the digital avatar together with the holograms that recreated the scenarios, with music and effects.
Moreover, the Igroup Presence Questionnaire (IPQ), based on 14 questions on a scale of 1 (“not at all”) to 7 (“completely”), was used [
34]. This questionnaire is normally used for testing virtual-reality-based interfaces, and some of the questions included in the original version are not appropriate for augmented-reality-based interfaces (see [
35] for more details about the differences between augmented-reality and virtual-reality interfaces). For this reason, only five questions from the IPQ were used in this study (see
Table 2).
Figure 14 shows the results of this study. The majority of the users indicated in Q11 that they were extremely aware of the surroundings, which is characteristic of AR HMD experiences (mean: 6.90 out of 7; standard deviation: 0.32). However, the results for Q12 indicate that users were also aware of the virtual elements and felt surrounded by them (mean: 6.90 out of 7; standard deviation: 0.32). These kinds of results are typical in augmented-reality applications and indicate that users are interacting with the virtual world without losing contact with reality. Another important result was that the majority of the users indicated that they felt surrounded by the holograms (mean: 6.50 out of 7; standard deviation: 1.27, in Q13). This is precisely the effect that all augmented-reality interfaces seek in order to engage the user and transmit the required information in a natural manner. The users also indicated that the holograms did not interfere in the moments when they wanted to pay attention to the real elements (mean: 6.60 out of 7; standard deviation: 0.97, in Q14). At the same time, they were captivated by the holograms when needed, paying attention to them and receiving the information in a natural manner (mean: 6.20 out of 7; standard deviation: 0.79, in Q15).
The above results support the effectiveness of the augmented-reality interface developed with the proposed methodology, in terms of both usability and visitor perception.
In addition to these results, the observational study showed that users took an average of 55 min using the application, and 93% executed the training correctly. Ninety percent entered the application via the Roman Banner, and one user directly entered a coin first. Most of them stood in front of the holograms and did not move around them. Most of them tried to touch the holograms.
The users understood very well how to follow the tutorial that explained how to activate the contents with their gaze. None of them needed help, and they moved naturally around the room.
Several users complained that there were large differences in volume between the audio of the guide (Clelia) and that of the other narrator. Some of the dialogues could be improved by making them shorter and more open to questions.
In general, most of the users were very surprised, enjoyed the experience, and understood the associated content.
The team was satisfied with the results of these usability tests; they helped to verify that the proposed methodology is functional, meets the stated requirements, and is viable.
In the interviews, most of the participants mentioned that the field of view was small. Some of them had problems adjusting the Microsoft HoloLens glasses over their own eyeglasses and felt uncomfortable after 30 min.
4. Discussion
The results of the previous section show that the user experience was successful when exploring the exhibition with the holographic contents activated through the interface. However, some comments made by the users should be considered when developing future versions of the proposed interface, as discussed below.
Users highlighted the attractiveness of the application due to the novelty of the medium. They also pointed out that it was very useful for understanding the origins of the ruins and what they had belonged to. Moreover, they highlighted the ease of interaction with the virtual guide, the banners, and the coin labels. One user suggested adding more voices to make the storytelling more inclusive. The authors consider this a good suggestion that should be taken into account.
Arguably, the main complaint of participants was that the field of view (FoV) of the device was too small. In particular, the FoV of the Microsoft HoloLens glasses used in the tests is a 34-degree angle. Therefore, participants could see digital objects interacting with the real world while looking straight ahead, but if they turned their head a little, digital objects disappeared or were cut off. It is important to note that users explained that, once they were involved in the visit, they forgot about the problem of the field of view and instead became involved in the storytelling. Nevertheless, the second version of the Microsoft HoloLens glasses has improved this issue, and its FoV is now a 52-degree angle.
Some visitors also expressed concerns about the fatigue produced by the weight of the headset. Moreover, they pointed out the difficulty of wearing it together with their own eyeglasses. The second-generation Microsoft HoloLens shifts the weight toward the back of the head precisely to mitigate this problem.
Despite these issues, users indicated that they finally understood the history of the ruins and their origin and that they had enjoyed the experience enough to repeat it, pay for it, and even recommend it to friends.
Currently, the market for AR headsets is growing, with companies such as Nreal developing AR headsets that promise features similar to those of the Microsoft HoloLens glasses at a much lower cost [36], working as peripheral devices that can be connected to any Android phone. There are still many problems to be resolved, but with devices of this type and the knowledge gained in this research, we consider that the integration of AR solutions in museums lies in the near future.
5. Conclusions
It can be concluded that such specific methodologies are feasible for use by current museums. The developed experience has provided a fertile space to tell stories, fostering a magical, emotional, and spiritual environment in which the user becomes more open, active, and sensitive to stimuli.
As a result of applying the principles of Museography 4.0 to the Almoina experience, it can be concluded that:
A storytelling methodology was designed that is adapted to the wishes and profiles of visitors, placing them at the center of the experience and creating visits with a marked experiential character.
The intuitive, “see-through” manner of operating with augmented-reality media helps to eliminate the barriers that many individuals face with digital media.
A novel way of approaching stories related to heritage was specified, testing a type of interactivity that allows content to be personalized according to the visitor’s profile and interests.
The use of audio, video, and animated 3D recreations that surround the visitor helped to make the experience much more immersive, bringing the user closer to the feeling of taking a trip back in time.
The development of a Museography 4.0 that understands the synthesis between traditional exhibition forms and their fusion with digital media can help museums to use new technologies effectively, with the aim of successfully incorporating new audiences.
It has been shown that augmented-reality devices introduce new means of communication capable of delivering unique immersive experiences that will have a huge impact on society and museums. Although these technologies are still at a very early stage of development, and it is difficult to fully assess their potential, it is nevertheless possible to foresee that their mere appearance is already having a significant impact in the context of museums. In this sense, more and more publications on museums and AR are appearing, and the number of experiences carried out by different museums is increasing.
Nevertheless, the introduction of augmented-reality smart headsets in the museum context offers some challenges to be analyzed:
In terms of production, the need for professionals with hybrid profiles and the need for scientific 3D reconstructions will involve some additional media production costs.
Regarding design, the dependency between the design of the interaction rules and the specific story that needs to be told must be taken into account.
The model of exploitation must consider the high costs and maintenance of devices.
The use of certain strategies, such as interactive languages from the world of video games or the development of narratives through staging and theater, can help to generate valid experiences in the context of museums. Through the development of our experience, it has been verified that these types of mechanics are very effective in involving audiences with the museum’s contents. Proposing the discovery of information through the exploration and activation of coins distributed in space helped Almoina AR to generate a non-linear guided tour. Other ideas brought from the world of video games, such as the use of a virtual guide, can encourage visitors to learn how to interact with the holographic content effortlessly, following a non-technical narrative thread from the beginning. Likewise, it was verified that scenes that react to the visitor’s proximity have enormous potential, especially when the visitor’s body position in space is linked to automatically triggered animations, giving the visitor the feeling of receiving a reward.
An experimental case has been presented that brings innovation with respect to these types of devices and narratives. It is a new medium, and this experience has only confirmed that there is still much to discover. As for future research, in the short term, part of the implementation of Almoina AR is expected to be carried out with the second version of the Microsoft HoloLens glasses, creating the opportunity to validate the system with different types of users and museum visitors.
It is precisely the ability of these devices to present digital information linked to the physical world in a perceptively non-aggressive way that can allow visitors to relate to each other and experience the visit in a natural and social way. Likewise, in the examples developed, it was verified that the different narratives associated with the museum’s contents gain strength when they are presented in the room holographically, creating a binding dialogue between the real and virtual elements that share the same perceptual space, that is, the museum room.
Museography 4.0 will allow museums and their objects to have a central role, as they did in traditional museography, making the experience of the visit attractive, interactive, and motivating.
In addition, potential further work includes a comparative study between the different methodologies and technologies currently used in museums and the one proposed in this work. This study will need a significant number of visitors to test each proposal in order to highlight the advantages and drawbacks of each one. Based on the results of this study, it is expected that the proposed methodology will keep improving in order to develop more natural, intuitive, and emotive interfaces that enhance the museum visit experience.