Article

Reviving Antiquity in the Digital Era: Digitization, Semantic Curation, and VR Exhibition of Contemporary Dresses

Institute of Computer Science, Foundation for Research and Technology Hellas, N. Plastira 100, Vassilika Vouton, GR-70013 Heraklion, Crete, Greece
* Author to whom correspondence should be addressed.
Computers 2024, 13(3), 57; https://doi.org/10.3390/computers13030057
Submission received: 11 January 2024 / Revised: 19 February 2024 / Accepted: 20 February 2024 / Published: 22 February 2024
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

Abstract

In this paper, we present a comprehensive methodology to support the multifaceted process involved in the digitization, curation, and virtual exhibition of cultural heritage artifacts. The proposed methodology is applied in the context of a unique collection of contemporary dresses inspired by antiquity. Leveraging advanced 3D technologies, including lidar scanning and photogrammetry, we meticulously captured and transformed physical garments into highly detailed digital models. The postprocessing phase refined these models, ensuring an accurate representation of the intricate details and nuances inherent in each dress. Our collaborative efforts extended to the dissemination of this digital cultural heritage, as we partnered with the national aggregator in Greece, SearchCulture, to facilitate widespread access. The aggregation process streamlined the integration of our digitized content into a centralized repository, fostering cultural preservation and accessibility. Furthermore, we harnessed the power of these 3D models to transcend traditional exhibition boundaries, crafting a virtual experience unbound by geographical constraints. This virtual exhibition not only enables online exploration but also invites participants to immerse themselves in a captivating virtual reality environment. The synthesis of cutting-edge digitization techniques, cultural aggregation, and immersive exhibition design not only contributes to the preservation of contemporary cultural artifacts but also redefines the ways in which audiences engage with and experience cultural heritage in the digital age.

1. Introduction

In this paper, a methodology for the digitization, curation, and virtual exhibition of heritage artifacts is provided. As a use case, a distinct collection of contemporary dresses inspired by antiquity is showcased. The methodology builds on recent advancements in 3D technologies, specifically laser scanning and photogrammetry, which have significantly reshaped the landscape of cultural preservation. In contrast to conventional approaches, these methods enable the detailed capture and transformation of physical garments into digital models.
The current limits of technological advancements still require that these digital models be post-processed to create the final 3D models. The postprocessing phase further refines these digital replicas by removing digitization faults and merging partial scans of the digitized artifacts. It also ensures that a high level of accuracy in representing the intricate details and nuanced characteristics inherent in each dress is maintained while fusing the individual datasets.
Apart from digitization, which is an important part of preservation, effort should be invested into making the preserved artifact findable, accessible, interoperable, and reusable (FAIR) [1]. By following established standards in the cultural heritage (CH) sector and by facilitating open data infrastructures and content aggregators, this vision today can be transformed into reality [2,3]. In this use case, we present how a collaboration with the Branding Heritage organization [4] facilitates widespread access and streamlines the integration of our digitized collection of contemporary dresses into a centralized repository, fostering cultural preservation and accessibility.
However, the significance of the exploration presented in this work extends beyond digitizing artifacts. Building on the potential of making these 3D models widely accessible, our approach extends to virtual exhibition design and implementation. This virtual experience is crafted not only to enable online exploration but, more ambitiously, to invite participants to immerse themselves in a captivating virtual reality (VR) environment.
In this synthesis of cutting-edge digitization techniques, cultural aggregation, and immersive exhibition design and implementation, we contribute to the preservation and presentation of contemporary cultural artifacts. Moreover, we redefine how audiences engage with and experience CH in the digital age.

2. Background and Related Work

2.1. 3D Digitization

Over the past decades, 3D digitization technologies have advanced significantly and have been widely adopted in several application domains, including civil engineering [5], indoor environments [6], archaeology [7], underwater structures [8,9], geography [10], and health [11,12]. Early methods, such as structured light scanners, employed projected patterns to capture object geometry [13,14]. Subsequent advancements introduced laser scanners, offering enhanced accuracy and speed in capturing intricate details, particularly in controlled environments [15,16]. Moving forward, the integration of lidar technology revolutionized large-scale 3D scanning, providing rapid and precise data acquisition, especially outdoors. Photogrammetry, leveraging computer vision algorithms, emerged as a powerful tool, reconstructing 3D models from overlapping images with increasing accuracy [17,18]. The progression culminated in the democratization of 3D scanning through handheld devices, exemplified by smartphone apps like Trnio [19] and Poly.Cam [20], allowing users to generate detailed models conveniently on the go.

2.2. Application of 3D Digitization in Cultural Heritage

The application of digitization technologies to cultural heritage has reshaped the preservation and accessibility of historical artifacts [21,22,23]. Early on, structured light scanners found utility in capturing the details of objects like sculptures and artifacts [24,25]. As laser scanning evolved, its precision became instrumental in the preservation of architectural wonders, as exemplified by the comprehensive digitization of historical structures such as the Palace of Knossos [26,27]. Lidar technology has contributed to large-scale cultural heritage documentation, enabling the creation of detailed 3D maps for archaeological sites such as the ancient city of Petra in Jordan [28]. Photogrammetry has proven invaluable in reconstructing artifacts with high accuracy, notably in the preservation of CH artifacts [29]. In recent years, handheld devices and smartphone apps like Trnio and Poly.Cam [19,20] have empowered museums and cultural institutions to engage in on-the-spot digitization, offering immersive virtual experiences and expanding public accessibility [30,31]. These examples illustrate how the evolution of digitization technologies has diversified their applications, ranging from detailed object capture to the preservation of entire historical landscapes [32], revolutionizing the field of cultural heritage. At the same time, advances in photonics make promises for more experiential technologies in the future, including see-through head-mounted displays and advanced AR optics [33,34,35].

2.3. Virtual Clothing

The need to realistically represent clothing in virtual environments has led to numerous techniques for virtual cloth simulation. This discipline integrates mechanics, numerical methods, and garment design principles [36]. Recent advancements have led to the development of sophisticated simulation engines capable of accurately representing the complex behavior of cloth materials [37]. Techniques such as particle system models have evolved to incorporate nonlinear properties of cloth elasticity, streamlining computations and improving efficiency in simulating anisotropic tensile behavior [38]. Additionally, approaches like MIRACloth draw inspiration from traditional garment construction methods, utilizing 2D patterns and seam assembly to create virtual garments that can be animated on 3D models [37,39]. These methods ensure precise representation and measurement of cloth surfaces, crucial for achieving high-quality animations. Furthermore, enhancements in collision detection and resolution algorithms enable simulations to handle irregular meshes, high deformations, and complex collisions, thus expanding the scope of possible scenarios for cloth simulation [40]. The work presented in this paper can be perceived as complementary to the aforementioned advancements: the accurate 3D reconstruction of garments discussed in this work can enhance the capacity of these methods to deliver animated virtual clothing of extreme realism and quality.

2.4. Semantic Knowledge Representation and Presentation

Semantic knowledge representation has received increased attention in various application domains, including e-health [41,42], education [43,44], commerce [45,46], automotive [47,48,49], etc. Among these domains, semantic knowledge representation and presentation have an active role in CH preservation [50,51,52], particularly with the adoption of open data (OD) and linked open data (LOD) principles [53,54,55]. OD initiatives involve making cultural heritage information findable and interoperable, fostering collaboration between CH institutions and researchers [56,57]. LOD takes this a step further by establishing standardized, interlinked connections between disparate datasets [58,59]. The integration of ontologies, like the CIDOC Conceptual Reference Model (CRM) [60], provides a common semantic framework for CH data, enabling more coherent and interconnected representations of artifacts, events, historical contexts, and spatiotemporal dimensions [61]. This ensures a consistent and standardized approach to data description, facilitating interoperability across diverse cultural heritage collections. The Europeana project [62,63] is a notable example where LOD principles have been employed [64], allowing users to seamlessly navigate and explore a vast repository of cultural heritage artifacts from various institutions.

2.5. Virtual Exhibitions of Cultural Heritage Collections

Advances in virtual exhibitions and virtual museums within the CH sector have significantly transformed the way audiences engage with and experience historical artifacts and artworks [65,66]. Virtual exhibitions leverage digital technologies to create immersive and interactive online environments, providing a dynamic platform for the presentation of cultural heritage content. These exhibitions go beyond the constraints of physical spaces, allowing for the inclusion of a broader range of artifacts, contextual information, and multimedia elements. Institutions worldwide, from museums to galleries, have embraced virtual exhibitions as a means to reach global audiences, especially during times when physical visits may be restricted [67,68]. Cutting-edge technologies like augmented reality (AR) [69,70,71,72] and virtual reality (VR) [73,74,75,76,77] contribute to more engaging and authentic experiences, enabling users to virtually explore exhibitions as if they were physically present [78]. Notable examples include virtual tours of renowned museums, historical sites, and events [79,80,81], offering users the ability to navigate through exhibitions, zoom in on artifacts, and access additional information at their own pace. These advances in virtual exhibitions enhance accessibility, including for people with disabilities [82]. At the same time, recent developments redefine the traditional boundaries of cultural heritage presentation, fostering a more inclusive and immersive way for individuals worldwide to connect with and appreciate our shared cultural legacy.

2.6. Contribution of This Research Work

While significant progress has been achieved in the domains of 3D digitization technologies, semantic knowledge representation, and virtual exhibitions, especially concerning the CH sector, several research gaps persist. In this work, we propose a concrete methodology that builds on these advancements and, in combination, can help bridge these gaps.
Seamless interoperability and standardization across diverse cultural heritage datasets have not yet been achieved. The proposed methodology utilizes standard domain ontologies, such as the CIDOC CRM [60] and the Europeana Data Model (EDM) [83], for semantic knowledge representation. Thus, more precise descriptions of artifacts and events are supported while maintaining interoperability with standards-compliant CH knowledge.
In the domain of digitization, there is still no single methodology capable of achieving adequate results in all cases (e.g., indoors, outdoors, small scale, medium scale, etc.). As such, some form of fusion will always be required to achieve optimal results [84,85,86,87]. To this end, combining different technologies based on their strengths and weaknesses can make a difference. At the same time, in the proposed approach, we also lay the foundation for post-processing [88] the results of the technologies to get the most out of lidar scanning, laser scanning, and photogrammetry, especially in the case of dresses, where time-dependent variations in their structure make the registration of individual scans challenging.
Despite the growth of virtual exhibitions, ensuring the accessible dissemination of digitized content remains a concern. Our methodology addresses this by exporting curated data in RDF/XML [89] format and ingesting them into national aggregators like SearchCulture [90] and European platforms like Europeana [91], enhancing accessibility and exposure at national and European scales. At the same time, the raw data and the digitization outcomes become available as open datasets through Zenodo [92] to foster data reuse for scientific purposes.
User engagement and interaction are considered important parts of immersing in virtual exhibitions. Our methodology incorporates innovative platforms like the Invisible Museum [93] that allow the creation of virtual spaces and the definition of the rendering characteristics of artifacts to enhance the visual appeal and lifelike representation of artifacts while simplifying immersion through its versatile support for web-based or VR-based interaction.

3. Method

The proposed methodology outlines a systematic approach for the digitization, curation, and exhibition of a diverse collection, employing a multimodal strategy as presented in Figure 1.
In the initial phase, the items undergo a comprehensive digitization process incorporating various techniques. Detailed images are captured from multiple angles through photographic documentation, serving as the foundational dataset for subsequent procedures. Concurrently, geometric data are recorded using laser scanning equipment, capturing intricate details of the materials and embellishments with an operating accuracy of 0.1 mm. Finally, a mobile app can be employed as a good all-around solution for validating the reconstruction outcomes and as a data source in case a partial scan fails to synthesize the entire model.
In this work, a Nikon D850 was used for the acquisition of images. For laser scanning, the FARO Focus laser scanner was used, which is capable of creating accurate, photorealistic 3D representations of any environment or object in just a few minutes [94]. A limitation of the selected scanning equipment is that the heritage artifact cannot be covered in a single scan; thus, multiple scans are required per artifact. Finally, the Trnio mobile app [19] was employed for on-the-go 3D model generation through mobile phones, fusing lidar, depth, and photogrammetric methods. Data processing in Trnio happens on the cloud and requires no additional resources, which makes it ideal for our proposed multimodal approach. Unfortunately, Trnio had been discontinued by the time of writing this paper. For consistency in our methodology, we performed tests with alternative software, and we propose the use of Poly.Cam [20] as an alternative to Trnio.
In the next stage, the collected data are used to perform 3D reconstruction. To this end, three processes are proposed. The first is the photogrammetric reconstruction of the collected image datasets, which results in a mesh structure and a texture for each scanned artifact; the method and software are capable of producing an ultra-high-quality texture but a lower-quality mesh structure. The second is the post-processing of the lidar data, which results in a point cloud (directly from the measurements taken by the lidar scanner), an ultra-high-quality mesh structure (accuracy ~0.1 mm), and a lower-quality texture (synthesized by combining the colors of the measured individual points). The third is the post-processing of the mobile device data on the cloud; this method produces a medium-quality mesh and texture that, in the proposed method, serve as a reference and fallback dataset. In the use case of this work, PIX4Dmatic from Pix4D [95] was used for the photogrammetric reconstruction, and FARO SCENE [96] was used for the creation of the 3D point cloud from the laser scanner data.
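To make the lidar post-processing step more concrete, the following is a minimal sketch of turning a single exported scan into a mesh. It uses Open3D and Poisson surface reconstruction as stand-ins (the actual pipeline used FARO SCENE, as noted above); file names and parameter values are illustrative assumptions.

```python
# Minimal sketch: meshing one exported lidar scan with Open3D.
# Open3D stands in for the FARO SCENE workflow used in the paper;
# file names and parameters are illustrative assumptions.
import open3d as o3d

# Load a single exported scan (e.g., a PLY exported from the scanner).
pcd = o3d.io.read_point_cloud("dress_scan_01.ply")

# Estimate normals, which Poisson surface reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30)
)

# Poisson reconstruction: a high octree depth preserves fine detail
# at the cost of polygon count (the mesh is simplified later in Blender).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=11
)

# Export the mesh for curation in Blender.
o3d.io.write_triangle_mesh("dress_scan_01_mesh.ply", mesh)
```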
Following 3D reconstruction, the digitization process continues by curating the data in Blender [97], a versatile tool used here to refine and transform the collected data into high-fidelity 3D models. This phase is dedicated to preserving the essence of each item while ensuring accuracy in the digital representation. The main activities involve the application of modifiers to individual scans to achieve alignment, followed by their merging and simplification to produce the final mesh structure. Textures from the individual scans are then projected onto the combined mesh, and an averaged image stacking methodology is applied to combine the multiple textures into a uniform result.
The curation phase focuses on enhancing the accessibility and discoverability of the digitized collection. Collections are methodically organized in an open data repository, with each item assigned a unique Internationalized Resource Identifier (IRI). Simultaneously, to further enhance the FAIR qualities of the produced data, an online platform adhering to CH knowledge representation standards is used to enrich the metadata associated with each item and to document detailed historical context, materials, and cultural significance. This documentation provides a comprehensive digital resource for each artifact. Then, to broaden dissemination, curated data are exported in RDF/XML format for ingestion into a CH aggregator, adhering to LOD principles.
The methodology concludes by making data experienceable through the creation of a virtual exhibition using a digital authoring platform. This facilitates the design of virtual spaces, replicating the ambiance of a traditional museum setting while leveraging digital technologies. Rendering characteristics are configured to enhance visual appeal, ensuring a lifelike representation of each artifact.
Upon completion, the virtual exhibition is published, making it accessible online through standard web browsers and providing an immersive experience for users with VR headsets. This approach serves to preserve the collection in a digital format while establishing an interactive platform for the exploration and appreciation of cultural heritage.
An overview of the technologies employed in each step of the methodology and their functions is presented in Table 1.
In the following sections, each step of the aforementioned methodology is presented as applied in the context of the contemporary collection of dresses of the Branding Heritage organization.

4. Digitization

4.1. Multimodal Data Acquisition

To capture detailed and accurate representations of the dress collection, a comprehensive scanning methodology combining various techniques was employed. Initially, photographic documentation served as a foundational element, with photographs taken around the object from multiple angles with regard to the z-axis and at a fixed distance from the artifact, with an overlap ratio of approximately 50% between consecutive images. The careful acquisition of this dataset is essential to facilitate the subsequent photogrammetric reconstruction process. Next, to enhance the three-dimensional fidelity of the acquired data, a FARO Focus laser scanner was utilized to capture precise geometric data, and thus intricate details of the dresses, including texture and surface features. The laser scanner was positioned in four locations around the artifact, 45 degrees apart, and calibrated to scan only the part of the artifact visible to the scanner. Partial scanning rather than 360° scanning was selected to reduce the amount of unusable data acquired and to reduce both processing and scanning time. As part of the scanning methodology, Trnio played a crucial role as a versatile and accessible fallback solution for 3D model generation. Recognizing the need for flexibility and convenience, especially in environments where extensive scanning equipment might be impractical, Trnio proved an invaluable tool, allowing the swift capture of 3D data through a user-friendly interface. While the primary data acquisition involved more sophisticated techniques such as laser scanning and photogrammetric reconstruction, Trnio served as a practical alternative, since it supports on-the-go 3D model generation and is useful for validating the more detailed 3D scans later on.
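For illustration, the angular step between consecutive photographs that yields a given overlap can be estimated from the camera's horizontal field of view. The FOV value below is an assumption (roughly a 50 mm lens on a full-frame body such as the D850), not a figure reported in the paper.

```python
import math

def angular_step_deg(horizontal_fov_deg: float, overlap: float) -> float:
    """Angular step between consecutive shots around the z-axis so that
    adjacent frames share roughly `overlap` of their field of view."""
    return horizontal_fov_deg * (1.0 - overlap)

# Assumption: ~40 degrees horizontal FOV (50 mm lens, full-frame sensor).
fov = 40.0
step = angular_step_deg(fov, overlap=0.5)   # 20 degrees per shot
n_shots = math.ceil(360.0 / step)           # ~18 shots for a full orbit
print(f"step = {step:.1f} deg, shots per orbit = {n_shots}")
```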

4.2. 3D Models Reconstruction

All 3D modeling was carried out using Blender versions 2.93 for texture extraction/synthesis and 3.5.1 for the Geometry Node Shader capabilities. Final models were simplified using Instant Field-Aligned Meshes, a method that automatically re-topologizes complex meshes [98]. Photogrammetry is capable of generating a single mesh for the entire artifact, because the acquired datasets cover multiple angles and provide nearly complete coverage of the subject. The resulting mesh is imperfect: the dimensions require a reference for scale, structural errors accumulate in fine geometric details, and texture quality is inconsistent. In contrast, the FARO lidar scans are nearly stationary, with large gaps between scans; each delivers limited coverage but high-quality color and geometric structure from its point of view. Both scanning techniques were used in the project, as each has its own advantages and disadvantages. Consequently, given the nature of the results discussed above, a merging methodology is required to combine the multiple scans, followed by laborious retopology efforts. Another important consideration is that clothing is a malleable subject, prone to shape deformations from minute external factors, which worsen with time. Capturing multiple photographs is fast, but the resulting details can be disjointed; lidar is precise, but its methodical nature is slow, introducing various warps between scans.
To address this issue, a forced manual registration process was required for the majority of the scans, using their common texture features as reference points. The multitude of scans compounds into complicated, partial mesh overlaps consisting of millions of polygons. Their one-sided, flat structure is detrimental to Boolean operations, making them almost impossible to use reliably as a means to unify the scans.
To work around this issue, a series of modifiers and geometry nodes are applied to each scan that subdivide it and remove irrelevant geometry using an alpha texture. The alpha masks out geometry viewed at angles greater than 25 degrees, because lidar geometric and image quality are best at near-perpendicular angles and decline rapidly at steeper ones. The process retains only the most accurate parts of each scan. Within the geometry node, the alpha is the deciding factor that keeps or deletes geometry; thus, artistically editing the alpha is a powerful process that allows fine structure to emerge. For example, painting away the mannequin while keeping thin lines enables the inclusion of strings, braces, and other delicate features of a dress as real geometry, visibly determining the prospective outcome. This methodology is graphically represented using an exemplar dress in Figure 2. The first part of the figure (a) presents an original scan mesh as acquired through the scanning methodology; in the second part (b), the same scan mesh is shown with the modifiers applied. The unification of all meshes is presented in the third part of the figure, where the outcome is a single mesh structure acquired by combining nine individual modified meshes. Finally, the fourth part presents the merged mesh simplified with Instant Field-Aligned Meshes [98].
Diving into more detail on the aforementioned process: in the first step, all individual FARO scans have the following modifiers applied: (a) the geometry node "cut to alpha", (b) planar decimation at 0.1 degrees, and (c) triangulate. The modifier is presented in Figure 3a. Subsequently, the modified scans are merged into a new target model. The structure is complicated, with overlapping polygons that often have missing parts due to a lack of scanned information. These can be quickly covered by projecting simple polygons around them. The unification transformation happens by applying a set of modifiers. The process converts the mesh into a volume encompassing the intended model that is thick enough to fill small structural gaps. The volume is then converted back to a mesh. The remesh modifier is applied to smooth the geometry (see Figure 3b), and finally, a shrinkwrap modifier pushes back the surface, restoring its original, intended form (see Figure 3c). The result is a single unified mesh, which can be further simplified using Instant Field-Aligned Meshes [98] to reduce the polygon count of the model.
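As a minimal sketch of the per-scan modifier stack described above, the following uses Blender's Python API (bpy). The "cut to alpha" Geometry Nodes group is assumed to already exist in the .blend file, and the collection name is an illustrative assumption.

```python
# Sketch of the per-scan modifier stack: (a) geometry node "cut to alpha",
# (b) planar decimation at 0.1 degrees, (c) triangulate.
import math
import bpy

def apply_scan_modifiers(obj):
    # (a) Geometry Nodes "cut to alpha": drops polygons masked by the alpha.
    gn = obj.modifiers.new(name="CutToAlpha", type='NODES')
    gn.node_group = bpy.data.node_groups["cut to alpha"]  # assumed to exist

    # (b) Planar decimation at 0.1 degrees.
    dec = obj.modifiers.new(name="PlanarDecimate", type='DECIMATE')
    dec.decimate_type = 'DISSOLVE'
    dec.angle_limit = math.radians(0.1)

    # (c) Triangulate for a uniform triangle mesh.
    obj.modifiers.new(name="Triangulate", type='TRIANGULATE')

# Illustrative collection name holding the individual FARO scans.
for obj in bpy.data.collections["FARO_scans"].objects:
    apply_scan_modifiers(obj)
```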
The individual modified scans are geometrically accurate and are used to transfer their equivalent textures to the final model. For every scan, a 16K RGBA texture is extracted by projecting the respective panoramic image onto it. The alpha is auto-generated from the overlapping geometry, benefiting greatly from the fine structure. The textures are baked using Selected to Active, with Extrusion/Ray Distance set to 0.01. Finally, an average image stacking approach is used to synthesize all textures into one detailed albedo texture.
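To illustrate the averaged image stacking step, the following sketch combines per-scan RGBA bakes into a single albedo, weighting each pixel by its alpha coverage. File names and the number of scans are assumptions; the real bakes are 16K textures.

```python
# Sketch of averaged image stacking: per-scan RGBA bakes are combined into
# one albedo, weighting each pixel by its alpha coverage.
import numpy as np
from PIL import Image

paths = [f"scan_{i:02d}_bake.png" for i in range(1, 10)]  # illustrative

acc = None
weight = None
for p in paths:
    rgba = np.asarray(Image.open(p), dtype=np.float32) / 255.0
    rgb, a = rgba[..., :3], rgba[..., 3:4]
    if acc is None:
        acc, weight = np.zeros_like(rgb), np.zeros_like(a)
    acc += rgb * a       # alpha-weighted contribution of this scan
    weight += a

# Average where at least one scan contributed; leave uncovered pixels black.
albedo = np.where(weight > 0, acc / np.maximum(weight, 1e-8), 0.0)
Image.fromarray((albedo * 255).astype(np.uint8)).save("albedo_combined.png")
```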
To acquire the final 3D models of the dress collection, we applied the aforementioned methodology to all the individual datasets per dress. The final collection of dresses was subsequently digitally curated.

5. Digital Curation

The digital curation part of the methodology, as applied in the presented use case, is a complex procedure that transforms the collection of media files output by the digitization phase into data that adhere to the FAIR principles. This process is initiated by transforming the data into an open dataset. For this purpose, we use the Zenodo [92] platform. Creating these datasets includes uploading and documenting all the source data and connecting them to the authors, the project's community, and the source of funding. The result is a fully discoverable dataset assigned a DOI [99,100,101,102,103].
Each data item receives, through this integration, a unique IRI that can be reused across the web. At this stage, there is also the option of depositing the raw data used for digitization for their long-term preservation. The reasons for doing so are twofold. First, in the future, these data can be reused with more advanced digitization methods without the need to recapture everything. Second, having such datasets freely available enhances the availability of data for researchers and scientists working on the improvement of digitization methods.
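For readers unfamiliar with the deposition workflow, the following sketch walks through the Zenodo REST API steps that mint a DOI for a dataset. The token, metadata values, and file names are illustrative assumptions; consult Zenodo's API documentation for the authoritative details.

```python
# Sketch: depositing a digitization dataset on Zenodo via its REST API.
# A DOI is assigned upon publication. Token, titles, and file names are
# illustrative assumptions.
import requests

BASE = "https://zenodo.org/api"
TOKEN = {"access_token": "YOUR_ZENODO_TOKEN"}

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions", params=TOKEN, json={}).json()

# 2. Upload raw scans and the final 3D model to the deposition bucket.
with open("dress_01_model.glb", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/dress_01_model.glb",
                 params=TOKEN, data=fp)

# 3. Attach metadata connecting authors, community, and funding.
metadata = {"metadata": {
    "title": "Contemporary dress 01: multimodal digitization data",
    "upload_type": "dataset",
    "creators": [{"name": "Doe, Jane", "affiliation": "ICS-FORTH"}],
}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params=TOKEN, json=metadata)

# 4. Publish: Zenodo assigns the DOI and the dataset becomes citable.
requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish",
              params=TOKEN)
```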
With the unique IRIs available, the next step is building the semantic information for the digital assets. For this purpose, we use two levels of information. The first level regards the metadata assigned to the files (i.e., 3D models and photographs), and the second level regards the semantic representation of the artifact in the form of a CH object. For the representation in this work, we propose the use of the Mingei Online Platform (MOP) [104], implemented in the context of the Mingei Horizon 2020 project [105] and updated and enhanced in the context of the Craeft Horizon Europe project [106].
Examples of first-level documentation of an image and a 3D object are presented in Figure 4. Part (a) presents the metadata associated with the image, which, apart from the image file characteristics, includes semantic annotations with external vocabularies and internal links to the object representing the dress as a heritage object. Part (c) presents the metadata for a 3D model; here, information about the creators of the digital files can also be seen, together with semantic annotations with external vocabularies and internal semantic links to the object representing the dress as a heritage object. Parts (b) and (d) present the online previews supported by the MOP for the image and the 3D model.
The documentation of the heritage object is more complex, since it combines all the associated media assets with further social and historical information. Each object may have multiple descriptions, each associated with a language. Furthermore, it integrates information regarding the event of its creation, the materials used, and its creator. Each of these is represented by a separate semantic instance. A rich set of semantic annotations to external vocabularies is also used to further represent the heritage object. A graphical representation of the heritage object, including its major associations in the MOP, is presented in Figure 5.
The aforementioned documentation is sufficient to present a heritage object online in the MOP. Further dissemination is needed to make the resource globally accessible through different dissemination channels following the LOD principles. To this end, MOP provides an exporting facility that exports the contents of its knowledge base in the form of RDF/XML. An example of such an export for the knowledge object under discussion is presented in Figure 6. Using this export functionality, the knowledge base can be ingested into content aggregators such as SearchCulture. SearchCulture is the Greek National Aggregator of Digital Cultural Content that offers access to digital collections of cultural heritage provided by institutions from all over Greece. Currently, it aggregates information about 813,269 items, including photographs, artworks, monuments, maps, folklore artifacts, and intangible cultural heritage in image, text, video, audio, and 3D [90].
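To give a flavor of what such an RDF/XML export looks like in practice, the following is a minimal rdflib sketch for one heritage object using CIDOC CRM terms. The actual MOP export schema is richer and is not reproduced here; the IRIs and literals are illustrative assumptions.

```python
# Minimal sketch of an RDF/XML export for one heritage object, in the
# spirit of the MOP export (the platform's exact schema is not reproduced
# here). Uses rdflib with CIDOC CRM terms; IRIs are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
g = Graph()
g.bind("crm", CRM)

dress = URIRef("https://example.org/collection/dress/01")
production = URIRef("https://example.org/collection/dress/01/production")

g.add((dress, RDF.type, CRM["E22_Human-Made_Object"]))
g.add((dress, CRM.P3_has_note,
       Literal("Contemporary dress inspired by antiquity", lang="en")))
g.add((dress, CRM.P108i_was_produced_by, production))
g.add((production, RDF.type, CRM.E12_Production))

# RDF/XML is the serialization ingested by aggregators such as SearchCulture.
print(g.serialize(format="xml"))
```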
For our use case, a public API was implemented in MOP to list all the dresses as a collection of metadata in the aforementioned format. Through this public API [107], SearchCulture ingested the knowledge entities, which are now aggregated through its online services [108].

6. Virtual Reality Exhibition

The VR exhibition was built on top of the Invisible Museum Platform (IMP) [93,109]. For the front end, A-Frame [110] and Three.js [111] are employed. A-Frame, a web framework designed for creating web-based and virtual reality applications, was chosen as the foundation due to its use of web technologies like WebGL 1.0 [112] and WebVR 1.5.0 [113], which allows compatibility with modern browsers without the need for additional plugins or installations. This offers the flexibility to develop applications accessible across various devices, including desktops, mobile devices, and VR headsets, using their built-in browsers.

6.1. Exhibition Scene Design

The virtual 3D exhibition scene is designed using a web-based tool called "Exhibition Designer" offered by the IMP. This web-based designer is the first step in the exhibition-generation pipeline, and its purpose is to speed up and simplify the exhibition setup process. By minimizing the challenge of creating a 3D exhibition, creators are not required to be familiar with complex 3D modeling software and can rapidly explore exhibition setup concepts. To initiate the process, the creator of the exhibition first draws a top-down view of the exhibition on a tile-based canvas (see Figure 7), which is translated into a 3D building. The tool also supports importing entire 3D models of scanned buildings. This step can be skipped by selecting one of the various preset buildings provided, which can also be edited at any point in the process.
Once the space where the 3D exhibit models will be hosted is ready, the next step is to import them. The tool facilitates the quick import of GLB-format 3D models and allows for adjustments to their position, rotation, and scale (see Figure 8a). GLB was selected as the sole format for the 3D objects because it is a single-file format, is natively supported by modern browsers (without the need for external plugins or additional libraries), and renders efficiently thanks to its GPU-oriented optimization. After the exhibit models are placed within the exhibition, lighting sources, such as spotlights and point lights, are integrated to improve visibility and highlight specific models. Well-positioned lights can be strategically utilized to guide the viewer's focus within the scene, drawing attention to particular objects (see Figure 8b). Additionally, the designer enables the import of decorative elements (images, videos, 3D models, and music) to further shape the tone of the exhibition and influence the mood and atmosphere (see Figure 8c). The editing of the attributes of the above 3D objects is enabled through an on-screen inspector (see Figure 8d). This inspector provides a UI environment to edit the values of the 3D objects directly from the Three.js layer. This low-level access to the objects makes it possible to view changes in real time without impacting the performance of the designer. Gizmos are also provided for less precise but faster changes.

6.2. Baking the Scene

Upon completing the exhibition setup, a JSON format file is generated containing the scene specification as rendered by the designer. This file contains information such as the exhibits, lights, and decorations present within the scene, as well as details about the transformations applied to them. The scene specification is directly editable in Blender (see Figure 9).
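The exact IMP scene schema is not reproduced in the paper; the following sketch illustrates, under assumed key names, the kind of information such a specification carries (exhibits, lights, decorations, and their transformations).

```python
# Illustrative shape of the exhibition scene specification. The actual
# Invisible Museum schema is not public here; keys are assumptions that
# mirror the information listed above.
import json

scene_spec = {
    "building": {"preset": "gallery_small"},
    "exhibits": [{
        "id": "dress_01",
        "model": "https://example.org/models/dress_01.glb",
        "transform": {
            "position": [2.0, 0.0, -3.5],
            "rotation": [0.0, 90.0, 0.0],   # degrees
            "scale":    [1.0, 1.0, 1.0],
        },
    }],
    "lights": [
        {"type": "spot", "target": "dress_01", "intensity": 1.2},
    ],
    "decorations": [
        {"type": "audio", "src": "ambient.mp3", "loop": True},
    ],
}

with open("exhibition_scene.json", "w") as fp:
    json.dump(scene_spec, fp, indent=2)
```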
The scene specification generated by the exhibition designer is subsequently utilized within a Blender-based service responsible for recreating the entire exhibition in Blender. This pipeline includes functionalities such as lightmap baking and the merging of all the geometries of the 3D objects into one. Lightmap baking plays a crucial role in the outcome. Instead of computing lighting in real-time, this process pre-calculates how light interacts with surfaces in a 3D scene. The significant advantage lies in improved performance during user interaction with the 3D exhibition. By pre-computing lighting information, real-time rendering becomes less resource-intensive, leading to smoother user experiences. Additionally, this technique ensures consistent and predictable results across various devices and platforms.
Geometry merging also contributes to performance enhancements. By reducing the number of individual objects and consolidating them into a single entity, rendering performance is optimized, particularly in exhibitions with numerous smaller objects. Moreover, merging these 3D objects simplifies management, easing the overall workflow. The scene before and after baking can be seen in Figure 10.
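As an indication of what the Blender-side service does, the following sketch merges the scene meshes and bakes combined lighting with Cycles via bpy. Set-up details (UV unwrapping, selecting the bake-target image node per material) are omitted, and all names are illustrative assumptions.

```python
# Sketch of the Blender-side baking service: merge exhibit geometry and
# pre-compute lighting into a lightmap with Cycles.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Merge all scene meshes into a single object to cut draw calls.
meshes = [o for o in scene.objects if o.type == 'MESH']
bpy.ops.object.select_all(action='DESELECT')
for o in meshes:
    o.select_set(True)
bpy.context.view_layer.objects.active = meshes[0]
bpy.ops.object.join()

# Bake combined direct + indirect lighting into the active image texture
# (each material must have an image texture node selected as bake target).
bpy.ops.object.bake(type='COMBINED')

# Export the baked exhibition as a single GLB for the Exhibition Viewer.
bpy.ops.export_scene.gltf(filepath="exhibition_baked.glb")
```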
Once the 3D exhibition is baked, it becomes accessible for use within the 'Exhibition Viewer', which directly engages users with the final result, allowing them to explore the 3D exhibition and access information about the showcased exhibits. Moreover, the Exhibition Viewer includes a tour management section, enabling tour creators to customize the experience. Within this section, creators can determine the included exhibits, establish the starting point for the tour, and even select background music. Exhibits can also be accompanied by sound. With this extra flexibility, tours can feel more dynamic, since voiced-over narrations can be played when the viewer approaches an exhibit. Notably, a single exhibition can support multiple tours in multiple languages and for different age groups. Currently, the virtual exhibition is available for demonstration purposes [114] and is planned to be released to the public in the first quarter of 2024.

7. Discussion and Conclusions

In conclusion, in this paper, we provide a comprehensive methodology for the digitization, curation, and virtual exhibition of CH artifacts, and we demonstrate this methodology on a unique collection of contemporary dresses inspired by antiquity. Evolving 3D digitization technologies, including lidar scanning, laser scanning, and photogrammetry, were employed, combined with intelligent post-processing methodologies, to mitigate the challenges associated with multimodal digitization.
A notable insight from the post-processing phase concerns the individualities of CH artifacts, which may affect the technical methodology to be followed. In 3D reconstruction of CH objects, there is a tendency to assume that subjects remain still during digitization, which is the case for most subjects, such as sculptures, ancient artifacts, tools, and machinery. In the presented use case, we realized that dresses are not such a case, since minor changes in their geometry during data acquisition greatly affected the registration of partial scans. As a result, several modifiers had to be applied to partial scans to perform the registration. Furthermore, due to the fusion of data from several scanning modalities, the synthesized mesh was of increased size and complexity, and thus simplification was required; we followed the Instant Field-Aligned Meshes [98] methodology to achieve a simplified mesh. Finally, due to the need to visually combine textures from multiple scans, an average image stacking approach was used to synthesize all textures into one detailed albedo texture. As lessons learned from this digitization effort, we can propose two mitigation measures. The first involves controlling the digitization setup and placement of dresses to ensure the least possible changes in geometry. The second is to shorten the acquisition phase as much as possible, since all the phenomena observed during our experiments were time-dependent, with the error rate increasing during long scans.
The proposed approach to digital preservation is quite straightforward and can be applied to any form of digital artifact. By building on semantic knowledge representation standards like the CIDOC CRM, linked open data repositories, and content aggregators, the widest possible dissemination of digital assets is supported. Moreover, the strategic dissemination of digitized content through national and European aggregators ensures wider accessibility.
In the use case, we learned that, thanks to the digital curation, the effort needed to integrate the collection into the virtual exhibition was greatly reduced, since the curation had already solved the issues of storing, retrieving, and accessing metadata for each digital object. This is strong evidence of how FAIR data can simplify the reusability of media assets and, through this simplicity of integration, enhance their value. Further validation of these open data sources by third parties will be needed in the future to ensure that the principles employed in this work make the data reusable, since, in our case, both the provider and the consumer of the data were the same organization.
Using a mature platform for the implementation of the virtual exhibition was a wise decision that allowed us to greatly compress the development time. Of course, there are some limitations on the type of platform that can be employed. Careful consideration should be given to semantic data interoperability, to ensure that open data are directly exploitable by the target platform. Furthermore, compatibility with the data format of the 3D models is essential; in this work, we employ the GLB format, known for its wide compatibility and integration efficiency. In the use case, the Invisible Museum platform offered both forms of compatibility, since it supports CIDOC-CRM-based knowledge representations and has off-the-shelf compatibility with GLB files. Based on these facilities, it was possible to simplify the creation of the virtual exhibition without compromising quality or interaction.
In summary, we are confident that the provided methodology represents a holistic and innovative response to the multifaceted challenges in the preservation and presentation of cultural artifacts, contributing to the evolving landscape of cultural heritage in the digital age. The following table (see Table 2) summarizes the technical outcomes of this work, providing references to the location where data and content can be accessed, previewed, and experienced.

Author Contributions

Conceptualization, X.Z. and N.P.; methodology, X.Z., N.P. and E.Z.; software, A.X., T.E. and A.K.; validation, X.Z., N.P. and E.Z.; formal analysis, A.K. and A.X.; investigation, X.Z., N.P. and E.Z.; resources, T.E.; data curation, A.K.; writing—original draft preparation, N.P.; writing—review and editing, N.P.; visualization, A.K., A.X. and T.E.; supervision, X.Z., N.P. and E.Z.; project administration, E.Z.; funding acquisition, X.Z. and N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Greek Ministry of Culture in the context of the action “Digitization of the branding heritage collection of contemporary dresses” and the Horizon Europe project Craeft, which received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No. 101094349.

Data Availability Statement

The data produced by this research work can be accessed through published open datasets [99,100,101,102,103].

Acknowledgments

The authors would like to thank the anonymous reviewers for contributing to the enhancement of the quality of this manuscript. Furthermore, we would like to thank the Branding Heritage organization for its valuable collaboration in this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Mons, B. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 2016, 3, 160018. [Google Scholar] [CrossRef] [PubMed]
  2. Hermon, S.; Niccolucci, F. Fair data and cultural heritage special issue editorial note. Int. J. Digit. Libr. 2021, 22, 251–255. [Google Scholar] [CrossRef]
  3. Nicholson, C.; Kansa, S.; Gupta, N.; Fernandez, R. Will It Ever Be FAIR?: Making Archaeological Data Findable, Accessible, Interoperable, and Reusable. Adv. Archaeol. Pract. 2023, 11, 63–75. [Google Scholar] [CrossRef]
  4. Branding Heritage. Available online: https://brandingheritage.org/en/homepage/ (accessed on 11 January 2024).
  5. Ma, Z.; Liu, S. A review of 3D reconstruction techniques in civil engineering and their applications. Adv. Eng. Inform. 2018, 37, 163–174. [Google Scholar] [CrossRef]
  6. Kang, Z.; Yang, J.; Yang, Z.; Cheng, S. A review of techniques for 3D reconstruction of indoor environments. ISPRS Int. J. Geo-Inf. 2020, 9, 330. [Google Scholar] [CrossRef]
  7. De Reu, J.; De Smedt, P.; Herremans, D.; Van Meirvenne, M.; Laloo, P.; De Clercq, W. On introducing an image-based 3D reconstruction method in archaeological excavation practice. J. Archaeol. Sci. 2014, 41, 251–262. [Google Scholar] [CrossRef]
  8. Beall, C.; Lawrence, B.J.; Ila, V.; Dellaert, F. 3D reconstruction of underwater structures. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4418–4423. [Google Scholar]
  9. Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects. Sensors 2013, 13, 11007–11031. [Google Scholar] [CrossRef] [PubMed]
  10. Zanchi, A.; Francesca, S.; Stefano, Z.; Simone, S.; Graziano, G. 3D reconstruction of complex geological bodies: Examples from the Alps. Comput. Geosci. 2009, 35, 49–69. [Google Scholar] [CrossRef]
  11. Vitali, A.; Togni, G.; Regazzoni, D.; Rizzi, C.; Molinero, G. A virtual environment to evaluate the arm volume for lymphedema affected patients. Comput. Methods Programs Biomed. 2021, 198, 105795. [Google Scholar] [CrossRef]
  12. Awad, A.; Trenfield, S.J.; Pollard, T.D.; Ong, J.J.; Elbadawi, M.; McCoubrey, L.E.; Goyanes, A.; Gaisford, S.; Basit, A.W. Connected healthcare: Improving patient care using digital health technologies. Adv. Drug Deliv. Rev. 2021, 178, 113958. [Google Scholar] [CrossRef]
  13. Bell, T.; Li, B.; Zhang, S. Structured Light Techniques and Applications. Wiley Encycl. Electr. Electron. Eng. Available online: https://onlinelibrary.wiley.com/doi/full/10.1002/047134608X.W8298 (accessed on 19 February 2024).
  14. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160. [Google Scholar] [CrossRef]
  15. Muralikrishnan, B. Performance evaluation of terrestrial laser scanners—A review. Meas. Sci. Technol. 2021, 32, 072001. [Google Scholar] [CrossRef] [PubMed]
  16. Lemmens, M. Terrestrial laser scanning. Geo-Inf. Technol. Appl. Environ. 2011, 5, 101–121. [Google Scholar]
  17. Baqersad, J.; Poozesh, P.; Niezrecki, C.; Avitabile, P. Photogrammetry and optical methods in structural dynamics—A review. Mech. Syst. Signal Process. 2017, 86, 17–34. [Google Scholar] [CrossRef]
  18. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  19. Trnio. Available online: https://www.trnio.com/ (accessed on 9 January 2024).
  20. Poly.Cam. Available online: https://poly.cam/ (accessed on 9 January 2024).
  21. Pitzalis, D.; Kaminski, J.; Niccolucci, F. 3D-COFORM: Making 3D documentation an everyday choice for the cultural heritage sector. Virtual Archaeol. Rev. 2011, 2, 145. [Google Scholar] [CrossRef]
  22. Stork, A.; Fellner, D.W. 3D-COFORM-Tools and Expertise for 3D Collection Formation. 2012. Available online: https://www.researchgate.net/publication/228397413_3D-COFORM_Tools_and_Expertise_for_3D_Collection_Formation (accessed on 19 February 2024).
  23. Pavlidis, G.; Koutsoudis, A.; Arnaoutoglou, F.; Tsioukas, V.; Chamzas, C. Methods for 3D digitization of cultural heritage. J. Cult. Herit. 2007, 8, 93–98. [Google Scholar] [CrossRef]
  24. Akça, M.D. 3D modeling of cultural heritage objects with a structured light system. Mediterr. Archaeol. Archaeom. 2012, 12, 139–152. [Google Scholar]
  25. Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A low cost 3D scanner based on structured light. Comput. Graph. Forum 2001, 20, 299–308. [Google Scholar] [CrossRef]
  26. Pervolarakis, Z.; Zidianakis, E.; Katzourakis, A.; Evdaimon, T.; Partarakis, N.; Zabulis, X.; Stephanidis, C. Three-Dimensional Digitization of Archaeological Sites—The Use Case of the Palace of Knossos. Heritage 2023, 6, 904–927. [Google Scholar] [CrossRef]
  27. Pervolarakis, Z.; Zidianakis, E.; Katzourakis, A.; Evdaimon, T.; Partarakis, N.; Zabulis, X.; Stephanidis, C. Visiting Heritage Sites in AR and VR. Heritage 2023, 6, 2489–2502. [Google Scholar] [CrossRef]
  28. Alshawabkeh, Y.; Bal’Awi, F.; Haala, N. 3D digital documentation, assessment, and damage quantification of the Al-Deir monument in the ancient city of Petra, Jordan. Conserv. Manag. Archaeol. Sites 2010, 12, 124–145. [Google Scholar] [CrossRef]
  29. Aicardi, I.; Chiabrando, F.; Lingua, A.M.; Noardo, F. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. J. Cult. Herit. 2018, 32, 257–266. [Google Scholar] [CrossRef]
  30. Benchekroun, S.; Ullah, I.I.T. Preserving the past for an uncertain future: Accessible, low-cost methods for 3-D data creation, processing, and dissemination in digital cultural heritage preservation. In Proceedings of the 26th International Conference on 3D Web Technology, Pisa, Italy, 8–12 November 2021; pp. 1–9. [Google Scholar]
  31. Peinado-Santana, S.; Hernández-Lamas, P.; Bernabéu-Larena, J.; Cabau-Anchuelo, B.; Martín-Caro, J.A. Public works heritage 3D model digitisation, optimisation and dissemination with free and open-source software and platforms and low-cost tools. Sustainability 2021, 13, 13020. [Google Scholar] [CrossRef]
  32. Haddad, N.A. From ground surveying to 3D laser scanner: A review of techniques used for spatial documentation of historic sites. J. King Saud Univ.-Eng. Sci. 2011, 23, 109–118. [Google Scholar] [CrossRef]
  33. Gevorgyan, H.; Khilo, A.; Wade, M.T.; Stojanović, V.M.; Popović, M.A. Miniature, highly sensitive MOSCAP ring modulators in co-optimized electronic-photonic CMOS. Photon-Res. 2022, 10, A1–A7. [Google Scholar] [CrossRef]
  34. Ma, Y.; Gao, Y.; Wu, J.; Cao, L. Toward a see-through camera via AR lightguide. Opt. Lett. 2023, 48, 2809–2812. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, X.; Wu, H.; Yu, B.; Rosales-Guzmán, C.; Zhu, Z.; Hu, X.; Shi, B.; Zhu, S. Real-Time Superresolution Interferometric Measurement Enabled by Structured Nonlinear Optics. Laser Photon-Rev. 2023, 17, 2370026. [Google Scholar] [CrossRef]
  36. Magnenat-Thalmann, N.; Luible, C.; Volino, P.; Lyard, E. From measured fabric to the simulation of cloth. In Proceedings of the 2007 10th IEEE International Conference on Computer-Aided Design and Computer Graphics, Beijing, China, 15–18 October 2007; pp. 7–18. [Google Scholar]
  37. Magnenat-Thalmann, N.; Volino, P.; Moccozet, L. Designing and simulating clothes. Int. J. Image Graph. 2001, 1, 1–17. [Google Scholar] [CrossRef]
  38. Hedfi, H.; Ghith, A.; BelHadjSalah, H. Dynamic fabric modelling and simulation using deformable models. J. Text. Inst. 2011, 102, 647–667. [Google Scholar] [CrossRef]
  39. Volino, P.; Courchesne, M.; Magnenat Thalmann, N. Versatile and efficient techniques for simulating cloth and other deformable objects. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 15 September 1995; pp. 137–144. [Google Scholar]
  40. Luible, C.; Magnenat-Thalmann, N. The Simulation of Cloth Using Accurate Physical Parameters; CGIM: Innsbruck, Austria, 2008. [Google Scholar]
  41. Jin, W.; Kim, D.H. Design and implementation of e-health system based on semantic sensor network using IETF YANG. Sensors 2018, 18, 629. [Google Scholar] [CrossRef]
  42. Koay, N.; Kataria, P.; Juric, R. Semantic management of nonfunctional requirements in an e-health system. Telemed. e-Health 2010, 16, 461–471. [Google Scholar] [CrossRef]
  43. Devedzic, V. Education and the semantic web. Int. J. Artif. Intell. Educ. 2004, 14, 165–191. [Google Scholar]
  44. Jensen, J. A systematic literature review of the use of Semantic Web technologies in formal education. Br. J. Educ. Technol. 2019, 50, 505–517. [Google Scholar] [CrossRef]
  45. Trastour, D.; Bartolini, C.; Preist, C. Semantic web support for the business-to-business e-commerce lifecycle. In Proceedings of the 11th international conference on World Wide Web, Honolulu, HI, USA, 7–11 May 2002; pp. 89–98. [Google Scholar]
  46. Kim, W.; Chung, M.J.; Qureshi, K.; Choi, Y.K. WSCPC: An architecture using semantic web services for collaborative product commerce. Comput. Ind. 2006, 57, 787–796. [Google Scholar] [CrossRef]
  47. Wen-Yue, G.; Hai-Cheng, Q.; Hong, C. Semantic web service discovery algorithm and its application on the intelligent automotive manufacturing system. In Proceedings of the 2010 2nd IEEE International Conference on Information Management and Engineering, Chengdu, China, 16–18 April 2010; pp. 601–604. [Google Scholar]
  48. Klotz, B.; Datta, S.K.; Wilms, D.; Troncy, R.; Bonnet, C. A car as a semantic web thing: Motivation and demonstration. In Proceedings of the 2018 Global Internet of Things Summit (GIoTS), Bilbao, Spain, 4–7 June 2018; pp. 1–6. [Google Scholar]
  49. Lilis, Y.; Zidianakis, E.; Partarakis, N.; Antona, M.; Stephanidis, C. Personalizing HMI elements in ADAS using ontology meta-models and rule based reasoning. In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, Proceedings of the 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, 9–14 July 2017; Proceedings, Part I 11; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 383–401. [Google Scholar]
  50. Benjamins, V.R.; Contreras, J.; Blázquez, M.; Dodero, J.M.; Garcia, A.; Navas, E.; Hernandez, F.; Wert, C. Cultural heritage and the semantic web. In European Semantic Web Symposium; Springer: Berlin/Heidelberg, Germany, 2004; pp. 433–444. [Google Scholar]
  51. Signore, O. The semantic web and cultural heritage: Ontologies and technologies help in accessing museum information. In Proceedings of the Information Technology for the Virtual Museum, Sønderborg, Denmark, 6–7 December 2006. [Google Scholar]
  52. Di Giulio, R.; Maietti, F.; Piaia, E. 3D Documentation and Semantic Aware Representation of Cultural Heritage: The INCEPTION Project. In EUROGRAPHICS Workshop on Graphics and Cultural Heritage; Catalano, C.E., De Luca, L., Eds.; Eurographics Association: Eindhoven, The Netherlands, 2016; pp. 195–198. [Google Scholar]
  53. Hyvönen, E. Publishing and Using Cultural Heritage Linked Data on the Semantic Web; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  54. Daquino, M.; Mambelli, F.; Peroni, S.; Tomasi, F.; Vitali, F. Enhancing semantic expressivity in the cultural heritage domain: Exposing the Zeri Photo Archive as Linked Open Data. J. Comput. Cult. Herit. (JOCCH) 2017, 10, 1–21. [Google Scholar] [CrossRef]
  55. Lodi, G.; Asprino, L.; Nuzzolese, A.G.; Presutti, V.; Gangemi, A.; Recupero, D.R.; Veninata, C.; Orsini, A. Semantic web for cultural heritage valorisation. In Data Analytics in Digital Humanities; Springer: Berlin/Heidelberg, Germany, 2017; pp. 3–37. [Google Scholar]
  56. Marden, J.; Li-Madeo, C.; Whysel, N.; Edelstein, J. Linked open data for cultural heritage: Evolution of an information technology. In Proceedings of the 31st ACM International Conference on Design of Communication, Greenville, NC, USA, 30 September–1 October 2013; pp. 107–112. [Google Scholar]
  57. Candela, G.; Escobar, P.; Carrasco, R.C.; Marco-Such, M. A linked open data framework to enhance the discoverability and impact of culture heritage. J. Inf. Sci. 2019, 45, 756–766. [Google Scholar] [CrossRef]
  58. Nishanbaev, I.; Champion, E.; McMeekin, D.A. A web GIS-based integration of 3D digital models with linked open data for cultural heritage exploration. ISPRS Int. J. Geo-Inf. 2021, 10, 684. [Google Scholar] [CrossRef]
  59. Pattuelli, M.C.; Miller, M.; Lange, L.; Fitzell, S.; Li-Madeo, C. Crafting linked open data for cultural heritage: Mapping and curation tools for the linked jazz project. Code4Lib J. 2013. Available online: https://journal.code4lib.org/articles/8670?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+c4lj+(The+Code4Lib+Journal) (accessed on 19 February 2024).
  60. Doerr, M. The CIDOC conceptual reference module: An ontological approach to semantic interoperability of metadata. AI Mag. 2003, 24, 75. [Google Scholar]
  61. Hiebel, G.; Doerr, M.; Eide, Ø. CRMgeo: A spatiotemporal extension of CIDOC-CRM. Int. J. Digit. Libr. 2017, 18, 271–279. [Google Scholar] [CrossRef]
  62. Purday, J. Think culture: Europeana.eu from concept to construction. Bibl. Forsch. Und Prax. 2009, 33, 170–180. [Google Scholar] [CrossRef]
  63. Valtysson, B. EUROPEANA: The digital construction of Europe’s collective memory. Inf. Commun. Soc. 2012, 15, 151–170. [Google Scholar] [CrossRef]
  64. Haslhofer, B.; Isaac, A. data.europeana.eu: The Europeana linked open data pilot. In Proceedings of the International Conference on Dublin Core and Metadata Applications, The Hague, The Netherlands, 21–23 September 2011; pp. 94–104. [Google Scholar]
  65. Styliani, S.; Fotis, L.; Kostas, K.; Petros, P. Virtual museums, a survey and some issues for consideration. J. Cult. Herit. 2009, 10, 520–528. [Google Scholar] [CrossRef]
  66. Machidon, O.M.; Duguleana, M.; Carrozzino, M. Virtual humans in cultural heritage ICT applications: A review. J. Cult. Herit. 2018, 33, 249–260. [Google Scholar] [CrossRef]
  67. Pascoal, S.; Tallone, L.; Furtado, M. The impact of COVID-19 on cultural tourism: Virtual exhibitions, technology and innovation. In Proceedings of the International Conference on Tourism, Technology and Systems, Cartagena de Indias, Colombia, 29–31 October 2020; Springer: Singapore, 2020; pp. 177–185. [Google Scholar]
  68. Hoffman, S.K. Online Exhibitions during the COVID-19 Pandemic. Mus. Worlds 2020, 8, 210–215. [Google Scholar] [CrossRef]
  69. Geronikolakis, E.; Zikas, P.; Kateros, S.; Lydatakis, N.; Georgiou, S.; Kentros, M.; Papagiannakis, G. A true ar authoring tool for interactive virtual museums. In Visual Computing for Cultural Heritage; Springer: Berlin/Heidelberg, Germany, 2020; pp. 225–242. [Google Scholar]
  70. Jung, T.; tom Dieck, M.C.; Lee, H.; Chung, N. Effects of virtual reality and augmented reality on visitor experiences in museum. In Information and Communication Technologies in Tourism 2016, Proceedings of the International Conference, Bilbao, Spain, 2–5 February 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 621–635. [Google Scholar]
  71. Efstratios, G.; Michael, T.; Stephanie, B.; Athanasios, L.; Paul, Z.; George, P. New cross/augmented reality experiences for the virtual museums of the future. In Digital Heritage, Proceedings of the Progress in Cultural Heritage: Documentation, Preservation, and Protection: 7th International Conference, EuroMed 2018, Nicosia, Cyprus, 29 October–3 November 2018; Proceedings, Part I 7. Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 518–527. [Google Scholar]
  72. Rhodes, G.A. Future museums now—Augmented reality musings. Public Art Dialogue 2015, 5, 59–79. [Google Scholar] [CrossRef]
  73. Carrozzino, M.; Bergamasco, M. Beyond virtual museums: Experiencing immersive virtual reality in real museums. J. Cult. Herit. 2010, 11, 452–458. [Google Scholar] [CrossRef]
  74. Lee, H.; Jung, T.H.; Dieck, M.T.; Chung, N. Experiencing immersive virtual reality in museums. Inf. Manag. 2020, 57, 103229. [Google Scholar] [CrossRef]
  75. Shehade, M.; Stylianou-Lambert, T. Virtual reality in museums: Exploring the experiences of museum professionals. Appl. Sci. 2020, 10, 4031. [Google Scholar] [CrossRef]
  76. Bouloukakis, M.; Partarakis, N.; Drossis, I.; Kalaitzakis, M.; Stephanidis, C. Virtual reality for smart city visualization and monitoring. In Mediterranean Cities and Island Communities: Smart, Sustainable, Inclusive and Resilient; Springer: Berlin/Heidelberg, Germany, 2019; pp. 1–18. [Google Scholar]
  77. Pujol, L. Archaeology, museums and virtual reality. Digithum 2004, 6, 1–9. [Google Scholar] [CrossRef]
  78. Trunfio, M.; Lucia, M.D.; Campana, S.; Magnelli, A. Innovating the cultural heritage museum service model through virtual reality and augmented reality: The effects on the overall visitor experience and satisfaction. J. Herit. Tour. 2022, 17, 1–19. [Google Scholar] [CrossRef]
  79. Kabassi, K.; Amelio, A.; Komianos, V.; Oikonomou, K. Evaluating museum virtual tours: The case study of Italy. Information 2019, 10, 351. [Google Scholar] [CrossRef]
  80. Petridis, P.; White, M.; Mourkousis, N.; Liarokapis, F.; Sifniotis, M.; Basu, A.; Gatzidis, C. Exploring and interacting with virtual museums. In Proceedings of the Computer Applications and Quantitative Methods in Archaeology (CAA), Tomar, Portugal, 21–24 March 2005. [Google Scholar]
  81. Mathioudakis, G.; Klironomos, I.; Partarakis, N.; Papadaki, E.; Anifantis, N.; Antona, M.; Stephanidis, C. Supporting Online and On-Site Digital Diverse Travels. Heritage 2021, 4, 4558–4577. [Google Scholar] [CrossRef]
  82. Partarakis, N.; Zabulis, X.; Foukarakis, M.; Moutsaki, M.; Zidianakis, E.; Patakos, A.; Adami, I.; Kaplanidi, D.; Ringas, C.; Tasiopoulou, E. Supporting Sign Language Narrations in the Museum. Heritage 2021, 5, 1. [Google Scholar] [CrossRef]
  83. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H. The europeana data model (edm). In Proceedings of the World Library and Information Congress: 76th IFLA General Conference and Assembly, Gothenburg, Sweden, 10–15 August 2010; Volume 10, p. 15. [Google Scholar]
  84. Pan, H.; Guan, T.; Luo, Y.; Duan, L.; Tian, Y.; Yi, L.; Zhao, Y.; Yu, J. Dense 3D reconstruction combining depth and RGB information. Neurocomputing 2016, 175, 644–651. [Google Scholar] [CrossRef]
  85. Karami, A.; Menna, F.; Remondino, F. Combining Photogrammetry and Photometric Stereo to Achieve Precise and Complete 3D Reconstruction. Sensors 2022, 22, 8172. [Google Scholar] [CrossRef] [PubMed]
  86. Abmayr, T.; Härtl, F.; Mettenleiter, M.; Heinz, I.; Hildebrand, A.; Neumann, B.; Fröhlich, C. Realistic 3D reconstruction–combining laserscan data with RGB color information. Proc. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35 Part B, 198–203. [Google Scholar]
  87. Luhmann, T.; Chizhova, M.; Gorkovchuk, D.; Hastedt, H.; Chachava, N.; Lekveishvili, N. Combination of Terrestrial Laserscanning, UAV and Close Range Photogrammetry for 3D Reconstruction of Complex Churches in Georgia; Otto-Friedrich-Universität: Bamberg, Germany, 2019. [Google Scholar]
  88. Weyrich, T.; Pauly, M.; Keiser, R.; Heinzle, S.; Scandella, S.; Gross, M.H. Post-processing of Scanned 3D Surface Data. In Eurographics Symposium on Point-Based Graphics; Eurographics Association: Eindhoven, The Netherlands, 2004; pp. 85–94. [Google Scholar]
  89. RDF 1.1 XML Syntax. Available online: https://www.w3.org/TR/rdf-syntax-grammar/ (accessed on 9 January 2024).
  90. Greek National Aggregator of Digital Cultural Content. Available online: https://www.searchculture.gr/aggregator/portal?language=en (accessed on 9 January 2024).
  91. Europeana. Available online: https://www.europeana.eu/en (accessed on 9 January 2024).
  92. Zenodo. Available online: https://zenodo.org/ (accessed on 9 January 2024).
  93. Zidianakis, E.; Partarakis, N.; Ntoa, S.; Dimopoulos, A.; Kopidaki, S.; Ntagianta, A.; Ntafotis, E.; Xhako, A.; Pervolarakis, Z.; Kontaki, E.; et al. The invisible museum: A user-centric platform for creating virtual 3D exhibitions with VR support. Electronics 2021, 10, 363. [Google Scholar] [CrossRef]
  94. Faro Focus Laser Scanner. Available online: https://www.faro.com/en/Products/Hardware/Focus-Laser-Scanners (accessed on 9 January 2024).
  95. Pix4d. Available online: https://www.pix4d.com (accessed on 9 January 2024).
  96. Faro Scene. Available online: https://www.faro.com/en/Products/Software/SCENE-Software (accessed on 9 January 2024).
  97. Blender. Available online: https://www.blender.org/ (accessed on 9 January 2024).
  98. Jakob, W.; Tarini, M.; Panozzo, D.; Sorkine-Hornung, O. Instant field-aligned meshes. ACM Trans. Graph. 2015, 34, 189–191. [Google Scholar] [CrossRef]
  99. Images and 3D Digitisations of Branding Heritage #1. Available online: https://zenodo.org/records/8176947 (accessed on 9 January 2024).
  100. Images and 3D Digitisations of Branding Heritage #2. Available online: https://zenodo.org/records/8307886 (accessed on 9 January 2024).
  101. Images and 3D Digitisations of Branding Heritage #3. Available online: https://zenodo.org/records/8321918 (accessed on 9 January 2024).
  102. Images and 3D Digitisations of Branding Heritage #4. Available online: https://zenodo.org/records/8337684 (accessed on 9 January 2024).
  103. Images and 3D Digitisations of Branding Heritage #5. Available online: https://zenodo.org/records/8409134 (accessed on 9 January 2024).
  104. Partarakis, N.; Doulgeraki, V.; Karuzaki, E.; Galanakis, G.; Zabulis, X.; Meghini, C.; Bartalesi, V.; Metilli, D. A Web-Based Platform for Traditional Craft Documentation. Multimodal Technol. Interact. 2022, 6, 37. [Google Scholar] [CrossRef]
  105. Mingei Project. Available online: https://www.mingei-project.eu/ (accessed on 9 January 2024).
  106. Craeft Project. Available online: https://www.craeft.eu/ (accessed on 9 January 2024).
  107. Public API. Available online: http://api.mingei-project.eu/public/api/metadata?verb=ListRecords&metadataPrefix=edm&set=brandingHeritage (accessed on 9 January 2024).
  108. Digitization of Contemporary Works by Young Artists Inspired by Textile Heritage. Available online: https://www.searchculture.gr/aggregator/portal/collections/brandingHeritage/search?page.page=2&scrollPositionX=5221&sortByCount=false&resultsMode=GRID&sortResults=SCORE (accessed on 9 January 2024).
  109. Pervolarakis, Z.; Agapakis, A.; Xhako, A.; Zidianakis, E.; Katzourakis, A.; Evdaimon, T.; Sifakis, M.; Partarakis, N.; Zabulis, X.; Stephanidis, C. A Method and Platform for the Preservation of Temporary Exhibitions. Heritage 2022, 5, 2833–2850. [Google Scholar] [CrossRef]
  110. A-Frame. Available online: https://aframe.io/ (accessed on 9 January 2024).
  111. Three.js. Available online: https://threejs.org/ (accessed on 9 January 2024).
  112. WebGL. Available online: https://get.webgl.org/ (accessed on 9 January 2024).
  113. WebVR. Available online: https://webvr.info/ (accessed on 9 January 2024).
  114. Temporary URL of the Virtual Exhibition. Available online: https://invisible-museum.space/ (accessed on 9 January 2024).
Figure 1. Graphical representation of the methodology proposed by this research work. The method used is encoded in black and its output in red.
Figure 2. FARO example with 9 scans from different angles. (a) An original scan mesh; (b) the scan mesh with modifiers applied; (c) the resulting unified mesh: all 9 modified scans are merged, and modifiers and geometry nodes convert them into a single mesh (6 million triangles); (d) InstaMesh simplification.
Figure 3. (a) Alpha channel modifier, (b) Remesh modifier, (c) Shrink modifier.
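The modifier pipeline of Figures 2 and 3 can be scripted through Blender's Python API [97]. The following is a minimal sketch, assuming the partial scans have already been imported as mesh objects named with a hypothetical "scan_" prefix and the script runs from Blender's scripting workspace; the voxel size and decimation ratio are illustrative, the alpha-channel step of Figure 3a is omitted, and the Decimate modifier merely stands in for the InstaMesh simplification [98].

```python
import bpy

VOXEL_SIZE = 0.004    # Remesh voxel size in metres (illustrative)
DECIMATE_RATIO = 0.2  # stand-in for the InstaMesh simplification step

# Collect the imported partial scans (hypothetical naming convention).
scans = [o for o in bpy.data.objects
         if o.type == 'MESH' and o.name.startswith('scan_')]

for obj in scans:
    # Keep an untouched duplicate to serve as the Shrinkwrap target.
    target = obj.copy()
    target.data = obj.data.copy()
    bpy.context.collection.objects.link(target)

    bpy.context.view_layer.objects.active = obj
    # Remesh: rebuild a closed, evenly sampled surface from the raw scan.
    remesh = obj.modifiers.new(name='Remesh', type='REMESH')
    remesh.mode = 'VOXEL'
    remesh.voxel_size = VOXEL_SIZE
    # Shrinkwrap: snap the rebuilt surface back onto the original scan.
    shrink = obj.modifiers.new(name='Shrinkwrap', type='SHRINKWRAP')
    shrink.target = target
    bpy.ops.object.modifier_apply(modifier=remesh.name)
    bpy.ops.object.modifier_apply(modifier=shrink.name)
    bpy.data.objects.remove(target, do_unlink=True)

# Join the modified partial scans into a single mesh (Figure 2c).
bpy.ops.object.select_all(action='DESELECT')
for obj in scans:
    obj.select_set(True)
bpy.context.view_layer.objects.active = scans[0]
bpy.ops.object.join()

# Simplify the unified mesh (Figure 2d), here via Decimate.
merged = bpy.context.view_layer.objects.active
dec = merged.modifiers.new(name='Decimate', type='DECIMATE')
dec.ratio = DECIMATE_RATIO
bpy.ops.object.modifier_apply(modifier=dec.name)
```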
Figure 4. (a) Semantic metadata for an image; (b) a web-based preview of the image; (c) semantic metadata for a 3D object; and (d) a web-based preview of the 3D object.
Figure 5. A representation of a heritage object and its associations with its creator, materials used, the event of its creation, and the media object representing its reconstruction in 3D. Associations are marked with arrows.
Figure 6. Export of the heritage object’s metadata in RDF/XML format.
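The associations of Figure 5 and the RDF/XML export [89] of Figure 6 can be reproduced in miniature with Python's rdflib. In the sketch below, the base URI, the title and material values, and the overall record are illustrative stand-ins rather than the repository's actual schema; only the CIDOC-CRM class and property identifiers [60] are real terms.

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DC

CRM = Namespace('http://www.cidoc-crm.org/cidoc-crm/')
EX = Namespace('http://example.org/heritage/')  # hypothetical base URI

g = Graph()
g.bind('crm', CRM)

dress = EX['dress-01']
creation = EX['dress-01/creation']
model3d = URIRef('https://zenodo.org/records/8176947')  # deposited 3D collection [99]

# Heritage object, its creation event, material, and 3D media (Figure 5).
g.add((dress, RDF.type, CRM['E22_Human-Made_Object']))
g.add((dress, DC.title, Literal('Contemporary dress inspired by antiquity')))
g.add((creation, RDF.type, CRM['E12_Production']))
g.add((creation, CRM['P108_has_produced'], dress))
g.add((dress, CRM['P45_consists_of'], EX['material/textile']))  # placeholder material
g.add((dress, CRM['P138i_has_representation'], model3d))

# Serialize in RDF/XML, as in Figure 6.
print(g.serialize(format='xml'))
```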
Figure 7. Top-down drawing of the exhibition building, using the built-in tool.
Figure 8. (a) Imported GLB-format 3D model in the exhibition; (b) a ceiling for the exhibition and a spotlight over the model; (c) imported decorative elements complementing the 3D model; (d) the inspector (left modal) and the scale gizmos attached to the model.
Figure 9. The scene specification in Blender.
Figure 10. (a) Before baking, (b) after baking the scene in Blender.
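The lighting bake of Figure 10 can likewise be driven from Blender's Python API. A minimal sketch, assuming a single mesh object with an assigned material is selected and active; the image resolution, bake type, and output path are illustrative.

```python
import bpy

obj = bpy.context.active_object  # the exhibition mesh to bake
bpy.context.scene.render.engine = 'CYCLES'  # baking requires Cycles

# Create the target image and make an image-texture node active in the
# material: Cycles bakes into the active image node.
img = bpy.data.images.new('lightmap', width=2048, height=2048)
mat = obj.active_material
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = img
mat.node_tree.nodes.active = tex_node

# Bake the combined lighting (Figure 10b) and save the result.
bpy.ops.object.bake(type='COMBINED')
img.filepath_raw = '//baked_lightmap.png'
img.file_format = 'PNG'
img.save()
```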
Table 1. Overview of the proposed technologies.
Multimodal data acquisition
  • Photographic documentation: Detailed images are captured from multiple angles using a Nikon D850.
  • 3D points and their RGB values: The FARO Focus Laser Scanner [94] acquires 3D points and their RGB values for a segment of the space occupied by the artifact.
  • Mobile depth-enhanced photogrammetry: A mobile device is used together with the Trnio app to acquire a 360-degree view from various heights.
Reconstruction
  • Photogrammetric reconstruction: PIX4Dmatic from Pix4D [95] is used with input from the acquired photographic documentation.
  • Reconstruction based on lidar data: FARO SCENE [96] is used to produce the point clouds and to translate them into textured 3D meshes.
  • Cloud-based reconstruction: Data acquired using Trnio is post-processed in the Trnio cloud to create the reconstruction.
Post-processing and 3D model generation
  • Application of modifiers. Partial scans are modified in Blender.
  • Mesh unification and refactoring. The modified partial scans are registered in Blender, merged, and refactored.
  • Mesh simplification. The combined 3D mesh is simplified with InstaMesh [98] to reduce the number of polygons.
  • Projection of scan textures. The individual scan textures are projected on the simplified mesh.
  • Averaged image stacking. The combined texture is created, and color and lighting are adjusted and calibrated (see the sketch after this table).
Digital Curation
  • Deposit of 3D models as linked open data. The collection of final 3D models is deposited in Zenodo [92] and receives a URI.
  • Curation of 3D models. 3D models and their metadata are curated in a semantic repository.
  • Curation of artifacts. Semantic representations of the artifacts are authored, enriching them with events, materials, places, and links to open vocabularies.
  • Export for ingestion in open repositories. The semantically rich representations are exported and ingested in open repositories.
Virtual Exhibition
  • Linking with the open repository to access both the 3D models and their metadata.
  • Creation of virtual exhibits. The collection of 3D models is transformed into a collection of virtual exhibits, i.e., objects that can be placed as interactable items within a virtual exhibition.
  • Authoring of the digital space. In this step, the digital space where the virtual exhibition will be hosted is authored.
  • Setup of rendering and spatial parameters for the exhibits.
  • Baking and publication. The final scene is baked and published.
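For the averaged image stacking step referenced above, the following is a minimal sketch with NumPy and Pillow, assuming the projected scan textures were rendered in the same UV layout so that pixels are aligned; the filenames and scan count are illustrative.

```python
import numpy as np
from PIL import Image

# Hypothetical filenames: one projected texture per partial scan.
paths = [f'projected_scan_{i:02d}.png' for i in range(9)]
stack = np.stack([np.asarray(Image.open(p).convert('RGB'), dtype=np.float64)
                  for p in paths])

# The per-pixel mean suppresses scan-specific noise and exposure
# differences; a production pipeline would also weight by coverage masks.
averaged = stack.mean(axis=0).round().astype(np.uint8)
Image.fromarray(averaged).save('combined_texture.png')
```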
Table 2. Summary of the outcomes of this work.
Digitization
Digitization subjects: 30 dresses
Laser scans: 240 partial scans
Photogrammetric reconstruction: 778 photos
Photogrammetric reconstruction results: 30 complete scans
Trnio scans: 30 complete scans
Synthesized models: 30 final models
Open Data
Images and 3D digitizations of Branding Heritage:
  • Collection #1 [99].
  • Collection #2 [100].
  • Collection #3 [101].
  • Collection #4 [102].
  • Collection #5 [103].
Collections ingested into aggregators
SearchCulture: Digitization of Contemporary Works by Young Artists Inspired by Textile Heritage [108].
Data access
Branding Heritage collection public API [107]; a minimal retrieval sketch follows this table.
Experiential access
Branding Heritage virtual exhibition [114]
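The public API [107] exposes the collection's EDM metadata through OAI-PMH-style parameters. Below is a minimal retrieval sketch; the response is assumed to follow the standard OAI-PMH envelope, which the endpoint may not match in every detail.

```python
import requests
import xml.etree.ElementTree as ET

# Endpoint and parameters as published in [107].
URL = 'http://api.mingei-project.eu/public/api/metadata'
params = {'verb': 'ListRecords', 'metadataPrefix': 'edm',
          'set': 'brandingHeritage'}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

# Count the returned records, assuming an OAI-PMH envelope.
root = ET.fromstring(resp.content)
ns = {'oai': 'http://www.openarchives.org/OAI/2.0/'}
records = root.findall('.//oai:record', ns)
print(f'{len(records)} records retrieved')
```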
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
