Multiscale 3D Documentation of the Medieval Wall of Jaén (Spain) Based on Multi-Sensor Data Fusion
Abstract
1. Introduction
- Development of data acquisition systems. The use of non-metric (conventional) and low-cost cameras and 360-degree multi-cameras [3,4,5,6,7,8] has increased in these types of studies due to their efficiency in data acquisition and image processing. There are many examples of these applications; we highlight those related to medieval fortresses [9,10,11] and others focused on complex scenes [5,12,13]. The use of photogrammetric techniques is therefore well established in this type of study, mainly due to the geometric and radiometric advantages of the products obtained. LiDAR-based scanning devices, for their part, have evolved considerably and are now in common use thanks to, among other reasons, their efficiency in data acquisition: a single conventional scan acquired in a few minutes can document a scene with a point cloud of high geometric accuracy composed of millions of points. This has contributed to the growing number of heritage documentation applications based on these techniques, among which we highlight those mainly based on LiDAR and focused on fortresses and other complex scenes [14,15,16,17]. However, most studies published in recent years are based on the combination of photogrammetry and LiDAR. Data fusion based on both technologies exploits their respective strengths and avoids their weaknesses, taking, for example, the geometry from LiDAR data and the radiometry from photogrammetry. We highlight some examples related to fortresses [18,19,20,21] and other complex scenes [22,23]. In addition to these techniques, recent studies [24,25,26,27,28] have used portable mobile mapping systems (MMS) to survey complex spaces. In these systems, data acquisition is based on images and/or LiDAR, while navigation and positioning rely on Global Navigation Satellite System (GNSS) and/or Inertial Measurement Unit (IMU) sensors. In some cases, the trajectory of the MMS is derived from Simultaneous Localization and Mapping (SLAM) algorithms using images or point clouds (visual SLAM and LiDAR SLAM). MMSs improve the efficiency of data acquisition while achieving accuracies of several centimeters [26], which can be considered sufficient for most cases; a comprehensive review of current systems was provided by Elhashash et al. [29]. In this context, we highlight the MMS application to the medieval wall of Ávila (Spain) [28].
- New platforms. The widespread use of new platforms, such as Remotely Piloted Aircraft Systems (RPAS) [30,31,32] and masts [33,34,35,36], to capture the scene from adequate, and in many cases elevated, points of view has improved acquisition conditions, providing better configurations for documenting complex scenes. These platforms can carry different sensors (e.g., cameras and LiDAR), are remotely controlled and, in most cases, can follow a previously designed flight plan, even in complex scenes [37]. The availability of RPAS platforms (both fixed-wing and rotary-wing) is continuously increasing, with systems that can fly at low and medium altitudes (e.g., from several meters to 120 m) supporting a wide range of applications. However, their use is not always possible due to safety restrictions; in such cases, the typical size of historic buildings often allows masts and elevated platforms to be used to raise the sensors.
- Processing algorithms and hardware capabilities. The development of improved image-based processing algorithms, such as Structure from Motion (SfM) [38,39,40,41] and dense Multi-View Stereo (MVS) [41,42,43,44], together with the increase in hardware capabilities, has enabled a greater number of applications thanks to improved processing procedures, such as image orientation. Moreover, recent software applications have democratized the use of these techniques, even for non-professional users [45]. Likewise, improved point cloud registration algorithms, such as Iterative Closest Point (ICP) [46,47], have supported applications based on these techniques (a minimal registration sketch is shown below).
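As an illustration of the kind of cloud-to-cloud registration mentioned above (and used later in Section 2.2), the following minimal Python sketch aligns two overlapping point clouds with the point-to-point ICP implementation of the open-source Open3D library; the file names, identity initialization and correspondence threshold are placeholder assumptions, not values from this study:

```python
import numpy as np
import open3d as o3d

# Load two overlapping scans (placeholder file names).
source = o3d.io.read_point_cloud("scan_a.ply")
target = o3d.io.read_point_cloud("scan_b.ply")

# Point-to-point ICP: iteratively pairs nearest neighbors within the
# threshold and solves for the rigid transformation minimizing their RMS.
threshold = 0.05  # maximum correspondence distance in meters (assumed)
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)          # share of matched points
print("inlier RMSE:", result.inlier_rmse)  # residual after registration
source.transform(result.transformation)    # apply the estimated pose
```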
1.1. The Medieval Wall of Jaén
1.2. Objectives
2. Materials and Methods
2.1. Surveying
2.2. LiDAR
- SL1: We conducted an aerial LiDAR survey of the full area (SL1) using a DJI L1 sensor mounted on a DJI Matrice 300 RTK RPAS (Figure 3a). The flight was planned in advance, considering the terrain and the scene to be surveyed; to this end, we used the application developed by Gómez-López et al. [37] for planning block flights over sloped zones. The flight was conducted at about 60 m above the terrain, using seven strips. We used the coordinates of several targets well distributed throughout the area to georeference the LiDAR point cloud. The point density was about 820 points per square meter, with a point spacing of 3 cm. The RMS after LiDAR processing was about 2.5 cm (height accuracy). We used DJI Terra and LAStools software to process and check these data. All points were classified for bare-earth extraction using the lasground module, which allowed us to separate ground from non-ground points; in addition, we retained the points belonging to the wall (a simplified ground-filtering sketch is given after this list). As a product, we obtained a complete point cloud with color information.
- SL3: We developed a TLS survey of the wall with a high-density acquisition (point spacing of about 7 mm at 10 m) using a Faro Focus X130 scanner (Figure 3b). To improve acquisition efficiency, we captured the scans without color, relying on photogrammetry for the texture of the final products and focusing the TLS data on geometry. The scanning stations were distributed along the wall with some overlap between adjacent scans, and were placed considering the geometry of the wall so as to capture it completely (except for the highest zones, which were not accessible). From each scanning station we obtained a point cloud, and all point clouds were registered relative to one another using cloud-to-cloud algorithms (e.g., ICP [46,47]); the RMS of this registration was about 5 mm. Subsequently, several targets located throughout the scene were used to georeference the final point cloud and to check the results of the TLS procedure. The RMS of this 3D transformation was about 27 mm (a sketch of such a target-based transformation appears after this list). From the final point cloud, we obtained the coordinates of additional targets, which were used to orient and check the low and very low flight height photogrammetry (SL2 and SL3) and the close-range photogrammetry (SL3). As a product, we obtained a high-density point cloud. We used Faro Scene and Maptek PointStudio software for these procedures.
- SL2: The point cloud obtained in the previous stage was filtered to obtain a simplified version, using a minimum point-to-point distance threshold of 10 mm. This reduced the number of points by about 80% (see the downsampling sketch after this list).
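The bare-earth classification in SL1 relied on the lasground module of LAStools, which implements a robust TIN-densification filter. The sketch below is not that algorithm, only a deliberately simplified grid-based illustration of the idea of separating ground from non-ground points; the cell size and height tolerance are assumed values:

```python
import numpy as np

def simple_ground_filter(points, cell=1.0, tol=0.25):
    """Keep points within `tol` meters of the lowest point of their
    `cell` x `cell` grid cell as 'ground'; everything else is
    'non-ground'. A simplified illustration, not lasground itself."""
    cells = np.floor(points[:, :2] / cell).astype(np.int64)
    min_z = {}
    for c, z in zip(map(tuple, cells), points[:, 2]):
        if c not in min_z or z < min_z[c]:
            min_z[c] = z  # lowest elevation seen in this cell
    is_ground = np.array([z - min_z[c] <= tol
                          for c, z in zip(map(tuple, cells), points[:, 2])])
    return points[is_ground], points[~is_ground]
```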
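The georeferencing of the registered TLS cloud (SL3) is a 3D transformation estimated from targets with known coordinates. A common least-squares solution for the rigid case is the SVD-based Kabsch/Horn method sketched below, assuming unit scale (the paper does not state the exact transformation model used); the same residuals yield RMS figures like the 27 mm quoted above:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transformation (Kabsch/Horn) mapping target
    coordinates in the scan frame (src) onto their surveyed
    coordinates (dst); src and dst are matched (n, 3) arrays."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # rotation (reflections excluded via D)
    t = cd - R @ cs             # translation
    return R, t

def rms_residuals(src, dst, R, t):
    """RMS of the 3D residuals after applying the transformation."""
    res = dst - (src @ R.T + t)
    return np.sqrt((res ** 2).sum(axis=1).mean())
```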
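A minimal way to reproduce the effect of the SL2 distance filter with open-source tools is voxel downsampling, which keeps at most one (averaged) point per 10 mm cell. This approximates, but is not identical to, the minimum-distance filter described above, and the file name is a placeholder:

```python
import open3d as o3d

# Placeholder input: the registered high-density TLS cloud from SL3.
pcd = o3d.io.read_point_cloud("wall_tls_highdensity.ply")

# One averaged point survives per 10 mm voxel, approximating the
# 10 mm minimum-distance simplification described above.
simplified = pcd.voxel_down_sample(voxel_size=0.010)
print(len(pcd.points), "->", len(simplified.points))
```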
2.3. Photogrammetry
- SL1: Image acquisition at this scale level was carried out with an RPAS survey at a medium flight height, using a DJI Matrice 300 with an integrated RTK module. The flight plan comprised vertical images following a block flight over a sloped zone [37], with seven strips perpendicular to the slope, maintaining the flight height of each strip with respect to the terrain. The mean flight height was 60 m and the mean GSD was about 2 cm (the GSD relation is sketched after this list). We used several targets, whose coordinates were obtained from a GNSS survey, to refine the initial orientation of the images (camera coordinates were provided by the GNSS-RTK module integrated in the aircraft) and to check the orientation. The RMS after the orientation procedure was about 1.9 cm. As products, we obtained a set of oriented photographs, a point cloud and a texture of the zone.
- SL2: In this case, the scene covered the wall and its surrounding areas. We therefore developed an RPAS survey at a low flight height using a DJI Phantom 4 Pro with an integrated RTK module (Figure 4a). This aircraft was selected, in contrast to the one used in SL1, for its greater maneuverability and efficiency and its lower weight. The flight plan included vertical and oblique images following a combined flight (block and corridor flights) [37] in order to cover the entire wall. The average flight height was about 50 m, which yielded an average GSD of about 1.5 cm. As in the previous case, image orientation was refined using several targets well distributed throughout the scene, although camera positions were pre-calculated from the RTK module. The coordinates of these targets were obtained from the high-density TLS point cloud; we thus limited the GNSS survey to the targets used for orienting and checking the medium flight height RPAS survey (SL1) and the high-density TLS survey (SL3). The RMS after the orientation procedure was about 1.9 cm. As products, we obtained a set of oriented photographs, a point cloud and a texture of the scene.
- SL3: At this scale level, we developed two photogrammetric surveys. The first was intended to obtain high resolution products of the highest areas of the wall and was performed through an RPAS survey at a very low flight height using a DJI Mini. This aircraft allowed us to acquire close-up images of the wall while avoiding the problems caused by the presence of trees and other objects (such as poles, lights and power lines); its great maneuverability allowed us to cover the scene completely thanks to its capacity to position itself in complex and narrow spaces. The lower areas of the wall, in turn, were surveyed using close-range photogrammetry (CRP) with a conventional camera (Sony A6000) mounted on a mast that allowed us to raise the sensor up to 5 m (Figure 4b). In both cases, we followed the CIPA recommendations for architectural photogrammetric projects using non-metric cameras (the so-called 3 × 3 rules) [52], obtaining normal and convergent images covering the scene from different viewpoints. The orientation of the images was based on several targets well distributed along the wall, whose coordinates were obtained from the high-density TLS point cloud. The average RMS after the orientation stage was 1.4 cm. As products, we obtained a set of oriented photographs, a point cloud and a texture of the scene.
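The flight heights above follow from the target GSD through the usual pinhole relation GSD = H · pixel size / focal length. The sketch below uses hypothetical sensor values (pixel pitch and focal length are not listed in the paper) that give roughly the 1.5–2 cm GSDs reported for SL1 and SL2:

```python
def gsd(flight_height_m, pixel_size_um, focal_length_mm):
    """Ground sample distance (m/pixel) of a nadir image:
    GSD = H * pixel_size / focal_length, in consistent units."""
    return flight_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical 2.4 um pixels and an 8.8 mm lens (1-inch sensor class):
print(round(gsd(60, 2.4, 8.8) * 100, 1), "cm")  # ~1.6 cm at 60 m (SL1)
print(round(gsd(50, 2.4, 8.8) * 100, 1), "cm")  # ~1.4 cm at 50 m (SL2)
```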
2.4. Data Fusion
- SL1: Data fusion at this level combined the LiDAR point cloud and photogrammetry. The LiDAR data provided the geometry of the terrain, avoiding the influence of trees, while photogrammetry provided a high-quality texture. As final products, we obtained a 3D model of the area including the wall, an orthoimage with a spatial resolution of 1.5 cm, a DTM (5 cm spatial resolution) and a topographic map at a scale of 1:1000.
- SL2: Data fusion at SL2 integrated the geometry of the wall derived from the TLS (lower areas) and photogrammetry (higher areas) with the texture obtained from photogrammetry. Points were selected from the photogrammetric point cloud to fill gaps in the TLS point cloud (Figure 5a); specifically, we selected those photogrammetric points lying farther than a minimum distance from the TLS point cloud (see the fusion sketch after this list). We thereby obtained a detailed 3D model of the complete wall.
- SL3: In the case of photogrammetry, we merged two projects that had been processed independently in order to generate a complete project including both the RPAS and conventional camera images (Figure 5b). We then integrated the geometry of the wall obtained from the TLS point cloud with points selected from the photogrammetric point cloud in those areas where the TLS had gaps. As in the previous case, the texture was obtained from photogrammetry. As final products, we obtained an orthoimage (2 mm spatial resolution) and a DEM (15 mm spatial resolution) of each section of the wall (a DEM gridding sketch is given after this list).
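The gap-filling selection used in SL2 and SL3 can be sketched with a nearest-neighbor query: a photogrammetric point is kept only where the TLS cloud has no nearby point. The snippet below assumes point clouds as NumPy arrays and an illustrative 5 cm threshold (the paper does not state the exact value used):

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_clouds(tls_pts, photo_pts, gap_dist=0.05):
    """Keep the TLS geometry wherever it exists and fill its gaps with
    photogrammetric points: a photogrammetric point is added only when
    its nearest TLS neighbor is farther than gap_dist (assumed value)."""
    tree = cKDTree(tls_pts)
    d, _ = tree.query(photo_pts, k=1)   # distance to nearest TLS point
    fill = photo_pts[d > gap_dist]      # points covering the TLS gaps
    return np.vstack([tls_pts, fill])
```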
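Finally, a minimal sketch of how a fused point cloud can be gridded into a DEM at the 15 mm resolution mentioned above, assuming the points are expressed in a coordinate system whose z axis is perpendicular to the mapping plane. Cells take the mean z of their points; empty cells remain NaN (production software would also interpolate them):

```python
import numpy as np

def rasterize_dem(points, cell=0.015):
    """Grid an (n, 3) point cloud into a DEM by averaging z per cell."""
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    cols = np.floor((points[:, 0] - x0) / cell).astype(int)
    rows = np.floor((points[:, 1] - y0) / cell).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    cnt = np.zeros_like(dem)
    zsum = np.zeros_like(dem)
    np.add.at(cnt, (rows, cols), 1)             # points per cell
    np.add.at(zsum, (rows, cols), points[:, 2])  # z sum per cell
    mask = cnt > 0
    dem[mask] = zsum[mask] / cnt[mask]           # mean z where populated
    return dem
```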
3. Results and Discussion
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Grieves, M. Digital Twin: Manufacturing Excellence through Virtual Factory Replication, White Paper. 2014. Available online: http://www.apriso.com/library/Whitepaper_Dr_Grieves_DigitalTwin_ManufacturingExcellence.php (accessed on 30 May 2023).
2. Luther, W.; Baloian, N.; Biella, D.; Sacher, D. Digital Twins and Enabling Technologies in Museums and Cultural Heritage: An Overview. Sensors 2023, 23, 1583.
3. Ogleby, C.L.; Papadaki, H.; Robson, S.; Shortis, M.R. Comparative camera calibrations of some "off the shelf" digital cameras suited to archaeological purposes. Int. Arch. Photogramm. Remote Sens. 1999, 32, 69–75.
4. Cardenal, J.; Mata, E.; Castro, P.; Delgado, J.; Hernandez, M.A.; Pérez, J.L.; Ramos, M.; Torres, M. Evaluation of a digital non metric camera (Canon D30) for the photogrammetric recording of historical buildings. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2004, 35, 564–569.
5. Covas, J.; Ferreira, V.; Mateus, L. 3D reconstruction with fisheye images: Strategies to survey complex heritage buildings. In Digital Heritage 2015; IEEE: Granada, Spain, 2015.
6. Fiorillo, F.; Limongiello, M.; Fernández-Palacios, B.J. Testing GoPro for 3D model reconstruction in narrow spaces. Acta IMEKO 2016, 5, 64–70.
7. Barazzetti, L.; Previtali, M.; Roncoroni, F. Fisheye lenses for 3D modeling: Evaluations and considerations. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, 42, 79–84.
8. Barazzetti, L.; Previtali, M.; Roncoroni, F. 3D modeling with 5K 360° videos. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2022, 46, 65–71.
9. Arias, P.; González-Aguilera, D.; Riveiro, B.; Caparrini, N. Orthoimage-based documentation of archaeological structures: The case of a mediaeval wall in Pontevedra, Spain. Archaeometry 2011, 53, 858–872.
10. Drap, P.; Merad, D.; Boi, J.-M.; Seinturier, J.; Peloso, D.; Reidinger, C.; Vannini, G.; Nucciotti, M.; Pruno, E. Photogrammetry for Medieval Archaeology: A Way to Represent and Analyse Stratigraphy. In Proceedings of the 2012 18th International Conference on Virtual Systems and Multimedia, Milan, Italy, 2–5 September 2012; IEEE: Milan, Italy, 2012; pp. 157–164.
11. Sabina, J.A.R.; Valle, D.G.; Ruiz, C.P.; García, J.M.M.; Laguna, A.G. Aerial Photogrammetry by drone in archaeological sites with large structures. Methodological approach and practical application in the medieval castles of Campo de Montiel. Virtual Archaeol. Rev. 2015, 6, 5–19.
12. Martínez, S.; Ortiz, J.; Gil, M.L.; Rego, M.T. Recording complex structures using close range photogrammetry: The cathedral of Santiago de Compostela. Photogramm. Rec. 2013, 28, 375–395.
13. Pérez-García, J.L.; Mozas-Calvache, A.T.; Barba-Colmenero, V.; Jiménez-Serrano, A. Photogrammetric studies of inaccessible sites in archaeology: Case study of burial chambers in Qubbet el-Hawa (Aswan, Egypt). J. Archaeol. Sci. 2019, 102, 1–10.
14. Teza, G.; Pesci, A. Geometric characterization of a cylinder-shaped structure from laser scanner data: Development of an analysis tool and its use on a leaning bell tower. J. Cult. Herit. 2013, 14, 411–423.
15. Castellazzi, G.; D'Altri, A.M.; de Miranda, S.; Ubertini, F. An innovative numerical modeling strategy for the structural analysis of historical monumental buildings. Eng. Struct. 2017, 132, 229–248.
16. Guarnieri, A.; Fissore, F.; Masiero, A.; Vettore, A. From TLS survey to 3D solid modeling for documentation of built heritage: The case study of Porta Savonarola in Padua. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, 42, 303–308.
17. Sánchez-Aparicio, L.J.; Del Pozo, S.; Ramos, L.F.; Arce, A.; Fernandes, F.M. Heritage site preservation with combined radiometric and geometric analysis of TLS data. Autom. Constr. 2018, 85, 24–39.
18. Stal, C.; De Wulf, A.; Nuttens, T.; De Maeyer, P.; Goossens, R. Reconstruction of a medieval wall: Photogrammetric mapping and quality analysis by terrestrial laser scanning. In Proceedings of the 31st EARSeL Symposium 2011, Prague, Czech Republic, 30 May–2 June 2011; pp. 54–65.
19. Mateus, L.; Fernández, J.; Ferreira, V.; Oliveira, C.; Aguiar, J.; Gago, A.S.; Pacheco, P.; Pernão, J. Terrestrial laser scanning and digital photogrammetry for heritage conservation: Case study of the Historical Walls of Lagos, Portugal. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, 42, 843–847.
20. Zaragoza, I.M.E.; Caroti, G.; Piemonte, A. The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis. Acta IMEKO 2021, 10, 114–121.
21. Fabris, M.; Fontana Granotto, P.; Monego, M. Expeditious Low-Cost SfM Photogrammetry and a TLS Survey for the Structural Analysis of Illasi Castle (Italy). Drones 2023, 7, 101.
22. Colonnese, F.; Carpiceci, M.; Inglese, C. Conveying Cappadocia. A new representation model for rock-cave architecture by contour lines and chromatic codes. Virtual Archaeol. Rev. 2016, 7, 13–19.
23. Mozas-Calvache, A.T.; Pérez-García, J.L.; Gómez-López, J.M.; de Dios, J.M.; Jiménez-Serrano, A. 3D models of the QH31, QH32 and QH33 tombs in Qubbet el Hawa (Aswan, Egypt). Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, 43, 1427–1434.
24. Zlot, R.; Bosse, M. Three-dimensional mobile mapping of caves. J. Cave Karst Stud. 2014, 76, 191–206.
25. Farella, E.M. 3D mapping of underground environments with a hand-held laser scanner. Bollettino Della Società Italiana di Fotogrammetria e Topografia 2016, 2, 1–10.
26. Di Stefano, F.; Torresani, A.; Farella, E.M.; Pierdicca, R.; Menna, F.; Remondino, F. 3D surveying of underground built heritage: Opportunities and challenges of mobile technologies. Sustainability 2021, 13, 13289.
27. Fassi, F.; Perfetti, L. Backpack mobile mapping solution for DTM extraction of large inaccessible spaces. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, 42, 473–480.
28. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.L.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile LiDAR System: New Possibilities for the Documentation and Dissemination of Large Cultural Heritage Sites. Remote Sens. 2017, 9, 189.
29. Elhashash, M.; Albanwan, H.; Qin, R. A Review of Mobile Mapping Systems: From Sensors to Applications. Sensors 2022, 22, 4262.
30. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
31. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
32. Campana, S. Drones in Archaeology. State-of-the-art and Future Perspectives. Archaeol. Prospect. 2017, 24, 275–296.
33. Mozas-Calvache, A.T.; Pérez-García, J.L.; Cardenal-Escarcena, F.J.; Delgado, J.; Mata de Castro, E. Comparison of Low Altitude Photogrammetric Methods for Obtaining DEMs and Orthoimages of Archaeological Sites. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2012, 39, 577–581.
34. Ortiz, J.; Gil, M.L.; Martínez, S.; Rego, T.; Meijide, G. Three-dimensional Modelling of Archaeological Sites Using Close-range Automatic Correlation Photogrammetry and Low-altitude Imagery. Archaeol. Prospect. 2013, 20, 205–217.
35. Blockley, P.; Morandi, S. The recording of two late Roman towers, Archaeological Museum, Milan: 3D documentation and study using image-based modelling. In Digital Heritage 2015; IEEE: Granada, Spain, 2015.
36. Pérez-García, J.L.; Mozas-Calvache, A.T.; Gómez-López, J.M.; Jiménez-Serrano, A. Three-dimensional modelling of large archaeological sites using images obtained from masts. Application to Qubbet el-Hawa site (Aswan, Egypt). Archaeol. Prospect. 2018, 26, 121–135.
37. Gómez-López, J.M.; Pérez-García, J.L.; Mozas-Calvache, A.T.; Delgado-García, J. Mission Flight Planning of RPAS for Photogrammetric Studies in Complex Scenes. ISPRS Int. J. Geo-Inf. 2020, 9, 392.
38. Ullman, S. The interpretation of structure from motion. Proc. Royal Soc. B 1979, 203, 405–426.
39. Koenderink, J.J.; Van Doorn, A.J. Affine structure from motion. J. Opt. Soc. Am. A 1991, 8, 377–385.
40. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
41. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2011.
42. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42.
43. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006.
44. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148.
45. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
46. Fitzgibbon, A.W. Robust registration of 2D and 3D point sets. Image Vis. Comput. 2003, 21, 1145–1153.
47. Sahilioglu, Y.; Kavan, L. Scale-Adaptive ICP. Graph. Models 2021, 116, 101113.
48. Murphy, M.; McGovern, E.; Pavia, S. Historic building information modelling (HBIM). Struct. Surv. 2009, 27, 311–327.
49. Lambers, K.; Remondino, F. Optical 3D measurement techniques in archaeology: Recent developments and applications. In Proceedings of the 35th International Conference on Computer Applications and Quantitative Methods in Archaeology, Berlin, Germany, 2–6 April 2007.
50. Jaén Paraiso Interior. Site Castillos y Batallas. The Route. Available online: https://www.jaenparaisointerior.es/en/castillos-y-batallas/la-ruta (accessed on 30 May 2023).
51. American Society for Photogrammetry and Remote Sensing. ASPRS Positional Accuracy Standards for Digital Geospatial Data. Photogramm. Eng. Remote Sens. 2015, 81, 277.
52. CIPA Heritage Documentation. The Photogrammetric Capture. The '3 × 3' Rules. Available online: https://www.cipaheritagedocumentation.org/ (accessed on 16 February 2023).
| Products | Scale Level | Description | Area/Dimension | GSD | Point Spacing |
| --- | --- | --- | --- | --- | --- |
| 3D model, orthoimage, DEM | SL1 | Full zone | 20,000 m² | >10 mm | >10 cm |
| 3D model | SL2 | Complete wall | 300 m | 1–10 mm | 1–5 cm |
| Orthoimages, DEMs | SL3 | Wall section | <20 m | 0.1–1 mm | <1 cm |