Article

Benchmarking Different SfM-MVS Photogrammetric and iOS LiDAR Acquisition Methods for the Digital Preservation of a Short-Lived Excavation: A Case Study from an Area of Sinkhole-Related Subsidence

1 Dipartimento di Matematica e Geoscienze, University of Trieste, Via Weiss 2, 34128 Trieste, Italy
2 Petroleum Engineering Program, Texas A&M University at Qatar, Education City, Doha P.O. Box 23874, Qatar
3 Dipartimento di Scienze della Terra, Sapienza University of Rome, P.le Aldo Moro 5, 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5187; https://doi.org/10.3390/rs14205187
Submission received: 9 September 2022 / Revised: 7 October 2022 / Accepted: 12 October 2022 / Published: 17 October 2022
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Abstract

We are witnessing a digital revolution in geoscientific field data collection and data sharing, driven by the availability of low-cost sensor platforms capable of generating accurate surface reconstructions, as well as the proliferation of apps and repositories which can leverage their data products. Whilst the wider proliferation of 3D close-range remote sensing applications is welcome, improved accessibility is often at the expense of model accuracy. To test the accuracy of consumer-grade close-range 3D model acquisition platforms commonly employed for geo-documentation, we have mapped a 20-m-long trench using aerial and terrestrial photogrammetry, as well as iOS LiDAR. The latter was used to map the trench using both the 3D Scanner App and PIX4Dcatch applications. Comparative analysis suggests that only in optimal scenarios can geotagged field-based photographs alone result in models with acceptable scaling errors, though even in these cases, the orientation of the transformed model is not sufficiently accurate for most geoscientific applications requiring structural metric data. The apps tested for iOS LiDAR acquisition were able to produce accurately scaled models, though surface deformations caused by simultaneous localization and mapping (SLAM) errors are present. Finally, of the tested apps, PIX4Dcatch is the only iOS LiDAR acquisition tool able to produce correctly oriented models.

1. Introduction

Digital photogrammetry and LiDAR-based geospatial field data acquisition using smartphones and tablets is revolutionizing the use of close-range 3D remote sensing within the geosciences [1,2,3,4,5,6,7]. Commensurately, the rapid uptake of low-cost, readily deployable multi-sensor drones has extended the reach of such techniques, enabling nadir-view photogrammetric surveys of horizontal outcrops, as well as occlusion-free reconstructions of large vertical sections [8,9,10]. Despite the relative simplicity with which 3D surface reconstructions of geological exposures (i.e., virtual or digital outcrop models: [11,12,13,14,15,16,17,18,19]) can be acquired using such platforms, the reliability of the geospatial information extracted from their data products is typically unclear, particularly when survey-grade measurements are unavailable to calibrate and benchmark the resultant outcrop models. The deployed sensor platform’s accuracy and precision in terms of position and orientation are often key considerations for many geoscientific applications of 3D surface reconstructions, with deviations in the resultant scale, geolocation and attitude of the generated models being deleterious to the quality of metric data extracted from them. Recently, Uradziński and Bakuła [20] have shown that, under optimal conditions, dual-frequency receivers on smartphone devices allow geolocation with accuracies of a few tens of centimeters after post-processing carrier phase correction, providing accurate ground control points (GCPs) for georeferencing and scaling. Analogously, other authors have demonstrated that 3D models can be satisfactorily oriented and scaled by utilizing smartphone camera pose information (i.e., the camera’s extrinsic parameters) to register photogrammetry-derived models (e.g., [21,22,23]). Similarly, Corradetti et al. [4] obtained oriented and scaled 3D models using inertial measurement unit (IMU) derived smartphone orientation data, with constraints on the device’s (and thus image’s) major axis provided by a handheld gimbal. Whilst relatively streamlined in comparison to conventional GCP geolocation using survey-grade tools (i.e., differential GNSS or total station surveys), these methods still require a degree of setup in the field and significant post-processing. However, many casual users who routinely acquire photographs for close-range photogrammetry, and more recently scans from iOS LiDAR devices, do so without any predefined strategy for the georectification of the resultant 3D model, severely limiting its utility as a medium for quantitative geological analysis. Conversely, it is well established amongst geospatial specialists that the absence of a sound registration strategy can negatively impact results.
Three-dimensional models of outcrops represent valuable tools to document, analyze and interpret geology, as they provide the basis to efficiently extract quantitative information from geological exposures inaccessible by manual fieldwork [11,13,24,25,26,27,28,29]. To date, geological surface reconstructions have enjoyed diverse applications within numerous geoscientific disciplines, including structural geology (e.g., [24,30]), sedimentology (e.g., [31,32]), stratigraphy (e.g., [33]), volcanology (e.g., [34]), geomorphology (e.g., [35,36]), and slope stability analysis and landslide monitoring (e.g., [37,38,39,40]). Such models are also routinely employed within geo-heritage site documentation (e.g., [41,42]), as well as for the documentation of excavations (e.g., [30,43]). In recent years, geological surface reconstructions have also been leveraged as pedagogical tools to enhance contextual understanding and 3D thinking within the classroom [44,45,46,47], and to deliver virtual geological field trips to geoscience students, industry practitioners and the wider public (e.g., [48,49,50,51]). Modern digital mapping has also been used to assist the inclusion of persons with disabilities in geoscience education and research (e.g., [52]). The demand for sharing such models has led to the development of dedicated online virtual outcrop model databases, such as e-Rock [53], Svalbox [54], and V3Geo [55]. Though the aforementioned developments can be viewed as positive for the geoscience community, the wider proliferation of digital outcrop modelling techniques requires awareness of the accuracy limits of these powerful tools, particularly when sharing quantitative data extracted from such media (e.g., [55]).
Using a case study of a trench that was recently excavated to probe historical subsidence in the proximity of an infilled sinkhole in the municipality of Enemonzo (Italy), we have investigated the utility of various acquisition strategies for the generation of correctly oriented and scaled 3D models. The geological interpretation of the study area has been presented elsewhere [56,57] and the reader is invited to consult the aforementioned work for further details. In summary, the area is characterized by the presence of several types of sinkholes [58,59], with the studied trench located immediately west of a sinkhole that manifested at the surface during the 1970s ([56] and the references cited therein) and reactivated in the 1980s and 2010s. The presence of this sinkhole is related to Triassic evaporitic bedrock mantled by variably consolidated and loose Quaternary deposits, whose thickness varies across the area, from north to south, from a few meters to more than 60 m [56,60]. Bedrock dissolution associated with groundwater flux is thought to have caused the collapse of the sinkholes [61,62] within the present study area.
A detailed description of the acquisition procedure and setup is presented within the Methods section below. In summary, we have used three camera-equipped devices, namely a DJI Air 2S drone, a Nikon D5300 camera, and an iPhone 13 Pro, to generate structure-from-motion multi-view stereo (SfM-MVS) photogrammetric reconstructions of the trench. The model made from the DJI Air 2S dataset (hereafter named Air 2S) included a larger acquisition area, which, coupled with its superior nominal accuracy, provided a benchmark against which the ground-based surveys could be compared. The Air 2S model was also compared against a LiDAR-derived Digital Terrain Model (DTM) available at 1 m resolution to check its vertical accuracy in relation to the world frame. Moreover, additional field-based surveys were performed using the embedded LiDAR sensor of the iPhone 13 Pro, using the 3D Scanner App and PIX4Dcatch apps, to evaluate their utility towards geospatial site documentation.
The comparative analysis here indicates that only in optimal scenarios (i.e., when the accuracy of the geospatial positioning is intrinsically high) can geotagged field-based photographs alone result in models with acceptable scaling errors, though even in these cases the orientation of the transformed models is not sufficiently accurate for many geoscientific applications. Moreover, misalignment of the reconstructed scene is exacerbated when the acquisition is performed in a collinear fashion. The apps tested for iOS LiDAR acquisition were able to produce accurately scaled models. However, their resultant scene reconstructions exhibited surface deformations caused by simultaneous localization and mapping (SLAM) errors, which in turn may prove detrimental to the accuracy of structural measurements extracted from the model surfaces. Finally, of the tested apps, PIX4Dcatch is the only iOS LiDAR acquisition tool able to produce correctly oriented models.

2. Methods

The studied trench is ~20 m long, with an approximately east-west strike direction. The southern wall of the trench was cut vertically and prepared for the study, whereby gridlines demarcating 1 m² subregions were installed, providing a reference frame for the comparative analysis. Following the preparation of this framework, field surveys using the aforementioned remote sensing platforms were performed on 6 April 2022. Weather conditions on the day of the surveys were overcast, providing uniform diffuse light. This favored the acquisition of the north-facing trench wall by minimizing shadowing in the captured images and their resultant reconstructed scenes, thus limiting the impact of shifts in solar azimuth and zenith upon model quality.
The drone used for the acquisition is a DJI Air 2S (Table 1), which is equipped with an embedded GNSS positioning system (GPS, GLONASS, and Galileo constellations), a compass, and an inertial measurement unit (IMU). According to the manufacturer, the hovering accuracy of vertical and horizontal positioning with GNSS is ~0.5 m and ~1.5 m, respectively. The DJI Air 2S is equipped with a 20-megapixel camera with a 1” CMOS sensor, mounted on a three-axis gimbal. A total of 226 photos were taken in JPG format (5472 × 3648 pixels) at 72 dpi resolution, at distances of ~0.4 to 80 m from the scene in manual flight mode. Each photo’s camera position (latitude, longitude, and altitude) and pose information (yaw, pitch, and roll angles) are automatically recorded. All photos taken with this platform have a ~0° roll angle, attributable to gimbal stabilization (e.g., [4]). This dataset includes aerial views of the trench, including the surrounding roads and buildings.
The third set of photographs was taken using an iPhone 13 Pro (Table 1). This device is also equipped with a GNSS receiver (GPS, GLONASS, Galileo, QZSS, and BeiDou constellations), with the final location provided by the Apple Core Location framework, which combines GNSS geolocation measurements with data provided by Bluetooth and Wi-Fi networks when available. The iPhone 13 Pro embeds an IMU and a magnetometer able to provide orientation data; however, only the azimuthal orientation of the camera direction is preserved in each photo, in addition to its geographic coordinates and altitude. A total of 329 photographs at 12.2-megapixel resolution were acquired, with the majority captured at close range from inside the trench.
These three photographic datasets were processed independently in Agisoft Metashape Professional (version 1.8.1), a commercially available SfM-MVS photogrammetric reconstruction software platform. Geographic coordinates obtained from each platform were converted to UTM zone 33N-WGS84 (EPSG: 32633) within Metashape. Exported point clouds were subjected to comparative analysis within CloudCompare, an open-source software package for point cloud processing and analysis [63]. The computation speed of this analysis was enhanced by leveraging a virtual machine installed on a Dell PowerEdge R7525 server rack hosted at the Department of Mathematics and Geosciences at the University of Trieste (Italy), equipped with an AMD EPYC™ 7F72 (beanTech, Udine, Italy) chipset and NVIDIA GRID RTX8000P GPU architecture. Moreover, to further improve computation speed, all point clouds were decimated using random sampling to 12 M points within CloudCompare. This value was chosen arbitrarily and considered adequate to represent the reconstructed geometry. In CloudCompare, the comparative analysis was performed after manual point cloud alignment using a minimum of four non-collinear points. The results were visually inspected, and the alignment procedure was repeated when deemed unsatisfactory.
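As a minimal illustration of this preprocessing step, the random decimation can be reproduced outside CloudCompare with a few lines of Python (a sketch, assuming the cloud is held as an N × 3 NumPy array):

```python
import numpy as np

def random_decimate(points: np.ndarray, target: int = 12_000_000,
                    seed: int = 0) -> np.ndarray:
    """Randomly subsample an (N, 3) point cloud to a fixed point count,
    mirroring the random subsampling applied in CloudCompare."""
    if len(points) <= target:
        return points
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(points), size=target, replace=False)
    return points[keep]
```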
In addition to its 12.2-megapixel digital camera, the iPhone 13 Pro is equipped with a built-in LiDAR scanner. This sensor simultaneously emits 576 laser pulses and measures their rebound to acquire scene geometry, which under optimal conditions has a range of up to 5 m [5,7]. Since the introduction of this sensor in the iPad Pro and iPhone 12 Pro in 2020, several apps have been developed to retrieve geospatial information in the form of point clouds and textured meshes [5,64]. In this work, we have tested the 3D Scanner App (v. 1.9.8) and PIX4Dcatch (v. 1.12.0) apps. During each acquisition, the iPhone was mounted on a DJI OSMO 3 gimbal to limit the deleterious impacts that abrupt movements impose upon the iPhone’s IMU measurements, and hence reduce possible errors resulting from the simultaneous localization and mapping (SLAM) required to generate the LiDAR models.
The 3D Scanner App is a free app for iOS LiDAR acquisition, which produces a textured mesh of the scanned scene as output. Several settings controlling the resolution of the acquisition are available. In this work, we set the acquisition to (i) low confidence, (ii) 2.0 m range, (iii) no masking, and (iv) 8 mm resolution. The 3D Scanner App LiDAR survey required ~6 min; the scanning duration was long because the coverage extended to the entire trench (including the trench floor). The textured model output by the 3D Scanner App was processed in less than two minutes using the acquisition device at the field site. Moreover, the output models are natively scaled, thus providing metric information directly in the field, with the size of model features being ascertained by selecting two points on the model surface within the 3D Scanner App. The textured meshes can be exported using the Wavefront *.obj format, while colored point clouds of the scene can be exported via the *.xyz format. During the acquisition, the app also captures low-resolution (2.7 MP) images of the scene at a frequency of ~2 Hz. These images are primarily used for generating texture maps and for assigning RGB attributes to the point cloud, but they can also constitute a backup dataset for generating a stand-alone SfM-MVS model.
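The exported *.xyz point clouds are plain text and straightforward to ingest for downstream analysis. The snippet below is a sketch that assumes one "x y z r g b" record per line; the exact column layout may differ between app versions and should be verified against the export itself:

```python
import numpy as np

# Load a colored point cloud exported from the 3D Scanner App.
# Assumed layout: one "x y z r g b" record per line; verify the column
# order against your own export before relying on it.
data = np.loadtxt("trench_scan.xyz")
xyz = data[:, :3]   # point coordinates in the app's local frame (m)
rgb = data[:, 3:6]  # per-point color attributes
```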
The PIX4Dcatch app is part of a larger software suite that includes PIX4Dmapper, a popular SfM photogrammetric reconstruction software package. The PIX4Dcatch app is free to use but requires a subscription to export data. In this work, the default acquisition settings of PIX4Dcatch were used, including an image overlap of 90%. With the device mounted on the DJI OSMO 3, we acquired the north-facing wall of the trench. The PIX4Dcatch app does not process the data within the smartphone but requires the LiDAR project to be uploaded to the PIX4Dcloud for processing (an upload of 1.51 GB was required for the test case in this study). In addition to the textured mesh and point cloud, this app stores 2.7 MP images of the scene, including their position (latitude, longitude, and altitude) and pose information (omega, phi, and kappa angles), which are directly readable when imported into third-party software (e.g., Metashape). Despite this flexibility, the LiDAR-generated depth maps can only be read if processed using PIX4Dcatch.

3. Processing Outline and Results

In total, 226 photos from the drone (56 nadir-view and 170 oblique images of the trench wall) were processed in Metashape, using high accuracy alignment and high-quality densification settings. The resulting dense point cloud has ~111 million points and comprises the trench and its surrounding area (Figure 1a). Owing to the robust GNSS positioning data provided by the drone, coupled with a wider acquisition area, the resulting model appears reasonably georeferenced with respect to its horizontal components (Figure 1a). By contrast, after comparison with a freely available 1 m resolution LiDAR-derived Digital Terrain Model (DTM) of the Friuli Venezia Giulia region (available at http://irdat.regione.fvg.it/CTRN/ricerca-cartografia/ (accessed on 15 July 2022)), a significant vertical translation was observed (Figure 1b). It should be noted that both datasets are framed in terms of their altitude above sea level (ASL). The cloud-to-cloud distance between the DTM and the Air 2S model is relatively uniform throughout the area, at about 8 m (Figure 1c), indicating minimal angular deviation between the two reconstructions. It is worth noting that accurately positioned ground control points (GCPs) would be required to obtain sub-centimeter accuracy in both horizontal and vertical directions and to check the consistency of the model throughout the investigated area (e.g., [65]). Nevertheless, despite this vertical shift, for the scope of this work we consider this dataset, without any vertical translation, as the benchmark against which the other models will be compared. To enhance the reconstruction quality of the trench, we selected a smaller region and repeated the densification procedure in Metashape after disabling all aerial photographs, whilst keeping the pre-established alignment. This procedure resulted in a benchmark point cloud composed of ~124 million points (~0.47 points/mm² at the center of the scene), which was later decimated in CloudCompare, as described within the Methods section.
Metashape is able to read the EXIF metadata tagged onto photographs, which in the case of the Air 2S dataset includes the gimbal yaw, pitch and roll angles. Note that in this study, the roll angle is effectively fixed at 0°, owing to gimbal stabilization. The iPhone dataset only records the image’s direction with respect to true north, which Metashape automatically identifies as the yaw angle. The Nikon D5300 does record GNSS geolocation data but does not record camera orientation parameters.
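For readers who wish to inspect these pose tags directly, DJI platforms write the gimbal angles into the XMP packet embedded in each JPG. The sketch below pulls them out with a regular expression; the drone-dji:Gimbal*Degree tag names are an assumption based on common DJI firmware and may vary by model:

```python
import re

def read_dji_gimbal_pose(jpg_path: str) -> dict:
    """Extract gimbal yaw/pitch/roll (degrees) from the XMP metadata that
    DJI drones embed in each JPG. The tag names are assumed from common
    DJI firmware (drone-dji:Gimbal*Degree) and may differ by model."""
    with open(jpg_path, "rb") as f:
        data = f.read()
    pose = {}
    for angle in ("Yaw", "Pitch", "Roll"):
        m = re.search(
            rf'drone-dji:Gimbal{angle}Degree="([-+]?[0-9.]+)"'.encode(), data)
        if m:
            pose[angle.lower()] = float(m.group(1).decode())
    return pose  # e.g. {'yaw': 171.3, 'pitch': -12.5, 'roll': 0.0}
```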
In Metashape, photo-alignment based on the extrinsic parameters takes into account only location information, though orientation parameters can be utilized in post-processing. In the case of the Air 2S dataset, the use of the orientation parameters results in an anticlockwise rotation of the model around the world frame’s vertical axis of ~3.4 degrees. This value is close to the magnetic declination at the site (3.2°, i.e., the angular deviation between magnetic and geographic north). Hence, this additional registration may prove useful in cases where the model needs to be aligned to magnetic instead of geographic north (e.g., when comparing orientation data collected from a model against equivalent field measurements taken with a compass or compass-clinometer).
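Applying such a declination correction amounts to a rigid rotation about the vertical axis. A minimal NumPy sketch, assuming an N × 3 cloud in an east-north-up frame and rotating about the cloud centroid:

```python
import numpy as np

def rotate_about_vertical(points: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate a point cloud about the world frame's vertical axis
    (positive angles are anticlockwise viewed from above), e.g. to move
    a model between geographic and magnetic north."""
    a = np.deg2rad(angle_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    c = points.mean(axis=0)          # rotate about the cloud centroid
    return (points - c) @ Rz.T + c

# e.g. rotate_about_vertical(cloud, 3.4) reproduces the ~3.4 degree
# anticlockwise rotation observed for the Air 2S dataset
```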
The iPhone alignment was produced using medium accuracy and high-quality densification settings. The resulting dense point cloud has ~123 million points (~0.64 points/mm² at the center of the scene). The use of the azimuthal orientation parameter associated with iPhone photographs for the model’s alignment to magnetic north is troublesome. Camera azimuth with respect to north cannot be used alone, since the software assigns null values to the pitch and roll fields. Moreover, the north bearing may not correspond to the yaw angle, for example in the case of portrait photos. Whilst we have not robustly tested the use of the iPhone orientation parameters, the iPhone model appears to match the orientation and scale of the Air 2S model when visually compared (Figure 2a), although a major vertical translation of ~5.5 m is observed (Figure 2b). This translation reduces to ~2.5 m ASL in real-world coordinates when the vertical shift between the Air 2S model and the LiDAR-derived DTM is considered. Closer inspection reveals that whilst the scaling between the two models is comparable, with a scaling factor of 0.995, a rotation of ~13° around the x-axis (which corresponds to the strike of the trench) is observed (Figure 2c).
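The scale factor and residual rotation reported above can be recovered from a handful of corresponding points picked on the two models using a closed-form similarity transform. A sketch based on Umeyama’s (1991) method, with manually picked correspondences as in CloudCompare:

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares similarity transform (Umeyama, 1991):
    returns (s, R, t) such that dst ~ s * R @ src + t, given (N, 3)
    arrays of corresponding points (N >= 3, non-collinear)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, d, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                  # guard against reflections
    R = U @ S @ Vt
    var_src = ((src - mu_s) ** 2).sum() / len(src)
    s = np.trace(np.diag(d) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# The rotation about the x-axis (the trench strike) can then be read off
# as np.degrees(np.arctan2(R[2, 1], R[2, 2])), ~13 degrees in this case.
```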
Both models acquired directly using the iPhone’s LiDAR sensor, through the 3D Scanner App and PIX4Dcatch apps, lack georeferencing but are registered within a local coordinate system. Consequently, a translation had to be applied to observe both LiDAR models together with the benchmark model (Figure 3). Notably, the PIX4Dcatch model was correctly oriented. To test whether the correct orientation of the PIX4Dcatch model was coincidental, we undertook a second survey at a different locality (not shown) and found that this model was also correctly oriented. In contrast, the 3D Scanner App model was misoriented, with the z-axis oriented almost at 90° to its expected value (Figure 3). The x-axis was approximately parallel to the world frame x-axis but inverted. To test whether this erroneous registration resulted from an incorrect reading order of the registered x, y and z coordinates, the model was rotated 180° around the world frame z-axis and 90° around the x-axis. The resulting model still deviated by ~30° around a z-axis rotation from the world frame. Finally, scaling factors of 1.00123 and 1.00646 had to be applied to the 3D Scanner App and PIX4Dcatch models, respectively, to match the photogrammetric benchmark model.
The photo-alignment of the Nikon DSLR generated model was also problematic. Despite using medium accuracy settings in Metashape (in source preselection mode), the initial reconstruction failed after more than eight hours of processing time on the available computing hardware (the virtual machine described above). Thus, a second reconstruction of the scene captured using the Nikon D5300 was attempted at low accuracy in source preselection mode, which was later reset to medium accuracy in estimated preselection mode for a secondary alignment. This resulted in a sparse point cloud of ~2.7 million points, which expanded to 334 million points (~0.18 points/mm² at the center of the scene) after dense reconstruction. The resulting model (hereafter termed the Nikon model) is poorly scaled and oriented. The scaling factor with respect to the reference model was ~10, meaning the Nikon model was about 10 times larger. The orientation of the model (not shown) was also arbitrary, with the z-axis aligned almost perpendicular to that of the world frame.

3.1. Comparative Analysis in CloudCompare

After registering all test point clouds with the benchmark in CloudCompare, we meshed the benchmark model and then performed a point-cloud-to-mesh distance calculation for each test point cloud to investigate the occurrence of surface deformations (Figure 4). As shown in the histogram of Figure 4b, ~82% of the points belonging to the iPhone photogrammetric point cloud are within ±1.5 cm of the benchmark model’s surface (and ~68% within ±1 cm), with the majority of outliers located on the floor of the trench. The Nikon point cloud only represented the north-facing wall of the trench and, similarly to the iPhone point cloud, exhibited 85.6% of points within ±1.5 cm of the benchmark model (and ~72% of points within ±1 cm). For the Nikon model, some outliers were located at the eastern wall of the trench and on the western side of the north-facing wall. The iPhone LiDAR model captured using the 3D Scanner App covered the entirety of the trench. Data acquisition using the 3D Scanner App started from the area indicated by the green arrow in Figure 4e and followed an approximately anticlockwise transect, which tracked the sidewalls of the trench and the trench floor. After completing this circuit, the acquisition returned to the center of the scene, terminating at the location indicated by the dark-red arrow in Figure 4e. About 77% of the points of the 3D Scanner App point cloud are located within ±6 cm of the benchmark model’s surface. In this case, most of the outliers are located in the western sector of the north-facing wall. Notably, this area coincided with the start and end of the survey transect used for the 3D Scanner App LiDAR acquisition. Finally, ~94% of the PIX4Dcatch point cloud lies within ±2.5 cm of the benchmark model, with outliers mainly present in the western sector of the acquisition area.
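The point-cloud-to-mesh comparison can be reproduced outside CloudCompare. The sketch below uses the trimesh library to compute each point’s closest distance to the benchmark mesh and the fraction of points within a tolerance (file names are placeholders):

```python
import numpy as np
import trimesh

def fraction_within(mesh_path: str, points: np.ndarray, tol: float) -> float:
    """Fraction of cloud points lying within +/- tol metres of a benchmark
    mesh surface (unsigned closest-point distance, as in Figure 4)."""
    mesh = trimesh.load(mesh_path)
    _, dist, _ = trimesh.proximity.closest_point(mesh, points)
    return float((dist <= tol).mean())

# e.g. fraction_within("benchmark_mesh.ply", iphone_cloud, 0.015) should
# return ~0.82 for the iPhone photogrammetric cloud reported above
```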

3.2. Orthomosaics

In Metashape, the 3D models of the photographic datasets (Air 2S, Nikon and iPhone) were also used to produce three orthomosaics from their associated textured meshes (Figure 5). It should be noted that the Air 2S orthomosaic in Figure 5 was produced after removing from the aerial survey, prior to texturing, all nadir-view images oriented at an acute angle to the trench wall, which would otherwise introduce blurring into the resultant texture map. The resultant orthomosaics reached pixel sizes of 0.99, 1.00, and 0.635 mm/pixel for the Air 2S, Nikon and iPhone models, respectively. Interestingly, an earlier Air 2S orthomosaic produced prior to the removal of nadir-view images (not shown) reached a pixel size of 2.96 mm/pixel. The orthomosaics in Figure 5 were exported at a fixed resolution of 3 mm/pixel for comparison. The orthomosaics generated using each 3D imaging modality capture the scene of interest adequately, such that details like individual clasts within the trench wall are resolvable (see Figure 5).
The two available textured meshes obtained from the 3D Scanner App and the PIX4Dcatch app were also used to generate two orthopanels using the LIME software suite [66] (Figure 5).

4. Discussion

Three SfM-MVS photogrammetry-based surveys using consumer-grade camera platforms and two iOS LiDAR-based apps were tested in this work to evaluate their ability to reproduce the geometry and optical signature of a typical geoscience and geoarchaeological field site (a trench transecting a sinkhole).

4.1. Scaling and Orientation Accuracy of SfM-MVS Models

The first of the three SfM-MVS photogrammetry-derived models (hereafter termed SfM models for parsimony) was generated from an aerial photographic survey performed with a DJI Air 2S drone (226 images), including photographs captured within the trench. The Air 2S dataset covered a much larger area (>1200 m²) than the compared surveys. This acquisition strategy, coupled with the excellent GNSS, IMU and stabilization capabilities of this device, resulted in the most accurately scaled and (horizontally) georeferenced of the SfM models, as evidenced by the aerial orthophoto derived from the model added as an overlay in Google Earth (Figure 1). Consequently, the Air 2S model was used as a benchmark with which to test the reconstruction quality of the remaining survey methods. It should be noted that the Air 2S model is vertically translated by ~8 m (Figure 1). The assumed fidelity of the Air 2S model is therefore an approximation, though its internal scaling and registration with respect to the horizontal axes of the world frame are likely reliable for the sake of comparison.
The use of photographs taken with the Nikon D5300, which are characterized by low-accuracy (GPS) geotags and lack camera orientation metadata, resulted in a model with a scaling factor of about 10 and arbitrary orientation. Despite the larger quantity of photographs in the Nikon dataset (382 images) and the larger sensor resolution, the positional error of the GPS sensor exceeded the size of the investigated area (>100 images had a position error > 100 m). It was only after 150 images were taken that the signal error stabilized below 20 m. It is likely that the low accuracy of the dataset’s geolocation information, in combination with the relatively high pixel count of the survey, contributed to the failed alignment in Metashape encountered during the first attempted reconstruction.
The third SfM model was built from 329 photographs captured using an iPhone 13 Pro. The resultant 3D model was natively scaled, with a scale factor to the benchmark of ~0.995. This revealed that the multi-satellite GNSS receiver of the iPhone, used by the Apple Core Location framework to retrieve the camera position, provides reliable location data when large enough datasets are used. Nevertheless, recent single-point accuracy tests performed with the previous iPhone Pro model (the iPhone 12 Pro, [7]) have evidenced a location accuracy within a few meters that stabilized within seconds. Further to this, the Apple Core Location framework can provide more accurate positioning than standard consumer-grade GNSS receivers when used within residential areas, by combining data provided by the GNSS receiver, Wi-Fi networks, and nearby Bluetooth devices. The device is also equipped with the iBeacon micro-location system, enabling indoor navigation when available. A possible way to discern whether the final precision is due to the averaging of a large dataset or to the intrinsic location accuracy of the Apple Core Location framework is to test the precision and accuracy of each photo’s position against the values estimated through the photo-alignment process in Metashape. During the photo-alignment workflow, each photo is re-positioned to a new location that better fits the overall geometry of the scene, linking each image to the reconstructed scene and minimizing the reconstruction error (e.g., [67,68]). If the model is properly scaled, the difference between the estimated and measured camera locations provides a proxy for the precision of the Apple Core Location framework at the site. It can be seen in Figure 6a that the estimated precision of the iPhone 13 Pro positioning is generally <1 m along each axis. The highest error can be observed in the first three photographs, suggesting that after a minimal time the location signal stabilizes (as also previously observed by Tavani et al. [7]). To estimate the accuracy of the camera positions (which, unlike the precision, is referenced to a benchmark), we aligned the iPhone model to the Air 2S model (considered in this work as the benchmark model), such that the differences between the new estimated camera positions and the measured positions represent the accuracy of the device (Figure 6b). The alignment in Metashape was achieved by providing the coordinates of three non-collinear markers obtained from the Air 2S model (in lieu of robust GCPs). It has to be noted that this is an ambitious assumption: even though the lat-long positioning of the benchmark model has been observed to be approximately correct (see Figure 1), the actual altitude of the model suffers from a vertical translation of ~−8 m in the absence of a survey-grade registration. This comparison shows a significant error in the vertical axis of the world frame (~5.9 m). Considering that the Air 2S model itself suffers a ~−8 m translation from the LiDAR-derived DTM (Figure 1b,c), the mean vertical error of the iPhone is ~2.1 m. The accuracy estimate in the east direction is within 1 m (the average easting error is 0.6 m), while the north direction accuracy estimate is mostly between 0 and 3 m (the average northing error is 1.5 m) (Figure 6b). The total accuracy error is 6.14 m and mostly reflects the vertical shift. The anticlockwise rotation of about 13° around the x-axis is likely related to the photo-survey acquisition strategy.
In fact, having performed the acquisition along the east-west direction (i.e., parallel to the strike of the trench), most of the photographs lie along the x-axis of the world frame. The high collinearity of this dataset is conducive to the introduction of rotational errors around this axis. The availability of camera pose information in the EXIF files (i.e., as for the Air 2S dataset) would have provided the means to mitigate such errors.
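The per-camera precision estimate described above (measured geotag versus Metashape-adjusted position, Figure 6a) can be extracted programmatically. Below is a minimal sketch using the Metashape Python scripting API, assuming an open document whose active chunk has been aligned and georeferenced:

```python
import Metashape  # Agisoft Metashape Professional scripting API

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix                 # chunk -> geocentric transform

for camera in chunk.cameras:
    if camera.transform is None or camera.reference.location is None:
        continue                           # skip unaligned or untagged photos
    # adjusted camera position projected into the chunk CRS (UTM 33N here)
    est = chunk.crs.project(T.mulp(camera.center))
    ref = camera.reference.location        # geotagged (measured) position
    err = est - ref
    print(f"{camera.label}: dE={err.x:.2f} dN={err.y:.2f} dZ={err.z:.2f} m")
```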

4.2. Scaling and Orientation Accuracy of the iPhone LiDAR Reconstructions

The scaling of the point clouds derived from the LiDAR sensor of the iPhone 13 Pro closely matches that of the Air 2S model. The 3D Scanner App model had a scaling factor of 1.0012, while the PIX4Dcatch model had a scaling factor of 1.0065. Whilst the latter app is also able to return correctly oriented models, the 3D Scanner App model was arbitrarily oriented. Prior to its latest updates, the PIX4Dcatch app was also unable to produce correctly oriented models [7]. The capacity to build accurately scaled and oriented models directly in the field (assuming mobile network connectivity) is a potential game changer for applications that require the rapid collection of attitude data at a given study site (e.g., fault and fracture analysis). It should also be noted that handheld real-time kinematic (RTK) GNSS rovers have recently become available for selected iPhone and Android devices (e.g., the viDoc RTK rover). These highly accurate add-on receivers offer the potential to turn smartphones into survey-grade GNSS tools, though at present the cost of these devices rivals that of standalone RTK-GNSS platforms. We acknowledge that a relatively modest upgrade has been announced with the release of dual-frequency receivers in the iPhone 14 Pro. Nevertheless, the typically limited access to the raw data of the Apple Core Location framework may limit or impede the post-processing carrier phase fix (e.g., [20]).

4.3. Internal Accuracy of Reconstructions

All point clouds produced by the survey methods deployed in this study were compared to test for deformations in the reconstructed scene after manual point cloud alignment (similarity transform). After translating, rotating, and scaling each cloud to fit the Air 2S model, their distances to the benchmark model were computed (Figure 4). When using such a comparison, it must be borne in mind that the point-to-mesh distance incorporates some disparity due to the tessellation of the surface (mesh). Most of the points composing the iPhone model (~82%) are <1.5 cm from the benchmark model surface, with most of the outlier points at the floor of the trench. These deformations observable at the base of the model are probably caused by the obliquity between the photo view direction and the ground, which is typical of ground-based surveys targeting vertical edifices. In this work, the main objective was to reconstruct a single trench wall. In cases where the acquisition of the floor of the trench is required, it is recommended, even during ground-based surveys, to include photos that are normal to sub-normal to the base of the excavation. Like the iPhone model, the Nikon model has 85.6% of its points within 1.5 cm of the benchmark model, with outlier points more randomly distributed. The fact that both the iPhone and the Nikon models have a similar distribution of errors in relation to the distance from the benchmark model (e.g., Figure 4c,d) suggests that, for these two models, reconstruction errors are largely dictated by the limitations of SfM estimation. Reconstruction errors related to SfM techniques are strongly coupled to the resolution of the input image dataset, as well as additional factors such as camera sensor noise and motion blur (e.g., [69]). The internal deformation error associated with the LiDAR acquisition is mostly within 6 cm (for 77% of the point population) and 2.5 cm (94%) for the 3D Scanner App and the PIX4Dcatch models, respectively. It should be noted that the size of these two models and their acquisition strategies were distinct (Figure 4). As a result of the strategy followed during the 3D Scanner App acquisition, the area between the green and red arrows in Figure 4e was scanned at least twice over a relatively long and convoluted path. This path exacerbated errors associated with simultaneous localization and mapping (SLAM) [70]. When the user moves the smartphone while scanning, its position and orientation (i.e., the pose information) must be continuously determined in order to append consecutive scanned portions of the scene. This is generally achieved by merging the pose information provided by visual and inertial sensors [70]. As a result, small errors in the phone’s pose estimation can accumulate and propagate, giving rise to mispositioned points within the model. These deleterious effects are evident from the survey performed with the 3D Scanner App, where the last portions of the scan produced a ‘ghost’ planar feature at a distance of about 25 cm from the feature’s true position. The acquisition carried out through the PIX4Dcatch app was much smoother, having targeted only the north-facing wall of the trench (neither the floor nor any of the other walls). The acquisition proceeded from east to west, as indicated by the green and red arrows in Figure 4g.
It can be observed (Figure 4h) that for smaller and smoother acquisitions, such as the survey conducted with the PIX4Dcatch app, the majority of points (94%) are <2.5 cm from the benchmark model’s surface. Qualitatively, most of the outlier points are located at the periphery of the acquisition; again, this is likely the result of SLAM errors. In effect, SLAM errors are roughly comparable, in terms of their deleterious impact upon model analysis, to the so-called ‘doming effect’ that may affect SfM-MVS photogrammetric reconstructions (e.g., [22,71]).

4.4. Orthopanels

Any of the acquisition methods tested herein can be used to generate textured models of the scene and to orthographically project the resulting model onto a panel, thus generating an orthomosaic (or orthopanel) of the scene [30,31,72,73,74]. This procedure finds considerable application in structural geology, stratigraphy, geoarchaeology and geomorphology, as well as other earth science disciplines, where orthomosaics are generated by orthogonally projecting the model along a direction that minimizes the geometric distortion of the targeted features observable in the model (e.g., geologic structures, bedding planes, clasts, etc. [75,76,77]). In the case of the three SfM models, orthomosaic construction is a trivial additional step in the SfM-MVS reconstruction workflow, available within photogrammetric reconstruction software tools (e.g., Metashape [73]). All three SfM-derived orthomosaics faithfully reproduced the scene, with variable resolutions (<3 mm/pixel) that primarily relate to the average spatial resolution of the photo survey. A 2 × 1 m subregion of these orthomosaics is shown for comparison (Figure 5), after export at a fixed resolution of 3 mm/pixel. In this section, a small (<40 cm throw) structure cutting through the Quaternary strata can be observed (Figure 5). At a glance, it can be confused with a small normal fault, but it corresponds to the eastern side wall of the active sinkhole. How this structure relates to the subsidence evolution of the area is beyond the scope of this work. The orthomosaic produced from the textured model processed in the field by the 3D Scanner App is less sharp than those of the SfM models, although the sidewall and stratification are still discernible (Figure 5). The PIX4Dcatch app allows data to be uploaded to the PIX4Dcloud for remote processing, although in this work we have only used the textured model available in the saved folder of the app. The latter model is not sharp enough to resolve most of the features observed in the other models, as can be seen from the derived orthomosaic (Figure 5).
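For models held as colored point clouds, a crude orthopanel can be produced by projecting the points onto the panel plane and rasterizing them. The sketch below illustrates the idea; it ignores occlusion (the last point written to a pixel wins), whereas production orthomosaics are rendered from the textured mesh:

```python
import numpy as np

def orthopanel(points, colors, normal, px=0.003):
    """Orthographically project a colored (N, 3) cloud onto the plane with
    the given normal and rasterize at px metres/pixel. Crude sketch: no
    occlusion handling; colors are expected as (N, 3) uint8 RGB."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])        # in-plane horizontal axis
    u /= np.linalg.norm(u)                  # (fails for horizontal panels)
    v = np.cross(u, n)                      # in-plane vertical ('up') axis
    uv = points @ np.column_stack([u, v])   # panel-plane coordinates
    ij = np.floor((uv - uv.min(axis=0)) / px).astype(int)
    w, h = ij.max(axis=0) + 1
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = colors        # rasterize point colors
    return img[::-1]                        # put 'up' at the top row
```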

4.5. Final Remarks

In this work, we have tested readily available surface reconstruction methods, leveraging consumer-grade sensor platforms to produce 3D models of a typical field location encountered within geoscience and geoarchaeology applications. Our results have highlighted that only the SfM-MVS-derived Air 2S model and the iPhone LiDAR-derived PIX4Dcatch model satisfactorily recovered the orientation of the scene, with the Air 2S model also being georeferenced. Nevertheless, the Air 2S model suffered a vertical translation of ~8 m with respect to real-world ASL coordinates. The comparison of the Air 2S model with the LiDAR-derived DTM and the aerial orthoimage of Google Earth, together with the geometric consistency of the Air 2S model with the tested iPhone LiDAR models (which should nominally excel in distance accuracy, SLAM errors aside), demonstrates that aerial photogrammetry deployed from consumer-grade drones reaches levels of accuracy in an uncontrolled field setting sufficient for many geoscience field surveying applications where survey-grade measurements are not mission critical (e.g., gauging approximate bed thicknesses, orientations, etc.). In general, there are several geoscience applications where the internal scale and geometric consistency of the scene supersede the need for accurate georeferencing, such as models intended for the quantitative extraction of oriented data (e.g., [29,78]), the production of oriented orthopanels (e.g., [12,73]), and the qualitative observation, preservation, and sharing of models (e.g., [79,80]). For all those cases, our results suggest that modern drones, such as the DJI Air 2S, can be used to produce stand-alone surveys with orientation accuracies sufficient for the vast majority of use cases. Nevertheless, we suggest using such an approach with caution, since the GNSS signal can be subject to occlusions, particularly within mountainous or urban areas. Direct georeferencing alone is not sufficient to establish survey-grade registrations, even when RTK drones with centimetric positional accuracy are used (e.g., [81]), and is therefore not recommended for applications where absolute orientation accuracy is required. For cases where an approximate alignment to real-world coordinates is sufficient, we suggest extending the coverage of the acquisition over a much larger area than the region of interest, whilst avoiding the acquisition of collinear images. It is also recommended to always implement routine quality checking of 3D surface reconstructions produced within the field. Assuming no GCPs are available, it is possible to insert objects of known scale and orientation into the mapped scene to meet this objective (e.g., [4,9]).
A noteworthy aspect of this work is the recognition of the recent improvement obtained by the PIX4Dcatch app in aligning iOS LiDAR-derived models to the world frame. To confirm this observation, we performed an additional test over an object characterized by a simple geometry. The test consisted of five independent iOS LiDAR surveys of the base of an obelisk made of limestone blocks in the city of Trieste (Italy), acquired through the PIX4Dcatch app. The survey strategy involved circumnavigating the obelisk around either half or the entirety of its perimeter, whilst continuously acquiring LiDAR data using the iPhone 13 Pro (Figure 7a). Similarly to the workflow presented within Section 2, we were able to estimate the positional accuracy of the GNSS during the LiDAR acquisition by generating an SfM-MVS photogrammetric model of the obelisk using all frames recorded by the app during the five distinct acquisitions (Figure 7b). Note that, unlike the LiDAR-derived models themselves, each image used for texturing is georeferenced, with its position recorded in its EXIF data. In comparison to what was observed in the case study presented in Section 2, the accuracy estimates in the east and north directions are ~1 m, while that in the vertical direction is ~2 m. Note that the positioning error is almost stable during each discrete acquisition (Figure 7b). This suggests that, following the acquisition of the first frame, subsequent image locations are established based upon inertial measurements. The five resultant LiDAR models, although shifted, are consistently aligned to the world frame. This observation was corroborated by deriving the obelisk orientation data from each of the generated LiDAR point clouds using the Compass plugin of CloudCompare [82] (Figure 7c). All measurements (from 10 to 20 for each model) are plotted as black great circles in Figure 7d. Orientation measurements of the obelisk were also made at the site using the Clino app for iOS on the iPhone 13 Pro and the FieldMove app for iOS on an iPad Pro (https://www.petex.com/products/move-suite/digital-field-mapping/ (accessed on 21 July 2022)). Particular attention was paid during this procedure to avoid magnetic interactions with the target object that might perturb measurement accuracy. In the field, orientation measurements were taken on planar and linear features (Figure 7d). Field measurements in Figure 7d are reported with respect to geographic north (the magnetic declination at the site was +4°). An average rotational error around the vertical axis of the world frame of <5 degrees is observed. Unfortunately, due to the limited magnetic declination at the site, and the proprietary nature of the PIX4Dcatch app’s registration procedure, we cannot discern whether the LiDAR models were intended to be aligned with geographic or magnetic north. If the LiDAR models are natively aligned to magnetic north, then the rotation error is potentially reduced to <1 degree. In any case, knowing this rotation, one can rotate the models, or any derived structural data, accordingly, to maintain an accepted degree of accuracy where possible.
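The plane-orientation extraction used for Figure 7d amounts to least-squares plane fitting on a picked patch of points. The sketch below is equivalent in spirit to a Compass plugin facet measurement, assuming x = east, y = north, z = up coordinates:

```python
import numpy as np

def plane_orientation(points: np.ndarray):
    """Fit a plane to an (N, 3) patch of points via SVD and return its
    (dip direction, dip) in degrees. Assumes x=east, y=north, z=up."""
    centered = points - points.mean(axis=0)
    # the plane normal is the right singular vector associated with the
    # smallest singular value of the centered point matrix
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    if normal[2] < 0:
        normal = -normal                   # force an upward-pointing normal
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip_dir, dip
```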
Overall, the results obtained by this study highlight that commercial products, such as the DJI Air 2S and the iPhone 13 Pro, can serve as useful standalone field data acquisition platforms for diverse applications in the geosciences. Nevertheless, these instruments are subject to several sources of error (e.g., SLAM, geolocation, etc.) that can compromise entire studies. It is recommended that users remain cautious about the quality of models derived from such sensor platforms when no accuracy estimates exist, particularly for applications where the fidelity of the resultant metric data is critical.

5. Conclusions

Progressively more geoscientists are relying upon the claimed accuracy of commercial-grade tools for the 3D modeling of outcrops and landforms, commonly without site-specific validation. Nevertheless, this work has shown that even the most up-to-date consumer-grade tools (e.g., the DJI Air 2S and the iPhone 13 Pro) are subject to numerous errors, which are potentially deleterious to the intended application. Indeed, the magnitude of these errors may be sufficiently profound to nullify the results of entire studies, or may lie within a range that is acceptable for many use cases. To provide baseline reliability, accuracy must always be checked and evaluated for a given study site and/or application. Herein, we tested the geolocation capabilities and the native LiDAR sensor of the iPhone 13 Pro. The obtained results are to be considered positive, particularly with respect to the PIX4Dcatch app, which is able to provide well-scaled and correctly oriented point clouds, as well as image geotags within the associated EXIF metadata.

Author Contributions

Conceptualization, A.C., T.S., M.M., A.B. and L.Z.; investigation, A.C. and A.B.; methodology, A.C., T.S., M.M. and C.C.; software and formal analysis, A.C. and M.M.; writing—original draft preparation, A.C., A.B., C.C., and L.Z.; writing—review and editing, T.S. and M.M.; funding acquisition, A.C. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Geological Survey of the Friuli Venezia Giulia Region within the framework of the following project: Accordo attuativo di collaborazione per l’aggiornamento censimento e pericolosità dei sinkhole del territorio regionale (prot.no. 0035220 of 27 July 2020).

Data Availability Statement

All data in support of this publication, including Agisoft Metashape reports, are available upon request to the corresponding author. A 3D surface reconstruction of the trench is available at https://skfb.ly/ouI7o (accessed on 8 September 2022).

Acknowledgments

The authors would like to acknowledge Chiara Piano (functionary of the Geological Survey of FVG Region), as well as the functionaries of the Enemonzo Municipality Mauro De Prato and Alessandra Fiorese for their assistance, as well as the land owners who facilitated access to the main study site. AC acknowledges Microgrants 2021 resources, funded by the FVG Region (LR 2/2011 “Finanziamenti al Sistema Universitario regionale”).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Pavlis, T.L.; Langford, R.; Hurtado, J.; Serpa, L. Computer-Based Data Acquisition and Visualization Systems in Field Geology: Results from 12 Years of Experimentation and Future Potential. Geosphere 2010, 6, 275–294.
2. Micheletti, N.; Chandler, J.H.; Lane, S.N. Investigating the Geomorphological Potential of Freely Available and Accessible Structure-from-Motion Photogrammetry Using a Smartphone. Earth Surf. Process. Landf. 2015, 40, 473–486.
3. Jaud, M.; Kervot, M.; Delacourt, C.; Bertin, S. Potential of Smartphone SfM Photogrammetry to Measure Coastal Morphodynamics. Remote Sens. 2019, 11, 2242.
4. Corradetti, A.; Seers, T.D.; Billi, A.; Tavani, S. Virtual Outcrops in a Pocket: The Smartphone as a Fully Equipped Photogrammetric Data Acquisition Tool. GSA Today 2021, 31, 4–9.
5. Luetzenburg, G.; Kroon, A.; Bjørk, A.A. Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences. Sci. Rep. 2021, 11, 22221.
6. An, P.; Fang, K.; Zhang, Y.; Jiang, Y.; Yang, Y. Assessment of the Trueness and Precision of Smartphone Photogrammetry for Rock Joint Roughness Measurement. Meas. J. Int. Meas. Confed. 2022, 188, 110598.
7. Tavani, S.; Billi, A.; Corradetti, A.; Mercuri, M.; Bosman, A.; Cuffaro, M.; Seers, T.D.; Carminati, E. Smartphone Assisted Fieldwork: Towards the Digital Transition of Geoscience Fieldwork Using LiDAR-Equipped iPhones. Earth Sci. Rev. 2022, 227, 103969.
8. Devoto, S.; Macovaz, V.; Mantovani, M.; Soldati, M.; Furlani, S. Advantages of Using UAV Digital Photogrammetry in the Study of Slow-Moving Coastal Landslides. Remote Sens. 2020, 12, 3566.
9. Panara, Y.; Menegoni, N.; Carboni, F.; Inama, R. 3D Digital Outcrop Model-Based Analysis of Fracture Network along the Seismogenic Mt. Vettore Fault System (Central Italy): The Importance of Inherited Fractures. J. Struct. Geol. 2022, 161, 104654.
10. Prabhakaran, R.; Urai, J.L.; Bertotti, G.; Weismüller, C.; Smeulders, D.M.J. Large-Scale Natural Fracture Network Patterns: Insights from Automated Mapping in the Lilstock (Bristol Channel) Limestone Outcrops. J. Struct. Geol. 2021, 150, 104405.
11. Xu, X.; Aiken, C.L.V.; Bhattacharya, J.P.; Corbeanu, R.M.; Nielsen, K.C.; McMechan, G.A.; Abdelsalam, M.G. Creating Virtual 3-D Outcrop. Lead. Edge 2000, 19, 197–202.
12. Pringle, J.K.; Clark, J.D.; Westerman, A.R.; Stanbrook, D.A.; Gardiner, A.R.; Morgan, B.E.F. Virtual Outcrops: 3-D Reservoir Analogues. J. Virtual Explor. 2001, 4, 51–55.
13. Bellian, J.A.; Kerans, C.; Jennette, D.C. Digital Outcrop Models: Applications of Terrestrial Scanning Lidar Technology in Stratigraphic Modeling. J. Sediment. Res. 2005, 75, 166–176.
14. McCaffrey, K.J.W.; Jones, R.R.; Holdsworth, R.E.; Wilson, R.W.; Clegg, P.; Imber, J.; Holliman, N.; Trinks, I. Unlocking the Spatial Dimension: Digital Technologies and the Future of Geoscience Fieldwork. J. Geol. Soc. Lond. 2005, 162, 927–938.
15. Buckley, S.J.; Enge, H.D.; Carlsson, C.; Howell, J.A. Terrestrial Laser Scanning for Use in Virtual Outcrop Geology. Photogramm. Rec. 2010, 25, 225–239.
16. Jones, R.R.; Pringle, J.K.; McCaffrey, K.J.W.; Imber, J.; Wightman, R.H.; Guo, J.; Long, J.J. Extending Digital Outcrop Geology into the Subsurface. In CSP010 Outcrops Revitalized: Tools, Techniques and Applications; Martinsen, O.J., Pulham, A.J., Haughton, P., Sullivan, M.D., Eds.; SEPM (Society for Sedimentary Geology): Tulsa, OK, USA, 2011; pp. 31–50. ISBN 978-1-56576-306-7.
17. Howell, J.A.; Martinius, A.W.; Good, T.R. The Application of Outcrop Analogues in Geological Modelling: A Review, Present Status and Future Outlook. Geol. Soc. Lond. Spec. Publ. 2014, 387, 1–25.
18. Inama, R.; Menegoni, N.; Perotti, C. Syndepositional Fractures and Architecture of the Lastoni Di Formin Carbonate Platform: Insights from Virtual Outcrop Models and Field Studies. Mar. Pet. Geol. 2020, 121, 104606.
19. Bonali, F.L.; Corti, N.; Russo, E.; Marchese, F.; Fallati, L.; Mariotto, F.P.; Tibaldi, A. Commercial-UAV-Based Structure from Motion for Geological and Geohazard Studies. In Building Knowledge for Geohazard Assessment and Management in the Caucasus and Other Orogenic Regions; Springer: Berlin/Heidelberg, Germany, 2021; pp. 389–427.
20. Uradziński, M.; Bakuła, M. Assessment of Static Positioning Accuracy Using Low-Cost Smartphone GPS Devices for Geodetic Survey Points’ Determination and Monitoring. Appl. Sci. 2020, 10, 5308.
21. Tavani, S.; Corradetti, A.; Granado, P.; Snidero, M.; Seers, T.D.; Mazzoli, S. Smartphone: An Alternative to Ground Control Points for Orienting Virtual Outcrop Models and Assessing Their Quality. Geosphere 2019, 15, 2043–2052.
22. Tavani, S.; Pignalosa, A.; Corradetti, A.; Mercuri, M.; Smeraglia, L.; Riccardi, U.; Seers, T.D.; Pavlis, T.L.; Billi, A. Photogrammetric 3D Model via Smartphone GNSS Sensor: Workflow, Error Estimate, and Best Practices. Remote Sens. 2020, 12, 3616.
23. Tavani, S.; Granado, P.; Riccardi, U.; Seers, T.D.; Corradetti, A. Terrestrial SfM-MVS Photogrammetry from Smartphone Sensors. Geomorphology 2020, 367, 107318.
24. Fernández, O.; Muñoz, J.A.; Arbués, P.; Falivene, O.; Marzo, M. Three-Dimensional Reconstruction of Geological Surfaces: An Example of Growth Strata and Turbidite Systems from the Ainsa Basin (Pyrenees, Spain). Am. Assoc. Pet. Geol. Bull. 2004, 88, 1049–1068.
25. Bistacchi, A.; Griffith, W.A.; Smith, S.A.F.; Di Toro, G.; Jones, R.R.; Nielsen, S. Fault Roughness at Seismogenic Depths from LIDAR and Photogrammetric Analysis. Pure Appl. Geophys. 2011, 168, 2345–2363.
26. Vasuki, Y.; Holden, E.-J.; Kovesi, P.; Micklethwaite, S. Semi-Automatic Mapping of Geological Structures Using UAV-Based Photogrammetric Data: An Image Analysis Approach. Comput. Geosci. 2014, 69, 22–32.
27. Seers, T.D.; Hodgetts, D. Comparison of Digital Outcrop and Conventional Data Collection Approaches for the Characterization of Naturally Fractured Reservoir Analogues. Geol. Soc. Lond. Spec. Publ. 2014, 374, 51–77.
28. Pavlis, T.L.; Mason, K.A. The New World of 3D Geologic Mapping. GSA Today 2017, 27, 4–10.
29. Seers, T.D.; Sheharyar, A.; Tavani, S.; Corradetti, A. Virtual Outcrop Geology Comes of Age: The Application of Consumer-Grade Virtual Reality Hardware and Software to Digital Outcrop Data Analysis. Comput. Geosci. 2022, 159, 105006.
30. Bemis, S.P.; Micklethwaite, S.; Turner, D.; James, M.R.; Akciz, S.; Thiele, S.T.; Bangash, H.A. Ground-Based and UAV-Based Photogrammetry: A Multi-Scale, High-Resolution Mapping Tool for Structural Geology and Paleoseismology. J. Struct. Geol. 2014, 69, 163–178.
31. Pringle, J.K.; Westerman, A.R.; Clark, J.D.; Drinkwater, N.J.; Gardiner, A.R. 3D High-Resolution Digital Models of Outcrop Analogue Study Sites to Constrain Reservoir Model Uncertainty: An Example from Alport Castles, Derbyshire, UK. Pet. Geosci. 2004, 10, 343–352.
32. Westoby, M.J.; Dunning, S.A.; Woodward, J.; Hein, A.S.; Marrero, S.M.; Winter, K.; Sugden, D.E. Instruments and Methods: Sedimentological Characterization of Antarctic Moraines Using UAVs and Structure-from-Motion Photogrammetry. J. Glaciol. 2015, 61, 1088–1102.
33. Nesbit, P.R.; Durkin, P.R.; Hugenholtz, C.H.; Hubbard, S.M.; Kucharczyk, M. 3-D Stratigraphic Mapping Using a Digital Outcrop Model Derived from UAV Images and Structure-from-Motion Photogrammetry. Geosphere 2018, 14, 2469–2486.
34. Favalli, M.; Fornaciai, A.; Mazzarini, F.; Harris, A.; Neri, M.; Behncke, B.; Pareschi, M.T.; Tarquini, S.; Boschi, E. Evolution of an Active Lava Flow Field Using a Multitemporal LIDAR Acquisition. J. Geophys. Res. Solid Earth 2010, 115, B11.
35. Brodu, N.; Lague, D. 3D Terrestrial Lidar Data Classification of Complex Natural Scenes Using a Multi-Scale Dimensionality Criterion: Applications in Geomorphology. ISPRS J. Photogramm. Remote Sens. 2012, 68, 121–134.
36. Carrivick, J.L.; Smith, M.W. Fluvial and Aquatic Applications of Structure from Motion Photogrammetry and Unmanned Aerial Vehicle/Drone Technology. WIREs Water 2019, 6, 1–17.
37. Turner, D.; Lucieer, A.; de Jong, S.M. Time Series Analysis of Landslide Dynamics Using an Unmanned Aerial Vehicle (UAV). Remote Sens. 2015, 7, 1736.
38. Lucieer, A.; de Jong, S.M.; Turner, D. Mapping Landslide Displacements Using Structure from Motion (SfM) and Image Correlation of Multi-Temporal UAV Photography. Prog. Phys. Geogr. Earth Environ. 2014, 38, 97–116.
39. Fang, K.; An, P.; Tang, H.; Tu, J.; Jia, S.; Miao, M.; Dong, A. Application of a Multi-Smartphone Measurement System in Slope Model Tests. Eng. Geol. 2021, 295, 106424.
40. Fu, L.; Zhu, J.; Li, W.-L.; You, J.-G.; Hua, Z.-Y. Fast Estimation Method of Volumes of Landslide Deposit by the 3D Reconstruction of Smartphone Images. Landslides 2021, 18, 3269–3278.
41. Santos, I.; Henriques, R.; Mariano, G.; Pereira, D.I. Methodologies to Represent and Promote the Geoheritage Using Unmanned Aerial Vehicles, Multimedia Technologies, and Augmented Reality. Geoheritage 2018, 10, 143–155.
  42. Burnham, B.; Bond, C.; Flaig, P.; van der Kolk, D.; Hodgetts, D. Outcrop Conservation: Promoting Accessibility, Inclusivity, and Reproducibility through Digital Preservation. Sediment. Rec. 2022, 20, 5–14. [Google Scholar] [CrossRef]
  43. Reitman, N.G.; Bennett, S.E.K.; Gold, R.D.; Briggs, R.W.; DuRoss, C.B. High-Resolution Trench Photomosaics from Image-Based Modeling: Workflow and Error Analysis. Bull. Seismol. Soc. Am. 2015, 105, 2354–2366. [Google Scholar] [CrossRef]
  44. Carrera, C.C.; Asensio, L.A.B. Augmented Reality as a Digital Teaching Environment to Develop Spatial Thinking. Cartogr. Geogr. Inf. Sci. 2017, 44, 259–270. [Google Scholar] [CrossRef]
  45. Whitmeyer, S.J.; Atchinson, C.; Collins, T.D. Using Mobile Technologies to Enhance Accessibility and Inclusion in Field- Based Learning Enhance Accessibility and Inclusion. GSA Today 2020, 30, 4–10. [Google Scholar] [CrossRef]
  46. Bond, C.E.; Cawood, A.J. A Role for Virtual Outcrop Models in Blended Learning–Improved 3D Thinking and Positive Perceptions of Learning. Geosci. Commun. 2021, 4, 233–244. [Google Scholar] [CrossRef]
  47. Uzkeda, H.; Poblet, J.; Magán, M.; Bulnes, M.; Martín, S.; Fernández-Martínez, D. Virtual Outcrop Models: Digital Techniques and an Inventory of Structural Models from North-Northwest Iberia (Cantabrian Zone and Asturian Basin). J. Struct. Geol. 2022, 157, 104568. [Google Scholar] [CrossRef]
  48. Eusden, J.D.; Duvall, M.; Bryant, M. Google Earth Mashup of the Geology in the Presidential Range, New Hampshire: Linking Real and Virtual Field Trips for an Introductory Geology Class. Geol. Soc. Am. Spec. Pap. 2012, 492, 355–366. [Google Scholar]
  49. Cliffe, A.D. A Review of the Benefits and Drawbacks to Virtual Field Guides in Today’s Geoscience Higher Education Environment. Int. J. Educ. Technol. High. Educ. 2017, 14. [Google Scholar] [CrossRef] [Green Version]
  50. Harknett, J.; Whitworth, M.; Rust, D.; Krokos, M.; Kearl, M.; Tibaldi, A.; Bonali, F.L.; Van Wyk de Vries, B.; Antoniou, V.; Nomikou, P.; et al. The Use of Immersive Virtual Reality for Teaching Fieldwork Skills in Complex Structural Terrains. J. Struct. Geol. 2022, 163, 104681. [Google Scholar] [CrossRef]
  51. Pugsley, J.H.; Howell, J.A.; Hartley, A.; Buckley, S.J.; Brackenridge, R.; Schofield, N.; Maxwell, G.; Chmielewska, M.; Ringdal, K.; Naumann, N.; et al. Virtual Field Trips Utilizing Virtual Outcrop: Construction, Delivery and Implications for the Future. Geosci. Commun. 2022, 5, 227–249. [Google Scholar] [CrossRef]
  52. Rutkofske, J.E.; Pavlis, T.L.; Ramirez, S. Applications of Modern Digital Mapping Systems to Assist Inclusion of Persons with Disabilities in Geoscience Education and Research. J. Struct. Geol. 2022, 161, 104655. [Google Scholar] [CrossRef]
  53. Cawood, A.; Bond, C. ERock: An Open-Access Repository of Virtual Outcrops for Geoscience Education. GSA Today 2019, 29, 36–37. [Google Scholar] [CrossRef]
  54. Senger, K.; Betlem, P.; Birchall, T.; Buckley, S.J.; Coakley, B.; Eide, C.H.; Flaig, P.P.; Forien, M.; Galland, O.; Gonzaga, L.; et al. Using Digital Outcrops to Make the High Arctic More Accessible through the Svalbox Database. J. Geosci. Educ. 2021, 69, 123–137. [Google Scholar] [CrossRef]
  55. Buckley, S.J.; Howell, J.A.; Naumann, N.; Lewis, C.; Chmielewska, M.; Ringdal, K.; Vanbiervliet, J.; Tong, B.; Mulelid-Tynes, O.S.; Foster, D.; et al. V3Geo: A Cloud-Based Repository for Virtual 3D Models in Geoscience. Geosci. Commun. 2022, 5, 67–82. [Google Scholar] [CrossRef]
  56. Zini, L.; Calligaris, C.; Forte, E.; Petronio, L.; Zavagno, E.; Boccali, C.; Cucchi, F. A Multidisciplinary Approach in Sinkhole Analysis: The Quinis Village Case Study (NE-Italy). Eng. Geol. 2015, 197, 132–144. [Google Scholar] [CrossRef]
  57. Busetti, A.; Calligaris, C.; Forte, E.; Areggi, G.; Mocnik, A.; Zini, L. Non-Invasive Methodological Approach to Detect and Characterize High-Risk Sinkholes in Urban Cover Evaporite Karst: Integrated Reflection Seismics, PS-INSAR, Leveling, 3D-GPR and Ancillary Data. a Ne Italian Case Study. Remote Sens. 2020, 12, 3814. [Google Scholar] [CrossRef]
  58. Gortani, M. Le Doline Alluvionali. Nat. Mont. 1965, 3, 120–128. [Google Scholar]
  59. Calligaris, C.; Devoto, S.; Galve, J.P.; Zini, L.; Pérez-Peña, J.V. Integration of Multi-Criteria and Nearest Neighbour Analysis with Kernel Density Functions for Improving Sinkhole Susceptibility Models: The Case Study of Enemonzo (NE Italy). Int. J. Speleol. 2017, 46, 191–204. [Google Scholar] [CrossRef] [Green Version]
  60. Calligaris, C.; Devoto, S.; Zini, L. Evaporite Sinkholes of the Friuli Venezia Giulia Region (NE Italy). J. Maps 2017, 13, 406–414. [Google Scholar] [CrossRef] [Green Version]
  61. Gutiérrez, F.; Guerrero, J.; Lucha, P. A Genetic Classification of Sinkholes Illustrated from Evaporite Paleokarst Exposures in Spain. Environ. Geol. 2008, 53, 993–1006. [Google Scholar] [CrossRef]
  62. Gutiérrez, F.; Parise, M.; De Waele, J.; Jourde, H. A Review on Natural and Human-Induced Geohazards and Impacts in Karst. Earth-Sci. Rev. 2014, 138, 61–88. [Google Scholar] [CrossRef]
  63. Girardeau-Montaut, D. Cloud Compare—3D Point Cloud and Mesh Processing Software. Available online: https://www.danielgm.net/cc/ (accessed on 17 April 2022).
  64. Chase, P.P.C.; Clarke, K.H.; Hawkes, A.J.; Jabari, S.; Jakus, J.S. Apple IPhone 13 Pro Lidar Accuracy Assessment for Engineering Applications. In Proceedings of the 2022: The Digital Reality of Tomorrow, Fredericton, NB, Canada, 23–25 August 2022; pp. 1–10. [Google Scholar]
  65. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  66. Buckley, S.J.; Ringdal, K.; Naumann, N.; Dolva, B.; Kurz, T.H.; Howell, J.A.; Dewez, T.J.B. LIME: Software for 3-D Visualization, Interpretation, and Communication of Virtual Geoscience Models. Geosphere 2019, 15, 222–235. [Google Scholar] [CrossRef]
  67. Verhoeven, G. Taking Computer Vision Aloft—Archaeological Three-Dimensional Reconstructions from Aerial Photographs with Photoscan. Archaeol. Prospect. 2011, 18, 67–73. [Google Scholar] [CrossRef]
  68. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104. [Google Scholar] [CrossRef] [Green Version]
  69. Ruggles, S.; Clark, J.; Franke, K.W.; Wolfe, D.; Reimschiissel, B.; Martin, R.A.; Okeson, T.J.; Hedengren, J.D. Comparison of Sfm Computer Vision Point Clouds of a Landslide Derived from Multiple Small Uav Platforms and Sensors to a Tls-Based Model. J. Unmanned Veh. Syst. 2016, 4, 246–265. [Google Scholar] [CrossRef]
  70. Kelly, J.; Sukhatme, G.S. Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-Calibration. Int. J. Rob. Res. 2011, 30, 56–79. [Google Scholar] [CrossRef] [Green Version]
  71. James, M.R.; Robson, S. Mitigating Systematic Error in Topographic Models Derived from UAV and Ground-Based Image Networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef] [Green Version]
  72. Niethammer, U.; James, M.R.; Rothmund, S.; Travelletti, J.; Joswig, M. UAV-Based Remote Sensing of the Super-Sauze Landslide: Evaluation and Results. Eng. Geol. 2012, 128, 2–11. [Google Scholar] [CrossRef]
  73. Tavani, S.; Corradetti, A.; Billi, A. High Precision Analysis of an Embryonic Extensional Fault-Related Fold Using 3D Orthorectified Virtual Outcrops: The Viewpoint Importance in Structural Geology. J. Struct. Geol. 2016, 86, 200–210. [Google Scholar] [CrossRef]
  74. Menegoni, N.; Inama, R.; Crozi, M.; Perotti, C. Early Deformation Structures Connected to the Progradation of a Carbonate Platform: The Case of the Nuvolau Cassian Platform (Dolomites—Italy). Mar. Pet. Geol. 2022, 138, 105574. [Google Scholar] [CrossRef]
  75. Gattolin, G.; Preto, N.; Breda, A.; Franceschi, M.; Isotton, M.; Gianolla, P. Sequence Stratigraphy after the Demise of a High-Relief Carbonate Platform (Carnian of the Dolomites): Sea-Level and Climate Disentangled. Palaeogeogr. Palaeoclimatol. Palaeoecol. 2015, 423, 1–17. [Google Scholar] [CrossRef]
  76. Corradetti, A.; Tavani, S.; Russo, M.; Arbués, P.C.; Granado, P. Quantitative Analysis of Folds by Means of Orthorectified Photogrammetric 3D Models: A Case Study from Mt. Catria, Northern Apennines, Italy. Photogramm. Rec. 2017, 32, 480–496. [Google Scholar] [CrossRef]
  77. Cawood, A.J.; Corradetti, A.; Granado, P.; Tavani, S. Detailed Structural Analysis of Digital Outcrops: A Learning Example from the Kermanshah-Qulqula Radiolarite Basin, Zagros Belt, Iran. J. Struct. Geol. 2022, 154, 104489. [Google Scholar] [CrossRef]
  78. Jablonska, D.; Pitts, A.; Di Celma, C.; Volatili, T.; Alsop, G.I.; Tondi, E. 3D Outcrop Modelling of Large Discordant Breccia Bodies in Basinal Carbonates of the Apulian Margin, Italy. Mar. Pet. Geol. 2021, 123, 104732. [Google Scholar] [CrossRef]
  79. McCaffrey, K.J.W.; Feely, M.; Hennessy, R.; Thompson, J. Visualization of Folding in Marble Outcrops, Connemara, Western Ireland: An Application of Virtual Outcrop Technology. Geosphere 2008, 4, 588. [Google Scholar] [CrossRef] [Green Version]
  80. Nesbit, P.R.; Boulding, A.D.; Hugenholtz, C.H.; Durkin, P.R.; Hubbard, S.M. Visualization and Sharing of 3D Digital Outcrop Models to Promote Open Science. GSA Today 2020, 30, 4–10. [Google Scholar] [CrossRef] [Green Version]
  81. Nesbit, P.R.; Hubbard, S.M.; Hugenholtz, C.H. Direct Georeferencing UAV-SfM in High-Relief Topography: Accuracy Assessment and Alternative Ground Control Strategies along Steep Inaccessible Rock Slopes. Remote Sens. 2022, 14, 490. [Google Scholar] [CrossRef]
  82. Thiele, S.T.; Grose, L.; Samsu, A.; Micklethwaite, S.; Vollgger, S.A.; Cruden, A.R. Rapid, Semi-Automatic Fracture and Contact Mapping for Point Clouds, Images and Geophysical Data. Solid Earth 2017, 8, 1241–1253. [Google Scholar] [CrossRef]
Figure 1. (a) Zenithal view of the kmz orthomosaic generated in Metashape from the Air 2S model. Note that no GCPs were used. Red corners highlight the limits of the orthomosaic overlay. Yellow arrows highlight matched features identified within the orthomosaic and Google Earth remotely sensed imagery. (b) Orthographic section view in CloudCompare of the Air 2S model together with the 1 m resolution LiDAR-derived Digital Terrain Model (DTM) of the Friuli Venezia Giulia region, and (c) zenithal view of the resultant cloud-to-cloud computed distance.
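For readers wishing to reproduce the cloud-to-cloud (C2C) comparison of Figure 1c outside CloudCompare, a minimal sketch using a nearest-neighbour search in SciPy is given below. The file names are hypothetical placeholders, and CloudCompare's own C2C tool additionally offers local surface modelling that this sketch omits.

```python
# Minimal C2C distance sketch (illustrative only; not the CloudCompare routine).
import numpy as np
from scipy.spatial import cKDTree

# Each cloud is an (N, 3) array of x, y, z coordinates in the same metric CRS.
reference = np.loadtxt("air2s_benchmark.xyz")  # hypothetical benchmark cloud
compared = np.loadtxt("dtm_1m.xyz")            # hypothetical 1 m DTM cloud

# For every point of the compared cloud, find its nearest neighbour in the
# reference cloud; the Euclidean separation approximates the C2C distance.
tree = cKDTree(reference)
distances, _ = tree.query(compared, k=1)

print(f"mean C2C distance: {distances.mean():.3f} m")
print(f"95th percentile:   {np.percentile(distances, 95):.3f} m")
```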
Figure 2. Visual comparison in CloudCompare of the Air 2S model with the iPhone model, using oblique (a), north-oriented (b), and east-oriented (c) views.
Figure 3. Visual comparison in CloudCompare of the PIX4Dcatch and 3D Scanner App models, translated to lie alongside the Air 2S model.
Figure 4. The iPhone (photogrammetric) RGB-colored point cloud (a) and its distance to the benchmark model (b). The Nikon RGB-colored point cloud (c) and its distance to the benchmark model (d). The 3D Scanner App colored point cloud (e) and its distance to the benchmark model (f). The PIX4Dcatch app RGB-colored point cloud (g) and its distance to the benchmark model (h). Green and red arrows in (e,g) indicate the first and last views of the acquisition. Gray arrows in (f) indicate artifact points related to simultaneous localization and mapping (SLAM) errors. The red, green, and blue Cartesian axes represent the east, north, and up directions, respectively.
Figure 5. Detailed view (2 × 1 m) of one sidewall of the studied sinkhole, as reconstructed by the different 3D image capture systems.
Figure 6. (a) iPhone 13 Pro geotag precision, estimated as the difference between the geotags recorded in the EXIF metadata and their values estimated in Metashape. (b) iPhone geotag accuracy, estimated after aligning the iPhone model to the Air 2S (benchmark) model. Note that the benchmark model is vertically translated by −8 m from the true altitude above sea level (ASL) of the site.
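The precision estimate of Figure 6a reduces to per-camera position residuals. A minimal sketch follows, assuming that both the EXIF geotags and the Metashape-estimated camera centres have already been exported and converted to a common metric coordinate system; the file names are hypothetical placeholders.

```python
# Geotag residual sketch (illustrative only), per the Figure 6a workflow.
import numpy as np

# Hypothetical (M, 3) arrays of easting, northing, elevation per camera.
exif_positions = np.loadtxt("iphone_exif_enu.txt")
estimated_positions = np.loadtxt("metashape_cameras_enu.txt")

# Per-axis offsets between the recorded geotag and the bundle-adjusted pose.
residuals = exif_positions - estimated_positions

# Summarize horizontal and vertical components as root-mean-square errors.
horizontal = np.linalg.norm(residuals[:, :2], axis=1)
vertical = residuals[:, 2]
print(f"horizontal RMSE: {np.sqrt(np.mean(horizontal**2)):.2f} m")
print(f"vertical RMSE:   {np.sqrt(np.mean(vertical**2)):.2f} m")
```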
Figure 7. (a) SfM-MVS photogrammetric reconstruction of the base of the obelisk by means of oriented images captured by the PIX4Dcatch app. (b) Precision estimates of the image geotags collected by the PIX4Dcatch app. Each independent acquisition is identifiable by its different number of photos. (c) Orientation data of the orthogonal walls of the obelisk measured in CloudCompare from the LiDAR-derived models. (d) Lower hemisphere, equal-area projection of measurements taken in CloudCompare (black great circles) and in the field using the Clino (green) and FieldMove (blue) apps. Green and blue great circles are the walls of the obelisk, whilst the markers are their trend axes.
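The wall orientations of Figure 7c can be reproduced from any segmented patch of the LiDAR point clouds by least-squares plane fitting. The sketch below, assuming an (N, 3) NumPy array of wall points in east, north, up coordinates, is illustrative only and is not the exact routine implemented in CloudCompare.

```python
import numpy as np

def plane_orientation(points: np.ndarray) -> tuple[float, float]:
    """Return (dip direction, dip) in degrees for the best-fit plane."""
    centred = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # unit normal of the least-squares plane.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:  # enforce an upward-pointing normal
        normal = -normal
    dip = np.degrees(np.arccos(normal[2]))  # angle of the plane from horizontal
    # Azimuth of the normal's horizontal projection = down-dip direction.
    dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip_direction, dip

# Hypothetical usage with a segmented obelisk-wall patch:
# wall = np.loadtxt("obelisk_wall.xyz")
# print(plane_orientation(wall))
```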
Table 1. Summary of the parameters of the imaging devices used.

| Device | Positioning System | Sensor | Image Resolution | Lens Parameters | Production Year |
|---|---|---|---|---|---|
| DJI Air 2S | GPS + GLONASS + Galileo | 1” CMOS | 20 MP | f/2.8, 22 mm (35 mm equivalent) | 2021 |
| Nikon D5300 | GPS | CMOS APS-C | 24.2 MP | f/3.5–5.6 G VR | 2014 |
| iPhone 13 Pro | GPS + GLONASS + Galileo + QZSS + BeiDou | dual-pixel PDAF | 12.2 MP | 5.7 mm f/1.5, 26 mm (35 mm equivalent) | 2021 |