Communication

Performance Investigation and Repeatability Assessment of a Mobile Robotic System for 3D Mapping †

Polytechnic Department of Engineering and Architecture (DPIA), University of Udine, 33100 Udine, Italy
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Eleonora Maset; Lorenzo Scalera; Alberto Beinat; Federico Cazorzi; Fabio Crosilla; Andrea Fusiello; Alessandro Gasparetto. Preliminary Comparison Between Handheld and Mobile Robotic Mapping Systems. In Proceedings of the I4SDG Workshop 2021, held online, 25–26 November 2021.
Robotics 2022, 11(3), 54; https://doi.org/10.3390/robotics11030054
Submission received: 15 February 2022 / Revised: 23 March 2022 / Accepted: 16 April 2022 / Published: 20 April 2022
(This article belongs to the Special Issue Robotics: 10th Anniversary Feature Papers)

Abstract

In this paper, we present a quantitative performance investigation and repeatability assessment of a mobile robotic system for 3D mapping. With the aim of a more efficient and automatic data acquisition process with respect to well-established manual topographic operations, a 3D laser scanner coupled with an inertial measurement unit is installed on a mobile platform and used to perform a high-resolution mapping of the surrounding environment. Point clouds obtained with the use of a mobile robot are compared with those acquired with the device carried manually as well as with a terrestrial laser scanner survey that serves as a ground truth. Experimental results show that both mapping modes provide similar accuracy and repeatability, whereas the robotic system compares favorably with respect to the handheld modality in terms of noise level and point distribution. The outcomes demonstrate the feasibility of the mobile robotic platform as a promising technology for automatic and accurate 3D mapping.

1. Introduction

Nowadays, acquiring and recording high-resolution 3D information of internal and external environments is crucial in the study and analysis of both buildings and human settlements [1]. This need is highlighted by the United Nations 2030 Agenda for Sustainable Development, which has underlined the importance of making cities and human settlements inclusive, safe, resilient and sustainable [2]. In detail, Sustainable Development Goal 11 (SDG11) indicates the targets of enhancing inclusive and sustainable urbanization as well as protecting and safeguarding the world’s cultural and natural heritages. At the same time, SDG11 outlines the aim of substantially increasing the number of cities and human settlements adopting and implementing integrated policies and plans toward resource efficiency, adaptation to climate change and resilience to disasters. In this framework, the availability of efficient and cost-effective surveying techniques for semi-structured or unstructured environments is essential, especially where external methods for localization, such as a Global Navigation Satellite System (GNSS), are not always available.
The surveying of civil structures, usually performed by means of classical mapping technologies, such as Photogrammetry and Terrestrial Laser Scanning (TLS), has been revolutionized in recent years by using portable Mobile Mapping Systems (MMSs) [3]. Trolley, backpack and handheld devices [4] can be easily carried by a person acquiring accurate 3D data of surrounding environments by simply walking through the area of interest, as shown in [5]. The core of this surveying technology is Simultaneous Localization and Mapping (SLAM) algorithms, thanks to which 3D models of the environment can be obtained without requiring a GNSS signal. Initially developed by the robotics community to allow the autonomous navigation of a mobile platform in an unknown environment [6], SLAM methods are now also employed to estimate the trajectories of moving devices (RGB cameras, laser scanners and other active sensors) while acquiring accurate maps of an area of interest [7].
Several studies examined the advantages and disadvantages of portable MMSs in diverse test fields, as demonstrated by flourishing literature on the topic [8]. For example, in [9], a wearable mobile laser system was evaluated for the indoor 3D mapping of a complex historical site, whereas in [10], different handheld mobile solutions were tested for the 3D surveying of underground built heritage. Moreover, in [11], the performance of a handheld laser scanner was investigated in different outdoor scenarios, such as the surveying of a building façade and a mountain torrent. These studies reported accuracies ranging from a few centimeters to 10 cm for the surveying of indoor environments and urban areas with portable MMSs.
Mapping sensors are also adopted to inspect constructions or manufactured parts and identify possible discrepancies between as-built workpieces and their nominal specifications [12]. The data acquired by portable mapping devices are essential for creating a Building Information Model (BIM) according to the as-built condition of a structure [13,14,15]. For example, in [16], an approach for the semi-automated generation of a parametric BIM for steel structures based on TLS data was presented. Furthermore, surveyed data recently proved to be fundamental in supporting functional and occupancy analyses of buildings in challenging situations, such as the COVID-19 pandemic [17].
The reliability and completeness of 3D information on buildings and human settlements could be improved, and acquisitions could be automated by mounting mapping devices (e.g., laser scanners and RGB-D cameras) on robotic platforms. Such a platform can be steered remotely or can autonomously perform the required surveying of an area of interest, optimizing the acquisition process.
Today, mobile robots are being increasingly applied to mapping, and the importance of surveying automation is confirmed by a wide variety of applications in several different fields. Mobile platforms are indeed suitable for explorations and inspections in hazardous and challenging environments [18,19] and are used for inspections in disaster situations and rugged environments [20] instead of humans, ensuring safety and repeatability. In the agricultural field, both wheeled and tracked mobile vehicles are employed for orchard mapping and precision farming [21,22], harvest supporting and 3D mapping in greenhouses [23] as well as the detection and segmentation of plants and the ground [24]. A survey of localization and mapping by robots in agriculture and forestry is available in [25].
Mobile robots are also employed for the mapping of archaeological and cultural heritage sites, as shown in [26,27]. Finally, the digitalization of buildings is another relevant application of mobile systems based on robotic platforms and autonomous vehicles [28]. For instance, in [29], a large-scale 3D model of buildings was obtained by using multiple cooperative robots and by considering several criteria, such as visibility between robots, error accumulation and efficient traveling. A review on autonomous mobile systems for the digitization of buildings can be found in [30].
A fundamental aspect of autonomous mobile scanning systems is the quantitative evaluation of the resulting 3D model. Indeed, the quality of the final point cloud is fundamental for extracting the metric information of a scene and, possibly, a BIM model. Moreover, a realistic model is not guaranteed if the precision of a 3D map is not evaluated properly [30]. By considering handheld MMSs, and more generally standard geomatics surveying techniques, the acquired data are usually compared with already available ground truths obtained with more accurate surveying technologies, such as TLS (e.g., in [8,31]).
When moving to mobile robotic systems, only a paucity of works dealing with robot-based 3D mapping report comparisons with ground truths. For instance, in [32], previously acquired scans were assumed as ground truths, and the deviation angle from each reference axis in degrees and the root mean-square error (RMSE) in meters were measured. Furthermore, in [33], rotational and translational estimation errors verified the accuracy of 3D mapping performed by a simulated mobile robot in multiple outdoor environments. However, as reported in [30], a high percentage of the mobile scanning methods available in the literature did not report the accuracy of the obtained 3D maps against ground truths.
Another important aspect that requires proper attention is survey repeatability, defined as the variation that can be expected in a final 3D model when mapping the same area under similar conditions (i.e., using the same scanner device, moving platform and trajectory) within a short time interval [34]. While comparing a single resulting map with a ground truth provides an estimate of accuracy and systematic errors, assessing repeatability allows one to quantify the model precision and uncertainties that can significantly influence change-detection analyses [35].
A preliminary comparison of handheld and mobile robotic mapping systems was discussed in [36]. However, in that work, no comparison with a ground truth was shown, and a single survey did not allow a repeatability and reliability assessment.
In this paper, we present the performance and repeatability assessment of a mobile robotic system for 3D mapping. In detail, we used the commercial HERON Lite system composed of a 3D laser scanner coupled with an Inertial Measurement Unit (IMU), which realizes the high-resolution mapping of a surrounding environment and can provide real-time results. Experimental tests were carried out, performing a survey of the ground floor of the Rizzi building of the University of Udine (Italy) and considering two approaches:
  • Case 1: the acquisition device installed on a mobile robot (referred to in the following as robotic mode (R)).
  • Case 2: the acquisition device attached to a telescopic pole and carried manually (referred to as handheld mode (H)).
Since the main contribution of this work is a comparison of the robotic and handheld mapping modes, for a fair analysis in both cases, the acquired data were processed with the same SLAM algorithm and software parameters, i.e., the proprietary HERON Desktop package provided by the manufacturer of the 3D mapping system.
An analysis of the results shows that the robotic mode compares favorably with respect to the handheld one and confirms the feasibility of using the robotic system for future automatic surveys with the aim of supporting renovation projects for sustainable cities.
This paper is an evolved version of a preliminary conference work in [36] and extends that work as follows:
  • The method is validated by comparing the point clouds obtained in both Cases 1 and 2 with a ground truth based on a previously acquired TLS survey.
  • The repeatability of the mapping is assessed by performing multiple scans and evaluating the consistency of the results.
  • A new experimental setup for the laser scanner installed on the mobile robot is developed to improve the point of view of the sensor during the robotic mapping. The higher point of view provides advantages in cluttered environments, limiting occlusions and data gaps in the model caused, e.g., by furniture.
The remainder of the paper is organized as follows: In Section 2, the materials and methods used in this work are described, whereas in Section 3, the experimental results are presented. A discussion of the results is given in Section 4. Finally, the paper is concluded in Section 5.

2. Materials and Methods

In this section, we first present the system setup and the sensor characteristics, then we describe the procedure for the experimental data acquisition and processing. Finally, the acquisition of the ground-truth point cloud is illustrated.

2.1. System Setup and Sensor Characteristics

The experimental evaluation was carried out using the portable MMS HERON Lite by Gexcel srl [37]. This device is composed of a Velodyne Puck LITE laser scanner [38] coupled with an Xsens MTi inertial sensor [39], the data from which were used for the system trajectory estimation. The scanning head has sixteen channels that emit infrared laser beams at a wavelength of 903 nm, acquiring 300,000 points/s in single return mode at a maximum distance of 100 m and covering a 360° horizontal and a 30° vertical field of view. The sensor head is connected to a battery and a control unit for data storage and real-time processing. According to the manufacturer’s specifications [37], the employed system ensures a precision of 3 cm and a final global accuracy of 5 cm; however, the latter can degrade to 20–50 cm depending on the followed trajectory and the geometry of the scanned environment, which can significantly influence the results of the SLAM algorithm, as shown in [11].
In its standard configuration, the HERON Lite system is a handheld device; thanks to a telescopic carbon-fiber pole, it can be manually carried by a surveyor during mapping operations, as shown in Figure 1a (Case 2). In our experiments, the HERON Lite device was also installed on a mobile robotic platform with an angle of approximately 30° between the vertical direction and the laser rotation axis (Case 1). Unlike the previous work [36], the scanning head was fitted on a custom aluminum support at a height of about 1 m from the robot base (Figure 1b). This new setup ensured a more adequate point of view of the scanning head, limiting unreliable measurements (at a range of <1 m) of the floor and providing advantages in cluttered indoor environments.
Regarding the mobile platform, we adopted the MP-500 robotic system by Neobotix GmbH [40], which is equipped with two drive wheels, an undriven castor wheel and a SICK 2D laser scanner used for safety purposes. The platform has dimensions of 81.4 × 59.2 × 36.1 cm, a payload of 80 kg and maximum speed of 1.5 m/s. This mobile robot is usually employed for autonomous transportation operations in industrial environments, surveillance and part handling within large areas and measurement of physical data (e.g., temperatures and gas concentrations) in indoor environments. During the surveys, the mobile robot was steered remotely using a wireless joystick.
With the HERON Lite system, a preliminary map of a scanned environment can be obtained in real time and visualized on the screen of the control unit. However, for an accurate and dense 3D model, data post-processing via the software HERON Desktop [41] is required. Indeed, the raw data of this system are saved only in proprietary file formats and cannot be exported for processing with alternative algorithms.
In detail, with the proprietary package, the trajectory and the map are computed in a three-stage pipeline. The first step, called Odometer, implements an online SLAM algorithm to retrieve a preliminary estimate of the sensor trajectory. This approach computes the pose of each cloud (a single 360° scan) by aligning it to the previous ones via the Iterative Closest Point (ICP) algorithm [42]. In this stage, IMU data provide a rough tentative pose of the scanner. Since this method is prone to drift accumulation, a subsequent refinement is needed. To this end, the Create Maps step (named in the proprietary software interface) splits the full trajectory into Local Maps, which are subsequently treated as internally rigid point clouds. The latter represent the inputs for the final stage, known as Global Optimization, which corresponds to a full SLAM method [43]. With this approach, it is possible to consider loop closures (if any) along the trajectory (i.e., resurveyed areas) that are treated as consistency constraints in the global registration process among all the local maps. In this way, error accumulation along the trajectory is minimized, resulting in more accurate 3D point clouds of the surveyed environment. Finally, the obtained complete map can be automatically filtered to remove moving objects (e.g., walking people), applying a process called Clean Data.
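The proprietary Odometer stage is not publicly documented, but the point-to-point ICP alignment at its core can be illustrated compactly. The following is a minimal sketch, not the HERON implementation: correspondences come from a nearest-neighbor query, and the rigid transform of each iteration is solved in closed form with the Kabsch/SVD method (function names and the fixed iteration count are our own choices).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: match each source point to its
    nearest neighbor in the target, then solve for the rigid transform
    (Kabsch/SVD) that best aligns the matched pairs."""
    tree = cKDTree(target)
    _, idx = tree.query(source)              # nearest-neighbor correspondences
    matched = target[idx]

    # Center both point sets on their matched centroids
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    S, T = source - mu_s, matched - mu_t

    # Optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(S.T @ T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # D guards against reflections
    t = mu_t - R @ mu_s
    return R, t

def icp(source, target, iters=30):
    """Repeat the matching/alignment step; returns the aligned source cloud."""
    aligned = source.copy()
    for _ in range(iters):
        R, t = icp_step(aligned, target)
        aligned = aligned @ R.T + t
    return aligned
```

In the full pipeline described above, such pairwise alignments only provide the odometry; the drift they accumulate is what the subsequent Local Maps and Global Optimization stages (with loop-closure constraints) are designed to correct.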

2.2. Experimental Data Acquisition and Processing

As a case study, the ground floor of the west-wing corridor of the Rizzi building of the University of Udine (Italy) was surveyed. It is characterized by a square plan with dimensions of 80 m × 80 m and was mapped following a closed-loop trajectory (shown in blue in Figure 2). Please note that the south part of the corridor was surveyed twice, at the beginning and at the end of the trajectory. As already mentioned, both the robotic- and handheld-mode mapping were performed, repeating the survey three times with each platform. In the following, we refer to the robotic cases as R-1, R-2 and R-3; similarly, the handheld ones are called H-1, H-2 and H-3.
Table 1 summarizes the main characteristics of the data acquisitions. Please note that for a fair comparison, all the surveys were processed using the same software parameters and the same SLAM algorithm (provided by the manufacturer of the system) previously described. Figure 2 shows an X-ray orthophoto of the mapped corridor extracted from the R-1 model by projecting the point cloud on the horizontal plane.

2.3. Ground-Truth Acquisition

For a quantitative assessment of the obtained 3D point clouds, we employed a “static” terrestrial laser scanner (TLS) to acquire a reference model. The instrument was the RIEGL Z390i system (Figure 3a), a time-of-flight laser scanner with an angular measurement resolution of 1 mgon, a declared accuracy of 6 mm and a repeatability of 4 mm for single shots and 2 mm for averaged measures. Thanks to the high accuracy and precision that can be reached with this scanning device, the TLS point cloud could be considered the ground-truth model.
In detail, a TLS data acquisition was performed on the south part of the corridor (corresponding to the dashed rectangle in Figure 2), which has dimensions of 42 m × 7 m. The TLS instrument was vertically fixed on three positions (2V, 3V and 4V, where V stands for vertical) along the corridor axis; 2V and 4V were symmetrically placed with respect to 3V, located in the corridor’s center (Figure 3b). From each scan position, with the horizontal/vertical angular step fixed at 0.10°, a sequence of three panoramic scans was performed, recording on average 2.84 × 10⁶ points. Moreover, photogrammetric images taken with a Nikon D200 integrated metric camera were panoramically acquired to color the TLS scans with RGB values. Each scan sequence was then resampled, keeping the same angular acquisition resolution; starting from scan sequences characterized by a noise of 3 mm, the obtained “mean scans” had a noise, verified on planar surfaces, of about 2 mm. In order to register the three scans, 16 reflecting disks 5 mm in diameter and 18 reflecting cylinders 10 mm in diameter and height were suitably placed in the corridor, mainly along the walls (Figure 3b).
The 3D coordinates of the reflecting disks were measured using a Leica TCRA 1201+ total station. This is a high precision topographic instrument characterized by a nominal accuracy of 1″ (0.3 mgon) for angular measurements and 1 mm + 1.5 ppm for distance measurements. The topographic survey of the TLS targets was conducted from three vertices roughly coinciding with the TLS stations in order to ensure the best visibility of the targets. Angular and distance measurements were taken on both the left and right faces of the instrument and then averaged. In order to refer the topographic measurements of the three stations to the same datum, in addition to the targets for the scans, two additional stable references were installed consisting of high-accuracy Leica GPR121 topographic round prisms, which were accurately measured from each station. The topographic measurements were least squares adjusted using MicroSurvey STAR*NET v. 7.1 [44], an advanced software solution used in the topographic field to estimate the coordinates of surveyed points together with their precision and reliability, allowing one to identify possible systematic errors and to verify that the results meet the accuracy requirements.
The uncertainties obtained in the final coordinates of the reflecting circular targets ranged from 0.5 to 4.1 mm along the three axes. Exploiting these points as control points, the three TLS clouds 2V, 3V and 4V were registered in a global point cloud with RIEGL RiSCAN PRO v. 1.5, the companion software for RIEGL TLS systems [45], obtaining residuals with a standard deviation of 4.5 mm along the X axis (corresponding to the longitudinal axis of the corridor), 2.4 mm along the Y axis and 3.4 mm along the vertical component Z. These values confirmed that the global TLS cloud could be adopted as a reference model.
As a last remark, since the robotic and handheld point clouds were expressed in arbitrary reference frames, the subsequent comparison with the ground truth required a preliminary alignment to the reference model that was performed with the software JRC 3D Reconstructor (v. 4.3.1) [46] using the ICP algorithm. In addition to the various tools for point-cloud processing, the chosen software allowed us to directly import the results from HERON Desktop.
Figure 4 shows the point clouds of the analyzed corridor obtained with the robotic mapping platform (Figure 4a) and with the TLS (Figure 4b).

3. Experimental Results

As an initial quantitative evaluation of the point-cloud accuracy obtained with the mobile robotic mapping platform and with the handheld survey modality, the cloud-to-cloud absolute distances (C2C) with respect to the ground-truth model were estimated for each survey using the open-source software CloudCompare (v. 2.11.3) [47].
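At its simplest, the C2C metric is a nearest-neighbor distance query from each point of the evaluated cloud to the reference cloud. The sketch below reproduces this baseline behavior (CloudCompare can additionally fit local surface models to the reference, which we omit; the variable names are illustrative).

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(survey, reference):
    """Absolute C2C distances: for every point of the survey cloud, the
    Euclidean distance to its nearest neighbor in the reference
    (ground-truth) cloud. Both inputs are (n, 3) arrays of XYZ points."""
    dists, _ = cKDTree(reference).query(survey)
    return dists

# Hypothetical usage on two aligned clouds:
# d = cloud_to_cloud(robot_cloud, tls_cloud)
# print(d.mean(), d.std())   # statistics of the kind reported in Table 2
```

Note that this metric is meaningful only after the clouds have been registered in the same reference frame, as done here with the preliminary ICP alignment described in Section 2.3.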
Figure 5a,b show the outcomes for R-1 and H-1, respectively, whereas Table 2 reports the mean and standard deviation values for the six analyzed cases. Moreover, graphical representations of the results in the form of box plots are shown in Figure 6a. From a visual inspection, it can be noticed that higher residuals were located in the central parts of the walls for both the robotic and the handheld surveys, indicating the presence of a relative deformation with respect to the reference model in this area. In any case, the computed 3D distances did not exceed 6 cm and were compatible with the accuracy of the instrument according to the manufacturer’s specifications.
As highlighted in the previous section, the studied corridor was mapped at the beginning and at the end of the trajectory; the low distance values revealed by the C2C analysis confirm that no relevant residual misalignment is present at the loop closure. Moreover, the results are consistent among the three data acquisitions with each platform, indicating a high survey repeatability.
To further investigate the repeatability, C2C distance values were evaluated between pairs of surveys carried out with the same mapping mode (i.e., R-1 vs. R-2, R-1 vs. R-3 and R-2 vs. R-3, and H-1 vs. H-2, H-1 vs. H-3 and H-2 vs. H-3). The results are reported in Figure 7, which shows the histograms of the C2C distance distribution for all the comparisons. The outcomes are also summarized in Table 3. The mean and standard deviation values, always lower than 1 cm, further demonstrate the high repeatability of both survey modalities.
Observing, instead, the point-cloud density that characterizes the two acquisition modes, it can be noticed from Figure 8 that the mobile robotic platform guarantees a more uniform point distribution, whereas the handheld survey is characterized by low density values on the floor. Moreover, the statistics reported in Table 4 highlight an overall higher surface density for the robotic mapping cases (mean values ranging from 20,300 points/m² to 21,043 points/m²), with a lower variability among the three survey repetitions (as confirmed by the box dimensions in Figure 6b). This behaviour can derive from (i) the lower speed of the robot with respect to the walking operator (see the acquisition time reported in Table 1) and (ii) the constant speed maintained during the robotic acquisitions.
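A surface density of the kind reported in Table 4 can be approximated per point by counting the neighbors inside a small sphere and dividing by the area of the disk of the same radius. The following is a sketch under the assumption of a locally planar surface; the radius value is an illustrative choice, not a parameter taken from the survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_density(cloud, radius=0.1):
    """Approximate surface density (points/m^2) at each point of an (n, 3)
    cloud: the number of neighbors within a sphere of the given radius,
    divided by the area of the disk of that radius. This assumes the
    surface is locally planar within the neighborhood."""
    tree = cKDTree(cloud)
    neighborhoods = tree.query_ball_point(cloud, radius)
    counts = np.array([len(idx) for idx in neighborhoods])
    return counts / (np.pi * radius ** 2)
```

Points near the borders of the surveyed surface are systematically underestimated by this estimator, since part of their neighborhood disk falls outside the data.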
In addition to a more consistent point density, the point cloud acquired with the robotic system shows a slightly lower level of noise, confirming the results reported in our preliminary work [36]. As suggested in [5], this feature was quantitatively assessed by extracting three patches of the point clouds (one on each wall and one on the floor) and fitting a plane to each subset. Table 5 reports the Root Mean Square (RMS) of the distances of the points from the corresponding estimated plane. An average RMS of 1.7 cm was estimated for the robotic mapping cases, whereas an RMS of 2.0 cm resulted from the handheld-mode surveys. As expected, the TLS significantly outperformed the mobile systems. These outcomes can be explained by the fact that the robot performed the survey at a lower speed and, above all, avoided sudden movements and oscillations to which the laser scanner was subjected when it was manually carried by a walking person. In this regard, Figure 9b clearly shows the oscillations that characterized the trajectory followed by the scanning sensor in the handheld-mode survey that were not present when the device was installed on the robotic platform (Figure 9a). Moreover, the data acquired by the IMU are influenced by the characteristics of the trajectory. Indeed, the IMU data are subsequently employed in the SLAM process and thus play an important role in the creation of the final point cloud.
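The patch-based noise assessment above amounts to a least-squares plane fit followed by the RMS of the orthogonal residuals. A minimal sketch via SVD (our own formulation, consistent with the procedure suggested in [5]):

```python
import numpy as np

def plane_fit_rms(patch):
    """Fit a least-squares plane to an (n, 3) point patch and return the
    RMS of the orthogonal point-to-plane distances, used as a noise
    estimate for the scanner."""
    centered = patch - patch.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # plane normal of the best-fitting plane through the centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    dists = centered @ normal            # signed orthogonal distances
    return np.sqrt(np.mean(dists ** 2))
```

Applied to patches extracted from walls and floor, this statistic isolates the sensor and trajectory noise from the global registration error measured by the C2C analysis.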
Finally, in Figure 10, we report a detail of the point clouds acquired with the two survey modalities. Thanks to the higher density, the more uniform point distribution and the slightly lower noise level, in the point cloud retrieved via the robotic mapping (Figure 10a) the objects visually appear sharper (see, e.g., the baskets and the items hanging on the walls). This feature could be helpful for the digitization and vectorization processes that usually follow the point-cloud acquisition.

4. Discussion

The results reported in the previous section clearly show that, overall, a tight correspondence with the ground-truth model can be noticed for all the obtained point clouds (mean distances ranging from 2.0 to 2.3 cm), with no significant differences between the two survey modes. The computed C2C distances are in line with the outcomes provided by other works in the literature that assessed the accuracies of portable devices carried manually. In [11], e.g., the HERON Lite system was tested in two different outdoor scenarios, reporting an average C2C distance of 5.4 cm for the survey of a torrent reach, estimated with respect to a photogrammetric ground-truth model. Discrepancies higher than 7 cm were revealed when comparing the point cloud of a building façade provided by the portable laser scanner with the TLS model. Accuracies ranging from a few to 10 cm for the surveying of buildings and urban areas with different commercial portable mapping systems were declared in [5,8,48].
As discussed in detail in [5,11,49], one of the main drawbacks of portable scanning devices is a high level of noise that affects the resulting point cloud. Experiments carried out in previous works revealed a sensor noise of 1–2 cm for all the tested commercial systems, a result that also emerged from the experimental evaluation reported in the previous section (Table 5). In this regard, the slightly lower level of noise reached with the robotic platform, which guaranteed a smoother trajectory and limited oscillations, appears to be of significant importance.
To the best of the authors’ knowledge, no similar works performing a comparison between handheld and robotic mapping platforms can be found in the current literature, making a direct comparison with the presented results unfeasible. Furthermore, as highlighted in [30], several approaches based on robotic platforms for 3D mapping did not provide any quantitative evaluations of the accuracies of the obtained models, with some reporting only non-metric information, such as the number of unobserved cells in a scan position [50]. A limited number of studies provided quantitative details regarding accuracy and precision. For instance, in [29], the accuracy of a 3D model was evaluated by comparing the positions of only six corners, obtaining an average error of 23.1 mm, whereas the work in [32] reported a centimeter accuracy in a registration process between consecutive scans. More in-depth analyses and comparisons with ground-truth models should therefore be provided in the future in the field of robotic mapping to promote the evaluation of methods and platforms.

5. Conclusions

The increasing demand for updated and detailed 3D models of buildings, essential in any renovation project, is fostering the development of flexible and efficient surveying techniques. To this end, robotic mapping systems could provide a fundamental aid to reduce and automate the time-consuming manual work usually required for mapping operations. In this paper, we investigated the performance of a mobile remote-controlled robot equipped with a 3D laser scanner for indoor surveys, focusing on the accuracy and repeatability that characterized the tested system. Quantitative evaluations showed a high repeatability and a tight correspondence between point clouds obtained from robotic mapping and a ground-truth model acquired with a TLS. Moreover, the robot-based acquisition platform compared favorably with respect to the handheld mode: a higher surface density, a more uniform point distribution and a lower noise level were obtained in the former case. These outcomes represent a further confirmation that robotic mapping could be a viable alternative to the well-established surveying mode, reducing the manual work usually required while guaranteeing a high quality of the final results.
In the future, our research will focus on automating the surveying carried out with robotic platforms in order to perform the autonomous 3D mapping of both more-complicated inner spaces and outdoor environments without relying on human interventions. Furthermore, we will consider the adoption of different scanning sensors capable of providing raw data to be processed with open-source SLAM algorithms. In this context, different SLAM approaches will be compared to evaluate their performance in the reconstruction of a reliable 3D map. Moreover, additional sensors, such as RGB cameras and GNSS receivers, will be integrated for further applications in outdoor scenarios, e.g., 3D mapping in agriculture or in cluttered environments.

Author Contributions

Investigation, conceptualization and methodology, E.M. and L.S.; data curation, A.B., E.M. and D.V.; writing—original draft preparation, E.M. and L.S.; writing—review and editing, A.B., A.G., E.M., L.S. and D.V.; supervision, A.B., A.G. and D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Laboratory for Advanced Mechatronics—LAMA FVG, the international research center for product and process innovation of the three Universities of the Friuli Venezia Giulia Region (Italy).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We thank Paolo Gallina, Stefano Seriani and Matteo Caruso for their help in setting up the mobile robot.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) HERON Lite system carried manually (with the main components highlighted) and (b) installed on a Neobotix MP-500 mobile platform.
Figure 2. X-ray orthophoto of the surveyed area (2.5D orthophoto generated from a point cloud acquired by the robot platform and projected on the horizontal plane). Points falling outside the area of interest were manually removed. The blue line represents the desired trajectory followed during the survey, with the circles indicating the starting and end points. The dashed-line rectangle indicates the south part of the corridor on which the subsequent analyses focused.
Figure 3. (a) TLS instrument used for the acquisition of the ground-truth model and (b) network of reflecting circular targets used for the registration of the three scans (2V, 3V and 4V, where V stands for vertical and D indicates the targets in the form of disks).
Figure 4. Point clouds of the analyzed corridor obtained (a) with the robotic mapping platform (colored by using reflectance values) and (b) with the TLS.
Figure 5. (a) Cloud-to-cloud absolute distances computed between the TLS model and the robotic mapping system point cloud (case R-1) and (b) C2C distances between the ground truth and the handheld acquisition (case H-1). For grey points, the C2C distance could not be computed due to data gaps in the TLS model.
Figure 6. Box-plot representation of the experimental results: cloud-to-cloud absolute distance (a), and surface density (b) for both the robotic and handheld mapping modes. The central mark indicates the median, the bottom and top of each box represent the first and third quartiles and the whiskers extend to the most extreme data points not considered outliers.
Figure 7. Histograms of the cloud-to-cloud absolute distances in repeated surveys for both the robotic and handheld mapping modes.
Figure 8. Surface density characterizing the point cloud obtained (a) from the robotic mapping system (case R-1) and (b) from the handheld mode (case H-1).
Figure 9. Examples of paths of the acquisition device in the two different survey modes.
Figure 10. Detail of the point cloud acquired with (a) the robot platform and (b) the handheld mode. Points are colored according to the local plane inclination for clearer visualization.
Table 1. Main characteristics of the robotic and handheld surveys.

Survey Characteristic       R-1      R-2      R-3      H-1     H-2     H-3
Acquisition time            16′24″   16′51″   16′23″   5′37″   5′52″   5′13″
Trajectory length (m)       403      399      402      407     401     402
Number of points (×10⁶)     133.6    132.3    128.6    65.3    62.9    58.7
Table 2. Cloud-to-cloud absolute distances computed for each survey with respect to the TLS (ground-truth) model.

C2C              R-1    R-2    R-3    H-1    H-2    H-3
Mean (cm)        2.3    2.2    2.2    2.0    2.0    2.1
Std. dev. (cm)   2.5    2.8    2.9    2.7    2.7    3.0
Table 3. Cloud-to-cloud absolute distances computed between pairs of robotic and handheld surveys for repeatability assessment.

C2C              R-1 vs. R-2   R-1 vs. R-3   R-2 vs. R-3   H-1 vs. H-2   H-1 vs. H-3   H-2 vs. H-3
Mean (cm)        0.7           0.7           0.7           0.9           0.9           0.9
Std. dev. (cm)   0.4           0.5           0.6           0.6           0.8           0.6
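The C2C values in Tables 2 and 3 are nearest-neighbor distance statistics between two point clouds. The paper's own processing pipeline is not reproduced here; the following is a minimal sketch of the metric, assuming SciPy is available (the function name `c2c_distance_stats` and the toy data are illustrative).

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distance_stats(reference, compared):
    """Mean and std of absolute nearest-neighbor (cloud-to-cloud)
    distances from each point of `compared` to `reference`."""
    tree = cKDTree(reference)             # spatial index on the reference cloud
    dists, _ = tree.query(compared, k=1)  # nearest-neighbor distance per point
    return dists.mean(), dists.std()

# toy example: a random cloud and a copy shifted vertically by 1 cm (units: m)
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(1000, 3))
shifted = ref + np.array([0.0, 0.0, 0.01])
mean_d, std_d = c2c_distance_stats(ref, shifted)  # mean close to 0.01 m
```

On real data, the mean and standard deviation of `dists` correspond directly to the rows of Tables 2 and 3 (after conversion to cm).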
Table 4. Surface density values.

Density               R-1      R-2      R-3      H-1      H-2      H-3
Mean (pts/m²)         20,300   21,043   21,000   16,643   19,255   16,762
Std. dev. (pts/m²)    6267     6989     7685     8724     9694     8916
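Surface density of the kind reported in Table 4 is commonly estimated per point as the number of neighbors inside a disc of a given radius, divided by the disc area. A sketch of this estimator, assuming a KD-tree neighbor search (the 0.1 m radius and the grid test data are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_density(points, radius=0.1):
    """Approximate local surface density (pts/m^2): for each point,
    count neighbors within `radius` and divide by the disc area."""
    tree = cKDTree(points)
    neighborhoods = tree.query_ball_point(points, r=radius)
    counts = np.array([len(idx) for idx in neighborhoods])
    return counts / (np.pi * radius**2)

# toy example: a 1 m x 1 m planar grid of 100 x 100 points,
# i.e., a nominal density of ~10,000 pts/m^2
g = np.linspace(0.0, 1.0, 100)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
dens = surface_density(pts, radius=0.1)
```

Interior points of the grid recover roughly the nominal density, while points near the patch border are underestimated because part of the disc falls outside the data, which is one reason density maps such as Figure 8 show lower values along cloud edges.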
Table 5. Results of the plane-fitting procedure. The RMS of the distances between the points and the corresponding estimated plane are reported for each patch, giving an indication of the sensor noise.

Plane           R-1    R-2    R-3    H-1    H-2    H-3    TLS
Wall #1 (cm)    1.9    2.1    2.2    3.0    2.7    2.6    0.3
Wall #2 (cm)    1.6    1.6    1.8    2.3    2.3    2.0    0.6
Floor (cm)      1.4    1.4    1.4    1.2    1.1    1.2    0.4
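A plane-fitting noise estimate of the kind in Table 5 can be obtained with a standard total-least-squares fit: the plane normal is the right singular vector of the centered points associated with the smallest singular value, and the RMS of the orthogonal residuals approximates the sensor noise. This sketch uses synthetic data (the 1 cm noise level is illustrative, not the authors' measurement):

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a least-squares plane through `points` (N x 3) via SVD and
    return the RMS of the point-to-plane residuals."""
    centroid = points.mean(axis=0)
    # the right singular vector with the smallest singular value
    # is the normal of the best-fitting plane
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal  # signed orthogonal distances
    return np.sqrt(np.mean(residuals**2))

# toy example: a planar patch with 1 cm Gaussian noise (units: m)
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 2.0, size=(5000, 2))
z = 0.01 * rng.standard_normal(5000)
rms = plane_fit_rms(np.column_stack([xy, z]))  # close to 0.01 m
```

Applied to the wall and floor patches of each survey, this RMS yields the per-patch values reported in Table 5 (after conversion to cm).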
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Maset, E.; Scalera, L.; Beinat, A.; Visintini, D.; Gasparetto, A. Performance Investigation and Repeatability Assessment of a Mobile Robotic System for 3D Mapping. Robotics 2022, 11, 54. https://doi.org/10.3390/robotics11030054
