1. Introduction
Unmanned aerial vehicles (UAVs) were initially developed for military missions because of their large-scale mapping capabilities and their ability to reach remote locations that people were not authorized to access [1]. In addition, these devices were also commonly used to develop digital elevation models (DEMs) [2] because they were more affordable than conventional methods [3], such as field surveying using a total station [4], and they presented few take-off and landing requirements [5].
The new UAV models possess increased payload capacities, thus supporting technological enhancements such as sensors and cameras. Furthermore, UAVs have expanded their application possibilities, as they can now be used for infrastructure inspections, atmospheric research, fishing ground identification, environmental control, risk and natural disaster management, geological and mining exploration, topographic surveys, volumetric calculations [5], archaeological studies, plant monitoring [6] and crop biomass forecasting [7].
Among the available technologies that can be added to UAV systems are light detection and ranging (LiDAR) sensors. First introduced in the mid-1990s, these sensors produce a spatially accurate point cloud and are commonly used for mapping areas with high vegetation density, building segmentation and forest biomass inventories [
8]. When implemented in a UAV, their data capture range is limited to the flight area, but they allow maximum coverage of the targeted area during each flight [
Even if the area is not easily accessible, data collection times remain efficient compared with those reported for conventional ground-based sensors, such as tripod-mounted scanners [10]. Therefore, trajectory reconstruction performance is essential for accurate georeferencing of LiDAR point clouds obtained from UAV flights [11].
A previous study used a UAV to capture high-resolution images and generate surveying data along China’s Altyn Tagh fault line. The resulting DEM had a resolution of 0.065 m, and the orthophoto had a resolution of 0.016 m. Based on the data collected from the UAV, supplemented with satellite images, the authors measured a recent seismic displacement of 7 ± 1 m. In total, and due to multiple earthquakes, aggregated displacements of 15 ± 2 m, 20 ± 2 m and 30 ± 2 m were measured [12].
LiDAR sensors have a wide variety of applications, such as using an adaptive thresholding algorithm to segment the LiDAR point cloud, which is then followed by a selection process [
13]; collecting multi-temporal LiDAR data over 6 years to obtain annual growth rates and gains and losses in carbon storage in human-modified tropical forests [
14] or assisting in the identification of vegetation that threatens the structural integrity of a given area under a Sustainable Conservation Plan [15].
Another study assessed the performance of a UAV equipped with a LiDAR through test target point detection comparisons, obtaining a mean square error of ±10 cm without adjustment points and ±4 cm using adjustment control points, compared against a previous photogrammetric 3D reconstruction [16].
In a comprehensive bias impact assessment for UAV-based LiDAR systems, an iterative calibration strategy was proposed to derive system parameters using different geometric characteristics, reducing the errors produced by mounting the LiDAR on the UAV and generating an accuracy of approximately 2 cm [
17].
One of the most valuable advantages identified by authors is the opportunity to customize the LiDAR system. The integration of a commercial UAV with an Odroid minicomputer and the ROS framework was used to perform flight missions with a low-weight platform that processes the acquired data in real time without using a ground station [18]. A study for the 3D survey of forest ecosystems in China developed a low-cost LiDAR sensor system implemented in a UAV, using a multirotor UAV (eight rotors) and a Velodyne Puck VLP-16 as part of the hardware. The structure was designed so that the LiDAR sensor could later be replaced by smaller and lighter models. In addition, two software tools were developed, one to control the LiDAR-UAV system and one to process the LiDAR data. This low-cost system provided high-density LiDAR data from which the topographic and vegetation parameters needed for biodiversity studies could be obtained [19].
Among the fields that are adopting the use of LiDAR-UAV systems to improve their operations are agriculture [
7,
20,
21], forestry sciences [
19,
22], archaeology [
23,
24], topography [
8,
11,
19], bathymetry [
25], building reconstruction [
1,
26] and structural inspection [
10,
27].
A UAV system equipped with a LiDAR scanner and a multi-spectral camera system was presented for the study of pre-Columbian Amazonian archaeology. The LiDAR data were collected with 80% lateral overlap along six flight lines, three horizontal and three vertical. The result was a study of the forest structure and the reconstruction of the surfaces beneath the canopy for the identification of archaeological sites [23].
For identifying, mapping and interpreting ancient Roman mines in northwestern Spain, integrated geometric applications based on information from a UAV system equipped with a LiDAR sensor have been reported. The 1 m resolution LiDAR data improved the resolution and survey capabilities for channels, water reservoirs and mines. Furthermore, using a UAV equipped with a LiDAR system for short, detailed surveys reduced data collection time and equipment costs [24].
A fusion of a UAV-mounted LiDAR sensor with spectral imaging was shown to identify plant species and provide 3D characterization at sub-meter scales. The resulting accuracy was between 84% and 89% for species classification, and the proposed system’s digital elevation model (DEM) correlated with a LiDAR-derived DEM with R2 values of 0.98 and 0.96 [22].
Integrating LiDAR sensors and UAV platforms has shown benefits in agriculture for monitoring crop production and status. A system was developed using a DJI Matrice 100 UAV, an Odroid XU4, a Velodyne VLP-16 LiDAR sensor and a Point Grey Chameleon3 3.2 MP color camera with a Sony IMX265 sensor as part of the hardware, and the ROS framework was used to record the data. A variation in crop height between 0.35 and 0.58 m was found. The components used and the mount design were made available online so they could be adapted to similar projects. The mount was 3D-printed in nylon for this project and was designed considering the payload and the pinholes already included in the LiDAR sensor and the inertial measurement unit (IMU) sensor [20].
The use of LiDAR-UAV systems in civil engineering can have many applications. For example, Bolourian et al. (2020) proposed a flight planning method for UAV-based bridge inspection, considering a Matrice 100 UAV and a Velodyne LiDAR PUCK for the test [27]. In [10], a platform for structural inspection was developed, integrating a UAV and a 2D LiDAR scanner. Similarly, [26] used a low-cost IMU and a Hokuyo UTM-30LX sensor, in this case integrated into the UAV via an aluminum frame. Finally, Chiang et al. (2017) [1] proposed a LiDAR-UAV system to reconstruct urban environments, combining the UAV with an inertial navigation system (INS), a GNSS receiver and a low-cost Velodyne VLP-16 LiDAR sensor.
The use of LiDAR sensors in various areas has increased in recent years, mainly due to their technological accessibility and worldwide diffusion. Other point cloud generation techniques, such as photogrammetry, are limited because they depend on image-related parameters such as luminosity and meteorological conditions [28]. LiDAR sensors do not have this limitation and can obtain information about places with difficult access and restricted visibility [29].
As a result, several companies have developed systems integrated with LiDAR for commercial use. However, these systems can be costly, and their closed architecture limits users’ ability to customize them: the configurations are established by the manufacturer and offer limited options for adaptation to specific user needs.
Therefore, this study seeks to develop, integrate and use a LiDAR sensor system implemented in a UAV to provide an open system that can be configured according to the user’s needs. This research also includes the design and building of an alternative structure for the LiDAR support system, utilizing accessible materials and equipment. An interface between a LiDAR sensor and UAV was created through an integrated system to obtain a georeferenced point cloud, providing an alternative to collecting topographic information through a low-cost system compared to those currently in the market.
2. Materials and Methods
This section discusses the materials used to implement the developed system as part of this study. The system is configured using an Odroid XU4 Mini Computer connected to a Wi-Fi network using a Wi-Fi/USB module. The system is powered using a 2200 mAh/11.1 V LiPo battery connected directly to the VLP-16 sensor and a UBEC 5V regulator connected to the Mini Computer components. Below is a graphic of the integration scheme with the connections between the different parts (
Figure 1), and specifications of the main equipment are outlined in
Section 2.1.
The LiDAR sensor chosen is the Velodyne Puck LITE (VLP-16), a Class 1 laser sensor with a 100 m range that generates up to 600,000 points/s over a 360° horizontal field of view and a 30° vertical field of view [30]. The Garmin GPS 18x LVC GNSS receiver is used for data synchronization. The LiDAR was integrated into the Matrice 600 Pro UAV platform [31], a hexacopter approximately 1.5 m in diameter with a maximum take-off weight of 15.5 kg. The drone is powered by six 3-cell LiPo batteries with a capacity of 4.5 Ah each, which allow approximately 15 min of flight time in the configuration of the proposed LiDAR system, which carries a payload of 1300 g. Finally, an Odroid XU4 [32] Mini Computer running a Linux operating system collects the information from the VLP-16 through a direct Ethernet connection. Position and orientation information is obtained from the UAV flight control system and sent to the Mini Computer through a serial/USB converter module based on an FTDI232 chip. More information about the equipment is presented in
Table 1.
To validate the obtained point cloud, we surveyed six reference control points using a total station [
33]. In addition, a GNSS (Global Navigation Satellite System) [
34] was used to survey two additional control points to correct point coordinates for increased accuracy. Finally, the points were placed in a georeferenced orthophoto previously obtained using a DJI Phantom 4 Pro UAV [
35] for area recognition and comparison.
In addition, this section also explains the application procedure used to generate the final point cloud with this system. The flow presented in
Figure 2 is divided into four main stages: Data Collection, Data Processing, Validation and Final Results. Each step is described below in its corresponding section.
2.1. LiDAR Sensor Support Structure
2.1.1. Structure Design
For mounting the LiDAR system components on the M600 Pro UAV, we designed several parts that could be attached to this UAV model. A maximum payload weight of 6 kg was considered.
The structure was designed based on the LiDAR system measurements: a diameter of 103 mm and a height of 72 mm. To maximize light beam ranges when surveying, the LiDAR sensor was placed vertically in the UAV [
36]. In addition, a compartment was required for other system components that had to be loaded onto the UAV, such as the Odroid XU4 Mini Computer and the power and connection cables. The weight of the LiDAR sensor and its components is approximately 1.5 kg, allowing for greater capacity for the other components.
Figure 3 below provides detailed drawings for the component modeled in 3D using the Solidworks software [
37].
2.1.2. Structure Construction
Table 2 compares the PLA (Polylactic Acid) and the ABS (Acrylonitrile Butadiene Styrene) materials for building the structure. We selected PLA (
Figure 4b) because it features a lower melting point, does not produce intense fumes, and the material is biodegradable, which is an advantage when creating a prototype. The printer used was a Makerbot Replicator Plus (
Figure 4a).
Figure 4c illustrates the final piece printed using the PLA material.
2.1.3. Structure Assembly
The Matrice 600 Pro has a built-in anchor point that can be used to integrate additional sensors and cameras.
Figure 5 below indicates the components used.
The LiDAR sensor, the LiPo battery, the Odroid XU4 Mini Computer, and the power and communication cable were all arranged in the support structure depicted in
Figure 5b. The printed part was assembled on the UAV Matrice 600 Pro rail. We added rails and 2″ screws to secure the structure. The LiDAR support structure utilized 2 1/2″ screws.
Then, we connected the UAV Matrice 600 Pro to the XU4 Mini Computer, and the Mini Computer was connected to the Wi-Fi network through a USB Wi-Fi Adapter, as denoted in
Figure 5c.
Figure 5d illustrates the coupling of the Garmin GPS with its support piece on top of the UAV.
2.2. Data Collection and Processing Procedure Development
The data collection procedure used in the field and the subsequent processing and post-processing stages are specified below.
The data acquisition process is based on a method that facilitates LiDAR recordings in an experimental winter wheat field [
38]. Here, LiDAR point clouds are registered, mapped and assessed using the functionalities of the robot operating system (ROS) and the Point Cloud Library (PCL) to estimate crop volumes.
The georeferencing equation [
12] was used as a foundation for data processing.
2.2.1. Study Area
Before carrying out the flight, the area to be mapped had to be defined, considering the following variables: number of flight lines, flight altitude, flight time and flight speed. These parameters were entered into the flight planning software, which programs the automatic route of the UAV.
2.2.2. Pre-Flight
Using the total station, UAV waypoints were measured once per area.
Figure 6 below lists the variables obtained by the total station. The variables used were:
Lever Coordinates: Distance from the IMU to the center of the LiDAR in the UAV reference system, expressed along the X, Y and Z axes.
Distance from the center of IMU to the floor.
Distance from the center of the LiDAR to the floor.
The GPS time was synchronized with the LiDAR time, so that time is the common reference between both instruments.
A point cloud was generated from photos taken by a Phantom 4 Pro UAV for georeferencing. The control points were generated with the total station from the positions provided by the Emlid RTK GNSS module.
2.2.3. During the Flight
The data collection stage was conducted using an Odroid XU4 Mini Computer running the Ubuntu 14.04 LTS distribution of the Linux operating system [32]. Hardware communications are established via ROS Indigo, since this framework runs on the Linux operating system [38].
ROS is a framework of processes (also known as nodes) originally developed by Willow Garage and commonly used as a meta-operating system for robots, providing services such as hardware abstraction, low-level device control, message transmission between processes and package management. Its architecture is mainly based on the publish-subscribe network pattern, in which multiple nodes register with a master node that manages communication between them. Users can establish a distributed network of nodes that communicate through messages travelling on buses known as topics [26].
Here, an open-source ROS package was used to acquire data and generate point clouds from the UDP Ethernet packets provided by the VLP-16 [36]. This ROS package contains the velodyne_driver package, which offers essential sensor management via the velodyne_node ROS node, capturing LiDAR data and publishing them as raw velodyne_msgs/VelodyneScan messages on the velodyne_packets topic. Raw data capture is then performed with the vdump command, which saves the raw data in pcap format.
Figure 7 below illustrates the communications scheme for acquiring LiDAR data and position and orientation data using the dji_sdk ROS package, which provides a ROS interface to the DJI Onboard SDK. This allows us to control the M600 Pro platform through ROS messages and services. This package is based on the main DJI SDK library in C++ [33]. During execution, the dji_sdk node publishes orientation data as geometry_msgs/QuaternionStamped messages on the /dji_sdk/attitude topic and position data as sensor_msgs/NavSatFix messages on the /dji_sdk/gps_position topic. These data are recorded using the rosbag command and are later converted to csv format.
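As an illustration of this data flow, the following minimal rospy sketch (assuming ROS Indigo under Python 2; the node name and output file names are illustrative, since the study records the topics with rosbag and converts them afterwards) subscribes to the two dji_sdk topics and writes timestamps, quaternions and positions to CSV files.

```python
#!/usr/bin/env python
# Hedged sketch: minimal logger for the dji_sdk attitude and GPS topics.
import csv
import rospy
from geometry_msgs.msg import QuaternionStamped
from sensor_msgs.msg import NavSatFix

att_file = open('attitude.csv', 'w')       # t, qx, qy, qz, qw (FLU -> ENU rotation)
gps_file = open('gps_position.csv', 'w')   # t, lat, lon, alt (WGS 84)
att_writer = csv.writer(att_file)
gps_writer = csv.writer(gps_file)

def on_attitude(msg):
    q = msg.quaternion
    att_writer.writerow([msg.header.stamp.to_sec(), q.x, q.y, q.z, q.w])

def on_gps(msg):
    gps_writer.writerow([msg.header.stamp.to_sec(), msg.latitude, msg.longitude, msg.altitude])

rospy.init_node('m600_pose_logger')
rospy.Subscriber('/dji_sdk/attitude', QuaternionStamped, on_attitude)
rospy.Subscriber('/dji_sdk/gps_position', NavSatFix, on_gps)
rospy.spin()   # returns on shutdown
att_file.close()
gps_file.close()
```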
After powering on and starting all system components, the operator connects to the Mini Computer via the SSH protocol using the PuTTY software and activates the VNC graphical display connection to start collecting data. To guarantee synchronization, the system clock must be updated through NTP services. In addition, the VLP-16 web interface must be accessed to check that the GPS PPS option is Locked. Next, the ROS commands are executed to start storing data, and the scheduled flight is performed. Finally, an FTP connection is used to retrieve the pcap and csv files and start the georeferenced point cloud generation stage.
2.2.4. Generation of Georeferenced Point Cloud
The generation of the georeferenced point cloud is the transformation of the local coordinates from the LiDAR sensor (s-frame) to global coordinates (g-frame) [33]. The local coordinates of a point acquired at a given time t are georeferenced according to the model proposed in Equation (1) below [21].
Equation (1) represents the georeferencing equation. It describes an indirect measurement: the center of the UAV cannot be accessed directly, so measurements that are not referred to that center (0, 0, 0) would not be correct. The errors of Equation (1) are corrected by comparing two point clouds: a real cloud, built from the points provided by the total station, and an artificial cloud constructed using the measurements from the UAV.
The initial model is therefore modified by adding two variables that correct measurement errors. Equation (2) denotes the proposed georeferencing equation, based on Equation (1) but with two additional terms that correct measurement errors: R_cal and a_cal.
where:
1. x_gps^g(t): Three-dimensional position vector in the g-frame. This position information is delivered by the UAV’s GNSS/INS system as latitude, longitude and altitude on the WGS 84 reference ellipsoid, published at 50 Hz.
2. R_b^g(t): Rotation matrix that transforms a vector from the UAV’s frame of reference, the b-frame, to the g-frame. This information is obtained from the IMU of the UAV, which delivers the vehicle attitude as a quaternion for the rotation from the FLU (Forward, Left, Up) body frame to the global ENU (East, North, Up) frame, published at 100 Hz.
3. R_s^b: Rotation matrix from the s-frame (LiDAR sensor) to the b-frame (UAV), which depends on the mounting base.
4. r^s(t): Points provided by the LiDAR sensor.
5. R_cal and a_cal: Calibration matrix and vector that correct errors caused by assembly and shaft misalignment.
6. l^b: Lever arm displacement between the LiDAR sensor and body frame origins, defined within the b-frame.
These variables are divided between static and dynamic (
Table 3). Static variables do not change during application of the Equation, while dynamic variables change according to the point used within the point cloud.
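Assuming Equation (2) takes the standard direct-georeferencing form implied by the definitions above, x^g(t) = x_gps^g(t) + R_b^g(t)·(R_s^b·(R_cal·r^s(t) + a_cal) + l^b), where the exact placement of the calibration terms is our assumption, a minimal NumPy sketch of the per-point computation would be as follows; variable names mirror the equation.

```python
import numpy as np

def georeference_point(r_s, R_cal, a_cal, R_sb, l_b, R_bg, x_gps_g):
    """Hedged sketch of Equation (2) for a single LiDAR return.
    r_s          : 3-vector in the sensor frame (s-frame)
    R_cal, a_cal : calibration matrix (3x3) and vector (3,)
    R_sb         : rotation from the s-frame to the body frame (b-frame)
    l_b          : lever arm in the b-frame
    R_bg         : rotation from the b-frame to the g-frame (from the UAV attitude)
    x_gps_g      : UAV position in the g-frame (ENU)
    Returns the point expressed in the g-frame."""
    r_b = np.dot(R_sb, np.dot(R_cal, r_s) + a_cal) + l_b   # sensor -> body, with calibration
    return x_gps_g + np.dot(R_bg, r_b)                     # body -> global
```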
Figure 8 includes a diagram that denotes the location of the reference frames used. For example, in this Figure, the b-frame (related to GNSS/INS), the s-frame (related to LiDAR), and the g-frame (ENU) may be observed.
The data obtained from the LiDAR sensor are expressed in the s reference system. However, since these data must be expressed in the b reference system, the R_s^b rotation matrix is applied [31].
To obtain the R_s^b rotation matrix, basic sequential rotations around the Z, X and Z axes are used, as per Equation (3).
The values 0.135, 2 and 2, together with the conversion factor π/180°, are obtained from the theory of rotations and the geometry of the 3D-printed adapter. For the z_angle, the 0.135 value was adjusted manually by overlapping with a reference plane RP located below the UAV.
Mathematically, three successive rotations are again performed around the Z, X, and Z axes.
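A sketch of how such a Z-X-Z sequence can be composed is shown below; the assignment of the z_angle (0.135) and the remaining two angle values to specific rotations is an assumption here, since Equation (3) is not reproduced in full.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def r_sb_from_zxz(z1, x, z2):
    """Sensor-to-body rotation built from three successive rotations around
    the Z, X and Z axes (angles in radians), as described for Equation (3)."""
    return np.dot(rot_z(z1), np.dot(rot_x(x), rot_z(z2)))
```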
Figure 9 denotes the raw point cloud PC without applying the rotation matrix. This point cloud contains RP and is in the LiDAR reference system.
Figure 10 denotes the raw point cloud expressed in the reference system of the UAV. This cloud results from multiplying each point cloud element in the LiDAR reference system by the R_s^b rotation matrix and adding the l^b displacement.
With the data in the same reference system, i.e., when the rotation matrix has been resolved, the point cloud calibration process can begin.
Figure 11 shows the initial parameters, alignment parameters, and calibration equation used for point cloud calibration. The parameters are detailed below.
Initial parameters:
R_bs2bi: Rotation matrix that converts LiDAR data from the LiDAR-centric reference system to the UAV-centric (IMU) reference system.
A_bi: Lever coordinates, measured with the total station.
Alignment parameters:
R_al and a_al: Data provided by the CloudCompare software after overlapping the point cloud from the UAV with an artificial point cloud derived from the total station measurements. The artificial point cloud is created with Python based on the measured distance from the IMU center of the UAV to the ground, and the area generated is proportional to the ground area covered by the LiDAR. This process is executed only once (an illustrative sketch of these two steps is given below).
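The two steps of this one-off alignment can be illustrated as follows: generating a synthetic ground plane at the measured IMU-to-ground height, and estimating a rigid transform from matched point pairs. The Kabsch algorithm shown here is only a programmatic stand-in for the alignment CloudCompare performs, and the plane extent and spacing are illustrative assumptions.

```python
import numpy as np

def artificial_ground_plane(h_imu_to_ground, half_width=10.0, spacing=0.05):
    """Synthetic flat plane at z = -h below the IMU origin, used as the reference
    cloud against which the real LiDAR ground strip is aligned."""
    xs = np.arange(-half_width, half_width, spacing)
    xx, yy = np.meshgrid(xs, xs)
    zz = np.full_like(xx, -h_imu_to_ground)
    return np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])

def rigid_fit(source, target):
    """Estimate R, t minimising ||R*source + t - target|| over matched point pairs
    (Kabsch algorithm); R and t play the role of R_al and a_al."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = np.dot((source - src_c).T, target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(np.dot(Vt.T, U.T)))
    R = np.dot(Vt.T, np.dot(np.diag([1.0, 1.0, d]), U.T))
    t = tgt_c - np.dot(R, src_c)
    return R, t
```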
Table 4 lists the equivalences between the calibration equation and the parameters from the georeferencing equation for the point cloud calibration process.
Based on static values, the point cloud covers the entire route, including UAV take-off and landing.
The pcap files for each segment are stored in their corresponding folders. There can be multiple pcap files per segment.
Figure 12 denotes the calibration results from the point cloud PC calibrated using static variables.
The program developed for the point cloud georeferencing process uses the following Python packages: pandas, NumPy, PyMap3D and laspy. The loaded parameters are provided in
Figure 11, which includes initial and alignment parameters.
The folder path is where the corresponding pcap files for each segment are located.
The dynamic values of the georeferencing equation are R_bi2enu and x_gps_enu. R_bi2enu provides the IMU orientation, and x_gps_enu provides the GPS position. Because the IMU and the GPS publish data at different rates, both values have to be interpolated in time to match the timestamp of each LiDAR point.
Algorithm 1 denotes the development of the georeferenced point cloud for each path.
Algorithm 1: Generation of Georeferenced Point Cloud for Each Path
Input: r_s: point cloud matrix within the s-frame; t_s: vector with the time associated with each r_s point; euler_bi: Euler angle value matrix (b-frame → i-frame); t_euler: vector with the time associated with each euler_bi sample; p_enu: UAV position matrix within the i-frame (ENU); t_enu: vector with the time associated with each p_enu sample.
Output: r_utm: matrix containing the values of the georeferenced point cloud.
1. Obtain the time intervals corresponding to the desired path.
2. For each of the r_s points:
   a. Multiply r_s(i) by the calibration parameters and obtain r_b(i).
   b. By interpolation in euler_bi, obtain the orientation value corresponding to r_s(i) and store it in euler_i.
   c. Obtain R_i, the rotation matrix corresponding to euler_i.
   d. Calculate r_i(i) = R_i · r_b(i).
   e. By interpolation in p_enu, obtain p_enu_i, the position value corresponding to r_s(i).
   f. Calculate r_enu(i) = r_i(i) + p_enu_i.
3. End For.
4. Convert r_enu to the UTM reference system to obtain r_utm.
5. Generate the output file from r_utm.
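A hedged NumPy/SciPy sketch of this per-path loop is given below; the Euler-angle convention passed to SciPy ('ZYX', i.e., yaw-pitch-roll) and the linear interpolation of the angles are assumptions, since neither is specified in Algorithm 1.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def georeference_path(r_s, t_s, euler_bi, t_euler, p_enu, t_enu,
                      R_cal, a_cal, R_sb, l_b, t_start, t_end):
    """Vectorised sketch of Algorithm 1 for one flight line; returns r_enu (Nx3)."""
    sel = (t_s >= t_start) & (t_s <= t_end)            # step 1: keep points inside the path window
    r_s, t_s = r_s[sel], t_s[sel]
    # Calibration and s-frame -> b-frame transformation (steps 2a)
    r_b = np.dot(R_sb, np.dot(R_cal, r_s.T) + a_cal[:, None]).T + l_b

    def interp(t_ref, values):                         # per-column linear interpolation in time
        return np.column_stack([np.interp(t_s, t_ref, values[:, k])
                                for k in range(values.shape[1])])

    eul = interp(t_euler, euler_bi)                    # orientation at each point timestamp (2b)
    pos = interp(t_enu, p_enu)                         # UAV ENU position at each point timestamp (2e)
    r_i = Rotation.from_euler('ZYX', eul).apply(r_b)   # b-frame -> i-frame (ENU); convention assumed
    return r_i + pos                                   # r_enu; converted to UTM afterwards
```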
Table 5 lists the equivalences between the calibration equation and the parameters from the georeferencing equation for developing Algorithm 1.
Figure 13 denotes the final point cloud results after adding the dynamic values from the georeferencing equation. At this stage, the point cloud is georeferenced in the ENU reference system.
R_bs_0: Time vector. Its values are adapted to the parameters from the point cloud within the X, Y and Z parameters.
X_enu: Vector generated by concatenation. Here, concatenation is initialized, and a copy is generated.
Figure 14 below illustrates the conversion of the vectors generated by concatenation to the UTM reference system.
X_enu is converted from the ENU system to the geodetic system (latitude, longitude). When X_enu is then converted from the geodetic system to UTM, the height remains referred to the ellipsoid.
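A sketch of this two-step conversion with PyMap3D (named among the packages used) and the utm package (a stand-in assumed here, since the UTM routine actually used is not named) is shown below; lat0, lon0 and h0 denote the ENU origin on the WGS 84 ellipsoid.

```python
import numpy as np
import pymap3d
import utm

def enu_to_utm(x_enu, lat0, lon0, h0):
    """Convert Nx3 ENU coordinates (origin lat0, lon0, h0 on the WGS 84 ellipsoid)
    to geodetic and then to UTM easting/northing; heights stay ellipsoidal."""
    lat, lon, alt = pymap3d.enu2geodetic(x_enu[:, 0], x_enu[:, 1], x_enu[:, 2],
                                         lat0, lon0, h0)
    # utm.from_latlon accepts arrays when all points fall within the same UTM zone
    easting, northing, zone, letter = utm.from_latlon(np.asarray(lat), np.asarray(lon))
    return np.column_stack([easting, northing, alt]), zone, letter
```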
2.2.5. Post-Processing
Once the point cloud is obtained, the noise filter tool of the CloudCompare software is used. CloudCompare is an open-source project with many advanced techniques for point cloud registration and resampling; Python scripts are employed for georeferencing the point cloud and creating files in *.las format. The process consists of using the Clean tool with the Noise filter option. In the tool window, the Radius option is selected with a value of 0.03 for Neighbors and, for Max error, the Relative option is chosen with a value of 1.00; the Remove isolated points option is also selected [34]. This tool removes outliers with an algorithm that fits planes locally (around each point in the cloud) and then eliminates a point if it is too far from the fitted plane. To register and align the clouds generated from the different trajectories, the point-pair alignment tool was used, through which two entities may be aligned by selecting at least three matching point pairs in both entities. Finally, the merge tool generated a single point cloud from all the strips. The filtering process produced a more defined point cloud of the area, eliminating around 25% of the points not calibrated with the point cloud, as shown in Figure 15. The georeferencing proposal is constrained by the quality of the GNSS/INS sensor measurements. The only semi-automatic part of the georeferencing process is locating the start and end times of each parallel path. Extrapolating the method to other study cases requires correctly georeferenced control points.
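Since Python scripts are used to create the *.las files, a minimal sketch of this export step is given below; it assumes the laspy 2.x API (the version actually used in the study is not stated) and an Nx3 array of UTM coordinates.

```python
import numpy as np
import laspy

def write_las(points_utm, path="cloud.las"):
    """Write an Nx3 array of georeferenced points (UTM X, Y and ellipsoidal Z)
    to a LAS 1.2 file using the laspy 2.x API."""
    header = laspy.LasHeader(point_format=3, version="1.2")
    header.offsets = points_utm.min(axis=0)
    header.scales = np.array([0.001, 0.001, 0.001])   # millimetre coordinate resolution
    las = laspy.LasData(header)
    las.x, las.y, las.z = points_utm[:, 0], points_utm[:, 1], points_utm[:, 2]
    las.write(path)
```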
3. Results
A flight plan was prepared with five flight lines over the gable roof of a hangar, which was used as a reference to compare the point cloud from the photogrammetric survey against the survey generated by the total station. This area was used because the roof of this building has a simple sloping geometry, which allows it to be used as a reference plane for the fusion of point clouds generated during the flight.
The flight plan was configured using the DJI Ground Station Pro software [
35] with the following parameters:
Figure 16 indicates the flight plan, wherein the UAV travels across five flight lines. In these flight lines, information is collected via the LiDAR sensor. Hence, a different point cloud of the area is obtained for each traveled line, thus generating a total of five point clouds.
From the data obtained from the UAV positions provided by the GPS geodetic system, we obtained the path followed in the UTM system.
Figure 17 is the result of the georeferencing equation in the UTM reference system after the conversion described for Figure 13. The X-axis corresponds to UTM easting and the Y-axis to UTM northing. The complete path followed by the UAV is expressed in UTM coordinates for X and Y and WGS 84 ellipsoidal height for Z.
The final point cloud is the result of merging the five different point clouds, through which better point accuracy and density were obtained, as shown in
Figure 18.
Procedure Validation
In
Figure 19, the control points used for validating the procedure are indicated in orthophoto (
Figure 19a) and plan (
Figure 19b) formats.
Table 6 lists the coordinates obtained using the different survey methods (total station, LiDAR and photogrammetry). In turn, Table 7 lists the coordinate comparison errors between the reference control points obtained with the total station and the control points obtained with the LiDAR and photogrammetry procedures. These errors mainly reflect the positional error between the GPS and the differential GPS of the UAV.
As a first result, a root mean square error (RMSE) of 35.514 m was obtained from georeferencing (Table 7). Subsequently, as part of the process, an offset for the georeferencing errors of the survey is added to the workflow, in which the LiDAR point cloud is superimposed on the coordinates previously defined by the total station. For this application case, the translation values were 1.757 m along the X-axis and 1.867 m along the Y-axis at point T4. In addition, two reference points are located for the survey overlay. In this example, the rotation was 55°, as shown in Figure 20.
In
Figure 21, the same processing was applied for the translation along the
Z axis. For this case, the value was 35 m.
The results from this procedure are described in
Table 8 and
Table 9. The corrected control points are shown according to the section and the related equipment (
Table 8).
Table 9 denotes the RMSE for the LiDAR survey obtained through georeferencing, whose value is 0.766 m with respect to the total station.
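For reference, RMSE values of this kind can be reproduced from the control-point coordinates with a short script such as the one below; whether the reported figure is a per-axis or a combined 3D RMSE is not stated, so both are computed in this illustrative sketch.

```python
import numpy as np

def control_point_rmse(reference, measured):
    """RMSE between matched control points (e.g., total station vs. LiDAR).
    Both arrays are Nx3 (X, Y, Z); returns per-axis RMSE and the overall 3D RMSE."""
    diff = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    per_axis = np.sqrt(np.mean(diff ** 2, axis=0))
    overall = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
    return per_axis, overall
```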
4. Discussion
An RMSE of 35.513 m was initially obtained when georeferencing the LiDAR point cloud (Table 7). After offsetting the errors, an RMSE of 0.766 m was achieved (Table 9). According to Table 9, the maximum errors obtained were 0.319 and 0.923 m for the X and Y axes, respectively, and the corresponding minimum errors were 0.033 and 0.000 m. For the Z-axis, the error ranged between 0.116 and 0.722 m, which is within the error range for the Matrice 600: 3 m in X and Y and 5 m in Z. Other works studying the vertical accuracy of LiDAR also found that the Z-axis presents the largest errors in trials done with such systems, sometimes exceeding the manufacturer’s specification [36,37].
Table 10 shows the corrected control point cloud errors, and Table 11 compares the distances between control points measured with LiDAR and with UAV photogrammetry against the reference distances from the total station (TS) to determine the accuracy of the measurements of the surveyed objects. Here, the maximum absolute error between the LiDAR and the TS was 0.347 m, between points T3 and T4, and the minimum absolute error was 0.10 m, between T1 and T2. On the other hand, for UAV photogrammetry, the maximum absolute error was 0.190 m, between points T6 and T1, and the minimum absolute error was 0.006 m, between T5 and T6. Likewise, the RMSE of the LiDAR and UAV photogrammetry measurements was calculated against the values obtained with the TS: LiDAR presented an RMSE of 0.171 m, while photogrammetry presented an RMSE of 0.024 m.
Similarly, the LiDAR-UAV system proposed for the reconstruction of urban environments reached an accuracy of approximately 1 m compared with the results of a terrestrial laser scanner (TLS) [1]. These results are in the range of the accuracy errors of the presented investigation. Additionally, the system developed to observe crop production [20], built with software and materials similar to those applied in this investigation, found a variation in crop height between 0.35 and 0.58 m. The use of a UAV-LiDAR system for height estimation of different crops showed an RMSE of 0.034, 0.074 and 0.12 m for wheat, sugar beet and potato, respectively [7].
The information from both point clouds is similar, and both show pros and cons. For example, although photogrammetry captures the area with greater texture detail, it is affected by light and shadows. LiDAR, on the other hand, is not affected by weather conditions. LiDAR is also more effective than photogrammetry in areas with dense vegetation or where the surface is difficult to assess visually [37] and is better suited for large areas; for more detailed surveys, it is preferable to complement it with more conventional topographic techniques [36], such as the GCPs obtained with GNSS and the total station used in this investigation. A comparison between these two point clouds is shown in Figure 22.
It is important to note that this investigation’s most significant advantage was the ability to customize the system according to our specifications and the equipment available. Furthermore, as other projects have shown, developing our own system makes it possible to improve and modify the supporting structure as new technology emerges [10,19], to define the location of the sensor according to user needs [10,19] and to use low-cost equipment [1].