Article

Automatic Position Estimation Based on Lidar × Lidar Data for Autonomous Aerial Navigation in the Amazon Forest Region

by Roberto Neves Salles 1,*,†, Haroldo Fraga de Campos Velho 2 and Elcio Hideiti Shiguemori 1,†
1 Institute for Advanced Studies, Sao Jose dos Campos 12228-001, Brazil
2 National Institute for Space Research, Sao Jose dos Campos 12227-010, Brazil
* Author to whom correspondence should be addressed.
† Current address: Trevo Cel Av Jose Alberto Albano do Amarante 1, Sao Jose dos Campos 12228-001, Brazil.
Remote Sens. 2022, 14(2), 361; https://doi.org/10.3390/rs14020361
Submission received: 29 November 2021 / Accepted: 14 December 2021 / Published: 13 January 2022
(This article belongs to the Section Forest Remote Sensing)

Abstract

In this paper, we post-process LiDAR cloud point data into pairs of template windows and geo-referenced images and evaluate position estimation between them using the Normalized Cross-Correlation (NCC) method. We created intensity, surface and terrain image pairs for template matching, with 5 m pixel spacing, through binning. We evaluated square and circular binning approaches, without filtering the original data. Template matching achieved approximately 7 m root mean square error (RMSE) for intensity and surface templates on the respective geo-referenced images, while terrain templates produced many mismatches due to insufficient terrain features over the assumed flight transect. Analysis of the NCC showed that bad matches of intensity and surface templates can be rejected, but terrain templates required an additional flatness criterion for rejection. The combined NCC of intensity, surface and terrain proved stable for rejecting bad matches and had the lowest RMSE. Filtering outliers from the surface images changed the accuracy of the matches very little but greatly improved the correlation values, indicating that the forest canopy might offer the best features for geo-localization with template matching. Position estimation is essential for autonomous navigation of aerial vehicles, and these experiments with LiDAR data show potential for localization over densely forested regions where methods using optical camera data may fail to acquire distinguishable features.

1. Introduction

Autonomous navigation of Unmanned Aerial Vehicles (UAV) has been used for diverse applications such as remote sensing [1], crop monitoring [2] and reforesting [3]. It is also used for safety [4], when a remotely controlled UAV loses the connection signal with the pilot or must return to a moving location [5]. The use of external Global Navigation Satellite Systems (GNSS) combined with on-board Inertial Navigation Systems (INS) is a tried and tested method present on many available UAVs [6,7]. Although the INS is part of the UAV platform, and its quality is a design decision combining engineering and economics, each GNSS has its own design and is made globally available as a signal. The signal must be received and processed before becoming a usable position. For this reason, UAV platforms may be subject to interference when receiving the signal emitted from the satellites, or may not receive it at all, in the case of outages or denial of service by the satellite provider. The latter can be mitigated by implementing and using more than one source of global positioning [8]. Malicious interference can be mitigated by the use of better antennas, spatial and frequency filtering, and vector tracking [9]. Finally, a UAV is still subject to natural phenomena [10,11,12]. One natural phenomenon associated with GNSS signal interference is ionospheric scintillation [13,14,15,16], characterized by irregularities in the ionospheric electron density. Ionospheric scintillation is able to cause a complete disruption of the emitted GNSS signal [17,18,19]. Ionospheric bubbles [20,21,22] are events linked to the scintillation phenomenon, occurring in the Earth's equatorial magnetic zone, which includes the northern region of Brazil [23].
Many solutions have been proposed for autonomous aerial navigation in GNSS-denied scenarios. One branch of research uses Computer Vision techniques that allow the UAV to locate itself by means of template matching [24], landmark recognition [25] and odometry [26], among others [27,28,29]. Those approaches are usually validated using standard optical RGB cameras. Although digital RGB cameras have many advantages, such as capturing multiple whole frames per second, there are also difficulties, for instance when navigating over forests, in low light or at night, or over water [30]. The use of an active sensor in those scenarios can tackle some of those limitations and provide alternatives for autonomous navigation, as it has in other domains of study [31]. Infrared-inertial precision landing [32] and thermal-inertial localization [33,34] are examples of active sensor solutions utilizing infrared wavelengths. Additionally, advancements concerning navigation, mapping and geolocalization have been made with Light Detection and Ranging (LiDAR) [35] and Synthetic Aperture Radar (SAR) sensors [36,37].
This paper explores a scenario that is difficult for an optical digital camera embedded in a UAV: flying over a densely forested and remarkably featureless region of the Amazon forest which, additionally, is a region where ionospheric scintillation frequently occurs and GNSS disruption is expected. Figure 1a illustrates a typical scene of a flight over the Amazon rain forest, while Figure 1b displays urban features usually present in autonomous navigation studies. Both regions, forested and urban, are illustrated with visible band satellite images.
This comparison illustrates the difficulty of finding distinguishable features in densely forested regions with optical sensors and the necessity of pursuing new methods and approaches for this particular GNSS-denied scenario.
LiDAR is able to acquire data by day or night and produces topographic cloud point data, usually with an intensity value associated with the reflection at each estimated position. Compared to optical radiometric information, the use of LiDAR in such a scenario yields new information derived from laser penetration, capturing height information from the canopy down to the ground, which may be investigated for features applicable to template matching. Conversely, the gridless 3D nature of the cloud point data requires processing into a 2D matrix before the well-known methods of position estimation, such as template matching, can be used.
Our experiment suggests the applicability of position estimation using LiDAR acquisitions and LiDAR-derived reference databases with a template matching and normalized cross-correlation (NCC) algorithm in such a scenario. It uses the height information derived from the canopy and the ground, which is not readily available with RGB cameras, to complement the radiometric information of the forest.

2. Aerial Autonomous Navigation: LiDAR × LiDAR

Autonomous navigation methods based on real-time image acquisition will be influenced by the scene (e.g., terrain features and land cover) and flight conditions such as height and speed. To navigate over regions with few distinguishable features on the land cover, alternative strategies must be used. Densely forested regions of the Amazon Forest have few distinguishable features for use with template matching methods, as illustrated in Figure 1. An adequate solution for this problem is the use of active sensors, such as LiDAR, that acquire height and intensity information, as long as there are no clouds blocking the laser from reaching the scene. The next subsections provide information on LiDAR point cloud acquisition and how it can work as an aid to navigation.

2.1. The LiDAR Active Sensor

Active sensors such as LiDAR have their own illumination source. The most straightforward advantage is control over the scene illumination during acquisition. A LiDAR sensor records reflections of the emitted laser as ranges. When the measurements are integrated with an inertial measurement unit (IMU), they produce cloud point data, i.e., a set of gridless 3D points. GNSS information is added for geo-referencing the cloud point data. Often the intensity of the reflection is recorded and provided with each 3D point coordinate. One notable consequence of acquisition by active sensors is the possibility of using the sensor at night. One notable disadvantage is the increased energy consumption, which may be a precious resource to spare, depending on the UAV specification.
Figure 2 illustrates the penetration and acquisition of a discrete-return LiDAR system acquiring data over a tree, where it records three reflections from a single laser pulse; the locations of those reflections are shown as dots. To produce surface maps, the first reflections are used (gray dot), while to produce terrain maps, the last reflections are used (green dot). Intermediate reflections between the first and the last may occur inside the trees (red dot) and are of no use for our geo-localization experiment.
In densely forested regions, such as the Amazon forest, the last reflection of the laser might not reach the ground easily, i.e., it simply does not penetrate all the canopy or the returned signal is so weak that it does not translate to a discrete return. This is an important consideration when choosing the binning resolution (see Section 3.2).

2.2. Terrain-Referenced Navigation (TRN)

Terrain-Referenced Navigation (TRN) uses terrain features as reference to provide aid to navigation. The LiDAR sensor acquires terrain, surface and intensity data, and is applicable for use with TRN. Among the many successful applications of aerial vehicles and LiDAR, some are of interest for the context of this paper.
Campbell et al. [38,39] post-processed laser data captured by NASA Dryden's DC-8 and demonstrated a concept of TRN with LiDAR. They noted restrictions regarding clear weather and low altitude, and also that it is critical to understand the underlying methods used to create the reference Digital Elevation Map (DEM). They used a thin strip of around one second of data and required a terrain model of 2 m pixel spacing. Toth et al. [40] implemented a simulation of LiDAR acquisition of terrain and tested the Iterated Closest Point (ICP) algorithm. They reported that a too-thin segment could exacerbate platform along-axis errors, that near-square regions worked better for matching, and that ICP should be used with consideration of breaklines (points with the same position except for different heights) and flight direction. Leines and Raquet [41] tested SIFT using LiDAR-derived images, with varying results. Hemann et al. [42] conducted a long flight experiment in a helicopter equipped with LiDAR. They reduced the 3D cloud data matching problem to a 2D image matching problem with Normalized Cross-Correlation (NCC) and a 1D problem of height. Care was taken to accept only matches with very high correlation (0.922) and terrain with sufficient variation (>2.5 m standard deviation). There were matches on 70% of the frames; the other 30% of failed matches occurred mostly over cities, where the laser does not penetrate constructions and the acquired data does not match the terrain reference. They also showed the problems of INS-only flight compared to the working solution, which keeps the position error bounded irrespective of distance.
These previous experiments suggest that we should (a) know the reference terrain very well; (b) use near-square templates; and (c) preferably use a high correlation threshold. Point (a) makes it very difficult to use the available terrain maps of the Amazon forest region. They were mostly generated from satellite Synthetic Aperture Radar (SAR) interferometry and do not correlate well with the terrain as measured by a LiDAR sensor, even those of longer, more penetrating wavelengths, as illustrated by Figure 3.
Such differences prompted us to design our experiment around a reference database under our control. Therefore, we utilized LiDAR transects to generate the reference image to be used for geo-localization.

2.3. Test of an Aerial Vehicle Localization Based on LiDAR Reference Data over the Amazon Forest

There are multiple challenges for an autonomous navigation system flying over the Amazon forest in a GNSS-denied scenario. This paper addresses only the pattern matching aspect of position estimation by template matching. Furthermore, it uses a LiDAR transect with a LiDAR geo-referenced database over a densely forested region of the Amazon forest, instead of the SAR-derived height maps. The methodology, dataset and test results are described in the next sections.

3. LiDAR × LiDAR Methodology

The tested methodology is summarized in the diagram of Figure 4. Each step of the diagram is addressed in the next subsections.
Two separate datasets were used as inputs. The first is the reference cloud point database of the region of interest. The second is the transect of the flight over the region of interest. The methodology is applied to the height information to generate terrain and surface (i.e., canopy top) matrices, and to the intensity information to generate a map of the strongest reflection intensities. Two different strategies are applied for binning, the 3D-to-2D transformation. Differences regarding surface, terrain and intensity map processing are noted where relevant.

3.1. The Reference Cloud Point Data

First, the reference LiDAR transects must be united to form an aggregated cloud point database. This database serves as the geo-reference, and any position errors are measured against it. Aggregating cloud point data is as simple as concatenating the lists of points, since cloud point datasets are essentially lists of point coordinates with associated information such as intensity. In this step, all reference transects are joined together in a single file. This operation is appropriate because the transects used were acquired on the same date. The original Universal Transverse Mercator (UTM) coordinate system provided with the LiDAR transects was used throughout the operations. The result is a single larger cloud point dataset file.
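As an illustration, a minimal sketch of this aggregation step is shown below, assuming the transects are delivered as LAS/LAZ files readable with the laspy library; the function name and file handling are illustrative assumptions, not the exact tooling used in this work.

```python
import numpy as np
import laspy  # assumption: reference transects are available as LAS/LAZ files


def aggregate_transects(paths):
    """Concatenate several LiDAR transects into a single point list,
    keeping the original UTM coordinates and the per-point intensity."""
    xyz, intensity = [], []
    for path in paths:
        las = laspy.read(path)
        xyz.append(np.column_stack([np.asarray(las.x),
                                    np.asarray(las.y),
                                    np.asarray(las.z)]))
        intensity.append(np.asarray(las.intensity, dtype=float))
    return np.vstack(xyz), np.concatenate(intensity)
```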

3.2. Binning

To apply standard matrix (2D) algorithms, such as the Normalized Cross-Correlation, the 3D cloud point data must first be reduced to a 2D map fitted to a regular spatial grid. This process is called binning [42]. There are multiple ways to flatten cloud point data to generate values for the 2D matrices, e.g., simple averaging or inverse-square-distance weighting over a region, among others. Of particular importance is the selection of the region considered when generating a value for the matrix. Two types of bin were used in this work: a regular square bin, matching the limits of the pixel spacing coordinates; and a circular bin, overflowing and touching the four corners of the pixel spacing, as illustrated in Figure 5 below. All points whose northing-easting coordinates fall within the limits of the bin, regardless of height, are selected for the binning operation. A pixel spacing of 5 m was used in this work. Finer resolutions were tested and, on visual inspection, found suitable for 2D surface and 2D intensity maps, but they produced unreliable results for the 2D terrain maps, so a safe compromise was reached with a coarser 5 m resolution to generate and work with all three maps.
To generate intensity maps, all points delimited by the chosen bin were evaluated and the largest intensity value was taken as representative for that pixel of the matrix. In a similar way, to generate terrain and surface maps, all points delimited by the chosen bin were evaluated and, respectively, the minimum or maximum height value was selected as representative of the center point of the corresponding pixel. The selection of the minimum and maximum height is illustrated in Figure 6, with maximum values in blue and minimum values in red for each bin.
The circular bin is of interest because we are choosing a representative value for the very center of the pixel: if a representative value is allowed when found at the corner of the region delimited by the pixel, a value at the same distance should also be allowed when it falls in one of the four neighboring pixels. This criterion slightly underestimates (terrain) or overestimates (surface) the more appropriate cartographic values, but it does so on both the reference database and the flight transect.
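A minimal sketch of the binning step is given below, assuming the aggregated points are held as a NumPy array of easting, northing and height plus a parallel intensity array; the grid-origin convention (north-up, upper-left corner at x0, y0), the function name and the use of SciPy's KD-tree for the circular bin are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def bin_cloud(points, intensity, x0, y0, n_rows, n_cols, pixel=5.0, circular=False):
    """Bin gridless 3D points (N x 3: easting, northing, height) into 2D
    intensity, surface and terrain maps at `pixel` spacing.
    Square bins take every point inside the cell; circular bins take every
    point within pixel*sqrt(2)/2 of the cell centre (a circle through the
    cell corners), so corner points of neighbouring cells are also included."""
    intensity_map = np.full((n_rows, n_cols), np.nan)
    surface_map = np.full((n_rows, n_cols), np.nan)
    terrain_map = np.full((n_rows, n_cols), np.nan)

    if circular:
        tree = cKDTree(points[:, :2])
        radius = pixel * np.sqrt(2.0) / 2.0
    else:
        col_of = np.floor((points[:, 0] - x0) / pixel).astype(int)
        row_of = np.floor((y0 - points[:, 1]) / pixel).astype(int)  # north-up grid

    for r in range(n_rows):
        for c in range(n_cols):
            if circular:
                cx = x0 + (c + 0.5) * pixel
                cy = y0 - (r + 0.5) * pixel
                idx = tree.query_ball_point([cx, cy], radius)
            else:
                idx = np.where((row_of == r) & (col_of == c))[0]
            if len(idx) == 0:
                continue  # empty bin stays NaN
            z = points[idx, 2]
            intensity_map[r, c] = intensity[idx].max()  # strongest reflection
            surface_map[r, c] = z.max()                 # highest return (canopy top)
            terrain_map[r, c] = z.min()                 # lowest return (ground)
    return intensity_map, surface_map, terrain_map
```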

3.2.1. Outliers

When we began our tests, we did not apply any pre-filtering to the cloud point data to remove outliers. As a consequence, the binning for surface maps, specifically, selected some outlier points located several meters above the forest in the original data as representative values. They were kept to see how the methodology would perform in their presence. An example of the outlier issue is shown in the rendered 3D cloud of Figure 7.

3.2.2. Filtering Outliers

To compare the influence of the presence or absence of the outliers when binning the 3D data, an optional simple filtering step was designed, specific to surface map generation. As can be seen from the accumulated sum of the differences between the unfiltered surface and terrain maps of the flight transect in Figure 8, most of the trees are accounted for within a 60 m height difference.
The filter considers the terrain map generated by the binning process and produces an additional surface map in which representative values are skipped if, in that bin, they are more than 60 m above the detected ground. After visual inspection, most outliers were successfully removed from the new surface map.
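A sketch of this filter is shown below under one simplifying assumption: the paper applies the cut during binning (skipping candidate surface values more than 60 m above the detected ground), whereas this illustration masks the affected bins after binning; the function name and the use of NaN for skipped bins are assumptions.

```python
import numpy as np


def filter_surface_outliers(surface_map, terrain_map, max_height_above_ground=60.0):
    """Drop surface values that sit more than `max_height_above_ground`
    metres above the terrain detected in the same bin (outlier returns)."""
    filtered = surface_map.copy()
    outliers = (surface_map - terrain_map) > max_height_above_ground
    filtered[outliers] = np.nan  # skipped bins are simply left empty in this sketch
    return filtered
```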

3.3. Template Matching

The template matching algorithm takes as input a smaller image (or template) and searches for the most similar region in a larger reference image. This is an exhaustive test of all possible placements and results in a similarity map where larger values correspond to better matching. The location of the largest value of the similarity map is taken as the position estimate. If the reference image is geo-referenced, the estimated pixel displacement can be translated back to a coordinate system, resulting in a geographic position estimate for the captured image (template). Our experiment uses the whole reference image when searching for matches, for a thorough analysis, but the search window (the M, N dimensions of Figure 9) could and should be limited in practical cases.
LiDAR images of the flight transect are generated from the cloud point data; therefore, they can be generated at the appropriate rotation and scale of the geo-referenced database. The template matching algorithm is illustrated in Figure 9. The symbols that appear in the figure are the same ones used in Equations (1) and (2). Both cross-correlation and normalized cross-correlation can be used as similarity operations, and their differences are explained below.
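For illustration, a small sketch of translating a matched pixel position back to a UTM coordinate follows, assuming a north-up reference raster whose upper-left corner is at (x0, y0); the function name and the half-pixel centre convention are assumptions.

```python
def pixel_to_utm(row, col, x0, y0, pixel=5.0):
    """Convert a matched pixel position in the geo-referenced reference
    image to a UTM easting/northing estimate (centre of the matched pixel)."""
    easting = x0 + (col + 0.5) * pixel
    northing = y0 - (row + 0.5) * pixel
    return easting, northing
```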

3.3.1. Cross-Correlation and Its Problems When Applied on Terrain Data

The cross-correlation (CC) is a well-known method for image matching between a template and a reference. It is defined as
c(s, t) = \sum_{x, y} f(x, y)\, w(x - s, y - t)        (1)
where f is a 2D reference image of dimensions M × N, w is the template or window of dimensions J × K (smaller than M × N) being matched, and c is the cross-correlation result. The translation is estimated from the pair (s, t) where the value of c is greatest. This approach assumes that different regions of the reference image have similar average values [43]. Terrain features have naturally occurring regions of lower and higher average values (e.g., valleys and mountain regions), invalidating the assumption and causing the results to be skewed towards the higher-average regions.
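A toy numerical illustration of this bias, using made-up values, is shown below: the template matches the left part of the reference exactly, yet plain cross-correlation peaks over the brighter right half.

```python
import numpy as np

# Hypothetical 1D profiles: the template matches the reference exactly at
# offset 0, but the right half of the reference has a much higher average.
template = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
reference = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 9.0, 9.0, 10.0, 9.0, 9.0])

cc = np.correlate(reference, template, mode="valid")
print(cc)           # [ 2.  0. 10.  9. 19. 18.]
print(cc.argmax())  # 4 -> the peak lands in the bright region, not at the true match (0)
```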

3.3.2. Normalized Cross-Correlation

The normalized cross-correlation adjusts cross-correlation for different average values, making it more robust in terms of correlating the shape of terrain or surface maps. It is defined as
NCC(s, t) = \dfrac{\sum_{x, y} \left[ f(x, y) - \bar{f}_{s,t} \right] \left[ w(x - s, y - t) - \bar{w} \right]}{\sqrt{\sum_{x, y} \left[ f(x, y) - \bar{f}_{s,t} \right]^2 \sum_{x, y} \left[ w(x - s, y - t) - \bar{w} \right]^2}}        (2)
where the numerator is the cross-correlation presented earlier, computed after subtracting the average of the reference region under the template, \bar{f}_{s,t}, and the constant average of the template or window, \bar{w}. The denominator normalizes the adjusted values. The normalized cross-correlation produces results in the range [−1, 1], where 1 means the same feature was found and −1 means an inverted feature was found.
This formulation has the distinct advantage of permitting the selection of a stable cut-off value, where template matches that fail to reach a safe similarity threshold are rejected. Template matches that reach or surpass the threshold are considered position estimation candidates and are tested against other validation criteria before the translation values obtained from the peak are used for geo-referenced position estimation. In our experiments, a good threshold value is at least around 0.4 for any NCC estimate. This value might change with different data and is derived from observing the results (see the next section). It is relevant to note that the NCC is not computable on a perfectly flat surface.
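A direct (unoptimized) sketch of Equation (2) and of the thresholding described above is given below; the function name, the NaN handling for flat regions and the loop-based evaluation are illustrative choices, not the implementation used in the paper.

```python
import numpy as np


def ncc_map(reference, template, eps=1e-12):
    """Normalized cross-correlation (Equation (2)) of `template` against
    `reference`, evaluated at every valid offset (s, t).
    Returns a similarity map in [-1, 1]; offsets where the reference patch
    (or the template itself) is perfectly flat are left as NaN, since the
    NCC is not computable there."""
    J, K = template.shape
    M, N = reference.shape
    w = template - template.mean()
    w_energy = np.sqrt((w ** 2).sum())
    ncc = np.full((M - J + 1, N - K + 1), np.nan)
    for s in range(M - J + 1):
        for t in range(N - K + 1):
            patch = reference[s:s + J, t:t + K]
            f = patch - patch.mean()
            denom = np.sqrt((f ** 2).sum()) * w_energy
            if denom < eps:
                continue
            ncc[s, t] = (f * w).sum() / denom
    return ncc


# Usage sketch: take the peak as the position estimate and reject weak matches.
# ncc = ncc_map(reference_image, template_window)
# s, t = np.unravel_index(np.nanargmax(ncc), ncc.shape)
# accepted = ncc[s, t] >= 0.4   # threshold observed to work for this dataset
```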

3.3.3. Additional Validation Criteria

Two additional validation criteria are used to further restrict and avoid bad matches. The first criterion is the joint-NCC value, where the three NCC values generated from the terrain, surface and intensity maps are combined into a single value using
NCC_{joint} = \sqrt[3]{NCC_t \times NCC_s \times NCC_i}        (3)
where NCC_t, NCC_s and NCC_i are, respectively, the terrain, surface and intensity NCC. By combining the NCC from the terrain, surface and intensity, we obtain a new similarity map where peaks in similar regions are reinforced and peaks without correspondence are reduced. The benefit of combining them is the possibility of a new position estimation while also having the new NCC values as a rejection criterion.
A second validation criterion was deemed necessary after evaluating the template matching performance on terrain maps. Choosing an NCC rejection criterion alone was not sufficient to separate good and bad matches; therefore, a measurement of terrain flatness was necessary. We used the percentage of gradient magnitudes below 1 m within the template search area. An adequate value is reasonably low, to avoid mostly flat areas; in our experiment, it should be lower than 70%.
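A minimal sketch of both criteria follows. Equation (3) is read literally as the cube root of the product of the three similarity maps; how negative correlations are treated is not specified in the text, so clipping them to zero before the product is an assumption of this sketch, as are the function names.

```python
import numpy as np


def joint_ncc(ncc_terrain, ncc_surface, ncc_intensity):
    """Equation (3): combine the three NCC similarity maps so that peaks
    present in all of them are reinforced. Negative values are clipped to
    zero here (an assumption) to keep the cube root well behaved."""
    product = (np.clip(ncc_terrain, 0, None)
               * np.clip(ncc_surface, 0, None)
               * np.clip(ncc_intensity, 0, None))
    return np.cbrt(product)


def flatness_percentage(height_map, threshold=1.0):
    """Percentage of pixels in the template search area whose gradient
    magnitude is below `threshold` (1 m in the paper); values above ~70%
    flag terrain too flat for reliable matching."""
    gy, gx = np.gradient(height_map)
    magnitude = np.hypot(gx, gy)
    return 100.0 * (magnitude < threshold).mean()
```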

4. Trajectory and Results on the Amazon Rain Forest

The following subsections detail the dataset, trajectory and all experiments that apply the position estimation methodology using surface, terrain and intensity maps.

4.1. Dataset

Nine LiDAR transects were used in this study, eight for generating the reference database and one, which perpendicularly crosses all the others, for position estimation. They are part of the EBA (Estimating the Biomass of Amazon) project and were used with permission of CCST/INPE (see the Acknowledgments for more information). The transects cover an area of approximately 3 × 3 km. They were provided and used with the UTM coordinate system (EPSG:31981, SIRGAS 2000 / UTM Zone 21S). Figure 10 shows the studied region, located near the Tapajós River, in red. The area is then expanded and the nine combined transects are shown with the LiDAR-derived peak intensity information.
Figure 11 shows the acquired LiDAR cloud point data processed via square binning with 5 m pixel spacing. Intensity, surface and terrain maps are shown. The trajectory (top) and the reference database (middle) are shown separately and were also combined using the reference position provided with the data. All measurements of template matching accuracy are made relative to that combined setup. In other words, if, after template matching, the template window selected from the trajectory is positioned exactly where it should be according to the provided geo-reference position, that is an accurate match.
All cloud point transects were acquired using the Trimble Harrier 68i LiDAR sensor with the configuration detailed in Table 1. The sensor is capable of full waveform acquisition, but only the discrete-return data were used in the experiments. Each transect was acquired in a single pass and, although obvious outliers were present above the forest, no filtering to remove those points was applied, except for a special case of surface image generated for analysis and comparison.

4.2. Experiments

Two rounds of similar experiments using the methodology were conducted. Each round differed at the binning stage, where terrain, surface, intensity and an extra filtered surface 2D map were produced. The first round used the square bin and the second round used the circular bin. In each round, the location of the aerial vehicle was estimated against the respective reference database, using both CC and NCC as the template matching similarity measure. For the experiments to be thorough, template windows of size 70 × 60 pixels were systematically generated at every easting pixel, from East to West, following the original trajectory (Figure 12, right to left), and each template window of terrain, surface, intensity and filtered surface was matched against the respective reference database for the best match estimation. Note that the beginning and the end of the trajectory transect lie beyond the geo-referenced mapped region (see Figure 11, bottom and Figure 12). Templates were selected outside the geo-referenced database region by design: the chosen validation parameters should reject matches of those templates as invalid, since they cannot be located on the reference database (Figure 11, middle).
The results of all experiments are summarized in Figure 13 and Figure 14. The horizontal axis represents the template window index. The left vertical axis goes from 0 to 1 to accommodate the NCC (orange), NCCjoint (yellow) and also the percentage of the region delimited by the template that has gradient magnitudes of less than 1 m (green), for surface and terrain images. The right vertical axis displays the error of the position estimate in pixels, i.e., the distance from the match to the correct position. This position estimate error was calculated as the Euclidean distance between the assumed correct position, derived from the geo-reference information of the flight transect, and the pixel position given by the template match estimation. It is displayed in pixel units in Figure 13 and Figure 14.
For further analysis, template indexes from 146 to 600 were selected and the root mean square errors of Cross-correlation and Normalized Cross-correlation were calculated; they are compared in Table 2. Those indexes were conservatively chosen because they lie completely inside the area covered by the reference database. Therefore, any remaining errors and mismatches are caused by incorrect position estimation of the template matching algorithm and not by missing reference data.
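For clarity, a short sketch of how such an RMSE can be computed from the pixel-level errors at 5 m spacing is given below; the function name and array layout are assumptions.

```python
import numpy as np


def rmse_metres(est_rows, est_cols, true_rows, true_cols, pixel=5.0):
    """Root mean square error of the position estimates over the selected
    template indexes, converting the Euclidean pixel error to metres."""
    err_px = np.hypot(np.asarray(est_rows) - np.asarray(true_rows),
                      np.asarray(est_cols) - np.asarray(true_cols))
    return float(np.sqrt(np.mean((err_px * pixel) ** 2)))
```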
Moreover, for the same indexes, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24 show the correct trajectory drawn in yellow over the corresponding experiment map, and show the estimated trajectory in red, using the position estimates obtained for the particular experiment.

5. Discussion

From Figure 13 and Figure 14 we can gather considerable insight. It is possible to see that both intensity (top left) and surface (top right) have correct matches over almost all of the geo-referenced region. For intensity, this region of correct matches can be selected with an NCC criterion of 0.3, and a few bad matches remain. Those matches correspond to data without correspondence in the reference database, near sequential template number 78. Those surges of greater correlation happen with both square and circular bins. For the unfiltered surface experiments, selecting a correlation criterion is much easier and it can be as high as 0.6. A lower criterion of 0.4 to 0.5 would allow a large region of bad matches after sequential template number 610 to pass as valid matches. This happens in both square and circular bin experiments.
The terrain map experiments failed badly with both binning approaches. Although some particular areas of correct matches can be found (Figure 13 and Figure 14, middle-left), it is not possible to make good use of the correlation values to select a criterion. Looking at the percentage of gradients of less than 1 m, it is possible to see that most of the region is quite flat, except in the region of the early templates, outside the area covered by the reference database. Therefore, even the correct matches are fortunate and unexpected. The gradient values for this region are shown in Figure 25. The flatness of the region delimited by the flight transect appears as darker values.
From our experiments, the gradient criterion should obviously be set lower than 70%, and possibly lower than 30–50%, which would cause a complete rejection of the matches, even the good ones, if terrain data were used alone.
The surface experiments filtered for outlier removal (Figure 13 and Figure 14, middle-right) show a distinctive result for TRN techniques. Eliminating the outliers present in the data eliminates noise in the surface formed by the binning process. Without outliers, the correlation strength is greatly increased and a criterion as high as 0.8 can be safely used to discard bad matches. Filtering outliers requires prior knowledge of the scene and can certainly be applied during reference database preparation, with the expectation of stronger correlation results, as far as our experiments have determined.
For each template window, the joint NCC experiment combined the similarity maps from the intensity, unfiltered surface and terrain correlations, reinforcing the peak correlation values that appeared in all maps at the same locations and otherwise diminishing the correlation peak values. Both the square bin and circular bin experiments produced slightly better position estimates (see Table 2) than any other experiment taken alone, even when combining the apparently bad terrain estimations. With an NCC criterion slightly over 0.3, the joint approach correctly selects the good matches and discards the bad ones.
Overall, the use of circular bins slightly increased the correlation values in all experiments, with the exception of the unfiltered surface maps. It is clear, in our particular case, that the circular bins cause the outliers to be selected as representatives for more pixels and thus insert more noise into the resulting surface map, lowering the correlation values. The filtered surface behavior with circular bins also shows a slight increase in correlation.
Table 2 summarizes how well the position estimates performed for each kind of mapping used, and also for the combined mapping strategy. The results indicate a slight tendency of better positioning using the circular bin strategy, but the results are all very close.

6. Conclusions and Final Remarks

This paper addresses UAV autonomous navigation over a fully covered, densely vegetated patch of rain forest in the Amazon region. In this context, the procedure of using image processing for UAV positioning, where the peak value of image matching by cross-correlation of segmented images is used, did not work when passive sensors were employed [27,28,30,34], because there are no clear features to be obtained by segmentation (see Figure 1). Therefore, for this situation, we tested the application of an active sensor as an approach to the autonomous navigation problem.
Using data from a LiDAR sensor, three types of mapping were employed: terrain (ground), surface (canopy top) and intensity (laser reflection). Mapping was built through two types of binning, square and circular, and matching was designed with the normalized cross-correlation (NCC) formulation shown in Equation (2). Direct cross-correlation was also tested. Due to the flatness of the soil (Figure 25), the use of terrain mapping alone was not a good strategy (see Figure 13 and Figure 14, middle-left). Mapping for surface (canopy top) and intensity were good strategies for UAV positioning, with a small advantage for intensity mapping (compare the end of the trajectory in Figure 13 and Figure 14, top left (intensity) and top right (surface)). The combined mapping strategy reinforced the correlation of correct matches and diminished the correlation of incorrect matches (Figure 13 and Figure 14, bottom).
The UAV trajectories recovered from the positioning by employing the three NCC mappings combined are shown in Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24.
It is relevant to determine the criteria under which the NCC approach can be applied, as stressed in Section 3.3.3.
The very distinctive correlation results of the filtered surface suggest that TRN strategies can be used by treating the tree canopy as the terrain in densely forested regions. The canopy should remain a reasonably stable surface at 5 m pixel spacing, but testing new acquisitions against an older reference database is needed to determine how much the canopy correlation degrades over time.
Therefore, although LiDAR × LiDAR position estimation shows potential for use as an aid to navigation in densely forested regions, imaging large regions with LiDAR for reference maps is costly, especially considering the size of the Amazon forest.

Author Contributions

Conceptualization: R.N.S., H.F.d.C.V. and E.H.S.; Methodology: R.N.S., H.F.d.C.V. and E.H.S.; Software development: R.N.S.; Validation: R.N.S., H.F.d.C.V. and E.H.S.; Writing—original draft preparation: R.N.S., H.F.d.C.V. and E.H.S.; Writing—review and editing: R.N.S., H.F.d.C.V. and E.H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This paper used the transects NP_T-1016_001 to NP_T-1016_009 with permission. See the Acknowledgments for origin and contact information. The source code is available upon request.

Acknowledgments

The authors wish to thank Jean Ometto from the National Institute for Space Research (INPE) for permission to use LiDAR data from the project EBA (Estimating the Biomass of Amazon). The homepage of the project can be accessed at http://www.ccst.inpe.br/projetos/eba-estimativa-de-biomassa-na-amazonia/ (accessed on 1 October 2021). Author H.F.d.C.V. also thanks the National Council for Scientific and Technological Development (CNPq, Brazil) for the research grant CNPq: 312924/2017-8. The authors also thank the Projetos dos Cenários Futuros de Domínio Aéreo (Grupo de Alto Nível—Cooperação Brasil-Suécia em Aeronáutica) contract 01.20.0195.00 for publishing costs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329. [Google Scholar] [CrossRef] [Green Version]
  2. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Daloye, A.M.; Erkbol, H.; Fritschi, F.B. Crop Monitoring Using Satellite/UAV Data Fusion and Machine Learning. Remote Sens. 2020, 12, 1357. [Google Scholar] [CrossRef]
  3. Dileep, M.R.; Navaneeth, A.V.; Ullagaddi, S.; Danti, A. A Study and Analysis on Various Types of Agricultural Drones and its Applications. In Proceedings of the 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), Bangalore, India, 26–27 November 2020; pp. 181–185. [Google Scholar] [CrossRef]
  4. Nguyen, T.H.; Cao, M.; Nguyen, T.; Xie, L. Post-Mission Autonomous Return and Precision Landing of UAV. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1747–1752. [Google Scholar] [CrossRef]
  5. Gautam, A.; Sujit, P.B.; Saripalli, S. A survey of autonomous landing techniques for UAVs. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 1210–1218. [Google Scholar] [CrossRef]
  6. Wang, J.; Lee, H.; Hewitson, S.; Lee, H.K. Influence of dynamics and trajectory on integrated GPS/INS navigation performance. Positioning 2003, 2, 109–116. [Google Scholar] [CrossRef] [Green Version]
  7. Labowski, M.; Kaniewski, P.; Serafin, P. Inertial navigation system for radar terrain imaging. In Proceedings of the IEEE/ION PLANS 2016, Savannah, GA, USA, 11–14 April 2016; pp. 942–948. [Google Scholar]
  8. Li, X.; Zhang, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Precise positioning with current multi-constellation global navigation satellite systems: GPS, GLONASS, Galileo and BeiDou. Sci. Rep. 2015, 5, 1–14. [Google Scholar] [CrossRef] [PubMed]
  9. Gao, G.X.; Sgammini, M.; Lu, M.; Kubo, N. Protecting GNSS Receivers From Jamming and Interference. Proc. IEEE 2016, 104, 1327–1338. [Google Scholar] [CrossRef]
  10. Cilliers, P.; Opperman, B.; Meyer, R. Investigation of ionospheric scintillation over South Africa and the South Atlantic Anomaly using GPS signals: First results. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 2, p. II-879. [Google Scholar]
  11. Kim, T.H.; Sin, C.S.; Lee, S. Analysis of effect of spoofing signal in GPS receiver. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, 17–21 October 2012; pp. 2083–2087. [Google Scholar]
  12. Kim, T.H.; Sin, C.S.; Lee, S.; Kim, J.H. Analysis of effect of anti-spoofing signal for mitigating to spoofing in GPS L1 signal. In Proceedings of the 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013), Gwangju, Korea, 20–23 October 2013; pp. 523–526. [Google Scholar]
  13. Aon, E.F.; Othman, A.R.; Ho, Y.H.; Shaddad, R. Analysis of GPS link ionospheric scintillation during solar maximum at UTeM, Malaysia. In Proceedings of the 2014 IEEE 2nd International Symposium on Telecommunication Technologies (ISTT), Langkawi, Malaysia, 24–26 November 2014; pp. 84–87. [Google Scholar]
  14. Ahmed, W.A.; Wu, F.; Agbaje, G.I. Analysis of GPS ionospheric scintillation during solar maximum at mid-latitude. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 4151–4154. [Google Scholar]
  15. Sun, X.; Zhang, Z.; Ji, Y.; Yan, S.; Fu, W.; Chen, Q. Algorithm of ionospheric scintillation monitoring. In Proceedings of the 2018 7th International Conference on Digital Home (ICDH), Guilin, China, 30 November–1 December 2018; pp. 264–268. [Google Scholar]
  16. Gulati, I.; Li, H.; Stainton, S.; Johnston, M.; Dlay, S. Investigation of Ionospheric Phase Scintillation at Middle-Latitude Receiver Station. In Proceedings of the 2019 International Symposium ELMAR, Zadar, Croatia, 23–25 September 2019; pp. 191–194. [Google Scholar]
  17. Datta-Barua, S.; Doherty, P.; Delay, S.; Dehel, T.; Klobuchar, J.A. Ionospheric scintillation effects on single and dual frequency GPS positioning. In Proceedings of the 16th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS/GNSS 2003), Portland, OR, USA, 9–12 September 2003; pp. 336–346. [Google Scholar]
  18. Steenburgh, R.; Smithtro, C.; Groves, K. Ionospheric scintillation effects on single frequency GPS. Space Weather 2008, 6. [Google Scholar] [CrossRef] [Green Version]
  19. Guo, K.; Aquino, M.; Veettil, S.V. Ionospheric scintillation intensity fading characteristics and GPS receiver tracking performance at low latitudes. GPS Solut. 2019, 23, 1–12. [Google Scholar] [CrossRef] [Green Version]
  20. Carter, B.; Retterer, J.; Yizengaw, E.; Wiens, K.; Wing, S.; Groves, K.; Caton, R.; Bridgwood, C.; Francis, M.; Terkildsen, M.; et al. Using solar wind data to predict daily GPS scintillation occurrence in the African and Asian low-latitude regions. Geophys. Res. Lett. 2014, 41, 8176–8184. [Google Scholar] [CrossRef]
  21. Mokhtar, M.; Rahim, N.; Ismail, M.; Buhari, S. Ionospheric Perturbation: A Review of Equatorial Plasma Bubble in the Ionosphere. In Proceedings of the 2019 6th International Conference on Space Science and Communication (IconSpace), Johor, Malaysia, 28–30 July 2019; pp. 23–28. [Google Scholar]
  22. Takahashi, H.; Taylor, M.J.; Sobral, J.; Medeiros, A.; Gobbi, D.; Santana, D. Fine structure of the ionospheric plasma bubbles observed by the OI 6300 and 5577 airglow images. Adv. Space Res. 2001, 27, 1189–1194. [Google Scholar] [CrossRef]
  23. Silva, D.; Takahashi, H.; Wrasse, C.; Figueireido, C. Characteristics of ionospheric bubbles observed by TEC maps in Brazilian sector. In Proceedings of the 15th International Congress of the Brazilian Geophysical Society, Rio de Janeiro, Brazil, 31 July–3 August 2017; pp. 1714–1716. [Google Scholar] [CrossRef]
  24. Briechle, K.; Hanebeck, U.D. Template Matching Using Fast Normalized Cross Correlation; Optical Pattern Recognition XII; Casasent, D.P., Chao, T.H., Eds.; International Society for Optics and Photonics, SPIE. 2001; Volume 4387, pp. 95–102. Available online: https://spie.org/Publications/Proceedings/Paper/10.1117/12.421129?SSO=1 (accessed on 15 November 2021).
  25. Shiguemori, E.H.; Saotome, O. UAV visual autolocalization based on automatic landmark recognition. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W3, 89–94. [Google Scholar] [CrossRef] [Green Version]
  26. Nistér, D.; Naroditsky, O.; Bergen, J. Visual odometry. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, Washington, DC, USA, 27 June–2 July 2004; Volume 1, p. I-I. [Google Scholar] [CrossRef]
  27. Conte, G.; Doherty, P. An Integrated UAV Navigation System Based on Aerial Image Matching. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; pp. 1–10. [Google Scholar] [CrossRef]
  28. Goltz, G.A.M.; Shiguemori, E.H.; Campos Velho, H.F. UAV Position Estimation By Image Processing Using Neural Networks. In Proceedings of the X Brazilian Congress on Computational Intelligence (CBIC-2011), Joinville, Brazil, 3–6 October 2011; pp. 9–17. [Google Scholar]
  29. Silva, C.A.O.; Goltz, G.A.M.; Shiguemori, E.H.; Castro, C.L.; Campos Velho, H.F.; Braga, A.P. Image matching applied to autonomous navigation of unmanned aerial vehicles. Int. J. High Perform. 2016, 6, 205–212. [Google Scholar] [CrossRef]
  30. Braga, J.R.G.; Campos Velho, H.F.; Conte, G.; Doherty, P.; Shiguemori, E.H. An image matching system for autonomous UAV navigation based on neural network. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1–6. [Google Scholar]
  31. Rostami, M.; Kolouri, S.; Eaton, E.; Kim, K. SAR Image Classification Using Few-Shot Cross-Domain Transfer Learning. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 907–915. [Google Scholar] [CrossRef]
  32. Zhang, L.; Zhai, Z.; He, L.; Wen, P.; Niu, W. Infrared-inertial navigation for commercial aircraft precision landing in low visibility and gps-denied environments. Sensors 2019, 19, 408. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Papachristos, C.; Mascarich, F.; Alexis, K. Thermal-inertial localization for autonomous navigation of aerial robots through obscurants. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; pp. 394–399. [Google Scholar]
  34. Silva, W.; Shiguemori, E.H.; Vijaykumar, N.L.; Campos Velho, H.F. Estimation of UAV position with use of thermal infrared images. In Proceedings of the International Conference on Sensing Technology (ICST-2015), Auckland, New Zealand, 8–10 December 2015; pp. 211–217. [Google Scholar]
  35. Qian, J.; Chen, K.; Chen, Q.; Yang, Y.; Zhang, J.; Chen, S. Robust Visual-Lidar Simultaneous Localization and Mapping System for UAV. IEEE Geosci. Remote Sens. Lett. 2021, 1–5. [Google Scholar] [CrossRef]
  36. Markiewicz, J.; Abratkiewicz, K.; Gromek, A.; Ostrowski, W.; Samczyński, P.; Gromek, D. Geometrical matching of SAR and optical images utilizing ASIFT features for SAR-based navigation aided systems. Sensors 2019, 19, 5500. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Sjanic, Z.; Gustafsson, F. Fusion of information from SAR and optical map images for aided navigation. In Proceedings of the 2012 15th International Conference on Information Fusion, Suntec City, Singapore, 9 July 2012; pp. 1705–1711. [Google Scholar]
  38. Campbell, J.; De Haag, M.U.; van Graas, F.; Young, S. Light detection and ranging-based terrain navigation-a concept exploration. In Proceedings of the 16th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS/GNSS 2003), Portland, OR, USA, 9–12 September 2003; pp. 461–469. [Google Scholar]
  39. Campbell, J.; De Haag, M.U.; van Graas, F. Terrain-Referenced Positioning Using Airborne Laser Scanner. Navigation 2005, 52, 189–197. [Google Scholar] [CrossRef]
  40. Toth, C.; Grejner-Brzezinska, D.A.; Lee, Y.J. Terrain-based navigation: Trajectory recovery from LiDAR data. In Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 760–765. [Google Scholar]
  41. Leines, M.T.; Raquet, J.F. Terrain reference navigation using sift features in lidar range-based data. In Proceedings of the 2015 International Technical Meeting of The Institute of Navigation, Dana Point, CA, USA, 26–28 January 2015; pp. 239–250. [Google Scholar]
  42. Hemann, G.; Singh, S.; Kaess, M. Long-range GPS-denied aerial inertial navigation with LIDAR localization. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 1659–1666. [Google Scholar] [CrossRef]
  43. Lewis, J. Fast Template Matching. In Proceedings of the Vision Interface 95. Canadian Image Processing and Pattern Recognition Society, Quebec City, QC, Canada, 15–19 May 1995; pp. 120–123. [Google Scholar]
Figure 1. Two images of Brazil from a visible band camera: (a) over the Amazon region, (b) over a set of buildings in an urban location.
Figure 2. Aerial vehicle equipped with a LiDAR sensor. The first reflection (gray dot) is used for surface mapping and the last reflection (green dot) is used for terrain mapping.
Figure 3. The terrain of a location of the Amazon forest as seen by the LiDAR sensor (left) and by the ALOS PALSAR (right), at the same resolution of 12.5 m. Height values increase from black to white. Obvious differences can be seen between the two images.
Figure 4. General overview of the methodology. White squares indicate processing on data until the position estimation is made. 3D point cloud domain is enclosed in green background while 2D matrix domain is enclosed in blue background.
Figure 5. Difference between square (left) and circle (right) bins at the same pixel spacing resolution.
Figure 6. A small section of contiguous bins. Height values are shown in black centered on the corresponding bin. The top value, displayed in blue, is used when constructing the surface image. The bottom value, displayed in red, is used when constructing the terrain image. An outlier influence is visible in bin 43.
Figure 7. Close-up of a region of the 3D cloud point data that shows presence of outliers above the trees. It is colored by height values, from blue to red.
Figure 8. Accumulated sum of the differences from the unfiltered surface and terrain maps.
Figure 9. Reference f and template window w.
Figure 10. Dataset location.
Figure 11. Trajectory (top), reference database (middle) and where the trajectory crosses the reference (bottom), considering the geo-reference information provided with the trajectory. From left to right: intensity, surface and terrain images. The template matching algorithm correctly estimates a position when the estimated position matches the position given by its geo-reference information over the reference database. Deviations from that position are errors.
Figure 12. The trajectory with all template windows superimposed in translucent white. Intermediary non-overlapping contours are emphasized for better viewing.
Figure 13. Matching results of intensity, surface, terrain, filtered surface and combined approaches. Square binning was utilized. NCCjoint was replicated on the other graphs for facilitated comparison.
Figure 14. Matching results of intensity, surface, terrain, filtered surface and combined approaches. Circular binning was utilized. NCCjoint was replicated on the other graphs for facilitated comparison.
Figure 15. Trajectory in yellow on intensity, square bin, experiment. Corresponding estimated positions are drawn in red.
Figure 16. Trajectory in yellow on surface, square bin, experiment. Corresponding estimated positions are drawn in red.
Figure 17. Trajectory in yellow on terrain, square bin, experiment. Corresponding estimated positions are drawn in red.
Figure 18. Trajectory in yellow on filtered surface, square bin, experiment. Corresponding estimated positions are drawn in red.
Figure 19. Trajectory in yellow on combined intensity, surface and terrain (joint), square bin, experiment. Corresponding estimated positions are drawn in red. Drawn over the terrain background.
Figure 20. Trajectory in yellow on intensity, circular bin, experiment. Corresponding estimated positions are drawn in red.
Figure 21. Trajectory in yellow on surface, circular bin, experiment. Corresponding estimated positions are drawn in red.
Figure 22. Trajectory in yellow on terrain, circular bin, experiment. Corresponding estimated positions are drawn in red.
Figure 23. Trajectory in yellow on filtered surface, circular bin, experiment. Corresponding estimated positions are drawn in red.
Figure 24. Trajectory in yellow on combined intensity, surface and terrain (joint), circular bin, experiment. Corresponding estimated positions are drawn in red. Drawn over the terrain background.
Figure 25. Flatness of the region crossed by the trajectory, illustrated by the low magnitude of the gradients. Greater gradient magnitudes are represented by whiter values.
Table 1. TRIMBLE HARRIER 68i LiDAR system specifications.
Specification                Value
LiDAR sensor                 HARRIER 68i
Wavelength                   1550 nm
Scan frequency               5 Hz to 200 Hz
Field of view                up to 30°
Pulse density requested      4 pulses/m²
Footprint                    30 cm
Flying height                600 m
Track width on the ground    494 m (avg)
Table 2. Root mean square errors of Cross-correlation and Normalized Cross-correlation, for square and circular binning, converted from pixel units to meters.
                     RMSE—Square Bin         RMSE—Circular Bin
Data                 CC         NCC          CC         NCC
Intensity            2020.17    6.94         2026.21    6.88
Surface              1223.03    7.06         1220.37    6.93
Surface Filtered     1227.56    7.04         1225.41    7.03
Terrain              1239.87    1598.42      1239.87    1441.43
Joint                1045.22    6.74         1075.28    6.43