Article

Feature-Aided RTK/LiDAR/INS Integrated Positioning System with Parallel Filters in the Ambiguity-Position-Joint Domain for Urban Environments

1. Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
2. Beijing National Research Center for Information Science and Technology, Beijing 100084, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(10), 2013; https://doi.org/10.3390/rs13102013
Submission received: 15 March 2021 / Revised: 11 May 2021 / Accepted: 17 May 2021 / Published: 20 May 2021
(This article belongs to the Special Issue GNSS for Urban Transport Applications)

Abstract:
As the modern navigation business evolves, the demand for high-precision positioning in GNSS-challenged environments increases, and an integrated system composed of Global Navigation Satellite System (GNSS)-based Real-Time Kinematic (RTK) positioning, an Inertial Navigation System (INS), Light Detection and Ranging (LiDAR), etc., is accepted as the most feasible solution to the issue. In prior-map-free situations, as the only sensor working in a global frame, RTK determines and maintains the global positioning precision of the integrated system. However, RTK performance degrades greatly in GNSS-challenged environments, and most existing integrated systems adopt a loose coupling mode, which does nothing to improve RTK itself and, thus, prevents the integrated system from further improvement. Aiming at improving RTK performance within the RTK/LiDAR/INS integrated system, we proposed an innovative integrated algorithm that utilizes RTK to register LiDAR features while integrating the pre-registered LiDAR features into RTK, and adopts parallel filters in the ambiguity-position-joint domain to weaken the effects of low satellite availability, cycle slips, and multipath. By doing so, we can improve the RTK fix rate and stability in GNSS-challenged environments. The results of theoretical analyses, simulation experiments, and a road test proved that the proposed method improved RTK performance in GNSS-challenged environments and, thus, guaranteed the global positioning precision of the whole system.

Graphical Abstract

1. Introduction

In recent years, many emerging applications, including but not limited to autonomous vehicles, unmanned delivery, and urban surveying, have been deployed in large-scale outdoor environments and rely on robust, high-precision positioning techniques. Compared with traditional applications, these innovative outdoor applications raise the bar for positioning performance, which can be summarized as follows [1]: (1) more diversified working environments, not only in open areas, but also in GNSS-challenged areas such as urban canyons; (2) higher demands on positioning precision, normally at the decimeter or even centimeter level; (3) higher demands on continuity and stability.
So far, several types of sensors have been employed for positioning and navigation purposes, but when used in a stand-alone mode, none can be regarded as omnipotent in the face of the above-mentioned requirements. As the most widely used outdoor positioning technique, the Global Navigation Satellite System (GNSS) can provide absolute positioning results in a global frame. Based on GNSS, the technology named Real-Time Kinematic (RTK) can further obtain centimeter-level positioning accuracy by using double-differenced carrier phase measurements and performs well in open areas. However, GNSS cannot be used in indoor environments, and even in outdoor GNSS-challenged environments such as urban areas, partial blockages cause low satellite availability, poor geometric distribution, and strong multipath effects, which lead to the deterioration of RTK performance, or specifically, of RTK fixing. By contrast, environment-feature-based sensors, represented by the Visual System (VS) and Light Detection and Ranging (LiDAR), can achieve better performance in complicated environments with abundant features, but perform worse in open areas. As a dead-reckoning sensor, the Inertial Navigation System (INS) is popular for its short-term high precision and applicability in any environment, but ongoing calibration is required or errors will accumulate over time, causing a rapid drift. The characteristics of the aforementioned sensors complement each other in terms of their working modes and applicable environments. Therefore, as a matter of course, the fusion of these sensors has become the natural solution to the problem of robust, high-precision positioning in diversified environments.
As a typical application of this type, state-of-the-art autonomous driving systems are generally equipped with a variety of sensors, including GNSS, INS, VS, multiple LiDARs, etc., and a prebuilt global LiDAR map is also used to provide global positions in place of RTK in tough environments, which comes at the cost of cumbersome preparatory work and large storage [2,3,4,5,6]. However, this solution is not suitable for all applications. For applications such as unmanned delivery, urban surveying, emergency rescue, and unmanned ports, owing to factors such as the preliminary mapping burden and unexplored or changing environments, such prior-map-based schemes may be unwelcome or even entirely inapplicable. For these applications, prebuilt-map-free algorithms are desired.
As for prebuilt-map-free algorithms, since GNSS RTK performs poorly in GNSS-challenged environments, one intuitive idea is to apply GNSS-denied methods, among which the current favorites are odometry-based methods. As a typical example, LiDAR Odometry and Mapping (LOAM) [7] ranks ahead of other algorithms on the KITTI benchmark [8]; its key idea is to adopt high-frequency odometry and low-frequency point cloud registration to separate localization from mapping. Other algorithms, such as V-LOAM [9] and LiDAR-Monocular Visual Odometry (LIMO) [10], realize odometry by fusing LiDAR data with VS to further enhance motion estimation. These odometry-based algorithms mainly focus on keeping the drift of the accumulative errors low and on balancing the performance and efficiency of point cloud matching and registration, and they achieve some degree of success, but they still drift slowly over time when applied in large-scale environments, thus requiring loop closure for global calibration. Another prebuilt-map-free approach is to continue using GNSS RTK in the system [11,12]. To remedy the low coverage of RTK fix results in GNSS-challenged environments, Tohoku University allowed float RTK results at the decimeter or even meter level to be integrated in a loosely coupled RTK/LiDAR/INS Simultaneous Localization And Mapping (SLAM) system [11], which, as a result, leads to a decline in precision.
The cause of this decline in precision stems from the fact that, as the only sensor in charge of global precision, RTK provides positioning results unilaterally for sensor calibration and LiDAR mapping in loosely coupled mode, with no gain in its own performance. Only by involving other sensors in the RTK calculation, in other words, with a tightly coupled framework, can we improve the performance of RTK. The main advantage of tight coupling is that it allows fewer visible satellites to be used than in stand-alone mode [13] and, thus, weakens the effect of low satellite availability in GNSS-challenged environments. However, most tightly coupled systems have been designed for single-point positioning with code pseudoranges [14,15,16], not for high-precision use. For high-precision applications, RTK is still a necessity, and so far, most systems only tightly couple it with INS [17,18]. Nevertheless, the biases of the INS are time-varying values that require continual calibration; thus, if there is a long period of low GNSS availability and poor measurements, the RTK fix rate is unlikely to increase, and as the worst possible consequence, the positioning results may diverge from the true trajectory due to erroneous calibration.
For cases without a prebuilt high-precision LiDAR map, if positioning results free of the cumulative error problem are desired, RTK is still the best means of maintaining the global positioning precision of the integrated system. However, the performance of RTK degrades greatly in GNSS-challenged environments, and most of the existing RTK/LiDAR/INS integrated systems adopt loose coupling mode, which does nothing to improve RTK itself and therefore limits the further improvement of the integrated system. Therefore, in this paper, aiming at improving the RTK fix rate and stability of the integrated system in GNSS-challenged environments, we proposed a new RTK/LiDAR/INS fusion positioning method. The main target of the method is to weaken the negative effects of low satellite availability, cycle slips, and poor GNSS measurement quality on RTK with the aid of sensor fusion.
The RTK/LiDAR/INS fusion positioning method proposed in this paper is an upgraded version of our previous work [19,20], whereby we proposed a LiDAR-feature-aided RTK in the ambiguity domain. In our previous work, we followed the idea of tight coupling, adopted pre-registered LiDAR features as pseudo satellites, integrated LiDAR feature measurements with RTK ones, and employed the Extended Kalman Filter (EKF) to obtain a more accurate ambiguity search center for Ambiguity Resolution (AR). By doing this, we could improve the RTK fix rate in situations with low satellite availability and deteriorative GNSS measurement quality. The advantages of LiDAR-aided RTK are as follows:
  • Absolutely complementary working environments;
  • Unlike INS, LiDAR does not need to calibrate biases in the second derivative, namely accelerometer biases, and, thus, has a relatively low divergence speed;
  • Features are more likely to be extracted from the LiDAR point cloud at the viewpoints where some satellites are unobservable, which plays a critical role in compensating satellite geometry distribution and, thus, enhances the geometric constraint of the positioning problem.
The method was demonstrated to achieve improvement in both theoretical analysis and simulation experiments. However, when applied to the real urban scenario, it presented several deficiencies as follows:
  • The plane features were denoted with the form of a normal vector and distance, which posed difficulties and hazards in establishing error models and filtering due to the unit vector constraint;
  • As the positioning region expanded, the number of the features in LiDAR map also increased, which acted as a burden on the EKF process, and the error models of both the LiDAR feature global parameters and observations were not well considered;
  • Cycle slips and changes in the number of visible satellites occurred more frequently, and the multipath errors of the code pseudoranges were greater than expected, which prevented the ambiguities from converging rapidly.
Therefore, to overcome the above deficiencies and further improve the stability of RTK in GNSS-challenged environments, we further explored the idea of LiDAR-feature-aided RTK and developed an improved algorithm with the following modifications:
  • To bypass the unit vector constraint, we adopted the coordinates of the foot of the perpendicular from the origin to the plane (the foothold point) to denote a plane, which can represent a plane feature in a certain frame without any constraint;
  • The LiDAR feature global parameters were removed from the EKF estimator and, instead, were updated after each RTK calculation based on the current RTK positioning result, and the error models of both LiDAR feature global parameters and observations were derived based on RTK results and LiDAR measurements as well;
  • To solve the third deficiency, we designed a parallel-filter scheme for RTK ambiguity resolution. In the scheme, in addition to the legacy EKF in the ambiguity domain, we added a parallel Particle Filter (PF) with an AR method in the position domain, namely the Mean-Squared Residual (MSR) [21], to strengthen the algorithm’s resistance to cycle slips and the multipath effect.
The significant modifications in this upgraded method lie in the employment of the particle filter with the MSR-based AR method in the position domain and the parallel-filter scheme. The advantages, motivations, and basic idea of these two modifications are as follows:
  • The MSR-based AR method was first proposed in [21] as an instantaneous AR method, which solves the RTK AR problem in the position domain with carrier phases only and is insensitive to cycle slips and code multipath noises. In this paper, we adopted the particle filter to manage the candidate points of MSR-based RTK in the position domain, which can expand the instantaneous method to a multi-epoch one and further improve the resistance to the multipath effect;
  • The parallel-filter scheme was designed to take the advantages while remedying the disadvantages of both AR methods in the ambiguity and position domains and further improve the output stability of the LiDAR-feature-aided RTK method. In other words, in order to reduce the computational burden, the state space of the PF only consists of the position domain, and we allocated the INS biases’ calibration tasks and the PF search space initialization to the legacy EKF algorithm. Meanwhile, we counted on the PF with the MSR method to weaken the negative effects caused by cycle slips and multipath effects in urban environments. In detail, after initialization based on the EKF-based algorithm, the PF will output a Minimum-Mean-Squared-Error (MMSE)-based positioning result and return a float ambiguity reference each epoch to help the EKF algorithm search the integer ambiguities and obtain a higher precision positioning output. In addition, to improve the output stability, a result assessment and selection submodule were employed to decide the PF MMSE result or the fix result with the higher precision of the EKF algorithm to be output and used for LiDAR feature registration.
The remainder of the paper is organized as follows. Firstly, the general framework of the proposed RTK/LiDAR/INS integrated system is introduced. Then, the measurement models and the detailed procedure including two parallel filters in the ambiguity and position domains, LiDAR feature registration, and the result selection are addressed in sequence. Next, several theoretical analyses are presented to illustrate the feasibility and improvement of the method. Finally, the results of the simulation platform experiments and a road test are given to demonstrate the performance of the proposed system.

2. System General Framework

The target of the proposed algorithm was to improve the RTK fix rate in GNSS-challenged environments so as to maintain the global accuracy without a prebuilt map. To realize this target, in our system, RTK was not only used for sensor calibration and LiDAR mapping, but also, innovatively, accepted assistance from LiDAR by tightly integrating LiDAR feature observations into the RTK calculation. Besides, to weaken the effects of cycle slips and multipath in urban environments, a parallel-filter scheme in the ambiguity-position-joint domain was adopted. Figure 1 shows the general framework of our RTK/LiDAR/INS integrated system, which consisted of four main modules: the INS dead-reckoning module, the LiDAR processing module, the RTK/LiDAR/INS parallel-filter-based integration module, and the LiDAR feature registration module.
The core of the system was the RTK/LiDAR/INS parallel-filter-based integration module, where an EKF-based algorithm in the ambiguity domain and a PF-based algorithm in the position domain were operated simultaneously. Both submodules integrated LiDAR features into the RTK calculation and output positioning results, but aimed at different purposes. The main purpose of the PF-based integration submodule was to maintain the stability of the system. In this submodule, the PF with the MSR objective function was applied, which was insensitive to cycle slips and multipath effects on code pseudoranges, and the MMSE results were output to avoid instantaneous outliers. Since the PF requires large sample sets, to lighten the burden of filtering, the PF state space lay in the position domain only, and thus, the task of estimating the INS biases fell onto the EKF-based integration submodule. Moreover, the PF initial sample set was derived from the results of the EKF submodule. Meanwhile, after initialization, the PF submodule returned more accurate MMSE float ambiguities to the EKF module. By doing so, the EKF-based algorithm can achieve a higher AR success rate and output higher precision positioning results with fixed integer ambiguities. Then, the MMSE positioning results and the fix/float positioning results of the two submodules were assessed and selected before the final output in the result assessment and selection submodule. The detailed procedure of this core module will be further addressed in Section 4.
In the proposed system, the data output rate, sampling instant, and process complexity were different from sensor to sensor. Normally, the output rate of a multi-frequency GNSS receiver ranges from 1 to 10 Hz, while the ones of INS and LiDAR can reach up to 100 Hz and 10~20 Hz, respectively. The main task of the INS dead-reckoning module was to derive the state change between two GNSS epochs or within a LiDAR frame by means of accumulation and interpolation. Based on INS interpolation results, we can calibrate the distortion of the LiDAR point cloud that is caused by the scanning delay [22]. For the calibrated point cloud, the Random Sample Consensus (RANSAC)-based plane extraction process [23] was conducted, and the extracted plane feature measurements were then matched with the features pre-registered in the map.
At each GNSS epoch, we can utilize the current RTK result to update the global parameters of the LiDAR features in the LiDAR feature registration module, by which we realized a system that positions while mapping.

3. Mathematical Models

The models of the sensors are normally established in their own coordinate frames, and integrating them involves issues such as coordinate system transformation. Therefore, before giving the equations of the models, we specify some notation to indicate the different coordinate frames, among which:
E represents the global frame, e.g., Earth-Centered Earth-Fixed (ECEF) or East-North-Up (ENU) coordinate;
B represents the body frame;
I represents the inertial frame;
L represents the LiDAR frame.
Here, we supposed that the INS frame was the same as the body frame and the transformation relationship between the body frame and LiDAR frame was rigorously calibrated in advance. Furthermore, the lever arm effects among the sensors were also well calibrated, so they were not considered in the following equations for description convenience.
In our RTK/LiDAR/INS integrated system, the observable landmarks included the moving GNSS satellites and the fixed LiDAR features and made up a diverse fusion map. The global coordinates of satellite landmarks can be derived from the broadcast ephemeris, so they were known quantities, while the ones of LiDAR landmarks were unknown without a prebuilt map. In contrast, for the observation of the landmarks, the ranges of GNSS carrier phases were ambiguous due to the unknown integer wavelength numbers, namely ambiguities, between the satellites and receiver, while the ones of LiDAR landmarks were accurate.
Therefore, the system measurement models can be divided into three parts, namely GNSS RTK, LiDAR, and INS, which are introduced below in sequence, and the parameters to be optimized included position, velocity, attitude, INS biases, ambiguities, and LiDAR landmark global parameters.

3.1. GNSS RTK Measurement Models

As for the GNSS part, the distance measurements included pseudoranges and carrier phases. Owing to the characteristics of the GNSS receiver phase discriminator, the precision of carrier phases was much higher than that of the pseudoranges, normally at the millimeter or centimeter level, and was less influenced by the multipath effect, thus making RTK a very high-precision positioning technology widely used in the surveying field. However, as mentioned previously, the ranges provided by carrier phase measurements were ambiguous, because only fractional phases and cycle counts from the initial acquisition can be obtained by the phase discriminators, leaving the integer ambiguities to be estimated. Besides, both the pseudorange and carrier phase measurements contained many errors, which can be modeled as follows [24]:
$$P_r^i = \rho_r^i + c \cdot dt_r - c \cdot dT^i + T_r^i + I_r^i + E_r^i + MP_{r,P}^i + \varepsilon_{r,P}^i$$
$$\lambda \Phi_r^i = \rho_r^i + c \cdot dt_r - c \cdot dT^i + T_r^i - I_r^i + E_r^i + \lambda N_r^i + MP_{r,\Phi}^i + \varepsilon_{r,\Phi}^i$$
where P r i and Φ r i represent the pseudorange and carrier phase measurements between satellites i and rover receiver r, respectively, λ represents the carrier phase wavelength, ρ r i represents the true geometric distance, c represents the speed of light, d t r and d T i represent the receiver and satellite clock biases, respectively, T r i and I r i represent the tropospheric and ionospheric errors, respectively, E r i represents the ephemeris error, N r i represents the ambiguity, M P r , P i and M P r , Φ i represent the pseudorange and carrier phase multipath errors, respectively, and ε r , P i and ε r , Φ i represent the other errors that cannot be modeled.
RTK can eliminate most spatially related errors such as satellite clock errors, atmospheric errors, and ephemeris errors by differencing the measurements of rover r and base b and further eliminate the receiver clock errors by differencing between two satellites, forming the Double-Differenced (DD) measurement models as follows:
$$\Delta P_{rb}^{ij} = \Delta\rho_{rb}^{ij} + \Delta MP_{rb,P}^{ij} + \Delta\varepsilon_{rb,P}^{ij}$$
$$\lambda\Delta\Phi_{rb}^{ij} = \Delta\rho_{rb}^{ij} + \lambda\Delta N_{rb}^{ij} + \Delta MP_{rb,\Phi}^{ij} + \Delta\varepsilon_{rb,\Phi}^{ij}$$
where $\Delta(\cdot)_{rb}^{ij} = \left((\cdot)_r^i - (\cdot)_b^i\right) - \left((\cdot)_r^j - (\cdot)_b^j\right)$.
As we can infer from the above differencing, the positioning results that RTK provides were relative positions between a rover and a base station. Normally, the accurate absolute position of the base station was obtained in advance as a reference with which we can derive the global coordinates of the rover in real time accordingly.
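As a concrete illustration of the DD models above, the following minimal Python sketch (not the authors' implementation; the array layout and the choice of the reference satellite are assumptions) forms DD measurements from rover and base observations of a common satellite set:

```python
import numpy as np

def double_difference(obs_rover, obs_base, ref=0):
    """Form double-differenced (DD) measurements.

    obs_rover, obs_base : per-satellite measurements (pseudorange [m]
    or carrier phase [cycles]) for a common set of M satellites.
    ref : index of the reference satellite.
    Returns M-1 DD values with the reference satellite removed.
    """
    # Between-receiver single differences cancel satellite clock errors and,
    # on short baselines, most atmospheric and ephemeris errors.
    sd = np.asarray(obs_rover) - np.asarray(obs_base)
    # Between-satellite differences then cancel the receiver clock bias.
    return np.delete(sd - sd[ref], ref)

# Synthetic example with M = 5 satellites (values in meters).
rng = np.random.default_rng(0)
rho_rover = np.array([21.1e6, 22.3e6, 20.9e6, 23.0e6, 21.7e6])
rho_base = rho_rover + rng.normal(0.0, 1.0, 5)
print(double_difference(rho_rover, rho_base))
```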

3.2. LiDAR Plane Feature Measurement Models

As for the LiDAR part, the measurements that we employed were the parameters of the extracted features of the current point cloud in the LiDAR frame, and thus, the corresponding measurement models were the feature parameter transformation equations from the LiDAR frame to the global frame whose concrete expression depended on the feature types we used. The common geometric features included point, line, and plane features. In this paper, the system adopted the plane features that were easily found, extracted, and repeatedly observed in urban environments even with relatively sparse point clouds provided by 16-line 3D LiDARs. Generally, the plane feature is defined with the normal vector n and the distance d between the origin and the plane, and correspondingly, regardless of the measurement errors, the measurement model can be expressed as follows:
$$\mathbf{n}_{l,L} = C_E^L \mathbf{n}_{l,E}$$
$$d_{l,L} = d_{l,E} - \mathbf{r}_E^T \mathbf{n}_{l,E}$$
where $\mathbf{n}_{l,L}$, $d_{l,L}$ and $\mathbf{n}_{l,E}$, $d_{l,E}$ represent the parameters of plane $l$ in the LiDAR (L) and global (E) frames, respectively, $\mathbf{r}_E$ represents the user position in the global frame, and $C_E^L$ represents the rotation matrix from the E frame to the L frame.
However, when tightly coupled, due to the unit vector constraint of the normal vectors, calculating the measurement residuals, filtering the states, and modeling the errors of the normal vectors become hard to handle. To settle this problem, we replaced the traditional plane parameters with a 3D point coordinate, namely the foothold point of the perpendicular from the origin to the plane; it can be obtained from the original parameter form as $d\cdot\mathbf{n}$ and identifies a plane uniquely in a certain frame. The relationship between the footholds in the L frame and the E frame is shown in Figure 2.
We denote the foothold coordinates of plane l in the LiDAR and global frames as y l , L and p l , E , respectively, and of note, these two coordinates point to two different points in space, of which the latter is fixed, while the former changes as the user position and attitude change. Then, according to the figure, the measurement models of the plane features can be derived as:
$$\mathbf{y}_{l,L} = C_E^L\left(\mathbf{p}_{l,E} - \frac{\mathbf{r}_E^T \mathbf{p}_{l,E}}{\left\|\mathbf{p}_{l,E}\right\|^2}\mathbf{p}_{l,E}\right) + \boldsymbol{\varepsilon}_L^l$$
where ε L l represents the LiDAR feature measurement noise vector.
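The foothold measurement model can be sketched directly in code. The snippet below is a minimal illustration under the paper's notation (the function name and the aligned-frame example are assumptions, not the authors' code): it predicts the LiDAR-frame foothold from a global foothold, the user position, and the rotation $C_E^L$.

```python
import numpy as np

def predict_foothold_L(p_E, r_E, C_EL):
    """Predict the LiDAR-frame foothold of a plane from its global foothold.

    p_E  : (3,) global foothold (foot of the perpendicular from the E origin).
    r_E  : (3,) user position in the global frame.
    C_EL : (3, 3) rotation matrix from the E frame to the L frame.
    """
    p_E = np.asarray(p_E, dtype=float)
    scale = 1.0 - (r_E @ p_E) / (p_E @ p_E)  # shortens d_E to d_L = d_E - r^T n
    return C_EL @ (scale * p_E)

# Example: a wall 30 m north of the E origin, user 10 m north of the origin.
p_E = np.array([0.0, 30.0, 0.0])
r_E = np.array([0.0, 10.0, 0.0])
C_EL = np.eye(3)                      # LiDAR frame aligned with E for simplicity
print(predict_foothold_L(p_E, r_E, C_EL))   # -> [0, 20, 0]
```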

3.3. INS Measurement Models

As a dead-reckoning sensor, the main duty of INS is to solve the precise interpolation and dynamic state prediction problems between two GNSS epochs. The INS measurement models consist of accelerometer and gyro outputs and can be modeled as [25]:
$$\tilde{\mathbf{f}}_{IB}^B = \mathbf{f}_{IB}^B + \mathbf{b}_a + \boldsymbol{\varepsilon}_a$$
$$\tilde{\boldsymbol{\omega}}_{IB}^B = \boldsymbol{\omega}_{IB}^B + \mathbf{b}_g + \boldsymbol{\varepsilon}_g$$
where f ˜ I B B and ω ˜ I B B represent the INS specific force and angular rate outputs, respectively, f I B B and ω I B B represent the true values, respectively, b a and b g represent the biases of the accelerometer and gyro, respectively, and ε a and ε g represent other errors. The relationship between the true values of specific force and acceleration can be expressed as:
$$\mathbf{f}_{IB}^B = \mathbf{a}_{IB}^B - \mathbf{g}_{IB}^B$$
where a I B B represents the acceleration true value and g I B B represents the gravitational force.

3.4. System Total Measurement and State Transition Models

Above all, the total measurement equations can be concluded as:
$$\mathbf{z} = h(\mathbf{x}) + \boldsymbol{\varepsilon}$$
where ε represents the measurement noise, and the measurement vector is:
$$\mathbf{z} = \left[\Delta\mathbf{P}^T,\ \lambda\Delta\boldsymbol{\Phi}^T,\ \mathbf{Y}_L^T\right]^T$$
with:
$$\Delta\mathbf{P} = \left[\Delta P_{rb}^{21},\ \ldots,\ \Delta P_{rb}^{M1}\right]^T$$
$$\lambda\Delta\boldsymbol{\Phi} = \left[\lambda\Delta\Phi_{rb}^{21},\ \ldots,\ \lambda\Delta\Phi_{rb}^{M1}\right]^T$$
$$\mathbf{Y}_L = \left[\mathbf{y}_{1,L}^T,\ \ldots,\ \mathbf{y}_{J,L}^T\right]^T$$
where M and J represent the numbers of satellites and plane features, respectively, and x is the system state vector to be estimated with:
$$\mathbf{x} = \left[\mathbf{r}_E^T,\ \mathbf{v}_E^T,\ \boldsymbol{\Psi}^T,\ \mathbf{b}_a^T,\ \mathbf{b}_g^T,\ \Delta\mathbf{N}^T\right]^T$$
where v E represents the velocity in the global frame, Ψ represents the attitude, and Δ N represents the DD ambiguity vector.
In our system, the process rate of the RTK/LiDAR/INS integrated positioning module was 1 Hz, and the position, velocity, and attitude variations between two consecutive epochs were derived by accumulating the INS accelerometer and gyroscope data. Thus, the total state transition equation can be built from the traditional INS state transition equations with the ambiguity equation appended, which is time-invariant if no cycle slip appears, as follows:
$$\dot{\mathbf{x}} = f(\mathbf{x}) + \boldsymbol{\eta}$$
where η represents the system noise, and the detailed expression of f ( x ) is:
$$\dot{\mathbf{r}}_E = \mathbf{v}_E$$
$$\dot{\mathbf{v}}_E = C_B^E\mathbf{f}_{IB}^B + \mathbf{g}_E - 2\Omega_{IE}^E\mathbf{v}_E$$
$$\dot{C}_B^E = C_B^E\left(\Omega_{IB}^B - \Omega_{IE}^B\right)$$
$$\Delta\dot{\mathbf{N}} = \mathbf{0}_{(M-1)\times 1}$$
where $\mathbf{g}_E$ represents the gravity vector in the global frame, $O$ represents a zero matrix, and $\Omega_{bc}^a$ represents the skew-symmetric matrix of the angular velocity $\boldsymbol{\omega}_{bc}^a = \left[\omega_1, \omega_2, \omega_3\right]^T$:
$$\Omega_{bc}^a \triangleq \left[\boldsymbol{\omega}_{bc}^a \times\right] = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}$$
and Ω I B B = ω I B B × , Ω I E B = ω I E B × , Ω I E E = ω I E E × where ω I E B and ω I E E represent the rotational angular rate of the Earth in the B frame and E frame, respectively.
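For illustration, the following sketch implements the skew-symmetric operator defined above and a simplified forward-Euler propagation of the state transition model. The Earth-rotation terms $\Omega_{IE}$ are deliberately omitted for readability, so this is a simplification of the full model rather than the authors' mechanization.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w x] of a 3-vector, as defined above."""
    w1, w2, w3 = w
    return np.array([[0.0, -w3, w2],
                     [w3, 0.0, -w1],
                     [-w2, w1, 0.0]])

def ins_step(r_E, v_E, C_BE, f_B, w_IB_B, g_E, dt):
    """One forward-Euler INS mechanization step.

    Simplified sketch: the Earth-rotation terms (Omega_IE) of the full model
    are omitted here; they matter for long integration times.
    """
    a_E = C_BE @ f_B + g_E                           # specific force in E plus gravity
    r_E = r_E + v_E * dt
    v_E = v_E + a_E * dt
    C_BE = C_BE @ (np.eye(3) + skew(w_IB_B) * dt)    # first-order attitude update
    u, _, vt = np.linalg.svd(C_BE)                   # re-orthonormalize the rotation
    return r_E, v_E, u @ vt

# 100 Hz propagation for one second of straight, level driving.
r, v, C = np.zeros(3), np.array([5.0, 0.0, 0.0]), np.eye(3)
g = np.array([0.0, 0.0, -9.81])
for _ in range(100):
    r, v, C = ins_step(r, v, C, f_B=np.array([0.0, 0.0, 9.81]),
                       w_IB_B=np.zeros(3), g_E=g, dt=0.01)
print(r)   # ~[5, 0, 0]
```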

4. Feature-Aided RTK/LiDAR/INS Integrated Positioning Algorithm with Parallel Filters in the Ambiguity-Position-Joint Domain

4.1. Basic Idea

Aiming at high-precision positioning in outdoor, large-scale, diversified environments without a prior map, we proposed an innovative RTK/LiDAR/INS integrated system. In the system, RTK was still used for sensor calibration and LiDAR mapping, but, revolutionarily, it tightly integrated LiDAR feature measurements to settle the problem of low satellite availability in GNSS-challenged areas. Besides, to further improve the RTK fix rate and the resistance to anomalies such as cycle slips and large multipath errors on code pseudoranges, we adopted a parallel-filter scheme with ambiguity resolution in the ambiguity-position-joint domain.
Ambiguity Resolution (AR) is the key to RTK performance. According to the search space used, AR methods fall into two categories, namely AR methods in the ambiguity domain and in the position domain. For the former, both filtering and searching are performed in the ambiguity space; the advantage is that the search space is relatively small, but the ambiguity filtering is easily disturbed by ambiguity changes caused by cycle slips. On the contrary, the AR methods in the position domain are insensitive to cycle slips, but subject to a large search space. Therefore, for these two kinds of methods, can we retain their strengths and make up for their defects, for example, by combining them?
Taking the above ideas into consideration, we proposed a feature-aided RTK/LiDAR/ INS integrated positioning algorithm with parallel filters in the ambiguity-position-joint domain. The basic structure of the algorithm is displayed in Figure 1 as the core module, namely the RTK/LiDAR/INS parallel-filter-based integration module that consists of three submodules. The task assignment of these submodules is designed as follows:
  • The EKF-based integration submodule implements RTK/LiDAR/INS integrated positioning in the ambiguity domain, estimates all the states of the system, including INS accelerometer and gyroscope biases, and determines the initial search space for the PF method;
  • The PF-based integration submodule solves the AR problem in the position domain with the MSR objective function, outputs positioning results based on PF MMSE criterion, and gives reference float ambiguities as feedbacks to the EKF algorithm to revise its results in harsh situations;
  • Both revised EKF positioning results and PF MMSE positioning results are sent to the result assessment and selection submodule for output selection.
The detailed procedure of the algorithm is given as follows.

4.2. Preparation

Generally, for the initialization stage, the situation may not be challenging. For example, in an open area or static situation, RTK can achieve fix results easily in a stand-alone mode and be used for system and LiDAR feature map initialization. After the initialization, the state variation between two epochs can be predicted by the INS dead-reckoning results. Since the focus of our method was to improve the RTK performance by integrating with LiDAR measurements, both state prediction and LiDAR measurements should be synchronized to the GNSS observation epoch. Here, we denote the final output state vector of the last epoch $k-1$ as $\tilde{\mathbf{x}}_{k-1|k-1}$, with:
$$\tilde{\mathbf{x}}_{k-1|k-1} = \left[\tilde{\mathbf{r}}_{k-1|k-1}^{E\,T},\ \tilde{\mathbf{v}}_{k-1|k-1}^{E\,T},\ \tilde{\boldsymbol{\Psi}}_{k-1|k-1}^T,\ \tilde{\mathbf{b}}_{a,k-1|k-1}^T,\ \tilde{\mathbf{b}}_{g,k-1|k-1}^T\right]^T,$$
and the position, velocity, and attitude variations derived by the INS as $\Delta\mathbf{r}_{k|k-1}^E$, $\Delta\mathbf{v}_{k|k-1}^E$, and $\Delta\boldsymbol{\Psi}_{k|k-1}$, respectively. Then, the predicted state of epoch $k$ can be obtained as:
$$\hat{\mathbf{x}}_{k|k-1} = \begin{bmatrix} \hat{\mathbf{r}}_{k|k-1}^E \\ \hat{\mathbf{v}}_{k|k-1}^E \\ \hat{\boldsymbol{\Psi}}_{k|k-1} \\ \hat{\mathbf{b}}_{a,k|k-1} \\ \hat{\mathbf{b}}_{g,k|k-1} \\ \Delta\hat{\mathbf{N}}_{k|k-1} \end{bmatrix} = \tilde{\mathbf{x}}_{k-1|k-1} + \begin{bmatrix} \Delta\mathbf{r}_{k|k-1}^E \\ \Delta\mathbf{v}_{k|k-1}^E \\ \Delta\boldsymbol{\Psi}_{k|k-1} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{(M-1)\times 1} \end{bmatrix}$$
The Random Sample Consensus (RANSAC) [23] algorithm was adopted to extract plane features from the LiDAR point clouds, and the plane feature matching process was conducted subsequently. For an extracted plane $l$ with LiDAR-frame foothold $\mathbf{y}_k^{l,L}$, we search the LiDAR map for a plane $m$ with global foothold $\mathbf{p}^{m,E}$ that satisfies the following condition:
$$\left\|\hat{C}_{E,k|k-1}^L\left(\mathbf{p}^{m,E} - \frac{\hat{\mathbf{r}}_{k|k-1}^{E\,T}\mathbf{p}^{m,E}}{\left\|\mathbf{p}^{m,E}\right\|^2}\mathbf{p}^{m,E}\right) - \mathbf{y}_k^{l,L}\right\| < \xi_p$$
where $\hat{C}_{E,k|k-1}^L$ represents the rotation matrix converted from $\hat{\boldsymbol{\Psi}}_{k|k-1}$ and $\xi_p$ represents the threshold for plane matching; if the condition is met, plane $m$ is regarded as the matched map plane of the measurement $\mathbf{y}_k^{l,L}$.
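A minimal sketch of this matching step (the data structures, the threshold value, and the function name are assumptions) could look as follows, reusing the foothold prediction of the measurement model:

```python
import numpy as np

def match_plane(y_L, map_footholds_E, r_pred_E, C_EL_pred, xi_p=0.5):
    """Match an extracted LiDAR plane to a registered plane in the map.

    y_L             : (3,) foothold of the extracted plane in the L frame.
    map_footholds_E : (N, 3) global footholds of registered planes.
    r_pred_E, C_EL_pred : INS-predicted position and E->L rotation.
    xi_p            : matching threshold in meters (assumed value).
    Returns the index of the matched map plane, or None.
    """
    best_idx, best_dist = None, xi_p
    for m, p_E in enumerate(map_footholds_E):
        scale = 1.0 - (r_pred_E @ p_E) / (p_E @ p_E)
        y_pred = C_EL_pred @ (scale * p_E)        # predicted L-frame foothold
        dist = np.linalg.norm(y_pred - y_L)
        if dist < best_dist:
            best_idx, best_dist = m, dist
    return best_idx

# Tiny example with two map planes.
map_planes = np.array([[0.0, 30.0, 0.0], [25.0, 0.0, 0.0]])
y_meas = np.array([0.05, 19.97, 0.01])            # noisy observation of plane 0
print(match_plane(y_meas, map_planes, np.array([0.0, 10.0, 0.0]), np.eye(3)))
```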

4.3. EKF-Based Integration in the Ambiguity Domain

The integration in the ambiguity domain was implemented based on the EKF, whose task was to provide the RTK fix results and estimate the INS biases for calibration. The state space included all the state parameters to be estimated. The measurements included DD code pseudoranges, DD carrier phases, and LiDAR plane feature measurements. We denote the measurement vector at epoch $k$ as $\mathbf{z}_k$:
$$\mathbf{z}_k = \left[\Delta\mathbf{P}_k^T,\ \lambda\Delta\boldsymbol{\Phi}_k^T,\ \mathbf{Y}_k^{L\,T}\right]^T$$
The procedure of EKF-based RTK/LiDAR/INS integrated positioning in the ambiguity domain is given below.
The predicted state of epoch k was derived with (13), and the covariance matrix of the predicted state can be calculated with the following equation [25]:
$$P_{k|k-1} = \left(I + dt\cdot F_k\right)\left(P_{k-1|k-1} + \frac{1}{2}Q_k\right)\left(I + dt\cdot F_k\right)^T + \frac{1}{2}Q_k$$
where $dt$ represents the time interval between two epochs, $I$ represents the identity matrix, $P_{k-1|k-1}$ represents the covariance matrix of the state $\hat{\mathbf{x}}_{k-1|k-1}$ at the last epoch, $Q_k$ represents the system noise covariance matrix, and $F_k$ represents the state transition matrix, whose expression is given below:
$$F_k = \begin{bmatrix} \Psi_k & O_{15\times(M-1)} \\ O_{(M-1)\times 15} & O_{(M-1)\times(M-1)} \end{bmatrix}$$
where:
$$\Psi_k = \begin{bmatrix} O_{3\times3} & I_3 & O_{3\times3} & O_{3\times3} & O_{3\times3} \\ O_{3\times3} & -2\Omega_{IE}^E & \left[\mathbf{f}^E\times\right] & \hat{C}_{B,k|k-1}^E & O_{3\times3} \\ O_{3\times3} & O_{3\times3} & -\Omega_{IE}^E & O_{3\times3} & \hat{C}_{B,k|k-1}^E \\ O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \\ O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} & O_{3\times3} \end{bmatrix}$$
Then, we calculated the Kalman gain with the following equation [26]:
$$K_k = P_{k|k-1}H_k^T\left(H_k P_{k|k-1}H_k^T + R_k\right)^{-1}$$
where R k represents the measurement noise covariance matrix and H k represents the measurement matrix with the detailed form of:
$$H_k = \left.\frac{\partial h(\mathbf{x})}{\partial\mathbf{x}}\right|_{\mathbf{x}=\hat{\mathbf{x}}_{k|k-1}} = \begin{bmatrix} G_k & O_{(M-1)\times3} & O_{(M-1)\times3} & O_{(M-1)\times6} & O_{(M-1)\times(M-1)} \\ G_k & O_{(M-1)\times3} & O_{(M-1)\times3} & O_{(M-1)\times6} & \lambda I_{M-1} \\ U_k & O_{3J\times3} & V_k & O_{3J\times6} & O_{3J\times(M-1)} \end{bmatrix}$$
where:
$$G_k = \begin{bmatrix} \left(\mathbf{e}_k^{2,E} - \mathbf{e}_k^{1,E}\right)^T \\ \vdots \\ \left(\mathbf{e}_k^{M,E} - \mathbf{e}_k^{1,E}\right)^T \end{bmatrix},\quad U_k = \begin{bmatrix} U_{k,1} \\ \vdots \\ U_{k,J} \end{bmatrix},\quad V_k = \begin{bmatrix} V_{k,1} \\ \vdots \\ V_{k,J} \end{bmatrix}$$
$$U_{k,j} = -\hat{C}_{E,k|k-1}^L\frac{\mathbf{p}^{j,E}\mathbf{p}^{j,E\,T}}{\left\|\mathbf{p}^{j,E}\right\|^2},\quad V_{k,j} = \left[\hat{C}_{E,k|k-1}^L\left(\mathbf{p}^{j,E} - \frac{\hat{\mathbf{r}}_{k|k-1}^{E\,T}\mathbf{p}^{j,E}}{\left\|\mathbf{p}^{j,E}\right\|^2}\mathbf{p}^{j,E}\right)\times\right]$$
where e k i , E represents the unit Line Of Sight (LOS) vector from the receiver to satellite i:
$$\mathbf{e}_k^{i,E} = \frac{\mathbf{r}_k^{i,E} - \hat{\mathbf{r}}_{k|k-1}^E}{\left\|\mathbf{r}_k^{i,E} - \hat{\mathbf{r}}_{k|k-1}^E\right\|}$$
where r k i , E represents the global coordinate of satellite i.
Next, we updated the state vector and its covariance matrix with the following equations:
$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + K_k\left(\mathbf{z}_k - h\left(\hat{\mathbf{x}}_{k|k-1}\right)\right)$$
$$P_{k|k} = \left(I - K_k H_k\right)P_{k|k-1}$$
where:
$$\hat{\mathbf{x}}_{k|k} = \left[\hat{\mathbf{r}}_{k|k}^{E\,T},\ \hat{\mathbf{v}}_{k|k}^{E\,T},\ \hat{\boldsymbol{\Psi}}_{k|k}^T,\ \hat{\mathbf{b}}_{a,k|k}^T,\ \hat{\mathbf{b}}_{g,k|k}^T,\ \Delta\hat{\mathbf{N}}_{k|k}^T\right]^T$$
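The prediction and update steps above are the standard EKF equations; the distinctive point of this submodule is only that DD pseudoranges, DD carrier phases, and LiDAR plane footholds are stacked into one measurement vector. A generic sketch of the update step (not the authors' code; the caller is assumed to supply $h(\mathbf{x})$, $H_k$, and $R_k$) is given below with a toy two-state example:

```python
import numpy as np

def ekf_step(x_pred, P_pred, z, h_fun, H, R):
    """One EKF measurement update with a stacked GNSS/LiDAR measurement vector.

    x_pred, P_pred : predicted state and covariance.
    z              : stacked [DD pseudoranges, lambda*DD phases, LiDAR footholds].
    h_fun          : callable returning the predicted measurements h(x).
    H, R           : measurement Jacobian and noise covariance.
    """
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_upd = x_pred + K @ (z - h_fun(x_pred))     # state update
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd

# Toy 2-state example (position, ambiguity) with two scalar measurements.
x0 = np.array([0.0, 0.0])
P0 = np.eye(2) * 10.0
H = np.array([[1.0, 0.0],     # pseudorange-like: observes position only
              [1.0, 1.0]])    # phase-like: observes position plus ambiguity
R = np.diag([1.0, 1e-4])
z = np.array([2.1, 5.0])
x1, P1 = ekf_step(x0, P0, z, lambda x: H @ x, H, R)
print(x1)
```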
As mentioned previously, the ambiguities have an integer constraint, but the ambiguity estimates of the EKF (17) are float values. Therefore, the next step is the crucial part of RTK, namely AR, whose target is to search for the optimal integer solution of the ambiguities. In this paper, the LAMBDA method was adopted to solve the problem. As the most widely used AR method in RTK applications, LAMBDA searches for the optimal integer ambiguities within a given range in the ambiguity domain based on the following objective function [27]:
$$\Delta\mathbf{N}_{k|k} = \arg\min_{\Delta\mathbf{N}\in\mathbb{Z}^{M-1}}\left\|\Delta\hat{\mathbf{N}}_{k|k} - \Delta\mathbf{N}\right\|^2_{P_{\Delta\hat{N},k|k}^{-1}}$$
where P Δ N ^ , k | k is the covariance matrix of Δ N ^ k k obtained from (18). The detailed procedure of the above optimum searching problem can be found in [28].
After finding the optimal integer solution Δ N k k , we can revise the user position with the equation as below:
$$\mathbf{r}_{k|k}^E = \hat{\mathbf{r}}_{k|k}^E - P_{\hat{r}^E\Delta\hat{N},k|k}\,P_{\Delta\hat{N},k|k}^{-1}\left(\Delta\hat{\mathbf{N}}_{k|k} - \Delta\mathbf{N}_{k|k}\right)$$
and the corresponding covariance matrix can be derived as:
$$P_{r_{k|k}^E} = P_{\hat{r}_{k|k}^E} - P_{\hat{r}^E\Delta\hat{N},k|k}\,P_{\Delta\hat{N},k|k}^{-1}\,P_{\hat{r}^E\Delta\hat{N},k|k}^T$$
where P r ^ k k E and P r ^ E Δ N ^ , k k represent the covariance matrix of r ^ E and the correlation matrix between r ^ E and Δ N ^ k k obtained from (18), respectively.
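The last two equations are the standard conditional adjustment of the float solution once an integer candidate is accepted. The sketch below illustrates this step, using simple component-wise rounding as a stand-in for the LAMBDA search (purely for illustration; LAMBDA itself is described in [27,28]):

```python
import numpy as np

def fix_and_condition(r_float, N_float, P_rN, P_NN, search=np.rint):
    """Condition the float position on a fixed integer ambiguity vector.

    r_float, N_float : float position and float DD ambiguities from the EKF.
    P_rN             : cross-covariance between position and ambiguities.
    P_NN             : ambiguity covariance.
    search           : integer estimator; np.rint stands in for LAMBDA here.
    """
    N_fixed = search(N_float)
    correction = P_rN @ np.linalg.solve(P_NN, N_float - N_fixed)
    return r_float - correction, N_fixed

# Toy example: one position component correlated with two ambiguities.
r_float = np.array([3.42])
N_float = np.array([4.8, -2.1])
P_NN = np.array([[0.09, 0.02], [0.02, 0.08]])
P_rN = np.array([[0.03, 0.01]])
print(fix_and_condition(r_float, N_float, P_rN, P_NN))
```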
The integration of LiDAR features strengthened the geometric distribution, thus improving the RTK performance to some degree. However, in a real scenario, in addition to low satellite availability and a poor geometric distribution, frequent cycle slips and large multipath noises of code pseudoranges also bring challenges. With these challenges, the float ambiguities estimated by EKF could not converge rapidly, which means that Δ N ^ k k might not be as accurate as we wanted.
To tackle the challenges of cycle slips and the multipath effect, we designed a parallel submodule that solves the AR problem in the position domain and integrates only the carrier phase and LiDAR feature measurements, using the PF with a modified MSR objective function.

4.4. PF-Based Integration in the Position Domain

For the PF-based integration algorithm, two techniques play quite important roles in improving the robustness of the system. One is the MSR objective function that solves the AR problem in the position domain and is free of the effects of cycle slips and pseudorange multipath noises, and another one is the particle filter that further improves the stability of the outputs.

4.4.1. MSR Objective Function for the AR Problem

The MSR-based AR algorithm was proposed in [21]. The basic idea can be summarized briefly as follows. For a satellite, the carrier phase and its possible ambiguities can be related to a set of parallel surfaces, as shown by Figure 3, which should actually be hyperboloids for DD measurements. If we ignore the existence of errors and noise, it can be found that an intersection point is located on one surface of each set, and this point is the true position we want. Although, due to noise, these surfaces may not intersect exactly at the same point, the true position is still quite near the surfaces corresponding to the correct ambiguities, owing to the high precision and multipath resistance of carrier phases. Therefore, the objective function that selects the best point from a set of position candidate points can be established by calculating the distance from each point to its nearest hyperboloids.
In detail, for the candidate position point $\mathbf{r}^{E,i}$, we used the following MSR objective function to determine whether it is the best point or not:
$$MSR\left(\mathbf{r}^{E,i}\right) = \mathbf{d}_\Phi^{i\,T}R_\Phi^{-1}\mathbf{d}_\Phi^i$$
where $R_\Phi$ represents the carrier phase noise covariance matrix. The elements of the distance vector $\mathbf{d}_\Phi^i = \left[d_\Phi^{i,21}, d_\Phi^{i,31}, \ldots, d_\Phi^{i,M1}\right]^T$ can be calculated by:
$$d_\Phi^{i,jk} = \lambda\,\mathrm{rem}\!\left(\frac{1}{\lambda}\Delta\rho_{rb}^{i,jk} - \Delta\Phi_{rb}^{jk}\right)$$
where $\Delta\rho_{rb}^{i,jk}$ is the DD geometric distance derived from $\mathbf{r}^{E,i}$ with satellites $j$ and $k$, and $\mathrm{rem}(\cdot)$ is the fractional-part function with a range from −0.5 to 0.5. The corresponding DD ambiguity can be easily obtained by the following equation:
$$\Delta N_{rb}^{i,jk} = \mathrm{round}\!\left(\frac{1}{\lambda}\Delta\rho_{rb}^{i,jk} - \Delta\Phi_{rb}^{jk}\right)$$
where $\mathrm{round}(\cdot)$ is the rounding function to the nearest integer.
Since the search space lies in the position domain, the method is insensitive to cycle slips. In addition, from the above equations, we can see that the MSR is built on the strong integer constraint of the ambiguities, and the calculation involves carrier phases only, free of code pseudoranges with their large multipath noises. Hence, compared with the traditional AR methods in the ambiguity domain, the method is more suitable for urban environments. The only remaining problem for the MSR-based AR method is how to select and manage the candidate position points, and in this paper, the PF was applied to handle this problem.
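To make the MSR computation concrete, the following sketch evaluates the objective function for one candidate position from carrier phases only (the satellite geometry, noise values, and reference-satellite convention in the example are assumptions):

```python
import numpy as np

def msr(r_candidate, sat_pos, base_pos, dd_phase_cycles, wavelength, R_phi_inv):
    """Mean-Squared Residual of a candidate position (carrier phase only).

    r_candidate     : (3,) candidate rover position.
    sat_pos         : (M, 3) satellite positions; satellite 0 is the reference.
    base_pos        : (3,) base station position.
    dd_phase_cycles : (M-1,) DD carrier phases in cycles (w.r.t. satellite 0).
    R_phi_inv       : (M-1, M-1) inverse carrier-phase noise covariance.
    Returns (MSR value, implied integer DD ambiguities).
    """
    rng_rov = np.linalg.norm(sat_pos - r_candidate, axis=1)
    rng_bas = np.linalg.norm(sat_pos - base_pos, axis=1)
    sd = rng_rov - rng_bas
    dd_rho = sd[1:] - sd[0]                       # DD geometric distances [m]
    frac = dd_rho / wavelength - dd_phase_cycles  # ambiguity + fractional residual
    resid = wavelength * (frac - np.rint(frac))   # rem(.) in [-0.5, 0.5) cycles
    return resid @ R_phi_inv @ resid, np.rint(frac)

# Synthetic check: a candidate at the true position should give a near-zero MSR.
sats = np.array([[2.0e7, 0, 0], [0, 2.0e7, 0], [0, 0, 2.0e7], [1.5e7, 1.5e7, 0]])
base = np.array([0.0, 0.0, 0.0])
truth = np.array([3.0, -2.0, 1.0])
lam = 0.19
rng_t = np.linalg.norm(sats - truth, axis=1) - np.linalg.norm(sats - base, axis=1)
dd_phase = (rng_t[1:] - rng_t[0]) / lam - np.array([7.0, -3.0, 12.0])
print(msr(truth, sats, base, dd_phase, lam, np.eye(3) / 0.003**2)[0])  # ~0
```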

4.4.2. PF-Based Integration Algorithm

The basic idea of the PF is to approximate the state posterior probability distribution with a large number of sampling points, namely particles, in the state space by updating their weights iteratively [29]. Since it is almost impossible to sample the posterior probability distribution directly, the PF introduces the concept of the importance sampling and samples another known probability distribution, namely the importance probability density function, q x k i z 1 : k . Then, the state posterior probability distribution can be approximated with the weighted sum of these sampling points:
$$p\left(\mathbf{x}_k \mid \mathbf{z}_{1:k}\right) \approx \sum_{i=1}^{N} w_k^i\,\delta\left(\mathbf{x}_k - \mathbf{x}_k^i\right)$$
where x k represents the state of epoch k, z 1 : k represents the measurements from Epoch 1 to k, w k i represents the weight of particle x k i , and:
$$w_k^i \propto \frac{p\left(\mathbf{x}_k^i \mid \mathbf{z}_{1:k}\right)}{q\left(\mathbf{x}_k^i \mid \mathbf{z}_{1:k}\right)}$$
Here, in our PF-based integration submodule, in order to reduce the complexity, we compressed the state space to the position state space, and further, to avoid the effect of large multipath errors in code pseudoranges, only carrier phases and LiDAR plane features were adopted as measurements in the PF calculation. Therefore, the state posterior probability distribution of our PF was $p\left(\mathbf{r}_k^E \mid \mathbf{z}_{1:k}\right)$, and we denote the shrunken measurement vector of epoch $k$ as $\mathbf{z}_k$, with:
$$\mathbf{z}_k = \left[\lambda\Delta\boldsymbol{\Phi}^T,\ \mathbf{Y}_L^T\right]^T$$
Supposing that we have an RTK fix positioning result $\mathbf{r}_{t|t}^E$ derived by the EKF-based submodule at epoch $t$, the center of the initial sampling point set can be determined with $\mathbf{r}_{t|t}^E$, and the points are sampled from a Gaussian distribution, $\mathbf{r}_t^E \sim N\left(\mathbf{r}_{t|t}^E, Q_r^0\right)$, and assigned the same weights. Normally, for the initialization stage, the values of the covariance matrix $Q_r^0$ should not be too small despite the high precision of $\mathbf{r}_{t|t}^E$: a $Q_r^0$ that is too small is not conducive to the approximation of the posterior probability density function in the position domain during the convergence process, especially when the initialization stage is a dynamic scene. Furthermore, considering both the error of the position state transition model and the situation where the particle filter may diverge when the observation conditions deteriorate, the total number of particles should not be too small, either. Although additional particles increase the computational complexity of the algorithm, more particles better guarantee the stability of the algorithm. That is, once the filter diverges, it can expand the position search space adaptively while maintaining a certain sampling density so that the true value still stays within the search space and the algorithm is more likely to re-converge afterwards. In our method, we recommend $Q_r^0 = 0.5^2 I_3$ and a number of sampling points $N_{PF} = 5000$; both are empirical values, but are sufficient for low-dynamic scenes. We denote the particle set as $\left\{\mathbf{r}_k^{E,i}\right\}_{i=1}^{N_{PF}}$ and the weight of particle $\mathbf{r}_k^{E,i}$ as $w_k^i$.
At each epoch, we employed the state transition probability density function $p\left(\mathbf{r}_k^E \mid \mathbf{r}_{k-1}^E\right)$ as the importance probability density function to realize the importance sampling. In detail, a new particle of epoch $k$ can be derived on the basis of the particle $\mathbf{r}_{k-1}^{E,i}$ of the last epoch with the following equation:
$$\mathbf{r}_k^{E,i} = \mathbf{r}_{k-1}^{E,i} + d\mathbf{r}_k^E$$
where $d\mathbf{r}_k^E \sim N\left(\Delta\mathbf{r}_{k|k-1}^E, Q_{r_k}\right)$, $\Delta\mathbf{r}_{k|k-1}^E$ is the position variation derived by the INS, and $Q_{r_k}$ is the system noise covariance matrix of the state $\mathbf{r}^E$.
Then, we can update the weights of the particles with the following equation:
$$w_{k,\mathrm{non\text{-}normalized}}^i = w_{k-1}^i\,p\left(\mathbf{z}_k \mid \mathbf{r}_k^{E,i}\right)$$
where p z k r k E represents the measurement probability density function. By doing so, we not only integrate the measurements of carrier phases and LiDAR features, but also solve the AR problem with the aid of the MSR objective function in terms of probability.
The measurement probability density function that we used can be derived by the extensive form of the LiDAR-feature-integrated MSR objective function:
$$p\left(\mathbf{z}_k \mid \mathbf{r}_k^E\right) = \frac{1}{C}\exp\left(-MSR_L\left(\mathbf{r}_k^E, \mathbf{z}_k\right)\right)$$
where $C$ is a coefficient whose value is inessential due to the subsequent weight normalization process, and the expression of the LiDAR-feature-integrated MSR objective function is given below:
$$MSR_L\left(\mathbf{r}_k^E, \mathbf{z}_k\right) = \begin{bmatrix}\mathbf{d}_\Phi \\ \mathbf{d}_L\end{bmatrix}^T\begin{bmatrix}R_\Phi^{-1} & \\ & R_L^{-1}\end{bmatrix}\begin{bmatrix}\mathbf{d}_\Phi \\ \mathbf{d}_L\end{bmatrix}$$
where d Φ is the distance vector calculated with position r k E , d L is the residual vector of LiDAR feature measurements based on the measurement model of (7), and R L is the LiDAR feature observation error covariance matrix, which is given in Section 5.
Afterwards, the weights are normalized:
$$w_k^i = \frac{w_{k,\mathrm{non\text{-}normalized}}^i}{\sum_{i=1}^{N_{PF}} w_{k,\mathrm{non\text{-}normalized}}^i}$$
For the PF, there are several criteria for the output, such as Maximum A Posteriori (MAP) and the Minimum Mean-Squared Error (MMSE). In our method, for the sake of stability, we preferred MMSE criterion. Hence, the MMSE positioning result of the PF-based integration submodule can be calculated by:
$$\hat{\mathbf{r}}_{k,MMSE}^E = \sum_{i=1}^{N_{PF}} w_k^i\,\mathbf{r}_k^{E,i}$$
Once the PF converges, we can feed a more accurate reference float ambiguity vector based on the MMSE back to the EKF-based module to improve the fix rate of the AR method in the ambiguity domain, by simply substituting the original float ambiguity vector with the following MMSE one in (19):
$$\Delta\hat{\mathbf{N}}_{k,MMSE} = \sum_{i=1}^{N_{PF}} w_k^i\,\Delta\mathbf{N}_k^i$$
Whether the MMSE float ambiguity vector converges to an acceptable accuracy depends on the indicator:
$$\varepsilon\left(\Delta\hat{\mathbf{N}}_{k,MMSE}\right) = \frac{1}{M-1}\sum_{i=1}^{N_{PF}} w_k^i\left\|\Delta\mathbf{N}_k^i - \Delta\hat{\mathbf{N}}_{k,MMSE}\right\|^2$$
where M 1 is the number of DD ambiguities.
Since the importance sampling has the problem of weight degeneracy, the resampling process is essential to the PF. That is, after a period of filtering, only a few particles have relatively large weights, while the weights of the rest are negligibly small. Therefore, we needed to resample the particles. Here, we adopted the resampling method of deleting the particles of small weights, copying the particles of large weights and setting all the new particles with the same weights.
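Putting the pieces of this subsection together, one epoch of the position-domain PF can be sketched as below. The likelihood passed in stands in for the LiDAR-feature-integrated MSR objective $MSR_L$, and the effective-sample-size trigger for resampling is a common design choice rather than a detail taken from the paper:

```python
import numpy as np

def pf_epoch(particles, weights, dr_ins, Q_r, neg_log_like, resample_frac=0.5):
    """One epoch of the position-domain particle filter.

    particles      : (N, 3) candidate positions.
    weights        : (N,) normalized weights from the previous epoch.
    dr_ins         : (3,) INS-derived position increment between epochs.
    Q_r            : (3, 3) process noise covariance of the position state.
    neg_log_like   : callable r -> MSR_L(r, z_k), the negative log-likelihood.
    Returns updated particles, weights and the MMSE position.
    """
    rng = np.random.default_rng()
    n = len(particles)
    # Importance sampling with the state transition density.
    particles = particles + rng.multivariate_normal(dr_ins, Q_r, size=n)
    # Weight update with the MSR-based measurement likelihood, then normalize.
    logw = np.log(weights) - np.array([neg_log_like(p) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # MMSE output.
    r_mmse = w @ particles
    # Resample (delete small weights, copy large ones) when the effective
    # sample size degenerates; the new particles get equal weights.
    if 1.0 / np.sum(w**2) < resample_frac * n:
        idx = rng.choice(n, size=n, p=w)
        particles, w = particles[idx], np.full(n, 1.0 / n)
    return particles, w, r_mmse

# Toy run: a likelihood centered at [1, 2, 0].
target = np.array([1.0, 2.0, 0.0])
parts = np.random.default_rng(1).normal(0.0, 0.5, (2000, 3))
wts = np.full(2000, 1.0 / 2000)
for _ in range(5):
    parts, wts, r_hat = pf_epoch(parts, wts, np.zeros(3), 0.01 * np.eye(3),
                                 lambda r: 50.0 * np.sum((r - target) ** 2))
print(r_hat)
```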

4.5. Result Assessment and Selection

Each epoch, we can obtain three positioning results, namely the fix result r k | k E and float result r ^ k | k E provided by the EKF-based submodule and the MMSE result r ^ k , M M S E E provided by the PF-based submodule, respectively. As for r k | k E , it can provide higher precision if the AR problem is correctly solved, but has the risk of wrong fixing, while for r ^ k , M M S E E , the stability can be guaranteed at the sacrifice of a small precision loss, and the positioning precision can also be satisfactory after convergence. Therefore, the task of the result assessment and the selection submodule is to balance the precision and stability and pick one result for the final output.
The selection criterion can be defined freely for different purposes. Here, as a reference, we provide one based on the classical AR ratio test [30], the predicted state, and the estimated accuracy of the MMSE result:
  • The submodule will output the fix result $\mathbf{r}_{k|k}^E$ and mark the status as fixed if the following two conditions are both met:
    $$\mathrm{ratio} = \frac{\left\|\Delta\hat{\mathbf{N}} - \Delta\mathbf{N}'\right\|^2_{P_{\Delta\hat{N}}^{-1}}}{\left\|\Delta\hat{\mathbf{N}} - \Delta\mathbf{N}\right\|^2_{P_{\Delta\hat{N}}^{-1}}} \geq \xi_{ratio}, \qquad \left\|\mathbf{r}_{k|k}^E - \hat{\mathbf{r}}_{k|k-1}^E\right\| \leq \varepsilon\left(\hat{\mathbf{r}}_{k,MMSE}^E\right)$$
    where $\Delta\mathbf{N}'$ is the suboptimal solution of the LAMBDA method, $\xi_{ratio}$ is the ratio threshold, and $\varepsilon\left(\hat{\mathbf{r}}_{k,MMSE}^E\right)$ is the estimated accuracy of the MMSE result:
    $$\varepsilon\left(\hat{\mathbf{r}}_{k,MMSE}^E\right) = \sum_{i=1}^{N_{PF}} w_k^i\left\|\mathbf{r}_k^{E,i} - \hat{\mathbf{r}}_{k,MMSE}^E\right\|^2$$
  • If (35) is not met, output the MMSE result $\hat{\mathbf{r}}_{k,MMSE}^E$, and mark the status as fixed if the convergence condition is met:
    $$\varepsilon\left(\hat{\mathbf{r}}_{k,MMSE}^E\right) \leq 0.3$$
  • If neither (35) nor (37) is met, output the float result $\hat{\mathbf{r}}_{k|k}^E$ and mark the status as float.
If an MMSE result is selected, the corresponding position covariance matrix can be approximated as:
$$P_{\hat{r}_{k,MMSE}^E} = \sum_{i=1}^{N_{PF}} w_k^i\,\mathrm{diag}\!\left(\left(\mathbf{r}_k^{E,i} - \hat{\mathbf{r}}_{k,MMSE}^E\right)^2\right)$$
where d i a g · is the diagonal matrix whose diagonal elements are composed of the elements of the vector.
Additionally, the final output result will be fed back to the EKF-based submodule and be used to predict the state of the next epoch.
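The selection logic of this subsection can be summarized in a few lines (the flag names and example values are illustrative assumptions):

```python
def select_output(fix_ok_ratio, innovation_ok, r_fix, r_mmse, r_float,
                  eps_mmse, eps_conv=0.3):
    """Pick the epoch output among EKF fix, PF MMSE, and EKF float results.

    fix_ok_ratio  : True if the LAMBDA ratio test passed.
    innovation_ok : True if the fixed position is consistent with the
                    INS-predicted position within the MMSE accuracy bound.
    eps_mmse      : estimated accuracy of the MMSE result.
    eps_conv      : convergence threshold on eps_mmse (0.3 as in the text).
    Returns (position, status) with status 'fix' or 'float'.
    """
    if fix_ok_ratio and innovation_ok:
        return r_fix, "fix"        # highest precision, validated fix result
    if eps_mmse <= eps_conv:
        return r_mmse, "fix"       # converged PF MMSE result
    return r_float, "float"        # fall back to the float solution

print(select_output(False, False, [0, 0, 0], [0.1, 0.0, 0.0],
                    [0.4, 0.1, 0.0], 0.12))
```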

5. LiDAR Feature Registration and Error Models

Once the final RTK result, which we denote as $\tilde{\mathbf{r}}_{k|k}$, is obtained, we can use it to update the global parameters of the LiDAR feature landmarks with the following equation:
$$\tilde{\mathbf{p}}_k^{j,E} = \left(1 + \frac{\tilde{\mathbf{r}}_{k|k}^{E\,T}\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}}{\left\|\mathbf{y}_k^{j,L}\right\|^2}\right)\cdot\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}$$
It is worth noting that the accuracy of the above-calculated LiDAR global parameters is affected by several factors, such as RTK positioning errors, attitude errors, and LiDAR measurement errors. The existence of LiDAR global parameter errors means that the measurement models used in the subsequent observation epochs are inaccurate, which is normally treated as model error. Therefore, we needed to take the errors of the global parameters into consideration when estimating the observation errors of the LiDAR features, which is similar to the situation of GNSS with regard to ephemeris errors.
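Before turning to the error models, for reference, the registration step itself, i.e., the inverse of the plane measurement model, can be sketched as follows (a minimal illustration, not the authors' code):

```python
import numpy as np

def register_foothold_E(y_L, r_E, C_LE):
    """Register a plane observed in the L frame as a global (E-frame) foothold.

    y_L  : (3,) foothold of the plane in the LiDAR frame.
    r_E  : (3,) RTK position of the rover in the global frame.
    C_LE : (3, 3) rotation from the L frame to the E frame.
    """
    y_L = np.asarray(y_L, dtype=float)
    p_dir = C_LE @ y_L
    scale = 1.0 + (r_E @ p_dir) / (y_L @ y_L)   # restores d_E = d_L + r^T n
    return scale * p_dir

# Round trip with the measurement model: the user at [0, 10, 0] observes the
# wall whose L-frame foothold is [0, 20, 0]; its global foothold is [0, 30, 0].
print(register_foothold_E([0.0, 20.0, 0.0], np.array([0.0, 10.0, 0.0]), np.eye(3)))
```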
Supposing that the position result, attitude result, and LiDAR measurements are uncorrelated, with the use of:
$$\delta C_L^E = \hat{C}_L^E\left[\delta\boldsymbol{\Psi}\times\right]$$
where δ Ψ represents the attitude error, and with the help of Taylor expansion, we can derive the covariance matrix of the LiDAR plane global parameter error based on (39) as follows:
$$P_{\tilde{p}_k^{j,E}} = C_{k,1}P_{\tilde{r}_{k|k}^E}C_{k,1}^T + C_{k,2}P_{\hat{\Psi}_{k|k}}C_{k,2}^T + C_{k,3}R_{L0,k}^j C_{k,3}^T$$
where P r ˜ k | k E and P Ψ ^ k | k represent the covariance matrices of the RTK position and attitude results, respectively, R L 0 , k j represents the internal measurement error covariance matrix of plane j, and the expressions of the coefficient matrices are:
$$C_{k,1} = \frac{\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\left(\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\right)^T}{\left\|\mathbf{y}_k^{j,L}\right\|^2}$$
$$C_{k,2} = \left[\frac{1}{\left\|\mathbf{y}_k^{j,L}\right\|^2}\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\tilde{\mathbf{r}}_{k|k}^{E\,T} + \left(1 + \frac{\left(\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\right)^T\tilde{\mathbf{r}}_{k|k}^E}{\left\|\mathbf{y}_k^{j,L}\right\|^2}\right)I\right]\left[\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\times\right]$$
$$C_{k,3} = \left(1 + \frac{\left(\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\right)^T\tilde{\mathbf{r}}_{k|k}^E}{\left\|\mathbf{y}_k^{j,L}\right\|^2}\right)\hat{C}_{L,k|k}^E + \frac{1}{\left\|\mathbf{y}_k^{j,L}\right\|^2}\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\tilde{\mathbf{r}}_{k|k}^{E\,T}\hat{C}_{L,k|k}^E - \frac{2}{\left\|\mathbf{y}_k^{j,L}\right\|^4}\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\tilde{\mathbf{r}}_{k|k}^{E\,T}\hat{C}_{L,k|k}^E\mathbf{y}_k^{j,L}\mathbf{y}_k^{j,L\,T}$$
Then, for the subsequent epoch t with the newly predicted user position r ^ t t 1 E and rotation matrix C ^ E , t t 1 L , if the global parameters of plane j are derived at epoch k, the observation error covariance matrix can be calculated by:
$$R_{L,t}^j = R_{L0,t}^j + B_t^j P_{\tilde{p}_k^{j,E}} B_t^{j\,T}$$
where R L 0 , t j is the LiDAR feature internal error covariance matrix of epoch t, and the coefficient matrix derived from the Taylor expansion of (7) is given as below:
$$B_t^j = \hat{C}_{E,t|t-1}^L\left[\left(1 - \frac{\hat{\mathbf{r}}_{t|t-1}^{E\,T}\tilde{\mathbf{p}}_k^{j,E}}{\left\|\tilde{\mathbf{p}}_k^{j,E}\right\|^2}\right)I - \left(\frac{1}{\left\|\tilde{\mathbf{p}}_k^{j,E}\right\|^2} + \frac{\hat{\mathbf{r}}_{t|t-1}^{E\,T}\tilde{\mathbf{p}}_k^{j,E}}{\left\|\tilde{\mathbf{p}}_k^{j,E}\right\|^4}\right)\tilde{\mathbf{p}}_k^{j,E}\tilde{\mathbf{p}}_k^{j,E\,T}\right]$$
where r ^ t t 1 E and C ^ E , t t 1 L represent the predicted position and rotation matrix based on the predicted attitude Ψ ^ t t 1 E that is derived by INS at epoch t, respectively.
The LiDAR feature internal error covariance matrix R L 0 j depends on the working characteristic of the LiDAR itself. For LiDAR, the coordinate of a scanning point is derived with two direction angles and a ranging distance, which can be expressed as follows:
$$\rho_L\left[\sin\alpha\cos\beta,\ \cos\alpha\cos\beta,\ \sin\beta\right]^T$$
where ρ L represents the ranging distance and α and β represent the horizontal and vertical angles, respectively. Therefore, the errors of the scanning points are affected by the ones of the direction angles and ranging distances. Since we adopted foothold coordinates to describe the plane features, even though the plane features are the extracted products of the point clouds, we still simply followed the model of the point errors whose covariance matrix, with the help of Taylor expansion, can be expressed below as a function of the direction angles and ranging distance:
$$R_{L0}^j = \sigma_{L,\rho}^2\,\mathbf{n}_L^j\mathbf{n}_L^{j\,T} + B_L^j R_{\alpha,\beta} B_L^{j\,T}$$
where $\sigma_{L,\rho}^2$ and $R_{\alpha,\beta}$ represent the error variance of the ranging distance and the error covariance matrix of the direction angles, respectively, $\mathbf{n}_L^j$ represents the normal vector with direction angles $\alpha_L^j$ and $\beta_L^j$, and the expression of the coefficient matrix $B_L^j$ is:
$$B_L^j = d_L^j\begin{bmatrix} \cos\alpha_L^j\cos\beta_L^j & -\sin\alpha_L^j\sin\beta_L^j \\ -\sin\alpha_L^j\cos\beta_L^j & -\cos\alpha_L^j\sin\beta_L^j \\ 0 & \cos\beta_L^j \end{bmatrix}$$
where d L j represents the distance from the origin to the plane.
Since the update rate of the integration was tied to that of the GNSS data, which is normally 1 Hz, the update rate of the mapping was also 1 Hz. Every epoch, we can calculate a new set of LiDAR global parameters based on the current RTK result and LiDAR measurements, and whether we substitute the old parameters of a certain landmark with the new ones depends on whether the observation precision based on the new landmark parameters is higher. To be specific, the following precision indicator is used to assess the precision:
$$\sigma_{L,k}^j = \mathrm{trace}\left(R_{L,k}^j\right)$$
where t r a c e · represents the sum of the diagonal elements and R L , k j is the re-calculated observation error covariance matrix at current epoch k.
It is worth noting that the substitute criterion was based on the observation precision of (41), not the global parameter precision of (40), because what we cared about was how the global parameter precisions affect the integrated positioning rather than their actual values, which, in Section 6, seemed to be “awful” owing to the far distance from the global frame origin to the plane.
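A sketch of this substitution rule (the landmark data structure and example values are assumptions) is given below:

```python
import numpy as np

def maybe_update_landmark(landmark, p_new_E, R_L_new):
    """Replace a landmark's global parameters only if the precision indicator
    (trace of the re-calculated observation error covariance) improves."""
    sigma_new = np.trace(R_L_new)
    if sigma_new < landmark["sigma"]:
        landmark["p_E"], landmark["sigma"] = p_new_E, sigma_new
    return landmark

lm = {"p_E": np.array([0.0, 30.0, 0.0]), "sigma": 0.04}
lm = maybe_update_landmark(lm, np.array([0.02, 30.01, -0.01]), 0.01 * np.eye(3))
print(lm["sigma"])   # 0.03: the new registration replaced the old one
```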

6. Performance Theoretical Analyses

For the proposed RTK/LiDAR/INS integrated system, the aim was to improve RTK performance by integrating the measurements of LiDAR features and GNSS, together with the parallel-filter scheme, and what we cared about most was whether, and to what degree, the RTK performance can be improved by these measures.
Therefore, in this section, we addressed several performance theoretical analyses and some simple simulations to validate our idea, which can be summarized as the following topics:
  • LiDAR plane feature global parameter and observation accuracies;
  • effect of the integration of LiDAR features on RTK AR success rate;
  • effect of parallel-filter scheme on RTK AR success rate;
  • effect of the integration of LiDAR features on RTK positioning accuracy.
In the following discussion, the epoch scripts are omitted from the equations for brevity.

6.1. LiDAR Plane Feature Global Parameter and Observation Accuracy Analysis

Since the observation accuracies of the LiDAR features have an impact on the integrated RTK, while the global parameters of the LiDAR features are registered on the basis of the RTK results, we first analyzed the precision of the LiDAR feature global parameters and the observation accuracies.
The error models related to LiDAR features were addressed in Section 5. In this section, we conducted several simple simulations as examples where the RTK position and attitude results were assumed to have been obtained already, as well as the variances or covariance matrices of the results, and the error amplitudes of the distances and direction angles of LiDAR plane features in (42) were also given. Then, we designed a set of multiple unit normal vectors and simulated the global parameter accuracies based on (40) and observation accuracies based on (41) of the plane features with the designed unit normal vectors with different distances, successively, to analyze the accuracies in a direct-viewing way.
In the following simulations, the RTK positioning (latitude, longitude, height) and attitude results were set as (60° N, 120° E, 0 m) and (0°, 0°, 180°), respectively; the accuracies of the attitude results, LiDAR plane feature distances, and direction angles were set as 0.05°, 0.02 m, and 0.005°, respectively; and the direction angles of the unit normal vectors in the L frame are listed in Table 1. Among these vectors, the first four are the nearly horizontal ones that represent the features found in the UGV situation, while the last two represent the ones found in the UAV situation. What is more, we designed two sets of parameters with different RTK positioning accuracies, namely 0.05 m and 0.3 m, the former of which represents the fix situation.
In the following simulations, we chose the ECEF coordinate system as the global frame. The global parameter accuracy curves in three directions versus the ranging distance are shown in Figure 4. From the figure, we can see that the accuracies of the global parameters look very “awful” no matter how precise the RTK positioning results were. Therefore, why are the global parameter errors we derived so large? From the expressions of $C_2$ and $C_3$ in (40), we can find that, since the distance from the global frame origin to the user is much longer than the one from the user to the plane, the values of $C_2$ and $C_3$ become very large, which means that the errors of the global parameters calculated by (40) are very sensitive to the attitude and LiDAR internal measurement errors. This phenomenon can also be explained from an intuitive point of view; that is, the origin of the global frame is much farther away from the planes than the user position, and thus, the tiny angular errors in the local frame are enlarged when we calculate a global foothold from a local one, which we call the distance effect. Moreover, the larger the ratio between the distance from the global origin to the plane and that from the user position is, the greater the impact of angle errors on the LiDAR global parameter accuracy, which is evidenced in our simulation by the following phenomenon: as the observation distance increases, instead of increasing, the curves of the global parameter errors present a descending trend.
Can we still use global parameters with such poor accuracy in our integrated positioning algorithm? The answer is yes. Although the global parameter errors are quite large, for the same reason as the distance effect, their influence is greatly weakened when calculating the observation errors that account for the global parameter errors, namely with Equation (41). Figure 5 displays the observation errors based on the above global parameter errors at the same observation spot. As we can see from the figure, with high-precision RTK fix positioning results, the observation errors in each direction were less than 0.1 m within a 100 m observation distance, and the observation errors were also acceptable with the 0.3 m positioning accuracy, which suggests that the observation accuracy of the LiDAR plane features, whose global parameters were registered based on the RTK results, was quite high and, thus, usable for the integrated RTK calculation. Moreover, the plane observation errors were shown to be proportional to the observation distance, which accords with the intuitive sense for LiDAR observation.
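To make the distance effect concrete, the following minimal Python sketch (our own illustration; the lever-arm and angle-error values are assumed for the example and are not the settings behind Figures 4 and 5) propagates the same small angle error once over the origin-to-plane distance and once over the user-to-plane distance:

```python
import numpy as np

# Assumed example values, chosen only to illustrate the scaling argument
R_origin_to_plane = 6.378e6        # distance from the ECEF origin to the plane (m)
d_user_to_plane = 30.0             # distance from the user to the plane (m)
angle_error = np.deg2rad(0.05)     # attitude / direction angle error (rad)

# Global parameter (foothold/distance in the E frame): the small angle error is
# scaled by the huge origin-to-plane lever arm, so the resulting error is large.
global_param_error = R_origin_to_plane * angle_error
print(f"global parameter error  ~ {global_param_error:9.1f} m")

# Plane observation (user-to-plane range along the normal): the same angle error
# is only scaled by the short user-to-plane distance, so the error stays small.
observation_error = d_user_to_plane * angle_error
print(f"plane observation error ~ {observation_error:9.4f} m")
```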

6.2. Effect of the Integration of LiDAR Features on the RTK AR Success Rate

From the above simulation, we know that, after deriving the global parameters based on the RTK results, the observation accuracy of the LiDAR plane features is lower than that of the carrier phases, but equivalent to or even higher than that of the pseudoranges. To what degree, then, can LiDAR feature integration improve the performance of RTK? As the most crucial indicator of RTK performance, the RTK AR success rate was analyzed here. To be specific, we adopted the following Ambiguity Dilution of Precision (ADOP)-based AR success rate estimation equation to assess the improvement [31]:
$$P_c = \left[2\Phi\left(\frac{1}{2\,\mathrm{ADOP}}\right) - 1\right]^{M-1}$$
where $\Phi(\cdot)$ represents the Gaussian cumulative distribution function and:
$$\mathrm{ADOP} = \sqrt{\det\left(P_{\Delta\hat{N}}\right)}^{\;\frac{1}{M-1}}$$
where $M$ represents the number of satellites, $P_{\Delta\hat{N}}$ represents the covariance matrix of the float ambiguities, and $\det(\cdot)$ represents the determinant function. From the above equations, we can see that the AR success rate is a function of the float ambiguity covariance matrix; thus, we derived the change in $P_{\Delta\hat{N}}$ after integrating the LiDAR features. Here, to facilitate the derivation and analysis, we made the following assumptions to simplify the model:
Assumption 1.
Assuming that the measurement errors follow a Gaussian distribution and that the corresponding covariance matrices do not depend on the estimated states, the Cramer–Rao Lower Bound (CRLB) of the estimators can be obtained as follows:
$$\mathrm{CRLB}(x) = \left(H^T R^{-1} H\right)^{-1}$$
Assumption 2.
We assumed that the measurement error covariance matrix can be simply expressed as the product of the error coefficients and the proportional matrices, that is,
$$R = \mathrm{blkdiag}\left(\sigma_P^2 D,\ \sigma_\Phi^2 D,\ \sigma_L^2 E\right)$$
where $\sigma_P^2 D$, $\sigma_\Phi^2 D$, and $\sigma_L^2 E$ represent the measurement error covariance matrices of the pseudoranges, carrier phases, and LiDAR plane features, respectively.
Then, we can derive the CRLBs of the float ambiguities with and without LiDAR integration, which are given below:
Without LiDAR integration:
$$P_{\Delta\hat{N}} = \frac{\sigma_\Phi^2}{\lambda^2}\left[D + \frac{\sigma_P^2}{\sigma_\Phi^2}\, G\left(G^T D^{-1} G\right)^{-1} G^T\right]$$
With LiDAR integration:
$$P_{\Delta\hat{N}} = \frac{\sigma_\Phi^2}{\lambda^2}\left[D + \frac{\sigma_P^2}{\sigma_\Phi^2}\, G\left(G^T D^{-1} G + \frac{\sigma_P^2}{\sigma_L^2}\, U^T E^{-1} U\right)^{-1} G^T\right]$$
Since $\sigma_P^2 \gg \sigma_\Phi^2$, before being tightly integrated with the LiDAR features, the accuracy of the float ambiguities is mainly dominated by that of the pseudoranges. After the integration, the influence of the pseudorange errors is weakened by the added term $(\sigma_P^2/\sigma_L^2)\, U^T E^{-1} U$, and thus, the float ambiguity accuracy is greatly improved, as is the RTK AR success rate.
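To illustrate the derivation, the Python sketch below (our own example; the matrices $G$, $D$, $U$, $E$, the noise coefficients, and the wavelength are assumed values, not those of the simulation shown in Figure 6) evaluates the float ambiguity covariance with and without the LiDAR term and converts it into the ADOP-based AR success rate:

```python
import numpy as np
from scipy.stats import norm

# Assumed example values (not the settings of the Figure 6 simulation)
wavelength = 0.19                                # carrier wavelength (m), roughly GPS L1
sigma_P, sigma_Phi, sigma_L = 0.5, 0.005, 0.05   # pseudorange / phase / LiDAR STDs (m)

G = np.array([[ 0.8,  0.1, 0.6],  # DD unit line-of-sight matrix (assumed geometry)
              [-0.7,  0.2, 0.7],
              [ 0.1,  0.9, 0.4],
              [ 0.5, -0.6, 0.6]])
D = np.eye(4) + np.ones((4, 4))   # DD covariance proportional matrix (assumed)
U = np.array([[1.0, 0.0, 0.0],    # LiDAR plane normal matrix (assumed, 3 planes)
              [0.0, 1.0, 0.0],
              [0.7, 0.7, 0.1]])
E = np.eye(3)

def float_ambiguity_cov(with_lidar: bool) -> np.ndarray:
    """Float ambiguity covariance following the two expressions above."""
    core = G.T @ np.linalg.inv(D) @ G
    if with_lidar:
        core = core + (sigma_P**2 / sigma_L**2) * (U.T @ np.linalg.inv(E) @ U)
    return (sigma_Phi**2 / wavelength**2) * (
        D + (sigma_P**2 / sigma_Phi**2) * G @ np.linalg.inv(core) @ G.T)

def ar_success_rate(P_dN: np.ndarray) -> float:
    """ADOP-based AR success rate with M - 1 = dim(P_dN) DD ambiguities."""
    n = P_dN.shape[0]
    adop = np.sqrt(np.linalg.det(P_dN)) ** (1.0 / n)
    return (2.0 * norm.cdf(1.0 / (2.0 * adop)) - 1.0) ** n

for flag, label in [(False, "RTK only"), (True, "with LiDAR")]:
    print(f"{label:10s} AR success rate = {ar_success_rate(float_ambiguity_cov(flag)):.4f}")
```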
Here, we also conducted a simulation of an urban canyon scenario, with GNSS noises slightly larger than those in open areas, to evaluate the improvement of the RTK AR success rate after integrating different numbers of LiDAR features with different observation accuracies. The simulated satellite geometric distribution is given in the dashed box of Figure 6, where, due to the blockage, almost all the satellites were located in the east-west direction, and the GNSS pseudorange and carrier phase noise coefficients $\sigma_P$ and $\sigma_\Phi$ were set as 0.5 m and 0.005 m, respectively. The normal vectors used in the simulation were the same as those in Table 1. Figure 6 gives the AR success rates with different numbers of features and different LiDAR measurement error Standard Deviations (STDs).
As shown in the figure, in the GNSS-challenged environment we set, RTK could hardly solve the AR problem in stand-alone mode. As the number of LiDAR features increased, the AR success rate increased, and the higher the LiDAR observation accuracy, the higher the AR success rate we could achieve.

6.3. Effect of the Parallel-Filter Scheme on the RTK AR Success Rate

Here, a qualitative analysis was performed for the RTK AR success rate of the parallel-filter scheme.
In our parallel-filter framework, the PF-based submodule provides a reference MMSE float ambiguity vector $\Delta\hat{N}_{MMSE}$ to the EKF-based submodule, and its accuracy determines the AR success rate. Although it is hard to obtain the covariance matrix with a PF, we can use the following equation to approximate the covariance matrix of $\Delta\hat{N}_{MMSE}$:
$$P_{\Delta\hat{N}_{MMSE}} = \sum_{i=1}^{N_{PF}} w_i\, \mathrm{diag}\!\left[\left(\Delta N_i - \Delta\hat{N}_{MMSE}\right)^2\right]$$
which actually reflects the convergence state of the PF.
As introduced previously, the state space of our PF lies in the position domain only, and the weight update process involves only the carrier phases and LiDAR features, which protects the filtering process from the effects of cycle slips and code multipath noise. Once the PF converges, the accuracy of $\Delta\hat{N}_{MMSE}$ is guaranteed, as is the AR success rate.
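A minimal sketch of this approximation is given below (the particle data and variable names are our own assumptions): given the particle ambiguity vectors and their normalized weights, the weighted squared deviations around the MMSE estimate form the diagonal covariance approximation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: N_PF particles, each carrying a float DD ambiguity vector
# (cycles) derived from its position hypothesis, plus a normalized weight.
N_PF, n_amb = 500, 4
particles_dN = rng.normal(loc=3.0, scale=0.2, size=(N_PF, n_amb))
weights = rng.random(N_PF)
weights /= weights.sum()

# MMSE float ambiguity estimate: weighted mean over the particles.
dN_mmse = weights @ particles_dN

# Diagonal covariance approximation: sum_i w_i * diag((dN_i - dN_mmse)^2);
# small diagonal values indicate that the PF has converged.
P_dN_mmse = np.diag(weights @ (particles_dN - dN_mmse) ** 2)
print("MMSE float ambiguities :", np.round(dN_mmse, 3))
print("approx. covariance diag:", np.round(np.diag(P_dN_mmse), 5))
```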

6.4. Effect of the Integration of LiDAR Features on RTK Positioning Accuracy

In addition, we derived the influence of the LiDAR integration on the RTK positioning results. Since the accuracies of the RTK fix and float results differ greatly, we derived the CRLBs for both types of results. As for the accuracy of the MMSE positioning results, it depends on the convergence status of the filter, whose evaluation equation was already addressed in (36), and thus, we do not discuss it in the following part.
In detail, without LiDAR integration:
$$P_{\hat{r}} = \sigma_P^2\left(G^T D^{-1} G\right)^{-1} \quad \text{(float)}$$
$$P_{r} = \frac{\sigma_\Phi^2}{1 + \sigma_\Phi^2/\sigma_P^2}\left(G^T D^{-1} G\right)^{-1} \quad \text{(fix)}$$
With LiDAR integration:
$$P_{\hat{r}} = \sigma_P^2\left(\frac{\sigma_P^2}{\sigma_L^2}\, U^T E^{-1} U + G^T D^{-1} G\right)^{-1} \quad \text{(float)}$$
$$P_{r} = \frac{\sigma_\Phi^2}{1 + \sigma_\Phi^2/\sigma_P^2}\left(\frac{1}{\sigma_L^2/\sigma_P^2 + \sigma_L^2/\sigma_\Phi^2}\, U^T E^{-1} U + G^T D^{-1} G\right)^{-1} \quad \text{(fix)}$$
From the above equations, we can see that, before the integration, the RTK float positioning accuracy is mainly dominated by the pseudorange accuracy, while the fix positioning accuracy mainly depends on the carrier phase accuracy. After the integration, since $\sigma_P^2 \geq \sigma_L^2 \gg \sigma_\Phi^2$, the integration of the LiDAR features contributes more to the float results, while having little influence on the fix ones, which are still dominated by the carrier phase accuracy.
Most importantly, the matrix $U$ represents the geometric distribution of the LiDAR features, which means that the LiDAR feature distribution also affects the positioning results. Normally, in the direction where the satellite signals are blocked, shelters such as buildings exist, which indicates a high probability of extracting LiDAR plane features from that direction to compensate for the poor geometric distribution of the GNSS satellites, thus improving the positioning accuracy in the occluded direction.
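The following sketch (again our own illustration with assumed geometry and noise values) evaluates the four covariance expressions above for an east-west-only satellite geometry and a single plane whose normal points toward the blocked direction, illustrating that the LiDAR feature mainly tightens the float solution in that direction while leaving the fix solution nearly unchanged:

```python
import numpy as np

sigma_P, sigma_Phi, sigma_L = 0.5, 0.005, 0.05   # assumed noise STDs (m)

# Assumed geometry: satellites almost only in the east-west plane, so the
# north direction is weakly observed by GNSS alone.
G = np.array([[ 0.9, 0.1, 0.42],
              [-0.8, 0.1, 0.59],
              [ 0.7, 0.2, 0.69],
              [-0.6, 0.1, 0.79]])
D = np.eye(4) + np.ones((4, 4))
U = np.array([[0.0, 1.0, 0.0]])   # one plane whose normal points north (assumed)
E = np.eye(1)

GtDG = G.T @ np.linalg.inv(D) @ G
UtEU = U.T @ np.linalg.inv(E) @ U

P_float_rtk   = sigma_P**2 * np.linalg.inv(GtDG)
P_float_lidar = sigma_P**2 * np.linalg.inv((sigma_P**2 / sigma_L**2) * UtEU + GtDG)
P_fix_rtk     = sigma_Phi**2 / (1 + sigma_Phi**2 / sigma_P**2) * np.linalg.inv(GtDG)
P_fix_lidar   = (sigma_Phi**2 / (1 + sigma_Phi**2 / sigma_P**2) *
                 np.linalg.inv(UtEU / (sigma_L**2 / sigma_P**2 + sigma_L**2 / sigma_Phi**2) + GtDG))

for name, P in [("float, RTK only  ", P_float_rtk), ("float, with LiDAR", P_float_lidar),
                ("fix,   RTK only  ", P_fix_rtk),   ("fix,   with LiDAR", P_fix_lidar)]:
    print(name, "STD E/N/U (m):", np.round(np.sqrt(np.diag(P)), 4))
```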

7. Experiment Results

We conducted both simulation and road test experiments to verify the performance of the proposed tightly coupled RTK/LiDAR/INS method with parallel filters in the ambiguity-position-joint domain (RLI_AP) by comparing it with stand-alone RTK (R), the tightly coupled RTK/INS method (RI), and the tightly coupled RTK/LiDAR/INS method with an EKF in the ambiguity domain only (RLI_A). Since the main concern of this article is to what degree the proposed method can improve the performance of RTK, the following discussion focuses on the analyses of the RTK outputs.

7.1. Simulation Experiments

A specially designed simulation platform based on MATLAB was established. On this platform, the measurements, including GNSS pseudoranges, carrier phases, the outputs of the IMU accelerometer and gyroscope, and LiDAR point clouds, can be generated according to the user's configuration of the building group environment, vehicle motion trajectory, sensor parameters, and satellite constellations. Compared with the road test, the advantage of the simulation platform is that the experimental conditions are configurable and the true values are available for result validation. The disadvantage is that the measurement generation model may deviate from the real situation.
In the following simulation experiment, the building group was set as Figure 7 shows, and the red point in the figure represents the vehicle position. As we see from the figure, due to the building blockage, the number of visible satellites declined greatly for the rover receiver.
The INS simulated in the experiment was a tactical-grade INS, and the LiDAR was a 16-beam 3D LiDAR, which provided sparse point clouds as measurements. The sensor parameters set for the INS and LiDAR are listed in Table 2 and Table 3, respectively. The main purpose of the simulation experiments was to validate the performance of the proposed method in different GNSS-challenged environments, specifically under different levels of GNSS measurement noise. The GNSS measurement noise STD models used on the simulation platform were elevation-related functions of the form:
$$\sigma_m^i = \sigma_m \sqrt{1 + \csc^2\left(el^i\right)}$$
where $\csc(\cdot)$ represents the cosecant function, $\sigma_m$ represents the pseudorange or carrier phase noise STD base ($m = P$ or $\Phi$), and $el^i$ represents the elevation of satellite $i$. We set three sets of STD parameters to simulate challenges at different levels, namely the simple, normal, and harsh levels, whose values can be found in Table 4.
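As a quick illustration, the sketch below generates elevation-dependent STDs for the three noise levels of Table 4, assuming the square-root form of the model given above (a sketch for illustration, not the platform's actual implementation):

```python
import numpy as np

def gnss_noise_std(sigma_base, elevation_deg):
    """Elevation-dependent noise STD: sigma_base * sqrt(1 + csc(el)^2)."""
    el = np.deg2rad(elevation_deg)
    return sigma_base * np.sqrt(1.0 + 1.0 / np.sin(el) ** 2)

# STD bases (pseudorange, carrier phase) of the three simulated levels, from Table 4
levels = {"simple": (0.5, 0.005), "normal": (1.5, 0.015), "harsh": (3.0, 0.03)}
for name, (sp, sphi) in levels.items():
    for el in (15, 45, 85):
        print(f"{name:6s} el = {el:2d} deg: sigma_P = {gnss_noise_std(sp, el):5.2f} m, "
              f"sigma_Phi = {gnss_noise_std(sphi, el) * 100:5.2f} cm")
```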
Table 4 also lists the AR success rates and positioning error STDs achieved by the different methods. From the table, we can see that, as the GNSS measurement noises increased, the AR success rates and positioning accuracies of stand-alone RTK deteriorated significantly. In the harshest condition, Test 3, even the INS could hardly maintain the performance of RTK. By contrast, with the tight integration of the LiDAR features, the proposed method could achieve a quite satisfactory RTK AR success rate even in the harshest condition and, thus, guarantee the whole system output with a positioning precision better than 10 cm.
Table 5 shows the average computation time per epoch cost by the different RTK methods in Test 3, where, for the methods integrated with LiDAR measurements, the time is further subdivided into the time cost by the LiDAR data processing and that cost by the RTK processing. Undoubtedly, as the price of the performance improvement, the processing of the LiDAR data greatly increased the calculation time of the whole integration algorithm, and the use of the particle filter further increased the computational burden, which implies delays in the RTK outputs. Therefore, the proposed method is more suitable for low-speed situations, or for use as a measure to maintain the global positioning precision of the integrated system rather than as a means of rapid output.
The performance of the original EKF-based method in the ambiguity domain was shown to be satisfactory in the simulation, and it seems that the parallel-filter method in the ambiguity-position-joint domain brought only a small improvement. On the simulation platform, however, only thermal noise with Gaussian models was simulated. In real scenarios, more challenges exist, such as cycle slips, signal attenuation, and the non-Gaussian characteristics of the multipath effect. All these challenges may deteriorate the performance of the EKF-based method in the ambiguity domain and are unavoidable in practice, but were not simulated in the above experiments. Therefore, to better compare the performances in real scenarios, we conducted a road test with the hardware platform, in which the advantage of the parallel-filter scheme in the ambiguity-position-joint domain was demonstrated.

7.2. Road Test Experiment

We established a hardware platform for road test verification. The on-board equipment included a NovAtel SPAN® SPAN-CPT GNSS/INS receiver, a Harxon® HX-CSX601A survey antenna, and a RoboSense® RS-LiDAR-16 LiDAR, and the RTK base station was set on the roof of a building with a Septentrio® mosaic-X5 GNSS receiver and a Harxon® HX-CSX601A survey antenna. Both GNSS receivers outputted raw data in the RTCM V3 format. The NovAtel SPAN® SPAN-CPT receiver of the on-board platform provided GNSS measurements of GPS and BDS-2 with four signals, namely L1C/A, L2C, B1I, and B2I. The Septentrio® mosaic-X5 receiver of the base station could provide measurements of almost all civil signals of GPS, Galileo, GLONASS, and BDS-2/BDS-3. The on-board hardware platform equipment, test environment, and trajectory, given by the RTK ground track results of the proposed method, are shown in Figure 8.
At the beginning, the vehicle ran on the road in a relatively open area, then entered a park with surrounding buildings, drove around it twice, left, and came back for a final round. The whole road test thus consisted of the relatively open road environment and the relatively harsh park area. The entrance area of the park was the most challenging area, where the number of visible satellites was extremely insufficient for about 40 s in each round. For the road area, the footbridge also brought the challenge of temporary satellite signal loss.
Although this paper mainly focuses on the positioning algorithm design for RTK positioning performance improvement and pays less attention to measurement outliers, in real GNSS-challenged environments with signal attenuation and a strong multipath effect, GNSS measurement outliers are non-negligible and more frequent than usual. Therefore, in the road test, before the data fusion, we conducted a simple outlier detection and exclusion based on the following pseudorange innovation:
$$v_P^{ij} = \Delta P_{rb}^{ij} - \Delta\rho_{rb,k|k-1}^{ij}$$
where $\Delta\rho_{rb,k|k-1}^{ij}$ is the DD geometric distance calculated with $\hat{r}_{k|k-1}^E$ obtained in (13). If $v_P^{ij}$ exceeded the threshold, which we set to an empirical value of 10 m in the road test, we judged that $\Delta P_{rb}^{ij}$ contained an outlier. As for the carrier phase measurements, since the unambiguous range of the carrier phases is only −0.5~0.5 cycles, an innovation-based judgment may cause a high false alarm rate for them, and thus, we simply excluded the carrier phase measurements whose corresponding pseudoranges were judged to be abnormal.
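A minimal sketch of this screening step is shown below (our own illustration; the data containers and satellite-pair keys are hypothetical placeholders, not the actual interfaces of our software):

```python
OUTLIER_THRESHOLD_M = 10.0   # empirical threshold used in the road test

def screen_dd_measurements(dd_pseudoranges, dd_carrier_phases, dd_predicted_ranges):
    """Drop DD pseudoranges whose innovation exceeds the threshold; the paired
    carrier phases of flagged satellite pairs are dropped as well (see text)."""
    kept_P, kept_Phi = {}, {}
    for sat_pair, dd_P in dd_pseudoranges.items():
        innovation = dd_P - dd_predicted_ranges[sat_pair]
        if abs(innovation) <= OUTLIER_THRESHOLD_M:
            kept_P[sat_pair] = dd_P
            if sat_pair in dd_carrier_phases:
                kept_Phi[sat_pair] = dd_carrier_phases[sat_pair]
    return kept_P, kept_Phi

# Hypothetical DD values keyed by (reference satellite, other satellite) pairs
dd_P   = {("G01", "G07"): 12.3, ("G01", "G11"): -31.8}
dd_Phi = {("G01", "G07"): 64.7, ("G01", "G11"): 21.2}
dd_rho = {("G01", "G07"): 11.9, ("G01", "G11"): -15.0}
print(screen_dd_measurements(dd_P, dd_Phi, dd_rho))
```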

7.2.1. Overall RTK Positioning Result Comparison

The RTK positioning results of the different methods in the east, north, and up directions are given in Figure 9, together with the numbers of visible satellites and feature planes used in the RLI_A and RLI_AP methods. We marked the challenging areas with different background colors and enlarged the results in the up direction for the whole test, as well as partial results in the east and north directions, for better comparison.
In the figure, large positioning biases can be observed for stand-alone RTK in the park area, as well as when the vehicle passed under the footbridge. This performance deterioration was mainly caused by the lack of visible satellites. Most of the large biases were reduced to some extent with the aid of INS integration; however, the precision was still not satisfactory in the areas with insufficient visible satellites, especially in the park entrance area, where meter-level positioning errors could still easily be observed in the up direction.
As we can see from the top subplot in Figure 9, in the park entrance area, although the number of visible satellites was insufficient, there were abundant plane features in view. The geometric distribution of GNSS was strengthened by the integration of the LiDAR features, so the LiDAR-feature-aided methods could achieve better performance in the challenging area. According to the figure, the parallel-filter-based one seemed to perform the best among all the methods.

7.2.2. Advancement of the Parallel-Filter Scheme

Although it is hard in a road test to obtain the true positioning values or to keep exactly the same route when passing the same area, we can still analyze the height positioning precision in certain areas whose height did not change dramatically. In addition, we can validate the positioning accuracy by observing the height trend and trajectory consistency of the areas that the vehicle passed repeatedly.
In the road test, the vehicle passed under the footbridge and through the park entrance area three times each, and the number of visible satellites in both areas was insufficient, with only 2~3 satellites in view, the difference between the two areas lying in the duration. Figure 10 provides enlarged views of the RTK positioning results of these areas in the up direction for the three passes, together with the 3D plots of the RLI_A and RLI_AP methods in the park area. For the 3D plots, the axis scale of the up direction is not equal to those of the other directions for ease of observation. As we can see, compared with the EKF method in the ambiguity domain, the parallel-filter method achieved higher stability and precision in the challenging environments with fewer abnormal errors. This performance improvement owes much to the PF in the position domain, which strengthened the resistance to anomalies such as cycle slips and sudden changes in visible satellites, as well as to the result assessment and selection submodule, which takes stability and precision into consideration when outputting.
Table 6 gives the fix rates in the park entrance area, the whole park area, and the whole road test, respectively. As can be seen from the table, the RTK/LiDAR/INS integrated algorithm with parallel filters in the ambiguity-position-joint domain achieved the highest fix rates among all the algorithms, and the final fix rate for the whole road test was 91.5%.

7.2.3. Analyses of the Convergence of the PF in the Positioning Domain

Finally, we discuss the convergence process of the PF, whose status, to a certain extent, determines the performance of the proposed method. Figure 11 shows the particles at the initial time and at the 45th second. During the first 45 s, the vehicle was stationary. As we can see, the PF converged very quickly with the LiDAR-feature-integrated MSR objective function, and the range of the search space was greatly reduced. Accordingly, the precision of the PF MMSE positioning results improved from 0.5 m to 0.05 m after convergence.
However, in the dynamic situation, the PF may not be able to maintain convergence all the time due to the uncertainty of the state transition model and changes in the observation conditions. We can utilize $\varepsilon_{\hat{r}_{k,MMSE}^E}$, which is calculated with (36), to evaluate the current convergence status of the PF, and it can also be used as an upper-limit precision indicator for the current output. The upper-limit precision indicators of our road test are provided in Figure 12. The estimated indicators were consistent with the results: except for the entrance area, most of them were below 0.3 m, and once the PF converged, the precision was better than 0.05 m.

8. Conclusions

To realize high-precision positioning in outdoor, large-scale, diversified environments without a prior map, we proposed a new RTK/LiDAR/INS integrated positioning algorithm. Since we still count on RTK to maintain the global positioning precision of the integrated system, the main focus of the proposed algorithm was to improve the performance of RTK in GNSS-challenged environments with the aid of the other sensors and a robust AR scheme. With this purpose in mind, we further explored the idea of our previous work, which utilized RTK results to register LiDAR features while integrating the measurements of the pre-registered LiDAR features with RTK. Specifically, we upgraded the method with innovative parallel filters in the ambiguity-position-joint domain to further weaken the effects of low satellite availability, cycle slips, and multipath.
The theoretical analyses, simulation, and road test experiment results indicated that the proposed method improves the RTK fix rate and output stability in GNSS-challenged environments and, thus, provides the whole system with more stable, high-precision global positioning results in diversified outdoor environments. However, as the price of the performance improvement, the proposed method requires more calculation time owing to the LiDAR data processing and the particle filter, and causes delays in the RTK result output. Therefore, the method is recommended more for low-speed applications or for use in delay-tolerant scenarios as a measure for maintaining the global precision of the whole system.
For a robust positioning system, fault detection and exclusion algorithms are also of great importance, among which Receiver Autonomous Integrity Monitoring (RAIM) offers the most rigorous implementation framework. Therefore, for our future work, the proposed RTK/LiDAR/INS integrated positioning system with fault detection and exclusion based on the RAIM framework will be studied and tested.

Author Contributions

Conceptualization, W.L. and X.C.; methodology, W.L.; software, W.L. and G.L.; validation, W.L. and G.L.; formal analysis, W.L., G.L. and X.C.; investigation, X.C.; resources, M.L.; data curation, G.L.; writing—original draft preparation, W.L.; writing—review and editing, X.C.; visualization, W.L.; supervision, X.C.; project administration, M.L. All authors read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Feng Guan and Xuming Yin for their help in setting up the on-board multisensor platform used in our road test.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Soloviev, A.; Miller, M.M. Navigation in difficult environments: Multi-sensor fusion techniques. In Sensors: Theory, Algorithms, and Applications; Boginski, V.L., Ed.; Springer: New York, NY, USA, 2012; pp. 199–229. [Google Scholar]
  2. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 July 2011; pp. 163–168. [Google Scholar]
  3. Seif, H.G.; Hu, X. Autonomous driving in the City—HD maps as a key challenge of the automotive industry. Engineering 2016, 2, 159–162. [Google Scholar] [CrossRef] [Green Version]
  4. Wan, G.; Yang, X.; Cai, R.; Li, H.; Zhou, Y.; Wang, H.; Song, S. Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4670–4677. [Google Scholar]
  5. Zhao, S.; Farrell, J.A. 2D LiDAR aided INS for vehicle positioning in urban environments. In Proceedings of the 2013 IEEE International Conference on Control Applications (CCA), Hyderabad, India, 28–30 August 2013; pp. 376–381. [Google Scholar]
  6. Meng, X.; Wang, H.; Liu, B. A robust vehicle localization approach based on GNSS/IMU/DMI/LiDAR sensor fusion for autonomous vehicles. Sensors 2017, 17, 2140–2158. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, J.; Singh, S. LOAM: LiDAR odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014. [Google Scholar]
  8. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
  9. Zhang, J.; Singh, S. Visual-LiDAR odometry and mapping: Low-drift, robust, and fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 26 May 2015; pp. 2174–2181. [Google Scholar]
  10. Graeter, J.; Wilczynski, A.; Lauer, M. Limo: LiDAR-monocular visual odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7872–7879. [Google Scholar]
  11. Shamsudin, A.U.; Ohno, K.; Hamada, R.; Kojima, S.; Westfechtel, T.; Suzuki, T.; Okada, Y.; Tadokoro, S.; Fujita, J.; Amano, H. Consistent map building in petrochemical complexes for firefighter robots using SLAM based on GPS and LiDAR. Robomech J. 2018, 5, 7–19. [Google Scholar] [CrossRef] [Green Version]
  12. Schütz, A.; Sánchez-Morales, D.E.; Pany, T. Precise positioning through a loosely-coupled sensor fusion of GNSS-RTK, INS and LiDAR for autonomous driving. In Proceedings of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 September 2020; pp. 219–225. [Google Scholar]
  13. Greenspan, R.L. GPS and inertial integration. In Global Positioning System: Theory and Applications; Parkinson, B.W., Enge, P., Axelrad, P., Eds.; American Institute of Aeronautics and Astronautics: Washington, DC, USA, 1996; pp. 187–218. [Google Scholar]
  14. Schreiber, M.; Königshof, H.; Hellmund, A.M.; Stiller, C. Vehicle localization with tightly coupled GNSS and visual odometry. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 9–12 June 2016; pp. 858–863. [Google Scholar]
  15. Chu, T.; Guo, N.; Backén, S.; Akos, D. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments. Sensors 2012, 12, 3162–3185. [Google Scholar] [CrossRef] [PubMed]
  16. Bai, X.; Wen, W.; Hsu, L.T. Using Sky-pointing fish-eye camera and LiDAR to aid GNSS single-point positioning in urban canyons. IET Intell. Transp. Syst. 2020, 14, 908–914. [Google Scholar] [CrossRef]
  17. Qifen, L.; Lun, A.; Junpeng, A.; Hsu, L.T.; Kamijo, S.; Gu, Y. Tightly coupled RTK/MIMU using single frequency BDS/GPS/QZSS receiver for automatic driving vehicle. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 January 2018; pp. 185–189. [Google Scholar]
  18. Scherzinger, B.M. Precise robust positioning with inertially aided RTK. Navigation 2006, 53, 73–83. [Google Scholar] [CrossRef]
  19. Li, W.; Cui, X.; Lu, M. Feature-Based Tightly-Integrated RTK/INS/LiDAR Fusion Positioning Algorithm in the ambiguity domain. In Proceedings of the China Satellite Navigation Conference, Sichuan, China, 23–25 November 2020; pp. 510–526. [Google Scholar]
  20. Li, W.; Cui, X.; Lu, M. High-precision positioning and mapping using feature-based RTK/LiDAR/INS integrated system for urban environments. In Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2020), online, 22–25 September 2020; pp. 2628–2640. [Google Scholar]
  21. Li, W.; Fan, P.; Cui, X.; Zhao, S.; Ma, T.; Lu, M. A Low-Cost INS-Integratable GNSS Ultra-Short Baseline Attitude Determination System. Sensors 2018, 18, 2114. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Shoemake, K. Animating rotation with quaternion curves. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, San Francisco, CA, USA, 25 July 1985; pp. 245–254. [Google Scholar]
  23. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-transform and extended RANSAC algorithms for automatic detection of 3D building roof planes from LiDAR data. In Proceedings of the ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, Espoo, Finland, 12–14 September 2007; pp. 407–412. [Google Scholar]
  24. Teunissen, P.J. GPS for Geodesy, 2nd ed.; Springer Science & Business Media: Berlin, Germany, 2012. [Google Scholar]
  25. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd ed.; Artech House: London, UK, 2013. [Google Scholar]
  26. Kay, S.M. Fundamentals of Statistical Signal Processing, 2nd ed.; Prentice Hall PTR: London, UK, 1993. [Google Scholar]
  27. De Jonge, P.; Tiberius, C.C.J.M. The LAMBDA method for integer ambiguity estimation: Implementation aspects. Publ. Delft Comput. Centre Lgr. Ser. 1996, 12, 1–47. [Google Scholar]
  28. Chang, X.; Yang, X.; Zhou, T. MLAMBDA: A modified LAMBDA method for integer least-squares estimation. J. Geodesy 2005, 79, 552–565. [Google Scholar] [CrossRef] [Green Version]
  29. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aero. El. Sys. Mag. 2010, 25, 53–82. [Google Scholar] [CrossRef] [Green Version]
  30. Verhagen, S.; Teunissen, P.J.G. The ratio test for future GNSS ambiguity resolution. GPS Solut. 2013, 17, 535–548. [Google Scholar] [CrossRef]
  31. Teunissen, P.J.G. ADOP based upper bounds for the bootstrapped and the least-squares ambiguity success rates. Artif. Satell. 2000, 35, 171–179. [Google Scholar]
Figure 1. Framework of the proposed RTK/LiDAR/INS integrated system.
Figure 2. Relationship between the plane footholds in the L frame and E frame.
Figure 3. Relationship among the carrier phases, ambiguities, and candidate position points.
Figure 4. Global parameter accuracy (acc.) simulation results of LiDAR plane features: (a) with RTK positioning accuracy of 0.05 m; (b) with RTK positioning accuracy of 0.3 m.
Figure 5. Observation accuracy (acc.) simulation results of LiDAR plane features: (a) with RTK positioning accuracy of 0.05 m; (b) with RTK positioning accuracy of 0.3 m.
Figure 6. AR success rate simulation with different number of features and LiDAR measurement error STDs.
Figure 7. A capture of the simulation platform’s building group environment and rover/base visible satellite skyplots.
Figure 8. On-board hardware platform equipment, environment, and trajectory (RTK ground track results of the proposed method) of the road test.
Figure 9. RTK positioning results of different methods.
Figure 10. Enlarged views of the RTK positioning results. (a) RTK results of the footbridge area in the up direction; (b) RTK results of the park entrance area in the up direction; (c) 3D plot of the RLI_A method in the park area; (d) 3D plot of the RLI_AP method in the park area.
Figure 11. Particles of the PF with the LiDAR-feature-integrated MSR objective function.
Figure 12. RLI_AP upper limit precision indicator results of the road test.
Table 1. Normal vectors of the simulation experiment (L frame).

               n_L^1   n_L^2   n_L^3   n_L^4   n_L^5   n_L^6
α_L^j (°)      36      130     222     321     45      10
β_L^j (°)      −5      −3      3       1       −45     −85
Table 2. INS Accelerometer (Accel.) and Gyroscope (Gyro.) parameters set in the simulation experiments.

Parameter Name                             Value
Update Rate                                100 Hz
Accel. Biases                              (900, 1300, 800) × 9.80665 × 10⁻⁶ m/s²
Gyro. Biases                               (9, 13, 8)/180 × π/3600 rad/s
Accel. Noise STD                           9.80665 × 10⁻⁴ m/s²
Gyro. Noise STD                            0.01/180 × π/60 rad/s
Accel. Scale Factor and Cross Coupling     [500 300 200; 150 600 250; 250 100 450] × 10⁻⁶
Gyro. Scale Factor and Cross Coupling      [400 300 250; 0 300 150; 0 0 350] × 10⁻⁶
Table 3. LiDAR parameters set in the simulation experiments.

Parameter Name               Value
Point Cloud Update Rate      10 Hz
Range Noise STD              2 cm
Angle Noise STD              0.005°
Horizontal Field of View     0~360°
Vertical Field of View       −15~15°
Maximum Distance             100 m
Horizontal Resolution        0.25°
Vertical Resolution          2°
Table 4. GNSS noise parameters and experiment results of Tests 1~3.

Test          GNSS Noise STD Base (m)    Method    AR Success    Positioning Error STD (m)
              σ_P       σ_Φ                        Rate (%)      E        N        U
1 (Simple)    0.5       0.005            R         91.5          0.168    0.077    0.229
                                         RI        94.0          0.094    0.070    0.214
                                         RLI_A     100           0.005    0.007    0.009
                                         RLI_AP    100           0.005    0.007    0.009
2 (Normal)    1.5       0.015            R         77.1          0.312    0.125    0.419
                                         RI        81.1          0.167    0.122    0.402
                                         RLI_A     100           0.006    0.007    0.013
                                         RLI_AP    100           0.006    0.007    0.013
3 (Harsh)     3.0       0.03             R         40.8          0.704    0.455    0.558
                                         RI        69.4          0.233    0.263    0.341
                                         RLI_A     94.5          0.025    0.029    0.058
                                         RLI_AP    96.5          0.024    0.027    0.055
Table 5. Average computation time cost by different RTK methods for each epoch in Test 3.

Method     Computation Time (ms)
R          5.0 (RTK process)
RI         4.5 (RTK process)
RLI_A      3.4 (RTK process) + 229.1 (LiDAR data process)
RLI_AP     87.4 (RTK process) + 229.1 (LiDAR data process)
Table 6. Fix rates of different methods.

Method     Entrance    Park     Whole Test
R          12.5%       43.9%    58.9%
RI         59.2%       45.5%    63.6%
RLI_A      69.2%       77.3%    78.0%
RLI_AP     79.2%       83.9%    91.5%
