Article

Lane Detection Aided Online Dead Reckoning for GNSS Denied Environments

1 Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
2 The Robotics Program, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
3 School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(20), 6805; https://doi.org/10.3390/s21206805
Submission received: 19 September 2021 / Revised: 6 October 2021 / Accepted: 9 October 2021 / Published: 13 October 2021
(This article belongs to the Special Issue Learning Technology Based on Navigation Sensors)

Abstract

With the emerging interest in autonomous vehicles (AV), the performance and reliability of land vehicle navigation are becoming increasingly important. In recent decades, the navigation system for passenger cars has relied heavily on the existing Global Navigation Satellite System (GNSS). However, there are many real-world driving situations in which the satellite signals are challenged, for example, urban streets with buildings, tunnels, or even underpasses. In this paper, we propose a novel method for simultaneous vehicle dead reckoning, based on a lane detection model, in GNSS-denied situations. The proposed method fuses an Inertial Navigation System (INS) with a learning-based lane detection model to estimate the global position of the vehicle, and effectively bounds the error drift compared to standalone INS. The integration of the INS and the lane model is accomplished with an unscented Kalman filter (UKF) to minimize linearization errors and computing time. The proposed method is evaluated through real-vehicle experiments on highway driving, and comparative discussions with other dead-reckoning algorithms using the same system configuration are presented.

1. Introduction

Precise positioning and localization techniques for modern land vehicles have been widely implemented for advanced driver assistance systems and autonomous driving capability. The Global Navigation Satellite System (GNSS) has been adopted as the primary option for obtaining the position and velocity of the vehicle. Since land vehicles are designed to be driven on the road, the positioning accuracy of GNSS can be complemented with road maps from a Geographic Information System (GIS) [1,2,3,4] for conventional navigation purposes, and with real-time kinematic (RTK) techniques [5,6], its positioning performance can be improved to centimeter-level accuracy.
Despite the outstanding accuracy and wide coverage of RTK GNSS, satellite signal outages and multipath errors in GNSS-denied areas, such as densely built cities, underpasses, or indoor areas, significantly threaten the reliability of GNSS measurements [7,8]. To overcome these environmental limitations, several alternative navigation methods based on other types of measurements have been introduced to ensure the consistency of positional information and improve the minimum performance under poor satellite signal conditions [9,10,11]. These methods, well known as dead reckoning (DR), are based on accumulating the relative changes in speed and direction from the latest known position.
The Inertial Navigation System (INS) has been commonly adopted to complement GNSS [12,13,14,15,16]. During periods when the GNSS signal is unavailable, the INS estimates the position, velocity and attitude by integrating inertial measurements, such as acceleration and angular rate. With the advancement of computing technology, visual sensors have also been used as positioning devices [17,18,19,20,21,22,23,24]. Modern processors allow real-time processing of high-resolution stereo images, from which the motion of the camera set can be computed directly, and machine learning can be used to estimate 3-axis motion from a monocular system. Recently, lidar-based localization methods have also been introduced to perform precise positioning against point cloud maps with sub-meter accuracy.
However, considering that GNSS is still regarded as the primary device for navigation systems, it is clear that these alternative positioning methods have their own limitations. The INS has been widely used in various fields, including military and aerospace technologies, where performance and reliability are top priorities. Although an INS can theoretically provide near-perfect motion estimation, in reality errors inevitably accumulate without external aiding due to imperfect sensor measurements. Visual odometry [25,26] and SLAM [27,28,29] estimate the ego-motion of the sensor by comparing the positional changes of the surrounding environment and reducing error accumulation using historical measurements. The main drawback of methods based on external sensing is that the result is easily affected by the condition of the surrounding environment. When the surrounding environment is not suitable for feature extraction and matching, for instance, in foggy or rainy weather, low light, or highly homogeneous scenes, DR based on environmental sensing easily fails.
On the other hand, applying these advanced positioning and localization techniques to mass-production vehicles is considered premature for several reasons. Currently, the mainstream environmental sensing equipment of consumer cars consists of monocular vision for lane detection, frontal radar for collision avoidance, and GNSS for the navigation system. It is known that a monocular vision system has scale ambiguity, which disturbs absolute motion estimation, and radar provides highly sparse feature points that can easily be lost. Moreover, global positioning methods based on map matching require a large amount of digital map data, and much work remains before high-definition map (HD map)-based localization can be deployed to the public.
In order to mitigate the shortcomings of the DR performance of monocular vision and inertial measurement, this research focuses on the lane detection results from the camera. Unlike feature extraction, learning-based lane detection gives highly consistent results for the same images. Recently, with remarkable advances in neural networks and artificial intelligence, learning-based lane detection models [30,31,32] have shown better robustness than conventional machine vision approaches in challenging situations, such as varying shadows and image occlusions by moving objects. Figure 1 presents the lane detection results from both feature-based and learning-based approaches. In real driving scenes such as highway driving, these challenges occur every day, and therefore, learning-based lane detection is widely adopted in production vehicles.
In this paper, we propose a DR method that uses robust lane detection results from a learning-based lane detection model [32]. As explained above, a standalone INS gradually drifts in the vehicle kinematic/dynamic state variables, e.g., the vehicle roll angle, the bank angle of the road surface and the vehicle heading angle. By using the robust lane detection results, these drifts are compensated and therefore regulated to much smaller magnitudes than with a standalone INS. Moreover, using lane detection results for correction shows higher accuracy and lower computational cost than state-of-the-art vision-based methods in real-world experiments.
We summarize the main contributions of our work as below:
  • We proposed a novel filter design that combines learning-based lane detection results with IMU mechanization for accurate vehicle localization in GNSS denied environments.
  • Accurate online vehicle localization was achieved for various road geometry and environment conditions, verifying the robustness of our proposed method.
The rest of the paper is organized as follows. In Section 2, the vehicle kinematics model and the observer model are introduced. In Section 3, the filter selection and implementation process are illustrated. In Section 4, the experiment scenarios, vehicle setup and the datasets from the experiment are explained. In Section 5, the result of lane detection-aided DR is presented and compared with other visual odometry-based localization algorithms. Finally, in Section 6, the conclusions of this research are presented.

2. System Modeling

In this section, the vehicle kinematics and the observer model design process are explained in detail. To design a kinematic model that operates inside the filter, we first need to consider the overall framework of our research. From Figure 2, we can see that, using IMU measurements and lane detection results, the system should output reliable vehicle localization data.
As shown in Figure 3, the result of vision-based lane detection can be degraded for various reasons, such as the motion of the vehicle, luminous intensity, or the shape and color of the lane lines. For the purpose of rejecting outliers in the lane detection results and securing consistent position estimation performance, a vehicle kinematics-based observer model is implemented within this general framework.

2.1. Vehicle Kinematics Model

Vehicle kinematics follow the process of INS mechanization and a total of 9 vehicle states are propagated. Vehicle states and inputs are shown below.
$X_{k-1} = \begin{bmatrix} x_{k-1} & y_{k-1} & z_{k-1} & v_{k-1}^{x} & v_{k-1}^{y} & v_{k-1}^{z} & \phi_{k-1} & \theta_{k-1} & \psi_{k-1} \end{bmatrix}^{T}$ (1)
$u_{k-1} = \begin{bmatrix} a_{k-1}^{x} & a_{k-1}^{y} & a_{k-1}^{z} & \omega_{k-1}^{x} & \omega_{k-1}^{y} & \omega_{k-1}^{z} \end{bmatrix}^{T}$ (2)
$\phi, \theta, \psi$ represent the Euler angles of the vehicle frame. At the initial step, we initialize all the states and the vehicle attitude matrix according to the IMU measurements. Suppose that the vehicle attitude matrix at timestep $(k-1)$ is $(C_{b}^{n})_{k-1}$, the skew-symmetric matrix computed from the angular increment $\omega_{k-1}^{3\times1}T$ is $S_{k-1}$, and its norm is $\|\omega_{k-1}^{3\times1}T\|$; then we can first update the vehicle attitude matrix using the angular velocity input and compute the vehicle acceleration in the navigation frame.
$\begin{bmatrix} (a_{k-1}^{x})^{n} \\ (a_{k-1}^{y})^{n} \\ (a_{k-1}^{z})^{n} \end{bmatrix} = (C_{b}^{n})_{k-1} \begin{bmatrix} a_{k-1}^{x} \\ a_{k-1}^{y} \\ a_{k-1}^{z} \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 9.8 \end{bmatrix}$ (3)
$S_{k-1} = \mathrm{skew}\left( \begin{bmatrix} \omega_{k-1}^{x}T & \omega_{k-1}^{y}T & \omega_{k-1}^{z}T \end{bmatrix} \right) = \mathrm{skew}\left( \omega_{k-1}^{3\times1}T \right)$ (4)
$(C_{b}^{n})_{k} = (C_{b}^{n})_{k-1} \left( I_{3\times3} + \frac{\sin\|\omega_{k-1}^{3\times1}T\|}{\|\omega_{k-1}^{3\times1}T\|} S_{k-1} + \frac{1-\cos\|\omega_{k-1}^{3\times1}T\|}{\|\omega_{k-1}^{3\times1}T\|^{2}} S_{k-1}^{2} \right)$ (5)
Here, $T$ is the timestep interval, set to 0.05 s (20 Hz) throughout the simulation. Using the updated vehicle attitude matrix and the acceleration data, we can propagate the updated Euler angles, velocity vector and position vector.
$x_{k} = x_{k-1} + v_{k-1}^{x}T + \frac{1}{2}(a_{k-1}^{x})^{n}T^{2}$ (6)
$y_{k} = y_{k-1} + v_{k-1}^{y}T + \frac{1}{2}(a_{k-1}^{y})^{n}T^{2}$ (7)
$z_{k} = z_{k-1} + v_{k-1}^{z}T + \frac{1}{2}(a_{k-1}^{z})^{n}T^{2}$ (8)
$v_{k}^{x} = v_{k-1}^{x} + (a_{k-1}^{x})^{n}T$ (9)
$v_{k}^{y} = v_{k-1}^{y} + (a_{k-1}^{y})^{n}T$ (10)
$v_{k}^{z} = v_{k-1}^{z} + (a_{k-1}^{z})^{n}T$ (11)
$\phi_{k} = \operatorname{atan2}\left( (C_{b}^{n})_{k}(2,2),\ (C_{b}^{n})_{k}(3,3) \right)$ (12)
$\theta_{k} = \arcsin\left( (C_{b}^{n})_{k}(1,3) \right)$ (13)
$\psi_{k} = \operatorname{atan2}\left( (C_{b}^{n})_{k}(1,2),\ (C_{b}^{n})_{k}(1,1) \right)$ (14)
Arranging the results above, the propagated vehicle states can be written as follows.
$X_{k} = \begin{bmatrix} x_{k} & y_{k} & z_{k} & v_{k}^{x} & v_{k}^{y} & v_{k}^{z} & \phi_{k} & \theta_{k} & \psi_{k} \end{bmatrix}^{T} = f(X_{k-1}, u_{k-1})$ (15)
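To make the propagation step concrete, the following is a minimal sketch of the state propagation $f(X_{k-1}, u_{k-1})$ in Equations (1)-(15), assuming an ENU navigation frame, a gravity magnitude of 9.8 m/s², and the 0.05 s timestep; the function and variable names are illustrative and not taken from the actual implementation.

```python
# Sketch of one INS mechanization step, following Equations (1)-(15); illustrative only.
import numpy as np

T = 0.05  # timestep interval (20 Hz)

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate(X, u, C_bn):
    """Propagate X = [x, y, z, vx, vy, vz, phi, theta, psi] with body-frame input
    u = [ax, ay, az, wx, wy, wz] and attitude matrix C_bn (= C_b^n at step k-1)."""
    a_b, w_b = u[:3], u[3:]

    # Attitude update with the Rodrigues formula, Equations (4)-(5)
    dtheta = w_b * T                       # angular increment over one timestep
    ang = np.linalg.norm(dtheta)
    S = skew(dtheta)
    if ang > 1e-12:
        dC = np.eye(3) + np.sin(ang) / ang * S + (1 - np.cos(ang)) / ang**2 * (S @ S)
    else:
        dC = np.eye(3) + S                 # small-angle limit
    C_bn_new = C_bn @ dC

    # Acceleration in the navigation frame, Equation (3); gravity assumed 9.8 m/s^2 along -z
    a_n = C_bn @ a_b - np.array([0.0, 0.0, 9.8])

    # Position and velocity propagation, Equations (6)-(11)
    pos = X[:3] + X[3:6] * T + 0.5 * a_n * T**2
    vel = X[3:6] + a_n * T

    # Euler angles extracted from the updated attitude matrix, Equations (12)-(14)
    phi = np.arctan2(C_bn_new[1, 1], C_bn_new[2, 2])
    theta = np.arcsin(C_bn_new[0, 2])
    psi = np.arctan2(C_bn_new[0, 1], C_bn_new[0, 0])

    return np.concatenate([pos, vel, [phi, theta, psi]]), C_bn_new
```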

2.2. Observer Model

In order to update the vehicle states by using lane detection results, we can first think of using the previous step lane geometry, as shown in Figure 4.
Considering the filter implementation in Section 3, the previous-step lane detection results and the previous-step vehicle position estimate are used to create the previewed lane geometry (previous sample points) at the $(k-1)$th step. After the IMU pre-integration introduced in Section 2.1, we can resample points on the previous lane geometry by linear interpolation. These can be compared with the actual measurements made at the $k$th step (current sample points) for vehicle position error compensation.
The actual implementation starts by creating the lane geometry information from the $(k-1)$th step updated vehicle position and the $(k-1)$th step lane detection results. Suppose that we are obtaining the global coordinates of the $n$th previewed left lane point, $(x_{n}^{l})_{k-1}, (y_{n}^{l})_{k-1}$. The coordinates can be computed as below.
$(x_{n}^{l})_{k-1} = x_{k-1} + 10n\cos(\psi_{k-1}) - (l_{n})_{k-1}\sin(\psi_{k-1})$ (16)
$(y_{n}^{l})_{k-1} = y_{k-1} + 10n\sin(\psi_{k-1}) + (l_{n})_{k-1}\cos(\psi_{k-1})$ (17)
$(l_{n})_{k-1}$ is the lateral distance, measured by the lane detection model, to the previewed left lane point $10n$ m ahead (longitudinally). These coordinates for all the previewed points form the previous sample points in Figure 4. Then, we convert the previous sample point coordinates from the global frame to the $k$th vehicle body frame (IMU pre-integrated). The frame transformation of the $n$th previewed left lane point can be done as follows.
$(\psi_{n}^{rel})_{k} = \psi_{k} - \operatorname{atan2}\left( (y_{n}^{l})_{k-1} - y_{k},\ (x_{n}^{l})_{k-1} - x_{k} \right)$ (18)
$(L_{n}^{l})_{k} = \sqrt{ \left( (x_{n}^{l})_{k-1} - x_{k} \right)^{2} + \left( (y_{n}^{l})_{k-1} - y_{k} \right)^{2} }$ (19)
$(x_{n}^{l})_{k-1}^{b} = (L_{n}^{l})_{k}\cos\left( (\psi_{n}^{rel})_{k} \right)$ (20)
$(y_{n}^{l})_{k-1}^{b} = (L_{n}^{l})_{k}\sin\left( (\psi_{n}^{rel})_{k} \right)$ (21)
$(\psi_{n}^{rel})_{k}$ in Equation (18) represents the relative angle of the previewed lane point measured from the vehicle body x axis. $(L_{n}^{l})_{k}$ in Equation (19) is the 2D Euclidean distance between the IMU pre-integrated vehicle position and the $n$th left lane point. The superscript $b$ in Equations (20) and (21) means that the quantities are expressed in the vehicle body frame. Note that the subscript of $(x_{n}^{l})_{k-1}^{b}$ in Equation (20) is $(k-1)$ because we are simply transforming $(x_{n}^{l})_{k-1}$, which is the x coordinate of the previous sample point.
For the measurement update, we can compare $(y_{n}^{l})_{k-1}^{b}$ with $(l_{n})_{k}$, which is the $k$th step lane detection result for the $n$th previewed left lane point. The IMU pre-integration error can be compensated through this step. Other than the lane information, we also use the vehicle longitudinal velocity in the measurement model.
$v_{k}^{b} = v_{k}^{x}\cos(\psi_{k}) + v_{k}^{y}\sin(\psi_{k})$ (22)
Combining the lateral distances of the previewed points ($n$ points each for the left and right lanes) and the vehicle longitudinal velocity, the measurement prediction matrix can be written as follows.
$Z_{k} = \begin{bmatrix} v_{k}^{b} & (y_{1}^{l})_{k-1}^{b} & (y_{1}^{r})_{k-1}^{b} & \cdots & (y_{n}^{l})_{k-1}^{b} & (y_{n}^{r})_{k-1}^{b} \end{bmatrix}^{T} = h(X_{k}, u_{k-1})$ (23)
With $n$ preview points for each lane, the size of the measurement prediction matrix is $\mathbb{R}^{(2n+1)\times 1}$. For the measurement update, we organize the actual measurement matrix as below.
$Y_{k} = \begin{bmatrix} v_{k}^{b,m} & (l_{1})_{k} & (r_{1})_{k} & \cdots & (l_{n})_{k} & (r_{n})_{k} \end{bmatrix}^{T}$ (24)
$v_{k}^{b,m}$ represents the longitudinal velocity actually measured by the IMU. Using $Z_{k}$ and $Y_{k}$, we can update the vehicle states in the measurement update step, introduced in the next section.
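As an illustration, the following sketch assembles the observer model of Equations (16)-(24): it places the previous-step lane points in the global frame, transforms them into the current (IMU pre-integrated) body frame, and stacks the predicted measurement $Z_k$. The function names and array layouts are assumptions for illustration only, not the actual implementation.

```python
# Sketch of the measurement prediction h(X_k, u_{k-1}) built from Equations (16)-(24).
import numpy as np

def lane_points_global(x, y, psi, lateral_offsets):
    """Equations (16)-(17): global coordinates of previewed lane points spaced 10 m apart,
    given the (k-1)th vehicle position (x, y), heading psi, and lateral offsets l_n."""
    pts = []
    for n, l_n in enumerate(lateral_offsets, start=1):
        px = x + 10 * n * np.cos(psi) - l_n * np.sin(psi)
        py = y + 10 * n * np.sin(psi) + l_n * np.cos(psi)
        pts.append((px, py))
    return np.array(pts)

def to_body_lateral(pts_global, x, y, psi):
    """Equations (18)-(21): lateral body-frame coordinate (y_n)^b of each previewed point,
    seen from the IMU pre-integrated pose (x, y, psi)."""
    dx = pts_global[:, 0] - x
    dy = pts_global[:, 1] - y
    psi_rel = psi - np.arctan2(dy, dx)
    L = np.hypot(dx, dy)
    return L * np.sin(psi_rel)

def predict_measurement(X, left_prev, right_prev, x_prev, y_prev, psi_prev):
    """Equation (23): Z_k = [v_b, (y_1^l)^b, (y_1^r)^b, ..., (y_n^l)^b, (y_n^r)^b]^T."""
    x, y, psi = X[0], X[1], X[8]
    v_b = X[3] * np.cos(psi) + X[4] * np.sin(psi)           # Equation (22)
    left_g = lane_points_global(x_prev, y_prev, psi_prev, left_prev)
    right_g = lane_points_global(x_prev, y_prev, psi_prev, right_prev)
    yl = to_body_lateral(left_g, x, y, psi)
    yr = to_body_lateral(right_g, x, y, psi)
    # Interleave left/right lateral distances after the longitudinal velocity
    return np.concatenate([[v_b], np.ravel(np.column_stack([yl, yr]))])
```

The resulting vector is compared element-wise with $Y_k$, whose lane entries are the current-step lane detection outputs $(l_n)_k, (r_n)_k$.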

3. Filter Design

3.1. Filter Selection and Framework

Nearly every vehicle localization problem is approached with a filter that fits the proposed prediction/observation model and the available data types. The most popular filters are the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the particle filter (PF), which show reliable performance for nonlinear or complex models.
The extended Kalman filter solves the nonlinear estimation problem by linearizing the state and/or measurement equations and applying the standard Kalman filter formulas to the resulting linear estimation problem. The linearization introduces approximation errors that the filter does not take into account in the prediction/update steps. Therefore, EKF error estimates tend to underestimate the state uncertainties. In comparison, the UKF picks so-called sigma point samples from the filtering distribution and propagates/updates them through the nonlinear state and measurement models. The resulting weighted set of sigma points represents the updated filtering distribution, which is then approximated as a moment-matched Gaussian distribution. These state estimates represent the state uncertainty better than those obtained from the EKF, at an increased computational cost. Similar to the UKF, the PF propagates particles, but the main difference is that the particles are selected in a probabilistic manner. Generally, the PF shows higher time complexity than the EKF and UKF, because many particles are needed to represent the entire nonlinear model.
Since one of our goals in this research is real-time vehicle localization, the PF is not an appropriate candidate for the filter design. Considering our system, in GNSS-denied situations with no precise map available, the only applicable measurement for the update step is the lane detection result. However, the output of the lane detection model has high uncertainty at far preview distances, which may lead to large error accumulation in the EKF update process. Eliminating these candidates, we adopt the UKF as our filter structure.
In the subsections below, a simple implementation of the UKF is illustrated in the same order as the flowchart shown in Figure 5. Note that the variables used in this section are slightly modified from those in Section 2, adopting the Kalman filter notation.

3.2. Prediction Step

Before entering the main filtering loop, the initialization of all the state variables is done using the GNSS/INS and vision data. Assuming that at least the initial conditions are very accurate, the variances of all the states in the covariance matrix were initially set to small values. Using the state variable format from Section 2, we can rewrite the state propagation equation in KF notation,
$u_{k-1} = \begin{bmatrix} a_{k-1}^{x} & a_{k-1}^{y} & a_{k-1}^{z} & \omega_{k-1}^{x} & \omega_{k-1}^{y} & \omega_{k-1}^{z} \end{bmatrix}^{T}$ (25)
$X_{k|k-1} = f(X_{k-1|k-1}, u_{k-1})$ (26)
where $f$ is the state propagation function introduced in Section 2.
Then, the measurement prediction step can also be rewritten as follows.
$Z_{k} = h(X_{k|k-1}, u_{k-1})$ (27)
For simplicity of explanation, extracting the sigma points and performing the unscented transform are not shown in Equations (26) and (27). Furthermore, the prediction step for the state covariance matrix is omitted. Detailed information about the implementation process is shown in Figure 5.

3.3. Update Step

In the update step, we compare the predicted measurement with the actual measurements. Referring to the observer design in Section 2.2, the state update can also be described in KF form.
$X_{k|k} = X_{k|k-1} + K_{k}(Y_{k} - Z_{k})$ (28)
The remaining filter implementation follows the flowchart in Figure 5. As the simulation loop continues, $X_{k|k}$ and $P_{k|k}$ are saved for the data analysis in Section 5.
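For reference, the following is a minimal sketch of one UKF predict/update cycle corresponding to Equations (25)-(28), using Merwe-scaled sigma points. It assumes state propagation f(x, u) and measurement prediction h(x, u) functions shaped like the sketches in Section 2 (with the attitude matrix folded into a closure), and the noise covariances Q, R and scaling parameters are placeholders rather than the values used in the experiments.

```python
# Sketch of a single UKF predict/update cycle (Equations (25)-(28)); illustrative only.
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Merwe-scaled sigma points and their mean/covariance weights."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    U = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + U.T, x - U.T])                  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(pts, wm, wc, noise):
    """Weighted mean and covariance of transformed sigma points plus additive noise."""
    mean = wm @ pts
    diff = pts - mean
    cov = diff.T @ (wc[:, None] * diff) + noise
    return mean, cov

def ukf_step(x, P, u, z, f, h, Q, R):
    # Prediction: propagate sigma points through f, Equation (26)
    pts, wm, wc = sigma_points(x, P)
    pts_f = np.array([f(p, u) for p in pts])
    x_pred, P_pred = unscented_transform(pts_f, wm, wc, Q)

    # Measurement prediction: propagate through h, Equation (27)
    pts_h = np.array([h(p, u) for p in pts_f])
    z_pred, S = unscented_transform(pts_h, wm, wc, R)

    # Cross covariance, Kalman gain and state update, Equation (28)
    Pxz = (pts_f - x_pred).T @ (wc[:, None] * (pts_h - z_pred))
    K = Pxz @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```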

4. Experiment

As mentioned in Section 1, our goal is to achieve accurate online vehicle localization in GNSS-denied situations. Therefore, we compare the results of our proposed model with the ground truth and with other state-of-the-art visual odometry-based methods to demonstrate its performance. The following sections describe the equipment used in the experiment, the geographical information of the test site, the lane detection model, and its results in detail.

4.1. Experiment Setup and Scenarios

In this research, we focus on outdoor, especially highway (i.e., challenging feature extraction) situations, because urban and indoor (e.g., parking lot) online vehicle localization can be achieved with high accuracy by existing visual odometry (VO) or SLAM methods. The experiment was carried out on a highway located in Daejeon, South Korea, and as shown in Figure 6a, the vehicle traveled approximately 52 km.
The test vehicle used for the research is a GENESIS G80 Sedan, as shown in Figure 6b, and the cameras used for forward view recording are FLIR BLACKFLY models. Two monocular cameras are attached to the vehicle, as shown in Figure 6c, to act as a stereo camera. In order to compare the proposed method with other VO methods, an industrial grade IMU, the Xsens MTi-670g, is also fastened to the stereo vision system and calibrated to the vehicle body coordinate frame [33,34]. Finally, the simulations were run on an Intel Core i5-4690 CPU @ 3.50 GHz with 16 GB of RAM.
To evaluate the performance of our proposed method in various situations, the total vehicle trajectory is sliced into specific scenarios. The scenarios are chosen mainly according to the lane geometry and the surrounding environment. The localization performance of our proposed method is illustrated for each scenario in Section 5. At the beginning of each scenario, we assume that a GNSS initialization is available. After the initialization, our proposed method and the other comparison methods are propagated without any GNSS update.

4.2. Lane Detection Model

In order to obtain lane fragments from the collected images, a CRNN-based lane detection model named "supercombo" is adopted [35], which is currently implemented in commercial aftermarket ADAS systems. The model takes as input two successive image frames and the output of its latest fully connected layer. The output of the model consists of four lane line candidates, two road boundaries for the left and right edges, a lead vehicle position estimate, and path planning results. In this research, we use only the two lane lines for the left and right lanes, since these two lane lines are also provided as the essential output by other types of lane detection methods. It is worth noting that the detected lane lines have a preview length of up to 100 m, while the estimation accuracy decreases as the preview length increases.
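For illustration, assuming each detected lane line is available as an array of lateral offsets sampled along the preview distance (the actual supercombo output format is not reproduced here), the offsets can be resampled at 10 m intervals to obtain the values $(l_n), (r_n)$ used by the observer model, as sketched below.

```python
# Hedged sketch: resample a lane line (lateral offset vs. preview distance) at 10 m steps.
import numpy as np

def resample_lane(preview_dist, lateral_offset, step=10.0, max_dist=100.0):
    """Linearly interpolate a lane line to fixed preview distances (10 m, 20 m, ..., 100 m)."""
    targets = np.arange(step, max_dist + step, step)
    return np.interp(targets, preview_dist, lateral_offset)

# Example with a synthetic left lane line (not real model output)
dist = np.linspace(0.0, 100.0, 33)        # assumed preview grid of the model
left = 1.8 + 0.002 * dist**1.5            # synthetic lateral offsets in meters
l_n = resample_lane(dist, left)           # l_1 ... l_10 at 10 m spacing
```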

4.3. Lane Detection Results

Before proceeding to the DR implementation, we perform a pre-test of the lane detection to validate its performance and reliability. Since the lane detection model is designed for a single-camera setup, the left camera of the stereo setup is used. The inference results of the lane detection model are presented in Figure 7, which shows the reprojected lane lines in global coordinates. The ground truth vehicle trajectory is obtained from an OxTS RT3100, a commercial INS for land vehicle testing and surveying.
Figure 7 shows lane points for three different time steps with a 70 m preview distance. Extending the preview distance up to 100 m and plotting the full simulation time of Scenario 1 (refer to Section 5.2), we obtain Figure 8. Due to the transformation error from image to real-world coordinates and image distortion at far preview distances, it is obvious that the lateral distance of the 0 m previewed lane point is much more trustworthy than that of the 100 m previewed lane point. As we can see in Figure 7 and Figure 8, farther previewed lane points show large deviations, especially on curvy road segments. However, this does not mean that the previewed lane point data should be discarded because of the high uncertainty. Although farther previewed lane points have larger position errors, they convey the curvature of the previewed lanes and restrain the kinematic/dynamic vehicle states from diverging. This is a trade-off problem, and it is discussed in detail in Section 5.6.1.
To sum up, the most accurate mapping possible from this dataset is obtained by merging all the 0 m previewed lane points. The ground truth for this research can be thought of as two parts: the accurate vehicle position measured by the RT3100, and the 0 m previewed lane points transformed into the global fixed coordinate frame.
In Section 5, the localization error is computed using the ground truth vehicle position obtained above. In addition to the Euclidean distance error, the heading angle difference is also considered in the analysis.
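As a small sketch of these metrics (array names are illustrative, and the longitudinal/lateral split is assumed to be taken along the ground-truth heading), the 2D Euclidean, longitudinal, and lateral RMSE against the RT3100 trajectory and the heading drift can be computed as follows.

```python
# Sketch of the evaluation metrics used in Section 5; illustrative only.
import numpy as np

def localization_errors(est_xy, est_psi, gt_xy, gt_psi):
    """Return 2D RMSE, longitudinal RMSE, lateral RMSE (m) and max heading drift (deg)."""
    err = est_xy - gt_xy                                   # (N, 2) position error
    rmse = np.sqrt(np.mean(np.sum(err**2, axis=1)))        # 2D Euclidean RMSE
    c, s = np.cos(gt_psi), np.sin(gt_psi)                  # project onto ground-truth heading
    long_err = err[:, 0] * c + err[:, 1] * s
    lat_err = -err[:, 0] * s + err[:, 1] * c
    d = est_psi - gt_psi
    heading_drift = np.degrees(np.arctan2(np.sin(d), np.cos(d)))   # wrapped to [-180, 180]
    return (rmse,
            np.sqrt(np.mean(long_err**2)),
            np.sqrt(np.mean(lat_err**2)),
            np.max(np.abs(heading_drift)))
```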

5. Results

5.1. Comparison Method: VO

In order to evaluate the dead-reckoning performance of the proposed method, state-of-the-art visual odometry methods are also implemented. We chose VINS [36,37,38,39], a top-ranked VO method in the KITTI benchmark, as the competing method, since VINS has been designed for various system configurations, such as monocular vision, stereo vision, visual-inertial fusion, and even vehicle model fusion. It is worth noting that, for a fair comparison, the intrinsic and extrinsic parameters of the cameras and IMU were pre-calibrated with the open-source visual-inertial calibration library kalibr [40]. Figure 9 shows the baseline of the stereo setup.
However, unlike indoor situations or urban driving scenes, the performance of VO turns out to be degraded in the highway environment. Figure 10 shows the feature matching and calculated optical flow for a given image sequence. Since the background scene is nearly homogeneous, a large portion of the features are extracted from surrounding vehicles. Moreover, the feature points on surrounding vehicles are relatively close; hence, their effect is emphasized in the pose estimation result, whereas learning-based lane line detection gives consistent results with or without surrounding vehicles.
To address the performance degradation caused by the homogeneous scenery, a direct approach, specifically direct sparse odometry (DSO) [41], which uses the photometric error rather than the matching of a selected set of feature points, was added to the comparison methods. The direct method shows more consistent ego-motion tracking performance. The sparse points from DSO also capture distinguishable characteristics in the middle of the road surface, while the feature points from VINS tend to be biased toward the corners of the images, as shown in Figure 11. However, under rapid changes in illuminance, such as direct sunlight toward the camera or insufficient light in tunnels, the direct method shows degraded performance, or occasionally fails.
Considering the drawbacks of the comparison methods, and to evaluate the localization performance of our proposed method under specific lane geometry conditions, we extracted four scenarios from the highway drive. The localization results for the scenarios are presented in the following subsections, and the overall analysis is given at the end of the section. For simplicity, in the RMSE comparison, VINS Stereo + IMU is written as VINS1, VINS Stereo as VINS2, and VINS Mono + IMU as VINS3.

5.2. Scenario 1: Initial Stage

The first scenario is the initial stage of the experiment, in which the vehicle passes the tollbooth and enters the highway. This scene was chosen to evaluate standard highway road geometry. As we can see from Figure 12, the ground truth lane does not have any extreme road geometry (high curvature or long straight path). The total travel distance and travel time of scenario 1 are approximately 992 m and 60 s, respectively. The localization comparison of the methods is shown in Figure 12, Figure 13 and Table 1.

5.3. Scenario 2: Straight Road

Scenario 2 represents the case of a long straight road. This is to evaluate and analyze the longitudinal error magnitude of our proposed method. The total travel distance and travel time of scenario 2 are approximately 4.6 km and 200 s, respectively. The localization comparison of the methods is shown in Figure 14, Figure 15 and Table 2. The VINS1 (VINS Stereo + IMU) method failed in scenario 2.

5.4. Scenario 3: Curved Road

Scenario 3 represents the case of curvy roads. A high-curvature trajectory was chosen from the ground truth data. The total travel distance and travel time of scenario 3 are approximately 1077 m and 60 s, respectively. The localization results are shown in Figure 16, Figure 17 and Table 3. Note that the VINS1 (VINS Stereo + IMU) localization result is close to the ground truth (marked yellow).

5.5. Scenario 4: Tunnels

As illustrated in Section 5.1, VO generally shows degraded performance in highway situations, and this is expected to be intensified in tunnels. In order to compare the localization performance of VO and the proposed method in environments where feature extraction is challenging, scenario 4 consists of three consecutive tunnels on the highway, as shown in Figure 18. The total travel distance and travel time of scenario 4 are approximately 5290 m and 225 s, respectively. The localization comparison of the methods is shown in Figure 19, Figure 20 and Table 4. In this scenario, the DSO algorithm failed.

5.6. Result Analysis

5.6.1. Localization Performance for Varying Preview Distances

From the four scenarios and their localization results in Table 1, Table 2, Table 3 and Table 4, we can observe that the localization performance of our proposed method is generally enhanced for longer preview distances. As shown in Figure 8, although farther previewed points have higher positional uncertainty, the vehicle localization is stabilized by introducing the forward lane geometry into the model update. Predicting the previewed point positions using the previous-step lane detection measurements and vehicle position estimate "pushes" or "pulls" the IMU-mechanized vehicle position toward the correct location.
However, naively increasing the preview distance is not the optimal path to accurate localization. The results in Table 1 and Table 4 show degraded localization performance beyond 40 m and 50 m preview distances, respectively. This is due to the inherent uncertainty of the lane detection results at far preview distances.
Therefore, we conclude that the optimal preview distance differs across the scenarios tested in this research, but the localization performance is generally enhanced for longer preview distances.

5.6.2. Longitudinal, Lateral Error and Heading Angle Drift of Proposed Method

It is intuitive that lane detection information helps vehicle localization in the lateral direction, but not in the longitudinal direction. Observing the localization results for scenarios 1 to 4, we can see that our method is sufficiently accurate in both the longitudinal and lateral directions. This is because the previewed road curvature information "attracts" the vehicle to the appropriate longitudinal position through the measurement prediction model in Section 2.2, compensating for the accumulated longitudinal error.
If the road has high curvature, as shown in Section 5.4, the longitudinal error is bounded with the help of the previewed lane geometry. On the other hand, for scenario 2 (Figure 15), the error keeps increasing because there is little feedback in the longitudinal direction on a long straight road section (low road curvature). However, considering that the longitudinal error reached only about 11 m after a 4.6 km drive, even small lane curvature feedback keeps the longitudinal divergence growing at a slow rate.
Other than the 2D Euclidean localization error, the vehicle heading angle drift should also be considered for the accuracy evaluation. For all the scenarios, we can see that the heading angle drift is regulated below 2 degrees in magnitude, even for long vehicle trajectories. Similar to the longitudinal error, the heading angle is bounded by the previewed lane geometry.

5.6.3. Comparison with Other Methods

As we can see from Figure 12, Figure 14, Figure 16 and Figure 19 and the RMSE comparison table for each scenario, our proposed method estimates the vehicle position much more accurately than the other VO and standalone INS methods.
Except for scenario 4, at least one VO method showed adequate localization performance in each scenario. In scenario 4, however, as mentioned in Section 5.5, the accuracy of the VO methods is severely degraded: DSO failed, VINS Stereo diverged completely from the ground truth, and the same holds for the remaining two methods. This is due to the moving and homogeneous features extracted in the three consecutive tunnels. Our proposed method, in contrast, uses the robust learning-based lane detection model, which means that the "features" used in the implementation (i.e., lane information) are consistent and very stable. Based on these lane detection results and the proposed model, we achieved accurate localization even in the tunnel scenario.

6. Conclusions

6.1. Overall Summary

This study proposed a novel lane detection-based online dead-reckoning method for GNSS-denied situations. Using IMU measurements and robust learning-based lane detection results as inputs to the system, the vehicle kinematics and observer models were designed. Vehicle position estimation was implemented using an unscented Kalman filter with the model structure in Section 2. For the various highway driving scenarios, the localization performance of our proposed method was evaluated by comparison with state-of-the-art VO methods and standalone INS results. Although positional drift was inevitable over long trajectories, the proposed method showed much better results than the comparison set by successfully restraining the diverging vehicle states with the previewed lane geometry. Moreover, it was verified that using previewed lane information up to a certain distance enhanced the vehicle localization accuracy, but performance degraded when lane detection results previewed too far ahead were used.

6.2. Future Research Direction

In this paper, we have implemented a vehicle localization method that fuses learning-based lane detection results with IMU mechanization. However, this method does not take into account the pitching and rolling motion of the vehicle during highway driving. Neglecting these additional vehicle states may have caused unwanted localization errors in the proposed model and filter design. For further research, the vehicle and lane kinematics models can be extended to 3D, considering the rolling and pitching motion of the vehicle, to enhance localization accuracy.
Moreover, together with a loop closure algorithm, the proposed method could be further improved to create an accurate digital lane map along the vehicle trajectories, and it is also expected to show enhanced performance when the lane lines are not presented continuously or change rapidly.

Author Contributions

This research was carried out in collaboration among all authors: J.J., Y.H., Y.J., S.P., I.S.K. and S.B.C. J.J. and Y.H. designed the proposed DR method. Y.H. and Y.J. performed the data acquisition and post-processing of the dataset. J.J. and Y.H. drafted and corrected the whole manuscript. All authors contributed to and improved the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the KAIST Institute of Technology Value Creation, Industry Liaison Center (G-CORE Project) grant funded by the Ministry of Science and ICT (N11210075).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the authors. The data are not publicly available due to privacy reasons.

Acknowledgments

The research was supported by 3SECONDZ Inc., which built the highway driving dataset and the data acquisition platform. Moreover, we are sincerely grateful to the anonymous reviewers for their constructive remarks, which improved our work and the presentation of the research results.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Fouque, C.; Bonnifait, P. Vehicle localization in urban canyons using geo-referenced data and few GNSS satellites. IFAC Proc. Vol. 2007, 40, 282–287.
2. Jabbour, M.; Cherfaoui, V.; Bonnifait, P. Management of Landmarks in a GIS for an Enhanced Localisation in Urban Areas. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Meguro-Ku, Japan, 13–15 June 2006; pp. 50–57.
3. Fouque, C.; Bonnifait, P. Tightly coupled GIS data in GNSS fix computation with integrity test. Int. J. Intell. Inf. Database Syst. 2008, 2, 167–186.
4. Ragothaman, S. Path Planning for Autonomous Ground Vehicles using GNSS and Cellular LTE Signal Reliability Maps and GIS 3-D Maps; University of California: Riverside, CA, USA, 2018.
5. Sun, Q.C.; Xia, J.C.; Foster, J.; Falkmer, T.; Lee, H. Pursuing Precise Vehicle Movement Trajectory in Urban Residential Area Using Multi-GNSS RTK Tracking. Transp. Res. Procedia 2017, 25, 2356–2372.
6. Feng, Y.; Wang, J. GPS RTK Performance Characteristics and Analysis. J. Glob. Position. Syst. 2008, 7.
7. Angrisano, A.; Gioia, C.; Gaglione, S.; Core, G. GNSS Reliability Testing in Signal-Degraded Scenario. Int. J. Navig. Obs. 2013, 2013, 12.
8. Cui, Y.; Ge, S.S. Autonomous vehicle positioning with GPS in urban canyon environments. IEEE Trans. Robot. Autom. 2003, 19, 15–25.
9. Rabiee, R.; Zhong, X.; Yan, Y.; Tay, W.P. LaIF: A Lane-Level Self-Positioning Scheme for Vehicles in GNSS-Denied Environments. IEEE Trans. Intell. Transp. Syst. 2019, 20, 2944–2961.
10. Rose, C.; Britt, J.; Allen, J.; Bevly, D. An Integrated Vehicle Navigation System Utilizing Lane-Detection and Lateral Position Estimation Systems in Difficult Environments for GPS. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2615–2629.
11. Schreiber, M.; Knöppel, C.; Franke, U. LaneLoc: Lane marking based localization using highly accurate maps. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013; pp. 449–454.
12. Angrisano, A. GNSS/INS Integration Methods. Ph.D. Thesis, "Parthenope" University of Naples, Naples, Italy, 2010.
13. Liu, H.; Nassar, S.; El-Sheimy, N. Two-Filter Smoothing for Accurate INS/GPS Land-Vehicle Navigation in Urban Centers. IEEE Trans. Veh. Technol. 2010, 59, 4256–4267.
14. Tang, Y.; Wu, Y.; Wu, M.; Wu, W.; Hu, X.; Shen, L. INS/GPS Integration: Global Observability Analysis. IEEE Trans. Veh. Technol. 2009, 58, 1129–1142.
15. Wu, Z.; Yao, M.; Ma, H.; Jia, W. Improving Accuracy of the Vehicle Attitude Estimation for Low-Cost INS/GPS Integration Aided by the GPS-Measured Course Angle. IEEE Trans. Intell. Transp. Syst. 2013, 14, 553–564.
16. Li, W.; Leung, H. Constrained unscented Kalman filter based fusion of GPS/INS/digital map for vehicle localization. In Proceedings of the 2003 IEEE International Conference on Intelligent Transportation Systems, Shanghai, China, 12–15 October 2003; Volume 2, pp. 1362–1367.
17. Miller, I.; Campbell, M.; Huttenlocher, D. Map-aided localization in sparse global positioning system environments using vision and particle filtering. J. Field Robot. 2011, 28, 619–643.
18. Levinson, J.; Montemerlo, M.; Thrun, S. Map-Based Precision Vehicle Localization in Urban Environments. In Robotics: Science and Systems 2007; MIT Press: Cambridge, MA, USA, 2008; Volume 3, pp. 121–128.
19. Gruyer, D.; Belaroussi, R.; Revilloud, M. Map-aided localization with lateral perception. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 674–680.
20. Lu, W.; Seignez, E.; Rodriguez, F.S.A.; Reynaud, R. Lane marking based vehicle localization using particle filter and multi-kernel estimation. In Proceedings of the 2014 13th International Conference on Control Automation Robotics Vision (ICARCV), Singapore, 10–12 December 2014; pp. 601–606.
21. Li, F.; Bonnifait, P.; Ibañez-Guzmán, J. Map-Aided Dead-Reckoning With Lane-Level Maps and Integrity Monitoring. IEEE Trans. Intell. Veh. 2018, 3, 81–91.
22. Andrade, D.C.; Bueno, F.; Franco, F.R.; Silva, R.A.; Neme, J.H.Z.; Margraf, E.; Omoto, W.T.; Farinelli, F.A.; Tusset, A.M.; Okida, S.; et al. A Novel Strategy for Road Lane Detection and Tracking Based on a Vehicle’s Forward Monocular Camera. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1497–1507.
23. Dewangan, D.K.; Sahu, S.P. Driving Behavior Analysis of Intelligent Vehicle System for Lane Detection Using Vision-Sensor. IEEE Sens. J. 2021, 21, 6367–6375.
24. Vivacqua, R.; Vassallo, R.; Martins, F. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application. Sensors 2017, 17, 2359.
25. Wei, L.; Cappelle, C.; Ruichek, Y.; Zann, F. Intelligent Vehicle Localization in Urban Environments Using EKF-based Visual Odometry and GPS Fusion. IFAC Proc. Vol. 2011, 44, 13776–13781.
26. Strydom, R.; Thurrowgood, S.; Srinivasan, M. Visual Odometry: Autonomous UAV Navigation using Optic Flow and Stereo. In Proceedings of the ICRA 2014, Hong Kong, China, 31 May–5 June 2014.
27. Wahab, A.A.; Khattab, A.; Fahmy, Y.A. Two-way TOA with limited dead reckoning for GPS-free vehicle localization using single RSU. In Proceedings of the 2013 13th International Conference on ITS Telecommunications (ITST), Tampere, Finland, 5–7 November 2013; pp. 244–249.
28. Maaref, M.; Khalife, J.; Kassas, Z.M. Lane-Level Localization and Mapping in GNSS-Challenged Environments by Fusing Lidar Data and Cellular Pseudoranges. IEEE Trans. Intell. Veh. 2019, 4, 73–89.
29. Chiang, K.W.; Tsai, G.J.; Chu, H.J.; El-Sheimy, N. Performance Enhancement of INS/GNSS/Refreshed-SLAM Integration for Acceptable Lane-Level Navigation Accuracy. IEEE Trans. Veh. Technol. 2020, 69, 2463–2476.
30. Kim, J.; Kim, J.; Jang, G.J.; Lee, M. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection. Neural Netw. 2017, 87, 109–121.
31. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust Lane Detection From Continuous Driving Scenes Using Deep Neural Networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54.
32. Lu, P.; Cui, C.; Xu, S.; Peng, H.; Wang, F. SUPER: A Novel Lane Detection System. IEEE Trans. Intell. Veh. 2020, 6, 583–593.
33. Hwang, Y.; Jeong, Y.; Kweon, I.; Choi, S. Online Misalignment Estimation of Strapdown Navigation for Land Vehicle Under Dynamic Condition. Int. J. Automot. Technol. 2021, 22, 1723–1733.
34. Hwang, Y. Identifying Vehicle Model Parameters Using Remote Mounted Motion Sensor. IEEE Sens. J. 2021.
35. Comma.ai, Inc. Openpilot. Available online: https://github.com/commaai/openpilot (accessed on 16 September 2021).
36. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
37. Qin, T.; Shen, S. Online Temporal Calibration for Monocular Visual-Inertial Systems. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 3662–3669.
38. Qin, T.; Pan, J.; Cao, S.; Shen, S. A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03638.
39. Qin, T.; Cao, S.; Pan, J.; Shen, S. A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03642.
40. Furgale, P.; Rehder, J.; Siegwart, R. Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1280–1286.
41. Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625.
Figure 1. Example of lane detection output differences between approaches. (a) Original Image, (b) Feature-based Lane Detection, (c) Learning-based Lane Detection.
Figure 2. Overall architecture of the lane-aided dead-reckoning system.
Figure 3. Potential error sources when using lane detection for vehicle localization: (a) Original road image in perspective view; (b) Blurred lane estimation accuracy along preview distance in global frame; (c) Effects of vehicle attitude and road inclination on the lane detection result; (d) Mismatched lane lines in successive frames.
Figure 4. Observer Model: Predicting the lateral distance to the previewed lane.
Figure 5. Simplified framework of the UKF method.
Figure 6. Test environment and experimental setup. The experiment was carried out in Daejeon, South Korea, with a stereo camera-equipped test vehicle (GENESIS G80 Sedan). (a) Experiment Trajectory, (b) GENESIS G80 Sedan, (c) Stereo Camera.
Figure 7. Lane detection results (0.5–7.5 s) with 70 m preview distance.
Figure 8. A lane detection result including up to 100 m previewed points, plotted with the vehicle position measured by the OxTS RT3100 (vehicle position marked blue). As shown in the figure, longer preview distances show large lateral deviations from the ground truth.
Figure 9. Extrinsic calibration result of the stereo vision system.
Figure 10. Disturbances on optical flow with moving traffic. (a) Matched Features, (b) Optical Flow, (c) Static Scene, (d) Moving Scene.
Figure 11. Point selection for VO in a homogeneous environment with moving traffic: (a) Selected points using feature matching. (b) Selected points using the direct method.
Figure 12. Scenario 1: Vehicle Localization with Various Methods.
Figure 13. Scenario 1 (40 m Preview): (a) Longitudinal Error, (b) Lateral Error, (c) Heading Angle Drift.
Figure 14. Scenario 2: Vehicle Localization with Various Methods.
Figure 15. Scenario 2 (90 m Preview): (a) Longitudinal Error, (b) Lateral Error, (c) Heading Angle Drift.
Figure 16. Scenario 3: Vehicle Localization with Various Methods.
Figure 17. Scenario 3 (90 m Preview): (a) Longitudinal Error, (b) Lateral Error, (c) Heading Angle Drift.
Figure 18. Scenario 4: GNSS signal outage in tunnels. Green denotes low dilution of precision, and red denotes high.
Figure 19. Scenario 4: Vehicle Localization with Various Methods.
Figure 20. Scenario 4 (50 m Preview): (a) Longitudinal Error, (b) Lateral Error, (c) Heading Angle Drift.
Table 1. Scenario 1 Localization Results (Trajectory Length: 992 m).
| Dataset | 10 m | 20 m | 30 m | 40 m | 50 m | 60 m | 70 m | 80 m | 90 m | INS | DSO | VINS1 | VINS2 | VINS3 |
| RMSE (m) | 14.56 | 15.79 | 8.09 | 5.06 | 7.34 | 9.32 | 9.31 | 9.75 | 9.83 | 41.11 | 48.66 | 132.4 | 55.89 | 456.0 |
| RMSE Lat (m) | 4.26 | 12.22 | 7.15 | 3.52 | 5.06 | 6.61 | 6.62 | 6.98 | 7.05 | 37.13 | 17.04 | 82.14 | 50.51 | 216.4 |
| RMSE Long (m) | 13.92 | 10.00 | 3.78 | 3.63 | 5.32 | 6.57 | 6.55 | 6.81 | 6.86 | 17.63 | 45.58 | 103.9 | 23.95 | 401.4 |
| Max Error (m) | 24.87 | 35.31 | 19.65 | 6.84 | 10.19 | 14.75 | 14.89 | 15.98 | 16.21 | 111.3 | 62.82 | 230.5 | 98.18 | 869.2 |
Table 2. Scenario 2 Localization Results (Trajectory Length: 4628 m). VINS1 failed.
| Dataset | 10 m | 20 m | 30 m | 40 m | 50 m | 60 m | 70 m | 80 m | 90 m | INS | DSO | VINS1 | VINS2 | VINS3 |
| RMSE (m) | 447.9 | 161.6 | 62.26 | 22.24 | 12.56 | 9.05 | 8.89 | 8.60 | 8.56 | 1175 | 334.5 | x | 1315 | 342.4 |
| RMSE Lat (m) | 423.7 | 156.7 | 59.93 | 20.28 | 10.38 | 6.86 | 6.40 | 6.07 | 6.04 | 1161 | 93.63 | x | 1214 | 182.5 |
| RMSE Long (m) | 145.3 | 39.61 | 6.86 | 9.13 | 7.07 | 6.28 | 6.16 | 6.08 | 6.07 | 166.9 | 321.1 | x | 465.6 | 289.6 |
| Max Error (m) | 771.2 | 339.2 | 127.6 | 38.05 | 19.01 | 13.58 | 12.80 | 12.71 | 12.77 | 2984 | 490.7 | x | 1981 | 650.5 |
Table 3. Scenario 3 Localization Results (Trajectory Length: 1077 m).
| Dataset | 10 m | 20 m | 30 m | 40 m | 50 m | 60 m | 70 m | 80 m | 90 m | INS | DSO | VINS1 | VINS2 | VINS3 |
| RMSE (m) | 25.94 | 13.69 | 7.88 | 4.57 | 4.13 | 4.03 | 3.91 | 3.84 | 3.81 | 13.57 | 284.9 | 28.02 | 65.93 | 428.3 |
| RMSE Lat (m) | 23.80 | 13.08 | 6.00 | 2.57 | 2.68 | 3.08 | 3.01 | 3.05 | 3.05 | 12.85 | 130.9 | 11.33 | 31.81 | 379.4 |
| RMSE Long (m) | 10.32 | 4.03 | 5.09 | 3.78 | 3.15 | 2.61 | 2.49 | 2.33 | 2.28 | 4.36 | 253.0 | 25.62 | 57.75 | 198.6 |
| Max Error (m) | 67.87 | 37.57 | 19.32 | 7.95 | 6.46 | 6.45 | 6.24 | 6.12 | 6.07 | 35.5 | 723.2 | 61.04 | 196.5 | 632.5 |
Table 4. Scenario 4 Localization Results (Trajectory Length: 5290 m).
| Dataset | 10 m | 20 m | 30 m | 40 m | 50 m | 60 m | 70 m | 80 m | 90 m | INS | DSO | VINS1 | VINS2 | VINS3 |
| RMSE (m) | 753.5 | 152.1 | 46.21 | 10.92 | 5.12 | 5.26 | 5.43 | 5.61 | 5.66 | 990.6 | x | 1146 | 3914 | 1489 |
| RMSE Lat (m) | 695.0 | 146.9 | 44.67 | 10.32 | 4.31 | 4.44 | 4.63 | 4.85 | 4.92 | 950.8 | x | 1111 | 3647 | 313.0 |
| RMSE Long (m) | 291.2 | 39.32 | 11.84 | 3.57 | 2.78 | 2.82 | 2.83 | 2.82 | 2.80 | 277.9 | x | 280.8 | 1422 | 1422 |
| Max Error (m) | 1576 | 337.5 | 100.5 | 17.65 | 8.20 | 9.79 | 10.70 | 10.93 | 10.81 | 2088 | x | 2583 | 6946 | 2429 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
