Article

GNSS Urban Positioning with Vision-Aided NLOS Identification

Hexiong Yao, Zhiqiang Dai, Weixiang Chen, Ting Xie and Xiangwei Zhu

1 School of Electronic and Communication Engineering, Sun Yat-Sen University, Shenzhen 518107, China
2 Shenzhen Key Laboratory of Navigation and Communication Integration, Shenzhen 518107, China
3 The Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(21), 5493; https://doi.org/10.3390/rs14215493
Submission received: 1 October 2022 / Revised: 26 October 2022 / Accepted: 26 October 2022 / Published: 31 October 2022
(This article belongs to the Special Issue Remote Sensing in Navigation: State-of-the-Art)

Abstract

The global navigation satellite system (GNSS) plays an important role in a broad range of consumer and industrial applications. In particular, cities have become major GNSS application scenarios; however, GNSS signals suffer from blocking, reflection and attenuation in harsh urban environments, resulting in diverse received signals, e.g., non-line-of-sight (NLOS) and multipath signals. NLOS signals often cause severe deterioration in positioning, navigation, and timing (PNT) solutions and should therefore be identified and excluded. In this paper, we propose a vision-aided NLOS identification method to augment GNSS urban positioning. A skyward omnidirectional camera is installed on a GNSS antenna to collect omnidirectional images of the sky region. After being rectified, these images are processed for sky region segmentation, which is improved by leveraging gradient information and energy function optimization. Image morphology processing is further employed to smooth slender boundaries. After sky region segmentation, the satellites are projected onto the omnidirectional image, from which NLOS satellites are identified. Finally, the identified NLOS satellites are excluded from GNSS PNT estimation, improving accuracy and stability. Practical test results show that the proposed sky region segmentation module achieves over 96% accuracy and that completely accurate NLOS identification is achieved for the experimental images. We validate the performance of our method on public datasets. Compared with the raw measurements without screening, the vision-aided NLOS identification method enables improvements of 60.3%, 12.4% and 63.3% in the E, N, and U directions, respectively, as well as an improvement of 58.5% in 3D accuracy.


1. Introduction

The global navigation satellite system (GNSS) provides convenient and efficient positioning services and has become an essential part of a wide variety of products [1]. With the development of the GNSS, the requirements for positioning services in urban environments have increased, presenting steep challenges [2]. GNSS signals are electromagnetic waves, which are reflected by glass or metal surfaces and blocked by road bridges and tall buildings. Thus, pseudorange errors are introduced by both the multipath effect and non-line-of-sight (NLOS) reception. Multipath reception includes both line-of-sight (LOS) and reflected/diffracted signals, whereas NLOS reception includes only reflected/diffracted signals. The pseudorange observation error caused by the multipath effect is generally at the meter level, and the corresponding carrier phase observation error is at the centimeter level. NLOS reception, by contrast, can lead to positioning errors of more than 100 m in deep urban canyons [3]. Clearly, the high-precision positioning requirements of applications such as unmanned driving and pedestrian navigation often cannot be met in complex urban environments. To improve the accuracy of GNSS positioning, NLOS signals should be effectively excluded. Given that satellite navigation remains dominant for vehicle and pedestrian navigation, several promising solutions for NLOS elimination have been proposed by scholars and engineers.
Detecting NLOS signals based on raw GNSS measurements is straightforward. Ref. [4] proposed a method to mitigate the multipath effect and select the proper signals based on signal-to-noise ratio (SNR) measurements. Ref. [5] presented an improved algorithm to identify and discard measurements that are corrupted by multipath and NLOS errors. In addition, determining whether a GNSS measurement is an NLOS or LOS signal can be regarded as a typical classification problem; hence, machine learning methods are popular for NLOS signal detection. Ref. [6] developed a LOS/NLOS classifier based on a support vector machine (SVM), which makes full use of the satellites’ raw measurements, such as the SNR, normalized pseudorange residual, and elevation angle, to assess satellite visibility. Shadow matching was integrated with the proposed SVM LOS/NLOS classifier to eliminate NLOS signals and improve the robustness and accuracy of single point positioning (SPP). However, in complex urban areas, methods based on raw GNSS measurements do not significantly improve the positioning accuracy. Moreover, machine learning methods require a large number of training samples, and the generalization ability of the resulting model is also a concern. In addition to NLOS detection methods based on raw GNSS measurements, some scholars have proposed light detection and ranging (LiDAR)-based methods. LiDAR-based three-dimensional reconstruction and mapping is an environmental perception strategy; accordingly, if prior information about the space in which a vehicle drives can be collected, NLOS signals can be effectively rejected through satellite visibility analysis [7], or raw measurements can be used to correct the NLOS delay in the pseudorange model [8]. However, accurate 3D building models are expensive to establish and maintain, and their use places high demands on the computing hardware. Furthermore, an accurate 3D model must be created in advance, and tall moving objects that are not among the buildings modeled in the 3D city map can also cause NLOS reception [9]. Therefore, accuracy and timeliness limit the popularity of this technology to a certain extent [10].
Compared with LiDAR-based NLOS detection, a vision-aided NLOS elimination method has a lower sensor cost. A vision-aided method uses a skyward camera to detect the visible sky and the other objects present in the field of view. The satellites are then projected onto the processed image to identify whether their signals can be received directly (in a sky region) or not (in a non-sky region). A key problem in vision-aided methods is the segmentation of the sky and non-sky regions of the images captured by the camera. Sets of unsupervised and supervised clustering algorithms have been compared to determine the best sky region classifier in terms of classification rate and processing time [11]. The method of [12] first divides the image into small regions; the feature points in the image are then tracked, and the distance traveled in the image is computed to identify the small clusters that correspond to obstacles and extract the sky regions. In addition, methods based on the hue–saturation–value (HSV) color model [13] and deep learning [14] are commonly used to identify sky regions. However, these methods cannot handle the reflections on the upper floors and surfaces of buildings in urban environments, as shown in Figure 1, leading to the misidentification of sky regions. In terms of positioning results, both [14,15] conducted experiments only in static scenes, which cannot adequately represent the conditions encountered when driving a vehicle in an urban environment.
In this paper, we propose a vision-aided method to exclude the NLOS signals caused by tall buildings in urban scenarios. Specifically, images taken by a skyward omnidirectional camera are used for sky region segmentation. Sky regions are detected by leveraging gradient information and energy function optimization. For the slender sky boundaries caused by strong light, we utilize an image morphology processing method to restore the misdetected sky regions. Then, we project the satellites onto the omnidirectional image after sky region segmentation to determine which satellites are located in visible sky regions and which are hidden behind buildings and thus produce NLOS signals. Finally, the GNSS positioning result is calculated based on the remaining visible satellites. The proposed method significantly improves positioning accuracy in urban environments with the aid of a low-cost sensor, and the computation is simple. It does not require a large number of samples for training, nor does the generalization ability of a model need to be considered.
The remainder of this paper is structured as follows. Section 2.1 provides a detailed description of the sky region segmentation algorithm. Section 2.2 discusses how NLOS signals are rejected. In Section 2.3, a code-based kinematic positioning algorithm is implemented with an extended Kalman filter for the estimation of positioning. In Section 3, we evaluate the effectiveness of the proposed method by means of experiments. Finally, Section 4 summarizes the work by presenting several conclusions and provides an outlook on future research.

2. Methodology

The vision-aided GNSS urban positioning system proposed in this paper is composed of three sensors: a GNSS receiver, an inertial measurement unit (IMU), and an omnidirectional camera. The images captured by the omnidirectional camera are first rectified, and then sky region segmentation is performed using an improved method based on gradient information and energy function optimization, which can efficiently extract the sky regions in urban canyons, under flyovers and in other challenging environments. This approach differs from the sky region segmentation methods mentioned above in that it can correctly handle the reflections generated by the surfaces and windows of tall buildings. The method in [16] is therefore better suited to the characteristics of rectified omnidirectional images and performs better in resolving reflections from building surfaces in urban environments; in our work, we build on [16] with improvements for the cases in which buildings, viaducts and flyovers block the camera's field of view. After sky region segmentation has been completed, the omnidirectional image is recovered in accordance with the remapping parameters. The IMU is used to obtain the angle between the system and the true north direction to align the satellite skyplot with the omnidirectional image. The imaging principle of the omnidirectional camera is used to project the satellites onto the omnidirectional image after sky region segmentation for visibility analysis. Finally, the global position of the system is obtained using code-based kinematic positioning after the elimination of the identified NLOS satellites. A flow chart of the system is shown in Figure 2.

2.1. Sky Region Segmentation

The sky region segmentation module proposed in this paper first rectifies the input omnidirectional image. Sky region segmentation is then performed on the rectified image using an improved method based on gradient information and energy function optimization [16]. Once segmentation is complete, the processed images are converted back to omnidirectional images using the remapping parameters. Then, image morphology processing is applied to the resulting slender boundaries. The whole process is shown in Figure 3.

2.1.1. Omnidirectional Image Rectification

For vision-aided NLOS identification, a suitable camera must be selected. Fine image detail is not what we are seeking; instead, the requirement is mainly the field of view. The simplest choice is a normal camera, but its field of view is limited, while a wide-angle camera improves the field of view only slightly at the cost of inevitable distortion. Therefore, an omnidirectional camera can be considered the best selection because of its affordability, absence of blind spots and storage efficiency.
The first step after the camera captures an image is to rectify the omnidirectional image. Omnidirectional images exhibit very large distortions, so they are not directly compatible with the sky region segmentation algorithm proposed in this paper. To rectify omnidirectional images, the imaging principle of the omnidirectional camera must be specified. An imaging system with a single effective viewpoint is called a central projection system; a conventional perspective camera is an example. Most commercial cameras can be described as pinhole cameras, modeled by the well-known perspective projection. However, to widen the field of view, an omnidirectional camera combines mirrors (such as parabolic, hyperbolic, or elliptical mirrors) with a standard perspective camera to provide a 360-degree field of view in the horizontal plane. A unified projection model for a central catadioptric camera was proposed in [17,18]: all central omnidirectional cameras are isomorphic to a projective mapping from a sphere to a plane, with the projection center on the axis perpendicular to that plane. In other words, a point in three-dimensional space is first projected onto a unit sphere and then projected onto the imaging plane of a pinhole camera whose projection center lies at a distance ξ from the center of the unit sphere. Thus, the projection model of an omnidirectional camera [19] can be represented as shown in Figure 4.
A point in three-dimensional space $\chi_{O_s} = (X, Y, Z)^T$ (where the subscript $O_s$ denotes the unit sphere coordinate system with $O_s$ as the coordinate origin) is projected onto the unit sphere to obtain the point denoted by $\chi_s$:
$$\chi_s = \frac{\chi}{\lVert \chi \rVert} = (X_s, Y_s, Z_s)^T \tag{1}$$
Then, $\chi_s$ on the unit sphere is transformed into the imaging plane coordinate system, with $O_p$ as the coordinate origin:
$$\chi_s = (X_s, Y_s, Z_s + \xi)^T \tag{2}$$
Subsequently, $\chi_s$ is projected onto the normalized imaging plane $\pi_u$ to obtain the point $m_u$:
$$m_u = \left(\frac{X_s}{Z_s + \xi}, \frac{Y_s}{Z_s + \xi}, 1\right)^T = (x_u, y_u, 1)^T = \Gamma(\chi_s) \tag{3}$$
Taking into account the radial and tangential distortion in the projection process, the point $m_d = (x_d, y_d, 1)^T$ is obtained after the distortion of the point $m_u$:
$$\begin{aligned} x_d &= x_u\left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_1 x_u y_u + p_2\left(r^2 + 2 x_u^2\right) \\ y_d &= y_u\left(1 + k_1 r^2 + k_2 r^4\right) + p_1\left(r^2 + 2 y_u^2\right) + 2 p_2 x_u y_u \end{aligned} \tag{4}$$
where $r^2 = x_u^2 + y_u^2$, $k_1$ and $k_2$ are the radial distortion coefficients, and $p_1$ and $p_2$ are the tangential distortion coefficients. The distortion matrix $D$ can be expressed as $D = (k_1, k_2, p_1, p_2)$.
$m_d$ is transformed into the pixel plane $\pi_p$ using the camera intrinsic matrix $K$ to obtain the point $p$:
$$p = (u_p, v_p, 1)^T = K m_d \tag{5}$$
$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{6}$$
The above is the image formation procedure of an omnidirectional camera under the unified projection model. To rectify an omnidirectional image into a normal perspective image, it is necessary to invert the projection and recover $\chi_s$ from $m_u$:
$$\Gamma^{-1}(m_u) = \chi_s = \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} = \begin{bmatrix} \dfrac{\xi + \sqrt{1 + (1-\xi^2)\left(x_u^2 + y_u^2\right)}}{x_u^2 + y_u^2 + 1}\, x_u \\[2mm] \dfrac{\xi + \sqrt{1 + (1-\xi^2)\left(x_u^2 + y_u^2\right)}}{x_u^2 + y_u^2 + 1}\, y_u \\[2mm] \dfrac{\xi + \sqrt{1 + (1-\xi^2)\left(x_u^2 + y_u^2\right)}}{x_u^2 + y_u^2 + 1} - \xi \end{bmatrix} \tag{7}$$
The rectified coordinates can be obtained by projecting $\chi_s$ onto the normalized imaging plane:
$$\begin{bmatrix} \dfrac{X_s}{Z_s+\xi} \\[1mm] \dfrac{Y_s}{Z_s+\xi} \\[1mm] \dfrac{Z_s}{Z_s+\xi} \end{bmatrix} = \begin{bmatrix} x_u \\ y_u \\ 1 - \dfrac{\xi\left(x_u^2 + y_u^2 + 1\right)}{\xi + \sqrt{1 + (1-\xi^2)\left(x_u^2 + y_u^2\right)}} \end{bmatrix} \tag{8}$$
The rectification process outputs the remapping parameters of the prerectification and postrectification images. Using these remapping parameters, the rectified image of the omnidirectional image can be obtained. The result of the omnidirectional image correction is shown in Figure 5. Similarly, we can transform the image back into an omnidirectional image after sky region detection.
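In practice, Equations (1)–(8) amount to a pixel remapping between the rectified perspective view and the omnidirectional image. The following is a minimal Python/OpenCV sketch of that remapping, assuming the calibrated unified-model parameters ξ, K, and D are available and that a pinhole matrix K_new has been chosen for the rectified view; the function name and parameter choices are illustrative and not taken from the original implementation.

```python
import cv2
import numpy as np

def rectify_omni(img, K, D, xi, K_new, out_size):
    """Rectify an omnidirectional image under the unified projection model.
    K, D, xi come from calibration (Eqs. (4)-(6)); K_new is the pinhole
    intrinsic matrix chosen for the rectified (perspective) view."""
    W, H = out_size
    k1, k2, p1, p2 = D
    # Pixel grid of the rectified image, back-projected to normalized rays.
    u, v = np.meshgrid(np.arange(W, dtype=np.float64), np.arange(H, dtype=np.float64))
    x = (u - K_new[0, 2]) / K_new[0, 0]
    y = (v - K_new[1, 2]) / K_new[1, 1]
    z = np.ones_like(x)
    # Project each ray onto the unit sphere (Eq. (1)), then shift by xi (Eq. (2)).
    norm = np.sqrt(x**2 + y**2 + z**2)
    Xs, Ys, Zs = x / norm, y / norm, z / norm
    xu = Xs / (Zs + xi)
    yu = Ys / (Zs + xi)
    # Apply radial/tangential distortion (Eq. (4)).
    r2 = xu**2 + yu**2
    xd = xu * (1 + k1 * r2 + k2 * r2**2) + 2 * p1 * xu * yu + p2 * (r2 + 2 * xu**2)
    yd = yu * (1 + k1 * r2 + k2 * r2**2) + p1 * (r2 + 2 * yu**2) + 2 * p2 * xu * yu
    # Map to the omnidirectional pixel plane (Eq. (5)); these arrays act as remapping parameters.
    map_x = (K[0, 0] * xd + K[0, 2]).astype(np.float32)
    map_y = (K[1, 1] * yd + K[1, 2]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR), (map_x, map_y)
```

A map in the opposite direction (from the omnidirectional pixels to the rectified view) can be built in the same way and used to convert the segmented result back to the omnidirectional image.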

2.1.2. Sky Region Detection

There is a clear visual difference between the sky and the buildings in images captured by a camera in outdoor scenes, and the gradient information of the image can accurately reflect the difference between the sky and non-sky regions in most cases. Therefore, processing starts by converting the color image into a grayscale image. After the input image has been converted to grayscale, the corresponding gradient image is calculated using the Sobel operator.
The sky border position function $\mathrm{border}(x)$ is defined as
$$1 \le \mathrm{border}(x) \le H \quad (1 \le x \le W) \tag{9}$$
where W and H represent the width and height of the image, respectively, and the sky and non-sky regions are defined as follows:
$$\mathrm{sky} = \{ (x, y) \mid 1 \le x \le W,\; 1 \le y \le \mathrm{border}(x) \} \tag{10}$$
$$\mathrm{non\text{-}sky} = \{ (x, y) \mid 1 \le x \le W,\; \mathrm{border}(x) \le y \le H \} \tag{11}$$
A threshold t is used to calculate the sky boundary in accordance with Formulas (10) and (11). In the gradient image, the first pixel in each column with a gradient value greater than the threshold t is defined as the dividing point between the sky and non-sky regions in that column. The sky boundary function border(x) is preliminarily obtained by traversing each column of the gradient image. Algorithm 1 shows how border(x) is calculated from the threshold t. However, choosing the threshold t that yields the optimal sky boundary is a complex problem.
Algorithm 1 Calculating the sky boundary function border(x)
Input: t, grad
Output: border(x)
 1: for x = 1 to W do
 2:     border(x) = H
 3:     for y = 1 to H do
 4:         if grad(y, x) > t then
 5:             border(x) = y
 6:             break
 7:         end if
 8:     end for
 9: end for
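As a concrete illustration, a minimal Python/OpenCV sketch of the gradient computation and of Algorithm 1 is given below; the function names, the Sobel kernel size and the 0-based indexing are implementation choices rather than part of the original description.

```python
import cv2
import numpy as np

def gradient_image(bgr):
    """Grayscale conversion followed by the Sobel gradient magnitude."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.sqrt(gx**2 + gy**2)

def sky_border(grad, t):
    """Algorithm 1: first pixel in each column whose gradient exceeds t."""
    H, W = grad.shape
    border = np.full(W, H, dtype=int)   # default: the whole column is sky
    for x in range(W):
        hits = np.nonzero(grad[:, x] > t)[0]
        if hits.size > 0:
            border[x] = hits[0]         # 0-based row index of the boundary
    return border
```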
In [20], an energy function was first defined to optimize the boundary. In this paper, an improved energy function is used, which is as follows:
$$J_n = \frac{1}{\gamma \cdot \left|\Sigma_s\right| + \left|\Sigma_g\right| + \gamma \cdot \lambda_1^s + \lambda_1^g} \tag{12}$$
where $\Sigma_s$ and $\Sigma_g$ are the covariance matrices of the pixels, described by their red-green-blue (RGB) values, in the sky and non-sky regions, respectively, and $|\cdot|$ denotes the determinant; $\gamma$ is a parameter for the homogeneity of the sky region, which is set to 2; and $\lambda_1^s$ and $\lambda_1^g$ are the largest eigenvalues of the two matrices. $\Sigma_s$ and $\Sigma_g$ are defined as follows:
$$\Sigma_s = \frac{1}{N_s} \sum_{(y,x)\in \mathrm{sky}} \left(I_s(y,x) - \mu_s\right)\left(I_s(y,x) - \mu_s\right)^T \tag{13}$$
$$\Sigma_g = \frac{1}{N_g} \sum_{(y,x)\in \mathrm{non\text{-}sky}} \left(I_g(y,x) - \mu_g\right)\left(I_g(y,x) - \mu_g\right)^T \tag{14}$$
where $N_s$ and $N_g$ are the numbers of pixels in the sky region and non-sky region, respectively; $I$ is the input RGB image; and $\mu_s$ and $\mu_g$ are vectors that represent the average RGB values in the sky region and non-sky region, respectively.
For a given threshold t, a corresponding energy value $J_n(t)$ can be obtained, and the sky boundary that maximizes the energy function is taken as the optimal boundary. However, the relationship between the threshold t and $J_n(t)$ is nonlinear, so selecting the threshold t that maximizes $J_n(t)$ is not straightforward. Therefore, we use a step search to obtain the optimal threshold t and calculate the corresponding sky boundary function $\mathrm{border}(x)$. Using the sky boundary function, we can then extract the sky region, as shown in Figure 6b, in which the red part is the extracted sky region. For subsequent NLOS signal rejection, we binarize the image, as shown in Figure 6c, in which the sky region is represented in white and the non-sky region in black.
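The sketch below, continuing the previous one, shows one way to evaluate the energy of Equation (12) for a candidate boundary and to run the step search over t. The determinant-based scalar form follows [16,20]; note that np.cov uses the sample (N−1) normalization rather than the 1/N of Equations (13) and (14), and the search range and step count are illustrative assumptions.

```python
import numpy as np

def energy(img, border, gamma=2.0):
    """Energy J_n of Eq. (12) for a candidate boundary (RGB covariances)."""
    H, W, _ = img.shape
    rows = np.arange(H)[:, None]
    sky_mask = rows < border[None, :]          # pixels above the boundary are sky
    sky = img[sky_mask].astype(np.float64)
    ground = img[~sky_mask].astype(np.float64)
    if len(sky) < 2 or len(ground) < 2:
        return -np.inf
    cov_s, cov_g = np.cov(sky.T), np.cov(ground.T)
    lam_s = np.linalg.eigvalsh(cov_s)[-1]      # largest eigenvalue
    lam_g = np.linalg.eigvalsh(cov_g)[-1]
    return 1.0 / (gamma * np.linalg.det(cov_s) + np.linalg.det(cov_g)
                  + gamma * lam_s + lam_g)

def optimal_border(img, grad, t_min=5, t_max=600, n_steps=40):
    """Step search over the threshold t; keep the boundary with maximal J_n.
    Uses sky_border() from the previous sketch."""
    best_J, best_border = -np.inf, None
    for t in np.linspace(t_min, t_max, n_steps):
        border = sky_border(grad, t)
        J = energy(img, border)
        if J > best_J:
            best_J, best_border = J, border
    return best_border
```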
However, this sky region detection algorithm cannot solve the problem that flyovers block the field of view of the camera in urban environments, because Formula (10) assumes that the sky region is always present at the top of the image. A rectified omnidirectional image taken when the vehicle is passing under a flyover is shown in Figure 7a. It can be seen that in this case, the assumption that the sky region lies at the top of the image is no longer satisfied. The results of the sky region detection using the algorithm described above are shown in Figure 7b,c. It can be seen that the algorithm mistakes the boundary between the flyover and the sky for the boundary between the sky and non-sky regions, resulting in completely incorrect sky region detection, which will lead to errors in NLOS signal rejection.
In this paper, an improved algorithm is introduced to solve this problem. The HSV color model is used to detect whether the extracted sky region is instead a flyover region. The HSV model is another popular color model besides BGR, where H is the hue, S is the saturation, and V is the value; it is more in line with how people describe color. The range of black in the HSV color model is $0 \le H \le 180$, $0 \le S \le 255$, and $0 \le V \le 46$. After examining all columns in row $H/6$ of the image (where H here denotes the image height), if at least fifty percent of the pixels are classified as black, the image is assumed to contain a flyover region. When a flyover region is detected, the sky region is still determined in accordance with the procedure of the above algorithm; however, the detected "sky" region is actually the flyover. The BGR values in the flyover region are then all set to [255, 255, 255], and the flyover region is temporarily treated as part of the sky region, as shown in Figure 8a. Then, the sky region is detected again, and the results are shown in Figure 8b. Using the boundary function of the flyover region detected during the first execution of the algorithm, the BGR values in the flyover region are finally set back to [0, 0, 0] to obtain the real sky region, as shown in Figure 8c.
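A compact sketch of this flyover handling step is shown below. It reuses the gradient_image and optimal_border helpers from the earlier sketches, the 50% black-pixel criterion and the OpenCV HSV black range follow the description above, and the function name and return convention are assumptions.

```python
import cv2
import numpy as np

def handle_flyover(bgr, black_frac=0.5):
    """Detect a flyover in row H/6 using the HSV black range and, if found,
    whiten it so that the sky detector can look past it (Section 2.1.2)."""
    H = bgr.shape[0]
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    row = hsv[H // 6, :, :]
    black = (row[:, 0] <= 180) & (row[:, 1] <= 255) & (row[:, 2] <= 46)
    if black.mean() < black_frac:
        return bgr, None                        # no flyover detected
    grad = gradient_image(bgr)
    flyover_border = optimal_border(bgr, grad)  # first pass finds the flyover edge
    work = bgr.copy()
    for x, b in enumerate(flyover_border):
        work[:b, x] = 255                       # temporarily treat the flyover as sky
    # Rerun the sky detection on `work`; afterwards, the rows above
    # flyover_border are set back to black to recover the real sky region.
    return work, flyover_border
```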

2.1.3. Sky Boundary Smoothing

In an urban canyon environment, nearby buildings appear relatively flat in the omnidirectional camera images, and the corresponding results of the sky region segmentation algorithm are satisfactory. In contrast, the edges of distant buildings and high-rise buildings are strongly affected by sky reflections, resulting in less distinct boundaries. The sky region segmentation algorithm tends to misidentify these reflective regions as sky regions, producing narrow interruptions and slender gullies and resulting in discontinuous sky boundaries, as shown in Figure 9a. The result of binarization after converting back to the omnidirectional image is shown in Figure 9b.
To solve this problem, an image morphology algorithm is employed to process the omnidirectional image after sky region segmentation to make the sky boundary smoother. First, we conduct an erosion operation to restore the parts of the image misidentified as belonging to the sky region to the non-sky region and eliminate the thin interruptions between buildings; the result is shown in Figure 10a. After the erosion operation, the narrow interruptions and slender gullies caused by the influence of strong light are restored to the non-sky regions. However, the erosion operation also results in some loss of the sky regions. Therefore, we subsequently perform a dilation operation to restore the part of the sky region lost due to the erosion operation, with the result shown in Figure 10b. Compared with the original image, as shown in Figure 10c, we can see that after smoothing, our algorithm can well identify the parts of the non-sky region that are affected by strong light.
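This erosion-then-dilation sequence is a standard morphological opening; a minimal OpenCV sketch is given below, where the elliptical kernel and its size are illustrative choices rather than values taken from the paper.

```python
import cv2

def smooth_sky_mask(mask, kernel_size=9):
    """Erosion followed by dilation (an opening) on the binary sky mask to
    remove slender, reflection-induced 'sky' gullies along building edges."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    eroded = cv2.erode(mask, kernel)    # drop thin false sky regions
    return cv2.dilate(eroded, kernel)   # restore sky area lost to erosion
```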

2.2. NLOS Signal Rejection

The proposed system involves four reference frames: the camera frame, image frame, IMU frame, and GNSS receiver frame. Since the GNSS satellites are far from the ground user, the distances between the GNSS receiver antenna, omnidirectional camera, and IMU are negligible from the perspective of the satellites; therefore, we consider these three frames to be the same frame, and use the omnidirectional camera frame to uniformly represent them.
The pseudorange $pr$, elevation angle $el$ and azimuth angle $az$ of a satellite can be calculated from the received GNSS observations, so a satellite S can be represented by $(pr, el, az)$. Formula (15) converts this representation into the ENU frame, $S_e = (x, y, z)^T$, whose x, y, and z axes point in the east, north, and up directions, respectively:
$$\begin{aligned} x &= pr \cdot \cos(el)\cos(az) \\ y &= pr \cdot \cos(el)\sin(az) \\ z &= pr \cdot \sin(el) \end{aligned} \tag{15}$$
Then, the satellite is transformed from the ENU frame into the omnidirectional camera frame using the Euler angles $(\psi, \varphi, \theta)$ collected from the IMU, where $\psi$ is the roll angle (rotation about the x axis), $\varphi$ is the pitch angle (rotation about the y axis), and $\theta$ is the yaw angle (rotation about the z axis). These angles define the rotation matrices about each axis of the system; Formulas (16)–(18) describe the rotations about the x axis, y axis, and z axis, respectively:
$$R(\psi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & \sin\psi \\ 0 & -\sin\psi & \cos\psi \end{bmatrix} \tag{16}$$
$$R(\varphi) = \begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{bmatrix} \tag{17}$$
$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{18}$$
Thus, the satellite can be represented in the omnidirectional camera frame using Formula (19):
$$S_c = R(\theta)\, R(\varphi)\, R(\psi)\, S_e \tag{19}$$
Since the roll and pitch angles change only slightly while the vehicle is moving and their effect on the satellite projection is negligible, we use only the yaw angle for the transformation from the ENU frame to the omnidirectional camera frame. Therefore, Formula (19) can be simplified to Formula (20):
$$S_c = R(\theta)\, S_e \tag{20}$$
After representing the satellites in the omnidirectional camera frame, each satellite in the skyplot at the time of omnidirectional image acquisition can be projected onto the omnidirectional image using the imaging principle of the omnidirectional camera. Then, we check whether each satellite is located in a sky region, and we consider a satellite to be a LOS satellite if it is in a sky region or an NLOS satellite if it is in a non-sky region.
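Putting Sections 2.1 and 2.2 together, the sketch below projects a single satellite onto the segmented omnidirectional image and labels it LOS or NLOS. It assumes that the camera's optical axis points skyward and is aligned with the ENU up axis once the yaw rotation of Equation (20) has been applied, and it reuses the unified-model parameters (ξ, K, D) and the binary sky mask from the earlier sketches; the rotation sign convention and the absence of image-boundary checks are simplifications.

```python
import numpy as np

def classify_satellite(pr, el, az, yaw, K, D, xi, sky_mask):
    """Project one satellite (pseudorange, elevation, azimuth in radians) onto
    the omnidirectional image and label it LOS if it falls in the sky region."""
    # Eq. (15): satellite in the ENU frame.
    Se = np.array([pr * np.cos(el) * np.cos(az),
                   pr * np.cos(el) * np.sin(az),
                   pr * np.sin(el)])
    # Eq. (20): rotate by the yaw angle only.
    c, s = np.cos(yaw), np.sin(yaw)
    R_yaw = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    Sc = R_yaw @ Se
    # Unified projection model, Eqs. (1)-(5): sphere -> distortion -> pixel plane.
    Xs, Ys, Zs = Sc / np.linalg.norm(Sc)
    xu, yu = Xs / (Zs + xi), Ys / (Zs + xi)
    k1, k2, p1, p2 = D
    r2 = xu**2 + yu**2
    xd = xu * (1 + k1 * r2 + k2 * r2**2) + 2 * p1 * xu * yu + p2 * (r2 + 2 * xu**2)
    yd = yu * (1 + k1 * r2 + k2 * r2**2) + p1 * (r2 + 2 * yu**2) + 2 * p2 * xu * yu
    u = int(round(K[0, 0] * xd + K[0, 2]))
    v = int(round(K[1, 1] * yd + K[1, 2]))
    return "LOS" if sky_mask[v, u] > 0 else "NLOS"
```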

2.3. GNSS Kinematic Positioning Based on Extended Kalman Filter

2.3.1. GNSS Measurement Model

GNSS kinematic positioning in urban environments mainly relies on the pseudorange, which is the basic measurement of a GNSS receiver. The pseudorange is modeled as the sum of the Euclidean distance between the receiver and the satellite, the receiver clock error, the ionospheric and tropospheric errors, and the multipath error [21], as follows:
$$p_{r,j}^{s} = \rho_{r}^{s} + c \cdot \left(\delta t_r - \delta t^s\right) + \delta T_r^s + \delta I_{r,j}^s + DCB_{r,j}^s + M_{r,j}^s + \epsilon_{p,j} \tag{21}$$
where we have the following:
  • $\rho_r^s$ is the real distance between the satellite's antenna and the receiver's antenna (in meters);
  • $\delta t_r$ is the receiver clock error (in seconds);
  • $\delta t^s$ is the satellite clock error (in seconds);
  • $\delta T_r^s$ is the tropospheric delay (in meters);
  • $\delta I_{r,j}^s$ is the ionospheric delay (in meters);
  • $DCB_{r,j}^s$ is the differential code bias (in seconds);
  • $M_{r,j}^s$ is the multipath error for the code observation (in meters);
  • $\epsilon_{p,j}$ is the measurement noise for the code observation (in meters);
  • c is the speed of light in vacuum (in m/s).
The superscript ‘s’ refers to the s-th satellite, and the subscript ‘j’ refers to the j-th frequency band.
We can deduce from Formula (21) that the raw measurement contains various errors that must be eliminated to obtain an accurate geometric distance. The tropospheric delay is corrected according to the Hopfield/Saastamoinen model and the ionospheric delay according to the Klobuchar model, both based on ephemeris data from the International GNSS Service (IGS). Clock products from the IGS are applied to correct the satellite clock bias, while the receiver clock bias is estimated with an extended Kalman filter. The multipath error and measurement noise remain in the pseudorange and degrade the ranging accuracy, constituting one of the main error sources in the pseudorange measurement.

2.3.2. Extended Kalman Filter

An extended Kalman filter (EKF) [22] is employed to process the nonlinear GNSS measurement model. Two equations can be used to represent the state space model of the EKF: the state equation,
$$x_k = f(x_{k-1}) + w_k \tag{22}$$
and the measurement equation:
$$z_k = h(x_k) + v_k \tag{23}$$
where $x_k \in \mathbb{R}^{n \times 1}$ is the state vector at time $t_k$, $z_k \in \mathbb{R}^{m \times 1}$ is the measurement vector, $F_k \in \mathbb{R}^{n \times n}$ is the system transition matrix from epoch $t_{k-1}$ to $t_k$, $H_k \in \mathbb{R}^{m \times n}$ is the measurement matrix, $w_k \in \mathbb{R}^{n \times 1}$ is the system noise vector, and $v_k \in \mathbb{R}^{m \times 1}$ is the measurement noise vector. The extended Kalman filter assumes that the system noise and measurement noise are uncorrelated Gaussian white noise, that is,
$$E\left(w_k w_l^T\right) = Q_k \delta_{kl}, \quad E\left(v_k v_l^T\right) = R_k \delta_{kl}, \quad E\left(w_k v_l^T\right) = 0 \tag{24}$$
In the GNSS kinematic positioning problem, only the measurement equation is nonlinear, while the state equation remains linear. Therefore, the state equation can be expressed as follows:
$$x_k = F_k x_{k-1} + w_k \tag{25}$$
Then, the observation $z_k$ is incorporated to refine the state prediction, updating the state estimate and covariance as follows.
The prediction step is
$$\hat{x}_k^- = f(\hat{x}_{k-1}), \qquad P_k^- = F_{k-1} P_{k-1} F_{k-1}^T + W_k \tag{26}$$
The update step is
$$\begin{aligned} K_k &= P_k^- H_k^T \left[ V_k + H_k P_k^- H_k^T \right]^{-1} \\ \hat{x}_k &= \hat{x}_k^- + K_k \left[ z_k - h(\hat{x}_k^-) \right] \\ P_k &= \left( I - K_k H_k \right) P_k^- \end{aligned} \tag{27}$$
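The following minimal NumPy sketch mirrors the prediction and update steps of Equations (26) and (27) for the linear-transition case used here; the class name is illustrative, and the measurement function h and its Jacobian H are supplied as described in Section 2.3.3.

```python
import numpy as np

class PseudorangeEKF:
    """Minimal EKF skeleton for Eqs. (26)-(27) with a linear state transition."""

    def __init__(self, x0, P0, F, W):
        self.x, self.P, self.F, self.W = x0, P0, F, W

    def predict(self):
        # Eq. (26): linear state propagation and covariance prediction.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.W

    def update(self, z, h, H, V):
        # Eq. (27): Kalman gain, state correction, covariance update.
        S = V + H @ self.P @ H.T
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h(self.x))
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```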

2.3.3. Receiver Motion Model

The accuracy of the motion trajectory estimation using the EKF depends on the accuracy of the description of the vehicle’s motion state. Based on this consideration, we adopt a position–velocity (PV) model. In the PV model, the state vector can be expressed as follows:
$$x_k = \left[ x,\ \dot{x},\ y,\ \dot{y},\ z,\ \dot{z},\ r_{t_r} \right]_k^T \tag{28}$$
A suitable discrete-time model can be found in [23], and the state transition matrix is as follows:
$$F_k = \begin{bmatrix} 1 & T & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & T & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & T & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{29}$$
where T is the filter update interval.
The measurement matrix is derived from the Jacobian matrix of $h(x)$. For sequential Kalman filtering, the measurement matrix reflects the relationship between the receiver position and the individual satellite observations, and the Jacobian matrix can be expressed as follows:
$$H_k = \begin{bmatrix} \dfrac{\partial p_{r,j}^S}{\partial x} & 0 & \dfrac{\partial p_{r,j}^S}{\partial y} & 0 & \dfrac{\partial p_{r,j}^S}{\partial z} & 0 & \dfrac{\partial p_{r,j}^S}{\partial \delta t_r} \end{bmatrix}_k \tag{30}$$
The partial derivatives that appear above are shown in detail below:
$$\begin{aligned} \frac{\partial p_{r,j}^S}{\partial x} &= \frac{-(X_i - x)}{\left[ (X_i - x)^2 + (Y_i - y)^2 + (Z_i - z)^2 \right]^{1/2}} \\ \frac{\partial p_{r,j}^S}{\partial y} &= \frac{-(Y_i - y)}{\left[ (X_i - x)^2 + (Y_i - y)^2 + (Z_i - z)^2 \right]^{1/2}} \\ \frac{\partial p_{r,j}^S}{\partial z} &= \frac{-(Z_i - z)}{\left[ (X_i - x)^2 + (Y_i - y)^2 + (Z_i - z)^2 \right]^{1/2}} \\ \frac{\partial p_{r,j}^S}{\partial \delta t_r} &= 1 \end{aligned} \tag{31}$$
where $(x, y, z)$ is the position of the receiver and $(X_i, Y_i, Z_i)$ is the position of the satellite, both in the Earth-centered, Earth-fixed (ECEF) coordinate frame.
More details of the process noise covariance matrix can be found in [23].
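A short sketch of the PV-model transition matrix of Equation (29) and the per-satellite measurement Jacobian row of Equations (30) and (31) is given below; the state ordering follows Equation (28), with the receiver clock bias expressed in meters, and the function names are illustrative.

```python
import numpy as np

def pv_transition(T):
    """State transition matrix F_k of Eq. (29) for the position-velocity model,
    state ordering [x, xdot, y, ydot, z, zdot, clock_bias]."""
    F = np.eye(7)
    for i in (0, 2, 4):
        F[i, i + 1] = T
    return F

def measurement_jacobian(x, sat_pos):
    """Row of H_k (Eqs. (30)-(31)) for one satellite at ECEF position sat_pos."""
    rx, ry, rz = x[0], x[2], x[4]
    dx, dy, dz = sat_pos[0] - rx, sat_pos[1] - ry, sat_pos[2] - rz
    rho = np.sqrt(dx**2 + dy**2 + dz**2)
    H = np.zeros(7)
    H[0], H[2], H[4] = -dx / rho, -dy / rho, -dz / rho   # partials w.r.t. x, y, z
    H[6] = 1.0                                           # partial w.r.t. clock bias
    return H
```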

3. Experimental Results and Discussions

We evaluated our system on the public UrbanLoco dataset [24]. This dataset was chosen because it is a relatively new public dataset, and other public datasets do not specifically target mapping and localization tasks in dense urban applications. The UrbanLoco dataset is a mapping/localization dataset collected in highly urbanized environments with a full sensor suite, and it includes a wide variety of urban terrains: urban canyons, bridges, sharp turns, etc. More importantly, the part of the dataset collected in Hong Kong provides information from an omnidirectional camera, an IMU, and a GNSS receiver. In addition, this dataset provides an accurate ground truth that can easily be utilized to evaluate our method. The data collection platform used in Hong Kong, shown in Figure 11, was equipped with a Grasshopper3 5.0 MP (GS3-U3-51S5C-C) camera with a Fujinon FE185C057HA-1 fisheye lens (185° H-FOV, 185° V-FOV, 10 Hz), an Xsens MTi 10 IMU (100 Hz), and a u-blox M8T GNSS receiver (GPS/BeiDou, 1 Hz). Based on the Hong Kong UrbanLoco data, we evaluated the proposed method at three levels, i.e., sky region segmentation, NLOS signal rejection, and urban positioning.

3.1. Evaluation of Sky Region Segmentation

Figure 12 shows the results of the sky region segmentation algorithm when applied to omnidirectional images in the UrbanLoco dataset, which depict several typical urban environments. The first column shows the original omnidirectional images, the second column shows the rectified images, the third column shows the images processed by the sky region segmentation algorithm, the fourth column shows the images binarized from the third column, and the fifth column shows the results of image morphology processing after conversion back to omnidirectional images. As shown in Figure 12, the proposed algorithm can effectively segment omnidirectional images into sky and non-sky regions. In the first row, the boundary between the sky and non-sky regions is obvious, but the presence of streetlights and billboards produces some minor misidentifications. In the second and third rows, there are a large number of bright areas on the surfaces of buildings with colors similar to the sky, and reflective windows are present on the surfaces of buildings in the fourth row, but our algorithm handles these difficult cases correctly and achieves satisfactory segmentation results. In the fifth row, the algorithm works successfully in the case of some thin buildings. There is a flyover in the field of view of the camera in the sixth row, and our algorithm can also accurately detect the sky region in this case.
Since the true values of the sky regions are not provided in the UrbanLoco dataset, to quantitatively evaluate the accuracy of the sky region segmentation algorithm, we manually labeled the sky regions in the experimental images; the sky region of each image is colored in green, as shown in the first column of Figure 13. Where the boundaries could not be definitively distinguished, we referred to the corresponding locations on the Google Street View map to ensure the accuracy of the manual segmentation. After manual sky region segmentation, the images were binarized and used as benchmarks for evaluating the proposed algorithm, as shown in the second column. The third column shows the results of our sky region segmentation algorithm, where the red pixels represent the parts of the sky region that are incorrectly detected by our algorithm; these results were obtained by comparing each pixel in the output of our algorithm with the corresponding benchmark image. To calculate the accuracy of the sky region segmentation algorithm more fairly, we did not include the black corner areas of the original omnidirectional images in the calculation. The sky region segmentation accuracy $P_c$ was calculated using Formula (32), where $cnt\_correct$ is the number of correctly classified pixels, r is the radius of the omnidirectional image (220 pixels), and the image size is 612 × 512 pixels. We refer to the four images in Figure 13, from top to bottom, as image 1, image 2, image 3, and image 4. The accuracy on these four images is summarized in Table 1.
$$P_c = \frac{cnt\_correct}{\pi r^2} \times 100\% \tag{32}$$
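For reference, a minimal sketch of this accuracy computation is shown below; it assumes the two binary masks are aligned and that the circle center is known, and the function name is illustrative.

```python
import numpy as np

def segmentation_accuracy(result_mask, benchmark_mask, center, radius=220):
    """Eq. (32): pixel-wise accuracy evaluated only inside the omnidirectional
    image circle (the black corners are excluded from the comparison)."""
    H, W = result_mask.shape
    yy, xx = np.mgrid[:H, :W]
    inside = (xx - center[0])**2 + (yy - center[1])**2 <= radius**2
    correct = np.count_nonzero((result_mask == benchmark_mask) & inside)
    return correct / (np.pi * radius**2) * 100.0
```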
Figure 13 and Table 1 show that the proposed sky region segmentation algorithm achieves over 96% accuracy in sky region segmentation and can effectively extract sky regions from omnidirectional images acquired in various urban environments without requiring image samples for training.

3.2. Evaluation of NLOS Signal Rejection

The satellite skyplot at a certain time is shown in Figure 14a. Here, ’G’ stands for GPS satellites, and ’C’ stands for BDS satellites. We used the method proposed in Section 2.2 to project each satellite onto the omnidirectional image after sky region segmentation to obtain the corresponding NLOS signal rejection result, as shown in Figure 14b, where green circles represent LOS satellites and red circles represent NLOS satellites.
To verify the accuracy of NLOS signal rejection, we used a 3D photo-realistic model of Hong Kong to derive the satellite visibility. We imported the downloaded 3D photo-realistic model of typical urban environments into Rhino for 3D modeling. From this model, we could easily obtain the true north direction, and each satellite was plotted in the 3D photo-realistic model based on its elevation and azimuth angles. Instead of using the actual distance of the satellite from the ground, we used a fixed distance for observation purposes. Once the satellites had been modeled, the LOS satellites could be clearly distinguished from the NLOS satellites based on the 3D photo-realistic model. The results of the modeling are shown in Figure 15. In the rendering mode, as shown in Figure 15a, we could verify that the imported 3D photo-realistic model matched the scene captured by the omnidirectional camera. In the coloring mode, as shown in Figure 15b, we marked the LOS satellites in green and the NLOS satellites in red. A comparison with the results in Figure 14b shows that our NLOS signal rejection results are entirely correct. The red lines are discontinuous because the NLOS satellites are obscured by buildings.
Similarly, we verified the NLOS signal rejection performance for two other images. The results are shown in Figure 16. Figure 16a,b show the results for an image taken while the vehicle was at a standstill waiting for a turn, and Figure 16c,d show the results for an image taken while the vehicle was driving through a narrow intersection. From these results, it can be concluded that the NLOS signal rejection algorithm is quite effective in various urban environments.

3.3. Evaluation of Positioning Results

To further validate the vision-aided NLOS rejection method, we selected the HK-Data20190426-2 sequence of the UrbanLoco dataset, as it contains typical urban environments, such as narrow lanes, half-open environments, and intersections. The complete driving environment and trajectory are shown in Figure 17: the vehicle started in a narrow lane in area A and stopped to wait for turns in areas C and D. During the collection of HK-Data20190426-2, the vehicle traveled a distance of 0.7 km over 260 s, including time spent waiting at traffic lights.
We conducted code-based kinematic positioning experiments with three data screening methods. Specifically, GNSS/Raw estimates the coordinates directly without any data screening. GNSS/SNR&ELE employs the classic data screening strategy based on the SNR and elevation: thresholds are set on the SNR and elevation to eliminate suspicious NLOS measurements, because NLOS signals usually have a low SNR and elevation [25]. In our experiments, the SNR and elevation thresholds of the classic screening method were set to 20 dB and 15°, respectively. GNSS/Vision adopts the proposed vision-aided NLOS rejection algorithm. The positioning results are shown in Figure 18.
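As an illustration of the classic GNSS/SNR&ELE strategy, a one-function sketch is given below; the dictionary keys and the default thresholds (matching the 20 dB and 15° used in our experiments) are illustrative assumptions.

```python
def classic_screen(satellites, snr_min=20.0, ele_min=15.0):
    """Classic GNSS/SNR&ELE screening: keep only satellites whose SNR and
    elevation exceed fixed thresholds."""
    return [s for s in satellites
            if s["snr"] >= snr_min and s["elevation"] >= ele_min]
```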
As shown in Figure 18, the GNSS positioning accuracy deteriorates significantly in the harsh urban environment, mostly because of NLOS signals. The positioning results estimated from the raw measurements without screening contain large errors and deviate from the true trajectory in areas B, C, and D, making them quite unreliable in highly urbanized environments. Compared with the direct use of the raw measurements, the classic method yields a certain improvement in positioning accuracy. However, in the harsh urban environment, fixed thresholds on the SNR and elevation cannot ensure the effective separation of NLOS signals from LOS signals. When the thresholds are set too high, some valid satellite signals are eliminated, resulting in an insufficient number of satellites; when they are set too low, a large number of NLOS signals are retained in the measurements, leading to considerable errors. For example, in area A, the incorrect elimination of LOS satellites leads to larger positioning errors. In contrast, the vision-aided screening method effectively eliminates the NLOS signals and avoids large jumps in the solution; the positioning results are closer to the real trajectory and offer higher reliability.
Based on the reference trajectory provided in the dataset, the positioning errors in the east (E), north (N), and up (U) directions, as well as the 3D positioning error, are plotted in Figure 19 for the three positioning solutions. In parts of the urban canyon, the obscuration was too severe to observe enough healthy satellites, while the number of NLOS signals increased greatly, resulting in incorrect results in some epochs. The vision-aided screening method avoided some of the large fluctuations that occur in the positioning solutions without screening, contributing to smoother and more reliable solutions. We also generated boxplots of the error distributions to compare the three methods, as shown in Figure 20. It is obvious from this figure that the vision-aided screening method has great advantages in all directions: it effectively reduces the occurrence of large errors while keeping the median error low. It can also be seen that the classic screening method leads to larger errors at certain times due to the incorrect removal of LOS satellites and the retention of NLOS satellites. The vision-aided method removes NLOS satellites more accurately, demonstrating the superiority of our method.
Table 2 lists the root mean square (RMS) values in the E, N, and U directions and the 3D RMS for each solution. Compared with the raw measurements without screening, the vision-aided method enables significant improvements in all directions. In contrast, due to the incorrect removal of LOS satellites at certain times, the classic screening method actually increases the RMS error in the northward direction. The positioning accuracy improvements relative to the raw measurements without screening, expressed as percentages, are shown in Table 3. Although the classic screening method achieves a certain improvement in the positioning accuracy, the proposed vision-aided screening method improves the positioning accuracy more significantly. This is because the omnidirectional camera captures real information about the obscuration of the surroundings, allowing NLOS signals to be more effectively identified. Specifically, the classic screening method exhibits improvements of 38.1%, −5.6% and 36.2% in the E, N, and U directions, respectively, as well as a 3D accuracy improvement of 33.3%. In comparison, the vision-aided screening method exhibits improvements of 60.3%, 12.4% and 63.3% in the E, N, and U directions, respectively, as well as a 3D accuracy improvement of 58.5%.
Figure 21 shows the time series of the number of satellites available for each of the three methods, and the numbers are also listed in Table 4. Since the GNSS receiver can receive both GPS and BDS satellite signals, the mean number of received satellites is 13.6. After the exclusion of NLOS satellites using the classic screening method or the vision-aided method, the mean number of satellites drops to 11.4 or 9.2, respectively. In some extreme cases, such as the vehicle passing under a flyover, the vision-aided method culls more satellites and reduces the number of available satellites to fewer than 4. If more satellite navigation systems were used, more satellites would be available for positioning.

4. Conclusions

As urbanization accelerates, accurate positioning in urban environments is becoming increasingly important. In this paper, we propose a GNSS urban positioning method with omnidirectional camera-aided NLOS identification. Our method adopts a novel solution to perform accurate sky region segmentation for omnidirectional images acquired in various urban environments. When a satellite is located in a non-sky region, it is identified as an NLOS satellite. The proposed method was tested using code-based kinematic positioning on the HK-Data20190426-2 UrbanLoco dataset and shows a clear advantage over the classic screening method in terms of positioning results, exhibiting improvements of 60.3%, 12.4% and 63.3% in the E, N, and U directions relative to the unscreened raw measurements, respectively, as well as a 3D accuracy improvement of 58.5%. In addition, the vision-aided method is computationally simple and does not require a large number of samples for training. Finally, we conclude that the exclusion of NLOS signals is necessary for achieving more accurate and reliable GNSS urban positioning.
In our work, when NLOS satellites are detected and eliminated, they are not used for positioning, which does not make good use of all the signals. In some extreme cases, this may result in fewer than four usable satellites, and positioning may not be possible. In future work, we will investigate methods of making full use of NLOS signals for more extreme cases.

Author Contributions

Conceptualization, Z.D.; methodology, H.Y. and W.C.; software, H.Y. and W.C.; validation, H.Y., W.C. and T.X.; formal analysis, H.Y. and W.C.; investigation, H.Y. and W.C.; writing—original draft preparation, H.Y. and W.C.; writing—review and editing, H.Y. and Z.D.; visualization, T.X.; supervision, Z.D. and X.Z.; project administration, Z.D. and X.Z.; funding acquisition, Z.D. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Science and Technology Planning Project of Guangdong Province of China (Grant No.2021A0505030030), and Science and Technology Planning Project of Shenzhen (Grant No.ZDSYS20210623091807023).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found at: https://github.com/weisongwen/UrbanLoco (accessed on 15 September 2022) https://www.pland.gov.hk/pland_tc/info_serv/3D_models/download.htm (accessed on 1 September 2022).

Acknowledgments

The authors would like to thank Weisong Wen at the Hong Kong Polytechnic University for assistance with the datasets and data analysis. We are also very grateful to our reviewers who provided insight and expertise that greatly assisted the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jin, S.; Wang, Q.; Dardanelli, G. A Review on Multi-GNSS for Earth Observation and Emerging Applications. Remote Sens. 2022, 14, 3930.
  2. Cheng, Q.; Chen, P.; Sun, R.; Wang, J.; Mao, Y.; Ochieng, W.Y. A new faulty GNSS measurement detection and exclusion algorithm for urban vehicle positioning. Remote Sens. 2021, 13, 2117.
  3. Hsu, L.T. Analysis and modeling GPS NLOS effect in highly urbanized area. GPS Solut. 2018, 22, 1–12.
  4. Uaratanawong, V.; Satirapod, C.; Tsujii, T. Evaluation of multipath mitigation performance using signal-to-noise ratio (SNR) based signal selection methods. J. Appl. Geod. 2021, 15, 75–85.
  5. Ziedan, N.I. Improved multipath and NLOS signals identification in urban environments. Navigation 2018, 65, 449–462.
  6. Xu, H.; Angrisano, A.; Gaglione, S.; Hsu, L.T. Machine learning based LOS/NLOS classifier and robust estimator for GNSS shadow matching. Satell. Navig. 2020, 1, 1–12.
  7. Wen, W.; Zhang, G.; Hsu, L.T. Exclusion of GNSS NLOS receptions caused by dynamic objects in heavy traffic urban scenarios using real-time 3D point cloud: An approach without 3D maps. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 158–165.
  8. Wen, W.; Zhang, G.; Hsu, L.T. Correcting NLOS by 3D LiDAR and building height to improve GNSS single point positioning. Navigation 2019, 66, 705–718.
  9. Wen, W.W.; Zhang, G.; Hsu, L.T. GNSS NLOS exclusion based on dynamic object detection using LiDAR point cloud. IEEE Trans. Intell. Transp. Syst. 2019, 22, 853–862.
  10. Xia, Y.; Pan, S.; Meng, X.; Gao, W.; Ye, F.; Zhao, Q.; Zhao, X. Anomaly detection for urban vehicle GNSS observation with a hybrid machine learning system. Remote Sens. 2020, 12, 971.
  11. Attia, D.; Meurie, C.; Ruichek, Y.; Marais, J. Counting of satellites with direct GNSS signals using Fisheye camera: A comparison of clustering algorithms. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011.
  12. Kato, S.; Kitamura, M.; Suzuki, T.; Amano, Y. NLOS satellite detection using a fish-eye camera for improving GNSS positioning accuracy in urban area. J. Robot. Mechatronics 2016, 28, 31–39.
  13. Han, J.Y.; Juan, T.H. Image-based approach for satellite visibility analysis in critical environments. Acta Geod. Geophys. 2016, 51, 113–123.
  14. Suzuki, T.; Amano, Y. NLOS Multipath Classification of GNSS Signal Correlation Output Using Machine Learning. Sensors 2021, 21, 2503.
  15. Julien, M.; Sébastien, A.; Yassine, R. Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas. Sensors 2017, 17, 119.
  16. Shen, Y.; Wang, Q. Sky Region Detection in a Single Image for Autonomous Ground Robot Navigation. Int. J. Adv. Robot. Syst. 2013, 10, 362.
  17. Geyer, C.; Daniilidis, K. A unifying theory for central panoramic systems and practical implications. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2000; pp. 445–461.
  18. Barreto, J.P.; Araújo, H. Issues on the geometry of central catadioptric image formation. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001.
  19. Mei, C.; Rives, P. Single view point omnidirectional camera calibration from planar grids. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3945–3950.
  20. Ettinger, S.M.; Nechyba, M.C.; Ifju, P.G.; Waszak, M. Vision-guided flight stability and control for micro air vehicles. Adv. Robot. 2003, 17, 617–640.
  21. Observables—GNSS-SDR. Available online: https://gnss-sdr.org/docs/sp-blocks/observables/ (accessed on 6 June 2022).
  22. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  23. Sarunic, P.W. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications; Technical Report; Defence Science and Technology Group: Edinburgh, Australia, 2016.
  24. Wen, W.; Zhou, Y.; Zhang, G.; Fahandezh-Saadi, S.; Bai, X.; Zhan, W.; Tomizuka, M.; Hsu, L.T. UrbanLoco: A full sensor suite dataset for mapping and localization in urban scenes. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2310–2316.
  25. Zhang, G.; Xu, B.; Hsu, L.T. GNSS shadow matching based on intelligent LOS/NLOS classifier. In Proceedings of the 16th IAIN World Congress, Chiba, Japan, 28 November–1 December 2018; pp. 1–7.
Figure 1. Two examples of reflections on the surfaces of tall buildings. (a) There are a large number of bright areas on the surfaces of buildings with colors similar to the sky. (b) Reflective windows are present on the surfaces of buildings.
Figure 2. Flowchart of GNSS positioning with vision-aided NLOS identification.
Figure 3. Flowchart of the method of sky region segmentation.
Figure 4. The projection model of an omnidirectional camera.
Figure 5. Results of omnidirectional image rectification. (a) Image captured by the omnidirectional camera. (b) Image after rectification.
Figure 6. Results of sky region detection. (a) Image after rectification. (b) The sky region shown in red. (c) The sky region and non-sky region in the binarized image.
Figure 7. The results of sky region detection when there is a flyover blocking the field of view of the camera. (a) Image after rectification. (b) The algorithm incorrectly identifies the flyover region as the sky region. (c) The wrong result in the binarized image.
Figure 8. Results of the improved sky region detection algorithm in a practical test of the case of a flyover blocking the field of view of the omnidirectional camera. (a) Temporarily treating the flyover region as part of the sky region. (b) Sky region detection. (c) Restoration of the flyover region.
Figure 9. Sky region segmentation results affected by sky reflections. (a) Sky region segmentation algorithm results in discontinuous sky boundaries. (b) The result of binarization after converting back to the omnidirectional image.
Figure 10. The correct recovery of regions misidentified as part of the sky region through image morphology processing. (a) The result of erosion operation. (b) The result of dilation operation. (c) The original image.
Figure 11. The platform for data collection in Hong Kong is a Honda Fit. All localization-related sensors are mounted in a compact sensor kit on top of the vehicle.
Figure 12. Qualitative evaluation of sky region segmentation. The first column shows the original images, the second column shows the rectified images, the third column shows the images after sky region segmentation, where the sky region is represented in red; the fourth column shows the binarized images obtained from images in the third column; and the fifth column shows the results of image morphology processing after conversion to omnidirectional images.
Figure 13. Quantitative evaluation of sky region segmentation. The first column shows the images containing the truth values of the sky regions that we extracted manually. The second column shows the benchmark images. The third column shows the results of our sky region segmentation algorithm, where the red pixels represent the parts of the sky region that are incorrectly detected by our algorithm.
Figure 14. Results of NLOS signal rejection. (a) The satellite skyplot. (b) NLOS signal rejection results. Green circles represent LOS satellites, and red circles represent NLOS satellites.
Figure 15. Modeling results in Rhino. (a) Rendering mode. (b) Coloring mode.
Figure 16. NLOS signal rejection results for the other two images. (a) NLOS signal rejection results of the first image. (b) Modeling results of the first image. (c) NLOS signal rejection results of the second image. (d) Modeling results of the second image.
Figure 17. Complete driving environment and trajectory.
Figure 18. Positioning results of the three methods.
Figure 19. Positioning error. (a) Positioning error in the eastward orientation. (b) Positioning error in the northward orientation. (c) Positioning error in the upward orientation. (d) 3D positioning error.
Figure 20. Boxplots of the positioning errors.
Figure 21. Comparison of the number of satellites.
Table 1. Accuracy of sky region segmentation on four images.

Image      P_c
Image 1    97.05%
Image 2    98.68%
Image 3    97.57%
Image 4    96.89%
Table 2. RMS positioning results with different methods.

Method          East (m)   North (m)   Up (m)   3D (m)
GNSS/Raw        15.16      12.55       49.48    53.25
GNSS/SNR&ELE     9.39      13.26       31.58    35.51
GNSS/Vision      6.02      11.00       18.17    22.08
Table 3. Improvements in positioning accuracy expressed as percentages.

Method          East (%)   North (%)   Up (%)   3D (%)
GNSS/SNR&ELE    38.1       −5.6        36.2     33.3
GNSS/Vision     60.3       12.4        63.3     58.5
Table 4. Numbers of available satellites for the HK-Data20190426-2 dataset.

Satellites      Mean Number   Max Number   Min Number
GNSS/Raw        13.6          18           7
GNSS/SNR&ELE    11.4          16           5
GNSS/Vision      9.2          13           2