Article

Design of Dynamic Multi-Obstacle Tracking Algorithm for Intelligent Vehicle

School of Transportation and Vehicle Engineering, Shandong University of Technology, Zibo 255000, China
*
Author to whom correspondence should be addressed.
World Electr. Veh. J. 2023, 14(2), 39; https://doi.org/10.3390/wevj14020039
Submission received: 28 November 2022 / Revised: 23 January 2023 / Accepted: 31 January 2023 / Published: 2 February 2023

Abstract

Environmental perception forms the basis of intelligent driving systems and is a prerequisite for path planning and vehicle control. Within it, dynamic multi-obstacle tracking is the key to environmental perception. To reduce the large number of association calculations and the false associations that arise in dynamic multi-obstacle tracking, and to obtain more accurate information about the surrounding environment, this paper first designs an obstacle data-association algorithm based on an improved joint probabilistic data-association (JPDA) algorithm. Then, to cope with the maneuverability of obstacle motion and the poor filtering performance of a single model, an interacting multiple model is designed to filter the multiple behavior patterns of obstacles. An obstacle state estimation algorithm based on the unscented Kalman filter is designed to handle the nonlinearity of obstacle motion. Finally, an experimental prototype is built and tested. The results show that the data-association algorithm designed in this paper completes the data association of obstacles at different times without losing obstacles or associating them incorrectly. The average running time is 51.63 ms per frame. A comparison between the proposed method and the traditional method shows that the proposed method is more effective.

1. Introduction

Intelligent vehicles play an important role in reducing traffic accidents, alleviating traffic congestion and reducing environmental pollution, and they are an important direction for the development of the automotive industry. An intelligent vehicle consists of four parts: environmental perception, localization, path planning and control [1]. The environmental perception system obtains information about the surrounding environment through the sensors carried by the intelligent vehicle, which provides a reliable basis for the subsequent localization and path planning of the vehicle [2].
Lidar is widely used in the environment perception of intelligent vehicles because of its wide field of view, high range and angular resolution, strong anti-interference ability, insensitivity to lighting conditions and ability to capture rich environmental information [3]. Zhang C Y et al. [4] proposed an improved DBSCAN algorithm to improve the obstacle detection performance of a Lidar system. Li J et al. [5] proposed an obstacle detection method based on multi-frame point cloud fusion and ground plane estimation to achieve efficient detection and parameter acquisition. Liu C et al. [6] proposed an improved Euclidean clustering algorithm based on the point cloud shot-line angle constraint to make obstacle detection faster and more accurate.
Dynamic multi-obstacle tracking is an important function of environmental perception [7]. Building on the target detection results, it applies a data-association algorithm to associate the detection data of the same target at different moments [8] and uses a filtering algorithm to update and maintain each target [9]; it is therefore key to the efficient and safe operation of intelligent vehicles [10].
There are a large number of dynamic obstacles, such as vehicles and pedestrians, during driving. To ensure safety, it is necessary to continuously track obstacles and predict their motion state. Predicting an obstacle's motion requires its current position, velocity and heading angle, but only the position can be obtained from a single frame of point cloud data; the velocity and heading angle must be derived from the position changes across several consecutive frames [11]. Tracking consists of two steps: data association and state estimation [12]. Data association refers to associating the detected data of the same target in different frames. State estimation refers to applying a filter to predict the motion state of obstacles and obtain their position, velocity and heading angle.
At present, data association mostly uses Bayesian algorithms, including nearest neighbor association, the multiple-hypothesis tracking algorithm, probabilistic data association (PDA) and the joint probabilistic data-association (JPDA) algorithm [13,14,15,16]. The nearest neighbor association algorithm requires few calculations and is fast, with a good association effect in sparse-target environments; however, when the number of targets around the intelligent vehicle is large, it easily associates incorrectly. The association result of the multiple-hypothesis tracking algorithm depends heavily on the prior information of the targets, and its real-time performance is poor when the amount of data is large. When the PDA algorithm tracks multiple obstacles, or the targets are too close to each other, incorrect associations are likely.
Obstacle tracking refers to estimating the motion state of the current frame according to the data-association results and reasonably predicting the motion state of the next frame. Common obstacle state estimation algorithms are the Kalman filter and the particle filter [17,18]. For nonlinear system state estimation, the Kalman filter produces large errors. The estimation accuracy of the particle filter is better than that of the Kalman filter, but it is time-consuming, making real-time performance difficult to guarantee.
The accuracy of data association and the cost of the association calculations still leave room for improvement, as does the real-time and accurate estimation and updating of the different motion states of obstacles. Therefore, as shown in Figure 1, an improved JPDA is proposed that introduces more parametric features of the obstacles to further filter the valid measurements and reduce the computational effort of the algorithm. The interacting multiple model (IMM) is designed to deal with the tracking problem of different motion models of obstacles. The unscented Kalman filter (UKF) is designed to update the motion state of the obstacles.

2. Materials and Methods

2.1. Design of Improved Joint Probability Data-Association Algorithm

2.1.1. Tracking Gate Establishment of Improved JPDA

The basic idea of the JPDA is that every effective observation may come from each specific target, but with different probabilities. Because data falling into the intersection area of tracking gates may be associated with multiple targets, the association probability between the observed data and each target must be calculated. The advantage of this algorithm is that it does not require prior information about the targets and clutter, and it can handle multi-target matching problems in complex situations. The disadvantage is that when the number of detected targets increases, the calculation becomes complex, which affects real-time performance.
Establishing a tracking gate is the first step of data association; the gate is set according to the spatial position and distance attributes of the target. The range of the measured value at the next moment is estimated from the characteristics of the target (motion velocity, number of point clouds, position information, etc.), and a threshold value is used to judge whether a measured value is valid. In the improved JPDA, in order to further reduce the number of data-association calculations in complex environments, prior information such as the laser reflection intensity, height and heading angle of the target is taken into account to further screen the measured values and select the most probable ones for data association.
The establishment of the tracking gate involves setting the threshold value centered on the predicted target position of the previous frame, so that the measured values of adjacent frames are more likely to fall within the tracking gate. The effect of data association depends on the size of the threshold value. The smaller the threshold value is, the more likely it is that a valid measurement value is lost. If the threshold value is too large, the limiting effect on other invalid measurement values will be lost, so the setting of the threshold value is very important. Therefore, this paper selects an ellipse tracking gate for data association, and the specific steps are as follows.
The prediction of the state and covariance matrix at time k is obtained from Equation (1):
$$\hat{X}_{k|k-1} = f_{k-1}\big(\hat{X}_{k-1|k-1}\big), \qquad \hat{Z}_{k|k-1} = H_k\big(\hat{X}_{k|k-1}\big), \qquad \hat{P}_{k|k-1} = f_{k-1}\, P_{k-1|k-1}\, f_{k-1}^{T} + Q_{k-1}$$
where $\hat{X}_{k|k-1}$ is the state vector at time k predicted from time k − 1, $f_{k-1}$ is the state transition function, $\hat{P}_{k|k-1}$ is the corresponding predicted covariance matrix and $\hat{Z}_{k|k-1}$ is the measurement at time k predicted from the information at time k − 1.
Suppose the observation equation of the system is shown in Equation (2):
$$Z_k = H_k X_k + V_k$$
where $Z_k$ is the measured value of the sensor at time k, $H_k$ is the observation matrix, $X_k$ is the state vector at time k and $V_k$ is the measurement noise at time k.
Define the residual vector $d_k$ as expressed by Equation (3):
$$d_k = Z_k - \hat{Z}_{k|k-1}$$
The residual covariance matrix is:
$$M_k = H_k\, \hat{P}_{k|k-1}\, H_k^{T} + R_k$$
where $R_k$ is the measurement noise covariance matrix.
When a measured value satisfies Equation (5), it lies within the tracking gate and is regarded as an effective measurement:
$$e_k = d_k^{T} M_k^{-1} d_k \le \gamma$$
where $\gamma$ is the ellipse tracking gate threshold, as shown in Figure 2.
As can be seen from Figure 2, Z1, Z2 and Z3 are the measured values. The green rectangle in the middle is the position of the predicted target, that is, the center of the tracking gate. The Z2 measurement value is a valid measurement value in the tracking gate.
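As a concrete illustration of Equation (5), the gate test can be sketched as follows. The measurement values, the residual covariance and the threshold γ = 3 (the value used in Section 3.1) are illustrative assumptions, not data from the experiments.

```python
import numpy as np

def in_ellipse_gate(z, z_pred, M, gamma=3.0):
    """Elliptical gate test of Equation (5): return (is_valid, e_k)."""
    d = z - z_pred                        # residual d_k = Z_k - Z_hat_{k|k-1}
    e = float(d @ np.linalg.inv(M) @ d)   # e_k = d_k^T M_k^{-1} d_k
    return e <= gamma, e

# Illustrative 2-D example: predicted target position and three measurements.
z_pred = np.array([10.0, 5.0])
M = np.array([[0.5, 0.0],
              [0.0, 0.5]])               # residual covariance M_k
for z in (np.array([12.5, 5.0]),         # Z1: far from the prediction
          np.array([10.3, 5.2]),         # Z2: close to the prediction
          np.array([10.0, 8.0])):        # Z3: far from the prediction
    print(in_ellipse_gate(z, z_pred, M))
```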
The area of the elliptical tracking gate is expressed by Equation (6):
$$S_k = \pi \gamma \left| M_k \right|^{1/2}$$
By calculating the distance between the measured value and the predicted value of the target and judging whether the measured value is within the elliptical tracking gate, it can be seen that the effective measured value has a certain probability of belonging to the tracked target. Using the parameter characteristics of obstacles, such as the position, height, heading angle and laser reflection intensity of the center point, the parameters of obstacles in the front and back frames can be compared to further screen the measured values and reduce the number of calculations.
Regarding the height characteristic of obstacles: because of mutual occlusion or changing Lidar viewing angles, the length and width of the same target change at different times, but its height does not. If the height difference of an obstacle in two adjacent frames is not within the set height threshold, the detections do not belong to the same target.
Regarding the heading angle characteristic: one frame of point cloud data spans 0.1 s, and the heading angle changes little in such a short time. Therefore, measurements can be screened by checking whether the change of the target heading angle is within a certain threshold.
Regarding the laser reflection intensity characteristic: the reflected laser intensity is mainly determined by an obstacle's material, color and surface roughness, so it is essentially unchanged across different acquisition cycles. According to this characteristic, it can be judged whether two detections are the same target. Together, the above methods further reduce the number of calculations required for data association.
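These additional checks amount to a few threshold comparisons applied before any association probability is computed. The sketch below illustrates the idea; the threshold values and the dictionary-based obstacle description are assumptions made for illustration, since the paper does not report the exact values used.

```python
import numpy as np

# Illustrative thresholds (assumed values, not taken from the paper).
HEIGHT_THRESH = 0.3                    # m, allowed height change between frames
HEADING_THRESH = np.deg2rad(15.0)      # rad, allowed heading change within 0.1 s
INTENSITY_THRESH = 10.0                # allowed change of mean reflection intensity

def passes_feature_screen(prev, curr):
    """Pre-screen a candidate measurement against a tracked obstacle using the
    height, heading-angle and laser-reflection-intensity features."""
    if abs(curr["height"] - prev["height"]) > HEIGHT_THRESH:
        return False
    d_psi = abs(curr["heading"] - prev["heading"])
    d_psi = min(d_psi, 2.0 * np.pi - d_psi)        # wrap the angle difference
    if d_psi > HEADING_THRESH:
        return False
    if abs(curr["intensity"] - prev["intensity"]) > INTENSITY_THRESH:
        return False
    return True
```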

2.1.2. Data Association of Improved JPDA

After the measured values are processed by the tracking gate, there may be a situation that one target corresponds to multiple measured values and one measured value is in multiple target tracking gates, as shown in Figure 3.
T1 and T2 are two tracking targets that still have three measurement values of Z1, Z2 and Z3 after tracking gate processing. There are two effective measurement values of Z1 and Z2 in the T1 target tracking gate, and there are two effective measurement values of Z2 and Z3 in the T2 target tracking gate. The measurement value of Z2 exists in the intersection area of the two target tracking gates. A target has only one real measurement value corresponding to it, so the joint probabilistic data-association algorithm is used to calculate the matching probability between the target and the effective measurement value. This algorithm is suitable for multi-objective data association in dense scenes. The joint probability needs to be calculated to determine which target the Z2 measurement value belongs to.
Assuming that the state of the target and the observation equation are expressed, respectively, by Equations (7) and (8)
$$X^{t}(k+1) = F^{t}(k)\, X^{t}(k) + W^{t}(k)$$
$$Z(k) = H(k)\, X^{t}(k) + V(k)$$
where $X^{t}(k)$ is the state of target t at time k, $F^{t}(k)$ is the state transition matrix, $W^{t}(k)$ is the process noise, $H(k)$ is the measurement matrix, $V(k)$ is the measurement noise, k = 0, 1, 2, … and t = 1, 2, …, T. The JPDA algorithm introduces a confirmation matrix to represent the relationship between the valid measurements and the targets. The confirmation matrix is represented by Equation (9).
$$\Omega = (\omega_{jt}) = \begin{bmatrix} \omega_{10} & \cdots & \omega_{1n} \\ \vdots & & \vdots \\ \omega_{m_k 0} & \cdots & \omega_{m_k n} \end{bmatrix}, \quad j = 1, 2, \ldots, m_k,\; t = 0, 1, \ldots, n$$
where j = 1, 2, …, $m_k$ indexes the effective measurements in the elliptical tracking gate and t indexes the targets; t = 0 means that a measurement originates from no target, and the corresponding first column of $\Omega$ is all ones, indicating that every measurement may come from noise or interference. $\omega_{jt}$ indicates whether the j-th effective measurement lies in the tracking gate of target t:
$$\omega_{jt} = \begin{cases} 1, & \text{the effective measurement is in the tracking gate} \\ 0, & \text{the effective measurement is not in the tracking gate} \end{cases}$$
In this algorithm, the set of all possible joint association events at time k is denoted by $\theta(k) = \{\theta_i(k)\}_{i=1}^{\theta_k}$, where $\theta_k$ is the number of elements of $\theta(k)$ and $\theta_i(k)$ is the i-th joint event, which represents one possible assignment of the $m_k$ effective measurements and is expressed by Equation (11):
$$\theta_i(k) = \bigcap_{j=1}^{m_k} \theta_{jt}^{i}(k)$$
where $\theta_{jt}^{i}(k)$ is the event, within the i-th joint event, that measurement j originates from target t, and Equation (12) denotes the event that measurement j originates from target t:
$$\theta_{jt}(k) = \bigcup_{i=1}^{n} \theta_{jt}^{i}(k)$$
A joint event $\theta_i(k)$ can be expressed as a matrix by Equation (13):
$$\hat{\Omega}\big(\theta_i(k)\big) = \Big(\hat{\omega}_{jt}^{i}\big(\theta_i(k)\big)\Big), \quad j = 1, 2, \ldots, m_k;\; i = 1, 2, \ldots, \theta_k$$
$$\hat{\omega}_{jt}^{i}\big(\theta_i(k)\big) = \begin{cases} 1, & \text{measurement } j \text{ comes from target } t \\ 0, & \text{measurement } j \text{ does not come from target } t \end{cases}$$
The definition of feasible events needs to meet the following two conditions. The matrix corresponding to the joint feasible events is the feasible matrix.
① Each measurement can only come from one target or from clutter; correspondingly, each row of the feasible matrix contains exactly one nonzero element.
② Each target has at most one effective measurement; correspondingly, in every column of the feasible matrix except the first, the number of nonzero elements is less than or equal to 1.
The measurement association indicator $\tau_j\big(\theta_i(k)\big)$, defined by Equation (15), indicates whether the j-th measurement in the joint event $\theta_i(k)$ is associated with a target:
$$\tau_j\big(\theta_i(k)\big) = \begin{cases} 1, & t_j > 0 \\ 0, & t_j = 0 \end{cases}$$
The target detection indicator $\delta_t\big(\theta_i(k)\big)$, defined by Equation (16), indicates whether target t is detected by some effective measurement in the joint event $\theta_i(k)$:
$$\delta_t\big(\theta_i(k)\big) = \begin{cases} 1, & \text{some measurement } j \text{ detects target } t \\ 0, & \text{no measurement } j \text{ detects target } t \end{cases}$$
According to the above definitions, the number of clutter measurements in a joint event is given by Equation (17):
$$\phi\big(\theta_i(k)\big) = \sum_{j=1}^{m_k} \Big[1 - \tau_j\big(\theta_i(k)\big)\Big]$$
The Bayesian conditional probability of a joint event $\theta_i(k)$ at time k is:
$$P\big\{\theta_i(k) \mid Z^k\big\} = P\big\{\theta_i(k) \mid Z(k), Z^{k-1}\big\} = \frac{1}{c}\, P\big\{Z(k) \mid \theta_i(k), Z^{k-1}\big\}\, P\big\{\theta_i(k)\big\}$$
where c is the normalization factor, $Z^k$ denotes all measurement sets from the beginning up to time k and $Z(k)$ is the measurement set at time k.
The Bayesian conditional probability with uniform distribution is as follows.
$$P\big\{\theta_i(k) \mid Z^k\big\} = \frac{1}{c}\, \frac{\phi\big(\theta_i(k)\big)!}{S^{\phi(\theta_i(k))}} \prod_{j=1}^{m_k} \Big[N_{jt}\big(Z_j(k)\big)\Big]^{\tau_j(\theta_i(k))} \prod_{t=1}^{n} \big(P_D^t\big)^{\delta_t(\theta_i(k))} \big(1 - P_D^t\big)^{1 - \delta_t(\theta_i(k))}$$
where c is the normalization constant, $P_D^t$ is the detection probability of tracked target t and S is the volume of the elliptical tracking gate.
The association probability between the j-th effective measurement and target t is finally calculated by Equation (20):
$$\beta_{jt} = \sum_{i=1}^{\theta_k} P\big\{\theta_i(k) \mid Z^k\big\}\, \hat{\omega}_{jt}^{i}\big(\theta_i(k)\big)$$
where $\beta_{jt}$ is the probability that the j-th measurement is associated with target t; $P\{\theta_i(k) \mid Z^k\}\, \hat{\omega}_{jt}^{i}(\theta_i(k))$ is the probability that the two events occur simultaneously, $P\{\theta_i(k) \mid Z^k\}$ is the probability of the i-th joint feasible event given all effective observations and $\hat{\omega}_{jt}^{i}(\theta_i(k))$ indicates whether measurement j in the joint event $\theta_i(k)$ is associated with target t.
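To make the bookkeeping of Equations (9)–(20) concrete, the sketch below enumerates the joint feasible events for a small validation matrix and evaluates the association probabilities β_jt according to Equations (19) and (20). The Gaussian likelihood values, detection probability and gate volume are illustrative assumptions, and a common detection probability is used for all targets; this is a schematic of the calculation, not the authors' implementation.

```python
import numpy as np
from itertools import product
from math import factorial

def jpda_beta(omega, lik, P_D, S):
    """Association probabilities beta[j, t] of Equation (20).

    omega : (m, n+1) validation matrix with column 0 = clutter, Equation (9)
    lik   : (m, n) Gaussian likelihoods of measurement j under target t
    P_D   : detection probability, S : volume of the elliptical tracking gate
    """
    m, n1 = omega.shape
    # candidate origins of each measurement: clutter (0) plus the gated targets
    options = [[t for t in range(n1) if omega[j, t] == 1] for j in range(m)]
    beta = np.zeros((m, n1))
    c = 0.0
    for event in product(*options):             # one joint event theta_i(k)
        assigned = [t for t in event if t > 0]
        if len(assigned) != len(set(assigned)): # condition 2: <= 1 measurement per target
            continue
        phi = event.count(0)                    # number of clutter measurements
        p = factorial(phi) / S ** phi           # clutter term of Equation (19)
        for j, t in enumerate(event):
            if t > 0:
                p *= lik[j, t - 1]              # measurement likelihood term
        for t in range(1, n1):                  # detection / missed-detection term
            p *= P_D if t in event else (1.0 - P_D)
        c += p
        for j, t in enumerate(event):
            beta[j, t] += p                     # accumulate Equation (20)
    return beta / c

# Example of Figure 3: targets T1, T2 and measurements Z1, Z2, Z3.
omega = np.array([[1, 1, 0],    # Z1 gates with T1 only
                  [1, 1, 1],    # Z2 lies in the intersection of both gates
                  [1, 0, 1]])   # Z3 gates with T2 only
lik = np.array([[0.30, 0.00],
                [0.20, 0.25],
                [0.00, 0.35]])  # illustrative likelihood values
print(jpda_beta(omega, lik, P_D=0.9, S=4.0))    # column 0 = clutter probability
```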

2.2. Obstacle State Estimation Algorithm Design

2.2.1. Unscented Kalman Filter-Based Obstacle State Estimation

(1)
Prediction
Firstly, the number of sigma points is 2n + 1, according to the system state dimension n. The selection methods of sigma points are mainly symmetric sampling and proportional sampling, etc. In this paper, we choose symmetric sampling, the first point of which is selected as the mean point, and the rest of the points are selected by Equation (21):
$$\begin{aligned} \chi^{[0]} &= x_{k-1}, && i = 0 \\ \chi^{[i]} &= x_{k-1} + \Big(\sqrt{(n+\lambda)\, P_{k-1}}\Big)_i, && i = 1, \ldots, n \\ \chi^{[i]} &= x_{k-1} - \Big(\sqrt{(n+\lambda)\, P_{k-1}}\Big)_{i-n}, && i = n+1, \ldots, 2n \end{aligned}$$
where $\chi$ is the matrix of sigma points: its number of rows equals the dimension of the system state and its number of columns is twice the state dimension plus one. From Equation (21), the upper and lower sigma points of the same column are symmetrically distributed. $\lambda$ is the scale factor, generally taken as 3 − n; a larger $\lambda$ places the sigma points farther from the mean and a smaller $\lambda$ places them closer to it. $x_{k-1}$ and $P_{k-1}$ are the state vector and covariance matrix at time k − 1, and the subscript i denotes the i-th column of the matrix square root.
The sampled points are transformed by substituting a nonlinear function.
$$y_i = g\big(\chi^{[i]}\big), \quad i = 0, 1, 2, \ldots, 2n$$
The next step is to calculate the weights of each sigma point.
$$w^{[0]} = \frac{\lambda}{n+\lambda}, \quad i = 0; \qquad w^{[i]} = \frac{1}{2(n+\lambda)}, \quad i = 1, \ldots, 2n$$
The formula for calculating the weights of the means is different from that of the other sigma points, but the sum of all weights is equal to 1. The sigma points with weights after the nonlinear transformation are now obtained. Finally, the new Gaussian distribution is re-approximated by the sigma points with weights and its mean and covariance are calculated.
$$\hat{x}_{k|k-1} = \sum_{i=0}^{2n} w^{[i]}\, y_i, \qquad P_{k|k-1} = \sum_{i=0}^{2n} w^{[i]} \big(y_i - \hat{x}_{k|k-1}\big)\big(y_i - \hat{x}_{k|k-1}\big)^{T} + Q_t$$
where $\hat{x}_{k|k-1}$ is the predicted state vector (the weighted sum of the transformed sigma points), $P_{k|k-1}$ is the predicted covariance (the weighted sum of the sigma point deviations), $w^{[i]}$ is the weight of the i-th sigma point, $y_i$ is the nonlinearly transformed sigma point, n is the system state dimension and $Q_t$ is the process noise. This completes the prediction step of the UKF.
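A compact sketch of this prediction step (Equations (21)–(24)) is given below. `process_model` stands for the nonlinear function g, for example the CTRV model of Section 2.3, and is supplied by the caller; the symmetric sampling and the weights follow Equations (21) and (23).

```python
import numpy as np

def sigma_points(x, P, lam=None):
    """Symmetric sigma points of Equation (21) and their weights, Equation (23)."""
    n = x.size
    lam = (3.0 - n) if lam is None else lam
    sqrt_m = np.linalg.cholesky((n + lam) * P)      # matrix square root of (n+lam)P
    chi = np.tile(x, (2 * n + 1, 1))                # row i is one sigma point
    chi[1:n + 1] += sqrt_m.T                        # + i-th column of the square root
    chi[n + 1:] -= sqrt_m.T                         # - i-th column of the square root
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)                          # weight of the mean point
    return chi, w

def ukf_predict(x, P, process_model, dt, Q):
    """UKF prediction, Equation (24): propagate the sigma points and
    re-approximate the Gaussian by their weighted mean and covariance."""
    chi, w = sigma_points(x, P)
    y = np.array([process_model(p, dt) for p in chi])   # Equation (22)
    x_pred = w @ y
    dy = y - x_pred
    P_pred = dy.T @ (w[:, None] * dy) + Q
    return x_pred, P_pred, y, w
```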
(2)
Measurement Updates
After obtaining the predicted state vector and covariance and receiving the measurements from the sensors, the difference between the prediction and the measurement is needed; the first step is therefore to convert the prediction from the state space to the measurement space:
$$Z = h(\chi)$$
Further, the state vector and covariance of the predicted measurements can be calculated.
$$\hat{z} = \sum_{i=0}^{2n} w^{[i]}\, Z^{[i]}, \qquad S = \sum_{i=0}^{2n} w^{[i]} \big(Z^{[i]} - \hat{z}\big)\big(Z^{[i]} - \hat{z}\big)^{T} + R$$
where $Z^{[i]}$ are the sigma points transformed into the measurement space, $\hat{z}$ is the predicted measurement (the state vector in the measurement space), S is the covariance in the measurement space and R is the measurement noise.
Assuming that Y is the cross-correlation matrix of sigma points in the state space and sigma points in the measurement space:
$$Y = \sum_{i=0}^{2n} w^{[i]} \big(\chi^{[i]} - \hat{x}_{k|k-1}\big)\big(Z^{[i]} - \hat{z}\big)^{T}$$
The Kalman gain is:
$$K = Y S^{-1}$$
K is the Kalman gain, which indicates whether the final estimate is closer to the predicted or measured value. The larger K is, the closer the final estimate is to the measured value, and vice versa, the closer it is to the predicted value.
The final updated mean and covariance are:
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K\big(z_k - \hat{z}\big), \qquad \hat{P}_{k|k} = \hat{P}_{k|k-1} - K Y^{T}$$
where $\hat{x}_{k|k}$ is the updated state vector, $\hat{x}_{k|k-1}$ is the predicted state vector, $\hat{P}_{k|k}$ is the updated covariance, $\hat{P}_{k|k-1}$ is the predicted covariance, $z_k$ is the actual measurement from the sensor, $\hat{z}$ is the predicted measurement in the measurement space and Y is the cross-correlation matrix.
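Continuing the sketch, the measurement update of Equations (25)–(29) maps the propagated sigma points into the measurement space and applies the gain K. `measurement_model` stands for h and is supplied by the caller; the covariance is updated as P − K S Kᵀ, which equals P − K Yᵀ since K = Y S⁻¹.

```python
import numpy as np

def ukf_update(x_pred, P_pred, chi_pred, w, z, measurement_model, R):
    """UKF measurement update, Equations (25)-(29).
    chi_pred, w : propagated sigma points and weights from ukf_predict."""
    Z = np.array([measurement_model(p) for p in chi_pred])  # Equation (25)
    z_hat = w @ Z                                           # predicted measurement
    dZ = Z - z_hat
    S = dZ.T @ (w[:, None] * dZ) + R                        # measurement covariance
    dX = chi_pred - x_pred
    Y = dX.T @ (w[:, None] * dZ)                            # cross-correlation, Equation (27)
    K = Y @ np.linalg.inv(S)                                # Kalman gain, Equation (28)
    x_upd = x_pred + K @ (z - z_hat)                        # state update, Equation (29)
    P_upd = P_pred - K @ S @ K.T                            # equals P_pred - K @ Y.T
    return x_upd, P_upd
```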

2.2.2. Interacting Multiple Model Construction

In the real environment, the tracked target does not necessarily move in a well-defined motion pattern, and even if there is a perfect motion model representing the target’s motion trajectory, there is no guarantee that the target will always move in this model. Therefore, in this paper, we design the IMM to handle the state estimation of targets with different motion models.
The IMM algorithm is composed of several filters corresponding to different models. Each filter works in parallel and interacts to obtain the final state estimation [19]. The IMM runs the state estimation filter by using multiple models representing different potential maneuvering targets and ranks the most likely estimation results so that it can generate better estimates for targets with fuzzy dynamic behavior. The output of a typical IMM is the probability-weighted combination of each filter, or the filter with the highest probability [20,21]. In this paper, the nonlinear Kalman filter model is selected.
Firstly, the input of the j-th filter is generated by each individual filter interaction. Then, the output of each model is calculated based on the state estimate, corresponding to the covariance and measurement values based on the unscented Kalman filter. Additionally, the predicted value is compared to the current measurement value to update the model probability, and the computed probability of the target model is added to the corresponding filter to update the state vector and covariance matrix information. Finally, the output of the model is multiplied with the corresponding model probabilities and summed to obtain the final target state estimate. Assuming that the system with motion uncertainty is a jumping Markov linear system (JMLS), the system motion model and measurement model can be derived as follows:
$$x_{k+1} = f_j(x_k, u_k) + w_{j,k}, \qquad z_k = h_j(x_k, u_k) + v_{j,k}$$
where $j = 1, 2, \ldots, r$ indexes the model set $M = \{M_j\}_{j=1}^{r}$, and $w_{j,k}$ and $v_{j,k}$ are the zero-mean Gaussian process noise and measurement noise, respectively. The posterior probability of the IMM can be derived as:
$$p(x_k, M_k \mid z_k)$$
Based on all measurements $z_k$, the state variables $x_k$ and the motion models $M_k$ up to moment k, this probability density can be calculated and decomposed using conditional probabilities as:
$$p(x_k, M_k \mid z_k) = p(x_k \mid M_k, z_k)\, p(M_k \mid z_k) = \sum_{j=1}^{r} p(x_k \mid M_{j,k}, z_k)\, \underbrace{p(M_{j,k} \mid z_k)}_{\mu_{j,k}}$$
where $\mu_{j,k}$ is the posterior probability that the target motion at moment k matches the dynamics of the kinematic model of filter j. The mixed prior of filter j is obtained from the model-conditioned densities at moment k − 1:
$$p(x_{k-1} \mid M_{j,k}, z_{k-1}) = \sum_{i=1}^{r} p(x_{k-1} \mid M_{i,k-1}, z_{k-1})\, \mu_{i|j,k-1}$$
$$\mu_{i|j,k-1} = \frac{p_{ij}\, \mu_{i,k-1}}{\sum_{i=1}^{r} p_{ij}\, \mu_{i,k-1}} = \frac{p_{ij}\, \mu_{i,k-1}}{\bar{\mu}_{j,k}}$$
Among them, $p_{ij}$ is the transition probability from model index i to j. The model transition matrix is shown in Equation (35). Additionally, $\bar{\mu}_{j,k}$ is the predicted model matching probability at moment k.
$$\Gamma = \begin{bmatrix} p_{1,1} & \cdots & p_{r,1} \\ \vdots & & \vdots \\ p_{1,r} & \cdots & p_{r,r} \end{bmatrix}$$
The IMM merges the state $\hat{x}_{i,k-1}$ and covariance $P_{i,k-1}$ of each model filter, and the input state $\hat{x}^{*}_{j,k-1}$ and covariance $P^{*}_{j,k-1}$ of the j-th filter are calculated through the interaction:
$$\hat{x}^{*}_{j,k-1} = \sum_{i=1}^{r} \mu_{i|j,k-1}\, \hat{x}_{i,k-1}, \qquad P^{*}_{j,k-1} = \sum_{i=1}^{r} \mu_{i|j,k-1} \Big[ P_{i,k-1} + \big(\hat{x}_{i,k-1} - \hat{x}^{*}_{j,k-1}\big)\big(\hat{x}_{i,k-1} - \hat{x}^{*}_{j,k-1}\big)^{T} \Big]$$
The states $\hat{x}^{*}_{j,k-1}$, covariances $P^{*}_{j,k-1}$ and measurements obtained from the interaction of each filter are substituted into the unscented Kalman filter of the previous subsection, and $x_{j,k}$ and $P_{j,k}$ are obtained from the filter prediction formula. Together with the predicted measurements $\hat{z}_{j,k}$ and covariances $S_{j,k}$, the output states $\hat{x}_{j,k}$ and covariances $P_{j,k}$ of each model are obtained from the filter update formula.
The model probability reflects how well the current measurement fits each motion model and is updated as:
$$\mu_{j,k} = \frac{\lambda_{j,k}\, \bar{\mu}_{j,k}}{\sum_{i=1}^{r} \lambda_{i,k}\, \bar{\mu}_{i,k}}$$
where $\lambda_{j,k}$ is the Gaussian likelihood of the measurement under model j:
$$\lambda_{j,k} = \frac{1}{\sqrt{\big|2\pi S_{j,k}\big|}} \exp\!\left(-\frac{1}{2}\big(z_k - \hat{z}_{j,k}\big)^{T} S_{j,k}^{-1} \big(z_k - \hat{z}_{j,k}\big)\right)$$
The output of each filter is multiplied by the corresponding model probability and then summed to obtain the final state estimates $\hat{x}_k$ and $P_k$:
$$\hat{x}_k = \sum_{j=1}^{r} \mu_{j,k}\, \hat{x}_{j,k}, \qquad P_k = \sum_{j=1}^{r} \mu_{j,k} \Big[ P_{j,k} + \big(\hat{x}_{j,k} - \hat{x}_k\big)\big(\hat{x}_{j,k} - \hat{x}_k\big)^{T} \Big]$$
The flow of the interacting multiple model algorithm is shown in Figure 4.
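The per-cycle bookkeeping of Equations (33)–(39) can be sketched as follows. Each entry of `filters` is assumed to wrap one motion model together with the UKF of Section 2.2.1 and to expose a `predict_update(x, P, z)` method returning the updated state, covariance and the likelihood λ of the measurement; this interface is an assumption made for illustration, not the authors' code.

```python
import numpy as np

def imm_step(filters, x_list, P_list, mu, Pi, z):
    """One IMM cycle: mixing (Equation (36)), per-model filtering, model probability
    update (Equation (37)) and output combination (Equation (39)).
    mu : model probabilities at k-1, Pi[i, j] : transition probability p_ij."""
    r = len(filters)
    mu_pred = Pi.T @ mu                              # predicted model probabilities
    mix = (Pi * mu[:, None]) / mu_pred[None, :]      # mix[i, j] = mu_{i|j,k-1}
    x_new, P_new, lik = [], [], np.zeros(r)
    for j in range(r):
        # interaction: mixed initial condition for filter j, Equation (36)
        x0 = sum(mix[i, j] * x_list[i] for i in range(r))
        P0 = sum(mix[i, j] * (P_list[i]
                 + np.outer(x_list[i] - x0, x_list[i] - x0)) for i in range(r))
        xj, Pj, lik[j] = filters[j].predict_update(x0, P0, z)
        x_new.append(xj)
        P_new.append(Pj)
    mu_new = lik * mu_pred                           # Equation (37)
    mu_new /= mu_new.sum()
    # output combination, Equation (39)
    x_out = sum(mu_new[j] * x_new[j] for j in range(r))
    P_out = sum(mu_new[j] * (P_new[j]
              + np.outer(x_new[j] - x_out, x_new[j] - x_out)) for j in range(r))
    return x_new, P_new, mu_new, x_out, P_out
```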

2.3. Obstacle Motion Model Construction

In a real driving environment, the obstacle motion model is a brief description of the obstacle’s motion state, and the more accurate the motion model is, the better the final state estimation is. This paper uses three main motion models, namely the constant velocity (CV), constant acceleration (CA), and constant turn rate and velocity magnitude (CTRV) models. Different models make different motion assumptions for obstacles that match the real scenario and can use the unscented Kalman filter on any of the models. Because the CV and CA models are relatively simple, this paper mainly discusses the CTRV model, and the obstacle motion model is shown in Figure 5.
Without considering the motion in the vertical direction, the state vector of the obstacle is as follows:
$$x = \begin{bmatrix} p_x & p_y & v & \psi & \dot{\psi} \end{bmatrix}^{T}$$
In the above equation, $p_x$, $p_y$ is the position of the vehicle, v is its velocity, $\psi$ is the heading angle and $\dot{\psi}$ is the angular velocity. The state vector at moment k is known; predicting the state vector at moment k + 1 requires its derivative, i.e., the rate of change of the state vector given by Equation (41):
$$\begin{bmatrix} \dot{p}_x \\ \dot{p}_y \\ \dot{v} \\ \dot{\psi} \\ \ddot{\psi} \end{bmatrix} = \begin{bmatrix} v \cos(\psi) \\ v \sin(\psi) \\ 0 \\ \dot{\psi} \\ 0 \end{bmatrix}$$
Because the velocity and the angular velocity remain constant, $\dot{v}$ and $\ddot{\psi}$ are zero, and the time difference is given by the following equation:
$$\Delta t = t_{k+1} - t_k$$
Then, the state vector at moment k + 1 is obtained from the state vector at the previous moment, plus the integral of the rate of change of the state vector during this time, calculated by Equation (43):
$$x_{k+1} = x_k + \int_{t_k}^{t_{k+1}} \begin{bmatrix} \dot{p}_x(t) \\ \dot{p}_y(t) \\ \dot{v}(t) \\ \dot{\psi}(t) \\ \ddot{\psi}(t) \end{bmatrix} dt$$
further reduced to:
$$x_{k+1} = x_k + \begin{bmatrix} v_k \int_{t_k}^{t_{k+1}} \cos\big(\psi_k + \dot{\psi}_k (t - t_k)\big)\, dt \\ v_k \int_{t_k}^{t_{k+1}} \sin\big(\psi_k + \dot{\psi}_k (t - t_k)\big)\, dt \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} = x_k + \begin{bmatrix} \frac{v_k}{\dot{\psi}_k}\big(\sin(\psi_k + \dot{\psi}_k \Delta t) - \sin(\psi_k)\big) \\ \frac{v_k}{\dot{\psi}_k}\big(-\cos(\psi_k + \dot{\psi}_k \Delta t) + \cos(\psi_k)\big) \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix}$$
Because Equation (44) involves division by $\dot{\psi}_k$, it cannot be used when the vehicle travels in a straight line; when $\dot{\psi}_k = 0$, the model reduces to:
$$x_{k+1} = x_k + \begin{bmatrix} v_k \cos(\psi_k)\, \Delta t \\ v_k \sin(\psi_k)\, \Delta t \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
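The deterministic CTRV transition of Equations (44) and (45) can be written as a single process function with a branch for the straight-line case; this is the kind of function that would be passed as `process_model` to the UKF prediction sketch in Section 2.2.1.

```python
import numpy as np

def ctrv_process(x, dt, eps=1e-4):
    """Deterministic CTRV transition for x = [px, py, v, psi, psi_dot]."""
    px, py, v, psi, psid = x
    if abs(psid) > eps:                               # turning case, Equation (44)
        px += v / psid * (np.sin(psi + psid * dt) - np.sin(psi))
        py += v / psid * (-np.cos(psi + psid * dt) + np.cos(psi))
    else:                                             # straight-line case, Equation (45)
        px += v * np.cos(psi) * dt
        py += v * np.sin(psi) * dt
    psi += psid * dt
    return np.array([px, py, v, psi, psid])
```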
Equations (44) and (45) give the deterministic part of the obstacle motion model. The uncertain part $\upsilon_k$ consists of two independent noises with zero mean, conforming to a Gaussian distribution: the longitudinal acceleration noise $\upsilon_{a,k} \sim N(0, \sigma_a^2)$ and the angular acceleration noise $\upsilon_{\ddot{\psi},k} \sim N(0, \sigma_{\ddot{\psi}}^2)$. The process noise matrix Q is expressed as follows:
$$Q = E\big\{\upsilon_k \upsilon_k^{T}\big\} = \begin{bmatrix} \sigma_a^2 & 0 \\ 0 & \sigma_{\ddot{\psi}}^2 \end{bmatrix}$$
For the case where the angular velocity is not zero, the final motion model after adding the noise term, obtained by considering the effect of the noise on the whole integral of the state vector, is:
$$x_{k+1} = x_k + \begin{bmatrix} \frac{v_k}{\dot{\psi}_k}\big(\sin(\psi_k + \dot{\psi}_k \Delta t) - \sin(\psi_k)\big) \\ \frac{v_k}{\dot{\psi}_k}\big(-\cos(\psi_k + \dot{\psi}_k \Delta t) + \cos(\psi_k)\big) \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} + \begin{bmatrix} \frac{1}{2} \upsilon_{a,k} \cos(\psi_k) (\Delta t)^2 \\ \frac{1}{2} \upsilon_{a,k} \sin(\psi_k) (\Delta t)^2 \\ \upsilon_{a,k} \Delta t \\ \frac{1}{2} \upsilon_{\ddot{\psi},k} (\Delta t)^2 \\ \upsilon_{\ddot{\psi},k} \Delta t \end{bmatrix}$$
After considering the longitudinal acceleration and angular acceleration in the process noise, the state vector used in the prediction step is augmented to $x = [p_x, p_y, v, \psi, \dot{\psi}, \upsilon_a, \upsilon_{\ddot{\psi}}]^{T}$, so that 2 × 7 + 1 = 15 sigma points are generated, while the corresponding state covariance matrix P is enlarged after adding the process noise:
$$P_{aug} = \begin{bmatrix} P & 0 \\ 0 & Q \end{bmatrix}, \qquad Q = \begin{bmatrix} \sigma_a^2 & 0 \\ 0 & \sigma_{\ddot{\psi}}^2 \end{bmatrix}$$
In the above equation, P is the covariance matrix of 5 × 5 before considering the noise, Q is the process noise covariance matrix, and Paug is the covariance augmentation matrix of 7 × 7.
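A minimal sketch of the augmentation of Equation (48) follows; the noise standard deviations are illustrative assumptions. With the augmented dimension n = 7, the symmetric sampling of Equation (21) yields 2 × 7 + 1 = 15 sigma points.

```python
import numpy as np

sigma_a, sigma_psidd = 0.8, 0.3          # assumed noise standard deviations
x = np.zeros(5)                          # [px, py, v, psi, psi_dot]
P = np.eye(5)                            # 5 x 5 covariance before augmentation

x_aug = np.concatenate([x, [0.0, 0.0]])  # the noise components have zero mean
P_aug = np.zeros((7, 7))
P_aug[:5, :5] = P
P_aug[5, 5] = sigma_a ** 2
P_aug[6, 6] = sigma_psidd ** 2

print(2 * x_aug.size + 1)                # 15 sigma points are generated
```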

3. Results

3.1. Experimental Equipment and Parameter Setting

The experimental platform used in this paper, shown in Figure 6, is an intelligent vehicle modified from a Haval H7. A Velodyne 32-line Lidar was mounted on the roof through a tripod, a Delphi millimeter-wave radar was installed on the front bumper, a Mobileye camera was installed on the front windshield and a GNSS integrated navigation system was deployed. These sensors were used to detect the surrounding environment, locate the ego vehicle and provide the velocity, distance and movement status of the obstacles. An industrial personal computer was installed, with an Intel Core i7-8700 processor (3.2 GHz), an Nvidia Tesla T4 GPU, 32 GB of memory, a 500 GB Samsung hard disk and a CAN bus communication card. The ellipse tracking threshold γ was set to 3, and the measurement noise R was set to 0.02.

3.2. Data-Association Experiments

In this experiment, a pedestrian and a vehicle travelling in the same direction were selected as the association targets to judge the accuracy of the association algorithm at different moments. Figure 7 shows the association effect at different moments; the algorithm automatically numbered the detected obstacles. At frame 70, the vehicle (ID 2187) and the pedestrian (ID 1898) were detected, and the trees on both sides of the road were also accurately detected, with IDs 2785 and 2364, respectively. The vehicle and pedestrian moved slowly forward, and by frame 100 the point cloud data had clearly moved some distance forward; the trees with IDs 2785 and 2364 were out of the detection range and were no longer associated, while the vehicle and pedestrian were correctly associated and their ID numbers did not change. By frame 130 the obstacles were still successfully associated with no change in ID number, the trees out of detection range were not associated and the newly detected trees were numbered. In summary, the designed data-association algorithm can effectively complete the data association of obstacles at different moments, with no obstacles lost or incorrectly associated. At the same time, the average running time of the improved data-association algorithm was 51.63 ms per frame, which meets the real-time requirements of the intelligent vehicle.

3.3. Obstacle State Update Experiment

As shown in Figure 8, this experiment selected two vehicles moving in opposite directions, detected by the intelligent vehicle while driving, as the obstacle state update scenario. Figure 9 shows part of the obstacle state update process using the unscented Kalman filter. The algorithm accurately marked the position, velocity magnitude, ID number and direction of movement of each obstacle, where the green arrow represents the direction of the obstacle's movement. At first, only the vehicle travelling in the same direction was tracked (ID 1108), indicated by purple arrow ①; it was positioned 12.4 m from the experimental vehicle with a velocity of 1.5 km/h, and the road was lined with stationary trees. Then, the vehicle travelling in the opposite direction was tracked (ID 1449), indicated by purple arrow ②, positioned 11.1 m from the experimental vehicle with a velocity of 9.4 km/h. As driving continued, the two vehicles were tracked continuously and stably; when the green arrow deviated to the right, the vehicle had a tendency to move to the right. In summary, the designed target tracking algorithm can effectively and efficiently update obstacle status.
The position and velocity information of the vehicle travelling in the opposite direction to the experimental vehicle was extracted from the algorithm and compared with the real position and velocity information; the results are shown in Figure 10. The position comparison shows that the obstacle drew nearer to the experimental vehicle, in line with its actual movement, and that the trajectory is smooth and the tracking stable; the maximum position estimation error was 3.14%. The velocity comparison shows that the vehicle was tracked consistently for 8 s with a minimum velocity of 4 km/h and a maximum velocity of 14.6 km/h, and the tracking was stable; the maximum velocity estimation error was 1.52%. In summary, the designed unscented Kalman filter and IMM-based model is able to achieve accurate estimation and updating of obstacle position and velocity.
The optimal subpattern assignment (OSPA) distance is used to evaluate the tracking performance [22]. The OSPA defines a metric $d_p$ (the OSPA distance) on the system state space to express the error at moment k between the set of m true obstacle states $X = \{x_k^1, \ldots, x_k^m\}$ and the set of n estimated obstacle states $Y = \{y_k^1, \ldots, y_k^n\}$, given by Equation (49):
$$d_p(X, Y) = \left[ \frac{1}{n} \left( \min_{\pi \in \Pi_n} \sum_{i=1}^{m} d_c\big(x_i, y_{\pi(i)}\big)^p + c^p (n - m) \right) \right]^{1/p}, \quad m \le n; \qquad d_p(X, Y) = d_p(Y, X), \quad m > n$$
where $\Pi_n$ is the set of permutations that select m elements from the n estimated states and arrange them, $c > 0$ is the cut-off parameter, $d(x, y)$ is the Euclidean distance between a true and an estimated obstacle state and the cut-off distance $d_c(x, y)$ is calculated by Equation (50):
$$d_c(x, y) = \min\big(c, d(x, y)\big), \quad c > 0$$
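The OSPA distance of Equations (49) and (50) can be computed by solving the minimization over permutations as an optimal assignment problem; the sketch below uses scipy's linear_sum_assignment for that step, and the example states are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=1.0, p=2):
    """OSPA distance between true states X (m x d) and estimates Y (n x d)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                    # symmetric case of Equation (49)
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    Dc = np.minimum(D, c)                        # cut-off distance, Equation (50)
    row, col = linear_sum_assignment(Dc ** p)    # optimal assignment = min over permutations
    cost = (Dc[row, col] ** p).sum() + (c ** p) * (n - m)
    return (cost / n) ** (1.0 / p)

# Illustrative example: two true obstacle positions, three estimated positions.
X = np.array([[0.0, 0.0], [5.0, 1.0]])
Y = np.array([[0.2, 0.1], [5.3, 0.8], [9.0, 9.0]])
print(ospa(X, Y, c=1.0, p=2))
```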
The JPDA with UKF (called the traditional method), as well as the improved JPDA with IMM and UKF, proposed in this paper (called the proposed method) are used to track obstacles. A total of 60 frames of OSPA distances from the first obstacle detection are calculated as shown in Figure 11 to compare the tracking effect.
The average OSPA distance of the traditional method is 0.3880 m, and the average OSPA distance of the proposed method is 0.2925 m. Therefore, the proposed method is more effective than the traditional method.

4. Discussion

To reduce the large number of association calculations and the misassociations that arise in dynamic multi-obstacle tracking, and to obtain more accurate surrounding environment information, this paper improves the existing JPDA: more parametric features of the obstacles are introduced to further filter the valid measurements and reduce the computational effort of the algorithm. The IMM is designed to deal with the tracking problem of different motion models of obstacles, and the optimal state estimation of an obstacle is obtained by the interaction of the different models. The UKF is designed to update the motion state of the obstacles, completing the dynamic tracking of multiple obstacles.
For the tracking of multiple obstacles in complex environments, the improved JPDA is used to achieve the association matching of multiple targets at different moments and to complete the data association of obstacles at different moments, and there is no obstacle loss or association error. In addition, the average running time of the algorithm is 51.63 ms per frame, which can meet the real-time requirements of intelligent vehicles.
Using the IMM can yield the filtering of multiple behavior patterns of obstacles and solve the problem of randomness in obstacles’ motion and the poor filtering effect of a single model. The unscented Kalman filter is used to solve the problem of nonlinear state estimation of obstacles. The position estimation error can be controlled within 3.14%, and the velocity estimation error can be controlled within 1.52%.
The result comparison shows that the proposed method is more effective than the traditional method. In the future, the research results of this paper will be tried and applied to multi-sensor information fusion, obstacle avoidance function development and overtaking function development of intelligent vehicles.

Author Contributions

Conceptualization, Y.W. and B.S.; methodology, R.D.; software, Z.W.; validation, W.L. and K.S.; formal analysis, Y.W.; investigation, B.S.; resources, R.D.; data curation, Z.W.; writing—original draft preparation, W.L.; writing—review and editing, K.S.; visualization, Y.W.; supervision, B.S.; project administration, B.S.; funding acquisition, B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Qing-Chuang science and technology plan” of colleges and universities in Shandong Province (grant number 2021KJ083), the National Natural Science Foundation Project of China (grant number 52102465), the key R&D projects in Shandong (grant number 2019GHZ016), the Major Innovation Projects in Shandong (grant numbers 2020CXGC010405 and 2020CXGC010406), and the Postdoctoral Science Foundation of China and Shandong (grant numbers 2020M680091 and 202003042).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, Z. Analysis of the Development Trend of Smart Car Industry. Int. J. Educ. Econ. 2018, 1, 78–81.
2. Dulău, M.; Oniga, F. Obstacle Detection Using a Facet-Based Representation from 3-D LiDAR Measurements. Sensors 2021, 21, 6861.
3. Cao, M.; Wang, J. Obstacle detection for autonomous driving vehicles with multi-lidar sensor fusion. J. Dyn. Syst. Meas. Control 2020, 142, 021007.
4. Zhang, C.Y.; Chen, Z.H.; Han, L. Obstacle detection of Lidar based on Improved DBSCAN algorithm. Laser Optoelectron. Prog. 2021, 58, 2428005.
5. Li, J.; Li, R.; Wang, J.; Yan, M. Obstacle information detection method based on multiframe three-dimensional lidar point cloud fusion. Opt. Eng. 2019, 58, 116102.
6. Liu, C.; Zhao, J.; Liu, Z.H.; Wang, X.Q.; Lai, K.C. Improved Lidar Obstacle Detection Method Based on Euclidean Clustering. Laser Optoelectron. Prog. 2020, 57, 201105.
7. Zhang, H.; Ikbal, M.A. Unmanned vehicle dynamic obstacle detection, tracking and recognition method based on laser sensor. Int. J. Intell. Comput. Cybern. 2021, 14, 13.
8. Li, J.; Zhang, Y.; Liu, X.; Zhang, X.; Bai, R. Obstacle detection and tracking algorithm based on multi-lidar fusion in urban environment. IET Intell. Transp. Syst. 2021, 15, 1372–1387.
9. Xie, D.; Xu, Y.; Wang, R. Obstacle detection and tracking method for autonomous vehicle based on three-dimensional LiDAR. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419831587.
10. Llamazares, Á.; Molinos, E.J.; Ocaña, M. Detection and Tracking of Moving Obstacles (DATMO): A Review. Robotica 2020, 38, 761–774.
11. Zoltan, R.; Tamas, S. Obstacle Prediction for Automated Guided Vehicles Based on Point Clouds Measured by a Tilted LIDAR Sensor. IEEE Trans. Intell. Transp. Syst. 2018, 19, 2708–2720.
12. Yeom, S. Long Distance Moving Vehicle Tracking with a Multirotor Based on IMM-Directional Track Association. Appl. Sci. 2021, 11, 11234.
13. Yang, Y. Review of Target Tracking Algorithms Based on Bayes Filtering. In Proceedings of the 2022 3rd International Conference on Computer Science and Intelligent Communication, Tianjin, China, 12–13 March 2022; pp. 112–123.
14. Li, Y.; Xing, X. Improved N-Best Multiple Hypothesis Tracking Algorithm. J. Phys. Conf. Ser. 2021, 1883, 012047.
15. Chen, X.; Li, Y.; Li, Y.; Yu, J.; Li, X. A Novel Probabilistic Data Association for Target Tracking in a Cluttered Environment. Sensors 2016, 16, 2180.
16. Ba, H.; Cao, L.; He, X.; Cheng, Q. Modified joint probabilistic data association with classification-aided for multitarget tracking. J. Syst. Eng. Electron. 2008, 19, 434–439.
17. Farag, W. Self-Driving Vehicle Localization using Probabilistic Maps and Unscented-Kalman Filters. Int. J. Intell. Transp. Syst. Res. 2022, 20, 623–638.
18. Wang, Y.; Chen, Z. A framework for state-of-charge and remaining discharge time prediction using unscented particle filter. Appl. Energy 2020, 260, 114324.
19. Fu, R.; Zhang, M.; Wang, C. Behavior analysis of distant vehicles using LIDAR point cloud. Clust. Comput. 2019, 22, 8613–8622.
20. Choi, S.; Hong, D. Position estimation in urban U-turn section for autonomous vehicles using multiple vehicle model and Interacting Multiple Model filter. Int. J. Automot. Technol. 2021, 22, 1599–1607.
21. Wan, W.; Feng, J.; Song, B.; Li, X. Vehicle state estimation using Interacting Multiple Model based on square root cubature Kalman filter. Appl. Sci. 2021, 11, 10772.
22. Schuhmacher, D.; Vo, B.T.; Vo, B.N. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457.
Figure 1. The workflow of this paper.
Figure 2. Threshold graph of ellipse tracking.
Figure 3. Data-association diagram.
Figure 4. IMM algorithm framework.
Figure 5. Obstacle motion model diagram.
Figure 6. Experimental platform.
Figure 7. Obstacle data-association results. (a) Original scene view at frame 70, (b) frame 70 point cloud data, (c) original scene view of frame 100, (d) frame 100 point cloud data, (e) original scene view of frame 130, (f) frame 130 point cloud data.
Figure 8. Obstacle state update scene.
Figure 9. Obstacle status update diagram.
Figure 10. Comparative analysis of data. (a) Obstacle position comparison, (b) obstacle velocity comparison.
Figure 11. OSPA distance of traditional method and proposed method.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

