Article

Lane Departure Assessment via Enhanced Single Lane-Marking

Yiwei Luo, Ping Li, Gang Shi, Zuowei Liang, Lei Chen and Fengwei An
1 School of Microelectronics, Southern University of Science and Technology, Shenzhen 518055, China
2 Department of Computing, The Hong Kong Polytechnic University, Hong Kong 999077, China
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(5), 2024; https://doi.org/10.3390/s22052024
Submission received: 27 January 2022 / Revised: 21 February 2022 / Accepted: 2 March 2022 / Published: 4 March 2022
(This article belongs to the Special Issue Autonomous Mobile Robots: Real-Time Sensing, Navigation, and Control)

Abstract

The vision-based lane departure warning system (LDWS) has been widely used in modern vehicles to improve drivability and safety. In this paper, a novel LDWS with precise positioning is proposed. A calibration strategy is first presented through a 3D camera imaging model that requires only three parallel and equally spaced lines, from which the three angles of rotation for the transformation from the camera coordinate system to the world coordinate system are deduced. The camera height is then calculated rather than measured, in contrast to previous works that rely on a measured height with potential errors. A criterion for lane departure warning that uses only one of the two lane-markings is proposed, estimating both the yaw angle and the distance between the lane-marking and the vehicle. Experiments show that the calibration strategy can be easily set up and that the method achieves an average accuracy of 98.95% on lane departure assessment.

1. Introduction

Mobility plays an important role in modern society and provides a high-quality life for humans. However, according to the WHO, tens of millions of people are injured or disabled because of road accidents [1], making the Safety Driving Assist System (SDAS) necessary to protect drivers’ safety. An example of research on SDAS is a model constructed by Wang et al. [2] based on host–target vehicle dynamics and road constraints to estimate the lateral motion of preceding target vehicles. A complete system proposed by Lin et al. consists of lane change detection, forward collision warning, and overtaking vehicle identification [3]. As an important part of SDAS, the lane departure warning system (LDWS) is designed to warn drivers when their vehicles tend to deviate from their lanes, effectively preventing traffic accidents that mostly result from driver inattention or lack of experience.

1.1. Related Work

Over the years, the decision-making algorithms of LDWS have remained an active research area. Martínez-García et al. [4] characterized a concept of elementary steering pulses through machine learning to model human lane-keeping control. Zhang et al. [5] proposed a lane departure warning algorithm based on probability statistics of driving habits to make lane departure warnings more targeted and accurate. Chen et al. [6] proposed a human–machine shared control strategy based on hybrid system theory, and the results showed good human–machine coordination.
Moreover, some researchers have focused on evaluating the safety level of LDWS. A typical study [7] presented an experimental test to identify the main predictors of system fault. Another study [8] identified the characteristics of lane departure crashes and quantified the safety potential of LDWS.
Among all LDWS algorithms, vision-based algorithms play an important role and can be divided into two types: (1) algorithms using only image information and (2) algorithms using image information together with a road model. The first type determines lane departure only from information in the image; e.g., the slopes of the lane-markings in the image can indicate the vehicle’s turning direction. This type attracts many researchers because of its simplicity, but one of its shortcomings is a lack of robustness. An edge distribution function (EDF) in [9] determined lane departure from the position of the symmetry axis of the EDF. However, the camera setup must be nearly ideal; otherwise, the symmetry axis of the EDF may change even when no lane departure is happening. Vijay et al. [10] presented a similar algorithm using the deviation of the centroid line of the detected lanes from the center of the image. A lane departure identification method used three lane-related parameters, namely the Euclidean distances between every two of the Hough origin Ho and the midpoints mp1 and mp2 of the identified left and right lane-markings, to identify the state of departure [11,12,13]. Besides, algorithms judging the (ρ, θ) patterns of the detected left and right lane-markings, or just one of them, determined the left or right lane departure situation [14,15,16,17,18,19,20,21,22,23,24]. A recent study by Lin et al. also determines lane departure from the detected lane-markings only, and it uses a state machine to recognize the “left,” “right,” and “normal” statuses, which reduces false alarms when a lane-marking is blocked by obstacles [3].
The second type usually involves camera calibration to transform the image into the real world. Xu et al. [25] proposed a camera calibration method with a set of lines parallel and perpendicular to the ground plane to determine the camera parameters, including the camera’s three deviation angles and the focal length. The distance between the car and the road boundaries was then obtained using a pre-measured camera height. A mapping algorithm between the image and road coordinates remapped the detected lanes to the actual roads, but only deviation angles on two dimensions of the camera (i.e., α0 and β0) were considered [26]. In [27], the ratio of the slopes of the detected left and right lanes represented the degree of lateral offset of the vehicle, and the slopes were calculated using the relation between the world and camera coordinates. However, only one of the three deviation angles of the camera (i.e., the pitch angle) was considered nonzero. The algorithm based on probability statistics of driving habits described in [5] also uses the matrix transformation between the image coordinate system and the world coordinate system, but no deviation angle of the camera is mentioned in the transformation.
Besides, vision-based lane departure warning algorithms are similar to vision-based vehicle localization methods such as Simultaneous Localization And Mapping (SLAM), since both aim to obtain the vehicle’s position relative to the environment. Lin et al. proposed a vehicle localization method based on topological map construction and scene recognition [28]. In that work, omni-directional image sequences are used to construct the topological map. The proposed method collects multiple feature points in the input images and outputs the vehicle position and the recognized scene.

1.2. Contributions

In this paper, we propose a new LDWS algorithm of the second type, which involves an easy calibration process and a warning process that needs only one of the two lane-markings. Specifically, the front camera of the vehicle is calibrated to obtain the relationship between the image and the real world. The lane detection technique is used to detect only one lane-marking in each image. Combining this “only one” lane-marking information with the calibration information, the proposed lane departure warning method determines whether the vehicle has the potential to deviate from the lane. Figure 1 shows an overview of the proposed method. In contrast to previous works, this method requires the information of only one lane-marking in the image and only outputs the vehicle direction and distance relative to the detected lane-marking.
In [5,26,27], the deviation angles on all three dimensions were not taken into account. This makes those algorithms less reliable, since the deviation angles of a camera affect the mapping accuracy and vehicle positioning. In [25], the calibration process required two horizontal lines and two vertical lines, which means the environment must contain objects perpendicular to the ground. In contrast, the calibration process of our proposed algorithm only requires the environment to contain three parallel and equally spaced horizontal lines, which is much easier to satisfy. Neither [25] nor our proposed method uses the existing mature camera calibration methods. The reason is that those methods need a calibration object with a given shape and size, such as a chessboard, and they require particular calibration apparatuses and an elaborate setup. Besides, camera parameters such as the deviation angles are subject to environmental change, so the camera would have to be repeatedly calibrated with a mature calibration method. In addition, in [25] the camera height is manually measured, while in our proposed method the camera height is calculated during the calibration process, avoiding measurement errors. To sum up, the contributions of this work are as follows:
  • A calibration strategy with only three parallel and equally spaced lines is applied to estimate the three rotation angles that transform the camera coordinate system to the world coordinate system through a 3D imaging model. Compared with the method in [25], this model requires no vertical lines, enabling the camera to be mounted at the front of the car without dedicated angles;
  • The camera height and lane-width can be calculated instead of measured, using the camera extrinsic parameters estimated in the proposed calibration strategy. This avoids measurement errors;
  • A criterion for lane departure warning is proposed by estimating the yaw angle and the distance between the lane-marking and the vehicle with only one of the two lane-markings. This criterion is simple and reliable compared with traditional algorithms, which must detect both lane-markings.

2. Camera Calibration

The direct relation between calibration and lane departure warning is that camera calibration transforms image coordinates into real-world coordinates, and a mapping algorithm remaps the detected lane coordinates in the image to the actual real-world roads. In this section, we describe a camera calibration method to estimate the camera’s extrinsic parameters, i.e., the height of the camera and the three rotation angles for the transformation from the camera coordinate system to the world coordinate system, from three parallel and equally spaced lane-markings on the ground. These extrinsic parameters are necessary for the following lane departure warning step.
Figure 2 shows the positions of the three coordinate systems utilized in this section: (1) the camera coordinate system Oc-XcYcZc; (2) the image coordinate system Oi-XiYi; (3) the world coordinate system Oc’-Xc’Yc’Zc’. Here, Oc-XcYcZc is arbitrary. Oi is at the center of the image sensor (called IMG), the Zc-axis passes through Oi and is perpendicular to the IMG, and f (i.e., the focal length) is the distance between the points Oi and Oc. The Xi- and Yi-axes are parallel to the Xc- and Yc-axes but opposite in direction, respectively. The world coordinate system Oc’-Xc’Yc’Zc’ is defined such that its origin Oc’ coincides with Oc, its Zc’-axis is parallel to the lane-markings, and its Xc’-axis is parallel to the ground plane.
Next, the 3D imaging model of the lane-markings is established based on the pinhole camera model, as shown in Figure 3. Each lane-marking and the point Oc determine a plane (αl, αm, and αr, respectively). Here, αl, αm, and αr intersect the IMG in three lines, ll, lm, and lr, which are the projections of the left, middle, and right lane-markings onto the IMG plane. ll, lm, and lr intersect at the point Pint (the vanishing point of the lane-markings) and intersect the bottom edge (lbot) of the IMG at the points Pl, Pm, and Pr. The normal line of lbot through Pint intersects lbot at Pp.
Given the positions of ll, lm, and lr in the IMG (i.e., the positions of the points Pint, Pl, Pm, and Pr), the coordinate system Oc-XcYcZc is transformed to Oc’-Xc’Yc’Zc’ in three steps: (A) rotating θ1 around the Zc-axis to bring the Xc-axis onto the ZcZc’-plane; (B) rotating θ2 around the Yc-axis to make the Zc-axis coincide with the Zc’-axis; (C) rotating θ3 around the Zc-axis to make Oc-XcYcZc coincide with Oc’-Xc’Yc’Zc’. Meanwhile, the camera height h and then the lane-width w’ are estimated from θ1, θ2, and θ3.
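For readers who want to follow the transformation numerically, the three steps correspond to a Z–Y–Z sequence of elementary rotations. The sketch below (Python, with illustrative angle values and helper names that are not part of the original implementation, and a standard rotation sign convention assumed) shows one way to build R1, R2, and R3 and the composition R3R2R1 that appears later in (7).

```python
import numpy as np

def rot_z(theta):
    """Elementary rotation about a Z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Elementary rotation about a Y-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0,  s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0,  c]])

# theta1, theta2, theta3 are estimated by the calibration steps below; the
# overall camera-to-world rotation is the Z-Y-Z composition R3 R2 R1 used in (7).
theta1, theta2, theta3 = 0.05, 0.10, 0.02   # placeholder values (radians)
R1, R2, R3 = rot_z(theta1), rot_y(theta2), rot_z(theta3)
R = R3 @ R2 @ R1
```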

2.1. Rotating θ1 around Zc-Axis to Make Xc-Axis on the ZcZc’-Plane

This step rotates the coordinate system around the Zc-axis to bring the Xc-axis onto the ZcZc’-plane. Afterward, the position of the IMG plane is unchanged, but the coordinate system Oi-XiYi is rotated by θ1 around its origin. Therefore, the positions of ll, lm, and lr in the world coordinate system remain unchanged, while their positions in the image coordinate system Oi-XiYi change. Because the Xc-axis now lies on the plane formed by the Zc- and Zc’-axes, Pint, the intersection of the Zc’-axis and the IMG plane, lies on the Xi-axis, as depicted in Figure 4.
Figure 5 illustrates the imaging differences before (dashed lines, denoted by the subscript “0”) and after (solid lines, denoted by the subscript “1”) this step. tanθ1 is obtained by dividing yint by xint, and the calculated θ1 forms the rotation matrix R1 of Oc-XcYcZc in this step. R1 is later used in the lane departure warning section (Section 3, (7)) together with R2 and R3. Pl1Pp1, Pm1Pp1, and Pr1Pp1 are obtained by (1), where ysize is the height of the IMG.
P_{k1}P_{p1} = \tan\left(\theta_1 + \arctan\frac{P_{k0}P_{p0}}{P_{\mathrm{int}}P_{p0}}\right) \times \frac{y_{\mathrm{size}}}{2}, \quad k = l, m, r \qquad (1)
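As a minimal numerical sketch of this step (the coordinates are invented example values, and the formula follows the form of (1) as reconstructed above):

```python
import numpy as np

# Vanishing point of the lane-markings in image coordinates (example values).
x_int, y_int = 120.0, 35.0
y_size = 3632.0                       # image height in pixels (as in Table 1)

theta1 = np.arctan(y_int / x_int)     # tan(theta1) = y_int / x_int

def pk1_pp1(pk0_pp0, pint_pp0):
    """Distance P_k1 P_p1 along the rotated bottom edge, per Eq. (1)."""
    return np.tan(theta1 + np.arctan(pk0_pp0 / pint_pp0)) * y_size / 2.0
```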

2.2. Rotating θ2 around Yc-Axis to Make Zc-Axis Coincide with Zc’-Axis

This step rotates the coordinate system around the Yc-axis to make the Zc-axis coincide with the Zc’-axis. Figure 6 depicts the position change of the IMG before (dashed lines, Oi1) and after (solid lines, Oi2) this step. Then, the Zc- and Zc’-axes coincide, and Pint coincides with Oi2. Here, the angle between the Zc- and Zc’-axes (i.e., θ2) is calculated with (2) from the distance between the points Pint and Oi1, and the calculated θ2 forms the rotation matrix R2 of Oc-XcYcZc in this step. R2 is later used in the lane departure warning section (Section 3, (7)) together with R1 and R3. As in Figure 7a, the positions of the IMG planes and lines before and after this step are denoted by the subscripts “1” and “2”, respectively. The two IMG planes are perpendicular to the plane (αbot) determined by the bottom lines of the IMG, i.e., lbot2 and lbot1, and αl, αm, and αr intersect αbot in three parallel lines (llcut, lmcut, and lrcut). The lines llcut, lmcut, and lrcut intersect lbot1 at Pl1, Pm1, and Pr1, and intersect lbot2 at Pl2, Pm2, and Pr2. Drawing the normal lines of lbot1 and lbot2 through Pint1 and Pint2, respectively, they intersect lbot1 and lbot2 at Pp1 and Pp2. Because lbot1 and lbot2 are perpendicular to the Yc-axis, the angle between them is θ2. Meanwhile, because IMG2 is perpendicular to the Zc2- (i.e., Zc’-) axis, lbot2 is perpendicular to the Zc’-axis and hence to llcut, lmcut, and lrcut. As shown in Figure 7b and Equation (3), by similar triangles, the segments Pl1Pp1, Pm1Pp1, and Pr1Pp1 multiplied by cosθ2 equal Pl2Pp2, Pm2Pp2, and Pr2Pp2, respectively.
\tan\theta_2 = \frac{O_{i1}P_{\mathrm{int}}}{O_{i1}O_c} = \frac{\sqrt{x_{\mathrm{int}}^2 + y_{\mathrm{int}}^2}}{f} \qquad (2)
P_{k2}P_{p2} = \cos\theta_2 \times P_{k1}P_{p1}, \quad k = l, m, r \qquad (3)
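A corresponding sketch for this step, again with illustrative values (the focal length and vanishing-point coordinates are placeholders):

```python
import numpy as np

f = 3500.0                      # focal length in pixels (example value)
x_int, y_int = 120.0, 35.0      # vanishing-point coordinates (example values)

# Eq. (2): tan(theta2) = |Oi1 Pint| / f = sqrt(x_int^2 + y_int^2) / f
theta2 = np.arctan(np.sqrt(x_int**2 + y_int**2) / f)

def pk2_pp2(pk1_pp1):
    """Eq. (3): distances along the bottom edge shrink by cos(theta2)."""
    return np.cos(theta2) * pk1_pp1
```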

2.3. Rotating θ3 around Zc-Axis to Make Oc-XcYcZc Coincide with Oc’-Xc’Yc’Zc’

This step rotates the coordinate system around the Zc-axis to make Oc-XcYcZc coincide with Oc’-Xc’Yc’Zc’. As illustrated in Figure 8, we denote the positions of the IMG planes and parallel lines before and after rotating θ3 around the Zc-axis by the subscripts “2” and “3”, respectively. Similarly to step 2.1, the positions of ll, lm, and lr in the world coordinate system Oc’-Xc’Yc’Zc’ remain unchanged, while their positions in the image coordinate system Oi-XiYi change. On the other hand, differing from step 2.1, the position of Pint remains the same (i.e., it coincides with Oi) during this step. Because lbot3 is parallel to the ground plane, the line of intersection (lgnd) of the IMG plane and the ground is also parallel to lbot3. From the known condition that the lane-markings are parallel and equally spaced, the two ground segments cut off by the three lane-markings are equal. By similar triangles, the segments Pl3Pm3 and Pr3Pm3 cut off by ll, lm, and lr are also equal. Accordingly, θ3 is obtained by (4), and the calculated θ3 forms the rotation matrix R3 of Oc-XcYcZc in this step. R3 is later used in the lane departure warning section (Section 3, (7)) together with R1 and R2. In the previous steps, the calculation of θ1 and θ2 involved only the positional relationship between the points Pint and Oi, and two lane-markings suffice to determine the position of Pint. However, the intersection points of ll, lm, and lr with lbot, i.e., Pl3, Pm3, and Pr3, are needed for the calculation of θ3, which shows that three rather than two lane-markings are necessary in the camera-calibration stage.
\theta_3 = C' - C = \arctan\frac{2\sin A \sin B}{\sin(A - B)} - C \qquad (4)
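Besides the closed form in (4), θ3 can also be recovered numerically from the equal-spacing condition Pl3Pm3 = Pr3Pm3 stated above. The sketch below assumes OiPp2 = ysize/2 (Pint coincides with Oi after step 2.2) and uses illustrative pixel values; it is a cross-check, not the original implementation.

```python
import numpy as np

y_size = 3632.0
half = y_size / 2.0

# Signed distances of Pl2, Pm2, Pr2 from Pp2 along the bottom edge after
# step 2.2 (example values, in pixels).
p_l2, p_m2, p_r2 = 900.0, 150.0, -500.0

def spread(theta3):
    """Difference Pl3Pm3 - Pm3Pr3 after a trial rotation theta3 about Zc."""
    def proj(p):
        # Position on the bottom edge after rotating by theta3, assuming
        # Oi Pp2 = y_size / 2 because Pint coincides with Oi after step 2.2.
        return half * np.tan(theta3 + np.arctan(p / half))
    l3, m3, r3 = proj(p_l2), proj(p_m2), proj(p_r2)
    return (l3 - m3) - (m3 - r3)

# Bisection over a bracket with a sign change; theta3 makes the segments equal.
lo, hi = -0.5, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if spread(lo) * spread(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
theta3 = 0.5 * (lo + hi)
```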

2.4. Calculation of Camera Height and Lane-Width

After the above steps, the camera height h, i.e., the height of Oc, can be calculated from similar triangles (in (5), w is the distance between two adjacent lane-markings of the three parallel and equally spaced lane-markings). Since the three camera rotation angles and the camera height are now known extrinsic parameters, the lane-width w’ can be calculated from both the left and right lane-markings when the vehicle’s direction is aligned with the lane-markings. Given the two edges of the lane, suppose the two lane-markings in the frame intersect lbot at the points Plw0 and Prw0, and the normal line of lbot through the intersection point (Pint0) of the two lane-markings intersects lbot at Ppw0; then w’ is calculated by (6) (shown in Figure 9).
\frac{y_{\mathrm{size}}/2}{h} = \frac{P_{l3}P_{r3}}{w} = \frac{y_{\mathrm{size}}}{2w}\left[\tan\left(\theta_3 + \arctan\frac{P_{l2}P_{p2}}{O_i P_{p2}}\right) - \tan\left(\theta_3 + \arctan\frac{P_{r2}P_{p2}}{O_i P_{p2}}\right)\right] \qquad (5)
\begin{aligned}
P_{lwi}P_{rwi} &= \frac{y_{\mathrm{size}}}{2}\left[\tan\left(\theta_i + \arctan\frac{P_{lw(i-1)}P_{pw(i-1)}}{P_{\mathrm{int}(i-1)}P_{pw(i-1)}}\right) - \tan\left(\theta_i + \arctan\frac{P_{rw(i-1)}P_{pw(i-1)}}{P_{\mathrm{int}(i-1)}P_{pw(i-1)}}\right)\right], \quad i = 1, 3 \\
P_{lw2}P_{rw2} &= \cos\theta_2 \times \left(P_{lw1}P_{pw1} - P_{rw1}P_{pw1}\right) \\
\frac{y_{\mathrm{size}}/2}{h} &= \frac{P_{lw3}P_{rw3}}{w'}
\end{aligned} \qquad (6)
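A compact sketch of the height computation in (5); the marking spacing w and the pixel values are illustrative, and (6) reuses the same projection with the driving-lane markings to obtain w’.

```python
import numpy as np

y_size = 3632.0
half = y_size / 2.0
w = 375.0            # spacing of the calibration lane-markings in cm (example value)
theta3 = 0.03        # from step 2.3 (example value, radians)

# Distances of Pl2 and Pr2 from Pp2 (Pp2 lies below Oi after step 2.2); example pixels.
p_l2, p_r2 = 900.0, -500.0

def proj(p):
    """Position on the bottom edge after the theta3 rotation, as in Eq. (5)."""
    return half * np.tan(theta3 + np.arctan(p / half))

pl3_pr3 = proj(p_l2) - proj(p_r2)

# Eq. (5): (y_size/2) / h = Pl3Pr3 / w  =>  h = w * (y_size/2) / Pl3Pr3 (h in cm).
h = w * half / pl3_pr3
```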

3. Lane Departure Warning

The extrinsic parameters, i.e., the rotation angles of the camera coordinate system and the camera height, deduced during the camera-calibration stage, are used to calculate the lane departure parameters. In this paper, the yaw angle (θy), which represents how much the vehicle direction deviates from the road direction, can be calculated from only one of the two lane-markings projected onto the IMG plane. Meanwhile, the distance between the lane-marking and the vehicle (xx) is also important for the lane departure decision. As long as at least one lane-marking is detected in the image by the lane detection technique, the two parameters related to lane departure, θy and xx, can be calculated as described in this section.
The 3D imaging model of the lane-markings and the coordinate systems Oc’-Xc’Yc’Zc’ and Oc-XcYcZc have been described in Section 2. As illustrated in Figure 10, the vehicle coordinate system Oc’’-Xc’’Yc’’Zc’’ is defined by rotating Oc’-Xc’Yc’Zc’ around the Yc’-axis to make the Zc’’-axis align with the direction of the vehicle. It is observed that the angle between the Zc’’- and Zc’-axes is the yaw angle θy.

3.1. Calculation of the Yaw Angle θy

The first step of lane departure warning is to use only one detected lane-marking in the image to calculate the yaw angle θy. The angle between the Zc’’- and Zc’-axes can be calculated by finding the coordinates of the intersection points of the IMG with the Zc’’- and Zc’-axes, respectively (shown in Figure 11). We define the intersection point of the IMG and the Zc’’-axis as Pintc(xintc, yintc), which is the same as Pint(xint, yint) in Section 2. The intersection point of the IMG and the Zc’-axis is defined as Pintd(xintd, yintd), which is the intersection point of ll and lr when both lane-markings are detected. When only one of the two lane-markings is detected (for example, in Figure 11, lr is detected, and it intersects the top and bottom edges of the IMG at two points, Prtop and Prbot, whose x-coordinates in Oi-XiYi are xrtop and xrbot, respectively), the position of Pintd cannot be determined directly. However, it can be calculated using the extrinsic parameters obtained in Section 2.
Since Oc’’-Xc’’Yc’’Zc’’ is defined by rotating Oc’-Xc’Yc’Zc’ around the Yc’-axis, the Zc’’-axis lies in the Xc’Zc’-plane, and the line PintcPintd is the line of intersection of the IMG plane and the Xc’Zc’-plane. Therefore, the angle θxz between the unit vector pint0 of the line PintcPintd and the unit vector xi0 of the Xi-axis can be calculated as shown in (7), using the rotation matrices R1, R2, and R3 from Section 2 and noting that pint0 is perpendicular to the unit vector zc0 of the Zc-axis. Then, Pintd is calculated as the intersection point of the detected ll or lr with the line PintcPintd, as in (8).
\begin{aligned}
\boldsymbol{p}_{\mathrm{int0}} &= \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} \boldsymbol{z}_{c0} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} R_3 R_2 R_1 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad
\boldsymbol{x}_{i0} = R_3 R_2 R_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \\
\cos\theta_{xz} &= \cos\langle \boldsymbol{p}_{\mathrm{int0}}, \boldsymbol{x}_{i0} \rangle = \frac{\boldsymbol{p}_{\mathrm{int0}} \cdot \boldsymbol{x}_{i0}}{\lvert\boldsymbol{p}_{\mathrm{int0}}\rvert \cdot \lvert\boldsymbol{x}_{i0}\rvert} = \boldsymbol{p}_{\mathrm{int0}} \cdot \boldsymbol{x}_{i0}
\end{aligned} \qquad (7)
\left\{\begin{aligned}
&\frac{y_{\mathrm{intd}} - y_{\mathrm{intc}}}{x_{\mathrm{intd}} - x_{\mathrm{intc}}} = \tan\theta_{xz} \\
&\frac{x_{\mathrm{rtop}} - x_{\mathrm{intd}}}{y_{\mathrm{size}}/2 - y_{\mathrm{intd}}} = \frac{x_{\mathrm{rbot}} - x_{\mathrm{intd}}}{-y_{\mathrm{size}}/2 - y_{\mathrm{intd}}}
\end{aligned}\right.
\;\Rightarrow\; P_{\mathrm{intd}}(x_{\mathrm{intd}},\, y_{\mathrm{intd}}) \qquad (8)
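In practice, (8) amounts to intersecting the detected lane line with the line through Pintc of slope tanθxz; a small sketch with illustrative coordinates (all numbers are placeholders):

```python
import numpy as np

y_size = 3632.0

# Line 1: through Pintc with slope tan(theta_xz) (first relation of (8)).
x_intc, y_intc = 40.0, -25.0      # example coordinates of Pintc
theta_xz = 0.02                    # from Eq. (7), radians (example value)

# Line 2: the detected right lane-marking through its intersections with the
# top and bottom image edges (example x-coordinates x_rtop, x_rbot).
x_rtop, x_rbot = 300.0, 1500.0
p_top = np.array([x_rtop,  y_size / 2.0])
p_bot = np.array([x_rbot, -y_size / 2.0])

# Intersect the two lines: P1 + t*d1 = Ptop + s*d2.
p1 = np.array([x_intc, y_intc])
d1 = np.array([1.0, np.tan(theta_xz)])
d2 = p_bot - p_top
t, s = np.linalg.solve(np.column_stack([d1, -d2]), p_top - p1)
x_intd, y_intd = p1 + t * d1       # coordinates of Pintd
```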
Finally, with the focal length f and the coordinates of the points Pintc and Pintd, θy is solved using the triangular pyramid formed by the axes and the image sensor in Figure 11 according to (9), and the rotation matrix Ry between the coordinate systems Oc’’-Xc’’Yc’’Zc’’ and Oc’-Xc’Yc’Zc’ is formed by the calculated θy. Ry is then used in the calculation of xx in Section 3.2, (10). The vectors OcOi, OcPintd, and OcPintc are denoted zc, zc’, and zc’’, respectively.
\boldsymbol{z}_c' = \boldsymbol{z}_c + \begin{bmatrix} x_{\mathrm{intd}} \\ y_{\mathrm{intd}} \\ 0 \end{bmatrix} = \begin{bmatrix} x_{\mathrm{intd}} \\ y_{\mathrm{intd}} \\ f \end{bmatrix}, \qquad
\boldsymbol{z}_c'' = \boldsymbol{z}_c + \begin{bmatrix} x_{\mathrm{intc}} \\ y_{\mathrm{intc}} \\ 0 \end{bmatrix} = \begin{bmatrix} x_{\mathrm{intc}} \\ y_{\mathrm{intc}} \\ f \end{bmatrix}, \qquad
\cos\theta_y = \cos\langle \boldsymbol{z}_c', \boldsymbol{z}_c'' \rangle = \frac{\boldsymbol{z}_c' \cdot \boldsymbol{z}_c''}{\lvert\boldsymbol{z}_c'\rvert \cdot \lvert\boldsymbol{z}_c''\rvert} \qquad (9)
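A direct numerical sketch of (9); the coordinates and focal length are illustrative placeholders:

```python
import numpy as np

f = 3500.0                          # focal length in pixels (example value)
x_intd, y_intd = 55.0, -20.0        # Pintd, intersection of the IMG and the Zc'-axis
x_intc, y_intc = 40.0, -25.0        # Pintc, intersection of the IMG and the Zc''-axis

# Eq. (9): the two axes as vectors from Oc through the image plane.
zc_d = np.array([x_intd, y_intd, f])   # z_c'
zc_c = np.array([x_intc, y_intc, f])   # z_c''

cos_theta_y = zc_d @ zc_c / (np.linalg.norm(zc_d) * np.linalg.norm(zc_c))
theta_y = np.arccos(np.clip(cos_theta_y, -1.0, 1.0))  # yaw angle in radians
```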

3.2. Calculation of the Distance between the Lane-Markings and the Vehicle xx

In this paper, xx is calculated using the 3D imaging model of one of the two lane-markings. As an example, in Figure 12, the right lane-marking lr is detected in the IMG plane. Construct a plane αper perpendicular to the ground plane through the Zc’-axis; it intersects the ground plane in the line lper. It is observed that xx is the distance between lper and the right lane-marking. The IMG plane intersects lper and the right lane-marking at two points, Pgp and Pgr. Define the vectors vbot, vlp, and vlr as PgpPgr, PgpPintd, and PgrPintd, respectively. Accordingly, xx is the x-coordinate of vbot in Oc’-Xc’Yc’Zc’. The angle between vlr and the bottom edge of the IMG is θbr, and the angle between vlr and vbot is θgr. Finally, tanθbr can be obtained from the coordinates of the points Prbot and Pintd, and xx is calculated from θbr through (10). Here, θgr is calculated from θbr and the angle θxz obtained in step 3.1. In particular, vbot and vlp can be expressed using the transformation from Oc’-Xc’Yc’Zc’ to Oc-XcYcZc, and xx then follows from the dot product of vbot and vlr.
\begin{aligned}
\boldsymbol{v}_{\mathrm{bot}} &= \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} \times x_x \times \frac{R_y \boldsymbol{z}_{c0}}{\left(R_y \boldsymbol{z}_{c0}\right) \cdot \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{T}} \\
\boldsymbol{v}_{lp} &= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \times h \times \frac{R_y \boldsymbol{z}_{c0}}{\left(R_y \boldsymbol{z}_{c0}\right) \cdot \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{T}} \\
\boldsymbol{v}_{lr} \cdot \boldsymbol{v}_{\mathrm{bot}} &= \lvert\boldsymbol{v}_{lr}\rvert \cdot \lvert\boldsymbol{v}_{\mathrm{bot}}\rvert \cdot \cos\theta_{gr} = \lvert\boldsymbol{v}_{lp} - \boldsymbol{v}_{\mathrm{bot}}\rvert \cdot \lvert\boldsymbol{v}_{\mathrm{bot}}\rvert \cdot \cos\left(\theta_{br} - \theta_{xz}\right) \;\Rightarrow\; x_x
\end{aligned} \qquad (10)

3.3. Lane Departure Assessment

For real-world application, the departure status of the vehicle is assessed using the calculated θy and xx. If xx falls below a threshold value, the vehicle is approaching the detected lane-marking; if θy exceeds a threshold value, the vehicle is turning toward the detected lane-marking. Together, these two parameters efficiently and correctly determine the departure status of the vehicle.
Moreover, the vehicle position relative to the other, undetected lane-marking can also be obtained using the calculated lane-width w’. The lane width generally depends on the assumed maximum vehicle width plus additional space to allow for vehicle motion. When only one edge of the lane is visible, the other edge can be estimated from a typical lane-width w’’. Therefore, lane departure can easily be determined using the Time to Lane Crossing (TLC) criterion or other criteria. Besides roads with lane-markings, when the vehicle is on a road without any lane-markings, the above lane departure warning method can be used to keep the vehicle positioned relative to the leftmost or rightmost edge of the road.
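As an illustration of the resulting decision logic, the sketch below uses the urban-road thresholds listed later in Table 1 (150 cm and 15°); the function name and interface are illustrative only, not the original implementation.

```python
def lane_departure_warning(theta_y_deg, x_x_cm,
                           dist_thresh_cm=150.0, yaw_thresh_deg=15.0):
    """Flag a potential departure toward the detected lane-marking.

    The default thresholds mirror the urban-road criterion used in the
    experiments (xx < 150 cm and theta_y >= 15 degrees, Table 1); in practice
    they would be tuned per vehicle and road type.
    """
    return x_x_cm < dist_thresh_cm and theta_y_deg >= yaw_thresh_deg

# Example: 1.2 m from the marking while yawing 18 degrees toward it -> warn.
print(lane_departure_warning(theta_y_deg=18.0, x_x_cm=120.0))  # True
```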

3.4. Lane Detection

As mentioned above, at least one lane-marking must be detected in the image in order to calculate the parameters θy and xx. We applied an open-source method proposed by Qin et al. [29] as the lane detection method in our study. This method is based on deep segmentation and includes a novel lane detection formulation aimed at very high speed and the no-visual-clue problem. The formulation selects locations of lanes at predefined rows of the image using global features instead of segmenting every pixel of the lanes based on a local receptive field, which significantly reduces the computational cost. Their experiments show that this method achieves state-of-the-art performance in terms of both speed and accuracy.

4. Experimental Results

Experiments were conducted on both highways and urban roads, using image sequences captured by a camera mounted at an arbitrary position on a car. At the beginning of the experiments, the camera was calibrated by placing the car parallel to the lane-markings (i.e., the angle between the car direction and the road direction is zero) with all three lane-markings in the viewfinder of the camera. The reason for placing the car parallel to the lane-markings is that the car direction when taking the calibration image serves as the reference for real driving, and if it is not parallel to the lane-markings, the error in the estimation of θy becomes large. To avoid the influence of human error, we took several calibration images to optimize the parameters. After that, the lane-markings are detected by the lane detection technique. Then, θ1, θ2, θ3, and the camera height are calculated as described in Section 2. Figure 13a–c shows example frames for camera calibration and the corresponding top view of the experimental environment for the highway and urban road experiments, respectively.
After the calibration step, the position and orientation of the experimental car were arbitrarily changed to simulate a real driving situation while the pose of the camera (i.e., the camera coordinate system) relative to the car remained fixed. Then, an image (called the “driving image”) was taken, and two parameters were manually measured with a steel tape: (a) the distance from the camera to one of the lane-markings (xx); (b) the yaw angle of the experimental car relative to the lane (θy). The lane detection technique is used again to detect the clearest lane-marking in the “driving image”, and the two lane departure parameters, θy and xx, are estimated using the lane departure warning method described above. Figure 13d–f shows example frames for lane departure assessment and the corresponding top view of the experimental environment for the highway and urban road experiments, respectively. To test the algorithm in different situations, the camera’s pose was arbitrarily changed five times in the highway experiment and four times in the urban road experiment. Finally, the estimated quantities and the actual measured values were compared, and the errors were calculated.
The KITTI odometry dataset was initially created for visual odometry and SLAM algorithms [30]. It is almost the only benchmark dataset with ground truth in its No. 00–11 image sequences, including the camera coordinates of each image gathered by a GPS/IMU system. From the transformation matrix of each image, the vehicle’s deviation angle θy in each image can be deduced. However, KITTI does not provide the real values of the distances from the camera to the lane-markings, which makes it impossible to assess the parameter xx. KITTI allows an objective, quantified performance comparison between the proposed algorithm and the state-of-the-art works without manually measuring the errors. Figure 14 shows example frames of the KITTI odometry dataset.
Table 1 tabulates the experimental results for lane departure assessment. It is observed that the average error of θy is about 1 degree and the average error of xx is less than 5 cm. Likely causes of the errors include the small deviation of the vehicle orientation during the calibration process and the measurement errors of the real camera positions. To evaluate the warning algorithm directly, lane departure criteria on both θy and xx were defined to calculate the correct warning rate. For the highway experiment, the distance from the car’s front wheel to the lane-marking replaces xx as a criterion, since this parameter is more direct for the departure judgment. For the KITTI dataset experiment, only θy is used as the criterion because xx cannot be estimated.
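The correct warning rate in Table 1 is consistent with one minus the ratio of false alarms to tested frames; a quick check, reading the numbers directly from Table 1:

```python
rows = {                       # (tested frames, false alarms) from Table 1
    "Highway":    (109,  2),
    "Urban Road": (1546, 21),
    "KITTI":      (533,  0),
    "Sum":        (2188, 23),
}
for name, (frames, false_alarms) in rows.items():
    rate = 100.0 * (frames - false_alarms) / frames
    print(f"{name}: {rate:.2f}%")  # 98.17%, 98.64%, 100.00%, 98.95%
```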
Table 2 compares this work with six state-of-the-art algorithms whose previous experiments were mainly conducted on their own dedicated datasets rather than on a public dataset. Like this work, these six algorithms provide their formulas, which makes them re-implementable on other datasets. Since these six algorithms need the expressions of both detected lane-markings in the images, the dataset should contain information on at least two lane-markings. Another condition is setting the threshold values of each algorithm, but few algorithms report their threshold values; we therefore selected, through experiments and software-based simulations, the threshold values resulting in the best correct warning rate for each algorithm.
Finally, only 604 of the 1546 frames in our dataset include two lane-markings. The best performance on our dataset among all six algorithms is achieved by [15], which still fails to reach a 90% correct warning rate. The main reason is that these algorithms need two lane-markings, while the angles between the two detected lane-marking lines can change drastically with the deviation angles of the camera. For example, the angle bisector of the two detected lane-markings, the lane-departure judgment parameter in [18], is mainly affected by the camera rotation around the Zc-axis.
On the other hand, the proposed algorithm is compared with [25], which uses the 3D imaging model to calculate the θy and xx parameters. The comparison result is shown in Table 3, indicating that the performances of the two algorithms are both excellent and almost identical. The high accuracy indicates the advantage of combining the image information with the road model.
Besides the decision-making parameters θy and xx, the accuracy of the camera height h and the lane-width w’ is also important. We tested the camera height and lane width in the laboratory, and the results are shown in Table 4. The total numbers of frames for testing h and w’ are 205 and 780, and the average errors are less than 2% and 3%, respectively. Figure 15a shows example frames of the h and w’ experiment.
As for curved roads, the tangent line of the curve plays the same role as the “straight lane-marking.” Therefore, the parameter θy is the deviation of the vehicle direction from the tangent line of the curved lane-marking, and the parameter xx is the distance between the vehicle and the tangent line of the curved lane-marking. The experiment for curved roads estimates the parameter xx, while θy is not estimated because the direction of the tangent line changes as the vehicle moves and its real value is hard to measure. In Table 5, the error of xx is 17.29 cm, and the correct warning rate is 89.25%. The difficulty in detecting the tangent line may contribute to the error increase, but the main reason is the camera’s field of view (FOV), which causes the difference between the tangent point and the point for measuring xx (called the x-point), as shown in Figure 16. The x-point is the closest point on the lane-marking to the vehicle, so the distance from the vehicle to the x-point is the real value of xx. However, for the precise detection of the lane-markings in front of the vehicle, the camera should face forward, so the camera usually cannot capture the x-point, and the point nearest to the x-point that the camera can capture is the tangent point. Therefore, the road’s curvature causes an error between the tangent point and the x-point. The greater the curvature of the lane-marking, the larger the difference between the slopes of the tangent lines at the tangent point and the x-point, and the bigger the error of the calculated xx. This error could be reduced by estimating the position of the x-point with more advanced algorithms in future research. It may seem more natural to detect the curved lane-markings and use the detected curve in the image, rather than the tangent line, to determine lane departure. However, it would be much more complex to calculate the projective relation between a curve in the world coordinate system and that in the image coordinate system. Therefore, using the tangent line is the best solution for curved-lane departure warning. Figure 15b shows example frames of the curved road experiment.

5. Conclusions

This paper proposed a lane departure assessment method with precise positioning through a 3D camera imaging model. We exhibited the advantages of this method in three aspects. First, the calibration environment is simple to set up, with no special requirements for camera installation: the camera can be arbitrarily installed in the vehicle, and the environment only needs to contain three parallel and equally spaced horizontal lines. Second, the camera height is calibrated rather than measured, avoiding measurement difficulty and errors; the camera focal length, which is relatively constant, is used in calibration to calculate the camera height. Third, the critical parameters of the departure determination, i.e., the yaw angle representing the deviation of the vehicle direction and the distance between the lane-marking and the vehicle, can be deduced from even a single lane line, which is valuable and reliable for real-world applications. Finally, the experimental results illustrated the high accuracy of the lane departure assessment. The drawback of the proposed method lies in the curved lane departure warning, which requires a wide camera field of view to keep the estimation error low. The proposed algorithm can improve traffic safety and has excellent potential to be applied in future intelligent transportation systems.

Author Contributions

Conceptualization, F.A., L.C. and Y.L.; methodology, F.A. and Y.L.; software, Y.L.; validation, Y.L. and P.L.; formal analysis, Y.L.; investigation, Y.L., G.S. and Z.L.; resources, F.A., Y.L., G.S. and Z.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, F.A. and Y.L.; visualization, Y.L.; supervision, F.A.; project administration, F.A.; funding acquisition, F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science, Technology, and Innovation Commission of Shenzhen Municipality under grant JSGG20200102162401765.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: (http://www.cvlibs.net/datasets/kitti/eval_odometry.php) accessed on 26 January 2022. Other data presented in this study are available on request from the corresponding author. The data are not publicly available due to copyright.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. Global Status Report on Road Safety 2018; World Health Organization: Geneva, Switzerland, 2018. [Google Scholar]
  2. Wang, Y.; Zhou, Z.; Wei, C.; Liu, Y.; Yin, C. Host–Target Vehicle Model-Based Lateral State Estimation for Preceding Target Vehicles Considering Measurement Delay. IEEE Trans. Ind. Inform. 2018, 14, 4190–4199. [Google Scholar] [CrossRef]
  3. Lin, H.Y.; Dai, J.M.; Wu, L.T.; Chen, L.Q. A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection. Sensors 2020, 20, 5139. [Google Scholar] [CrossRef] [PubMed]
  4. Martínez-García, M.; Zhang, Y.; Gordon, T. Modeling Lane Keeping by a Hybrid Open–Closed-Loop Pulse Control Scheme. IEEE Trans. Ind. Inform. 2016, 12, 2256–2265. [Google Scholar] [CrossRef] [Green Version]
  5. Zhang, J.; Si, J.; Yin, X.; Gao, Z.; Moon, Y.S.; Gong, J.; Tang, F. Lane departure warning algorithm based on probability statistics of driving habits. Soft Comput. 2021, 25, 13941–13948. [Google Scholar] [CrossRef]
  6. Chen, W.; Zhao, L.; Tan, D.; Wei, Z.; Xu, K.; Jiang, Y. Human–machine shared control for lane departure assistance based on hybrid system theory. Control Eng. Pract. 2019, 84, 399–407. [Google Scholar] [CrossRef]
  7. Cafiso, S.; Pappalardo, G. Safety effectiveness and performance of lane support systems for driving assistance and automation – Experimental test and logistic regression for rare events. Accid. Anal. Prev. 2020, 148, 105791. [Google Scholar] [CrossRef] [PubMed]
  8. Sternlund, S. The safety potential of lane departure warning systems—A descriptive real-world study of fatal lane departure passenger car crashes in Sweden. Traffic Inj. Prev. 2017, 18, S18–S23. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Lee, J.W. A machine vision system for lane-departure detection. Comput. Vis. Image Underst. 2002, 86, 52–78. [Google Scholar] [CrossRef] [Green Version]
  10. Vijay, G.; Ramanarayan, M.; Chavan, A.P. Design and Integration of Lane Departure Warning, Adaptive Headlight and Wiper system for Automobile Safety. In Proceedings of the 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), Bangalore, India, 17–18 May 2019; pp. 1309–1315. [Google Scholar]
  11. Gaikwad, V.; Lokhande, S. Lane departure identification for advanced driver assistance. IEEE Trans. Intell. Transp. Syst. 2014, 16, 910–918. [Google Scholar] [CrossRef]
  12. Bhujbal, P.N.; Narote, S.P. Lane departure warning system based on Hough transform and Euclidean distance. In Proceedings of the 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 21–24 December 2015; pp. 370–373. [Google Scholar]
  13. Kortli, Y.; Marzougui, M.; Atri, M. Efficient implementation of a real-time lane departure warning system. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; pp. 1–6. [Google Scholar]
  14. Umamaheswari, V.; Amarjyoti, S.; Bakshi, T.; Singh, A. Steering angle estimation for autonomous vehicle navigation using hough and Euclidean transform. In Proceedings of the 2015 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES), Kozhikode, India, 19–21 February 2015; pp. 1–5. [Google Scholar]
  15. Viswanath, P.; Swami, P. A robust and real-time image based lane departure warning system. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 73–76. [Google Scholar]
  16. Petwal, A.; Hota, M.K. Computer Vision based Real Time Lane Departure Warning System. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 0580–0584. [Google Scholar]
  17. Jung, C.R.; Kelber, C.R. Lane following and lane departure using a linear-parabolic model. Image Vis. Comput. 2005, 23, 1192–1202. [Google Scholar] [CrossRef]
  18. Chen, P.; Jiang, J. Algorithm Design of Lane Departure Warning System Based on Image Processing. In Proceedings of the 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 1–2501. [Google Scholar]
  19. Gamal, I.; Badawy, A.; Al-Habal, A.M.; Adawy, M.E.; Khalil, K.K.; El-Moursy, M.A.; Khattab, A. A robust, real-time and calibration-free lane departure warning system. Microprocess. Microsyst. 2019, 71, 102874. [Google Scholar] [CrossRef]
  20. Prasad, B.P.; Yogamani, S.K. A 160-fps embedded lane departure warning system. In Proceedings of the 2012 International Conference on Connected Vehicles and Expo (ICCVE), Beijing, China, 12–16 December 2012; pp. 214–215. [Google Scholar]
  21. Wu, C.B.; Wang, L.H.; Wang, K.C. Ultra-low complexity block-based lane detection and departure warning system. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 582–593. [Google Scholar] [CrossRef]
  22. Sutopo, R.; Yau, T.T.; Lim, J.M.Y.; Wong, K. Computational Intelligence-based Real-time Lane Departure Warning System Using Gabor Features. In Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, China, 18–21 November 2019; pp. 1989–1992. [Google Scholar]
  23. Yu, B.; Zhang, W.; Cai, Y. A lane departure warning system based on machine vision. In Proceedings of the 2008 IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008; Volume 1, pp. 197–201. [Google Scholar]
  24. Marzougui, M.; Alasiry, A.; Kortli, Y.; Baili, J. A Lane Tracking Method Based on Progressive Probabilistic Hough Transform. IEEE Access 2020, 8, 84893–84905. [Google Scholar] [CrossRef]
  25. Xu, H.; Wang, X. Camera calibration based on perspective geometry and its application in LDWS. Phys. Procedia 2012, 33, 1626–1633. [Google Scholar] [CrossRef] [Green Version]
  26. Yunjiang, Z.; Gang, F.; Dong, W. Development of lane departure warning system based on a Dual-Core DSP. In Proceedings of the 2011 International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE), Changchun, China, 16–18 December 2011; pp. 476–480. [Google Scholar]
  27. Ma, X.; Mu, C.; Wang, X.; Chen, J. Projective Geometry Model for Lane Departure Warning System in Webots. In Proceedings of the 2019 5th International Conference on Control, Automation and Robotics (ICCAR), Beijing, China, 19–22 April 2019; pp. 689–695. [Google Scholar]
  28. Lin, H.Y.; Yao, C.W.; Cheng, K.S.; Tran, V.L. Topological map construction and scene recognition for vehicle localization. Auton. Robot. 2018, 42, 65–81. [Google Scholar] [CrossRef]
  29. Qin, Z.; Wang, H.; Li, X. Ultra Fast Structure-aware Deep Lane Detection. arXiv 2020, arXiv:cs.CV/2004.11757. [Google Scholar]
  30. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012. [Google Scholar]
Figure 1. Overview of our proposed method. First, calibration with three parallel and equally spaced lines on the ground is applied to estimate the three angles of rotation for the transformation from the camera coordinate system to the world coordinate system. Next, camera height is calculated, then lane-width can be calculated from the camera height. After calibration, lane departure warning is done by estimating the distance between the lane-marking and the vehicle as well as the yaw angle of the vehicle to the lane with only one of the two lane-markings. Finally, decision is made using the estimated distance and angle.
Figure 2. The positions of the three coordinate systems.
Figure 3. The 3D imaging model of the lane-markings.
Figure 4. The position change of the IMG in step 2.1. (Xc-, Xc’-, Yc-, and Yc’-axes are omitted).
Figure 5. The imaging differences before and after step 2.1.
Figure 6. The position change of the IMG in step 2.2. (Xc-, Xc’-, and Yc-axes are omitted).
Figure 7. The imaging differences before and after step 2.2. (a) The side view of the image sensors. (b) The top view of the αbot plane.
Figure 8. The imaging differences before and after step 2.3.
Figure 9. The imaging for the calculation of w’.
Figure 10. The side and top view of the coordinate systems.
Figure 11. The imaging for the calculation of θy. The right lane lr is detected in the IMG plane.
Figure 12. The imaging for the calculation of xx. The right lane lr is detected in the IMG plane.
Figure 13. Example frames of the experiments and the corresponding top views of the environments. (a) Top view of the camera calibration experiments. (b) Camera calibration of the highway experiments. (c) Camera calibration of the urban road experiments. (d) Top view of the lane departure assessment experiments. (e) Lane departure assessment of the highway experiments. (f) Lane departure assessment of the urban road experiments.
Figure 14. Example frames of the KITTI odometry dataset.
Figure 15. Example frames of the h and w’ experiment (a) and the curved road experiment (b).
Figure 16. The position of x-point.
Table 1. The experimental results for lane departure assessment.

Experiment | Tested Frames | θy Error | xx Error | Departure Frames | False Alarms | Correct Warning Rate | Lane Departure Criteria | Image Resolution | Camera Height
Highway | 109 | 0.36° | 4.24 cm | 27 | 2 | 98.17% | xfw * < 80 cm & θy ≥ 15° | 5456 × 3632 | 120 cm
Urban Road | 1546 | 1.13° | 4.64 cm | 305 | 21 | 98.64% | xx < 150 cm & θy ≥ 15° | 5456 × 3632 | 144 cm
KITTI | 533 | 0.97° | - | 20 | 0 | 100% | θy ≥ 15° | 1241 × 376 | 165 cm
Sum | 2188 | 1.05° | 4.61 cm | 352 | 23 | 98.95% | - | - | -

* xfw is the distance from the front wheel of the experimental car to the lane-marking.
Table 2. Comparison of the proposed algorithm with state-of-the-art algorithms on lane departure warning.

Algorithm | Total Frames | Departure Frames | False Alarms | Correct Warning Rate
Chen and Jiang [18] | 604 | 101 | 422 | 30.13%
Petwal and Hota [16] | 604 | 101 | 299 | 50.50%
Gamal et al. [19] | 604 | 101 | 241 | 60.10%
Bhujbal and Narote [12] | 604 | 101 | 187 | 69.04%
Yu et al. [23] | 604 | 101 | 134 | 77.81%
Viswanath et al. [15] | 604 | 101 | 61 | 89.90%
This Work | 1546 | 305 | 21 | 98.64%
Table 3. Comparison of the proposed algorithm with [25] on lane departure warning.

Algorithm | Total Frames | θy Error | xx Error | Departure Frames | False Alarms | Correct Warning Rate
Xu and Wang [25] | 1546 | 1.36° | 5.19 cm | 305 | 21 | 98.64%
This Work | 1546 | 1.13° | 4.64 cm | 305 | 21 | 98.64%
Table 4. The experimental results of camera height h and lane-width w’. (Error1 is the error in cm; Error2 is the error in percentage.)

h (cm) | Frames | Error1 | Error2 | w’ (cm) | Frames | Error1 | Error2
75 | 36 | 1.39 cm | 1.85% | - | - | - | -
84 | 22 | 1.56 cm | 1.86% | - | - | - | -
96.6 | 23 | 1.61 cm | 1.67% | 60 | 312 | 1.73 cm | 2.88%
74.2 | 29 | 1.27 cm | 1.71% | 120 | 234 | 2.49 cm | 2.08%
84.3 | 29 | 1.22 cm | 1.45% | 180 | 156 | 2.91 cm | 1.61%
94.6 | 34 | 0.87 cm | 0.92% | 240 | 78 | 4.07 cm | 1.69%
87.6 | 32 | 1.03 cm | 1.17% | - | - | - | -
Sum | 205 | 1.25 cm | 1.50% | Sum | 780 | 2.43 cm | 2.27%
Table 5. The experimental results of the curved roads.

Section | Frames | xx Error | Departure Frames | False Alarms | Correct Warning Rate
1 | 43 | 16.19 cm | 16 | 6 | 86.05%
2 | 50 | 18.23 cm | 26 | 4 | 92.00%
Sum | 93 | 17.29 cm | 42 | 10 | 89.25%