Article

Vision-Based Ingenious Lane Departure Warning System for Autonomous Vehicles

1 Centre for Smart Grid Technologies, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai 600127, India
2 NGNLab, Department of Computer Technology, Anna University, MIT Campus, Chennai 600044, India
3 Information Networking Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai 600127, India
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(4), 3535; https://doi.org/10.3390/su15043535
Submission received: 14 December 2022 / Revised: 23 January 2023 / Accepted: 31 January 2023 / Published: 14 February 2023

Abstract

Lane detection is necessary for developing intelligent Autonomous Vehicles (AVs). Vision-based lane detection is more cost-effective than sensor-based alternatives and requires less operational power. However, images captured by a moving vehicle exhibit varying brightness, blur, and occlusion caused by diverse driving locations. We propose a Vision-based Ingenious Lane Departure Warning System (VILDS) for AVs to address these challenges. The Generative Adversarial Network (GAN) in VILDS selects the most precise features to create images that are identical to the originals but have better clarity. The system also uses Long Short-Term Memory (LSTM) to learn the average behavior of the samples and forecast lanes from a live feed of processed images; this predicts incomplete lanes and increases the reliability of the AV's trajectory. Further, we devise a strategy to improve the Lane Departure Warning System (LDWS) by determining the angle and direction of deviation to predict the AV's lane crossover. An extensive evaluation of the proposed VILDS system demonstrated the effective working of the lane detection and departure warning modules, with accuracies of 98.2% and 96.5%, respectively.

1. Introduction

Autonomous Vehicles (AVs) travel without a human operator using a combination of sensors, cameras, and Artificial Intelligence (AI), which enables them to make coherent and competent decisions. The growing number of AVs each year makes accurate trajectory decisions essential. Historical data available in AVs are used to train Machine Learning (ML) algorithms that support intelligent decision making [1]. Recent AI-based methodologies such as Convolutional Neural Networks (CNNs) process instances from the AV's prior experience (data) while constructing the model, so that the scenarios an AV might encounter during its journey are covered in training. Light Detection and Ranging (LiDAR) and Radio Detection and Ranging (RADAR) sensors are commonly used to collect these data. However, given their weakness in extreme weather conditions, cameras are employed as the input devices for AV lane detection. The AV's cameras provide the vehicle with the visual aid needed to detect lanes on the road. Traveling on roads with multiple lanes requires additional attention, as the AV must accurately recognize its lane without interfering with other vehicles. Many obstacles must be considered when constructing a lane line detection system, such as lane markers, occlusion, defects, and lane line interference [2]. Several CNN-based lane detection approaches have been proposed that integrate a second deep learning model, such as a Recurrent Neural Network (RNN), to address these issues. This overcomes the challenges faced by individual models, and lanes can be detected with improved performance.
Intending to improve the state of lane detection, we propose the Vision-based Ingenious Lane Departure Warning System for AVs (VILDS) using a GAN-LSTM model. The Generative Adversarial Network (GAN) is trained to process realistic samples by representing their specific behaviors rather than averaging them. Although GANs produce high-quality images, they tend to generate samples with little diversity; to solve this, a Long Short-Term Memory (LSTM) network, a Recurrent Neural Network (RNN) model, is combined with the GAN. The LSTM model minimizes the average error across all predictions, thus learning the average behavior of the samples [3]. For lane detection, the GAN produces augmented images with enhanced quality that concentrate on lane features, and these are passed as continuous data to the LSTM. The input images are used to train lane detection on the road, where incomplete lanes are predicted by applying the LSTM to preceding frame data. The combination of GAN and LSTM in our proposed work provides a solution to the challenges faced during lane detection.
The output of the lane detection model is passed to the Lane Departure Warning System (LDWS). Vehicles sharing or crossing lanes on the road must not intersect or collide with one another. Even a minor deviation in direction can have significant repercussions at high speeds [4,5], and these vehicles must ensure the safety of their passengers. As a result, the LDWS is essential for increasing driving safety, as it notifies the AV to return to its original path.
The proposed VILDS framework consists of the hybrid GAN-LSTM model, which focuses on improving the image quality for lane detection and completing lane lines in the image to maintain the AV’s trajectory within the lanes. The framework also includes the LDWS, which utilizes Hough transformation and Euclidean geometry to identify the threshold angle. This angle is used to calculate the deviation angle and to warn the AV about lane crossovers. The key contributions of this paper are summarized as follows:
  • Developing a VILDS model that enhances the quality of low-resolution images using GAN and restores incomplete lanes in the images using LSTM. The detected lanes provide an accurate path for the AV's trajectory.
  • A novel approach to determine the threshold angle using Hough transformation and Euclidean geometry that aids in detecting the AV’s lane crossovers and deviation angles.
  • A Warning Notification System aided by lane crossover results to alert the AV, ensuring its safety in travel.
The remainder of this paper is organized as follows. Section 2 consists of a summary of the related works. Section 3 describes the proposed VILDS model. Section 4 evaluates the lane detection and LDWS performances. Section 5 brings the paper to a close by outlining the performance evaluation results.

2. Related Works

This section discusses existing models for lane detection and departure warning systems. The traditional approaches compared by Waykole et al. [6] include laser- and RADAR-based sensors for lane detection. LiDAR is a laser-based sensor that detects lanes using reflected laser pulses. However, due to the sophistication of the software and the computing resources required, LiDAR systems are not cost-effective. RADAR sensors, which detect lanes using radio waves emitted by the sensor, are relatively weak at modeling precise lanes compared to cameras. Both methods fail in extreme weather conditions. Vision-based approaches can overcome these challenges and provide a cost-effective solution requiring low operational power. With vision-based sensors in the AV, however, predicting lane lines from a single frame raises many issues in maintaining the real-time efficiency of the system, which leads to inadequate results. In [7,8,9,10], the authors overcome this issue by using multiple frames as input to a hybrid model combining a deep CNN and an RNN. The deep RNN is implemented as an LSTM network, since it can forget irrelevant information and recall essential details; the LSTM cells assess the information requirement and maintain real-time efficiency. An Encoder–Decoder framework is used to combine the CNN and RNN. Another CNN-based lane detection technique is proposed by Tabelini et al. [11], who replace the RNN with an anchor-based attention mechanism (LaneATT) that combines local and global features, generating feature maps and extracting the features of each anchor for lane detection. Although the F1-Score of the model is larger than those of related frameworks, the model is not effective in low-light environments.
The ability of GANs to augment data and produce a desired output by focusing on specific elements addresses the fundamental problem of image complexity encountered in computer vision detection tasks. To detect lanes under different road conditions, Zhang et al. [12] propose the Ripple Lane Line Detection Network (RiLLD-Net), which is integrated with Ripple-GAN. Experiments on RiLLD-Net showed that the system's performance increased after adding Gaussian noise, but the accuracy of detecting partly occluded lanes remained low.
Other methods find the Region of Interest (ROI) for lane tracking. Wang et al. [13] suggest a MultiTask Attention Network model (MultiLaneNET), which refines the ROI of an image and delivers strong localization ability through handcrafted features integrated with the semantic information of a CNN. The model was demonstrated on four different datasets, and the results showed that detection precision decreases as road conditions grow more complex.
The output of lane detection contributes significantly to the departure warning system. Lane colors differ from those of the road, which allows the images to be treated as a set of nodes represented as pixels for constructing the departure system. Using those pixels, the method proposed by Terlizzi et al. [14] applies an Iterative Tree Search (ITS) to reduce the complexity of finding the lane boundaries. On damaged roads with faded lane boundaries and inconsistent shadows, however, the system can raise false warnings. Kamble et al. [15] overcome these challenges by proposing efficient Gaussian filtering-based lane detection using Canny Edge Detection and the Hough transformation for candidate line selection. An improved version of this method was proposed by Teo et al. [16], who replaced the Gaussian filter with a Gabor filter and extended the algorithm with Euclidean geometry to find the vehicle's lane deviation angle. This algorithm is less accurate for large deviations and in rainy road conditions.
However, these methods cannot handle the different distortions present in the input images, and their accuracy decreases under complex road and weather conditions. Therefore, we adopt a hybrid GAN-LSTM model along with LDA to classify images based on distortions, generate images with optimum quality, and restore incomplete lanes. Alongside lane detection, we also propose a warning system for the AV's lane departure with a novel technique that adapts the Hough transformation [15] to identify edge points on the lanes and Euclidean geometry [16] to calculate the deviation angles from those points.

3. Proposed Work

This section describes the major components of the proposed VILDS system consisting of a hybrid GAN-LSTM model and a novel method for detecting lane deviation in LDWS.

3.1. Lane Detection in VILDS

The architecture diagram of the proposed hybrid GAN-LSTM model for lane detection is depicted in Figure 1.
GAN is employed to generate higher-quality images that are used to predict lane lines in incomplete lanes using LSTM. The performance of GAN can be improved by classifying the distortion in images based on strong/weak lighting, motion blur, and partial occlusion. Linear Discriminant Analysis (LDA) is adopted in the system to classify images based on their distortion categories, as in [17]. Identifying the category helps GAN to determine the features to focus on by fine-tuning the parameters to mitigate the distortion. LDA defines within-class and between-class scatter matrices for all class samples.
$W = \mathrm{ratmax}\left(\frac{H_{bc}}{H_{wc}}\right)$   (1)
W is the matrix that depicts the output of LDA. It is obtained by maximizing the ratio (ratmax) of the between-class determinant (Hbc) to the within-class determinant (Hwc) in Equation (1). The working of the GAN-LSTM model in the proposed VILDS system is explained in Algorithm 1. The dataset (traindata) is passed as input to LDA, which classifies it into the respective classes A[1], A[2], A[3]. The data in A are given as input to the GAN-LSTM model. λ and γ are the parameters used in training the GAN and LSTM. The dataset is split based on the batch size (λbatchsize, γbatchsize), and for each epoch (λepoch, γepoch) the corresponding data are processed. During each epoch, the data in A are shuffled before processing to reduce variance and to keep the model general with less overfitting.
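As a minimal illustration of this classification step (a sketch under assumed inputs, not the authors' implementation; the feature vectors and labels below are placeholders), LDA can be fitted with scikit-learn:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder training data: one flattened/downsampled feature vector per image.
# Labels follow the three distortion classes of Algorithm 1:
# 0 = motion blur, 1 = occlusion, 2 = strong/weak lighting.
X_train = np.random.rand(300, 4096)
y_train = np.random.randint(0, 3, size=300)

# LDA maximizes the ratio of between-class to within-class scatter (Equation (1)).
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Route a new image's features to its distortion class A[1..3] before GAN training.
predicted_class = int(lda.predict(np.random.rand(1, 4096))[0]) + 1
```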
Algorithm 1. Lane Detection Algorithm in VILDS
Input: traindata, λ, γ
Output: Lane lines detected in images (η)
 1: procedure LDA(traindata)
 2:     A[1] ← traindatablur
 3:     A[2] ← traindataocclusion
 4:     A[3] ← traindatalight
 5: end procedure
 6: procedure GAN_LSTM(A, λ, γ)
 7:   Training GAN using parameters in λ
 8:   for x in A do
 9:    ϑ  ← TrainGAN(x, λepochs, λbatchsize)
 10:   end for
 11:   ϖ  ←  gan_denoise(ϑ)
 12:   Training LSTM using parameters in γ and dataset ϖ
 13:   η  ←  TrainLSTM(ϖ, γepochs, γbatchsize)
 14:   return η
 15: end procedure
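Read as Python, Algorithm 1 reduces to the following skeleton. TrainGAN, gan_denoise, and TrainLSTM stand in for the training routines described below; their names and signatures are taken from the pseudocode, not from a published API, and the (image, features) input layout is an assumption:

```python
def lane_detection_vilds(traindata, lda, lam, gamma):
    """Sketch of Algorithm 1: classify distortions with LDA, enhance images
    with the GAN, denoise, then train the LSTM lane detector."""
    # Procedure LDA: split the dataset into the three distortion classes.
    A = {1: [], 2: [], 3: []}
    for image, features in traindata:
        cls = int(lda.predict(features.reshape(1, -1))[0]) + 1
        A[cls].append(image)

    # Train the GAN on each class and collect its outputs (the pseudocode's ϑ).
    gan_outputs = []
    for x in A.values():
        gan_outputs.extend(TrainGAN(x, lam["epochs"], lam["batch_size"]))

    # Denoise the generated images (ϖ), then train the LSTM to detect lanes (η).
    denoised = gan_denoise(gan_outputs)
    return TrainLSTM(denoised, gamma["epochs"], gamma["batch_size"])
```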
The TrainGAN function trains the GAN model by focusing on the important features of the lane and returns the modified images with lower occlusion and optimal brightness. GAN highlights the features of the objects in the image, simultaneously providing images with the ideal brightness and allowing the detection algorithms to perform more effectively and accurately [18]. The images from the GAN output (ϑ) at each iteration are further given as input to a GAN-based denoiser to improve the quality of the images and are stored in the variable ϖ. At each iteration in the function, the distance between the distribution of real data and the distribution of data generated by GAN is reflected by loss functions. The loss function (L) of the generator and discriminator is calculated in Equation (2),
$L = \sum_{i=1}^{n} \min_{G} \max_{D} \left[ \log\left(D(\vartheta)\right) + \log\left(1 - D\left(G(\omega)\right)\right) \right]$   (2)
where the GAN output (ϑ) and the denoiser output (ϖ) are passed to the Generator (G) and Discriminator (D) functions, respectively [19]. The obtained value of L guides the Adam optimizer in reducing the GAN loss function, thereby enhancing ϖ.
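One conventional way to realize Equation (2) in TensorFlow is via binary cross-entropy on the discriminator's logits; this is a sketch of the standard minimax GAN loss rather than the authors' exact code:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # Corresponds to maximizing log D(x) + log(1 - D(G(z))) in Equation (2).
    return bce(tf.ones_like(real_logits), real_logits) + \
           bce(tf.zeros_like(fake_logits), fake_logits)

def generator_loss(fake_logits):
    # The generator minimizes log(1 - D(G(z))); the equivalent non-saturating
    # form -log D(G(z)) is used here, as is common practice.
    return bce(tf.ones_like(fake_logits), fake_logits)

# Adam reduces the GAN loss, matching the optimizer named in the text.
gen_opt = tf.keras.optimizers.Adam(learning_rate=0.001)
disc_opt = tf.keras.optimizers.Adam(learning_rate=0.001)
```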
Taking the improved ϖ as input, the lanes are predicted using the LSTM. The LSTM network predicts and restores incomplete lanes through its feedback connections, which operate on the available sequential data by learning the long-term dependencies in the lane line features. These feedback connections enable it to process, learn, and remember those dependencies, providing the foundation for the LSTM model to forecast and patch incomplete lane lines and render solid lanes for further AV applications such as the LDWS. The output (η), containing well-defined lane lines, is passed as input to the LDWS.

3.2. Departure Warning System

The LDWS is an important safety feature of an AV. This system alerts the vehicle in case of path diversion or during lane crossovers. Figure 2 depicts the architecture diagram of the warning system in VILDS.
The detected lanes are further used in the LDWS to position the vehicle and, consequently, to calculate the angle and direction of the vehicle's deviation. The Hough transformation positions the vehicle in the lane by identifying the right and left lane candidate lines. This method gives two Hough-space coordinates, a left point (Zl) and a right point (Zr), for the lane candidates. These coordinates aid in determining the lane candidates. The midpoint (ρ) is half of the width (εη) of the output image from lane detection.
$\rho = \frac{\varepsilon_{\eta}}{2}$   (3)
$\nu = dis(\rho, Z_l)$   (4)
$\chi = dis(\rho, Z_r)$   (5)
The distances between Zl and ρ and between Zr and ρ, denoted ν and χ, are calculated by applying the corresponding coordinate points to the Euclidean distance function (dis). The angle at which the AV deviates is calculated using the threshold angle and trigonometric functions; the threshold angle (ϕ) determines the direction in which the deviation occurs. Two perpendicular lines from points Pl and Pr, with respect to the left and right lane candidates, are drawn such that they pass through ρ. This forms two right triangles, with vertices Pl, Zl, ρ and Pr, Zr, ρ on either side of the lane, from which the deviation angles are obtained.
$\omega = \tan^{-1}\left(\frac{opp_l}{adj_l}\right)$   (6)
$\theta = \tan^{-1}\left(\frac{opp_r}{adj_r}\right)$   (7)
The angle deviation scenario of the proposed LDWS is depicted in Figure 3. The right triangles of the respective candidate lines are used to find ω and θ using the inverse tangent function. oppl and adjl denote the opposite (PlZl) and adjacent (Plρ) sides of the right triangle corresponding to the left candidate lane. Similarly, oppr and adjr denote the opposite (PrZr) and adjacent (Prρ) sides of the right triangle corresponding to the right candidate lane. Considering these parameters, Algorithm 2 identifies the direction of deviation of the AV. Let ϕ be the threshold value that determines whether lane deviation occurs based on the values of the angles ω and θ. If θ is less than ϕ and ρ is closer to Zl, a right deviation occurs. If ω is less than ϕ and ρ is closer to Zr, a left deviation occurs. The AV moves in a straight line if θ is greater than ϕ and ρ is closer to Zr, or if ω is greater than ϕ and ρ is closer to Zl. If ρ is equidistant from Zl and Zr and both θ and ω are greater than ϕ, the AV stays in the considered lane.
Algorithm 2. Departure Warning System in VILDS
Input: Output of lane detection (η)
Output: Deviation angle and direction
 1: procedure LDWS(η)
 2:   The vehicle travel path (lane) is highlighted
 3:   Pr, Pl ← HoughTransformation(η)
 4:   The midpoint ρ is identified using εη
 5:   Right triangles are formed using Z, ρ, P
 6:   Angles ω and θ are computed from the right triangles
 7:   if (ϕ > ω) && (dis(ρ, Zr) < dis(ρ, Zl)) then
 8:     deviation occurs towards the left
 9:   else if (ϕ > ω) && (dis(ρ, Zl) < dis(ρ, Zr)) then
 10:     deviation occurs towards the right
 11:   else if (ϕ > θ) && (dis(ρ, Zr) < dis(ρ, Zl)) then
 12:     deviation occurs towards the left
 13:   else if (ϕ > θ) && (dis(ρ, Zl) < dis(ρ, Zr)) then
 14:     deviation occurs towards the right
 15:   else
 16:     The vehicle moves in the trajectory of the lane
 17:   end if
 18:    Offset from ρ on the deviated side is calculated
 19:   Departure angle of the side of deviation is found
 20: end procedure
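A compact Python rendering of Algorithm 2's decision logic is given below; the dis helper follows Equations (4) and (5), while the inputs Zl, Zr, Pl, Pr are assumed to come from the Hough transformation step:

```python
import math
import numpy as np

def dis(a, b):
    # Euclidean distance between two 2-D points (Equations (4) and (5)).
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def departure_direction(Zl, Zr, Pl, Pr, rho, phi):
    """Sketch of Algorithm 2: returns the AV's deviation direction.
    Zl/Zr: lane candidate points, Pl/Pr: feet of the perpendiculars through
    the midpoint rho, phi: threshold angle in degrees."""
    # Deviation angles from the two right triangles (Equations (6) and (7)).
    omega = math.degrees(math.atan2(dis(Pl, Zl), dis(Pl, rho)))
    theta = math.degrees(math.atan2(dis(Pr, Zr), dis(Pr, rho)))

    if phi > omega and dis(rho, Zr) < dis(rho, Zl):
        return "left"
    if phi > omega and dis(rho, Zl) < dis(rho, Zr):
        return "right"
    if phi > theta and dis(rho, Zr) < dis(rho, Zl):
        return "left"
    if phi > theta and dis(rho, Zl) < dis(rho, Zr):
        return "right"
    return "straight"  # the AV keeps the trajectory of the current lane
```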
$\varepsilon_{lane} = dis(Z_l, Z_r)$   (8)
The distance between Zl and Zr is the lane's width (εlane). The offset value shows how far the AV deviates from the lane's center. The equation for the offset differs based on the AV's deviation side. Depending on the values of ν, χ, εlane, and εreal (the lane width computed from standard real-world values), the offsets of the right (offsetrdev) and left (offsetldev) deviations are calculated using Equations (9) and (10).
$offset_{rdev} = \frac{(2\nu - \varepsilon_{lane}) \cdot \varepsilon_{real}}{2\,\varepsilon_{lane}}$   (9)
$offset_{ldev} = \frac{(\varepsilon_{lane} - 2\chi) \cdot \varepsilon_{real}}{2\,\varepsilon_{lane}}$   (10)
A line perpendicular to the hypotenuse (Zlρ or Zrρ) of the corresponding right triangle is drawn. With the help of the angles ω and θ, the AV's angle of deviation is calculated. DAl and DAr are the deviation angles, chosen according to the direction in which the AV has deviated.
$DA_l = 90° - \omega$   (11)
$DA_r = 90° - \theta$   (12)
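Equations (8)–(12) translate into a small helper; εreal, the real-world lane width, is an assumed calibration constant supplied by the caller:

```python
def offset_and_departure_angle(nu, chi, eps_lane, eps_real, omega, theta, side):
    """Sketch of Equations (9)-(12): lateral offset from the lane center and
    departure angle. nu/chi are the pixel distances from the midpoint to
    Zl/Zr, eps_lane the lane width in pixels (Equation (8)), eps_real the
    real-world lane width, and side the deviation direction from Algorithm 2."""
    if side == "right":
        offset = ((2 * nu - eps_lane) * eps_real) / (2 * eps_lane)   # Eq. (9)
        angle = 90.0 - theta                                          # Eq. (12)
    else:
        offset = ((eps_lane - 2 * chi) * eps_real) / (2 * eps_lane)  # Eq. (10)
        angle = 90.0 - omega                                          # Eq. (11)
    return offset, angle
```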
The input and output of the Lane Detection and Departure Warning System are depicted in Figure 4. The augmented image is shown as the output of GAN. The lane prediction performed by LSTM finally outputs monochrome images. The LSTM output is passed to the LDWS module, where the output contains the information regarding lane departure, offset of AV from the lane, and deviation angle of the vehicle on the lane.

4. Implementation and Results

4.1. Experimental Setup

The training was conducted on the TuSimple dataset [20] using Google Colab with 12 GB of RAM and 110 GB of disk space. This dataset contains 6408 images of United States highway roads taken from a car, each with a resolution of 1280 × 720. There are 3626 images for training, 358 for validation, and 2782 for testing. The dataset is balanced, containing images under various weather conditions, road curvatures, and lane line counts, as well as data with motion blur, occlusion, and strong/weak brightness. TensorFlow and Keras were the additional tools deployed to develop the deep learning models.
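For reference, TuSimple annotations are distributed as JSON lines; a typical loading sketch is shown below (field names follow the public benchmark; the file name is an assumption):

```python
import json

def load_tusimple_labels(label_file):
    """Parse TuSimple-style annotations: each line is a JSON object with the
    per-lane x-coordinates ('lanes', -2 marking missing points), the sampled
    row positions ('h_samples'), and the image path ('raw_file')."""
    samples = []
    with open(label_file) as f:
        for line in f:
            record = json.loads(line)
            lanes = [[(x, y) for x, y in zip(lane, record["h_samples"]) if x >= 0]
                     for lane in record["lanes"]]
            samples.append((record["raw_file"], lanes))
    return samples

train_samples = load_tusimple_labels("label_data_0313.json")  # assumed file name
```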
The Generator part of the GAN in VILDS consists of three fully connected layers with a ReLU activation function at the end, followed by the Discriminator model where the linear, sigmoid, and ReLU activation functions are applied at the four fully connected layers. The Generator model has a batch normalization layer with a momentum of 0.8 and the Discriminator model has a dropout layer at the rate of 0.25. The denoiser is appended to the GAN model to reduce the noise generated in the images. The model is trained using the Adam optimizer at a learning rate of 0.001 for 5000 epochs with a batch size of 32. The LSTM model consists of two LSTM layers with a dropout layer rate of 0.2 in between and a fully connected layer with a linear activation function.
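A Keras sketch consistent with this description follows; the layer widths and the latent size are assumptions, while the layer counts, activations, batch-normalization momentum of 0.8, and dropout rates come from the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Generator: three fully connected layers ending in ReLU, with batch
# normalization (momentum 0.8); the latent dimension of 100 is assumed.
generator = keras.Sequential([
    layers.Dense(256, input_shape=(100,)),
    layers.BatchNormalization(momentum=0.8),
    layers.Dense(512),
    layers.Dense(1024, activation="relu"),
])

# Discriminator: four fully connected layers using ReLU, linear, and sigmoid
# activations, with a dropout layer at rate 0.25.
discriminator = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(1024,)),
    layers.Dropout(0.25),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="linear"),
    layers.Dense(1, activation="sigmoid"),
])

# LSTM: two LSTM layers with dropout 0.2 in between and a linear output layer.
lstm_model = keras.Sequential([
    layers.LSTM(128, return_sequences=True, input_shape=(None, 1024)),
    layers.Dropout(0.2),
    layers.LSTM(128),
    layers.Dense(1024, activation="linear"),
])

# Adam at learning rate 0.001, per the training setup described above.
lstm_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
```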
The processing time for this task was 92 min; the Google Colab environment was further equipped with a high-performance NVIDIA Tesla K80 GPU. Using LDA, 53.3%, 30%, and 16.7% of the images in the dataset were classified under the distortion categories of brightness, motion blur, and occlusion, respectively, making brightness the most common distortion, identified in more than half of the dataset. OpenCV was used along with the outputs of the lane detection module to implement the LDWS.

4.2. Comparative Analysis of GAN-LSTM Model

Figure 5a compares the average training loss of the different models against the number of epochs. The average loss of the VILDS model decreases faster with the number of iterations than those of Ripple-GAN, Robust Lane (LSTM), the MultiTask Attention Network model (MultiLaneNET), and Flexible CNN.
Figure 5b depicts the performance analysis of the proposed model against other lane detection models: Ripple-GAN (Ri-GAN), Robust Lane LSTM, and the MultiTask Attention Network (MultiLaneNET). The precision, recall, and F1-Score values of VILDS are similar to those of the Ripple-GAN model. However, the proposed model detects lanes with an accuracy of 98.2%, which is higher than that of the other models plotted. Comparing the models, we conclude that VILDS has higher computational efficiency and lower complexity, as supported by the training time, testing time, and recall values in Table 1.

4.3. Performance Analysis of LDWS

Figure 5c compares the accuracy of various lane departure system models. The graph depicts total, left, and right lane departures. VILDS has an accuracy of 96.5% over the total frames, which is higher than the accuracy of the LDWS methods in Innovative Lane Detection (ILD) and the Lane Departure Warning System for Advanced Driver Assistance (LDWS-ADA). The proposed framework predicts left and right departures more accurately than the other two models; for right lane departures, a higher accuracy of 96.99% is achieved. As a result, the VILDS model was successfully implemented with a minimal average loss.

5. Conclusions

Challenges posed by road and weather conditions hinder the accurate prediction of lanes and threaten road safety. To overcome these challenges, the proposed VILDS was implemented using the GAN-LSTM hybrid model, which detects lanes with an accuracy of 98.2%. VILDS is less complex and was implemented with minimal loss compared to existing lane detection models such as Ripple-GAN, LSTM, and MultiLaneNET. The novel LDWS achieves an accuracy score of 96.5%, which measures the closeness of the evaluated value to the actual value. In the future, the system's left-deviation accuracy can be improved by integrating relevant parameters, and heterogeneous traffic conditions can be considered.

Author Contributions

Conceptualization, S.G.S. and S.A.; methodology, P.S. and G.R.; software, B.T.; validation, G.S.; formal analysis, S.G.S. and G.R.; writing—original draft preparation, P.S. and G.S.; writing—review and editing, S.A. and B.T.; supervision, G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, C.; Moon, J.H. Robust Lane Detection and Tracking for Real-Time Applications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 4043–4048.
  2. Ortiz-Esquivel, A.E.; Díaz-Hernández, R.; Altamirano-Robles, L. A method for lane recognition using active contours model in vehicular roads. In Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 37–43.
  3. Rossi, L.; Paolanti, M.; Pierdicca, R.; Frontoni, E. Human Trajectory Prediction and Generation using LSTM Models and GANs. Pattern Recognit. 2021, 120, 108136.
  4. Chen, Y.; Boukerche, A. A Novel Lane Departure Warning System for Improving Road Safety. In Proceedings of the ICC 2020–2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6.
  5. Prathiba, S.B.; Raja, G.; Dev, K.; Kumar, N.; Guizani, M. A Hybrid Deep Reinforcement Learning For Autonomous Vehicles Smart-Platooning. IEEE Trans. Veh. Technol. 2021, 70, 13340–13350.
  6. Waykole, S.; Shiwakoti, N.; Stasinopoulos, P. Review on Lane Detection and Tracking Algorithms of Advanced Driver Assistance System. Sustainability 2021, 13, 11417.
  7. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust Lane Detection from Continuous Driving Scenes using Deep Neural Networks. IEEE Trans. Veh. Technol. 2019, 69, 41–54.
  8. Haixia, L.; Xizhou, L. Flexible Lane Detection using CNNs. In Proceedings of the 2021 International Conference on Computer Technology and Media Convergence Design (CTMCD), Sanya, China, 23–25 April 2021; pp. 235–238.
  9. Wang, H.; Chen, Y.; Cai, Y.; Chen, L.; Li, Y.; Sotelo, M.A.; Li, Z. SFNet-N: An Improved SFNet Algorithm for Semantic Segmentation of Low-Light Autonomous Driving Road Scenes. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21405–21417.
  10. Huang, L.; Huang, W. RD-YOLO: An Effective and Efficient Object Detector for Roadside Perception System. Sensors 2022, 22, 8097.
  11. Tabelini, L.; Berriel, R.; Paixao, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. Keep Your Eyes on the Lane: Real-Time Attention-Guided Lane Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 294–302.
  12. Zhang, Y.; Lu, Z.; Ma, D.; Xue, J.H.; Liao, Q. Ripple-GAN: Lane Line Detection with Ripple Lane Line Detection Network and Wasserstein GAN. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1532–1542.
  13. Wang, Q.; Han, T.; Qin, Z.; Gao, J.; Li, X. Multitask Attention Network for Lane Detection and Fitting. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1066–1078.
  14. Terlizzi, M.; Russo, L.; Picariello, E.; Glielmo, L. A Novel Algorithm for Lane Detection Based on Iterative Tree Search. In Proceedings of the 2021 IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Bologna, Italy, 1–2 July 2021; pp. 205–209.
  15. Kamble, A.; Potadar, S. Lane Departure Warning System for Advanced Drivers Assistance. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; pp. 1775–1778.
  16. Teo, T.Y.; Sutopo, R.; Lim, J.M.Y.; Wong, K. Innovative Lane Detection Method to Increase the Accuracy of Lane Departure Warning System. Multimed. Tools Appl. 2021, 80, 2063–2080.
  17. Zhou, M.; Samiappan, S.; Worch, E.; Ball, J.E. Hyperspectral Image Classification Using Fisher’s Linear Discriminant Analysis Feature Reduction with Gabor Filtering and CNN. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 493–496.
  18. Fu, Y.; Sen, S.; Reimann, J.; Theurer, C. Spatiotemporal Representation Learning with GAN Trained LSTM-LSTM Networks. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10548–10555.
  19. Lu, H.; Barzegar, V.; Nemani, V.P.; Hu, C.; Laflamme, S.; Zimmerman, A.T. GAN-LSTM Predictor for Failure Prognostics of Rolling Element Bearings. In Proceedings of the 2021 IEEE International Conference on Prognostics and Health Management (ICPHM), Detroit, MI, USA, 7–9 June 2021; pp. 1–8.
  20. Li, X.; Li, J.; Hu, X.; Yang, J. Line-CNN: End-to-End Traffic Line Detection With Line Proposal Unit. IEEE Trans. Intell. Transp. Syst. 2020, 21, 248–258.
Figure 1. VILDS framework: Lane Detection Model.
Figure 2. VILDS framework: Departure Warning System.
Figure 3. Angle deviation in proposed LDWS.
Figure 4. VILDS output during AV trajectory.
Figure 5. (a) Average loss vs. epochs of different lane detection models; (b) performance analysis of different lane detection models; (c) performance analysis of LDWS models.
Table 1. Analysis of computational efficiency of models.

Models              Training Time/Epoch (s)   Testing Time/Frame (s)   Recall (%)
Ripple GAN [12]     5567.9                    0.5                      97.28
LSTM [7]            6780.3                    0.6                      95.8
MultiLaneNET [13]   7786.2                    0.9                      89.71
VILDS               5520.3                    0.5                      97.7
