Image-Aided LiDAR Extraction, Classification, and Characterization of Lane Markings from Mobile Mapping Data
Abstract
1. Introduction
- Road surface identification: define road surface regions in imagery/LiDAR data;
- Color/intensity enhancement: enhance the utility of color/intensity information for the road surface in imagery/LiDAR data;
- Lane marking extraction: detect lane markings in the enhanced road surface imagery/LiDAR data;
- Lane marking classification: assign labels based on color/pattern/location to detected lane markings according to the Federal Highway Administration (FHWA) standard [5];
- Lane marking characterization: derive lane marking attributes (e.g., visibility conditions of lane markings, intensity profiles, and lane width) using classified lane markings;
- Reporting mechanisms: visualize the derived lane marking results (e.g., extracted lane markings and their characteristics) based on imagery and/or LiDAR data.
- Propose an image-aided LiDAR framework for the following:
- Lane marking extraction/classification/characterization;
- Identifying all lane markings visible in imagery/LiDAR data (not only along the driving lane);
- Handling multiple imaging and LiDAR sensors onboard an MMS;
- Evaluate the performance of the proposed strategies using an MMS equipped with multiple camera and LiDAR units along extended road segments;
- Develop a reporting mechanism for visualizing imagery and LiDAR data together with extracted lane markings, as well as their characteristics (e.g., visibility conditions of lane markings, intensity profiles, and lane width).
2. Data Acquisition Systems and Dataset Description
2.1. Mobile Mapping System
2.2. Study Site and Dataset Description
3. Methodology
3.1. Point Positioning Models for Forward/Backward Projection
3.2. Image-Aided LiDAR Lane Marking Inventory Framework
3.2.1. LiDAR-Based Lane Marking Extraction
3.2.2. Image-Based Lane Marking Extraction
3.2.3. Image-Aided LiDAR Lane Marking Extraction/Classification/Characterization
- If projected image-based lane markings are within 20 cm of LiDAR-based lane markings, the LiDAR-based extraction is colored according to the image-based results using FHWA standard colors [5]. For instance, if the image-based lane markings are white, the LiDAR-based ones are colored with an RGB value of (247, 241, 227).
- If no LiDAR-based lane markings lie within a 20 cm neighborhood of the projected image-based lane markings (i.e., there is no corresponding LiDAR-based extraction in the point clouds), the image-based extraction is used to recover lane markings from the point clouds. First, intensity thresholding at the top 95th percentile is applied to the road surface point cloud to derive hypothesized lane markings. From this hypothesized lane marking point cloud, points within 20 cm (Th_aid) of the projected image-based lane markings are extracted. The resultant lane markings are colored according to the abovementioned procedure. One should note that this study aims to utilize image information to refine LiDAR-based lane markings for establishing an inventory, including intensity profiles for evaluating retroreflectivity. To prevent misrepresentation of the intensity profiles, areas with no intensity contrast in the LiDAR data are not used to extract lane markings from the point clouds. A sketch of this fusion logic is given below.
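The following is a minimal Python sketch of the two fusion rules above, assuming all inputs are numpy arrays of 3D coordinates in a common mapping frame. The names (`fuse_lane_markings`, `TH_AID`, etc.) are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

TH_AID = 0.20                  # 20 cm correspondence/extraction threshold
FHWA_WHITE = (247, 241, 227)   # RGB used for white markings in this study

def fuse_lane_markings(lidar_marks, image_marks_3d, road_pts, road_intensity):
    """Confirm LiDAR-based markings against projected image-based markings,
    and recover markings missed by the LiDAR-based extraction."""
    tree = cKDTree(image_marks_3d)

    # Rule 1: LiDAR-based markings within Th_aid of an image-based marking
    # are kept and colored according to the image-based color label.
    d, _ = tree.query(lidar_marks, distance_upper_bound=TH_AID)
    confirmed = lidar_marks[np.isfinite(d)]

    # Rule 2: where LiDAR-based extraction is missing, hypothesize markings by
    # 95th-percentile intensity thresholding of the road surface points, then
    # keep only hypothesized points within Th_aid of image-based markings.
    hypothesized = road_pts[road_intensity >= np.percentile(road_intensity, 95)]
    d2, _ = tree.query(hypothesized, distance_upper_bound=TH_AID)
    recovered = hypothesized[np.isfinite(d2)]

    # Both sets would then be colored per the FHWA standard (e.g., FHWA_WHITE).
    return confirmed, recovered
```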
3.2.4. Potree-Based Web Portal Visualization
3.3. Performance Evaluation
4. Experimental Results and Discussion
4.1. Products from Image-Aided LiDAR Lane Marking Inventory Framework
4.2. Qualitative Evaluation Using Potree-Based Web Portal Visualization
4.3. Quantitative Evaluation
4.4. Discussion
5. Conclusions and Recommendations for Future Research
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Plankermann, K. Human Factors as Causes for Road Traffic Accidents in the Sultanate of Oman under Consideration of Road Construction Designs. Ph.D. Dissertation, Universität Regensburg, Regensburg, Germany, 2014. [Google Scholar]
- Chen, S.; Saeed, T.U.; Alinizzi, M.; Lavrenz, S.; Labi, S. Safety sensitivity to roadway characteristics: A comparison across highway classes. Accid. Anal. Prev. 2019, 123, 39–50. [Google Scholar] [CrossRef]
- Zegeer, C.V.; Deacon, J.A. Effect of lane width, shoulder width, and shoulder type on highway safety. State Art Rep. 1987, 6, 1–21. [Google Scholar]
- Stein, W.J.; Neuman, T.R. Mitigation Strategies for Design Exceptions; Federal Highway Administration, Office of Safety: Washington, DC, USA, 2007. [Google Scholar]
- FHWA. Manual on Uniform Traffic Control Devices 2009; US Department of Transportation, Federal Highway Administration: Washington, DC, USA, 2009. [Google Scholar]
- Highway Safety Improvement Program Manual. 2011. Available online: https://safety.fhwa.dot.gov/hsip/resources/fhwasa09029/sec3.cfm (accessed on 6 April 2024).
- Sebsadji, Y.; Tarel, J.-P.; Foucher, P.; Charbonnier, P. Robust road marking extraction in urban environments using stereo images. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IV), La Jolla, CA, USA, 21–24 June 2010; pp. 394–400. [Google Scholar]
- Foucher, P.; Sebsadji, Y.; Tarel, J.P.; Charbonnier, P.; Nicolle, P. Detection and recognition of urban road markings using images. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; IEEE: Piscataway, NJ, USA; pp. 1747–1752. [Google Scholar]
- Jung, S.; Youn, J.; Sull, S. Efficient lane detection based on spatiotemporal images. IEEE Trans. Intell. Transp. Syst. 2015, 17, 289–295. [Google Scholar] [CrossRef]
- Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2015, 42, 1816–1824. [Google Scholar] [CrossRef]
- Xu, S.; Wang, J.; Wu, P.; Shou, W.; Wang, X.; Chen, M. Vision-based pavement marking detection and condition assessment—A case study. Appl. Sci. 2021, 11, 3152. [Google Scholar] [CrossRef]
- Chen, X.; Kohlmeyer, B.; Stroila, M.; Alwar, N.; Wang, R.; Bach, J. Next generation map making: Geo-referenced ground-level LIDAR point clouds for automatic retro-reflective road feature extraction. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 4–6 November 2009; pp. 488–491. [Google Scholar]
- Guan, H.; Li, J.; Yu, Y.; Wang, C.; Chapman, M.; Yang, B. Using mobile laser scanning data for automated extraction of road markings. ISPRS J. Photogramm. Remote Sens. 2014, 87, 93–107. [Google Scholar] [CrossRef]
- Cheng, M.; Zhang, H.; Wang, C.; Li, J. Extraction and classification of road markings using mobile laser scanning point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 1182–1196. [Google Scholar] [CrossRef]
- Yu, Y.; Li, J.; Guan, H.; Jia, F.; Wang, C. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 709–726. [Google Scholar] [CrossRef]
- Yan, L.; Liu, H.; Tan, J.; Li, Z.; Xie, H.; Chen, C. Scan line based road marking extraction from mobile LiDAR point clouds. Sensors 2016, 16, 903. [Google Scholar] [CrossRef]
- Huang, A.S.; Moore, D.; Antone, M.; Olson, E.; Teller, S. Finding multiple lanes in urban road networks with vision and lidar. Auton. Robot. 2009, 26, 103–122. [Google Scholar] [CrossRef]
- Li, Q.; Chen, L.; Li, M.; Shaw, S.L.; Nüchter, A. A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios. IEEE Trans. Veh. Technol. 2013, 63, 540–555. [Google Scholar] [CrossRef]
- Shin, S.; Shim, I.; Kweon, I.S. Combinatorial approach for lane detection using image and LIDAR reflectance. In Proceedings of the 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Goyang, Republic of Korea, 28–30 October 2015; IEEE: Piscataway, NJ, USA. [Google Scholar]
- Gu, X.; Zang, A.; Huang, X.; Tokuta, A.; Chen, X. Fusion of color images and LiDAR data for lane classification. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 3–6 November 2015; pp. 1–4. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Bai, M.; Mattyus, G.; Homayounfar, N.; Wang, S.; Lakshmikanth, S.K.; Urtasun, R. Deep multi-sensor lane detection. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA; pp. 3102–3109. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Pavement Marking Management System—FHWA Office of Safety. 1999. Available online: https://safety.fhwa.dot.gov/roadway_dept/night_visib/pavement_visib/pmms/docs/ref_manual.pdf (accessed on 22 July 2022).
- Pavement Marking Inventory. 2022. Available online: https://solutions.arcgis.com/local-government/help/pavement-marking-inventory/ (accessed on 28 July 2022).
- Budzyński, M.; Kustra, W.; Okraszewska, R.; Jamroz, K.; Pyrchla, J. The use of GIS tools for road infrastructure safety management. E3S Web Conf. 2018, 26, 00009. [Google Scholar] [CrossRef]
- Velodyne LiDAR. HDL-32E User Manual and Programming Guide. 2012. Available online: https://s3.us-east-2.amazonaws.com/nclt.perl.engin.umich.edu/manuals/HDL-32E_manual.pdf (accessed on 1 January 2021).
- Velodyne LiDAR. VLP-16 User Manual and Programming Guide. 2015. Available online: https://usermanual.wiki/Pdf/VLP1620User20Manual20and20Programming20Guide2063924320Rev20A.1947942715/view (accessed on 1 January 2021).
- Applanix. POSLV Specifications. 2015. Available online: https://www.applanix.com/downloads/products/specs/POS-LV-Datasheet.pdf (accessed on 1 January 2021).
- Habib, A.; Lay, J.; Wong, C. Specifications for the Quality Assurance and Quality Control of Lidar Systems; Base Mapping and Geomatic Services of British Columbia: Victoria, BC, Canada, 2006. [Google Scholar]
- Kuçak, R.A.; Erol, S.; Erol, B. The strip adjustment of mobile LiDAR point clouds using iterative closest point (ICP) algorithm. Arab. J. Geosci. 2022, 15, 1017. [Google Scholar] [CrossRef]
- Ravi, R.; Lin, Y.-J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Bias impact analysis and calibration of terrestrial mobile LiDAR system with several spinning multibeam laser scanners. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5261–5275. [Google Scholar] [CrossRef]
- Ravi, R.; Lin, Y.J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Simultaneous system calibration of a multi-lidar multicamera mobile mapping platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1694–1714. [Google Scholar] [CrossRef]
- Romano, J.D.; Le, T.T.; Fu, W.; Moore, J.H. Is deep learning necessary for simple classification tasks? arXiv 2020, arXiv:2006.06730. [Google Scholar]
- Cheng, Y.-T.; Lin, Y.-C.; Habib, A. Generalized LiDAR Intensity Normalization and Its Positive Impact on Geometric and Learning-Based Lane Marking Detection. Remote Sens. 2022, 14, 4393. [Google Scholar] [CrossRef]
- Revelles, J.; Urena, C.; Lastra, M. An Efficient Parametric Algorithm for Octree Traversal. 2000. Available online: http://wscg.zcu.cz/wscg2000/Papers_2000/X31.pdf (accessed on 21 February 2024).
- Lin, Y.-C.; Habib, A. Quality control and crop characterization framework for multi-temporal UAV LiDAR data over mechanized agricultural fields. Remote Sens. Environ. 2021, 256, 112299. [Google Scholar] [CrossRef]
- AASHTO. A Policy on Geometric Design of Highways and Streets; American Association of State Highway and Transportation Officials: Washington, DC, USA, 2018. [Google Scholar]
- Lari, Z.; Habib, A. New approaches for estimating the local point density and its impact on lidar data segmentation. Photogramm. Eng. Remote Sens. 2013, 79, 195–207. [Google Scholar] [CrossRef]
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. KDD 1996, 96, 226–231. [Google Scholar]
- Foedisch, M.; Takeuchi, A. Adaptive real-time road detection using neural networks. In Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 3–6 October 2004; IEEE: Piscataway, NJ, USA, 2004. [Google Scholar]
- Glaser, S.; Mammar, S.; Sentouh, C. Integrated driver–vehicle–infrastructure road departure warning unit. IEEE Trans. Veh. Technol. 2010, 59, 2757–2771. [Google Scholar] [CrossRef]
- Wang, Q.; Wei, Z.; Wang, J.; Chen, W.; Wang, N. Curve recognition algorithm based on edge point curvature voting. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2020, 234, 1006–1019. [Google Scholar] [CrossRef]
- Yang, Q.; Ma, Y.; Li, L.; Gao, Y.; Tao, J.; Huang, Z.; Jiang, R. A fast vanishing point detection method based on row space features suitable for real driving scenarios. Sci. Rep. 2023, 13, 3088. [Google Scholar] [CrossRef]
- Rosenfeld, A.; Pfaltz, J. Sequential operations in digital image processing. J. ACM 1966, 13, 471–494. [Google Scholar] [CrossRef]
- De Boor, C. A Practical Guide to Splines; Springer: New York, NY, USA, 1978; Volume 27. [Google Scholar]
- D’Errico, J. Distance2curve. 2023. Available online: https://www.mathworks.com/matlabcentral/fileexchange/34869-distance2curve (accessed on 27 October 2022).
- Schütz, M. Potree: Rendering Large Point Clouds in Web Browsers; Vienna University of Technology: Vienna, Austria, 2016. [Google Scholar]
- Jiang, Y. Durability and Retro-Reflectivity of Pavement Markings (Synthesis Study); Indiana Department of Transportation and Purdue University: West Lafayette, IN, USA, 2008. [Google Scholar]
Sensor | Pros | Cons
---|---|---
Camera | High-resolution color/texture information at low cost | Sensitive to illumination/weather conditions; no direct 3D information
LiDAR scanner | Direct 3D coordinates; intensity measurements insensitive to ambient lighting | No color information; sparser spatial resolution than imagery
Highway | Date | Average Driving Speed (mph) | Imagery Data (# of Images) | LiDAR Data (# of Points)
---|---|---|---|---
I-465 | 11 July 2023 | inner loop: 50.6; outer loop: 50.4 | 22,428 ¹ | 42,000 million
Camera | Image Size (Pixels) (# of Columns × # of Rows) | Near ROI (Pixel Rows) (10 m Boundary to Bottom Row) | Distant ROI (Pixel Rows) (25 m Boundary to 10 m Boundary)
---|---|---|---
front-left camera | 3376 × 2704 | 1800–2704 | 1450–1800
front-right camera | 3376 × 2704 | 1800–2704 | 1450–1800
rear camera | 3376 × 2704 | 1600–2704 | 1250–1600
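As an illustration, the ROIs in the table above amount to simple row slices on each image. The sketch below is hypothetical (the dictionary `ROI_ROWS` and function `crop_rois` are not from the paper), and treats the table's row indices directly as 0-based slice bounds for simplicity:

```python
import numpy as np

# Image arrays are (rows, cols, 3); row ranges follow the ROI table above.
ROI_ROWS = {
    "front": {"near": (1800, 2704), "distant": (1450, 1800)},
    "rear":  {"near": (1600, 2704), "distant": (1250, 1600)},
}

def crop_rois(img: np.ndarray, camera: str):
    """Return the near (10 m boundary to bottom row) and distant
    (25 m to 10 m boundary) ROIs for the given camera group."""
    near_r = ROI_ROWS[camera]["near"]
    dist_r = ROI_ROWS[camera]["distant"]
    near = img[near_r[0]:near_r[1], :]
    distant = img[dist_r[0]:dist_r[1], :]
    return near, distant
```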
Threshold/Parameter | Strategy (Section) | Value
---|---|---
Length of road surface blocks | LiDAR-based lane marking extraction (3.2.1) | 12.8 m
Width of road surface blocks | LiDAR-based lane marking extraction (3.2.1) | 18 m
Percentile intensity threshold for lane marking extraction from point clouds | LiDAR-based lane marking extraction (3.2.1) | top 95th percentile
Length threshold for scan line-based outlier removal | LiDAR-based lane marking extraction (3.2.1) | 25 cm
Percentile Y value threshold for lane marking extraction from images | Image-based lane marking extraction (3.2.2) | top 97th percentile
Percentile Cb value threshold for lane marking extraction from images | Image-based lane marking extraction (3.2.2) | lowest 3rd percentile
Length for dividing a segment in an image | Image-based lane marking extraction (3.2.2) | 100 pixels
Number of pixels for creating buffers around a segment in an image | Image-based lane marking extraction (3.2.2) | 20 pixels
Y/Cb value threshold for contrast-based outlier removal in an image | Image-based lane marking extraction (3.2.2) | 5 Y/Cb values
Lane marking width threshold for 3D width-based outlier removal in an image | Image-based lane marking extraction (3.2.2) | 15 cm
Distance threshold (Th_aid) for determining the correspondence between image-based and LiDAR-based lane markings, as well as extracting lane markings in point clouds using image-based results | Image-aided LiDAR lane marking extraction (3.2.3) | 20 cm
Distance threshold for grouping splines | Image-aided LiDAR lane marking classification (3.2.3) | 75 cm
Gap threshold for avoiding linear interpolation on curved roads | Image-aided LiDAR lane marking characterization (3.2.3) | 40 m
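For reference, the thresholds above can be collected into a single configuration object. The sketch below uses illustrative field names that are assumptions, not identifiers from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LaneMarkingParams:
    """Thresholds from the table above; field names are illustrative."""
    block_length_m: float = 12.8          # road surface block length (3.2.1)
    block_width_m: float = 18.0           # road surface block width (3.2.1)
    intensity_percentile: float = 95.0    # LiDAR intensity threshold (3.2.1)
    scanline_outlier_len_m: float = 0.25  # scan line-based outlier removal (3.2.1)
    y_percentile: float = 97.0            # image Y-channel threshold, top (3.2.2)
    cb_percentile: float = 3.0            # image Cb-channel threshold, lowest (3.2.2)
    segment_len_px: int = 100             # segment division length (3.2.2)
    buffer_px: int = 20                   # buffer around segments (3.2.2)
    contrast_delta: int = 5               # Y/Cb contrast-based outlier removal (3.2.2)
    width_outlier_m: float = 0.15         # 3D width-based outlier removal (3.2.2)
    th_aid_m: float = 0.20                # image/LiDAR correspondence, Th_aid (3.2.3)
    spline_group_m: float = 0.75          # spline grouping distance (3.2.3)
    interp_gap_m: float = 40.0            # gap threshold on curved roads (3.2.3)
```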
Approach | Time Taken (Seconds) for One-Mile-Long Lane Marking Extraction | Platform
---|---|---
LiDAR based | ~450 ¹ | 32 GB RAM computer
Image based | ~5070 ² | 12.7 GB RAM (GPU) Google Colaboratory
Image-aided LiDAR | ~5970 ³ | 32 GB RAM computer and 12.7 GB RAM (GPU) Google Colaboratory
Lane Marking Extraction | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---
LiDAR based | 93.2 | 87.6 | 90.3
Image based | 88.5 | 69.4 | 77.8
Image-aided LiDAR | 93.4 | 91.6 | 92.5
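The F1-scores in the table are the harmonic means of the corresponding precision and recall values; the following snippet verifies the reported numbers:

```python
def f1_score(precision_pct: float, recall_pct: float) -> float:
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

# Consistency check against the table above.
assert round(f1_score(93.2, 87.6), 1) == 90.3   # LiDAR based
assert round(f1_score(88.5, 69.4), 1) == 77.8   # image based
assert round(f1_score(93.4, 91.6), 1) == 92.5   # image-aided LiDAR
```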