A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment
Abstract
1. Introduction
2. Detecting Road Curbs and Dynamic Obstacles
2.1. Ground Segmentation
2.1.1. Change in Radius between Neighboring Points in One Spin
2.1.2. Detecting Broken Lines
2.1.3. Tangential Angle
2.1.4. Local Height Difference
2.1.5. Gradient in the Radial Direction with a Dynamic Threshold
2.1.6. Determining the Threshold of the Features
Threshold | Value |
---|---|
δ1 for the feature in Section 2.1.1 | 0.05 |
δ2 for the feature in Section 2.1.3 | 0.6 |
δ3 for the feature in Section 2.1.4 | 10 |
2.2. Obstacle Clustering
2.3. Calculating Obstacle Cluster Shape Characteristics
- (1) The center position of the cluster is calculated as follows: $\mu = \frac{1}{N}\sum_{i=1}^{N} p_i$, where $p_i$ is the $i$-th of the cluster's $N$ points.
- (2) The covariance matrix in the x–y plane is calculated as follows: $\Sigma = \frac{1}{N}\sum_{i=1}^{N} (p_i - \mu)(p_i - \mu)^{T}$.
- (3) The eigenvalues and eigenvectors of $\Sigma$ are calculated and saved in matrices $P$ and $X$, respectively, so that $\Sigma X = X P$ (see the sketch after this list).
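A minimal NumPy sketch of the three steps above (the function and variable names `cluster_shape`, `points`, `mu`, and `sigma` are ours, not the paper's):

```python
import numpy as np

def cluster_shape(points):
    """Shape characteristics of one obstacle cluster.

    points: (N, 2) array of the cluster's point coordinates in the
    x-y plane. Returns the center mu, covariance sigma, and the
    eigenvalue/eigenvector matrices P and X of Section 2.3.
    """
    # (1) Center position: the mean of the cluster's points.
    mu = points.mean(axis=0)

    # (2) Covariance matrix in the x-y plane.
    centered = points - mu
    sigma = centered.T @ centered / len(points)

    # (3) Eigen-decomposition: P holds the eigenvalues on its diagonal,
    # the columns of X are the eigenvectors, so sigma @ X == X @ P.
    eigvals, X = np.linalg.eigh(sigma)  # eigh: sigma is symmetric
    P = np.diag(eigvals)
    return mu, sigma, P, X
```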
2.4. Detecting Road Curbs
- (1) As a key step, binarization is conducted on the obstacle map: obstacle pixels are marked as 0, whereas all other pixels are set to 255. This step is followed by smoothing filtering, in which each pixel's intensity encodes the likelihood that it belongs to the road surface. The logic of the filter is that the farther a pixel lies from any obstacle pixel, the more likely it is to be road surface. In the map, the intensity difference between a pixel and each of its eight neighbors must not exceed 2. To satisfy this restriction, intensity difference threshold filtering is performed from the top left corner to the bottom right corner and then from the bottom right corner to the top left corner (a sketch of this two-pass filter follows this list). To illustrate this step, a pixel and its eight neighboring pixels are shown in Figure 5.
- (2) A threshold filter is applied to the smoothed map to identify the road area. The threshold is set to 20 in this work: a pixel with a value larger than 20 is set to 0; otherwise, it is set to 255.
- (3) A neighboring road pixel search is employed to extract the road area (see the sketch after this list). The search starts from (256, 100), the position of the vehicle in the map, and examines the eight neighboring pixels. A pixel is designated as a road surface pixel only if all of its 24 neighbors in a 5 × 5 grid are 0. Once a pixel is designated as a road surface pixel, it is added to the road surface area and the search continues iteratively through its eight neighbors. The drivable area is obtained when the iteration terminates. To accelerate the process, the search is restricted to a rectangular area in the vehicle's forward direction.
- (4) A road curb is identified by searching the boundary of the road area, as described in detail in our previous work [42], and stored in the pixel set C. Finally, a least squares fit is applied to C to form a quadratic curve.
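Steps (1) and (2) can be read as a chamfer-style two-pass propagation followed by a threshold; a sketch under that reading (function and variable names are ours, and the propagation rule is our interpretation of the neighbor-difference constraint):

```python
import numpy as np

def smooth_and_threshold(binary_map, threshold=20):
    """Steps (1)-(2): two-pass smoothing filter plus threshold filter.

    binary_map: 2D array, 0 = obstacle pixel, 255 = other (step (1)).
    The two passes enforce that no pixel differs from any of its eight
    neighbors by more than 2, so intensity grows with distance from the
    nearest obstacle, i.e. with the likelihood of road surface.
    """
    img = binary_map.astype(np.int32)
    h, w = img.shape
    # Forward pass (top left to bottom right): relax each pixel against
    # the four neighbors already visited in this scan order.
    for r in range(h):
        for c in range(w):
            for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    img[r, c] = min(img[r, c], img[rr, cc] + 2)
    # Backward pass (bottom right to top left): mirrored neighbors.
    for r in range(h - 1, -1, -1):
        for c in range(w - 1, -1, -1):
            for dr, dc in ((1, 1), (1, 0), (1, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    img[r, c] = min(img[r, c], img[rr, cc] + 2)
    # Step (2): intensities above the threshold (far from obstacles)
    # become road candidates (0); everything else becomes 255.
    return np.where(img > threshold, 0, 255).astype(np.uint8)
```

Step (3) is then a breadth-first region growing over the thresholded map; a sketch under the same assumptions (the rectangular search restriction is omitted for brevity):

```python
from collections import deque
import numpy as np

def grow_road_region(road_map, seed=(256, 100)):
    """Step (3): region growing from the vehicle position (the seed).

    road_map: output of smooth_and_threshold, 0 = road candidate.
    A pixel joins the road surface only if its whole 5x5 window is 0;
    the search then continues through its eight neighbors.
    """
    h, w = road_map.shape
    road = np.zeros((h, w), dtype=bool)
    visited = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        # Reject pixels whose 5x5 window leaves the map or contains a
        # non-road pixel (any nonzero value).
        if r < 2 or r >= h - 2 or c < 2 or c >= w - 2:
            continue
        if road_map[r - 2:r + 3, c - 2:c + 3].any():
            continue
        road[r, c] = True
        for dr in (-1, 0, 1):          # expand through the 8 neighbors
            for dc in (-1, 0, 1):
                if (dr or dc) and not visited[r + dr, c + dc]:
                    visited[r + dr, c + dc] = True
                    queue.append((r + dr, c + dc))
    return road
```

The quadratic fit of step (4) is a one-call least squares problem once the boundary pixel set C has been extracted as in [42]; parameterizing x as a function of y is our assumption, not stated in the paper:

```python
def fit_curb_curve(C):
    """Step (4): least-squares quadratic fit x = a*y^2 + b*y + c."""
    x, y = C[:, 0], C[:, 1]
    return np.polyfit(y, x, deg=2)  # returns coefficients a, b, c
```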
3. Tracking Road Curbs and Dynamic Obstacles
3.1. Tracking Road Curbs
3.1.1. Predicting Road Curbs
- $C = \{c_1, \ldots, c_N\}$: pixel set that consists of a road curb,
- $c_i = (x_i, y_i)$: a curb pixel,
- $\theta$: rotation of the vehicle,
- $(x_v, y_v)$: x and y coordinates of the vehicle in the map,
- $(t_x, t_y)$: translation in the x and y directions.
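With this notation, the curb predicted for the next frame can be obtained by applying the vehicle's rigid-body motion to every pixel of $C$. A minimal NumPy sketch of that step (the function name and the exact composition of rotation and translation are our assumptions; this outline does not spell them out):

```python
import numpy as np

def predict_curb(C, theta, t):
    """Predict the curb pixel set for the next frame.

    C:     (N, 2) array of curb pixels (x_i, y_i) in the map,
    theta: rotation of the vehicle between frames (radians),
    t:     translation (t_x, t_y) in the x and y directions.
    Each pixel is moved by the rigid transform c_i' = R(theta) c_i + t.
    """
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return C @ R.T + np.asarray(t)
```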
3.1.2. Updating the Prediction of a Road Curb
3.2. Tracking Dynamic Obstacles
3.2.1. Predicting Dynamic Obstacles
3.2.2. Updating Dynamic Obstacles on a Road
4. Results and Discussion
Scene | Accurate | Minor Error | Missed Detection | False Detection |
---|---|---|---|---|
Scene 1 | 97.44% | 1.85% | 0.55% | 0.16% |
Scene 2 | 89.29% | 6.54% | 1.56% | 2.61% |
5. Conclusions/Outlook
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761.
- Andriluka, M.; Roth, S.; Schiele, B. People-tracking-by-detection and people-detection-by-tracking. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
- Yinghua, H.; Hong, W.; Bo, Z. Color-based road detection in urban traffic scenes. IEEE Trans. Intell. Transp. Syst. 2004, 5, 309–318.
- Zhao, J.; Katupitiya, J.; Ward, J. Global correlation based ground plane estimation using V-disparity image. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 529–534.
- Sivaraman, S.; Trivedi, M.M. A general active-learning framework for on-road vehicle recognition and tracking. IEEE Trans. Intell. Transp. Syst. 2010, 11, 267–276.
- Arrospide, J.; Salgado, L.; Marinas, J. HOG-like gradient-based descriptor for visual vehicle detection. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Alcala de Henares, Spain, 3–7 June 2012; pp. 223–228.
- Sun, Z.H.; Bebis, G.; Miller, R. On-road vehicle detection: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 694–711.
- Junjie, H.; Huawei, L.; Zhilin, W.; Tao, M.; Yan, S. Robust lane marking detection under different road conditions. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 1753–1758.
- Kelly, P.; O'Connor, N.E.; Smeaton, A.F. A framework for evaluating stereo-based pedestrian detection techniques. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1163–1167.
- Soga, M.; Kato, T.; Ohta, M.; Ninomiya, Y. Pedestrian detection with stereo vision. In Proceedings of the 2005 21st International Conference on Data Engineering Workshops, 5–8 April 2005; p. 1200.
- Giosan, I.; Nedevschi, S.; Bota, S. Real time stereo vision based pedestrian detection using full body contours. In Proceedings of the IEEE 5th International Conference on Intelligent Computer Communication and Processing (ICCP 2009), Cluj-Napoca, Romania, 27–29 August 2009; pp. 79–86.
- Lin, Y.C.; Chan, Y.M.; Chuang, L.C.; Fu, L.C.; Huang, S.S.; Hsiao, P.Y.; Luo, M.F. Near-Infrared Based Nighttime Pedestrian Detection by Combining Multiple Features. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1549–1554.
- Cao, Y.Y.; Pranata, S.; Nishimura, H. Local Binary Pattern Features for Pedestrian Detection at Night/Dark Environment. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011.
- Brehar, R.; Vancea, C.; Nedevschi, S. Pedestrian Detection in Infrared Images Using Aggregated Channel Features. In Proceedings of the 2014 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 4–6 September 2014; pp. 127–132.
- Ge, J.F.; Luo, Y.P.; Tei, G.M. Real-Time Pedestrian Detection and Tracking at Nighttime for Driver-Assistance Systems. IEEE Trans. Intell. Transp. Syst. 2009, 10, 283–298.
- O'Malley, R.; Jones, E.; Glavin, M. Detection of pedestrians in far-infrared automotive night vision using region-growing and clothing distortion compensation. Infrared Phys. Technol. 2010, 53, 439–449.
- Fardi, B.; Weigel, H.; Wanielik, G.; Takagi, K. Road border recognition using FIR images and LIDAR signal processing. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 1278–1283.
- Shin, Y.; Jung, C.; Chung, W. Drivable Road Region Detection Using a Single Laser Range Finder for Outdoor Patrol Robots. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IV), San Diego, CA, USA, 21–24 June 2010; pp. 877–882.
- Wijesoma, W.S.; Kodagoda, K.R.S.; Balasuriya, A.P. Road-boundary detection and tracking using ladar sensing. IEEE Trans. Robot. Autom. 2004, 20, 456–464.
- Jaehyun, H.; Dongchul, K.; Minchae, L.; Myoungho, S. Enhanced road boundary and obstacle detection using a downward-looking LIDAR sensor. IEEE Trans. Veh. Technol. 2012, 61, 971–985.
- Kodagoda, K.R.S.; Ge, S.S.; Wijesoma, W.S.; Balasuriya, A.P. IMMPDAF approach for road-boundary tracking. IEEE Trans. Veh. Technol. 2007, 56, 478–486.
- Liu, Z.; Wang, J.L.; Liu, D.X. A New Curb Detection Method for Unmanned Ground Vehicles Using 2D Sequential Laser Data. Sensors 2013, 13, 1102–1120.
- Yao, W.; Hinz, S.; Stilla, U. Automatic vehicle extraction from airborne LiDAR data of urban areas aided by geodesic morphology. Pattern Recognit. Lett. 2010, 31, 1100–1108.
- Yao, W.; Hinz, S.; Stilla, U. Extraction and motion estimation of vehicles in single-pass airborne LiDAR data towards urban traffic analysis. ISPRS J. Photogramm. Remote Sens. 2011, 66, 260–271.
- Borcs, A.; Benedek, C. Extraction of Vehicle Groups in Airborne Lidar Point Clouds With Two-Level Point Processes. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1475–1489.
- Zhang, K.Q.; Chen, S.C.; Whitman, D.; Shyu, M.L.; Yan, J.H.; Zhang, C.C. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 872–882.
- Douillard, B.; Underwood, J.; Kuntz, N.; Vlaskine, V.; Quadros, A.; Morton, P.; Frenkel, A. On the segmentation of 3D LIDAR point clouds. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2798–2805.
- Pascoal, R.; Santos, V.; Premebida, C.; Nunes, U. Simultaneous Segmentation and Superquadrics Fitting in Laser-Range Data. IEEE Trans. Veh. Technol. 2015, 64, 441–452.
- Vaskevicius, N.; Birk, A.; Pathak, K.; Schwertfeger, S. Efficient Representation in Three-Dimensional Environment Modeling for Planetary Robotic Exploration. Adv. Robot. 2010, 24, 1169–1197.
- Petrovskaya, A.; Thrun, S. Model based vehicle detection and tracking for autonomous urban driving. Auton. Robot. 2009, 26, 123–139.
- Azim, A.; Aycard, O. Detection, Classification and Tracking of Moving Objects in a 3D Environment. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Alcala de Henares, Spain, 3–7 June 2012; pp. 802–807.
- Chen, T.T.; Dai, B.; Wang, R.L.; Liu, D.X. Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles. J. Intell. Robot. Syst. 2014, 76, 563–582.
- von Hundelshausen, F.; Himmelsbach, M.; Hecker, F.; Mueller, A.; Wuensche, H.-J. Driving with tentacles: Integral structures for sensing and motion. J. Field Robot. 2008, 25, 640–673.
- Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466.
- Kammel, S.; Ziegler, J.; Pitzer, B.; Werling, M.; Gindele, T.; Jagzent, D.; Schroder, J.; Thuy, M.; Goebl, M.; von Hundelshausen, F.; et al. Team AnnieWAY's autonomous system for the 2007 DARPA Urban Challenge. J. Field Robot. 2008, 25, 615–639.
- Moosmann, F.; Pink, O.; Stiller, C. Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China, 3–5 June 2009; pp. 215–220.
- Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnke, B. Junior: The Stanford entry in the Urban Challenge. J. Field Robot. 2008, 25, 569–597.
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
- Bohren, J.; Foote, T.; Keller, J.; Kushleyev, A.; Lee, D.; Stewart, A.; Vernaza, P.; Derenick, J.; Spletzer, J.; Satterfield, B. Little Ben: The Ben Franklin Racing Team's entry in the 2007 DARPA Urban Challenge. J. Field Robot. 2008, 25, 598–614.
- Zhao, G.; Yuan, J. Curb detection and tracking using 3D-LIDAR scanner. In Proceedings of the 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 437–440.
- Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Contr. Theory Appl. 2010, 4, 1303–1318.
- Jian, L.; Huawei, L.; Zhiling, W. A framework for detecting road curb on-line under various road conditions. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bali, Indonesia, 5–10 December 2014; pp. 297–302.
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1987, 1, 321–331.
- Williams, D.J.; Shah, M. A fast algorithm for active contours and curvature estimation. CVGIP Image Underst. 1992, 55, 14–26.
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).