A Novel Method to Enable the Awareness Ability of Non-V2V-Equipped Vehicles in Vehicular Networks
Abstract
1. Introduction
2. Related Work
2.1. Laser Data Point Clustering
2.2. Object Representation
3. Method
3.1. System Architecture
3.1.1. Introduction to System Hardware
3.1.2. Coordinate System
3.1.3. Vehicle Localization
3.1.4. Calibration between LiDAR and Camera
- Data acquisition: Place one or more triangular planar boards in front of the camera and the 3D LiDAR. Capture camera images and record the LiDAR's 3D point clouds for various board locations. To reduce LiDAR measurement errors and to make the board's vertices easy to detect in the image, a bright monochromatic board color is recommended; the color should also be distinctive against the background, and the board must be large enough that multiple LiDAR scan lines fall on its surface.
- Matching 2D-3D point correspondences: Detect the vertices of the triangular board in the images, and identify their corresponding 3D points in the laser scans by estimating where two adjacent sides of the board meet.
- Estimating the calibration parameters: Using the corresponding 2D-3D pairs, solve the linear equations for an initial estimate of the LiDAR-camera transform, then refine it to obtain the final estimate (a code sketch follows this list).
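As an illustration of the final step, here is a minimal sketch that stands in OpenCV's PnP solver for the linear-plus-refinement estimation described above; the correspondence arrays, intrinsic matrix `K`, and all numeric values are hypothetical placeholders, not values from the paper.

```python
import numpy as np
import cv2

# Hypothetical 2D-3D correspondences from the board-vertex matching step:
# triangle vertices in the LiDAR frame (meters) and in the image (pixels).
points_3d = np.array([[2.10, -0.52, 0.31], [2.08, 0.47, 0.35],
                      [2.12, -0.03, 1.02], [3.05, -0.60, 0.28],
                      [3.02, 0.55, 0.30], [3.07, -0.02, 1.05]])
points_2d = np.array([[412.0, 305.0], [238.0, 301.0], [325.0, 180.0],
                      [398.0, 322.0], [262.0, 318.0], [331.0, 228.0]])

# Assumed pinhole intrinsics from a prior camera calibration; no distortion.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Linear initial estimate (EPnP), then iterative refinement, mirroring the
# "initial estimate, then refine" structure of the step above.
ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
rvec, tvec = cv2.solvePnPRefineLM(points_3d, points_2d, K, dist, rvec, tvec)

R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation, LiDAR frame -> camera frame
print("R =\n", R, "\nt =", tvec.ravel())
```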
3.2. Software Architecture
3.2.1. Get Location Information of Surrounding Vehicles
- Cluster the LiDAR points and discard clusters containing too few points, which yields the cluster list.
- Compute the convex hull of each remaining cluster.
- Fit the rectangle that represents the non-V2V-equipped vehicle (see the sketch below).
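A minimal sketch of these three steps, assuming the point cloud has already been projected onto the ground plane; DBSCAN from scikit-learn stands in for the paper's clustering step, and OpenCV's `minAreaRect` for the rectangle fit.

```python
import numpy as np
from sklearn.cluster import DBSCAN
import cv2

def detect_vehicle_rectangles(points_xy, eps=0.7, min_points=10):
    """Cluster 2D LiDAR points and fit a min-area rectangle to each cluster.

    points_xy: (N, 2) array of LiDAR returns projected onto the ground plane.
    Returns a list of rectangles as ((cx, cy), (w, h), angle_deg).
    """
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xy)
    rectangles = []
    for label in set(labels):
        if label == -1:                    # DBSCAN noise label
            continue
        cluster = points_xy[labels == label]
        if len(cluster) < min_points:      # remove too-small clusters
            continue
        pts = cluster.astype(np.float32)
        hull = cv2.convexHull(pts)         # convex hull of the cluster
        rectangles.append(cv2.minAreaRect(hull))  # vehicle rectangle
    return rectangles

# Synthetic usage example: two compact clusters plus scattered noise.
rng = np.random.default_rng(0)
cluster_a = rng.normal([5.0, 2.0], 0.3, size=(40, 2))
cluster_b = rng.normal([12.0, -3.0], 0.3, size=(40, 2))
noise = rng.uniform(-20, 20, size=(5, 2))
points = np.vstack([cluster_a, cluster_b, noise])
for rect in detect_vehicle_rectangles(points):
    print(rect)
```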
3.2.2. Determine Whether a Vehicle Is a Non-V2V-Equipped Vehicle
- It is built on the open-source library OpenCV, so all of its code is readily available.
- It can recognize Chinese characters.
- It achieves a high recognition rate: character recognition accuracy currently exceeds 90% (a usage sketch follows this list).
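The recognizer itself is not reproduced here. As one plausible reading of the non-V2V test, the sketch below stubs the recognition call and checks the recognized plate against the set of plates announced over V2V; `recognize_plate` and `v2v_registry` are hypothetical names, not the paper's API.

```python
def recognize_plate(plate_image) -> str:
    """Placeholder for the OpenCV-based plate recognizer described above
    (reported character accuracy above 90%, Chinese characters included)."""
    raise NotImplementedError  # hypothetical: plug in the actual recognizer

def is_non_v2v_vehicle(plate_image, v2v_registry) -> bool:
    """Assumption: a detected vehicle is treated as non-V2V-equipped when
    its recognized plate is absent from the plates heard over V2V."""
    return recognize_plate(plate_image) not in v2v_registry
```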
Algorithm 1: Encryption protocol performed by the sender
Input: the receiver's pseudonym and a message M.
1: Verify the pseudonym, and compute the encryption key.
2: Choose a random number.
3: Compute the masking value.
4: Compute the ciphertext C.
5: Transmit C to the receiver.

Algorithm 2: Decryption protocol performed by the receiver
Input: the ciphertext C.
1: Recompute the masking value.
2: Retrieve the message M.
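Because the mathematical details of Algorithms 1 and 2 did not survive extraction, the sketch below substitutes a textbook ElGamal-style scheme with the same choose-random, mask, transmit, unmask shape; it is a toy illustration with insecure parameters, not the paper's pseudonym-based protocol (cf. the PACP reference).

```python
import secrets

p = 2**127 - 1  # a known Mersenne prime; toy-sized, illustration only
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1       # receiver's private key
    return x, pow(g, x, p)                 # (private key, public key)

def encrypt(pub, m):
    """Algorithm 1 analogue: choose r randomly, derive a mask from the
    receiver's public key, send the ciphertext pair C = (g^r, m * pub^r)."""
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m * pow(pub, r, p)) % p

def decrypt(priv, C):
    """Algorithm 2 analogue: recompute the mask from C and retrieve M."""
    c1, c2 = C
    mask = pow(c1, priv, p)
    return (c2 * pow(mask, p - 2, p)) % p  # mask^(p-2) == mask^-1 (p prime)

x, y = keygen()
M = int.from_bytes(b"brake warning", "big")  # message M as an integer < p
assert decrypt(x, encrypt(y, M)) == M
```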
3.2.3. The Strategy for Receiving Safety Messages
4. Experiments and Simulation Results
4.1. Method Feasibility Simulation
4.2. Method Reliability Experiment
4.2.1. Forward Collision Warning Experiment
4.2.2. Collision Warning at Crossroads Experiment
5. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Pendleton, S.; Andersen, H.; Du, X.; Shen, X.; Meghjani, M.; Eng, Y.; Rus, D.; Ang, M. Perception, planning, control, and coordination for autonomous vehicles. Machines 2017, 5, 6.
- Guerrero-Ibanez, J.A.; Zeadally, S.; Contreras-Castillo, J. Integration challenges of intelligent transportation systems with connected vehicle, cloud computing, and internet of things technologies. IEEE Wirel. Commun. 2015, 22, 122–128.
- Talebpour, A.; Mahmassani, H.S. Influence of connected and autonomous vehicles on traffic flow stability and throughput. Transp. Res. Part C Emerg. Technol. 2016, 71, 143–163.
- Kiess, W.; Rybicki, J.; Mauve, M. On the nature of Inter-Vehicle Communication. In Proceedings of the Communication in Distributed Systems, Bern, Switzerland, 26 February–2 March 2007.
- Uhlemann, E. Introducing Connected Vehicles [Connected Vehicles]. IEEE Veh. Technol. Mag. 2015, 10, 23–31.
- Liu, J.; Liu, J. Intelligent and Connected Vehicles: Current Situation, Future Directions, and Challenges. IEEE Commun. Stand. Mag. 2018, 2, 59–65.
- Shladover, S.E. Connected and automated vehicle systems: Introduction and overview. J. Intell. Transp. Syst. 2018, 22, 190–200.
- Tielert, T.; Rieger, D.; Hartenstein, H.; Luz, R. Can V2X communication help electric vehicles save energy? In Proceedings of the International Conference on ITS Telecommunications, Taipei, Taiwan, 5–8 November 2012; pp. 232–237.
- Premebida, C.; Nunes, U. Segmentation and geometric primitives extraction from 2D laser range data for mobile robot applications. Robotica 2005, 2005, 17–25.
- Mertz, C.; Duggins, D.; Gowdy, J.; Kozar, J.; MacLachlan, R.; Steinfeld, A.; Suppé, A.; Thorpe, C.; Wang, C.C. Collision warning and sensor data processing in urban areas. In Proceedings of the 5th International Conference on ITS Telecommunications, Brest, France, 27–29 June 2005; Robotics Institute: Pittsburgh, PA, USA, 2005; p. 64.
- Petrovskaya, A.; Thrun, S. Model based vehicle tracking for autonomous driving in urban environments. In Proceedings of the Robotics: Science and Systems IV, Zurich, Switzerland, 25–28 June 2008; Volume 34.
- Lee, K.J. Reactive Navigation for an Outdoor Autonomous Vehicle. Master's Thesis, University of Sydney, Sydney, Australia, December 2001.
- Dietmayer, K.C.; Sparbert, J.; Streller, D. Model based object classification and object tracking in traffic scenes from range images. Available online: https://ci.nii.ac.jp/naid/10011627036/ (accessed on 11 May 2019).
- Klasing, K.; Wollherr, D.; Buss, M. A clustering method for efficient segmentation of 3D laser data. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 4043–4048.
- Blanc, C.; Trassoudaine, L.; Le Guilloux, Y.; Moreira, R. Track to track fusion method applied to road obstacle detection. In Proceedings of the International Conference on Information Fusion, Stockholm, Sweden, 28 June–1 July 2004.
- Blanc, C.; Trassoudaine, L.; Gallice, J. EKF and particle filter track-to-track fusion: A quantitative comparison from radar/lidar obstacle tracks. In Proceedings of the 2005 8th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 2.
- Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306.
- Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LIDAR–camera intrinsic and extrinsic calibration: Identifiability and analytical least-squares-based initialization. Int. J. Robot. Res. 2012, 31, 452–467.
- Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353.
- Dorbe, N.; Jaundalders, A.; Kadikis, R.; Nesenbergs, K. FCN and LSTM Based Computer Vision System for Recognition of Vehicle Type, License Plate Number, and Registration Country. Autom. Control Comput. Sci. 2018, 52, 146–154.
- Khan, M.A.; Sharif, M.; Javed, M.Y.; Akram, T.; Yasmin, M.; Saba, T. License number plate recognition system using entropy-based features selection approach with SVM. IET Image Process. 2017, 12, 200–209.
- Negri, P. A MATLAB SMO Implementation to Train a SVM Classifier: Application to Multi-Style License Plate Numbers Recognition. Image Process. Line 2018, 8, 51–70.
- Saghaei, H. Proposal for Automatic License and Number Plate Recognition System for Vehicle Identification. arXiv 2016, arXiv:1610.03341.
- Huang, D.; Misra, S.; Verma, M.; Xue, G. PACP: An Efficient Pseudonymous Authentication-Based Conditional Privacy Protocol for VANETs. IEEE Trans. Intell. Transp. Syst. 2011, 12, 736–746.
Parameter | Meaning
---|---
 | V2V-equipped vehicles
 | visual detection radius of a V2V-equipped vehicle
 | communication radius of a V2V-equipped vehicle
 | total number of V2V-equipped vehicles in the V2V system
 | number of vehicles within the visual detection radius of a V2V-equipped vehicle
 | number of V2V-equipped vehicles within that visual detection radius
 | number of non-V2V-equipped vehicles within that visual detection radius
 | number of non-V2V-equipped vehicles detectable by the vehicle's on-board sensors
 | detection ratio of a V2V-equipped vehicle
 | number of vehicles within the communication radius of a V2V-equipped vehicle
 | number of V2V-equipped vehicles within that communication radius
 | number of non-V2V-equipped vehicles within that communication radius
 | number of non-V2V-equipped vehicles that other V2V-equipped vehicles "tell" the ego vehicle about (excluding those it has already detected itself)
 | perception ability of a single V2V-equipped vehicle (defined differently for the traditional V2V system and for the proposed system)
 | perception ability of the whole V2V system
 | proportion of V2V-equipped vehicles in the V2V system
 | vehicle density of the V2V system
Sampling Time (s) | Truth Position (m) | Test Position (m) | Truth Velocity (m/s) | Test Velocity (m/s)
---|---|---|---|---
0.1 | 42.8000 | 41.5054 | −14.3604 | −14.4186
0.6 | 35.3000 | 34.4359 | −14.2387 | −14.7245
1.1 | 27.8000 | 26.6357 | −14.0598 | −13.9693
1.6 | 20.3000 | 19.9314 | −13.7710 | −13.9774
2.1 | 12.8000 | 12.9106 | −13.2265 | −13.6934
2.6 | 5.9490 | 5.7122 | −8.5547 | −8.1722
3.1 | 2.4154 | 2.4615 | −2.4063 | −2.3768
3.2 | 2.1413 | 2.1830 | −1.3771 | −1.5846
3.3 | 2.0115 | 2.0859 | −0.3896 | −0.3336
average error | | 2.65% | | 4.95%
max error | | 4.85% | | 24%
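The error rows in this table (and the next) are consistent with relative errors measured against the ground-truth columns; since the exact formula is not stated, the following computation is an assumption.

```python
import numpy as np

def relative_errors(truth, test):
    """Mean and max relative error of test values against ground truth."""
    truth, test = np.asarray(truth, float), np.asarray(test, float)
    err = np.abs(test - truth) / np.abs(truth)
    return err.mean(), err.max()

# Position columns from the table above.
truth_pos = [42.8, 35.3, 27.8, 20.3, 12.8, 5.949, 2.4154, 2.1413, 2.0115]
test_pos = [41.5054, 34.4359, 26.6357, 19.9314, 12.9106,
            5.7122, 2.4615, 2.1830, 2.0859]
avg, mx = relative_errors(truth_pos, test_pos)
print(f"average error: {avg:.2%}, max error: {mx:.2%}")
```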
Sampling Time (s) | Truth Collision Time 1 (s) | Test Collision Time 1 (s) | Truth Collision Time 2 (s) | Test Collision Time 2 (s)
---|---|---|---|---
0.1 | 9.8096 | 10.2855 | 2.7760 | 2.0849
0.9 | 7.1614 | 7.0127 | 3.4905 | 3.6553
1.7 | 6.6381 | 6.6739 | 4.9770 | 5.4012
2.5 | 7.8600 | 7.5681 | 7.2779 | 6.5011
3.3 | 10.8155 | 10.7973 | 10.0125 | 10.0022
4.1 | 14.8072 | 14.8008 | 11.6181 | 11.4639
4.9 | 16.1760 | 16.2989 | 10.6826 | 10.8824
5.7 | 13.2747 | 12.9067 | 8.5324 | 8.5928
average error | | 1.49% | | 5.28%
max error | | 4.85% | | 23.8%
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).