Article

Traffic Sign Based Point Cloud Data Registration with Roadside LiDARs in Complex Traffic Environments

1 School of Rail Transportation, Soochow University, Suzhou 215123, China
2 Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35487, USA
3 School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215123, China
4 School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
5 Navigation College, Dalian Maritime University, Dalian 116026, China
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(10), 1559; https://doi.org/10.3390/electronics11101559
Submission received: 5 April 2022 / Revised: 1 May 2022 / Accepted: 3 May 2022 / Published: 13 May 2022
(This article belongs to the Special Issue Advancements in Cross-Disciplinary AI: Theory and Application)

Abstract

The intelligent road is an important component of the intelligent vehicle–infrastructure cooperative system, the latest development in intelligent transportation systems. As an advanced sensor, Light Detection and Ranging (LiDAR) has gradually been adopted to collect high-resolution micro-traffic data on the roadside of intelligent roads. Furthermore, fusing multiple LiDARs to extend the data collection range and improve detection accuracy has become a research hot spot. This paper focuses on point cloud registration in complex traffic environments and proposes a three-dimensional (3D) registration method based on traffic signs and prior knowledge of traffic scenes. Traffic signs, with their retroreflective films, are used as reference targets to register 3D point cloud data from roadside LiDARs. The proposed method consists of a vertical registration and a horizontal registration. For the vertical registration, we propose a plane rotation algorithm that rotates the initial point cloud to register it vertically, converting the 3D point cloud registration into a two-dimensional (2D) rigid-body transformation. For the horizontal registration, our system registers the traffic-sign point clouds from different LiDARs. The method has been verified in actual scenarios. Compared with previous methods, it is automatic and does not require manually searching for reference targets. Furthermore, it is suitable for practical engineering use and can be applied to sparse point cloud data from LiDARs with few beams, realizing point cloud registration with large disparity.

1. Introduction

Traffic safety and efficiency are central issues in the field of transportation [1,2]. An Intelligent Transportation System (ITS) effectively improves traffic safety and relieves traffic congestion [3,4,5]. Traffic information acquisition is of great importance for ITS [6,7]. Nowadays, many sensors are used to obtain road traffic information [8], such as inductive loops [9], cameras [10], millimeter-wave radars [11,12], WiFi [13], and other sensors [14,15]. Cameras are the most widely used among these sensors, whether in autonomous vehicles or roadside facilities [16]. Given the high resolution of image data, researchers have already achieved camera-based detection, classification, and tracking of traffic objects. However, image data have limitations. First, image quality depends on lighting: cameras may fail to detect accurately under poor light conditions. Second, images cannot directly provide distance information, which is a disadvantage for ITS [17].
Recently, LiDAR has gradually become a core sensor in various fields because of its high measurement accuracy, fast response speed, and strong anti-interference capability [18]. LiDAR is an active sensor that uses an array of lasers paired with detectors to measure distances to objects [19,20]. Commercial LiDARs mainly use the Time of Flight (TOF) method to obtain point cloud data, which contain each target's position (distance, azimuth, and height) [21]. The spatial resolution of the point cloud can reach centimeter level, and the time resolution can reach millisecond level. Based on the point cloud data, machine learning methods can estimate traffic objects' movement state (speed, posture). Compared with cameras, LiDARs are not affected by light conditions and can work around the clock. Compared with millimeter-wave radar, LiDAR can image accurately and form a high-precision 3D point cloud. Due to its precise ranging capability and active measurement mode, LiDAR is considered the most promising way to obtain traffic information. As production technology improves, the price of LiDAR is gradually decreasing, which makes large-scale deployment possible in the future.
LiDAR is often used to detect, identify, and track objects in various scenes. In ITS, LiDAR is mainly used in autonomous vehicles and roadside facilities. In autonomous vehicles, researchers deploy LiDAR on the top or front of the vehicle for road traffic environment perception. The authors in [22] propose an object classification method to classify dynamic objects on urban roads into cars, pedestrians, bicyclists, and backgrounds. The authors in [23] use a 2D LiDAR system to detect obstacles for autonomous vehicles. The authors in [24] propose an on-road vehicle trajectory collection system using multiple horizontal 2D LiDAR sensors with 360° coverage for lane change behavior studies.
In roadside facilities, researchers deploy LiDARs on the roadside to collect traffic information at a fixed location, sending traffic information to road users to achieve vehicle-road collaboration. The authors in [25] propose a tracking method considering reference point switches and occlusions for multi-vehicles at the intersection based on a roadside LiDAR sensor. The authors in [26] explore fundamental concepts, solution algorithms, and application guidance to detect and track pedestrians and vehicles at intersections.
In previous research, most works were based on a single LiDAR, which can detect traffic objects in simple scenes. However, when facing a large and complex scene, a single LiDAR has disadvantages. First, the detection range of one roadside LiDAR is limited, and it cannot capture complete information at a large intersection or over a long traffic flow. Second, occlusion by obstacles is a problem that a single roadside LiDAR cannot solve, because the laser beam, like any light, cannot pass around objects. In addition, the density of LiDAR point cloud data is inconsistent at different distances, which leads to recognition errors for long-distance targets. An effective solution is to deploy multiple LiDARs at different locations on the roadside, which can substantially improve the detection capability. Researchers have made many attempts in this direction [27].
The first problem is registering point clouds from different LiDARs to build a multi-LiDAR network; this is one of the key research issues. Point cloud registration aims to solve for the transformation matrix between point clouds captured from different poses, bring them into the same coordinate system, achieve accurate registration of multi-view scans, and finally obtain a complete 3D digital model of the scene. It has important applications in many engineering fields, such as reverse engineering [28], Simultaneous Localization and Mapping (SLAM) [29], image processing, and pattern recognition [30]. In reverse engineering, researchers use 3D scanners to scan a target and reconstruct it through registration. In SLAM, the LiDAR is mounted on a robot, and after the robot moves, the map is built through point cloud registration. Registration in the traffic scene differs from these problems. First, LiDAR point cloud data are sparser than 3D scanner data, so there are not enough features to reference during registration. Second, unlike SLAM, roadside LiDARs are deployed far apart, and the parallax is very large. For example, two LiDARs are often deployed at two diagonal corners of a traffic intersection, so the scanned data come from two opposite perspectives, and there is almost no overlap on a target at the intersection center.
Therefore, it is very important to find effective features in the scene to achieve point cloud registration in traffic scenes. Ref. [31] focuses on ground-point features, using an automatic ground-surface-matching optimization method to integrate the LiDAR points. The authors in [32] build a dedicated reference target for point cloud registration; however, for a large scene, such a reference target must be very large. It is therefore better to find a suitable reference target already present in the real traffic scene.
A suitable reference should have the following characteristics: First, the reference object should be large enough. Second, the reference object should be widely distributed in traffic scenes. Third, the reference object should be easily identified in the LiDAR point cloud data. After analyzing the objects in the scene, such as buildings, road infrastructure, and road boundaries, we found a very suitable reference—the traffic sign.
Traffic signs use fixed shapes and colors to provide road users with warnings and guidance, playing an important role in traffic control [33]. They have been widely used in the world since the 1970s. With the acceleration of the internationalization process, the production methods of traffic signboards in various countries have gradually become consistent. Traffic signs’ color, shape, and pattern have a fixed format and correspond to their functions. For example, triangles are used for warning signs, and circles are used for prohibition and indication signs. According to the working principle, road traffic signs are divided into reflective film signs, external auxiliary lighting signs, active light-emitting signs, and light-emitting diode (LED) variable information signs. When the ambient light is not good, the reflective film signs can produce better visual recognition and reading effects under the illumination of car lights. At present, more than 90% of road traffic signs in large, medium, and small cities are made of reflective film. Traffic signs have other obvious features: high height, bright colors, no obstructions in front, dedicated personnel for regular maintenance, etc. These characteristics make traffic signs suitable to be reference targets. Figure 1 shows a traffic sign and a Velodyne Puck 16 (VLP–16) LiDAR at the roadside.
This paper proposes a novel registration method for a multi-LiDAR point cloud. In this method, traffic signs are used as the reference target according to the characteristics of the retroreflective surface. The contributions of this paper are as follows:
  • To the best of our knowledge, this is the first work to use traffic signs as reference targets for roadside LiDAR registration.
  • An automatic 3D point cloud registration algorithm is proposed based on the traffic sign.
  • This paper realizes sparse point cloud registration with large disparity from different perspectives by utilizing common targets in actual traffic scenes.
The rest of this paper is organized as follows: Section 2 presents the related work. Section 3 describes the proposed methodology in detail. Experiments and results are presented in Section 4. Finally, we conclude the paper with a summary in Section 5.

2. Related Work

Point cloud registration is the process of aligning two or more 3D point clouds of the same scene into a common coordinate system. The authors in [34] proposed the classic Iterative Closest Point (ICP) algorithm for this task.
Although many improved algorithms have been proposed, these methods cannot be applied directly to our problem. Researchers have paid more attention to specific features when registering point cloud data from roadside LiDARs. The authors in [31] take the ground points from roadside LiDARs as features, proposing an automatic ground-surface-matching optimization method to register the LiDAR points. This method rests on several assumptions: first, each LiDAR is equipped with a Global Positioning System (GPS) receiver; second, the LiDARs are set up horizontally or with only slight inclination; third, the locations of the GPS sensor and the LiDAR are pre-measured; fourth, different LiDAR sensors scan different terrains, and their ground surfaces overlap. The authors in [35] develop a semi-automatic point registration method for roadside LiDAR data in 3D space. The authors in [32] propose a point cloud registration method with a reference target consisting of three panels covered by retroreflective material.
Detection and recognition are the most popular research issues concerning traffic signs. The most commonly used methods are based on neural networks [36,37,38]. Some researchers use LiDAR to assist in assessing the retroreflective function of signboards. The authors in [39] propose a deep learning method to detect traffic signs by fusing complementary features from registered airborne geo-referenced color images and noisy airborne LiDAR data. LiDAR can measure the intensity of the returned laser and thereby judge the reflection capability of a signboard, which is very helpful for evaluating its condition. The authors in [40] propose a methodology for automatically evaluating traffic sign retroreflectivity conditions using mobile LiDAR.

3. Methodology

In this research, two LiDARs are used to build a roadside LiDAR network. We first introduce the construction of the LiDAR network and the LiDAR point cloud data. Then, we achieve registration in the vertical direction through ground plane detection and rotation. Next, an intensity filter performs a preliminary extraction of the reference points, and a clustering method is applied to achieve precise extraction. Finally, an ICP-based registration method registers the reference object point clouds. The process of the proposed method is shown in Figure 2.

3.1. Data Collection

The sensor used in this paper is the Velodyne VLP-16, a mechanical scanning LiDAR that obtains depth information of the surrounding environment by rotating 16 laser/detector pairs. Its horizontal Field of View (FOV) is 360°, and its vertical FOV is 30°. The scanning range is 100 m. The LiDAR's horizontal angular resolution is 0.1°–0.4°, and its vertical resolution is 2°. Because the firing timing of the sensor is fixed at 55.296 μs per firing sequence, the speed of rotation changes the angular resolution of the sensor. An example calculation for 600 Revolutions Per Minute (RPM) is given in the equation below:
$$\text{Azimuth Resolution} = 600\,\frac{\text{rev}}{\text{min}} \times \frac{1}{60}\,\frac{\text{min}}{\text{s}} \times \frac{360^{\circ}}{\text{rev}} \times 55.296 \times 10^{-6}\,\frac{\text{s}}{\text{firing cycle}} \approx 0.2^{\circ}\ \text{per firing cycle} \quad (1)$$
The higher the speed, the fewer points per frame it has. The number of points in the actual test is shown in Table 1.
According to the nominal angular resolution of 0.1° and the 16 laser lines, there should be 57,600 points per frame; in reality, there are only about 44,000, because some laser beams hit the sky or objects beyond range and return no echo.
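For a quick sanity check, this arithmetic can be scripted. The following Python sketch (our illustration, not code from the paper) reproduces the azimuth resolution and the theoretical per-frame point count for the three rotation frequencies listed in Table 1; the 55.296 μs firing-cycle constant comes from the text, and the counts are upper bounds that ignore missed returns.

FIRING_CYCLE_S = 55.296e-6   # fixed VLP-16 firing-sequence time (from the text)
N_LASERS = 16

def azimuth_resolution_deg(rpm: float) -> float:
    """Degrees swept during one firing cycle at the given rotation speed."""
    deg_per_s = rpm / 60.0 * 360.0
    return deg_per_s * FIRING_CYCLE_S

def points_per_frame(rpm: float) -> int:
    """Theoretical number of returns in one revolution (no missed echoes)."""
    firings_per_rev = 360.0 / azimuth_resolution_deg(rpm)
    return int(firings_per_rev) * N_LASERS

for hz in (5, 10, 20):                 # 5/10/20 Hz = 300/600/1200 RPM
    rpm = hz * 60
    print(f"{hz} Hz: {azimuth_resolution_deg(rpm):.2f} deg/firing, "
          f"about {points_per_frame(rpm):,} points per frame")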
LiDAR is generally deployed on the top of the tripod or fixed at the facilities around the road. Common application scenarios are traffic intersections and the sides of roads. Two LiDARs are best deployed at two opposite corners to scan the traffic targets from complementary perspectives in a traffic intersection. To make full use of the LiDAR beam, the LiDAR is sometimes deployed at an angle.
LiDAR measures distance by timing the flight of the laser beam [41]. The azimuth angle of the sensor, the laser beam (line) number, and the flight distance constitute three coordinates in a polar coordinate system; after transfer to a computer, they are converted into Cartesian coordinates. In addition, the time stamp and the intensity of the retroreflected laser are reported. Intensity reflects the calibrated reflectivity of the target. Reflectivity byte values are segmented into two ranges, allowing software to distinguish diffuse reflectors (e.g., tree trunks, clothing) in the low range from retroreflectors (e.g., road signs, license plates) in the high range.
A retroreflector reflects light back to its source with minimal scattering. The VLP-16 has negligible separation between its transmitting laser and receiving detector, so retroreflecting surfaces "pop" with reflected light compared with diffuse reflectors, which tend to scatter the reflected energy. Diffuse reflectors report values from 0 to 100 for reflectivities from 0% to 100%; retroreflectors report values from 101 to 255, where 255 represents an ideal reflection.
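This two-range convention makes a first-pass separation of retroreflective surfaces trivial. The snippet below is a minimal sketch, assuming the frame is held as an (N, 4) NumPy array whose columns are x, y, z, and calibrated intensity:

import numpy as np

def split_by_reflectivity(points: np.ndarray):
    """Split VLP-16 returns at the 100/101 calibrated-reflectivity boundary."""
    intensity = points[:, 3]
    diffuse = points[intensity <= 100]   # tree trunks, clothing, asphalt, ...
    retro = points[intensity > 100]      # road signs, license plates, ...
    return diffuse, retro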

3.2. Vertical Registration

Compared with image data, LiDAR point cloud data are very sparse. For a mechanically rotating LiDAR, the road-sign points collected by one laser transmitter–receiver pair form only a straight line. As mentioned earlier, the LiDARs are set at different heights and angles, so the road-sign point clouds received by multiple LiDARs may appear as intersecting or parallel straight lines. If these data are registered directly, the correct solution cannot be obtained; the original data must be preprocessed first.
The registration of 3D point clouds computes the rotation matrix R and translation vector T that make different point clouds coincide. However, because of the different scanning angles, the points reflected by the traffic signs are not in one-to-one correspondence, so 3D registration cannot be achieved directly. After studying the road signs and analyzing point cloud data from actual scenes, we found that the projection of the reference point cloud onto the ground is the same for different LiDARs. Therefore, this paper adopts a two-step registration method to achieve 3D point cloud registration. First, the oblique point cloud shown in Figure 3 is normalized through vertical registration, the ground planes are translated to the same height, and the road-sign reference point clouds are projected onto the XY plane. Then, the projections are registered in two dimensions.
To achieve the vertical registration, this paper adopts the Random Sample Consensus (RANSAC) algorithm [42] to detect the ground plane. RANSAC is a robust model-fitting algorithm that can fit an accurate model from data containing outliers. We fit the equation of the ground plane to the point cloud and obtain the plane's normal vector.
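For readers who want to reproduce this step, a minimal sketch with Open3D's built-in RANSAC plane segmentation is shown below. It is an assumed stand-in for the paper's implementation; the file name is a placeholder, and the 10 cm distance threshold is borrowed from Section 4.1.2.

import open3d as o3d

pcd = o3d.io.read_point_cloud("roadside_frame.pcd")   # placeholder file name
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.1,  # points within 10 cm of the plane count as ground
    ransac_n=3,              # three points define each candidate plane
    num_iterations=1000,     # should exceed the theoretical k from Section 4.1.2
)
a, b, c, d = plane_model     # ground plane ax + by + cz + d = 0; (a, b, c) is its normal
ground = pcd.select_by_index(inlier_idx)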
The next step is to rotate the point cloud to level. We use Rodrigues' rotation formula, given in Equation (2), which computes the vector obtained after rotating a vector by a given angle around a rotation axis in three-dimensional space. R is the final rotation matrix, I is a 3 × 3 identity matrix, θ is the rotation angle, and K is the cross-product matrix of the unit vector of the fixed rotation axis:
$$R = I + \sin\theta\, K + (1 - \cos\theta) K^{2} \quad (2)$$
Figure 4 shows the point cloud after this correction: it has been rotated to level. Finally, we compute the height of the ground plane points and translate the whole point cloud so that the ground plane sits at height 0, making the heights of different point clouds consistent.
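Putting the two operations together, the vertical registration can be sketched in NumPy as follows. This is our illustration rather than the authors' code: it builds K from the axis n × z, applies Equation (2) to align the fitted ground normal with +z, and then shifts the ground to height 0.

import numpy as np

def level_cloud(xyz: np.ndarray, normal: np.ndarray, ground_idx) -> np.ndarray:
    n = normal / np.linalg.norm(normal)
    if n[2] < 0:
        n = -n                          # make the ground normal point upward
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(n, z)               # rotation axis n x z
    sin_t, cos_t = np.linalg.norm(axis), float(np.dot(n, z))
    if sin_t < 1e-9:
        R = np.eye(3)                   # cloud is already level
    else:
        k = axis / sin_t                # unit rotation axis
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])            # cross-product matrix
        R = np.eye(3) + sin_t * K + (1 - cos_t) * (K @ K)   # Equation (2)
    leveled = xyz @ R.T
    leveled[:, 2] -= leveled[ground_idx, 2].mean()  # ground plane height -> 0
    return leveled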

3.3. Reference Points Extraction

A traffic scene includes traffic targets such as large vehicles, small vehicles, and pedestrians, as well as objects such as buildings, trees, and traffic infrastructure. The surface materials of these objects differ greatly, and only a few achieve retroreflection, such as vehicle license plates and traffic signs. We have analyzed a large amount of real-world data: in a typical traffic-scene point cloud frame, fewer than 1% of the points have an intensity value greater than 200. This means that intensity is a very appropriate feature for distinguishing reference objects from non-reference objects.
In a previous study [32], we used a self-made reference object for registration, built from three surfaces coated with reflective material. During use, every LiDAR must scan all three surfaces, and the more points the system scans, the better the registration result. In a larger scene, the reference object must be very large to achieve fast and accurate registration, which is unfavorable for practical use. Inspired by that method, this paper still uses intensity as the feature but selects the most common retroreflective target in traffic scenes: the traffic sign. Choosing traffic signs as reference objects has many advantages over self-made references. First, traffic signs are generally large, with length and width between 1.5 and 5 m, which overcomes the size problem of self-made reference objects. Second, according to traffic-sign installation requirements, signs are deployed at a height of about 5 m above the ground, nothing obstructs them, and maintenance personnel service them regularly. Finally, the production of traffic signs is highly standardized, with the materials and retroreflective coefficients strictly regulated, so their characteristics are especially distinct. When the display color is set to intensity, we can easily find the traffic sign in Figure 5.
This part completes the automatic extraction of the reference object point cloud. First, a filter based on reflection intensity performs a coarse extraction of the target. Then, a Density-Based Spatial Clustering of Applications with Noise (DBSCAN)-based reference object detection method detects traffic signs and filters out noise [43].

3.4. Point Cloud Registration

After completing the above steps, we have corrected the point cloud data collected by the different LiDARs and extracted the 3D point clouds reflected by the road signs. To register the global point cloud data, it is only necessary to translate and rotate the reference point clouds in the XY plane, which is a 2D point cloud registration problem.
Here, the classic algorithm ICP in point cloud registration is applied. The core of the ICP algorithm is to minimize an objective function:
$$f(R, T) = \frac{1}{N_p} \sum_{i=1}^{N_p} \left\| p_t^{\,i} - \left( R \cdot p_s^{\,i} + T \right) \right\|^{2} \quad (3)$$
R and T are the rotation matrix and translation vector for transforming the source point cloud into the target point cloud, respectively. $N_p$ is the number of points in the source point cloud, and $p_t^{\,i}$ and $p_s^{\,i}$ denote the i-th pair of corresponding points in the target and source clouds.
However, we do not know the correspondences in advance. Given an initial value, we transform the source cloud with the initial rotation and translation and compare the transformed cloud with the target cloud: whenever the distance between two points in the two clouds is below a threshold, we treat them as a corresponding pair. This is the origin of the term "nearest neighbor".
With the corresponding points in hand, we can estimate the rotation R and translation T. R and T have only six degrees of freedom, while the number of corresponding point pairs is large, so we can use least-squares methods to solve for the optimal rotation and translation.
The new R and T change the positions of some transformed points, so some nearest-point pairs change accordingly; we therefore return to the nearest-neighbor search. These steps are iterated until a termination condition is met, for example, when the change in R and T falls below a threshold, the change in the objective function falls below a threshold, or the nearest-neighbor pairs no longer change.
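A minimal sketch of this loop for the 2D case used in this paper is given below. It is our illustration rather than the authors' implementation: SciPy's cKDTree supplies the nearest-neighbor search, and each iteration's least-squares update uses the standard SVD (Kabsch) closed-form solution.

import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, tgt, max_iter=100, tol=1e-4):
    """Register (N, 2) source points to (M, 2) target points; returns R, T."""
    R, T = np.eye(2), np.zeros(2)
    tree = cKDTree(tgt)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = src @ R.T + T
        dist, idx = tree.query(moved)            # nearest-neighbor pairing
        err = float((dist ** 2).mean())          # current objective value
        if abs(prev_err - err) < tol:
            break                                # objective stopped improving
        prev_err = err
        matched = tgt[idx]
        mu_s, mu_t = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_t)  # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T                          # optimal incremental rotation
        if np.linalg.det(dR) < 0:                # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, T = dR @ R, dR @ T + (mu_t - dR @ mu_s)
    return R, T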
After registering the reference points, the obtained R and T are applied to the global data to realize the registration of the global data.
In most cases, interference in LiDAR point cloud data comes from other LiDARs, vehicle-mounted or roadside, operating at the same frequency. Since only a few data frames are needed for registration, registration can be performed when no vehicles are moving in the detection space. In addition, mutual interference between multiple roadside LiDARs is very small (only 0.016% of the data may suffer interference at a distance of 50 m). If noise or interference from another device working at the same frequency as the LiDAR is a concern, the LiDAR's Phase Lock function can be used for multi-LiDAR phase control to avoid interference.

4. Experiments and Discussions

In this section, we apply the proposed method in actual scenarios to verify its feasibility; the registration process and result analysis are presented below. This paper covers two typical point cloud registration scenarios on urban roads: intersections and straight road segments. In addition, experiments have been performed for different relative distances, different roadsides, and different orientations at intersections.
We conduct experiments in two situations, an intersection and one side of a straight road, to cover the different deployment forms found in practical applications. To avoid distortion when a vehicle crosses the 0° line, the 0° line of the LiDAR is set parallel to the road direction.
Because the proposed method applies ground plane detection for dimensionality reduction, it has some limitations in practical applications:
  • The road surface in the actual scene needs to be horizontal.
  • The use of traffic signs for registration requires that the point cloud reflected by the traffic signs contains the full width of the traffic signs; otherwise, a large registration error may be caused.
Therefore, although this method can register point cloud data inclined within a certain angular range, it cannot handle larger angles. Of course, tilting the LiDAR only serves to capture more three-dimensional information about the road, and under normal circumstances, the LiDAR will not be excessively tilted.

4.1. Intersection Experiment

Deploying a network of multiple LiDARs at intersections can collect intersection traffic information from different directions. On the one hand, it can expand the detection range so that the detection capabilities in all directions tend to be consistent. On the other hand, it helps solve the impact of occlusion between vehicles. In addition, by scanning vehicle targets from different directions, we can obtain complete vehicle morphological features, which are of great significance to detection and classification.

4.1.1. Step 1

In this experiment, two LiDARs are deployed on tripods at opposite corners of the intersection. Figure 6 shows the deployment positions of the LiDARs in the actual scene and the location of the traffic sign.
In actual applications, the horizontal deployment angle of the LiDAR also needs attention. After examining a large amount of experimental data, we found that, when a traffic target passes the 0° line, its shape may be compressed or stretched laterally, as shown in Figure 7. The dividing line between red and blue is the 0° line.
Figure 7 shows a vehicle crossing the 0° line from right to left, which we define as a forward cross. As shown in Figure 7b, the length of the vehicle in the point cloud is compressed. Figure 8 and Figure 9 present the change in vehicle length when the vehicle crosses the 0° line from the two directions: the length is compressed in a forward cross and stretched in a backward cross.
This is because one LiDAR image requires the laser array to make a full revolution; while the LiDAR turns from 0° to 360°, the traffic target keeps moving for t seconds. The value of t is determined by the rotation speed in Revolutions Per Minute (RPM):
$$t = \frac{60}{\mathrm{RPM}} \quad (4)$$
Therefore, when deploying LiDARs, the 0° line should face a direction with no traffic. In this research, the 0° line is set parallel to the road boundary, as shown in Figure 10.

4.1.2. Step 2

Two parameters must be determined in advance when using the RANSAC algorithm for plane detection: the distance threshold t, which defines which points count as inliers, and the number of iterations k. The distance threshold t must be chosen according to the smoothness of the road in the actual scene: if t is too large, the result carries a large error; if it is too small, the best plane may not be found. Considering possible undulations in the road, the value of t is set to 0.1 m in this scenario.
The number of iterations k can be inferred from the distribution of the ground point cloud. When estimating the model parameters, let p be the probability that the points randomly selected from the original point cloud are all ground points; a model estimated from such a sample is likely to be useful, so p also characterizes the probability that the algorithm produces a useful result. Let w be the probability of selecting an inlier from the data set in a single draw, as shown in the following equation:
$$w = \frac{\text{number of ground points}}{\text{number of all points}} \quad (5)$$
Normally, we do not know the value of w in advance, but we can give a robust estimate. Fitting a plane model requires selecting three points: $w^3$ is the probability that all three points are inliers, $1 - w^3$ is the probability that at least one of the three is an outlier, and $(1 - w^3)^k$ is the probability that the algorithm never draws an all-inlier sample in k iterations, which equals $1 - p$. Therefore, we have
$$1 - p = (1 - w^3)^k \quad (6)$$
Based on this, we can estimate the value of k. To ensure that the correct ground plane can be calculated, the value of k should be higher than the theoretical value. The result of ground plane detection is shown in Figure 11, and the red points are detected as ground points.
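Solving Equation (6) for k gives $k = \ln(1-p)/\ln(1-w^3)$, which can be scripted directly. The sketch below is illustrative: the ground-point fraction w = 0.4 is an assumed value, and the safety factor stands in for the advice above to exceed the theoretical count.

import math

def ransac_iterations(w: float, p: float = 0.99, safety: float = 3.0) -> int:
    """Iterations needed so that P(at least one all-ground sample) >= p."""
    k = math.log(1.0 - p) / math.log(1.0 - w ** 3)
    return math.ceil(safety * k)

print(ransac_iterations(w=0.4))   # about 209 iterations with the 3x margin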
Plane detection yields the equation of the ground plane and its normal vector; with this normal vector, we can rotate the point cloud to level.
Since the LiDAR establishes the coordinate system with itself as the origin, the heights of ground points are usually negative. The vertical registration is completed by computing the mean height of the ground points and raising the cloud so that this height becomes 0.

4.1.3. Step 3

The extraction of traffic sign point cloud includes two parts: one is intensity filtering, and the other is noise removal.
To achieve accurate intensity filtering, we conducted experiments measuring the reflection intensity of road-sign points at different distances; the results are summarized in Table 2.
From Table 2, we found that, within the detection range of the LiDAR, the intensity of most points reflected by the road signs is above 200; only some edge points fall to around 190. Therefore, we set the filtering threshold to 190; the filtering result is shown in Figure 12.
After filtering, some high-intensity points remain in the data, reflected by vehicle license plates or reflective stickers on vehicle bodies. These points do not belong to traffic signs, so they are treated as noise in the traffic-sign extraction. They appear as small clusters scattered through space, and we can separate them by setting the DBSCAN clustering parameters appropriately. The parameters of DBSCAN are the cluster radius (eps) and the minimum number of points per cluster (MinPts). To cluster all the points reflected by a traffic sign, we set eps to 5 m according to the approximate size of traffic signs. MinPts is the key to distinguishing noise from the traffic-sign point cloud: based on the number of points a traffic sign reflects at the farthest distance, we set MinPts to 30. This is much larger than the number of points other high-reflectivity targets in the space can reflect, so the traffic-sign point cloud is extracted successfully. In Figure 12, the blue points come from the traffic sign, and the red ones are noise.
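With these parameters, the whole extraction step reduces to a few lines. The following sketch uses scikit-learn's DBSCAN and is our illustration, not the authors' code; points is again assumed to be an (N, 4) array of x, y, z, intensity.

import numpy as np
from sklearn.cluster import DBSCAN

def extract_traffic_signs(points, threshold=190, eps=5.0, min_pts=30):
    """Intensity filter (Table 2) followed by DBSCAN denoising."""
    candidates = points[points[:, 3] >= threshold]          # coarse extraction
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(candidates[:, :3])
    # label -1 marks noise (license plates, stickers); keep the real clusters
    return [candidates[labels == lab] for lab in set(labels) if lab != -1]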

4.1.4. Step 4

After the above steps, the problem has been reduced to registering the traffic-sign point clouds.
The ICP algorithm needs three parameters: the maximum and minimum numbers of iterations and the minimum error. Since a traffic sign contains few points, the number of iterations does not need to be large; in this experiment, the maximum is set to 100 and the minimum to 10. The registration accuracy depends on the minimum error, so we set it to 0.0001 m. After iteration, we obtain the rotation and translation matrices and apply them to the original data for global point cloud registration. The registration result is shown in Figure 13; from the bus in the middle of the figure, we can see that the points from different LiDARs fit well.

4.2. Roadside Experiment

Multi-LiDAR systems are sometimes used for vehicle queue-length detection and vehicle behavior analysis. This deployment can capture the dynamic evolution of traffic flow over a wider range, characterizing vehicle behaviors such as car-following, lane changing, and overtaking, and thus supporting more accurate microscopic traffic analysis.
In this experiment, the system is deployed for long-distance vehicle detection and behavior analysis of autonomous vehicles. In this scene, more attention is paid to the dynamic movement of vehicle targets than to their appearance details. In addition, when analyzing vehicle motion, LiDAR registration must consider timing: if the laser array takes time t for one revolution, the maximum time difference between two LiDARs scanning the same vehicle is t/2. Therefore, t should be as small as possible, which favors a high rotation frequency; the LiDAR frequency in this scene is set to 20 Hz. The result of the roadside scene registration is shown in Figure 14.

4.3. Analysis

The registration results in different scenarios are shown above. As the figures show, the traffic targets, buildings, and road boundaries in the space all fuse well. Because the method registers the traffic-sign point cloud rather than the global point cloud, some error is inevitable. Figure 15 presents the registration error: the red points sit slightly to the left of the blue points.
The method proposed in this paper is application-oriented and verified in actual scenarios. A complete registration error analysis involves six elements (x, y, z, α, β, γ). In this paper, we divide the errors into lateral, longitudinal, and vertical errors based on the orientation of the signboard: the lateral error lies along the direction parallel to the signboard's projection line on the ground, the longitudinal error lies along the direction perpendicular to that projection line, and the vertical error is the error in height.
In the experiments, the longitudinal error of this method in actual applications ranges from 1 to 15 cm. The lateral error is relatively large, which is caused by the sparse sampling of the LiDAR: the larger the spacing between points on the traffic sign, the greater the error. Because the distances between the LiDARs and the traffic sign differ, the point sparsity also differs, leading to different apparent lengths of the traffic sign in different point clouds. The maximum error is bounded by the resolution of the more distant LiDAR. The detection range of the VLP-16 is 100 m, and in the worst case, the lateral error can reach 35 cm.
We compare the proposed method with the method in [32] and the ICP algorithm [34]. The results of the comparison are presented in Table 3. The results show that the proposed method is the best among the three approaches.
As mentioned before, the rotation of the laser beam introduces a certain error: data at different scanning angles within the same frame have a time difference, and there is also a time difference between the data of multiple LiDARs. Therefore, the errors cannot be measured on moving targets; we use static walls in the scenes to measure them. The comparison shows that our method achieves the smallest error. In the method of [32], the error comes from the reference target, because the point clouds of the reference target seen by different LiDARs are not identical. The error of the ICP algorithm is so large because its registration result is simply wrong: when the distance between LiDARs is long and the parallax is large, too few points can be paired between the clouds to achieve accurate registration. In the vertical direction, however, a certain degree of registration is still achieved, meaning the ground points fit well. Compared with the self-made reference object of [32], traffic signs are larger and unoccluded in complex environments, so the applicability is stronger. Compared with the ICP algorithm, our method is more efficient, avoids falling into a local optimum, and greatly reduces the amount of computation.
By registering multiple LiDARs, we extend the vehicle detection distance and avoid missed detections caused by occlusion. The effect of multi-LiDAR registration and vehicle detection is shown in Figure 16: fusing multiple LiDARs extends the detection distance and greatly improves detection accuracy, and targets occluded from a single LiDAR can still be accurately detected in the fused point cloud.

5. Conclusions

This paper proposes a roadside LiDAR point cloud registration method that uses traffic signs as reference targets. The reflective film covering the surface of a traffic sign returns the laser beam emitted by the LiDAR with little loss, producing a high intensity value; the method exploits this feature through a series of steps to register point cloud data with large disparity in complex scenes. The method is fast and accurate and is suitable for large-scale deployment in actual scenarios. By achieving global point cloud registration through traffic-sign registration, it greatly reduces the workload and improves efficiency, achieves good registration results in complex traffic scenes, and avoids the classic ICP algorithm's tendency to fall into a local optimum, improving registration accuracy.
Some shortcomings remain to be addressed in future work. This method can complete three-dimensional point cloud registration only for horizontal roads; for scenes such as slopes, it achieves only two-dimensional registration. In addition, the algorithm's accuracy depends on the angular resolution of the LiDAR, and large errors may occur in very large scenes. For road sections without traffic signs, the method cannot be applied. In the future, we will continue to explore more universal, high-precision point cloud registration methods to support roadside infrastructure construction.

Author Contributions

Z.Z.: software and writing—original draft preparation; J.Z.: supervision; Y.T.: Conceptualization; S.Y.: methodology; Y.X.: writing—review and editing; S.A.: investigation and formal analysis; J.L.: investigation and formal analysis; and T.L.: methodology. All authors have read and agreed to the published version of the manuscript.

Funding

Zheyuan Zhang, Jianying Zheng, Yanyun Tao, and Shumei Yu’s work is supported by China’s National Natural Science Foundation (NNSF) under Grant No. 61973225, the Natural Science Foundation of Jiangsu Province under Grant No. BK20160324, and the Natural Science Foundation of Jiangsu Colleges and Universities under Grant No.16KJB580009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, J.; Zhang, X.; Yang, B.; Wang, N. Vessel traffic scheduling optimization for restricted channel in ports. Comput. Ind. Eng. 2021, 152, 107014.
  2. Chhabra, R.; Krishna, C.; Verma, S. A survey on state-of-the-art road surface monitoring techniques for intelligent transportation systems. Int. J. Sens. Netw. 2021, 37, 81–99.
  3. Alam, M.; Ferreira, J.; Fonseca, J. Introduction to intelligent transportation systems. In Intelligent Transportation Systems; Springer: Berlin, Germany, 2016; pp. 1–17.
  4. Xu, B.; Zheng, J.; Wang, Q.; Xiao, Y.; Ozdemir, S. An Adaptive Vehicle Detection Algorithm based on Magnetic Sensors in Intelligent Transportation Systems. Ad Hoc Sens. Wirel. Netw. 2017, 36, 211–232.
  5. Zhang, X.; Li, J.; Zhu, S.; Wang, C. Vessel intelligent transportation maritime service portfolios in port areas under e-navigation framework. J. Mar. Sci. Technol. 2020, 25, 1296–1307.
  6. Qi, L.; Zheng, Z.; Gang, L. A cellular automaton model for ship traffic flow in waterways. Phys. A Stat. Mech. Its Appl. 2017, 471, 705–717.
  7. Liu, Z.; Wu, Z.; Zheng, Z.; Wang, X.; Soares, C.G. Modelling dynamic maritime traffic complexity with radial distribution functions. Ocean Eng. 2021, 241, 109990.
  8. Yao, C.; Zhengjiang, L.; Zhaolin, W. Distribution diagram of ship tracks based on radar observation in marine traffic survey. J. Navig. 2010, 63, 129–136.
  9. Wang, Q.; Zheng, J.; Xu, H.; Xu, B.; Chen, R. Roadside magnetic sensor system for vehicle detection in urban environments. IEEE Trans. Intell. Transp. Syst. 2017, 19, 1365–1374.
  10. Zhu, H.; Wang, J.; Xie, K.; Ye, J. Detection of Vehicle Flow in Video Surveillance. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 528–532.
  11. Caris, M.; Stanko, S.; Johannes, W.; Sieger, S.; Pohl, N. Detection and tracking of Micro Aerial Vehicles with millimeter wave radar. In Proceedings of the 2016 European Radar Conference (EuRAD), London, UK, 5–7 October 2016; pp. 406–408.
  12. Mukhtar, A.; Xia, L.; Tang, T.B. Vehicle detection techniques for collision avoidance systems: A review. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2318–2338.
  13. Guillen-Perez, A.; Cano, M. Pedestrian characterisation in urban environments combining WiFi and AI. Int. J. Sens. Netw. 2021, 37, 48–60.
  14. Zambrano-Martinez, J.L.; Calafate, C.T.; Soler, D.; Cano, J.C.; Manzoni, P. Modeling and characterization of traffic flows in urban environments. Sensors 2018, 18, 2020.
  15. Basnayake, C. Automated traffic incident detection with GPS equipped probe vehicles. In Proceedings of the 17th International Technical Meeting of the Satellite Division of the Institute of Navigation, Long Beach, CA, USA, 21–24 September 2004; pp. 1–10.
  16. Sharma, S.; Awasthi, S. Introduction to Intelligent Transportation System: Overview, Classification Based on Physical Architecture, and Challenges. Int. J. Sens. Netw. 2022, 38, 215–240.
  17. Zheng, J.; Yang, S.; Wang, X.; Xia, X.; Xiao, Y.; Li, T. A Decision Tree Based Road Recognition Approach Using Roadside Fixed 3D LiDAR Sensors. IEEE Access 2019, 7, 53878–53890.
  18. Wu, J.; Lv, C.; Yue, H. Grid-based lane identification with roadside LiDAR data. Int. J. Sens. Netw. 2022, 38, 85–96.
  19. Liu, J.; Sun, Q.; Fan, Z.; Jia, Y. TOF lidar development in autonomous vehicle. In Proceedings of the 2018 IEEE 3rd Optoelectronics Global Conference (OGC), Shenzhen, China, 4–7 September 2018; pp. 185–190.
  20. Zheng, J.; Yang, S.; Wang, X.; Xiao, Y.; Li, T. Background Noise Filtering and Clustering with 3D LiDAR Deployed in Roadside of Urban Environments. IEEE Sens. J. 2021, 21, 20629–20639.
  21. Wang, X.; Fan, M.; Zheng, J.; Xia, X.; Xiao, Y.; Li, T. ARPILC: An Approach for Short-Term Prediction of Freeway Entrance Flow. IEEE Access 2019, 7, 130946–130956.
  22. Yoshioka, M.; Suganuma, N.; Yoneda, K.; Aldibaja, M. Real-time object classification for autonomous vehicle using LIDAR. In Proceedings of the 2017 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 24–26 November 2017; pp. 210–211.
  23. Catapang, A.N.; Ramos, M. Obstacle detection using a 2D LIDAR system for an Autonomous Vehicle. In Proceedings of the 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 25–27 November 2016; pp. 441–445.
  24. Shin, M.O.; Oh, G.M.; Kim, S.W.; Seo, S.W. Real-time and accurate segmentation of 3D point clouds based on Gaussian process regression. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3363–3377.
  25. Zhang, Y.; Sun, X.; Xu, H.; Yao, E. Tracking multi-vehicles with reference points switches at the intersection using a roadside LiDAR sensor. IEEE Access 2019, 7, 174072–174082.
  26. Zhao, J.; Xu, H.; Liu, H.; Wu, J.; Zheng, Y.; Wu, D. Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transp. Res. Part C Emerg. Technol. 2019, 100, 68–87.
  27. Zhao, H.; Sha, J.; Zhao, Y.; Xi, J.; Cui, J.; Zha, H.; Shibasaki, R. Detection and tracking of moving objects at intersections using a network of laser scanners. IEEE Trans. Intell. Transp. Syst. 2011, 13, 655–670.
  28. Gu, D.; Zhu, K.; Shao, Y.; Wu, W.; Gong, L.; Liu, C. 3D Scanning and Multiple Point Cloud Registration with Active View Complementation for Panoramically Imaging Large-Scale Plants. In Proceedings of the International Conference on Intelligent Robotics and Applications, Shenyang, China, 8–11 August 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 329–341.
  29. Aguilar-Moreno, M.; Graña, M. A comparison of registration methods for SLAM with the M8 Quanergy LiDAR. In Proceedings of the International Workshop on Soft Computing Models in Industrial and Environmental Applications, Burgos, Spain, 16–18 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 824–834.
  30. Ivanovich, P.; Dong, L. Optimal fusion rule for distributed detection with channel errors taking into account sensors' unreliability probability when protecting coastlines. Int. J. Sens. Netw. 2022, 38, 71–84.
  31. Lv, B.; Xu, H.; Wu, J.; Tian, Y.; Tian, S.; Feng, S. Revolution and rotation-based method for roadside LiDAR data integration. Opt. Laser Technol. 2019, 119, 105571.
  32. Zhang, Z.; Zheng, J.; Sun, R.; Zhang, Z. 3D Point Cloud Registration for Multiple Roadside LiDARs with Retroreflective Reference. In Proceedings of the 2020 IEEE International Conference on Networking, Sensing and Control (ICNSC), Nanjing, China, 30 October–2 November 2020; pp. 1–6.
  33. Liu, C.; Li, S.; Chang, F.; Wang, Y. Machine vision based traffic sign detection methods: Review, analyses and perspectives. IEEE Access 2019, 7, 86578–86596.
  34. Besl, P.; McKay, N. A method for registration of 3D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  35. Wu, J.; Xu, H.; Liu, W. Points registration for roadside LiDAR sensors. Transp. Res. Rec. 2019, 2673, 627–639.
  36. Changzhen, X.; Cong, W.; Weixin, M.; Yanmei, S. A traffic sign detection algorithm based on deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, China, 13–15 August 2016; pp. 676–679.
  37. Wang, C. Research and application of traffic sign detection and recognition based on deep learning. In Proceedings of the 2018 International Conference on Robots & Intelligent System (ICRIS), Changsha, China, 26–27 May 2018; pp. 150–152.
  38. Tabernik, D.; Skočaj, D. Deep learning for large-scale traffic-sign detection and recognition. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1427–1440.
  39. Javanmardi, M.; Song, Z.; Qi, X. A Fusion Approach to Detect Traffic Signs Using Registered Color Images and Noisy Airborne LiDAR Data. Appl. Sci. 2021, 11, 309.
  40. Ai, C.; Tsai, Y.J. An automated sign retroreflectivity condition evaluation methodology using mobile LIDAR and computer vision. Transp. Res. Part C Emerg. Technol. 2016, 63, 96–113.
  41. Anonymous. VLP-16 User's Manual; Velodyne Acoustics, Inc.: Lafayette, CA, USA, 2019.
  42. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  43. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231.
Figure 1. Roadside LiDAR and traffic sign.
Figure 2. Algorithm flowchart.
Figure 3. Oblique point cloud.
Figure 4. Point cloud after rotation.
Figure 5. Traffic signs in point cloud data.
Figure 6. Intersection experiment site.
Figure 7. Vehicle crossing the 0° line: (a) before crossing; (b) crossing; (c) after crossing.
Figure 8. Change of vehicle length in a forward cross.
Figure 9. Change of vehicle length in a backward cross.
Figure 10. 0° line direction.
Figure 11. Ground plane detection result.
Figure 12. DBSCAN denoising.
Figure 13. Results of registration in the intersection experiment.
Figure 14. Result of registration in the roadside experiment.
Figure 15. Error in registration.
Figure 16. Point cloud registration and vehicle detection.
Table 1. Number of points in one frame.

Laser Frequency      5 Hz     10 Hz    20 Hz
Number of Points     44,000   22,000   11,000
Time per Frame       0.2 s    0.1 s    0.05 s
Angular Resolution   0.1°     0.2°     0.4°
Table 2. Intensity at different distances.

Distance (m)   Min Intensity   Max Intensity   Average Intensity
10             204             255             246.8
20             191             255             251.3
30             193             255             246.4
40             190             255             230.5
50             190             255             233.8
60             189             255             233.8
70             188             255             205.5
80             223             255             240.8
90             225             255             242.3
100            239             246             246.9
Table 3. Errors of registration under different methods.

Method            Lateral Error (m)   Longitudinal Error (m)   Vertical Error (m)
Proposed Method   0.12                0.21                     0.09
Method in [32]    0.39                0.37                     0.13
ICP [34]          38.91               26.03                    0.30
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
