1. Introduction
Drone technology has recently developed rapidly across various fields. Drones are highly useful for aerial surveillance because of their remote sensing capability. Moreover, multiple-target detection is essential for recognizing harmful threats in advance.
It is difficult to estimate a desired target because of signal noise and interference from the multipath fading of transmitted signals in a wireless communication environment [1]. The strength of the signal that carries information is decreased by multipath fading, which is caused by natural and man-made structures [2]. Target estimation using radio waves [3,4,5,6] has been studied in the radar, sonar, mobile communication, and laser fields. In particular, methods for estimating targets using radar have been developed to protect military personnel from enemy threats [7,8,9].
Moving target detection is an essential preprocessing step for marking regions of interest in applications such as abandoned-animal detection, surveillance tracking, and vehicle estimation. Since it affects all subsequent steps, robust and accurate moving target detection is essential for optimal performance in these applications. Technological advances in areas such as signal processing, sensors, and position tracking have broadened the applications of Unmanned Aerial Vehicles (UAVs).
In recent years, target detection using a camera mounted on a drone has been studied. Drones can fly from one point to another under a program or a remote controller and then acquire data by detecting vehicles from that point. In Reference [10], various applications were proposed for the automatic video analysis technology of smart surveillance by drones. In Reference [11], aerial image acquisition was studied by developing an all-in-one camera-based target detection and positioning system for search and rescue missions. In Reference [12], speed estimation for moving objects was studied by registering and subtracting frames captured by a camera mounted on a drone; image registration is the process of aligning two or more images of the same scene to the same coordinates. In Reference [13], pedestrian movement detection using a small drone camera consisted of frame subtraction, thresholding, morphological filtering, and false alarm reduction that took the actual target size into account. In Reference [14], a Ground Penetrating Radar (GPR) system was studied for detecting land mines and Improvised Explosive Devices (IEDs) using vehicles and drones. The system consists of a transmitter mounted on a vehicle and a receiver mounted on a drone; it uses a Synthetic Aperture Radar (SAR) algorithm to reduce surface clutter and detect targets, but it cannot detect objects with drones alone, without vehicles. In Reference [15], a methodology was studied for acquiring a growth deficit map with an accuracy of up to 5 cm and a spatial resolution of 1 m using Differential Interferometric SAR (DInSAR). Satellite-based radar and optical sensors, as well as drone-borne optical sensors, are needed to form high-resolution maps for crop growth monitoring.
In these past studies, drones used cameras to detect objects, a method that is difficult to apply in environments such as fog and darkness. In this study, a LiDAR sensor mounted on a drone detects the speed of vehicles driving on a road; the LiDAR sensor is LightWare's SF30C. Moving vehicles driving in the first and second lanes of the road are detected, with the drone positioned vertically above the edge of the road. After moving to the measuring point, the drone detects targets while hovering in place, with a hovering range within 1 m. In this way, the speed of vehicles moving in multiple lanes is measured using LiDAR sensors mounted on a drone. Recently, research on Intelligent Transportation Systems (ITS) for smart roads has been increasing, and information about automobiles driving on roads [16,17,18] has been found to be closely related to road safety. Currently, it is not possible to measure the speed of vehicles in multiple lanes at once with a single piece of equipment. Therefore, it is necessary to develop an ITS that estimates the speed of moving vehicles in multiple lanes using one piece of equipment.
This study presents a drone-based speed measurement system that estimates the speed of moving vehicles in multiple lanes [19,20,21,22]. There are two existing methods for measuring the speed of moving vehicles in multiple lanes. The first measures the speed of a moving vehicle from the side of the road using speed-measuring equipment [23,24,25]. The second gauges a vehicle's speed using a sensor mounted on a fixed structure [26,27]. With the first method, the information obtained about a vehicle may differ depending on the weather and on the person performing the measurement. With the second method, a sensor on a fixed structure obtains information about vehicles at the outskirts of the road, so it cannot measure a moving vehicle's speed across all lanes. Because the existing methods only measure the speed of vehicles moving at the edge of the road, they cannot measure the speed of moving vehicles in all lanes. In addition, there is a risk of roadside traffic accidents when a person measures the speed of moving vehicles in multiple lanes; thus, a method that minimizes this risk is needed.
This paper is organized as follows. In Section 2, the methodology of vehicle detection and speed estimation is discussed. The components of the developed drone system that measures a vehicle's speed are described in Section 3. In Section 4, the analysis of information about moving vehicles detected using the developed drone is presented. Vehicle speed measurements obtained with the developed drone are analyzed through field experiments in Section 5. In Section 6, the proposed method is compared with the existing method, and the paper is concluded.
2. The Methodology of Vehicle Detection and Speed Estimation
The speed of the vehicles was estimated using a drone equipped with a set of distance-measuring LiDAR sensors. This LiDAR sensor set detected moving vehicles in multiple lanes.
Figure 1 shows the proposed method for estimating the speed of vehicles in multiple lanes using multiple LiDAR sensors. Each LiDAR sensor [28] obtained information about vehicles passing along its lane. A set of LiDAR sensors for detecting vehicles in one lane consisted of two sensors. The front sensor in Figure 1 scanned point A, and the rear sensor scanned point B. The front LiDAR sensor pointed at point A perpendicular to the ground. Hf was the distance between the front LiDAR sensor and point A, Hr was the distance from the rear sensor to point B, and θ was the angle between the two LiDAR sensors.
Figure 2 shows the flow by which the drone detected a vehicle and calculated the vehicle's speed. The sensor counter read the stored data by measuring the distance from the front and rear sensors at each measurement point. When a vehicle entered or exited a measurement point, the measured Hf and Hr distances differed from the reference distances, and from this it was determined that the vehicle had passed the measurement point. The distance between the measuring points was calculated using Hf, Hr, and the sensor angle θ. The vehicle's speed was obtained from the time difference between vehicle entry and exit and the distance between the measuring points. The distance between point A and point B was denoted by D, and for given sensor ranges, the larger the angle between the sensors, the larger this distance. In Figure 1, the distance D between point A and point B could be expressed as follows [29,30,31,32]:

D = √(Hf² + Hr² − 2·Hf·Hr·cos θ)   (1)
A moving vehicle was detected by the front and rear sensors, which transmitted data to the computer on the ground. The ground computer estimated the vehicle's speed by calculating the difference between the times at which the two sensors measured the vehicle. The speed of the vehicle shown in Figure 1 was calculated as follows:

V = D / Δt   (2)

where V is the vehicle's speed and Δt is the difference between the time measured at point A and the time measured at point B in Figure 1. This paper proposed a method for measuring the speed of vehicles driving in up to two lanes. The proposed method obtained information about vehicles on the road differently from the existing method of measuring vehicle speeds at the roadside. It simultaneously measured the speeds of vehicles driving in multiple lanes, under the assumption that vehicles moving in one lane did not occlude vehicles moving in other lanes.
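To make the geometry above concrete, the following minimal Python sketch computes the distance D between the measuring points from Hf, Hr, and θ using Equation (1), and then the vehicle speed using Equation (2). The function names, the example values, and the flat-road relation used to derive Hr are illustrative assumptions, not part of the paper.

import math

def point_distance(h_f: float, h_r: float, theta_deg: float) -> float:
    # Equation (1): distance D between points A and B (law of cosines).
    # h_f: range from the front sensor to point A (m), perpendicular to the ground.
    # h_r: range from the rear sensor to point B (m).
    # theta_deg: fixed angle between the two LiDAR sensors (degrees).
    theta = math.radians(theta_deg)
    return math.sqrt(h_f**2 + h_r**2 - 2.0 * h_f * h_r * math.cos(theta))

def vehicle_speed_kmh(distance_m: float, t_entry_s: float, t_exit_s: float) -> float:
    # Equation (2): V = D / Δt, converted from m/s to km/h.
    return distance_m / (t_exit_s - t_entry_s) * 3.6

# Illustrative example: drone hovering at 35 m with an assumed 30° sensor angle.
h_f = 35.0
theta_deg = 30.0
h_r = h_f / math.cos(math.radians(theta_deg))  # flat-road assumption
d = point_distance(h_f, h_r, theta_deg)        # about 20.2 m between A and B
print(round(d, 2), "m")
print(round(vehicle_speed_kmh(d, 10.00, 10.91), 2), "km/h")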
Four LiDAR sensors mounted on the drone were used to detect vehicles moving in the first and second lanes. Two LiDAR sensors were used per lane to detect vehicle entry and exit at the measuring points: the front sensor scanned the vehicle entry measurement point, and the rear sensor scanned the vehicle exit measurement point. The data measured in each lane were transmitted to the ground computer over TCP, with header information identifying the lane included in each data frame. The ground computer distinguished the measurement data of each lane based on the header information of the frame.
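The paper does not specify the frame layout, so the following Python sketch is an illustration only: each lane's sensor readings are packed into a frame whose header carries a lane identifier, which lets the ground computer demultiplex a single TCP stream. All field names and sizes are assumptions.

import struct

# Hypothetical frame layout: lane ID (1 byte), timestamp in seconds (8 bytes),
# Hf and Hr ranges in metres (4 bytes each); "!" selects network byte order.
FRAME_FMT = "!Bdff"

def pack_frame(lane_id: int, timestamp: float, h_f: float, h_r: float) -> bytes:
    # Drone side: build one measurement frame with the lane ID in the header.
    return struct.pack(FRAME_FMT, lane_id, timestamp, h_f, h_r)

def unpack_frame(frame: bytes):
    # Ground side: read the header to identify the lane, then the payload.
    return struct.unpack(FRAME_FMT, frame)

frame = pack_frame(lane_id=1, timestamp=10.008, h_f=35.02, h_r=40.44)
print(unpack_frame(frame))  # (1, 10.008, 35.02..., 40.44...)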
The angle (θ) between the drone's LiDAR sensors was fixed, and moving vehicles were measured while the angle remained fixed. In Figure 1, the distance (D) increased when the drone flew upward and decreased when the drone flew downward, so the distance between the measuring points was controlled by the drone's height. In Figure 1, the drone's LiDAR sensors measured two points (entry/exit) in each lane, and these data were transmitted to the ground computer system through a transceiver. When a vehicle entered or exited at the two points, the measured distances Hf and Hr changed. When the distances changed, it was determined that a vehicle had passed the two points, and the vehicle's speed was calculated by the ground computer system.
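As a sketch of this entry/exit logic, the following Python fragment declares a vehicle present at a measuring point when the measured range drops sufficiently below the reference range to the empty road surface. The threshold value, the sample data, and the function name are assumptions, not the paper's implementation.

def detect_events(samples, reference, threshold=1.0):
    # Yield ('entry', t) / ('exit', t) events from one sensor's range stream.
    # samples:   list of (timestamp_s, measured_range_m) pairs.
    # reference: range to the empty road surface (m), e.g. Hf or Hr.
    # threshold: minimum deviation (m) treated as a vehicle; assumed value.
    occupied = False
    for t, r in samples:
        vehicle_now = (reference - r) > threshold  # vehicle body is closer than the road
        if vehicle_now and not occupied:
            yield ("entry", t)
        elif not vehicle_now and occupied:
            yield ("exit", t)
        occupied = vehicle_now

# Illustrative stream: front sensor at Hf = 35 m; a car roof about 1.5 m high passes.
stream = [(10.000, 35.0), (10.008, 33.5), (10.016, 33.5), (10.024, 35.0)]
print(list(detect_events(stream, reference=35.0)))
# [('entry', 10.008), ('exit', 10.024)]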
When the drone's LiDAR sensors detected moving vehicles, it could not be verified directly that they were pointing correctly at the road measurement points. Each drone LiDAR sensor was therefore aligned with an Infrared (IR) laser sensor whose measurement point could be identified by the human eye; once the two measurement points were aligned, the IR laser sensor's visible measuring point marked the LiDAR sensor's measuring point.
3. Developed Drone System Components
The developed drone system comprises a drone and ground equipment and can be used to obtain information about traffic in multiple lanes. The drone includes a multi-lane LiDAR detection system [33,34,35,36] based on LiDAR distance measurement sensors. The drone system consists of an airframe, an autonomous flight control system, LiDAR sensors, a gimbal device, communication and video devices, and a video camera, and it can transmit video, vehicle speed, and traffic data. The multi-lane LiDAR detection system calculates a vehicle's speed from the distance between the two measuring points, which is in turn calculated from the measured distances to the measuring points and the fixed angle between the two LiDAR distance measurement sensors that make up a sensor set. Therefore, it is possible to accurately measure the speed of a vehicle regardless of changes in the drone's altitude during flight. To minimize damage in the event of a crash, the sensors are oriented so that measurements can be taken while the drone stops and hovers directly above the road shoulder.
Figure 3 shows the drone that was developed in this study. It consisted of propellers, a Flight Control Computer (FCC), a gimbal, a multi-lane detection system, and a camera. The developed drone was designed as a quadcopter with four propellers because of weight considerations. The FCC controlled the drone's flight and the various devices mounted on the drone. The drone's communication system was equipped with a transceiver so that a remote controller could control the drone's flight and so that data could be transmitted from the drone to the computer on the ground using Radio Frequency (RF) communication. Because the airframe shook when changing direction, it was difficult to measure a specific point continuously; a three-axis gimbal was therefore used to keep the payload attached to the drone in constant balance. The vehicle detection module consisted of a LiDAR sensor, a camera, and a transceiver, and the multi-lane vehicle detection system consisted of LiDAR distance measurement sensors and the vehicle detection module. The vehicle detection module detected moving vehicles in a lane, and two LiDAR sensors were needed to detect moving vehicles in one lane. The drone in this study therefore carried four sensors, detecting vehicles in the first and second lanes simultaneously.
The multi-lane LiDAR vehicle detection system had an interface port through which the system could be connected to the sensors for each lane, a Central Processing Unit (CPU) circuit that performed the calculations, a memory slot for data storage, and an external communication port. The camera transmitted images of the road to the computer on the ground so that the traffic flow could be checked.
Figure 4 presents a block diagram of the control system for the multi-lane traffic information detection module. The diagram shows how information about a vehicle was derived from the calculated distance between the two measuring points, the front/rear sensor measurement values, and the installation angle used for calculating the vehicle's speed.
Figure 4 shows that the control system for estimating a vehicle's speed had two parts: a vehicle detection part and a ground control part. The vehicle detection part consisted of a camera and a Detection Information Module (DIM); the ground control part comprised a Detection Control Module (DCM) and a server. The camera of the vehicle detection part transmitted images of a vehicle entering and exiting the measurement points to the ground control part using 5.8 GHz RF communication, while the DIM transmitted the information obtained by the drone's sensor sets to the ground control part using 915 MHz RF communication. The DCM of the ground control part shown in Figure 4 calculated the vehicle's speed and the traffic volume from the information received from the DIM and stored the data in the server. In addition, the DCM transmitted vehicle detection messages to the DIM and saved the video data received from the camera of the vehicle detection part in the server. The multi-lane speed and traffic detection function calculated the traffic volume and the vehicle speeds in each lane, using the front/rear sensor information transmitted by the DIM, and stored the acquired data. The detection confirmation function displayed the video taken during the data collection period and a still frame captured at the moment a vehicle was detected, together with the corresponding times. The detection information generation parameters were set remotely so that the DIM could detect vehicles.
Speed information generation was configured by setting the angle of the front/rear sensors for each lane.
The indicator was switched on or off to check the exact measurement point indicated by the sensors when the drone moved to a measurement point.
The drone moved to a measurement point and collected measurements about traffic during stationary flight. In this mode, the speed of vehicles moving in multiple lanes was measured.
The debug mode was set to display the raw information obtained by the multi-lane detection sensors and to check for the presence or absence of a vehicle’s detection.
When the calculation of a vehicle’s speed was completed, the information about the speed was logged.
4. Vehicle Detection and Speed Measurement Analysis
The detection information analysis employed the DIM's vehicle detection and the DCM's speed measurement. The traffic volume and the vehicle speeds were extracted from the file containing the raw information collected by the DIM. The multi-lane moving vehicle detection parameters, such as Hf, Hr, and the sensor angle, were vehicle detection algorithm parameters included in the header of the raw information file.
The speed measurement for each lane extracted the speed of vehicles in that lane at the times when the front/rear sensors sensed a vehicle. The vehicle detection parameters consisted of a base slope for determining entry to and exit from the measurement points, an on/off wait switch, and the front and rear sensor angles for each lane. Once vehicle detection and the speed calculation were complete, the data were stored in the same log file used by the DCM function, and the detection information for all vehicles was displayed in a graph.
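A minimal Python sketch of how these per-lane parameters might be grouped is shown below. The field names mirror the description above, but the structure and the example values are assumptions; the paper does not report its actual settings.

from dataclasses import dataclass

@dataclass
class LaneDetectionParams:
    # Vehicle detection parameters for one lane, as described in the text.
    base_slope: float       # slope threshold for deciding entry/exit at a point
    wait_switch_on: bool    # on/off wait switch
    front_angle_deg: float  # front sensor angle for this lane
    rear_angle_deg: float   # rear sensor angle for this lane

# Illustrative two-lane configuration (values are placeholders).
lanes = [
    LaneDetectionParams(base_slope=0.5, wait_switch_on=True,
                        front_angle_deg=0.0, rear_angle_deg=30.0),
    LaneDetectionParams(base_slope=0.5, wait_switch_on=True,
                        front_angle_deg=5.0, rear_angle_deg=35.0),
]
print(lanes[0])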
Figure 5 shows a block diagram of the vehicle detection information flow for the vehicle detection algorithm and the speed calculation algorithm. The parameters for the vehicle detection algorithm were set in BtnModify. When a detection event occurred, BtnPlayCtrl operated the front/rear LiDAR sensors for vehicle detection and speed information generation, using the parameters set in BtnModify. BtnStop stopped detecting vehicles when it received a stop event message from the DCM; speed information was then generated by processing the data collected by BtnPlayCtrl. BtnSave stored the log files used to transmit the processed front/rear sensor data to the computer on the ground. The DCM loaded the raw data from the DIM through BtnRawPath.
The block diagram of vehicle detection and speed information processing contained the following:
BtnModify: Applied the vehicle detection parameter to the vehicle detection algorithm.
BtnPlayCtrl: Started vehicle detection and speed generation.
BtnStop: Stopped vehicle detection and speed generation.
BtnSave: Saved the created information about speed in a log file.
BtnRawPath: Loaded the file containing raw information from the DIM.
Data Processing: Detected vehicles and calculated their speed according to an event message.
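The control flow in Figure 5 can be pictured as a small event dispatcher; the Python sketch below is interpretive only. The Btn names come from the text, but the handler bodies and the dispatch mechanism are assumptions.

class DetectionAnalysis:
    # Sketch of the Figure 5 blocks as event handlers.
    def __init__(self):
        self.params = {}
        self.running = False

    def btn_modify(self, **params):   # apply vehicle detection parameters
        self.params.update(params)

    def btn_play_ctrl(self):          # start vehicle detection and speed generation
        self.running = True

    def btn_stop(self):               # stop vehicle detection and speed generation
        self.running = False

    def btn_save(self, path):         # save generated speed information to a log file
        with open(path, "w") as f:
            f.write(f"params={self.params}\n")

    def btn_raw_path(self, path):     # load the raw DIM data file
        with open(path) as f:
            return f.read()

def dispatch(analysis, event, **kwargs):
    # Route a DCM event message to the matching handler; 'operate' and 'stop'
    # appear in the text, the other message names are assumed.
    handlers = {
        "modify": analysis.btn_modify,
        "operate": analysis.btn_play_ctrl,
        "stop": analysis.btn_stop,
        "save": analysis.btn_save,
        "raw": analysis.btn_raw_path,
    }
    return handlers[event](**kwargs)

a = DetectionAnalysis()
dispatch(a, "modify", base_slope=0.5)
dispatch(a, "operate")
print(a.running)  # True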
Figure 6 shows the hierarchical control procedure for each vehicle detection and speed calculation event in the detection information analysis shown in Figure 5. BtnPlayCtrl controlled vehicle detection according to the event messages from the DCM: the drone detected moving vehicles when BtnPlayCtrl received an 'operate' message and did not detect vehicles when it received a 'stop' message. BtnModify operated when vehicle detection was not required, and the parameter settings were changed at that time. BtnSave created a log file and a header file when it received a 'start saving' event message and stopped saving them when the message state changed to 'end'. BtnStop was applied when the user wanted to change the status of an event; it stopped both the measurement of moving vehicles' speeds and the creation and storage of log files, so that the parameters could be changed again.
The hierarchical control procedure for each event was as follows:
BtnPlayCtrl: Saved raw data by changing the status of each event in the analysis of vehicle detection information.
BtnModify: Changed the parameters when it was not in data acquisition mode.
BtnSave: Created the log file and the header file and saved the log file.
BtnStop: Updated the data display event to ‘stop’.
Figure 7 shows the Hierarchical Control Procedure (HCP) for extracting the data from the DIM in the vehicle detection part. A thread ran at 8 ms intervals and executed functions such as initialization, data collection, and data processing, according to the operation state, in order to extract information about detected vehicles. The data processing unit controlled data acquisition according to the event state in each cycle. Speed errors occurred in this study because of diffuse reflection and noise in the sensor receiver signal; the error was particularly large when a sensor scanned the wheels of a vehicle. An averaging filter was used to minimize the effect of this error. Since the system was designed to detect vehicles at distances over 15 m, an error also occurred because of the diagonal scanning of the sensor; this error was corrected by treating it as a DC offset.
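The averaging filter and the DC offset correction mentioned above might look like the following Python sketch. The window length, the offset value, and the sample data are assumptions; the paper reports none of them.

from collections import deque

def moving_average(samples, window=5):
    # Simple averaging filter to suppress diffuse-reflection and receiver noise.
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

def remove_dc_offset(samples, offset):
    # Correct the constant bias attributed to the diagonal scan geometry.
    return [s - offset for s in samples]

# Illustrative noisy ranges around 40.4 m with an assumed 0.2 m diagonal-scan bias.
raw = [40.62, 40.55, 40.66, 40.58, 40.61, 40.63]
filtered = moving_average(raw, window=3)
corrected = remove_dc_offset(filtered, offset=0.2)
print([round(x, 3) for x in corrected])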
When the drone arrived at the measurement point to detect vehicles, it received an event message from the DCM telling it to initialize the parameters for vehicle detection. The drone transitioned to a standby state and did not detect vehicles when the DIM received a 'stop' event message from the DCM, and it transitioned from the standby state to the vehicle detection state when the DIM received an 'operate' event message from the DCM. The drone thus transitioned between states according to the messages from the DCM, which controlled vehicle detection. The data obtained by the drone were transmitted to the DCM, and the DCM calculated the vehicle's speed by processing the data received from the DIM.
The procedure shown in Figure 7 was as follows.
In the initial state, the parameters were initialized, and the graph output’s state was changed.
The algorithm’s state was changed to acquire data.
Filtering was performed after obtaining raw data.
The vehicle detection algorithm was applied to the filtered data to detect vehicles.
The speed calculation algorithm was applied using the data from the vehicle detection algorithm.
5. Experiment
Figure 8 shows the distance-measuring sensor that measured the distance between the drone and the road. The sensor was built around a LightWare SF30C LiDAR sensor. To determine the accuracy of the developed distance-measuring sensor, the sensor was fixed in place, and the target was moved while the distance was measured. The maximum detection distance of the distance-measuring sensor was found to be 100 m, and the sensor had a beam divergence of 3.5 mrad.
Figure 9 shows the peak-to-peak measurements of distances from 10 m to 100 m using the distance-measuring sensor shown in Figure 8. The distances from 10 m to 100 m were tested after taking into consideration the minimum and maximum distances from the ground required for the drone's safe flight. The experiment measured the distance between the distance-measuring sensor and the target while changing the distance in 10 m increments, repeating the measurement 10 times at each distance. The results of the experiment showed that the maximum peak-to-peak distance error was 7 cm, at 100 m, and the minimum was 3 cm, at 20 m and 80 m. The average peak-to-peak distance error of the distance-measuring sensor was 4.5 cm.
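The per-distance statistics reported in Figure 9 and Table 1 can be reproduced from the 10 repeated readings at each distance with a few lines of Python; the readings below are illustrative stand-ins, not the paper's raw data.

def range_stats(readings):
    # Average, maximum, minimum, and peak-to-peak of repeated range readings.
    avg = sum(readings) / len(readings)
    mx, mn = max(readings), min(readings)
    return {"avg": avg, "max": mx, "min": mn, "p2p": mx - mn}

# Ten illustrative readings at the 80 m target (consistent with a 3 cm peak-to-peak).
readings_80m = [80.00, 80.01, 80.02, 80.03, 80.02,
                80.01, 80.02, 80.03, 80.02, 80.02]
stats = range_stats(readings_80m)
print({k: round(v, 3) for k, v in stats.items()})
# {'avg': 80.018, 'max': 80.03, 'min': 80.0, 'p2p': 0.03}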
Table 1 shows the average, maximum, minimum, and peak-to-peak measurements at each distance from the distance-measuring sensor. When the measurement distance was 60 m or 80 m, the average measured distance was 60.02 m or 80.02 m, respectively; the sensor produced the smallest differences in average measured distance at these distances. When the measurement distance was 100 m, the difference in measured distance was at its maximum, with an average measured distance of 100.07 m. The maximum measured distance at 100 m was 100.11 m, the largest maximum deviation shown in Table 1, while the maximum measured distance at 80 m was 80.03 m, the smallest. Since the minimum measured distance at 80 m was exactly 80 m, this was the most accurately measured distance in the experiment. The minimum measured distance at 100 m, 100.04 m, showed the largest deviation among the minima.
Figure 10 shows images taken during an experiment performed to compare a speed-gun with the developed drone and to confirm the accuracy of the speed measurements from the system developed in this study. The drone measured the speed of a moving vehicle in one lane only and transmitted the data to the ground control part, where the speed was calculated in the DCM. The numbers shown in red and yellow in Figure 10 are the speeds of the moving vehicle as measured by the speed-gun and the drone.
Table 2 shows the results of the test conducted on one lane, listing the speeds of the detected vehicles measured by the drone and by the speed-gun. In the 5th experiment, the difference in measured speed between the speed-gun and the drone was the smallest (0.17 km/h); in the 3rd experiment, it was the largest (4.34 km/h). The average difference in measured speed between the speed-gun and the drone for vehicles moving in a single lane was 1.47 km/h.
Table 3 shows the drone and speed-gun measurements of vehicle speeds in multiple lanes. The existing equipment was a speed-gun, and the drone hovered at a fixed height of 35 m above the road. In the experiments, the vehicle speed was measured 17 times in multiple lanes. In the 16th experiment on the first lane, the speeds measured by the speed-gun and the drone were 55 km/h and 54.7 km/h, respectively; the difference of 0.3 km/h between the two pieces of equipment was the minimum value shown in Table 3. In the 6th experiment on the first lane, the speeds measured by the speed-gun and the drone were 85 km/h and 78.19 km/h, respectively; the difference of 6.81 km/h was the maximum shown in Table 3. In the 6th experiment on the second lane, the speeds measured by the drone and the speed-gun were 84.78 km/h and 85 km/h, respectively; the difference of 0.22 km/h was the minimum obtained in the second lane during the experiment. The maximum difference in the second lane occurred in the 15th experiment, in which the drone and the speed-gun measured 44.68 km/h and 50 km/h, respectively, a difference of 5.32 km/h.
In Table 3, the 'X' symbol indicates that the vehicle's speed was not measured. The results of the multi-lane experiment showed some differences in measured speed between the developed system and the existing equipment. One reason for these differences was that the drone scanned the beam along a diagonal line, so a vehicle's detected position and detection time could vary depending on the scanning position and the vehicle's driving pattern. In addition, differences in measured speed could arise from the difference between the detection times of the entry and exit sensors. If the distance between the two sensors' measuring points was insufficient, a vehicle might not be detected, and its speed could not be measured.
Table 4 shows the Root Mean Square Error (RMSE) and the vehicle detection rate for the data shown in Table 3. The RMSE was 3.30 km/h for the first lane and 2.27 km/h for the second lane. The vehicle detection rate was 100% in the first lane; in the second lane, it was 94.12% because one vehicle was not detected. The average vehicle detection rate was 97.06%.
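For reference, the RMSE and detection-rate figures in Table 4 can be computed as in the Python sketch below, with None standing in for the 'X' entries. The sample values are an illustrative subset, not the full Table 3 data.

import math

def rmse(reference, measured):
    # RMSE over the trials in which the drone produced a measurement.
    pairs = [(r, m) for r, m in zip(reference, measured) if m is not None]
    return math.sqrt(sum((r - m) ** 2 for r, m in pairs) / len(pairs))

def detection_rate(measured):
    # Percentage of trials in which a vehicle was detected (None = missed, 'X').
    detected = sum(1 for m in measured if m is not None)
    return 100.0 * detected / len(measured)

# Illustrative subset: speed-gun reference vs. drone measurement (km/h).
speed_gun = [55.0, 85.0, 50.0, 60.0]
drone = [54.7, 78.19, None, 59.1]  # None marks an undetected vehicle
print(round(rmse(speed_gun, drone), 2), "km/h")
print(round(detection_rate(drone), 2), "%")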
There were two reasons for the differences in the speed measurements obtained during the experiment. First, since the developed drone's sensors scanned the beam diagonally, the detected position of a vehicle varied depending on the scanning position and the vehicle's driving pattern. Second, when a vehicle was missed by one sensor, the scanning distance was calculated indirectly from the distance values of the two sensors and the angle between them, and diffuse reflection or sensor noise in these values increased the difference in measured speed.
6. Conclusions
In this study, a drone for accurately and safely measuring the speed of vehicles moving on a road was developed. The drone performed 17 moving vehicle detection tests in hovering flight, and its measurement performance was compared with reference vehicle speed measurements. The proposed drone system comprised a drone for detecting moving vehicles and ground equipment for processing the data. The difference between the drone system proposed in this study and existing speed measurement systems was that sensor sets could be installed on the drone to acquire information on all vehicles moving in each lane. To determine the accuracy of the developed drone system's speed measurements, its performance was compared with that of a speed-gun, which is commonly used to measure vehicle speeds. The advantages of the proposed drone system were as follows: (1) it reduced the risk of roadside traffic accidents that the existing method of measuring vehicle speeds entails, and (2) it obtained information about vehicle speeds without the need to install a gantry structure. In addition, the system improved the accuracy of vehicle speed measurements on roads in foggy and dark environments. Since the weight of the sensor module and the gimbal was located below the center of the drone, the drone had a wind resistance of up to 15 m/s with an appropriate Proportional Integral Derivative (PID) control setting. The weight of the drone developed in this study was 20 kg, including an 8 kg dummy mounted to test flight at a wind speed of 15 m/s. With the dummy removed, the drone weighed 12 kg and could withstand winds of up to 10 m/s. The drone's movement during stationary (hovering) flight was within 1 m. It would be dangerous to test the entire system on a public road without being able to ensure that the drone's flight would be 100% safe. In addition, it was impossible to fully consider all road irregularities because the drone needed to fly in a place without streetlights and wires. In the future, research should be conducted on improving the drone's wind resistance so that it remains stable during flights in strong winds. Furthermore, an averaging filter and a speed correction algorithm for reducing the difference in measured speed must be developed.