Extracting High-Precision Vehicle Motion Data from Unmanned Aerial Vehicle Video Captured under Various Weather Conditions
Abstract
1. Introduction
- These datasets were all collected in clear weather to maximize recording quality and do not address weather factors. At present, no dataset contains vehicle motion data extracted from UAV video captured under various weather conditions in an urban mixed-traffic scene. In real traffic scenarios, the weather and ambient illumination change constantly, which can greatly degrade image quality. The accuracy and robustness of vehicle detection and tracking methods decrease significantly under inclement weather conditions such as rain, fog, and snow, and at nighttime, because of darkness, blurring, and partial occlusion.
- Another major issue with the aforementioned datasets is that the vehicle detection and tracking methods used to build them do not yield highly accurate and stable vehicle motion data, because vehicle detection and tracking in UAV images and videos is challenging for several reasons: random camera disturbance, occlusion by buildings and trees, many objects against an intricate background, and perspective distortion. Raw data extracted from UAV video may contain erroneous coordinate, speed, and acceleration information, and therefore need to be smoothed and refined using data collected by additional sensors.
- To the best of our knowledge, this study is the first work that extracts high-precision vehicle motion data, including vehicle trajectory, vehicle speed, and vehicle yaw angle, from UAV video captured under various weather conditions.
- Two new aerial-view datasets, named the Multi-Weather Vehicle Detection (MWVD) and Multi-Weather UAV (MWUAV) datasets, are collected and manually labeled for vehicle detection and vehicle motion data estimation, respectively. The MWVD dataset includes 7133 traffic images (1311 taken under sunny conditions, 961 under night conditions, 3366 under rainy conditions, and 1495 under snowy conditions) containing 106,995 labeled vehicles. The data were captured by a camera-equipped drone and are used to evaluate the proposed vehicle orientation detection method. The MWUAV dataset for motion data estimation contains four UAV videos, each with over 30,000 frames, covering approximately 3000 vehicle trajectories. Vehicle speed and yaw angle were collected under sunny, night, rainy, and snowy conditions. This is the first and the largest vehicle motion dataset collected from UAV videos captured at an urban intersection under multi-weather conditions.
- We propose a new vehicle orientation detection method based on YOLOv5 with image-adaptive enhancement to improve vehicle detection performance under various weather conditions. Our method significantly outperforms state-of-the-art methods on the collected MWVD dataset.
- A new vehicle tracking method called SORT++ is proposed to extract high-precision and reliable vehicle motion data from the vehicle orientation detection results. Our tracking method also achieves state-of-the-art performance on the collected MWUAV dataset.
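The RIoU matching step that SORT++ builds on (Section 2.4.1) reduces to computing intersection-over-union between rotated boxes. The following is a minimal pure-Python sketch, not the paper's implementation: convex polygon clipping via Sutherland–Hodgman plus the shoelace area formula.

```python
# Hedged sketch (not the paper's code): IoU between two oriented bounding
# boxes, the quantity behind RIoU matching in SORT-style trackers.
import math

def obb_corners(cx, cy, w, h, theta):
    """Corners of an oriented box (center, size, yaw in radians), CCW order."""
    c, s = math.cos(theta), math.sin(theta)
    local = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + c * x - s * y, cy + s * x + c * y) for x, y in local]

def polygon_area(poly):
    """Shoelace formula (absolute value, so vertex order does not matter)."""
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))) / 2

def clip_polygon(subject, clipper):
    """Sutherland-Hodgman: clip convex `subject` by a convex CCW `clipper`."""
    out = subject
    for i in range(len(clipper)):
        (ax, ay), (bx, by) = clipper[i], clipper[(i + 1) % len(clipper)]
        side = lambda p: (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            fp, fq = side(p), side(q)
            if fp >= 0:                      # p is inside this clip edge
                out.append(p)
            if (fp >= 0) != (fq >= 0):       # edge p->q crosses the clip line
                t = fp / (fp - fq)
                out.append((p[0] + t * (q[0] - p[0]),
                            p[1] + t * (q[1] - p[1])))
        if not out:
            break
    return out

def riou(box_a, box_b):
    """IoU of two oriented boxes given as (cx, cy, w, h, theta)."""
    pa, pb = obb_corners(*box_a), obb_corners(*box_b)
    inter = clip_polygon(pa, pb)
    ia = polygon_area(inter) if len(inter) >= 3 else 0.0
    return ia / (polygon_area(pa) + polygon_area(pb) - ia)
```

In practice, libraries such as Shapely or OpenCV's `cv2.rotatedRectangleIntersection` perform this computation; the sketch above only illustrates the geometry.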
2. Methodology
2.1. Overall Framework
2.2. Image Stabilization
2.3. Structure of the Vehicle Orientation Detection Method
2.3.1. CLAHE
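CLAHE boosts local contrast by equalizing clipped per-tile histograms. As a simplified, single-tile sketch of the core clip-and-equalize step (full CLAHE adds tiling and bilinear interpolation between tiles; this is illustrative, not the paper's code):

```python
# Simplified single-tile version of contrast-limited histogram equalization:
# clip the histogram at `clip_limit`, redistribute the excess uniformly,
# then remap intensities through the cumulative distribution.
def clip_limited_equalize(pixels, clip_limit=40, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin and collect the excess mass.
    excess = 0
    for i in range(levels):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly across all bins.
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Cumulative distribution -> monotone lookup table over the full range.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    lut = [round(c * scale) for c in cdf]
    return [lut[p] for p in pixels]
```

For real images, OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` implements the full tiled algorithm.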
2.3.2. YOLOv5-OBB Algorithm
2.4. The Structure of the Tracking Method
2.4.1. RIoU Matching
2.4.2. Appearance Feature Extractor (AFE)
2.4.3. Motion Feature Extractor (MFE)
2.4.4. Kalman Filtering
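The Kalman filtering step can be illustrated on a single coordinate. Below is a minimal constant-velocity filter in pure Python; it is a sketch with illustrative noise parameters, not the tracker's actual state vector (which would also cover box size and yaw).

```python
# Minimal 1D constant-velocity Kalman filter: position is measured, velocity
# is inferred. This is the predict/update cycle SORT-style trackers run per
# frame, reduced to one coordinate with hand-written 2x2 algebra.
class KalmanCV1D:
    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.x = [x0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.dt, self.q, self.r = dt, q, r # step, process/measurement noise

    def predict(self):
        dt = self.dt
        x, v = self.x
        self.x = [x + dt * v, v]           # x' = F x, F = [[1, dt], [0, 1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = F P F^T + Q (Q modeled as q on the diagonal)
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        # H = [1, 0]: only the position is observed.
        s = self.P[0][0] + self.r                    # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        y = z - self.x[0]                            # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

Fed consistent constant-velocity measurements, the velocity estimate converges to the true slope even though velocity is never observed directly.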
2.4.5. Traffic Data Calculation
3. Experiments and Analysis
3.1. MWVD and MWUAV Datasets
3.2. Detection Algorithm Evaluation Metrics
3.3. Metrics for Multiple Object Tracking (MOT)
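For reference, the headline metrics in the tracking tables below follow the standard CLEAR-MOT and identity-metric definitions (textbook formulas, independent of this paper):

```python
# MOTA penalizes false positives, misses, and identity switches relative to
# the total number of ground-truth boxes over all frames; IDF1 is the F1
# score of identity-consistent matches (IDTP/IDFP/IDFN).
def mota(fp, fn, idsw, num_gt):
    """num_gt = total number of ground-truth boxes over all frames."""
    return 1.0 - (fp + fn + idsw) / num_gt

def idf1(idtp, idfp, idfn):
    return 2 * idtp / (2 * idtp + idfp + idfn)
```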
3.4. Results of Vehicle Detection
3.5. Results of Vehicle Tracking
3.6. Results of Vehicle Motion Data Estimation
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhan, W.; Sun, L.; Wang, D.; Shi, H.; Clausse, A.; Naumann, M.; Kummerle, J.; Konigshof, H.; Stiller, C.; de La Fortelle, A.; et al. Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps. arXiv 2019, arXiv:1910.03088. [Google Scholar]
- Alexiadis, V.; Colyar, J.; Halkias, J.; Hranac, R.; McHale, G. The next generation simulation program. Inst. Transp. Eng. ITE J. 2004, 74, 22. [Google Scholar]
- Robicquet, A.; Sadeghian, A.; Alahi, A.; Savarese, S. Learning social etiquette: Human trajectory understanding in crowded scenes. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2016; pp. 549–565. [Google Scholar]
- Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125. [Google Scholar]
- Yang, D.; Li, L.; Redmill, K.; Özgüner, Ü. Top-view trajectories: A pedestrian dataset of vehicle-crowd interaction from controlled experiments and crowded campus. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 899–904. [Google Scholar]
- Bock, J.; Krajewski, R.; Moers, T.; Runde, S.; Vater, L.; Eckstein, L. The inD dataset: A drone dataset of naturalistic road user trajectories at German intersections. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1929–1934. [Google Scholar]
- Krajewski, R.; Moers, T.; Bock, J.; Vater, L.; Eckstein, L. The rounD dataset: A drone dataset of road user trajectories at roundabouts in Germany. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–6. [Google Scholar]
- Moers, T.; Vater, L.; Krajewski, R.; Bock, J.; Zlocki, A.; Eckstein, L. The exiD Dataset: A Real-World Trajectory Dataset of Highly Interactive Highway Scenarios in Germany. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 958–964. [Google Scholar]
- Zheng, O.; Abdel-Aty, M.; Yue, L.; Abdelraouf, A.; Wang, Z.; Mahmoud, N. CitySim: A Drone-Based Vehicle Trajectory Dataset for Safety Oriented Research and Digital Twins. arXiv 2022, arXiv:2208.11036. [Google Scholar]
- Wu, B.F.; Juang, J.H. Adaptive vehicle detector approach for complex environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 817–827. [Google Scholar] [CrossRef]
- El-Khoreby, M.A.; Abu-Bakar, S.; Mokji, M.M.; Omar, S.N. Vehicle detection and counting using adaptive background model based on approximate median filter and triangulation threshold techniques. Autom. Control. Comput. Sci. 2020, 54, 346–357. [Google Scholar] [CrossRef]
- He, S.; Chen, Z.; Wang, F.; Wang, M. Integrated image defogging network based on improved atmospheric scattering model and attention feature fusion. Earth Sci. Inform. 2021, 14, 2037–2048. [Google Scholar] [CrossRef]
- Lin, C.T.; Huang, S.W.; Wu, Y.Y.; Lai, S.H. GAN-based day-to-night image style transfer for nighttime vehicle detection. IEEE Trans. Intell. Transp. Syst. 2020, 22, 951–963. [Google Scholar] [CrossRef]
- Wang, Z.; Zhan, J.; Duan, C.; Guan, X.; Lu, P.; Yang, K. A review of vehicle detection techniques for intelligent vehicles. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef]
- Abdullah, M.N.; Ali, Y.H. Vehicles Detection System at Different Weather Conditions. Iraqi J. Sci. 2021, 62, 2040–2052. [Google Scholar] [CrossRef]
- Huang, S.C.; Le, T.H.; Jaw, D.W. DSNet: Joint semantic learning for object detection in inclement weather conditions. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2623–2633. [Google Scholar] [CrossRef]
- Han, X. Modified cascade RCNN based on contextual information for vehicle detection. Sens. Imaging 2021, 22, 1–19. [Google Scholar] [CrossRef]
- Arora, N.; Kumar, Y.; Karkra, R.; Kumar, M. Automatic vehicle detection system in different environment conditions using fast R-CNN. Multimed. Tools Appl. 2022, 81, 18715–18735. [Google Scholar] [CrossRef]
- Cao, J.; Song, C.; Song, S.; Peng, S.; Wang, D.; Shao, Y.; Xiao, F. Front vehicle detection algorithm for smart car based on improved SSD model. Sensors 2020, 20, 4646. [Google Scholar] [CrossRef] [PubMed]
- Hassaballah, M.; Kenk, M.A.; Muhammad, K.; Minaee, S. Vehicle detection and tracking in adverse weather using a deep learning framework. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4230–4242. [Google Scholar] [CrossRef]
- Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic Management: Multi-Scale Vehicle Detection in Varying Weather Conditions Using YOLOv4 and Spatial Pyramid Pooling Network. Electronics 2022, 11, 2748. [Google Scholar] [CrossRef]
- Chen, X.Z.; Chang, C.M.; Yu, C.W.; Chen, Y.L. A real-time vehicle detection system under various bad weather conditions based on a deep learning model without retraining. Sensors 2020, 20, 5731. [Google Scholar] [CrossRef]
- Al-Haija, Q.A.; Gharaibeh, M.; Odeh, A. Detection in Adverse Weather Conditions for Autonomous Vehicles via Deep Learning. AI 2022, 3, 303–317. [Google Scholar] [CrossRef]
- Walambe, R.; Marathe, A.; Kotecha, K.; Ghinea, G. Lightweight object detection ensemble framework for autonomous vehicles in challenging weather conditions. Comput. Intell. Neurosci. 2021, 2021, 5278820. [Google Scholar] [CrossRef]
- Rezaei, M.; Terauchi, M.; Klette, R. Robust vehicle detection and distance estimation under challenging lighting conditions. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2723–2743. [Google Scholar] [CrossRef]
- Baghdadi, S.; Aboutabit, N. Illumination correction in a comparative analysis of feature selection for rear-view vehicle detection. Int. J. Mach. Learn. Comput. 2019, 9, 712–720. [Google Scholar] [CrossRef]
- Nguyen, K.; Nguyen, P.; Bui, D.C.; Tran, M.; Vo, N.D. Analysis of the Influence of De-hazing Methods on Vehicle Detection in Aerial Images. Int. J. Adv. Comput. Sci. Appl. 2022, 13. [Google Scholar] [CrossRef]
- Chen, Y.; Li, W.; Sakaridis, C.; Dai, D.; Van Gool, L. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3339–3348. [Google Scholar]
- Sindagi, V.A.; Oza, P.; Yasarla, R.; Patel, V.M. Prior-based domain adaptive object detection for hazy and rainy conditions. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2020; pp. 763–780. [Google Scholar]
- Li, J.; Xu, Z.; Fu, L.; Zhou, X.; Yu, H. Domain adaptation from daytime to nighttime: A situation-sensitive vehicle detection and traffic flow parameter estimation framework. Transp. Res. Part C Emerg. Technol. 2021, 124, 102946. [Google Scholar] [CrossRef]
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983. [Google Scholar]
- Wen, L.; Du, D.; Cai, Z.; Lei, Z.; Chang, M.C.; Qi, H.; Lim, J.; Yang, M.H.; Lyu, S. UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking. Comput. Vis. Image Underst. 2020, 193, 102907. [Google Scholar] [CrossRef]
- Du, D.; Qi, Y.; Yu, H.; Yang, Y.; Duan, K.; Li, G.; Zhang, W.; Huang, Q.; Tian, Q. The unmanned aerial vehicle benchmark: Object detection and tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 370–386. [Google Scholar]
- Du, D.; Zhu, P.; Wen, L.; Bian, X.; Lin, H.; Hu, Q.; Peng, T.; Zheng, J.; Wang, X.; Zhang, Y.; et al. VisDrone-DET2019: The vision meets drone object detection in image challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Li, X.; Cai, Z.; Zhao, X. Oriented-YOLOv5: A Real-time Oriented Detector Based on YOLOv5. In Proceedings of the 2022 7th International Conference on Computer and Communication Systems (ICCCS), Wuhan, China, 22–25 April 2022; pp. 216–222. [Google Scholar]
- Feng, J.; Yi, C. Lightweight Detection Network for Arbitrary-Oriented Vehicles in UAV Imagery via Global Attentive Relation and Multi-Path Fusion. Drones 2022, 6, 108. [Google Scholar] [CrossRef]
- Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44. [Google Scholar] [CrossRef]
- Kuran, U.; Kuran, E.C. Parameter selection for CLAHE using multi-objective cuckoo search algorithm for image contrast enhancement. Intell. Syst. Appl. 2021, 12, 200051. [Google Scholar] [CrossRef]
- Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12993–13000. [Google Scholar]
- Yang, X.; Yan, J. On the arbitrary-oriented object detection: Classification based approaches revisited. Int. J. Comput. Vis. 2022, 130, 1340–1365. [Google Scholar] [CrossRef]
- Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468. [Google Scholar]
- Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. Iou loss for 2d/3d object detection. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec, QC, Canada, 15–18 September 2019; pp. 85–94. [Google Scholar]
- Rong, W.; Li, Z.; Zhang, W.; Sun, L. An improved CANNY edge detection algorithm. In Proceedings of the 2014 IEEE International Conference on Mechatronics and Automation, Tianjin, China, 3–6 August 2014; pp. 577–582. [Google Scholar]
- Welch, G.; Bishop, G. An Introduction to the Kalman Filter. 1995. Available online: https://www.researchgate.net/publication/200045331_An_Introduction_to_the_Kalman_Filter (accessed on 21 October 2022).
- Hsieh, M.R.; Lin, Y.L.; Hsu, W.H. Drone-based object counting by spatially regularized regional proposal network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4145–4153. [Google Scholar]
- Mundhenk, T.N.; Konjevod, G.; Sakla, W.A.; Boakye, K. A large contextual dataset for classification, detection and counting of cars with deep learning. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 785–800. [Google Scholar]
- Kouris, A.; Kyrkou, C.; Bouganis, C.S. Informed region selection for efficient uav-based object detectors: Altitude-aware vehicle detection with cycar dataset. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 51–58. [Google Scholar]
- Azimi, S.M.; Bahmanyar, R.; Henry, C.; Kurz, F. Eagle: Large-scale vehicle detection dataset in real-world scenarios using aerial imagery. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 6920–6927. [Google Scholar]
- Mueller, M.; Smith, N.; Ghanem, B. A benchmark and simulator for uav tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 445–461. [Google Scholar]
- Lyu, Y.; Vosselman, G.; Xia, G.S.; Yilmaz, A.; Yang, M.Y. UAVid: A semantic segmentation dataset for UAV imagery. ISPRS J. Photogramm. Remote Sens. 2020, 165, 108–119. [Google Scholar] [CrossRef]
- Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Vis. Commun. Image Represent. 2016, 34, 187–203. [Google Scholar] [CrossRef]
- Liu, K.; Mattyus, G. Fast multiclass vehicle detection on aerial images. IEEE Geosci. Remote. Sens. Lett. 2015, 12, 1938–1942. [Google Scholar]
- Ke, R.; Li, Z.; Kim, S.; Ash, J.; Cui, Z.; Wang, Y. Real-time bidirectional traffic flow parameter estimation from aerial videos. IEEE Trans. Intell. Transp. Syst. 2016, 18, 890–901. [Google Scholar] [CrossRef]
- Dendorfer, P.; Osep, A.; Milan, A.; Schindler, K.; Cremers, D.; Reid, I.; Roth, S.; Leal-Taixé, L. Motchallenge: A benchmark for single-camera multiple target tracking. Int. J. Comput. Vis. 2021, 129, 845–881. [Google Scholar] [CrossRef]
- Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2849–2858. [Google Scholar]
- Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649. [Google Scholar]
- Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Yuan, Z.; Luo, P.; Liu, W.; Wang, X. Bytetrack: Multi-object tracking by associating every detection box. arXiv 2021, arXiv:2110.06864. [Google Scholar]
- Cao, J.; Weng, X.; Khirodkar, R.; Pang, J.; Kitani, K. Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. arXiv 2022, arXiv:2203.14360. [Google Scholar]
- Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)?–Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250. [Google Scholar] [CrossRef]
Database | Motion Parameters | Scenarios | Road User Types | Weather | Trajectories; Sampling Rate | Locations |
---|---|---|---|---|---|---|
NGSIM [2] | Trajectory, speed, acceleration | Freeway, arterial corridor | Car, truck | Sunny | 9206; 10 Hz | 4 |
Stanford Drone [3] | Trajectory | Campus | Pedestrian, bicycle, car, cart, bus | Sunny | 10,300; 25 Hz | 8 |
highD [4] | Trajectory, speed, acceleration | Highway | Car, truck | Sunny | 110,500; 25 Hz | 4 |
DUT [5] | Trajectory, speed, yaw angle | Campus | Pedestrian, vehicles | Sunny | 1862; 23.98 Hz | 2 |
INTERACTION [1] | Trajectory, velocity, yaw angle | Intersection | Car, pedestrian, bicycle | Sunny | 18,642; 10 Hz | 4 |
inD [6] | Trajectory, speed, acceleration, yaw angle | Intersection | Pedestrian, bicycle, car, truck, bus | Sunny | 13,599; 25 Hz | 4 |
rounD [7] | Trajectory, speed, acceleration, yaw angle | Roundabout | Pedestrian, bicycle, car, truck, bus | Sunny | 13,746; 25 Hz | 3
exiD [8] | Trajectory, speed, acceleration, yaw angle | Highway | Car, van, truck | Sunny | 69,172; 25 Hz | 7 |
CitySim [9] | Trajectory, speed, yaw angle | Highway, roundabout, intersection | Vehicles | Sunny | 10,000+; 30 Hz | 12 |
Parameters | Mini | Car | Truck | Bus |
---|---|---|---|---|
Vehicle length (mm) | <4000 | <5200 | <12,000 | <18,000 |
Vehicle width (mm) | <1600 | <2000 | <2500 | <2500 |
Vehicle length–width ratio | <2.5 | <3 | <6 | <6 |
Minimum turning radius (m) | 3.5∼5.0 | 4.5∼7.5 | 4.0∼10.5 | 4.0∼11.0
Speed limit (km/h) | <50 | <70 | <60 | <60 |
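Read as decision rules, the size thresholds in the table above give a simple type classifier. The sketch below is a hypothetical helper built directly from the table, not the paper's published code; classes are ordered smallest first, and truck vs. bus is separated by length alone since they share the width and ratio limits.

```python
# Hypothetical rule-based vehicle type classifier using the length, width,
# and length-width ratio thresholds from the table (dimensions in mm).
THRESHOLDS = [  # (class, max length mm, max width mm, max length/width ratio)
    ("mini",   4000, 1600, 2.5),
    ("car",    5200, 2000, 3.0),
    ("truck", 12000, 2500, 6.0),
    ("bus",   18000, 2500, 6.0),
]

def classify_vehicle(length_mm, width_mm):
    """Return the first (smallest) class whose limits all hold, else None."""
    ratio = length_mm / width_mm
    for name, max_len, max_w, max_ratio in THRESHOLDS:
        if length_mm < max_len and width_mm < max_w and ratio < max_ratio:
            return name
    return None
```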
Parameter | Value |
---|---|
Weight | 430 g |
Maximum endurance mileage | 10 km |
Maximum hover time | 20 min |
Maximum take-off altitude | 5000 m |
Maximum wind speed | 10 m/s |
Operating temperature | From 0 °C to 40 °C
Camera resolution | 3840 × 2160, 24/25/30 fps |
Equivalent focal length | 24 mm |
Video Data | Vehicle Number | Image Number | Date | Image Size |
---|---|---|---|---|
Sunny | 28,773 | 1311 | 7:30 01/12/2021 | 1920 × 1080 |
Night | 35,905 | 961 | 17:30 01/12/2021 | 1920 × 1080 |
Rainy | 19,501 | 3366 | 13:00 10/01/2022 | 1920 × 1080 |
Snowy | 22,816 | 1495 | 13:30 22/01/2022 | 1920 × 1080 |
Video Data | Vehicle Number | Frame Number | Date | Time Length | Video Size | Frame Rate |
---|---|---|---|---|---|---|
Sunny | 893 | 30,303 | 7:30 01/12/2021 | 16 min 51 s | 1920 × 1080 | 29.97 fps |
Night | 1021 | 36,010 | 18:00 01/12/2021 | 20 min 1 s | 1920 × 1080 | 29.97 fps |
Rainy | 709 | 34,585 | 13:30 10/01/2022 | 19 min 13 s | 1920 × 1080 | 29.97 fps |
Snowy | 470 | 36,026 | 13:00 22/01/2022 | 20 min 2 s | 1920 × 1080 | 29.97 fps |
Video Data | Vehicle Number | Frame Number | Date | Time Length | Video Size | Frame Rate |
---|---|---|---|---|---|---|
TEST0 (Sunny) | 57 | 1311 | 7:30 01/12/2021 | 43 s | 1920 × 1080 | 29.97 fps |
TEST1 (Night) | 61 | 961 | 17:30 01/12/2021 | 32 s | 1920 × 1080 | 29.97 fps |
TEST2 (Rainy) | 37 | 3366 | 13:00 10/01/2022 | 112 s | 1920 × 1080 | 29.97 fps |
TEST3 (Snowy) | 33 | 1495 | 13:30 22/01/2022 | 49 s | 1920 × 1080 | 29.97 fps |
Method | Car AP | Truck AP | Bus AP | mAP |
---|---|---|---|---|
RetinaNet (OBB) | 64.27 | 30.53 | 61.02 | 51.94 |
Faster R-CNN (OBB) | 67.09 | 42.98 | 66.72 | 58.93 |
Mask R-CNN (OBB) | 68.50 | 44.31 | 66.91 | 59.91 |
RoITransformer (OBB) | 71.28 | 49.02 | 71.15 | 63.82 |
YOLOv5-OBB with basic weight | 82.20 | 62.21 | 81.52 | 75.31 |
YOLOv5-OBB with enhanced weight | 85.12 | 69.07 | 84.10 | 80.43 |
Dual weight YOLOv5-OBB (Ours) | 89.10 | 72.07 | 87.55 | 82.91 |
Method | MOTA↑ | IDF1↑ | FP↓ | FN↓ | IDSW↓ | MT↑ |
---|---|---|---|---|---|---|
SORT | 85.62 | 91.87 | 2620 | 1494 | 24 | 48
DeepSORT | 85.16 | 80.30 | 783 | 3302 | 186 | 43 |
ByteTrack | 84.12 | 91.97 | 4471 | 80 | 19 | 56 |
OC-SORT | 88.36 | 93.25 | 1815 | 1509 | 25 | 48 |
OC-SORT+ByteTrack | 80.33 | 95.39 | 5527 | 112 | 20 | 55 |
SORT+BF | 94.16 | 96.65 | 232 | 1445 | 3 | 48 |
ByteTrack+BF | 99.13 | 99.16 | 248 | 0 | 1 | 57 |
SORT+MFE+BF | 94.95 | 97.04 | 2 | 1448 | 3 | 48 |
ByteTrack+MFE+BF | 100.00 | 99.59 | 0 | 0 | 1 | 57 |
SORT++ (Ours) | 100.00 (+14.38) | 99.59 (+7.72) | 0 | 0 | 0 | 57 |
Method | MOTA↑ | IDF1↑ | FP↓ | FN↓ | IDSW↓ | MT↑ |
---|---|---|---|---|---|---|
SORT | 86.20 | 92.46 | 4231 | 698 | 26 | 58 |
DeepSORT | 87.96 | 92.94 | 2577 | 1704 | 42 | 55 |
ByteTrack | 79.54 | 89.18 | 7249 | 56 | 43 | 61 |
OC-SORT | 87.12 | 93.27 | 3891 | 713 | 22 | 58 |
OC-SORT+ByteTrack | 84.08 | 91.41 | 5621 | 61 | 35 | 61 |
SORT+BF | 96.87 | 98.23 | 205 | 917 | 3 | 57 |
ByteTrack+BF | 98.99 | 99.27 | 352 | 0 | 1 | 61 |
SORT+MFE+BF | 97.32 | 98.46 | 30 | 932 | 1 | 57 |
ByteTrack+MFE+BF | 99.82 | 99.91 | 60 | 4 | 0 | 61 |
SORT++ (Ours) | 99.99 (+13.79) | 99.99 (+7.54) | 0 | 4 | 0 | 61 |
Method | MOTA↑ | IDF1↑ | FP↓ | FN↓ | IDSW↓ | MT↑ |
---|---|---|---|---|---|---|
SORT | 90.22 | 94.98 | 1392 | 498 | 17 | 35 |
DeepSORT | 81.51 | 68.20 | 750 | 2693 | 163 | 26 |
ByteTrack | 83.36 | 91.81 | 3108 | 109 | 29 | 37 |
OC-SORT | 91.30 | 95.48 | 1174 | 508 | 15 | 35 |
OC-SORT+ByteTrack | 85.51 | 92.74 | 2689 | 109 | 29 | 37 |
SORT+BF | 95.92 | 97.95 | 273 | 522 | 0 | 35 |
ByteTrack+BF | 94.84 | 97.48 | 1007 | 0 | 0 | 37 |
SORT+MFE+BF | 96.57 | 98.27 | 139 | 530 | 0 | 35 |
ByteTrack+MFE+BF | 95.65 | 97.87 | 849 | 0 | 0 | 37 |
SORT++ (Ours) | 99.97 (+9.75) | 99.98 (+5.01) | 0 | 6 | 0 | 37 |
Method | MOTA↑ | IDF1↑ | FP↓ | FN↓ | IDSW↓ | MT↑ |
---|---|---|---|---|---|---|
SORT | 91.93 | 96.06 | 1473 | 369 | 0 | 30 |
DeepSORT | 80.59 | 87.85 | 821 | 3600 | 8 | 26 |
ByteTrack | 89.44 | 94.72 | 2355 | 45 | 9 | 33 |
OC-SORT | 93.55 | 96.83 | 1100 | 372 | 0 | 30 |
OC-SORT+ByteTrack | 93.02 | 96.33 | 1535 | 50 | 8 | 33 |
SORT+BF | 95.97 | 97.96 | 213 | 706 | 0 | 29 |
ByteTrack+BF | 98.22 | 99.12 | 407 | 0 | 0 | 33 |
SORT+MFE+BF | 96.84 | 98.39 | 9 | 713 | 0 | 29 |
ByteTrack+MFE+BF | 99.36 | 99.68 | 136 | 11 | 0 | 33 |
SORT++ (Ours) | 99.99 (+8.06) | 99.99 (+3.93) | 0 | 3 | 0 | 33 |
Data Type | MOTA | IDF1 | FP | FN | IDSW | MT | GT | FPS |
---|---|---|---|---|---|---|---|---|
Sunny Data | 100.00 | 99.94 | 5 | 11 | 4 | 893 | 893 | 8.57 |
Night Data | 99.97 | 99.61 | 139 | 128 | 5 | 1021 | 1021 | 9.21 |
Rainy Data | 99.99 | 99.86 | 14 | 25 | 0 | 708 | 709 | 9.73 |
Snowy Data | 99.95 | 99.89 | 1 | 211 | 2 | 469 | 470 | 10.02 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, X.; Wu, J. Extracting High-Precision Vehicle Motion Data from Unmanned Aerial Vehicle Video Captured under Various Weather Conditions. Remote Sens. 2022, 14, 5513. https://doi.org/10.3390/rs14215513