Implementation of an Edge-Computing Vision System on Reduced-Board Computers Embedded in UAVs for Intelligent Traffic Management
Abstract
1. Introduction
1.1. Context
1.1.1. Unmanned Aerial Vehicles
1.1.2. Object Recognition with UAVs
1.1.3. Datasets
- Dataset size: It is essential for the dataset to contain many images with a wide variety of labeled objects within these images;
- Diversity of locations and time frames: The images used to train the model should be taken at different locations and under various visibility conditions. This helps prevent overfitting, enabling the model to be effective in a variety of contexts;
- Recognition of a wide range of objects: When labeling the images, we should not exclude objects related to those we want to predict. For example, if we are labeling all cars, we should not exclude trucks from the dataset. We can group all objects into a category like “vehicles” or create a category for each type of object.
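The grouping guideline above can be sketched as a small label-remapping step applied to annotation files. The class IDs and the mapping below are hypothetical, for illustration only; YOLO-style label lines (`class cx cy w h`) are assumed:

```python
# Hypothetical sketch: merging related classes (e.g., "car", "truck") into a
# single "vehicle" category in YOLO-format label files. The IDs and mapping
# are illustrative, not taken from the paper's datasets.

# Maps original class IDs to merged category IDs (0 = vehicle, 1 = bike).
CLASS_MERGE = {0: 0,  # car     -> vehicle
               1: 0,  # truck   -> vehicle
               2: 1}  # bicycle -> bike

def merge_labels(lines):
    """Rewrite YOLO label lines ('cls cx cy w h') with merged class IDs."""
    merged = []
    for line in lines:
        parts = line.split()
        cls = CLASS_MERGE[int(parts[0])]
        merged.append(" ".join([str(cls)] + parts[1:]))
    return merged

print(merge_labels(["1 0.5 0.5 0.2 0.1", "2 0.3 0.3 0.1 0.1"]))
# -> ['0 0.5 0.5 0.2 0.1', '1 0.3 0.3 0.1 0.1']
```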
1.1.4. Neural Networks for Object Detection
- YOLOv5: This model is the fifth version in the YOLO (You Only Look Once) series and has been widely used for real-time object detection. One of the main advantages of YOLOv5 is its speed, making it ideal for real-time applications. For example, it has been used for real-time face mask detection during the COVID-19 pandemic, demonstrating its utility in real-world situations where speed is essential [25,26];
- EfficientDet: This model is known for its balance between efficiency and performance in object detection [29];
- DETR (DEtection TRansformer): This model has revolutionized the field of object detection as the first end-to-end object detector. Although its computational cost is high, it has proven to be very effective in real-time object detection [30].
1.1.5. Cloud Computing and Edge Computing for Traffic Management
- Low latency: In traffic management, latency is critical. Drones need to make real-time decisions to avoid collisions and maintain efficient traffic flow. Edge computing allows drones to process data locally, significantly reducing latency compared to sending data to a distant cloud for processing;
- Enhanced security: By processing data locally on the UAVs themselves, dependence on internet connectivity is reduced, decreasing exposure to potential network interruptions or cyberattacks. This increases security in air traffic management;
- Distributed scalability: Using multiple drones equipped with edge computing allows for distributed scalability. This means that more drones can be added to address areas with dense traffic or special events without overburdening a central infrastructure;
- Data privacy: Air traffic management may involve the collection and transmission of sensitive data. Edge processing ensures that the data remain on the drones, improving privacy and complying with data privacy regulations;
- Energy efficiency: Transmitting data to the cloud and waiting for results can consume a significant amount of energy. Local processing on the drones is more energy-efficient, prolonging battery life and drone autonomy.
1.2. Research Gap
1.3. Aim of the Study
2. Materials and Methods
2.1. Study Design
- Hardware, software, and dataset selection;
- Dataset construction and cleaning;
- Experimentation:
  - (a) Dataset preprocessing for training optimization;
  - (b) Training with preprocessed/original datasets;
  - (c) Validation of deployment results;
  - (d) Measurement of energy consumption during deployment.
2.2. Hardware, Software, and Datasets
2.2.1. Reduced-Board Computers
2.2.2. Software
- VSCode was used as the integrated development environment (IDE) for code development;
- Anaconda was used as the package and environment manager for Python;
- Python was the programming language used to implement the algorithms;
- TensorFlow, an open-source framework for machine learning and neural networks, was used to implement and train models;
- PyTorch, a machine learning library, was also used in the implementation of algorithms;
- RoboFlow was used for dataset image management and preprocessing.
2.2.3. High-Performance Computing
2.2.4. Datasets
2.3. Preparation of Objects and Materials
2.3.1. Dataset Generation
- Random distribution: This strategy involves dividing the dataset randomly, without considering the relationship between frames;
- Frame reservation by video: In this approach, 20% of the frames from the same video were set aside for testing and validation, ensuring that temporal coherence in the training data is maintained;
- Selection of final frames: This strategy involves reserving the final 20% of frames from each video, as these may contain more challenging situations for computer vision models;
- Frame selection per second: This strategy, also known as subsampling, involves retaining only 1 frame out of every 24, equivalent to one frame per second, and then using random distribution for data partitioning.
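As an illustration, the last strategy (subsampling followed by a random split) can be sketched as follows. The frame identifiers, the 80/20 split fraction, and the seed are illustrative assumptions, not the paper's exact implementation:

```python
import random

def subsample_and_split(frames, fps=24, test_frac=0.2, seed=42):
    """Keep 1 frame per second (every `fps`-th frame), then split randomly.

    `frames` is a list of frame identifiers; returns (train, test) lists.
    The split fraction and seed are illustrative choices.
    """
    kept = frames[::fps]          # subsampling: 1 frame out of every `fps`
    rng = random.Random(seed)
    shuffled = kept[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

frames = [f"video1_frame{i:05d}" for i in range(240)]  # 10 s of 24 fps video
train, test = subsample_and_split(frames)
print(len(train), len(test))  # 8 train and 2 test frames out of 10 kept
```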
- 90° clockwise and counterclockwise rotation of the image;
- 45° clockwise and counterclockwise rotation of the image;
- 90° clockwise and counterclockwise rotation of objects within the image.
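When images are rotated during augmentation, the bounding-box labels must be rotated with them. Below is a minimal sketch for the 90° cases, assuming YOLO-style normalized (cx, cy, w, h) boxes; the 45° case additionally requires recomputing an axis-aligned box and is omitted here:

```python
def rotate_bbox_90cw(cx, cy, w, h):
    """Rotate a normalized YOLO bbox (cx, cy, w, h) with the image 90° clockwise.

    For a 90° clockwise image rotation, a normalized point (x, y) maps to
    (1 - y, x); box width and height swap.
    """
    return (1.0 - cy, cx, h, w)

def rotate_bbox_90ccw(cx, cy, w, h):
    """Counterclockwise: (x, y) -> (y, 1 - x); width and height swap."""
    return (cy, 1.0 - cx, h, w)

print(rotate_bbox_90cw(0.25, 0.5, 0.2, 0.1))  # -> (0.5, 0.25, 0.1, 0.2)
```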
2.3.2. Model Training
- TensorFlow: An open-source framework developed by the Google Brain Team, TensorFlow is widely used in various fields that require intensive computation operations and has become a standard in the machine learning and artificial intelligence field [42]. TensorFlow was used to implement and train EfficientDet-Lite architectures (see Figure 2), which are object detection models known for their efficiency and performance in terms of speed and accuracy [25]. These models were specifically selected for their compatibility with the chosen low-power computers, including Raspberry Pi 3B+, Raspberry Pi 4, Google Coral Dev Board, and Jetson Nano;
- PyTorch: Another open-source machine learning framework primarily developed by Facebook’s artificial intelligence research group, PyTorch is known for its user-friendliness and flexibility, allowing for more intuitive development and easier debugging of machine learning models. PyTorch was used to train models with the YOLO and DETR architectures. YOLO is a popular real-time object detection algorithm known for its speed and accuracy. Unlike other object detection algorithms, which analyze an image in multiple regions and perform object detection in each region separately, YOLO conducts object detection in a single pass, making it particularly fast and suitable for real-time applications [43]. On the other hand, DETR is an architecture developed by Facebook AI that allows for using transformers to train object detection models [30].
2.3.3. Model Deployment on Reduced-Board Computers
2.4. Experiments
2.4.1. Training Time
- Large images (Full HD, 2K, 4K, or larger) containing objects or “targets” that are small relative to the image size;
- Images taken at short intervals;
- Few objects within the image, or objects that are not evenly distributed across the image;
- Static objects of interest in the image.
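One common way to exploit these conditions is to split each large image into smaller tiles before training, so that tiny objects occupy a larger fraction of every training sample. The sketch below illustrates only that general idea, not the exact algorithm of the cited optimization; the tile size and overlap are assumed values:

```python
def tile_image_coords(width, height, tile=640, overlap=64):
    """Return (left, top, right, bottom) crop boxes covering a large image.

    Illustrative tiling sketch for training on large images with tiny
    objects; tile size and overlap are assumptions, not the paper's values.
    """
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width), min(top + tile, height)))
    return boxes

# A Full HD frame is covered by 2 rows x 4 columns of overlapping tiles.
print(len(tile_image_coords(1920, 1080)))  # -> 8
```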
2.4.2. Model Metrics
2.4.3. Deployment Metrics
2.4.4. Power Consumption
3. Results
3.1. Training Times
3.2. Model Metrics
3.3. Deployment Metrics
3.4. Energy Consumption
4. Discussion
- Raspberry Pi: This is one of the least power-consuming solutions as well as the lightest hardware; it can run neural networks of various types, although its FPS is among the slowest. The Raspberry Pi would be the best choice for PyTorch-type networks, such as YOLO (YoloV5n, YoloV5s, YoloV8n, and YoloV8s), where power consumption and weight are critical, for example, when employed on UAVs. If, in addition, we can work with small images such as those for classification (rather than recognition), the Raspberry Pi is the best choice;
- Jetson Nano: This is the most powerful option for processing PyTorch-type networks, although the Raspberry Pi outperforms it in power consumption and weight. Its FPS is considerably higher, which makes it a better choice when (1) processing power is a key factor, such as for object recognition, which must locate objects within an image rather than merely classify it, as it not only processes faster but also performs better with larger images; and when (2) power consumption and weight are not critical factors, such as for images from static cameras (gantries or surveillance cameras);
- Google Coral: This hardware is among the most powerful in processing capacity, with a power consumption slightly higher than the other boards and a weight between that of the two previous boards. However, this board has an important limitation: it has no GPU but a TPU (tensor processing unit), which makes it very inefficient for PyTorch-type networks but extremely efficient for TensorFlow networks such as EfficientDetLite0 or EfficientDetLite1. Its FPS can be up to 500% higher than that of its competitors, which makes it the most suitable board when processing time is critical, and its weight makes it a good choice for onboard UAV use, although its power consumption greatly limits its working autonomy.
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
AI | Artificial Intelligence
CNN | Convolutional Neural Networks
COVID-19 | COronaVIrus Disease 2019
CUDA | Compute Unified Device Architecture
CVAT | Computer Vision Annotation Tool
DDR | Double Data Rate
DETR | DEtection TRansformer
FP | False Positive
FPS | Frames Per Second
GPU | Graphics Processing Unit
HD | High Definition
IDE | Integrated Development Environment
IoD | Internet of Drones
IoU | Intersection over Union
mAP | mean Average Precision
microSD | micro Secure Digital
PASCAL | Pattern Analysis, Statistical modeling, and Computational Learning
PC | Personal Computer
RAM | Random Access Memory
TP | True Positive
TPU | Tensor Processing Unit
UAV | Unmanned Aerial Vehicle
VOC | Visual Object Classes
XML | Extensible Markup Language
YOLO | You Only Look Once
References
- Pettersson, I.; Karlsson, I.C.M. Setting the stage for autonomous cars: A pilot study of future autonomous driving experiences. IET Intell. Transp. Syst. 2015, 9, 694–701. [Google Scholar] [CrossRef]
- Yildiz, M.; Bilgiç, B.; Kale, U.; Rohács, D. Experimental Investigation of Communication Performance of Drones Used for Autonomous Car Track Tests. Sustainability 2021, 13, 5602. [Google Scholar] [CrossRef]
- Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep Learning for Generic Object Detection: A Survey. Int. J. Comput. Vis. 2020, 128, 261–318. [Google Scholar] [CrossRef]
- Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.S.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28. [Google Scholar] [CrossRef]
- Ahmed, F.; Jenihhin, M. A Survey on UAV Computing Platforms: A Hardware Reliability Perspective. Sensors 2022, 22, 6286. [Google Scholar] [CrossRef] [PubMed]
- Johnston, R.; Hodgkinson, D. Aviation Law and Drones: Unmanned Aircraft and the Future of Aviation; Routledge: London, UK, 2018. [Google Scholar]
- Merkert, R.; Bushell, J. Managing the drone revolution: A systematic literature review into the current use of airborne drones and future strategic directions for their effective control. J. Air Transp. Manag. 2020, 89, 101929. [Google Scholar] [CrossRef] [PubMed]
- Milic, A.; Ranđelović, A.; Radovanović, M. Use of Drones in Operations in the Urban Environment. Available online: https://www.researchgate.net/profile/Marko-Radovanovic-2/publication/336589680_Use_of_drones_in_operations_in_the_urban_environment/links/60d2751845851566d5839b29/Use-of-drones-in-operations-in-the-urban-environment.pdf (accessed on 12 September 2023).
- Vaigandla, K.K.; Thatipamula, S.; Karne, R.K. Investigation on Unmanned Aerial Vehicle (UAV): An Overview. IRO J. Sustain. Wirel. Syst. 2022, 4, 130–148. [Google Scholar] [CrossRef]
- Plan Estratégico para el Desarrollo del Sector Civil de los Drones en España 2018–2021|Ministerio de Transportes, Movilidad y Agenda Urbana. Available online: https://www.mitma.gob.es/el-ministerio/planes-estrategicos/drones-espania-2018-2021 (accessed on 11 June 2023).
- Lee, H.S.; Shin, B.S.; Thomasson, J.A.; Wang, T.; Zhang, Z.; Han, X. Development of Multiple UAV Collaborative Driving Systems for Improving Field Phenotyping. Sensors 2022, 22, 1423. [Google Scholar] [CrossRef] [PubMed]
- Alsharif, H.; Khan, M.A.; Michailidis, E.T.; Vouyioukas, D. A Review on Software-Based and Hardware-Based Authentication Mechanisms for the Internet of Drones. Drones 2022, 6, 41. [Google Scholar] [CrossRef]
- Wang, X.; Cheng, P.; Liu, X.; Uzochukwu, B. Fast and Accurate, Convolutional Neural Network Based Approach for Object Detection from UAV. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 3171–3175. [Google Scholar] [CrossRef]
- Kyrkou, C.; Plastiras, G.; Theocharides, T.; Venieris, S.I.; Bouganis, C.-S. DroNet: Efficient Convolutional Neural Network Detector for Real-Time UAV Applications. In Proceedings of the 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 19–23 March 2018. [Google Scholar]
- Sánchez-Soriano, J.; De-Las-Heras, G.; Puertas, E.; Fernández-Andrés, J. Sistema Avanzado de Ayuda a la Conducción (ADAS) en rotondas/glorietas usando imágenes aéreas y técnicas de Inteligencia Artificial para la mejora de la seguridad vial. Logos Guard. Civ. Rev. Cient. Cent. Univ. Guard. Civ. 2023, 1, 241–270. Available online: https://revistacugc.es/article/view/5708 (accessed on 28 June 2023).
- Cuenca, L.G.; Sanchez-Soriano, J.; Puertas, E.; Andrés, J.F.; Aliane, N. Machine Learning Techniques for Undertaking Roundabouts in Autonomous Driving. Sensors 2019, 19, 2386. [Google Scholar] [CrossRef] [PubMed]
- Tang, H.; Post, J.; Kourtellis, A.; Porter, B.; Zhang, Y. Comparison of Object Detection Algorithms Using Video and Thermal Images Collected from a UAS Platform: An Application of Drones in Traffic Management. arXiv 2021, arXiv:2109.13185. [Google Scholar]
- Tobias, L.; Ducournau, A.; Rousseau, F.; Mercier, G.; Fablet, R. Convolutional Neural Networks for object recognition on mobile devices: A case study. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 3530–3535. [Google Scholar] [CrossRef]
- Akram, R.N.; Markantonakis, K.; Mayes, K.; Habachi, O.; Sauveron, D.; Steyven, A.; Chaumette, S. Security, privacy and safety evaluation of dynamic and static fleets of drones. In Proceedings of the AIAA/IEEE Digital Avionics Systems Conference, St. Petersburg, FL, USA, 17–21 September 2017. [Google Scholar] [CrossRef]
- Peng, H.; Razi, A.; Afghah, F.; Ashdown, J. A Unified Framework for Joint Mobility Prediction and Object Profiling of Drones in UAV Networks. J. Commun. Netw. 2018, 20, 434–442. [Google Scholar] [CrossRef]
- Bock, J.; Krajewski, R.; Moers, T.; Runde, S.; Vater, L.; Eckstein, L. The inD Dataset: A Drone Dataset of Naturalistic Road User Trajectories at German Intersections. In Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1929–1934. [Google Scholar] [CrossRef]
- Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, Maui, HI, USA, 4–7 November 2018; pp. 2118–2125. [Google Scholar] [CrossRef]
- Ghisler, S.; Rosende, S.B.; Fernández-Andrés, J.; Sánchez-Soriano, J. Dataset: Traffic Images Captured from UAVs for Use in Training Machine Vision Algorithms for Traffic Management. Data 2022, 7, 53. [Google Scholar] [CrossRef]
- Puertas, E.; De-Las-Heras, G.; Fernández-Andrés, J.; Sánchez-Soriano, J. Dataset: Roundabout Aerial Images for Vehicle Detection. Data 2022, 7, 47. [Google Scholar] [CrossRef]
- Liu, R.; Ren, Z. Application of Yolo on Mask Detection Task. arXiv 2021, arXiv:2102.05402. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; Kwon, Y.; Michael, K.; Jain, M. Ultralytics/yolov5: v7.0—YOLOv5 SOTA Realtime Instance Segmentation. Zenodo 2022. [Google Scholar]
- Ouyang, H. DEYO: DETR with YOLO for Step-by-Step Object Detection. arXiv 2022, arXiv:2211.06588. [Google Scholar]
- Reis, D.; Kupec, J.; Hong, J.; Daoudi, A. Real-Time Flying Object Detection with YOLOv8. arXiv 2023, arXiv:2305.09972. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers; Springer: Cham, Switzerland, 2020. [Google Scholar]
- Yang, P.; Xiong, N.; Ren, J. Data Security and Privacy Protection for Cloud Storage: A Survey. IEEE Access 2020, 8, 131723–131740. [Google Scholar] [CrossRef]
- Yu, W.; Liang, F.; He, X.; Hatcher, W.G.; Lu, C.; Lin, J.; Yang, X. A Survey on the Edge Computing for the Internet of Things. IEEE Access 2018, 6, 6900–6919. [Google Scholar] [CrossRef]
- Hua, H.; Li, Y.; Wang, T.; Dong, N.; Li, W.; Cao, J. Edge Computing with Artificial Intelligence: A Machine Learning Perspective. ACM Comput. Surv. 2023, 55, 1–35. [Google Scholar] [CrossRef]
- Singh, S. Optimize cloud computations using edge computing. In Proceedings of the 2017 International Conference on Big Data, IoT and Data Science (BID), Pune, India, 20–22 December 2017; pp. 49–53. [Google Scholar] [CrossRef]
- Kristiani, E.; Yang, C.T.; Huang, C.Y.; Wang, Y.T.; Ko, P.C. The Implementation of a Cloud-Edge Computing Architecture Using OpenStack and Kubernetes for Air Quality Monitoring Application. Mob. Netw. Appl. 2021, 26, 1070–1092. [Google Scholar] [CrossRef]
- Deng, W.; Lei, H.; Zhou, X. Traffic state estimation and uncertainty quantification based on heterogeneous data sources: A three detector approach. Transp. Res. Part B Methodol. 2013, 57, 132–157. [Google Scholar] [CrossRef]
- Zhou, Y.; Cheng, N.; Lu, N.; Shen, X.S. Multi-UAV-Aided Networks: Aerial-Ground Cooperative Vehicular Networking Architecture. IEEE Veh. Technol. Mag. 2015, 10, 36–44. [Google Scholar] [CrossRef]
- CVAT Open Data Annotation Platform. Available online: https://www.cvat.ai (accessed on 12 October 2023).
- Roboflow. Available online: https://roboflow.com/ (accessed on 5 October 2023).
- Bloice, M.D.; Stocker, C.; Holzinger, A. Augmentor: An Image Augmentation Library for Machine Learning. arXiv 2017, arXiv:1708.04680. [Google Scholar] [CrossRef]
- Perez, L.; Wang, J. The Effectiveness of Data Augmentation in Image Classification Using Deep Learning. arXiv 2017, arXiv:1712.04621. [Google Scholar]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Zheng, X. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217. [Google Scholar] [CrossRef]
- Rosende, S.B.; Fernández-Andrés, J.; Sánchez-Soriano, J. Optimization Algorithm to Reduce Training Time for Deep Learning Computer Vision Algorithms Using Large Image Datasets with Tiny Objects. IEEE Access 2023, 11, 104593–104605. [Google Scholar] [CrossRef]
- Hui, J. mAP (mean Average Precision) for Object Detection. Available online: https://jonathan-hui.medium.com/map-mean-average-precision-for-object-detection-45c121a31173 (accessed on 12 September 2023).
- Mariano, V.Y.; Min, J.; Park, J.-H.; Kasturi, R.; Mihalcik, D.; Li, H.; Doermann, D.; Drayer, T. Performance evaluation of object detection algorithms. In Proceedings of the 2002 International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; pp. 965–969. [Google Scholar] [CrossRef]
- Padilla, R.; Netto, S.L.; da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020; pp. 237–242. [Google Scholar] [CrossRef]
Board | RAM | CPU | GPU | Weight
---|---|---|---|---
Raspberry Pi 3B+ | 1 GB DDR2 | Quad-core 64-bit @ 1.4 GHz | VideoCore IV 400 MHz | 107 g
Raspberry Pi 4 | 4 GB DDR4 | Quad-core 64-bit @ 1.8 GHz | VideoCore VI | 107 g
Jetson Nano | 4 GB DDR4 | Quad-core MPCore processor | 128 NVIDIA CUDA cores | 243 g
Google Coral | 1 GB DDR4 | Quad Cortex-A53, Cortex-M4F | Integrated GC7000 Lite GPU + Edge TPU coprocessor | 161 g
Dataset | Cars | Bikes | Total |
---|---|---|---|
Traffic Images Captured from UAVs for Use in Training Machine Vision Algorithms | 137,602 | 17,726 | 155,328 |
Roundabout Aerial Images for Vehicle Detection | 236,850 | 4899 | 241,749 |
Total | 374,452 | 22,625 | 397,077 |
Model | Epochs | Time without Optimization | Time with Optimization | Optimized Time (% of Original)
---|---|---|---|---|
YoloV5n | 20 | 4 h 44 m 40 s | 47 m 35 s | 16.72% |
YoloV5s | 20 | 7 h 18 m 50 s | 1 h 5 m 25 s | 14.91% |
YoloV8n | 20 | 5 h 3 m 20 s | 1 h 17 m 20 s | 25.49% |
YoloV8s | 20 | 7 h 45 m 45 s | 1 h 37 m 45 s | 20.99% |
DETR | 20 | 19 h 45 m | 23 h 35 m | 119.41% |
EfficientDetLite0 | 20 | 2 h 7 m 35 s | 2 h 44 m | 128.54% |
EfficientDetLite1 | 20 | 2 h 41 m 45 s | 3 h 48 m 35 s | 141.32% |
Model | Precision | Recall | mAP50 | mAP50-95 |
---|---|---|---|---|
YoloV5n | 72.9% | 19.4% | 16.2% | 3.3% |
YoloV5s | 46.3% | 31.4% | 26.9% | 6.5% |
YoloV8n | 83.3% | 72.6% | 80% | 44.9% |
YoloV8s | 84.8% | 70.1% | 77.4% | 44.0% |
DETR | - | 28.6% | 55.5% | 21.2% |
EfficientDetLite0 | - | 19.3% | 27.4% | 7.6% |
EfficientDetLite1 | - | 23.5% | 36.8% | 10.5% |
Model | Cluster UEM (FPS) | Raspberry Pi 3B+ (FPS) | Raspberry Pi 4 (FPS) | Jetson Nano (FPS) | Google Coral (FPS) |
---|---|---|---|---|---|
YoloV5n | 130.44 | 0.46 | 1.3 | 14.7 | - |
YoloV5s | 114.72 | 0.19 | 0.73 | 4.8 | - |
YoloV8n | 75.08 | 0.27 | 0.76 | 6.2 | - |
YoloV8s | 72.24 | 0.09 | 0.44 | 3.3 | - |
DETR | 12.26 | 0.01 | 0.05 | 0.03 | - |
EfficientDetLite0 | 9.08 | 1.14 | 2.92 | 2.04 | 6.7 |
EfficientDetLite1 | 4.7 | 0.58 | 1.63 | 1.14 | 5.4 |
Reduced-Board Computer | Idle Voltage | Idle Current | Idle Power | Execution Voltage | Execution Current | Execution Power
---|---|---|---|---|---|---
Raspberry Pi 3B+ | 5.2 V | 0.45 A | 2.34 W | 5.2 V | 0.79 A | 4.1 W |
Raspberry Pi 4 | 5.4 V | 0.35 A | 1.89 W | 5.4 V | 0.66 A | 3.56 W |
Jetson Nano | 5.4 V | 0.78 A | 4.2 W | 5.4 V | 1.9 A | 10.2 W |
Google Coral | 5.2 V | 0.95 A | 4.94 W | 5.2 V | 1.2 A | 6.24 W |
Model | Raspberry Pi 3B+ (J/frame) | Raspberry Pi 4 (J/frame) | Jetson Nano (J/frame) | Google Coral (J/frame)
---|---|---|---|---|
YoloV5n | 8.93 | 2.74 | 0.70 | - |
YoloV5s | 21.62 | 4.88 | 2.14 | - |
YoloV8n | 15.21 | 4.69 | 1.65 | - |
YoloV8s | 45.64 | 8.10 | 3.11 | - |
DETR | 410.80 | 71.28 | 342.00 | - |
EfficientDetLite0 | 3.60 | 1.22 | 5.03 | 0.93 |
EfficientDetLite1 | 7.08 | 2.19 | 9.00 | 1.16 |
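As a cross-check, the per-frame energy figures above are consistent with dividing each board's measured execution power by its throughput in FPS from the deployment table; the small discrepancies come from rounding. A hedged sketch of that calculation, using the YoloV5n row as an example:

```python
# Energy per frame (J) = execution power (W) / throughput (frames/s).
# Values below are copied from the power-consumption and deployment tables.

EXEC_POWER_W = {"Raspberry Pi 3B+": 4.1, "Raspberry Pi 4": 3.56,
                "Jetson Nano": 10.2, "Google Coral": 6.24}

FPS_YOLOV5N = {"Raspberry Pi 3B+": 0.46, "Raspberry Pi 4": 1.3,
               "Jetson Nano": 14.7}

def joules_per_frame(board, fps_table):
    """Energy spent per processed frame on `board` for a given model."""
    return EXEC_POWER_W[board] / fps_table[board]

for board in FPS_YOLOV5N:
    print(f"{board}: {joules_per_frame(board, FPS_YOLOV5N):.2f} J/frame")
```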
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Bemposta Rosende, S.; Ghisler, S.; Fernández-Andrés, J.; Sánchez-Soriano, J. Implementation of an Edge-Computing Vision System on Reduced-Board Computers Embedded in UAVs for Intelligent Traffic Management. Drones 2023, 7, 682. https://doi.org/10.3390/drones7110682