Effective Multi-Frame Optical Detection Algorithm for GEO Space Objects
Abstract
Featured Application
1. Introduction
- (1) To the best of our knowledge, this is the first effort to introduce the deep model PP-YOLOv2 into the field of space object detection. We evaluate several representative models and conduct extensive comparative experiments, which is expected to facilitate the development and benchmarking of object detection algorithms in satellite images.
- (2) To exploit the motion characteristic that space objects follow linear trajectories, an effective candidate filtration and supplement (CFS) method based on a straight-line searching strategy is designed to further increase accuracy. Experimental results demonstrate that the proposed algorithm reaches a competitive F1-score of 93.47%.
- (3) To verify the efficiency of the proposed algorithm, we transplant the proposed pipeline to the NVIDIA Jetson Nano embedded platform. Measured in frames per second (FPS), the proposed method achieves an inference speed of 1.42 FPS with a ResNet50 [11] backbone rather than a typical lightweight backbone network, which paves the way for further optimization to meet the demands of future real-time image processing.
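The straight-line searching idea behind the CFS step in contribution (2) can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: the function name, the pixel tolerance, the inlier threshold, and the five-frame sequence layout are all hypothetical.

```python
# Hypothetical sketch of a candidate filtration and supplement (CFS) pass.
# Input: cands[t] = list of (x, y) candidate detections in frame t of a
# short frame sequence. Assumption: a true GEO object appears near a
# straight line across frames, roughly equally spaced in time, so we
# search lines anchored on candidate pairs from the first and last frames.
import math

def cfs(cands, tol=3.0, min_inliers=3):
    n = len(cands)
    best = None  # inlier set {frame index: (x, y)} of the best-supported line
    for p0 in cands[0]:
        for p1 in cands[n - 1]:
            inliers = {0: p0, n - 1: p1}
            for t in range(1, n - 1):
                # Expected position on the line by linear interpolation in time.
                a = t / (n - 1)
                ex = p0[0] + a * (p1[0] - p0[0])
                ey = p0[1] + a * (p1[1] - p0[1])
                # Keep the nearest candidate within the pixel tolerance, if any.
                near = sorted((math.hypot(x - ex, y - ey), (x, y))
                              for x, y in cands[t])
                if near and near[0][0] <= tol:
                    inliers[t] = near[0][1]
            if len(inliers) >= min_inliers and (best is None
                                                or len(inliers) > len(best)):
                best = inliers
    if best is None:
        return []
    # Supplement: fill frames with no inlier by interpolating along the line.
    p0, p1 = best[0], best[n - 1]
    track = []
    for t in range(n):
        if t in best:
            track.append(best[t])
        else:
            a = t / (n - 1)
            track.append((p0[0] + a * (p1[0] - p0[0]),
                          p0[1] + a * (p1[1] - p0[1])))
    return track
```

Candidates that cannot be placed on a well-supported line are filtered out as false detections, and frames where the object was missed are supplemented by interpolating along the recovered line.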
2. Related Work and Materials
2.1. Deep Models for Object Detection
2.2. Dataset
3. Methodology
3.1. Data Preprocess: Label Transform
3.2. Candidate Extraction: PP-YOLOv2
3.3. Candidate Filtration and Supplement: CFS
4. Experiment
4.1. Experiment Platform and Parameter Setting
4.2. Evaluation Metric
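The F1(%) columns reported in the tables below follow the standard F1-score, the harmonic mean of precision and recall. The paper's exact rule for matching predicted objects to ground truth (which yields the TP/FP/FN counts) is not reproduced in this outline; the sketch below shows only the standard definition.

```python
# Standard F1-score from true-positive, false-positive, and
# false-negative counts. How predictions are matched to ground-truth
# objects to obtain these counts is defined by the evaluation protocol
# and is not reproduced here.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 90 matched objects, 10 spurious detections, 10 missed objects:
print(f"{f1_score(90, 10, 10):.2%}")  # 90.00%
```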
4.3. Comparison Results
4.4. Model Deployment
4.5. Further Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Fitzmaurice, J.; Bédard, D.; Lee, C.H.; Seitzer, P. Detection and Correlation of Geosynchronous Objects in NASA’s Wide-Field Infrared Survey Explorer Images. Acta Astronaut. 2021, 183, 176–198. [Google Scholar] [CrossRef]
- Diprima, F.; Santoni, F.; Piergentili, F.; Fortunato, V.; Abbattista, C.; Amoruso, L. Efficient and Automatic Image Reduction Framework for Space Debris Detection Based on GPU Technology. Acta Astronaut. 2018, 145, 332–341. [Google Scholar] [CrossRef]
- Chen, B.; Liu, D.; Chin, T.-J.; Rutten, M.; Derksen, D.; Märtens, M.; von Looz, M.; Lecuyer, G.; Izzo, D. Spot the GEO Satellites: From Dataset to Kelvins SpotGEO Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Nashville, TN, USA, 20–25 June 2021; pp. 2086–2094. [Google Scholar]
- Chen, B.; Liu, D.; Chin, T.-J.; Rutten, M.; Derksen, D.; Märtens, M.; von Looz, M.; Lecuyer, G.; Izzo, D. SpotGEO Dataset. Available online: https://doi.org/10.5281/zenodo.4432143 (accessed on 11 May 2021). [CrossRef]
- Šára, R.; Matoušek, M.; Franc, V. RANSACing Optical Image Sequences for GEO and Near-GEO Objects. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 10–13 September 2013. [Google Scholar]
- Do, H.N.; Chin, T.-J.; Moretti, N.; Jah, M.K.; Tetlow, M. Robust Foreground Segmentation and Image Registration for Optical Detection of GEO Objects. Adv. Space Res. 2019, 64, 733–746. [Google Scholar] [CrossRef]
- Yanagisawa, T.; Kurosaki, H.; Nakajima, A. Activities of JAXA’s Innovative Technology Center on Space Debris Observation. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 1–4 September 2009. [Google Scholar]
- Liu, D.; Chen, B.; Chin, T.-J.; Rutten, M. Topological Sweep for Multi-Target Detection of Geostationary Space Objects. IEEE Trans. Signal Process. 2020, 68, 5166–5177. [Google Scholar] [CrossRef]
- Ohsawa, R. Development of a Tracklet Extraction Engine. arXiv 2021, arXiv:2109.09064. [Google Scholar]
- Huang, X.; Wang, X.; Lv, W.; Bai, X.; Long, X.; Deng, K.; Dang, Q.; Han, S.; Liu, Q.; Hu, X.; et al. PP-YOLOv2: A Practical Object Detector. arXiv 2021, arXiv:2104.10419. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Pang, J.; Chen, K.; Shi, J.; Feng, H.; Ouyang, W.; Lin, D. Libra R-CNN: Towards Balanced Learning for Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 821–830. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. arXiv 2020, arXiv:1911.09070. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Long, X.; Deng, K.; Wang, G.; Zhang, Y.; Dang, Q.; Gao, Y.; Shen, H.; Ren, J.; Han, S.; Ding, E.; et al. PP-YOLO: An Effective and Efficient Implementation of Object Detector. arXiv 2020, arXiv:2007.12099. [Google Scholar]
- Law, H.; Deng, J. CornerNet: Detecting Objects as Paired Keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
- Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint Triplets for Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 6569–6578. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: A Simple and Strong Anchor-Free Object Detector. arXiv 2020, arXiv:2006.09214. [Google Scholar] [CrossRef]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
- Baidu AI Studio. Available online: https://aistudio.baidu.com/ (accessed on 20 August 2021).
- Derksen, D.; Märtens, M.; von Looz, M.; Lecuyer, G.; Izzo, D.; Chen, B.; Liu, D.; Chin, T.-J.; Rutten, M. SpotGEO Starter Kit. Available online: https://doi.org/10.5281/zenodo.3874368 (accessed on 11 May 2021).
- Abay, R.; Gupta, K. GEO-FPN: A Convolutional Neural Network for Detecting GEO and near-GEO Space Objects from Optical Images. In Proceedings of the 8th European Conference on Space Debris (virtual), Darmstadt, Germany, 20–23 April 2021. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar]
Split | Number of Images | Number of Objects | Average Number of Objects
---|---|---|---
Training | 5120 | 8964 | 1.751
Validation | 1280 | 2241 | 1.750
Test | 25,600 | 44,550 | 1.740
Total | 32,000 | 55,755 | 1.742
Methods | Backbone | FPS | F1 (%) | MSE
---|---|---|---|---
GEO-Faster R-CNN | ResNet50_vd_dcn | 18.06 | 80.19 | 162,168.66
GEO-Cascade R-CNN | ResNet50_vd_dcn | 20.6 | 82.54 | 177,082.09
GEO-YOLOv3 | ResNet50_vd_dcn | 61.3 | 81.89 | 141,504.56
GEO-PP-YOLO | ResNet50_vd_dcn | 72.9 | 82.21 | 181,461.13
GEO-PP-YOLOv2 | ResNet50_vd_dcn | 72.1 | 84.08 | 160,997.98
Methods | F1 (%) | FPS | MSE
---|---|---|---
GEO-Faster R-CNN + CFS | 90.02 | 9.83 | 53,001.35
GEO-Cascade R-CNN + CFS | 91.63 | 9.09 | 53,274.17
GEO-YOLOv3 + CFS | 91.54 | 9.65 | 53,888.27
GEO-PP-YOLO + CFS | 92.89 | 10.68 | 49,274.17
GEO-PP-YOLOv2 + CFS | 93.47 | 9.39 | 40,222.44
Rank | Participant Name | F1 (%) | MSE
---|---|---|---
1 | AgeniumSPACE | 94.83 | 33,838.9931
2 | POTLAB@BUAA | 94.43 | 30,541.7319
3 | dwiuzila | 92.89 | 41,198.4586
4 | Magpies | 90.43 | 48,919.9227
5 | Mr_huangLTZaaa | 88.42 | 62,021.8092
6 | francescodg | 87.89 | 65,772.4634
7 | mhalford | 87.70 | 69,566.9086
8 | PedroyAgus | 86.61 | 70,104.9665
9 | elmihailol | 86.11 | 83,172.8141
10 | Barebones | 83.66 | 105,518.4199
Item | Specification
---|---
AI Performance | 472 GFLOPS
CPU | 4-core ARM Cortex-A57
GPU | 128-core NVIDIA Maxwell
Memory | 4 GB 64-bit LPDDR4
Size (mm) | 100 × 80 × 29
Power (W) | 5 W (or 10 W)
Methods | Inference Time (s)
---|---
GEO-PP-YOLOv2 | 0.932
GEO-PP-YOLOv2 (with TensorRT) | 0.706
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dai, Y.; Zheng, T.; Xue, C.; Zhou, L. Effective Multi-Frame Optical Detection Algorithm for GEO Space Objects. Appl. Sci. 2022, 12, 4610. https://doi.org/10.3390/app12094610