TuSeSy: An Intelligent Turntable Servo System for Tracking Aircraft and Parachutes Automatically
Abstract
1. Introduction
- We propose a turntable servo system named TuSeSy, which automatically tracks the aircraft and parachutes in airdrop tests. TuSeSy computes the differences between the images captured by the cameras and the images inferred by the tracking algorithms, and then generates the control commands to track the aircraft and parachutes.
- To achieve real-time switching from the aircraft to the parachutes in airdrop tests, we designed an effective multi-target tracking switch algorithm based on the image frame difference and optical flow.
- We conducted extensive experiments; the results show that TuSeSy not only avoids tracking the wrong target, but also reduces computational overhead. Moreover, the multi-target tracking switch algorithm offers high computing efficiency and reliability, enabling practical applications of the turntable servo system.
2. Related Works
3. Case Study
3.1. Mechanical Structure of TuSeSy
3.2. Architecture of TuSeSy
3.3. Hardware Parameters of TuSeSy
- 1. Tracking camera: PointGrey GS3-U3-41C6C-C; resolution: 2048 × 2048; pixel size: 5.5 μm;
- 2. Tracking camera lens: AF-S Micro NIKKOR 105 mm 1:2.8G; focal length: 50 mm;
- 3. High-speed recording camera: IO Industries Flare 2M360CCL; frame rate: up to 375 FPS; resolution: 1088 × 2088;
- 4. High-speed recording camera lens: VR 500/4G;
- 5. Kollmorgen motor for the azimuth axis: KBMS-25H01-A00; corresponding driver: AKD-P01206-NBEC-0000;
- 6. Kollmorgen motor for the pitching axis: KBMS-17H01-A00; corresponding driver: AKD-P00606-NBEC-0000;
- 7. Multi-axis controller: Beckhoff CX5130-0125;
- 8. Image workstation: CPU: Intel 7700K; memory: 32 GB; graphics card: GV-N1080Ti; operating system: Linux.
4. The Design of the Tracking Algorithm
4.1. Introduction of the Multi-Target Tracking Switch
- When the aircraft enters the camera’s field of view, TuSeSy automatically captures the aircraft by target detection and then tracks it with a tracking algorithm, as shown in Figure 5a.
- The system controls the turntable rotation according to the deviation signal returned by the camera, so that the camera stays aimed at the target under test. After a period of flight, the aircraft begins the airdrop mission, releasing the payload and opening the pilot chute, as shown in Figure 5b.
- TuSeSy must then automatically detect the pilot chute and decide whether to abandon tracking the aircraft and start tracking the pilot chute instead (Figure 5c).
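The servo loop described above (camera deviation signal → turntable rotation) can be sketched as a simple proportional controller. The field-of-view scaling and the gain below are illustrative assumptions, not parameters from the paper:

```python
def deviation_to_rates(box_center, frame_size, fov_deg, gain=1.0):
    """Hypothetical sketch of the deviation-driven servo loop: convert the
    pixel offset between the tracked box center and the image center into
    azimuth/pitch rate commands. fov_deg and gain are assumed values."""
    w, h = frame_size
    cx, cy = box_center
    # pixel error relative to the optical center
    ex, ey = cx - w / 2.0, cy - h / 2.0
    # approximate angular resolution (degrees per pixel) from the field of view
    deg_per_px_x = fov_deg[0] / w
    deg_per_px_y = fov_deg[1] / h
    # proportional command: rotate so the error is driven toward zero
    azimuth_rate = -gain * ex * deg_per_px_x
    pitch_rate = -gain * ey * deg_per_px_y
    return azimuth_rate, pitch_rate
```

A target centered in the image produces zero rate commands; a target to the right of center produces a negative azimuth rate that swings the turntable toward it.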
4.2. Trajectory Acquisition
4.3. Moving Target Capture in a Dynamic Background
Algorithm 1: Foreground segmentation by frame difference
Input: two adjacent frames captured from the camera
Output: foreground mask DFrame

Algorithm 2: Positioning the captured foreground target
Input: foreground mask DFrame
Output: bounding box and area of the foreground target
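A minimal sketch of Algorithms 1 and 2, assuming grayscale uint8 frames; the difference threshold is an illustrative value, not the one used in the paper:

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Algorithm 1 sketch: absolute difference of two adjacent grayscale
    frames, thresholded into a binary foreground mask (DFrame)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def bounding_box(mask):
    """Algorithm 2 sketch: top-left / bottom-right corners and area of the
    bounding box of the foreground pixels; None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    tl = (int(xs.min()), int(ys.min()))
    br = (int(xs.max()), int(ys.max()))
    area = (br[0] - tl[0] + 1) * (br[1] - tl[1] + 1)
    return tl, br, area
```

The bounding-box area is what the subsequent algorithms compare against the set threshold to decide whether a candidate target has been captured.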
4.4. Tracking Switch Algorithms
Algorithm 3: Capture and track the aircraft.
start timer (30 ms)
call Algorithm 1
call Algorithm 2
if area of the bounding box > the set threshold then
  initialize the optical flow tracker: flowtracker.init(srcImage, box)
  for each detected box do
    track the aircraft in each frame: flowtracker.update(srcImage[i + 1], box)
    obtain the coordinates and size of the bounding box: box.tl(), box.br()
    return box.tl(), box.br()
  end for
end if
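TuSeSy's flowtracker is optical-flow based (init/update on successive frames). As a compact stand-in for illustration only, the tracker below updates the box by the shift of the foreground centroid between frames rather than by computing optical flow:

```python
import numpy as np

class BoxTracker:
    """Simplified stand-in for the flowtracker in Algorithm 3: the box is
    shifted by the motion of the foreground centroid between consecutive
    frames (a hypothetical simplification, not the paper's method)."""
    def __init__(self):
        self.box = None       # (x, y, w, h)
        self.centroid = None  # (cx, cy) in image coordinates

    def init(self, mask, box):
        self.box = box
        self.centroid = self._centroid(mask, box)

    def update(self, mask):
        c = self._centroid(mask, self.box)
        if c is None or self.centroid is None:
            return self.box  # target lost: keep the last known box
        dx, dy = c[0] - self.centroid[0], c[1] - self.centroid[1]
        x, y, w, h = self.box
        self.box = (x + dx, y + dy, w, h)
        self.centroid = c
        return self.box

    @staticmethod
    def _centroid(mask, box):
        """Centroid of the foreground pixels inside the box, or None."""
        x, y, w, h = box
        ys, xs = np.nonzero(mask[y:y + h, x:x + w])
        if len(xs) == 0:
            return None
        return int(xs.mean()) + x, int(ys.mean()) + y
```

This works only when per-frame motion is small relative to the box, which is also the regime in which the 30 ms update loop in Algorithm 3 operates.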
Algorithm 4: Tracking switch from the aircraft to the pilot chute.
continue from Algorithm 3
determine the flight direction of the aircraft
detect the pilot chute in the predicted area: Algorithm 1
if contours are found then
  call Algorithm 2
  if area of the bounding box > the set threshold then
    initialize the optical flow tracker: flowtracker.init(srcImage[i], box)
    for each detected box do
      track the pilot chute in each frame: flowtracker.update(srcImage[i + 1], box)
      obtain the coordinates and size of the bounding box: box.tl(), box.br()
      return box.tl(), box.br()
    end for
  end if
end if
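The "predicted area" step in Algorithm 4 can be sketched with a simple heuristic: since the chute is released behind the moving aircraft, search a window offset opposite to the aircraft's horizontal motion. The window offsets and scale factor are assumptions for illustration:

```python
def predicted_search_area(box, velocity, frame_shape, scale=2):
    """Sketch of the predicted area: a window of scale x the aircraft box,
    placed behind the aircraft (opposite to its horizontal velocity) and
    clipped to the frame. Heuristic assumption, not the paper's formula."""
    x, y, w, h = box
    vx, _ = velocity
    sw, sh = scale * w, scale * h
    # the chute trails the aircraft: search opposite to the flight direction
    sx = x - sw if vx > 0 else x + w
    sy = y
    H, W = frame_shape
    sx, sy = max(0, sx), max(0, sy)
    return sx, sy, min(sw, W - sx), min(sh, H - sy)
```

Restricting Algorithm 1 to this window is what keeps the switch decision cheap: frame differencing runs over a small region instead of the full 2048 × 2048 frame.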
Algorithm 5: Tracking switch from the pilot chute to the main chute.
continue from Algorithm 4
detect the main chute in the predicted area: Algorithm 1
if contours are found then
  call Algorithm 2
  if area of the bounding box > the set threshold then
    initialize the optical flow tracker: flowtracker.init(srcImage, box)
    for each detected box do
      track the main chute in each frame: flowtracker.update(srcImage[i + 1], box)
      obtain the coordinates and size of the bounding box: box.tl(), box.br()
      return box.tl(), box.br()
    end for
  end if
end if
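Taken together, Algorithms 3 to 5 form a three-stage cascade: aircraft, then pilot chute, then main chute. The hand-over rule can be sketched as a small state machine (the stage names and threshold handling are illustrative):

```python
STAGES = ["aircraft", "pilot_chute", "main_chute"]

def next_stage(stage, detected_area, threshold):
    """Sketch of the Algorithm 3 -> 4 -> 5 hand-over: advance to the next
    target as soon as a candidate whose bounding-box area exceeds the
    threshold is found in the predicted area; otherwise keep tracking the
    current target. The main chute is the final stage."""
    if detected_area > threshold and stage != STAGES[-1]:
        return STAGES[STAGES.index(stage) + 1]
    return stage
```

The one-way ordering is what prevents the wrong-target problem mentioned in the abstract: once the system has switched to a chute, a late detection of the aircraft cannot pull the tracker back.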
5. Experiment
- Video1: the airplane entered the field of view from the right (640 × 480, 400 frames);
- Video2: the airplane entered the field of view from the left (640 × 480, 800 frames);
- Video3: the simulated UAV with clouds as the background (640 × 480, 300 frames);
- Video4: the simulated UAV with birds as a distraction.
5.1. Evaluation Methodology
- Accuracy: the percentage of correctly detected samples among all samples.
- Precision: the percentage of samples correctly detected as A among all samples detected as A.
- Recall: the percentage of samples detected as A among all samples that truly belong to A.
- F2-score: a metric that combines precision (P) and recall (R): Fβ = (1 + β²)·P·R / (β²·P + R). We set β = 2 to increase the weight of recall, i.e., reducing the miss rate of wrong detections while maintaining precision.
- FPS: the speed of the algorithm in frames per second.
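The F2-score used above is the standard Fβ measure with β = 2:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score: F = (1 + b^2) * P * R / (b^2 * P + R).
    beta = 2 weights recall twice as heavily as precision."""
    if precision + recall == 0:
        return 0.0  # degenerate case: no true or predicted positives
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, with P = 0.5 and R = 1.0 the F2-score is 2.5/3 ≈ 0.833, noticeably higher than the F1-score of 0.667 for the same pair, reflecting the emphasis on recall.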
5.2. Impact of Frame Difference and Background Subtraction
5.3. Impact of Target Tracking Switch Algorithm
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
Video | DPWrenGABGS | MixtureOfGaussianV1BGS | MultiLayerBGS | PixelBasedAdaptiveSegmenter | LBAdaptiveSOM | TuSeSy |
---|---|---|---|---|---|---|
Video1 | - | 0.300 | 0.449 | 0.312 | - | 0.862 |
Video2 | 0.764 | 0.510 | 0.813 | 0.512 | 0.807 | 0.947 |
Video3 | 0.806 | 0.790 | 0.871 | 0.834 | - | 0.873 |
Video4 | 0.683 | 0.858 | 0.852 | 0.777 | - | 0.854 |
Video | DPWrenGABGS | MixtureOfGaussianV1BGS | MultiLayerBGS | PixelBasedAdaptiveSegmenter | LBAdaptiveSOM | TuSeSy |
---|---|---|---|---|---|---|
Video1 | - | 87.72 | 5.27 | 14.97 | - | 253.43 |
Video2 | 80.00 | 84.74 | 4.73 | 17.18 | 27.74 | 243.90 |
Video3 | 79.37 | 85.00 | 6.31 | 18.38 | - | 252.34 |
Video4 | 83.33 | 86.34 | 6.66 | 18.34 | - | 254.12 |
Parameter | Video1 | Video2 | Video3 | Video4 |
---|---|---|---|---|
F2-score | 0.892 | 0.924 | 0.863 | 0.844 |
FPS | 41.62 | 42.04 | 40.14 | 41.63 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhang, Z.; Pei, Z.; Tang, Z.; Gu, F. TuSeSy: An Intelligent Turntable Servo System for Tracking Aircraft and Parachutes Automatically. Appl. Sci. 2022, 12, 5133. https://doi.org/10.3390/app12105133