End-To-End Controls Using K-Means Algorithm for 360-Degree Video Control Method on Omnidirectional Camera-Equipped Autonomous Micro Unmanned Aircraft Systems
Abstract
1. Introduction
2. Related Works
2.1. Autonomous Micro UAS for Surveillance
2.2. 360-Degree Video Control
3. End-To-End Control-Based 360-Degree Video Control Method Using Video Control Signal Generation Based on the K-Means Algorithm
3.1. Overview
3.2. Data Collection Stage
Algorithm 1. Pseudocode for generating preprocessed 360-degree video control data from 360-degree videos and 360-degree video control signals.

    FUNCTION GeneratePreprocessed360-DegreeVideoControlData WITH V, S
        SET κ ← CALL Number of video control signals to generate
        SET S′ ← CALL K-means algorithm (S, κ)
        FOR i ← 1 TO |S| THEN
            FOR t ← 1 TO |Si| THEN
                SET li,t ← CALL Index of nearest defined video control signal (Si,t, S′)
                SET Li ← Li ∪ {li,t}
            END
            SET L ← L ∪ {Li}
        END
        RETURN V, L
    END
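A minimal Python sketch of this stage is given below, assuming each control signal is a fixed-length vector (e.g., pan/tilt angles) and using scikit-learn's KMeans in place of the unspecified k-means implementation; the function and variable names, and passing κ as a parameter, are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_preprocessed_control_data(videos, signals, kappa):
    """Quantize the raw control signals into kappa representative signals (S')
    and label every frame with the index of its nearest representative."""
    # Stack the per-video signal sequences into one (N, d) array for clustering.
    flat = np.vstack([np.asarray(s) for s in signals])
    kmeans = KMeans(n_clusters=kappa, n_init=10, random_state=0).fit(flat)
    centroids = kmeans.cluster_centers_  # the defined video control signals S'
    # L_i: nearest-centroid label for every frame of video i (inner loops of Algorithm 1).
    labels = [kmeans.predict(np.asarray(s)) for s in signals]
    return videos, labels, centroids
```

Quantizing the continuous signal space into κ representative signals turns the later control problem into a κ-way classification over discrete labels.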
3.3. Model Training Stage
Algorithm 2. Pseudocode for training the deep learning model using preprocessed 360-degree video control data.

    FUNCTION TrainDeepLearningModel WITH V, L
        FOR i ← 1 TO |L| THEN
            FOR t ← 1 TO |Li| THEN
                SET l′i,t ← CALL Video control label encoding with one-hot encoding (li,t)
                SET L′i ← L′i ∪ {l′i,t}
            END
            SET L′ ← L′ ∪ {L′i}
        END
        FOR ε ← 1 TO Training Count THEN
            SET i ← CALL Random extraction
            SET t ← CALL Random extraction
            SET fi,t ← CALL Inference deep learning model-based feature (vi,t)
            SET value ← CALL Compare feature and control label (fi,t, l′i,t)
            CALL Modify deep learning model (value)
        END
        RETURN Trained deep learning model
    END
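As a sketch of this training loop, the PyTorch snippet below samples random (frame, label) pairs and minimizes cross-entropy. Because cross-entropy on an integer class index is mathematically equivalent to comparing the softmax output with the one-hot-encoded label, the explicit one-hot encoding step of Algorithm 2 is folded into the loss. The Adam optimizer, step count, and learning rate are assumptions, not values from the paper.

```python
import random
import torch
import torch.nn.functional as F

def train_deep_learning_model(model, frames, labels, steps=10000, lr=1e-3):
    """Randomly sample (frame, label) pairs and update the model (Algorithm 2).
    `frames[i][t]` is assumed to be a (1, 270, 480) grayscale tensor and
    `labels[i][t]` the integer control label l_{i,t} from the previous stage."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        i = random.randrange(len(frames))      # random video index
        t = random.randrange(len(frames[i]))   # random frame index within video i
        x = frames[i][t].unsqueeze(0)          # batch of one: (1, 1, 270, 480)
        y = torch.tensor([int(labels[i][t])])  # control label l_{i,t}
        loss = F.cross_entropy(model(x), y)    # compare feature against label
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                       # modify the deep learning model
    return model
```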
3.4. Video Control Stage
Algorithm 3. Pseudocode for generating a 360-degree video control signal using the trained deep learning model.

    FUNCTION VideoControlSignalGenerationUsingDeepLearningModel WITH vi,t
        SET fi,t ← CALL Inference deep learning model-based feature (vi,t)
        SET li,t ← CALL Softmax (fi,t)
        SET si,t ← CALL Generate video control signal (li,t, S′)
        RETURN si,t
    END
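A matching inference sketch, under the same assumptions as above (`centroids` is the S′ array produced by the k-means stage):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_video_control_signal(model, frame, centroids):
    """One step of the video control stage (Algorithm 3)."""
    logits = model(frame.unsqueeze(0))  # f_{i,t}: model-based feature for the frame
    probs = F.softmax(logits, dim=1)    # softmax over the kappa control labels
    label = int(probs.argmax(dim=1))    # predicted label l_{i,t}
    return centroids[label]             # representative control signal s_{i,t} from S'
```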
4. Experiments
4.1. Dataset
4.2. Process of Generating 360-Degree Video Control Data Based on End-to-End Control
4.3. CNN Model and Training Results for Generating the Video Control Signal to Control 360-Degree Video
4.4. Experimental Results and Performance Analysis
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
| Layer | Type | Configurations |
|---|---|---|
| Input Layer | Input image | 270 × 480 grayscale image |
| Convolutional Layers | Convolution | Number of outputs: 32, kernel size: 1 × 1, stride: 2 |
| | Max Pooling | Kernel size: 3 × 3 |
| | Convolution | Number of outputs: 32, kernel size: 3 × 3 |
| | Dropout | - |
| | Max Pooling | Kernel size: 3 × 3 |
| | Convolution | Number of outputs: 32, kernel size: 3 × 3, stride: 2 |
| | Max Pooling | Kernel size: 3 × 3 |
| Output Layer | Fully Connected | 26 × 1 matrix |
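The layer stack above can be transcribed into PyTorch as in the sketch below. The table leaves pooling strides, padding, activation functions, and the dropout rate unspecified, so the values here (pool stride equal to kernel size, no padding, dropout p = 0.5, no explicit activations) are assumptions; nn.LazyLinear infers the flattened feature size automatically.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=1, stride=2),   # 32 outputs, 1x1 kernel, stride 2
    nn.MaxPool2d(kernel_size=3),                 # 3x3 pooling (stride assumed = 3)
    nn.Conv2d(32, 32, kernel_size=3),            # 32 outputs, 3x3 kernel
    nn.Dropout(),                                # rate unspecified; default p = 0.5
    nn.MaxPool2d(kernel_size=3),
    nn.Conv2d(32, 32, kernel_size=3, stride=2),  # 32 outputs, 3x3 kernel, stride 2
    nn.MaxPool2d(kernel_size=3),
    nn.Flatten(),
    nn.LazyLinear(26),                           # fully connected layer, 26 outputs
)

# Dry-run forward on a 270 x 480 grayscale frame to materialize the lazy layer
# before an optimizer is constructed over model.parameters().
_ = model(torch.zeros(1, 1, 270, 480))
```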
| Index | Proposed Method | User Definition |
|---|---|---|
| 1 | 1 | 1 |
| 2 | 1 | 1 |
| 3 | 9 | 1 |
| 4 | 72 | 1 |
| 5 | 39 | 1 |
| Total | 122 | 5 |