Article

Design of a UAV Trajectory Prediction System Based on Multi-Flight Modes

by Zhuoyong Shi, Jiandong Zhang *, Guoqing Shi *, Longmeng Ji, Dinghan Wang and Yong Wu
School of Electronic Information, Northwestern Polytechnical University, Xi’an 710129, China
* Authors to whom correspondence should be addressed.
Drones 2024, 8(6), 255; https://doi.org/10.3390/drones8060255
Submission received: 29 April 2024 / Revised: 3 June 2024 / Accepted: 5 June 2024 / Published: 10 June 2024

Abstract

With the growing impact of artificial intelligence on the traditional UAV industry, autonomous UAV flight has become a focal point of contemporary research. Addressing the need to advance key technologies for autonomous flight, this paper investigates UAV flight state recognition and trajectory prediction. It presents an approach that improves the precision of unmanned aerial vehicle (UAV) trajectory forecasting through the identification of flight states and demonstrates its efficacy with two prediction models. First, UAV flight data acquisition was realized using multiple onboard sensors. The acquired data were then preprocessed and used to identify the UAV’s flight state. Finally, two UAV trajectory prediction models were designed, one based on a machine learning method and one on a classical mathematical prediction method, and their results before and after flight mode recognition were compared. The experimental results show that the prediction error of the UAV trajectory prediction method based on multiple flight modes is smaller than that of the traditional trajectory prediction method across the different flight stages.

1. Introduction

An unmanned aerial vehicle (UAV) is an aerial vehicle that operates without a pilot onboard [1,2]. It can be remotely controlled or programmed to perform various tasks autonomously. The development of UAVs has greatly benefited from advancements in fields such as aviation technology, computer science, electronics, and sensor technology. Initially employed primarily for military purposes, including intelligence gathering, targeting, and aerial attacks [3], UAVs have become increasingly prevalent in civilian applications due to technological advancements and cost reductions [4,5]. Presently, UAVs serve vital roles in aerial photography, cargo transportation, agriculture, scientific research, disaster monitoring and rescue operations, as well as film and television production [6,7].
A safety monitoring system for UAVs oversees the parameters of the aircraft’s movement, enabling monitoring and control of relevant parameters during operation [8,9]. Central to this system is UAV trajectory prediction, a fundamental technology for enhancing UAV safety supervision [10,11,12]. UAV trajectory prediction involves forecasting the future path of a UAV based on its inherent information [13], providing more accurate navigation data for UAV control and navigation.
Flight state recognition of UAVs falls within the domain of pattern recognition, originally utilized in the aerospace industry for monitoring the flight status of airborne electronic devices [14]. Scholars in the field of pattern recognition employ various algorithms such as support vector machines [15], decision trees [16], random forests [17], and artificial neural networks [18] for research purposes.
In trajectory prediction research, Kannan [19] introduced a low-complexity linear Kalman filter based on a differential state equation, enhancing trajectory prediction accuracy and response time. Wu and Zhu [20] proposed a fixed-time extended state observer (FXTESO)-based prescribed performance fault-tolerant control method for the attitude tracking problem of UAVs under actuator failures, and the effectiveness of the proposed control strategy was verified through simulation results. Fang and Savkin [21] comprehensively evaluated optimization strategies for UAVs in a number of key areas such as infrastructure inspection, security surveillance, and environmental monitoring; analyzed the effectiveness of UAVs in specific tasks; and explored the challenges of operating in complex environments. Liu et al. [22] proposed a path planning method based on an improved deep reinforcement learning (DRL) algorithm, combined with radio mapping techniques, to optimize 3D trajectories for cellular-connected UAVs and improve system efficiency and reliability. Benmoussa and Gamboa [23] conducted a parametric study to investigate the effects of control parameters on the performance of a UAV with a hybrid electric propulsion system under different flight conditions, aiming to provide guidance for the design and operation of UAVs. Zhang et al. [24] devised a four-dimensional trajectory prediction model by integrating historical flight data with UAV motion equations and incorporated a genetic algorithm for dynamic weighting. Niu et al. [25] introduced a model predictive control (MPC)-based adjacent motion trajectory prediction algorithm that operates independently of communication requirements; their method outperformed traditional distributed model predictive control (DMPC) algorithms in simulation experiments. Xie and Chen [26] presented a framework for online UAV trajectory prediction using Gaussian process regression (GPR), whose performance was validated against alternative approaches through simulation experiments and real-world data scenarios.
The above literature reviews the current state of research in pattern recognition and UAV trajectory prediction. However, a UAV trajectory prediction model based on a single fixed model has a flaw: it predicts future trajectories only from the UAV’s trajectory over a past period of time and does not account for cases where the UAV’s maneuvering strategy changes within a short period. In other words, the traditional UAV trajectory prediction model cannot comprehensively account for the flight posture of the UAV in its current state. Motivated by the research needs of autonomous UAV flight, this paper investigates one of its key areas: UAV trajectory prediction.
The main contributions and innovations are listed below:
(1) A multi-sensor onboard data acquisition system is designed, enabling accurate monitoring of UAV data.
(2) UAV flight trajectory features are extracted by fusing the data acquired by the sensors.
(3) Two trajectory prediction models are designed for five different flight states of UAVs.
The remainder of this paper is organized as follows. Section 2 introduces the UAV sensor devices and the data acquisition and preprocessing, and a multi-sensor-based UAV data acquisition system is designed. Section 3 presents the UAV trajectory prediction model; two types of trajectory prediction models are established based on the UAV data acquired by the sensors to determine the effect of flight state identification on trajectory prediction. The prediction results of the two trajectory prediction models are given and explained in Section 4. Finally, Section 5 concludes the paper.

2. Data Acquisition and Preprocessing

2.1. UAV Data Acquisition

The UAV data acquisition system in this study was built on the Pixhawk open-source UAV platform. It consists of several sensors, including a position module, an air pressure module, and an attitude module. The UAV monitoring system constructed from these components is shown in Figure 1.
The block diagram of the monitoring system composed of multiple sensors is shown in Figure 2.
The monitoring system structure described consists of three main components: the lower unit, the transmission system, and the upper unit.
Lower unit: This unit comprises several modules:
(1) Microcontroller minimum system: includes a microcontroller unit (MCU) along with supporting components such as a crystal oscillator and reset circuitry.
(2) Power supply module: responsible for providing stable power to the system components.
(3) Power module: involved in managing power distribution and regulation.
(4) GPS positioning module: used for determining the UAV’s location.
(5) Flight altitude measurement module: used to measure the UAV’s altitude during flight.
(6) Attitude detection module: used to detect the UAV’s orientation or attitude.
(7) Main control module: central component responsible for coordinating the operation of the lower unit and possibly interfacing with other modules.
Transmission system: This system utilizes a 3DR wireless digital transmission module for its operation. Its purpose is to establish a wireless communication channel between the upper and lower units, enabling data exchange and control commands.
Upper unit: This unit is implemented using LabVIEW and serves as the monitoring and control interface for the entire system during UAV flight. It typically includes functionalities such as:
(1) Monitoring: observing various parameters and data from the lower unit in real time.
(2) Control: sending commands to the lower unit for controlling the UAV’s operation.
(3) Data recording: capturing and storing data generated during the UAV’s flight for analysis or future reference.
Overall, this monitoring system facilitates the operation of an unmanned aerial vehicle (UAV) by providing essential functionalities such as location tracking, altitude measurement, attitude detection, wireless communication, monitoring, control, and data recording.

2.2. Collected Data Preprocessing

Owing to factors such as sensor acquisition errors and losses during remote data transmission, the collected UAV trajectory data may contain noise. It is therefore necessary to preprocess the data after collection. Data preprocessing primarily involves three components: eliminating abnormal data, filling in missing data, and filtering out high-frequency noise.
Abnormal data rejection removes outliers from the data collected by the UAV on a statistical basis. The specific steps are to difference the collected data and compute the mean $E(X)$ and standard deviation $\sigma$ of the differenced data. Data in the interval $[E(X) - 3\sigma,\ E(X) + 3\sigma]$ are considered normal, and data outside this interval are considered abnormal and are eliminated.
Missing data interpolation estimates the samples lost during transmission or removed as outliers; the main method used is Newton interpolation.
High-frequency noise rejection removes the Gaussian noise introduced during UAV data acquisition; a low-pass filter is used for this filtering.
The final step is to output the rectified data once preprocessing is complete.
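To make this pipeline concrete, the following is a minimal Python sketch of the three preprocessing steps described above: 3σ outlier rejection on the differenced signal, interpolation of the removed samples, and low-pass filtering of high-frequency noise. It is an illustration rather than the authors’ code; the Butterworth filter order, the 5 Hz cutoff, and the use of linear interpolation in place of Newton interpolation are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_channel(t, x, cutoff_hz=5.0, fs=100.0):
    """Clean one coordinate channel of a UAV track sampled at fs Hz (0.01 s interval assumed)."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float).copy()

    # 1) 3-sigma outlier rejection on the differenced signal.
    dx = np.diff(x, prepend=x[0])
    outlier = np.abs(dx - dx.mean()) > 3.0 * dx.std()
    x[outlier] = np.nan

    # 2) Fill removed/missing samples by interpolation
    #    (np.interp stands in for the Newton interpolation used in the paper).
    good = ~np.isnan(x)
    x[~good] = np.interp(t[~good], t[good], x[good])

    # 3) Low-pass filter to suppress high-frequency (Gaussian) noise.
    b, a = butter(N=4, Wn=cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)
```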
Data preprocessing is conducted on a segment of the UAV flight trajectory model, and the processing outcomes are illustrated in Figure 3.
As illustrated in Figure 3, the error signals generated during the acquisition process are largely filtered out after data preprocessing, improving the accuracy of the data.

3. UAV Trajectory Prediction Modelling

Unmanned aerial vehicle (UAV) trajectory prediction involves the UAV forecasting its own trajectory over a future time span using specific algorithms derived from collected airborne data. This predictive capability offers theoretical backing and enhances the UAV control accuracy for various applications such as trajectory planning and autonomous guidance.
This section outlines the establishment of a UAV trajectory prediction system grounded in the five fundamental flight states of UAVs.

3.1. Neural-Network-Based UAV Trajectory Prediction Model

A neural network is a computational model inspired by the structure and functionality of the human brain’s neural networks and is extensively utilized in machine learning and artificial intelligence domains. It comprises interconnected nodes, or neurons, organized into different layers, typically including input, hidden, and output layers.
In this network, information flows from the input layer through successive layers of neurons for processing, culminating in an output. Each neuron possesses weights and biases, influencing its response to input stimuli. The learning mechanism of a neural network commonly employs backpropagation, adjusting these weights and biases based on the disparity between the network’s output and actual labels. This iterative process aims to minimize error and optimize the network’s performance.
Neural networks find application across diverse tasks such as image and speech recognition, natural language processing, and predictive analytics, among others. They stand as a cornerstone in the realm of artificial intelligence, playing a pivotal role in numerous practical implementations and research advancements.
The hidden layer outputs $z_k$ and the output layer outputs $y_j$ are given in Formulas (1) and (2).
$$z_k = f_1\!\left(\sum_{i=0}^{n} v_{ki}\, x_i\right), \quad k = 1, 2, \ldots, q \tag{1}$$
$$y_j = f_2\!\left(\sum_{k=0}^{q} w_{jk}\, z_k\right), \quad j = 1, 2, \ldots, m \tag{2}$$
In Equations (1) and (2), $f_1$ is the transfer function from the input layer to the hidden layer and $f_2$ is the transfer function from the hidden layer to the output layer; $n$ is the number of nodes in the input layer and $q$ is the number of nodes in the hidden layer; $v$ and $w$ are the weights of the two layers; and $x$ denotes the network inputs.
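For concreteness, a minimal NumPy sketch of the forward pass in Equations (1) and (2) is shown below. The choice of tanh and identity transfer functions is an illustrative assumption, and bias terms are handled by fixing the first input component to 1, which is one common reading of the i = 0 term.

```python
import numpy as np

def forward(x, V, W, f1=np.tanh, f2=lambda s: s):
    """Forward pass of a single-hidden-layer network, Eqs. (1)-(2).

    x : (n+1,) input vector with x[0] = 1 acting as the bias term
    V : (q, n+1) input-to-hidden weights v_ki
    W : (m, q+1) hidden-to-output weights w_jk
    """
    z = f1(V @ x)                      # hidden outputs z_k, Eq. (1)
    z = np.concatenate(([1.0], z))     # prepend bias term for the output layer
    y = f2(W @ z)                      # network outputs y_j, Eq. (2)
    return y
```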
The neural network employs the stochastic gradient descent technique to iteratively adjust its parameters and optimize its learning process. The sample is input into the neural network model, and the resultant network output is represented as Equation (3).
$$\hat{y}_i = f\!\left(x_i; \theta\right) \tag{3}$$
In Equation (3), $x_i$ is the input column vector, $\theta$ denotes the parameters learned by the neural network, and $\hat{y}_i$ is the corresponding network output.
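Written out, the stochastic gradient descent update implied here takes the standard form below, where $\eta$ is the learning rate and $L_i$ is the per-sample loss; taking $L_i$ as the squared error between $\hat{y}_i$ and the label $y_i$ is our assumption.

$$\theta \leftarrow \theta - \eta \, \nabla_{\theta} L_i(\theta), \qquad L_i(\theta) = \left\lVert f(x_i; \theta) - y_i \right\rVert^2$$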
The approach described involves utilizing neural networks to predict the future flight trajectory of a UAV based on its current state and flight trajectory information. The collected UAV operating state data, segmented into different operating states, serve as the training dataset for the neural network.
Once trained, the neural network can predict the future flight trajectory by considering the current operating state and flight trajectory information of the UAV. The UAV’s flight state is categorized into five types, and a corresponding neural network is trained for each flight state to predict the UAV’s trajectory under that specific motion state.
This trajectory prediction model is specifically designed for small UAVs, aiming to enhance their trajectory prediction accuracy and contribute to their overall operational efficiency and effectiveness.
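One way to organize the five per-state predictors described above is a simple routing wrapper, sketched below. The class, the model_factory argument, and the five state labels used as dictionary keys are hypothetical scaffolding; any regression model exposing fit and predict could be plugged in.

```python
FLIGHT_STATES = ["climbing", "level_flight", "turning", "circling", "descending"]

class PerStatePredictor:
    """Keeps one trajectory model per recognized flight state (hypothetical wrapper)."""

    def __init__(self, model_factory):
        # One independent model per flight state.
        self.models = {state: model_factory() for state in FLIGHT_STATES}

    def fit(self, segments):
        # segments: iterable of (state_label, features, future_positions)
        for state, X, Y in segments:
            self.models[state].fit(X, Y)

    def predict(self, state, X):
        # Route the current track features to the model of the recognized state.
        return self.models[state].predict(X)
```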
Table 1 illustrates the parameters of the simulation network model along with their initial configurations.
Table 1 presents the foundational parameters established within the neural network. Following the configuration of these parameters, the neural network undertakes flight trajectory prediction utilizing the data relayed by the UAV.

3.2. Multivariate Adams Predictive Correction Trajectory Prediction Model

Throughout the UAV’s flight, various factors affect its velocity in all directions; this model focuses primarily on the influence of time, spatial position, and flight state. First, the UAV’s navigation speed is determined, and a differential equation relating the navigation speed to the spatial position is established. The UAV’s spatial position is then calculated with the Adams prediction correction formula. Multiple regression equations are formulated to describe the UAV’s flight speed as a function of trajectory position and the independent variable time. The regression equation is solved for its optimal coefficients by minimizing the error, as shown in Formula (4).
$$\min \left\lVert v(t, x) - V^{*}(t, x) \right\rVert_2^2 \tag{4}$$
Among the candidate function approximations for Formula (4), a quadratic approximation works best; the quadratic regression equations of velocity with respect to position and time in the x, y, and z directions are constructed as shown in Formula (5).
$$\begin{cases} v_x = a x^2 + b t^2 + c x t + d x + e t + f \\ v_y = a y^2 + b t^2 + c y t + d y + e t + f \\ v_z = a z^2 + b t^2 + c z t + d z + e t + f \end{cases} \tag{5}$$
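The coefficients a–f in Formula (5) can be obtained by ordinary least squares on the design matrix [p², t², pt, p, t, 1]; the sketch below is our reading, not necessarily the authors’ solver, and fits one axis at a time.

```python
import numpy as np

def fit_velocity_regression(pos, t, vel):
    """Least-squares fit of v = a*p^2 + b*t^2 + c*p*t + d*p + e*t + f (Formula (5))."""
    p, t, v = (np.asarray(arr, dtype=float) for arr in (pos, t, vel))
    A = np.column_stack([p**2, t**2, p * t, p, t, np.ones_like(p)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef  # [a, b, c, d, e, f]

def velocity(coef, p, t):
    """Evaluate the fitted regression, giving the velocity at position p and time t."""
    a, b, c, d, e, f = coef
    return a * p**2 + b * t**2 + c * p * t + d * p + e * t + f
```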
The regression errors of the regression equations corresponding to the UAV’s five flight states are shown in Table 2.
Based on the bivariate velocity function obtained by regression for each direction, the Adams prediction correction formula is used for the iterative solution. The Adams prediction correction formula is shown in Formula (6).
$$\begin{cases} y_{n+1}^{(0)} = y_n + \dfrac{h}{24}\left(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3}\right) \\[6pt] y_{n+1} = y_n + \dfrac{h}{24}\left(9 f\!\left(x_{n+1}, y_{n+1}^{(0)}\right) + 19 f_n - 5 f_{n-1} + f_{n-2}\right) \end{cases} \tag{6}$$
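A direct Python transcription of Formula (6), treating the per-axis velocity regression as the derivative function f(t, x), is sketched below; using a single corrector pass is an assumption.

```python
def adams_pc_step(f, t_n, x_n, fs, h):
    """One Adams predictor-corrector step for x' = f(t, x), Formula (6).

    fs holds the last four derivative values [f_{n-3}, f_{n-2}, f_{n-1}, f_n].
    """
    fn3, fn2, fn1, fn = fs
    # Predictor: four-step Adams-Bashforth formula.
    x_pred = x_n + h / 24.0 * (55 * fn - 59 * fn1 + 37 * fn2 - 9 * fn3)
    # Corrector: Adams-Moulton formula with one evaluation at the predicted point.
    return x_n + h / 24.0 * (9 * f(t_n + h, x_pred) + 19 * fn - 5 * fn1 + fn2)
```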
(1) Prediction algorithm design
The multivariate Adams prediction correction algorithm is shown in Algorithm 1.
Algorithm 1 Multivariate Adams prediction correction algorithm
Input: training data $D = \{(x_i, y_i)\}_{i=1}^{N}$, validation data $V$
1   Determine the UAV’s flight state
2   Invoke the multivariate Adams prediction correction formula corresponding to the flight state
3   Determine the initial velocity and position
4   repeat
5       Determine the starting speed and position
6       for i = 1 … N do
7           Select data $(t_i, x_i)$ from data $D$
8           Predict the UAV trajectory position: $y_{n+1}^{(0)} = y_n + \frac{h}{24}(55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})$
9           Correct the predicted trajectory: $y_{n+1} = y_n + \frac{h}{24}(9 f(x_{n+1}, y_{n+1}^{(0)}) + 19 f_n - 5 f_{n-1} + f_{n-2})$
10          Update the confidence interval
11      end
12  until all positions and speeds have been iteratively updated
Output: $P = \{(t_i, x_i)\}_{i=N+1}^{M}$
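A condensed Python reading of Algorithm 1 for a single coordinate is sketched below. It reuses the adams_pc_step sketch above and assumes the regression-based velocity of the recognized flight state as f(t, x); bootstrapping the first three points with Euler steps is our assumption, not something the paper specifies.

```python
def predict_track(f, t0, x0, h, steps):
    """Iterate the Adams predictor-corrector (cf. Algorithm 1) for one coordinate."""
    ts, xs = [t0], [x0]
    for _ in range(3):                       # bootstrap the four-point history
        x_next = xs[-1] + h * f(ts[-1], xs[-1])
        ts.append(ts[-1] + h)
        xs.append(x_next)
    fs = [f(t, x) for t, x in zip(ts, xs)]

    for _ in range(steps):                   # predict, then correct, each new point
        x_next = adams_pc_step(f, ts[-1], xs[-1], fs[-4:], h)
        ts.append(ts[-1] + h)
        xs.append(x_next)
        fs.append(f(ts[-1], x_next))
    return ts, xs
```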
(2) Confidence Curve Establishment
(a) Confidence curve radius determination
The radius of the confidence interval of the speed regression equation for a single dimension of UAV navigation is calculated as shown in Formula (7).
$$r_0 = S \sqrt{1 + \frac{1}{n} + \frac{\left(x_0 - \bar{x}\right)^2}{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}} \tag{7}$$
In Formula (7), $S$ is the regression standard error, $n$ is the number of samples, $x_0$ is the value of the independent variable at the prediction point, and $\bar{x}$ is the sample mean.
The spatial confidence interval radius is obtained by combining the confidence interval radii in the three coordinate directions, as shown in Formula (8).
$$r = \sqrt{r_x^2 + r_y^2 + r_z^2} \tag{8}$$
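A minimal sketch of Formulas (7) and (8) follows, treating S as the regression standard error of the corresponding axis; this interpretation of S is ours.

```python
import numpy as np

def axis_radius(S, x_samples, x0):
    """Per-axis confidence interval radius r0, Formula (7)."""
    x = np.asarray(x_samples, dtype=float)
    n, xbar = x.size, x.mean()
    return S * np.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / np.sum((x - xbar) ** 2))

def spatial_radius(rx, ry, rz):
    """Spatial confidence interval radius r, Formula (8)."""
    return np.sqrt(rx**2 + ry**2 + rz**2)
```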
(b) Confidence curve direction determination
The confidence curve at a point predicted at position $P_1(x_1, y_1, z_1)$ should be orthogonal to the direction from $P_0(x_0, y_0, z_0)$ to $P_1(x_1, y_1, z_1)$; that is, the unit normal vector of the plane containing this confidence curve can be expressed as shown in Formula (9).
$$\begin{cases} a = \dfrac{x_1 - x_0}{\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}} \\[8pt] b = \dfrac{y_1 - y_0}{\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}} \\[8pt] c = \dfrac{z_1 - z_0}{\sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}} \end{cases} \tag{9}$$
As shown in Figure 4, $c_1$ is a confidence curve in space, $P_0(x_0, y_0, z_0)$ and $P_1(x_1, y_1, z_1)$ are two points in space, and $c_2$ is a curve with a known parametric equation passing through the origin in the xOy plane; the equation of $c_2$ is shown in Formula (10).
$$\begin{cases} x = r \cos t \\ y = r \sin t \\ z = 0 \end{cases} \tag{10}$$
The normal vectors $n_1$ and $n_2$ of the curves $c_1$ and $c_2$ satisfy the relationship shown in Formula (11).
$$G\, n_1^{T} = n_2^{T} \tag{11}$$
In Formula (11), $G$ is the product of two Givens rotation matrices, which can be expressed as shown in Formula (12).
$$G = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{pmatrix} \begin{pmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{12}$$
In Formula (12), α and β are rotation factors, which can be expressed as shown in Formula (13).
$$\alpha = \arcsin\!\left(\frac{b}{\sqrt{a^2 + b^2}}\right), \qquad \beta = \arcsin\!\left(\frac{c}{\sqrt{a^2 + b^2 + c^2}}\right) \tag{13}$$
By applying the rotation matrix $G$, the normal vector $n_1$ can be rotated to $n_2$. The inverse matrix $G^{-1}$ of this rotation can then be used to rotate the parametric equation of $c_2$ into the direction of $c_1$. The parametric equation in the direction of $c_1$ is shown in Formula (14).
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = G^{-1} \begin{pmatrix} r \cos t \\ r \sin t \\ 0 \end{pmatrix} \tag{14}$$
The direction of the confidence curve can thus be found from Formula (14), but its spatial position still needs to be determined.
(c) Confidence curve position determination
The rotation in Formula (12) maps the confidence curve to a circle centred at the origin whose orientation matches the direction toward $P_1(x_1, y_1, z_1)$. The curve equation in Formula (14) therefore needs to be translated in space, and the required translation is exactly the coordinates of $P_1(x_1, y_1, z_1)$; the translated curve can be expressed as shown in Formula (15).
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = G^{-1} \begin{pmatrix} r \cos t \\ r \sin t \\ 0 \end{pmatrix} + \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} \tag{15}$$
The confidence curve equation for any position in space can be expressed as in Equation (15).
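The intent of Formulas (9)–(15) is a circle of radius r centred at the predicted point P1 and lying in the plane orthogonal to the direction from P0 to P1. The sketch below generates such a circle directly from an orthonormal basis of that plane rather than through the paper’s Givens rotation matrices; it is a swapped-in construction with the same geometric goal, not the authors’ implementation.

```python
import numpy as np

def confidence_circle(P0, P1, r, num=100):
    """Points on the circle of radius r centred at P1, orthogonal to P0 -> P1."""
    P0, P1 = np.asarray(P0, dtype=float), np.asarray(P1, dtype=float)
    n = (P1 - P0) / np.linalg.norm(P1 - P0)          # unit direction, cf. Formula (9)

    # Two unit vectors spanning the plane orthogonal to n.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)

    theta = np.linspace(0.0, 2.0 * np.pi, num)
    # Parametric circle in the (u, v) plane, shifted to P1 (cf. Formula (15)).
    return P1 + r * np.cos(theta)[:, None] * u + r * np.sin(theta)[:, None] * v
```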
The method for solving the confidence curve is described in Algorithm 2.
Algorithm 2 Confidence curve solving algorithm
Input: predicted locations $\{P(x_i, y_i, z_i)\}_{i=1}^{N}$, confidence interval radius
1   Determine the starting position of the UAV
2   repeat
3       Determine the predicted location
4       for i = 1 … N do
5           Calculate the spatial confidence curve radius
6           Determine the direction-of-travel vector
7           Determine the Givens matrix
8           Solve for the spatial displacement
9           Determine the parametric equation of the curve
10          Plot the confidence curve for point P
11      end
12  until the full confidence curve has been calculated
Output: $\{c_i\}_{i=1}^{N}$

4. UAV Trajectory Prediction Results and Analysis

4.1. Neural Network Model Prediction Results

In this research, we used the equipment shown in Figure 1 to measure and record the UAV’s flight information; the flight process of the UAV is shown in Figure 5.
As depicted in Figure 5, the UAV flight data were collected at a time interval of 0.01 s and preprocessed. Trajectory predictions were then made for the climbing, level flight, turning, circling, and descending flight states using a segment of the UAV navigation data; the trajectory prediction results are shown in Figure 6.

4.2. Multivariate Adams Model Prediction Results

After preprocessing the gathered UAV flight data and identifying the flight states, predictions are made for the climbing, level flight, turning, circling, and descending states, respectively. The predicted and corrected flight trajectories are obtained from the multivariate Adams model. The UAV trajectories and confidence curves predicted by the model, before and after flight state recognition, are shown in Figure 7.

4.3. Analysis of Prediction Results

The distance between the real and predicted UAV trajectory points is quantified by Formula (16), expressed as follows.
$$d = \sqrt{(x - \hat{x})^2 + (y - \hat{y})^2 + (z - \hat{z})^2} \tag{16}$$
In Formula (16), $x$ is the real x-axis coordinate of the UAV and $\hat{x}$ is the predicted x-axis coordinate; $y$ and $z$ follow the same convention.
The mean Euclidean distance between the predicted trajectory and the actual trajectory at each time point is computed as the error distance for trajectory prediction. The error distance μ of the predicted trajectory is depicted in Formula (17).
$$\mu = \frac{1}{N} \sum_{i=1}^{N} d_i = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\left(x_i - \hat{x}_i\right)^2 + \left(y_i - \hat{y}_i\right)^2 + \left(z_i - \hat{z}_i\right)^2} \tag{17}$$
In Formula (17), $N$ is the number of trajectory points in the predicted segment.
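A direct implementation of Formulas (16) and (17) is straightforward; the sketch below assumes the true and predicted trajectories are given as N×3 arrays of xyz coordinates.

```python
import numpy as np

def mean_trajectory_error(true_xyz, pred_xyz):
    """Mean Euclidean distance between true and predicted points, Formulas (16)-(17)."""
    true_xyz = np.asarray(true_xyz, dtype=float)
    pred_xyz = np.asarray(pred_xyz, dtype=float)
    d = np.linalg.norm(true_xyz - pred_xyz, axis=1)   # per-point distance d_i, Formula (16)
    return d.mean()                                    # error distance mu, Formula (17)
```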
The predicted UAV trajectory distance errors before and after flight state recognition for the two prediction models are calculated separately, as depicted in Figure 8.
In Figure 8a, the error of the UAV trajectory prediction model based on the machine learning approach is depicted, and Figure 8b depicts the error of the classical mathematical approach for predicting the UAV’s trajectory.
The errors in trajectory prediction for the five flight modes in Figure 8 were statistically analyzed and their maximum, minimum and mean values are shown in Table 3.
A comparison of the figures and the table shows that the errors of both prediction models are significantly reduced after the UAV’s flight mode is identified. The experimental results show that the UAV trajectory prediction model based on flight mode recognition achieves better trajectory prediction than directly applying the prediction model.

5. Conclusions

This study presents an improvement to traditional UAV trajectory prediction methods. Traditional UAV trajectory prediction considers only past flight history data and does not account for the UAV’s current flight state. On this basis, this paper comprehensively considers the UAV’s current flight mode together with its historical flight trajectory to predict its future trajectory. Two prediction models are devised to forecast UAV trajectories, with and without flight state identification. The results show that the prediction accuracy of the models incorporating flight state identification exceeds that of the models without this identification.
The focus of this paper lies in demonstrating the effectiveness of using UAV flight state identification to enhance trajectory prediction models. Five flight-state-specific prediction models were developed using a BP neural network and an Adams prediction model to forecast trajectories. The prediction results show that the flight-mode-recognition-based trajectory prediction system proposed in this paper has a much smaller prediction error.
The following conclusions can be drawn from the experiment.
(1) A Pixhawk-based online UAV information collection system was developed and designed.
(2) The prediction accuracy of the UAV prediction model can be improved after performing UAV flight state recognition.
(3) Compared with the traditional neural network prediction model, the neural network prediction model based on recognition of the UAV flight state (climbing, level flight, turning, circling, and descending) reduces the prediction error to varying degrees in all flight states.
(4) The prediction accuracy of the multivariate Adams prediction correction model proposed in this paper is higher than that of the neural network model, and the prediction accuracy of this model is more sensitive to whether the flight states are identified or not.
In future research, we aim to explore the application of unsupervised machine learning techniques for UAV trajectory prediction and multi-aircraft collaborative UAV flights. Additionally, we plan to study six-degree-of-freedom-based UAV maneuver decision-making and path planning.

Author Contributions

Literature review, D.W., L.J. and Y.W.; writing, Z.S., J.Z. and G.S.; editing, J.Z. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the 2024 Northwestern Polytechnical University Graduate Student Innovation Fund Project (Program No. 06080-24GH01020101), the Natural Science Basic Research Program of Shaanxi (Program No. 2022JQ-593) and the Key R&D Program of the Shaanxi Provincial Department of Science and Technology (Program No. 2022GY-089).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kakaletsis, E.; Symeonidis, C.; Tzelepi, M.; Mademlis, I.; Tefas, A.; Nikolaidis, N.; Pitas, I. Computer Vision for Autonomous UAV Flight Safety: An Overview and a Vision-Based Safe Landing Pipeline Example. ACM Comput. Surv. 2022, 54, 1–37.
  2. Yan, C.; Fu, L.; Luo, X.; Chen, M. A Brief Overview of Waveforms for UAV Air-to-Ground Communication Systems. In Proceedings of the 3rd International Conference on Vision, Image and Signal Processing, Vancouver, BC, Canada, 26 August 2019; pp. 1–7.
  3. Orfanus, D.; de Freitas, E.P.; Eliassen, F. Self-Organization as a Supporting Paradigm for Military UAV Relay Networks. IEEE Commun. Lett. 2016, 20, 804–807.
  4. Greenwood, W.W.; Lynch, J.P.; Zekkos, D. Applications of UAVs in Civil Infrastructure. J. Infrastruct. Syst. 2019, 25, 04019002.
  5. Sherman, M.; Gammill, M.; Raissi, A.; Hassanalian, M. Solar UAV for the Inspection and Monitoring of Photovoltaic (PV) Systems in Solar Power Plants. In Proceedings of the AIAA Scitech 2021 Forum, Virtual Event, 11 January 2021.
  6. Siean, A.-I.; Vatavu, R.-D.; Vanderdonckt, J. Taking that Perfect Aerial Photo: A Synopsis of Interactions for Drone-Based Aerial Photography and Video. In Proceedings of the 2021 ACM International Conference on Interactive Media Experiences, Virtual Event, USA, 21 June 2021; pp. 275–279.
  7. Xia, Y.; Ye, G.; Yan, S.; Feng, Z.; Tian, F. Application Research of Fast UAV Aerial Photography Object Detection and Recognition Based on Improved YOLOv3. J. Phys. Conf. Ser. 2020, 1550, 032075.
  8. Liu, Y.; Liu, S. Design and Implementation of Farmland Environment Monitoring System Based on Micro Quadrotor UAV. J. Phys. Conf. Ser. 2022, 2281, 012005.
  9. Zhang, M.; Wang, H.; Wu, J. On UAV Source Seeking with Complex Dynamic Characteristics and Multiple Constraints: A Cooperative Standoff Monitoring Mode. Aerosp. Sci. Technol. 2022, 121, 107315.
  10. Corbetta, M.; Banerjee, P.; Okolo, W.; Gorospe, G.; Luchinsky, D.G. Real-Time UAV Trajectory Prediction for Safety Monitoring in Low-Altitude Airspace. In Proceedings of the AIAA Aviation 2019 Forum, Dallas, TX, USA, 17 June 2019.
  11. Banerjee, P.; Corbetta, M. Uncertainty Quantification of Expected Time-of-Arrival in UAV Flight Trajectory. In Proceedings of the AIAA Aviation 2021 Forum, Virtual Event, 2 August 2021.
  12. Zwick, M.; Gerdts, M.; Stütz, P. Sensor Model-Based Trajectory Optimization for UAVs Using Nonlinear Model Predictive Control. In Proceedings of the AIAA Scitech 2022 Forum, San Diego, CA, USA (Virtual), 3 January 2022.
  13. Zhang, J.; Shi, Z.; Zhang, A.; Yang, Q.; Shi, G.; Wu, Y. UAV Trajectory Prediction Based on Flight State Recognition. IEEE Trans. Aerosp. Electron. Syst. 2023, early access.
  14. de Marina, H.G.; Espinosa, F.; Santos, C. Adaptive UAV Attitude Estimation Employing Unscented Kalman Filter, FOAM and Low-Cost MEMS Sensors. Sensors 2012, 12, 9566–9585.
  15. Shi, Z.; Shi, G.; Zhang, J.; Wang, D.; Xu, T.; Ji, L.; Wu, Y. Design of UAV Flight State Recognition System for Multi-Sensor Data Fusion. IEEE Sensors J. 2024, early access.
  16. Wang, Y.; Li, K.; Han, Y.; Yan, X. Distributed Multi-UAV Cooperation for Dynamic Target Tracking Optimized by an SAQPSO Algorithm. ISA Trans. 2022, 129, 230–242.
  17. Heredia, G.; Duran, A.; Ollero, A. Modeling and Simulation of the HADA Reconfigurable UAV. J. Intell. Robot. Syst. 2012, 65, 115–122.
  18. Shi, Z.; Jia, Y.; Shi, G.; Zhang, K.; Ji, L.; Wang, D.; Wu, Y. Design of Motor Skill Recognition and Hierarchical Evaluation System for Table Tennis Players. IEEE Sensors J. 2024, 24, 5303–5315.
  19. Kannan, R. Orientation Estimation Based on LKF Using Differential State Equation. IEEE Sensors J. 2015, 15, 6156–6163.
  20. Wu, Q.; Zhu, Q. Prescribed Performance Fault-Tolerant Attitude Tracking Control for UAV with Actuator Faults. Drones 2024, 8, 204.
  21. Fang, Z.; Savkin, A.V. Strategies for Optimized UAV Surveillance in Various Tasks and Scenarios: A Review. Drones 2024, 8, 193.
  22. Liu, X.; Zhong, W.; Wang, X.; Duan, H.; Fan, Z.; Jin, H.; Huang, Y.; Lin, Z. Deep Reinforcement Learning-Based 3D Trajectory Planning for Cellular Connected UAV. Drones 2024, 8, 199.
  23. Benmoussa, A.; Gamboa, P.V. Effect of Control Parameters on Hybrid Electric Propulsion UAV Performance for Various Flight Conditions: Parametric Study. Appl. Mech. 2023, 4, 493–513.
  24. Zhang, H.; Yan, Y.; Li, S.; Hu, Y.; Liu, H. UAV Behavior-Intention Estimation Method Based on 4-D Flight-Trajectory Prediction. Sustainability 2021, 13, 12528.
  25. Niu, Z.; Jia, X.; Yao, W. Communication-Free MPC-Based Neighbors Trajectory Prediction for Distributed Multi-UAV Motion Planning. IEEE Access 2022, 10, 13481–13489.
  26. Xie, G.; Chen, X. Efficient and Robust Online Trajectory Prediction for Non-Cooperative Unmanned Aerial Vehicles. J. Aerosp. Inf. Syst. 2022, 19, 143–153.
Figure 1. The UAV monitoring system.
Figure 2. The block diagram of monitoring system structure.
Figure 3. Comparison of data before and after data preprocessing.
Figure 4. Schematic diagram of confidence curve solution.
Figure 5. UAV flight information measurement pictures.
Figure 6. Neural network model prediction results.
Figure 7. Multivariate Adams model prediction results.
Figure 8. Comparison of two trajectory prediction methods. (a) Comparison of UAV trajectory prediction errors of machine learning methods before and after flight mode recognition. (b) Comparison of UAV trajectory prediction errors of classical mathematical methods before and after flight mode recognition.
Table 1. Neural network simulation parameter table.

Name of Parameter | Settings
Training sample number | 80
Test sample number | 20
Learning rate | 0.001
Number of input nodes | 7
Number of hidden layer nodes | 8
Number of output nodes | 4
Normalization function | Sigmoid
Activation function | ReLU
Table 2. The five flight states of UAVs and their corresponding error.

Flight States | Climbing | Level Flight | Circling | Turning | Descending
Errors/m | 0.0417 | 0.0594 | 0.0789 | 0.0441 | 0.0470
Table 3. Trajectory prediction error comparison table.

Name of Method | Minimum/m | Maximum/m | Average/m
Traditional machine learning method | 0.0845 | 0.1630 | 0.1100
Improved machine learning method | 0.0366 | 0.0889 | 0.06426
Traditional mathematical method | 0.046 | 0.071 | 0.057
Improved mathematical method | 0.0225 | 0.045 | 0.0298
