Article

System Identification of an Aerial Delivery System with a Ram-Air Parachute Using a NARX Network

by
Kemal Güven
* and
Andaç Töre Şamiloğlu
Mechanical Engineering Department, Başkent University, Ankara 06500, Turkey
*
Author to whom correspondence should be addressed.
Aerospace 2022, 9(8), 443; https://doi.org/10.3390/aerospace9080443
Submission received: 5 April 2022 / Revised: 2 August 2022 / Accepted: 9 August 2022 / Published: 12 August 2022

Abstract

Neural networks are among the methods used in system identification problems. In this study, a NARX network with a serial-parallel structure was used to identify an unknown aerial delivery system with a ram-air parachute. The dataset was created using the software-in-the-loop method, with Gazebo as the simulator and PX4 as the autopilot software. The performance of the NARX network differed according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a. The results demonstrated that a model with one hidden layer and five neurons, trained using the Bayesian regularization algorithm, was sufficient for this problem.

1. Introduction

In aircraft, system identification can be thought of as estimating aerodynamic parameters or defining a mathematical model of the system. Three methods have been proposed in the literature for estimating the aerodynamic parameters of parachute landing systems [1]. The first covers analytical methods based on computational fluid dynamics; the others are wind-tunnel tests and flight tests. In this study, we focused on the methods used in flight tests.
The purpose of system identification is to obtain a mathematical model relating the inputs and outputs obtained from the flight tests. Hamel and Jategaonkar proposed the 4M (maneuver, measurement, method, model; see Figure 1) requirements for successful system identification [2], arguing that:
  • Control inputs should be created to cover extreme points;
  • High-resolution measurements should be used;
  • The possible mathematical model of the vehicle should be defined; and
  • The most suitable method for the data should be chosen.
Jann and Strickert suggested separating the symmetric and asymmetric maneuvers that need to be carried out in the formation of data to be used in the definition process [3] (Figure 2).
The methods used in parameter estimation can be listed as the equation-error, output-error, and filter-error methods. The choice of method depends on the noise present in the measurements and in the process [2] (Figure 3). If disturbances can be ignored in both, the fastest method, the equation-error method, is preferred. If disturbances are assumed only in the measurements, the output-error method is recommended, and if both are present, the filter-error method is recommended.
The output-error method is the most widely used parameter estimation method in the literature. Grauer computed a dynamic model of an aircraft during flight by adapting the output-error method, which is usually applied to post-flight data, to real-time flight data [4]. In another study using the output-error method, Jann estimated the state variables of a parachute landing system called ALEX via sensor inputs (GPS, magnetometers, gyroscopes, accelerometers) [5]. Jaiswal, Prakash, and Chaturvedi estimated the aerodynamic coefficients of a parachute landing system using the maximum likelihood method and the output-error method [6].
In addition to statistical methods, machine learning techniques, which continue to grow in popularity, have also been used successfully in solving system identification problems. In the literature, artificial neural networks have been used in modeling aircraft dynamics [7,8,9,10,11], estimating aerodynamic forces and moments [12,13,14,15], and in controller designs [16,17]. Both feed-forward neural networks [14,18] and recurrent neural networks have been widely used in these studies [19]. Roudbari and Saghafi proposed a new method for describing the dynamics of highly maneuverable aircraft, modeling the flight dynamics with artificial neural networks. The difference between their approach and traditional methods is that they did not use aerodynamic information during the training process [20]. Bagherzadeh augmented the artificial neural network model with flight dynamics information in order to increase its performance [21].
The development of deep learning methods has enabled their frequent use in system identification problems. The residual neural network, one type of deep neural network, is among the methods used to solve these problems. Goyal and Benner developed a special architecture for dynamic systems called LQResNet [22]. The method they proposed allowed observations to be used in the modeling of dynamical systems; their model was based on the principle that the rate of change of a variable depends on the linear and quadratic forms of the variable. Chen and Xiu proposed a framework called gResNet, in which the residual is defined as the estimation error of a prior model and a DNN is used to model that residual [23].
In this study, a NARX network with a serial-parallel structure was used to identify an unknown aerial delivery system with a ram-air parachute. The dataset was created using the software-in-the-loop method, with Gazebo as the simulator and PX4 as the autopilot software. The performance of the NARX network differed according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a.

2. Mathematical Model

In this study, a 6-degree-of-freedom model developed for a parachute landing system was used [24]. The equations of motion of the vehicle can be written as:
$$ m\,I_{3\times3}\begin{bmatrix}\dot{u}\\\dot{v}\\\dot{w}\end{bmatrix} = F - m\,S(\omega)\begin{bmatrix}u\\v\\w\end{bmatrix}, $$

$$ I\begin{bmatrix}\dot{p}\\\dot{q}\\\dot{r}\end{bmatrix} = M - S(\omega)\,I\begin{bmatrix}p\\q\\r\end{bmatrix}, $$
where $m$ is the mass, $I$ is the inertia matrix, $[u, v, w]$ are the linear velocities and $[p, q, r]$ the angular velocities in the body frame, $S(\omega)$ is the skew-symmetric matrix formed from the angular velocity components, $F$ is the force, and $M$ is the moment.
Due to the xz-symmetry plane of the parachute landing system, the inertia matrix consists of four unique components.
$$ S(\omega) = \begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix} $$

$$ I = \begin{bmatrix} I_{xx} & 0 & -I_{xz} \\ 0 & I_{yy} & 0 \\ -I_{xz} & 0 & I_{zz} \end{bmatrix} $$
The forces and moments affecting the parachute are caused by gravity and aerodynamic forces. The gravitational force can be written according to the body (b) axis.
$$ F_g = mg \begin{bmatrix} -\sin\theta \\ \cos\theta\sin\phi \\ \cos\theta\cos\phi \end{bmatrix} $$
The aerodynamic forces acting on the system are written in the body axis using the relevant aerodynamic coefficients ($C_{D_0}$, $C_{D_{\alpha^2}}$, $C_{D_{\delta_s}}$, $C_{Y_\beta}$, $C_{L_0}$, $C_{L_\alpha}$, $C_{L_{\delta_s}}$).
$$ F_a = QS\,R_w^b \begin{bmatrix} -\left(C_{D_0} + C_{D_{\alpha^2}}\alpha^2 + C_{D_{\delta_s}}\bar{\delta}_s\right) \\ C_{Y_\beta}\beta \\ -\left(C_{L_0} + C_{L_\alpha}\alpha + C_{L_{\delta_s}}\bar{\delta}_s\right) \end{bmatrix} $$
In this equation, $Q$ is the dynamic pressure, $S$ represents the parachute surface area, $\bar{\delta}_s$ represents the symmetric trailing-edge deflection, and $R_w^b$ is the rotation matrix from the aerodynamic (wind) coordinate system to the body axis.
$$ R_w^b = R_\alpha R_\beta = \begin{bmatrix} \cos\alpha & 0 & -\sin\alpha \\ 0 & 1 & 0 \\ \sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\alpha\cos\beta & -\cos\alpha\sin\beta & -\sin\alpha \\ \sin\beta & \cos\beta & 0 \\ \sin\alpha\cos\beta & -\sin\alpha\sin\beta & \cos\alpha \end{bmatrix} $$
The angle of attack and slip angle are obtained from the velocity vector in the body axis.
$$ \alpha = \tan^{-1}\!\left(\frac{v_z}{v_x}\right) $$

$$ \beta = \tan^{-1}\!\left(\frac{v_y}{\sqrt{v_x^2 + v_z^2}}\right) $$
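As a quick numerical check, the two arctangent relations above can be evaluated directly; `aero_angles` is an illustrative helper name, not part of the authors' code:

```python
import numpy as np

def aero_angles(v_body):
    """Angle of attack and sideslip from the body-axis airspeed vector.

    v_body = [vx, vy, vz]; follows the two arctangent relations above.
    """
    vx, vy, vz = v_body
    alpha = np.arctan2(vz, vx)                     # angle of attack
    beta = np.arctan2(vy, np.hypot(vx, vz))        # sideslip angle
    return alpha, beta
```

For pure forward flight, `aero_angles([10.0, 0.0, 0.0])` returns zero for both angles, as expected.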
The velocity vector in the body axis consists of the global velocity and the wind effect.
$$ V_a = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = \begin{bmatrix} u \\ v \\ w \end{bmatrix} - R_n^b \begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix} $$
$R_n^b$ is the rotation matrix from the North-East-Down coordinate system, whose origin lies at the center of mass of the parachute, to the body axis. Euler angles (roll $\phi$, pitch $\theta$, yaw $\psi$) are used in this notation.
$$ R_\phi = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix} $$

$$ R_\theta = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} $$

$$ R_\psi = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

$$ R_n^b = R_\phi R_\theta R_\psi $$
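The Euler-angle rotation sequence can be sketched in a few lines, assuming the conventional z-axis form for $R_\psi$; orthonormality of the product is a useful sanity check (the `rot_ned_to_body` helper is hypothetical):

```python
import numpy as np

def rot_ned_to_body(phi, theta, psi):
    """NED-to-body rotation matrix R_n^b = R_phi @ R_theta @ R_psi."""
    c, s = np.cos, np.sin
    R_phi = np.array([[1, 0, 0],                    # roll about x
                      [0, c(phi), s(phi)],
                      [0, -s(phi), c(phi)]])
    R_theta = np.array([[c(theta), 0, -s(theta)],   # pitch about y
                        [0, 1, 0],
                        [s(theta), 0, c(theta)]])
    R_psi = np.array([[c(psi), s(psi), 0],          # yaw about z
                      [-s(psi), c(psi), 0],
                      [0, 0, 1]])
    return R_phi @ R_theta @ R_psi
```

Any matrix returned by this sequence is orthonormal with determinant one, which is easy to verify numerically.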
Aerodynamic moments affecting the parachute can also be written using the relevant coefficients ($C_{l_\beta}$, $C_{l_p}$, $C_{l_r}$, $C_{l_{\delta_a}}$, $C_{m_0}$, $C_{m_\alpha}$, $C_{m_q}$, $C_{n_\beta}$, $C_{n_p}$, $C_{n_r}$, $C_{n_{\delta_a}}$). These are the roll, pitch, and yaw moments, respectively [2].
$$ M_a = \frac{\rho V_a^2 S}{2} \begin{bmatrix} b\left(C_{l_\beta}\beta + \frac{b}{2V_a}C_{l_p}p + \frac{b}{2V_a}C_{l_r}r + C_{l_{\delta_a}}\bar{\delta}_a\right) \\ \bar{c}\left(C_{m_0} + C_{m_\alpha}\alpha + \frac{\bar{c}}{2V_a}C_{m_q}q\right) \\ b\left(C_{n_\beta}\beta + \frac{b}{2V_a}C_{n_p}p + \frac{b}{2V_a}C_{n_r}r + C_{n_{\delta_a}}\bar{\delta}_a\right) \end{bmatrix} $$

Here, $\rho$ is the air density, $b$ the canopy span, $\bar{c}$ the mean aerodynamic chord, $\bar{\delta}_a = \delta_a/\delta_{a\,max}$ the normalized asymmetric trailing-edge deflection, and $S$ the canopy reference area.
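The moment expression above can be sketched as follows; the coefficient values used in testing are placeholders, not the Snowflake or ALEX data, and `aero_moments` is an illustrative helper:

```python
import numpy as np

def aero_moments(rho, Va, S, b, cbar, p, q, r, alpha, beta, da_bar, coef):
    """Roll, pitch, and yaw aerodynamic moments per the expression above.

    `coef` is a dict of dimensionless derivatives; any numeric values
    supplied by the caller are assumed, not taken from the paper's tables.
    """
    Q = 0.5 * rho * Va**2 * S      # dynamic pressure times reference area
    roll = b * (coef['Clb'] * beta + b / (2 * Va) * coef['Clp'] * p
                + b / (2 * Va) * coef['Clr'] * r + coef['Clda'] * da_bar)
    pitch = cbar * (coef['Cm0'] + coef['Cma'] * alpha
                    + cbar / (2 * Va) * coef['Cmq'] * q)
    yaw = b * (coef['Cnb'] * beta + b / (2 * Va) * coef['Cnp'] * p
               + b / (2 * Va) * coef['Cnr'] * r + coef['Cnda'] * da_bar)
    return Q * np.array([roll, pitch, yaw])
```

With all coefficients zeroed except a static pitch term, only the pitch component is nonzero, which matches the structure of the moment vector.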

3. Materials and Methods

The dataset was created using the software-in-the-loop method. Gazebo was used as the simulator and PX4 was used as the autopilot software. A virtual flight was performed in the Gazebo environment (Figure 4).
The parameters required for the simulation were used considering the autonomous landing system with a parachute model named Snowflake (Table 1) [3].
Gazebo-compatible sensor models were used to obtain the flight data for the vehicle in the simulation environment. These consisted of a gyroscope, magnetometer, accelerometer, barometer, and GPS. The estimation of the state variables of the vehicle was carried out with the PX4 software, using the extended Kalman filter.
PX4 has a state estimation module called EKF2 which uses the EKF algorithm. It uses IMU data in the state prediction phase. To correct these values, a GPS and barometer are used in the state correction phase [24].
Simplified models of the sensors used can be shown similarly [25]:
$$ x_m = x + b + n, $$

$$ \dot{b} = n_b, $$

where $x_m$ is the measured value, $x$ the true value, $b$ the bias, $n$ the Gaussian measurement noise, and $n_b$ the Gaussian noise driving the bias random walk. The sensor parameters used in the simulation are given in Table 2.
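This bias-plus-noise model can be simulated in a few lines, assuming a simple discretization of the bias random walk; the noise levels in the example are illustrative, not the Table 2 values:

```python
import numpy as np

def simulate_sensor(x_true, dt, sigma_n, sigma_b, b0=0.0, rng=None):
    """Measurements x_m = x + b + n with a random-walk bias (b' = n_b).

    sigma_n: measurement-noise std; sigma_b: bias random-walk intensity.
    """
    rng = rng or np.random.default_rng(0)
    b = b0
    out = []
    for x in x_true:
        b += np.sqrt(dt) * sigma_b * rng.normal()    # integrate bias walk
        out.append(x + b + sigma_n * rng.normal())   # biased, noisy sample
    return np.array(out)
```

With both noise terms set to zero the model reduces to a perfect sensor, which provides a quick sanity check.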
The simulation was carried out in a windless environment with an air density of 1.225 kg/m³. The system was released from a height of 500 m, with the drop initiated at 30 s. The control inputs $\bar{\delta}_a$ and $\bar{\delta}_s$ were applied as full-right and full-left deflections (Figure 5).
The flight data received from the system were arranged and the input vector x and the output vector y were created.
$$ x = \begin{bmatrix} \bar{\delta}_a & \bar{\delta}_s \end{bmatrix}^T $$

$$ y = \begin{bmatrix} u & v & w & p & q & r \end{bmatrix}^T $$
A total of 270 s of data was reduced to 225 s to cover the flight section, and 2250 samples were produced at a 10 Hz measurement rate. The position, attitude, and velocity of the vehicle in the flight data used are shown in Figure 6, Figure 7 and Figure 8.
In order to evaluate the performance of the model, 70% of the flight was used for training and the remaining 30% was used for testing. Since the landing phase is the most important, the first portion of the flight was selected as the training data.

3.1. NARX Network

A nonlinear autoregressive exogenous (NARX) network is a nonlinear model representation used in time series models. In this notation, the model’s outputs depend on the past output values, the inputs, and the past values of the inputs. Its mathematical expression is given as follows:
$$ y(t) = f\big(y(t-1),\, y(t-2),\, \ldots,\, y(t-n_y);\; u(t),\, u(t-1),\, \ldots,\, u(t-n_u)\big), $$
where y denotes outputs, u denotes inputs, and f represents a nonlinear function. The structure in which f is modeled as a neural network is named the NARX neural network (NARX network) [26]. This model has been used for modeling conventional fixed-wing [27,28] and rotary-wing [29,30] aircraft. A NARX neural network can be modeled using two types of models: parallel and serial-parallel (Figure 9). In the parallel model, the estimated output values are fed back into the system.
$$ \hat{y}(t) = f\big(\hat{y}(t-1),\, \hat{y}(t-2),\, \ldots,\, \hat{y}(t-n_y);\; u(t),\, u(t-1),\, \ldots,\, u(t-n_u)\big) $$
In the serial-parallel model, only real system outputs are used:
$$ \hat{y}(t) = f\big(y(t-1),\, y(t-2),\, \ldots,\, y(t-n_y);\; u(t),\, u(t-1),\, \ldots,\, u(t-n_u)\big) $$

where $\hat{y}(t)$ represents the estimated output value at time $t$.
Since the dataset used in this study included real system outputs, the serial-parallel structure was preferred. The feed-forward network block shown in Figure 9 consisted of a multilayer feed-forward neural network with at least one hidden layer of neurons. Each neuron calculates its output by applying the activation function to the weighted sum of its inputs, as shown in Figure 10, where $x_n$, $w_n$, $b$, and $f$ represent the inputs, weights, bias, and activation function, respectively. The architecture of the NARX neural network with a serial-parallel structure is shown in Figure 11.
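The serial-parallel formulation can be sketched as a regressor-building step: because measured outputs appear in the regressors, training reduces to fitting an ordinary feed-forward network. The `narx_regressors` helper below is illustrative and assumes scalar input and output series:

```python
import numpy as np

def narx_regressors(u, y, nu, ny):
    """Build the regressor matrix for a serial-parallel (open-loop) NARX model.

    Row t stacks the measured outputs y(t-1)...y(t-ny) and the inputs
    u(t)...u(t-nu); the corresponding target is y(t).
    """
    d = max(ny, nu)
    rows, targets = [], []
    for t in range(d, len(y)):
        past_y = [y[t - k] for k in range(1, ny + 1)]   # delayed outputs
        past_u = [u[t - k] for k in range(0, nu + 1)]   # current + delayed inputs
        rows.append(np.array(past_y + past_u))
        targets.append(y[t])
    return np.array(rows), np.array(targets)
```

For example, with `ny=2` and `nu=1`, each regressor row contains two delayed outputs and two input samples, so a 10-sample series yields an 8 × 4 regressor matrix.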
The selection of activation functions plays an important role in model design; the functions used in the hidden layers differ from those used in the output layer. Differentiable nonlinear functions are preferred in hidden layers because, compared with linear functions, they enable models to handle more complex problems. Functions frequently used in hidden layers in the literature are the ReLU (rectified linear unit), sigmoid (logistic), and tanh (hyperbolic tangent) functions. The function used in the output layer depends on the type of problem: linear functions are used in regression problems, whereas softmax or sigmoid functions are used in classification problems. These options are summarized in Table 3.
The process of calculating and updating the weights is called training. The aim here is to minimize the targeted error function for model performance. In the neural network model, this function can be written as the sum of the squares of the errors:
$$ E = \sum_{i=1}^{n} e_i^2, $$
where e is the error and n is the number of data.
The training algorithm used in feed-forward neural network methods is known as the back-propagation algorithm [31]. Since the convergence rate of the steepest descent method, used as standard in the back-propagation algorithm, is slow, many learning algorithms have been developed for neural network training. The main ones are the Levenberg–Marquardt algorithm [32], the Bayesian regularization algorithm [33], and the scaled conjugate gradient algorithm [37].

3.2. Levenberg–Marquardt

The Levenberg–Marquardt algorithm is a second-order training algorithm used in solving nonlinear optimization problems. The Jacobian of the error function shown in Equation (23), taken with respect to the weights to be updated, can be calculated as follows:
$$ J = \begin{bmatrix} \dfrac{\partial e_1}{\partial w_1} & \cdots & \dfrac{\partial e_1}{\partial w_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial e_n}{\partial w_1} & \cdots & \dfrac{\partial e_n}{\partial w_m} \end{bmatrix}, $$
where m is the number of weights in the network. After finding the Jacobian matrix, the gradient vector (g) and the Hessian matrix (H) can also be calculated.
g = J T e
H = J T J
The weights are updated based on the Jacobian matrix.
$$ w_{i+1} = w_i - \left(J_i^T J_i + \alpha_i I\right)^{-1} J_i^T e_i, $$

where $\alpha_i$ is the learning coefficient and $I$ is the identity matrix. A theoretical analysis can be found in [35].
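A single update of this form can be sketched as follows, assuming the residual vector and its Jacobian are available as callables (`lm_step` is a hypothetical helper, not MATLAB's `trainlm`):

```python
import numpy as np

def lm_step(w, residuals, jacobian, alpha):
    """One Levenberg-Marquardt update:
    w_new = w - (J^T J + alpha I)^(-1) J^T e."""
    e = residuals(w)
    J = jacobian(w)
    H = J.T @ J + alpha * np.eye(len(w))   # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)
```

For a consistent linear least-squares problem with damping set to zero, a single step recovers the exact solution, since the update then reduces to the normal equations.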

3.3. Bayesian Regularization

The error function is rearranged using the regularization method to improve the generalization of the neural network [36]:
$$ F = \mu E_w + \nu E, $$

where $\mu$ and $\nu$ are the regularization parameters and $E_w$ is the sum of the squared weights. The Bayesian regularization method is used to optimize these regularization parameters. Treating the weights as random variables, it aims to find the weight values that maximize the posterior probability distribution of the weights given the dataset. The posterior distribution can be expressed according to Bayes' rule:
$$ P(w \mid D, \mu, \nu, N) = \frac{P(D \mid w, \nu, N)\, P(w \mid \mu, N)}{P(D \mid \mu, \nu, N)}, $$
where $D$ represents the dataset and $N$ the neural network model; $P(D \mid w, \nu, N)$ is the likelihood function, $P(w \mid \mu, N)$ the prior density, and $P(D \mid \mu, \nu, N)$ the normalization factor. The noise in the dataset and in the weights is assumed to be Gaussian distributed; thus, the likelihood function and the prior density can be written explicitly.
$$ P(D \mid w, \nu, N) = \frac{e^{-\nu E}}{Z_D(\nu)} $$

$$ P(w \mid \mu, N) = \frac{e^{-\mu E_w}}{Z_w(\mu)} $$

Here, $Z_D(\nu) = (\pi/\nu)^{n/2}$ and $Z_w(\mu) = (\pi/\mu)^{m/2}$. By rearranging these equations, the posterior distribution of the weights can be rewritten:
$$ P(w \mid D, \mu, \nu, N) \propto \frac{e^{-(\mu E_w + \nu E)}}{Z_w(\mu)\, Z_D(\nu)} $$
The regularization parameters determine the behavior of the model $N$. Bayes' rule can also be applied to optimize these parameters:
$$ P(\mu, \nu \mid D, N) = \frac{P(D \mid \mu, \nu, N)\, P(\mu, \nu \mid N)}{P(D \mid N)} $$
As can be seen in Equation (34), $P(\mu, \nu \mid D, N)$ is directly proportional to $P(D \mid \mu, \nu, N)$. Therefore, the maximum of $P(D \mid \mu, \nu, N)$ must be calculated. The regularization parameters can then be obtained using the Taylor expansion of Equation (29). A theoretical analysis can be found in [37].
$$ \mu = \frac{\gamma}{2E_w} $$

$$ \nu = \frac{n - \gamma}{2E} $$

$$ \gamma = m - 2\mu\,\mathrm{tr}\!\left(H^{-1}\right) $$
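One re-estimation step can be sketched as follows; the factor of two in $\gamma$ follows the Gauss–Newton treatment of Foresee and Hagan [33], and `br_update` is an illustrative helper, not the MATLAB `trainbr` implementation:

```python
def br_update(EW, E, H_inv_trace, m, n, mu):
    """Re-estimate the regularization parameters after a training step.

    gamma is the effective number of parameters; EW is the sum of squared
    weights, E the sum of squared errors, m the number of weights, n the
    number of data points, and H_inv_trace = tr(H^-1) for the Hessian of
    the regularized objective.
    """
    gamma = m - 2.0 * mu * H_inv_trace
    mu_new = gamma / (2.0 * EW)
    nu_new = (n - gamma) / (2.0 * E)
    return mu_new, nu_new, gamma
```

These updates alternate with ordinary weight training until $\mu$, $\nu$, and the weights jointly converge.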

3.4. Scaled Conjugate Gradient

In the steepest descent algorithm implemented in the standard back-propagation algorithm, the search is made in the direction opposite to the gradient vector while updating the weights. Although the error function decreases most rapidly in this direction, the same cannot be said for the convergence rate. Conjugate gradient algorithms instead search along directions that yield faster convergence, called conjugate directions. In this method, the search starts in the direction opposite to the gradient vector, as in the steepest descent algorithm, but differs from the second iteration onward as follows:
$$ p_0 = -g_0 $$

$$ x_{k+1} = x_k + \alpha_k p_k $$

$$ p_k = -g_k + \beta_k p_{k-1} $$
Different algorithms have been developed according to how the $\beta_k$ coefficient is calculated. Møller combined the LM approach and the conjugate gradient algorithm for calculating the step size in the algorithm he developed, called the scaled conjugate gradient algorithm [37]. In this algorithm, which is based on calculating an approximation of the Hessian matrix, the design parameters change at each iteration and are independent of the user; this is the most important factor in the algorithm's success.
$$ H_k p_k \approx \frac{E'(w_k + \sigma_k p_k) - E'(w_k)}{\sigma_k} + \lambda_k p_k $$

$$ \beta_k = \frac{\lVert g_{k+1} \rVert^2 - g_{k+1}^T g_k}{g_k^T g_k} $$

$$ p_{k+1} = -g_{k+1} + \beta_k p_k $$
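The direction update alone can be sketched as follows; the full SCG algorithm also adapts $\sigma_k$ and $\lambda_k$ at each iteration, which is omitted here (`scg_direction` is an illustrative name):

```python
import numpy as np

def scg_direction(g_new, g_old, p_old):
    """Next conjugate search direction:
    beta = (|g_new|^2 - g_new . g_old) / (g_old . g_old),
    p_new = -g_new + beta * p_old."""
    beta = (g_new @ g_new - g_new @ g_old) / (g_old @ g_old)
    return -g_new + beta * p_old
```

When successive gradients are orthogonal, as with exact line searches on a quadratic, the formula reduces to the familiar Fletcher–Reeves ratio of squared gradient norms.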

4. Results and Discussion

The performance of the NARX network differs according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a. The root-mean-square error (RMSE) and mean absolute error (MAE) were used to evaluate model performance. The metrics used are presented in Table 4.
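For reference, the two metrics in Table 4 can be computed as follows (a minimal sketch, not the authors' evaluation script):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))
```

RMSE penalizes large single-sample errors more heavily than MAE, which is why both are reported when comparing the models.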
First, the performance of the training algorithms (Bayesian regularization, Levenberg–Marquardt, scaled conjugate gradient) was compared in a model consisting of a single hidden layer and 15 neurons. The input and output delay vectors were determined as in [12]. The hyperbolic tangent was used as the activation function in the hidden layer and a linear function in the output layer. The errors according to the training algorithms are shown in Table 5.
Despite its fast training time, SCG performed worse than LM and BR. Next, the number of hidden layers and the number of neurons in them were varied, and the results are shown in Table 6. BR was used as the training algorithm.
According to the angle-of-attack and sideslip-angle errors, model 4, which consisted of a single hidden layer and five neurons, showed the best performance. A comparison of the model results with the real system is shown in Figure 12.
In order to observe the performance of the models with the same aerodynamic characteristics and different weights, the system weight was increased to 10 kg and a flight was carried out from an altitude of 1000 m. Control inputs produced during the flight are shown in Figure 13.
The model performances for a 120 s flight were compared using error metrics and computational cost measures. As can be seen in Table 7, increasing the number of hidden layers and neurons increased the computation time. Considering the model performances, model 4 exhibited the best performance.
The estimation errors for increased weight are shown in Figure 14. The increase in the number of hidden layers increased the overshoot values, although it did not result in any significant changes in model performance. Finally, the performance of the developed models was examined in a system with different aerodynamic properties. An aerial delivery system called ALEX was used to determine the necessary parameters (Table 8) [3].
The control inputs used in this flight, starting from a 200 m altitude, are shown in Figure 15.
The performances of the models are shown in Table 9 and Figure 16. Model 1, which was found to have the best performance, consisted of a single hidden layer and 10 neurons.
In order to determine the limits of the developed models, the effects of weight and aerodynamic coefficients on the models were observed. The aerodynamic coefficients that determined the effects of control inputs on force and moment were chosen. Error rates were observed by changing the selected parameters by ±10%. RMSE was used as the error metric. As can be seen in Table 10, model 4, which consisted of a single hidden layer and five neurons, demonstrated the best performance.
Considering the maximum error of five degrees, the limits of the models could be determined approximately, according to the parameters, via interpolation. The results are shown in Table 11.

5. Conclusions

In this study, a simulation environment was designed for a parachute landing system in the Gazebo/ROS environment. By implementing an aerial delivery system in the PX4 autopilot software, the necessary infrastructure for a software-in-the-loop system was created. Flights were performed in the simulation environment and flight data were collected. Using these data, a NARX network model was trained to serve as a dynamic model estimating the system. During the training process, different training algorithms were used (LM, BR, and SCG), and the effects of the numbers of hidden layers and neurons were observed. The effects of weight and aerodynamic coefficients on the models were also examined and the model limits were determined. As a result, the model consisting of a single hidden layer and five neurons outperformed the other models evaluated. As the model parameters deviate further from their nominal values, the best-performing model may change; therefore, model errors can be reduced by means of online training methods.
In future studies, pre-trained models will be updated using online training methods. Furthermore, the trained model will be tested using real flight data. After the model is verified, controller studies will be carried out and autonomous landing of the landing system will be carried out at the desired target location.

Author Contributions

Conceptualization, K.G. and A.T.Ş.; methodology, K.G.; software, K.G.; validation, K.G. and A.T.Ş.; investigation, K.G.; writing—original draft preparation, K.G.; writing—review and editing, A.T.Ş. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yakimenko, O.A. Precision Aerial Delivery Systems: Modeling, Dynamics, and Control; American Institute of Aeronautics and Astronautics: Monterey, CA, USA, 2015. [Google Scholar]
  2. Hamel, P.G.; Jategaonkar, R.V. Evolution of flight vehicle system identification. J. Aircr. 1996, 33, 9–28. [Google Scholar] [CrossRef]
  3. Jann, T.; Strickert, G. System Identification of a Parafoil-Load Vehicle-Lessons Learned. In Proceedings of the 18th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Munich, Germany, 23–26 May 2005. [Google Scholar]
  4. Grauer, J.A. Real-Time Parameter Estimation using Output Error. In Proceedings of the AIAA Atmospheric Flight Mechanics Conference, National Harbor, MD, USA, 13–17 January 2014. [Google Scholar]
  5. Jann, T. Aerodynamic model identification and GNC design for the parafoil-load system ALEX. In Proceedings of the 16th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Boston, MA, USA, 21–24 May 2001. [Google Scholar]
  6. Jaiswal, R.; Prakash, O.; Chaturvedi, S.K. A Preliminary Study of Parameter Estimation for Fixed Wing Aircraft and High Endurability Parafoil Aerial Vehicle. INCAS Bull. 2020, 12, 95–109. [Google Scholar] [CrossRef]
  7. Heimes, F.; Zalesski, G.; Land, W.; Oshima, M. Traditional and evolved dynamic neural networks for aircraft simulation. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997. [Google Scholar]
  8. Saghafi, F.; Heravi, B.M. Identification of Aircraft Dynamics Using Neural Network Simultaneous Optimization Algorithm. In Proceedings of the 2005 European Modeling and Simulation Conference (ESM), Porto, Portugal, 24–26 October 2005. [Google Scholar]
  9. Harris, J.; Arthurs, F.; Henrickson, J.V.; Valasek, J. Aircraft system identification using artificial neural networks with flight test data. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 679–688. [Google Scholar]
  10. Narendra, K.; Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1990, 1, 4–27. [Google Scholar] [CrossRef] [PubMed]
  11. Phan, M.Q.; Juang, J.N.; Hyland, D.C. On Neural Networks in Identification and Control of Dynamic Systems. In Wave Motion, Intelligent Structures and Nonlinear Mechanics; National Aeronautics and Space Administration: Washington, DC, USA, 1995; pp. 194–225. [Google Scholar] [CrossRef]
  12. Valmorbida, G.; Wen-Chi, L.; Mora-Camino, F. A neural approach for fast simulation of flight mechanics. In Proceedings of the 38th Annual Simulation Symposium (ANSS’05), San Diego, CA, USA, 4–6 April 2005. [Google Scholar]
  13. Hu, Z.; Balakrishnan, S.N. Parameter Estimation in Nonlinear Systems Using Hopfield Neural Networks. J. Aircr. 2005, 42, 41–53. [Google Scholar] [CrossRef]
  14. Linse, D.J.; Stengel, R.F. Identification of aerodynamic coefficients using computational neural networks. J. Guid. Control. Dyn. 1993, 16, 1018–1025. [Google Scholar] [CrossRef]
  15. Puttige, V.R.; Anavatti, S.G. Real-Time Neural Network Based Online Identification Technique for a UAV Platform. In Proceedings of the 2006 International Conference on Computation Intelligence for Modelling Control and Automation and International Conference on Intelligent Agents Web Technologies and International Commerce (CIMCA’06), Washington, DC, USA, 28 November–1 December 2006; p. 92. [Google Scholar] [CrossRef]
  16. Kamasaldan, S.; Ghandakly, A. A neural network parallel adaptive controller for fighter aircraft pitch-rate tracking. IEEE Trans. Instrum. Meas. 2011, 60, 258–267. [Google Scholar] [CrossRef]
  17. Savran, A.; Tasaltin, R.; Becerikli, Y. Intelligent adaptive nonlinear flight control for a high performance aircraft with neural networks. ISA Trans. 2006, 45, 225–247. [Google Scholar] [CrossRef]
  18. Hess, R. On the use of back propagation with feed-forward neural networks for the aerodynamic estimation problem. In Proceedings of the Flight Simulation and Technologies, Guidance, Navigation, and Control and Co-Located Conferences, Monterey, CA, USA, 9–11 August 1993. [Google Scholar]
  19. Raol, J.; Jategaonkar, R. Aircraft parameter estimation using recurrent neural networks: A critical appraisal. In Proceedings of the 20th Atmospheric Flight Mechanics Conference, Guidance, Navigation, and Control and Co-located Conferences, Baltimore, MD, USA, 7–10 August 1995. [Google Scholar]
  20. Roudbari, A.; Saghafi, F. Intelligent modeling and identification of aircraft nonlinear flight dynamics. Chin. J. Aeronaut. 2014, 27, 759–771. [Google Scholar] [CrossRef]
  21. Bagherzadeh, S.A. Nonlinear aircraft system identification using artificial neural networks enhanced by empirical mode decomposition. Aerosp. Sci. Technol. 2018, 75, 155–171. [Google Scholar] [CrossRef]
  22. Goyal, P.; Benner, P. LQResNet: A Deep Neural Network Architecture for Learning Dynamic Processes. arXiv 2021, arXiv:2103.02249. [Google Scholar]
  23. Chen, Z.; Xiu, D. On Generalized Residual Network for Deep Learning of Unknown Dynamical Systems. J. Comput. Phys. 2021, 438, 110362. [Google Scholar] [CrossRef]
  24. PX4 Development. Available online: https://docs.px4.io/master/en/development/development.html (accessed on 1 April 2022).
  25. Xiao, K.; Tan, S.; Wang, G.; An, X.; Wang, X.; Wang, X. XTDrone: A Custom-izable Multi-Rotor UAVs Simulation Platform. In Proceedings of the 4th International Conference on Robotics and Automation Sciences (ICRAS), Chengdu, China, 12–14 June 2020. [Google Scholar]
  26. Menezes, J.M.P., Jr.; Barreto, G.A. Long-term time series prediction with the NARX network: An empirical evaluation. Neurocomputing 2008, 71, 3335–3343. [Google Scholar] [CrossRef]
  27. Sezginer, K.; Kasnakoğlu, C. Autonomous navigation of an aircraft using a narx recurrent neural network. In Proceedings of the 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 28–30 November 2019. [Google Scholar]
  28. Puttige, V.R. Neural Network Based Adaptive Control for Autonomous Flight of Fixed Wing Unmanned Aerial Vehicles. Ph.D. Thesis, University of New South Wales, Sydney, NSW, Australia, 2008. [Google Scholar]
  29. Tooba, H.; Kadri, M.B. Comparison of different techniques for experimental modeling of a Quadcopter. In Proceedings of the 2019 Second International Conference on Latest Trends in Electrical Engineering and Computing Technologies (INTELLECT), Karachi, Pakistan, 13–14 November 2019. [Google Scholar]
  30. Avdeev, A.; Assaleh, K.; Jaradat, M.A. Quadrotor Attitude Dynamics Identification Based on Nonlinear Autoregressive Neural Network with Exogenous Inputs. Appl. Artif. Intell. 2021, 35, 265–289. [Google Scholar] [CrossRef]
  31. Werbos, P.J. Beyond Regression: New Tools for Prediction and Analysis in The Behavioral Sciences. Ph.D. Thesis, Harvard University, Boston, MA, USA, 1974. [Google Scholar]
  32. Lera, G.; Pinzolas, M. Neighborhood based Levenberg-Marquardt algorithm for neural network training. IEEE Trans. Neural Netw. 2002, 13, 1200–1203. [Google Scholar] [CrossRef] [PubMed]
  33. Foresee, F.D.; Hagan, M.T. Gauss-Newton approximation to Bayesian learning. In Proceedings of the International Joint Conference on Neural Networks, Lausanne, Switzerland, 8–10 June 1997. [Google Scholar]
  34. Saipraneeth, G.; Jyotindra, N.; Roger, S.; Sachin, G. A Bayesian regularization-backpropagation neural network model for peeling computations. J. Adhes. 2020. [Google Scholar] [CrossRef]
  35. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. Numer. Anal. Lect. Notes Math. 1978, 630, 105–116. [Google Scholar]
  36. MacKay, D.J.C. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992, 4, 448–472. [Google Scholar] [CrossRef]
  37. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
Figure 1. 4M-based system identification process.
Figure 2. Recommended control input.
Figure 3. Output-error method.
Figure 4. Gazebo simulation of the system.
Figure 5. Control inputs.
Figure 6. Position of the system.
Figure 7. Euler angles.
Figure 8. Velocity of the system.
Figure 9. Parallel (Left) and serial-parallel (Right) NARX networks.
Figure 10. Structure of a neuron.
Figure 11. Serial-parallel NARX network architecture.
Figure 12. Estimation errors.
Figure 13. Control inputs for the 10 kg system.
Figure 14. Estimation errors for increased weight.
Figure 15. Control inputs for ALEX.
Figure 16. Estimation errors for ALEX.
Table 1. Parameters of the Snowflake parachute model [3].

Parameter                           Value
Mass (m)                            1.9 kg
Canopy reference area (S)           1 m²
Inertia matrix (I)                  [0.042, 0, 0.0068; 0, 0.027, 0; 0.0068, 0, 0.054]
Maximum brake deflection (δ_s,max)  0.25 m
Aerodynamic coefficients            C_D0 = 0.15, C_Dα² = 0.90, C_Yβ = 0.05, C_L0 = 0.25, C_Lα = 0.68, C_m0 = 0, C_mα = 0, C_mq = 0.265, C_lβ = 0.036, C_lp = 0.355, C_lr = 0, C_nβ = 0.036, C_Dδs = 0.09, C_lδa = 0.15, C_np = 0, C_nδa = 0.003
Table 2. Parameters used in the simulation.

Sensor          Noise Density (σ_n)   Random Walk (σ_{n_b})   Bias Correlation Time (s)
Gyroscope       0.00018               0.000                   1000.0
Accelerometer   0.00186               0.006                   300.0
Magnetometer    0.0004                0.000                   600.0
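The noise parameters in Table 2 can be applied in discrete time with the standard noise-density/random-walk sensor error model used by Gazebo-style IMU plugins: white measurement noise scaled by the noise density, plus a first-order Gauss-Markov bias driven by the random walk. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def simulate_sensor_noise(n_steps, dt, noise_density, random_walk,
                          bias_corr_time, rng=None):
    """Additive sensor error: white noise plus a slowly varying bias."""
    rng = rng or np.random.default_rng(0)
    errors = np.empty(n_steps)
    bias = 0.0
    for k in range(n_steps):
        # white measurement noise, scaled by the continuous-time noise density
        white = noise_density / np.sqrt(dt) * rng.standard_normal()
        # first-order Gauss-Markov bias driven by the random-walk intensity
        if bias_corr_time > 0.0:
            bias += (-bias / bias_corr_time) * dt \
                    + random_walk * np.sqrt(dt) * rng.standard_normal()
        errors[k] = white + bias
    return errors

# e.g., gyroscope error samples at 100 Hz using the Table 2 values
gyro_err = simulate_sensor_noise(100, 0.01, 0.00018, 0.000, 1000.0)
```

The division by sqrt(dt) converts the continuous-time noise density into the standard deviation of a discrete sample at the chosen update rate.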
Table 3. Activation functions.

Function             Definition
ReLU                 f(x) = x for x > 0; 0 for x ≤ 0
Sigmoid              f(x) = 1 / (1 + e^(−x))
Hyperbolic tangent   f(x) = tanh(x)
Linear               f(x) = x
Softmax              f(x_i) = e^(x_i) / Σ_{j=1}^{K} e^(x_j)
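The activation functions of Table 3 can be written in a few lines; a minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def relu(x):
    # f(x) = x for x > 0, else 0
    return np.maximum(x, 0.0)

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # f(x) = tanh(x)
    return np.tanh(x)

def linear(x):
    # identity activation, typical for a regression output layer
    return x

def softmax(x):
    # f(x_i) = e^(x_i) / sum_j e^(x_j); max is subtracted for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()
```

For a NARX network used in regression, the hidden layer typically uses a sigmoid or tanh activation and the output layer a linear one.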
Table 4. Metrics used in the evaluation of models.

Measure                  Equation                                  Description
Root-mean-square error   RMSE = sqrt( Σ_{i=1}^{n} e_i² / n )       Low values indicate that the model was successful.
Mean absolute error      MAE = Σ_{i=1}^{n} |e_i| / n
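The two metrics in Table 4 follow directly from the residuals e_i between measured and estimated outputs; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root-mean-square error: sqrt of the mean squared residual
    e = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(e ** 2)))

def mae(y_true, y_pred):
    # mean absolute error: mean of the absolute residuals
    e = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(e)))
```

Because RMSE squares the residuals before averaging, it penalizes occasional large errors more heavily than MAE, which is why the two metrics are reported side by side in Tables 5–10.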
Table 5. Performance based on the training algorithms.

            Train             Test              Total
Algorithm   RMSE     MAE      RMSE     MAE      RMSE     MAE
LM          0.0007   0.0005   0.0026   0.0023   0.0016   0.0011
BR          0.0007   0.0005   0.0025   0.0021   0.0015   0.0011
SCG         0.0260   0.0081   0.0101   0.0018   0.0218   0.0037
Table 6. Performance based on the number of hidden layers and neurons.

      Neurons per hidden layer    Train             Test              Total
No    1    2    3    4            RMSE     MAE      RMSE     MAE      RMSE     MAE
1     10   -    -    -            0.0007   0.0005   0.0025   0.0021   0.0015   0.0011
2     3    -    -    -            0.1538   0.0051   0.0018   0.0016   0.1256   0.0039
3     5    2    -    -            0.0208   0.0072   0.0318   0.0312   0.0250   0.0152
4     5    -    -    -            0.0008   0.0006   0.0012   0.0010   0.0010   0.0007
5     10   5    -    -            0.0007   0.0005   0.0056   0.0044   0.0033   0.0018
6     25   -    -    -            0.0006   0.0005   0.0018   0.0015   0.0012   0.0008
7     50   -    -    -            0.0006   0.0005   0.0023   0.0019   0.0014   0.0010
8     15   12   -    -            0.0007   0.0005   0.0027   0.0020   0.0017   0.0010
9     15   12   12   -            0.0009   0.0005   0.0015   0.0013   0.0012   0.0008
10    15   12   12   6            0.0009   0.0005   0.0016   0.0014   0.0012   0.0008
Table 7. Model performances for increased weight.

Model No   RMSE     MAE      Computational Cost (ms)
1          0.2379   0.1782   42.274
2          0.2235   0.1523   40.971
3          0.2965   0.2206   49.747
4          0.1363   0.0973   43.325
5          0.3029   0.2376   43.777
6          0.2963   0.2071   43.872
7          0.2616   0.1802   44.537
8          0.3100   0.2182   66.263
9          0.3137   0.2219   66.851
10         0.3004   0.2244   66.961
Table 8. Parameters of ALEX [3].

Parameter                   Value
Mass (m)                    97.6 kg
Canopy reference area (S)   19.72 m²
Aerodynamic coefficients    C_D0 = 0.084, C_Dα² = 0.90, C_Yβ = 0.216, C_L0 = 0.25, C_Lα = 2.36, C_m0 = 0, C_mα = 0, C_mq = 0.174, C_lβ = 0.104, C_lp = 0.149, C_lr = 0.096, C_nβ = 0.019, C_Dδs = 0.084, C_lδa = 0.048, C_np = 0.027, C_nδa = 0.039
Table 9. Model performances for ALEX.

Model No   RMSE     MAE
1          0.0296   0.0225
2          0.0491   0.0457
3          0.0781   0.0721
4          0.1251   0.1195
5          0.0413   0.0308
6          0.0562   0.0471
7          0.0388   0.0273
8          0.0734   0.0375
9          0.0510   0.0462
10         0.1024   0.0435
Table 10. Model performances according to changes of 10%.

      m                  C_Dδs              C_lδa              C_lδs              C_nδa
No    +10%     −10%      +10%     −10%      +10%     −10%      +10%     −10%      +10%     −10%
1     0.0035   0.0042    0.0036   0.0034    0.0033   0.0032    0.0030   0.0029    0.1345   0.0039
2     0.0070   0.0072    0.0063   0.0045    0.0057   0.0055    0.0052   0.0056    0.0397   0.0065
3     0.0156   0.0157    0.0129   0.0284    0.0166   0.0182    0.0179   0.0183    0.0591   0.0182
4     0.0032   0.0041    0.0032   0.0023    0.0031   0.0031    0.0027   0.0031    0.0739   0.0028
5     0.0055   0.0094    0.0072   0.0054    0.0064   0.0061    0.0053   0.0058    0.2064   0.0063
6     0.0123   0.0116    0.0111   0.0073    0.0099   0.0104    0.0081   0.0086    0.1334   0.0093
7     0.0119   0.0114    0.0122   0.0064    0.0086   0.0087    0.0073   0.0078    0.0623   0.0089
8     0.0123   0.0161    0.0027   0.0084    0.0121   0.0112    0.0098   0.0020    0.2058   0.0110
9     0.0120   0.0104    0.0099   0.0057    0.0110   0.0094    0.0076   0.0113    0.0578   0.0089
10    0.0121   0.0131    0.0102   0.0077    0.0112   0.0102    0.0090   0.0100    0.0950   0.0091
Table 11. Limits of models.

      m                C_Dδs              C_lδa              C_lδs              C_nδa
No    Max     Min      Max      Min       Max      Min       Max      Min       Max      Min
1     6.623   0.000    0.342    −0.156    0.545    −0.258    0.390    −0.200    0.0032   −0.0037
2     4.261   0.000    0.238    −0.093    0.379    −0.087    0.267    −0.055    0.0037   −0.0010
3     2.960   0.847    0.167    0.069     0.229    0.078     0.149    0.052     0.0034   0.0016
4     7.066   0.000    0.372    −0.278    0.571    −0.271    0.422    −0.181    0.0034   −0.0063
5     4.905   0.141    0.221    −0.061    0.354    −0.064    0.264    −0.050    0.0031   −0.0011
6     3.244   0.475    0.178    −0.019    0.282    0.025     0.207    −0.001    0.0032   0.0002
7     3.289   0.450    0.171    −0.036    0.302    0.000     0.219    −0.012    0.0034   0.0001
8     3.244   0.873    0.422    −0.004    0.258    0.033     0.189    −0.335    0.0031   0.0006
9     3.278   0.311    0.188    −0.053    0.269    0.011     0.214    0.023     0.0035   0.0001
10    3.266   0.638    0.185    −0.013    0.267    0.022     0.197    0.013     0.0033   0.0001