Article

Ground Risk Assessment for Unmanned Aircraft Systems Based on Dynamic Model

1
Department of Civil Aviation Operation Technology, China Academy of Civil Aviation Science and Technology, No. 24 Xibahe Beili, Chaoyang District, Beijing 100028, China
2
Department of Air Traffic Management, Civil Aviation University of China, No. 2898 Jinbei Highway, Dongli District, Tianjin 300300, China
3
Meituan Group, No. 4 East Wangjing Road, Chaoyang District, Beijing 100102, China
*
Author to whom correspondence should be addressed.
Drones 2022, 6(11), 324; https://doi.org/10.3390/drones6110324
Submission received: 28 September 2022 / Revised: 17 October 2022 / Accepted: 25 October 2022 / Published: 27 October 2022

Abstract

Ground risk, as one of the key parameters for assessing risk before an operation, plays an important role in the safety management of unmanned aircraft systems. However, correctly identifying ground risk and predicting it accurately remain challenging due to uncertainty in the relevant parameters (people density, ground impact, shelter factor, etc.). Therefore, we propose a dynamic model based on a deep learning approach to assess ground risk. First, the parameters that affect ground risk (people density, ground impact, shelter factor, etc.) are defined and analyzed. Second, a kinetic-theory-based model is applied to assess the extent of ground impact. Third, a joint convolutional neural network–deep neural network model (the Cs-net model) is built to predict the density of people on the ground and to calculate the shelter factor for different degrees of ground impact. Last, a dynamic model combining the deep learning model and the kinetic model is established to predict ground risk. We performed simulations to validate the effectiveness and efficiency of the model. The results indicate that ground risk has spatial-temporal characteristics and that our model can predict risk accurately by capturing these characteristics.

1. Introduction

As a key component of advanced air mobility (AAM), unmanned aircraft systems (UAS) provide the opportunity to improve understanding of the very low-level (VLL) airspace resource [1,2]. Progress in the development of UAS technology is raising convenience to new heights [3]. For instance, package delivery by UAS, or drones, has emerged to address the issue of last-mile delivery. Many conventional logistics and other start-up companies are developing drone business lines. In 2019, Antwork (a start-up company) received the first approved urban drone operation license from the Civil Aviation Administration of China (CAAC) [4]. In 2021, SF Express (a Chinese delivery company) also received approval for its suburban delivery service. At the same time, MeiTuan (a technology-driven retail company) began to deliver take-out food using multi-rotor drones in Shenzhen, China [5,6]. With widespread use of UAS in various areas of industry, the number of drones (especially for recreation) is rapidly increasing. According to a statistical report published by the CAAC [7,8], the number of registered drones is now 832,000, an increase of 58.8 percent over the previous year. In the USA [9], drones are also rapidly increasing in number and complexity. There are currently 865,505 registered drones, including 314,689 commercial drones, 538,172 recreational drones and 3644 paper registrations.
Although drones can bring greater convenience, the risks associated with their operation are gradually increasing [10]. With continuous growth in the number of UAVs, the operational risks associated with UAVs threaten the overall security of the airspace. In recent years, the impact of unauthorized UAVs on the normal operation of airports has been growing, as illustrated by the Heathrow Airport UAV incident in 2019. As most UAVs are required to operate in very-low-altitude segregated airspace, if they crash to the ground, there is potential for severe injury to people and damage to property. To ensure the safety of people on the ground, it is crucial to precisely estimate ground risk in terms of relevant parameters (falling range, people density, etc.) before planning a UAV route.
To better understand ground risk, a review of the literature was conducted, divided into two parts: theoretical and practical.
From a theoretical point of view, ground risk assessment is closely related to calculation of a drone’s ground impact. Many studies have estimated the degree of risk by calculating the magnitude of ground impact. Che Man and Hu [11] applied ADAMS (automatic dynamic analysis of mechanical systems) and MATLAB co-simulation methods to evaluate the impact of different UAV failure types on the crash trajectory and crash area. Andrew [12] built a kinetic energy model to evaluate the ground risk buffer. Both these studies demonstrated that the size of the ground risk buffer is dynamic and depends mainly on the height, velocity, and mass of the operation. These two studies each provided ways to calculate ground risk. Noting that the ground impact is strongly related to descent trajectories, Andres [13] presented a probability density function based on a second-order drag model to predict the trajectory of falling drones. Baptiste [14,15,16] proposed two approaches (a DoF dynamic model and a stochastic approach) to simulate the descent states of fixed-wing drones and focused on ground impact probabilities within 95%, 99% and 99.9% limits. It was found to be possible to accurately predict the ground impact, but the approach used required large storage and computational resources, which may limit its practical application. To address these issues, Liu [17] simplified the model and analyzed associated uncertainties, such as wind and airspeed, that can affect UAV descents.
In much of the published research, ground risk is considered alongside path planning. Many studies have focused on determining the nature of ground risk for different paths. Hu and Pang [18,19,20] introduced an integrated risk assessment model to plan the path of a UAV while minimizing operational risk. These authors combined parameters such as population density, impact area, and fatality rate to calculate a risk index for different paths. Results from a case study demonstrated that the risk assessment model was able to identify high-risk areas, and that the resulting risk map enabled safe UAV path planning. Stefano [21,22] proposed a risk map to quantify the risk associated with an operational area based on the population density, the shelter factor and no-fly zones. These studies have helped to clarify the relationship between risk and path planning but ignore the temporal dynamics of risk. For example, risk assessment needs to consider population density, but most population data used in current methods are static, so the models cannot capture dynamic trends in population change. As a result, the risk value determined by such a model can differ greatly from the actual risk, and the model is unable to meet actual operational needs. Hence, there is now an increasing focus on the spatiotemporal characteristics of risk associated with drone use. Aliaksei [23] used spatiotemporal population density data to assess the ground risk of drone operations. This method enables operators to assess risk more precisely at different times. With the development of artificial intelligence (AI), deep learning methods have been used extensively to predict the density of people based on spatiotemporal characteristics. Nabeela [24] and Zhang [25,26] forecast citywide crowd flows using deep learning methods. The results showed that crowd flows can be predicted from time-period and trend information by incorporating data on crowd closeness. Building on these published studies of crowd flows, a new perspective is provided: dynamic changes in the size of the ground risk buffer can be assessed, and the corresponding ground risk evaluated, by predicting the density of people.
From the perspective of practical applications, aviation bureaus (such as EASA, the CAAC and the FAA) have begun to publish regulations and guidelines to ensure safer, more efficient and sustainable operations [27,28,29]. With respect to drone safety management, the CAAC and EASA have applied the specific operations risk assessment (SORA) methodology [30], proposed by the Joint Authorities for Rulemaking on Unmanned Systems (JARUS), to evaluate and manage the operational risks of the use of drones. Under the SORA approach, risk is divided into ground risk and air risk. Ground risk is determined by parameters such as the mass and velocity of the drone, the ground environment and mitigation methods (people, ground risk buffer, and so on). The SORA approach represents one of the most comprehensive methods for providing effective guidance for aviation bureaus to better manage the operation of drones. However, the SORA method is largely qualitative. For instance, the ground risk buffer (the area in which people are protected from falling drones) is one of the most important ground risk criteria because it not only determines the level of UAV safety, but also has a direct impact on operational efficiency. The standard SORA method for assignment of the ground risk buffer follows a “one-to-one” rule in which the minimal size of the ground risk buffer is equal to the height of the operation. This rule may, however, be too rigid and inflexible for actual operations. Some aviation bureaus, such as the FAA and the civil aviation department of Hong Kong, China (HKCAD), have evaluated the probability and severity of drone accidents to determine the operational risk. This method is similar to the traditional civil aviation safety management approach and is, therefore, convenient for use by safety departments as they do not need to become familiar with a new method. Nevertheless, identifying all accident hazards and accurately calculating their probability of occurrence remains a challenge due to the lack of adequate operational data.
Whichever of the two methods described above is used, they each have clear drawbacks. The current method of assessing ground risk is highly reliant on a large number of experienced experts and the use of many assurance documents, which can be time-consuming and lack a logical or scientific basis. Moreover, operational risk may be mainly affected by dynamic parameters, such as people density, wind, airspeed, and weight of the drones, and the current method has difficulty calculating the size of the buffer quickly and accurately. Furthermore, the rate of utilization of airspace will be reduced if a qualitative risk evaluation method is used, with the result that the operational resources are insufficient for the increasing number of drones.
The existing methods cannot conduct real-time assessments of UAV operation risk, especially because of the large gap between the risk during assessment and the risk during actual operation due to the dynamic characteristics of some parameters. Therefore, how to accurately predict future risk remains a challenge.
Hence, we propose a deep-learning-based method to calculate the dynamic ground risk. The model is divided into three parts: we first analyze the dynamic parameters that may affect ground risk and build a kinetic model to predict the size of the ground impact. Second, we build a joint convolutional neural network and deep neural network model (the Cs-net model) to predict people density and to calculate the casualty rate of a ground impact. Third, a dynamic model combining the deep learning model and the kinetic model is established to predict the ground risk. The contributions of this research include the following:
(1)
In contrast to previous methods, we offer a new approach to the prediction of ground risk under uncertain parameters, based on a joint deep learning method and kinetic model.
(2)
We analyze the uncertainty of the dynamic parameters and show the corresponding relationships to ground risk.
(3)
We consider the relationship between the ground risk and the casualty rate using the model.
The remainder of this paper is structured as follows: Section 2 analyzes the parameters that affect ground risk and builds a kinetic model to evaluate the size of the ground risk buffer. Section 3 builds the deep learning-kinetic model to predict the corresponding people density and the size of ground impact and then calculates the operation ground risk level. Section 4 describes the experiments undertaken to evaluate the robustness of the model and analysis of their results. Section 5 summarizes the conclusions and discusses potential further work.

2. Parameters That Affect Ground Risk

Based on the literature review, the methods used to evaluate ground risk involve consideration of a range of parameters, including people density, shelter, ground impact and others [31]. In this section, we first define the method for calculating the ground risk.
$$P(\mathrm{ground\ risk}) = pop \times A \times P(\mathrm{casualty}) \times P(\mathrm{event})$$
where $P(\mathrm{ground\ risk})$ is the probability of ground risk, $A$ is the area of the impact point in m2, $pop$ is the people density at the ground impact point, $P(\mathrm{event})$ is the probability of the drone falling, and $P(\mathrm{casualty})$ is the fatality rate of the people impacted by the fallen drone.
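As a concrete illustration, the following minimal Python sketch evaluates this ground risk expression for a single grid cell; the function name and the example values are ours and purely illustrative, not taken from the paper.

```python
def ground_risk(pop_density, impact_area, p_casualty, p_event):
    """Ground risk as defined in Equation (1).

    pop_density : people per m^2 in the impacted cell
    impact_area : area of the impact region in m^2
    p_casualty  : fatality rate given an impact (see Section 2.4)
    p_event     : probability that the drone falls onto this cell
    """
    return pop_density * impact_area * p_casualty * p_event


# Illustrative values only (not from the paper)
print(ground_risk(pop_density=0.01, impact_area=5.0,
                  p_casualty=0.3, p_event=1e-4))
```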
The impact of various parameters on ground risk in the context of actual operations is discussed below.

2.1. Ground Impact

The ground impact is defined as the landing point of the UAV after control is lost. This factor plays an important role in the design of a ground risk buffer. The ground impact is positively correlated with the falling range of the UAV [32]—the further the ground impact, the larger the ground risk buffer. The size of the ground risk buffer can be modified by calculating the ground impact. The ground impact can be assessed in terms of a ballistic descent model. The ballistic descent model represents the descent of the drone following loss of control and assumes zero lift-drag. Hence, the ballistic descent of a UAV can be considered as a kind of horizontal projectile motion—free fall motion in the vertical direction and variable speed linear motion in the horizontal direction. The model is based on the standard second-order drag model.
$$m\dot{\mathbf{v}} = m\mathbf{g} - c\,|\mathbf{v}|\,\mathbf{v}$$
where $m$ and $\mathbf{v}$ are the mass and velocity of the drone, $\mathbf{g}$ is the gravitational acceleration, and $c$ is a constant that encapsulates the drag coefficient.
However, the ballistic descent model may also be considered to represent a probabilistic problem due to uncertainties of wind and track error.
Track error: The GPS positioning accuracy of a UAV is affected by weather, obstacles, and its own flight control system. This can cause deviation between the actual flight path and the planned flight path. Under this circumstance, when calculating the ground risk impact of a UAV, the track error will interfere with the accuracy of the position, which makes it impossible to accurately calculate the falling range of the UAV, and results in large errors in the definition of the ground risk buffer.
The UAV track error analysis experiments undertaken included the following stages: first, a three-dimensional motion capture system for an indoor UAV, based on VICON optical position sensing, was set up in a laboratory environment. Second, a UAV was used in a flight experiment in the laboratory. Finally, the flight path position data output by the UAV flight control system was compared with the vision position data calibrated in the flight path capture system and the distributions of the initial position errors and velocity errors were analyzed.
According to Han [31], aircraft track error follows a three-dimensional Gaussian distribution with a mean value of 0, and the track error is independent in the longitudinal, lateral, and vertical directions, as represented in the following formula:
$$P = \frac{1}{(2\pi)^{3/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(E-M)^{T}\Sigma^{-1}(E-M)}$$
where P is the distribution probability column vector of the track error, E is the aircraft track point error column vector, M is a column vector of the three-dimensional aircraft track point error mean value, and Σ is the covariance matrix of the track error.
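For illustration, the following NumPy sketch samples track errors from such a zero-mean three-dimensional Gaussian; the covariance values are placeholders rather than the calibrated results of the experiment described above.

```python
import numpy as np

# A minimal sketch of sampling the zero-mean 3D Gaussian track error.
# The covariance entries are illustrative placeholders (m^2), not the
# measured results of the VICON experiment described in the text.
mean = np.zeros(3)               # zero-mean error in each axis
cov = np.diag([4.0, 2.0, 1.0])   # independent axes -> diagonal covariance

rng = np.random.default_rng(0)
track_errors = rng.multivariate_normal(mean, cov, size=10000)
print(track_errors.std(axis=0))  # roughly [2.0, 1.41, 1.0] m
```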
Wind: The relationship between height and wind speed is analyzed. Because the wind speed differs at different heights, the incoming flow velocity also differs. However, at present, most of the data obtained by UAV operators are wind speed values from ground observation stations. If the air resistance is calculated based on the ground-level wind speed, large errors will often result. Therefore, to ensure accurate calculation of the range of UAV landing points, it is necessary to accurately calculate the wind speed at each altitude. Considering that the flight height of a UAV is mostly within 1000 m, this paper uses a near-surface wind profile model to calculate the wind speed at different heights.
$$\bar{u}(h) = \bar{u}_{1}\left(\frac{h}{h_{1}}\right)^{n}$$
where $\bar{u}(h)$ is the average wind speed to be calculated, $h$ is the height at which the wind speed is calculated, $h_1$ is the height at which the wind speed is observed, $\bar{u}_1$ is the observed wind speed at height $h_1$, and $n$ is the variation coefficient of wind speed with height.
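A minimal sketch of this wind profile calculation is given below; the reference height, observed speed and variation coefficient used in the example are illustrative assumptions only.

```python
def wind_speed_at_height(u1, h, h1, n=0.3):
    """Near-surface wind profile (power law) as defined above.

    u1 : observed wind speed at reference height h1 (m/s)
    h  : height at which the wind speed is required (m)
    h1 : observation height (m)
    n  : variation coefficient of wind speed with height
         (terrain dependent; 0.3 here is only an illustrative value)
    """
    return u1 * (h / h1) ** n


# Example: 4 m/s observed at 10 m, estimated at a 120 m flight altitude
print(wind_speed_at_height(u1=4.0, h=120.0, h1=10.0))
```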

2.2. People Density

People density is one of the most important factors in a risk assessment because it describes how many people may be involved in a drone crash and how they are distributed on the map. At present, risk assessment for UAVs is still based on static statistical data as the basis for analyzing ground risk. Real-time assessment of UAV operations is not carried out according to the spatial-temporal characteristics of people density, which can lead to a significant difference between the assessed risk and the risk during actual flight. In particular, population density is highly dynamic, and the crowd density differs considerably between different times of operation. Therefore, accurate population density predictions are required for proper risk assessment. As spatial-temporal data, people density has temporal characteristics, including closeness, trend and period components, and spatial characteristics, including nearby and distance components [33].

2.2.1. Temporal Characteristics

Closeness is expressed as the similarity of data at two adjacent time nodes; crowd density data at such nodes are often similar. For example, the crowd density at 9 a.m. is similar to that at 10 a.m.
Period refers to similar characteristic values for population density at certain time intervals. For example, the crowd density at 8 a.m. on Monday is similar to the crowd density at 8 a.m. on Tuesday. This similarity feature requires statistical data and analysis over time periods.
Figure 1 shows the people density for a whole week. The people density exhibits periodicity: the population density at the same time on different days has strong similarity (the cosine similarity is larger than 0.90).
Trend is expressed as the change across seasons or years; the density index changes accordingly, and the number of people exhibits a certain trend over time. Figure 2 shows that the population density was lowest at 4:00 a.m.; as time elapsed, the value continued to increase and reached a peak at 9:00 a.m.

2.2.2. Spatial Characteristics

Nearby: the inflow of a region A is affected by the outflow of nearby regions. According to geographic theory [34,35], a city can be divided using a grid map based on longitude and latitude, where each grid cell denotes a region. The closer two regions are, the greater their connection.
Distance: the flow can also be affected by the distance between regions. For example, people who live far away from a central business district often take the subway or highway to work, which means that the outflow from distant areas directly affects the inflow of the central business district.

2.3. Shelter Factor

The shelter factor is defined as the degree of protection against injury to people afforded by buildings and trees on the ground after a UAV falls. Similar to the ground protection area, the shelter factor frequently takes into account the coverage provided by the buildings and trees under the path. The ground protection area is a two-dimensional region, whereas the shelter factor is a three-dimensional area. By estimating the size of nearby buildings and trees, the extent of harm that a falling UAV may cause people can be assessed. The term “shelter factor” has been defined differently in several studies. According to Guglieri [36], there are 11 levels of ground cover from 0 to 10. Zero represents no protection and 10 is the strongest protection. However, Dalamagkidis [37] suggested that the range of the shelter factor was between zero and positive infinity. Positive infinity denotes the strongest possible protection, whereas zero denotes no protection at all. This paper takes into account the actual computation and application requirements. As a result, the definition of the shelter factor ranges from 0 to 1. Table 1 provides details of the shelter factor levels.

2.4. Fatality Rate

The fatality rate is defined as the probability of fallen drones impacting a person and producing fatal injuries. It is highly dependent on the impact energy and shelter factors. According to Dalamagkidis [37], the fatality rate can be evaluated using the shelter factor, impact energy, etc. The expression for the fatality rate is as follows.
$$P(f) = \frac{1}{1 + \sqrt{\dfrac{\alpha}{\beta}}\left(\dfrac{\beta}{E_{imp}}\right)^{\frac{1}{4 p_s}}}$$
where $P(f)$ is the fatality rate, $E_{imp}$ is the impact energy of the fallen drone, $p_s$ is the shelter factor, $\alpha$ is the impact energy required for 50% mortality when $p_s = 0.5$, and $\beta$ is the impact energy limit required to cause death as $p_s$ falls to 0. According to research undertaken by the U.S. Range Commanders Council (RCC) (RCC Document 323-99 issued in 1999 and RCC Document 321-07 issued in 2007) [19], if a 0.454 kg (1 lb) object strikes a human body, the corresponding impact kinetic energies for 10% and 90% fatalities are 50 J and 200 J. Therefore, with respect to the operation of drones, we follow Hu [19] and set $\alpha = 10^6$ J and $\beta = 100$ J. The larger the shelter factor, the lower the fatality rate for the same impact kinetic energy.
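The following Python sketch implements the fatality-rate expression as reconstructed above with the stated parameter values; the impact energies used in the example loop are illustrative.

```python
import math


def fatality_rate(e_imp, p_s, alpha=1e6, beta=100.0):
    """Fatality rate as a function of impact energy and shelter factor.

    e_imp : impact kinetic energy in J
    p_s   : shelter factor in (0, 1]
    alpha : impact energy (J) for 50% mortality when p_s = 0.5
    beta  : impact energy (J) threshold for fatality as p_s -> 0
    """
    return 1.0 / (1.0 + math.sqrt(alpha / beta)
                  * (beta / e_imp) ** (1.0 / (4.0 * p_s)))


# A larger shelter factor gives a lower fatality rate at the same energy
for p_s in (0.2, 0.5, 0.8):
    print(p_s, round(fatality_rate(e_imp=500.0, p_s=p_s), 4))
```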

3. Method of Dynamic Ground Risk Assessment

In this section, a dynamic ground risk model under uncertain parameters is established. The dynamic ground risk model has two main components: a kinetic model and a joint deep learning model. First, the size of the ground impact and the fatality rate are regarded as dynamic parameters and are calculated by the kinetic model. Second, the joint deep learning model is built to predict the people density and to evaluate the shelter factor. Finally, the dynamic ground risk model is obtained by multiplying the outputs of the kinetic model and the joint deep learning model.

3.1. Kinetic Method to Assess Ground Impact

The mechanical model for the quadrotor UAV coordinate system is shown in Figure 3 below. Four evenly distributed rotors produce corresponding speed, thrust and torque [32]. In addition, the UAV fuselage is subject to uniformly distributed gravity and air drag that need to be calculated (Figure 4).
According to Newton’s second law, the UAV crash motion can be decomposed into vertical and horizontal directions with the following equations (Equation (5)) of motion:
$$\vec{F} = m\vec{a}, \qquad m a_x = D_x + F_{wind}\cos\theta, \qquad m a_z = G - D_z$$
where $m$ is the mass of the drone, $G$ is the gravitational force, and $D_x$ and $D_z$ are the air drag forces in the horizontal and vertical directions, respectively; $a_x$ and $a_z$ are the accelerations in the horizontal and vertical directions, respectively. The expression can be further decomposed as follows (Equation (6)):
$$m\frac{d^{2}x}{dt^{2}} = -\frac{1}{2} C_d A_x \rho \left(\frac{dx}{dt}\right)^{2}, \qquad m\frac{d^{2}z}{dt^{2}} = m g - \frac{1}{2} C_d A_z \rho \left(\frac{dz}{dt}\right)^{2}$$
where m is the mass of the drone, Cd is the air drag coefficient, and Ax and Az are the frontal areas of the horizontal and vertical directions, respectively. x and z are the displacements of the longitudinal and vertical directions, respectively.
With initial conditions vx0 = v0, vz0 = 0 and x0 = 0, z0 = 0, the expression for the ground impact point can be obtained:
$$x = \frac{2m}{C_D A_x \rho_a}\ln\!\left(C_D A_x \rho_a v_0 t + 2m\right) - \frac{2m\ln(2m)}{C_D A_x \rho_a}$$
$$y = \frac{2m}{C_D A_y \rho_a}\ln\!\left[\cosh\!\left(\sqrt{\frac{C_D A_y \rho_a g}{2m}}\, t\right)\right]$$
$$t\Big|_{y=h} = \sqrt{\frac{2m}{C_D A_y \rho_a g}}\,\ln\!\left(\exp\!\left(\frac{C_D A_y \rho_a h}{2m}\right) \pm \sqrt{\exp\!\left(\frac{C_D A_y \rho_a h}{m}\right) - 1}\right)$$
Considering that the wind may have an impact on ground impact, the expression relative to the projection of the event point onto the ground becomes the vector [31].
$$\Gamma(x) = \Gamma\!\left(\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}x\\ 0\end{bmatrix} + w\begin{bmatrix}\cos\phi\\ \sin\phi\end{bmatrix} t\right)$$
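The sketch below combines the descent-time and horizontal-distance expressions of Equation (7) with the heading/wind rotation of Equation (8) to compute an impact point; the drone parameters and wind values passed in the example are assumptions, not data from the paper.

```python
import numpy as np


def impact_point(m, v0, h, cd, a_x, a_y, rho=1.225, g=9.81,
                 heading=0.0, wind_speed=0.0, wind_dir=0.0):
    """Ballistic impact point from Equations (7) and (8) as reconstructed above.

    m, v0, h  : mass (kg), initial horizontal speed (m/s), loss-of-control height (m)
    cd        : drag coefficient; a_x, a_y : frontal areas (m^2)
    heading   : flight heading theta (rad); wind_dir : wind direction phi (rad)
    The numerical values in the example call below are illustrative only.
    """
    kx = cd * a_x * rho
    ky = cd * a_y * rho
    # time to descend height h with quadratic drag
    t = np.sqrt(2 * m / (ky * g)) * np.log(
        np.exp(ky * h / (2 * m)) + np.sqrt(np.exp(ky * h / m) - 1.0))
    # horizontal distance travelled during the descent
    x = (2 * m / kx) * (np.log(kx * v0 * t + 2 * m) - np.log(2 * m))
    # rotate into the ground frame and add the wind drift (Equation (8))
    rot = np.array([[np.cos(heading), -np.sin(heading)],
                    [np.sin(heading),  np.cos(heading)]])
    drift = wind_speed * np.array([np.cos(wind_dir), np.sin(wind_dir)]) * t
    return rot @ np.array([x, 0.0]) + drift


print(impact_point(m=6.0, v0=15.0, h=100.0, cd=0.8, a_x=0.1, a_y=0.15,
                   wind_speed=4.0, wind_dir=np.pi / 4))
```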

3.2. Joint Deep Learning Model to Predict People Density

In Section 2.2, we discussed the characteristics of people density and analyzed the relationship between people density and ground risk. The reasons for using a deep learning model to predict people density are as follows:
From the perspective of drone risk management, unmanned aircraft pose a high risk to the safety of people on the ground due to very low altitude operation. The operator of drones needs to assess the risks before operation. One of the key parameters for risk assessment is people density. However, how to accurately predict people density and how to calculate ground risk using an online method represent current challenges.
From the perspective of path planning, the operator needs to calculate the ground risk when planning the flight path of drones according to the regulations of the Civil Aviation Administration of China. If the people density of the ground is large, the path must be redesigned to decrease the risk. However, from geography theory, the people density is not static but has spatial and temporal properties, which means that the dynamic path must be designed according to the dynamic character of people density.
From the perspective of operation authorization, the bureau needs to know the operational environment (especially people density) of the drones; hence, a deep learning model can be utilized to better explain the ground environment to the bureau and to let the inspector know the people density and risk in the next couple of hours.
Therefore, we decided to use a deep learning model to predict the people density to solve the problem of uncertainty in the people density parameter. Using a deep learning model, the people density of an operational area can be predicted for the next few hours and the operator can predict the precise operational risk accordingly.
According to deep learning theory, an LSTM model is suitable for time series prediction, while a CNN model is suitable for discrete data prediction. Therefore, considering the spatial-temporal characteristics of population density, we decided to establish a CNN-LSTM joint model to improve the accuracy of prediction. We input spatial features, such as nearby and distance features, into the CNN model and temporal features, such as closeness, period and trend into the LSTM. Finally, the two models jointly output the predicted population density. Our model can not only take into account the space-time characteristics of population density, but can also improve the efficiency of computing through joint computing.

3.2.1. Convolutional Neural Network

The convolutional neural network sub-model consists of a convolutional layer, a pooling layer and a fully connected layer (see Table 2 and Figure 5) [38,39,40,41]. The pooling layer uses max pooling, which suppresses unimportant parameters while maintaining the data characteristics, addressing the problem of high data redundancy in the model. For the convolution layer settings, because the data in this study are discrete and not sensitive to periodic change in time, the horizontal and vertical strides of the two convolution layers are set to 1, and the convolution operation uses "same" zero padding. This paper uses the ReLU function as the sub-model activation function, which applies a non-linear mapping to the outputs of the convolutional layers and the fully connected layers and helps avoid exploding and vanishing gradients. The formula is as follows (Equation (9)):
$$f(x) = \max(0, x)$$
The input parameters for the CNN are shown in Equation (10) below,
$$CNN_{Predict}(i) = f\big(crowd(i-T),\, day(i),\, hour(i)\big)$$
where $crowd(i-T)$ is the crowd density of the same area in the previous week, $day(i)$ is the day of the week to which the crowd density of the area belongs, and $hour(i)$ is the time corresponding to the crowd density. As shown in Figure 6, these three parameters are input to the convolutional neural network and the predicted crowd density index is obtained using the Adam optimization algorithm.
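A minimal Keras sketch of such a CNN sub-model is given below. The input tensor layout (a 30 × 33 grid with three channels carrying the previous week's crowd density and broadcast day-of-week and hour indices) and the filter counts are our assumptions, since the paper does not specify the exact tensor shapes.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input layout: a 30 x 33 grid with three channels holding the
# previous week's crowd density, the day-of-week index and the hour index
# broadcast over the grid.
cnn_branch = models.Sequential([
    layers.Input(shape=(30, 33, 3)),
    layers.Conv2D(64, 3, strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(128, 3, strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # predicted crowd density index
])
cnn_branch.compile(optimizer="adam", loss="mse")
cnn_branch.summary()
```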

3.2.2. LSTM Neural Network

Long short-term memory (LSTM), a type of recurrent neural network (RNN), can learn long-term dependencies, especially in sequence-prediction problems [42]. LSTM overcomes the drawbacks of long-term prediction instability in the RNN algorithm. Like RNN, LSTM also has a chain structure to repeat and memorize information; each structure is called a cell. However, instead of having a single neural network layer in RNN, LSTM consists of four parts: a cell state, a forget gate layer, an input gate layer, and an output gate layer (Figure 7).
$$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big)$$
$$i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big)$$
$$O_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + b_o\big)$$
$$\tilde{C}_t = \tanh\big(W_C \cdot [h_{t-1}, x_t] + b_C\big)$$
where σ is the sigmoid activation, which outputs a number between 0 and 1 for each piece of information in the cell state. If the sigmoid value is 1, then the information is completely retained. Similarly, the information is completely deleted when the sigmoid value is 0. x t is the input information of the cell. h t 1 is the short-term state of the former cell. W f , W i , W o are the weight matrices of the hidden layer input to the three gate layers. b f , b i , b o , b C are the biases of the layer function. The layer concatenates the input information x t and the short-term state of the former cell h t 1 using the sigmoid function σ to determine whether the information should be kept.
The input parameters of the LSTM model are as follows:
$$LSTM_{Predict}(i) = f\big(People(t-3),\, People(t-2),\, People(t-1)\big)$$
where $People(t-3)$, $People(t-2)$ and $People(t-1)$ are the people densities three hours, two hours and one hour before the prediction time, respectively.
From Figure 8, the LSTM model has three layers. The number of neurons in each of the three layers is 256, 128 and 64, respectively. The model uses Relu as the activation function and MSE as the loss function. The look-back value of this model is three. Finally, the model outputs the predicted people density.
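A minimal Keras sketch of this LSTM branch, using the three-hour look-back and the 256-128-64 layer sizes stated above, might look as follows; the input shaping (one people-density feature per time step) is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Look-back of three hours, one feature (people density) per time step.
lstm_branch = models.Sequential([
    layers.Input(shape=(3, 1)),
    layers.LSTM(256, activation="relu", return_sequences=True),
    layers.LSTM(128, activation="relu", return_sequences=True),
    layers.LSTM(64, activation="relu"),
    layers.Dense(1),  # predicted people density
])
lstm_branch.compile(optimizer="adam", loss="mse")
lstm_branch.summary()
```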

3.2.3. Architecture of Joint Deep Learning Model

Some characteristics of the people density (such as nearby and distance) cannot be predicted well by the LSTM model, while the CNN model may not be suitable for representing closeness and trend in people density. Therefore, we decided to combine the CNN and LSTM models to predict people density to improve the prediction accuracy of the model.
The framework for the joint deep learning model (the Cs-net model) is shown in the figures below (Figure 9 and Figure 10). The model is composed of a CNN component and an LSTM component. A regression function is set at the top of the model to combine the output of the CNN and LSTM components. We first feed the spatial characteristics into the CNN model. In the CNN model, the parameter features use convolutional units to expand the model’s ability to extract feature information and ensure the precision of the model. At the same time, the temporal characteristics are fed into the LSTM model for fitting training. The model consists of four parts. The activation function of each hidden layer is the sigmoid. Each neuron in adjacent layers is fully connected. The Cs-net model uses the mean square error (MSE) as the loss function of the top-level regression function of the model. When performing fitting training, the model needs to consider the loss functions of both the CNN and LSTM components. In addition, the model must consider the fitting rates to optimize all parameters. Joint training of the Cs-net model is achieved by back-propagating the gradients from the output to both the CNN and LSTM components of the model simultaneously using mini-batch stochastic optimization. Compared with embedding learning, the Cs-net model training is not independent and requires only a small number of feature types to obtain higher accuracy. All the parameters in the model are jointly trained under the mean square error (MSE) loss. Due to the combination of two modules, it is quite difficult to determine a proper learning rate to train the joint model. Therefore, we chose Adam, a stochastic gradient descent method with an adaptive step size and momentum, to optimize the model. The formula (Equation (17)) for the model prediction result is:
$$f(t) = LSTM\big(f(t-n), \ldots, f(t-2), f(t-1)\big)$$
Based on the temporal trend characteristics of crowd density, the crowd flow in the last two hours has a great influence on the crowd density in the next hour. Therefore, the input parameters of the LSTM sub-model are the crowd flows in the last two hours.
$$prediction(people) = \sigma\big(W_{LSTM}^{T}\,[x, \phi(x)] + W_{CNN}^{T}\, p_{fa} + b\big)$$
where $\sigma$ is the activation function of the nested deep learning model, $\phi(x)$ is the LSTM transformation, $x$ is a feature vector of the LSTM model, $W_{CNN}^{T}$ is the transposed feature weight of the CNN model, $p_{fa}$ is the activation function of the CNN model, and $b$ is the bias.
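A hedged Keras sketch of how the two branches could be fused by a top-level regression head trained with MSE and Adam is shown below; the layer sizes and input shapes are assumptions consistent with the earlier sketches, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Spatial branch (CNN) and temporal branch (LSTM); shapes are assumptions
# consistent with the sketches in Sections 3.2.1 and 3.2.2.
spatial_in = layers.Input(shape=(30, 33, 3), name="spatial")
x = layers.Conv2D(64, 3, padding="same", activation="relu")(spatial_in)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)

temporal_in = layers.Input(shape=(3, 1), name="temporal")
y = layers.LSTM(64, activation="relu")(temporal_in)

# Top-level regression head that fuses the two components.
merged = layers.Concatenate()([x, y])
out = layers.Dense(1, name="people_density")(merged)

cs_net = Model(inputs=[spatial_in, temporal_in], outputs=out)
cs_net.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
               loss="mse")
cs_net.summary()
```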

3.2.4. Hyper-Parameter Setting

The hyper-parameters in our Cs-net model include the learning rate, number of epochs, loss function, computation speed and number of neurons. The learning rates tested were 0.1, 0.01 and 0.001, and the epochs were 100, 500 and 1000. We use the mean square error as the loss function. As Cs-net is a joint deep learning model, following LeCun [42], we set the numbers of neurons to 64, 128 and 256, respectively.

3.2.5. Evaluation of Models

To determine how the model will perform on future data, we need to evaluate the performance of the prediction model. We used a regression model performance evaluation metric to assess our model as it is based on regression algorithms. The evaluation index is as follows:
(1)
Mean square error (MSE)
$$MSE = \frac{1}{m}\sum_{i=1}^{m}\big(y_i - \hat{y}_i\big)^{2}$$
(2)
Mean absolute error (MAE)
$$MAE = \frac{1}{m}\sum_{i=1}^{m}\big|y_i - \hat{y}_i\big|$$
(3)
R-square (R2)
$$R^{2} = 1 - \frac{\sum_{i=1}^{m}\big(y_i - \hat{y}_i\big)^{2}}{\sum_{i=1}^{m}\big(y_i - \bar{y}\big)^{2}}$$
where $y_i$ is the actual value, $\hat{y}_i$ is the value predicted by the model, and $\bar{y}$ is the mean of the actual values.
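These three metrics can be computed directly, for example with NumPy, as in the short sketch below; the sample values are illustrative.

```python
import numpy as np


def evaluate(y_true, y_pred):
    """Return MSE, MAE and R^2 for a regression model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    r2 = 1.0 - (np.sum((y_true - y_pred) ** 2)
                / np.sum((y_true - y_true.mean()) ** 2))
    return mse, mae, r2


print(evaluate([3.0, 5.0, 8.0], [2.5, 5.5, 7.0]))
```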

3.3. Fusion Model

In this section, we discuss how to fuse the kinetic method and the Cs-net model to produce the dynamic ground risk model. In Section 2, we analyzed how the ground risk is obtained from the people density, the casualty rate and the probability of a fallen drone.
Hence, the structure of the model is as follows (Figure 11). First, the input data of the model comprise three parts: people density, aircraft operation data, and map data. According to their different characteristics, these three sets of data are input into the corresponding models: the people density and map data, which have spatial-temporal characteristics, are input into the joint deep learning model, while the aircraft operation data are input into the kinetic model. The two models run in parallel on different input data and do not affect each other. The Cs-net model is built to predict the people density using a deep learning method. Considering that the shelter factor is a special expression of the map data (it can be calculated from the building coverage in the map data), the shelter factor is calculated from the map data when predicting the population density. The kinetic model is applied to evaluate the size of the ground impact and the impact energy. As the fatality rate combines the shelter factor and the impact energy, we need to fuse the output values from the Cs-net model and the kinetic model.
$$P(\mathrm{fatality\ rate}) = F\big(C(\mathrm{shelter}),\, K(\mathrm{energy}),\, f(\alpha, \beta)\big)$$
where $P(\mathrm{fatality\ rate})$ is the fatality probability, $C(\mathrm{shelter})$ is the shelter factor, $K(\mathrm{energy})$ is the impact energy of the ground impact, and $f(\alpha, \beta)$ is the kinetic coefficient.
Last, according to the ground risk definition in Section 2, the dynamic ground risk model can be calculated using the people density output from the Cs-net model, the ground impact output from the kinetic model, the fatality rate and the probability of a fallen drone.

4. Experiments

In this section, we report three experiments with three main goals: (1) to validate the optimal performance model for the prediction of people density; (2) to assess the robustness of the kinetic model; and (3) to analyze the spatial-temporal characteristics of the ground risk model.

4.1. Datasets

The data that we used were derived from the fusion of three sources: (1) population data, (2) OpenStreetMap data and (3) drone performance data. We used the regional population density index from the Beijing government data website for this research. The time range of the data was from 17 January 2020 to 15 February 2020, and the statistical area covered all urban areas of Beijing. The data information and the location distribution of the observation points are shown in Table 3 and Figure 12. In addition, the OpenStreetMap data were used to identify the ground features (such as buildings, trees, and roads). As for the drone performance data, we selected the M210-RTK and Swoop Aero UAVs as the analysis objects.
(1) Population data: Considering the airspace constraints in the main urban area of Beijing, we selected a 3 km × 3 km research target area in Daxing District and divided the area into 100 m × 100 m grids (Figure 13). A total of 990 grid coordinate areas of 30 × 33 were formed. The blue circle in the figure represents the position information of the observation center point.
(2) Map data: The map data that we used were in the EPSG:3857 projection, which is convenient for visualizing the population and calculating shelter factors. Figure 14 shows the components of the ground; different colors stand for the different ground layers. Green stands for parks, dark green for farmland, blue for water, red for roads, and the other colors for buildings. The shelter factor can be defined using these map data.
(3) Drone performance data: We used two drones (the DJI M210-RTK and the Swoop Aero) as examples to demonstrate the ground risk assessment. The DJI M210-RTK is a rotor-wing vertical takeoff and landing (VTOL) aircraft designed for topographical survey and mapping. The Swoop Aero is a fixed-wing unmanned aircraft suitable for the transportation of small packages between urban and suburban areas. The parameters of the two drones are shown in Table 4 and Table 5.

4.2. Data Processing

4.2.1. Data Standardization

To avoid model prediction deviation caused by differences in variable dimensions, the data were preprocessed before model training and prediction.
For the crowd density index variable in the deep model, Z-score standardization was used to transform each data dimension into normally distributed data with a mean of 0 and a variance of 1, eliminating the influence of the dimensional differences between different types of data on the model and ensuring that each dimension was dimensionless when the fitting operation was conducted.
$$Z_{score} = \frac{x - \bar{x}}{\sigma}$$
where x is the value of the target parameters, x ¯ is the average value of the target parameters, and σ is the standard deviation of the target parameters.
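For example, the standardization can be applied as in the minimal sketch below; the sample values are illustrative.

```python
import numpy as np


def z_score(x):
    """Z-score standardization: zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()


print(z_score([10, 20, 30, 40]))
```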

4.2.2. Data Resampling

The people density data only reflect the number of people in the vicinity of the observation points in the area. However, because the number of observation points cannot cover the whole area, the data cannot accurately describe the crowd density at each location point in the area, resulting in a decrease in the prediction accuracy of the model. Therefore, to solve this problem, the SMOTE [43] method was used to resample the data to ensure that the population data covered the study area. The advantage of this algorithm is that it mitigates overfitting without losing valuable data information by randomly sampling the generated sample data rather than using a copy of the original data. The specific method is as follows:
Assuming that the number of samples of the minority class is $n$, first calculate the Euclidean distance between each sample $x_i,\ i \in \{1, 2, \ldots, n\}$ and the other samples of the minority class to obtain the $k$ nearest neighbors of $x_i$, defined as $x_{i(near)},\ near \in \{1, 2, \ldots, k\}$. Randomly select a sample from the $k$ nearest neighbors, randomly generate a number $\alpha$ between (0, 1), and generate a new sample using Equation (21):
$$x_{i1} = x_i + \alpha \times \big(x_{i(nn)} - x_i\big)$$
Repeat this procedure N times to generate N new samples.
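A minimal NumPy sketch of this SMOTE-style interpolation, following the steps described above, is given below; the observation coordinates in the example are illustrative.

```python
import numpy as np


def smote_resample(samples, n_new, k=5, rng=None):
    """Generate synthetic samples by SMOTE-style interpolation.

    samples : (n, d) array of the under-represented observations
    n_new   : number of synthetic samples to generate
    k       : number of nearest neighbours considered for each sample
    """
    if rng is None:
        rng = np.random.default_rng(0)
    samples = np.asarray(samples, dtype=float)
    new_points = []
    for _ in range(n_new):
        i = rng.integers(len(samples))
        # Euclidean distances to all samples; pick one of the k nearest
        dists = np.linalg.norm(samples - samples[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]
        j = rng.choice(neighbours)
        alpha = rng.random()  # alpha in (0, 1)
        new_points.append(samples[i] + alpha * (samples[j] - samples[i]))
    return np.array(new_points)


# Illustrative use: densify sparse observation-point data
obs = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.5], [3.0, 3.0]])
print(smote_resample(obs, n_new=3, k=2))
```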

4.3. Performance of People Density Prediction Models

In this section, we first use ablation studies [44,45,46] to verify the performance of the Cs-net model. The joint deep learning model (Cs-net) was implemented using the Keras framework with a Tensorflow backend. The optimizer for the model was Adam and the loss function was MSE.
It can be seen from Table 6 above that, when the learning rate was 0.001 and the epoch was 1000, the mean square error (MSE) was the smallest. However, a lower learning rate or a higher number of epochs can lead to model overfitting. Considering that the established model uses the adaptive-step-size stochastic gradient descent method (Adam) to optimize the parameters, the initial learning rate of the model was set to 0.01 and the epoch to 1000.
The batch size determines how many samples are used in each gradient descent step, which affects the direction and magnitude of the gradient descent during Cs-net training. Performance on the test set depends on the position to which the model converges. From Table 7, we may conclude that, as the batch size increases, the MSE of the model first decreases and then increases. A larger batch size results in higher gradient similarity between two adjacent mini-batches, so the model is highly susceptible to falling into a local minimum along one direction. When the batch size is set to 32, the model error is at its minimum. Therefore, we set the batch size of the model to 32.
The number of neurons in a convolutional neural network will also affect the overall performance of the model. Therefore, we compared the number of convolution kernels in the convolution layer and the number of neurons in the full connection layer of the convolution neural network model to obtain the optimal number of neurons (Table 8).
It can be seen from Table 6, Table 7 and Table 8 that, when the learning rate was 0.01, the epoch was 1000, and the number of neurons in the pooling layer was 256-256-256, the mean square error was the smallest and the training speed was the highest. As the number of neurons in the hidden layers increased, the mean square error of the model decreased and the time required for model training increased. However, according to LeCun [42], the number of neurons depends on the size of the input and output data of the model, the training speed of the model and the prediction accuracy of the model. Since the Cs-net model established here is based on a regression algorithm and the output is one-dimensional, to prevent overfitting caused by too many neurons, 256-128-64 was used as the number of hidden layer neurons.
In Figure 15, the blue line represents the performance on the test dataset and the yellow line represents the performance on the training dataset. The MSE of the training dataset converges to around 1, while the MSE of the test dataset fluctuates around 7 with no obvious further decrease.
In conclusion, when the initial learning rate of the model was 0.01, the epoch was 1000, and the number of neurons was 256-128-64, the accuracy of the model prediction was the highest.
To determine whether the prediction performance of the Cs-net model was better than that of other existing models, we compared the Cs-net model with other people density prediction methods: regression-based models (random forest, support vector machine) and deep-learning-based methods (DNN, LSTM).
In this section, we directly use the evaluation parameters (R2, MSE, and MAE) to test the performance of the model. Table 9 presents a comparison of the Cs-net model and other existing models. From Table 9, the Cs-net model has the highest prediction performance. The R2 of the Cs-net model was 0.86, the highest value among the compared models. In addition, from the perspective of MAE, the value for the Cs-net model (MAE = 5.32) was much lower than that for random forest (10.25) or the support vector machine (18.3). From the perspective of accuracy, the mean square error of the Cs-net model is 3.78, the lowest of the compared models. We can infer from Table 9 that a machine learning model that uses only historical data and ignores temporal characteristics to estimate population density results in poor prediction performance. A DNN model has difficulty predicting the population density with uncertain parameters, which limits its use in actual situations. Therefore, the Cs-net model outperformed the other current models when taking into account multi-dimensional data feature fusion and model prediction accuracy (R2 = 0.86).

4.4. Robustness Analysis of Kinetic Model

In this section, we discuss the ground impact points evaluated by the kinetic model. We randomly generated 100,000 points to represent the initial positions of the UAV at loss of control and calculated the impact point corresponding to each initial point according to Formula (4). The input parameters used are shown in Table 10 below.
Taking the longitudinal coordinate of the ground impact point as an example, according to the statistical results, the ground impact points of the M210-RTK followed a normal distribution with a mean of 85.10 m and a standard deviation of 2.89 m (see Figure 16 and Figure 17). Figure 17 shows that the ground impact points of the Swoop Aero followed a normal distribution with a mean of 116.06 m and a standard deviation of 1.74 m. Considering that the distance follows a normal distribution, to fully account for the different impact probabilities of a UAV on different areas of the ground and to accurately measure the risk value of a given UAV operating position in the airspace, the impact point distribution map was obtained according to the 3σ criterion of the impact area, i.e., the σ, 2σ and 3σ regions. Different σ regions correspond to different risks for each ground impact area. Table 11 below shows the impact probabilities for the different areas.

4.5. Case Study

4.5.1. Ground Risk under Different Paths

As described in Section 4.1, we transformed the map into 100 m × 100 m grid areas and used the DJI UAV (which is widely applicable) as the research object for assessing the ground risk.
In actual operation, the ground risk assessment is executed after the path is planned. The operator of a UAS first plans the flight path, and the civil aviation department, or an authoritative third party, then conducts an assessment of the ground risk and air risk according to the planned path. Hence, in this section, we first selected a group of starting and ending points, then planned different routes for this group, and finally used our model to carry out a ground risk assessment on these routes (Figure 18).
According to the group of points (Table 12), we planned three different routes using the A* algorithm [47] based on the following three categories: (1) consider no risk, (2) only consider people density and (3) consider all parameters. The route information is shown in the following figure and table.
From Figure 19 and Table 13, the yellow path (path 1) is the path that considers no risk and is based on the shortest distance, the orange path (path 2) is the path that only considers the people density parameter, and the black path (path 3) is the path that considers all the parameters mentioned in Section 2. Figure 19 shows that the yellow path has the shortest distance (a straight line) and the minimum number of turns compared with the other two paths; however, it neglects the people density and has the highest risk value (risk index = 0.0204). The orange path runs mainly along both sides of the road and above the water area, which effectively avoids high-population areas (such as points (18,2) and (19,2)) and reduces the ground risk (risk index = 0.0152). However, its disadvantage is a longer flight distance (distance = 3746 m) and increased energy consumption. The black path has the lowest risk value (risk index = 0.0127); its distance is about 24.98 percent shorter than that of path 2 but about 5.803 percent longer than that of path 1. Although path 3 has the lowest risk value, it involves a number of turns that affect both efficiency and maneuverability. Path 2 and path 3 have similar risk indices but quite different distances. According to the map data, the shelter factor of path 2 (average shelter factor = 0.2) is less than that of path 3 (average shelter factor = 0.5), resulting in a higher ground risk for path 2.

4.5.2. Spatial-Temporal Characteristic of Ground Risk

To simplify the risk assessment process and improve assessment efficiency, we calculated the risk value of each grid for different time periods (as shown in the figure below). According to Figure 20, the risk in different regions has obvious spatial-temporal characteristics; for example, at location point (2,6) the population density was lowest at 4:00 a.m., and as time passed its risk value continued to increase and peaked at 9:00 a.m. The same applies to point (22,6), which had its minimum risk at 1:00 a.m. and maximum risk at 9:00 a.m. Figure 21 shows that the risk value may also be affected by temporal characteristics (such as period and trend). To analyze the temporal characteristics of ground risk, we introduced cosine similarity to test our hypothesis, taking point (2,6) as the object. It can be seen from Table 14 that the risk value shows daily periodicity (the similarity of the risk values for different dates at the same time period was larger than 0.85).

5. Conclusions

The accuracy of assessment of ground risk not only directly affects the flight safety of a UAV but also plays an important role in the efficient operation of a UAV. Therefore, we built a dynamic model to assess ground risk. First, we analyzed the parameters that affect ground risk (e.g., people density, ground impact, shelter factor and fatality rate). Second, we built a kinetic model to calculate the ground impact using dynamic parameters (such as wind and speed). Third, a joint deep learning model (Cs-net model) was established to predict people density. Finally, we fused the two models (the kinetic model and the Cs-net model) into a dynamic risk assessment model to calculate the ground risk of drones. The results of evaluation of the model showed that:
(1)
The proposed risk assessment model can accurately calculate the ground risk under the condition of uncertain parameters.
(2)
The prediction accuracy (R2) of the Cs-net model with dynamic parameters was 0.86. The model-fitting prediction capability was higher than for other machine learning algorithms, such as SVM and RF. The model can be used to predict the actual people density.
(3)
The ground impact size follows a Gaussian distribution. With increase in flight altitude, the rate of influence of speed error and altitude error on the ground impact position gradually decreases.
(4)
A ground risk map was built to better assess the operational risk. The similarity of risk values for different dates in the same time period was larger than 0.85, indicating that the ground risk map also has spatial-temporal characteristics.
(5)
The spatial-temporal characteristics of the ground risk map enable further reduction in ground risk by enabling choice not only of lower risk flying routes but also of lower risk flying periods during the day.
(6)
Due to limitations in data acquisition, only an urban scenario was analyzed. Next, we plan to include other scenarios in the proposed model for future risk assessment and further comparisons to improve the universality of the model.

Author Contributions

Conceptualization, X.C. and Z.Z. (Zhigang Zheng); methodology, Q.J.; software, Q.J.; validation, L.S. (Linshi Sun), Y.B. and Z.Z. (Zhengjuan Zhang); formal analysis, L.S. (Longni Sun) and G.R.; investigation, G.Z.; resources, Y.Y.; writing—original draft preparation, Q.J.; writing—review and editing, L.S. (Linshi Sun); visualization, Q.J.; supervision, X.C. and Z.Z. (Zhigang Zheng); project administration, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Civil Aviation Safety Project, grant number 261.

Data Availability Statement

All the data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barrado, C.; Boyero, M.; Brucculeri, L.; Ferrara, G.; Hately, A.; Hullah, P.; Martin-Marrero, D.; Pastor, E.; Rushton, A.P.; Volkert, A. U-Space Concept of Operations: A Key Enabler for Opening Airspace to Emerging Low-Altitude Operations. Aerospace 2020, 7, 24. [Google Scholar] [CrossRef] [Green Version]
  2. Li, S.; Cheng, X.; Huang, X.; Otaibi, S.A.; Wang, H. Cooperative conflict detection and resolution and safety assessment for 6G enabled unmanned aerial vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 1–16. [Google Scholar] [CrossRef]
  3. Commercial UAV News. Drone Technology Has Reached New Heights. Available online: https://www.commercialuavnews.com/forestry/drone-technology-has-reached-new-heights (accessed on 19 August 2021).
  4. CAAC. CAAC Issues the First License for the Trial Operation of UAS in Urban Areas. Available online: http://www.caac.gov.cn/en/XWZX/201911/t20191114_199470.html (accessed on 14 November 2019).
  5. SF. Fengniao Technology Has Obtained the World’s First Commercial Trial License for Regional Logistics Drones. Available online: https://mp.weixin.qq.com/s/wJ5fdgpurf9kkvstPqKUgg (accessed on 25 January 2022).
  6. MeiTuan. Meituan Unveils Self-Developed Drone Model, Plans for Drone Logistics Network at 2021 World Artificial Intelligence Conference. Available online: https://www.prnewswire.com/news-releases/meituan-unveils-self-developed-drone-model-plans-for-drone-logistics-network-at-2021-world-artificial-intelligence-conference-301327951.html (accessed on 8 July 2021).
  7. CAAC; CAST. Report of UAS Cloud Data Statistics; Civil Aviation Administration of China: Beijing, China, 2022.
  8. CAAC. Statistical of Civil Aviation Industry in 2021; Civil Aviation Administration of China: Beijing, China, 2022.
  9. FAA. Drones by the Numbers. In UAS Quarterly Activity Reports; Federal Aviation Administration: Washington, DC, USA, 2022. [Google Scholar]
  10. Yang, H.H.; Chang, Y.H.; Lin, C.H. A combined approach for selecting drone management strategies based on the ICAO Safety Management System (SMS) components. J. Air Transp. Manag. 2022, 104, 102257. [Google Scholar] [CrossRef]
  11. Che Man, M.H.; Hu, H.L.; Low, K.H. Crash Area Estimation for Ground Risk of Small Unmanned Aerial Vehicles Due to Propulsion System Failures. In Proceedings of the AIAA SciTech 2022 Forum, San Diego, CA, USA, 3–7 January 2022. [Google Scholar]
  12. Shelley, A.V. Ground Risk for Large Multirotor UAVs; ACADEMIA: San Francisco, CA, USA, 2021; Volume 3, pp. 154–196. [Google Scholar]
  13. Andres, L.C. Quantifying Risk of Ground Impact Fatalities for Small Unmanned Aircraft. J. Intell. Robot. Syst. 2019, 93, 367–384. [Google Scholar]
  14. Levasseur, B. Accurate Ground Impact Footprints and Probabilistic Maps for Analysis of UAV Mission. In Proceedings of the 2019 IEEE Aerospace Conference, Big Sky, MT, USA, 2–9 March 2019; IEEE: Manhattan, NY, USA, 2019; pp. 154–196. [Google Scholar]
  15. Levasseur, B.; Bertrand, S.; Raballand, N. Efficient Generation of Ground Impact Probability Maps by Neural Networks for Risk Analysis of UAV Missions. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 1398–1406. [Google Scholar]
  16. Levasseur, B.; Bertrand, S. Impact Probability Maps Computation and Risk Analysis for 3D Ground Infrastructures due to UAV Operations. In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; pp. 1248–1257. [Google Scholar]
  17. Liu, Y.; Zhang, X.; Wang, Z.; Gao, Z.; Liu, C. Ground Risk Assessment of UAV Operations Based on Horizontal Distance Estimation under Uncertain Conditions. Math. Probl. Eng. 2021, 2021, 1–12. [Google Scholar] [CrossRef]
  18. Pang, B.; Hu, X.; Dai, W.; Low, K.H. UAV path optimization with an integrated cost assessment model considering third-party risks in metropolitan environments. Reliab. Eng. Syst. Saf. 2022, 222, 108–123. [Google Scholar] [CrossRef]
  19. Pang, B.; Low, K.H.; Lv, C. Adaptive conflict resolution for multi-UAV 4D routes optimization using stochastic fractal search algorithm. Transp. Res. Part C Emerg. Technol. 2022, 139, 1–26. [Google Scholar] [CrossRef]
  20. Hu, X.; Pang, B.; Dai, F.; Low, K.H. Risk Assessment Model for UAV Cost-Effective Path Planning in Urban Environments. IEEE Access 2020, 8, 150162–150173. [Google Scholar] [CrossRef]
  21. Matteo, M.; Stefano, P.; Giorgio, G. Air Risk Maps for Unmanned Aircraft in Urban Environments. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 21–24 July 2022; pp. 1073–1082. [Google Scholar]
  22. Stefano, P.; Matteo, M.; Giorgio, G.; Rizzo, A. A risk-based path planning strategy to compute optimum risk path for unmanned aircraft systems over populated areas. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 641–650. [Google Scholar]
  23. Pilko, A.; Sóbester, A.; Scanlan, J.P.; Ferraro, M. Spatiotemporal Ground Risk Mapping for Uncrewed Aerial Systems Operations. In Proceedings of the AIAA SciTech 2022 Forum, San Diego, CA, USA, 3–7 January 2022; pp. 1–23. [Google Scholar]
  24. Awan, N.; Ali, A.; Khan, F.; Zakarya, M.; Haleem, M. Modeling dynamic spatio-temporal correlations for urban traffic flows prediction. IEEE Access 2021, 9, 26502–26511. [Google Scholar] [CrossRef]
  25. Zhang, J.; Zheng, Y.; Qi, D.; Li, R.; Yi, X.; Li, T. Predicting citywide crowd flows using deep spatio-temporal residual networks. Artif. Intell. 2018, 259, 147–166. [Google Scholar] [CrossRef]
  26. Zhang, J.; Zheng, Y.; Qi, D. Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31, pp. 1–7. [Google Scholar]
  27. CAAC. UAS Operation Management Guidance for Special Class; Civil Aviation Administration of China: Beijing, China, 2019.
  28. FAA. 14 CFR Part 107 Small Unmanned Aircraft Systems; Federal Aviation Administration: Washington, DC, USA, 2016.
  29. European Union Aviation Safety Agency. Commission Delegated Regulation (EU) 2019/945[EB/OL]. Available online: https://www.easa.europa.eu/documentlibrary/regulations/commission-delegated-regulation-eu-2019945.2019 (accessed on 12 March 2019).
  30. Joint Authorities for Rulemaking of Unmanned Systems. JARUS Guidelines on Specific Operations Risk Assessment (SORA)[EB/OL]. 2019. Available online: http://jarus-pas.org/press-release (accessed on 12 March 2019).
  31. Han, P.; Zhang, B.Y. Effect of track error on safety risk assessment of UAV ground impact. China Saf. Sci. J. 2021, 31, 106–111. [Google Scholar]
  32. Andres, L.C. Ground impact probability distribution for small unmanned aircraft in ballistic descent. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 1442–1451. [Google Scholar]
  33. Jiao, Q.Y. Prediction of Taxi Time of Departure Aircraft Based on Uncertain Parameters. Master’s Thesis, Civil Aviation University of China, Tianjin, China, 2021. [Google Scholar]
  34. Zhang, X.C.; Sun, Y.R.; Guan, F.L. Forecasting the crowd: An effective and efficient neural network for citywide crowd information prediction at a fine spatio-temporal scale. Transp. Res. Part C Emerg. Technol. 2022, 143, 103–121. [Google Scholar] [CrossRef]
  35. Ahas, R.; Aasa, A.; Yuan, Y.; Raubal, M.; Smoreda, Z.; Liu, Y.; Ziemlicki, C.; Tiru, M.; Zook, M. Everyday space-time geographies: Using mobile phone-based sensor data to monitor urban activity in Harbin, Paris, and Tallinn. Int. J. Geogr. Inf. Sci. 2015, 29, 2017–2039. [Google Scholar] [CrossRef]
  36. Guglieri, G.; Quagliotti, F.; Ristorto, G. Operational issues and assessment of risk for light UAVs. J. Unmanned Veh. Syst. 2014, 2, 119–129. [Google Scholar] [CrossRef]
  37. Dalamagkidis, K.; Valavanis, K.P.; Piegl, L.A. Evaluating the risk of unmanned aircraft ground impacts. In Proceedings of the 16th Mediterranean Conference on Control and Automation, Ajaccio, France, 25–27 June 2008; pp. 709–716. [Google Scholar]
  38. Li, N.; Jiao, Q.; Zhu, X.; Wang, S. Prediction of departure aircraft taxi time based on deep learning. Trans. Nanjing Univ. Aeronaut. Astronaut. 2020, 37, 232–241. [Google Scholar]
  39. Zheng, Q.; Yang, M.; Yang, J.; Zhang, Q.; Zhang, X. Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process. IEEE Access 2018, 6, 15844–15869. [Google Scholar] [CrossRef]
  40. Jin, B.; Cruz, L.; Goncalves, N. Deep Facial Diagnosis: Deep Transfer Learning From Face Recognition to Facial Diagnosis. IEEE Access 2020, 8, 123649–123661. [Google Scholar] [CrossRef]
  41. Zhao, M.; Chang, H.; Xie, W.; Xie, Z.; Hu, J. Cloud shape classification system based on multi-channel cnn and improved fdm. IEEE Access 2020, 8, 44111–44124. [Google Scholar] [CrossRef]
  42. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  43. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-Sampling Technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  44. Zheng, Q.; Zhao, P.; Li, Y.; Wang, H.; Yang, Y. Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification. Neural Comput. Appl. 2021, 33, 7723–7745. [Google Scholar] [CrossRef]
  45. Zheng, Q.; Zhao, P.; Zhang, D.; Wang, H. MR-DCAE: Manifold regularization-based deep convolutional autoencoder for unauthorized broadcasting identification. Int. J. Intell. Syst. 2021, 36, 7204–7238. [Google Scholar] [CrossRef]
  46. You, L.; Jiang, H.; Hu, J.; Chang, C.H.; Chen, L.; Cui, X.; Zhao, M. GPU-accelerated Faster Mean Shift with euclidean distance metrics. In Proceedings of the 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Torino, Italy, 27 June–1 July 2022; pp. 211–216. [Google Scholar] [CrossRef]
  47. Husain, Z.; Al Zaabi, A.; Hildmann, H.; Saffre, F.; Ruta, D.; Isakovic, A.F. Search and Rescue in a Maze-like Environment with Ant and Dijkstra Algorithms. Drones 2022, 6, 273. [Google Scholar] [CrossRef]
Figure 1. Population density in a week.
Figure 2. Population density in a day.
Figure 3. Coordinate system of quadrotor UAV.
Figure 4. Structure of kinetic model.
Figure 5. Convolutional neural network sub-model.
Figure 6. Structure of CNN model.
Figure 7. Long short-term memory (LSTM).
Figure 8. Structure of LSTM model.
Figure 9. Structure of Cs-net model.
Figure 10. Data processing of Cs-net.
Figure 11. Structure of ground risk model.
Figure 12. Location distribution of observation points (blue dots are observation points).
Figure 13. Schematic diagram of Daxing regional gridding.
Figure 14. Loss curves of Cs-net on training and test data are shown in Figure 15; Figure 14 shows the ground environment semantic map.
Figure 15. Loss function of Cs-net on training and test data.
Figure 16. Ground impact of M210-RTK.
Figure 17. Ground impact of Swoop Aero.
Figure 18. Grid map of planned path.
Figure 19. Ground risk of different paths.
Figure 20. Calculation of ground risk.
Figure 21. Spatial-temporal characteristics of the ground risk map: (a) 00:00; (b) 08:00; (c) 12:00; (d) 18:00.
Table 1. Shelter factor index.
Shelter Factor    Area
0                 No shelter, such as lawn, desert, farmland or lake
0.25              Road, business district and playground
0.5               Residence area, garden
0.75              Park
1                 Fully sheltered area, such as forest
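For illustration only, the lookup implied by Table 1 can be written as a minimal Python sketch; the land-use labels and the default value below are hypothetical and are not taken from the paper's data pipeline.

```python
# Hypothetical helper: map a land-use label to the shelter factor of Table 1.
SHELTER_FACTOR = {
    "lawn": 0.0, "desert": 0.0, "farmland": 0.0, "lake": 0.0,      # no shelter
    "road": 0.25, "business_district": 0.25, "playground": 0.25,
    "residence": 0.5, "garden": 0.5,
    "park": 0.75,
    "forest": 1.0,                                                  # fully sheltered
}

def shelter_factor(land_use: str, default: float = 0.0) -> float:
    """Return the shelter factor for a grid cell's dominant land-use class."""
    return SHELTER_FACTOR.get(land_use, default)

print(shelter_factor("park"))  # 0.75
```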
Table 2. Architecture of CNN component.
Hidden Layer    Number of Convolution Kernels    Kernel Size
Conv1           32                               2 × 2
Conv2           64                               2 × 2
Maxpool1        512                              2 × 2
FC1             512                              4 × 4
FC2             512                              512 × 1
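As an illustration of the layer sequence in Table 2, the PyTorch-style sketch below mirrors the Conv–Conv–Maxpool–FC–FC stack. The input size, channel count, activation functions and the interpretation of the 512 entry for Maxpool1 are assumptions, since they are not fully specified by the table.

```python
import torch
import torch.nn as nn

class CNNComponent(nn.Module):
    """Sketch of the CNN component of Table 2 (input size and activations assumed)."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=2), nn.ReLU(),  # Conv1: 32 kernels, 2x2
            nn.Conv2d(32, 64, kernel_size=2), nn.ReLU(),           # Conv2: 64 kernels, 2x2
            nn.MaxPool2d(kernel_size=2),                           # Maxpool1: 2x2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(),  # FC1: 512 units (input features inferred lazily)
            nn.Linear(512, 512),            # FC2: 512 units
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example with a 16 x 16 single-channel grid of population features (shape assumed).
out = CNNComponent()(torch.randn(1, 1, 16, 16))
print(out.shape)  # torch.Size([1, 512])
```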
Table 3. Population density index.
ID    Observer Time       Latitude    Longitude    Area/m²    People Density
1     01/17/2020 01:00    116.201     39.906       90,062     2.9
2     01/17/2020 03:00    116.311     39.933       100,235    1.0
3     01/17/2020 09:00    116.251     39.920       73,640     6.5
Table 4. Parameters of UAV.
Parameters          M210-RTK    Swoop Aero
Mass/kg             6.14        10
Length/m            0.887       1.63
Width/m             0.880       2.22
Height/m            0.408       0.5
Frontal area/m²     0.234       0.5
Vertical area/m²    0.035       0.3
Table 5. Parameters of environment.
Parameters      Value
g/m·s⁻²         9.8
C_D             0.3
ρ_a/kg·m⁻³      1.225
α/J             10⁶
β/J             34
Event           10⁻⁵
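The α and β values in Table 5 match the ground-fatality model of Dalamagkidis et al. [37]. The Python sketch below works through that formula; the impact-energy calculation is simplified (drag neglected), the clipping of very small sheltering values is an explicit assumption, and the mapping from Table 1's 0–1 shelter factor to the model's sheltering parameter is not restated here.

```python
import math

ALPHA = 1e6   # J, impact energy giving ~50% fatality at nominal sheltering (Table 5, [37])
BETA = 34.0   # J, minimum impact energy able to cause a fatality (Table 5, [37])
G = 9.8       # m/s^2 (Table 5)

def impact_energy(mass_kg: float, altitude_m: float, speed_ms: float = 0.0) -> float:
    """Kinetic energy at ground impact, ignoring drag (simplifying assumption)."""
    v_impact_sq = speed_ms**2 + 2.0 * G * altitude_m
    return 0.5 * mass_kg * v_impact_sq

def fatality_probability(e_imp: float, p_s: float) -> float:
    """Fatality model of [37]; p_s > 0 is the sheltering parameter."""
    p_s = max(p_s, 1e-3)  # assumption: avoid the singular p_s = 0 case
    return 1.0 / (1.0 + math.sqrt(ALPHA / BETA) * (BETA / e_imp) ** (3.0 / p_s))

e = impact_energy(mass_kg=6.14, altitude_m=100.0, speed_ms=10.0)  # M210-RTK, Table 10
print(round(e, 1), round(fatality_probability(e, p_s=0.5), 3))
```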
Table 6. Model parameter adjustment table.
Normalized MSE for different learning rates (lr) and epochs:
Epoch    lr = 0.1    lr = 0.01    lr = 0.001
100      0.98        0.22         0.22
500      1.0         0.19         0.15
1000     0.63        0.14         0.08
Table 7. Model performance (MSE) under different batch sizes.
Batch Size    MSE
16            4.25
32            3.98
64            6.55
128           8.23
Table 8. Performance of Cs-net under different numbers of neurons.
Number of Neurons    MSE                MSE Speed (s)
64-64-64             15.48 (underfit)   1048.06
256-256-256          7.03               1326.51
256-128-64           3.78               1023.37
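Tables 6–8 summarize the hyperparameter tuning of Cs-net (learning rate, epochs, batch size and layer widths). The sketch below only illustrates the sweep pattern with a generic scikit-learn regressor on synthetic data; it is not the authors' training pipeline and will not reproduce the reported MSE values.

```python
from itertools import product

import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data; the real inputs are the spatio-temporal features of the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Sweep learning rate / epochs / layer widths, mimicking Tables 6-8 (values illustrative).
for lr, epochs, layers in product([0.1, 0.01, 0.001], [100, 500], [(64, 64, 64), (256, 128, 64)]):
    model = MLPRegressor(hidden_layer_sizes=layers, learning_rate_init=lr,
                         max_iter=epochs, batch_size=32, random_state=0)
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"lr={lr}, epochs={epochs}, layers={layers}: MSE={mse:.3f}")
```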
Table 9. Model comparison table.
Model                  R²      MSE      MAE
SVM                    0.73    4.02     18.3
RF                     0.82    3.77     10.25
Multiple regression    0.70    14.23    22.30
Cs-net                 0.86    3.78     5.32
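The comparison metrics in Table 9 (R², MSE, MAE) are standard regression scores; the snippet below shows how they can be computed, with toy values used purely for illustration.

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# y_true: observed people density per grid cell; y_pred: model output (toy values here).
y_true = [2.9, 1.0, 6.5, 4.2]
y_pred = [3.1, 1.2, 5.9, 4.0]

print("R2 :", round(r2_score(y_true, y_pred), 3))
print("MSE:", round(mean_squared_error(y_true, y_pred), 3))
print("MAE:", round(mean_absolute_error(y_true, y_pred), 3))
```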
Table 10. Input parameters of ground impact model.
Type                   M210-RTK       Swoop Aero
Mass/kg                6.14           17
Altitude/m             100~N (0.5)    100~N (0.5)
Initial speed/m·s⁻¹    10             20
Wind direction/°       270            270
Wind speed/m·s⁻¹       2              2
Aircraft heading/°     90             90
Track error            N (0.2)        N (0.2)
Table 11. Impact probabilities of the different areas.
Type          Sσ                 2Sσ                 3Sσ
M210-RTK      (82.11, 87.89)     (79.32, 90.88)      (76.43, 93.77)
Swoop Aero    (114.32, 117.8)    (112.58, 119.54)    (110.84, 121.28)
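Intervals such as those in Table 11 can be obtained by propagating the uncertain inputs of Table 10 through the impact model and reporting mean ± kσ bounds. The sketch below uses a deliberately simplified drag-free ballistic range (not the paper's kinetic model) and an assumed altitude spread, so its numbers will not match Table 11.

```python
import numpy as np

G = 9.8  # m/s^2
rng = np.random.default_rng(0)

# Table 10 inputs for the M210-RTK: altitude ~ N(100, 5) m (spread assumed),
# initial speed 10 m/s, wind speed 2 m/s assumed along the flight direction.
altitudes = rng.normal(100.0, 5.0, size=100_000)
t_fall = np.sqrt(2.0 * np.clip(altitudes, 0.0, None) / G)  # drag-free fall time
ranges = (10.0 + 2.0) * t_fall                             # horizontal travel during fall

mean, sigma = ranges.mean(), ranges.std()
for k in (1, 2, 3):
    print(f"{k}*sigma interval: ({mean - k * sigma:.2f}, {mean + k * sigma:.2f})")
```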
Table 12. Example of path planned.
Table 12. Example of path planned.
Coordinate Start(17,0)
Destination(28,26)
Speed A v e r a g e s p e e d / m × s 1 10
Euclidean Distance Start-Destination/m2729.468
Table 13. Ground risk under different paths.
              Path 1     Path 2    Path 3
Risk index    0.0204     0.0152    0.0127
Distance/m    2823.11    3746      2997.04
Table 14. Similarity of ground risk for different dates.
Period    Day 1    Day 2    Day 3    Day 4    Day 5    Day 6    Day 7
Day 1     1.00     0.975    0.982    0.963    0.869    0.944    0.726
Day 2     0.975    1.00     0.989    0.980    0.890    0.978    0.949
Day 3     0.982    0.989    1.00     0.982    0.977    0.974    0.960
Day 4     0.963    0.980    0.982    1.00     0.972    0.972    0.974
Day 5     0.869    0.890    0.977    0.972    1.00     0.954    0.966
Day 6     0.944    0.978    0.974    0.972    0.954    1.00     0.953
Day 7     0.726    0.949    0.960    0.974    0.966    0.953    1.00
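The similarity metric behind Table 14 is not restated here; the sketch below assumes cosine similarity between flattened daily risk maps as one plausible choice, which yields a symmetric matrix with a unit diagonal like Table 14.

```python
import numpy as np

def risk_map_similarity(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Cosine similarity between two flattened ground-risk maps (metric assumed)."""
    a, b = map_a.ravel(), map_b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: 7 daily risk maps on a 30 x 30 grid.
rng = np.random.default_rng(0)
daily_maps = [rng.random((30, 30)) for _ in range(7)]
sim = np.array([[risk_map_similarity(a, b) for b in daily_maps] for a in daily_maps])
print(np.round(sim, 3))  # symmetric, 1.0 on the diagonal
```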
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
