Article

An Evolutionary Game-Theoretic Approach to Unmanned Aerial Vehicle Network Target Assignment in Three-Dimensional Scenarios

School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300401, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4196; https://doi.org/10.3390/math11194196
Submission received: 11 September 2023 / Revised: 1 October 2023 / Accepted: 6 October 2023 / Published: 8 October 2023

Abstract:
Target assignment has been a hot topic of research in the academic and industrial communities for swarms of multiple unmanned aerial vehicles (multi-UAVs). Traditional methods mainly focus on cooperative target assignment in the plane, and they ignore three-dimensional scenarios for the multi-UAV network target assignment problem. This paper proposes a method for target assignment in three-dimensional scenarios based on evolutionary game theory to achieve cooperative targeting for multi-UAVs, significantly improving operational efficiency and achieving maximum utility. Firstly, we construct an evolutionary game model including game participants, a tactical strategy space, a payoff matrix, and a strategy selection probability space. Then, a multi-level information fusion algorithm is designed to evaluate the overall attack effectiveness of multi-UAVs against multiple targets. The replicator equation is leveraged to obtain the evolutionarily stable strategy (ESS) and dynamically update the optimal strategy. Finally, a typical scenario analysis and an effectiveness experiment are carried out on the RflySim platform to analyze the calculation process and verify the effectiveness of the proposed method. The results show that the proposed method can effectively provide a target assignment solution for multi-UAVs.

1. Introduction

With the innovation and rapid development of technology, unmanned aerial vehicles (UAVs) are widely used in search and rescue, road patrol, target surveillance, and other scenarios by virtue of their advantages of low cost, high reliability, and high scalability. Due to the complexity and rapid expansion of the UAV operating environment, rationally assigning the best target to the most suitable UAV at the least cost, through autonomous perception and collaborative decision-making, has become a hot field of scientific research, and it is of great significance for effectively monitoring and tracking multiple moving air targets in complex and adversarial environments [1,2]. However, with the increase in the number of UAVs and the urgency of real-time surveillance, low-damage and high-efficiency many-to-many target assignment has become a crucial challenge [3,4,5].
In recent years, researchers have proposed various methods to address the issue of target assignment, including mathematical programming methods [6,7,8], heuristic methods [9,10], and optimization algorithms [11]. Orhan et al. proposed a dynamic target allocation framework based on artificial neural networks that optimizes the target assignment plan by simultaneously considering multiple objectives and constraints [12]. Zhu et al. proposed an intelligent decision-making framework based on reinforcement learning for multi-target assignment problems, which can effectively learn, adapt to changing environments, and optimize the overall system performance through decision-making [13]. Zou et al. derived a decision tree-based target assignment algorithm that considers various factors such as distance, angle, and speed to effectively allocate targets to multiple space vehicles and make accurate allocation decisions [14]. Zhen et al. derived a cooperative target allocation algorithm based on an improved Contract Network Protocol (ICNP) for heterogeneous UAV swarms; it considers UAV capabilities, target characteristics, and communication constraints to make real-time and optimal allocation decisions [15]. Shalumov et al. introduced a dynamic programming-based approach for weapon–target allocation in multi-agent target-missile-defender engagements that considers missile flight time, defender capability, and target priority to make real-time and optimal allocation decisions, which was shown to be effective through experimental results [16]. Duan et al. introduced a target allocation algorithm inspired by wolf behaviors for unmanned aerial systems (UASs), which considers target characteristics, UAS capabilities, and communication constraints to make real-time and coordinated allocation decisions [17]. Yeduri et al. proposed a novel method of energy and throughput management in a delay-constrained small-world unmanned aerial vehicle (UAV)–IoT network [18]. Bose et al. proposed a novel method that computes the optimum height at which UAVs should hover, resulting in a maximum coverage radius with a sufficiently small outage probability [19]. Xia et al. proposed a multi-UAV cooperative target tracking system and designed a deep reinforcement learning (DRL)-based algorithm for intelligent flight action decisions of UAVs to track a moving air target using past and current position information [20]. Zhou et al. proposed a UAV swarm-based cooperative tracking architecture to systematically improve UAV tracking performance [21]. Despite recent progress, existing air combat target assignment methods still have some limitations, such as over-reliance on perfect information, a lack of adaptability to dynamic environments, and single-objective optimization.
Traditional optimization methods have limitations in producing optimal target assignment plans. In existing studies, when multi-UAV networks target a single target, the UAVs are often treated as independent, unrelated individuals, and the target's survival probability is obtained by simply multiplying their individual probabilities. However, obtaining accurate probability values in actual situations is often significantly challenging. The adversarial process between the two sides is essentially a game, and the other side's behavior can significantly affect the effectiveness of the multi-UAV network target assignment plan. Therefore, it is necessary to estimate the target assignment plan that the other side may adopt and comprehensively judge its impact in order to obtain the optimal target assignment plan for our side.
Evolutionary game theory, an outgrowth of traditional game theory, incorporates principles of biological evolution [22]. It combines the economic concept of “equilibrium” with the concept of “adaptation” to depict how groups adapt to their environment through learning, imitation, and trial-and-error under conditions of incomplete rationality, asymmetric information, and biases in expectations and environment [23]. Furthermore, evolutionary game theory prioritizes cooperation over destructive competition, making it a means of resource assignment that enables the maximization of overall effectiveness. This process ultimately leads to an evolutionarily stable state, providing a practical framework for studying interactions among multiple intelligent agents.
Numerous scholars have utilized evolutionary game theory to study cooperation problems, yielding significant results that have been widely applied to practical issues [24,25,26]. Given that the concept of stable cooperation and target assignment in a multi-UAV system, under limited information and resources, aligns with the principles of evolutionary game theory, a comprehensive analysis of the stable long-term trends of both parties engaged in the game becomes possible by leveraging these principles. Both sides in the target assignment problem face a choice between cooperation and competition. They must consider the balance between their own interests and the overall benefits. This decision-making process can be analyzed using the concept of evolutionary games, including the choice of game strategies, the dynamics of evolution, and the final equilibrium state. Evolutionary game theory can explore the effects of different decision-making strategies and evolutionary dynamics on the advantages of both sides, which can help to deeply understand the strategic decision-making and competitive mechanisms in multi-UAV network target assignments and provide effective solutions to the problem of optimal target assignment. The proposed method establishes a multi-UAV network target assignment evolutionary game model, with both sides’ UAVs in this model regarded as game participants and their respective target assignment plans serving as the game strategies. The payoff function of the game is constructed based on the total attack effectiveness and ineffectiveness achieved by both sides in the target assignment plan, with the confidence level used to measure the attack effectiveness of each UAV against its target [27]. Additionally, a multi-level information fusion method, based on evidence theory, is used to calculate the total attack effectiveness of multi-UAVs against multiple targets [28]. 
Based on the multi-UAV network target assignment evolutionary game model, a replicator equation is constructed to determine the evolutionarily stable strategy (ESS) and obtain the optimal assignment plan. Finally, the effectiveness of the proposed method is verified through typical case studies and experiments.
The main contributions of this paper can be summarized as follows: (i) evolutionary game theory is utilized to solve the multi-UAV network target assignment problem; (ii) a multi-level information fusion method based on evidence theory is designed to calculate the total attack effectiveness of multi-UAVs against multiple targets; and (iii) an experiment in different scenarios is conducted to demonstrate the effectiveness of the proposed method.

2. Problem Formulation and Model Construction

2.1. Problem Formulation

The problem studied in this paper can be described as follows: both our side and the target side consist of multiple UAVs of the same type, and each of our UAVs evaluates the expected benefits based on environmental information, its own capabilities, and the threat posed by the targets. Using this information, each of our UAVs determines the optimal target assignment plan against the target multi-UAVs, thus gaining an advantage in the game.
The state of each UAV is represented as $X = (x, y, z, v, \theta, \varphi)$, where $(x, y, z)$ represents the UAV's three-dimensional coordinates, $v$ represents the UAV's speed, $\theta$ represents the UAV's pitch angle, and $\varphi$ represents the UAV's heading angle. Let $R$ be the set of multi-UAVs and $B$ be the set of targets. The multi-UAV assignment plan is denoted by $\omega_R = \{a_{i,j} \mid i \in R, j \in B,\ a_{i,j} \in \{0,1\}\}$, where $a_{i,j} = 1$ represents that UAV $i$ selects target $j$. Similarly, the targets' assignment plan is denoted by $\omega_B = \{b_{j,i} \mid j \in B, i \in R,\ b_{j,i} \in \{0,1\}\}$, where $b_{j,i} = 1$ represents that target $j$ selects UAV $i$. The relative situation between UAV $i$ and target $j$ is shown in Figure 1, where the red UAV represents our UAV and the blue UAV represents the target UAV.
Assume that each UAV flies toward its target based on the target assignment results; if the target is within attacking range and the situation is favorable, the UAV attacks its target. Let the number of attacks available to our side's UAVs be denoted by $w_p^R$ and, similarly, the number of attacks available to the target side's UAVs by $w_p^B$. Figure 2 provides a comprehensive overview of the multi-UAV cooperative target assignment problem, illustrating the entire flow of the problem.

2.2. Model Construction

We can model the problem of multi-UAV network target assignment as an evolutionary game, denoted by $G = (N, S, P, I)$, with the following specifications:
Let $N = \{R, B\}$ be the set of participants of the two sides of the game: $R$ represents our side's multi-UAVs and $B$ represents the target side's.
Let $S = \{S_R, S_B\}$ be the strategy space in the evolutionary game model, where $S_R$ is the set of our side's pure strategies and $S_B$ is the set of pure strategies for the target side. Specifically, the $I$-th pure strategy $s_{R_I} \in S_R$ corresponds to the $I$-th target assignment plan $\omega_R^I$ for our side, and the $J$-th pure strategy $s_{B_J} \in S_B$ corresponds to the $J$-th target assignment plan $\omega_B^J$ for the target side. Let $m$ denote the number of our side's UAVs and $n$ the number of the target side's UAVs. Then $\tau_R = n^m$ is the number of pure strategies for our side and $\tau_B = m^n$ is the number of pure strategies for the target side. Therefore, we have $S_R = \{s_{R_1}, \ldots, s_{R_I}, \ldots, s_{R_{\tau_R}}\}$ and $S_B = \{s_{B_1}, \ldots, s_{B_J}, \ldots, s_{B_{\tau_B}}\}$.
Let $P = \{P_R, P_B\}$ be the payoff matrices in the evolutionary game model, where $P_R = \{u_R(s_{R_I}, s_{B_J}) \mid s_{R_I} \in S_R, s_{B_J} \in S_B\}$ is the payoff matrix for our side and $P_B = \{u_B(s_{R_I}, s_{B_J}) \mid s_{R_I} \in S_R, s_{B_J} \in S_B\}$ is the payoff matrix for the target side. Specifically, $u_R(s_{R_I}, s_{B_J})$ is the payoff obtained by our side when we choose the pure strategy $s_{R_I}$ and the target side chooses the pure strategy $s_{B_J}$; similarly, $u_B(s_{R_I}, s_{B_J})$ is the payoff obtained by the target side under the same strategy combination.
Let $I = \{I_R, I_B\}$ be the probability space for both sides to choose their respective strategies in the evolutionary game model. Let $I_R = \{p(s_{R_I}) \mid s_{R_I} \in S_R\}$ denote the probability space for our side, and let $I_B = \{q(s_{B_J}) \mid s_{B_J} \in S_B\}$ denote the probability space for the target side. Specifically, $p(s_{R_I})$ denotes the probability of our side choosing the pure strategy $s_{R_I}$, and $q(s_{B_J})$ denotes the probability of the target side choosing the pure strategy $s_{B_J}$.
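As a concrete illustration of the strategy space above, the pure strategies can be enumerated directly. The sketch below is a minimal Python illustration (the tuple encoding of plans is our assumption, not the paper's notation): each of the $m$ UAVs independently selects one of the $n$ targets, giving $\tau_R = n^m$ plans, and symmetrically $\tau_B = m^n$ for the target side.

```python
from itertools import product

def enumerate_strategies(m: int, n: int):
    """Enumerate all pure target-assignment strategies.

    A strategy for our side is encoded as a tuple whose i-th entry is
    the index of the target chosen by UAV i; the target side's
    strategies are encoded symmetrically.
    """
    S_R = list(product(range(n), repeat=m))  # tau_R = n**m plans for our side
    S_B = list(product(range(m), repeat=n))  # tau_B = m**n plans for the targets
    return S_R, S_B

# The 3-UAVs-vs-2-targets scenario of Section 5.2:
S_R, S_B = enumerate_strategies(3, 2)
print(len(S_R), len(S_B))  # 8 9
```

This matches the strategy counts used later in the typical scenario, where our side has $2^3 = 8$ strategies and the target side has $3^2 = 9$.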

3. Construction of the Payoff Matrix

3.1. Establishment of a Situational Assessment Function

Situational assessment is an important basis for UAVs to determine their target assignment. This evaluation is primarily determined by factors such as the perspective of both sides, the distance, the approach speed, and the relative altitude. Accordingly, the UAV's advantage functions are defined as follows:
(I)
Advantage function of perspective.
The advantage function of perspective is primarily related to the attack angle of the UAV. During the process of searching, tracking, and attacking targets, it is necessary to control the target within a certain angle range for the UAV to meet the requirements of detection accuracy and precision. The expression for the advantage function of perspective is given by:
$$S_\alpha = \frac{q_B - q_R}{2\pi}$$
where S α represents the advantage of the perspective of the UAV against the target, q B denotes the angle between the velocity vector of the target and the tracking line of the target, and q R denotes the angle between the velocity vector of the UAV and the tracking line of the UAV, as shown in Figure 3, where the red UAV represents our UAV and the blue UAV represents the target.
(II)
Advantage function of approach speed.
The advantage function of approach speed is primarily related to the relative approach speed of the UAV with respect to the target, as shown in Figure 4. It is compressed using the arctangent function and increased by 0.5 to maintain its positive value. The expression for the advantage function of approach speed is given by:
$$S_{V_C} = 0.5 + \frac{1}{\pi}\arctan\left(\frac{V_C}{100}\right)$$
where $S_{V_C}$ represents the advantage of the approach speed of the UAV against the target, and $V_C$ denotes the relative approach speed of the UAV with respect to the target, measured in units of meters per second (m/s).
(III)
Advantage function of altitude.
The advantage function of altitude is primarily related to the difference in altitude between the UAV and the target, as shown in Figure 5. It is controlled by a threshold altitude difference and a proportional coefficient that controls the rate of change of the advantage function when the UAV deviates from the optimal altitude value. The expression for the advantage function of altitude is given by:
$$S_h = e^{-\left(\frac{h - h_x}{\sigma h_x}\right)^2}$$
where S h represents the advantage of the altitude of the UAV against the target, h denotes the difference in altitude between the UAV and the target, h x denotes the threshold altitude difference representing the optimal altitude difference for the UAV, and σ denotes the proportional coefficient used to control the rate of change of the advantage function of altitude.
(IV)
Advantage function of attack.
The advantage function of attack comprises two parts: the advantage function of attack distance and angle, as shown in Figure 6 and Figure 7. The advantage function of attack distance represents the degree of advantage of the target side’s UAV being within the attack range of our side’s UAV and is given by:
$$S_r = e^{-\left(\frac{r - R_0}{\sigma_r}\right)^2}$$
where $S_r$ represents the advantage of the attack distance of our side's UAV against the target side's UAV, $r$ denotes the distance between the UAVs on both sides, $R_{min}$ and $R_{max}$ denote the minimum and maximum attack distances, respectively, $\sigma_r = (R_{max} + R_{min})/2$ characterizes the attack range, and $R_0 = 2(R_{max} - R_{min})$. The value of $\sigma_r$ controls the rate of change of the attack advantage function as it deviates from the optimum. When $r = R_0$, the attack distance advantage function reaches its maximum value of 1, indicating that the target UAV will remain within the attack range for a longer period.
The expression for the attack angle advantage function is given by:
$$S_c = e^{-\left(\frac{\beta}{\beta_0}\right)^2}$$
where $S_c$ represents the advantage of the attack angle of our side's UAV against the target side's UAV, $\beta$ denotes the angle between our side's UAV's tracking line and its velocity vector, and $\beta_0$ denotes the maximum off-axis angle. When $\beta = 0$, the attack angle advantage function reaches its maximum value of 1, indicating that the velocity vector of our side's UAV points toward the target side's UAV. A larger value of $\beta_0$ indicates better attack performance: when the velocity vector of our side's UAV does not point toward the target side's UAV, the rate of change of the attack angle advantage function decreases as $\beta_0$ increases.
The total attack advantage function is related to both the attack angle condition and the attack distance condition. Therefore, it can be expressed as the product of these two advantage values, as shown in the following expression:
$$S_A = S_c \times S_r$$
(V)
Situation assessment function.
The situation assessment function is used to evaluate the effectiveness of UAV attacks. It is obtained by weighting the angle, closing speed, altitude, and attack advantage function value. Experts typically determine the weighting values based on the magnitude and importance of each advantage value, which may change according to the UAV’s flight performance and changes in the posture situation. The expression for the constructed situation assessment function is shown in Equation (7):
$$S = 2 S_\alpha S_{V_C} + S_h + 2 S_A$$

3.2. Multi-Level Information Fusion Method Based on Evidence Theory

A payoff matrix is constructed using a multi-level fusion method based on evidence theory, utilizing the situation assessment function developed in Section 3.1, as shown in Figure 8.
The effectiveness matrices for the UAVs on both sides, denoted as $T_R$ and $T_B$, respectively, are obtained by calculating the relative attack effectiveness of the UAVs on both sides based on the situation assessment function. These matrices are defined in Equations (8) and (9):
$$T_R = \begin{pmatrix} t_{R_{11}} & \cdots & t_{R_{1n}} \\ \vdots & t_{R_{ij}} & \vdots \\ t_{R_{m1}} & \cdots & t_{R_{mn}} \end{pmatrix}$$
$$T_B = \begin{pmatrix} t_{B_{11}} & \cdots & t_{B_{1m}} \\ \vdots & t_{B_{ji}} & \vdots \\ t_{B_{n1}} & \cdots & t_{B_{nm}} \end{pmatrix}$$
Here, $t_{R_{ij}}$ represents the attack effectiveness of the $i$-th UAV against the $j$-th target, and $t_{B_{ji}}$ represents the attack effectiveness of the $j$-th target against the $i$-th UAV.
The first fusion level in the target assignment plan $\omega_R^I$ involves using evidence theory to fuse the attack effectiveness of the UAVs targeting the same target. Let $c_j$ denote the number of UAVs confronting target $j$, with $\sum_{j=1}^{n} c_j = m$, where $m$ denotes the total number of UAVs. In $\omega_R^I$, the joint attack effectiveness $E_{R_j}$ of the $c_j$ UAVs attacking target $j$ is calculated by Equation (10):
$$E_{R_j}(\omega_R^I) = t_{R_{k_1 j}} \oplus t_{R_{k_2 j}} \oplus \cdots \oplus t_{R_{k_{c_j} j}}$$
Here, $\oplus$ denotes the evidence theory fusion operator, and $k_1, k_2, \ldots, k_{c_j}$ denote the UAVs assigned to target $j$ in the plan $\omega_R^I$; $t_{R_{k_1 j}}, t_{R_{k_2 j}}, \ldots, t_{R_{k_{c_j} j}}$ represent the effectiveness of our side's UAVs targeting $j$.
Using evidence theory, all the evidence $E_{R_j}(\omega_R^I)$, $j = 1, 2, \ldots, n$, is fused at the second level, resulting in the overall attack effectiveness $E_R(\omega_R^I)$ of the $m$ UAVs against the $n$ targets, as shown in Equation (11):
$$E_R(\omega_R^I) = E_{R_1}(\omega_R^I) \oplus E_{R_2}(\omega_R^I) \oplus \cdots \oplus E_{R_n}(\omega_R^I)$$
Similarly, we can use the above evidence-theory-based multi-level information fusion algorithm to calculate $E_B(\omega_B^J)$.
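The fusion operator $\oplus$ is not spelled out explicitly here. One common instantiation, which we assume for illustration, treats each effectiveness value as a basic probability mass on the binary frame {effective, not effective}; Dempster's rule of combination then reduces to a simple scalar formula. (Note that the example matrix $T_B$ in Section 5.2 contains values above 1, so the paper's actual operator may normalize differently; this is only a sketch.)

```python
from functools import reduce

def dempster(a: float, b: float) -> float:
    """Dempster's rule on the binary frame {effective, not effective}.

    Masses m1 = (a, 1-a) and m2 = (b, 1-b); the conflicting mass
    a*(1-b) + (1-a)*b is renormalised away.
    """
    agree = a * b
    disagree = (1 - a) * (1 - b)
    return agree / (agree + disagree)

def fuse(values):
    """Multi-operand fusion: first level combines the UAVs attacking one
    target (Eq. (10)); the same operator is reused across targets at the
    second level (Eq. (11))."""
    return reduce(dempster, values)

# Two UAVs of moderate individual effectiveness reinforce each other:
print(round(fuse([0.6, 0.6]), 3))  # 0.692
```

A useful property of this instantiation is that fusing two values above 0.5 yields a value larger than either input, which matches the intuition that concentrating several UAVs on one target raises the joint attack effectiveness.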
To calculate the payoff values in the multi-UAV network target assignment evolutionary game model $G$, we first obtain the total effectiveness and ineffectiveness of both sides under the target assignment plans $\omega_R^I$ and $\omega_B^J$, denoted $E_R(\omega_R^I)$, $N_R(\omega_R^I)$, $E_B(\omega_B^J)$, and $N_B(\omega_B^J)$, where $N_R(\omega_R^I) = 1 - E_R(\omega_R^I)$ and $N_B(\omega_B^J) = 1 - E_B(\omega_B^J)$. Using these values, we can calculate the payoff of each side under a given strategy combination $(s_{R_I}, s_{B_J})$. The payoff of our side is the fusion of $E_R(\omega_R^I)$ with $N_B(\omega_B^J)$ divided by the fusion of $E_B(\omega_B^J)$ with $N_R(\omega_R^I)$, as shown in Equation (12); similarly, the payoff of the target side is the fusion of $E_B(\omega_B^J)$ with $N_R(\omega_R^I)$ divided by the fusion of $E_R(\omega_R^I)$ with $N_B(\omega_B^J)$, as shown in Equation (13).
$$u_R(s_{R_I}, s_{B_J}) = \frac{E_R(\omega_R^I) \oplus N_B(\omega_B^J)}{E_B(\omega_B^J) \oplus N_R(\omega_R^I)}$$
$$u_B(s_{R_I}, s_{B_J}) = \frac{E_B(\omega_B^J) \oplus N_R(\omega_R^I)}{E_R(\omega_R^I) \oplus N_B(\omega_B^J)}$$
Iterating through all possible strategy combinations in the multi-UAV network target assignment evolutionary game model G , we can calculate the corresponding payoff values for UAVs on both sides. This allows us to generate the payoff matrix for UAVs on both sides, denoted as P R and P B , respectively. The payoff matrix is a τ R by τ B matrix, where τ R is the number of UAV strategies, and τ B is the number of target strategies. The elements of these matrices are the calculated payoff values for each strategy combination, as shown in Equations (14) and (15), respectively.
$$P_R = \begin{pmatrix} u_R(s_{R_1}, s_{B_1}) & \cdots & u_R(s_{R_1}, s_{B_J}) & \cdots & u_R(s_{R_1}, s_{B_{\tau_B}}) \\ \vdots & & u_R(s_{R_I}, s_{B_J}) & & \vdots \\ u_R(s_{R_{\tau_R}}, s_{B_1}) & \cdots & u_R(s_{R_{\tau_R}}, s_{B_J}) & \cdots & u_R(s_{R_{\tau_R}}, s_{B_{\tau_B}}) \end{pmatrix}$$
$$P_B = \begin{pmatrix} u_B(s_{R_1}, s_{B_1}) & \cdots & u_B(s_{R_1}, s_{B_J}) & \cdots & u_B(s_{R_1}, s_{B_{\tau_B}}) \\ \vdots & & u_B(s_{R_I}, s_{B_J}) & & \vdots \\ u_B(s_{R_{\tau_R}}, s_{B_1}) & \cdots & u_B(s_{R_{\tau_R}}, s_{B_J}) & \cdots & u_B(s_{R_{\tau_R}}, s_{B_{\tau_B}}) \end{pmatrix}$$
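Under the same assumed scalar Dempster instantiation of $\oplus$, the payoff construction of Equations (12)–(15) can be sketched as follows. Note that with these formulas $u_B$ is the elementwise reciprocal of $u_R$, so the two matrices carry the same information in reciprocal form.

```python
import numpy as np

def dempster(a: float, b: float) -> float:
    # Scalar Dempster combination on a binary frame; an assumed
    # instantiation of the paper's fusion operator, not its definition.
    return a * b / (a * b + (1 - a) * (1 - b))

def payoff_matrices(E_R, E_B):
    """Build P_R and P_B from the total effectiveness of every plan.

    E_R: length tau_R sequence, E_R[I] = total effectiveness of plan omega_R^I.
    E_B: length tau_B sequence, likewise for the target side.
    Ineffectiveness is N = 1 - E; payoffs follow Eqs. (12)-(13).
    """
    tau_R, tau_B = len(E_R), len(E_B)
    P_R = np.empty((tau_R, tau_B))
    P_B = np.empty((tau_R, tau_B))
    for I in range(tau_R):
        for J in range(tau_B):
            N_R, N_B = 1 - E_R[I], 1 - E_B[J]
            P_R[I, J] = dempster(E_R[I], N_B) / dempster(E_B[J], N_R)
            P_B[I, J] = dempster(E_B[J], N_R) / dempster(E_R[I], N_B)
    return P_R, P_B
```

The reciprocal structure is visible in the numerical example of Section 5.2, where corresponding entries of the two payoff matrices multiply to approximately 1 (e.g., 0.97 and 1.03).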

4. Analysis of Evolutionary Game Model

Under bounded rationality, game players have limited initial knowledge, which can result in the adoption of suboptimal strategies. This leads to constant adjustment and improvement of the payoffs of both sides during the game. Due to the differences in interests, those with lower payoffs will improve their strategies by learning from those with higher payoffs, resulting in changes in the proportion of each strategy over time. The dynamic change of strategies can be defined as a function of time. The dynamic change rate can be represented by a replicator equation based on biological evolution ideas [29]. This equation describes the dynamic change process of strategies and analyzes the equilibrium state of the evolutionary game to solve the equilibrium solution of the multi-UAV cooperative target assignment evolutionary game model. The specific algorithm is described as follows:
(I)
Given the probability spaces $I_R$ and $I_B$ of the UAVs' and targets' strategy choices in the evolutionary game,
(II)
The payoffs and expected payoffs of the different target assignment strategies for both sides can be calculated using the probability spaces $I_R$ and $I_B$ and the payoff matrices. Equations (16)–(19) show the formulas for calculating these:
$$U(s_{R_I}) = \sum_{J=1}^{\tau_B} q(s_{B_J})\, u_R(s_{R_I}, s_{B_J})$$
$$\bar{U}_R = \sum_{I=1}^{\tau_R} p(s_{R_I})\, U(s_{R_I})$$
$$U(s_{B_J}) = \sum_{I=1}^{\tau_R} p(s_{R_I})\, u_B(s_{R_I}, s_{B_J})$$
$$\bar{U}_B = \sum_{J=1}^{\tau_B} q(s_{B_J})\, U(s_{B_J})$$
(III)
The replicator equations for UAVs on both sides are established as follows:
$$R(p) = \frac{dp(s_{R_I})}{dt} = p(s_{R_I})\left(U(s_{R_I}) - \bar{U}_R\right)$$
$$B(q) = \frac{dq(s_{B_J})}{dt} = q(s_{B_J})\left(U(s_{B_J}) - \bar{U}_B\right)$$
(IV)
To solve for the evolutionary stable equilibrium, we can combine the replicator equations for UAVs on both sides obtained in the previous step (III) to construct the following system of replicator equations:
$$\begin{cases} R(p(s_{R_1})) = p(s_{R_1})\left(U(s_{R_1}) - \bar{U}_R\right) = 0 \\ \quad\vdots \\ R(p(s_{R_{\tau_R}})) = p(s_{R_{\tau_R}})\left(U(s_{R_{\tau_R}}) - \bar{U}_R\right) = 0 \\ B(q(s_{B_1})) = q(s_{B_1})\left(U(s_{B_1}) - \bar{U}_B\right) = 0 \\ \quad\vdots \\ B(q(s_{B_{\tau_B}})) = q(s_{B_{\tau_B}})\left(U(s_{B_{\tau_B}}) - \bar{U}_B\right) = 0 \end{cases}$$
Solving the above system of equations yields the evolutionary equilibrium strategy for the multi-UAV cooperative target assignment evolutionary game model, which is a strategy where the selection probabilities of each player’s strategies remain unchanged. However, some of the evolutionary equilibrium strategies may be unstable. This means that if both players deviate from this equilibrium state, the replicator equation will cause the evolutionary outcome to no longer converge to this strategy. Therefore, further analysis of the stability of the evolutionary equilibrium strategy is necessary to identify the stable strategies within the equilibrium and achieve optimal target assignment.
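Steps (I)–(IV) can also be carried out numerically. The sketch below starts both sides from uniform mixtures, computes the expected payoffs of Equations (16)–(19) as matrix products, and applies a forward-Euler discretization of the replicator equations until the strategy proportions stabilize. The step size and iteration count are illustrative assumptions, and both payoff matrices are taken here in the $\tau_R \times \tau_B$ convention.

```python
import numpy as np

def replicator_ess(P_R, P_B, steps=20000, dt=0.01):
    """Discrete-time replicator dynamics for a bimatrix game.

    P_R, P_B: tau_R x tau_B payoff matrices for our side and the
    target side. Returns the (numerically) stable mixed strategies.
    """
    tau_R, tau_B = P_R.shape
    p = np.full(tau_R, 1.0 / tau_R)  # our side's mixed strategy
    q = np.full(tau_B, 1.0 / tau_B)  # target side's mixed strategy
    for _ in range(steps):
        U_R = P_R @ q                # U(s_R_I), Eq. (16)
        U_B = P_B.T @ p              # U(s_B_J), Eq. (18)
        # Forward-Euler step of the replicator equations:
        p = p + dt * p * (U_R - p @ U_R)
        q = q + dt * q * (U_B - q @ U_B)
        # Guard against round-off drift off the simplex:
        p = np.clip(p, 0, None); p /= p.sum()
        q = np.clip(q, 0, None); q /= q.sum()
    return p, q
```

In a game with a strictly dominant pure strategy for each side, the dynamics concentrate all probability mass on those strategies, mirroring the convergence to a single target assignment plan seen in Figures 11 and 12.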

5. Simulation Results and Analysis

5.1. Introduction to the RflySim Simulation Platform

This study utilizes the RflySim platform, which is built in the Matlab/Simulink environment for flight simulation. The RflySim platform is a UAV flight control ecosystem released by the Reliable Flight Control Group at Beihang University. In reference [30], a comparison between physical flight experiments and simulation experiments based on the RflySim platform validated the platform's high simulation accuracy. The platform has undergone quantitative analysis tests and comparison experiments with fault injection, and its credibility was found to be above 90% (with 60% being the lowest value in the accuracy confidence interval). This fully confirms the high fidelity and practicality of the platform. The platform's core value is reflected in its software- and hardware-in-the-loop simulation, which includes the unique CopterSim, visual system plugins, and developed models. The software platform components are shown in Figure 9.

5.2. Analysis of the Typical Scenario

This section constructs a typical scenario of three UAVs and two targets, as shown in Figure 10, where the red UAVs represent our UAVs and the blue UAVs represent the targets. Each UAV has two attacks available and the same probability of being damaged. The situational information of each UAV is provided in Table 1.
In Table 1, the units of $x$, $y$, and $z$ are meters (m), the unit of $v$ is meters per second (m/s), and the units of $\theta$ and $\varphi$ are radians (rad). Both sides must strategize to maximize their advantage in the conflict. The target assignment problem is modeled as an evolutionary game model $G = (N, S, P, I)$.
The specific process of its construction is as follows:
(I)
$N = \{R, B\}$ represents the UAVs on both sides in the evolutionary game model.
(II)
$S = \{S_R, S_B\}$ represents the strategy space in the evolutionary game model, as shown specifically in Table 2.
(III)
$I = \{I_R, I_B\}$ represents the probability space in the evolutionary game model, with equal initial probabilities as shown in Table 3.
(IV)
Construct the payoff matrices on both sides.
First, calculate the attack effectiveness of each UAV based on the situational advantage function, resulting in the effectiveness matrices T R and T B , as shown in Equations (23) and (24):
$$T_R = \begin{pmatrix} 0.57 & 0.26 \\ 0.85 & 0.60 \\ 0.28 & 0.30 \end{pmatrix}$$
$$T_B = \begin{pmatrix} 1.39 & 1.11 & 0.99 \\ 0.92 & 0.59 & 0.51 \end{pmatrix}$$
Then, calculate the total effectiveness for each target assignment plan, based on Equations (10) and (11), as shown in Table 4.
On this basis, calculate the payoffs for UAVs on both sides for each combination of strategies, based on Equations (12) and (13), resulting in the payoff matrices P R and P B for the UAVs on both sides, as shown in Equations (25) and (26).
$$P_R = \begin{pmatrix}
0.97 & 0.96 & 0.96 & 1.02 & 0.97 & 0.96 & 1.01 & 1.07 & 0.97 \\
0.72 & 0.73 & 0.73 & 0.69 & 0.72 & 0.73 & 0.70 & 0.67 & 0.73 \\
0.93 & 0.92 & 0.92 & 0.97 & 0.92 & 0.92 & 0.96 & 1.02 & 0.92 \\
0.96 & 0.95 & 0.94 & 1.00 & 0.95 & 0.95 & 1.00 & 1.05 & 0.95 \\
1.00 & 0.99 & 0.97 & 1.05 & 1.00 & 1.00 & 1.04 & 1.10 & 0.99 \\
0.94 & 0.93 & 0.93 & 0.98 & 0.93 & 0.93 & 0.97 & 1.03 & 0.93 \\
0.99 & 0.98 & 0.97 & 1.03 & 0.98 & 0.98 & 1.02 & 1.08 & 0.98 \\
0.40 & 0.44 & 0.45 & 0.31 & 0.42 & 0.43 & 0.33 & 0.23 & 0.43
\end{pmatrix}$$
$$P_B = \begin{pmatrix}
1.03 & 1.38 & 1.07 & 1.04 & 1.00 & 1.06 & 1.01 & 2.50 \\
1.04 & 1.35 & 1.09 & 1.06 & 1.01 & 1.07 & 1.03 & 2.26 \\
1.04 & 1.34 & 1.09 & 1.06 & 1.01 & 1.08 & 1.03 & 2.21 \\
0.98 & 1.42 & 1.03 & 1.00 & 0.96 & 1.01 & 0.97 & 3.21 \\
1.04 & 1.36 & 1.08 & 1.05 & 1.00 & 1.06 & 1.02 & 2.36 \\
1.04 & 1.35 & 1.09 & 1.05 & 1.01 & 1.07 & 1.02 & 2.30 \\
0.99 & 1.41 & 1.04 & 1.01 & 0.97 & 1.03 & 0.98 & 3.01 \\
0.94 & 1.46 & 0.98 & 0.95 & 0.91 & 0.97 & 0.92 & 4.29 \\
1.04 & 1.36 & 1.08 & 1.05 & 1.01 & 1.07 & 1.02 & 2.34
\end{pmatrix}$$
To obtain an evolutionary equilibrium solution for the multi-UAV network target assignment game, a replicator equation is constructed. This equation describes how the distribution of strategies evolves based on their relative payoffs. The resulting equilibrium solution represents a stable distribution of strategies that will persist over time. The evolutionary trend of strategies for UAVs on both sides can be visualized through Figure 11 and Figure 12, which show the proportion of different strategies over time.
It can be observed from Figure 11 and Figure 12 that, to achieve higher payoffs, our side eventually tends to select the 6th strategy as its target assignment plan, while the target side eventually tends to select the 2nd strategy. The target assignment results of both sides are shown in Figure 13, where the red dots represent our UAVs, the blue dots represent the target UAVs, the red lines represent the target selections of our UAVs, and the blue lines represent the target selections of the target UAVs. This figure provides a visual representation of the final target assignments resulting from the strategies selected by both sides. As can be seen in Figure 13, our $R_2$ chooses target $B_1$, while our $R_1$ and $R_3$ choose target $B_2$; under this strategy our side obtains the greatest benefit, eliminating $B_2$ first while $R_2$ attacks $B_1$ as cover. Targets $B_1$ and $B_2$ both choose our $R_3$; under this strategy the target side obtains the greatest benefit, since it can eliminate $R_3$ in a short period and thereby reduce its own loss risk.

5.3. Results and Analysis of the Effectiveness Experiment

The advantage or disadvantage between both sides is largely dependent on the number of UAVs, their performance, and the relative situation between them. When the performance of the UAVs on both sides is equal, the advantage or disadvantage between the sides is primarily determined by the number of UAVs and the relative situation between them. To evaluate the effectiveness of the target assignment method for multi-UAVs based on the evolutionary game, we designed typical scenarios that include our advantageous scenario, our disadvantageous scenario, and an evenly-matched scenario by adjusting the number of UAVs on both sides. To compare the effectiveness of our proposed method, we selected the Min–Max strategy, particle swarm optimization (PSO), the genetic algorithm (GA), and the Hungarian algorithm as baselines [31,32,33,34]. These methods represent commonly used approaches for multi-UAV network target assignment and provide a useful benchmark for evaluating the performance of our proposed method.
At the conclusion of the simulation experiment, three simulation results were defined from the perspective of our side, based on the remaining number of UAVs on both sides, as follows:
Our side victory: At the end of the simulation, our side has more remaining UAVs than the target side.
Our side defeat: At the end of the simulation, our side has fewer remaining UAVs than the target side.
Draw: At the end of the simulation, both sides have the same number of remaining UAVs.
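The three outcome definitions above, together with the rate indicators of Table 5, can be sketched as follows; the outcome counts are hypothetical values, chosen here so that the resulting rates match the advantageous-scenario figures reported later in Table 6.

```python
def classify_outcome(our_remaining: int, target_remaining: int) -> str:
    """Map the remaining UAV counts at the end of a simulation to a result label."""
    if our_remaining > target_remaining:
        return "win"
    if our_remaining < target_remaining:
        return "loss"
    return "draw"

# Rates over a batch of runs, per Table 5 (counts are illustrative placeholders)
outcomes = ["win"] * 359 + ["draw"] * 11 + ["loss"] * 30   # 400 simulated runs
rates = {r: 100 * outcomes.count(r) / len(outcomes) for r in ("win", "draw", "loss")}
print(rates)  # {'win': 89.75, 'draw': 2.75, 'loss': 7.5}
```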
The simulation results are analyzed through the winning rate, losing rate, and draw rate [35], for which the detailed descriptions and calculations are shown in Table 5.
In each typical scenario, the target side used the method proposed in this paper, while our side used, in turn, the proposed method and each of the baselines; 400 simulation experiments were conducted for each pairing. Based on the simulation results, our side’s win, loss, and draw rates were calculated, as shown in Table 6.
The experimental results demonstrate that the proposed method outperforms the selected baselines to varying degrees in all three typical scenarios. In our advantageous scenario, the proposed method achieved a winning rate of 89.75%, a draw rate of 2.75%, and a loss rate of 7.5%, while the winning rates of the selected baselines ranged from 83% to 87.25%. In the evenly-matched scenario, the proposed method achieved a winning rate of 38.5%, a draw rate of 27.25%, and a loss rate of 34.25%, while the winning rates of the baselines ranged from 30.75% to 37.5%. In our disadvantageous scenario, the proposed method achieved a winning rate of 9.25%, a draw rate of 4.25%, and a loss rate of 86.5%, while the winning rates of the baselines ranged from 5.5% to 8%. Averaged across the three scenarios, the proposed method achieved a winning rate of 45.83%, a draw rate of 11.42%, and a loss rate of 42.75%, outperforming the baselines, whose average winning rates ranged from 40% to 44.17%.
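The quoted scenario averages follow directly from the per-scenario rates of the proposed method in Table 6:

```python
# Rates of the proposed method from Table 6, in the order
# (advantageous, evenly-matched, disadvantageous) scenario
win = [89.75, 38.5, 9.25]
draw = [2.75, 27.25, 4.25]
loss = [7.5, 34.25, 86.5]

def avg(xs):
    """Mean of the three scenario rates, rounded to two decimals as in Table 6."""
    return round(sum(xs) / len(xs), 2)

print(avg(win), avg(draw), avg(loss))  # 45.83 11.42 42.75
```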
These results indicate that our proposed method can both enhance our side's advantage in advantageous scenarios and mitigate our disadvantage in unfavorable ones. By applying evolutionary game theory, we are able to consider the countermeasures that the target side may adopt in response to our actions, which allows our approach to outperform the selected baseline methods by anticipating the dynamic interaction between our system and the target. Evolutionary game theory accounts for the adversarial nature of real-world environments when assigning UAV targets: it models how the combined strategies evolve over time as both sides continuously adapt to each other's decisions and the equilibrium shifts. This makes it a more robust and flexible framework than the alternative approaches, and it enhances the resilience and long-term effectiveness of our target assignment scheme in practical situations involving strategic adversaries.

6. Conclusions

In this paper, the multi-UAV network target assignment problem is investigated in a 3D scenario in order to solve the assignment problem for moving air targets. The proposed method models the problem with evolutionary game theory and constructs the payoff matrices of both sides by establishing a situational assessment function and a multi-level information fusion algorithm. The evolutionary equilibrium is then obtained by solving the replicator equations, yielding the optimal target assignment strategy for both sides and balancing each side's individual interest against the overall benefit. Finally, the whole process of the proposed method is analyzed through a typical scenario, and its effectiveness is compared with that of other algorithms in three scenarios. The experimental results show that the win rate under the proposed method is higher than that of the Min–Max strategy, PSO, GA, and the Hungarian algorithm, making its performance the best among the compared methods.
In future work, we plan to expand our analysis to consider factors such as uncertainty in the opponent's occupancy and in resource adequacy, in order to further study the multi-UAV network target assignment problem and to improve the applicability and robustness of our solution method. Additionally, we will incorporate a UAV maneuvering decision-making algorithm to maximize the operational efficiency of the UAV during the movement process. These future efforts will help to advance our understanding of multi-UAV application scenarios and provide valuable insights into the development of effective target assignment strategies.

Author Contributions

Conceptualization, methodology, validation, writing, and software, Y.G.; Data curation and formal analysis, Y.G., C.W., L.Z. and Q.W.; Supervision, funding acquisition, and review, Q.W., X.Z. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the S&T Program of Hebei under grant number 21567698H.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shin, H.; Lee, J.; Kim, H.; Hyunchul, S.D. An autonomous aerial combat framework for two-on-two engagements based on basic fighter maneuvers. Aerosp. Sci. Technol. 2018, 72, 305–315. [Google Scholar] [CrossRef]
  2. Jordan, J. The future of unmanned combat aerial vehicles: An analysis using the three horizons framework. Futures 2021, 134, 102848. [Google Scholar] [CrossRef]
  3. Nguyen, N.V.; Choi, S.M.; Kim, W.S.; Lee, J.W.; Kim, S.; Neufeld, D.; Byun, Y.H. Multidisciplinary unmanned combat air vehicle system design using multi-fidelity model. Aerosp. Sci. Technol. 2013, 26, 200–210. [Google Scholar] [CrossRef]
  4. Zhang, J.; Xing, J. Cooperative task assignment of multi-UAV system. Chin. J. Aeronaut. 2020, 33, 2825–2827. [Google Scholar] [CrossRef]
  5. Kline, A.; Ahner, D.; Hill, R. The Weapon-Target Assignment Problem. Comput. Oper. Res. 2019, 105, 226–236. [Google Scholar] [CrossRef]
  6. Chopra, S.; Notarstefano, G.; Rice, M.; Egerstedt, M. A Distributed Version of the Hungarian Method for Multirobot Assignment. IEEE Trans. Robot. 2017, 33, 932–947. [Google Scholar] [CrossRef]
  7. Davis, M.T.; Robbins, M.J.; Lunday, B.J. Approximate dynamic programming for missile defense interceptor fire control. Eur. J. Oper. Res. 2016, 259, 873–886. [Google Scholar] [CrossRef]
  8. Summers, D.S.; Robbins, M.J.; Lunday, B.J. An approximate dynamic programming approach for comparing firing policies in a networked air defense environment. Comput. Oper. Res. 2020, 117, 104890. [Google Scholar] [CrossRef]
  9. Huang, H.; Zhuo, T. Multi-model cooperative task assignment and path planning of multiple UAV formation. Multimed. Tools Appl. 2017, 78, 415–436. [Google Scholar] [CrossRef]
  10. Kong, L.; Wang, J.; Zhao, P. Solving the Dynamic Weapon Target Assignment Problem by an Improved Multi-objective Particle Swarm Optimization Algorithm. Appl. Sci. 2021, 11, 9254. [Google Scholar] [CrossRef]
  11. Lai, C.-M.; Wu, T.-H. Simplified swarm optimization with initialization for dynamic weapon–target assignment problem. Appl. Soft Comput. 2019, 82, 105542. [Google Scholar] [CrossRef]
  12. Orhan, K.; Esra, K.; Ahmet, S. A multi-objective approach for dynamic missile allocation using artificial neural networks for time sensitive decisions. Soft Comput. 2021, 25, 10153–10166. [Google Scholar] [CrossRef]
  13. Zhu, J.; Zhao, C.; Li, X.; Bao, W. Multi-target Assignment and Intelligent Decision Based on Reinforcement Learning. Acta Armamentarii 2021, 42, 2040. [Google Scholar]
  14. Zou, Z.; Chen, Q. Decision tree-based target assignment for the confrontation of multiple space vehicles. Acta Aeronautica Astronaut. Sin. 2022, 43, 726910. [Google Scholar]
  15. Zhen, Z.; Wen, L.; Wang, B.; Hu, Z.; Zhang, D. Improved Contract Network Protocol Algorithm Based Cooperative Target Allocation of Heterogeneous UAV Swarm. Aerosp. Sci. Technol. 2021, 119, 107054. [Google Scholar] [CrossRef]
  16. Shalumov, V.; Shima, T. Weapon–Target-Allocation Strategies in Multiagent Target–Missile–Defender Engagement. J. Guid. Control Dyn. 2017, 40, 2452–2464. [Google Scholar] [CrossRef]
  17. Duan, H.; Yang, Q.; Deng, Y.; Li, P.; Qiu, H.; Zhang, T.; Shen, Y. Unmanned aerial systems coordinate target allocation based on wolf behaviors. Sci. China Inf. Sci. 2019, 62, 014201. [Google Scholar] [CrossRef]
  18. Yeduri, S.R.; Chilamkurthy, N.S.; Pandey, O.J.; Cenkeramaddi, L.R. Energy and Throughput Management in Delay-Constrained Small-World UAV-IoT Network. IEEE Internet Things J. 2023, 10, 7922–7935. [Google Scholar] [CrossRef]
  19. Bose, T.; Suresh, A.; Pandey, O.J.; Cenkeramaddi, L.R.; Hegde, R.M. Improving Quality-of-Service in Cluster-Based UAV-Assisted Edge Networks. IEEE Trans. Netw. Serv. Manag. 2022, 19, 1903–1919. [Google Scholar] [CrossRef]
  20. Xia, Z.; Du, J.; Wang, J.; Jiang, C.; Ren, Y.; Li, G.; Han, Z. Multi-Agent Reinforcement Learning Aided Intelligent UAV Swarm for Target Tracking. IEEE Trans. Veh. Technol. 2022, 71, 931–945. [Google Scholar] [CrossRef]
  21. Zhou, L.; Leng, S.; Liu, Q.; Wang, Q. Intelligent UAV Swarm Cooperation for Multiple Targets Tracking. IEEE Internet Things J. 2021, 9, 743–754. [Google Scholar] [CrossRef]
  22. Nowak, M.A. Evolving cooperation. J. Theor. Biol. 2012, 299, 1–8. [Google Scholar] [PubMed]
  23. Takesue, H.; Ozawa, A.; Morikawa, S. Evolution of favoritism and group fairness in a co-evolving three-person ultimatum game. Europhys. Lett. 2017, 118, 48002. [Google Scholar] [CrossRef]
  24. Barreiro-Gomez, J.; Mas, I.; Giribet, J.I.; Moreno, P.; Ocampo-Martinez, C.; Sánchez-Peñac, R.; Quijano, N. Distributed data-driven UAV formation control via evolutionary games: Experimental results. J. Frankl. Inst. 2021, 358, 5334–5352. [Google Scholar] [CrossRef]
  25. Sun, C.H.; Duan, H.B. Markov decision evolutionary game theoretic learning for cooperative sensing of unmanned aerial vehicles. Sci. China Technol. Sci. 2015, 58, 1392–1400. [Google Scholar] [CrossRef]
  26. Yu, M.G.; He, M.; Zhang, D.G.; Luo, L.; Liu, J.T.; Zhang, L.G. An approach to coordinated control of structured unmanned swarm based on evolutionary game. In Proceedings of the 2020 3rd International Conference on Unmanned Systems, Harbin, China, 27–28 November 2020; pp. 220–226. [Google Scholar]
  27. Du, Y.; Yang, N. Double-Layer Distributed Fusion Decision Method in Big Data Environment. Chin. Manag. Sci. 2016, 24, 127–138. [Google Scholar]
  28. Li, W.; Liu, Y.; Wang, Z. A Modified Combination Rule of Evidence Theory. Entropy 2019, 21, 1025. [Google Scholar] [CrossRef]
  29. Taylor, P.D.; Jonker, L.B. Evolutionary stable strategies and game dynamics. Math. Biosci. 1978, 40, 145–156. [Google Scholar] [CrossRef]
  30. Dai, X.; Ke, C.; Quan, Q.; Cai, K.Y. RFlySim: Automatic test platform for UAV autopilot systems with FPGA-based hardware-in-the-loop simulations. Aerosp. Sci. Technol. 2021, 114, 106727. [Google Scholar] [CrossRef]
  31. Ramírez López, N.; Żbikowski, R. Effectiveness of autonomous decision making for unmanned combat aerial vehicles in dogfight engagements. J. Guid. Control Dyn. 2018, 41, 1021–1024. [Google Scholar] [CrossRef]
  32. Chen, K.; Sun, Q.; Zhou, A.; Wang, S. Adaptive Multiple Task Assignments for UAVs Using Discrete Particle Swarm Optimization. In Proceedings of the International Conference on Internet of Vehicles, Paris, France, 6–9 November 2018; pp. 220–229. [Google Scholar]
  33. Wu, X.; Yin, Y.; Xu, L.; Wu, X.; Meng, F.; Zhen, R. Multi-UAV task allocation based on improved genetic algorithm. IEEE Access 2021, 9, 100369–100379. [Google Scholar] [CrossRef]
  34. Jiang, Y.; Wang, D.B.; Bai, T.T.; Yan, Z.Y. Multi-UAV Objective Assignment Using Hungarian Fusion Genetic Algorithm. IEEE Access 2022, 10, 43013–43021. [Google Scholar] [CrossRef]
  35. Li, Q.; Yang, R.; Feng, C.; Liu, Z. Approach for air-to-air confrontment based on uncertain interval information conditions. J. Syst. Eng. Electron. 2019, 30, 100–109. [Google Scholar] [CrossRef]
Figure 1. Relative positioning of the UAV and target.
Figure 2. Process diagram for multi-UAV network target assignment based on evolutionary game theory.
Figure 3. Relative angle of the UAV and the target.
Figure 4. Advantage function of approach speed.
Figure 5. Advantage function of altitude.
Figure 6. Advantage function of attack distance.
Figure 7. Advantage function of attack angle.
Figure 8. Payoff matrix construction process.
Figure 9. Composition of the RflySim software platform.
Figure 10. Typical scenario based on the RflySim platform.
Figure 11. Evolutionary trend of strategies for UAVs.
Figure 12. Evolutionary trend of strategies for targets.
Figure 13. Target assignment results.
Table 1. Situational information of UAVs and targets.

UAV   x (m)   y (m)   z (m)   θ (rad)   φ (rad)   v (m/s)
R1    1121    3335    400     0         0.85      78
R2    1538    2998    400     0         0.85      67
R3    758     3043    400     0         0.85      84
B1    15      2503    420     0         −1.07     82
B2    32      2750    420     0         −1.07     70
Table 2. Strategy space on both sides.

Our side (each matrix has rows R1–R3 and columns B1–B2; a 1 in row i, column j means UAV Ri attacks target Bj):
s_R1 = [1 0; 1 0; 1 0]   s_R2 = [1 0; 1 0; 0 1]   s_R3 = [1 0; 0 1; 1 0]   s_R4 = [1 0; 0 1; 0 1]
s_R5 = [0 1; 1 0; 1 0]   s_R6 = [0 1; 1 0; 0 1]   s_R7 = [0 1; 0 1; 1 0]   s_R8 = [0 1; 0 1; 0 1]

Target side (each matrix has rows B1–B2 and columns R1–R3; a 1 in row i, column j means target Bi attacks UAV Rj):
s_B1 = [1 0 0; 1 0 0]   s_B2 = [0 0 1; 0 0 1]   s_B3 = [1 0 0; 0 0 1]
s_B4 = [0 1 0; 1 0 0]   s_B5 = [0 1 0; 0 1 0]   s_B6 = [0 1 0; 0 0 1]
s_B7 = [0 0 1; 1 0 0]   s_B8 = [0 0 1; 0 1 0]   s_B9 = [1 0 0; 0 1 0]
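The two strategy spaces of Table 2 are simply all the ways for each of our three UAVs to pick one of two targets (2³ = 8 plans) and for each of the two targets to pick one of three UAVs (3² = 9 plans), so they can be enumerated mechanically. Note that the lexicographic enumeration order below need not match the column order of Table 2.

```python
from itertools import product

# Our side: each of 3 UAVs picks one of 2 targets -> 2**3 = 8 plans,
# each plan a 3x2 zero-one matrix (rows: UAVs, columns: targets)
our_plans = [
    [[1 if t == choice else 0 for t in range(2)] for choice in combo]
    for combo in product(range(2), repeat=3)
]

# Target side: each of 2 targets picks one of 3 UAVs -> 3**2 = 9 plans,
# each plan a 2x3 zero-one matrix (rows: targets, columns: UAVs)
target_plans = [
    [[1 if u == choice else 0 for u in range(3)] for choice in combo]
    for combo in product(range(3), repeat=2)
]

print(len(our_plans), len(target_plans))  # 8 9
```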
Table 3. Initial probabilities for each strategy of both sides.

Our side      p_sR1    p_sR2    p_sR3    p_sR4    p_sR5    p_sR6    p_sR7    p_sR8
              0.125    0.125    0.125    0.125    0.125    0.125    0.125    0.125
Target side   p_sB1    p_sB2    p_sB3    p_sB4    p_sB5    p_sB6    p_sB7    p_sB8    p_sB9
              0.1111   0.1111   0.1111   0.1111   0.1111   0.1111   0.1111   0.1111   0.1111
Table 4. Total effectiveness for each target assignment plan (plans as defined in Table 2).

Our side:
Target assignment plan       s_R1   s_R2   s_R3   s_R4   s_R5   s_R6   s_R7   s_R8
Total attack effectiveness   0.33   1.05   0.29   0.31   0.38   0.29   0.36   1.42

Target side:
Target assignment plan       s_B1   s_B2   s_B3   s_B4   s_B5   s_B6   s_B7   s_B8   s_B9
Total attack effectiveness   0.38   0.41   0.42   0.31   0.40   0.41   0.33   0.27   0.40
Table 5. Evaluation indicators for the simulation results.

Indicator   Indicator Description   Computational Formula
r_w         Probability of win      (winning count / total simulation count) × 100%
r_d         Probability of draw     (draw count / total simulation count) × 100%
r_l         Probability of loss     (losing count / total simulation count) × 100%
Table 6. Simulation experiment results.

Typical Scenario       Method                r_w      r_d      r_l
Advantage (4v2)        Evolutionary Game     89.75%   2.75%    7.5%
                       Min–Max               83%      6.75%    10.25%
                       GA                    85.25%   4.75%    10%
                       PSO                   87%      8.5%     4.5%
                       Hungarian Algorithm   87.25%   3.75%    9%
Evenly-matched (2v2)   Evolutionary Game     38.5%    27.25%   34.25%
                       Min–Max               30.75%   24.5%    44.75%
                       GA                    31%      26.25%   42.75%
                       PSO                   37.5%    21.75%   40.75%
                       Hungarian Algorithm   37.25%   27%      35.75%
Disadvantage (2v4)     Evolutionary Game     9.25%    4.25%    86.5%
                       Min–Max               6.25%    2.25%    91.5%
                       GA                    5.5%     6.5%     88%
                       PSO                   7%       5.5%     87.5%
                       Hungarian Algorithm   8%       3.25%    88.75%
Average of the three   Evolutionary Game     45.83%   11.42%   42.75%
scenarios              Min–Max               40%      11.17%   48.83%
                       GA                    40.58%   12.5%    46.92%
                       PSO                   43.83%   11.92%   44.25%
                       Hungarian Algorithm   44.17%   11.33%   44.5%