1. Introduction
Currently, infrastructure networks, such as water supply networks, aviation networks, and transportation networks, play essential roles in human society [1,2]. Heavy dependence on these networks exposes human systems to a wide range of vulnerabilities, including threats posed by terrorists and hackers. For example, the September 11 attacks against the World Trade Center in New York and the Pentagon in Virginia resulted in a significant loss of life and had an enormous impact on the economy and politics. Moreover, networks are also prime targets during times of conflict. Therefore, it is vital to consider adversaries’ strategies and understand network interdependencies from a global perspective.
Numerous methods, such as probabilistic risk analyses and data analyses, have been proposed to protect infrastructures [3,4]. However, these methods are unsuitable for modeling the behavior of intelligent adversaries [3,4,5]. In such cases, game theory provides an appropriate modeling framework, within which the optimal strategies and interactions of players can be assessed [6,7]. For example, Brown et al. [8] formulated a sequential game model to minimize the operating costs of both attack and defense strategies. Pita et al. [9] employed game theory to examine the complexity of airport security. Zhang et al. [10] proposed a game model to address challenges in factory safety management. Feng et al. [11] took this a step further by integrating game theory and risk assessment to evaluate protective measures for multiple chemical facilities under the threat of attack; they later extended their study to incorporate multiple attackers [12]. Zhang et al. [13] investigated resource allocation within security games, while Guan et al. [14] studied an attack–defense game model that incorporated budget constraints. Zhang et al. [15] transformed an infrastructure game into a multiobjective optimization model and employed evolutionary algorithms to solve it.
However, importantly, the above studies overlook the complex interactions that exist within infrastructure systems. In reality, interconnected infrastructures form a complex network, wherein the failure of a single facility can affect the entire network. A typical network consists of nodes, edges connecting the nodes, and weights assigned to the edges. Initially, mathematicians believed that real systems could be represented by regular structures such as regular lattices and nearest-neighbor grids. In the late 1950s, Erdős et al. [16] introduced random networks, in which the existence of an edge between two nodes is determined by a probability. In recent decades, research on small-world and scale-free networks has initiated the modern study of complex networks. Watts et al. [17] proposed a small-world network model obtained by rewiring the edges of a regular network. Barabási et al. [18] introduced the scale-free network model, which is characterized by a few highly connected nodes and a power-law degree distribution. Li et al. [19] proposed a local-world evolving network model based on the world trade web. Comellas [20] introduced a small-world network model with a certain regularity in its node connections from the perspective of graph theory to study the topology of communication networks.
Therefore, it is crucial to consider the comprehensive impact of localized infrastructure failures on the entire infrastructure network. To address this issue, protection measures for infrastructure networks should be analyzed by integrating game theory and complex network theory. Fu et al. [21] developed a static network attack–defense game model to examine the impact of cascading failures. Gu et al. [22] analyzed the significance of the Bayesian Stackelberg game model from the perspective of network science. Zeng et al. [23] used the Bayesian Stackelberg game model and proposed a false-network construction method. Qi et al. [24,25] proposed a link-hiding rule and analyzed its optimization effect in dynamic attack–defense games on complex networks. Huang et al. [26] used sequential game theory to model attack–defense games in complex networks and proposed a strategy optimization method. Baykal-Guersoy et al. [27] introduced the concept of an attack number, which accounts for factors such as the number of affected individuals or the occupancy level of critical infrastructure, and developed a game model to examine the security of transportation networks. Li et al. [28,29,30] proposed an attack–defense model that takes a network perspective to investigate how network structure and cost constraints influence equilibrium outcomes under two typical strategies. Thompson et al. [31,32] analyzed the potential impacts of intelligent attacks and worst-case interruptions on the U.S. air transportation network and subsequently established and solved a three-level defender–attacker–defender optimization model. These game models can be roughly divided into two categories: simultaneous game models, in which the attacker and the defender do not know their opponent’s chosen strategy [28,29,33], and Stackelberg (sequential) game models, in which the attacker can effectively surveil the security measures of the defender [23,24,25,34,35,36].
In the above studies, it is assumed that players’ strategies are unconstrained, which is not always the case in realistic situations. In practice, players are often restricted by objective conditions when choosing strategies. Charnes [37] developed the two-person zero-sum constrained game. Owen [38] investigated the existence of solutions to constrained two-person zero-sum matrix games using dual linear programming theory. Firouzbakht et al. [39] proposed a constrained bimatrix game framework with practical applications in various fields, such as modeling packet jamming in wireless networks. Xiao et al. [40] proposed an interval bimatrix game with constrained strategies.
In this paper, we introduce a new approach to the attacker–defender game model by incorporating strategy constraints. Considering the average distances between nodes is crucial for securing critical infrastructures like high-speed rail (HSR) networks. Shorter distances between stations enable the rapid communication of security signals, essential for swift detection and responses to threats. This quick communication directly impacts the response time of automated security measures. During an attack, shorter distances can significantly reduce the time needed to activate security protocols, mitigating the attack’s severity. By incorporating the average node distance as a key metric, our model introduces a method for quantifying the feasibility of strategy selection. The larger the average distance between selected nodes, the more difficult it is to apply that strategy in realistic situations. This approach is not only innovative within the field but also mirrors practical scenarios. We conduct experiments in a target network to analyze the impacts of these constraints.
The rest of the paper is organized as follows: In Section 2, we present some basic assumptions, constrained strategies, and payoffs. In Section 3, the method used for solving the game is presented. The equilibrium results are analyzed in Section 4. Finally, Section 5 concludes the paper.
2. An Attacker–Defender Game Model Based on Constrained Strategies
Considering constrained strategies, we build an attacker–defender game model for infrastructure networks. An infrastructure network can be represented by an undirected simple graph $G = (V, E)$, where $V$ represents the node set, $N = |V|$ is the number of nodes, and $E$ represents the link set. Let the adjacency matrix of graph $G$ be $A = (a_{uv})_{N \times N}$. If there is a link between nodes $u$ and $v$, then $a_{uv} = 1$; otherwise, $a_{uv} = 0$.
Since the actions of the attacker and the defender are simultaneous, this game model is a static model. The attacker–defender game is represented by a ten-tuple $\{P_a, P_d, V_a, S_a, V_d, S_d, \mathbf{x}, \mathbf{y}, U_a, U_d\}$, whose components are as follows:
(1) $P_a$ represents the attacker in the attacker–defender game model. The attacker predicts the defense strategy of the defender to develop an attack strategy.
(2) $P_d$ represents the defender in the attacker–defender game model. The defender predicts the attack strategy of the attacker to develop a defense strategy.
(3) $V_a \subseteq V$ represents the attack node set. If the attacker chooses to target nodes $u$ and $v$, then $V_a = \{u, v\}$.
(4) $S_a = \{s_1^a, s_2^a, \ldots, s_m^a\}$ represents the attack strategy set. The vector $s_i^a = (s_{i1}^a, s_{i2}^a, \ldots, s_{iN}^a)$ indicates the $i$th attack strategy, where $s_{ik}^a = 1$ if node $v_k$ is attacked and $s_{ik}^a = 0$ otherwise.
(5) $V_d \subseteq V$ represents the defense node set. If the defender chooses to protect nodes $u$ and $v$, then $V_d = \{u, v\}$.
(6) $S_d = \{s_1^d, s_2^d, \ldots, s_n^d\}$ represents the defense strategy set. The vector $s_j^d = (s_{j1}^d, s_{j2}^d, \ldots, s_{jN}^d)$ indicates the $j$th defense strategy, where $s_{jk}^d = 1$ if node $v_k$ is defended and $s_{jk}^d = 0$ otherwise.
(7) $\mathbf{x} = (x_1, x_2, \ldots, x_m)$ represents the attacker's mixed strategy. The element $x_i$ indicates that the attacker adopts strategy $s_i^a$ with probability $x_i$.
(8) $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ represents the defender's mixed strategy. The element $y_j$ indicates that the defender adopts strategy $s_j^d$ with probability $y_j$.
(9) $U_a(s_i^a, s_j^d)$ represents the profit function for the attacker. Its value depends on both the attack strategy and the defense strategy; different strategy profiles generate different profit values for the attacker.
(10) $U_d(s_i^a, s_j^d)$ represents the profit function for the defender. Its value likewise depends on both strategies; different strategy profiles generate different profit values for the defender.
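To make the ten-tuple concrete, the following minimal sketch collects these components in a small container class. It is an illustrative assumption of how the model could be represented in code (using networkx and numpy), not the implementation used in this paper.

```python
from dataclasses import dataclass
import numpy as np
import networkx as nx

@dataclass
class AttackerDefenderGame:
    """Container for the static attacker-defender game described above."""
    G: nx.Graph                     # infrastructure network G = (V, E)
    attack_strategies: np.ndarray   # S_a: one 0/1 row of length N per attack strategy
    defense_strategies: np.ndarray  # S_d: one 0/1 row of length N per defense strategy
    x: np.ndarray                   # attacker's mixed strategy (probability per row of S_a)
    y: np.ndarray                   # defender's mixed strategy (probability per row of S_d)

    def validate(self) -> None:
        # Each mixed strategy must be a probability distribution over the pure strategies.
        assert np.isclose(self.x.sum(), 1.0) and (self.x >= 0).all()
        assert np.isclose(self.y.sum(), 1.0) and (self.y >= 0).all()
```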
2.1. Basic Assumptions
(1) In this game, there are two rational players, namely, the attacker and the defender. Both players possess complete information about the target network, including knowledge of all possible strategies and the objective metrics associated with the network’s structure for each strategy profile in the network.
(2) All attacks and defenses target nodes. A node is considered to be successfully attacked when it is targeted by the attacker without being protected by the defender. Once a node is successfully attacked, all the edges connected to that node are removed from the network.
(3) In this game, both the attacker and the defender independently formulate their strategies without prior knowledge of each other’s decisions. This simultaneous move structure is designed to capture scenarios in which each party operates under conditions of strategic secrecy and independent decision-making. Furthermore, the game is structured as a single-shot interaction, implying that there are no subsequent rounds which could provide opportunities for reassessment or adaptation based on an opponent’s previous moves.
2.2. Constrained Strategies
The selection probability of the $i$th attack strategy is denoted as $x_i$, while the selection probability of the $j$th defense strategy is denoted as $y_j$. The strategy constraints are established by bounding these probabilities from above:
$$x_i \le \alpha_i \quad \text{and} \quad y_j \le \beta_j,$$
where $\alpha_i$ and $\beta_j$ are constraint coefficients for the attacker and the defender, respectively, and belong to the interval $[0, 1]$.
Figure 1 provides a detailed illustration of the method used for calculating the strategy selection probability based on the average distance between selected nodes. In an example network comprising 10 nodes, we presume that both the attacker and the defender opt for three nodes for their respective strategies. The shortest paths between each pair of nodes are then meticulously calculated. The figure shows the process of deriving the average of these shortest paths, which forms the foundation for imposing constraints on strategy selection.
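As a concrete companion to Figure 1, the sketch below computes the average shortest-path distance over all pairs of selected nodes. The example graph and the three-node selection are hypothetical stand-ins for the 10-node network in the figure.

```python
from itertools import combinations
import networkx as nx

def average_selected_distance(G: nx.Graph, selected_nodes) -> float:
    """Average shortest-path length over all pairs of selected nodes."""
    pairs = list(combinations(selected_nodes, 2))
    lengths = [nx.shortest_path_length(G, u, v) for u, v in pairs]
    return sum(lengths) / len(lengths)

# Hypothetical connected 10-node, 20-edge network standing in for the example in Figure 1.
G = nx.connected_watts_strogatz_graph(10, 4, 0.3, seed=1)
print(average_selected_distance(G, [0, 3, 7]))  # average distance of one three-node strategy
```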
In this model, as the average distance between the selected nodes increases, the probability of choosing the corresponding strategy decreases. We denote the average distance as $\bar{d}_i^{\,a}$ for the $i$th attack strategy and $\bar{d}_j^{\,d}$ for the $j$th defense strategy, and we set the strategy constraint rules so that the constraint coefficients $\alpha_i$ and $\beta_j$ decrease as $\bar{d}_i^{\,a}$ and $\bar{d}_j^{\,d}$ increase. Let $\lambda_a$ represent the attack strategy constraint parameter and $\lambda_d$ represent the defense strategy constraint parameter. These parameters indicate the strength of the strategy constraints for the two players. The values of $\lambda_a$ and $\lambda_d$ depend on the targeted network structure, the players' experience, and their subjective preferences. Larger values of $\lambda_a$ and $\lambda_d$ indicate weaker constraints, while smaller values indicate stronger constraints. For the attacker, $\alpha_i$ is calculated from $\lambda_a$, $\bar{d}_i^{\,a}$, and the minimum and maximum values $d_{\min}^{\,a}$ and $d_{\max}^{\,a}$ of the average distances over all attack strategies. For the defender, $\beta_j$ is calculated analogously from $\lambda_d$, $\bar{d}_j^{\,d}$, and the minimum and maximum values $d_{\min}^{\,d}$ and $d_{\max}^{\,d}$ over all defense strategies.
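The exact formulas for $\alpha_i$ and $\beta_j$ (referred to later as Equations (5) and (6)) are not reproduced in this excerpt. The sketch below is therefore one plausible instantiation, assuming a linear normalization of the average distances; it is intended only to illustrate the qualitative behavior: larger average distances give smaller coefficients, and a larger $\lambda$ weakens every bound.

```python
import numpy as np

def constraint_coefficients(avg_distances, lam):
    """Illustrative mapping from average selected-node distances to upper bounds
    (alpha_i or beta_j) on strategy selection probabilities.

    The linear form below is an assumption for demonstration only; the paper's
    exact rule may differ.  Strategies whose selected nodes are far apart
    receive smaller bounds, and a larger lam weakens the constraint."""
    d = np.asarray(avg_distances, dtype=float)
    d_min, d_max = d.min(), d.max()
    if np.isclose(d_min, d_max):
        return np.minimum(np.full_like(d, lam), 1.0)
    closeness = (d_max - d) / (d_max - d_min)   # 1 = closest node set, 0 = farthest
    return np.minimum(lam * closeness, 1.0)
```

Note that a feasible mixed strategy exists only if the resulting upper bounds sum to at least one; this is consistent with the use of a critical parameter value in Section 4.2 when no solution exists.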
Additionally, we propose incorporating an entropy-based measure to quantify the uncertainty and variability of the strategy selection probabilities. The entropies of the attack and defense strategies can be defined as
$$H_a = -\sum_{i} x_i \ln x_i \quad \text{and} \quad H_d = -\sum_{j} y_j \ln y_j.$$
These entropy measures serve as a metric for assessing the diversity and unpredictability of the two players' strategies.
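A direct computation of these entropy measures, assuming the natural logarithm and the convention that zero-probability strategies contribute nothing:

```python
import numpy as np

def strategy_entropy(probs) -> float:
    """Shannon entropy of a mixed strategy; zero-probability strategies contribute nothing."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(strategy_entropy([0.25, 0.25, 0.25, 0.25]))  # maximal for four strategies: ln 4 ≈ 1.386
print(strategy_entropy([1.0, 0.0, 0.0, 0.0]))      # fully predictable strategy: 0.0
```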
2.3. Payoffs
In Section 2.1, we assume that a node is successfully removed only if it is attacked by the attacker and not protected by the defender. We define the sets of removed nodes and removed edges as $V_r$ and $E_r$, respectively; $E_r$ consists of all edges incident to the nodes in $V_r$. The resulting network after their removal can then be denoted as $G' = (V \setminus V_r, E \setminus E_r)$. The set of removed nodes $V_r$ is equal to the set of nodes attacked by the attacker, $V_a$, minus the set of nodes that are both attacked and protected, i.e.,
$$V_r = V_a \setminus (V_a \cap V_d).$$
We denote the measure of network performance as $\Phi(\cdot)$, which can be evaluated by the size of the largest connected component [41], the network efficiency [42], or other metrics. The attacker's payoff is defined as
$$U_a(s_i^a, s_j^d) = \Phi(G) - \Phi(G'),$$
while the defender's payoff is defined as
$$U_d(s_i^a, s_j^d) = \Phi(G') - \Phi(G),$$
where $\Phi$ is the measure of network performance. In this paper, $\Phi(G)$ and $\Phi(G')$ are the sizes of the largest connected components of the original network $G$ and the residual network $G'$, respectively. The sum of the attacker's payoff and the defender's payoff is zero, indicating a zero-sum game.
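A minimal sketch of this payoff computation, assuming the removal rule of Section 2.1 and the largest-connected-component performance measure (the node and strategy inputs are illustrative):

```python
import networkx as nx

def largest_component_size(G: nx.Graph) -> int:
    """Performance measure Phi: the size of the largest connected component."""
    return max((len(c) for c in nx.connected_components(G)), default=0)

def attacker_payoff(G: nx.Graph, attacked, defended) -> int:
    """Attacker's payoff for one pure-strategy profile; the defender's payoff is its negative."""
    removed = set(attacked) - set(defended)    # only unprotected targets are destroyed
    G_residual = G.copy()
    for v in removed:                          # remove every edge incident to a destroyed node
        G_residual.remove_edges_from(list(G_residual.edges(v)))
    return largest_component_size(G) - largest_component_size(G_residual)
```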
4. Experiment
In this section, we conduct experiments to demonstrate the effectiveness of our model, using a high-speed rail (HSR) network as an example. An HSR network spans vast geographical areas with numerous stations and control centers, so the average distance between nodes plays a pivotal role in its security strategy. Consider the security strategy of a major HSR network such as China's extensive system, which connects numerous cities across the country: it must ensure the safety and integrity of both passengers and infrastructure, and the average distance between stations and control centers is crucial in determining the efficiency and effectiveness of security measures, from real-time monitoring to emergency response coordination.
For comparison purposes, we divided the experiments into two groups: one under unconstrained conditions and the other under constrained conditions. For each group of experiments, we set the number of attackable or defendable nodes to 2, 3, and 4. We then applied Equations (12) and (13) to generate Nash equilibrium solutions before proceeding with the analysis.
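Equations (12) and (13) are not reproduced in this excerpt. As a hedged illustration of how such equilibria can be computed in the unconstrained zero-sum case, the sketch below solves the attacker's maximin problem as a standard linear program with scipy.optimize.linprog; the constrained case of Section 2.2 would add the probability upper bounds as extra inequality constraints.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(U: np.ndarray):
    """Attacker's maximin mixed strategy and game value for payoff matrix U
    (rows: attack strategies, columns: defense strategies).  Standard LP
    formulation; the paper's own Equations (12)/(13) may differ in detail."""
    m, n = U.shape
    # Variables: x_1..x_m (attack probabilities) and v (game value); minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every defense column j: v - sum_i x_i * U[i, j] <= 0.
    A_ub = np.hstack([-U.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# Sanity check on matching pennies: the value is 0 and the strategy is (0.5, 0.5).
x, v = zero_sum_equilibrium(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```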
Our analysis was conducted on a system equipped with a 12th Gen Intel Core i7-12700H processor, 32.0 GB of RAM, and a 64-bit operating system running on an x64-based processor. The equipment is from Lenovo, a manufacturer located in Beijing, China. The data originated from the targeted network.
4.1. Experiment without Constrained Strategies
4.1.1. Experimental Setting
In our experiments conducted within a target network, Figure 2 offers a comprehensive visualization of the nodes' significance under various centrality metrics: degree centrality (DC), closeness centrality (CC), betweenness centrality (BC), and eigenvector centrality (EC). The visualization employs a color gradient in which redder nodes have higher values of the corresponding centrality metric.
Degree centrality (DC) [43]: This is a measure that quantifies the direct influence of a node on a network. It is based on the principle that nodes with higher degrees have a greater potential to directly affect their neighbors, thereby increasing their significance within the network. The degree of node $i$ is $k_i$, which is equal to the number of edges connected to it. The normalized degree centrality is calculated by
$$DC_i = \frac{k_i}{N-1},$$
where $N$ denotes the total number of nodes in $G$ and $N-1$ is the maximum possible degree, so dividing by $N-1$ normalizes the measure.
Closeness centrality (CC) [44]: This metric is based on the average time it takes for information to travel from one node to another; it quantifies how quickly a node can reach all other nodes in the network. The closeness centrality of a node is calculated as the sum of the reciprocals of the shortest distances from that node to all other nodes, divided by the number of other nodes. Nodes with higher closeness centrality values are considered more important because they have greater access to information and can influence the network more quickly. CC is calculated by
$$CC_i = \frac{1}{N-1} \sum_{j \neq i} \frac{1}{d_{ij}},$$
where $d_{ij}$ represents the shortest distance from node $i$ to node $j$. If there is no path between $i$ and $j$, the distance approaches infinity, in which case $1/d_{ij} = 0$.
Betweenness centrality (BC) [45]: This is a measure of the influence of a node on the flow of information in a network. It quantifies how often a node lies on the shortest paths between other pairs of nodes: the betweenness centrality of a node is calculated by summing, over all pairs of nodes, the fraction of shortest paths between the pair that pass through the node. Nodes with higher betweenness centrality values are considered more influential, as they play a crucial role in connecting different parts of the network and distributing information efficiently. BC is calculated by
$$BC_i = \sum_{s \neq i \neq t} \frac{g_{st}^{\,i}}{g_{st}},$$
where $g_{st}^{\,i}$ represents the number of shortest paths from node $s$ to node $t$ that pass through node $i$, and $g_{st}$ represents the total number of shortest paths from node $s$ to node $t$.
Eigenvector centrality (EC) [46]: This metric measures the importance of nodes in a network based on the quality of their connections to other nodes. EC quantifies how influential a node is by accounting for not only the number of its neighbors but also the importance of those neighbors. It is calculated by
$$EC_i = \frac{1}{\lambda} \sum_{j} a_{ij} x_j,$$
where $(a_{ij})$ is the adjacency matrix of the network, $x_j$ is the value of the $j$th entry of the normalized largest eigenvector, and $\lambda$ is a constant (the largest eigenvalue of the adjacency matrix).
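All four centrality metrics described above are available directly in networkx; the snippet below computes them on a hypothetical connected 10-node, 20-edge graph standing in for the target network of Figure 2.

```python
import networkx as nx

# Hypothetical connected 10-node, 20-edge graph (10 * 4 / 2 = 20 edges).
G = nx.connected_watts_strogatz_graph(10, 4, 0.3, seed=42)

dc = nx.degree_centrality(G)             # k_i / (N - 1)
cc = nx.closeness_centrality(G)          # standard closeness; the reciprocal-sum form in the
                                         # text corresponds to nx.harmonic_centrality(G)
bc = nx.betweenness_centrality(G)        # fraction of shortest paths through each node
ec = nx.eigenvector_centrality_numpy(G)  # entries of the leading eigenvector of A

for v in G.nodes:
    print(f"node {v}: DC={dc[v]:.3f} CC={cc[v]:.3f} BC={bc[v]:.3f} EC={ec[v]:.3f}")
```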
This network consists of 10 nodes and 20 edges. The centrality values of the individual nodes are shown in Figure 2; some nodes have high values across these metrics, while others have low values. In this model, an objective function is established based on the size of the largest connected component. We conducted this experiment with different numbers of nodes to be attacked or defended.
4.1.2. The Nash Equilibrium
The mixed-strategy Nash equilibrium results are presented in Table 1, Table 2 and Table 3. Only the pure strategies with nonzero probabilities in the equilibrium are listed, together with their respective probabilities. From the equilibrium results for the first scenario, in which two nodes can be attacked or defended, we observe that the attacker has five pure strategies with nonzero probabilities. Four of these attack strategies share the highest probability of 0.23077, while the remaining strategy has the lowest probability; notably, the nodes targeted by this last strategy have high values for the centrality properties examined in this network. For the defender, four strategies share the highest probability of 0.15385, among them the strategy covering the node with the highest centrality values. In the second scenario, with three attackable or defendable nodes, the nonzero probabilities of the strategies chosen by both the attacker and the defender are given in Table 2. For example, the attack strategy with the highest probability has a value of 0.2093, and the defender's most probable strategy is selected with probability 0.25. Similarly, in the third scenario, with four attackable or defendable nodes, there are a total of eight attack strategies and seven defense strategies with nonzero probabilities. Table 3 provides the probabilities of each strategy; the attacker concentrates most of its probability on a single strategy, while the defender tends to split its probability between two strategies.
It is evident that some strategies are much more likely to be chosen than others. For example, in the three scenarios, certain attack and defense strategies have probabilities close to 0.2 or 0.3. Additionally, by comparing the three scenarios, in which different numbers of nodes can be attacked or defended, we can see that this number affects the players' decisions. With more nodes, players have more flexibility in choosing their strategies, leading to a more complex game.
To explore the nodes that the attacker and the defender are most likely to select in the Nash equilibrium, we map the probabilities over pure strategies to probabilities over individual nodes via Equations (20) and (21): the selection probability of each node is obtained from the probabilities of all pure strategies that include that node, yielding probability distributions over the nodes for the two players. With this approach, the selection probability distribution of each node is obtained from the probabilities in Table 1, Table 2 and Table 3, as shown in Figure 3.
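A sketch of this strategy-to-node mapping is given below. It assumes that each pure strategy is encoded as a 0/1 node-indicator row and that a node's selection probability is the total probability of the pure strategies containing it; whether Equations (20) and (21) additionally renormalize the result is not shown in this excerpt, so renormalization is left optional.

```python
import numpy as np

def node_selection_probabilities(strategies: np.ndarray, mixed: np.ndarray, normalize=False):
    """Map a mixed strategy over pure strategies to per-node selection probabilities.

    strategies : (num_strategies, N) 0/1 matrix, one row per pure strategy.
    mixed      : equilibrium probability of each pure strategy.
    """
    p = mixed @ strategies          # total probability mass placed on each node
    if normalize:                   # optional renormalization to a distribution over nodes
        p = p / p.sum()
    return p

# Example: two pure strategies over 4 nodes, chosen with probabilities 0.7 and 0.3.
S = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0]])
print(node_selection_probabilities(S, np.array([0.7, 0.3])))  # -> [0.7 1.0 0.3 0.0]
```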
The nodes with the lowest probabilities of being attacked are those whose degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality are the highest. However, the defender allocates the greatest probability to protecting high-centrality nodes. This finding suggests that nodes with higher centrality scores are generally more likely to be protected. As the number of nodes to be attacked or defended increases, the probability distribution over the nodes becomes more uniform.
4.2. Experiments with Constrained Strategies
The Nash Equilibrium
As shown in Section 2.2, the constraint parameters $\lambda_a$ and $\lambda_d$ in Equations (5) and (6) are determined by various factors. In this experiment, we set their values based on the network structure shown in Figure 2 and the unconstrained Nash equilibrium results in Table 1, Table 2 and Table 3. However, in one of the scenarios, no solution exists under the initially chosen parameter values; in that case, we set the parameter to its critical value.
Table 4, Table 5 and Table 6 present the mixed-strategy Nash equilibrium results with strategy constraints for the scenarios in which the numbers of attack and defense nodes are equal (2, 3, and 4, respectively). These tables list the ten highest-probability attack and defense strategies of the two players.
When the number of attackable or defendable nodes is two, compared with the results without constraints, both the attacker and the defender are clearly more likely to choose certain strategies. For the attacker, the two most probable strategies have probabilities of 0.25 and 0.20602; for the defender, the corresponding probabilities are 0.06 and 0.049444.
When the number of attackable or defendable nodes increases, the probability distribution becomes more uniform, and certain strategies share equal probabilities. For instance, with three nodes, several attack strategies all have a probability of 0.048188, and several defense strategies all have a probability of 0.05. With four nodes, the attacker's most probable strategy has a probability of 0.035504, several other attack strategies share a probability of 0.024047, and several defense strategies share a probability of 0.020088.
4.3. The Probability Distribution of Each Node
Subsequently, we obtain the probability distribution across the nodes based on Equations (20) and (21). To analyze the effects of the various constraints, these distributions are illustrated in Figure 4.
According to Figure 4, applying the proposed model substantially changes the selection probabilities of the 10 nodes in the target network. When the number of attackable or defendable nodes is two, the change in the selection probability of individual nodes is significant; as this number increases, the change becomes less apparent. Specifically, when the number of nodes to be attacked or defended is two, the selection probabilities of two nodes increase significantly for the attacker, while for the defender the selection probability of one node decreases and those of the other nodes do not change significantly. When the number of attackable or defendable nodes is three, there is a small fluctuation in the selection probability of one node for both the attacker and the defender. When the number of nodes is four, only small changes occur.
Our experiments reveal key differences between the unconstrained and constrained scenarios. Constraints significantly affect the choices of both the attacker and the defender, showing that they are not merely theoretical but shape real-world security strategies. Without constraints, decision makers focus on single-node metrics when choosing strategies; with constraints, they must think more broadly, considering node interconnections and dependencies. This broadens the strategic landscape and mirrors the complexity of actual security situations.
5. Conclusions
Currently, infrastructure attack and defense scenarios have attracted considerable attention. The integration of complex network theory and game theory has provided valuable insights for choosing attack and defense strategies. Modeling an attacker–defender game helps in the analysis of strategic choices. To fit this to realistic situations, we propose a strategy constraint rule and a static game model under this rule.
This approach provides foundational understanding but is recognized to be a simplification of complex realities. In practice, strategic choices are subject to a multitude of constraints, including, but not limited to, resource limitations, temporal dynamics, and regulatory frameworks. The interplay of these factors requires a more integrated model. Future work will involve the development of a more adaptive algorithm. Therefore, we propose several perspectives for future research:
(1) Dynamic constraints: Real-world infrastructure systems are dynamic and constantly changing. Decision makers may face varying constraints over time due to factors such as resource availability, changes in the threat landscape, or evolving regulations. Future research may include exploring the implications of dynamic constraints on the game model and considering how decision makers adapt their strategies based on evolving constraints.
(2) Multiobjective optimization: In addition to constraints, decision makers often need to consider multiple objectives when selecting strategies for infrastructure protection. These objectives may include minimizing damage, maximizing system resilience, or optimizing resource allocation. Future research may include integrating multiobjective optimization techniques into game models to assist decision makers in selecting strategies that balance multiple competing objectives under constrained conditions.