1. Introduction
The National Development and Reform Commission (NDRC) and the National Energy Administration (NEA) of the People's Republic of China pointed out in their jointly issued "Guiding Opinions on the High-Quality Development of Distribution Grids under New Situations" that, by 2030, the transformation of distribution grids toward flexibility, intelligence, and digitalization will be basically completed [1]. As the core business hub, however, the cloud master station carries a large share of the security protection workload of new distribution businesses. With large numbers of power electronic devices being integrated into the grid, and under complex multi-domain, multi-terminal, and multi-task application scenarios, the expected operating pressure on the cloud master station has risen sharply. How to reduce this pressure has therefore become one of the research hotspots at the current stage [2,3,4,5].
To reduce the expected operating pressure on cloud master stations serving new types of distribution business, Wang et al. [6] constructed a security protection system for the new power system from both management and technical perspectives. Li et al. [7] designed a centralized information recognition algorithm based on the working principle of a networked power ordering system to achieve reasonable centralized cloud scheduling of information. The authors in [8] combined the architecture of the Internet of Things in the power sector and put forward security protection countermeasures for the risks faced by intelligent terminals. From the perspective of cloud–edge collaboration, however, these works are carried out only at the cloud master station and do not consider the computing resources available in edge devices, so resource scheduling still lacks flexibility.
As a data processing center, the cloud master station is an integral part of the cloud–edge architecture of the new power distribution system. To alleviate its expected operating pressure, in terms of discovering edge computing power [9], it is proposed in [10,11] that containerized applications can be extended to edge nodes and devices so that part of the edge-side device resources can be used for security protection. The authors in [12] proposed the detection and protection of false data injection attacks, a useful attempt at strengthening the security defense mechanism of the power information network. Younes et al. [13] used a flexible laboratory-scale test platform to evaluate new devices, control functions, and the configuration and communication of smart electronic systems supporting the monitoring, control, and protection of power grids. Cyber security protection of power systems abroad started early, and a relatively complete power system security protection system has been formed [14]. At this stage, however, edge substations often support multiple business cloud masters at the same time, and these methods do not adequately consider the impact of distributed edge collaboration on balancing the expected operating pressure across cloud masters.
From the perspective of cloud–edge collaboration, the security protection of the power distribution system is currently concentrated at the cloud master station, and edge-side computing resources are not fully utilized. Moreover, when one edge substation supports multiple cloud master stations, the current allocation of edge computing power is inflexible with respect to balancing the expected operating pressure of the cloud masters. This article therefore studies a cloud–edge-terminal network architecture that reduces the operating pressure of the cloud master station by mining the available computing resources on the edge side and transferring part of the security protection work to the edge, and models the problem by jointly considering service delay requirements and computing resource allocation. Drawing on the master-allocation-micro integrated collaborative control strategy for distributed resource potential in [15], we propose an adaptive scheduling method of heterogeneous resources on the edge side of power distribution collaboration based on cloud–edge security dynamic collaboration (ASM-CESDC). The main contributions of this article are summarized below:
- (1)
A multi-stage relay business security protection model is constructed for the edge substations and the business cloud masters by combining the available computing resources of each edge substation.
- (2)
Considering the distributed collaboration of multiple edge substations, a linear programming problem is formulated with the objective of minimizing the expected operating pressure of the most heavily loaded business cloud master, and the problem is solved quickly using the Karush–Kuhn–Tucker (KKT) conditions.
- (3)
Simulation results show that the proposed ASM-CESDC algorithm can significantly reduce the maximum expected operating pressure among the business cloud masters by fully exploiting the computing resources of multi-domain edge substations.
The rest of this article is organized as follows: Section 2 provides an overview of relevant work. Section 3 introduces the cloud–edge collaborative computing framework and system model. In Section 4, an optimization problem is formulated with the goal of minimizing the maximum operating pressure of the cloud master. Section 5 presents the solution to the problem. Section 6 describes the simulation results. Finally, Section 7 concludes this article.
2. Related Work
In recent years, with the rapid construction of the new power system, massive distributed resources have been connected to the power grid, which brings more demands for collaborative and interactive communication as well as new security threats [16]. In terms of distributed security, based on the multi-time-scale interaction between distributed resources and the power grid, the authors in [17] constructed a cloud–edge-terminal collaborative networking architecture for distributed resource interactive communication and further proposed an interactive security protection strategy that can ensure the information security of distributed resources participating in grid interaction. In terms of cloud–edge-terminal collaborative security protection, an architecture integrating real-time perception, dynamic decision making, and active defense, which deeply fuses the cloud, edge, and terminal to form collaborative security capabilities, was presented in [18]. It further achieves consistency of cloud, edge, and device risk perception, policy decision making, and attack-and-defense response, and implements efficient, low-cost collaborative protection against security risks. In terms of cross-domain security protection, Zhao et al. [19] proposed cyber–physical security characteristics that integrate physical security, functional safety, and network security, studied key technologies such as cross-domain security threat propagation modeling, collaborative situational awareness and detection, and multi-spatiotemporal linkage protection, and explored their application prospects in combination with power monitoring services, providing support for the safe operation of new power systems.
In addition, much research has been performed in [20,21,22] on single-objective, dual-objective, and multi-objective optimization in the Internet of Things, addressing metrics such as response time, reliability, service quality, and resource utilization. Despite this progress, the flexibility of collaborative cloud–edge resource scheduling remains insufficient. As a result, part of the computing resources of edge devices is not fully explored and utilized, limiting the overall computing power of the system and the potential for optimal resource allocation [23].
Edge computing plays a key role in alleviating the operating pressure of the cloud master, and efficient offloading and optimization of the computational load can be achieved through reasonable scheduling of edge computing resources [24]. To address dynamic changes in the computing load of the cloud master, the authors in [25] established a dual-objective optimization model for the optimal allocation of cloud resources with the goals of the shortest average response latency and the smallest energy consumption of the cloud master. The introduction of edge computing technology can redefine the relationship between the cloud, the network, and the device: Sun et al. [26] applied edge computing to the distribution Internet of Things to realize real-time, efficient, and lightweight data processing in situ and, in synergy with a new-generation distribution automation cloud master in terms of the network, data, and business, achieved autonomy of the distribution station area. As mentioned in [27], a cloud–edge-device collaborative task offloading framework named FreeOffload realizes real-time awareness of computing resources and network status and is designed for flexible offloading of heterogeneous embedded terminal tasks; its small-scale cloud–edge-terminal collaborative prototype system can also offload terminal tasks efficiently and flexibly.
Edge computing can transfer tasks from the cloud master to edge-side and terminal devices, which alleviates the expected operating pressure of the business cloud master to a certain extent. However, current edge substations usually need to support multiple cloud masters at the same time, and existing research has not yet fully considered the impact of edge substations on balancing the expected operating pressure across cloud masters, which restricts the overall effect of resource scheduling optimization [28]. In this paper, the flexibility of resource allocation is improved by exploiting the available computing resources and implementing adaptive allocation of computing resources at the edge. At the same time, distributed security protection can effectively reduce and balance the expected operating pressure of each cloud master.
5. Problem Solving
5.1. Solution Process
To facilitate problem solving, it is assumed that each edge substation and cloud master possesses sufficient computational resources to complete all security protection tasks within the specified business delay requirements. Therefore, during the solution process, the delay constraint C3 is used as the criterion for determining the number of security protection stages each business completes at the edge substation. For computing resource allocation, the optimization problem above can then be reformulated as shown in Equation (10).
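To make the role of constraint C3 concrete, the following sketch counts how many protection stages a business can finish at the edge before its deadline, with the remaining stages relayed to the cloud master. The per-stage cycle costs, data size, and allocated CPU frequency are hypothetical illustration values, not the paper's MATLAB implementation.

```python
# Illustrative sketch: given a business's delay requirement (constraint C3) and
# the CPU frequency allocated to it by an edge substation, count how many
# security protection stages finish locally before the deadline.
# The per-stage cycle costs and data size are hypothetical placeholders.

def stages_completed_at_edge(data_bits, cycles_per_bit_per_stage, f_edge_hz, delay_req_s):
    """Number of protection stages the edge substation can finish within the
    business's delay requirement; the remainder is relayed to the cloud master."""
    elapsed = 0.0
    done = 0
    for cycles_per_bit in cycles_per_bit_per_stage:
        stage_time = data_bits * cycles_per_bit / f_edge_hz  # processing time of this stage
        if elapsed + stage_time > delay_req_s:
            break
        elapsed += stage_time
        done += 1
    return done

# Example: a 2 Mbit business, three stages of increasing cost, 1 GHz allocated, 50 ms budget
print(stages_completed_at_edge(2e6, [5, 10, 20], 1e9, 0.05))  # -> 2
```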
For an inequality-constrained optimization problem, the Karush–Kuhn–Tucker (KKT) conditions are widely used to handle the inequality and equality constraints. The reformulated problem is strictly concave in the allocation variable, so a unique extreme point satisfying the KKT conditions exists [30]. The corresponding Lagrangian function is given in Equation (11), in which the Lagrange multipliers are associated with the constraints of the problem in Equation (10). Taking the cloud master to which a given business belongs as an example, its KKT conditions are shown in Equation (12).
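To illustrate the structure being exploited, a generic Lagrangian and the associated KKT conditions for a capacity-constrained allocation problem of this type are sketched below; the symbols (an allocation $f_k$, a concave per-business term $U_k$, a capacity $F$, and multipliers $\lambda$, $\mu_k$) are illustrative stand-ins and are not the exact quantities appearing in Equations (11) and (12).

```latex
% Generic Lagrangian and KKT conditions for a capacity-constrained allocation
% problem; the symbols are illustrative stand-ins, not Equations (11)-(12).
\begin{align}
  L(\mathbf{f},\lambda,\boldsymbol{\mu})
    &= \sum_{k} U_k(f_k)
       - \lambda\Big(\sum_{k} f_k - F\Big)
       + \sum_{k}\mu_k f_k ,\\
  \frac{\partial L}{\partial f_k}\bigg|_{f_k=f_k^{*}}
    &= U_k'(f_k^{*}) - \lambda^{*} + \mu_k^{*} = 0 ,\\
  \lambda^{*}\Big(\sum_{k} f_k^{*} - F\Big) &= 0 ,\qquad
  \mu_k^{*} f_k^{*} = 0 ,\qquad
  \lambda^{*}\ge 0 ,\qquad \mu_k^{*}\ge 0 .
\end{align}
```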
In Equation (12), the stationarity condition sets the first-order derivative of the Lagrangian with respect to the allocation variable to zero, and the starred quantities denote the optimal computing resource allocation of the cloud master associated with each business together with the corresponding optimal Lagrange multipliers. Further analysis of Equation (12) yields three properties of the optimal solution, Properties (1)–(3), which correspond to the possible combinations of active and inactive constraints given by the complementary slackness conditions. From these properties, the optimal allocation decreases monotonically as the associated Lagrange multiplier increases, so the optimal solution can be located by checking which property holds: if the allocation lies on a boundary of the feasible region, Property (1) or Property (3) gives the solution directly; otherwise, Property (2) holds and the multiplier is adjusted iteratively until the allocation converges, yielding the optimal computing power resource allocation scheme.
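The dichotomy step used to satisfy the KKT conditions can be sketched as follows. The sketch assumes a water-filling-style stationarity condition with hypothetical per-business weights and an allocation cap; it illustrates the iteration described above rather than reproducing the exact expressions of Equations (10)–(12).

```python
# Minimal sketch of the dichotomy (bisection) step: assuming a stationarity
# condition of the form f_k = clip(w_k / lambda, 0, f_max), the multiplier is
# bisected until the summed allocation matches the edge substation's available
# computing capacity. Weights and bounds are hypothetical.

import numpy as np

def allocate_edge_cpu(weights, f_total_hz, f_max_hz, iterations=200):
    """Bisection on the multiplier for allocations f_k = clip(w_k / lambda, 0, f_max);
    returns the per-business CPU shares (Hz)."""
    weights = np.asarray(weights, dtype=float)

    def alloc(lam):
        return np.clip(weights / lam, 0.0, f_max_hz)

    lo, hi = 1e-12, 1e12                   # bracket for the multiplier
    for _ in range(iterations):            # each step halves the bracket
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > f_total_hz:  # allocation too generous -> raise the multiplier
            lo = lam
        else:
            hi = lam
    return alloc(hi)                       # hi-side iterate keeps the allocation feasible

# Example: three businesses sharing 4 GHz at one edge substation, each capped at 2 GHz
print(allocate_edge_cpu([1.0, 2.0, 4.0], 4e9, 2e9))
```

With these hypothetical weights the search converges to roughly 0.67 GHz, 1.33 GHz, and 2 GHz: the capped business sits on its boundary, while the other two share the remaining capacity in proportion to their weights.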
5.2. Algorithm Design and Complexity Analysis
Based on the problem decomposition and local convex approximation in Section 5.1, global iterative optimization is carried out to complete the adaptive scheduling of cloud–edge computing resources; the pseudocode is shown in Algorithm 1.
Algorithm 1: An Adaptive Scheduling Method of Heterogeneous Resources on the Edge Side of Power Distribution Collaboration Based on Cloud–Edge Security Dynamic Collaboration
Inputs: latency requirements of each business, total computing power resources of each edge substation, total computing power resources of each cloud master, and the businesses included in each edge substation.
Outputs: expected operating pressure of each cloud master.
1: Calculate the available computing resources at each edge substation.
2: for each edge substation do
3:   Adaptively allocate computing power resources in accordance with the method described in Section 4.1.
4:   for each business at the edge substation do
5:     for each security protection stage do
         if the delay constraint is still satisfied // determine the stages each business can complete at the edge substation
           complete the stage at the edge substation
         else
           leave the remaining safeguarding process to the cloud master
         end
       end
     end
6:   Transmit each business to the corresponding cloud master station, which accomplishes the remaining security protection work.
7: Calculate the expected operating pressure of each cloud master according to Equation (8).
8: return Expected operating pressure of each cloud master.
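The following sketch ties the steps of Algorithm 1 together. The per-stage cycle costs, the pre-computed per-business CPU allocations, and the pressure formula (relayed CPU cycles divided by cloud capacity, standing in for Equation (8)) are assumptions for illustration and not the authors' implementation.

```python
# Illustrative end-to-end sketch of Algorithm 1: each business completes as many
# protection stages at its edge substation as its delay budget allows, the
# remaining stages are relayed to the owning cloud master, and that master's
# expected operating pressure is taken as relayed cycles / capacity
# (a hypothetical stand-in for Equation (8)).

def expected_cloud_pressure(substations, cloud_capacity_cycles):
    """substations: one list of business dicts per edge substation; each dict has
    'cloud', 'data_bits', 'stage_cycles_per_bit', 'delay_s', 'f_alloc_hz'
    (the CPU frequency already allocated by the step-3 method of Section 4.1)."""
    cloud_load = {c: 0.0 for c in cloud_capacity_cycles}
    for businesses in substations:
        for biz in businesses:
            elapsed, done = 0.0, 0
            for cpb in biz["stage_cycles_per_bit"]:            # stages finished at the edge
                t = biz["data_bits"] * cpb / biz["f_alloc_hz"]
                if elapsed + t > biz["delay_s"]:
                    break
                elapsed, done = elapsed + t, done + 1
            for cpb in biz["stage_cycles_per_bit"][done:]:     # remaining stages relayed
                cloud_load[biz["cloud"]] += biz["data_bits"] * cpb
    return {c: cloud_load[c] / cloud_capacity_cycles[c] for c in cloud_load}

# Example: one substation, two businesses owned by cloud "A" with 1e11 cycles of capacity
subs = [[{"cloud": "A", "data_bits": 2e6, "stage_cycles_per_bit": [5, 10, 20],
          "delay_s": 0.05, "f_alloc_hz": 1e9},
         {"cloud": "A", "data_bits": 1e6, "stage_cycles_per_bit": [5, 10, 20],
          "delay_s": 0.01, "f_alloc_hz": 1e9}]]
print(expected_cloud_pressure(subs, {"A": 1e11}))
```

In this sketch, step 3 of Algorithm 1 would supply the f_alloc_hz values via the KKT-based allocation of Section 5.1, and the returned pressures correspond to the quantities reported in Section 6.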
Taking a single edge substation as an example, the block diagram of its security protection process is shown in Figure 2.
The complexity of this algorithm stems mainly from two aspects: the calculation of the CPU frequency allocation at the edge substations, and the calculation of the data security process at the edge substations and the cloud master stations. Since the KKT iteration is carried out by dichotomy (bisection), the complexity of completing the computing resource allocation at one edge substation is determined by the number of bisection iterations required for convergence. Because each cloud master station must jointly consider the businesses from every edge substation and their security processes, its complexity increases monotonically with the numbers of edge substations and cloud master stations; the total complexity of Algorithm 1 is therefore determined jointly by the numbers of edge substations, cloud master stations, and security protection stages, together with the number of bisection iterations.
6. Experimental Results and Analysis
In this section, MATLAB R2020a is employed to simulate the proposed edge-side computing resource adaptive allocation method for cloud–edge collaborative security protection and the related algorithms, in order to validate the effectiveness of ASM-CESDC. The parameters of the simulation environment are listed in Table 1.
In real scenarios, the types of businesses included in each edge substation vary with the geographic environments in which they are located. To verify the effectiveness of the proposed algorithm, comparison experiments are conducted with each edge substation containing either randomly selected businesses or all businesses; the business type settings of the edge substations are listed in Table 2.
To verify the advantages of ASM-CESDC in the adaptive allocation of computing resources, this paper compares ASM-CESDC with Not Utilizing the Available Computing Resources of the Edge Substations (NACR) and Evenly Distributing the Available Computing Resources of Edge Substations (EDACR), focusing on the expected operating pressure experienced by each business cloud master. For a system model comprising five edge substations and three cloud masters, Figure 3 and Figure 4 show the simulation results of the expected operating pressure under random business and full business, respectively.
As shown in Figure 3, by utilizing the available computing resources of the edge substations, both EDACR and ASM-CESDC effectively reduce the expected operating pressure of each business cloud master compared with NACR. A comparison between EDACR and ASM-CESDC further shows that ASM-CESDC is more effective in minimizing the maximum expected operating pressure of the cloud master: the maximum expected operating pressure decreases from 77.02% under EDACR to 71.54% under ASM-CESDC.
Figure 4 presents the simulation results of the expected operating pressure for full business at each edge substation under identical conditions. Compared with Figure 3, the operating pressure of the cloud master station in Figure 4 is relatively high, because the larger number of business types at each edge substation increases the overall data volume of the system. Nevertheless, the simulation results show that ASM-CESDC remains more effective in minimizing the expected operating pressure of the most heavily loaded cloud master: the maximum expected operating pressure in Figure 4 decreases from 83.18% under EDACR to 76.54% under ASM-CESDC.
Figure 5 and Figure 6 show the corresponding simulation results for a system model with 10 edge substations and 5 business cloud masters. Regarding the computing power allocation methods, the comparison results are consistent with the conclusions drawn from Figure 3 and Figure 4; that is, the ASM-CESDC algorithm is superior in minimizing the maximum expected operating pressure of the cloud master station. However, because the larger number of edge substations increases the total amount of data for each business compared with Figure 3 and Figure 4, the expected operating pressure of each business cloud master in Figure 5 and Figure 6 surges sharply when the available computing power of the edge substations is not utilized, and even exceeds the performance limit of the cloud master, with the pressure exceeding 100%.
Accordingly, compared with Figure 5, the additional businesses introduced in Figure 6 influence the allocation of computing power resources at the edge substations. The diversification of business types increases the volume of data associated with these businesses, thereby raising the expected operating pressure on the cloud master. Therefore, the expected operating pressure of each business cloud master corresponding to EDACR and ASM-CESDC is larger than in Figure 5. In addition, the ASM-CESDC algorithm remains more effective in minimizing the maximum expected operating pressure of the cloud master, which decreases from 94.82% under EDACR to 86.54% under ASM-CESDC in Figure 5, and from 97.81% to 90.32% in Figure 6.
Figure 7 and Figure 8 present the simulation results for the expected operating pressure under random business and full business when the computing power of each edge substation is 4 GHz. As the computational resources of the edge substation increase, the computing resources that can be allocated to each business also rise, which reduces the expected operating pressure on each cloud master. Comparing Figure 7 and Figure 8 with Figure 3 and Figure 4 indicates that an increase in available computing resources reduces the expected operating pressure on the cloud master during the distributed cooperative security protection facilitated by the edge substations. Furthermore, the ASM-CESDC algorithm remains superior in minimizing the maximum expected operating pressure of the cloud master, which decreases from 73.02% under EDACR to 70.54% under ASM-CESDC in Figure 7, and from 81.18% to 76.75% in Figure 8.
Figure 9 and Figure 10 present the simulation results for the expected operating pressure under random business and full business when the computational capacity of the cloud master station is 6 GHz. A comparison between Figure 3 and Figure 9, as well as between Figure 4 and Figure 10, indicates that when distributed cooperative security protection is implemented with the assistance of the edge substations, an increase in the computational resources available to the cloud master significantly alleviates its expected operating pressure. The simulation results also show that the ASM-CESDC algorithm is more effective in minimizing the maximum expected operating pressure of the cloud master, which decreases from 71.18% under EDACR to 66.54% under ASM-CESDC in Figure 9, and from 73.25% to 66.54% in Figure 10.
As can be seen from Table 3, the change in the number of edge substations has the greatest impact on the expected pressure of the cloud master, because the overall data volume in the network grows as the number of edge substations increases. In contrast, increasing the computing power resources of the edge substations has the least impact on the expected operating pressure of the cloud master station, because each edge substation must divide its available computing power among different businesses, so an increase in its total amount has the smallest effect on the overall network.
7. Conclusions
Against the background of the new power distribution system, this article addresses a cloud–edge framework comprising multi-domain, multi-edge substations and multiple cloud masters, considering the current underutilization of edge computing resources and the uneven expected operating pressure across cloud masters. Taking the computing resources consumed by data security protection as the starting point, part of the data is processed at the edge substations, and the operating pressure is reduced by decreasing the amount of data processed at the cloud master station. In this process, each edge substation adaptively allocates computing resources based on the expected operating pressure of each cloud master station, achieving cloud–edge collaboration.
Therefore, a new adaptive scheduling method for general-purpose computing resources in power distribution, based on cloud–edge security relay collaboration, is proposed. The method performs distributed collaborative scheduling of the available computing resources of each edge substation through a multi-stage relay business security protection model. While satisfying the service delay requirements under different network environments, it completes the distributed collaborative scheduling of the available computing resources of each edge substation, improves the utilization of edge resources, and significantly reduces the maximum expected operating pressure among the business cloud masters. By enhancing the flexibility of joint scheduling across multi-domain heterogeneous resources within the distribution system, this approach is of great significance for the flexible, intelligent, and digital transformation of future power distribution systems.
However, for businesses with large data volumes and stricter latency requirements, performing preliminary processing at the edge substation may prolong the time the business stays there, so that not all security protection can be completed within the delay requirement. Therefore, future work should consider the adaptive allocation of multi-domain computing resources for high-concurrency, low-latency businesses.