Article

Adaptive Scheduling Method of Heterogeneous Resources on Edge Side of Power System Collaboration Based on Cloud–Edge Security Dynamic Collaboration

1 State Grid Jibei Electric Power Co., Ltd., Economic and Technical Research Institute, Beijing 100038, China
2 School of Electrical Engineering, North China Electric Power University, Beijing 102206, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(2), 366; https://doi.org/10.3390/pr13020366
Submission received: 28 November 2024 / Revised: 19 January 2025 / Accepted: 20 January 2025 / Published: 28 January 2025
(This article belongs to the Special Issue Modeling, Simulation and Control in Energy Systems)

Abstract

In recent years, the large-scale integration of new power distribution technologies such as distributed power generation, electric vehicles, and flexible load control has sharply increased the operating pressure on the power cloud master station. To this end, an adaptive resource allocation method for edge-side general computing resources, used for cloud–edge collaborative security protection, is proposed. Firstly, considering the computing resources available at multiple edge substations, a Cloud–edge Collaborative Relay Business Security Protection Model (C2RBSPM) is constructed. Then, with the goal of minimizing the maximum expected operating pressure among the cloud master stations, the corresponding linear programming problem is established, and finally the Karush–Kuhn–Tucker (KKT) conditions are used to solve it quickly. The simulation results show that the proposed method can reduce the expected operating pressure of the cloud master station by up to 35.19%. Therefore, reasonably mining the available computing resources on the edge side for relay security protection can effectively reduce the operating pressure of the cloud master station and improve the operating efficiency of the system. This approach is of great significance for the flexible, intelligent, and digital transformation of future power distribution systems.

1. Introduction

The National Development and Reform Commission (NDRC) and the National Energy Administration (NEA) of the People’s Republic of China pointed out in their jointly issued “Guiding Opinions on the High-Quality Development of Distribution Grids under New Situations” that, by 2030, the transformation of distribution grids to be flexible, intelligent, and digital will be basically completed [1]. However, as the core business hub, the cloud master station supports a large proportion of the security protection needs of new distribution businesses. With the integration of a large number of power electronic devices into the power grid, under complex application scenarios that are multi-domain, multi-terminal, and multi-task, the expected operating pressure on the cloud master station has increased sharply. Therefore, how to reduce the expected operating pressure on the cloud master station has become a research hotspot at the present stage [2,3,4,5].
To reduce the expected operating pressure on cloud master stations for new types of distribution business, Wang et al. [6] constructed a new type of power system security protection system from both management and technical aspects. Li et al. [7] designed a centralized information recognition algorithm based on the working principle of an electric power networked ordering system to achieve reasonable centralized information cloud scheduling. The authors in [8], drawing on the architecture of the power Internet of Things, put forward security protection countermeasures for the security risks faced by intelligent terminals. However, from the perspective of cloud–edge collaboration, the above work is carried out only at the cloud master station and does not consider the computing resources available in edge devices, so its resource scheduling still lacks flexibility.
As a data processing center, the cloud master station is an integral part of the cloud–edge architecture of the new power distribution system. To alleviate its expected operating pressure through edge computing power discovery [9], it is proposed in [10,11] that containerized applications can be extended to nodes and devices at the edge, so that some edge-side device resources can be utilized for security protection. The authors in [12] proposed the detection and protection of false data injection attacks, a useful attempt at a power information network security defense mechanism. Younes et al. used a flexible laboratory-scale test platform to test new devices, control functions, and the configuration and communication systems of smart electronics to support the monitoring, control, and protection of power grids [13]. Cyber security protection of power systems abroad started early, and a relatively complete power system security protection system has been formed [14]. However, at this stage, edge substations often support multiple business cloud masters at the same time, and these methods fail to adequately consider the impact of their distributed synergy on the balance of the expected operating pressure of each cloud master.
From the perspective of cloud–edge collaboration, the security protection of the power distribution system is currently concentrated only at the cloud master station, and the computing resources on the edge side are not fully utilized. In addition, when one edge substation supports multiple cloud master stations, the current edge computing power allocation is inflexible with respect to balancing the expected operating pressure of the cloud master stations. Therefore, this article studies a cloud–edge-terminal network architecture that reduces the operating pressure of the cloud master station by mining the available computing resources on the edge side and transferring part of the security protection work to the edge side, and comprehensively considers service delay requirements and computing resource allocation to model the problem. To solve the problem, we refer to the main-distribution-micro integrated collaborative control strategy for distributed resource potential in [15] and propose an adaptive scheduling method of heterogeneous resources on the edge side of power distribution collaboration based on cloud–edge security dynamic collaboration (ASM-CESDC). The main contributions of this article are summarized below:
(1)
A multi-stage relay business security protection model is constructed for the edge substations and the business cloud masters by combining the available computing resources of each edge substation.
(2)
The corresponding linear programming problem is constructed by considering the distributed collaboration of multiple edge substations, with the objective of minimizing the maximum expected operating pressure among the business cloud masters; the problem is solved quickly using the KKT conditions.
(3)
The simulation results show that the proposed ASM-CESDC algorithm can significantly reduce and minimize the maximum expected operating pressure among the business cloud master stations by fully exploiting the computing resources of multi-domain edge substations.
The rest of this article is organized as follows: Section 2 provides an overview of relevant important works. Section 3 introduces the cloud–edge collaborative computing framework and system model. In Section 4, an optimization problem is proposed with the goal of minimizing the maximum operating pressure of the cloud master. Section 5 presents the solution to the problem. Section 6 describes the simulation results. Finally, Section 7 concludes this article.

2. Related Work

In recent years, with the rapid construction of the new power system, massive distributed resources have been connected to the power grid, which also brings more collaborative and interactive communication needs and security threats [16]. In terms of distributed security, based on the multi-time-scale interaction between distributed resources and the power grid, the authors in [17] constructed a cloud–edge-terminal collaborative distributed resource interactive communication networking architecture and further proposed an interactive security protection strategy for distributed resources, which can ensure the information security of distributed resources participating in grid interaction. In terms of cloud–edge-terminal collaborative security protection, an architecture that integrates real-time perception, dynamic decision making, and active defense, deeply integrating the cloud, edge, and terminal to form security collaboration capabilities, was presented in [18]. It further realizes the consistency of cloud, edge, and device risk perception, policy decision making, and attack and defense, and implements efficient, low-cost collaborative protection against security risks. In terms of cross-domain security protection, Zhao et al. [19] proposed cyber–physical security characteristics that integrate physical security, functional safety, and network security; studied key technologies such as cross-domain security threat propagation modeling, collaborative situational awareness and detection, and multi-spatiotemporal linkage protection; and explored their application prospects in combination with power monitoring services, so as to support the safe operation of new power systems.
In addition, much research has been performed in [20,21,22] on single-objective, dual-objective, and multi-objective optimization in the Internet of Things, targeting metrics such as response time, reliability, service quality, and resource utilization. Despite this progress, there are still deficiencies in the flexibility of collaborative resource scheduling at the cloud edge. This limitation leaves part of the computing resources of edge devices unexplored and underutilized, limiting the overall computing power of the system and the potential for optimal resource allocation [23].
Edge computing plays a key role in alleviating the operational pressure of the cloud master, and efficient shunting and optimization of the computational load can be achieved through reasonable scheduling of edge computing resources [24]. The authors in [25] address the dynamic change in computing load in the “cloud master” by establishing a dual-objective optimization model for the optimal allocation of cloud resources, with the goals of the shortest average response latency and the smallest energy consumption of the “cloud master”. The introduction of edge computing technology can redefine the relationship between the cloud, the network, and the device; Sun et al. [26] proposed applying edge computing technology to the distribution Internet of Things to realize real-time, efficient, and lightweight data processing in situ, and to collaborate with the new generation of distribution automation cloud master stations in terms of network, data, and business, so as to achieve autonomy of the distribution station area. As mentioned in [27], a cloud–edge-device collaborative network task offloading framework named FreeOffload realizes real-time awareness of computing resources and network status, and is designed for flexible offloading of heterogeneous embedded terminal tasks. In addition, its small-scale cloud–edge-terminal collaborative prototype experiment system can also offload terminal tasks efficiently and flexibly.
Edge computing can transfer tasks from the cloud master to edge-side and terminal devices, which alleviates the expected operating pressure of the business cloud master to a certain extent. However, current edge substations usually need to support multiple cloud masters at the same time, and existing research has not yet fully considered the impact of the edge substations on the balance of the expected operating pressure across the various cloud masters, which restricts the overall effect of resource scheduling optimization [28]. In this paper, the flexibility of resource allocation is improved by exploring the available computing resources and implementing adaptive allocation of computing resources at the edge. At the same time, distributed security protection can effectively reduce and balance the expected operating pressure of each cloud master.

3. Cloud–Edge Collaborative Computing Framework and Scheduling Model

3.1. Cloud–Edge Collaborative Computing Framework

The new power system cloud–edge collaborative computing framework is divided into a three-layer structure of cloud, edge, and terminal. The terminal equipment performs data acquisition tasks through various types of sensors and uploads the collected massive data to the edge substation. The edge substation, as a transit node and real-time processing center in the data link, completes the preliminary processing of edge data using its local resources and sends only the processing results to the cloud master station, thereby sharing some of the computational tasks of the cloud master station. The cloud master station allocates the saved computing resources to data analysis and task scheduling for faster and more efficient intelligent decision making.
The massive integration of heterogeneous devices at terminals has led to a sharp increase in the volume of data within the distribution grid. The varying latency requirements of different businesses have significantly heightened the anticipated operational pressure on the cloud master station. From the perspectives of security businesses and dispatch management, a multi-stage relay-based business security protection model can be designed to alleviate the expected operational pressure on the cloud master station.

3.2. System Model

The proposed multi-stage relay-based business security protection model includes three components: edge substation security protection, upload of business data and the security process, and cloud master station security protection. This model can be represented using a directed graph, $G(B, E, C)$, where $b_i \in B$ represents a type of business, each corresponding to a business cloud master station; $e_i \in E$ represents an edge substation encompassing multiple types of businesses; and $c_i \in C$ represents a business cloud master station. As illustrated in Figure 1, the various areas, including resources, the power grid, storage, and loads, first transmit their businesses to edge substations, and each edge substation transmits the data to the cloud master station after preliminary processing. In this initial processing, the edge substation first allocates its available computing resources to each business. Each business completes, at the edge substation, the maximum number of security protection steps that satisfies its latency constraints. Subsequently, the edge substation transmits the processed business data to its corresponding cloud master station. Upon receiving the data, each cloud master station determines the remaining required security protection steps and completes all security protection tasks for the data. At the edge substation, an adaptive computing resource allocation strategy is employed to optimize the protection process, aiming to reduce and minimize the maximum expected operating pressure on the business cloud master stations.
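As an illustrative sketch (not code from the paper), the directed graph $G(B, E, C)$ can be represented with a few plain data structures; all business names, data sizes, and latency budgets below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Business:
    name: str
    cloud_master: str        # c_i: the business cloud master station it reports to
    data_size_bits: float    # d_i: data volume per reporting period (hypothetical)
    latency_budget_s: float  # t_{i,qos}: end-to-end latency constraint

@dataclass
class EdgeSubstation:
    name: str
    total_compute_hz: float                      # r_max^e(t): resources for security
    businesses: List[Business] = field(default_factory=list)

def business_edges(station: EdgeSubstation) -> List[Tuple[str, str, str]]:
    """Edges of G(B, E, C): each business flows from its edge substation
    to the cloud master station it belongs to."""
    return [(station.name, b.name, b.cloud_master) for b in station.businesses]

# Hypothetical example: one edge substation hosting two business types.
e1 = EdgeSubstation("edge_1", 2e9, [
    Business("metering", "cm_A", 2e6, 0.5),
    Business("ev_charging", "cm_B", 4e6, 0.2),
])
print(business_edges(e1))
```

This representation makes the later per-business allocation loop straightforward: each edge substation iterates over its own business list while each cloud master aggregates over the businesses that point to it.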

4. Computing Resource Allocation Strategy

4.1. Allocation of Computational Resources at Edge Stations

The computational resources available at edge stations for data security are constantly changing. Therefore, it is necessary to allocate the limited computational resources reasonably among the various businesses, as in [29]. Taking a single edge substation as an example, the computing resources allocated to each business are constrained by Equation (1).
$$\sum_{i=1}^{\lambda} r_i^e(t) \le r_{\max}^e(t) \quad (1)$$
where $\lambda$ is the total number of business types included in each edge station; $r_i^e(t)$ is the computational resources allocated by the edge station to business $i$ at time $t$; and $r_{\max}^e(t)$ is the total computational resources available at the edge station for data security at time $t$.
Considering the latency constraints of each business and the computational resource constraints of the edge stations, the number of security stages $\omega$ that each business needs to complete at the edge station is determined. Taking business $i$ as an example, the latency constraint is given in Equation (2).
$$t_{i,\mathrm{edge}} + t_{i,\mathrm{trans}} + t_{i,\mathrm{cloud}} \le t_{i,\mathrm{qos}} \quad (2)$$
where $t_{i,\mathrm{edge}}$ is the total latency of business $i$ at the edge station, with $t_{i,\mathrm{edge}} = ts_{i,\mathrm{edge}} + t_{i,p}$, where $t_{i,p}$ is the encryption and decryption latency of business $i$ and $ts_{i,\mathrm{edge}}$ is the latency incurred by business $i$ for completing data security at the edge station; $t_{i,\mathrm{trans}}$ is the transmission latency between the edge station and the corresponding cloud master station for business $i$; $t_{i,\mathrm{cloud}}$ is the latency at the cloud master station for business $i$; and $t_{i,\mathrm{qos}}$ is the latency constraint of business $i$.
The calculation method for the security latency $ts_{i,\mathrm{edge}}$ is shown in Equation (3).
$$ts_{i,\mathrm{edge}} = \sum_{x=1}^{\omega} ts_{i,\mathrm{edge}}^{x}, \quad \omega \le \varpi \quad (3)$$
where $\omega$ is the number of security stages that business $i$ needs to complete at the edge station; $\varpi$ is the total number of security stages; and $ts_{i,\mathrm{edge}}^{x}$ is the time consumed by business $i$ to complete security stage $x$, which is calculated as shown in Equation (4).
$$ts_{i,\mathrm{edge}}^{x} = \frac{d_i\,\gamma_{i,k}}{r_i^e(t)} \quad (4)$$
where $d_i$ is the data size of business $i$; $\gamma_{i,k}$ is the computational resources required to process a unit of data in security stage $x$; and $r_i^e(t)$ is the computational resources allocated by the edge station to business $i$ at time $t$.
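To make the stage-count calculation concrete, the following sketch (an illustration with hypothetical numbers, not code from the paper) computes the largest $\omega$ satisfying Equations (2)–(4) by accumulating per-stage edge latencies until the delay budget would be violated:

```python
def stages_at_edge(d_i, gammas, r_edge, t_p, t_trans, t_cloud, t_qos):
    """Return the largest omega such that, per Equations (2)-(4),
    sum_{x=1}^{omega} d_i * gammas[x-1] / r_edge + t_p + t_trans + t_cloud <= t_qos.
    gammas has varpi entries (compute cost per bit for each security stage)."""
    omega, ts_edge = 0, 0.0
    for gamma_x in gammas:
        ts_edge += d_i * gamma_x / r_edge      # ts_{i,edge}^x for stage x
        if ts_edge + t_p + t_trans + t_cloud > t_qos:
            break                              # delay budget exceeded: stop at x - 1
        omega += 1
    return omega

# Hypothetical numbers: 1 Mbit of data, three security stages, 2 GHz of edge compute.
print(stages_at_edge(1e6, [100, 150, 200], 2e9, 0.01, 0.02, 0.1, 0.3))  # -> 2
```

With these assumed values the first two stages fit in the latency budget (0.05 s + 0.075 s of edge security time plus 0.13 s of other delays), while the third would exceed it, so the business completes $\omega = 2$ stages at the edge.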

4.2. Calculation of Computing Resources for Cloud Master Station

According to Equation (3), the number of security stages $\omega$ completed by a business at the edge substation can be determined, and the remaining security protection stages are completed at the cloud master station. When the business data are transmitted from the edge substation to the business cloud master, the cloud master first determines the remaining security stages that need to be carried out. If $\omega = \varpi$, the business has already completed all security work, and the computing resources required from its business cloud master are recorded as 0. Otherwise, the computing resources required from its cloud master are calculated; taking business $i$ as an example, the method is as follows:
First, the security time of the business cloud master station, $ts_{i,\mathrm{cloud}}$, is calculated based on the business delay constraint, as shown in Equation (5).
$$ts_{i,\mathrm{cloud}} = t_{i,\mathrm{cloud}} - t_{i,p} \quad (5)$$
where $t_{i,\mathrm{cloud}}$ is the total time left for the business cloud master station and $t_{i,p}$ is the encryption and decryption latency of business $i$. Based on $ts_{i,\mathrm{cloud}}$, the computational resources required by the business cloud master station, $r_i^{\mathrm{cloud}}$, can be calculated as shown in Equation (6).
$$r_i^{\mathrm{cloud}} = \sum_{j=1}^{m} r_{i,j}^{\mathrm{cloud}} \quad (6)$$
where $m$ is the total number of edge stations and $r_{i,j}^{\mathrm{cloud}}$ is the computational resources required to complete security for the data sent from edge station $j$ to the cloud master station for business $i$, which is calculated as shown in Equation (7).
$$r_{i,j}^{\mathrm{cloud}} = \frac{\sum_{y=\omega+1}^{\varpi} \gamma_{i,k}\, d_i}{ts_{i,\mathrm{cloud}}} \quad (7)$$
Based on Equation (7), the required computational resources of each business cloud master station can be calculated. The expected operational pressure on a business cloud master station, $\varphi$, is defined as the ratio of the operating frequency required to complete all data security tasks within the latency constraint to the maximum operating frequency. The packet size, latency, and computing resources required per bit vary across businesses. Taking business $i$ as an example, the expected operational pressure $\varphi_i$ of the corresponding cloud master station is calculated as shown in Equation (8).
$$\varphi_i = \frac{r_i^{\mathrm{cloud}} + r_{i,\mathrm{other}}^{\mathrm{cloud}}}{r_{i,\mathrm{cloud}}^{\max}} \quad (8)$$
where $r_{i,\mathrm{other}}^{\mathrm{cloud}}$ represents the computational resources occupied at time $t$ by tasks other than security protection at the cloud master station to which business $i$ is assigned, and $r_{i,\mathrm{cloud}}^{\max}$ represents that station’s maximum operating frequency. Based on the result $\varphi_i$, it is possible to determine whether mining the computing resources on the edge side reduces the operating pressure of the cloud master station.
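Equations (5)–(8) can be combined into a short sketch of the expected-pressure calculation; the function and the numbers below are illustrative assumptions, not values from the paper:

```python
def expected_pressure(omega, varpi, gammas, d_i, t_cloud, t_p,
                      r_other, r_max_cloud, n_edges=1):
    """phi_i per Equations (5)-(8): the stages omega+1..varpi left to the cloud
    master station determine its required computing resources."""
    if omega >= varpi:
        r_cloud = 0.0                      # all security work finished at the edge
    else:
        ts_cloud = t_cloud - t_p           # Eq. (5): time left for security work
        # Eq. (7): resources needed per edge substation for the remaining stages
        r_per_edge = sum(gammas[y] * d_i for y in range(omega, varpi)) / ts_cloud
        r_cloud = n_edges * r_per_edge     # Eq. (6): summed over edge substations
    return (r_cloud + r_other) / r_max_cloud   # Eq. (8)

# Hypothetical: one remaining stage of 200 cycles/bit on 1 Mbit within 0.1 s,
# on a cloud master with 4 GHz capacity, 1 GHz of which is occupied by other tasks.
print(expected_pressure(2, 3, [100, 150, 200], 1e6, 0.11, 0.01, 1e9, 4e9))  # -> 0.75
```

With these assumed values, the one remaining security stage requires 2 GHz at the cloud master, which together with the 1 GHz of other tasks yields an expected pressure of 75% of the 4 GHz maximum.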

4.3. Problem Formulation

Based on the multi-stage relay business security protection model, the aim is to reduce and minimize the maximum expected operational pressure on the cloud master stations through the distributed collaboration of multi-domain edge stations. The optimization problem is constructed for the model containing n business cloud master stations and m edge stations. The optimization objective is to minimize the maximum expected operational pressure on the business cloud master station, which is expressed as shown in Equation (9).
$$\begin{aligned} \min_{r}\ \max_{i \in B}\ & \{\varphi_i\} \\ \mathrm{s.t.}\quad C1:\ & \sum_{i=1}^{\lambda} r_i^e(t) \le r_{\max}^e(t) \\ C2:\ & 0 \le r_i^e(t) \le r_{\max}^e(t) \\ C3:\ & t_{i,\mathrm{edge}} + t_{i,\mathrm{trans}} + t_{i,\mathrm{cloud}} \le t_{i,\mathrm{qos}} \end{aligned} \quad (9)$$
where C 1 is the computational resource constraint, meaning that the total computational resources allocated to all businesses by the edge station cannot exceed its total computational resources; C 2 ensures that the computational resources allocated to each business are non-negative and cannot exceed the total computational resources; and C 3 is the latency constraint, where the sum of the transmission latency and the security protection latency of each business is required to satisfy the latency requirement.

5. Problem Solving

5.1. Solution Process

To facilitate problem solving, it is assumed that each edge substation and cloud master possesses sufficient computational resources to complete all security protection tasks within the specified business delay requirements. Therefore, in the solving process, the delay constraint C3 is used as the criterion for calculating the number of security protection stages completed by each business at the edge substation. In the context of computing resource allocation, the aforementioned optimization problem can be reformulated as shown in Equation (10).
$$\begin{aligned} \min_{r}\ & \sum_{i=1}^{n} \varphi_i \\ \mathrm{s.t.}\quad C1:\ & \sum_{i=1}^{\lambda} r_i^e(t) \le r_{\max}^e(t) \\ C2:\ & 0 \le r_i^e(t) \le r_{\max}^e(t) \end{aligned} \quad (10)$$
For inequality-constrained optimization problems, the Karush–Kuhn–Tucker (KKT) conditions are widely used to handle the inequality and equality constraints. The aforementioned linear programming problem is strictly convex with respect to the variable $r_i^e(t)$. Consequently, a unique extreme point exists that satisfies the KKT conditions [30]. The corresponding Lagrangian function is presented in Equation (11).
$$g(\hat{f}) = \sum_{i=1}^{n} \varphi_i\!\left(r_i^e(t)\right) + \sum_{i=1}^{n} \alpha_i\!\left[r_{\max}^e(t) - r_i^e(t)\right] + \sum_{i=1}^{n} \beta_i\, r_i^e(t) + \chi_k\!\left[r_{\max}^e(t) - \sum_{i=1}^{n} r_i^e(t)\right] \quad (11)$$
Among them, $\alpha_i$, $\beta_i$, and $\chi_k$ are the Lagrange multipliers corresponding to the constraints $r_{\max}^e(t) - r_i^e(t) \ge 0$, $r_i^e(t) \ge 0$, and $r_{\max}^e(t) - \sum_{i=1}^{n} r_i^e(t) \ge 0$, respectively. As an example, the KKT conditions of the cloud master station to which business $i$ belongs are as shown in Equation (12).
$$\begin{cases} \varphi_i'\!\left(r_i^e(t)\right) - \dot{\chi}_k - \dot{\alpha}_i + \dot{\beta}_i = 0 \\ \dot{\alpha}_i\!\left[r_{\max}^e(t) - \dot{r}_i^e(t)\right] = 0, \quad \dot{\beta}_i\, \dot{r}_i^e(t) = 0 \\ \dot{\chi}_k\!\left[r_{\max}^e(t) - \sum_{i=1}^{n} \dot{r}_i^e(t)\right] = 0 \\ \dot{\chi}_k \ge 0, \quad \dot{\alpha}_i \ge 0, \quad \dot{\beta}_i \ge 0 \end{cases} \quad (12)$$
$\varphi_i'(r_i^e(t))$ is the first-order derivative of $\varphi_i(r_i^e(t))$; $\dot{r}_i^e(t)$ represents the optimal computing resource allocation for business $i$; and $\dot{\alpha}_i$, $\dot{\beta}_i$, and $\dot{\chi}_k$ are the optimal solutions corresponding to $\alpha_i$, $\beta_i$, and $\chi_k$. A further analysis of Equation (12) leads to the following conclusions, and the properties of the optimal solution are summarized as Properties (1)–(3):
(1)
When $\dot{\alpha}_i > 0$: $\dot{r}_i^e(t) = r_{\max}^e(t)$, $\dot{\beta}_i = 0$, and $\varphi_i'(r_i^e(t)) - \dot{\chi}_k = \dot{\alpha}_i$;
(2)
When $\dot{\alpha}_i = 0$ and $\dot{\beta}_i = 0$: $0 \le \dot{r}_i^e(t) \le r_{\max}^e(t)$, and $\varphi_i'(r_i^e(t)) = \dot{\chi}_k$;
(3)
When $\dot{\beta}_i > 0$: $\dot{r}_i^e(t) = 0$, $\dot{\alpha}_i = 0$, and $\varphi_i'(r_i^e(t)) - \dot{\chi}_k = -\dot{\beta}_i$.
Based on the aforesaid properties, it can be discerned that there exists a certain correlation between $\dot{r}_i^e(t)$ and $\varphi_i'(r_i^e(t)) - \dot{\chi}_k$, and that $\dot{r}_i^e(t)$ decreases monotonically with the increase in $\dot{\chi}_k$. The optimal solution $\dot{r}_i^e(t)$ can be computed based on the relationships among $\dot{\chi}_k$, $\varphi_i'(0)$, and $\varphi_i'(r_{\max}^e(t))$.
Specifically, when $\dot{\chi}_k < \varphi_i'(r_{\max}^e(t))$, Property (1) is satisfied, in which case $\dot{r}_i^e(t) = r_{\max}^e(t)$; when $\dot{\chi}_k > \varphi_i'(0)$, Property (3) is satisfied, in which case $\dot{r}_i^e(t) = 0$; otherwise, Property (2) is satisfied, with $\varphi_i'(r_i^e(t)) = \dot{\chi}_k$ and $\dot{r}_i^e(t) = (\varphi_i')^{-1}(\dot{\chi}_k)$. By iterating $\dot{\chi}_k$ until $\sum_{i=1}^{n} \dot{r}_i^e(t)$ approaches $r_{\max}^e(t)$, the optimal computing resource allocation scheme can be obtained.
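The iteration on $\dot{\chi}_k$ can be sketched as a bisection in water-filling style. For illustration only, assume a pressure model $\varphi_i(r) = a_i/(r + b_i)$, so the marginal pressure reduction is $g_i(r) = a_i/(r + b_i)^2$ with $g_i^{-1}(\chi) = \sqrt{a_i/\chi} - b_i$; the coefficients below are hypothetical, and the clipping implements Properties (1)–(3):

```python
import math

def allocate(params, r_total, iters=60):
    """Bisection on the multiplier chi. params is a list of (a_i, b_i) pairs for
    the illustrative pressure model phi_i(r) = a_i / (r + b_i). Each allocation
    is clipped to [0, r_total], matching Properties (1)-(3)."""
    def r_of_chi(chi):
        return [min(max(math.sqrt(a / chi) - b, 0.0), r_total) for a, b in params]
    lo = 1e-12                               # tiny chi -> every r_i clipped high
    hi = max(a / b ** 2 for a, b in params)  # large chi -> every r_i clipped to 0
    for _ in range(iters):
        chi = 0.5 * (lo + hi)
        if sum(r_of_chi(chi)) > r_total:     # over-allocated: raise the multiplier
            lo = chi
        else:
            hi = chi
    return r_of_chi(0.5 * (lo + hi))

# Hypothetical: two businesses sharing 3 units of edge compute.
print(allocate([(4.0, 1.0), (1.0, 1.0)], 3.0))
```

Because the total allocation is monotone in $\chi$, the bisection converges geometrically, which matches the $\log_2$ factor in the complexity analysis of Section 5.2.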

5.2. Algorithm Design and Complexity Analysis

Based on the decomposition of the problem and the local convex approximation in Section 5.1, global iterative optimization is carried out to complete the adaptive scheduling of cloud–edge computing resources; the specific iteration process is given as pseudocode in Algorithm 1.
Algorithm 1: Adaptive Scheduling Method of Heterogeneous Resources on the Edge Side of Power Distribution Collaboration Based on Cloud–Edge Security Dynamic Collaboration
Inputs: latency requirements of each business, total computing power resources of the edge substation, total computing power resources of the cloud master, and businesses included in each edge substation.
Outputs: Expected operating pressure of each cloud master.
1: Calculate the available computing resources at each edge substation.
2: for i = 1 , 2 , , m do
3: Edge substations adaptively allocate computing resources in accordance with the method described in Section 4.1.
4: for j = 1 , 2 , , n do
5: for k = 1 , 2 , , κ do
  if t_{i,edge} + t_{i,trans} + t_{i,cloud} ≤ t_{i,qos} then
    // based on the delay constraint, the security protection stages that each
    // business can complete at the edge substation are determined
  else
    set the number of completed security stages to x − 1
  end
 end
6: Each edge substation transmits the business to the corresponding cloud master station and accomplishes the remaining security protection work.
7: Calculate the expected operating pressure of each cloud master according to Equation (8).
8: return Expected operational pressure φ i on each cloud master.
Taking a single-edge substation as an example, the block diagram of its security process is shown below (Figure 2):
The complexity of this algorithm mainly stems from two aspects: firstly, the calculation of the CPU frequency allocation at the edge substations, and secondly, the calculation of the data security process at the edge substations and the cloud master stations. Let the number of edge substations be $X$, the number of cloud master stations be $Y$, and the number of security stages be $\varpi$. As the iterative process of the KKT solution is accomplished through bisection, the complexity of completing the allocation of computing resources at an edge substation can be expressed as $O(\log_2 Y)$. Since each cloud master station needs to comprehensively consider the businesses from each edge substation and their security processes, the complexity increases monotonically with the number of edge substations and cloud master stations, and the total complexity of Algorithm 1 is $O(X \varpi \log_2 Y)$.

6. Experimental Results and Analysis

In this section, MATLAB R2020a is employed to simulate the proposed edge-side computing resource adaptive allocation method for cloud–edge collaborative security protection and its related algorithms, to validate the effectiveness of ASM-CESDC. The parameters of the system simulation environment are set as shown in Table 1.
In real scenarios, the types of businesses included in each edge substation vary with the geographic environment in which it is located. To verify the effectiveness of the proposed algorithm, edge substations are configured with either random businesses or all businesses in the simulation for comparison experiments; the details of the edge substation business type settings are shown in Table 2.
In order to verify the advantages of ASM-CESDC in the adaptive allocation of computing resources, this paper compares ASM-CESDC with Not Utilizing the Available Computing Resources of the Edge Substations (NACR) and Evenly Distributing the Available Computing Resources of Edge Substations (EDACR). The focus is on analyzing the expected operational pressure experienced by each business cloud master. In a system model comprising five edge substations and three cloud masters, Figure 3 and Figure 4 illustrate the simulation results of random-business and full-business expected operating pressures, respectively.
As shown in Figure 3, by utilizing the available computing resources of the edge substations, both EDACR and ASM-CESDC can effectively reduce the expected operating pressure of each business cloud master station compared with NACR. However, a comparison of the results from EDACR and ASM-CESDC reveals that ASM-CESDC is more effective in minimizing the maximum expected operating pressure of the cloud master. Specifically, the maximum expected operating pressure decreases from 77.02% under EDACR to 71.54% under ASM-CESDC.
Figure 4 presents the simulation results of the expected operating pressure for full business at each edge substation under identical conditions. Compared with Figure 3, the operating pressure of the cloud master station shown in Figure 4 is relatively high because the increase in the number of business types contained in each edge substation raises the overall data volume of the system. However, the simulation results show that ASM-CESDC is more effective in minimizing the expected operating pressure of the largest cloud master: the maximum expected operating pressure in Figure 4 decreases from 83.18% under EDACR to 76.54% under ASM-CESDC.
Figure 5 and Figure 6 show the simulation results for the system model with 10 edge substations and 5 business cloud masters. Regarding the method of computing resource allocation, the comparison results are consistent with the conclusions drawn from Figure 3 and Figure 4; that is, the ASM-CESDC algorithm is superior in minimizing the maximum expected operating pressure of the cloud master station. However, since the increase in the number of edge substations raises the total amount of data for each business compared with Figure 3 and Figure 4, the expected operating pressure of each business cloud master station shown in Figure 5 and Figure 6 surges sharply when the available computing resources of the edge substations are not utilized, even exceeding the performance upper limit of the cloud master station, with the pressure surpassing 100%.
Accordingly, in comparison to Figure 5, the additional businesses depicted in Figure 6 influence the allocation of computing resources at the edge substation. The diversification of business types increases the volume of data associated with these businesses, thereby heightening the expected operating pressure on the cloud master. Therefore, the expected operating pressure of each business cloud master corresponding to EDACR and ASM-CESDC is larger than in Figure 5. In addition, the ASM-CESDC algorithm is more effective in minimizing the maximum expected cloud master operating pressure, which decreases from 94.82% under EDACR to 86.54% under ASM-CESDC in Figure 5, and from 97.81% under EDACR to 90.32% under ASM-CESDC in Figure 6.
Figure 7 and Figure 8 present the simulation results for the expected operating pressure of random business and full business when the computing power of the edge substations is raised to 4 GHz. As the computational resources of the edge substations increase, the resources that can be allocated to each business rise correspondingly, reducing the expected operating pressure on each cloud master. Comparing Figure 7 and Figure 8 with Figure 3 and Figure 4 shows that additional edge computing resources lower the expected operating pressure on the cloud masters during distributed cooperative security protection assisted by the edge substations. Furthermore, the ASM-CESDC algorithm remains superior in minimizing the maximum expected operating pressure of the cloud master, which decreases from 73.02% under EDACR to 70.54% in Figure 7 and from 81.18% to 76.75% in Figure 8.
Figure 9 and Figure 10 present the simulation results for the expected operating pressure of random business and full business when the computing power of the cloud master stations is raised to 6 GHz. Comparing Figure 3 with Figure 9, and Figure 4 with Figure 10, shows that when distributed cooperative security protection is implemented with the assistance of the edge substations, an increase in the computational resources of the cloud masters significantly alleviates their expected operating pressure. In addition, the simulation results indicate that the ASM-CESDC algorithm is again more effective in minimizing the maximum expected operating pressure of the cloud master, which decreases from 71.18% under EDACR to 66.54% in Figure 9 and from 73.25% to 66.54% in Figure 10.
As can be seen from Table 3, changing the number of edge substations has the greatest impact on the expected pressure of the cloud master, because a larger number of edge substations increases the overall data volume in the network. In contrast, increasing the computing power of the edge substations has the least impact on the expected operating pressure of the cloud master station: each edge substation must split its available computing power among the different services, so an increase in its total amount has only a diluted effect on the overall network.
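The qualitative trends in Table 3 can be illustrated with a small sketch. The model below is an illustrative assumption, not the paper's exact formulation: pressure is taken as (remaining data bits × CPU cycles per bit) / cloud CPU frequency, the edge substations pre-process a bounded share of each business's data, and the function name `cloud_pressure` and parameter `edge_share` are hypothetical. The numeric values echo Table 1.

```python
def cloud_pressure(n_substations, edge_freq_hz, cloud_freq_hz,
                   data_bits=300e3,
                   cycles_per_bit=(420, 1000, 1200, 1700, 2200),
                   edge_share=0.3):
    """Relative load of the most stressed business cloud master (1.0 = 100%)."""
    # Each edge substation splits its capacity across the business types.
    per_business_edge = edge_freq_hz / len(cycles_per_bit)
    peak = 0.0
    for c in cycles_per_bit:
        total_bits = n_substations * data_bits
        # Bits the edge can relay-protect, bounded by its per-business capacity.
        offloaded = min(edge_share * total_bits, per_business_edge / c)
        peak = max(peak, (total_bits - offloaded) * c / cloud_freq_hz)
    return peak

p_base = cloud_pressure(5, 3e9, 5e9)            # baseline of Table 1
assert cloud_pressure(10, 3e9, 5e9) > p_base    # more substations: more data
assert cloud_pressure(5, 4e9, 5e9) < p_base     # stronger edge: diluted gain
assert cloud_pressure(5, 3e9, 6e9) < p_base     # stronger cloud: direct gain
```

Under these assumptions, doubling the substation count raises the peak far more than raising edge frequency lowers it, matching the ordering in Table 3.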

7. Conclusions

Against the background of the new power distribution system, this article addresses a cloud–edge framework comprising multiple domains, multiple edge substations, and multiple cloud masters, in view of the current underutilization of edge computing resources and the uneven expected operating pressure across cloud masters. Taking the computing resources consumed by data security protection as the starting point, part of the data is selected for processing at the edge substations, so that the operating pressure of the cloud master station is reduced by shrinking the volume of data it must process. In this process, each edge substation adaptively allocates its computing resources according to the expected operating pressure of each cloud master station, thereby achieving cloud–edge collaboration.
To this end, a new adaptive scheduling method for general-purpose computing resources in power distribution, based on cloud–edge security relay collaboration, is proposed. Through a multi-stage relay business security protection model, the method performs distributed collaborative scheduling of the available computing resources of each edge substation. Under the constraint of the service delay requirements and across different network environments, this scheduling improves the utilization of edge resources and significantly reduces the maximum expected operating pressure on the business cloud master stations. By enhancing the flexibility of joint scheduling across multi-domain heterogeneous resources within the distribution system, the approach is of great significance for the flexible, intelligent, and digital transformation of the future power distribution system.
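The scheduling objective described above — minimizing the maximum expected operating pressure — is a min-max linear program, which the paper solves via the KKT conditions. Under the simplifying assumption that edge computing capacity is fully fungible across cloud masters (an illustration, not the paper's exact model, and with hypothetical names `level_max_pressure` and `relief_per_cycle`), the KKT optimum reduces to leveling the most loaded masters to a common pressure, computable by bisection:

```python
def level_max_pressure(base_pressure, relief_per_cycle, edge_budget):
    """Minimize the maximum expected pressure by spending `edge_budget`
    (edge CPU cycles) to pull the highest-loaded masters down to a common
    level t. Feasibility of a level t is monotone in t, so bisect on t.
    This mirrors the KKT condition of the min-max LP: any master still
    above the optimum would profit from more relief, so the optimum
    equalizes all assisted masters."""
    lo, hi = 0.0, max(base_pressure)
    for _ in range(60):
        t = (lo + hi) / 2.0
        # Edge cycles needed to bring every master at pressure p down to t.
        need = sum(max(0.0, p - t) / r
                   for p, r in zip(base_pressure, relief_per_cycle))
        if need <= edge_budget:
            hi = t          # level t is reachable within the budget
        else:
            lo = t
    return hi

# Three cloud masters at 90%, 70%, 50% load; one unit of edge cycles removes
# one unit of pressure; a 0.3 budget levels the top two masters at 65%.
t = level_max_pressure([0.9, 0.7, 0.5], [1.0, 1.0, 1.0], 0.3)
assert abs(t - 0.65) < 1e-6
```

The bisection converges geometrically, so 60 iterations are far more than enough for double precision; a general LP solver would be needed once per-substation capacity or per-business delay constraints couple the variables.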
However, for businesses with large data volumes and strict latency requirements, preliminary processing at the edge substations may prolong the time a service remains at the edge, so that not all security protection can be completed within the delay requirement. Future work should therefore consider the adaptive allocation of multi-domain computing resources for high-concurrency, low-latency businesses.
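The limitation above amounts to a delay-feasibility check: edge processing time plus transmission time must fit within the business's delay bound. The sketch below is an illustrative assumption with hypothetical parameter names (`edge_relay_feasible`, `bandwidth_bps`), not the paper's exact constraint:

```python
def edge_relay_feasible(data_bits, cycles_per_bit, edge_freq_hz,
                        bandwidth_bps, delay_bound_s):
    """True if relaying a business through an edge substation still meets
    its end-to-end delay requirement."""
    t_edge = data_bits * cycles_per_bit / edge_freq_hz  # edge processing time
    t_tx = data_bits / bandwidth_bps                    # transmission time
    return t_edge + t_tx <= delay_bound_s

# A small business fits its 1.8 s bound; quadrupling the data volume
# pushes transmission time alone past the bound, so it must stay in the cloud.
assert edge_relay_feasible(500, 1200, 3e9, 1000, 1.8)
assert not edge_relay_feasible(2000, 1200, 3e9, 1000, 1.8)
```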

Author Contributions

Conceptualization, L.L., S.L., H.S. and R.W.; methodology, L.L., S.L., H.S. and R.W.; writing—original draft preparation, L.L., S.L., H.S. and R.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by State Grid Jibei Electric Power Co., Ltd. project funding, grant number SGJBJY00GPJS2400029.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Li Li, Shanshan Lu and Haibo Sun were employed by State Grid Jibei Electric Power Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Schematic diagram of cloud–edge security relay coordination strategy.
Figure 2. Block diagram of single-edge substation.
Figure 3. Simulation result of random-business expected operating pressure.
Figure 4. Simulation result of full-business expected operating pressure.
Figure 5. Simulation result of random-business expected operation pressure under 10 edge substations.
Figure 6. Simulation result of full-business expected operation pressure under 10 edge substations.
Figure 7. Simulation result of random-business expected operation pressure under computing power of 4 GHz of edge substations.
Figure 8. Simulation result of full-business expected operation pressure under computing power of 4 GHz of edge substations.
Figure 9. Simulation result of random-business expected operation pressure under computing power of 6 GHz of cloud master stations.
Figure 10. Simulation result of full-business expected operation pressure under computing power of 6 GHz of cloud master stations.
Table 1. Simulation Parameters.

| Parameter | Value |
| --- | --- |
| Maximum CPU operating frequency of the edge substation | 3 GHz |
| Total number of protection links | 5 |
| Total amount of data transmitted by each edge substation | 300 k |
| Number of edge substations | 5 |
| Number of business cloud masters | 5 |
| Maximum operating frequency of business cloud master CPUs | 5 GHz |
| Delay requirements for each operation | [0.8 s, 1.7 s, 1.8 s, 1.8 s, 1.5 s] |
| Business transmission bandwidth | 1000 bps |
| Number of frequency cycles per unit bit required for each business | [420, 1000, 1200, 1700, 2200] |
Table 2. Edge Substation Business Type Settings.

| Edge Substation Number | Random Business | All Businesses |
| --- | --- | --- |
| 1 | Business 1, 2 | Business 1–Business 5 |
| 2 | Business 1, 2 | Business 1–Business 5 |
| 3 | Business 1, 2, 3 | Business 1–Business 5 |
| 4 | Business 1, 2, 3 | Business 1–Business 5 |
| 5 | Business 1, 4, 5 | Business 1–Business 5 |
Table 3. The maximum expected operating pressure drop of the cloud master.

| Simulation Environment | Random Business | All Businesses |
| --- | --- | --- |
| 5 edge substations | 5.48% | 6.64% |
| 10 edge substations | 8.28% | 7.49% |
| The computing power resource of the edge substation is 4 GHz | 2.48% | 4.43% |
| The computing power resource of the cloud master station is 6 GHz | 4.64% | 6.71% |