2. Conventional Network and the Need for Smart Grids
Research on the modeling of residential demand typically focuses on average monthly or yearly demand; little emphasis is placed on the consumption of an individual home or appliance in this line of research [54]. Residential consumption represents an important share of total electricity demand, owing to the rapid growth experienced throughout the world. In this context, predicting the energy demand of the housing sector is important, as suggested in [15,23]. Consequently, a new concept is introduced, "the demand of the firm", which refers to the ability to control individual loads. Moreover, the demand of the firm refers to load management, meaning real-time, smart control of the load. In the conventional electrical system there are two types of control: cost control and direct control [23]. Cost control seeks to change the shape of the load curve [55] without addressing the overall growth in energy consumption; this mechanism entails higher energy prices during peak periods and the application of new tariffs. Direct control refers to the classic load-control methods, which increase energy production when demand increases [4].
Electricity is generated and distributed over a hierarchical network with three subsystems: generation, transmission and distribution. The aggregation of data on each of these subsystems is crucial in an SG for the control, protection and automatic operation of interrelated components and for the integration of DRES in the IEN [56]. DRES are capable of operating independently or in conjunction with the main electrical network under the concept of microgrids [57,58].
The rapid advances in automation and control bring potential benefits such as reduced resource consumption, improved infrastructure capacity and the coordination of demand peaks [8,59]. This is mainly due to the introduction of Information and Communication Technologies (ICTs) [60], which have allowed the conventional electrical network to be transformed into one that ensures productive interaction among power suppliers, consumers and other stakeholders, as suggested in [12,15,61,62,63]. Therefore, changes in the generation, transmission and distribution systems are inevitable [16].
A smart electrical network should be able to motivate consumers to participate actively in the operation of the network and, as suggested in [23,47,64,65], must be able to withstand attacks while providing a higher quality of power. For an IEN to exist, a large-scale deployment of sensors and measuring instruments is necessary; these must be able to communicate with each other in order to aggregate data on the state of the network [66]. Data aggregation services can be structured as a tree, and their goal is to merge data from various sources [22,67]. Finally, the European Commission defines a smart electrical network as: "An electrical network that can integrate efficiently the behavior and actions of all the users in a framework based on rules and priorities for achieving interoperability of devices in a system of smart electrical networks" [63].
AMI in Microgrids
Nowadays, new devices in the electrical sector are capable of processing information, accessing the Internet and adjusting energy consumption based on cost or availability according to consumer preferences. All of this is part of what is called the Internet of Things (IoT). The "things" in an SG include sensors [3,68,69], smart devices and the SMs [1,27,68,70]. These devices need to be interconnected in a hierarchical network with adequate levels of quality and reliability. The introduction of the SG contributes digital intelligence to the power system network [56]. The benefits associated with these new concepts are: adequate management of energy resources, reduction of interruption rates, reduction of pollution in the ecosystem, fewer interruptions due to power-quality problems and lower operation and maintenance costs [1]. Consequently, one of the main benefits of the SG is the intelligent and efficient design of hybrid communication networks, which take into account network congestion and real-time transmission, as suggested in [2,47,71], as well as the concern of reducing greenhouse gas emissions [69].
The fast growth of data requires researchers to pay attention to how these data are handled [72]. Three concepts therefore have to be analyzed: volume, velocity and variety. Volume refers to the large amount of data to be processed; velocity refers to the latency of data transmission; and variety refers to the different types of data that must be processed [59]. Consumers of energy resources are equipped with SMs that collect data in real time. The AMI receives all the data and sends them to the Meter Data Management System (MDMS), which controls their storage. The MDMS is in charge of analyzing the data and presenting the information in a useful way [73,74]; in addition, the efficient management of wireless resources is essential to extend the life of the network [75]. AMI is not a single technology, but rather an infrastructure that integrates a series of technologies to achieve its goals. AMI includes SMs, communication networks, the MDMS and the tools to integrate the collected data into software application platforms and interfaces [16,76]. Among the communication technologies used in this paper for extracting and transporting the information are WiFi, cellular and optical fiber.
Optical fiber has dominated long-distance communications, such as metropolitan networks (see Figure 1). It provides high bandwidth, low transmission losses and greater tolerance to interference than other cable access technologies [77]. Its main disadvantage is the high cost of deep fiber penetration. Wireless access networks are therefore a promising alternative, since they provide low-cost flexibility, increased coverage and robustness, and are easy to deploy; their disadvantage is a severely limited bandwidth capacity [78]. Considering the advantages of each technology, a hybrid network that combines wireless technology and optical fiber is proposed.
The integration of renewable energy resources with small storage sources leads to the concept of microgrids [74,79]. The uncontrolled integration of microgrids affects power quality; among the most important events are voltage sags induced by faults [80]. Therefore, with the insertion of DRES, voltage quality cannot be guaranteed when there is no communications system providing timely information on the state of the conventional network. To ensure voltage quality in the network when integrating microgrids, the voltage levels of the conventional network and of the DRES must be resynchronized [81]. This resynchronization can be performed by obtaining real-time information on the state of the network. Hence, the key is the integration of an adequate communications infrastructure that allows data aggregation and an AMI to monitor and control the conventional electrical network. In this way, the voltage levels are always known when microgrids are introduced, allowing adequate power-quality management processes to be run.
Therefore, in this paper we propose a heuristic method capable of providing a roadmap for the deployment of an advanced metering infrastructure. This method is a solution to the sizing problem and allows the planning of FiWi communication networks under certain restrictions. The data transmission speed does not enter the model as a restriction, but it can be estimated from the packet rate, the transmission rates and the data length generated by the SMs; in this research, these values are taken from the literature. Another calculated parameter, which depends directly on distance, is the FSPL. These parameters serve as references to determine the importance of the topology and how it affects the network with respect to minimizing the end to end delay and the free-space losses. The model minimizes the number of SMs that use cellular technology by incorporating WiFi technology. In summary, in this work we deploy a WiFi communication network, optimizing resources through clustering techniques. These techniques are based on a variant of the Prim minimum spanning tree algorithm and on the Dijkstra shortest-path algorithm. With these algorithms, the adjacency matrix (G) is constructed. This matrix encodes the existing relations between the different elements of the communication network (SMs, UDAPs, BSs and the central office), which form the resulting route map for the optimal integration of DRES. The model also includes the connectivity between the BSs and the central office over a fiber optic link. In this way, the communication resources are integrated into a FiWi network.
Table 1 presents the model and simulation parameters used in this paper.
3. Problem Formulation
There are n SMs X for electrical energy measurement distributed over a georeferenced area A. With Algorithm 1, Nearest-Neighbor Spanning Tree (N-NST), the clusters are formed, and with Algorithm 2, Optimal Delay Balancing (ODB), the SM that will become the head of each group (UDAP) Z is selected. Each cluster has the capacity to group up to m SMs. We assume a maximum bidirectional transmission range for intra-cluster data and a maximum bidirectional transmission range for inter-cluster data; that is, any intra-cluster or inter-cluster pair whose haversine distance lies within the corresponding range can communicate directly. The X and Z that cannot reach each other within the maximum allowed haversine distance in a single hop will do so over multiple hops until the respective data packets can be transmitted. The number of hops is restricted by w, the maximum number of hops allowed. It is worth mentioning that an SM is not able to transmit its data directly to the BSs; therefore, a transition node, the UDAP (head of each group), is of vital importance to fulfill that function. Since the UDAP physically has two slots holding dual WiFi and cellular cards, it can receive the information transmitted by single-access WiFi SMs and merge it for retransmission to the nearest cellular-access BS. Hence, hops are allowed only between intra-cluster SMs or between UDAPs, mainly to carry the data to the closest BS, which finally sends them via optical fiber to the central office, where the information is processed. Consequently, the links between the vertices (SMs, UDAPs, BSs, central office) can be represented by an adjacency matrix. This matrix indicates which pairs of vertices are related by a link, or edge, in the graph. It is a binary (0, 1) matrix with zeros on its diagonal: it stores a one when there is an edge from vertex i to vertex j and a zero when there is no edge.
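As an illustration, the haversine distance and the binary adjacency matrix described above could be computed as in the following minimal sketch (the coordinates and the range value are hypothetical placeholders, not data from this study):

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two georeferenced points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def adjacency_from_range(coords, max_range_m):
    """Binary adjacency matrix: G[i, j] = 1 if nodes i and j are within range.

    The diagonal stays zero, matching the matrix described in the text.
    """
    n = len(coords)
    G = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if haversine_m(*coords[i], *coords[j]) <= max_range_m:
                G[i, j] = G[j, i] = 1
    return G

# Hypothetical smart-meter coordinates (lat, lon) and an assumed intra-cluster range.
sm_coords = [(-0.2299, -78.5249), (-0.2301, -78.5251), (-0.2310, -78.5260)]
G = adjacency_from_range(sm_coords, max_range_m=150.0)
print(G)
```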
Initially, all X are candidate Z, each at the cellular unit cost. Once the clusters and the transition nodes Z have been identified, the links are created at the WiFi unit cost, which eliminates the need for every X to be a Z: cellular links are removed at the cellular cost and WiFi links are added at the WiFi cost, while 100% observability of the deployed SMs is ensured. Subsequently, each UDAP merges its data and sends them to the BSs. Once the data are merged at the BSs, they are transmitted over optical fiber to the central office at the fiber unit cost (see Figure 1). The three unit-cost variables correspond to each type of technology: cellular, WiFi and optical fiber, respectively. In addition, it should be noted that the cellular unit cost is higher than the WiFi unit cost.
Table 2 presents a summary of the variables used in the model.
Equations (1)–(3) express the total cost of each technology (WiFi, cellular and optical fiber), where the cluster-length variable represents the wireless length of each cluster, k is the maximum number of clusters to be deployed in the network and the fiber-length variable is the distance of optical fiber required in the FiWi network.
In this way, the optimization problem can be expressed as follows, subject to the constraints below. Equation (4) corresponds to the objective function, which consists of minimizing the implementation cost of the FiWi network. Equation (5) states that this cost comprises the three cost components. Equation (6) is a verification constraint requiring that the sum of WiFi links and cellular links does not exceed the total number of SMs deployed in A; this ensures that there are no loops within the wireless network.
Equations (7) and (8) allow any SM belonging to A to become a UDAP. The capacity constraint of Equation (9) limits the number of intra-cluster SMs that each cluster can bring together. Equation (10) restricts the maximum allowed coverage radius for an intra-cluster link to exist. It is important to mention that the restricted reference distances are point-to-point distances between an SM and its respective UDAP, so that all the nodes that comply with the restriction can form part of a cluster over a single hop or over multiple hops. The model then verifies the cluster capacity and the maximum coverage radius; therefore, if a node needs more than one hop to transmit its information to the UDAP and the reference distance allows it, the resulting length is the sum from the initial node, through each transition node, up to the UDAP. Equation (11) restricts the maximum allowed coverage radius for inter-cluster links to exist. If a cluster head (UDAP) cannot connect to the base station in a single hop (because of the coverage-radius restriction), it can do so over multiple hops supported by transition UDAPs; the point-to-point distances that make up the accumulated distance from the initial node to the destination node are bounded by the maximum distance allowed between the UDAPs and the base station. Finally, Equation (12) expresses that the optical fiber distance necessary to guarantee connectivity between the BSs and the central office must exist.
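The equation bodies themselves are not reproduced above; the following is only a plausible reconstruction, consistent with the cost definitions and constraints just described (all symbol names, c_wifi, c_cel, c_fo, L_c, d_fo, r_intra and r_inter, are assumptions rather than the paper's original notation, and Equations (7), (8) and (12) are omitted):

```latex
% Sketch of Equations (1)-(6) and (9)-(11) as described in the text (assumed notation).
\begin{align}
  C_{wifi} &= c_{wifi}\sum_{c=1}^{k} L_{c}, \qquad
  C_{cel}   = c_{cel}\,k, \qquad
  C_{fo}    = c_{fo}\,d_{fo}                                      \tag{1--3}\\
  \min\; C_{T} &\quad\text{with}\quad C_{T}=C_{wifi}+C_{cel}+C_{fo} \tag{4--5}\\
  \text{s.t.}\quad & n_{wifi}+n_{cel}\le n                          \tag{6}\\
  & |\mathcal{C}_{c}|\le m \quad \forall\, c=1,\dots,k              \tag{9}\\
  & d(x_{i},z_{c})\le r_{intra},\qquad d(z_{c},BS)\le r_{inter}     \tag{10--11}
\end{align}
```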
Algorithm 1 Nearest-neighbor spanning tree: receive (m, ·, w, ·, ·).
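Since the listing is not reproduced here, the following is a minimal sketch of the greedy, Prim-like cluster construction that the text attributes to N-NST, under the capacity and coverage restrictions described above; the seeding rule and data structures are assumptions based on that description, not the paper's exact steps.

```python
import numpy as np

def nnst_clusters(dist, m, r_intra):
    """Greedy nearest-neighbour clustering in the spirit of N-NST (Algorithm 1).

    dist    : n x n haversine distance matrix (metres)
    m       : cluster capacity (maximum number of SMs per cluster)
    r_intra : maximum point-to-point intra-cluster range (metres)
    Returns a list of clusters, each a list of SM indices.
    """
    n = dist.shape[0]
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Start each cluster from an SM belonging to the closest unassigned pair,
        # mirroring the minimum-distance-pair starting criterion described in the text.
        pairs = [(dist[i, j], i) for i in unassigned for j in unassigned if i != j]
        seed = min(pairs)[1] if pairs else next(iter(unassigned))
        cluster = [seed]
        unassigned.remove(seed)
        while len(cluster) < m:
            # Prim-like growth: closest unassigned SM reachable from any member.
            candidates = [(dist[i, j], j) for i in cluster for j in unassigned
                          if dist[i, j] <= r_intra]
            if not candidates:
                break
            _, nxt = min(candidates)
            cluster.append(nxt)
            unassigned.remove(nxt)
        clusters.append(cluster)
    return clusters
```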
Algorithm 2 Optimal delay balancing: receive (group, ·, ·).
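Again, with the listing not reproduced, the sketch below captures the ODB idea as described later in the text: the UDAP chosen for each cluster is the member nearest to the cluster's center of mass, which minimizes the average intra-cluster aggregation delay. This is a hedged approximation, not the exact procedure.

```python
def odb_select_udap(cluster, dist):
    """Sketch of the ODB criterion (Algorithm 2): choose as UDAP the cluster
    member whose average distance to the other members is smallest, i.e. the
    node nearest to the cluster's centre of mass."""
    def mean_dist(candidate):
        others = [dist[candidate, j] for j in cluster if j != candidate]
        return sum(others) / len(others) if others else 0.0
    return min(cluster, key=mean_dist)
```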
Algorithm 3 requires the georeferenced coordinates of the SMs as input, which allows a real scenario to be rehearsed. A distance matrix is then obtained using the haversine formula between the deployed SMs. Once the distance matrix has been obtained, a vector of adjacent SM pairs is created and ordered by pairwise distance from smallest to largest; the exploration and construction of the clusters starts from the pair of SMs with the minimum distance. Algorithms 1 and 2 are called iteratively from Algorithm 3 to obtain the results. Once the ordered pair vector is available, Algorithm 3 calls Algorithm 1 (N-NST) to solve the wireless network deployment by minimizing the number of UDAPs through a heuristic based on the Prim algorithm. In this way, coverage of the SMs is guaranteed as long as they comply with the restrictions. Recall that one of the objectives is to reduce the use of cellular links (higher cost) and exchange them for WiFi links (lower cost). First, the SM with the shortest distance to the BS is selected from the vector; this SM is a candidate UDAP. This produces a pre-clustering of the wireless network that connects the SMs in a minimum spanning tree; the underlying problem is NP-complete. The end to end delay and the free-space propagation losses of the obtained topology are then verified, and the topology is recorded in the binary adjacency matrix (G). Subsequently, Algorithm 3 uses Algorithm 2 to check whether the end to end delay and the free-space propagation losses can be decreased by modifying the intra-cluster structure previously obtained. If such a decrease is possible, the algorithm takes the new topology as the best solution; otherwise, it keeps the first one. The model therefore iterates and corrects the result originally obtained with Algorithm 1 until the objective function (subject to the restrictions) is minimal, verifying that there is no further option to reduce the cost of cellular links, the delays or the free-space propagation losses. Finally, once the near-optimal route map of the heterogeneous wireless network (WiFi-cellular) has been obtained, the algorithm finds the minimum route from the BSs to the central office over a fiber optic link. In this way, the route map of a heterogeneous FiWi network is obtained as the final result.
Algorithm 3 Generate topology: receive (·, ·, ·, ·, ·, n).
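A minimal sketch of the overall flow just described for Algorithm 3, reusing the helper sketches given above (haversine_m, nnst_clusters, odb_select_udap); the function signature and parameter names are assumptions, and multi-hop relaying is only indicated as a comment.

```python
import numpy as np

def generate_topology(coords, bs_coord, m, r_intra, r_inter):
    """Sketch of the Algorithm 3 flow: distance matrix, clustering, UDAP selection
    and recording of the accepted links in the binary adjacency matrix G."""
    n = len(coords)
    # 1. Haversine distance matrix between the deployed SMs.
    dist = np.array([[haversine_m(*coords[i], *coords[j]) for j in range(n)]
                     for i in range(n)])
    # 2. Capacity/coverage-constrained clusters (N-NST step).
    clusters = nnst_clusters(dist, m, r_intra)
    # 3. One UDAP per cluster (ODB refinement) and the adjacency matrix G.
    G = np.zeros((n, n), dtype=int)
    udaps = []
    for cluster in clusters:
        udap = odb_select_udap(cluster, dist)
        udaps.append(udap)
        for sm in cluster:
            if sm != udap and dist[sm, udap] <= r_intra:
                G[sm, udap] = G[udap, sm] = 1   # single-hop WiFi link
            # multi-hop relaying through transition SMs would be handled here
    # 4. Each UDAP relays the merged data to the BS over cellular (multi-hop via
    #    other UDAPs if beyond r_inter); fibre then links the BSs to the office.
    udap_reaches_bs = {u: haversine_m(*coords[u], *bs_coord) <= r_inter
                       for u in udaps}
    return G, clusters, udaps, udap_reaches_bs
```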
4. Results
The near-optimal route map of an advanced metering infrastructure under the FiWi network concept, which allows analysts to know the state of the conventional electrical network for the optimal integration of microgrids, is presented in Figure 2. With a georeferenced route map, all the information required to run the actual deployment is available and, more importantly, every resource required for the planning, implementation, economic assessment and operation of the FiWi network can be accounted for. Figure 2 depicts the existence of multi-hop intra-cluster links, which secure 100% coverage of every SM in the area of interest. It is very important to point out that each cluster in the present paper is formed with a method different from conventional clustering methods (k-means, k-medoid and mean shift). The method developed to achieve the goals of this research is Algorithm 1, N-NST. Since it is capable of forming balanced clusters subject to restrictions, it allows clusters of similar lengths to be built, contributing reliable data on each cluster. With it, sound planning and analysis of a tree-type hierarchical wireless network is possible. The above-mentioned conventional algorithms use divisive methods to form clusters without observing the length of each one; they are therefore unpredictable, do not build balanced clusters and cannot accept design parameters such as capacity and coverage.
Figure 3 shows the near-optimal route map together with the corresponding sparsity pattern matrix (spy) obtained from the sparse binary adjacency matrix of size n × n. In these square matrices, the binary relations one and zero are represented, where a one indicates the existence of an edge and a zero its absence. For each node bound to an edge, a one is placed (shown in blue in Figure 3); elsewhere a zero is placed (shown in white). The spy is therefore a binary matrix that contains the information on the vertices and edges of the solution to the problem posed in this research. The figure presents a scenario defined by a finite number of nodes, in which two different criteria for selecting the UDAP are applied. Figure 3a,b corresponds to the first criterion, namely the minimum distance from the closest BS to one of the SMs of the corresponding cluster; the SM that meets this condition is selected as the UDAP, and the rest are single-access WiFi SMs. Figure 3c,d corresponds to the second criterion, which applies the ODB algorithm for the selection of the UDAP. The sparsity pattern matrix in this research is square, binary, symmetric and has a zero diagonal. The diagonal is zero because there cannot be an edge from a vertex to itself, so a graph G(V,V) cannot be constructed. The graph is instead defined as G(V,A), where V is the set of vertices, represented by the SMs, UDAPs, BSs and the central office, and A is the set of edges, represented by the WiFi-cellular links that provide a link direction, so that a directed graph is built. Therefore, the spy matrices in Figure 3 represent the connectivity from a vertex i to a vertex j, denoted Vij. The number of nonzero elements of the spy matrices is 988 (see Figure 3), which divided by two gives 494, the number of WiFi links required by the network; this represents a 96.48% share for the technology with the WiFi unit cost and 3.52% of cellular links at the cellular unit cost in the hybrid wireless communication. Checking Scenario 1 in Table 3 and Table 4, we can identify that 494 WiFi links and 18 UDAPs are needed, giving n = 512, which is the number of SMs to deploy in A(n). Accordingly, the nonzero elements of the spy matrices of Figure 3 correspond to each edge (i, j) together with its symmetric counterpart (j, i); since both entries refer to the same link, every link contributes two nonzero elements. Replacing the number of WiFi links required in Scenario 1 of Table 3 in this expression yields the 2 × 494 = 988 nonzero elements presented in Figure 3.
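The counts quoted above follow directly from the symmetry of the adjacency matrix; a quick check of the arithmetic using the Scenario 1 figures:

```python
# Scenario 1 figures quoted in the text: 494 WiFi links and 18 UDAPs
# (i.e. 18 cellular links) for n = 512 SMs.
wifi_links, cellular_links, n = 494, 18, 512

nonzeros = 2 * wifi_links                  # each link appears as (i, j) and (j, i)
print(nonzeros)                            # 988 nonzero entries in the spy plot
print(round(100 * wifi_links / n, 2))      # 96.48 -> share of links served by WiFi
print(round(100 * cellular_links / n, 2))  # 3.52  -> share served by cellular
```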
Considering the above, Figure 3b,d shows completely different matrices with the same number of nonzero elements; these are the binary adjacency matrices that result from applying the two different criteria for selecting the UDAP. In Figure 3b, a greater dispersion of nonzeros can be seen around position (400, 400). Compared with Figure 3d, this reflects a greater number of hops required to guarantee coverage of every SM available in the scenario; the dispersion is thus associated with the number of hops, and consequently the end to end delay and the FSPL increase. In Figure 3d, through the application of the ODB algorithm, unnecessary dispersion is eliminated. Reducing as much as possible the hops used to transmit data packets from the most distant SMs toward their respective UDAP contributes to a significant reduction of the delay a UDAP needs to aggregate and merge the information of its associated cluster before relaying it to the respective BS; in the same way, the FSPL is diminished. From Figure 3, it can be seen that the SMs selected as UDAPs by the ODB algorithm are the nodes nearest to the center of mass of each group, thereby minimizing the average end to end delay of each group. This decreases the average number of links that a data packet must traverse to reach the respective UDAP. If the number of traversed links increases, it is because the SMs are far away from their respective UDAPs and need hops in order to transmit; this can happen when the coverage radius of the UDAP does not guarantee observability of the furthest SM. Therefore, if the number of links traversed by data packets from the SMs to their respective UDAP increases, the transmission distances and the required hops increase in the same way, and consequently the end to end delay increases. The end to end delay is thus directly proportional to the average number of links traversed by a data packet.
In addition, Figure 3 shows that the proposed heuristic is able to mutate the adjacency matrix, seeking the best resulting topology for the solution of the problem. The topology ensures a significant reduction of the average end to end delay that a UDAP needs to aggregate the information of its associated cluster. Figure 3c,d therefore shows the georeferenced near-optimal deployment of SMs, which serves for measurement, monitoring and control of the conventional electrical system, opening the possibility of optimal data management and the integration of microgrids to increase the reliability and quality of energy.
Table 3 presents the required number of links and the values of the variables analyzed in this paper for the required WiFi wireless network. It covers five different scenarios, in which the density of SMs n (512, 256, 128, 64 and 32) to be deployed in the area is varied, thus demonstrating the scalability enabled by the proposed heuristic. Note that n is the sum of WiFi and cellular links, which can be confirmed for the corresponding scenarios using Table 3 and Table 4. The purpose of these tables is to quantify the necessary resources and to review the behavior of the network in the different scenarios by analyzing the number of WiFi and cellular links required, the coverage rates, the average maximum distances of intra-cluster and inter-cluster links, the average time a UDAP needs to aggregate the information and the FSPL computed at different frequencies applicable to WiFi and cellular wireless networks. Each of these results allows the deployment of the network to be planned by observing its behavior. Since the proposed heuristic yields the minimum values of FSPL, end to end delay and transmission distance, it provides a near-optimal solution to the planning problem addressed in this research.
As the frequency of the WiFi and cellular wireless signal increases, the FSPL metric also increases. In general terms, the lower the transmission frequency, the better the signal propagates through the air and through objects. FSPL is used to predict the signal strength required in a wireless system. In addition, by adding in Table 3 and Table 4 the delay a UDAP needs to collect the information of its cluster and the delay of the cellular technology, we can estimate the average total time after which the BSs have the merged data of each UDAP deployed in the scenario available. The Round Trip Time (RTT) data of Table 4 are taken from [82] and apply to cellular technology. Comparing Table 3 and Table 4, we can see that the WiFi delay metrics are much greater than the cellular delay metrics; however, they do not exceed the delays allowed in AMI reported in the literature for efficient data aggregation.
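For reference, the standard free-space path loss relation behind these observations can be evaluated as follows (the distance and frequencies below are illustrative, not the exact values of Table 3 and Table 4):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Illustrative 100 m link: the loss rises with carrier frequency.
print(round(fspl_db(100, 2.4e9), 1))   # ~80.0 dB at 2.4 GHz (WiFi)
print(round(fspl_db(100, 5.8e9), 1))   # ~87.7 dB at 5.8 GHz
```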
Therefore, by viewing each scenario in Table 3 and Table 4, we can obtain the procedures required for the deployment of SMs under the configuration of a hybrid wireless network (WiFi-cellular). Another fact of interest is the length of optical fiber between the BSs and the central office; in this case study, the length is 280 m in all scenarios, since the latitude and longitude coordinates of the BSs and the central office are fixed. As a result, the heuristic provides the minimum route map required for planning a hybrid FiWi network at the lowest cost, while maximizing the reliability and robustness of the bidirectional communication network needed to control and supervise the conventional electrical network. Through optimal information management, this allows the integration of SMG systems that can run connected to the network through adequate synchronization and can likewise work in islanded mode, namely disconnected from the system. The importance of microgrids, supported by an adequate two-way communication system, is that they can operate autonomously according to what the physical and economic conditions dictate.
Figure 4 shows that the end to end delay increases as the capacity of a UDAP to accommodate SMs increases. This happens because the capacity of a cluster is directly related to the average number of links that a data packet must traverse to travel from an SM to its respective UDAP. In addition, the higher the capacity of the UDAP, the more pronounced several effects become: longer times to collect the information, greater transmission distances, more hops and higher loading of each link in the network. On the other hand, for each density of SMs, the topology of each cluster changes to comply with the requirements of the network; this requires different routing characteristics as the density of SMs increases or decreases, causing variability in the features of each cluster and hence in the resulting topology. As a result, if clusters are built with minimum distances, the need to transmit over multiple hops vanishes. Therefore, the delay is directly proportional to the capacity-coverage of the UDAP and inversely proportional to the density of the SMs.
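As a rough illustration of why the delay grows with the hop count, the following simplified store-and-forward estimate can be used (the transmission rate and per-hop overhead are assumed values, not the exact model behind Figure 4):

```python
def mean_aggregation_delay_ms(avg_hops, packet_bits=800, rate_bps=1_000_000,
                              per_hop_overhead_s=0.001):
    """Each additional hop adds one packet serialisation time plus a fixed
    processing/contention overhead; packet_bits = 800 matches the data length
    used later in Figure 5a, the other parameters are assumptions."""
    return 1000 * avg_hops * (packet_bits / rate_bps + per_hop_overhead_s)

for hops in (1, 2, 4):
    print(hops, "hop(s):", round(mean_aggregation_delay_ms(hops), 2), "ms")
```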
Table 5 presents the results of Algorithm 2. This algorithm helps to reduce the average time in which a UDAP collects the information from its group: the ODB algorithm performs intra-cluster scans to determine the best concentrator (UDAP) position. It can be seen that, when the density of nodes (SMs) increases while the cluster capacity is maintained, the number of UDAPs to be deployed also increases. The number of UDAPs required in each scenario differs because the heuristic has a stopping criterion: once the restrictions are met, the algorithm stops and provides one of the possible combinations that satisfies the constraints of the problem. Moreover, since the model is combinatorial and of NP-complete complexity, it only provides solutions that are close to optimal; exploring every possible combination to determine a global optimum would demand excessive computational time. Consequently, the stopping criteria (restrictions) help to relax the machine time the heuristic needs to provide a near-optimal solution. Finally, Table 5 shows the percentages by which the end to end transmission delay can be reduced by applying the ODB algorithm to a previous solution.
Figure 5a shows the metric obtained with the following characteristics: data length L = 800 bits and λ = 0.1 packets/s, varying the density of SMs and the capacity of each cluster. In Figure 5b, L is kept the same, the density of SMs is n = 512, and λ and the capacities are varied. In Figure 5a,b, it can be noted that when the number of UDAPs needed decreases, the average delay of the entire wireless network increases. This happens because of the increase in the capacity of each UDAP to accommodate SMs: if the capacity of a UDAP increases while its coverage radius stays at its minimum, the need for multiple hops to aggregate data from the more distant nodes to the UDAP also increases. Therefore, as the number of hops in the cluster increases, so does the distance from an SM to its associated UDAP, which translates into a longer time required to aggregate and merge the data at each UDAP. In addition, Figure 5a shows that the average delays, while maintaining the capacity, are similar for each increment in the density of the deployed SMs. This is because these are partial averages of each cluster, which demonstrates that the heuristic is capable of building balanced graphs through appropriate topologies, which in turn directly contributes to decreasing technical losses in the wireless network. Therefore, the number of required UDAPs responds to three variables in particular: the density of SMs, the capacity and the coverage (in terms of the technical characteristics available for the UDAP).
If we examine the behavior of the metrics in Figure 5a, for the populations of 32 and 128 SMs with capacities of 20-27 and 27-32, respectively, there is no need to deploy an additional UDAP, since at each capacity increment the proposed algorithm tries to include (if the capacity allows it) the nodes that had not been included due to the restrictions of the problem, thereby completing the clusters without adding UDAPs. On the other hand, Figure 5a makes clear that, as the density of SMs increases, the slope of the delays stabilizes. This happens because, with a larger number of SMs, the algorithm manages to build clusters that are mostly balanced in terms of distances, coverage radius and number of elements per group. Therefore, the higher the density of SMs, the better the optimization results, owing to the closeness of the SMs. When the capacity of a UDAP is varied, the following change: the topology, the average number of links traversed by a packet to reach its destination, the length of the cluster, the end to end delay, the link capacity and the coverage distance.
Figure 5b depicts significant variations in the global data-aggregation delay as the traffic generated by each SM increases: the higher the generated traffic, the greater the FiWi network delay. This is because the increase in delay is directly proportional to the increase in capacity; if the capacity of the UDAP increases, the length of the cluster grows and so does the traffic generated in each cluster, resulting in an increase in the global end to end delay. Accordingly, the delay is directly proportional to the traffic generated by each SM, whereas the number of UDAPs k required is inversely proportional to the capacity and coverage of the UDAP.
Figure 6 shows that the greater the average number of links a data packet must traverse from a source SM to a UDAP, the greater the delay in each scenario. This happens for the following reason: if the average number of links a packet must traverse increases, it is because the packet was generated by a node located beyond the maximum coverage radius allowed for the UDAP. That is, a very distant node increases the global delay of the wireless network, because its packets have to be carried over hops, supported by transition SMs, to bring the information to the UDAP. Each trend in Figure 6 corresponds to a different scenario, so the behavior of each trend responds to the near-optimal topology of that case. This heuristic is a solution to the planning problem.
The trend with n = 512 in Figure 6 corroborates the statement made in previous paragraphs: the higher the number of deployed SMs, the better the optimization results. Figure 6 also shows that when the density of SMs is high, the average number of hops required to transmit data packets is lower than in all other cases. This is because the greater the number of SMs, the more dispersion is avoided (see Figure 3), which translates into lower technical losses in the wireless network. Finally, if the average number of links traversed by a packet is zero, the entire network does not require multiple hops to transmit the information from a source SM to its target UDAP.
5. Conclusions
The proposed heuristic allows practitioners to deploy the number of UDAPs necessary for the monitoring, supervision and control of the conventional electrical network, providing coverage to a number n of SMs and making possible the integration of microgrids with the conventional electrical system. In this way, final users of energy resources become consumers and prosumers thanks to the integration of DRES. A fundamental feature of the model is that it adapts to the conditions of the required wireless network. In addition, the research carried out allowed us to determine the importance of reducing the end to end delay of the entire network as much as possible; this metric not only provides information in terms of time, but also allows us to understand and minimize the loading of the network and the capacity that must be allocated to the point-to-point links for its efficient operation. The model has been shown to be scalable in time and space and has the following characteristics: it produces finite solutions and optimizes the resources required by the FiWi network using an efficient clustering method (different from the traditional ones). Moreover, with the N-NST algorithm, balanced clusters can be built, subject to real restrictions such as capacity and coverage. The heuristic works with georeferenced scenarios, reducing the data aggregation delays of each cluster as much as possible using the ODB algorithm. Furthermore, it minimizes the FSPL and is a planning model of NP-complete complexity. The complexity of the problem lies in the population density of SMs, since, in a graph with n SMs, the number of possible spanning trees grows exponentially (up to n^(n-2) for a complete graph); thus, the proposed model is combinatorial in nature. Hence, the results obtained are near optimal, due to the exponential increase in complexity with even a minimal increase in the number of SMs in the scenario.
Consequently, in order to relax the problem, stopping criteria are introduced; the goal is that once the algorithm converges, it stops and provides a near-optimal solution. We assume initially that all the nodes are linked by cellular technology, a very expensive situation. As the model replaces cellular links with lower-cost WiFi links, the objective function decreases as much as possible, thus approaching the optimal solution; once the model cannot decrease the cost any further, the algorithm stops. The objective of this research is therefore to minimize the cellular links and maximize the WiFi links while guaranteeing coverage of the nodes located in the area of interest. Another fundamental characteristic of the present model is its combinatorial nature: if the density of nodes increases and, due to the capacity and coverage restrictions, some nodes remain uncovered, then after verifying the best options those nodes must necessarily become UDAPs, and they could serve future expansions.
In future work, a comparative analysis will be carried out between different clustering methods. The link capacity restriction (Mbps) will be incorporated to decide on the topology, and, finally, fault tolerance will be included as well.