3.2. Hyper-Angle Exploitive Searching (HAES)
This section presents the hyper-angle exploitive searching (HAES) algorithm. First, we present its working principle and the difference between HAES and MOGA-AQCD [30] in Section 3.2.1. Second, we present the objective partitioning in Section 3.2.2. Lastly, we present the algorithm of HAES in Section 3.2.3.
3.2.1. Working Principle and the Difference between HAES and MOGA-AQCD
Both the proposed HAES and MOGA-AQCD use the concept of angle quantization for searching, which is based on dividing the space into equal-angle sectors and building a histogram that counts the number of solutions selected from each sector. However, HAES behaves differently from MOGA-AQCD in the selection of new solutions. When two solutions are non-dominated with respect to each other, MOGA-AQCD favors the solution located in the sector with the fewest previously selected solutions, whereas HAES favors the solution located in the sector with the most previously selected solutions. The MOGA-AQCD concept is to perform extensive exploration in order to yield a broad set of optimal solutions, whereas the HAES concept is that sectors that contained suitable solutions in the past are also likely to be rich in the future. We next provide an example to explain this critical difference between HAES and MOGA-AQCD regarding the searching concept.
The concept of HAES is depicted in Figure 2. The solution space is decomposed into a set of angular sectors, each containing a set of solutions. The already-found solutions are marked with black bullets, and the candidate solutions are represented with white bullets. HAES selects the candidate solutions located in the angular sector with the highest number of previously selected solutions. We mark the selected solutions with yellow bullets and the ignored solutions with blue bullets.
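The sector-preference rule can be sketched in Python for the two-objective case. This is a minimal illustration, not the paper's code: the function names (`sector_index`, `prefer`) and the 2-D angle mapping via `atan2` are our assumptions.

```python
import math

def sector_index(solution, sector_range):
    """Map a solution's 2-D objective vector to an angular sector index (assumption: atan2 angle)."""
    angle = math.atan2(solution[1], solution[0]) % (2 * math.pi)
    return int(angle // sector_range)

def prefer(candidate_a, candidate_b, sector_counts, sector_range):
    """HAES tie-break between two mutually non-dominated candidates: favor the one
    whose sector already holds more previously selected solutions.
    (MOGA-AQCD would flip the comparison and favor the emptier sector.)"""
    count_a = sector_counts.get(sector_index(candidate_a, sector_range), 0)
    count_b = sector_counts.get(sector_index(candidate_b, sector_range), 0)
    return candidate_a if count_a >= count_b else candidate_b
```

Switching the `>=` to `<=` in `prefer` would turn the HAES preference into the MOGA-AQCD one, which is the whole behavioral difference between the two algorithms at selection time.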
3.2.2. Objectives Partitioning
Multi-objective optimization with a high number of objectives requires searching a wide objective space, which makes it challenging to converge toward the boundary of the objective space. Hence, we apply a boundary-searching mechanism by activating only a subset of the objectives at each iteration out of the entire set of objectives. We name this mechanism objective partitioning; its role is to reach the boundary of the solution space with respect to the activated objectives. At each iteration of the optimization, we select a subset of size K out of the m objectives and use it for evaluating the solutions, sorting them, and selecting the non-dominated ones. The subset of objectives is selected randomly at each iteration using a uniform distribution.
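The partitioning step itself is small; a sketch in Python follows, where `partition_objectives` draws the uniform random subset and `evaluate_active` is a hypothetical helper showing how only the activated objectives are evaluated.

```python
import random

def partition_objectives(objectives, k, rng=random):
    """Activate a uniformly random size-k subset of the m objectives for this iteration."""
    return rng.sample(objectives, k)

def evaluate_active(solution, objective_fns, active):
    """Evaluate a solution only on the currently activated objectives (hypothetical helper)."""
    return [objective_fns[name](solution) for name in active]
```

Because the subset is redrawn every generation, all objectives are exercised over the run even though each sorting step sees only K of them.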
3.2.3. Algorithm of HAES
The general algorithm of HAES is presented in Algorithm 1. The algorithm takes the number of generations NGen, the number of solutions NSol, the sector range value SectorRange, the set of objectives F, and the size of the objectives partitioning K as inputs. The output of the algorithm is the Pareto front ParetoFront. As can be seen in Algorithm 1, the algorithm starts with the initialization of the first population in line 10, keeping it as the previous population in line 11, initializing the generation counter in line 12, and initializing the angle range rank in line 13. Next, an iterative while loop is performed until the number of generations is reached. The loop is composed of selecting the active subset of objectives using the objective partitioning (line 15), evaluating the solutions of the previous generation (line 16), updating the crowding distance using updateCrowdingDistance (line 17), updating the sector ranks using updateRanges (lines 18–19), selecting the elites that are responsible for generating the offspring using selectElites (lines 20–21), generating the offspring using geneticOperations (line 22), concatenating the parents and their offspring using the concatenation operator || (line 23), and finally selecting the new population from the resulting concatenated set using selectElites one more time (line 25). This process is repeated until the total number of iterations is finished; the Pareto front of the last generation is then the result of the algorithm, as presented in line 26.
Algorithm 1 Pseudocode of the HAES Algorithm
1.  Input:
2.    NGen //Number of Generations
3.    NSol //Number of Solutions
4.    SectorRange //Sector Range
5.    F = {f_i}, where i = 1, 2, …, m; //Set of Objectives
6.    K //size of objectives partitioning
7.  Output:
8.    ParetoFront //Found Pareto Front
9.  Start:
10.   population = InitiateFirstPopulation(NSol); //generate first population randomly
11.   populationPrevious = population; //first population is the previous population
12.   counterOfGeneration = 0;
13.   angleRangeRank = zeros(1, 2π/SectorRange); //initialize the angle range rank
14.   while (counterOfGeneration < NGen)
15.     F_K = objectivePartitioning(F, K); //activate a random subset of K objectives
16.     [solutionsRanks, objectiveValues] = evaluate(populationPrevious, F_K)
17.     [crowdingDistance] = updateCrowdingDistance(populationPrevious, objectiveValues)
18.     [angleRangeRank] = updateRanges(populationPrevious, solutionsRanks,
19.        SectorRange, angleRangeRank) //select from the previous solutions
20.     selectedElites = selectElites
21.        (populationPrevious, solutionsRanks, angleRangeRank, crowdingDistance, NSol)
22.     offSpring = geneticOperations(selectedElites)
23.     combinedPop = selectedElites || offSpring
24.     sortedCombinedPop = NonDominatedSorting(combinedPop)
25.     population = selectElites(sortedCombinedPop, angleRangeRank, NSol)
26.     populationPrevious = population
27.     counterOfGeneration++;
28.   end while
29. End
The algorithm calls three essential functions: updateCrowdingDistance, updateRanges, and selectElites. We provide the details of each of them in Algorithm 2, Algorithm 3, and Algorithm 4, respectively. The updateCrowdingDistance function (detailed in Algorithm 2) takes the number of solutions NSol and the objective values objectiveValues as input and provides the set of crowding distances crowdingDistance. The algorithm starts with the initialization of the set of crowding distances with the size of solutions NSol (line 7). Next, the two extreme solutions are assigned the value of infinity (line 8). Afterward, the algorithm sorts the solutions into separate lists according to their objective values (line 9). Then, the algorithm updates the crowding distance in an accumulated way, adding for each objective the difference between the value of a solution and the value of its neighboring solution in the sorted list (line 11).
Algorithm 2 Pseudocode of calculating the crowding distance
1.  Input:
2.    NSol
3.    objectiveValues
4.  Output:
5.    crowdingDistance
6.  Start:
7.    crowdingDistance = zeros(NSol);
8.    crowdingDistance(1) = crowdingDistance(NSol) = ∞
9.    for (each objective f of objectiveValues) sortedSolutions = sort(objectiveValues, f);
10.     for (solution j from 2 to NSol)
11.       crowdingDistance(j) = crowdingDistance(j) + objectiveValues(j) − objectiveValues(j − 1);
12.     end for
13.   end for
14. End
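The crowding-distance computation can be rendered as runnable Python. This sketch follows Algorithm 2's structure (extremes pinned to infinity, per-objective sorted accumulation); the index bookkeeping via argsort-style ordering is our implementation choice.

```python
def crowding_distance(objective_values):
    """Crowding distance per Algorithm 2: for every objective, sort the solutions,
    pin the two extreme solutions to infinity, and accumulate the gap between
    neighboring solutions in the sorted order for the interior ones."""
    n = len(objective_values)
    distance = [0.0] * n
    if n == 0:
        return distance
    num_objectives = len(objective_values[0])
    for obj in range(num_objectives):
        # indices of solutions sorted by the current objective
        order = sorted(range(n), key=lambda i: objective_values[i][obj])
        distance[order[0]] = distance[order[-1]] = float("inf")
        for pos in range(1, n - 1):
            i = order[pos]
            distance[i] += (objective_values[order[pos]][obj]
                            - objective_values[order[pos - 1]][obj])
    return distance
```

A larger distance means the solution sits in a less crowded region, so it is preferred when ranks and angle-range ranks tie.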
The updateRanges function is provided in Algorithm 3. It takes three variables, Solutions, SectorRange, and SoB, as input and gives angleRangeRank as output. The approach of obtaining angleRangeRank is based on iterating over the input Solutions and incrementing the counter of the sector in angleRangeRank that contains each solution, as presented in the for loop from line 10 to line 13.
Algorithm 3 Pseudocode of updating the angle range rank
1.  Input:
2.    Solutions
3.    SectorRange
4.    SoB
5.  Output:
6.    angleRangeRank
7.  Start
8.    L = length(Solutions)
9.    angleRangeRank = zeros(360/SectorRange)
10.   for (i = 1 to L)
11.     θ = angle(solution(i)); //angle of solution i
12.     j = map(θ, SectorRange); angleRangeRank(j) = angleRangeRank(j) + 1
13.   end for
14.   return angleRangeRank
15. End
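A Python sketch of the histogram update follows. We assume the two-objective case so that the solution's angle is well defined via `atan2`; the mapping from angle to sector index stands in for the listing's `map` function.

```python
import math

def update_ranges(solutions, sector_range, angle_range_rank=None):
    """Update the angle-range histogram (Algorithm 3): each solution's angle in
    objective space is mapped to a sector, whose counter is incremented."""
    n_sectors = int(round(2 * math.pi / sector_range))
    if angle_range_rank is None:
        angle_range_rank = [0] * n_sectors
    for x, y in solutions:  # 2-objective case for clarity
        theta = math.atan2(y, x) % (2 * math.pi)  # angle of the solution
        angle_range_rank[int(theta // sector_range) % n_sectors] += 1
    return angle_range_rank
```

Passing the previous histogram back in accumulates counts across generations, which is what lets HAES remember which sectors have been productive.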
The final procedure, selectElites, receives the pool of solutions Pool of Solutions, the rank of solutions Rank, the angle range rank angleRangeRank, the array of crowding distances crowdingDistance, and the number of solutions to be selected NSol as input, and provides the selected solutions (Algorithm 4). The procedure performs an iterated loop NSol times, where it selects two solutions each time and calculates three measures for each solution: rank, angle range rank, and crowding distance. Next, the selection function determines which one has a better rank (line 17), a better angle range rank (line 19), and a better crowding distance (line 21). Then, the selection is applied by checking the condition in lines 22–24, which favors the solution that has the better rank. In the case that the two solutions have the same rank, the solution with the better angle range rank is selected. If the two solutions have the same values of both rank and angle range rank, then the approach selects the solution that has the better crowding distance. The definition of "better" is provided for rank in line 17, for angle range rank in line 19, and for crowding distance in line 21. The details of the algorithm for selecting the elites are shown in Algorithm 4.
Algorithm 4 Pseudocode of selecting the elites
1.  Input:
2.    Pool of Solutions
3.    Rank
4.    AngleRangeRank
5.    CrowdingDistance
6.    NSol //number of the selected solutions
7.  Output:
8.    selected solutions
9.  Start:
10.   for (solution = 1 to NSol) //number of the selected solutions
11.     Select two individuals A, B randomly
12.     Compute non-domination rank (rank)
13.     Compute crowding distance (distance)
14.     Compute angle rank level (angleRangeRank)
15.
16.     //Compare solutions
17.     betterRank = A_rank < B_rank
18.     sameRank = A_rank == B_rank
19.     betterAngleRangeRank = A_angleRangeRank > B_angleRangeRank
20.     sameAngleRangeRank = A_angleRangeRank == B_angleRangeRank
21.     betterCrowdingDistance = A_distance > B_distance
22.     if (betterRank)
23.       or (sameRank and betterAngleRangeRank)
24.       or (sameRank and sameAngleRangeRank and betterCrowdingDistance)
25.     then
26.       add A to the selected solutions
27.     else
28.       add B to the selected solutions
29.     end if
30.   end for
31. End
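The three-level tournament comparison of Algorithm 4 can be sketched in Python. Here `beats` isolates the comparison of lines 17–24 so it can be checked on its own, and `select_elites` wraps it in the binary-tournament loop; both names are ours.

```python
import random

def beats(rank_a, rank_b, arr_a, arr_b, cd_a, cd_b):
    """True when solution A wins the tournament over B (Algorithm 4, lines 17-24):
    lower non-domination rank wins; ties go to the higher angle-range rank,
    then to the larger crowding distance."""
    if rank_a != rank_b:
        return rank_a < rank_b
    if arr_a != arr_b:
        return arr_a > arr_b
    return cd_a > cd_b

def select_elites(pool, rank, angle_rank, crowding, n_sel, rng=random):
    """Binary-tournament elite selection: draw two individuals NSol times and
    keep the tournament winner of each pair."""
    selected = []
    for _ in range(n_sel):
        a, b = rng.randrange(len(pool)), rng.randrange(len(pool))
        winner = pool[a] if beats(rank[a], rank[b],
                                  angle_rank[a], angle_rank[b],
                                  crowding[a], crowding[b]) else pool[b]
        selected.append(winner)
    return selected
```

Note the angle-range comparison prefers the *larger* sector count, which is exactly the exploitive bias that distinguishes HAES from MOGA-AQCD.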
3.3. Fog Computing Closed Loop Model (FCCL)
This section presents our developed integrated-objectives fog computing model, FCCL. It is composed of five main subsections. Section 3.3.1 explains the first layer, which is the fog interface. Section 3.3.2 gives an overview of the task decomposer and the task model. Next, Section 3.3.3 presents the task dispatcher. Then, Section 3.3.4 contains the network model, and lastly, Section 3.3.5 contains the optimization objectives.
From a fog computing perspective, our problem is formulated similarly. The fog has an interface that receives from the user a request to execute a computational task together with the criteria needed for optimization. Next, it calls an optimization algorithm that provides a set of non-dominated solutions with respect to the provided criteria, and the user makes a decision by selecting one among them. The criteria are denoted by a vector in which each component denotes one criterion for fog computing optimization. Without loss of generality, we consider five criteria, namely, Time Latency, Energy Consumption, Energy Distribution, Renting Cost, and Stability. Each solution that is provided to the user gives the fog nodes selected for the execution of the request, represented as a vector. The goal is to maximize the domination aspect of the provided solutions and their diversity, which gives the user more variety of choices. To elaborate, Figure 3 illustrates the user submitting a request to the fog interface and waiting for a set of non-dominated solutions from which to select one. The fog interface communicates with the task decomposer, which decomposes the task that the user requested to execute in the fog network. The role of the task decomposer is to partition the task into subsets of independent subtasks; we call each subset a group. Each group is executable independently of the other groups. This aspect enables a shorter execution time, which is one of the metrics to be optimized. The task decomposer communicates with the task dispatcher, which is responsible for calling the mathematical functions of the fog criteria to calculate the objective function for any candidate solution. The task dispatcher receives the needed information from the fog network and the task decomposition and specification before carrying out the optimization. The optimization is carried out using the multi-objective optimization algorithm HAES.
3.3.1. Fog Interface
The fog interface accepts the following inputs from the user. The first is the task, and the second is the preference vector over the various objectives for optimizing the task. The vector of preference between the five objectives is a five-component vector w = (w1, w2, w3, w4, w5) with the constraint that its components sum to one. A further input is the configuration input, which is given by a vector whose components denote the maximum number of iterations and the size of the population. Assuming that there is most interest in the execution time (makespan) and stability, second interest in the cost, and third interest in the energy consumption and the energy balance, the weight vector is set accordingly, with the largest weights assigned to the makespan and stability components, a smaller weight to the cost, and the smallest weights to the two energy components.
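A small Python check for the preference input follows. The normalization constraint (five non-negative weights summing to one) is our reading of the interface's constraint, and the function name is hypothetical.

```python
def valid_preference(w, tol=1e-9):
    """Check a preference vector: five non-negative weights that sum to one
    (our assumed form of the interface constraint)."""
    return (len(w) == 5
            and all(x >= 0 for x in w)
            and abs(sum(w) - 1.0) <= tol)
```

A vector such as (0.3, 0.3, 0.15, 0.15, 0.1) expresses the priority ordering of the example above: makespan and stability first, then cost, then the two energy terms.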
3.3.2. Task Decomposer and Task Model
The logical decomposition of data fusion tasks is a fundamental process in the design of systems that combine multiple heterogeneous cues collected by sensors. In recent years, a relevant body of research has focused on formalizing logical models for multi-sensor data fusion in order to propose appropriate and general task decompositions. Therefore, we suggest a task decomposer, elaborated in Figure 4, to decompose the task and classify its subtasks based on priority. The role of the task decomposer is to decompose the task into sets of independent subtasks, which we denote as groups. Example 1 goes into detail on decomposing and classifying the tasks.
This component has the role of accepting the task from the user. The task itself is modeled as a directed graph G = (T, E), where |T| denotes the number of tasks in the graph and |E| denotes the number of directed edges; each edge (t_i, t_j) denotes that task t_j is dependent on task t_i. Another piece of information that is related to the task and has to be provided by the interface is the workload of the tasks in terms of both computation and communication: the computation is described by a set in which each element denotes the number of clock cycles required by a task, and the communication load is described by a set whose elements represent the communication loads, i.e., the total length of data to be exchanged among the selected nodes for executing the task.
Example 1:
The task decomposer classifies the tasks of the directed task graph into groups according to their dependencies; the number of groups depends on the structure of the graph. The result of the task decomposer is a set of three groups: G1 = {1, 2, 3}, G2 = {4, 5}, and G3 = {6, 7, 8, 9}. As we see, the tasks in each group are independent of each other, and they can be processed in any order.
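One way to realize such a decomposition is to peel off dependency "levels" of the task DAG: all tasks whose prerequisites are already satisfied form one group. The paper does not fix this exact rule, so the sketch below (and the edge set reproducing Example 1's groups) is our illustration.

```python
from collections import defaultdict

def decompose_into_groups(tasks, edges):
    """Partition a task DAG into groups of mutually independent tasks:
    every task whose remaining dependencies are satisfied joins the current group."""
    indegree = {t: 0 for t in tasks}
    successors = defaultdict(list)
    for u, v in edges:  # edge (u, v): task v depends on task u
        successors[u].append(v)
        indegree[v] += 1
    groups = []
    ready = sorted(t for t in tasks if indegree[t] == 0)
    while ready:
        groups.append(ready)
        next_ready = []
        for u in ready:  # releasing a group satisfies its successors' dependencies
            for v in successors[u]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    next_ready.append(v)
        ready = sorted(next_ready)
    return groups
```

Within a group no task depends on another, so the group's tasks can be dispatched to fog nodes in any order or in parallel, which is the property the execution-time objective exploits.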
3.3.3. Task Dispatcher
The task dispatcher is responsible for allocating certain nodes in the fog network for the execution of the subtasks that result from the task decomposer. It contains the optimization algorithm HAES, which was presented in Section 3.2.3. The fog computing closed loop is presented in Section 3.3.
3.3.4. Network Model
We assume that the network is an undirected graph G = (N, L), where |N| denotes the number of nodes in the network and L denotes the set of links. Assuming that the nodes have wireless connections between each other, we are interested in the distance between every two nodes. Each node n_i has a rate of computational energy consumption e_i, and each two nodes n_i, n_j have a distance d_ij between them. In addition, we assume that each node n_i has a speed of execution s_i. Furthermore, we assume that each node has a maximum capacity for executing computational load and a maximum capacity for executing communication load.
3.3.5. Optimization Objectives
We present in this section the equations of the optimization objectives. Our model integrates five objectives at the same time, which distinguishes it from other models in the literature.
A. Time Latency
Time latency is an expression of how much time it takes for a packet of data to get from one designated point to another. It is sometimes measured as the time required for a packet to be returned to its sender. For a node n_i, it is calculated by the following formula:

T_i = T_w + C_i / s_i + D_ij / B

where T_w denotes the queue waiting time, C_i denotes the task computational load that is assigned to node n_i, s_i is the speed of the node n_i, D_ij denotes the communication load between n_i and n_j, and B denotes the bandwidth.
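Following the description of the latency terms (queue waiting time, computational load over speed, communication load over bandwidth), the per-allocation latency can be computed directly; the function name and argument names below are ours.

```python
def time_latency(t_wait, comp_load, speed, comm_load, bandwidth):
    """Latency of one allocation: queue waiting time, plus execution time
    (computational load / node speed), plus transfer time
    (communication load / bandwidth)."""
    return t_wait + comp_load / speed + comm_load / bandwidth
```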
B. Energy Consumption
In order to send k bits from node A to node B, where the distance between the two nodes is d, we calculate the consumed energy as in Equation (9):

E(k, d) = E_elec · k + ε_amp · k · d²    (9)

where E_elec denotes the energy consumption for operating the radio model for each bit in the data, d denotes the distance between the two nodes, ε_amp is the coefficient of the transmit amplifier, and k denotes the number of bits to be sent from node A to node B.
Based on this term, we can calculate the total energy consumption from the terms E_comp and E_comm, which represent the computation energy consumption and the communication energy, respectively. The total energy is given in Equation (10), the computation energy is given in Equation (11), and the communication energy is given in Equation (12):

E_total = E_comp + E_comm    (10)

E_comp = Σ_i e_i · t_i    (11)

E_comm = Σ_(i,j) E(k_ij, d_ij)    (12)

where e_i · t_i denotes the energy consumption due to execution in node n_i, with t_i the time allocation of the node n_i; E(k_ij, d_ij) denotes the energy consumption due to communication between nodes n_i and n_j; and k_ij is the number of bits transferred between nodes n_i and n_j.
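The energy terms can be sketched in Python. We assume the common first-order radio model form (per-bit electronics cost plus an amplifier cost growing with the squared distance), which is consistent with the term definitions above; the d² loss exponent and the function names are our assumptions.

```python
def transmit_energy(e_elec, eps_amp, k_bits, distance):
    """Radio-model energy of sending k_bits over distance: per-bit electronics
    cost plus amplifier cost scaling with the squared distance (assumed form)."""
    return e_elec * k_bits + eps_amp * k_bits * distance ** 2

def total_energy(comp_rates, alloc_times, comm_terms, e_elec, eps_amp):
    """Total energy = computation energy (sum of node consumption rate x
    allocation time) + communication energy (sum of radio-model costs over
    communicating pairs, given as (k_bits, distance) tuples)."""
    e_comp = sum(rate * t for rate, t in zip(comp_rates, alloc_times))
    e_comm = sum(transmit_energy(e_elec, eps_amp, k, d) for k, d in comm_terms)
    return e_comp + e_comm
```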
C. Energy Distribution
This term indicates the differences among the nodes in terms of their energy levels. The term is calculated as the standard deviation of the nodes' energy, as given in Equation (13):

ED = sqrt( (1/|N|) Σ_i (E_i − Ē)² )    (13)

where E_i denotes the consumed energy of node n_i, Ē denotes the average consumed energy of all nodes, and |N| denotes the total number of nodes.
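Since the energy distribution is a population standard deviation over the nodes' consumed energies, it maps directly onto the standard library:

```python
import statistics

def energy_distribution(node_energies):
    """Energy distribution objective: population standard deviation of the
    nodes' consumed energy (smaller means more balanced consumption)."""
    return statistics.pstdev(node_energies)
```

Minimizing this objective steers the optimizer toward allocations that spread the energy drain evenly across the fog nodes.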
D. Renting Cost
The renting cost is defined as the total cost of rent, which is the summation of each node's rental rate multiplied by the time of allocating the node, according to Equation (14):

RC = Σ_i r_i · t_i    (14)

where r_i denotes the renting rate of the node n_i and t_i denotes the time of allocating the node n_i.
E. Stability
This term indicates the total stability of the task execution. It is calculated as the summation of the reliability percentage of each node multiplied by the time of allocating the node. The calculation is depicted in Equation (15):

St = Σ_i rel_i · t_i    (15)

where rel_i denotes the reliability rate of the node n_i and t_i denotes the time of allocating the fog node n_i.
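Renting cost and stability share the same rate-times-allocation-time structure, so both reduce to a weighted sum over the allocated nodes; the function and argument names are ours.

```python
def renting_cost(rent_rates, alloc_times):
    """Total rent: each node's rental rate times its allocation time."""
    return sum(r * t for r, t in zip(rent_rates, alloc_times))

def stability(reliability_rates, alloc_times):
    """Total stability: each node's reliability rate times its allocation time."""
    return sum(p * t for p, t in zip(reliability_rates, alloc_times))
```

Note the two objectives pull in opposite directions: renting cost is minimized while stability is maximized, which is precisely the kind of trade-off the non-dominated front exposes to the user.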
F. Constraints
Before assigning any given solution to the fog network, it is necessary to ensure that it meets the constraints. Basically, there are two types of constraints that should be satisfied. The first one is the connectivity constraint, which states that any sub-network that is assigned the execution of a task should be connected in order to execute that task. The second constraint is named the load constraint. It states that a task with a given computational load and communication load should be allocated at least the capacity required for its execution; this value is calculated based on Equation (16).
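The connectivity constraint is a standard reachability check over the chosen sub-network; a breadth-first-search sketch follows (the function name and the representation of links as node pairs are ours).

```python
from collections import deque

def is_connected(nodes, links):
    """Connectivity constraint: the sub-network of fog nodes assigned to a task
    must be connected. BFS over the chosen nodes and the links among them."""
    nodes = list(nodes)
    if not nodes:
        return True
    chosen = set(nodes)
    adjacency = {n: [] for n in nodes}
    for u, v in links:
        if u in chosen and v in chosen:  # keep only links inside the sub-network
            adjacency[u].append(v)
            adjacency[v].append(u)
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(chosen)
```

A candidate solution failing this check is infeasible regardless of its objective values, so the check is applied before the solution is handed to the fog network.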