1. Introduction
From the network operator’s point of view, the ever-increasing demand for high-speed data services translates into the need to continually upgrade networks so as to increase the data transmission rate per optical fiber. An engineer responsible for the continual upgrading of dense wavelength division multiplexing (DWDM) networks [1] needs software tools that allow for estimating the network’s performance and optimizing its architecture with respect to operational and capital costs. Thus, the specific DWDM network optimization task considered in this contribution was formulated as an integer programming (IP) problem and solved with available general-purpose solvers [2]. However, the IP optimization results show that even where the IP approach is applicable, it is inefficient for larger networks, since the design task is, in general, NP-complete [3]. The numerical efficiency limitations are particularly acute in the context of the routing and wavelength assignment (RWA) [4,5,6] and routing and spectrum allocation (RSA) [7,8,9,10] problems, which are at the heart of DWDM network optimization. Moreover, the constraints and cost functions are in general nonlinear, which may further complicate the situation.
Since the invention of computationally efficient exact algorithms is rather unlikely, heuristic discrete optimization methods seem best suited for the optimization of DWDM networks of realistic size and with high modularity of node resources, while taking into account optical network impairments such as attenuation or the optical signal-to-noise ratio (OSNR) [11,12]. Therefore, the main aim of this paper is to study the properties of heuristic algorithms applicable to the optimization of DWDM networks, with particular attention paid to nature-inspired algorithms equipped with problem-specific operators that the authors adapted to the problem studied. The performance of all optimization algorithms was compared on selected test networks of differing complexity. All networks used for the comparison were optimized subject to traffic demand sets extracted from real traffic data [13].
This paper is organized as follows. In Section 2 the problem is formulated and the network models are described. Due to space constraints, we decided to keep the presentation of the problem to a minimum and refer the reader to our previous article. In Section 3 the general methodology is described. In Section 4 the proposed metaheuristics are described. Next, in Section 5 a testbed of network structures is presented, and the results of a series of numerical experiments testing the algorithms are provided and compared. Finally, Section 6 provides a summary of the research findings.
2. Problem Formulation
In this section, the UW-WDM network optimization problem is formulated using mixed-integer programming (MIP) [10]. For this purpose, the following sets are defined:
set of all nodes;
set of transponders;
set of frequency slices;
set of edges;
set of paths between node pairs for transponder t;
set of bands;
set of frequency slices used by band b;
set of frequency slices that can be used as starting slices for transponder t.
Notice that the power budget constraints are taken into account in the model by limiting the path sets. In this approach, the path sets are precomputed and contain only paths that are feasible from the point of view of the power budget.
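Such precomputation can be sketched as an exhaustive simple-path search with power-budget pruning. The sketch below is illustrative only: the toy topology, the per-edge losses in dB, and the budget thresholds are made-up numbers, not values from the paper.

```python
def feasible_paths(graph, src, dst, loss_db, budget_db):
    """Enumerate all simple src->dst paths whose total attenuation
    stays within the optical power budget (illustrative model)."""
    paths = []

    def dfs(node, visited, path, loss):
        if loss > budget_db:          # prune: budget already exceeded
            return
        if node == dst:
            paths.append(path[:])
            return
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, visited, path, loss + loss_db[(node, nxt)])
                path.pop()
                visited.remove(nxt)

    dfs(src, {src}, [src], 0.0)
    return paths

# toy 4-node topology; per-edge losses in dB are illustrative
edges = {('A', 'B'): 5.0, ('B', 'D'): 5.0, ('A', 'C'): 12.0, ('C', 'D'): 12.0}
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
loss = {**edges, **{(b, a): v for (a, b), v in edges.items()}}

print(feasible_paths(graph, 'A', 'D', loss, budget_db=15.0))  # → [['A', 'B', 'D']]
```

With a 15 dB budget only the low-loss route A-B-D survives; relaxing the budget to 30 dB admits A-C-D as well, which is exactly how limiting the path sets encodes the power budget.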
We use the following variables:
binary variable that equals 1 if band b is used on edge e and 0 otherwise;
binary variable that equals 1 if a pair of transponders t is installed between nodes n and n′, routed on path p and starting at frequency slice s, and 0 otherwise.
In the model, the following constants are used:
cost of using band b at a single edge;
cost of using a pair of transponders t in band b;
bitrate provided by transponder t;
bitrate demanded from node n to node n′;
binary constant that equals 1 if transponder t, using bandwidth starting at frequency slice s, also uses frequency slice s′; 0 otherwise.
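The slice-occupancy constant can be sketched as a simple predicate, under the assumption that each transponder occupies a contiguous block of frequency slices; the function name and the slice widths below are illustrative, not from the paper.

```python
def occupies(start_slice, width, s_prime):
    """Return 1 if a transponder whose spectrum starts at start_slice and
    spans `width` contiguous frequency slices also covers slice s_prime,
    and 0 otherwise (illustrative sketch of the binary constant)."""
    return 1 if start_slice <= s_prime < start_slice + width else 0

# a transponder starting at slice 4 with width 3 covers slices 4, 5 and 6
print(occupies(4, 3, 6))  # → 1
print(occupies(4, 3, 7))  # → 0
```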
The cost model includes EDFAs, preamplifiers, boosters, ILAs and transponders in the cost constants, but not WSSs and filters, since the latter devices cannot be subject to optimization.
The optimization model is as follows:
The subject of minimization in (1) is the cost of installed amplifiers and transponders. Constraints (2) ensure that all demands are satisfied. Constraints (3) ensure that each frequency slice at each edge is used at most once; moreover, they ensure that using a band at an edge results in installing the appropriate amplifiers.
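A toy evaluation of the objective structure, amplifier cost per used (edge, band) pair plus the cost of installed transponder pairs, can be sketched as follows. All cost figures, band names and the transponder label are made-up illustrative numbers, not values from the paper.

```python
def total_cost(band_used, transponders, edge_band_cost, transponder_cost):
    """CAPEX objective sketch: amplifier cost for every (edge, band) that is
    switched on, plus the cost of every installed transponder pair."""
    amp_cost = sum(edge_band_cost[b]
                   for (e, b), used in band_used.items() if used)
    trx_cost = sum(transponder_cost[(t, b)] for (t, b) in transponders)
    return amp_cost + trx_cost

# made-up cost figures for two bands and one transponder type
edge_band_cost = {'C': 10, 'L': 14}                  # per-edge amplifier cost
transponder_cost = {('100G', 'C'): 30, ('100G', 'L'): 34}
band_used = {(('A', 'B'), 'C'): 1, (('A', 'B'), 'L'): 0}
installed = [('100G', 'C'), ('100G', 'C')]           # two pairs in the C band

print(total_cost(band_used, installed, edge_band_cost, transponder_cost))  # → 70
```

The amplifier term contributes 10 (one C-band edge) and the transponder term 60 (two pairs at 30 each), mirroring the two cost components of the objective.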
In this contribution, the CAPEX minimization problem is considered and several techniques are applied in order to reduce the calculation time. The motivation for reducing the calculation time is twofold. First, the CAPEX minimization should be performed each time new capacity is needed following a request from a customer; hence, a smaller calculation time is of clear benefit. Second, it was observed that the calculation time of standard optimization methods (e.g., ILP) grows rapidly with the number of network nodes [7]. Thus, for large networks, the application of heuristic methods becomes indispensable.
5. Results and Discussion
Computational results were obtained for seven optical networks with different characteristics. The networks correspond to actual optical networks of specific countries and were taken from [29].
Table 1 and Figures 4–10 provide the relevant parameters and the topologies of all the networks.
The demands used in the optimization of network costs are given by a demand matrix, which provides the values of traffic flow between selected nodes expressed in gigabits per second (Gbps). The calculations were carried out using the linear solver engine of CPLEX 12.8.0.0 on a 2.1 GHz Xeon E7-4830 v3 processor with 256 GB RAM running under the Debian Linux operating system.
Table 2 describes in detail the sets and their settings and presents constant settings used during the computational process.
The modeling parameter values for the HS, BA and EA heuristic algorithms were selected empirically after a series of tests.
In the first numerical experiment, a comparison of the optimization methods was performed. For the Polish, German10, German15 and German20 networks (Figures 4–7 and Figure 11), which are discussed first, all considered algorithms reached the optimal solution. The results of the heuristic algorithms are characterized by a small standard deviation. This observation supports the claim that the runs are highly repeatable and converge consistently to effectively the same value. Further, the results in Tables 3–6 show that the average values are close to the minimum values. This is particularly the case for the BA algorithm; thus, the BA algorithm showed on average the best performance among all heuristic algorithms in terms of consistency.
The results obtained by the HS algorithm, on the other hand, are characterized by the highest standard deviation and the highest average value of the cost function. Thus, the HS algorithm performed worst among all heuristic algorithms. It is also important to note that the CPLEX software took about 6 min to find the optimal solution of 1321; therefore, on average, the IP approach won the competition for the Polish network instance. Finally, it is noted that on average the fastest heuristic algorithm turned out to be the HC algorithm, with the HS algorithm slightly slower. Both of these algorithms were close to the IP approach in terms of time. The swarm algorithms (BA and EA) eventually reached the optimum but needed about 20 times more time.
The implemented algorithms were also applied to the large networks, i.e., the USA, American and German50 networks (Figures 8–10), which was the main purpose of this article. Analogously, the results are collected in Tables 7–9. Additionally, convergence curves over 50 independent runs are depicted in Figures 12–14 for the USA, American and German50 networks. In all three cases the IP approach found solutions that were much further away from the optimal value than the ones found by the heuristic algorithms. In all cases studied, the highest minimum values were obtained by the HC and HS algorithms.
It is worth noticing that the ILP approach has one advantage over heuristic methods: it provides a lower bound for the minimum sought. The results obtained show that for the cases studied, ILP is particularly effective when applied to the German10, Polish, German15 and German20 networks, since in these cases the ILP algorithm finds the optimum. For the USA, American and German50 networks, as mentioned, ILP finds the lower bound and thus allows estimating the optimality gap. However, it is noted that for networks with exceedingly many nodes the optimality gap may be so large as to preclude any practical knowledge concerning the quality of the feasible solutions obtained.
Considering the average results, the HC algorithm performed worst in all cases. On the other hand, the best results in all considered cases for the USA, American and German50 networks were obtained consistently by the BA and EA algorithms. Both BA and EA reached the minimum and, most importantly, the lowest average value for the American network. This observation supports the claim that the swarm algorithms (BA and EA) are best suited for the optimization of optical networks with a large number of nodes (i.e., the USA, American and German50 networks). The results of the BA and HC algorithms have the highest standard deviations, which suggests that these algorithms explore the solution space more extensively and thus give more diverse results.
In the more complex cases, i.e., the American, USA and German50 networks (Figures 8–10), the algorithms based mainly on exploitation of the solution space (HC and HS) converged to a certain point and then, due to limited exploration, stopped at one of the local extrema of the solution space. At this point, the EA and BA algorithms show their superiority. This is because in the EA and BA algorithms more emphasis is placed on exploration than in the previous two algorithms (HC and HS), which allows them to avoid the trap of local extrema and enables them to continue with further optimization.
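The contrast between pure exploitation and added exploration can be illustrated on a toy one-dimensional cost landscape. The landscape values and the restart scheme below are illustrative assumptions; they are not the operators used by the algorithms in this paper.

```python
def hill_climb(landscape, start):
    """Greedy descent to a lower-cost neighboring position (pure exploitation)."""
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = min(neighbors, key=lambda n: landscape[n])
        if landscape[best] >= landscape[x]:
            return x                       # stuck: no improving neighbor left
        x = best

# toy cost landscape: local minimum at index 2 (cost 1), global at index 8 (cost 0)
landscape = [5, 3, 1, 3, 5, 4, 2, 1, 0, 4]

stuck = hill_climb(landscape, start=0)     # exploitation only: trapped at cost 1
restarts = min((hill_climb(landscape, s) for s in range(len(landscape))),
               key=lambda x: landscape[x]) # crude exploration via restarts

print(landscape[stuck], landscape[restarts])  # → 1 0
```

A single greedy descent from the left gets trapped in the local extremum, while even a crude exploration mechanism (here, multiple starting points) reaches the global minimum, which is the qualitative behavior attributed to HC/HS versus EA/BA above.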
Figures 11–14 also show the influence of population-based initialization in the HS, EA and BA algorithms, in contrast to the HC algorithm. Starting from more than one solution gives a potentially better base solution, so that these algorithms find a lower-cost solution at the start (their charts start with a much lower cost value).
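The advantage of a population-based start can be sketched numerically: the best of several random initial candidates is never worse than the population average, which is what a single random start achieves in expectation. The cost range, seed and population size below are arbitrary illustrative numbers.

```python
import random

rng = random.Random(7)

def random_solution_cost():
    # stand-in for the cost of one randomly generated network configuration
    return rng.uniform(1000, 2000)

single_start = random_solution_cost()                     # single-start (HC-like)
population = [random_solution_cost() for _ in range(30)]  # population (HS/EA/BA-like)

best_initial = min(population)
# the best member of a random population is never worse than the population
# average, so population-based algorithms tend to begin from a lower cost
print(best_initial <= sum(population) / len(population))  # → True
```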
Further, it is noted that it took more than 6 days for the IP approach to find a solution that was still further away from the optimum value than the solutions obtained by the heuristic algorithms. On average, the BA algorithm needed almost three times less time than the IP approach, and the EA algorithm almost six times less. Finally, the HS and HC algorithms needed the same amount of time to complete the calculations and proved to be the fastest among all algorithms tested; this is because the iterations of the hill-climbing algorithm are computationally very efficient compared with those of the other algorithms, e.g., the EA algorithm. However, these two algorithms did not find the best solutions, which were provided by the BA and EA algorithms.
In the last experiment, fine tuning of the EA was performed. Results concerning the tuning of two of its parameters are provided, as this emerged as a challenging problem. The purpose of this experiment was to establish the best settings for these parameters; their values were first set experimentally and then a range of settings was tested.
The convergence curves presented in Figures 15–18 have similar shapes for different parameter values. If the parameter value is small, almost no gain is observed over the first iterations; after the initial iterations, however, quick convergence follows, after which the calculated cost values settle to a constant value. With a large parameter value, the initial slow convergence does not occur at all, and the fast-convergence period is again followed by settling down to a constant value.
Thus, the analysis of the impact of the studied parameters on convergence led to the following conclusion: under the studied settings, the parameter should be kept at about 1–2 to yield the best EA algorithm performance.