Article

Experimenting with Routing Protocols in the Data Center: An ns-3 Simulation Approach

Instituto de Computación (InCo), Universidad de la República (UdelaR), Montevideo 11300, Uruguay
* Author to whom correspondence should be addressed.
Future Internet 2022, 14(10), 292; https://doi.org/10.3390/fi14100292
Submission received: 8 September 2022 / Revised: 6 October 2022 / Accepted: 10 October 2022 / Published: 14 October 2022
(This article belongs to the Section Smart System Infrastructure and Applications)

Abstract

Massive scale data centers (MSDC) have become a key component of the current content-centric Internet architecture. With scales of up to hundreds of thousands of servers, conveying traffic inside these infrastructures requires much greater connectivity resources than in traditional broadband Internet transit networks. MSDCs use Fat-Tree type topologies, which ensure multipath connectivity and constant bisection bandwidth between servers. To properly use the potential advantages of these topologies, specific routing protocols are needed, with multipath support and low control messaging load. These infrastructures are enormously expensive, and therefore it is not possible to use them to experiment with new protocols; that is why scalable and realistic emulation/simulation environments are needed. Based on previous experiences, in this paper we present extensions to the ns-3 network simulator that allow executing the Free Range Routing (FRR) protocol suite, which supports some of the specific MSDC routing protocols. Focused on the Border Gateway Protocol (BGP), we run a comprehensive set of control plane experiments over Fat-Tree topologies, achieving competitive scalability running on a single-host environment, which demonstrates that the modified ns-3 simulator can be effectively used for experimenting in the MSDC. Moreover, the validation was complemented with a theoretical analysis of BGP behavior over selected scenarios. The whole project is available to the community and fully reproducible.

1. Introduction

Content-centric, cloud-based networking is the dominant model in the current Internet, with the pervasive presence of content providers with data center infrastructures deployed throughout the whole world. Content delivery networks (CDNs) replicate content in locations close to users, in order to improve their quality of experience, while over the top (OTT) providers behave in a similar way, consolidating the Internet distributed data center model. Moreover, resource virtualization is making its contribution to the Internet architecture shift as well, since the usual way to deploy online applications is to use cloud computing providers, which, not surprisingly, also base their operations on data centers with ubiquitous connectivity.
Both the content and computing businesses are based on huge data centers with similar basic functions, such as computing, storing, and replicating data using message exchange among servers, taking advantage of the supporting communication infrastructure. These data centers, which may comprise hundreds of thousands of servers, are called massive scale data centers (MSDC).
The traffic between users and applications running in the data center is called north–south traffic, while on the other hand, east–west traffic is the one exchanged by servers within the data center; the latter represents 85% of the total [1].
The traffic demand in MSDCs, much higher than in the traditional Internet, requires specific solutions at the forwarding, routing and transport levels, taking advantage of the topological possibilities offered by Fat-Trees, inspired by Clos networks [2]. These networks, originally conceived to build non-blocking switching matrices for telephone networks, are made up of multiple levels of switches, where each switch of one level is connected to all those of the next level, obtaining high path diversity as a result.
In a previous work [3], we experimented with data center routing protocols in emulated environments such as Kathará [4,5], Megalos [6], CORE [7] or Mininet [8], complementing the work presented in [9], where the Sibyl framework is used for evaluating implementations of routing protocols in fat-trees, including the Border Gateway Protocol (BGP) in the data center [10], Openfabric (IS-IS with flooding reduction) [11], and Routing in Fat Trees (RIFT) [12,13]. This framework provides wall-clock independent metrics, which permit normalizing the results regardless of the underlying execution environment.
These previous works are based on the routing protocols from the Free Range Routing (FRR) suite, an open source implementation of BGP, OSPF, RIP, IS-IS, and other protocols, inheriting the code base of the Quagga project [14].
As mentioned in the previous work, emulated devices run exactly the same firmware as hardware devices, therefore implementing identical functionality. Moreover, emulated devices are exposed to real-life software errors, which permits us to evaluate not only functionality, but also resilience. On the other hand, re-implementation of network protocols and applications is needed for discrete event simulation, weakening the chances of testing real use cases. Nevertheless, a simulator provides an environment for replicable experiments, always guaranteeing the same conditions, and provides fine-grained management of timing issues.
With these considerations in mind, in this paper we present a port of FRR to the Direct Code Execution (DCE) [15] mode of the ns-3 Network Simulator [16]. ns-3 is a discrete event network simulator for Internet systems, widely supported in the networking community. ns-3 has a mode of execution called DCE [17], which allows using native code (properly compiled) in the simulations. In this way, it is possible to execute existing implementations of network protocols or applications within ns-3. Therefore, it is possible to reconcile the virtues of discrete event simulators with those of emulation, which preserves the real implementation of protocols and applications. Moreover, this approach permits us to run a fair comparison among different experimentation frameworks which run FRR.
Thus, we seek to perform the necessary implementations so that ns-3 can support FRR, in order to develop simulations in ns-3 DCE that use FRR code. While FRR implements a set of network protocols, the scope of this work is to support the implementation provided for BGP. For the simulations, we focus on the fat-tree Clos topology, which is widely used in massive data centers. The aim is to study the behavior of the BGP protocol in this context.
The main contribution of our work consists of a simulation platform to test and analyze routing protocols in the context of MSDCs. This platform provides the ability to simulate the FRR suite and in particular the MSDC routing algorithms. To achieve this, our work includes: (i) an extension of DCE to support the FRR suite, (ii) an FrrHelper class, which facilitates the instantiation and usage of FRR in a simulation script, (iii) an extension of the fat-tree topology generator VFTGen [18] to produce ns-3 simulation scripts, (iv) a comparison and validation between emulation and simulation-based approaches for BGP in the data center, and (v) a comparison and validation between the experimental results and a theoretical analysis of BGP behavior over selected scenarios.
The remainder of this paper is organized as follows: Section 2 provides the background and presents the Sibyl framework as related work. Section 3 describes the process of porting FRR to ns-3, using the DCE module. Section 4 presents the validation of the port and the experimental results generated. First, a basic functional evaluation is described; secondly, a comparison against the Sibyl framework is carried out; finally, a validation against a theoretical analysis of BGP behavior over two selected scenarios is performed. In Section 5, a performance analysis is presented. It evaluates the scalability, memory usage and execution times for different network sizes. Additionally, two features to improve the performance of the port are described, and a comparison between the execution times in the simulated and emulated environments is exhibited. Finally, in Section 6, we discuss the most relevant aspects and conclusions of this work.

2. Background and Related Work

There are different approaches to network control plane debugging, namely model-based verification, and testing over emulation or simulation environments. In this work, we concentrate on testing tools. Regarding emulation tools, we have been working with scalable environments such as CORE, Mininet, Kathará and Megalos, where an actual protocol implementation can be tested in a controlled environment. Moreover, the Sibyl framework, which works over Kathará and Megalos, assembles different tools for protocol evaluation over fat-trees.
In the case of simulations, re-implementation is often needed. This presents a major drawback for protocol debugging, and therefore it is not the most usual path to follow. Some previous works have attempted to offer real code execution over a simulator but, to the best of our knowledge, only DCE has a working environment tested with many real world implementations. In Section 3, we present in more detail the characteristics of ns-3 and DCE, and the FRR port effort.

2.1. The Sibyl Framework

In this section, we will briefly describe the Sibyl framework that we will use as a baseline for comparison and validation of our proposal, given the public availability of a complete data-set of experiments [19].
Kathará is a network emulation system that accurately reproduces the behavior of a real system, using Docker containers [20] to implement devices, which represents a lightweight alternative to standard virtualization solutions, allowing devices to use different images in the same network scenario (for example, different implementations of a given network protocol).
Kathará supports different virtualization managers, and in order to support horizontal scalability, it uses Kubernetes [21], adopting the name Megalos. Since it runs distributed in a cluster of servers, the low level connectivity of emulated devices is implemented using a Virtual Extensible LAN (VXLAN) data plane with an EVPN BGP control plane.
The Sibyl framework integrates the aforementioned environments, tailored to perform a large number of experiments on parametric fat-tree topology configurations. During each experiment, Sibyl performs a series of steps: generating a topology, deploying nodes running specific containers and their network links, starting the experiment and capturing the relevant PDUs, and finally shutting down the scenario and analyzing the results (for further details, see [9]).
We used the results gathered following these steps as a baseline for comparison with other experimentation environments, in particular with the FRR port to ns-3 presented in this paper.

2.1.1. Sibyl Fat-Tree Experimentation Tools

In this section, we describe the tools included in the Sibyl framework, as follows:
  • VFTGen [18] automatically generates and configures fat-tree topologies for Sibyl. It takes as input the parameters of a fat-tree.
  • Sibyl RT Calculator is a tool for generating the expected forwarding tables of the network nodes of a fat-tree, taking into account the routing protocol (e.g., BGP) and the type of test (e.g., Node Failure).
  • Sibyl Analyzer is a tool to analyze the results of the experiments using the packets exchanged by the nodes during an experiment.

2.1.2. The Timing Issue

Sibyl implements a wall-clock independent metric, which permits us to normalize the results disregarding the underlying execution environment.
This is necessary for emulated environments, where underlying hardware resources cannot be taken for granted. On the other hand, execution time is completely under control in discrete event simulations, permitting us to measure performance parameters with certainty. This is the main reason to attempt the FRR port to ns-3, along with the fact that DCE permits us to execute native code.

2.2. Fat Tree Networks

Fat-tree networks are topologically partially ordered graphs, and “level” denotes the set of nodes at the same height in such a network, where the nodes of level zero (the lowest) are called Leaves, those of level one are Spines, and the ones of level two are Top of Fabric (ToF) or Cores. The subset of Leaf and Spine nodes that are fully interconnected is called a Point of Delivery (PoD). Level two is called the aggregation level and has the responsibility of connecting different PoDs.
Following the notation described in [12], a fat-tree topology can be specified by three parameters: $K_{LEAF}$, $K_{TOP}$ and $R$. $K_{LEAF}$ and $K_{TOP}$ describe the number of ports pointing north or south for the leaf and spine nodes, respectively. Finally, the number of links from a ToF to a PoD is denoted by $R$ and called the "redundancy factor". As an example, Figure 1 shows a fat-tree with $K_{LEAF} = 2$, $K_{TOP} = 2$ and $R = 1$. For simplicity, from now on we assume $K_{LEAF} = K_{TOP} = K$.
Observe that there are two types of fat-trees: single-plane and multi-plane. In a single-plane topology, each ToF is connected to all the Top of PoD (ToP) switches. This topology has the maximum value of the redundancy factor, with $R = K$. In these topologies, the number of ports for each ToF is at least $P \times K$ (with $P$ the number of PoDs), which might be unfeasible if $P$ and/or $K$ are too large.
On the other hand, in a multi-plane topology, ToFs are partitioned into planes: $N = K/R$ sets, each with the same number of nodes. All the ToFs of the same plane are connected to the same set of spines of each PoD. The topology shown in Figure 1 can be described as a multi-plane fat-tree with $K = 2$, $R = 1$ and $N = K/R = 2$ ToF planes. It is worth noting that in this configuration, redundancy is sacrificed to increase the number of PoDs.

3. FRR Port to ns-3 DCE

In this section, we detail the process of porting FRR to ns-3, using the DCE module. The process involved: (i) changes to DCE to be able to execute the FRR code, which implied re-implementing some functions from the C library (glibc) that are used by FRR, as well as fixing some bugs found in existing DCE code; (ii) minor changes to the code of FRR, in order to solve some problems for which it was difficult to find another solution; (iii) implementing an FrrHelper class in a way that makes it easy to write scripts that use the port; and (iv) carrying out tests in order to evaluate and validate the port, which we present in Section 4 and Section 5. The aforementioned port is open source and is available at [22].

3.1. Background on ns-3 and DCE

ns-3 [16] is a discrete-event network simulator used mainly in research and education. It is open-source and free, licensed under the GNU GPLv2 license.
Both ns-3 core and models are implemented in C++. It is built as a library that can be linked both statically and dynamically by a main C++ program, which defines the network topology and starts the simulation [23]. Typically, to run a simulation in ns-3, a C++ program is created (script in the ns-3 nomenclature) that defines the topology and configuration for the simulation. This program includes at the end a call to the Run() function of the Simulator class that will start the simulation.
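For illustration, the following is a minimal sketch of such a script (the two-node topology and the link parameters are arbitrary examples, not taken from our experiments): it creates the nodes, connects them with a point-to-point link, installs the ns-3 Internet stack, and finally calls Run() on the Simulator class to start the simulation.

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/internet-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  CommandLine cmd;
  cmd.Parse (argc, argv);

  // Topology definition: two nodes connected by a point-to-point link.
  NodeContainer nodes;
  nodes.Create (2);

  PointToPointHelper p2p;
  p2p.SetDeviceAttribute ("DataRate", StringValue ("10Gbps"));
  p2p.SetChannelAttribute ("Delay", StringValue ("1ms"));
  NetDeviceContainer devices = p2p.Install (nodes);

  // Install the ns-3 Internet stack and assign addresses.
  InternetStackHelper stack;
  stack.Install (nodes);

  Ipv4AddressHelper addr;
  addr.SetBase ("10.0.0.0", "255.255.255.0");
  addr.Assign (devices);

  // Start the discrete-event simulation.
  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```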
Regarding Direct Code Execution (DCE) [15], it is a framework for ns-3 that allows us to execute existing implementations of network applications or protocols within ns-3 without any changes to the source code. This permits us to execute existing real applications, such as the ping application, or even the entire Linux networking stack within an ns-3 simulation.
Thus, in an ns-3 simulation that uses DCE, the network topology as well as the channel configurations are defined in ns-3, while applications running on the nodes can use DCE, including Linux native applications or actual implementations of network protocols, such as Linux's TCP, as shown in Figure 2.
There are two ways to run DCE: basic mode and advanced mode. Basic mode uses the ns-3 networking stack, while advanced mode uses the Linux networking stack. The latter is done using the Linux kernel as a library.
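As a sketch of how advanced mode is typically enabled in a DCE script (following the patterns of the ns-3 DCE examples; the attribute and library names may vary with the DCE and kernel-library versions in use, and this requires DCE built with the Linux kernel library), the nodes are given the Linux networking stack through the DceManagerHelper before any application is installed:

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"

using namespace ns3;

// Sketch: enable DCE "advanced mode" so that the nodes use the Linux
// networking stack (compiled as a library) instead of the ns-3 stack.
void InstallLinuxStack (NodeContainer &nodes)
{
  DceManagerHelper dceManager;
  // The library name follows the ns-3 DCE examples and may differ
  // depending on the DCE version.
  dceManager.SetNetworkStack ("ns3::LinuxSocketFdFactory",
                              "Library", StringValue ("liblinux.so"));
  dceManager.Install (nodes);

  // Address and route configuration then goes through the Linux stack.
  LinuxStackHelper stack;
  stack.Install (nodes);
}
```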
The design of DCE takes its idea from the library operating system (LibOS [24]). DCE is structured around three components: Core, Kernel and POSIX, as shown in Figure 3. First, at the bottom level is the Core module that handles memory virtualization: stack, heap and global variables. Above that is the Kernel layer that takes advantage of these services to provide an execution environment for the Linux network stack within the simulator. For Advanced Mode, DCE uses the Linux kernel implementation of the layer 3 and 4 protocols, while layers 1 and 2 are simulated with ns-3. DCE takes care of synchronization, making the Linux kernel see ns-3 network devices as if they were real devices. Finally, the POSIX layer builds on top of the Core and Kernel layers to re-implement the standard socket API for use by simulated applications.
DCE runs each simulated process within the same host process. This model makes it possible to synchronize and schedule each simulated process without having to use inter-process synchronization mechanisms. Moreover, it allows the user to track the behavior of the experiment across different processes without having to use a distributed debugger, which tends to be more complex. The threads in each simulated process are managed by a task handler, implemented in DCE, synchronized with the simulated host and isolated from the other simulated hosts.
Since the loader of the host system aims to ensure that each process does not contain more than one instance of each global variable, DCE provides its own implementation of the loader with a specific loading mechanism to instantiate each global variable, once per simulated instance.
The POSIX implementation in DCE replaces the use of the traditional glibc library. Thus, when an application running on top of DCE makes a call to glibc, DCE intercepts the call and executes the re-implemented function. Most of these functions simply forward the call to the corresponding function in the host's glibc library. However, calls that involve system resources must be re-implemented. These include calls involving network resources, the system clock, or memory management. DCE classifies the glibc functions using the macros DCE or NATIVE. The former are functions that are re-implemented by DCE, while the latter are passed to the operating system's own library.

3.2. Previous Work: Quagga Port

Quagga is a routing software suite, providing implementations of OSPFv2, OSPFv3, RIP v1 and v2, RIPng and BGP-4 for Unix platforms. FRR is a fork of Quagga, which has been embraced by both industry and the community, replacing Quagga as the suite of choice for open source routing projects. FRR incorporates implementations of protocols used in data centers such as OpenFabric [26], and allows the necessary modifications to be incorporated into BGP for routing in large-scale data centers.
Quagga was ported to DCE in 2008 [27]. The Quagga module in DCE allows using the Quagga routing protocol implementations as models in the network simulation. The Quagga DCE project is no longer actively maintained, its last update being in 2012. Despite this, the project is still functional and can be executed with DCE without major problems, although Quagga support in DCE is not complete.
To make it easier to use Quagga in simulations, the project provides a QuaggaHelper class. This class provides methods that can be used from the simulation scripts to install a protocol on a node and configure it. During the port of FRR, we drew heavily on this class to develop an FrrHelper that provides similar facilities. Additionally, the fact that not all features work with the ns-3 stack (Basic Mode) motivated us to focus on the Linux stack (Advanced Mode) for the FRR port.

3.3. DCE Extensions to Support FRR

As previously mentioned, DCE does not support all existing glibc functions; therefore, when porting a new application to DCE, it is possible that multiple errors appear due to unrecognized function symbols, since they were not declared in DCE. Thus, the process of adding support for a new application is very cumbersome, and it is mostly based on trial and error until all needed functions are detected and correctly implemented in DCE.
During the process of porting FRR to DCE, we found several (10) functions that were not declared in the POSIX layer of DCE. For seven of them, it was necessary to implement their functionality inside DCE because they are related to memory allocation, timing, file operations, disk allocation and threading. The other three are functions whose use by FRR does not involve system resources; therefore, it is enough to indicate to DCE to use the original glibc implementation (using the NATIVE macro). These extensions can be found in our public repository [22].
In addition to these added functions, we detected two bugs in memory management in functions already implemented under the DCE macro. Two Pull Requests were submitted to the ns-3 DCE project with the corrections of these bugs [28,29]. In addition, another Pull Request was made with the functions necessary to execute the FRR code. At the time of writing this paper, the Pull Requests are pending review.

3.4. FRR Extensions to Run over DCE

In addition to the extensions to DCE mentioned in the previous section, changes were made to FRR in order to run it over DCE. These changes were made for practicality reasons, due to the difficulty of adapting DCE to run FRR in its original form. The changes took two forms: changes to the compilation process and changes to the source code.
In general, in order for an application to run in DCE, DCE needs to relocate the executable binary in memory. In turn, these executable files need to be built with specific options for the compilation and linking stages, as explained in the ns-3 DCE manual. For the case of FRR, which is a framework optimized for several different platforms and also for real networking hardware, we needed to tune the compilation process.
Compiler optimizations often use function symbols that DCE does not implement. For example, when compiling FRR with the default compilation configuration, the obtained binary uses symbols such as __strndup or pthread_condattr_setclock. Therefore, we opted to disable some compiler optimizations so as to reduce the number of new functions to implement in DCE.
Regarding source code changes to FRR, we performed some minimal modifications to avoid the usage of glibc function symbols that are not implemented in DCE. The changes are related to the log buffering of FRR and do not impact the functionality of the application. The compilation and source code changes are summarized in a compilation script available in [30].

3.5. Helper Class for Running FRR over DCE

To assist in the creation of simulations using the FRR port, we created an ns-3-dce-frr module within the ns-3 DCE project. This is based on the existing ns-3-dce-quagga module of the Quagga port.
The ns-3-dce-frr module includes, among other things, simulation examples and the FrrHelper. The latter contributes to the configuration of the environment required for the deployment of simulations and assists in the instantiation of the selected FRR daemons on the indicated simulated nodes (one or many), with zebra being installed implicitly.
Moreover, the FrrHelper creates the necessary directories, configuration files and loads the programs to be executed by the nodes. In addition, the FrrHelper also includes methods for the configuration of the ported daemons: zebra, BGP and OSPF. Furthermore, a frr-utils class has been implemented that provides useful functions for both the FrrHelper and the simulations.
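To give an idea of the intended usage, the following sketch shows how a simulation script might install and configure the ported daemons on a set of nodes. The method names used here (EnableBgp, BgpAddNeighbor, Install) are hypothetical placeholders modeled after the QuaggaHelper interface and may not match the final FrrHelper API exactly; the examples shipped with the module [22] show the real interface.

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"
// The FrrHelper class itself is provided by the ns-3-dce-frr module
// described in this section.

using namespace ns3;

// Illustrative sketch only: method names below are hypothetical,
// patterned after QuaggaHelper; consult the module examples for the real API.
void ConfigureFrr (NodeContainer &routers)
{
  FrrHelper frr;

  // zebra is installed implicitly together with the selected daemons.
  frr.EnableBgp (routers);                    // hypothetical: start bgpd on every router

  // Example per-node BGP configuration: AS number and neighbor by IP
  // (configuring neighbors by IP is what Section 5 requires when zebra is disabled).
  frr.BgpAddNeighbor (routers.Get (0),        // hypothetical helper method
                      "10.0.0.2", 65002);

  // Generate configuration files/directories and schedule the daemons.
  frr.Install (routers);
}
```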

3.6. Fat Tree Generator for ns-3 DCE

In order to be able to execute multiple test cases on different configurations of fat-trees, without the need to implement them each time, we developed a fat-tree generator for ns-3 DCE inspired by the analogous Kathará tool, VFTGen [18].
This makes it easy to automate, create, and reproduce test cases. To create this generator, the utilities vftgen-utils and vftgen-classes were implemented, which are responsible for building the topology. That is, according to the indicated parameters, they create the appropriate number of ns-3 nodes, connect them according to the corresponding fat-tree and assign them appropriate IP addresses.
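As a rough C++ sketch of what the generated topology code amounts to (simplified to a single PoD, with arbitrary link parameters; the real generator also handles the ToF planes, IP addressing derived from the fat-tree parameters, and the FRR configuration):

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"

using namespace ns3;

// Build one PoD of a fat-tree: every leaf is connected to every spine.
// The per-PoD leaf and spine counts derive from the fat-tree parameters
// (k_leaf, k_top) described in Section 2.2.
NetDeviceContainer BuildPod (NodeContainer &leaves, NodeContainer &spines,
                             uint32_t numLeaves, uint32_t numSpines)
{
  leaves.Create (numLeaves);
  spines.Create (numSpines);

  PointToPointHelper link;
  link.SetDeviceAttribute ("DataRate", StringValue ("10Gbps"));
  link.SetChannelAttribute ("Delay", StringValue ("10us"));

  NetDeviceContainer devices;
  for (uint32_t l = 0; l < leaves.GetN (); ++l)
    {
      for (uint32_t s = 0; s < spines.GetN (); ++s)
        {
          // One point-to-point link per leaf-spine pair.
          devices.Add (link.Install (leaves.Get (l), spines.Get (s)));
        }
    }
  return devices;
}
```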

4. Validation and Experimental Results

Our implementation has been evaluated using three different approaches: (i) a simple functional evaluation, (ii) a comparison against the Sibyl framework, and (iii) a theoretical analysis.

4.1. Functional Evaluation

Several test cases have been developed throughout the process of implementing changes and additions to ns-3 DCE to allow the execution of FRR, following an iterative and incremental approach.
The core of the FRR architecture is the zebra daemon, which manages IP routing by updating the operating system's kernel routing tables. It also permits discovering interfaces and redistributing routes among the different routing protocols running on the host [31].
Thus, in order to verify the correct functionality of the implementation, it is necessary to run zebra and routing protocols’ implementations together; in this case, we focus on BGP. Under normal operation, BGP will learn prefixes and install entries in the kernel routing tables via zebra, allowing the node data plane to forward IP packets. Note that BGP may run without zebra, if we only want to verify the control plane operation (without packet forwarding).
Therefore, for any given scenario, the verification method consists of checking routing table updates. Likewise, connectivity tests can also be performed using, for example, the ping command, as sketched below.
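For instance, a connectivity check with the native ping binary can be scheduled from the simulation script through DCE along the following lines (a sketch based on the standard DceApplicationHelper usage; the arguments and start time are illustrative):

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"

using namespace ns3;

// Sketch: run the native "ping" binary on one node to verify that the
// routes installed by zebra/BGP actually provide data-plane connectivity.
void SchedulePing (Ptr<Node> source, const std::string &destination)
{
  DceApplicationHelper dce;
  dce.SetStackSize (1 << 20);      // stack size for the DCE-managed process
  dce.SetBinary ("ping");
  dce.ResetArguments ();
  dce.AddArgument ("-c");          // illustrative: send a fixed number of probes
  dce.AddArgument ("5");
  dce.AddArgument (destination);

  ApplicationContainer app = dce.Install (source);
  app.Start (Seconds (15.0));      // illustrative: after BGP convergence
}
```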
For this evaluation, we selected three scenarios and validated their correct execution:
  • Running zebra and BGP in a single node: This scenario allows us to test that FRR with zebra and the chosen routing protocol can be loaded and executed in a node.
  • Running zebra and BGP in a network: We implemented some scenarios based on Kathará project labs [32], such as BGP Simple Peering and BGP Prefix Filtering, which permit us to test BGP update propagation and filtering among peers, and BGP Multi-homed Stub, which is a more complex scenario.
  • Running other routing protocols: Although our focus is on BGP, all the FRR daemons should run on ns-3 DCE. Therefore, we evaluate running OSPF in the same networks from the previous scenario. It is worth mentioning that no particular change was necessary in the OSPF case, and consequently, we expect that other protocols will also work correctly without the need to make further modifications to the implementation; this is reasonable since, by the architecture of FRR, most of the complexity is contained in the zebra daemon.
After the correct execution of these scenarios, and having verified that the routing tables are correctly updated, we can conclude that zebra and BGP are running correctly in ns-3 DCE, validating our extension.

4.2. Comparison against Sibyl Framework

Given that our focus is on evaluating routing protocols over Clos networks, we decided to validate our implementation by running several experiments over the same network topologies used by the Sibyl framework [9]. Using these same scenarios in ns-3 DCE gave us the opportunity to compare the results, since the Sibyl framework defines various fat-tree-based scenarios.

4.2.1. Experimental Setup

The experiments consist of recording the number of Protocol Data Units (PDUs) exchanged among nodes until convergence of the routing protocol. In our case of study (the BGP protocol), this is equivalent to the number of BGP UPDATE messages. It is worth noting that each experiment considers the propagation of a single network prefix per leaf node in the topology, without loss of generality in the results.
When a simulation starts and the BGP protocol begins to execute, the nodes start exchanging UPDATE messages, until convergence is reached and the UPDATE messages cease to be exchanged. Convergence occurs when all topology information has been distributed (i.e., multi-path connectivity has been reached for every network prefix). Any change in the topology, or in the routing table of a node, generates a new exchange of UPDATE messages until convergence is reached again.
A scenario comprises a certain fat-tree topology (determined by the parameters k_leaf, k_top and redundancy) and five different situations (test cases) that are described below:
  • Bootstrap: The objective is to study the standard behavior of the protocol in the topology when it is started, without any failure.
  • Node Failure: This test case is used both to verify that BGP converges after a switch failure, and also to count the number of PDUs that the protocol exchanged for that purpose. The failure can be introduced in any type of switch in the topology, that is, Leaf, Spine or ToF. It is done by shutting down the BGP daemon on the given switch.
  • Node Recovery: In this test case, the objective is to count the number of PDUs exchanged by the switches after one of them fails and is replaced by a new one. Like the previous case, this case can be run on a Leaf, Spine, or ToF. We implement this case by bringing up the topology without running BGP on the node in question, waiting for it to converge, and then starting BGP. This is equivalent to crashing the node and then starting it again.
  • Link Failure: This case also has two goals. On the one hand, to verify BGP convergence after a link failure, and on the other hand, to count the number of PDUs for this purpose. The test can be run for both the Leaf–Spine link case and the Spine–ToF link case, simply by pulling down a given interface.
  • Link Recovery: This case counts the number of PDUs after a failed link is replaced. That is, the simulation is started and the protocol is expected to converge. Link failure is then caused and the protocol is again expected to converge. Finally, the link is recovered and the new convergence is expected. The number of PDUs that are taken into account are those exchanged in this last phase.
Each scenario is named using the following criterion: x_y_z_case-level, where:
  • x is the k_leaf parameter.
  • y is the k_top parameter.
  • z is the redundancy parameter.
  • case represents the test case, which can be link-failure, link-recovery, node-failure, or node-recovery.
  • level depends on the case:
    - If the case is link-failure or link-recovery, level can be leaf-spine or spine-tof, referencing the level where link failure or recovery occurred.
    - If the case is node-failure or node-recovery, level can be leaf, spine or tof, referencing the level where the failure or recovery of the node occurred.
The different scenarios were configured using the same values for the k_leaf and k_top parameters so as to have homogeneous switches at the different levels of the fat-trees. On the other hand, for a given value of k_leaf and k_top, we vary the redundancy parameter, always considering that it divides the k_top value. Moreover, during the different executions of the test cases, we vary the level where the failure is produced so as to cover all the possibilities.

4.2.2. Execution Environment

All the experiments presented were executed on a server machine running Ubuntu 16.04 with 30 AMD Opteron 63xx-class CPUs and 244 GB of RAM.
We configured the simulation duration so as to allow convergence while keeping it as short as possible. To this end, we studied several simulations to find the best values for each scenario. The final configurations for the simulation duration (in simulated time) for each scenario are:
  • Bootstrap: 10 s.
  • Node-failure and link-failure: 20 s. This time allows for the bootstrap to finish and converge, produce the failure in the node or link and then wait again for convergence.
  • Node-recovery and link-recovery: 30 s. In this case, after the failure and the convergence, the node or link is recovered and we have to wait again for convergence.
The simulations are configured to generate traffic capture files (.pcap) for every interface of each simulated node. These files are then processed to count the number of BGP UPDATE messages exchanged.

4.2.3. Results

The results of all the experiment executions are shown in Appendix A. As can be seen from Table A1, the number of PDUs (BGP UPDATEs) obtained with our simulations in ns-3 DCE exactly matches the number obtained with the Sibyl emulations for most of the scenarios. This exact match between the results in ns-3 DCE and the emulation platform strongly validates the accuracy of our simulation platform. In particular, this shows that with the proposed platform, we can execute the exact same BGP algorithm that runs in the Sibyl emulation approach.
Regarding the scenarios where there are differences, we should note that there are some Sibyl scenarios that present more than one result. This is due to the fact that the emulations are not deterministic and depend, for example, on the host machine resource usage. For the cases where we have more than one result from Sibyl, the results obtained in our simulations are between these values or very close to them. These differences also demonstrate one of the main advantages of simulation over emulation, given that in the simulation the results are deterministic and reproducible. The exact same result can be obtained independently of the underlying hardware or software where it is run.
In Table 1, we select some specific results. In particular, we show two cases with a significant difference between our experiments and the Sibyl framework results. If we consider the pair of scenarios highlighted in blue in Table 1, the result in scenario 10_10_1_node-failure-spine for Sibyl is roughly half the result obtained in ns-3 DCE; we argue that this is an outlier in Sibyl, as we further show in Section 4.3.1. Regarding the 12_12_1_node-failure-leaf scenario, note that the result for Sibyl is smaller, but of the same order as that of ns-3 DCE; here, we argue that the path-vector nature of BGP and its well-known characteristic of "path hunting" is responsible for this difference, as we further explain in Section 4.3.2.
Note that we can also compare the results of the simulations with and without the zebra daemon running. As explained in Section 5, we executed the experiments disabling zebra in order to reduce the resource consumption of the simulation. In most of the scenarios considered, this change does not affect the number of BGP updates exchanged in the experiment. Nevertheless, in some cases there is a small difference (actually, in 8 out of 250 experiments, 3.2%). Most of the misalignments in the results are experienced in node-failure-leaf scenarios, due to the very reason we mentioned above: the path-vector nature of BGP may cause extra UPDATE messages to be exchanged, as explained in Section 4.3.2.
In the following section, we also compare the obtained results against a theoretical model of the behavior of BGP.

4.3. Theoretical Analysis of BGP Behavior over Selected Scenarios

In this section, we will analyze the BGP behavior over the scenarios presented before, taking advantage of the regularity of multi-plane fat-trees with $R = 1$, which can be described by a single parameter $k$ [33]. In effect, in a fat-tree topology of $k$ PoDs, there are $k$ switches (each with $k$ ports) in each PoD, arranged in two levels (Leaves and Spines) of $k/2$ switches each. Each Leaf is connected to the $k/2$ Spines and vice versa. There are $(k/2)^2$ Core switches, each of which connects to the $k$ PoDs.
The aim of this theoretical analysis is to find an expression, as a function of $k$, that describes the number of packets exchanged in two scenarios: the failure of a leaf and the failure of a spine. These scenarios were intentionally selected after the differences in results shown in Table 1.
Remember that the experiments propagate one prefix per leaf, i.e., in a fat-tree of $k$ PoDs there are a total of $k^2/2$ prefixes or, equivalently, leaf nodes.
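For reference, the description above gives the following counts as a function of $k$; for the $K_{LEAF} = K_{TOP} = K$, $R = 1$ scenarios of Section 4.2 this corresponds to $k = 2K$, which is consistent with the node counts in Appendix A (e.g., $k = 16$ gives the 320 nodes of the 8_8_1 scenarios, and $k = 20$ the 500 nodes of the 10_10_1 scenarios):

\[ \underbrace{k \cdot \tfrac{k}{2}}_{\text{Leaves}} + \underbrace{k \cdot \tfrac{k}{2}}_{\text{Spines}} + \underbrace{\left(\tfrac{k}{2}\right)^{2}}_{\text{ToFs}} = \tfrac{5k^{2}}{4} \ \text{nodes}, \qquad \tfrac{k^{2}}{2} \ \text{prefixes}. \]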

4.3.1. Case Spine Node Failure

To analyze the behavior of BGP when a spine node fails, we divided the problem into three sub-problems: (1) the PoD of the failure, (2) the PoDs with no failures and (3) the spine–core links. Since the goal is to find an expression that models the total number of packets exchanged after the failure, dividing the problem into sub-problems is equivalent to splitting the expression into a sum of terms.
  • First note that each leaf needs connectivity information for $k^2/2$ prefixes; while $k^2/2 - 1$ prefixes are "foreign", the remaining one is directly connected. When the failure occurs, in the PoD of the failure there are $k/2$ leaf nodes aware of the failure. The BGP process of each leaf node will recalculate the routes and will notice that, for every known prefix, one possible next-hop is missing. Consequently, it will send, for each known prefix, a BGP update with the next-hop attribute updated. Thus, we have $k/2$ leaves sending $k^2/2 - 1$ packets (the total number of prefixes in the fabric which have lost a next-hop) through their $k/2 - 1$ links. Consequently, the total amount of BGP packets in the PoD of the failure equals $k/2 \times (k/2 - 1) \times (k^2/2 - 1)$.
  • The rest of the PoDs learn about the failure through the spine connected to the same plane as the faulty spine, through the corresponding core switch. This spine sends a BGP withdraw containing all the prefixes no longer reachable through the corresponding core switch (all the prefixes inside the PoD with the failure) to all its neighbors ($k/2$ leaves in this PoD). After that, each leaf recalculates its routes and notices that the prefixes received in the withdraw are no longer reachable through one of its next hops. Consequently, it will send, for each of these $k/2$ prefixes, a BGP update to all its neighbors ($k/2$ spines). Consequently, the total amount of BGP packets in each PoD without a failure equals $(k/2) + (k/2) \times (k/2) \times (k/2)$.
  • The faulty spine was connected to $k/2$ core switches. Because the topology considered is multi-plane with $R = 1$, these core nodes have exactly one link with each PoD. After the failure, each core connected to the faulty spine no longer has reachability to the prefixes of the corresponding PoD, and it must send a BGP withdraw for the prefixes of such PoD to all its neighbors ($k - 1$ spines). After that, when a spine connected to these cores receives the withdraw from all of them, it will notice that it no longer has reachability to the prefixes of the given PoD, and it will send the corresponding withdraw upstream to the cores; therefore, a total of two BGP packets traverse every core–spine link. In effect, we have $k/2$ cores, which send and receive one BGP withdraw through all their "live" interfaces ($k - 1$). Consequently, the total amount of BGP packets on the core–spine links equals $2 \times (k - 1) \times k/2 = (k - 1) \times k$ packets.
Put together, and multiplying the expression for the PoDs without a failure by the number of such PoDs ($k - 1$), we arrive at a total number of BGP packets of $k/2 \times (k/2 - 1) \times (k^2/2 - 1) + (k - 1) \times ((k/2) + (k/2) \times (k/2) \times (k/2)) + (k - 1) \times k$. Simplifying, we obtain the polynomial
\[ \frac{k^{4}}{4} - \frac{3k^{3}}{8} + \frac{5k^{2}}{4} - k \]
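As a concrete check against the experimental data, evaluating this polynomial for $k = 20$ (the 10_10_1 scenarios) gives

\[ \frac{20^{4}}{4} - \frac{3 \cdot 20^{3}}{8} + \frac{5 \cdot 20^{2}}{4} - 20 = 40000 - 3000 + 500 - 20 = 37480, \]

which is exactly the number of UPDATE messages obtained with ns-3 DCE for 10_10_1_node-failure-spine in Table A1.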
Figure 4 compares packet growth as a function of the number of nodes for the results of ns-3, Sibyl and the polynomial expression. Notice that the results of ns-3 fit the polynomial exactly. On the other hand, the Sibyl results show a deviation from the polynomial, and there are far fewer results for this particular case. In fact, the Sibyl result for $k = 20$ is roughly half of the value expected from the theoretical expression. If we look closely at the presented analysis (step 1), the Sibyl packet count for this case barely exceeds the number of packets needed to update the routes within the PoD of the failure. Therefore, our assumption that the difference for the scenario 10_10_1_node-failure-spine shown in Table 1 is an outlier is confirmed.

4.3.2. Case Leaf Node Failure

To analyze this case, let us consider what happens at the routing level when a leaf fails; this is analogous to a prefix that is no longer reachable, and to $k/2$ links going down in the PoD of the failure. When a leaf is no longer reachable, its neighbors (the $k/2$ spines of the given PoD) will notify the fact with a BGP withdraw that will spread throughout the fabric. In terms of packet count, this implies that each node in the fabric will send a BGP withdraw out of all of its interfaces. Similarly, two packets must be observed on each fabric link.
The total number of links of a multi-plane fat-tree with $R = 1$ is $k^3/2$ (before the leaf failure). As mentioned, $k/2$ links go down after the failure, and consequently, if each link carries two packets, the total amount of BGP packets for a leaf failure is $2 \times (k^3/2 - k/2)$ or, equivalently,
\[ k^{3} - k \qquad \text{(1)} \]
Note that this is a lower bound due to the following. When a Leaf node receives from a Spine the aforementioned withdraw, its routing table still holds the reachability information for the given prefix via the rest of the spines, and therefore the leaf "thinks" it can still reach the prefix. This race condition can cause the leaves to send a BGP UPDATE announcing the (now nonexistent) routes to their corresponding spines in the PoD. Every announced route contains in its AS-PATH the ASN of the spine that receives it, and therefore it is discarded. Thanks to the specific numbering of ASNs in the fat-tree, the nonexistent route is not propagated further, and the "path hunting" is stopped early. A simple way to find an expression that models this behavior is to add one more packet for every spine–leaf link to expression (1). To determine the number of links, first we count the number of links inside a normal PoD, i.e., $k/2 \times k/2$, and multiply this by the number of normal PoDs ($k - 1$). Then, the number of links in the PoD of the failure is $k/2 \times (k/2 - 1)$. Consequently, the total amount of BGP packets that models this behavior is
\[ \frac{5k^{3}}{4} - \frac{3k}{2} \qquad \text{(2)} \]
Figure 5 compares packet growth as a function of $k$ for the results of ns-3, Sibyl and the polynomial expressions for the Leaf Node failure scenario. Note that while the Sibyl results follow polynomial (1), the ns-3 DCE results follow either (1) or (2), alternately. Regardless, the results are correct, since both behaviors may occur due to the nature of BGP and the timing of control plane packets.
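As a similar numeric check, for $k = 20$ (the 10_10_1 scenarios) expression (1) gives

\[ 20^{3} - 20 = 7980, \]

which matches the PDU count reported for 10_10_1_node-failure-leaf by both ns-3 DCE and Sibyl in Table A1.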

5. Performance Analysis

Given the possibly massive size of a network (in number of nodes and links) in MSDCs, it is important to study how the simulator performs and scales with different network sizes.
Specifically, we decided to evaluate the scalability in two aspects: memory usage and simulation runtime for different network sizes.
Moreover, while developing and testing our implementation, we noticed that the performance regarding memory usage and simulation time was a limiting factor to obtain results with big networks. Therefore, we decided to incorporate two features to improve the performance: disabling the zebra daemon and disabling IPv6 support.
Regarding the first one, our implementation allows us to run a simulation without running the zebra daemon. In fact, a routing protocol in FRR does not need zebra to execute; therefore, it can be tested correctly without using zebra. On the other hand, when disabling zebra, we lose the data plane functionality, and the kernel routing tables are not updated. This means that the nodes will exchange routing information following the routing protocol algorithm, but the routes will not be installed. As a consequence, there will be no connectivity between nodes.
Nevertheless, as our objective is to study the behavior of the control plane only, in particular the convergence of BGP under different situations, the previous drawbacks do not affect our results.
So as to be able to run a routing protocol (and BGP in particular) without zebra, some modifications and extensions are needed. For example, without zebra it is not possible to discover neighbor interfaces. For the case of BGP, this implies that in the configuration it is not possible to refer to the BGP peers in a generic form using the interface; instead, we must use the IP address. In our code, we provide all the configurations and modifications of the FrrHelper to run simulations with the zebra daemon disabled.
Regarding IPv6, we noticed a high load of IPv6 control packets, such as ICMPv6 Router Solicitation and ICMPv6 Router Advertisement, in our simulations. This is due to the use of the Linux kernel in DCE, which enables IPv6 on all interfaces by default. Therefore, our solution allows us to easily disable the use of IPv6 on all interfaces by executing a single command.
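One possible way to achieve this from the simulation script, when the Linux stack is used in advanced mode, is through the sysctl interface exposed by DCE (a sketch following the ns-3 DCE examples; the actual command wired into our helper may differ):

```cpp
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"

using namespace ns3;

// Sketch: turn off IPv6 on all interfaces of the simulated nodes so that
// ICMPv6 Router Solicitation/Advertisement traffic does not pollute the
// captures nor consume simulation resources.
void DisableIpv6 (NodeContainer &nodes)
{
  LinuxStackHelper stack;
  // Standard Linux sysctl knobs; the SysctlSet helper follows the ns-3 DCE examples.
  stack.SysctlSet (nodes, ".net.ipv6.conf.all.disable_ipv6", "1");
  stack.SysctlSet (nodes, ".net.ipv6.conf.default.disable_ipv6", "1");
}
```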
Additionally, in the results we also show the execution time of each scenario. It can be seen that the experiments without zebra run faster, taking approximately half the time to execute compared with the experiments with zebra. Moreover, when running without zebra, the simulations use much less RAM, which allows us to experiment with bigger scenarios. Without zebra, we were able to run a scenario with 1125 nodes (15_15_1_*), while when using zebra, the biggest scenario was one with 320 nodes (8_8_1_*).
In summary, if the objective of a given use case is only to test the control plane of a routing algorithm, it is recommended to disable the zebra daemon, as it has been shown to improve the simulation performance in execution time and consumed memory.
In order to illustrate the performance improvement that the above changes produce, the example 6_6_6_link-failure-spine-tof was run. In it, the changes were applied independently, and then all together. Table 2 shows the results of execution time, memory used and improvement percentage for each introduced change.
This improved version allows us to scale the experiments and perform the same scenarios performed with Sibyl in the reference work [9]. Despite both experimentation environments managing the internal time differently, we can still compare the execution times and the resource consumption of each environment. These comparisons, for the scenarios implementing the densest fat-tree topologies, are shown in Table 3. The resources available for the ns-3 environment are those presented in Section 4.2.2, i.e., 30 CPUs and 244 GB of RAM. On the other hand, the scenarios 2_2_1 to 8_8_1 from Sibyl were executed on a cluster of 22 VMs, each with 2-core vCPUs and 8 GB of vRAM, while the scenarios 10_10_1 to 16_16_1 were executed on a cluster composed of 160 VMs, each with 4-core vCPUs and 8 GB of vRAM.
It is worth noting that our environment uses a single CPU; this is due to the nature of the ns-3 simulator with DCE. Additionally, beyond the information provided in the table, we observed RAM saturation in the largest scenario performed. This can explain the gap in the execution time of the largest of the simulated scenarios in ns-3.

6. Discussion and Conclusions

As a general result, we can conclude that the FRR port to ns-3 DCE is functionally correct and promisingly scalable. In fact, running control plane-only experiments (i.e., without the zebra daemon) on a single server, we were able to achieve results competitive with Sibyl running on a cluster of computing nodes.
In terms of execution times, the results were very satisfactory. ns-3 DCE shows a performance close to the Sibyl-emulated environment for scenarios of up to 320 nodes, which were run on infrastructures with similar resources. On the other hand, for scenarios with more than 320 nodes, ns-3 DCE shows an average execution time 2.4 times higher than Sibyl, but with a cumulative vRAM consumption 5.2 times lower.
The port validation included a theoretical analysis of the behavior of BGP for the data center on multi-plane fat-tree topologies. This analysis shows how to find a formal expression that describes the growth of the control packets injected into the network after a failure scenario. Although this analysis validated the results obtained, it is worth noting that the implementations of the routing protocols may be subject to race conditions or limited by the available resources, which slightly vary the behavior depending on implementation optimizations.
In this paper, we focused on BGP in the data center, and briefly commented on the execution of other routing daemons of the FRR suite. In this regard, another straightforward line of future work is to undertake a thorough testing of other routing daemons, in principle within the MSDC scope. To this end, Openfabric (IS-IS with flooding reduction) is already implemented and ready to run.
Overall, to the best of our knowledge, in this work we provide a functionally correct and scalable FRR port to ns-3 DCE, ready to use by researchers and practitioners alike. For the time being, we only focused on the control plane of Fat-Tree network routing protocols, reaching competitive results with less resource consumption. A foreseeable line of research shall include the forwarding plane, enabling research on traffic behavior in MSDC and/or other topologies.

Author Contributions

Conceptualization, L.A., E.G., M.R., S.A. and F.V.; methodology, L.A., E.G. and M.R.; software, S.A. and F.V.; validation, S.A., F.V. and L.A.; formal analysis, L.A., E.G. and M.R.; investigation, L.A., E.G. and M.R.; writing—original draft preparation, L.A., E.G., M.R., S.A. and F.V.; writing—review and editing, L.A., E.G. and M.R.; visualization, L.A., S.A. and F.V.; supervision, E.G. and M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Uruguayan National Research and Innovation Agency (ANII) under Grant No. POS_NAC_M_2020_1_163847.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We would like to thank our colleagues from the Computer Networks research group at Università Roma Tre, Italy, for providing the execution times reported by the Sibyl framework in their environment.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison of execution time and number of PDUs exchanged between the proposed ns-3 simulation platform and Sibyl for all the scenarios.
Scenario | Number of Nodes | Execution Time ns-3 w/o zebra | PDUs ns-3 w/o zebra | Execution Time ns-3 w/ zebra | PDUs ns-3 w/ zebra | PDUs Sibyl
2_2_1_link-failure-leaf-spine | 20 | 1:07.19 | 44 | 1:18.73 | 44 | -
2_2_1_link-failure-spine-tof | 20 | 0:53.28 | 45 | 1:30.77 | 45 | -
2_2_1_link-recovery-leaf-spine | 20 | 0:54.39 | 69 | 1:20.07 | 69 | -
2_2_1_link-recovery-spine-tof | 20 | 1:04.59 | 72 | 1:40.31 | 72 | -
2_2_1_node-failure-leaf | 20 | 0:59.59 | 60 | 1:24.61 | 60 | 60
2_2_1_node-failure-spine | 20 | 1:01.74 | 56 | 1:40.42 | 56 | -
2_2_1_node-failure-tof | 20 | 1:02.40 | 72 | 1:21.33 | 72 | -
2_2_1_node-recovery-leaf | 20 | 0:52.62 | 96 | 1:03.42 | 96 | -
2_2_1_node-recovery-spine | 20 | 0:58.33 | 160 | 1:50.73 | 160 | -
2_2_1_node-recovery-tof | 20 | 0:55.88 | 168 | 1:19.57 | 168 | -
2_2_2_link-failure-leaf-spine | 10 | 0:37.19 | 16 | 0:46.51 | 16 | -
2_2_2_link-failure-spine-tof | 10 | 0:30.20 | 12 | 0:52.36 | 12 | -
2_2_2_link-recovery-leaf-spine | 10 | 0:33.69 | 29 | 0:32.79 | 29 | -
2_2_2_link-recovery-spine-tof | 10 | 0:32.03 | 26 | 0:32.48 | 24 | -
2_2_2_node-failure-leaf | 10 | 0:34.74 | 28 | 0:50.99 | 28 | 28
2_2_2_node-failure-spine | 10 | 0:32.22 | 18 | 0:36.68 | 18 | -
2_2_2_node-failure-tof | 10 | 0:34.83 | 24 | 0:50.47 | 24 | -
2_2_2_node-recovery-leaf | 10 | 0:32.05 | 48 | 0:48.94 | 48 | -
2_2_2_node-recovery-spine | 10 | 0:34.10 | 68 | 0:47.40 | 68 | -
2_2_2_node-recovery-tof | 10 | 0:35.70 | 72 | 0:32.06 | 72 | -
4_4_1_link-failure-leaf-spine | 80 | 4:15.53 | 312 | 5:23.27 | 312 | -
4_4_1_link-failure-spine-tof | 80 | 4:15.09 | 427 | 5:10.87 | 427 | -
4_4_1_link-recovery-leaf-spine | 80 | 4:37.03 | 409 | 5:49.97 | 409 | -
4_4_1_link-recovery-spine-tof | 80 | 4:23.28 | 542 | 5:33.42 | 542 | -
4_4_1_node-failure-leaf | 80 | 4:12.73 | 504 | 5:17.12 | 504 | 504
4_4_1_node-failure-spine | 80 | 4:28.11 | 1128 | 5:57.28 | 904 | -
4_4_1_node-failure-tof | 80 | 4:14.70 | 1568 | 5:39.91 | 1568 | -
4_4_1_node-recovery-leaf | 80 | 4:31.55 | 768 | 6:09.64 | 768 | -
4_4_1_node-recovery-spine | 80 | 4:13.09 | 1808 | 6:00.58 | 1808 | -
4_4_1_node-recovery-tof | 80 | 4:16.72 | 2320 | 6:11.65 | 2320 | -
4_4_2_link-failure-leaf-spine | 40 | 2:11.22 | 96 | 3:03.51 | 96 | -
4_4_2_link-failure-spine-tof | 40 | 2:07.43 | 112 | 3:15.35 | 112 | -
4_4_2_link-recovery-leaf-spine | 40 | 2:20.26 | 145 | 3:29.08 | 145 | -
4_4_2_link-recovery-spine-tof | 40 | 2:07.44 | 162 | 3:10.86 | 162 | -
4_4_2_node-failure-leaf | 40 | 2:10.24 | 248 | 3:08.23 | 248 | 248
4_4_2_node-failure-spine | 40 | 2:13.89 | 292 | 3:00.28 | 292 | -
4_4_2_node-failure-tof | 40 | 2:06.92 | 672 | 2:59.16 | 672 | -
4_4_2_node-recovery-leaf | 40 | 2:09.55 | 384 | 3:11.17 | 384 | -
4_4_2_node-recovery-spine | 40 | 2:11.83 | 640 | 3:11.03 | 640 | -
4_4_2_node-recovery-tof | 40 | 2:00.63 | 1040 | 3:22.35 | 1040 | -
4_4_4_link-failure-leaf-spine | 20 | 1:05.64 | 72 | 1:16.44 | 72 | -
4_4_4_link-failure-spine-tof | 20 | 1:03.96 | 56 | 1:31.83 | 56 | -
4_4_4_link-recovery-leaf-spine | 20 | 1:12.47 | 97 | 1:19.03 | 97 | -
4_4_4_link-recovery-spine-tof | 20 | 1:10.85 | 78 | 1:32.06 | 78 | -
4_4_4_node-failure-leaf | 20 | 0:58.07 | 120 | 1:44.26 | 120 | 120
4_4_4_node-failure-spine | 20 | 1:08.24 | 196 | 0:59.93 | 196 | -
4_4_4_node-failure-tof | 20 | 1:07.31 | 224 | 1:30.04 | 224 | -
4_4_4_node-recovery-leaf | 20 | 1:03.67 | 192 | 1:15.28 | 192 | -
4_4_4_node-recovery-spine | 20 | 1:09.48 | 384 | 1:49.51 | 384 | -
4_4_4_node-recovery-tof | 20 | 1:06.60 | 400 | 0:59.46 | 400 | -
6_6_1_link-failure-leaf-spine | 180 | 10:42.61 | 996 | 14:40.34 | 996 | 996
6_6_1_link-failure-spine-tof | 180 | 10:27.84 | 1529 | 14:17.19 | 1529 | 1529, 1577
6_6_1_link-recovery-leaf-spine | 180 | 11:11.31 | 1213 | 15:53.67 | 1213 | 1069, 1238
6_6_1_link-recovery-spine-tof | 180 | 10:55.97 | 1796 | 16:14.04 | 1796 | 1657, 1796
6_6_1_node-failure-leaf | 180 | 10:41.58 | 1716 | 13:52.94 | 1716 | 1716, 1734
6_6_1_node-failure-spine | 180 | 10:12.82 | 4704 | 15:13.56 | 4704 | 4704, 4726
6_6_1_node-failure-tof | 180 | 10:19.49 | 8712 | 14:31.77 | 8712 | 8712, 8808
6_6_1_node-recovery-leaf | 180 | 10:38.73 | 2592 | 14:44.55 | 2592 | 2592, 3444
6_6_1_node-recovery-spine | 180 | 10:23.92 | 7872 | 15:32.47 | 7872 | 8686, 9362
6_6_1_node-recovery-tof | 180 | 9:56.09 | 11,256 | 14:51.07 | 11,256 | 11,190, 11,316
6_6_2_link-failure-leaf-spine | 90 | 5:08.22 | 288 | 7:25.55 | 288 | 288, 358
6_6_2_link-failure-spine-tof | 90 | 5:03.01 | 396 | 7:19.56 | 396 | 396, 456
6_6_2_link-recovery-leaf-spine | 90 | 5:26.72 | 397 | 7:34.14 | 397 | 325, 397
6_6_2_link-recovery-spine-tof | 90 | 5:26.11 | 500 | 7:43.13 | 500 | 439, 506
6_6_2_node-failure-leaf | 90 | 5:16.17 | 852 | 7:12.16 | 852 | 852, 858
6_6_2_node-failure-spine | 90 | 5:09.56 | 1446 | 8:01.95 | 1446 | 1446
6_6_2_node-failure-tof | 90 | 5:05.01 | 3960 | 7:21.65 | 3960 | 3960
6_6_2_node-recovery-leaf | 90 | 4:49.43 | 1296 | 7:27.46 | 1296 | 1296, 1926
6_6_2_node-recovery-spine | 90 | 5:09.29 | 2580 | 7:21.40 | 2580 | 2916, 3247
6_6_2_node-recovery-tof | 90 | 5:13.78 | 5208 | 7:29.28 | 5208 | 5208, 5310
6_6_3_link-failure-leaf-spine | 60 | 3:13.01 | 228 | 5:07.65 | 228 | 228
6_6_3_link-failure-spine-tof | 60 | 3:32.99 | 264 | 4:34.69 | 264 | 264
6_6_3_link-recovery-leaf-spine | 60 | 3:31.02 | 301 | 5:07.55 | 301 | 253, 301
6_6_3_link-recovery-spine-tof | 60 | 3:46.15 | 332 | 5:25.22 | 332 | 295, 338
6_6_3_node-failure-leaf | 60 | 3:28.60 | 564 | 5:30.40 | 564 | 562, 564
6_6_3_node-failure-spine | 60 | 3:27.40 | 1086 | 4:29.95 | 1086 | 1086
6_6_3_node-failure-tof | 60 | 3:32.06 | 2376 | 4:37.03 | 2376 | 2376, 2942
6_6_3_node-recovery-leaf | 60 | 3:26.27 | 864 | 4:54.17 | 864 | 864, 1002
6_6_3_node-recovery-spine | 60 | 3:24.86 | 1860 | 5:41.32 | 1860 | 1836, 2413
6_6_3_node-recovery-tof | 60 | 3:25.22 | 3192 | 5:08.39 | 3192 | 3354, 3450
6_6_6_link-failure-leaf-spine | 30 | 1:46.37 | 168 | 2:14.63 | 168 | 168, 541
6_6_6_link-failure-spine-tof | 30 | 1:44.24 | 132 | 2:37.48 | 132 | 132
6_6_6_link-recovery-leaf-spine | 30 | 1:48.37 | 205 | 2:29.47 | 205 | 181, 205
6_6_6_link-recovery-spine-tof | 30 | 1:47.56 | 164 | 2:32.80 | 164 | 151, 170
6_6_6_node-failure-leaf | 30 | 1:43.67 | 276 | 2:26.95 | 276 | 276
6_6_6_node-failure-spine | 30 | 1:47.86 | 726 | 2:39.75 | 726 | 726
6_6_6_node-failure-tof | 30 | 1:44.04 | 792 | 2:29.69 | 792 | 792
6_6_6_node-recovery-leaf | 30 | 1:42.01 | 432 | 3:08.22 | 432 | 432
6_6_6_node-recovery-spine | 30 | 1:46.04 | 1140 | 2:28.39 | 1140 | 1104, 1182
6_6_6_node-recovery-tof | 30 | 1:45.79 | 1176 | 2:28.30 | 1176 | 1176
8_8_1_link-failure-leaf-spine | 320 | 21:56.54 | 2288 | 29:13.19 | 2288 | 2288
8_8_1_link-failure-spine-tof | 320 | 20:06.32 | 3735 | 29:26.26 | 3735 | 3743
8_8_1_link-recovery-leaf-spine | 320 | 22:56.82 | 2673 | 35:14.27 | 2673 | 2417
8_8_1_link-recovery-spine-tof | 320 | 22:28.77 | 4218 | 34:43.00 | 4218 | 4218
8_8_1_node-failure-leaf | 320 | 21:16.01 | 4080 | 30:44.58 | 4080 | 4076, 4080, 4088
8_8_1_node-failure-spine | 320 | 21:12.30 | 15,152 | 30:26.00 | 15,152 | 15,152, 15,197
8_8_1_node-failure-tof | 320 | 20:56.05 | 28,800 | 28:37.21 | 28,800 | 28,958, 29,058
8_8_1_node-recovery-leaf | 320 | 21:26.21 | 6144 | 33:47.08 | 6144 | 8172, 10,228
8_8_1_node-recovery-spine | 320 | 21:09.10 | 22,816 | 31:55.63 | 22,816 | 22,848
8_8_1_node-recovery-tof | 320 | 21:26.99 | 34,848 | 30:46.45 | 34,848 | 34,622, 35,076
8_8_2_link-failure-leaf-spine | 160 | 9:51.76 | 640 | 15:02.54 | 640 | 640
8_8_2_link-failure-spine-tof | 160 | 9:52.78 | 960 | 14:14.68 | 960 | 960
8_8_2_link-recovery-leaf-spine | 160 | 10:59.49 | 833 | 16:07.96 | 833 | 705, 855
8_8_2_link-recovery-spine-tof | 160 | 10:33.65 | 1146 | 14:20.73 | 1146 | 1033, 1154
8_8_2_node-failure-leaf | 160 | 10:28.93 | 2032 | 14:30.95 | 2032 | 2028, 2032
8_8_2_node-failure-spine | 160 | 9:56.79 | 4488 | 14:38.23 | 4488 | 4488, 4510
8_8_2_node-failure-tof | 160 | 10:18.38 | 13,440 | 14:32.53 | 13,440 | 13,440, 13,680
8_8_2_node-recovery-leaf | 160 | 10:15.41 | 3072 | 15:36.29 | 3072 | 4584
8_8_2_node-recovery-spine | 160 | 10:05.70 | 7136 | 14:24.24 | 7136 | 7104, 10,439
8_8_2_node-recovery-tof | 160 | 9:38.98 | 16,416 | 13:43.01 | 16,416 | 16,599, 16,832
8_8_4_link-failure-leaf-spine | 80 | 4:53.83 | 416 | 7:01.98 | 416 | 416, 447
8_8_4_link-failure-spine-tof | 80 | 4:51.56 | 480 | 7:02.12 | 480 | 480
8_8_4_link-recovery-leaf-spine | 80 | 5:18.90 | 513 | 7:16.01 | 513 | 449, 513
8_8_4_link-recovery-spine-tof | 80 | 5:14.52 | 570 | 7:09.06 | 570 | 521, 578
8_8_4_node-failure-leaf | 80 | 5:06.75 | 1008 | 6:52.72 | 1008 | 1008, 1024
8_8_4_node-failure-spine | 80 | 5:02.55 | 2696 | 6:27.52 | 2696 | 2696
8_8_4_node-failure-tof | 80 | 4:59.97 | 5760 | 6:39.84 | 5760 | 5760
8_8_4_node-recovery-leaf | 80 | 4:57.09 | 1536 | 6:42.37 | 1536 | 1784, 2278
8_8_4_node-recovery-spine | 80 | 4:52.50 | 4064 | 7:18.56 | 4064 | 4400, 4802
8_8_4_node-recovery-tof | 80 | 5:04.17 | 7200 | 6:54.38 | 7200 | 7200
8_8_8_link-failure-leaf-spine | 40 | 2:31.42 | 304 | 3:19.37 | 304 | 304
8_8_8_link-failure-spine-tof | 40 | 2:02.31 | 240 | 3:03.47 | 240 | 240
8_8_8_link-recovery-leaf-spine | 40 | 2:04.05 | 353 | 3:59.65 | 353 | 321, 353
8_8_8_link-recovery-spine-tof | 40 | 1:54.87 | 282 | 3:36.25 | 282 | 265, 290
8_8_8_node-failure-leaf | 40 | 2:05.69 | 496 | 3:45.87 | 496 | 496
8_8_8_node-failure-spine | 40 | 1:56.52 | 1800 | 3:30.05 | 1800 | 1800
8_8_8_node-failure-tof | 40 | 2:16.27 | 1920 | 3:16.83 | 1920 | 1920
8_8_8_node-recovery-leaf | 40 | 2:07.44 | 768 | 3:48.72 | 768 | 768, 888
8_8_8_node-recovery-spine | 40 | 2:21.66 | 2528 | 3:23.44 | 2528 | 2613, 2856
8_8_8_node-recovery-tof | 40 | 2:09.00 | 2592 | 3:43.65 | 2592 | 2592, 2960
10_10_1_link-failure-leaf-spine | 500 | 43:21.57 | 4380 | 53:33.93 | 4380 | -
10_10_1_link-failure-spine-tof | 500 | 35:48.74 | 7429 | 52:05.10 | 7429 | -
10_10_1_link-recovery-leaf-spine | 500 | 40:25.85 | 4981 | 1:03:15 | 4981 | -
10_10_1_link-recovery-spine-tof | 500 | 38:10.10 | 8192 | 59:07.93 | 8192 | -
10_10_1_node-failure-leaf | 500 | 36:31.21 | 7980 | 54:41.84 | 7980 | 7980
10_10_1_node-failure-spine | 500 | 37:16.85 | 37,480 | 53:23.76 | 37,480 | 18,390
10_10_1_node-failure-tof | 500 | 35:52.55 | 72,200 | 55:32.04 | 72,200 | 36,072
10_10_1_node-recovery-leaf | 500 | 40:15.09 | 12,000 | 55:27.11 | 12,000 | -
10_10_1_node-recovery-spine | 500 | 36:11.80 | 52,640 | 58:46.43 | 52,640 | -
10_10_1_node-recovery-tof | 500 | 37:35.48 | 84,040 | 54:39.94 | 84,040 | -
10_10_2_link-failure-leaf-spine | 250 | 16:59.57 | 1200 | 23:04.10 | 1200 | 1200
10_10_2_link-failure-spine-tof | 250 | 17:04.08 | 1900 | 26:00.38 | 1900 | 1900
10_10_2_link-recovery-leaf-spine | 250 | 18:30.72 | 1501 | 26:21.99 | 1501 | 1301, 1501
10_10_2_link-recovery-spine-tof | 250 | 18:25.44 | 2192 | 27:09.12 | 2192 | 2011, 2202
10_10_2_node-failure-leaf | 250 | 17:31.08 | 3980 | 24:08.59 | 3980 | 3980, 3996
10_10_2_node-failure-spine | 250 | 16:21.09 | 10,810 | 23:40.49 | 10,810 | 10,810, 10,833
10_10_2_node-failure-tof | 250 | 16:44.71 | 34,200 | 23:44.88 | 34,200 | 34,527, 34,788
10_10_2_node-recovery-leaf | 250 | 17:46.35 | 6000 | 25:01.38 | 6000 | 6000, 7976
10_10_2_node-recovery-spine | 250 | 16:41.62 | 15,940 | 24:22.66 | 15,940 | 15,661, 19,401
10_10_2_node-recovery-tof | 250 | 17:13.10 | 40,040 | 24:49.72 | 40,040 | 41,188, 41,667
10_10_5_link-failure-leaf-spine | 100 | 6:22.98 | 660 | 8:42.60 | 660 | 660
10_10_5_link-failure-spine-tof | 100 | 6:14.27 | 760 | 8:32.91 | 760 | 760, 820
10_10_5_link-recovery-leaf-spine | 100 | 6:52.48 | 781 | 9:46.89 | 781 | 701, 781
10_10_5_link-recovery-spine-tof | 100 | 6:52.53 | 872 | 10:36.91 | 872 | 811, 882
10_10_5_node-failure-leaf | 100 | 6:43.43 | 1580 | 8:57.90 | 1580 | 1580
10_10_5_node-failure-spine | 100 | 6:41.49 | 5410 | 8:41.55 | 5410 | 5410
10_10_5_node-failure-tof | 100 | 6:34.09 | 11,400 | 8:57.93 | 11,400 | 11,544, 11,550
10_10_5_node-recovery-leaf | 100 | 6:31.91 | 2400 | 9:57.73 | 2400 | 3180, 2790
10_10_5_node-recovery-spine | 100 | 6:39.00 | 7540 | 9:25.65 | 7540 | 7510, 7490
10_10_5_node-recovery-tof | 100 | 6:38.31 | 13,640 | 9:25.54 | 13,640 | 14,798, 15,178
10_10_10_link-failure-leaf-spine | 50 | 3:09.21 | 480 | 4:35.08 | 480 | 480
10_10_10_link-failure-spine-tof | 50 | 3:14.54 | 380 | 4:21.50 | 380 | 380
10_10_10_link-recovery-leaf-spine | 50 | 3:31.83 | 541 | 5:10.14 | 541 | 501, 541
10_10_10_link-recovery-spine-tof503:15.084324:37.93432411, 442
10_10_10_node-failure-leaf503:13.247804:22.19780780
10_10_10_node-failure-spine503:11.3636104:14.4136103610
10_10_10_node-failure-tof503:12.8638004:25.5738003800
10_10_10_node-recovery-leaf503:11.3112004:31.7312001580, 1582
10_10_10_node-recovery-spine503:02.2347405:03.1547404670, 4710
10_10_10_node-recovery-tof503:08.4848404:46.5548404840, 5540
12_12_1_link-failure-leaf-spine7201:37:587464not enough vRAM-
12_12_1_link-failure-spine-tof7201:19:0312,995not enough vRAM-
12_12_1_link-recovery-leaf-spine7201:28:558329not enough vRAM-
12_12_1_link-recovery-spine-tof7201:24:2214,102not enough vRAM-
12_12_1_node-failure-leaf7201:22:4017,244not enough vRAM13,800
12_12_1_node-failure-spine7201:19:2378,456not enough vRAM-
12_12_1_node-failure-tof7201:19:27152,352not enough vRAM-
12_12_1_node-recovery-leaf7201:20:3420,736not enough vRAM-
12_12_1_node-recovery-spine7201:20:01104,880not enough vRAM-
12_12_1_node-recovery-tof7201:21:15172,848not enough vRAM-
12_12_3_link-failure-leaf-spine24019:05.59148825:35.7614881488
12_12_3_link-failure-spine-tof24019:54.03220825:56.4722082208
12_12_3_link-recovery-leaf-spine24021:26.32177729:01.0117771585, 1777
12_12_3_link-recovery-spine-tof24020:56.25248628:28.6324862317, 2498
12_12_3_node-failure-leaf24019:53.39572428:19.2545844580, 4584, 4628
12_12_3_node-failure-spine24019:28.0915,85226:19.4315,85215,929, 16,002
12_12_3_node-failure-tof24019:12.6446,36826:11.43463,6847,227, 57,292
12_12_3_node-recovery-leaf24019:48.84691226:56.8869128052, 11,514
12_12_3_node-recovery-spine24019:21.3521,79228:02.6821,79221,797, 23,710, 29,768
12_12_3_node-recovery-tof24019:31.3353,04026:09.6653,04053,926, 54,024
12_12_4_link-failure-leaf-spine18014:05.84122418:58.7512241224
12_12_4_link-failure-spine-tof18013:44.66165618:52.2916561656, 1680
12_12_4_link-recovery-leaf-spine18014:55.32144121:11.2814411297, 1441
12_12_4_link-recovery-spine-tof18015:11.86186221:08.2818621741, 1874
12_12_4_node-failure-leaf18014:28.07385219:42.5734323430, 3432
12_12_4_node-failure-spine18014:28.0812,68419:42.6812,68412,763, 12,775
12_12_4_node-failure-tof18013:56.6433,12019:12.5033,12033,930, 34,800
12_12_4_node-recovery-leaf18014:16.45518419:31.8651845184
12_12_4_node-recovery-spine18013:50.6717,18420:41.0417,184-
12_12_4_node-recovery-tof18014:33.3738,06419:19.0338,06438,979, 39,312
12_12_6_link-failure-leaf-spine1209:39.0296012:19.99960960, 983
12_12_6_link-failure-spine-tof1208:50.37110412:47.9311041104
12_12_6_link-recovery-leaf-spine12010:03.56110514:16.4911051009, 1105
12_12_6_link-recovery-spine-tof12010:04.93123813:47.7512381165, 1250
12_12_6_node-failure-leaf1209:44.62270012:41.8022802278, 2280, 2292
12_12_6_node-failure-spine1209:32.30951612:41.8695169516, 9589
12_12_6_node-failure-tof1209:35.4719,87212:30.1219,87220,580, 20,712
12_12_6_node-recovery-leaf1209:15.39345612:37.1234563456, 5146
12_12_6_node-recovery-spine1209:27.3412,57613:25.5512,57612,514, 15,936
12_12_6_node-recovery-tof1209:09.3023,08812:53.8423,08823,352, 23,625
12_12_12_link-failure-leaf-spine604:14.676965:56.28696696
12_12_12_link-failure-spine-tof604:04.785526:10.82552552
12_12_12_link-recovery-leaf-spine604:41.717696:56.05769721, 769
12_12_12_link-recovery-spine-tof604:38.936146:57.11614589, 626
12_12_12_node-failure-leaf604:25.2915486:18.8111281128
12_12_12_node-failure-spine604:12.5363486:08.1663486348
12_12_12_node-failure-tof604:11.1766246:25.7266246624
12_12_12_node-recovery-leaf604:22.3717286:15.4117281728
12_12_12_node-recovery-spine604:24.1579686:31.7479688162, 9469
12_12_12_node-recovery-tof604:23.1281126:25.3881128112, 8676
14_14_1_link-failure-leaf-spine9802:42:2311,732not enough vRAM-
14_14_1_link-failure-spine-tof9802:32:3820,817not enough vRAM-
14_14_1_link-recovery-leaf-spine9802:40:3812,909not enough vRAM-
14_14_1_link-recovery-spine-tof9802:39:2622,332not enough vRAM-
14_14_1_node-failure-leaf9802:41:2327,398not enough vRAM21,924
14_14_1_node-failure-spine9802:40:11146,384not enough vRAM-
14_14_1_node-failure-tof9802:32:28285,768not enough vRAM-
14_14_1_node-recovery-leaf9802:40:4232,928not enough vRAM-
14_14_1_node-recovery-spine9802:42:50188,608not enough vRAM-
14_14_1_node-recovery-tof9802:36:31318,360not enough vRAM-
14_14_2_link-failure-leaf-spine49054:43.59313659:22.853136-
14_14_2_link-failure-spine-tof49052:49.34529258:15.275292-
14_14_2_link-recovery-leaf-spine49056:43.9037251:02:303725-
14_14_2_link-recovery-spine-tof49056:29.8458821:00:445882-
14_14_2_node-failure-leaf49054:45.5014,0701:01:3314,07010,948
14_14_2_node-failure-spine49052:58.7140,78257:15.3140,782-
14_14_2_node-failure-tof49053:31.63137,59257:16.8313,7592-
14_14_2_node-recovery-leaf49053:24.0816,46457:33.5316,464-
14_14_2_node-recovery-spine49053:23.6554,74059:57.3554,740-
14_14_2_node-recovery-tof49054:33.1815,372058:34.59153,720-
14_14_7_link-failure-leaf-spine14012:01.43131619:57.6913161336
14_14_7_link-failure-spine-tof14011:36.53151218:12.6515121512, 1635
14_14_7_link-recovery-leaf-spine14012:29.71148521:14.8214851373, 1485
14_14_7_link-recovery-spine-tof14012:07.47166820:51.2616681583, 1682
14_14_7_node-failure-leaf14012:13.42387819:14.0131083108, 3150
14_14_7_node-failure-spine14012:02.3015,30219:24.9815,30215,374
14_14_7_node-failure-tof14011:36.7031,75220:30.4131,75232,988, 32,990
14_14_7_node-recovery-leaf14011:37.59470420:36.1447045470, 7014
14_14_7_node-recovery-spine14011:53.9219,46019:59.2119,46021,484, 24,662
14_14_7_node-recovery-tof14011:45.4836,12019:06.7236,12037,967, 39,645
14_14_14_link-failure-leaf-spine705:20.939529:06.52952952
14_14_14_link-failure-spine-tof705:16.327569:00.30756756
14_14_14_link-recovery-leaf-spine705:50.10103710:22.861037981, 1037
14_14_14_link-recovery-spine-tof705:44.7882810:41.18828799, 842
14_14_14_node-failure-leaf705:30.5121149:50.4215401540, 1552, 1554
14_14_14_node-failure-spine705:39.9210,20610:21.8610,20610,206, 10,308, 10,350
14_14_14_node-failure-tof705:32.2110,5849:21.2010,58410,570, 10,584
14_14_14_node-recovery-leaf705:19.37235210:15.2123522730
14_14_14_node-recovery-spine705:32.3412,40410:15.3112,40412,580, 13,024
14_14_14_node-recovery-tof705:50.7712,6009:44.8812,60014,406, 14,574
15_15_1_link-failure-leaf-spine11256:49:5214,370not enough vRAM-
15_15_1_link-failure-spine-tof11255:10:0425,694not enough vRAM-
15_15_1_link-recovery-leaf-spine11257:16:4315,721not enough vRAM-
15_15_1_link-recovery-spine-tof11257:38:0427,437not enough vRAM-
15_15_1_node-failure-leaf11257:21:4333,705not enough vRAM-
15_15_1_node-failure-spine11256:57:28193,470not enough vRAM-
15_15_1_node-failure-tof11257:28:52378,450not enough vRAM-
15_15_1_node-recovery-leaf11257:14:2640,500not enough vRAM-
15_15_1_node-recovery-spine11257:06:44245,535not enough vRAM-
15_15_1_node-recovery-tof11257:06:41418,560not enough vRAM-
16_16_1_link-failure-leaf-spine1280not enough vRAMnot enough vRAM-
16_16_1_link-failure-spine-tof1280not enough vRAMnot enough vRAM-
16_16_1_link-recovery-leaf-spine1280not enough vRAMnot enough vRAM-
16_16_1_link-recovery-spine-tof1280not enough vRAMnot enough vRAM-
16_16_1_node-failure-leaf1280not enough vRAMnot enough vRAM32,736
16_16_1_node-failure-spine1280not enough vRAMnot enough vRAM-
16_16_1_node-failure-tof1280not enough vRAMnot enough vRAM-
16_16_1_node-recovery-leaf1280not enough vRAMnot enough vRAM-
16_16_1_node-recovery-spine1280not enough vRAMnot enough vRAM-
16_16_1_node-recovery-tof1280not enough vRAMnot enough vRAM-
16_16_8_link-failure-leaf-spine16016:09.07172827:12.6517281728
16_16_8_link-failure-spine-tof16015:58.80198423:39.0319841984
16_16_8_link-recovery-leaf-spine16017:09.07192127:48.6219211793, 1991
16_16_8_link-recovery-spine-tof16016:58.23216226:25.2721622065, 2178
16_16_8_node-failure-leaf16016:24.14558424:45.5540644062, 4064
16_16_8_node-failure-spine16016:31.0223,05623:16.4223,05623,262
16_16_8_node-failure-tof16016:35.3747,61623:58.3047,61648,543, 48,807
16_16_8_node-recovery-leaf16015:59.40614424:17.7661448230, 10,174
16_16_8_node-recovery-spine16016:20.5528,48025:01.3528,48029,729, 30,013
16_16_8_node-recovery-tof16016:54.9353,31224:16.8653,31254,467, 56,962
16_16_16_link-failure-leaf-spine807:09.04124812:02.8312481248, 1310
16_16_16_link-failure-spine-tof807:16.8799211:04.84992992
16_16_16_link-recovery-leaf-spine807:42.15134513:31.5013451281, 1345
16_16_16_link-recovery-spine-tof807:44.84107413:39.6010741041, 1090
16_16_16_node-failure-leaf807:32.59276812:06.7320162016
16_16_16_node-failure-spine807:27.1815,37611:50.8915,37615,582, 15,596, 15,599
16_16_16_node-failure-tof807:38.5115,87212:19.2215,87215,376, 15,840, 15,872
16_16_16_node-recovery-leaf807:29.09307212:13.9830723070, 4064
16_16_16_node-recovery-spine807:21.5518,24013:44.5318,24018,016, 19,074
16_16_16_node-recovery-tof807:21.6418,49612:05.7518,49619,328, 19,776
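
The execution times reported above mix two clock formats: M:SS.ss for runs shorter than one hour and H:MM:SS for longer ones. The following minimal sketch, which is not part of the tooling released with this paper, shows one way such entries could be normalized to seconds for numerical comparison; the example values are taken from the 6_6_1_node-failure-leaf row.

```python
# Sketch (assumption: not the authors' scripts) for normalizing the execution
# times reported in the table above. Two formats appear in the data:
# "M:SS.ss" / "MM:SS.ss" for runs under one hour, and "H:MM:SS" for longer runs.

def to_seconds(ts: str) -> float:
    """Convert 'M:SS.ss' or 'H:MM:SS' time strings to seconds."""
    parts = ts.split(":")
    if len(parts) == 2:                      # minutes:seconds(.fraction)
        minutes, seconds = parts
        return int(minutes) * 60 + float(seconds)
    hours, minutes, seconds = parts          # hours:minutes:seconds
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Example: 6_6_1_node-failure-leaf, ns-3 without zebra vs. with zebra.
print(to_seconds("10:41.58"))   # 641.58
print(to_seconds("13:52.94"))   # 832.94
```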

References

  1. Cisco. Cisco Global Cloud Index: Forecast and Methodology, 2016–2021; White Paper; Cisco: San Jose, CA, USA, 2018. [Google Scholar]
  2. Clos, C. A study of non-blocking switching networks. Bell Syst. Tech. J. 1953, 32, 406–424. [Google Scholar] [CrossRef]
  3. Alberro, L.; Castro, A.; Grampin, E. Experimentation Environments for Data Center Routing Protocols: A Comprehensive Review. Future Internet 2022, 14, 29. [Google Scholar] [CrossRef]
  4. Bonofiglio, G.; Iovinella, V.; Lospoto, G.; Di Battista, G. Kathará: A container-based framework for implementing network function virtualization and software defined networks. In Proceedings of the NOMS 2018—2018 IEEE/IFIP Network Operations and Management Symposium, Taipei, Taiwan, 23–27 April 2018; pp. 1–9. [Google Scholar] [CrossRef]
  5. Scazzariello, M.; Ariemma, L.; Caiazzi, T. Kathará: A Lightweight Network Emulation System. In Proceedings of the NOMS 2020—2020 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 20–24 April 2020; pp. 1–2. [Google Scholar] [CrossRef]
  6. Scazzariello, M.; Ariemma, L.; Battista, G.D.; Patrignani, M. Megalos: A Scalable Architecture for the Virtualization of Network Scenarios. In Proceedings of the NOMS 2020—2020 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 20–24 April 2020; pp. 1–7. [Google Scholar] [CrossRef]
  7. Ahrenholz, J. Comparison of CORE network emulation platforms. In Proceedings of the 2010—MILCOM 2010 Military Communications Conference, San Jose, CA, USA, 31 October–3 November 2010; pp. 166–171. [Google Scholar] [CrossRef]
  8. Lantz, B.; Heller, B.; McKeown, N. A Network in a Laptop: Rapid Prototyping for Software-Defined Networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, Monterey, CA, USA, 20–21 October 2010; Association for Computing Machinery: New York, NY, USA, 2010. Hotnets-IX. [Google Scholar] [CrossRef]
  9. Caiazzi, T.; Scazzariello, M.; Alberro, L.; Ariemma, L.; Castro, A.; Grampin, E.; Battista, G.D. Sibyl: A Framework for Evaluating the Implementation of Routing Protocols in Fat-Trees. In Proceedings of the NOMS 2022—2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 25–29 April 2022; pp. 1–7. [Google Scholar] [CrossRef]
  10. Lapukhov, P.; Premji, A.; Mitchell, J. Use of BGP for Routing in Large-Scale Data Centers; RFC 7938, RFC Editor; 2016; IETF. Available online: https://datatracker.ietf.org/doc/rfc7938/ (accessed on 9 October 2022).
  11. White, R.; Hegde, S.; Zandi, S. IS-IS Optimal Distributed Flooding for Dense Topologies. Internet-Draft Draft-White-Distoptflood-03, IETF Secretariat. 2020. Available online: https://datatracker.ietf.org/doc/html/draft-white-distoptflood-03 (accessed on 9 October 2022).
  12. Przygienda, T.; Sharma, A.; Thubert, P.; Rijsman, B.; Afanasiev, D.; Head, J. RIFT: Routing in Fat Trees. Internet-Draft Draft-Ietf-Rift-Rift-16, IETF Secretariat. 2022. Available online: https://datatracker.ietf.org/doc/draft-ietf-rift-rift/ (accessed on 9 October 2022).
  13. Aelmans, M.; Vandezande, O.; Rijsman, B.; Head, J.; Graf, C.; Alberro, L.; Mali, H.; Steudler, O. Day One: Routing in Fat Trees (RIFT); Juniper Networks Books: Sunnyvale, CA, USA, 2020. [Google Scholar]
  14. Quagga. Available online: https://www.quagga.net/ (accessed on 1 August 2022).
  15. Tazaki, H.; Urbani, F.; Mancini, E.; Lacage, M.; Camara, D.; Turletti, T.; Dabbous, W. Direct code execution: Revisiting library OS architecture for reproducible network experiments. In Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies, Santa Barbara, CA, USA, 9–12 December 2013; pp. 217–228. [Google Scholar]
  16. ns-3 Network Simulator. Available online: https://www.nsnam.org (accessed on 30 September 2022).
  17. ns-3 Direct Code Execution. Available online: https://www.nsnam.org/about/projects/direct-code-execution (accessed on 30 September 2022).
  18. Caiazzi, T.; Scazzariello, M.; Ariemma, L. VFTGen: A Tool to Perform Experiments in Virtual Fat Tree Topologies. In Proceedings of the IM 2021—2021 IFIP/IEEE International Symposium on Integrated Network Management, Virtual, 17–21 May 2021. [Google Scholar]
  19. Sibyl Results. Available online: https://gitlab.com/uniroma3/compunet/networks/sibyl-framework/sibyl-results (accessed on 30 September 2022).
  20. Merkel, D. Docker: Lightweight linux containers for consistent development and deployment. Linux J. 2014, 2014, 2. [Google Scholar]
  21. Kubernetes. 2021. Available online: https://kubernetes.io/ (accessed on 30 September 2022).
  22. Azpiroz, S.Y.; Velázquez, F. FRR ns-3 DCE. 2021. Available online: https://gitlab.com/fing-mina/datacenters/frr-ns3 (accessed on 30 September 2022).
  23. ns-3 Manual. Available online: https://www.nsnam.org/docs/release/3.34/manual/singlehtml/index.html (accessed on 30 September 2022).
  24. Kaashoek, M.F.; Engler, D.R.; Ganger, G.R.; Briceño, H.M.; Hunt, R.; Mazières, D.; Pinckney, T.; Grimm, R.; Jannotti, J.; Mackenzie, K. Application Performance and Flexibility on Exokernel Systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles (SOSP’97), Saint Malo, France, 5–8 October 1997; Association for Computing Machinery: New York, NY, USA, 1997; pp. 52–65. [Google Scholar] [CrossRef]
  25. ns-3 Direct Code Execution (DCE) Documentation. 2021. Available online: https://ns-3-dce.readthedocs.io/en/latest/intro.html (accessed on 30 September 2022).
  26. White, R.; Zandi, S. IS-IS Support for Openfabric. Internet-Draft Draft-White-Openfabric-07, IETF Secretariat. 2018. Available online: https://datatracker.ietf.org/doc/html/draft-white-openfabric-07 (accessed on 9 October 2022).
  27. DCE Quagga. Available online: https://www.nsnam.org/docs/dce/manual-quagga/html/getting-started.html (accessed on 30 September 2022).
  28. Fix Bug in Dce Vasprintf. Available online: https://github.com/direct-code-execution/ns-3-dce/pull/132 (accessed on 30 September 2022).
  29. Fix Bug in Dce InternalClosedir. Available online: https://github.com/direct-code-execution/ns-3-dce/pull/133 (accessed on 30 September 2022).
  30. Azpiroz, S.Y.; Velázquez, F. FRR Compilation and Installation Script for ns-3 DCE. 2021. Available online: https://gitlab.fing.edu.uy/proyecto-2021/scripts/-/blob/master/04-install-frr-SUDO (accessed on 30 September 2022).
  31. Free Range Routing. Available online: https://frrouting.org (accessed on 30 September 2022).
  32. Kathara-Labs. Available online: https://github.com/KatharaFramework/Kathara-Labs (accessed on 30 September 2022).
  33. Medhi, D.; Ramasamy, K. Network Routing, Second Edition: Algorithms, Protocols, and Architectures, 2nd ed.; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2017. [Google Scholar]
Figure 1. A fat-tree multi-plane topology with $K = 2$, $R = 1$ and $N = 2$ planes.
Figure 2. DCE is used for running Linux applications without code changes. On top of that, it enables the use of the Linux network protocol stack in ns-3 simulations. Net devices (and channel) are simulated only with ns-3, while applications and network protocols can use DCE.
Figure 3. Architecture of DCE. The application layer is where our programs will be executed using DCE to connect to the core of the network simulator (ns-3). [Prepared by the authors on the basis of an image obtained from [25]].
Figure 4. Evolution of the results of ns-3 and Sibyl for the Spine failure scenario, in comparison with the evolution of the polynomial $\frac{k^4}{4} - \frac{3 k^3}{8} + \frac{5 k^2}{4} - k$.
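As a worked reading of Figure 4 (assuming here that $k$ denotes the switch radix, i.e., $k = 16$ for the 8_8_1 topology), evaluating the polynomial at $k = 16$ gives
\[
\frac{16^4}{4} - \frac{3 \cdot 16^3}{8} + \frac{5 \cdot 16^2}{4} - 16 = 16384 - 1536 + 320 - 16 = 15152,
\]
which coincides with the 15,152 PDUs measured with ns-3 for the 8_8_1 spine node failure scenario.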
Figure 5. Evolution of the results of ns-3 and Sibyl for the Leaf failure scenario, in comparison with the evolution of the polynomials $P_1 = k^3 - k$ and $P_2 = \frac{5 k^3}{4} - \frac{3 k}{2}$.
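As a worked reading of Figure 5 (again assuming that $k$ denotes the switch radix), $P_1(20) = 20^3 - 20 = 7980$ matches the 7980 PDUs measured for 10_10_1_node-failure-leaf, while $P_2(24) = \frac{5 \cdot 24^3}{4} - \frac{3 \cdot 24}{2} = 17280 - 36 = 17244$ matches the 17,244 PDUs measured with ns-3 (without zebra) for 12_12_1_node-failure-leaf.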
Table 1. Representative results for comparison against Sibyl.
Scenario | Number of Nodes | PDUs ns-3 (w/o Zebra) | PDUs ns-3 (w/ Zebra) | PDUs Sibyl
2_2_1_node-failure-leaf | 20 | 60 | 60 | 60
2_2_2_node-failure-leaf | 10 | 28 | 28 | 28
4_4_1_node-failure-leaf | 80 | 504 | 504 | 504
4_4_2_node-failure-leaf | 40 | 248 | 248 | 248
4_4_4_node-failure-leaf | 20 | 120 | 120 | 120
6_6_1_link-failure-leaf-spine | 180 | 996 | 996 | 996
6_6_1_link-recovery-spine-tof | 180 | 1796 | 1796 | 1657, 1796
10_10_1_node-failure-spine | 500 | 37,480 | 37,480 | 18,390
12_12_1_node-failure-leaf | 720 | 17,244 | - | 13,800
Table 2. Results of executing the scenario 6_6_6_link-failure-spine-tof after applying the performance improvements.
Proposed Feature | Execution Time | Memory Consumption | Execution Time Improvement
None | 2:50.140 | 705 MB | -
Disable IPv6 | 1:44.284 | 705 MB | +38.71%
Reduce simulation time | 2:18.452 | 705 MB | +18.62%
Without zebra | 1:29.786 | 403 MB | +47.23%
All together | 1:18.833 | 403 MB | +53.67%
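As a worked check of the improvement column: the baseline run takes 2:50.140 = 170.140 s and the run with IPv6 disabled takes 1:44.284 = 104.284 s, so
\[
\frac{170.140 - 104.284}{170.140} \approx 0.3871 \;\Rightarrow\; +38.71\%.
\]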
Table 3. Comparison of execution times between the ns-3 environment and the Sibyl framework for the failure of a leaf node in multi-plane fat-trees.
Scenario | Number of Nodes | Execution Time in ns-3 | Execution Time in Sibyl
2_2_1_node-failure-leaf | 20 | 0:59 | 2:2
4_4_1_node-failure-leaf | 80 | 4:12 | 3:4
6_6_1_node-failure-leaf | 180 | 10:41 | 6:25
8_8_1_node-failure-leaf | 320 | 21:16 | 20:49
10_10_1_node-failure-leaf | 500 | 36:31 | 14:18
12_12_1_node-failure-leaf | 720 | 1:22:40 | 37:3
14_14_1_node-failure-leaf | 980 | 2:41:23 | 1:5:31
15_15_1_node-failure-leaf | 1125 | 7:21:43 | -
16_16_1_node-failure-leaf | 1280 | - | 1:44:58
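Converting the entries to seconds makes the comparison easier to read; for instance, for 10_10_1_node-failure-leaf the ns-3 run takes 36:31 = 2191 s against 14:18 = 858 s in Sibyl, i.e., Sibyl is roughly 2191/858 ≈ 2.55 times faster for that scenario.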
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
