Article

QoSComm: A Data Flow Allocation Strategy among SDN-Based Data Centers for IoT Big Data Analytics

by Jose E. Lozano-Rizk 1,2,*, Juan I. Nieto-Hipolito 1,*, Raul Rivera-Rodriguez 2, Maria A. Cosio-Leon 3, Mabel Vazquez-Briseño 1 and Juan C. Chimal-Eguia 4
1 Faculty of Engineering, Architecture and Design, Universidad Autonoma de Baja California, Ensenada 22860, Mexico
2 Telematics Division, Centro de Investigacion Cientifica y de Educacion Superior de Ensenada, Ensenada 22860, Mexico
3 Research, Innovation and Academic Division, Universidad Politecnica de Pachuca, Zempoala 43830, Mexico
4 Center for Computing Research, Instituto Politecnico Nacional, Ciudad de Mexico 07738, Mexico
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7586; https://doi.org/10.3390/app10217586
Submission received: 8 October 2020 / Revised: 23 October 2020 / Accepted: 26 October 2020 / Published: 28 October 2020
(This article belongs to the Special Issue Internet of Things (IoT))

Abstract

When Internet of Things (IoT) big data analytics (BDA) need to transfer data streams among software-defined network (SDN)-based distributed data centers, data flow forwarding in the communication network is typically done by an SDN controller using a traditional shortest path algorithm or considering only the bandwidth required by the applications. In BDA, this scheme can degrade performance and lengthen job completion time because additional metrics, such as end-to-end delay, jitter, and packet loss rate in the data transfer path, are not considered. These metrics are quality of service (QoS) parameters of the communication network. This research proposes QoSComm, an SDN strategy to allocate QoS-based data flows for BDA running across distributed data centers to minimize their job completion time. QoSComm operates in two phases: (i) based on the current communication network conditions, it calculates the feasible paths for each data center using a multi-objective optimization method; (ii) it distributes the resultant paths among data centers, configuring their OpenFlow switches (OFS) dynamically. Simulation results show that QoSComm can improve BDA job completion time by an average of 18%.

1. Introduction

In recent years, the Internet of Things (IoT) has evolved into one of the leading technologies generating massive amounts of data stored in distributed data sources. IoT devices transfer the generated data to big data systems located in distributed data centers for further analysis. Organizations and users can perform all kinds of processing and analysis on massive IoT data, thereby adding to its value [1].
Big data processing and big data applications are shifting computing paradigms, computing concepts, and the treatment of data [2]. Big data analytics (BDA) refers to the strategy of analyzing large volumes of data, or big data. These big data are gathered from a wide variety of sources, including social networks, videos, digital images, sales transaction records, end-user activities, environmental monitoring, and sensors (IoT devices). IoT data exhibit four main characteristics: large-scale streaming data, heterogeneity, time and space correlation, and high noise [3]. With BDA, a variety of these IoT data are examined to reveal trends, unseen patterns, hidden correlations, and new information. BDA in IoT helps businesses and other organizations achieve a better understanding of data and more efficient decision-making [4].
The big data concept is commonly characterized through the three-V model extended to five Vs (volume, velocity, variety, veracity, and value), with an emphasis on different data generation sources such as IoT devices [5].
The IoT decision-making process uses BDA results, and the time to obtain those results becomes a crucial factor in making quick decisions. Therefore, BDA systems are designed to process and analyze large datasets from data centers in the shortest possible time.
BDA jobs running in data centers require computing capabilities such as CPU time, storage capacity, and RAM to fulfill their data processing and analysis objectives. When working with large amounts of data and running out of resources, they can scale horizontally by adding computational power from distributed data centers.
The MapReduce [6] programming paradigm is one of the most representative BDA models. MapReduce provides horizontal scaling to petabytes of data on thousands of compute nodes, a simplified programming model, and a high degree of reliability in the presence of node failures [7]. In MapReduce, the input data are divided into many parts; each part is then sent to compute nodes, which can be located in distributed data centers, and the results are finally aggregated according to a specified function. When MapReduce jobs require transferring data between compute nodes located in distributed data centers, the communication network is a factor that can affect the application completion time. Representative BDA running across distributed data centers are:
  • Intensive computation. These applications concentrate on data processing, with little communication or few data transfers between compute nodes. We consider them best-effort applications.
  • Intensive communication. These applications need to be in constant communication with compute nodes or, at some point, require transferring large volumes of data to compute nodes located in geographically distributed data centers. In this case, the data transfer process is crucial to completion time. We consider them time-sensitive applications.
BDA classified as intensive communication require an efficient communication system to complete jobs in the shortest possible time. Under this scheme, the network architecture used to interconnect data centers becomes critical and can impact application performance. In a BDA system, the application scheduler submits jobs to compute nodes that meet their computational requirements without considering adverse network conditions on end-to-end paths, unless the network administrator manually configures a QoS policy or specifies a network path per application. In host-to-host communication among distributed data centers, providing QoS requires considering four network parameters, namely available bandwidth, end-to-end delay, packet loss rate, and jitter, according to application requirements.
Software-defined networks (SDN) provide a mechanism that allows scientific applications to dynamically request network services or parameters such as available bandwidth, QoS policies, and security [8]. These scientific applications can be classified as intensive communication. SDN can address the programmability problem, allowing scientific applications to program local or inter-domain networks at run-time to meet their requirements [9]. This is considered an advantage of SDN over traditional network services. SDNs are gradually spreading to large-scale networks (such as data centers) and complex networks (multi-agency collaborative networks) [10].
The massive volumes of data generated by IoT devices will, in the short term, constitute the largest flow of information and the largest data source for BDA. Given the amount of information that must be processed to make quick decisions, BDA needs to use computing and storage resources available in distributed data centers connected with state-of-the-art communication networks. As a result, IoT demands more advanced computing, storage, and communication network solutions to handle distributed BDA workloads.
We designed QoSComm, an SDN strategy that allocates QoS-based data flows for BDA jobs running across distributed data centers to minimize their completion time. QoSComm implements a QoS-based data flow allocation algorithm for data flow forwarding using a multi-objective optimization method, considering the four most essential network parameters: available bandwidth, delay, jitter, and packet loss. In addition, QoSComm includes a communication process to dynamically configure feasible paths through the SDN controller of each data center. BDA classified as intensive communication are the target domain of QoSComm. Figure 1 shows the general view of BDA and QoSComm.
The rest of the article is organized as follows: related work is presented in Section 2. Section 3 introduces our strategy called QoSComm, its design, and implementation. Section 4 describes the simulation model and the experimental setup. Section 5 presents the performance tests and the discussion of the results. Section 6 presents the conclusions and future directions.

2. Related Work

In this section, we explore the use of SDN for big data analytics and describe research works that have attempted to improve application performance by considering SDN parameters. These works use SDN parameters in the application job scheduling process between SDN-based distributed data centers.
For the aim of this research, we identified BDA based on MPI (Message Passing Interface) and MapReduce. MPI is a library of routines for inter-process communication and is widely used in distributed systems [11]. Apache Hadoop MapReduce is a programming model for processing large data sets based on the divide-and-conquer method. It relies on a distributed file system (HDFS) spanning each node of the cluster. MapReduce consists of two main phases, Map and Reduce, linked by an internal Shuffle and Sort phase [12]. We consider such applications time-sensitive.
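To make the Map, Shuffle and Sort, and Reduce phases concrete, the following is a minimal, hypothetical word-count sketch in plain Python (no Hadoop involved): the input is split into parts, a map function is applied to each part, intermediate pairs are grouped by key, and a reduce function aggregates them. Function names and the in-memory grouping are illustrative only.

```python
from collections import defaultdict

def map_phase(chunk):
    # Emit (word, 1) pairs for every word in this input split.
    return [(word, 1) for word in chunk.split()]

def shuffle(mapped_pairs):
    # Group intermediate values by key, as the framework's Shuffle and Sort phase does.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate all values for one key with the user-specified function (here, a sum).
    return key, sum(values)

if __name__ == "__main__":
    splits = ["iot big data", "big data analytics", "iot analytics"]
    mapped = [pair for chunk in splits for pair in map_phase(chunk)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)  # {'iot': 2, 'big': 2, 'data': 2, 'analytics': 2}
```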
Greedy Shortest Binomial Tree (GSBT) [13] is an algorithm that aims to minimize the number of hops of the longest path in the reduction tree. When MPI applications start executing, GSBT sends the IP and MAC address of each process to the SDN controller. The SDN controller is responsible for finding the shortest path between each pair of hosts and installs the corresponding flow rules in all the OpenFlow switches. Their results showed that, for MPI message sizes larger than 22 KB, GSBT degraded application performance; for messages smaller than 22 KB, it improved it.
In Bandwidth and Latency Aware Routing (BLAR) [14], the authors proposed an SDN routing scheme driven by application requirements, using two network parameters: bandwidth and latency. If the application requests a specific bandwidth or latency, the SDN controller sets data flow rules in the OpenFlow switches to forward data over routes that meet the requested bandwidth or latency. In some cases, the authors demonstrated that paths meeting these parameters improved application completion time compared with the default shortest-route method; in other cases, the results showed that application completion time was degraded.
In BASS (Bandwidth-Aware Scheduling with SDN) [15], the authors built a task scheduler that uses SDN to obtain bandwidth information for MapReduce jobs running in Hadoop clusters. The objective is to use SDN to manage bandwidth; BASS then assigns tasks locally or remotely in geographically distributed Hadoop clusters. BASS obtains the network bandwidth from the SDN controller and uses it as a parameter in the Hadoop scheduling process. Results showed that BASS can improve the completion time of a MapReduce job running among distributed clusters if bandwidth conditions meet application requirements.
In ASETS (A SDN Empowered Task Scheduling System) [16], the authors use an algorithm called SETSA (SDN-Empowered Task Scheduler Algorithm) that relies on SDN capabilities to schedule tasks on available virtual machines so as to maximize bandwidth usage. This algorithm targets the HPC-as-a-Service (HPCaaS) architecture. SETSA uses the bandwidth of the computational cloud to improve the performance of the HPCaaS architecture with respect to the response time of jobs submitted to the cloud.
CLS (Cross-Layer Scheduling) [17] couples an application-level job scheduler with the SDN scheduler responsible for assigning network links. The objective of this work is to allocate tasks and select links so that the application achieves high performance. The application-level scheduler uses bandwidth information from the SDN controller scheduler and distributes the tasks among the servers. The results indicated improved performance for Hadoop and Storm applications.
The authors of BWARE (Bandwidth-Aware) [18] improve the backup task scheduling policy for Hadoop clusters running over an SDN. BWARE uses the SDN bandwidth parameter to obtain the real network bandwidth between the input data source node and the execution destination node. This parameter is used for data transfer in the backup task process from the source node to the execution node. Results showed that BWARE can reduce the time elapsed in the backup task process when considering the SDN bandwidth parameter.
In summary, GSBT [13] obtains the available routes in the SDN and calculates the shortest path for the reduction tree used in the MPI communication process, without considering the traffic congestion, delay, and packet loss that may exist on some routes. BASS [15], ASETS [16], CLS [17], and BWARE [18] consider the network bandwidth parameter in their application job scheduling processes. BLAR [14] considers the bandwidth and latency parameters in its MPI job scheduling process for path selection to compute nodes and achieved an improvement in application performance. The results of some of these works showed cases where performance was degraded [13,14], which could be due to several network conditions not considered in the data flow allocation process, such as available bandwidth, end-to-end delay, packet loss, and jitter. Therefore, the opportunity in these research works is to consider these network parameters to provide QoS in the flow allocation process and improve application performance.
Regarding QoS in SDN, we identified proposals along three main aspects: architecture, path selection algorithm, and whether the proposal works within a single domain or among several domains. Examples of these proposals are OpenQoS [19] and VSDN [20], which, in addition to consulting the distance between nodes, in some cases also consider the delay between them. Most QoS proposals focus on forwarding data flows to control bandwidth allocation and improve multimedia transmissions (video streaming) and even VoIP [21,22,23], but none of them consider other applications such as big data, so it is not clear under which circumstances these proposals can benefit BDA performance. CECT [24] proposes a scheme to minimize network congestion and reallocate network resources according to application requirements. Its algorithm was designed for a software-defined cloud data center, where the dynamic data flow reallocation of some services, such as virtual machine migration, needs special QoS guarantees that can generate overhead in the reallocation process. CECT uses a genetic algorithm to calculate routes, considers only available bandwidth as a constraint, and uses a single controller within the software-defined data center. In AmoebaNet [9], the authors proposed a service that applies SDN to provide QoS-guaranteed network services in LAN (Local Area Network) or campus networks for big data science, using WAN (Wide Area Network) path reservation systems such as the ESnet (Energy Sciences Network) OSCARS (On-Demand Secure Circuits and Advance Reservation System) and Internet2 specialized networks. In addition, it uses a Dijkstra shortest-path variant to compute an end-to-end network path, using only the bandwidth parameter as a constraint and considering a single SDN controller for communication between the different domains.
The analyzed proposals describe experiments considering QoS within a single domain and with one SDN controller, except AmoebaNet [9], which is, however, only used in specialized networks. None of these works considers a QoS proposal under an architecture where BDA jobs run across two or more distributed data centers, or considers the four most essential network parameters to provide QoS, as QoSComm does.
Considering the research works described above, our proposal QoSComm differs from them mainly in its architecture: it requires neither modifications to the SDN controller nor additional modules in it. To solve the path selection problem, QoSComm uses a multi-objective algorithm designed around the four fundamental communication network parameters mentioned in the previous section. In addition, QoSComm can establish communication with one or more SDN controllers and perform path selection within a single domain or between two or more domains (distributed data centers), which is the primary goal of our strategy. Table 1 shows the main features of the reviewed QoS proposals and QoSComm.
The BDA scheduler sends a request to QoSComm to obtain and set the optimal paths according to the application's network requirements. Once QoSComm obtains and configures these paths in both data centers, the application scheduler submits the job to compute nodes in the distributed data centers that meet the network requirements, as illustrated in Figure 1.
Our main objective is to minimize application job completion time, considering the four parameters required by the application to provide network QoS in the data flow allocation process to transfer data flows among geographically distributed data centers. With this in mind, QoSComm considers BDA network requirements and meets QoS parameters, then selects and configures the path with the minimum end-to-end delay between two distributed compute nodes for time-sensitive applications.

3. QoSComm: Design and Implementation

In this section, we present QoSComm, a strategy to allocate QoS-based data flows for BDA running among SDN-based distributed data centers. QoSComm’s main objective is to compute, configure, and allocate data flow rules in OF (OpenFlow) switches for QoS network paths across distributed data centers to improve applications’ job completion time.
The QoSComm design consists of two main processes, as illustrated in Figure 2:
  • Compute QoS network paths using a constraint-based multi-objective decision algorithm. This obtains the optimal paths with the minimal end-to-end delay that comply with application requirements, considering four network parameters: available bandwidth, delay, packet loss rate, and jitter.
  • Configure the QoS optimal paths via the SDN controller on the OFS (OpenFlow switches). QoSComm communicates with the SDN controller to set a data flow rule on each OFS in the network (at the input and output port level). The SDN controller installs the data flow rule on each OFS as specified in the communication with its REST API (Representational State Transfer Application Programming Interface). The REST API allows the dynamic configuration of data flow rules in the OFS using HTTP requests to a RESTful web service provided by the SDN controller to get, post, put, and delete data.
We use QoSComm to get and configure the optimal network paths to compute nodes among the SDN-enabled data centers. This process is performed just before the BDA scheduler submits jobs to compute nodes.

3.1. Path Selection

Problem Formalization

In a typical communication network topology, there is n number of paths for a transmission from the same source and destination (end-to-end path) [25]; this is represented in Equation (1):
$P_{s,d} = \{p_1, p_2, \ldots, p_n\}$
where:
  • $s$ = Source node,
  • $d$ = Destination node,
  • $P$ = Set of paths from source node $s$ to destination node $d$,
  • $p$ = End-to-end path, $p \in P_{s,d}$, $p_n = \{l_1, l_2, \ldots, l_n\}$,
  • $l$ = Network links for each node in the end-to-end path
Each path has different values for its network parameters. These parameters serve as metrics to characterize path network conditions. The metrics considered for applications that make intensive use of the communication network are delay ($D(p)$), jitter ($J(p)$), packet loss ($PLR(p)$), and available bandwidth ($B(p)$); with these parameters, we can provide network QoS. In addition, the cost or distance metric of the link ($Cost(p)$) is considered. These network conditions for each path are represented in Equation (2):
$p_n = [Cost(p_n), D(p_n), J(p_n), PLR(p_n), B(p_n)]$
The default routing algorithm selects network paths taking into account the cost or distance between network nodes, which are the metrics used by the SDN controller to select the shortest route ($\tilde{p}$). It is represented in Equation (3) as the minimum cost value among the $n$ possible paths:
$\tilde{p} = [\min(Cost(P_{s,d}))]$
The traditional routing algorithm does not consider the additional parameters that describe network conditions. For some applications, this set of parameters should be considered in the path selection process to improve network performance, instead of just using the shortest path (the default method). The path with better network conditions ($\hat{p}_n$) is the path with the best metric values among all paths, as represented in Equation (4):
$\hat{p}_n = [\hat{D}(p_n), \hat{J}(p_n), \hat{PLR}(p_n), \hat{B}(p_n)]$
The shortest path in Equation (3) is not always the path with the best metrics in one or more network parameters. We identify a path with better metric conditions with the hat symbol ($\hat{\;}$). It must have the minimum value of each of the corresponding end-to-end parameters, namely delay, jitter, and packet loss, among the $n$ paths, as represented in Equations (5)–(7), except for the bandwidth, which we define as the maximum available bandwidth among the $n$ paths, as represented in Equation (8).
QoS parameters such as delay and jitter follow an additive metric composition rule along the end-to-end path. The bandwidth parameter follows a concave composition rule: the available bandwidth of an end-to-end path ($p_n$) is the minimum capacity among its network links ($l_n$), which bounds the maximum bandwidth of the corresponding ($p_n$). For the scope of our research, we used the following criteria for end-to-end path calculation based on QoS constraints:
$\hat{D}(p_n) = [\min(D(P_{s,d}))]$,
$\hat{J}(p_n) = [\min(J(P_{s,d}))]$,
$\hat{PLR}(p_n) = [\min(PLR(P_{s,d}))]$,
$\hat{B}(p_n) = [\max(B(P_{s,d}))]$,
where:
  • $D(p_n)$ = Path delay,
  • $J(p_n)$ = Path jitter,
  • $PLR(p_n)$ = Path packet loss,
  • $B(p_n)$ = Path available bandwidth
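As a concrete illustration of these composition rules, the short sketch below aggregates the metrics of one end-to-end path from hypothetical per-link vectors: delay, jitter, and packet loss rate are summed along the links, while the path bandwidth is the minimum link capacity. The Link structure and its values are illustrative only, not part of the QoSComm implementation.

```python
from dataclasses import dataclass

@dataclass
class Link:
    bandwidth: float  # mbps (link capacity)
    delay: float      # ms
    plr: float        # packet loss rate
    jitter: float     # ms

def path_metrics(links):
    """Additive composition for delay, jitter, and PLR; concave (min) composition for bandwidth."""
    return {
        "delay": sum(l.delay for l in links),
        "jitter": sum(l.jitter for l in links),
        "plr": sum(l.plr for l in links),
        "bandwidth": min(l.bandwidth for l in links),
    }

# Example: a three-link path (values are illustrative only).
p = [Link(100, 5, 0.001, 1), Link(90, 4, 0.001, 1), Link(100, 2, 0.0, 0)]
print(path_metrics(p))  # {'delay': 11, 'jitter': 2, 'plr': 0.002, 'bandwidth': 90}
```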
The network provides the connection service to a set of application clients, where each application has its own network requirements to ensure optimal performance. This set of applications is represented in Equation (9), and their requirements in Equation (10):
$A = [\hat{a}_1, \hat{a}_2, \hat{a}_3, \ldots, \hat{a}_n]$
$\hat{a}_n = [D(\hat{a}_n), J(\hat{a}_n), PLR(\hat{a}_n), B(\hat{a}_n)]$
where:
  • $A$ = Application set,
  • $\hat{a}$ = Application optimal performance requirements,
  • $D(\hat{a}_n)$ = Delay application requirement,
  • $J(\hat{a}_n)$ = Jitter application requirement,
  • $PLR(\hat{a}_n)$ = Packet loss application requirement,
  • $B(\hat{a}_n)$ = Bandwidth application requirement
In Equations (11)–(13), each metric has a maximum tolerable limit required by the application. In Equation (14), the application requires a minimum bandwidth rate. Each network parameter must comply with the application requirements:
$\hat{D}(p_n) \le D(\hat{a}_n)$
$\hat{J}(p_n) \le J(\hat{a}_n)$
$\hat{PLR}(p_n) \le PLR(\hat{a}_n)$
$B(\hat{a}_n) \le \hat{B}(p_n)$
Considering the set of network parameters for each application requirement, as described in Equations (11)–(14), and in order to solve the multi-objective problem, we used the epsilon-constraint ($\epsilon$-constraint) method [26] for path selection and data flow forwarding. We define the following objective function, represented in Equations (15)–(17):
$\min\big(D(p_n), J(p_n), PLR(p_n)\big),$
$\max\big(B(p_n)\big)$
s.t.:
$\hat{D}(p_n) \le D(\hat{a}_n), \quad \hat{J}(p_n) \le J(\hat{a}_n), \quad \hat{PLR}(p_n) \le PLR(\hat{a}_n), \quad B(\hat{a}_n) \le \hat{B}(p_n)$
The goal of the exact epsilon-constraint method is to minimize one objective while restricting the remaining objectives to values of $\epsilon$. Equations (18) and (19) represent the general form:
$\min f_j(x)$
s.t.:
$f_i(x) \le \epsilon_i, \quad i = 1, 2, 3, \ldots, M, \; i \ne j$
In our scenario, which considers an inter-domain data flow forwarding approach used only by applications that require QoS, the epsilon-constraint multi-objective method was used to obtain the set of feasible paths, considering the application requirements as restrictions on the objectives [27,28]. We propose to use the delay parameter as the most important objective, subjecting it to the restrictions of the other objectives (end-to-end jitter, packet loss, and available bandwidth). From the set of feasible paths, we minimize the delay to obtain the optimal path whose packet loss and jitter are less than or equal to, and whose available bandwidth is greater than or equal to, the values required by the application, as represented in Equations (20) and (21):
$\min \hat{D}(p_n)$
s.t.:
$\hat{J}(p_n) \le J(\hat{a}_n), \quad \hat{PLR}(p_n) \le PLR(\hat{a}_n), \quad B(\hat{a}_n) \le \hat{B}(p_n)$
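A minimal sketch of this selection step, assuming each candidate path already carries its end-to-end metrics (for example, computed as in the previous sketch): paths violating the jitter, packet loss, or bandwidth restrictions are discarded, and the feasible path with minimum delay is returned; an empty feasible set signals the caller to fall back to the controller's default forwarding. The dictionary layout and function name are illustrative, and the candidate values below are a subset of Table 3.

```python
def select_optimal_path(paths, req):
    """Epsilon-constraint style selection: minimize delay subject to
    jitter, packet loss, and bandwidth restrictions from the application."""
    feasible = [
        p for p in paths
        if p["jitter"] <= req["jitter"]
        and p["plr"] <= req["plr"]
        and p["bandwidth"] >= req["bandwidth"]
    ]
    if not feasible:
        return None  # no feasible path: use the SDN controller's default forwarding
    return min(feasible, key=lambda p: p["delay"])

# Illustrative subset of the H1-H6 candidates (cf. Table 3) and the requirements of Section 5.
candidates = [
    {"id": 2,  "bandwidth": 100, "delay": 20, "plr": 0.0, "jitter": 0},
    {"id": 10, "bandwidth": 90,  "delay": 15, "plr": 0.0, "jitter": 0},
    {"id": 14, "bandwidth": 90,  "delay": 11, "plr": 0.0, "jitter": 0},
]
req = {"bandwidth": 85, "delay": 30, "plr": 0.008, "jitter": 10}
print(select_optimal_path(candidates, req)["id"])  # 14
```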
In our model, just before a job is submitted to compute nodes, QoSComm obtains the network parameters by querying the SDN controller of each data center, and the mathematical process calculates the feasible paths according to application requirements. If no feasible path is found, QoSComm lets the SDN controller forward the data flow using its default method. This process is shown in Figure 2.
Once QoSComm has calculated the feasible paths and obtained the optimal one, its communication process configures and allocates data flow rules in the network switches for the QoS-based optimal path before the BDA scheduler dispatches the job to compute nodes in the distributed data centers. In practice, both operations add negligible delay to the application job submission process.

3.2. Communication Process for QoS-Based Feasible Paths

In SDN general architecture [29], the SDN controller resides in a control plane. To obtain centralized and optimal network flow management and configuration, SDN controllers have to maintain a global view of the network topology graph [30]. The SDN controller has two interfaces: The NorthBound Interface (NBI) that is used to communicate with the application plane, and the SouthBound Interface (SBI) which is used to communicate with the data plane, in our case, using the OpenFlow (OF) protocol [31].
QoSComm resides in the application plane, and according to the SDN architecture, it communicates to the SDN controller through the NBI. We developed a process to communicate to the NBI using the REST API provided by the SDN network controller.
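The sketch below illustrates one way such an NBI query could look with OpenDaylight's RESTCONF interface and the Python requests library; the controller address, credentials, and the exact RESTCONF path depend on the ODL version and deployment, so treat them as placeholders rather than the exact QoSComm implementation.

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical controller address and credentials; adjust to the actual ODL deployment.
ODL_URL = "http://192.0.2.10:8181"
AUTH = HTTPBasicAuth("admin", "admin")

def get_topology():
    """Query the controller's operational topology through the NBI (RESTCONF)."""
    resp = requests.get(
        f"{ODL_URL}/restconf/operational/network-topology:network-topology",
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # nodes, links, and termination points of the discovered topology

if __name__ == "__main__":
    topo = get_topology()
    for t in topo["network-topology"]["topology"]:
        print(t["topology-id"], len(t.get("link", [])), "links")
```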
The communication process's primary function is to configure the optimal path obtained from the decision algorithm described in the previous section, through the REST API provided by the SDN controller, as represented in Figure 3.
Once QoSComm obtains the optimal path, it configures the data flow rules describing the path required for data transfer with a higher priority than the controller's default forwarding method. Each data flow rule is allocated and configured in the OF switches through the SDN controller. Once the data flow rule is configured in each OF switch, these dynamic rules take priority in the data flow forwarding process; if no matching rule is found, the controller is responsible for setting the path for each data flow. Once the application data transfer process completes, QoSComm removes the data flow rule from each OF switch.
To configure data flow rules in the OF switches, we allocated a matching rule specifying the Internet Protocol (IP) addresses of the source and destination hosts and the IP protocol number (6 for TCP and 17 for UDP data flows). We also specified an action to be executed by the OFS, indicating the switch port number to be used for data flow output, and set a higher priority for this rule, as shown in Table 2.
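A hedged sketch of how one of the rules in Table 2 could be pushed through the controller's REST API is shown below. The JSON body follows the OpenDaylight OpenFlow plugin data model, whose exact keys can vary between releases, and the mapping of switch S0 to the node identifier "openflow:1" is an assumption; the source/destination addresses, protocol number, output port, and priority are taken from the first row of Table 2.

```python
import json
import requests
from requests.auth import HTTPBasicAuth

ODL_URL = "http://192.0.2.10:8181"   # hypothetical controller address
AUTH = HTTPBasicAuth("admin", "admin")
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

def push_flow(node, table, flow_id, src_ip, dst_ip, ip_proto, out_port, priority):
    """Install one flow rule (match on src/dst IP and IP protocol, output to a port)
    on a given OFS through the controller's RESTCONF config datastore."""
    flow = {
        "flow": [{
            "id": str(flow_id),
            "table_id": table,
            "priority": priority,
            "match": {
                "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
                "ipv4-source": f"{src_ip}/32",
                "ipv4-destination": f"{dst_ip}/32",
                "ip-match": {"ip-protocol": ip_proto},                # 6 = TCP, 17 = UDP
            },
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{
                    "order": 0,
                    "output-action": {"output-node-connector": str(out_port)},
                }]},
            }]},
        }]
    }
    url = (f"{ODL_URL}/restconf/config/opendaylight-inventory:nodes/"
           f"node/{node}/table/{table}/flow/{flow_id}")
    resp = requests.put(url, data=json.dumps(flow), headers=HEADERS, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.status_code

# Example: the first rule of Table 2 (mapping S0 to "openflow:1" is an assumption).
push_flow("openflow:1", 0, 1, "10.0.0.1", "10.0.0.6", ip_proto=6, out_port=1, priority=201)
```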
The QoSComm communication process can establish a connection with one or more domain network controllers and perform data flow rule allocation and configuration on each domain's switches according to particular application requirements. Furthermore, QoSComm can recalculate the path selection while maintaining the QoS parameters required by the applications. If a new optimal path is obtained, QoSComm dynamically changes the data flow allocation, configuring new rules in the OF switches with a higher priority than the previous rule. When an OF switch holds several matching flow rules, it uses the one with the highest priority for data flow output.
Our proposal QoSComm provides the dynamic configuration of data flow rules in the SDN-based distributed data centers according to each BDA’s network QoS requirements.
The following section describes the simulation model and experimental setup considering application data transfer among compute nodes located in distributed data centers.

4. Simulation Model and Experimental Setup

4.1. Simulation Model

The simulation model implements a network topology considering host-to-host data flow transfers between two or more SDN-enabled distributed data centers. For the scope of our research, the experimental setup considers a small-to-medium network topology based on the three-layer design for data centers [32]. This topology consists of an access layer to connect hosts (compute nodes), an aggregation or region layer in the middle to connect the access switches, and a core layer at the root of the tree.
Figure 4 shows the topology used by the simulation model. It is based on three layers, where two data centers, defined as Domain A and Domain B, are connected. Each domain has its own SDN network controller, and the controllers do not communicate with each other. Domains A and B are connected by a link, as could be the case with an Internet Service Provider (ISP); the bandwidth, delay, packet loss, and jitter values of this link are simulated. QoSComm obtains the feasible paths based on the application requirements and configures them through each domain's SDN controller. Since QoSComm does not require SDN controllers to communicate with each other, there is no need to modify the SDN controller, which is one of the main QoSComm contributions.

4.2. Experiment Setup

QoSComm's path selection and communication processes were written in the Python programming language. We used OpenDayLight (ODL) [33] with the RESTCONF module loaded as the SDN controller. The distributed data center topology was built using the Mininet [34] SDN simulator.
For the simulation environment, we configured a virtual machine with the Linux Debian 9 operating system (OS), 4 vCPUs, 8 GB of RAM, and 100 GB of disk space. The ODL SDN controllers were installed on two other virtual machines with the same specifications. Each ODL controller was assigned to Domain A or Domain B according to the simulation network topology. The software and tools used are as follows:
  • Linux Debian 9 OS
  • Mininet 2.3
  • OpenDayLight SDN Controller
  • Anaconda Python distribution platform
  • Spyder Scientific Python development environment
  • IPERF and D-ITG for data transfer performance
  • MPICH for MPI application performance
In the virtual machine with the Mininet simulator, the topology described in the previous section was implemented by developing a Python program that refers to components and objects of the Mininet CLI (Command Line Interface). The links between switches and hosts were configured with their respective bandwidth, packet loss, delay, and jitter values referred to in Figure 4. We simulated two domains, A and B (data centers), and each domain's SDN controller was configured to connect to the Mininet simulator.
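The fragment below sketches how such a topology program might look with the Mininet Python API: two remote controllers (one per domain), a pair of hosts and switches, and a traffic-shaped inter-domain link whose bandwidth, delay, jitter, and loss are set with TCLink. It is a reduced, illustrative excerpt with placeholder addresses and link values, not the full topology of Figure 4.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.cli import CLI

def build():
    # Reduced sketch: two hosts, two switches, and one shaped inter-domain link.
    net = Mininet(controller=RemoteController, switch=OVSSwitch, link=TCLink, autoSetMacs=True)

    # One remote ODL controller per domain (IP addresses are placeholders).
    c_a = net.addController("cA", ip="192.0.2.10", port=6653)
    c_b = net.addController("cB", ip="192.0.2.11", port=6653)

    h1 = net.addHost("h1", ip="10.0.0.1/24")
    h6 = net.addHost("h6", ip="10.0.0.6/24")
    s0 = net.addSwitch("s0")
    s13 = net.addSwitch("s13")

    net.addLink(h1, s0, bw=100)
    net.addLink(h6, s13, bw=100)
    # Simulated inter-domain (ISP) link with QoS-relevant impairments (illustrative values).
    net.addLink(s0, s13, bw=90, delay="10ms", jitter="1ms", loss=0.2)

    net.build()
    c_a.start()
    c_b.start()
    s0.start([c_a])    # Domain A switch attaches to controller A
    s13.start([c_b])   # Domain B switch attaches to controller B
    return net

if __name__ == "__main__":
    net = build()
    CLI(net)   # interactive prompt; type 'exit' to stop
    net.stop()
```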
Next, we describe the methodology used to get feasible and optimal paths considering applications requirements and their configuration in the Mininet simulator:
  • According to the three-layer experimental topology, represent each link as a vector with delay, packet loss, jitter, and available bandwidth values.
  • Define the end-to-end paths from hosts H1 to H6, H2 to H5, and H3 to H4 (according to the experimental topology).
  • Sum the corresponding values along each path (delay, packet loss, and jitter); for bandwidth, take the minimum available along each path. Store the information in a list of paths.
  • Define the objective restriction values (delay, packet loss, jitter, and available bandwidth) from the application requirements. In this case, the delay objective is minimized.
  • Search the list for paths that comply with the imposed restrictions to obtain the feasible paths.
  • From the feasible paths, select the one with minimum delay as the optimal path.
  • Configure data flow rules in each OF switch through the SDN controller to use the selected path.
When the simulation runs, Mininet creates the network links connecting the ports of each OF switch.
The experiment measures the data transfer between hosts H1 and H6, H2 and H5, and H3 and H4, comparing traditional data forwarding based on the shortest-route algorithm (also referred to as controller default) with our proposed QoSComm data forwarding method. In the first phase, we used the IPERF [35] and D-ITG [36] applications for the network performance tests. In the second phase, we used an MPI parallel application for completion time tests. QoSComm queries the SDN controller to get the network topology and calculates the end-to-end network paths. Table 3, Table 4 and Table 5 show the available network paths for the corresponding source and destination hosts.
In our experiment, the controller default data flow forwarding method is based on the path with minimum hops from H1 to H6, H2 to H5, and H3 to H4 hosts as referred to in Figure 4.
The following steps were defined for test execution:
  • Programming the experimentation topology in Mininet:
  • Create a network topology with two domains (A and B) and one external controller per domain.
  • Run network performance tests from H1 to H6, H2 to H5 and H3 to H4 with IPERF and D-ITG (TCP and UDP) with Controller default data flow forwarding:
    (a)
    First, execute the data transfer tests individually (H1 to H6, then H2 to H5, then H3 to H4). Second, execute the data transfer tests simultaneously. Metric for the first phase: the number of Mbps transferred over a specific time period.
  • Run application performance tests from H1 to H6 host with Controller default data flow forwarding:
    (a)
    Metric for the second phase: MPI application completion time in seconds.
  • Repeat steps 3 and 4, using QoSComm data flow forwarding.
  • Get results.

5. Performance Evaluation

We evaluated the data flow transfer performance between two hosts. In the first phase, we used two applications to test network performance. IPERF and D-ITG were used to generate TCP and UDP traffic between two hosts, since their behavior is similar to that of a BDA transferring data between hosts. D-ITG was used to generate simultaneous data flow transfers between two hosts, as well as packet size variation for each transfer, simulating the behavior of applications transferring data flows over the Internet. In the second phase, we configured an MPI cluster in the Mininet simulator to test MPI application performance. We used an MPI application to transfer messages between two nodes and measured the job completion time. In both test phases, we compared the Controller default and QoSComm data flow forwarding methods. For both phases, the application network QoS requirements for end-to-end paths were: bandwidth ≥ 85 mbps, delay ≤ 30 ms, PLR ≤ 0.008, and jitter ≤ 10 ms.
In the H1 to H6 tests, the Controller used Path 2 (S0, S1, S3, S10, S11, S13) and QoSComm selected Path 14 (S0, S2, S1, S4, S3, S10, S11, S13), as listed in Table 3. For the H2 to H5 tests, the Controller used Path 9 (S2, S1, S3, S10, S8, S9) and QoSComm selected Path 13 (S2, S1, S4, S3, S10, S8, S9), as listed in Table 4. For the H3 to H4 tests, the Controller used Path 16 (S6, S4, S3, S10, S8, S7) and QoSComm selected Path 8 (S6, S5, S4, S3, S10, S8, S7), as listed in Table 5. The Controller and QoSComm host-to-host selected paths have different network metrics. QoSComm calculated and configured the selected paths considering the application network QoS requirements.

5.1. Phase One: Network Performance

5.1.1. IPERF

We used IPERF to test network performance between hosts H1 and H6, H2 and H5, and H3 and H4, applying the Controller default and QoSComm data flow forwarding methods in the simulation model. IPERF was configured with default values, and the time interval was set to 30 s for each data transfer. The IPERF server ran on hosts H6, H5, and H4, and the client on hosts H1, H2, and H3. We repeated the experiment 20 times for UDP and TCP data flows.
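Assuming the Mininet network object from the topology program above is available, the IPERF runs could be driven as in the following hypothetical helper; the host names and the 30 s duration mirror the description above, while everything else (file paths, rate, function name) is illustrative.

```python
def iperf_udp_test(net, server_name="h6", client_name="h1", seconds=30, rate="100M"):
    """Run one UDP IPERF transfer between two emulated hosts and return the client report."""
    server, client = net.get(server_name), net.get(client_name)
    # Start the UDP server in the background and remember its PID so it can be stopped later.
    pid = server.cmd("iperf -s -u > /tmp/iperf_server.log 2>&1 & echo $!").strip()
    report = client.cmd(f"iperf -c {server.IP()} -u -b {rate} -t {seconds}")
    server.cmd(f"kill {pid}")
    return report

# The experiment repeats each measurement 20 times; a simple loop covers that:
# reports = [iperf_udp_test(net) for _ in range(20)]
```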
Figure 5 and Figure 6 show the IPERF UDP and TCP average results in two sets: individual (i) and simultaneous (s) tests. As observed in Figure 5, only in the H1 to H6 individual tests was the Controller data transfer rate higher than QoSComm's, because the Controller path had a higher bandwidth capacity (100 mbps) than the path selected by QoSComm (90 mbps). For the network stress tests running host-to-host transfers simultaneously, QoSComm obtained a higher data transfer rate than the Controller, improving application data transfer performance.

5.1.2. D-ITG

D-ITG was used to send three simultaneous data flows between hosts H1 and H6, H2 and H5, and H3 and H4, applying the Controller default and QoSComm data flow forwarding methods in the simulation model. We created a script that spawns three processes to transfer simultaneous data flows with different constant packet rates (-C, packets per second) and a constant payload size of 512 bytes (-c). On hosts H6, H5, and H4, we launched a D-ITG receive server. Hosts H1, H2, and H3 were used to send data flows to the corresponding receive server. We transferred the UDP data flows from H1 to H6, H2 to H5, and H3 to H4 for 30 s. Figure 7 shows the D-ITG average transfer results, which are the sum of the mbps transferred by the three processes for each test case. We performed two sets of tests: individual (i) and simultaneous (s).
In most cases, the QoSComm network path selection obtained a data transfer rate around 2–4% higher than the Controller default path selection. Only in the H1 to H6 individual tests did the results show less than a 1% difference between the Controller and QoSComm. For the network stress tests running host-to-host transfers simultaneously, QoSComm also obtained a higher data transfer rate than the Controller default path selection. Using QoSComm, we improved the application data transfer performance.

5.2. Phase Two: Application Performance

MPI Application Tests

We configured an MPI cluster with MPICH [37] in the Mininet virtual machine. We used an MPI application for bandwidth tests, transferring messages between compute nodes. The MPI application provides point-to-point communication for a given number of MPI tasks and uses TCP as the transport protocol. The goal of the test is to measure the completion time of tasks running between nodes H1 and H6 using the Controller default and QoSComm data flow forwarding methods.
The MPI application works as follows: We execute the MPI application creating two tasks, one for each host. Each task sends and receives 1,000,000 bytes between H1 and H6 hosts 40 times and then finishes the data transfer process. We executed the MPI application 20 times for each forwarding method and measured job completion time.
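This test pattern can be reproduced with a short ping-pong program. The sketch below uses mpi4py instead of the C MPI code used in the experiments, with the byte count and repetition count taken from the description above; it would be launched with two ranks, e.g., mpiexec -n 2 python pingpong.py, with one rank placed on each host.

```python
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

MSG_SIZE = 1_000_000   # bytes per message, as in the test above
ROUNDS = 40            # send/receive repetitions, as in the test above

buf = bytearray(MSG_SIZE)
start = time.time()

for _ in range(ROUNDS):
    if rank == 0:
        comm.Send([buf, MPI.BYTE], dest=1)      # task on H1 sends ...
        comm.Recv([buf, MPI.BYTE], source=1)    # ... and waits for the echo
    elif rank == 1:
        comm.Recv([buf, MPI.BYTE], source=0)    # task on H6 echoes every message back
        comm.Send([buf, MPI.BYTE], dest=0)

comm.Barrier()
if rank == 0:
    print(f"completion time: {time.time() - start:.2f} s")
```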
The results showed that the MPI application completion time benefited from the QoSComm data flow forwarding strategy compared with the Controller default, as observed in Figure 8. As mentioned above, QoSComm used Path 14 (S0, S2, S1, S4, S3, S10, S11, S13), which has an 11 ms end-to-end delay and a maximum bandwidth capacity of 90 mbps; it took an average of 27 s to complete the application jobs. The Controller default used Path 2 (S0, S1, S3, S10, S11, S13), which has a 20 ms delay and a maximum bandwidth capacity of 100 mbps, and took an average of 32 s to complete. This is a difference of 5 s in favor of QoSComm over the Controller default data flow forwarding method.
For this MPI application test, the results showed that our proposal QoSComm improved the completion time by an average of 18%. We classified this type of Big Data Analytics as a time-sensitive application, which needs to be in constant communication with compute nodes or, at some point, needs to transfer large data sets among compute nodes. For these types of applications, the end-to-end delay becomes a critical parameter, rather than just the network bandwidth, as observed in the test results.
MPI applications are often run within a single domain; however, we executed an MPI application transferring messages in an inter-domain setting to compare its performance under the default data flow forwarding method versus QoSComm. In addition, implementations such as MPICH-G target multi-domain environments [38], allowing BDA to run jobs among distributed compute hosts.
Considering the application test results, QoSComm can contribute to improving the connectivity aspects of an IoT architecture for big data analytics [39], where the authors describe BDA as processing large amounts of data that may be stored in a distributed cloud environment.

6. Conclusions and Future Work

IoT decision-making benefits from Big Data Analytics through the intelligent processing of the massive data collected from IoT devices and the analysis of actionable data in the shortest possible time, enabling quick decisions.
Our proposal, QoSComm, provides network QoS for IoT Big Data Analytics, specifically in its job submission process, without the need to modify the application or network controller schedulers. QoSComm improves BDA completion time when transferring data flows among SDN-based distributed data centers, as shown in the experimental results described in the previous section.
Evaluation results showed that, for time-sensitive applications, it is necessary to consider additional network parameters to provide QoS, such as delay, jitter, and packet loss, rather than just higher bandwidth, in order to minimize completion time. QoSComm allows BDA to program software-defined networks dynamically according to their needs and can also communicate with multiple SDN controllers, providing an inter-domain communication approach.
As future work, we propose the design of a REST API for a data flow distribution model enabling the BDA application scheduler to communicate its additional requirements for job execution (security, computing resources, among others). In addition, we will work on combining QoS-based network paths with a secure forwarding scheme for sensitive-data IoT Big Data Analytics.

Author Contributions

All the authors were involved in research design and conceptualization; Writing—original draft, J.E.L.-R., J.I.N.-H., and R.R.-R.; Methodology, M.V.-B.; Formal analysis, J.C.C.-E.; Writing—review and editing, M.A.C.-L., J.I.N.-H., and R.R.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Centro de Investigacion Cientifica y de Educacion Superior de Ensenada, Baja California, Mexico (CICESE) and the Universidad Autonoma de Baja California, Campus Ensenada, Mexico (UABC).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT: Internet of Things
BDA: Big Data Analytics
SDN: Software Defined Network
QoS: Quality of Service
HPC: High Performance Computing
MPI: Message Passing Interface

References

  1. Xuan, S.; Zhang, Y.; Tang, H.; Chung, I.; Wang, W.; Yang, W. Hierarchically Authorized Transactions for Massive Internet-of-Things Data Sharing Based on Multilayer Blockchain. Appl. Sci. 2019, 9, 5159. [Google Scholar] [CrossRef] [Green Version]
  2. Kos, A.; Tomazic, S.; Salom, A. Benchmarking Methodology and Programming Model for Big Data Process. Int. J. Distrib. Sens. Netw. 2015, 11, 71752. [Google Scholar] [CrossRef]
  3. Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep Learning for IoT Big Data and Streaming Analytics: A Survey. IEEE Commun. Surv. Tutor. 2018, 20, 2923–2960. [Google Scholar] [CrossRef] [Green Version]
  4. Mital, R.; Coughlin, J.; Canaday, M. Using Big Data Technologies and Analytics to Predict Sensor Anomalies. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 15–18 September 2014. [Google Scholar]
  5. Djedouboum, A.C.; Adamou, A.; Gueroui, A.M.; Mohamadou, A.; Aliouat, Z. Big data collection in large-scale wireless sensor networks. Sensors 2018, 18, 4474. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. Commun ACM. 2008, 51, 107–113. [Google Scholar] [CrossRef]
  7. Kijsanayothin, P.; Chalumporn, G.; Hewett, R. On using MapReduce to scale algorithms for Big Data analytics: A case study. J. Big Data 2019, 6, 105. [Google Scholar] [CrossRef]
  8. Stallings, W. Software-Defined Networks and OpenFlow. Internet Protoc. J. 2013, 16, 2–14. [Google Scholar]
  9. Shah, S.A.R.; Wu, W.; Lu, Q.; Zhang, L.; Sasidharan, S.; DeMar, P.; Guok, C.; Macauley, J.; Pouyoul, E.; Kim, J.; et al. AmoebaNet: An SDN-enabled network service for big data science. J. Netw. Comput. Appl. 2018, 119, 70–82. [Google Scholar] [CrossRef] [Green Version]
  10. Lu, Y.; Fu, Q.; Xi, X.; Chen, Z.; Zou, E.; Fu, B. A policy conflict detection mechanism for multi-controller software-defined networks. Int. J. Distrib. Sens. Netw. 2019, 15. [Google Scholar] [CrossRef]
  11. Sloan, J. High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI: A Comprehensive Getting-Started Guide; O’Reilly Media Inc.: Sebastopol, CA, USA, 2004. [Google Scholar]
  12. Maitrey, S.; Jha, C. MapReduce: Simplified Data Analysis of Big Data. Procedia Comput. Sci. 2015, 57, 563–571. [Google Scholar] [CrossRef] [Green Version]
  13. Makpaisit, P.; Ichikawa, K.; Uthayopas, P. MPI Reduce Algorithm for OpenFlow-Enabled Network. In Proceedings of the 15th International Symposium on Communications and Information Technologies (ISCIT), Nara, Japan, 7–9 October 2015; pp. 261–264. [Google Scholar]
  14. U-Chupala, P.; Ichikawa, K.; Iida, H.; Kessaraphong, N.; Uthayopas, P.; Date, S.; Abe, H.; Yamanaka, H.; Kawai, E. Application-Oriented Bandwidth and Latency Aware Routing with OpenFlow Network. In Proceedings of the IEEE 6th International Conference on Cloud Computing Technology and Science, Singapore, 15–18 December 2014; pp. 775–780. [Google Scholar]
  15. Qin, P.; Dai, B.; Huang, B.; Xu, G. Bandwidth-Aware Scheduling with SDN in Hadoop: A New Trend for Big Data. IEEE Syst. J. 2017, 11, 2337–2344. [Google Scholar] [CrossRef] [Green Version]
  16. Jamalian, S.; Rajaei, H. ASETS: A SDN empowered task scheduling system for HPCAAS on the cloud. In Proceedings of the IEEE International Conference on Cloud Engineering, Tempe, AZ, USA, 9–13 March 2015. [Google Scholar]
  17. Alkaff, H.; Gupta, I.; Leslie, L. Cross-Layer Scheduling in Cloud Systems. In Proceedings of the IEEE International Conference on Cloud Engineering (IC2E), Tempe, AZ, USA, 9–13 March 2015. [Google Scholar]
  18. Shang, F.; Chen, X.; Yan, C.; Li, L.; Zhao, Y. The bandwidth-aware backup task scheduling strategy using SDN in Hadoop. Clust. Comput. 2018, 22, 5975–5985. [Google Scholar] [CrossRef]
  19. Egilmez, H.; Dane, S.; Bagci, K.; Tekalp, A.M. OpenQoS: An OpenFlow Controller Design for Multimedia Delivery with End-to-End Quality of Service over Software-Defined Networks. In Proceedings of the Signal & Information Processing Association Annual Summit and Conference, Hollywood, CA, USA, 3–6 December 2012. [Google Scholar]
  20. Owens, H.; Durresi, A. Video over Software-Defined Networking (VSDN). In Proceedings of the 16th International Conference on Network-Based Information Systems, Gwangju, Korea, 4–6 September 2013. [Google Scholar]
  21. Govindarajan, K.; Meng, K.; Ong, H.; Tat, W.M.; Sivanand, S.; Leong, L.S. Realizing the Quality of Service (QoS) in Software-Defined Networking (SDN) Based Cloud Infrastructure. In Proceedings of the 2nd International Conference on Information and Communication Technology (ICoICT), Bandung, Indonesia, 28–30 May 2014. [Google Scholar]
  22. Karaman, M.; Gorkemli, B.; Tatlicioglu, S.; Komurcuoglu, M.; Karakaya, O. Quality of Service Control and Resource Priorization with Software Defined Networking. In Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft), London, UK, 13–17 April 2015. [Google Scholar]
  23. Tomovic, S.; Prasad, N.; Radusinovic, I. SDN control framework for QoS provisioning. In Proceedings of the IEEE 22nd Telecommunications Forum, Belgrade, Serbia, 25–27 November 2014. [Google Scholar]
  24. Tajiki, M.; Akbari, B.; Shojafar, M.; Ghasemi, S.H.; Barazandeh, M.L.; Mokari, N.; Chiaraviglio, L.; Zink, M. CECT: Computationally efficient congestion-avoidance and traffic engineering in software-defined cloud data centers. Clust. Comput. 2018, 21, 1881–1897. [Google Scholar] [CrossRef] [Green Version]
  25. Cosio-Velazquez, E. Modelado de una Arquitectura de Red Definida por Software (SDN) Para el Aprovisionamiento de Recursos Utilizando Cross-Layer-Design (CLD); CICESE: Ensenada, Mexico, 2017. [Google Scholar]
  26. Parvizi, M.; Shadkam, E.; Jahani, N. A hybrid COA/ϵ-constraint method for solving multiobjective problems. Int. J. Found. Comput. Sci. Technol. 2015, 5, 27–40. [Google Scholar] [CrossRef] [Green Version]
  27. Pantuza-Junior, G. A multi-objective approach to the scheduling problem with workers allocation. Gestão Produção 2016, 23, 132–145. [Google Scholar]
  28. Emmerich, M.; Deutz, A. A tutorial on multiobjective optimization: Fundamentals and evolutionary methods. Nat. Comput. 2018, 17, 585–609. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Kreutz, D.; Ramos, F.; Verissimo, P.; Rothenberg, C.E.; Azodolmolky, S.; Uhlig, S. Software-Defined Networking: A Comprehensive Survey. Proc. IEEE 2015, 103, 14–76. [Google Scholar] [CrossRef] [Green Version]
  30. Alsaeedi, M.; Mohamad, M.; Al-Roubaiey, A. Toward Adaptive and Scalable OpenFlow-SDN Flow Control: A Survey. IEEE Access 2019, 7, 107346–107379. [Google Scholar]
  31. OpenFlow. Open Networking Foundation. Available online: https://www.opennetworking.org (accessed on 18 October 2019).
  32. Hwang, R.; Tseng, H.; Tang, Y. Design of SDN-enabled Cloud Data Center. In Proceedings of the IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, China, 19–21 December 2015. [Google Scholar]
  33. OpenDayLight Project. Available online: https://www.opendaylight.org (accessed on 20 February 2019).
  34. Mininet SDN Simulator. Available online: http://www.mininet.org (accessed on 10 July 2019).
  35. IPERF Network Performance Tool. Available online: https://iperf.fr (accessed on 25 July 2019).
  36. Botta, A.; Dainotti, A.; Pescape, A. A tool for the generation of realistic network workload for emerging networking scenarios. Comput. Netw. 2012, 56, 3531–3547. [Google Scholar] [CrossRef]
  37. MPICH Home Page. Available online: http://www.mpich.org (accessed on 5 September 2019).
  38. MPICH G2 Web Site. Available online: http://toolkit.globus.org/ (accessed on 2 October 2019).
  39. Marjani, M.; Nasaruddin, F.; Gani, A.; Karim, A.; Hashem, I.A.T.; Siddiqa, A.; Yaqoob, I. Big IoT Data Analytics: Architecture, Opportunities, and Open Research Challenges. IEEE Access 2017, 5, 5247–5261. [Google Scholar]
Figure 1. IoT Big Data Analytics and QoSComm general view.
Figure 2. QoSComm general flow forwarding selection process.
Figure 3. QoSComm communication process and its reference in the SDN architecture.
Figure 4. Network topology model for simulation. OpenFlow Switches (OFS): S0 to S13.
Figure 5. IPERF UDP data transfer comparison (average mbps). Tests execution: (i) individually; (s) simultaneously. Time interval: 30 s. STDEV: H1-H6i (1.40, 0.81), H2-H5i (0.69, 2.23), H3-H4i (0.97, 1.79); H1-H6s (2.15, 4.02), H2-H5s (1.40, 2.12), H3-H4s (1.21, 3.46).
Figure 6. IPERF TCP data transfer comparison (average mbps). Tests execution: (i) individually; (s) simultaneously. Time interval: 30 s. STDEV: H1-H6i (2.03, 0.85), H2-H5i (0.67, 0.57), H3-H4i (4.29, 6.36); H1-H6s (1.23, 0.54), H2-H5s (0.41, 0.87), H3-H4s (3.20, 3.64).
Figure 7. D-ITG data transfer comparison (average mbps). Tests execution: (i) individually; (s) simultaneously. Time interval: 30 s. STDEV: H1-H6i (0.36, 0.60), H2-H5i (0.13, 0.21), H3-H4i (0.50, 0.05); H1-H6s (0.78, 0.05), H2-H5s (0.42, 0.16), H3-H4s (0.13, 0.07).
Figure 8. Data Flow forwarding method comparison for MPI Application. Average completion time results using simulation topology. STDEV: Controller: 1.15, QoSComm: 1.33.
Table 1. QoS proposals features comparison.
Reference | Network Parameters | Path Selection Algorithm | Multi-Controller | Domain
[19] | Bandwidth, delay, jitter | SPF with delay restriction | No | Single domain
[20] | Bandwidth, delay, jitter | SPF with bandwidth and delay as constraints | No | Single domain
[21] | Bandwidth | Bandwidth control | No | Single domain
[22] | Bandwidth, delay | Bandwidth control, user-defined priority | No | Single domain
[23] | Bandwidth | SPF with bandwidth as constraint | No | Single domain
[24] | Bandwidth | SPF with bandwidth as constraint | No | Single domain
[9] | Bandwidth | SPF with bandwidth as constraint | No | Inter-domain
QoSComm | Bandwidth, delay, jitter, packet loss | Multi-objective, considers application requirements | Yes | Inter-domain
Table 2. Flow rules configuration example for each OFS.
OFS | ID | Match | Action | Priority
S0 | 1 | source: 10.0.0.1, dest: 10.0.0.6, ip-protocol: 6 | output-node-connector: 1 | 201
S2 | 2 | source: 10.0.0.2, dest: 10.0.0.5, ip-protocol: 6 | output-node-connector: 2 | 202
S6 | 3 | source: 10.0.0.3, dest: 10.0.0.4, ip-protocol: 6 | output-node-connector: 2 | 202
S0 | 10 | source: 10.0.0.1, dest: 10.0.0.6, ip-protocol: 17 | output-node-connector: 1 | 210
S2 | 11 | source: 10.0.0.2, dest: 10.0.0.5, ip-protocol: 17 | output-node-connector: 2 | 211
S6 | 12 | source: 10.0.0.3, dest: 10.0.0.4, ip-protocol: 17 | output-node-connector: 2 | 212
... | ... | ... | ... | ...
Table 3. Available network paths from Hosts 1 to 6 in the simulation model.
Path ID | Network Path | Bandwidth (mbps) | Delay (ms) | PLR | Jitter (ms)
1 | S0, S1, S3, S10, S11, S12, S13 | 100 | 25 | 0 | 0
2 | S0, S1, S3, S10, S11, S13 | 100 | 20 | 0 | 0
3 | S0, S1, S3, S10, S8, S11, S12, S13 | 90 | 29 | 0 | 0
4 | S0, S1, S3, S10, S8, S11, S13 | 90 | 24 | 0 | 0
5 | S0, S1, S4, S3, S10, S11, S12, S13 | 100 | 21 | 0 | 0
6 | S0, S1, S4, S3, S10, S11, S13 | 100 | 16 | 0 | 0
7 | S0, S1, S4, S3, S10, S8, S11, S12, S13 | 90 | 25 | 0 | 0
8 | S0, S1, S4, S3, S10, S8, S11, S13 | 90 | 20 | 0 | 0
9 | S0, S2, S1, S3, S10, S11, S12, S13 | 90 | 20 | 0 | 0
10 | S0, S2, S1, S3, S10, S11, S13 | 90 | 15 | 0 | 0
11 | S0, S2, S1, S3, S10, S8, S11, S12, S13 | 90 | 24 | 0 | 0
12 | S0, S2, S1, S3, S10, S8, S11, S13 | 90 | 19 | 0 | 0
13 | S0, S2, S1, S4, S3, S10, S8, S11, S12, S13 | 90 | 16 | 0 | 0
14 | S0, S2, S1, S4, S3, S10, S11, S13 | 90 | 11 | 0 | 0
15 | S0, S2, S1, S4, S3, S10, S8, S11, S12, S13 | 90 | 20 | 0 | 0
16 | S0, S2, S1, S4, S3, S10, S8, S11, S13 | 90 | 15 | 0 | 0
Table 4. Available network paths from Hosts 2 to 5 in the simulation model.
Path ID | Network Path | Bandwidth (mbps) | Delay (ms) | PLR | Jitter (ms)
1 | S2, S0, S1, S3, S10, S8, S9 | 100 | 23 | 0.002 | 2
2 | S2, S0, S1, S3, S10, S8, S7, S9 | 100 | 28 | 0.005 | 4
3 | S2, S0, S1, S3, S10, S11, S8, S9 | 90 | 29 | 0.002 | 2
4 | S2, S0, S1, S3, S10, S11, S8, S7, S9 | 90 | 34 | 0.005 | 4
5 | S2, S0, S1, S4, S3, S10, S8, S9 | 100 | 19 | 0.002 | 2
6 | S2, S0, S1, S4, S3, S10, S8, S7, S9 | 100 | 24 | 0.005 | 2
7 | S2, S0, S1, S4, S3, S10, S11, S8, S9 | 90 | 25 | 0.002 | 2
8 | S2, S0, S1, S4, S3, S10, S11, S8, S7, S9 | 90 | 30 | 0.005 | 4
9 | S2, S1, S3, S10, S8, S9 | 90 | 16 | 0.002 | 2
10 | S2, S1, S3, S10, S8, S7, S9 | 90 | 21 | 0.005 | 4
11 | S2, S1, S3, S10, S11, S8, S9 | 90 | 22 | 0.002 | 2
12 | S2, S1, S3, S10, S11, S8, S7, S9 | 90 | 27 | 0.005 | 4
13 | S2, S1, S4, S3, S10, S8, S9 | 90 | 12 | 0.002 | 2
14 | S2, S1, S4, S3, S10, S8, S7, S9 | 90 | 17 | 0.005 | 4
15 | S2, S1, S4, S3, S10, S11, S8, S9 | 90 | 18 | 0.002 | 2
16 | S2, S1, S4, S3, S10, S11, S8, S7, S9 | 90 | 23 | 0.005 | 4
Table 5. Available network paths from Hosts 3 to 4 in the simulation model.
Path ID | Network Path | Bandwidth (mbps) | Delay (ms) | PLR | Jitter (ms)
1 | S6, S5, S4, S1, S3, S10, S11, S8, S9, S7 | 90 | 34 | 0.011 | 7
2 | S6, S5, S4, S1, S3, S10, S11, S8, S7 | 90 | 29 | 0.008 | 5
3 | S6, S5, S4, S1, S3, S10, S8, S9, S7 | 100 | 28 | 0.011 | 7
4 | S6, S5, S4, S1, S3, S10, S8, S7 | 100 | 23 | 0.008 | 5
5 | S6, S5, S4, S3, S10, S11, S8, S9, S7 | 90 | 28 | 0.011 | 7
6 | S6, S5, S4, S3, S10, S11, S8, S7 | 90 | 23 | 0.008 | 5
7 | S6, S5, S4, S3, S10, S8, S9, S7 | 100 | 22 | 0.011 | 7
8 | S6, S5, S4, S3, S10, S8, S7 | 100 | 17 | 0.008 | 5
9 | S6, S4, S1, S3, S10, S11, S8, S9, S7 | 80 | 29 | 0.006 | 5
10 | S6, S4, S1, S3, S10, S11, S8, S7 | 80 | 24 | 0.003 | 3
11 | S6, S4, S1, S3, S10, S8, S9, S7 | 80 | 23 | 0.006 | 5
12 | S6, S4, S1, S3, S10, S8, S7 | 80 | 18 | 0.003 | 3
13 | S6, S4, S3, S10, S11, S8, S9, S7 | 80 | 23 | 0.006 | 5
14 | S6, S4, S3, S10, S11, S8, S7 | 80 | 18 | 0.003 | 3
15 | S6, S4, S3, S10, S8, S9, S7 | 80 | 17 | 0.006 | 5
16 | S6, S4, S3, S10, S8, S7 | 80 | 12 | 0.003 | 3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
