1. Introduction
With the growing diversity of mobile devices, the number of connected entities has increased explosively, resulting in a sharp increase in data traffic. Telecommunication service providers have tried to keep up with the demand for network capacity by adding network hardware. However, relying on current network equipment alone is inefficient for accommodating the diverse data and signaling required by different terminals. The current network structure, which depends on specific manufacturers, can neither provide the variety of services represented by 5G (5th-Generation mobile communications) and IoT (Internet of Things) nor respond quickly to traffic changes through a flexible maintenance and repair environment. NFV (Network Function Virtualization) has been proposed to address these problems.
NFV is a virtualization technology that realizes network functions on commodity servers, switches, and storage instead of the dedicated network equipment of a specific manufacturer. By implementing network functions as software that can run on any infrastructure resource on commercial hardware that is easy to install and change, a telecommunication service provider achieves operational efficiency and rapid service delivery on a cost-effective network. This study therefore proposes server operation and virtualization techniques that reduce the energy and cost of NFV for future sustainable computing.
The purpose of this study is to design a virtualized network focusing on NFV, which is currently in the spotlight in telecommunications among the virtualization technologies used for cost reduction. The cost figures in this study are based on the third quarter of 2016.
Meanwhile, a conventional system typically runs one OS per physical machine, and in this case its utilization is only about 20–30%. Virtualization was conceived to overcome this low utilization relative to the resources available in the equipment. Virtualization is a technology that logically abstracts and partitions physical resources using a hypervisor. When virtualization is applied, a single system can run multiple operating systems and be utilized at up to about 80%.
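As a simple illustration of this consolidation effect, the following sketch uses the utilization figures quoted above; the number of workloads is a hypothetical example, not a figure from this study.

```python
import math

# Simple illustration of server consolidation: the utilization figures (~20-30% before,
# up to ~80% after) come from the text; the number of workloads is hypothetical.
workloads = 12              # hypothetical services, one per physical server before virtualization
util_per_server = 0.25      # ~20-30% utilization of a non-virtualized server
target_util = 0.80          # utilization achievable on a virtualized server

total_load = workloads * util_per_server                   # total useful work in "full server" units
virtualized_servers = math.ceil(total_load / target_util)  # servers needed after consolidation

print(f"Physical servers before virtualization: {workloads}")
print(f"Physical servers after virtualization:  {virtualized_servers}")  # 4 in this example
```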
Virtualization technologies can be classified into application, desktop, server, storage, and network virtualization. The biggest advantages of virtualization are cost reduction and operational efficiency. Because the amount of hardware needed is reduced while the system is operated efficiently, purchase cost, power consumption, and floor space are all reduced, lowering operating costs. In addition, virtualization allows dispersed resources to be managed in an integrated manner. As a result, demand for virtualization is increasing.
The pattern and volume of network traffic have grown along with the diversity of personal devices and user demands. Because these devices must be interconnected, cloud services are increasing sharply, and telecommunication service providers must deploy different systems to cope with this demand.
Because network capacity may need to expand to previously unimaginable levels, many enterprises regard virtualization as a solution. Given its merits, such as efficient operation and cost reduction, many enterprises have been continuously reviewing its introduction.
In many regions, telecommunication service providers face a serious challenge: to cope with rapidly increasing 4G LTE (4th-Generation Long Term Evolution) traffic from mobile devices and IoT sensors, they must expand core network elements, including the Evolved Packet Core (EPC, 3GPP Mobile Competence Centre, Sophia Antipolis Cedex, France), while controlling cost.
All converged voice and data traffic from 4G LTE-capable mobile devices is carried into the fixed core network of a telecommunication service provider through the EPC. Devices such as smartphones, tablets, connected cars, and smart buildings on 4G LTE all depend on the EPC. In the past, EPC solutions were realized as purpose-built systems; today, mobile telecommunication service providers pursue alternative models to support rapidly increasing network demand. With a virtualized EPC (vEPC) solution, mobile service providers can scale out cost-effectively using standard volume servers instead of purpose-built systems. Continuing to rely on purpose-built systems cannot keep pace with the growth in network traffic volume. Service providers must expand network capacity and introduce new services while avoiding the high purchase cost of purpose-built equipment that cannot be used for multiple purposes. In other words, the vEPC is one of the most attractive alternatives for driving this change. In the 4G LTE network, EPC suppliers provide the signaling, management, control, and accounting functions that are essential for all-IP converged voice and data traffic. Among the functional elements that the EPC provides is the MME (Mobility Management Entity).
In the past, many service providers used purpose-built systems for the EPC, but expanding the EPC with purpose-built equipment to accommodate traffic growth is increasingly expensive, making now an appropriate time for change. In regions where 4G LTE is not yet available, service providers have an opportunity to replace the proprietary system before investing. Service providers in other parts of the world are preparing to supplement or expand their existing EPC solutions to accommodate traffic growth.
Therefore, this paper proposes a vEPC model for server operation and virtualization to save energy and builds a test bed for it. The test bed includes a Java-based Android application. This study presents a system that efficiently reduces network server operation costs based on NFV technology. The performance analysis shows that the cost was reduced by 24% compared to the operation cost of the existing network server. The technology used in this research is expected to be foundational for future sustainable computing. Once a vEPC solution is deployed, service providers can control costs while increasing subscribers, traffic, and access volume.
2. Background
It has been estimated that the volume of CO2 emissions from Information Technology is comparable to that of the aviation industry. The increasing demand for cloud services is pushing data centers to their limits while at the same time forcing them to achieve green computing [1].
Not long ago, Google’s chief executive officer emphasized this point: “What matters most to the computer designers at Google is not speed but power, low power, because data centers can consume as much energy as a city” [2]. The cloud service has recently gained popularity as one of the most convenient IT services, so more power is required to meet demand. Sales of high-definition multimedia content are also increasing rapidly, along with sales of sophisticated computer hardware. These trends have forced Internet providers to seek more efficient power management strategies for their data centers and to consider renewable energy sources to reduce operational cost while avoiding adverse effects on the environment. This challenge also affects algorithm development.
The United States Data Center Energy Usage Report [3] (Lawrence Berkeley National Laboratory, for the U.S. Department of Energy), issued in 2016, reported that data centers in the US consumed approximately 70 billion kWh in 2014, roughly 1.8% of the total electricity used in the US that year. The total volume of global CO2 emissions generated by the IT industry was almost equivalent to that generated by the aviation industry in 2011 (70.9 MMTCO2) [4,5]. Interestingly, some energy efficiency schemes have succeeded in preventing, or at least slowing, the increase in power demand despite the rapid growth of big data: power consumption in data centers increased by only about 4% over 2010–2014, compared with a 24% increase in the previous five-year period and a 90% increase in the five years before that. The main driver of recent growth in power demand is considered to be the increasing number of hyper-scale data centers serving large cloud facilities. Power-efficient techniques and algorithms are therefore essential for controlling power usage in the ever-growing IT environment [6].
Virtualization was discussed in the work Virtual Machine Replication on Achieving Energy-Efficiency in a Cloud by Mondal et al. [7]. The ever-increasing demand for cloud services is compelling Internet providers to construct large-scale virtualized data centers equipped with virtual machines. Replication techniques were applied to reduce the possibility of service failure, and the trade-offs between task completion time and energy use under the different replication methods were determined with comprehensive analytical models that capture the state transitions of virtual machines as well as their power consumption patterns.
Issues pertaining to energy consumption were discussed from an application-level perspective in the article Characterizing Energy per Job in Cloud Applications by Ho et al. [8], which proposed several analytical models for assessing energy consumption and estimating energy per job in data centers with different configurations. The research focuses on how efficient applications are with respect to their performance and the energy consumed per job, where resources are shared and the hosts running virtual machines are heterogeneous in terms of energy profiles. The aim was to determine the most efficient way of utilizing the given resources.
Energy-efficient pricing strategies were discussed in the article Energy-Aware Pricing in a Three-Tiered Cloud Service Market by Paul et al. [9], which presented a new theoretical framework that can be useful when implementing a sustainable pricing policy along with a corresponding optimal resource provisioning policy. The authors also provided performance evaluations, using actual datasets, on electricity prices, renewable generation, workload service requests, and the operational details of data centers [6,10].
2.1. NFV (Network Function Virtualization) and Trend of Virtualization
Because current telecommunication networks are built from the dedicated equipment of network equipment manufacturers, the network must be rebuilt in order to introduce new services and technologies, burdening the telecommunication service provider with excessive cost and time for each introduction. These disadvantages can be reduced with NFV (Network Function Virtualization), which is why telecommunication service providers have become interested in it [10,11].
NFV separates hardware and software on the various pieces of network equipment (NEs) in the network by applying the virtualization concept, with a hypervisor providing the separation. On the hardware side, investment cost is reduced by using commodity IT servers; on the software side, functions run in a virtualized environment regardless of the underlying hardware and OS, easing hardware dependency and allowing commodity hardware to be used [11,12].
NFV, therefore, virtualizes the essential functions of the network on commodity servers with high-performance computing power instead of on expensive dedicated equipment. Existing Physical Network Functions (PNFs) are executed as virtual machines on commodity servers, so there is no need to purchase expensive equipment; installing an application that performs the network function on a general-purpose server makes inexpensive infrastructure possible. The L4–L7 middlebox service functions of telecommunication service providers, which are sold at high prices because they are tightly coupled with specific hardware, can be realized as inexpensive services on commodity servers through applications. Because NFV provides network functions through applications running on servers, no new equipment needs to be installed, reducing both purchase and operating costs. Moreover, because the network infrastructure is realized in software, network functions can be improved through software upgrades, and centralized control becomes possible [13,14].
Meanwhile, because existing network equipment provides one Physical Network Function (PNF) per physical device, there is no competition for hardware resources. If multiple virtualized functions or applications are installed on one server, however, they compete for the limited computing resources.
Due to this resource competition, a Virtualized Network Function (VNF) with lower priority may not be allocated sufficient computing resources. This creates an imbalance in network resource use between end users served by a generously provisioned, high-priority VNF and those served by a lower-priority one, leading to unintended differences in service quality or traffic processing priority between users.
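The paper does not describe a specific mechanism for handling this competition; purely as an illustration of one common approach on a KVM/libvirt host, the CPU scheduling weight of a high-priority VNF can be raised relative to a lower-priority one. The domain names and share values below are hypothetical placeholders, not the configuration used in this study.

```python
# One common way to mitigate VNF resource competition on a KVM/libvirt host:
# weight CPU scheduling so a high-priority VNF is favored by the hypervisor.
# Domain names and share values are placeholders; this is not the paper's mechanism.
import libvirt

conn = libvirt.open("qemu:///system")           # connect to the local KVM/QEMU hypervisor

high_priority = conn.lookupByName("vnf-mme")    # latency-sensitive VNF (placeholder name)
low_priority = conn.lookupByName("vnf-dpi")     # best-effort VNF (placeholder name)

# cpu_shares is a relative weight used by the hypervisor's CPU scheduler.
high_priority.setSchedulerParameters({"cpu_shares": 2048})
low_priority.setSchedulerParameters({"cpu_shares": 512})

conn.close()
```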
Figure 1 shows the vision of network function virtualization.
2.2. NFV Standardization
The standardization of NFV technology involves ETSI (European Telecommunications Standards Institute, Sophia Antipolis, France) defining the NFV architecture and its main functional blocks, and promoting standards for open networking technology that operates by connecting those functional blocks through standardized interfaces.
The ETSI NFV working group is divided into six groups, and its main objective is to create an industry standard for NFV. Documents prepared by the ISG (Industry Specification Group) are not enacted as official ETSI standards but are regarded as supplementary specifications [11].
Meanwhile, OPNFV (Open Platform for Network Function Virtualization, San Francisco, CA, USA) is a project that supplies an open-source software platform for deploying NFV solutions, driven by equipment/application developers and solution vendors. OPNFV designs its framework for broad interoperability around NFV use models that satisfy the requirements of platform providers, application developers, and users [12]. In other words, OPNFV turns cloud-computing-based NFV and existing network appliance hardware into software-based VNFs, offered as a business model that reduces CAPEX and OPEX for operators such as service providers and cloud infrastructure vendors [13].
Figure 2 shows a project in which the VIM (Virtualized Infrastructure Manager) is based on OpenStack, an open-source platform running on bare-metal systems. For compute, the KVM hypervisor was applied; for storage, Ceph, a Linux-based (The Linux Foundation, San Francisco, CA, USA) distributed file system that scales to petabytes, was applied. For network virtualization, OVS (Open vSwitch, The Linux Foundation, San Francisco, CA, USA) was applied as a software switch running inside the hypervisor. For the SDN platform, the open-source ODL (OpenDaylight) controller was applied to control the SDN software switches.
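As an illustrative sketch only (not an OPNFV deliverable), booting a VNF instance on an OpenStack-based VIM such as the one in Figure 2 could look as follows with the openstacksdk Python library; the cloud entry, image, flavor, and network names are hypothetical placeholders.

```python
# Minimal sketch: booting a VNF virtual machine on an OpenStack-based VIM.
# The cloud entry, image, flavor, and network names are hypothetical placeholders.
import openstack

# Connect using credentials defined under the "nfv-lab" entry in clouds.yaml (assumed).
conn = openstack.connect(cloud="nfv-lab")

image = conn.compute.find_image("vnf-base-image")       # guest image holding the VNF software
flavor = conn.compute.find_flavor("vnf.medium")         # vCPU/RAM/disk profile for the VNF
network = conn.network.find_network("vnf-data-plane")   # tenant network for VNF traffic

server = conn.compute.create_server(
    name="vnf-instance-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance reaches ACTIVE, then print its basic information.
server = conn.compute.wait_for_server(server)
print(server.name, server.status, server.addresses)
```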
OPNFV is based on the basic NFV architecture for systematic framework development; the project is organized into Requirements, Integration & Testing, Collaborative Development, and Documentation.
2.3. EPC and vEPC (Virtual EPC)
The recent emergence of distributed cloud environments, Network Function Virtualization (NFV), and Software-Defined Networking (SDN) has influenced the evolution of the EPC (Evolved Packet Core), the mobile core network. The mobile core network architecture is evolving toward EPC function virtualization not only to reduce CAPEX/OPEX but also to facilitate innovative service development and deployment.
The EPC (Evolved Packet Core) [14] is an integrated framework for packet-based real-time and non-real-time services and is a high-performance, high-capacity, all-IP mobile core network for LTE (Long Term Evolution), as defined in the 3GPP (3rd-Generation Partnership Project) Release 8 standard. It integrates the mobile core functions that were separated in the existing 2G/3G mobile networks into Circuit-Switched (CS) for voice and Packet-Switched (PS) for data. In other words, the 4G network is defined as an end-to-end, all-IP flat architecture comprising the IP-capable user device, the IP-based LTE base station (eNB), the EPC, and the application domain of IMS (IP Multimedia Subsystem) and non-IMS services (Figure 3). The EPC is an essential component of end-to-end IP service that enables the introduction of innovative services, applications, and new business models. In terms of mobile network evolution, the EPC separates the control plane from the data plane and improves network performance through a flat IP structure that simplifies the hierarchy between mobile data components (e.g., a data connection from a base station passes through only the EPC gateway).
The precise definition of the EPC components varies by network equipment vendor, but the common components are the S-GW (Serving Gateway), P-GW (Packet Data Network Gateway), and MME (Mobility Management Entity), sometimes with the addition of the PCRF (Policy and Charging Rules Function). Briefly, the S-GW is a data-plane component that manages user mobility as a local mobility anchor, acting as the border point between the RAN (Radio Access Network) and the core network and maintaining the data path between the base station and the P-GW. The P-GW is a data-plane component that acts as the IP anchor point: it connects user terminals to external networks (e.g., the Internet, the IMS core, and other data networks) and performs the high-speed packet processing at the service edge of the mobile operator's network, including bearer creation and teardown, packet inspection/filtering, policy enforcement, accounting, and reporting. Although the S/P-GW are data-plane components, they also perform control-plane tasks such as mobility control. The MME is a control-plane component that allocates an appropriate S-GW to each user terminal, coordinates bearer channel setup in the network, tracks user terminals and performs network resource allocation/optimization as they move, interworks with the HSS (Home Subscriber Server), and provides security management between the user terminal and the network. The PCRF, a network node newly defined in the 3GPP Release 7 standard, is an evolved form of the PDF (Policy Decision Function) and CRF (Charging Rules Function); in the 3GPP Release 8 standard, PCC (Policy Charging and Control) is extended and improved to cover non-3GPP access such as Wi-Fi (Wireless Fidelity) and fixed IP broadband networks.
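For quick reference, the component roles described above can be summarized as a simple data structure; the following Python sketch is purely illustrative, and the field names are our own.

```python
# Illustrative summary of the EPC components described above (field names are arbitrary).
EPC_COMPONENTS = {
    "S-GW": {
        "plane": "data",
        "role": "local mobility anchor between RAN and core; maintains eNB to P-GW path",
    },
    "P-GW": {
        "plane": "data",
        "role": "IP anchor to external networks; bearer handling, inspection, policy, accounting",
    },
    "MME": {
        "plane": "control",
        "role": "S-GW selection, bearer setup, terminal tracking, HSS interworking, security",
    },
    "PCRF": {
        "plane": "control",
        "role": "policy and charging rules (PCC), extended to non-3GPP access in Release 8",
    },
}

if __name__ == "__main__":
    for name, info in EPC_COMPONENTS.items():
        print(f"{name:5s} [{info['plane']} plane] {info['role']}")
```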
Due to recent requirements such as diverse multimedia services, QoE (Quality of Experience), VoLTE (Voice over LTE), and M2M (Machine-to-Machine) communication, mobile network service providers want a network that handles unpredictable user demands more efficiently at low cost, without degrading service quality, and that supports rapid service development, innovation, and expansion. Accordingly, they are paying close attention to Network Function Virtualization as a technology that responds more efficiently to the dynamic and unpredictable characteristics of mobile broadband services [15], and they are reviewing its applicability from various perspectives. NFV implements and operates network functions (e.g., S/P-GW, Content Delivery Network (CDN), Deep Packet Inspection (DPI)) that were traditionally deployed on dedicated hardware platforms as software on general-purpose hardware, with the goal of accelerating network innovation and reducing its cost. In terms of standardization, multiple PoCs (Proofs of Concept) have been carried out by global mobile network operators and network equipment vendors within the ETSI (European Telecommunications Standards Institute) NFV ISG (Industry Specification Group) to virtualize representative network functions [15]. In particular, the first network functions that many mobile operators plan to commercialize are the IMS and EPC functions.
2.4. Synchronization of Virtual EPC Model and Virtualization Candidate Component
Early discussions on Network Function Virtualization (NFV) focused mainly on reducing equipment cost through the use of large volumes of general-purpose hardware, increasing equipment utilization through virtual network functions, and reducing operating cost through automation. On closer inspection, however, although equipment cost is significant, it has a lower priority in the mobile core than in other areas of the network, because hardware accounts for only about 20–30% of EPC cost and the EPC itself represents a relatively small portion of the overall network equipment cost. In terms of operating cost, on the other hand, the benefit is clear if the scope of the EPC is expanded to include Gi/SGi applications and other core virtual network functions. The diversity of hardware/software supplied by multiple vendors in the mobile core network (adopted to avoid lock-in to a specific vendor) can be handled through automation in the NFV model, even though it adds overhead in terms of maintenance and management. Beyond equipment and operating costs, the potential benefits of a virtual EPC include rapid and flexible service development and deployment, elastic capacity adjustment, service chaining, network slicing, and interworking with SDN technology. At present, various mobile core functions are regarded as candidates for virtualization, and the entire mobile core is expected to be fully virtualized within the next few years, operating at the same level of performance and functionality as existing EPC equipment. The mobile core functions can be classified into data-plane nodes (S/P-GW, etc.), control-plane nodes (MME, policy, IMS, etc.), and Gi/SGi LAN functions.
Control-plane functions already run mostly on x86-based server platforms today, so they are technically well suited to virtualization and relatively straightforward, requiring only modifications to work at the virtualization layer. Policy or IMS application servers are considered the first functions to virtualize, and mobile operators that have not yet deployed IMS infrastructure are interested in virtualized versions. The MME is somewhat different: it is the most important component among the EPC functions, and any failure due to overload or a service problem in the MME has critical consequences. Mobile operators and vendors therefore tend to take a conservative stance, and MME pooling is likely to be introduced as a short-term alternative. Since a virtual MME can extend capacity dynamically when usage increases rapidly and is fundamentally well suited to elasticity and failure recovery, however, some network equipment vendors consider it a better long-term alternative to MME pooling.
Virtualizing the S/P-GW data-plane functions is far more difficult than virtualizing control-plane functions because stable, high-speed packet processing is required. Gateways are generally implemented on edge routers or on Advanced Telecom Computing Architecture (AdvancedTCA, Wakefield, MA, USA) platforms using dedicated line cards with CPU processors to optimize data-plane performance. Since several vendors already support multiple functions on these traditional platforms, GGSN (Gateway GPRS Support Node), P-GW, and S-GW roles can be assigned in software; this is already a kind of virtualization, and gateways are operated in pools in this way for N + 1 redundancy and efficiency. Technically, therefore, virtualizing the components is not difficult, but satisfying the performance requirements of the virtual network is, so care must be taken from the design stage. For example, in Europe's largest mobile core network, traffic reaches up to 50 Gbit/s. Since a standard four-core server blade can handle about 5 Gbit/s, the virtual EPC model is workable provided the traffic can be distributed across multiple points. However, because packet processing on server platforms is not sufficiently stable (a steady packet processing rate cannot be guaranteed under load and other conditions), mobile operators remain concerned. The first virtual EPC deployment scenarios are therefore expected to be enterprises that operate independently without much impact on the existing mobile network infrastructure, or new dedicated EPC business models without heavy load, such as M2M.
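As a rough dimensioning check using only the figures quoted above (50 Gbit/s peak traffic, about 5 Gbit/s per standard four-core blade), and assuming simple N + 1 redundancy:

```python
import math

# Rough dimensioning sketch using the figures quoted above (illustrative only).
peak_traffic_gbps = 50    # peak traffic cited for Europe's largest mobile core network
blade_capacity_gbps = 5   # approximate throughput of a standard four-core server blade

blades_needed = math.ceil(peak_traffic_gbps / blade_capacity_gbps)  # blades to carry peak load
blades_with_redundancy = blades_needed + 1                          # simple N + 1 redundancy

print(f"Blades for peak load: {blades_needed}")                   # 10
print(f"Blades with N + 1 redundancy: {blades_with_redundancy}")  # 11
```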
In most mobile core networks, multiple appliances sit between the 3GPP core and external networks/services, particularly on the SGi interface. Middlebox-style solutions that provide multiple functions exist but require complicated, mostly manual configuration. Because all traffic belonging to a specific APN (Access Point Name), including traffic that does not require specific middlebox processing, must pass through the same processing path, equipment utilization is low. Strictly speaking, these are not EPC functions defined in the 3GPP standard, and they have less impact on service delivery than the EPC functions themselves, so mobile operators feel less risk in applying new solutions here. This makes the area particularly suitable for virtualization, together with SDN-based service chaining that enables flow-level traffic routing. Not all mobile operators are interested, however; some have embedded DPI and TCP performance optimization in the gateway itself and operate a simple SGi interface, with a firewall placed between the mobile core network and external networks such as the Internet.
2.5. Matters to Consider in the Realization of Virtual EPC Function
How to port EPC software from a vendor's hardware platform to a virtualized environment, and which software should be redesigned from scratch to take full advantage of the cloud, are points of controversy when implementing virtual EPC functions, and most mobile operators consider them important issues. When some virtual solutions were tested, the performance of software simply ported from the existing (hardware-based) solution was poor, and a new design was required to optimize operation in the cloud environment. This is not a problem for simple applications, but it can be a major problem for core infrastructure such as the EPC, and it must be solved especially for data-plane functions that require high performance (e.g., the P-GW). In general, there are fundamental differences between software designed to extract maximum performance from dedicated hardware resources and software designed to adjust capacity dynamically in an integrated infrastructure environment without being tied to physical devices. To store state information, local storage or a distributed SAN (Storage Area Network) environment can be used. For resilient failure handling, a 1 + 1 or N + M redundancy model can be used. For scaling, a more powerful server can be introduced, or capacity can be expanded in software together with general load-balancing techniques. For communication between components, RPC (Remote Procedure Call), shared memory, message passing, or web service APIs (Application Programming Interfaces) can be used.
When applying virtual EPC functions, the control-plane functions can be operated in a large, centralized data center as long as low-latency transmission with sufficient capacity is available. For the data plane, the problem is more complicated because of the requirement for stable high-speed packet processing. In principle, a COTS (Commercial Off-The-Shelf) server can be used for the data plane; if only the processing rate is an issue, it can be addressed by adding more COTS servers thanks to their low cost. Where stable high-speed packet processing is required (e.g., S/P-GW), however, running through a hypervisor on a commodity CPU cannot guarantee performance. As an alternative, SR-IOV (Single Root I/O Virtualization) [16] or other direct-I/O mechanisms are being discussed to connect the virtual machine directly to the network interface hardware; bypassing the hypervisor gives predictable, stable performance but reduces elasticity, and there is not yet a consensus on the better approach. For now, SR-IOV is applied to the data-plane functions. The I/O capability of the COTS server is another issue: for the P-GW, where stable high-speed performance is important, a standard network interface card alone cannot ensure sufficient performance. Various methods exist to address this, but Intel's DPDK (Data Plane Development Kit) [17] has received the most attention.
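As a minimal sketch of the SR-IOV approach discussed above, assuming a KVM/libvirt host with an SR-IOV-capable NIC, a virtual function can be attached directly to a data-plane guest; the domain name and PCI address are hypothetical placeholders, and this is not the configuration used in the paper.

```python
# Minimal sketch: attaching an SR-IOV virtual function (VF) directly to a KVM guest
# via libvirt, so data-plane traffic bypasses the hypervisor's software switch.
# The domain name and the PCI address of the VF are hypothetical placeholders.
import libvirt

VF_HOSTDEV_XML = """
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x3d' slot='0x02' function='0x1'/>
  </source>
</interface>
"""

conn = libvirt.open("qemu:///system")      # connect to the local KVM/QEMU hypervisor
dom = conn.lookupByName("vnf-pgw-01")      # data-plane VNF guest (placeholder name)

# Attach the VF to the running guest and persist it in the domain configuration.
flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
dom.attachDeviceFlags(VF_HOSTDEV_XML, flags)

conn.close()
```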
If virtual network functions are operated in a large, centralized data center under the NFV model, costs can be saved through economies of scale, and this works well for IT and web-scale applications and even some mobile network applications. For the 3GPP EPC, however, the trend is toward a more distributed gateway structure than the 3G packet core, and placing content caches close to users is attracting attention. The virtual gateway is therefore likely to be deployed on distributed NFV infrastructure PoPs (Points of Presence), i.e., small-scale data centers, because of latency, elasticity, and transmission efficiency. In that case, how to define a specific virtual network function running on the distributed NFV infrastructure and how to maintain user/service state as a user or service moves become issues. The role of the orchestration and management system is to know what I/O, memory, and computing resources are available at a given point, and to know the current and predicted future utilization. If a service or its data needs to move, automation is required, since the network must provide dynamic connectivity. Likewise, SDN technology can play an important role in the NFV model: if the virtual EPC is combined with SDN, mobile core network functions can be placed appropriately within the NFV infrastructure to facilitate performance optimization (Figure 4).
2.6. Virtualization
The capacity of an emulation test bed scales when experimental nodes are mapped onto limited physical resources [18]. For example, the DETER (Defense Technology Experimental Research Laboratory) containers system (DETERLab, The University of Southern California, Los Angeles, CA, USA) [18,19] can support experiments two orders of magnitude larger than the physical test bed. Most emulation test beds support various state-of-the-art virtualization techniques. With root access on test bed machines, users can create different types of virtual machines on the provisioned machines [19]. For instance, if multiple experimental nodes with different operating systems are hosted on a single test bed machine, the user can apply full virtualization solutions such as KVM [20], VMware [21], or VirtualBox [22] to create virtual machines with different guest operating systems. If the user needs to minimize virtualization overhead, the virtual machines can instead be created with lightweight OS-level virtualization techniques such as LXC [23]. VMware, Inc. has the largest market share in virtualization technology and already offers technologies supporting Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), all verified as virtualized services.
Conversion to a virtualized system brings several benefits, which is why we have focused the virtualization work in this paper on the server and the network. The benefits are: a reduction in data center management costs and new investment; greater human resource efficiency for repetitive tasks; and environmentally friendly IT [23,24], achieved through low-power operation by consolidating rack space and maintaining a constant-temperature, constant-humidity working environment. Virtualization technology can be broadly divided into application, server, network, desktop, and storage virtualization.
One of the most compelling aspects of distributed cloud environments based on wide-area networks is that they reduce costs while offering redundancy, reliability, and geo-replication. Further motivations for the distributed cloud include reducing wide-area traffic and latency, efficient computation at the edge, and virtualization [25,26,27,28,29,30,31]. The research challenges for networking can be summarized as follows: distributed cloud communications over the WAN, network virtualization, network security and privacy, high performance, edge network architecture with middleware, and scalability [31,32,33,34,35,36,37].
3. Server Operation and Virtualization to Save Energy and Cost
This chapter covers both the laboratory test and the actual enterprise test bed. First, the laboratory test bed environment is as follows (the cost of virtualization is 30 million Korean won as of Q3 2017).
3.1. Test Bed in Large Scale
The OSI reference model runs from the 1st layer at the bottom to the 7th layer at the top. The 1st layer, the Physical Layer, defines the mechanical, electrical, and physical specifications for connecting systems. The 2nd layer, the Data Link Layer, transfers data reliably over the physical link. The 3rd layer, the Network Layer, delivers data to its destination safely and quickly by dividing it into packets. The 4th layer, the Transport Layer, performs transparent data transfer between two end systems using protocols such as TCP or UDP. The 5th layer, the Session Layer, provides the means to manage communication between the application processes at both ends. The 6th layer, the Presentation Layer, converts data when the sender and receiver use different representations, or provides a common format. Lastly, the 7th layer, the Application Layer, gives application programs access to the network so that they can exchange data in the language of the application.
This model was applied to the vEPC through NFV virtualization in this study. First, the network was configured using HP DL380 servers as the hardware; building the network with these servers corresponds to the Physical and Data Link Layers. The Network and Transport Layers carry data packets from the vEPC to their destinations over the network using TCP/IP and UDP. Above the Session Layer, the application area, realized as the virtualized vEPC software, provides the actual services and is closest to the telecommunication service provider that operates the system.
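Purely as an illustration of the transport-layer role described above (in an EPC/vEPC the user-plane traffic is carried in GTP-U tunnels over UDP port 2152), the following sketch sends a single UDP datagram; the address and payload are placeholders, not real GTP-U traffic.

```python
# Illustrative only: sending a small datagram over UDP, the transport used by
# GTP-U user-plane tunnels in EPC/vEPC deployments (UDP port 2152).
# The destination address and payload here are placeholders, not a real GTP-U packet.
import socket

GTPU_PORT = 2152                          # standard GTP-U UDP port
destination = ("192.0.2.10", GTPU_PORT)   # documentation-range address (placeholder)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(b"example-payload", destination)  # network/transport layers deliver the packet
finally:
    sock.close()
```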
Figure 5 shows a design for reducing network server cost from a hardware perspective. Specifically, it shows the shift from hardware dependent on a specific manufacturer to commodity hardware using NFV technology. In the legacy hardware on the left, each board has a fixed role: there are boards for call processing, for system management, and for handling incoming calls, showing that the equipment is purpose-built for call processing. Because the roles are rigidly divided, compatibility between the different boards cannot be secured, and they are difficult to manage.
Having to keep spare units of each board in case of failure is a further disadvantage for the telecommunication service provider. This problem is resolved to a considerable degree by moving to NFV, shown on the right. After hardware and software are separated by a hypervisor such as VMware or KVM on a commodity HP DL380 or BL460 server, the manufacturer's call-processing function is realized as an application.
HP (Palo Alto, CA, USA) and Dell (Round Rock, TX, USA) servers are readily available on the market, enabling flexible and efficient network operation within a single pool of virtual resources.
Table 1 shows the hardware configured in this study.
Table 2 presents the software configured in this study.
Figure 6 presents a design for reducing network server operation cost from a software perspective, for a generic NFV product. The basic NFV hardware is a commodity HP or Dell server based on Intel x86 processors that can be purchased easily on the market.
On this hardware, a hypervisor such as KVM or VMware separates hardware from software so that applications are realized as VMs. Because the product is delivered in VM form, the service provider can freely adjust the number of VMs to customer demand; for example, a rapid increase in data traffic can be handled by deploying additional VMs without purchasing or installing separate hardware. In addition, because VMs can be migrated between servers, a VM can be moved to another server when a server or VM fails. Since the hardware itself is easy to procure, costs are also reduced, yielding the CAPEX/OPEX reduction effect.
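The paper does not give the mechanism used to add VMs; as an assumed illustration of such traffic-driven scale-out on an OpenStack-managed pool, the threshold, resource names, and traffic-measurement function below are hypothetical.

```python
# Illustrative sketch of traffic-driven VM scale-out on an OpenStack-managed pool.
# The threshold, resource names, and get_current_traffic_gbps() are hypothetical.
import openstack

TRAFFIC_THRESHOLD_GBPS = 4.0   # assumed per-VM capacity before another VM is added


def get_current_traffic_gbps() -> float:
    """Placeholder for a real traffic measurement (e.g., from a monitoring system)."""
    return 4.7


def scale_out_if_needed(conn, count: int) -> None:
    """Launch one more vEPC VM when measured traffic exceeds the assumed threshold."""
    if get_current_traffic_gbps() <= TRAFFIC_THRESHOLD_GBPS:
        return
    image = conn.compute.find_image("vepc-image")      # placeholder image name
    flavor = conn.compute.find_flavor("vepc.large")    # placeholder flavor name
    network = conn.network.find_network("vepc-net")    # placeholder network name
    conn.compute.create_server(
        name=f"vepc-vm-{count + 1:02d}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )


if __name__ == "__main__":
    connection = openstack.connect(cloud="nfv-lab")    # credentials assumed in clouds.yaml
    running = len(list(connection.compute.servers()))  # current VM count in the project
    scale_out_if_needed(connection, running)
```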
3.2. OpenFlow Based Virtual EPC Test Bed at Lab
Figure 7 shows the overall structure of the OpenFlow-based virtual EPC test bed. Each EPC component, realized in software, runs in one of five virtual machines connected by multiple VLANs (gray boxes) to facilitate communication between components. A separate management network (red dotted line) additionally allows direct communication between components and carries the messages between the OpenFlow data and control planes. The virtual machines in the test bed are created on general x86-based servers; each is allocated a 160 GB hard disk and 2 processors, with 16 GB of memory allocated to the virtual machines hosting the data-plane switches (SGW-U and PGW-U) and 8 GB allocated to the other virtual machines.
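Assuming an OpenStack-style VIM manages these virtual machines (the paper does not state which manager was used), flavors matching the allocations above could be defined as follows; the flavor names are placeholders.

```python
# Illustrative only: defining VM flavors matching the test-bed allocations above
# (2 vCPUs, 160 GB disk; 16 GB RAM for data-plane VMs, 8 GB for the others),
# assuming an OpenStack-style VIM is used to manage the virtual machines.
import openstack

conn = openstack.connect(cloud="nfv-lab")   # credentials assumed in clouds.yaml

# Flavor for data-plane VMs (SGW-U, PGW-U): 2 vCPUs, 16 GB RAM, 160 GB disk.
conn.compute.create_flavor(
    name="vepc-data-plane", vcpus=2, ram=16 * 1024, disk=160
)

# Flavor for the remaining VMs: 2 vCPUs, 8 GB RAM, 160 GB disk.
conn.compute.create_flavor(
    name="vepc-control-plane", vcpus=2, ram=8 * 1024, disk=160
)
```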
The constructed virtual EPC functions are connected to an actual commercial LTE base station. An ip.access LTE 245F small cell is used as the base station [26,27,38,39], and a Nexus 5 is used as the terminal; Band 7, supported by the Nexus 5, is allocated and used in the test bed. Operation is restricted to an RF-shielded room that blocks wireless signals, to avoid interference with the commercial LTE frequency bands in service in the country.
As system sizes grow and various attack routines emerge, the volume of data that must be processed has increased enormously. We therefore constructed a system that processes data by combining several PCs, using computer clustering and virtualization technology (VMware). The improvement in system performance and the flexibility of the test bed were evaluated with a purpose-designed algorithm, and the results were clearly favorable.
As mentioned earlier, it is quite difficult to construct an analytical model for such a complex system, because the necessary experiments are often conducted on small internal intranet networks or reduced test beds. To assist learners in security curricula, we devised a clustering-based test environment using virtualization technology and open-source Linux.
Network Function Virtualization (NFV) virtualizes telecommunication equipment and installs it as software on commodity servers for telecommunication service, unlike the conventional approach of deploying specific hardware.
The greatest merit of the vEPC is that it can be installed within a few hours, whereas existing LTE switching equipment requires several months. In addition, capacity can be extended simply by allocating additional servers, without building new equipment, which significantly shortens the time to launch new communication services. It also copes flexibly with unexpected traffic increases, providing a more stable service. Moreover, since the vEPC is initially being commercialized for IoT services, it can absorb the data traffic growth caused by the wide introduction of IoT. This paper discusses the vEPC in terms of server operation and virtualization to save energy and cost, and it develops an Android application for the convenience of users (server administrators).
3.3. Realization of Android Application for vEPC Server Management
The user interface of the vEPC server management application is shown in Figure 8. The title appears in the top bar, and the menu on the left includes Home, Message, Share, Load, Server, and Setting. Each menu implements a smartphone User Interface (UI) designed around the UX (User Experience) of smartphone users. The goal of the UI design is to present the location and status of each virtualized server conveniently in a table. The table in the middle shows information on server services as they are delivered, so the inflow of data to a server can be followed because the data's location can be identified as it moves. The table at the top right shows current traffic and transaction volume, as in other server UIs. From the left, it shows the ID, server type, status (virtualization status and idle resources), region, date, time, and commands for user convenience.
Figure 9 presents the application screen that monitors server usage, showing information on the usage of idle server resources with a graph and gauge images so the user can grasp it at a glance. The panel on the left is a table of the server's idle resources by day; it shows that on 16 June the virtualized server transmitted 43 times. The panel at the top right shows the overall percentage of idle resource usage; with the cursor placed on a given date, such as 11 June, the figure expresses that date's share of the total usage. The panel at the bottom right is a screen for selecting servers and equipment: a server can be selected from the list, the far left column is a check box where modifications can be made with icons, and the last number is the equipment number. Add and Delete functions in the bar at the top right allow servers to be added and deleted.
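The data format behind these screens is not given in the paper; purely as an assumed illustration, a per-server record matching the UI fields described above, together with the daily-share calculation shown in the top-right panel, might look like this.

```python
# Hypothetical per-server monitoring record matching the UI fields described above
# (ID, server type, status, region, date, time); not the paper's actual data format.
from dataclasses import dataclass
from datetime import date, time


@dataclass
class ServerStatusRecord:
    server_id: str            # e.g., "DL380-07" (placeholder)
    server_type: str          # e.g., "vMME" or "vGW"
    virtualized: bool         # virtualization status shown in the status column
    idle_resource_pct: float  # idle resources shown in the status column
    region: str
    day: date
    at: time
    transmissions: int        # idle-resource usage count shown in the daily table


def daily_usage_share(records: list[ServerStatusRecord], target: date) -> float:
    """One day's share of the total idle-resource usage, as in the top-right panel."""
    total = sum(r.transmissions for r in records)
    day_total = sum(r.transmissions for r in records if r.day == target)
    return day_total / total if total else 0.0
```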
Figure 10 presents the vEPC device administrator user interface.
Figures 9 and 10 are used to monitor the current server status.
Figure 11 shows the vEPC device administrator user interface for hardware management. This screen is used to manage the DL380 servers and switches used in the vEPC system.
The devices in the figure correspond to the test bed: the upper part shows the quantity and current status of the DL380 servers, and the lower part shows the quantity and current status of the switches. All 13 servers and all 4 switches are reported as ON; if a device stops working, a new line appears in the table showing it as OFF.
Meanwhile, pressing the DL380 entry in the table in Figure 11 opens its detail view, and devices can be added or deleted with the icons below. The table displays the device name, quantity, and current status so that users can operate the system conveniently.
Table 3 compares and analyzes the cost of EPC and vEPC: the EPC costs 1.8 billion Korean won and the vEPC costs 1.326 billion Korean won. The cost breakdown is as follows.
4. Discussion
The price of existing EPC (Evolved Packet Core) equipment is 1.8 billion Korean won (KRW) per unit. Expecting that NFV virtualization technology would reduce the equipment purchase and installation prices, the three major telecommunication service providers in Korea prepared vEPC (Virtual Evolved Packet Core) deployments: SK Telecom commercialized a vEPC and network resource management system, KT developed a vEPC for 5G in cooperation with Alcatel-Lucent, and LG U+ started core equipment virtualization with global network companies such as Affirmed Networks.
How much, then, can telecommunication service providers expect to save by introducing NFV virtualization? The comparison with the previous price structure is as follows. In the EPC produced by a network manufacturer, hardware, OS, middleware, and application software come as an all-in-one package, as shown on the left. In the vEPC on the right, however, the server is bought from HP and the vEPC software is purchased separately from the software manufacturer.
Figure 11 shows how the vEPC changes when NFV virtualization is applied. With the existing EPC, the telecommunication service provider purchased both hardware and software manufactured by the dedicated network equipment manufacturer. With the vEPC, hardware and software are separated: the service provider buys the hardware from a commodity hardware manufacturer and purchases the software from the existing network equipment manufacturer.
Figure 12 shows the vEPC architecture designed in this paper.
Compared with the existing technology, this work proposes a vEPC suited to Korean data centers to reduce costs. In addition, by developing an Android application that manages servers effectively, it allows server administrators to reduce energy consumption conveniently.
5. Conclusions and Future Work
The overall implementation diagram shown above is designed with an intuitive structure. Compared with the era of dedicated network equipment, the number of switches increased because of the switches for internal communication inside the servers and the control switch for operations management; functions that used to be integrated inside the dedicated network equipment now have to be added when commodity servers are used. In the specific configuration, the vMME comprises 4 servers and the vGW comprises 7 servers. Internal switches are placed for internal communication between the servers, the control switch enables operations management of the system by interworking with the CMS and EMS, and a separate external switch interworks with the external network.
Thirteen HP DL380 servers and four switches were used for the vEPC network; converted at Q3 2016 prices, this comes to about 400 million Korean won. If the software price is estimated at 1 billion won, roughly half the price of the existing dedicated network equipment, the vEPC can be constructed for 1.326 billion Korean won, which is 24% less than 1.8 billion Korean won.
Figure 13 shows the difference in cost between the vEPC and the existing EPC. Although the figure varies by network manufacturer, about 1.8 billion Korean won is generally known to be spent on EPC system construction, whereas only 1.326 billion won is needed if the vEPC is constructed as proposed in this study.
In addition, the vEPC has been developed for server operation so that it can support the system. Through these efforts, system performance improved from 3.244 h to 2.04 h in terms of data processing time (see Figure 14).
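From the two processing times reported above, the relative improvement can be checked with a short calculation.

```python
# Simple check of the reported data-processing times (hours).
before_h = 3.244
after_h = 2.04

improvement = (before_h - after_h) / before_h
print(f"Processing time reduced by {improvement:.1%}")  # about 37.1%
```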
When the cost of constructing the telecommunication system in the existing way was compared with the cost through NFV virtualization, both cost and energy savings were realized. The service provider can reconfigure the network more quickly and easily in the event of abrupt traffic changes or a new service launch, and the price is lower than that of the existing network equipment.
A system that efficiently reduces the network server operation cost based on the NFV technology is presented in this study. The performance analysis has shown that the cost was significantly reduced by 24% compared to the operation cost of the existing network server. It is expected that the technology used in this research will be foundational to future sustainable computing.
While it may not be easy to replace the existing, well-functioning systems with NFV virtualization in the telecommunication service industry, where network performance and stability are critical, if the price competitiveness demonstrated here is reproduced elsewhere, NFV virtualization may become acceptable for wider deployment.