1. Introduction
Disaster Response is the second phase of the four-phase Disaster Management process (Preparedness, Response, Rehabilitation and Reconstruction, Mitigation), a process that governments and authorities follow to reduce potential damage from hazards, assure prompt and appropriate assistance to victims during a disaster, and recover afterwards, according to the WHO training package document [1]. Disaster Response takes place in the immediate aftermath of a disaster and aims to minimize damage by conducting assistance services (search and rescue), distributing supplies, and providing medical care. In this phase, emergency responders are normally divided into teams with different missions. Reliable and timely information exchange between these responders and their commanders is key to an effective disaster response phase.
However, several challenges make efficient communication difficult in a disaster scenario. First, infrastructure is usually damaged after a disaster. With the messaging server down, most conventional centralized message delivery applications cannot work. Second, communication over the current IP-based network shows limitations. Given responders' dynamic roles and high mobility during the disaster response phase, it is difficult to know each individual IP address to contact. This causes significant delays in delivering emergency information during the disaster. As a result, resources may be allocated inappropriately and rescue assistance may arrive late, which can lead to serious and unwanted consequences.
Because of these challenges, it is necessary to find a network solution with built-in mobility support that enables location-independent, role-based group communication between responders. In recent years, several studies [2,3,4] have shown that Information-Centric Networking (including NDN, one of its architectures) can address the weaknesses of IP-based networks in providing these features during a disaster. Specifically, NDN routes packets by content name instead of the IP address that identifies the content's location in an IP-based network. This allows NDN consumers to fetch data from anywhere in the network that stores a copy of the data whose name matches the Interest packet (the NDN request packet). This location-independent feature enables role-based group message delivery in NDN: people in the same messaging group can fetch messages using the group's unique name prefix. Moreover, NDN's location independence enables native mobility support. When a user moves to a new location, data can be requested by its name, without having to learn the new location address as in an IP-based network.
Although several studies [3,4,5,6,7,8,9,10,11,12] have leveraged NDN to deal with disaster management issues, real implementations of an NDN-based architecture over existing infrastructure during a disaster are lacking; most results are limited to simulation analysis. Moreover, when deploying an NDN network in a disaster scenario, additional challenges beyond role-based communication must be considered. First, since infrastructure is normally damaged during a disaster, network nodes must be replaceable quickly. Second, despite NDN's built-in mobility support, only consumer mobility is natively handled. Because emergency responders are highly mobile, the producer mobility problem must also be addressed: NDN needs to be enhanced so that the network can automatically build routing paths to a producer's new location after it moves. Another challenge is the intermittent network caused by disaster hazards. To effectively manage tasks during the disaster response phase, reports from responders to the management center and commands from commanders to responders must be delivered without loss, since missing information can seriously affect search and rescue efforts.
In this paper, we design and implement an NDN-based disaster response support system over Edge Computing infrastructure, with KubeEdge as the chosen edge platform, to solve the above issues. In the proposed architecture, we design a deployment strategy to establish an NDN network at both the single-region and multi-region levels. NDN network functions are deployed as containers from the cloud to edge equipment to provide emergency communication. Each cluster contains several edge nodes that provide an NDN network for one region, and the master nodes of neighboring clusters are tunneled to connect the NDN networks of two regions. If edge nodes are damaged by the disaster, the containerized NDN functions are deployed from the cloud to the replacement nodes. With the Named-data Link State Routing protocol (NLSR) [13] enabled, the NDN network converges quickly after replacement. We also enhance NDN with a protocol between user devices and border edge nodes to solve the mobility challenge. Finally, we utilize KubeEdge to provide reliable information sharing between cloud and edge under discontinuous network conditions. The architecture is implemented using KubeEdge on multiple servers and NUCs. For end users, we design a dedicated NDN disaster application that provides message delivery and information exchange with the management center. Our proof-of-concept evaluation shows that the architecture achieves the following:
A deployment architecture that provides emergency communication over an NDN network and NDN device management for disaster information exchange
Faster network convergence after replacing network nodes damaged by the disaster, compared with an IP-based network
Faster mobility handover compared with Mobile IP [14] and the rendezvous mobility solution for NDN [15]
Low information exchange transmission overhead between the cloud and responder devices
Loss-free information exchange between responders and the management center at the cloud, compared with the normal NDN method that does not use an edge platform.
Compared with the preliminary version [16], this paper provides more details about the architecture design and presents implementation results and analysis.
The remaining sections are organized as follows: Section 2 presents the related work. Section 3 presents our proposed architecture and system design, including the NDN network deployment architecture over edge computing, the NDN mobility protocol, and the information exchange mechanism between the cloud and responders through KubeEdge. Section 4 describes our implementation in detail, and Section 5 presents the evaluation results.
3. System Design
The general architecture for deploying an NDN network over edge computing infrastructure has two separate parts: cloud and edge. The cloud side is responsible for managing the edge nodes and for deploying containerized NDN network functions onto them. The edge side contains all the edge nodes located around the disaster areas. Each edge node runs one containerized NDN router. Among them, the nodes that provide a Wi-Fi access point are border edge nodes; they act as NDN gateways through which responders connect and exchange information over the NDN network.
When deploying this architecture in disaster scenarios, a suitable edge platform that can work well under disaster conditions is required. Based on the characteristics discussed in the previous section, we choose KubeEdge as the platform over which to deploy the NDN network.
3.1. NDN Deployment over KubeEdge Architecture
We first describe the general network architecture for a single region, where one KubeEdge cluster is used to deploy the NDN network. The architecture is shown in Figure 3.
KubeEdge CloudCore is the sole component on the cloud side. CloudCore manages all edge nodes in the cluster and the information of the responder devices that connect to them. It is also responsible for deploying the containerized NDN routers on the correct edge nodes based on their pre-defined YAML files.
The edge side of the architecture contains edge nodes with KubeEdge EdgeCore installed. Each edge node runs one instance of a containerized NDN router deployed from CloudCore. Border edge nodes have Wi-Fi access points that provide an NDN gateway for responders to connect to the NDN network. If an edge node is damaged during the disaster, the EdgeCore binary can easily be installed on any network equipment (even resource-constrained devices) thanks to its lightweight size. After that, given that the CloudCore information is pre-installed on the replacement edge node by an emergency repair team member, the node can re-join the KubeEdge cluster and deploy the corresponding NDN router based on the command from CloudCore. At this moment, the NDN network is recovered.
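As an illustration of this deployment model, the following sketch uses the official kubernetes Python client to pin one containerized NDN router to a specific edge node; the container image name is hypothetical, and the pre-defined YAML files used in our system express the same idea declaratively rather than programmatically.

```python
# Minimal sketch: deploy one containerized NDN router to a chosen edge node
# via the Kubernetes API that CloudCore extends. Image name is a placeholder.
from kubernetes import client, config

def deploy_ndn_router(edge_node_name: str) -> None:
    config.load_kube_config()  # credentials for the KubeEdge/Kubernetes master

    container = client.V1Container(
        name="ndn-router",
        image="example/ndn-router:latest",  # hypothetical NFD + NLSR image
        ports=[client.V1ContainerPort(container_port=6363)],
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        node_selector={"kubernetes.io/hostname": edge_node_name},  # pin to one edge node
        host_network=True,  # let the NDN router use the node's network directly
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": f"ndn-router-{edge_node_name}"}),
        spec=pod_spec,
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": f"ndn-router-{edge_node_name}"}),
        template=template,
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=f"ndn-router-{edge_node_name}"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Because the Deployment is managed from the cloud, re-running the same manifest after a replacement node re-joins the cluster is enough to bring its NDN router back.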
For a large-scale disaster area that spreads over multiple regions, multiple KubeEdge clusters are needed to provide the NDN network.
Figure 4 shows our designed architecture for deploying the NDN network using KubeEdge over multiple regions.
The NDN network in each region connects to an NDN router deployed at the CloudCore node. This router acts as the NDN gateway of the region and connects to an IP gateway router. To connect the networks of two regions, a high-speed, high-bandwidth TCP tunnel is created between the two IP gateway routers. This design follows a real NDN deployment presented in [27].
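The sketch below illustrates, in simplified form, how one region's gateway NDN router could reach the other region once the TCP tunnel is up: an NFD face is created toward the remote gateway through the tunnel and the remote region's name prefix is routed onto that face with nfdc. The addresses and prefix are hypothetical.

```python
# Minimal sketch: connect two regional NDN gateways over a pre-established tunnel.
import subprocess

REMOTE_GATEWAY = "tcp4://10.0.0.2:6363"      # remote gateway as seen through the tunnel (placeholder)
REMOTE_REGION_PREFIX = "/disaster01/busan"   # name prefix served by the remote region (placeholder)

def connect_remote_region() -> None:
    # Create an NFD face toward the remote region's gateway NDN router.
    subprocess.run(["nfdc", "face", "create", REMOTE_GATEWAY], check=True)
    # Forward Interests for the remote region's prefix through that face.
    subprocess.run(
        ["nfdc", "route", "add", "prefix", REMOTE_REGION_PREFIX, "nexthop", REMOTE_GATEWAY],
        check=True,
    )

if __name__ == "__main__":
    connect_remote_region()
```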
3.2. NDN Emergency Communication Design
During the disaster response phase, emergency responders are normally split into several groups. Each group includes several members who have the same role and perform the same mission, and each responder may have multiple roles. Hence, we design an emergency communication namespace that allows each responder to participate in multiple group message rooms based on their roles.
Figure 5 shows how messages are delivered inside the NDN network using our designed namespaces.
The distributed NDN message delivery mechanism is based on ChronoSync [28]. Each group uses two kinds of Interest packets: the group interest and the user message interest. They are named after the role of the responders in the group, and their format is given in Table 1. The group interest is used to fetch the sequence number of new messages from other users, while the user message interest is used to fetch the content of a message. In the balanced state, when there is no new message from any responder in the group, everyone sends a group interest with the same sync number. When one responder creates a new message, his device replies with a Data packet containing the new message sequence number to the other responder devices. They then send a message interest with the received sequence number to fetch the new message. After that, every responder device in the group sends a new sync interest with the sync number increased by one, and the balanced state is reached again.
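For illustration, the two Interest name formats can be sketched as below. The exact component layout of Table 1 is not reproduced here, so the "/sync/<sync-no>" and "/<user-id>/msg/<seq-no>" suffixes are illustrative assumptions rather than the exact wire format.

```python
# Minimal sketch of the two Interest names used for group messaging.

def sync_interest_name(group_prefix: str, sync_no: int) -> str:
    # Sent by every member in the balanced state; answered with the
    # sequence number of a newly produced message.
    return f"{group_prefix}/sync/{sync_no}"

def message_interest_name(group_prefix: str, user_id: str, seq_no: int) -> str:
    # Sent after learning a new sequence number; answered with the message content.
    return f"{group_prefix}/{user_id}/msg/{seq_no}"

# Example with one of the Seoul scenario channel prefixes:
print(sync_interest_name("/disaster01/seoul/hospital", 12))
print(message_interest_name("/disaster01/seoul/hospital", "responder02", 13))
```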
With this namespace design, we can provide flexible and location-independent emergency communication for responders.
Figure 6 shows how we apply our namespace design for message delivery in a Seoul disaster scenario. Four responders participate in three messaging channels. The “/disaster01/seoul/dongjak/emergency” channel is for all emergency responders (including hospital, fire, and police responders) on a mission in Dongjak. The “/disaster01/seoul/hospital” channel is for all hospital responders in Seoul. The “/disaster01/seoul/general” channel is the general channel for everyone. Each responder can join multiple channels based on their roles. When a responder connects to a border edge node and joins a channel, the route to the responder's device is automatically advertised to the NDN network by the edge node. Hence, no matter where responders are, they can send and receive messages on their channels using the group name prefixes.
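The role-based channel membership of this scenario can be summarized with the following sketch; the join logic is expressed in simplified form for illustration only.

```python
# Illustrative role-to-channel membership for the Seoul scenario described above.
CHANNELS = {
    "dongjak_emergency": "/disaster01/seoul/dongjak/emergency",
    "seoul_hospital": "/disaster01/seoul/hospital",
    "seoul_general": "/disaster01/seoul/general",
}

def channels_for(roles: set[str], district: str) -> list[str]:
    """Return the channel prefixes a responder joins, based on role and district."""
    joined = [CHANNELS["seoul_general"]]              # everyone joins the general channel
    if "hospital" in roles:
        joined.append(CHANNELS["seoul_hospital"])     # all hospital responders in Seoul
    if district == "dongjak":
        joined.append(CHANNELS["dongjak_emergency"])  # all responders on a Dongjak mission
    return joined

print(channels_for({"hospital"}, district="dongjak"))
```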
3.3. NDN Mobility Support Design
As responders need to move frequently during their missions in the disaster response phase, mobility support is an essential feature that our network architecture should provide. Our proposed design is shown in Figure 7.
After a responder moves, the NDN name prefixes of his device must be re-advertised to the NDN network so that message delivery can continue. To minimize the handover time, we add two pairs of clients and servers to the user equipment and border edge nodes so that this advertisement can be performed immediately at the node the responder device connects to. Normally, when a responder joins a group channel, the group prefix and the message prefix are sent from the advertising client to the advertising server on the edge node, where they are advertised to the NDN network by the NLSR engine. These prefixes are also saved in the local database of the advertising client. When the device moves to a new location and connects to a new border edge node, the advertising client detects the network change and informs the ndn-autoconfig [29] client, which automatically creates an NDN connection with the corresponding server running on the edge node. After the NDN connection is established, the prefixes saved at the advertising client are sent to the server to be advertised to the NDN network, and routes back to the user equipment for these prefixes are created at the same time. At this moment, the responder can send and receive NDN messages normally.
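The following is a minimal sketch of the advertising client's handover logic rather than the exact implementation: the announcement wire format (a JSON list over a plain TCP socket) and the server port are assumptions made for illustration.

```python
# Minimal sketch of the device-side advertising client used for mobility handover.
import json
import socket

ADVERTISING_SERVER_PORT = 5050        # hypothetical port of the edge-side advertising server
saved_prefixes: list[str] = []        # local database of already-announced prefixes

def on_channel_joined(group_prefix: str, message_prefix: str, edge_ip: str) -> None:
    """Announce prefixes when the responder joins a group channel."""
    saved_prefixes.extend([group_prefix, message_prefix])
    send_prefixes(edge_ip, [group_prefix, message_prefix])

def on_network_changed(new_edge_ip: str) -> None:
    """Called when the device detects a new Wi-Fi association (handover).

    ndn-autoconfig is assumed to have already re-created the NDN face toward
    the new border edge node before this function runs.
    """
    send_prefixes(new_edge_ip, saved_prefixes)

def send_prefixes(edge_ip: str, prefixes: list[str]) -> None:
    # Send the prefixes to the advertising server, which hands them to NLSR.
    with socket.create_connection((edge_ip, ADVERTISING_SERVER_PORT)) as sock:
        sock.sendall(json.dumps(prefixes).encode())
```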
3.4. NDN Device Management Using KubeEdge as Edge Computing Platform
Effective disaster response task management requires the commander at the management center to have an overall picture of what is happening around the disaster areas. Hence, we utilize the KubeEdge device management feature to create an NDN responder device management system for the centralized commander in the cloud. Through this system, the commander can receive the reported mission status of all responders (current event, requirement, location) from their devices. Based on this information, the commander can directly update the responders' missions and roles from the cloud to their devices. Moreover, we take advantage of KubeEdge's reliable cloud-edge transmission so that no information (reports from devices and updates from the cloud) is dropped under intermittent network conditions.
The information exchange mechanism between the cloud and responder devices is shown in Figure 8. Update information from the commander at the cloud is sent to EdgeCore and published to the corresponding Mosquitto topics; subscribed devices then receive the updates. Meanwhile, reported information from responders is processed by the device mapper and published to a Mosquitto topic. EdgeCore gets the reported data from its subscription and sends it to the cloud, where the commander can retrieve it from the Kubernetes API Server.
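A minimal device-side sketch of this exchange is shown below, assuming the paho-mqtt client. The twin topic names follow the commonly documented KubeEdge layout but should be treated as assumptions here, and the device name is hypothetical.

```python
# Minimal sketch: a responder device exchanging status with EdgeCore via Mosquitto.
import json
import paho.mqtt.client as mqtt

DEVICE_ID = "responder-device-01"  # hypothetical device name registered in KubeEdge
REPORT_TOPIC = f"$hw/events/device/{DEVICE_ID}/twin/update"        # device -> cloud reports
UPDATE_TOPIC = f"$hw/events/device/{DEVICE_ID}/twin/update/delta"  # cloud -> device updates

def on_message(client, userdata, msg):
    # Mission/role updates pushed from the commander at the cloud via EdgeCore.
    update = json.loads(msg.payload)
    print("Received mission update:", update)

client = mqtt.Client()  # for paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION1
client.on_message = on_message
client.connect("127.0.0.1", 1883)  # Mosquitto broker running on the edge node
client.subscribe(UPDATE_TOPIC)

# Report a status (event / requirement / location) to the cloud through EdgeCore.
status = {"event": "building collapse", "location": "Dongjak", "requirement": "ambulance"}
client.publish(REPORT_TOPIC, json.dumps(status))

client.loop_forever()
```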
5. Evaluation
5.1. Network Convergence Time in Case of Node Replacement
During a disaster, network nodes are often damaged and need to be replaced. The network convergence time after replacement should be as short as possible to re-enable emergency communication. Hence, we evaluated the network convergence time of our system by comparing the replacement of NDN edge nodes with that of IP edge nodes. Both NDN routers and IP routers are deployed as containers over KubeEdge. We chose the EIGRP routing protocol [34] for the IP network because several studies [35,36] have shown it to be the fastest-converging IP routing protocol. The network convergence time is measured from the moment the replacement node joins the KubeEdge cluster until the routing advertisement process is completed:
T_{NC} = T_{CJ} + T_{RD} + T_{A}
where T_{NC} is the network convergence time, T_{CJ} is the cluster joining time, T_{RD} is the containerized router deployment time, and T_{A} is the advertisement time. We gradually increased the number of nodes replaced at the same time to evaluate the performance. The network convergence time of our system is shown in Figure 13, and the comparison with the IP system is shown in Figure 14.
Figure 13 illustrates that our system achieves a very fast convergence time even when multiple nodes need to be replaced simultaneously. The network recovers to its normal state in less than 2 s after replacement, and the impact of increasing the number of simultaneously replaced nodes is trivial (only about 0.1 s per node). Figure 14 shows that the NDN network with the NLSR routing protocol converges much faster than the IP network with EIGRP: the NDN network requires only 1.8 s to converge while the IP network requires 10.5 s when seven nodes are replaced. Moreover, as the number of simultaneously replaced nodes increases, the convergence time of the NDN network grows only slightly while that of the IP network grows significantly. The difference is caused by NLSR's faster advertisement time, thanks to its multipath routing calculation feature [37].
5.2. Mobility Handover Duration
We compared our mobility support method with the rendezvous NDN mobility support proposed in [15] and with Mobile IP [14]. We consider two levels of network topology: one for a single region and one for cross-region communication. The scenario is group communication between two responders connected to two border edge nodes in the same district (Gangnam in our scenario); one responder then moves to another district. The topologies and scenarios are shown in Figure 15.
For the rendezvous NDN mobility method, we deploy a rendezvous server at the center of each region's topology, and the rendezvous servers of the regions are interconnected. The rendezvous server functionality is kept the same as in [15]. When it receives a mobility handling request (a route advertisement request from a user device when it connects to a new edge node), the server updates the intermediate nodes between itself and the device with the new routes. Additionally, any Interest that does not have a route to the newly moved device is forwarded to the rendezvous server, which then forwards it to the device using the newly updated routes.
We measured the mobility handover duration based on the number of network hops that the mobility handling request from the device needs to traverse. Figure 16 shows the comparison between our mobility method, the rendezvous NDN method, and Mobile IP.
The comparison shows that our proposed mobility support method has the lowest mobility handover duration. In our method, the mobility handling process is conducted right at the edge node that the moving device connects to, so the number of hops in both topologies is only 1. In the rendezvous NDN method, the mobility handling request must be forwarded from the edge node to the rendezvous server, so the number of hops is 2 for the single-region topology and 3 for the cross-region topology (one more hop between the two rendezvous servers). In the Mobile IP method, an announcement packet is sent from the newly connected edge node to the previously connected one, so the number of hops is 4 and 7 for the single-region and cross-region topologies, respectively.
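These hop counts can be summarized with the simple model below, in which the handover duration is approximated as the hop count multiplied by a per-hop handling delay; the delay value is an arbitrary placeholder for illustration, not a measured number.

```python
# Hop counts reported above, with an illustrative duration model.
HOPS = {
    "proposed (edge-local advertisement)": {"single region": 1, "cross region": 1},
    "rendezvous NDN":                      {"single region": 2, "cross region": 3},
    "Mobile IP":                           {"single region": 4, "cross region": 7},
}

PER_HOP_DELAY_MS = 10  # placeholder per-hop request-handling delay

for method, topologies in HOPS.items():
    for topology, hops in topologies.items():
        print(f"{method:38s} {topology:13s} ~{hops * PER_HOP_DELAY_MS} ms")
```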
5.3. Transmission Overhead When Exchanging Information between Cloud and Device
We evaluated the benefit of using KubeEdge to manage NDN devices by comparing it with two disaster management systems that do not use an edge computing platform: a normal publish/subscribe NDN management system [38] and the state-of-the-art NDN-DM system [39]. In our system, KubeEdge uses Mosquitto as the publish/subscribe framework for information exchange between the cloud and devices, whereas the NDN publish/subscribe communication framework and the NDN push-based mechanism are used in the other two systems, respectively. We built two NDN device management systems following these works' designs to compare with ours. As mentioned earlier, responders can upload an event, a requirement, or their location to the cloud, and the commander in the cloud can update responder missions; we consider each event/requirement/location/mission as one status. We monitored the number of packets transmitted between the cloud and a device as the number of exchanged statuses increases. The result is shown in Figure 17.
The result shows that the number of transmitted packets when using NDN publish/subscribe grows proportionally with the number of statuses, whereas it equals the number of statuses when using KubeEdge or the NDN push mechanism of NDN-DM. Specifically, the number of packets exchanged between the cloud and devices with the NDN publish/subscribe framework is four times higher than with our system or NDN-DM. The reason is that NDN publish/subscribe needs to exchange four Interest/Data packets to synchronize one status [38], while the cloud/device only needs to push one packet containing the status to the Mosquitto framework in KubeEdge; likewise, NDN-DM needs only one packet to push a status to its destination. With lower transmission overhead when exchanging statuses between the cloud and devices, our system can avoid congestion and packet loss under the high-traffic network conditions of disaster scenarios. Our system is as efficient as NDN-DM in reducing information transmission overhead; its advantage over NDN-DM is presented in the next part.
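The overhead difference follows directly from the per-status packet counts, as the small model below illustrates; the status counts are arbitrary examples, not experimental settings.

```python
# Transmission overhead model: 4 Interest/Data packets per status for NDN
# publish/subscribe versus 1 pushed packet per status for KubeEdge and NDN-DM.
def packets(num_statuses: int, per_status: int) -> int:
    return num_statuses * per_status

for n in (10, 50, 100):
    print(f"{n} statuses: NDN pub/sub = {packets(n, 4)} | KubeEdge / NDN-DM = {packets(n, 1)}")
```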
5.4. Packet Recovery Capability in Intermittent Network
During the disaster response phase, the network is not reliable, so packet recovery capability is key to avoiding the loss of valuable information. We again compared our system with the NDN publish/subscribe system and NDN-DM to show the advantage of using KubeEdge to recover from packet loss in an intermittent network. In the cross-region topology, we disconnected several network links and performed 50 status exchanges between the cloud and devices. We then monitored the successful status exchange ratio (the number of received statuses over the number of sent statuses) after reconnecting the network links. The result is shown in Figure 18.
The figure shows that KubeEdge successfully recovers every packet dropped due to network disconnection. Meanwhile, for the other two systems, the percentage of successful information exchanges gradually drops as the number of disconnected links increases. This result shows the effectiveness of KubeEdge's reliable cloud-edge transmission design, which resends packets after the network links recover; information exchanged through the other two NDN systems suffers from packet loss because they have no recovery support.
5.5. System Performance Discussion
In this part, we aggregate the system performance evaluations presented in the four previous parts in Table 3 to highlight our system's contribution based on the comparison with other relevant systems.
To summarize, our system demonstrates the advantages of integrating NDN with Edge Computing infrastructure in disaster management, especially in the disaster response phase. First, we demonstrated the benefit of using NDN over IP to achieve fast network convergence when replacing damaged network nodes, an aspect that, to the best of our knowledge, has not been studied in previous work. Second, we designed a mobility support mechanism for NDN that outperforms previously proposed solutions. Third, we showed the effectiveness of edge computing in NDN device information management: compared with other recently proposed NDN management systems, the use of an edge computing platform not only reduces transmission overhead but also ensures recovery capability under intermittent network conditions. Finally, we deployed our proof-of-concept system on real platforms with KubeEdge as the chosen edge platform, while most related works are limited to simulation studies.
6. Conclusions
This paper presented a deployment architecture for an NDN network over Edge Computing infrastructure to support the disaster response phase. We demonstrated a proof-of-concept system by implementing the architecture on the KubeEdge edge computing platform. Our system assists the disaster response phase by enabling emergency group communication and disaster information exchange through NDN device management. The experimental and analytical results showed that the proposed architecture deals well with disaster challenges: it achieves faster network convergence than an IP-based network when nodes are replaced, and faster mobility handover than Mobile IP and the rendezvous NDN mobility method. Moreover, KubeEdge provides lower information exchange transmission overhead and enables packet recovery in intermittent network conditions.
For future work, our system can be improved by bringing enhanced NDN features that have been validated by simulation in previous studies into real implementations. Moreover, the edge computing platform can also be utilized to support NDN IoT devices, since these devices can greatly help collect large amounts of information in disaster areas.