MIA-NDN: Microservice-Centric Interest Aggregation in Named Data Networking
Abstract
1. Introduction and Motivation
- We extend the vanilla NDN architecture to support microservice-based in-network computation by proposing a microservice-centric interest-naming mechanism that incorporates the content name, microservice name, input parameters, and delimiters that separate these components.
- MIA-NDN introduces a dynamic PIT timer derived from the microservice input parameters and their corresponding values to avoid PIT entry losses in the case of long-running microservice computation interests.
- A hash-based PIT aggregation mechanism was developed to achieve efficient microservice-centric PIT aggregation, taking the input parameters and their corresponding values into account so that every entry in the PIT table is unique.
- MIA-NDN was evaluated through extensive ndnSIM-based simulations, which reveal its benefits in terms of efficient interest aggregation, microservice computation satisfaction, and network overhead.
2. Background and Related Work
2.1. Background
- Microservices in a nutshell: Microservices are gaining increasing attention from enterprises. Big tech companies such as Netflix, Twitter, Amazon, and Spotify utilize microservices in their businesses [22]. Microservices are a software development architecture in which applications are built as loosely coupled, small components [23]. Such small components are easy to develop, deploy, and test independently. Each component performs its own task and communicates with other microservices through well-defined communication interfaces. This fine-grained structure enables scalability, allowing a component to be updated or replaced without affecting the other components [24].
- Named data networking in a nutshell: The NDN Internet architecture shifts communication from host-centric to data-centric and provides named-content-based communication [17]. NDN uses two types of packets, interests and data. NDN allows consumers to send content-named interests and retrieve the corresponding data packets at the network layer. A detailed view of NDN interest and data packet processing is illustrated in Figure 1. Each NDN router maintains three tables. (1) Content store (CS): the CS is transient storage at the NDN router that keeps a copy of incoming data packets to satisfy future consumer requests for the same data. (2) Pending interest table (PIT): the PIT stores entries for forwarded interests that are waiting for the requested data; each PIT entry waits for the data packet until its associated timer expires. (3) Forwarding information base (FIB): the FIB keeps next-hop information toward the content producer or provider.
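The interplay of the three tables can be sketched as a toy router model. This is a minimal illustration only; the class and field names are ours, not NFD's actual data structures, and the FIB lookup is simplified to the first name component:

```python
from dataclasses import dataclass, field

@dataclass
class NdnRouter:
    cs: dict = field(default_factory=dict)    # Content Store: name -> cached data
    pit: dict = field(default_factory=dict)   # PIT: name -> set of incoming faces
    fib: dict = field(default_factory=dict)   # FIB: name prefix -> next-hop face

    def on_interest(self, name: str, in_face: int):
        if name in self.cs:                   # 1) CS hit: satisfy from cache
            return ("data", self.cs[name])
        if name in self.pit:                  # 2) PIT hit: aggregate, do not forward
            self.pit[name].add(in_face)
            return ("aggregated", None)
        self.pit[name] = {in_face}            # 3) new entry, forward via FIB
        prefix = name.split("/")[1] if "/" in name else name
        return ("forward", self.fib.get(prefix))
```

A data packet arriving later would satisfy every face recorded in the PIT entry and optionally populate the CS, which is what makes the aggregation step safe.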
2.2. Related Work
3. Proposed Scheme
3.1. Proposed Scheme Architecture
3.2. Proposed Scheme
- Interest Packet Format: In the proposed MIA-NDN scheme, an interest can request either a microservice computation or simple content. When requesting simple content, the interest packet follows the conventional NDN content-naming structure. When requesting microservice-centric content, however, the interest packet follows the microservice-centric interest-naming format shown in Figure 3. In the MIA-NDN scheme, each interest is composed of a content name followed by a microservice name and input parameters. The microservice-centric interest packet has four parts: (i) the content name, (ii) the microservice tag, (iii) the microservice name, and (iv) the input parameters. Figure 3 depicts the microservice-centric interest packet structure, where /Sejong/HongikUniversity/MainGate/image represents the content name (the globally routable name), MS is a delimiter tag separating the microservice component from the content name, FeatureExtraction is the name of a microservice, and image1, image2, image3 are the input parameters. Among these name components, the first is mandatory, whereas the last three are optional.
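The naming format above can be illustrated with a small parser. This is a sketch under the assumption that MS appears as a literal delimiter component in the name; the function and variable names are ours:

```python
def parse_ms_interest(name: str):
    """Split a MIA-NDN style name into (content_name, ms_name, params).

    Assumed layout: /<content name>/MS/<microservice>/<p1>/<p2>/...
    A conventional NDN name (no MS tag) yields (name, None, []).
    """
    components = [c for c in name.split("/") if c]
    if "MS" not in components:
        return "/" + "/".join(components), None, []
    i = components.index("MS")
    content = "/" + "/".join(components[:i])      # globally routable part
    ms_name = components[i + 1] if i + 1 < len(components) else None
    params = components[i + 2:]                   # microservice input parameters
    return content, ms_name, params
```

Making the first component mandatory and the rest optional, as the paper specifies, is what lets the same pipeline serve plain content interests unchanged.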
- Interest Aggregation: In the proposed MIA-NDN, interest aggregation comprises two steps: (i) hashing and (ii) aggregation. First, the microservice interest packet's hash is calculated; then, interest aggregation is performed in the PIT table using that hash value. A detailed description of the hashing and aggregation process is given as follows.
- (a)
- Microservice-centric interest hashing and aggregation: NDN's core feature is content naming, and it has a profound impact on network performance (e.g., lookup time and memory consumption). Microservice-centric interests may be large; for example, in the feature-extraction scenario, the input parameters may contain large images, and such packets may consume a considerable amount of memory in the PIT table. NDN is a search-based Internet architecture in which the tables (CS, PIT, and FIB) are consulted before interest and data packets are forwarded; therefore, aggregating such large interests directly in the PIT table is not optimal and may exhaust the NDN node's memory. In addition, a PIT lookup is required to find similar interests that have already been forwarded and are awaiting results, so that identical incoming requests can be aggregated. Matching such large interests in the PIT requires a long search time, which ultimately degrades network performance. Therefore, in microservice-centric interest aggregation, the PIT table's memory consumption, search time, and search cost must be optimized. To resolve these issues, the proposed MIA-NDN scheme computes a hash value of each microservice interest and stores it in the PIT table. For hashing, the proposed scheme employs the SHA-256 algorithm, which hashes the incoming interest packet's name components (i.e., content name, microservice name, and input parameters) after concatenating them. SHA-256 produces a 32-byte (256-bit) digest, which is far more efficient to store in the PIT table than several megabytes of microservice parameters. The hash value is computed as soon as the NDN forwarding daemon (NFD) of a consumer node receives an interest packet.
The consumer then creates a unique PIT entry by storing the hash value as the content name. Finally, after updating the outgoing interface of the PIT entry and before forwarding the packet toward the upstream node, the computed hash value is stored in the [NameHash] field of the interest packet, as depicted in Figure 4. The rationale for storing the hash value is to avoid false aggregation of microservice interests and to minimize PIT memory consumption, search time, and search cost. For subsequent incoming interests, the stored hash value is compared for aggregation purposes. If a hash match is found, aggregation is performed; otherwise, a new PIT entry is created and the interest is forwarded to the provider/producer by following the FIB entry.
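The hashing-and-aggregation step might be sketched as follows. This is a minimal illustration: the function names and the use of "/" to join parameters are our assumptions, and a real implementation would hash the wire-format name components:

```python
import hashlib

def name_hash(content_name: str, ms_name: str, params: list) -> str:
    """SHA-256 over the concatenated name components, as the paper describes.
    The fixed 32-byte digest stands in for the (possibly very large)
    parameter-carrying name inside the PIT."""
    blob = "/".join([content_name, ms_name] + list(params)).encode()
    return hashlib.sha256(blob).hexdigest()

def on_ms_interest(pit: dict, content_name, ms_name, params, in_face):
    h = name_hash(content_name, ms_name, params)
    if h in pit:          # same name AND same parameter values: true duplicate
        pit[h].add(in_face)
        return "aggregated"
    pit[h] = {in_face}    # any difference in parameters yields a new hash
    return "forwarded"
```

Because the parameter values feed the digest, two interests with the same microservice name but different inputs produce different hashes and are never falsely aggregated.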
3.3. Dynamic PIT Timer
- The Role of the Network Orchestrator: A network orchestrator is a network management node that maintains the network topology and the computation load information of the cloud nodes. As shown in Figure 5, consumers C1 and C2 send a microservice computation request, which is received at R1 (in the figure, yellow arrows represent interests and green arrows represent data packets). R1 then forwards the request toward the edge node. The edge node may not have enough resources to execute the microservice request, in which case it offloads the request to the cloud server. Before offloading, however, the edge node consults the network orchestrator by sending it the received microservice interest packet to obtain (i) the communication time required from sending the interest packet to receiving the data packet and (ii) the computation time required to perform the microservice computation on a compute node. The network orchestrator calculates the computation time based on the cloud server's load status and the resources required by the requested microservice interest (the orchestrator forwards the request to the most lightly loaded cloud node). After calculating the computation time, the network orchestrator sends the computation time and communication time (e.g., the data of the intermediate nodes) back to the edge node by storing them inside the interest packet. Based on the communication and computation times, the edge node calculates and sets its PIT timer and offloads the interest packet toward the cloud, storing the intermediate-node and computation time information in the interest packet in step 2 (this information is shared with upstream nodes to avoid contacting the network orchestrator every time).
R2, upon receiving the interest, calculates its PIT timer based on the computation time and communication time information obtained from the interest packet. The computation time remains constant for all intermediate nodes, while the communication time decreases gradually as the interest approaches the cloud server. Therefore, the intermediate nodes calculate the communication time based on their hop distance and set their PIT timers accordingly (step 3). Finally, at the time of interest offloading, the edge node also sends a PIT timer update toward the downstream nodes to avoid early timeouts of their pending entries. Consequently, R1 and the consumers calculate and update the PIT timers of their pending microservice interests.
- Dynamic PIT Timer Calculation: The edge and intermediate nodes calculate their PIT timers dynamically based on the computation time and communication time information provided by the network orchestrator. The computation and communication time calculations are described as follows. Let $T_{total}$ be the total PIT lifetime, i.e., the communication and computation time required to send an interest, perform the computation at the compute node, and receive the result (data packet) of the microservice interest [36]. $T_{total}$ can be calculated as

$$T_{total} = T_{comp} + T_{comm} \tag{1}$$

$T_{comp}$ is calculated based on the load status of the compute node obtained from the network orchestrator (Section 3.3). The microservice interest execution time can be calculated as

$$T_{comp} = \frac{S_{ms}}{C_{avail}} \tag{2}$$

where $S_{ms}$ is the total size of the microservice interest in bytes, comprising the microservice input parameters and their corresponding sizes, and $C_{avail}$ is the number of CPU cycles available on the compute node, obtained from the network orchestrator. That is, the computation time is obtained by adding all input parameter sizes and dividing by the available CPU cycles of the compute node. In Equation (1), $T_{comm}$ is the communication time required to send a microservice interest packet and receive the data packet. Between two nodes (e.g., the edge and the cloud server) it can be calculated from the hop distance $H$ and per-hop delay $d_{hop}$ as

$$T_{comm} = 2 \cdot H \cdot d_{hop} \tag{3}$$

The PIT timer on a downstream node $n$ (from the edge toward the consumer) is updated analogously with its own hop distance $H_n$ to the compute node:

$$T_{PIT}(n) = T_{comp} + 2 \cdot H_n \cdot d_{hop} \tag{4}$$
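As a rough numerical sketch of the timer calculation: the hop-based round-trip term and all argument names below are our assumptions, since the text only states that the computation time is the summed parameter sizes divided by the available CPU cycles, and that the communication time follows from hop distance:

```python
def pit_lifetime(param_sizes_bytes, available_cpu_cycles, hops, per_hop_delay_s):
    """Dynamic PIT lifetime sketch.

    param_sizes_bytes:   sizes of the microservice input parameters (bytes)
    available_cpu_cycles: compute-node capacity reported by the orchestrator
    hops:                hop distance from this node to the compute node
    per_hop_delay_s:     assumed one-way per-hop delay in seconds
    """
    t_comp = sum(param_sizes_bytes) / available_cpu_cycles  # execution time
    t_comm = 2 * hops * per_hop_delay_s                     # round trip (assumed form)
    return t_comp + t_comm                                  # total PIT lifetime
```

Downstream nodes would call the same function with a larger hop count, which is why their timers exceed those of nodes closer to the compute node.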
- Interest processing pipeline of the proposed scheme: A detailed description of microservice-centric interest processing in the MIA-NDN scheme is given below with the help of a flow chart. In Figure 6, the microservice-centric interest processing steps are summarized. A detailed description of the steps is given as follows.
- (a)
- After receiving the interest packet, the edge node checks the packet type to determine whether the received interest is conventional NDN content or a microservice-centric request by searching the MS tag. In the presence of an MS tag, the interest packet is processed according to the microservice interest processing pipeline, otherwise, the interest packet is forwarded to the conventional NDN processing pipeline.
- (b)
- Once it is determined that the received interest packet contains a microservice request tag, the edge node checks whether the NameHash field contains a value. If the NameHash value is present, the edge node searches the PIT for an entry with that NameHash value. If the NameHash value is absent, the edge node performs the hash calculation according to step (d) and adds the result to the interest packet.
- (c)
- In the presence of the NameHash value, the pending PIT entries are searched by comparing the hash value. If a hash match is found, the edge node performs the aggregation and drops the interest packet. If no pending entry exists, a new PIT entry is created, the CS is searched for matching data (results), and the conventional interest processing steps follow. Results stored in the CS also carry the hash value along with the content name so that identical future requests can be fulfilled.
- (d)
- In the absence of a NameHash value, the edge node calculates the hash value after concatenating the content name, microservice name, and input parameter values using the SHA-256 hashing algorithm and inserts the obtained hash to the NameHash field before forwarding the interest packet.
- (e)
- After calculating and inserting the hash value of the interest, the PIT table is consulted to check whether the interest is already pending. If it is pending, aggregation is performed; otherwise, a new PIT entry is created.
- (f)
- After the PIT lookup, a CS lookup is performed to check data availability in the router's cache. If data are available in the cache, the interest is finalized and the data are returned to the consumer; otherwise, the FIB is consulted and the interest is forwarded toward the producer. If no FIB entry exists, the packet is dropped.
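Steps (a) through (f) above can be condensed into a single function. This is a hedged sketch only; the packet-dictionary layout and the return labels are illustrative, not the actual NFD pipeline:

```python
import hashlib

def process_edge_interest(packet, pit, cs, fib):
    """One pass of the microservice interest pipeline, steps (a)-(f)."""
    if not packet.get("ms"):                       # (a) no MS tag: legacy pipeline
        return "conventional-ndn"
    h = packet.get("name_hash")
    if h is None:                                  # (b)/(d) compute and attach hash
        blob = (packet["name"] + "".join(packet.get("params", []))).encode()
        h = hashlib.sha256(blob).hexdigest()
        packet["name_hash"] = h
    if h in pit:                                   # (c)/(e) pending: aggregate, drop
        pit[h].add(packet["in_face"])
        return "aggregated"
    pit[h] = {packet["in_face"]}                   # new PIT entry
    if h in cs:                                    # (f) cached result available
        return "data-from-cs"
    if packet["name"].split("/")[1] in fib:        # forward via FIB
        return "forwarded"
    return "dropped"                               # no route: drop
```

Note that hashing happens only once per interest: downstream nodes that already attached a NameHash let upstream nodes skip straight to the PIT comparison.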
4. Implementation
4.1. Experimental Setup
- Interest aggregation: Interest aggregation is defined as the ratio of the number of aggregated same-named microservice-centric interest packets to the total number of microservice interest packets transmitted.
- Microservices satisfaction rate: The microservice interest satisfaction rate is the ratio of the total number of data packets received against the total number of microservice interest packets sent.
- Transmission overhead: Transmission overhead is the total number of packet transmissions (interests, acknowledgments, and data) in the network divided by the number of microservice computations performed.
- PIT density: The PIT density is the ratio of the total number of microservice interests maintained in the PIT table to the total number of microservice interests generated in the network.
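The four metrics above reduce to simple ratios over simulation counters, e.g. as follows (the argument names are illustrative counters, not ndnSIM trace fields):

```python
def evaluation_metrics(aggregated, sent, data_received,
                       transmissions, computations,
                       pit_entries, generated):
    """Section 4.1 metrics as plain ratios over run-level counters."""
    return {
        "interest_aggregation": aggregated / sent,        # aggregated / transmitted
        "satisfaction_rate": data_received / sent,        # data received / interests sent
        "transmission_overhead": transmissions / computations,
        "pit_density": pit_entries / generated,           # PIT-held / generated
    }
```

Lower transmission overhead and PIT density are better, while higher aggregation precision and satisfaction rate are better, which matches how the results in Section 4.2 are read.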
4.2. Simulation Results
- Interest aggregation: The microservice-centric interest aggregation as a function of microservice interest frequency is depicted in Figure 7a,b. We varied the interest frequency to analyze interest aggregation under both low-traffic (1 to 10 interests/s, Figure 7a) and high-traffic (10 to 50 interests/s, Figure 7b) conditions. The results show that, under both traffic conditions, MIA-NDN performed less aggregation than the benchmark scheme. The rationale is that MIA-NDN incorporates the microservice input parameters, in addition to the microservice name, into the aggregation process. If both the microservice-centric interest names and the input parameters are the same, MIA-NDN performs interest aggregation. If interests have the same name but a different number of parameters or different parameter values, MIA-NDN treats them as unique and avoids false aggregation. The benchmark scheme, in contrast, ignores the microservice input parameters and their values, resulting in high (i.e., false) packet aggregation. Falsely aggregated microservice interests fail to return computation results, which wastes network resources and increases latency and congestion in the network.
- Microservice satisfaction: We evaluated MIA-NDN's performance in terms of microservice satisfaction at different time intervals as well as against the microservice-centric interest frequency, as shown in Figure 8a and Figure 8b, respectively. The simulation results in both scenarios revealed that MIA-NDN outperformed the serving-at-the-edge scheme in satisfying heterogeneous microservice-centric computation requests. The reason is that MIA-NDN includes the microservice parameters and their corresponding values in the hash generation process and inserts the generated hash into the PIT table. Upon reception of a new interest packet, the stored hash is used to check for an existing same-named entry in the PIT. Interest aggregation is performed if the same hash value is found; otherwise, the interest packet is considered unique and the corresponding forwarding is performed to fetch the data. This procedure avoids false aggregation and increases the microservice satisfaction ratio. The results in both cases make clear that MIA-NDN greatly reduces false aggregation and enhances the microservice satisfaction ratio. In contrast, the benchmark scheme performs false packet aggregation because it does not consider the microservice parameters and their values, resulting in a low microservice satisfaction ratio.
- Transmission overhead: Figure 9 shows the transmission overhead as a function of microservice-centric interest frequency. To analyze the transmission overhead, we varied the microservice request rate from 1 interest/s to 20 interests/s. The figure shows that MIA-NDN has a lower transmission overhead than the benchmark scheme. The main reason is that MIA-NDN generates only two packets per microservice computation request: (i) a microservice computation interest toward the compute node and (ii) the computed-result data packet from the compute node. Conversely, the benchmark scheme generates more packets per microservice computation, e.g., the computation interest request, an acknowledgment packet from the computing node, the data packet, and an acknowledgment packet from the consumer node. The larger number of packets the benchmark scheme generates to deliver the computed results produces high transmission overhead, as depicted in Figure 9, whereas MIA-NDN returns only a single data packet per consumer request.
- PIT density analysis: We analyzed how densely MIA-NDN populates the PIT table under both high- and low-traffic conditions by varying the number of microservice-centric computation interest packets, as shown in Figure 10a,b. We also analyzed the PIT density at different time intervals, as shown in Figure 10c. The results clearly show that MIA-NDN maintains fewer entries in the PIT table, allowing more computation requests to be accommodated. The rationale is that MIA-NDN's dynamic PIT lifetime calculation strategy evicts a PIT entry as soon as the computation result is retrieved, making room for incoming requests. Therefore, under both high- and low-traffic conditions, MIA-NDN occupies less PIT space. Moreover, in Figure 10c, we analyze the number of PIT entries at different time intervals (1 s to 30 s) with a request rate of 10 interests/s. The results clearly show that MIA-NDN holds fewer PIT entries owing to its dynamic PIT entry lifetime management mechanism.
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Fizza, K.; Banerjee, A.; Jayaraman, P.P.; Auluck, N.; Ranjan, R.; Mitra, K.; Georgakopoulos, D. A Survey on Evaluating the Quality of Autonomic Internet of Things Applications. IEEE Commun. Surv. Tutor. 2022.
- Esenogho, E.; Djouani, K.; Kurien, A. Integrating Artificial Intelligence Internet of Things and 5G for Next-Generation Smartgrid: A Survey of Trends Challenges and Prospect. IEEE Access 2022, 10, 4794–4831.
- Kök, İ.; Özdemir, S. Content-centric data and computation offloading in AI-supported fog networks for next generation IoT. Pervasive Mob. Comput. 2022, 85, 101654.
- Mahmood, A.; Ahmed, A.; Naeem, M.; Hong, Y. Partial offloading in energy harvested mobile edge computing: A direct search approach. IEEE Access 2020, 8, 36757–36763.
- Chen, Y.; Zhang, N.; Zhang, Y.; Chen, X. Dynamic computation offloading in edge computing for internet of things. IEEE Internet Things J. 2018, 6, 4242–4251.
- Luo, Q.; Hu, S.; Li, C.; Li, G.; Shi, W. Resource scheduling in edge computing: A survey. IEEE Commun. Surv. Tutor. 2021, 23, 2131–2165.
- Duan, S.; Wang, D.; Ren, J.; Lyu, F.; Zhang, Y.; Wu, H.; Shen, X. Distributed Artificial Intelligence Empowered by End-Edge-Cloud Computing: A Survey. IEEE Commun. Surv. Tutor. 2022.
- Sittón-Candanedo, I.; Alonso, R.S.; Corchado, J.M.; Rodríguez-González, S.; Casado-Vara, R. A review of edge computing reference architectures and a new global edge proposal. Future Gener. Comput. Syst. 2019, 99, 278–294.
- Mahmood, A.; Vu, T.; Khan, W.U.; Chatzinotas, S.; Ottersten, B. Optimizing Computational and Communication Resources for MEC Network Empowered UAV-RIS Communication. In Proceedings of the 2022 IEEE Globecom Workshops, Rio de Janeiro, Brazil, 4–8 December 2022.
- Almutairi, J.; Aldossary, M.; Alharbi, H.A.; Yosuf, B.A.; Elmirghani, J.M. Delay-Optimal Task Offloading for UAV-Enabled Edge-Cloud Computing Systems. IEEE Access 2022, 10, 51575–51586.
- Jiang, C.; Cheng, X.; Gao, H.; Zhou, X.; Wan, J. Toward computation offloading in edge computing: A survey. IEEE Access 2019, 7, 131543–131558.
- Firouzi, F.; Farahani, B.; Marinšek, A. The convergence and interplay of edge, fog, and cloud in the AI-driven Internet of Things (IoT). Inf. Syst. 2022, 107, 101840.
- Saxena, D.; Raychoudhury, V.; Suri, N.; Becker, C.; Cao, J. Named data networking: A survey. Comput. Sci. Rev. 2016, 19, 15–55.
- Imran, M.; Rehman, M.A.U.; Kim, B.S. Information-centric Edge Computing: A Survey. IEIE Trans. Smart Process. Comput. 2021, 10, 250–258.
- Naeem, M.A.; Nguyen, T.N.; Ali, R.; Cengiz, K.; Meng, Y.; Khurshaid, T. Hybrid cache management in IoT-based named data networking. IEEE Internet Things J. 2021, 9, 7140–7150.
- Ahlgren, B.; Dannewitz, C.; Imbrenda, C.; Kutscher, D.; Ohlman, B. A survey of information-centric networking. IEEE Commun. Mag. 2012, 50, 26–36.
- Zhang, L.; Estrin, D.; Burke, J.; Jacobson, V.; Thornton, J.D.; Smetters, D.K.; Zhang, B.; Tsudik, G.; Massey, D.; Papadopoulos, C.; et al. Named Data Networking (NDN) Project. Tech. Rep. NDN-0001; Xerox Palo Alto Research Center (PARC): Palo Alto, CA, USA, 2010.
- Zhang, L.; Afanasyev, A.; Burke, J.; Jacobson, V.; Claffy, K.; Crowley, P.; Papadopoulos, C.; Wang, L.; Zhang, B. Named data networking. ACM SIGCOMM Comput. Commun. Rev. 2014, 44, 66–73.
- Amadeo, M.; Ruggeri, G.; Campolo, C.; Molinaro, A. IoT services allocation at the edge via named data networking: From optimal bounds to practical design. IEEE Trans. Netw. Serv. Manag. 2019, 16, 661–674.
- Król, M.; Habak, K.; Oran, D.; Kutscher, D.; Psaras, I. RICE: Remote method invocation in ICN. In Proceedings of the 5th ACM Conference on Information-Centric Networking, Boston, MA, USA, 21–23 September 2018; pp. 1–11.
- Fan, Z.; Yang, W.; Wu, F.; Cao, J.; Shi, W. Serving at the edge: An edge computing service architecture based on ICN. ACM Trans. Internet Technol. (TOIT) 2021, 22, 1–27.
- Soldani, J.; Tamburri, D.A.; Van Den Heuvel, W.J. The pains and gains of microservices: A systematic grey literature review. J. Syst. Softw. 2018, 146, 215–232.
- Thönes, J. Microservices. IEEE Softw. 2015, 32, 116.
- Jamshidi, P.; Pahl, C.; Mendonça, N.C.; Lewis, J.; Tilkov, S. Microservices: The journey so far and challenges ahead. IEEE Softw. 2018, 35, 24–35.
- Tschudin, C.; Sifalakis, M. Named functions and cached computations. In Proceedings of the 2014 IEEE 11th Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 10–13 January 2014; pp. 851–857.
- Din, M.S.U.; Rehman, M.A.U.; Imran, M.; Nadeem, M.; Kim, B.S. A Testbed Implementation of Microservices-based In-Network Computing Framework for Information-Centric IoVs. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Yeosu, Republic of Korea, 26–28 October 2022; pp. 1–5.
- Amadeo, M.; Campolo, C.; Ruggeri, G.; Molinaro, A. SDN-managed provisioning of named computing services in edge infrastructures. IEEE Trans. Netw. Serv. Manag. 2019, 16, 1464–1478.
- Król, M.; Psaras, I. NFaaS: Named function as a service. In Proceedings of the 4th ACM Conference on Information-Centric Networking, Berlin, Germany, 26–28 September 2017; pp. 134–144.
- Wang, Q.; Lee, B.; Murray, N.; Qiao, Y. CS-Man: Computation service management for IoT in-network processing. In Proceedings of the 2016 27th Irish Signals and Systems Conference (ISSC), Londonderry, UK, 21–22 June 2016; pp. 1–6.
- Amadeo, M.; Campolo, C.; Molinaro, A.; Ruggeri, G. IoT data processing at the edge with named data networking. In Proceedings of the 24th European Wireless Conference, Catania, Italy, 2–4 May 2018; pp. 1–6.
- Ascigil, O.; Reñé, S.; Xylomenos, G.; Psaras, I.; Pavlou, G. A keyword-based ICN-IoT platform. In Proceedings of the 4th ACM Conference on Information-Centric Networking, Berlin, Germany, 26–28 September 2017; pp. 22–28.
- Ambalavanan, U.; Grewe, D.; Nayak, N.; Ott, J. HYDRO: Hybrid Orchestration of In-Network Computations for the Internet of Things. In Proceedings of the 11th International Conference on the Internet of Things, St. Gallen, Switzerland, 8–12 November 2021; pp. 64–71.
- Lia, G.; Amadeo, M.; Campolo, C.; Ruggeri, G.; Molinaro, A. Optimal Placement of Delay-constrained In-Network Computing Tasks at the Edge with Minimum Data Exchange. In Proceedings of the 2021 IEEE 4th 5G World Forum (5GWF), Montreal, QC, Canada, 13–15 October 2021; pp. 481–486.
- Ingerman, P.Z. Thunks: A way of compiling procedure statements with some comments on procedure declarations. Commun. ACM 1961, 4, 55–58.
- Ambalavanan, U.; Grewe, D.; Nayak, N.; Liu, L.; Mohan, N.; Ott, J. DICer: Distributed coordination for in-network computations. In Proceedings of the 9th ACM Conference on Information-Centric Networking, Osaka, Japan, 19–21 September 2022; pp. 45–55.
- Ismail, L.; Materwala, H. ESCOVE: Energy-SLA-aware edge–cloud computation offloading in vehicular networks. Sensors 2021, 21, 5233.
- Rehman, M.A.U.; Salahuddin, M.; Imran, M.; Fayyaz, S.; Kim, B.S. ndnCSIM: A Microservices based Compute Simulator for NDN. Korean Soc. Electron. Eng. Conf. 2021, 6, 1562–1564.
| Parameter | Value |
|---|---|
| Simulator | NS3 (ndnSIM) |
| Communication stack | NDN |
| Environment | 802.3 |
| Total number of nodes | 10 |
| Edge nodes | 2 |
| Consumers | 2 |
| NDN routers | 6 |
| PIT timer | Dynamic |
| Topology | (Figure 2) |
| Simulation time | 300 s |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Imran, M.; Din, M.S.U.; Rehman, M.A.U.; Kim, B.-S. MIA-NDN: Microservice-Centric Interest Aggregation in Named Data Networking. Sensors 2023, 23, 1411. https://doi.org/10.3390/s23031411