Edge/Fog Computing Technologies for IoT Infrastructure II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (20 December 2022) | Viewed by 24087

Special Issue Editors


Dr. Taehong Kim
Guest Editor
School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Republic of Korea
Interests: edge computing; container orchestration; Internet of Things; SDN/NFV; wireless sensor networks

Dr. Youngsoo Kim
Guest Editor
Department of Artificial Intelligence, Jeonju University, Jeonju 55069, Republic of Korea
Interests: artificial intelligence; machine learning; deep learning; Internet of Things; wireless sensor network; edge computing

Dr. Seong-eun Yoo
Guest Editor
School of Computer and Communication Engineering, Daegu University, Gyeongsan 712-714, Republic of Korea
Interests: wireless sensor networks; industrial IoT; localization

Special Issue Information

Dear Colleagues,

The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure.

Containers, a lightweight virtualization technology, are one of the emerging fog/edge computing technologies for the IoT infrastructure. Despite advances in this technology, research into the integration of containers with fog/edge computing for the IoT infrastructure is still at an early stage. Many challenges remain to be addressed, such as smart container orchestration, real-time resource monitoring, auto-scaling, and load balancing of services.

Recently, there has been a great deal of research and development from both academia and industry, and this Special Issue seeks recent advances in fog/edge computing technologies for building an IoT infrastructure. Potential topics of interest for this Special Issue include, but are not limited to, the following:

  • Fog/edge computing architecture for IoT infrastructure
  • Fog/edge computing-based IoT applications
  • Dynamic resource and service allocation and deployment in fog/edge computing
  • Device and service management for fog/edge-based IoT infrastructure
  • Data management techniques for fog/edge-based IoT infrastructure
  • Algorithms and technologies for computation offloading in fog/edge computing
  • State-aware solutions for fog/edge computing
  • Container orchestration frameworks based on open source projects
  • Container orchestration techniques such as real-time monitoring, auto-scaling, and load balancing of services
  • Experimental testbeds for fog/edge computing-based IoT applications
  • Performance analysis and evaluation of fog/edge computing
  • Standards for fog/edge computing for IoT infrastructure
  • SDN/NFV techniques for fog/edge computing and IoT infrastructure
  • AI and deep learning techniques for fog/edge computing and IoT infrastructure
  • Security and privacy for fog/edge-based IoT infrastructure

Dr. Taehong Kim
Dr. Youngsoo Kim
Dr. Seong-eun Yoo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Editorial

3 pages, 160 KiB  
Editorial
Edge/Fog Computing Technologies for IoT Infrastructure II
by Taehong Kim, Seong-eun Yoo and Youngsoo Kim
Sensors 2023, 23(8), 3953; https://doi.org/10.3390/s23083953 - 13 Apr 2023
Viewed by 1698
Abstract
The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices [...] Full article

Research

13 pages, 2335 KiB  
Article
Local Scheduling in KubeEdge-Based Edge Computing Environment
by Seong-Hyun Kim and Taehong Kim
Sensors 2023, 23(3), 1522; https://doi.org/10.3390/s23031522 - 30 Jan 2023
Cited by 13 | Viewed by 4657
Abstract
KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment. Full article
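
To make the local-scheduling idea concrete, the toy Python sketch below contrasts cluster-wide dispatch, which may forward a request to a replica on another edge node, with a local-first policy that serves traffic on the ingress node whenever it hosts a replica. The node names, latency figures, and policies are illustrative assumptions, not the authors' implementation or a KubeEdge API.

```python
# Illustrative comparison of cluster-wide load balancing vs. node-local
# request handling, loosely inspired by the local-scheduling idea above.
# Node names and latency figures are made up for the example.
import random

NODES = {
    "edge-1": {"has_replica": True,  "local_latency_ms": 2.0},
    "edge-2": {"has_replica": True,  "local_latency_ms": 2.5},
    "edge-3": {"has_replica": False, "local_latency_ms": 0.0},
}
FORWARD_LATENCY_MS = 15.0  # extra cost of forwarding traffic to another edge node


def cluster_wide(ingress_node: str) -> float:
    """Pick any node with a replica (round-robin-like), even if remote."""
    target = random.choice([n for n, v in NODES.items() if v["has_replica"]])
    cost = NODES[target]["local_latency_ms"]
    return cost if target == ingress_node else cost + FORWARD_LATENCY_MS


def local_first(ingress_node: str) -> float:
    """Serve on the ingress node when it hosts a replica; fall back otherwise."""
    if NODES[ingress_node]["has_replica"]:
        return NODES[ingress_node]["local_latency_ms"]
    return cluster_wide(ingress_node)


if __name__ == "__main__":
    random.seed(0)
    for policy in (cluster_wide, local_first):
        avg = sum(policy("edge-1") for _ in range(10_000)) / 10_000
        print(f"{policy.__name__}: avg latency {avg:.2f} ms")
```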

19 pages, 3127 KiB  
Article
Autonomous Mutual Authentication Protocol in the Edge Networks
by Ruey-Kai Sheu, Mayuresh Sunil Pardeshi and Lun-Chi Chen
Sensors 2022, 22(19), 7632; https://doi.org/10.3390/s22197632 - 8 Oct 2022
Cited by 5 | Viewed by 2704
Abstract
A distinct security protocol is necessary for the exponential growth in intelligent edge devices. In particular, autonomous devices need to address significant security concerns to function smoothly amid high market demand. Nevertheless, the exponential increase in connected devices has made cloud networks more complex and subject to information processing delays. Therefore, the goal of this work is to design a novel server-less mutual authentication protocol for edge networks. The aim is to demonstrate autonomous mutual authentication among the connected smart devices within the edge networks. The solution addresses applications of autonomous cars, smart things, and Internet of Things (IoT) devices in edge or wireless sensor networks (WSNs). In this paper, the design proposes the use of a public-key system, octet-based balanced-tree transitions, a challenge–response mechanism, device unique IDs (UIDs), a pseudo-random number generator (PRNG), time-stamps, and event-specific session keys. Ultimately, the server-less design requires less infrastructure and avoids several types of network-based communication attacks, e.g., impersonation, man-in-the-middle (MITM), and IoT-DDoS attacks. Additionally, system overhead is eliminated because no secret keys need to be shared in advance. The results provide sufficient evidence of the protocol's market competitiveness and demonstrate favorable benchmark comparisons. Full article
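
The sketch below illustrates the general shape of such a server-less challenge–response handshake: two devices exchange nonce-plus-timestamp challenges, sign them with Ed25519 keys (using the third-party cryptography package), and derive a per-event session key by hashing both nonces. The message layout, key-derivation step, and class names are assumptions for illustration only, not the protocol specified in the paper.

```python
# Minimal challenge-response mutual authentication sketch using Ed25519
# signatures (third-party 'cryptography' package), nonces, timestamps, and a
# hash-derived session key. The message layout and key derivation are
# illustrative assumptions, not the protocol specified in the paper.
import hashlib
import secrets
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class Device:
    def __init__(self, uid: str):
        self.uid = uid
        self._key = Ed25519PrivateKey.generate()   # device-held private key
        self.public_key = self._key.public_key()   # assumed pre-distributed

    def challenge(self) -> bytes:
        """Fresh nonce bound to a timestamp (a real verifier would also check freshness)."""
        return secrets.token_bytes(16) + int(time.time()).to_bytes(8, "big")

    def respond(self, peer_challenge: bytes) -> bytes:
        """Sign the peer's challenge concatenated with our UID."""
        return self._key.sign(peer_challenge + self.uid.encode())

    @staticmethod
    def verify(peer: "Device", challenge: bytes, signature: bytes) -> bool:
        try:
            peer.public_key.verify(signature, challenge + peer.uid.encode())
            return True
        except InvalidSignature:
            return False


if __name__ == "__main__":
    a, b = Device("edge-cam-01"), Device("edge-gw-07")
    ca, cb = a.challenge(), b.challenge()            # exchange challenges
    ok = Device.verify(b, ca, b.respond(ca)) and Device.verify(a, cb, a.respond(cb))
    session_key = hashlib.sha256(ca + cb).digest()   # event-specific session key (illustrative)
    print("mutual authentication:", ok, "| session key:", session_key.hex()[:16], "...")
```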

12 pages, 2853 KiB  
Article
Latency-Aware Task Scheduling for IoT Applications Based on Artificial Intelligence with Partitioning in Small-Scale Fog Computing Environments
by JongBeom Lim
Sensors 2022, 22(19), 7326; https://doi.org/10.3390/s22197326 - 27 Sep 2022
Cited by 12 | Viewed by 2474
Abstract
Internet of Things applications have become popular because of their lightweight nature and usefulness, but they require low latency and short response times. Hence, Internet of Things applications are deployed with the fog management layer (software) in closely located edge servers (hardware) as per the requirements. Due to their lightweight properties, Internet of Things applications do not consume many computing resources. Therefore, it is common for a small-scale data center to accommodate thousands of Internet of Things applications. However, in small-scale fog computing environments, task scheduling of applications is limited in its ability to offer low latency and response times. In this paper, we propose a latency-aware task scheduling method for Internet of Things applications based on artificial intelligence in small-scale fog computing environments. The core concept of the proposed task scheduling is to use artificial neural networks with partitioning capabilities. With the partitioning technique for artificial neural networks, multiple edge servers are able to learn and calculate hyperparameters in parallel, which reduces scheduling times and service level objectives. Performance evaluation against state-of-the-art studies shows the effectiveness and efficiency of the proposed task scheduling in small-scale fog computing environments while introducing negligible energy consumption. Full article
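
As a rough illustration of latency-aware placement, the sketch below assigns each task to the edge server with the smallest predicted completion time; a simple queue-based estimate stands in for the paper's partitioned artificial neural network, and the server capacities and task sizes are invented.

```python
# Toy latency-aware scheduler: each incoming IoT task goes to the edge server
# with the smallest predicted completion time. A simple queue-based latency
# estimate stands in for the paper's partitioned artificial neural network;
# server capacities and task sizes are made-up example values.
from dataclasses import dataclass, field


@dataclass
class EdgeServer:
    name: str
    capacity: float                       # work units processed per millisecond
    backlog: float = 0.0                  # queued work units
    assigned: list = field(default_factory=list)

    def predicted_latency(self, task_size: float) -> float:
        """Estimated completion time (ms) if this task joins the queue."""
        return (self.backlog + task_size) / self.capacity


def schedule(tasks, servers):
    """Greedily place each task on the server with the lowest predicted latency."""
    for task_id, size in tasks:
        best = min(servers, key=lambda s: s.predicted_latency(size))
        best.backlog += size
        best.assigned.append(task_id)
    return servers


if __name__ == "__main__":
    servers = [EdgeServer("edge-a", 4.0), EdgeServer("edge-b", 2.0), EdgeServer("edge-c", 1.0)]
    tasks = [(f"task-{i}", 5.0 + (i % 3)) for i in range(12)]
    for s in schedule(tasks, servers):
        print(f"{s.name}: {len(s.assigned)} tasks, backlog {s.backlog:.1f} units")
```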

13 pages, 1927 KiB  
Article
Cooperative Downloading for LEO Satellite Networks: A DRL-Based Approach
by Hongrok Choi and Sangheon Pack
Sensors 2022, 22(18), 6853; https://doi.org/10.3390/s22186853 - 10 Sep 2022
Cited by 5 | Viewed by 2996
Abstract
In low earth orbit (LEO) satellite-based applications (e.g., remote sensing and surveillance), it is important to efficiently transmit collected data to ground stations (GS). However, LEO satellites’ high mobility and resultant insufficient time for downloading make this challenging. In this paper, we propose a deep-reinforcement-learning (DRL)-based cooperative downloading scheme, which utilizes inter-satellite communication links (ISLs) to fully utilize satellites’ downloading capabilities. To this end, we formulate a Markov decision problem (MDP) with the objective to maximize the amount of downloaded data. To learn the optimal approach to the formulated problem, we adopt a soft-actor-critic (SAC)-based DRL algorithm in discretized action spaces. Moreover, we design a novel neural network consisting of a graph attention network (GAT) layer to extract latent features from the satellite network and parallel fully connected (FC) layers to control individual satellites of the network. Evaluation results demonstrate that the proposed DRL-based cooperative downloading scheme can enhance the average utilization of contact time by up to 17.8% compared with independent downloading and randomly offloading schemes. Full article
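
The sketch below is a greedy stand-in for the cooperative-downloading decision: a satellite whose buffered data exceeds what its remaining ground-station contact can deliver offloads the surplus over an inter-satellite link to the neighbor with the most spare contact capacity. All quantities are invented and inter-satellite transfer costs are ignored; the paper instead learns this policy with a SAC-based DRL agent and a GAT-based network.

```python
# Greedy stand-in for cooperative downloading: overloaded satellites push
# surplus buffered data over ISLs to neighbors with spare downlink capacity.
# Buffer sizes, contact times, and data rates are invented example values.
from dataclasses import dataclass


@dataclass
class Satellite:
    name: str
    buffered_mb: float
    contact_s: float          # remaining ground-station contact time (s)
    downlink_mbps: float

    @property
    def capacity_mb(self) -> float:
        return self.contact_s * self.downlink_mbps / 8.0

    @property
    def spare_mb(self) -> float:
        return max(0.0, self.capacity_mb - self.buffered_mb)


def cooperative_offload(sats):
    """Move surplus data from overloaded satellites to under-loaded neighbors."""
    for sat in sats:
        surplus = max(0.0, sat.buffered_mb - sat.capacity_mb)
        while surplus > 0:
            target = max(sats, key=lambda s: s.spare_mb)
            if target.spare_mb == 0:
                break                      # no neighbor can absorb more data
            moved = min(surplus, target.spare_mb)
            sat.buffered_mb -= moved
            target.buffered_mb += moved
            surplus -= moved
    return sats


if __name__ == "__main__":
    sats = [Satellite("LEO-1", 900, 60, 50), Satellite("LEO-2", 100, 90, 50),
            Satellite("LEO-3", 50, 120, 50)]
    total_capacity = sum(s.capacity_mb for s in sats)
    delivered = sum(min(s.buffered_mb, s.capacity_mb) for s in cooperative_offload(sats))
    print(f"delivered {delivered:.0f} MB of {total_capacity:.0f} MB downlink capacity")
```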

18 pages, 1457 KiB  
Article
Collaborative Task Offloading and Service Caching Strategy for Mobile Edge Computing
by Xiang Liu, Xu Zhao, Guojin Liu, Fei Huang, Tiancong Huang and Yucheng Wu
Sensors 2022, 22(18), 6760; https://doi.org/10.3390/s22186760 - 7 Sep 2022
Cited by 11 | Viewed by 2509
Abstract
Mobile edge computing (MEC), which sinks the functions of cloud servers to the network edge, has become an emerging paradigm to resolve the contradiction between delay-sensitive tasks and resource-constrained terminals. Task offloading assisted by service caching in a collaborative manner can reduce delay and balance the edge load in MEC. Due to the limited storage resources of edge servers, it is a significant issue to develop a dynamic service caching strategy according to actual, variable user demands in task offloading. Therefore, this paper investigates the collaborative task offloading problem assisted by a dynamic caching strategy in MEC. Furthermore, a two-level computing strategy called joint task offloading and service caching (JTOSC) is proposed to solve the optimization problem. The outer layer of JTOSC iteratively updates the service caching decisions based on Gibbs sampling. The inner layer of JTOSC adopts a fairness-aware allocation algorithm and an offloading revenue preference-based bilateral matching algorithm to obtain an effective computing resource allocation and task offloading scheme. The simulation results indicate that the proposed strategy outperforms the other four comparison strategies in terms of maximum offloading delay, service cache hit rate, and edge load balance. Full article
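
The sketch below mimics a Gibbs-sampling-style outer loop on a binary service-caching vector: at each step one service's caching decision is resampled from a Boltzmann distribution over the resulting expected delays, subject to a storage-capacity constraint. The delay model, service sizes, and temperature are illustrative assumptions, and the paper's inner layer (fairness-aware allocation and bilateral matching) is collapsed into a single cost function here.

```python
# Gibbs-sampling-style outer loop for a binary service-caching vector.
# Storage costs, demand rates, capacity, and delays are invented; the inner
# resource-allocation/matching layer is reduced to a simple delay cost.
import math
import random

SERVICES = {"s0": 3.0, "s1": 1.0, "s2": 2.0, "s3": 4.0}   # storage cost per service
DEMAND   = {"s0": 0.6, "s1": 0.9, "s2": 0.3, "s3": 0.8}   # request rate per service
CAPACITY = 6.0
MISS_DELAY, HIT_DELAY = 80.0, 10.0                         # ms, cloud vs. edge


def cost(cache: dict) -> float:
    """Expected offloading delay; infeasible placements are heavily penalized."""
    if sum(SERVICES[s] for s, cached in cache.items() if cached) > CAPACITY:
        return float("inf")
    return sum(DEMAND[s] * (HIT_DELAY if cache[s] else MISS_DELAY) for s in cache)


def gibbs_caching(iters=200, temp=5.0, seed=1):
    random.seed(seed)
    cache = {s: False for s in SERVICES}
    for _ in range(iters):
        s = random.choice(list(SERVICES))
        c_on, c_off = cost(dict(cache, **{s: True})), cost(dict(cache, **{s: False}))
        # Sample the new decision from a Boltzmann distribution over the two costs.
        p_on = 0.0 if math.isinf(c_on) else 1.0 / (1.0 + math.exp((c_on - c_off) / temp))
        cache[s] = random.random() < p_on
    return cache, cost(cache)


if __name__ == "__main__":
    cache, c = gibbs_caching()
    print("cached:", [s for s, v in cache.items() if v], f"| expected delay {c:.1f} ms")
```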

20 pages, 4255 KiB  
Article
Distributed Agent-Based Orchestrator Model for Fog Computing
by Agnius Liutkevičius, Nerijus Morkevičius, Algimantas Venčkauskas and Jevgenijus Toldinas
Sensors 2022, 22(15), 5894; https://doi.org/10.3390/s22155894 - 7 Aug 2022
Cited by 10 | Viewed by 2173
Abstract
Fog computing is an extension of cloud computing that provides computing services closer to user end-devices at the network edge. One of the challenging topics in fog networks is the placement of tasks on fog nodes to obtain the best performance and resource usage. The process of mapping tasks for resource-constrained devices is known as the service or fog application placement problem (SPP, FAPP). The highly dynamic fog infrastructures with mobile user end-devices and constantly changing fog node resources (e.g., battery life, security level) require distributed/decentralized service placement (orchestration) algorithms to ensure better resilience, scalability, and optimal real-time performance. However, recently proposed service placement algorithms rarely support user end-device mobility, constantly changing resource availability of fog nodes, and the ability to recover from fog node failures at the same time. In this article, we propose a distributed agent-based orchestrator model capable of flexible service provisioning in a dynamic fog computing environment by considering the constraints on the central processing unit (CPU), memory, battery level, and security level of fog nodes. Distributing the decision-making to multiple orchestrator fog nodes instead of relying on the mapping of a single central entity helps to spread the load and increase scalability and, most importantly, resilience. The prototype system based on the proposed orchestrator model was implemented and tested with real hardware. The results show that the proposed model is efficient in terms of response latency and computational overhead, which are minimal compared to the placement algorithm itself. The research confirms that the proposed orchestrator approach is suitable for various fog network applications when scalability, mobility, and fault tolerance must be guaranteed. Full article
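
The sketch below illustrates the kind of placement decision an individual orchestrator agent might make: filter fog nodes by a service's CPU, memory, battery, and security-level requirements, then score the admissible candidates by remaining headroom. The node attributes, scoring weights, and fallback behavior are invented for illustration and are not the authors' algorithm.

```python
# Toy single-agent placement decision: constraint filtering on CPU, memory,
# battery, and security level, followed by a simple weighted headroom score.
# Attributes and weights are invented example values.
from dataclasses import dataclass


@dataclass
class FogNode:
    name: str
    cpu_free: float        # available CPU (cores)
    mem_free: float        # available memory (MB)
    battery: float         # remaining battery (%)
    security_level: int    # 1 (low) .. 3 (high)


@dataclass
class ServiceRequest:
    cpu: float
    mem: float
    min_battery: float
    min_security: int


def place(service: ServiceRequest, nodes: list[FogNode]) -> FogNode | None:
    """Return the best admissible node, or None if no node satisfies the constraints."""
    admissible = [n for n in nodes
                  if n.cpu_free >= service.cpu and n.mem_free >= service.mem
                  and n.battery >= service.min_battery
                  and n.security_level >= service.min_security]
    if not admissible:
        return None        # a distributed agent could then forward the request to a peer
    # Prefer nodes with the most headroom after placement (simple weighted score).
    return max(admissible, key=lambda n: 0.5 * (n.cpu_free - service.cpu)
                                         + 0.3 * (n.mem_free - service.mem) / 100
                                         + 0.2 * n.battery / 100)


if __name__ == "__main__":
    nodes = [FogNode("fog-1", 2.0, 512, 80, 2), FogNode("fog-2", 4.0, 1024, 35, 3),
             FogNode("fog-3", 1.0, 256, 95, 1)]
    req = ServiceRequest(cpu=1.5, mem=300, min_battery=30, min_security=2)
    chosen = place(req, nodes)
    print("placed on:", chosen.name if chosen else "no admissible node")
```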

18 pages, 1745 KiB  
Article
Dynamic Task Offloading for Cloud-Assisted Vehicular Edge Computing Networks: A Non-Cooperative Game Theoretic Approach
by Md. Delowar Hossain, Tangina Sultana, Md. Alamgir Hossain, Md. Abu Layek, Md. Imtiaz Hossain, Phoo Pyae Sone, Ga-Won Lee and Eui-Nam Huh
Sensors 2022, 22(10), 3678; https://doi.org/10.3390/s22103678 - 12 May 2022
Cited by 11 | Viewed by 3534
Abstract
Vehicular edge computing (VEC) is one of the prominent ideas for enhancing the computation and storage capabilities of vehicular networks (VNs) through task offloading. In VEC, resource-constrained vehicles offload their computing tasks to local road-side units (RSUs) for rapid computation. However, due to the high mobility of vehicles and the overload problem, VEC faces considerable challenges in determining a location for processing the offloaded task in real time, which degrades vehicular performance. Therefore, to deal with the above-mentioned challenges, an efficient dynamic task offloading approach based on a non-cooperative game (NGTO) is proposed in this study. In the NGTO approach, each vehicle can make its own strategy on whether a task is offloaded to a multi-access edge computing (MEC) server or a cloud server to maximize its benefits. Our proposed strategy can dynamically adjust the task-offloading probability to acquire the maximum utility for each vehicle. Moreover, we use a best-response offloading strategy algorithm for the task-offloading game in order to achieve a unique and stable equilibrium. Numerous simulation experiments affirm that our proposed scheme fulfills the performance guarantees and can reduce the response time and task-failure rate by almost 47.6% and 54.6%, respectively, when compared with the local RSU computing (LRC) scheme. The corresponding reductions are approximately 32.6% and 39.7%, respectively, when compared with a random offloading scheme, and approximately 26.5% and 28.4%, respectively, when compared with a collaborative offloading scheme. Full article
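
The sketch below runs best-response dynamics for a simplified version of such an offloading game: each vehicle repeatedly picks the offloading probability that minimizes its expected delay, given that the shared MEC server's delay grows with the total offered load while the cloud delay is fixed. The delay functions and constants are illustrative stand-ins for the utility model used in the paper.

```python
# Best-response dynamics for a simplified offloading game: each vehicle picks
# the probability of sending its task to the shared MEC server (whose delay
# grows with total load) versus a fixed-delay cloud. All constants are
# illustrative stand-ins for the paper's utility model.
CLOUD_DELAY = 120.0                 # ms, load-independent
MEC_BASE, MEC_SLOPE = 20.0, 60.0    # ms; MEC delay = base + slope * total offloaded load
GRID = [i / 20 for i in range(21)]  # candidate offloading probabilities 0.0 .. 1.0


def expected_delay(p_i: float, others_load: float) -> float:
    mec_delay = MEC_BASE + MEC_SLOPE * (others_load + p_i)
    return p_i * mec_delay + (1.0 - p_i) * CLOUD_DELAY


def best_response_dynamics(n_vehicles=10, rounds=50):
    p = [1.0] * n_vehicles                      # start with every vehicle offloading to MEC
    for _ in range(rounds):
        changed = False
        for i in range(n_vehicles):
            others = sum(p) - p[i]
            best = min(GRID, key=lambda q: expected_delay(q, others))
            if abs(best - p[i]) > 1e-9:
                p[i], changed = best, True
        if not changed:                         # no vehicle wants to deviate: equilibrium
            break
    return p


if __name__ == "__main__":
    eq = best_response_dynamics()
    print("equilibrium offloading probabilities:", [round(q, 2) for q in eq])
```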
