Distributed Computing in the Internet of Things: Cloud, Fog and Edge Computing

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 25311

Special Issue Editor


Guest Editor
School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
Interests: mobile computing; edge computing; intelligent computing; cloud computing; game theory

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) has recently received considerable attention due to its capability of generating an unprecedented volume and variety of data. It has been predicted that, in the near future, the data generation rate of the IoT will exceed the capacity of today's internet. This big data can be used to support many intelligent computing applications. However, traditional cloud computing cannot support such applications due to its long communication latency. Fortunately, fog and edge computing extend traditional cloud computing by moving the services and resources of the cloud closer to users. Distributed and collaborative computing across cloud, fog, and edge provides a valid solution for supporting intelligent data analytics, real-time computing, big data storage, and more.

Considering recent advances in IoT, this Special Issue will accept unpublished original research on (but not restricted to) the following research areas:

  • Architecture of distributed computing in the IoT;
  • Collaboration of cloud/fog/edge computing;
  • Resource allocation of cloud/fog/edge computing;
  • Data analytics and data mining in the IoT;
  • Machine learning in cloud/edge/fog computing;
  • Privacy preservation of the IoT;
  • Incentive mechanism in cloud/fog/edge computing;
  • Service placement and management in fog/edge computing;
  • Mobility management of users.

Dr. Xiumin Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • edge computing
  • fog computing
  • cloud computing
  • distributed computing
  • resource allocation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


18 pages, 1487 KiB  
Article
Artificial-Intelligence-Based Charger Deployment in Wireless Rechargeable Sensor Networks
by Hsin-Hung Cho, Wei-Che Chien, Fan-Hsun Tseng and Han-Chieh Chao
Future Internet 2023, 15(3), 117; https://doi.org/10.3390/fi15030117 - 22 Mar 2023
Cited by 2 | Viewed by 2248
Abstract
To extend a network’s lifetime, wireless rechargeable sensor networks are promising solutions. Chargers can be deployed to replenish energy for the sensors. However, deployment cost increases with the number of chargers. Many metrics may affect the final policy for charger deployment, such as distance, the power requirements of the sensors, and the transmission radius, which makes the charger deployment problem very complex and difficult to solve. In this paper, we propose an efficient method for determining the field of interest (FoI) in which to find suitable candidate positions for chargers with lower computational cost. In addition, we designed four metaheuristic algorithms to address the local optima problem. Because metaheuristic algorithms require substantial computational cost to escape local optima, we designed a new framework that effectively reduces the search space. The simulation results show that the proposed method can achieve the best price–performance ratio.
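The candidate-position search described in the abstract can be illustrated with a toy greedy placement heuristic, a deliberate simplification of the paper's metaheuristic framework; the function names, coordinates, and radius below are hypothetical, not taken from the paper:

```python
import math

def coverage(pos, sensors, radius):
    """Indices of sensors within charging radius of a candidate position."""
    return {i for i, s in enumerate(sensors) if math.dist(pos, s) <= radius}

def greedy_deploy(candidates, sensors, radius):
    """Greedily pick candidate positions until every reachable sensor is covered."""
    uncovered = set(range(len(sensors)))
    chosen = []
    while uncovered:
        # Pick the candidate that newly covers the most uncovered sensors.
        best = max(candidates,
                   key=lambda p: len(coverage(p, sensors, radius) & uncovered))
        gained = coverage(best, sensors, radius) & uncovered
        if not gained:  # remaining sensors unreachable from any candidate
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

sensors = [(0, 0), (1, 0), (5, 5)]
candidates = [(0.5, 0), (5, 5), (9, 9)]
print(greedy_deploy(candidates, sensors, radius=2.0))  # [(0.5, 0), (5, 5)]
```

A metaheuristic such as those in the paper would then perturb and re-evaluate such a deployment to escape the local optima that greedy selection can get stuck in.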

23 pages, 13136 KiB  
Article
A Mobile-Based System for Detecting Ginger Leaf Disorders Using Deep Learning
by Hamna Waheed, Waseem Akram, Saif ul Islam, Abdul Hadi, Jalil Boudjadar and Noureen Zafar
Future Internet 2023, 15(3), 86; https://doi.org/10.3390/fi15030086 - 21 Feb 2023
Cited by 5 | Viewed by 3886
Abstract
The agriculture sector plays a crucial role in supplying nutritious and high-quality food. Plant disorders significantly impact crop productivity, resulting in an annual loss of 33%. The early and accurate detection of plant disorders is a difficult task for farmers and requires specialized knowledge, significant effort, and labor. In this context, smart devices and advanced artificial intelligence techniques have significant potential to pave the way toward sustainable and smart agriculture. This paper presents a deep-learning-based Android system that can diagnose ginger plant disorders such as soft rot disease, pest patterns, and nutritional deficiencies. To achieve this, state-of-the-art deep learning models were trained on a real dataset of 4,394 ginger leaf images with diverse backgrounds. The trained models were then integrated into an Android-based mobile application that takes ginger leaf images as input and performs the real-time detection of crop disorders. The proposed system shows promising results in terms of accuracy, precision, recall, confusion matrices, computational cost, Matthews correlation coefficient (MCC), mAP, and F1-score.
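Among the reported metrics, the Matthews correlation coefficient is computed directly from the binary confusion-matrix counts; a minimal sketch of the standard formula (not the paper's code, and with illustrative counts):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=50, tn=50, fp=0, fn=0))    # 1.0  (perfect classifier)
print(mcc(tp=25, tn=25, fp=25, fn=25))  # 0.0  (chance level)
```

Unlike accuracy, MCC stays informative on imbalanced classes, which is why it is commonly reported alongside precision, recall, and F1-score.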

31 pages, 1849 KiB  
Article
Vendor-Agnostic Reconfiguration of Kubernetes Clusters in Cloud Federations
by Eddy Truyen, Hongjie Xie and Wouter Joosen
Future Internet 2023, 15(2), 63; https://doi.org/10.3390/fi15020063 - 1 Feb 2023
Cited by 1 | Viewed by 3872
Abstract
Kubernetes (K8s) defines standardized APIs for container-based cluster orchestration such that it becomes possible for application managers to deploy their applications in a portable and interoperable manner. However, a practical problem arises when the same application must be replicated in a distributed fashion across different edge, fog, and cloud sites; namely, there will not exist a single K8s vendor that is able to provision and manage K8s clusters across all these sites. Hence, the problem of feature incompatibility between different K8s vendors arises. A large number of documented features in the open-source distribution of K8s are optional features that are turned off by default but can be activated by setting specific combinations of parameters and plug-in components in configuration manifests for the K8s control plane and worker node agents. However, none of these configuration manifests are standardized, giving K8s vendors the freedom to hide the manifests behind a single, more restricted, and proprietary customization interface. Therefore, some optional K8s features cannot be activated consistently across K8s vendors, and applications that require these features cannot be run on those vendors. In this paper, we present a unified, vendor-agnostic feature management approach for consistently configuring optional K8s features across a federation of clusters hosted by different Kubernetes vendors. We describe vendor-agnostic reconfiguration tactics that are already applied in industry and that cover a wide range of optional K8s features. Based on these tactics, we design and implement an autonomic controller for declarative feature compatibility management across a cluster federation. We found that the features configured through our vendor-agnostic approach have no impact on application performance when compared with a cluster where the features are configured using the configuration manifests of the open-source K8s distribution. Moreover, the maximum time to complete reconfiguration of a single feature is within 100 seconds, which is 6 times faster than using the proprietary customization interfaces of mainstream K8s vendors such as Google Kubernetes Engine. However, there is a non-negligible disruption to running applications when performing the reconfiguration on an existing cluster; this disruption does not appear with the proprietary customization methods of the K8s vendors due to their use of rolling upgrades of cluster nodes. Therefore, our approach is best applied in the following three use cases: (i) when starting up new K8s clusters, (ii) when optional K8s features of existing clusters must be activated as quickly as possible and temporary disruption to running applications can be tolerated, or (iii) when proprietary customization interfaces do not allow the desired optional feature to be activated.
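For context, in the open-source K8s distribution an optional feature is typically activated through a feature gate set in a control-plane or node-agent configuration manifest, for example in a KubeletConfiguration; the gate shown is illustrative, as available gate names vary by K8s release:

```yaml
# Node-agent configuration manifest from the open-source K8s distribution --
# the kind of non-standardized surface that vendors hide behind proprietary
# customization interfaces.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Illustrative optional feature toggled on; gate availability depends on the release.
  GracefulNodeShutdown: true
```

The paper's vendor-agnostic tactics aim to reach this kind of setting consistently even on vendors that do not expose the manifest directly.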

20 pages, 3494 KiB  
Article
ReSQoV: A Scalable Resource Allocation Model for QoS-Satisfied Cloud Services
by Hassan Mahmood Khan, Fang-Fang Chua and Timothy Tzen Vun Yap
Future Internet 2022, 14(5), 131; https://doi.org/10.3390/fi14050131 - 26 Apr 2022
Cited by 6 | Viewed by 2825
Abstract
Dynamic resource provisioning is made more accessible with cloud computing. Monitoring a running service is critical, and modifications are performed when specific criteria are exceeded. It is standard practice to add or delete resources in such situations. We investigate methods to ensure the Quality of Service (QoS), estimate the required resources, and modify allotted resources depending on workload, serialization, and parallelism due to resources. This article focuses on cloud QoS violation remediation using resource planning and scaling. A Resource Quantified Scaling for QoS Violation (ReSQoV) model is proposed based on the Universal Scalability Law (USL), which provides cloud service capacity for specific workloads and generates a capacity model. ReSQoV considers system overheads while allocating resources to maintain the agreed QoS. When the QoS violation detection decision is Probably Violation or Definitely Violation, the remedial action is triggered, and the required resources are added to the virtual machine as vertical scaling. The scenarios emulate QoS parameters and their respective resource utilization for ReSQoV compared to policy-based resource allocation. The results show that after USL-based quantified resource allocation, QoS is regained, and validation of ReSQoV is performed through an ANOVA statistical test, which shows a significant difference before and after implementation.
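The USL capacity model at the heart of ReSQoV can be sketched as follows, where sigma captures serialization (contention) and kappa captures crosstalk (coherency) between resources; the parameter values and helper names below are hypothetical, not the paper's:

```python
def usl_capacity(n, sigma, kappa):
    """Universal Scalability Law: relative throughput at n parallel resources."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

def resources_for_target(target, sigma, kappa, max_n=256):
    """Smallest resource count whose USL-predicted capacity meets the target."""
    for n in range(1, max_n + 1):
        if usl_capacity(n, sigma, kappa) >= target:
            return n
    return None  # target unreachable: USL capacity peaks and then degrades

# With mild contention and crosstalk, reaching ~4x throughput needs
# more than 4 resources, because scaling is sublinear.
print(resources_for_target(4.0, sigma=0.05, kappa=0.001))  # 5
```

This is why a quantified model beats naive "add one VM per violation" policies: it accounts for the overheads that make the fifth resource contribute less than the first.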

16 pages, 3098 KiB  
Article
Fog-Based CDN Framework for Minimizing Latency of Web Services Using Fog-Based HTTP Browser
by Ahmed H. Ibrahim, Zaki T. Fayed and Hossam M. Faheem
Future Internet 2021, 13(12), 320; https://doi.org/10.3390/fi13120320 - 17 Dec 2021
Cited by 5 | Viewed by 3783
Abstract
Cloud computing has been a dominant computing paradigm for many years. It provides applications with computing, storage, and networking capabilities. Furthermore, it enhances the scalability and quality of service (QoS) of applications and offers better utilization of resources. Recently, these advantages of cloud computing have deteriorated in quality. Cloud services have been affected in terms of latency and QoS due to the high streams of data produced by the many Internet of Things (IoT) devices, smart machines, and other computing devices joining the network, which in turn affects network capabilities. Content delivery networks (CDNs) previously provided a partial solution for content retrieval, availability, and resource download time. CDNs rely on the geographic distribution of cloud servers to provide better content reachability and are perceived as a network layer near cloud data centers. Recently, CDNs began to experience the same degradation of QoS due to the same factors. Fog computing fills the gap between cloud services and consumers by bringing cloud capabilities close to end devices and is perceived as another network layer near end devices. The adoption of the CDN model in fog computing is a promising approach to providing better QoS and latency for cloud services. Therefore, a fog-based CDN framework capable of reducing the load time of web services is proposed in this paper. To evaluate our proposed framework and provide a complete set of tools for its use, a fog-based browser was developed. We show that our proposed fog-based CDN framework improved the load time of web pages compared to the results attained through the use of a traditional CDN. Different experiments were conducted with a simple network topology against six websites with different content sizes, along with different numbers of fog nodes at different network distances. The results of these experiments show that the fog-based CDN framework with offloading autonomy can reduce latency by 85% and enhance the user experience of websites.
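The core routing idea, serving content from the lowest-latency fog node that caches it and falling back to the cloud CDN otherwise, can be sketched as follows (node names, latencies, and the data layout are hypothetical, not taken from the paper):

```python
def pick_origin(fog_nodes, cloud_latency_ms, content):
    """Return the name and latency of the lowest-latency node holding the
    content, falling back to the cloud CDN on a fog-layer cache miss."""
    holders = [n for n in fog_nodes if content in n["cache"]]
    if holders:
        best = min(holders, key=lambda n: n["latency_ms"])
        return best["name"], best["latency_ms"]
    return "cloud-cdn", cloud_latency_ms

fog_nodes = [
    {"name": "fog-a", "latency_ms": 8,  "cache": {"index.html"}},
    {"name": "fog-b", "latency_ms": 15, "cache": {"index.html", "app.js"}},
]
print(pick_origin(fog_nodes, cloud_latency_ms=120, content="index.html"))  # ('fog-a', 8)
print(pick_origin(fog_nodes, cloud_latency_ms=120, content="video.mp4"))   # ('cloud-cdn', 120)
```

The large gap between fog and cloud round-trip latencies in sketches like this is what makes the reported 85% latency reduction plausible for cached content.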

Review


44 pages, 1990 KiB  
Review
A Survey on Big IoT Data Indexing: Potential Solutions, Recent Advancements, and Open Issues
by Zineddine Kouahla, Ala-Eddine Benrazek, Mohamed Amine Ferrag, Brahim Farou, Hamid Seridi, Muhammet Kurulay, Adeel Anjum and Alia Asheralieva
Future Internet 2022, 14(1), 19; https://doi.org/10.3390/fi14010019 - 31 Dec 2021
Cited by 8 | Viewed by 7439
Abstract
The past decade has been characterized by the growing volumes of data due to the widespread use of the Internet of Things (IoT) applications, which introduced many challenges for efficient data storage and management. Thus, the efficient indexing and searching of large data collections is a very topical and urgent issue. Such solutions can provide users with valuable information about IoT data. However, efficient retrieval and management of such information in terms of index size and search time require optimization of indexing schemes, which is rather difficult to implement. The purpose of this paper is to examine and review existing indexing techniques for large-scale data. A taxonomy of indexing techniques is proposed to enable researchers to understand and select the techniques that will serve as a basis for designing a new indexing scheme. The real-world applications of the existing indexing techniques in different areas, such as health, business, scientific experiments, and social networks, are presented. Open problems and research challenges, e.g., privacy and large-scale data mining, are also discussed.
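As a concrete baseline for the index-size versus search-time trade-off the survey discusses, a one-dimensional sorted index answers exact-match queries in O(log n) via binary search instead of an O(n) scan; a minimal sketch, not drawn from the survey, with hypothetical record names:

```python
import bisect

class SortedIndex:
    """Minimal one-dimensional index: sorted keys plus binary search.
    Pays index build time and storage for O(log n) exact-match lookups."""

    def __init__(self, records):
        self._pairs = sorted(records.items())   # (key, value) sorted by key
        self._keys = [k for k, _ in self._pairs]

    def lookup(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._pairs[i][1]
        return None  # key absent

idx = SortedIndex({"sensor-42": 21.5, "sensor-07": 19.2, "sensor-13": 20.1})
print(idx.lookup("sensor-13"))  # 20.1
print(idx.lookup("sensor-99"))  # None
```

The techniques the survey taxonomizes (trees, hashing, space-partitioning structures) generalize this idea to multi-dimensional, distributed, and streaming IoT data, where the balance between index size and search time becomes much harder.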
