Internet of Things and Cloud-Fog-Edge Computing

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Internet of Things (IoT)".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 18740

Special Issue Editors


Dr. Olivier Debauche, Guest Editor
Computer Science Department, Faculty of Engineering, University of Mons, 7000 Mons, Belgium
Interests: internet of things; edge AI-IoT; internet of medical things; cloud computing

Dr. Mostapha Zbakh, Guest Editor
Communication Networks Department, University Mohammed V–ENSIAS, Rabat BP 713, Morocco
Interests: parallel and distributed systems; high-performance computing; virtualisation; cloud computing

Prof. Dr. Pascal Bouvry, Guest Editor
Faculty of Science, Technology and Medicine, University of Luxembourg, L-4364 Esch-sur-Alzette, Luxembourg
Interests: cloud computing; parallel and grid computing; distributed systems and middleware; optimisation techniques

Dr. Caesar Wu, Guest Editor
Faculty of Science, Technology and Medicine, University of Luxembourg, L-4364 Esch-sur-Alzette, Luxembourg
Interests: artificial intelligence/machine learning; cloud computing; decision-making; internet of things

Special Issue Information

Dear Colleagues,

The MDPI Journal Information invites submissions to a Special Issue on “Internet of Things and Cloud/Fog/Edge Computing”.

The ever-increasing number of connected objects requires ever more processing resources. Cloud computing has shown its limits, with latency problems and link congestion caused by the volume of data to be transferred. To remedy this, some processing has been shifted to the intermediate levels between the cloud and the sensors (fog computing) or onto the sensors themselves (edge computing). New challenges have emerged concerning the distribution of processing across these layers, the need for end-to-end security to protect sensitive data, and the preservation of privacy.

The goal of this Special Issue is to invite high-quality, state-of-the-art research papers that address challenging issues in Cloud/Fog/Edge computing across the different parts of the IoT ecosystem. We solicit original, completed, and unpublished research papers that are not currently under review by any other conference or journal. Topics of interest include, but are not limited to, the following:

  • Internet of medical things (IoMT)
  • Mobile edge computing
  • Osmotic computing
  • IoT security
  • Confidential computing
  • Mobile systems and applications
  • Smart communities and ubiquitous systems
  • IoT in healthcare
  • IoT in business and industry
  • IoT for resilient organizations

Papers should be 9–15 pages in length and formatted according to the MDPI template. Complete instructions for authors can be found at: https://www.mdpi.com/journal/information/instructions.

Deadline for manuscript submissions: 30 November 2024.

Dr. Olivier Debauche
Dr. Mostapha Zbakh
Prof. Dr. Pascal Bouvry
Dr. Caesar Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • internet of things
  • cloud IoT architecture
  • cloud-fog-edge computing
  • distributed architecture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (10 papers)


Research


22 pages, 3774 KiB  
Article
Efficient Schemes for Optimizing Load Balancing and Communication Cost in Edge Computing Networks
by Efthymios Oikonomou and Angelos Rouskas
Information 2024, 15(11), 670; https://doi.org/10.3390/info15110670 - 25 Oct 2024
Viewed by 598
Abstract
Edge computing architectures promise increased quality of service with low communication delays by bringing cloud services closer to the end-users, at the distributed edge servers of the network edge. Hosting server capabilities at access nodes, thereby yielding edge service nodes, offers service proximity to users and provides QoS guarantees. However, the placement of edge servers should match the level of demand for computing resources and the location of user load. Thus, it is necessary to devise schemes that select the most appropriate access nodes to host computing services and associate every remaining access node with the most proper service node to ensure optimal service delivery. In this paper, we formulate this problem as an optimization problem with a bi-objective function that aims at both communication cost minimization and load balance optimization. We propose schemes that tackle this problem and compare their performance against previously proposed heuristics that have been also adapted to target both optimization goals. We study how these algorithms behave in lattice and random grid network topologies with uniform and non-uniform workloads. The results validate the efficiency of our proposed schemes in addition to the significantly lower execution times compared to the other heuristics.
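
The node-selection step above can be pictured with a small greedy sketch. The following Python fragment is purely illustrative (it is not the authors' scheme, and the function names and toy topology are invented): it greedily adds the host that most reduces a weighted sum of communication cost and load imbalance, with each access node attached to its nearest host.

```python
def placement_cost(hosts, nodes, dist, load, alpha=0.5):
    """Weighted bi-objective cost of a candidate host set.

    hosts: access nodes selected to host edge servers;
    dist[a][b]: hop distance between access nodes a and b;
    load[a]: user load originating at access node a;
    alpha: weight between communication cost and load imbalance.
    Each access node is associated with its nearest host.
    """
    comm = 0.0
    served = {h: 0.0 for h in hosts}
    for n in nodes:
        nearest = min(hosts, key=lambda c: dist[n][c])
        comm += load[n] * dist[n][nearest]
        served[nearest] += load[n]
    imbalance = max(served.values()) - min(served.values())
    return alpha * comm + (1 - alpha) * imbalance

def greedy_placement(nodes, dist, load, k, alpha=0.5):
    """Greedily add, k times, the host that most reduces the cost."""
    hosts = []
    for _ in range(k):
        best = min((c for c in nodes if c not in hosts),
                   key=lambda c: placement_cost(hosts + [c], nodes, dist, load, alpha))
        hosts.append(best)
    return hosts

# Toy 4-node line topology with uniform load: expect well-spread hosts.
nodes = [0, 1, 2, 3]
dist = [[abs(i - j) for j in nodes] for i in nodes]
print(greedy_placement(nodes, dist, load=[1.0] * 4, k=2))  # e.g. [1, 2]
```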

14 pages, 887 KiB  
Article
Optimizing Task Offloading for Power Line Inspection in Smart Grid Networks with Edge Computing: A Game Theory Approach
by Xu Lu, Sihan Yuan, Zhongyuan Nian, Chunfang Mu and Xi Li
Information 2024, 15(8), 441; https://doi.org/10.3390/info15080441 - 29 Jul 2024
Cited by 1 | Viewed by 1126
Abstract
In the power grid, inspection robots enhance operational efficiency and safety by inspecting power lines for information sharing and interaction. Edge computing improves computational efficiency by positioning resources close to the data source, supporting real-time fault detection and line monitoring. However, large data volumes and high latency pose challenges. Existing offloading strategies often neglect task divisibility and priority, resulting in low efficiency and poor system performance. This paper constructs a power grid inspection offloading scenario using Python 3.11.2 to study and improve various offloading strategies. Implementing a game-theory-based distributed computation offloading strategy, simulation analysis reveals issues with high latency and low resource utilization. To address these, an improved game-theory-based strategy is proposed, optimizing task allocation and priority settings. By integrating local and edge computing resources, resource utilization is enhanced, and latency is significantly reduced. Simulations show that the improved strategy lowers communication latency, enhances system performance, and increases resource utilization in the power grid inspection context, offering valuable insights for related research.
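
As a rough illustration of the game-theoretic pattern the abstract describes, the sketch below runs best-response dynamics for a toy congestion game in which each task chooses between local execution and an edge server whose delay grows with the number of offloaders. It is a simplified stand-in, not the paper's strategy; all parameters are hypothetical.

```python
def best_response_offloading(n_tasks, local_delay, edge_delay_base,
                             congestion, max_iter=100):
    """Best-response dynamics for a toy offloading congestion game.

    local_delay[i]: delay of executing task i on its own device;
    edge_delay_base[i]: edge delay for task i with no contention;
    congestion: extra edge delay per concurrently offloaded task.
    Returns a 0/1 vector (1 = offload). In this symmetric setting the
    dynamics reach a pure Nash equilibrium, where no task benefits
    from unilaterally changing its decision.
    """
    decision = [0] * n_tasks
    for _ in range(max_iter):
        changed = False
        for i in range(n_tasks):
            others = sum(decision) - decision[i]        # other offloaders
            edge_delay = edge_delay_base[i] + congestion * (others + 1)
            best = 1 if edge_delay < local_delay[i] else 0
            if best != decision[i]:
                decision[i], changed = best, True
        if not changed:                                 # equilibrium reached
            break
    return decision

print(best_response_offloading(5, [4, 5, 3, 6, 2], [1] * 5, congestion=1.0))
```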

20 pages, 7415 KiB  
Article
Model and Implementation of a Novel Heat-Powered Battery-Less IIoT Architecture for Predictive Industrial Maintenance
by Raúl Aragonés, Joan Oliver, Roger Malet, Maria Oliver-Parera and Carles Ferrer
Information 2024, 15(6), 330; https://doi.org/10.3390/info15060330 - 5 Jun 2024
Viewed by 1117
Abstract
The research and management of Industry 4.0 increasingly relies on accurate real-time quality data to apply efficient algorithms for predictive maintenance. Currently, Low-Power Wide-Area Networks (LPWANs) offer potential advantages in monitoring tasks for predictive maintenance. However, their applicability requires improvements in aspects such as energy consumption, transmission range, data rate and constant quality of service. Commonly used battery-operated IIoT devices have several limitations in their adoption in large facilities or heat-intensive industries (iron and steel, cement, etc.). In these cases, the self-heating nodes together with the appropriate low-power processing platform and industrial sensors are aligned with the requirements and real-time criteria required for industrial monitoring. From an environmental point of view, the carbon footprint associated with human activity leads to a steady rise in global average temperature. Most of the gases emitted into the atmosphere are due to these heat-intensive industries. In fact, much of the energy consumed by industries is dissipated in the form of waste heat. With this scenario, it makes sense to build heat transformation collection systems as guarantors of battery-free self-powered IIoT devices. Thermal energy harvesters work on the physical basis of the Seebeck effect. In this way, this paper gathers the methodology that standardizes the modelling and simulation of waste heat recovery systems for IoT nodes, gathering energy from any hot surface, such as a pipe or chimney. The statistical analysis is carried out with the data obtained from two different IoT architectures showing a good correlation between model simulation and prototype behaviour. Additionally, the selected model will be coupled to a low-power processing platform with LoRaWAN connectivity to demonstrate its effectiveness and self-powering ability in a real industrial environment.
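
The physics behind such harvesters can be summarized in two standard formulas: the Seebeck open-circuit voltage V = S·ΔT and the matched-load power P = V²/(4R). A minimal sketch with illustrative numbers (not taken from the paper):

```python
def teg_power_watts(seebeck_v_per_k, delta_t_k, r_internal_ohm):
    """Matched-load output power of a thermoelectric generator (TEG).

    Seebeck open-circuit voltage: V = S * dT.
    Maximum power transfer (load resistance = internal resistance):
    P = V^2 / (4 * R).
    """
    v_oc = seebeck_v_per_k * delta_t_k
    return v_oc ** 2 / (4 * r_internal_ohm)

# Illustrative numbers only: a module with S = 0.05 V/K and R = 2 ohm
# mounted on a surface 60 K above ambient.
print(teg_power_watts(0.05, 60.0, 2.0))  # (0.05*60)^2 / 8 = 1.125 W
```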

17 pages, 3682 KiB  
Article
A Collaborative Allocation Algorithm of Communicating, Caching and Computing Resources in Local Power Wireless Communication Network
by Jiajia Tang, Sujie Shao, Shaoyong Guo, Ye Wang and Shuang Wu
Information 2024, 15(6), 309; https://doi.org/10.3390/info15060309 - 27 May 2024
Viewed by 878
Abstract
With the rapid development of new power systems, diverse new power services have imposed stricter requirements on network resources and performance. However, the traditional method of transmitting request data to the IoT management platform for unified processing suffers from large delays due to long transmission distances, making it difficult to meet the delay requirements of new power services. Therefore, to reduce the transmission delay, data transmission, storage and computation need to be performed locally. However, due to the limited resources of individual nodes in the local power wireless communication network, issues such as tight coupling between devices and resources and a lack of flexible allocation need to be addressed. The collaborative allocation of resources among multiple nodes in the local network is necessary to satisfy the multi-dimensional resource requirements of new power services. In response to the problems of limited node resources, inflexible resource allocation, and the high complexity of multi-dimensional resource allocation in local power wireless communication networks, this paper proposes a multi-objective joint optimization model for the collaborative allocation of communication, storage, and computing resources. This model utilizes the computational characteristics of communication resources to reduce the dimensionality of the objective function. Furthermore, a mouse swarm optimization algorithm based on multi-strategy improvements is proposed. The simulation results demonstrate that this method can effectively reduce the total system delay and improve the utilization of network resources.
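
The paper's improved mouse swarm optimizer is not specified in the abstract, so the sketch below substitutes a generic PSO-style swarm loop to show the overall shape of such a metaheuristic: candidate allocation vectors move toward personal and global bests of a delay objective. Every name and number here is hypothetical.

```python
import random

def swarm_minimize(objective, dim, lo, hi, n_agents=30, iters=200):
    """Generic PSO-style swarm minimization of an allocation objective."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    pbest = [p[:] for p in pos]                     # personal bests
    gbest = min(pbest, key=objective)[:]            # global best
    for _ in range(iters):
        for i in range(n_agents):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy stand-in objective: total "delay" of bandwidth shares for 4 services.
mock_delay = lambda x: sum((xi - 0.25) ** 2 for xi in x)
print(swarm_minimize(mock_delay, dim=4, lo=0.0, hi=1.0))  # near [0.25]*4
```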

22 pages, 9676 KiB  
Article
Modeling- and Simulation-Driven Methodology for the Deployment of an Inland Water Monitoring System
by Giordy A. Andrade, Segundo Esteban, José L. Risco-Martín, Jesús Chacón and Eva Besada-Portas
Information 2024, 15(5), 267; https://doi.org/10.3390/info15050267 - 9 May 2024
Viewed by 1115
Abstract
In response to the challenges introduced by global warming and increased eutrophication, this paper presents an innovative modeling and simulation (M&S)-driven model for developing an automated inland water monitoring system. This system is grounded in a layered Internet of Things (IoT) architecture and seamlessly integrates cloud, fog, and edge computing to enable sophisticated, real-time environmental surveillance and prediction of harmful algal and cyanobacterial blooms (HACBs). Utilizing autonomous boats as mobile data collection units within the edge layer, the system efficiently tracks algae and cyanobacteria proliferation and relays critical data upward through the architecture. These data feed into advanced inference models within the cloud layer, which inform predictive algorithms in the fog layer, orchestrating subsequent data-gathering missions. This paper also details a complete development environment that facilitates the system lifecycle from concept to deployment. The modular design is powered by Discrete Event System Specification (DEVS) and offers unparalleled adaptability, allowing developers to simulate, validate, and deploy modules incrementally and cutting across traditional developmental phases.
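
A DEVS atomic model is defined by four functions: a time advance, an output function, and internal and external transitions. The skeleton below sketches that interface in Python, with a hypothetical edge-layer sampler as an example; it mirrors the DEVS formalism in general, not the authors' toolchain.

```python
import math

class AtomicDEVS:
    """Interface of a DEVS atomic model (generic, minimal)."""
    def time_advance(self):             # ta: time until next internal event
        return math.inf
    def output(self):                   # lambda: emitted before delta_int
        return None
    def delta_int(self):                # internal state transition
        pass
    def delta_ext(self, elapsed, msg):  # transition on external input
        pass

class AlgaeSampler(AtomicDEVS):
    """Hypothetical edge-layer sensor model emitting periodic readings."""
    def __init__(self, period_s=60.0):
        self.period, self.reading = period_s, 0.0
    def time_advance(self):
        return self.period
    def output(self):
        return {"chlorophyll_ugL": self.reading}   # assumed payload shape
    def delta_int(self):
        self.reading += 0.1                        # placeholder dynamics

s = AlgaeSampler(period_s=30.0)
print(s.time_advance(), s.output())  # 30.0 {'chlorophyll_ugL': 0.0}
```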

17 pages, 1166 KiB  
Article
Resource Allocation and Pricing in Energy Harvesting Serverless Computing Internet of Things Networks
by Yunqi Li and Changlin Yang
Information 2024, 15(5), 250; https://doi.org/10.3390/info15050250 - 29 Apr 2024
Viewed by 1275
Abstract
This paper considers a resource allocation problem involving servers and mobile users (MUs) operating in a serverless edge computing (SEC)-enabled Internet of Things (IoT) network. Each MU has a fixed budget, and each server is powered by the grid and has energy harvesting (EH) capability. Our objective is to maximize the revenue of the operator that operates the said servers and the number of resources purchased by the MUs. We propose a Stackelberg game approach, where servers and MUs act as leaders and followers, respectively. We prove the existence of a Stackelberg game equilibrium and develop an iterative algorithm to determine the final game equilibrium price. Simulation results show that the proposed scheme is efficient in terms of the SEC’s profit and MU’s demand. Moreover, both MUs and SECs gain benefits from renewable energy.
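
Stackelberg pricing of this kind is usually solved by backward induction: compute the follower's best response in closed form, then search the leader's price. The toy single-server, single-user sketch below illustrates that structure; the utility model, prices, and budget are assumptions, not the paper's formulation.

```python
import math

def stackelberg_price(budget, utility, cost, p_grid):
    """Backward-induction pricing for one server (leader) and one MU.

    Follower: given price p, maximize utility*log(1+q) - p*q subject to
    p*q <= budget, giving demand q*(p) = min(utility/p - 1, budget/p).
    Leader: pick the price on p_grid maximizing profit (p - cost)*q*(p).
    """
    def demand(p):
        return min(max(0.0, utility / p - 1.0), budget / p)
    return max(p_grid, key=lambda p: (p - cost) * demand(p))

prices = [0.1 * k for k in range(1, 500)]   # leader's price grid
p_star = stackelberg_price(budget=10.0, utility=5.0, cost=0.5, p_grid=prices)
print(round(p_star, 2), round(math.sqrt(2.5), 2))  # grid optimum vs analytic
```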

17 pages, 1117 KiB  
Article
Design of a Meaningful Framework for Time Series Forecasting in Smart Buildings
by Louis Closson, Christophe Cérin, Didier Donsez and Jean-Luc Baudouin
Information 2024, 15(2), 94; https://doi.org/10.3390/info15020094 - 7 Feb 2024
Viewed by 1798
Abstract
This paper aims to provide discernment toward establishing a general framework, dedicated to data analysis and forecasting in smart buildings. It constitutes an industrial return of experience from an industrialist specializing in IoT supported by the academic world. With the necessary improvement of energy efficiency, discernment is paramount for facility managers to optimize daily operations and prioritize renovation work in the building sector. With the scale of buildings and the complexity of Heating, Ventilation, and Air Conditioning (HVAC) systems, the use of artificial intelligence is deemed the cheapest tool, holding the highest potential, even if it requires IoT sensors and a deluge of data to establish genuine models. However, the wide variety of buildings, users, and data hinders the development of industrial solutions, as specific studies often lack relevance to analyze other buildings, possibly with different types of data monitored. The relevance of the modeling can also disappear over time, as buildings are dynamic systems evolving with their use. In this paper, we propose to study the forecasting ability of the widely used Long Short-Term Memory (LSTM) network algorithm, which is well-designed for time series modeling, across an instrumented building. In this way, we considered the consistency of the performances for several issues as we compared to the cases with no prediction, which is lacking in the literature. The insight provided let us examine the quality of AI models and the quality of data needed in forecasting tasks. Finally, we deduced that efficient models and smart choices about data allow meaningful insight into developing time series modeling frameworks for smart buildings. For reproducibility concerns, we also provide our raw data, which came from one “real” smart building, as well as significant information regarding this building. In summary, our research aims to develop a methodology for exploring, analyzing, and modeling data from the smart buildings sector. Based on our experiment on forecasting temperature sensor measurements, we found that a bigger AI model (1) does not always imply a longer time in training and (2) can have little impact on accuracy and (3) using more features is tied to data processing order. We also observed that providing more data is irrelevant without a deep understanding of the problem physics.
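
For readers unfamiliar with the setup, a one-step-ahead LSTM forecaster of the general kind evaluated in the paper can be written in a few lines of PyTorch. The architecture and sizes below are placeholders; the authors' exact models and hyperparameters may differ.

```python
import torch
import torch.nn as nn

class SensorForecaster(nn.Module):
    """One-step-ahead LSTM regressor for a building sensor time series."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict the next reading

model = SensorForecaster()
window = torch.randn(8, 24, 1)          # 8 windows of 24 hourly readings
print(model(window).shape)              # torch.Size([8, 1])
```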

32 pages, 1146 KiB  
Article
Online Task Scheduling of Big Data Applications in the Cloud Environment
by Laila Bouhouch, Mostapha Zbakh and Claude Tadonki
Information 2023, 14(5), 292; https://doi.org/10.3390/info14050292 - 15 May 2023
Cited by 3 | Viewed by 2122
Abstract
The development of big data has generated data-intensive tasks that are usually time-consuming, with a high demand on cloud data centers for hosting big data applications. It becomes necessary to consider both data and task management to find the optimal resource allocation scheme, which is a challenging research issue. In this paper, we address the problem of online task scheduling combined with data migration and replication in order to reduce the overall response time as well as ensure that the available resources are efficiently used. We introduce a new scheduling technique, named Online Task Scheduling algorithm based on Data Migration and Data Replication (OTS-DMDR). The main objective is to efficiently assign online incoming tasks to the available servers while considering the access time of the required datasets and their replicas, the execution time of the task in different machines, and the computational power of each machine. The core idea is to achieve better data locality by performing an effective data migration while handling replicas. As a result, the overall response time of the online tasks is reduced, and the throughput is improved with enhanced machine resource utilization. To validate the performance of the proposed scheduling method, we run in-depth simulations with various scenarios and the results show that our proposed strategy performs better than the other existing approaches. In fact, it reduces the response time by 78% when compared to the First Come First Served scheduler (FCFS), by 58% compared to the Delay Scheduling, and by 46% compared to the technique of Li et al. Consequently, the present OTS-DMDR method is very effective and convenient for the problem of online task scheduling.
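
The core scoring idea, weighing data access time (zero for a local replica, otherwise a transfer cost) against execution time on each candidate machine, can be sketched as follows. This is a deliberately simplified stand-in for OTS-DMDR, whose actual cost model also handles migration, replication, and queueing; all names and numbers are invented.

```python
def assign_task(task, servers, bandwidth_gbps):
    """Assign an incoming task to the server with the lowest estimated
    response time = data access time + execution time.

    task: {'dataset': id, 'dataset_gb': size, 'flops': work};
    servers: [{'name', 'gflops', 'datasets': set of local dataset ids}].
    """
    def response_time(s):
        local = task["dataset"] in s["datasets"]
        access = 0.0 if local else task["dataset_gb"] / bandwidth_gbps
        return access + task["flops"] / (s["gflops"] * 1e9)
    return min(servers, key=response_time)

servers = [
    {"name": "s1", "gflops": 50, "datasets": {"d1"}},   # slow, holds data
    {"name": "s2", "gflops": 200, "datasets": set()},   # fast, must copy
]
task = {"dataset": "d1", "dataset_gb": 20, "flops": 1e12}
print(assign_task(task, servers, bandwidth_gbps=1.0)["name"])  # "s1": 20 s vs 25 s
```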

13 pages, 573 KiB  
Article
Security Verification of an Authentication Algorithm Based on Verifiable Encryption
by Maki Kihara and Satoshi Iriyama
Information 2023, 14(2), 126; https://doi.org/10.3390/info14020126 - 15 Feb 2023
Viewed by 1939
Abstract
A new class of cryptosystems called verifiable encryption (VE) that facilitates the verification of two plaintexts without decryption was proposed in our previous paper. The main contributions of our previous study include the following. (1) Certain cryptosystems such as the one-time pad belong to the VE class. (2) We constructed an authentication algorithm for unlocking local devices via a network that utilizes the property of VE. (3) As a result of implementing the VE-based authentication algorithm using the one-time pad, the encryption, verification, and decryption processing times are less than 1 ms even with a text length of 8192 bits. All the personal information used in the algorithm is protected by Shannon’s perfect secrecy. (4) The robustness of the algorithm against man-in-the-middle attacks and plaintext attacks was discussed. However, the discussion about the security of the algorithm was insufficient from the following two perspectives: (A) its robustness against other theoretical attacks such as ciphertext-only, known-plaintext, chosen-plaintext, adaptive chosen-plaintext, chosen-ciphertext, and adaptive chosen-ciphertext attacks was not discussed; (B) a formal security analysis using security verification tools was not performed. In this paper, we analyze the security of the VE-based authentication algorithm by discussing its robustness against the above theoretical attacks and by validating the algorithm using a security verification tool. These security analyses show that known attacks are ineffective against the algorithm.
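
The one-time pad's membership in the VE class has a compact illustration: under a shared key, the XOR of two ciphertexts equals the XOR of their plaintexts, so equality of plaintexts can be verified without decrypting either one. The sketch below shows only this property; the paper's full authentication algorithm, including its key management, is richer.

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    """One-time pad: c = m XOR k, with key at least as long as msg."""
    return bytes(m ^ k for m, k in zip(msg, key))

def verify_equal(c1: bytes, c2: bytes) -> bool:
    """Check plaintext equality from ciphertexts alone: under a shared
    key, c1 XOR c2 == m1 XOR m2, which is all zeros iff m1 == m2.
    (Reusing a pad across two messages leaks m1 XOR m2; the paper's
    algorithm manages keys so the overall scheme stays secure.)"""
    if len(c1) != len(c2):
        return False
    return not any(a ^ b for a, b in zip(c1, c2))

key = secrets.token_bytes(16)
stored = otp_encrypt(key, b"device unlock 01")   # enrolled secret
claim = otp_encrypt(key, b"device unlock 01")    # authentication attempt
print(verify_equal(stored, claim))               # True, without decryption
```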

Review


21 pages, 445 KiB  
Review
Literature Review: Clinical Data Interoperability Models
by Rachida Ait Abdelouahid, Olivier Debauche, Saïd Mahmoudi and Abdelaziz Marzak
Information 2023, 14(7), 364; https://doi.org/10.3390/info14070364 - 27 Jun 2023
Cited by 7 | Viewed by 4311
Abstract
A medical entity (hospital, nursing home, rest home, revalidation center, etc.) usually includes a multitude of information systems that allow for quick decision-making close to the medical sensors. The Internet of Medical Things (IoMT) is an area of IoT that generates a lot of data of different natures (radio, CT scan, medical reports, medical sensor data). However, these systems need to share and exchange medical information in a seamless, timely, and efficient manner with systems that are either within the same entity or other healthcare entities. The lack of inter- and intra-entity interoperability causes major problems in the analysis of patient records and leads to additional financial costs (e.g., redone examinations). Developing a medical data interoperability architecture model that allows providers and other actors in the medical community to exchange patient summary information with caregivers and partners, and thereby improve the quality of care, the level of data security, and the efficiency of care, requires taking stock of the state of knowledge. This paper discusses the challenges faced by medical entities in sharing and exchanging medical information seamlessly and efficiently. It highlights the need for inter- and intra-entity interoperability to improve the analysis of patient records, reduce financial costs, and enhance the quality of care. The paper reviews existing solutions proposed by various researchers and identifies their limitations. The analysis of the literature has shown that the HL7 FHIR standard is particularly well adapted for exchanging and storing health data, while DICOM, CDA, and JSON can be converted to HL7 FHIR, or HL7 FHIR converted to these formats, for interoperability purposes. This approach covers almost all use cases.
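
For concreteness, HL7 FHIR resources are plain JSON documents with standardized field names, which is what makes conversion from DICOM, CDA, or ad hoc JSON tractable. Below is a minimal FHIR R4 Patient resource; the field names follow the published FHIR specification, and all values are fictional.

```python
import json

# Minimal HL7 FHIR R4 "Patient" resource with fictional values.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "identifier": [{"system": "urn:example:hospital-mrn", "value": "123456"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1984-06-15",
}
print(json.dumps(patient, indent=2))
```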

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Framework for Cyber-Physical Production Systems in Heavy Industry
Authors: Łukasz Rauch
Affiliation: Department of Applied Computer Science and Modelling, AGH University of Science and Technology, 30-059 Kraków, Poland
Abstract: The paper will present a new framework that we have developed for a Cyber-Physical System at the Polish division of the CMC company, which produces cast and rolled steel products. The system contains modules for gathering data from multiple sensors and cameras, together with modules dedicated to computing tasks that predict various process aspects using artificial intelligence, soft computing, and intensive numerical procedures.

Title: Computational Approach for Internet of Things with 5G Connectivity using Quantum Algorithms
Authors: Shitharth Selvarajan
Affiliation: School of Built Environment, Engineering and Computing, Leeds Beckett University, LS1 3HE Leeds, U.K.
Abstract: This paper analyses the need for a quantum computing approach for IoT applications using the 5G resource spectrum. Most IoT devices are connected for data transmission to end users with remote monitoring units, yet sufficient data storage units do not exist, and large volumes of data cannot be processed within short periods. Hence, in the proposed method, a quantum information processing protocol and quantum algorithms are integrated so that data transmissions are maximized. Further, the system model is designed to check the external influence factors that prevent an IoT device from transmitting data to end users. Therefore, with the corresponding signal and noise power, it is essential to process the transmissions, thereby increasing data proportions at end connectivity. Once quantum computations are made, it is crucial to normalize IoT data units, thus establishing control over connected nodes and creating a gateway for achieving maximum throughput. The combined system model is tested under four cases, and the comparative outcomes prove that, with queue reductions of 12%, a maximum throughput of 99% is achievable.

Title: Efficient Schemes for Optimizing Load Balancing and Communication Cost in Edge Computing Networks
Authors: Angelos Rouskas
Affiliation: University of Piraeus
Abstract: Edge computing architectures promise increased quality of service with low communication delays by bringing cloud services closer to the end-users, at the distributed edge servers of the network edge. Hosting server capabilities at access nodes, thus yielding edge service nodes, offers service proximity to users and provides QoS guarantees. However, the placement of edge servers should match the level of demand for computing resources and the location of user load. Thus, it is necessary to devise schemes that select the most appropriate access nodes to host computing services and associate every remaining access node with the most proper service node to receive its services. In this paper, we formulate this problem as an optimization problem with a bi-objective function that aims both communication cost minimization and load balance optimization. We propose schemes that tackle this problem and compare their performance against previously proposed heuristics that have been also adapted to target both optimization goals. We study how these algorithms behave in lattice and random grid network topologies with uniform and non-uniform workloads. The results validate the efficiency of our proposed schemes in addition to the significantly lower execution times compared to the other heuristics.

Title: Proposed Development of an Educational Mobile Application for Vehicle Management and Control Based on IoT
Authors: Pablo Alejandro Quezada-Sarmiento
Affiliation: Computer Languages and Systems Department, University of the Basque Country UPV/EHU, 20080 Donostia, Spain
Abstract: This article proposes the development of an educational mobile application for vehicle management and control in the city of Loja, Ecuador, integrating aspects of the Internet of Things (IoT). The development process began with the identification and correlation of essential requirements to select an appropriate framework and explore potential synergies with IoT. It is crucial to consider various approaches to mobile application development and choose the one that best suits the specific needs of the project. The application was developed in Android Studio, an integrated development environment (IDE) specifically designed for Android mobile application development. Android Studio provides tools for programming, debugging, and testing applications, as well as resources for software project and version management. This IDE includes a code editor compatible with languages such as Java and Kotlin, and offers templates and code snippets to streamline and accelerate the development process. It is also important to assess whether a mobile application is the most suitable solution or if other technological alternatives should be considered. The software development proposal serves as a useful guide, helping to define the processes and procedures necessary for creating software for small, wireless computing devices, such as mobile phones.
