Intelligent Edge: When AI Meets Edge Computing

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "Internet of Things (IoT) and Industrial IoT".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 4385

Special Issue Editor


Dr. Riduan Abid
Guest Editor
TSYS School of Computer Science, Columbus State University, Columbus, GA 31907, USA
Interests: cloud and edge computing; big data analytics; genomics; advanced metering infrastructure (AMI); architectures for smart grids; energy consumption prediction for smart buildings

Special Issue Information

Dear Colleagues,

This Special Issue investigates the dynamic interplay between artificial intelligence (AI) and edge computing, two transformative technologies that are redefining the landscape of modern computing. AI brings the power of machine learning and intelligent decision making, while edge computing offers the advantage of localized data processing. Processing data close to its source reduces latency, lightens the load on cloud data centers, and enhances data security: because the data are handled locally and in a distributed manner, the attack surface is dispersed and the need to transmit sensitive information is reduced.

The convergence of these technologies leads to groundbreaking applications across various sectors, including healthcare, smart cities, industrial automation, and the Internet of Things (IoT).

Contributions in this Special Issue explore how AI algorithms can be optimized for edge devices with limited resources, and how edge computing can facilitate real-time AI applications by processing data closer to its source. The Special Issue also examines security and privacy concerns in intelligent edge networks, addressing how to safeguard sensitive information in decentralized environments.

This Special Issue aims to provide insights into the current state and future prospects of intelligent edge systems, offering a comprehensive understanding for researchers, practitioners, and enthusiasts in the field.

Selected Topics (including, but not limited to, the following):

  1. Optimizing AI Algorithms for Edge Computing Environments

Exploring how AI algorithms can be tailored to operate efficiently on edge devices with limited computational resources.

  2. Security and Privacy in Intelligent Edge Systems

Investigating the security challenges and privacy implications inherent in deploying AI on edge computing platforms.

  3. Edge AI in IoT Applications

Discussing the role of edge computing in enhancing AI-driven applications in the Internet of Things (IoT), particularly in smart homes, cities, and industries.

  4. Real-Time Data Processing and Decision Making

Examining how edge computing enables real-time data analysis and immediate decision-making in critical applications like autonomous vehicles and healthcare monitoring.

  5. Energy Efficiency in Edge AI Systems

Addressing the challenges and solutions for energy-efficient AI processing at the edge, crucial for battery-operated and remote devices.

  6. Edge Computing in 5G Networks

Exploring the synergy between 5G technologies and edge computing in facilitating faster and more reliable AI applications.

  7. AI-Driven Edge Computing in Healthcare

Discussing the impact of edge AI in medical diagnostics, patient monitoring, and telemedicine, with a focus on privacy and real-time data analysis.

  8. Scalability and Management of Edge AI Networks

Investigating the architectural and management challenges in scaling edge AI systems, including deployment strategies and maintenance.

  9. Edge AI for Industrial Automation

Analyzing the application of AI and edge computing in industrial settings, focusing on predictive maintenance, quality control, and supply chain optimization.

  10. Ethical and Regulatory Considerations in Intelligent Edge

Delving into the ethical implications and regulatory challenges of deploying AI at the edge, including data governance and compliance issues.

Dr. Riduan Abid
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • intelligent edge
  • edge security
  • edge applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

19 pages, 2768 KiB  
Article
Reinforcement-Learning-Based Edge Offloading Orchestration in Computing Continuum
by Ioana Ramona Martin, Gabriel Ioan Arcas and Tudor Cioara
Computers 2024, 13(11), 295; https://doi.org/10.3390/computers13110295 - 14 Nov 2024
Viewed by 442
Abstract
AI-driven applications and the large volumes of data generated by IoT devices connected to large-scale utility infrastructures pose significant operational challenges, including increased latency, communication overhead, and computational imbalances. Addressing these challenges is essential in order to shift workloads from the cloud to the edge and across the entire computing continuum. However, significant obstacles remain, particularly in the decision making needed to manage the trade-offs associated with workload offloading. In this paper, we propose a task-offloading solution that uses Reinforcement Learning (RL) to dynamically balance workloads and reduce overloads. We chose the Deep Q-Learning algorithm and adapted it to our workload-offloading problem. The reward system considers each node’s computational state and type, increasing the utilization of computational resources while minimizing latency and bandwidth usage. A knowledge graph model of the computing continuum infrastructure is used to address environment-modeling challenges and facilitate RL. The learning agent’s performance was evaluated under different hyperparameter configurations and varying episode lengths and knowledge graph sizes. The results show that a low, steady learning rate and a large buffer size are important for a good learning experience. The agent also exhibits strong convergence, identifying relevant workload task and node pairs after each learning episode, and good scalability, as the number of offloading pairs and actions grows with the size of the knowledge graph and the episode count.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
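
For readers new to this line of work, the sketch below illustrates the kind of Q-learning update that drives such an offloading agent. It is a toy, tabular approximation rather than the authors' Deep Q-Learning implementation; the node types, reward shape, and hyperparameter values are illustrative assumptions only.

```python
import random
from collections import defaultdict

NODES = ["cloud", "fog", "edge"]        # hypothetical offloading targets
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # low, steady learning rate, echoing the paper's finding

Q = defaultdict(float)                  # Q[(state, action)] -> estimated long-term value

def reward(load, latency):
    """Assumed reward shape: favor well-utilized, low-latency placements."""
    return (1.0 - abs(load - 0.7)) - latency   # target roughly 70% node utilization

def choose_node(state):
    if random.random() < EPSILON:                    # explore
        return random.choice(NODES)
    return max(NODES, key=lambda a: Q[(state, a)])   # exploit current estimates

def q_update(state, action, r, next_state):
    """One Bellman backup toward the observed reward plus discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in NODES)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```

A full DQN replaces the Q table with a neural network trained from a replay buffer, which is where the paper's observation about large buffer sizes comes in.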

15 pages, 680 KiB  
Article
Enhancing 5G Vehicular Edge Computing Efficiency with the Hungarian Algorithm for Optimal Task Offloading
by Mohamed Kamel Benbraika, Okba Kraa, Yassine Himeur, Khaled Telli, Shadi Atalla and Wathiq Mansoor
Computers 2024, 13(11), 279; https://doi.org/10.3390/computers13110279 - 28 Oct 2024
Viewed by 581
Abstract
The rapid advancements in vehicular technologies have enabled modern autonomous vehicles (AVs) to perform complex tasks, such as augmented reality, real-time video surveillance, and automated parking. However, these applications require significant computational resources, which AVs often lack. To address this limitation, Vehicular Edge Computing (VEC) has emerged as a promising solution, allowing AVs to offload computational tasks to nearby vehicles and edge servers. This offloading process, however, is complicated by factors such as high vehicle mobility and intermittent connectivity. In this paper, we propose the Hungarian Algorithm for Task Offloading (HATO), a novel approach designed to optimize the distribution of computational tasks in 5G-enabled VEC systems. HATO leverages 5G’s low-latency, high-bandwidth communication to efficiently allocate tasks across edge servers and nearby vehicles, utilizing the Hungarian algorithm for optimal task assignment. By designating an edge server to gather contextual information from surrounding nodes and compute the best offloading scheme, HATO reduces computational burdens on AVs and minimizes task failures. Through extensive simulations in both urban and highway scenarios, HATO achieved a significant performance improvement, reducing execution time by up to 75.4% compared to existing methods under full 5G coverage in high-density environments. Additionally, HATO demonstrated zero energy constraint violations and achieved the highest task processing reliability, with an offloading success rate of 87.75% in high-density urban areas. These results highlight the potential of HATO to enhance the efficiency and scalability of VEC systems for autonomous vehicles.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
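
The assignment step at the core of such an approach can be reproduced with an off-the-shelf solver. The sketch below uses SciPy's linear_sum_assignment (a solver from the Hungarian/Jonker-Volgenant family) on an illustrative latency matrix; the cost values are assumptions for demonstration, not figures from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: estimated latency (ms) of task i on offloading target j
# (edge servers and nearby vehicles). In HATO, the designated edge server
# would build this matrix from the contextual information it gathers.
cost = np.array([
    [12.0, 30.0, 25.0],
    [18.0,  9.0, 40.0],
    [22.0, 28.0, 11.0],
])

task_idx, target_idx = linear_sum_assignment(cost)  # minimizes total cost
for t, n in zip(task_idx, target_idx):
    print(f"task {t} -> target {n} ({cost[t, n]} ms)")
print("total latency:", cost[task_idx, target_idx].sum())
```

Because the assignment problem is solved exactly in polynomial time, re-planning at the edge server on every scheduling interval remains feasible.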

34 pages, 1042 KiB  
Article
Artificially Intelligent Vehicle-to-Grid Energy Management: A Semantic-Aware Framework Balancing Grid Demands and User Autonomy
by Mahmoud Elkhodr
Computers 2024, 13(10), 249; https://doi.org/10.3390/computers13100249 - 1 Oct 2024
Viewed by 787
Abstract
As the adoption of electric vehicles increases, the challenge of managing bidirectional energy flow while ensuring grid stability and respecting user preferences becomes increasingly critical. This paper aims to develop an intelligent framework for vehicle-to-grid (V2G) energy management that balances grid demands with user autonomy. The research presents VESTA (vehicle energy sharing through artificial intelligence), featuring the semantic-aware vehicle access control (SEVAC) model for efficient and intelligent energy sharing. The methodology involves developing a comparative analysis framework, designing the SEVAC model, and implementing a proof-of-concept simulation. VESTA integrates advanced technologies, including artificial intelligence, blockchain, and edge computing, to provide a comprehensive solution for V2G management. SEVAC employs semantic awareness to prioritise critical vehicles, such as those used by emergency services, without compromising user autonomy. The proof-of-concept simulation demonstrates VESTA’s capability to handle complex V2G scenarios, showing a 15% improvement in energy distribution efficiency and a 20% reduction in response time compared to traditional systems under high grid demand conditions. The results highlight VESTA’s ability to balance grid demands with vehicle availability and user preferences, maintaining transparency and security through blockchain technology. Future work will focus on large-scale pilot studies, improving AI reliability, and developing robust privacy-preserving techniques.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
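
As one illustration of what semantic-aware prioritisation might look like in code, the sketch below meets a grid demand from the least critical, fullest vehicles first, never crossing a user-defined energy floor. The role ordering, greedy strategy, and energy values are assumptions for illustration; this is not the SEVAC model itself.

```python
from collections import defaultdict
from dataclasses import dataclass

ROLE_PRIORITY = {"emergency": 0, "public_transport": 1, "private": 2}  # lower = more critical

@dataclass
class Vehicle:
    vid: str
    role: str           # key into ROLE_PRIORITY
    energy_kwh: float   # current battery energy
    floor_kwh: float    # user-set minimum the grid may never draw below

def select_for_discharge(fleet, demand_kwh, step_kwh=1.0):
    """Greedy sketch: draw from the least critical, fullest vehicles first,
    respecting every vehicle's user-defined floor (user autonomy)."""
    plan = defaultdict(float)
    remaining = demand_kwh
    for v in sorted(fleet, key=lambda v: (-ROLE_PRIORITY[v.role], -v.energy_kwh)):
        while remaining > 0 and v.energy_kwh - plan[v.vid] - step_kwh >= v.floor_kwh:
            plan[v.vid] += step_kwh
            remaining -= step_kwh
    return dict(plan), remaining  # remaining > 0 means unmet grid demand

fleet = [Vehicle("ambulance-1", "emergency", 60.0, 50.0),
         Vehicle("car-7", "private", 55.0, 20.0),
         Vehicle("bus-3", "public_transport", 80.0, 40.0)]
plan, unmet = select_for_discharge(fleet, demand_kwh=30.0)
print(plan, "unmet:", unmet)  # the emergency vehicle is left untouched
```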

18 pages, 3199 KiB  
Article
Optimizing Convolutional Neural Networks for Image Classification on Resource-Constrained Microcontroller Units
by Susanne Brockmann and Tim Schlippe
Computers 2024, 13(7), 173; https://doi.org/10.3390/computers13070173 - 15 Jul 2024
Cited by 1 | Viewed by 1364
Abstract
Running machine learning algorithms for image classification locally on small, cheap, and low-power microcontroller units (MCUs) has advantages in terms of bandwidth, inference time, energy, reliability, and privacy for different applications. Therefore, TinyML focuses on deploying neural networks on MCUs with random access memory sizes between 2 KB and 512 KB and read-only memory storage capacities between 32 KB and 2 MB. Models designed for high-end devices are usually ported to MCUs using model scaling factors provided by the model architecture’s designers. However, our analysis shows that this naive approach of substantially scaling down convolutional neural networks (CNNs) for image classification using such default scaling factors results in suboptimal performance. Consequently, in this paper we present a systematic strategy for efficiently scaling down CNN model architectures to run on MCUs. Moreover, we present our CNN Analyzer, a dashboard-based tool for determining optimal CNN model architecture scaling factors for the downscaling strategy by gaining layer-wise insights into the model architecture scaling factors that drive model size, peak memory, and inference time. Using our strategy, we were able to introduce additional new model architecture scaling factors for MobileNet v1, MobileNet v2, MobileNet v3, and ShuffleNet v2 and to optimize these model architectures. Our best model variation outperforms the MobileNet v1 version provided in the MLPerf Tiny Benchmark on the Visual Wake Words image classification task, reducing the model size by 20.5% while increasing the accuracy by 4.0%.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
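
The "model scaling factor" at the heart of this approach is easy to see in code. The sketch below varies MobileNet v1's width multiplier (alpha) in Keras, which uniformly thins every layer's channels; the input size and class count echo a Visual Wake Words style setup, but the alpha values shown are the stock defaults, not the additional factors the paper derives with its CNN Analyzer.

```python
import tensorflow as tf

for alpha in (1.0, 0.5, 0.25):
    model = tf.keras.applications.MobileNet(
        input_shape=(96, 96, 3),  # small input typical of MCU vision tasks
        alpha=alpha,              # width multiplier: fraction of channels kept
        weights=None,             # train from scratch; no pretrained weights at this size
        classes=2,                # e.g., person / no person
    )
    print(f"alpha={alpha}: {model.count_params():,} parameters")
```

Printing the parameter counts makes the effect concrete: because most convolutional weights scale with roughly the square of alpha, alpha=0.25 keeps only a small fraction of the full model's parameters.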

23 pages, 519 KiB  
Article
Exploiting Anytime Algorithms for Collaborative Service Execution in Edge Computing
by Luís Nogueira, Jorge Coelho and David Pereira
Computers 2024, 13(6), 130; https://doi.org/10.3390/computers13060130 - 23 May 2024
Viewed by 698
Abstract
The diversity and scarcity of resources across devices in heterogeneous computing environments can impact their ability to meet users’ quality-of-service (QoS) requirements, especially in open real-time environments where computational loads are unpredictable. Despite this uncertainty, timely responses to events remain essential to ensure desired performance levels. To address this challenge, this paper introduces collaborative service execution, enabling resource-constrained IoT devices to collaboratively execute services with more powerful neighbors at the edge, thus meeting non-functional requirements that might be unattainable through individual execution. Nodes dynamically form clusters, allocating resources to each service and establishing initial configurations that maximize QoS satisfaction while minimizing global QoS impact. However, the complexity of open real-time environments may hinder the computation of optimal local and global resource allocations within reasonable timeframes. Thus, we reformulate the QoS optimization problem as a heuristic-based anytime optimization problem, capable of interrupting and quickly adapting to environmental changes. Extensive simulations demonstrate that our anytime algorithms rapidly yield satisfactory initial service solutions and effectively optimize the solution quality over iterations, with negligible overhead compared to the benefits gained.
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
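
The anytime pattern that such heuristics build on is simple to state: return a feasible solution quickly, then improve it monotonically until interrupted. Below is a generic, self-contained sketch of that loop; the local-search move and quality function are illustrative placeholders, not the paper's QoS optimization heuristics.

```python
import random
import time

def anytime_optimize(initial, quality, neighbor, budget_s):
    """Generic anytime loop: always holds a valid best-so-far solution,
    so it can be interrupted (here, by a time budget) at any point."""
    best, best_q = initial, quality(initial)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = neighbor(best)
        q = quality(candidate)
        if q > best_q:              # keep only monotone improvements
            best, best_q = candidate, q
    return best, best_q

# Toy usage: nudge an 8-dimensional allocation toward a target utilization.
start = [random.random() for _ in range(8)]
score = lambda xs: -sum((x - 0.6) ** 2 for x in xs)  # higher is better
tweak = lambda xs: [min(1.0, max(0.0, x + random.uniform(-0.05, 0.05))) for x in xs]
best, q = anytime_optimize(start, score, tweak, budget_s=0.1)
print(f"quality improved to {q:.4f}")
```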
