AI for Edge Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 March 2025 | Viewed by 12229

Special Issue Editors


Dr. Jimmy Ming-Tai Wu
Guest Editor
Department of Information Management, National Kaohsiung University of Science and Technology, Kaohsiung 82445, Taiwan
Interests: data mining; big data; artificial intelligence

Dr. Matin Pirouz
Guest Editor
College of Science and Mathematics, California State University, Fresno, CA 93740, USA
Interests: big data; data analytics; complex network analysis

Dr. Shahab Tayeb
Guest Editor
Department of Electrical and Computer Engineering, California State University, Fresno, CA 93740, USA
Interests: network security; cybersecurity; security and privacy; Internet of Things; internet of vehicles

Special Issue Information

Dear Colleagues,

The focus of this Special Issue is AI for edge computing: both "letting AI enhance the performance of edge computing" and "letting AI run on the edge" are promising research topics, especially in modern information systems such as traffic-light control and other smart-city applications. Among the open issues, downsizing machine learning, data mining, and deep learning models so that they can run on the edge, finding suitable sets of hyperparameters, and even searching for good neural architectures for deep neural networks have attracted the attention of researchers from different disciplines. Distinct from using AI technologies to realize intelligent systems and applications, these research directions can be regarded as an essential part of future AI research, especially for ICT and smart cities. We therefore foresee that the integration of artificial intelligence methods with edge computing will become a popular research topic. For this reason, this Special Issue focuses on intelligent algorithms for edge computing and their applications, such as downsizing and accelerating AI solutions on edge devices and servers.

It is expected that this Special Issue will attract many submissions from the research communities working on AI, the Internet of Things (IoT), wireless sensor networks (WSNs), smart cities, and related areas. We can then select high-quality papers from among them to further increase the reputation and impact of Electronics.

Dr. Jimmy Ming-Tai Wu
Dr. Matin Pirouz
Dr. Shahab Tayeb
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • machine learning
  • Internet of Things (IoT)
  • artificial intelligence
  • ICT (information and communication technology)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

Jump to: Review

22 pages, 3711 KiB  
Article
Offload Shaping for Wearable Cognitive Assistance
by Roger Iyengar, Qifei Dong, Chanh Nguyen, Padmanabhan Pillai and Mahadev Satyanarayanan
Electronics 2024, 13(20), 4083; https://doi.org/10.3390/electronics13204083 - 17 Oct 2024
Viewed by 583
Abstract
Edge computing has much lower elasticity than cloud computing because cloudlets have much smaller physical and electrical footprints than a data center. This hurts the scalability of applications that involve low-latency edge offload. We show how this problem can be addressed by leveraging the growing sophistication and compute capability of recent wearable devices. We investigate four Wearable Cognitive Assistance applications on three wearable devices, and show that the technique of offload shaping can significantly reduce network utilization and cloudlet load without compromising accuracy or performance. Our investigation considers the offload shaping strategies of mapping processes to different computing tiers, gating, and decluttering. We find that all three strategies offer a significant bandwidth savings compared to transmitting full camera images to a cloudlet. Two out of the three devices we test are capable of running all offload shaping strategies within a reasonable latency bound. Full article
(This article belongs to the Special Issue AI for Edge Computing)
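
The gating strategy named in the abstract can be pictured as a lightweight on-device filter that decides whether a frame is worth offloading at all, so that only promising frames ever reach the cloudlet. The Python sketch below illustrates that control flow only; all names (CheapGate, Cloudlet, GATE_THRESHOLD) are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of a "gating" offload-shaping pipeline. The gate is a
# stand-in for a tiny on-device model; the cloudlet stands in for the edge
# server running the full cognitive-assistance model.

import random
from dataclasses import dataclass

GATE_THRESHOLD = 0.6  # assumed confidence required before a frame is offloaded


@dataclass
class Frame:
    frame_id: int
    pixels: bytes  # stand-in for camera image data


class CheapGate:
    """Lightweight on-device filter that scores how likely a frame is to
    contain content worth the cloudlet's attention."""

    def score(self, frame: Frame) -> float:
        # Placeholder for a small classifier/detector running on the wearable.
        return random.random()


class Cloudlet:
    """Stand-in for the edge server running the heavyweight model."""

    def process(self, frame: Frame) -> str:
        return f"cloudlet result for frame {frame.frame_id}"


def run_pipeline(frames, gate, cloudlet):
    offloaded = 0
    for frame in frames:
        if gate.score(frame) >= GATE_THRESHOLD:
            cloudlet.process(frame)  # offload only frames that pass the gate
            offloaded += 1
        # frames below the threshold are dropped on-device, saving bandwidth
    return offloaded


if __name__ == "__main__":
    frames = [Frame(i, b"") for i in range(100)]
    sent = run_pipeline(frames, CheapGate(), Cloudlet())
    print(f"offloaded {sent}/100 frames")
```

Roughly speaking, decluttering extends the same idea by transmitting only the relevant portions of each passing frame rather than the full image, shrinking payloads further.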

34 pages, 40732 KiB  
Article
AWDP-FL: An Adaptive Differential Privacy Federated Learning Framework
by Zhiyan Chen, Hong Zheng and Gang Liu
Electronics 2024, 13(19), 3959; https://doi.org/10.3390/electronics13193959 - 8 Oct 2024
Viewed by 778
Abstract
Data security and user privacy concerns are receiving increasing attention. Federated learning models based on differential privacy offer a distributed machine learning framework that protects data privacy. However, the noise introduced by the differential privacy mechanism may affect the model’s usability, especially when reasonable gradient clipping is absent. Fluctuations in the gradients can lead to issues like gradient explosion, compromising training stability and potentially leaking privacy. Therefore, gradient clipping has become a crucial method for protecting both model performance and data privacy. To balance privacy protection and model performance, we propose the Adaptive Weight-Based Differential Privacy Federated Learning (AWDP-FL) framework, which processes model gradient parameters at the neural network layer level. First, by designing and recording the change trends of two-layer historical gradient sequences, we analyze and predict gradient variations in the current iteration and calculate the corresponding weight values. Then, based on these weights, we perform adaptive gradient clipping for each data point in each training batch, which is followed by gradient momentum updates based on the third moment. Before uploading the parameters, Gaussian noise is added to protect privacy while maintaining model accuracy. Theoretical analysis and experimental results validate the effectiveness of this framework under strong privacy constraints. Full article
(This article belongs to the Special Issue AI for Edge Computing)
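
To make the per-sample clipping and Gaussian-noise step concrete, the sketch below shows a generic differentially private gradient update of the kind the abstract describes: each example's gradient is clipped to a bound, the batch is averaged, and noise is added before anything leaves the client. The adaptive, layer-wise bound derived from the two-layer gradient history is AWDP-FL's own contribution and is only gestured at here by a hypothetical adaptive_clip_bound placeholder; this is not the paper's algorithm.

```python
# Generic sketch of per-example gradient clipping plus Gaussian noise, in the
# spirit of the framework described above. Names and constants are assumptions.

import numpy as np


def adaptive_clip_bound(grad_history, base_bound=1.0):
    """Hypothetical stand-in: scale the clip bound by the recent gradient trend."""
    if len(grad_history) < 2:
        return base_bound
    trend = np.linalg.norm(grad_history[-1]) / (np.linalg.norm(grad_history[-2]) + 1e-12)
    return base_bound * min(max(trend, 0.5), 2.0)


def dp_batch_gradient(per_example_grads, clip_bound, noise_multiplier, rng):
    """Clip each example's gradient, average over the batch, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_bound / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_bound / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = [rng.normal(size=10) for _ in range(3)]
    grads = [rng.normal(size=10) for _ in range(32)]  # one gradient per example
    bound = adaptive_clip_bound(history)
    noisy = dp_batch_gradient(grads, bound, noise_multiplier=1.1, rng=rng)
    print(noisy[:3])
```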

20 pages, 2703 KiB  
Article
One-Shot Federated Learning with Label Differential Privacy
by Zikang Chen, Changli Zhou and Zhenyu Jiang
Electronics 2024, 13(10), 1815; https://doi.org/10.3390/electronics13101815 - 8 May 2024
Viewed by 1046
Abstract
Federated learning (FL) has emerged as an extremely effective strategy for dismantling data silos and has attracted significant interest from both industry and academia in recent years. However, existing iterative FL approaches often require a large number of communication rounds and struggle to perform well on unbalanced datasets. Furthermore, the increased complexity of networks makes the application of traditional differential privacy to protect client privacy expensive. In this context, the authors introduce FedGM: a method designed to reduce communication overhead and achieve outstanding results in non-IID scenarios. FedGM is capable of achieving considerable accuracy, even with a small privacy budget. Specifically, the authors devise a method to extract knowledge from each client’s data by creating a scaled-down but highly effective synthesized dataset that can perform similarly to the original data. Additionally, the authors propose an innovative approach to applying label differential privacy to protect the synthesized dataset. The authors demonstrate the superiority of the approach over traditional methods, requiring only one communication round and using four classification datasets for evaluation. Furthermore, when comparing the model performance for clients using their method against traditional solutions, the authors find that the approach achieves significant accuracy and better privacy. Full article
(This article belongs to the Special Issue AI for Edge Computing)
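
The label differential privacy mentioned in the abstract is commonly realized with randomized response: each label is kept with high probability and otherwise replaced by a uniformly random different label. The sketch below shows that standard mechanism only; how FedGM constructs the synthesized dataset and integrates label DP is specific to the paper and not reproduced here, and the epsilon value is an illustrative assumption.

```python
# Standard K-ary randomized-response mechanism for label differential privacy,
# shown only to illustrate the kind of label protection the abstract refers to.

import math
import random


def randomized_response(label: int, num_classes: int, epsilon: float) -> int:
    """Return the true label with probability e^eps / (e^eps + K - 1),
    otherwise a uniformly random different label (K = num_classes)."""
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < keep_prob:
        return label
    other = random.randrange(num_classes - 1)
    return other if other < label else other + 1  # skip the true label


if __name__ == "__main__":
    random.seed(0)
    true_labels = [random.randrange(10) for _ in range(1000)]
    private_labels = [randomized_response(y, num_classes=10, epsilon=2.0)
                      for y in true_labels]
    kept = sum(a == b for a, b in zip(true_labels, private_labels))
    print(f"{kept}/1000 labels unchanged")
```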

16 pages, 9544 KiB  
Article
Personalized Federated Learning Incorporating Adaptive Model Pruning at the Edge
by Yueying Zhou, Gaoxiang Duan, Tianchen Qiu, Lin Zhang, Li Tian, Xiaoying Zheng and Yongxin Zhu
Electronics 2024, 13(9), 1738; https://doi.org/10.3390/electronics13091738 - 1 May 2024
Viewed by 1548
Abstract
Edge devices employing federated learning encounter several obstacles, including (1) the non-independent and identically distributed (Non-IID) nature of client data, (2) limitations due to communication bottlenecks, and (3) constraints on computational resources. To surmount the Non-IID data challenge, personalized federated learning has been introduced, which involves training tailored networks at the edge; nevertheless, these methods often exhibit inconsistency in performance. In response to these concerns, a novel framework for personalized federated learning that incorporates adaptive pruning of edge-side models is proposed in this paper. This approach, through a two-stage pruning process, creates customized models while ensuring strong generalization capabilities. Concurrently, by utilizing sparse models, it significantly condenses the model parameters, markedly diminishing both the computational burden and communication overhead on edge nodes. This method achieves a remarkable compression ratio of 3.7% on the Non-IID dataset FEMNIST, with the training accuracy remaining nearly unaffected. Furthermore, the total training duration is reduced by 46.4% when compared with the standard baseline method. Full article
(This article belongs to the Special Issue AI for Edge Computing)
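
Magnitude-based pruning with a binary mask is one common way to obtain the sparse client models the abstract alludes to; the paper's two-stage, personalized pruning schedule is its own contribution and is not reproduced here. The function below is a generic sketch with assumed names (magnitude_prune, sparsity).

```python
# Generic magnitude-pruning sketch: zero out the smallest-magnitude weights and
# return a mask so later updates can keep the same sparsity pattern.

import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to remove
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(128, 64))
    pruned, mask = magnitude_prune(w, sparsity=0.9)
    print(f"kept {mask.mean():.1%} of weights")  # roughly 10% remain
    # Communication saving: only the surviving weights (plus the mask) need to
    # be exchanged between the edge node and the server.
```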

18 pages, 686 KiB  
Article
Computational Offloading for MEC Networks with Energy Harvesting: A Hierarchical Multi-Agent Reinforcement Learning Approach
by Yu Sun and Qijie He
Electronics 2023, 12(6), 1304; https://doi.org/10.3390/electronics12061304 - 9 Mar 2023
Cited by 9 | Viewed by 2561
Abstract
Multi-access edge computing (MEC) is a novel computing paradigm that leverages nearby MEC servers to augment the computational capabilities of users with limited computational resources. In this paper, we investigate the computational offloading problem in multi-user multi-server MEC systems with energy harvesting, aiming to minimize both system latency and energy consumption by optimizing task offload location selection and task offload ratio. We propose a hierarchical computational offloading strategy based on multi-agent reinforcement learning (MARL). The proposed strategy decomposes the computational offloading problem into two sub-problems: a high-level task offloading location selection problem and a low-level task offloading ratio problem. The complexity of the problem is reduced by decoupling. To address these sub-problems, we propose a computational offloading framework based on multi-agent proximal policy optimization (MAPPO), where each agent generates actions based on its observed private state to avoid the problem of action space explosion due to the increasing number of user devices. Simulation results show that the proposed HDMAPPO strategy outperforms other baseline algorithms in terms of average task latency, energy consumption, and discard rate. Full article
(This article belongs to the Special Issue AI for Edge Computing)
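
The two-level decomposition described in the abstract (a high-level choice of where to offload and a low-level choice of how much to offload) can be pictured as two nested policies per device. The sketch below uses random placeholder policies with hypothetical names purely to show the control flow; it is not the MAPPO-based implementation from the paper.

```python
# Structural sketch of a hierarchical offloading decision: a high-level policy
# picks the offload location (local execution or one of the MEC servers) and a
# low-level policy picks the offload ratio for that location.

import random
from dataclasses import dataclass


@dataclass
class Task:
    size_bits: float       # amount of data to process
    cycles_per_bit: float  # computational density


def high_level_policy(observation, num_servers: int) -> int:
    """Pick an offload location: 0 = execute locally, 1..num_servers = MEC server."""
    return random.randrange(num_servers + 1)


def low_level_policy(observation, location: int) -> float:
    """Pick the fraction of the task to offload to the chosen location."""
    return 0.0 if location == 0 else random.random()


def decide(task: Task, observation, num_servers: int = 3):
    location = high_level_policy(observation, num_servers)
    ratio = low_level_policy(observation, location)
    offloaded = task.size_bits * ratio
    local = task.size_bits - offloaded
    return location, ratio, offloaded, local


if __name__ == "__main__":
    random.seed(0)
    task = Task(size_bits=2e6, cycles_per_bit=500)
    print(decide(task, observation=None))
```

In a trained system, both placeholder policies would be replaced by learned agents conditioned on each device's private observation, which is what keeps the joint action space from exploding as the number of user devices grows.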

Review

Jump to: Research

44 pages, 8494 KiB  
Review
Survey of Deep Learning Accelerators for Edge and Emerging Computing
by Shahanur Alam, Chris Yakopcic, Qing Wu, Mark Barnell, Simon Khan and Tarek M. Taha
Electronics 2024, 13(15), 2988; https://doi.org/10.3390/electronics13152988 - 29 Jul 2024
Cited by 1 | Viewed by 4752
Abstract
The unprecedented progress in artificial intelligence (AI), particularly in deep learning algorithms running on ubiquitous internet-connected smart devices, has created a high demand for AI computing on edge devices. This review studied commercially available edge processors as well as processors that are still in the industrial research stage. We categorized state-of-the-art edge processors based on the underlying architecture, such as dataflow, neuromorphic, and processing-in-memory (PIM) architectures. The processors are analyzed based on their performance, chip area, energy efficiency, and application domains. The supported programming frameworks, model compression, data precision, and the CMOS fabrication process technology are discussed. Currently, most commercial edge processors utilize dataflow architectures. However, emerging non-von Neumann computing architectures have attracted the attention of the industry in recent years. Neuromorphic processors are highly efficient for performing computation with fewer synaptic operations, and several neuromorphic processors offer online training for secure and personalized AI applications. This review found that PIM processors show significant energy efficiency and consume less power compared to dataflow and neuromorphic processors. A future direction for the industry could be to implement state-of-the-art deep learning algorithms in emerging non-von Neumann computing paradigms for low-power computing on edge devices. Full article
(This article belongs to the Special Issue AI for Edge Computing)
