New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 30576

Special Issue Editors


Guest Editor
Department of Computer Science, Catholic University of Daegu, Gyeongsan 38430, Republic of Korea
Interests: artificial intelligence; machine learning; big data computing; cloud computing; edge/fog computing; distributed and parallel computing
Guest Editor
Department of Computer Science Engineering, Jeonju University, Jeonju 55069, Republic of Korea
Interests: AI; big data analysis; cloud

Special Issue Information

Dear Colleagues,

Edge computing and Fog computing are inextricably linked and should be considered together in cloud computing environments. At the same time, Artificial Intelligence (AI) and Machine Learning (ML) applications have been widely deployed across a variety of business sectors and industries. As a result of recent advances in the large-scale deployment of Edge/Fog applications and the development of AI and ML, the orchestration of Edge/Fog computing can now be driven by AI and ML techniques.

Motivated by the above, we are launching a Special Issue on “New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning” and cordially invite your contributions.

Specific topics of interest include, but are not limited to:

  • Edge/Fog computing architecture based on AI and ML;
  • AI and ML-based Edge/Fog applications;
  • Dynamic resource and service allocation on Edge/Fog computing;
  • Device and service management for AI and ML-based Edge/Fog infrastructure;
  • Data management techniques for AI and ML-based Edge/Fog infrastructure;
  • AI and ML-based algorithms and technologies for task offloading on Edge/Fog computing;
  • State-aware solutions for Edge/Fog computing;
  • Container orchestration techniques such as real-time monitoring, auto-scaling, and load balancing of services based on AI and ML;
  • Experimental testbed for AI and ML-based Edge/Fog applications;
  • Performance analysis and evaluation on Edge/Fog computing;
  • SDN/NFV techniques for Edge/Fog computing;
  • Security and privacy for Edge/Fog computing.

We welcome different types of articles, including articles that propose new algorithms for a general class of problems, present experimental evaluations of existing algorithms, or explain how challenges related to Edge/Fog computing based on AI and ML can be addressed.

Prof. Dr. Joon-Min Gil
Dr. Ji Su Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • fog computing
  • artificial intelligence
  • machine learning
  • cloud computing
  • container services

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Editorial

Jump to: Research, Review

6 pages, 152 KiB  
Editorial
New Technologies and Applications of Edge/Fog Computing Based on Artificial Intelligence and Machine Learning
by Ji Su Park
Appl. Sci. 2024, 14(13), 5583; https://doi.org/10.3390/app14135583 - 27 Jun 2024
Viewed by 1596
Abstract
Multi-access edge computing (MEC) is an emerging computing architecture that enhances and extends traditional mobile cloud computing [...]

Research

Jump to: Editorial, Review

18 pages, 2986 KiB  
Article
Enhancing Sequence Movie Recommendation System Using Deep Learning and KMeans
by Sophort Siet, Sony Peng, Sadriddinov Ilkhomjon, Misun Kang and Doo-Soon Park
Appl. Sci. 2024, 14(6), 2505; https://doi.org/10.3390/app14062505 - 15 Mar 2024
Cited by 4 | Viewed by 3247
Abstract
A flood of information has occurred, making it challenging for people to find and filter their favorite items. Recommendation systems (RSs) have emerged as a solution to this problem; however, traditional recommendation systems, including collaborative filtering and content-based filtering, face significant challenges such as data scalability, data scarcity, and the cold-start problem, all of which require advanced solutions. Therefore, we propose a ranking and enhancing sequence movie recommendation system that utilizes a combined deep learning model to resolve these issues. To mitigate these challenges, we design an RS model that utilizes user information (age, gender, occupation) to analyze new users and match them with others who have similar preferences. Initially, we construct sequences of user behavior to effectively predict the potential next target movie of users. We then incorporate user information and movie sequence embeddings as input features to reduce the dimensionality, before feeding them into a transformer architecture and multilayer perceptron (MLP). Our model integrates a transformer layer with positional encoding for user behavior sequences and multi-head attention mechanisms to enhance prediction accuracy. Furthermore, the system applies KMeans clustering to movie genre embeddings, grouping similar movies and integrating this clustering information with predicted ratings to ensure diversity in the personalized recommendations for target users. Evaluating our model on two MovieLens datasets (100 K and 1 M) demonstrated significant improvements, achieving RMSE, MAE, precision, recall, and F1 scores of 1.0756, 0.8741, 0.5516, 0.3260, and 0.4098 for the 100 K dataset, and 0.9927, 0.8007, 0.5838, 0.4723, and 0.5222 for the 1 M dataset, respectively. This approach not only effectively mitigates cold-start and scalability issues but also surpasses baseline techniques in Top-N item recommendations, highlighting its efficacy in the contemporary environment of abundant data.
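The diversity step described in this abstract can be illustrated with a small sketch: cluster genre embeddings with k-means, then keep the highest-predicted title per cluster. The embeddings, ratings, and titles below are invented for illustration and are not from the paper; the k-means is a tiny pure-Python stand-in for a library implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over embedding vectors (lists of floats)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        labels = [
            min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            for p in points
        ]
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [p for p, lbl in zip(points, labels) if lbl == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# hypothetical 2-D genre embeddings and predicted ratings
movies = {"A": [0.9, 0.1], "B": [0.8, 0.2], "C": [0.1, 0.9], "D": [0.2, 0.8]}
ratings = {"A": 4.5, "B": 4.4, "C": 3.9, "D": 4.2}
labels = kmeans(list(movies.values()), k=2)

# keep the highest-predicted title from each cluster for a diverse Top-N list
best = {}
for title, label in zip(movies, labels):
    if label not in best or ratings[title] > ratings[best[label]]:
        best[label] = title
print(sorted(best.values()))  # → ['A', 'D']
```

Without the clustering step, a rating-only Top-2 would return A and B, which sit in the same genre cluster; the cluster constraint trades a little predicted rating for diversity.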

18 pages, 758 KiB  
Article
Traffic-Aware Optimization of Task Offloading and Content Caching in the Internet of Vehicles
by Pengwei Wang, Yaping Wang, Junye Qiao and Zekun Hu
Appl. Sci. 2023, 13(24), 13069; https://doi.org/10.3390/app132413069 - 7 Dec 2023
Cited by 2 | Viewed by 1155
Abstract
Emerging in-vehicle applications seek to improve travel experiences, but the rising number of vehicles results in more computational tasks and redundant content requests, leading to resource waste. Efficient computation offloading and content caching strategies are crucial for the Internet of Vehicles (IoV) to optimize performance in latency and energy consumption. This paper proposes a joint task offloading and content caching optimization method based on forecasting traffic streams, called TOCC. First, temporal and spatial correlations are extracted from the preprocessed dataset using the Forecasting Open Source Tool (FOST) and integrated to predict the traffic stream, yielding the number of tasks in the region at the next moment. To obtain a suitable joint optimization strategy for task offloading and content caching, the multi-objective problem of minimizing delay and energy consumption is decomposed into multiple single-objective problems using an improved Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) via the Tchebycheff weight aggregation method, and a set of Pareto-optimal solutions is obtained. Finally, the experimental results verify the effectiveness of the TOCC strategy: compared with other methods, it reduces latency by up to 29% and energy consumption by up to 83%.
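The decomposition step used here can be sketched concretely: the weighted Tchebycheff scalarization turns the two objectives (delay, energy) into one single-objective subproblem per weight vector. The candidate strategies, objective values, weights, and normalization below are illustrative assumptions, not the paper's data.

```python
def tchebycheff(objs, weights, ideal, nadir):
    """Weighted Tchebycheff scalarization over normalized objectives:
    max_i w_i * (f_i - z*_i) / (nadir_i - z*_i)."""
    return max(
        w * (f - z) / (n - z)
        for f, w, z, n in zip(objs, weights, ideal, nadir)
    )

# hypothetical offloading/caching strategies: (delay in ms, energy in J)
candidates = {"cache-all": (5.0, 4.0), "hybrid": (15.0, 2.0), "offload-all": (30.0, 1.0)}
ideal = (5.0, 1.0)   # best value seen per objective (z*)
nadir = (30.0, 4.0)  # worst value seen per objective

# each weight vector defines one single-objective subproblem;
# minimizing each subproblem picks a different point on the Pareto front
winners = [
    min(candidates, key=lambda s: tchebycheff(candidates[s], w, ideal, nadir))
    for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
]
print(winners)  # → ['cache-all', 'hybrid', 'offload-all']
```

Sweeping the weight vectors is what produces the set of Pareto-optimal trade-offs the abstract mentions; MOEA/D additionally evolves solutions per subproblem, which this sketch omits.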

13 pages, 3212 KiB  
Article
Enhancing the Performance of XR Environments Using Fog and Cloud Computing
by Eun-Seok Lee and Byeong-Seok Shin
Appl. Sci. 2023, 13(22), 12477; https://doi.org/10.3390/app132212477 - 18 Nov 2023
Cited by 1 | Viewed by 1843
Abstract
The extended reality (XR) environment demands high-performance computing and data processing capabilities, while requiring continuous technological development to enable real-time integration between the physical and virtual worlds for user interactions. XR systems have traditionally been deployed in local environments, primarily because of the need for real-time collection of user behavioral patterns. However, these XR systems face limitations in local deployments, such as latency issues arising from factors such as network bandwidth and GPU performance. Consequently, several studies have examined cloud-based XR solutions. While offering centralized management advantages, these solutions present bandwidth, data transmission, and real-time processing challenges. Addressing these challenges necessitates reconfiguring the XR environment and adopting new approaches and strategies focused on network bandwidth and real-time processing optimization. This paper examines the computational complexities, latency issues, and real-time user interaction challenges of XR. A system architecture that leverages edge and fog computing is proposed to overcome these challenges and enhance the XR experience by efficiently processing input data, rendering output content, and minimizing latency for real-time user interactions.

13 pages, 3413 KiB  
Article
A Proposed Settlement and Distribution Structure for Music Royalties in Korea and Their Artificial Intelligence-Based Applications
by Youngmin Kim, Donghwan Kim, Sunho Park, Yonghwa Kim, Jisoo Hong, Sunghee Hong, Jinsoo Jeong, Byounghyo Lee and Hyeonchan Oh
Appl. Sci. 2023, 13(19), 11109; https://doi.org/10.3390/app131911109 - 9 Oct 2023
Cited by 1 | Viewed by 2094
Abstract
Digital music has become one of the most important commodities in the Korean music market. As the market has been transformed into a digital one through downloading and streaming, the distribution of music royalties via online service providers (OSPs) has become a highly important issue for music rights holders. Currently, one of the most pressing issues in music royalty distribution in Korea is the unfair distribution of royalties caused by the indiscriminate repeat streaming of digital music. To prevent this, music consumption log data from several OSPs were collected on a daily basis; however, this approach limits the identification of detailed information on how music is actually used. This paper analyzes the structural problems and limitations related to the settlement of music royalties and proposes a structure that enables transparent settlement and distribution between users and rights holders as an institutional measure. We also propose various artificial intelligence (AI)-based applications using music consumption log data. We hope the proposed system will be used for public purposes.

24 pages, 2445 KiB  
Article
Performance Analysis of a Keyword-Based Trust Management System for Fog Computing
by Ahmed M. Alwakeel
Appl. Sci. 2023, 13(15), 8714; https://doi.org/10.3390/app13158714 - 28 Jul 2023
Cited by 1 | Viewed by 1167
Abstract
This study presents a novel keyword-based trust management system for fog computing networks aimed at improving network efficiency and ensuring data integrity. The proposed system establishes and maintains trust between fog nodes using trust keywords recorded in a table on each node. Simulation research is conducted using iFogSim to evaluate the efficacy of the proposed scheme in terms of latency and packet delivery ratio. The study focuses on addressing trust and security challenges in fog computing environments. By leveraging trust keywords, the proposed system enables accurate evaluation of trustworthiness and identification of potentially malicious nodes, and it enhances the security of fog computing by mitigating risks associated with unauthorized access and malicious behavior. While the study highlights the significance of trust keywords in improving network performance and trustworthiness, it does not provide detailed explanations of the trust mechanism itself, and the role of fog computing in the proposed approach is not adequately emphasized. Future research directions include refining and optimizing the proposed framework to consider resource constraints, dynamic network conditions, and scalability; the integration of advanced security mechanisms such as encryption and authentication protocols will also be explored to strengthen the trust foundation in fog computing environments. In conclusion, the proposed keyword-based trust management system offers potential benefits for improving network performance and ensuring data integrity in fog computing, although further clarification of the trust mechanism and a stronger emphasis on the role of fog computing would enhance understanding of the proposed approach.
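Since the abstract does not detail the trust mechanism, the following is only a rough sketch of what a per-node keyword trust table might look like: each node records agreed trust keywords per neighbour and forwards only to neighbours whose keywords cover a required set. All node names and keywords are hypothetical.

```python
# each fog node keeps a trust table: neighbour id -> set of recorded trust keywords
TRUST_TABLE = {
    "fog-1": {"auth-ok", "integrity-ok", "latency-ok"},
    "fog-2": {"auth-ok"},          # missing integrity evidence
    "fog-3": set(),                # no trust established yet
}
REQUIRED = {"auth-ok", "integrity-ok"}  # policy for this traffic class (assumed)

def trusted_peers(table, required):
    """Return neighbours whose recorded keywords cover the required set."""
    return sorted(node for node, kws in table.items() if required <= kws)

print(trusted_peers(TRUST_TABLE, REQUIRED))  # → ['fog-1']
```

A node failing the check ("fog-2", "fog-3") would be excluded from forwarding until it accumulates the missing keywords, which is one plausible way such a table could flag potentially malicious or unverified peers.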

18 pages, 1206 KiB  
Article
Use of Logarithmic Rates in Multi-Armed Bandit-Based Transmission Rate Control Embracing Frame Aggregations in Wireless Networks
by Soohyun Cho
Appl. Sci. 2023, 13(14), 8485; https://doi.org/10.3390/app13148485 - 22 Jul 2023
Cited by 2 | Viewed by 1138
Abstract
Herein, we propose the use of the logarithmic values of data transmission rates for multi-armed bandit (MAB) algorithms that adjust the modulation and coding scheme (MCS) levels of data packets in carrier-sense multiple access with collision avoidance (CSMA/CA) wireless networks. We argue that the utilities of the data transmission rates of the MCS levels may not be proportional to their nominal values, and we suggest using their logarithmic values instead of the raw transmission rates when MAB algorithms compute the expected throughputs of the MCS levels. To demonstrate the effectiveness of the proposal, we introduce two MAB algorithms that adopt the logarithms of the transmission rates. The proposed MAB algorithms also support the frame aggregations available in wireless network standards that aim for high throughput. In addition, the proposed MAB algorithms use a sliding window over time to adapt to rapidly changing wireless channel environments. To evaluate the performance of the proposed MAB algorithms, we used the event-driven network simulator ns-3 in various scenarios of stationary and non-stationary wireless network environments, including multiple spatial streams and frame aggregations. The experimental results show that the proposed MAB algorithms outperform MAB algorithms that do not adopt the logarithmic transmission rates in both the stationary and non-stationary scenarios.
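The core proposal, scoring each MCS level by its recent delivery ratio times the logarithm of its rate over a sliding window, can be sketched as below. The rate table, window size, epsilon-greedy policy, and toy delivery pattern are illustrative assumptions, not the paper's algorithms or configuration.

```python
import math
import random
from collections import deque

RATES = {0: 6.5, 1: 13.0, 2: 19.5, 3: 26.0}  # nominal Mb/s per MCS level (illustrative)

class LogRateBandit:
    """Sliding-window epsilon-greedy bandit over MCS levels that values
    each level by delivery ratio * log2(rate) rather than the raw rate."""

    def __init__(self, window=100, eps=0.1, seed=0):
        self.hist = {m: deque(maxlen=window) for m in RATES}  # recent outcomes per level
        self.eps = eps
        self.rng = random.Random(seed)

    def _value(self, mcs):
        h = self.hist[mcs]
        if not h:
            return float("inf")  # force exploration of untried levels
        return (sum(h) / len(h)) * math.log2(RATES[mcs])

    def select(self):
        if self.rng.random() < self.eps:
            return self.rng.choice(list(RATES))
        return max(RATES, key=self._value)

    def update(self, mcs, delivered):
        self.hist[mcs].append(1.0 if delivered else 0.0)

# toy channel: levels 0 and 1 always deliver, 2 and 3 always fail
bandit = LogRateBandit(eps=0.0)
for mcs, ok in [(0, True), (1, True), (2, False), (3, False)]:
    for _ in range(10):
        bandit.update(mcs, ok)
print(bandit.select())  # → 1 (highest log-rate among the reliable levels)
```

The `maxlen` deque implements the sliding window: outcomes older than the window fall off automatically, so the estimate tracks a non-stationary channel. With raw rates instead of `log2(rate)`, marginally reliable high-MCS levels would be over-valued, which is the distortion the paper argues against.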

12 pages, 3015 KiB  
Article
Machine Learning Based Representative Spatio-Temporal Event Documents Classification
by Byoungwook Kim, Yeongwook Yang, Ji Su Park and Hong-Jun Jang
Appl. Sci. 2023, 13(7), 4230; https://doi.org/10.3390/app13074230 - 27 Mar 2023
Cited by 2 | Viewed by 1833
Abstract
As the scale of online news and social media expands, attempts to analyze the latest social issues and consumer trends are increasing, and research on detecting spatio-temporal event sentences in text data is being actively conducted. However, a document contains not only the key spatio-temporal events necessary for event analysis but also events that are not critical to it. It is therefore important to increase the accuracy of event analysis by extracting only the key events from among a large number of events. In this study, we define 'representative spatio-temporal event documents' centered on the core subject of a document and propose a BiLSTM-based document classification model to classify them. We build a gold-standard training dataset of 10,000 documents to train the proposed BiLSTM model. The experimental results show that our BiLSTM model improves the F1 score by 2.6% and the accuracy by 4.5% compared to the baseline CNN model.

19 pages, 19497 KiB  
Article
Crop Disease Diagnosis with Deep Learning-Based Image Captioning and Object Detection
by Dong In Lee, Ji Hwan Lee, Seung Ho Jang, Se Jong Oh and Ill Chul Doo
Appl. Sci. 2023, 13(5), 3148; https://doi.org/10.3390/app13053148 - 28 Feb 2023
Cited by 12 | Viewed by 4011
Abstract
The number of people participating in urban farming and its market size have been increasing recently. However, the technologies that assist novice farmers are still limited. Several deep learning-based crop disease diagnosis solutions have been researched previously; however, these techniques focus only on CNN-based disease detection and do not explain the characteristics of disease symptoms according to severity. In order to prevent the spread of diseases in crops, it is important to identify the characteristics of these disease symptoms in advance and cope with them as soon as possible. Therefore, we propose an improved crop disease diagnosis solution that can give practical help to novice farmers. The proposed solution consists of two representative deep learning-based methods: Image Captioning and Object Detection. The Image Captioning model describes prominent symptoms of the disease in detail, according to severity, by generating diagnostic sentences that are grammatically correct and semantically comprehensible, along with the accurate name of the disease. Meanwhile, the Object Detection model detects the infected area to help farmers recognize which part is damaged and to assure them of the accuracy of the diagnostic sentence generated by the Image Captioning model. The Image Captioning model employs the InceptionV3 model as an encoder and the Transformer model as a decoder, while the Object Detection model employs the YOLOv5 model. The average BLEU score of the Image Captioning model is 64.96%, which indicates high sentence-generation performance, while the mAP50 of the Object Detection model is 0.382, which requires further improvement. These results indicate that the proposed solution provides precise and detailed information on crop diseases, thereby increasing the overall reliability of the diagnosis.

22 pages, 1607 KiB  
Article
Joint Task Offloading, Resource Allocation, and Load-Balancing Optimization in Multi-UAV-Aided MEC Systems
by Ibrahim A. Elgendy, Souham Meshoul and Mohamed Hammad
Appl. Sci. 2023, 13(4), 2625; https://doi.org/10.3390/app13042625 - 17 Feb 2023
Cited by 15 | Viewed by 3705
Abstract
Due to their limited computation capabilities and battery life, Internet of Things (IoT) devices face significant challenges in executing delay-sensitive and computation-intensive mobile applications and services. The Unmanned Aerial Vehicle (UAV)-aided mobile edge computing (MEC) paradigm offers low-latency communication, computation, and storage capabilities, which makes it an attractive way to mitigate these limitations through task offloading. Nevertheless, the majority of offloading schemes let IoT devices send their intensive tasks to the connected edge server, which predictably limits the performance gain due to overload. Therefore, in this paper, besides integrating task offloading and load balancing, we study the resource allocation problem for multi-tier UAV-aided MEC systems. First, an efficient load-balancing algorithm is designed that optimizes the load among ground MEC servers through the handover process and hovers UAVs over crowded areas that remain overloaded due to the fixed locations of the ground base stations (GBSs). Moreover, we formulate joint task offloading, load balancing, and resource allocation as an integer problem to minimize the system cost. Furthermore, an efficient task offloading algorithm based on deep reinforcement learning techniques is proposed to derive the offloading solution. Finally, the experimental results show that the proposed approach not only has fast convergence performance but also a significantly lower system cost when compared to the benchmark approaches.
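The load-balancing idea, shifting load off overloaded ground servers via handover, can be caricatured with a greedy loop: repeatedly move one task from the most loaded server above capacity to the least loaded one. This is a simplified stand-in for the paper's algorithm, with invented server names, loads, and capacity.

```python
def balance(initial_loads, capacity):
    """Greedy handover sketch: while some server exceeds capacity, move one
    task from the most loaded server to the least loaded one."""
    loads = dict(initial_loads)
    handovers = []
    while max(loads.values()) > capacity:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        if loads[src] - loads[dst] <= 1:
            break  # further moves cannot improve the balance
        loads[src] -= 1
        loads[dst] += 1
        handovers.append((src, dst))
    return loads, handovers

# hypothetical queued-task counts per ground server / hovering UAV
servers = {"gbs-1": 9, "gbs-2": 2, "uav-1": 1}
loads, moves = balance(servers, capacity=4)
print(loads)  # → {'gbs-1': 4, 'gbs-2': 4, 'uav-1': 4}
```

Treating a hovering UAV as just another MEC server with a queue is what makes the multi-tier handover reduce the cost at fixed-location GBSs; the paper's DRL component, which this sketch omits, decides *which* tasks to offload in the first place.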

20 pages, 10310 KiB  
Article
High Performance IoT Cloud Computing Framework Using Pub/Sub Techniques
by Jaekyung Nam, Youngpyo Jun and Min Choi
Appl. Sci. 2022, 12(21), 11009; https://doi.org/10.3390/app122111009 - 30 Oct 2022
Cited by 6 | Viewed by 2716
Abstract
The Internet of Things is attracting attention as a solution to rural sustainability crises, such as slowing income, exports, and growth rates due to the aging of industries. To develop a high-performance IoT platform, we designed and implemented an IoT cloud platform using pub/sub technologies. This design reduces management and communication overhead even in harsh IoT environments. In this study, we achieved high performance by applying two pub/sub platforms with different characteristics: as the size and frequency of data acquired from IoT nodes increase, we improved performance through the MQTT and Kafka protocols and a multiple-server architecture. MQTT was applied for the fast processing of small data, and Kafka for the reliable processing of large data. We also mounted various sensors and actuators, such as the DHT11, MAX30102, WK-ADB-K07-19, and SG-90, to measure growth data from each device using these protocols. The performance evaluation showed that the MQTT-Kafka platform implemented in this research is effective in environments where network bandwidth is limited or a large amount of data is continuously transmitted and received: the average response time for user requests was within 100 ms, data transmission order was verified for more than 13 million requests, data processing performance averaged 113,134.89 records/s, and 64,313 simultaneous requests per second from multiple clients were handled.
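The MQTT/Kafka split described here amounts to routing each message by payload size and delivery requirements. A minimal dispatcher sketch follows; the threshold, function name, and `reliable` flag are assumptions for illustration, not the paper's implementation.

```python
SMALL_LIMIT = 64 * 1024  # bytes; assumed cut-off between "small" and "large" data

def pick_protocol(payload: bytes, reliable: bool = False) -> str:
    """Route small, latency-sensitive messages over MQTT and large or
    delivery-critical batches over Kafka."""
    if reliable or len(payload) > SMALL_LIMIT:
        return "kafka"
    return "mqtt"

print(pick_protocol(b"t=23.5C"))                 # → mqtt  (tiny sensor reading)
print(pick_protocol(b"\x00" * (1 << 20)))        # → kafka (1 MiB batch)
print(pick_protocol(b"t=23.5C", reliable=True))  # → kafka (ordering/durability needed)
```

Splitting traffic this way plays to each broker's strengths: MQTT's lightweight connections suit constrained nodes and small frequent readings, while Kafka's partitioned log provides the ordered, durable delivery the paper's 13-million-request ordering check depends on.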

Review

Jump to: Editorial, Research

30 pages, 5017 KiB  
Review
Applicability of Deep Reinforcement Learning for Efficient Federated Learning in Massive IoT Communications
by Prohim Tam, Riccardo Corrado, Chanthol Eang and Seokhoon Kim
Appl. Sci. 2023, 13(5), 3083; https://doi.org/10.3390/app13053083 - 27 Feb 2023
Cited by 23 | Viewed by 4003
Abstract
To build intelligent model learning in conventional architectures, local data must be transmitted to the cloud server, which causes heavy backhaul congestion, leakage of personal information, and insufficient use of network resources. To address these issues, federated learning (FL) offers a systematic framework that converges the distributed modeling process between local participants and the parameter server. However, challenging issues of participant scheduling, aggregation policies, model offloading, and resource management remain within conventional FL architectures. In this survey article, state-of-the-art solutions for optimizing the orchestration of FL communications are presented, focusing primarily on deep reinforcement learning (DRL)-based autonomy approaches. The correlations between the DRL and FL mechanisms are described within the optimized system architectures of selected literature approaches. The observable states, configurable actions, and target rewards are examined to illustrate the applicability of DRL-assisted control toward self-organizing FL systems. Various deployment strategies for Internet of Things applications are discussed. Furthermore, this article reviews the challenges and future research perspectives for advancing practical performance. Advanced solutions in these aspects will drive the applicability of converged DRL and FL for future autonomous, communication-efficient, and privacy-aware learning.
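The aggregation step that the surveyed DRL controllers orchestrate is, in the baseline case, federated averaging: the server combines client updates weighted by local dataset size, so raw data never leaves the devices. A minimal sketch with toy two-parameter client models (the values are invented for illustration):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameter vector by its
    local dataset size, the baseline FL aggregation rule."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# two hypothetical participants with unequal data volumes
global_model = fedavg([[1.0, 0.0], [3.0, 2.0]], [100, 300])
print(global_model)  # → [2.5, 1.5]
```

The DRL-based schemes surveyed in the article can be read as learning *which* clients to schedule into this average, and when, under bandwidth and energy constraints, rather than replacing the aggregation rule itself.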
