Topic Editors

Dr. Mehdi Sookhak
Department of Computer Science, Texas A&M University-Corpus Christi, Corpus Christi, TX 78412, USA
Dr. Francesco Moscato
Dipartimento di Ingegneria, Università degli Studi della Campania "Luigi Vanvitelli", via Roma 29, 81031 Aversa, CE, Italy

Cloud and Edge Computing for Smart Devices

Abstract submission deadline
20 March 2025
Manuscript submission deadline
20 May 2025
Viewed by
8800

Topic Information

Dear Colleagues,

In recent years, mobile devices have become indispensable tools that can share a variety of data regardless of their distance or specifications. At the same time, cloud and edge computing are emerging as promising technologies for the Internet of Things and cyber-physical systems. Moving computation to the edge has many benefits: faster response times, improved data privacy and security, and resilience if the cloud becomes unavailable.

The Topic of “Cloud and Edge Computing for Smart Devices” invites papers on theoretical and applied issues including, but not limited to, the following:

  • cloud, mobile cloud and fog computing;
  • blockchain and its application;
  • vehicular cloud computing;
  • smart vehicles, connected vehicles, and smart cities;
  • cloud computing;
  • algorithms;
  • artificial intelligence;
  • parallel computing;
  • Internet of Things.

Dr. Mehdi Sookhak
Dr. Francesco Moscato
Topic Editors

Keywords

  • cloud, mobile cloud, and fog computing
  • blockchain and its application
  • vehicular cloud computing
  • smart vehicles, connected vehicles, and smart cities
  • Internet of Things

Participating Journals

Journal                            Impact Factor   CiteScore   Launched   First Decision (median)   APC
Applied Sciences (applsci)         2.5             5.3         2011       17.8 days                 CHF 2400
Electronics (electronics)          2.6             5.3         2012       16.8 days                 CHF 2400
Future Internet (futureinternet)   2.8             7.1         2009       13.1 days                 CHF 1600
Sensors (sensors)                  3.4             7.3         2001       16.8 days                 CHF 2600
Smart Cities (smartcities)         7.0             11.2        2018       25.8 days                 CHF 2000

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to post a preprint at Preprints.org prior to publication in order to:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen by establishing priority with a time-stamped preprint;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (7 papers)

15 pages, 2850 KiB  
Article
Researching the CNN Collaborative Inference Mechanism for Heterogeneous Edge Devices
by Jian Wang, Chong Chen, Shiwei Li, Chaoyong Wang, Xianzhi Cao and Liusong Yang
Sensors 2024, 24(13), 4176; https://doi.org/10.3390/s24134176 - 27 Jun 2024
Viewed by 861
Abstract
Convolutional Neural Networks (CNNs) have been widely applied in various edge computing devices based on intelligent sensors. However, due to the high computational demands of CNN tasks, the limited computing resources of edge intelligent terminal devices, and significant architectural differences among these devices, it is challenging for edge devices to independently execute inference tasks locally. Collaborative inference among edge terminal devices can effectively utilize idle computing and storage resources and optimize latency characteristics, thus significantly addressing the challenges posed by the computational intensity of CNNs. This paper targets efficient collaborative execution of CNN inference tasks among heterogeneous and resource-constrained edge terminal devices. We propose a pre-partitioning deployment method for CNNs based on critical operator layers, and optimize the system bottleneck latency during pipeline parallelism using data compression, queuing, and “micro-shifting” techniques. Experimental results demonstrate that our method achieves significant acceleration in CNN inference within heterogeneous environments, improving performance by 71.6% compared to existing popular frameworks.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
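The core of such a pre-partitioning scheme is choosing where to split the network so that no pipeline stage becomes a bottleneck. A minimal sketch of that idea, with made-up layer costs and device speeds (none of these numbers, nor the two-device setup, come from the paper):

```python
# Toy split-point search for pipelining a CNN across two heterogeneous
# devices: pick the partition layer that minimizes the slower stage.
# layer_flops and speed values are invented for illustration.

layer_flops = [4.0, 8.0, 6.0, 2.0, 1.0]    # per-layer work (GFLOPs), hypothetical
speed = {"device_a": 2.0, "device_b": 1.0}  # GFLOPs per second, hypothetical

def bottleneck_latency(split: int) -> float:
    """Pipeline bottleneck if layers [0:split] run on A and [split:] on B."""
    stage_a = sum(layer_flops[:split]) / speed["device_a"]
    stage_b = sum(layer_flops[split:]) / speed["device_b"]
    return max(stage_a, stage_b)

best_split = min(range(1, len(layer_flops)), key=bottleneck_latency)
print(best_split, bottleneck_latency(best_split))  # → 2 9.0
```

The paper additionally compresses and queues inter-device data; this sketch only covers the split-point selection.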

21 pages, 3651 KiB  
Article
A Reinforcement Learning-Based Multi-Objective Bat Algorithm Applied to Edge Computing Task-Offloading Decision Making
by Chwan-Lu Tseng, Che-Shen Cheng and Yu-Hsuan Shen
Appl. Sci. 2024, 14(12), 5088; https://doi.org/10.3390/app14125088 - 11 Jun 2024
Viewed by 756
Abstract
Amid the escalating complexity of networks, wireless intelligent devices, constrained by energy and resources, bear the increasing burden of managing various tasks. The decision of whether to allocate tasks to edge servers or handle them locally on devices now significantly impacts network performance. This study focuses on optimizing task-offloading decisions to balance network latency and energy consumption. An advanced learning-based multi-objective bat algorithm, MOBA-CV-SARSA, tailored to the constraints of wireless devices, presents a promising solution for edge computing task offloading. Developed in C++, MOBA-CV-SARSA demonstrates significant improvements over NSGA-RL-CV and QLPSO-CV, enhancing hypervolume and diversity-metric indicators by 0.9%, 15.07%, 4.72%, and 0.1%, respectively. Remarkably, MOBA-CV-SARSA effectively reduces network energy consumption within acceptable latency thresholds. Moreover, integrating an automatic switching mechanism enables MOBA-CV-SARSA to accelerate convergence speed while conserving 150.825 W of energy, resulting in a substantial 20.24% reduction in overall network energy consumption.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
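The latency/energy trade-off underlying each offloading decision can be sketched numerically. The toy model below (local execution versus transmission plus remote execution) uses entirely invented parameters and is a baseline illustration, not the MOBA-CV-SARSA algorithm:

```python
# Toy cost model behind task offloading: compare local execution against
# offloading (uplink transmission + edge-server execution).
# All parameter values are hypothetical.

def local_cost(task_cycles, cpu_hz, power_w):
    latency = task_cycles / cpu_hz
    return latency, power_w * latency  # (seconds, joules on the device)

def offload_cost(task_bits, uplink_bps, tx_power_w, server_hz, task_cycles):
    tx_latency = task_bits / uplink_bps
    latency = tx_latency + task_cycles / server_hz
    return latency, tx_power_w * tx_latency  # device only spends energy transmitting

loc = local_cost(5e8, 1e9, 2.0)                # 0.5 s, 1.0 J locally
off = offload_cost(2e6, 1e7, 0.5, 1e10, 5e8)   # 0.25 s, 0.1 J when offloaded
print(loc, off)
```

A multi-objective optimizer like the paper's searches over many such decisions at once, trading the two columns (latency, energy) against each other instead of collapsing them into one score.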

25 pages, 4173 KiB  
Article
Blockchain-Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment
by Jingpan Bai, Silei Zhu and Houling Ji
Sensors 2024, 24(7), 2279; https://doi.org/10.3390/s24072279 - 3 Apr 2024
Cited by 1 | Viewed by 1086
Abstract
In the mobile edge computing (MEC) environment, edge caching can provide timely data responses for intelligent scenarios. However, due to the limited storage capacity of edge nodes and malicious node behavior, how to select the cached contents and realize decentralized, secure data caching remains challenging. In this paper, a blockchain-based decentralized and proactive caching strategy is proposed in an MEC environment to address this problem. The novelty is that the blockchain is adopted in an MEC environment together with a proactive caching strategy based on node utility, and the corresponding optimization problem is formulated; the blockchain provides a secure and reliable service environment. The optimal caching strategy is obtained using linear relaxation technology and the interior point method. Additionally, in a content caching system there is a trade-off between cache space and node utility, which the proposed caching strategy addresses. There is also a trade-off between the consensus delay of the blockchain and the caching latency of content; an offline consensus authentication method is adopted to reduce the influence of consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Finally, simulation experiments show that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% increases, respectively, compared to the greedy content caching algorithm.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
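The flavor of utility-driven caching under a storage budget can be illustrated with the classic density-greedy rule, which solves the linearly relaxed (fractional) version of the selection problem exactly. This is a minimal stand-in, not the paper's interior-point formulation, and the content sizes and utilities are invented:

```python
# Fractional cache selection under a storage budget: greedily fill by
# utility density (utility per unit size), which is optimal for the
# LP-relaxed problem. Sizes/utilities below are hypothetical.

def fractional_cache(contents, capacity):
    """contents: list of (name, size, utility). Returns {name: cached_fraction}."""
    plan = {}
    remaining = capacity
    for name, size, util in sorted(contents, key=lambda c: c[2] / c[1], reverse=True):
        take = min(size, remaining)
        if take <= 0:
            break
        plan[name] = take / size   # fraction of this content kept in cache
        remaining -= take
    return plan

demo = [("video_a", 4.0, 8.0), ("map_b", 2.0, 6.0), ("doc_c", 3.0, 3.0)]
print(fractional_cache(demo, capacity=5.0))  # → {'map_b': 1.0, 'video_a': 0.75}
```

In the paper, utilities come from a node-utility model and the relaxed problem is solved with an interior-point method; the greedy sketch only conveys the shape of the trade-off between cache space and utility.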

36 pages, 446 KiB  
Review
A Survey on Modeling Languages for Applications Hosted on Cloud-Edge Computing Environments
by Ioannis Korontanis, Antonios Makris and Konstantinos Tserpes
Appl. Sci. 2024, 14(6), 2311; https://doi.org/10.3390/app14062311 - 9 Mar 2024
Cited by 1 | Viewed by 1256
Abstract
In the field of edge-cloud computing environments, there is a continuous quest for new and simplified methods to automate the deployment and runtime adaptation to application lifecycle changes. Towards that end, cloud providers promote their own service description languages to describe deployment and adaptation processes, whereas application developers opt for cloud-agnostic open standards capable of modeling applications. However, not all open standards are able to capture concepts that relate to the adaptation of the underlying computing environment to changes in the application lifecycle. In our quest for a formal approach to encapsulate these concepts, this study presents various Cloud Modeling Languages (CMLs). In this study, when referring to CMLs, we are discussing service description languages, domain-specific languages, and open standards. The output of this study is a review that performs a classification on CMLs based on their effectiveness in describing deployment and adaptation of applications in both cloud and edge environments. According to our findings, approximately 90.9% of the examined languages offer support for deployment descriptions overall. In contrast, only around 27.2% of examined languages allow developers the choice to specify whether their application components should be deployed on the edge or in a cloud environment. Regarding runtime adaptation descriptions, approximately 54.5% of the languages provide support in general. Full article
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
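The quoted shares are consistent with a pool of 11 examined languages (10/11 ≈ 90.9%, 6/11 ≈ 54.5%, 3/11 ≈ 27.3%, quoted as 27.2%) — an inference from the percentages, not a figure stated in the abstract. A sketch of the tally behind such a classification, with the counts assumed rather than taken from the survey's tables:

```python
# Feature-support tally of the kind behind the survey's percentages.
# Counts are inferred from the quoted shares and an assumed pool of 11
# languages; they are not the survey's published per-language data.

support = {
    "deployment": 10,          # languages with deployment descriptions
    "edge/cloud choice": 3,    # languages letting developers pick edge vs. cloud
    "runtime adaptation": 6,   # languages with runtime-adaptation descriptions
}
n_languages = 11
for feature, count in support.items():
    print(f"{feature}: {count}/{n_languages} = {count / n_languages:.1%}")
```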

22 pages, 3856 KiB  
Article
MixMobileNet: A Mixed Mobile Network for Edge Vision Applications
by Yanju Meng, Peng Wu, Jian Feng and Xiaoming Zhang
Electronics 2024, 13(3), 519; https://doi.org/10.3390/electronics13030519 - 26 Jan 2024
Cited by 1 | Viewed by 1483
Abstract
Currently, vision transformers (ViTs) have achieved performance comparable to convolutional neural networks (CNNs). However, the computational demands of the transformers’ self-attention mechanism pose challenges for their application on edge devices. Therefore, in this study, we propose a lightweight transformer-based network model called MixMobileNet. Similar to the ResNet block, this model comprises only a MixMobile block (MMb), which combines the efficient local inductive bias with the explicit modeling features of a transformer to achieve the fusion of local–global feature interactions. For the local part, we propose the local-feature aggregation encoder (LFAE), which incorporates a PC2P (Partial-Conv→PWconv→PWconv) inverted bottleneck structure for residual connectivity. In particular, the kernel and channel scale are adaptive, reducing feature redundancy in adjacent layers and efficiently representing parameters. For the global part, we propose the global-feature aggregation encoder (GFAE), which employs a pooling strategy and computes the covariance matrix between channels instead of the spatial dimensions, changing the computational complexity from quadratic to linear and accelerating the inference of the model. We perform extensive image classification, object detection, and segmentation experiments to validate model performance. Our MixMobileNet-XXS/XS/S achieves 70.6%/75.1%/78.8% top-1 accuracy with 1.5 M/3.2 M/7.3 M parameters and 0.2 G/0.5 G/1.2 G FLOPs on ImageNet-1K, outperforming MobileViT-XXS/XS/S by +1.6%↑/+0.4%↑/+0.4%↑ with a −38.8%↓/−51.5%↓/−39.8%↓ reduction in FLOPs. In addition, the MixMobileNet-S assembly of SSDLite and DeepLabv3 achieves an accuracy of 28.5 mAP/79.5 mIoU on COCO2017/VOC2012 with lower computation, demonstrating the competitive performance of our lightweight model.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
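The complexity argument behind the GFAE can be made concrete: attending over the C×C channel-covariance matrix costs O(C²·N), i.e., linear in the number of spatial positions N, whereas spatial self-attention is O(N²·C). A minimal NumPy sketch of channel-covariance mixing, with arbitrary shapes — this is an illustration of the idea, not the MixMobileNet code:

```python
# Channel-covariance "attention": mix channels via a softmax over the
# C x C covariance matrix, avoiding the N x N spatial attention map.
import numpy as np

def channel_attention(x):
    """x: (channels, positions) feature map; returns same shape."""
    x = x - x.mean(axis=1, keepdims=True)
    cov = (x @ x.T) / x.shape[1]          # (C, C) channel covariance
    w = np.exp(cov)
    weights = w / w.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x                    # re-mixed channels, still (C, N)

feat = np.random.default_rng(0).standard_normal((8, 196))  # C=8, N=14*14
out = channel_attention(feat)
print(out.shape)  # → (8, 196)
```

Doubling N here only doubles the cost of the `x @ x.T` product, whereas a spatial attention map would quadruple.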

30 pages, 5385 KiB  
Article
Joint Optimization of Memory Sharing and Communication Distance for Virtual Machine Instantiation in Cloudlet Networks
by Jianbo Shao and Junbin Liang
Electronics 2023, 12(20), 4205; https://doi.org/10.3390/electronics12204205 - 10 Oct 2023
Viewed by 1065
Abstract
Cloudlet networks are an emerging distributed data processing paradigm, which contain multiple cloudlets deployed beside base stations to serve local user devices (UDs). Each cloudlet is a small data center with limited memory, in which multiple virtual machines (VMs) can be instantiated. Each VM runs a UD’s application components and provides dedicated services for that UD. The number of VMs that serve UDs with low latency is limited by a lack of sufficient memory on cloudlets. Memory deduplication technology is expected to solve this problem by sharing memory pages between VMs. However, maximizing page sharing means that more VMs that can share the same memory pages should be instantiated on the same cloudlet, which prevents the communication distance between UDs and their VMs from being minimized, as each VM cannot be instantiated in the cloudlet with the shortest communication distance from its UD. In this paper, we study the problem of VM instantiation with the joint optimization of memory sharing and communication distance in cloudlet networks. First, we formulate this problem as a bi-objective optimization model. Then, we propose an iterative heuristic algorithm based on the ε-constraint method, which decomposes the original problem into several single-objective optimization subproblems and iteratively obtains their optimal solutions. Finally, the proposed algorithm is evaluated through a large number of experiments on the Google cluster workload tracking dataset and the Shanghai Telecom base station dataset. Experimental results show that the proposed algorithm outperforms other benchmark algorithms. Overall, the memory sharing between VMs increased by 3.6%, the average communication distance between VMs and UDs was reduced by 22.7%, and the running time decreased by approximately 29.7% compared to the weighted sum method.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
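The ε-constraint idea can be shown on a toy instance: keep one objective (total communication distance) as the target, turn the other (pages shared through co-location) into a constraint "sharing ≥ ε", and sweep ε to trace the trade-off. All VM/cloudlet numbers below are invented, and the brute-force solver stands in for the paper's subproblem solutions:

```python
# Toy epsilon-constraint decomposition of a bi-objective VM placement:
# minimize distance subject to a minimum number of co-located VM pairs
# (a crude proxy for shared memory pages). Instance data is hypothetical.
from itertools import product

vms = ["vm1", "vm2", "vm3"]
cloudlets = ["c1", "c2"]
dist = {("vm1", "c1"): 1, ("vm1", "c2"): 4,
        ("vm2", "c1"): 3, ("vm2", "c2"): 1,
        ("vm3", "c1"): 2, ("vm3", "c2"): 2}

def shared_pairs(assign):
    """Count VM pairs placed on the same cloudlet (sharing proxy)."""
    return sum(1 for a, b in product(vms, vms) if a < b and assign[a] == assign[b])

def solve(eps):
    """Single-objective subproblem: min distance s.t. sharing >= eps."""
    candidates = [dict(zip(vms, placing)) for placing in product(cloudlets, repeat=3)]
    feasible = [a for a in candidates if shared_pairs(a) >= eps]
    return min(feasible, key=lambda a: sum(dist[v, a[v]] for v in vms), default=None)

for eps in range(4):
    best = solve(eps)
    print(eps, best, sum(dist[v, best[v]] for v in vms))
```

Sweeping ε from 0 to 3 moves the optimum from the distance-minimal spread-out placement (total distance 4) to the fully co-located one (total distance 6), exposing the sharing/distance trade-off one subproblem at a time.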

19 pages, 1218 KiB  
Article
Migratory Perception in Edge-Assisted Internet of Vehicles
by Chao Cai, Bin Chen, Jiahui Qiu, Yanan Xu, Mengfei Li and Yujia Yang
Electronics 2023, 12(17), 3662; https://doi.org/10.3390/electronics12173662 - 30 Aug 2023
Cited by 1 | Viewed by 1078
Abstract
Autonomous driving technology heavily relies on the accurate perception of traffic environments, mainly through roadside cameras and LiDARs. Although several popular and robust 2D and 3D object detection methods exist, including R-CNN, YOLO, SSD, PointPillar, and VoxelNet, the perception range and accuracy of an individual vehicle can be limited by blocking from other vehicles or buildings. A solution is to harness roadside perception infrastructures for vehicle–infrastructure cooperative perception, using edge computing for real-time intermediate feature extraction and V2X networks for transmitting these features to vehicles. This emerging migratory perception paradigm requires deploying exclusive cooperative perception services on edge servers and involves the migration of perception services to reduce response time. In such a setup, competition among multiple cooperative perception services exists due to limited edge resources. This study proposes a multi-agent deep reinforcement learning (MADRL)-based service scheduling method for migratory perception in vehicle–infrastructure cooperative perception, utilizing a discrete time-varying graph to model the relationship between service nodes and edge server nodes. This MADRL-based approach can efficiently address the challenges of service placement and migration in resource-limited environments, minimize latency, and maximize resource utilization for migratory perception services on edge servers.
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
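The migration trade-off the scheduler faces can be sketched with a greedy baseline (not the MADRL policy): at each time slot a perception service either stays on its current edge server or migrates, weighing serving latency against a fixed migration cost. All latencies and the cost value are invented:

```python
# Greedy stay-or-migrate baseline for a single migratory perception
# service over a sequence of time slots. Numbers are hypothetical.

MIGRATION_COST = 3.0  # one-off penalty for moving the service's state

def step(current, latency_to):
    """Pick the server minimizing latency plus any migration penalty."""
    return min(latency_to,
               key=lambda s: latency_to[s] + (0 if s == current else MIGRATION_COST))

# per-server serving latency as the vehicle moves through three slots
slots = [{"s1": 1.0, "s2": 5.0},
         {"s1": 3.0, "s2": 2.0},   # s2 is now closer, but migrating costs 3.0
         {"s1": 6.0, "s2": 1.0}]

server = "s1"
for t, latency in enumerate(slots):
    server = step(server, latency)
    print(t, server)  # stays on s1 until slot 2, then migrates to s2
```

The migration cost introduces hysteresis: the service tolerates a slightly worse server rather than migrate immediately. The paper's MADRL scheduler learns this trade-off jointly for many competing services on a time-varying graph instead of deciding greedily per service.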