Communications and Networking Based on Artificial Intelligence

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 20 August 2025 | Viewed by 5515

Special Issue Editors


Guest Editor
Beijing Key Laboratory of Work Safety Intelligent Monitoring, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road No. 10, Beijing 100876, China
Interests: energy-efficient networking; deep reinforcement learning; heterogeneous resource management; edge computing

Guest Editor
College of Engineering, Virginia Commonwealth University, Richmond, VA E4249, USA
Interests: wireless communication and networking; artificial intelligence; cyber security

Guest Editor
School of Electronic Engineering, Dublin City University, Collins Avenue Extension, D09 D209 Dublin, Ireland
Interests: green communication and networking; network security; hardware acceleration for ML and AI; AI technology in education

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has been an active research area for decades and is expected to play an important role in communication networks. AI algorithms extract features from large amounts of data without relying on explicit mathematical models and therefore hold great promise in complex scenarios where nonlinear relationships cannot be modeled directly with traditional methods. In recent years, the rapid development of information and communication technology has generated vast amounts of data that can drive the training of AI models. Preliminary studies have applied AI methods to communication networks; however, as AI methods and technologies continue to advance, and as communication networks face ever higher performance requirements, further research on AI in wireless communication is needed. In addition, AI methods must be more fully explored in terms of performance stability, interpretability, privacy, and training cost. The goal of this Special Issue is to explore the potential of AI in communications and to provide a platform for presenting the latest achievements in this field of research. Potential topics include, but are not limited to, the following:

  • AI for integrated sensing and communication;
  • Blockchain-enabled federated learning in the IoT;
  • AI for network slicing and system coexistence;
  • AI for mobile edge computing;
  • Centralized and distributed learning for sensor networks;
  • Reinforcement learning approaches for wireless resource allocation;
  • AI for green communication and networking;
  • AI for environmental awareness and target localization;
  • AI for reconfigurable intelligent surfaces;
  • Deep learning methods for sensors.

Prof. Dr. Yifei Wei
Dr. Changqing Luo
Dr. Xiaojun Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • communications
  • networking
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

19 pages, 4225 KiB  
Article
Zero-Shot Traffic Identification with Attribute and Graph-Based Representations for Edge Computing
by Zikui Lu, Zixi Chang, Mingshu He and Luona Song
Sensors 2025, 25(2), 545; https://doi.org/10.3390/s25020545 - 18 Jan 2025
Viewed by 444
Abstract
With the proliferation of mobile terminals and the rapid growth of network applications, fine-grained traffic identification has become increasingly challenging. Methods based on machine learning and deep learning have achieved remarkable results, but they heavily rely on the distribution of training data, which makes them ineffective in handling unseen samples. In this paper, we propose AG-ZSL, a zero-shot learning framework based on traffic behavior and attribute representations for general encrypted traffic classification. AG-ZSL primarily learns two mapping functions: one that captures traffic behavior embeddings from burst-based traffic interaction graphs, and the other that learns attribute embeddings from traffic attribute descriptions. Then, the framework minimizes the distance between these embeddings within the shared feature space. The gradient rejection algorithm and K-Nearest Neighbors are introduced to implement a two-stage method for general traffic classification. Experimental results on IoT datasets demonstrate that AG-ZSL achieves exceptional performance in classifying both known and unknown traffic, highlighting its potential for enhancing secure and efficient traffic management at the network edge.
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
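The two-stage scheme described in the abstract (first reject flows that lie far from every known class in the shared embedding space, then label the rest by nearest-neighbour search) can be illustrated with a minimal sketch. The function name, the prototype representation, and the rejection threshold below are assumptions for illustration, not the AG-ZSL implementation.

```python
import numpy as np

# Hypothetical sketch of a two-stage open-set classifier in the spirit of
# AG-ZSL: stage 1 rejects flows far from every known-class prototype in the
# shared embedding space; stage 2 labels the rest by nearest-prototype search.
# Names, prototypes, and the threshold are illustrative assumptions.

def two_stage_classify(flow_emb, class_protos, reject_dist):
    """Return the index of the nearest class prototype, or -1 for unknown traffic."""
    dists = np.linalg.norm(class_protos - flow_emb, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > reject_dist:   # stage 1: distance-based rejection
        return -1                      # treated as unseen / unknown traffic
    return nearest                     # stage 2: nearest-prototype label

# Toy example: three known classes embedded in a 2-D shared space.
protos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
known = two_stage_classify(np.array([0.2, -0.1]), protos, reject_dist=1.0)     # -> 0
unknown = two_stage_classify(np.array([10.0, 10.0]), protos, reject_dist=1.0)  # -> -1
```

The rejection stage is what gives the method open-set behaviour: any flow outside the radius of every known class is flagged rather than forced into the closest label.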

21 pages, 4444 KiB  
Article
Network Dismantling on Signed Network by Evolutionary Deep Reinforcement Learning
by Yuxuan Ou, Fujing Xiong, Hairong Zhang and Huijia Li
Sensors 2024, 24(24), 8026; https://doi.org/10.3390/s24248026 - 16 Dec 2024
Viewed by 527
Abstract
Network dismantling is an important problem that has attracted attention from many research areas, including the disruption of criminal organizations and the maintenance of stability in sensor networks. However, almost all current algorithms focus on unsigned networks; few studies explore signed network dismantling, owing to its complexity and a lack of data. Importantly, there is no effective quality function for assessing the performance of signed network dismantling, which seriously restricts its deeper applications. To address these questions, in this paper we design a new objective function and propose an effective algorithm, named DSEDR, which searches for the best dismantling strategy via evolutionary deep reinforcement learning. Since evolutionary computation can solve global optimization problems and deep reinforcement learning can speed up the network computation, we integrate the two for efficient signed network dismantling. To verify the performance of DSEDR, we apply it to a series of representative artificial and real networks and compare its efficiency with popular baseline methods. The experimental results show that DSEDR has superior performance to all other methods in both efficiency and interpretability.
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
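As a toy illustration of the search problem behind such dismantling methods, the sketch below evolves a small removal set that minimises the size of the largest remaining connected component, using only positive edges for connectivity. A plain evolutionary loop with random mutation stands in for, and greatly simplifies, both the paper's signed quality function and its deep reinforcement learning component; all names here are illustrative.

```python
import random

# Toy sketch: evolutionary search for a k-node removal set that minimises
# the largest connected component (LCC) of what remains. Connectivity uses
# only positive edges; this greatly simplifies DSEDR's actual objective.

def largest_component(nodes, edges, removed):
    """Size of the largest component once `removed` nodes are deleted."""
    alive = set(nodes) - set(removed)
    adj = {n: set() for n in alive}
    for u, v, sign in edges:
        if sign > 0 and u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = 0, set()
    for n in alive:
        if n in seen:
            continue
        stack, size = [n], 0
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            size += 1
            stack.extend(adj[x] - seen)
        best = max(best, size)
    return best

def evolve(nodes, edges, k, pop_size=20, gens=50, seed=0):
    """Evolve a population of candidate removal sets toward the smallest LCC."""
    rng = random.Random(seed)
    pop = [rng.sample(list(nodes), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: largest_component(nodes, edges, s))
        survivors = pop[: pop_size // 2]          # keep the better half
        children = []
        for s in survivors:
            child = list(s)
            child[rng.randrange(k)] = rng.choice(list(nodes))  # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: largest_component(nodes, edges, s))

# Toy signed graph: two positive triangles joined through hub node 0.
nodes = list(range(7))
edges = [(1, 2, 1), (2, 3, 1), (1, 3, 1),
         (4, 5, 1), (5, 6, 1), (4, 6, 1),
         (0, 1, 1), (0, 4, 1)]
removal = evolve(nodes, edges, k=1)  # removing the hub splits the network
```

In DSEDR the random mutation and hand-written fitness ranking are replaced by a learned policy, which is where the deep reinforcement learning enters.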

18 pages, 8743 KiB  
Article
An Improved YOLOv8-Based Foreign Detection Algorithm for Transmission Lines
by Pingting Duan and Xiao Liang
Sensors 2024, 24(19), 6468; https://doi.org/10.3390/s24196468 - 7 Oct 2024
Viewed by 1376
Abstract
This research aims to overcome three major challenges in foreign object detection on power transmission lines: data scarcity, background noise, and high computational costs. In the improved YOLOv8 algorithm, the newly introduced lightweight GSCDown (Ghost Shuffle Channel Downsampling) module effectively captures subtle image features by combining 1 × 1 convolution and GSConv technology, thereby enhancing detection accuracy. CSPBlock (Cross-Stage Partial Block) fusion enhances the model’s accuracy and stability by strengthening feature expression and spatial perception while maintaining the algorithm’s lightweight nature and effectively mitigating the issue of vanishing gradients, making it suitable for efficient foreign object detection in complex power line environments. Additionally, PAM (pooling attention mechanism) effectively distinguishes between background and target without adding extra parameters, maintaining high accuracy even in the presence of background noise. Furthermore, AIGC (AI-generated content) technology is leveraged to produce high-quality images for training data augmentation, and lossless feature distillation ensures higher detection accuracy and reduces false positives. In conclusion, the improved architecture reduces the parameter count by 18% while improving the mAP@0.5 metric by a margin of 5.5 points when compared to YOLOv8n. Compared to state-of-the-art real-time object detection frameworks, our research demonstrates significant advantages in both model accuracy and parameter size.
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
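A parameter-free pooling attention of the kind mentioned in the abstract can be imagined along these lines: pool across channels to obtain a spatial saliency map, gate it with a sigmoid, and reweight the features, all without learned weights. This is a sketch of the general idea only; the paper's actual PAM may differ.

```python
import numpy as np

# Hypothetical sketch of a parameter-free "pooling attention" gate. Salient
# (high-activation) locations pass through nearly unchanged, while flat
# background regions are attenuated. No learned parameters are involved.

def pooling_attention(feat):
    """feat: array of shape (C, H, W); returns a gated array of the same shape."""
    spatial = feat.mean(axis=0)            # (H, W) channel-average pooling
    attn = 1.0 / (1.0 + np.exp(-spatial))  # sigmoid gate in [0, 1]
    return feat * attn                     # broadcasts over the channel axis

# A bright spot at (0, 0) is kept almost intact; the flat background of 1s
# is scaled down toward sigmoid(1) of its activation.
feat = np.ones((4, 2, 2))
feat[:, 0, 0] = 5.0
gated = pooling_attention(feat)
```

Because the gate is computed from the features themselves, it adds no parameters to the network, which matches the abstract's claim of discriminating target from background "without adding extra parameters".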

35 pages, 1261 KiB  
Article
Utility-Driven End-to-End Network Slicing for Diverse IoT Users in MEC: A Multi-Agent Deep Reinforcement Learning Approach
by Muhammad Asim Ejaz, Guowei Wu, Adeel Ahmed, Saman Iftikhar and Shaikhan Bawazeer
Sensors 2024, 24(17), 5558; https://doi.org/10.3390/s24175558 - 28 Aug 2024
Viewed by 1149
Abstract
Mobile Edge Computing (MEC) is crucial for reducing latency by bringing computational resources closer to the network edge, thereby enhancing quality of service (QoS). However, the broad deployment of cloudlets poses challenges for efficient network slicing, particularly when traffic distribution is uneven. These challenges include managing diverse resource requirements across widely distributed cloudlets, minimizing resource conflicts and delays, and maintaining service quality amid fluctuating request rates. Addressing them requires intelligent strategies to predict request types (common or urgent), assess resource needs, and allocate resources efficiently. Emerging technologies such as edge computing and 5G with network slicing can handle delay-sensitive IoT requests rapidly, but a robust mechanism for real-time resource and utility optimization remains necessary. To address these challenges, we designed an end-to-end network slicing approach that predicts common and urgent user requests through a T distribution. We formulated the problem as a multi-agent Markov decision process (MDP) and introduced a multi-agent soft actor–critic (MAgSAC) algorithm. This algorithm prevents the wastage of scarce resources by intelligently activating and deactivating virtual network function (VNF) instances, thereby balancing the allocation process. Our approach optimizes overall utility, balancing trade-offs between revenue, energy consumption costs, and latency. We evaluated MAgSAC through simulations against the following six benchmark schemes: MAA3C, SACT, DDPG, S2Vec, Random, and Greedy. The results demonstrate that MAgSAC improves utility by 30%, reduces energy consumption costs by 12.4%, and reduces execution time by 21.7% compared with the closest related multi-agent approach, MAA3C.
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
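The revenue/energy/latency trade-off that such agents optimise can be sketched as a toy scalar utility. The weights, units, and function below are illustrative assumptions, not the formulation used in the paper.

```python
# Toy scalar utility of the kind the MAgSAC agents trade off: revenue from
# served requests, minus the energy cost of keeping VNF instances active,
# minus a latency penalty. All weights and units here are illustrative.

def slice_utility(served, active_vnfs, avg_latency_ms,
                  revenue_per_req=1.0, energy_cost_per_vnf=0.5,
                  latency_weight=0.01):
    revenue = revenue_per_req * served
    energy = energy_cost_per_vnf * active_vnfs
    latency_penalty = latency_weight * avg_latency_ms * served
    return revenue - energy - latency_penalty

# Fewer active VNFs save energy but raise latency; scaling up does the reverse.
low_power = slice_utility(served=100, active_vnfs=2, avg_latency_ms=20)
scaled_up = slice_utility(served=100, active_vnfs=8, avg_latency_ms=5)
```

The point of learning a policy rather than fixing a rule is that the utility-maximising number of active VNF instances shifts as the request rate and mix of common versus urgent requests fluctuate.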

12 pages, 2334 KiB  
Article
Pantograph Slider Detection Architecture and Solution Based on Deep Learning
by Qichang Guo, Anjie Tang and Jiabin Yuan
Sensors 2024, 24(16), 5133; https://doi.org/10.3390/s24165133 - 8 Aug 2024
Viewed by 1070
Abstract
Railway transportation has become integral to daily life. The “Notice on the Release of the General Technical Specification of the High-Speed Railway Power Supply Safety Testing (6C) System”, issued by the National Railway Administration of China in 2012, requires pantograph and slider monitoring devices to be installed at high-speed railway stations, station throats, and the entry and exit lines of high-speed railway sections, and requires slider damage to be detected with high precision. The good condition of the pantograph slider is therefore essential to the normal operation of the railway system. Pantograph wear arises mainly from the contact between the slider and the long supply wire during high-speed operation, which inevitably produces scratches and depressions on the upper surface of the slider. During long-term use, a depression that grows too deep creates a risk of fracture, so the slider must be monitored regularly and replaced when seriously worn. At present, most traditional methods use automation technology or simple computer vision for detection, which is inefficient. This paper therefore introduces computer vision and deep learning into pantograph slider wear detection. Specifically, we study deep-learning-based wear detection of the pantograph slider, with the main aims of improving detection accuracy and segmentation quality. Methodologically, we employ a linear array camera to enhance the quality of the datasets and integrate an attention mechanism to improve segmentation performance. Furthermore, we introduce a novel image stitching method to address incomplete images, thereby providing a comprehensive solution.
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
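A drastically simplified, one-dimensional stand-in for the stitching step: merge two strips by locating the longest exact overlap between the end of one and the start of the other. The function name and overlap rule are illustrative; the paper's method operates on real camera images and is certainly more robust.

```python
import numpy as np

# Toy 1-D "stitching": append strip b to strip a, merging the longest
# suffix of a that exactly matches a prefix of b. Illustrative only.

def stitch(a, b, min_overlap=1):
    """Merge the longest suffix of a that equals a prefix of b, then append b."""
    best = 0
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if np.array_equal(a[-k:], b[:k]):
            best = k
            break
    return np.concatenate([a, b[best:]])

strip1 = np.array([1, 2, 3, 4, 5])
strip2 = np.array([4, 5, 6, 7])
merged = stitch(strip1, strip2)  # the shared [4, 5] segment is merged once
```

Real image stitching replaces the exact-match test with a similarity score (e.g., cross-correlation over candidate offsets), since sensor noise makes exact repeats unlikely.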
