Application of Artificial Intelligence and Deep Learning in Wireless Communications Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (20 March 2021) | Viewed by 29442

Special Issue Editor

Dr. Woongsup Lee
Guest Editor
Department of Information and Communication Engineering, Institute of Marine Industry, Gyeongsang National University, Tongyeong 53064, Korea
Interests: deep learning; wireless communications; cognitive radio; smart grid; data science

Special Issue Information

Dear Colleagues,

In recent years, we have witnessed the rapid advance of wireless technologies, including fifth-generation (5G) communication, fog networking, molecular communication, and millimeter-wave communication. Compared to traditional wireless technologies, these new technologies have more diverse service requirements (e.g., extremely low delay) and more complicated system models, which are harder to manage properly with conventional approaches. Accordingly, a new research paradigm needs to be devised.

Recently, artificial intelligence (AI) and deep learning (DL) technology have gained considerable popularity owing to their remarkable performance compared to traditional schemes, and they have begun to be applied in wireless communications. In particular, these data-driven approaches can change the research paradigm from sophisticated mathematical model-based design to learning-based design, in which a scheme is devised autonomously by observing large amounts of data. Given that AI- and DL-based schemes adapt better to the wireless environment, do not rely on a mathematically tractable system model, and have lower run-time computational complexity, they are well suited to these recent wireless technologies, i.e., future wireless communication systems.

This Special Issue encourages the submission of high-quality, innovative, and original contributions on future wireless communication systems, especially the application of AI and DL in wireless communication systems.

The list of possible topics includes but is not limited to:

  • Challenges and design requirements for future wireless communication systems;
  • Application of DL and AI for wireless communication systems;
  • Evaluating the limitations of AI and DL in wireless communications;
  • Big data for future wireless communication systems;
  • Security for future wireless communication systems;
  • Implementation of DL and AI technology for wireless communication.

Dr. Woongsup Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • artificial intelligence
  • future wireless communication systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

63 pages, 4184 KiB  
Article
A Survey on Machine Learning-Based Performance Improvement of Wireless Networks: PHY, MAC and Network Layer
by Merima Kulin, Tarik Kazaz, Eli De Poorter and Ingrid Moerman
Electronics 2021, 10(3), 318; https://doi.org/10.3390/electronics10030318 - 29 Jan 2021
Cited by 50 | Viewed by 12791
Abstract
This paper presents a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack: PHY, MAC and network. First, the related work and paper contributions are discussed, followed by providing the necessary background on data-driven approaches and machine learning to help non-machine learning experts understand all discussed techniques. Then, a comprehensive review is presented on works employing ML-based approaches to optimize the wireless communication parameters settings to achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into: radio analysis, MAC analysis and network prediction approaches, followed by subcategories within each. Finally, open challenges and broader perspectives are discussed.

10 pages, 686 KiB  
Article
Deep Scanning—Beam Selection Based on Deep Reinforcement Learning in Massive MIMO Wireless Communication System
by Minhoe Kim, Woongsup Lee and Dong-Ho Cho
Electronics 2020, 9(11), 1844; https://doi.org/10.3390/electronics9111844 - 4 Nov 2020
Cited by 5 | Viewed by 2356
Abstract
In this paper, we investigate a deep learning-based resource allocation scheme for massive multiple-input-multiple-output (MIMO) communication systems, where a base station (BS) with a large-scale antenna array communicates with a user equipment (UE) using beamforming. In particular, we propose Deep Scanning, in which a near-optimal beamforming vector can be found based on deep Q-learning. Through simulations, we confirm that the optimal beam vector can be found with a high probability. We also show that the complexity required to find the optimal beam vector can be reduced significantly in comparison with conventional beam search schemes.
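The beam-selection idea in this abstract can be illustrated with a toy, tabular Q-learning sketch: the agent picks one of N beam indices and receives the resulting channel gain as reward. Note this is only an interpretation of the abstract under simplified assumptions — the paper uses deep Q-learning (a neural network instead of a table), and the reward model, beam count, and variable names here are hypothetical.

```python
import random

random.seed(0)

N_BEAMS = 8
BEST_BEAM = 5  # hypothetical beam with the highest channel gain

def snr_reward(beam):
    # Simplified channel model: reward decays with distance from the best beam.
    return 1.0 / (1 + abs(beam - BEST_BEAM))

q = [0.0] * N_BEAMS
alpha, eps = 0.3, 0.2  # learning rate, exploration probability

for step in range(2000):
    # Epsilon-greedy action selection over beam indices.
    if random.random() < eps:
        beam = random.randrange(N_BEAMS)
    else:
        beam = max(range(N_BEAMS), key=lambda b: q[b])
    r = snr_reward(beam)
    q[beam] += alpha * (r - q[beam])  # single-state (bandit-style) Q update

best = max(range(N_BEAMS), key=lambda b: q[b])
print(best)  # beam index the learned policy settles on
```

The appeal for beamforming, as the abstract notes, is that once trained, picking the best beam is a cheap lookup (or one network forward pass) instead of an exhaustive beam sweep.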

13 pages, 2340 KiB  
Article
Harnessing the Adversarial Perturbation to Enhance Security in the Autoencoder-Based Communication System
by Zhixiang Deng and Qian Sang
Electronics 2020, 9(2), 294; https://doi.org/10.3390/electronics9020294 - 8 Feb 2020
Viewed by 2420
Abstract
Given the vulnerability of deep neural networks to adversarial attacks, the application of deep learning in the wireless physical layer raises serious security concerns. In this paper, we consider an autoencoder-based communication system with a full-duplex (FD) legitimate receiver and an external eavesdropper. It is assumed that the system is trained end-to-end based on the concepts of the autoencoder. The FD legitimate receiver transmits a well-designed adversarial perturbation signal to jam the eavesdropper while receiving information simultaneously. To defend against the self-perturbation from the loop-back channel, the legitimate receiver is re-trained with the adversarial training method. The simulation results show that with the scheme proposed in this paper, the block-error-rate (BLER) of the legitimate receiver remains almost unaffected while the BLER of the eavesdropper is increased by orders of magnitude. This ensures reliable and secure transmission between the transmitter and the legitimate receiver.
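The core mechanism — a small, carefully signed perturbation degrading a neural decision — can be sketched with a minimal FGSM-style example. This is a stand-in for illustration only: the paper designs its perturbation end-to-end with the autoencoder, whereas here a toy linear classifier and hand-picked numbers (all hypothetical) show how stepping against the gradient sign shrinks the decision margin.

```python
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# Toy "received symbol" x with true label +1 and linear decision w.x.
w = [0.8, -0.4]
x = [1.0, -1.0]
label = 1.0

def margin(vec):
    # Positive margin => correct decision for label +1; larger is more robust.
    return label * sum(wi * xi for wi, xi in zip(w, vec))

# For a linear model the gradient of the margin w.r.t. x is label * w;
# the adversary steps against it (FGSM-style: x_adv = x - eps * sign(grad)).
eps = 0.5
x_adv = [xi - eps * sign(label * wi) for xi, wi in zip(x, w)]

clean = margin(x)      # margin before the perturbation
attacked = margin(x_adv)  # degraded margin after the perturbation
print(clean, attacked)
```

The twist in the paper is that this degradation is aimed at the eavesdropper's decoder, while the legitimate receiver is adversarially re-trained so its own margin survives the loop-back copy of the perturbation.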

10 pages, 2566 KiB  
Article
Low-Cost Image Search System on Off-Line Situation
by Mery Diana, Juntaro Chikama, Motoki Amagasaki and Masahiro Iida
Electronics 2020, 9(1), 153; https://doi.org/10.3390/electronics9010153 - 14 Jan 2020
Cited by 3 | Viewed by 2976
Abstract
Implementation of deep learning on low-cost hardware, such as an edge device, is challenging. Reducing the complexity of the network is one way to reduce resource usage in the system, which low-cost implementations require. In this study, we use a global average pooling layer to replace the fully connected layers of the convolutional neural network (CNN) model used in a previous study, reducing the number of network parameters without decreasing model performance, in order to develop image classification for image search tasks. We apply cosine similarity to measure the similarity between the feature vector of the input image and the feature vectors extracted from the test images in the database; the images with the highest cosine similarity are returned as the search results. In the implementation, we use a Raspberry Pi 3 as low-cost hardware and the CIFAR-10 dataset for training and test images. Based on the development and implementation, the accuracy of the model is 68%, and the system generates image search results based on the similarity of image characteristics.
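The retrieval step described in this abstract can be sketched directly: rank database images by the cosine similarity between feature vectors. The feature vectors below are made-up placeholders — in the paper they come from the CNN with global average pooling — and the database keys are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical per-image feature vectors (stand-ins for CNN features).
database = {
    "cat_01": [0.9, 0.1, 0.0],
    "dog_07": [0.1, 0.8, 0.2],
    "ship_3": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # feature of the input image

best = max(database, key=lambda k: cosine(query, database[k]))
print(best)  # nearest image by cosine similarity
```

Because cosine similarity ignores vector magnitude, it compares the direction of the feature vectors, which is a common choice when features come from pooled CNN activations.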

12 pages, 533 KiB  
Article
CoRL: Collaborative Reinforcement Learning-Based MAC Protocol for IoT Networks
by Taegyeom Lee, Ohyun Jo and Kyungseop Shin
Electronics 2020, 9(1), 143; https://doi.org/10.3390/electronics9010143 - 11 Jan 2020
Cited by 18 | Viewed by 3519
Abstract
Devices used in Internet of Things (IoT) networks continuously perform sensing, gathering, modifying, and forwarding of data. Since IoT networks have many participants, mitigating and reducing collisions among them is an essential requirement for Medium Access Control (MAC) protocols to increase system performance. A collision occurs in a wireless channel when two or more nodes try to access the channel at the same time. In this paper, a reinforcement learning-based MAC protocol is proposed to provide high throughput and alleviate the collision problem. A collaboratively predicted Q-value is proposed, with which nodes update their value functions using the communication-trial information of other nodes. Intensive system-level simulations confirmed that the proposed protocol can reduce convergence time by 34.1% compared to the conventional Q-learning-based MAC protocol.
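One plausible reading of the "collaboratively predicted Q-value" in this abstract is that each node blends its own estimate with estimates reported by peers before performing its Q update. The sketch below is only that interpretation — the blending rule, weights, and all names are hypothetical, not the paper's actual formulation.

```python
def collaborative_q(own_q, peer_qs, weight=0.5):
    # Blend a node's own Q estimate with the average of its peers' estimates;
    # `weight` trades off own experience vs. shared information (assumed rule).
    peer_avg = sum(peer_qs) / len(peer_qs)
    return weight * own_q + (1 - weight) * peer_avg

def q_update(q, reward, collab_q, alpha=0.1):
    # Standard Q-learning step, bootstrapping on the collaborative value.
    return q + alpha * (reward + collab_q - q)

own_q = 0.2
peer_qs = [0.6, 0.4, 0.5]  # hypothetical estimates shared by other nodes

cq = collaborative_q(own_q, peer_qs)
new_q = q_update(own_q, reward=1.0, collab_q=cq)
print(round(cq, 3), round(new_q, 3))
```

Sharing trial information this way would let a node benefit from channel-access outcomes it never observed itself, which is one intuitive route to the faster convergence the abstract reports.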

12 pages, 500 KiB  
Article
A Deep Learning Based Transmission Algorithm for Mobile Device-to-Device Networks
by Tae-Won Ban and Woongsup Lee
Electronics 2019, 8(11), 1361; https://doi.org/10.3390/electronics8111361 - 16 Nov 2019
Cited by 10 | Viewed by 3369
Abstract
Recently, device-to-device (D2D) communications have been attracting substantial attention because they can greatly improve coverage, spectral efficiency, and energy efficiency compared to conventional cellular communications. They are also indispensable for the mobile caching network, which is an emerging technology for next-generation mobile networks. We investigate a cellular overlay D2D network where a dedicated radio resource is allocated for D2D communications to remove cross-interference with cellular communications, and all D2D devices share the dedicated radio resource to improve spectral efficiency. More specifically, we study the problem of radio resource management for D2D networks, one of the most challenging problems in this area, and we propose a new transmission algorithm for D2D networks based on deep learning with a convolutional neural network (CNN). The CNN is formulated to yield a binary vector indicating whether each D2D pair is allowed to transmit data. In order to train and verify the CNN, we obtain data samples from a suboptimal algorithm. Our numerical results show that the accuracy of the proposed deep learning-based transmission algorithm reaches about 85–95%, despite its structure being kept simple due to limited computing power.
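The output format this abstract describes — a binary vector saying which D2D pairs may transmit — amounts to thresholding per-pair scores. The sketch below shows only that final decision step; the scores stand in for the CNN's outputs, and the threshold and values are hypothetical.

```python
def transmit_decisions(scores, threshold=0.5):
    # 1 = the D2D pair is allowed to transmit, 0 = it stays silent.
    return [1 if s >= threshold else 0 for s in scores]

# Hypothetical per-pair scores, standing in for the CNN's outputs.
scores = [0.91, 0.13, 0.67, 0.42]
decisions = transmit_decisions(scores)
print(decisions)
```

In the paper's setup, the CNN learns these decisions from samples produced by a suboptimal algorithm, so the thresholded vector approximates that algorithm's scheduling at a fraction of the run-time cost.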
