AI Applications in IoT and Mobile Wireless Networks

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2020) | Viewed by 26010

Special Issue Editors


Prof. Dr. Jae-Hoon Kim
Guest Editor
1. Dept. of AI Convergence Network, Ajou University, Suwon, Korea
2. Dept. of Industrial Engineering, Ajou University, Suwon 16499, Korea
Interests: AI network applications; edge computing architecture; Internet of Things; blockchain security

Prof. Dr. Myung-Ki Shin
Guest Editor
Standards & Open Source Research Division, Intelligent Convergence Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
Interests: 5G networks; network automation; machine learning applications; SDN; NFV

Special Issue Information

Dear Colleagues,

The expansion of machine-generated data has become a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has significantly increased the use of machine-generated data. Gartner estimated that the number of connected things would grow to over 20 billion by 2020. Thanks to recent advances in network and chip technology, IoT devices are becoming smarter, with greater computing power, bandwidth, and storage. Together, this data expansion and these smarter devices make artificial intelligence highly available across large networks, and edge networking for IoT and mobile devices accelerates the adoption of intelligent applications. This Special Issue focuses on the analysis, design, and implementation of AI-powered IoT solutions over mobile networks.

Topics of interest include but are not limited to the following:

  • On-device machine learning;
  • Federated learning;
  • Decentralized applications;
  • AI for 5G mobile networks;
  • Real-time computing for cognition;
  • AI for edge computing;
  • Low-power AI for IoT systems;
  • Distributed inference and learning;
  • AI-powered security;
  • Blockchain for IoT and mobile devices.
Prof. Dr. Jae-Hoon Kim
Prof. Dr. Myung-Ki Shin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Internet of Things
  • Mobile network
  • Edge computing
  • 5G
  • Decentralized application

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

15 pages, 3433 KiB  
Article
Two-Stage Hybrid Network Clustering Using Multi-Agent Reinforcement Learning
by Joohyun Kim, Dongkwan Ryu, Juyeon Kim and Jae-Hoon Kim
Electronics 2021, 10(3), 232; https://doi.org/10.3390/electronics10030232 - 20 Jan 2021
Cited by 6 | Viewed by 2588
Abstract
In Internet-of-Things (IoT) environments, publish (pub)/subscribe (sub) communication is widely employed; as a lightweight communication protocol, pub/sub operation facilitates communication among IoT devices. The protocol consists of network nodes functioning as publishers, subscribers, and brokers, wherein brokers transfer messages from publishers to subscribers. Thus, the communication capability of the brokers is a critical factor in overall communication performance. In this study, multi-agent reinforcement learning (MARL) is applied to find the best combination of broker nodes: MARL explores various combinations of broker nodes and selects the best one. However, MARL becomes inefficient when the number of broker nodes is excessive, so Delaunay triangulation selects candidate broker nodes from the pool of broker nodes as a preprocessing step for the MARL. The suggested Delaunay triangulation is further improved by a custom deletion method. Consequently, the two-stage hybrid approach outperforms methods employing single-agent reinforcement learning (SARL): the MARL eliminates the performance fluctuation of the SARL caused by the iterative selection of broker nodes, requires fewer candidate broker nodes, and converges faster.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
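As a rough illustration of the preprocessing stage described above, the sketch below uses Delaunay triangulation to preselect candidate broker nodes before the MARL stage. It is a minimal sketch, not the authors' code: the degree-based selection heuristic, the keep_ratio parameter, and the 2-D node layout are illustrative assumptions, and the paper's custom deletion method is not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def candidate_brokers(node_positions, keep_ratio=0.3):
    """Preselect candidate broker nodes from 2-D node coordinates."""
    tri = Delaunay(node_positions)              # triangulate the node layout
    # Count how many triangles each node belongs to; well-connected nodes
    # in the triangulation are plausible broker candidates (assumption).
    counts = np.bincount(tri.simplices.ravel(),
                         minlength=len(node_positions))
    n_keep = max(1, int(keep_ratio * len(node_positions)))
    return np.argsort(counts)[::-1][:n_keep]    # indices of top candidates

nodes = np.random.rand(50, 2)                   # 50 nodes in a unit square
print(candidate_brokers(nodes))                 # feed these to the MARL stage
```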

16 pages, 2798 KiB  
Article
Autonomous Operation Control of IoT Blockchain Networks
by Jae-Hoon Kim, Seungchul Lee and Sengphil Hong
Electronics 2021, 10(2), 204; https://doi.org/10.3390/electronics10020204 - 17 Jan 2021
Cited by 10 | Viewed by 3254
Abstract
Internet of Things (IoT) networks are typically composed of many sensors and actuators. The operation controls for robots in smart factories or drones produce a massive volume of data that requires high reliability. A blockchain architecture can be used to build highly reliable IoT networks. The shared ledger and open data validation among users guarantee extremely high data security. However, current blockchain technology has limitations that prevent its application across entire IoT networks. Because general permissionless blockchain networks typically target high-performance network nodes with sufficient computing power, a blockchain node with low computing power and memory, such as an IoT sensor/actuator, cannot operate in a blockchain as a fully functional node. A lightweight blockchain provides practical blockchain availability over IoT networks. We propose essential operational advances to develop a lightweight blockchain over IoT networks. A dynamic network configuration enforced by deep clustering provides ad hoc flexibility for IoT network environments. The proposed graph neural network technique enhances the efficiency of dApp (distributed application) spreading across IoT networks. In addition, the proposed blockchain technology is highly implementable in software because it adopts the Hyperledger development environment. Directly embedding the proposed blockchain middleware platform in small computing devices demonstrates the practicality of the proposed methods.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
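For readers unfamiliar with graph neural networks, the sketch below shows a single mean-aggregation message-passing layer over a small IoT topology, the generic building block behind techniques like the dApp-spreading model mentioned above. It is a minimal sketch under stated assumptions: the feature layout, weights, and tiny three-node graph are illustrative, not the authors' model.

```python
import numpy as np

def gnn_layer(adj, feats, weight):
    """One mean-aggregation message-passing layer: h' = ReLU(A_norm h W)."""
    deg = adj.sum(axis=1, keepdims=True)          # node degrees
    adj_norm = adj / np.maximum(deg, 1.0)         # row-normalized adjacency
    return np.maximum(adj_norm @ feats @ weight, 0.0)  # aggregate + ReLU

adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])                 # tiny 3-node IoT graph
feats = np.random.rand(3, 4)                      # per-node features
w = np.random.rand(4, 4)                          # layer weights
print(gnn_layer(adj, feats, w))                   # updated node embeddings
```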

16 pages, 1795 KiB  
Article
Achieving Balanced Load Distribution with Reinforcement Learning-Based Switch Migration in Distributed SDN Controllers
by Sangho Yeo, Ye Naing, Taeha Kim and Sangyoon Oh
Electronics 2021, 10(2), 162; https://doi.org/10.3390/electronics10020162 - 13 Jan 2021
Cited by 24 | Viewed by 4470
Abstract
Distributed controllers in software-defined networking (SDN) have become a promising approach because of their scalable and reliable deployment in current SDN environments. Since network traffic varies with time and space, a static mapping between switches and controllers causes uneven load distribution among controllers. Dynamic switch migration methods can provide a balanced load distribution between SDN controllers. Recent reinforcement learning (RL) methods for dynamic switch migration, such as MARVEL, model the load balancing of each controller as linear optimization. Although widely used for network flow modeling, this type of linear optimization does not fit the real-world workload of SDN controllers well, because correlations between resource types change unexpectedly and continuously. Consequently, using the linear model for resource utilization makes it difficult to distinguish which resource types are currently overloaded, and it also incurs a high time cost. In this paper, we propose a reinforcement learning-based switch and controller selection scheme for switch migration, switch-aware reinforcement learning load balancing (SAR-LB). SAR-LB uses the utilization ratios of various resource types in both controllers and switches as the inputs of the neural network. It also treats switches as RL agents to reduce the action space of learning, while still considering all possible migrations. Our experimental results show that SAR-LB achieved a more even load distribution among SDN controllers because of its accurate switch-migration decisions, improving the normalized standard deviation among distributed SDN controllers by up to 34% over existing schemes.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
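The load-balance metric reported above can be computed as below. This is a minimal sketch that assumes the normalized standard deviation is the standard deviation of per-controller load divided by the mean load; the paper's exact definition may differ.

```python
import numpy as np

def normalized_std(controller_loads):
    """Std. deviation of per-controller load divided by the mean load."""
    loads = np.asarray(controller_loads, dtype=float)
    return loads.std() / loads.mean()   # lower means more balanced

before = [0.9, 0.2, 0.4]   # controller utilization before switch migration
after = [0.5, 0.5, 0.5]    # perfectly even utilization after migration
print(normalized_std(before), normalized_std(after))   # ~0.59 vs. 0.0
```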

18 pages, 1377 KiB  
Article
Internet Traffic Classification with Federated Learning
by Hyunsu Mun and Youngseok Lee
Electronics 2021, 10(1), 27; https://doi.org/10.3390/electronics10010027 - 28 Dec 2020
Cited by 28 | Viewed by 5125
Abstract
As Internet traffic classification is a typical problem for ISPs and mobile carriers, there have been many studies based on statistical packet header information, deep packet inspection, or machine learning. Due to recent advances in end-to-end encryption and dynamic port policies, machine and deep learning have become essential to improving the accuracy of packet classification. In addition, ISPs and mobile carriers must deal carefully with privacy issues while collecting user packets for accounting or security. A recent development in distributed machine learning, called federated learning, collaboratively carries out machine learning jobs on clients without uploading data to a central server. Although federated learning provides an on-device learning framework for user privacy protection, its feasibility and performance for Internet traffic classification have not been fully examined. In this paper, we propose a federated-learning traffic classification protocol (FLIC), which can achieve an accuracy comparable to centralized deep learning for Internet application identification without privacy leakage. FLIC can classify new applications on-the-fly when a participant joins the learning with a new application, which had not been done in previous works. In our prototype of FLIC clients and a server implemented with TensorFlow, the clients gather packets, perform the on-device training job, and exchange the training results with the FLIC server. In addition, we demonstrate that federated learning-based packet classification achieves an accuracy of 88% under non-independent and identically distributed (non-IID) traffic across clients, and an accuracy of 92% when a client with a new application to be classified dynamically joins the learning.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
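The server-side aggregation at the heart of any federated-learning protocol like FLIC is weight averaging across clients. Below is a minimal sketch of the standard FedAvg step, weighted by local dataset size; the function name and the single-layer example are illustrative assumptions, not the FLIC protocol itself.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: average per-client weight lists, weighted by dataset size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [sum(w[i] * (n / total)
                for w, n in zip(client_weights, client_sizes))
            for i in range(n_layers)]

# Two clients, each holding one 2x2 weight matrix after local training:
c1 = [np.ones((2, 2))]     # client 1 trained on 300 samples
c2 = [np.zeros((2, 2))]    # client 2 trained on 100 samples
print(federated_average([c1, c2], client_sizes=[300, 100]))  # all 0.75
```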

24 pages, 8900 KiB  
Article
Improving Classification Accuracy of Hand Gesture Recognition Based on 60 GHz FMCW Radar with Deep Learning Domain Adaptation
by Hyo Ryun Lee, Jihun Park and Young-Joo Suh
Electronics 2020, 9(12), 2140; https://doi.org/10.3390/electronics9122140 - 14 Dec 2020
Cited by 26 | Viewed by 4348
Abstract
With the recent development of small, high-resolution radars, various human–computer interaction (HCI) applications using them have been developed. In particular, methods for applying user hand-gesture recognition with a short-range radar to electronic devices are being actively studied. In general, the time delay and Doppler shift characteristics that occur when a transmitted signal reflects off an object and returns are classified through deep learning to recognize the motion. However, the main obstacle to commercializing radar-based hand gesture recognition is that, even for the same type of hand gesture, recognition accuracy is degraded by slight differences in movement between individual users. To solve this problem, in this paper, domain adaptation is applied to hand gesture recognition to minimize the differences in users' gesture information between the learning and the use stages. To verify the effectiveness of domain adaptation, a domain discriminator that cheats the classifier was applied to a deep learning network with a convolutional neural network (CNN) structure. Data for seven different hand gestures were collected from 10 participants and used for training, and hand gestures from 10 users not included in the training data were input, confirming an average recognition accuracy of 98.8%.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
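A domain discriminator that "cheats the classifier", as described above, is commonly implemented with a gradient reversal layer (the DANN pattern): the discriminator learns to tell domains apart while the reversed gradient pushes the feature extractor to erase user-specific differences. The TensorFlow sketch below shows the reversal trick in isolation; it is a generic pattern, not the authors' network.

```python
import tensorflow as tf

@tf.custom_gradient
def gradient_reversal(x):
    """Identity in the forward pass; negated gradient in the backward pass."""
    def grad(dy):
        return -dy                    # flip gradients flowing to the features
    return tf.identity(x), grad

x = tf.Variable([[1.0, 2.0]])
with tf.GradientTape() as tape:
    y = tf.reduce_sum(gradient_reversal(x) * 3.0)
print(tape.gradient(y, x))            # [[-3. -3.]] instead of [[3. 3.]]
```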

13 pages, 6261 KiB  
Article
Improving Deep Learning-Based UWB LOS/NLOS Identification with Transfer Learning: An Empirical Approach
by JiWoong Park, SungChan Nam, HongBeom Choi, YoungEun Ko and Young-Bae Ko
Electronics 2020, 9(10), 1714; https://doi.org/10.3390/electronics9101714 - 18 Oct 2020
Cited by 46 | Viewed by 4943
Abstract
This paper presents an improved ultra-wideband (UWB) line of sight (LOS)/non-line of sight (NLOS) identification scheme based on a hybrid of deep learning and transfer learning. Previous studies have the limitation that classification accuracy decreases significantly in an unknown place. To solve this problem, we propose a transfer learning-based NLOS identification method for classifying the NLOS conditions of UWB signals in an unmeasured environment. Both a multilayer perceptron and a convolutional neural network (CNN) are introduced as classifiers for NLOS conditions. We evaluate the proposed scheme by conducting experiments in both measured and unmeasured environments. Channel data were measured using a Decawave EVK1000 in two similar indoor office environments. In the unmeasured environment, the existing CNN method showed an accuracy of approximately 44%, but when the proposed scheme was applied to the CNN, it showed an accuracy of up to 98%. The training time of the proposed scheme was approximately 48 times shorter than that of the existing CNN. Compared with training a new CNN in the unmeasured environment, the proposed scheme demonstrated approximately 10% higher accuracy and approximately five times faster training.
(This article belongs to the Special Issue AI Applications in IoT and Mobile Wireless Networks)
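The speedup reported above comes from the usual transfer-learning recipe: reuse a network trained in the measured environment and retrain only its last layers on a little data from the new one. The Keras sketch below illustrates that recipe; the toy 1-D CNN, the 152-sample channel impulse response (CIR) window, and the layer sizes are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf

def build_cnn(input_len=152):
    """Toy 1-D CNN over a UWB channel impulse response (CIR) window."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation="relu",
                               input_shape=(input_len, 1)),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # LOS vs. NLOS
    ])

model = build_cnn()                    # pretrained in the measured environment
for layer in model.layers[:-1]:
    layer.trainable = False            # freeze the feature extractor
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_env_cir, new_env_labels, epochs=5)  # fine-tune only the head
```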
