Deep Learning and AI in Communication and Information Technologies

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (31 May 2024) | Viewed by 6390

Special Issue Editors


Guest Editor
TK Engineering, 1712 Sofia, Bulgaria
Interests: image processing; image compression and watermarking; CNCs; programmable controllers
Special Issues, Collections and Topics in MDPI journals

Guest Editor
Department of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou 412007, China
Interests: smart grid; electric engineering; smart energy
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

AI and deep learning have recently transformed a wide range of industries: they already control complex systems, support medical decision making, and are reshaping present-day communications. The objective of this Special Issue is to select and present contemporary intelligent achievements in the area, spanning multidisciplinary fields: digital twin and mipmap technologies; object analysis, classification, and recognition; deep learning in education and the arts; sentiment analysis; restoration of ancient texts; intelligent design of various products; creation of semantic segmentation and interpretation models for multidimensional data; landscape design; augmented and virtual reality; and more. The presented analyses and research results, grounded in communication and information technologies, will outline the future of communications and serve as a basis for numerous future applications in the area. The creation of efficient solutions will permit their real-time implementation.

Dr. Roumiana Kountcheva
Prof. Dr. Shengqing Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • electrical engineering
  • information technologies
  • control engineering
  • electrotechnologies
  • AI applications
  • electric vehicle technologies
  • signal and communication processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

22 pages, 1336 KiB  
Article
GLIDE: Multi-Agent Deep Reinforcement Learning for Coordinated UAV Control in Dynamic Military Environments
by Divija Swetha Gadiraju, Prasenjit Karmakar, Vijay K. Shah and Vaneet Aggarwal
Information 2024, 15(8), 477; https://doi.org/10.3390/info15080477 - 11 Aug 2024
Viewed by 1767
Abstract
Unmanned aerial vehicles (UAVs) are widely used for missions in dynamic environments. Deep Reinforcement Learning (DRL) can find effective strategies for multiple agents that need to cooperate to complete a task. In this article, the challenge of controlling the movement of a fleet of UAVs is addressed by Multi-Agent Deep Reinforcement Learning (MARL). We study how the collaborative movement of the UAV fleet can be controlled both centrally and in a decentralized fashion. We consider a dynamic military environment with a fleet of UAVs whose task is to destroy enemy targets while avoiding obstacles such as mines. Because the UAVs inherently have a limited battery capacity, our research focuses on minimizing task completion time. We propose a continuous-time-based Proximal Policy Optimization (PPO) algorithm for multi-aGent Learning In Dynamic Environments (GLIDE). In GLIDE, the UAVs coordinate among themselves and communicate with the central base to choose the best possible action. Action control in GLIDE can be performed in either a centralized or a decentralized way, and two algorithms, Centralized-GLIDE (C-GLIDE) and Decentralized-GLIDE (D-GLIDE), are proposed on this basis. We developed a simulator called UAV SIM, in which mines are placed at randomly generated 2D locations unknown to the UAVs at the beginning of each episode. The performance of both proposed schemes is evaluated through extensive simulations. Both C-GLIDE and D-GLIDE converge and have comparable target destruction rates for the same number of targets and mines. We observe that D-GLIDE completes the task up to 68% faster than C-GLIDE and keeps more UAVs alive at the end of the task.
(This article belongs to the Special Issue Deep Learning and AI in Communication and Information Technologies)
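The PPO family of methods that GLIDE builds on optimizes a clipped surrogate objective, which each agent can apply to its own policy updates. The sketch below is a generic NumPy illustration of that objective; the function name, the clipping parameter `eps`, and the batch formulation are assumptions for illustration, not code from the paper.

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Clipped PPO surrogate loss (to be minimized).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: estimated advantages.
    """
    ratio = np.exp(new_logp - old_logp)          # importance-sampling ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the elementwise minimum makes the update pessimistic, which
    # discourages policy steps that move the ratio far outside [1-eps, 1+eps].
    return -np.mean(np.minimum(unclipped, clipped))
```

In a centralized variant the loss would be computed over the joint experience of the fleet, while a decentralized variant would let each UAV optimize it over its own trajectories.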

24 pages, 5844 KiB  
Article
Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis
by Leon Tabaro, Jean Marie Vianney Kinani, Alberto Jorge Rosales-Silva, Julio César Salgado-Ramírez, Dante Mújica-Vargas, Ponciano Jorge Escamilla-Ambrosio and Eduardo Ramos-Díaz
Information 2024, 15(8), 473; https://doi.org/10.3390/info15080473 - 9 Aug 2024
Viewed by 1644
Abstract
In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. While algorithmic trading is focused on using computer algorithms to automate a predefined trading strategy, in this work, we train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. In this study, we extended our approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in the cumulative reward over the testing period and an increase in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent’s trading strategy was able to consistently outperform the benchmark set by the buy-and-hold strategy. Additionally, we further investigated the impact of the length of the window of past market data that the agent considers when deciding on the best trading action to take. The results of this study have validated DRL’s ability to find effective solutions and its importance in studying the behaviour of agents in markets. This work serves to provide future researchers with a foundation to develop more advanced and adaptive DRL-based trading systems.
(This article belongs to the Special Issue Deep Learning and AI in Communication and Information Technologies)
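The Double DQN update underlying such an agent decouples action selection from action evaluation: the online network picks the greedy next action and the target network scores it, which reduces the overestimation bias of vanilla Q-learning. Below is a minimal NumPy sketch of the standard DDQN target computation; the function name, array layout, and `gamma` value are illustrative assumptions, not the authors' code (and the sentiment-augmented state construction is not shown).

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN bootstrap targets for a batch of transitions.

    next_q_online / next_q_target: (batch, n_actions) Q-value arrays for the
    next states from the online and target networks; dones: 1.0 if terminal.
    """
    # Online network selects the action, target network evaluates it.
    best_actions = np.argmax(next_q_online, axis=1)
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```

The online network is then regressed toward these targets on the Q-values of the actions actually taken.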

13 pages, 2488 KiB  
Article
Design and Selection of Inductor Current Feedback for the Sliding-Mode Controlled Hybrid Boost Converter
by Satyajit Chincholkar, Mohd Tariq, Maha Abdelhaq and Raed Alsaqour
Information 2023, 14(8), 443; https://doi.org/10.3390/info14080443 - 7 Aug 2023
Cited by 1 | Viewed by 1662
Abstract
The hybrid step-up converter is a fifth-order system with a dc gain greater than that of the traditional second-order step-up configuration. Given its high order, several state variables are available for feedback in the control of such systems. Choosing the right state variables is therefore essential, since they influence the system’s dynamic response and stability. This work proposes a systematic method to identify the appropriate state variables for implementing a sliding-mode (SM) controlled hybrid boost converter. A thorough comparison of two SM controllers based on different feedback currents is conducted. The frequency response technique is used to demonstrate that an SM controller employing the current through the output inductor leads to an unstable response. The instability is caused by right-half s-plane poles and zeros in the converter’s inner-loop transfer function that exactly cancel one another. In contrast, a stable system can be obtained by employing an SM controller with the current through the input inductor. Lastly, some experimental results using the preferred SM control method are provided.
(This article belongs to the Special Issue Deep Learning and AI in Communication and Information Technologies)
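A sliding-mode controller of the kind compared in this paper drives the converter's switch from the sign of a sliding surface built from the chosen feedback states. The sketch below shows such a switching decision using the input-inductor current, the feedback the paper finds stable; the linear surface, the gains `k1`/`k2`, and all names are illustrative assumptions, not the authors' exact design.

```python
def sm_switch(i_in, v_out, i_ref, v_ref, k1=1.0, k2=0.5):
    """Illustrative sliding-mode switching law for a boost-type converter.

    i_in: input-inductor current, v_out: output voltage;
    i_ref / v_ref: their references; k1, k2: surface gains.
    Returns 1 (switch ON) or 0 (switch OFF).
    """
    # Linear sliding surface over the selected feedback states.
    s = k1 * (i_in - i_ref) + k2 * (v_out - v_ref)
    # Below the surface (current/voltage deficit) -> turn the switch ON
    # to store energy in the inductor; above it -> turn the switch OFF.
    return 1 if s < 0 else 0
```

In practice a hysteresis band around the surface would bound the switching frequency; the point of the paper's analysis is that the same structure with the output-inductor current yields an unstable inner loop.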
