
Deep Learning Application on Visual Identity, Analysis, Diagnosis and Decision-Making

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Signal and Data Analysis".

Deadline for manuscript submissions: closed (20 February 2022) | Viewed by 8836

Special Issue Editors


Guest Editor
School of Electrical and Electronics Engineering, University of Adelaide, Adelaide, SA 5005, Australia
Interests: biomedical engineering; artificial intelligence; deep learning; computational hemodynamics; image processing

Special Issue Information

Dear Colleagues,

Autonomous intelligent visual identity, analysis, diagnosis, and decision making in complex natural environments is a highly active research field. As manufacturing capacity continues to improve, the degree of automation in production processes keeps growing. Traditional visual identity, analysis, diagnosis, and decision-making algorithms require technicians with extensive engineering and domain knowledge to model visual recognition by hand, and such algorithms typically adapt to only a single scenario or task rather than to many at once. In addition, when constraint conditions are described by fuzzy sets, fuzzy programming can seek extreme values of a fuzzy objective; agents trained with deep learning methods, however, generalize better. Combining fuzzy theory with deep learning to obtain fuzzy deep network models with improved performance is therefore a current development trend. The spread of deep learning applications has driven rapid progress in autonomous intelligent visual identity, analysis, diagnosis, and decision making.

In autonomous intelligent visual identity, analysis, diagnosis, and decision making, entropy measures the uncertainty of a random variable: it is the expected amount of information produced across all possible outcomes. This uncertainty can be reduced through the iterative training of deep learning models. Contributions addressing any of these issues are very welcome.
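As a toy illustration of this idea (not taken from any of the papers below), the snippet computes the Shannon entropy of a hypothetical classifier's predictive distribution early and late in training; all class probabilities are invented for illustration.

```python
# Illustrative only: Shannon entropy of a classifier's predictive distribution.
# Lower entropy means less uncertainty, which iterative training aims to achieve.
import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """H(p) = -sum_i p_i * log2(p_i), ignoring zero-probability classes."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

early_training = np.array([0.25, 0.25, 0.25, 0.25])  # near-uniform: very uncertain
late_training  = np.array([0.90, 0.05, 0.03, 0.02])  # confident prediction

print(shannon_entropy(early_training))  # 2.00 bits (maximum for 4 classes)
print(shannon_entropy(late_training))   # ~0.62 bits
```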

This Special Issue aims to serve as a forum for the presentation of new and improved information-theoretic techniques for autonomous intelligent visual identity, analysis, diagnosis, and decision making. In particular, the analysis and interpretation of the generalization ability and superiority of such systems, with the help of statistical tools based on deep learning applications, fall within the scope of this Special Issue.

Dr. Kelvin Wong
Prof. Dr. Dhanjoo N. Ghista
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Entropy
  • Data analysis
  • Fuzzy programming
  • Visual identity
  • Intelligent analysis
  • Intelligent diagnosis
  • Intelligent decision making
  • Deep neural networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

14 pages, 2577 KiB  
Article
Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment
by Weimin Chen, Kelvin Kian Loong Wong, Sifan Long and Zhili Sun
Entropy 2022, 24(4), 440; https://doi.org/10.3390/e24040440 - 22 Mar 2022
Cited by 6 | Viewed by 2532
Abstract
In the field of reinforcement learning, we propose a Correct Proximal Policy Optimization (CPPO) algorithm based on a modified penalty factor β and relative entropy in order to improve the robustness and stationarity of traditional algorithms. Firstly, this paper establishes a strategy evaluation mechanism through the policy distribution function. Secondly, the state space function is quantified by introducing entropy: the approximation policy is used to approximate the real policy distribution, and kernel function estimation together with the calculation of relative entropy is used to fit the reward function for complex problems. Finally, through comparative analysis on classic test cases, we demonstrate that the proposed algorithm is effective, converges faster, and performs better than the traditional PPO algorithm, and that the relative entropy measure can reveal the differences between policies. The algorithm can also use the information of a complex environment more efficiently to learn policies. At the same time, this paper explains the rationality of the policy distribution theory, the proposed framework balances iteration steps, computational complexity, and convergence speed, and relative entropy is introduced as an effective measure of performance.
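For background, the sketch below shows the generic KL-penalized PPO surrogate with a simple adaptive-β heuristic that algorithms of this kind build on. It is a simplified, assumption-laden illustration, not the CPPO update proposed in the paper, and all numeric values are hypothetical.

```python
# A minimal sketch of a KL-penalized PPO-style surrogate; NOT the authors' CPPO update.
import numpy as np

def kl_penalized_surrogate(ratio, advantage, kl, beta):
    """L(theta) = E[ratio * A] - beta * E[KL(pi_old || pi_new)]."""
    return np.mean(ratio * advantage) - beta * np.mean(kl)

def adapt_beta(beta, mean_kl, kl_target=0.01):
    """Heuristic schedule: raise the penalty when KL overshoots, relax it otherwise."""
    if mean_kl > 1.5 * kl_target:
        return beta * 2.0
    if mean_kl < kl_target / 1.5:
        return beta / 2.0
    return beta

# Hypothetical per-sample quantities from one batch of rollouts.
ratio = np.array([1.05, 0.97, 1.20, 0.88])    # pi_new(a|s) / pi_old(a|s)
advantage = np.array([0.5, -0.2, 1.1, 0.3])   # estimated advantages
kl = np.array([0.002, 0.001, 0.015, 0.004])   # per-sample KL estimates
beta = 1.0

objective = kl_penalized_surrogate(ratio, advantage, kl, beta)  # maximized by the policy update
beta = adapt_beta(beta, kl.mean())
print(objective, beta)
```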

22 pages, 4354 KiB  
Article
An Improved Approach towards Multi-Agent Pursuit–Evasion Game Decision-Making Using Deep Reinforcement Learning
by Kaifang Wan, Dingwei Wu, Yiwei Zhai, Bo Li, Xiaoguang Gao and Zijian Hu
Entropy 2021, 23(11), 1433; https://doi.org/10.3390/e23111433 - 29 Oct 2021
Cited by 28 | Viewed by 5361
Abstract
A pursuit–evasion game is a classical maneuver confrontation problem in the multi-agent systems (MASs) domain. An online decision technique based on deep reinforcement learning (DRL) was developed in this paper to address the problem of environment sensing and decision-making in pursuit–evasion games. A control-oriented framework developed from the DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm was built to implement multi-agent cooperative decision-making and to overcome the limitation of the tedious state variables required by the traditionally complicated modeling process. To address the effects of errors between a model and a real scenario, this paper introduces adversarial disturbances and proposes a novel adversarial attack trick and adversarial learning MADDPG (A2-MADDPG) algorithm. By applying the adversarial attack trick to the agents themselves, uncertainties of the real world are modeled, thereby enabling robust training. During training, adversarial learning was incorporated into the algorithm to preprocess the actions of multiple agents, which enabled them to respond properly to uncertain dynamic changes in MASs. Experimental results verified that the proposed approach provides superior performance and effectiveness for pursuers and evaders, and that both can learn the corresponding confrontational strategy during training.
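To illustrate the general idea of adversarially perturbing agents' actions during centralized training, the sketch below picks, among a few bounded random perturbations, the one that most lowers a stand-in critic's value. This is a hypothetical simplification, not the A2-MADDPG algorithm itself; the perturbation rule and the toy critic are assumptions made for the example.

```python
# Illustrative sketch of adversarially perturbing actions for robust training;
# NOT the A2-MADDPG algorithm. The "worst-of-k random noise" rule is a simplification.
import numpy as np

rng = np.random.default_rng(0)

def toy_critic(state, joint_action):
    """Stand-in critic; in practice a learned Q-network would be used."""
    return -np.sum((joint_action - 0.5 * state) ** 2)

def adversarial_action(state, action, epsilon=0.1, candidates=8):
    """Return the bounded perturbation of `action` that most lowers the critic value."""
    noise = rng.uniform(-epsilon, epsilon, size=(candidates,) + action.shape)
    perturbed = np.clip(action + noise, -1.0, 1.0)
    values = [toy_critic(state, a) for a in perturbed]
    return perturbed[int(np.argmin(values))]  # worst-case action used for robust training

state = rng.normal(size=4)
action = np.tanh(rng.normal(size=4))  # hypothetical joint action from the actors
print(adversarial_action(state, action))
```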
