Feature Papers on Artificial Intelligence Algorithms and Their Applications

A topical collection in Algorithms (ISSN 1999-4893). This collection belongs to the section "Evolutionary Algorithms and Machine Learning".


Editors


Prof. Dr. Ulrich Kerzel
Collection Editor
Data Science and Artificial Intelligence at IU International University of Applied Sciences, 53604 Bad Honnef, Germany
Interests: data science; artificial intelligence; AI in materials science; algorithmic economy

Dr. Mostafa Abbaszadeh
Collection Editor
Department of Applied Mathematics, Faculty of Mathematics and Computer Sciences, Amirkabir University of Technology, Tehran 1591634311, Iran
Interests: meshless methods; fractional PDEs; finite element method; computational mechanics; machine learning

Dr. Andres Iglesias
Collection Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: swarm intelligence and swarm robotics; bio-inspired optimization; computer graphics; geometric modelling

Prof. Dr. Akemi Galvez Tomida
Collection Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: artificial intelligence; soft computing for optimization; evolutionary computation; computational intelligence

Topical Collection Information

Dear Colleagues,

The field of artificial intelligence has made tremendous progress in recent years, producing developments of strong interest both for fundamental AI research and for the application of AI methods in academic and industrial settings. AI continues to shape our professional and private lives.

We invite you to submit high-quality feature papers to this Topical Collection entitled “Feature Papers on Artificial Intelligence Algorithms and Their Applications”, covering the whole range of subjects from theory to applications. In particular, we welcome contributions focusing on new algorithms or new methods in AI, as well as applications in industry or academic settings, including natural or social sciences, medicine, and engineering.

Prof. Dr. Ulrich Kerzel
Dr. Mostafa Abbaszadeh
Dr. Andres Iglesias
Prof. Dr. Akemi Galvez Tomida
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • AI algorithms
  • AI methods
  • applications of AI
  • AI in natural sciences
  • AI in social sciences
  • AI in engineering
  • AI in medicine

Published Papers (10 papers)

2024


22 pages, 6490 KiB  
Article
Rotating Machinery Fault Detection Using Support Vector Machine via Feature Ranking
by Harry Hoa Huynh and Cheol-Hong Min
Algorithms 2024, 17(10), 441; https://doi.org/10.3390/a17100441 - 2 Oct 2024
Abstract
Artificial intelligence has succeeded in many different areas in recent years, and the use of machine learning algorithms has become particularly popular across domains, including fault detection. This paper explores a case study of applying machine learning techniques and neural networks to detect ten different machinery fault conditions using publicly available data sets collected from a tachometer, two accelerometers, and a microphone. Fifty-eight features were extracted from the time and frequency domains by applying the Short-Time Fourier Transform to the data with a window size of 1000 samples and 50% overlap. Support Vector Machine models classified the ten fault conditions with 99.8% accuracy using all fifty-eight features. The study then explores dimensionality reduction of the extracted features: the fifty-eight features were ranked using a Decision Tree model to identify the most essential features as classifier predictors. Based on this feature ranking, eleven predictors were selected, reducing training complexity while achieving a classification accuracy of 99.7% in less than half of the training time.
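
For readers who want a concrete picture of the pipeline the abstract describes, here is a minimal sketch: STFT-based feature extraction, decision-tree feature ranking, and an SVM classifier. The feature set, data shapes, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import stft
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

def extract_features(signal, fs, window=1000):
    # STFT with a 1000-sample window and 50% overlap, as in the abstract;
    # the paper extracts 58 features, only a handful are sketched here.
    f, _, Z = stft(signal, fs=fs, nperseg=window, noverlap=window // 2)
    mag = np.abs(Z)
    return np.array([
        mag.mean(), mag.std(), mag.max(),
        f[mag.mean(axis=1).argmax()],          # dominant frequency
        signal.mean(), signal.std(), np.abs(signal).max(),
    ])

# X would be built by applying extract_features to each recording;
# random placeholders stand in for real sensor data here.
X = np.random.randn(200, 7)
y = np.random.randint(0, 10, 200)              # ten fault conditions

# Rank features with a decision tree and keep the top-ranked predictors.
ranker = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:5]
clf = SVC(kernel="rbf").fit(X[:, top], y)
```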

12 pages, 3623 KiB  
Article
Trajectory Classification and Recognition of Planar Mechanisms Based on ResNet18 Network
by Jianping Wang, Youchao Wang, Boyan Chen, Xiaoyue Jia and Dexi Pu
Algorithms 2024, 17(8), 324; https://doi.org/10.3390/a17080324 - 25 Jul 2024
Abstract
This study utilizes the ResNet18 network to classify and recognize trajectories of planar mechanisms. The research begins by deriving formulas for trajectory points in various typical planar mechanisms, and the resulting trajectory images are employed as samples for training and testing the network. The classification of trajectory images for both upright and inverted configurations of a planar four-bar linkage is investigated. Compared with AlexNet and VGG16, the ResNet18 model demonstrates superior classification accuracy during testing, coupled with reduced training time and memory consumption. Furthermore, the ResNet18 model is applied to classify trajectory images for six different planar mechanisms in both upright and inverted configurations, as well as to identify whether a trajectory image belongs to the upright or inverted configuration of each mechanism. The test results affirm the feasibility and effectiveness of the ResNet18 network in the classification and recognition of planar mechanism trajectories.
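
As a rough illustration of the transfer-learning setup such a study implies, the following sketch fine-tunes a torchvision ResNet18 on trajectory images; the class count and training details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # e.g., 6 mechanisms x {upright, inverted}; an assumption

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # One supervised step on a batch of rendered trajectory images.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```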

19 pages, 12648 KiB  
Article
A Reliability Quantification Method for Deep Reinforcement Learning-Based Control
by Hitoshi Yoshioka and Hirotada Hashimoto
Algorithms 2024, 17(7), 314; https://doi.org/10.3390/a17070314 - 18 Jul 2024
Abstract
Reliability quantification of deep reinforcement learning (DRL)-based control is a significant challenge for the practical application of artificial intelligence (AI) in safety-critical systems. This study proposes a method for quantifying the reliability of DRL-based control. First, an existing method, random network distillation, was applied to the reliability evaluation to clarify the issues to be solved. Second, a novel method for reliability quantification was proposed to solve these issues. The reliability is quantified using two neural networks: a reference and an evaluator. They have the same structure and the same initial parameters, so the outputs of the two networks are identical before training. During training, the evaluator network parameters are updated to maximize the difference between the reference and evaluator outputs for trained data. Thus, the reliability of the DRL-based control for a given state can be evaluated from the difference in output between the two networks. The proposed method was first applied to DRL-based control of a simple task, and its effectiveness was demonstrated. Finally, it was applied to the problem of switching trained models depending on the state; switching the trained models according to their reliability improved the performance of the DRL-based control.
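
The two-network idea lends itself to a compact sketch. Below, a frozen reference network and an evaluator start from identical weights, the evaluator is pushed away from the reference on trained states, and the output gap is read as a reliability score; the architecture and hyperparameters are assumptions.

```python
import copy
import torch
import torch.nn as nn

reference = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16))
evaluator = copy.deepcopy(reference)     # same structure, same initial params
for p in reference.parameters():
    p.requires_grad_(False)              # the reference stays fixed

opt = torch.optim.Adam(evaluator.parameters(), lr=1e-3)

def update(states):
    # Maximize the output difference on trained data (gradient ascent,
    # implemented by minimizing the negated gap).
    gap = ((evaluator(states) - reference(states)) ** 2).mean()
    opt.zero_grad()
    (-gap).backward()
    opt.step()

def reliability(state):
    # Large gap -> the state resembles trained data -> higher reliability.
    with torch.no_grad():
        return ((evaluator(state) - reference(state)) ** 2).mean().item()
```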

21 pages, 20486 KiB  
Article
AFE-YOLOv8: A Novel Object Detection Model for Unmanned Aerial Vehicle Scenes with Adaptive Feature Enhancement
by Shijie Wang, Zekun Zhang, Qingqing Chao and Teng Yu
Algorithms 2024, 17(7), 276; https://doi.org/10.3390/a17070276 - 24 Jun 2024
Abstract
Object detection in unmanned aerial vehicle (UAV) scenes is challenging due to the varying scales and complexities of targets. To address this, we propose a novel object detection model, AFE-YOLOv8, which integrates three innovative modules: the Multi-scale Nonlinear Fusion Module (MNFM), the Adaptive Feature Enhancement Module (AFEM), and the Receptive Field Expansion Module (RFEM). The MNFM introduces nonlinear mapping by exploiting the ability of deformable convolution to dynamically adjust the shape of the convolution kernel to the shape of the target, and it effectively enhances the feature extraction capability of the backbone network by integrating multi-scale feature maps from different mapping branches. The AFEM introduces an adaptive fusion factor, through which small-target features from the feature maps of the other detection branches are adaptively integrated into the small-target detection branch, strengthening the expression of small-target features in that branch. The RFEM expands the receptive field of the feature maps of the large- and medium-scale detection branches through stacked convolutions, so that the model's receptive field covers the whole target and richer, more comprehensive target features are learned. Experimental results demonstrate the superior performance of the proposed model compared to the baseline in detecting objects of various scales. On the VisDrone dataset, the proposed model achieves a 4.5% improvement in mean average precision (mAP) and a 5.45% improvement in average precision at an IoU threshold of 0.5 (AP50). Additionally, ablation experiments conducted on the challenging DOTA dataset showcase the model's robustness and generalization capabilities.
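
Of the three modules, the RFEM is the easiest to convey in code: stacked 3x3 convolutions enlarge the receptive field without changing spatial resolution. The sketch below is an interpretation under assumed channel counts and depth, not the authors' implementation.

```python
import torch.nn as nn

class ReceptiveFieldExpansion(nn.Module):
    """Stacked 3x3 convolutions that grow the receptive field layer by layer."""
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.SiLU(),
            )
            for _ in range(depth)
        ])

    def forward(self, x):
        return x + self.blocks(x)  # residual sum keeps gradients well-behaved
```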

19 pages, 1005 KiB  
Article
Evaluating Diffusion Models for the Automation of Ultrasonic Nondestructive Evaluation Data Analysis
by Nick Torenvliet and John Zelek
Algorithms 2024, 17(4), 167; https://doi.org/10.3390/a17040167 - 21 Apr 2024
Abstract
We develop decision support and automation for the task of ultrasonic non-destructive evaluation data analysis. First, we develop a probabilistic model for the task and then implement the model as a series of neural networks based on Conditional Score-Based Diffusion and Denoising Diffusion Probabilistic Model architectures. We use the neural networks to generate estimates of the time of flight of the peak amplitude response and perform a series of tests probing their behavior, capacity, and characteristics in terms of the probabilistic model. We train the neural networks on a series of datasets constructed from ultrasonic non-destructive evaluation data acquired during an inspection at a nuclear power generation facility. We modulate the partition classifying nominal and anomalous data in the dataset and observe that the probabilistic model predicts trends in neural network model performance, thereby demonstrating a principled basis for explainability. We improve on previous related work: our methods are self-supervised, require no data annotation or pre-processing, and are trained on a per-dataset basis, meaning we do not rely on out-of-distribution generalization. The capacity of the probabilistic model to predict trends in neural network performance, together with the quality of the estimates sampled from the neural networks, supports the development of a technical justification for using the method in safety-critical contexts such as nuclear applications. The method may provide a basis or template for extension into similar non-destructive evaluation tasks in other industrial contexts.
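
For orientation, the denoising-diffusion objective at the heart of the evaluated models can be written in a few lines: corrupt a clean sample at a random step of the noise schedule and train the network to predict the injected noise. The linear schedule and the model(x_t, t) signature are generic assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    # Sample a diffusion step and Gaussian noise per example.
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise   # forward diffusion
    return F.mse_loss(model(x_t, t), noise)            # predict the noise
```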

19 pages, 481 KiB  
Article
Program Code Generation with Generative AIs
by Baskhad Idrisov and Tim Schlippe
Algorithms 2024, 17(2), 62; https://doi.org/10.3390/a17020062 - 31 Jan 2024
Abstract
Our paper compares the correctness, efficiency, and maintainability of human-generated and AI-generated program code. For that, we analyzed the computational resources of AI- and human-generated program code using metrics such as time and space complexity as well as runtime and memory usage. Additionally, we evaluated maintainability using metrics such as lines of code, cyclomatic complexity, Halstead complexity, and maintainability index. For our experiments, we had generative AIs produce program code in Java, Python, and C++ that solves problems defined on the competition coding website leetcode.com. We selected six LeetCode problems of varying difficulty, resulting in 18 programs generated by each generative AI. GitHub Copilot, powered by Codex (GPT-3.0), performed best, solving 9 of the 18 problems (50.0%), whereas CodeWhisperer did not solve a single problem. BingAI Chat (GPT-4.0) generated correct program code for seven problems (38.9%), ChatGPT (GPT-3.5) and Code Llama (Llama 2) for four problems (22.2%), and StarCoder and InstructCodeT5+ for only one problem (5.6%). Surprisingly, although ChatGPT generated only four correct programs, it was the only generative AI capable of providing a correct solution to a coding problem of difficulty level hard. In summary, 26 AI-generated programs (20.6%) solved the respective problem. For 11 incorrect AI-generated programs (8.7%), only minimal modifications are necessary to solve the problem, which results in time savings of between 8.9% and 71.3% compared to writing the program code from scratch.
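
The maintainability metrics named above are straightforward to reproduce for Python code; a minimal sketch using the radon library follows. Note that radon analyzes Python only, so the study's Java and C++ solutions would need other tools, and the sample snippet is hypothetical.

```python
from radon.raw import analyze
from radon.complexity import cc_visit
from radon.metrics import mi_visit

code = '''
def two_sum(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
'''

print("Lines of code:", analyze(code).loc)
print("Cyclomatic complexity:", [(b.name, b.complexity) for b in cc_visit(code)])
print("Maintainability index:", mi_visit(code, multi=True))
```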

2023


14 pages, 303 KiB  
Article
On Enhancement of Text Classification and Analysis of Text Emotions Using Graph Machine Learning and Ensemble Learning Methods on Non-English Datasets
by Fatemeh Gholami, Zahed Rahmati, Alireza Mofidi and Mostafa Abbaszadeh
Algorithms 2023, 16(10), 470; https://doi.org/10.3390/a16100470 - 4 Oct 2023
Abstract
In recent years, machine learning approaches, in particular graph learning methods, have achieved great results in the field of natural language processing, especially in text classification tasks. However, many such models have shown limited generalization on datasets in different languages. In this research, we investigate graph machine learning methods on non-English datasets (such as the Persian Digikala dataset, which consists of user opinions) for the task of text classification. More specifically, we investigate different combinations of (Pars)BERT with various graph neural network (GNN) architectures (such as GCN, GAT, and GIN), as well as ensemble learning methods, to tackle the text classification task on certain well-known non-English datasets. Our analysis and results demonstrate how applying GNN models helps achieve good scores on the text classification task by better capturing the topological information between textual data. Additionally, our experiments show how models employing language-specific pre-trained models (like ParsBERT instead of BERT) capture better information about the data, resulting in higher accuracy.
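
A minimal sketch of the BERT-plus-GNN combination, assuming documents are nodes in a graph with precomputed edges: (Pars)BERT provides node features and a two-layer GCN classifies them. The HuggingFace model ID and the graph construction are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
from torch_geometric.nn import GCNConv

MODEL_ID = "HooshvareLab/bert-base-parsbert-uncased"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
bert = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    # One [CLS] embedding per document; these become GCN node features.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**batch).last_hidden_state[:, 0]

class TextGCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, 128)
        self.conv2 = GCNConv(128, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```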

18 pages, 7529 KiB  
Article
End-to-End Approach for Autonomous Driving: A Supervised Learning Method Using Computer Vision Algorithms for Dataset Creation
by Inês A. Ribeiro, Tiago Ribeiro, Gil Lopes and A. Fernando Ribeiro
Algorithms 2023, 16(9), 411; https://doi.org/10.3390/a16090411 - 28 Aug 2023
Abstract
This paper presents a solution for an autonomously driven vehicle (a robotic car) based on artificial intelligence using a supervised learning method. A scaled-down robotic car containing only one camera as a sensor was developed to participate in the RoboCup Portuguese Open Autonomous Driving League competition. This study is based solely on the development of this robotic car, and the results presented are only from this competition. Teams usually solve the competition problem with computer vision algorithms, and no prior research could be found on neural network model-based assistance for vehicle control, although this technique is commonly used in general autonomous driving and the body of research there is growing. Training a neural network requires a large number of labelled images, which are difficult to obtain. To address this problem, a graphical simulator containing the track and the robot/car was used to extract images for the dataset. A classical computer vision algorithm developed by the authors processes the image data, extracts relevant information about the environment, and uses it to determine the optimal direction for the vehicle to follow on the track, which is then associated with the respective image grab. Several training runs were carried out with the created dataset to reach the final neural network model; tests were performed within the simulator, and the effectiveness of the proposed approach was additionally demonstrated through experimental results on two real robotic cars, which performed better than expected. The system proved very successful at steering the robotic car on a road-like track, and the agent's performance increased with the use of supervised learning methods. With computer vision algorithms alone, the system performed an average of 23 complete laps around the track before going off-track, whereas with assistance from the neural network model the system never went off the track.
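
The dataset-creation step can be pictured with a toy labelling routine: a classical vision pass estimates the track direction for each simulator frame, and the (image, direction) pair becomes a supervised example. The thresholding heuristic below is a simplified assumption, not the authors' algorithm.

```python
import cv2
import numpy as np

def label_frame(frame_bgr):
    # Segment bright track markings, then use the horizontal offset of their
    # centroid from the image center as a steering label in [-1, 1].
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    cols = np.where(mask > 0)[1]
    if cols.size == 0:
        return 0.0                       # no markings found: go straight
    center = frame_bgr.shape[1] / 2
    return float(np.clip((cols.mean() - center) / center, -1.0, 1.0))
```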

17 pages, 5504 KiB  
Article
Model Retraining: Predicting the Likelihood of Financial Inclusion in Kiva’s Peer-to-Peer Lending to Promote Social Impact
by Tasha Austin and Bharat S. Rawal
Algorithms 2023, 16(8), 363; https://doi.org/10.3390/a16080363 - 28 Jul 2023
Abstract
The purpose of this study is to show how machine learning can be leveraged as a tool to govern social impact and drive fair and equitable investments. Many organizations today are establishing financial inclusion goals to promote social impact and have been increasing their investments in this space. Financial inclusion is the opportunity for individuals and businesses to access affordable financial products, including loans, credit, and insurance, that they might not otherwise obtain from traditional financial institutions. Peer-to-peer (P2P) lending serves as a platform that can support and foster financial inclusion and influence social impact, and it is becoming more popular today as a resource for underserved communities. Loans issued through P2P lending can fund projects and initiatives focused on climate change, workforce diversity, women's rights, equity, labor practices, natural resource management, accounting standards, carbon emissions, and several other areas. With this in mind, AI can be a powerful governance tool to help manage risks and promote opportunities toward an organization's financial inclusion goals. In this paper, we explore how AI, specifically machine learning, can help manage the investment risks of the P2P platform Kiva and deliver impact, emphasizing the importance of retraining prediction models to account for regulatory and other changes across the P2P landscape and to drive better decision-making. As part of this research, we also explore how changes in important model variables affect aggregate model predictions.
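
The retraining theme can be sketched as a simple loop: refit the likelihood model on fresh loan data and compare feature importances across versions to surface shifts in the lending landscape. Column names, the target, and the model family are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["loan_amount", "sector_code", "country_risk", "lender_count"]

def retrain(df: pd.DataFrame, prev_model=None):
    # Features are assumed to be numerically encoded already.
    X, y = df[FEATURES], df["funded"]
    model = GradientBoostingClassifier().fit(X, y)
    if prev_model is not None:
        shift = {
            f: float(model.feature_importances_[i]
                     - prev_model.feature_importances_[i])
            for i, f in enumerate(FEATURES)
        }
        print("Importance shift since last version:", shift)
    return model
```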

36 pages, 6469 KiB  
Article
Physics-Informed Deep Learning for Traffic State Estimation: A Survey and the Outlook
by Xuan Di, Rongye Shi, Zhaobin Mo and Yongjie Fu
Algorithms 2023, 16(6), 305; https://doi.org/10.3390/a16060305 - 17 Jun 2023
Abstract
For its robust predictive power (compared to pure physics-based models) and sample-efficient training (compared to pure deep learning models), physics-informed deep learning (PIDL), a paradigm hybridizing physics-based models and deep neural networks (DNNs), has been booming in science and engineering fields. One key challenge of applying PIDL to various domains and problems lies in the design of a computational graph that integrates physics and DNNs; in other words, how the physics is encoded into DNNs and how the physics and data components are represented. In this paper, we offer an overview of a variety of architecture designs of PIDL computational graphs and of how these structures are customized to traffic state estimation (TSE), a central problem in transportation engineering. As observation data, problem type, and goal vary, we demonstrate potential architectures of PIDL computational graphs and compare these variants using the same real-world dataset.
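
The core of any PIDL computational graph is a loss that couples data fit with a PDE residual. A sketch for TSE follows, using the LWR conservation law with a Greenshields flux as the physics component; the network signature and constants are assumptions, not a design from the survey.

```python
import torch

def pidl_loss(net, x_obs, t_obs, rho_obs, x_col, t_col,
              v_free=30.0, rho_max=1.0):
    # Data term: match observed traffic densities.
    data = ((net(x_obs, t_obs) - rho_obs) ** 2).mean()

    # Physics term: LWR residual rho_t + (rho * v(rho))_x = 0 at collocation
    # points, with Greenshields speed v(rho) = v_free * (1 - rho / rho_max).
    x = x_col.requires_grad_(True)
    t = t_col.requires_grad_(True)
    rho = net(x, t)
    flux = rho * v_free * (1.0 - rho / rho_max)
    rho_t = torch.autograd.grad(rho.sum(), t, create_graph=True)[0]
    flux_x = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    physics = ((rho_t + flux_x) ** 2).mean()
    return data + physics
```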
