
Applied Artificial Intelligence (AI)

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 105209

Special Issue Editors


Guest Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: artificial intelligence; soft computing for optimization; evolutionary computation; computational intelligence

Guest Editor
1. Department of Applied Mathematics and Computational Sciences, University of Cantabria, C.P. 39005 Santander, Spain
2. Department of Information Science, Faculty of Sciences, Toho University, 2-2-1 Miyama, Funabashi 274-8510, Japan
Interests: swarm intelligence and swarm robotics; bio-inspired optimisation; computer graphics; geometric modelling

Special Issue Information

Dear Colleagues,

Modern life is immersed in a highly interconnected technological world. Many of the applications designed for this digital ecosystem make use of sophisticated artificial intelligence techniques to solve all kinds of problems, from optimized searching engines to advanced facial recognition features on the web, from shape recognition algorithms for image processing to pattern recognition methods for social networks and economic studies, and from complex behavioral engines for synthetic characters in computer movies and video games to advanced routines for robotics, unmanned autonomous vehicles, natural language processing, business intelligence, etc. Artificial intelligence is poised to change the world in the coming decades, from the way we do business, to domestic applications at home. It has been anticipated that AI’s contribution to the global economy will exceed that of China and India combined. It is also believed that within the next 10 years, almost any successful industry or company will use some kind of AI to ensure their business runs smoothly and efficiently.

This Special Issue aims to disseminate the most recent research results and developments in artificial intelligence, with a special focus on their practical applications to science, engineering, industry, medicine, robotics, manufacturing, entertainment, optimization, business, and other fields. We kindly invite researchers and practitioners to contribute their high-quality original research or review articles on these topics to this Special Issue.

Prof. Dr. Akemi Galvez Tomida
Prof. Dr. Andres Iglesias Prieto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then opening the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Evolutionary computation
  • Nature-inspired metaheuristic techniques
  • Genetic algorithms
  • Swarm intelligence
  • Hybrid methods
  • Swarm robotics
  • Cognitive sciences
  • Neural processing
  • AI-based optimization
  • AI-based medical imaging
  • AI-based image processing
  • AI-based shape/pattern recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (29 papers)


Research


20 pages, 1537 KiB  
Article
Imaginary Speech Recognition Using a Convolutional Network with Long-Short Memory
by Ana-Luiza Rusnac and Ovidiu Grigore
Appl. Sci. 2022, 12(22), 11873; https://doi.org/10.3390/app122211873 - 21 Nov 2022
Cited by 4 | Viewed by 2666
Abstract
In recent years, much research attention has been focused on imaginary speech understanding, decoding, and even recognition. Speech is a complex mechanism involving multiple brain areas in the production process, planning, and precise control of the large number of muscles and articulators involved in the actual utterance. This paper proposes an intelligent imaginary speech recognition system for eleven different utterances, seven phonemes, and four words from the Kara One database. We showed, during our research, that the feature space of the cross-covariance in the frequency domain offers a better perspective on imaginary speech, by computing LDA for a 2D representation of the feature space, in comparison with the cross-covariance in the time domain and the raw signals without any processing. In the classification stage, we used a CNN-LSTM neural network and obtained an accuracy of 43% over all eleven utterances. The developed system was meant to be a subject-shared system. We also showed that, using the channels corresponding to the anatomical structures of the brain involved in speech production, i.e., the Broca area, the primary motor cortex, and the secondary motor cortex, 93% of the information is preserved, obtaining 40% accuracy with 29 electrodes out of the initial 62.
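The frequency-domain cross-covariance feature described above can be sketched as follows. This is only an illustrative reading, not the authors' code; the electrode count and window length are the only details taken from the abstract.

```python
import numpy as np

def freq_cross_covariance(signals):
    """Cross-covariance of channel spectra.

    signals: (n_channels, n_samples) array, one EEG window.
    Returns an (n_channels, n_channels) feature matrix computed from the
    mean-removed magnitude spectra of each channel.
    """
    spectra = np.abs(np.fft.rfft(signals, axis=1))   # magnitude spectrum per channel
    spectra -= spectra.mean(axis=1, keepdims=True)   # remove per-channel mean
    return spectra @ spectra.T / spectra.shape[1]    # covariance across frequency bins

rng = np.random.default_rng(0)
eeg = rng.standard_normal((62, 256))     # 62 electrodes, 256-sample window
features = freq_cross_covariance(eeg)    # 62 x 62 symmetric feature matrix
```

The resulting matrix could then be flattened and projected with LDA, as the abstract describes.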
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))

15 pages, 7748 KiB  
Article
Bibliometric Analysis of the Application of Artificial Intelligence Techniques to the Management of Innovation Projects
by José Manuel Mesa Fernández, Juan José González Moreno, Eliseo P. Vergara-González and Guillermo Alonso Iglesias
Appl. Sci. 2022, 12(22), 11743; https://doi.org/10.3390/app122211743 - 18 Nov 2022
Cited by 7 | Viewed by 3407
Abstract
Due to their specific characteristics, innovation projects are developed in contexts with great volatility, uncertainty, complexity, and even ambiguity. Project management has needed to adopt changes to ensure success in this type of project. Artificial intelligence (AI) techniques are being used in these changing environments to increase productivity. This work collected and analyzed those areas of technological innovation project management, such as risk, cost, and deadline management, in which the application of artificial intelligence techniques is having the greatest impact. With this objective, a search was carried out in the Scopus database covering the three areas involved, that is, artificial intelligence, project management, and research and innovation. The resulting document set was analyzed using the co-word bibliographic method. The results obtained were then analyzed, first from a global point of view and then specifically for each of the domains that the Project Management Institute (PMI) defines in project management. Some of the findings indicate that sectors such as construction, software and product development, and systems such as knowledge management or decision-support systems have studied and applied the possibilities of artificial intelligence more intensively.
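At its core, the co-word method counts how often pairs of keywords appear together across documents and then analyzes the resulting link weights. A minimal sketch (the keyword sets are invented examples, not Scopus data):

```python
from itertools import combinations
from collections import Counter

def coword_links(documents):
    """Count pairwise keyword co-occurrences across a document set."""
    links = Counter()
    for keywords in documents:
        # each unordered keyword pair in a document adds one co-occurrence
        for a, b in combinations(sorted(set(keywords)), 2):
            links[(a, b)] += 1
    return links

docs = [
    {"artificial intelligence", "project management", "risk"},
    {"artificial intelligence", "project management", "cost"},
    {"artificial intelligence", "innovation"},
]
links = coword_links(docs)
```

Clustering or mapping the strongest links then reveals the thematic structure of the field.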

12 pages, 776 KiB  
Article
A Novel Filter-Level Deep Convolutional Neural Network Pruning Method Based on Deep Reinforcement Learning
by Yihao Feng, Chao Huang, Long Wang, Xiong Luo and Qingwen Li
Appl. Sci. 2022, 12(22), 11414; https://doi.org/10.3390/app122211414 - 10 Nov 2022
Cited by 2 | Viewed by 1740
Abstract
Deep neural networks (DNNs) have achieved great success in the field of computer vision. The high requirements for memory and storage by DNNs make it difficult to apply them to mobile or embedded devices. Therefore, compression and structure optimization of deep neural networks have become a hot research topic. To eliminate redundant structures in deep convolutional neural networks (DCNNs), we propose an efficient filter pruning framework via deep reinforcement learning (DRL). The proposed framework is based on a deep deterministic policy gradient (DDPG) algorithm for filter pruning rate optimization. The main features of the proposed framework are as follows: (1) a tailored reward function considering both the accuracy and complexity of the DCNN is proposed for the training of DDPG, and (2) a novel filter sorting criterion based on Taylor expansion is developed for filter pruning selection. To illustrate the effectiveness of the proposed framework, extensive comparative studies on large public datasets and well-recognized DCNNs are conducted. The experimental results demonstrate that the Taylor-expansion-based filter sorting criterion is much better than the widely used minimum-weight-based criterion. More importantly, the proposed filter pruning framework can achieve over 10× parameter compression and 3× floating point operations (FLOPs) reduction while maintaining accuracy similar to the original network. The performance of the proposed framework is promising compared with state-of-the-art DRL-based filter pruning methods.
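A first-order Taylor criterion of this kind scores each filter by the estimated loss change its removal would cause. The sketch below assumes the common |activation × gradient| form of the criterion, which may differ from the paper's exact formulation:

```python
import numpy as np

def taylor_filter_scores(activations, gradients):
    """First-order Taylor importance per filter.

    activations, gradients: (batch, n_filters, h, w) arrays for one layer.
    The loss change from zeroing a filter is approximated by
    |activation * gradient|, averaged over the batch and feature map.
    """
    return np.abs(activations * gradients).mean(axis=(0, 2, 3))

rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 16, 4, 4))    # toy layer: 16 filters
grads = rng.standard_normal((8, 16, 4, 4))
scores = taylor_filter_scores(acts, grads)
prune = np.argsort(scores)[:4]               # the 4 least important filters
```

A DRL agent such as DDPG would then choose how many of the lowest-scoring filters to remove per layer.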

24 pages, 1843 KiB  
Article
Rank-Based Ant System with Originality Reinforcement and Pheromone Smoothing
by Sara Pérez-Carabaza, Akemi Gálvez and Andrés Iglesias
Appl. Sci. 2022, 12(21), 11219; https://doi.org/10.3390/app122111219 - 5 Nov 2022
Cited by 6 | Viewed by 2757
Abstract
Ant Colony Optimization (ACO) encompasses a family of metaheuristics inspired by the foraging behaviour of ants. Since the introduction of the first ACO algorithm, called Ant System (AS), several ACO variants have been proposed in the literature. Owing to their superior performance over other alternatives, the most popular ACO algorithms are the Rank-based Ant System (ASRank), Max-Min Ant System (MMAS) and Ant Colony System (ACS). While ASRank shows fast convergence to high-quality solutions, it is outperformed by other, more widely used ACO variants such as MMAS and ACS, which are currently considered the state-of-the-art ACO algorithms for static combinatorial optimization problems. With the purpose of diversifying the search process and avoiding early convergence to a local optimum, the proposed approach extends ASRank with an originality reinforcement strategy for the top-ranked solutions and a pheromone smoothing mechanism that is triggered before the algorithm reaches stagnation. The approach is tested on several symmetric and asymmetric Traveling Salesman Problem and Sequential Ordering Problem instances from the TSPLIB benchmark. Our experimental results show that the proposed method achieves fast convergence to high-quality solutions and outperforms the current state-of-the-art ACO algorithms ASRank, MMAS and ACS for most instances of the benchmark.
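The underlying ASRank pheromone update and a max-based smoothing step can be sketched as follows. The deposit weights follow Bullnheimer et al.'s rank-based rule; the smoothing form shown is one common choice and not necessarily the paper's, and the originality-reinforcement strategy itself is omitted:

```python
import numpy as np

def asrank_update(tau, ranked_tours, ranked_lengths, best_tour, best_length,
                  rho=0.1, w=6):
    """Rank-based Ant System pheromone update.

    tau: (n, n) pheromone matrix; ranked_tours sorted by tour length.
    The r-th best ant deposits with weight (w - r); the best-so-far
    tour deposits with weight w.
    """
    tau *= (1.0 - rho)                                  # evaporation
    ants = zip(ranked_tours[:w - 1], ranked_lengths[:w - 1])
    for r, (tour, length) in enumerate(ants, start=1):
        for i, j in zip(tour, tour[1:] + tour[:1]):     # closed tour edges
            tau[i, j] += (w - r) / length
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i, j] += w / best_length
    return tau

def smooth(tau, delta=0.5):
    """Pull every trail toward the current maximum, shrinking the
    relative pheromone gaps that cause stagnation."""
    return tau + delta * (tau.max() - tau)

tau = np.ones((4, 4))
tau = asrank_update(tau, [[0, 1, 2, 3]], [10.0], [0, 1, 2, 3], 10.0)
```

Triggering `smooth` shortly before stagnation re-opens the search without discarding the learned trails entirely.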

10 pages, 972 KiB  
Article
Using Live Spam Beater (LiSB) Framework for Spam Filtering during SMTP Transactions
by Silvana Gómez-Meire, César Gabriel Márquez, Eliana Patricia Aray-Cappello and José R. Méndez
Appl. Sci. 2022, 12(20), 10491; https://doi.org/10.3390/app122010491 - 18 Oct 2022
Viewed by 2080
Abstract
This study introduces the Live Spam Beater (LiSB) framework for the execution of email filtering techniques during SMTP (Simple Mail Transfer Protocol) transactions. It aims to increase the effectiveness and efficiency of existing proactive filtering mechanisms, which are mainly based on simple blacklists. Since it implements proactive filtering schemes (during the SMTP transaction), when an email message is classified as spam, the sender can be notified through an SMTP response code as a result of the transaction itself. The presented framework is written in the Python programming language, works as an MTA (Mail Transfer Agent) server that implements an SMTP reverse proxy, and allows the use of plugins to easily incorporate new filtering techniques designed to operate proactively. We also include a plugin that performs proactive content-based filtering through the analysis of words included in the body of the email message. Finally, we measured the performance of the plugin and the framework (time required for operation and accuracy), obtaining values suitable for their use during SMTP transactions.
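An in-transaction content check of the sort such a plugin could perform might look like the following. The lexicon, threshold, and function name are all illustrative and are not the framework's actual API:

```python
# Hypothetical plugin-style check: score the message body against a small
# spam lexicon and map the verdict to an SMTP reply, so the sender is
# notified within the transaction itself rather than after acceptance.
SPAM_WORDS = {"viagra", "lottery", "winner", "free money"}

def check_body(body, threshold=2):
    """Return the SMTP reply for a DATA command: accept or reject."""
    text = body.lower()
    hits = sum(1 for w in SPAM_WORDS if w in text)
    if hits >= threshold:
        return "550 5.7.1 Message rejected as spam"
    return "250 2.0.0 Message accepted"

reply = check_body("You are a WINNER of our lottery! Claim your free money.")
```

Rejecting with a 5xx code during the transaction shifts the cost of spam back onto the sender and avoids storing the message at all.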

12 pages, 1684 KiB  
Article
Artificial Intelligence-Assisted RT-PCR Detection Model for Rapid and Reliable Diagnosis of COVID-19
by Emre Özbilge, Tamer Sanlidag, Ebru Ozbilge and Buket Baddal
Appl. Sci. 2022, 12(19), 9908; https://doi.org/10.3390/app12199908 - 1 Oct 2022
Cited by 6 | Viewed by 4039
Abstract
With the spread of SARS-CoV-2 variants with higher transmissibility and disease severity, rapid detection and isolation of patients remains a critical step in the control of the pandemic. RT-PCR is the recommended diagnostic test for the diagnosis of COVID-19. The current study aims to develop an artificial intelligence (AI)-driven COVID-19 RT-PCR detection system for rapid and reliable diagnosis, easing the heavy burden on healthcare workers. A multi-input deep convolutional neural network (DCNN) is proposed. A MobileNetV2 DCNN architecture was used to predict the possible diagnostic result of RT-PCR fluorescence data from patient nasopharyngeal sample analyses. Amplification curves in the FAM (ORF1ab and N genes, SARS-CoV-2) and HEX (human RNase P gene, internal control) channels of 400 samples were categorized as positive, weak-positive, negative or re-run (unspecific fluorescence). During network training, the HEX and FAM channel images for each sample were simultaneously presented to the DCNN. The obtained DCNN model was verified using another 160 new test samples. The proposed DCNN classified RT-PCR amplification curves correctly for all COVID-19 diagnostic categories, with the accuracy, sensitivity, specificity, F1-score, and AUC of the model all reported to be 1. Furthermore, the performance of other well-known pre-trained DCNN models was compared with the MobileNetV2 model using 5-fold cross-validation; the results showed no significant differences between the models at the 5% significance level, but the MobileNetV2 model dramatically outperformed the others in terms of training speed and fast convergence. The developed model can help rapidly diagnose COVID-19 patients and would be beneficial in tackling future pandemics.
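For intuition, an amplification curve can be read with simple peak thresholds. This toy rule merely stands in for the paper's image-based MobileNetV2 classifier; the thresholds and category logic are invented:

```python
import numpy as np

def classify_curve(fluorescence, threshold=0.2, weak=0.6):
    """Toy rule-based reading of one normalized amplification curve
    (fluorescence per cycle). A real system, like the paper's, learns
    this decision from the FAM/HEX curve images instead."""
    peak = float(np.max(fluorescence))
    if peak < threshold:
        return "negative"
    if peak < weak:
        return "weak-positive"
    return "positive"

cycles = np.arange(40)
curve = 1.0 / (1.0 + np.exp(-(cycles - 25) / 2.0))   # sigmoidal amplification
label = classify_curve(curve)
```

Hand-written thresholds like these fail on unspecific fluorescence (the "re-run" category), which is precisely where a learned classifier helps.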

19 pages, 886 KiB  
Article
Decoding Structure–Odor Relationship Based on Hypergraph Neural Network and Deep Attentional Factorization Machine
by Yu Wang, Qilong Zhao, Mingyuan Ma and Jin Xu
Appl. Sci. 2022, 12(17), 8777; https://doi.org/10.3390/app12178777 - 31 Aug 2022
Cited by 2 | Viewed by 2259
Abstract
Understanding the relationship between the chemical structure and physicochemical properties of odor molecules and olfactory perception, i.e., the structure–odor relationship, remains a decades-old, challenging task. The differences among the molecular structure graphs of different molecules are subtle and complex, and the molecular feature descriptors are numerous, with complex interactions that cause multiple odor perceptions. In this paper, we propose to decompose the features of the molecular structure graph into feature vectors corresponding to each odor perception descriptor to effectively explore higher-order semantic interactions between odor molecules and odor perception descriptors. We propose an olfactory perception prediction model, denoted HGAFMN, which utilizes a hypergraph neural network with an olfactory lateral-inhibition-inspired attention mechanism to learn the molecular structure feature from the odor molecular structure graph. Furthermore, existing methods cannot effectively extract interactive features from the large number of molecular feature descriptors, which have complex relations. To solve this problem, we add an attentional factorization mechanism to the deep neural network module and obtain a molecular descriptive feature through deep feature combination based on the attention mechanism. Our proposed HGAFMN has achieved good results in extensive experiments and will help product design and quality assessment in the food, beverage, and fragrance industries.

15 pages, 6375 KiB  
Article
Development of a Novel Object Detection System Based on Synthetic Data Generated from Unreal Game Engine
by Ingeborg Rasmussen, Sigurd Kvalsvik, Per-Arne Andersen, Teodor Nilsen Aune and Daniel Hagen
Appl. Sci. 2022, 12(17), 8534; https://doi.org/10.3390/app12178534 - 26 Aug 2022
Cited by 14 | Viewed by 3777
Abstract
This paper presents a novel approach to training a real-world object detection system on synthetic data utilizing state-of-the-art technologies. Training an object detection system can be challenging and time-consuming, as machine learning requires substantial volumes of training data with associated metadata. Synthetic data can solve this by providing unlimited desired training data through automatic generation. However, the main challenge is creating a balanced dataset that closes the reality gap and generalizes well when deployed in the real world. A state-of-the-art game engine, Unreal Engine 4, was used to approach the challenge of generating a photorealistic dataset for deep learning model training. In addition, a comprehensively domain-randomized environment was implemented to create a robust dataset that generalizes well. The randomized environment was reinforced by adding high-dynamic-range image scenes. Finally, a modern neural network was used to train the object detection system, providing a robust framework for an adaptive and self-learning model. The final models were deployed in simulation and in the real world to evaluate the training. The results of this study show that it is possible to train a real-world object detection system on synthetic data. However, the models show considerable room for improvement in the stability and confidence of the inference results. In addition, the paper provides valuable insight into how the number of assets and the amount of training data influence the resulting model.
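Domain randomization of this kind boils down to sampling scene parameters from broad ranges before each render. A sketch with invented parameter names and ranges (a real pipeline would drive the game engine with such configurations):

```python
import random

def randomize_scene(rng):
    """Sample one randomized scene configuration. All parameter names
    and ranges here are illustrative, not those used in the paper."""
    return {
        "light_intensity": rng.uniform(0.2, 3.0),
        "light_angle_deg": rng.uniform(0.0, 360.0),
        "camera_distance_m": rng.uniform(0.5, 5.0),
        "object_yaw_deg": rng.uniform(0.0, 360.0),
        "background_id": rng.randrange(100),
        "hdr_environment": rng.choice(["studio", "outdoor", "warehouse"]),
    }

rng = random.Random(42)
scenes = [randomize_scene(rng) for _ in range(1000)]   # one config per render
```

Because the detector never sees the same lighting, pose, and background twice, it is pushed to learn object appearance rather than scene context, which is what narrows the reality gap.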

9 pages, 1816 KiB  
Article
A Graph-Based k-Nearest Neighbor (KNN) Approach for Predicting Phases in High-Entropy Alloys
by Raheleh Ghouchan Nezhad Noor Nia, Mehrdad Jalali and Mahboobeh Houshmand
Appl. Sci. 2022, 12(16), 8021; https://doi.org/10.3390/app12168021 - 10 Aug 2022
Cited by 10 | Viewed by 4591
Abstract
Traditional techniques for detecting materials have been unable to keep pace with the advancement of materials science today due to their low accuracy and high cost. Accordingly, machine learning (ML) improves prediction efficiency in materials science, including the phase prediction of high-entropy alloys (HEAs). Unlike traditional alloys, HEAs consist of at least five elements in equal or near-equal atomic ratios. In a previous approach, we presented an HEA interaction network based on its descriptors. In this study, the HEA phase is predicted using a graph-based k-nearest neighbor (KNN) approach. Each HEA compound has a phase belonging to one of five categories: FCC, BCC, HCP, multiphase, and amorphous. A composition's phase represents a state of matter with a certain energy level, and phase prediction is effective in determining an alloy's application. Each compound in the network has some neighbors, and the phase of a new compound can be predicted based on the phase of its most similar neighbors. The proposed approach is performed on the HEA network. The experimental results show that the accuracy of the proposed approach for predicting the phase of new alloys is 88.88%, which is higher than that of other ML methods.
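The graph-based KNN step reduces to a majority vote over the most similar neighbors in the compound network. A sketch with made-up neighbors and similarity scores:

```python
from collections import Counter

def predict_phase(neighbors, k=5):
    """Predict a new alloy's phase by majority vote over its k most
    similar neighbors in the compound network.

    neighbors: list of (phase_label, similarity) pairs.
    """
    top = sorted(neighbors, key=lambda t: -t[1])[:k]   # k most similar
    votes = Counter(phase for phase, _sim in top)
    return votes.most_common(1)[0][0]

# Hypothetical neighborhood of a new composition in the HEA network.
neighbors = [("FCC", 0.95), ("FCC", 0.90), ("BCC", 0.88),
             ("FCC", 0.70), ("Amorphous", 0.65), ("BCC", 0.40)]
phase = predict_phase(neighbors, k=5)
```

The quality of the prediction therefore rests entirely on the similarity measure used to build the network, here derived from the alloy descriptors.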

13 pages, 2877 KiB  
Article
Use of Artificial Neural Networks to Predict the Progression of Glaucoma in Patients with Sleep Apnea
by Nicoleta Anton, Catalin Lisa, Bogdan Doroftei, Silvia Curteanu, Camelia Margareta Bogdanici, Dorin Chiselita, Daniel Constantin Branisteanu, Ionela Nechita-Dumitriu, Ovidiu-Dumitru Ilie and Roxana Elena Ciuntu
Appl. Sci. 2022, 12(12), 6061; https://doi.org/10.3390/app12126061 - 15 Jun 2022
Cited by 6 | Viewed by 2428
Abstract
Aim: To construct neural models to predict the progression of glaucoma in patients with sleep apnea. Materials and Methods: Neural network modeling was performed using the NeuroSolutions commercial simulator. The databases built gather information on a group of patients with primary open-angle glaucoma and normal-tension glaucoma associated with sleep apnea syndrome of various stages of severity. The data within the database were divided as follows: 65 records were used in the neural network training stage and 8 were kept for the validation stage. In total, 21 parameters were selected as input parameters for the neural models, including: age of patients, BMI (body mass index), systolic and diastolic blood pressure, intraocular pressure, central corneal thickness, corneal biomechanical parameters (IOPcc, HC, CRF), AHI, desaturation index, nocturnal oxygen saturation, remaining AHI, type of apnea, and associated general conditions (diabetes, hypertension, obesity, COPD). The selected output parameters are: the c/d ratio, modified visual field parameters (MD, PSD), and ganglion cell layer thickness. Forward-propagation neural networks (multilayer perceptrons) were constructed with one layer of hidden neurons. The constructed neural models generated the output values for these data, and the obtained results were then compared with the experimental values. Results: The best results were obtained during the training stage with the ANN network (21:35:4). If we consider a 25% confidence interval, very good results are obtained during the validation stage, except for the average GCL thickness, for which the errors are slightly higher. Conclusions: Excellent results were obtained during the validation stage, which support the results obtained in other studies in the literature and strengthen the connection between sleep apnea syndrome and glaucomatous changes.
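The reported 21:35:4 topology corresponds to a single-hidden-layer perceptron mapping the 21 clinical inputs to the 4 outputs. A forward-pass sketch with random placeholder weights (only the shape of the computation is taken from the abstract; the trained weights, activation choice, and any scaling are assumptions):

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a 21 -> 35 -> 4 multilayer perceptron."""
    h = np.tanh(x @ w1 + b1)      # hidden layer of 35 neurons
    return h @ w2 + b2            # 4 output parameters

rng = np.random.default_rng(7)
w1, b1 = rng.standard_normal((21, 35)), np.zeros(35)
w2, b2 = rng.standard_normal((35, 4)), np.zeros(4)

patients = rng.standard_normal((8, 21))   # 8 records, 21 input parameters
y = mlp_forward(patients, w1, b1, w2, b2)
```

With only 65 training records for roughly 900 weights, the small validation set and the wide 25% interval in the abstract are worth keeping in mind when reading the reported accuracy.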

16 pages, 5450 KiB  
Article
Object Detection Related to Irregular Behaviors of Substation Personnel Based on Improved YOLOv4
by Jingxin Fang and Xuwei Li
Appl. Sci. 2022, 12(9), 4301; https://doi.org/10.3390/app12094301 - 24 Apr 2022
Cited by 7 | Viewed by 2140
Abstract
The accurate and timely detection of irregular behavior by substation personnel plays an important role in maintaining personal safety and preventing power outage accidents. This paper proposes a method for irregular behavior detection (IBD) of substation personnel based on an improved YOLOv4, which uses MobileNetV3 to replace the CSPDarkNet53 feature extraction network, applies depthwise separable convolution and efficient channel attention (ECA) to optimize the SPP and PANet networks, and fuses feature maps at four scales to improve detection accuracy. First, an image dataset was constructed using video data and still photographs preprocessed by the gamma correction method. Then, the improved YOLOv4 model was trained by combining Mosaic data enhancement, cosine annealing, and label smoothing. Several detection cases were carried out, and the experimental results showed that the proposed improved YOLOv4 model has high accuracy, with a mean average precision (mAP) of 83.51%, as well as a fast detection speed of 38.06 frames per second (FPS). This represents better performance than other object detection methods, including Faster RCNN, SSD, YOLOv3, and YOLOv4. This study offers a reference for the IBD of substation personnel and provides an automated intelligent monitoring method.
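The saving from depthwise separable convolution, one of the lightweight changes above, is easy to quantify by counting multiply-accumulate operations (the layer sizes below are arbitrary examples, not taken from the paper):

```python
def conv_cost(c_in, c_out, k, h, w):
    """Multiply-accumulate count of a standard k x k convolution."""
    return k * k * c_in * c_out * h * w

def dw_separable_cost(c_in, c_out, k, h, w):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise one."""
    return k * k * c_in * h * w + c_in * c_out * h * w

std = conv_cost(128, 128, 3, 52, 52)
sep = dw_separable_cost(128, 128, 3, 52, 52)
ratio = std / sep   # approaches k*k (= 9 here) as c_out grows
```

This roughly k²-fold reduction in computation is what lets the modified SPP/PANet run fast enough for real-time substation monitoring.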

12 pages, 1278 KiB  
Article
Few-Shot Wideband Tympanometry Classification in Otosclerosis via Domain Adaptation with Gaussian Processes
by Leixin Nie, Chao Li, Alexis Bozorg Grayeli and Franck Marzani
Appl. Sci. 2021, 11(24), 11839; https://doi.org/10.3390/app112411839 - 13 Dec 2021
Cited by 2 | Viewed by 2222
Abstract
Otosclerosis is a common middle ear disease whose routine diagnosis requires a combination of examinations. In a previous study, we showed that this disease could potentially be diagnosed by wideband tympanometry (WBT) coupled with a convolutional neural network (CNN) in a rapid and non-invasive manner, and that deep transfer learning with data augmentation could be applied successfully to such a task. However, the synthetic and realistic data involved have a significant discrepancy that impedes the performance of transfer learning. To address this issue, a Gaussian processes-guided domain adaptation (GPGDA) algorithm was developed. It leverages both a loss based on the distribution distance calculated by Gaussian processes and the conventional cross-entropy loss during transfer. On a WBT dataset including 80 otosclerosis and 55 control samples, it achieved an area under the curve of 97.9±1.1 percent after receiver operating characteristic analysis and an F1-score of 95.7±0.9 percent, superior to the baseline methods (r=10, p<0.05, ANOVA). To understand the algorithm's behavior, the role of each component of the GPGDA was experimentally explored on the dataset. In conclusion, our GPGDA algorithm appears to be an effective tool to enhance CNN-based WBT classification in otosclerosis using just a limited number of realistic data samples.
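The overall loss combines cross entropy with a distribution-distance term between source and target features. As a stand-in for the Gaussian-process-based distance, the sketch below uses a simple RBF-kernel maximum mean discrepancy; this substitution and the weighting are assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: one simple
    kernel-based distance between two batches of feature vectors."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def transfer_loss(ce_loss, source_feats, target_feats, lam=0.1):
    """Cross entropy plus a weighted distribution-distance penalty."""
    return ce_loss + lam * rbf_mmd2(source_feats, target_feats)

rng = np.random.default_rng(3)
src = rng.standard_normal((32, 8))          # synthetic-domain features
tgt = rng.standard_normal((32, 8)) + 2.0    # shifted realistic domain
loss = transfer_loss(0.7, src, tgt)
```

Minimizing such a combined loss pulls the two feature distributions together while the cross-entropy term keeps the classes separable.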
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
23 pages, 6906 KiB  
Article
A Proposal for Clothing Size Recommendation System Using Chinese Online Shopping Malls: The New Era of Data
by Ying Yuan, Myung-Ja Park and Jun-Ho Huh
Appl. Sci. 2021, 11(23), 11215; https://doi.org/10.3390/app112311215 - 25 Nov 2021
Cited by 4 | Viewed by 6748
Abstract
Research was conducted in this study to design data-based size recommendation and size coding systems specifically for online shopping malls, with the aim of lightening the burden of excessive inventories often caused by the high return rates of these malls. The recommendation system was implemented around two main functions, size extraction and size recommendation, along with a UI (user interface). For size extraction, data are needed to estimate customers’ sizes; the system intended for use in China adopts the Chinese national standard body size GB/T, given the variety of body types in its substantial population. Once a customer inputs height and weight, the system returns the GB/T dataset entry most similar to that customer; each GB/T record was entered after being categorized according to the proportion between height and weight. For size recommendation, the shop owner first size-codes all the clothes by entering individual size data. The system then recommends the garment providing the most suitable fit by selecting, after a series of comparative calculations, the item with the smallest deviation between its coded size and the customer’s body data. To validate the extraction, we checked whether the difference between the extracted size and the measured body size remained within an error range of 4 cm. The results showed an approximately 88% matching rate for women and a slightly lower accuracy of 80% for men; the error was relatively smaller for upper-body clothing such as shirts, jackets, and blouses or one-piece dresses. These errors may stem from the fact that the GB/T data are average data recorded 10 years earlier, without detailed categorization by nationality, age, or body type. This research emphasizes the need for a database of more finely segmented human body size data, which would support more accurate size extraction and recommendation as up-to-date measurements continue to accumulate. Full article
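The recommendation step selects the garment whose coded size deviates least from the customer's body data. A minimal sketch of that selection, with made-up size codes and measurement dimensions (the actual system uses the shop owner's coded sizes and GB/T-derived body data):

```python
# Hypothetical coded sizes: label -> (height_fit_cm, chest_cm).
coded_sizes = {
    "S": (160.0, 84.0),
    "M": (170.0, 92.0),
    "L": (180.0, 100.0),
}

def recommend_size(customer, sizes=coded_sizes):
    # Pick the coded size with the smallest total absolute
    # deviation from the customer's body measurements.
    def deviation(item):
        _, dims = item
        return sum(abs(c - d) for c, d in zip(customer, dims))
    return min(sizes.items(), key=deviation)[0]
```

For example, a customer measuring (172 cm, 90 cm) is closest to the "M" code under this metric.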
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
15 pages, 3675 KiB  
Article
An Optimization Technique for Linear Manifold Learning-Based Dimensionality Reduction: Evaluations on Hyperspectral Images
by Ümit Öztürk and Atınç Yılmaz
Appl. Sci. 2021, 11(19), 9063; https://doi.org/10.3390/app11199063 - 28 Sep 2021
Cited by 2 | Viewed by 2351
Abstract
Manifold learning tries to find low-dimensional manifolds in high-dimensional data and is useful for omitting redundant information from the input. Linear manifold learning algorithms are applicable to out-of-sample data, for which they are fast and practical, especially for classification purposes. Locality preserving projection (LPP) and orthogonal locality preserving projection (OLPP) are two well-known linear manifold learning algorithms. In this study, scatter information from a distance matrix is used to construct a weight matrix with a supervised approach for the LPP and OLPP algorithms to improve classification accuracy. Low-dimensional data are classified with an SVM, and the results of the proposed method are compared with those of other important linear manifold learning methods. Class-based enhancements and the coefficients proposed for the formulation are reported visually. Furthermore, the changes in the weight matrices, band information, and correlation matrices with p-values are extracted and visualized to understand the effect of the proposed method. Experiments are conducted on hyperspectral imaging (HSI) with two different datasets. According to the experimental results, the proposed method applied with the LPP or OLPP algorithms outperformed the traditional LPP, OLPP, neighborhood preserving embedding (NPE) and orthogonal neighborhood preserving embedding (ONPE) algorithms. Furthermore, the analytical findings from the visualizations are consistent with the obtained classification accuracy enhancements. Full article
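The study's contribution is a supervised weight matrix for LPP/OLPP built from distance-matrix scatter information. The exact construction is not given in the abstract; as a generic supervised variant for illustration, the sketch below restricts a heat-kernel affinity to same-class pairs:

```python
import numpy as np

def supervised_weights(X, y, t=1.0):
    # Generic supervised affinity for LPP-style methods:
    # W[i, j] = exp(-||x_i - x_j||^2 / t) if y_i == y_j, else 0.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same = (y[:, None] == y[None, :]).astype(float)
    W = np.exp(-d2 / t) * same
    np.fill_diagonal(W, 0.0)  # no self-affinity
    return W
```

LPP then finds projections minimizing the weighted sum of squared distances between projected neighbors, so zeroing cross-class weights encourages class-separating directions.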
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
22 pages, 1465 KiB  
Article
Business Intelligence in Airline Passenger Satisfaction Study—A Fuzzy-Genetic Approach with Optimized Interpretability-Accuracy Trade-Off
by Marian B. Gorzałczany, Filip Rudziński and Jakub Piekoszewski
Appl. Sci. 2021, 11(11), 5098; https://doi.org/10.3390/app11115098 - 31 May 2021
Cited by 8 | Viewed by 3044
Abstract
The main objective and contribution of this paper is the application of our knowledge-discovery business-intelligence technique (fuzzy rule-based classification systems), characterized by a genetically optimized interpretability-accuracy trade-off (using multi-objective evolutionary optimization algorithms), to decision support for airline passenger satisfaction problems. Our experiments use the recently published airline passenger satisfaction data set, accessible in Kaggle’s repository and containing 259,760 records. A comparison of our approach with an alternative method (using the SAS system’s accuracy-oriented prediction tools to determine the attribute importance hierarchy) is also performed, showing the advantages of our method in terms of: (i) discovering the actual hierarchy of attribute significance for passenger satisfaction and (ii) optimizing the knowledge-discovery system’s interpretability-accuracy trade-off. The main results and findings of our work include: (i) the introduction of a modern fuzzy-genetic business-intelligence solution, characterized by both high interpretability and high accuracy, to airline passenger satisfaction decision support, (ii) an analysis of the possible "overlapping" of some input attributes over others in order to discover the real hierarchy of influence of particular input attributes on airline passenger satisfaction, and (iii) an extended cross-validation experiment confirming the high effectiveness of our approach for different learning-test splits of the data set considered. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
10 pages, 1881 KiB  
Article
Identification of Synonyms Using Definition Similarities in Japanese Medical Device Adverse Event Terminology
by Ayako Yagahara, Masahito Uesugi and Hideto Yokoi
Appl. Sci. 2021, 11(8), 3659; https://doi.org/10.3390/app11083659 - 19 Apr 2021
Cited by 3 | Viewed by 2797
Abstract
The Japanese medical device adverse event terminology, published by the Japan Federation of Medical Devices Associations (JFMDA terminology), contains 89 terminology items, each of whose entries was created independently. It is necessary to establish and verify the consistency of these terminology entries and to map them efficiently and accurately; developing an automatic synonym detection tool is therefore an important concern. Tools based on edit distances and distributed representations have achieved good performance in previous studies. The purpose of this study was to identify synonyms in the JFMDA terminology and evaluate the accuracy of these algorithms. A total of 125 definition sentence pairs were created from the terminology as baselines. Edit distances (Levenshtein and Jaro–Winkler distance) and distributed representations (Word2vec, fastText, and Doc2vec) were employed to calculate similarities. Receiver operating characteristic analysis was carried out to evaluate the accuracy of synonym detection. A comparison of the algorithms showed that the Jaro–Winkler distance had the highest sensitivity, Doc2vec with DM had the highest specificity, and the Levenshtein distance had the highest area under the curve. Edit distances and Doc2vec thus make it possible to predict synonyms in the JFMDA terminology with high accuracy. Full article
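Of the edit distances compared, the Levenshtein distance is straightforward to sketch. The normalized similarity shown here (one minus distance over the longer string's length) is one common convention for turning distances into scores that can be thresholded for ROC analysis; the paper may normalize differently.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # Normalize the distance into a [0, 1] score so a decision
    # threshold can be swept during ROC analysis.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```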
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
16 pages, 493 KiB  
Article
A General Framework Based on Machine Learning for Algorithm Selection in Constraint Satisfaction Problems
by José C. Ortiz-Bayliss, Ivan Amaya, Jorge M. Cruz-Duarte, Andres E. Gutierrez-Rodriguez, Santiago E. Conant-Pablos and Hugo Terashima-Marín
Appl. Sci. 2021, 11(6), 2749; https://doi.org/10.3390/app11062749 - 18 Mar 2021
Cited by 7 | Viewed by 2782
Abstract
Many of the works on algorithm selection strategies—methods that choose a suitable solving method for a particular problem—start from scratch, since only a few investigations of reusable components for such methods are found in the literature. Additionally, researchers might unintentionally omit implementation details when documenting an algorithm selection strategy, making it difficult for others to reproduce its behavior. To address these problems, we propose relying on existing techniques from the Machine Learning realm to speed up the generation of algorithm selection strategies while improving the modularity and reproducibility of the research. The proposed solution model is implemented on a domain-independent Machine Learning module that executes the core mechanism of the algorithm selection task. The algorithm selection strategies produced in this work are implemented and tested rapidly compared with the time it would take to build a similar approach from scratch. To verify our approach, we produce four novel Machine Learning-based algorithm selectors for constraint satisfaction problems. Our data suggest that these selectors outperform the best performing single algorithm on a set of test instances. For example, the algorithm selectors Multiclass Neural Network (MNN) and Multiclass Logistic Regression (MLR), powered by a neural network and logistic regression, respectively, reduced the search cost (in terms of consistency checks) of the best performing heuristic (KAPPA) by 49% on average for the instances considered in this work. Full article
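At its core, algorithm selection maps instance features to the algorithm expected to perform best. The paper trains classifiers such as a neural network and logistic regression through an ML module; the sketch below substitutes a deliberately minimal 1-nearest-neighbour selector purely to illustrate the feature-to-algorithm mapping ("KAPPA" and "MNN" appear here only as example labels):

```python
import numpy as np

def train_selector(features, best_algorithms):
    # Store labelled training instances; "training" here is just
    # memorization, since selection is 1-nearest-neighbour.
    return np.asarray(features, float), list(best_algorithms)

def select_algorithm(model, instance_features):
    # Return the algorithm that was best on the most similar
    # training instance in feature space.
    X, labels = model
    d = ((X - np.asarray(instance_features, float)) ** 2).sum(axis=1)
    return labels[int(np.argmin(d))]
```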
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
17 pages, 843 KiB  
Article
Experimental Analysis of Friend-And-Native Based Location Awareness for Accurate Collaborative Filtering
by Aaron Ling Chi Yi and Dae-Ki Kang
Appl. Sci. 2021, 11(6), 2510; https://doi.org/10.3390/app11062510 - 11 Mar 2021
Cited by 2 | Viewed by 2009
Abstract
Location-based recommender systems have gained a lot of attention in both commercial domains and research communities, where various approaches have shown great potential for further study. However, previous research on location-based recommender systems has paid little attention to generating recommendations that consider the location of the target user; such systems sometimes recommend places far from the user’s current location. In this paper, we explore the issues of generating location recommendations for users traveling overseas by taking into account both the user’s social influence and native or local expert knowledge. Accordingly, we propose a collaborative filtering recommendation framework, the Friend-And-Native-Aware Approach for Collaborative Filtering (FANA-CF), to generate reasonable location recommendations. We validated our approach through systematic and extensive experiments on real-world datasets collected from Foursquare™. Comparing against the collaborative filtering approach (item-based and user-based collaborative filtering) and the personalized mean approach, we show that our proposed approach slightly outperforms both. Full article
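For context, a plain user-based collaborative filtering predictor (one of the baselines compared above) rates an unvisited venue as a similarity-weighted average of other users' ratings. This sketch is the generic baseline, not FANA-CF itself:

```python
import math

def cosine(u, v):
    # Cosine similarity between two users' rating dicts,
    # computed over co-rated venues only.
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    du = math.sqrt(sum(u[i] ** 2 for i in common))
    dv = math.sqrt(sum(v[i] ** 2 for i in common))
    return num / (du * dv)

def predict(target, others, venue):
    # Similarity-weighted average of neighbours' ratings for `venue`.
    pairs = [(cosine(target, o), o[venue]) for o in others if venue in o]
    total = sum(s for s, _ in pairs)
    if total == 0:
        return 0.0
    return sum(s * r for s, r in pairs) / total
```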
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
19 pages, 6177 KiB  
Article
A Study of Multilayer Perceptron Networks Applied to Classification of Ceramic Insulators Using Ultrasound
by Nemesio Fava Sopelsa Neto, Stéfano Frizzo Stefenon, Luiz Henrique Meyer, Rafael Bruns, Ademir Nied, Laio Oriel Seman, Gabriel Villarrubia Gonzalez, Valderi Reis Quietinho Leithardt and Kin-Choong Yow
Appl. Sci. 2021, 11(4), 1592; https://doi.org/10.3390/app11041592 - 10 Feb 2021
Cited by 43 | Viewed by 3878
Abstract
Interruptions in the supply of electricity cause numerous losses to consumers, whether residential or industrial, and may result in fines imposed by the regulatory agency on the concessionaire. In Brazil, the electrical transmission and distribution systems cover a large territorial area and, because they are usually outdoors, are exposed to environmental variations. In this context, periodic inspections of the electrical networks are carried out, and ultrasound equipment is widely used due to its non-destructive analysis characteristics. Ultrasonic inspection allows the identification of defective insulators based on a signal interpreted by an operator, a task that fundamentally depends on the operator’s experience. This work therefore tests machine learning applications for interpreting ultrasound signals obtained from 25 kV class distribution grid insulators. Research in the area currently uses several artificial intelligence models for various types of evaluation. This paper studies the application of Multilayer Perceptron networks to the classification of different conditions of ceramic insulators based on a restricted database of ultrasonic signals recorded in the laboratory. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
13 pages, 264 KiB  
Article
Split-Based Algorithm for Weighted Context-Free Grammar Induction
by Mateusz Gabor, Wojciech Wieczorek and Olgierd Unold
Appl. Sci. 2021, 11(3), 1030; https://doi.org/10.3390/app11031030 - 24 Jan 2021
Viewed by 2018
Abstract
The split-based method for weighted context-free grammar (WCFG) induction was formalised and verified on a comprehensive set of context-free languages. The WCFG is learned using a novel grammatical inference method that learns from both positive and negative samples, while the rule weights are estimated using a novel Inside–Outside Contrastive Estimation algorithm. The results showed that our approach outperforms other state-of-the-art methods in terms of F1 score. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
15 pages, 4888 KiB  
Article
Interferometric Wavefront Sensing System Based on Deep Learning
by Yuhao Niu, Zhan Gao, Chenjia Gao, Jieming Zhao and Xu Wang
Appl. Sci. 2020, 10(23), 8460; https://doi.org/10.3390/app10238460 - 27 Nov 2020
Cited by 1 | Viewed by 2520
Abstract
At present, most wavefront sensing methods analyze wavefront aberrations from light intensity images taken in dark environments. Under general conditions, however, these methods are limited by interference from various external light sources. In recent years, deep learning has achieved great success in the field of computer vision and has been widely used in research on image classification and data fitting. Here, we apply deep learning algorithms to an interferometric system to detect wavefronts under general conditions. The method accurately extracts the wavefront phase distribution and analyzes aberrations; experiments verify that it not only offers higher measurement accuracy and faster calculation but also performs well in noisy environments. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
31 pages, 610 KiB  
Article
Framework to Diagnose the Metabolic Syndrome Types without Using a Blood Test Based on Machine Learning
by Mauricio Barrios, Miguel Jimeno, Pedro Villalba and Edgar Navarro
Appl. Sci. 2020, 10(23), 8404; https://doi.org/10.3390/app10238404 - 26 Nov 2020
Cited by 4 | Viewed by 2760
Abstract
Metabolic Syndrome (MetS) is a set of risk factors that increase the probability of heart disease or even diabetes mellitus. Diagnosis of the pathology requires compliance with at least three of five risk factors. Doctors obtain two of those factors in a medical consultation: waist circumference and blood pressure. The other three factors are biochemical variables that require a blood test to determine triglycerides, high-density lipoprotein cholesterol, and fasting plasma glucose. Consequently, scientists are developing technology for non-invasive diagnostics, but medical personnel also need the risk factors involved in MetS to start a treatment. This paper describes the segmentation of MetS into ten types based on the harmonized Metabolic Syndrome criteria. It proposes a framework to diagnose the types of MetS based on Artificial Neural Networks and random undersampling boosted trees, using non-biochemical variables such as anthropometric and clinical information. The framework works over imbalanced and balanced datasets using the Synthetic Minority Oversampling Technique and, for validation, uses random subsampling to obtain performance indicators for comparing the classifiers. The results showed that the framework diagnoses the 10 MetS types with areas under the Receiver Operating Characteristic (AROC) curve ranging from 71% to 93%, compared with an AROC of 82.86% for traditional MetS. Full article
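The diagnostic rule stated above (at least three of five risk factors, two of which need no blood test) can be sketched directly. The thresholds below are the commonly cited harmonized values and are assumptions here, as the paper's exact cut-offs are not given in the abstract:

```python
def mets_factors(waist_cm, systolic, diastolic, male=True):
    # Count the two criteria obtainable in a consultation without a
    # blood test. Thresholds are assumed harmonized-criteria values
    # (waist >= 102/88 cm for men/women; BP >= 130/85 mmHg) and may
    # differ by population and guideline version.
    waist_flag = waist_cm >= (102 if male else 88)
    bp_flag = systolic >= 130 or diastolic >= 85
    return int(waist_flag) + int(bp_flag)

def mets_diagnosis(n_factors):
    # MetS is diagnosed when at least three of the five factors hold;
    # the other three factors require biochemical measurements.
    return n_factors >= 3
```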
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
18 pages, 4771 KiB  
Article
Escaping Local Minima in Path Planning Using a Robust Bacterial Foraging Algorithm
by Mohammed Isam Ismael Abdi, Muhammad Umer Khan, Ahmet Güneş and Deepti Mishra
Appl. Sci. 2020, 10(21), 7905; https://doi.org/10.3390/app10217905 - 7 Nov 2020
Cited by 10 | Viewed by 2806
Abstract
The bacterial foraging optimization (BFO) algorithm successfully searches for an optimal path from start to finish in the presence of obstacles over a flat surface map. However, the algorithm gets stuck in local minima whenever non-circular obstacles are encountered. Recovery from a local minimum is crucial, as otherwise it can cause the failure of the whole task. This research proposes an improved version of BFO called robust bacterial foraging (RBF), which can effectively avoid obstacles of both circular and non-circular shape without falling into local minima. Virtual obstacles are generated at the local minimum, causing the robot to retract and regenerate a safe path. The proposed method is easily extendable to multiple robots that coordinate with each other: information about the virtual obstacles is shared with the whole swarm so that all robots can escape the same local minimum, saving time and energy. To test the effectiveness of the proposed algorithm, a comparison is made against the existing BFO algorithm. The results show that the proposed approach successfully recovered from local minima where BFO got stuck. Full article
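The escape mechanism described above can be sketched as two steps: detect stagnation, then drop a virtual obstacle at the trap point so replanning routes around it. The stagnation test and function names below are simplifications of ours, not the RBF formulation:

```python
def detect_local_minimum(history, window=5, eps=1e-3):
    # A robot is considered trapped when its recent positions barely
    # move: total x- and y-spread over the last `window` steps < eps.
    if len(history) < window:
        return False
    xs, ys = zip(*history[-window:])
    return (max(xs) - min(xs)) + (max(ys) - min(ys)) < eps

def escape(history, obstacles):
    # Drop a virtual obstacle at the trap location so the planner's
    # cost field pushes the robot back out along a different path.
    # Sharing `obstacles` across a swarm lets every robot avoid
    # the same trap.
    if detect_local_minimum(history):
        obstacles.append(history[-1])
        return True
    return False
```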
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
21 pages, 11295 KiB  
Article
Ensemble Deep Learning on Time-Series Representation of Tweets for Rumor Detection in Social Media
by Chandra Mouli Madhav Kotteti, Xishuang Dong and Lijun Qian
Appl. Sci. 2020, 10(21), 7541; https://doi.org/10.3390/app10217541 - 26 Oct 2020
Cited by 21 | Viewed by 3497
Abstract
Social media is a popular platform for information sharing, where any piece of information can spread across the globe at lightning speed. The biggest challenge for social media platforms like Twitter is how to trust news shared on them when there is no systematic news verification process, as there is for traditional media. Detecting false information, for example rumors, is a non-trivial task given the fast-paced social media environment. In this work, we propose an ensemble model that applies a majority-voting scheme to the predictions of a collection of neural networks, using a time-series vector representation of Twitter data for fast rumor detection. Experimental results show that the proposed neural network models outperform classical machine learning models in terms of micro F1 score, with improvements of 12.5% and 7.9% over our two previous works, respectively. Full article
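The ensemble's majority-voting scheme itself is simple to sketch: for each sample, take the most common label across the base models' predictions:

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one label list per base model, aligned by sample.
    # zip(*...) regroups the votes per sample; the ensemble label is
    # the most common vote (Counter breaks exact ties by first seen).
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]
```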
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
23 pages, 961 KiB  
Article
Combining Machine Learning and Logical Reasoning to Improve Requirements Traceability Recovery
by Tong Li, Shiheng Wang, David Lillis and Zhen Yang
Appl. Sci. 2020, 10(20), 7253; https://doi.org/10.3390/app10207253 - 16 Oct 2020
Cited by 20 | Viewed by 4371
Abstract
Maintaining the traceability links of software systems is a crucial task for software management and development. Unfortunately, it is typically treated as an afterthought due to time pressure. Some studies attempt to automate this task with information retrieval-based methods, but they concentrate only on calculating the textual similarity between software artifacts and do not take the properties of those artifacts into account. In this paper, we propose a novel traceability link recovery approach that comprehensively measures the similarity between use cases and source code by exploiting their particular properties. To this end, we leverage and combine machine learning and logical reasoning techniques. On the one hand, our method extracts features that consider the semantics of the use cases and source code and uses a classification algorithm to train the classifier. On the other hand, we utilize the relationships between artifacts and define a series of rules to recover traceability links. In particular, we leverage not only the source code’s structural information but also the interrelationships between use cases. We conducted a series of experiments on multiple datasets to evaluate our approach against existing approaches; the results show that it is substantially better than other methods. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
14 pages, 2458 KiB  
Article
Improving Machine Learning Identification of Unsafe Driver Behavior by Means of Sensor Fusion
by Emanuele Lattanzi, Giacomo Castellucci and Valerio Freschi
Appl. Sci. 2020, 10(18), 6417; https://doi.org/10.3390/app10186417 - 15 Sep 2020
Cited by 17 | Viewed by 4014
Abstract
Most road accidents occur due to human fatigue, inattention, or drowsiness. Recently, machine learning technology has been successfully applied to identifying driving styles and recognizing unsafe behaviors from in-vehicle sensor signals such as vehicle and engine speed, throttle position, and engine load. In this work, we investigated the fusion of external sensors, such as a gyroscope and a magnetometer, with in-vehicle sensors to improve machine learning identification of unsafe driver behavior. From these signals, we computed a set of features capable of accurately describing the driver’s behavior. A support vector machine and an artificial neural network were then trained and tested using features calculated over more than 200 km of travel. The ground truth used to evaluate classification performance was obtained by means of an objective methodology based on the relationship between the speed and the lateral and longitudinal acceleration of the vehicle. The classification results showed an average accuracy of about 88% with the SVM classifier and about 90% with the neural network, demonstrating the potential of the proposed methodology to identify unsafe driver behaviors. Full article
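Feature extraction of the kind described above typically reduces each fused sensor channel to windowed statistics. As a sketch under that assumption (the paper's actual feature set is richer), this computes per-window mean and standard deviation for one channel; concatenating such features across in-vehicle and external channels implements the fusion:

```python
import math

def window_features(signal, size):
    # Slide a non-overlapping window over one sensor channel and
    # emit (mean, standard deviation) per window.
    feats = []
    for start in range(0, len(signal) - size + 1, size):
        w = signal[start:start + size]
        mean = sum(w) / size
        var = sum((x - mean) ** 2 for x in w) / size
        feats.append((mean, math.sqrt(var)))
    return feats
```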
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
Review


23 pages, 363 KiB  
Review
Advances in Machine Learning for Sensing and Condition Monitoring
by Sio-Iong Ao, Len Gelman, Hamid Reza Karimi and Monica Tiboni
Appl. Sci. 2022, 12(23), 12392; https://doi.org/10.3390/app122312392 - 3 Dec 2022
Cited by 8 | Viewed by 3034
Abstract
To overcome the complexities that sensing devices face in data collection, transmission, storage, and analysis for condition monitoring, estimation, and control purposes, machine learning algorithms have gained popularity for analyzing and interpreting big sensory data in modern industry. This paper puts forward a comprehensive survey of advances in machine learning algorithms and their most recent applications in the sensing and condition monitoring fields. Current case studies of developing tailor-made data mining and deep learning algorithms are carefully selected and discussed from a practical perspective, and the characteristics and contributions of these algorithms to the sensing and monitoring fields are elaborated. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
28 pages, 1538 KiB  
Review
A Review on AI for Smart Manufacturing: Deep Learning Challenges and Solutions
by Jiawen Xu, Matthias Kovatsch, Denny Mattern, Filippo Mazza, Marko Harasic, Adrian Paschke and Sergio Lucia
Appl. Sci. 2022, 12(16), 8239; https://doi.org/10.3390/app12168239 - 17 Aug 2022
Cited by 21 | Viewed by 10781
Abstract
Artificial intelligence (AI) has been successfully applied in industry for decades, from the emergence of expert systems in the 1960s to the wide popularity of deep learning today. In particular, inexpensive computing and storage infrastructure has moved data-driven AI methods into the spotlight to aid increasingly complex manufacturing processes. Despite the recent hype, however, non-negligible challenges remain when applying AI to smart manufacturing applications. To our knowledge, no work in the literature summarizes and reviews the related work on these challenges. This paper provides an executive summary of AI techniques for non-experts, with a focus on deep learning, and then discusses the open issues around data quality, data secrecy, and AI safety that are significant for fully automated industrial AI systems. For each challenge, we present state-of-the-art techniques that provide promising building blocks for holistic industrial AI solutions, together with industrial use cases from several domains, to give a concrete view of these techniques. All the examples reviewed were published in the last ten years. We hope this paper can serve readers as a reference for further study of the related problems. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
29 pages, 778 KiB  
Review
Energy-Aware Multi-Objective Job Shop Scheduling Optimization with Metaheuristics in Manufacturing Industries: A Critical Survey, Results, and Perspectives
by Jesus Para, Javier Del Ser and Antonio J. Nebro
Appl. Sci. 2022, 12(3), 1491; https://doi.org/10.3390/app12031491 - 29 Jan 2022
Cited by 20 | Viewed by 4403
Abstract
In recent years, the application of artificial intelligence has been revolutionizing the manufacturing industry, becoming one of the key pillars of what has been called Industry 4.0. In this context, we focus on the job shop scheduling problem (JSP), which aims at scheduling the production orders to be carried out while treating the reduction of energy consumption as a key objective. Finding the best assignment of jobs to machines is not a trivial problem, and it becomes even more involved when several objectives are taken into account. Among them, the improvement of energy savings may conflict with other objectives, such as the minimization of the makespan. In this paper, we provide an in-depth review of the existing literature on multi-objective job shop scheduling optimization with metaheuristics in which one of the objectives is the minimization of energy consumption. We systematically review and critically analyze the most relevant features of both the problem formulations and the algorithms designed to solve them effectively. The manuscript also supports the main findings of our bibliographic critique with empirical results, comparing representative multi-objective evolutionary solvers on a diversity of synthetic test instances. The ultimate goal of this article is a critical analysis that identifies good practices and opportunities for further improvement stemming from current knowledge in this vibrant research area.
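The makespan-versus-energy tension described in this abstract can be illustrated with a minimal sketch. The model below is hypothetical and not the paper's own benchmark: each machine runs at a speed factor s, so an operation of nominal length p takes p / s time units but consumes energy proportional to p * s (a simple speed-scaling formulation commonly used in energy-aware scheduling studies). The `evaluate` function and the instance data are illustrative names introduced here.

```python
# Illustrative bi-objective evaluation for a tiny job shop instance
# (hypothetical speed-scaling model, not the benchmark used in the paper).
# Each job is an ordered list of (machine, nominal_time) operations; running
# a machine at speed factor s shortens an operation to p / s time units but
# consumes energy proportional to p * s, so makespan and energy conflict.

def evaluate(jobs, job_order, speed):
    """Return (makespan, energy) for a naive priority-based list schedule."""
    machine_free = {m: 0.0 for m in speed}   # time at which each machine is next idle
    job_ready = {j: 0.0 for j in jobs}       # time at which each job's next op may start
    next_op = {j: 0 for j in jobs}           # index of the next unscheduled op per job
    energy = 0.0

    remaining = sum(len(ops) for ops in jobs.values())
    while remaining:
        # dispatch the highest-priority job that still has operations left
        for j in job_order:
            if next_op[j] < len(jobs[j]):
                m, p = jobs[j][next_op[j]]
                finish = max(machine_free[m], job_ready[j]) + p / speed[m]
                machine_free[m] = job_ready[j] = finish
                energy += p * speed[m]
                next_op[j] += 1
                remaining -= 1
                break

    return max(machine_free.values()), energy


# Doubling machine speeds halves processing times but doubles energy use,
# exposing the trade-off that multi-objective solvers explore.
jobs = {"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 2), ("M1", 4)]}
print(evaluate(jobs, ["J1", "J2"], {"M1": 1.0, "M2": 1.0}))  # slower, frugal
print(evaluate(jobs, ["J1", "J2"], {"M1": 2.0, "M2": 2.0}))  # faster, costly
```

In a real metaheuristic study, a solver such as NSGA-II would search over job orders (and speed assignments) to approximate the Pareto front between these two objectives, rather than evaluating a single fixed schedule.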
(This article belongs to the Special Issue Applied Artificial Intelligence (AI))
