Topic Editors

Prof. Dr. Phivos Mylonas, Department of Informatics, Ionian University, 491 32 Corfu, Greece
Dr. Katia Lida Kermanidis, Department of Informatics, Ionian University, 491 32 Corfu, Greece
Prof. Dr. Manolis Maragoudakis, Department of Informatics, Ionian University, 491 32 Corfu, Greece

Artificial Intelligence Models, Tools and Applications

Abstract submission deadline: 31 May 2025
Manuscript submission deadline: 31 August 2025
Viewed by: 215635

Topic Information

Dear Colleagues,

During the difficult years since the start of the COVID-19 pandemic, the need for efficient artificial intelligence models, tools, and applications has been more evident than ever. Machine learning and data science, together with the huge amounts of data they consume and produce, form a clear new source of valuable information, and new, innovative approaches are required to tackle the research challenges that arise in this area. In this framework, artificial intelligence is crucial and may be described as one of the most important research areas of our time. For the research community, it also poses substantial challenges in data management and draws on emerging disciplines in information processing, together with the related tools and applications.

This Topic aims to bring together interdisciplinary approaches focusing on innovative applications of new and existing artificial intelligence methodologies. Because real-world data are typically heterogeneous and dynamic in nature, computer science researchers are encouraged to develop new artificial intelligence models, tools, and applications, or adapt existing ones, to solve such problems effectively. This Topic is therefore open to anyone who wishes to submit a relevant research manuscript.

Prof. Dr. Phivos Mylonas
Dr. Katia Lida Kermanidis
Prof. Dr. Manolis Maragoudakis
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • smart tools and applications
  • computational logic
  • multi-agent systems
  • cross-disciplinary AI applications

Participating Journals

Journal Name                 Impact Factor  CiteScore  Launched  First Decision (median)  APC
Applied Sciences (applsci)   2.5            5.3        2011      17.8 days                CHF 2400
Computers (computers)        2.6            5.4        2012      17.2 days                CHF 1800
Digital (digital)            -              3.1        2021      23.6 days                CHF 1000
Electronics (electronics)    2.6            5.3        2012      16.8 days                CHF 2400
Smart Cities (smartcities)   7.0            11.2       2018      25.8 days                CHF 2000

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of this by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (93 papers)

24 pages, 493 KiB  
Article
Timed Interpreted Systems as a New Agent-Based Formalism for Verification of Timed Security Protocols
by Agnieszka M. Zbrzezny, Olga Siedlecka-Lamch, Sabina Szymoniak, Andrzej Zbrzezny and Mirosław Kurkowski
Appl. Sci. 2024, 14(22), 10333; https://doi.org/10.3390/app142210333 - 10 Nov 2024
Viewed by 432
Abstract
This article introduces a new method for modelling and verifying the execution of timed security protocols (TSPs) and their time-dependent security properties. The method, which is novel and reliable, uses an extension of interpreted systems, accessible semantics in multi-agent systems, and timed interpreted systems (TISs) with dense time semantics to model TSP executions. We enhance the models of TSPs by incorporating delays and varying lifetimes to capture real-life aspects of protocol executions. To illustrate the method, we model a timed version of the Needham–Schroeder Public Key Authentication Protocol. We have also developed a new bounded model checking reachability algorithm for the proposed structures, based on Satisfiability Modulo Theories (SMT), and implemented it in a tool. The method comprises a new procedure for modelling TSP executions, translating TSPs into TISs, and translating the TISs' reachability problem into the SMT problem. The paper also includes thorough experimental results for nine protocols modelled by TISs and discusses the findings in detail. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
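
To make the SMT-based reachability step concrete, the following minimal sketch encodes one timed transition with real-valued clocks using the z3-solver Python package. The lifetime and delay bounds are illustrative assumptions, not the authors' actual TIS-to-SMT translation.

```python
# Minimal sketch: SMT-based reachability for one timed step (z3-solver).
# The encoding below is illustrative, not the authors' TIS translation.
from z3 import Real, Solver, sat

t_send, t_recv = Real("t_send"), Real("t_recv")
lifetime = Real("lifetime")

s = Solver()
s.add(t_send >= 0, t_recv > t_send)       # dense-time ordering of events
s.add(lifetime == 5)                      # assumed timestamp lifetime
s.add(t_recv - t_send <= lifetime)        # message accepted before expiry
s.add(t_recv - t_send >= 2)               # assumed network delay lower bound

if s.check() == sat:
    print("state reachable, witness:", s.model())
else:
    print("unreachable within the given time bounds")
```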

19 pages, 1699 KiB  
Article
Deep Speech Synthesis and Its Implications for News Verification: Lessons Learned in the RTVE-UGR Chair
by Daniel Calderón-González, Nieves Ábalos, Blanca Bayo, Pedro Cánovas, David Griol, Carlos Muñoz-Romero, Carmen Pérez, Pere Vila and Zoraida Callejas
Appl. Sci. 2024, 14(21), 9916; https://doi.org/10.3390/app14219916 - 30 Oct 2024
Viewed by 512
Abstract
This paper presents the multidisciplinary work carried out in the RTVE-UGR Chair within the IVERES project, whose main objective is the development of a tool for journalists to verify the veracity of the audio recordings that reach newsrooms. In the current context, voice synthesis has both beneficial and detrimental applications, with audio deepfakes being a significant concern in the world of journalism due to their ability to mislead and misinform. This is a multifaceted problem that can only be tackled by adopting a multidisciplinary perspective. In this article, we describe the approach we adopted within the RTVE-UGR Chair to successfully address the challenges derived from audio deepfakes, involving a team with different backgrounds and a specific methodology of iterative co-creation. As a result, we present several outcomes including the compilation and generation of audio datasets, the development and deployment of several audio fake detection models, and the development of a web audio verification tool aimed at journalists. In conclusion, we highlight the importance of this systematic collaborative work in the fight against misinformation and the future potential of audio verification technologies in various applications. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
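
As a toy illustration of audio fake detection, the hedged sketch below trains a linear classifier on log band energies; the synthetic "real" and "fake" signals and the feature choice are placeholders, far simpler than the detection models developed in the project.

```python
# Toy baseline: log band energies + logistic regression. The synthetic
# "real" and "fake" signals below are placeholders, not project data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def spectral_features(signal, n_bands=20):
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log1p([b.mean() for b in bands])   # log band energies

real = [rng.normal(size=16000) for _ in range(50)]
fake = [rng.normal(size=16000) * np.linspace(1.0, 0.2, 16000)  # altered envelope
        for _ in range(50)]
X = np.array([spectral_features(s) for s in real + fake])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])
print("holdout accuracy:", clf.score(X[1::2], y[1::2]))
```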

18 pages, 5751 KiB  
Article
An Enhanced Particle Filtering Method Leveraging Particle Swarm Optimization for Simultaneous Localization and Mapping in Mobile Robots Navigating Unknown Environments
by Xu Bian, Wanqiu Zhao, Ling Tang, Hong Zhao and Xuesong Mei
Appl. Sci. 2024, 14(20), 9426; https://doi.org/10.3390/app14209426 - 16 Oct 2024
Viewed by 565
Abstract
With the rapid advancement of mobile robotics technology, Simultaneous Localization and Mapping (SLAM) has become indispensable for enabling robots to autonomously navigate and construct maps of unknown environments in real time. Traditional SLAM algorithms, such as the Extended Kalman Filter (EKF) and FastSLAM, have shown commendable performance in certain applications. However, they encounter significant limitations when dealing with nonlinear systems and non-Gaussian noise distributions, especially in dynamic and complex environments, and they suffer from high computational complexity. To address these challenges, this study proposes an enhanced particle filtering method leveraging particle swarm optimization (PSO) to improve the accuracy of pose estimation and the efficacy of map construction in SLAM algorithms. We begin by elucidating the foundational principles of FastSLAM and its critical role in empowering robots with the ability to autonomously explore and map unknown territories. Subsequently, we delve into the innovative integration of PSO with FastSLAM, highlighting our novel approach of designing a bespoke fitness function tailored to enhance the distribution of particles. This innovation is pivotal in mitigating the degradation issues associated with particle filtering, thereby significantly improving the estimation accuracy and robustness of the SLAM solution in various operational scenarios. A series of simulation experiments and tests were conducted to substantiate the efficacy of the proposed method across diverse environments. The experimental outcomes demonstrate that, compared to the standard particle filtering algorithm, the PSO-enhanced particle filtering effectively mitigates the issue of particle degeneration, ensuring reliable and accurate SLAM performance even in challenging, unknown environments. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
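
The core idea, nudging particles toward high-likelihood regions with a PSO update before resampling, can be sketched in a few lines of NumPy. The Gaussian observation model and PSO coefficients below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: a PSO-style refinement step inside a particle filter update.
# The likelihood model and PSO coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(poses, z):
    # Assumed Gaussian observation model around measurement z.
    return np.exp(-0.5 * np.sum((poses - z) ** 2, axis=1))

def pso_refine(particles, z, iters=10, w=0.5, c1=1.0, c2=1.5):
    vel = np.zeros_like(particles)
    pbest = particles.copy()
    pbest_fit = likelihood(pbest, z)
    for _ in range(iters):
        gbest = pbest[np.argmax(pbest_fit)]
        r1, r2 = rng.random(particles.shape), rng.random(particles.shape)
        vel = w * vel + c1 * r1 * (pbest - particles) + c2 * r2 * (gbest - particles)
        particles = particles + vel
        fit = likelihood(particles, z)
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = particles[improved], fit[improved]
    return particles

particles = rng.normal(0.0, 2.0, size=(200, 2))   # (x, y) pose hypotheses
z = np.array([1.0, -0.5])                         # current observation
particles = pso_refine(particles, z)              # nudge before resampling
weights = likelihood(particles, z)
weights /= weights.sum()
idx = rng.choice(len(particles), size=len(particles), p=weights)
particles = particles[idx]                        # simple multinomial resampling
```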

20 pages, 1936 KiB  
Review
Physics Guided Neural Networks with Knowledge Graph
by Kishor Datta Gupta, Sunzida Siddique, Roy George, Marufa Kamal, Rakib Hossain Rifat and Mohd Ariful Haque
Digital 2024, 4(4), 846-865; https://doi.org/10.3390/digital4040042 - 10 Oct 2024
Viewed by 1084
Abstract
Over the past few decades, machine learning (ML) has demonstrated significant advancements in all areas of human existence. Machine learning and deep learning models rely heavily on data. Typically, basic ML and deep learning (DL) models receive input data and the matching outputs, and derive rules internally from these pairs. In a physics-guided model, rules relating inputs and outputs are additionally provided to guide the model's learning and improve its loss optimization. The concept of the physics-guided neural network (PGNN) is becoming increasingly popular among researchers and industry professionals. It has been applied in numerous fields such as healthcare, medicine, environmental science, and control systems. This review was conducted using four specific research questions. We obtained papers from six different sources and, based on the selected keywords, reviewed a total of 81 papers. In addition, we specifically address the difficulties and potential advantages of the PGNN. Our intention is for this review to provide guidance for aspiring researchers seeking a deeper understanding of the PGNN. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
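
A minimal sketch of the physics-guided idea: the training loss combines an ordinary data term with a penalty on violations of a known physical rule. The toy rule used here (predictions should not increase with depth) and the loss weighting are assumptions for illustration.

```python
# Sketch: physics-guided loss = data loss + weighted physics-residual penalty.
# The "physics" here (outputs should not grow with depth) is a toy assumption.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # assumed weight on the physics term

x = torch.rand(256, 3)                            # column 0 plays the role of depth
y = 2.0 - x[:, :1] + 0.05 * torch.randn(256, 1)   # synthetic targets

for step in range(200):
    pred = model(x)
    data_loss = nn.functional.mse_loss(pred, y)
    # Physics residual: prediction should not increase as depth increases.
    x_deeper = x.clone()
    x_deeper[:, 0] += 0.1
    phys_residual = torch.relu(model(x_deeper) - pred)   # violation magnitude
    loss = data_loss + lam * phys_residual.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```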

22 pages, 56552 KiB  
Article
Towards Urban Accessibility: Modeling Trip Distribution to Assess the Provision of Social Facilities
by Margarita Mishina, Sergey Mityagin, Alexander Belyi, Alexander Khrulkov and Stanislav Sobolevsky
Smart Cities 2024, 7(5), 2741-2762; https://doi.org/10.3390/smartcities7050106 - 18 Sep 2024
Cited by 1 | Viewed by 939
Abstract
Assessing the accessibility and provision of social facilities in urban areas presents a significant challenge, particularly when direct data on facility utilization are unavailable or incomplete. To address this challenge, our study investigates the potential of trip distribution models in estimating facility utilization based on the spatial distributions of population demand and facilities’ capacities within a city. We first examine the extent to which traditional gravity-based and optimization-focused models can capture population–facilities interactions and provide a reasonable perspective on facility accessibility and provision. We then explore whether advanced deep learning techniques can produce more robust estimates of facility utilization when data are partially observed (e.g., when some of the district administrations collect and share these data). Our findings suggest that, while traditional models offer valuable insights into facility utilization, especially in the absence of direct data, their effectiveness depends on accurate assumptions about distance-related commute patterns. This limitation is addressed by our proposed novel deep learning model, incorporating supply–demand constraints, which demonstrates the ability to uncover hidden interaction patterns from partly observed data, resulting in accurate estimates of facility utilization and, thereby, more reliable provision assessments. We illustrate these findings through a case study on kindergarten accessibility in Saint Petersburg, Russia, offering urban planners a strategic toolkit for evaluating facility provision in data-limited contexts. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
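
The traditional gravity-based baseline the study starts from can be sketched as a doubly constrained model balanced by iterative proportional fitting; the deterrence parameter and the small demand/capacity vectors are illustrative assumptions.

```python
# Sketch: doubly constrained gravity model via iterative proportional fitting.
# Demand, capacities, costs, and the decay parameter beta are assumptions.
import numpy as np

demand = np.array([120.0, 80.0, 50.0])        # trips originating per district
capacity = np.array([100.0, 100.0, 50.0])     # facility capacities (totals match)
cost = np.array([[1.0, 3.0, 5.0],
                 [3.0, 1.0, 2.0],
                 [5.0, 2.0, 1.0]])            # travel cost matrix
beta = 0.8
f = np.exp(-beta * cost)                      # deterrence function

a = np.ones(3)
b = np.ones(3)
for _ in range(100):                          # balance to match both margins
    b = 1.0 / (f.T @ (a * demand))
    a = 1.0 / (f @ (b * capacity))
trips = (a * demand)[:, None] * f * (b * capacity)[None, :]
print(trips.round(1))
print(trips.sum(axis=1), trips.sum(axis=0))   # reproduces demand and capacity
```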

30 pages, 3456 KiB  
Article
Towards Next-Generation Urban Decision Support Systems through AI-Powered Construction of Scientific Ontology Using Large Language Models—A Case in Optimizing Intermodal Freight Transportation
by Jose Tupayachi, Haowen Xu, Olufemi A. Omitaomu, Mustafa Can Camur, Aliza Sharmin and Xueping Li
Smart Cities 2024, 7(5), 2392-2421; https://doi.org/10.3390/smartcities7050094 - 31 Aug 2024
Cited by 3 | Viewed by 1527
Abstract
The incorporation of Artificial Intelligence (AI) models into various optimization systems is on the rise. However, addressing complex urban and environmental management challenges often demands deep expertise in domain science and informatics. This expertise is essential for deriving data and simulation-driven insights that support informed decision-making. In this context, we investigate the potential of leveraging the pre-trained Large Language Models (LLMs) to create knowledge representations for supporting operations research. By adopting ChatGPT-4 API as the reasoning core, we outline an applied workflow that encompasses natural language processing, Methontology-based prompt tuning, and Generative Pre-trained Transformer (GPT), to automate the construction of scenario-based ontologies using existing research articles and technical manuals of urban datasets and simulations. From these ontologies, knowledge graphs can be derived using widely adopted formats and protocols, guiding various tasks towards data-informed decision support. The performance of our methodology is evaluated through a comparative analysis that contrasts our AI-generated ontology with the widely recognized pizza ontology, commonly used in tutorials for popular ontology software. We conclude with a real-world case study on optimizing the complex system of multi-modal freight transportation. Our approach advances urban decision support systems by enhancing data and metadata modeling, improving data integration and simulation coupling, and guiding the development of decision support strategies and essential software components. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
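
A hedged sketch of the prompting step is shown below; `call_llm` is a hypothetical helper standing in for whichever chat-completion client is used, and the JSON schema is an assumption rather than the paper's Methontology-based prompt design.

```python
# Sketch: prompting an LLM to extract ontology classes and relations from a
# technical text, then emitting triples. `call_llm` is a hypothetical helper.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion API of choice")

PROMPT = """Extract an ontology from the text below as JSON with keys
"classes" and "relations" (each relation has subject, predicate, object).
Text:
{text}"""

def ontology_from_text(text: str):
    spec = json.loads(call_llm(PROMPT.format(text=text)))
    triples = [(r["subject"], r["predicate"], r["object"])
               for r in spec["relations"]]
    return spec["classes"], triples

# classes, triples = ontology_from_text(open("freight_manual.txt").read())
# The triples can then be serialized to a knowledge-graph format such as Turtle.
```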

18 pages, 5060 KiB  
Article
Generative Adversarial Network-Based Voltage Fault Diagnosis for Electric Vehicles under Unbalanced Data
by Weidong Fang, Yihan Guo and Ji Zhang
Electronics 2024, 13(16), 3131; https://doi.org/10.3390/electronics13163131 - 7 Aug 2024
Cited by 1 | Viewed by 1107
Abstract
Research on electric vehicle power battery fault diagnosis is increasingly turning to machine learning methods. However, during operation, faults occur far less often than normal driving, so the collected data contain too small a proportion of fault samples and only limited fault characteristics. This has hindered research progress in this field. To address this problem, this paper proposes a data enhancement method using Least Squares Generative Adversarial Networks (LSGAN). The method trains LSGAN models on the original power battery fault dataset to generate diverse sample data representing various fault states. The augmented dataset is then used to develop a fault diagnosis framework called LSGAN-RF-GWO, which combines a random forest (RF) model with a Gray Wolf Optimization (GWO) model for effective fault diagnosis. The performance of the framework is evaluated on the original and enhanced datasets and compared with other commonly used models such as Support Vector Machine (SVM), Gradient Boosting Machine (GBM), and Naïve Bayes (NB). The results show that the proposed fault diagnosis scheme improves the evaluation metrics and accuracy level, proving that the LSGAN-RF-GWO framework can utilize limited data resources to effectively diagnose power battery faults. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
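
The least-squares GAN objective at the heart of the augmentation step is compact enough to sketch in PyTorch; the network sizes and the stand-in 8-dimensional fault vectors are assumptions.

```python
# Sketch: LSGAN losses for tabular fault samples. Network sizes and the
# 8-dimensional feature space are placeholder assumptions.
import torch
import torch.nn as nn

dim, z_dim = 8, 16
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(128, dim)          # stand-in for real fault feature vectors
for step in range(1000):
    z = torch.randn(128, z_dim)
    fake = G(z)
    # Discriminator: least-squares targets, D(real) -> 1 and D(fake) -> 0.
    d_loss = 0.5 * ((D(real) - 1) ** 2).mean() + 0.5 * (D(fake.detach()) ** 2).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: push D(fake) toward 1.
    g_loss = 0.5 * ((D(G(z)) - 1) ** 2).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```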

28 pages, 16382 KiB  
Review
Overview of Pest Detection and Recognition Algorithms
by Boyu Guo, Jianji Wang, Minghui Guo, Miao Chen, Yanan Chen and Yisheng Miao
Electronics 2024, 13(15), 3008; https://doi.org/10.3390/electronics13153008 - 30 Jul 2024
Cited by 1 | Viewed by 1574
Abstract
Detecting and recognizing pests are paramount for ensuring the healthy growth of crops, maintaining ecological balance, and enhancing food production. With the advancement of artificial intelligence technologies, traditional pest detection and recognition algorithms based on manually selected pest features have gradually been substituted by deep learning-based algorithms. In this review paper, we first introduce the primary neural network architectures and evaluation metrics in the field of pest detection and pest recognition. Subsequently, we summarize widely used public datasets for pest detection and recognition. Following this, we present various pest detection and recognition algorithms proposed in recent years, providing detailed descriptions of each algorithm and their respective performance metrics. Finally, we outline the challenges that current deep learning-based pest detection and recognition algorithms encounter and propose future research directions for related algorithms. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

15 pages, 1314 KiB  
Article
Joint Extraction Method for Hydraulic Engineering Entity Relations Based on Multi-Features
by Yang Liu, Xingzhi Wang, Xuemei Liu, Zehong Ren, Yize Wang and Qianqian Cai
Electronics 2024, 13(15), 2979; https://doi.org/10.3390/electronics13152979 - 28 Jul 2024
Viewed by 864
Abstract
During the joint extraction of entities and relationships from the operational management data of hydraulic engineering, complex sentences containing multiple triplets and overlapping entity relations often arise. However, traditional joint extraction models rely on a single-feature representation approach, which hampers the effectiveness of entity relation extraction in complex sentences within hydraulic engineering datasets. To address this issue, this study proposes a multi-feature joint entity relation extraction method based on a global context mechanism and graph convolutional neural networks. This method builds upon the Bidirectional Encoder Representations from Transformers (BERT) pre-trained model and utilizes a bidirectional gated recurrent unit (BiGRU) and global context mechanism (GCM) to supplement the contextual and global features of sentences. Subsequently, a graph convolutional network (GCN) based on syntactic dependencies is employed to learn inter-word dependency features, enhancing the model's knowledge representation capabilities for complex sentences. Experimental results demonstrate the effectiveness of the proposed model in the joint extraction task on hydraulic engineering datasets. The precision, recall, and F1-score are 86.5%, 84.1%, and 85.3%, respectively, all outperforming the baseline model. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
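
The dependency-graph ingredient can be sketched as a single graph-convolution pass over a syntactic adjacency matrix; the toy sentence graph and dimensions below are assumptions.

```python
# Sketch: one graph-convolution pass over a syntactic dependency adjacency
# matrix, the ingredient used to learn inter-word dependency features.
# Dimensions and the toy sentence graph are assumptions.
import torch

def gcn_layer(H, A, W):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(1).pow(-0.5)
    A_norm = d.unsqueeze(1) * A_hat * d.unsqueeze(0)
    return torch.relu(A_norm @ H @ W)

n_words, d_in, d_out = 5, 16, 16
H = torch.randn(n_words, d_in)            # token features (e.g., from a BiGRU)
A = torch.zeros(n_words, n_words)
for head, dep in [(1, 0), (1, 2), (3, 1), (3, 4)]:   # toy dependency arcs
    A[head, dep] = A[dep, head] = 1.0
W = torch.randn(d_in, d_out) * 0.1
H_out = gcn_layer(H, A, W)                # dependency-aware word features
print(H_out.shape)
```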

28 pages, 3159 KiB  
Article
VEPO-S2S: A VEssel Portrait Oriented Trajectory Prediction Model Based on S2S Framework
by Xinyi Yang, Zhonghe Han, Yuanben Zhang, Hu Liu, Siye Liu, Wanzheng Ai and Junyi Liu
Appl. Sci. 2024, 14(14), 6344; https://doi.org/10.3390/app14146344 - 20 Jul 2024
Viewed by 917
Abstract
The prediction of vessel trajectories plays a crucial role in ensuring maritime safety and reducing maritime accidents. Substantial progress has been made in trajectory prediction tasks by adopting sequence modeling methods, including recurrent neural networks (RNNs) and sequence-to-sequence networks (Seq2Seq). However, (1) most of these studies focus on the application of trajectory information, such as the longitude, latitude, course, and speed, while neglecting the impact of differing vessel features and behavioral preferences on the trajectories. (2) Challenges remain in acquiring these features and preferences, as well as enabling the model to sensibly integrate and efficiently express them. To address these issues, we introduce a novel deep framework VEPO-S2S, consisting of a Multi-level Vessel Trajectory Representation Module (Multi-Rep) and a Feature Fusion and Decoding Module (FFDM). Apart from the trajectory information, we first defined the Multi-level Vessel Characteristics in Multi-Rep, encompassing Shallow-level Attributes (vessel length, width, draft, etc.) and Deep-level Features (Sailing Location Preference, Voyage Time Preference, etc.). Subsequently, Multi-Rep was designed to obtain trajectory information and Multi-level Vessel Characteristics, applying distinct encoders for encoding. Next, the FFDM selected and integrated the above features from Multi-Rep for prediction by employing both a priori and a posteriori mechanisms, a Feature Fusion Component, and an enhanced decoder. This allows the model to efficiently leverage them and enhance overall performance. Finally, we conducted comparative experiments with several baseline models. The experimental results demonstrate that VEPO-S2S is both quantitatively and qualitatively superior to these baselines. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
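
A minimal sketch of the general idea, a Seq2Seq predictor whose decoder is conditioned on static vessel features, is given below; the sizes and the simple concatenation-based fusion are assumptions, not VEPO-S2S's actual a priori/a posteriori mechanisms.

```python
# Sketch: a Seq2Seq trajectory predictor with a static vessel feature vector
# fused into the decoder state. Sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class TrajSeq2Seq(nn.Module):
    def __init__(self, traj_dim=4, vessel_dim=6, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(traj_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(hidden + vessel_dim, hidden)
        self.decoder = nn.GRU(traj_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, traj_dim)

    def forward(self, past, vessel, steps=5):
        _, h = self.encoder(past)                  # summarize the history
        h = torch.tanh(self.fuse(torch.cat([h[-1], vessel], dim=-1))).unsqueeze(0)
        step_in, outs = past[:, -1:, :], []
        for _ in range(steps):                     # autoregressive decoding
            out, h = self.decoder(step_in, h)
            step_in = self.head(out)
            outs.append(step_in)
        return torch.cat(outs, dim=1)

past = torch.randn(2, 10, 4)      # lon, lat, course, speed over 10 steps
vessel = torch.randn(2, 6)        # static attributes: length, width, draft, ...
print(TrajSeq2Seq()(past, vessel).shape)   # (2, 5, 4) predicted steps
```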

27 pages, 11740 KiB  
Article
Site-Specific Deterministic Temperature and Dew Point Forecasts with Explainable and Reliable Machine Learning
by Mengmeng Han, Tennessee Leeuwenburg and Brad Murphy
Appl. Sci. 2024, 14(14), 6314; https://doi.org/10.3390/app14146314 - 19 Jul 2024
Viewed by 937
Abstract
Site-specific weather forecasts are essential for accurate prediction of power demand and are consequently of great interest to energy operators. However, weather forecasts from current numerical weather prediction (NWP) models lack the fine-scale detail to capture all important characteristics of localised real-world sites. Instead, they provide weather information representing a rectangular gridbox (usually kilometres in size). Even after post-processing and bias correction, area-averaged information is usually not optimal for specific sites. Prior work on site-optimised forecasts has focused on linear methods, weighted consensus averaging, and time-series methods, among others. Recent developments in machine learning (ML) have prompted increasing interest in applying ML as a novel approach towards this problem. In this study, we investigate the feasibility of optimising forecasts at sites by adopting the popular machine learning model “gradient boosted decision tree”, supported by the XGBoost package (v.1.7.3) in the Python language. Regression trees have been trained with historical NWP and site observations as training data, aimed at predicting temperature and dew point at multiple site locations across Australia. We developed a working ML framework, named “Multi-SiteBoost”, and initial test results show a significant improvement compared with gridded values from bias-corrected NWP models. The improvement from XGBoost (0.1–0.6 °C, 4–27% improvement in temperature) is found to be comparable with non-ML methods reported in the literature. With the insights provided by SHapley Additive exPlanations (SHAP), this study also tests various approaches to understand the ML predictions and increase the reliability of the forecasts generated by ML. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
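
The core workflow, gradient-boosted trees on NWP-derived features with SHAP attributions, can be sketched as follows; the feature set and synthetic data are illustrative assumptions, not the Multi-SiteBoost configuration.

```python
# Sketch: gradient-boosted trees on NWP features for a site temperature target,
# with SHAP attributions. Feature names and data here are illustrative only.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))   # e.g., gridbox temp, wind, humidity, hour
y = 1.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=2000)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:1600], y[:1600])
rmse = np.sqrt(np.mean((model.predict(X[1600:]) - y[1600:]) ** 2))
print(f"holdout RMSE: {rmse:.3f}")

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[1600:1610])
print(shap_values.shape)          # per-sample, per-feature attributions
```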

21 pages, 22426 KiB  
Article
Intelligent Surveillance of Airport Apron: Detection and Location of Abnormal Behavior in Typical Non-Cooperative Human Objects
by Jun Li and Xiangqing Dong
Appl. Sci. 2024, 14(14), 6182; https://doi.org/10.3390/app14146182 - 16 Jul 2024
Viewed by 789
Abstract
Most airport surface surveillance systems focus on monitoring and commanding cooperative objects (vehicles) while neglecting the location and detection of non-cooperative objects (humans). Abnormal behavior by non-cooperative objects poses a potential threat to airport security. This study collects surveillance video data from civil aviation airports in several regions of China, and a non-cooperative abnormal behavior localization and detection framework (NC-ABLD) is established. As the focus of this paper, the proposed framework seamlessly integrates a multi-scale non-cooperative object localization module, a human keypoint detection module, and a behavioral classification module. The framework uses a serial structure, with multiple modules working in concert to determine the precise position, human keypoints, and behavioral class of non-cooperative objects in the airport field. In addition, since there is no publicly available rich dataset of airport aprons, we propose a dataset called IIAR-30, which consists of 1736 images of airport surfaces and 506 video clips in six frequently occurring behavioral categories. The results of experiments conducted on the IIAR-30 dataset show that the framework performs well compared to mainstream behavior recognition methods and achieves fine-grained localization and refined class detection of typical non-cooperative human abnormal behavior on airport apron surfaces. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

18 pages, 3930 KiB  
Article
Implementation of an Automatic Meeting Minute Generation System Using YAMNet with Speaker Identification and Keyword Prompts
by Ching-Ta Lu and Liang-Yu Wang
Appl. Sci. 2024, 14(13), 5718; https://doi.org/10.3390/app14135718 - 29 Jun 2024
Viewed by 1159
Abstract
Producing conference/meeting minutes requires a person to simultaneously identify a speaker and the speaking content during the course of the meeting. This recording process is a heavy task. Reducing the workload for meeting minutes is an essential task for most people. In addition, providing conference/meeting highlights in real time is helpful to the meeting process. In this study, we aim to implement an automatic meeting minutes generation system (AMMGS) for recording conference/meeting minutes. A speech recognizer transforms speech signals to obtain the conference/meeting text. Accordingly, the proposed AMMGS can reduce the effort in recording the minutes. All meeting members can concentrate on the meeting; taking minutes is unnecessary. The AMMGS includes speaker identification for Mandarin Chinese speakers, keyword spotting, and speech recognition. Transfer learning on YAMNet lets the network identify specified speakers, so the proposed AMMGS can automatically generate conference/meeting minutes with labeled speakers. Furthermore, the AMMGS applies the Jieba segmentation tool for keyword spotting. The system counts the frequency of word occurrences, and keywords are determined from the most frequent segmented words. These keywords help attendees stay on the agenda. The experimental results reveal that the proposed AMMGS can accurately identify speakers and recognize speech. Accordingly, the AMMGS can generate conference/meeting minutes while the keywords are spotted effectively. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
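
The keyword-spotting step can be sketched with the Jieba segmenter and a frequency count; the tiny stopword list is an illustrative assumption.

```python
# Sketch: frequency-based keyword spotting on a Mandarin transcript with Jieba.
# The tiny stopword list is an illustrative assumption.
from collections import Counter
import jieba

transcript = "本次会议讨论项目进度,项目经理汇报了项目进度与预算,预算需要审核。"
stopwords = {"的", "了", "与", "本次", "需要"}

words = [w for w in jieba.lcut(transcript)
         if len(w) > 1 and w not in stopwords]
keywords = Counter(words).most_common(3)
print(keywords)   # e.g., [('项目', 2), ('进度', 2), ('预算', 2)]
```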

50 pages, 3271 KiB  
Review
Unlocking Artificial Intelligence Adoption in Local Governments: Best Practice Lessons from Real-World Implementations
by Tan Yigitcanlar, Anne David, Wenda Li, Clinton Fookes, Simon Elias Bibri and Xinyue Ye
Smart Cities 2024, 7(4), 1576-1625; https://doi.org/10.3390/smartcities7040064 - 28 Jun 2024
Cited by 2 | Viewed by 4471
Abstract
In an era marked by rapid technological progress, the pivotal role of Artificial Intelligence (AI) is increasingly evident across various sectors, including local governments. These governmental bodies are progressively leveraging AI technologies to enhance service delivery to their communities, ranging from simple task automation to more complex engineering endeavours. As more local governments adopt AI, it is imperative to understand the functions, implications, and consequences of these advanced technologies. Despite the growing importance of this domain, a significant gap persists within the scholarly discourse. This study aims to bridge this void by exploring the applications of AI technologies within the context of local government service provision. Through this inquiry, it seeks to generate best practice lessons for local government and smart city initiatives. By conducting a comprehensive review of grey literature, we analysed 262 real-world AI implementations across 170 local governments worldwide. The findings underscore several key points: (a) there has been a consistent upward trajectory in the adoption of AI by local governments over the last decade; (b) local governments from China, the US, and the UK are at the forefront of AI adoption; (c) among local government AI technologies, natural language processing and robotic process automation emerge as the most prevalent ones; (d) local governments primarily deploy AI across 28 distinct services; and (e) information management, back-office work, and transportation and traffic management are leading domains in terms of AI adoption. This study enriches the existing body of knowledge by providing an overview of current AI applications within the sphere of local governance. It offers valuable insights for local government and smart city policymakers and decision-makers considering the adoption, expansion, or refinement of AI technologies in urban service provision. Additionally, it highlights the importance of using these insights to guide the successful integration and optimisation of AI in future local government and smart city projects, ensuring they meet the evolving needs of communities. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

24 pages, 21619 KiB  
Article
A Vehicle Velocity Prediction Method with Kinematic Segment Recognition
by Benxiang Lin, Chao Wei and Fuyong Feng
Appl. Sci. 2024, 14(12), 5030; https://doi.org/10.3390/app14125030 - 9 Jun 2024
Viewed by 1197
Abstract
Accurate vehicle velocity prediction is of great significance in vehicle energy distribution and road traffic management. In light of the high time variability of vehicle velocity itself and the limitation of single model prediction, a velocity prediction method based on K-means-QPSO-LSTM with kinematic segment recognition is proposed in this paper. Firstly, the K-means algorithm was used to cluster samples with similar characteristics together, extract kinematic fragment samples in typical driving conditions, calculate their feature parameters, and carry out principal component analysis on the feature parameters to achieve dimensionality reduction transformation of information. Then, the vehicle velocity prediction sub-neural network models based on long short-term memory (LSTM) optimized with the QPSO algorithm were trained under different driving condition datasets. Furthermore, the kinematic segment recognition and traditional vehicle velocity prediction were integrated to form an adaptive vehicle velocity prediction method based on driving condition identification. Finally, the current driving condition type was identified and updated in real-time during vehicle velocity prediction, and then the corresponding sub-LSTM model was used for vehicle velocity prediction. The simulation experiment demonstrated a significant enhancement in both the speed and accuracy of prediction through the proposed method. The proposed hybrid method has the potential to improve the accuracy and reliability of vehicle velocity prediction, making it applicable in various fields such as autonomous driving, traffic management, and energy management strategies for hybrid electric vehicles. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
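
The condition-identification idea can be sketched by clustering per-segment feature parameters after PCA and routing each new segment to its cluster's predictor; the features and the trivial per-cluster baseline below are assumptions standing in for the QPSO-optimized LSTMs.

```python
# Sketch: cluster kinematic segments by their feature parameters, then route
# a new segment to its cluster's predictor. Features and the per-cluster
# mean-speed baseline are assumptions standing in for the paper's LSTMs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
segments = rng.normal(60.0, 15.0, size=(300, 60))   # 300 segments of speeds

def features(seg):
    return [seg.mean(), seg.std(), np.abs(np.diff(seg)).mean(), seg.max()]

F = np.array([features(s) for s in segments])
pca = PCA(n_components=2).fit(F)                    # dimensionality reduction
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pca.transform(F))

cluster_mean = {k: segments[km.labels_ == k].mean() for k in range(3)}
new_seg = segments[0]
k = km.predict(pca.transform([features(new_seg)]))[0]
print("condition cluster:", k, "baseline prediction:", round(cluster_mean[k], 1))
```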

16 pages, 4629 KiB  
Article
Characterizing Smart Cities Based on Artificial Intelligence
by Laaziza Hammoumi, Mehdi Maanan and Hassan Rhinane
Smart Cities 2024, 7(3), 1330-1345; https://doi.org/10.3390/smartcities7030056 - 7 Jun 2024
Cited by 7 | Viewed by 2294
Abstract
Cities worldwide are attempting to be labelled as smart, but truly qualifying as such remains a great challenge. This study aims to use artificial intelligence (AI) to classify the performance of smart cities and identify the factors linked to their smartness. Based on residents' perceptions of urban structures and technological applications, this study included 200 cities globally. For 147 cities, we gathered the perceptions of 120 residents per city through a survey of 39 questions covering two main pillars: 'Structures', referring to the existing infrastructure of the city, and the 'Technology' pillar that describes the technological provisions and services available to the inhabitants. These pillars were evaluated across five key areas: health and safety, mobility, activities, opportunities, and governance. For the remaining 53 cities, scores were derived by analyzing pertinent data collected from various online resources. Multiple machine learning algorithms, including Random Forest, Artificial Neural Network, Support Vector Machine, and Gradient Boost, were tested and compared in order to select the best one. The results showed that Random Forest and the Artificial Neural Network are the best trained models that achieved the highest levels of accuracy. This study provides a robust framework for using machine learning to identify and assess smart cities, offering valuable insights for future research and urban planning. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
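
A hedged sketch of the model-comparison step: cross-validated accuracy for two of the tested classifier families on placeholder survey features. The data are randomly generated stand-ins, not the study's city survey.

```python
# Sketch: comparing classifiers for smart-city performance labels; the survey
# features and labels below are randomly generated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 39))                        # 39 survey questions per city
y = (X[:, :5].mean(axis=1) > 0.5).astype(int)    # placeholder smartness label

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200)),
                  ("gradient boost", GradientBoostingClassifier())]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```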

26 pages, 6942 KiB  
Article
Effectiveness of the Fuzzy Logic Control to Manage the Microclimate Inside a Smart Insulated Greenhouse
by Jamel Riahi, Hamza Nasri, Abdelkader Mami and Silvano Vergura
Smart Cities 2024, 7(3), 1304-1329; https://doi.org/10.3390/smartcities7030055 - 6 Jun 2024
Cited by 1 | Viewed by 1080
Abstract
Agricultural greenhouses incorporate intricate systems to regulate the internal climate. Among the crucial climatic variables, indoor temperature and humidity take precedence in establishing an optimal environment for plant production and growth. The present research emphasizes the efficacy of employing intelligent control systems in the automation of the indoor climate for smart insulated greenhouses (SIGs), utilizing a fuzzy logic controller (FLC). This paper proposes the use of an FLC to reduce the energy consumption of a greenhouse. In the first step, a thermodynamic model is presented and experimentally validated based on thermal heat exchanges between the indoor and outdoor climatic variables. The outcomes show the effectiveness of the proposed model in controlling indoor air temperature and relative humidity with a low error percentage. Secondly, several fuzzy logic control models have been developed to regulate the indoor temperature and humidity for cold and hot periods. The results show the good performance of the proposed FLC model as highlighted by the statistical analysis. In fact, the root mean squared error (RMSE) is very small and equal to 0.69% for temperature and 0.23% for humidity, whereas the efficiency factor (EF) of the fuzzy logic control is equal to 99.35% for temperature control and 99.86% for humidity control. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
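
A minimal sketch of the fuzzy-control idea, fuzzification with triangular memberships, a small rule base, and weighted-average defuzzification, is shown below; the membership breakpoints and rules are illustrative assumptions, not the paper's FLC design.

```python
# Sketch: a tiny Mamdani-style fuzzy controller for greenhouse heating power.
# Membership breakpoints and rules are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def heater_command(temp_c):
    # Fuzzify indoor temperature.
    cold = tri(temp_c, 5, 10, 18)
    ok = tri(temp_c, 15, 21, 27)
    hot = tri(temp_c, 24, 32, 40)
    # Rules: cold -> high power (1.0), ok -> low (0.3), hot -> off (0.0).
    strengths = np.array([cold, ok, hot])
    outputs = np.array([1.0, 0.3, 0.0])
    # Weighted-average defuzzification.
    return float((strengths * outputs).sum() / max(strengths.sum(), 1e-9))

for t in [8, 20, 30]:
    print(t, "degC ->", round(heater_command(t), 2))
```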

17 pages, 2534 KiB  
Article
LeakPred: An Approach for Identifying Components with Resource Leaks in Android Mobile Applications
by Josias Gomes Lima, Rafael Giusti and Arilo Claudio Dias-Neto
Computers 2024, 13(6), 140; https://doi.org/10.3390/computers13060140 - 3 Jun 2024
Viewed by 649
Abstract
Context: Mobile devices contain resources, such as the camera, battery, and memory, that are allocated, used, and then deallocated by mobile applications. Whenever a resource is allocated and not correctly released, a defect called a resource leak occurs, which can cause crashes and slowdowns. Objective: In this study, we intended to demonstrate the usefulness of the LeakPred approach in terms of the number of components with resource leak problems identified in applications. Method: We compared the approach's effectiveness with three state-of-the-art methods in identifying leaks in 15 Android applications. Result: LeakPred obtained the best median (85.37%) of components with identified leaks, the best coverage (96.15%) of the classes of leaks that could be identified in the applications, and an accuracy of 81.25%. The Android Lint method achieved the second best median (76.92%) and the highest accuracy (100%), but only covered 1.92% of the leak classes. Conclusions: LeakPred is effective in identifying leaky components in applications. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

13 pages, 432 KiB  
Article
Deep Pre-Training Transformers for Scientific Paper Representation
by Jihong Wang, Zhiguang Yang and Zhanglin Cheng
Electronics 2024, 13(11), 2123; https://doi.org/10.3390/electronics13112123 - 29 May 2024
Cited by 2 | Viewed by 1500
Abstract
In the age of scholarly big data, efficiently navigating and analyzing the vast corpus of scientific literature is a significant challenge. This paper introduces a specialized pre-trained BERT-based language model, termed SPBERT, which enhances natural language processing tasks specifically tailored to the domain of scientific paper analysis. Our method employs a novel neural network embedding technique that leverages textual components, such as keywords, titles, abstracts, and full texts, to represent papers in a vector space. By integrating recent advancements in text representation and unsupervised feature aggregation, SPBERT offers a sophisticated approach to encode essential information implicitly, thereby enhancing paper classification and literature retrieval tasks. We applied our method to several real-world academic datasets, demonstrating notable improvements over existing methods. The findings suggest that SPBERT not only provides a more effective representation of scientific papers but also facilitates a deeper understanding of large-scale academic data, paving the way for more informed and accurate scholarly analysis. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
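
The representation step can be sketched with mean-pooled BERT embeddings and cosine similarity; the generic bert-base-uncased checkpoint is used here because SPBERT's own weights are not assumed to be available.

```python
# Sketch: mean-pooled BERT embeddings for paper text, then cosine similarity.
# Uses the generic bert-base-uncased checkpoint; SPBERT's own weights are not
# assumed to be publicly available.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)          # mean over real tokens

papers = ["Graph neural networks for molecule property prediction.",
          "BERT-based representations for scientific literature retrieval."]
query = embed(["pretrained language models for paper search"])
sims = torch.nn.functional.cosine_similarity(query, embed(papers))
print(sims)   # higher score = more related paper
```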

17 pages, 3557 KiB  
Article
EDUNet++: An Enhanced Denoising Unet++ for Ice-Covered Transmission Line Images
by Yu Zhang, Yinke Dou, Liangliang Zhao, Yangyang Jiao and Dongliang Guo
Electronics 2024, 13(11), 2085; https://doi.org/10.3390/electronics13112085 - 27 May 2024
Cited by 1 | Viewed by 710
Abstract
New technology has made it possible to monitor and analyze the condition of ice-covered transmission lines based on images. However, the collected images are frequently accompanied by noise, which results in inaccurate monitoring. Therefore, this paper proposes an enhanced denoising Unet++ for ice-covered transmission line images (EDUNet++). This algorithm mainly comprises three modules: a feature encoding and decoding module (FEADM), a shared source feature fusion module (SSFFM), and an error correction module (ECM). In the FEADM, a residual attention module (RAM) and a multilevel feature attention module (MFAM) are proposed. The RAM incorporates a cascaded residual structure and a hybrid attention mechanism that effectively preserve the mapping of feature information. The MFAM uses dilated convolution to obtain features at different levels, and then uses feature attention for weighting. This module effectively combines local and global features, which can better capture the details and texture information in the image. In the SSFFM, the source features are fused to preserve low-frequency information like texture and edges in the image, hence enhancing the realism and clarity of the image. The ECM utilizes the discrepancy between the generated image and the original image to effectively capture all the potential information in the image, hence enhancing the realism of the generated image. We employ a novel piecewise joint loss. On the dataset of ice-covered transmission lines, PSNR (peak signal to noise ratio) and SSIM (structural similarity) achieved values of 29.765 dB and 0.968, respectively. Additionally, the visual effects exhibited more distinct detailed features. The proposed method exhibits superior noise suppression capabilities and robustness compared to alternative approaches. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
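
The two reported metrics can be sketched directly; note that the single-window SSIM below is a simplification of the sliding-window version used by standard libraries.

```python
# Sketch: PSNR and a simple global SSIM for denoised images, the two metrics
# reported for EDUNet++. Inputs are arrays scaled to [0, 255].
import numpy as np

def psnr(clean, denoised, peak=255.0):
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    # Single-window SSIM; library versions use local sliding windows.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (128, 128)).astype(float)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)
print(round(psnr(clean, noisy), 2), round(ssim_global(clean, noisy), 3))
```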

18 pages, 7828 KiB  
Article
A Few-Shot Object Detection Method for Endangered Species
by Hongmei Yan, Xiaoman Ruan, Daixian Zhu, Haoran Kong and Peixuan Liu
Appl. Sci. 2024, 14(11), 4443; https://doi.org/10.3390/app14114443 - 23 May 2024
Cited by 1 | Viewed by 806
Abstract
Endangered species detection plays an important role in biodiversity conservation and is significant in maintaining ecological balance. Existing deep learning-based object detection methods are overly dependent on a large number of supervised samples, and building such endangered species datasets is usually costly. Aiming at the problems faced by endangered species detection, such as low accuracy and easy loss of location information, an efficient endangered species detection method with fewer samples is proposed to extend the few-shot object detection technique to the field of endangered species detection, which requires only a small number of training samples to obtain excellent detection results. First, SE-Res2Net is proposed to optimize the feature extraction capability. Secondly, an RPN network with multiple attention mechanisms is proposed. Finally, for the classification confusion problem, a weighted prototype-based comparison branch is introduced to construct weighted category prototype vectors, which effectively improves the performance of the original classifier. Under the setting of 30 samples in the endangered species dataset, the average detection accuracy value of the method, mAP50, reaches 76.54%, which is 7.98% higher than that of the pre-improved FSCE method. This paper also compares the algorithm on the PASCAL VOC dataset, where it outperforms five other algorithms and shows good generalization ability. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
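
The weighted-prototype idea can be sketched in NumPy: each support embedding contributes to its class prototype in proportion to how consistent it is with the other shots. The specific weighting scheme below is an illustrative assumption.

```python
# Sketch: weighted category prototypes for few-shot classification. The
# similarity-based weighting of support samples is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 3, 5, 64
support = rng.normal(size=(n_way, k_shot, dim))   # embedded support samples
query = rng.normal(size=(dim,))

def weighted_prototype(shots):
    # Weight each shot by its mean similarity to the other shots, so that
    # outlier support samples contribute less to the class prototype.
    sims = shots @ shots.T
    w = sims.sum(axis=1) - sims.diagonal()
    w = np.exp((w - w.max()) / (np.abs(w).max() + 1e-9))   # stable weighting
    return (w / w.sum()) @ shots

protos = np.stack([weighted_prototype(support[c]) for c in range(n_way)])
scores = (protos @ query) / (np.linalg.norm(protos, axis=1) * np.linalg.norm(query))
print("predicted class:", int(scores.argmax()))
```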

14 pages, 1252 KiB  
Article
An Artificial Intelligence Approach for Estimating the Turbidity of Artisanal Wine and Dosage of Clarifying Agents
by Erika Mishell De La Cruz Rojas, Jimmy Nuñez-Pérez, Marco Lara-Fiallos, José-Manuel Pais-Chanfrau, Rosario Espín-Valladares and Juan Carlos DelaVega-Quintero
Appl. Sci. 2024, 14(11), 4416; https://doi.org/10.3390/app14114416 - 23 May 2024
Viewed by 1073
Abstract
Red wine is a beverage consumed worldwide and contains suspended solids that cause turbidity. The study’s purpose was to mathematically model estimated turbidity in artisanal wines concerning the dosage and types of fining agents based on previous studies presenting positive results. Burgundy grape wine (Vitis labrusca) was made and clarified with ‘yausabara’ (Pavonia sepium) and bentonite at different concentrations. The system was modelled using several machine learning models, including MATLAB’s Neural Net Fitting and Regression Learner applications. The results showed that the validation of the neural network trained with the Levenberg–Marquardt algorithm obtained significant statistical indicators, such as a coefficient of determination (R2) of 0.985, a mean square error (MSE) of 0.004, a normalized root mean square error (NRMSE) of 6.01, and an Akaike information criterion (AIC) of −160.12, selecting it as the representative model of the system. It presents an objective and simple alternative for measuring wine turbidity that is useful for artisanal winemakers who can improve quality and consistency. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
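
A hedged sketch of the fitting workflow using scikit-learn's MLPRegressor as a stand-in for the Levenberg-Marquardt-trained MATLAB network; the synthetic dose-turbidity data are assumptions.

```python
# Sketch: fitting a small regression network to (dose, clarifier) -> turbidity
# data, analogous to the MATLAB Neural Net Fitting workflow in the study.
# The synthetic data and network size are assumptions; MLPRegressor stands in
# for the Levenberg-Marquardt-trained MATLAB network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
dose = rng.uniform(0, 2, 200)                   # clarifier dose (g/L)
agent = rng.integers(0, 2, 200)                 # 0 = yausabara, 1 = bentonite
turbidity = 80 * np.exp(-1.5 * dose) + 5 * agent + rng.normal(0, 2, 200)

X = np.column_stack([dose, agent])
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X[:150], turbidity[:150])
print("R2:", round(r2_score(turbidity[150:], net.predict(X[150:])), 3))
```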

15 pages, 4056 KiB  
Article
Advanced Swine Management: Infrared Imaging for Precise Localization of Reproductive Organs in Livestock Monitoring
by Iyad Almadani, Brandon Ramos, Mohammed Abuhussein and Aaron L. Robinson
Digital 2024, 4(2), 446-460; https://doi.org/10.3390/digital4020022 - 2 May 2024
Cited by 1 | Viewed by 1338
Abstract
Traditional methods for predicting sow reproductive cycles are not only costly but also demand a larger workforce, exposing workers to respiratory toxins, repetitive stress injuries, and chronic pain. This occupational hazard can even lead to mental health issues due to repeated exposure to violence. Managing health and welfare issues becomes pivotal in group-housed animal settings, where individual care is challenging on large farms with limited staff. The necessity for computer vision systems to analyze sow behavior and detect deviations indicative of health problems is apparent. Beyond observing changes in behavior and physical traits, computer vision can accurately detect estrus based on vulva characteristics and analyze thermal imagery for temperature changes, which are crucial indicators of estrus. By automating estrus detection, farms can significantly enhance breeding efficiency, ensuring optimal timing for insemination. These systems work continuously, promptly alerting staff to anomalies for early intervention. In this research, we propose part of the solution by utilizing an image segmentation model to localize the vulva. We created our technique to identify vulvae on pig farms using infrared imagery. To accomplish this, we initially isolate the vulva region by enclosing it within a red rectangle and then generate vulva masks by applying a threshold to the red area. The system is trained using U-Net semantic segmentation, where the input for the system consists of grayscale images and their corresponding masks. We utilize U-Net semantic segmentation to find the vulva in the input image, making it lightweight, simple, and robust enough to be tested on many images. To evaluate the performance of our model, we employ the intersection over union (IOU) metric, which is a suitable indicator for determining the model’s robustness. For the segmentation model, a prediction is generally considered ‘good’ when the intersection over union score surpasses 0.5. Our model achieved this criterion with a score of 0.58, surpassing the scores of alternative methods such as the SVM with Gabor (0.515) and YOLOv3 (0.52). Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
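
The evaluation metric is easy to state exactly; the sketch below computes intersection over union for binary masks, with scores above 0.5 counting as a 'good' prediction as in the paper.

```python
# Sketch: intersection-over-union for binary segmentation masks, the metric
# used to judge the segmentation (scores above 0.5 counted as 'good').
import numpy as np

def iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0   # two empty masks count as a match

pred = np.zeros((64, 64)); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64)); truth[25:45, 22:42] = 1
print(round(iou(pred, truth), 3))
```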

22 pages, 13917 KiB  
Article
PDED-ConvLSTM: Pyramid Dilated Deeper Encoder–Decoder Convolutional LSTM for Arctic Sea Ice Concentration Prediction
by Deyu Zhang, Changying Wang, Baoxiang Huang, Jing Ren, Junli Zhao and Guojia Hou
Appl. Sci. 2024, 14(8), 3278; https://doi.org/10.3390/app14083278 - 13 Apr 2024
Viewed by 869
Abstract
Arctic sea ice concentration plays a key role in the global ecosystem. However, accurate prediction of Arctic sea ice concentration remains a challenging task due to its inherent nonlinearity and complex spatiotemporal correlations. To address these challenges, we propose an innovative encoder–decoder pyramid dilated convolutional long short-term memory network (PDED-ConvLSTM). The model is constructed based on the convolutional long short-term memory network (ConvLSTM) and, for the first time, integrates the encoder–decoder architecture of ConvLSTM (ED-ConvLSTM) with a pyramidal dilated convolution strategy. This approach aims to efficiently capture the spatiotemporal properties of the sea ice concentration and to enhance the identification of its nonlinear relationships. By applying convolutional layers with different dilation rates, the PDED-ConvLSTM model can capture spatial features at multiple scales and increase the receptive field without losing resolution. Further, the integration of the pyramid convolution module significantly enhances the model’s ability to understand complex spatiotemporal relationships, resulting in notable improvements in prediction accuracy and generalization ability. The experimental results show that the sea ice concentration distribution predicted by the PDED-ConvLSTM model is in high agreement with ground-based observations, with the residuals between the predictions and observations maintained within a range from −20% to 20%. PDED-ConvLSTM outperforms other models in terms of prediction performance, reducing the RMSE by 3.6% compared to the traditional ConvLSTM model and also performing well over a five-month prediction period. These experiments demonstrate the potential of PDED-ConvLSTM in predicting Arctic sea ice concentrations, making it a viable tool to meet the requirements for accurate prediction and provide technical support for safe and efficient operations in the Arctic region. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
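The pyramid dilation strategy above is easy to illustrate. The following PyTorch sketch runs parallel convolutions with increasing dilation rates and fuses them, enlarging the receptive field without losing resolution; it shows the general idea only, not the authors' PDED-ConvLSTM, and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class PyramidDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated
    and fused by a 1x1 convolution: multi-scale features, same resolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=r, dilation=r)  # padding=r keeps H and W for k=3
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 8, 64, 64)              # e.g., gridded sea ice features
print(PyramidDilatedConv(8, 16)(x).shape)  # torch.Size([1, 16, 64, 64])
```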
20 pages, 6477 KiB  
Article
Integrating Multi-Criteria Decision Models in Smart Urban Planning: A Case Study of Architectural and Urban Design Competitions
by Tomaž Berčič, Marko Bohanec and Lucija Ažman Momirski
Smart Cities 2024, 7(2), 786-805; https://doi.org/10.3390/smartcities7020033 - 18 Mar 2024
Cited by 2 | Viewed by 1556
Abstract
This study integrates the DEX (Decision EXpert) decision-modeling method into architectural and urban design (A & UD) competitions and assesses its effectiveness in the evaluation process, with the goal of enhancing decision-making transparency, objectivity, and efficiency. By using symbolic values in decision models, the approach offers a more user-friendly alternative to the conventional jury decision-making process. The practical application of the DEX method is demonstrated in the Rhinoceros 3D environment to show its effectiveness in evaluating A & UD competition project solutions related to the development of the smart city. The results indicate that the DEX method, with its hierarchical structure and symbolic values, significantly simplifies the evaluation process in A & UD competitions, aligning it with the objectives of smart cities. The method provides an efficient, accessible, and viable alternative to other multi-criteria decision-making approaches. This study makes an important contribution to the field of architectural decision making by merging qualitative multi-criteria decision models into the CAD environment, thus supporting more informed, objective, and transparent decision-making processes in the planning and development of smart cities. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
18 pages, 1100 KiB  
Article
Leveraging Large Language Models for Sensor Data Retrieval
by Alberto Berenguer, Adriana Morejón, David Tomás and Jose-Norberto Mazón
Appl. Sci. 2024, 14(6), 2506; https://doi.org/10.3390/app14062506 - 15 Mar 2024
Cited by 1 | Viewed by 2250
Abstract
The growing significance of sensor data in the development of information technology services faces obstacles due to disparate data presentations and non-adherence to FAIR principles. This paper introduces a novel approach for sensor data gathering and retrieval. The proposal leverages large language models to convert sensor data into FAIR-compliant formats and to provide word embedding representations of tabular data for subsequent exploration, enabling semantic comparison. The proposed system comprises two primary components. The first focuses on gathering data from sensors and converting them into a reusable structured format, while the second identifies the sensor data most relevant to augmenting a given user-provided dataset. The evaluation of the proposed approach involved comparing the performance of various large language models in generating representative word embeddings for each table to retrieve related sensor data. The results show promising performance in terms of precision and MRR (0.90 and 0.94, respectively, for the best-performing model), indicating the system's ability to retrieve pertinent sensor data that fulfil user requirements. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
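As a rough illustration of the retrieval step described above, the sketch below ranks table embeddings by cosine similarity and scores the ranking with the MRR; the embeddings are random stand-ins, and none of this is the authors' code:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_tables(query_emb, table_embs):
    """Return table indices sorted by semantic similarity to the query."""
    scores = [cosine_sim(query_emb, t) for t in table_embs]
    return np.argsort(scores)[::-1]

def mean_reciprocal_rank(ranked_lists, relevant_items):
    """MRR over queries; each ranked list is scored against its relevant id."""
    rr = [1.0 / (list(ranked).index(rel) + 1)
          for ranked, rel in zip(ranked_lists, relevant_items)]
    return float(np.mean(rr))

rng = np.random.default_rng(0)
tables = rng.normal(size=(5, 384))              # 5 hypothetical table embeddings
query = tables[2] + 0.1 * rng.normal(size=384)  # a query close to table 2
ranked = rank_tables(query, tables)
print(ranked[0], mean_reciprocal_rank([ranked], [2]))
```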
18 pages, 1061 KiB  
Article
Automatic Spell-Checking System for Spanish Based on the Ar2p Neural Network Model
by Eduard Puerto, Jose Aguilar and Angel Pinto
Computers 2024, 13(3), 76; https://doi.org/10.3390/computers13030076 - 12 Mar 2024
Viewed by 1599
Abstract
Currently, approaches to correcting misspelled words struggle when the words are complex or occur in massive numbers. This is even more serious in the case of Spanish, for which there are very few studies on the subject. Proposing new approaches to word recognition and correction therefore remains a research topic of interest. One particularly interesting approach is to computationally simulate the brain's process for recognizing misspelled words and correcting them automatically. Thus, this article presents an automatic system for the recognition and correction of misspelled words in Spanish texts, based on the pattern recognition theory of mind (PRTM). The main innovation of the research is the use of the PRTM theory in this context. In particular, a correction system for misspelled Spanish words based on this theory, called Ar2p-Text, was designed and built. Ar2p-Text carries out a recursive analysis of words through a disaggregation/integration mechanism, using specialized hierarchical recognition modules that define formal strategies to determine whether a word is well or poorly written. A comparative evaluation shows that the precision and coverage of the Ar2p-Text model are competitive with other spell-checkers; in the experiments, the system achieves better performance than the three other systems. In general, Ar2p-Text obtains an F-measure of 83%, above the 73% achieved by the other spell-checkers. The hierarchical approach reuses a great deal of information, improving both the quality and the efficiency of text analysis. These preliminary results suggest that this hierarchical approach can inspire future word-correction technologies. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
13 pages, 3968 KiB  
Article
Electrocardiogram Signals Classification Using Deep-Learning-Based Incorporated Convolutional Neural Network and Long Short-Term Memory Framework
by Alaa Eleyan and Ebrahim Alboghbaish
Computers 2024, 13(2), 55; https://doi.org/10.3390/computers13020055 - 18 Feb 2024
Cited by 7 | Viewed by 3483
Abstract
Cardiovascular diseases (CVDs) like arrhythmia and heart failure remain the world’s leading cause of death. These conditions can be triggered by high blood pressure, diabetes, and simply the passage of time. The early detection of these heart issues, despite substantial advancements in artificial intelligence (AI) and technology, is still a significant challenge. This research addresses this hurdle by developing a deep-learning-based system that is capable of predicting arrhythmias and heart failure from abnormalities in electrocardiogram (ECG) signals. The system leverages a model that combines long short-term memory (LSTM) networks with convolutional neural networks (CNNs). Extensive experiments were conducted using ECG data from both the MIT-BIH and BIDMC databases under two scenarios. The first scenario employed data from five distinct ECG classes, while the second focused on classifying data from three classes. The results from both scenarios demonstrated that the proposed deep-learning-based classification approach outperformed existing methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
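A CNN-plus-LSTM classifier of the general kind described above can be sketched in PyTorch as follows; the layer sizes, the 360-sample input, and the five-class head are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D CNN front end for local waveform morphology, an LSTM for rhythm
    over time, and a linear head producing the class logits."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, samples)
        feats = self.cnn(x).transpose(1, 2)   # (batch, time, channels)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])

logits = CNNLSTM()(torch.randn(4, 1, 360))    # four hypothetical 1 s segments
print(logits.shape)                           # torch.Size([4, 5])
```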
16 pages, 3778 KiB  
Article
Multi-Layer Fusion 3D Object Detection via Lidar Point Cloud and Camera Image
by Yuhao Guo and Hui Hu
Appl. Sci. 2024, 14(4), 1348; https://doi.org/10.3390/app14041348 - 6 Feb 2024
Cited by 1 | Viewed by 1693
Abstract
Object detection is a key task in autonomous driving, and the poor performance of small object detection is a challenge that needs to be overcome. Previously, object detection networks could detect large-scale objects in ideal environments, but detecting small objects was very difficult. To address this problem, we propose a multi-layer fusion 3D object detection network. First, a dense fusion (D-fusion) method is proposed, which differs from traditional fusion methods: by fusing the feature maps of each layer, more semantic information of the fusion network can be preserved. Second, in order to preserve small objects at the feature-map level, we designed a feature extractor with an adaptive fusion module (AFM), which reduces the impact of the background on small objects by weighting and fusing different feature layers. Finally, an attention mechanism was added to the feature extractor to accelerate the training efficiency and convergence speed of the network by suppressing information that is irrelevant to the task. The experimental results show that our proposed approach greatly improves on the baseline and outperforms most state-of-the-art methods on the KITTI object detection benchmarks. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
22 pages, 3852 KiB  
Article
Local-Global Spatial-Temporal Graph Convolutional Network for Traffic Flow Forecasting
by Xinlu Zong, Zhen Chen, Fan Yu and Siwei Wei
Electronics 2024, 13(3), 636; https://doi.org/10.3390/electronics13030636 - 2 Feb 2024
Cited by 1 | Viewed by 1241
Abstract
The key challenge in traffic forecasting is to extract dynamic spatial-temporal features within intricate traffic systems. This paper introduces a novel framework for traffic prediction named the Local-Global Spatial-Temporal Graph Convolutional Network (LGSTGCN). The framework consists of three core components. First, a graph attention residual network layer is proposed to capture global spatial dependencies by evaluating traffic mode correlations between different nodes; the context information added in the residual connection improves the generalization ability of the model. Second, a T-GCN module, combining a Graph Convolutional Network (GCN) with a Gated Recurrent Unit (GRU), is introduced to capture real-time local spatial-temporal dependencies. Finally, a transformer layer is designed to extract long-term temporal dependencies and to identify the sequence characteristics of traffic data through positional encoding. Experiments conducted on four real traffic datasets validate the forecasting performance of the LGSTGCN model, demonstrating that it achieves good performance and is well suited to traffic forecasting tasks. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 5369 KiB  
Article
Identification Method for XRF Spectral Analysis Based on an AGA-BP-Attention Neural Network
by Zeyuan Chang, Qi Zhang, Yuanfeng Li, Xiangjun Xin, Ran Gao, Yun Teng, Lan Rao and Meng Sun
Electronics 2024, 13(3), 507; https://doi.org/10.3390/electronics13030507 - 25 Jan 2024
Cited by 2 | Viewed by 1400
Abstract
X-ray fluorescence (XRF) spectroscopy is a non-destructive differential measurement technique widely utilized in elemental analysis. However, due to its inherent high non-linearity and noise issues, it is challenging for XRF spectral analysis to achieve high levels of accuracy. In response to these challenges, this paper proposes a method for XRF spectral analysis that integrates an adaptive genetic algorithm with a backpropagation neural network enhanced by an attention mechanism, termed the AGA-BP-Attention method. By leveraging the robust feature extraction capabilities of the neural network and the attention mechanism's ability to focus on significant features, spectral features are extracted for elemental identification. The adaptive genetic algorithm is then employed to optimize the parameters of the BP neural network, such as weights and thresholds, which enhances the model's accuracy and stability. The experimental results demonstrate that, compared to traditional BP neural networks, the AGA-BP-Attention network can more effectively address the non-linearity and noise issues of XRF spectral signals. In XRF spectral analysis of air pollutant samples, it achieved superior prediction accuracy, effectively suppressing the impact of background noise on spectral element recognition. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
25 pages, 574 KiB  
Article
SCALE-BOSS-MR: Scalable Time Series Classification Using Multiple Symbolic Representations
by Apostolos Glenis and George A. Vouros
Appl. Sci. 2024, 14(2), 689; https://doi.org/10.3390/app14020689 - 13 Jan 2024
Cited by 1 | Viewed by 978
Abstract
Time series classification (TSC) is an important machine learning task for many branches of science. Symbolic representations of time series, especially Symbolic Fourier Approximation (SFA), have proven very effective for this task, given their ability to reduce noise. In this paper, we improve upon SCALE-BOSS by using multiple symbolic representations of time series. More specifically, the proposed SCALE-BOSS-MR incorporates a variety of window sizes combined with multiple dilation parameters, applied to both the original time series and its first-order differences, with the latter modeling trend information. SCALE-BOSS-MR has been evaluated on the eight datasets with the largest training sizes in the UCR time series repository. The results indicate that SCALE-BOSS-MR can be instantiated as classifiers that achieve state-of-the-art accuracy and can be tuned for scalability. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
21 pages, 5704 KiB  
Article
Deep Convolutional Neural Network for Indoor Regional Crowd Flow Prediction
by Qiaoshuang Teng, Shangyu Sun, Weidong Song, Jinzhong Bei and Chongchang Wang
Electronics 2024, 13(1), 172; https://doi.org/10.3390/electronics13010172 - 30 Dec 2023
Viewed by 1089
Abstract
Crowd flow prediction plays a vital role in modern city management and public safety prewarning. However, the existing approaches related to this topic mostly focus on single sites or road segments, and indoor regional crowd flow prediction has yet to receive sufficient academic attention. Therefore, this paper proposes a novel prediction model, named the spatial–temporal attention-based crowd flow prediction network (STA-CFPNet), to forecast indoor regional crowd flow volume. The model has four branches: temporal closeness, periodicity, tendency, and external factors. Each branch takes a convolutional neural network (CNN) as its principal component, computing spatial correlations from near to distant areas by stacking multiple CNN layers. By incorporating the outputs of the four branches into the model's fusion layer, it is possible to utilize ensemble learning to mine the temporal dependence implicit within the data. To improve both the convergence speed and the prediction performance of the model, a building block based on spatial–temporal attention mechanisms was designed. Furthermore, a fully convolutional structure was applied to the external factors branch to provide globally shared external-factor contexts for the research area. The empirical study demonstrates that STA-CFPNet outperforms other well-known crowd flow prediction methods on the experimental datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
12 pages, 451 KiB  
Article
Interpretable Single-dimension Outlier Detection (ISOD): An Unsupervised Outlier Detection Method Based on Quantiles and Skewness Coefficients
by Yuehua Huang, Wenfen Liu, Song Li, Ying Guo and Wen Chen
Appl. Sci. 2024, 14(1), 136; https://doi.org/10.3390/app14010136 - 22 Dec 2023
Cited by 2 | Viewed by 1218
Abstract
A crucial area of study in data mining is outlier detection, particularly in network security, credit card fraud detection, industrial flaw detection, etc. Existing outlier detection algorithms, which can be divided into supervised, semi-supervised, and unsupervised methods, suffer from missing labeled data, the curse of dimensionality, low interpretability, etc. To address these issues, in this paper we present an unsupervised outlier detection method based on quantiles and skewness coefficients, called ISOD (Interpretable Single-dimension Outlier Detection). ISOD first constructs the empirical cumulative distribution function and then computes the quantile and skewness coefficients of each dimension before outputting the outlier score. The contributions of this paper are as follows: (1) we propose an unsupervised outlier detection algorithm, ISOD, with high interpretability and scalability; (2) extensive experiments on benchmark datasets demonstrate the superior performance of the ISOD algorithm compared with state-of-the-art baselines in terms of ROC and AP. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
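To make the quantile-and-skewness idea concrete, here is a toy per-dimension scorer written loosely in the spirit of ISOD; the tail-weighting scheme is our own assumption, not the published algorithm:

```python
import numpy as np
from scipy.stats import skew

def outlier_scores(X: np.ndarray) -> np.ndarray:
    """Per-dimension, quantile-based scoring: points far outside the
    interquartile range score higher, and the skewness coefficient decides
    which tail is weighted more heavily (illustrative heuristic only)."""
    n, d = X.shape
    scores = np.zeros(n)
    for j in range(d):
        col = X[:, j]
        q1, q3 = np.quantile(col, [0.25, 0.75])
        iqr = max(q3 - q1, 1e-12)
        s = skew(col)
        upper = np.clip((col - q3) / iqr, 0, None)
        lower = np.clip((q1 - col) / iqr, 0, None)
        # Emphasize the longer tail indicated by the skewness sign.
        w_hi, w_lo = 1 + max(s, 0), 1 + max(-s, 0)
        scores += w_hi * upper + w_lo * lower
    return scores

X = np.random.default_rng(1).normal(size=(200, 3))
X[0] = [8, 8, 8]                     # an obvious outlier
print(outlier_scores(X).argmax())    # -> 0
```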
16 pages, 1720 KiB  
Article
Multi-Label Diagnosis of Arrhythmias Based on a Modified Two-Category Cross-Entropy Loss Function
by Junjiang Zhu, Cheng Ma, Yihui Zhang, Hao Huang, Dongdong Kong and Wangjin Ni
Electronics 2023, 12(24), 4976; https://doi.org/10.3390/electronics12244976 - 12 Dec 2023
Cited by 3 | Viewed by 1261
Abstract
The 12-lead resting electrocardiogram (ECG) is commonly used in hospitals to assess heart health. The ECG can reflect a variety of cardiac abnormalities, requiring multi-label classification, but the diagnostic results in previous studies have been imprecise; for example, cardiac abnormalities that cannot coexist have often appeared together in the output. In this work, we explore how to realize effective multi-label diagnosis of ECG signals and prevent the prediction of cardiac arrhythmias that cannot coexist. A multi-label classification method based on a convolutional neural network (CNN), long short-term memory (LSTM), and an attention mechanism is presented for the multi-label diagnosis of cardiac arrhythmia using resting ECGs. In addition, we propose a modified two-category cross-entropy loss function that introduces a regularization term to avoid the joint prediction of arrhythmias that cannot coexist. The effectiveness of the modified cross-entropy loss function is validated using a 12-lead resting ECG database collected by our team. Using the traditional and modified cross-entropy loss functions, three deep learning methods are employed to classify six types of ECG signals. Experimental results show that the modified cross-entropy loss function greatly reduces the number of non-coexisting label pairs while maintaining prediction accuracy. Deep learning methods are effective in the multi-label diagnosis of ECG signals, and diagnostic efficiency can be improved by using the modified cross-entropy loss function. Moreover, the modified loss function helps prevent diagnostic models from outputting two arrhythmias that cannot coexist, further reducing the false positive rate for non-coexisting arrhythmic diseases and demonstrating the potential value of the modified loss function in clinical applications. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
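One plausible way to realize such a regularized loss is to add a penalty on the joint probability of mutually exclusive labels to the binary cross-entropy; the form below is a hedged sketch, not necessarily the paper's exact term:

```python
import torch
import torch.nn.functional as F

def modified_bce_loss(logits, targets, exclusive_pairs, lam=1.0):
    """Binary cross-entropy plus a regularization term that penalizes
    jointly predicting two labels that cannot coexist (hypothetical form;
    the paper's exact regularizer may differ)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    penalty = sum((probs[:, i] * probs[:, j]).mean()
                  for i, j in exclusive_pairs)
    return bce + lam * penalty

logits = torch.randn(8, 6, requires_grad=True)   # 8 ECGs, 6 label types
targets = torch.randint(0, 2, (8, 6)).float()
# Suppose labels 0 and 3 denote arrhythmias that cannot coexist.
loss = modified_bce_loss(logits, targets, exclusive_pairs=[(0, 3)])
loss.backward()
print(loss.item())
```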
15 pages, 2577 KiB  
Article
Multi-Task Learning and Temporal-Fusion-Transformer-Based Forecasting of Building Power Consumption
by Wenxian Ji, Zeyu Cao and Xiaorun Li
Electronics 2023, 12(22), 4656; https://doi.org/10.3390/electronics12224656 - 15 Nov 2023
Cited by 1 | Viewed by 2548
Abstract
Improving the accuracy of forecasting building power consumption helps reduce commercial expenses and carbon emissions. However, challenges such as the shortage of training data and the absence of efficient models are the main obstacles in this field. To address these issues, this work introduces a model named MTLTFT, combining multi-task learning (MTL) with the temporal fusion transformer (TFT). The MTL approach is utilized to maximize the effectiveness of the limited data by introducing multiple related forecasting tasks. This method enhances the learning process by enabling the model to learn shared representations across different tasks, even though the actual amount of data remains unchanged. The TFT component, which is optimized for feature learning, is integrated to further improve the model's performance. Based on a dataset from a large exposition building in Hangzhou, we conducted several forecasting experiments. The results demonstrate that MTLTFT outperforms most baseline methods (such as LSTM, GRU, and N-HiTS) in terms of Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE), suggesting that MTLTFT is a promising approach for forecasting building power consumption and similar tasks. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
22 pages, 11823 KiB  
Article
Adaptive Smart eHealth Framework for Personalized Asthma Attack Prediction and Safe Route Recommendation
by Eman Alharbi, Asma Cherif and Farrukh Nadeem
Smart Cities 2023, 6(5), 2910-2931; https://doi.org/10.3390/smartcities6050130 - 20 Oct 2023
Cited by 3 | Viewed by 1887
Abstract
Recently, there has been growing interest in using smart eHealth systems to manage asthma. However, limitations still exist in providing smart services and accurate predictions tailored to individual patients' needs. This study aims to develop an adaptive ubiquitous computing framework that leverages different bio-signals and spatial data to provide personalized asthma attack prediction and safe route recommendations. We proposed a smart eHealth framework consisting of multiple layers that employ a telemonitoring application, environmental sensors, and advanced machine-learning algorithms to deliver smart services to the user. The proposed smart eHealth system predicts asthma attacks and uses spatial data to provide a safe route that steers the patient away from any asthma trigger. Additionally, the framework incorporates an adaptation layer that continuously updates the system based on real-time environmental data and the daily bio-signals reported by the user. The developed telemonitoring application collected a dataset of 665 records, which was used to train the prediction models. Testing demonstrated a remarkable 98% accuracy in predicting asthma attacks, with a recall of 96%. The eHealth system was then tested online by ten asthma patients and achieved 94% accuracy and a recall of 95.2% in generating safe routes, ensuring a safer, asthma-trigger-free experience; 89% of the patients reported being more satisfied with the recommended safe route than with their usual one. This research contributes to enhancing the capabilities of smart healthcare systems in managing asthma and improving patient outcomes. The adaptive feature of the proposed eHealth system ensures that the predictions and recommendations remain relevant and personalized to the current conditions and needs of the individual. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
13 pages, 703 KiB  
Article
Double Consistency Regularization for Transformer Networks
by Yuxian Wan, Wenlin Zhang and Zhen Li
Electronics 2023, 12(20), 4357; https://doi.org/10.3390/electronics12204357 - 20 Oct 2023
Cited by 1 | Viewed by 1304
Abstract
Large-scale, deep neural networks based on the Transformer model are very powerful on sequence tasks, but they are prone to overfitting on small-scale training data. Moreover, such a model's predictions degrade significantly when the input is slightly perturbed. In this work, we propose a double consistency regularization (DOCR) method for end-to-end model structures, which separately constrains the outputs of the encoder and decoder during training to alleviate the above problems. Specifically, on top of the cross-entropy loss function, we build a mean model by integrating the model parameters of the previous training rounds. We then measure consistency between the mean model and the base model by calculating the KL divergence between their encoder output features and between their decoder output probability distributions, thereby imposing regularization constraints on the solution space of the model. We conducted extensive experiments on machine translation tasks, and the results show that the BLEU score increased by 2.60 on average, demonstrating the effectiveness of DOCR in improving model performance and its complementarity with other regularization techniques. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
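The mean-model construction can be sketched as an exponential moving average over parameters plus a KL-based consistency term; the sketch below shows only the decoder-side constraint on stand-in modules, and the decay rate is an assumption:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_mean_model(mean_model, base_model, decay=0.999):
    """Exponential moving average over training rounds: the 'mean model'
    integrates the base model's parameters."""
    for p_mean, p in zip(mean_model.parameters(), base_model.parameters()):
        p_mean.mul_(decay).add_(p, alpha=1 - decay)

def decoder_consistency(base_logits, mean_logits):
    """KL-divergence consistency between the decoder output distributions
    of the base model and the (detached) mean model."""
    log_p = F.log_softmax(base_logits, dim=-1)
    q = F.softmax(mean_logits.detach(), dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")

base = torch.nn.Linear(16, 100)       # stand-in for a decoder's output layer
mean = torch.nn.Linear(16, 100)
mean.load_state_dict(base.state_dict())
x = torch.randn(4, 16)
loss = decoder_consistency(base(x), mean(x))
update_mean_model(mean, base)
print(loss.item())
```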
18 pages, 2497 KiB  
Article
A Design and Its Application of Multi-Granular Fuzzy Model with Hierarchical Tree Structures
by Chan-Uk Yeom and Keun-Chang Kwak
Appl. Sci. 2023, 13(20), 11175; https://doi.org/10.3390/app132011175 - 11 Oct 2023
Cited by 1 | Viewed by 950
Abstract
This paper is concerned with the design of a context-based fuzzy C-means (CFCM)-based multi-granular fuzzy model (MGFM) with hierarchical tree structures. For this purpose, we propose three types of hierarchical tree structures (incremental, aggregated, and cascaded) for the design of the MGFM. In general, the conventional fuzzy inference system (FIS) suffers from problems such as high time consumption and an exponential increase in the number of if–then rules when processing large-scale multivariate data. The existing granular fuzzy model (GFM) reduces the number of rules that would otherwise increase exponentially. However, the GFM not only produces overlapping rules as the cluster centers move closer together but also becomes difficult to interpret when there are many input variables. To solve these problems, the CFCM-based MGFM can be designed as a small tree of interconnected GFMs, where the inputs of the high-level GFMs are taken from the outputs of the low-level GFMs. The hierarchical tree structure is more computationally efficient and easier to understand than a single GFM. Furthermore, since the output of the CFCM-based MGFM is a triangular fuzzy number, it is evaluated with a performance measurement method suitable for the GFM. The prediction performance is analyzed on the automobile fuel consumption and Boston housing databases to demonstrate the validity of the proposed approach. The experimental results show that the proposed CFCM-based MGFM with the hierarchical tree structure creates a small number of meaningful rules and solves prediction-related problems in an explainable way. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
12 pages, 756 KiB  
Article
Unsupervised Vehicle Re-Identification Method Based on Source-Free Knowledge Transfer
by Zhigang Song, Daisong Li, Zhongyou Chen and Wenqin Yang
Appl. Sci. 2023, 13(19), 11013; https://doi.org/10.3390/app131911013 - 6 Oct 2023
Cited by 2 | Viewed by 1132
Abstract
The unsupervised domain-adaptive vehicle re-identification approach aims to transfer knowledge from a labeled source domain to an unlabeled target domain; however, knowledge differences exist between the two domains. To mitigate domain discrepancies, existing unsupervised domain-adaptive re-identification methods typically require access to source domain data to assist in retraining the target domain model. However, for security reasons such as data privacy, data exchange between different domains is often infeasible. To this end, this paper proposes an unsupervised domain-adaptive vehicle re-identification method based on source-free knowledge transfer. A source-free knowledge transfer module trains a generator to produce "source-like samples" by constraining the target domain model to be consistent with the source domain model's outputs, which effectively reduces the knowledge difference between the models and improves generalization performance. In experiments on two mainstream public datasets in this field, VeRi776 and VehicleID, both the rank-k (cumulative matching characteristics) and mAP (mean average precision) indicators improved, showing that the method is suitable for object re-identification tasks in which data cannot be exchanged between domains. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
17 pages, 2402 KiB  
Article
Recommendation Method of Power Knowledge Retrieval Based on Graph Neural Network
by Rongxu Hou, Yiying Zhang, Qinghai Ou, Siwei Li, Yeshen He, Hongjiang Wang and Zhenliu Zhou
Electronics 2023, 12(18), 3922; https://doi.org/10.3390/electronics12183922 - 18 Sep 2023
Cited by 2 | Viewed by 1471
Abstract
With the digital and intelligent transformation of the power grid, its structure and its operation and maintenance technology are constantly updated, which leads to problems such as difficulties in information acquisition and screening. Therefore, we propose a recommendation method for power knowledge retrieval based on a graph neural network (RPKR-GNN). The method first uses a graph neural network to learn the network structure information of the power fault knowledge graph and realize deep semantic embedding of power entities and relations. It then fuses power knowledge graph paths to mine potential power entity relationships and completes the power fault knowledge graph through knowledge inference. At the same time, we combine user retrieval behavior features for knowledge aggregation to form a personal subgraph, and we analyze the user retrieval subgraph by matching the similarity of retrieval keyword features. Finally, we form a fusion subgraph based on the subgraph topology and reorder its entities to generate a recommendation list that predicts the target user's retrieval intention. Experimental comparison with various classical models shows that the model has a certain generalization ability in knowledge inference. The method performs well in terms of the MR and Hit@10 indexes on each dataset, and the F1 value reaches 87.3 for retrieval recommendation, effectively enhancing the automated operation and maintenance capability of the power system. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
13 pages, 737 KiB  
Article
Web-Based Malware Detection System Using Convolutional Neural Network
by Ali Alqahtani, Sumayya Azzony, Leen Alsharafi and Maha Alaseri
Digital 2023, 3(3), 273-285; https://doi.org/10.3390/digital3030017 - 12 Sep 2023
Cited by 2 | Viewed by 4824
Abstract
In this article, we introduce a web-based malware detection system that leverages a deep-learning approach. Our primary objective is the development of a robust deep-learning model for classifying malware in executable files. In contrast to conventional malware detection systems, our approach relies on static detection techniques to unveil the true nature of files as either malicious or benign. Our method makes use of a one-dimensional convolutional neural network (1D-CNN) due to the nature of the portable executable file. Static analysis aligns well with our objectives, allowing us to uncover static features within the portable executable header. This choice is particularly significant given the potential risks associated with dynamic detection, which often necessitates controlled environments, such as virtual machines, to mitigate dangers. Moreover, we seamlessly integrate this deep-learning method into a web-based system, rendering it accessible and user-friendly via a web interface. Empirical evidence showcases the efficiency of our proposed methods, as demonstrated in extensive comparisons with state-of-the-art models across three diverse datasets. Our results undeniably affirm the superiority of our approach, delivering a practical, dependable, and rapid mechanism for identifying malware within executable files. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
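A 1D-CNN over executable bytes of the kind described above can be outlined in PyTorch as follows; the embedding size, window length, and layer widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MalwareCNN(nn.Module):
    """1D-CNN over a fixed-length window of PE-header bytes, embedded as
    vectors; outputs a malicious/benign logit (illustrative sketch, not
    the authors' exact architecture)."""
    def __init__(self, n_bytes=256, emb=8, window=512):
        super().__init__()
        self.embed = nn.Embedding(n_bytes, emb)
        self.conv = nn.Sequential(
            nn.Conv1d(emb, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, byte_ids):                    # (batch, window), 0..255
        x = self.embed(byte_ids).transpose(1, 2)    # (batch, emb, window)
        return self.head(self.conv(x).squeeze(-1))  # (batch, 1) logit

bytes_in = torch.randint(0, 256, (4, 512))
print(MalwareCNN()(bytes_in).shape)                 # torch.Size([4, 1])
```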
19 pages, 3595 KiB  
Article
Joint Location–Allocation Model for Multi-Level Maintenance Service Network in Agriculture
by Jinliang Li, Weibo Ren and Xibin Wang
Appl. Sci. 2023, 13(18), 10167; https://doi.org/10.3390/app131810167 - 9 Sep 2023
Cited by 1 | Viewed by 1037
Abstract
Maintenance service networks are usually designed as multi-level networks to provide timely maintenance service for failed machinery, yet they have rarely been studied in agriculture. This paper therefore focuses on a three-level maintenance service network location–allocation problem in agriculture involving spare part centres, service stations, and service units. The aim is to obtain the optimal locations of spare part centres and service stations while determining the service vehicle allocation for each service station; the problem can be called a multi-level facility location and allocation problem (MLFLAP). Considering contiguity constraints and hierarchical relationships, the proposed MLFLAP is formulated as a mixed-integer linear programming (MILP) model integrating P-region and set covering location problems to minimize total service costs, including spare part centre construction costs, service vehicle usage costs, and the service mileage costs of service stations. A Benders decomposition-based solution method with several improvements is then applied to decompose the original MLFLAP into a master problem and subproblems to find optimal solutions effectively. Finally, a real-world case in China is used to evaluate the performance of the model and algorithm in agriculture, and sensitivity analysis is conducted to demonstrate the impact of several parameters. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
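To make the MILP formulation tangible, here is a drastically simplified, single-level facility-location sketch in PuLP; the site names, costs, and the collapse to one level are invented for illustration, and the paper's contiguity and hierarchy constraints are omitted:

```python
import pulp

# Toy facility location: open spare part centres and assign service units
# to minimize construction plus mileage costs.
sites, units = ["A", "B"], ["u1", "u2", "u3"]
build_cost = {"A": 100, "B": 120}
mileage = {("A", "u1"): 4, ("A", "u2"): 9, ("A", "u3"): 7,
           ("B", "u1"): 8, ("B", "u2"): 3, ("B", "u3"): 5}

prob = pulp.LpProblem("mini_flap", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign", mileage, cat="Binary")

# Objective: construction costs plus service mileage costs.
prob += (pulp.lpSum(build_cost[s] * open_[s] for s in sites)
         + pulp.lpSum(mileage[k] * assign[k] for k in mileage))
for u in units:                       # every unit is served exactly once
    prob += pulp.lpSum(assign[(s, u)] for s in sites) == 1
for s, u in mileage:                  # only open centres may serve units
    prob += assign[(s, u)] <= open_[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if open_[s].value() == 1])
```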
16 pages, 5133 KiB  
Article
A New Linear Model for the Calculation of Routing Metrics in 802.11s Using ns-3 and RStudio
by Juan Ochoa-Aldeán and Carlos Silva-Cárdenas
Computers 2023, 12(9), 172; https://doi.org/10.3390/computers12090172 - 28 Aug 2023
Viewed by 1188
Abstract
Wireless mesh networks (WMNs) offer a pragmatic, cost-effective solution for provisioning ubiquitous broadband internet access and diverse telecommunication systems. The conceptual underpinning of mesh networks finds application not only in IEEE networks, but also in 3GPP networks like LTE and in the low-power wide area networks (LPWANs) tailored for the burgeoning Internet of Things (IoT) landscape. IEEE 802.11s is the well-known de facto standard for WMNs, defining the hybrid wireless mesh protocol (HWMP) as a layer-2 routing protocol and the airtime link metric (ALM). In this intricate landscape, artificial intelligence (AI) plays a prominent role in industry, particularly within the technology and telecommunication realms. This study presents a novel methodology for the computation of routing metrics, specifically the ALM. The methodology employs the network simulator ns-3 together with RStudio as a statistical computing environment for data analysis. The former enables the creation of scripts that produce a variety of WMN scenarios, from which information is gathered and stored in databases. The latter (RStudio) takes this information and supports two linear predictions: the first uses linear models (lm) and the second employs generalized linear models (glm). To conclude the process, statistical tests are applied to the original model as well as to the newly suggested ones. This work makes two substantial contributions: first, a methodological tool for the metric calculation of the HWMP protocol of the IEEE 802.11s standard, using lm and glm for the selection and validation of model regressors, at which stage the ANOVA and stepwise tools of RStudio are used; second, a linear predictor that improves WMN performance as an a priori mechanism ahead of ns-3 simulation, employing the ANCOVA tool of RStudio. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
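The lm/glm workflow above is R-based; an analogous sketch in Python using statsmodels looks as follows, with column names and values as invented stand-ins for ns-3 output:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical ns-3 output: each row is one simulated link with candidate
# regressors for the airtime link metric (column names are illustrative).
df = pd.DataFrame({
    "alm":       [310, 295, 420, 388, 510, 472],
    "data_rate": [54, 54, 24, 24, 12, 12],
    "frame_err": [0.02, 0.01, 0.06, 0.05, 0.11, 0.09],
})

lm_fit  = smf.ols("alm ~ data_rate + frame_err", data=df).fit()   # lm analogue
glm_fit = smf.glm("alm ~ data_rate + frame_err", data=df).fit()   # glm analogue
print(lm_fit.params)
print(glm_fit.aic)   # information criteria support regressor selection
```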
27 pages, 9370 KiB  
Article
Efficient On-Chip Learning of Multi-Layer Perceptron Based on Neuron Multiplexing Method
by Zhenyu Zhang, Guangsen Wang, Kang Wang, Bo Gan and Guoyong Chen
Electronics 2023, 12(17), 3607; https://doi.org/10.3390/electronics12173607 - 26 Aug 2023
Viewed by 1002
Abstract
An efficient on-chip learning method based on neuron multiplexing is proposed in this paper to address the limitations of traditional on-chip learning methods, including low resource utilization and non-tunable parallelism. The proposed method utilizes a configurable neuron calculation unit (NCU) to compute neural networks with different degrees of parallelism by multiplexing NCUs at different levels. Resource utilization can be increased by reducing the number of NCUs, since resource consumption is predominantly determined by the number of NCUs and the data bit-width, which are decoupled from the specific network topology. To better support the proposed method and minimize RAM block usage, a weight segmentation and recombination method is introduced, accompanied by a detailed explanation of the access order. Moreover, a performance model is developed to facilitate the parameter selection process. Experimental results on an FPGA development board demonstrate that the proposed method achieves lower resource consumption, higher resource utilization, and greater generality compared to other methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
19 pages, 5924 KiB  
Article
Pm2.5 Time Series Imputation with Deep Learning and Interpolation
by Anibal Flores, Hugo Tito-Chura, Deymor Centty-Villafuerte and Alejandro Ecos-Espino
Computers 2023, 12(8), 165; https://doi.org/10.3390/computers12080165 - 16 Aug 2023
Cited by 5 | Viewed by 2648
Abstract
Commonly, time series imputation has been implemented directly through regression models based on statistical, machine learning, and deep learning techniques. In this work, a novel approach is proposed based on a classification model that determines the class of each NA value, after which one of two types of interpolation is applied: polynomial or flipped polynomial. An hourly PM2.5 time series from Ilo City in southern Peru was chosen as the study case. The results show that for gaps of one NA value, the proposal in most cases outperforms techniques such as ARIMA, LSTM, BiLSTM, GRU, and BiGRU; on average, in terms of R2, the proposal exceeds the implemented benchmark models by between 2.4341% and 19.96%. Supported by these results, it can be stated that the proposal constitutes a good alternative for short-gap imputation in PM2.5 time series. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
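The plain polynomial variant of the interpolation above can be sketched with NumPy; the neighbourhood size and polynomial degree are assumptions, and the classifier-driven choice between polynomial and flipped polynomial is omitted:

```python
import numpy as np

def impute_single_gaps(series, k=3, degree=2):
    """Fill isolated NaN values by fitting a local polynomial to up to k
    non-NaN neighbours on each side (plain variant only)."""
    y = np.asarray(series, dtype=float).copy()
    for i in np.flatnonzero(np.isnan(y)):
        idx = np.r_[max(0, i - k):i, i + 1:min(len(y), i + k + 1)]
        idx = idx[~np.isnan(y[idx])]
        coeffs = np.polyfit(idx, y[idx], deg=degree)
        y[i] = np.polyval(coeffs, i)
    return y

pm25 = [12.0, 14.5, np.nan, 18.2, 17.9, 16.4]   # hypothetical hourly values
print(impute_single_gaps(pm25))
```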
25 pages, 4877 KiB  
Article
A Mobile Solution for Enhancing Tourist Safety in Warm and Humid Destinations
by Sairoong Dinkoksung, Rapeepan Pitakaso, Chawis Boonmee, Thanatkit Srichok, Surajet Khonjun, Ganokgarn Jirasirilerd, Ponglert Songkaphet and Natthapong Nanthasamroeng
Appl. Sci. 2023, 13(15), 9027; https://doi.org/10.3390/app13159027 - 7 Aug 2023
Cited by 1 | Viewed by 3476
Abstract
This research introduces a mobile application specifically designed to enhance tourist safety in warm and humid destinations. The proposed solution integrates advanced functionalities, including a comprehensive warning system, health recommendations, and a life rescue system. The study showcases the exceptional effectiveness of the [...] Read more.
This research introduces a mobile application specifically designed to enhance tourist safety in warm and humid destinations. The proposed solution integrates advanced functionalities, including a comprehensive warning system, health recommendations, and a life rescue system. The study showcases the exceptional effectiveness of the implemented system, consistently providing tourists with precise and timely weather and safety information. Notably, the system achieves an impressive average accuracy rate of 100%, coupled with an astonishingly rapid response time of just 0.001 s. Furthermore, the research explores the correlation between the System Usability Scale (SUS) score and tourist engagement and loyalty. The findings reveal a positive relationship between the SUS score and the level of tourist engagement and loyalty. The proposed mobile solution holds significant potential for enhancing the safety and comfort of tourists in hot and humid climates, thereby making a noteworthy contribution to the advancement of the tourism business in smart cities. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
Show Figures

Figure 1

27 pages, 2495 KiB  
Article
Toward Improved Machine Learning-Based Intrusion Detection for Internet of Things Traffic
by Sarah Alkadi, Saad Al-Ahmadi and Mohamed Maher Ben Ismail
Computers 2023, 12(8), 148; https://doi.org/10.3390/computers12080148 - 27 Jul 2023
Cited by 10 | Viewed by 2518
Abstract
The rapid development of Internet of Things (IoT) networks has revealed multiple security issues. Meanwhile, machine learning (ML) has proven its efficiency in building intrusion detection systems (IDSs) intended to reinforce the security of IoT networks. The successful design and implementation of such techniques requires effective methods in terms of data and model quality. This paper presents an empirical analysis of the impact of data and model quality in the context of a multi-class classification scenario. A series of experiments were conducted using six ML models along with four benchmark datasets: UNSW-NB15, BOT-IoT, ToN-IoT, and Edge-IIoT. The proposed framework investigates the marginal benefit of employing data pre-processing and model configurations that respect IoT limitations. The empirical findings indicate that the accuracy of ML-based IDS detection increases rapidly when quality-focused data and model methods are deployed. Specifically, data cleaning, transformation, normalization, and dimensionality reduction, along with model parameter tuning, exhibit significant potential to minimize computational complexity and yield better performance. In addition, MLP- and clustering-based algorithms outperformed the remaining models, with accuracy reaching up to 99.97%. The performance of the challenger models was assessed using similar test sets and compared against the results reported in the relevant pieces of research. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
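The data-quality steps highlighted above (normalization, dimensionality reduction, parameter tuning) map naturally onto a scikit-learn pipeline; in this sketch, synthetic data stands in for the IoT benchmark datasets:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for multi-class IoT traffic features.
X, y = make_classification(n_samples=2000, n_features=30, n_classes=4,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),        # normalization
    ("pca", PCA(n_components=10)),      # dimensionality reduction
    ("clf", MLPClassifier(max_iter=500, random_state=0)),
])
search = GridSearchCV(pipe, {"clf__hidden_layer_sizes": [(32,), (64, 32)]},
                      cv=3)             # model parameter tuning
search.fit(X_tr, y_tr)
print(f"accuracy: {search.score(X_te, y_te):.3f}")
```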
28 pages, 1744 KiB  
Article
An Incident Detection Model Using Random Forest Classifier
by Osama ElSahly and Akmal Abdelfatah
Smart Cities 2023, 6(4), 1786-1813; https://doi.org/10.3390/smartcities6040083 - 17 Jul 2023
Cited by 4 | Viewed by 2032
Abstract
Traffic incidents have adverse effects on traffic operations, safety, and the economy. Efficient Automatic Incident Detection (AID) systems are crucial for timely and accurate incident detection. This paper develops a realistic AID model using the Random Forest (RF) machine learning technique. The model is trained and tested on simulated data from the VISSIM traffic simulation software. The model considers variations in four critical factors: congestion level, incident severity, incident location, and detector distance. Comparative evaluation with existing AID models in the literature demonstrates the superiority of the developed model, exhibiting a higher Detection Rate (DR), lower Mean Time to Detect (MTTD), and lower False Alarm Rate (FAR). During training, the RF model achieved a DR of 96.97%, an MTTD of 1.05 min, and a FAR of 0.62%. During testing, it achieved a DR of 100%, an MTTD of 1.17 min, and a FAR of 0.862%. The findings indicate that detecting minor incidents during low traffic volumes is challenging. FAR decreases as the Demand to Capacity ratio (D/C) increases, while MTTD increases with D/C. Higher incident severity leads to lower MTTD values, while a greater distance between an incident and the upstream detector has the opposite effect. The FAR is inversely proportional to the incident's distance from the upstream detector and directly proportional to the distance between detectors. Larger detector spacings result in longer detection times. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
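Once a Random Forest is trained, the DR and FAR reported above are simple to compute; in the sketch below, synthetic features and labels stand in for the VISSIM detector data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for detector measurements (e.g., speeds, occupancies,
# volumes at upstream/downstream stations); label 1 = incident.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 6))
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=3000) > 1.2).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

dr  = (pred[y_te == 1] == 1).mean()   # Detection Rate on true incidents
far = (pred[y_te == 0] == 1).mean()   # False Alarm Rate on normal traffic
print(f"DR = {dr:.3f}, FAR = {far:.3f}")
```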
9 pages, 243 KiB  
Perspective
We Are Also Metabolites: Towards Understanding the Composition of Sweat on Fingertips via Hyperspectral Imaging
by Emanuela Marasco, Karl Ricanek and Huy Le
Digital 2023, 3(2), 137-145; https://doi.org/10.3390/digital3020010 - 19 Jun 2023
Cited by 1 | Viewed by 2149
Abstract
AI-empowered sweat metabolite analysis is an emerging and open research area with great potential to add a third category to biometrics: chemical. Current biometrics use two types of information to identify humans: physical (e.g., face, eyes) and behavioral (e.g., gait, typing). Sweat offers a promising solution for enriching human identity with more discerning characteristics to overcome the limitations of current technologies (e.g., demographic differentials and vulnerability to spoof attacks). The analysis of a biometric trait's chemical properties holds potential for providing a meticulous perspective on an individual. This not only changes the taxonomy of biometrics but also lays a foundation for more accurate and secure next-generation biometric systems. This paper discusses existing evidence about the potential held by sweat components in representing the identity of a person. We also highlight emerging methodologies and applications pertaining to sweat analysis and guide the scientific community towards transformative future research directions for designing AI-empowered systems of the next generation. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
23 pages, 4738 KiB  
Article
A Nonintrusive Load Identification Method Based on Improved Gramian Angular Field and ResNet18
by Jingqin Wang, Yufeng Wu and Liang Shu
Electronics 2023, 12(11), 2540; https://doi.org/10.3390/electronics12112540 - 5 Jun 2023
Cited by 2 | Viewed by 1767
Abstract
Image classification methods based on deep learning have been widely used in the study of nonintrusive load identification. However, in the process of encoding load electrical signals into images, fully retaining the features of the raw data, and thus increasing the recognizability of loads with very similar current signals, remains challenging; the loss of load features causes the overall accuracy of load identification to decrease. To deal with this problem, this paper proposes a nonintrusive load identification method based on the improved Gramian angular field (iGAF) and ResNet18. In the proposed method, the fast Fourier transform is used to calculate the amplitude and phase spectra, which are used to reconstruct the pixel matrices of the B, G, and R channels of the generated GAF images, so that the color image fused from the three channels contains more information. This improvement to the GAF method enables the generated images to retain the amplitude and phase features of the raw data that are usually missed in general GAF images. ResNet18 is trained with iGAF images for nonintrusive load identification. Experiments were conducted on two private datasets, ESEAD and EMCAD, and two public datasets, PLAID and WHITED. The results suggest that the proposed method performs well on both private and public datasets, achieving overall identification accuracies of 99.545%, 99.375%, 98.964%, and 100% on the four datasets, respectively. In particular, the method demonstrates significant identification gains for loads with similar current waveforms in the private datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
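For intuition, here is a minimal numpy sketch of this kind of encoding: one channel holds a Gramian angular summation field of the normalized current waveform, and the other two hold FFT amplitude and phase spectra tiled into planes. All names and sizes are illustrative assumptions, not the authors' exact iGAF construction.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field of a 1-D signal scaled to [-1, 1]."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1        # min-max scale
    phi = np.arccos(np.clip(x, -1.0, 1.0))                 # polar encoding
    return np.cos(phi[:, None] + phi[None, :])             # GASF matrix

def igaf_image(current, n=64):
    """Hypothetical iGAF-style 3-channel image: GAF + FFT amplitude/phase."""
    x = current[:n]
    gaf = gramian_angular_field(x)
    spec = np.fft.fft(x)
    amp = np.abs(spec) / n                                 # amplitude spectrum
    phase = np.angle(spec)                                 # phase spectrum
    # Tile the two spectra into n x n planes so all channels align.
    amp_plane = np.tile(amp, (n, 1))
    phase_plane = np.tile(phase, (n, 1))
    return np.stack([gaf, amp_plane, phase_plane])         # (3, n, n) "BGR"

img = igaf_image(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(img.shape)  # (3, 64, 64), ready for a ResNet18-style classifier
```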
18 pages, 4357 KiB  
Article
Assisting Heart Valve Diseases Diagnosis via Transformer-Based Classification of Heart Sound Signals
by Dongru Yang, Yi Lin, Jianwen Wei, Xiongwei Lin, Xiaobo Zhao, Yingbang Yao, Tao Tao, Bo Liang and Sheng-Guo Lu
Electronics 2023, 12(10), 2221; https://doi.org/10.3390/electronics12102221 - 13 May 2023
Cited by 7 | Viewed by 1819
Abstract
Background: In computer-aided medical diagnosis or prognosis, the automatic classification of heart valve diseases based on heart sound signals is of great importance, since the heart sound signal contains a wealth of information that reflects the heart's status. Traditional binary classification algorithms (normal and abnormal) cannot comprehensively assess heart valve diseases from the analysis of various heart sounds. The differences between heart sound signals are relatively subtle, yet the heart conditions they reflect differ significantly. Consequently, from a clinical point of view, multi-class classification of heart sound signals is of utmost importance for assisting the diagnosis of heart valve disease. Methods: We utilized a Transformer model for the multi-class classification of heart sound signals, distinguishing four types of abnormal heart sound signals from the normal type. Results: Under both 5-fold and 10-fold cross-validation the model performed strongly: in 5-fold cross-validation, it achieved a highest accuracy of 98.74% and a mean AUC of 0.99, and the classification accuracies for aortic stenosis, mitral regurgitation, mitral stenosis, mitral valve prolapse, and normal heart sound signals were 98.72%, 98.50%, 98.30%, 98.56%, and 99.61%, respectively. In 10-fold cross-validation, the model obtained the highest accuracy, sensitivity, specificity, precision, and F1 score, all at 100%. Conclusion: The results indicate that the framework can precisely classify five classes of heart sound signals. Our method provides an effective tool for the ancillary detection of heart valve diseases in the clinical setting. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
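As a rough sketch of such a pipeline, the PyTorch snippet below applies a Transformer encoder to spectral frames of a recording and pools them into five classes. The feature and model dimensions are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HeartSoundTransformer(nn.Module):
    """Minimal sketch: a Transformer encoder over spectral frames of a
    heart sound recording, pooled into one of five classes (AS, MR, MS,
    MVP, normal). Sizes are illustrative, not the paper's."""
    def __init__(self, n_features=40, d_model=128, n_classes=5):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                   # x: (batch, frames, n_features)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))     # average-pool over time

logits = HeartSoundTransformer()(torch.randn(8, 100, 40))
print(logits.shape)  # torch.Size([8, 5])
```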
28 pages, 5922 KiB  
Review
A Review of Plant Disease Detection Systems for Farming Applications
by Mbulelo S. P. Ngongoma, Musasa Kabeya and Katleho Moloi
Appl. Sci. 2023, 13(10), 5982; https://doi.org/10.3390/app13105982 - 12 May 2023
Cited by 10 | Viewed by 5009
Abstract
The globe, and more particularly its economically developed regions, is currently in the era of the Fourth Industrial Revolution (4IR). Conversely, the economically developing regions of the world (and more particularly the African continent) have not yet fully passed through the Third Industrial Revolution (3IR) wave, and Africa's economy is still heavily dependent on agriculture. At the same time, global food insecurity is worsening annually owing to the exponential growth of the global human population, which continuously heightens food demand in both quantity and quality. This justifies the focus on digitizing agricultural practices to improve farm yields, meet the steep food demand, and stabilize the economies of the African continent and of countries such as India that depend on the agricultural sector to some extent. Technological advances in precision agriculture are already improving farm yields, although several opportunities for further improvement still exist. This study evaluated plant disease detection models (in particular, those of the past two decades) to gauge the status of research in this area and identify opportunities for further work. It found that little of the literature discusses real-time monitoring of the onset signs of disease before it spreads throughout the whole plant. There was also substantially less focus on real-time mitigation measures, such as actuation operations and the spraying of pesticides or fertilizers, once a disease is identified. Very little research has combined monitoring and phenotyping functions into one model capable of multiple tasks. Hence, this study highlights a few opportunities for further focus. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
13 pages, 1772 KiB  
Article
Chinese News Text Classification Method via Key Feature Enhancement
by Bin Ge, Chunhui He, Hao Xu, Jibing Wu and Jiuyang Tang
Appl. Sci. 2023, 13(9), 5399; https://doi.org/10.3390/app13095399 - 26 Apr 2023
Cited by 3 | Viewed by 1721
Abstract
(1) Background: Chinese news text is a popular form of media communication that can be seen everywhere in China, and its classification is an important direction in natural language processing (NLP). How to use high-quality text classification technology to help humans efficiently organize and manage the massive amount of web news is an urgent problem. Existing deep learning methods rely on a large-scale tagged corpus for news text classification, and the resulting models are poorly interpretable because of their size. (2) Methods: To solve these problems, this paper proposes a Chinese news text classification method based on key feature enhancement, named KFE-CNN. It effectively expands the semantic information of key features to enhance the sample data, transforms text features into zero-one binary vectors, and inputs them into a CNN model for training and inference, thereby improving the interpretability of the model and effectively compressing its size. (3) Results: The experimental results show that our method significantly improves overall performance: the average accuracy and F1-score on the THUCNews subset of the public dataset reached 97.84% and 98%, respectively. (4) Conclusions: This fully demonstrates the effectiveness of the KFE-CNN method for the Chinese news text classification task and shows that key feature enhancement can improve classification performance. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
12 pages, 2577 KiB  
Article
Health Status Evaluation of Welding Robots Based on the Evidential Reasoning Rule
by Bang-Cheng Zhang, Ji-Dong Wang, Shuo Gao, Xiao-Jing Yin and Zhi Gao
Electronics 2023, 12(8), 1755; https://doi.org/10.3390/electronics12081755 - 7 Apr 2023
Cited by 1 | Viewed by 1374
Abstract
It is extremely important to monitor the health status of welding robots for the safe and stable operation of a body-in-white (BIW) welding production line. In actual production, a robot degrades slowly, and the large amount of monitoring data contains few effective samples that reflect the degradation state, which makes health status evaluation difficult. To evaluate the health status of welding robots accurately, this paper proposes an evaluation method based on the evidential reasoning (ER) rule, which reflects the robots' health status using the running-state data monitored in actual engineering together with the qualitative knowledge of experts, making up for the lack of effective data. In the ER rule evaluation model, the covariance matrix adaptation evolution strategy (CMA-ES) algorithm is used to optimize the initial parameters of the evaluation model, improving the accuracy of health status evaluation. Finally, a BIW welding robot is taken as an example for verification. The results show that the proposed model can accurately estimate the health status of the welding robot from the monitored degradation data. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
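The CMA-ES tuning step can be pictured with the off-the-shelf cma package: the sketch below minimizes a placeholder objective standing in for the ER-rule evaluator's training error. The objective, parameters, and starting point are all hypothetical.

```python
import numpy as np
import cma  # pip install cma

def evaluation_error(params):
    """Placeholder objective: squared error between the evaluation model's
    health estimates and reference labels. The real objective would run
    the ER-rule evaluator; a toy quadratic stands in here."""
    target = np.array([0.6, 0.3, 0.1])          # hypothetical reference
    return float(np.sum((np.asarray(params) - target) ** 2))

# Start from uniform initial parameters with step size 0.3.
es = cma.CMAEvolutionStrategy([0.5, 0.5, 0.5], 0.3)
es.optimize(evaluation_error)
print(es.result.xbest)  # tuned initial parameters for the evaluator
```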
19 pages, 331 KiB  
Article
Clustering of Monolingual Embedding Spaces
by Kowshik Bhowmik and Anca Ralescu
Digital 2023, 3(1), 48-66; https://doi.org/10.3390/digital3010004 - 23 Feb 2023
Viewed by 1819
Abstract
Suboptimal performance of cross-lingual word embeddings for distant and low-resource languages calls into question the isomorphic assumption integral to mapping-based methods of obtaining such embeddings. This paper investigates the comparative impact of typological relationship and corpus size on the isomorphism between monolingual embedding spaces. To that end, two clustering algorithms were applied to three sets of pairwise degrees of isomorphism. A further goal of the paper is to determine the combination of isomorphism measure and clustering algorithm that best captures the typological relationships among the chosen set of languages. Of the three measures investigated, Relational Similarity seemed to best capture the typological information encoded in the languages' respective embedding spaces. Such language clusters can help us identify, without any pre-existing knowledge of the real-world linguistic relationships shared by a group of languages, the related higher-resource languages of low-resource languages. The presence of such related languages in a cross-lingual embedding space can, in turn, improve the performance of low-resource languages in that space. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
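A minimal version of this pipeline, clustering a matrix of pairwise isomorphism scores with average-linkage hierarchical clustering, might look as follows; the languages and scores are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

langs = ["en", "de", "es", "it", "fi"]   # illustrative language set
# Hypothetical pairwise isomorphism scores (higher = more isomorphic).
iso = np.array([
    [1.00, 0.82, 0.78, 0.77, 0.55],
    [0.82, 1.00, 0.75, 0.74, 0.53],
    [0.78, 0.75, 1.00, 0.90, 0.50],
    [0.77, 0.74, 0.90, 1.00, 0.51],
    [0.55, 0.53, 0.50, 0.51, 1.00],
])
dist = 1.0 - iso                          # similarity -> distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(langs, clusters)))         # e.g. {en,de,es,it} vs. {fi}
```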
22 pages, 10803 KiB  
Article
Learning and Compressing: Low-Rank Matrix Factorization for Deep Neural Network Compression
by Gaoyuan Cai, Juhu Li, Xuanxin Liu, Zhibo Chen and Haiyan Zhang
Appl. Sci. 2023, 13(4), 2704; https://doi.org/10.3390/app13042704 - 20 Feb 2023
Cited by 11 | Viewed by 4561
Abstract
Recently, the deep neural network (DNN) has become one of the most advanced and powerful methods used in classification tasks. However, the cost of DNN models is sometimes considerable due to their huge numbers of parameters. It is therefore necessary to compress these models to reduce the parameters in the weight matrices and decrease computational consumption while maintaining the same level of accuracy. In this paper, to deal with the compression problem, we first combine the loss function and the compression cost function into a joint function and optimize it as one optimization framework. We then combine the CUR decomposition method with this joint optimization framework to obtain the low-rank approximation matrices. Finally, we narrow the gap between the weight matrices and the low-rank approximations to compress the DNN models on the image classification task. In this algorithm, we not only solve for the optimal ranks by enumeration, but also obtain the compression result with low-rank characteristics iteratively. Experiments were carried out on three public datasets under classification tasks. Comparisons with baselines and current state-of-the-art results show that our proposed low-rank joint optimization compression algorithm achieves higher accuracy and compression ratios. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
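To see the compression mechanics, the sketch below replaces a single linear layer with two thinner ones via truncated SVD. The paper's joint optimization uses CUR decomposition, so plain SVD here is only a simpler stand-in for the low-rank factorization idea.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace one Linear layer with two smaller ones via truncated SVD.
    W (out x in) is approximated as B @ A with B (out x rank), A (rank x in)."""
    W = layer.weight.data                                 # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = Vh[:rank, :] * S[:rank].sqrt().unsqueeze(1)       # (rank, in)
    B = U[:, :rank] * S[:rank].sqrt()                     # (out, rank)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features,
                       bias=layer.bias is not None)
    first.weight.data.copy_(A)
    second.weight.data.copy_(B)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

fc = nn.Linear(512, 512)
compressed = factorize_linear(fc, rank=64)  # 512*512 -> 2*512*64 weights
```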
14 pages, 2655 KiB  
Article
A Multi-Channel Contrastive Learning Network Based Intrusion Detection Method
by Jian Luo, Yiying Zhang, Yannian Wu, Yao Xu, Xiaoyan Guo and Boxiang Shang
Electronics 2023, 12(4), 949; https://doi.org/10.3390/electronics12040949 - 14 Feb 2023
Cited by 6 | Viewed by 2044
Abstract
Network intrusion data are characterized by high feature dimensionality, extreme category imbalance, and complex nonlinear relationships between features and categories, so existing supervised intrusion-detection models perform poorly in practice. To address this problem, this paper proposes a multi-channel contrastive learning network-based intrusion-detection method (MCLDM), which combines feature learning in a multi-channel supervised contrastive learning stage with feature extraction in a multi-channel unsupervised contrastive learning stage to train an effective intrusion-detection model. The objective is to investigate whether feature enrichment, and the use of contrastive learning on specific classes of network intrusion data, can improve the accuracy of the model. The model is based on an autoencoder that achieves feature reconstruction with supervised contrastive learning and implements multi-channel data reconstruction. In the subsequent unsupervised contrastive learning stage, features are extracted using triplet convolutional neural networks (TCNN) to classify the intrusion data. Experimental analysis shows that MCLDM achieves 98.43% accuracy on the CICIDS17 dataset and 93.94% accuracy on the KDDCUP99 dataset. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
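The triplet-based stage can be pictured as a small shared encoder trained with a triplet margin loss so that same-class traffic records embed close together. The dimensions and the synthetic "views" below are assumptions for illustration, not the paper's TCNN.

```python
import torch
import torch.nn as nn

# Shared encoder mapping a 1-D feature record to a 32-d embedding.
encoder = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(), nn.Linear(16 * 8, 32),
)
criterion = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(64, 1, 41)      # e.g. 41 flow features per record
positive = anchor + 0.05 * torch.randn_like(anchor)  # same-class view
negative = torch.randn(64, 1, 41)                    # other-class sample
loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()   # pulls same-class records together, pushes others apart
```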
18 pages, 2231 KiB  
Article
Symbiotic Combination of a Bayesian Network and Fuzzy Logic to Quantify the QoS in a VANET: Application in Logistic 4.0
by Hafida Khalfaoui, Abdellah Azmani, Abderrazak Farchane and Said Safi
Computers 2023, 12(2), 40; https://doi.org/10.3390/computers12020040 - 14 Feb 2023
Cited by 3 | Viewed by 2045
Abstract
Intelligent transportation systems use new technologies to improve road safety. In them, vehicles are equipped with wireless communication systems called on-board units (OBUs) so that they can communicate with each other; such wireless networks are known as vehicular ad hoc networks (VANETs). The primary problem in a VANET is quality of service (QoS), because even a small service failure can severely harm both human lives and the economy. From this perspective, this article makes a contribution within the framework of a new conceptual project called the Smart Digital Logistic Services Provider (Smart DLSP), intended to give freight vehicles more intelligence in the service of logistics on a global scale. The article proposes a model that combines two approaches, a Bayesian network and fuzzy logic, for calculating the QoS in a VANET as a function of multiple criteria, and provides a database that helps determine the origin of the risk of QoS degradation in the network. The outcome of this approach was employed in an event tree analysis to assess the impact of the system's security mechanisms. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
16 pages, 10359 KiB  
Article
An Improved Algorithm for Insulator and Defect Detection Based on YOLOv4
by Gujing Han, Qiwei Yuan, Feng Zhao, Ruijie Wang, Liu Zhao, Saidian Li, Min He, Shiqi Yang and Liang Qin
Electronics 2023, 12(4), 933; https://doi.org/10.3390/electronics12040933 - 13 Feb 2023
Cited by 10 | Viewed by 2160
Abstract
To further improve the accuracy and speed of UAV inspection of transmission line insulator defects, this paper proposes an insulator detection and defect identification algorithm based on YOLOv4, called DSMH-YOLOv4. In the feature extraction network of the YOLOv4 model, the algorithm redesigns the residual edges of the residual structure based on feature reuse, yielding the backbone network D-CSPDarknet53, which greatly reduces the number of parameters and the computation of the model. The SA-Net (Shuffle Attention Neural Networks) attention model is embedded in the feature fusion network to strengthen attention to target features and increase the weight of the target. A multi-head output is added to the output layer to improve the model's ability to recognize the small targets of insulator damage. The experimental results show that the improved model has only 25.98% of the original model's parameters, while the mAP (mean Average Precision) for insulators and defects increases from 92.44% to 96.14%, providing an effective path toward deployment on edge devices. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
18 pages, 5512 KiB  
Article
Single Image Reflection Removal Based on Residual Attention Mechanism
by Yubin Guo, Wanzhou Lu, Ximing Li and Qiong Huang
Appl. Sci. 2023, 13(3), 1618; https://doi.org/10.3390/app13031618 - 27 Jan 2023
Cited by 2 | Viewed by 2937
Abstract
Affected by shooting angle and light intensity, shooting through transparent media may cause light reflections in an image and degrade picture quality, which has a negative effect on computer vision tasks. In this paper, we propose a Residual Attention Based Reflection Removal Network (RABRRN) to tackle single image reflection removal. We hold that reflection removal is essentially an image separation problem sensitive to both spatial and channel features. Therefore, we integrate spatial attention and channel attention into the model to enhance spatial and channel feature representation. To mitigate the vanishing gradient problem in the iterative training of deep neural networks, the attention module is combined with a residual network to form a residual attention module, improving reflection removal performance. In addition, we establish a reflection image dataset named the SCAU Reflection Image Dataset (SCAU-RID), providing sufficient real training data. The experimental results show that the proposed method achieves a PSNR of 23.787 dB and an SSIM value of 0.885 across four benchmark datasets. Compared with the other most advanced methods, our method has only 18.524M parameters, yet it obtains the best results on the test datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
22 pages, 8232 KiB  
Article
Research on Wickerwork Patterns Creative Design and Development Based on Style Transfer Technology
by Tianxiong Wang, Zhiqi Ma, Fan Zhang and Liu Yang
Appl. Sci. 2023, 13(3), 1553; https://doi.org/10.3390/app13031553 - 25 Jan 2023
Cited by 7 | Viewed by 2239
Abstract
Traditional craftsmanship and culture are facing a transformation amid modern scientific and technological development, and the cultural industry is gradually stepping into the digital era, in which the sustainable development of intangible cultural heritage can be realized with the help of digital technology. To innovatively generate wickerwork pattern design schemes that meet users' preferences, this study proposes a design method for wickerwork patterns based on a style transfer algorithm. First, an image recognition experiment using a residual network (ResNet) based on the convolutional neural network is applied to the Funan wickerwork patterns to establish an image recognition model. The experimental results show that ResNet50 achieves an optimal recognition rate of 93.37% over the entire dataset of pattern design images, with recognition rates of 89.47% for modern patterns, 97.14% for traditional patterns, 95.95% for wickerwork patterns, and 90.91% for personality patterns. Second, design scheme generation models for the Funan wickerwork patterns are built on Cycle-Consistent Adversarial Networks (CycleGAN), which can automatically and innovatively generate pattern design schemes with given style characteristics. Finally, the designer uses the creative images as a source of inspiration and participates in fine-tuning the generated images to design wickerwork patterns with various stylistic features. The proposed method explores the application of AI technology in wickerwork pattern development and provides more comprehensive and rich new material for the creation of wickerwork patterns, thus contributing to the sustainable development and innovation of traditional Funan wickerwork culture. Indeed, this digital technology can empower the inheritance and development of many other intangible cultural heritages. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
11 pages, 1283 KiB  
Article
Trunk Borer Identification Based on Convolutional Neural Networks
by Xing Zhang, Haiyan Zhang, Zhibo Chen and Juhu Li
Appl. Sci. 2023, 13(2), 863; https://doi.org/10.3390/app13020863 - 8 Jan 2023
Cited by 2 | Viewed by 1774
Abstract
Trunk borers are a great danger to forests because they are well concealed, their damage emerges with a long lag, and they are highly destructive. To improve the early monitoring of trunk borers, the representative species Agrilus planipennis Fairmaire was selected as the research object, and a convolutional neural network named TrunkNet was designed to identify the activity sounds of its larvae, recorded as vibration signals in audio form. A detector was used to collect the activity sounds of Agrilus planipennis Fairmaire larvae in wood segments along with some typical outdoor noise. The vibration signal pulses are short, random, and high-energy. TrunkNet was trained to identify these vibration signals. In the experiment, the test accuracy of TrunkNet was 96.89%, while MobileNet_V2, ResNet18 and VGGish showed 84.27%, 79.37% and 70.85% accuracy, respectively. TrunkNet can provide technical support for the automatic monitoring and early warning of stealthy tree trunk borers. This study is limited to a single pest; future experiments will focus on the network's applicability to other pests. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
12 pages, 4561 KiB  
Article
An IoT-Based Deep Learning Framework for Real-Time Detection of COVID-19 through Chest X-ray Images
by Mithun Karmakar, Bikramjit Choudhury, Ranjan Patowary and Amitava Nag
Computers 2023, 12(1), 8; https://doi.org/10.3390/computers12010008 - 28 Dec 2022
Cited by 1 | Viewed by 2234
Abstract
Over the next decade, the Internet of Things (IoT) and high-speed 5G networks will be crucial in enabling remote access to the healthcare system for easy and fast diagnosis. In this paper, an IoT-based deep learning computer-aided diagnosis (CAD) framework is proposed for online, real-time COVID-19 identification. The proposed work first fine-tunes five state-of-the-art deep CNN models, namely Xception, ResNet50, DenseNet201, MobileNet, and VGG19, and then combines them into a majority-voting deep ensemble CNN (DECNN) model to detect COVID-19 accurately. The findings demonstrate that the suggested framework, with a test accuracy of 98%, outperforms other relevant state-of-the-art methodologies in overall performance. The proposed CAD framework has the potential to serve as a decision support system for general clinicians and rural health workers, enabling diagnosis of COVID-19 at an early stage. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
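Majority voting itself is simple: given per-model class probabilities, a sketch like the following picks the class with the most votes per sample (toy numbers, not the paper's models).

```python
import numpy as np

def majority_vote(prob_list):
    """Hard majority voting over per-model class probabilities.
    prob_list: list of (n_samples, n_classes) arrays, one per CNN."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])  # (models, n)
    # For each sample, pick the class receiving the most votes.
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=prob_list[0].shape[1]).argmax(),
        axis=0, arr=votes)

# Three toy "models" voting on 4 samples with 2 classes (COVID / normal).
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.4, 0.6]])
p3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9], [0.8, 0.2]])
print(majority_vote([p1, p2, p3]))   # -> [0 1 1 0]
```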
13 pages, 1567 KiB  
Article
Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data
by Tara Othman Qadir Saraf, Norfaiza Fuad and Nik Shahidah Afifi Md Taujuddin
Computers 2023, 12(1), 7; https://doi.org/10.3390/computers12010007 - 27 Dec 2022
Cited by 2 | Viewed by 2095
Abstract
Feature selection in high-dimensional space is a combinatorial optimization problem of an NP-hard nature. Current feature selection algorithms widely use meta-heuristic search with information theory-based criteria embedded in the fitness function to select the relevant features. However, the growth of the solution space's dimension leads to a high computational cost and a risk of poor convergence, and sub-optimality can arise from assuming a fixed length for the optimal feature subset. Alternatively, variable-length search explores solution spaces of varying length, which yields better optimality at a lower computational load. The literature contains various meta-heuristic algorithms with variable-length search, all of which can handle high-dimensional problems, but their relative performance remains uncertain. To fill this gap, this article proposes a novel framework for comparing variants of variable-length-searching meta-heuristic algorithms in the application of feature selection. For this purpose, we implemented four types of variable-length meta-heuristic searching algorithms, namely VLBHO-Fitness, VLBHO-Position, variable length particle swarm optimization (VLPSO) and genetic variable length (GAVL), and compared them in terms of classification metrics. The evaluation showed the overall superiority of VLBHO over the other algorithms in terms of achieving lower fitness values when optimizing mathematical functions of the variable-length type. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
22 pages, 5649 KiB  
Article
Multi-Objective Antenna Design Based on BP Neural Network Surrogate Model Optimized by Improved Sparrow Search Algorithm
by Zhongxin Wang, Jian Qin, Zijiang Hu, Jian He and Dong Tang
Appl. Sci. 2022, 12(24), 12543; https://doi.org/10.3390/app122412543 - 7 Dec 2022
Cited by 13 | Viewed by 2185
Abstract
To overcome the time-consuming, laborious, and inefficient nature of traditional antenna design methods, which combine classical optimization algorithms with electromagnetic simulation software, an efficient multi-objective antenna design method is proposed based on the multi-strategy improved sparrow search algorithm (MISSA) used to optimize a BP neural network. Three strategies, namely Bernoulli chaotic mapping, inertial weights, and the t-distribution, are introduced into the sparrow search algorithm to improve its convergence speed and accuracy: the Bernoulli chaotic map enriches the initial sparrow population, the inertial weight is introduced into the sparrow's position update to improve its search ability, and adaptive t-distribution mutation perturbs selected individual sparrows so the algorithm reaches the optimal solution more quickly. The initial parameters of the BP neural network are optimized with the improved sparrow search algorithm to obtain the optimized MISSA-BP antenna surrogate model. This model is combined with multi-objective particle swarm optimization (MOPSO) to solve the multi-objective antenna design problem and is verified on a triple-frequency antenna. The simulated results show that this method predicts antenna performance more accurately and can design multi-objective antennas that meet the requirements. The practicality of the method is further verified by fabricating a real antenna. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
21 pages, 1101 KiB  
Article
From Ranking Search Results to Managing Investment Portfolios: Exploring Rank-Based Approaches for Portfolio Stock Selection
by Mohammad Alsulmi
Electronics 2022, 11(23), 4019; https://doi.org/10.3390/electronics11234019 - 4 Dec 2022
Cited by 2 | Viewed by 2769
Abstract
Investing in financial markets to make profits and grow one's wealth is not straightforward. Typically, financial domain experts, such as investment advisers and financial analysts, conduct extensive research on a target financial market to decide which stock symbols are worthy of investment. Their research process generally involves collecting a large volume of data (e.g., financial reports, announcements, news, etc.), performing several analytics tasks, and making inferences to reach investment decisions. The rapid increase in the volume of data generated for stock market companies makes thorough analytics impractical in the limited time available. Fortunately, recent advances in computational intelligence methods have been adopted in various sectors, providing opportunities to address investment tasks efficiently and effectively. This paper explores rank-based approaches, mainly machine-learning based, for selecting stock symbols to construct long-term investment portfolios. Relying on these approaches, we propose a feature set containing various statistics that indicate the performance of stock market companies, which can be used to train several ranking models. For evaluation, we selected four years of Saudi Stock Exchange data and applied our proposed framework to them in a simulated investment setting. Our results show that rank-based approaches have the potential to be adopted for constructing investment portfolios, generating substantial returns and outperforming the gains of the Saudi Stock Market index over the tested period. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
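One plausible concrete instantiation of such a rank-based approach is LightGBM's LambdaRank ranker, grouping samples by period; everything below (features, labels, group sizes) is synthetic for illustration only.

```python
import numpy as np
import lightgbm as lgb  # pip install lightgbm

# Hypothetical setup: rank stocks within each quarter by fundamentals.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # e.g. P/E, ROE, momentum, ...
y = rng.integers(0, 4, size=200)     # graded relevance = future-return bucket
group = [50, 50, 50, 50]             # four quarters of 50 stocks each

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=50)
ranker.fit(X, y, group=group)

# Score a new quarter and take the top-10 stocks for the portfolio.
scores = ranker.predict(rng.normal(size=(50, 5)))
top10 = np.argsort(scores)[::-1][:10]
print(top10)
```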
16 pages, 2425 KiB  
Article
Exploration of the Impact of Cybersecurity Awareness on Small and Medium Enterprises (SMEs) in Wales Using Intelligent Software to Combat Cybercrime
by Nisha Rawindaran, Ambikesh Jayal and Edmond Prakash
Computers 2022, 11(12), 174; https://doi.org/10.3390/computers11120174 - 3 Dec 2022
Cited by 9 | Viewed by 6274
Abstract
Intelligent software packages have grown rapidly in popularity among large businesses in both developed and developing countries, owing to their greater availability for detecting and preventing cybercrime. However, small and medium enterprises (SMEs) show prominent gaps in this adoption because of their limited awareness and knowledge of cyber security and the security mindset, and because they prioritize running their businesses over adopting the right technology to protect their data. This study explored how SMEs in Wales handle cybercrime and manage their daily online activities as best they can to keep their data safe from cyber threats. The sample consisted of 122 Welsh SME respondents, with data collected through a survey questionnaire. The results and findings showed large gaps in awareness and knowledge of intelligent software, in particular the use of machine learning integrated within their technology to track and combat complex cybercrime that standard cyber security software packages might miss. Only 30% of the sampled SMEs understood the terminology of cyber security, and awareness of machine learning and its algorithms in the implementation of their cyber security software packages was also questioned. The study further highlighted that Welsh SMEs were unaware of what this software could do to protect their data. The findings also showed that factors such as education and SME size influenced the choice of software packages, whereas age, gender, role, and being a decision maker had no impact on these choices. Finally, the study shares its investigation of various SME strategies to help understand the risks and to plan future contingencies for keeping data safe and secure. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
19 pages, 458 KiB  
Article
An Improved Binary Owl Feature Selection in the Context of Android Malware Detection
by Hadeel Alazzam, Aryaf Al-Adwan, Orieb Abualghanam, Esra’a Alhenawi and Abdulsalam Alsmady
Computers 2022, 11(12), 173; https://doi.org/10.3390/computers11120173 - 30 Nov 2022
Cited by 9 | Viewed by 2259
Abstract
Recently, the proliferation of smartphones, tablets, and smartwatches has raised security concerns among researchers. Android is considered the dominant mobile operating system, and the open-source nature of this platform makes it a prime target for malware attacks that result in both data exfiltration and property loss. To handle the security issues of mobile malware attacks, researchers have proposed novel algorithms and detection approaches, but there is no standard dataset used for fair evaluation: most research datasets were collected from the Play Store or drawn randomly from public datasets such as DREBIN. In this paper, a wrapper-based approach for Android malware detection is proposed, consisting of a newly modified binary Owl optimizer and a random forest classifier. The approach was evaluated using the standard data splits given by the DREBIN dataset in terms of accuracy, precision, recall, false-positive rate, and F1-score. It reaches 98.84% accuracy and an 86.34% F1-score, and it outperforms several related approaches from the literature in terms of accuracy, precision, and recall. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
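The heart of any such wrapper is a fitness function that scores a candidate binary feature mask with the classifier. The sketch below, with a random forest and an accuracy/sparsity trade-off, is an assumption about the general scheme rather than the paper's exact formula; the optimizer (here, the binary Owl algorithm) would propose the masks.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.98):
    """Wrapper fitness for a binary feature mask: blend classifier
    accuracy with the fraction of features dropped, as wrapper-based
    selectors commonly do. alpha weights accuracy vs. sparsity."""
    if not mask.any():
        return 0.0                      # empty subsets score worst
    acc = cross_val_score(RandomForestClassifier(n_estimators=50),
                          X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.mean())

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only two informative features
mask = rng.random(20) < 0.5               # one candidate from the optimizer
print(round(fitness(mask, X, y), 3))
```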
18 pages, 1948 KiB  
Article
Learning-Based Matched Representation System for Job Recommendation
by Suleiman Ali Alsaif, Minyar Sassi Hidri, Hassan Ahmed Eleraky, Imen Ferjani and Rimah Amami
Computers 2022, 11(11), 161; https://doi.org/10.3390/computers11110161 - 14 Nov 2022
Cited by 13 | Viewed by 5423
Abstract
Job recommender systems (JRS) are a subclass of information filtering systems that aim to help job seekers identify what might match their skills and experience and prevent them from being lost in the vast amount of information available on job boards that aggregate postings from many sources, such as LinkedIn or Indeed. A variety of JRS strategies have been implemented, but most fail to recommend vacancies that properly fit job seekers' profiles when dealing with more than one job offer, because they treat skills as passive entities associated with the job description that merely need to be matched to find the best recommendation. This paper provides a recommender system to assist job seekers in finding suitable jobs based on their resumes. The proposed system recommends the top-n jobs by analyzing and measuring the similarity between a job seeker's skills and the explicit features of the job listings using content-based filtering. First-hand information was gathered by scraping job descriptions from Indeed for major cities in Saudi Arabia (Dammam, Jeddah, and Riyadh). The top skills required in job offers were then analyzed, and job recommendations were made by matching skills from resumes to posted jobs. To quantify recommendation success and error rates, we compared the results of our system to reality using decision support measures. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
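A minimal content-based matching core of this kind can be expressed with TF-IDF and cosine similarity, as sketched below on invented job and resume texts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jobs = [
    "python developer machine learning pandas sql",
    "civil engineer autocad site supervision",
    "data analyst sql tableau statistics",
]
resume = ["python sql statistics machine learning"]

vec = TfidfVectorizer()
job_mat = vec.fit_transform(jobs)                 # job-listing features
sims = cosine_similarity(vec.transform(resume), job_mat).ravel()
top_n = sims.argsort()[::-1]                      # best match first
print([(jobs[i][:30], round(sims[i], 2)) for i in top_n])
```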
16 pages, 6821 KiB  
Article
FFSCN: Frame Fusion Spectrum Center Net for Carrier Signal Detection
by Hao Huang, Jiao Wang and Jianqing Li
Electronics 2022, 11(20), 3349; https://doi.org/10.3390/electronics11203349 - 17 Oct 2022
Viewed by 1765
Abstract
Carrier signal detection is a complicated and essential task in many domains because it demands a quick response to the presence of several carriers in the wideband while also precisely predicting each carrier signal's frequency center and bandwidth, for both single-carrier and multi-carrier modulation signals. Multi-carrier modulation signals, such as FSK and OFDM, can be incorrectly recognized as several single-carrier signals by the spectrum center net (SCN) or FCN-based methods. This paper designs a deep convolutional neural network (CNN) framework for multi-carrier signal detection, called frame fusion spectrum center net (FFSCN), that fuses the features of multiple consecutive frames of the broadband power spectrum and estimates the information of each single-carrier or multi-carrier modulation signal in the broadband; its variants include FFSCN-R, FFSCN-MN, and FFSCN-FMN. FFSCN comprises three base parts: a deep CNN backbone, a feature pyramid network (FPN) neck, and a regression network (RegNet) head. FFSCN-R and FFSCN-MN fuse the FPN output features using Residual and MobileNetV3 backbones, respectively, with FFSCN-MN requiring less inference time. To further reduce the complexity of FFSCN-MN, the designed FFSCN-FMN modifies the MobileNet blocks and fuses the features at each block of the backbone. The multiple consecutive frames of broadband power spectra not only preserve the high resolution of the broadband frequency but also add features of the signal's changes in the time dimension. Extensive experimental results demonstrate that the proposed FFSCN effectively detects multi-carrier and single-carrier modulation signals in the broadband power spectrum and outperforms SCN in accuracy and efficiency. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
29 pages, 10938 KiB  
Article
User Analytics in Online Social Networks: Evolving from Social Instances to Social Individuals
by Gerasimos Razis, Stylianos Georgilas, Giannis Haralabopoulos and Ioannis Anagnostopoulos
Computers 2022, 11(10), 149; https://doi.org/10.3390/computers11100149 - 7 Oct 2022
Cited by 2 | Viewed by 2326
Abstract
In our era of big data and information overload, content consumers utilise a variety of sources to meet their data and informational needs and acquire an in-depth perspective on a subject, as each source focuses on specific aspects. The same principle applies to online social networks (OSNs): end-users usually maintain accounts in multiple OSNs so as to acquire a complete social networking experience, since each OSN has a different philosophy in terms of its services, content, and interaction. Contrary to the current literature, we examine users' behavioural and disseminated-content patterns under the assumption that accounts maintained by a user in multiple OSNs are not distinct accounts, but rather the same individual with multiple social instances. Our social analysis, enriched with information about the users' social influence, revealed behavioural patterns depending on the examined OSN, its social entities, and the users' exerted influence. Finally, we ranked the examined OSNs based on three types of social characteristics, revealing correlations between the users' behavioural and content patterns, social influences, social entities, and the OSNs themselves. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
16 pages, 683 KiB  
Article
A Fine-Grained Modeling Approach for Systolic Array-Based Accelerator
by Yuhang Li, Mei Wen, Jiawei Fei, Junzhong Shen and Yasong Cao
Electronics 2022, 11(18), 2928; https://doi.org/10.3390/electronics11182928 - 15 Sep 2022
Cited by 1 | Viewed by 2093
Abstract
The systolic array provides extremely high efficiency for running matrix multiplication and is one of the mainstream architectures of today's deep learning accelerators. To develop efficient accelerators, designers usually employ simulators to make design trade-offs. However, current simulators suffer from coarse-grained modeling methods and ideal assumptions, which limit their ability to describe the structural characteristics of systolic arrays, and they do not support the exploration of microarchitecture. This paper presents FG-SIM, a fine-grained modeling approach for evaluating systolic array accelerators using an event-driven method. FG-SIM obtains accurate results and provides the best mapping scheme for different workloads thanks to its fine-grained modeling technique and its avoidance of ideal assumptions. Experimental results show that FG-SIM plays a significant role in design trade-offs and outperforms state-of-the-art simulators, with an accuracy of more than 95%. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
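The event-driven backbone of such a simulator can be pictured as a time-ordered priority queue of events. The sketch below is a generic skeleton with a toy systolic "processing element" handler, not FG-SIM's actual code.

```python
import heapq

class EventSim:
    """Generic event-driven simulation loop of the kind FG-SIM builds on:
    events are (time, sequence, handler, args) tuples popped in time order."""
    def __init__(self):
        self.queue, self.now, self._seq = [], 0, 0

    def schedule(self, delay, handler, *args):
        self._seq += 1                  # tie-breaker for equal timestamps
        heapq.heappush(self.queue,
                       (self.now + delay, self._seq, handler, args))

    def run(self):
        while self.queue:
            self.now, _, handler, args = heapq.heappop(self.queue)
            handler(*args)

sim = EventSim()

def pe_fire(row, col):   # one systolic processing element doing a MAC
    print(f"t={sim.now}: PE({row},{col}) multiply-accumulate")
    if col < 2:          # pass the operand to the right neighbour
        sim.schedule(1, pe_fire, row, col + 1)

sim.schedule(0, pe_fire, 0, 0)
sim.run()                # fires PE(0,0), PE(0,1), PE(0,2) at t=0,1,2
```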
15 pages, 2492 KiB  
Review
Impact of the Internet of Things on Psychology: A Survey
by Hamed Vahdat-Nejad, Wathiq Mansoor, Sajedeh Abbasi, Mahdi Hajiabadi, Fatemeh Salmani, Faezeh Azizi, Reyhane Mosafer, Mohadese Jamalian and Hadi Khosravi-Farsani
Smart Cities 2022, 5(3), 1193-1207; https://doi.org/10.3390/smartcities5030060 - 14 Sep 2022
Cited by 6 | Viewed by 3697
Abstract
The Internet of Things (IoT) continues to “smartify” human life while influencing areas such as industry, education, economy, business, medicine, and psychology. The introduction of the IoT into psychology has resulted in various intelligent systems that aim to help people, particularly those with special needs such as the elderly, disabled, and children. This paper proposes a framework for investigating the role and impact of the IoT in psychology from two perspectives: (1) the goals of using the IoT in this area, and (2) the computational technologies used towards this purpose. To this end, existing studies are reviewed from these viewpoints. The results show that the goals of using the IoT can be identified as morale improvement, diagnosis, and monitoring, while the main technical contributions of the related papers lie in system design, data mining, or hardware invention and signal processing. Subsequently, unique features of state-of-the-art research in this area are discussed, including the type and diversity of sensors, crowdsourcing, context awareness, fog and cloud platforms, and inference. Our concluding remarks indicate that this area is in its infancy; consequently, we discuss the next steps for this research. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 1756 KiB  
Article
Superpixel Image Classification with Graph Convolutional Neural Networks Based on Learnable Positional Embedding
by Ji-Hun Bae, Gwang-Hyun Yu, Ju-Hwan Lee, Dang Thanh Vu, Le Hoang Anh, Hyoung-Gook Kim and Jin-Young Kim
Appl. Sci. 2022, 12(18), 9176; https://doi.org/10.3390/app12189176 - 13 Sep 2022
Cited by 13 | Viewed by 3456
Abstract
Graph convolutional neural networks (GCNNs) have been successfully applied to a wide range of problems, including low-dimensional Euclidean structural domains representing images, videos, and speech, and high-dimensional non-Euclidean domains, such as social networks and chemical molecular structures. However, in computer vision, existing GCNNs are not provided with the positional information needed to distinguish between graphs of new structures, so their performance on image classification over arbitrary graph representations is significantly poor. In this work, we initialize the positional information through a random-walk algorithm and continuously learn additional position-embedding information for the various graph structures built over superpixel images, which we choose for efficiency. We call this method the graph convolutional network with learnable positional embedding applied on images (IMGCN-LPE). We apply IMGCN-LPE to three graph convolutional models (the Chebyshev graph convolutional network, graph convolutional network, and graph attention network) to validate performance on various benchmark image datasets. As a result, although not as impressive as convolutional neural networks, the proposed method outperforms various other conventional convolutional methods and demonstrates its effectiveness on the same tasks in the field of GCNNs. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
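Random-walk positional encodings of this kind are typically built from the return probabilities of a k-step random walk, i.e., the diagonals of powers of the transition matrix D⁻¹A. The numpy sketch below illustrates the general technique on a toy graph; it is an assumption about the standard construction, not the paper's exact initialization.

```python
import numpy as np

def random_walk_pe(adj, k=4):
    """Random-walk positional encoding: node i's embedding collects the
    probability of returning to i after 1..k steps of a random walk
    (the diagonal of successive powers of D^-1 A)."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)             # row-stochastic transitions
    pe, Pk = [], np.eye(len(adj))
    for _ in range(k):
        Pk = Pk @ P
        pe.append(np.diag(Pk))
    return np.stack(pe, axis=1)              # (n_nodes, k)

# Toy 4-node graph: a triangle with a pendant node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(random_walk_pe(A))   # distinct rows -> distinguishable positions
```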
25 pages, 9223 KiB  
Article
Improved Twin Delayed Deep Deterministic Policy Gradient Algorithm Based Real-Time Trajectory Planning for Parafoil under Complicated Constraints
by Jiaming Yu, Hao Sun and Junqing Sun
Appl. Sci. 2022, 12(16), 8189; https://doi.org/10.3390/app12168189 - 16 Aug 2022
Cited by 6 | Viewed by 2323
Abstract
Parafoil delivery systems have been widely used in recent years for military and civilian airdrop supply and for aircraft recovery. However, since the altitude of an unpowered parafoil decreases monotonically, it is limited by the initial flight altitude; combined with multiple constraints, such as ground obstacle avoidance and flight time, this imposes stringent real-time requirements on trajectory planning for the parafoil delivery system. To enhance real-time performance, we propose a new parafoil trajectory planning method based on an improved twin delayed deep deterministic policy gradient. In this method, the value of an action is pre-evaluated and a noise scale is selected dynamically, improving globality and randomness, especially for low-value actions. Furthermore, unlike traditional numerical computation algorithms, the deep reinforcement learning method builds the planning model in advance and does not recalculate the optimal flight trajectory when the parafoil delivery system is launched from different initial positions, which greatly improves real-time performance. Finally, several groups of simulation data show that the trajectory planning approach in this paper is feasible and correct. Compared with the traditional twin delayed deep deterministic policy gradient and the deep deterministic policy gradient, the landing accuracy and success rate of the proposed method are greatly improved. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 4803 KiB  
Article
Feature Augmentation Based on Pixel-Wise Attention for Rail Defect Detection
by Hongjue Li, Hailang Li, Zhixiong Hou, Haoran Song, Junbo Liu and Peng Dai
Appl. Sci. 2022, 12(16), 8006; https://doi.org/10.3390/app12168006 - 10 Aug 2022
Viewed by 1787
Abstract
Image-based rail defect detection can be conceptually defined as an object detection task in computer vision. However, unlike academic object detection tasks, this practical industrial application suffers from two unique challenges: object ambiguity and insufficient annotations. To overcome these challenges, we introduce a pixel-wise attention mechanism to fully exploit the features of annotated defects, and we develop a feature augmentation framework to tackle the defect detection problem. Pixel-wise attention is computed through a learnable pixel-level similarity between input and support features to obtain augmented features, which contain co-existing information from the input images and multi-class support defects. The final output features are augmented and refined by the support features, enabling the model to distinguish between ambiguous defect patterns from insufficient annotated samples. Experiments on the rail defect dataset demonstrate that feature augmentation helps balance the sensitivity and robustness of the model. On our collected dataset with eight defect classes, our algorithm achieves 11.32% higher mAP@0.5 than the original YOLOv5 and 4.27% higher mAP@0.5 than Faster R-CNN. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 2300 KiB  
Article
FocusedDropout for Convolutional Neural Network
by Minghui Liu, Tianshu Xie, Xuan Cheng, Jiali Deng, Meiyi Yang, Xiaomin Wang and Ming Liu
Appl. Sci. 2022, 12(15), 7682; https://doi.org/10.3390/app12157682 - 30 Jul 2022
Cited by 5 | Viewed by 1931
Abstract
In a convolutional neural network (CNN), standard dropout does not work well because dropped information is not entirely obscured in convolutional layers, where features are spatially correlated. Beyond randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, which aims to make the network focus more on the target. In FocusedDropout, we use a simple but effective method to search for target-related features, retain these features, and discard the others, which is contrary to existing methods. We find that this method can improve network performance by making the network more target-focused. Additionally, increasing the weight decay while using FocusedDropout can avoid overfitting and increase accuracy. Experimental results show that, at a slight cost, employing FocusedDropout on only 10% of batches produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR10, CIFAR100 and Tiny ImageNet, and that the method generalizes well across different CNN models. Full article
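A rough sketch of the retain-target/drop-rest idea, assuming the most active channel is used to locate target-related positions (our reading of the abstract; `keep_ratio` is illustrative):

```python
import torch

def focused_dropout(x, keep_ratio=0.6):
    """x: (B, C, H, W). Keep spatial positions correlated with the most active
    channel (a proxy for target-related features) and zero out the rest."""
    b, c, h, w = x.shape
    channel_avg = x.mean(dim=(2, 3))                   # (B, C) per-channel response
    key = channel_avg.argmax(dim=1)                    # most active channel per sample
    key_maps = x[torch.arange(b), key]                 # (B, H, W) key feature maps
    thresh = torch.quantile(key_maps.flatten(1), 1 - keep_ratio, dim=1)
    mask = (key_maps >= thresh.view(b, 1, 1)).float()  # target-related positions
    return x * mask.unsqueeze(1)                       # same mask across all channels
```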
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 4623 KiB  
Article
A Robust Bayesian Optimization Framework for Microwave Circuit Design under Uncertainty
by Duygu De Witte, Jixiang Qing, Ivo Couckuyt, Tom Dhaene, Dries Vande Ginste and Domenico Spina
Electronics 2022, 11(14), 2267; https://doi.org/10.3390/electronics11142267 - 20 Jul 2022
Cited by 6 | Viewed by 2364
Abstract
In modern electronics, there are many inevitable uncertainties and variations of design parameters that have a profound effect on the performance of a device. These are induced by, among others, manufacturing tolerances, assembly inaccuracies, material diversity, and machining errors. This prompts wide interest in enhanced optimization algorithms that take the effect of these uncertainty sources into account and that are able to find robust designs, i.e., designs that are insensitive to the uncertainties, early in the design cycle. In this work, a novel machine learning-based optimization framework that accounts for the uncertainty of the design parameters is presented. This is achieved by using a modified version of the expected improvement criterion. Moreover, a data-efficient Bayesian optimization framework is leveraged to limit the number of simulations required to find a robust design solution. Two suitable application examples validate that the robustness is significantly improved compared to standard design methods. Full article
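For reference, the standard expected improvement criterion that the paper modifies can be sketched as follows (minimization convention, computed from a Gaussian-process posterior; the robust, uncertainty-aware variant used in the paper is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """Standard EI from a GP posterior mean `mu` and std `sigma`, given the
    best objective value found so far (minimization)."""
    sigma = np.maximum(sigma, 1e-12)      # guard against zero predictive variance
    imp = best - mu - xi                  # expected amount of improvement
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)
```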
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
16 pages, 2988 KiB  
Article
IRSDet: Infrared Small-Object Detection Network Based on Sparse-Skip Connection and Guide Maps
by Xiaoli Xi, Jinxin Wang, Fang Li and Dongmei Li
Electronics 2022, 11(14), 2154; https://doi.org/10.3390/electronics11142154 - 9 Jul 2022
Cited by 9 | Viewed by 2216
Abstract
Detecting small objects in infrared images remains a challenge because most of them lack shape and texture. In this study, we propose an infrared small-object detection method to improve the capacity for detecting thermal objects in complex scenarios. First, a sparse-skip connection block is proposed to enhance the response of small infrared objects and suppress the background response; this block is used to construct the backbone of the detection model. Second, a region attention module is designed to emphasize the features of infrared small objects and suppress background regions. Finally, a batch-averaged biased classification loss function is designed to improve the accuracy of the detection model. The experimental results show that the proposed framework significantly increases precision, recall, and F1-score, and that, compared with current advanced models for small-object detection, it performs better on infrared small objects under complex backgrounds. The insights gained from this study may provide new ideas for infrared small-object detection and tracking. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
17 pages, 3901 KiB  
Article
Comparison of On-Policy Deep Reinforcement Learning A2C with Off-Policy DQN in Irrigation Optimization: A Case Study at a Site in Portugal
by Khadijeh Alibabaei, Pedro D. Gaspar, Eduardo Assunção, Saeid Alirezazadeh, Tânia M. Lima, Vasco N. G. J. Soares and João M. L. P. Caldeira
Computers 2022, 11(7), 104; https://doi.org/10.3390/computers11070104 - 24 Jun 2022
Cited by 15 | Viewed by 5540
Abstract
Precision irrigation and the optimization of water use have become essential factors in agriculture because water is critical for crop growth. The proper management of an irrigation system should enable the farmer to use water efficiently to increase productivity, reduce production costs, and maximize the return on investment. Efficient water application techniques are essential prerequisites for sustainable agricultural development based on the conservation of water resources and the preservation of the environment. In a previous work, an off-policy deep reinforcement learning model, Deep Q-Network, was implemented to optimize irrigation, and its performance was tested for a tomato crop at a site in Portugal. In this paper, an on-policy model, Advantage Actor-Critic, is implemented and its irrigation scheduling is compared with that of the Deep Q-Network for the same tomato crop. The results show that the on-policy Advantage Actor-Critic model reduced water consumption by 20% compared to the Deep Q-Network, with only a slight change in the net reward. These models can be extended to other crops with high production in Portugal, such as fruit, cereals, and wine grapes, which also have large water requirements. Full article
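A one-step Advantage Actor-Critic update of the kind compared here can be sketched as follows; this is a generic A2C loss, not the paper's exact implementation, and the crop-simulator interface that would supply states and rewards is assumed:

```python
import torch

def a2c_loss(log_prob, value, reward, next_value, gamma=0.99, beta=0.5):
    """One-step A2C loss for an irrigation action.
    log_prob: log-probability of the taken action; value/next_value: critic outputs."""
    advantage = reward + gamma * next_value.detach() - value
    policy_loss = -(log_prob * advantage.detach()).mean()  # actor: follow the advantage
    value_loss = advantage.pow(2).mean()                   # critic: TD-error regression
    return policy_loss + beta * value_loss
```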
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
17 pages, 3358 KiB  
Article
Robust Fingerprint Minutiae Extraction and Matching Based on Improved SIFT Features
by Samy Bakheet, Shtwai Alsubai, Abdullah Alqahtani and Adel Binbusayyis
Appl. Sci. 2022, 12(12), 6122; https://doi.org/10.3390/app12126122 - 16 Jun 2022
Cited by 10 | Viewed by 9470
Abstract
Minutiae feature extraction and matching are not only two crucial tasks for identifying fingerprints; they also play an eminent role as core components of automated fingerprint recognition (AFR) systems. Such systems first focus on identifying and describing the salient minutiae points that impart individuality to each fingerprint and differentiate one fingerprint from another, and then match their relative placement in a candidate fingerprint against previously stored fingerprint templates. In this paper, an automated minutiae extraction and matching framework is presented for identification and verification purposes, in which an adaptive scale-invariant feature transform (SIFT) detector is applied to high-contrast fingerprints preprocessed by denoising, binarization, thinning, dilation and enhancement to improve the quality of latent fingerprints. As a result, an optimized set of highly reliable salient points discriminating fingerprint minutiae is identified and described accurately and quickly. Then, the SIFT descriptors of the local key-points in a given fingerprint are matched with those of the stored templates using a brute-force algorithm, assigning a score to each match based on the Euclidean distance between the SIFT descriptors of the two matched keypoints. Finally, a dual-threshold postprocessing filter is adaptively applied, which can potentially eliminate almost all false matches while discarding very few correct matches (less than 4%). Experimental evaluations on the publicly available low-quality FVC2004 fingerprint datasets demonstrate that the proposed framework delivers performance comparable or superior to several state-of-the-art methods, achieving an average equal error rate (EER) of 2.01%. Full article
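The brute-force SIFT matching stage might look roughly like this with OpenCV; the fixed ratio and distance thresholds below are illustrative stand-ins for the paper's adaptive dual-threshold filter:

```python
import cv2

def match_minutiae(img1, img2, d_max=250.0, ratio=0.75):
    """Brute-force SIFT matching with a dual filter: Lowe's ratio test plus an
    absolute Euclidean-distance threshold. img1/img2: grayscale fingerprint images."""
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)            # Euclidean distance between descriptors
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance and m.distance < d_max:
            good.append(m)                           # passes both thresholds
    return good
```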
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
18 pages, 9540 KiB  
Article
Deep Learning-Based End-to-End Carrier Signal Detection in Broadband Power Spectrum
by Hao Huang, Peng Wang, Jiao Wang and Jianqing Li
Electronics 2022, 11(12), 1896; https://doi.org/10.3390/electronics11121896 - 16 Jun 2022
Cited by 2 | Viewed by 2854
Abstract
This paper presents an end-to-end deep convolutional neural network (CNN) model for carrier signal detection in the broadband power spectrum, called spectrum center net (SCN). By regarding the broadband power spectrum sequence as a one-dimensional (1D) image and each subcarrier on the broadband as a target object, we transform the carrier signal detection problem into a semantic segmentation problem on a 1D image, whose core task becomes regression of the frequency center (FC) and bandwidth (BW). SCN takes the broadband power spectrum as input and extracts features at different length scales with a ResNet backbone. A feature pyramid network (FPN) neck then fuses the features, and a RegNet head regresses the power spectrum distribution (PSD) prediction for the FC and the corresponding BW prediction. Finally, the subcarrier targets are obtained by applying non-maximum suppression (NMS). We train the SCN on a simulation dataset and validate it on a real satellite broadband power spectrum set. As an improvement over the fully convolutional network-based (FCN-based) method, the proposed method outputs detection results directly, without heavy post-processing. Extensive experimental results demonstrate that the proposed method can effectively detect subcarrier signals in the broadband power spectrum and achieves higher and more robust performance than deep FCN- and threshold-based methods. Full article
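The final NMS step over predicted (frequency center, bandwidth) pairs can be sketched as a greedy 1D interval suppression; this is an assumed formulation for illustration, not the authors' code:

```python
def nms_1d(detections, iou_threshold=0.5):
    """detections: list of (center, bandwidth, score) tuples.
    Greedy NMS over the 1D frequency intervals implied by each prediction."""
    def iou(a, b):
        lo = max(a[0] - a[1] / 2, b[0] - b[1] / 2)   # overlap start
        hi = min(a[0] + a[1] / 2, b[0] + b[1] / 2)   # overlap end
        inter = max(0.0, hi - lo)
        return inter / (a[1] + b[1] - inter)          # intersection over union
    kept = []
    for det in sorted(detections, key=lambda d: d[2], reverse=True):
        if all(iou(det, k) < iou_threshold for k in kept):
            kept.append(det)                          # keep the highest-scoring survivor
    return kept
```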
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
19 pages, 3101 KiB  
Article
MBHAN: Motif-Based Heterogeneous Graph Attention Network
by Qian Hu, Weiping Lin, Minli Tang and Jiatao Jiang
Appl. Sci. 2022, 12(12), 5931; https://doi.org/10.3390/app12125931 - 10 Jun 2022
Cited by 3 | Viewed by 3667
Abstract
Graph neural networks are graph-based deep learning technologies that have attracted significant attention from researchers because of their powerful performance. Heterogeneous graph-based graph neural networks focus on the heterogeneity of the nodes and links in a graph. This is more effective at preserving semantic knowledge when representing data interactions in real-world graph structures. Unfortunately, most heterogeneous graph neural networks tend to transform heterogeneous graphs into homogeneous graphs when using meta-paths for representation learning. This paper therefore presents a novel motif-based hierarchical heterogeneous graph attention network algorithm, MBHAN, that addresses this problem by incorporating a hierarchical dual attention mechanism at the node-level and motif-level. Node-level attention aims to learn the importance between a node and its neighboring nodes within its corresponding motif. Motif-level attention is capable of learning the importance of different motifs in the heterogeneous graph. In view of the different vector space features of different types of nodes in heterogeneous graphs, MBHAN also aggregates the features of different types of nodes, so that they can jointly participate in downstream tasks after passing through segregated independent shallow neural networks. MBHAN’s superior network representation learning capability has been validated by extensive experiments on two real-world datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
26 pages, 439 KiB  
Review
A Review of Neural Network-Based Emulation of Guitar Amplifiers
by Tara Vanhatalo, Pierrick Legrand, Myriam Desainte-Catherine, Pierre Hanna, Antoine Brusco, Guillaume Pille and Yann Bayle
Appl. Sci. 2022, 12(12), 5894; https://doi.org/10.3390/app12125894 - 9 Jun 2022
Cited by 10 | Viewed by 5010
Abstract
Vacuum tube amplifiers present sonic characteristics frequently coveted by musicians, often due to the distinct nonlinearities of their circuits, and accurately modelling such effects can be a challenging task. The recent rise of machine learning methods has led to the ubiquity of neural networks in all fields of study, including virtual analog modelling, and a variety of architectures tailored to this task have appeared. This article aims to provide an overview of the current state of research in the neural emulation of analog distortion circuits, first presenting earlier methods in the field and then giving a complete review of the deep learning landscape that has emerged in recent years, detailing each subclass of available architectures, in order to bring to light possible future avenues of work in this field. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
19 pages, 591 KiB  
Article
An Open Relation Extraction System for Web Text Information
by Huagang Li and Bo Liu
Appl. Sci. 2022, 12(11), 5718; https://doi.org/10.3390/app12115718 - 4 Jun 2022
Cited by 2 | Viewed by 2022
Abstract
Web texts typically undergo the open-ended growth of new relations. Traditional relation extraction methods lack automatic annotation and perform poorly on new relation extraction tasks. We propose an open-domain relation extraction system (ORES) based on distant supervision and few-shot learning to solve this problem. More specifically, we utilize tBERT to design instance selector 1, implementing automatic labeling in the data mining component, and we design instance selector 2, based on K-BERT, in the new relation extraction component. The real-time data management component outputs new relational data. Experiments show that ORES can filter out higher-quality and more diverse instances for better new relation learning, achieving significant improvement over Neural Snowball with fewer seed sentences. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
15 pages, 1356 KiB  
Article
DeConNet: Deep Neural Network Model to Solve the Multi-Job Assignment Problem in the Multi-Agent System
by Jungwoo Lee, Youngho Choi and Jinho Suh
Appl. Sci. 2022, 12(11), 5454; https://doi.org/10.3390/app12115454 - 27 May 2022
Cited by 3 | Viewed by 2586
Abstract
In a multi-agent system, multi-job assignment is an optimization problem that seeks to minimize total cost. It can be generalized as a complex problem combining several variations of the vehicle routing problem, and it is NP-hard. The parameters considered include the number of agents and jobs, the loading capacity and speed of the agents, and the sequence of consecutive job positions. In this study, a deep neural network (DNN) model was developed to solve the job assignment problem in constant time regardless of the state of these parameters. To generate a large training dataset for the DNN, the problem was described in the planning domain definition language (PDDL), and the optimal solutions obtained with the PDDL solver were preprocessed into dataset samples. The DNN was constructed by concatenating fully-connected layers. The assignment solution obtained via DNN inference increased the average traveling time by at most 13% relative to the ground-truth cost; however, whereas computing the ground-truth solution required hundreds of seconds, the DNN execution time remained constant at approximately 20 ms regardless of the number of agents and jobs. Full article
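A fully-connected network of the kind described might be sketched as follows in PyTorch; the state encoding, layer sizes, and the pairwise-score output are assumptions made for illustration:

```python
import torch.nn as nn

class AssignmentNet(nn.Module):
    """MLP mapping a fixed-size encoding of the problem state (agent/job
    positions, capacities, speeds) to a score for each agent-job pair."""
    def __init__(self, state_dim, n_agents, n_jobs, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agents * n_jobs),   # one score per agent-job pair
        )
        self.n_agents, self.n_jobs = n_agents, n_jobs

    def forward(self, state):
        # Reshape flat scores into an (agents x jobs) assignment matrix per sample.
        return self.net(state).view(-1, self.n_agents, self.n_jobs)
```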
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
10 pages, 1063 KiB  
Article
A Hierarchical Representation Model Based on Longformer and Transformer for Extractive Summarization
by Shihao Yang, Shaoru Zhang, Ming Fang, Fengqin Yang and Shuhua Liu
Electronics 2022, 11(11), 1706; https://doi.org/10.3390/electronics11111706 - 27 May 2022
Cited by 4 | Viewed by 2772
Abstract
Automatic text summarization is a method used to compress documents while preserving the main idea of the original text, and it includes extractive and abstractive summarization. Extractive text summarization extracts important sentences from the original document to serve as the summary, so the document representation method is crucial for the quality of the generated summary. To represent the document effectively, we propose a hierarchical document representation model for extractive summarization, Long-Trans-Extr, which uses Longformer as the sentence encoder and a Transformer as the document encoder. The advantage of Longformer as the sentence encoder is that it can take long documents of up to 4096 tokens as input while adding relatively little computation. The proposed Long-Trans-Extr model is evaluated on three benchmark datasets: CNN (Cable News Network), DailyMail, and the combined CNN/DailyMail. It achieves 43.78 (Rouge-1) and 39.71 (Rouge-L) on CNN/DailyMail, and 33.75 (Rouge-1), 13.11 (Rouge-2), and 30.44 (Rouge-L) on the CNN dataset. These are very competitive results and show, furthermore, that our model performs particularly well on long documents such as the CNN corpus. Full article
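A hedged sketch of such a hierarchical encoder, using the Hugging Face Longformer as the token-level encoder; the sentence pooling, document-encoder depth, and scoring head below are simplified assumptions, not the paper's configuration:

```python
import torch.nn as nn
from transformers import LongformerModel

class LongTransExtr(nn.Module):
    """Longformer encodes up to 4096 tokens; a small Transformer re-encodes
    per-sentence vectors; a linear head scores each sentence for extraction."""
    def __init__(self, d_model=768):
        super().__init__()
        self.sent_encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, input_ids, attention_mask, sent_positions):
        # sent_positions: indices of sentence-boundary tokens (e.g., <s> markers).
        hidden = self.sent_encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        sent_vecs = hidden[:, sent_positions, :]   # one vector per sentence
        doc_vecs = self.doc_encoder(sent_vecs)     # document-level interactions
        return self.scorer(doc_vecs).squeeze(-1)   # extraction score per sentence
```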
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
18 pages, 837 KiB  
Article
Improved Bidirectional GAN-Based Approach for Network Intrusion Detection Using One-Class Classifier
by Wen Xu, Julian Jang-Jaccard, Tong Liu, Fariza Sabrina and Jin Kwak
Computers 2022, 11(6), 85; https://doi.org/10.3390/computers11060085 - 26 May 2022
Cited by 20 | Viewed by 4101
Abstract
Existing generative adversarial networks (GANs), primarily used for creating fake image samples from natural images, demand a strong dependency (i.e., the training of the generators and the discriminators must be kept in sync) for the generators to produce fake samples realistic enough to “fool” the discriminators. We argue that this strong dependency required for GAN training on images does not necessarily carry over to GAN models for network intrusion detection tasks, because network intrusion inputs have a simpler feature structure, with a relatively low dimension, discrete feature values, and a smaller input size than the images used in existing GAN-based anomaly detection. To address this issue, we propose a new Bidirectional GAN (Bi-GAN) model that is better equipped for network intrusion detection, with reduced overheads from excessive training. In our proposed method, the training iterations of the generator (and accordingly the encoder) are increased, separately from the training of the discriminator, until a condition on the cross-entropy loss is satisfied. Our empirical results show that this training strategy greatly improves the performance of both the generator and the discriminator, even in the presence of imbalanced classes. In addition, our model offers a new construct of a one-class classifier using the trained encoder–discriminator pair. The one-class classifier detects anomalous network traffic based on binary classification results instead of calculating expensive and complex anomaly scores (or thresholds). Our experimental results illustrate that the proposed method is highly effective for network intrusion detection tasks and outperforms other similar generative methods on two datasets: NSL-KDD and CIC-DDoS2019. Full article
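The one-class decision from the trained encoder-discriminator pair might be sketched as follows; the `encoder`/`discriminator` call signatures and the fixed threshold are assumptions for illustration:

```python
import torch

def is_anomalous(encoder, discriminator, x, threshold=0.5):
    """Binary one-class decision: the discriminator's probability on the pair
    (x, E(x)) is read directly as a normal/anomalous verdict, with no separate
    anomaly score or threshold calibration step."""
    with torch.no_grad():
        z = encoder(x)                                  # latent code for the traffic sample
        p_real = torch.sigmoid(discriminator(x, z))     # "realness" of (x, z)
    return p_real < threshold                           # low realness => anomalous
```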
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
24 pages, 5684 KiB  
Article
Optimization of Apron Support Vehicle Operation Scheduling Based on Multi-Layer Coding Genetic Algorithm
by Jichao Zhang, Xiaolei Chong, Yazhi Wei, Zheng Bi and Qingkun Yu
Appl. Sci. 2022, 12(10), 5279; https://doi.org/10.3390/app12105279 - 23 May 2022
Cited by 8 | Viewed by 2494
Abstract
The operation scheduling of apron support vehicles is an important factor affecting aircraft support capability. At present, however, traditional support methods suffer from a low utilization rate of support vehicles and low support efficiency in multi-aircraft support. In this paper, a vehicle scheduling model is constructed and a multi-layer coding genetic algorithm is designed to solve the vehicle scheduling problem. The apron support vehicle operation scheduling problem is treated as a Resource-Constrained Project Scheduling Problem (RCPSP), and the support vehicles and their support procedures are adjusted via the sequential sorting method to achieve the optimization goals of shortening the support time and improving the vehicle utilization rate. Based on a specific example, job scheduling before and after optimizing the number of support vehicles is simulated using the multi-layer coding genetic algorithm. The results show that, compared with the traditional support scheme, the vehicle scheduling time optimized via the multi-layer coding genetic algorithm is markedly shortened; after the number of vehicles is optimized, the support time is further shortened and the average utilization rate of the vehicles is improved. Finally, the optimized apron support vehicle number configuration and the best scheduling scheme are given. Full article
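A minimal sketch of a two-layer chromosome for such a multi-layer coding genetic algorithm; the encoding (job-order layer plus vehicle-assignment layer) and the duplicate-repairing crossover are illustrative, not the paper's exact design:

```python
import random

def make_chromosome(n_jobs, n_vehicles):
    """Layer 1: a permutation giving the job service order.
    Layer 2: the support vehicle assigned to each job."""
    order = random.sample(range(n_jobs), n_jobs)
    vehicle = [random.randrange(n_vehicles) for _ in range(n_jobs)]
    return order, vehicle

def crossover(parent_a, parent_b):
    """Single-point crossover applied layer by layer; the order layer uses
    order-preserving repair so each job still appears exactly once."""
    cut = random.randrange(1, len(parent_a[0]))
    head = parent_a[0][:cut]
    tail = [j for j in parent_b[0] if j not in head]      # repair duplicates
    child_vehicle = parent_a[1][:cut] + parent_b[1][cut:]
    return head + tail, child_vehicle
```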
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
13 pages, 571 KiB  
Article
Double Linear Transformer for Background Music Generation from Videos
by Xueting Yang, Ying Yu and Xiaoyu Wu
Appl. Sci. 2022, 12(10), 5050; https://doi.org/10.3390/app12105050 - 17 May 2022
Cited by 3 | Viewed by 2776
Abstract
Many music generation works have achieved effective performance, but they rarely combine music with a given video. We propose a model with two linear Transformers that generates background music for a given video. To enhance the melodic quality of the generated music, we first input note-related and rhythm-related music features separately into each Transformer network, paying particular attention to both the connection and the independence of these music features. Then, in order to generate music that matches the given video, a current state-of-the-art cross-modal inference method is set up to establish the relationship between the visual and sound modalities. Subjective and objective experiments indicate that the generated background music matches the video well and is also melodious. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
12 pages, 468 KiB  
Article
Improving Non-Autoregressive Machine Translation Using Sentence-Level Semantic Agreement
by Shuheng Wang, Heyan Huang and Shumin Shi
Appl. Sci. 2022, 12(10), 5003; https://doi.org/10.3390/app12105003 - 16 May 2022
Cited by 1 | Viewed by 1915
Abstract
The inference stage can be accelerated significantly using a Non-Autoregressive Transformer (NAT). However, the training objective used in the NAT model aims to minimize the loss between the generated words and the golden words in the reference. Since dependencies between the target words are lacking, this word-level training objective can easily cause semantic inconsistency between the generated and source sentences. To alleviate this issue, we propose a new method, Sentence-Level Semantic Agreement (SLSA), to obtain consistency between the source and generated sentences. Specifically, we utilize contrastive learning to pull the sentence representations of the source and generated sentences closer together. In addition, to strengthen the capability of the encoder, we integrate an agreement module into the encoder to obtain a better representation of the source sentence. The experiments are conducted on three translation datasets: the WMT 2014 EN → DE task, the WMT 2016 EN → RO task, and the IWSLT 2014 DE → EN task, and the improvement in the NAT model’s performance shows the effectiveness of our proposed method. Full article
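The contrastive pull between source and generated sentence representations can be sketched as an InfoNCE-style loss; this is an assumed formulation, and the paper's exact loss and positive/negative construction may differ:

```python
import torch
import torch.nn.functional as F

def slsa_loss(src_vecs, gen_vecs, temperature=0.1):
    """src_vecs, gen_vecs: (B, D) sentence representations. Each source vector
    is pulled toward its own generated vector (the diagonal) and pushed away
    from the other generated vectors in the batch."""
    src = F.normalize(src_vecs, dim=-1)
    gen = F.normalize(gen_vecs, dim=-1)
    logits = src @ gen.t() / temperature                 # (B, B) cosine similarities
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)               # diagonal pairs are positives
```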
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
14 pages, 3214 KiB  
Article
A Multivariate Temporal Convolutional Attention Network for Time-Series Forecasting
by Renzhuo Wan, Chengde Tian, Wei Zhang, Wendi Deng and Fan Yang
Electronics 2022, 11(10), 1516; https://doi.org/10.3390/electronics11101516 - 10 May 2022
Cited by 8 | Viewed by 5434
Abstract
Multivariate time-series forecasting is one of the crucial and persistent challenges in time-series forecasting tasks. As data with multivariate correlation and volatility, multivariate time series impose highly nonlinear temporal characteristics on the forecasting model. In this paper, a new multivariate time-series forecasting model, the multivariate temporal convolutional attention network (MTCAN), based on a self-attention mechanism, is proposed. MTCAN builds on the Convolutional Neural Network (CNN) model, using 1D dilated convolution as the basic unit to construct asymmetric blocks, and then performs feature extraction with the self-attention mechanism to obtain the final prediction results. The input and output lengths of this network can be determined flexibly. The method is validated on three different multivariate time-series datasets, and the reliability and accuracy of its predictions are compared with Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Long Short-Term Memory (ConvLSTM), and the Temporal Convolutional Network (TCN). The results show that the proposed model achieves significantly improved prediction accuracy and generalization. Full article
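A basic unit combining dilated 1D convolution with self-attention, as we read the architecture, might look like this in PyTorch; the dimensions, residual placement, and normalization are assumptions, not the paper's exact block:

```python
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    """Dilated 1D convolution over the time axis followed by multi-head
    self-attention, with a residual connection and layer normalization."""
    def __init__(self, channels, dilation, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)  # length-preserving
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                  # x: (B, T, C) multivariate series
        h = self.conv(x.transpose(1, 2)).transpose(1, 2).relu()
        out, _ = self.attn(h, h, h)        # self-attention for feature extraction
        return self.norm(x + out)          # residual keeps the raw series signal
```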
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)