Emerging Artificial Intelligence Technologies and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (25 September 2024) | Viewed by 17846

Special Issue Editor


Dr. Praveen Kumar Donta
Guest Editor
Department of Computer and Systems Sciences, Stockholm University, 164 55 Kista, Sweden
Interests: machine learning; artificial intelligence; learning-driven computing continuum and distributed systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is reshaping the global landscape and redefining the development of cutting-edge technologies, applications, policies, and service demands. AI systems perform tasks intelligently by learning continuously, acquiring knowledge and reasoning from past experience. This problem-solving capability is needed in many fields, now and in the future, to minimize human intervention. AI is commonly divided into strong AI, applied AI, and cognitive AI: strong AI aims to enable a machine to think and make decisions on its own; applied AI processes information to build smart systems that behave like experts; and cognitive AI, the most ambitious of the three, seeks to think and act like a human.

Over the past decade, AI has been applied across a wide range of fields, including natural language processing, computer vision, industry, robotics, ubiquitous data analytics, cloud computing, the Internet of Things, security, and medical image analysis. AI-driven technology will likely continue to improve efficiency and productivity and expand into more industries over time. Nevertheless, parts of the general public worry that AI will reduce opportunities for humans, while others hold unrealistic expectations about the changes AI will bring to real-world applications. It is essential to show how AI-driven technologies and applications can improve human life through intelligent and innovative technologies, and to develop human–AI collaboration so that these technologies are used more efficiently.

This Special Issue will delve into mutually dependent subfields including, but not limited to, machine learning, computer vision, natural language processing, deep learning, wireless sensor networks, blockchain technology, cryptography, big data, social networks, the Internet of Things, and image processing. Accepted papers will form a comprehensive collection of research and development on contemporary “Artificial Intelligence Technologies and Applications”, serving as a convenient reference that introduces both AI experts and newly arrived practitioners to the field’s current trends.

Dr. Praveen Kumar Donta
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence
  • big data analytics
  • social network analysis
  • bioinformatics
  • medical image analytics
  • Internet of Things
  • machine learning
  • cloud computing
  • cognitive science
  • blockchain technology
  • wireless sensor networks
  • data science
  • privacy and security
  • computer vision
  • augmented/virtual/mixed reality
  • decision support systems
  • distributed systems
  • computer networks
  • computing continuum systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

22 pages, 10759 KiB  
Article
Design of a Cyber-Physical System-of-Systems Architecture for Elderly Care at Home
by José Galeas, Alberto Tudela, Óscar Pons, Juan Pedro Bandera and Antonio Bandera
Electronics 2024, 13(23), 4583; https://doi.org/10.3390/electronics13234583 - 21 Nov 2024
Viewed by 230
Abstract
The idea of introducing a robot into an Ambient Assisted Living (AAL) environment to provide additional services beyond those provided by the environment itself has been explored in numerous projects. Moreover, new opportunities can arise from this symbiosis, which usually requires both systems to share the knowledge (and not just the data) they capture from the context. Thus, by using knowledge extracted from the raw data captured by the sensors deployed in the environment, the robot can know where the person is and whether he/she should perform some physical exercise, as well as whether he/she should move a chair away to allow the robot to successfully complete a task. This paper describes the design of an Ambient Assisted Living system where an IoT scheme and robot coexist as independent but connected elements, forming a cyber-physical system-of-systems architecture. The IoT environment includes cameras to monitor the person’s activity and physical position (lying down, sitting…), as well as non-invasive sensors to monitor the person’s heart or breathing rate while lying in bed or sitting in the living room. Although this manuscript focuses on how both systems handle and share the knowledge they possess about the context, a couple of example use cases are included. In the first case, the environment provides the robot with information about the positions of objects in the environment, which allows the robot to augment the metric map it uses to navigate, detecting situations that prevent it from moving to a target. If there is a person nearby, the robot will approach them to ask them to move a chair or open a door. In the second case, even more use is made of the robot’s ability to interact with the person. When the IoT system detects that the person has fallen to the ground, it passes this information to the robot so that it can go to the person, talk to them, and ask for external help if necessary. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
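As a rough illustration of the kind of knowledge sharing described above (not code from the paper), the following Python sketch shows an IoT environment passing a structured event, rather than raw sensor data, to a robot agent. The event schema, field names, and confidence value are invented for illustration only.

```python
from dataclasses import dataclass
import json

@dataclass
class ContextEvent:
    """A piece of shared knowledge emitted by the AAL/IoT side (hypothetical schema)."""
    source: str        # e.g. "bed_radar", "ceiling_camera_1"
    event_type: str    # e.g. "fall_detected", "obstacle_reported"
    location: str      # symbolic location, e.g. "living_room"
    payload: dict      # extra attributes (posture, confidence, ...)

class RobotAgent:
    """Toy subscriber standing in for the robot subsystem."""
    def handle(self, event: ContextEvent) -> None:
        if event.event_type == "fall_detected":
            print(f"Navigating to {event.location} to check on the person.")
        elif event.event_type == "obstacle_reported":
            print(f"Updating metric map: obstacle near {event.location}.")

# The IoT environment publishes knowledge (not raw sensor streams) to the robot.
robot = RobotAgent()
iot_event = ContextEvent(
    source="bed_radar",
    event_type="fall_detected",
    location="bedroom",
    payload={"posture": "lying_on_floor", "confidence": 0.93},
)
robot.handle(iot_event)
print(json.dumps(iot_event.__dict__))  # knowledge exchanged as a structured message
```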

35 pages, 15883 KiB  
Article
Bias and Cyberbullying Detection and Data Generation Using Transformer Artificial Intelligence Models and Top Large Language Models
by Yulia Kumar, Kuan Huang, Angelo Perez, Guohao Yang, J. Jenny Li, Patricia Morreale, Dov Kruger and Raymond Jiang
Electronics 2024, 13(17), 3431; https://doi.org/10.3390/electronics13173431 - 29 Aug 2024
Viewed by 1780
Abstract
Despite significant advancements in Artificial Intelligence (AI) and Large Language Models (LLMs), detecting and mitigating bias remains a critical challenge, particularly on social media platforms like X (formerly Twitter), to address the prevalent cyberbullying on these platforms. This research investigates the effectiveness of leading LLMs in generating synthetic biased and cyberbullying data and evaluates the proficiency of transformer AI models in detecting bias and cyberbullying within both authentic and synthetic contexts. The study involves semantic analysis and feature engineering on a dataset of over 48,000 sentences related to cyberbullying collected from Twitter (before it became X). Utilizing state-of-the-art LLMs and AI tools such as ChatGPT-4, Pi AI, Claude 3 Opus, and Gemini-1.5, synthetic biased, cyberbullying, and neutral data were generated to deepen the understanding of bias in human-generated data. AI models including DeBERTa, Longformer, BigBird, HateBERT, MobileBERT, DistilBERT, BERT, RoBERTa, ELECTRA, and XLNet were initially trained to classify Twitter cyberbullying data and subsequently fine-tuned, optimized, and experimentally quantized. This study focuses on intersectional cyberbullying and multilabel classification to detect both bias and cyberbullying. Additionally, it proposes two prototype applications: one that detects cyberbullying using an intersectional approach and the innovative CyberBulliedBiasedBot that combines the generation and detection of biased and cyberbullying content. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
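For readers who want to see what multilabel classification with a transformer encoder looks like in practice, here is a minimal sketch using the Hugging Face Transformers API. It is not the authors' pipeline: the DistilBERT checkpoint, the two-label scheme ([bias, cyberbullying]), and the example sentences are assumptions for illustration, and the classification head is untrained, so outputs are only meaningful after fine-tuning.

```python
# pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased"   # any of the encoders listed in the abstract could be swapped in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=2,                                # assumed labels: [bias, cyberbullying]
    problem_type="multi_label_classification",   # sigmoid + BCE instead of softmax
)

texts = [
    "You are all idiots and nobody wants you here.",
    "Great match yesterday, well played!",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits   # shape: (batch, 2)
probs = torch.sigmoid(logits)        # independent probability per label
print(probs)                         # meaningful only after fine-tuning on labelled data
```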

31 pages, 402 KiB  
Article
Hidden Variable Models in Text Classification and Sentiment Analysis
by Pantea Koochemeshkian, Eddy Ihou Koffi and Nizar Bouguila
Electronics 2024, 13(10), 1859; https://doi.org/10.3390/electronics13101859 - 10 May 2024
Cited by 1 | Viewed by 1108
Abstract
In this paper, we are proposing extensions to the multinomial principal component analysis (MPCA) framework, which is a Dirichlet (Dir)-based model widely used in text document analysis. The MPCA is a discrete analogue to the standard PCA (it operates on continuous data using Gaussian distributions). With the extensive use of count data in modeling nowadays, the current limitations of the Dir prior (independent assumption within its components and very restricted covariance structure) tend to prevent efficient processing. As a result, we are proposing some alternatives with flexible priors such as generalized Dirichlet (GD) and Beta-Liouville (BL), leading to GDMPCA and BLMPCA models, respectively. Besides using these priors as they generalize the Dir, importantly, we also implement a deterministic method that uses variational Bayesian inference for the fast convergence of the proposed algorithms. Additionally, we use collapsed Gibbs sampling to estimate the model parameters, providing a computationally efficient method for inference. These two variational models offer higher flexibility while assigning each observation to a distinct cluster. We create several multitopic models and evaluate their strengths and weaknesses using real-world applications such as text classification and sentiment analysis. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
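The GDMPCA and BLMPCA models are not available as an off-the-shelf package, so as a point of reference for the family of Dirichlet-based count-data models the abstract discusses, the sketch below fits the standard Dirichlet-prior topic model (LDA, a close relative of MPCA) to a toy bag-of-words corpus with scikit-learn. The documents, topic count, and settings are illustrative assumptions.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the movie was wonderful and the acting superb",
    "terrible plot and poor acting ruined the film",
    "stock prices fell sharply amid market uncertainty",
    "investors worry about inflation and interest rates",
]

# Bag-of-words counts: the kind of discrete data MPCA-style models operate on.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Standard Dirichlet-prior topic model fitted by variational inference, two latent topics.
lda = LatentDirichletAllocation(n_components=2, learning_method="batch", random_state=0)
doc_topics = lda.fit_transform(X)    # per-document topic proportions
print(doc_topics.round(2))

terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```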

14 pages, 2920 KiB  
Article
Zero-FVeinNet: Optimizing Finger Vein Recognition with Shallow CNNs and Zero-Shuffle Attention for Low-Computational Devices
by Nghi C. Tran, Bach-Tung Pham, Vivian Ching-Mei Chu, Kuo-Chen Li, Phuong Thi Le, Shih-Lun Chen, Aufaclav Zatu Kusuma Frisky, Yung-Hui Li and Jia-Ching Wang
Electronics 2024, 13(9), 1751; https://doi.org/10.3390/electronics13091751 - 1 May 2024
Viewed by 1257
Abstract
In the context of increasing reliance on mobile devices, robust personal security solutions are critical. This paper presents Zero-FVeinNet, an innovative, lightweight convolutional neural network (CNN) tailored for finger vein recognition on mobile and embedded devices, which are typically resource-constrained. The model integrates cutting-edge features such as Zero-Shuffle Coordinate Attention and a blur pool layer, enhancing architectural efficiency and recognition accuracy under various imaging conditions. A notable reduction in computational demands is achieved through an optimized design involving only 0.3 M parameters, thereby enabling faster processing and reduced energy consumption, which is essential for mobile applications. An empirical evaluation on several leading public finger vein datasets demonstrates that Zero-FVeinNet not only outperforms traditional biometric systems in speed and efficiency but also establishes new standards in biometric identity verification. The Zero-FVeinNet achieves a Correct Identification Rate (CIR) of 99.9% on the FV-USM dataset, with a similarly high accuracy on other datasets. This paper underscores the potential of Zero-FVeinNet to significantly enhance security features on mobile devices by merging high accuracy with operational efficiency, paving the way for advanced biometric verification technologies. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
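The Zero-Shuffle Coordinate Attention block is specific to the paper and is not reproduced here, but the blur-pool idea (anti-aliased downsampling) is standard and easy to sketch. The PyTorch code below shows a fixed-binomial-filter blur pool inside a deliberately tiny CNN; the layer sizes, input resolution, and class count are assumptions chosen only to keep the network small, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Anti-aliased downsampling: fixed 3x3 binomial filter, then stride-2 subsampling."""
    def __init__(self, channels: int):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        # One identical filter per channel (depthwise convolution), stored as a buffer.
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).contiguous())
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=2, groups=self.channels)

class TinyVeinNet(nn.Module):
    """A deliberately small CNN in the spirit of a ~0.3M-parameter recognizer (illustrative only)."""
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), BlurPool2d(16),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), BlurPool2d(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyVeinNet(num_classes=123)           # e.g. number of enrolled identities (assumed)
logits = model(torch.randn(2, 1, 64, 128))     # grayscale finger-vein crops (assumed size)
print(logits.shape)                            # torch.Size([2, 123])
```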

18 pages, 1456 KiB  
Article
Self-Supervised Hypergraph Learning for Knowledge-Aware Social Recommendation
by Munan Li, Jialong Li, Liping Yang and Qi Ding
Electronics 2024, 13(7), 1306; https://doi.org/10.3390/electronics13071306 - 31 Mar 2024
Cited by 2 | Viewed by 1044
Abstract
Social recommendations typically utilize social relationships and past behaviors to predict users’ preferences. In real-world scenarios, the connections between users and items often extend beyond simple pairwise relationships. Leveraging hypergraphs to capture high-order relationships provides a novel perspective to social recommendation. However, effectively modeling these high-order relationships is challenging due to limited external knowledge and noisy feedback. To tackle these challenges, we propose a novel framework called self-supervised hypergraph learning for knowledge-aware social recommendation (SHLKR). In SHLKR, we incorporated three main types of connections: behavior, social, and attribute context relationships. These dependencies serve as the basis for defining hyperedges in the hypergraphs. A dual-channel hypergraph structure is created based on these relationships. Then, the hypergraph convolution is applied to model the high-order interactions between users and items. Additionally, we adopted a self-supervised learning task to maximize the consistency between different views. It helps to mitigate the model’s sensitivity to noisy feedback. We evaluated the performance of SHLKR through extensive experiments on publicly available datasets. The results demonstrate that leveraging hypergraphs for modeling can better capture the complexity and diversity of user preferences and interactions. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
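For readers unfamiliar with hypergraph convolution, the sketch below implements the standard spectral-style layer X' = ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta) on a toy incidence matrix. It is a generic illustration of the operation the abstract refers to, not the SHLKR model; the hyperedges, feature sizes, and weights are invented.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution layer.
    X: (n_nodes, d_in) node features, H: (n_nodes, n_edges) incidence matrix,
    Theta: (d_in, d_out) learnable weights, edge_w: optional hyperedge weights."""
    n_nodes, n_edges = H.shape
    w = edge_w if edge_w is not None else np.ones(n_edges)
    W = np.diag(w)
    dv = (H * w).sum(axis=1)                 # weighted node degrees
    de = H.sum(axis=0)                       # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)    # ReLU

# Toy example: 4 users/items, 2 hyperedges (say, one behavioural and one social group).
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 1]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))
Theta = np.random.default_rng(1).normal(size=(8, 4))
print(hypergraph_conv(X, H, Theta).shape)    # (4, 4)
```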

23 pages, 5274 KiB  
Article
Using Sensor Fusion and Machine Learning to Distinguish Pedestrians in Artificial Intelligence-Enhanced Crosswalks
by José Manuel Lozano Domínguez, Manuel Joaquín Redondo González, Jose Miguel Davila Martin and Tomás de J. Mateo Sanguino
Electronics 2023, 12(23), 4718; https://doi.org/10.3390/electronics12234718 - 21 Nov 2023
Cited by 4 | Viewed by 2136
Abstract
Pedestrian safety is a major concern in urban areas, and crosswalks are one of the most critical locations where accidents can occur. This research introduces an intelligent crosswalk, employing sensor fusion and machine learning techniques to distinguish the presence of pedestrians and drivers. Upon detecting a pedestrian, the system proactively activates a warning light signal. This approach aims to quickly alert nearby people and mitigate potential dangers, thereby strengthening pedestrian safety. The system integrates data from radio detection and ranging sensors and a magnetic field sensor, using a hierarchical classifier. The One-Class support vector machine algorithm is used to classify objects in the radio detection and ranging data, while fuzzy logic is used to filter out targets from the magnetic field sensor. Additionally, this work presents a novel method for the manufacture of the road signaling system, using mixtures of resins, aggregates, and reinforcing fibers that are cold-injected into an aluminum mold. The mechanical, optical, and electrical characteristics were subjected to standardized tests, validating its autonomous operation in real-world conditions. The results revealed the system’s effectiveness in detecting pedestrians with a 99.11% accuracy and a 0.0% false-positive rate, marking a substantial improvement over the previous fuzzy logic-based system with an 81.33% accuracy. Attitude testing revealed a significant 33.33% reduction in pedestrian erratic behavior and a substantial decrease in driver speed (32.83% during the day and 70.6% during the night) compared to conventional crossings. Consequently, this comprehensive work offers a unique solution to pedestrian safety at crosswalks by showcasing the potential of machine learning techniques, particularly the One-Class support vector machine algorithm, in advancing road safety through precise and reliable pattern recognition. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
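As a generic illustration of the One-Class SVM stage described above (not the authors' trained system), the sketch below fits scikit-learn's OneClassSVM on synthetic "pedestrian-like" radar features and flags a dissimilar return as an outlier. The feature set, distribution parameters, and nu/gamma values are assumptions.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)

# Synthetic stand-in for pedestrian radar features, e.g. [echo amplitude, range (m), radial speed (m/s)].
# In the real system these would come from the radio detection and ranging front end.
pedestrians = rng.normal(loc=[0.8, 3.0, 1.2], scale=[0.1, 0.8, 0.3], size=(300, 3))

# One-Class SVM learns the support of the pedestrian class only.
detector = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
detector.fit(pedestrians)

queries = np.array([
    [0.82, 2.5, 1.0],    # pedestrian-like return
    [2.50, 9.0, 12.0],   # fast, strong return (vehicle-like)
])
print(detector.predict(queries))   # +1 = pedestrian-like, -1 = outlier (handled by the next stage)
```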

19 pages, 1549 KiB  
Article
Enhancing Fashion Classification with Vision Transformer (ViT) and Developing Recommendation Fashion Systems Using DINOVA2
by Hadeer M. Abd Alaziz, Hela Elmannai, Hager Saleh, Myriam Hadjouni, Ahmed M. Anter, Abdelrahim Koura and Mohammed Kayed
Electronics 2023, 12(20), 4263; https://doi.org/10.3390/electronics12204263 - 15 Oct 2023
Cited by 2 | Viewed by 3639
Abstract
As e-commerce platforms grow, consumers increasingly purchase clothes online; however, they often need clarification on clothing choices. Consumers and stores interact through the clothing recommendation system. A recommendation system can help customers to find clothing that they are interested in and can improve turnover. This work has two main goals: enhancing fashion classification and developing a fashion recommendation system. The main objective of fashion classification is to apply a Vision Transformer (ViT) to enhance performance. ViT is a set of transformer blocks; each transformer block consists of two layers: a multi-head self-attention layer and a multilayer perceptron (MLP) layer. The hyperparameters of ViT are configured based on the fashion images dataset. CNN models have different layers, including multi-convolutional layers, multi-max pooling layers, multi-dropout layers, multi-fully connected layers, and batch normalization layers. Furthermore, ViT is compared with different models, i.e., deep CNN models, VGG16, DenseNet-121, Mobilenet, and ResNet50, using different evaluation methods and two fashion image datasets. The ViT model performs the best on the Fashion-MNIST dataset (accuracy = 95.25, precision = 95.20, recall = 95.25, F1-score = 95.20). ViT records the highest performance compared to other models in the fashion product dataset (accuracy = 98.53, precision = 98.42, recall = 98.53, F1-score = 98.46). A recommendation fashion system is developed using Learning Robust Visual Features without Supervision (DINOv2) and a nearest neighbor search that is built in the FAISS library to obtain the top five similarity results for specific images. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
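The recommendation step, embedding catalogue images and retrieving the top five nearest neighbours with FAISS, can be sketched independently of the backbone. Below, random vectors stand in for DINOv2 embeddings (the 384-dimensional size matches the ViT-S/14 variant); the flat index type, cosine-via-inner-product normalisation, and catalogue size are illustrative choices, not the authors' exact setup.

```python
# pip install faiss-cpu numpy
import numpy as np
import faiss

d = 384                                    # embedding size of DINOv2 ViT-S/14
rng = np.random.default_rng(0)

# Stand-in for DINOv2 embeddings of catalogue images; in practice these come from the vision backbone.
catalogue = rng.normal(size=(10_000, d)).astype("float32")
faiss.normalize_L2(catalogue)              # cosine similarity via inner product on unit vectors

index = faiss.IndexFlatIP(d)               # exact inner-product search
index.add(catalogue)

query = rng.normal(size=(1, d)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)       # top-5 most similar catalogue items
print(ids[0], scores[0])
```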

17 pages, 5455 KiB  
Article
Improving the Detection and Positioning of Camouflaged Objects in YOLOv8
by Tong Han, Tieyong Cao, Yunfei Zheng, Lei Chen, Yang Wang and Bingyang Fu
Electronics 2023, 12(20), 4213; https://doi.org/10.3390/electronics12204213 - 11 Oct 2023
Cited by 4 | Viewed by 2755
Abstract
Camouflaged objects can be perfectly hidden in the surrounding environment by designing their texture and color. Existing object detection models have high false-negative rates and inaccurate localization for camouflaged objects. To resolve this, we improved the YOLOv8 algorithm based on feature enhancement. In the feature extraction stage, an edge enhancement module was built to enhance the edge feature. In the feature fusion stage, multiple asymmetric convolution branches were introduced to obtain larger receptive fields and achieve multi-scale feature fusion. In the post-processing stage, the existing non-maximum suppression algorithm was improved to address the issue of missed detection caused by overlapping boxes. Additionally, a shape-enhanced data augmentation method was designed to enhance the model’s shape perception of camouflaged objects. Experimental evaluations were carried out on camouflaged object datasets, including COD and CAMO, which are publicly accessible. The improved method exhibits enhancements in detection performance by 8.3% and 9.1%, respectively, compared to the YOLOv8 model. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
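The paper's exact modification to non-maximum suppression is not reproduced here; as a general illustration of how NMS can be softened to reduce missed detections among overlapping boxes, the sketch below implements Gaussian Soft-NMS in NumPy. The sigma, score threshold, and toy boxes are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay scores of overlapping boxes instead of discarding them."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        if not idxs:
            break
        rest = np.array(idxs)
        overlaps = iou(boxes[best], boxes[rest])
        scores[rest] *= np.exp(-(overlaps ** 2) / sigma)   # soft decay instead of hard removal
        idxs = [i for i in rest if scores[i] > score_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.90, 0.85, 0.75])
print(soft_nms(boxes, scores))   # heavily overlapping boxes are down-weighted, not necessarily dropped
```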

17 pages, 3726 KiB  
Article
New Hybrid Graph Convolution Neural Network with Applications in Game Strategy
by Hanyue Xu, Kah Phooi Seng and Li-Minn Ang
Electronics 2023, 12(19), 4020; https://doi.org/10.3390/electronics12194020 - 24 Sep 2023
Cited by 1 | Viewed by 1988
Abstract
Deep convolutional neural networks (DCNNs) have enjoyed much success in many applications, such as computer vision, automated medical diagnosis, autonomous systems, etc. Another application of DCNNs is for game strategies, where the deep neural network architecture can be used to directly represent and learn strategies from expert players on different sides. Many game states can be expressed not only as a matrix data structure suitable for DCNN training but also as a graph data structure. Most of the available DCNN methods ignore the territory characteristics of both sides’ positions based on the game rules. Therefore, in this paper, we propose a hybrid approach to the graph neural network to extract the features of the model of game-playing strategies and fuse it into a DCNN. As a graph learning model, graph convolutional networks (GCNs) provide a scheme by which to extract the features in a graph structure, which can better extract the features in the relationship between the game-playing strategies. We validate the work and design a hybrid network to integrate GCNs and DCNNs in the game of Go and show that on the KGS Go dataset, the performance of the hybrid model outperforms the traditional DCNN model. The hybrid model demonstrates a good performance in extracting the game strategy of Go. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)
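For readers new to graph convolutional networks, the sketch below applies one standard GCN layer (Kipf–Welling normalisation) to a toy 3x3 board encoded as a grid graph, producing node features of the kind that could be fused with DCNN feature maps. The board encoding, feature dimensions, and weights are invented for illustration and are not the authors' hybrid model.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: H = ReLU( D^-1/2 (A + I) D^-1/2 X W )."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy 3x3 board as a grid graph: each intersection is a node, edges link orthogonal neighbours;
# node features could encode stone colour, liberties, etc.
n = 3
coords = [(r, c) for r in range(n) for c in range(n)]
A = np.zeros((n * n, n * n))
for i, (r1, c1) in enumerate(coords):
    for j, (r2, c2) in enumerate(coords):
        if abs(r1 - r2) + abs(c1 - c2) == 1:
            A[i, j] = 1.0

rng = np.random.default_rng(0)
X = rng.normal(size=(n * n, 4))    # 4 toy features per intersection
W = rng.normal(size=(4, 8))
H = gcn_layer(A, X, W)             # graph features that could be fused with DCNN feature maps
print(H.shape)                     # (9, 8)
```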
