Computers, Volume 13, Issue 10 (October 2024) – 30 articles

Cover Story: Mental workload, visuospatial processes and physiological arousal are highly intertwined phenomena crucial for achieving optimal performance and improved mental health. This study investigates the relationship between these phenomena and performance, in a virtual reality (VR)-based TETRIS game with 25 participants. Multimodal data were recorded using a physiological-computing VR headset. Our findings reveal distinct patterns in EEG and cardiac activity that correlate with changes in task difficulty, in-game performance and subjective feelings of relief after a helper intervention. This study highlights the importance of multimodal physiological recording in rich environments and suggests the plausibility of workload optimization in favor of overall mental health and well-being.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 2151 KiB  
Article
CAD Sensitization, an Easy Way to Integrate Artificial Intelligence in Shipbuilding
by Arturo Benayas-Ayuso, Rodrigo Perez Fernandez and Francisco Perez-Arribas
Computers 2024, 13(10), 273; https://doi.org/10.3390/computers13100273 - 21 Oct 2024
Viewed by 652
Abstract
There are two main areas in which the Internet of Ships (IoS) can help: firstly, the production stage, in all its phases, from material bids to manufacture, and secondly, the operation of the ship. Intelligent ship management requires a lot of information, as does the shipbuilding process. In these two phases of the ship’s life cycle, IoS acts as a key to the keyhole. IoS tools include sensors, process information and real-time decision-making, fog computing, or delegated processes in the cloud. The key point to address this challenge is the design phase. Getting the design process right will help in both areas, reducing costs and making agile use of technology to achieve a highly efficient and optimal outcome. But this raises a lot of new questions that need to be addressed: At what stage should we start adding control sensors? Which sensors are best suited to our solution? Is there anything that offers more than simple identification? As we begin the process of answering all these questions, we realize that a Computer Aided Design (CAD) tool, as well as Artificial Intelligence (AI), mixed in a single tool, could significantly help in all these processes. AI combined with specialized CAD tools can enhance the sensitization phases in the shipbuilding process to improve results throughout the ship’s life cycle. This is the base of the framework developed in this paper. Full article
(This article belongs to the Special Issue Artificial Intelligence in Industrial IoT Applications)
40 pages, 4555 KiB  
Article
A Novel Data Analytics Methodology for Discovering Behavioral Risk Profiles: The Case of Diners During a Pandemic
by Thouraya Gherissi Labben and Gurdal Ertek
Computers 2024, 13(10), 272; https://doi.org/10.3390/computers13100272 - 19 Oct 2024
Viewed by 1027
Abstract
Understanding tourist profiles and behaviors during health pandemics is key to better preparedness for unforeseen future outbreaks, particularly for tourism and hospitality businesses. This study develops and applies a novel data analytics methodology to gain insights into the health risk reduction behavior of restaurant diners/patrons during their dining out experiences in a pandemic. The methodology builds on data relating to four constructs (question categories) and measurements (questions and attributes), with the constructs being worry, health risk prevention behavior, health risk reduction behavior, and demographic characteristics. As a unique contribution, the methodology generates a behavioral typology by identifying risk profiles, which are expressed as one- and two-level decision rules. For example, the results highlighted the significance of restaurants’ adherence to cautionary measures and diners’ perception of seclusion. These and other factors enable a multifaceted analysis, typology, and understanding of diners’ risk profiles, offering valuable guidance for developing managerial strategies and skill development programs to promote safer dining experiences during pandemics. Besides yielding novel types of insights through rules, another practical contribution of the research is the development of a public web-based analytics dashboard for interactive insight discovery and decision support. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
34 pages, 5078 KiB  
Systematic Review
Context-Aware Embedding Techniques for Addressing Meaning Conflation Deficiency in Morphologically Rich Languages Word Embedding: A Systematic Review and Meta Analysis
by Mosima Anna Masethe, Hlaudi Daniel Masethe and Sunday O. Ojo
Computers 2024, 13(10), 271; https://doi.org/10.3390/computers13100271 - 17 Oct 2024
Viewed by 742
Abstract
This systematic literature review aims to evaluate and synthesize the effectiveness of various embedding techniques—word embeddings, contextual word embeddings, and context-aware embeddings—in addressing Meaning Conflation Deficiency (MCD). Using the PRISMA framework, this study assesses the current state of research and provides insights into the impact of these techniques on resolving meaning conflation issues. A thorough literature search identified 403 articles on the subject, and a screening and selection process resulted in the inclusion of 25 studies in the meta-analysis. The evaluation adhered to the PRISMA principles, guaranteeing a methodical and transparent process. To estimate effect sizes and evaluate heterogeneity and publication bias among the chosen papers, meta-analytic approaches were utilized, including tau-squared (τ2), the between-study variance in a random-effects model; H-squared (H2), a measure of heterogeneity; and I-squared (I2), which quantifies the proportion of total variation attributable to heterogeneity. The meta-analysis demonstrated a high degree of variation in effect sizes among the studies, with a τ2 value of 8.8724. The significant degree of heterogeneity was further emphasized by the H2 score of 8.10 and the I2 value of 87.65%. To account for publication bias, a trim and fill analysis was performed, yielding a beta value of 5.95, a standard error of 4.767, a Z-value of 1.25, and a p-value of 0.2. The results point to a sizable effect size, but the estimates are highly uncertain, as evidenced by the large standard error and non-significant p-value.
The review concludes that although context-aware embeddings show promise in addressing Meaning Conflation Deficiency, there is a great deal of variability and uncertainty in the available data. The varied findings among studies are highlighted by the large τ2, I2, and H2 values, and the trim and fill analysis shows that adjusting for publication bias does not alter the non-significance of the effect size. To generate more trustworthy insights, future research should concentrate on enhancing methodological consistency, investigating other embedding strategies, and extending analysis across various languages and contexts. Even though the results demonstrate a sizable effect size in addressing MCD through sophisticated word embedding techniques, such as context-aware embeddings, there is still a great deal of variability and uncertainty owing to various factors, including the different languages studied, the sizes of the corpora, and the embedding techniques used. These differences show why future research methods must be standardized to guarantee that study results can be compared with one another. The results emphasize how crucial it is to extend the linguistic scope to more morphologically rich and low-resource languages, where MCD is especially challenging. The creation of language-specific models for low-resource languages is one practical way to increase performance and consistency across Natural Language Processing (NLP) applications. By taking these actions, we can advance our understanding of MCD, which will ultimately improve the performance of NLP systems in a variety of language circumstances. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
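The heterogeneity statistics this review reports (τ2, H2, I2) follow standard meta-analytic definitions, which can be sketched as below. This is an illustrative sketch only: the effect sizes and variances are made up and are not the review's data.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q, H^2, I^2, and a DerSimonian-Laird tau^2 from
    per-study effect sizes and within-study variances."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # fixed-effect weights
    mu = np.sum(w * effects) / np.sum(w)          # pooled estimate
    Q = np.sum(w * (effects - mu) ** 2)           # Cochran's Q
    df = len(effects) - 1
    H2 = Q / df                                   # H-squared
    I2 = 0.0 if Q == 0 else max(0.0, (Q - df) / Q) * 100.0  # in percent
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    return Q, H2, I2, tau2
```

With highly dispersed effects, I2 approaches 100% and H2 grows well above 1, the pattern behind the review's I2 of 87.65% and H2 of 8.10.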
24 pages, 2613 KiB  
Review
Intelligent Tutoring Systems in Mathematics Education: A Systematic Literature Review Using the Substitution, Augmentation, Modification, Redefinition Model
by Taekwon Son
Computers 2024, 13(10), 270; https://doi.org/10.3390/computers13100270 - 15 Oct 2024
Viewed by 1023
Abstract
Scholars have claimed that artificial intelligence can be used in education to transform learning. However, there is insufficient evidence on whether intelligent tutoring systems (ITSs), a representative form of artificial intelligence in education, have transformed the teaching and learning of mathematics. To fill this gap, this systematic review was conducted to examine empirical studies from 2003 to 2023 that used ITSs in mathematics education. Technology integration was coded using the substitution, augmentation, modification, redefinition (SAMR) model, which was extended to suit ITSs in a mathematics education context. The review also examined how different contexts and teacher roles are intertwined with SAMR levels. The results show that while ITSs in mathematics education primarily augmented existing learning, recent ITS studies have transformed students’ learning experiences. ITSs were most commonly applied at the elementary school level, and most ITS studies focused on the areas of number and arithmetic, algebra, and geometry. The level of SAMR varied depending on the research purpose, and ITS studies in mathematics education were mainly conducted in a way that minimized teacher intervention. The results of this study suggest that the affordance of an ITS, the educational context, and the teacher’s role should be considered simultaneously to demonstrate the transformative power of ITSs in mathematics education. Full article
(This article belongs to the Special Issue Smart Learning Environments)
18 pages, 5170 KiB  
Article
An Efficient Detection Mechanism of Network Intrusions in IoT Environments Using Autoencoder and Data Partitioning
by Yiran Xiao, Yaokai Feng and Kouichi Sakurai
Computers 2024, 13(10), 269; https://doi.org/10.3390/computers13100269 - 14 Oct 2024
Viewed by 985
Abstract
In recent years, with the development of the Internet of Things and distributed computing, the “server-edge device” architecture has been widely deployed. This study focuses on leveraging autoencoder technology to address the binary classification problem in network intrusion detection, aiming to develop a lightweight model suitable for edge devices. Traditional intrusion detection models face two main challenges when directly ported to edge devices: inadequate computational resources to support large-scale models and the need to improve the accuracy of simpler models. To tackle these issues, this research utilizes the Extreme Learning Machine for its efficient training speed and compact model size to implement autoencoders. Two improvements over the latest related work are proposed: First, to improve data purity and ultimately enhance detection performance, the data are partitioned into multiple regions based on the prediction results of these autoencoders. Second, autoencoder characteristics are leveraged to further investigate the data within each region. We used the public dataset NSL-KDD to test the behavior of the proposed mechanism. The experimental results show that when dealing with multi-class attacks, the model’s performance was significantly improved, and the accuracy and F1-Score were improved by 3.5% and 2.9%, respectively, maintaining its lightweight nature. Full article
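The core idea described above—train an autoencoder on benign traffic and flag inputs that reconstruct poorly—can be sketched with an Extreme Learning Machine-style autoencoder: a fixed random hidden layer with closed-form (ridge) output weights, which is what makes ELM training fast. This is a toy sketch on synthetic data; the layer sizes, regularization, and data are assumptions, not the paper's model or its NSL-KDD pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELMAutoencoder:
    """ELM autoencoder: random hidden weights stay fixed; output
    weights are solved in closed form. Anomaly score = reconstruction error."""
    def __init__(self, n_hidden=32, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def fit(self, X):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))   # random, untrained
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # ridge-regularized least squares for the output weights
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ X)
        return self

    def score(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.mean((H @ self.beta - X) ** 2, axis=1)  # per-sample error

# train on "normal" traffic only; out-of-distribution attacks
# should reconstruct poorly and receive high scores
normal = rng.normal(0, 1, size=(500, 10))
attacks = rng.normal(5, 1, size=(50, 10))
ae = ELMAutoencoder().fit(normal)
```

Thresholding these scores (and, per the paper's idea, partitioning the data by the autoencoders' predictions) turns the reconstruction error into a binary intrusion decision.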
20 pages, 4520 KiB  
Article
Employing Different Algorithms of Lightweight Convolutional Neural Network Models in Image Distortion Classification
by Ismail Taha Ahmed, Falah Amer Abdulazeez and Baraa Tareq Hammad
Computers 2024, 13(10), 268; https://doi.org/10.3390/computers13100268 - 12 Oct 2024
Viewed by 897
Abstract
The majority of applications use automatic image recognition technologies to carry out a range of tasks. Therefore, it is crucial to identify and classify image distortions to improve image quality. Despite efforts in this area, there are still many challenges in accurately and reliably classifying distorted images. In this paper, we offer a comprehensive analysis of models of both non-lightweight and lightweight deep convolutional neural networks (CNNs) for the classification of distorted images. Subsequently, an effective method is proposed to enhance the overall performance of distortion image classification. This method involves selecting features from the pretrained models’ capabilities and using a strong classifier. The experiments utilized the kadid10k dataset to assess the effectiveness of the results. The K-nearest neighbor (KNN) classifier showed better performance than the naïve classifier in terms of accuracy, precision, error rate, recall and F1 score. Additionally, SqueezeNet outperformed other deep CNN models, both lightweight and non-lightweight, across every evaluation metric. The experimental results demonstrate that combining SqueezeNet with KNN can effectively and accurately classify distorted images into the correct categories. The proposed SqueezeNet-KNN method achieved an accuracy rate of 89%. As detailed in the results section, the proposed method outperforms state-of-the-art methods in accuracy, precision, error, recall, and F1 score measures. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
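The pretrained-features-plus-classifier pipeline the abstract describes can be sketched as follows. Random, class-shifted vectors stand in for actual SqueezeNet features here, and the dimensions and neighbor count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# stand-in for SqueezeNet features: 200 "images", 512-dim vectors,
# 4 distortion classes made separable by a class-dependent offset
X = rng.normal(size=(200, 512))
y = rng.integers(0, 4, size=200)
X += y[:, None] * 2.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)   # accuracy on held-out "images"
```

In the paper's setting, the feature matrix would come from a pretrained SqueezeNet applied to kadid10k images rather than from a random generator.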
25 pages, 9538 KiB  
Article
Internet of Things-Driven Precision in Fish Farming: A Deep Dive into Automated Temperature, Oxygen, and pH Regulation
by Md. Naymul Islam Nayoun, Syed Akhter Hossain, Karim Mohammed Rezaul, Kazy Noor e Alam Siddiquee, Md. Shabiul Islam and Tajnuva Jannat
Computers 2024, 13(10), 267; https://doi.org/10.3390/computers13100267 - 12 Oct 2024
Viewed by 1309
Abstract
The research introduces a revolutionary Internet of Things (IoT)-based system for fish farming, designed to significantly enhance efficiency and cost-effectiveness. By integrating the NodeMcu12E ESP8266 microcontroller, this system automates the management of critical water quality parameters such as pH, temperature, and oxygen levels, essential for fostering optimal fish growth conditions and minimizing mortality rates. The core of this innovation lies in its intelligent monitoring and control mechanism, which not only supports accelerated fish development but also ensures the robustness of the farming process through automated adjustments whenever the monitored parameters deviate from desired thresholds. This smart fish farming solution features an Arduino IoT cloud-based framework, offering a user-friendly web interface that enables fish farmers to remotely monitor and manage their operations from any global location. This aspect of the system emphasizes the importance of efficient information management and the transformation of sensor data into actionable insights, thereby reducing the need for constant human oversight and significantly increasing operational reliability. The autonomous functionality of the system is a key highlight, designed to persist in adjusting the environmental conditions within the fish farm until the optimal parameters are restored. This capability greatly diminishes the risks associated with manual monitoring and adjustments, allowing even those with limited expertise in aquaculture to achieve high levels of production efficiency and sustainability. By leveraging data-driven technologies and IoT innovations, this study not only addresses the immediate needs of the fish farming industry but also contributes to solving the broader global challenge of protein production. 
It presents a scalable and accessible approach to modern aquaculture, empowering stakeholders to maximize output and minimize risks associated with fish farming, thereby paving the way for a more sustainable and efficient future in the global food supply. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
18 pages, 827 KiB  
Article
Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development
by Tahir Rashid, Inam Illahi, Qasim Umer, Muhammad Arfan Jaffar, Waheed Yousuf Ramay and Hanadi Hakami
Computers 2024, 13(10), 266; https://doi.org/10.3390/computers13100266 - 12 Oct 2024
Viewed by 546
Abstract
Crowdsourcing Software Development (CSD) platforms, such as TopCoder, function as intermediaries connecting clients with developers. Despite employing systematic methodologies, these platforms frequently encounter high task abandonment rates, with approximately 19% of projects failing to meet satisfactory outcomes. Although existing research has focused on task scheduling, developer recommendations, and reward mechanisms, there has been insufficient attention to the support of platform moderators, or copilots, who are essential to project success. A critical responsibility of copilots is estimating project duration; however, manual predictions often lead to inconsistencies and delays. This paper introduces an innovative machine learning approach designed to automate the prediction of project duration on CSD platforms. Utilizing historical data from TopCoder, the proposed method extracts pertinent project attributes and preprocesses textual data through Natural Language Processing (NLP). Bidirectional Encoder Representations from Transformers (BERT) are employed to convert textual information into vectors, which are then analyzed using various machine learning algorithms. Zero-shot learning algorithms exhibit superior performance, with an average accuracy of 92.76%, precision of 92.76%, recall of 99.33%, and an F-measure of 95.93%. The implementation of the proposed automated duration prediction model is crucial for enhancing the success rate of crowdsourcing projects, optimizing resource allocation, managing budgets effectively, and improving stakeholder satisfaction. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
30 pages, 566 KiB  
Article
Area–Time-Efficient High-Radix Modular Inversion Algorithm and Hardware Implementation for ECC over Prime Fields
by Yamin Li
Computers 2024, 13(10), 265; https://doi.org/10.3390/computers13100265 - 12 Oct 2024
Viewed by 706
Abstract
Elliptic curve cryptography (ECC) is widely used for secure communications, because it can provide the same level of security as RSA with a much smaller key size. In constrained environments, it is important to consider efficiency, in terms of execution time and hardware costs. Modular inversion is a key time-consuming calculation used in ECC. Its hardware implementation requires extensive hardware resources, such as lookup tables and registers. We investigate the state-of-the-art modular inversion algorithms, and evaluate the performance and cost of the algorithms and their hardware implementations. We then propose a high-radix modular inversion algorithm aimed at reducing the execution time and hardware costs. We present a detailed radix-8 hardware implementation based on 256-bit primes in Verilog HDL and compare its cost performance to other implementations. Our implementation on the Altera Cyclone V FPGA chip used 1227 ALMs (adaptive logic modules) and 1037 registers. The modular inversion calculation took 3.67 ms. The AT (area–time) factor was 8.30, outperforming the other implementations. We also present an implementation of ECC using the proposed radix-8 modular inversion algorithm. The implementation results also showed that our modular inversion algorithm was more efficient in area–time than the other algorithms. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
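The operation being accelerated—modular inversion over a prime field—can be sketched in software with the extended Euclidean algorithm. This is a plain reference version for clarity, not the paper's high-radix hardware algorithm, which restructures the iteration to process several bits per step.

```python
def mod_inverse(a, p):
    """Compute a^{-1} mod p via the extended Euclidean algorithm.
    p is assumed prime, so the inverse exists for any a not divisible by p."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r   # gcd recurrence
        old_s, s = s, old_s - q * s   # Bezout coefficient for a
    return old_s % p
```

In ECC point addition over a prime field, one such inversion is needed per affine slope computation, which is why its latency dominates and motivates the paper's area-time optimization.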
27 pages, 920 KiB  
Article
AI-Generated Spam Review Detection Framework with Deep Learning Algorithms and Natural Language Processing
by Mudasir Ahmad Wani, Mohammed ElAffendi and Kashish Ara Shakil
Computers 2024, 13(10), 264; https://doi.org/10.3390/computers13100264 - 12 Oct 2024
Viewed by 769
Abstract
Spam reviews pose a significant challenge to the integrity of online platforms, misleading consumers and undermining the credibility of genuine feedback. This paper introduces an innovative AI-generated spam review detection framework that leverages Deep Learning algorithms and Natural Language Processing (NLP) techniques to identify and mitigate spam reviews effectively. Our framework utilizes multiple Deep Learning models, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM), to capture intricate patterns in textual data. The system processes and analyzes large volumes of review content to detect deceptive patterns by utilizing advanced NLP and text embedding techniques such as One-Hot Encoding, Word2Vec, and Term Frequency-Inverse Document Frequency (TF-IDF). By combining three embedding techniques with four Deep Learning algorithms, a total of twelve exhaustive experiments were conducted to detect AI-generated spam reviews. The experimental results demonstrate that our approach outperforms the traditional machine learning models, offering a robust solution for ensuring the authenticity of online reviews. Among the models evaluated, those employing Word2Vec embeddings, particularly the BiLSTM_Word2Vec model, exhibited the strongest performance. The BiLSTM model with Word2Vec achieved the highest performance, with an exceptional accuracy of 98.46%, a precision of 0.98, a recall of 0.97, and an F1-score of 0.98, reflecting a near-perfect balance between precision and recall. Its high F2-score (0.9810) and F0.5-score (0.9857) further highlight its effectiveness in accurately detecting AI-generated spam while minimizing false positives, making it the most reliable option for this task. Similarly, the Word2Vec-based LSTM model also performed exceptionally well, with an accuracy of 97.58%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. 
The CNN model with Word2Vec similarly delivered strong results, achieving an accuracy of 97.61%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. This study is unique in its focus on detecting spam reviews specifically generated by AI-based tools rather than solely detecting spam reviews or AI-generated text. This research contributes to the field of spam detection by offering a scalable, efficient, and accurate framework that can be integrated into various online platforms, enhancing user trust and the decision-making processes. Full article
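Of the three text-embedding techniques the framework compares, TF-IDF is the simplest to illustrate; a minimal scikit-learn sketch on a toy corpus (the review texts are invented for illustration) is:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# toy corpus standing in for genuine and AI-generated reviews
reviews = [
    "great product works exactly as described",
    "amazing amazing amazing must buy now",
    "broke after two days terrible quality",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)   # sparse (3, |vocabulary|) matrix
# each row is an L2-normalized TF-IDF vector, ready to feed a classifier
```

Word2Vec and one-hot encodings would replace this vectorizer in the other experimental arms, with the resulting vectors fed to the CNN, LSTM, GRU, or BiLSTM models.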
29 pages, 5435 KiB  
Article
access@tour by Action: A Platform for Improving Accessible Tourism Conditions
by Pedro Teixeira, Celeste Eusébio and Leonor Teixeira
Computers 2024, 13(10), 263; https://doi.org/10.3390/computers13100263 - 12 Oct 2024
Viewed by 785
Abstract
Accessible tourism has become relevant, generating significant economic and social impacts. Even though the accessible tourism market is rising and presents an excellent business opportunity, this market is largely ignored, as it is challenging to stimulate the flow of accessibility information. Accessible technologies, such as tourism information systems, can be a potential solution, increasing accessibility through communication. However, such solutions are few and often fail to integrate users into the development process. This research aims to present a technological platform to improve accessibility in the tourism industry. The name of this accessible and adaptable technological solution is access@tour by action, and it was created following a user-centered design methodology. This development involved a requirement engineering process based on three crucial stakeholders in accessible tourism: educational institutions, supply agents, and demand agents. The design phase was supported by a conceptual model based on the Unified Modeling Language (UML). The initial prototype of the solution, created in Adobe XD, implements a wide range of informational and accessibility requirements. Selected access@tour by action interfaces illustrate the design, content, and primary functionalities. By linking technological development, tourism, and social inclusion components, this study highlights the relevance and interdisciplinarity of processes in developing accessible information systems. Full article
43 pages, 1980 KiB  
Review
A Bibliometric Analysis Exploring the Acceptance of Virtual Reality among Older Adults: A Review
by Pei-Gang Wang, Nazlena Mohamad Ali and Mahidur R. Sarker
Computers 2024, 13(10), 262; https://doi.org/10.3390/computers13100262 - 12 Oct 2024
Viewed by 1161
Abstract
In recent years, there has been a widespread integration of virtual reality (VR) technology across various sectors including healthcare, education, and entertainment, marking a significant rise in its societal importance. However, with the ongoing trend of population ageing, understanding the elderly’s acceptance of such new technologies has become a focal point in both academic and industrial discourse. Despite the attention it garners, there exists a gap in understanding the attitudes of older adults towards VR adoption, along with evident needs and barriers within this demographic. Hence, gaining an in-depth comprehension of the factors influencing the acceptance of VR technology among older adults becomes imperative to enhance its utility and efficacy within this group. This study employs renowned databases such as WoS and Scopus to scrutinize and analyze the utilization of VR among the elderly population. Utilizing VOSviewer software (version 1.6.20), statistical analysis is conducted on the pertinent literature to delve into research lacunae, obstacles, and recommendations in this domain. The findings unveil a notable surge in literature studies concerning VR usage among older adults, particularly evident since 2019. This study documents significant journals, authors, citations, countries, and research domains contributing to this area. Furthermore, it highlights pertinent issues and challenges surrounding the adoption of VR by older users, aiming to identify prevailing constraints, research voids, and future technological trajectories. Simultaneously, this study furnishes guidelines and suggestions tailored towards enhancing VR acceptance among the elderly, thereby fostering a more inclusive technological milieu. Ultimately, this research aspires to establish an encompassing technological ecosystem empowering older adults to harness VR technology for enriched engagement, learning, and social interactions. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)
Show Figures

Figure 1

19 pages, 5118 KiB  
Article
Enhancing Information Exchange in Ship Maintenance through Digital Twins and IoT: A Comprehensive Framework
by Andrii Golovan, Vasyl Mateichyk, Igor Gritsuk, Alexander Lavrov, Miroslaw Smieszek, Iryna Honcharuk and Olena Volska
Computers 2024, 13(10), 261; https://doi.org/10.3390/computers13100261 - 11 Oct 2024
Viewed by 782
Abstract
This article proposes a comprehensive framework for enhancing information exchange in ship maintenance through the integration of Digital Twins (DTs) and the Internet of Things (IoT). The maritime industry faces significant challenges in maintaining ships due to issues like data silos, delayed information flow, and insufficient real-time updates. By leveraging advanced technologies such as DTs and IoT, this framework aims to optimize maintenance processes, improve decision-making, and increase the operational efficiency of maritime vessels. Digital Twins create virtual replicas of physical assets, allowing for continuous monitoring, simulation, and prediction of ship conditions. Meanwhile, IoT devices enable real-time data collection and transmission from various ship components, facilitating a seamless flow of information. This integrated approach enhances predictive maintenance capabilities, reduces downtime, and improves resource allocation. The article delves into the architecture of the proposed framework, implementation steps, and potential challenges, supported by case studies that demonstrate its practical application and benefits. By addressing these aspects, the framework aims to provide a robust solution for modernizing ship maintenance operations and ensuring the longevity and reliability of maritime assets. Full article
Show Figures

Figure 1

31 pages, 4735 KiB  
Article
Enhanced Neonatal Brain Tissue Analysis via Minimum Spanning Tree Segmentation and the Brier Score Coupled Classifier
by Tushar Hrishikesh Jaware, Chittaranjan Nayak, Priyadarsan Parida, Nawaf Ali, Yogesh Sharma and Wael Hadi
Computers 2024, 13(10), 260; https://doi.org/10.3390/computers13100260 - 11 Oct 2024
Viewed by 861
Abstract
Automatic assessment of brain regions in an MR image has emerged as a pivotal tool in advancing diagnosis and continual monitoring of neurological disorders through different phases of life. Nevertheless, current solutions often exhibit specificity to particular age groups, thereby constraining their utility in observing brain development from infancy to late adulthood. In our research, we introduce a novel approach for segmenting and classifying neonatal brain images. Our methodology capitalizes on minimum spanning tree (MST) segmentation employing the Manhattan distance, complemented by a shrunken centroid classifier empowered by the Brier score. This fusion enhances the accuracy of tissue classification, effectively addressing the complexities inherent in age-specific segmentation. Moreover, we propose a novel threshold estimation method utilizing the Brier score, further refining the classification process. The proposed approach yields a competitive Dice similarity index of 0.88 and a Jaccard index of 0.95. This approach marks a significant step toward neonatal brain tissue segmentation, showcasing the efficacy of our proposed methodology in comparison to the latest cutting-edge methods. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
Show Figures

Figure 1
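The Dice and Jaccard indices quoted in the abstract above are standard overlap measures for comparing a predicted segmentation against a reference. As an illustrative sketch only (not the authors' implementation), both can be computed directly from sets of foreground voxel coordinates:

```python
def dice_index(pred, truth):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|) on sets of voxel coordinates."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def jaccard_index(pred, truth):
    """Jaccard index: |A ∩ B| / |A ∪ B| on sets of voxel coordinates."""
    return len(pred & truth) / len(pred | truth)

# Toy 2-D masks given as (row, col) coordinates of foreground voxels.
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 0), (0, 1), (1, 2)}
print(dice_index(pred, truth))     # 2*2/(3+3) ≈ 0.667
print(jaccard_index(pred, truth))  # 2/4 = 0.5
```

The two indices are monotonically related (J = D / (2 − D)), so they rank competing segmentations identically; reporting both mainly aids comparison across papers.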

27 pages, 1100 KiB  
Article
Specialized Genetic Operators for the Planning of Passive Optical Networks
by Oeber Izidoro Pereira, Edgar Manuel Carreño-Franco, Jesús M. López-Lezama and Nicolás Muñoz-Galeano
Computers 2024, 13(10), 259; https://doi.org/10.3390/computers13100259 - 10 Oct 2024
Viewed by 507
Abstract
Passive Optical Networks (PONs) are telecommunication technologies that use fiber-optic cables to deliver high-speed internet and other communication services to end users. PONs split optical signals from a single fiber into multiple fibers, serving multiple homes or businesses without requiring active electronic components. PON planning involves designing and optimizing the infrastructure for delivering fiber-optic communications to end users. The main contribution of this paper is the introduction of tailored operators within a genetic algorithm (GA) optimization approach for PON planning. A three-vector and an aggregator vector are devised to account, respectively, for the physical and logical connections of the network, facilitating the execution of GA operators. This encoding and these operators are versatile and can be applied to any population-based algorithm, not limited to GAs alone. Furthermore, the proposed operators are specifically designed to exploit the unique characteristics of PONs, thereby minimizing the occurrence of unfeasible solutions and accelerating convergence towards an optimal network design. By incorporating these specialized operators, this research aims to enhance the efficiency of PON planning, ultimately leading to reduced costs and improved network performance. Full article
Show Figures

Figure 1

16 pages, 2121 KiB  
Article
Enhancement of Named Entity Recognition in Low-Resource Languages with Data Augmentation and BERT Models: A Case Study on Urdu
by Fida Ullah, Alexander Gelbukh, Muhammad Tayyab Zamir, Edgardo Manuel Felipe Riverόn and Grigori Sidorov
Computers 2024, 13(10), 258; https://doi.org/10.3390/computers13100258 - 10 Oct 2024
Viewed by 843
Abstract
Identifying and categorizing proper nouns in text, known as named entity recognition (NER), is crucial for various natural language processing tasks. However, developing effective NER techniques for low-resource languages like Urdu poses challenges due to limited training data, particularly in the Nastaliq script. To address this, our study introduces a novel data augmentation method, “contextual word embeddings augmentation” (CWEA), for Urdu, aiming to enrich existing datasets. The extended dataset, comprising 160,132 tokens and 114,912 labeled entities, significantly enhances the coverage of named entities compared to previous datasets. We evaluated several transformer models on this augmented dataset, including BERT-multilingual, RoBERTa-Urdu-small, BERT-base-cased, and BERT-large-cased. Notably, the BERT-multilingual model outperformed the others, achieving the highest macro F1 score of 0.982. This surpassed the macro F1 scores of the RoBERTa-Urdu-small (0.884), BERT-large-cased (0.916), and BERT-base-cased (0.908) models. Additionally, our neural network model achieved a micro F1 score of 96%, while the RNN model achieved 97% and the BiLSTM model achieved a macro F1 score of 96% on augmented data. Our findings underscore the efficacy of data augmentation techniques in enhancing NER performance for low-resource languages like Urdu. Full article
Show Figures

Figure 1
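For readers comparing the macro and micro F1 figures above: macro F1 averages per-class F1 scores with equal weight, so rare entity types count as much as frequent ones. A minimal, library-free sketch of token-level macro F1 (illustrative only, not the authors' evaluation code; the entity labels are hypothetical):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical token labels: the missed LOC drags the macro score down hard.
y_true = ["PER", "LOC", "O", "PER", "O"]
y_pred = ["PER", "O",   "O", "PER", "O"]
print(macro_f1(y_true, y_pred))  # (1.0 + 0.0 + 0.8) / 3 = 0.6
```

Micro F1, by contrast, pools true/false positives over all classes, so frequent classes (here "O") dominate, which is why the two averages can differ substantially on imbalanced NER data.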

13 pages, 853 KiB  
Article
Assessing Large Language Models Used for Extracting Table Information from Annual Financial Reports
by David Balsiger, Hans-Rudolf Dimmler, Samuel Egger-Horstmann and Thomas Hanne
Computers 2024, 13(10), 257; https://doi.org/10.3390/computers13100257 - 9 Oct 2024
Viewed by 896
Abstract
The extraction of data from tables in PDF documents has been a longstanding challenge in the field of data processing and analysis. While traditional methods have been explored in depth, the rise of Large Language Models (LLMs) offers new possibilities. This article addresses the knowledge gaps regarding LLMs, specifically ChatGPT-4 and BARD, for extracting and interpreting data from financial tables in PDF format. This research is motivated by the real-world need to efficiently gather and analyze corporate financial information. The hypothesis is that LLMs, in this case ChatGPT-4 and BARD, can accurately extract key financial data, such as balance sheets and income statements. The methodology involves selecting representative pages from 46 annual reports of large Swiss corporations listed in the SMI Expanded Index from 2022 and copy–pasting text from these into LLMs. Eight analytical questions were posed to the LLMs, and their responses were assessed for accuracy and for identifying potential error sources in data extraction. The findings revealed significant variance in the performance of ChatGPT-4 and BARD, with ChatGPT-4 generally exhibiting superior accuracy. This research contributes to understanding the capabilities and limitations of LLMs in processing and interpreting complex financial data from corporate documents. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Show Figures

Figure 1

13 pages, 849 KiB  
Article
Audio Deep Fake Detection with Sonic Sleuth Model
by Anfal Alshehri, Danah Almalki, Eaman Alharbi and Somayah Albaradei
Computers 2024, 13(10), 256; https://doi.org/10.3390/computers13100256 - 8 Oct 2024
Viewed by 1332
Abstract
Information dissemination and preservation are crucial for societal progress, especially in the technological age. While technology fosters knowledge sharing, it also risks spreading misinformation. Audio deepfakes—convincingly fabricated audio created using artificial intelligence (AI)—exacerbate this issue. We present Sonic Sleuth, a novel AI model designed specifically for detecting audio deepfakes. Our approach utilizes advanced deep learning (DL) techniques, including a custom CNN model, to enhance detection accuracy in audio misinformation, with practical applications in journalism and social media. Through meticulous data preprocessing and rigorous experimentation, we achieved a remarkable 98.27% accuracy and a 0.016 equal error rate (EER) on a substantial dataset of real and synthetic audio. Additionally, Sonic Sleuth demonstrated 84.92% accuracy and a 0.085 EER on an external dataset. The novelty of this research lies in its integration of datasets that closely simulate real-world conditions, including noise and linguistic diversity, enabling the model to generalize across a wide array of audio inputs. These results underscore Sonic Sleuth’s potential as a powerful tool for combating misinformation and enhancing integrity in digital communications. Full article
Show Figures

Figure 1

18 pages, 8530 KiB  
Article
Spatiotemporal Bayesian Machine Learning for Estimation of an Empirical Lower Bound for Probability of Detection with Applications to Stationary Wildlife Photography
by Mohamed Jaber, Robert D. Breininger, Farag Hamad and Nezamoddin N. Kachouie
Computers 2024, 13(10), 255; https://doi.org/10.3390/computers13100255 - 8 Oct 2024
Viewed by 613
Abstract
An important parameter in monitoring and surveillance systems is the probability of detection. Advanced wildlife monitoring systems rely on camera traps for stationary wildlife photography and have been broadly used for estimation of population size and density. Camera encounters are collected for estimation and management of a growing population size using spatial capture models. The accuracy of the estimated population size relies on the detection probability of the individual animals, which in turn depends on the observed frequency of animal encounters with the camera traps. Therefore, optimal coverage by the camera grid is essential for reliable estimation of the population size and density. The goal of this research is implementing a spatiotemporal Bayesian machine learning model to estimate a lower bound for the probability of detection of a monitoring system. To obtain an accurate estimate of population size in this study, an empirical lower bound for the probability of detection is realized considering the sensitivity of the model to the augmented sample size. The monitoring system must attain a probability of detection greater than the established empirical lower bound to achieve a pertinent estimation accuracy. It was found that for stationary wildlife photography, a camera grid with a detection probability of at least 0.3 is required for accurate estimation of the population size. A notable outcome is that a moderate probability of detection or better is required to obtain a reliable estimate of the population size using spatiotemporal machine learning. As a result, the required probability of detection is recommended when designing an automated monitoring system. The number and location of cameras in the camera grid determine the camera coverage. Consequently, camera coverage and the individual home range determine the probability of detection. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Show Figures

Figure 1

29 pages, 3031 KiB  
Article
Technical Innovations and Social Implications: Mapping Global Research Focus in AI, Blockchain, Cybersecurity, and Privacy
by Emanuela Bran, Răzvan Rughiniș, Dinu Țurcanu and Gheorghe Nadoleanu
Computers 2024, 13(10), 254; https://doi.org/10.3390/computers13100254 - 8 Oct 2024
Viewed by 1591
Abstract
This study examines the balance between technical and social focus in artificial intelligence, blockchain, cybersecurity, and privacy publications in Web of Science across countries, exploring the social factors that influence these research priorities. We use regression analysis to identify predictors of research focus and cluster analysis to reveal patterns across countries, combining these methods to provide a broader view of global research priorities. Our findings reveal that liberal democracy index, life expectancy, and happiness are significant predictors of research focus, while traditional indicators like education and income show weaker relationships. This unexpected result challenges conventional assumptions about the drivers of research priorities in digital technologies. The study identifies distinct clusters of countries with similar patterns of research focus across the four technologies, revealing previously unrecognized global typologies. Notably, more democratic societies tend to emphasize social implications of technologies, while some rapidly developing countries focus more on technical aspects. These findings suggest that political and social factors may play a larger role in shaping research agendas than previously thought, necessitating a re-evaluation of how we understand and predict research focus in rapidly evolving technological fields. The study provides valuable information for policymakers and researchers, informing strategies for technological development and international collaboration in an increasingly digital world. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
Show Figures

Figure 1

34 pages, 4479 KiB  
Article
Development of a Children’s Educational Dictionary for a Low-Resource Language Using AI Tools
by Diana Rakhimova, Aidana Karibayeva, Vladislav Karyukin, Assem Turarbek, Zhansaya Duisenbekkyzy and Rashid Aliyev
Computers 2024, 13(10), 253; https://doi.org/10.3390/computers13100253 - 2 Oct 2024
Viewed by 751
Abstract
Today, various interactive tools or partially available artificial intelligence applications are actively used in educational processes to solve multiple problems for resource-rich languages, such as English, Spanish, French, etc. Unfortunately, the situation is different and more complex for low-resource languages, like Kazakh, Uzbek, Mongolian, and others, due to the lack of qualitative and accessible resources, morphological complexity, and the semantics of agglutinative languages. This article presents research on early childhood learning resources for the low-resource Kazakh language. Generally, a dictionary for children differs from classical educational dictionaries. The difference between dictionaries for children and adults lies in their purpose and methods of presenting information. A themed dictionary will make learning and remembering new words easier for children because they will be presented in a specific context. This article discusses developing an approach to creating a thematic children’s dictionary of the low-resource Kazakh language using artificial intelligence. The proposed approach is based on several important stages: the initial formation of a list of English words with the use of ChatGPT; identification of their semantic weights; generation of phrases and sentences with the use of the list of semantically related words; translation of obtained phrases and sentences from English to Kazakh, dividing them into bigrams and trigrams; and processing with Kazakh language POS pattern tag templates to adapt them for children. When the dictionary was formed, the semantic proximity of words and phrases to the given theme and age restrictions for children were taken into account. The formed dictionary phrases were evaluated using the cosine similarity, Euclidean similarity, and Manhattan distance metrics. 
Moreover, the dictionary was extended with video and audio data by implementing models like DALL-E 3, Midjourney, and Stable Diffusion to illustrate the dictionary data and TTS (Text to Speech) technology for the Kazakh language for voice synthesis. The developed thematic dictionary approach was tested, and a SUS (System Usability Scale) assessment of the application was conducted. The experimental results demonstrate the proposed approach’s high efficiency and its potential for wide use in educational purposes. Full article
(This article belongs to the Special Issue Smart Learning Environments)
Show Figures

Figure 1
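The abstract above names three vector metrics used to score dictionary phrases: cosine similarity, Euclidean distance, and Manhattan distance. A minimal sketch of the metrics themselves on toy vectors (illustrative only; the paper's actual embeddings and evaluation pipeline are not reproduced here):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between u and v: dot(u, v) / (|u| * |v|)."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def euclidean_distance(u, v):
    """Straight-line (L2) distance between u and v."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def manhattan_distance(u, v):
    """City-block (L1) distance between u and v."""
    return sum(abs(x - y) for x, y in zip(u, v))

u, v = [1.0, 2.0, 3.0], [2.0, 2.0, 1.0]  # toy 3-dimensional "embeddings"
print(round(cosine_similarity(u, v), 3))   # 0.802
print(round(euclidean_distance(u, v), 3))  # sqrt(5) ≈ 2.236
print(manhattan_distance(u, v))            # 3.0
```

Note the differing conventions: cosine similarity is larger for closer vectors (1.0 means identical direction), while the two distances are smaller, so thresholds for "semantically close to the theme" point in opposite directions.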

50 pages, 19482 KiB  
Article
The Use of eXplainable Artificial Intelligence and Machine Learning Operation Principles to Support the Continuous Development of Machine Learning-Based Solutions in Fault Detection and Identification
by Tuan-Anh Tran, Tamás Ruppert and János Abonyi
Computers 2024, 13(10), 252; https://doi.org/10.3390/computers13100252 - 2 Oct 2024
Viewed by 598
Abstract
Machine learning (ML) revolutionized traditional machine fault detection and identification (FDI), as complex-structured models with well-designed unsupervised learning strategies can detect abnormal patterns from abundant data, which significantly reduces the total cost of ownership. However, their opaqueness has raised human concern and motivated the eXplainable artificial intelligence (XAI) concept. Furthermore, the development of ML-based FDI models can be improved fundamentally with machine learning operations (MLOps) guidelines, enhancing reproducibility and operational quality. This study proposes a framework for the continuous development of ML-based FDI solutions, which contains a general structure to simultaneously visualize and check the performance of the ML model while directing the resource-efficient development process. A use case is conducted on sensor data of a hydraulic system with a simple long short-term memory (LSTM) network. The proposed XAI principles and tools supported the model engineering and monitoring, while additional system optimization can be made regarding input data preparation, feature selection, and model usage. The suggested MLOps principles help developers create a minimum viable solution and involve it in a continuous improvement loop. The promising results motivate further adoption of XAI and MLOps while endorsing the generalization of modern ML-based FDI applications with the human-in-the-loop (HITL) concept. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
Show Figures

Figure 1

20 pages, 2154 KiB  
Article
Green Communication in IoT for Enabling Next-Generation Wireless Systems
by Mohammad Aljaidi, Omprakash Kaiwartya, Ghassan Samara, Ayoub Alsarhan, Mufti Mahmud, Sami M. Alenezi, Raed Alazaidah and Jaime Lloret
Computers 2024, 13(10), 251; https://doi.org/10.3390/computers13100251 - 2 Oct 2024
Cited by 1 | Viewed by 653
Abstract
Recent developments and the widespread use of IoT-enabled technologies have led to Research and Development (R&D) efforts in green communication. Traditional dynamic-source routing is one of the well-known protocols suggested to solve the information dissemination problem in an IoT environment. However, this protocol suffers from a high level of energy consumption in sensor-enabled device-to-device and device-to-base station communications. As a result, new information dissemination protocols should be developed to overcome the challenges of dynamic-source routing and similar protocols with respect to green communication. In this context, a new energy-efficient routing protocol (EFRP) is proposed using hybrid heuristic techniques. In the densely deployed sensor-enabled IoT environment, an optimal information dissemination path for device-to-device and device-to-base station communication was identified using a hybrid of the genetic algorithm (GA) and antlion optimization (ALO) algorithms. An objective function is formulated focusing on energy consumption-centric cost minimization. The evaluation results demonstrate that the proposed protocol outperforms the Greedy approach and the DSR protocol in terms of a range of green communication metrics. The number of alive sensor nodes in the experimental network increased by more than 26% compared to the other approaches, while energy consumption was reduced by about 33%. This led to a prolonged IoT network lifetime, increased by about 25%. It is evident that the proposed scheme greatly improves the information dissemination efficiency of the IoT network, significantly increasing the network’s throughput. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
Show Figures

Figure 1

19 pages, 15516 KiB  
Article
Effects of OpenCL-Based Parallelization Methods on Explicit Numerical Methods to Solve the Heat Equation
by Dániel Koics, Endre Kovács and Olivér Hornyák
Computers 2024, 13(10), 250; https://doi.org/10.3390/computers13100250 - 1 Oct 2024
Viewed by 578
Abstract
In recent years, the need for high-performance computing solutions has increased due to the growing complexity of computational tasks. The use of parallel processing techniques has become essential to address this demand. In this study, an Open Computing Language (OpenCL)-based parallelization algorithm is implemented for the Constant Neighbors (CNe) and CNe with Predictor–Corrector (CpC) numerical methods, which are recently developed explicit and stable numerical algorithms to solve the heat conduction equation. The CPU time and error rate performance of these two methods are compared with the sequential implementation and Euler’s explicit method. The results demonstrate that the parallel version’s CPU time remains nearly constant under the examined circumstances, regardless of the number of spatial mesh points. This leads to a remarkable speed advantage over the sequential version for larger data point counts. Furthermore, the impact of the number of timesteps on the crossover point where the parallel version becomes faster than the sequential one is investigated. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Show Figures

Figure 1
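For orientation, the baseline against which the paper's CNe and CpC methods are compared is the classic explicit Euler scheme, which updates each mesh point from its two neighbors at the previous time level. A minimal sequential 1-D sketch (illustrative only; the paper's OpenCL kernels and the CNe/CpC formulas are not reproduced here):

```python
def heat_step(u, alpha, dx, dt):
    """One explicit Euler step for u_t = alpha * u_xx on a 1-D mesh with
    fixed (Dirichlet) boundary values; stable only if r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    return [u[0]] + [
        u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0, 0.0, 1.0, 0.0, 0.0]  # initial heat spike on a 5-point mesh
for _ in range(50):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)  # r = 0.25, stable
```

Because each point's update reads only its two neighbors from the previous time level, the loop over mesh points is embarrassingly parallel, which is what makes an OpenCL work-item-per-mesh-point mapping natural and explains why the parallel runtime can stay nearly flat as the mesh grows.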

34 pages, 1042 KiB  
Article
Artificially Intelligent Vehicle-to-Grid Energy Management: A Semantic-Aware Framework Balancing Grid Demands and User Autonomy
by Mahmoud Elkhodr
Computers 2024, 13(10), 249; https://doi.org/10.3390/computers13100249 - 1 Oct 2024
Viewed by 770
Abstract
As the adoption of electric vehicles increases, the challenge of managing bidirectional energy flow while ensuring grid stability and respecting user preferences becomes increasingly critical. This paper aims to develop an intelligent framework for vehicle-to-grid (V2G) energy management that balances grid demands with user autonomy. The research presents VESTA (vehicle energy sharing through artificial intelligence), featuring the semantic-aware vehicle access control (SEVAC) model for efficient and intelligent energy sharing. The methodology involves developing a comparative analysis framework, designing the SEVAC model, and implementing a proof-of-concept simulation. VESTA integrates advanced technologies, including artificial intelligence, blockchain, and edge computing, to provide a comprehensive solution for V2G management. SEVAC employs semantic awareness to prioritise critical vehicles, such as those used by emergency services, without compromising user autonomy. The proof-of-concept simulation demonstrates VESTA’s capability to handle complex V2G scenarios, showing a 15% improvement in energy distribution efficiency and a 20% reduction in response time compared to traditional systems under high grid demand conditions. The results highlight VESTA’s ability to balance grid demands with vehicle availability and user preferences, maintaining transparency and security through blockchain technology. Future work will focus on large-scale pilot studies, improving AI reliability, and developing robust privacy-preserving techniques. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
Show Figures

Figure 1

24 pages, 3036 KiB  
Article
Comparing Machine Learning Models for Sentiment Analysis and Rating Prediction of Vegan and Vegetarian Restaurant Reviews
by Sanja Hanić, Marina Bagić Babac, Gordan Gledec and Marko Horvat
Computers 2024, 13(10), 248; https://doi.org/10.3390/computers13100248 - 1 Oct 2024
Viewed by 813
Abstract
The paper investigates the relationship between written reviews and numerical ratings of vegan and vegetarian restaurants, aiming to develop a predictive model that accurately determines numerical ratings based on review content. The dataset was obtained by scraping reviews from November 2022 until January 2023 from the TripAdvisor website. The study applies multidimensional scaling and clustering using the KNN algorithm to visually represent the textual data. Sentiment analysis and rating predictions are conducted using neural networks, support vector machines (SVM), random forest, Naïve Bayes, and BERT models. Text vectorization is accomplished through term frequency-inverse document frequency (TF-IDF) and global vectors (GloVe). The analysis identified three main topics related to vegan and vegetarian restaurant experiences: (1) restaurant ambiance, (2) personal feelings towards the experience, and (3) the food itself. The study processed a total of 33,439 reviews, identifying key aspects of the dining experience and testing various machine learning methods for sentiment and rating predictions. Among the models tested, BERT outperformed the others, and TF-IDF proved slightly more effective than GloVe for word representation. Full article
Show Figures

Figure 1
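Of the two word representations compared above, TF-IDF is the simpler: it weights a term by its frequency within a review, discounted by how many reviews contain it. A toy sketch with a smoothed IDF (illustrative only; the paper's exact weighting variant and vocabulary handling are not specified here, and the sample reviews are invented):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF vectors: tf * (log((1 + N) / (1 + df)) + 1)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({
            t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t, c in tf.items()
        })
    return vectors

docs = ["great vegan food", "great ambiance", "the food was vegan"]
vecs = tf_idf(docs)
# "ambiance" appears in one review, "great" in two, so within review 1
# "ambiance" gets the higher weight despite equal term frequency.
```

GloVe, by contrast, maps each word to a dense pretrained vector, so the two representations trade corpus-specific term discrimination against general semantic similarity, which is consistent with the small margin the study reports between them.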

24 pages, 893 KiB  
Article
Why Are Other Teachers More Inclusive in Online Learning Than Us? Exploring Challenges Faced by Teachers of Blind and Visually Impaired Students: A Literature Review
by Rana Ghoneim, Wajdi Aljedaani, Renee Bryce, Yasir Javed and Zafar Iqbal Khan
Computers 2024, 13(10), 247; https://doi.org/10.3390/computers13100247 - 27 Sep 2024
Viewed by 1007
Abstract
Distance learning has grown rapidly in recent years. E-learning can aid teachers of students with disabilities, particularly visually impaired students (VISs), by offering versatility, accessibility, enhanced communication, adaptability, and a wide range of multimedia and non-verbal teaching methods. However, the shift from traditional face-to-face instruction to online platforms, especially during the pandemic, introduced unique challenges for VISs, including instructional methodologies, accessibility, and the integration of suitable technology. Recent research has shown that the resources and facilities of educational institutions pose challenges for teachers of visually impaired students (TVISs). This study reviews research from 2000 to 2024 to identify significant issues encountered by TVISs in online learning and to show the effects of distance learning before, during, and after the pandemic. This systematic literature review examines 25 publications. Through a methodical categorization and analysis of these papers, the evaluation reveals technological problems affecting the educational experience of teachers of visually impaired students. The results emphasize important problems and suggest solutions, providing valuable knowledge for experts in education and legislation. The study recommends technology solutions to support instructors in providing inclusive online learning environments for VISs.
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
16 pages, 5613 KiB  
Article
Unraveling the Dynamics of Mental and Visuospatial Workload in Virtual Reality Environments
by Guillermo Bernal, Hahrin Jung, İsmail Emir Yassı, Nelson Hidalgo, Yodahe Alemu, Tyler Barnes-Diana and Pattie Maes
Computers 2024, 13(10), 246; https://doi.org/10.3390/computers13100246 - 26 Sep 2024
Viewed by 986
Abstract
Mental workload, visuospatial processes and autonomic nervous system (ANS) activity are highly intertwined phenomena crucial for achieving optimal performance and improved mental health. Virtual reality (VR) serves as an effective tool for creating a variety of controlled environments to better probe these features. This study investigates the relationship between mental and visuospatial workload, physiological arousal, and performance during a high-demand task in a VR environment. We utilized a modified version of the popular computer game TETRIS as the task, involving 25 participants, and employed a physiological computing VR headset that simultaneously records multimodal physiological data. Our findings indicate a broadband increase in EEG power just prior to a helper event, followed by a spike of visuospatial engagement (parietal alpha and beta 0-1-3 s) occurring concurrently with a decrease in mental workload (frontal theta 2–4 s), and subsequent decreases in visuospatial engagement (parietal theta at 14 s) and physiological arousal (HRV at 20 s). Regression analysis indicated that the subjective relief and helpfulness of the helper intervention were primarily driven by a decrease in physiological arousal and an increase in visuospatial engagement. These findings highlight the importance of multimodal physiological recording in rich environments, such as real-world scenarios and VR, to understand the interplay between the various physiological responses involved in mental and visuospatial workload.
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)
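The abstract above reports heart-rate-variability (HRV) changes around a helper event. As a generic, hedged sketch — not the authors' pipeline, and with fabricated illustrative RR intervals — one standard time-domain HRV measure is RMSSD, the root mean square of successive differences between heartbeat (RR) intervals; a drop in RMSSD is commonly read as reduced parasympathetic activity / higher arousal.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain HRV measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR series: more beat-to-beat variability before an
# event than after it (values in milliseconds, illustration only)
before = [810, 790, 830, 780, 820]
after_evt = [800, 802, 799, 801, 800]
```

Comparing `rmssd(before)` against `rmssd(after_evt)` in sliding windows around task events is one plausible way such HRV time courses are derived.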
25 pages, 6632 KiB  
Article
Construction of the Invoicing Process through Process Mining and Business Intelligence in the Colombian Pharmaceutical Sector
by Jhon Wilder Sanchez-Obando, Néstor Darío Duque-Méndez and Oscar Mauricio Bedoya Herrera
Computers 2024, 13(10), 245; https://doi.org/10.3390/computers13100245 - 25 Sep 2024
Viewed by 718
Abstract
The invoicing process is critical to the financial management of organizations. However, modeling this process presents challenges such as data updating, information availability, and aligning planned activities with the actual execution of the process. One difficulty is that designing the invoicing process requires extensive knowledge of the activities involved, and process representations based on organizational repositories are not necessarily aligned with the actual invoicing processes in the organization. Process mining is complemented by dashboards, which are inherent to business intelligence and allow for visual tracking of process behavior. This paper explores how the combination of process mining and business intelligence can enable a new level of process modeling that addresses specific issues in constructing processes aligned with real-world activities. To accomplish this, we first adopt the Design Science Research (DSR) methodology, which outlines how a researcher or practitioner should approach the task of modeling a specific process using process mining augmented with dashboard resources. The research strategy was to identify the most appropriate methodology for constructing the actual invoicing process, which led to the identification of the DSR methodology. This methodology, with its 12-step plan, allowed the construction of an artifact representing the actual invoicing process. Ultimately, the objective of constructing a real invoicing process in the Colombian pharmaceutical sector is achieved through the development of an artifact, complemented by business intelligence dashboards that ensure the alignment of the execution of activities within the process.
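Process discovery of the kind the abstract above describes typically starts from an event log of traces. As a minimal generic sketch (not the paper's method; the invoicing activity names are hypothetical), most discovery algorithms begin by counting the directly-follows relation — how often activity *b* immediately follows activity *a* across traces:

```python
from collections import Counter

def directly_follows(event_log):
    """Count directly-follows pairs (a, b) across all traces.

    This directly-follows graph (DFG) is the basic structure that
    many process-discovery algorithms build on.
    """
    dfg = Counter()
    for trace in event_log:
        dfg.update(zip(trace, trace[1:]))
    return dfg

# Hypothetical invoicing traces, as might be extracted from an ERP log
log = [
    ["create invoice", "verify", "approve", "send"],
    ["create invoice", "verify", "reject", "verify", "approve", "send"],
]
dfg = directly_follows(log)
```

Feeding such counts into a dashboard (edge thickness proportional to frequency) is one straightforward way to get the visual tracking of process behavior the abstract mentions.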
24 pages, 1353 KiB  
Article
Application of Deep Learning for Heart Attack Prediction with Explainable Artificial Intelligence
by Elias Dritsas and Maria Trigka
Computers 2024, 13(10), 244; https://doi.org/10.3390/computers13100244 - 25 Sep 2024
Viewed by 1387
Abstract
Heart disease remains a leading cause of mortality worldwide, and the timely and accurate prediction of heart attack is crucial yet challenging due to the complexity of the condition and the limitations of traditional diagnostic methods. These challenges include the need for resource-intensive diagnostics and the difficulty in interpreting complex predictive models in clinical settings. In this study, we apply and compare the performance of five well-known Deep Learning (DL) models, namely Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), as well as a Hybrid model, on a heart attack prediction dataset. Each model was properly tuned and evaluated using accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Characteristic Curve (AUC) as performance metrics. Additionally, by integrating an Explainable Artificial Intelligence (XAI) technique, specifically Shapley Additive Explanations (SHAP), we enhance the interpretability of the predictions, making them actionable for healthcare professionals and thereby enhancing clinical applicability. The experimental results revealed that the Hybrid model prevailed, achieving the highest performance across all metrics. Specifically, the Hybrid model attained an accuracy of 91%, precision of 89%, recall of 90%, F1-score of 89%, and an AUC of 0.95. These results highlighted the Hybrid model’s superior ability to predict heart attacks, attributed to its efficient handling of sequential data and long-term dependencies.
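The metrics the abstract above reports (accuracy, precision, recall, F1) all derive from the binary confusion matrix. As a generic sketch with made-up labels — not the paper's evaluation code — they can be computed directly from true/predicted labels:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary (0/1) labels."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Hypothetical labels: 1 = heart attack, 0 = no event (illustration only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
```

AUC is the one reported metric not recoverable from hard labels alone; it needs the model's predicted probabilities and is computed over all classification thresholds.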