Computers, Volume 14, Issue 1 (January 2025) – 30 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
35 pages, 633 KiB  
Article
Set-Word Embeddings and Semantic Indices: A New Contextual Model for Empirical Language Analysis
by Pedro Fernández de Córdoba, Carlos A. Reyes Pérez, Claudia Sánchez Arnau and Enrique A. Sánchez Pérez
Computers 2025, 14(1), 30; https://doi.org/10.3390/computers14010030 - 20 Jan 2025
Abstract
We present a new word embedding technique in a (non-linear) metric space based on the shared membership of terms in a corpus of textual documents, where the metric is naturally defined by the Boolean algebra of all subsets of the corpus and a measure μ defined on it. Once the metric space is constructed, a new term (a noun, an adjective, a classification term) can be introduced into the model and analyzed by means of semantic projections, which in turn are defined as indexes using the measure μ and the word embedding tools. We formally define all necessary elements and prove the main results about the model, including a compatibility theorem for estimating the representability of semantically meaningful external terms in the model (which are written as real Lipschitz functions in the metric space), proving the relation between the semantic index and the metric of the space (Theorem 1). Our main result proves the universality of our word-set embedding, proving mathematically that every word embedding based on linear space can be written as a word-set embedding (Theorem 2). Since we adopt an empirical point of view for the semantic issues, we also provide the keys for the interpretation of the results using probabilistic arguments (to facilitate the subsequent integration of the model into Bayesian frameworks for the construction of inductive tools), as well as in fuzzy set-theoretic terms. We also show some illustrative examples, including a complete computational case using big-data-based computations. Thus, the main advantages of the proposed model are that the results on distances between terms are interpretable in semantic terms once the semantic index used is fixed and, although the calculations could be costly, it is possible to calculate the value of the distance between two terms without the need to calculate the whole distance matrix. “Wovon man nicht sprechen kann, darüber muss man schweigen”. Tractatus Logico-Philosophicus. L. Wittgenstein. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
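The core construction is easy to prototype: represent each term by the set of documents that contain it, and take the distance between two terms to be the measure of the symmetric difference of their document sets. The sketch below is our illustration, not the authors' code; it uses the counting measure normalized by corpus size as one simple choice of μ.

```python
# Set-based term distance: each term maps to the set of documents containing
# it, and d(t1, t2) = mu(A symmetric-difference B). Here mu is the counting
# measure divided by the corpus size; the paper's framework allows any
# measure on the Boolean algebra of subsets of the corpus.

def doc_sets(corpus):
    """Map each term to the set of document indices in which it appears."""
    membership = {}
    for i, doc in enumerate(corpus):
        for term in set(doc.lower().split()):
            membership.setdefault(term, set()).add(i)
    return membership

def set_distance(membership, t1, t2, n_docs):
    """mu(A ^ B) with mu = counting measure normalized by n_docs."""
    a, b = membership.get(t1, set()), membership.get(t2, set())
    return len(a ^ b) / n_docs

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
m = doc_sets(corpus)
print(set_distance(m, "cat", "mat", len(corpus)))  # co-occur in one doc -> 0.0
print(set_distance(m, "cat", "dog", len(corpus)))  # disjoint singletons
```

Note that a single pairwise distance needs only the two document sets, which matches the abstract's point that distances can be computed without building the whole distance matrix.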
33 pages, 19016 KiB  
Article
Multitask Learning-Based Pipeline-Parallel Computation Offloading Architecture for Deep Face Analysis
by Faris S. Alghareb and Balqees Talal Hasan
Computers 2025, 14(1), 29; https://doi.org/10.3390/computers14010029 - 20 Jan 2025
Abstract
Deep Neural Networks (DNNs) have been widely adopted in several advanced artificial intelligence applications due to their accuracy, which is competitive with that of the human brain. Nevertheless, the superior accuracy of a DNN is achieved at the expense of intensive computation and storage complexity, requiring custom expandable hardware, i.e., graphics processing units (GPUs). Interestingly, leveraging the synergy of parallelism and edge computing can significantly improve CPU-based hardware platforms. Therefore, this manuscript explores multiple levels of parallelism together with edge computation offloading to develop an innovative hardware platform that improves the efficacy of deep learning computing architectures. Furthermore, the multitask learning (MTL) approach is employed to construct a parallel multi-task classification network. These tasks include face detection and recognition, age estimation, gender recognition, smile detection, and hair color and style classification. Additionally, both pipeline and parallel processing techniques are utilized to expedite complicated computations, boosting the overall performance of the presented deep face analysis architecture. A computation offloading approach, on the other hand, is leveraged to distribute computation-intensive tasks to the edge server, whereas lightweight computations are offloaded to edge devices, i.e., the Raspberry Pi 4. To train the proposed deep face analysis network architecture, two custom datasets (HDDB and FRAED) were created for head detection and face-age recognition. Extensive experimental results demonstrate the efficacy of the proposed pipeline-parallel architecture in terms of execution time: it requires 8.2 s to provide detailed face detection and analysis for an individual and 23.59 s for an inference containing 10 individuals. Moreover, a speedup of 62.48% is achieved compared to the sequential edge computing architecture. 
Meanwhile, a 25.96% speedup is achieved when the proposed pipeline-parallel architecture is implemented only on the edge server, compared to the server's sequential implementation. Considering classification efficiency, the proposed classification modules achieve an accuracy of 88.55% for hair color and style classification and a remarkable prediction outcome of 100% for face recognition and age estimation. To summarize, the proposed approach can help reduce the required execution time and memory capacity by processing all facial tasks simultaneously on a single deep neural network rather than building a separate CNN model for each task. Therefore, the presented pipeline-parallel architecture can serve as a cost-effective framework for real-time computer vision applications implemented on resource-limited devices. Full article
18 pages, 731 KiB  
Review
Computational Methods for Information Processing from Natural Language Complaint Processes—A Systematic Review
by J. C. Blandón Andrade, A. Castaño Toro, A. Morales Ríos and D. Orozco Ospina
Computers 2025, 14(1), 28; https://doi.org/10.3390/computers14010028 - 20 Jan 2025
Abstract
Complaint processing is of great importance for companies because it allows them to understand customer satisfaction levels, which is crucial for business success. Complaints reflect the real perceptions of users, arising from the provision of a service and regularly expressed in oral or written natural language, and thus make the underlying problems visible. In addition, the treatment of complaints is relevant because, according to the laws of each country, companies are obliged to respond to them within a specified time. The specialized literature reports that enterprises have lost USD 75 billion due to poor customer service, highlighting that, given the importance of the voice of the customer, companies need to understand customer perceptions, especially emotions, and product reviews in order to learn from customer feedback. In general, there is a clear need for research on computational language processing to handle user requests. The authors show great interest in computational techniques for processing this information in natural language and in how they could help improve processes in the productive sector. This work searches indexed journals for information related to computational methods for extracting relevant data from user complaints. A systematic literature review (SLR) method is applied, combining Kitchenham's literature review guidelines with the PRISMA statement. The systematic process allows the extraction of consistent information; after applying it, 27 articles were obtained on which the analysis was conducted. The results show various proposals using linguistic, statistical, machine learning, and hybrid methods. We find that most authors combine Natural Language Processing (NLP) and Machine Learning (ML) to create hybrid methods. The methods extract relevant information from natural-language customer complaints in various domains, such as government, medicine, banking, e-commerce, public services, agriculture, customer service, the environment, and tourism, among others. This work contributes support for the creation of new systems that can give companies a significant competitive advantage through their ability to reduce complaint response times, as established by law. Full article
40 pages, 21233 KiB  
Article
Large-Scale Cross-Cultural Tourism Analytics: Integrating Transformer-Based Text Mining and Network Analysis
by Dian Puteri Ramadhani, Andry Alamsyah, Mochamad Yudha Febrianta, Muhammad Nadhif Fajriananda, Mahira Shafiya Nada and Fathiyyah Hasanah
Computers 2025, 14(1), 27; https://doi.org/10.3390/computers14010027 - 16 Jan 2025
Abstract
The growth of the tourism industry in Southeast Asia, particularly in Indonesia, Thailand, and Vietnam, establishes the region as a leading global tourism destination. Numerous studies have explored tourist behavior within specific regions. However, the question of whether tourists’ experience perceptions differ based on their cultural backgrounds is still insufficiently addressed. Previous articles suggest that an individual’s cultural background plays a significant role in shaping tourist values and expectations. This study investigates how tourists’ cultural backgrounds, represented by their geographical regions of origin, impact their entertainment experiences, sentiments, and mobility patterns across the three countries. We gathered 387,010 TripAdvisor reviews and analyzed them using a combination of advanced text mining techniques and network analysis to map tourist mobility patterns. Comparing sentiments and behaviors across cultural backgrounds, this study found that entertainment preferences vary by origin. The network analysis reveals distinct exploration patterns: diverse and targeted exploration. Vietnam achieves the highest satisfaction across the cultural groups through balanced development, while Thailand’s integrated entertainment creates cultural divides, and Indonesia’s generates moderate satisfaction regardless of cultural background. This study contributes to understanding tourism dynamics in Southeast Asia through a data-driven, comparative analysis of tourist behaviors. The findings provide insights for destination management, marketing strategies, and policy development, highlighting the importance of tailoring tourism offerings to meet the diverse preferences of visitors from different global regions. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
17 pages, 7797 KiB  
Article
A 32 μm2 MOS-Based Remote Sensing Temperature Sensor with 1.29 °C Inaccuracy for Thermal Management
by Ruohan Yang, Kwabena Oppong Banahene, Bryce Gadogbe, Randall Geiger and Degang Chen
Computers 2025, 14(1), 26; https://doi.org/10.3390/computers14010026 - 16 Jan 2025
Abstract
This paper introduces a compact NMOS-based temperature sensor designed for precise thermal management in high-performance integrated circuits. Fabricated using the TSMC 180 nm process with a 1.8 V supply, this sensor employs a single diode-connected NMOS transistor, achieving a significant size reduction and improved voltage headroom. The sensor’s area is 32 µm2 per unit, enabling dense integration around thermal hotspots. A novel voltage calibration method ensures accurate temperature extraction. The measurement results demonstrate three-sigma errors within ±0.1 °C in the critical range of 75 °C to 95 °C and +1.29/−1.08 °C outside this range, confirming the sensor’s high accuracy and suitability for advanced thermal management applications. Full article
18 pages, 4133 KiB  
Article
An Investigation of Hand Gestures for Controlling Video Games in a Rehabilitation Exergame System
by Radhiatul Husna, Komang Candra Brata, Irin Tri Anggraini, Nobuo Funabiki, Alfiandi Aulia Rahmadani and Chih-Peng Fan
Computers 2025, 14(1), 25; https://doi.org/10.3390/computers14010025 - 15 Jan 2025
Abstract
Musculoskeletal disorders (MSDs) can significantly impact individuals’ quality of life (QoL), often requiring effective rehabilitation strategies to promote recovery. However, traditional rehabilitation methods can be expensive and may lack engagement, leading to poor adherence to therapy exercise routines. An exergame system can be a solution to this problem. In this paper, we investigate appropriate hand gestures for controlling video games in a rehabilitation exergame system. The Mediapipe Python library is adopted for real-time gesture recognition. We choose 10 easy gestures from among 32 possible simple gestures, and then specify and compare the best and second-best groups of gestures for controlling the game. Comprehensive experiments are conducted with 16 students at Andalas University, Indonesia, to find appropriate gestures and evaluate user experiences of the system using the System Usability Scale (SUS) and the User Experience Questionnaire (UEQ). The results show that the hand gestures in the best group are more accessible than those in the second-best group, suggest appropriate hand gestures for game controls, and confirm the proposal’s validity. In future work, we plan to enhance the exergame system by integrating a diverse set of video games while expanding its application to a broader and more diverse sample. We will also study other practical applications of the hand gesture control function. Full article
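As a rough illustration of how landmark-based gesture control works (this is not the paper's gesture set or code): given normalized (x, y) hand landmarks such as those produced by Mediapipe's 21-point hand model, a finger can be treated as extended when its tip lies above its PIP joint in image coordinates, and finger states then map to game commands.

```python
# Hedged sketch: classify simple hand poses from 21 normalized (x, y)
# landmarks. The landmark indices follow Mediapipe's hand topology (non-thumb
# fingertips 8/12/16/20, PIP joints 6/10/14/18); the gesture names and the
# commands they map to are illustrative assumptions, not the paper's set.

FINGERTIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIP_JOINTS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """Non-thumb fingers whose tip is above its PIP joint (smaller y = higher)."""
    return {f for f in FINGERTIPS
            if landmarks[FINGERTIPS[f]][1] < landmarks[PIP_JOINTS[f]][1]}

def classify(landmarks):
    """Map finger states to a hypothetical game command."""
    up = extended_fingers(landmarks)
    if len(up) == 4:
        return "open_palm"   # e.g., pause
    if len(up) == 0:
        return "fist"        # e.g., jump
    if up == {"index"}:
        return "point"       # e.g., move
    return "unknown"

# 21 dummy landmarks: every tip level with its PIP joint -> a closed fist.
fist = [(0.5, 0.9)] * 21
print(classify(fist))  # fist
```

In a live system the landmark list would come from the hand-tracking model each frame, with the classified gesture forwarded to the game as a key event.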
25 pages, 1993 KiB  
Article
Hacking Exposed: Leveraging Google Dorks, Shodan, and Censys for Cyber Attacks and the Defense Against Them
by Abdullah Alabdulatif and Navod Neranjan Thilakarathne
Computers 2025, 14(1), 24; https://doi.org/10.3390/computers14010024 - 15 Jan 2025
Abstract
In recent years, cyberattacks have increased in sophistication, using a variety of tools to exploit vulnerabilities across the global digital landscape. Among the most commonly used tools at an attacker’s disposal are Google dorks, Shodan, and Censys, which offer unprecedented access to exposed systems, devices, and sensitive data on the World Wide Web. While these tools can be leveraged by professional hackers, they have also empowered “script kiddies”: low-skill, inexperienced attackers who use readily available exploits and scanning tools without deep technical knowledge. Consequently, cyberattacks targeting critical infrastructure are growing at a rapid rate, driven by the ease with which these tools can be operated with minimal expertise. This paper explores the potential for cyberattacks enabled by these tools, presenting use cases where the platforms have been used for both offensive and defensive purposes. By examining notable incidents and analyzing potential threats, we outline proactive measures to protect against these emerging risks. In this study, we delve into how these tools have been used offensively by attackers and how they can serve defensive functions within cybersecurity. Additionally, we introduce an automated all-in-one tool designed to consolidate the functionalities of Google dorks, Shodan, and Censys, offering a streamlined solution for vulnerability detection and analysis. Lastly, we propose proactive defense strategies to mitigate the exploitation risks associated with such tools, aiming to enhance the resilience of critical digital infrastructure against evolving cyber threats. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
27 pages, 1409 KiB  
Article
Adaptive Handover Management in High-Mobility Networks for Smart Cities
by Yahya S. Junejo, Faisal K. Shaikh, Bhawani S. Chowdhry and Waleed Ejaz
Computers 2025, 14(1), 23; https://doi.org/10.3390/computers14010023 - 14 Jan 2025
Abstract
The seamless handover of mobile devices is critical for maximizing the potential of smart city applications, which demand uninterrupted connectivity, ultra-low latency, and performance in diverse environments. Fifth-generation (5G) and beyond-5G networks offer advancements in massive connectivity and ultra-low latency by leveraging advanced technologies such as millimeter wave, massive machine-type communication, non-orthogonal multiple access, and beamforming. However, challenges persist in ensuring smooth handovers in dense deployments, especially in higher frequency bands and with increased user mobility. This paper presents an adaptive handover management scheme that utilizes reinforcement learning to optimize handover decisions in dynamic environments. The system selects the best target cell from the available neighbor cell list by predicting key performance indicators, such as the reference signal received power and the signal-to-interference-plus-noise ratio, while considering fixed time-to-trigger and hysteresis margin values. It dynamically adjusts handover thresholds by incorporating an offset based on real-time network conditions and user mobility patterns. This adaptive approach minimizes handover failures and the ping-pong effect. Compared to the baseline LIM2 model, the proposed system demonstrates a 15% improvement in handover success rate, a 3% improvement in user throughput, and an approximately 6 s reduction in latency at 200 km/h in high-mobility scenarios. Full article
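The adaptive-threshold idea can be illustrated with a toy A3-style trigger in which an extra offset shrinks as user speed grows, so fast-moving users hand over earlier. The constants and the linear offset schedule below are illustrative assumptions, not the paper's learned policy, which adapts the thresholds via reinforcement learning.

```python
# Toy A3-style handover: a neighbor is a candidate when its predicted RSRP
# exceeds the serving cell's RSRP by hysteresis plus a speed-dependent offset.
# All dB values and the linear offset schedule are illustrative assumptions.

def adaptive_offset(speed_kmh, base_offset_db=3.0):
    """Shrink the extra margin as speed grows, reaching 0 dB at >= 200 km/h."""
    return base_offset_db * max(0.0, 1.0 - speed_kmh / 200.0)

def best_target(serving_rsrp, neighbors, speed_kmh, hysteresis_db=2.0):
    """Pick the strongest neighbor satisfying the adaptive trigger condition.

    neighbors: dict mapping cell id -> predicted RSRP in dBm.
    Returns a cell id, or None if no neighbor clears the threshold.
    """
    threshold = serving_rsrp + hysteresis_db + adaptive_offset(speed_kmh)
    candidates = {c: r for c, r in neighbors.items() if r > threshold}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

neighbors = {"cellA": -92.0, "cellB": -89.5, "cellC": -95.0}
print(best_target(-93.0, neighbors, speed_kmh=30.0))   # None: margin not cleared
print(best_target(-93.0, neighbors, speed_kmh=180.0))  # cellB
```

The larger margin at low speed suppresses ping-pong handovers for slow users, while the reduced margin at high speed triggers the handover before the serving link degrades.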
30 pages, 5494 KiB  
Article
The Right to the Night City: Exploring the Temporal Variability of the 15-min City in Milan and Its Implications for Nocturnal Communities
by Lamia Abdelfattah, Abubakr Albashir, Giulia Ceccarelli, Andrea Gorrini, Federico Messa and Dante Presicce
Computers 2025, 14(1), 22; https://doi.org/10.3390/computers14010022 - 11 Jan 2025
Abstract
The needs of night communities and the barriers they face in accessing diverse urban amenities are underexplored in urban planning research. Focus is primarily given to the needs of cultural consumers, frequently overlooking the challenges faced by regular nighttime communities, including night workers. Through a GIS-based analysis, the aim of this research is to shed light on differences in accessibility to core urban services between day and night in the city of Milan. The spatiotemporal analysis was performed using a customized version of the 15-min City Score Toolkit, an open-source, Python-based tool developed to automate the estimation of the 15 min access metric. Proprietary Point-Of-Interest (POI) data retrieved, sorted, and filtered from the Google Places API are used to simulate time-variant walkability maps based on the opening-hour information contained in the dataset. The research reveals significant differences in walkability potential, in both spatial and temporal terms, and highlights gaps in nighttime service availability. The work presents an innovation on the 15 min city approach that highlights the impact of 24-h urban rhythms on real walkability outcomes. The quality limitations of the Google data are extensively explored in the article, providing further insight into the replicability and scalability of the methodology for future research. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
29 pages, 6016 KiB  
Article
Impact of Chatbots on User Experience and Data Quality on Citizen Science Platforms
by Akasha-Leonie Kessel, Soror Sahri, Sven Groppe, Jinghua Groppe, Hanieh Khorashadizadeh, Marc Pignal, Eva Perez Pimparé and Régine Vignes-Lebbe
Computers 2025, 14(1), 21; https://doi.org/10.3390/computers14010021 - 10 Jan 2025
Abstract
Citizen science (CS) projects, which engage the general public in scientific research, often face challenges in ensuring high-quality data collection and maintaining user engagement. Recent advancements in Large Language Models (LLMs) present a promising solution by providing automated, real-time assistance to users, reducing the need for extensive human intervention and offering instant support. The CS project Les Herbonautes, dedicated to the mass digitization of the French National Herbarium, serves as a case study for this paper, which details the development and evaluation of a network of open-source LLM agents that assist users during data collection. The research involved a review of related work, stakeholder meetings with the Muséum National d’Histoire Naturelle, and user and context analyses to formalize system requirements. On this basis, a prototype with a chatbot user interface was designed and implemented using LangGraph, and afterward assessed by experts for its effect on usability and user experience (UX). The findings indicate that such a chatbot can enhance UX and improve data quality by guiding users and providing immediate feedback. However, limitations due to the non-deterministic nature of LLMs exist, suggesting that workflows must be carefully designed to mitigate potential errors and ensure reliable performance. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
18 pages, 1697 KiB  
Article
Reputation-Based Leader Selection Consensus Algorithm with Rewards for Blockchain Technology
by Munir Hussain, Amjad Mehmood, Muhammad Altaf Khan, Rabia Khan and Jaime Lloret
Computers 2025, 14(1), 20; https://doi.org/10.3390/computers14010020 - 8 Jan 2025
Abstract
Blockchain technology is an emerging decentralized and distributed technology that can maintain data security, and it has the potential to transform many sectors completely. The core component of a blockchain network is its consensus algorithm, because the network’s efficiency, security, and scalability depend on it. The consensus problem is a difficult and significant task that must be considered carefully in a blockchain network, with practical applications such as distributed computing, load balancing, and blockchain transaction validation. Although many consensus algorithms have been proposed, the majority of them require substantial computational and communication resources, and they also suffer from high latency and low throughput. In this work, we propose a new consensus algorithm for consortium blockchains that selects a leader using the reputation values of nodes and a voting process to ensure high performance. A security analysis is conducted to demonstrate the security of the proposed algorithm; the outcomes show that it provides a strong defense against abnormal behavior by network nodes. The performance analysis is performed using Hyperledger Fabric v2.1, and the results show that the proposed algorithm outperforms its rivals, the Trust-Varying Algo, FP-BFT, and Scalable and Trust-based algorithms, in terms of throughput, latency, CPU utilization, and communication costs. Full article
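The reputation-plus-voting idea can be sketched in a few lines. The scoring constants and the strict-majority vote rule below are illustrative assumptions, not the paper's full algorithm, which additionally covers rewards and misbehavior handling.

```python
# Minimal sketch of reputation-based leader selection: nodes accumulate a
# reputation score from past behavior, and the leader for the next round is
# the highest-reputation node among those gathering a strict majority of
# votes. Gain/penalty values and the vote rule are illustrative assumptions.

def update_reputation(rep, node, honest, gain=1.0, penalty=5.0):
    """Reward honest participation; punish detected misbehavior harshly."""
    rep[node] = max(0.0, rep.get(node, 0.0) + (gain if honest else -penalty))

def select_leader(rep, votes, n_nodes):
    """Highest-reputation node among those with a strict majority of votes."""
    majority = n_nodes // 2 + 1
    eligible = [n for n, v in votes.items() if v >= majority]
    if not eligible:
        return None
    return max(eligible, key=lambda n: rep.get(n, 0.0))

rep = {}
for _ in range(3):
    update_reputation(rep, "n1", honest=True)
update_reputation(rep, "n2", honest=True)
update_reputation(rep, "n2", honest=False)  # caught misbehaving once

votes = {"n1": 3, "n2": 4}   # both clear the majority in a 5-node network
print(select_leader(rep, votes, n_nodes=5))  # n1: higher reputation wins
```

Because leader election reads precomputed scores rather than solving a puzzle, this style of consensus avoids the heavy computation of proof-of-work while still penalizing misbehaving nodes.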
19 pages, 21558 KiB  
Article
Visualizing Ambiguity: Analyzing Linguistic Ambiguity Resolution in Text-to-Image Models
by Wala Elsharif, Mahmood Alzubaidi, James She and Marco Agus
Computers 2025, 14(1), 19; https://doi.org/10.3390/computers14010019 - 8 Jan 2025
Abstract
Text-to-image models have demonstrated remarkable progress in generating visual content from textual descriptions. However, the presence of linguistic ambiguity in the text prompts poses a potential challenge to these models, possibly leading to undesired or inaccurate outputs. This work conducts a preliminary study and provides insights into how text-to-image diffusion models resolve linguistic ambiguity through a series of experiments. We investigate a set of prompts that exhibit different types of linguistic ambiguities with different models and the images they generate, focusing on how the models’ interpretations of linguistic ambiguity compare to those of humans. In addition, we present a curated dataset of ambiguous prompts and their corresponding images known as the Visual Linguistic Ambiguity Benchmark (V-LAB) dataset. Furthermore, we report a number of limitations and failure modes caused by linguistic ambiguity in text-to-image models and propose prompt engineering guidelines to minimize the impact of ambiguity. The findings of this exploratory study contribute to the ongoing improvement of text-to-image models and provide valuable insights for future advancements in the field. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
16 pages, 1981 KiB  
Article
Optimizing Natural Image Quality Evaluators for Quality Measurement in CT Scan Denoising
by Rudy Gunawan, Yvonne Tran, Jinchuan Zheng, Hung Nguyen and Rifai Chai
Computers 2025, 14(1), 18; https://doi.org/10.3390/computers14010018 - 7 Jan 2025
Abstract
Evaluating the results of image denoising algorithms in Computed Tomography (CT) scans typically involves several key metrics that assess noise reduction while preserving essential details. Full Reference (FR) quality evaluators are popular for evaluating image quality in denoised CT scans, whereas there is limited information about using Blind/No Reference (NR) quality evaluators in the medical imaging area. This paper applies the Natural Image Quality Evaluator (NIQE), commonly used to evaluate photo-like images, to CT scans and provides an extensive assessment of the optimum NIQE settings. The results were obtained using a library of good images, most of which are also part of the Convolutional Neural Network (CNN) training dataset, evaluated against the testing dataset and a new dataset, and show an optimum patch size and contrast levels suitable for the task. This evidence indicates the possibility of using the NIQE as a new option for evaluating denoised image quality, whether to track improvement or to compare quality between CNN models. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
20 pages, 3589 KiB  
Article
Real-Time Physics Simulation Method for XR Application
by Nak-Jun Sung, Jun Ma, Kunthroza Hor, Taeheon Kim, Hongly Va, Yoo-Joo Choi and Min Hong
Computers 2025, 14(1), 17; https://doi.org/10.3390/computers14010017 - 6 Jan 2025
Abstract
Real-time physics simulations are vital for creating immersive and interactive experiences in extended reality (XR) applications. Balancing computational efficiency and simulation accuracy is challenging, especially in environments with multiple deformable objects that require complex interactions. In this study, we introduce a GPU-based parallel processing framework combined with a position-based dynamics (PBD) solver to tackle these challenges. The system is deployed within the Unity engine and enhances real-time performance through the use of sophisticated collision detection and response algorithms. Our method employs an AABB-based bounding volume hierarchy (BVH) structure to efficiently detect collisions, and incorporates the Möller–Trumbore algorithm for precise triangle-level interactions. We also boost computational efficiency by storing collision data in GPU-accessible 2D textures. Experimental assessments show performance improvements of up to 1705% in GPU simulations over CPU counterparts, achieving stable real-time frame rates for complex models such as the Stanford Bunny and Armadillo. Furthermore, utilizing 2D texture storage improves the FPS by up to 117%, confirming its efficacy for XR applications. This study offers a robust, scalable framework for real-time physics simulations, facilitating more natural and immersive XR experiences. Full article
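The triangle-level test named in the abstract, the Möller–Trumbore ray-triangle intersection algorithm, is compact enough to show in full. The plain-Python version below is a CPU sketch of the standard algorithm (the paper runs this kind of test in GPU parallel within its BVH traversal).

```python
# Möller–Trumbore ray-triangle intersection: solve for the ray parameter t and
# barycentric coordinates (u, v) of the hit point via two cross products and
# three dot products, rejecting hits outside the triangle or behind the ray.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def moller_trumbore(orig, direction, v0, v1, v2, eps=1e-9):
    """Return t such that orig + t*direction hits triangle (v0, v1, v2), else None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:      # outside barycentric range
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None

# Ray from z = -1 straight toward a unit triangle lying in the z = 0 plane.
t = moller_trumbore((0.25, 0.25, -1.0), (0.0, 0.0, 1.0),
                    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(t)  # 1.0
```

In the paper's pipeline this per-triangle test only runs on pairs whose AABB bounding volumes already overlap, which is what keeps the collision stage tractable at interactive frame rates.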

21 pages, 1808 KiB  
Article
An Authentication Approach in a Distributed System Through Synergetic Computing
by Jia-Jen Wang, Yaw-Chung Chen and Meng-Chang Chen
Computers 2025, 14(1), 16; https://doi.org/10.3390/computers14010016 - 6 Jan 2025
Viewed by 426
Abstract
A synergetic computing mechanism is proposed to authenticate the validity of event data in merchandise exchange applications. The events are handled by the proposed synergetic computing system, which is composed of edge devices; an Asteroid_Node_on_Duty (ANOD) acts as a supernode that takes on the duty of coordination. Computation performed by nodes in the local area reduces the round-trip propagation delay to distant data centers. Events with different risk levels are processed in parallel through different flows using the Chief chain (CC) and Telstar chain (TC) methods: low-risk events are computed in edge nodes to form the TC, which is periodically integrated into the CC containing the data of high-risk events. New authentication methods are proposed in which the difficulty of authentication tasks is adjusted for different scenarios, so that lower difficulty on low-risk tasks can accelerate validation. Authentication by a certain number of nodes is required so that the system can ensure data consistency, and participants in the system may need to register as members. The transaction processing speed on low-risk events may reach 25,000 TPS under the assumption of certain member classes, given that the ANOD, the Asteroid_Node_of_Backup (ANB), the Edge Cloud, and the Core Cloud all function normally. Full article

57 pages, 2877 KiB  
Review
A Comprehensive Exploration of 6G Wireless Communication Technologies
by Md Nurul Absar Siddiky, Muhammad Enayetur Rahman, Md Shahriar Uzzal and H. M. Dipu Kabir
Computers 2025, 14(1), 15; https://doi.org/10.3390/computers14010015 - 3 Jan 2025
Viewed by 925
Abstract
As the telecommunications landscape braces for the post-5G era, this paper embarks on delineating the foundational pillars and pioneering visions that define the trajectory toward 6G wireless communication systems. Recognizing the insatiable demand for higher data rates, enhanced connectivity, and broader network coverage, we unravel the evolution from the existing 5G infrastructure to the nascent 6G framework, setting the stage for transformative advancements anticipated in the 2030s. Our discourse navigates through the intricate architecture of 6G, highlighting the paradigm shifts toward superconvergence, non-IP-based networking protocols, and information-centric networks, all underpinned by a robust 360-degree cybersecurity and privacy-by-engineering design. Delving into the core of 6G, we articulate a systematic exploration of the key technologies earmarked to revolutionize wireless communication including terahertz (THz) waves, optical wireless technology, and dynamic spectrum management while elucidating the intricate trade-offs necessitated by the integration of such innovations. This paper not only lays out a comprehensive 6G vision accentuated by high security, affordability, and intelligence but also charts the course for addressing the pivotal challenges of spectrum efficiency, energy consumption, and the seamless integration of emerging technologies. In this study, our goal is to enrich the existing discussions and research efforts by providing comprehensive insights into the development of 6G technology, ultimately supporting the creation of a thoroughly connected future world that meets evolving demands. Full article

12 pages, 3640 KiB  
Article
Design of Morlet Wavelet Neural Networks for Solving the Nonlinear Van der Pol–Mathieu–Duffing Oscillator Model
by Ali Hasan Ali, Muhammad Amir, Jamshaid Ul Rahman, Ali Raza and Ghassan Ezzulddin Arif
Computers 2025, 14(1), 14; https://doi.org/10.3390/computers14010014 - 3 Jan 2025
Viewed by 397
Abstract
The motivation behind this study is to simplify the complex mathematical formulations and reduce the time-consuming processes involved in traditional numerical methods for solving differential equations. This study develops a computational intelligence approach with a Morlet wavelet neural network (MWNN) to solve the nonlinear Van der Pol–Mathieu–Duffing oscillator (Vd-PM-DO), including parametric excitation and dusty plasma studies. The proposed technique utilizes artificial neural networks to model the equations and optimizes the error functions using global search with a genetic algorithm (GA) and fast local convergence with an interior-point algorithm (IPA). We develop an MWNN-based fitness function to predict the dynamic behavior of the nonlinear Vd-PM-DO differential equations. We then apply a novel hybrid approach combining WCA and ABC to optimize this fitness function and to determine the optimal weights and biases for the MWNN. Three different variants of the Vd-PM-DO model were numerically evaluated and compared with the reference solution to demonstrate the correctness of the designed technique. Moreover, statistical analyses using twenty trials were conducted to determine the reliability and accuracy of the suggested MWNN-GA-IPA by utilizing the mean absolute deviation (MAD), Theil's inequality coefficient (TIC), and mean square error (MSE). Full article
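The real-valued Morlet wavelet commonly used as the hidden-layer activation in such networks is ψ(x) = cos(5x)·e^(−x²/2). A minimal forward pass of a one-hidden-layer Morlet wavelet network might look like the sketch below; the weights, centers, and node count are illustrative, not the paper's GA/IPA-optimized values.

```python
import numpy as np

def morlet(x):
    """Real Morlet wavelet: psi(x) = cos(5x) * exp(-x^2 / 2)."""
    return np.cos(5.0 * x) * np.exp(-x**2 / 2.0)

def mwnn_forward(x, weights, centers, scales):
    """One-hidden-layer Morlet wavelet network: y = sum_i w_i * psi((x - c_i) / s_i)."""
    return sum(w * morlet((x - c) / s) for w, c, s in zip(weights, centers, scales))

# Illustrative 3-node network evaluated on a grid of trial points.
xs = np.linspace(0.0, 1.0, 5)
ys = mwnn_forward(xs, weights=[0.5, -0.3, 0.2],
                  centers=[0.0, 0.5, 1.0], scales=[1.0, 1.0, 1.0])
```

Training then amounts to choosing the weights, centers, and scales so that this output satisfies the oscillator equation at the trial points, which is what the fitness function in the abstract encodes.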

20 pages, 3207 KiB  
Article
Computer-Aided Efficient Routing and Reliable Protocol Optimization for Autonomous Vehicle Communication Networks
by Alaa Kamal Yousif Dafhalla, Mohamed Elshaikh Elobaid, Amira Elsir Tayfour Ahmed, Ameni Filali, Nada Mohamed Osman SidAhmed, Tahani A. Attia, Badria Abaker Ibrahim Mohajir, Jawaher Suliman Altamimi and Tijjani Adam
Computers 2025, 14(1), 13; https://doi.org/10.3390/computers14010013 - 3 Jan 2025
Viewed by 617
Abstract
The rise of autonomous vehicles necessitates advanced communication networks for effective data exchange. The routing protocols Ad hoc On-Demand Distance Vector (AODV) and Greedy Perimeter Stateless Routing (GPSR) are vital in mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs). However, their performance is affected by changing network conditions. This study examines key routing parameters, namely MaxJitter, the Hello/Beacon Interval, and route validity time, and their impact on AODV and GPSR performance in urban and highway scenarios. The simulation results reveal that increasing MaxJitter enhances AODV throughput by 12% in cities but decreases it by 8% on highways, while GPSR throughput declines by 15% in cities and 10% on highways. Longer Hello intervals improve AODV performance by 10% in urban settings but reduce it by 6% on highways. Extending route validity time increases GPSR's Packet Delivery Ratio (PDR) by 10% in cities, underscoring the need to optimize routing parameters for enhanced VANET performance. Full article

21 pages, 342 KiB  
Article
Soft-Label Supervised Meta-Model with Adversarial Samples for Uncertainty Quantification
by Kyle Lucke, Aleksandar Vakanski and Min Xian
Computers 2025, 14(1), 12; https://doi.org/10.3390/computers14010012 - 2 Jan 2025
Viewed by 419
Abstract
Despite the recent success of deep-learning models, traditional models are overconfident and poorly calibrated. This poses a serious problem when applied to high-stakes applications. To solve this issue, uncertainty quantification (UQ) models have been developed to allow the detection of misclassifications. Meta-model-based UQ methods are promising because they require no re-training of the predictive model and have low resource requirements. However, several issues remain in the training process. (1) Most current meta-models are trained using hard labels that do not allow quantification of the uncertainty associated with a given data sample; and (2) in most cases, the base model has a high test accuracy, so the samples used to train the meta-model consist primarily of correctly classified samples, which leads the meta-model to learn a poor approximation of the true decision boundary. To address these problems, we propose a novel soft-label formulation that better differentiates between correct and incorrect classifications, allowing the meta-model to identify incorrect classifications made with high uncertainty (i.e., low confidence). In addition, a novel training framework using adversarial samples is proposed to explore the decision boundary of the base model and mitigate issues related to training datasets with label imbalance. To validate the effectiveness of our approach, we use two predictive models trained on SVHN and CIFAR10 and evaluate performance according to sensitivity, specificity, an F1-score-style metric, average precision, and the area under the receiver operating characteristic curve. We find that the soft-label approach significantly increases the model's sensitivity and specificity, while training with adversarial samples noticeably improves the balance between sensitivity and specificity. We also compare our method against four state-of-the-art meta-model-based UQ methods and achieve significantly better performance than most of them. Full article
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
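The paper's exact soft-label formulation is not reproduced in the abstract; as a hedged illustration of the underlying idea, a hard correct/incorrect meta-label can be replaced by the base model's predicted probability of the true class, so confidence information survives into the meta-model's targets. The blending rule below is an assumption for illustration only.

```python
import numpy as np

def soft_meta_labels(probs, y_true):
    """Soft label for the meta-model: the base model's predicted probability of the
    ground-truth class. Confident correct predictions -> near 1; confident
    misclassifications -> near 0; uncertain cases land in between."""
    return probs[np.arange(len(y_true)), y_true]

# Illustrative base-model softmax outputs for three samples (3 classes, true class 0).
probs = np.array([[0.9, 0.05, 0.05],   # confident and correct
                  [0.2, 0.7, 0.1],     # confident but wrong
                  [0.4, 0.35, 0.25]])  # uncertain
labels = soft_meta_labels(probs, np.array([0, 0, 0]))
```

A hard-label scheme would map these three samples to 1, 0, 0; the soft targets 0.9, 0.2, 0.4 preserve how uncertain each prediction actually was.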

13 pages, 1227 KiB  
Article
Leveraging Scene Geometry and Depth Information for Robust Image Deraining
by Ningning Xu and Jidong J. Yang
Computers 2025, 14(1), 11; https://doi.org/10.3390/computers14010011 - 2 Jan 2025
Viewed by 463
Abstract
Image deraining holds great potential for enhancing the vision of autonomous vehicles in rainy conditions, contributing to safer driving. Previous works have primarily focused on employing a single network architecture to generate derained images. However, they often fail to fully exploit the rich prior knowledge embedded in the scenes. Particularly, most methods overlook the depth information that can provide valuable context about scene geometry and guide more robust deraining. In this work, we introduce a novel learning framework that integrates multiple networks: an AutoEncoder for deraining, an auxiliary network to incorporate depth information, and two supervision networks to enforce feature consistency between rainy and clear scenes. This multi-network design enables our model to effectively capture the underlying scene structure, producing clearer and more accurately derained images, leading to improved object detection for autonomous vehicles. Extensive experiments on three widely used datasets demonstrated the effectiveness of our proposed method. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

27 pages, 10178 KiB  
Article
Trust-Centric and Economically Optimized Resource Management for 6G-Enabled Internet of Things Environment
by Osama Z. Aletri, Kamran Ahmad Awan and Abdullah M. Alqahtani
Computers 2025, 14(1), 10; https://doi.org/10.3390/computers14010010 - 31 Dec 2024
Viewed by 534
Abstract
The continuous evolution of IoT networks has introduced significant optimization challenges, particularly in resource management, energy efficiency, and performance enhancement. Most state-of-the-art solutions lack adequate adaptability and runtime cost-efficiency in dynamic 6G-enabled IoT environments. Accordingly, this paper proposes the Trust-centric Economically Optimized 6G-IoT (TEO-IoT) framework, which incorporates an adaptive trust management system based on historical behavior, data integrity, and compliance with security protocols. Additionally, dynamic pricing models, incentive mechanisms, and adaptive routing protocols are integrated into the framework to optimize resource usage in diverse IoT scenarios. TEO-IoT presents an end-to-end solution for security management and network traffic optimization, utilizing advanced algorithms for trust score estimation and anomaly detection. The proposed solution is emulated using the NS-3 network simulator across three datasets: Edge-IIoTset, N-BaIoT, and IoT-23. Results demonstrate that TEO-IoT achieves a resource usage of 92.5% on Edge-IIoTset and reduces power consumption by 15.2% on IoT-23, outperforming state-of-the-art models like IDSOFT and RAT6G. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
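TEO-IoT's trust score estimation details are not given in the abstract. As a purely illustrative sketch, a score built from the three factors it names (historical behavior, data integrity, protocol compliance) could be a weighted blend with an exponential decay over the behavior history; the weights and decay rate below are assumptions, not values from the paper.

```python
def trust_score(behavior_history, integrity, compliance,
                w=(0.5, 0.3, 0.2), decay=0.8):
    """Blend historical behavior (most recent first, exponentially decayed),
    data integrity, and protocol compliance into a single [0, 1] trust score.
    Weights w and decay are hypothetical tuning parameters."""
    weights = [decay**i for i in range(len(behavior_history))]
    behavior = sum(b * g for b, g in zip(behavior_history, weights)) / sum(weights)
    return w[0] * behavior + w[1] * integrity + w[2] * compliance

# A node with a clean recent record, good integrity, and perfect compliance.
score = trust_score([1.0, 1.0, 0.6], integrity=0.9, compliance=1.0)
```

The decay means an old lapse (the 0.6 above) costs less than a recent one, which matches the adaptive, history-driven behavior the framework claims.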

24 pages, 2096 KiB  
Article
Human Activity Recognition Using Graph Structures and Deep Neural Networks
by Abed Al Raoof K. Bsoul
Computers 2025, 14(1), 9; https://doi.org/10.3390/computers14010009 - 30 Dec 2024
Viewed by 501
Abstract
Human activity recognition (HAR) systems are essential in healthcare, surveillance, and sports analytics, enabling automated movement analysis. This research presents a novel HAR system combining graph structures with deep neural networks to capture both spatial and temporal patterns in activities. While CNN-based models excel at spatial feature extraction, they struggle with temporal dynamics, limiting their ability to classify complex actions. To address this, we applied the Firefly Optimization Algorithm to fine-tune the hyperparameters of both the graph-based model and a CNN baseline for comparison. The optimized graph-based system, evaluated on the UCF101 and Kinetics-400 datasets, achieved 88.9% accuracy with balanced precision, recall, and F1-scores, outperforming the baseline. It demonstrated robustness across diverse activities, including sports, household routines, and musical performances. This study highlights the potential of graph-based HAR systems for real-world applications, with future work focused on multi-modal data integration and improved handling of occlusions to enhance adaptability and performance. Full article
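The Firefly Optimization Algorithm used here for hyperparameter tuning is a standard metaheuristic: dimmer fireflies move toward brighter (lower-loss) ones with an attractiveness that decays with distance, plus a small random walk. A minimal sketch follows, with a toy quadratic objective standing in for validation loss; the population size, coefficients, and objective are illustrative, not the paper's settings.

```python
import numpy as np

def firefly_minimize(f, n=15, dim=2, iters=50, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly algorithm: each firefly i moves toward every brighter firefly j
    with attractiveness beta0 * exp(-gamma * r^2), plus an alpha-scaled random walk."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=(n, dim))
    for _ in range(iters):
        loss = np.array([f(p) for p in x])
        for i in range(n):
            for j in range(n):
                if loss[j] < loss[i]:  # j is brighter: move i toward j
                    r2 = np.sum((x[i] - x[j])**2)
                    x[i] += beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                            + alpha * rng.uniform(-0.5, 0.5, dim)
        alpha *= 0.97  # cool the random walk over time
    best = min(x, key=f)
    return best, f(best)

# Toy stand-in objective with its optimum at (1, -1).
best, best_loss = firefly_minimize(lambda p: (p[0] - 1)**2 + (p[1] + 1)**2)
```

In the paper's setting, each "position" would encode a hyperparameter vector and f would be a model's validation loss; the brightest firefly never moves within an iteration, so the population's best loss never worsens.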

19 pages, 532 KiB  
Article
The Relevance of Cognitive and Affective Factors to Explain the Acceptance of Blockchain Use: The Case of Loyalty Programmes
by Mar Souto-Romero, Mario Arias-Oliva, Jorge de Andrés-Sánchez and Miguel Llorens-Marín
Computers 2025, 14(1), 8; https://doi.org/10.3390/computers14010008 - 28 Dec 2024
Viewed by 742
Abstract
Blockchain technology has been highlighted as one of the most promising technologies to emerge in the 21st century. However, the expansion of blockchain applications is progressing much more slowly than initially expected, despite its promising properties. These considerations motivate this study, which evaluates the drivers that facilitate the adoption of this technology through blockchain-based loyalty programs (BBLPs). The analysis builds on the cognitive–affective–normative model. Thus, we propose to explain the behavioural intention to use BBLPs (BEHAV) with two cognitive variables, namely perceived usefulness (USEFUL) and perceived ease of use (EASE); two affective variables, namely positive emotions (PEMO) and negative emotions (NEMO); and a normative factor, namely the subjective norm (SNORM). A partial least squares-structural equation modelling analysis suggests that, to explain the expected response of BEHAV, only the positive relationships of the cognitive constructs with the response variable are significant. The results of the quantile regression suggest that the cognitive constructs, especially USEFUL, have a consistently significant positive influence across the entire range of the response variable. The affective variables are significant in explaining the lower quantiles of BEHAV but not the full response range. NEMO has a consistently significant negative influence on BEHAV in the percentiles at or below the median response. PEMO has a significantly positive influence on some of the BEHAV percentiles below the median, although this impact is not consistent across these lower quantiles. The normative variable appears to have a residual influence on BEHAV, which, when significant (at the 90th quantile), is, contrary to expectations, negative. The results highlight that, while cognitive variables are essential in the acceptance of BBLPs, emotions, particularly negative ones, play an especially significant role among potential users whose level of acceptance falls below the central trend. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
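The quantile-regression results above rest on the pinball (check) loss, which penalizes under- and over-prediction asymmetrically so that minimizing it targets a chosen quantile rather than the mean. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Check loss for quantile tau: under-prediction (y > y_hat) costs tau per unit,
    over-prediction costs (1 - tau) per unit."""
    e = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

# For the median (tau = 0.5) the loss is half the mean absolute error.
loss = pinball_loss([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], tau=0.5)
```

Fitting the same predictors at several values of tau is what lets the study report separate effects for the lower quantiles, the median, and the 90th quantile of BEHAV.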

27 pages, 2436 KiB  
Article
Seeing the Sound: Multilingual Lip Sync for Real-Time Face-to-Face Translation
by Amirkia Rafiei Oskooei, Mehmet S. Aktaş and Mustafa Keleş
Computers 2025, 14(1), 7; https://doi.org/10.3390/computers14010007 - 28 Dec 2024
Viewed by 815
Abstract
Imagine a future where language is no longer a barrier to real-time conversations, enabling instant and lifelike communication across the globe. As cultural boundaries blur, the demand for seamless multilingual communication has become a critical technological challenge. This paper addresses the lack of robust solutions for real-time face-to-face translation, particularly for low-resource languages, by introducing a comprehensive framework that not only translates language but also replicates voice nuances and synchronized facial expressions. Our research tackles the primary challenge of achieving accurate lip synchronization across culturally diverse languages, filling a significant gap in the literature by evaluating the generalizability of lip sync models beyond English. Specifically, we develop a novel evaluation framework combining quantitative lip sync error metrics and qualitative assessments by human observers. This framework is applied to assess two state-of-the-art lip sync models with different architectures for the Turkish, Persian, and Arabic languages, using a newly collected dataset. Based on these findings, we propose and implement a modular system that integrates language-agnostic lip sync models with neural networks to deliver a fully functional face-to-face translation experience. Inference time analysis shows that this system produces highly realistic, face-translated talking heads in real time, with a processing time as low as 0.381 s. This transformative framework is primed for deployment in immersive environments such as VR/AR, Metaverse ecosystems, and advanced video conferencing platforms. It offers substantial benefits to developers and businesses aiming to build next-generation multilingual communication systems for diverse applications. While this work focuses on three languages, its modular design allows scalability to additional languages. However, further testing in broader linguistic and cultural contexts is required to confirm its universal applicability, paving the way for a more interconnected and inclusive world where language ceases to hinder human connection. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))

18 pages, 1484 KiB  
Article
Noise-Based Active Defense Strategy for Mitigating Eavesdropping Threats in Internet of Things Environments
by Abdallah Farraj and Eman Hammad
Computers 2025, 14(1), 6; https://doi.org/10.3390/computers14010006 - 27 Dec 2024
Viewed by 542
Abstract
Establishing robust cybersecurity for Internet of Things (IoT) ecosystems poses significant challenges for system operators due to IoT resource constraints, trade-offs between security and performance, the diversity of applications and their security requirements, usability, and scalability. This article introduces a physical-layer security (PLS) approach that enables IoT devices to maintain specified levels of information confidentiality against wireless channel eavesdropping threats. This work proposes applying PLS active defense mechanisms that utilize spectrum-sharing schemes combined with fair scheduling and power management algorithms to mitigate the risk of eavesdropping attacks in resource-constrained IoT environments. Specifically, an IoT device communicating over an insecure wireless channel transmits intentional noise signals alongside the actual IoT information signal. The intentional noise signal appears to an eavesdropper (EVE) as additional noise, reducing the EVE's signal-to-interference-plus-noise ratio (SINR) and increasing the EVE's outage probability, thereby restricting its capacity to decode the transmitted IoT information and better protecting the confidentiality of the IoT device's transmission. The proposed communication strategy serves as a complementary solution to existing security methods. Analytical and numerical analyses presented in this article validate the effectiveness of the proposed strategy, demonstrating that IoT devices can achieve the desired levels of confidentiality. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
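The mechanism can be illustrated numerically: at the eavesdropper, the intentional noise adds to the interference term of the SINR, shrinking the Shannon rate and so raising the outage probability for any fixed rate target. The powers and gains below are illustrative values, not figures from the paper, and in such schemes the legitimate receiver is assumed able to cancel or schedule around the known noise signal.

```python
import math

def sinr(p_signal, p_noise_art, gain, noise_floor):
    """SINR at a receiver that cannot cancel the artificial-noise component."""
    return (p_signal * gain) / (p_noise_art * gain + noise_floor)

def rate(s):
    """Shannon rate in bit/s/Hz; an outage occurs when this falls below the target."""
    return math.log2(1.0 + s)

# Eavesdropper's achievable rate with and without the artificial-noise transmission.
clean  = rate(sinr(1.0, 0.0, gain=0.5, noise_floor=0.01))
jammed = rate(sinr(1.0, 0.5, gain=0.5, noise_floor=0.01))
```

With these illustrative numbers the eavesdropper's SINR drops from 50 to about 1.9, cutting its decodable rate severalfold while the legitimate link, which knows the noise, is unaffected.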

12 pages, 2105 KiB  
Article
An Automated Marker-Less Registration Approach Using Neural Radiance Fields for Potential Use in Mixed Reality-Based Computer-Aided Surgical Navigation of Paranasal Sinus
by Suhyeon Kim, Hyeonji Kim and Younhyun Jung
Computers 2025, 14(1), 5; https://doi.org/10.3390/computers14010005 - 27 Dec 2024
Viewed by 388
Abstract
Paranasal sinus surgery, a common treatment for chronic rhinosinusitis, requires exceptional precision due to the proximity of critical anatomical structures. To ensure accurate instrument control and clear visualization of the surgical site, surgeons utilize computer-aided surgical navigation (CSN). A key component of CSN is the registration process, which is traditionally reliant on manual or marker-based techniques. However, there is a growing shift toward marker-less registration methods. In previous work, we investigated a mesh-based registration approach using a Mixed Reality Head-Mounted Display (MR-HMD), specifically the Microsoft HoloLens 2. However, this method faced limitations, including depth holes and invalid values. These issues stemmed from the device’s low-resolution camera specifications and the 3D projection steps required to upscale RGB camera spaces. In this study, we propose a novel automated marker-less registration method leveraging Neural Radiance Field (NeRF) technology with an MR-HMD. To address insufficient depth information in the previous approach, we utilize rendered-depth images generated by the trained NeRF model. We evaluated our method against two other techniques, including prior mesh-based registration, using a facial phantom and three participants. The results demonstrate our proposed method achieves at least a 0.873 mm (12%) improvement in registration accuracy compared to others. Full article
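Whatever the source of the depth data (mesh-based, or NeRF-rendered depth as proposed here), marker-less registration ultimately reduces to rigidly aligning two point sets. The standard SVD-based Kabsch step for that alignment is sketched below as background; it is not the authors' full pipeline, and the points used are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation about z plus a translation from four sample points.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = kabsch(P, Q)
```

In practice the correspondences between the HMD-side and CT-side point sets are unknown, so a step like this sits inside an iterative scheme (e.g., ICP) rather than being applied once.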

21 pages, 542 KiB  
Article
WGAN-DL-IDS: An Efficient Framework for Intrusion Detection System Using WGAN, Random Forest, and Deep Learning Approaches
by Shehla Gul, Sobia Arshad, Sanay Muhammad Umar Saeed, Adeel Akram and Muhammad Awais Azam
Computers 2025, 14(1), 4; https://doi.org/10.3390/computers14010004 - 27 Dec 2024
Viewed by 551
Abstract
The rise in cyber security issues has caused significant harm to the tech world, and thus to society, in recent years. Intrusion detection systems (IDSs) are crucial for detecting and mitigating the increasing risk of cyber attacks. False and disregarded alarms are a common problem for traditional IDSs in high-bandwidth and large-scale network systems. When applying learning techniques to intrusion detection, researchers face challenges mainly due to imbalanced training sets and the high dimensionality of datasets, resulting from the scarcity of attack data and longer training periods, respectively, which reduces efficiency. In this research study, we propose a strategy for dealing with the problems of imbalanced datasets and high dimensionality in IDSs. In our efficient and novel framework, we integrate an oversampling strategy that uses Generative Adversarial Networks (GANs) to overcome the difficulties introduced by imbalanced datasets, and we use the Random Forest (RF) importance algorithm to select a subset of features that best represents the dataset, reducing the dimensionality of the training data. We then use three deep learning techniques, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), to classify the attacks. We implement and evaluate this proposed framework on the CICIDS2017 dataset. Experimental results show that our proposed framework outperforms state-of-the-art approaches, with the CNN model achieving a detection accuracy of 98%. Full article

16 pages, 2199 KiB  
Article
Bioinspired Blockchain Framework for Secure and Scalable Wireless Sensor Network Integration in Fog–Cloud Ecosystems
by Abdul Rehman and Omar Alharbi
Computers 2025, 14(1), 3; https://doi.org/10.3390/computers14010003 - 26 Dec 2024
Viewed by 591
Abstract
WSNs are significant components of modern IoT systems, typically operating in resource-constrained environments integrated with fog and cloud computing to achieve scalability and real-time performance. Integrating these systems brings challenges such as security threats, scalability bottlenecks, and energy constraints. In this work, we propose a bioinspired blockchain framework aimed at addressing these challenges by emulating biological immune adaptation mechanisms, such as self-recovery and swarm intelligence. It integrates lightweight blockchain technology with bioinspired algorithms, including an Artificial Immune System (AIS) for anomaly detection and a Proof of Adaptive Immunity Consensus mechanism for secure, resource-efficient blockchain validation. Experimental evaluations demonstrate the framework's strong performance: up to 95.2% anomaly detection accuracy, an average energy efficiency of 91.2% under normal traffic, and latency as low as 15.2 ms in typical IoT scenarios. Moreover, the framework scales well, handling up to 500 nodes with a latency of only about 6.0 ms. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices 2024)
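AIS anomaly detectors of the kind referenced above are often built on negative selection: randomly generated detectors are kept only if they do not match "self" (normal) traffic, and any sample a surviving detector covers is flagged as anomalous. The sketch below is a generic illustration of that idea, not the paper's algorithm; the feature space, radius, and sample counts are assumptions.

```python
import numpy as np

def train_detectors(self_samples, n_detectors=200, radius=0.15, seed=1):
    """Negative selection: keep random detectors lying far from all self samples."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=(n_detectors, self_samples.shape[1]))
    dist = np.linalg.norm(candidates[:, None, :] - self_samples[None, :, :], axis=2)
    return candidates[dist.min(axis=1) > radius]

def is_anomalous(x, detectors, radius=0.15):
    """A sample is flagged if any surviving detector covers it."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= radius))

# 'Self' = normal traffic clustered near the origin of a 2-D feature space.
rng = np.random.default_rng(0)
normal = rng.uniform(0.0, 0.3, size=(100, 2))
detectors = train_detectors(normal)
```

Because detectors are discarded wherever normal traffic lives, the surviving set tiles only the non-self region, which is what lets the scheme flag previously unseen attack patterns.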

18 pages, 2021 KiB  
Article
New Predictive Models for the Computation of Reinforced Concrete Columns Shear Strength
by Anthos I. Ioannou, David Galbraith, Nikolaos Bakas, George Markou and John Bellos
Computers 2025, 14(1), 2; https://doi.org/10.3390/computers14010002 - 24 Dec 2024
Viewed by 437
Abstract
The assessment methods for estimating the behavior of the complex mechanics of reinforced concrete (RC) structural elements have primarily been based on experimental investigation, followed by the collective evaluation of experimental databases from the available literature. Considerable uncertainty remains in relation to the strength and deformability criteria derived from tests, due to differences in the experimental setups of the individual research studies feeding the databases used to derive predictive models. This research work focuses on structural elements that exhibit pronounced strength degradation with plastic deformation and brittle failure characteristics. The study focuses on evaluating existing models that predict the shear strength of RC columns, which take into account important factors including the structural element's ductility and axial load, as well as the contributions of specific resistance mechanisms such as those of the concrete and the transverse and longitudinal reinforcement. Significantly improved predictive models are proposed herein through the implementation of machine learning (ML) algorithms on refined datasets. Three ML models, LREGR, POLYREG-HYT, and XGBoost-HYT-CV, were used to develop different predictive models able to compute the shear strength of RC columns. According to the numerical findings, the POLYREG-HYT- and XGBoost-HYT-CV-derived models outperformed the other ML models in predicting the shear strength of rectangular RC columns, with a correlation coefficient R greater than 99% and minimal errors. The newly proposed predictive model was also found to achieve a two-fold improvement in the correlation coefficient compared to the best available equation in the international literature. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
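The polynomial-regression approach described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the input features (axial load ratio, transverse reinforcement ratio, displacement ductility), the synthetic data-generating relation, and the degree-2 feature expansion are all hypothetical assumptions chosen only to show how a POLYREG-style model and its correlation coefficient R could be computed.

```python
# Hedged sketch: polynomial regression for RC column shear strength.
# Features and the synthetic "true" relation below are illustrative
# assumptions, not the paper's refined dataset.
import numpy as np

rng = np.random.default_rng(0)
m = 200
n_ratio = rng.uniform(0.0, 0.6, m)       # hypothetical axial load ratio
rho_w = rng.uniform(0.001, 0.02, m)      # hypothetical transverse reinf. ratio
mu = rng.uniform(1.0, 8.0, m)            # hypothetical displacement ductility

# Synthetic shear strength: concrete + steel contributions, degraded
# with ductility (purely illustrative functional form, plus noise).
V = 150 * (1 + n_ratio) * (1 - 0.05 * mu) + 8000 * rho_w + rng.normal(0, 5, m)

# Degree-2 polynomial feature expansion: bias, linear, squares, cross terms.
X = np.column_stack([
    np.ones(m), n_ratio, rho_w, mu,
    n_ratio**2, rho_w**2, mu**2,
    n_ratio * rho_w, n_ratio * mu, rho_w * mu,
])
coef, *_ = np.linalg.lstsq(X, V, rcond=None)
V_pred = X @ coef

# Correlation coefficient R between "measured" and predicted strengths.
R = np.corrcoef(V, V_pred)[0, 1]
print(f"R = {R:.3f}")
```

On data generated by a smooth relation like this, the fitted polynomial recovers the trend almost exactly; the reported R > 99% in the paper is over real experimental databases, where such agreement is far harder to achieve.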
33 pages, 3827 KiB  
Review
Distinguishing Reality from AI: Approaches for Detecting Synthetic Content
by David Ghiurău and Daniela Elena Popescu
Computers 2025, 14(1), 1; https://doi.org/10.3390/computers14010001 - 24 Dec 2024
Viewed by 1226
Abstract
The advancement of artificial intelligence (AI) technologies, including generative pre-trained transformers (GPTs) and generative models for text, image, audio, and video creation, has revolutionized content generation, creating unprecedented opportunities and critical challenges. This paper systematically examines the characteristics, methodologies, and challenges associated with detecting synthetic content across multiple modalities in order to safeguard digital authenticity and integrity. Key detection approaches reviewed include stylometric analysis, watermarking, pixel prediction techniques, dual-stream networks, machine learning models, blockchain, and hybrid approaches, highlighting their strengths, limitations, and detection accuracy: roughly 80% for stylometric analysis alone and up to 92% for hybrid approaches combining multiple modalities. The effectiveness of these techniques is explored in diverse contexts, from identifying deepfakes and synthetic media to detecting AI-generated scientific texts. Ethical concerns, such as privacy violations, algorithmic bias, false positives, and overreliance on automated systems, are also critically discussed. Furthermore, the paper addresses legal and regulatory frameworks, including intellectual property challenges and emerging legislation, emphasizing the need for robust governance to mitigate misuse. Real-world examples of detection systems are analyzed to provide practical insights into implementation challenges. Future directions include developing generalizable and adaptive detection models and hybrid approaches, fostering collaboration between stakeholders, and integrating ethical safeguards. By presenting a comprehensive overview of AI-generated content (AIGC) detection, this paper aims to inform stakeholders, researchers, policymakers, and practitioners on addressing the dual-edged implications of AI-driven content creation. Full article
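Stylometric analysis, the first detection approach listed in the abstract, works by extracting statistical features of writing style. The sketch below is a minimal, hypothetical illustration: the three features (type-token ratio, mean sentence length, punctuation rate) are common stylometric measures chosen for brevity, not the feature set of any detector reviewed in the paper, and real systems feed many such features into a trained classifier rather than inspecting them directly.

```python
# Hedged sketch: toy stylometric feature extraction for synthetic-text
# detection. Features are illustrative; reviewed detectors use richer
# feature sets plus trained classifiers.
import re

def stylometric_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Lexical variety: low values can indicate repetitive phrasing.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Average sentence length in words.
        "mean_sentence_len": len(words) / max(len(sentences), 1),
        # Frequency of "structural" punctuation per character.
        "punct_rate": sum(c in ",;:" for c in text) / max(len(text), 1),
    }

sample = ("The model generates fluent text. The model generates fluent "
          "text with repetitive phrasing. Repetition lowers lexical variety.")
feats = stylometric_features(sample)
print(feats)
```

A downstream classifier would be trained on vectors like these computed over corpora of human-written and AI-generated text; the ~80% accuracy cited for stylometric analysis reflects that such surface features alone are informative but not decisive.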