Information, Volume 15, Issue 5 (May 2024) – 48 articles

Cover Story (view full-size image): The Dynamic Distributed Constraint Optimisation Problem (D-DCOP) framework has enabled the development of proactive decision-making methods for multi-agent systems. Existing approaches, like Proactive Dynamic DCOP algorithms, rely on known environment models. This paper tackles proactive behaviour in D-DCOPs with unknown dynamics, proposing that agents learn autoregressive models from shared temporal experiences to predict future states. The proposed method employs a temporal experience-sharing message-passing algorithm, utilising dynamic agent connections and a distance metric for data collation. Experimental results using the RoboCup Rescue Simulation show that, when combined with decision-switching costs, the proposed method achieves lower total building damage than baseline methods. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 21213 KiB  
Article
A Lightweight Face Detector via Bi-Stream Convolutional Neural Network and Vision Transformer
by Zekun Zhang, Qingqing Chao, Shijie Wang and Teng Yu
Information 2024, 15(5), 290; https://doi.org/10.3390/info15050290 - 20 May 2024
Abstract
Lightweight convolutional neural networks are widely used for face detection due to their ability to learn local representations through spatial inductive bias and translational invariance. However, convolutional face detectors have limitations in detecting faces under challenging conditions like occlusion, blurring, or changes in facial poses, primarily attributed to fixed-size receptive fields and a lack of global modeling. Transformer-based models have advantages in learning global representations but are less effective at capturing local patterns. To address these limitations, we propose an efficient face detector that combines convolutional neural network and transformer architectures. We introduce a bi-stream structure that integrates convolutional neural network and transformer blocks within the backbone network, enabling the preservation of local pattern features and the extraction of global context. To further preserve the local details captured by convolutional neural networks, we propose a feature enhancement convolution block in a hierarchical backbone structure. Additionally, we devise a multiscale feature aggregation module to enhance obscured and blurred facial features. Experimental results demonstrate that our method achieves improved lightweight face detection accuracy, with an average precision of 95.30%, 94.20%, and 87.56% on the easy, medium, and hard subsets of WIDER FACE, respectively. Therefore, we believe our method will be a useful supplement to the collection of current artificial intelligence models and benefit the engineering applications of face detection. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
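The bi-stream idea can be pictured with a short sketch. The PyTorch module below is our own illustrative construction, not the authors' code: a depthwise-convolution branch keeps local detail while a self-attention branch supplies global context, and a 1×1 convolution fuses the two streams.

```python
# Illustrative sketch (not the paper's implementation) of a bi-stream block.
import torch
import torch.nn as nn

class BiStreamBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local stream: depthwise-separable convolution preserves local patterns.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global stream: multi-head self-attention over flattened spatial tokens.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # concatenate, then project

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        global_ = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_], dim=1))

feat = torch.randn(1, 64, 20, 20)
out = BiStreamBlock(64)(feat)  # same shape: (1, 64, 20, 20)
```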

18 pages, 2621 KiB  
Article
NDNOTA: NDN One-Time Authentication
by Manar Aldaoud, Dawood Al-Abri, Firdous Kausar and Medhat Awadalla
Information 2024, 15(5), 289; https://doi.org/10.3390/info15050289 - 20 May 2024
Abstract
Named Data Networking (NDN) stands out as a prominent architectural framework for the future Internet, aiming to address deficiencies present in IP networks, specifically in the domain of security. Although NDN packets containing requested content are signed with the publisher's signature, which establishes data provenance for content, the NDN domain still requires more holistic frameworks that address consumers' identity verification while accessing protected contents or services using producer/publisher-preapproved authentication servers. In response, this paper introduces the NDN One-Time Authentication (NDNOTA) framework, designed to authenticate NDN online services, applications, and data in real time. NDNOTA comprises three fundamental elements: the consumer, producer, and authentication server. Employing a variety of security measures such as single sign-on (SSO), token credentials, certified asymmetric keys, and signed NDN packets, NDNOTA aims to reinforce the security of NDN-based interactions. To assess the effectiveness of the proposed framework, we validate and evaluate its impact on the three core elements in terms of time performance. For example, when accessing authenticated content through the entire NDNOTA process, consumers experience an additional time overhead of 70 milliseconds, making the total process take 83 milliseconds. In contrast, accessing normal content that does not require authentication does not incur this delay. The additional NDNOTA delay is mitigated once the authentication token is generated and stored, resulting in a time frame comparable to unauthenticated content requests. Additionally, obtaining private content through the authentication process requires ten messages, whereas acquiring public data requires only two. Full article
(This article belongs to the Section Information Security and Privacy)
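To make the token-credential idea concrete, here is a deliberately simplified Python analogue of the flow, written as a hedged illustration: NDNOTA itself relies on certified asymmetric keys and signed NDN Interest/Data packets, whereas this sketch uses a shared-secret HMAC, and the helper names (issue_token, verify_token) are ours, not from the paper.

```python
# Hedged illustration only: an SSO-style, expiring token, approximated with HMAC.
import hmac, hashlib, time

SECRET = b"auth-server-demo-key"  # stand-in for the authentication server's key

def issue_token(consumer_id: str, ttl_s: int = 300) -> str:
    """Authentication server issues a time-limited token (SSO-style)."""
    expiry = str(int(time.time()) + ttl_s)
    payload = f"{consumer_id}|{expiry}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_token(token: str) -> bool:
    """Producer verifies the token before serving protected content."""
    payload, _, tag = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    fresh = int(payload.split("|")[1]) > time.time()
    return hmac.compare_digest(tag, expected) and fresh

t = issue_token("consumer-42")
assert verify_token(t)  # once cached, later requests skip the auth round-trip
```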

21 pages, 1356 KiB  
Article
Technoeconomic Analysis for Deployment of Gait-Oriented Wearable Medical Internet-of-Things Platform in Catalonia
by Marc Codina, David Castells-Rufas, Maria-Jesus Torrelles and Jordi Carrabina
Information 2024, 15(5), 288; https://doi.org/10.3390/info15050288 - 18 May 2024
Abstract
The Internet of Medical Things (IoMT) extends the concepts of eHealth and mHealth to patients with continuous monitoring requirements. This research concentrates on wearable devices based on inertial measurement units (IMUs) that enable gait analysis in three health cases that affect a large elderly population: equilibrium evaluation, fall prevention, and surgery recovery. We also analyze two different scenarios for data capture: supervised by clinicians and unsupervised during activities of daily life (ADLs). The continuous monitoring of patients produces large amounts of data that are analyzed in specific IoMT platforms that must be connected to the health system platforms containing the health records of the patients. The aim of this study is to evaluate the factors that impact the cost of deploying such an IoMT solution. We use population data from Catalonia together with an IoMT deployment cost model derived from the current deployment of connected devices for monitoring diabetic patients. Our study reveals the critical dependencies of the proposed IoMT platform: device and cloud costs, the size of the population using these services, and the savings relative to the current model under key parameters such as fall reduction or rehabilitation duration. Future research should investigate the benefit of continuous monitoring in improving the quality of life of patients. Full article
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)

25 pages, 668 KiB  
Article
Principle of Information Increase: An Operational Perspective on Information Gain in the Foundations of Quantum Theory
by Yang Yu and Philip Goyal
Information 2024, 15(5), 287; https://doi.org/10.3390/info15050287 - 17 May 2024
Abstract
A measurement performed on a quantum system is an act of gaining information about its state. However, in the foundations of quantum theory, the concept of information is multiply defined, particularly in the area of quantum reconstruction, and its conceptual foundations remain surprisingly under-explored. In this paper, we investigate the gain of information in quantum measurements from an operational viewpoint in the special case of a two-outcome probabilistic source. We show that the continuous extension of the Shannon entropy naturally admits two distinct measures of information gain, differential information gain and relative information gain, and that these have radically different characteristics. In particular, while differential information gain can increase or decrease as additional data are acquired, relative information gain consistently grows and, moreover, exhibits asymptotic indifference to the data or choice of Bayesian prior. In order to make a principled choice between these measures, we articulate a Principle of Information Increase, which incorporates a proposal due to Summhammer that more data from measurements leads to more knowledge about the system, and also takes into consideration black swan events. This principle favours differential information gain as the more relevant metric and guides the selection of priors for these information measures. Finally, we show that, of the symmetric beta distribution priors, the Jeffreys binomial prior ensures maximal robustness of information gain for the particular data sequence obtained in a run of experiments. Full article
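For concreteness, one plausible formalization of the two measures (our notation; the paper's precise definitions may differ) for a two-outcome source with parameter \(\theta\), prior \(p(\theta)\), and posterior \(p(\theta \mid D_n)\) after data \(D_n\) is:

```latex
\[
  G_{\mathrm{diff}}(D_n) = h\left[p(\theta)\right] - h\left[p(\theta \mid D_n)\right],
  \qquad
  h[p] = -\int_0^1 p(\theta)\,\log p(\theta)\,\mathrm{d}\theta,
\]
\[
  G_{\mathrm{rel}}(D_n) = D_{\mathrm{KL}}\left(p(\theta \mid D_n) \,\middle\|\, p(\theta)\right)
  = \int_0^1 p(\theta \mid D_n)\,\log \frac{p(\theta \mid D_n)}{p(\theta)}\,\mathrm{d}\theta .
\]
```

On this reading, \(G_{\mathrm{diff}}\) can be negative when incoming data broaden the posterior, whereas the relative (Kullback–Leibler) gain is nonnegative by construction, matching the contrast the abstract draws.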

16 pages, 1836 KiB  
Article
Telehealth-Based Information Retrieval and Extraction for Analysis of Clinical Characteristics and Symptom Patterns in Mild COVID-19 Patients
by Edison Jahaj, Parisis Gallos, Melina Tziomaka, Athanasios Kallipolitis, Apostolos Pasias, Christos Panagopoulos, Andreas Menychtas, Ioanna Dimopoulou, Anastasia Kotanidou, Ilias Maglogiannis and Alice Georgia Vassiliou
Information 2024, 15(5), 286; https://doi.org/10.3390/info15050286 - 17 May 2024
Abstract
Clinical characteristics of COVID-19 patients have mostly been described in hospitalised patients, yet most patients are managed in an outpatient setting. The COVID-19 pandemic transformed healthcare delivery models and accelerated the implementation and adoption of telemedicine solutions. We employed a modular remote monitoring system with multi-modal data collection, aggregation, and analytics features to monitor mild COVID-19 patients and report their characteristics and symptoms. At enrolment, the patients were equipped with wearables linked to their accounts, provided the respective in-system consents, and, in parallel, reported their demographics and patient characteristics. The patients monitored their vitals and symptoms daily during a 14-day monitoring period. Vital signs were entered either manually or automatically through wearables. We enrolled 162 patients from February to May 2022. The median age was 51 (42–60) years; 44% were male, 22% had at least one comorbidity, and 73.5% were fully vaccinated. The vitals of the patients were within normal range throughout the monitoring period. Thirteen patients were asymptomatic, while the rest had at least one symptom for a median of 11 (7–16) days. Fatigue was the most common symptom, followed by fever and cough. Loss of taste and smell was the longest-lasting symptom. Age positively correlated with the duration of fatigue, anorexia, and low-grade fever. Comorbidities, the number of administered vaccine doses, the days since the last dose, and the days since the positive test did not seem to affect the number of sick days or symptomatology. The i-COVID platform allowed us to provide remote monitoring and reporting of COVID-19 outpatients. We were able to report their clinical characteristics while simultaneously helping reduce the spread of the virus through hospitals by minimising hospital visits. The monitoring platform also offered advanced knowledge extraction and analytic capabilities to detect health condition deterioration and automatically trigger personalised support workflows. Full article
(This article belongs to the Special Issue Health Data Information Retrieval)

15 pages, 6946 KiB  
Article
MCF-YOLOv5: A Small Target Detection Algorithm Based on Multi-Scale Feature Fusion Improved YOLOv5
by Song Gao, Mingwang Gao and Zhihui Wei
Information 2024, 15(5), 285; https://doi.org/10.3390/info15050285 - 17 May 2024
Abstract
In recent years, many deep learning-based object detection methods have performed well in various applications, especially in large-scale object detection. However, when detecting small targets, previous object detection algorithms cannot achieve good results due to the characteristics of the small targets themselves. To address these issues, we propose the small-object detection model MCF-YOLOv5, which incorporates three improvements over YOLOv5. Firstly, a data augmentation strategy combining Mixup and Mosaic is used to increase the number of small targets in the image and reduce the interference of noise and variations during detection. Secondly, in order to accurately locate the position of small targets and reduce the impact of unimportant information on small targets in the image, a coordinate attention mechanism is introduced into YOLOv5's neck network. Finally, we improve the Feature Pyramid Network (FPN) structure and add a small-object detection layer to enhance the feature extraction ability for small objects and improve their detection accuracy. The experimental results show that, with a small increase in computational complexity, the proposed MCF-YOLOv5 achieves better performance than the baseline on both the VisDrone2021 dataset and the Tsinghua-Tencent 100K dataset. Compared with YOLOv5, MCF-YOLOv5 improves small-object detection AP by 3.3% and 3.6% on the two datasets, respectively. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning)
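As a rough illustration of the augmentation step, the sketch below gives minimal versions of the two techniques the paper combines (our own code, not the authors' implementation; a real detector pipeline must also transform the bounding-box labels, which is omitted here).

```python
# Minimal Mixup and Mosaic sketches (labels/boxes intentionally ignored).
import numpy as np

def mixup(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    lam = np.random.beta(alpha, alpha)          # mixing coefficient
    return (lam * img_a + (1 - lam) * img_b).astype(img_a.dtype)

def mosaic(imgs: list, size: int = 640) -> np.ndarray:
    """Stitch four images into one canvas, increasing small-object density."""
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    half = size // 2
    slots = [(0, 0), (0, half), (half, 0), (half, half)]
    for img, (y, x) in zip(imgs, slots):
        patch = img[:half, :half]               # naive crop into each quadrant
        canvas[y:y + patch.shape[0], x:x + patch.shape[1]] = patch
    return canvas
```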

28 pages, 5912 KiB  
Article
Resonating with the World: Thinking Critically about Brain Criticality in Consciousness and Cognition
by Gerry Leisman and Paul Koch
Information 2024, 15(5), 284; https://doi.org/10.3390/info15050284 - 17 May 2024
Abstract
Aim: Biofields combine many physiological levels, both spatially and temporally. These biofields reflect naturally resonant forms of synaptic energy expressed in growing and spreading waves of brain activity. This study aims to develop a better theoretical understanding of how resonant continuum waves may be reflective of consciousness, cognition, memory, and thought. Background: The metabolic processes that maintain animal cellular and physiological functions are enhanced by physiological coherence. Internal biological-system coordination and sensitivity to particular stimuli and signal frequencies are two aspects of coherent physiology. There exists significant support for the notion that exogenous biologically and non-biologically generated energy entrains human physiological systems. All living things have resonant frequencies that are either comparable or coherent; therefore, eventually, all species will have a shared resonance. An organism's biofield activity and resonance are what support its life and allow it to react to stimuli. Methods: As the naturally resonant forms of synaptic energy grow and spread waves of brain activity, the temporal and spatial frequency of the waves are effectively regulated by a time delay (T) in inter-layer signals in a layered structure that mimics the structure of the mammalian cortex. From ubiquitous noise, two different types of waves can arise as a function of T. One is coherent, and as T rises, so does its resonant spatial frequency. Results: Continued growth eventually causes both the wavelength and the temporal frequency to abruptly increase. As a result, two waves expand simultaneously and randomly interfere in a region of T values. Conclusion: We suggest that because of this extraordinary dualism, which has its roots in the phase relationships of amplified waves, coherent waves are essential for memory retrieval, whereas random waves represent original cognition. Full article

16 pages, 1002 KiB  
Article
Optimizing Energy Efficiency in Opportunistic Networks: A Heuristic Approach to Adaptive Cluster-Based Routing Protocol
by Meisam Sharifi Sani, Saeid Iranmanesh, Hamidreza Salarian, Faisel Tubbal and Raad Raad
Information 2024, 15(5), 283; https://doi.org/10.3390/info15050283 - 16 May 2024
Abstract
Opportunistic Networks (OppNets) are characterized by intermittently connected nodes with fluctuating performance. Their dynamic topology, caused by node movement, activation, and deactivation, often relies on controlled flooding for routing, leading to significant resource consumption and network congestion. To address this challenge, we propose the Adaptive Clustering-based Routing Protocol (ACRP). ACRP uses a common-member-based adaptive dynamic clustering approach to produce optimal clusters, converting the OppNet into a TCP/IP network. The protocol adaptively creates dynamic clusters in order to facilitate routing, converting the network from a disjointed to a connected one. This strategy creates a persistent connection between nodes, resulting in more effective routing and enhanced network performance. It should be noted that ACRP is scalable and applicable to a variety of applications and scenarios, including smart cities, disaster management, military networks, and remote areas with inadequate infrastructure. Simulation findings demonstrate that the ACRP protocol outperforms alternative clustering approaches such as kRop, QoS-OLSR, LBC, and CBVRP. The analysis of the ACRP approach reveals that it can boost packet delivery by 28% and improve average end-to-end delay, throughput, hop count, and reachability metrics by 42%, 45%, 44%, and 80%, respectively. Full article
(This article belongs to the Special Issue Advances in Communication Systems and Networks)

19 pages, 2836 KiB  
Article
Cost-Effective Signcryption for Securing IoT: A Novel Signcryption Algorithm Based on Hyperelliptic Curves
by Junaid Khan, Congxu Zhu, Wajid Ali, Muhammad Asim and Sadique Ahmad
Information 2024, 15(5), 282; https://doi.org/10.3390/info15050282 - 15 May 2024
Abstract
Security and efficiency remain serious concerns for Internet of Things (IoT) environments due to their resource-constrained nature and reliance on wireless communication. Traditional schemes are based on costly mathematical operations, including bilinear pairing, pairing-based scalar multiplication, exponentiation, elliptic curve scalar multiplication, and point multiplication. These operations are compute-intensive and impose high bandwidth overheads, degrading efficiency and making traditional approaches unsuitable for resource-limited IoT devices. Furthermore, the lack of essential security attributes in traditional schemes, such as unforgeability, public verifiability, non-repudiation, forward secrecy, and resistance to denial-of-service attacks, puts data security at high risk. To overcome these challenges, we have introduced a novel signcryption algorithm based on hyperelliptic curve divisor multiplication, which is much faster than the traditional mathematical operations above. Because the proposed methodology is based on a hyperelliptic curve, it offers enhanced security with smaller key sizes, reducing computational complexity by 38.16% and communication complexity by 62.5% and providing a well-balanced solution that uses few resources while meeting the security and efficiency requirements of resource-constrained devices. The proposed strategy also includes formal security validation, which provides confidence for practical implementations. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)

4 pages, 147 KiB  
Editorial
Preface to the Special Issue on Computational Linguistics and Natural Language Processing
by Peter Z. Revesz
Information 2024, 15(5), 281; https://doi.org/10.3390/info15050281 - 15 May 2024
Abstract
Computational linguistics and natural language processing are at the heart of the AI revolution that is currently transforming our lives [...] Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)
34 pages, 10124 KiB  
Article
Fuzzy Integrated Delphi-ISM-MICMAC Hybrid Multi-Criteria Approach to Optimize the Artificial Intelligence (AI) Factors Influencing Cost Management in Civil Engineering
by Hongxia Hu, Shouguo Jiang, Shankha Shubhra Goswami and Yafei Zhao
Information 2024, 15(5), 280; https://doi.org/10.3390/info15050280 - 14 May 2024
Abstract
This research paper presents a comprehensive study on optimizing the critical artificial intelligence (AI) factors influencing cost management in civil engineering projects using a multi-criteria decision-making (MCDM) approach. The problem addressed revolves around the need to effectively manage costs in civil engineering endeavors amidst the growing complexity of projects and the increasing integration of AI technologies. The methodology employed involves the utilization of three MCDM tools, specifically Delphi, interpretive structural modeling (ISM), and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC). A total of 17 AI factors, categorized into eight broad groups, were identified and analyzed. Through the application of different MCDM techniques, the relative importance and interrelationships among these factors were determined. The key findings reveal the critical role of certain AI factors, such as risk mitigation and cost components, in optimizing the cost management processes. Moreover, the hierarchical structure generated through ISM and the influential factors identified via MICMAC provide insights for prioritizing strategic interventions. The implications of this study extend to informing decision-makers in the civil engineering domain about effective strategies for leveraging AI in their cost management practices. By adopting a systematic MCDM approach, stakeholders can enhance project outcomes while optimizing resource allocation and mitigating financial risks. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
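As a side note on mechanics, the MICMAC step of such studies is typically computed from a binary direct-influence matrix. The sketch below is a minimal, generic version of that standard computation (the toy matrix is invented), showing how driving and dependence power fall out of the transitive closure.

```python
# Generic MICMAC sketch: reachability via Boolean powers, then power scores.
import numpy as np

def micmac(direct: np.ndarray):
    n = direct.shape[0]
    reach = ((direct + np.eye(n, dtype=int)) > 0).astype(int)
    for _ in range(n):                        # transitive closure
        reach = ((reach @ reach) > 0).astype(int)
    driving = reach.sum(axis=1)               # row sums: factors it influences
    dependence = reach.sum(axis=0)            # column sums: factors influencing it
    return driving, dependence

# Toy example with 4 factors: factor 0 influences 1, 1 influences 2, etc.
direct = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(micmac(direct))  # driving = [4, 3, 2, 1], dependence = [1, 2, 3, 4]
```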

19 pages, 2056 KiB  
Article
Locally Centralized Execution for Less Redundant Computation in Multi-Agent Cooperation
by Yidong Bai and Toshiharu Sugawara
Information 2024, 15(5), 279; https://doi.org/10.3390/info15050279 - 14 May 2024
Abstract
Decentralized execution is a widely used framework in multi-agent reinforcement learning. However, it has a well-known but neglected shortcoming, redundant computation: the same or similar computations are performed redundantly in different agents owing to their overlapping observations. This study proposes a novel method, the locally centralized team transformer (LCTT), to address this problem. The method first establishes a locally centralized execution framework that autonomously designates some agents as leaders, which generate instructions, and the remaining agents as workers, which act on the received instructions without running their own policy networks. For the LCTT, we subsequently propose the team-transformer (T-Trans) structure, which enables leaders to generate targeted instructions for each worker, and the leadership shift, which enables agents to determine which of them should instruct or be instructed by others. The experimental results demonstrate that the proposed method significantly reduces redundant computation without decreasing rewards and achieves faster learning convergence. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)

11 pages, 1357 KiB  
Article
Impact of Handedness on Driver’s Situation Awareness When Driving under Unfamiliar Traffic Regulations
by Nesreen M. Alharbi and Hasan J. Alyamani
Information 2024, 15(5), 278; https://doi.org/10.3390/info15050278 - 13 May 2024
Abstract
Situation awareness (SA) describes an individual's understanding of their surroundings and their actions in the near future, based on the individual's comprehension of the surrounding inputs. SA measurements can be applied to improve system performance or human effectiveness in many fields of study, including driving. However, in some scenarios drivers might need to drive under unfamiliar traffic regulations (UFTRs), where the traffic rules and vehicle configurations differ somewhat from those the drivers are used to under familiar traffic regulations. Such driving conditions require drivers to adapt their attention, knowledge, and reactions to safely reach the destination. This ability is influenced by the degree of handedness: in such tasks, mixed-/left-handed people show better performance than strongly right-handed people. This paper aims to explore the influence of the degree of handedness on SA when driving under UFTRs. We analyzed the SA of two groups of drivers: strongly right-handed drivers and mixed-/left-handed drivers. Neither group was familiar with driving under keep-left traffic regulations. Using a driving simulator, all participants drove in a simulated keep-left traffic system. The participants' SA was measured using a subjective assessment, the Participant Situation Awareness Questionnaire (PSAQ), and a performance-based assessment. The results indicate that mixed-/left-handed participants had significantly higher SA than strongly right-handed participants when measured by the performance-based assessment. Likewise, in the subjective assessment, mixed-/left-handed participants had significantly higher PSAQ performance scores than strongly right-handed participants. The findings suggest that advanced driver assistance systems (ADAS), which have been shown to improve road safety, should adapt their functionality to the driver's degree of handedness when driving under UFTRs. Full article
(This article belongs to the Section Information Applications)

30 pages, 1115 KiB  
Article
Enhancing E-Learning Adaptability with Automated Learning Style Identification and Sentiment Analysis: A Hybrid Deep Learning Approach for Smart Education
by Tahir Hussain, Lasheng Yu, Muhammad Asim, Afaq Ahmed and Mudasir Ahmad Wani
Information 2024, 15(5), 277; https://doi.org/10.3390/info15050277 - 13 May 2024
Abstract
In smart education, adaptive e-learning systems personalize the educational process by tailoring it to individual learning styles. Traditionally, identifying these styles relies on learners completing surveys and questionnaires, which can be tedious and may not reflect their true preferences. Additionally, this approach assumes that learning styles are fixed, leading to a cold-start problem when automatically identifying styles based on e-learning platform behaviors. To address these challenges, we propose a novel approach that annotates unlabeled student feedback using multi-layer topic modeling and implements the Felder–Silverman Learning Style Model (FSLSM) to identify learning styles automatically. Our method involves learners answering four FSLSM-based questions upon logging into the e-learning platform and providing personal information like age, gender, and cognitive characteristics, which are weighted using fuzzy logic. We then analyze learners’ behaviors and activities using web usage mining techniques, classifying their learning sequences into specific styles with an advanced deep learning model. Additionally, we analyze textual feedback using latent Dirichlet allocation (LDA) for sentiment analysis to enhance the learning experience further. The experimental results demonstrate that our approach outperforms existing models in accurately detecting learning styles and improves the overall quality of personalized content delivery. Full article
(This article belongs to the Special Issue Artificial Intelligence and Games Science in Education)
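To ground the feedback-analysis step, here is a minimal, generic sketch (ours, with toy feedback strings) of topic modeling with latent Dirichlet allocation via scikit-learn; the paper's multi-layer topic modeling and sentiment pipeline is considerably richer.

```python
# Generic LDA topic-modeling sketch over student feedback (toy corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

feedback = [
    "the video lectures were clear and engaging",
    "too much text, I prefer diagrams and examples",
    "quizzes helped me check my understanding",
]
X = CountVectorizer(stop_words="english").fit_transform(feedback)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-document topic mixtures, usable as weak
                               # signals for downstream sentiment analysis
```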

13 pages, 1337 KiB  
Review
A Neoteric Approach toward Social Media in Public Health Informatics: A Narrative Review of Current Trends and Future Directions
by Asma Tahir Awan, Ana Daniela Gonzalez and Manoj Sharma
Information 2024, 15(5), 276; https://doi.org/10.3390/info15050276 - 13 May 2024
Abstract
Social media has become more popular in the last few years. It has been used in public health development and healthcare settings to promote healthier lifestyles. Given its important role in today's culture, it is necessary to understand its current trends and future directions in public health. This review aims to describe and summarize how public health professionals have been using social media to improve population outcomes, and it highlights the substantial influence of social media in advancing public health objectives. The key themes explored encompass the utilization of social media to advance health initiatives, monitor diseases, track behaviors, and interact with communities. Additionally, it discusses potential future directions for using social media to improve population health. The findings show how social media has been used as a tool for research, health campaigns, and health promotion. Integrating social media with artificial intelligence (AI) and Generative Pre-Trained Transformers (GPTs) offers an innovative approach to tackling the problems and difficulties in health informatics. The research shows how social media will keep growing and evolving and, if used effectively, has the potential to help close public health gaps across different cultures and improve population health. Full article
(This article belongs to the Special Issue Recent Advances in Social Media Mining and Analysis)

22 pages, 2639 KiB  
Article
Crowd Counting in Diverse Environments Using a Deep Routing Mechanism Informed by Crowd Density Levels
by Abdullah N Alhawsawi, Sultan Daud Khan and Faizan Ur Rehman
Information 2024, 15(5), 275; https://doi.org/10.3390/info15050275 - 13 May 2024
Abstract
Automated crowd counting is a crucial aspect of surveillance, especially in the context of mass events attended by large populations. Traditional methods of manually counting the people attending an event are error-prone, necessitating the development of automated methods. Accurately estimating crowd counts across diverse scenes is challenging due to high variations in the sizes of human heads. Regression-based crowd-counting methods often overestimate counts in low-density situations, while detection-based models struggle to precisely detect heads in high-density scenarios. In this work, we propose a unified framework that integrates regression and detection models to estimate the crowd count in diverse scenes. Our approach leverages a routing strategy based on crowd density variations within an image. By classifying image patches into density levels and employing a Patch-Routing Module (PRM) for routing, the framework directs patches to either the detection or the regression network to estimate the crowd count. The proposed framework demonstrates superior performance across various datasets, showcasing its effectiveness in handling diverse scenes. By effectively integrating regression and detection models, our approach offers a comprehensive solution for accurate crowd counting in scenarios ranging from low-density to high-density situations. Full article
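The routing strategy reads naturally as a dispatch loop over patches. The sketch below is our own stub-based rendering (density_level, detect_count, and regress_count are hypothetical placeholders for the PRM and the two networks), not the authors' implementation.

```python
# Conceptual sketch of density-aware patch routing for crowd counting.
from typing import Callable
import numpy as np

def count_crowd(image: np.ndarray,
                density_level: Callable[[np.ndarray], str],
                detect_count: Callable[[np.ndarray], float],
                regress_count: Callable[[np.ndarray], float],
                patch: int = 256) -> float:
    """Split the image into patches and sum per-patch counts."""
    total = 0.0
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            p = image[y:y + patch, x:x + patch]
            if density_level(p) == "low":
                total += detect_count(p)   # head detection works in sparse scenes
            else:
                total += regress_count(p)  # density regression for dense scenes
    return total
```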

16 pages, 3162 KiB  
Article
Utilizing Machine Learning for Context-Aware Digital Biomarker of Stress in Older Adults
by Md Saif Hassan Onim, Himanshu Thapliyal and Elizabeth K. Rhodus
Information 2024, 15(5), 274; https://doi.org/10.3390/info15050274 - 12 May 2024
Abstract
Identifying stress in older adults is a crucial field of research in health and well-being, as it allows timely preventive measures that can help save lives. That is why a nonobtrusive way of accurate and precise stress detection is necessary. Researchers have proposed many statistical measurements to associate stress with sensor readings from digital biomarkers. With the recent progress of artificial intelligence in the healthcare domain, the application of machine learning is showing promising results in stress detection, yet the viability of machine learning for digital biomarkers of stress remains under-explored. In this work, we first investigate the performance of a supervised machine learning algorithm (Random Forest) with manual feature engineering for stress detection with contextual information, using the concentration of salivary cortisol as the gold standard. Our framework categorizes stress into No Stress, Low Stress, and High Stress by analyzing digital biomarkers gathered from wearable sensors. We also provide a thorough picture of stress in older adults by combining physiological data obtained from wearable sensors with contextual clues from a stress protocol. Our context-aware machine learning model, using sensor fusion, achieved a macro-average F-1 score of 0.937 and an accuracy of 92.48% in identifying three stress levels. We further extend our work to remove the burden of manual feature engineering, exploring a Convolutional Neural Network (CNN)-based feature encoder with cortisol biomarkers to detect stress using contextual information; we provide an in-depth look at the CNN-based feature encoder, which effectively separates useful features from physiological inputs. Both proposed frameworks, i.e., the Random Forest with engineered features and a fully connected network with CNN-based features, validate that integrating digital biomarkers of stress can provide more insight into the stress response even without any self-reporting or caregiver labels. Our method with sensor fusion shows an accuracy and F-1 score of 83.7797% and 0.7552, respectively, without context, and 96.7525% accuracy and a 0.9745 F-1 score with context, which constitutes a 4% increase in accuracy and a 0.04 increase in F-1 score over the Random Forest model. Full article
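A compact sketch of the first framework, under stated assumptions: the feature set and labels below are toy stand-ins for the wearable-derived features and cortisol-anchored stress labels, not the paper's data.

```python
# Toy Random Forest pipeline for three-level stress classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))     # e.g. HR mean, HRV, EDA peaks, skin temp,
                                  # plus encoded context of the protocol phase
y = rng.integers(0, 3, size=300)  # 0 = No, 1 = Low, 2 = High stress (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te), average="macro"))  # macro-averaged F-1
```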

18 pages, 1456 KiB  
Article
Insights into Cybercrime Detection and Response: A Review of Time Factor
by Hamed Taherdoost
Information 2024, 15(5), 273; https://doi.org/10.3390/info15050273 - 12 May 2024
Abstract
Amidst an unprecedented period of technological progress, incorporating digital platforms into diverse domains of existence has become indispensable, fundamentally altering the operational processes of governments, businesses, and individuals. Nevertheless, the swift process of digitization has concurrently led to the emergence of cybercrime, which takes advantage of weaknesses in interconnected systems. As society grows more dependent on digital communication, commerce, and information sharing, malicious actors have identified the susceptibilities present in these platforms and exploit them for hacking, identity theft, ransomware, and phishing attacks. This study examines 28 research papers focusing on intrusion detection systems (IDS), and phishing detection in particular, and on how quickly detections and responses in cybersecurity can be made. We investigate various approaches and quantitative measurements to comprehend the link between reaction time and detection time and emphasize the necessity of minimizing both for improved cybersecurity. The research focuses on reducing detection and reaction times, especially for phishing attempts. In smart grids and automobile control networks, faster attack detection is important, and machine learning can help. The study also stresses the necessity of improving protocols to address increasing cyber risks while maintaining scalability, interoperability, and resilience. Although machine-learning-based techniques have the potential for detection precision and reaction speed, obstacles still need to be addressed to attain real-time capabilities and to adjust to constantly changing threats. To create effective defensive mechanisms against cyberattacks, future research topics include investigating innovative methodologies, integrating real-time threat intelligence, and encouraging collaboration. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)

29 pages, 1097 KiB  
Article
Control of Qubit Dynamics Using Reinforcement Learning
by Dimitris Koutromanos, Dionisis Stefanatos and Emmanuel Paspalakis
Information 2024, 15(5), 272; https://doi.org/10.3390/info15050272 - 11 May 2024
Abstract
The progress in machine learning during the last decade has had a considerable impact on many areas of science and technology, including quantum technology. This work explores the application of reinforcement learning (RL) methods to the quantum control problem of state transfer in a single qubit. The goal is to create an RL agent that learns an optimal policy and thus discovers optimal pulses to control the qubit. The most crucial step is to mathematically formulate the problem of interest as a Markov decision process (MDP), which enables the use of RL algorithms to solve the quantum control problem. Deep learning and the use of deep neural networks provide the freedom to employ continuous action and state spaces, offering expressivity and generalization. This flexibility allows the quantum state transfer problem to be formulated as an MDP in several different ways. All the developed methodologies are applied to the fundamental problem of population inversion in a qubit. In most cases, the derived optimal pulses achieve fidelity equal to or higher than 0.9999, as required by quantum computing applications. The present methods can be easily extended to quantum systems with more energy levels and may be used for the efficient control of collections of qubits and to counteract the effect of noise, which are important topics for quantum sensing applications. Full article
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
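To illustrate what formulating state transfer as an MDP can look like, here is a minimal environment of our own devising (the state, action, and reward choices are assumptions, not the paper's exact formulation): a resonant Rabi drive steers the qubit from |0⟩ toward |1⟩, and the reward is the target-state population.

```python
# Toy MDP for single-qubit population inversion under a resonant Rabi drive.
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X

class QubitEnv:
    def __init__(self, dt: float = 0.1, horizon: int = 50):
        self.dt, self.horizon = dt, horizon

    def reset(self) -> np.ndarray:
        self.t = 0
        self.psi = np.array([1, 0], dtype=complex)  # start in |0>
        return self.psi.copy()

    def step(self, omega: float):
        # Resonant drive H = (omega/2) * sigma_x applied for one interval dt:
        # exp(-i * theta * sigma_x) = cos(theta) I - i sin(theta) sigma_x.
        theta = 0.5 * omega * self.dt
        U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * SX
        self.psi = U @ self.psi
        self.t += 1
        fidelity = abs(self.psi[1]) ** 2            # population of |1> (reward)
        return self.psi.copy(), fidelity, self.t >= self.horizon

env = QubitEnv()
env.reset()
# Sanity check: a pi-pulse spread over the horizon inverts the population.
for _ in range(env.horizon):
    _, f, _ = env.step(np.pi / (env.horizon * env.dt))
print(round(f, 4))  # ~1.0
```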

13 pages, 1519 KiB  
Article
Cyclic Air Braking Strategy for Heavy Haul Trains on Long Downhill Sections Based on Q-Learning Algorithm
by Changfan Zhang, Shuo Zhou, Jing He and Lin Jia
Information 2024, 15(5), 271; https://doi.org/10.3390/info15050271 - 11 May 2024
Abstract
Cyclic air braking is a key factor affecting the safe operation of trains on long downhill sections. However, a train's cyclic braking strategy is constrained by multiple factors such as the driving environment, speed, and air-refilling time. A Q-learning algorithm-based cyclic braking strategy for heavy haul trains on long downhill sections is proposed to address this challenge. First, the operating environment of a heavy haul train on long downhill sections is designed, considering various constraint parameters, such as the characteristics of special operating routes, allowable operating speeds, and the train tube air-refilling time. Second, the operating states and braking operations of a heavy haul train on long downhill sections are discretized in order to establish a Q-table based on state–action pairs. The algorithm is trained by continuously updating the Q-table. Finally, taking a heavy haul train formation as the study object, actual line data from the Shuozhou–Huanghua Railway are used for experimental simulation, considering different hyperparameters and entry speed conditions. The results show that safe and stable cyclic braking of a heavy haul train on long downhill sections is achieved, verifying the effectiveness of the Q-learning control strategy. Full article
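The tabular core of such a strategy is compact enough to sketch. Below is a toy rendering (the state and action discretization, reward, and overspeed rule are invented for illustration); only the update rule itself is the standard Q-learning formula.

```python
# Toy tabular Q-learning sketch for a cyclic-braking-style control problem.
import numpy as np

n_speeds, n_actions = 20, 3           # speed bands x {hold, apply brake, release}
Q = np.zeros((n_speeds, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def choose(s: int) -> int:            # epsilon-greedy policy
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())

def q_update(s: int, a: int, r: float, s_next: int) -> None:
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# One illustrative transition: an overspeed state is penalized unless braking.
s, a = 15, choose(15)
r = -1.0 if (s > 12 and a != 1) else 0.0   # keep the speed within the limit
q_update(s, a, r, s_next=14)
```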

25 pages, 6156 KiB  
Article
A Comparative Analysis of the Bayesian Regularization and Levenberg–Marquardt Training Algorithms in Neural Networks for Small Datasets: A Metrics Prediction of Neolithic Laminar Artefacts
by Maurizio Troiano, Eugenio Nobile, Fabio Mangini, Marco Mastrogiuseppe, Cecilia Conati Barbaro and Fabrizio Frezza
Information 2024, 15(5), 270; https://doi.org/10.3390/info15050270 - 10 May 2024
Abstract
This study aims to present a comparative analysis of the Bayesian regularization backpropagation and Levenberg–Marquardt training algorithms in neural networks for predicting the metrics of damaged archaeological artifacts, whose state of conservation is often fragmentary for various reasons, such as ritual practices, use wear, or post-depositional processes. The archaeological artifacts, specifically laminar blanks (so-called blades), come from different sites located in the Southern Levant that belong to the Pre-Pottery Neolithic B (PPNB) (10,100/9500–400 cal B.P.). This paper shows the entire procedure of the analysis, from dataset normalization to the comparative analysis and the resolution of the overfitting problem. Full article
(This article belongs to the Special Issue Techniques and Data Analysis in Cultural Heritage)

23 pages, 2622 KiB  
Article
L-PCM: Localization and Point Cloud Registration-Based Method for Pose Calibration of Mobile Robots
by Dandan Ning and Shucheng Huang
Information 2024, 15(5), 269; https://doi.org/10.3390/info15050269 - 10 May 2024
Abstract
The autonomous navigation of mobile robots comprises three parts: map building, global localization, and path planning. Precise pose data directly affect the accuracy of global localization. However, the cumulative errors of sensors and the variety of estimation strategies leave large gaps in pose accuracy. To address these problems, this paper proposes a pose calibration method based on localization and point cloud registration, called L-PCM. Firstly, the method obtains odometer and IMU (inertial measurement unit) data through the sensors mounted on the mobile robot and uses the UKF (unscented Kalman filter) algorithm to filter and fuse the odometer and IMU data to obtain the estimated pose of the mobile robot. Secondly, AMCL (adaptive Monte Carlo localization) is improved by combining the UKF fusion model of the IMU and odometer to obtain a corrected global initial pose of the mobile robot. Finally, PL-ICP (point to line-iterative closest point) point cloud registration is used to calibrate the corrected global initial pose to obtain the global pose of the mobile robot. Simulation experiments verify that the UKF fusion algorithm can reduce the influence of cumulative errors and that the improved AMCL algorithm can optimize the pose trajectory, with an average position error of about 0.0447 m and an average angle error stabilized at about 0.0049 degrees. Meanwhile, L-PCM is verified to be significantly better than the existing AMCL algorithm, with a position error of about 0.01726 m and an average angle error of about 0.00302 degrees, effectively improving the accuracy of the pose. Full article
(This article belongs to the Section Artificial Intelligence)

33 pages, 857 KiB  
Article
The Convergence of Artificial Intelligence and Blockchain: The State of Play and the Road Ahead
by Dhanasak Bhumichai, Christos Smiliotopoulos, Ryan Benton, Georgios Kambourakis and Dimitrios Damopoulos
Information 2024, 15(5), 268; https://doi.org/10.3390/info15050268 - 9 May 2024
Abstract
Artificial intelligence (AI) and blockchain technology have emerged as increasingly prevalent and influential elements shaping global trends in Information and Communications Technology (ICT). Namely, the synergistic combination of blockchain and AI introduces beneficial, unique features with the potential to enhance the performance and efficiency of existing ICT systems. However, presently, the confluence of these two disruptive technologies remains in a rather nascent stage, undergoing continuous exploration and study. In this context, the work at hand offers insight regarding the most significant features of the AI and blockchain intersection. Sixteen outstanding, recent articles exploring the combination of AI and blockchain technology have been systematically selected and thoroughly investigated. From them, fourteen key features have been extracted, including data security and privacy, data encryption, data sharing, decentralized intelligent systems, efficiency, automated decision systems, collective decision making, scalability, system security, transparency, sustainability, device cooperation, and mining hardware design. Moreover, drawing upon the related literature stemming from major digital databases, we constructed a timeline of this technological convergence comprising three eras: emerging, convergence, and application. For the convergence era, we categorized the pertinent features into three primary groups: data manipulation, potential applicability to legacy systems, and hardware issues. For the application era, we elaborate on the impact of this technology fusion from the perspective of five distinct focus areas, from Internet of Things applications and cybersecurity to finance, energy, and smart cities. This multifaceted but succinct analysis is instrumental in delineating the timeline of AI and blockchain convergence and pinpointing the unique characteristics inherent in their integration. The paper culminates by highlighting the prevailing challenges and unresolved questions in blockchain and AI-based systems, thereby charting potential avenues for future scholarly inquiry. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)

22 pages, 9676 KiB  
Article
Modeling- and Simulation-Driven Methodology for the Deployment of an Inland Water Monitoring System
by Giordy A. Andrade, Segundo Esteban, José L. Risco-Martín, Jesús Chacón and Eva Besada-Portas
Information 2024, 15(5), 267; https://doi.org/10.3390/info15050267 - 9 May 2024
Abstract
In response to the challenges introduced by global warming and increased eutrophication, this paper presents an innovative modeling and simulation (M&S)-driven methodology for developing an automated inland water monitoring system. This system is grounded in a layered Internet of Things (IoT) architecture and seamlessly integrates cloud, fog, and edge computing to enable sophisticated, real-time environmental surveillance and prediction of harmful algal and cyanobacterial blooms (HACBs). Utilizing autonomous boats as mobile data collection units within the edge layer, the system efficiently tracks algae and cyanobacteria proliferation and relays critical data upward through the architecture. These data feed into advanced inference models within the cloud layer, which inform predictive algorithms in the fog layer, orchestrating subsequent data-gathering missions. This paper also details a complete development environment that facilitates the system lifecycle from concept to deployment. The modular design is powered by the Discrete Event System Specification (DEVS) and offers unparalleled adaptability, allowing developers to simulate, validate, and deploy modules incrementally, cutting across traditional developmental phases. Full article
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing)

19 pages, 4615 KiB  
Article
Fake User Detection Based on Multi-Model Joint Representation
by Jun Li, Wentao Jiang, Jianyi Zhang, Yanhua Shao and Wei Zhu
Information 2024, 15(5), 266; https://doi.org/10.3390/info15050266 - 9 May 2024
Abstract
Existing deep learning-based detection of fake information focuses on the transient detection of the news itself. Compared with mining and detecting user-category profiles, transient detection is prone to higher misjudgment rates due to insufficient temporal information, posing new challenges to social public opinion monitoring tasks such as fake user detection. This paper proposes a multimodal aggregation portrait model (MAPM) based on multi-model joint representation for social media platforms. It constructs a deep learning-based multimodal fake user detection framework by analyzing user behavior datasets within a time retrospective window and integrates a pre-trained domain large model to represent user behavior data across multiple modalities, thereby constructing a highly generalizable implicit behavior feature spectrum for users. In response to the tendency of existing fake user behavior mining to neglect time-series features, this study introduces an improved network called Sequence Interval Detection Net (SIDN), based on sequence-to-sequence (seq2seq) models, to characterize time interval sequence behaviors, achieving strong expressive capability for detecting fake behaviors within the time window. Ultimately, the amalgamation of latent behavioral features and explicit characteristics serves as the input for spectral clustering in detecting fraudulent users. The experimental results on a real Weibo dataset demonstrate that the proposed model outperforms detection using explicit user features alone, with an improvement of 27.0% in detection accuracy. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
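The final stage of the pipeline described above can be sketched as follows: concatenate latent behavioral embeddings with explicit profile features and run spectral clustering over the combined representation. The feature names, dimensions, and random data here are assumptions for illustration, not the paper's exact pipeline.

```python
# Hedged sketch: fusing latent and explicit user features for spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_users = 200
latent = rng.normal(size=(n_users, 64))    # e.g., SIDN/seq2seq behavior embeddings
explicit = rng.normal(size=(n_users, 8))   # e.g., follower count, posting rate

# Scale both views so neither dominates the affinity computation,
# then concatenate into a single feature vector per user.
features = np.hstack([
    StandardScaler().fit_transform(latent),
    StandardScaler().fit_transform(explicit),
])

# Two clusters: genuine vs. fake users (mapping cluster IDs to labels
# requires a small amount of ground truth or heuristics afterwards).
labels = SpectralClustering(
    n_clusters=2, affinity="nearest_neighbors", random_state=0
).fit_predict(features)
print(np.bincount(labels))
```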
19 pages, 4471 KiB  
Article
Detection of Korean Phishing Messages Using Biased Discriminant Analysis under Extreme Class Imbalance Problem
by Siyoon Kim, Jeongmin Park, Hyun Ahn and Yonggeol Lee
Information 2024, 15(5), 265; https://doi.org/10.3390/info15050265 - 7 May 2024
Viewed by 1616
Abstract
In South Korea, the rapid proliferation of smartphones has led to an uptick in messenger phishing attacks associated with electronic communication financial scams. In response, various phishing detection algorithms have been proposed. However, collecting messenger phishing data poses challenges due to concerns about its potential use in criminal activities. Consequently, a Korean phishing dataset tends to be imbalanced, with general messages greatly outnumbering phishing ones. This class imbalance and data scarcity can lead to overfitting, making high performance difficult to achieve. To address this problem, this paper proposes a phishing message classification method using Biased Discriminant Analysis (BDA) without resorting to data augmentation techniques. By optimizing the parameters for BDA, we achieved exceptionally high performance in the phishing message classification experiment, with 95.45% recall and 96.85% on the balanced accuracy (BA) metric. Moreover, compared with other algorithms, the proposed method demonstrated robustness against overfitting under class imbalance and exhibited minimal performance disparity between the training and testing datasets. Full article
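For orientation, below is a generic sketch of BDA in its common formulation (maximize the scatter of negative samples around the positive-class mean relative to the within-class scatter of the rare positive class, via a generalized eigenproblem). The regularization constant, dimensions, and toy data are assumptions; this is not the paper's tuned implementation.

```python
# Generic Biased Discriminant Analysis sketch for imbalanced classification.
import numpy as np
from scipy.linalg import eigh

def bda_projection(X_pos, X_neg, n_components=1, reg=1e-3):
    mu_pos = X_pos.mean(axis=0)
    d = X_pos.shape[1]
    # S_p: positives scattered around their own mean (kept small)
    Zp = X_pos - mu_pos
    S_p = Zp.T @ Zp + reg * np.eye(d)   # regularized: positives are scarce
    # S_n: negatives scattered around the POSITIVE mean (pushed large)
    Zn = X_neg - mu_pos
    S_n = Zn.T @ Zn
    # Generalized eigenproblem S_n w = lambda * S_p w; keep top eigenvectors
    vals, vecs = eigh(S_n, S_p)
    return vecs[:, ::-1][:, :n_components]

# Toy imbalanced data: 20 phishing (positive) vs. 2000 general messages.
rng = np.random.default_rng(1)
X_pos = rng.normal(loc=2.0, size=(20, 10))
X_neg = rng.normal(loc=0.0, size=(2000, 10))
W = bda_projection(X_pos, X_neg)
print("1-D projected positive-class mean:", float((X_pos @ W).mean()))
```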
43 pages, 8370 KiB  
Article
Addressing Data Scarcity in the Medical Domain: A GPT-Based Approach for Synthetic Data Generation and Feature Extraction
by Fahim Sufi
Information 2024, 15(5), 264; https://doi.org/10.3390/info15050264 - 6 May 2024
Cited by 1 | Viewed by 2431
Abstract
This research confronts the persistent challenge of data scarcity in medical machine learning by introducing a methodology that harnesses the capabilities of Generative Pre-trained Transformers (GPT). In response to the limitations posed by a dearth of labeled medical data, our approach synthetically generates comprehensive patient discharge messages, with GPT autonomously generating 20 fields per record. Through a review of the existing literature, we systematically explore GPT's aptitude for synthetic data generation and feature extraction, providing a robust foundation for the subsequent phases of the research. The empirical demonstration showcases the potential of the proposed solution, presenting over 70 patient discharge messages with synthetically generated fields, including severity and chance of hospital re-admission with justification. Moreover, the data were deployed in a mobile solution in which regression algorithms autonomously identified the factors correlated with the severity of patients' conditions. This study not only establishes a novel and comprehensive methodology but also contributes significantly to medical machine learning, presenting the most extensive set of synthetic patient discharge summaries reported in the literature. The results underscore the efficacy of GPT in overcoming data scarcity challenges and pave the way for future research to refine and expand the application of GPT in diverse medical contexts. Full article
(This article belongs to the Special Issue Information Systems in Healthcare)
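A hedged sketch of GPT-driven synthetic record generation in the spirit described above is given below, using the official `openai` Python SDK (v1+). The prompt wording, field subset, and model name are illustrative assumptions (the paper reports 20 autonomously generated fields); an API key is expected in the OPENAI_API_KEY environment variable.

```python
# Illustrative synthetic discharge-message generation via the OpenAI chat API.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Generate one fictional patient discharge message as JSON with these "
    "fields: patient_age, diagnosis, medications, severity (1-5), "
    "readmission_chance_percent, justification. "
    "The patient must be entirely synthetic; do not reproduce real records."
)

def generate_synthetic_record() -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice, not the paper's
        messages=[{"role": "user", "content": PROMPT}],
        response_format={"type": "json_object"},  # force parseable JSON output
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(json.dumps(generate_synthetic_record(), indent=2))
```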
25 pages, 397 KiB  
Review
Cybercrime Intention Recognition: A Systematic Literature Review
by Yidnekachew Worku Kassa, Joshua Isaac James and Elefelious Getachew Belay
Information 2024, 15(5), 263; https://doi.org/10.3390/info15050263 - 5 May 2024
Cited by 2 | Viewed by 2773
Abstract
In this systematic literature review, we delve into intention recognition within the context of digital forensics and cybercrime. The rise of cybercrime has become a major concern for individuals, organizations, and governments worldwide. Digital forensics deals with the investigation and analysis of digital evidence in order to identify, preserve, and analyze information that can serve as evidence in a court of law. Intention recognition is a subfield of artificial intelligence concerned with identifying agents' intentions from their actions and changes of state. In the context of cybercrime, intention recognition can be used to identify the intentions of cybercriminals and even to predict their future actions. Employing the PRISMA systematic review approach, we curated research articles from reputable journals and categorized them into three distinct modeling approaches: logic-based, classical machine learning-based, and deep learning-based. Notably, intention recognition has transcended its historical confinement to network security and now addresses critical challenges across various subdomains, including social engineering attacks, artificial intelligence black-box vulnerabilities, and physical security. While deep learning emerges as the dominant paradigm, its inherent lack of transparency poses a challenge in the digital forensics landscape: models developed for digital forensics must possess intrinsic explainability and logical coherence to foster judicial confidence, mitigate bias, and uphold accountability for their determinations. To this end, we advocate hybrid solutions that blend explainability, reasonableness, efficiency, and accuracy. Furthermore, we propose the creation of a taxonomy that precisely defines intention recognition, paving the way for future advancements in this pivotal field. Full article
(This article belongs to the Special Issue Digital Forensic Investigation and Incident Response)
21 pages, 2289 KiB  
Article
Novel Ransomware Detection Exploiting Uncertainty and Calibration Quality Measures Using Deep Learning
by Mazen Gazzan and Frederick T. Sheldon
Information 2024, 15(5), 262; https://doi.org/10.3390/info15050262 - 5 May 2024
Cited by 1 | Viewed by 1485
Abstract
Ransomware poses a significant threat by encrypting files or systems and demanding that a ransom be paid, so early detection is essential to mitigate its impact. This paper presents an Uncertainty-Aware Dynamic Early Stopping (UA-DES) technique for optimizing Deep Belief Networks (DBNs) in ransomware detection. UA-DES leverages Bayesian methods, dropout techniques, and an active learning framework to dynamically adjust the number of epochs during the training of the detection model, preventing overfitting while enhancing model accuracy and reliability. Our solution, which we call “UA-DES-DBN”, takes as input a set of Application Programming Interface (API) calls representing ransomware behavior. The method incorporates uncertainty and calibration quality measures, optimizing the training process for more accurate ransomware detection. Experiments demonstrate the effectiveness of UA-DES-DBN compared with more conventional models: the proposed model improved accuracy from 94% to 98% across various input sizes and decreased the false positive rate from 0.18 to 0.10, making it more useful in real-world cybersecurity applications. Full article
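The core idea of uncertainty-aware early stopping can be sketched as below: Monte Carlo dropout estimates predictive uncertainty on a validation set each epoch, and training halts once uncertainty stops improving. A small dropout MLP stands in for the paper's DBN, and the patience and thresholds are illustrative assumptions, not the UA-DES parameterization.

```python
# Simplified uncertainty-aware early stopping via Monte Carlo dropout (PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 2),  # benign vs. ransomware, from API-call features
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy data standing in for API-call feature vectors.
Xtr, ytr = torch.randn(512, 32), torch.randint(0, 2, (512,))
Xval = torch.randn(128, 32)

def mc_dropout_uncertainty(x, passes: int = 20) -> float:
    """Mean predictive entropy over stochastic forward passes."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(passes)]
        ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=1)
    return entropy.mean().item()

best, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(100):
    model.train()
    opt.zero_grad()
    loss_fn(model(Xtr), ytr).backward()
    opt.step()

    u = mc_dropout_uncertainty(Xval)
    if u < best - 1e-4:
        best, bad_epochs = u, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:  # uncertainty plateaued: stop dynamically
        print(f"stopping at epoch {epoch}, uncertainty={u:.4f}")
        break
```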
21 pages, 2447 KiB  
Article
The Impact of Immersive Virtual Reality on Knowledge Acquisition and Adolescent Perceptions in Cultural Education
by Athanasios Christopoulos, Maria Styliou, Nikolaos Ntalas and Chrysostomos Stylios
Information 2024, 15(5), 261; https://doi.org/10.3390/info15050261 - 3 May 2024
Cited by 1 | Viewed by 2575
Abstract
Understanding local history is fundamental to fostering a comprehensive global viewpoint. As technological advances shape our pedagogical tools, Virtual Reality (VR) stands out for its potential educational impact. Though its promise in educational settings is widely acknowledged, especially in science, technology, engineering, and mathematics (STEM) fields, noticeably less research has explored VR's efficacy in the arts. The present study examines the effects of VR-mediated interventions on cultural education. In greater detail, secondary school adolescents (N = 52) embarked on a journey into local history through an immersive 360° VR experience. As part of our research approach, we conducted pre- and post-intervention assessments to gauge participants' grasp of the content and distributed psychometric instruments to evaluate their reception of VR as an instructional approach. The analysis indicates that VR's immersive elements enhance knowledge acquisition, but the impact is modulated by the complexity of the subject matter. Additionally, the study reveals that tailored, context-sensitive instructional design is paramount for optimising learning outcomes and mitigating educational inequities. This work challenges the “one-size-fits-all” approach to educational VR, advocating a more targeted instructional approach. Consequently, it emphasises the need for educators and VR developers to collaboratively tailor interventions that are both culturally and contextually relevant. Full article