Journal Description
Informatics is an international, peer-reviewed, open access journal on information and communication technologies, human–computer interaction, and social informatics, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Communication)
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 33 days after submission; acceptance to publication takes 5.7 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.4 (2023); 5-Year Impact Factor: 3.1 (2023)
Latest Articles
Hybrid Machine Learning for Stunting Prevalence: A Novel Comprehensive Approach to Its Classification, Prediction, and Clustering Optimization in Aceh, Indonesia
Informatics 2024, 11(4), 89; https://doi.org/10.3390/informatics11040089 - 21 Nov 2024
Abstract
Stunting remains a significant public health issue in Aceh, Indonesia, and is influenced by various socio-economic and environmental factors. This study aims to address key challenges in accurately classifying stunting prevalence, predicting future trends, and optimizing clustering methods to support more effective interventions. To this end, we propose a novel hybrid machine learning framework that integrates classification, predictive modeling, and clustering optimization. Support Vector Machines (SVM) with Radial Basis Function (RBF) and Sigmoid kernels were employed to improve classification accuracy, with the RBF kernel outperforming the Sigmoid kernel, achieving an accuracy rate of 91.3% compared with 85.6%. This provides a more reliable tool for identifying high-risk populations. Furthermore, linear regression was used for predictive modeling, yielding a low Mean Squared Error (MSE) of 0.137, demonstrating robust predictive accuracy for future stunting prevalence. Finally, the clustering process was optimized using a weighted-product approach to enhance the efficiency of K-Medoids. This optimization reduced the number of iterations from seven to three and improved the Calinski–Harabasz Index from 85.2 to 93.7. This comprehensive framework not only enhances classification, prediction, and clustering results but also delivers actionable insights for targeted public health interventions and policymaking aimed at reducing stunting in Aceh.
(This article belongs to the Section Health Informatics)
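The abstract above names three standard building blocks: SVM kernels, linear regression, and K-Medoids scored by the Calinski–Harabasz index. A minimal sketch of how those pieces fit together, using synthetic data with scikit-learn plus scikit-learn-extra for K-Medoids, is shown below; it illustrates the workflow only and is not the authors' code.

```python
# Illustrative workflow sketch (synthetic data, not the authors' code).
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, calinski_harabasz_score
from sklearn_extra.cluster import KMedoids  # from the scikit-learn-extra package

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # stand-in socio-economic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in high-risk / low-risk label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Classification: compare RBF and Sigmoid kernels
for kernel in ("rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    print(kernel, accuracy_score(y_te, clf.predict(X_te)))

# 2) Prediction: linear regression on a toy yearly prevalence series
years = np.arange(2015, 2024).reshape(-1, 1)
prevalence = 40 - 1.5 * np.arange(9) + rng.normal(0, 1, 9)
reg = LinearRegression().fit(years, prevalence)
print("MSE:", mean_squared_error(prevalence, reg.predict(years)))

# 3) Clustering: K-Medoids quality via the Calinski–Harabasz index
labels = KMedoids(n_clusters=3, random_state=0).fit_predict(X)
print("CH index:", calinski_harabasz_score(X, labels))
```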
Open Access Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Abstract
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can draw participants’ attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. Moreover, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. This paper also underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task.
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)
Open Access Article
Estimation of Mango Fruit Production Using Image Analysis and Machine Learning Algorithms
by Liliana Arcila-Diaz, Heber I. Mejia-Cabrera and Juan Arcila-Diaz
Informatics 2024, 11(4), 87; https://doi.org/10.3390/informatics11040087 - 16 Nov 2024
Abstract
Mango production is fundamental to the agricultural economy, generating income and employment in various communities. Accurate production estimates optimize harvest planning and logistics; traditional manual methods are inefficient and prone to error. Machine learning, by handling large volumes of data, has emerged as an innovative solution to enhance the precision of mango production estimation. This study presents an analysis of mango fruit detection using machine learning algorithms, specifically YOLO version 8 and Faster R-CNN. The study employs a dataset of 212 original images annotated with a total of 9604 labels, expanded to include 2449 additional images and 116,654 annotations. This significant increase in dataset size notably enhances the robustness and generalization capacity of the model. The YOLO-trained model achieves an accuracy of 96.72%, a recall of 77.4%, and an F1 Score of 86%, compared to the results of Faster R-CNN, which are 98.57%, 63.80%, and 77.46%, respectively. YOLO demonstrates greater efficiency, being faster to train, consuming less memory, and utilizing fewer CPU resources. Furthermore, this study developed a web application whose interface facilitates uploading images of the mango trees selected as samples. The YOLO-trained model detects the fruits of each tree in the representative sample and uses extrapolation techniques to estimate the total number of fruits across the entire population of mango trees.
(This article belongs to the Section Machine Learning)
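The counting-and-extrapolation step described at the end of the abstract is straightforward to sketch. The snippet below assumes the ultralytics YOLOv8 API; the weights file mango.pt, the image paths, and the tree count are hypothetical placeholders, not artifacts from the paper.

```python
# Hedged sketch of detection plus extrapolation (placeholder paths and values).
from ultralytics import YOLO

model = YOLO("mango.pt")  # hypothetical YOLOv8 weights fine-tuned on mango labels

sample_images = ["tree_01.jpg", "tree_02.jpg", "tree_03.jpg"]  # sampled trees
counts = [len(model(img)[0].boxes) for img in sample_images]   # fruits per tree

total_trees = 120  # assumed size of the whole mango tree population
estimated_total = sum(counts) / len(counts) * total_trees      # extrapolate the mean
print(f"Estimated total fruits: {estimated_total:.0f}")
```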
Open Access Review
The Use of Artificial Intelligence to Analyze the Exposome in the Development of Chronic Diseases: A Review of the Current Literature
by Stefania Isola, Giuseppe Murdaca, Silvia Brunetto, Emanuela Zumbo, Alessandro Tonacci and Sebastiano Gangemi
Informatics 2024, 11(4), 86; https://doi.org/10.3390/informatics11040086 - 12 Nov 2024
Abstract
The “Exposome” is a concept that indicates the set of exposures to which a human is subjected during their lifetime. These factors influence the health state of individuals and can drive the development of Noncommunicable Diseases (NCDs). Artificial Intelligence (AI) allows one to analyze large amounts of data in a short time. As such, several authors have used AI to study the relationship between the exposome and chronic diseases. Under such premises, this study reviews the use of AI in analyzing the exposome to understand its role in the development of chronic diseases, focusing on how AI can identify patterns in exposure-related data and support prevention strategies. To achieve this, we carried out a search on multiple databases, including PubMed, ScienceDirect, and SCOPUS, from 1 January 2019 to 31 May 2023, using the MeSH terms (exposome) and (‘Artificial Intelligence’ OR ‘Machine Learning’ OR ‘Deep Learning’) to identify relevant studies on this topic. After completing the identification, screening, and eligibility assessment, a total of 18 studies were included in this literature review. According to the search, most authors used supervised or unsupervised machine learning models to study the role of multiple exposure factors in the risk of developing cardiovascular, metabolic, and chronic respiratory diseases. In some more recent studies, authors also used deep learning. Furthermore, exposome analysis is useful for studying the risk of developing neuropsychiatric disorders and for evaluating pregnancy outcomes and child growth. Understanding the role of the exposome is pivotal to moving beyond the classic one-exposure/one-disease paradigm. The application of AI allows one to analyze multiple environmental risks and their combined effects on health conditions. In the future, AI could be helpful in the prevention of chronic diseases, providing new diagnostic, therapeutic, and follow-up strategies.
Open Access Article
Modeling Zika Virus Disease Dynamics with Control Strategies
by Mlyashimbi Helikumi, Paride O. Lolika, Kimulu Ancent Makau, Muli Charles Ndambuki and Adquate Mhlanga
Informatics 2024, 11(4), 85; https://doi.org/10.3390/informatics11040085 - 11 Nov 2024
Abstract
In this research, we formulated a fractional-order model for the transmission dynamics of Zika virus, incorporating three control strategies: health education campaigns, the use of insecticides, and preventive measures. We conducted a theoretical analysis of the model, obtaining the disease-free equilibrium and the basic reproduction number, and analyzing the existence and uniqueness of the model’s solutions. Additionally, we performed model parameter estimation using real data on Zika virus cases reported in Colombia. We found that the fractional-order model provided a better fit to the real data than the classical integer-order model. A sensitivity analysis of the basic reproduction number was conducted using computed partial rank correlation coefficients to assess the impact of each parameter on Zika virus transmission. Furthermore, we performed numerical simulations to determine the effect of memory on the spread of Zika virus. The simulation results showed that the order of the derivatives significantly impacts the dynamics of the disease. We also assessed the effect of the control strategies through simulations, concluding that the proposed interventions have the potential to significantly reduce the spread of Zika virus in the population.
(This article belongs to the Section Health Informatics)
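For readers unfamiliar with fractional-order epidemic models, the Caputo derivative shown below is the operator such models typically substitute for d/dt; the host equation after it is only an illustrative schematic of this model class (the symbols Λh, βh, and μh are assumed notation, not the paper's exact system).

```latex
% Caputo fractional derivative of order 0 < \alpha \le 1:
\[
  {}^{C}\!D_t^{\alpha} f(t)
    = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} f'(s)\, ds .
\]
% Schematic host equation (illustrative only): susceptible humans S_h
% recruited at rate \Lambda_h, infected by infectious vectors I_v at
% rate \beta_h, and subject to natural mortality \mu_h.
\[
  {}^{C}\!D_t^{\alpha} S_h(t)
    = \Lambda_h - \beta_h S_h(t)\, I_v(t) - \mu_h S_h(t) .
\]
```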
Open Access Case Report
Can ChatGPT Support Clinical Coding Using the ICD-10-CM/PCS?
by Bernardo Nascimento Teixeira, Ana Leitão, Generosa Nascimento, Adalberto Campos-Fernandes and Francisco Cercas
Informatics 2024, 11(4), 84; https://doi.org/10.3390/informatics11040084 - 7 Nov 2024
Abstract
Introduction: With the growing development and adoption of artificial intelligence in healthcare and across other sectors of society, various user-friendly and engaging tools to support research have emerged, such as chatbots, notably ChatGPT. Objective: To investigate the performance of ChatGPT as an assistant to medical coders using the ICD-10-CM/PCS. Methodology: We conducted a prospective exploratory study between 2023 and 2024 over 6 months. A total of 150 clinical cases coded using the ICD-10-CM/PCS, extracted from technical coding books, were systematically randomized. All cases were translated into Portuguese (the native language of the authors) and English (the native language of the ICD-10-CM/PCS). These clinical cases varied in complexity regarding the number of diagnoses and procedures, as well as the nature of the clinical information. Each case was input into the 2023 ChatGPT free version. The coding obtained from ChatGPT was analyzed by a senior medical auditor/coder and compared with the expected results. Results: ChatGPT performed approximately 29 percentage points better on diagnoses than on procedures, showing greater proficiency with diagnostic codes. The accuracy rate was similar across languages, at 31.0% and 31.9%. The error rate for procedure codes was almost four times higher than that for diagnostic codes. The incidence of missing information was slightly more than double in diagnoses compared to procedures. Additionally, there was a statistically significant excess of codes unrelated to the clinical information, which was higher for procedures and nearly identical in both languages under study. Conclusion: Given the ease of access to these tools, this investigation serves as an awareness factor, demonstrating that ChatGPT can assist the medical coder in directed research. However, it does not replace technical validation by the coder. Further development of this tool is therefore necessary to increase the quality and reliability of the results.
Open Access Article
Web Traffic Anomaly Detection Using Isolation Forest
by Wilson Chua, Arsenn Lorette Diamond Pajas, Crizelle Shane Castro, Sean Patrick Panganiban, April Joy Pasuquin, Merwin Jan Purganan, Rica Malupeng, Divine Jessa Pingad, John Paul Orolfo, Haron Hakeen Lua and Lemuel Clark Velasco
Informatics 2024, 11(4), 83; https://doi.org/10.3390/informatics11040083 - 5 Nov 2024
Abstract
As companies increasingly undergo digital transformation, the value of their data assets also rises, making them even more attractive targets for hackers. The large volume of weblogs warrants the use of advanced classification methodologies so that cybersecurity specialists can identify web traffic anomalies. This study aims to implement Isolation Forest, an unsupervised machine learning methodology, to identify anomalous and non-anomalous web traffic. The publicly available weblogs dataset from an e-commerce website underwent data preparation through a systematic pipeline of processes involving data ingestion, data type conversion, data cleaning, and normalization. This led to the addition of derived columns in the training set and a manually labeled testing set, which was then used to compare the anomaly detection performance of the Isolation Forest model with that of cybersecurity experts. The developed Isolation Forest model was implemented using the Python Scikit-learn library and exhibited a superior Accuracy of 93%, Precision of 95%, Recall of 90%, and F1-Score of 92%. Through appropriate data preparation, model development, model implementation, and model evaluation, this study shows that Isolation Forest can be a viable solution for web traffic anomaly detection.
(This article belongs to the Section Machine Learning)
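Since the abstract names the exact library, the core step is easy to sketch. The snippet below uses scikit-learn's IsolationForest on stand-in numeric features; the column names and contamination rate are assumptions, not the paper's actual feature set or tuning.

```python
# Minimal Isolation Forest sketch on stand-in weblog features.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the prepared weblog training set (hypothetical features)
rng = np.random.default_rng(0)
train = pd.DataFrame({
    "requests_per_minute": rng.gamma(2.0, 2.0, 1000),
    "bytes_sent": rng.lognormal(8.0, 1.0, 1000),
    "error_ratio": rng.beta(1.0, 20.0, 1000),
})

X = StandardScaler().fit_transform(train)  # the pipeline's normalization step
model = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
model.fit(X)

flags = model.predict(X)  # -1 marks anomalous traffic, 1 marks normal traffic
print(f"Flagged {np.sum(flags == -1)} of {len(flags)} records as anomalous")
```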
Open Access Article
Perceptions of AI Integration in the UAE’s Creative Sector
by Asma Hassouni and Noha Mellor
Informatics 2024, 11(4), 82; https://doi.org/10.3390/informatics11040082 - 4 Nov 2024
Abstract
This study explores perceptions of artificial intelligence (AI) within the creative sector of the United Arab Emirates (UAE), based on 13 semi-structured interviews and a survey of 224 media professionals and their stakeholders. The findings indicate considerable enthusiasm surrounding AI’s potential to augment creativity and drive operational efficiency, a perspective that the study’s participants share. However, there are also apprehensions regarding job displacement and the necessity for strategic upskilling. Participants generally regard AI as an unavoidable technological influence that demands adaptation and seamless integration into daily workflows. The study underscores the disparity between the UAE’s government-led digital transformation objectives and the actual implementation within organizations, highlighting the urgent need for cohesive strategic alignment. The findings caution that the absence of clear directives and strategic planning may precipitate a new digital schism, impeding progress in the sector.
(This article belongs to the Section Human-Computer Interaction)
Open Access Systematic Review
Early Estimation in Agile Software Development Projects: A Systematic Mapping Study
by José Gamaliel Rivera Ibarra, Gilberto Borrego and Ramón R. Palacio
Informatics 2024, 11(4), 81; https://doi.org/10.3390/informatics11040081 - 4 Nov 2024
Abstract
Estimating during the early stages is crucial for determining the feasibility and conducting the budgeting and planning of agile software development (ASD) projects. However, due to the characteristics of ASD and limited initial information, these estimates are often complicated and inaccurate. This study aims to systematically map the literature to identify the most used estimation techniques; the reasons for their selection; the input artifacts, predictors, and metrics associated with these techniques; as well as research gaps in early-stage estimations in ASD. This study was based on the guidelines proposed by Kitchenham for systematic literature reviews in software engineering; a review protocol was defined with research questions and criteria for the selection of empirical studies. Results show that data-driven techniques are preferred to reduce biases and inconsistencies of expert-driven techniques. Most selected studies do not mention input artifacts, and software size is the most commonly used predictor. Machine learning-based techniques use publicly available data but often contain records of old projects from before the agile movement. The study highlights the need for tools supporting estimation activities and identifies key areas for future research, such as evaluating hybrid approaches and creating datasets of recent projects with sufficient contextual information and standardized metrics.
Open Access Article
Enhancing Visible Light Communication Channel Estimation in Complex 3D Environments: An Open-Source Ray Tracing Simulation Framework
by Véronique Georlette, Nicolas Vallois, Véronique Moeyaert and Bruno Quoitin
Informatics 2024, 11(4), 80; https://doi.org/10.3390/informatics11040080 - 31 Oct 2024
Abstract
Estimating the optical power distribution in a room in order to assess the performance of a visible light communication (VLC) system is nothing new. It can be estimated using a Monte Carlo optical ray tracing algorithm that sums the contribution of each ray on the reception plane. To date, however, research on indoor applications has focused on rectangular parallelepipedic rooms with single-textured walls. This article presents a new open-source simulator that handles more complex rooms by analysing them through a 3D STL (stereolithography) model. This paper describes the new tool in detail, covering the material used, the software architecture, and the ray tracing algorithm; validates it against the literature; and presents new use cases. To the best of our knowledge, this simulator is the only free and open-source ray tracing analysis tool for complex 3D rooms in VLC research. In particular, it can study any room shape, such as an octagon or an L-shape. The user can control the number of emitters, their orientation, and especially the number of rays emitted and reflected. The final results are detailed heat maps, enabling the visualization of the optical power distribution across any 3D room. This tool is innovative both visually (using 3D models) and mathematically (estimating the coverage of a VLC system).
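To make the "contribution of each ray" concrete: the line-of-sight term that such Monte Carlo VLC simulators sum is usually the standard Lambertian channel gain from the literature. The sketch below implements that textbook formula; it is a generic illustration, not code from this simulator (which also traces reflections off the 3D STL geometry).

```python
# Textbook line-of-sight VLC channel gain (generic illustration).
import math

def los_gain(area, dist, phi, psi, semi_angle, fov):
    """DC channel gain for one emitter-receiver pair (angles in radians)."""
    if psi > fov:
        return 0.0  # ray falls outside the receiver's field of view
    m = -math.log(2) / math.log(math.cos(semi_angle))  # Lambertian order
    return (m + 1) * area / (2 * math.pi * dist**2) * \
        math.cos(phi) ** m * math.cos(psi)

# Example: 1 cm^2 photodiode 2.2 m from an LED with a 60-degree semi-angle
g = los_gain(area=1e-4, dist=2.2, phi=0.3, psi=0.3,
             semi_angle=math.radians(60), fov=math.radians(70))
print(f"Received power for a 1 W emitter: {g:.3e} W")
```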
Open Access Article
Blockchain Technology in K-12 Computer Science Education?!
by Rupert Gehrlein and Andreas Dengel
Informatics 2024, 11(4), 79; https://doi.org/10.3390/informatics11040079 - 30 Oct 2024
Abstract
Blockchain technology and its applications, such as cryptocurrencies or non-fungible tokens, represent significant advancements in computer science. Alongside its transformative potential, human interaction with blockchain has led to notable negative implications, including cybersecurity vulnerabilities, high energy consumption in mining activities, environmental impacts, and the prevalence of economic fraud and high-risk financial products. Considering the expanding range of blockchain applications, there is interest in exploring its integration into K-12 education. To this end, this paper examines existing and documented attempts through a systematic literature review. Although the findings are quantitatively limited, they reveal initial concepts and ideas.
Open Access Article
Educational Roles and Scenarios for Large Language Models: An Ethnographic Research Study of Artificial Intelligence
by Nikša Alfirević, Darko Rendulić, Maja Fošner and Ajda Fošner
Informatics 2024, 11(4), 78; https://doi.org/10.3390/informatics11040078 - 29 Oct 2024
Abstract
This paper reviews the theoretical background and potential applications of Large Language Models (LLMs) in educational processes and academic research. Utilizing a novel digital ethnographic approach, we engaged in iterative research with OpenAI’s ChatGPT-4 and Google’s Gemini Ultra, two advanced commercial LLMs. The methodology treated the LLMs as research participants, emphasizing the AI-guided perspectives and their envisioned roles in educational settings. Our findings identified potential LLM roles in educational and research processes, and we discussed the associated challenges, including potential biases in decision-making and AI as a possible source of discrimination and conflicts of interest. In addition to the practical implications, we used the qualitative research results to advise on relevant topics for future research.
Open Access Article
TableExtractNet: A Model of Automatic Detection and Recognition of Table Structures from Unstructured Documents
by Thokozani Ngubane and Jules-Raymond Tapamo
Informatics 2024, 11(4), 77; https://doi.org/10.3390/informatics11040077 - 25 Oct 2024
Abstract
This paper presents TableExtractNet, a model that automatically finds and understands tables in scanned documents, tasks that are essential for the quick use of information in many fields. This work is driven by the growing need for efficient and accurate table interpretation in business documents, where tables enhance data communication and aid decision-making. The model combines two advanced techniques, CornerNet and Faster R-CNN, to accurately locate tables and understand their layout. Tests on standard datasets (IIIT-AR-13K, STDW, SciTSR, and PubTabNet) show that the model performs better than previous ones, handling tables with complicated designs and documents with a high level of detail particularly well. The success of this model marks a step forward in making document analysis more automated, making it easier to turn complex scanned documents containing tables into machine-readable data.
(This article belongs to the Section Machine Learning)
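As a rough illustration of the Faster R-CNN half of the pipeline (the paper pairs it with CornerNet), the snippet below runs torchvision's pretrained detector over a scanned page and keeps confident boxes as table candidates; the image path is a placeholder, and in practice the detection head would be fine-tuned on the table datasets listed above.

```python
# Generic Faster R-CNN region detection sketch (not TableExtractNet itself).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # a real system would fine-tune this head on table annotations

page = to_tensor(Image.open("scanned_page.png").convert("RGB"))  # placeholder path
with torch.no_grad():
    pred = model([page])[0]  # dict with "boxes", "labels", and "scores"

# Keep confident proposals as candidate table bounding boxes
for box, score in zip(pred["boxes"], pred["scores"]):
    if score > 0.8:
        print([round(v) for v in box.tolist()], f"score={score:.2f}")
```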
Open Access Systematic Review
In-Bed Monitoring: A Systematic Review of the Evaluation of In-Bed Movements Through Bed Sensors
by Honoria Ocagli, Corrado Lanera, Carlotta Borghini, Noor Muhammad Khan, Alessandra Casamento and Dario Gregori
Informatics 2024, 11(4), 76; https://doi.org/10.3390/informatics11040076 - 22 Oct 2024
Abstract
The growing popularity of smart beds and devices for remote healthcare monitoring is built on advances in artificial intelligence (AI) applications. This systematic review aims to evaluate and synthesize the growing literature on the use of machine learning (ML) techniques to characterize patient in-bed movements and bedsore development. The review is conducted according to the principles of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and is registered in the International Prospective Register of Systematic Reviews (PROSPERO CRD42022314329). The search was performed across nine scientific databases. The review included 78 articles covering 142 ML models. The applied ML models revealed significant heterogeneity in the methodologies used to identify and classify patient behaviors and postures. The assortment of ML models encompassed artificial neural networks, deep learning architectures, and multimodal sensor integration approaches. This review shows that models for analyzing and interpreting in-bed movements perform well in experimental settings; however, large-scale real-life studies in diverse patient populations are still lacking.
Open Access Article
Why Do People Gather? A Study on Factors Affecting Emotion and Participation in Group Chats
by Lu Yan, Kenta Ono, Makoto Watanabe and Weijia Wang
Informatics 2024, 11(4), 75; https://doi.org/10.3390/informatics11040075 - 17 Oct 2024
Abstract
Group chat socialization is increasingly central to online activities, yet design strategies to enhance this experience remain underexplored. This study builds on the Stimuli–Organism–Response (SOR) framework to examine how usability, chat rhythm, and user behavior influence emotions and participation in group chats. Using data from 546 users in China, a relevant demographic given the dominance of platforms like WeChat in both social and professional settings, we uncover insights that are particularly applicable to highly connected digital environments. Our analysis shows significant relationships between usability (γ = 0.236, p < 0.001), chat rhythm (γ = 0.172, p < 0.001), user behavior (γ = 0.214, p < 0.001), and emotions, which directly impact participation. Positive emotions (γ = 0.128, p < 0.05) boost participation, while negative emotions (γ = −0.144, p < 0.01), particularly when linked to user behaviors, reduce it. Additionally, we discussed the mediating effects, notably that usability significantly impacts participation through positive emotions, while user behavior exerts a significant influence on participation through negative emotions. This research offers actionable design strategies, such as tailoring sensory inputs to reduce cognitive load and implementing reward systems to motivate participation. Positive feedback mechanisms enhance engagement by leveraging the brain’s reward systems, while optimized error messages can minimize frustration. These insights, which are particularly relevant for China’s active group chat culture, provide a framework to improve platform design and contribute valuable findings to the broader HCI field.
Open Access Article
Artificial Intelligence in Retail Marketing: Research Agenda Based on Bibliometric Reflection and Content Analysis (2000–2023)
by Ahasanul Haque, Naznin Akther, Irfanuzzaman Khan, Khushbu Agarwal and Nazim Uddin
Informatics 2024, 11(4), 74; https://doi.org/10.3390/informatics11040074 - 9 Oct 2024
Abstract
Artificial intelligence (AI) is fundamentally transforming the marketing landscape, enabling significant progress in customer engagement, personalization, and operational efficiency. The retail sector has been at the forefront of the AI revolution, adopting AI technologies extensively to transform consumer interactions, supply chain management, and business performance. Given its early adoption of AI, the retail industry serves as an essential case context for investigating the broader implications of AI for consumer behavior. Drawing on 404 articles published between 2000 and 2023, this study presents a comprehensive bibliometric and content analysis of AI applications in retail marketing. The analysis used VOSviewer (version 1.6.20.0) and Bibliometrix (version 4.3.1) to identify important contributors, top institutions, and key publication sources. Co-occurrence keyword and co-citation analyses were used to map intellectual networks and highlight emerging themes. Additionally, a focused content analysis was conducted on 50 recent articles selected for their relevance, timeliness, and citation influence; it revealed six primary research streams: (1) consumer behavior, (2) AI in retail marketing, (3) business performance, (4) sustainability, (5) supply chain management, and (6) trust. These streams were categorized by thematic relevance and theoretical significance, emphasizing AI’s impact on the retail sector. The contributions of this study are twofold. Theoretically, it integrates existing research on AI in retail marketing and outlines future research in areas such as AI’s role in consumer behavior. Empirically, the study highlights how AI can be applied to enhance customer experiences and improve business operations.
(This article belongs to the Section Human-Computer Interaction)
Open Access Article
Utilizing LSTM-GRU for IOT-Based Water Level Prediction Using Multi-Variable Rainfall Time Series Data
by Indrastanti Ratna Widiasari and Rissal Efendi
Informatics 2024, 11(4), 73; https://doi.org/10.3390/informatics11040073 - 8 Oct 2024
Abstract
This research describes experiments using LSTM and GRU models, and a combination of both, to predict floods in Semarang based on time series data. The results show that the LSTM model is superior in capturing long-term dependencies, while GRU is better at processing short-term patterns. By combining the strengths of both models, the hybrid approach achieves better accuracy and robustness in flood prediction. The LSTM-GRU hybrid model outperforms the individual models, providing a more reliable prediction framework. This performance improvement is due to the complementary strengths of LSTM and GRU in handling various aspects of time series data. These findings emphasize the potential of advanced neural network models in addressing complex environmental challenges, paving the way for more effective flood management strategies in Semarang. The performance of the LSTM, GRU, and LSTM-GRU models across six scenarios, reported as MAPE, MSE, RMSE, and MAD for training and validation data, shows significant differences in predicting river water levels from rainfall input. Overall, the GRU model and the LSTM-GRU combination perform well when using the more complete set of input variables, namely both downstream and upstream rainfall, compared to using downstream rainfall only.
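The hybrid architecture the abstract describes, an LSTM layer feeding a GRU layer, can be sketched in a few lines of Keras. The window length, layer sizes, and feature count below are illustrative assumptions, not the paper's tuned configuration.

```python
# Minimal LSTM-GRU hybrid sketch in Keras (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 24, 2  # e.g., downstream and upstream rainfall windows

model = tf.keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64, return_sequences=True),  # captures long-term dependencies
    layers.GRU(32),                          # captures short-term patterns
    layers.Dense(1),                         # next river water level
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# Training would then be: model.fit(X_train, y_train, validation_data=..., epochs=...)
```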
Open Access Article
Using Artificial Intelligence-Based Tools to Improve the Literature Review Process: Pilot Test with the Topic “Hybrid Meat Products”
by Juana Fernández-López, Fernando Borrás-Rocher, Manuel Viuda-Martos and José Ángel Pérez-Álvarez
Informatics 2024, 11(4), 72; https://doi.org/10.3390/informatics11040072 - 5 Oct 2024
Abstract
Conducting a literature review is a mandatory initial stage in scientific research on a specific topic. However, this task is becoming much more complicated in certain areas (such as food science and technology) due to the huge increase in the number of scientific publications. Different tools based on artificial intelligence could be very useful for this purpose. This paper addresses this challenge by developing and testing different tools applied to an emerging topic in food science and technology: “hybrid meat products”. The first tool, based on Natural Language Processing, was used to select and reduce the initial number of papers obtained from a traditional bibliographic search (using common scientific databases such as Web of Science and Scopus) from 938 to 178 (an 81% reduction). The second tool was based on the interplay between Retrieval-Augmented Generation (RAG) and LLAMA 3 and was used to answer key questions on the topic under review (“hybrid meat products”), with the context limited to the papers retained after applying the first AI tool. This new strategy for reviewing scientific literature could be a major advance on the traditional literature review procedure, making it faster, more open, more accessible to everyone, more effective, more objective, and more efficient, all of which helps to fulfill the principles of open science.
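In the spirit of the first tool, a relevance filter over retrieved abstracts can be sketched with TF-IDF and cosine similarity; the paper does not publish its exact method, so the snippet below, including the topic string and threshold, is purely an assumed illustration.

```python
# Assumed illustration of an NLP relevance filter (not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "hybrid meat products: partial replacement of meat with plant proteins"
abstracts = [
    "Partial substitution of pork with pea protein in cooked sausages ...",
    "A survey of consumer attitudes toward electric vehicles ...",
]  # placeholders for the 938 retrieved records

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([topic] + abstracts)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()  # topic vs. each paper

keep = [a for a, s in zip(abstracts, scores) if s > 0.1]  # illustrative threshold
print(f"Kept {len(keep)} of {len(abstracts)} abstracts")
```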
Open Access Review
Edge Computing and Cloud Computing for Internet of Things: A Review
by Francesco Cosimo Andriulo, Marco Fiore, Marina Mongiello, Emanuele Traversa and Vera Zizzo
Informatics 2024, 11(4), 71; https://doi.org/10.3390/informatics11040071 - 30 Sep 2024
Abstract
The rapid expansion of the Internet of Things ecosystem has created an urgent need for efficient data processing and analysis technologies. This review aims to systematically examine and compare edge computing, cloud computing, and hybrid architectures, focusing on their applications within IoT environments. The methodology involved a comprehensive search and analysis of peer-reviewed journals, conference proceedings, and industry reports, highlighting recent advancements in computing technologies for IoT. Key findings reveal that edge computing excels in reducing latency and enhancing data privacy through localized processing, while cloud computing offers superior scalability and flexibility. Hybrid approaches, such as fog and mist computing, present a promising solution by combining the strengths of both edge and cloud systems. These hybrid models optimize bandwidth use and support low-latency, privacy-sensitive applications in IoT ecosystems. Hybrid architectures are identified as particularly effective for scenarios requiring efficient bandwidth management and low-latency processing. These models represent a significant step forward in addressing the limitations of both edge and cloud computing for IoT, offering a balanced approach to data analysis and resource management.
Open Access Review
A Review on Trending Machine Learning Techniques for Type 2 Diabetes Mellitus Management
by Panagiotis D. Petridis, Aleksandra S. Kristo, Angelos K. Sikalidis and Ilias K. Kitsas
Informatics 2024, 11(4), 70; https://doi.org/10.3390/informatics11040070 - 27 Sep 2024
Abstract
Type 2 diabetes mellitus (T2DM) is a chronic disease characterized by elevated blood glucose levels and insulin resistance, leading to multiple organ damage with implications for quality of life and lifespan. In recent years, the rising prevalence of T2DM globally has coincided with the digital transformation of medicine and healthcare, including extensive electronic health records (EHRs) for patients and healthy individuals. Numerous research articles and systematic reviews have been produced to report innovative findings and summarize current developments and applications of data science in the life sciences, medicine, and healthcare. The present review is conducted in the context of T2DM and machine learning, examining relatively recent publications that use tabular data and describing the relevant use cases, the workflows during model building, and the candidate predictors. Our work indicates that Gradient Boosting and tree-based models are the most successful, with SHAP (Shapley additive explanations) and wrapper algorithms being popular feature interpretation and evaluation methods, and it highlights urinary markers and dietary intake as emerging diabetes predictors alongside the typical invasive ones. These results could offer insight toward better management of diabetes and open new avenues for research.
(This article belongs to the Section Machine Learning)
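The workflow the review reports as most common, a gradient-boosted model on tabular EHR features explained with SHAP values, looks roughly like the sketch below; the feature names and synthetic data are assumptions for illustration.

```python
# Illustrative gradient boosting + SHAP workflow on synthetic tabular data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "fasting_glucose": rng.normal(100, 20, 800),
    "bmi": rng.normal(27, 5, 800),
    "urinary_marker": rng.normal(1.0, 0.3, 800),  # emerging predictor per the review
})
y = (X["fasting_glucose"] + 50 * X["urinary_marker"] > 150).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("Test accuracy:", model.score(X_te, y_te))

# TreeExplainer attributes each prediction to the input features
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```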
Topics
Topic in Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Topic in Brain Sciences, Healthcare, Informatics, IJERPH, JCM, Reports
Applications of Virtual Reality Technology in Rehabilitation
Topic Editors: Jorge Oliveira, Pedro Gamito
Deadline: 30 June 2025
Topic in Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025
Special Issues
Special Issue in Informatics
The Smart Cities Continuum via Machine Learning and Artificial Intelligence
Guest Editors: Augusto Neto, Roger Immich
Deadline: 31 December 2024
Special Issue in Informatics
AI for the People: An Ubuntu Approach to Transforming Health, Education, and Economic Landscapes
Guest Editors: Lufuno Makhado, Takalani Samuel Mashau, Nombulelo Sepeng
Deadline: 31 May 2025