Computers, Volume 13, Issue 7 (July 2024) – 28 articles

Cover Story: Adam, a resident of a modern apartment complex, returns home after a long day at work. As they approach the gate, a Bluetooth Low Energy (BLE) beacon installed near the gate detects their smartphone. The beacon signals the gate's Identification: Legitimate or Counterfeit (ILC) system, which is integrated with Adam's mobile app. Upon receiving the signal, the ILC system recognizes Adam's walking steps and compares them with Adam's stored walking pattern to verify their authorization. Once confirmed, the motor activates with a soft hum and the gate slides smoothly open, allowing Adam to enter without any physical interaction. Adam appreciates the convenience of using their phone to access the gate, especially since they do not have to fumble for keys or access cards.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 1542 KiB  
Review
Data Lakes: A Survey of Concepts and Architectures
by Sarah Azzabi, Zakiya Alfughi and Abdelkader Ouda
Computers 2024, 13(7), 183; https://doi.org/10.3390/computers13070183 - 22 Jul 2024
Viewed by 5213
Abstract
This paper presents a comprehensive literature review on the evolution of data-lake technology, with a particular focus on data-lake architectures. By systematically examining the existing body of research, we identify and classify the major types of data-lake architectures that have been proposed and implemented over time. The review highlights key trends in the development of data-lake architectures, identifies the primary challenges faced in their implementation, and discusses future directions for research and practice in this rapidly evolving field. We have developed diagrammatic representations to highlight the evolution of various architectures. These diagrams use consistent notations across all architectures to further enhance the comparative analysis of the different architectural components. We also explore the differences between data warehouses and data lakes. Our findings provide valuable insights for researchers and practitioners seeking to understand the current state of data-lake technology and its potential future trajectory. Full article
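As a rough, hypothetical illustration of the zone-based layout many surveyed data-lake architectures share (a raw zone for schema-on-read landing, a curated zone for cleaned, typed data), consider this minimal stdlib sketch; the zone names and record shapes are assumptions for illustration, not taken from the survey:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def ingest_raw(lake: Path, source: str, record: dict) -> Path:
    """Land a record unmodified in the raw zone (schema-on-read)."""
    raw = lake / "raw" / source
    raw.mkdir(parents=True, exist_ok=True)
    path = raw / f"{record['id']}.json"
    path.write_text(json.dumps(record))
    return path

def curate(lake: Path, source: str) -> list:
    """Promote complete, typed records to the curated zone (schema-on-write)."""
    curated = lake / "curated" / source
    curated.mkdir(parents=True, exist_ok=True)
    records = []
    for f in sorted((lake / "raw" / source).glob("*.json")):
        rec = json.loads(f.read_text())
        if rec.get("value") is not None:        # drop incomplete records
            rec["value"] = float(rec["value"])  # enforce a type on write
            (curated / f.name).write_text(json.dumps(rec))
            records.append(rec)
    return records

with TemporaryDirectory() as d:
    lake = Path(d)
    ingest_raw(lake, "sensors", {"id": "a", "value": "3.5"})
    ingest_raw(lake, "sensors", {"id": "b", "value": None})
    good = curate(lake, "sensors")
    print(len(good))  # 1: only the complete record is promoted
```

In a real lake the zones would live in object storage with columnar formats and catalog metadata; the point here is only the raw-to-curated promotion step that distinguishes a lake from a load-time-validated warehouse.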

21 pages, 430 KiB  
Article
Real-Time Detection of Face Mask Usage Using Convolutional Neural Networks
by Athanasios Kanavos, Orestis Papadimitriou, Khalil Al-Hussaeni, Manolis Maragoudakis and Ioannis Karamitsos
Computers 2024, 13(7), 182; https://doi.org/10.3390/computers13070182 - 22 Jul 2024
Viewed by 1078
Abstract
The widespread adoption of face masks has been a crucial strategy in mitigating the spread of infectious diseases, particularly in communal settings. However, ensuring compliance with mask-wearing directives remains a significant challenge due to inconsistencies in usage and the difficulty in monitoring adherence in real time. This paper addresses these challenges by leveraging advanced deep learning techniques within computer vision to develop a real-time mask detection system. We have designed a sophisticated convolutional neural network (CNN) model, trained on a diverse and comprehensive dataset that includes various environmental conditions and mask-wearing behaviors. Our model demonstrates a high degree of accuracy in detecting proper mask usage, thereby significantly enhancing the ability of organizations and public health authorities to enforce mask-wearing rules effectively. The key contributions of this research include the development of a robust real-time monitoring system that can be integrated into existing surveillance infrastructures to improve public health safety measures during ongoing and future health crises. Furthermore, this study lays the groundwork for future advancements in automated compliance monitoring systems, extending their applicability to other areas of public health and safety. Full article
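For readers unfamiliar with the operation at the heart of such a CNN, the following self-contained sketch shows the convolution-plus-ReLU step on a toy image; the kernel and pixel values are illustrative only, not the paper's trained weights:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

def relu(fmap):
    """Elementwise ReLU non-linearity applied to a feature map."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A vertical-edge kernel responds where intensity changes left to right --
# the kind of low-level feature early CNN layers learn, e.g. mask boundaries.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
features = relu(conv2d(image, kernel))
print(features)  # the middle column, where the edge sits, responds strongly
```

A full detector stacks many such layers with learned kernels and a classification head; frameworks perform this same arithmetic, just vectorized.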

6 pages, 944 KiB  
Reply
Reply to Damaševičius, R. Comment on “Cárdenas-García, J.F. Info-Autopoiesis and the Limits of Artificial General Intelligence. Computers 2023, 12, 102”
by Jaime F. Cárdenas-García
Computers 2024, 13(7), 181; https://doi.org/10.3390/computers13070181 - 22 Jul 2024
Viewed by 722
Abstract
The author thanks and acknowledges the many positive and critical comments by Robertas Damaševičius [...] Full article

20 pages, 17178 KiB  
Article
Stego-STFAN: A Novel Neural Network for Video Steganography
by Guilherme Fay Vergara, Pedro Giacomelli, André Luiz Marques Serrano, Fábio Lúcio Lopes de Mendonça, Gabriel Arquelau Pimenta Rodrigues, Guilherme Dantas Bispo, Vinícius Pereira Gonçalves, Robson de Oliveira Albuquerque and Rafael Timóteo de Sousa Júnior
Computers 2024, 13(7), 180; https://doi.org/10.3390/computers13070180 - 19 Jul 2024
Viewed by 1315
Abstract
This article presents an innovative approach to video steganography called Stego-STFAN. By using a lightweight model that exploits the temporal and spatial domains together and makes fine adjustments in each frame, Stego-STFAN achieves a PSNRc metric of 27.03 and a PSNRS of 23.09, close to the state of the art. Steganography is the practice of hiding a message so that third parties cannot perceive that communication is taking place. One of its central precautions is the size of the message to be hidden, since the security of the message is inversely proportional to its size. Inspired by this principle, video steganography expands the available channels for embedding data. To construct better stego-frames and recovered secrets, we propose a new architecture for video steganography derived from the Spatial-Temporal Adaptive Filter Network (STFAN) combined with an attention mechanism, which together generate filters and map dynamic frames to increase the efficiency and effectiveness of frame processing. The architecture exploits the redundancy present in the temporal dimension of the video, as well as fine details such as edges, fast-moving pixels, and the context of secret and cover frames, and uses the DWT method as an additional feature-extraction level, with the same characteristics as when applied to an image file. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
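The size-versus-security principle the abstract invokes can be illustrated with classic least-significant-bit (LSB) embedding, the simplest precursor of learned approaches like Stego-STFAN; this stdlib sketch is not the paper's neural method, only the underlying idea of hiding payload bits in imperceptible cover changes:

```python
def embed(cover: bytes, secret: bytes) -> bytes:
    """Hide `secret` in the least-significant bits of `cover` bytes."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("secret too large for cover")  # capacity limit
    stego = bytearray(cover)
    for k, bit in enumerate(bits):
        stego[k] = (stego[k] & 0xFE) | bit   # rewrite only the LSB
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden payload from the LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        value = 0
        for i in range(8):
            value = (value << 1) | (stego[b * 8 + i] & 1)
        out.append(value)
    return bytes(out)

cover = bytes(range(64))      # stands in for 64 cover pixels
stego = embed(cover, b"hi")
print(extract(stego, 2))      # b'hi'
```

Each hidden bit costs one cover byte, so a larger secret disturbs more of the cover; that is the inverse relation between message size and security noted above, which learned spatio-temporal methods relax by spreading payload across redundant video frames.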

29 pages, 4733 KiB  
Article
Flexural Eigenfrequency Analysis of Healthy and Pathological Tissues Using Machine Learning and Nonlocal Viscoelasticity
by Ali Farajpour and Wendy V. Ingman
Computers 2024, 13(7), 179; https://doi.org/10.3390/computers13070179 - 19 Jul 2024
Cited by 1 | Viewed by 1078
Abstract
Biomechanical characteristics can be used to assist the early detection of many diseases, including breast cancer, thyroid nodules, prostate cancer, liver fibrosis, ovarian diseases, and tendon disorders. In this paper, a scale-dependent viscoelastic model is developed to assess the biomechanical behaviour of biological tissues subject to flexural waves. The nonlocal strain gradient theory, in conjunction with machine learning techniques such as extreme gradient boosting, k-nearest neighbours, support vector machines, and random forest, is utilised to develop a computational platform for biomechanical analysis. The coupled governing differential equations are derived using Hamilton’s law. Transverse wave analysis is conducted to investigate different normal and pathological human conditions including ovarian cancer, breast cancer, and ovarian fibrosis. Viscoelastic, strain gradient, and nonlocal effects are used to describe the impact of fluid content, stiffness hardening caused by the gradients of strain components, and stiffness softening associated with the nonlocality of stress components within the biological tissues and cells. The integration of the scale-dependent biomechanical continuum model with machine learning facilitates the adoption of the developed model in practical applications by allowing for learning from clinical data, alongside the intrinsic mechanical laws that govern biomechanical responses. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
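As a point of reference for the scale-dependent model, the classical local baseline it perturbs is the Euler-Bernoulli flexural eigenfrequency; the schematic stiffness factor below is one commonly quoted nonlocal strain gradient form, given here as an assumption about the general shape of such corrections, not as the paper's derived equations:

```latex
% Classical (local, elastic) flexural eigenfrequencies of a simply
% supported Euler--Bernoulli beam -- the baseline a scale-dependent
% model perturbs:
\omega_n = \left(\frac{n\pi}{L}\right)^2 \sqrt{\frac{EI}{\rho A}},
\qquad n = 1, 2, \dots
% In nonlocal strain gradient theories the effective stiffness is
% scaled, schematically, by a wavenumber-dependent factor of the form
\frac{1 + l^2 k^2}{1 + (e_0 a)^2 k^2}
% where the strain-gradient length l hardens and the nonlocal
% parameter e_0 a softens the response, matching the hardening and
% softening effects described in the abstract.
```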

5 pages, 167 KiB  
Comment
Comment on Cárdenas-García, J.F. Info-Autopoiesis and the Limits of Artificial General Intelligence. Computers 2023, 12, 102
by Robertas Damaševičius
Computers 2024, 13(7), 178; https://doi.org/10.3390/computers13070178 - 19 Jul 2024
Cited by 1 | Viewed by 829
Abstract
In the article by Jaime F [...] Full article
20 pages, 298 KiB  
Article
Teachers’ Needs for Support during Emergency Remote Teaching in Greek Schools: Role of Social Networks
by Stefanos Nikiforos, Eleftheria Anastasopoulou, Athina Pappa, Spyros Tzanavaris and Katia Lida Kermanidis
Computers 2024, 13(7), 177; https://doi.org/10.3390/computers13070177 - 18 Jul 2024
Viewed by 957
Abstract
The onset of the COVID-19 pandemic prompted a rapid shift to Emergency Remote Teaching (ERT). Social networks had a key role in supporting the educational community in facing challenges and opportunities. A quantitative study was conducted to assess the Greek teachers’ perceptions of social network support. Findings indicated that teachers turned to universities, educational institutions, the Ministry of Education, school support groups, and virtual communities for support. Additionally, the study revealed the barriers faced by teachers, including infrastructure limitations, technical difficulties, skill deficiencies, problems with students’ engagement, and school policies. Teachers’ evaluation of support regarding ERT provided fruitful insight. The results illustrate teachers’ perspectives on ERT, contributing to the ongoing discourse on educational resilience to unpredictable disruptions. In conclusion, the role of social networks was considered as critical for the teachers to overcome barriers during ERT with the formation of social communities for support and the sharing of common experiences. Expertise in internet use and social networking played a significant role in readiness for the abrupt shift to distance education. The present study uniquely contributes to the educational field by emphasizing the role of teachers’ support as an innovative approach to holistically enhance teachers’ performance in ERT. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
19 pages, 1015 KiB  
Article
A Regularized Physics-Informed Neural Network to Support Data-Driven Nonlinear Constrained Optimization
by Diego Armando Perez-Rosero, Andrés Marino Álvarez-Meza and Cesar German Castellanos-Dominguez
Computers 2024, 13(7), 176; https://doi.org/10.3390/computers13070176 - 18 Jul 2024
Viewed by 1057
Abstract
Nonlinear optimization (NOPT) is a meaningful tool for solving complex tasks in fields such as engineering, economics, and operations research. However, NOPT struggles with data variability and noisy input measurements, which lead to incorrect solutions; furthermore, nonlinear constraints may produce outcomes that are infeasible or suboptimal, as in nonconvex optimization. This paper introduces a novel regularized physics-informed neural network (RPINN) framework as a new NOPT tool for both supervised and unsupervised data-driven scenarios. Our RPINN contribution is threefold. First, by using custom activation functions and regularization penalties in an artificial neural network (ANN), RPINN can handle data variability and noisy inputs. Second, it employs physics principles to construct the network architecture, computing the optimization variables from network weights and learned features. Third, it uses automatic-differentiation training to make the system scalable and to cut computation time through batch-based back-propagation. The test results for both supervised and unsupervised NOPT tasks show that our RPINN provides solutions that are competitive with state-of-the-art solvers. In turn, the robustness of RPINN against noisy input measurements makes it particularly valuable in environments with fluctuating information. Specifically, we test a uniform mixture model and a gas-powered system as NOPT scenarios. Overall, RPINN's ANN-based foundation offers significant flexibility and scalability. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
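The regularization-penalty ingredient can be illustrated on a tiny constrained problem solved by plain gradient descent; this stdlib sketch is not the RPINN architecture, only the quadratic-penalty idea it embeds in an ANN, and the objective, constraint, and hyperparameters are invented for illustration:

```python
def objective(x, y):
    """Unconstrained cost to minimize."""
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

def penalty(x, y):
    """Quadratic penalty enforcing the constraint x + y = 2."""
    return (x + y - 2.0) ** 2

def grad(x, y, rho):
    """Analytic gradient of objective + rho * penalty."""
    gx = 2.0 * (x - 2.0) + rho * 2.0 * (x + y - 2.0)
    gy = 2.0 * (y - 1.0) + rho * 2.0 * (x + y - 2.0)
    return gx, gy

# Gradient descent on the penalized cost, as back-propagation would do
# for a network whose loss carries the same penalty term.
x, y, rho, lr = 0.0, 0.0, 100.0, 0.004
for _ in range(20000):
    gx, gy = grad(x, y, rho)
    x -= lr * gx
    y -= lr * gy

print(round(x, 2), round(y, 2))  # 1.5 0.5, near the exact constrained optimum
```

Larger rho tightens constraint satisfaction at the cost of a stiffer optimization, which is the trade-off any penalty-regularized training loop has to balance.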

21 pages, 3742 KiB  
Article
A Framework for Cleaning Streaming Data in Healthcare: A Context and User-Supported Approach
by Obaid Alotaibi, Sarath Tomy and Eric Pardede
Computers 2024, 13(7), 175; https://doi.org/10.3390/computers13070175 - 16 Jul 2024
Viewed by 1104
Abstract
Nowadays, ubiquitous technology makes life easier, especially Internet of Things (IoT) devices, which generate data in various domains, including healthcare, industry, and education. However, the generated data often suffer from problems such as missing values, duplication, and data errors, which can significantly affect analysis results and lead to inaccurate decision making. Enhancing the quality of real-time data streams has therefore become a challenging and crucial task. In this paper, we propose a framework to improve the quality of a real-time data stream by considering different aspects, including context-awareness. The proposed framework tackles several issues in the data stream, including duplicated data, missing values, and outliers. It also recommends appropriate data-cleaning techniques to the user to help improve data quality in real time, and it includes a data quality assessment that gives the user insight into the quality of the stream for better decisions. We present a prototype to examine the concept of the proposed framework, using a dataset collected in healthcare that is processed in a case study. The effectiveness of the framework is verified by its ability to detect and repair stream-data quality issues in the selected context and to recommend a context and data-cleaning techniques to the expert, supporting better decision making when providing healthcare advice to the patient. We evaluate our proposed framework by comparing it against previous works. Full article
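A minimal, hypothetical sketch of the three cleaning steps such a framework targets (duplicates, missing values, outliers) on one window of a stream; the imputation and outlier rules here are generic stand-ins, not the paper's context-aware recommendations:

```python
from statistics import mean, stdev

def clean_stream(window):
    """Deduplicate, impute missing values, and flag outliers in one
    window of (timestamp, value) readings from a data stream."""
    seen, deduped = set(), []
    for ts, value in window:
        if ts not in seen:                 # drop exact duplicates by timestamp
            seen.add(ts)
            deduped.append((ts, value))
    observed = [v for _, v in deduped if v is not None]
    fill = mean(observed)                  # simple mean imputation
    imputed = [(ts, v if v is not None else fill) for ts, v in deduped]
    mu = mean(v for _, v in imputed)
    sigma = stdev(v for _, v in imputed)
    # flag readings more than 2 standard deviations from the window mean
    return [(ts, v, abs(v - mu) > 2 * sigma) for ts, v in imputed]

# Heart-rate-like readings with a duplicate, a gap, and a spike:
window = [(1, 72.0), (1, 72.0), (2, None), (3, 75.0), (4, 74.0),
          (5, 73.0), (6, 71.0), (7, 190.0)]
cleaned = clean_stream(window)
print(cleaned)
```

A robust variant would impute with the median and use a context-dependent threshold; choosing between such techniques per context is exactly what the framework's recommendation component is meant to do.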

26 pages, 2622 KiB  
Review
A Comprehensive Review of Processing-in-Memory Architectures for Deep Neural Networks
by Rupinder Kaur, Arghavan Asad and Farah Mohammadi
Computers 2024, 13(7), 174; https://doi.org/10.3390/computers13070174 - 16 Jul 2024
Cited by 2 | Viewed by 4053
Abstract
This comprehensive review explores the advancements in processing-in-memory (PIM) techniques and chiplet-based architectures for deep neural networks (DNNs). It addresses the challenges of monolithic chip architectures and highlights the benefits of chiplet-based designs in terms of scalability and flexibility. This review emphasizes dataflow-awareness, communication optimization, and thermal considerations in PIM-enabled manycore architectures. It discusses tailored dataflow requirements for different machine learning workloads and presents a heterogeneous PIM system for energy-efficient neural network training. Additionally, it explores thermally efficient dataflow-aware monolithic 3D (M3D) NoC architectures for accelerating CNN inferencing. Overall, this review provides valuable insights into the development and evaluation of chiplet and PIM architectures, emphasizing improved performance, energy efficiency, and inference accuracy in deep learning applications. Full article

18 pages, 3199 KiB  
Article
Optimizing Convolutional Neural Networks for Image Classification on Resource-Constrained Microcontroller Units
by Susanne Brockmann and Tim Schlippe
Computers 2024, 13(7), 173; https://doi.org/10.3390/computers13070173 - 15 Jul 2024
Cited by 1 | Viewed by 1353
Abstract
Running machine learning algorithms for image classification locally on small, cheap, and low-power microcontroller units (MCUs) has advantages in terms of bandwidth, inference time, energy, reliability, and privacy for different applications. Therefore, TinyML focuses on deploying neural networks on MCUs with random access memory sizes between 2 KB and 512 KB and read-only memory storage capacities between 32 KB and 2 MB. Models designed for high-end devices are usually ported to MCUs using model scaling factors provided by the model architecture’s designers. However, our analysis shows that this naive approach of substantially scaling down convolutional neural networks (CNNs) for image classification using such default scaling factors results in suboptimal performance. Consequently, in this paper we present a systematic strategy for efficiently scaling down CNN model architectures to run on MCUs. Moreover, we present our CNN Analyzer, a dashboard-based tool for determining optimal CNN model architecture scaling factors for the downscaling strategy by gaining layer-wise insights into the model architecture scaling factors that drive model size, peak memory, and inference time. Using our strategy, we were able to introduce additional new model architecture scaling factors for MobileNet v1, MobileNet v2, MobileNet v3, and ShuffleNet v2 and to optimize these model architectures. Our best model variation outperforms the MobileNet v1 version provided in the MLPerf Tiny Benchmark on the Visual Wake Words image classification task, reducing the model size by 20.5% while increasing the accuracy by 4.0%. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
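The "model architecture scaling factors" mentioned above include MobileNet's width multiplier, which shrinks channel counts layer by layer; this sketch (with the customary round-to-a-multiple-of-8 rule included as an assumption) shows how the multiplier changes the parameter count of one depthwise-separable block:

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of a standard k x k convolution (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    """Depthwise k x k followed by pointwise 1 x 1, as in MobileNet v1."""
    return k * k * c_in + c_in * c_out

def scale_channels(channels, alpha, divisor=8):
    """Apply width multiplier alpha, rounding to a hardware-friendly multiple."""
    return max(divisor, int(channels * alpha + divisor / 2) // divisor * divisor)

# One MobileNet-style block scaled from alpha = 1.0 down to alpha = 0.25:
for alpha in (1.0, 0.5, 0.25):
    c_in, c_out = scale_channels(128, alpha), scale_channels(256, alpha)
    print(alpha, depthwise_separable_params(c_in, c_out))
# alpha, params: 1.0 -> 33920, 0.5 -> 8768, 0.25 -> 2336
```

The roughly quadratic drop in parameters is what makes naive uniform scaling tempting for MCUs, and also why layer-wise insight into where size, peak memory, and inference time actually accrue can find better operating points than the default factors.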

14 pages, 5149 KiB  
Article
Implementation of Integrated Development Environment for Machine Vision-Based IEC 61131-3
by Sun Lim, Un-Hyeong Ham and Seong-Min Han
Computers 2024, 13(7), 172; https://doi.org/10.3390/computers13070172 - 15 Jul 2024
Viewed by 1104
Abstract
IEC 61131-3 is an international standard for developing standardized software for automation and control systems. Machine vision systems are a prominent technology in the field of computer vision and are widely used in various industries, such as manufacturing, robotics, healthcare, and automotive, and are often combined with AI technologies. In industrial automation systems, software developed for defect detection or product classification typically involves separate systems for automation and machine vision programs, leading to increased system complexity and unnecessary resource wastage. To address these limitations, this study proposes an IEC 61131-3-based integrated development environment for programmable machine vision. We selected 11 APIs commonly used in machine vision systems, evaluated their functions in an IEC 61131-3 compliant development environment, and measured the performance of representative machine vision applications. This approach demonstrates the feasibility of developing PLC and machine vision programs within a single-controller system. We investigated the impact of controller performance on function execution. Full article

37 pages, 18036 KiB  
Article
Node Classification of Network Threats Leveraging Graph-Based Characterizations Using Memgraph
by Sadaf Charkhabi, Peyman Samimi, Sikha S. Bagui, Dustin Mink and Subhash C. Bagui
Computers 2024, 13(7), 171; https://doi.org/10.3390/computers13070171 - 15 Jul 2024
Viewed by 1208
Abstract
This research leverages Memgraph, an open-source graph database, to analyze graph-based network data and applies Graph Neural Networks (GNNs) to a detailed classification of cyberattack tactics categorized by the MITRE ATT&CK framework. As part of the graph characterization, PageRank, degree centrality, betweenness centrality, and Katz centrality are presented. Node classification is used to categorize network entities based on their role in the traffic, with graph-theoretic features such as in-degree, out-degree, PageRank, and Katz centrality ensuring that the model captures the structure of the graph. The study utilizes the UWF-ZeekDataFall22 dataset, a newly created dataset consisting of labeled network logs from the University of West Florida's Cyber Range. The uniqueness of this study lies in combining graph-based characterization with machine learning to enhance the understanding and visualization of cyber threats, thereby improving network security measures. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence 2024)
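The PageRank feature used in the node classification can be computed in a few lines of stdlib Python; the toy traffic graph below is invented for illustration and is unrelated to the UWF-ZeekDataFall22 data:

```python
def pagerank(edges, damping=0.85, iters=100):
    """Iterative PageRank on a directed graph given as (src, dst) pairs."""
    nodes = sorted({n for e in edges for n in e})
    out_deg = {n: 0 for n in nodes}
    for s, _ in edges:
        out_deg[s] += 1
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for s, d in edges:
            nxt[d] += damping * rank[s] / out_deg[s]
        # dangling nodes (no out-edges) redistribute their mass uniformly
        dangling = sum(rank[n] for n in nodes if out_deg[n] == 0)
        for n in nodes:
            nxt[n] += damping * dangling / len(nodes)
        rank = nxt
    return rank

# Toy traffic graph: three hosts and a gateway all talk to one server.
edges = [("h1", "srv"), ("h2", "srv"), ("h3", "srv"),
         ("gw", "srv"), ("srv", "gw")]
rank = pagerank(edges)
print(max(rank, key=rank.get))  # srv: the fan-in node scores highest
```

In a threat-hunting setting the high-PageRank node is the structurally central entity, which is why such scores make informative input features alongside in-degree and out-degree.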

16 pages, 9862 KiB  
Article
Interactive Application as a Teaching Aid in Mechanical Engineering
by Peter Weis, Lukáš Smetanka, Slavomír Hrček and Matúš Vereš
Computers 2024, 13(7), 170; https://doi.org/10.3390/computers13070170 - 10 Jul 2024
Viewed by 1164
Abstract
This paper examines the integration of interactive 3D applications into the teaching process in mechanical engineering education. An innovative interactive 3D application has been developed as a teaching aid for engineering students. The main advantage is its easy availability through a web browser on mobile devices or desktop computers. It includes four explorable 3D gearbox models with assembly animations, linked technical information, and immersive virtual and augmented reality (AR) experiences. The benefits of using this application in the teaching process were monitored on a group of students at the end of the semester. Assessments conducted before and after the use of the interactive 3D application measured learning outcomes. Qualitative feedback from students was also collected. The results demonstrated significant improvements in engagement, spatial awareness, and understanding of gearbox principles compared to traditional methods. The versatility and accessibility of the application also facilitated self-directed learning, reducing the need for external resources. These findings indicate that interactive 3D tools have the potential to enhance student learning and engagement and to promote sustainable practices in engineering education. Future research could explore the scalability and applicability of these tools across different engineering disciplines and educational contexts. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)

26 pages, 17391 KiB  
Article
Internet of Things-Based Robust Green Smart Grid
by Rania A. Ahmed, M. Abdelraouf, Shaimaa Ahmed Elsaid, Mohammed ElAffendi, Ahmed A. Abd El-Latif, A. A. Shaalan and Abdelhamied A. Ateya
Computers 2024, 13(7), 169; https://doi.org/10.3390/computers13070169 - 8 Jul 2024
Viewed by 1538
Abstract
Renewable energy sources play a critical role in the energy management and sustainability plans of all governments and organizations. The solar cell is one such renewable resource, capable of generating power even in remote, unpopulated areas. Integrating these renewable sources with smart grids yields green smart grids. Smart grids are critical for modernizing electricity distribution, using new communication technologies to improve the efficiency, reliability, and sustainability of the power system. They help balance supply and demand by allowing real-time monitoring and administration, accommodating renewable energy sources, and reducing outages. However, their deployment faces considerable challenges, including high upfront expenditures and the need for substantial, reliable infrastructure changes. Despite these challenges, shifting to green smart grids is critical for a resilient and adaptable energy future that can fulfill changing consumer demands and environmental aims. To this end, this work develops a reliable Internet of Things (IoT)-based green smart grid. The proposed green grid integrates traditional grids with solar energy and provides a control unit between the generation and consumption parts of the grid. The work deploys intelligent IoT units to control energy demands and manage energy consumption effectively. The proposed framework deploys the paradigm of distributed edge computing at four levels to provide efficient data offloading and power management. The developed green grid outperformed traditional grids in reliability and energy efficiency, reducing energy consumption over the distribution area by an average of 24.3% compared to traditional grids. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)

24 pages, 13967 KiB  
Article
Transforming Digital Marketing with Generative AI
by Tasin Islam, Alina Miron, Monomita Nandy, Jyoti Choudrie, Xiaohui Liu and Yongmin Li
Computers 2024, 13(7), 168; https://doi.org/10.3390/computers13070168 - 8 Jul 2024
Cited by 2 | Viewed by 6589
Abstract
The current marketing landscape faces challenges in content creation and innovation, relying heavily on manually created content and traditional channels like social media and search engines. While effective, these methods often lack the creativity and uniqueness needed to stand out in a competitive market. To address this, we introduce MARK-GEN, a conceptual framework that utilises generative artificial intelligence (AI) models to transform marketing content creation. MARK-GEN provides a comprehensive, structured approach for businesses to employ generative AI in producing marketing materials, representing a new method in digital marketing strategies. We present two case studies within the fashion industry, demonstrating how MARK-GEN can generate compelling marketing content using generative AI technologies. This proposition paper builds on our previous technical developments in virtual try-on models, including image-based, multi-pose, and image-to-video techniques, and is intended for a broad audience, particularly those in business management. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

14 pages, 1849 KiB  
Article
Using Artificial Intelligence to Predict the Aerodynamic Properties of Wind Turbine Profiles
by Ziemowit Malecha and Adam Sobczyk
Computers 2024, 13(7), 167; https://doi.org/10.3390/computers13070167 - 8 Jul 2024
Viewed by 1184
Abstract
This study describes the use of artificial intelligence to predict the aerodynamic properties of wind turbine profiles. The goal was to determine the lift coefficient of an airfoil using its geometry as input. Calculations based on XFoil were taken as the target for the predictions. The lift coefficient for a single flow case was the value to be learned by the trained algorithm. Airfoil geometry data were collected from the UIUC Airfoil Data Site. Geometries in coordinate format were converted to PARSEC parameters, which served directly as features for the random forest regression algorithm. The training dataset comprised 60% of the base dataset records; the rest was used to test the model. Five different datasets were tested. The results calculated for the test part of the base dataset were compared with the actual values of the lift coefficients. The developed prediction model achieved a coefficient of determination between 0.83 and 0.87, a promising result for further research. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
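The pipeline the abstract describes (shape parameters in, lift coefficient out, random forest regression, 60/40 split) can be sketched as follows; the feature matrix and response below are synthetic stand-ins, not PARSEC parameters or XFoil results:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for PARSEC-style shape parameters (11 per airfoil)
# and a made-up smooth response playing the role of the XFoil lift
# coefficient; only two of the eleven features carry signal here.
X = rng.uniform(-1.0, 1.0, size=(500, 11))
y = 1.5 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)

# 60% of the records for training, the rest for testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# Coefficient of determination (R^2) on the held-out test records.
r2 = model.score(X_te, y_te)
```

On data this simple the forest recovers the relationship easily; on real airfoil geometry the attainable R^2 (0.83–0.87 in the study) reflects the harder mapping.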

22 pages, 4727 KiB  
Article
Hardware-Based Implementation of Algorithms for Data Replacement in Cache Memory of Processor Cores
by Larysa Titarenko, Vyacheslav Kharchenko, Vadym Puidenko, Artem Perepelitsyn and Alexander Barkalov
Computers 2024, 13(7), 166; https://doi.org/10.3390/computers13070166 - 5 Jul 2024
Viewed by 958
Abstract
Replacement policies play an important role in the functioning of the cache memory of processor cores. Implementing a successful policy increases the performance of the processor core and of the computer system as a whole. Replacement policies are most often evaluated by the percentage of cache hits during processor bus cycles when accessing the cache memory. Policies that replace the Least Recently Used (LRU) or Least Frequently Used (LFU) elements, whether instructions or data, remain relevant. It should be noted that in the paging cache buffer, the above replacement policies can also be used to replace address information. The pseudo-LRU (PLRU) policy performs replacement based on approximate information about the age of the elements in cache memory. Any replacement policy algorithm is ultimately implemented in hardware as a circuit, and this part of the processor core has certain characteristics: the latency of the search for a replacement candidate, the gate complexity, and the reliability. The characteristics of the PLRUt and PLRUm replacement policies are synthesized and investigated. Both are varieties of the PLRU replacement policy, which approaches the LRU policy in terms of cache hit percentage. In the current study, the hardware implementation of these policies is evaluated, and the possibility of adapting the processor core to either policy according to a selected priority characteristic is analyzed. The growth of delay and gate complexity with increasing cache associativity is shown, as is the advantage of the hardware implementation of the PLRUt algorithm over the PLRUm algorithm at higher associativity values. Full article
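As an illustration of how a tree-based pseudo-LRU (PLRUt) circuit behaves, here is a minimal software model of victim selection for a single 4-way set; the bit convention chosen is one of several used in practice, not necessarily the paper's:

```python
class TreePLRU4:
    """Tree-based pseudo-LRU (PLRUt) victim selection for one 4-way set.

    Three tree bits: b[0] is the root, b[1] covers ways 0/1, b[2] covers
    ways 2/3. Convention: a bit value of 0 means "the next victim lies
    in the left subtree".
    """

    def __init__(self):
        self.b = [0, 0, 0]

    def access(self, way):
        # On a hit or a fill, flip the bits on the path so they point
        # *away* from the just-used way.
        if way < 2:
            self.b[0] = 1        # victim side is now the right pair
            self.b[1] = 1 - way  # point away from way 0 or 1
        else:
            self.b[0] = 0        # victim side is now the left pair
            self.b[2] = 3 - way  # point away from way 2 or 3

    def victim(self):
        # Follow the bits from the root to the replacement candidate.
        if self.b[0] == 0:
            return 0 if self.b[1] == 0 else 1
        return 2 if self.b[2] == 0 else 3
```

In hardware this is a handful of flip-flops and multiplexers per set, which is why PLRU's delay and gate complexity scale so much better with associativity than exact LRU bookkeeping.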

22 pages, 1911 KiB  
Article
Automation Bias and Complacency in Security Operation Centers
by Jack Tilbury and Stephen Flowerday
Computers 2024, 13(7), 165; https://doi.org/10.3390/computers13070165 - 3 Jul 2024
Cited by 2 | Viewed by 1212
Abstract
The volume and complexity of alerts that security operation center (SOC) analysts must manage necessitate automation. Increased automation in SOCs amplifies the risk of automation bias and complacency whereby security analysts become over-reliant on automation, failing to seek confirmatory or contradictory information. To identify automation characteristics that assist in the mitigation of automation bias and complacency, we investigated the current and proposed application areas of automation in SOCs and discussed its implications for security analysts. A scoping review of 599 articles from four databases was conducted. The final 48 articles were reviewed by two researchers for quality control and were imported into NVivo14. Thematic analysis was performed, and the use of automation throughout the incident response lifecycle was recognized, predominantly in the detection and response phases. Artificial intelligence and machine learning solutions are increasingly prominent in SOCs, yet support for the human-in-the-loop component is evident. The research culminates by contributing the SOC Automation Implementation Guidelines (SAIG), comprising functional and non-functional requirements for SOC automation tools that, if implemented, permit a mutually beneficial relationship between security analysts and intelligent machines. This is of practical value to human automation researchers and SOCs striving to optimize processes. Theoretically, a continued understanding of automation bias and its components is achieved. Full article

27 pages, 6430 KiB  
Article
Integrity and Privacy Assurance Framework for Remote Healthcare Monitoring Based on IoT
by Salah Hamza Alharbi, Ali Musa Alzahrani, Toqeer Ali Syed and Saad Said Alqahtany
Computers 2024, 13(7), 164; https://doi.org/10.3390/computers13070164 - 3 Jul 2024
Cited by 2 | Viewed by 1593
Abstract
Remote healthcare monitoring (RHM) has become a pivotal component of modern healthcare, offering a crucial lifeline to numerous patients. Ensuring the integrity and privacy of the data generated and transmitted by IoT devices is of paramount importance. The integration of blockchain technology and smart contracts has emerged as a pioneering solution to fortify the security of internet of things (IoT) data transmissions within the realm of healthcare monitoring. In today’s healthcare landscape, the IoT plays a pivotal role in remotely monitoring and managing patients’ well-being. Furthermore, blockchain’s decentralized and immutable ledger ensures that all IoT data transactions are securely recorded, timestamped, and resistant to unauthorized modifications. This heightened level of data security is critical in healthcare, where the integrity and privacy of patient information are nonnegotiable. This research endeavors to harness the power of blockchain and smart contracts to establish a robust and tamper-proof framework for healthcare IoT data. Employing smart contracts, which are self-executing agreements programmed with predefined rules, enables us to automate and validate data transactions within the IoT ecosystem. These contracts execute automatically when specific conditions are met, eliminating the need for manual intervention and oversight. This automation not only streamlines the process of data processing but also enhances its accuracy and reliability by reducing the risk of human error. Additionally, smart contracts provide a transparent and tamper-proof mechanism for verifying the validity of transactions, thereby mitigating the risk of fraudulent activities. By leveraging smart contracts, organizations can ensure the integrity and efficiency of data transactions within the IoT ecosystem, leading to improved trust, transparency, and security. 
Our experiments demonstrate the application of a blockchain approach to secure transmissions in IoT for RHM, as will be illustrated in the paper. This showcases the practical applicability of blockchain technology in real-world scenarios. Full article
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
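The tamper-evidence property that the abstract attributes to the blockchain ledger can be illustrated with a toy hash chain; this is a minimal sketch of the chaining idea only, with hypothetical field names, not the paper's framework or a real blockchain with consensus and smart contracts:

```python
import hashlib
import json


def _digest(block):
    # Deterministic digest over the block contents (order-stable JSON).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class MiniLedger:
    """Toy append-only hash chain for IoT readings.

    Each block stores the digest of its predecessor, so altering any
    recorded reading breaks every later link and is detectable.
    """

    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def append(self, reading):
        self.chain.append({
            "index": len(self.chain),
            "data": reading,
            "prev": _digest(self.chain[-1]),
        })

    def verify(self):
        # Every block must reference the digest of its predecessor.
        return all(self.chain[i]["prev"] == _digest(self.chain[i - 1])
                   for i in range(1, len(self.chain)))
```

Appending two readings and then editing one of them in place makes `verify()` flip from `True` to `False`, which is the integrity guarantee the paper builds on at full blockchain scale.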

25 pages, 437 KiB  
Article
Enhancing the Security of Classical Communication with Post-Quantum Authenticated-Encryption Schemes for the Quantum Key Distribution
by Farshad Rahimi Ghashghaei, Yussuf Ahmed, Nebrase Elmrabit and Mehdi Yousefi
Computers 2024, 13(7), 163; https://doi.org/10.3390/computers13070163 - 1 Jul 2024
Cited by 1 | Viewed by 2157
Abstract
This research aims to establish a secure system for key exchange by using post-quantum cryptography (PQC) schemes in the classic channel of quantum key distribution (QKD). Modern cryptography faces significant threats from quantum computers, which can solve classical problems rapidly. PQC schemes address critical security challenges in QKD, particularly in authentication and encryption, to ensure reliable communication across quantum and classical channels. Another objective of this study is to balance security and communication speed among various PQC algorithms at different security levels, specifically CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon, which are finalists in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization project. The quantum channel of QKD is simulated with Qiskit, a comprehensive and well-supported tool in the field of quantum computing. By providing a detailed analysis of the performance of these three algorithms alongside Rivest–Shamir–Adleman (RSA), the results will guide companies and organizations in selecting an optimal combination for their QKD systems to achieve a reliable balance between efficiency and security. Our findings demonstrate that the implemented PQC schemes effectively address the security challenges posed by quantum computers while keeping performance similar to RSA. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)

13 pages, 803 KiB  
Article
Bridging the Gap between Project-Oriented and Exercise-Oriented Automatic Assessment Tools
by Bruno Pereira Cipriano, Bernardo Baltazar, Nuno Fachada, Athanasios Vourvopoulos and Pedro Alves
Computers 2024, 13(7), 162; https://doi.org/10.3390/computers13070162 - 30 Jun 2024
Cited by 1 | Viewed by 1008
Abstract
In this study, we present the DP Plugin for IntelliJ IDEA, designed to extend the Drop Project (DP) Automatic Assessment Tool (AAT) by making it more suitable for handling small exercises in exercise-based learning environments. Our aim was to address the limitations of DP in supporting small assignments while retaining its strengths in project-based learning. The plugin leverages DP’s REST API to streamline the submission process, integrating assignment instructions and feedback directly within the IDE. A student survey conducted during the 2022/23 academic year revealed a positive reception, highlighting benefits such as time efficiency and ease of use. Students also provided valuable feedback, leading to various improvements that have since been integrated into the plugin. Despite these promising results, the study is limited by the relatively small percentage of survey respondents. Our findings suggest that an IDE plugin can significantly improve the usability of project-oriented AATs for small exercises, informing the development of future educational tools suitable for mixed project-based and exercise-based learning environments. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

17 pages, 1626 KiB  
Article
Modeling Autonomous Vehicle Responses to Novel Observations Using Hierarchical Cognitive Representations Inspired Active Inference
by Sheida Nozari, Ali Krayani, Pablo Marin, Lucio Marcenaro, David Martin Gomez and Carlo Regazzoni
Computers 2024, 13(7), 161; https://doi.org/10.3390/computers13070161 - 28 Jun 2024
Viewed by 1072
Abstract
Equipping autonomous agents for dynamic interaction and navigation is a significant challenge in intelligent transportation systems. This study aims to address this by implementing a brain-inspired model for decision making in autonomous vehicles. We employ active inference, a Bayesian approach that models decision-making processes similar to the human brain, focusing on the agent’s preferences and the principle of free energy. This approach is combined with imitation learning to enhance the vehicle’s ability to adapt to new observations and make human-like decisions. The research involved developing a multi-modal self-awareness architecture for autonomous driving systems and testing this model in driving scenarios, including abnormal observations. The results demonstrated the model’s effectiveness in enabling the vehicle to make safe decisions, particularly in unobserved or dynamic environments. The study concludes that the integration of active inference with imitation learning significantly improves the performance of autonomous vehicles, offering a promising direction for future developments in intelligent transportation systems. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)

24 pages, 578 KiB  
Article
An NLP-Based Exploration of Variance in Student Writing and Syntax: Implications for Automated Writing Evaluation
by Maria Goldshtein, Amin G. Alhashim and Rod D. Roscoe
Computers 2024, 13(7), 160; https://doi.org/10.3390/computers13070160 - 25 Jun 2024
Viewed by 1031
Abstract
In writing assessment, expert human evaluators ideally judge individual essays with attention to variance among writers’ syntactic patterns. There are many ways to compose text successfully or less successfully. For automated writing evaluation (AWE) systems to provide accurate assessment and relevant feedback, they must be able to consider similar kinds of variance. The current study employed natural language processing (NLP) to explore variance in syntactic complexity and sophistication across clusters characterized in a large corpus (n = 36,207) of middle school and high school argumentative essays. Using NLP tools, k-means clustering, and discriminant function analysis (DFA), we observed that student writers employed four distinct syntactic patterns: (1) familiar and descriptive language, (2) consistently simple noun phrases, (3) variably complex noun phrases, and (4) moderate complexity with less familiar language. Importantly, each pattern spanned the full range of writing quality; there were no syntactic patterns consistently evaluated as “good” or “bad”. These findings support the need for nuanced approaches in automated writing assessment while informing ways that AWE can participate in that process. Future AWE research can and should explore similar variability across other detectable elements of writing (e.g., vocabulary, cohesion, discursive cues, and sentiment) via diverse modeling methods. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
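The clustering step described above can be sketched as follows; the two-dimensional "syntactic index" vectors are invented stand-ins for the NLP-derived features, and four well-separated synthetic groups play the role of the four observed patterns:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Four tight synthetic groups standing in for essays with distinct
# syntactic profiles (the two axes might be, e.g., noun-phrase
# complexity and word familiarity, scaled to [0, 1]).
centers = np.array([[0.2, 0.8], [0.8, 0.2], [0.5, 0.5], [0.9, 0.9]])
X = np.vstack([c + 0.05 * rng.normal(size=(50, 2)) for c in centers])

# k-means with k = 4, mirroring the four patterns reported in the study.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

On the real corpus the authors additionally validate cluster separation with discriminant function analysis; this sketch shows only the partitioning step.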

23 pages, 8046 KiB  
Article
Enhanced Security Access Control Using Statistical-Based Legitimate or Counterfeit Identification System
by Aisha Edrah and Abdelkader Ouda
Computers 2024, 13(7), 159; https://doi.org/10.3390/computers13070159 - 22 Jun 2024
Cited by 1 | Viewed by 1350
Abstract
With our increasing reliance on technology, there is a growing demand for efficient and seamless access control systems. Smartphone-centric biometric methods offer a diverse range of potential solutions capable of verifying users and providing an additional layer of security to prevent unauthorized access. To ensure the security and accuracy of smartphone-centric biometric identification, it is crucial that the phone reliably identifies its legitimate owner. Once the legitimate holder has been successfully determined, the phone can effortlessly provide real-time identity verification for various applications. To achieve this, we introduce a novel smartphone-integrated detection and control system called Identification: Legitimate or Counterfeit (ILC), which utilizes gait cycle analysis. The ILC system employs the smartphone’s accelerometer sensor, along with advanced statistical methods, to detect the user’s gait pattern, enabling real-time identification of the smartphone owner. This approach relies on statistical analysis of measurements obtained from the accelerometer sensor, specifically, peaks extracted from the X-axis data. Subsequently, the derived feature’s probability distribution function (PDF) is computed and compared to the known user’s PDF. The calculated probability verifies the similarity between the distributions, and a decision is made with 92.18% accuracy based on a predetermined verification threshold. Full article
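The verification idea (compare the distribution of accelerometer peak amplitudes against a stored profile and accept above a threshold) can be sketched as follows; the signals, the histogram-overlap similarity measure, and the threshold value are all illustrative assumptions, not the paper's statistical method:

```python
import numpy as np


def peak_amplitudes(x):
    # Local maxima of the accelerometer X-axis signal (crude peak picker).
    mid = x[1:-1]
    return mid[(mid > x[:-2]) & (mid > x[2:])]


def pdf(samples, bins):
    # Normalized histogram as a discrete estimate of the PDF.
    hist, _ = np.histogram(samples, bins=bins)
    return hist / hist.sum()


def similarity(p, q):
    # Histogram overlap in [0, 1]; 1 means identical distributions.
    return float(np.minimum(p, q).sum())


rng = np.random.default_rng(1)
bins = np.linspace(0, 4, 21)

enrolled = peak_amplitudes(rng.normal(2.0, 0.3, 5000))  # stored gait profile
same = peak_amplitudes(rng.normal(2.0, 0.3, 5000))      # same walker, new walk
other = peak_amplitudes(rng.normal(2.8, 0.3, 5000))     # a different walker

ref = pdf(enrolled, bins)
THRESHOLD = 0.7  # illustrative verification threshold, not the paper's value

accept_same = similarity(ref, pdf(same, bins)) >= THRESHOLD
accept_other = similarity(ref, pdf(other, bins)) >= THRESHOLD
```

The same walker's new peak distribution overlaps the enrolled one heavily and is accepted; the shifted distribution of a different walker falls below the threshold and is rejected.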

15 pages, 617 KiB  
Article
Personalized Classifier Selection for EEG-Based BCIs
by Javad Rahimipour Anaraki, Antonina Kolokolova and Tom Chau
Computers 2024, 13(7), 158; https://doi.org/10.3390/computers13070158 - 21 Jun 2024
Viewed by 984
Abstract
The most important component of an Electroencephalogram (EEG) Brain–Computer Interface (BCI) is its classifier, which translates EEG signals in real time into meaningful commands. The accuracy and speed of the classifier determine the utility of the BCI. However, there is significant intra- and inter-subject variability in EEG data, complicating the choice of the best classifier for different individuals over time. There is a keen need for an automatic approach to selecting a personalized classifier suited to an individual’s current needs. To this end, we have developed a systematic methodology for individual classifier selection, wherein the structural characteristics of an EEG dataset are used to predict a classifier that will perform with high accuracy. The method was evaluated using motor imagery EEG data from Physionet. We confirmed that our approach could consistently predict a classifier whose performance was no worse than the single-best-performing classifier across the participants. Furthermore, Kullback–Leibler divergences between reference distributions and signal amplitude and class label distributions emerged as the most important characteristics for classifier prediction, suggesting that classifier choice depends heavily on the morphology of signal amplitude densities and the degree of class imbalance in an EEG dataset. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
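The Kullback–Leibler meta-feature mentioned above can be sketched as follows; the uniform reference distribution and the binning are our assumptions, not the paper's choices:

```python
import numpy as np


def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) in nats.

    Small eps smoothing avoids log-of-zero for empty bins; inputs are
    renormalized so unnormalized histograms are accepted directly.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))


# Histogram of (synthetic) EEG signal amplitudes vs. a flat reference:
amps = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
hist, _ = np.histogram(amps, bins=20, range=(-4, 4))
uniform = np.full(20, 1 / 20)

d_signal = kl_divergence(hist, uniform)  # peaked vs. flat: clearly positive
d_self = kl_divergence(uniform, uniform)  # identical distributions: ~0
```

A scalar like `d_signal`, computed per dataset, is the kind of structural characteristic that can feed a meta-model predicting which classifier will perform best.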

24 pages, 960 KiB  
Article
Advancing Skin Cancer Prediction Using Ensemble Models
by Priya Natha and Pothuraju RajaRajeswari
Computers 2024, 13(7), 157; https://doi.org/10.3390/computers13070157 - 21 Jun 2024
Cited by 1 | Viewed by 1178
Abstract
There are many different kinds of skin cancer, and an early and precise diagnosis is crucial because skin cancer is both frequent and deadly. The key to effective treatment is accurately classifying the various skin cancers, which have unique traits. Dermoscopy and other advanced imaging techniques have enhanced early detection by providing detailed images of lesions. However, accurately interpreting these images to distinguish between benign and malignant tumors remains a difficult task. Improved predictive modeling techniques are necessary due to the frequent occurrence of erroneous and inconsistent outcomes in present diagnostic processes. Machine learning (ML) models have become essential in dermatology for the automated identification and categorization of skin cancer lesions from image data. The aim of this work is to improve skin cancer prediction by using ensemble models, which combine numerous machine learning approaches to maximize their collective strengths and reduce their individual shortcomings. This paper proposes a novel approach to ensemble model optimization for skin cancer classification: the Max Voting method. We trained and assessed five different ensemble models using the ISIC 2018 and HAM10000 datasets: AdaBoost, CatBoost, Random Forest, Gradient Boosting, and Extra Trees. The Max Voting method combines their predictions to enhance overall performance. Moreover, the ensemble models were fed feature vectors optimally generated from the image data by a genetic algorithm (GA). We show that, with an accuracy of 95.80%, the Max Voting approach significantly improves predictive performance compared to each of the five ensemble models individually, and it achieved the best results for F1-measure, recall, and precision, proving the most dependable and robust. The novel aspect of this work is the more robust and reliable classification of skin cancer lesions using the Max Voting technique, which combines the benefits of several pre-trained machine learning models. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
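Hard majority voting, the core of the Max Voting method, can be sketched in a few lines; the five-model prediction matrix below is hypothetical:

```python
import numpy as np
from collections import Counter


def max_voting(predictions):
    """Hard majority vote across classifiers.

    predictions: (n_classifiers, n_samples) array of class labels.
    Ties resolve to the label encountered first (Counter ordering).
    """
    votes = []
    for sample in np.asarray(predictions).T:  # one column per sample
        votes.append(Counter(sample.tolist()).most_common(1)[0][0])
    return np.array(votes)


# Five hypothetical base models (standing in for AdaBoost, CatBoost,
# Random Forest, Gradient Boosting, Extra Trees) disagreeing on 3 samples:
preds = np.array([
    [0, 1, 1],  # model 1
    [0, 1, 0],  # model 2
    [1, 1, 1],  # model 3
    [0, 0, 1],  # model 4
    [0, 1, 1],  # model 5
])
final = max_voting(preds)  # column-wise majority: [0, 1, 1]
```

Because each base model errs on different samples, the column-wise majority can be more accurate than any single model, which is the effect the paper quantifies.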

21 pages, 4836 KiB  
Article
Chef Dalle: Transforming Cooking with Multi-Model Multimodal AI
by Brendan Hannon, Yulia Kumar, J. Jenny Li and Patricia Morreale
Computers 2024, 13(7), 156; https://doi.org/10.3390/computers13070156 - 21 Jun 2024
Viewed by 2911
Abstract
In an era where dietary habits significantly impact health, technological interventions can offer personalized and accessible food choices. This paper introduces Chef Dalle, a recipe recommendation system that leverages multi-model and multimodal human-computer interaction (HCI) techniques to provide personalized cooking guidance. The application integrates voice-to-text conversion via Whisper and ingredient image recognition through GPT-Vision. It employs an advanced recipe filtering system that utilizes user-provided ingredients to fetch recipes, which are then evaluated through multi-model AI through integrations of OpenAI, Google Gemini, Claude, and/or Anthropic APIs to deliver highly personalized recommendations. These methods enable users to interact with the system using voice, text, or images, accommodating various dietary restrictions and preferences. Furthermore, the utilization of DALL-E 3 for generating recipe images enhances user engagement. User feedback mechanisms allow for the refinement of future recommendations, demonstrating the system’s adaptability. Chef Dalle showcases potential applications ranging from home kitchens to grocery stores and restaurant menu customization, addressing accessibility and promoting healthier eating habits. This paper underscores the significance of multimodal HCI in enhancing culinary experiences, setting a precedent for future developments in the field. Full article
