Computers, Volume 13, Issue 9 (September 2024) – 29 articles

Cover Story: This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov–Arnold Networks (KANs) to improve road safety. We created a unique dataset of bus drivers covering two scenarios: driving without phone interaction and driving while on a phone call. Different KAN-based networks were developed for custom action recognition tailored to identifying drivers holding phones. We evaluated the performance of our system against convolutional neural network-based solutions and analysed the differences in accuracy and robustness. The work has implications beyond enforcement, providing foundational technology for automating monitoring and improving safety protocols in the commercial and public transport sectors.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 760 KiB  
Article
The Influence of National Digital Identities and National Profiling Systems on Accelerating the Processes of Digital Transformation: A Mixed Study Report
by Abdelrahman Ahmed Alhammadi, Saadat M. Alhashmi, Mohammad Lataifeh and John Lewis Rice
Computers 2024, 13(9), 243; https://doi.org/10.3390/computers13090243 - 23 Sep 2024
Viewed by 974
Abstract
The United Arab Emirates (UAE) is a frontrunner in digitalising government services, demonstrating the successful implementation of National Digital Identity (NDI) systems. Unlike many developing nations with varying levels of success with electronic ID systems due to legal, socio-cultural, and ethical concerns, the UAE has seamlessly integrated digital identities into various sectors, including security, transportation, and more, through initiatives like UAE Pass. This study draws on the UAE’s functional digital ID systems, such as those utilised in the Dubai Smart City project, to highlight the potential efficiencies and productivity gains in public services while addressing the associated risks of cybersecurity and privacy. This paper provides a comprehensive understanding of the UAE’s NDI and its impact on the nation’s digital transformation agenda, offering a thorough analysis of the effectiveness and challenges of NDIs, explicitly focusing on the UAE’s approach. Full article

15 pages, 1681 KiB  
Article
Parallel Attention-Driven Model for Student Performance Evaluation
by Deborah Olaniyan, Julius Olaniyan, Ibidun Christiana Obagbuwa, Bukohwo Michael Esiefarienrhe and Olorunfemi Paul Bernard
Computers 2024, 13(9), 242; https://doi.org/10.3390/computers13090242 - 23 Sep 2024
Viewed by 740
Abstract
This study presents the development and evaluation of a Multi-Task Long Short-Term Memory (LSTM) model with an attention mechanism for predicting students’ academic performance. The research is motivated by the need for efficient tools to enhance student assessment and support tailored educational interventions. The model tackles two tasks: predicting overall performance (total score) as a regression task and classifying performance levels (remarks) as a classification task. By handling both tasks simultaneously, it improves computational efficiency and resource utilization. The dataset includes metrics such as Continuous Assessment, Practical Skills, Presentation Quality, Attendance, and Participation. The model achieved strong results, with a Mean Absolute Error (MAE) of 0.0249, Mean Squared Error (MSE) of 0.0012, and Root Mean Squared Error (RMSE) of 0.0346 for the regression task. For the classification task, it achieved perfect scores with an accuracy, precision, recall, and F1 score of 1.0. The attention mechanism enhanced performance by focusing on the most relevant features. This study demonstrates the effectiveness of the Multi-Task LSTM model with an attention mechanism in educational data analysis, offering a reliable and efficient tool for predicting student performance. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
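A minimal sketch of the multi-task design described above, written in PyTorch with illustrative layer sizes (the paper's exact architecture and feature dimensions are not given here): a shared LSTM encoder, a soft attention layer over time steps, and two heads, one regressing the total score and one classifying the remark.

```python
import torch
import torch.nn as nn

class MultiTaskAttentionLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)                  # scores each time step
        self.score_head = nn.Linear(hidden, 1)            # regression: total score
        self.remark_head = nn.Linear(hidden, n_classes)   # classification: remarks

    def forward(self, x):                                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        weights = torch.softmax(self.attn(out), dim=1)    # attention over time steps
        context = (weights * out).sum(dim=1)              # attention-weighted summary
        return self.score_head(context).squeeze(-1), self.remark_head(context)

model = MultiTaskAttentionLSTM()
score, remark_logits = model(torch.randn(8, 10, 5))
# Joint training would minimize MSE on `score` plus cross-entropy on `remark_logits`.
```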

17 pages, 2984 KiB  
Article
Educational Resource Private Cloud Platform Based on OpenStack
by Linchang Zhao, Guoqing Hu and Yongchi Xu
Computers 2024, 13(9), 241; https://doi.org/10.3390/computers13090241 - 23 Sep 2024
Viewed by 548
Abstract
With the rapid development of the education sector and the expansion of university enrollment, the original operation, maintenance, and management model for teaching resources struggles to meet teachers' and students' demands for high-quality teaching resources. OpenStack and Ceph technologies provide a new solution for optimizing the utilization and management of educational resources. A private cloud platform for educational resources built on them enables unified management and self-service use of the computing, storage, and network resources required for student learning and teacher instruction. It meets the need for flexible and efficient access to high-quality teaching resources, reduces universities' IT infrastructure investment costs, and improves the efficiency of teaching resource utilization. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

16 pages, 1575 KiB  
Article
A Secure and Verifiable Blockchain-Based Framework for Personal Data Validation
by Junyan Yu, Ximing Li and Yubin Guo
Computers 2024, 13(9), 240; https://doi.org/10.3390/computers13090240 - 23 Sep 2024
Viewed by 537
Abstract
The online services provided by Service Providers (SPs) have brought significant convenience to people's lives. Nowadays, people have grown accustomed to obtaining diverse services via the Internet. However, some SPs utilize or even tamper with personal data without the awareness or authorization of the Data Provider (DP), a practice that seriously undermines the authenticity of the DP's authorization and the integrity of personal data. To address this issue, we propose a Verifiable Authorization Information Management Scheme (VAIMS). During the authorization process, the authorization information and personal data fingerprints are uploaded to the blockchain for permanent record, and the SP then stores the authorization information and personal data. The DP generates corresponding authorization fingerprints based on the authorization information and stores them independently. Through the authorization information and authorization fingerprints on the chain, the DP can verify the authenticity of the authorization information stored by the SP at any time. Meanwhile, by leveraging the personal data fingerprints on the blockchain, the DP can check whether the personal data stored by the SP have been tampered with. Additionally, the scheme incorporates database technology to accelerate data queries. We implemented a VAIMS prototype on Ethereum, and experiments demonstrate that the scheme is effective. Full article
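To illustrate the fingerprint-checking idea, here is a minimal sketch assuming SHA-256 fingerprints; the on-chain record and the Ethereum contract are abstracted to a plain dictionary, so this is a conceptual sketch, not the paper's implementation.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A cryptographic hash serves as a compact, tamper-evident fingerprint.
    return hashlib.sha256(data).hexdigest()

# At authorization time: the DP records the fingerprint "on chain"
# (a dictionary stands in for the blockchain here).
chain_record = {"authz_001": fingerprint(b"personal data v1")}

# Later: the DP re-hashes what the SP stores and compares it with the chain.
def verify(sp_stored: bytes, record_id: str) -> bool:
    return fingerprint(sp_stored) == chain_record[record_id]

assert verify(b"personal data v1", "authz_001")     # data intact
assert not verify(b"tampered data", "authz_001")    # tampering detected
```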

25 pages, 896 KiB  
Article
Enhancing Fake News Detection with Word Embedding: A Machine Learning and Deep Learning Approach
by Mutaz A. B. Al-Tarawneh, Omar Al-irr, Khaled S. Al-Maaitah, Hassan Kanj and Wael Hosny Fouad Aly
Computers 2024, 13(9), 239; https://doi.org/10.3390/computers13090239 - 19 Sep 2024
Cited by 1 | Viewed by 1966
Abstract
The widespread dissemination of fake news on social media has necessitated the development of more sophisticated detection methods to maintain information integrity. This research systematically investigates the effectiveness of different word embedding techniques—TF-IDF, Word2Vec, and FastText—when applied to a variety of machine learning (ML) and deep learning (DL) models for fake news detection. Leveraging the TruthSeeker dataset, which includes a diverse set of labeled news articles and social media posts spanning over a decade, we evaluated the performance of classifiers such as Support Vector Machines (SVMs), Multilayer Perceptrons (MLPs), and Convolutional Neural Networks (CNNs). Our analysis demonstrates that SVMs using TF-IDF embeddings and CNNs employing TF-IDF embeddings achieve the highest overall performance in terms of accuracy, precision, recall, and F1 score. These results suggest that TF-IDF, with its capacity to highlight discriminative features in text, enhances the performance of models like SVMs, which are adept at handling sparse data representations. Additionally, CNNs benefit from TF-IDF by effectively capturing localized features and patterns within the textual data. In contrast, while Word2Vec and FastText embeddings capture semantic and syntactic nuances, they introduce complexities that may not always benefit traditional ML models like MLPs or SVMs, which could explain their relatively lower performance in some cases. This study emphasizes the importance of selecting appropriate embedding techniques based on the model architecture to maximize fake news detection performance. Future research should consider integrating contextual embeddings and exploring hybrid model architectures to further enhance detection capabilities. These findings contribute to the ongoing development of advanced computational tools for combating misinformation. Full article
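The best-performing pairing reported above (TF-IDF features with an SVM) is straightforward to reproduce in outline with scikit-learn; the toy texts and labels below are placeholders, not TruthSeeker data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["official report confirms quarterly figures",
         "shocking cure that doctors refuse to reveal"]
labels = [0, 1]  # 0 = real, 1 = fake (toy data)

# TF-IDF highlights discriminative terms; a linear SVM handles the sparse vectors.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["doctors hide this shocking trick"]))
```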

21 pages, 6438 KiB  
Article
Weighted Averages and Polynomial Interpolation for PM2.5 Time Series Forecasting
by Anibal Flores, Hugo Tito-Chura, Victor Yana-Mamani, Charles Rosado-Chavez and Alejandro Ecos-Espino
Computers 2024, 13(9), 238; https://doi.org/10.3390/computers13090238 - 18 Sep 2024
Viewed by 525
Abstract
This article describes a novel method for the multi-step forecasting of PM2.5 time series based on weighted averages and polynomial interpolation. Multi-step prediction models enable decision makers to build an understanding of longer future terms than one-step-ahead prediction models, allowing for more timely decision-making. As case studies, hourly data from three environmental monitoring stations in Ilo City in Southern Peru were selected. The results show average RMSEs of between 1.60 and 9.40 µg/m³ and average MAPEs of between 17.69% and 28.91%. Comparing the results with those obtained with the implemented benchmark models (LSTM, BiLSTM, GRU, BiGRU, and LSTM-ATT) over different prediction horizons, in the majority of environmental monitoring stations, the proposed model outperformed them by between 2.40% and 17.49% in terms of the average MAPE. It is concluded that the proposed model constitutes a good alternative for multi-step PM2.5 time series forecasting, presenting results similar or superior to those of the benchmark models. Aside from the good results, one of the main advantages of the proposed model is that it requires less data than the benchmark models. Full article
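As a rough illustration of the general idea (weighting recent observations and extrapolating with a low-order polynomial), one could write the following; this is one plausible reading of the approach, not the authors' exact scheme.

```python
import numpy as np

def multi_step_forecast(series, horizon=3, window=12, degree=2):
    y = np.asarray(series[-window:], dtype=float)
    t = np.arange(window)
    w = np.linspace(0.1, 1.0, window)            # heavier weight on recent hours
    coeffs = np.polyfit(t, y, deg=degree, w=w)   # weighted polynomial fit
    future_t = np.arange(window, window + horizon)
    return np.polyval(coeffs, future_t)          # extrapolated PM2.5 values

hourly_pm25 = [12.1, 13.4, 15.0, 14.2, 16.8, 18.0,
               17.5, 19.2, 21.0, 20.4, 22.1, 23.0]
print(multi_step_forecast(hourly_pm25))          # next three hourly values
```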

19 pages, 7837 KiB  
Article
Evaluating the Impact of Filtering Techniques on Deep Learning-Based Brain Tumour Segmentation
by Sofia Rosa, Verónica Vasconcelos and Pedro J. S. B. Caridade
Computers 2024, 13(9), 237; https://doi.org/10.3390/computers13090237 - 18 Sep 2024
Viewed by 685
Abstract
Gliomas are common and aggressive brain tumours that are difficult to diagnose due to their infiltrative development, variable clinical presentation, and complex behaviour, making them an important focus in neuro-oncology. Segmentation of brain tumour images is critical for improving diagnosis, prognosis, and treatment options. Manually segmenting brain tumours is time-consuming and challenging. Automatic segmentation algorithms can significantly improve the accuracy and efficiency of tumour identification, thus improving treatment planning and outcomes. Deep learning-based tumour segmentation has shown significant advances in the last few years. This study evaluates the impact of four denoising filters, namely median, Gaussian, anisotropic diffusion, and bilateral filters, on tumour detection and segmentation. The U-Net architecture is applied for the segmentation of 3064 contrast-enhanced magnetic resonance images from 233 patients diagnosed with meningiomas, gliomas, and pituitary tumours. The results of this work demonstrate that bilateral filtering yields superior outcomes, proving to be a robust and computationally efficient approach to brain tumour segmentation. This method shortens training by 12 epochs, which in turn contributes to lowering greenhouse gas emissions by optimizing computational resources and minimizing energy consumption. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
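The bilateral pre-processing step singled out above can be sketched with OpenCV; the filter parameters and the random stand-in slice are assumptions, and the denoised image would then be passed to the U-Net.

```python
import cv2
import numpy as np

# Stand-in for a single-channel, contrast-enhanced MRI slice.
slice_img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Bilateral filtering smooths noise while preserving edges (tumour boundaries);
# d, sigmaColor, and sigmaSpace here are illustrative, not the paper's values.
denoised = cv2.bilateralFilter(slice_img, d=9, sigmaColor=75, sigmaSpace=75)
# `denoised` would then be fed to the U-Net for segmentation.
```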

15 pages, 683 KiB  
Article
Cross-Lingual Short-Text Semantic Similarity for Kannada–English Language Pair
by Muralikrishna S N, Raghurama Holla, Harivinod N and Raghavendra Ganiga
Computers 2024, 13(9), 236; https://doi.org/10.3390/computers13090236 - 18 Sep 2024
Viewed by 761
Abstract
Analyzing the semantic similarity of cross-lingual texts is a crucial part of natural language processing (NLP). The computation of semantic similarity is essential for a variety of tasks such as evaluating machine translation systems, quality checking human translation, information retrieval, plagiarism checks, etc. In this paper, we propose a method for measuring the semantic similarity of Kannada–English sentence pairs that uses embedding space alignment, lexical decomposition, word order, and a convolutional neural network. The proposed method achieves a maximum correlation of 83% with human annotations. Experiments on semantic matching and retrieval tasks resulted in promising results in terms of precision and recall. Full article

18 pages, 5532 KiB  
Article
Enhancing Solar Power Efficiency: Smart Metering and ANN-Based Production Forecasting
by Younes Ledmaoui, Asmaa El Fahli, Adila El Maghraoui, Abderahmane Hamdouchi, Mohamed El Aroussi, Rachid Saadane and Ahmed Chebak
Computers 2024, 13(9), 235; https://doi.org/10.3390/computers13090235 - 17 Sep 2024
Viewed by 893
Abstract
This paper presents a comprehensive and comparative study of solar energy forecasting in Morocco, utilizing four machine learning algorithms: Extreme Gradient Boosting (XGBoost), Gradient Boosting Machine (GBM), recurrent neural networks (RNNs), and artificial neural networks (ANNs). The study is conducted using a smart metering device designed for a photovoltaic system at an industrial site in Benguerir, Morocco. The smart metering device collects energy usage data from a submeter and transmits it to the cloud via an ESP-32 card, enhancing monitoring, efficiency, and energy utilization. Our methodology includes an analysis of solar resources, considering factors such as location, temperature, and irradiance levels, with PVSYST simulation software (version 7.2) employed to evaluate system performance under varying conditions. Additionally, a data logger is developed to monitor solar panel energy production, securely storing data in the cloud while accurately measuring key parameters and transmitting them using reliable communication protocols. An intuitive web interface is also created for data visualization and analysis. The research demonstrates a holistic approach to smart metering devices for photovoltaic systems, contributing to sustainable energy utilization, smart grid development, and environmental conservation in Morocco. The performance analysis indicates that ANNs are the most effective predictive model for solar energy forecasting in similar scenarios, demonstrating the lowest RMSE and MAE values along with the highest R2 value. Full article

17 pages, 3728 KiB  
Article
YOLOv8-Based Drone Detection: Performance Analysis and Optimization
by Betul Yilmaz and Ugurhan Kutbay
Computers 2024, 13(9), 234; https://doi.org/10.3390/computers13090234 - 17 Sep 2024
Viewed by 1555
Abstract
The extensive utilization of drones has led to numerous scenarios that encompass both advantageous and perilous outcomes. By using deep learning techniques, this study aimed to reduce the dangerous effects of drone use through the early detection of drones. The purpose of this study is to evaluate deep learning approaches, such as pre-trained YOLOv8 models, for drone detection in security applications. The study focuses on the YOLOv8 model to achieve optimal performance in object detection tasks, using a publicly available dataset collected by Mehdi Özel for a UAV competition and sourced from GitHub. These images are labeled using Roboflow, and the model is trained on Google Colab. YOLOv8, known for its advanced architecture, was selected due to its suitability for real-time detection applications and its ability to process complex visual data. Hyperparameter tuning and data augmentation techniques were applied to maximize the performance of the model. Basic hyperparameters such as the learning rate, batch size, and optimization settings were tuned through iterative experiments to provide the best performance. In addition, various data augmentation strategies were used to increase the robustness and generalization ability of the model: techniques such as rotation, scaling, flipping, and color adjustments were applied to the dataset to simulate different conditions and variations. Among the augmentation techniques applied to this dataset, rotation was found to deliver the highest performance, with blurring and cropping following closely behind. The combination of optimized hyperparameters and strategic data augmentation allowed YOLOv8 to achieve high detection accuracy and reliable performance, demonstrating its effectiveness in real-world scenarios while highlighting the importance of hyperparameter tuning and data augmentation in increasing model capabilities. Following these steps, a precision of 0.946, a recall of 0.9605, and a precision–recall curve value of 0.978 are achieved, surpassing popular models such as Mask CNN, CNN, and YOLOv5. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
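Fine-tuning a pre-trained YOLOv8 model with rotation-style augmentation can be sketched with the ultralytics package as below; the dataset YAML path and hyperparameter values are assumptions, not the study's settings.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pre-trained checkpoint
model.train(
    data="drone_dataset.yaml",        # hypothetical dataset config
    epochs=100,
    batch=16,
    lr0=0.01,                         # initial learning rate
    degrees=15.0,                     # rotation augmentation
    fliplr=0.5,                       # horizontal flip probability
    hsv_v=0.4,                        # brightness/colour adjustment
)
metrics = model.val()                 # precision, recall, mAP on the val split
```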

16 pages, 1081 KiB  
Article
Optimized Machine Learning Classifiers for Symptom-Based Disease Screening
by Auba Fuster-Palà, Francisco Luna-Perejón, Lourdes Miró-Amarante and Manuel Domínguez-Morales
Computers 2024, 13(9), 233; https://doi.org/10.3390/computers13090233 - 14 Sep 2024
Viewed by 1124
Abstract
This work presents a disease detection classifier based on symptoms encoded by their severity. The model is presented as part of a solution to the saturation of the healthcare system, aiding in the initial screening stage. An open-source dataset is used, which undergoes pre-processing and serves as the data source to train and test various machine learning models, including SVMs, RFs, KNN, and ANNs. A three-phase optimization process is developed to obtain the best classifier: first, the dataset is pre-processed; secondly, a grid search is performed with several hyperparameter variations for each classifier; and, finally, the best models obtained are subjected to additional filtering processes. The best-performing model, selected on the basis of performance and execution time, is a KNN classifier with 2 neighbors, which achieves an accuracy and F1 score of over 98%. These results demonstrate the effectiveness and improvement of the evaluated models compared to previous studies, particularly in terms of accuracy. Although the ANN model has a longer execution time than the KNN, it is retained in this work due to its potential to handle more complex datasets in a real clinical context. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
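The grid-search phase can be sketched with scikit-learn as follows; the toy severity matrix, label vector, and parameter grid are placeholders (the study reports a 2-neighbor KNN as the winner).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 10))   # toy severity-encoded symptoms
y = rng.integers(0, 3, size=200)         # toy disease labels

param_grid = {"n_neighbors": [2, 3, 5, 7], "weights": ["uniform", "distance"]}
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      scoring="f1_macro", cv=5)
search.fit(X, y)
print(search.best_params_)               # e.g. {'n_neighbors': 2, ...}
```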

20 pages, 2961 KiB  
Article
Leveraging Large Language Models with Chain-of-Thought and Prompt Engineering for Traffic Crash Severity Analysis and Inference
by Hao Zhen, Yucheng Shi, Yongcan Huang, Jidong J. Yang and Ninghao Liu
Computers 2024, 13(9), 232; https://doi.org/10.3390/computers13090232 - 14 Sep 2024
Viewed by 1353
Abstract
Harnessing the power of Large Language Models (LLMs), this study explores the use of three state-of-the-art LLMs, specifically GPT-3.5-turbo, LLaMA3-8B, and LLaMA3-70B, for crash severity analysis and inference, framing it as a classification task. We generate textual narratives from original traffic crash tabular data using a pre-built template infused with domain knowledge. Additionally, we incorporate Chain-of-Thought (CoT) reasoning to guide the LLMs in analyzing the crash causes and then inferring the severity. This study also examines the impact of prompt engineering specifically designed for crash severity inference. The LLMs were tasked with crash severity inference to (1) evaluate the models' capabilities in crash severity analysis, (2) assess the effectiveness of CoT and domain-informed prompt engineering, and (3) examine the reasoning abilities within the CoT framework. Our results showed that LLaMA3-70B consistently outperformed the other models, particularly in zero-shot settings. The CoT and prompt engineering techniques significantly enhanced performance, improving logical reasoning and addressing alignment issues. Notably, the CoT offers valuable insights into the LLMs' reasoning process, unleashing their capacity to consider diverse factors such as environmental conditions, driver behavior, and vehicle characteristics in severity analysis and inference. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
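A hedged sketch of assembling a domain-informed, Chain-of-Thought prompt from tabular crash fields is shown below; the field names and template wording are illustrative, not the authors' exact prompt.

```python
# Toy tabular crash record; fields are assumptions for illustration.
crash = {"weather": "rain", "light": "dark with no street lights",
         "speed_limit": 55, "vehicle": "motorcycle", "driver_age": 19}

# Step 1: turn the tabular row into a textual narrative via a template.
narrative = (f"A {crash['driver_age']}-year-old was riding a {crash['vehicle']} "
             f"in {crash['weather']} on a {crash['speed_limit']} mph road, "
             f"{crash['light']}.")

# Step 2: wrap the narrative in a CoT-style, domain-informed prompt.
prompt = (
    "You are a traffic safety analyst.\n"
    f"Crash narrative: {narrative}\n"
    "Let's think step by step: first identify the likely crash causes "
    "(environmental conditions, driver behavior, vehicle factors), "
    "then infer the severity.\n"
    "Answer with one of: no injury, minor injury, severe injury, fatal."
)
# `prompt` would be sent to GPT-3.5-turbo or a LLaMA3 model via its API.
print(prompt)
```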

13 pages, 3622 KiB  
Article
Assessing the Impact of Prolonged Sitting and Poor Posture on Lower Back Pain: A Photogrammetric and Machine Learning Approach
by Valentina Markova, Miroslav Markov, Zornica Petrova and Silviya Filkova
Computers 2024, 13(9), 231; https://doi.org/10.3390/computers13090231 - 14 Sep 2024
Viewed by 4219
Abstract
Prolonged static sitting at the workplace is considered one of the main risks for the development of musculoskeletal disorders (MSDs) and adverse health effects. Factors such as poor posture and extended sitting are perceived to be a cause of conditions such as lumbar discomfort and lower back pain (LBP), even though the scientific explanation of this relationship is still unclear and remains disputed in the scientific community. The current study focused on evaluating the relationship between LBP and prolonged sitting in poor posture using photogrammetric images, postural angle calculation, machine learning models, and questionnaire-based self-reports on the occurrence of LBP and similar symptoms among the participants. Machine learning models trained with these data are employed to recognize poor body postures. Two scenarios were elaborated for modeling purposes: scenario 1, based on natural body postures tagged as correct and incorrect, and scenario 2, based on incorrect body postures additionally corrected by the rehabilitator. The achieved accuracies of 75.3% and 85%, respectively, for the two scenarios reveal the potential for future research in enhancing awareness and actively managing posture-related issues that elevate the likelihood of developing lower back pain symptoms. Full article

16 pages, 13238 KiB  
Article
Transfer of Periodic Phenomena in Multiphase Capillary Flows to a Quasi-Stationary Observation Using U-Net
by Bastian Oldach, Philipp Wintermeyer and Norbert Kockmann
Computers 2024, 13(9), 230; https://doi.org/10.3390/computers13090230 - 13 Sep 2024
Viewed by 535
Abstract
Miniaturization promotes efficiency and expands the exploration domain in scientific fields such as computer science, engineering, medicine, and biotechnology. In particular, the field of microfluidics is a flourishing technology that deals with the manipulation of small volumes of liquid. Droplets or bubbles dispersed in a second immiscible liquid are of great interest for screening applications and for chemical and biochemical reactions. However, since very small dimensions are characterized by phenomena that differ from those at macroscopic scales, a deep understanding of the physics is crucial for effective device design. Due to the small volumes in miniaturized systems, common measurement techniques are not applicable, as their dimensions exceed those of the device many times over. Hence, image analysis is commonly chosen as a method to understand the ongoing phenomena. Artificial intelligence is now the state of the art for recognizing patterns in images and for analyzing datasets that are too large for humans to handle. X-ray-based computed tomography adds a third dimension to images, which yields more information but ultimately also more complex image analysis. In this work, we present the application of the U-Net neural network to extract certain states during droplet formation in a capillary, a constantly repeated process captured in tens of thousands of CT images. The experimental setup features a co-flow arrangement based on 3D-printed capillaries with two different cross-sections (circular and square), each with an inner diameter or edge length, respectively, of 1.6 mm. For droplet formation, water was dispersed in silicone oil. The classification into different droplet states allows for 3D reconstruction and a time-resolved 3D analysis of the observed phenomena. The original U-Net was modified to process input images of 688 × 432 pixels, with the encoder and decoder paths featuring 23 convolutional layers in total; the network contains four max pooling layers and four upsampling layers. The training was performed on 90% and validated on 10% of a dataset containing 492 images showing different states of droplet formation. A mean Intersection over Union of 0.732 was achieved for a training of 50 epochs, which is considered good performance. The presented U-Net needs 120 ms per image to process 60,000 images and categorize emerging droplets into 24 states at 905 angles. Once the model is trained sufficiently, it provides accurate segmentation for various flow conditions. The selected images are used for 3D reconstruction, enabling the 2D and 3D quantification of emerging droplets in capillaries with circular and square cross-sections. By applying this method, a temporal resolution of 25–40 ms was achieved. Droplets emerging in capillaries with a square cross-section become bigger under the same flow conditions than in capillaries with a circular cross-section. The presented methodology is promising for other periodic phenomena in different scientific disciplines that rely on imaging techniques. Full article
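A compact PyTorch sketch of the U-Net encoder/decoder pattern (two scales only, for brevity) is given below; the paper's actual network has 23 convolutional layers, four max pooling and four upsampling stages, and 688 × 432 inputs, so this only illustrates the skip-connection structure.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 convolutions per scale, as in the classic U-Net.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=24):   # illustrative: one channel per droplet state
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = block(32 + 16, 16)  # skip connection concatenated from enc1
        self.out = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

# Height 432, width 688, matching the input size quoted in the abstract.
logits = TinyUNet()(torch.randn(1, 1, 432, 688))   # (batch, classes, H, W)
```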

19 pages, 1495 KiB  
Article
Deep Learning for Predicting Attrition Rate in Open and Distance Learning (ODL) Institutions
by Juliana Ngozi Ndunagu, David Opeoluwa Oyewola, Farida Shehu Garki, Jude Chukwuma Onyeakazi, Christiana Uchenna Ezeanya and Elochukwu Ukwandu
Computers 2024, 13(9), 229; https://doi.org/10.3390/computers13090229 - 11 Sep 2024
Viewed by 730
Abstract
Student enrollment is a vital aspect of educational institutions, encompassing active, registered, and graduate students. However, some students fail to engage with their studies after admission and drop out along the way; this is known as attrition. The student attrition rate is acknowledged as the most complicated and significant problem facing educational systems and is caused by institutional and non-institutional challenges. In this study, the researchers utilized a dataset obtained from the National Open University of Nigeria (NOUN) from 2012 to 2022, which included comprehensive information about students enrolled in various programs at the university who were inactive or had dropped out. The researchers used deep learning techniques, such as the Long Short-Term Memory (LSTM) model, and compared its performance with the One-Dimensional Convolutional Neural Network (1DCNN) model. The results of this study revealed that the LSTM model achieved an overall accuracy of 57.29% on the training data, while the 1DCNN model exhibited a lower accuracy of 49.91%. The LSTM model thus showed a superior correct classification rate compared to the 1DCNN model. Full article

17 pages, 1343 KiB  
Review
The State of the Art of Digital Twins in Health—A Quick Review of the Literature
by Leonardo El-Warrak and Claudio M. de Farias
Computers 2024, 13(9), 228; https://doi.org/10.3390/computers13090228 - 11 Sep 2024
Viewed by 1737
Abstract
A digital twin can be understood as a representation of a real asset, in other words, a virtual replica of a physical object, process or even a system. Virtual models can integrate with all the latest technologies, such as the Internet of Things (IoT), cloud computing, and artificial intelligence (AI). Digital twins have applications in a wide range of sectors, from manufacturing and engineering to healthcare. They have been used in managing healthcare facilities, streamlining care processes, personalizing treatments, and enhancing patient recovery. By analysing data from sensors and other sources, healthcare professionals can develop virtual models of patients, organs, and human systems, experimenting with various strategies to identify the most effective approach. This approach can lead to more targeted and efficient therapies while reducing the risk of collateral effects. Digital twin technology can also be used to generate a virtual replica of a hospital to review operational strategies, capabilities, personnel, and care models to identify areas for improvement, predict future challenges, and optimize organizational strategies. The potential impact of this tool on our society and its well-being is quite significant. This article explores how digital twins are being used in healthcare. This article also introduces some discussions on the impact of this use and future research and technology development projections for the use of digital twins in the healthcare sector. Full article

13 pages, 2937 KiB  
Article
An Unsupervised Approach for Treatment Effectiveness Monitoring Using Curvature Learning
by Hersh Sagreiya, Isabelle Durot and Alireza Akhbardeh
Computers 2024, 13(9), 227; https://doi.org/10.3390/computers13090227 - 9 Sep 2024
Viewed by 729
Abstract
Contrast-enhanced ultrasound could assess whether cancer chemotherapeutic agents work in days, rather than waiting 2–3 months, as is typical using the Response Evaluation Criteria in Solid Tumors (RECIST), therefore avoiding toxic side effects and expensive, ineffective therapy. A total of 40 mice were implanted with human colon cancer cells: treatment-sensitive mice in control (n = 10, receiving saline) and treated (n = 10, receiving bevacizumab) groups and treatment-resistant mice in control (n = 10) and treated (n = 10) groups. Each mouse was imaged using 3D dynamic contrast-enhanced ultrasound with Definity microbubbles. Curvature learning, an unsupervised learning approach, quantized pixels into three classes—blue, yellow, and red—representing normal, intermediate, and high cancer probability, both at baseline and after treatment. Next, a curvature learning score was calculated for each mouse using statistical measures representing variations in these three color classes across each frame from cine ultrasound images obtained during contrast administration on a given day (intra-day variability) and between pre- and post-treatment days (inter-day variability). A Wilcoxon rank-sum test compared score distributions between treated, treatment-sensitive mice and all others. There was a statistically significant difference in tumor score between the treated, treatment-sensitive group (n = 10) and all others (n = 30) (p = 0.0051). Curvature learning successfully identified treatment response, detecting changes in tumor perfusion before changes in tumor size. A similar technique could be developed for humans. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
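The reported group comparison can be reproduced in outline with SciPy's rank-sum test; the score values below are toy numbers, not the study's measurements.

```python
from scipy.stats import ranksums

# Toy curvature-learning scores for the two groups compared in the study.
treated_sensitive = [0.81, 0.77, 0.85, 0.79, 0.88, 0.83, 0.80, 0.86, 0.84, 0.78]
all_others = [0.52, 0.61, 0.55, 0.58, 0.49, 0.63, 0.57, 0.54, 0.60, 0.51]

stat, p_value = ranksums(treated_sensitive, all_others)
print(f"Wilcoxon rank-sum statistic={stat:.3f}, p={p_value:.4f}")
```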

17 pages, 9168 KiB  
Article
An Integrated Software-Defined Networking–Network Function Virtualization Architecture for 5G RAN–Multi-Access Edge Computing Slice Management in the Internet of Industrial Things
by Francesco Chiti, Simone Morosi and Claudio Bartoli
Computers 2024, 13(9), 226; https://doi.org/10.3390/computers13090226 - 9 Sep 2024
Viewed by 987
Abstract
The Internet of Things (IoT), namely, the set of intelligent devices equipped with sensors and actuators and capable of connecting to the Internet, has now become an integral part of the most competitive industries, as it enables the optimization of production processes and the reduction of operating costs and maintenance time, together with improving the quality of products and services. More specifically, the term Industrial Internet of Things (IIoT) identifies the system consisting of advanced Internet-connected equipment and analytics platforms specialized for industrial activities, where IIoT devices range from small environmental sensors to complex industrial robots. This paper presents an integrated high-level SDN-NFV architecture enabling clusters of smart devices to interconnect and manage the exchange of data with distributed control processes and databases. In particular, it focuses on 5G RAN-MEC slice management in the IIoT context. The proposed system is emulated by means of two distinct real-time frameworks, demonstrating improvements in connectivity, energy efficiency, end-to-end latency, and throughput. In addition, its scalability, modularity, and flexibility are assessed, making this framework suitable for testing further advanced applications. Full article

22 pages, 1904 KiB  
Article
SLACPSS: Secure Lightweight Authentication for Cyber–Physical–Social Systems
by Ahmed Zedaan M. Abed, Tamer Abdelkader and Mohamed Hashem
Computers 2024, 13(9), 225; https://doi.org/10.3390/computers13090225 - 9 Sep 2024
Viewed by 1142
Abstract
The concept of Cyber–Physical–Social Systems (CPSSs) has emerged as a response to the need to understand the interaction between Cyber–Physical Systems (CPSs) and humans. This shift from CPSs to CPSSs is primarily due to the widespread use of sensor-equipped smart devices that are closely connected to users. CPSSs have been a topic of interest for more than ten years, gaining increasing attention in recent years. The inclusion of human elements in CPS research has presented new challenges, particularly in understanding human dynamics, which adds complexity that has yet to be fully explored. A CPSS consists of three basic components: cyberspace, physical space, and social space. We map the components of the metaverse onto those of a CPSS and show that the metaverse is an implementation of a Cyber–Physical–Social System. The metaverse is made up of computer systems with many elements, such as artificial intelligence, computer vision, image processing, mixed reality, augmented reality, and extended reality; it also comprises physical systems, controlled objects, and human interaction. The identification process in CPSSs suffers from weak security, and the authentication problem requires heavy computation. Therefore, we propose a new protocol, Secure Lightweight Authentication for Cyber–Physical–Social Systems (SLACPSS), to offer secure communication between platform servers and users as well as secure interactions between avatars. We perform a security analysis and compare the proposed protocol to related previous ones. The analysis shows that the proposed protocol is lightweight and secure. Full article

21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 822
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss. Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
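A toy sketch of the learning setup is given below, assuming Gymnasium and Stable-Baselines3: the environment's observation bundles resource and performance signals, the action sets the HPA CPU target, and the reward crudely encodes SLA compliance; the real state space, latency dynamics, and reward shaping follow the paper.

```python
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class AutoscaleEnv(gym.Env):
    # observation: [cpu_util, latency, hour_of_day], action: HPA CPU target
    observation_space = gym.spaces.Box(low=0.0, high=1.0, shape=(3,))
    action_space = gym.spaces.Box(low=0.1, high=0.9, shape=(1,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(0, 1, size=3).astype(np.float32)
        return self.state, {}

    def step(self, action):
        target = float(action[0])
        latency = self.state[0] / max(target, 1e-3) * 0.5   # toy latency model
        reward = 1.0 if latency < 0.8 else -1.0             # SLA-style reward
        reward -= 0.2 * (1.0 - target)   # penalize over-provisioning (low targets)
        self.state = self.np_random.uniform(0, 1, size=3).astype(np.float32)
        return self.state, reward, False, False, {}

model = PPO("MlpPolicy", AutoscaleEnv(), verbose=0)
model.learn(total_timesteps=2_000)
```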

30 pages, 5636 KiB  
Review
A Survey of Blockchain Applicability, Challenges, and Key Threats
by Catalin Daniel Morar and Daniela Elena Popescu
Computers 2024, 13(9), 223; https://doi.org/10.3390/computers13090223 - 6 Sep 2024
Viewed by 1914
Abstract
With its decentralized, immutable, and consensus-based validation features, blockchain technology has grown from early financial applications into a variety of different sectors. This paper aims to outline various applications of the blockchain and to systematically identify general challenges and key threats regarding its adoption. The challenges are organized into broader groups to allow a clear overview and the identification of interconnected issues. Potential solutions are introduced into the discussion, addressing possible ways of mitigating these challenges and their forward-looking effects in fostering the adoption of blockchain technology. The paper also highlights some potential directions for future research that may overcome these challenges and unlock further applications. More generally, the article attempts to describe the potential transformational implications of blockchain technology through the manner in which it may contribute to the advancement of a diversity of industries. Full article

24 pages, 1191 KiB  
Article
Usability Heuristics for Metaverse
by Khalil Omar, Hussam Fakhouri, Jamal Zraqou and Jorge Marx Gómez
Computers 2024, 13(9), 222; https://doi.org/10.3390/computers13090222 - 6 Sep 2024
Cited by 1 | Viewed by 1015
Abstract
The inclusion of usability heuristics into the metaverse is aimed at solving the unique issues raised by virtual reality (VR), augmented reality (AR), and mixed reality (MR) environments. This research points out the usability challenges of metaverse user interfaces (UIs), such as information overloading, complex navigation, and the need for intuitive control mechanisms in these immersive spaces. By adapting the existing usability models to suit the metaverse context, this study presents a detailed list of heuristics and sub-heuristics that are designed to improve the overall usability of metaverse UIs. These heuristics are essential when it comes to creating user-friendly, inclusive, and captivating virtual environments (VEs) that take care of the needs of three-dimensional interactions, social dynamics demands, and integration with digital–physical worlds. It should be noted that these heuristics have to keep up with new technological advancements, as well as changing expectations from users, hence ensuring a positive user experience (UX) within the metaverse. Full article

29 pages, 1466 KiB  
Article
Teach Programming Using Task-Driven Case Studies: Pedagogical Approach, Guidelines, and Implementation
by Jaroslav Porubän, Milan Nosál’, Matúš Sulír and Sergej Chodarev
Computers 2024, 13(9), 221; https://doi.org/10.3390/computers13090221 - 5 Sep 2024
Viewed by 726
Abstract
Despite the effort invested to improve the teaching of programming, students often face problems with understanding its principles when using traditional learning approaches. This paper presents a novel teaching method for programming, combining the task-driven methodology and the case study approach. This method is called a task-driven case study. The case study aspect should provide a real-world context for the examples used to explain the required knowledge. The tasks guide students during the course to ensure that they will not fall into bad practices. We provide reasoning for using the combination of these two methodologies and define the essential properties of our method. Using a specific example of the Minesweeper case study from the Java technologies course, the readers are guided through the process of the case study selection, solution implementation, study guide writing, and course execution. The teachers’ and students’ experiences with this approach, including its advantages and potential drawbacks, are also summarized. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

21 pages, 3534 KiB  
Article
Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History
by Rao Mikkilineni, W. Patrick Kelly and Gideon Crawley
Computers 2024, 13(9), 220; https://doi.org/10.3390/computers13090220 - 5 Sep 2024
Cited by 1 | Viewed by 743
Abstract
Biological systems have a unique ability inherited through their genome. It allows them to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals with shared knowledge. The system receives information from various senses, makes sense of what is being observed, and acts using its experience while the observations are still in progress. We use the General Theory of Information (GTI) to implement a digital genome, specifying the operational processes that design, deploy, operate, and manage a cloud-agnostic distributed application that is independent of IaaS and PaaS infrastructure, which provides the resources required to execute the software components. The digital genome specifies the functional and non-functional requirements that define the goals and best-practice policies to evolve the system using associative memory and event-driven interaction history to maintain stability and safety while achieving the system’s objectives. We demonstrate a structural machine, cognizing oracles, and knowledge structures derived from GTI used for designing, deploying, operating, and managing a distributed video streaming application with autopoietic self-regulation that maintains structural stability and communication among distributed components with shared knowledge while maintaining expected behaviors dictated by functional requirements. Full article

19 pages, 675 KiB  
Review
Predicting Student Performance in Introductory Programming Courses
by João P. J. Pires, Fernanda Brito Correia, Anabela Gomes, Ana Rosa Borges and Jorge Bernardino
Computers 2024, 13(9), 219; https://doi.org/10.3390/computers13090219 - 5 Sep 2024
Viewed by 733
Abstract
The importance of accurately predicting student performance in education, especially in the challenging curricular unit of Introductory Programming, cannot be overstated. As institutions struggle with high failure rates and look for solutions to improve the learning experience, the need for effective prediction methods becomes critical. This study aims to conduct a systematic review of the literature on methods for predicting student performance in higher education, specifically in Introductory Programming, focusing on machine learning algorithms. Through this study, we not only present different applicable algorithms but also evaluate their performance, using identified metrics and considering the applicability in the educational context, specifically in higher education and in Introductory Programming. The results obtained through this study allowed us to identify trends in the literature, such as which machine learning algorithms were most applied in the context of predicting students’ performance in Introductory Programming in higher education, as well as which evaluation metrics and datasets are usually used. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

18 pages, 5905 KiB  
Article
Detection of Bus Driver Mobile Phone Usage Using Kolmogorov-Arnold Networks
by János Hollósi, Áron Ballagi, Gábor Kovács, Szabolcs Fischer and Viktor Nagy
Computers 2024, 13(9), 218; https://doi.org/10.3390/computers13090218 - 3 Sep 2024
Viewed by 858
Abstract
This research introduces a new approach for detecting mobile phone use by drivers, exploiting the capabilities of Kolmogorov-Arnold Networks (KAN) to improve road safety and comply with regulations prohibiting phone use while driving. To address the lack of available data for this specific task, a unique dataset was constructed consisting of images of bus drivers in two scenarios: driving without phone interaction and driving while on a phone call. This dataset provides the basis for the current research. Different KAN-based networks were developed for custom action recognition tailored to the nuanced task of identifying drivers holding phones. The system’s performance was evaluated against convolutional neural network-based solutions, and differences in accuracy and robustness were observed. The aim was to propose an appropriate solution for professional Driver Monitoring Systems (DMS) in research and development and to investigate the efficiency of KAN solutions for this specific sub-task. The implications of this work extend beyond enforcement, providing a foundational technology for automating monitoring and improving safety protocols in the commercial and public transport sectors. In conclusion, this study demonstrates the efficacy of KAN network layers in neural network designs for driver monitoring applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
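To convey the idea behind a KAN layer, here is a heavily simplified, illustrative PyTorch sketch in which each input-output edge carries its own small learnable univariate function (a tiny MLP standing in for the learnable splines of a real KAN) and the edge outputs are summed; it is not the authors' network.

```python
import torch
import torch.nn as nn

class EdgeFunctionLayer(nn.Module):
    """KAN-style layer: one learnable univariate function per edge."""
    def __init__(self, n_in, n_out, hidden=8):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.edges = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(n_in * n_out)
        ])

    def forward(self, x):                # x: (batch, n_in)
        outs = []
        for j in range((self.n_out)):
            # Output j sums a learned univariate function of every input i.
            total = sum(self.edges[j * self.n_in + i](x[:, i:i + 1])
                        for i in range(self.n_in))
            outs.append(total)
        return torch.cat(outs, dim=1)    # (batch, n_out)

layer = EdgeFunctionLayer(n_in=4, n_out=2)
print(layer(torch.randn(8, 4)).shape)    # torch.Size([8, 2])
```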

21 pages, 3689 KiB  
Article
Introducing HeliEns: A Novel Hybrid Ensemble Learning Algorithm for Early Diagnosis of Helicobacter pylori Infection
by Sultan Noman Qasem
Computers 2024, 13(9), 217; https://doi.org/10.3390/computers13090217 - 2 Sep 2024
Viewed by 821
Abstract
The Gram-negative bacterium Helicobacter pylori (H. pylori) infects the human stomach and is a major cause of gastritis, peptic ulcers, and gastric cancer. With over 50% of the global population affected, early and accurate diagnosis of H. pylori infection is crucial for effective treatment and prevention of severe complications. Traditional diagnostic methods, such as endoscopy with biopsy, serology, urea breath tests, and stool antigen tests, are often invasive, costly, and can lack precision. Recent advancements in machine learning (ML) and quantum machine learning (QML) offer promising non-invasive alternatives capable of analyzing complex datasets to identify patterns not easily discernible by human analysis. This research aims to develop and evaluate HeliEns, a novel quantum hybrid ensemble learning algorithm designed for the early and accurate diagnosis of H. pylori infection. HeliEns combines the strengths of multiple quantum machine learning models, specifically Quantum K-Nearest Neighbors (QKNN), Quantum Naive Bayes (QNB), and Quantum Logistic Regression (QLR), to enhance diagnostic accuracy and reliability. The development of HeliEns involved rigorous data preprocessing steps, including data cleaning, encoding of categorical variables, and feature scaling, to ensure the dataset’s suitability for quantum machine learning algorithms. The individual models (QKNN, QNB, and QLR) were trained and evaluated using metrics such as accuracy, precision, recall, and F1-score. The ensemble model was then constructed by integrating these quantum models using a hybrid approach that leverages their diverse strengths. The HeliEns model demonstrated superior performance compared to the individual models, achieving an accuracy of 94%, precision of 97%, recall of 92%, and an F1-score of 94% in detecting H. pylori infection. The quantum ensemble approach effectively mitigated the limitations of individual models, providing a robust and reliable diagnostic tool. HeliEns significantly improved diagnostic accuracy and reliability for early H. pylori detection, and the integration of multiple quantum ML algorithms within the HeliEns framework enhanced overall model performance. The non-invasive nature of the HeliEns model offers a cost-effective and user-friendly alternative to traditional diagnostic methods. This research underscores the transformative potential of quantum machine learning in healthcare, particularly in enhancing diagnostic efficiency and patient outcomes, and sets the stage for future work to further refine and validate the HeliEns model in real-world clinical settings. Full article

15 pages, 2504 KiB  
Article
Research on Identification of Critical Quality Features of Machining Processes Based on Complex Networks and Entropy-CRITIC Methods
by Dongyue Qu, Wenchao Liang, Yuting Zhang, Chaoyun Gu, Guangyu Zhou and Yong Zhan
Computers 2024, 13(9), 216; https://doi.org/10.3390/computers13090216 - 30 Aug 2024
Viewed by 680
Abstract
To address the difficulty of effectively identifying critical quality features in complex machining processes, this paper proposes a critical quality feature recognition method based on a machining process network. First, the machining process network model is constructed based on complex network theory, and the LeaderRank algorithm is used to identify the critical processes in the machining sequence. Second, the Entropy-CRITIC method is used to calculate the weights of the quality features of the critical processes, and the critical quality features are determined according to the weight ranking. Finally, the feasibility and effectiveness of the method are verified using the machining of a medium-speed marine diesel engine coupling rod as an example. The results show that the method can effectively identify critical quality features even with small sample data and provides support for machining process optimization and quality control, thus improving product consistency, reliability, and machining efficiency. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)
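As a rough illustration of the weighting step, the sketch below computes entropy weights and CRITIC weights for a toy matrix of quality features. How the authors fuse the two weight vectors is not stated in the abstract, so the equal-weight average used here is an assumption for illustration only.

```python
# Sketch of quality-feature weighting with the entropy weight method and CRITIC.
# Rows are measured parts, columns are quality features; data are toy values.
import numpy as np

def entropy_weights(X):
    """Entropy weight method: low-entropy (high-variation) features get more weight."""
    P = X / X.sum(axis=0)                       # column-wise proportions
    P = np.where(P == 0, 1e-12, P)              # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

def critic_weights(X):
    """CRITIC: contrast intensity (std. dev.) times conflict (1 - correlation)."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Xn.std(axis=0, ddof=1)
    R = np.corrcoef(Xn, rowvar=False)
    c = sigma * (1.0 - R).sum(axis=0)
    return c / c.sum()

rng = np.random.default_rng(0)
X = rng.random((30, 5))                         # 30 parts x 5 quality features (toy data)
w = 0.5 * entropy_weights(X) + 0.5 * critic_weights(X)  # assumed fusion rule
print("feature ranking (most to least critical):", np.argsort(w)[::-1])
```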

14 pages, 1893 KiB  
Article
A Study of a Drawing Exactness Assessment Method Using Localized Normalized Cross-Correlations in a Portrait Drawing Learning Assistant System
by Yue Zhang, Zitong Kong, Nobuo Funabiki and Chen-Chien Hsu
Computers 2024, 13(9), 215; https://doi.org/10.3390/computers13090215 - 23 Aug 2024
Viewed by 575
Abstract
Portrait drawing has gained significance in cultivating painting skills and artistic sensibility. In practice, novices often struggle with this art form without proper guidance from professionals, since they lack an understanding of the proportions and structures of facial features. To address this limitation, we have developed a Portrait Drawing Learning Assistant System (PDLAS) to assist novices in learning portrait drawing. The PDLAS provides auxiliary lines as references for facial features, which are extracted by applying the OpenPose and OpenCV libraries to a face photo of the target. A learner draws the portrait on an iPad using drawing software, where the auxiliary lines appear on a separate layer from the portrait. However, the current implementation of the PDLAS does not offer a function for assessing the exactness of the drawing result as feedback to the learner. In this paper, we present a drawing exactness assessment method using a Localized Normalized Cross-Correlation (NCC) algorithm in the PDLAS. NCC gives a similarity score between the original face photo and the drawing result by calculating the correlation of their brightness distributions. For precise feedback, the method calculates the NCC for each facial component by extracting its bounding box. In addition, we improve the auxiliary lines for the nose. For evaluation, we asked students at Okayama University, Japan, to draw portraits using the PDLAS and applied the proposed method to their drawing results; the results validated the method's effectiveness by suggesting improvements for individual drawing components. System usability was also confirmed through a System Usability Scale (SUS) questionnaire. The main finding of this research is that implementing the NCC algorithm within the PDLAS significantly enhances the accuracy of novice portrait drawings by providing detailed feedback on specific facial features, demonstrating the system's efficacy in art education and training. Full article
(This article belongs to the Special Issue Smart Learning Environments)
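To make the scoring step concrete, the following sketch computes a zero-mean NCC score for one facial component, assuming the drawing has already been aligned to the photo's resolution. The file names and the bounding box are placeholders: in the paper, the boxes come from facial features detected with OpenPose, which is not reproduced here.

```python
# Minimal sketch of the localized NCC idea: score one facial component by
# comparing the same bounding box in the reference photo and the drawing.
import numpy as np
import cv2

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size grayscale patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Hypothetical input files; the drawing is assumed aligned to the photo's size.
photo = cv2.imread("face_photo.png", cv2.IMREAD_GRAYSCALE)
drawing = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)

x, y, w, h = 120, 200, 80, 40   # placeholder bounding box for, e.g., the mouth
score = ncc(photo[y:y+h, x:x+w], drawing[y:y+h, x:x+w])
print(f"mouth similarity: {score:.3f}")  # 1.0 means identical brightness pattern
```

Computing one score per component box, rather than a single score over the whole face, is what lets the system tell a learner which feature needs rework.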
