Computers, Volume 11, Issue 8 (August 2022) – 9 articles

Cover Story: Short-term electric power load forecasting is a critical and essential task for utilities in the electric power industry for proper energy trading, enabling the independent system operator to operate the network without technical or economic issues. From an electric power distribution system point of view, it is essential for proper planning and operation. A machine learning model, namely a regression tree, is used to forecast the active power load one hour and one day ahead. Real-time active power load data to train and test the machine learning models are collected from a 33/11 kV substation located in Telangana State, India. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 1094 KiB  
Article
Interpretable Lightweight Ensemble Classification of Normal versus Leukemic Cells
by Yúri Faro Dantas de Sant’Anna, José Elwyslan Maurício de Oliveira and Daniel Oliveira Dantas
Computers 2022, 11(8), 125; https://doi.org/10.3390/computers11080125 - 19 Aug 2022
Cited by 1 | Viewed by 2288
Abstract
The lymphocyte classification problem is usually solved by deep learning approaches based on convolutional neural networks with multiple layers. However, these techniques require specific hardware and long training times. This work proposes a lightweight image classification system capable of discriminating between healthy and cancerous lymphocytes of leukemia patients using image processing and feature-based machine learning techniques that require less training time and can run on a standard CPU. The features are composed of statistical, morphological, textural, frequency, and contour features extracted from each image and used to train a set of lightweight algorithms that classify the lymphocytes as malignant or healthy. After training, these classifiers were combined into an ensemble to improve the results. The proposed method has a lower computational cost than most deep learning approaches in terms of training time and model size. Our results contribute to leukemia classification research, showing that high performance can be achieved by classifiers trained with a rich set of features. This study extends a previous work by combining simple classifiers into a single ensemble solution. With principal component analysis, it is possible to reduce the number of features while maintaining high accuracy. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
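The pipeline the abstract describes can be sketched as follows: extracted feature vectors are reduced with principal component analysis and fed to a soft-voting ensemble of lightweight classifiers. This is a minimal illustration only, assuming synthetic stand-in features rather than the paper's actual image descriptors or chosen base learners.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in for the statistical/morphological/textural feature vectors:
# 200 "cells" x 50 features; the first 5 features carry the class signal.
X = rng.normal(size=(200, 50))
X[:, :5] *= 3.0                                # informative, high-variance features
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # 0 = healthy, 1 = malignant (toy labels)

ensemble = make_pipeline(
    PCA(n_components=10),                      # reduce features, keep accuracy
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=500)),
            ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="soft",                         # average predicted probabilities
    ),
)
ensemble.fit(X, y)
train_acc = ensemble.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

The base learners here are placeholders; the point is that a PCA-compressed feature matrix and a soft-voting ensemble of cheap classifiers train in seconds on a CPU.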

21 pages, 8319 KiB  
Article
UXO-AID: A New UXO Classification Application Based on Augmented Reality to Assist Deminers
by Qabas A. Hameed, Harith A. Hussein, Mohamed A. Ahmed, Mahmood M. Salih, Reem D. Ismael and Mohammed Basim Omar
Computers 2022, 11(8), 124; https://doi.org/10.3390/computers11080124 - 19 Aug 2022
Cited by 3 | Viewed by 2950
Abstract
Unexploded ordnance (UXO) is a worldwide problem and a long-term hazard because of its ability to harm humanity by remaining active and destructive decades after a conflict has concluded. In addition, current UXO clearance methods mainly involve manual clearance and depend on the deminer's experience. However, this approach has a high misclassification rate, which increases the likelihood of an explosion ending the deminer's life. This study proposes a new approach to identifying UXO based on augmented reality technology. The methodology comprises two phases. First, a new dataset of UXO samples is created by printing 3D samples and building a 3D model of the object data file with accurate data for the 3D-printed samples. Second, the UXO-AID mobile application prototype, based on augmented reality technology, is developed. The proposed prototype was evaluated and tested with different methods, measuring its performance at different light intensities and distances. The testing results revealed that the application performs successfully in excellent and moderate lighting at a distance of 10 to 30 cm. As for recognition accuracy, the overall recognition success rate reached 82.5%, as the disparity in the number of features of each object affected the accuracy of object recognition. Additionally, the application's ability to support deminers was assessed through a usability questionnaire submitted by 20 deminers, based on three factors: satisfaction, effectiveness, and efficiency. The proposed UXO-AID mobile application prototype supports deminers in classifying UXO accurately and in real time, reducing the cognitive load of complex tasks. UXO-AID is simple to use, requires no prior training, and takes advantage of the wide availability of mobile devices. Full article
(This article belongs to the Special Issue Advances in Augmented and Mixed Reality to the Industry 4.0)

21 pages, 1584 KiB  
Article
Extract Class Refactoring Based on Cohesion and Coupling: A Greedy Approach
by Musaad Alzahrani
Computers 2022, 11(8), 123; https://doi.org/10.3390/computers11080123 - 16 Aug 2022
Cited by 3 | Viewed by 2523
Abstract
A large class with many responsibilities is a design flaw that commonly occurs in real-world object-oriented systems during their lifespan. Such a class tends to be more difficult to comprehend, test, and change. Extract class refactoring (ECR) addresses this design flaw by extracting a set of smaller classes with better quality from the large class. Unfortunately, ECR is a costly process that takes great time and effort when conducted entirely by hand. Thus, many approaches have been introduced in the literature to automatically suggest the best set of classes that can be extracted from a large class. However, most of these approaches focus on improving the cohesion of the extracted classes yet neglect the coupling between them, which can lead to the extraction of highly coupled classes. Therefore, this paper proposes a novel approach that considers the combination of cohesion and coupling to identify the set of classes that can be extracted from a large class. The proposed approach was empirically evaluated on real-world Blobs taken from two open-source object-oriented systems. The results of the empirical evaluation revealed that the proposed approach is potentially useful and leads to improvement in the overall quality. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
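A greedy cohesion/coupling split of the kind the abstract mentions can be illustrated in a few lines. This is not the paper's exact algorithm, and the method names and attribute sets are invented; it only shows the greedy idea: assign each method of a Blob to the candidate class whose attribute usage it overlaps most (cohesion), and open a new candidate class when there is no shared state (limiting coupling between the extracted classes).

```python
# Hypothetical Blob: each method mapped to the instance attributes it accesses.
methods = {
    "save":   {"db", "record"},
    "load":   {"db", "record"},
    "render": {"widget", "theme"},
    "resize": {"widget"},
    "log":    {"logfile"},
}

groups = []  # each candidate class: {"methods": [...], "attrs": {...}}
for name, attrs in methods.items():
    best, best_overlap = None, 0
    for g in groups:
        overlap = len(attrs & g["attrs"])   # shared attributes = cohesion signal
        if overlap > best_overlap:
            best, best_overlap = g, overlap
    if best is None:                        # no shared state: extract a new class
        groups.append({"methods": [name], "attrs": set(attrs)})
    else:                                   # cohesive fit: merge into best group
        best["methods"].append(name)
        best["attrs"] |= attrs

for g in groups:
    print(sorted(g["methods"]), "->", sorted(g["attrs"]))
```

On this toy Blob the greedy pass yields three candidate classes (persistence, UI, and logging), with no attributes shared across groups.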

29 pages, 344 KiB  
Article
Transforming Points of Single Contact Data into Linked Data
by Pavlina Fragkou and Leandros Maglaras
Computers 2022, 11(8), 122; https://doi.org/10.3390/computers11080122 - 11 Aug 2022
Cited by 1 | Viewed by 2141
Abstract
Open data portals contain valuable information for citizens and businesses. However, searching for information can prove tiresome even in portals addressing similar domains. A typical case is the information residing in the European Commission's portals, supported by Member States, which aim to facilitate service provision activities for EU citizens and businesses. The current work followed the FAIR principles (Findability, Accessibility, Interoperability, and Reuse of digital assets) as well as the GO-FAIR principles and transformed raw data into FAIR data. The innovative part of this work is the mapping of information residing in various governmental portals (Points of Single Contact) by transforming the information appearing in them into RDF format (i.e., as Linked Data), in order to make it easily accessible, exchangeable, interoperable, and publishable as linked open data. Mapping was performed using the semantic model of a single portal, i.e., the enriched Greek e-GIF ontology, by retrieving and analyzing raw (i.e., non-FAIR) data, defining the semantic model, and making the data linkable. The data mapping process proved to require significant manual effort and revealed that data value remains unexplored due to poor data representation. It also highlighted the need for appropriately designing and implementing horizontal actions addressing an important number of recipients in an interoperable way. Full article
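The core transformation, raw portal records to RDF triples, can be sketched without any RDF tooling by emitting N-Triples directly. The namespace and field names below are hypothetical placeholders, not the enriched Greek e-GIF ontology the paper actually uses.

```python
# Minimal sketch: map each field of a flat (non-FAIR) record to one RDF
# triple, serialized in N-Triples syntax.
EX = "http://example.org/egif#"          # hypothetical ontology namespace

def to_ntriples(record: dict, subject: str) -> list:
    """Return one N-Triples line per (field, value) pair of the record."""
    triples = []
    for field, value in record.items():
        triples.append(f'<{subject}> <{EX}{field}> "{value}" .')
    return triples

raw = {"serviceName": "Business Registration", "country": "GR"}
for t in to_ntriples(raw, "http://example.org/service/42"):
    print(t)
```

Once records share predicate URIs from a common ontology, the resulting triples from different portals become linkable and queryable together, which is the interoperability payoff the abstract describes.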

18 pages, 5553 KiB  
Article
A Lightweight In-Vehicle Alcohol Detection Using Smart Sensing and Supervised Learning
by Qasem Abu Al-Haija and Moez Krichen
Computers 2022, 11(8), 121; https://doi.org/10.3390/computers11080121 - 3 Aug 2022
Cited by 15 | Viewed by 9056
Abstract
According to risk investigations of accident involvement, alcohol-impaired driving is one of the major causes of motor vehicle accidents. Preventing highly intoxicated persons from driving could potentially save many lives. This paper proposes a lightweight in-vehicle alcohol detection system that processes the data generated from six alcohol sensors (MQ-3 alcohol sensors) using an optimizable shallow neural network (O-SNN). The experimental evaluation results exhibit a high-performance detection system, scoring a 99.8% detection accuracy with a very short inference delay of 2.22 μs. Hence, the proposed model can be efficiently deployed and used to detect in-vehicle alcohol with high accuracy and low inference overhead as a part of the driver alcohol detection system for safety (DADSS), aiming at the massive deployment of alcohol-sensing systems that could potentially save thousands of lives annually. Full article
(This article belongs to the Special Issue Real-Time Embedded Systems in IoT)
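The setup is small enough to sketch end to end: six sensor channels feeding a shallow neural network. The data, threshold, and network size below are invented stand-ins, not the paper's tuned O-SNN or its MQ-3 readings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# 300 synthetic samples x 6 sensor channels (one per MQ-3 sensor), readings
# normalized to [0, 1]; label 1 ("intoxicated") when the mean reading is high.
X = rng.uniform(0.0, 1.0, size=(300, 6))
y = (X.mean(axis=1) > 0.5).astype(int)

# A single small hidden layer keeps inference cheap, in the spirit of a
# shallow network deployable on embedded hardware.
snn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
snn.fit(X, y)
acc = snn.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

A network this small has only a few hundred parameters, so a forward pass is a handful of multiply-adds per sensor frame, which is what makes microsecond-scale inference plausible on in-vehicle hardware.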

32 pages, 5708 KiB  
Article
A Novel Criticality Analysis Technique for Detecting Dynamic Disturbances in Human Gait
by Shadi Eltanani, Tjeerd V. olde Scheper and Helen Dawes
Computers 2022, 11(8), 120; https://doi.org/10.3390/computers11080120 - 3 Aug 2022
Cited by 3 | Viewed by 2109
Abstract
The application of machine learning (ML) has made an unprecedented change in the field of medicine, showing significant potential to automate tasks and to achieve objectives that are closer to human cognitive capabilities. Human gait, in particular, is a series of continuous metabolic interactions specific to humans. Intelligent recognition of dynamic changes in gait enables physicians in clinical practice to identify impaired gait early and to reach proper decisions. Because of the underlying complexity of the biological system, it can be difficult to create an accurate detection and analysis of imbalanced gait. This paper proposes a novel Criticality Analysis (CA) methodology as a feasible method to extract the dynamic interactions involved in human gait. This allows a useful scale-free representation of multivariate dynamic data in a nonlinear representation space. To quantify the effectiveness of the CA methodology, a Support Vector Machine (SVM) algorithm is implemented to identify the nonlinear relationships and high-order interactions between multiple gait data variables. The gait features extracted by the CA method were used for training and testing the SVM algorithm. The simulation results show that the SVM model, with the support of the CA method, increases the accuracy and efficiency of gait analysis to extremely high levels. Therefore, it can perform as a robust classification tool for detecting dynamic disturbances in biological data patterns, creating a tremendous opportunity for clinical diagnosis and rehabilitation. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
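The classification stage can be illustrated with a kernel SVM on a nonlinearly separable toy pattern. This stands in for the CA-derived gait features only in spirit: the concentric-circles data below are a generic sklearn example, not gait data, and the kernel settings are not the paper's.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes that no linear boundary can separate, mimicking the kind of
# nonlinear structure an RBF-kernel SVM is meant to capture.
X, y = make_circles(n_samples=200, noise=0.05, factor=0.4, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")   # nonlinear decision boundary
clf.fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

The RBF kernel implicitly maps the data into a space where the two rings become separable, which is the same mechanism that lets an SVM pick up high-order interactions between gait variables.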

30 pages, 20558 KiB  
Article
Platform-Independent Web Application for Short-Term Electric Power Load Forecasting on 33/11 kV Substation Using Regression Tree
by Venkataramana Veeramsetty, Modem Sai Pavan Kumar and Surender Reddy Salkuti
Computers 2022, 11(8), 119; https://doi.org/10.3390/computers11080119 - 29 Jul 2022
Cited by 6 | Viewed by 2226
Abstract
Short-term electric power load forecasting is a critical and essential task for utilities in the electric power industry for proper energy trading, which enables the independent system operator to operate the network without technical or economic issues. From an electric power distribution system point of view, accurate load forecasting is essential for proper planning and operation. In order to build a robust machine learning model that forecasts the load accurately irrespective of weather conditions and the type of day, features such as the season, temperature, humidity, and day-status are incorporated into the data. In this paper, a machine learning model, namely a regression tree, is used to forecast the active power load one hour and one day ahead. Real-time active power load data to train and test the machine learning models are collected from a 33/11 kV substation located in Telangana State, India. Based on the simulation results, it is observed that the regression tree model is able to forecast the load with low error. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
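The model and features named in the abstract map directly onto a regression tree. The sketch below uses synthetic stand-in data with an invented load relationship, not the Telangana substation measurements, and the encodings and tree depth are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n = 500
# The four feature columns the paper lists, synthetically generated:
season = rng.integers(0, 4, n)        # 0..3 encoded season
temp = rng.uniform(15, 45, n)         # temperature, degrees C
humidity = rng.uniform(20, 90, n)     # relative humidity, percent
day_status = rng.integers(0, 2, n)    # 0 = holiday, 1 = working day
# Toy load relation: rises with temperature and on working days, plus noise.
load = 2.0 * temp + 5.0 * day_status + rng.normal(0, 1, n)

X = np.column_stack([season, temp, humidity, day_status])
tree = DecisionTreeRegressor(max_depth=6, random_state=2)
tree.fit(X, load)
r2 = tree.score(X, load)
print(f"training R^2: {r2:.2f}")
```

For hour-ahead versus day-ahead forecasting, the same model would simply be trained with the target shifted by one hour or twenty-four hours relative to the feature rows.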

21 pages, 1207 KiB  
Article
Computational and Communication Infrastructure Challenges for Resilient Cloud Services
by Heberth F. Martinez, Oscar H. Mondragon, Helmut A. Rubio and Jack Marquez
Computers 2022, 11(8), 118; https://doi.org/10.3390/computers11080118 - 29 Jul 2022
Cited by 5 | Viewed by 3136
Abstract
Fault tolerance and the availability of applications, computing infrastructure, and communications systems during unexpected events are critical in cloud environments. The microservices architecture, and the technologies that it uses, should be able to maintain acceptable service levels in the face of adverse circumstances. In this paper, we discuss the challenges faced by cloud infrastructure in relation to providing resilience to applications. Based on this analysis, we present our approach for a software platform based on a microservices architecture, as well as the resilience mechanisms to mitigate the impact of infrastructure failures on the availability of applications. We demonstrate the capacity of our platform to provide resilience to analytics applications, minimizing service interruptions and maintaining acceptable response times. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)

20 pages, 1838 KiB  
Article
Combining Log Files and Monitoring Data to Detect Anomaly Patterns in a Data Center
by Laura Viola, Elisabetta Ronchieri and Claudia Cavallaro
Computers 2022, 11(8), 117; https://doi.org/10.3390/computers11080117 - 26 Jul 2022
Cited by 1 | Viewed by 3606
Abstract
Context—Anomaly detection in a data center is a challenging task, having to consider different services on various resources. Current literature shows the application of artificial intelligence and machine learning techniques to either log files or monitoring data: the former are created by services at run time, while the latter are produced by specific sensors directly on the physical or virtual machine. Objectives—We propose a model that exploits information in both log files and monitoring data to identify patterns and detect anomalies over time, both at the service level and at the machine level. Methods—The key idea is to construct a specific dictionary for each log file, which helps to extract anomalous n-grams in the feature matrix. Several Natural Language Processing techniques, such as word clouds and topic modeling, have been used to enrich this dictionary. A clustering algorithm was then applied to the feature matrix to identify and group the various types of anomalies. In parallel, a time series anomaly detection technique was applied to the sensor data in order to combine problems found in the log files with problems stored in the monitoring data. Several services (i.e., log files) running on the same machine were grouped together with the monitoring metrics. Results—We tested our approach on a real data center equipped with log files and monitoring data that characterize the behaviour of physical and virtual resources in production. The data were provided by the National Institute for Nuclear Physics in Italy. We observed a correspondence between anomalies in log files and in monitoring data, e.g., a decrease in memory usage or an increase in machine load. The results are extremely promising. Conclusions—Important outcomes have emerged thanks to the integration of these two types of data. Our model requires integrating site administrators' expertise in order to consider all critical scenarios in the data center and to understand the results properly. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
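The log-file side of the method, an n-gram feature matrix followed by clustering, can be sketched briefly. The log lines below are invented, and a real deployment would use the per-service dictionaries, NLP enrichment, and monitoring-data correlation the abstract describes.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical log lines: four routine entries and two anomalous ones.
logs = [
    "job started ok", "job started ok", "job finished ok",
    "job finished ok", "disk failure on node", "disk failure on node",
]

# Word bigrams act as the per-log "dictionary" of n-gram features.
vec = CountVectorizer(ngram_range=(2, 2))
X = vec.fit_transform(logs)

# Cluster the feature matrix; anomalous lines share no bigrams with the
# routine ones, so they land in their own cluster.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)
print(labels)
```

In the paper's full pipeline, the cluster containing the failure-like lines would then be cross-checked against time series anomalies in the machine's monitoring metrics to confirm a real incident.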
