Computers, Volume 10, Issue 1 (January 2021) – 13 articles

Cover Story: Population health management is the automated process of using big data to define patient cohorts and stratify groups by risk, with the final aim of improving clinical outcomes and quality of life while also reducing healthcare costs. This paper presents a trade-off analysis of multiple machine learning algorithms for identifying high-risk patients, who are usually affected by multimorbidity and are major users of the healthcare system. Input datasets consist of administrative and socioeconomic data covering periods of different lengths. Random Forest with 1 year of historical data achieves the best results, enabling real-time risk prediction updates whenever new data are collected and giving physicians the possibility to define appropriate personalized medicine for patients. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
21 pages, 529 KiB  
Article
Anomalies Detection Using Isolation in Concept-Drifting Data Streams
by Maurras Ulbricht Togbe, Yousra Chabchoub, Aliou Boly, Mariam Barry, Raja Chiky and Maroua Bahri
Computers 2021, 10(1), 13; https://doi.org/10.3390/computers10010013 - 19 Jan 2021
Cited by 40 | Viewed by 8189
Abstract
Detecting anomalies in streaming data is an important issue for many application domains, such as cybersecurity, natural disasters, or bank fraud. Different approaches have been designed to detect anomalies: statistics-based, isolation-based, clustering-based, etc. In this paper, we present a structured survey of the existing anomaly detection methods for data streams, with a deep view on Isolation Forest (iForest). We first provide an implementation of Isolation Forest Anomalies detection in Stream Data (IForestASD), a variant of iForest for data streams. This implementation is built on top of scikit-multiflow (River), an open-source machine learning framework for data streams that contains a single streaming anomaly detection algorithm, Streaming Half-Space Trees. We performed experiments on different real and well-known data sets in order to compare the performance of our implementation of IForestASD and Half-Space Trees. Moreover, we extended the IForestASD algorithm to handle drifting data by proposing three algorithms that involve two main well-known drift detection methods: ADWIN and KSWIN. ADWIN is an adaptive sliding window algorithm for detecting change in a data stream. KSWIN is a more recent method that refers to the Kolmogorov–Smirnov Windowing method for concept drift detection. More precisely, we extended KSWIN to be able to deal with n-dimensional data streams. We validated and compared all of the proposed methods on both real and synthetic data sets. In particular, we evaluated the F1-score, the execution time, and the memory consumption. The experiments show that our extensions have lower resource consumption than the original version of IForestASD, with similar or better detection efficiency. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)
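As a rough illustration of the isolation-based streaming approach surveyed above, the sketch below retrains a scikit-learn Isolation Forest over a sliding window whenever a two-sample Kolmogorov–Smirnov test flags drift on one monitored feature. It is not the authors' IForestASD or extended-KSWIN implementation; the window size, significance level, and single-feature drift check are assumptions made for brevity.

```python
# Illustrative sketch only: a sliding-window Isolation Forest retrained when a
# Kolmogorov-Smirnov test flags drift on one feature. NOT the paper's method.
from collections import deque
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest

WINDOW = 256   # reference window size (assumed)
ALPHA = 0.01   # KS-test significance level (assumed)

reference = deque(maxlen=WINDOW)   # window the current model was trained on
recent = deque(maxlen=WINDOW)      # most recent observations
model = None

def process(x):
    """Score one observation x (1-D feature vector); retrain on detected drift."""
    global model
    recent.append(x)
    if model is None:
        reference.append(x)
        if len(reference) == WINDOW:
            model = IsolationForest(n_estimators=100, random_state=0)
            model.fit(np.asarray(reference))
        return None
    # Higher score = more anomalous (score_samples returns higher for normal points).
    score = -model.score_samples(np.asarray(x).reshape(1, -1))[0]
    if len(recent) == WINDOW:
        # Compare the first feature's distribution in the two windows.
        _, p = ks_2samp(np.asarray(reference)[:, 0], np.asarray(recent)[:, 0])
        if p < ALPHA:                       # drift detected: rebuild the forest
            reference.clear()
            reference.extend(recent)
            model = IsolationForest(n_estimators=100, random_state=0)
            model.fit(np.asarray(reference))
    return score
```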

15 pages, 884 KiB  
Article
A Generic Encapsulation to Unravel Social Spreading of a Pandemic: An Underlying Architecture
by Saad Alqithami
Computers 2021, 10(1), 12; https://doi.org/10.3390/computers10010012 - 17 Jan 2021
Cited by 3 | Viewed by 3717
Abstract
Cases of a new emergent infectious disease caused by mutations in the coronavirus family, called “COVID-19,” have spiked recently, affecting millions of people, and this has been classified as a global pandemic due to the wide spread of the virus. Epidemiologically, humans are the targeted hosts of COVID-19, whereby indirect/direct transmission pathways are mitigated by social/spatial distancing. People naturally exist in dynamically cascading networks of social/spatial interactions. Their rational actions and interactions have huge uncertainties in regard to common social contagions with rapid network proliferations on a daily basis. Different parameters play major roles in minimizing such uncertainties by shaping the understanding of such contagions to include cultures, beliefs, norms, values, ethics, etc. Thus, this work is directed toward investigating and predicting the viral spread of the current wave of COVID-19 based on human socio-behavioral analyses in various community settings with unknown structural patterns. We examine the spreading and social contagions in unstructured networks by proposing a model that should be able to (1) reorganize and synthesize infected clusters of any networked agents, (2) clarify any noteworthy members of the population through a series of analyses of their behavioral and cognitive capabilities, (3) predict where the spread is heading, along with possible outcomes, and (4) propose applicable intervention tactics that can be helpful in creating strategies to mitigate the spread. Such properties are essential in managing the rate of spread of viral infections. Furthermore, a novel spectra-based methodology that leverages configuration models as a reference network is proposed to quantify spreading in a given candidate network. We derive mathematical formulations to demonstrate the viral spread in the network structures. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health)
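The following minimal sketch illustrates, under assumptions of our own, the idea of using a configuration model as a degree-preserving reference network and an adjacency spectral radius as a crude spreading proxy; the candidate graph, the chosen spectral quantity, and the comparison are placeholders rather than the paper's spectra-based formulation.

```python
# Minimal sketch: compare a candidate contact network's adjacency spectral
# radius against a degree-preserving configuration-model reference.
import networkx as nx
import numpy as np

def spectral_radius(G):
    """Largest eigenvalue (in magnitude) of the adjacency matrix."""
    A = nx.to_numpy_array(G)
    return float(np.max(np.abs(np.linalg.eigvals(A))))

# Candidate contact network (a random small-world graph as a stand-in).
candidate = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

# Reference: configuration model preserving the candidate's degree sequence.
degrees = [d for _, d in candidate.degree()]
ref = nx.configuration_model(degrees, seed=1)
ref = nx.Graph(ref)                      # collapse parallel edges
ref.remove_edges_from(nx.selfloop_edges(ref))

rho_c, rho_r = spectral_radius(candidate), spectral_radius(ref)
print(f"candidate: {rho_c:.2f}, configuration-model reference: {rho_r:.2f}")
# A larger spectral radius loosely indicates a lower epidemic threshold,
# i.e. easier spreading, relative to the degree-preserving reference.
```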

27 pages, 7225 KiB  
Article
An Empirical Review of Automated Machine Learning
by Lorenzo Vaccaro, Giuseppe Sansonetti and Alessandro Micarelli
Computers 2021, 10(1), 11; https://doi.org/10.3390/computers10010011 - 13 Jan 2021
Cited by 49 | Viewed by 8260
Abstract
In recent years, Automated Machine Learning (AutoML) has become increasingly important in Computer Science due to the valuable potential it offers. This is evidenced by the high number of works published in academia and the significant efforts made in industry. However, some problems still need to be resolved. In this paper, we review some Machine Learning (ML) models and methods proposed in the literature to analyze their strengths and weaknesses. Then, we propose their use—alone or in combination with other approaches—to provide possible valid AutoML solutions. We analyze those solutions from a theoretical point of view and evaluate them empirically on three Atari games from the Arcade Learning Environment. Our goal is to identify what, we believe, could be some promising ways to create truly effective AutoML frameworks, able to replace the human expert as much as possible and thereby make it easier to apply ML approaches to typical problems of specific domains. We hope that the findings of our study will provide useful insights for future research work in AutoML. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)
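For readers unfamiliar with the basic mechanics of AutoML, the toy sketch below runs a small random search over models and hyperparameters with scikit-learn; it only illustrates the kind of automation AutoML targets and is unrelated to the reinforcement-learning and Atari setting evaluated in the paper.

```python
# Toy random-search AutoML loop over a tiny model/hyperparameter space.
import random
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
search_space = [
    lambda: RandomForestClassifier(n_estimators=random.choice([50, 100, 200])),
    lambda: LogisticRegression(C=random.choice([0.1, 1.0, 10.0]), max_iter=2000),
]
random.seed(0)
best_score, best_model = -1.0, None
for _ in range(10):                      # search budget: 10 random trials
    model = random.choice(search_space)()
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_model = score, model
print(best_model, f"cv accuracy = {best_score:.3f}")
```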

21 pages, 2016 KiB  
Article
Energy-Efficient Task Partitioning for Real-Time Scheduling on Multi-Core Platforms
by Manal A. El Sayed, El Sayed M. Saad, Rasha F. Aly and Shahira M. Habashy
Computers 2021, 10(1), 10; https://doi.org/10.3390/computers10010010 - 8 Jan 2021
Cited by 19 | Viewed by 5310
Abstract
Multi-core processors have become widespread computing engines for recent embedded real-time systems. Efficient task partitioning plays a significant role in real-time computing for achieving higher performance alongside sustaining system correctness and predictability and meeting all hard deadlines. This paper deals with the problem of energy-aware static partitioning of periodic, dependent real-time tasks on a homogenous multi-core platform. Concurrent access to shared resources by multiple tasks running on different cores induces higher blocking times, which increase the worst-case execution time (WCET) of tasks and can cause hard deadlines to be missed, consequently resulting in system failure. The proposed blocking-aware-based partitioning (BABP) algorithm aims to reduce the overall energy consumption while avoiding deadline violations. Compared to existing partitioning strategies, the proposed technique achieves greater energy savings. A series of experiments test the capabilities of the suggested algorithm compared to popular heuristic partitioning algorithms. A comparison was made between the most used bin-packing algorithms and the proposed algorithm in terms of energy consumption and system schedulability. Experimental results demonstrate that the designed algorithm outperforms the Worst Fit Decreasing (WFD), Best Fit Decreasing (BFD), and Similarity-Based Partitioning (SBP) bin-packing algorithms, reduces the energy consumption of the overall system, and improves schedulability. Full article
(This article belongs to the Special Issue Real-Time Systems in Emerging IoT-Embedded Applications)
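As a point of reference for the bin-packing baselines mentioned in the abstract, the sketch below implements Worst-Fit Decreasing partitioning of task utilizations onto homogeneous cores. The task set is invented, and the sketch deliberately ignores the shared-resource blocking that the proposed BABP algorithm accounts for.

```python
# Worst-Fit Decreasing (WFD) partitioning of task utilizations onto M cores,
# one of the baseline heuristics the paper compares against.
def wfd_partition(utilizations, m_cores, capacity=1.0):
    """Assign each task (by utilization) to the core with the most spare capacity."""
    loads = [0.0] * m_cores
    assignment = [[] for _ in range(m_cores)]
    for task_id, u in sorted(enumerate(utilizations), key=lambda t: t[1], reverse=True):
        core = min(range(m_cores), key=lambda c: loads[c])  # worst fit = least-loaded core
        if loads[core] + u > capacity:
            raise ValueError(f"task {task_id} (u={u}) does not fit on any core")
        loads[core] += u
        assignment[core].append(task_id)
    return assignment, loads

tasks = [0.45, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10]   # made-up utilizations
assignment, loads = wfd_partition(tasks, m_cores=3)
print(assignment, [round(l, 2) for l in loads])
```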

28 pages, 5718 KiB  
Article
Simulation and Analysis of Self-Replicating Robot Decision-Making Systems
by Andrew Jones and Jeremy Straub
Computers 2021, 10(1), 9; https://doi.org/10.3390/computers10010009 - 6 Jan 2021
Cited by 2 | Viewed by 3303
Abstract
Self-replicating robot systems (SRRSs) are a new prospective paradigm for robotic exploration. They can potentially facilitate lower mission costs and enhance mission capabilities by allowing some materials, which are needed for robotic system construction, to be collected in situ and used for robot fabrication. The use of a self-replicating robot system can potentially lower risk aversion, owing to the ability to replenish lost or damaged robots, and may increase the likelihood of mission success. This paper proposes and compares system configurations of an SRRS. A simulation system was designed and is used to model how an SRRS performs based on its system configuration, attributes, and operating environment. Experiments were conducted using this simulation and the results are presented. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

12 pages, 584 KiB  
Article
Methodology to Improve Services in Small IT Centers: Application to Educational Centers
by Juan Luis Rubio Sánchez
Computers 2021, 10(1), 8; https://doi.org/10.3390/computers10010008 - 4 Jan 2021
Cited by 4 | Viewed by 3403
Abstract
Educational centers (schools, academies, high schools, etc.) are usually small companies, which makes them special in terms of management. The management of their IT services is far from standard and is based on home-grown solutions. The disadvantage of this approach is clear, as became evident during the COVID-19 pandemic. The solution to properly managing IT services is based on the use of ITIL (the Information Technology Infrastructure Library). The question is how to apply this standard, which only defines the processes to implement but does not describe how, or in what order, to implement them. This article shows which IT processes are really needed in any educational center and the order in which they should be implemented. The method used consists of populating a knowledge database with extensive information from schools, academies, and other educational centers. After that, an existing optimization model is adopted and a representative learning center is defined, which is used to propose the IT process sequence; finally, a set of optimal IT processes and the order in which to implement them is defined. These ordered processes optimize the quality of IT for learning services. The main result is an ordered set of IT processes that best fits the needs of IT departments in small educational centers. Full article
(This article belongs to the Special Issue Interactive Technology and Smart Education)

12 pages, 956 KiB  
Article
Smart Contract Data Feed Framework for Privacy-Preserving Oracle System on Blockchain
by Junhoo Park, Hyekjin Kim, Geunyoung Kim and Jaecheol Ryou
Computers 2021, 10(1), 7; https://doi.org/10.3390/computers10010007 - 28 Dec 2020
Cited by 19 | Viewed by 7492
Abstract
As blockchain-based applications and research such as cryptocurrency increase, the oracle problem of bringing external data into the blockchain is emerging. Among the methods to solve the oracle problem, approaches that configure the oracle on top of TLS, an existing Internet infrastructure, have been proposed. However, these methods currently have the disadvantage of not supporting privacy protection for external data, and there are limitations in automating smart contract processes that depend on the verification of external data. To solve this problem, we propose a framework consisting of middleware for the external source server, a data prover, and a verification contract. The framework converts data signed by the web server into a proof that the owner can present using zk-SNARKs, and provides a smart contract that can verify it. Through these procedures, data owners not only protect their privacy by producing the proof themselves, but can also automate on-chain processing through smart contract verification. For the proposed framework, we create a proof using libsnark for server data and show the performance and cost of verification in Solidity, the smart contract language of the Ethereum platform. Full article
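The mock below sketches the overall data flow (source server, data prover, verification contract) using only Python's standard library; a hash commitment and an HMAC check stand in for the libsnark zk-SNARK proof and the Solidity verifier, so it provides no zero-knowledge property and is purely a structural illustration.

```python
# Conceptual mock of the oracle data flow. Real zk-SNARK proof generation and
# on-chain Solidity verification are replaced by a commitment and an HMAC check.
import hashlib, hmac, json, os

SERVER_KEY = os.urandom(32)                 # shared/attested key (assumption)

def server_sign(payload: dict) -> tuple[bytes, bytes]:
    """External source server: serialize and authenticate the data."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return blob, hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()

def prover_commit(blob: bytes) -> tuple[bytes, bytes]:
    """Data prover: publish a salted commitment to the private data.
    In the paper's framework, a zk-SNARK proof over the signed data
    would be produced here instead of a simple commitment."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + blob).digest()

def contract_verify(blob: bytes, tag: bytes, salt: bytes, commitment: bytes) -> bool:
    """Stand-in for the verification contract: check authenticity and that the
    revealed data matches the published commitment."""
    ok_sig = hmac.compare_digest(hmac.new(SERVER_KEY, blob, hashlib.sha256).digest(), tag)
    ok_commit = hashlib.sha256(salt + blob).digest() == commitment
    return ok_sig and ok_commit

blob, tag = server_sign({"price_usd": 42.0})
salt, commitment = prover_commit(blob)
print(contract_verify(blob, tag, salt, commitment))   # True
```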

15 pages, 658 KiB  
Article
Online Learning of Finite and Infinite Gamma Mixture Models for COVID-19 Detection in Medical Images
by Hassen Sallay, Sami Bourouis and Nizar Bouguila
Computers 2021, 10(1), 6; https://doi.org/10.3390/computers10010006 - 27 Dec 2020
Cited by 19 | Viewed by 3964
Abstract
The accurate detection of abnormalities in medical images (like X-ray and CT scans) is a challenging problem due to images’ blurred boundary contours, different sizes, variable shapes, and uneven density. In this paper, we tackle this problem via a new effective online variational learning model for both mixtures of finite and infinite Gamma distributions. The proposed approach takes advantage of the Gamma distribution flexibility, the online learning scalability, and the variational inference efficiency. Three different batch and online learning methods based on robust texture-based feature extraction are proposed. Our work is evaluated and validated on several real challenging data sets for different kinds of pneumonia infection detection. The obtained results are very promising given that we approach the classification problem in an unsupervised manner. They also confirm the superiority of the Gamma mixture model compared to the Gaussian mixture model for medical image classification. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health)
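A small, hedged illustration of why Gamma models can suit positive, skewed image features better than Gaussian ones: fit both distributions to a synthetic right-skewed feature and compare log-likelihoods. The paper itself uses online variational learning of finite and infinite Gamma mixtures on texture features, not the simple maximum-likelihood fits shown here.

```python
# Fit Gamma and Gaussian distributions to a skewed 1-D feature and compare fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for a positive, right-skewed texture feature.
feature = rng.gamma(shape=2.0, scale=1.5, size=2000)

a, loc, scale = stats.gamma.fit(feature, floc=0)   # Gamma MLE (location fixed at 0)
mu, sigma = stats.norm.fit(feature)                # Gaussian MLE

ll_gamma = stats.gamma.logpdf(feature, a, loc=loc, scale=scale).sum()
ll_norm = stats.norm.logpdf(feature, mu, sigma).sum()
print(f"log-likelihood  Gamma: {ll_gamma:.1f}   Gaussian: {ll_norm:.1f}")
# The Gamma fit should score noticeably higher on this kind of data.
```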

25 pages, 10390 KiB  
Article
Elderly Care Based on Hand Gestures Using Kinect Sensor
by Munir Oudah, Ali Al-Naji and Javaan Chahl
Computers 2021, 10(1), 5; https://doi.org/10.3390/computers10010005 - 26 Dec 2020
Cited by 25 | Viewed by 5289
Abstract
Technological advances have allowed hand gestures to become an important research field, especially in applications such as health care and assistive applications for elderly people, providing a natural interaction with the assisting system through a camera by making specific gestures. In this study, we proposed three different scenarios using a Microsoft Kinect V2 depth sensor and then evaluated the effectiveness of the outcomes. The first scenario used joint tracking combined with a depth threshold to enhance hand segmentation and efficiently recognise the number of fingers extended. The second scenario utilised the metadata parameters provided by the Kinect V2 depth sensor, which provided 11 parameters related to the tracked body and gave information about three gestures for each hand. The third scenario used a simple convolutional neural network with joint tracking by depth metadata to recognise and classify five hand gesture categories. In this study, deaf-mute elderly people performed five different hand gestures, each related to a specific request, such as needing water, a meal, the toilet, help, or medicine. Next, the request was sent via the global system for mobile communication (GSM) as a text message to the care provider’s smartphone because the elderly subjects could not execute any activity independently. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health)
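The sketch below shows a small Keras CNN of the kind that could classify five gesture classes from depth-image crops; the input size, layer configuration, and training call are assumptions for illustration and do not reproduce the network in the paper, and the Kinect capture and GSM messaging steps are omitted.

```python
# Minimal CNN sketch for 5-class hand-gesture classification from depth crops.
import tensorflow as tf

NUM_CLASSES = 5            # water, meal, toilet, help, medicine
INPUT_SHAPE = (64, 64, 1)  # assumed depth-image crop size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=INPUT_SHAPE),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(depth_crops, labels, epochs=10, validation_split=0.2)  # with real data
```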

21 pages, 798 KiB  
Article
Trading-Off Machine Learning Algorithms towards Data-Driven Administrative-Socio-Economic Population Health Management
by Silvia Panicacci, Massimiliano Donati, Francesco Profili, Paolo Francesconi and Luca Fanucci
Computers 2021, 10(1), 4; https://doi.org/10.3390/computers10010004 - 25 Dec 2020
Cited by 9 | Viewed by 3681
Abstract
Together with population ageing, the number of people suffering from multimorbidity is increasing and is expected to exceed half of the population by 2035. This part of the population is composed of the highest-risk patients, who are, at the same time, the major users of healthcare systems. The early identification of this sub-population can greatly help to improve people’s quality of life and reduce healthcare costs. In this paper, we describe a population health management tool based on state-of-the-art intelligent algorithms, starting from administrative and socio-economic data, for the early identification of high-risk patients. The study refers to the population of the Local Health Unit of Central Tuscany in 2015, which amounts to 1,670,129 residents. After a trade-off analysis of machine learning models and input data, Random Forest applied to 1 year of historical data achieves the best results, outperforming state-of-the-art models. The most important variables for this model, in terms of mean minimal depth, accuracy decrease, and Gini decrease, turn out to be age and some groups of drugs, such as high-ceiling diuretics. Thanks to the low inference time and reduced memory usage, the resulting model allows for real-time risk prediction updates whenever new data become available, giving General Practitioners the possibility to adopt personalised medicine early. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health)
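The core modelling step can be pictured with the hedged scikit-learn sketch below: a Random Forest trained on one year of administrative and socio-economic features to flag high-risk patients. The feature names and synthetic data are invented, and the impurity-based importances printed at the end are a much cruder proxy than the minimal-depth and Gini-decrease analysis reported in the paper.

```python
# Random Forest risk stratification on synthetic administrative-style features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "n_hospitalizations_1y": rng.poisson(0.3, n),
    "n_drug_groups_1y": rng.poisson(2.0, n),
    "deprivation_index": rng.normal(0, 1, n),
})
# Synthetic high-risk label loosely driven by age and prior utilisation.
y = ((X["age"] > 75) & (X["n_hospitalizations_1y"] + X["n_drug_groups_1y"] > 3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```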

22 pages, 2229 KiB  
Article
NADAL: A Neighbor-Aware Deep Learning Approach for Inferring Interpersonal Trust Using Smartphone Data
by Ghassan F. Bati and Vivek K. Singh
Computers 2021, 10(1), 3; https://doi.org/10.3390/computers10010003 - 24 Dec 2020
Cited by 3 | Viewed by 3698
Abstract
Interpersonal trust mediates multiple socio-technical systems and has implications for personal and societal well-being. Consequently, it is crucial to devise novel machine learning methods to infer interpersonal trust automatically using mobile sensor-based behavioral data. Considering that social relationships are often affected by neighboring relationships within the same network, this work proposes using a novel neighbor-aware deep learning architecture (NADAL) to enhance the inference of interpersonal trust scores. Based on analysis of call, SMS, and Bluetooth interaction data from a one-year field study involving 130 participants, we report that: (1) adding information about neighboring relationships improves trust score prediction in both shallow and deep learning approaches; and (2) a custom-designed neighbor-aware deep learning architecture outperforms a baseline feature-concatenation-based deep learning approach. The results obtained for interpersonal trust prediction are promising and have multiple implications for trust-aware applications in the emerging social internet of things. Full article
(This article belongs to the Special Issue Smart Computing for Smart Cities (SC2))
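A hedged sketch of the general two-branch idea, with dyad interaction features and aggregated neighbour features processed separately before merging, is shown below in Keras; the layer sizes and inputs are placeholders, and this is not the NADAL architecture itself.

```python
# Two-branch Keras model: dyad features and aggregated neighbour features are
# processed separately, then merged to predict a trust score. Placeholder only.
import tensorflow as tf

pair_in = tf.keras.Input(shape=(12,), name="dyad_features")       # call/SMS/Bluetooth stats
neigh_in = tf.keras.Input(shape=(12,), name="neighbor_features")  # aggregated neighbour stats

pair_branch = tf.keras.layers.Dense(32, activation="relu")(pair_in)
neigh_branch = tf.keras.layers.Dense(32, activation="relu")(neigh_in)
merged = tf.keras.layers.Concatenate()([pair_branch, neigh_branch])
hidden = tf.keras.layers.Dense(32, activation="relu")(merged)
trust = tf.keras.layers.Dense(1, activation="sigmoid", name="trust_score")(hidden)

model = tf.keras.Model(inputs=[pair_in, neigh_in], outputs=trust)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```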

13 pages, 2000 KiB  
Article
Performance Optimization of MANET Networks through Routing Protocol Analysis
by Tri Kuntoro Priyambodo, Danur Wijayanto and Made Santo Gitakarma
Computers 2021, 10(1), 2; https://doi.org/10.3390/computers10010002 - 22 Dec 2020
Cited by 44 | Viewed by 5735
Abstract
A Mobile Ad Hoc Network (MANET) protocol requires proper settings to perform data transmission optimally. To overcome this problem, it is necessary to select the correct routing protocol and set the routing protocol’s parameter values appropriately. This study examined the effect of route request parameters, such as RREQ_RETRIES and MAX_RREQ_TIMEOUT, on the Ad Hoc On-demand Distance Vector (AODV) protocol, which was then compared with the performance of the default AODV and Optimized Link State Routing (OLSR) protocols. The performance metrics used for measuring performance were Packet Delivery Ratio (PDR), throughput, delay, packet loss, energy consumption, and routing overhead. The results show that the OLSR protocol has a smaller delay than the AODV protocol, while in other measurements, the AODV protocol is better than OLSR. By reducing the (RREQ_RETRIES, MAX_RREQ_TIMEOUT) combination in AODV routing to (2, 10 s) or (3, 5 s), the protocol’s performance can be improved. The two combinations result in an average increase in throughput performance of 3.09%, a decrease in delay of 17.7%, a decrease in packet loss of 27.15%, and an increase in PDR of 4.8%. For variations in the speed of movement of nodes, 20 m/s has the best performance, while 5 m/s has the worst performance. Full article
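The sketch below shows, with an invented send/receive trace, how the reported metrics (PDR, packet loss, average delay, throughput) can be computed from simulator output; in the paper these values come from AODV and OLSR simulation runs.

```python
# Compute PDR, packet loss, average delay, and throughput from a toy trace.
sent = {                      # packet_id -> (send_time_s, size_bytes)
    1: (0.00, 512), 2: (0.10, 512), 3: (0.20, 512), 4: (0.30, 512),
}
received = {                  # packet_id -> receive_time_s
    1: 0.04, 2: 0.16, 4: 0.38,          # packet 3 was lost
}

delivered = [pid for pid in sent if pid in received]
pdr = len(delivered) / len(sent)
packet_loss = 1 - pdr
delays = [received[pid] - sent[pid][0] for pid in delivered]
avg_delay = sum(delays) / len(delays)
duration = max(received.values()) - min(t for t, _ in sent.values())
throughput_bps = sum(sent[pid][1] * 8 for pid in delivered) / duration

print(f"PDR={pdr:.0%}  loss={packet_loss:.0%}  avg delay={avg_delay*1000:.0f} ms  "
      f"throughput={throughput_bps/1000:.1f} kbit/s")
```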

14 pages, 4060 KiB  
Article
Design and Evaluation of Anthropomorphic Robotic Hand for Object Grasping and Shape Recognition
by Rahul Raj Devaraja, Rytis Maskeliūnas and Robertas Damaševičius
Computers 2021, 10(1), 1; https://doi.org/10.3390/computers10010001 - 22 Dec 2020
Cited by 20 | Viewed by 4933
Abstract
We developed an anthropomorphic multi-finger artificial hand for a fine-scale object grasping task, sensing the grasped object’s shape. The robotic hand was created using a 3D printer and has a servo bed for stand-alone finger movement. The data containing the robotic fingers’ angular positions are acquired using a Leap Motion device, and a hybrid Support Vector Machine (SVM) classifier is used for object shape identification. We trained the designed robotic hand on a few monotonous convex-shaped items similar to everyday objects (ball, cylinder, and rectangular box) using supervised learning techniques. We achieved a mean object shape recognition accuracy of 94.4%. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2020)
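As a hedged illustration of the classification step, the sketch below trains a scikit-learn SVM on synthetic finger joint-angle vectors to predict three shape classes; the data generation and kernel settings are assumptions, whereas in the paper the angles come from a Leap Motion device and a hybrid SVM is used.

```python
# SVM over synthetic finger-angle vectors predicting a grasped object's shape.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
SHAPES = ["ball", "cylinder", "box"]
centers = {"ball": 60, "cylinder": 40, "box": 25}   # made-up mean flexion per shape (degrees)

X = np.vstack([rng.normal(centers[s], 8, size=(100, 5)) for s in SHAPES])  # 5 finger angles
y = np.repeat(SHAPES, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("shape recognition accuracy:", round(clf.score(X_te, y_te), 3))
```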
