Algorithms, Volume 13, Issue 10 (October 2020) – 26 articles

Cover Story: A tool that helps maintain social distancing and provides emergency plans for crowds in smart cities is proposed. An IoT wireless sensor network, whose sensor units contain several sensors, a microcontroller, and a transceiver, is combined with machine-learning and shortest-path-finding algorithms to detect crowds from camera and sensor data, trigger overcrowding, environmental, and structural alarms, close overcrowded areas, and provide emergency plans through a specifically designed web dashboard, messages, and NFC technology.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, and PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 21178 KiB  
Article
Model-Based Real-Time Motion Tracking Using Dynamical Inverse Kinematics
by Lorenzo Rapetti, Yeshasvi Tirupachuri, Kourosh Darvish, Stefano Dafarra, Gabriele Nava, Claudia Latella and Daniele Pucci
Algorithms 2020, 13(10), 266; https://doi.org/10.3390/a13100266 - 20 Oct 2020
Cited by 20 | Viewed by 5303
Abstract
This paper contributes towards the development of motion tracking algorithms for time-critical applications, proposing an infrastructure for dynamically solving the inverse kinematics of highly articulated systems such as humans. The method presented is model-based: it makes use of velocity correction and differential kinematics integration in order to compute the system configuration. The convergence of the model towards the measurements is proved using Lyapunov analysis. An experimental scenario, where the motion of a human subject is tracked in static and dynamic configurations, is used to validate the performance of the inverse kinematics method on human and humanoid models. Moreover, the method is tested on a human-humanoid retargeting scenario, verifying the usability of the computed solution in real-time robotics applications. Our approach is evaluated both in terms of accuracy and computational load, and compared to iterative optimization algorithms. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
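The velocity-correction scheme the abstract describes can be sketched numerically. The two-link planar arm, gains, and target below are illustrative assumptions, not the human model used in the paper: joint velocities come from the Jacobian inverse applied to K·(x* − x), then are integrated in time.

```python
import math

L1, L2 = 1.0, 1.0   # link lengths of the illustrative two-link planar arm

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector position."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian(q1, q2):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def ik_step(q, target, gain=5.0, dt=0.01):
    """One Euler step of differential IK: qdot = J^-1 * K * (x* - x)."""
    x, y = fk(*q)
    ex, ey = target[0] - x, target[1] - y          # velocity-correction term
    J = jacobian(*q)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]    # = L1 * L2 * sin(q2)
    if abs(det) < 1e-9:                            # crude singularity guard
        det = math.copysign(1e-9, det if det else 1.0)
    dq1 = ( J[1][1] * gain * ex - J[0][1] * gain * ey) / det
    dq2 = (-J[1][0] * gain * ex + J[0][0] * gain * ey) / det
    return (q[0] + dt * dq1, q[1] + dt * dq2)

q = (0.3, 0.8)                # initial joint configuration
target = (1.2, 0.9)           # measured end-effector position to track
for _ in range(2000):
    q = ik_step(q, target)
x, y = fk(*q)                 # end effector converges toward the target
```

The closed loop makes the tracking error decay exponentially (ė = −K e), which is the kind of convergence the paper establishes with Lyapunov analysis for its full-body model.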

18 pages, 6404 KiB  
Article
Application of the Approximate Bayesian Computation Algorithm to Gamma-Ray Spectroscopy
by Tom Burr, Andrea Favalli, Marcie Lombardi and Jacob Stinnett
Algorithms 2020, 13(10), 265; https://doi.org/10.3390/a13100265 - 19 Oct 2020
Cited by 7 | Viewed by 3512
Abstract
Radioisotope identification (RIID) algorithms for gamma-ray spectroscopy aim to infer what isotopes are present and in what amounts in test items. RIID algorithms either use all energy channels in the analysis region or only energy channels in and near identified peaks. Because many RIID algorithms rely on locating peaks and estimating each peak’s net area, peak location and peak area estimation algorithms continue to be developed for gamma-ray spectroscopy. This paper shows that approximate Bayesian computation (ABC) can be effective for peak location and area estimation. Algorithms to locate peaks can be applied to raw or smoothed data, and among several smoothing options, the iterative bias reduction algorithm (IBR) is recommended; the use of IBR with ABC is shown to potentially reduce uncertainty in peak location estimation. Extracted peak locations and areas can then be used as summary statistics in a new ABC-based RIID. ABC allows for easy experimentation with candidate summary statistics such as goodness-of-fit scores and peak areas that are extracted from relatively high dimensional gamma spectra with photopeaks (1024 or more energy channels) consisting of count rates versus energy for a large number of gamma energies. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)
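Rejection-style ABC of the kind the abstract applies to peak-location estimation can be sketched as follows. The simulated spectrum, uniform prior, summary statistic, and tolerance are illustrative assumptions, not the paper's setup: candidate peak centres are kept when their simulated summary statistic lands close to the observed one.

```python
import random

def simulate(centre, rng, sigma=3.0, total_counts=500, bins=64):
    """Simulate a toy photopeak: counts of Gaussian-distributed energies."""
    counts = [0] * bins
    for _ in range(total_counts):
        b = int(rng.gauss(centre, sigma))
        if 0 <= b < bins:
            counts[b] += 1
    return counts

def summary(counts):
    """Summary statistic: mean energy channel of the spectrum."""
    total = sum(counts)
    return sum(i * c for i, c in enumerate(counts)) / total

def abc_peak_location(observed, n_draws=2000, tol=0.3, seed=1):
    """ABC rejection: draw centres from the prior, keep near-matching ones."""
    rng = random.Random(seed)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        centre = rng.uniform(10.0, 50.0)       # uniform prior on peak centre
        if abs(summary(simulate(centre, rng)) - s_obs) < tol:
            accepted.append(centre)
    return sum(accepted) / len(accepted)       # posterior mean estimate

rng = random.Random(0)
obs = simulate(30.0, rng)                      # "observed" spectrum, peak at 30
est = abc_peak_location(obs)
```

Swapping in other summary statistics (goodness-of-fit scores, peak areas) only changes `summary`, which is the experimentation the abstract highlights as a strength of ABC.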

14 pages, 423 KiB  
Article
The Auto-Diagnosis of Granulation of Information Retrieval on the Web
by Anna Bryniarska
Algorithms 2020, 13(10), 264; https://doi.org/10.3390/a13100264 - 16 Oct 2020
Cited by 2 | Viewed by 2435
Abstract
In this paper, a postulate on the relationship between the memory structure of the brain’s neural network and the representation of information granules in the semantic web is presented. In order to show this connection, abstract operations of inducing information granules are proposed for use in the logical operations systems, hereinafter referred to as analysis, reduction, deduction and synthesis. Firstly, the searched information is compared with the information represented by the thesaurus, which is equivalent to the auto-diagnosis of this system. Secondly, triangular norm systems (information perception systems) are built for fuzzy or vague information. These are fuzzy sets. The introduced logical operations and their logical values, denoted as problematic, hypothetical, validity and decidability, are interpreted in these fuzzy sets. In this way, the granularity of the information retrieval on the Web is determined according to the type of reasoning. Full article
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)

12 pages, 3913 KiB  
Article
Lung Lobe Segmentation Based on Lung Fissure Surface Classification Using a Point Cloud Region Growing Approach
by Xin Chen, Hong Zhao and Ping Zhou
Algorithms 2020, 13(10), 263; https://doi.org/10.3390/a13100263 - 15 Oct 2020
Cited by 8 | Viewed by 8278
Abstract
In anatomy, the lung can be divided by lung fissures into several pulmonary lobe units with specific functions. Identifying the lung lobes and the distribution of various diseases among different lung lobes from CT images is important for disease diagnosis and tracking after recovery. In order to solve the problems of low tubular structure segmentation accuracy and long algorithm time in segmenting lung lobes based on lung anatomical structure information, we propose a segmentation algorithm based on lung fissure surface classification using a point cloud region growing approach. We cluster the pulmonary fissures, transformed into point cloud data, according to the differences in the pulmonary fissure surface normal vector and curvature estimated by principal component analysis. Then, a multistage spline surface fitting method is used to fill and expand the lung fissure surface to realize the lung lobe segmentation. The proposed approach was qualitatively and quantitatively evaluated on a public dataset from Lobe and Lung Analysis 2011 (LOLA11), and obtained an overall score of 0.84. Although our approach achieved a slightly lower overall score compared to the deep learning based methods (LobeNet_V2 and V-net), the inter-lobe boundaries from our approach were more accurate for the CT images with visible lung fissures. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)

17 pages, 374 KiB  
Article
CYK Parsing over Distributed Representations
by Fabio Massimo Zanzotto, Giorgio Satta and Giordano Cristini
Algorithms 2020, 13(10), 262; https://doi.org/10.3390/a13100262 - 15 Oct 2020
Cited by 2 | Viewed by 3950
Abstract
Parsing is a key task in computer science, with applications in compilers, natural language processing, syntactic pattern matching, and formal language theory. With the recent development of deep learning techniques, several artificial intelligence applications, especially in natural language processing, have combined traditional parsing methods with neural networks to drive the search in the parsing space, resulting in hybrid architectures using both symbolic and distributed representations. In this article, we show that existing symbolic parsing algorithms for context-free languages can cross the border and be entirely formulated over distributed representations. To this end, we introduce a version of the traditional Cocke–Younger–Kasami (CYK) algorithm, called distributed (D)-CYK, which is entirely defined over distributed representations. D-CYK uses matrix multiplication on real number matrices of a size independent of the length of the input string. These operations are compatible with recurrent neural networks. Preliminary experiments show that D-CYK approximates the original CYK algorithm. By showing that CYK can be entirely performed on distributed representations, we open the way to the definition of recurrent layer neural networks that can process general context-free languages. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)
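For reference, the symbolic table-filling procedure that D-CYK reformulates over distributed representations is the classical CYK recognizer. The toy grammar and sentence below are illustrative, not from the paper:

```python
# Classical CYK recognition for a grammar in Chomsky normal form.
# table[i][j] holds the nonterminals that derive words[i:j+1].

def cyk_recognize(words, unary, binary, start="S"):
    """unary: dict terminal -> set of nonterminals (rules A -> w)
       binary: dict (B, C) -> set of nonterminals (rules A -> B C)"""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(unary.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                    # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

unary = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
accepts = cyk_recognize(["she", "eats", "fish"], unary, binary)  # True
```

D-CYK's contribution is to replace the set operations in this table with matrix multiplications over real-valued matrices, so the same recognition can run inside a recurrent network.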

23 pages, 414 KiB  
Article
On Multidimensional Congestion Games
by Vittorio Bilò, Michele Flammini, Vasco Gallotti and Cosimo Vinci
Algorithms 2020, 13(10), 261; https://doi.org/10.3390/a13100261 - 15 Oct 2020
Cited by 3 | Viewed by 3041
Abstract
We introduce multidimensional congestion games, that is, congestion games whose set of players is partitioned into d + 1 clusters C0, C1, …, Cd. Players in C0 have full information about all the other participants in the game, while players in Ci, for any 1 ≤ i ≤ d, have full information only about the members of C0 ∪ Ci and are unaware of all the others. This model has at least two interesting applications: (i) it is a special case of graphical congestion games induced by an undirected social knowledge graph with independence number equal to d, and (ii) it represents scenarios in which players have a type and the level of competition they experience on a resource depends on their type and on the types of the other players using it. We focus on the case in which the cost function associated with each resource is affine and bound the price of anarchy and stability as a function of d with respect to two meaningful social cost functions and for both weighted and unweighted players. We also provide refined bounds for the special case d = 2 in the presence of unweighted players. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)

12 pages, 1231 KiB  
Article
A Solving Algorithm for Nonlinear Bilevel Programing Problems Based on Human Evolutionary Model
by Linmao Ma and Guangmin Wang
Algorithms 2020, 13(10), 260; https://doi.org/10.3390/a13100260 - 13 Oct 2020
Cited by 11 | Viewed by 2480
Abstract
An algorithm based on the human evolutionary model is proposed for solving nonlinear bilevel programing problems. In view of the hierarchical structure of this problem, the algorithm is designed to feed the optimal solution of the lower-level problem back to the upper level. Based on the quality of individuals at each iteration, the proposed algorithm can independently change the population size to achieve a balance between global and local searching ability during the progress of evolution, and it can perform an exhaustive search of the whole landscape by creating an individual using the tabu search method. Finally, we test four typical bilevel programing problems with the proposed algorithm to verify its feasibility. The experimental results indicate that the proposed algorithm can not only solve bilevel programing problems but also obtain the global optimal solution. Full article

20 pages, 3707 KiB  
Article
An EEG Feature Extraction Method Based on Sparse Dictionary Self-Organizing Map for Event-Related Potential Recognition
by Shang Feng, Haifeng Li, Lin Ma and Zhongliang Xu
Algorithms 2020, 13(10), 259; https://doi.org/10.3390/a13100259 - 13 Oct 2020
Cited by 3 | Viewed by 3032
Abstract
In the application of the brain-computer interface, feature extraction is an important part of Electroencephalography (EEG) signal classification. Using sparse modeling to extract EEG signal features is a common approach. However, the features extracted by common sparse decomposition methods are only of analytical meaning, and cannot relate to actual EEG waveforms, especially event-related potential waveforms. In this article, we propose a feature extraction method based on a self-organizing map of sparse dictionary atoms, which can aggregate event-related potential waveforms scattered inside an over-complete sparse dictionary into the code book of neurons in the self-organizing map network. Then, the cosine similarity between the EEG signal sample and the code vector is used as the classification feature. Compared with traditional feature extraction methods based on sparse decomposition, the classification features obtained by this method have more intuitive electrophysiological meaning. The experiment conducted on a public auditory event-related potential (ERP) brain-computer interface dataset showed that, after the self-organized mapping of dictionary atoms, the neurons’ code vectors in the self-organized mapping network were remarkably similar to the ERP waveform obtained after superposition and averaging. The feature extracted by the proposed method used a smaller amount of data to obtain classification accuracy comparable to the traditional method. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Biomedical Signal Processing)
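The classification feature the abstract describes, the cosine similarity between an EEG sample and a neuron's code vector, can be sketched minimally. The vectors here are illustrative placeholders, not ERP data:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

sample = [0.2, 1.4, -0.3, 0.8]        # one EEG epoch, flattened (placeholder)
code_vector = [0.1, 1.2, -0.2, 0.9]   # a SOM neuron's code book vector
feature = cosine_similarity(sample, code_vector)
```

Because the code vectors resemble averaged ERP waveforms, this scalar directly measures how ERP-like a single trial is, which is what gives the feature its electrophysiological interpretability.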

30 pages, 14630 KiB  
Article
An Efficient Data Retrieval Parallel Reeb Graph Algorithm
by Mustafa Hajij and Paul Rosen
Algorithms 2020, 13(10), 258; https://doi.org/10.3390/a13100258 - 12 Oct 2020
Cited by 6 | Viewed by 3894
Abstract
The Reeb graph of a scalar function that is defined on a domain gives a topologically meaningful summary of that domain. Reeb graphs have been shown in the past decade to be of great importance in geometric processing, image processing, computer graphics, and computational topology. The demand for analyzing large data sets has increased in the last decade. Hence, the parallelization of topological computations needs to be more fully considered. We propose a parallel augmented Reeb graph algorithm on triangulated meshes with and without a boundary. That is, in addition to our parallel algorithm for computing a Reeb graph, we describe a method for extracting the original manifold data from the Reeb graph structure. We demonstrate the running time of our algorithm on standard datasets. As an application, we show how our algorithm can be utilized in mesh segmentation algorithms. Full article
(This article belongs to the Special Issue Topological Data Analysis)

23 pages, 3985 KiB  
Article
Blind Quality Evaluation for Screen Content Images Based on Regionalized Structural Features
by Wu Dong, Hongxia Bie, Likun Lu and Yeli Li
Algorithms 2020, 13(10), 257; https://doi.org/10.3390/a13100257 - 11 Oct 2020
Viewed by 2461
Abstract
Currently, screen content images (SCIs) are widely used in our modern society. However, since SCIs have distinctly different properties compared to natural images, traditional quality assessment methods of natural images cannot precisely evaluate the quality of SCIs. Thus, we propose a blind quality evaluation method for SCIs based on regionalized structural features that are closely relevant to the intrinsic quality of SCIs. Firstly, the features of textual and pictorial regions of SCIs are extracted separately. For textual regions, since they contain noticeable structural information, we propose improved histograms of oriented gradients extracted from multi-order derivatives as structural features. For pictorial regions, since human vision is sensitive to texture information and luminance variation, we adopt texture as the structural feature; meanwhile, luminance is used as the auxiliary feature. The local derivative pattern and the shearlet local binary pattern are used to extract texture in the spatial and shearlet domains, respectively. Secondly, to derive the quality of textual and pictorial regions, two mapping functions are respectively trained from their features to subjective values. Finally, an activity weighting strategy is proposed to combine the quality of textual and pictorial regions. Experimental results show that the proposed method achieves better performance than the state-of-the-art methods. Full article

31 pages, 2431 KiB  
Article
Solution Merging in Matheuristics for Resource Constrained Job Scheduling
by Dhananjay Thiruvady, Christian Blum and Andreas T. Ernst
Algorithms 2020, 13(10), 256; https://doi.org/10.3390/a13100256 - 9 Oct 2020
Cited by 15 | Viewed by 3753
Abstract
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This new class of hybrid method combines elements of both mathematical programming for intensification and metaheuristic searches for diversification. A recent approach in this direction has been to build a neighbourhood for integer programs by merging information from several heuristic solutions, namely construct, solve, merge and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach—merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options: (a) a constructive heuristic and (b) ant colony optimisation (ACO); that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which leads to improving the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

18 pages, 1492 KiB  
Article
A Weighted Ensemble Learning Algorithm Based on Diversity Using a Novel Particle Swarm Optimization Approach
by Gui-Rong You, Yeou-Ren Shiue, Wei-Chang Yeh, Xi-Li Chen and Chih-Ming Chen
Algorithms 2020, 13(10), 255; https://doi.org/10.3390/a13100255 - 9 Oct 2020
Cited by 8 | Viewed by 3395
Abstract
In ensemble learning, accuracy and diversity are the main factors affecting its performance. In previous studies, diversity was regarded only as a regularization term, which does not sufficiently indicate that diversity should implicitly be treated as an accuracy factor. In this study, a two-stage weighted ensemble learning method using the particle swarm optimization (PSO) algorithm is proposed to balance the diversity and accuracy in ensemble learning. The first stage is to enhance the diversity of the individual learner, which can be achieved by manipulating the datasets and the input features via a mixed-binary PSO algorithm to search for a set of individual learners with appropriate diversity. The purpose of the second stage is to improve the accuracy of the ensemble classifier using a weighted ensemble method that considers both diversity and accuracy. The set of weighted classifier ensembles is obtained by optimization via the PSO algorithm. The experimental results on 30 UCI datasets demonstrate that the proposed algorithm outperforms other state-of-the-art baselines. Full article
(This article belongs to the Special Issue Classification and Regression in Machine Learning)
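A generic PSO loop of the kind used in both stages can be sketched on a toy objective. The hyper-parameters and the quadratic objective below are illustrative assumptions, not the mixed-binary variant or classifier-weight search from the paper:

```python
import random

def pso(objective, dim, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Standard global-best PSO minimising `objective` over R^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]                   # personal bests
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f

# Toy objective with minimum at (1, 1, 1); stands in for an ensemble loss.
best, fbest = pso(lambda x: sum((xi - 1.0) ** 2 for xi in x), dim=3)
```

In the paper's setting the position vector would encode classifier weights (second stage) or binary data/feature selections (first stage), with the objective balancing accuracy and diversity.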

24 pages, 8360 KiB  
Article
An IoT System for Social Distancing and Emergency Management in Smart Cities Using Multi-Sensor Data
by Rosario Fedele and Massimo Merenda
Algorithms 2020, 13(10), 254; https://doi.org/10.3390/a13100254 - 7 Oct 2020
Cited by 36 | Viewed by 5879
Abstract
Smart cities need technologies that can be practically applied to raise the quality of life and of the environment. Among all the possible solutions, Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) have the potential to satisfy multiple needs, such as offering real-time plans for emergency management (due to accidental events or inadequate asset maintenance) and managing crowds and their spatiotemporal distribution in highly populated areas (e.g., cities or parks) to face biological risks (e.g., from a virus) by using strategies such as social distancing and movement restrictions. Consequently, the objective of this study is to present an IoT system, based on an IoT-WSN and on algorithms (Neural Network, NN, and Shortest Path Finding) that are able to recognize alarms, available exits, assembly points, safest and shortest paths, and overcrowding from real-time data gathered by sensors and cameras exploiting computer vision. Subsequently, this information is sent to mobile devices using a web platform and the Near Field Communication (NFC) technology. The results refer to two different case studies (i.e., emergency and monitoring) and show that the system is able to provide customized strategies and to face different situations, and that this also applies in the case of a connectivity shutdown. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)
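The shortest-path component of such an emergency planner can be sketched with Dijkstra's algorithm. The corridor graph, node names, and weights below are illustrative assumptions, not the paper's network:

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra over graph: dict node -> list of (neighbour, weight >= 0)."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target               # walk predecessors back to source
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1], dist[target]

graph = {
    "lobby":  [("hall", 2), ("stairs", 5)],
    "hall":   [("stairs", 1), ("exit", 6)],
    "stairs": [("exit", 2)],
    "exit":   [],
}
route, metres = shortest_path(graph, "lobby", "exit")
```

In the system described above, edge weights would be recomputed as sensors report closed or overcrowded areas, so the safest path and the shortest path can differ over time.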

12 pages, 247 KiB  
Article
The Use of an Exact Algorithm within a Tabu Search Maximum Clique Algorithm
by Derek H. Smith, Roberto Montemanni and Stephanie Perkins
Algorithms 2020, 13(10), 253; https://doi.org/10.3390/a13100253 - 4 Oct 2020
Cited by 8 | Viewed by 3015
Abstract
Let G=(V,E) be an undirected graph with vertex set V and edge set E. A clique C of G is a subset of the vertices of V with every pair of vertices of C adjacent. A maximum clique is a clique with the maximum number of vertices. A tabu search algorithm for the maximum clique problem that uses an exact algorithm on subproblems is presented. The exact algorithm uses a graph coloring upper bound for pruning, and the best such algorithm to use in this context is considered. The final tabu search algorithm successfully finds the optimal or best known solution for all standard benchmarks considered. It is compared with a state-of-the-art algorithm that does not use exact search. It is slower to find the known optimal solution for most instances but is faster for five instances and finds a larger clique for two instances. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)
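A branch-and-bound maximum-clique search that prunes with a greedy graph-colouring upper bound, as the exact subproblem solver described above does, can be sketched as follows. The adjacency encoding and example graph are illustrative, not from the paper:

```python
def colour_order(vertices, adj):
    """Greedy colouring; returns (vertex, colour) pairs where the colour
    upper-bounds the clique size extendable through that vertex."""
    classes = []
    for v in vertices:
        for cls in classes:
            if all(u not in adj[v] for u in cls):   # no conflict in class
                cls.append(v)
                break
        else:
            classes.append([v])
    return [(v, k) for k, cls in enumerate(classes, start=1) for v in cls]

def max_clique(adj):
    """adj: dict vertex -> set of adjacent vertices."""
    best = []
    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = list(clique)
        for v, bound in reversed(colour_order(candidates, adj)):
            if len(clique) + bound <= len(best):
                return              # colouring bound: no improvement possible
            expand(clique + [v], [u for u in candidates if u in adj[v]])
            candidates = [u for u in candidates if u != v]
    expand([], sorted(adj))
    return best

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
clique = max_clique(adj)   # {0, 1, 2} is the unique maximum clique here
```

In the hybrid algorithm above, an exact routine of this kind is invoked only on the subproblems the tabu search carves out, keeping its exponential worst case confined to small instances.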
19 pages, 479 KiB  
Review
Multi-objective Beam-ACO for Maximising Reliability and Minimising Communication Overhead in the Component Deployment Problem
by Dhananjay Thiruvady, Asef Nazari and Aldeida Aleti
Algorithms 2020, 13(10), 252; https://doi.org/10.3390/a13100252 - 3 Oct 2020
Cited by 3 | Viewed by 2537
Abstract
Automated deployment of software components into hardware resources is a highly constrained optimisation problem. Hardware memory limits which components can be deployed into the particular hardware unit. Interacting software components have to be deployed either into the same hardware unit, or connected units. Safety concerns could restrict the deployment of two software components into the same unit. All these constraints hinder the search for high quality solutions that optimise quality attributes, such as reliability and communication overhead. When the optimisation problem is multi-objective, as it is the case when considering reliability and communication overhead, existing methods often fail to produce feasible results. Moreover, this problem can be modelled by bipartite graphs with complicating constraints, but known methods do not scale well under the additional restrictions. In this paper, we develop a novel multi-objective Beam search and ant colony optimisation (Beam-ACO) hybrid method, which uses problem specific bounds derived from communication, co-localisation and memory constraints, to guide the search towards feasibility. We conduct an experimental evaluation on a range of component deployment problem instances with varying levels of difficulty. We find that Beam-ACO guided by the co-localisation constraint is most effective in finding high quality feasible solutions. Full article
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

20 pages, 378 KiB  
Article
Multi-Winner Election Control via Social Influence: Hardness and Algorithms for Restricted Cases
by Mohammad Abouei Mehrizi and Gianlorenzo D'Angelo
Algorithms 2020, 13(10), 251; https://doi.org/10.3390/a13100251 - 2 Oct 2020
Cited by 5 | Viewed by 2862
Abstract
Nowadays, many political campaigns are using social influence in order to convince voters to support/oppose a specific candidate/party. In the election control via social influence problem, an attacker tries to find a limited set of influencers to start disseminating a political message in a social network of voters. A voter will change his opinion when he receives and accepts the message. In the constructive case, the goal is to maximize the number of votes/winners of a target candidate/party, while in the destructive case, the attacker tries to minimize them. Recent works considered the problem in different models and presented some hardness and approximation results. In this work, we consider multi-winner election control through social influence on different graph structures and diffusion models, and our goal is to maximize/minimize the number of winners in our target party. We show that the problem is hard to approximate when voters’ connections form a graph and the diffusion model is the linear threshold model. We also prove the same result considering an arborescence under the independent cascade model. Moreover, we present a dynamic programming algorithm for the cases in which the voting system is a variation of straight-party voting and voters form a tree. Full article
(This article belongs to the Special Issue Graph Algorithms and Applications)
18 pages, 1389 KiB  
Article
A Clustering Routing Algorithm Based on Improved Ant Colony Optimization Algorithms for Underwater Wireless Sensor Networks
by Xingxing Xiao and Haining Huang
Algorithms 2020, 13(10), 250; https://doi.org/10.3390/a13100250 - 1 Oct 2020
Cited by 42 | Viewed by 4408
Abstract
Because of the complicated underwater environment, the efficiency of data transmission from underwater sensor nodes to a sink node (SN) is faced with great challenges. Aiming at the problem of energy consumption in underwater wireless sensor networks (UWSNs), this paper proposes an energy-efficient clustering routing algorithm based on an improved ant colony optimization (ACO) algorithm. In clustering routing algorithms, the network is divided into many clusters, and each cluster consists of one cluster head node (CHN) and several cluster member nodes (CMNs). This paper optimizes the CHN selection based on the residual energy of nodes and the distance factor. The selected CHN gathers data sent by the CMNs and transmits them to the sink node by multiple hops. Optimal multi-hop paths from the CHNs to the SN are found by an improved ACO algorithm. This paper presents the ACO algorithm through the improvement of the heuristic information, the evaporation parameter for the pheromone update mechanism, and the ant searching scope. Simulation results indicate the high effectiveness and efficiency of the proposed algorithm in reducing the energy consumption, prolonging the network lifetime, and decreasing the packet loss ratio. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
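The abstract above does not give the authors' exact formulas, but the two generic ACO ingredients it names — a probabilistic next-hop choice driven by heuristic information, and a pheromone update governed by an evaporation parameter — can be sketched as follows (all function and parameter names are illustrative, not taken from the paper):

```python
import random

def select_next_hop(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Choose the next relay node with probability proportional to
    pheromone^alpha * heuristic^beta (the standard ACO transition rule)."""
    weights = [
        (pheromone[(current, c)] ** alpha) * (heuristic[(current, c)] ** beta)
        for c in candidates
    ]
    return random.choices(candidates, weights=weights, k=1)[0]

def update_pheromone(pheromone, paths, rho=0.1, q=1.0):
    """Evaporate every trail by a factor (1 - rho), then deposit q / path_cost
    on each edge used by an ant; rho is the evaporation parameter that
    trades off exploitation of known paths against re-exploration."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)
    for path, cost in paths:
        for edge in zip(path, path[1:]):
            pheromone[edge] += q / cost
    return pheromone
```

In generic ACO, a smaller `rho` preserves trails longer (more exploitation), while a larger one forces re-exploration; the paper's specific refinements of the heuristic term and search scope are not reproduced here.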
36 pages, 9880 KiB  
Article
COVID-19 Outbreak Prediction with Machine Learning
by Sina F. Ardabili, Amir Mosavi, Pedram Ghamisi, Filip Ferdinand, Annamaria R. Varkonyi-Koczy, Uwe Reuter, Timon Rabczuk and Peter M. Atkinson
Algorithms 2020, 13(10), 249; https://doi.org/10.3390/a13100249 - 1 Oct 2020
Cited by 291 | Viewed by 25609
Abstract
Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and these models are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models for predicting the COVID-19 outbreak as an alternative to susceptible–infected–recovered (SIR) and susceptible–exposed–infectious–removed (SEIR) models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior across nations, this study suggests machine learning as an effective tool for modeling the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research, and further suggests that a genuine novelty in outbreak prediction can be realized by integrating machine learning and SEIR models. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
25 pages, 1478 KiB  
Article
Multi-Fidelity Gradient-Based Strategy for Robust Optimization in Computational Fluid Dynamics
by Aldo Serafino, Benoit Obert and Paola Cinnella
Algorithms 2020, 13(10), 248; https://doi.org/10.3390/a13100248 - 30 Sep 2020
Cited by 5 | Viewed by 3463
Abstract
Efficient Robust Design Optimization (RDO) strategies coupling a parsimonious uncertainty quantification (UQ) method with a surrogate-based multi-objective genetic algorithm (SMOGA) are investigated for a test problem in computational fluid dynamics (CFD), namely the inverse robust design of an expansion nozzle. The low-order statistics (mean and variance) of the stochastic cost function are computed through either a gradient-enhanced kriging (GEK) surrogate or through the less expensive, lower fidelity, first-order method of moments (MoM). Both the continuous (non-intrusive) and discrete (intrusive) adjoint methods are evaluated for computing the gradients required for GEK and MoM. In all cases, the results are assessed against a reference kriging UQ surrogate not using gradient information. Subsequently, the GEK and MoM UQ solvers are fused together to build a multi-fidelity surrogate with adaptive infill enrichment for the SMOGA optimizer. The resulting hybrid multi-fidelity SMOGA RDO strategy ensures a good tradeoff between cost and accuracy, thus representing an efficient approach for complex RDO problems. Full article
20 pages, 1658 KiB  
Article
A Novel Global Key-Value Storage System Based on Kinetic Drives
by Xiang Cao and Cheng Li
Algorithms 2020, 13(10), 247; https://doi.org/10.3390/a13100247 - 29 Sep 2020
Viewed by 2867
Abstract
NoSQL databases are flexible and efficient for many data-intensive applications, and the key-value store is one of them. In recent years, a new Ethernet-accessed disk drive called the "Kinetic Drive" was developed by Seagate. This new Kinetic Drive is specially designed for key-value stores: users can directly access data on a Kinetic Drive via its IP address, without going through a storage server/layer. With this innovation, the storage stack and architectures of key-value store systems have changed greatly. In this paper, we propose a novel global key-value store system based on Kinetic Drives. We explore data management issues including data access, key indexing, data backup, and recovery, and we offer scalable solutions with small storage overhead. The performance evaluation shows that our location-aware design and backup approach can reduce the average distance traveled by data access requests. Full article
(This article belongs to the Special Issue Algorithms for NoSQL Databases)
20 pages, 2065 KiB  
Article
Recognition of Cross-Language Acoustic Emotional Valence Using Stacked Ensemble Learning
by Kudakwashe Zvarevashe and Oludayo O. Olugbara
Algorithms 2020, 13(10), 246; https://doi.org/10.3390/a13100246 - 27 Sep 2020
Cited by 11 | Viewed by 3332
Abstract
Most studies on speech emotion recognition have used single-language corpora, but little research has been done on cross-language valence speech emotion recognition. Research has shown that models developed for single-language speech recognition systems perform poorly when used in different environments. Cross-language speech recognition is a compelling alternative, but it is highly challenging because the corpora used will have been recorded in different environments and under varying conditions. The differences in the quality of recording devices, elicitation techniques, languages, and accents of speakers make the recognition task even more arduous. In this paper, we propose a stacked ensemble learning algorithm to recognize valence emotion in a cross-language speech environment. The proposed ensemble algorithm was developed from random decision forest, AdaBoost, logistic regression, and gradient boosting machine, and is therefore called RALOG. In addition, we propose feature scaling using random forest recursive feature elimination and a feature selection algorithm to boost the performance of RALOG. The algorithm was evaluated against four widely used ensemble algorithms to appraise its performance. Five benchmark corpora were amalgamated into a cross-language corpus to validate the performance of RALOG trained with the selected acoustic features. The comparative analysis shows that RALOG performed better than the other ensemble learning algorithms investigated in this study. Full article
16 pages, 3700 KiB  
Article
Dynamic Virtual Network Slicing and Orchestration for Selective MEC Services over Wide-Area SDN
by Dongkyun Kim and Yong-Hwan Kim
Algorithms 2020, 13(10), 245; https://doi.org/10.3390/a13100245 - 27 Sep 2020
Cited by 3 | Viewed by 3285
Abstract
Multi-access edge computing (MEC) has become an essential technology for collecting, analyzing, and processing data generated by widely distributed user equipment (UE), wireless end-hosts, Internet of things (IoT) sensors, etc., providing real-time and high-quality networking services with ultralow end-to-end latency guaranteed between various user devices and edge cloud computing nodes. However, the cloud resources at the MEC on-site (access point) and edge site are restricted and insufficient mainly because of the operation and management constraints (e.g., limited space and capacity), particularly in the case of on-demand and dynamic service resource deployment. In this regard, we propose a selective MEC resource allocation scheme adopting a multitier architecture over a wide-area software-defined network (SDN) on the basis of our recent research work on virtual network slicing and resource orchestration. The proposed scheme provides an optimized MEC selection model considering end-to-end latency and efficient service resource utilization on the basis of the hierarchical MEC service architecture. Full article
(This article belongs to the Special Issue Virtual Network Embedding)
26 pages, 9735 KiB  
Article
Understanding Contrail Business Processes through Hierarchical Clustering: A Multi-Stage Framework
by Zeeshan Tariq, Naveed Khan, Darryl Charles, Sally McClean, Ian McChesney and Paul Taylor
Algorithms 2020, 13(10), 244; https://doi.org/10.3390/a13100244 - 27 Sep 2020
Cited by 8 | Viewed by 3338
Abstract
Real-world business processes are dynamic, with event logs that are generally unstructured and contain heterogeneous business classes. Process mining techniques derive useful knowledge from such logs, but translating them into simplified and logical segments is crucial. Complexity increases when dealing with business processes that have a large number of events and no outcome labels. Techniques such as trace clustering and event clustering tend to simplify complex business logs, but the resulting clusters are generally not understandable to business users because the business aspects of the process are not considered while clustering the process log. In this paper, we provide a multi-stage hierarchical framework for business-logic-driven clustering of highly variable process logs with an extensively large number of events. First, we introduce the term contrail processes to describe the characteristics of such complex real-world business processes and their logs, which present contrail-like models. Second, we propose an algorithm, Novel Hierarchical Clustering (NoHiC), to discover business-logic-driven clusters from these contrail processes. For clustering, the raw event log is initially decomposed into high-level business classes, and feature engineering is then performed exclusively on business-context features to support the discovery of meaningful business clusters. We use a hybrid approach that combines a rule-based mining technique with a novel form of agglomerative hierarchical clustering. A case study of the CRM process of a renowned UK telecommunications firm is presented, and the quality of the proposed framework is verified through several measures, such as cluster segregation, classification accuracy, and fitness of the log. We compared the NoHiC technique with two trace clustering techniques using two real-world process logs. The clusters discovered through NoHiC show improved fitness compared to the other techniques, and they also hold valuable information about the business context of the process log. Full article
(This article belongs to the Special Issue Process Mining and Emerging Applications)
17 pages, 1011 KiB  
Article
A Multiobjective Large Neighborhood Search Metaheuristic for the Vehicle Routing Problem with Time Windows
by Grigorios D. Konstantakopoulos, Sotiris P. Gayialis, Evripidis P. Kechagias, Georgios A. Papadopoulos and Ilias P. Tatsiopoulos
Algorithms 2020, 13(10), 243; https://doi.org/10.3390/a13100243 - 26 Sep 2020
Cited by 24 | Viewed by 4637
Abstract
The Vehicle Routing Problem with Time Windows (VRPTW) is an NP-hard optimization problem that has been intensively studied due to its applications to real-life cases in the distribution and logistics sector. In this problem, customers define a time slot within which they must be served by vehicles of a standard capacity. The aim is to define cost-effective routes, minimizing both the number of vehicles and the total traveled distance. When both attributes are minimized simultaneously, the problem is considered multiobjective. Although numerous exact, heuristic, and metaheuristic algorithms have been developed to solve various vehicle routing problems, including the VRPTW, only a few treat these problems as multiobjective. In the present paper, a Multiobjective Large Neighborhood Search (MOLNS) algorithm is developed to solve the VRPTW. The algorithm is implemented in the Python programming language and is evaluated on Solomon's 56 benchmark instances with 100 customers, as well as on Gehring and Homberger's benchmark instances with 1000 customers. The results obtained are compared to the best published solutions in order to validate the algorithm's efficiency and performance. The algorithm proves efficient both in the quality of its results, as it offers three new optimal solutions on Solomon's dataset and produces near-optimal results in most instances, and in computational time, as good-quality results are obtained in less than 15 min even in cases with up to 1000 customers. Having the potential to effectively solve real-life distribution problems, the present paper also discusses a practical real-life application of this algorithm. Full article
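As a rough illustration of the large neighborhood search idea (not the authors' MOLNS — their destroy/repair operators, multiobjective handling, and time-window checks are not reproduced here), a minimal destroy-and-repair step in Python might look like this:

```python
import random

def total_distance(routes, dist):
    """Sum route lengths, with every route starting and ending at depot 0."""
    return sum(dist[a][b] for r in routes for a, b in zip([0] + r, r + [0]))

def destroy(routes, n_remove, rng):
    """Remove n_remove randomly chosen customers from their routes."""
    customers = [c for r in routes for c in r]
    removed = rng.sample(customers, n_remove)
    new_routes = [[c for c in r if c not in removed] for r in routes]
    return [r for r in new_routes if r], removed

def repair(routes, removed, dist):
    """Greedy cheapest-insertion repair; assumes at least one route remains
    (capacity and time-window feasibility checks are omitted for brevity)."""
    for c in removed:
        best = None
        for ri, r in enumerate(routes):
            for pos in range(len(r) + 1):
                cand = r[:pos] + [c] + r[pos:]
                cost = total_distance(routes[:ri] + [cand] + routes[ri + 1:], dist)
                if best is None or cost < best[0]:
                    best = (cost, ri, pos)
        _, ri, pos = best
        routes[ri] = routes[ri][:pos] + [c] + routes[ri][pos:]
    return routes
```

A full LNS would iterate destroy/repair, accept improving (or occasionally worsening) solutions, and, in the multiobjective setting, maintain a Pareto archive over vehicle count and distance rather than a single cost.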
23 pages, 7469 KiB  
Article
A Compact FEM Implementation for Parabolic Integro-Differential Equations in 2D
by Gujji Murali Mohan Reddy, Alan B. Seitenfuss, Débora de Oliveira Medeiros, Luca Meacci, Milton Assunção and Michael Vynnycky
Algorithms 2020, 13(10), 242; https://doi.org/10.3390/a13100242 - 24 Sep 2020
Cited by 4 | Viewed by 3284
Abstract
Although two-dimensional (2D) parabolic integro-differential equations (PIDEs) arise in many physical contexts, there is no generally available software that is able to solve them numerically. To remedy this situation, in this article, we provide a compact implementation for solving 2D PIDEs using the finite element method (FEM) on unstructured grids. Piecewise linear finite element spaces on triangles are used for the space discretization, whereas the time discretization is based on the backward-Euler and the Crank–Nicolson methods. The quadrature rules for discretizing the Volterra integral term are chosen so as to be consistent with the time-stepping schemes; a more efficient version of the implementation that uses a vectorization technique in the assembly process is also presented. The compactness of the approach is demonstrated using the software Matrix Laboratory (MATLAB). The efficiency is demonstrated via a numerical example on an L-shaped domain, for which a comparison is possible against the commercially available finite element software COMSOL Multiphysics. Moreover, further consideration indicates that COMSOL Multiphysics cannot be directly applied to 2D PIDEs containing more complex kernels in the Volterra integral term, whereas our method can. Consequently, the subroutines we present constitute a valuable open and validated resource for solving more general 2D PIDEs. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
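For readers unfamiliar with the problem class, a representative 2D PIDE and one consistent time discretization can be sketched as follows (a generic illustration only — the paper's precise schemes and quadrature weights are not given in the abstract):

```latex
% Model problem: parabolic PDE with a Volterra memory term
u_t(x,t) \;=\; \Delta u(x,t) \;+\; \int_0^t K(t-s)\,\Delta u(x,s)\,\mathrm{d}s \;+\; f(x,t)

% Backward-Euler step of size k at t_n = nk; the Volterra integral is
% approximated by a right-rectangle rule, consistent with the
% first-order accuracy of the time stepping:
\frac{u^n - u^{n-1}}{k} \;=\; \Delta u^n \;+\; k\sum_{j=1}^{n} K(t_n - t_j)\,\Delta u^j \;+\; f^n
```

A Crank–Nicolson variant would pair the trapezoidal time step with a second-order quadrature for the memory term, which is the kind of consistency the abstract emphasizes.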
25 pages, 399 KiB  
Article
The Online Reservation Problem
by Shashank Goyal and Diwakar Gupta
Algorithms 2020, 13(10), 241; https://doi.org/10.3390/a13100241 - 23 Sep 2020
Cited by 4 | Viewed by 3670
Abstract
Many sharing-economy platforms operate as follows. Owners list the availability of resources, prices, and contract-length limits. Customers propose contract start times and lengths. The owners decide immediately whether to accept or decline each proposal, even if the contract is for a future date. Accepted proposals generate revenue; declined proposals are lost. At any decision epoch, the owner has no information about future proposals. The owner seeks easy-to-implement algorithms that achieve the best competitive ratio (CR). We first derive a lower bound on the CR of any algorithm. We then analyze the CRs of all intuitive "greedy" algorithms. We propose two new algorithms with significantly better CRs than that of any greedy algorithm for certain parameter-value ranges. The key idea behind these algorithms is that owners may reserve some amount of capacity for late-arriving, higher-value proposals in an attempt to improve revenue. Our contribution lies in operationalizing this idea with algorithms that utilize thresholds. Moreover, we show that if non-optimal thresholds are chosen, they may lead to poor CRs. We provide a rigorous method by which an owner can decide the best approach in their context by analyzing the CRs of greedy algorithms and of those we propose. Full article
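The abstract's key idea — holding back capacity for late-arriving, higher-value proposals via a threshold — can be sketched as below; the `reserve` and `threshold` parameters and the single-resource, length-only setting are illustrative simplifications, not the paper's actual algorithms:

```python
def make_threshold_policy(capacity, reserve, threshold):
    """Online accept/decline rule: accept freely while free capacity stays
    above `reserve`; once a proposal would dip into the reserved portion,
    accept it only if its per-period value meets `threshold`."""
    state = {"used": 0}

    def decide(value, length):
        free = capacity - state["used"]
        if length > free:
            return False  # infeasible: not enough capacity left
        if free - length < reserve and value / length < threshold:
            return False  # would consume the reserve for too little value
        state["used"] += length
        return True

    return decide
```

A greedy policy is the special case `reserve = 0`; the paper's analysis concerns how to set such thresholds so the worst-case revenue ratio against an offline optimum (the CR) is maximized, and it warns that poorly chosen thresholds can make matters worse.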