Algorithms, Volume 14, Issue 3 (March 2021) – 33 articles

Cover Story: In this work, we propose a deep learning architecture (BrainGNN) that learns the connectivity structure while learning to classify subjects. It simultaneously trains a graph neural network on this graph and learns to select a sparse subset of brain regions important to the prediction task. We demonstrate the model’s state-of-the-art classification performance on a schizophrenia fMRI dataset and show how introspection leads to disorder-relevant findings. The graphs learned by the model exhibit strong class discrimination, and the identified sparse subset of relevant regions is consistent with the schizophrenia literature.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 3137 KiB  
Article
3D Mesh Model Classification with a Capsule Network
by Yang Zheng, Jieyu Zhao, Yu Chen, Chen Tang and Shushi Yu
Algorithms 2021, 14(3), 99; https://doi.org/10.3390/a14030099 - 22 Mar 2021
Cited by 7 | Viewed by 3308
Abstract
With the widespread success of deep learning in the two-dimensional domain, how to extend deep learning methods to the three-dimensional domain has become a current research hotspot. Among three-dimensional representations, the polygon mesh, a complex data structure, provides an effective approximate representation of the shape of a three-dimensional object. Although traditional methods can extract the characteristics of a three-dimensional object through graphics techniques, they cannot be applied to more complex objects. Moreover, due to the complexity and irregularity of mesh data, it is difficult to apply convolutional neural networks directly to 3D mesh data. Considering this problem, we propose a deep learning method based on a capsule network to effectively classify mesh data. We first design a polynomial convolution template. Through a sliding operation similar to a two-dimensional image convolution window, we sample directly on the mesh surface and use each sampled surface patch as the minimum unit of computation. Because a high-order polynomial can effectively represent a surface, we fit the approximate shape of each patch with a polynomial and use the polynomial parameters as the patch's shape feature, adding the patch's center-point coordinates and normal vector as its pose feature; together these form the patch's feature vector. At the same time, to avoid the large number of pooling layers introduced in traditional convolutional neural networks, we adopt a capsule network. To handle input mesh models of nonuniform size, the capsule network's pose-parameter learning is improved by sharing the weights of the pose matrix, which reduces the number of model parameters and further improves training efficiency on 3D mesh models. The method is compared with a traditional method and two recent methods on the SHREC15 dataset. Compared with MeshNet and MeshCNN, the average recognition accuracy on the original test set is improved by 3.4% and 2.1%, respectively, and after feature fusion the average accuracy reaches 93.8%. Experimental verification also shows that the method achieves considerable recognition results with a short training time. The three-dimensional mesh classification method proposed in this paper combines the advantages of graphics and deep learning methods, and effectively improves the classification of 3D mesh models. Full article
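The per-patch feature described in the abstract (polynomial coefficients as shape, centroid and normal as pose) can be sketched concretely. This is an illustrative reconstruction under assumed choices (a bivariate quadratic height-field fit and a plane-fit normal), not the authors' exact polynomial convolution template:

```python
import numpy as np

def surface_feature(points):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to a sampled
    surface patch and return a feature vector of shape (coefficients)
    plus pose (centroid and plane-fit normal). Illustrative only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # shape feature
    center = points.mean(axis=0)                     # pose: patch centroid
    # pose: unit normal of the best-fit plane via SVD of centered points
    _, _, vt = np.linalg.svd(points - center)
    normal = vt[-1]
    return np.concatenate([coeffs, center, normal])

# Example: points sampled from the paraboloid z = x^2 + y^2
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
pts = np.column_stack([xy, (xy ** 2).sum(axis=1)])
feat = surface_feature(pts)
```

For an exact quadratic patch, the recovered coefficients of x² and y² come out as 1 and the cross term as 0, so the shape part of the feature does capture the patch geometry.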

16 pages, 430 KiB  
Article
A Feature Selection Algorithm Performance Metric for Comparative Analysis
by Werner Mostert, Katherine M. Malan and Andries P. Engelbrecht
Algorithms 2021, 14(3), 100; https://doi.org/10.3390/a14030100 - 22 Mar 2021
Cited by 10 | Viewed by 3401
Abstract
This study presents a novel performance metric for feature selection algorithms that is unbiased and can be used for comparative analysis across feature selection problems. The baseline fitness improvement (BFI) measure quantifies the potential value gained by applying feature selection. The BFI measure can be used to compare the performance of feature selection algorithms across datasets by measuring the change in classifier performance as a result of feature selection, with respect to the baseline where all features are included. Empirical results are presented to show that there is performance complementarity for a suite of feature selection algorithms on a variety of real world datasets. The BFI measure is a normalised performance metric that can be used to correlate problem characteristics with feature selection algorithm performance, across multiple datasets. This ability paves the way towards describing the performance space of the per-instance algorithm selection problem for feature selection algorithms. Full article
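Read literally, the abstract suggests a simple form for the metric: the change in classifier fitness due to feature selection, relative to the all-features baseline. The sketch below is a hedged interpretation, not the paper's exact formula (its normalisation may differ); `fitness` stands for any classifier performance score in [0, 1]:

```python
def baseline_fitness_improvement(fitness_selected, fitness_baseline):
    """Hedged sketch of the BFI idea: the gain in classifier fitness from
    applying feature selection, relative to using all features.
    Assumes fitness scores already lie in [0, 1]."""
    return fitness_selected - fitness_baseline

# A selector raising accuracy from 0.80 (all features) to 0.86 gains 0.06;
# a selector that hurts performance yields a negative BFI.
bfi = baseline_fitness_improvement(0.86, 0.80)
```

Because the baseline is computed per dataset, such a score can be compared across datasets, which is the property the paper exploits for per-instance algorithm selection.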

20 pages, 1211 KiB  
Article
Multiagent Hierarchical Cognition Difference Policy for Multiagent Cooperation
by Huimu Wang, Zhen Liu, Jianqiang Yi and Zhiqiang Pu
Algorithms 2021, 14(3), 98; https://doi.org/10.3390/a14030098 - 21 Mar 2021
Viewed by 3139
Abstract
Multiagent cooperation is one of the most attractive research fields in multiagent systems. Researchers in this field have made many attempts to promote cooperative behavior. However, several issues still exist, such as complex interactions among different groups of agents and redundant communication from irrelevant agents, which prevent the learning and convergence of cooperative behaviors. To address these limitations, a novel method called multiagent hierarchical cognition difference policy (MA-HCDP) is proposed in this paper. It includes a hierarchical group network (HGN), a cognition difference network (CDN), and a soft communication network (SCN). HGN is designed to distinguish the underlying information in the observations of diverse groups (friendly group, enemy group, and object group) and to extract a separate high-dimensional state representation for each group. CDN is based on a variational auto-encoder and allows each agent to choose its neighbors (communication targets) adaptively according to differences in environment cognition. SCN handles the complex interactions among agents with a soft attention mechanism. Simulation results demonstrate the superior effectiveness of our method compared with existing methods. Full article
(This article belongs to the Special Issue Reinforcement Learning Algorithms)

25 pages, 573 KiB  
Article
Lexicographic Unranking of Combinations Revisited
by Antoine Genitrini and Martin Pépin
Algorithms 2021, 14(3), 97; https://doi.org/10.3390/a14030097 - 19 Mar 2021
Cited by 11 | Viewed by 3313
Abstract
In the context of combinatorial sampling, the so-called “unranking method” can be seen as a link between a total order over the objects and an effective way to construct an object of given rank. The most classical order used in this context is the lexicographic order, which corresponds to the familiar word ordering in the dictionary. In this article, we propose a comparative study of four algorithms dedicated to the lexicographic unranking of combinations, including three algorithms that were introduced decades ago. We start the paper by introducing our new algorithm, which uses a new strategy of computations based on the classical factorial numeral system (or factoradics). Then, we present the three other algorithms at a high level. For each case, we analyze its time complexity on average, within a uniform framework, and describe its strengths and weaknesses. For about 20 years, such algorithms have been implemented using big integer arithmetic rather than bounded integer arithmetic, which makes the cost of computing some coefficients higher than previously stated. We propose improvements for all implementations that take this fact into account, and we give a detailed complexity analysis, which is validated by an experimental analysis. We also show that, even though the algorithms are based on different strategies, all perform very similar computations. Finally, we extend our approach to the unranking of other classical combinatorial objects, such as families counted by multinomial coefficients and k-permutations. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From CSR 2020)
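For concreteness, the task all four algorithms solve can be illustrated with a textbook lexicographic unranking routine driven by binomial coefficients; this is background for the problem statement, not the paper's factoradic-based algorithm:

```python
from math import comb

def unrank_combination(n, k, rank):
    """Return the k-combination of {0, ..., n-1} with the given rank in
    lexicographic order (classical binomial-coefficient method)."""
    result = []
    x = 0
    while k > 0:
        # Number of combinations whose smallest remaining element is x:
        c = comb(n - x - 1, k - 1)
        if rank < c:
            result.append(x)   # x belongs to the combination
            k -= 1
        else:
            rank -= c          # skip the block starting with x
        x += 1
    return result

# For n = 5, k = 3: rank 0 gives [0, 1, 2], rank 9 (the last) gives [2, 3, 4]
```

Note that each step evaluates a binomial coefficient; the paper's point is precisely that, with big-integer arithmetic, these evaluations dominate the cost more than older analyses admit.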

37 pages, 471 KiB  
Article
On the Descriptive Complexity of Color Coding
by Max Bannach and Till Tantau
Algorithms 2021, 14(3), 96; https://doi.org/10.3390/a14030096 - 19 Mar 2021
Viewed by 2864
Abstract
Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas. Full article
(This article belongs to the Special Issue Parameterized Complexity and Algorithms for Nonclassical Logics)
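The randomized technique that the paper derandomizes can be sketched concretely. Below is one round of color coding for detecting a path on k vertices, a standard textbook instance of the idea (not the paper's logical characterization):

```python
import random

def colorful_path_exists(adj, k, seed=0):
    """One round of color coding: randomly color the vertices with k colors,
    then search by dynamic programming for a "colorful" path on k vertices
    (all colors distinct). Repeating with fresh random colorings boosts the
    success probability. Illustrative sketch of the technique."""
    rng = random.Random(seed)
    n = len(adj)
    color = [rng.randrange(k) for _ in range(n)]
    # states[v] = color sets realizable by colorful paths ending at vertex v
    states = [{frozenset([color[v]])} for v in range(n)]
    for _ in range(k - 1):                    # extend paths one vertex at a time
        nxt = [set() for _ in range(n)]
        for u in range(n):
            for v in adj[u]:
                for s in states[u]:
                    if color[v] not in s:     # colorfulness forbids repeats
                        nxt[v].add(s | {color[v]})
        states = nxt
    return any(states[v] for v in range(n))

# A path graph on three vertices contains a 3-vertex path; a single edge does not.
path3 = [[1], [0, 2], [1]]
edge = [[1], [0]]
found = any(colorful_path_exists(path3, 3, seed=s) for s in range(100))
```

Requiring distinct colors is what keeps the dynamic program from revisiting vertices; derandomization replaces the random coloring by a family of hash functions, which is the step the paper characterizes logically.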

29 pages, 2357 KiB  
Article
Towards Understanding Clustering Problems and Algorithms: An Instance Space Analysis
by Luiz Henrique dos Santos Fernandes, Ana Carolina Lorena and Kate Smith-Miles
Algorithms 2021, 14(3), 95; https://doi.org/10.3390/a14030095 - 19 Mar 2021
Cited by 9 | Viewed by 4267
Abstract
Various criteria and algorithms can be used for clustering, leading to very distinct outcomes and potential biases towards datasets with certain structures. More generally, the selection of the most effective algorithm to be applied to a given dataset, based on its characteristics, is a problem that has been largely studied in the field of meta-learning. Recent advances in the form of a new methodology known as Instance Space Analysis provide an opportunity to extend such meta-analyses to gain greater visual insight into the relationship between datasets’ characteristics and the performance of different algorithms. The aim of this study is to perform an Instance Space Analysis for the first time for clustering problems and algorithms. As a result, we are able to analyze the impact of the choice of the test instances employed, and the strengths and weaknesses of some popular clustering algorithms, for datasets with different structures. Full article

14 pages, 611 KiB  
Article
An Integrated Neural Network and SEIR Model to Predict COVID-19
by Sharif Noor Zisad, Mohammad Shahadat Hossain, Mohammed Sazzad Hossain and Karl Andersson
Algorithms 2021, 14(3), 94; https://doi.org/10.3390/a14030094 - 19 Mar 2021
Cited by 33 | Viewed by 6020
Abstract
A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in Wuhan, China. Its rapid spread throughout the world was accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases can be predicted in advance, it would contribute greatly to controlling this pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. This integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks. The model was trained using available data from 250 days. The accuracy of the prediction of confirmed cases is mostly between 90% and 99%. The performance of this integrated model was evaluated by showing the difference in accuracy between the integrated model and the general SEIR model. The result shows that the integrated model is more accurate than the general SEIR model when predicting the number of confirmed cases in Bangladesh. Full article
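The SEIR half of the hybrid model follows the standard compartment equations; below is a minimal Euler-discretized sketch. The neural-network component that the paper adds on top is not reproduced here, and the parameter values are illustrative:

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=1.0):
    """Plain SEIR compartment model (Euler discretization). Compartments are
    population fractions summing to 1. beta: transmission rate, sigma:
    incubation rate (E -> I), gamma: removal rate (I -> R)."""
    s, e, i, r = s0, e0, i0, r0
    history = [(s, e, i, r)]
    for _ in range(days):
        new_exposed = beta * s * i * dt    # S -> E
        new_infected = sigma * e * dt      # E -> I
        new_removed = gamma * i * dt       # I -> R
        s -= new_exposed
        e += new_exposed - new_infected
        i += new_infected - new_removed
        r += new_removed
        history.append((s, e, i, r))
    return history

# Illustrative parameters; R0 is approximately beta / gamma here.
traj = simulate_seir(beta=0.5, sigma=0.2, gamma=0.1,
                     s0=0.99, e0=0.01, i0=0.0, r0=0.0, days=100)
```

In the paper's integrated approach, a neural network learns corrections to such a trajectory from the observed case data, which is what improves accuracy over the plain SEIR model.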

21 pages, 4664 KiB  
Article
Study of Motion Control and a Virtual Reality System for Autonomous Underwater Vehicles
by Minghui Wang, Bi Zeng and Qiujie Wang
Algorithms 2021, 14(3), 93; https://doi.org/10.3390/a14030093 - 18 Mar 2021
Cited by 6 | Viewed by 3123
Abstract
This paper studies a novel intelligent motion control algorithm for Autonomous Underwater Vehicles (AUVs) and develops a virtual reality system for a new interactive experimental platform. The paper designs a robust neuro-fuzzy controller to tackle system uncertainties and external disturbances. Fuzzy control can handle the uncertainty of control systems, while the neural network model self-tunes the controller parameters to improve the anti-interference ability. The designed control algorithm is verified using a MATLAB implementation and a virtual reality system. The virtual reality system developed in this paper can be used to debug the control algorithm, simulate the marine environment, and establish an ocean current interference model. The paper uses the MATLAB engine to realize data communication between MATLAB and the AUV virtual reality system, which allows the control outputs computed in MATLAB to drive the AUV in the virtual simulation system, simulating its motion in 3D space. Full article

17 pages, 1027 KiB  
Article
An Automatic Participant Detection Framework for Event Tracking on Twitter
by Nicholas Mamo, Joel Azzopardi and Colin Layfield
Algorithms 2021, 14(3), 92; https://doi.org/10.3390/a14030092 - 18 Mar 2021
Cited by 4 | Viewed by 2914
Abstract
Topic Detection and Tracking (TDT) on Twitter emulates how humans identify developments in events from a stream of tweets, but while event participants are important for humans to understand what happens during events, machines have no knowledge of them. Our evaluation on football matches and basketball games shows that identifying event participants from tweets is a difficult problem exacerbated by Twitter’s noise and bias. As a result, traditional Named Entity Recognition (NER) approaches struggle to identify participants from the pre-event Twitter stream. To overcome these challenges, we describe Automatic Participant Detection (APD) to detect an event’s participants before the event starts and improve the machine understanding of events. We propose a six-step framework to identify participants and present our implementation, which combines information from Twitter’s pre-event stream and Wikipedia. In spite of the difficulties associated with Twitter and NER in the challenging context of events, our approach manages to restrict noise and consistently detects the majority of the participants. By empowering machines with some of the knowledge that humans have about events, APD lays the foundation not just for improved TDT systems, but also for a future where machines can model and mine events for themselves. Full article

12 pages, 393 KiB  
Article
UAV Formation Shape Control via Decentralized Markov Decision Processes
by Md Ali Azam, Hans D. Mittelmann and Shankarachary Ragi
Algorithms 2021, 14(3), 91; https://doi.org/10.3390/a14030091 - 17 Mar 2021
Cited by 21 | Viewed by 4256
Abstract
In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on decision theory. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., the surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms. Full article
(This article belongs to the Special Issue Algorithms in Stochastic Models)

31 pages, 1067 KiB  
Article
Space-Efficient, Fast and Exact Routing in Time-Dependent Road Networks
by Ben Strasser, Dorothea Wagner and Tim Zeitz
Algorithms 2021, 14(3), 90; https://doi.org/10.3390/a14030090 - 16 Mar 2021
Cited by 13 | Viewed by 4188
Abstract
We study the problem of quickly computing point-to-point shortest paths in massive road networks with traffic predictions. Incorporating traffic predictions into routing makes it possible, for example, to avoid commuter traffic congestion. Existing techniques follow a two-phase approach: in a preprocessing step, an index is built. The index depends on the road network and the traffic patterns but not on the path start and end. The latter are the input of the query phase, in which shortest paths are computed. All existing techniques suffer from large index sizes, slow query running times, or possibly suboptimal paths. In this work, we introduce CATCHUp (Customizable Approximated Time-dependent Contraction Hierarchies through Unpacking), the first algorithm that simultaneously achieves all three objectives. The core idea of CATCHUp is to store paths instead of travel times at shortcuts. Shortcut travel times are derived lazily from the stored paths. We perform an experimental study on a set of real world instances and compare our approach with state-of-the-art techniques. Our approach achieves the fastest preprocessing, competitive query running times and up to 38 times smaller indexes than competing approaches. Full article
(This article belongs to the Special Issue Algorithms for Shortest Paths in Dynamic and Evolving Networks)

14 pages, 1913 KiB  
Article
Fundamental Matrix Computing Based on 3D Metrical Distance
by Xinsheng Li and Xuedong Yuan
Algorithms 2021, 14(3), 89; https://doi.org/10.3390/a14030089 - 15 Mar 2021
Cited by 2 | Viewed by 2558
Abstract
To reconstruct point geometry from multiple images, computation of the fundamental matrix is always necessary. With a new optimization criterion, i.e., the re-projective 3D metric geometric distance rather than the distance in projective space, used within a RANSAC (Random Sample And Consensus) framework, our method can reveal the quality of the fundamental matrix visually through 3D reconstruction. The geometric distance is the projection error of 3D points onto the corresponding image pixel coordinates in metric space. We show reasonable visual figures of the reconstructed scenes in addition to the numerical comparisons that are standard practice. This criterion leads to a better 3D reconstruction result, especially in 3D metric space. Our experiments validate the new error criterion and the quality of the fundamental matrix under it. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
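The RANSAC framework invoked above is the standard sample-and-score loop; a minimal sketch follows, shown for 2D line fitting rather than fundamental-matrix estimation (the model, error function, and threshold are illustrative stand-ins):

```python
import random

def ransac(points, fit, error, threshold, iters=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample and
    keep the model with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        sample = rng.sample(points, 2)          # minimal sample for a line
        model = fit(sample)
        if model is None:                       # degenerate sample
            continue
        inliers = [p for p in points if error(model, p) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

def fit_line(sample):
    (x0, y0), (x1, y1) = sample
    if x0 == x1:
        return None
    m = (y1 - y0) / (x1 - x0)
    return m, y0 - m * x0                       # slope, intercept

def line_error(model, p):
    m, b = model
    return abs(p[1] - (m * p[0] + b))           # vertical residual

# 8 points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(8)] + [(0, 10), (5, -3)]
model, inliers = ransac(pts, fit_line, line_error, threshold=0.1)
```

In the paper's setting, the minimal sample is a set of point correspondences, the model is the fundamental matrix, and the scoring error is the proposed re-projective 3D metric distance instead of a projective-space residual.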

30 pages, 9016 KiB  
Article
Union and Intersection Operators for Thick Ellipsoid State Enclosures: Application to Bounded-Error Discrete-Time State Observer Design
by Andreas Rauh, Auguste Bourgois and Luc Jaulin
Algorithms 2021, 14(3), 88; https://doi.org/10.3390/a14030088 - 14 Mar 2021
Cited by 12 | Viewed by 3689
Abstract
Thick ellipsoids were recently introduced by the authors to represent uncertainty in state variables of dynamic systems, not only in terms of guaranteed outer bounds but also in terms of an inner enclosure that belongs to the true solution set with certainty. While previous work has focused on the definition and computationally efficient implementation of arithmetic operations and extensions of nonlinear standard functions, where all arguments are replaced by thick ellipsoids, this paper introduces novel operators for specifically evaluating quasi-linear system models with bounded parameters as well as for the union and intersection of thick ellipsoids. These techniques are combined in such a way that a discrete-time state observer can be designed in a predictor-corrector framework. Estimation results are presented for a combined observer-based estimation of state variables as well as disturbance forces and torques in the sense of an unknown input estimator for a hovercraft. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)

36 pages, 2499 KiB  
Article
Local Data Debiasing for Fairness Based on Generative Adversarial Training
by Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu and Alain Tapp
Algorithms 2021, 14(3), 87; https://doi.org/10.3390/a14030087 - 14 Mar 2021
Cited by 4 | Viewed by 6845
Abstract
The widespread use of automated decision processes in many areas of our society raises serious ethical issues with respect to the fairness of the process and the possible resulting discrimination. To solve this issue, we propose a novel adversarial training approach called GANSan for learning a sanitizer whose objective is to prevent the possibility of any discrimination (i.e., direct and indirect) based on a sensitive attribute by removing the attribute itself as well as the existing correlations with the remaining attributes. Our method GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data by only modifying the other attributes as little as possible, thus preserving the interpretability of the sanitized data. Consequently, once the sanitizer is trained, it can be applied to new data locally by an individual on their profile before releasing it. Finally, experiments on real datasets demonstrate the effectiveness of the approach as well as the achievable trade-off between fairness and utility. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)

1 page, 147 KiB  
Correction
Correction: Razgon, M., et al. Relaxed Rule-Based Learning for Automated Predictive Maintenance: Proof of Concept. Algorithms 2020, 13, 219
by Margarita Razgon and Alireza Mousavi
Algorithms 2021, 14(3), 86; https://doi.org/10.3390/a14030086 - 12 Mar 2021
Viewed by 2062
Abstract
The authors wish to make the following corrections to their paper [...] Full article
18 pages, 9372 KiB  
Article
Transformation of Uncertain Linear Systems with Real Eigenvalues into Cooperative Form: The Case of Constant and Time-Varying Bounded Parameters
by Andreas Rauh and Julia Kersten
Algorithms 2021, 14(3), 85; https://doi.org/10.3390/a14030085 - 8 Mar 2021
Cited by 6 | Viewed by 2489
Abstract
Continuous-time linear systems with uncertain parameters are widely used for modeling real-life processes. The uncertain parameters, contained in the system and input matrices, can be constant or time-varying. In the latter case, they may represent state dependencies of these matrices. Assuming bounded uncertainties, interval methods become applicable for a verified reachability analysis, for feasibility analysis of feedback controllers, or for the design of robust set-valued state estimators. The evaluation of these system models becomes computationally efficient after a transformation into a cooperative state-space representation, where the dynamics satisfy certain monotonicity properties with respect to the initial conditions. To obtain such representations, similarity transformations are required, which are not trivial to find for sufficiently wide a priori bounds of the uncertain parameters. This paper deals with the derivation and algorithmic comparison of two different transformation techniques, whose applicability to processes with constant and time-varying parameters has to be distinguished. An interval-based reachability analysis of the states of a simple electric step-down converter concludes this paper. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control II)

10 pages, 235 KiB  
Article
Accounting for Attribute Non-Attendance and Common-Metric Aggregation in the Choice of Seat Belt Use, a Latent Class Model with Preference Heterogeneity
by Mahdi Rezapour and Khaled Ksaibati
Algorithms 2021, 14(3), 84; https://doi.org/10.3390/a14030084 - 6 Mar 2021
Cited by 2 | Viewed by 2045
Abstract
A choice to use a seat belt is largely dependent on the psychology of a vehicle’s occupants, and thus those decisions are expected to be characterized by preference heterogeneity. Despite the importance of seat belt use for roadway safety, the majority of existing studies ignored the heterogeneity in the data and used very standard statistical or descriptive methods to identify the factors behind seat belt use. Application of the right statistical method is of crucial importance to unlock the underlying factors of the choice being made by vehicle occupants. Thus, this study was conducted to identify the contributory factors in front-seat passengers’ choice of seat belt usage, while accounting for choice preference heterogeneity. The latent class model has been offered as a replacement for the mixed logit model, substituting a discrete distribution for a continuous one. However, one of the shortcomings of the latent class model is that homogeneity is assumed within each class. A further extension is to relax the assumption of homogeneity by allowing some parameters to vary within the same group. The model can be extended further by considering attribute non-attendance (ANA) and aggregation of common-metric attributes (ACMA). Thus, this study was conducted to compare the goodness of fit of the discussed models. Besides a comparison based on goodness of fit, the share of individuals in each class was used to see how it changes under various model specifications. In summary, the results indicated that adding another layer to account for heterogeneity within the same class of the latent class (LC) model, and accounting for ANA and ACMA, improves the model fit. As discussed in the manuscript, accounting for ANA, ACMA, and an extra layer of heterogeneity not only improves the model's goodness of fit but also largely impacts the share of class allocation of the models. Full article
16 pages, 1120 KiB  
Article
Typhoon Intensity Forecasting Based on LSTM Using the Rolling Forecast Method
by Shijin Yuan, Cheng Wang, Bin Mu, Feifan Zhou and Wansuo Duan
Algorithms 2021, 14(3), 83; https://doi.org/10.3390/a14030083 - 4 Mar 2021
Cited by 32 | Viewed by 5065
Abstract
A typhoon is an extreme weather event with strong destructive force that can cause huge loss of life and economic damage. It is therefore meaningful to reduce the errors of typhoon intensity forecasts. Artificial and deep neural networks have recently become widely used for typhoon forecasting in order to make typhoon intensity forecasting accurate and timely. Typhoon intensity forecasting models based on long short-term memory (LSTM) are proposed herein, which treat typhoon intensity forecasting as a time series problem based on historical typhoon data. First, the models are trained and tested with processed typhoon data from 2000 to 2014 to find the optimal prediction factors. Then, the models are validated with the optimal prediction factors against a feed-forward neural network (FNN). Applied to typhoons Chan-hom and Soudelor in 2015, the model based on LSTM using the optimal prediction factors shows the best performance and the lowest prediction errors. The LSTM-based model is thus practical and meaningful for predicting typhoon intensity within 120 h. Full article
(This article belongs to the Special Issue Algorithms and Applications of Time Series Analysis)
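The rolling forecast scheme described above, predicting one step and then feeding the prediction back as an input for the next step, can be sketched without any deep learning machinery. In the hedged sketch below, a simple linear autoregressive model stands in for the paper's LSTM so the example stays dependency-free; only the feedback loop is the point.

```python
import numpy as np

def fit_ar(series, lags):
    """Least-squares fit of an AR(lags) model: x_t ~ w . [x_{t-lags..t-1}] + b."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    A = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                    # `lags` weights followed by the bias

def rolling_forecast(model, history, steps, lags):
    """Forecast `steps` values ahead, feeding each prediction back as an input."""
    window = list(history[-lags:])
    out = []
    for _ in range(steps):
        pred = float(np.dot(model[:-1], window) + model[-1])
        out.append(pred)
        window = window[1:] + [pred]               # roll the window forward
    return out

# A noiseless linear "intensity" trend is recovered exactly by the rollout.
series = np.arange(40, dtype=float)
model = fit_ar(series, lags=3)
print(rolling_forecast(model, series, steps=4, lags=3))   # ~ [40.0, 41.0, 42.0, 43.0]
```

Replacing `fit_ar` with a trained recurrent model leaves `rolling_forecast` unchanged, which is what makes the scheme attractive for multi-step intensity prediction.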
12 pages, 1592 KiB  
Article
Identifying and Ranking Influential Nodes in Complex Networks Based on Dynamic Node Strength
by Xu Li and Qiming Sun
Algorithms 2021, 14(3), 82; https://doi.org/10.3390/a14030082 - 2 Mar 2021
Cited by 10 | Viewed by 2786
Abstract
Identifying and ranking node influence in complex networks is an important issue. It helps in understanding the dynamics of spreading processes and in designing efficient strategies to hinder or accelerate information spreading. The idea of decomposing a network to rank node influence is widely adopted because of its low computational complexity. In methods of this type, decomposition is a dynamic process, and each iteration can be regarded as an inverse of the spreading process. In this paper, we propose a new ranking method, Dynamic Node Strength Decomposition, based on network decomposition. Spreading paths are distinguished by weighting the edges according to the nodes at both ends, and the change of local structure during decomposition is taken into account. Our experimental results on four real networks of different sizes show that the proposed method generates a more monotonic ranking list and identifies node influence more effectively. Full article
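The decomposition idea in this abstract, repeatedly peeling off the currently weakest node and ranking nodes by how long they survive, can be sketched in a few lines. This is a generic degree-based decomposition, not the authors' Dynamic Node Strength measure, which additionally weights edges by properties of their endpoints.

```python
def decomposition_rank(adj):
    """Iteratively remove the weakest node; nodes removed later rank higher."""
    adj = {u: set(vs) for u, vs in adj.items()}    # local mutable copy
    order = []
    while adj:
        u = min(adj, key=lambda n: (len(adj[n]), n))   # current weakest node
        order.append(u)
        for v in adj[u]:
            adj[v].discard(u)                      # update the remaining structure
        del adj[u]
    return order[::-1]                             # most influential first

# Star graph: the hub should come out as the most influential node.
star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
print(decomposition_rank(star)[0])                 # -> hub
```

The key point the paper exploits is that re-evaluating strength after every removal (rather than using static degree) captures how the local structure changes as the decomposition proceeds.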
28 pages, 751 KiB  
Article
DynASP2.5: Dynamic Programming on Tree Decompositions in Action
by Johannes K. Fichte, Markus Hecher, Michael Morak and Stefan Woltran
Algorithms 2021, 14(3), 81; https://doi.org/10.3390/a14030081 - 2 Mar 2021
Viewed by 2985
Abstract
Efficient exact parameterized algorithms are an active research area and attract broad interest in the theoretical community. In the last few years, implementations for computing various parameters (parameter detection), such as treewidth, treedepth, hypertree width, feedback vertex set, or vertex cover, have been established in parameterized challenges. In theory, instances for which the considered parameter is small can be solved fast (problem evaluation), i.e., the runtime is bounded by an exponential function of the parameter. While such favorable theoretical guarantees exist, it is often unclear whether one can successfully implement these algorithms under practical considerations. In other words, can we design and construct implementations of parameterized algorithms that perform similarly to, or even better than, well-established problem solvers on instances where the parameter is small? Indeed, we can build an implementation that performs well under the theoretical assumptions. However, it could also well be that an existing solver implicitly takes advantage of structure, which is often claimed for solvers that build on SAT solving. In this paper, we consider finding one solution to instances of answer set programming (ASP), a logic-based declarative modeling and solving framework. Solutions of ASP instances are so-called answer sets. Interestingly, the problem of deciding whether an instance has an answer set is already located on the second level of the polynomial hierarchy. DynASP2 is an ASP solver that employs treewidth as parameter and runs dynamic programming on tree decompositions. Empirical experiments show that this solver is fast on instances of small treewidth and can outperform modern ASP solvers when counting answer sets. It remained open whether one can improve the solver so that it also finds one answer set fast and is competitive with modern ASP solvers on instances of low treewidth. 
Unfortunately, theoretical models of modern ASP solvers already indicate that these solvers can solve instances of low treewidth fast, since they are based on SAT-solving algorithms. In this paper, we improve DynASP2 and construct the solver DynASP2.5, which uses a different approach. The new solver is competitive with state-of-the-art ASP solvers even for finding just one solution. We present empirical experiments showing that our new implementation quickly solves ASP instances that encode the Steiner tree problem on graphs of low treewidth. Our implementation is based on a novel approach that we call multi-pass dynamic programming (M-DPSINC). In the paper, we describe the underlying concepts of our implementation (DynASP2.5) and argue why the techniques still yield correct algorithms. Full article
(This article belongs to the Special Issue Parameterized Complexity and Algorithms for Nonclassical Logics)
18 pages, 590 KiB  
Article
D2D Assisted Cellular Networks in Licensed and Unlicensed Spectrum: Matching-Iteration-Based Joint User Access and Resource Allocation
by Qiuqi Han, Guangyuan Zheng and Chen Xu
Algorithms 2021, 14(3), 80; https://doi.org/10.3390/a14030080 - 2 Mar 2021
Cited by 3 | Viewed by 2292
Abstract
Device-to-Device (D2D) communications, which enable direct communication between nearby user devices over the licensed spectrum, have been considered a key technique for improving spectral efficiency and system throughput in cellular networks (CNs). However, the limited spectrum resources are insufficient to support more cellular users (CUs) and D2D users as traffic data grow in future wireless networks. Therefore, Long-Term Evolution-Unlicensed (LTE-U) and D2D-Unlicensed (D2D-U) technologies have been proposed to further enhance system capacity by extending CU and D2D communications to the unlicensed spectrum. In this paper, we consider an LTE network where the CUs and D2D users are allowed to share the unlicensed spectrum with Wi-Fi users. To maximize the sum rate of all users while guaranteeing each user’s quality of service (QoS), we jointly consider user access and resource allocation. To tackle the formulated problem, we propose a matching-iteration-based joint user access and resource allocation algorithm. Simulation results show that the proposed algorithm significantly improves system throughput compared to benchmark algorithms. Full article
15 pages, 367 KiB  
Article
An Improved Greedy Heuristic for the Minimum Positive Influence Dominating Set Problem in Social Networks
by Salim Bouamama and Christian Blum
Algorithms 2021, 14(3), 79; https://doi.org/10.3390/a14030079 - 28 Feb 2021
Cited by 16 | Viewed by 5630
Abstract
This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics; the development of well-performing greedy heuristics therefore supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially as the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
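For readers unfamiliar with the problem, a minimal baseline greedy (not the improved heuristic of the paper) illustrates the MPIDS constraint: every node v must end up with at least ⌈deg(v)/2⌉ of its neighbours inside the chosen set S.

```python
import math

def greedy_pids(adj):
    """Baseline greedy: repeatedly add the node that satisfies the most unmet demand."""
    need = {v: math.ceil(len(adj[v]) / 2) for v in adj}   # unmet demand per node
    S = set()
    while any(n > 0 for n in need.values()):
        # pick the node outside S whose selection covers the most remaining demand
        best = max((v for v in adj if v not in S),
                   key=lambda v: sum(1 for u in adj[v] if need[u] > 0))
        S.add(best)
        for u in adj[best]:
            need[u] -= 1                                  # best now dominates u
    return S

# 4-cycle: each node has degree 2, so each needs one selected neighbour.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(sorted(greedy_pids(cycle)))                         # -> [0, 1]
```

Improved heuristics like the one in the paper refine exactly this selection rule (and its tie-breaking), which is why the choice of greedy criterion matters so much at scale.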
31 pages, 4809 KiB  
Article
An Exploratory Landscape Analysis-Based Benchmark Suite
by Ryan Dieter Lang and Andries Petrus Engelbrecht
Algorithms 2021, 14(3), 78; https://doi.org/10.3390/a14030078 - 27 Feb 2021
Cited by 23 | Viewed by 3459
Abstract
The choice of which objective functions, or benchmark problems, should be used to test an optimization algorithm is a crucial part of the algorithm selection framework. Benchmark suites that are often used in the literature have been shown to exhibit poor coverage of the problem space. Exploratory landscape analysis can be used to quantify characteristics of objective functions. However, exploratory landscape analysis measures are based on samples of the objective function, and there is a lack of work on the appropriate choice of sample size needed to produce reliable measures. This study presents an approach to determine the minimum sample size needed to obtain robust exploratory landscape analysis measures. Based on reliable exploratory landscape analysis measures, a self-organizing feature map is used to cluster a comprehensive set of benchmark functions. From this, a benchmark suite that has better coverage of the single-objective, boundary-constrained problem space is proposed. Full article
21 pages, 2044 KiB  
Article
Multi-Objective Task Scheduling Optimization in Spatial Crowdsourcing
by Afra A. Alabbadi and Maysoon F. Abulkhair
Algorithms 2021, 14(3), 77; https://doi.org/10.3390/a14030077 - 27 Feb 2021
Cited by 16 | Viewed by 3438
Abstract
Recently, with the development of mobile devices and crowdsourcing platforms, spatial crowdsourcing (SC) has become more widespread. In SC, workers need to physically travel to complete spatial–temporal tasks during a certain period of time. The main problem in SC platforms is scheduling a set of suitable workers to achieve a set of spatial tasks based on different objectives. In practice, real-world SC applications need to optimize multiple objectives together, and these objectives may conflict with one another. Furthermore, there is a lack of research on the multi-objective optimization (MOO) problem in an SC environment. Thus, in this work we focus on task scheduling based on multi-objective optimization (TS-MOO) in SC, which aims at maximizing the number of completed tasks, minimizing the total travel costs, and ensuring a balanced workload between workers. To solve this problem, we developed a new method, the multi-objective task scheduling optimization (MOTSO) model, which consists of two algorithms: a multi-objective particle swarm optimization (MOPSO) algorithm with our fitness function (Alabbadi et al.), and a ranking strategy algorithm based on the task entropy concept and task execution duration. The main purpose of our ranking strategy is to improve and enhance the performance of our MOPSO. The primary goal of the proposed MOTSO model is to find an optimal solution based on multiple conflicting objectives. We conducted experiments with both synthetic and real datasets; the experimental results and statistical analysis show that our proposed model is effective in maximizing the number of completed tasks, minimizing the total travel costs, and balancing the workload between workers. Full article
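At the core of any multi-objective particle swarm optimizer is a Pareto-dominance test used to maintain the archive of non-dominated solutions. A minimal sketch, assuming all objectives are minimised; the example objective vectors (travel cost, workload imbalance, negated completed tasks) are illustrative stand-ins for the model's actual fitness function:

```python
def dominates(a, b):
    """True if vector a is no worse than b in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors (the archive of an MOPSO)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (travel cost, workload imbalance, -completed tasks) for four candidate schedules
candidates = [(10, 2, -8), (12, 1, -8), (11, 3, -7), (10, 2, -9)]
print(pareto_front(candidates))                    # -> [(12, 1, -8), (10, 2, -9)]
```

Since the objectives conflict, the optimizer returns this whole front rather than a single winner; the paper's ranking strategy then helps steer the swarm among such non-dominated schedules.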
17 pages, 2299 KiB  
Article
Feature and Language Selection in Temporal Symbolic Regression for Interpretable Air Quality Modelling
by Estrella Lucena-Sánchez, Guido Sciavicco and Ionel Eduard Stan
Algorithms 2021, 14(3), 76; https://doi.org/10.3390/a14030076 - 26 Feb 2021
Cited by 7 | Viewed by 2437
Abstract
Air quality modelling that relates meteorological, car traffic, and pollution data is a fundamental problem, approached in several different ways in the recent literature. In particular, a set of such data sampled at a specific location and during a specific period of time can be seen as a multivariate time series, and modelling the values of the pollutant concentrations can be seen as a multivariate temporal regression problem. In this paper, we propose a new method for symbolic multivariate temporal regression and apply it to several data sets containing real air quality data from the city of Wrocław (Poland). Our experiments show that our approach is superior to classical ones, especially symbolic ones, both in statistical performance and in the interpretability of the results. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)
11 pages, 2264 KiB  
Article
A Deep Learning Model for Data-Driven Discovery of Functional Connectivity
by Usman Mahmood, Zening Fu, Vince D. Calhoun and Sergey Plis
Algorithms 2021, 14(3), 75; https://doi.org/10.3390/a14030075 - 26 Feb 2021
Cited by 14 | Viewed by 5192
Abstract
Functional connectivity (FC) studies have demonstrated the overarching value of studying the brain and its disorders through the undirected weighted graph of the functional magnetic resonance imaging (fMRI) correlation matrix. However, most work with FC depends on the way the connectivity is computed, and it further depends on manual post hoc analysis of the FC matrices. In this work, we propose a deep learning architecture, BrainGNN, that learns the connectivity structure as part of learning to classify subjects. It simultaneously applies a graphical neural network to the learned graph and learns to select a sparse subset of brain regions important to the prediction task. We demonstrate the model’s state-of-the-art classification performance on a schizophrenia fMRI dataset and show how introspection leads to disorder-relevant findings. The graphs learned by the model exhibit strong class discrimination, and the sparse subset of relevant regions is consistent with the schizophrenia literature. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application)
11 pages, 536 KiB  
Article
Adaptive Behaviour for a Self-Organising Video Surveillance System Using a Genetic Algorithm
by Fabrice Saffre and Hanno Hildmann
Algorithms 2021, 14(3), 74; https://doi.org/10.3390/a14030074 - 25 Feb 2021
Cited by 3 | Viewed by 2203
Abstract
Genetic algorithms (GAs) are mostly used as an offline optimisation method to discover a suitable solution to a complex problem prior to implementation. In this paper, we present a different application in which a GA is used to progressively adapt the collective performance of an ad hoc collection of devices that are being integrated post-deployment. Adaptive behaviour in the context of this article refers to two dynamic aspects of the problem: (a) the availability of individual devices and (b) the objective function for the performance of the entire population. We illustrate this concept in a video surveillance scenario in which already installed cameras are retrofitted with networking capabilities to form a coherent closed-circuit television (CCTV) system. We show that this can be conceived as a multi-objective optimisation problem that can be solved at run-time, with the added benefit that solutions can be refined or modified in response to changing priorities or even unpredictable events such as faults. We present the results of a detailed simulation study, whose implications are discussed from both a theoretical and a practical viewpoint (the trade-off between saving computational resources and surveillance coverage). Full article
12 pages, 310 KiB  
Article
Online Facility Location in Evolving Metrics
by Dimitris Fotakis, Loukas Kavouras and Lydia Zakynthinou
Algorithms 2021, 14(3), 73; https://doi.org/10.3390/a14030073 - 25 Feb 2021
Cited by 7 | Viewed by 2425
Abstract
The Dynamic Facility Location problem is a generalization of the classic Facility Location problem in which the distance metric between clients and facilities changes over time. Metrics that develop as a function of time are usually called “evolving metrics”, so Dynamic Facility Location can alternatively be interpreted as a Facility Location problem in evolving metrics. The objective in this time-dependent variant is to balance the trade-off between optimizing the classic objective function and the stability of the solution, which is modeled by charging a switching cost whenever a client’s assignment changes from one facility to another. In this paper, we study the online variant of Dynamic Facility Location. We present a randomized O(log m + log n)-competitive algorithm, where m is the number of facilities and n is the number of clients. In the first step, our algorithm produces, in each timestep, a fractional solution to the objective of Dynamic Facility Location involving a regularization function; this step is an adaptation of the generic algorithm proposed by Buchbinder et al. in their work “Competitive Analysis via Regularization”. Our algorithm then rounds the fractional solution of each timestep to an integral one with the use of exponential clocks. We complement our result by proving a lower bound of Ω(m) for deterministic algorithms and a lower bound of Ω(log m) for randomized algorithms. To the best of our knowledge, these are the first results for the online variant of the Dynamic Facility Location problem. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
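The exponential-clocks rounding mentioned in the abstract rests on the standard competing-exponentials fact: if each facility i draws a clock E_i ~ Exp(1), then the minimiser of E_i / x_i is facility i with probability x_i / Σ_j x_j. A toy single-client sketch, where the facility names and fractional values are purely illustrative:

```python
import random

def round_with_clocks(x, rng):
    """Earliest clock E_i / x_i wins, i.e. facility i is picked with probability x_i / sum(x)."""
    clocks = {i: rng.expovariate(1.0) / xi for i, xi in x.items() if xi > 0}
    return min(clocks, key=clocks.get)

rng = random.Random(0)                             # seeded for reproducibility
x = {"f1": 0.5, "f2": 0.3, "f3": 0.2}              # hypothetical fractional opening values
picks = [round_with_clocks(x, rng) for _ in range(10000)]
print(picks.count("f1") / len(picks))              # ~ 0.5
```

The full rounding in the paper also correlates clocks across timesteps so that a client's assignment rarely switches, which is what keeps the switching cost bounded.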
16 pages, 452 KiB  
Article
Fast Overlap Detection between Hard-Core Colloidal Cuboids and Spheres. The OCSI Algorithm
by Luca Tonti and Alessandro Patti
Algorithms 2021, 14(3), 72; https://doi.org/10.3390/a14030072 - 25 Feb 2021
Cited by 5 | Viewed by 2502
Abstract
Collision between rigid three-dimensional objects is a very common modelling problem in a wide spectrum of scientific disciplines, including Computer Science and Physics. It spans from the realistic animation of polyhedral shapes for computer vision to the description of thermodynamic and dynamic properties in simple and complex fluids. For instance, colloidal particles of especially exotic shapes are commonly modelled as hard-core objects, whose collision test is key to correctly determining their phase and aggregation behaviour. In this work, we propose the Oriented Cuboid Sphere Intersection (OCSI) algorithm to detect collisions between prolate or oblate cuboids and spheres. We investigate OCSI’s performance by benchmarking it against a number of algorithms commonly employed in computer graphics and colloidal science: Quick Rejection First (QRF), Quick Rejection Intertwined (QRI) and a vectorized version of the OBB-sphere collision detection algorithm that explicitly uses SIMD Streaming Extension (SSE) intrinsics, here referred to as SSE-intr. We observed that QRI and QRF depend significantly on the specific cuboid anisotropy and sphere radius, while SSE-intr and OCSI maintain their speed independently of the objects’ geometry. While OCSI and SSE-intr, both based on SIMD parallelization, show excellent and very similar performance, the former provides more accessible coding and a user-friendly implementation, as it exploits OpenMP directives for automatic vectorization. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
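The overlap test that OCSI vectorises is, at heart, the classic clamp-and-compare check between a sphere and a box. A scalar reference sketch, assuming the sphere centre has already been rotated into the cuboid's frame, so the cuboid is axis-aligned with half-extents h:

```python
def sphere_cuboid_overlap(center, h, radius):
    """Clamp the sphere centre onto the box, then compare the distance with the radius."""
    closest = [max(-hi, min(ci, hi)) for ci, hi in zip(center, h)]   # nearest box point
    d2 = sum((ci - qi) ** 2 for ci, qi in zip(center, closest))      # squared distance
    return d2 <= radius ** 2

h = (1.0, 1.0, 3.0)                                        # half-extents of a prolate cuboid
print(sphere_cuboid_overlap((2.0, 0.0, 0.0), h, 1.5))      # -> True  (overlaps a face)
print(sphere_cuboid_overlap((2.0, 2.0, 4.0), h, 1.0))      # -> False (misses the corner)
```

In a hard-core Monte Carlo simulation this predicate is evaluated millions of times per sweep, which is why the paper's SIMD-friendly formulation pays off.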
17 pages, 361 KiB  
Article
On Computational Hardness of Multidimensional Subtraction Games
by Vladimir Gurvich and Mikhail Vyalyi
Algorithms 2021, 14(3), 71; https://doi.org/10.3390/a14030071 - 25 Feb 2021
Cited by 2 | Viewed by 2207
Abstract
We study the algorithmic complexity of solving subtraction games in a fixed dimension with a finite difference set. We prove that there exists a game in this class such that solving the game is EXP-complete and requires time 2^Ω(n), where n is the input size. This bound is optimal up to a polynomial speed-up. The results are based on a construction introduced by Larsson and Wästlund, which relates subtraction games and cellular automata. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From CSR 2020)
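As a one-dimensional warm-up for the multidimensional games studied in the paper (the hardness result above concerns fixed dimensions beyond this toy case), the losing (P-)positions of a subtraction game can be computed by a straightforward dynamic program: a position is losing exactly when every available move leads to a winning position.

```python
def losing_positions(diff_set, limit):
    """P-positions of the subtraction game with moves in diff_set, up to limit."""
    is_losing = [False] * (limit + 1)
    for n in range(limit + 1):
        moves = [n - d for d in diff_set if n - d >= 0]
        # n is losing iff it has no move to a losing position
        # (vacuously true when no move is available)
        is_losing[n] = all(not is_losing[m] for m in moves)
    return [n for n in range(limit + 1) if is_losing[n]]

# Subtraction set {1, 2, 3}: the losing positions are exactly the multiples of 4.
print(losing_positions({1, 2, 3}, 12))             # -> [0, 4, 8, 12]
```

In one dimension this table is periodic and cheap to fill; the paper's point is that in fixed dimension two and above, no such shortcut exists, since solving the game becomes EXP-complete.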