Algorithms, Volume 14, Issue 6 (June 2021) – 30 articles

Cover Story (view full-size image): An approximate 2-decomposable rigid-backbone sequence energy landscape (top) is superimposed over the true sequence energy landscape (bottom). While the two landscapes share a lot, the GMEC in the top landscape corresponds to a shallow basin in the more complex real energy landscape. By enumerating diverse low-energy solutions in the approximate landscape (black dots surrounded by circles denoting distance constraints), a third solution emerges in a shallow basin which almost matches the GMEC of the true landscape.
11 pages, 15180 KiB  
Article
A Safety Prediction System for Lunar Orbit Rendezvous and Docking Mission
by Dan Yu, Peng Liu, Dezhi Qiao and Xianglong Tang
Algorithms 2021, 14(6), 188; https://doi.org/10.3390/a14060188 - 21 Jun 2021
Cited by 1 | Viewed by 2437
Abstract
In view of the characteristics of the guidance, navigation and control (GNC) system of the lunar orbit rendezvous and docking (RVD), we design an auxiliary safety prediction system based on the human–machine collaboration framework. The system contains two parts, including the construction of the rendezvous and docking safety rule knowledge base by the use of machine learning methods, and the prediction of safety by the use of the base. First, in the ground semi-physical simulation test environment, feature extraction and matching are performed on the images taken by the navigation surveillance camera. Then, the matched features and the rendezvous and docking deviation are used to form training sample pairs, which are further used to construct the safety rule knowledge base by using the decision tree method. Finally, the safety rule knowledge base is used to predict the safety of the subsequent process of the rendezvous and docking based on the current images taken by the surveillance camera, and the probability of success is obtained. Semi-physical experiments on the ground show that the system can improve the level of intelligence in the flight control process and effectively assist ground flight controllers in data monitoring and mission decision-making. Full article
(This article belongs to the Special Issue Machine-Learning in Computer Vision Applications)
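As a rough illustration of the rule-learning step described in this abstract, the sketch below fits a one-level decision tree (a stump) that maps an image-feature deviation to a docking-safety label. The feature values, labels, and threshold search are invented for illustration; the paper's actual knowledge base is built from surveillance-camera features and measured rendezvous-and-docking deviations.

```python
# Hypothetical sketch: a one-level decision tree (stump) of the kind that
# could populate a safety-rule knowledge base, mapping an image-feature
# deviation to a predicted docking-safety label.

def fit_stump(samples):
    """samples: list of (deviation, is_safe). Returns (threshold, accuracy)
    for the rule 'predict safe iff deviation <= threshold'."""
    best = (None, 0.0)
    for thr, _ in samples:                      # try each value as threshold
        correct = sum((dev <= thr) == safe for dev, safe in samples)
        acc = correct / len(samples)
        if acc > best[1]:
            best = (thr, acc)
    return best

# Small synthetic training set: low deviation tends to mean a safe docking.
train = [(0.1, True), (0.2, True), (0.35, True), (0.6, False), (0.8, False)]
threshold, acc = fit_stump(train)
print(threshold, acc)  # 0.35 1.0: deviations <= 0.35 are predicted safe
```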
Show Figures

Figure 1

11 pages, 5368 KiB  
Article
Using Machine Learning for Quantum Annealing Accuracy Prediction
by Aaron Barbosa, Elijah Pelofske, Georg Hahn and Hristo N. Djidjev
Algorithms 2021, 14(6), 187; https://doi.org/10.3390/a14060187 - 21 Jun 2021
Cited by 7 | Viewed by 2799
Abstract
Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of current-generation quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, or annealing parameters, such as the D-Wave’s chain strength, we are able to rank certain features in the order of their contribution to the solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by D-Wave. Full article
(This article belongs to the Special Issue Quantum Optimization and Machine Learning)
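A crude way to picture the feature-ranking idea in this abstract: score each candidate feature by how well a single-threshold rule on that feature alone separates solved from unsolved instances. The toy data (edge counts, chain strengths, solved flags) below are invented, not taken from the paper.

```python
def stump_accuracy(xs, ys):
    """Best accuracy of a one-threshold rule on a single feature."""
    best = 0.0
    for thr in xs:
        for sense in (True, False):             # which side predicts True
            correct = sum(((x <= thr) == sense) == y for x, y in zip(xs, ys))
            best = max(best, correct / len(xs))
    return best

# Invented toy data: (edges, chain_strength) -> solved to optimality?
edges  = [50, 120, 300, 900, 1500, 2500]
chains = [1.0, 2.5, 1.5, 2.0, 0.5, 3.0]
solved = [True, True, True, False, False, False]

# Rank features by the accuracy of the best single-feature rule
ranking = sorted(
    [("num_edges", stump_accuracy(edges, solved)),
     ("chain_strength", stump_accuracy(chains, solved))],
    key=lambda kv: -kv[1])
print(ranking)  # num_edges separates this toy data perfectly
```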
Show Figures

Figure 1

15 pages, 2767 KiB  
Article
Combining Optimization Methods Using an Adaptive Meta Optimizer
by Nicola Landro, Ignazio Gallo and Riccardo La Grassa
Algorithms 2021, 14(6), 186; https://doi.org/10.3390/a14060186 - 19 Jun 2021
Cited by 11 | Viewed by 4422
Abstract
Optimization methods are of great importance for the efficient training of neural networks. There are many articles in the literature that propose particular variants of existing optimizers. In our article, we propose combining two very different optimizers that, when used simultaneously, can exceed the performance of either on very different problems. We propose a new optimizer called ATMO (AdapTive Meta Optimizers), which integrates two different optimizers simultaneously, weighting the contributions of both. Rather than trying to improve each individually, we leverage both at the same time, as a meta-optimizer, by taking the best of both. We have conducted several experiments on the classification of images and text documents, using various types of deep neural models, and we demonstrate through these experiments that the proposed ATMO produces better performance than the individual optimizers. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
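The core idea of weighting two optimizers' steps can be sketched in a few lines. Below, a combined update takes a weighted sum of an SGD-with-momentum step and an Adam step on the toy objective f(x) = x²; the 50/50 weights, learning rate, and objective are illustrative assumptions, not the paper's configuration.

```python
import math

def atmo_like_step(x, grad, state, w_sgd=0.5, w_adam=0.5,
                   lr=0.1, mu=0.9, b1=0.9, b2=0.999, eps=1e-8):
    """One combined step: weighted sum of an SGD-momentum step and an Adam step."""
    state["v"] = mu * state["v"] + grad                    # momentum buffer
    sgd_step = lr * state["v"]
    state["m"] = b1 * state["m"] + (1 - b1) * grad         # Adam 1st moment
    state["s"] = b2 * state["s"] + (1 - b2) * grad * grad  # Adam 2nd moment
    state["t"] += 1
    m_hat = state["m"] / (1 - b1 ** state["t"])            # bias correction
    s_hat = state["s"] / (1 - b2 ** state["t"])
    adam_step = lr * m_hat / (math.sqrt(s_hat) + eps)
    return x - (w_sgd * sgd_step + w_adam * adam_step)

x, state = 5.0, {"v": 0.0, "m": 0.0, "s": 0.0, "t": 0}
for _ in range(100):
    x = atmo_like_step(x, 2 * x, state)   # gradient of f(x) = x^2 is 2x
print(abs(x))  # near 0 after convergence
```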
Show Figures

Figure 1

23 pages, 3028 KiB  
Article
Unbiased Fuzzy Estimators in Fuzzy Hypothesis Testing
by Nikos Mylonas and Basil Papadopoulos
Algorithms 2021, 14(6), 185; https://doi.org/10.3390/a14060185 - 17 Jun 2021
Cited by 4 | Viewed by 2219
Abstract
In this paper, we develop fuzzy, possibilistic hypothesis tests for testing crisp hypotheses about a distribution parameter from crisp data. In these tests, we use fuzzy statistics produced by the possibility distribution of the estimated parameter, which is constructed from the confidence intervals known from crisp statistics. The results of these tests are in much better agreement with crisp statistics than those produced by the respective tests of a popular book on fuzzy statistics, which uses fuzzy critical values. We also present an error that we found in that book’s implementation of the unbiased fuzzy estimator of the variance, due to a poor interpretation of its mathematical content, which leads some fuzzy hypothesis tests to disagree with their respective crisp ones. Implementing this estimator correctly, we produce test statistics whose hypothesis-test results are in much better agreement with those of the respective crisp tests. Full article
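The construction "fuzzy estimator from confidence intervals" can be illustrated concretely: the α-cut of the fuzzy estimator of the mean is taken to be the crisp (1 − α) confidence interval, so the cuts nest from the point estimate (α = 1) out to wide intervals (α near 0). The data and the normal approximation below are illustrative assumptions, not the paper's examples.

```python
from statistics import NormalDist, mean, stdev

def fuzzy_mean_alpha_cut(data, alpha):
    """alpha-cut of a fuzzy estimator of the mean: the (1 - alpha)
    confidence interval from crisp statistics (normal approximation)."""
    n = len(data)
    m, s = mean(data), stdev(data)
    z = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    half = z * s / n ** 0.5
    return (m - half, m + half)

data = [9.8, 10.1, 10.3, 9.9, 10.2, 10.0]
core = fuzzy_mean_alpha_cut(data, 1.0)    # alpha = 1: degenerate at the mean
wide = fuzzy_mean_alpha_cut(data, 0.05)   # alpha = 0.05: the 95% CI
print(core, wide)                          # nested intervals
```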
Show Figures

Figure 1

12 pages, 315 KiB  
Article
A Similarity Measurement with Entropy-Based Weighting for Clustering Mixed Numerical and Categorical Datasets
by Xia Que, Siyuan Jiang, Jiaoyun Yang and Ning An
Algorithms 2021, 14(6), 184; https://doi.org/10.3390/a14060184 - 15 Jun 2021
Cited by 6 | Viewed by 2560
Abstract
Many mixed datasets with both numerical and categorical attributes have been collected in various fields, including medicine, biology, etc. Designing appropriate similarity measurements plays an important role in clustering these datasets. Many traditional measurements treat all attributes equally when measuring similarity. However, different attributes may contribute differently, as the amount of information they contain can vary considerably. In this paper, we propose a similarity measurement with entropy-based weighting for clustering mixed datasets. The numerical data are first transformed into categorical data by an automatic categorization technique. Then, an entropy-based weighting strategy is applied to denote the different importance of various attributes. We incorporate the proposed measurement into an iterative clustering algorithm, and extensive experiments show that this algorithm outperforms the OCIL and K-Prototype methods with 2.13% and 4.28% improvements, respectively, in terms of accuracy on six mixed datasets from UCI. Full article
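The entropy-weighting idea can be shown in miniature: weight each (already categorical) attribute by its Shannon entropy, so that a constant attribute contributes nothing to the similarity while a varied one contributes the most. This is a generic sketch of entropy weighting, not the paper's exact formula, and the records are invented.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a categorical attribute column."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_weights(columns):
    """Weight each attribute by its entropy, normalised to sum to 1:
    higher-entropy attributes carry more information for the similarity."""
    ents = [entropy(col) for col in columns]
    total = sum(ents)
    return [e / total for e in ents]

def weighted_similarity(a, b, weights):
    """Similarity of two records: weighted fraction of matching attributes."""
    return sum(w for x, y, w in zip(a, b, weights) if x == y)

cols = [["red", "red", "red", "red"],   # uninformative: entropy 0
        ["A", "B", "A", "C"],           # most informative
        ["low", "high", "low", "high"]]
w = entropy_weights(cols)
print(weighted_similarity(("red", "A", "low"), ("red", "A", "high"), w))
```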
Show Figures

Figure 1

24 pages, 7917 KiB  
Article
The Practicality of Deep Learning Algorithms in COVID-19 Detection: Application to Chest X-ray Images
by Abdulaziz Alorf
Algorithms 2021, 14(6), 183; https://doi.org/10.3390/a14060183 - 13 Jun 2021
Cited by 10 | Viewed by 3771
Abstract
Since January 2020, the outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has affected the whole world, producing a respiratory disease that can become severe and even cause death in certain groups of people. The main method for diagnosing coronavirus disease 2019 (COVID-19) is performing viral tests. However, the kits for carrying out these tests are scarce in certain regions of the world. Lung conditions as perceived in computed tomography and radiography images exhibit a high correlation with the presence of COVID-19 infections. This work attempted to assess the feasibility of using convolutional neural networks for the analysis of pulmonary radiography images to distinguish COVID-19 infections from non-infected cases and other types of viral or bacterial pulmonary conditions. The results obtained indicate that these networks can successfully distinguish the pulmonary radiographies of COVID-19-infected patients from radiographies that exhibit other or no pathology, with a sensitivity of 100% and specificity of 97.6%. This could help future efforts to automate the process of identifying lung radiography images of suspicious cases, thereby supporting medical personnel when many patients need to be rapidly checked. The automated analysis of pulmonary radiography is not intended to be a substitute for formal viral tests or formal diagnosis by a properly trained physician but rather to assist with identification when the need arises. Full article
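The headline numbers here (sensitivity 100%, specificity 97.6%) are standard confusion-matrix metrics; for readers unfamiliar with them, this sketch computes both from toy labels and predictions (invented, not the paper's data).

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = COVID-19 positive, 0 = negative."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy predictions from a hypothetical classifier
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 1.0 0.8
```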
Show Figures

Figure 1

9 pages, 987 KiB  
Article
A Study of Ising Formulations for Minimizing Setup Cost in the Two-Dimensional Cutting Stock Problem
by Hiroshi Arai and Harumi Haraguchi
Algorithms 2021, 14(6), 182; https://doi.org/10.3390/a14060182 - 9 Jun 2021
Cited by 2 | Viewed by 2567
Abstract
We propose a method that translates the two-dimensional CSP for minimizing the number of cuts into the Ising model. We then conducted computer experiments with the proposed model on benchmark problems, with the following results. (1) The proposed Ising model adequately represents the target problem. (2) Acceptance rates were low, ranging from 0.2% to 9.8% and from 21.8% to 49.4%. (3) Error rates from the optimal solution ranged widely, from 0% to 25.9%. For future work, we propose the following changes: (1) improve the Hamiltonian for constraints; (2) improve the proposed model to handle more complex two-dimensional CSPs and reduce the number of spins when dealing with large materials and components; and (3) conduct experiments using a quantum annealer. Full article
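Ising/QUBO encodings of problems like this one are built from quadratic penalty terms. A standard building block, sketched below on three spins, is the one-hot penalty (sum(bits) − 1)², which is zero exactly when one bit is set, enforcing a "place each component exactly once" constraint. This is the generic construction, not the paper's specific Hamiltonian.

```python
from itertools import product

def one_hot_penalty(bits):
    """Penalty (sum(bits) - 1)^2: zero iff exactly one bit is set.
    The standard QUBO building block for 'exactly one' constraints."""
    s = sum(bits)
    return (s - 1) ** 2

# Exhaustive check over 3 binary spins: only one-hot states have energy 0
energies = {bits: one_hot_penalty(bits) for bits in product((0, 1), repeat=3)}
ground = [b for b, e in energies.items() if e == 0]
print(ground)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```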
22 pages, 930 KiB  
Article
Approximately Optimal Control of Nonlinear Dynamic Stochastic Problems with Learning: The OPTCON Algorithm
by Dmitri Blueschke, Viktoria Blueschke-Nikolaeva and Reinhard Neck
Algorithms 2021, 14(6), 181; https://doi.org/10.3390/a14060181 - 8 Jun 2021
Cited by 8 | Viewed by 2424
Abstract
OPTCON is an algorithm for the optimal control of nonlinear stochastic systems which is particularly applicable to economic models. It delivers approximate numerical solutions to optimum control (dynamic optimization) problems with a quadratic objective function for nonlinear economic models with additive and multiplicative (parameter) uncertainties. The algorithm was first programmed in C# and then in MATLAB. It allows for deterministic and stochastic control, the latter with open loop (OPTCON1), passive learning (open-loop feedback, OPTCON2), and active learning (closed-loop, dual, or adaptive control, OPTCON3) information patterns. The mathematical aspects of the algorithm with open-loop feedback and closed-loop information patterns are presented in more detail in this paper. Full article
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)
20 pages, 2967 KiB  
Article
Damage Identification of Long-Span Bridges Using the Hybrid of Convolutional Neural Network and Long Short-Term Memory Network
by Lei Fu, Qizhi Tang, Peng Gao, Jingzhou Xin and Jianting Zhou
Algorithms 2021, 14(6), 180; https://doi.org/10.3390/a14060180 - 8 Jun 2021
Cited by 23 | Viewed by 3717
Abstract
The shallow features extracted by traditional artificial intelligence algorithm-based damage identification methods have low sensitivity and ignore the timing characteristics of vibration signals. Thus, this study uses the high-dimensional feature extraction advantages of convolutional neural networks (CNNs) and the time series modeling capability of long short-term memory networks (LSTM) to identify damage to long-span bridges. Firstly, the features extracted by the CNN and LSTM are fused as the input of the fully connected layer to train the CNN-LSTM model. After that, the trained CNN-LSTM model is employed for damage identification. Finally, a numerical example of a large-span suspension bridge was carried out to investigate the effectiveness of the proposed method. Furthermore, the performance of CNN-LSTM and CNN under different noise levels was compared to test the feasibility of application in practical engineering. The results demonstrate the following: (1) the combination of CNN and LSTM is satisfactory, with 94% damage localization accuracy and only an 8.0% average relative identification error (ARIE) for damage severity identification; (2) in comparison to the CNN, the CNN-LSTM achieves superior identification accuracy; the damage localization accuracy is improved by 8.13%, while the ARIE of damage severity identification decreases by 5.20%; and (3) the proposed method is capable of resisting the influence of environmental noise and achieves an acceptable recognition effect for multi-location damage; in a database with a lower signal-to-noise ratio of 3.33, the damage localization accuracy of the CNN-LSTM model is 67.06%, and the ARIE of damage severity identification is 31%. This work provides an innovative idea for damage identification of long-span bridges and is conducive to promoting follow-up studies of structural condition evaluation. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
19 pages, 5353 KiB  
Article
Analysis and Prediction of Carsharing Demand Based on Data Mining Methods
by Chunxia Wang, Jun Bi, Qiuyue Sai and Zun Yuan
Algorithms 2021, 14(6), 179; https://doi.org/10.3390/a14060179 - 5 Jun 2021
Cited by 10 | Viewed by 3215
Abstract
With the development of the sharing economy, carsharing is a major achievement in the current mode of transportation in sharing economies. Carsharing can effectively alleviate traffic congestion and reduce the travel cost of residents. However, due to the randomness of users’ travel demand, carsharing operators are faced with problems, such as imbalance in vehicle demand at stations. Therefore, scientific prediction of users’ travel demand is important to ensure the efficient operation of carsharing. The main purpose of this study is to use gradient boosting decision tree to predict the travel demand of station-based carsharing users. The case study is conducted in Lanzhou City, Gansu Province, China. To improve the accuracy, gradient boosting decision tree is designed to predict the demands of users at different stations at various times based on the actual operating data of carsharing. The prediction results are compared with results of the autoregressive integrated moving average. The conclusion shows that gradient boosting decision tree has higher prediction accuracy. This study can provide a reference value for user demand prediction in practical application. Full article
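Gradient boosting of decision trees, the method this abstract applies, fits each new tree to the residuals of the current prediction. A minimal from-scratch sketch on invented hourly demand data (a constant baseline plus a few regression stumps, squared loss):

```python
def fit_stump_reg(xs, rs):
    """Best single-split regression stump for residuals rs (squared loss)."""
    best = None
    for thr in xs:
        left = [r for x, r in zip(xs, rs) if x <= thr]
        right = [r for x, r in zip(xs, rs) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

# Toy hourly demand: hour of day -> vehicles requested (invented numbers)
hours  = [7, 8, 9, 12, 17, 18, 19, 23]
demand = [5, 9, 8, 3, 9, 11, 10, 2]

pred = [sum(demand) / len(demand)] * len(demand)   # round 0: the mean
for _ in range(3):                                 # 3 boosting rounds
    resid = [y - p for y, p in zip(demand, pred)]
    stump = fit_stump_reg(hours, resid)
    pred = [p + stump(x) for p, x in zip(pred, hours)]

mse = sum((y - p) ** 2 for y, p in zip(demand, pred)) / len(demand)
print(round(mse, 3))  # lower than the variance of the raw demand
```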
24 pages, 2522 KiB  
Article
Optimised Tuning of a PID-Based Flight Controller for a Medium-Scale Rotorcraft
by Lindokuhle J. Mpanza and Jimoh Olarewaju Pedro
Algorithms 2021, 14(6), 178; https://doi.org/10.3390/a14060178 - 3 Jun 2021
Cited by 9 | Viewed by 3875
Abstract
This paper presents the parameter optimisation of the flight control system of a single-rotor medium-scale rotorcraft. The six degrees-of-freedom (DOF) nonlinear mathematical model of the rotorcraft is developed. This model is then used to develop proportional–integral–derivative (PID)-based controllers. Since the majority of PID controllers installed in industry are poorly tuned, this paper presents a comparison of the optimised tuning of the flight controller parameters using particle swarm optimisation (PSO), genetic algorithm (GA), ant colony optimisation (ACO) and cuckoo search (CS) optimisation algorithms. The aim is to find the best PID parameters that minimise the specified objective function. Two trim conditions are investigated, i.e., hover and 10 m/s forward flight. All four algorithms performed better than manual tuning of the PID controllers. It was found, through numerical simulation, that the ACO algorithm converges the fastest and finds the best gains for the selected objective function in hover trim conditions. However, for the 10 m/s forward flight trim, the GA algorithm was found to be the best. Both tuned flight controllers managed to reject a gust wind of up to 5 m/s in the lateral axis in hover and in forward flight. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
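The common core of all four metaheuristics compared here is: simulate the closed loop for candidate PID gains, score the response with an objective such as the integral of absolute error, and keep the best gains. The sketch below uses plain random search over a gain box as a stand-in for PSO/GA/ACO/CS, on a hypothetical first-order plant, purely to make the tuning loop concrete.

```python
import random

def simulate_pid(kp, ki, kd, steps=200, dt=0.05):
    """Closed-loop step response of a first-order plant dy/dt = -y + u,
    driven by a PID controller; returns integral of absolute error (IAE)."""
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                       # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt                  # Euler step of the plant
        prev_err = err
        iae += abs(err) * dt
    return iae

random.seed(0)
default = (1.0, 0.0, 0.0)                   # untuned baseline gains
best_gains, best_cost = default, simulate_pid(*default)
for _ in range(300):                        # random search over the gain box
    cand = (random.uniform(0, 10), random.uniform(0, 5), random.uniform(0, 1))
    cost = simulate_pid(*cand)
    if cost < best_cost:
        best_gains, best_cost = cand, cost
print(best_cost <= simulate_pid(*default))  # True: tuned never worse
```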
23 pages, 1949 KiB  
Article
Twenty-Four-Hour Ahead Probabilistic Global Horizontal Irradiance Forecasting Using Gaussian Process Regression
by Edina Chandiwana, Caston Sigauke and Alphonce Bere
Algorithms 2021, 14(6), 177; https://doi.org/10.3390/a14060177 - 2 Jun 2021
Cited by 10 | Viewed by 2953
Abstract
Probabilistic solar power forecasting has been critical in Southern Africa because of major shortages of power due to climatic changes and other factors over the past decade. This paper discusses Gaussian process regression (GPR) coupled with core vector regression for short-term hourly global horizontal irradiance (GHI) forecasting. GPR is a powerful Bayesian non-parametric regression method that works well for small data sets and quantifies the uncertainty in the predictions. The choice of a kernel that characterises the covariance function is a crucial issue in Gaussian process regression. In this study, we adopt the minimum enclosing ball (MEB) technique. The MEB improves the forecasting power of GPR because the smaller the ball is, the shorter the training time, hence performance is robust. Forecasting of real-time data was done on two South African radiometric stations, Stellenbosch University (SUN) in a coastal area of the Western Cape Province, and the University of Venda (UNV) station in the Limpopo Province. Variables were selected using the least absolute shrinkage and selection operator via hierarchical interactions. The Bayesian approach using informative priors was used for parameter estimation. Based on the root mean square error, mean absolute error and percentage bias the results showed that the GPR model gives the most accurate predictions compared to those from gradient boosting and support vector regression models, making this study a useful tool for decision-makers and system operators in power utility companies. The main contribution of this paper is in the use of a GPR model coupled with the core vector methodology which is used in forecasting GHI using South African data. This is the first application of GPR coupled with core vector regression in which the minimum enclosing ball is applied on GHI data, to the best of our knowledge. Full article
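The GPR machinery at the heart of this paper reduces, in its simplest form, to a kernel matrix and a linear solve: the posterior mean at a test point is k(x*, X) Kⁿ⁻¹ y. A minimal from-scratch sketch with an RBF kernel on invented irradiance-like observations (no core vector / MEB step, which is the paper's refinement):

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel with length scale ls."""
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior_mean(xs, ys, x_star, noise=1e-6):
    """GP posterior mean: k(x*, X) (K + noise I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(x, x_star) for a, x in zip(alpha, xs))

# Toy hourly GHI-like observations (invented): irradiance rises then falls
hours = [8.0, 12.0, 16.0]
ghi = [200.0, 800.0, 300.0]
print(gp_posterior_mean(hours, ghi, 12.0))  # close to 800 at a training point
```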
17 pages, 4997 KiB  
Article
Decision Tree-Based Adaptive Reconfigurable Cache Scheme
by Wei Zhu and Xiaoyang Zeng
Algorithms 2021, 14(6), 176; https://doi.org/10.3390/a14060176 - 1 Jun 2021
Cited by 7 | Viewed by 3509
Abstract
Applications have different preferences for caches, sometimes even within the different running phases. Caches with fixed parameters may compromise the performance of a system. To solve this problem, we propose a real-time adaptive reconfigurable cache based on the decision tree algorithm, which can optimize the average memory access time of cache without modifying the cache coherent protocol. By monitoring the application running state, the cache associativity is periodically tuned to the optimal cache associativity, which is determined by the decision tree model. This paper implements the proposed decision tree-based adaptive reconfigurable cache in the GEM5 simulator and designs the key modules using Verilog HDL. The simulation results show that the proposed decision tree-based adaptive reconfigurable cache reduces the average memory access time compared with other adaptive algorithms. Full article
(This article belongs to the Special Issue Algorithms in Reconfigurable Computing)
18 pages, 578 KiB  
Article
Approximation Algorithms for Sorting λ-Permutations by λ-Operations
by Guilherme Henrique Santos Miranda, Alexsandro Oliveira Alexandrino, Carla Negri Lintzmayer and Zanoni Dias
Algorithms 2021, 14(6), 175; https://doi.org/10.3390/a14060175 - 1 Jun 2021
Cited by 2 | Viewed by 2621
Abstract
Understanding how different two organisms are is one question addressed by the comparative genomics field. A well-accepted way to estimate the evolutionary distance between genomes of two organisms is finding the rearrangement distance, which is the smallest number of rearrangements needed to transform one genome into another. By representing genomes as permutations, one of them can be represented as the identity permutation, and, so, we reduce the problem of transforming one permutation into another to the problem of sorting a permutation using the minimum number of rearrangements. This work investigates the problems of sorting permutations using reversals and/or transpositions, with some additional restrictions of biological relevance. Given a value λ, the problem now is how to sort a λ-permutation, which is a permutation whose elements are less than λ positions away from their correct places (regarding the identity), by applying the minimum number of rearrangements. Each λ-rearrangement must have size at most λ, and, when applied to a λ-permutation, the result should also be a λ-permutation. We present algorithms with approximation factors of O(λ²), O(λ), and O(1) for the problems of Sorting λ-Permutations by λ-Reversals, by λ-Transpositions, and by both operations, respectively. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
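The two defining constraints in this abstract, that every element sits less than λ positions from its sorted place and that each rearrangement has size at most λ and preserves the property, are easy to make concrete. A small sketch with λ-reversals (the example permutation is invented):

```python
def is_lambda_permutation(perm, lam):
    """Every element is fewer than lam positions from its sorted place
    (element i of the identity belongs at position i, 1-indexed)."""
    return all(abs(v - (i + 1)) < lam for i, v in enumerate(perm))

def lambda_reversal(perm, i, j, lam):
    """Reverse the segment [i, j] (0-indexed, inclusive), checking that the
    segment has size at most lam and the result stays a lam-permutation."""
    assert j - i + 1 <= lam, "reversal larger than lambda"
    out = perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
    assert is_lambda_permutation(out, lam), "result leaves the lambda class"
    return out

p = [2, 1, 3, 5, 4]               # a 2-permutation: everything is < 2 away
assert is_lambda_permutation(p, 2)
p = lambda_reversal(p, 0, 1, 2)   # sort the first two elements
p = lambda_reversal(p, 3, 4, 2)   # sort the last two
print(p)  # [1, 2, 3, 4, 5]
```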
15 pages, 805 KiB  
Article
An Introduction to Development of Centralized and Distributed Stochastic Approximation Algorithm with Expanding Truncations
by Wenxiao Zhao
Algorithms 2021, 14(6), 174; https://doi.org/10.3390/a14060174 - 31 May 2021
Viewed by 2567
Abstract
The stochastic approximation algorithm (SAA), starting from the pioneering work of Robbins and Monro in the 1950s, has been successfully applied in systems and control, statistics, machine learning, and so forth. In this paper, we review the development of SAA in China, specifically, the stochastic approximation algorithm with expanding truncations (SAAWET) developed by Han-Fu Chen and his colleagues during the past 35 years. We first review the historical development of the centralized algorithm, including the probabilistic method (PM) and the ordinary differential equation (ODE) method for SAA and the trajectory-subsequence method for SAAWET. We then give an application example of SAAWET to recursive principal component analysis. We also introduce recent progress on SAAWET in a networked and distributed setting, named the distributed SAAWET (DSAAWET). Full article
(This article belongs to the Special Issue State-of-the-Art Algorithms and Their Applications in China)
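The expanding-truncation mechanism that distinguishes SAAWET from plain Robbins–Monro can be sketched in a few lines: run the usual update x_{k+1} = x_k + a_k y_k, but whenever the iterate escapes the current truncation bound, restart from the initial point and enlarge the bound. The bound schedule (powers of two), step sizes, and the noisy root-finding problem below are all illustrative assumptions.

```python
import random

def saawet_root(noisy_f, x0=0.0, steps=2000, seed=1):
    """Stochastic approximation with expanding truncations (sketch):
    x_{k+1} = x_k + (1/k) * y_k, with a restart to x0 and an enlarged
    truncation bound whenever the iterate escapes the current bound."""
    random.seed(seed)
    x, sigma = x0, 0                       # sigma counts truncations so far
    for k in range(1, steps + 1):
        cand = x + (1.0 / k) * noisy_f(x)
        if abs(cand) > 2 ** sigma:         # expanding bounds M_s = 2^s
            sigma += 1
            cand = x0                      # restart from the initial point
        x = cand
    return x

# Seek the root of f(x) = 3 - x observed with additive noise
est = saawet_root(lambda x: (3 - x) + random.uniform(-0.5, 0.5))
print(round(est, 2))  # close to 3
```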
14 pages, 2458 KiB  
Article
A PID Parameter Tuning Method Based on the Improved QUATRE Algorithm
by Zhuo-Qiang Zhao, Shi-Jian Liu and Jeng-Shyang Pan
Algorithms 2021, 14(6), 173; https://doi.org/10.3390/a14060173 - 31 May 2021
Cited by 11 | Viewed by 3370
Abstract
The PID (proportional–integral–derivative) controller is the most widely used control method in modern engineering because it has a simple algorithm structure and is easy to implement. However, faced with complex control objects, the traditional PID controller can no longer meet the expected requirements. The emergence of intelligent algorithms has made intelligent control widely applicable. The Quasi-Affine Transformation Evolutionary (QUATRE) algorithm is a new evolutionary algorithm with a strong global search ability compared with other intelligent algorithms. To improve its accuracy, an adaptive mechanism for adjusting control parameters online was introduced, and a linear population reduction strategy was adopted to improve performance. The standard QUATRE algorithm, the particle swarm optimization algorithm and the improved QUATRE algorithm were evaluated on test functions, and the experimental results verify the advantages of the improved QUATRE algorithm. The improved QUATRE algorithm was then applied to PID parameter tuning, and the simulation results were compared with PID parameter tuning based on the particle swarm optimization algorithm and the standard QUATRE algorithm. The experimental results show that the control effect of the improved QUATRE algorithm is more effective. Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
23 pages, 795 KiB  
Article
Storing Set Families More Compactly with Top ZDDs
by Kotaro Matsuda, Shuhei Denzumi and Kunihiko Sadakane
Algorithms 2021, 14(6), 172; https://doi.org/10.3390/a14060172 - 31 May 2021
Cited by 1 | Viewed by 2682
Abstract
Zero-suppressed Binary Decision Diagrams (ZDDs) are data structures for representing set families in compressed form. With ZDDs, many valuable operations on set families can be done in time polynomial in the ZDD size. In some cases, however, the ZDDs representing large set families become too large to store in main memory. This paper proposes the top ZDD, a novel representation of ZDDs that uses less space than existing ones. The top ZDD extends the top tree, which compresses trees, to compress directed acyclic graphs by sharing identical subgraphs. We prove that navigational operations on the top ZDD can be done in time poly-logarithmic in the ZDD size, and show that there exist set families for which the size of the top ZDD is exponentially smaller than that of the ZDD. We also show experimentally that our top ZDDs have smaller sizes than ZDDs for real data. Full article
(This article belongs to the Special Issue Algorithms and Data-Structures for Compressed Computation)
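To make the object being compressed concrete, here is a minimal hash-consed ZDD sketch: nodes (var, lo, hi) are kept in a unique table so identical subgraphs are shared, and the zero-suppression rule drops any node whose hi-edge points to the empty terminal. The paper's top ZDD is a further compression layered on top of a DAG like this one; nothing below is from the paper itself.

```python
# Minimal ZDD with hash-consing and the zero-suppression reduction rule.
# Terminal 0 = empty family, terminal 1 = the family containing only ∅.
class ZDD:
    def __init__(self):
        self.table = {}   # unique table: (var, lo, hi) -> node id
        self.nodes = {}   # node id -> (var, lo, hi)
        self.next_id = 2  # ids 0 and 1 are reserved for the terminals

    def mk(self, var, lo, hi):
        if hi == 0:       # zero-suppression: skip nodes with hi -> 0
            return lo
        key = (var, lo, hi)
        if key not in self.table:
            self.table[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.table[key]

    def from_family(self, family, var, universe):
        """Build the ZDD of a set of frozensets over {0..universe-1}."""
        if not family:
            return 0
        if var == universe:
            return 1      # only the empty set can remain here
        lo = self.from_family({s for s in family if var not in s},
                              var + 1, universe)
        hi = self.from_family({s - {var} for s in family if var in s},
                              var + 1, universe)
        return self.mk(var, lo, hi)

    def count(self, node):
        """Number of sets in the family rooted at node."""
        if node <= 1:
            return node
        _, lo, hi = self.nodes[node]
        return self.count(lo) + self.count(hi)
```

Because of the unique table, building the same family twice yields the same root id, which is the canonicity property that makes subgraph sharing (and hence top-tree-style compression) possible.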
39 pages, 32554 KiB  
Article
Adaptive Versioning in Transactional Memory Systems
by Pavan Poudel and Gokarna Sharma
Algorithms 2021, 14(6), 171; https://doi.org/10.3390/a14060171 - 31 May 2021
Viewed by 2809
Abstract
Transactional memory has been receiving much attention from both academia and industry. In transactional memory, program code is split into transactions, blocks of code that appear to execute atomically. Transactions are executed speculatively, and the speculative execution is supported through a data versioning mechanism. Lazy versioning makes aborts fast but penalizes commits, whereas eager versioning makes commits fast but penalizes aborts. However, whether to use eager or lazy versioning to execute those transactions is still a hotly debated topic. Lazy versioning seems appropriate for write-dominated workloads and transactions in high-contention scenarios, whereas eager versioning seems appropriate for read-dominated workloads and transactions in low-contention scenarios. This necessitates a priori knowledge of the workload and contention scenario to select an appropriate versioning method for better performance. In this article, we present an adaptive versioning approach, called Adaptive, that dynamically switches between eager and lazy versioning at runtime, without a priori knowledge of the workload and contention scenario but based on appropriate system parameters, so that the performance of a transactional memory system is always better than that obtained using either eager or lazy versioning individually. We provide Adaptive for both persistent and non-persistent transactional memory systems, using performance parameters appropriate for those systems. We implemented our adaptive versioning approach in the latest software transactional memory distribution, TinySTM, and extensively evaluated it through 5 micro-benchmarks and 8 complex benchmarks from the STAMP and STAMPEDE suites. The results show significant benefits of our approach. Specifically, our approach achieved performance improvements of up to 1.5× in execution time and up to 240× in number of aborts in persistent transactional memory systems, and up to 6.3× in execution time and up to 170× in number of aborts in non-persistent transactional memory systems. Full article
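The eager/lazy trade-off can be illustrated with a toy transaction object (this is an illustration only, not TinySTM's implementation): eager versioning writes in place and keeps an undo log, so commits are cheap and aborts must roll back; lazy versioning buffers writes, so aborts are cheap and commits must copy the buffer back.

```python
# Toy single-threaded illustration of eager vs. lazy data versioning.
class Transaction:
    def __init__(self, memory, mode):
        assert mode in ("eager", "lazy")
        self.memory, self.mode = memory, mode
        self.undo_log = {}    # eager: address -> old value
        self.write_buf = {}   # lazy: address -> pending new value

    def write(self, addr, value):
        if self.mode == "eager":
            self.undo_log.setdefault(addr, self.memory.get(addr))
            self.memory[addr] = value           # in-place update
        else:
            self.write_buf[addr] = value        # buffered update

    def read(self, addr):
        if self.mode == "lazy" and addr in self.write_buf:
            return self.write_buf[addr]         # read-your-own-write
        return self.memory.get(addr)

    def commit(self):
        if self.mode == "lazy":
            self.memory.update(self.write_buf)  # lazy pays at commit
        self.undo_log.clear()
        self.write_buf.clear()

    def abort(self):
        if self.mode == "eager":
            for addr, old in self.undo_log.items():
                self.memory[addr] = old         # eager pays at abort
        self.undo_log.clear()
        self.write_buf.clear()
```

An adaptive scheme in the spirit of the paper would observe runtime parameters such as the abort rate and pick the mode whose expensive path (abort for eager, commit for lazy) is executed least often.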
14 pages, 3208 KiB  
Article
A Greedy Heuristic for Maximizing the Lifetime of Wireless Sensor Networks Based on Disjoint Weighted Dominating Sets
by Samir Balbal, Salim Bouamama and Christian Blum
Algorithms 2021, 14(6), 170; https://doi.org/10.3390/a14060170 - 31 May 2021
Cited by 8 | Viewed by 3422
Abstract
Dominating sets are among the most well-studied concepts in graph theory, with many real-world applications, especially in the area of wireless sensor networks. One way to increase network lifetime in wireless sensor networks consists of assigning sensors to disjoint dominating node sets, which are then sequentially used by a sleep–wake cycling mechanism. This paper presents a greedy heuristic for solving a weighted version of the maximum disjoint dominating sets problem for energy conservation purposes in wireless sensor networks. In addition, an integer linear programming model is presented. Experimental results based on a large set of 640 problem instances show, first, that the integer linear programming model is only useful for small problem instances, and second, that our algorithm outperforms recent local search algorithms from the literature with respect to both solution quality and computation time. Full article
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)
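The underlying combinatorial task can be sketched with a plain unweighted greedy (the paper's heuristic additionally accounts for node weights such as battery levels, which this sketch omits): each round builds one dominating set from the still-unused nodes by repeatedly picking the node that dominates the most currently undominated vertices.

```python
# Greedy extraction of disjoint dominating sets from an undirected graph
# given as adjacency sets {node: set_of_neighbors}. Unweighted sketch.
def disjoint_dominating_sets(adj):
    unused = set(adj)            # nodes not yet assigned to any set
    rounds = []
    while True:
        undominated = set(adj)
        chosen = set()
        candidates = set(unused)
        while undominated:
            best = max(candidates,
                       key=lambda v: len(undominated & (adj[v] | {v})),
                       default=None)
            if best is None or not undominated & (adj[best] | {best}):
                break            # remaining nodes cannot be dominated
            chosen.add(best)
            candidates.discard(best)
            undominated -= adj[best] | {best}
        if undominated:
            return rounds        # no further disjoint dominating set
        unused -= chosen
        rounds.append(chosen)
```

In the sleep–wake interpretation, `rounds[t]` is the set of sensors kept awake during period t; every other sensor can sleep because each point of the network stays covered.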
22 pages, 524 KiB  
Article
Sorting by Multi-Cut Rearrangements
by Laurent Bulteau, Guillaume Fertin, Géraldine Jean and Christian Komusiewicz
Algorithms 2021, 14(6), 169; https://doi.org/10.3390/a14060169 - 29 May 2021
Cited by 4 | Viewed by 2515
Abstract
A multi-cut rearrangement of a string S is a string S′ obtained from S by an operation called a k-cut rearrangement, which consists of (1) cutting S at a given number k of places, making S the concatenated string X1·X2·X3·⋯·Xk·Xk+1, where X1 and Xk+1 are possibly empty, and (2) rearranging the Xi's so as to obtain S′ = Xπ(1)·Xπ(2)·Xπ(3)·⋯·Xπ(k+1), π being a permutation on {1, 2, …, k+1} satisfying π(1) = 1 and π(k+1) = k+1. Given two strings S and T built on the same multiset of characters from an alphabet Σ, the Sorting by Multi-Cut Rearrangements (SMCR) problem asks whether a given number of k-cut rearrangements suffices to transform S into T. The SMCR problem generalizes several classical genomic rearrangement problems, such as Sorting by Transpositions and Sorting by Block Interchanges. It may also model chromoanagenesis, a recently discovered phenomenon consisting of massive simultaneous rearrangements. In this paper, we study the SMCR problem from an algorithmic complexity viewpoint. More precisely, we investigate its classical and parameterized complexity, as well as its approximability, in the general case or when S and T are permutations. Full article
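The k-cut rearrangement operation itself is a few lines of code — the sketch below is a direct transcription of the definition, using 0-indexed blocks so the constraint that the outer blocks stay in place becomes `perm[0] == 0` and `perm[-1]` equal to the last block index.

```python
# Apply one k-cut rearrangement: cut s at the k positions in `cuts`
# (giving k+1 blocks), then reorder the blocks by `perm`, a permutation
# of block indices that must fix the first and last block.
def k_cut_rearrangement(s, cuts, perm):
    cuts = [0] + sorted(cuts) + [len(s)]
    blocks = [s[cuts[i]:cuts[i + 1]] for i in range(len(cuts) - 1)]
    assert perm[0] == 0 and perm[-1] == len(blocks) - 1, \
        "the outer blocks X1 and Xk+1 must stay in place"
    return "".join(blocks[i] for i in perm)
```

For example, a classical transposition (swapping two adjacent blocks) is exactly a 3-cut rearrangement, which is how SMCR generalizes Sorting by Transpositions.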
27 pages, 1377 KiB  
Article
Guaranteed Diversity and Optimality in Cost Function Network Based Computational Protein Design Methods
by Manon Ruffini, Jelena Vucinic, Simon de Givry, George Katsirelos, Sophie Barbe and Thomas Schiex
Algorithms 2021, 14(6), 168; https://doi.org/10.3390/a14060168 - 28 May 2021
Cited by 10 | Viewed by 4177
Abstract
Proteins are the main active molecules of life. Although natural proteins play many roles, as enzymes or antibodies for example, there is a need to go beyond the repertoire of natural proteins to produce engineered proteins that precisely meet application requirements in terms of function, stability, activity or other protein capacities. Computational Protein Design aims at designing new proteins from first principles, using full-atom molecular models. However, the size and complexity of proteins require approximations to make them amenable to energetic optimization queries. These approximations make the design process less reliable, and even a provably optimal solution may fail in practice. Expensive libraries of solutions are therefore generated and tested. In this paper, we explore the idea of generating libraries of provably diverse low-energy solutions by extending cost function network algorithms with dedicated automaton-based diversity constraints, on a large set of realistic full protein redesign problems. We observe that it is possible to generate provably diverse libraries in reasonable time and that the produced libraries do enhance the Native Sequence Recovery, a traditional measure of the reliability of design methods. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)
15 pages, 828 KiB  
Article
The Locating-Chromatic Number of Origami Graphs
by Agus Irawan, Asmiati Asmiati, La Zakaria and Kurnia Muludi
Algorithms 2021, 14(6), 167; https://doi.org/10.3390/a14060167 - 27 May 2021
Cited by 5 | Viewed by 2812
Abstract
The locating-chromatic number of a graph combines two graph concepts, namely vertex coloring and the partition dimension of a graph. The locating-chromatic number of a graph G, denoted by χL(G), is the smallest k such that G has a locating k-coloring. This article proposes a procedure for obtaining the locating-chromatic number of an origami graph and of its subdivision (one vertex on an outer edge) through two theorems with proofs. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
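The definition can be checked mechanically on small graphs: a locating k-coloring is a proper coloring in which every vertex has a distinct "color code", the vector of distances from the vertex to each color class. The brute-force checker below is a hypothetical helper for illustrating the definition, not the paper's constructive procedure.

```python
# Check whether `color` (dict vertex -> color) is a locating coloring of
# the connected graph `adj` (dict vertex -> set of neighbors).
from collections import deque

def is_locating_coloring(adj, color):
    # The coloring must be proper: no edge joins two same-colored vertices.
    if any(color[u] == color[v] for u in adj for v in adj[u]):
        return False
    classes = sorted(set(color.values()))

    def dist_from(v):
        # Breadth-first search distances from v (graph assumed connected).
        d = {v: 0}
        q = deque([v])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    codes = set()
    for v in adj:
        d = dist_from(v)
        # Color code: distance from v to the nearest vertex of each class.
        codes.add(tuple(min(d[u] for u in adj if color[u] == c)
                        for c in classes))
    return len(codes) == len(adj)  # all codes must be pairwise distinct
```

On the path P4, for instance, the 2-coloring 1-2-1-2 is proper but not locating (the two endpoints of a same-colored pair get identical codes), while a 3-coloring can be locating — consistent with χL(Pn) > 2.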
17 pages, 985 KiB  
Article
Efficient and Scalable Initialization of Partitioned Coupled Simulations with preCICE
by Amin Totounferoush, Frédéric Simonis, Benjamin Uekermann and Miriam Schulte
Algorithms 2021, 14(6), 166; https://doi.org/10.3390/a14060166 - 27 May 2021
Cited by 3 | Viewed by 3702
Abstract
preCICE is an open-source library that provides comprehensive functionality for coupling independent parallelized solver codes to establish a partitioned multi-physics, multi-code simulation environment. For data communication between the respective executables at runtime, it implements a peer-to-peer concept, which renders the computational cost of the coupling per time step negligible compared to the typical run time of the coupled codes. To initialize the peer-to-peer coupling, the mesh partitions of the respective solvers need to be compared to determine the point-to-point communication channels between the processes of both codes. This initialization effort can become a limiting factor if we either reach memory limits or have to re-initialize communication relations in every time step. In this contribution, we remove two remaining bottlenecks: (i) we base the neighborhood search between mesh entities of two solvers on a tree data structure to avoid quadratic complexity, and (ii) we replace the sequential gather-scatter comparison of both mesh partitions by a two-level approach that first compares bounding boxes around mesh partitions in a sequential manner, subsequently establishes pairwise communication between processes of the two solvers, and finally compares mesh partitions between connected processes in parallel. We show that the two-level initialization method is five times faster than the old one-level scheme on 24,567 CPU cores using a mesh with 628,898 vertices. In addition, the two-level scheme can handle much larger computational meshes, since the central mesh communication of the one-level scheme is replaced with a fully point-to-point mesh communication scheme. Full article
(This article belongs to the Special Issue High-Performance Computing Algorithms and Their Applications 2021)
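The first, coarse level of the described initialization can be sketched as follows: compare axis-aligned bounding boxes of the per-process mesh partitions to decide which process pairs need the expensive vertex-level comparison at all. The function names below are illustrative, not the preCICE API, and the sketch is 2D for brevity.

```python
# Coarse bounding-box filter for pairing mesh partitions of two solvers.
def bounding_box(vertices):
    """Axis-aligned bounding box of a list of (x, y) vertices."""
    xs, ys = zip(*vertices)
    return (min(xs), max(xs)), (min(ys), max(ys))

def boxes_overlap(a, b, margin=0.0):
    """Interval-overlap test per dimension, with an optional safety margin."""
    return all(lo1 <= hi2 + margin and lo2 <= hi1 + margin
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def candidate_pairs(parts_a, parts_b):
    """parts_*: one vertex list per process of each solver. Returns the
    (i, j) process pairs whose partitions might share coupling vertices."""
    boxes_a = [bounding_box(p) for p in parts_a]
    boxes_b = [bounding_box(p) for p in parts_b]
    return [(i, j)
            for i, ba in enumerate(boxes_a)
            for j, bb in enumerate(boxes_b)
            if boxes_overlap(ba, bb)]
```

Only the surviving pairs then perform the detailed mesh comparison in parallel, which is what removes the central gather-scatter bottleneck.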
18 pages, 1180 KiB  
Article
An Efficient Convolutional Blind Source Separation Algorithm for Speech Signals under Chaotic Masking
by Shiyu Guo, Mengna Shi, Yanqi Zhou, Jiayin Yu and Erfu Wang
Algorithms 2021, 14(6), 165; https://doi.org/10.3390/a14060165 - 26 May 2021
Cited by 5 | Viewed by 2691
Abstract
Since speech is a primary medium of information transmission, it is particularly important to ensure the security of speech communication. In the wireless transmission of speech signals over complex multipath channels, separating or extracting the source signal from the convolutional mixture is a crucial step in recovering the source information. In this paper, chaotic masking technology is used to guarantee the transmission safety of speech signals, and a fast fixed-point independent vector analysis algorithm is used to solve the problem of convolutional blind source separation. First, chaotic masking is performed before the speech signal is sent, and the convolutional mixing of multiple signals is simulated by an impulse response filter. Then, the observed signal is transformed to the frequency domain by the short-time Fourier transform, and instantaneous blind source separation is performed using a fast fixed-point independent vector analysis algorithm. The algorithm preserves the higher-order statistical correlation between frequencies to solve the permutation ambiguity problem of independent component analysis. Simulation experiments show that this algorithm can efficiently complete the blind extraction of convolutional signals, and the quality of the recovered speech signals is better. It provides a solution for the secure transmission and effective separation of speech signals in multipath transmission channels. Full article
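The chaotic-masking step alone is easy to illustrate (the separation stage, frequency-domain independent vector analysis, is far beyond a sketch): a logistic-map sequence, reproducible from a shared secret seed, is added to the signal before transmission and subtracted by the receiver. This is a generic textbook scheme, not the paper's exact masking construction.

```python
# Additive chaotic masking with a logistic map; both ends regenerate the
# same chaotic sequence from the shared seed x0.
def logistic_sequence(x0, n, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)   # logistic map in its chaotic regime
        xs.append(x)
    return xs

def mask(signal, seed):
    chaos = logistic_sequence(seed, len(signal))
    return [s + c for s, c in zip(signal, chaos)]

def unmask(masked, seed):
    chaos = logistic_sequence(seed, len(masked))
    return [m - c for m, c in zip(masked, chaos)]
```

The security intuition is sensitivity to initial conditions: a receiver with even a slightly wrong seed regenerates a diverging chaotic sequence and recovers garbage.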
19 pages, 3269 KiB  
Article
A Lower Bound for the Query Phase of Contraction Hierarchies and Hub Labels and a Provably Optimal Instance-Based Schema
by Tobias Rupp and Stefan Funke
Algorithms 2021, 14(6), 164; https://doi.org/10.3390/a14060164 - 25 May 2021
Cited by 5 | Viewed by 2651
Abstract
We prove an Ω(√n) lower bound on the query time for contraction hierarchies (CH) as well as hub labels, two popular speed-up techniques for shortest path routing. Our construction is based on a graph family not too far from subgraphs that occur in real-world road networks; in particular, it is planar and has bounded degree. Additionally, we borrow ideas from our lower bound proof to come up with instance-based lower bounds for concrete road network instances of moderate size, reaching up to 96% of an upper bound given by a constructed CH. For a variant of our instance-based schema applied to some special graph classes, we can even show matching upper and lower bounds. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From CSR 2020)
13 pages, 1461 KiB  
Article
A New Hyper-Parameter Optimization Method for Power Load Forecast Based on Recurrent Neural Networks
by Yaru Li, Yulai Zhang and Yongping Cai
Algorithms 2021, 14(6), 163; https://doi.org/10.3390/a14060163 - 24 May 2021
Cited by 17 | Viewed by 3668
Abstract
The selection of hyper-parameters plays a critical role in prediction tasks based on recurrent neural networks (RNN). Traditionally, the hyper-parameters of machine learning models are selected by simulation as well as human experience. In recent years, multiple algorithms based on Bayesian optimization (BO) have been developed to determine the optimal values of the hyper-parameters; in most of these methods, gradients are required to be calculated. In this work, particle swarm optimization (PSO) is used under the BO framework to develop a new method for hyper-parameter optimization. The proposed algorithm (BO-PSO) is free of gradient calculation, and the particles can naturally be optimized in parallel. Thus, the computational cost is effectively reduced, which means better hyper-parameters can be obtained for the same amount of computation. Experiments are performed on real-world power load data, where the proposed method outperforms the existing state-of-the-art algorithms, BO with limited-memory BFGS with bounds (BO-L-BFGS-B) and BO with truncated Newton (BO-TNC), in terms of prediction accuracy. The prediction errors of the different models show that BO-PSO is an effective hyper-parameter optimization method. Full article
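The gradient-free inner optimizer can be sketched as a standard PSO loop. The sketch below minimizes a toy quadratic rather than a BO acquisition function over RNN hyper-parameters, and all parameter values are conventional defaults, not the paper's settings.

```python
# Standard particle swarm optimization: no gradients, and each particle's
# objective evaluation is independent, so the inner loop parallelizes.
import random

def pso_minimize(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Under the BO framework, `f` would be the acquisition function, so each BO iteration needs only function evaluations of the surrogate, never its gradient.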
20 pages, 3379 KiB  
Article
Community Structure and Systemic Risk of Bank Correlation Networks Based on the U.S. Financial Crisis in 2008
by Yajing Huang and Feng Chen
Algorithms 2021, 14(6), 162; https://doi.org/10.3390/a14060162 - 22 May 2021
Cited by 5 | Viewed by 2670
Abstract
This paper studies the community structure of the bank correlation network in the financial system and analyzes the systemic risk of the community sub-networks. Based on the balance sheet data of U.S. commercial banks from 2008, we establish a bank correlation network for each state according to the banks' investment portfolio ratios. First, we analyze the community structure of each bank correlation network and verify the effectiveness of the community division from the point of view of node importance. Then, combining the data on banks that failed after the 2008 financial crisis, we find that for small communities, financial systemic risk exhibits considerable volatility and is quite likely to reach an extremely high level. As the number of nodes in a community increases, systemic risk tends towards a stable and low level. Furthermore, if only communities with failed banks are considered, regression analysis shows that systemic risk and community size almost follow a power-law trend. These results reveal the importance of supervising the banking system at the level of community sub-networks, which has certain guiding significance for the stability of the financial system. Full article
(This article belongs to the Special Issue Network Science: Algorithms and Applications)
26 pages, 1892 KiB  
Article
Reversed Lempel–Ziv Factorization with Suffix Trees
by Dominik Köppl
Algorithms 2021, 14(6), 161; https://doi.org/10.3390/a14060161 - 21 May 2021
Cited by 1 | Viewed by 3161
Abstract
We present linear-time algorithms computing the reversed Lempel–Ziv factorization [Kolpakov and Kucherov, TCS’09] within the space bounds of two different suffix tree representations. We can adapt these algorithms to compute the longest previous non-overlapping reverse factor table [Crochemore et al., JDA’12] within the same space but pay a multiplicative logarithmic time penalty. Full article
(This article belongs to the Special Issue Combinatorial Methods for String Processing)
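To give a feel for the object being computed, here is a naive quadratic-time sketch of a reversed LZ-style factorization in a simple non-overlapping variant: each factor is the longest prefix of the remaining text whose reversal occurs in the already-processed part, or a single fresh character. This variant is chosen for brevity; the paper's algorithms compute the factorization in linear time within suffix-tree space bounds.

```python
# Naive reversed-LZ-style factorization (non-overlapping variant).
def reversed_lz(text):
    factors = []
    i = 0
    while i < len(text):
        length = 0
        # Extend while the reversal of text[i : i+length+1] occurs in
        # the already-processed prefix text[:i].
        while (i + length < len(text)
               and text[i:i + length + 1][::-1] in text[:i]):
            length += 1
        if length == 0:
            length = 1            # no reverse occurrence: fresh character
        factors.append(text[i:i + length])
        i += length
    return factors
```

For "abba", the factors are "a", "b", "ba": the last factor's reversal "ab" occurs in the prefix "ab" already emitted. Concatenating the factors always reconstructs the input.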
13 pages, 1300 KiB  
Article
Validation of Automated Chromosome Recovery in the Reconstruction of Ancestral Gene Order
by Qiaoji Xu, Lingling Jin, James H. Leebens-Mack and David Sankoff
Algorithms 2021, 14(6), 160; https://doi.org/10.3390/a14060160 - 21 May 2021
Cited by 6 | Viewed by 2589
Abstract
The RACCROCHE pipeline reconstructs ancestral gene orders and the chromosomal contents of the ancestral genomes at all internal vertices of a phylogenetic tree. The strategy is to accumulate a very large number of generalized adjacencies, phylogenetically justified for each ancestor, to produce long ancestral contigs through maximum weight matching. It constructs chromosomes by counting the frequencies of ancestral contig co-occurrences on the extant genomes, clustering these for each ancestor and ordering them. The main objective of this paper is to closely simulate the evolutionary process giving rise to the gene content and order of a set of extant genomes (six distantly related monocots), and to assess to what extent an updated version of RACCROCHE can recover the artificial ancestral genome at the root of the phylogenetic tree relating the simulated genomes. Full article
(This article belongs to the Special Issue Algorithms in Computational Biology)
12 pages, 3024 KiB  
Article
Circle-U-Net: An Efficient Architecture for Semantic Segmentation
by Feng Sun, Ajith Kumar V, Guanci Yang, Ansi Zhang and Yiyun Zhang
Algorithms 2021, 14(6), 159; https://doi.org/10.3390/a14060159 - 21 May 2021
Cited by 13 | Viewed by 5069
Abstract
State-of-the-art semantic segmentation methods rely too heavily on complicated deep networks and thus cannot be trained efficiently. This paper introduces a novel Circle-U-Net architecture that exceeds the original U-Net on several benchmarks. The proposed model includes circle connect layers, which are the backbone of the ResUNet-a architecture. The model possesses a contracting part with residual bottleneck and circle connect layers that captures context, and an expanding path with sampling and merging layers for pixel-wise localization. The experimental results show that the proposed Circle-U-Net achieves improvements of 5.6676% in accuracy and 2.1587% in IoU (intersection over union) and detects 67% more classes than U-Net, which is better than current results. Full article
(This article belongs to the Topic Medical Image Analysis)