Algorithms, Volume 13, Issue 5 (May 2020) – 25 articles

Cover Story: Civil engineering applications are often characterized by large uncertainties regarding the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin finite element method. The uncertain material parameter can then be expressed as a random field represented by means of a Karhunen–Loève expansion. Computation of the stochastic responses remains very costly, even when state-of-the-art multilevel Monte Carlo is used. A significant cost reduction can be achieved by using p-refined multilevel quasi-Monte Carlo (p-MLQMC). This novel method is based on a variance reduction scheme by employing a hierarchical p-refinement discretization of the problem, which is then combined with a rank-1 lattice rule. In this work, we developed algorithms for the p-MLQMC method and benchmarked them on two model problems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
17 pages, 7760 KiB  
Article
Uncertainty Quantification Approach on Numerical Simulation for Supersonic Jets Performance
by Carlo Cravero, Davide De Domenico and Andrea Ottonello
Algorithms 2020, 13(5), 130; https://doi.org/10.3390/a13050130 - 22 May 2020
Cited by 5 | Viewed by 3817
Abstract
One of the main issues addressed in any engineering design problem is to predict the performance of the component or system as accurately and realistically as possible, taking into account the variability of operating conditions or the uncertainty on input data (boundary conditions or geometry tolerance). In this paper, the propagation of uncertainty on boundary conditions through a numerical model of a supersonic nozzle is investigated. The evaluation of the statistics of the problem response functions is performed following ‘Surrogate-Based Uncertainty Quantification’. The approach involves: (a) the generation of a response surface starting from a design of experiments (DoE) in order to approximate the convergent–divergent ‘physical’ model (expensive to simulate), and (b) the application of the UQ technique, based on Latin hypercube sampling (LHS), to the meta-model. Probability Density Functions are introduced for the inlet boundary conditions in order to quantify their effects on the output nozzle performance. The physical problem considered is very relevant for testing the UQ approach because of its high non-linearity: a small perturbation of the input data can drive the solution to a completely different output condition. The CFD simulations and the Uncertainty Quantification were performed by coupling the open source Dakota platform with the ANSYS Fluent® CFD commercial software; the process is automated through scripting. The procedure adopted in this work demonstrates the applicability of advanced simulation techniques (such as UQ analysis) to industrial technical problems. Moreover, the analysis highlights the practical use of uncertainty quantification techniques in predicting the performance of a nozzle design affected by off-design conditions with fluid-dynamic complexity due to strong nonlinearity. Full article
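As a rough illustration of the surrogate-based UQ workflow summarized above, the sketch below replaces the expensive CFD nozzle model with a made-up analytic response (toy_nozzle_response), trains a Gaussian-process surrogate on a Latin hypercube design of experiments, and then propagates assumed inlet PDFs through the surrogate; the bounds, distributions and sample counts are illustrative assumptions, not the authors' setup. In the paper, the DoE evaluations would instead come from Dakota-driven Fluent simulations.

```python
import numpy as np
from scipy.stats import norm, qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_nozzle_response(x):
    """Made-up stand-in for the expensive CFD model: response = f(inlet pressure, inlet temperature)."""
    p0, t0 = x[:, 0], x[:, 1]
    return (p0 / 1e5) ** 0.7 * np.sqrt(t0 / 300.0)

# (a) Design of experiments on the input box and training of the response surface (meta-model).
lo, hi = [2e5, 280.0], [8e5, 400.0]
doe = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(32), lo, hi)
kernel = ConstantKernel() * RBF(length_scale=[1e5, 50.0], length_scale_bounds=(1e-2, 1e7))
surrogate = GaussianProcessRegressor(kernel, normalize_y=True)
surrogate.fit(doe, toy_nozzle_response(doe))

# (b) UQ on the meta-model: propagate assumed inlet PDFs through the surrogate via LHS.
u = qmc.LatinHypercube(d=2, seed=1).random(10_000)
samples = np.column_stack([
    norm(loc=5e5, scale=3e4).ppf(u[:, 0]),     # assumed PDF of the inlet total pressure
    norm(loc=330.0, scale=10.0).ppf(u[:, 1]),  # assumed PDF of the inlet total temperature
])
y = surrogate.predict(samples)
print(f"performance estimate: mean = {y.mean():.4f}, std = {y.std():.4f}")
```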

16 pages, 22626 KiB  
Article
Image Resolution Enhancement of Highly Compressively Sensed CT/PET Signals
by Krzysztof Malczewski
Algorithms 2020, 13(5), 129; https://doi.org/10.3390/a13050129 - 21 May 2020
Cited by 5 | Viewed by 3605
Abstract
One of the most challenging aspects of medical modalities such as Computed Tomography (CT), as well as hybrid techniques such as CT/PET (Computed Tomography/Positron Emission Tomography) and PET/MRI, is finding a balance between examination time, radiation dose, and image quality. The need for a dense sampling grid is associated with two major factors: image resolution enhancement, which strengthens human perception, and image feature interpretation. All these aspects make unsupervised image processing much easier. The presented algorithm employs super-resolution reconstruction with high-accuracy motion-field estimation at its core for the enhancement of Computed Tomography/Positron Emission Tomography (CT/PET) images. The suggested method starts by processing compressively sensed input signals. This paper shows that it is possible to achieve higher image resolution while keeping the same radiation dose. The purpose of this paper is to propose a highly effective CT/PET image reconstruction strategy, allowing for simultaneous resolution enhancement and scanning-time minimisation. The algorithm aims to overcome two major obstacles, image resolution limitation and reconstruction time efficiency, by combining a highly sparse Ridgelet-analysis-based sampling pattern and PET signal sensing with super-resolution (SR) image enhancement. Given the diverse nature of Computed Tomography data, the applied Ridgelet analysis proved efficient in reducing acquisition times while maintaining satisfying scan quality. This paper presents a super-resolution image enhancement algorithm designed for handling highly compressively sensed raw data from hybrid CT/PET scanners. The presented technique improves image resolution while reducing motion artefacts and keeping scanning times low. Full article

16 pages, 424 KiB  
Article
Change-Point Detection in Autoregressive Processes via the Cross-Entropy Method
by Lijing Ma and Georgy Sofronov
Algorithms 2020, 13(5), 128; https://doi.org/10.3390/a13050128 - 20 May 2020
Cited by 5 | Viewed by 4202
Abstract
It is very often the case that at some moment a time series process abruptly changes its underlying structure and, therefore, it is very important to accurately detect such change-points. In this problem, which is called a change-point (or break-point) detection problem, we need to find a method that divides the original nonstationary time series into piecewise stationary segments. In this paper, we develop a flexible method to estimate the unknown number and the locations of change-points in autoregressive time series. In order to find the optimal value of a performance function, which is based on the Minimum Description Length principle, we develop a Cross-Entropy algorithm for the combinatorial optimization problem. Our numerical experiments show that the proposed approach is very efficient in detecting multiple change-points when the underlying process has moderate to substantial variations in the mean and the autocorrelation coefficient. We also apply the proposed method to real data of the daily AUD/CNY exchange rate series from 2 January 2018 to 24 March 2020. Full article
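A simplified sketch of the cross-entropy idea for change-point placement on a synthetic signal: candidate change-point sets are sampled from per-position Bernoulli probabilities, scored with a sum-of-squared-errors cost plus a per-change-point penalty (a crude stand-in for the paper's MDL-based performance function and autoregressive segment models), and the probabilities are updated from the elite samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal with mean shifts at positions 100 and 250.
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 150), rng.normal(-1, 1, 100)])
n = len(y)

def cost(cps):
    """Within-segment squared error plus a simple per-change-point penalty
    (a crude stand-in for the MDL criterion used in the paper)."""
    edges = [0, *sorted(cps), n]
    sse = sum(((y[a:b] - y[a:b].mean()) ** 2).sum() for a, b in zip(edges[:-1], edges[1:]))
    return sse + 2.0 * np.log(n) * len(cps)

# Cross-entropy method over subsets: one Bernoulli inclusion probability per interior position.
p = np.full(n - 1, 5.0 / n)                 # start with ~5 expected change-points
n_samples, n_elite, smooth = 200, 20, 0.7
for _ in range(50):
    candidates = [np.flatnonzero(rng.random(n - 1) < p) + 1 for _ in range(n_samples)]
    scores = np.array([cost(c) for c in candidates])
    elite = [candidates[i] for i in np.argsort(scores)[:n_elite]]
    freq = np.zeros(n - 1)
    for c in elite:
        freq[c - 1] += 1.0 / n_elite        # inclusion frequency among elite samples
    p = smooth * freq + (1 - smooth) * p    # smoothed probability update

print("estimated change-points:", np.flatnonzero(p > 0.5) + 1)
```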

18 pages, 8816 KiB  
Article
The Effect of Different Deep Network Architectures upon CNN-Based Gaze Tracking
by Hui-Hui Chen, Bor-Jiunn Hwang, Jung-Shyr Wu and Po-Ting Liu
Algorithms 2020, 13(5), 127; https://doi.org/10.3390/a13050127 - 19 May 2020
Cited by 5 | Viewed by 3738
Abstract
In this paper, we explore the effect of using different convolutional layers, batch normalization and a global average pooling layer upon a convolutional neural network (CNN) based gaze tracking system. A novel method is proposed to label the participant's face images with gaze points retrieved from an eye tracker while watching videos, in order to build a training dataset that is closer to human visual behavior. The participants can swing their heads freely; therefore, the most real and natural images can be obtained without too many restrictions. The labeled data are classified according to the coordinates of gaze and the areas of interest on the screen. Varied network architectures are then applied to estimate and compare the effects of the number of convolutional layers, batch normalization (BN) and the use of a global average pooling (GAP) layer in place of the fully connected layer. Three schemes, using the single-eye image, the double-eye image and the facial image, with data augmentation, are fed into the neural network for training and evaluation. The input image of the eye or face for an eye tracking system is mostly a small-sized image with relatively few features. The results show that BN and GAP are helpful in overcoming this problem when training the models and in reducing the number of network parameters. The accuracy is significantly improved when GAP and BN are used together. Overall, the face scheme has the highest accuracy, 0.883, when BN and GAP are used together. Additionally, compared with the configuration that uses a fully connected layer of size 512, the number of parameters is reduced by less than 50% and the accuracy is improved by about 2%. A detection accuracy comparison of our model with the existing George and Routray methods shows that our proposed method achieves better prediction accuracy by more than 6%. Full article
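A minimal PyTorch sketch of the architectural choice examined above: convolutional blocks with batch normalization, followed by global average pooling in place of a large fully connected layer. The layer sizes, input resolution and the nine-class gaze grid are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    """Small CNN with BN after each convolution and GAP replacing the fully connected stack."""
    def __init__(self, n_classes: int = 9):        # e.g. a 3x3 grid of screen regions (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.classifier = nn.Linear(128, n_classes)  # only 128 * n_classes weights remain here

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

model = GazeCNN()
logits = model(torch.randn(4, 3, 64, 64))   # a batch of small eye/face crops
print(logits.shape)                          # torch.Size([4, 9])
```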

14 pages, 4959 KiB  
Article
PUB-SalNet: A Pre-Trained Unsupervised Self-Aware Backpropagation Network for Biomedical Salient Segmentation
by Feiyang Chen, Ying Jiang, Xiangrui Zeng, Jing Zhang, Xin Gao and Min Xu
Algorithms 2020, 13(5), 126; https://doi.org/10.3390/a13050126 - 19 May 2020
Cited by 3 | Viewed by 5105
Abstract
Salient segmentation is a critical step in biomedical image analysis, aiming to cut out regions that are most interesting to humans. Recently, supervised methods have achieved promising results in biomedical areas, but they depend on annotated training data sets, which require labor and proficiency in the related background knowledge. In contrast, unsupervised learning makes data-driven decisions by obtaining insights directly from the data themselves. In this paper, we propose a completely unsupervised self-aware network based on pre-training and attentional backpropagation for biomedical salient segmentation, named PUB-SalNet. Firstly, we aggregate a new biomedical data set from several simulated Cellular Electron Cryo-Tomography (CECT) data sets featuring rich salient objects, different SNR settings, and various resolutions, which is called SalSeg-CECT. Based on the SalSeg-CECT data set, we then pre-train a model specially designed for biomedical tasks as a backbone module to initialize network parameters. Next, we present a U-SalNet network to learn to selectively attend to salient objects. It includes two types of attention modules to facilitate learning saliency through global contrast and local similarity. Lastly, we jointly refine the salient regions together with feature representations from U-SalNet, with the parameters updated by self-aware attentional backpropagation. We apply PUB-SalNet to the analysis of 2D simulated and real images and achieve state-of-the-art performance on simulated biomedical data sets. Furthermore, our proposed PUB-SalNet can be easily extended to 3D images. The experimental results on the 2D and 3D data sets also demonstrate the generalization ability and robustness of our method. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)

33 pages, 464 KiB  
Review
Moving Deep Learning to the Edge
by Mário P. Véstias, Rui Policarpo Duarte, José T. de Sousa and Horácio C. Neto
Algorithms 2020, 13(5), 125; https://doi.org/10.3390/a13050125 - 18 May 2020
Cited by 55 | Viewed by 6929
Abstract
Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for edge computing deep learning algorithms. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

30 pages, 853 KiB  
Article
A Novel Method for Inference of Chemical Compounds of Cycle Index Two with Desired Properties Based on Artificial Neural Networks and Integer Programming
by Jianshen Zhu, Chenxi Wang, Aleksandar Shurbevski, Hiroshi Nagamochi and Tatsuya Akutsu
Algorithms 2020, 13(5), 124; https://doi.org/10.3390/a13050124 - 18 May 2020
Cited by 9 | Viewed by 4343
Abstract
Inference of chemical compounds with desired properties is important for drug design, chemo-informatics, and bioinformatics, to which various algorithmic and machine learning techniques have been applied. Recently, a novel method has been proposed for this inference problem using both artificial neural networks (ANN) and mixed integer linear programming (MILP). This method consists of the training phase and the inverse prediction phase. In the training phase, an ANN is trained so that the output of the ANN takes a value nearly equal to a given chemical property for each sample. In the inverse prediction phase, a chemical structure is inferred using MILP and enumeration so that the structure can have a desired output value for the trained ANN. However, the framework has been applied only to the case of acyclic and monocyclic chemical compounds so far. In this paper, we significantly extend the framework and present a new method for the inference problem for rank-2 chemical compounds (chemical graphs with cycle index 2). The results of computational experiments using such chemical properties as octanol/water partition coefficient, melting point, and boiling point suggest that the proposed method is much more useful than the previous method. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)

34 pages, 752 KiB  
Article
Mining Sequential Patterns with VC-Dimension and Rademacher Complexity
by Diego Santoro, Andrea Tonon and Fabio Vandin
Algorithms 2020, 13(5), 123; https://doi.org/10.3390/a13050123 - 18 May 2020
Cited by 13 | Viewed by 4787
Abstract
Sequential pattern mining is a fundamental data mining task with application in several domains. We study two variants of this task—the first is the extraction of frequent sequential patterns, whose frequency in a dataset of sequential transactions is higher than a user-provided threshold; the second is the mining of true frequent sequential patterns, which appear with probability above a user-defined threshold in transactions drawn from the generative process underlying the data. We present the first sampling-based algorithm to mine, with high confidence, a rigorous approximation of the frequent sequential patterns from massive datasets. We also present the first algorithms to mine approximations of the true frequent sequential patterns with rigorous guarantees on the quality of the output. Our algorithms are based on novel applications of Vapnik-Chervonenkis dimension and Rademacher complexity, advanced tools from statistical learning theory, to sequential pattern mining. Our extensive experimental evaluation shows that our algorithms provide high-quality approximations for both problems we consider. Full article
(This article belongs to the Special Issue Big Data Algorithmics)
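Sampling-based guarantees of this kind typically rest on ε-approximation bounds of the form m ≥ (c/ε²)(d + ln(1/δ)), with d a bound on the VC-dimension of the range space. The helper below only illustrates that classical sample-size calculation (with the commonly quoted constant c = 0.5); the paper's actual bounds for sequential patterns, including the Rademacher-complexity variants, differ in their constants and refinements.

```python
import math

def eps_approx_sample_size(d: int, eps: float, delta: float, c: float = 0.5) -> int:
    """Sample size sufficient for an eps-approximation of a range space of
    VC-dimension at most d, with probability at least 1 - delta (classical bound)."""
    return math.ceil((c / eps ** 2) * (d + math.log(1.0 / delta)))

# Example: VC-dimension bound 10, approximation error 0.01, failure probability 0.1.
print(eps_approx_sample_size(d=10, eps=0.01, delta=0.1))   # about 61,513 sampled transactions
```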

18 pages, 865 KiB  
Article
Incremental FPT Delay
by Arne Meier
Algorithms 2020, 13(5), 122; https://doi.org/10.3390/a13050122 - 15 May 2020
Cited by 1 | Viewed by 3223
Abstract
In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also to P being equal to NP intersected with coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
21 pages, 433 KiB  
Article
Ensemble Deep Learning Models for Forecasting Cryptocurrency Time-Series
by Ioannis E. Livieris, Emmanuel Pintelas, Stavros Stavroyiannis and Panagiotis Pintelas
Algorithms 2020, 13(5), 121; https://doi.org/10.3390/a13050121 - 10 May 2020
Cited by 89 | Viewed by 12529
Abstract
Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies, ensemble-averaging, bagging and stacking, with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, which were composed of combinations of long short-term memory (LSTM), Bi-directional LSTM and convolutional layers. The ensemble models were evaluated on predicting the cryptocurrency price for the following hour (regression) and on predicting whether the price in the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions are evaluated by examining the autocorrelation of the errors. Our detailed experimental analysis indicates that ensemble learning and deep learning can be mutually beneficial, leading to strong, stable, and reliable forecasting models. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
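A small numpy sketch of the simplest of the three strategies, ensemble-averaging: component predictions are averaged for the regression task and majority-voted for the up/down classification task. The prediction arrays are placeholders standing in for outputs of trained LSTM/BiLSTM/convolutional component learners.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder predictions from three trained component learners for 5 future hours.
price_preds = rng.normal(loc=9000.0, scale=30.0, size=(3, 5))   # regression outputs
move_preds = rng.integers(0, 2, size=(3, 5))                    # 1 = price up, 0 = price down

# Ensemble-averaging for the regression task.
ensemble_price = price_preds.mean(axis=0)

# Majority vote for the classification task (ties broken toward "up" here).
ensemble_move = (move_preds.mean(axis=0) >= 0.5).astype(int)

print("averaged price forecasts:", np.round(ensemble_price, 2))
print("majority-vote direction :", ensemble_move)
```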

17 pages, 3114 KiB  
Article
A Novel Data-Driven Magnetic Resonance Spectroscopy Signal Analysis Framework to Quantify Metabolite Concentration
by Omid Bazgir, Eric Walden, Brian Nutter and Sunanda Mitra
Algorithms 2020, 13(5), 120; https://doi.org/10.3390/a13050120 - 10 May 2020
Viewed by 4465
Abstract
Developing tools for the precise quantification of brain metabolites using magnetic resonance spectroscopy (MRS) is an active area of research with broad application in non-invasive neurodegenerative disease studies. These tools are mainly developed based on black-box (data-driven) or basis-set approaches. In this study, we offer a multi-stage framework that integrates data-driven and basis-set methods. As the data-driven stage, we first use a truncated Hankel singular value decomposition (HSVD) to decompose free induction decay (FID) signals into single-tone FIDs. Subsequently, as the basis-set stage, the single-tone FIDs are clustered into basis sets using K-means initialized with prior knowledge of the metabolites. The generated basis sets are fitted to the magnetic resonance (MR) spectra using linearly constrained least squares, and the metabolite concentrations are then calculated. Prior to our proposed multi-stage approach, a sequence of preprocessing blocks developed in house (water peak removal, phase correction, and baseline correction) is applied. Full article
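A compact sketch of the data-driven stage described above: a truncated Hankel SVD of a synthetic two-peak FID recovers the single-tone components' frequencies and dampings via the shift invariance of the signal subspace (HSVD/HTLS-style). The synthetic FID and model order are illustrative; the in-house preprocessing and the K-means basis-set stage are omitted.

```python
import numpy as np
from scipy.linalg import hankel

# Synthetic FID: two damped complex exponentials plus noise (illustrative only).
n, dt = 512, 1e-3
t = np.arange(n) * dt
fid = (1.0 * np.exp((-8 + 2j * np.pi * 55) * t)
       + 0.6 * np.exp((-12 + 2j * np.pi * 130) * t)
       + 0.01 * (np.random.default_rng(0).standard_normal(n)
                 + 1j * np.random.default_rng(1).standard_normal(n)))

# Truncated Hankel SVD: keep the K dominant left singular vectors.
K = 2                                             # assumed model order (number of single-tone FIDs)
H = hankel(fid[: n // 2], fid[n // 2 - 1:])
U, s, Vh = np.linalg.svd(H, full_matrices=False)
Uk = U[:, :K]

# Shift invariance: solve Uk[:-1] @ Z = Uk[1:]; the eigenvalues of Z are the signal poles.
Z = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)[0]
poles = np.linalg.eigvals(Z)
freqs = np.angle(poles) / (2 * np.pi * dt)        # Hz
dampings = -np.log(np.abs(poles)) / dt            # 1/s

print("estimated frequencies (Hz):", np.round(np.sort(freqs), 1))
print("estimated dampings (1/s)  :", np.round(np.sort(dampings), 1))
```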

16 pages, 2048 KiB  
Article
Forecasting Electricity Prices: A Machine Learning Approach
by Mauro Castelli, Aleš Groznik and Aleš Popovič
Algorithms 2020, 13(5), 119; https://doi.org/10.3390/a13050119 - 8 May 2020
Cited by 11 | Viewed by 6283
Abstract
The electricity market is a complex, evolutionary, and dynamic environment. Forecasting electricity prices is an important issue for all electricity market participants. In this study, we shed light on how to improve electricity price forecasting accuracy through the use of a machine learning technique—namely, a novel genetic programming approach. Drawing on empirical data from the largest EU energy markets, we propose a forecasting model that considers variables related to weather conditions, oil prices, and CO2 coupons and predicts energy prices 24 h ahead. We show that the proposed model provides more accurate predictions of future electricity prices than existing prediction methods. Our important findings will assist the electricity market participants in forecasting future price movements. Full article
(This article belongs to the Special Issue Genetic Programming)
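As a rough sketch of genetic-programming regression for price forecasting, the snippet below uses the third-party gplearn library (an assumption; the paper does not name its implementation) on synthetic placeholder features standing in for weather, oil-price and CO2-allowance variables.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor   # assumed third-party GP library

rng = np.random.default_rng(0)

# Placeholder features: [temperature, wind, oil price, CO2 allowance price, hour of day].
X = rng.normal(size=(2000, 5))
# Placeholder 24-h-ahead price with a nonlinear dependence on the features (synthetic).
y = 40 + 5 * X[:, 0] - 3 * X[:, 1] * X[:, 2] + 2 * np.abs(X[:, 3]) + rng.normal(0, 1, 2000)

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

gp = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "abs"),
    parsimony_coefficient=0.001,   # discourage bloated expressions
    random_state=0,
)
gp.fit(X_train, y_train)
print("evolved expression:", gp._program)
print("test R^2:", round(gp.score(X_test, y_test), 3))
```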

12 pages, 499 KiB  
Article
Distributional Reinforcement Learning with Ensembles
by Björn Lindenberg, Jonas Nordqvist and Karl-Olof Lindahl
Algorithms 2020, 13(5), 118; https://doi.org/10.3390/a13050118 - 7 May 2020
Cited by 1 | Viewed by 3902
Abstract
It is well known that ensemble methods often provide enhanced performance in reinforcement learning. In this paper, we explore this concept further by using group-aided training within the distributional reinforcement learning paradigm. Specifically, we propose an extension to categorical reinforcement learning, where distributional learning targets are implicitly based on the total information gathered by an ensemble. We empirically show that this may lead to much more robust initial learning, a stronger individual performance level, and good efficiency on a per-sample basis. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
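A tiny numpy sketch of the pooling idea in the abstract: several categorical return distributions over a fixed support of atoms (as in C51-style categorical RL) are combined into one ensemble target by averaging the probability mass per atom. The support and the individual agents' distributions are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed categorical support ("atoms"), as in C51-style distributional RL.
n_atoms = 51
support = np.linspace(-10.0, 10.0, n_atoms)

# Placeholder return distributions predicted by an ensemble of 4 agents for one state-action pair.
logits = rng.normal(size=(4, n_atoms))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # each row sums to 1

# Ensemble target: average the probability mass atom-wise (still a valid distribution).
target = probs.mean(axis=0)

print("target sums to 1:", np.isclose(target.sum(), 1.0))
print("ensemble expected return:", round(float(support @ target), 3))
```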

19 pages, 3268 KiB  
Article
A Novel Hybrid Metaheuristic Algorithm for Optimization of Construction Management Site Layout Planning
by Doddy Prayogo, Min-Yuan Cheng, Yu-Wei Wu, A. A. N. Perwira Redi, Vincent F. Yu, Satria Fadil Persada and Reny Nadlifatin
Algorithms 2020, 13(5), 117; https://doi.org/10.3390/a13050117 - 6 May 2020
Cited by 13 | Viewed by 5341
Abstract
Symbiotic organisms search (SOS) is a promising metaheuristic algorithm that has recently been studied by numerous researchers due to its capability to solve various hard and complex optimization problems. SOS is a powerful optimization technique that mimics the typical symbiotic interactions among organisms in an ecosystem. This study presents a new SOS-based hybrid algorithm for solving the challenging construction site layout planning (CSLP) discrete problems. The new algorithm, called hybrid symbiotic organisms search with local operators (HSOS-LO), combines the canonical SOS with several local search mechanisms aimed at increasing the searching capability in discrete-based solution spaces. In this study, three CSLP problems consisting of single and multi-floor facility layout problems are tested, and the obtained results are compared with those of other widely used metaheuristic algorithms. The results indicate the robust performance of the HSOS-LO algorithm in handling discrete-based CSLP problems. Full article
(This article belongs to the Special Issue Optimization Algorithms for Allocation Problems)
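For reference, a numpy sketch of the mutualism phase of canonical SOS on a toy continuous objective (the sphere function); the benefit factors and update rule follow the standard SOS description, while the commensalism and parasitism phases and the discrete local operators that HSOS-LO adds for CSLP are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective (minimization)
    return float(np.sum(x ** 2))

dim, pop_size, iters = 10, 30, 200
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(iters):
    for i in range(pop_size):
        # --- mutualism phase of canonical SOS ---
        best = pop[np.argmin(fit)]
        j = rng.choice([k for k in range(pop_size) if k != i])
        mutual = (pop[i] + pop[j]) / 2.0
        bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)     # benefit factors in {1, 2}
        cand_i = pop[i] + rng.random(dim) * (best - mutual * bf1)
        cand_j = pop[j] + rng.random(dim) * (best - mutual * bf2)
        # Greedy acceptance: keep a candidate only if it improves the organism.
        if sphere(cand_i) < fit[i]:
            pop[i], fit[i] = cand_i, sphere(cand_i)
        if sphere(cand_j) < fit[j]:
            pop[j], fit[j] = cand_j, sphere(cand_j)

print("best objective after mutualism-only SOS:", round(fit.min(), 6))
```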

18 pages, 315 KiB  
Article
The Expected Utility Insurance Premium Principle with Fourth-Order Statistics: Does It Make a Difference?
by Alessandro Mazzoccoli and Maurizio Naldi
Algorithms 2020, 13(5), 116; https://doi.org/10.3390/a13050116 - 6 May 2020
Cited by 8 | Viewed by 4454
Abstract
The expected utility principle is often used to compute the insurance premium through a second-order approximation of the expected value of the utility of losses. We investigate the impact of using a more accurate approximation based on the fourth-order statistics of the expected loss and derive the premium under this expectedly more accurate approximation. The comparison between the two approximation levels shows that the second-order-based premium is always lower (i.e., an underestimate of the correct one) for the commonest loss distributions encountered in insurance. The comparison is also carried out for real cases, considering the loss parameters values estimated in the literature. The increased risk of the insurer is assessed through the Value-at-Risk. Full article
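The flavour of the comparison can be reproduced with a textbook special case: under exponential utility the exact premium is the scaled cumulant generating function (1/a)·ln E[exp(aL)], whose expansion in the risk-aversion coefficient a gives the second-order term E[L] + a·Var[L]/2 and higher-order corrections involving the third and fourth cumulants. The gamma loss model and parameter values below are arbitrary illustrations, not the paper's setting.

```python
import math

# Gamma-distributed losses (shape k, scale theta) and exponential utility with
# risk aversion a: the exact premium is the scaled cumulant generating function.
k_shape, theta, a = 2.0, 1.0, 0.3                       # arbitrary values with a * theta < 1

exact = -k_shape * math.log(1.0 - a * theta) / a        # (1/a) * ln E[exp(a * L)]

# Cumulants of the gamma loss: kappa_n = k * (n - 1)! * theta^n.
k1, k2, k3, k4 = (k_shape * math.factorial(n - 1) * theta ** n for n in (1, 2, 3, 4))

second_order = k1 + a * k2 / 2
fourth_order = second_order + a ** 2 * k3 / 6 + a ** 3 * k4 / 24

print(f"exact premium        : {exact:.4f}")        # 2.3778
print(f"second-order premium : {second_order:.4f}") # 2.3000
print(f"fourth-order premium : {fourth_order:.4f}") # 2.3735
```

Consistent with the abstract, the second-order value underestimates the exact premium in this example, and the fourth-order correction closes most of the gap.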

18 pages, 2030 KiB  
Article
A Fuzzy-Based Decision Support Model for Risk Maturity Evaluation of Construction Organizations
by Mohammadjavad Arabpour Roghabadi and Osama Moselhi
Algorithms 2020, 13(5), 115; https://doi.org/10.3390/a13050115 - 2 May 2020
Cited by 9 | Viewed by 3935
Abstract
Risk maturity evaluation is an efficient tool which can assist construction organizations in the identification of their strengths and weaknesses in risk management processes and in taking the necessary actions to improve these processes. The accuracy of its results relies heavily on the quality of responses provided by participants specialized in these processes across the organization. Risk maturity models reported in the literature give equal importance to participants' responses during model development, neglecting their level of authority in the organization as well as their level of expertise in risk management processes. Unlike the existing models, this paper presents a new risk maturity model that considers the relative importance of the responses provided by the participants in the model development. It considers their authority in the organization and their level of involvement in the risk management processes when calculating the relative weights associated with the risk maturity attributes. It employs an analytic network process (ANP) to model the interdependencies among the risk maturity attributes and utilizes fuzzy set theory to incorporate the uncertainty associated with the ambiguity of the responses used in the model development. The developed model allows construction organizations to have a more accurate and realistic view of their current performance in risk management processes. The application of the developed model was investigated by measuring the risk maturity level of an industrial partner working on civil infrastructure projects in Canada. Full article
(This article belongs to the Special Issue Fuzzy Hybrid Systems for Construction Engineering and Management)

22 pages, 6084 KiB  
Article
Automobile Fine-Grained Detection Algorithm Based on Multi-Improved YOLOv3 in Smart Streetlights
by Fan Yang, Deming Yang, Zhiming He, Yuanhua Fu and Kui Jiang
Algorithms 2020, 13(5), 114; https://doi.org/10.3390/a13050114 - 2 May 2020
Cited by 7 | Viewed by 3975
Abstract
Upgrading ordinary streetlights to smart streetlights to help monitor traffic flow is a low-cost and pragmatic option for cities. Fine-grained classification of vehicles in the sight of smart streetlights is essential for intelligent transportation and smart cities. In order to improve the classification accuracy of distant cars, we propose a reformed YOLOv3 (You Only Look Once, version 3) algorithm to realize the detection of various types of automobiles, such as SUVs, sedans, taxis, commercial vehicles, small commercial vehicles, vans, buses, trucks and pickup trucks. Based on the UA-DETRAC-LITE dataset, manually labeled data are added to improve the data balance. First, data optimization for the vehicle target is performed to improve the generalization ability and the position regression loss function of the model. The experimental results show that, within a range of 67 m and through scale optimization (i.e., by introducing multi-scale training and anchor clustering), the classification accuracies for trucks and pickup trucks are raised by 26.98% and 16.54%, respectively, and the overall accuracy is increased by 8%. Second, label smoothing and mixup optimization are also performed to improve the generalization ability of the model. Compared with the original YOLO algorithm, the accuracy of the proposed algorithm is improved by 16.01%. By further incorporating the GIoU (Generalized Intersection over Union) position regression loss function, the overall system accuracy reaches 92.7%, which improves the performance by 21.28% compared with the original YOLOv3 algorithm. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)
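The GIoU loss mentioned above augments IoU with a penalty based on the smallest enclosing box. The sketch below computes GIoU for two axis-aligned boxes in (x1, y1, x2, y2) format; the example boxes are arbitrary and the code is independent of any particular detector implementation.

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box C; GIoU = IoU - |C \ (A u B)| / |C|.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

pred, truth = (50, 50, 150, 150), (80, 70, 190, 160)
g = giou(pred, truth)
print(f"GIoU = {g:.3f}, loss = {1 - g:.3f}")   # 1 - GIoU is used as the box regression loss
```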

22 pages, 1105 KiB  
Article
Goal Oriented Time Adaptivity Using Local Error Estimates
by Peter Meisrimel and Philipp Birken
Algorithms 2020, 13(5), 113; https://doi.org/10.3390/a13050113 - 30 Apr 2020
Cited by 1 | Viewed by 4312
Abstract
We consider initial value problems (IVPs) where we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution. For these, we analyze goal oriented time adaptive methods that use only local error estimates. A local error estimate and timestep controller for step-wise contributions to the QoI are derived. We prove convergence of the error in the QoI as the tolerance goes to zero under a controllability assumption. By analyzing global error propagation with respect to the QoI, we can identify possible issues and make performance predictions. Numerical tests verify these results. We compare performance with classical local error based time-adaptivity and with a posteriori adaptivity using the dual-weighted residual (DWR) method. For dissipative problems, local error based methods show better performance than DWR, and the goal oriented method shows good results in most examples, with significant speedups in some cases. Full article
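A compact sketch of the classical local-error-based time adaptivity that serves as the baseline above: an embedded explicit Euler/Heun pair integrates a scalar dissipative test IVP, the step size follows the standard controller dt_new = dt·(TOL/est)^(1/2), and the QoI (here the time integral of u²) is accumulated with the trapezoidal rule. The test problem, tolerance and safety factor are arbitrary choices.

```python
import numpy as np

def f(t, u):                 # simple dissipative test IVP: u' = -u + sin(t), u(0) = 1
    return -u + np.sin(t)

def j(u):                    # density of the quantity of interest: QoI = integral of u(t)^2 dt
    return u ** 2

t, t_end, u = 0.0, 10.0, 1.0
dt, tol, safety = 1e-2, 1e-5, 0.9
qoi = 0.0

while t < t_end:
    dt = min(dt, t_end - t)
    k1 = f(t, u)
    k2 = f(t + dt, u + dt * k1)
    u_euler = u + dt * k1                     # order 1
    u_heun = u + dt * (k1 + k2) / 2.0         # order 2
    est = abs(u_heun - u_euler)               # local error estimate

    if est <= tol or dt < 1e-12:              # accept the step
        qoi += dt * (j(u) + j(u_heun)) / 2.0  # trapezoidal contribution to the QoI
        t, u = t + dt, u_heun
    # Standard step-size controller for an order-1 local error estimate.
    dt *= safety * (tol / max(est, 1e-16)) ** 0.5

print(f"QoI ~ {qoi:.6f}, final dt = {dt:.2e}")
```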

14 pages, 1054 KiB  
Article
Two NEH Heuristic Improvements for Flowshop Scheduling Problem with Makespan Criterion
by Christophe Sauvey and Nathalie Sauer
Algorithms 2020, 13(5), 112; https://doi.org/10.3390/a13050112 - 29 Apr 2020
Cited by 20 | Viewed by 6564
Abstract
Since its creation by Nawaz, Enscore, and Ham in 1983, NEH remains the best heuristic method to solve flowshop scheduling problems. In the large body of literature dealing with the application of this heuristic, it can be clearly noted that results differ from one paper to another. In this paper, two methods are proposed to improve the original NEH, based on the two points in the method where choices must be made in case of equivalence between two job orders or partial sequences. When an equality occurs in a sorting method, two results are equivalent but can lead to different final results. In order to propose the first improvement to NEH, the factorial basis decomposition method is introduced, which establishes a computational correspondence between numbers and permutations. This method is very helpful for the first improvement, and allows all the sequencing possibilities to be tested for problems of up to 50 jobs. The second improvement concerns the point where NEH keeps the best partial sequence. Here, a list of equivalent partial sequences is kept, rather than only one, to give the overall method a chance of better performance. The results obtained with the successive use of the two improvement methods show an average improvement of 19% over the already effective results of the original NEH method. Full article
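For reference, a plain-Python implementation of the original NEH construction (without the two tie-breaking improvements proposed in the paper): jobs are sorted by decreasing total processing time and inserted one at a time at the makespan-minimizing position; note that min() silently keeps the first of several equivalent insertions, which is exactly the kind of arbitrary choice the paper addresses. The 5-job, 3-machine instance is made up.

```python
def makespan(seq, p):
    """Makespan of a permutation flowshop schedule; p[j][m] is the time of job j on machine m."""
    n_mach = len(p[0])
    c = [0.0] * n_mach
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, n_mach):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def neh(p):
    # Step 1: sort jobs by decreasing total processing time.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    # Step 2: insert each job at the position giving the smallest partial makespan.
    seq = [order[0]]
    for j in order[1:]:
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))   # ties: first best is kept
    return seq

# Toy instance: 5 jobs on 3 machines.
p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]]
best = neh(p)
print("NEH sequence:", best, "makespan:", makespan(best, p))
```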

18 pages, 4372 KiB  
Article
Multi-Level Joint Feature Learning for Person Re-Identification
by Shaojun Wu and Ling Gao
Algorithms 2020, 13(5), 111; https://doi.org/10.3390/a13050111 - 29 Apr 2020
Cited by 6 | Viewed by 4303
Abstract
In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods only extract global features or local features of pedestrian images. Some inconspicuous details are easily ignored when learning image features, which is neither efficient nor robust for scenarios with large differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global features and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high level local features of pedestrian images. Global-Local Branches are used to extract the local features and global features at the highest level. The experiments show that our deep learning model based on multi-level feature fusion works well in person re-identification. The overall results outperform the state of the art by considerable margins on three widely-used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming the existing works by a large margin (more than 6%). Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)

30 pages, 1297 KiB  
Article
p-Refined Multilevel Quasi-Monte Carlo for Galerkin Finite Element Methods with Applications in Civil Engineering
by Philippe Blondeel, Pieterjan Robbe, Cédric Van hoorickx, Stijn François, Geert Lombaert and Stefan Vandewalle
Algorithms 2020, 13(5), 110; https://doi.org/10.3390/a13050110 - 28 Apr 2020
Cited by 6 | Viewed by 3947
Abstract
Civil engineering applications are often characterized by a large uncertainty on the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can be expressed as a random field represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic responses, i.e., the expected value and variance of a chosen quantity of interest, remains very costly, even when state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using a recently developed multilevel method: p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). This method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. It is combined with a rank-1 Quasi-Monte Carlo (QMC) lattice rule, which yields faster convergence compared to the use of random Monte Carlo points. In this work, we developed algorithms for the p-MLQMC method for two-dimensional problems. The p-MLQMC method is first benchmarked on an academic beam problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering, and typically suffers from large parameter uncertainty. For both considered problems, we observe a very significant reduction in the amount of computational work with respect to MLMC. Full article
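A toy numpy sketch of the multilevel telescoping estimator underlying both MLMC and p-MLQMC: the fine-level expectation is split into a coarse-level estimate plus corrections from level differences, each sampled independently. The fabricated q_level function stands in for the p-refined finite element solves, and plain Monte Carlo points are used instead of the rank-1 lattice rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_level(level, z):
    """Fabricated stand-in for a level-`level` FE solve with random input z:
    the 'exact' response plus a discretization error that shrinks with the level."""
    exact = np.sin(z) + 0.5 * z ** 2
    return exact + 2.0 ** (-2 * level) * np.cos(3 * z)

levels = 4
n_samples = [4000, 1000, 250, 60]          # more samples on the cheap coarse levels

estimate = 0.0
for level in range(levels):
    z = rng.normal(size=n_samples[level])              # random material-parameter samples
    if level == 0:
        diff = q_level(0, z)                            # coarse-level expectation
    else:
        diff = q_level(level, z) - q_level(level - 1, z)  # level correction, same samples
    estimate += diff.mean()

print(f"multilevel estimate of E[Q]: {estimate:.5f}")
```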

24 pages, 6129 KiB  
Article
Evolution of SOMs’ Structure and Learning Algorithm: From Visualization of High-Dimensional Data to Clustering of Complex Data
by Marian B. Gorzałczany and Filip Rudziński
Algorithms 2020, 13(5), 109; https://doi.org/10.3390/a13050109 - 28 Apr 2020
Cited by 2 | Viewed by 4148
Abstract
In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of the particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, original, algorithm-oriented computer implementations of the particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
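A minimal numpy sketch of the conventional SOM learning rule that the surveyed generalizations start from: locate the best-matching unit and pull it and its grid neighbours toward the input using a Gaussian neighbourhood with decaying radius and learning rate. The map size, decay schedules and toy 2-D data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three 2-D Gaussian clusters.
data = np.vstack([rng.normal(c, 0.1, size=(200, 2)) for c in [(0, 0), (1, 0), (0.5, 1)]])

rows, cols, dim = 8, 8, 2
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 5000, 0.5, 3.0
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    frac = it / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-3 * frac)   # decaying rate and radius

    # Best-matching unit (BMU): the neuron whose weight vector is closest to x.
    d2 = ((weights - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(np.argmin(d2), (rows, cols))

    # Gaussian neighbourhood on the grid, then move the weights toward x.
    g2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-g2 / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print("trained SOM codebook shape:", weights.shape)
```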

18 pages, 11016 KiB  
Article
Investigation of the iCC Framework Performance for Solving Constrained LSGO Problems
by Alexey Vakhnin and Evgenii Sopov
Algorithms 2020, 13(5), 108; https://doi.org/10.3390/a13050108 - 26 Apr 2020
Cited by 6 | Viewed by 4035
Abstract
Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not yet well studied. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs used to solve LSGO problems. In this paper, a new approach for solving cLSGO is proposed. This approach is based on CC and on a method that increases the size of groups of variables at the decomposition stage (iCC) when solving cLSGO tasks. A new algorithm is proposed, which combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely ε-iCC-SHADE). We investigate the performance of ε-iCC-SHADE and compare it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 competition on constrained real-parameter optimization. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications)

18 pages, 3042 KiB  
Article
How to Inspect and Measure Data Quality about Scientific Publications: Use Case of Wikipedia and CRIS Databases
by Otmane Azeroual and Włodzimierz Lewoniewski
Algorithms 2020, 13(5), 107; https://doi.org/10.3390/a13050107 - 26 Apr 2020
Cited by 5 | Viewed by 5335
Abstract
The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) becomes more and more relevant with the use of freely available spatial information in different application scenarios. When integrating these data into a CRIS, it is necessary to be able to recognize and assess their quality. Only then is it possible to compile a result from the available data that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. So far, no data quality measurements have been programmed with Python to assess the quality of metadata from scientific publications in Wikipedia and CRIS. With this in mind, we programmed the methods and algorithms as code, but present them in this paper in the form of pseudocode, to measure quality with respect to objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can apply the measurements, together with the program code, to their scientific publication metadata, and management can rely on high-quality data when making decisions. Full article
(This article belongs to the Special Issue Data Quality Theory and Applications)
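In the spirit of the pseudocode mentioned above, the snippet below scores a list of publication-metadata records for completeness (share of non-empty mandatory fields) and runs a simple syntactic consistency check on the DOI; the records, field list and regular expression are invented for illustration and are not the paper's actual measurements.

```python
import re

MANDATORY = ["title", "authors", "year", "doi", "journal"]
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

records = [
    {"title": "A study", "authors": "Doe, J.", "year": 2019, "doi": "10.3390/a13050107", "journal": "Algorithms"},
    {"title": "Another study", "authors": "", "year": 2020, "doi": "not-a-doi", "journal": "Algorithms"},
]

def completeness(record):
    """Share of mandatory fields that are present and non-empty."""
    filled = sum(1 for f in MANDATORY if str(record.get(f, "") or "").strip())
    return filled / len(MANDATORY)

def doi_consistent(record):
    """Crude syntactic consistency check of the DOI field."""
    return bool(DOI_PATTERN.match(str(record.get("doi", ""))))

for r in records:
    print(f"{r['title']!r}: completeness={completeness(r):.2f}, doi_ok={doi_consistent(r)}")
```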

15 pages, 299 KiB  
Article
Diagnosis in Tennis Serving Technique
by Eugenio Roanes-Lozano, Eduardo A. Casella, Fernando Sánchez and Antonio Hernando
Algorithms 2020, 13(5), 106; https://doi.org/10.3390/a13050106 - 25 Apr 2020
Cited by 8 | Viewed by 4358
Abstract
Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like ‘hit the ball a little higher when serving’. However, the biomechanics of a tennis stroke are only clear to an expert. We, therefore, developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach at a championship and is not serving as usual (the RBES is so far restricted to serving). The player has to answer a set of questions about how he/she is serving that day and about his/her usual serving technique, and the RBES obtains a diagnosis of the possible reasons using logic inference (according to the logic rules that have been previously given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases). Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
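A toy sketch of rule-based diagnosis over Boolean propositions, using sympy's propositional satisfiability check in place of the Groebner-basis inference engine the prototype actually uses; the rules, facts and symptom names are invented and much simpler than a real serving-technique knowledge base.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Implies, Not
from sympy.logic.inference import satisfiable

# Invented propositions about today's serve and possible diagnoses.
toss_too_low, elbow_drops, hitting_net, low_contact_point = symbols(
    "toss_too_low elbow_drops hitting_net low_contact_point"
)

# Invented rules of the knowledge base (the real RBES encodes expert serving rules).
rules = And(
    Implies(toss_too_low, low_contact_point),
    Implies(elbow_drops, low_contact_point),
    Implies(low_contact_point, hitting_net),
)

# Player's answers for today: the toss is too low and serves are going into the net.
facts = And(toss_too_low, hitting_net)

def entails(kb, query):
    """The knowledge base entails the query iff KB AND NOT(query) is unsatisfiable."""
    return not satisfiable(And(kb, Not(query)))

print("low contact point diagnosed:", entails(And(rules, facts), low_contact_point))  # True
print("elbow drop diagnosed:       ", entails(And(rules, facts), elbow_drops))        # False
```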
