Algorithms, Volume 16, Issue 1 (January 2023) – 60 articles

Cover Story: Knowledge graphs (KGs) are becoming more and more prevalent on the web, containing a vast amount of information and requiring tools for their quick understanding. Structural semantic summarization methods primarily exploit the structure of the graph in order to facilitate KGs’ exploration. However, state-of-the-art approaches focus on identifying the most important nodes, usually through a single centrality measure or just a few. SumMER is the first structural summarization technique exploiting machine learning, moving beyond a single centrality measure and effectively combining multiple ones to optimally select the most important nodes. Those nodes are then linked, forming a subgraph of the original graph and effectively increasing the quality of the generated summaries. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 1408 KiB  
Article
Iterative Image Reconstruction Algorithm with Parameter Estimation by Neural Network for Computed Tomography
by Takeshi Kojima and Tetsuya Yoshinaga
Algorithms 2023, 16(1), 60; https://doi.org/10.3390/a16010060 - 16 Jan 2023
Cited by 3 | Viewed by 2459
Abstract
Recently, an extended family of power-divergence measures with two parameters was proposed together with an iterative reconstruction algorithm based on minimization of the divergence measure as an objective function of the reconstructed images for computed tomography. Numerical experiments illustrated that, with appropriately chosen parameter values, the reconstruction algorithm has advantages over conventional iterative methods when reconstructing from noisy measured projections. In this paper, we present a novel neural network architecture for determining the most appropriate parameters depending on the noise level of the projections and the shape of the target image. Through experiments, we show that the algorithm based on the proposed architecture, which has an optimization sub-network with multiplicative connections rather than additive ones, works well. Full article
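For readers unfamiliar with this family of methods, the sketch below shows the general shape of a multiplicative iterative reconstruction that minimizes a KL-type divergence between measured and forward-projected data; it is not the authors' two-parameter power-divergence algorithm, and the system matrix `A` and projections `b` are placeholders.

```python
# Minimal sketch (not the authors' two-parameter power-divergence algorithm):
# a multiplicative iterative reconstruction that drives the forward projection
# A @ x toward the measured projections b by minimizing a KL-type divergence.
import numpy as np

def iterative_reconstruction(A, b, n_iter=100, eps=1e-12):
    """A: (n_rays, n_pixels) system matrix, b: measured projections."""
    x = np.ones(A.shape[1])              # non-negative initial image
    col_sum = A.sum(axis=0) + eps        # per-pixel normalization
    for _ in range(n_iter):
        ratio = b / (A @ x + eps)        # measured / forward-projected data
        x *= (A.T @ ratio) / col_sum     # multiplicative (MLEM-like) update
    return x

# Toy example: a 2-pixel "image" observed through 3 rays.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = iterative_reconstruction(A, A @ x_true)
```

In the paper, a neural network additionally selects the two divergence parameters from the projection noise level and the shape of the target image before an iteration of this kind is run.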
21 pages, 7405 KiB  
Article
Data Augmentation Methods for Enhancing Robustness in Text Classification Tasks
by Huidong Tang, Sayaka Kamei and Yasuhiko Morimoto
Algorithms 2023, 16(1), 59; https://doi.org/10.3390/a16010059 - 16 Jan 2023
Cited by 4 | Viewed by 3440
Abstract
Text classification is widely studied in natural language processing (NLP). Deep learning models, including large pre-trained models like BERT and DistilBERT, have achieved impressive results in text classification tasks. However, these models’ robustness against adversarial attacks remains an area of concern. To address this concern, we propose three data augmentation methods to improve the robustness of such pre-trained models. We evaluated our methods on four text classification datasets by fine-tuning DistilBERT on the augmented datasets and exposing the resulting models to adversarial attacks to evaluate their robustness. In addition to enhancing the robustness, our proposed methods can improve the accuracy and F1-score on three datasets. We also conducted comparison experiments with two existing data augmentation methods. We found that one of our proposed methods demonstrates a similar improvement in terms of performance, but all demonstrate a superior robustness improvement. Full article
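As a flavor of what robustness-oriented text augmentation can look like (the paper's three methods are not reproduced here), the snippet below performs random synonym replacement; the `synonyms` table is a hypothetical input, not an artifact of the paper.

```python
# Illustrative sketch only: random synonym replacement using a caller-supplied
# synonym table (a hypothetical `synonyms` dict, not an API from the paper).
import random

def synonym_replace(sentence, synonyms, p=0.1, seed=0):
    rng = random.Random(seed)
    out = []
    for w in sentence.split():
        if w.lower() in synonyms and rng.random() < p:
            out.append(rng.choice(synonyms[w.lower()]))  # swap in a synonym
        else:
            out.append(w)
    return " ".join(out)

aug = synonym_replace("the movie was a great success",
                      {"great": ["excellent", "remarkable"]})
```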
19 pages, 2313 KiB  
Article
A Design Concept for a Tourism Recommender System for Regional Development
by Leyla Gamidullaeva, Alexey Finogeev, Mikhail Kataev and Larisa Bulysheva
Algorithms 2023, 16(1), 58; https://doi.org/10.3390/a16010058 - 16 Jan 2023
Cited by 13 | Viewed by 3856
Abstract
Despite existing tourism infrastructure and software, the development of tourism is hampered by the lack of information support encapsulating the various aspects of travel implementation. This paper highlights the demand for integrating various approaches and methods to develop a universal tourism information recommender system for building individual tourist routes. The objective of this study is to propose a concept of a universal information recommender system for building a personalized tourist route. The developed design concept for such a system involves a procedure for data collection and preparation for tourism product synthesis; a methodology for tourism product formation according to user preferences; and the main stages of this methodology's implementation. To collect and store information from real travelers, this paper proposes to use elements of blockchain technology in order to ensure information security. A model that specifies the key elements of the tourist route planning process is presented. This article can serve as a reference and knowledge base for digital business system analysts, system designers, and digital tourism business implementers for better digital business system design and implementation in the tourism sector. Full article
29 pages, 14391 KiB  
Article
Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes
by Liubov O. Elkhovskaya, Alexander D. Kshenin, Marina A. Balakhontceva, Mikhail V. Ionov and Sergey V. Kovalchuk
Algorithms 2023, 16(1), 57; https://doi.org/10.3390/a16010057 - 15 Jan 2023
Cited by 4 | Viewed by 2119
Abstract
Within process mining, discovery techniques make it possible to construct business process models automatically from event logs. However, the results often do not achieve a balance between model complexity and fitting accuracy, establishing a need for manual model adjustment. This paper presents an approach to process mining that provides semi-automatic support for model optimization based on the combined assessment of model complexity and fitness. To balance complexity and fitness, a model simplification approach is proposed, which abstracts the raw model at the desired granularity. Additionally, we introduce the concept of meta-states, which collapse cycles in the model and can potentially simplify the model and aid its interpretation. We aim to demonstrate the capabilities of our technological solution using three datasets from different applications in the healthcare domain: remote monitoring processes for patients with arterial hypertension and workflows of healthcare workers during the COVID-19 pandemic. A case study also investigates the use of various complexity measures and different ways of applying the solution, providing insights into better practices for improving interpretability and the complexity/fitness balance in process models. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application II)
21 pages, 1379 KiB  
Article
Development and Implementation of an ANN Based Flow Law for Numerical Simulations of Thermo-Mechanical Processes at High Temperatures in FEM Software
by Olivier Pantalé
Algorithms 2023, 16(1), 56; https://doi.org/10.3390/a16010056 - 13 Jan 2023
Cited by 7 | Viewed by 2486
Abstract
Numerical methods based on finite elements (FE) have proven their efficiency for many years in the thermomechanical simulation of forming processes. Nevertheless, the application of these methods to new materials requires the identification and implementation of constitutive and flow laws within FE codes, which sometimes poses problems, particularly because of the strongly non-linear behavior of these materials. Computational techniques based on machine learning and artificial neural networks are becoming more and more important in the development of these models and help FE codes to integrate more complex behavior. In this paper, we present the development, implementation and use of an artificial neural network (ANN) based flow law for a GrC15 alloy under high-temperature thermomechanical loading. The flow law modeling by ANN shows a significant superiority in terms of model prediction quality compared to classical approaches based on the widely used Johnson–Cook or Arrhenius models. Once the ANN parameters have been identified on the basis of experiments, the implementation of this flow law in a finite element code shows promising results in terms of solution quality and respect of the material behavior. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
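For context, the sketch below implements the classical Johnson–Cook flow law that the ANN model is benchmarked against; the constants `A`, `B`, `n`, `C`, `m` and the reference strain rate and temperatures are generic placeholders, not values identified in the paper.

```python
# Classical Johnson-Cook flow law (reference model from the abstract).
# All material constants below are illustrative placeholders.
import numpy as np

def johnson_cook(strain, strain_rate, T,
                 A=350.0, B=275.0, n=0.36, C=0.022, m=1.0,
                 eps_dot_0=1.0, T_room=293.0, T_melt=1790.0):
    hardening = A + B * strain ** n                                   # strain hardening
    rate_term = 1.0 + C * np.log(np.maximum(strain_rate / eps_dot_0, 1e-12))
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)      # homologous temperature
    thermal = 1.0 - T_star ** m                                       # thermal softening
    return hardening * rate_term * thermal                            # flow stress (MPa if A, B in MPa)

sigma = johnson_cook(strain=0.1, strain_rate=10.0, T=900.0)
```

An ANN flow law replaces this closed form with a small network mapping strain, strain rate and temperature to flow stress, trained on the experimental data.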
15 pages, 1713 KiB  
Article
Set-Point Control of a Spatially Distributed Buck Converter
by Klaus Röbenack and Stefan Palis
Algorithms 2023, 16(1), 55; https://doi.org/10.3390/a16010055 - 13 Jan 2023
Cited by 4 | Viewed by 1802
Abstract
The classical buck converter is a very common DC–DC converter, which reduces a higher input supply voltage to a lower output load voltage. Replacing the inductor and the capacitor by a transmission line, we obtain a distributed buck converter, which can be described by partial differential equations and therefore constitutes a completely new class of model. This new topology can be used if the load is operated at some spatial distance from the power supply, where the power supply line is directly used as a reactive network element of the converter. In addition to the analysis and simulation, we also investigate the control of such a converter. In this contribution, we employ a discrepancy-based control technique. Approximating the theoretically derived feedback law yields an easy-to-implement sliding-mode control scheme. The controller design is based on an ideal circuit model and verified by numerical simulation. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
15 pages, 1356 KiB  
Article
A Discrete Partially Observable Markov Decision Process Model for the Maintenance Optimization of Oil and Gas Pipelines
by Ezra Wari, Weihang Zhu and Gino Lim
Algorithms 2023, 16(1), 54; https://doi.org/10.3390/a16010054 - 12 Jan 2023
Cited by 7 | Viewed by 2644
Abstract
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion growth, and implementing optimal maintenance policies. This paper proposes a partially observable Markov decision process (POMDP) model for optimizing maintenance based on the corrosion progress, which is monitored by an inline inspection to assess the extent of pipeline corrosion. The states are defined by dividing the deterioration range equally, whereas the actions are determined based on the specific states and pipeline attributes. Monte Carlo simulation and a pure birth Markov process method are used for computing the transition matrix. The costs of maintenance and failure are considered when calculating the rewards. The inline inspection methods and tool measurement errors may cause reading distortion, which is used to formulate the observations and the observation function. The model is demonstrated with two numerical examples constructed based on problems and parameters in the literature. The results show that the proposed model performs well with the added advantage of integrating measurement errors and recommending actions for multiple-state situations. Overall, this discrete model can serve the maintenance decision-making process by better representing the stochastic features. Full article
(This article belongs to the Special Issue Algorithms in Monte Carlo Methods)
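The core POMDP machinery behind such a model can be summarized in a few lines: a Bayesian belief update over discretized corrosion states after taking an action and receiving an error-prone inspection reading. The transition and observation arrays below are illustrative placeholders, not the paper's calibrated matrices.

```python
# Belief update for a discrete POMDP with placeholder transition/observation data.
import numpy as np

def belief_update(belief, action, obs, T, O):
    """belief: (S,) prior over states, T: (A, S, S) transition probabilities,
    O: (A, S, O) observation probabilities given the resulting state."""
    predicted = belief @ T[action]              # predict next-state distribution
    posterior = predicted * O[action][:, obs]   # weight by observation likelihood
    return posterior / posterior.sum()          # normalize

# Toy example with 3 corrosion states, one action, noisy inspection readings.
T = np.array([[[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]])
O = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]])
b = belief_update(np.array([1.0, 0.0, 0.0]), action=0, obs=1, T=T, O=O)
```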
19 pages, 2827 KiB  
Article
Investigating Novice Developers’ Code Commenting Trends Using Machine Learning Techniques
by Tahira Niazi, Teerath Das, Ghufran Ahmed, Syed Muhammad Waqas, Sumra Khan, Suleman Khan, Ahmed Abdelaziz Abdelatif and Shaukat Wasi
Algorithms 2023, 16(1), 53; https://doi.org/10.3390/a16010053 - 12 Jan 2023
Cited by 3 | Viewed by 3061
Abstract
Code comments are considered an efficient way to document the functionality of a particular block of code. Code commenting is a common practice among developers to explain the purpose of the code in order to improve code comprehension and readability. Researchers investigated the effect of code comments on software development tasks and demonstrated the use of comments in several ways, including maintenance, reusability, bug detection, etc. Given the importance of code comments, it becomes vital for novice developers to brush up on their code commenting skills. In this study, we initially investigated what types of comments novice students document in their source code and further categorized those comments using a machine learning approach. The work involves the initial manual classification of code comments and then building a machine learning model to classify student code comments automatically. The findings of our study revealed that novice developers/students’ comments are mainly related to Literal (26.66%) and Insufficient (26.66%). Further, we proposed and extended the taxonomy of such source code comments by adding a few more categories, i.e., License (5.18%), Profile (4.80%), Irrelevant (4.80%), Commented Code (4.44%), Autogenerated (1.48%), and Improper (1.10%). Moreover, we assessed our approach with three different machine-learning classifiers. Our implementation of machine learning models found that Decision Tree resulted in the overall highest accuracy, i.e., 85%. This study helps in predicting the type of code comments for a novice developer using a machine learning approach that can be implemented to generate automated feedback for students, thus saving teachers time for manual one-on-one feedback, which is a time-consuming activity. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
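A minimal sketch of this kind of pipeline, assuming TF-IDF features and a decision tree (the study's exact preprocessing and label set are not reproduced; the comments and labels below are toy examples):

```python
# Hedged sketch: classify code comments into categories with TF-IDF + decision tree.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier

comments = ["returns the sum of a and b", "TODO fix this later",
            "licensed under the MIT license", "i = i + 1"]
labels = ["Literal", "Insufficient", "License", "Literal"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    DecisionTreeClassifier(random_state=0))
clf.fit(comments, labels)
print(clf.predict(["increment counter by one"]))
```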
20 pages, 3002 KiB  
Article
Novel MIA-LSTM Deep Learning Hybrid Model with Data Preprocessing for Forecasting of PM2.5
by Gaurav Narkhede, Anil Hiwale, Bharat Tidke and Chetan Khadse
Algorithms 2023, 16(1), 52; https://doi.org/10.3390/a16010052 - 12 Jan 2023
Cited by 10 | Viewed by 2675
Abstract
Day by day pollution in cities is increasing due to urbanization. One of the biggest challenges posed by the rapid migration of inhabitants into cities is increased air pollution. Sustainable Development Goal 11 indicates that 99 percent of the world’s urban population breathes polluted air. In such a trend of urbanization, predicting the concentrations of pollutants in advance is very important. Predictions of pollutants would help city administrations to take timely measures for ensuring Sustainable Development Goal 11. In data engineering, imputation and the removal of outliers are very important steps prior to forecasting the concentration of air pollutants. For pollution and meteorological data, missing values and outliers are critical problems that need to be addressed. This paper proposes a novel method called multiple iterative imputation using autoencoder-based long short-term memory (MIA-LSTM) which uses iterative imputation using an extra tree regressor as an estimator for the missing values in multivariate data followed by an LSTM autoencoder for the detection and removal of outliers present in the dataset. The preprocessed data were given to a multivariate LSTM for forecasting PM2.5 concentration. This paper also presents the effect of removing outliers and missing values from the dataset as well as the effect of imputing missing values in the process of forecasting the concentrations of air pollutants. The proposed method provides better results for forecasting with a root mean square error (RMSE) value of 9.8883. The obtained results were compared with the traditional gated recurrent unit (GRU), 1D convolutional neural network (CNN), and long short-term memory (LSTM) approaches for a dataset of the Aotizhonhxin area of Beijing in China. Similar results were observed for another two locations in China and one location in India. The results obtained show that imputation and outlier/anomaly removal improve the accuracy of air pollution forecasting. Full article
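The imputation stage described above can be sketched with standard tooling: multiple iterative imputation with an extra-trees regressor as the estimator. The LSTM autoencoder for outlier removal and the forecasting LSTM are omitted, and the data below are placeholders.

```python
# Sketch of the imputation stage only: iterative imputation with an
# extra-trees regressor as the estimator, applied to multivariate data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

X = np.array([[12.0, 35.0, 1.2],
              [np.nan, 33.0, 1.1],
              [14.0, np.nan, 0.9],
              [15.0, 30.0, np.nan]])  # placeholder pollutant/meteorological columns

imputer = IterativeImputer(estimator=ExtraTreesRegressor(n_estimators=50,
                                                         random_state=0),
                           max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
```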
18 pages, 553 KiB  
Article
Discovering Critical Factors in the Content of Crowdfunding Projects
by Kai-Fu Yang, Yi-Ru Lin and Long-Sheng Chen
Algorithms 2023, 16(1), 51; https://doi.org/10.3390/a16010051 - 12 Jan 2023
Cited by 2 | Viewed by 2289
Abstract
Crowdfunding can simplify the financing process, allowing startups to raise large amounts of money to complete projects. However, improving the success rate has become one of the critical issues. To achieve this goal, fundraisers need to create a short video, write attractive promotional content, and present themselves on social media to attract investors. Previous studies merely discussed project factors that affect crowdfunding success rates; relatively few have studied what elements should be included in the project content for crowdfunding projects to succeed. Consequently, this study aims to extract the crucial factors that can enhance the crowdfunding project success rate based on the project content description. To identify the crucial project content factors of movie projects, this study employed two real cases from famous platforms, using natural language processing (NLP) and feature selection algorithms including rough set theory (RST), decision trees (DT), and ReliefF to select from 12 pre-defined candidate factors. Then, support vector machines (SVM) were used to evaluate the performance. Finally, “Role”, “Cast”, “Merchandise”, “Sound effects”, and “Sentiment” were identified as important content factors for movie projects. The findings could also provide fundraisers with suggestions on how to make their movie crowdfunding projects more successful. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection)
22 pages, 3150 KiB  
Article
Numerical Study of Viscoplastic Flows Using a Multigrid Initialization Algorithm
by Souhail Maazioui, Imad Kissami, Fayssal Benkhaldoun and Driss Ouazar
Algorithms 2023, 16(1), 50; https://doi.org/10.3390/a16010050 - 11 Jan 2023
Cited by 2 | Viewed by 1908
Abstract
In this paper, an innovative methodology to handle the numerical simulation of viscoplastic flows is proposed based on a multigrid initialization algorithm in conjunction with the SIMPLE procedure. The governing equations for incompressible flow, which consist of continuity and momentum equations, are solved on a collocated grid by combining the finite volume discretization and Rhie and Chow interpolation for pressure–velocity coupling. Using the proposed solver in combination with the regularization scheme of Papanastasiou, we chose the square lid-driven cavity flow and pipe flow as test cases for validation and discussion. In doing so, we study the influence of the Bingham number and the Reynolds number on the development of rigid areas and the features of the vortices within the flow domain. Pipe flow results illustrate the flow’s response to the stress growth parameter values. We show that the representation of the yield surface and the plug zone is influenced by the chosen value. Regarding viscoplastic flows, our experiments demonstrate that our approach based on using the multigrid method as an initialization procedure makes a significant contribution by outperforming the classic single grid method. A computation speed-up ratio of 6.45 was achieved for the finest grid size (320 × 320). Full article
(This article belongs to the Topic Advances in Computational Materials Sciences)
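The initialization idea is easy to illustrate on a simpler problem: solve cheaply on a coarse grid, interpolate, and use the result as the fine-grid initial guess. The sketch below does this for a 1D Poisson problem with Jacobi smoothing; it is not the paper's SIMPLE/Rhie–Chow solver, and grid sizes and iteration counts are placeholders.

```python
# Coarse-to-fine initialization sketch for -u'' = f with zero Dirichlet boundaries.
import numpy as np

def jacobi(u, f, h, n_iter):
    for _ in range(n_iter):
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])  # Jacobi sweep
    return u

def multigrid_initialized_solve(f_fine, n_coarse_iter=200, n_fine_iter=50):
    n_fine = f_fine.size - 1
    n_coarse = n_fine // 2
    x_fine = np.linspace(0.0, 1.0, n_fine + 1)
    x_coarse = np.linspace(0.0, 1.0, n_coarse + 1)
    f_coarse = np.interp(x_coarse, x_fine, f_fine)           # restrict the RHS
    u_coarse = jacobi(np.zeros(n_coarse + 1), f_coarse,
                      1.0 / n_coarse, n_coarse_iter)          # cheap coarse solve
    u0 = np.interp(x_fine, x_coarse, u_coarse)                # prolongation -> initial guess
    return jacobi(u0, f_fine, 1.0 / n_fine, n_fine_iter)      # fewer fine-grid iterations

x = np.linspace(0.0, 1.0, 321)
u = multigrid_initialized_solve(np.sin(np.pi * x))            # RHS sampled on the fine grid
```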
4 pages, 175 KiB  
Editorial
Special Issue on Ensemble Learning and/or Explainability
by Panagiotis Pintelas and Ioannis E. Livieris
Algorithms 2023, 16(1), 49; https://doi.org/10.3390/a16010049 - 11 Jan 2023
Viewed by 2069
Abstract
This article will summarize the works published in a Special Issue of Algorithms, entitled “Ensemble Learning and/or Explainability”(https://www [...] Full article
(This article belongs to the Special Issue Ensemble Algorithms and/or Explainability)
15 pages, 505 KiB  
Article
An Algorithm for Solving Zero-Sum Differential Game Related to the Nonlinear H∞ Control Problem
by Vladimir Milić, Josip Kasać and Marin Lukas
Algorithms 2023, 16(1), 48; https://doi.org/10.3390/a16010048 - 10 Jan 2023
Cited by 1 | Viewed by 2478
Abstract
This paper presents an approach for the solution of a zero-sum differential game associated with a nonlinear state-feedback H∞ control problem. Instead of using approximation methods for solving the corresponding Hamilton–Jacobi–Isaacs (HJI) partial differential equation, we propose an algorithm that calculates the explicit inputs to the dynamic system by directly performing minimization with simultaneous maximization of the same objective function. In order to achieve numerical robustness and stability, the proposed algorithm uses a quasi-Newton method, a conjugate gradient method, a line search method with Wolfe conditions, the Adams approximation method for time discretization, and complex-step calculation of derivatives. The algorithm is evaluated in computer simulations on examples of first- and second-order nonlinear systems with analytical solutions of the H∞ control problem. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
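One ingredient named above, the complex-step calculation of derivatives, is compact enough to show directly; the test function below is an arbitrary example, not one from the paper.

```python
# Complex-step derivative: avoids the subtractive cancellation of finite differences.
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    # f must accept complex arguments; the result is accurate to ~machine precision.
    return np.imag(f(x + 1j * h)) / h

df = complex_step_derivative(lambda x: np.sin(x) * np.exp(x), 0.7)
# compare with the analytic derivative cos(x)*exp(x) + sin(x)*exp(x)
```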
16 pages, 1014 KiB  
Article
The Importance of Modeling Path Choice Behavior in the Vehicle Routing Problem
by Antonino Vitetta
Algorithms 2023, 16(1), 47; https://doi.org/10.3390/a16010047 - 10 Jan 2023
Cited by 2 | Viewed by 1881
Abstract
Given two pick-up and delivery points, the best path chosen does not necessarily follow the criteria of minimum travel time or generalized minimum cost evaluated with a deterministic approach. Given a criterion, the perceived cost is not deterministic for many reasons (congestion, incomplete information on the state of the system, inexact prediction of the system state, etc.). The same consideration applies to the best-chosen route, assuming that the route is an ordered list of network nodes to visit. The paths and routes perceived and chosen (by drivers or companies) could follow different criteria (i.e., minimum congested travel time for the path and minimum monetary cost for the route). In this context, the paths chosen between two pick-up and delivery points, studied with the path choice problem (PCP), influence the best route, studied with the vehicle routing problem (VRP). This paper reports some considerations on the importance of modelling path choice behavior in the VRP; the influence of the PCP on the VRP is studied. The considerations are supported by a numerical example in a small network in which the results obtained by adopting deterministic or probabilistic models for the PCP are compared. To validate the reported thesis, the models are applied in a small test system, which allows the reader to follow the numerical results step by step. Full article
(This article belongs to the Special Issue Optimization for Vehicle Routing Problems)
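The deterministic versus probabilistic path choice contrasted in the paper can be illustrated with a multinomial logit model (a common choice for probabilistic path choice; the paper's exact specification is not given here). The costs and dispersion parameter below are placeholders.

```python
# Deterministic vs. probabilistic (multinomial logit) path choice sketch.
import numpy as np

def logit_path_probabilities(costs, theta=0.5):
    """costs: perceived generalized costs of the alternative paths."""
    utilities = -theta * np.asarray(costs, dtype=float)
    utilities -= utilities.max()              # numerical stabilization
    expu = np.exp(utilities)
    return expu / expu.sum()

costs = [22.0, 24.0, 30.0]                    # three candidate paths
print(logit_path_probabilities(costs))        # probabilistic choice spreads demand
print(int(np.argmin(costs)))                  # deterministic choice picks path 0 only
```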
14 pages, 349 KiB  
Article
Hyperparameter Black-Box Optimization to Improve the Automatic Classification of Support Tickets
by Renato Bruni, Gianpiero Bianchi and Pasquale Papa
Algorithms 2023, 16(1), 46; https://doi.org/10.3390/a16010046 - 10 Jan 2023
Cited by 1 | Viewed by 2317
Abstract
User requests to a customer service, also known as tickets, are essentially short texts in natural language. They should be grouped by topic to be answered efficiently. The effectiveness increases if this semantic categorization becomes automatic. We pursue this goal by using text mining to extract the features from the tickets, and classification to perform the categorization. This is however a difficult multi-class problem, and the classification algorithm needs a suitable hyperparameter configuration to produce a practically useful categorization. As recently highlighted by several researchers, the selection of these hyperparameters is often the crucial aspect. Therefore, we propose to view the hyperparameter choice as a higher-level optimization problem where the hyperparameters are the decision variables and the objective is the predictive performance of the classifier. However, an explicit analytical model of this problem cannot be defined. Therefore, we propose to solve it as a black-box model by means of derivative-free optimization techniques. We conduct experiments on a relevant application: the categorization of the requests received by the Contact Center of the Italian National Statistics Institute (Istat). Results show that the proposed approach is able to effectively categorize the requests, and that its performance is increased by the proposed hyperparameter optimization. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
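A hedged sketch of the black-box view described above: the hyperparameters are the decision variables, cross-validated accuracy is the objective, and a derivative-free method searches the space. The dataset, classifier, and Nelder–Mead choice below are illustrative assumptions, not the authors' setup.

```python
# Derivative-free hyperparameter optimization of a classifier, treated as a black box.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(log_params):
    C, gamma = np.exp(log_params)                         # search in log space
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return -score                                         # minimize negative accuracy

result = minimize(objective, x0=np.log([1.0, 0.01]), method="Nelder-Mead",
                  options={"maxiter": 30})
best_C, best_gamma = np.exp(result.x)
```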
25 pages, 2715 KiB  
Article
Three Diverse Applications of General-Purpose Parameter Optimization Algorithm
by Yuanzhi Huo, Pradini Puspitaningayu, Nobuo Funabiki, Kazushi Hamazaki, Minoru Kuribayashi, Yihan Zhao and Kazuyuki Kojima
Algorithms 2023, 16(1), 45; https://doi.org/10.3390/a16010045 - 9 Jan 2023
Cited by 2 | Viewed by 1762
Abstract
Parameters often take key roles in determining the accuracy of algorithms, logics, and models for practical applications. Previously, we proposed a general-purpose parameter optimization algorithm and studied its applications to various practical problems. This algorithm optimizes parameter values by repeatedly making small changes to them based on a local search method with hill-climbing capabilities. In this paper, we present three diverse applications of this algorithm to show its versatility and effectiveness. The first application is the fingerprint-based indoor localization system using IEEE802.15.4 devices called FILS15.4, which can detect the location of a user in an indoor environment. It is shown that the number of fingerprints for each detection point, the fingerprint values, and the detection interval are optimized together, and the average detection accuracy exceeds 99%. The second application is the human face contour approximation model, which is described by a combination of half circles, line segments, and a quadratic curve. It is shown that these simple functions can approximate the face contours of various persons well by optimizing the center coordinates, radii, and coefficients. The third application is the computational fluid dynamics (CFD) simulation to estimate temperature changes in a room. It is shown that the thermal conductivity is optimized so that the average difference between the estimated and measured temperatures is 0.22 °C. Full article
(This article belongs to the Special Issue Algorithms in Complex Networks)
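A generic sketch of the local-search idea described above (not the authors' implementation): perturb one parameter at a time and keep the change only if an application-specific score improves. The step size, iteration count, and toy score below are placeholders.

```python
# Generic hill-climbing parameter tuner over a user-supplied score function.
import random

def hill_climb(params, score_fn, step=0.1, n_iter=500, seed=0):
    rng = random.Random(seed)
    best = dict(params)
    best_score = score_fn(best)
    for _ in range(n_iter):
        cand = dict(best)
        key = rng.choice(list(cand))
        cand[key] += rng.uniform(-step, step)     # small random change of one parameter
        s = score_fn(cand)
        if s > best_score:                        # hill-climbing acceptance rule
            best, best_score = cand, s
    return best, best_score

# Example: maximize a toy score whose optimum is at a=1, b=-2.
best, score = hill_climb({"a": 0.0, "b": 0.0},
                         lambda p: -(p["a"] - 1) ** 2 - (p["b"] + 2) ** 2)
```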
16 pages, 2958 KiB  
Article
Evolutionary Algorithm with Geometrical Heuristics for Solving the Close Enough Traveling Salesman Problem: Application to the Trajectory Planning of an Unmanned Aerial Vehicle
by Christophe Cariou, Laure Moiroux-Arvis, François Pinet and Jean-Pierre Chanet
Algorithms 2023, 16(1), 44; https://doi.org/10.3390/a16010044 - 9 Jan 2023
Cited by 5 | Viewed by 1993
Abstract
Evolutionary algorithms have been widely studied in the literature to find sub-optimal solutions to complex problems such as the Traveling Salesman Problem (TSP). In such a problem, the target positions are usually static and punctually defined, and the objective is to minimize a cost function such as the minimal distance, time or energy. However, in some applications, such as the one addressed in this paper—namely, the data collection of buried sensor nodes by means of an Unmanned Aerial Vehicle—the targets are areas with varying sizes: they are defined with respect to the radio communication range of each node, ranging from a few meters to several hundred meters according to various parameters (e.g., soil moisture, burial depth, transmit power). The Unmanned Aerial Vehicle has to enter these dynamic areas successively to collect the data, without the need to pass directly above each node. Some areas can obviously intersect, which leads to solving the Close Enough TSP. To determine a sub-optimal trajectory for the Unmanned Aerial Vehicle, this paper presents an original and efficient strategy based on an evolutionary algorithm completed with geometrical heuristics. The performance of the algorithm is highlighted through scenarios with 15 and 50 target locations, respectively, and the results are analyzed with respect to the total route length. Finally, conclusions and future research directions are discussed. Full article
(This article belongs to the Special Issue Metaheuristics Algorithms and Their Applications)
22 pages, 868 KiB  
Article
The D-Bar Algorithm Fusing Electrical Impedance Tomography with A Priori Radar Data: A Hands-On Analysis
by Jöran Rixen, Steffen Leonhardt, Jochen Moll, Duy Hai Nguyen and Chuong Ngo
Algorithms 2023, 16(1), 43; https://doi.org/10.3390/a16010043 - 9 Jan 2023
Cited by 2 | Viewed by 2049
Abstract
Electrical impedance tomography (EIT) is an imaging modality that can estimate a visualization of the conductivity distribution inside the human body. However, the spatial resolution of EIT is limited because measurements are sensitive to noise. We investigate a technique to incorporate a priori information into the EIT reconstructions of the D-Bar algorithm. Our paper aims to help engineers understand the behavior of the D-Bar algorithm and its implementation. The a priori information is provided by a radar setup and a one-dimensional reconstruction of the radar data. The EIT reconstruction is carried out with a D-Bar algorithm. An intermediate step in the D-Bar algorithm is the scattering transform. The a priori information is added in this exact step to increase the spatial resolution of the reconstruction. As the D-Bar algorithm is widely used in the mathematical community and has thus far seen limited usage in the engineering domain, we also aim to explain the implementation of the algorithm and give an intuitive understanding where possible. Different parameters of the reconstruction algorithm are analyzed systematically with the help of the GREIT figures of merit. Even limited one-dimensional a priori information can increase the reconstruction quality considerably, and artifacts from noisy EIT measurements are reduced. However, the selection of the amount of a priori information and the estimation of its value can worsen the reconstruction results again. Full article
30 pages, 6634 KiB  
Article
A Pixel-Wise k-Immediate Neighbour-Based Image Analysis Approach for Identifying Rock Pores and Fractures from Grayscale Image Samples
by Pradeep S. Naulia, Arunava Roy, Junzo Watada and Izzatdin B. A. Aziz
Algorithms 2023, 16(1), 42; https://doi.org/10.3390/a16010042 - 9 Jan 2023
Cited by 4 | Viewed by 2900
Abstract
The purpose of the current study is to propose a novel meta-heuristic image analysis approach using multi-objective optimization, named ‘Pixel-wise k-Immediate Neighbors’ to identify pores and fractures (both natural and induced, even in the micro-level) in the wells of a hydrocarbon reservoir, which presents better identification accuracy in the presence of the grayscale sample rock images. Pores and fractures imaging is currently being used extensively to predict the amount of petroleum under adequate trap conditions in the oil and gas industry. These properties have tremendous applications in contaminant transport, radioactive waste storage in the bedrock, and CO2 storage. A few strategies to automatically identify the pores and fractures from the images can be found in the contemporary literature. Several researchers employed classification technique using support vector machines (SVMs), whereas a few of them adopted deep learning systems. However, in these cases, the reported accuracy was not satisfactory in the presence of grayscale, low quality (poor resolution and chrominance), and irregular geometric-shaped images. The classification accuracy of the proposed multi-objective method outperformed the most influential contemporary approaches using deep learning systems, although with a few restrictions, which have been articulated later in the current work. Full article
15 pages, 1459 KiB  
Article
An Effective Staff Scheduling for Shift Workers in Social Welfare Facilities for the Disabled
by Hee Jun Ryu, Ye Na Jo, Won Jun Lee, Ji Won Cheong, Boo Yong Moon and Young Dae Ko
Algorithms 2023, 16(1), 41; https://doi.org/10.3390/a16010041 - 9 Jan 2023
Viewed by 3286
Abstract
The efficient management of social worker personnel is important for welfare facilities since it accounts for a huge portion of their operations. However, the burnout and turnover rates of social workers are very high due to dissatisfaction with irregular and unequal schedules, despite the continuous improvement in the treatment of social workers and the enactment of work-related legislation in Korea. This means that changes in policy do not significantly contribute to improving worker satisfaction, which shows the necessity of strategies to prevent worker turnover. Therefore, this study proposes a strategy for staff scheduling that considers fairness in the shift distribution among workers and individual preferences for shift work by using linear programming. A survey about preferences for shift work was conducted among the employees of a welfare facility in Korea to enhance the practicality of the model. The effectiveness and applicability of the developed mathematical model are verified in a numerical experiment by deriving a deterministic schedule using system parameters obtained from the survey and the rules of the welfare facility. Compared to the conventional schedule, the derived schedule shows an improvement in the deviations in the number of shifts among workers and a better reflection of personal preferences. This can raise social workers' satisfaction, decrease burnout and turnover intention, and consequently facilitate the management of human resources in welfare facilities. Full article
(This article belongs to the Special Issue Mathematical Models and Their Applications IV)
13 pages, 378 KiB  
Article
Solving the Parallel Drone Scheduling Traveling Salesman Problem via Constraint Programming
by Roberto Montemanni and Mauro Dell’Amico
Algorithms 2023, 16(1), 40; https://doi.org/10.3390/a16010040 - 8 Jan 2023
Cited by 13 | Viewed by 2983
Abstract
Drones are currently seen as a viable way of improving the distribution of parcels in urban and rural environments, while working in coordination with traditional vehicles, such as trucks. In this paper, we consider the parallel drone scheduling traveling salesman problem, where a set of customers requiring a delivery is split between a truck and a fleet of drones, with the aim of minimizing the total time required to serve all the customers. We propose a constraint programming model for the problem, discuss its implementation and present the results of an experimental program on the instances previously cited in the literature to validate exact and heuristic algorithms. We were able to decrease the cost (the time required to serve customers) for some of the instances and, for the first time, to provide a demonstrated optimal solution for all the instances considered. These results show that constraint programming can be a very effective tool for attacking optimization problems with traveling salesman components, such as the one discussed. Full article
14 pages, 1460 KiB  
Article
Fuzzy Algorithmic Modeling of Economics and Innovation Process Dynamics Based on Preliminary Component Allocation by Singular Spectrum Analysis Method
by Alexey F. Rogachev, Alexey B. Simonov, Natalia V. Ketko and Natalia N. Skiter
Algorithms 2023, 16(1), 39; https://doi.org/10.3390/a16010039 - 8 Jan 2023
Cited by 1 | Viewed by 1921
Abstract
In this article, the authors propose an algorithmic approach to building a model of the dynamics of economic and, in particular, innovation processes. The approach under consideration is based on a complex algorithm that includes (1) decomposition of the time series into components using singular spectrum analysis; (2) recognition of the optimal component model based on fuzzy rules, and (3) creation of statistical models of individual components with their combination. It is shown that this approach corresponds to the high uncertainty characteristic of the tasks of the dynamics of innovation processes. The proposed algorithm makes it possible to create effective models that can be used both for analysis and for predicting the future states of the processes under study. The advantage of this algorithm is the possibility to expand the base of rules and components used for modeling. This is an important condition for improving the algorithm and its applicability for solving a wide range of problems. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
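The first step of the algorithm, singular spectrum analysis, can be sketched as follows: embed the series in a trajectory matrix, take its SVD, and reconstruct leading components by diagonal (Hankel) averaging. The window length and component count below are placeholders, and the fuzzy-rule model recognition stage is not reproduced.

```python
# Basic singular spectrum analysis decomposition of a time series.
import numpy as np

def ssa_components(series, window, n_components):
    x = np.asarray(series, dtype=float)
    K = x.size - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for k in range(n_components):
        elem = s[k] * np.outer(U[:, k], Vt[k])                    # rank-1 elementary matrix
        flipped = elem[:, ::-1]
        comp = np.array([flipped.diagonal(d).mean()               # diagonal (Hankel) averaging
                         for d in range(-(window - 1), K)])[::-1]
        comps.append(comp)
    return comps  # trend / oscillatory / noise parts to be modeled separately

t = np.arange(200)
series = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.3 * np.random.randn(200)
comps = ssa_components(series, window=50, n_components=3)
```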
11 pages, 287 KiB  
Article
Nero: A Deterministic Leaderless Consensus Algorithm for DAG-Based Cryptocurrencies
by Rui Morais, Paul Crocker and Valderi Leithardt
Algorithms 2023, 16(1), 38; https://doi.org/10.3390/a16010038 - 7 Jan 2023
Cited by 7 | Viewed by 2918
Abstract
This paper presents the research undertaken with the goal of designing a consensus algorithm for cryptocurrencies with less latency than the current state-of-the-art while maintaining a level of throughput and scalability sufficient for real-world payments. The result is Nero, a new deterministic leaderless byzantine consensus algorithm in the partially synchronous model that is especially suited for Directed Acyclic Graph (DAG)-based cryptocurrencies. In fact, Nero has a communication complexity of O(n³) and terminates in two message delays in the good case (when there is synchrony). The algorithm is shown to be correct, and we also show that it can provide eventual order. Finally, some performance results are given based on a proof of concept implementation in the Rust language. Full article
(This article belongs to the Special Issue Blockchain Consensus Algorithms)
17 pages, 397 KiB  
Article
Ant-Balanced Multiple Traveling Salesmen: ACO-BmTSP
by Sílvia de Castro Pereira, Eduardo J. Solteiro Pires and Paulo B. de Moura Oliveira
Algorithms 2023, 16(1), 37; https://doi.org/10.3390/a16010037 - 7 Jan 2023
Cited by 5 | Viewed by 2739
Abstract
A new algorithm based on the ant colony optimization (ACO) method for the multiple traveling salesman problem (mTSP) is presented and defined as ACO-BmTSP. This paper addresses the problem of solving the mTSP while considering several salesmen and keeping both the total travel cost at the minimum and the tours balanced. Eleven different problems with several variants were analyzed to validate the method. The 20 variants considered three to twenty salesmen and 11 to 783 cities. The results were compared with the best-known solutions (BKSs) in the literature. Computational experiments showed that a total of eight final results were better than the BKSs, and the others were quite promising, showing that with few adaptations it will be possible to obtain better results than the BKSs. Although the ACO metaheuristic does not guarantee that the best solution will be found, it is essential for problems of non-deterministic polynomial time complexity or when used to provide an initial bound solution in an integer programming formulation. Computational experiments on a wide range of benchmark problems within an acceptable time limit showed that the proposed algorithm presented better results for several problems than four existing algorithms did. Full article
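The core ACO ingredients behind such a solver are compact: a pheromone/heuristic transition rule and an evaporation-plus-deposit update. The sketch below is generic; the parameters and the tour-balancing logic specific to ACO-BmTSP are not reproduced.

```python
# Generic ACO building blocks: transition probabilities and pheromone update.
import numpy as np

def next_city_probabilities(tau, eta, current, unvisited, alpha=1.0, beta=2.0):
    """tau: pheromone matrix, eta: heuristic desirability (e.g., 1/distance)."""
    weights = (tau[current, unvisited] ** alpha) * (eta[current, unvisited] ** beta)
    return weights / weights.sum()

def evaporate_and_deposit(tau, tours, tour_lengths, rho=0.1):
    tau *= (1.0 - rho)                              # evaporation
    for tour, length in zip(tours, tour_lengths):
        for a, b in zip(tour, tour[1:]):
            tau[a, b] += 1.0 / length               # deposit proportional to tour quality
            tau[b, a] += 1.0 / length
    return tau
```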
16 pages, 309 KiB  
Article
A Symbolic Method for Solving a Class of Convolution-Type Volterra–Fredholm–Hammerstein Integro-Differential Equations under Nonlocal Boundary Conditions
by Efthimios Providas and Ioannis Nestorios Parasidis
Algorithms 2023, 16(1), 36; https://doi.org/10.3390/a16010036 - 7 Jan 2023
Viewed by 1501
Abstract
Integro-differential equations involving Volterra and Fredholm operators (VFIDEs) are used to model many phenomena in science and engineering. Nonlocal boundary conditions are more effective, and in some cases necessary, because they are more accurate measurements of the true state than classical (local) initial and boundary conditions. Closed-form solutions are always desirable, not only because they are more efficient, but also because they can be valuable benchmarks for validating approximate and numerical procedures. This paper presents a direct operator method for solving, in closed form, a class of Volterra–Fredholm–Hammerstein-type integro-differential equations under nonlocal boundary conditions when the inverse operator of the associated Volterra integro-differential operator exists and can be found explicitly. A technique for constructing inverse operators of convolution-type Volterra integro-differential operators (VIDEs) under multipoint and integral conditions is provided. The proposed methods are suitable for integration into any computer algebra system. Several linear and nonlinear examples are solved to demonstrate the effectiveness of the method. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
2 pages, 172 KiB  
Editorial
Special Issue on Algorithms for PID Controllers 2021
by Ramiro S. Barbosa and Isabel S. Jesus
Algorithms 2023, 16(1), 35; https://doi.org/10.3390/a16010035 - 6 Jan 2023
Cited by 2 | Viewed by 2578
Abstract
The PID is the most common type of algorithm used in control system applications [...] Full article
(This article belongs to the Special Issue Algorithms for PID Controller 2021)
122 pages, 1505 KiB  
Systematic Review
Sybil in the Haystack: A Comprehensive Review of Blockchain Consensus Mechanisms in Search of Strong Sybil Attack Resistance
by Moritz Platt and Peter McBurney
Algorithms 2023, 16(1), 34; https://doi.org/10.3390/a16010034 - 6 Jan 2023
Cited by 23 | Viewed by 15359
Abstract
Consensus algorithms are applied in the context of distributed computer systems to improve their fault tolerance. The explosive development of distributed ledger technology following the proposal of ‘Bitcoin’ led to a sharp increase in research activity in this area. Specifically, public and permissionless networks require robust leader selection strategies resistant to Sybil attacks in which malicious attackers present bogus identities to induce byzantine faults. Our goal is to analyse the entire breadth of works in this area systematically, thereby uncovering trends and research directions regarding Sybil attack resistance in today’s blockchain systems to benefit the designs of the future. Through a systematic literature review, we condense an immense set of research records (N = 21,799) to a relevant subset (N = 483). We categorise these mechanisms by their Sybil attack resistance characteristics, leader selection methodology, and incentive scheme. Mechanisms with strong Sybil attack resistance commonly adopt the principles underlying ‘Proof-of-Work’ or ‘Proof-of-Stake’ while mechanisms with limited resistance often use reputation systems or physical world linking. We find that only a few fundamental paradigms exist that can resist Sybil attacks in a permissionless setting but discover numerous innovative mechanisms that can deliver weaker protection in system scenarios with smaller attack surfaces. Full article
(This article belongs to the Special Issue Blockchain Consensus Algorithms)
14 pages, 1125 KiB  
Article
Solving of the Inverse Boundary Value Problem for the Heat Conduction Equation in Two Intervals of Time
by Bashar Talib Al-Nuaimi, H.K. Al-Mahdawi, Zainalabideen Albadran, Hussein Alkattan, Mostafa Abotaleb and El-Sayed M. El-kenawy
Algorithms 2023, 16(1), 33; https://doi.org/10.3390/a16010033 - 6 Jan 2023
Cited by 19 | Viewed by 2171
Abstract
The boundary value problem (BVP) for the heat conduction PDE is studied and explained in this article. The problem statement comprises two intervals: the first, (0, T), describes the heating of the inside of the burning chamber, and the second, (T, ∞), describes the normal cooling of the chamber wall as the chamber temperature coincides with the ambient temperature. In order to successfully apply the Fourier transform method, it is necessary to prove that the boundary function of this problem belongs to the space H10, and the applicability of the Fourier transform in time to this problem is verified. The method of projection regularization is used to solve the inverse boundary value problem for the heat equation and to obtain an estimate of the error between the approximate and the exact solution. These results are new and of practical interest, as shown in the numerical case study. Full article
26 pages, 3109 KiB  
Article
Deep Reinforcement Learning-Based Dynamic Pricing for Parking Solutions
by Li Zhe Poh, Tee Connie, Thian Song Ong and Michael Kah Ong Goh
Algorithms 2023, 16(1), 32; https://doi.org/10.3390/a16010032 - 5 Jan 2023
Cited by 6 | Viewed by 4052
Abstract
The growth in the number of automobiles in metropolitan areas has drawn attention to the need for more efficient carpark control in public spaces such as healthcare facilities, retail stores, and office blocks. In this research, dynamic pricing is integrated with real-time parking data to optimise parking utilisation and reduce traffic jams. Dynamic pricing is the practice of changing the price of a product or service in response to market trends. This approach has the potential to manage car traffic in the parking space during peak and off-peak hours: the dynamic pricing method can set the parking fee at a higher rate during peak hours and a lower rate during off-peak times. A method called deep reinforcement learning-based dynamic pricing (DRL-DP) is proposed in this paper. Dynamic pricing is separated into episodes and shifted back and forth on an hourly basis. Parking utilisation rates and profits are viewed as incentives for pricing control. The simulation output illustrates that the proposed solution is credible and effective under circumstances where the parking market around the parking area is competitive among parking providers. Full article
16 pages, 2083 KiB  
Article
Optimization of Linear Quantization for General and Effective Low Bit-Width Network Compression
by Wenxin Yang, Xiaoli Zhi and Weiqin Tong
Algorithms 2023, 16(1), 31; https://doi.org/10.3390/a16010031 - 4 Jan 2023
Cited by 2 | Viewed by 2465
Abstract
Current edge devices for neural networks such as FPGAs, CPLDs, and ASICs can support low bit-width computing to improve the execution latency and energy efficiency, but traditional linear quantization can only maintain the inference accuracy of neural networks at a bit-width above 6 bits. Unlike previous studies that address this problem by clipping the outliers, this paper proposes a two-stage quantization method. Before converting the weights into fixed-point numbers, this paper first prunes the network by unstructured pruning and then uses the K-means algorithm to cluster the weights in advance to protect the distribution of the weights. To solve the instability problem of the K-means results, the PSO (particle swarm optimization) algorithm is exploited to obtain the initial cluster centroids. The experimental results on baseline deep networks such as ResNet-50, Inception-v3, and DenseNet-121 show that the proposed optimized quantization method can generate a 5-bit network with an accuracy loss of less than 5% and a 4-bit network with only 10% accuracy loss as compared to 8-bit quantization. By quantization and pruning, this method reduces the model bit-width from 32 to 4 and the number of neurons by 80%. Additionally, it can be easily integrated into frameworks such as TensorRT and TensorFlow Lite for low bit-width network quantization. Full article
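The clustering stage described above can be sketched with standard tooling (the PSO centroid initialization and the unstructured pruning step are omitted; the bit-width and toy weight matrix are placeholders): cluster the weights with K-means and snap each weight to its centroid before fixed-point conversion.

```python
# K-means weight-sharing quantization sketch: 2**n_bits shared levels per tensor.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights, n_bits=4, random_state=0):
    n_clusters = 2 ** n_bits                           # e.g., 16 levels for 4 bits
    w = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(w)
    centroids = km.cluster_centers_.ravel()
    quantized = centroids[km.labels_].reshape(weights.shape)  # snap each weight to its level
    return quantized, centroids

w = np.random.randn(64, 32).astype(np.float32)         # toy weight matrix
w_q, levels = kmeans_quantize(w, n_bits=4)
```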