2022 and 2023 Selected Papers from Algorithms Editorial Board Members

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (30 October 2023) | Viewed by 37470

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Guest Editor
Faculty of Mathematics, Otto-von-Guericke-University, P.O. Box 4120, D-39016 Magdeburg, Germany
Interests: scheduling, in particular the development of exact and approximate algorithms; stability investigations in discrete optimization; scheduling with interval processing times; complexity investigations for scheduling problems; train scheduling; graph theory; logistics; supply chains; packing; simulation and applications

Special Issue Information

Dear Colleagues,

I am pleased to announce the third edition of a Special Issue of Algorithms that is quite different from our typical ones, which mainly focus on either selected areas of research or special techniques. Being creative in many ways, with this Special Issue, Algorithms is compiling a collection of papers submitted exclusively by its Editorial Board Members (EBMs), covering different areas of algorithms and their applications. The main idea behind this issue is to turn the tables and allow our readers to be the judges of our board members.

Prof. Dr. Frank Werner
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (17 papers)


Editorial


4 pages, 141 KiB  
Editorial
Special Issue: “2022 and 2023 Selected Papers from Algorithms’ Editorial Board Members”
by Frank Werner
Algorithms 2024, 17(2), 65; https://doi.org/10.3390/a17020065 - 3 Feb 2024
Viewed by 1542
Abstract
This is the third edition of a Special Issue of Algorithms; it is of a rather different nature compared to other Special Issues in the journal, which are usually dedicated to a particular subject in the area of algorithms [...] Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)

Research


29 pages, 911 KiB  
Article
Efficient Time-Series Clustering through Sparse Gaussian Modeling
by Dimitris Fotakis, Panagiotis Patsilinakos, Eleni Psaroudaki and Michalis Xefteris
Algorithms 2024, 17(2), 61; https://doi.org/10.3390/a17020061 - 30 Jan 2024
Viewed by 2155
Abstract
In this work, we consider the problem of shape-based time-series clustering with the widely used Dynamic Time Warping (DTW) distance. We present a novel two-stage framework based on Sparse Gaussian Modeling. In the first stage, we apply Sparse Gaussian Process Regression and obtain a sparse representation of each time series in the dataset with a logarithmic (in the original length T) number of inducing data points. In the second stage, we apply k-means with DTW Barycentric Averaging (DBA) to the sparsified dataset using a generalization of DTW, which accounts for the fact that each inducing point serves as a representative of many original data points. The asymptotic running time of our Sparse Time-Series Clustering framework is Ω(T²/log² T) times faster than the running time of applying k-means to the original dataset because sparsification reduces the running time of DTW from Θ(T²) to Θ(log² T). Moreover, sparsification tends to smoothen outliers and particularly noisy parts of the original time series. We conduct an extensive experimental evaluation using datasets from the UCR Time-Series Classification Archive, showing that the quality of clustering computed by our Sparse Time-Series Clustering framework is comparable to the clustering computed by the standard k-means algorithm. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
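
The quadratic cost of DTW is what the sparsification attacks: once each series is reduced to a logarithmic number of inducing points, the same dynamic program runs on logarithmic-length inputs. As a point of reference only, here is a minimal Python sketch of the classical DTW recurrence; the paper's weighted generalization for inducing points is not reproduced.

```python
import math

def dtw_distance(a, b):
    """Classical dynamic-time-warping distance between two real-valued
    sequences, computed in O(len(a) * len(b)) time and space."""
    n, m = len(a), len(b)
    INF = math.inf
    # cost[i][j] = DTW cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: same shape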

23 pages, 7475 KiB  
Article
Ensemble Heuristic–Metaheuristic Feature Fusion Learning for Heart Disease Diagnosis Using Tabular Data
by Mohammad Shokouhifar, Mohamad Hasanvand, Elaheh Moharamkhani and Frank Werner
Algorithms 2024, 17(1), 34; https://doi.org/10.3390/a17010034 - 14 Jan 2024
Cited by 6 | Viewed by 2457
Abstract
Heart disease is a global health concern of paramount importance, causing a significant number of fatalities and disabilities. Precise and timely diagnosis of heart disease is pivotal in preventing adverse outcomes and improving patient well-being, thereby creating a growing demand for intelligent approaches to predict heart disease effectively. This paper introduces an ensemble heuristic–metaheuristic feature fusion learning (EHMFFL) algorithm for heart disease diagnosis using tabular data. Within the EHMFFL algorithm, a diverse ensemble learning model is crafted, featuring different feature subsets for each heterogeneous base learner, including support vector machine, K-nearest neighbors, logistic regression, random forest, naive Bayes, decision tree, and XGBoost techniques. The primary objective is to identify the most pertinent features for each base learner, leveraging a combined heuristic–metaheuristic approach that integrates the heuristic knowledge of the Pearson correlation coefficient with the metaheuristic-driven grey wolf optimizer. The second objective is to aggregate the decision outcomes of the various base learners through ensemble learning. The performance of the EHMFFL algorithm is rigorously assessed using the Cleveland and Statlog datasets, yielding remarkable results with accuracies of 91.8% and 88.9%, respectively, surpassing state-of-the-art techniques in heart disease diagnosis. These findings underscore the potential of the EHMFFL algorithm in enhancing diagnostic accuracy for heart disease and providing valuable support to clinicians in making more informed decisions regarding patient care. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
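
Purely as an illustration of the ensemble structure described above (a different feature subset per heterogeneous base learner, followed by decision aggregation), here is a hedged scikit-learn sketch. The Pearson-correlation ranking stands in for the paper's combined heuristic–metaheuristic selection; the grey wolf optimizer and the exact learner pool are not reproduced, and all data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for a 13-feature tabular heart-disease dataset.
X, y = make_classification(n_samples=400, n_features=13, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def top_k_by_pearson(X, y, k):
    """Heuristic feature ranking: |Pearson correlation| with the label."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(r)[-k:]

# One (possibly different) feature subset per heterogeneous base learner.
learners = [(LogisticRegression(max_iter=1000), top_k_by_pearson(Xtr, ytr, 8)),
            (KNeighborsClassifier(), top_k_by_pearson(Xtr, ytr, 6)),
            (RandomForestClassifier(random_state=0), top_k_by_pearson(Xtr, ytr, 10))]

votes = []
for model, feats in learners:
    model.fit(Xtr[:, feats], ytr)
    votes.append(model.predict(Xte[:, feats]))

# Majority-vote aggregation of the base learners' decisions.
ensemble = (np.mean(votes, axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble == yte).mean())
```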

16 pages, 270 KiB  
Article
Compact Models to Solve the Precedence-Constrained Minimum-Cost Arborescence Problem with Waiting Times
by Mauro Dell’Amico, Jafar Jamal and Roberto Montemanni
Algorithms 2024, 17(1), 12; https://doi.org/10.3390/a17010012 - 27 Dec 2023
Viewed by 1666
Abstract
The minimum-cost arborescence problem is a well-studied problem for which polynomial-time algorithms exist. Recently, a new variation of the problem, called the Precedence-Constrained Minimum-Cost Arborescence Problem with Waiting Times, was presented and proven to be NP-hard. In this work, we propose new polynomial-size models for the problem that are considerably smaller than those previously proposed. We experimentally evaluate and compare each new model in terms of computation time and quality of the solutions. Several improvements to the best-known upper and lower bounds of optimal solution costs emerge from the study. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
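
For orientation, the classical unconstrained problem that this variant generalizes is solvable in polynomial time with Edmonds' algorithm; a minimal networkx sketch is shown below. The precedence constraints and waiting times of the NP-hard variant, and the paper's compact models, are not reproduced here.

```python
import networkx as nx

# Classical minimum-cost (spanning) arborescence on a small digraph.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("r", "a", 4), ("r", "b", 2),
    ("b", "a", 1), ("a", "c", 3), ("b", "c", 5),
])

# Edmonds' algorithm, as implemented in networkx; runs in polynomial time.
arb = nx.minimum_spanning_arborescence(G)
print(sorted(arb.edges(data="weight")))  # [('a','c',3), ('b','a',1), ('r','b',2)]
```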

15 pages, 464 KiB  
Article
Enhancing Cryptocurrency Price Forecasting by Integrating Machine Learning with Social Media and Market Data
by Loris Belcastro, Domenico Carbone, Cristian Cosentino, Fabrizio Marozzo and Paolo Trunfio
Algorithms 2023, 16(12), 542; https://doi.org/10.3390/a16120542 - 27 Nov 2023
Cited by 5 | Viewed by 3319
Abstract
Since the advent of Bitcoin, the cryptocurrency landscape has seen the emergence of several virtual currencies that have quickly established their presence in the global market. The dynamics of this market, influenced by a multitude of factors that are difficult to predict, pose a challenge to fully comprehend its underlying insights. This paper proposes a methodology for suggesting when it is appropriate to buy or sell cryptocurrencies, in order to maximize profits. Starting from large sets of market and social media data, our methodology combines different statistical, text analytics, and deep learning techniques to support a recommendation trading algorithm. In particular, we exploit additional information such as correlation between social media posts and price fluctuations, causal connection among prices, and the sentiment of social media users regarding cryptocurrencies. Several experiments were carried out on historical data to assess the effectiveness of the trading algorithm, achieving an overall average gain of 194% without transaction fees and 117% when considering fees. In particular, among the different types of cryptocurrencies considered (i.e., high capitalization, solid projects, and meme coins), the trading algorithm has proven to be very effective in predicting the price trends of influential meme coins, yielding considerably higher profits compared to other cryptocurrency types. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
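
As a toy illustration of the kind of recommendation rule such a methodology can feed (not the paper's algorithm), the following sketch combines a short-term momentum signal with a sentiment score; all prices, sentiment values, and thresholds are invented.

```python
# Toy buy/sell recommender: combine a simple momentum signal with a
# social-media sentiment score (both inputs are synthetic here).
prices =    [100, 102, 101, 105, 110, 108, 115, 120, 118, 125]
sentiment = [0.1, 0.3, 0.2, 0.6, 0.7, 0.4, 0.8, 0.9, 0.5, 0.7]

def signal(t, window=3, s_buy=0.5):
    """BUY when short-term momentum and sentiment agree upward,
    SELL when both turn downward, HOLD otherwise."""
    if t < window:
        return "HOLD"
    momentum = prices[t] - sum(prices[t - window:t]) / window
    if momentum > 0 and sentiment[t] >= s_buy:
        return "BUY"
    if momentum < 0 and sentiment[t] < s_buy:
        return "SELL"
    return "HOLD"

for t in range(len(prices)):
    print(t, prices[t], signal(t))
```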

18 pages, 837 KiB  
Article
HyperDE: An Adaptive Hyper-Heuristic for Global Optimization
by Alexandru-Razvan Manescu and Bogdan Dumitrescu
Algorithms 2023, 16(9), 451; https://doi.org/10.3390/a16090451 - 20 Sep 2023
Cited by 3 | Viewed by 1845
Abstract
In this paper, a novel global optimization approach in the form of an adaptive hyper-heuristic, namely HyperDE, is proposed. As the name suggests, the method is based on the Differential Evolution (DE) heuristic, which is a well-established optimization approach inspired by the theory of evolution. Additionally, two other similar approaches are introduced for comparison and validation, HyperSSA and HyperBES, based on the Sparrow Search Algorithm (SSA) and Bald Eagle Search (BES), respectively. The method consists of a genetic algorithm that is adopted as a high-level online learning mechanism, in order to adjust the hyper-parameters and facilitate the collaboration of a homogeneous set of low-level heuristics with the intent of maximizing the performance of the search for global optima. Comparison with the heuristics that the proposed methodologies are based on, along with other state-of-the-art methods, is favorable. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
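
The following hedged sketch illustrates the hyper-heuristic idea on a toy objective: a low-level Differential Evolution step whose hyper-parameters (F, CR) are adapted by a naive high-level search. The genetic controller, the pool of collaborating low-level heuristics, and HyperSSA/HyperBES are not reproduced; everything below is an invented stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: np.sum(x ** 2)  # toy objective to minimize

def de_step(pop, fit, F, CR):
    """One generation of DE/rand/1/bin (index collisions ignored for brevity)."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, size=3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        trial = np.where(cross, mutant, pop[i])
        f = sphere(trial)
        if f < fit[i]:                      # greedy selection
            pop[i], fit[i] = trial, f
    return pop, fit

pop = rng.uniform(-5, 5, size=(20, 4))
fit = np.array([sphere(x) for x in pop])

# Naive high-level adaptation: try perturbed (F, CR) pairs each epoch and
# keep the best-performing setting -- a crude stand-in for the genetic
# online learning mechanism used by HyperDE.
F, CR = 0.8, 0.9
for epoch in range(30):
    candidates = [(F, CR), (min(F + 0.1, 1.5), CR), (F, max(CR - 0.1, 0.1))]
    results = []
    for f_, cr_ in candidates:
        p, ft = de_step(pop.copy(), fit.copy(), f_, cr_)
        results.append((ft.min(), f_, cr_, p, ft))
    _, F, CR, pop, fit = min(results, key=lambda r: r[0])

print("best value:", fit.min(), "adapted F, CR:", F, CR)
```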

10 pages, 352 KiB  
Article
A Multi-Objective Degree-Based Network Anonymization Method
by Ola N. Halawi, Faisal N. Abu-Khzam and Sergio Thoumi
Algorithms 2023, 16(9), 436; https://doi.org/10.3390/a16090436 - 11 Sep 2023
Viewed by 1177
Abstract
Enormous amounts of data collected from social networks or other online platforms are being published for the sake of statistics, marketing, and research, among other objectives. The consequent privacy and data security concerns have motivated the work on degree-based data anonymization. In this paper, we propose and study a new multi-objective anonymization approach that generalizes the known degree anonymization problem and attempts to improve on it as a more realistic model for data security/privacy. Our suggested model guarantees a convenient privacy level, based on modifying the degrees in a way that respects some given local restrictions, per node, such that the total modifications at the global level (in the whole graph/network) are bounded by some given value. The corresponding multi-objective graph realization approach is formulated and solved using Integer Linear Programming to obtain an optimum solution. Our thorough experimental studies provide empirical evidence of the effectiveness of the new approach, by specifically showing that the introduced anonymization algorithm has a negligible effect on the way nodes are clustered, thereby preserving valuable network information while significantly improving data privacy. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
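
The sketch below is a greedy, purely illustrative take on the underlying idea: make every degree value occur at least k times while respecting a per-node cap and a global modification budget. The paper's Integer Linear Programming formulation and its multi-objective treatment are not reproduced.

```python
def k_anonymize_degrees(degrees, k, local_cap, global_budget):
    """Greedily raise degrees so every degree value is shared by >= k nodes,
    changing each node by at most `local_cap` and all nodes by at most
    `global_budget` in total.  Assumes len(degrees) is a multiple of k,
    for brevity.  Returns the new sequence, or None if infeasible."""
    order = sorted(range(len(degrees)), key=lambda i: degrees[i])
    new = list(degrees)
    spent = 0
    for start in range(0, len(order), k):
        group = order[start:start + k]
        target = max(new[i] for i in group)  # raise the group to its max degree
        for i in group:
            delta = target - new[i]
            if delta > local_cap or spent + delta > global_budget:
                return None  # violates the local or global restriction
            new[i] += delta
            spent += delta
    return new

print(k_anonymize_degrees([1, 1, 2, 3, 3, 4], k=2, local_cap=1, global_budget=3))
# [1, 1, 3, 3, 4, 4]: every degree value now occurs at least twice
```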

22 pages, 2561 KiB  
Article
A Hybrid Simulation and Reinforcement Learning Algorithm for Enhancing Efficiency in Warehouse Operations
by Jonas F. Leon, Yuda Li, Xabier A. Martin, Laura Calvet, Javier Panadero and Angel A. Juan
Algorithms 2023, 16(9), 408; https://doi.org/10.3390/a16090408 - 27 Aug 2023
Cited by 2 | Viewed by 2771
Abstract
The use of simulation and reinforcement learning can be viewed as a flexible approach to aid managerial decision-making, particularly in the face of growing complexity in manufacturing and logistic systems. Efficient supply chains heavily rely on streamlined warehouse operations, and therefore, having a well-informed storage location assignment policy is crucial for their improvement. The traditional methods found in the literature for tackling the storage location assignment problem have certain drawbacks, including the omission of stochastic process variability or the neglect of interaction between various warehouse workers. In this context, we explore the possibilities of combining simulation with reinforcement learning to develop effective mechanisms that allow for the quick acquisition of information about a complex environment, the processing of that information, and then the decision-making about the best storage location assignment. In order to test these concepts, we make use of the FlexSim commercial simulator. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
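
As a hedged illustration of coupling a (here trivial) simulator with reinforcement learning for storage location assignment, the sketch below learns a slot preference per demand class with a bandit-style update. FlexSim and the paper's environment are not reproduced; the demand classes, travel times, and blocking penalty are all invented.

```python
import random

random.seed(1)
SLOTS = [0, 1, 2]                   # storage slots; 0 is closest to the dock
TRAVEL = {0: 1.0, 1: 2.0, 2: 3.0}   # travel time to each slot (invented)

Q = {(d, s): 0.0 for d in ("fast", "slow") for s in SLOTS}
alpha, eps = 0.1, 0.2

def simulate(demand, slot):
    """Tiny stochastic simulator: picking cost grows with travel time, and a
    slow mover parked near the dock blocks fast picks (invented penalty)."""
    trips = random.randint(3, 6) if demand == "fast" else random.randint(1, 2)
    cost = trips * TRAVEL[slot]
    if demand == "slow" and slot == 0:
        cost += 8.0
    return -cost                     # negative cost = reward

for episode in range(5000):
    demand = random.choice(("fast", "slow"))
    if random.random() < eps:        # explore
        slot = random.choice(SLOTS)
    else:                            # exploit the current value estimates
        slot = max(SLOTS, key=lambda s: Q[(demand, s)])
    r = simulate(demand, slot)
    Q[(demand, slot)] += alpha * (r - Q[(demand, slot)])

for d in ("fast", "slow"):
    print(d, "movers ->", "slot", max(SLOTS, key=lambda s: Q[(d, s)]))
```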

11 pages, 775 KiB  
Article
A Neural-Network-Based Competition between Short-Lived Particle Candidates in the CBM Experiment at FAIR
by Artemiy Belousov, Ivan Kisel and Robin Lakos
Algorithms 2023, 16(8), 383; https://doi.org/10.3390/a16080383 - 9 Aug 2023
Viewed by 1436
Abstract
Fast and efficient algorithms optimized for high-performance computers are crucial for the real-time analysis of data in heavy-ion physics experiments. Furthermore, the application of neural networks and other machine learning techniques has become more popular in physics experiments over recent years. For that reason, a fast neural network package called ANN4FLES was developed in C++; it will be optimized to be used on a high-performance computer farm for the future Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany). This paper describes the first application of ANN4FLES in the reconstruction chain of the CBM experiment, replacing the existing particle competition between K_s mesons and Λ hyperons in the KF Particle Finder with a neural-network-based approach. The raw classification performance of the neural network reaches over 98% on the testing set. Furthermore, it is shown that the neural-network-based competition reduced the background noise and thereby improved the quality of the physics analysis. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
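
ANN4FLES itself is a C++ package; purely to illustrate the candidate-competition idea as a binary classification task, here is a self-contained numpy sketch with synthetic candidate features. The feature names in the comments are assumptions for illustration, not the experiment's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic candidate features (think: invariant mass, decay length, chi^2);
# label 1 = Lambda hypothesis, 0 = K_s hypothesis.  Purely illustrative.
X = rng.normal(size=(1000, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + 0.3 * rng.normal(size=1000) > 0).astype(float)

# One-layer logistic "network" trained with plain gradient descent.
w, b = np.zeros(3), 0.0
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient of the log-loss
    b -= 0.5 * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```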

22 pages, 451 KiB  
Article
Evolving Multi-Output Digital Circuits Using Multi-Genome Grammatical Evolution
by Michael Tetteh, Allan de Lima, Jack McEllin, Aidan Murphy, Douglas Mota Dias and Conor Ryan
Algorithms 2023, 16(8), 365; https://doi.org/10.3390/a16080365 - 28 Jul 2023
Viewed by 1363
Abstract
Grammatical Evolution is a Genetic Programming variant which evolves programs in any arbitrary language that is BNF-compliant. Since its inception, Grammatical Evolution has been used to solve real-world problems in different domains such as bio-informatics, architecture design, financial modelling, music, software testing, game artificial intelligence and parallel programming. Multi-output problems deal with predicting numerous output variables simultaneously, a notoriously difficult task. We present a Multi-Genome Grammatical Evolution that is better suited for tackling multi-output problems, specifically digital circuits. The Multi-Genome consists of multiple genomes, each evolving a solution to a single unique output variable. Each genome is mapped to create its executable object. The mapping mechanism and the genetic, selection, and replacement operators have been adapted to make them well suited for the Multi-Genome representation, together with the implementation of a new wrapping operator. Additionally, custom grammar syntax rules and a cyclic dependency-checking algorithm are presented to facilitate the evolution of inter-output dependencies which may exist in multi-output problems. Multi-Genome Grammatical Evolution is tested on combinational digital circuit benchmark problems. Results show that Multi-Genome Grammatical Evolution performs significantly better than standard Grammatical Evolution on these benchmark problems. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
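
The core of any Grammatical Evolution system is the codon-driven genotype-to-phenotype mapping; a minimal sketch follows, with a toy Boolean grammar and one genome per output in the spirit of the multi-genome representation. The grammar, genomes, and wrapping limit are illustrative assumptions, not the paper's setup.

```python
# Minimal genotype-to-phenotype mapping in the style of Grammatical
# Evolution: each codon selects a production rule modulo the rule count.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["and"], ["or"], ["xor"]],
    "<var>":  [["a"], ["b"], ["c"]],
}

def map_genome(genome, symbol="<expr>", max_wraps=2):
    """Expand `symbol` left-to-right, consuming one codon per choice.
    The genome is reused ('wrapped') up to max_wraps extra times."""
    out, stack, idx = [], [symbol], 0
    while stack:
        if idx >= len(genome) * (max_wraps + 1):
            raise ValueError("genome exhausted")
        sym = stack.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            choice = genome[idx % len(genome)] % len(rules)
            idx += 1
            stack = list(rules[choice]) + stack
        else:
            out.append(sym)          # terminal symbol, no codon consumed
    return " ".join(out)

# In a multi-genome setting, one genome per circuit output:
genomes = [[0, 1, 2, 1, 1, 0], [1, 2], [0, 0, 1, 0, 1, 1]]
for g in genomes:
    print(map_genome(g))   # e.g. "c or a", "c", "a or a and a"
```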

14 pages, 670 KiB  
Article
Design of Cloud-Based Real-Time Eye-Tracking Monitoring and Storage System
by Mustafa Can Gursesli, Mehmet Emin Selek, Mustafa Oktay Samur, Mirko Duradoni, Kyoungju Park, Andrea Guazzini and Antonio Lanatà
Algorithms 2023, 16(7), 355; https://doi.org/10.3390/a16070355 - 24 Jul 2023
Cited by 7 | Viewed by 2210
Abstract
The rapid development of technology has led to the implementation of data-driven systems whose performance heavily relies on the amount and type of data. In recent decades, in the field of bioengineering data management, eye-tracking data have become one of the most interesting and essential components for many medical, psychological, and engineering research applications. However, despite the wide usage of eye-tracking data in many studies and applications, a strong gap remains in the literature regarding real-time data collection and management, which leads to strong constraints on the reliability and accuracy of on-time results. To address this gap, this study introduces a system that enables the collection, processing, real-time streaming, and storage of eye-tracking data. The system was developed using the Java programming language, the WebSocket protocol, and Representational State Transfer (REST), improving the efficiency in transferring and managing eye-tracking data. The results were computed in two test conditions, i.e., local and online scenarios, within a time window of 100 seconds. The experiments compared the time delay between the two scenarios, and preliminary results showed significantly improved performance in managing real-time data transfer. Overall, this system can significantly benefit the research community by providing real-time data transfer and storage, enabling more extensive studies using eye-tracking data. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
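
The paper's system is implemented in Java with WebSocket and REST; purely as a sketch of the streaming pattern (an assumption on my part, not the authors' code), here is a minimal Python server using the third-party websockets library (version 10 or later) that timestamps and buffers incoming gaze samples.

```python
import asyncio
import json
import time

import websockets  # third-party: pip install websockets (>= 10)

BUFFER = []  # in-memory stand-in for the storage backend

async def stream_gaze(websocket):
    """Receive gaze samples in real time, timestamp them server-side
    (useful for delay statistics), and buffer them for storage."""
    async for message in websocket:
        sample = json.loads(message)       # e.g. {"x": ..., "y": ..., "t": ...}
        sample["server_t"] = time.time()
        BUFFER.append(sample)
        await websocket.send(json.dumps({"ack": len(BUFFER)}))

async def main():
    async with websockets.serve(stream_gaze, "localhost", 8765):
        await asyncio.Future()             # run forever

if __name__ == "__main__":
    asyncio.run(main())
```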

12 pages, 2900 KiB  
Article
Neural-Network-Based Quark–Gluon Plasma Trigger for the CBM Experiment at FAIR
by Artemiy Belousov, Ivan Kisel, Robin Lakos and Akhil Mithran
Algorithms 2023, 16(7), 344; https://doi.org/10.3390/a16070344 - 18 Jul 2023
Viewed by 1454
Abstract
Algorithms optimized for high-performance computing, which ensure both speed and accuracy, are crucial for real-time data analysis in heavy-ion physics experiments. The application of neural networks and other machine learning methodologies, which are fast and have high accuracy, in physics experiments has become increasingly popular over recent years. This paper introduces a fast neural network package named ANN4FLES developed in C++, which has been optimized for use on a high-performance computing cluster for the future Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany). The use of neural networks for classifying events during heavy-ion collisions in the CBM experiment is under investigation. This paper provides a detailed description of the application of ANN4FLES in identifying collisions where a quark–gluon plasma (QGP) was produced. The methodology detailed here will be used in the development of a QGP trigger for event selection within the First Level Event Selection (FLES) package for the CBM experiment. Fully-connected and convolutional neural networks have been created for the identification of events containing QGP, which are simulated with the Parton–Hadron–String Dynamics (PHSD) microscopic off-shell transport approach, for central Au + Au collisions at an energy of 31.2 A GeV. The results show that the convolutional neural network outperforms the fully-connected networks and achieves over 95% accuracy on the testing dataset. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
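
As a hedged sketch of the convolutional approach (the architecture, input binning, and sizes below are invented, not the paper's network), the following PyTorch snippet classifies synthetic event "images" into QGP / no-QGP.

```python
import torch
import torch.nn as nn

# Toy stand-in: treat each event as a 2D occupancy histogram of produced
# particles (e.g., rapidity x transverse-momentum bins) and classify
# QGP vs. non-QGP.  Shapes and physics content are illustrative only.
class QGPNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(16 * 8 * 8, 2)  # 2 classes: QGP / no QGP

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = QGPNet()
events = torch.randn(4, 1, 32, 32)   # batch of 4 synthetic event histograms
logits = model(events)
print(logits.shape)                  # torch.Size([4, 2])
```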

22 pages, 902 KiB  
Article
Optimization of Selection and Use of a Machine and Tractor Fleet in Agricultural Enterprises: A Case Study
by Andrei A. Efremov, Yuri N. Sotskov and Yulia S. Belotzkaya
Algorithms 2023, 16(7), 311; https://doi.org/10.3390/a16070311 - 21 Jun 2023
Cited by 2 | Viewed by 1675
Abstract
This article presents a realized application of a model and algorithm to optimize the formation and use of a machine and tractor fleet of an agricultural enterprise in crop farming. The concepts and indicators characterizing the processes of agricultural operations of the machine fleet in the agrarian business are considered. A classification of approaches for optimizing the implementation of a complex of mechanized agro-technical operations is given. We systematize different views on the problems under study and possible solutions. The advantages of the proposed model and algorithm, as well as the problematic aspects of their information and instrumental support, are discussed. The problem of choosing the optimality criterion when formulating the problem of optimizing agricultural operations by a fleet of machines is considered. A modification of the economic and mathematical model for optimizing the structure and production schedules of the machine and tractor fleet is developed. The model is applied in a numerical experiment using real data from a specific agricultural enterprise, and the economic interpretation of the results is discussed. We apply an approach for determining the economic effect of the use of the developed model and algorithm. The possibilities for practical application of the obtained results are substantiated. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
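
A minimal linear-programming sketch of the flavor of such a model follows, assigning machine hours to field operations under availability and demand constraints. The machine data, operations, and the PuLP formulation are illustrative assumptions, not the paper's model.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value
# third-party: pip install pulp (ships with the CBC solver)

# Toy data: machine type -> (available hours, operating cost per hour),
# operation -> required hours.  All numbers are invented.
machines = {"tractor_A": (120, 35.0), "tractor_B": (80, 28.0)}
operations = {"ploughing": 90, "sowing": 60}

prob = LpProblem("fleet_use", LpMinimize)
x = {(m, o): LpVariable(f"x_{m}_{o}", lowBound=0)
     for m in machines for o in operations}

# Objective: total operating cost of the fleet.
prob += lpSum(machines[m][1] * x[m, o] for m in machines for o in operations)
for o, need in operations.items():         # each operation fully covered
    prob += lpSum(x[m, o] for m in machines) >= need
for m, (avail, _) in machines.items():     # respect machine availability
    prob += lpSum(x[m, o] for o in operations) <= avail

prob.solve()
for key, var in x.items():
    print(key, var.value())
print("total cost:", value(prob.objective))  # cheaper tractor_B used first
```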

15 pages, 591 KiB  
Article
Learning Individualized Hyperparameter Settings
by Vittorio Maniezzo and Tingting Zhou
Algorithms 2023, 16(6), 267; https://doi.org/10.3390/a16060267 - 26 May 2023
Cited by 2 | Viewed by 1296
Abstract
The performance of optimization algorithms, and consequently of AI/machine learning solutions, is strongly influenced by the setting of their hyperparameters. Over the last decades, a rich literature has developed proposing methods to automatically determine the parameter setting for a problem of interest, aiming at either robust or instance-specific settings. Robust setting optimization is already a mature area of research, while instance-level setting is still in its infancy, with contributions mainly dealing with algorithm selection. The work reported in this paper belongs to the latter category, exploiting the learning and generalization capabilities of artificial neural networks to adapt a general setting generated by state-of-the-art automatic configurators. Our approach differs significantly from analogous ones in the literature, both because we rely on neural systems to suggest the settings, and because we propose a novel learning scheme in which different outputs are proposed for each input, in order to support generalization from examples. The approach was validated on two different algorithms that optimized instances of two different problems. We used an algorithm that is very sensitive to parameter settings, applied to generalized assignment problem instances, and a robust tabu search that is purportedly little sensitive to its settings, applied to quadratic assignment problem instances. The computational results in both cases attest to the effectiveness of the approach, especially when applied to instances that are structurally very different from those previously encountered. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
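
The following sketch illustrates the instance-level idea with scikit-learn: a small neural network learns a mapping from instance features to a parameter setting that worked well, and then suggests an individualized setting for an unseen instance. The features, the "tabu tenure" target, and the training data are all invented; the paper's novel multi-output learning scheme is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: instance features (think: size, density, cost
# spread) -> parameter setting that performed well on that instance.
features = rng.uniform(0, 1, size=(200, 3))
# Made-up ground truth: a good tabu tenure grows with size and density.
best_setting = 5 + 20 * features[:, 0] + 10 * features[:, 1] ** 2
best_setting += rng.normal(0, 0.5, size=200)   # tuning noise

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(features, best_setting)

# For an unseen instance, predict an individualized setting instead of
# falling back on a single robust value for all instances.
new_instance = np.array([[0.9, 0.2, 0.5]])
print("suggested tenure:", net.predict(new_instance)[0])
```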

21 pages, 740 KiB  
Article
Convergence and Stability of a New Parametric Class of Iterative Processes for Nonlinear Systems
by Alicia Cordero, Javier G. Maimó, Antmel Rodríguez-Cabral and Juan R. Torregrosa
Algorithms 2023, 16(3), 163; https://doi.org/10.3390/a16030163 - 16 Mar 2023
Cited by 1 | Viewed by 1583
Abstract
In this manuscript, we carry out a study on the generalization of a known family of multipoint scalar iterative processes for approximating the solutions of nonlinear systems. The convergence analysis of the proposed class under various smoothness conditions is provided. We also study the stability of this family, analyzing the fixed and critical points of the rational operator resulting from applying the family to low-degree polynomials, as well as the basins of attraction and the orbits (periodic or not) that these points produce. This dynamical study also allows us to observe which members of the family are more stable and which have chaotic behavior. Graphical analyses of dynamical planes, parameter lines and bifurcation planes are also presented. Numerical tests are performed on different nonlinear systems to check the theoretical results and to compare the proposed schemes with other known ones. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
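
A hedged numerical sketch of a parametric two-step (multipoint) scheme of this general flavor is given below for a toy 2-by-2 system. The specific predictor-corrector with a frozen Jacobian and the weight beta is an illustrative choice, not the family studied in the paper.

```python
import numpy as np

def F(x):
    """Toy nonlinear system F(x) = 0, with exact solution (1, 2)."""
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

def J(x):
    """Jacobian of F."""
    return np.array([[2 * x[0], 1.0],
                     [1.0, 2 * x[1]]])

def two_step(x, beta=1.0):
    """Parametric two-step scheme: a Newton predictor followed by a
    beta-weighted corrector that reuses (freezes) the same Jacobian."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))             # Newton step (predictor)
    return y - beta * np.linalg.solve(Jx, F(y))   # frozen-Jacobian corrector

x = np.array([1.2, 1.8])    # starting guess near the root
for k in range(6):
    x = two_step(x, beta=1.0)
    print(k, x, np.linalg.norm(F(x)))   # residual shrinks toward zero
```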

40 pages, 6231 KiB  
Article
Automatic Fault Detection and Diagnosis in Cellular Networks and Beyond 5G: Intelligent Network Management
by Arun Kumar Sangaiah, Samira Rezaei, Amir Javadpour, Farimasadat Miri, Weizhe Zhang and Desheng Wang
Algorithms 2022, 15(11), 432; https://doi.org/10.3390/a15110432 - 17 Nov 2022
Cited by 10 | Viewed by 3471
Abstract
Handling faults in a running cellular network can impair performance and dissatisfy end users. It is important to design an automatic self-healing procedure that not only detects active faults, but also diagnoses them automatically. Although fault detection has been well studied in the literature, fewer studies have targeted the more complicated task of diagnosis. Our presented method aims to tackle fault detection and diagnosis using two sets of data collected by the network: performance support system data and drive test data. Although performance support system data are collected automatically by the network, drive test data are collected manually in three call-mode scenarios: short, long and idle. The short call can identify faults in a call setup, the long call is designed to identify handover failures and call interruption, and, finally, the idle mode is designed to understand the characteristics of the standard signal in the network. We have applied unsupervised learning, along with various classification algorithms, to the performance support system data. Congestion and failures in TCH assignments are a few examples of the faults detected and diagnosed with our method. In addition, we present a framework to identify the need for handovers. The Silhouette coefficient is used to evaluate the quality of the unsupervised learning approach. We achieved an accuracy of 96.86% with the dynamic neural network method. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
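
As a toy illustration of the unsupervised part of such a pipeline (the KPI names and fault signature below are invented, and the data are synthetic), the sketch clusters per-cell KPI vectors with k-means and scores the clustering with the Silhouette coefficient, as the paper does.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic KPI vectors per cell (think: drop rate, TCH blocking, handover
# failure rate): many normal cells plus a small cluster of congested ones.
normal = rng.normal([0.01, 0.02, 0.01], 0.005, size=(180, 3))
faulty = rng.normal([0.15, 0.30, 0.10], 0.02, size=(20, 3))
kpis = np.vstack([normal, faulty])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(kpis)
print("silhouette:", silhouette_score(kpis, km.labels_))   # clustering quality
print("suspected faulty cells:", np.sum(km.labels_ == km.labels_[-1]))
```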

16 pages, 1290 KiB  
Article
Rendezvous on the Line with Different Speeds and Markers That Can Be Dropped at Chosen Time
by Pierre Leone and Nathan Cohen
Algorithms 2022, 15(2), 41; https://doi.org/10.3390/a15020041 - 27 Jan 2022
Cited by 1 | Viewed by 2192
Abstract
In this paper, we introduce a linear program (LP)-based formulation of a rendezvous game with markers on the infinite line and solve it. In this game, one player moves at unit speed while the second player moves at a speed bounded by v_max ≤ 1. We observe that in this setting, a slow-moving player may have an interest in remaining still instead of moving. This shows that under some conditions the wait-for-mummy strategy is optimal. We observe as well that the strategies are completely different depending on whether the fast or the slow player holds the marker. Interestingly, the marker is not useful when the player without the marker moves slowly, i.e., when the fast-moving player holds the marker. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
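
To give the flavor of an LP-based formulation (the actual game LP is not reproduced; the candidate strategies and all numbers below are invented), the following scipy sketch picks a probability mix over a few candidate trajectory pairs to minimize the expected meeting time under a worst-case cap.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative only: four made-up candidate trajectory pairs, each with an
# expected meeting time and a meeting time in the worst initial configuration.
expected_time = np.array([3.0, 2.5, 4.0, 2.8])
worst_case = np.array([6.0, 9.0, 5.0, 8.0])

res = linprog(
    c=expected_time,                 # objective: expected meeting time
    A_ub=[worst_case], b_ub=[7.0],   # expected worst-case time <= 7
    A_eq=[[1, 1, 1, 1]], b_eq=[1],   # probabilities form a distribution
    bounds=[(0, 1)] * 4,
)
print("mix:", np.round(res.x, 3), "expected time:", round(res.fun, 3))
# Mixing is forced: the cheapest pure strategy violates the worst-case cap.
```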
