Parallel/Distributed Combinatorics and Optimization

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (15 September 2022) | Viewed by 3554

Special Issue Editors


Dr. Grégoire Danoy
Guest Editor
FSTM/DCS / SnT, University of Luxembourg, L-4364 Esch-sur-Alzette, Luxembourg
Interests: artificial intelligence; metaheuristics; automated algorithm design; unmanned autonomous systems

Prof. Dr. Didier El Baz
Guest Editor
Distributed Computing and Asynchronism Team (CDA), LAAS-CNRS, Toulouse, France
Interests: applied mathematics; parallel and distributed computing; heterogeneous computing; peer-to-peer computing

Special Issue Information

Dear Colleagues,

Recent advances in parallel computing capacities, e.g., multi-core and many-core processors, now make it possible to tackle large-scale, complex optimization problems.

However, novel parallel/distributed optimization paradigms must be designed, and new implementations provided, in order to fully exploit such novel and possibly heterogeneous computing environments. These new solvers also need to balance conflicting objectives, such as obtaining high-quality solutions while minimizing computing time or energy usage.

The goal of this Special Issue is to gather the most important contemporary developments and designs in parallel and distributed combinatorics, as well as optimization methods for solving difficult optimization problems.

Topics of interest include (but are not limited to):

  • Global optimization, combinatorial optimization, multi-objective optimization, dynamic optimization;
  • Computational intelligence methods (e.g., evolutionary algorithms, swarm intelligence, ant colonies, cellular automata, DNA and molecular computing);
  • Cooperative/competitive methods;
  • Hybrid algorithms;
  • Peer-to-peer computing and optimization problems;
  • Real-world applications, e.g., cloud computing, planning, logistics, manufacturing, finance, telecommunications.

The organization of this Special Issue is linked to the 12th IEEE Workshop on Parallel / Distributed Combinatorics and Optimization (PDCO 2022), which will take place together with the 36th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2022) in Lyon, France, in May–June 2022. The most outstanding papers from the workshop will be considered for possible publication in the Special Issue after a peer-review process.

The Special Issue is open to all submissions and is not restricted to papers from the PDCO workshop.

Dr. Grégoire Danoy
Prof. Dr. Didier El Baz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

30 pages, 1117 KiB  
Article
Evolutionary Statistical System Based on Novelty Search: A Parallel Metaheuristic for Uncertainty Reduction Applied to Wildfire Spread Prediction
by Jan Strappa, Paola Caymes-Scutari and Germán Bianchini
Algorithms 2022, 15(12), 478; https://doi.org/10.3390/a15120478 - 15 Dec 2022
Cited by 1 | Viewed by 1387
Abstract
The problem of wildfire spread prediction presents a high degree of complexity due in large part to the limitations for providing accurate input parameters in real time (e.g., wind speed, temperature, moisture of the soil, etc.). This uncertainty in the environmental values has led to the development of computational methods that search the space of possible combinations of parameters (also called scenarios) in order to obtain better predictions. State-of-the-art methods are based on parallel optimization strategies that use a fitness function to guide this search. Moreover, the resulting predictions are based on a combination of multiple solutions from the space of scenarios. These methods have improved the quality of classical predictions; however, they have some limitations, such as premature convergence. In this work, we evaluate a new proposal for the optimization of scenarios that follows the Novelty Search paradigm. Novelty-based algorithms replace the objective function by a measure of the novelty of the solutions, which allows the search to generate solutions that are novel (in their behavior space) with respect to previously evaluated solutions. This approach avoids local optima and maximizes exploration. Our method, Evolutionary Statistical System based on Novelty Search (ESS-NS), outperforms the quality obtained by its competitors in our experiments. Execution times are faster than other methods for almost all cases. Lastly, several lines of future work are provided in order to significantly improve these results. Full article
(This article belongs to the Special Issue Parallel/Distributed Combinatorics and Optimization)
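To illustrate the core idea from the abstract above, the short sketch below shows a generic novelty-search loop in Python: candidates are selected for how novel their behaviour is with respect to an archive of previously seen behaviours, rather than for their fitness. This is a minimal illustration only, not the authors' ESS-NS method; the helpers sample_scenario (draws a parameter vector, i.e., a scenario) and behavior_of (maps a scenario to a behaviour descriptor) are hypothetical placeholders.

```python
import numpy as np


def novelty(behavior, archive, k=5):
    """Mean distance to the k nearest neighbours in behaviour space."""
    if not archive:
        return float("inf")
    dists = np.sort(np.linalg.norm(np.asarray(archive) - behavior, axis=1))
    return float(dists[: min(k, len(dists))].mean())


def novelty_search(sample_scenario, behavior_of, generations=50, pop_size=20,
                   archive_threshold=0.1, seed=0):
    """Toy novelty-driven evolutionary loop: selection is guided by behavioural
    novelty instead of an objective function."""
    rng = np.random.default_rng(seed)
    archive = []                                   # behaviours seen so far
    population = [sample_scenario(rng) for _ in range(pop_size)]
    for _ in range(generations):
        behaviors = [behavior_of(ind) for ind in population]
        scores = [novelty(b, archive) for b in behaviors]
        # archive behaviours that are sufficiently novel
        archive.extend(b for b, s in zip(behaviors, scores) if s > archive_threshold)
        # keep the most novel half of the population, refill with mutated copies
        order = np.argsort(scores)[::-1]
        survivors = [population[i] for i in order[: pop_size // 2]]
        offspring = [ind + rng.normal(scale=0.05, size=ind.shape) for ind in survivors]
        population = survivors + offspring
    return archive, population
```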

23 pages, 4718 KiB  
Article
Batch Acquisition for Parallel Bayesian Optimization—Application to Hydro-Energy Storage Systems Scheduling
by Maxime Gobert, Jan Gmys, Jean-François Toubeau, Nouredine Melab, Daniel Tuyttens and François Vallée
Algorithms 2022, 15(12), 446; https://doi.org/10.3390/a15120446 - 26 Nov 2022
Cited by 3 | Viewed by 1661
Abstract
Bayesian Optimization (BO) with Gaussian process regression is a popular framework for the optimization of time-consuming cost functions. However, the joint exploitation of BO and parallel processing capabilities remains challenging, despite intense research efforts over the last decade. In particular, the choice of a suitable batch-acquisition process, responsible for selecting promising candidate solutions for batch-parallel evaluation, is crucial. Even though some general recommendations can be found in the literature, many of its hyperparameters remain problem-specific. Moreover, the limitations of existing approaches in terms of scalability, especially for moderately expensive objective functions, are barely discussed. This work investigates five parallel BO algorithms based on different batch-acquisition processes, applied to the optimal scheduling of Underground Pumped Hydro-Energy Storage stations and classical benchmark functions. Efficient management of such energy-storage units requires parallel BO algorithms able to find solutions in a very restricted time to comply with the responsive energy markets. Our experimental results show that for the considered methods, a batch of four candidates is a good trade-off between execution speed and relevance of the candidates. Analysis of each method’s strengths and weaknesses indicates possible future research directions. Full article
(This article belongs to the Special Issue Parallel/Distributed Combinatorics and Optimization)
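To make the batch-acquisition step concrete, the sketch below implements one common heuristic for selecting a batch of candidates from a Gaussian-process surrogate, often called the "Kriging believer": pick the expected-improvement maximiser, pretend its objective value equals the GP mean, refit the surrogate, and repeat until the batch is full. It is a generic illustration (here with scikit-learn), not necessarily one of the five batch-acquisition processes compared in the paper; it simply uses a batch of four candidates, as suggested by the abstract.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(mu, sigma, best, xi=0.01):
    """Closed-form Expected Improvement for minimisation."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)


def select_batch(X_obs, y_obs, candidates, batch_size=4):
    """'Kriging believer' batch selection: pick the EI maximiser, pretend its
    value equals the GP mean, refit, and repeat until the batch is full.
    `candidates` is an (n, d) array of points to choose from."""
    X, y = list(X_obs), list(y_obs)
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.asarray(X), np.asarray(y))
        mu, sigma = gp.predict(candidates, return_std=True)
        ei = expected_improvement(mu, sigma, best=min(y))
        idx = int(np.argmax(ei))
        batch.append(candidates[idx])
        X.append(candidates[idx])     # the "lie": assume the GP mean is the outcome
        y.append(float(mu[idx]))
    return np.asarray(batch)          # these batch_size points can be evaluated in parallel
```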
