Mathematical Models and Their Applications II

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 17914

Special Issue Editors


Dr. Eugene Semenkin
Guest Editor
Department of System Analysis and Operations Research, Siberian Institute of Applied System Analysis, Reshetnev Siberian State University of Science and Technology, 41031 Krasnoyarsk, Russia
Interests: modeling and optimization of complicated systems; computational intelligence; evolutionary algorithms; artificial intelligence; data mining

Dr. Friedhelm Schwenker
Guest Editor
Institute of Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
Interests: artificial neural networks; pattern recognition; cluster analysis; statistical learning theory; data mining; multiple classifier systems; sensor fusion; affective computing

Dr. Andrej Škraba
Guest Editor
Cybernetics & Decision Support Systems Laboratory, Faculty of Organizational Sciences, University of Maribor, Kranj, Slovenia
Interests: modeling and simulation; optimization; system dynamics modeling; Internet of Things; systems theory; decision processes; cyber-physical systems

Special Issue Information

Dear Colleagues,

The current Special Issue “Mathematical Models and Their Applications II” of Algorithms is intended as an international forum for the presentation of original mathematical modeling results for software and hardware applications in various fields. It aims to stimulate lively discussion among researchers as well as practitioners in industry.

Papers may discuss theories, applications, evaluation, limitations, general tools, and techniques. Discussion papers that critically evaluate approaches or processing strategies and prototype demonstrations are especially welcome.

The Special Issue will cover a broad range of research topics including, but not limited to:

  • Mathematical models and their applications;
  • Mathematical modeling techniques;
  • Optimization techniques, including multi-criterion optimization and decision-making support;
  • Data mining and knowledge discovery;
  • Machine learning;
  • Pattern recognition;
  • Learning in evolutionary algorithms;
  • Genetic programming;
  • Artificial neural networks;
  • Computational intelligence and its applications;
  • Bio-inspired and swarm intelligence;
  • Text/web/data mining;
  • Human–computer interaction;
  • Natural language processing;
  • Applications in engineering, natural sciences, social sciences, and computer science.

The previous Special Issue “Mathematical Models and Their Applications” (Algorithms 2020, 13) presented, among other things, selected and revised papers originally proposed within the 8th International Workshop on Mathematical Models and Their Applications (https://sites.google.com/view/iwmma2019). All papers of IWMMA 2019 were double-blind peer-reviewed by at least two members of the program committee, which included about 40 academic researchers from 15 countries. The papers recommended for that Special Issue were selected by the reviewers from about 50 submissions.

IWMMA 2020 continues this tradition (https://sites.google.com/view/iwmma2020), and extended versions of the papers selected by the program committee and presented at the conference will be recommended for publication in this Special Issue.

Dr. Eugene Semenkin
Dr. Friedhelm Schwenker
Dr. Andrej Škraba
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mathematical modeling
  • optimization
  • machine learning
  • data mining
  • computational intelligence
  • applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (5 papers)


Research

19 pages, 4925 KiB  
Article
Investigation of Improved Cooperative Coevolution for Large-Scale Global Optimization Problems
by Aleksei Vakhnin and Evgenii Sopov
Algorithms 2021, 14(5), 146; https://doi.org/10.3390/a14050146 - 30 Apr 2021
Cited by 12 | Viewed by 3428
Abstract
Modern real-valued optimization problems are complex and high-dimensional; they are known as large-scale global optimization (LSGO) problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performance framework for decomposing large-scale problems into smaller and easier subproblems by grouping the objective variables. The efficiency of CC strongly depends on the size of the groups and on the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in the subcomponents dynamically during the optimization process, and the SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC’13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with some state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
(This article belongs to the Special Issue Mathematical Models and Their Applications II)
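The cooperative-coevolution idea described in the abstract, decomposing a large-scale problem by grouping objective variables and improving one subcomponent at a time, can be illustrated with a toy sketch. This is a hypothetical simplification: a Gaussian (1+1) step stands in for the SHADE subcomponent optimizer, the group size stays fixed rather than adapting as in iCC, and the sphere function is just a stand-in test problem.

```python
import random

def cc_sketch(f, dim, group_size, budget=2000, step=0.1):
    # Split the variable indices into contiguous groups (subcomponents).
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    x = [random.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    evals = 0
    while evals < budget:
        for group in groups:
            trial = x[:]
            for i in group:  # perturb only this subcomponent's variables
                trial[i] += random.gauss(0, step)
            y = f(trial)
            evals += 1
            if y < best:  # greedy replacement, as in a (1+1) scheme
                x, best = trial, y
    return x, best

sphere = lambda v: sum(t * t for t in v)
solution, value = cc_sketch(sphere, dim=20, group_size=5)
```

Because each mutation touches only one group of variables, the effective search dimension per step is `group_size`, which is the mechanism CC uses to sidestep the curse of dimensionality.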

27 pages, 2771 KiB  
Article
Analysis of Data Presented by Multisets Using a Linguistic Approach
by Liliya A. Demidova and Julia S. Sokolova
Algorithms 2021, 14(5), 135; https://doi.org/10.3390/a14050135 - 25 Apr 2021
Viewed by 2195
Abstract
The problem of analyzing datasets formed by the results of group expert assessment of objects on a certain set of features is considered. Such datasets may contain mismatched, including conflicting, evaluation values for the analyzed features. In addition, the assessment values for the features can be not only point values but also interval values, owing to the incompleteness and inaccuracy of the experts’ knowledge. Taking into account all the results of group expert assessment of objects for a certain set of features, estimated pointwise, can be carried out using the multiset toolkit. To process interval assessment values, a linguistic approach is proposed that uses a linguistic scale to describe various strategies for evaluating objects (conservative, neutral, and risky) and to implement various decision-making strategies in the problems of clustering, classification, and ordering of objects. The linguistic approach to working with objects assessed by a group of experts with interval assessment values has been successfully applied to the analysis of a dataset of competitive projects. For this dataset, solutions to the clustering, classification, and ordering problems were obtained under the various assessment strategies, together with a study of the influence of the chosen strategy on the results.
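As a rough illustration of the multiset toolkit mentioned in the abstract (a toy sketch, not the authors' formalism; the grade names and the five-expert setup are invented), point-valued group assessments of an object on one feature can be stored as a multiset of grades and compared with a multiset metric:

```python
from collections import Counter

# Each object is described by the multiset of grades a group of five
# experts assigned on one feature; Counter is Python's multiset type.
project_a = Counter({"high": 3, "medium": 2})
project_b = Counter({"high": 1, "medium": 3, "low": 1})

def multiset_distance(p, q):
    """Symmetric-difference cardinality: the number of grade assignments
    on which the two expert-group evaluations disagree."""
    return sum(abs(p[k] - q[k]) for k in set(p) | set(q))

distance = multiset_distance(project_a, project_b)  # |3-1| + |2-3| + |0-1| = 4
```

A distance of this kind is what makes clustering and ordering of expert-assessed objects possible without forcing the conflicting grades into a single value first.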

30 pages, 2320 KiB  
Article
Self-Configuring (1 + 1)-Evolutionary Algorithm for the Continuous p-Median Problem with Agglomerative Mutation
by Lev Kazakovtsev, Ivan Rozhnov and Guzel Shkaberina
Algorithms 2021, 14(5), 130; https://doi.org/10.3390/a14050130 - 22 Apr 2021
Cited by 4 | Viewed by 3178
Abstract
The continuous p-median problem (CPMP) is one of the most popular and widely used models in location theory; it minimizes the sum of distances from known demand points to the sought points called centers or medians. This NP-hard location problem is also useful for clustering (automatic grouping), where the sought points are considered as cluster centers. Unlike the similar k-means model, p-median clustering is less sensitive to noisy data and the appearance of outliers (separately located demand points that do not belong to any cluster). Local search algorithms, including Variable Neighborhood Search, as well as evolutionary algorithms demonstrate rather precise results. Various algorithms based on greedy agglomerative procedures are capable of obtaining very accurate results that are difficult to improve on with other methods. The computational complexity of such procedures limits their use for large problems, although computations on massively parallel systems significantly expand their capabilities. In addition, the efficiency of agglomerative procedures is highly dependent on the setting of their parameters. For the majority of practically important p-median problems, one can choose a very efficient algorithm based on agglomerative procedures; however, the parameter values that ensure its high efficiency are difficult to predict. We introduce the concept of the AGGLr neighborhood based on the application of the agglomerative procedure, and investigate the search efficiency in such a neighborhood depending on its parameter r. Using the similarities between local search algorithms and (1 + 1)-evolutionary algorithms, as well as the ability of the latter to adapt their search parameters, we propose a new algorithm based on a greedy agglomerative procedure with an automatically tuned parameter r. The new algorithm does not require preliminary tuning of the parameter r, adjusting it online, and thus represents a more versatile computational tool. The advantages of the new algorithm are shown experimentally on problems with up to 2,000,000 demand points.
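The continuous p-median objective and the (1+1)-EA scheme underlying the proposed algorithm can be sketched as follows. This is a minimal illustration: plain Gaussian mutation replaces the greedy agglomerative AGGL_r mutation that is the paper's actual contribution, and the demand points are synthetic.

```python
import math
import random

def pmedian_cost(points, centers):
    # Continuous p-median objective: each demand point contributes its
    # Euclidean distance to the nearest center (median).
    return sum(min(math.dist(p, c) for c in centers) for p in points)

def one_plus_one(points, p, iters=500, sigma=0.5):
    # (1+1)-EA skeleton: one parent, one mutated offspring per iteration,
    # keep whichever is better.
    centers = random.sample(points, p)
    cost = pmedian_cost(points, centers)
    for _ in range(iters):
        child = [(cx + random.gauss(0, sigma), cy + random.gauss(0, sigma))
                 for cx, cy in centers]
        child_cost = pmedian_cost(points, child)
        if child_cost < cost:
            centers, cost = child, child_cost
    return centers, cost

demand = [(random.random() * 10, random.random() * 10) for _ in range(100)]
medians, total = one_plus_one(demand, p=3)
```

In the paper's setting, the mutation step (here `sigma`) is the analogue of the parameter r that the self-configuring algorithm tunes online instead of fixing in advance.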

18 pages, 1180 KiB  
Article
Difference-Based Mutation Operation for Neuroevolution of Augmented Topologies
by Vladimir Stanovov, Shakhnaz Akhmedova and Eugene Semenkin
Algorithms 2021, 14(5), 127; https://doi.org/10.3390/a14050127 - 21 Apr 2021
Cited by 1 | Viewed by 2648
Abstract
In this paper, a novel search operation is proposed for the neuroevolution of augmented topologies, namely the difference-based mutation. This operator uses the differences between individuals in the population to perform a more efficient search for the optimal weights and structure of the model. The difference is determined according to the innovation numbers assigned to each node and connection, allowing the changes to be tracked. The implemented neuroevolution algorithm allows backward connections and loops in the topology, and uses a set of mutation operators, including connection merging and deletion. The algorithm is tested on a set of classification problems and on the rotary inverted pendulum control problem. The comparison is performed between the basic approach and the modified versions, and the sensitivity to parameter values is examined. The experimental results show that the newly developed operator delivers significant improvements in classification quality in several cases and allows better control algorithms to be found.
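The role of innovation numbers in a difference-based mutation can be sketched in a few lines. This is a hypothetical simplification of the paper's operator: genomes are reduced to dicts mapping NEAT-style innovation numbers to connection weights, genes are aligned across individuals by innovation number, and for innovations all three genomes share, the target weight is shifted by a scaled difference of two donors, echoing differential evolution.

```python
def difference_mutation(target, donor_a, donor_b, f=0.5):
    # Copy the target genome, then shift the weight of every gene whose
    # innovation number appears in all three genomes; structural genes
    # missing from either donor are left untouched.
    child = dict(target)
    for innov in target.keys() & donor_a.keys() & donor_b.keys():
        child[innov] = target[innov] + f * (donor_a[innov] - donor_b[innov])
    return child

# toy genomes: {innovation_number: connection_weight}
t = {1: 0.2, 2: -0.5, 3: 1.0}
a = {1: 0.6, 3: 0.4, 4: 0.9}
b = {1: 0.1, 3: 0.8}
child = difference_mutation(t, a, b)  # genes 1 and 3 shift; gene 2 is kept
```

Aligning genes by innovation number is what makes a difference-based operator well defined for variable-topology networks, where individuals generally have different sets of connections.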

11 pages, 814 KiB  
Article
Median Filter Aided CNN Based Image Denoising: An Ensemble Approach
by Subhrajit Dey, Rajdeep Bhattacharya, Friedhelm Schwenker and Ram Sarkar
Algorithms 2021, 14(4), 109; https://doi.org/10.3390/a14040109 - 28 Mar 2021
Cited by 14 | Viewed by 4745
Abstract
Image denoising is a challenging research problem that aims to recover noise-free images from those contaminated with noise. In this paper, we focus on the denoising of images contaminated with additive white Gaussian noise. For this purpose, we propose an ensemble learning model that combines the outputs of three image denoising models, namely ADNet, IRCNN, and DnCNN, in the ratio of 2:3:6, respectively. The first model (ADNet) consists of convolutional neural networks with attention, with median filter layers after every convolutional layer and a dilation rate of 8. The second model is a feed-forward denoising CNN (DnCNN) with median filter layers after half of the convolutional layers. The third model, the Deep CNN Denoiser Prior (IRCNN), contains dilated convolutional layers with a dilation rate of 6 and median filter layers up to the dilated convolutional layers. Quantitative analysis shows that our model performs well when tested on the BSD500 and Set12 datasets.
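Read as normalized averaging weights (one interpretation; the paper may combine the model outputs differently), the 2:3:6 combination rule from the abstract amounts to a per-pixel weighted mean:

```python
def ensemble_denoise(pix_adnet, pix_ircnn, pix_dncnn):
    # Per-pixel weighted average; the weights 2, 3 and 6 are divided by
    # their sum (11), so identical inputs pass through unchanged and the
    # intensity range is preserved.
    return (2 * pix_adnet + 3 * pix_ircnn + 6 * pix_dncnn) / 11
```

Applied elementwise over the three denoised images, this biases the ensemble toward DnCNN while still letting the other two models correct its residual errors.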
