Optimal Control Theory

A special issue of Games (ISSN 2073-4336).

Deadline for manuscript submissions: closed (30 November 2020) | Viewed by 33767

Special Issue Editor


Dr. Ellina Grigorieva
Guest Editor
Department of Mathematics and Computer Science, Texas Woman’s University, Denton, TX 76204, USA
Interests: optimal control theory; game theory; modeling and control of epidemics; optimal control of HIV, allergy and other immune disorders; math education (methods of solving complex math problems)

Special Issue Information

Dear Colleagues, 

Optimal control theory is a modern extension of the classical calculus of variations. Converting a calculus of variations problem into an optimal control problem requires one more conceptual extension: the addition of control variables to the state equations. While the main result of the calculus of variations is the Euler equation, the main result of optimal control theory is the Pontryagin maximum principle, developed by a group of Russian mathematicians in the 1950s. The maximum principle gives necessary conditions for optimality in a wide range of dynamic optimization problems; it includes all the necessary conditions of the classical calculus of variations, yet applies to a significantly wider range of problems. At present, for deterministic control models described by ordinary differential equations, the Pontryagin maximum principle is used as often as Bellman's dynamic programming method.
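
For reference, for the problem of minimizing an integral cost with running cost f0(x, u, t) over a fixed horizon [0, T], subject to the state equations x' = f(x, u, t) with admissible controls u(t) in a set U, the conditions of the maximum principle take the following standard textbook form (given only as a reminder, not quoted from any paper in this issue):

```latex
% Pontryagin maximum principle (standard statement, sketch)
\begin{aligned}
H(x,\psi,u,t) &= \psi^{\top} f(x,u,t) - f^{0}(x,u,t), \\
\dot{\psi}(t) &= -\frac{\partial H}{\partial x}\bigl(x^{*}(t),\psi(t),u^{*}(t),t\bigr), \\
u^{*}(t) &\in \operatorname*{arg\,max}_{u \in U} H\bigl(x^{*}(t),\psi(t),u,t\bigr)
\quad \text{for a.e. } t \in [0,T].
\end{aligned}
```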

An optimal control problem involves both the computation of the optimal control and the synthesis of the optimal control system. As a rule, the optimal control is computed numerically, either by methods that search for the extremum of the objective functional directly or by solving the two-point boundary value problem for the system of state and adjoint differential equations. From a mathematical point of view, the synthesis of optimal control is a nonlinear programming problem in function spaces.
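
As an illustration of that two-point boundary value structure, the following is a minimal forward-backward sweep sketch for a toy linear-quadratic problem (minimize the integral of x^2 + u^2 subject to x' = x + u); the problem, discretization, and damping factor are illustrative choices of ours and are not taken from any paper in this issue.

```python
import numpy as np

# Toy problem (illustrative only): minimize the integral of (x^2 + u^2) over [0, T]
# subject to x' = x + u, x(0) = x0. The maximum principle yields the two-point
# boundary value problem
#   x'   =  x + u,         x(0)   = x0,
#   lam' = -(2*x + lam),   lam(T) = 0,   (adjoint equation)
#   u    = -lam / 2                      (stationarity of the Hamiltonian in u).
T, N, x0 = 1.0, 200, 1.0
dt = T / N
u = np.zeros(N + 1)

for _ in range(50):                        # forward-backward sweep iterations
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):                     # forward pass: state equation
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    lam = np.zeros(N + 1)                  # backward pass: adjoint equation, lam(T) = 0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt * (2 * x[k] + lam[k])
    u = 0.5 * u + 0.5 * (-lam / 2)         # damped update toward the stationarity condition

print("approximate optimal control at t = 0:", u[0])
```

Each sweep integrates the state forward, integrates the adjoint backward from its terminal condition, and then relaxes the control toward the value dictated by the stationarity condition; for control-constrained or bang-bang problems the last step is replaced by pointwise maximization of the Hamiltonian.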

This Special Issue will gather research focused on the development of novel analytical and numerical methods for solving optimal control and dynamic optimization problems, including problems with changing and incomplete information about the investigated objects, with applications to medicine, infectious diseases, and economic or physical phenomena. Investigations of new classes of optimization problems, of optimal control of nonlinear systems, and of the reconstruction of input signals are invited. For example, we are interested in papers that develop new algorithms implementing regularization principles through constructive iterative procedures, or in papers that build optimal control models able to accumulate experience and improve their performance on that basis, the so-called learning optimal control systems. Applied papers on control models of economic, physical, medical (e.g., infectious disease), or environmental processes, or of resource allocation over a specified time interval or an infinite planning horizon, are of special interest.

We will be happy to consider original research papers on new advances in optimal control and differential games; on deterministic and stochastic control processes; and on combined methods for the synthesis of deterministic and stochastic systems with full information about parameters, state, and perturbations; in short, all papers that use analytical methods to study the various tasks of optimal control and its evaluation, as well as applications of optimal control and differential games to the description of complex nonlinear phenomena.

Dr. Ellina Grigorieva
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Games is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • optimal control problems
  • differential games
  • Pontryagin maximum principle
  • nonlinear control models

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Editorial

4 pages, 159 KiB  
Editorial
Optimal Control Theory: Introduction to the Special Issue
by Ellina Grigorieva
Games 2021, 12(1), 29; https://doi.org/10.3390/g12010029 - 22 Mar 2021
Cited by 1 | Viewed by 3346
Abstract
Optimal control theory is a modern extension of the classical calculus of variations [...] Full article
(This article belongs to the Special Issue Optimal Control Theory)

Research

9 pages, 250 KiB  
Article
An Optimal Control Problem by a Hybrid System of Hyperbolic and Ordinary Differential Equations
by Alexander Arguchintsev and Vasilisa Poplevko
Games 2021, 12(1), 23; https://doi.org/10.3390/g12010023 - 3 Mar 2021
Cited by 6 | Viewed by 2497
Abstract
This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations with a function on the right-hand side determined from controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Normally, general optimal control methods are used for these problems because of the bilinear ordinary differential equations. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classic exact increment formulas for the cost functional. This treatment allows the use of a number of efficient optimal control methods for the problem. An example illustrates the approach. Full article
(This article belongs to the Special Issue Optimal Control Theory)
12 pages, 1026 KiB  
Article
Optimal Control and Positional Controllability in a One-Sector Economy
by Nikolai Grigorenko and Lilia Luk’yanova
Games 2021, 12(1), 11; https://doi.org/10.3390/g12010011 - 1 Feb 2021
Cited by 3 | Viewed by 1975
Abstract
A model of production funds acquisition, which includes two differential links of the zero order and two series-connected inertial links, is considered in a one-sector economy. Zero-order differential links correspond to the equations of the Ramsey model. These equations contain scalar bounded control, which determines the distribution of the available funds into two parts: investment and consumption. Two series-connected inertial links describe the dynamics of the changes in the volume of the actual production at the current production capacity. For the considered control system, the problem is posed to maximize the average consumption value over a given time interval. The properties of optimal control are analytically established using the Pontryagin maximum principle. The cases are highlighted when such control is a bang-bang, as well as the cases when, along with bang-bang (non-singular) portions, control can contain a singular arc. At the same time, concatenation of singular and non-singular portions is carried out using chattering. A bang-bang suboptimal control is presented, which is close to the optimal one according to the given quality criterion. A positional terminal control is proposed for the first approximation when a suboptimal control with a given deviation of the objective function from the optimal value is numerically found. The obtained results are confirmed by the corresponding numerical calculations. Full article
(This article belongs to the Special Issue Optimal Control Theory)
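
A brief note on the bang-bang and singular-arc terminology used in this and several other abstracts in the issue: when the Hamiltonian is affine in a scalar control restricted to an interval, say u(t) in [0, 1], the maximum principle selects the control from the sign of a switching function sigma(t). In generic notation (a sketch of the structure, not the specific model of the paper above):

```latex
% Generic bang-bang / singular structure for a control entering affinely
H = H_{0}(x,\psi,t) + \sigma(t)\,u, \qquad
u^{*}(t) =
\begin{cases}
1, & \sigma(t) > 0, \\
0, & \sigma(t) < 0, \\
\text{singular arc}, & \sigma(t) \equiv 0 \text{ on an interval.}
\end{cases}
```
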
22 pages, 4037 KiB  
Article
Necessary Optimality Conditions for a Class of Control Problems with State Constraint
by Adam Korytowski and Maciej Szymkat
Games 2021, 12(1), 9; https://doi.org/10.3390/g12010009 - 18 Jan 2021
Cited by 2 | Viewed by 3050
Abstract
An elementary approach to a class of optimal control problems with pathwise state constraint is proposed. Based on spike variations of control, it yields simple proofs and constructive necessary conditions, including some new characterizations of optimal control. Two examples are discussed. Full article
(This article belongs to the Special Issue Optimal Control Theory)
18 pages, 1523 KiB  
Article
Boltzmann Distributed Replicator Dynamics: Population Games in a Microgrid Context
by Gustavo Chica-Pedraza, Eduardo Mojica-Nava and Ernesto Cadena-Muñoz
Games 2021, 12(1), 8; https://doi.org/10.3390/g12010008 - 15 Jan 2021
Cited by 3 | Viewed by 2814
Abstract
Multi-Agent Systems (MAS) have been used to solve several optimization problems in control systems. MAS allow understanding the interactions between agents and the complexity of the system, thus generating functional models that are closer to reality. However, these approaches assume that information between agents is always available, which means the employment of a full-information model. Some tendencies have been growing in importance to tackle scenarios where information constraints are relevant issues. In this sense, game theory approaches appear as a useful technique that uses a strategy concept to analyze the interactions of the agents and achieve the maximization of agent outcomes. In this paper, we propose a distributed control method of learning that allows analyzing the effect of the exploration concept in MAS. The dynamics obtained use Q-learning from reinforcement learning as a way to include the concept of exploration into the classic exploration-less Replicator Dynamics equation. Then, the Boltzmann distribution is used to introduce the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling agents' behaviors. This distributed approach can be used in several engineering applications where communication constraints between agents are considered. The behavior of the proposed method is analyzed using a smart grid application for validation purposes. Results show that despite the lack of full information about the system, by controlling some parameters of the method, it behaves similarly to traditional centralized approaches. Full article
(This article belongs to the Special Issue Optimal Control Theory)
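
To make the exploration idea described in the abstract above concrete, here is a minimal numerical sketch of a replicator-style update driven by Boltzmann (softmax) weights of estimated payoffs; the function and parameter names are ours, and the sketch illustrates the general concept rather than the authors' exact Boltzmann-Based Distributed Replicator Dynamics.

```python
import numpy as np

def boltzmann_replicator_step(x, q, tau=0.5, dt=0.1):
    """One Euler step of a replicator-style update in which strategy growth is
    driven by Boltzmann (softmax) weights of the estimated payoffs q.
    Illustrative sketch only; tau sets the amount of exploration."""
    w = np.exp(q / tau)                    # Boltzmann weights of the payoff estimates
    fitness = w / np.dot(x, w)             # population-relative (softmax-scaled) fitness
    return x + dt * x * (fitness - 1.0)    # grow strategies with above-average fitness

# Example: three strategies with fixed payoff estimates.
x = np.array([0.3, 0.3, 0.4])
q = np.array([1.0, 0.5, 0.2])
for _ in range(200):
    x = boltzmann_replicator_step(x, q)
print(x)  # mass concentrates on the highest-payoff strategy; a larger tau spreads it out
```
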
8 pages, 213 KiB  
Article
A Turnpike Property of Trajectories of Dynamical Systems with a Lyapunov Function
by Alexander J. Zaslavski
Games 2020, 11(4), 63; https://doi.org/10.3390/g11040063 - 14 Dec 2020
Cited by 9 | Viewed by 2077
Abstract
In this paper, we study the structure of trajectories of discrete disperse dynamical systems with a Lyapunov function which are generated by set-valued mappings. We establish a weak version of the turnpike property which holds for all trajectories of such dynamical systems which are of a sufficient length. This result is usually true for models of economic growth which are prototypes of our dynamical systems. Full article
(This article belongs to the Special Issue Optimal Control Theory)
25 pages, 523 KiB  
Article
Biological and Chemical Control of Mosquito Population by Optimal Control Approach
by Juddy Heliana Arias-Castro, Hector Jairo Martinez-Romero and Olga Vasilieva
Games 2020, 11(4), 62; https://doi.org/10.3390/g11040062 - 14 Dec 2020
Cited by 9 | Viewed by 3556
Abstract
This paper focuses on the design and analysis of short-term control intervention measures seeking to suppress local populations of Aedes aegypti mosquitoes, the major transmitters of dengue and other vector-borne infections. Besides traditional measures involving the spraying of larvicides and/or insecticides, we include biological control based on the deliberate introduction of predacious species feeding on the aquatic stages of mosquitoes. From the methodological standpoint, our study relies on application of the optimal control modeling framework in combination with the cost-effectiveness analysis. This approach not only enables the design of optimal strategies for external control intervention but also allows for assessment of their performance in terms of the cost-benefit relationship. By examining numerous scenarios derived from combinations of chemical and biological control measures, we try to find out whether the presence of predacious species at the mosquito breeding sites may (partially) replace the common practices of larvicide/insecticide spraying and thus reduce their negative impact on non-target organisms. As a result, we identify two strategies exhibiting the best metrics of cost-effectiveness and provide some useful insights for their possible implementation in practical settings. Full article
(This article belongs to the Special Issue Optimal Control Theory)
16 pages, 612 KiB  
Article
Games with Adaptation and Mitigation
by Natali Hritonenko, Victoria Hritonenko and Yuri Yatsenko
Games 2020, 11(4), 60; https://doi.org/10.3390/g11040060 - 7 Dec 2020
Cited by 2 | Viewed by 2495
Abstract
We formulate and study a nonlinear game of n symmetric countries that produce, pollute, and spend part of their revenue on pollution mitigation and environmental adaptation. The optimal emission, adaptation, and mitigation investments are analyzed in both Nash equilibrium and cooperative cases. Modeling assumptions and outcomes are compared to other publications in this fast-developing area of environmental economics. In particular, our analysis implies that: (a) mitigation is more effective than adaptation in a crowded multi-country world; (b) mitigation increases the effectiveness of adaptation; (c) the optimal ratio between mitigation and adaptation investments in the competitive case is larger for more productive countries and is smaller when more countries are involved in the game. Full article
(This article belongs to the Special Issue Optimal Control Theory)
6 pages, 227 KiB  
Article
An Extremum Principle for Smooth Problems
by Dariusz Idczak and Stanisław Walczak
Games 2020, 11(4), 56; https://doi.org/10.3390/g11040056 - 27 Nov 2020
Cited by 3 | Viewed by 1906
Abstract
We derive an extremum principle. It can be treated as an intermediate result between the celebrated smooth-convex extremum principle due to Ioffe and Tikhomirov and the Dubovitskii–Milyutin theorem. The proof of this principle is based on a simple generalization of Fermat's theorem, the smooth-convex extremum principle, and the local implicit function theorem. An integro-differential example illustrating the new principle is presented. Full article
(This article belongs to the Special Issue Optimal Control Theory)
10 pages, 3699 KiB  
Article
A Stochastic Characterization of the Capture Zone in Pursuit-Evasion Games
by Simone Battistini
Games 2020, 11(4), 54; https://doi.org/10.3390/g11040054 - 20 Nov 2020
Cited by 4 | Viewed by 2468
Abstract
Pursuit-evasion games are used to define guidance strategies for multi-agent planning problems. Although optimal strategies exist for deterministic scenarios, in the case when information about the opponent players is imperfect, it is important to evaluate the effect of uncertainties on the estimated variables. This paper proposes a method to characterize the game space of a pursuit-evasion game under a stochastic perspective. The Mahalanobis distance is used as a metric to determine the levels of confidence in the estimation of the Zero Effort Miss across the capture zone. This information can be used to gain an insight into the guidance strategy. A simulation is carried out to provide numerical results. Full article
(This article belongs to the Special Issue Optimal Control Theory)
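
The Mahalanobis distance used as the confidence metric above is the standard covariance-weighted distance. A minimal sketch (the variable names and numbers are hypothetical, purely illustrative) of how a candidate value of an estimated quantity such as the Zero Effort Miss could be scored against its estimate and covariance:

```python
import numpy as np

def mahalanobis(z, mean, cov):
    """Covariance-weighted distance of a point z from an estimate (mean, cov)."""
    d = z - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Hypothetical two-component estimate of the Zero Effort Miss and its covariance.
zem_mean = np.array([12.0, -3.0])
zem_cov = np.array([[4.0, 0.5],
                    [0.5, 2.0]])
print(mahalanobis(np.array([15.0, -1.0]), zem_mean, zem_cov))  # distance in "sigmas"
```
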
26 pages, 661 KiB  
Article
Optimal CAR T-cell Immunotherapy Strategies for a Leukemia Treatment Model
by Evgenii Khailov, Ellina Grigorieva and Anna Klimenkova
Games 2020, 11(4), 53; https://doi.org/10.3390/g11040053 - 18 Nov 2020
Cited by 6 | Viewed by 2945
Abstract
CAR T-cell immunotherapy is a new development in the treatment of leukemia, promising a new era in oncology. So far, however, this procedure only helps 50–90% of patients and, like other cancer treatments, has serious side effects. In this work, we have proposed a controlled model for leukemia treatment to explore possible ways to improve immunotherapy methodology. Our model is described by four nonlinear differential equations with two bounded controls, which are responsible for the rate of injection of chimeric cells, as well as for the dosage of the drug that suppresses the so-called “cytokine storm”. The optimal control problem of minimizing the cancer cells and the activity of the cytokine is stated and solved using the Pontryagin maximum principle. The five possible optimal control scenarios are predicted analytically by investigating the behavior of the switching functions. The optimal solutions, obtained numerically using BOCOP-2.2.0, confirmed our analytical findings. Interesting results are presented explaining why, within the model, therapies with rest intervals (for example, stopping injections in the middle of the treatment interval) are more effective than continuous injections. Possible improvements to the mathematical model and the method of immunotherapy are discussed. Full article
(This article belongs to the Special Issue Optimal Control Theory)
21 pages, 1293 KiB  
Article
On Optimal Leader’s Investments Strategy in a Cyclic Model of Innovation Race with Random Inventions Times
by Sergey M. Aseev and Masakazu Katsumoto
Games 2020, 11(4), 52; https://doi.org/10.3390/g11040052 - 16 Nov 2020
Cited by 2 | Viewed by 2364
Abstract
In this paper, we develop a new dynamic model of optimal investments in R&D and manufacturing for a technological leader competing with a large number of identical followers on the market of a technological product. The model is formulated in the form of the infinite time horizon stochastic optimization problem. The evolution of new generations of the product is treated as a Poisson-type cyclic stochastic process. The technology spillovers effect acts as a driving force of technological change. We show that the original probabilistic problem that the leader is faced with can be reduced to a deterministic one. This result makes it possible to perform analytical studies and numerical calculations. Numerical simulations and economic interpretations are presented as well. Full article
(This article belongs to the Special Issue Optimal Control Theory)