Article

Distributed Computational Framework for Large-Scale Stochastic Convex Optimization

by Vahab Rostampour 1,* and Tamás Keviczky 2

1 Engineering and Technology Institute Groningen (ENTEG), University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands
2 Delft Center for Systems and Control, Delft University of Technology, 2628 CD Delft, The Netherlands
* Author to whom correspondence should be addressed.
Energies 2021, 14(1), 23; https://doi.org/10.3390/en14010023
Submission received: 9 November 2020 / Revised: 14 December 2020 / Accepted: 18 December 2020 / Published: 23 December 2020
(This article belongs to the Special Issue Cyber-Physical Systems for Smart Grids)

Abstract:
This paper presents a distributed computational framework for stochastic convex optimization problems using the so-called scenario approach. Such problems arise, for example, in large-scale networks of interconnected linear systems with local and common uncertainties. Due to the large number of scenarios required to approximate the stochasticity of these problems, the stochastic optimization involves formulating a large-scale scenario program, which is in general computationally demanding. We present two novel ideas in this paper to address this issue. We first develop a technique to decompose the large-scale scenario program into distributed scenario programs that exchange a certain number of scenarios with each other to compute local decisions using the alternating direction method of multipliers (ADMM). We show the exactness of the decomposition with a-priori probabilistic guarantees for the desired level of constraint fulfillment for both local and common uncertainty sources. As our second contribution, we develop a so-called soft communication scheme based on a set parametrization technique together with the notion of probabilistically reliable sets to reduce the required communication between the subproblems. We show how to incorporate the probabilistic reliability notion into existing results and provide new guarantees for the desired level of constraint violations. Two simulation studies of two types of interconnected networks, namely dynamically coupled systems and systems with coupling constraints, are presented to illustrate the advantages of the proposed distributed framework.

1. Introduction

Stochastic optimization has attracted significant attention in the recent control literature, due to its ability to provide an alternative, often less conservative, way to handle uncertainty in dynamical systems. Stochastic optimization takes into account the stochastic characteristics of the uncertainties, and the system constraints are thereby treated in a probabilistic sense, i.e., using chance constraints [1]. Using stochastic optimization in so-called model predictive control (MPC), one computes an optimal control sequence that minimizes a given objective function subject to an uncertain system dynamics model and chance constraints in a receding horizon fashion [2]. Chance constraints enable stochastic optimization to offer a less conservative solution than robust optimization [3], since they directly incorporate the trade-off between constraint feasibility and the performance of the objective function.
Distributed optimization has been an active research area in recent decades, due to its applicability to handle large-scale problems with constraints. In distributed optimization one replaces large-scale problems stemming from centralized optimization with several smaller-scale problems that can be solved in parallel. These problems make use of information from other subproblems to formulate a local optimization problem. In the presence of uncertainties, however, the main challenge in the formulation of distributed optimization is how one should exchange information through a communication scheme among subproblems (see, e.g., [4], and references therein). This highlights the necessity of developing distributed strategies to cope with the uncertainties in subproblems while at the same time minimizing information exchange through a communication framework.
To handle uncertainties in a distributed setting, some approaches are based on robustification [5]. Assuming that the uncertainty is bounded, a robust optimization problem is solved, leading to a decision that satisfies the constraints for all admissible values of the uncertainty. The resulting solution tends to be conservative in many cases. Tube-based robust optimization, see for example [6] and the references therein, was considered in a decentralized setup (plug-and-play) in [7], and it has recently been extended to distributed control systems [8] for a collection of linear stochastic subsystems with independent dynamics.

1.1. Related Works

Although in [8] coupled chance constraints were considered separately at each sampling time, in this paper we consider a chance constraint on the feasibility of trajectories of dynamically coupled subsystems. Our approach is motivated by [7] to reduce the conservativeness of the control design. Other representative approaches for stochastic optimization of a single stochastic system include affine parametrization of the control policy [9], the randomized (scenario) approach [10,11,12,13], and the combined randomized and robust approach [14,15,16]. None of these approaches, to the best of our knowledge, have been considered in a distributed control setting.
This paper aims to develop a systematic approach to distributed stochastic optimization using the scenario-based technique. Scenario optimization approximates a stochastic problem via the so-called scenario (sample) approach [17], and if the underlying optimization problem is convex with respect to the decision variables, finite-sample guarantees can be provided. Following such an approach, however, the computation time for a realistic large-scale problem of interest becomes prohibitive, because the number of samples to be extracted tends to be high, which leads to many constraints in the resulting optimization problem.
To overcome the computational burden caused by the large number of constraints, in [18,19] a heuristic sample-based approach was used in an iterative distributed fashion via dual decomposition, such that all subsystems collaboratively optimize a global performance index. In another interesting work [20], a multi-agent consensus algorithm was presented to achieve consensus on a common value of the decision vector subject to random constraints, and a probabilistic bound on the tails of the consensus violation was also established. However, most of the aforementioned references do not address the goal of reducing communication among subsystems, which we refer to as agents.

1.2. Contributions

Our work in this paper differs from the above references in two important aspects which have not been, to the best of our knowledge, considered in the literature. We develop two different distributed scenario optimization frameworks. In the first version, each agent generates its own scenarios of the local uncertainty sources and takes into consideration an estimate of the neighboring possible state trajectories, such that, using the so-called alternating direction method of multipliers (ADMM), all the agents iteratively improve their local estimates until they agree on the consistency of the exchanged scenarios in the proposed distributed setting. In the second version, we develop a new set-based communication setup together with the notion of probabilistically reliable information to reduce the communication between agents required by the first version of the distributed scenario optimization framework. To quantify the error introduced by such a new communication scheme, we incorporate the probabilistic reliability notion into existing results and provide new guarantees for the desired level of constraint violations. To summarize, the main contributions of this paper are as follows:
(a) By reformulating a centralized scenario optimization problem into a decomposable scenario program, we first present the exactness of the decomposition and then quantify the level of robustness of the resulting solutions by providing new a-priori probabilistic guarantees for the desired level of constraint fulfillment under some mild conditions.
(b) To solve the proposed decomposable scenario program, we develop a so-called distributed scenario exchange scheme between subproblems (neighboring agents) using the alternating direction method of multipliers (ADMM). Such a new scheme is used to provide a distributed scenario optimization framework to handle two types of large-scale network problems in the control literature, namely networks of dynamically coupled agents and networks of agents with coupling constraints.
(c) To reduce the communication in the proposed distributed setting, we develop a novel inter-agent soft communication scheme based on a set parametrization technique together with the notion of probabilistically reliable sets to reduce the required communication between the subproblems. We show how to incorporate the probabilistic reliability notion into existing results and provide new guarantees for the desired level of constraint violations.
(d) Using the so-called soft communication scheme, we present a distributed framework in which the neighboring agents need to communicate only a set together with the level of reliability of that set. Each agent then solves a local robust-communication scenario program at each sampling time. We also establish a practical plug-and-play (PnP) distributed framework that accommodates network changes caused by agents that want to join or leave the network.
The structure of this paper is as follows. Section 2 describes a problem formulation leading to a large-scale scenario optimization problem. In Section 3, we first reformulate the large-scale scenario problem into a decomposable scenario problem. In Section 4, we then provide results on equivalent relations and analyze the robustness of the obtained solutions using the proposed distributed scenario exchange scheme. Section 5 introduces a novel inter-agent communication scheme between the subproblems, namely the soft communication scheme, and then proceeds to quantify the robustness of the proposed scheme. Using the proposed soft communication scheme, in Section 6 we establish a practical PnP distributed framework. Section 7 presents two different simulation studies, and in Section 8 we conclude the paper with some remarks and future work.

2. Problem Formulation

Without loss of generality, this section presents a mathematical problem formulation in the context of an advanced control problem, namely stochastic model predictive control (SMPC), leading to a large-scale scenario optimization problem at each time step. Please note that in this paper we focus on the large-scale scenario problem instance from the optimization point of view and derive probabilistic guarantees for constraint fulfillment in a distributed setting. Analyzing the closed-loop asymptotic behavior of SMPC (e.g., stability and recursive feasibility) is beyond the scope of this paper.
Consider a discrete-time uncertain linear system with additive disturbance in a compact form as follows:
$x_{k+1} = A(\delta_k)\,x_k + B(\delta_k)\,u_k + C(\delta_k)\,w_k$,  (1)
with a given initial condition $x_0 \in \mathbb{R}^n$. Here $k \in \mathcal{T} := \{0, 1, \ldots, T-1\}$ denotes the time instance, $x_k \in X \subseteq \mathbb{R}^n$ and $u_k \in U \subseteq \mathbb{R}^m$ correspond to the state and control input, respectively, and $w_k \in \mathbb{R}^p$ represents an additive disturbance. The system matrices $A(\delta_k) \in \mathbb{R}^{n \times n}$ and $B(\delta_k) \in \mathbb{R}^{n \times m}$ as well as $C(\delta_k) \in \mathbb{R}^{n \times p}$ are random, since they are known functions of an uncertain variable $\delta_k$ that influences the system parameters at each time step $k$. It is important to mention that, given the initial measured state $x_0$, the predicted future states are modeled using (1) at each sampling time $k$. To avoid crowded notation, when there is no ambiguity we will drop the conventional index for the predicted time steps, i.e., $k+\cdot\,|\,k$, and simply use $k_+$. We also define $\mathcal{T}_+ := \{1, \ldots, T\}$ to denote the future predicted time steps.
Throughout this paper, there are three technical assumptions. Assumption 1 is a classical assumption where we define the probability spaces associated with the random variables and highlight the fact that in this paper we only require the availability of a “sufficient number” of samples of such random variables. Using such samples to formulate an optimization problem, Assumption 2 is given as a technical requirement to be able to use a so-called randomization technique and to guarantee a nonempty feasibility domain of such an optimization problem. Assumption 3 is focused on the structure of large-scale system dynamics of interest to be able to explore and exploit its structure for decomposition into an interconnection of smaller subsystems.
Assumption 1.
Random variables $w := \{w_k\}_{k \in \mathcal{T}}$ and $\delta := \{\delta_k\}_{k \in \mathcal{T}}$ are defined on probability spaces $(W, \mathcal{B}(W), \mathbb{P}_w)$ and $(\Delta, \mathcal{B}(\Delta), \mathbb{P}_\delta)$, respectively. $w$ and $\delta$ are two independent random processes, where $\mathbb{P}_w$ and $\mathbb{P}_\delta$ are two different probability measures defined over $W$ and $\Delta$, respectively, and $\mathcal{B}(\cdot)$ denotes a Borel $\sigma$-algebra. The support sets $W$ and $\Delta$ of $w$ and $\delta$, respectively, together with their probability measures $\mathbb{P}_w$ and $\mathbb{P}_\delta$ are entirely generic. In fact, $W$, $\Delta$ and $\mathbb{P}_w$, $\mathbb{P}_\delta$ do not need to be known explicitly. Instead, the only requirement is the availability of a "sufficient number" of samples, which will become concrete in later parts of the paper. Such samples can for instance be obtained from a model learned from available historical data [21].
The system in (1) is subject to constraints on the system state trajectories and control input. Consider the state and control input constraint sets to be compact convex in the following form
$X := \{x \in \mathbb{R}^n : Gx \leq g\}, \quad U := \{u \in \mathbb{R}^m : Hu \leq h\}$,  (2)
where $G \in \mathbb{R}^{q \times n}$, $g \in \mathbb{R}^q$, and $H \in \mathbb{R}^{r \times m}$, $h \in \mathbb{R}^r$. Keeping the state inside a feasible set $X \subseteq \mathbb{R}^n$ for the entire prediction horizon may be too conservative and result in loss of performance. In particular, this is the case when the best performance is achieved close to the boundary of $X$, and thus constraint violations will be unavoidable, because the system parameters in (1) are imperfect and uncertain. To tackle such a problem, we will consider chance constraints on the state trajectories to avoid violation of the state constraints probabilistically, even if the disturbance $w$ or uncertainty $\delta$ has unbounded support. Notice that a robust problem formulation [3] cannot cope with problems having an unbounded disturbance set.
Following the traditional MPC approach to find a stabilizing full-information controller that leads to admissible control inputs $u := \{u_k\}_{k \in \mathcal{T}}$ and satisfies the state constraints, one can rely on the standard assumption of the existence of a suitable prestabilizing control law, see, e.g., (Proposition 1, [7]). Based on [9], one can also employ a so-called parametrized feedback policy and split the control input to cope with the state prediction under uncertainties: $u_k = Kx_k + v_k$, with $v_k \in \mathbb{R}^m$ a free correction input variable to compensate for disturbances, and $K$ a given pre-stabilizing feedback gain for the nominal (uncertainty-free) system (1).
The control objective is to minimize a finite-horizon cumulative quadratic stage cost $J(\cdot): \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, defined as follows:

$J(x, u) = \mathbb{E}\left[\sum_{k=0}^{T-1}\left(x_k^\top Q x_k + u_k^\top R u_k\right) + x_T^\top P x_T\right]$,  (3)
with $Q \in \mathbb{R}^{n \times n}_{\succeq 0}$ and $R \in \mathbb{R}^{m \times m}_{\succeq 0}$. Consider $x := \{x_k\}_{k \in \mathcal{T}}$, $(A, Q^{\frac{1}{2}})$ to be detectable, and $P$ to be the solution of the discrete-time Lyapunov equation:

$\mathbb{E}\left[A_{cl}(\delta_k)^\top P A_{cl}(\delta_k)\right] - P + Q + K^\top R K \preceq 0$,  (4)

for the closed-loop system with $A_{cl}(\delta_k) = A(\delta_k) + B(\delta_k)K$. Each stage cost term is taken in expectation $\mathbb{E}[\cdot]$, since the argument $x_k$ is a random variable. Using $v = \{v_k\}_{k \in \mathcal{T}}$, consider now the following stochastic control (optimization) problem:
$\min_{v \in \mathbb{R}^{Tm}} \; J(x, u)$  (5a)
$\text{s.t.} \;\; x_{k+1} = A(\delta_k)x_k + B(\delta_k)u_k + C(\delta_k)w_k$,  (5b)
$\qquad\;\; \mathbb{P}\left[x_{k_+} \in X, \; \forall k_+ \in \mathcal{T}_+\right] \geq 1 - \varepsilon$,  (5c)
$\qquad\;\; u_k = Kx_k + v_k \in U, \quad \forall k \in \mathcal{T}$,  (5d)
where $x_0$ is initialized based on the measured current state, and $\varepsilon \in (0,1)$ is the admissible state constraint violation parameter of the large-scale system (1). The state trajectory $x_{k_+}$, $k_+ \in \mathcal{T}_+$, depends on the random variables $w$ and $\delta$, and thus the chance constraint can be interpreted as follows: the probability of violating the state constraints over the future time steps $\mathcal{T}_+$ is restricted to $\varepsilon$, given that the state of the system in (1) is measurable at each time step $k \in \mathcal{T}$. Even though $U$ and $X$ are compact convex sets, due to the chance constraint on the state trajectory the feasible set of the optimization problem in (5) is, in general, non-convex.
To handle the chance constraint (5c), we recall a scenario-based approximation [22]. $w_k$ and $\delta_k$ at each sampling time $k \in \mathcal{T}$ are not necessarily independent and identically distributed (i.i.d.). In particular, they may have time-varying distributions and/or be correlated in time. We assume that a "sufficient number" of i.i.d. samples of the disturbance $w \in W$ and $\delta \in \Delta$ can be obtained either empirically or by a random number generator. We denote the sets of given finite samples (scenarios) by $\mathcal{S}_w := \{w^1, \ldots, w^S\} \in W^S$ and $\mathcal{S}_\delta := \{\delta^1, \ldots, \delta^S\} \in \Delta^S$, respectively. Following the approach in [23], we approximate the expected value of the objective function empirically by averaging the value of its argument over a number of different scenarios, which plays a tuning parameter role. Using $\bar{S}$ as the tuning parameter, consider $\bar{S}$ different scenarios of $w$ and $\delta$ to build

$\bar{\mathcal{S}}_{w,\delta} = \left\{(w^s, \delta^s) : w^s \in W, \; \delta^s \in \Delta, \; s = 1, \ldots, \bar{S}\right\}$,
which has cardinality $|\bar{\mathcal{S}}_{w,\delta}| = \bar{S}$. We then approximate the cost function empirically as follows:

$J(x, u) = \mathbb{E}_{(w,\delta)}\left[\sum_{k=0}^{T-1} V(x_k(w_k, \delta_k), u_k)\right] \approx \frac{1}{\bar{S}} \sum_{(w^s, \delta^s) \in \bar{\mathcal{S}}_{w,\delta}} \sum_{k=0}^{T-1} V(x_k(w_k^s, \delta_k^s), u_k)$,

where $V(x_k(w_k, \delta_k), u_k)$ is a compact notation for $\left(x_k(w_k, \delta_k)^\top Q\, x_k(w_k, \delta_k) + u_k^\top R\, u_k\right) + x_T(w_k, \delta_k)^\top P\, x_T(w_k, \delta_k)$. Notice that $x_k(w_k, \delta_k)$ indicates the dependency of the state variables on the random variables.
We are now in a position to formulate an approximated version of the proposed stochastic optimization problem in (5) using the following finite horizon scenario program:

$\min_{v \in \mathbb{R}^{Tm}} \; J(x, u)$  (6a)
$\text{s.t.} \;\; x_{k+1}^s = A(\delta_k^s)x_k^s + B(\delta_k^s)u_k^s + C(\delta_k^s)w_k^s$,  (6b)
$\qquad\;\; x_{k_+}^s \in X, \; \forall k_+ \in \mathcal{T}_+, \; \forall w^s \in \mathcal{S}_w, \; \forall \delta^s \in \mathcal{S}_\delta$,  (6c)
$\qquad\;\; u_k^s = Kx_k^s + v_k \in U, \quad \forall k \in \mathcal{T}$,  (6d)
where the superscript $s$ indicates a particular sample realization. The solution of (6) is the optimal control input sequence $v^* = \{v_k^*, \ldots, v_{k+T-1}^*\}$. Based on the MPC paradigm, the current input is implemented as $u_k := Kx_k + v_k^*$ and we proceed in a receding horizon fashion. This means that the problem (6) is solved at each time step $k$ using the current measurement of the state $x_k$. Please note that new scenarios need to be generated at each sampling time $k \in \mathbb{N}_+$. It is important to note that the proposed large-scale scenario optimization problem (6) should be feasible with a nonempty feasibility domain. This is a technical requirement (Assumption 1, [17]) for using such a randomization technique. If the problem is infeasible, we need to generate a new set of scenarios and re-solve the problem.
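To make the construction concrete, the following minimal sketch sets up an instance of the scenario program (6) for a hypothetical two-state toy system in Python with CVXPY; all system matrices, bounds, and scenario counts are illustrative assumptions, and for brevity a single scenario set approximates both the empirical cost and the constraints (the paper distinguishes $\bar{S}$ and $S$).

```python
# A minimal sketch of the scenario program (6) for a hypothetical toy system;
# all matrices, bounds, and scenario counts are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, T, S = 2, 1, 5, 50                    # states, inputs, horizon, scenarios
A0 = np.array([[0.9, 0.2], [0.0, 0.8]])
B0 = np.array([[0.0], [1.0]])
C0 = 0.1 * np.eye(n)
K = np.zeros((m, n))                        # pre-stabilizing gain (A0 is stable)
x0 = np.array([1.0, -1.0])

v = cp.Variable((T, m))                     # correction inputs shared by all scenarios
cost, constr = 0, []
for s in range(S):
    delta = 0.05 * rng.standard_normal()    # scalar perturbation of A (scenario of delta)
    w = rng.standard_normal((T, n))         # additive disturbance scenario
    x = x0
    for k in range(T):
        u = K @ x + v[k]                    # u_k^s = K x_k^s + v_k
        x = (A0 + delta * np.eye(n)) @ x + B0 @ u + C0 @ w[k]
        constr += [cp.norm(x, "inf") <= 5.0,    # state constraint per scenario
                   cp.norm(u, "inf") <= 1.5]    # input constraint per scenario
        cost += cp.sum_squares(x) + cp.sum_squares(u)

cp.Problem(cp.Minimize(cost / S), constr).solve()
print("v* =", v.value.ravel())
```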
Assumption 2.
The proposed large-scale scenario optimization problem (6) is feasible, i.e., its feasibility domain is nonempty.
Following Assumption 2 and based on the Weierstrass theorem (Proposition A.8, [24]), the proposed large-scale scenario program (6) admits at least one optimal solution, and therefore Slater's constraint qualification [25] holds for the proposed problem formulations. The key features of the optimization problem (6) are as follows: (1) there is no need to know the probability measures $\mathbb{P}_w$ and $\mathbb{P}_\delta$ explicitly; only the capability of obtaining random scenarios is needed; (2) formal results to quantify the level of approximation are available. In particular, the results follow the so-called scenario approach [17], which allows bounding a-priori the violation probability of the solution obtained via (6).
In the following theorem, we restate the explicit theoretical bound of (Theorem 1, [17]), which characterizes the finite-sample behavior of (6).
Theorem 1.
Let $\varepsilon, \beta \in (0,1)$ and $S \geq N(\varepsilon, \beta, Tm)$, where

$N(\varepsilon, \beta, Tm) := \min\left\{N \in \mathbb{N} \;\Big|\; \sum_{i=0}^{Tm-1} \binom{N}{i} \varepsilon^i (1-\varepsilon)^{N-i} \leq \beta\right\}$.  (7)

If the optimizer $v^* \in \mathbb{R}^{Tm}$ of problem (6) is applied to the discrete-time dynamical system (1) for a finite horizon of length $T$, then, with confidence at least $1-\beta$, the original constraint (5c) is satisfied for all $k \in \mathcal{T}$ with probability more than $1-\varepsilon$.
It was shown in [17] that the above bound is tight. The interpretation of Theorem 1 is as follows: when applying $v^*$ in a finite horizon control problem, the probability of constraint violation of the state trajectory remains below $\varepsilon$ with confidence $1-\beta$:

$\mathbb{P}^S\left[\mathcal{S}_w \in W^S, \, \mathcal{S}_\delta \in \Delta^S : \mathrm{Vio}(v^*) \leq \varepsilon\right] \geq 1 - \beta$,

with $\mathrm{Vio}(v^*) := \mathbb{P}\left[w \in W, \delta \in \Delta : x_{k_+} = A_{cl}(\delta_k)x_k + B(\delta_k)v_k^* + C(\delta_k)w_k \notin X, \; \exists k_+ \in \mathcal{T}_+ \,\middle|\, x_k = x_0\right]$, where $A_{cl}(\delta_k) = A(\delta_k) + B(\delta_k)K$. It is worth mentioning that the proposed constraint on the control input (6d) is also met in a probabilistic sense for all prediction time steps, except at the initial time step, due to the feedback parametrization and the nature of the scenario approach in the proposed optimization problem (6). At the initial time step the constraint (6d) is deterministic, and the superscript $s$ can be dropped, as there is only one measured current state $x_k$. This holds for all the following proposed formulations.
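The sample-size bound (7) can be evaluated numerically. The helper below is a sketch assuming the reconstruction of (7) above: it searches for the least $N$ whose binomial tail falls below $\beta$; the example numbers are hypothetical.

```python
# A sketch of the sample-size bound (7): find the least N with
# sum_{i=0}^{d-1} C(N,i) eps^i (1-eps)^{N-i} <= beta, where d = T*m.
from math import comb

def scenario_bound(eps: float, beta: float, d: int) -> int:
    N = d
    while True:
        tail = sum(comb(N, i) * eps**i * (1 - eps)**(N - i) for i in range(d))
        if tail <= beta:
            return N
        N += 1

# illustrative numbers: eps = 0.1, beta = 1e-6, and T*m = 10
print(scenario_bound(0.1, 1e-6, 10))   # on the order of a few hundred scenarios
```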

3. Decomposed Problem Reformulation

Consider a partitioning of the system dynamics (1) through a decomposition into $N$ subsystems, and let $\mathcal{N} = \{1, 2, \ldots, N\}$ be the set of subsystem indices. The state variables $x_k$, control input signals $u_k$ and additive disturbance $w_k$ can be written as $x_k = \mathrm{col}_{i \in \mathcal{N}}(x_{i,k})$, $u_k = \mathrm{col}_{i \in \mathcal{N}}(u_{i,k})$ and $w_k = \mathrm{col}_{i \in \mathcal{N}}(w_{i,k})$, respectively, where $x_{i,k} \in \mathbb{R}^{n_i}$, $u_{i,k} \in \mathbb{R}^{m_i}$, $w_{i,k} \in \mathbb{R}^{p_i}$, and $\sum_{i \in \mathcal{N}} n_i = n$, $\sum_{i \in \mathcal{N}} m_i = m$, $\sum_{i \in \mathcal{N}} p_i = p$. The following assumption is important for partitioning the system parameters.
Assumption 3.
The control input and the additive disturbance of the subsystems are decoupled, i.e., $u_{i,k}$ and $w_{i,k}$ only affect subsystem $i \in \mathcal{N}$ for all $k \in \mathcal{T}$. Consider the state and control input constraint sets $X$ and $U$ of the large-scale system dynamics (1) as defined in (2) to be $X = \prod_{i \in \mathcal{N}} X_i$ and $U = \prod_{i \in \mathcal{N}} U_i$, such that $X_i$ and $U_i$ for all subsystems $i \in \mathcal{N}$ can be written in the following form:

$X_i := \{x \in \mathbb{R}^{n_i} : G_i x \leq g_i\}, \quad U_i := \{u \in \mathbb{R}^{m_i} : H_i u \leq h_i\}$,

where $G = \mathrm{diag}_{i \in \mathcal{N}}(G_i)$, $H = \mathrm{diag}_{i \in \mathcal{N}}(H_i)$, $g = \mathrm{col}_{i \in \mathcal{N}}(g_i)$ and $h = \mathrm{col}_{i \in \mathcal{N}}(h_i)$. Also consider the objective function of each subsystem $i \in \mathcal{N}$ in the following form:

$J_i(x_i, u_i) := \mathbb{E}\left[\sum_{k=0}^{T-1}\left(x_{i,k}^\top Q_i x_{i,k} + u_{i,k}^\top R_i u_{i,k}\right) + x_{i,T}^\top P_i x_{i,T}\right]$,

where $Q_i \in \mathbb{R}^{n_i \times n_i}_{\succeq 0}$, $R_i \in \mathbb{R}^{m_i \times m_i}_{\succeq 0}$ such that $Q = \mathrm{diag}_{i \in \mathcal{N}}(Q_i)$ and $R = \mathrm{diag}_{i \in \mathcal{N}}(R_i)$. Please note that $x_i = \mathrm{col}_{k \in \mathcal{T}}(x_{i,k})$ and $u_i = \mathrm{col}_{k \in \mathcal{T}}(u_{i,k})$ such that $x = \mathrm{col}_{i \in \mathcal{N}}(x_i)$ and $u = \mathrm{col}_{i \in \mathcal{N}}(u_i)$.
It is important to note that under Assumption 3 there are no direct coupling constraints between each subsystem $i \in \mathcal{N}$ and its neighboring subsystems $j \in \mathcal{N}_i$. Instead, subsystem $i \in \mathcal{N}$ is dynamically coupled with all its neighboring subsystems $j \in \mathcal{N}_i$, as presented in (8).
We refer to the additive disturbance $w_{i,k}$ as a local uncertainty source of each subsystem $i \in \mathcal{N}$, since it is assumed to affect only subsystem $i$. Motivated by an application to Smart Thermal Grids (STGs) in [26], where a network of agents is interconnected via an uncertain common resource pool between neighbors, we consider the uncertain variable $\delta_k$ as a common uncertainty source between all subsystems $i \in \mathcal{N}$. Such an STG application will also be presented in Section 7.2 as a second case study. Observe that every common uncertain phenomenon can also be considered a local uncertain variable, e.g., the outside weather condition for neighboring buildings. Therefore, we also consider $\delta_k = \mathrm{col}_{i \in \mathcal{N}}(\delta_{i,k})$ and refer to both random variables $w_{i,k}$ and $\delta_{i,k}$ as local uncertainty sources.
We are now able to decompose the large-scale system matrices $B(\delta_k) = \mathrm{diag}_{i \in \mathcal{N}}(B_i(\delta_{i,k})) \in \mathbb{R}^{n \times m}$, $C(\delta_k) = \mathrm{diag}_{i \in \mathcal{N}}(C_i(\delta_{i,k})) \in \mathbb{R}^{n \times p}$, and consider $A(\delta_k) \in \mathbb{R}^{n \times n}$ in the following form:

$A(\delta_k) = \begin{bmatrix} A_{11}(\delta_{1,k}) & \cdots & A_{1N}(\delta_{1,k}) \\ \vdots & \ddots & \vdots \\ A_{N1}(\delta_{N,k}) & \cdots & A_{NN}(\delta_{N,k}) \end{bmatrix}$,
where $A_{ij}(\delta_{i,k}) \in \mathbb{R}^{n_i \times n_j}$, $B_i(\delta_{i,k}) \in \mathbb{R}^{n_i \times m_i}$, and $C_i(\delta_{i,k}) \in \mathbb{R}^{n_i \times p_i}$. Define the set of neighboring subsystems of subsystem $i$ as follows:

$\mathcal{N}_i = \left\{j \in \mathcal{N} \setminus \{i\} \;\middle|\; A_{ij}(\delta_{i,k}) \neq \mathbf{0}\right\}$,
where $\mathbf{0}$ denotes a matrix of all zeros with proper dimension. Please note that if subsystems are decoupled, they remain decoupled regardless of the uncertainties $\delta_{i,k}$ for all $i \in \mathcal{N}$. Consider now a large-scale network that consists of $N$ interconnected subsystems, where each subsystem can be described by an uncertain discrete-time linear time-invariant system with additive disturbance of the form

$x_{i,k+1} = A_{ii}(\delta_{i,k})x_{i,k} + B_i(\delta_{i,k})u_{i,k} + q_{i,k}$,  (8)
$q_{i,k} = \sum_{j \in \mathcal{N}_i} A_{ij}(\delta_{i,k})x_{j,k} + C_i(\delta_{i,k})w_{i,k}$.
Following Assumption 3, one can consider a linear feedback gain matrix $K_i$ for each subsystem $i \in \mathcal{N}$ such that $K = \mathrm{diag}_{i \in \mathcal{N}}(K_i)$. Using $K_i$ in each subsystem, we assume that there exists $P_i$ for each subsystem $i \in \mathcal{N}$ such that $P = \mathrm{diag}_{i \in \mathcal{N}}(P_i)$ preserves the condition in (4).
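As a small illustration of this block structure, the following sketch recovers the neighbor sets $\mathcal{N}_i$ from the zero pattern of the blocks $A_{ij}$; it reuses the three-room matrix $A$ of Section 7.1 under the assumption of a hypothetical scalar-state partition ($n_i = 1$ for all $i$).

```python
# A sketch of recovering the neighbor sets N_i from the zero pattern of A_ij.
import numpy as np

A = np.array([[0.2, 0.3, 0.0],
              [0.2, 0.1, 0.1],
              [0.2, 0.0, 0.4]])
sizes = [1, 1, 1]                                  # n_i for each subsystem
offsets = np.cumsum([0] + sizes)

def neighbors(i: int) -> list:
    """Indices j != i whose block A_ij is nonzero."""
    ri, rj = offsets[i], offsets[i + 1]
    return [j for j in range(len(sizes))
            if j != i and np.any(A[ri:rj, offsets[j]:offsets[j + 1]] != 0)]

for i in range(len(sizes)):
    print(f"N_{i + 1} =", {j + 1 for j in neighbors(i)})   # N_1={2}, N_2={1,3}, N_3={1}
```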
Define $\sigma_i := \mathrm{col}_{s \in \{1,\ldots,S_i\}}(\sigma_i^s)$ with $\sigma_i^s := \mathrm{col}_{j \in \mathcal{N}_i}(z_{ij}^s)$, where $z_{ij}^s = \mathrm{col}_{k \in \mathcal{T}}(z_{ij,k}^s) \in \mathbb{R}^{Tn_j}$, to be auxiliary decision variables, and using $v_i = \mathrm{col}_{k \in \mathcal{T}}(v_{i,k}) \in \mathbb{R}^{Tm_i}$ such that $v = \mathrm{col}_{i \in \mathcal{N}}(v_i)$, we are now able to reformulate the optimization problem in (6) into the following decomposable finite horizon scenario program:

$\min_{\{\sigma_i, v_i\}_{i \in \mathcal{N}}} \; \sum_{i \in \mathcal{N}} J_i(x_i, u_i)$  (9a)
$\text{s.t.} \;\; x_{i,k+1}^s = A_{ii}(\delta_{i,k}^s)x_{i,k}^s + B_i(\delta_{i,k}^s)u_{i,k}^s + q_{i,k}^s$,  (9b)
$\qquad\;\; q_{i,k}^s = \sum_{j \in \mathcal{N}_i} A_{ij}(\delta_{i,k}^s)z_{ij,k}^s + C_i(\delta_{i,k}^s)w_{i,k}^s$,  (9c)
$\qquad\;\; z_{ij,k}^s = x_{j,k}^s, \; \forall x_j^s \in \mathcal{S}_{x_j}, \; \forall j \in \mathcal{N}_i$,  (9d)
$\qquad\;\; x_{i,k_+}^s \in X_i, \; \forall k_+ \in \mathcal{T}_+, \; \forall w_i^s \in \mathcal{S}_{w_i}, \; \forall \delta_i^s \in \mathcal{S}_{\delta_i}$,  (9e)
$\qquad\;\; u_{i,k}^s = K_i x_{i,k}^s + v_{i,k} \in U_i, \quad \forall k \in \mathcal{T}, \; \forall i \in \mathcal{N}$,  (9f)
where $w_i = \mathrm{col}_{k \in \mathcal{T}}(w_{i,k}) \in W_i$ and $\delta_i = \mathrm{col}_{k \in \mathcal{T}}(\delta_{i,k}) \in \Delta_i$ such that $W = \prod_{i \in \mathcal{N}} W_i$ and $\Delta = \prod_{i \in \mathcal{N}} \Delta_i$. The sets $\mathcal{S}_{w_i} := \{w_i^1, \ldots, w_i^{S_i}\} \in W_i^{S_i}$ and $\mathcal{S}_{\delta_i} := \{\delta_i^1, \ldots, \delta_i^{S_i}\} \in \Delta_i^{S_i}$ denote sets of given finite samples (scenarios) of the disturbance and uncertainties in each subsystem $i \in \mathcal{N}$, such that $\mathcal{S}_w = \prod_{i \in \mathcal{N}} \mathcal{S}_{w_i}$ and $\mathcal{S}_\delta = \prod_{i \in \mathcal{N}} \mathcal{S}_{\delta_i}$. It is important to highlight that the proposed optimization problem in (9) is decomposable into $N$ subproblems, with coupling only through the equality constraints (9d) between neighboring agents. The auxiliary variables $z_{ij,k}^s$ are defined between agent $i \in \mathcal{N}$ and its neighboring agent $j \in \mathcal{N}_i$ to represent the local estimation of each scenario of the neighbor's state variable $x_j^s = \mathrm{col}_{k \in \mathcal{T}}(x_{j,k}^s) \in X_j^T := \mathcal{X}_j$ at each sampling time $k \in \mathcal{T}$. Taking into consideration that the interaction dynamics model $A_{ij}(\delta_{i,k}^s)$ of each neighboring agent $j \in \mathcal{N}_i$ is available to agent $i \in \mathcal{N}$, the only information that subsystem $i \in \mathcal{N}$ needs is a set of scenarios of the state variable $\mathcal{S}_{x_j} := \{x_j^1, \ldots, x_j^{S_i}\} \in \mathcal{X}_j^{S_i}$ at each sampling time $k \in \mathcal{T}$ from all its neighboring agents $j \in \mathcal{N}_i$. Please note that the cardinality of $\mathcal{S}_{x_j}$ is $S_i$, which is assigned by agent $i \in \mathcal{N}$ for all neighboring agents $j \in \mathcal{N}_i$ and should be sent by the neighboring agents at each sampling time $k \in \mathcal{T}$; this is known as the hard communication scheme between neighboring agents.
In the following proposition, we provide a connection between the proposed optimization problem in (9) and the optimization problem in (6).
Proposition 1.
Given Assumption 3 and the block-diagonal structure of the state-feedback controller $K = \mathrm{diag}_{i \in \mathcal{N}}(K_i)$ for the large-scale system dynamics (1), the optimization problem in (9) is an exact decomposition of the optimization problem in (6), such that the optimal objective value of the decomposed problem (9) is equal to the optimal objective value of the problem (6).
The proof is provided in the Appendix A.
An important feature of the proposed decomposable scenario program in (9), compared to the centralized scenario problem in (6), is as follows. The decomposition reduces the computational complexity relative to solving the centralized scenario program (6) directly. This however requires more communication between the subsystems, since at each time $k \in \mathcal{T}$ each agent $i \in \mathcal{N}$ should send to and receive from all its neighboring agents $j \in \mathcal{N}_i$ a set of state variable scenarios, $\mathcal{S}_{x_i}$ and $\mathcal{S}_{x_j}$, respectively.
Remark 1.
The proposed constraint (9e) represents an approximation of the following chance constraint on the state of each subsystem $i \in \mathcal{N}$:

$\mathbb{P}\left[x_{i,k_+} \in X_i, \; \forall k_+ \in \mathcal{T}_+\right] \geq 1 - \varepsilon_i$,  (10)

where $\varepsilon_i \in (0,1)$ is the admissible state constraint violation parameter of each subsystem (8). One can also consider $\alpha_i = 1 - \varepsilon_i$ as the desired level of state feasibility of each subsystem (8).
The following theorem can be considered to be the main result of this section to quantify the robustness of the solutions obtained by (9).
Theorem 2.
Let $\varepsilon_i, \beta_i \in (0,1)$ be chosen such that $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i \in (0,1)$, $\beta = \sum_{i \in \mathcal{N}} \beta_i \in (0,1)$, and $S_i \geq N(\varepsilon_i, \beta_i, Tm_i)$ for all subsystems $i \in \mathcal{N}$. If $v^* = \mathrm{col}_{i \in \mathcal{N}}(v_i^*)$, the collection of the optimizers of problem (9) for all subsystems $i \in \mathcal{N}$, is applied to the discrete-time dynamical system (1) for a finite horizon of length $T$, then, with confidence at least $1-\beta$, the original constraint (5c) is satisfied for all $k \in \mathcal{T}$ with probability more than $1-\varepsilon$.
The proof is provided in the Appendix B.
The interpretation of Theorem 2 is as follows. In the proposed decomposable scenario program (9), each subsystem $i \in \mathcal{N}$ can have a desired level of constraint violation $\varepsilon_i$ and a desired confidence level $1-\beta_i$. To retain the probabilistic robustness level (5c) of the collection of solutions for the discrete-time dynamical system (1), these choices have to follow a certain design rule, e.g., $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i \in (0,1)$ and $\beta = \sum_{i \in \mathcal{N}} \beta_i \in (0,1)$. This yields a fixed $\varepsilon, \beta$ for the large-scale system (1) and individual $\varepsilon_i, \beta_i$ for each subsystem $i \in \mathcal{N}$. It is important to mention that to maintain the violation level of the large-scale system with many partitions, i.e., $|\mathcal{N}| = N$, the violation level of the individual agents needs to decrease, i.e., smaller $\varepsilon_i$, which may lead to conservative results for each subsystem, since the number of samples $S_i$ needs to increase. Addressing this limitation is the subject of ongoing research work. The following corollary is a direct result of Theorem 2.
Corollary 1.
If the optimal solution $v_i^*$ of each agent $i \in \mathcal{N}$ obtained via the proposed decomposable problem (9) is applied to the agent dynamics (8) for a finite horizon of length $T$, then, with confidence at least $1-\beta_i$, the original constraint (10) is satisfied for all $k \in \mathcal{T}$ with probability more than $1-\varepsilon_i$.

4. Distributed Scenario Optimization

In this section, we develop a distributed framework using ADMM to solve the proposed decomposable formulation in (9). It has been proven that ADMM for this type of problem converges linearly [27]. We follow a similar approach as in [28,29] to solve the decomposable problem (9) in a distributed manner, extending it to the scenario program. Define the local feasibility set $\mathcal{L}_i$ for all agents $i \in \mathcal{N}$:

$\mathcal{L}_i := \big\{\{\sigma_i, v_i\} \;\big|\; u_{i,k}^s = K_i x_{i,k}^s + v_{i,k} \in U_i, \; \forall k \in \mathcal{T}, \;\; x_{i,k+1}^s = A_{ii}(\delta_{i,k}^s)x_{i,k}^s + B_i(\delta_{i,k}^s)u_{i,k}^s + q_{i,k}^s, \;\; q_{i,k}^s = \sum_{j \in \mathcal{N}_i} A_{ij}(\delta_{i,k}^s)z_{ij,k}^s + C_i(\delta_{i,k}^s)w_{i,k}^s, \;\; x_{i,k_+}^s \in X_i, \; \forall k_+ \in \mathcal{T}_+, \; \forall w_i^s \in \mathcal{S}_{w_i}, \; \forall \delta_i^s \in \mathcal{S}_{\delta_i}\big\}$,
where all the system parameters together with the local scenario sets $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$ are fixed and available locally to each agent $i \in \mathcal{N}$ at each sampling time $k \in \mathcal{T}$. We are now able to define the augmented Lagrangian function as follows:

$L(\{\sigma_i, v_i\}_{i \in \mathcal{N}}) := \sum_{i \in \mathcal{N}} \Big( J_i(x_i, u_i) + I_{\mathcal{L}_i}(\{\sigma_i, v_i\}) + \sum_{j \in \mathcal{N}_i} \sum_{s=1}^{S_i} \Big( \frac{\mu}{2}\Big\|z_{ij}^s - x_j^s + \frac{\Lambda_{ij}^s}{\mu}\Big\|_2^2 - \frac{1}{2\mu}\big\|\Lambda_{ij}^s\big\|_2^2 \Big) \Big)$,

where $I_{\mathcal{L}_i}(\{\sigma_i, v_i\})$ is a convex indicator function for the local constraint set $\mathcal{L}_i$ that maps to infinity if one of the constraints is violated, and to zero otherwise. The step size $\mu$ is a fixed constant, and multipliers $\Lambda_{ij}^s \in \mathbb{R}^{n_j}$ are introduced for every coupling constraint of each scenario. The indicator function makes sure all local constraints are satisfied, and the squared norm penalty term forces constraint (9d) to be satisfied.
We now describe the steps of the ADMM algorithm as follows:
(1)
Update primal variables
The multipliers $\Lambda_{ij}^s$ and the local estimates of the neighboring state scenarios $z_{ij}^s$ for all scenarios $s = 1, \ldots, S_i$ and all neighbors $j \in \mathcal{N}_i$ are fixed at their values from the previous iteration. Consider the following minimization problem for all $i \in \mathcal{N}$:

$v_i^{(l+1)} = \arg\min_{v_i} \; J_i(x_i, u_i) + I_{\mathcal{L}_i}(\{\sigma_i^{(l)}, v_i\}) + \sum_{j \in \mathcal{N}_i} \sum_{s=1}^{S_j} \frac{\mu}{2}\Big\|z_{ji}^{s,(l)} - x_i^s + \frac{\Lambda_{ji}^{s,(l)}}{\mu}\Big\|_2^2$.  (11)
It is important to note that the augmented part of this step for each agent $i \in \mathcal{N}$ refers to the coupling constraints from the point of view of the neighboring agents $j \in \mathcal{N}_i$, and $z_{ji}^{s,(l)}$ is the estimate held by neighboring agent $j \in \mathcal{N}_i$ of the state variable scenario of agent $i \in \mathcal{N}$ for all $s \in \{1, \ldots, S_j\}$. Since the minimization is only in $v_i$, all terms involving $\Lambda_{ij}^s$ drop out. This results in $|\mathcal{N}| = N$ separate convex programs.
(2)
Update and exchange the set of state scenarios
The resulting primal decision variables $v_i^{(l+1)}$ for all $i \in \mathcal{N}$ are used to first update and then exchange the sets of state scenarios. Please note that each agent only needs to communicate the updated local state scenario set to all its neighboring agents and receive the updated sets of all its neighboring state scenarios. Consider now the following steps, which we refer to as the Scenario Updating Steps (SUS), for a generic agent $i \in \mathcal{N}$:
$x_{i,k+1}^{s,(l+1)} = A_{ii}(\delta_{i,k}^s)x_{i,k}^{s,(l+1)} + B_i(\delta_{i,k}^s)u_{i,k}^{s,(l+1)} + q_{i,k}^{s,(l)}$,  (12)
$q_{i,k}^{s,(l)} = \sum_{j \in \mathcal{N}_i} A_{ij}(\delta_{i,k}^s)z_{ij,k}^{s,(l)} + C_i(\delta_{i,k}^s)w_{i,k}^s$,
$u_{i,k}^{s,(l+1)} = K_i x_{i,k}^{s,(l+1)} + v_{i,k}^{(l+1)}, \quad \forall k \in \mathcal{T}$.

By repeating the SUS for all local scenarios $w_i^s \in \mathcal{S}_{w_i}$ and $\delta_i^s \in \mathcal{S}_{\delta_i}$, agent $i \in \mathcal{N}$ should first update the set of its local state scenarios $x_i^{s,(l+1)} \in \mathcal{S}_{x_i}^{(l+1)}$ and send it to all its neighboring agents. Then, agent $i \in \mathcal{N}$ should receive the updated sets of all its neighboring state scenarios $\mathcal{S}_{x_j}^{(l+1)}$. It is important to note that the sets of local scenarios, $\mathcal{S}_{w_i}$ and $\mathcal{S}_{\delta_i}$, are fixed for all SUS steps and iteration steps $l \in \mathbb{N}_+$.
(3)
Update and exchange the state estimation variables
Using the updated sets of all neighboring state scenarios $\mathcal{S}_{x_j}^{(l+1)}$ and the updated primal variables, consider the following projection problem for all $i \in \mathcal{N}$:

$\sigma_i^{(l+1)} = \arg\min_{\sigma_i} \; I_{\mathcal{L}_i}(\{\sigma_i, v_i^{(l+1)}\}) + \sum_{j \in \mathcal{N}_i} \sum_{s=1}^{S_i} \frac{\mu}{2}\Big\|z_{ij}^s - x_j^{s,(l+1)} + \frac{\Lambda_{ij}^{s,(l)}}{\mu}\Big\|_2^2$.  (13)

Since the projection is only in $\sigma_i$, the terms that do not depend on $\sigma_i$ drop out, and this results in $|\mathcal{N}| = N$ separate convex programs. Each agent $i \in \mathcal{N}$ should now send the updated local estimates of the neighboring state variables $z_{ij}^{s,(l+1)}$ to its neighbors $j \in \mathcal{N}_i$.
(4)
Update and exchange the multiplier variables
The multipliers are updated as follows for all $i \in \mathcal{N}$, $j \in \mathcal{N}_i$, and $s = 1, \ldots, S_i$:

$\Lambda_{ij}^{s,(l+1)} = \Lambda_{ij}^{s,(l)} + \mu\left(z_{ij}^{s,(l+1)} - x_j^{s,(l+1)}\right)$.  (14)
Notice that no information exchange is needed for the update of the multipliers, since agents have already received the updated sets of all their neighboring state scenarios $x_j^{s,(l+1)} \in \mathcal{S}_{x_j}^{(l+1)}$. As the final step, each agent $i \in \mathcal{N}$ should receive the updated multiplier variables $\Lambda_{ji}^{s,(l+1)}$ from all its neighboring agents $j \in \mathcal{N}_i$ for all scenarios $s = 1, \ldots, S_j$.
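For concreteness, the following minimal sketch implements the dual update (14) and the per-agent residual used as the stopping criterion in (15) below; the container names `z`, `x_nbr`, and `Lam` (agent $i$'s estimates $z_{ij}^s$, the received neighbor scenarios $x_j^s$, and the multipliers $\Lambda_{ij}^s$, keyed by neighbor $j$ and scenario index $s$) are hypothetical.

```python
# A minimal sketch of the dual update (14) and the residual of (15).
import numpy as np

def dual_update(Lam: dict, z: dict, x_nbr: dict, mu: float) -> dict:
    # Lambda_ij^{s,(l+1)} = Lambda_ij^{s,(l)} + mu * (z_ij^{s,(l+1)} - x_j^{s,(l+1)})
    return {j: [Lam[j][s] + mu * (z[j][s] - x_nbr[j][s])
                for s in range(len(z[j]))] for j in z}

def local_residual(z: dict, x_nbr: dict, mu: float) -> float:
    # eta_i = sum_j sum_s (mu/2) * ||z_ij^s - x_j^s||_2^2
    return sum(0.5 * mu * np.linalg.norm(z[j][s] - x_nbr[j][s]) ** 2
               for j in z for s in range(len(z[j])))
```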
In Algorithm 1, we summarize the proposed distributed scenario exchange framework using the ADMM algorithm to illustrate the calculation and communication steps for each agent $i \in \mathcal{N}$. Each agent needs to solve two small-scale scenario programs in Lines 4 and 8 at each iteration, which can be considered the highest computational cost in the proposed algorithm. In Line 5, each agent first updates $\mathcal{S}_{x_i}^{(l+1)}$ using the SUS, which involves only simple operations, e.g., addition, subtraction and scaling, and then it broadcasts and receives the updated local $\mathcal{S}_{x_i}^{(l+1)}$ and $\mathcal{S}_{x_j}^{(l+1)}$ to and from the neighbors, respectively. Finally, each agent in Lines 9 and 11 broadcasts the updated local estimates of the neighboring agents' state variable scenarios and the related multipliers.
Algorithm 1 Distributed Scenario Exchange Algorithm
1: for all $i \in \mathcal{N}$ do
2:     initialize: $x_{i,0} \in X_i$, $l = 0$, $\Lambda_{ij}^{s,(l)}$, $z_{ij}^{s,(l)}$, $z_{ji}^{s,(l)}$, $\forall s \in \{1, \ldots, S_i\}$, and $\forall j \in \mathcal{N}_i$
3:     while $\eta_i^{(l)} > \epsilon_i^{\text{des}}$ using (15) do
4:         Update $v_i^{(l+1)}$ using (11)
5:         Update $\mathcal{S}_{x_i}^{(l+1)}$ using the SUS procedure (12)
6:         Broadcast $\mathcal{S}_{x_i}^{(l+1)}$ to all $j \in \mathcal{N}_i$
7:         Receive $\mathcal{S}_{x_j}^{(l+1)}$ from all $j \in \mathcal{N}_i$
8:         Update $z_{ij}^{s,(l+1)}$ using (13) for all $j \in \mathcal{N}_i$ and for all $s \in \{1, \ldots, S_i\}$
9:         Broadcast $z_{ij}^{s,(l+1)}$ to all $j \in \mathcal{N}_i$
10:         Update $\Lambda_{ij}^{s,(l+1)}$ using (14) for all $j \in \mathcal{N}_i$ and for all $s \in \{1, \ldots, S_i\}$
11:         Broadcast $\Lambda_{ij}^{s,(l+1)}$ to all $j \in \mathcal{N}_i$ and for all $s \in \{1, \ldots, S_i\}$
12:         set $l \leftarrow l + 1$
13:     output: $v_i^* = v_i^{(l+1)}$
We define the agreement on the local estimates of the neighbors' state variable scenarios $z_{ij}^s$ as the convergence criterion of the proposed ADMM algorithm for each agent $i \in \mathcal{N}$:

$\eta_i^{(l)} = \sum_{j \in \mathcal{N}_i} \sum_{s=1}^{S_i} \frac{\mu}{2}\left\|z_{ij}^{s,(l)} - x_j^{s,(l)}\right\|_2^2$,  (15)

and the overall convergence criterion is $\eta^{(l)} := \sum_{i \in \mathcal{N}} \eta_i^{(l)}$. If the residual sequence $\{\eta^{(l)}\}_{l=1}^{+\infty}$ is sufficiently small, i.e., less than or equal to $\sum_{i \in \mathcal{N}} \epsilon_i^{\text{des}}$, where $\epsilon_i^{\text{des}}$ is the desired level of convergence error for each agent $i \in \mathcal{N}$, then all neighboring agents have reached a consistent set of state variable scenarios. In light of Assumption 2, we can now provide convergence of Algorithm 1 based on the results in [30].
Theorem 3.
Assume that Slater's condition [25] holds for the decomposable scenario optimization problem (9), and consider the iterative steps given in Algorithm 1. Then the following statements hold:
  • The residual sequence $\{\eta^{(l)}\}_{l=0}^{+\infty}$ tends to 0 in a non-increasing way as $l$ goes to $+\infty$, and consequently, for all $j \in \mathcal{N}_i$ and each $i \in \mathcal{N}$:
    $z_{ij}^{s,(+\infty)} = x_j^{s,(+\infty)}$.
  • The sequence $\{v_i^{(l)}\}_{i \in \mathcal{N}}$ generated by Algorithm 1 converges to an optimal solution $\{v_i^*\}_{i \in \mathcal{N}}$ of the decomposable scenario program (9) as $l$ tends to $+\infty$.
Proof. 
The theorem follows from [30] that studies the convergence of a standard ADMM problem. The details are omitted for brevity.    □
Remark 2.
The proposed distributed scenario exchange algorithm uses a Gauss–Seidel update on the primal variables and the sets of state variable scenarios, after which the multiplier variables are updated. Since either the primal variables or the sets of state variable scenarios are fixed in the Gauss–Seidel steps, the problem can be distributed over all the agents. The main advantage of distributing a large-scale scenario optimization problem (6) is the ability to find local solutions for each agent based on the information received in the previous iteration. Such calculations can therefore be carried out in parallel. Although an actual parallel implementation is outside the scope of this work, it is important to mention that the proposed algorithm is amenable to such an implementation. ADMM algorithms typically need a large number of iterations to converge to high accuracy, so the local agent problems need to be solved many times before finding a good enough solution. Thus, the ADMM approach without parallelization might not be the quickest method to solve the large-scale scenario program. Such a distributed framework is advantageous especially when the global scenario optimization problem is too large and cannot be solved within polynomial time due to the curse of dimensionality or memory limitations and computational constraints.
We next present the overall steps needed to execute the proposed distributed scenario optimization. Algorithm 2 summarizes all the steps: after decomposing the large-scale dynamical system (1) and determining the index sets of the neighboring agents, each agent first generates scenarios of its local uncertainty sources to approximate its cost function and achieve the feasibility of its chance constraint with a high confidence level following Theorem 2. Next, all the agents solve their own local problems using the proposed distributed scenario exchange scheme (Algorithm 1) to obtain their local optimal solutions. To execute Algorithm 1, it is assumed that the feedback control gain matrices $K_i$ for all agents $i \in \mathcal{N}$ are given (4), and the coupling terms $A_{ij}(\delta_{i,k})$ are known between each agent $i \in \mathcal{N}$ and its neighboring agents $j \in \mathcal{N}_i$. Using the solution obtained via Algorithm 1, each agent generates a new set of local state variable scenarios to send to and receive from all its neighboring agents. Finally, the first optimal control input is applied to the real system, the new state variables are measured, and we proceed in the classical receding horizon fashion.
Remark 3.
The proposed decomposable scenario program in (9) is a general deterministic optimization problem for each agent, with coupling via (9d). Therefore, the proposed technique to solve (9) in Algorithm 1, namely the distributed scenario exchange scheme, can be considered a general solution method for agents' problems with a nonempty intersection between their local feasible sets. The case where agents are coupled only via coupling constraints is presented in Section 7.2 as a second case study.
Algorithm 2 Distributed Scenario Optimization (DSO)
1: Decompose the large-scale dynamical system (1) into $N$ agents in the proposed form (8)
2: Determine the index set of neighboring agents $\mathcal{N}_i$ for each agent $i \in \mathcal{N}$
3: for all agents $i \in \mathcal{N}$ do
4:     fix the initial state $x_{i,0} \in X_i$, $\varepsilon_i \in (0,1)$, and $\beta_i \in (0,1)$ such that $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i \in (0,1)$, $\beta = \sum_{i \in \mathcal{N}} \beta_i \in (0,1)$
5:     determine $\bar{S}_i \in (0, +\infty)$ to approximate the objective function, and $S_i$ following Theorem 2 to approximate the chance constraint (10) in an equivalent sense
6:     generate $\bar{S}_i, S_i$ scenarios of $w_i, \delta_i$ to determine the sets $\bar{\mathcal{S}}_{w_i, \delta_i}$ and $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$
7:     solve the proposed optimization problem in (9) using the proposed distributed scenario exchange algorithm (Algorithm 1) and determine $v_i^*$
8:     generate $S_i$ scenarios of $x_i$ using the SUS procedure (12), taking into consideration $v_i^*$ and $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$
9:     send the set $\mathcal{S}_{x_i}$ to all neighboring agents $j \in \mathcal{N}_i$
10:     receive the sets $\mathcal{S}_{x_j}$ from all neighboring agents $j \in \mathcal{N}_i$
11:     apply the first input of the solution $u_{i,k}^* = K_i x_{i,k} + v_{i,k}^*$ to the uncertain subsystem (8)
12:     measure the state and substitute it as the initial state of the next step $x_{i,0}$
13:     set $k \leftarrow k + 1$
14:     go to Step (4)

5. Information Exchange Scheme

When the proposed distributed scenario optimization framework (Algorithm 2) is applied to the large-scale scenario program (6), all neighboring agents $j \in \mathcal{N}_i$ of agent $i \in \mathcal{N}$ should send a set of scenarios of the state variable $\mathcal{S}_{x_j} := \{x_j^1, \ldots, x_j^{S_i}\} \in \mathcal{X}_j^{S_i}$ to agent $i$ at each sampling time $k \in \mathcal{T}$, following the proposed distributed scenario exchange scheme in Algorithm 1. The ADMM iterations in Algorithm 1 may require a large number of rounds to achieve agreement between neighboring agents on the consistency of the exchanged scenarios, which may turn out to be too costly in terms of required communication bandwidth. To address this shortcoming, we propose a set-based information exchange scheme, which we refer to as a soft communication protocol.
We now describe the soft communication protocol in more detail. The neighboring agent $j \in \mathcal{N}_i$ must first generate $\tilde{S}_j$ samples of $x_j$ in order to build the set $\mathcal{S}_{x_j}$. It is important to notice that in the soft communication protocol the number $\tilde{S}_j$ of samples generated by agent $j$ may differ from the number needed by agent $i$, which is $S_i$, as will be remarked later. Let us then introduce $\mathcal{B}_j \subset \mathbb{R}^{n_j}$ as a bounded set containing all the elements of $\mathcal{S}_{x_j}$. We assume for simplicity that $\mathcal{B}_j$ is an axis-aligned hyper-rectangular set [14]. This is not a restrictive assumption, and any convex set, e.g., ellipsoids and polytopes, could have been chosen instead, as described in [26]. We can define $\mathcal{B}_j := [-b_j, b_j]$ as an interval, where the vector $b_j \in \mathbb{R}^{n_j}$ defines the hyper-rectangle bounds.
Consider now the following optimization problem that determines the set $\mathcal{B}_j$ with minimal volume:

$\min_{b_j \in \mathbb{R}^{n_j}} \; \|b_j\|_1 \quad \text{s.t.} \quad x_j^l \in [-b_j, b_j], \; \forall x_j^l \in \mathcal{S}_{x_j}, \; l = 1, \ldots, \tilde{S}_j$,  (16)

where $\tilde{S}_j$ is the number of samples $x_j \in \mathcal{S}_{x_j}$ that neighboring agent $j$ must take into account in order to determine $\mathcal{B}_j$. If we denote by $\tilde{\mathcal{B}}_j = [-\tilde{b}_j, \tilde{b}_j]$ the optimal solution of (16) computed by the neighboring agent $j$, then for implementing the soft communication protocol, agent $j$ needs to communicate only the vector $\tilde{b}_j$ along with the level of reliability $\tilde{\alpha}_j$ to agent $i$.
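Since the box is symmetric, the $\ell_1$-minimal solution of (16) is simply the componentwise maximum of the absolute sample values, as the short sketch below illustrates on hypothetical samples.

```python
# The l1-minimal symmetric box [-b, b] of (16) is the componentwise maximum of
# the absolute sample values; the samples below are hypothetical.
import numpy as np

samples = np.random.default_rng(1).normal(size=(200, 3))   # S_tilde_j x n_j
b_tilde = np.abs(samples).max(axis=0)                      # optimizer of (16)
assert np.all((samples >= -b_tilde) & (samples <= b_tilde))
print("b_tilde =", b_tilde)
```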
Definition 1.
A set $\tilde{\mathcal{B}}_j$ is called $\tilde{\alpha}_j$-reliable if

$\mathbb{P}\left[x_j \in X_j : x_j \notin [-\tilde{b}_j, \tilde{b}_j]\right] \leq 1 - \tilde{\alpha}_j$,  (17)

and we refer to $\tilde{\alpha}_j$ as the level of reliability of the set $\tilde{\mathcal{B}}_j$.
We now provide the following theorem to determine α ˜ j as the level of reliability of the set B ˜ j .
Theorem 4.
Fix $\tilde{\beta}_j \in (0,1)$ and let

$\tilde{\alpha}_j = \sqrt[\kappa]{\dfrac{\tilde{\beta}_j}{\binom{\tilde{S}_j}{n_j}}}$,  (18)

where $\kappa = \tilde{S}_j - n_j$ is the degree of the root. We then have

$\mathbb{P}^{\tilde{S}_j}\left[\{x_j^1, \ldots, x_j^{\tilde{S}_j}\} \in X_j^{\tilde{S}_j} : \mathbb{P}\left[x_j \in X_j : x_j \notin [-\tilde{b}_j, \tilde{b}_j]\right] \leq 1 - \tilde{\alpha}_j\right] \geq 1 - \tilde{\beta}_j$.  (19)
The proof is provided in the Appendix C.
Theorem 4 implies that, given a hypothetical new sample $x_j \in X_j$, agent $j \in \mathcal{N}_i$ has a confidence of at least $1 - \tilde{\beta}_j$ that the probability of $x_j \in \tilde{\mathcal{B}}_j = [-\tilde{b}_j, \tilde{b}_j]$ is at least $\tilde{\alpha}_j$. Therefore, one can rely on $\tilde{\mathcal{B}}_j$ up to probability $\tilde{\alpha}_j$. The number of samples $\tilde{S}_j$ in the proposed formulation (16) is a design parameter chosen by the neighboring agent $j \in \mathcal{N}_i$. We remark, however, that one can also set a given $\tilde{\alpha}_j$ as the desired level of reliability and obtain from (18) the required number of samples $\tilde{S}_j$.
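The following sketch evaluates the level of reliability, assuming the binomial-coefficient form of (18) reconstructed above; the numbers are illustrative.

```python
# Level of reliability per (18), assuming the reconstructed form
# alpha = (beta / C(S, n))^(1 / (S - n)).
from math import comb

def reliability(beta_tilde: float, S_tilde: int, n_j: int) -> float:
    kappa = S_tilde - n_j                  # degree of the root
    return (beta_tilde / comb(S_tilde, n_j)) ** (1.0 / kappa)

print(reliability(1e-3, 1000, 3))   # ~0.974; approaches 1 as S_tilde grows
```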
When an agent $i \in \mathcal{N}$ and its neighbor $j \in \mathcal{N}_i$ adopt the soft communication scheme, there is an important effect on the probabilistic feasibility of agent $i$, following Remark 1. Such a scheme introduces some level of stochasticity into the probabilistic feasibility of agent $i$, due to the fact that the neighboring information is only probabilistically reliable. This affects the local probabilistic robustness guarantee of feasibility as discussed in Theorem 2 and consequently in Theorem 1. To accommodate the level of reliability of the neighboring information, we need to marginalize a joint cumulative distribution function (cdf) of $x_i$ and the generic sample $x_j \in X_j$ appearing in Theorem 4. We thus have the following theorem, which can be regarded as the main theoretical result of this section.
Theorem 5.
Given $\tilde{\alpha}_j \in (0,1)$ for all $j \in \mathcal{N}_i$ and a fixed $\alpha_i = 1 - \varepsilon_i \in (0,1)$, the state trajectory of a generic agent $i \in \mathcal{N}$ is probabilistically $\bar{\alpha}_i$-feasible for all $w_i \in W_i$, $\delta_i \in \Delta_i$, i.e.,

$\mathbb{P}\left[x_{i,k_+} \in X_i, \; \forall k_+ \in \mathcal{T}_+\right] \geq \bar{\alpha}_i$,  (20)

where $\bar{\alpha}_i = 1 - \dfrac{1 - \alpha_i}{\tilde{\alpha}_i}$ such that $\tilde{\alpha}_i = \prod_{j \in \mathcal{N}_i} \tilde{\alpha}_j$.
The proof is provided in the Appendix D.
Following the statement of Theorem 5, it is straightforward to observe that if $\tilde{\alpha}_j \to 1$ for all neighboring agents $j \in \mathcal{N}_i$, then $\bar{\alpha}_i \to \alpha_i$. This means that if the level of reliability of the neighboring information is one, i.e., $\mathbb{P}\left[x_j \in \tilde{\mathcal{B}}_j\right] = 1$ for all $j \in \mathcal{N}_i$, then the state feasibility of agent $i$ has the same probabilistic level of robustness as under the hard communication scheme, i.e., $\mathbb{P}\left[x_i \in X_i\right] \geq \alpha_i = 1 - \varepsilon_i$. Combining this result with the statement of Theorem 2, the proposed soft communication scheme introduces some level of stochasticity into the feasibility of the large-scale system as in (5c). In particular, the level of constraint violation $\varepsilon_i \in (0,1)$ of each agent $i \in \mathcal{N}$ will increase, since it is proportional to the inverse of $\prod_{j \in \mathcal{N}_i} \tilde{\alpha}_j \in (0,1)$, and therefore $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i$ will also increase. After receiving the parametrization of $\tilde{\mathcal{B}}_j$ and the level of reliability $\tilde{\alpha}_j$, agent $i \in \mathcal{N}$ should immunize itself against all possible variations of $x_j \in \tilde{\mathcal{B}}_j$ by taking the worst case over $\tilde{\mathcal{B}}_j$, similar to the worst-case reformulation proposed in (Proposition 1, [16]). It is important to notice that in this way we decouple the sample generation of agent $j \in \mathcal{N}_i$ from agent $i \in \mathcal{N}$.
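As a worked instance of Theorem 5 with hypothetical numbers: an agent with local feasibility level $\alpha_i = 0.95$ and two neighbors that each communicate $0.85$-reliable sets obtains the following degraded guarantee.

```python
# Hypothetical numbers: alpha_i = 0.95 locally, two neighbors at 0.85 reliability.
alpha_i = 0.95                                   # 1 - eps_i
alpha_tilde_i = 0.85 * 0.85                      # product over j in N_i
alpha_bar_i = 1 - (1 - alpha_i) / alpha_tilde_i  # Theorem 5
print(alpha_bar_i)                               # ~0.9308, weaker than alpha_i
```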

6. Plug-and-Play Operational Framework

Using the soft communication scheme, each agent $i \in \mathcal{N}$ should first receive the parametrization of $\tilde{\mathcal{B}}_j$ from all its neighboring agents $j \in \mathcal{N}_i$ together with the level of reliability $\tilde{\alpha}_j$. Then, all agents should immunize themselves against all possible variations of $x_j \in \tilde{\mathcal{B}}_j$. To this end, we formulate the following robust-communication scenario program for each agent $i \in \mathcal{N}$:

$\min_{v_i} \; J_i(x_i, u_i)$  (21a)
$\text{s.t.} \;\; x_{i,k+1}^s = A_{ii}(\delta_{i,k}^s)x_{i,k}^s + B_i(\delta_{i,k}^s)u_{i,k}^s + q_{i,k}^s$,  (21b)
$\qquad\;\; x_{i,k_+}^s \in X_i, \; \forall k_+ \in \mathcal{T}_+, \; \forall w_i^s \in \mathcal{S}_{w_i}, \; \forall \delta_i^s \in \mathcal{S}_{\delta_i}$,  (21c)
$\qquad\;\; u_{i,k}^s = K_i x_{i,k}^s + v_{i,k} \in U_i, \quad \forall k \in \mathcal{T}$,  (21d)
$\qquad\;\; q_{i,k}^s = \sum_{j \in \mathcal{N}_i} A_{ij}(\delta_{i,k}^s)x_{j,k}^s + C_i(\delta_{i,k}^s)w_{i,k}^s, \; \forall x_j^s \in \tilde{\mathcal{B}}_j, \; \forall j \in \mathcal{N}_i$,  (21e)
It is important to highlight that we refer to the above formulation as the robust-communication scenario program, since the communicated variables between neighboring agents are taken into consideration through the worst case over $\tilde{\mathcal{B}}_j$. In this way, there is no need for the many iterations of Algorithm 2; instead, a robustification against all possible variations of $x_j \in \tilde{\mathcal{B}}_j$ is used. Another feature of the soft communication scheme is the relaxation of the condition on the required number of scenarios: under the hard scheme, each agent $i \in \mathcal{N}$ requests $S_i$ scenarios from all its neighboring agents $j \in \mathcal{N}_i$, which may give rise to privacy concerns for the neighboring agents. The proposed problem (21), specifically the robust constraint (21e), can be solved using a robust (worst-case) reformulation similar to (Proposition 1, [16]) and [7]. Based on Assumption 2, the proposed optimization problem (21) should be feasible with a nonempty feasibility domain. If the problem is infeasible, each agent $i \in \mathcal{N}$ needs to generate a new set of local scenarios, $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$, and also needs to receive a new set $\tilde{\mathcal{B}}_j$ from all its neighbors $j \in \mathcal{N}_i$.
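One simple way to carry out this worst-case robustification, sketched below under interval-arithmetic assumptions, is to bound each component of the coupling term $A_{ij}x_j$ over the communicated box $[-\tilde{b}_j, \tilde{b}_j]$; the coupling block and bounds are hypothetical.

```python
# A sketch of bounding the coupling term in (21e): for x_j in [-b_tilde_j,
# b_tilde_j], each component of A_ij x_j lies within +/- |A_ij| b_tilde_j.
import numpy as np

A_ij = np.array([[0.0, 0.3],
                 [0.1, 0.0]])               # coupling block of agent i with neighbor j
b_tilde_j = np.array([0.5, 1.0])            # communicated box bounds
q_bound = np.abs(A_ij) @ b_tilde_j          # componentwise worst case of A_ij x_j
print("coupling term lies within +/-", q_bound)
```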
We summarize the proposed distributed scenario optimization using the soft communication scheme in Algorithm 3. As in Algorithm 2, it is assumed in Algorithm 3 that the feedback control gain matrices $K_i$ for all agents $i \in \mathcal{N}$ are given (4), and the coupling terms $A_{ij}(\delta_{i,k})$ are known between each agent $i \in \mathcal{N}$ and its neighboring agents $j \in \mathcal{N}_i$. It is important to note that Step 5 of Algorithm 3 initializes $\tilde{\mathcal{B}}_j$ for all neighboring agents $j \in \mathcal{N}_i$, which is only used in the initial iteration of Step 8; afterwards, at each iteration, all agents $i \in \mathcal{N}$ send and receive $\tilde{\mathcal{B}}_j$ to and from all their neighboring agents $j \in \mathcal{N}_i$, as in Steps 11 and 12, respectively.
We also summarize the steps needed for the plug-in and plug-out operations of each agent $i \in \mathcal{N}$ in Algorithm 4. Please note that after a plug-in or plug-out operation, all agents $i \in \mathcal{N}$ have to update their $\varepsilon_i$ and $\beta_i$ to respect the conditions of Theorem 2, as in Step (4), in order to achieve the desired level of constraint feasibility for the large-scale system (1). One can also redesign $K_i$ to potentially improve the local control performance of each agent $i \in \mathcal{N}$.
Algorithm 3 DSO using Soft Communication Scheme
1: Decompose the large-scale dynamical system (1) into $N$ agents in the proposed form (8)
2: Determine the index set of neighboring agents $\mathcal{N}_i$ for each agent $i \in \mathcal{N}$
3: for all $i \in \mathcal{N}$ do
4:     fix the initial state $x_{i,0} \in X_i$, $\varepsilon_i \in (0,1)$, and $\beta_i \in (0,1)$ such that $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i \in (0,1)$, $\beta = \sum_{i \in \mathcal{N}} \beta_i \in (0,1)$
5:     initialize $\tilde{\mathcal{B}}_j$ for all neighboring agents $j \in \mathcal{N}_i$
6:     determine $\bar{S}_i \in (0, +\infty)$ to approximate the objective function, and $S_i$ following Theorem 2 to approximate the chance constraint (10) in an equivalent sense
7:     generate $\bar{S}_i, S_i$ scenarios of $w_i, \delta_i$ to determine the sets $\bar{\mathcal{S}}_{w_i, \delta_i}$ and $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$
8:     solve the optimization problem in (21) by taking into account the worst case of $\tilde{\mathcal{B}}_j$ and determine $v_i^*$
9:     generate $\tilde{S}_i$ scenarios of $x_i$ using the dynamics of agent $i$ in the form of (8) and $v_i^*$ together with $\mathcal{S}_{w_i}, \mathcal{S}_{\delta_i}$
10:     determine the set $\tilde{\mathcal{B}}_i$ by solving the optimization problem (16)
11:     send the set $\tilde{\mathcal{B}}_i$ to all neighboring agents $j \in \mathcal{N}_i$
12:     receive the sets $\tilde{\mathcal{B}}_j$ from all neighboring agents $j \in \mathcal{N}_i$
13:     apply the first input of the solution $u_{i,k}^* = K_i x_{i,k} + v_{i,k}^*$ to the uncertain subsystem (8)
14:     measure the state and substitute it as the initial state of the next step $x_{i,0}$
15:     set $k \leftarrow k + 1$
16:     go to Step (7)
Algorithm 4 Plug-and-Play Operation
  • Plug-in Operation
1: Add the number of new subsystems to the previous number of subsystems, e.g., one additional agent: $N \leftarrow N + 1$ such that $|\mathcal{N}| = N + 1$
2: Update the index set of neighboring agents $\mathcal{N}_i$ for each agent $i \in \mathcal{N}$
3: Go to Step (3) of Algorithm 3
  • Plug-out Operation
4: Remove the number of departing subsystems from the previous number of subsystems, e.g., one agent leaving leads to $N \leftarrow N - 1$ such that $|\mathcal{N}| = N - 1$
5: Update the index set of neighboring agents $\mathcal{N}_i$ for each agent $i \in \mathcal{N}$
6: Go to Step (3) of Algorithm 3

Comparison: DSO without vs. with Soft Communication

We now provide a comparison between the proposed distributed scenario optimization without and with soft communication (Algorithm 2 and Algorithm 3, respectively), in terms of their computational complexities and the conservatism level of the obtained solutions, as follows:
(1) Computational complexity: The proposed distributed scenario optimization using the soft communication scheme (Algorithm 3) is computationally advantageous compared to the distributed scenario optimization without it (Algorithm 2). This can be clearly seen by comparing Step 7 of Algorithm 2 with Step 8 of Algorithm 3. In Step 7 of Algorithm 2, each agent $i \in \mathcal{N}$ needs to execute the proposed distributed scenario exchange scheme of Algorithm 1 until all its neighboring agents agree on the consistency of their sets of state variable scenarios $\mathcal{S}_{x_j}$. In contrast, in Step 8 of Algorithm 3, each agent $i \in \mathcal{N}$ needs to solve the robust-communication scenario program (21) only once, and additionally solve the proposed optimization problem (16) in Step 10 of Algorithm 3 in order to determine the set $\tilde{\mathcal{B}}_i$.
(2) Conservatism level: As shown in Theorem 2, the proposed distributed scenario optimization using Algorithm 2 retrieves exactly the same properties as the original problem (5) under certain conditions, i.e., $\varepsilon = \sum_{i \in \mathcal{N}} \varepsilon_i \in (0,1)$, $\beta = \sum_{i \in \mathcal{N}} \beta_i \in (0,1)$. This is however different for the distributed scenario optimization using the proposed soft communication scheme, as presented in Theorem 5. In fact, such a scheme introduces some level of stochasticity into the probabilistic feasibility of agent $i \in \mathcal{N}$, due to the probabilistic reliability of the neighboring information. It is important to mention that guaranteeing the optimality of the obtained solutions in terms of performance objectives using the proposed distributed scenario optimization, without or with the soft communication scheme (Algorithm 2 and Algorithm 3, respectively), is not within the scope of this paper and is the subject of ongoing research work.

7. Numerical Study

This section presents two case studies that illustrate how the proposed framework handles dynamically coupled neighboring agents (Section 7.1) and neighboring agents with coupling constraints (Section 7.2), with private and common uncertainty sources, in networked control problems. Both cases were simulated in MATLAB, with YALMIP as the interface [31] and Gurobi as the solver.
We simulate four different problem formulations in a closed-loop control framework: a centralized SMPC (CSMPC) using (6); a distributed SMPC (DSMPC) via Algorithm 2; and DSMPC with the proposed soft communication scheme following Algorithm 3, at reliability levels 0.85 (DSMPCS$_{0.85}$), as described in Definition 1, and 0.50 (DSMPCS$_{0.50}$). For comparison purposes, we also present the results obtained via decoupled SMPC (DeSMPC), where the coupling between neighboring dynamics in (8) is relaxed.
In Figure 1 and Figure 2, we evaluate our proposed framework in terms of a-posteriori feasibility validation of the obtained results in both case studies. The “red” line represents the results obtained via DeSMPC, the “blue” line those obtained via CSMPC, the “magenta” line those obtained via DSMPC, and the “dark green” and “light green” lines those obtained via DSMPCS$_{0.85}$ and DSMPCS$_{0.50}$, respectively. The “black” lines indicate the bounds of the three dynamically coupled systems.

7.1. Three-Room Case Study

We simulate a building climate comfort system with three rooms whose temperatures are dynamically coupled, without any common constraints. The outside weather temperature and the related losses, e.g., through windows, are considered to be the private uncertainty.
Consider now the three-room building system dynamics:
$$x_{k+1} = A(\delta_k)\,x_k + B(\delta_k)\,u_k + C(\delta_k)\,w_k,$$
where $B = \mathrm{diag}([0.01, 0.01, 0.01])$, $C = \mathrm{diag}([0.02, 0.02, 0.02])$, and
$$A = \begin{bmatrix} 0.2 & 0.3 & 0 \\ 0.2 & 0.1 & 0.1 \\ 0.2 & 0 & 0.4 \end{bmatrix},$$
such that $A(\delta_k) = A + \delta_k$, $B(\delta_k) = B + \delta_k$, and $C(\delta_k) = C + \delta_k$. The system matrices constitute a simplified model of a three-room building in which the states $x_{i,k}$, $i = 1, 2, 3$, denote the room temperatures. The uncertain variable $\delta_k \in \mathbb{R}$ represents modeling errors and losses, e.g., through windows, and $w_k \in \mathbb{R}$ can be interpreted as the outside weather temperature.
To generate random scenarios of the additive disturbance, we built a discrete normal process in which a one-day, hourly-based forecast of the outside weather temperature serves as the nominal scenario and varies within $10\%$ of its nominal value at each sampling time. As for the uncertainty $\delta_k$, we generate at each sampling time a random variable from a normal distribution with mean 0 and variance 1, truncated to a maximal magnitude of 0.01. A sketch of this scenario generation is given below.
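As an illustration of this scenario generation (our own sketch, with a hypothetical nominal temperature profile; truncation is implemented here as clipping):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-day hourly nominal forecast of outside temperature [deg C]
nominal = 10 + 5 * np.sin(2 * np.pi * np.arange(24) / 24)
S = 100  # number of scenarios

# w: additive disturbance scenarios within +/-10% of the nominal profile
w_scenarios = nominal * (1 + rng.uniform(-0.10, 0.10, size=(S, 24)))

# delta: standard normal samples truncated (clipped) to |delta| <= 0.01
delta_scenarios = np.clip(rng.normal(0.0, 1.0, size=(S, 24)), -0.01, 0.01)
```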
The initial state is $[21\ 19\ 23]^\top$, and the objective is to keep the room temperatures within the desired lower bounds $[20.5\ 18.5\ 22.5]^\top$ and upper bounds $[21.5\ 19.5\ 23.5]^\top$ at minimal control input $u_k$. Due to actuator saturation, the control input $u_k$ is also constrained to lie between $-1.5$ [kWh] and $1.5$ [kWh] for all three rooms. The initialization of $\tilde{\mathcal{B}}_j$ for all neighboring agents $j \in \mathcal{N}_i$, as in Step 5 of Algorithm 3, can be done, for instance, by assuming that the initial temperatures of the neighboring rooms are known to all rooms.
Figure 1 shows the dynamically coupled state trajectories of all three rooms. One can clearly see that these trajectories are feasible in a probabilistic sense, since the agent operations remain within the lower and upper bounds, whereas DeSMPC violates the constraints completely; the solution obtained via DeSMPC leaves the feasible region after the first sampling time, and we therefore retain only the remaining results for our discussion. This is a direct result of Theorem 2, by which the solutions obtained via our proposed formulations must be probabilistically feasible, as is apparent in Figure 1 from the trajectories lying on the lower bounds.

7.2. Three-Building (ATES Systems) Case Study

We next simulate the thermal energy demands of three buildings modeled using realistic parameters and actual registered weather data for the city center of Utrecht, The Netherlands, where these buildings are located; each building is equipped with an aquifer thermal energy storage (ATES) system. A simulation study is carried out for one year of actual weather conditions, from March 2011 to March 2012, with weekly time steps and a three-month prediction horizon, in order to meet the coupling constraints between the ATES systems. We refer interested readers to [26] for a detailed description of this case study.
To generate random scenarios of the additive disturbance, we built a discrete normal process in which the nominal scenario is $10\%$ of the amplitude of the energy content in a deterministic ATES system model and varies within $5\%$ of its nominal value at each sampling time. As for the uncertainty $\delta_k$, we generate at each sampling time a random variable from a Gaussian distribution with mean 0 and variance 0.3, truncated to a maximal magnitude of 0.03.
In Figure 2 we show the a-posteriori feasibility validation of the coupling constraints between each agent $i = 1, 2, 3$, and its neighboring agents: (a) shows the results from mid-May to mid-August 2011, (b) presents the results from December 2011 to February 2012, and (c) depicts the results from mid-April to mid-July 2011. As a first desired outcome, one can clearly see in Figure 2 that the constraints are feasible in a probabilistic sense, since the agent operations stay quite close to the upper bounds during the critical time periods, whereas DeSMPC violates the constraints. Strictly speaking, using our proposed framework one can achieve maximal usage of the aquifer (subsurface) for storing thermal energy without affecting the neighboring thermal storage. This is a direct result of Theorem 2, by which the solutions obtained via our proposed formulations must be probabilistically feasible.
It is worth mentioning that both Figure 1 and Figure 2 illustrate our other two main contributions more precisely. (1) The results obtained via CSMPC (blue line) and DSMPC (magenta line) are practically equivalent throughout the simulation studies, as expected from Proposition 1 and Theorem 2. The solutions via DSMPC are in fact slightly more conservative than those via CSMPC, which is a direct consequence of Theorem 2: the violation level of CSMPC is $\varepsilon = 0.05$, leading to the more restrictive $\varepsilon_i = 0.0167$ for each agent. (2) The proposed soft communication scheme yields less conservative solutions, as explicitly derived in Theorem 5; this can be clearly seen in Figure 1 and Figure 2 from the results obtained via DSMPCS$_{0.85}$ (dark green) and DSMPCS$_{0.50}$ (light green). Following Theorem 5, the new violation level using DSMPCS$_{0.85}$ is $\bar{\varepsilon}_i = 0.0231$, and using DSMPCS$_{0.50}$ it is $\bar{\varepsilon}_i = 0.0668$. It is important to notice that the violation level of the global chance constraint increases to $\bar{\varepsilon} = 0.0702$ and $\bar{\varepsilon} = 0.2004$ using DSMPCS$_{0.85}$ and DSMPCS$_{0.50}$, respectively; the short sketch below reproduces these levels from the relation in Theorem 5.
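As a quick numerical check (our own illustration), each agent in this three-agent network has two neighbors, so Theorem 5 gives $\bar{\varepsilon}_i = \varepsilon_i / \tilde{\alpha}^2$ for a common reliability level $\tilde{\alpha}$; the global levels follow by summing over the agents and match the reported values up to rounding:

```python
# eps_bar_i = eps_i / alpha^2 for an agent with two neighbors of equal
# reliability alpha (Theorem 5 with alpha~_i = product of neighbor levels).
eps_i = 0.0167  # per-agent violation level induced by the global eps = 0.05
for alpha in (0.85, 0.50):
    eps_bar_i = eps_i / alpha**2
    print(f"reliability {alpha}: eps_bar_i = {eps_bar_i:.4f}, "
          f"global eps_bar = {3 * eps_bar_i:.4f}")
```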

8. Concluding Remarks and Future Work

In this paper, we presented a rigorous approach to distributed stochastic optimization using the scenario-based approximation for large-scale linear systems with local and common uncertainty sources. The main outcomes of this paper are as follows:
  • Extension of the existing results to quantify the robustness of the resulting solutions for both cases of local and common uncertainties in a distributed stochastic optimization framework using the so-called distributed scenario exchange scheme based on ADMM.
  • A novel inter-agent soft communication scheme is developed to minimize the amount of information exchange between each subsystem.
  • Using a set-based parametrization technique, a novel reliability notion is introduced to quantify, in a probabilistic sense, the level of feasibility of the solutions obtained via the PnP distributed scenario optimization integrated with the so-called soft communication scheme.
  • The theoretical guarantees of the proposed distributed framework coincide with its centralized counterpart.
Potential future research directions are as follows:
  • Following the interpretation of Theorem 2, as the number of agents increases, the violation level of each individual agent must decrease, which may lead to conservative results for each agent, since the required number of samples increases. Addressing this scalability limitation is the subject of our current research.
  • Enhancing the proposed distributed framework to formally address the recursive feasibility and stability of the closed-loop control problems.
  • Analyzing the performance objective(s) by means of optimality of the scenario program solutions in a distributed setting.
  • Investigating the possibility of extending our proposed frameworks to the fault detection and isolation problems following the results in [32].

Author Contributions

Conceptualization, V.R.; Data curation, V.R.; Formal analysis, V.R.; Funding acquisition, T.K.; Investigation, V.R.; Methodology, V.R.; Project administration, T.K.; Resources, T.K.; Software, V.R.; Supervision, T.K.; Validation, V.R.; Visualization, V.R.; Writing—original draft, V.R.; Writing—review & editing, V.R. & T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Uncertainty Reduction in Smart Energy Systems (URSES) research program funded by the Dutch organization for scientific research (NWO) and Shell under the project Aquifer Thermal Energy Storage Smart Grids (ATES-SG) with grant number 408-13-030.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Given Assumption 3 and following the proposed decomposition technique in Section 3, any optimizer $v_i^*$ of each subsystem yields a feasible pair of state and control input variables $\{x_i^*, u_i^*\} \in \mathbb{X}_i \times \mathbb{U}_i$ for that subsystem, where $\mathbb{X}_i = \mathcal{X}_i^T$ and $\mathbb{U}_i = \mathcal{U}_i^T$. Therefore, the collection of optimizers $v^* = \mathrm{col}_{i\in\mathcal{N}}(v_i^*)$ yields the collection of feasible pairs of state and control input variables of the subsystems:
$$\left\{x^* = \mathrm{col}_{i\in\mathcal{N}}(x_i^*),\ u^* = \mathrm{col}_{i\in\mathcal{N}}(u_i^*)\right\} \in \mathbb{X} \times \mathbb{U},$$
where $\mathbb{X} := \mathcal{X}^T = \prod_{i\in\mathcal{N}} \mathbb{X}_i$ and $\mathbb{U} := \mathcal{U}^T = \prod_{i\in\mathcal{N}} \mathbb{U}_i$, which eventually yields a feasible point for the optimization problem in (6). It is straightforward to use the above relation to show that any optimizer $v^*$ of the optimization problem in (6) also yields a feasible solution for the proposed optimization problem in (9). It then remains to show that both optimization problems have the same performance index in terms of their objective function values. Due to the proposed decomposition technique and under Assumption 3, the objective function in (6) can be formulated as a sum of components, each representing the objective function of one subsystem $i \in \mathcal{N}$, and thus $J(x^*, u^*) = \sum_{i\in\mathcal{N}} J_i(x_i^*, u_i^*)$. The proof is completed.
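As a minimal illustration of this additive structure (our own example, assuming quadratic stage costs with block-diagonal weights $Q = \mathrm{diag}(Q_1, Q_2)$ and $R = \mathrm{diag}(R_1, R_2)$), for $\mathcal{N} = \{1, 2\}$:
$$J(x^*, u^*) = \sum_{k\in\mathcal{T}} \left( \|x_k^*\|_Q^2 + \|u_k^*\|_R^2 \right) = \sum_{i=1}^{2} \sum_{k\in\mathcal{T}} \left( \|x_{i,k}^*\|_{Q_i}^2 + \|u_{i,k}^*\|_{R_i}^2 \right) = J_1(x_1^*, u_1^*) + J_2(x_2^*, u_2^*),$$
i.e., the decomposition is exact precisely because the objective contains no cross-coupling terms between subsystems.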

Appendix B. Proof of Theorem 2

Define $\xi_{i,k} := (w_{i,k}, \delta_{i,k}, \{x_{j,k}\}_{j\in\mathcal{N}_i})$ to be a concatenated uncertain variable for each agent $i \in \mathcal{N}$, such that $\xi_i := \{\xi_{i,k}\}_{k\in\mathcal{T}}$ is defined on the probability space $(\Xi_i, \mathcal{B}(\Xi_i), \mathbb{P}_{\xi_i})$, where $\mathbb{P}_{\xi_i}$ is a probability measure defined over $\Xi_i := \mathcal{W}_i \times \Delta_i \times \prod_{j\in\mathcal{N}_i} \mathbb{X}_j$ and $\mathcal{B}(\cdot)$ denotes a Borel $\sigma$-algebra. Following this definition, it is straightforward to consider $\xi = \mathrm{col}_{i\in\mathcal{N}}(\xi_i)$ and $\Xi = \prod_{i\in\mathcal{N}} \Xi_i$. Consider also the sample set $\mathcal{S}_{\xi_i} = \mathcal{S}_{w_i} \times \mathcal{S}_{\delta_i} \times \prod_{j\in\mathcal{N}_i} \mathcal{S}_{x_j}$ for each agent $i \in \mathcal{N}$, such that $\mathcal{S}_\xi = \prod_{i\in\mathcal{N}} \mathcal{S}_{\xi_i}$.
Consider now $v^*$ to be the optimizer of the centralized scenario MPC problem (6) and define $\mathrm{Vio}(v^*)$ to be the violation probability of the chance constraint (5c) as follows:
$$\mathrm{Vio}(v^*) := \mathbb{P}_\xi\left[\xi \in \Xi : g(v^*, \xi) \notin \mathbb{X}\right],$$
where $g(v^*, \xi)$ represents the predicted state trajectory of the large-scale system dynamics (1) in a more compact form. In particular, the violation probability can be written precisely as $\mathrm{Vio}(v^*) := \mathbb{P}\big[w \in \mathcal{W}, \delta \in \Delta : x_{k+1} = A_{cl}(\delta_k)\,x_k + B(\delta_k)\,v_k^* + C(\delta_k)\,w_k \notin \mathcal{X},\ k \in \mathcal{T}^+ \,\big|\, x_k = x_0\big]$, where $A_{cl}(\delta_k) = A(\delta_k) + B(\delta_k)K$. Following Theorem 1, we have
$$\mathbb{P}^S_\xi\left[\mathcal{S}_\xi \in \Xi^S : \mathrm{Vio}(v^*) \leq \varepsilon\right] \geq 1 - \beta. \quad (A1)$$
Given Proposition 1, the problem (9) is an exact decomposition of the problem (6). This yields the following equivalence relations:
$$v^* := \mathrm{col}_{i\in\mathcal{N}}(v_i^*), \qquad \mathbb{X} := \prod_{i\in\mathcal{N}} \mathbb{X}_i, \quad (A2)$$
where $v^*$ is the optimizer of the problem (6) and $v_i^*$ is the optimizer of each agent $i \in \mathcal{N}$ obtained via the problem (9). It is thus necessary to prove that the statement (A1) still holds under the relations (A2). We break the proof into two steps: we first show the result for each agent $i \in \mathcal{N}$, and then extend it to the large-scale scenario optimization problem (6).
(1) Define $\mathrm{Vio}(v_i^*)$ to be the violation probability of each agent $i \in \mathcal{N}$ for the chance constraint (10) as follows:
$$\mathrm{Vio}(v_i^*) := \mathbb{P}_{\xi_i}\left[\xi_i \in \Xi_i : g_i(v_i^*, \xi_i) \notin \mathbb{X}_i\right],$$
where $g_i(v_i^*, \xi_i)$ corresponds to the predicted state trajectory of the subsystem dynamics (8) of each agent $i \in \mathcal{N}$. Applying the existing results of Theorem 1 to each agent $i \in \mathcal{N}$, we have
$$\mathbb{P}^{S_i}_{\xi_i}\left[\mathcal{S}_{\xi_i} \in \Xi_i^{S_i} : \mathrm{Vio}(v_i^*) \leq \varepsilon_i\right] \geq 1 - \beta_i.$$
(2) Following the relations (A2), it is easy to rewrite $\mathrm{Vio}(v^*)$ in the following form:
$$\mathrm{Vio}(v^*) = \mathbb{P}_\xi\left[\xi \in \Xi : g(v^*, \xi) \notin \prod_{i\in\mathcal{N}} \mathbb{X}_i\right].$$
It is then sufficient to show that, for $S = \max_{i\in\mathcal{N}} S_i$:
$$\mathbb{P}^S_\xi\left[\mathcal{S}_\xi \in \Xi^S : \mathrm{Vio}(v^*) \geq \varepsilon\right] \leq \beta,$$
where $\varepsilon = \sum_{i\in\mathcal{N}} \varepsilon_i \in (0,1)$ and $\beta = \sum_{i\in\mathcal{N}} \beta_i \in (0,1)$. Hence
$$\mathrm{Vio}(v^*) = \mathbb{P}_\xi\left[\xi \in \Xi : g(v^*,\xi) \notin \prod_{i\in\mathcal{N}}\mathbb{X}_i\right] = \mathbb{P}_\xi\left[\xi \in \Xi : \exists\, i \in \mathcal{N},\ g_i(v_i^*,\xi_i) \notin \mathbb{X}_i\right] = \mathbb{P}_\xi\left[\bigcup_{i\in\mathcal{N}}\left\{\xi_i \in \Xi_i : g_i(v_i^*,\xi_i) \notin \mathbb{X}_i\right\}\right] \leq \sum_{i\in\mathcal{N}} \mathbb{P}_{\xi_i}\left[\xi_i \in \Xi_i : g_i(v_i^*,\xi_i) \notin \mathbb{X}_i\right] = \sum_{i\in\mathcal{N}} \mathrm{Vio}(v_i^*),$$
where the inequality is the union bound (subadditivity of the probability measure). The last statement implies that $\mathrm{Vio}(v^*) \leq \sum_{i\in\mathcal{N}} \mathrm{Vio}(v_i^*)$, and thus
$$\mathbb{P}^S_\xi\left[\mathcal{S}_\xi \in \Xi^S : \mathrm{Vio}(v^*) \geq \varepsilon\right] \leq \mathbb{P}^S_\xi\left[\mathcal{S}_\xi \in \Xi^S : \sum_{i\in\mathcal{N}} \mathrm{Vio}(v_i^*) \geq \sum_{i\in\mathcal{N}} \varepsilon_i\right] \leq \mathbb{P}^S_\xi\left[\bigcup_{i\in\mathcal{N}}\left\{\mathcal{S}_{\xi_i} \in \Xi_i^{S_i} : \mathrm{Vio}(v_i^*) \geq \varepsilon_i\right\}\right] \leq \sum_{i\in\mathcal{N}} \mathbb{P}^{S_i}_{\xi_i}\left[\mathcal{S}_{\xi_i} \in \Xi_i^{S_i} : \mathrm{Vio}(v_i^*) \geq \varepsilon_i\right] \leq \sum_{i\in\mathcal{N}} \beta_i = \beta.$$
The obtained bounds in the above procedure are the desired assertions stated in the theorem. The proof is completed by noting that the feasible set $\mathbb{X} = \prod_{i\in\mathcal{N}} \mathbb{X}_i$ of the problems (6) and (9) has a nonempty interior, and thus admits at least one feasible solution $v^* = \mathrm{col}_{i\in\mathcal{N}}(v_i^*)$.
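As a worked instance of these bounds (our illustration, using the levels of the numerical study), take $N = 3$ agents with a uniform split $\varepsilon_i = \varepsilon/3$ and $\beta_i = \beta/3$:
$$\varepsilon = 0.05 \;\Rightarrow\; \varepsilon_i \approx 0.0167, \qquad \mathbb{P}^S_\xi\left[\mathcal{S}_\xi \in \Xi^S : \mathrm{Vio}(v^*) \geq \varepsilon\right] \leq \beta_1 + \beta_2 + \beta_3 = \beta,$$
so each agent must enforce its local chance constraint at the tighter level $\varepsilon_i$, which is exactly the source of the mild extra conservatism of DSMPC observed in Section 7.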

Appendix C. Proof of Theorem 4

Equation (19) is a direct result of the scenario approach theory in [33], if $\tilde{\beta}_j$ is chosen such that
$$\binom{\tilde{S}_j}{n_j}\,\tilde{\alpha}_j^{\,\tilde{S}_j - n_j} \leq \tilde{\beta}_j.$$
Considering the worst-case equality in the above relation, some algebraic manipulation yields the stated assertion. The proof is completed.
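Numerically, and assuming the reconstruction of the relation above, $\tilde{\beta}_j$ can be evaluated directly; the values of $\tilde{S}_j$, $n_j$, and $\tilde{\alpha}_j$ below are illustrative assumptions, not taken from the paper:

```python
from math import comb

# Confidence beta~_j with which the set B~_j is reliable at level alpha~_j,
# given S~_j scenarios and dimension n_j of the neighboring state.
def beta_tilde(S, n, alpha):
    return comb(S, n) * alpha ** (S - n)

print(beta_tilde(S=500, n=3, alpha=0.85))  # on the order of 1e-28
```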

Appendix D. Proof of Theorem 5

The proof consists of two main steps. We first provide an analytical expression for the robustness of the solution of agent $i$ by taking into account the effect of just one neighboring agent $j \in \mathcal{N}_i$, and then extend the obtained results to the case where agent $i$ interacts with more neighboring agents, e.g., all $j \in \mathcal{N}_i$.
Following Remark 1, we have the following updated situation:
$$\alpha_i \leq \mathbb{P}\left[x_i \in \mathbb{X}_i,\ x_j \in \tilde{\mathcal{B}}_j\right],$$
which is a joint probability of $x_i \in \mathbb{X}_i$ and $x_j \in \tilde{\mathcal{B}}_j$. Such a joint probability can be equivalently written as a joint cumulative distribution function (CDF):
$$\alpha_i \leq \mathbb{P}\left[x_i \in \mathbb{X}_i,\ x_j \in \tilde{\mathcal{B}}_j\right] = \int_{\mathbb{X}_i}\int_{\tilde{\mathcal{B}}_j} p(x_i, x_j)\,\mathrm{d}x_i\,\mathrm{d}x_j = F_{x_i,x_j}\big(\mathbb{X}_i, \tilde{\mathcal{B}}_j\big), \quad (A7)$$
where $p(x_i, x_j)$ is the joint probability density function (PDF) of $x_i$ and $x_j$. Our goal is to calculate
$$\mathbb{P}\left[x_i \in \mathbb{X}_i\right] = \int_{\mathbb{X}_i} p(x_i)\,\mathrm{d}x_i = F_{x_i}(\mathbb{X}_i),$$
where $p(x_i)$ is the PDF of $x_i$. To transform the joint CDF into the marginal CDF of $x_i$, one can take the limit of the joint CDF as $\tilde{\mathcal{B}}_j$ approaches $\mathbb{R}^{n_j}$:
$$\mathbb{P}\left[x_i \in \mathbb{X}_i\right] = F_{x_i}(\mathbb{X}_i) = \lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_i,x_j}\big(\mathbb{X}_i, \tilde{\mathcal{B}}_j\big) = \lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_i|x_j}\big(\mathbb{X}_i \,\big|\, \tilde{\mathcal{B}}_j\big)\,F_{x_j}\big(\tilde{\mathcal{B}}_j\big) = F_{x_i}(\mathbb{X}_i) \lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_j}\big(\tilde{\mathcal{B}}_j\big), \quad (A8)$$
where the last equality is due to the fact that $x_i$ and $x_j$ are conditionally independent.
To determine $\lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_j}(\tilde{\mathcal{B}}_j)$, one can calculate
$$\lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_j}\big(\tilde{\mathcal{B}}_j\big) = \int_{\mathbb{R}^{n_j}} p(x_j)\,\mathrm{d}x_j = \int_{\mathbb{R}^{n_j}\setminus\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j + \int_{\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j = \mathbb{P}\big[x_j \notin \tilde{\mathcal{B}}_j\big] + \mathbb{P}\big[x_j \in \tilde{\mathcal{B}}_j\big] = (1 - \tilde{\alpha}_j) + \tilde{\alpha}_j = 1, \quad (A9)$$
where $p(x_j)$ is the PDF of $x_j$, and the last equality is a direct result of Theorem 4.
We now put all the steps together as follows:
$$\alpha_i \leq \mathbb{P}\left[x_i \in \mathbb{X}_i,\ x_j \in \tilde{\mathcal{B}}_j\right] = F_{x_i,x_j}\big(\mathbb{X}_i, \tilde{\mathcal{B}}_j\big) \leq F_{x_i}(\mathbb{X}_i) \lim_{\tilde{\mathcal{B}}_j \to \mathbb{R}^{n_j}} F_{x_j}\big(\tilde{\mathcal{B}}_j\big) = \mathbb{P}\left[x_i \in \mathbb{X}_i\right]\left(\int_{\mathbb{R}^{n_j}\setminus\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j + \int_{\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j\right) \leq \int_{\mathbb{R}^{n_j}\setminus\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j + \mathbb{P}\left[x_i \in \mathbb{X}_i\right]\int_{\tilde{\mathcal{B}}_j} p(x_j)\,\mathrm{d}x_j = (1 - \tilde{\alpha}_j) + \tilde{\alpha}_j\,\mathbb{P}\left[x_i \in \mathbb{X}_i\right],$$
where the first inequality and equality are due to (A7), the second inequality is due to (A8), the second and last equalities are due to (A9), and the last inequality considers the worst-case situation, i.e., $\mathbb{P}\big[x_i \in \mathbb{X}_i \,\big|\, x_j \notin \tilde{\mathcal{B}}_j\big] = 1$.
By rearranging the last expression in the above result, we obtain
$$\frac{\alpha_i - (1 - \tilde{\alpha}_j)}{\tilde{\alpha}_j} = 1 - \frac{1 - \alpha_i}{\tilde{\alpha}_j} = \bar{\alpha}_i \leq \mathbb{P}\left[x_i \in \mathbb{X}_i\right]. \quad (A10)$$
This completes the proof of the first part. We now need to show the effect of having more than one neighboring agent. To this end, the most straightforward step for extending the current results is to use the fact that all neighboring agents are independent of each other. We can therefore apply the previous results to the new situation where agent $i$, with probabilistic level of feasibility $\bar{\alpha}_i$, has another neighboring agent $\nu \in \mathcal{N}_i$ with level of reliability $\tilde{\alpha}_\nu$ of $\tilde{\mathcal{B}}_\nu$. By using (A10), we have the following relations for $j, \nu \in \mathcal{N}_i$:
$$1 - \frac{1 - \bar{\alpha}_i}{\tilde{\alpha}_\nu} = 1 - \frac{1 - \left(1 - \frac{1 - \alpha_i}{\tilde{\alpha}_j}\right)}{\tilde{\alpha}_\nu} = 1 - \frac{1 - \alpha_i}{\tilde{\alpha}_j\,\tilde{\alpha}_\nu} \leq \mathbb{P}\left[x_i \in \mathbb{X}_i\right]. \quad (A11)$$
By continuing the same argument for all neighboring agents, one obtains $\bar{\alpha}_i = 1 - \frac{1 - \alpha_i}{\tilde{\alpha}_i} \leq \mathbb{P}\left[x_i \in \mathbb{X}_i\right]$ with $\tilde{\alpha}_i = \tilde{\alpha}_1 \cdots \tilde{\alpha}_j \cdots \tilde{\alpha}_\nu \cdots \tilde{\alpha}_{|\mathcal{N}_i|} = \prod_{j\in\mathcal{N}_i} \tilde{\alpha}_j$. The proof is completed.

References

  1. Kouvaritakis, B.; Cannon, M.; Tsachouridis, V. Recent developments in stochastic MPC and sustainable development. Annu. Rev. Control 2004, 28, 23–35.
  2. Primbs, J.A. A soft constraint approach to stochastic receding horizon control. In Proceedings of the Conference on Decision and Control (CDC), New Orleans, LA, USA, 12–14 December 2007; pp. 4797–4802.
  3. Bemporad, A.; Morari, M. Robust model predictive control: A survey. In Robustness in Identification and Control; Springer: Berlin/Heidelberg, Germany, 1999; pp. 207–226.
  4. Lavaei, J.; Aghdam, A.G. Control of continuous-time LTI systems by means of structurally constrained controllers. Automatica 2008, 44, 141–148.
  5. Richards, A.; How, J.P. Robust distributed model predictive control. Int. J. Control 2007, 80, 1517–1531.
  6. Cannon, M.; Kouvaritakis, B.; Rakovic, S.V.; Cheng, Q. Stochastic tubes in model predictive control with probabilistic constraints. IEEE Trans. Autom. Control 2011, 56, 194–200.
  7. Riverso, S.; Farina, M.; Ferrari-Trecate, G. Plug-and-play decentralized model predictive control for linear systems. IEEE Trans. Autom. Control 2013, 58, 2608–2614.
  8. Dai, L.; Xia, Y.; Gao, Y.; Cannon, M. Distributed Stochastic MPC of Linear Systems with Additive Uncertainty and Coupled Probabilistic Constraints. IEEE Trans. Autom. Control 2016, 62, 3474–3481.
  9. Hokayem, P.; Cinquemani, E.; Chatterjee, D.; Ramponi, F.; Lygeros, J. Stochastic receding horizon control with output feedback and bounded controls. Automatica 2012, 48, 77–88.
  10. Prandini, M.; Garatti, S.; Lygeros, J. A randomized approach to stochastic model predictive control. In Proceedings of the Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 7315–7320.
  11. Calafiore, G.C.; Fagiano, L. Stochastic model predictive control of LPV systems via scenario optimization. Automatica 2013, 49, 1861–1866.
  12. Schildbach, G.; Morari, M. Scenario MPC for linear time-varying systems with individual chance constraints. In Proceedings of the American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; pp. 415–421.
  13. Lorenzen, M.; Allgöwer, F.; Dabbene, F.; Tempo, R. Scenario-based Stochastic MPC with guaranteed recursive feasibility. In Proceedings of the Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 4958–4963.
  14. Margellos, K.; Goulart, P.; Lygeros, J. On the road between robust optimization and the scenario approach for chance constrained optimization problems. IEEE Trans. Autom. Control 2014, 59, 2258–2263.
  15. Zhang, X.; Margellos, K.; Goulart, P.; Lygeros, J. Stochastic model predictive control using a combination of randomized and robust optimization. In Proceedings of the Conference on Decision and Control (CDC), Florence, Italy, 10–13 December 2013; pp. 7740–7745.
  16. Rostampour, V.; Keviczky, T. Robust randomized model predictive control for energy balance in smart thermal grids. In Proceedings of the European Control Conference (ECC), Aalborg, Denmark, 29 June–1 July 2016; pp. 1201–1208.
  17. Campi, M.C.; Garatti, S. The exact feasibility of randomized solutions of uncertain convex programs. SIAM J. Optim. 2008, 19, 1211–1230.
  18. Long, Y.; Liu, S.; Xie, L.; Johansson, K.H. A scenario-based distributed stochastic MPC for building temperature regulation. In Proceedings of the Conference on Automation Science and Engineering (CASE), Taipei, Taiwan, 18–22 August 2014; pp. 1091–1096.
  19. Liu, S.; Sadowska, A.; Hellendoorn, H.; De Schutter, B. Scenario-based Distributed Model Predictive Control for freeway networks. In Proceedings of the International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1779–1784.
  20. Calafiore, G.C.; Lyons, D. Random convex programs for distributed multi-agent consensus. In Proceedings of the European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013; pp. 250–255.
  21. Papaefthymiou, G.; Klockl, B. MCMC for wind power simulation. IEEE Trans. Energy Convers. 2008, 23, 234–240.
  22. Schildbach, G.; Fagiano, L.; Frei, C.; Morari, M. The scenario approach for stochastic model predictive control with bounds on closed-loop constraint violations. Automatica 2014, 50, 3009–3018.
  23. Rostampour, V.; Esfahani, P.M.; Keviczky, T. Stochastic nonlinear model predictive control of an uncertain batch polymerization reactor. In Proceedings of the IFAC Conference on Nonlinear Model Predictive Control (NMPC), Seville, Spain, 17–20 September 2015; Volume 48, pp. 540–545.
  24. Bertsekas, D.P.; Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods; Prentice Hall: Englewood Cliffs, NJ, USA, 1989; Volume 23.
  25. Slater, M. Lagrange Multipliers Revisited; Technical Report No. 80; Cowles Foundation for Research in Economics, Yale University: New Haven, CT, USA, 1959.
  26. Rostampour, V.; Keviczky, T. Probabilistic Energy Management for Building Climate Comfort in Smart Thermal Grids with Seasonal Storage Systems. IEEE Trans. Smart Grid 2018.
  27. Shi, W.; Ling, Q.; Yuan, K.; Wu, G.; Yin, W. On the Linear Convergence of the ADMM in Decentralized Consensus Optimization. IEEE Trans. Signal Process. 2014, 62, 1750–1761.
  28. Rostampour, V.; ter Haar, O.; Keviczky, T. Distributed Stochastic Reserve Scheduling in AC Power Systems With Uncertain Generation. IEEE Trans. Power Syst. 2017.
  29. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  30. He, B.; Yuan, X. On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numerische Mathematik 2015, 130, 567–577.
  31. Löfberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the International Symposium on Computer Aided Control Systems Design, New Orleans, LA, USA, 2–4 September 2004; pp. 284–289.
  32. Rostampour, V.; Ferrari, R.; Keviczky, T. A set based probabilistic approach to threshold design for optimal fault detection. In Proceedings of the American Control Conference (ACC), Seattle, WA, USA, 24–26 May 2017; pp. 5422–5429.
  33. Calafiore, G.C.; Campi, M.C. The scenario approach to robust control design. IEEE Trans. Autom. Control 2006, 51, 742–753.
Figure 1. A-posteriori feasibility validation of the obtained results. The “red”, “blue”, “magenta”, “dark green”, and “light green” lines correspond to the results obtained via DeSMPC, CSMPC, DSMPC, DSMPCS$_{0.85}$, and DSMPCS$_{0.50}$, respectively. The “black” lines indicate the bound values. The top, middle, and bottom plots show the temperatures of rooms 1, 2, and 3, respectively.
Figure 2. A-posteriori feasibility validation of the obtained results. The “red”, “blue”, “magenta”, “solid green”, and “dashed green” lines correspond to the results obtained via DeSMPC, CSMPC, DSMPC, DSMPCS$_{0.85}$, and DSMPCS$_{0.50}$, respectively. The “black” lines indicate the upper bound values.
