1. Introduction
The “bottleneck problem” proposed by Bellman [1] initiated the formulation of continuous-time linear programming problems. These kinds of problems, which involve integrals, have received considerable attention for a long time. We denote by $L^{2}_{+}[0,T]$ the space of all nonnegative and square-integrable real-valued functions defined on the time interval $[0,T]$. Tyndall [2,3] studied this problem as follows:
where $a_{j}$ and $c_{i}$ are nonnegative constants for $j = 1, \dots, q$ and $i = 1, \dots, p$. Levinson [4] generalized the results of Tyndall by replacing the constants $a_{j}$ and $c_{i}$ with nonnegative real-valued functions $a_{j}(t)$ and $c_{i}(t)$ defined on $[0,T]$, for $j = 1, \dots, q$ and $i = 1, \dots, p$. In other words, the following problem was studied:
This more complicated problem was solved numerically by Wu [5,6], in which the functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ were assumed to be piecewise continuous.
In the real world, the known data $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ that appear in the continuous-time linear programming problem may be imprecise or uncertain; that is, the known data may be subject to perturbation. Developing numerical methodology is an important issue in the study of the different types of optimization problems. Therefore, Wu [7] developed a methodology for obtaining the so-called robust solutions of uncertain continuous-time linear programming problems with time-dependent matrices, in which the uncertainties of the known data were assumed to fall into pre-determined compact intervals. More precisely, the following problem was studied by Wu [7]:
where the uncertain functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ were assumed to fall into compact intervals. For example, the compact intervals for $a_{j}$ are taken to be
$$\left[ a_{j}^{(0)}(t) - \hat{a}_{j}(t),\; a_{j}^{(0)}(t) + \hat{a}_{j}(t) \right],$$
where the $a_{j}^{(0)}$ are the known nominal functions of the $a_{j}$, and the $\hat{a}_{j}$ are the uncertainties, which are nonnegative.
In this paper, we shall propose an extended form of the robust counterpart by using a concept similar to the one introduced by Bertsimas and Sim [8]. The basic idea is described below. Recall that $I_{i}^{(B)}$ denotes the set of indices $j$ for which $B_{ij}$ is uncertain, while $B_{ij}$ is certain for $j \notin I_{i}^{(B)}$. Although the data $B_{ij}$ for $j \in I_{i}^{(B)}$ are nominally uncertain, in a given problem some of the $B_{ij}$ with $j \in I_{i}^{(B)}$ may still remain certain (i.e., unchanged). Given any fixed $i$, let $|I_{i}^{(B)}|$ denote the number of indices in the set $I_{i}^{(B)}$. In a real situation, we may only know that the number of data $B_{ij}$, $j \in I_{i}^{(B)}$, that are actually subject to uncertainty is $\Gamma_{i}^{(B)}$; we are not able to identify the exact uncertain indices. In this case, we need to consider all subsets $S$ of $I_{i}^{(B)}$ with $|S| = \Gamma_{i}^{(B)}$, where $|S|$ denotes the number of elements in the set $S$. The integer $\Gamma_{i}^{(B)}$ can be regarded as the robustness with respect to the uncertain functions $B_{ij}$ when $i$ is fixed. The problem studied in Wu [7] implicitly assumes $\Gamma_{i}^{(B)} = |I_{i}^{(B)}|$ (and similarly for the other uncertain data). In other words, the problem studied in this paper is indeed an extended form of the problem formulated in Wu [7]. This kind of extended problem is more complicated and harder to solve. The purpose of this paper is to develop a computational procedure to solve this new kind of optimization problem.
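To make the budget idea concrete, the following small sketch enumerates, for a single constraint row, the worst case over all choices of exactly $\Gamma$ deviating indices. The names (`nominal`, `deviation`, and so on) are illustrative placeholders rather than the paper's notation, and the brute-force enumeration is only meant to mirror the definition; the point of the paper is precisely that this inner maximization can instead be absorbed by linear programming duality.

```python
from itertools import combinations

def worst_case_load(nominal, deviation, z, uncertain, gamma):
    """Worst-case constraint load for one row, taken over all choices of
    exactly `gamma` indices (from the uncertain index set) whose data are
    allowed to deviate from their nominal values."""
    base = sum(nominal[j] * z[j] for j in range(len(z)))
    worst = 0.0
    for subset in combinations(uncertain, gamma):
        worst = max(worst, sum(deviation[j] * z[j] for j in subset))
    return base + worst

# Hypothetical example: three uncertain coefficients, robustness budget 2.
print(worst_case_load([1.0, 2.0, 0.5], [0.2, 0.4, 0.1],
                      [1.0, 1.0, 2.0], [0, 1, 2], 2))   # -> 4.6
```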
Many theoretical results for continuous-time linear programming problems have been obtained by Meidan and Perold [9], Papageorgiou [10], and Schechter [11]. A subclass of continuous-time linear programming problems, called separated continuous-time linear programming problems, has been studied by Anderson et al. [12,13,14], Fleischer and Sethuraman [15] and Pullan [16,17,18,19,20]. This special type of problem is given below:
where $G$ and $H$ are constant matrices of appropriate dimensions; the data functions are bounded and measurable on $[0,T]$; and the state trajectories are absolutely continuous. This problem can be used to model job-shop scheduling problems, as discussed in Anderson et al. ([12], p. 758). On the other hand, a simplex-like algorithm has also been proposed by Weiss [21] to solve this separated continuous-time linear programming problem.
The vectorial form of the linear type of continuous-time linear programming problems is written as follows:
In general, Farr and Hanson [22,23], Grinold [24,25], Hanson and Mond [26], Reiland [27,28], Reiland and Hanson [29] and Singh [30] studied the nonlinear type of continuous-time optimization problems. More precisely, the nonlinear problem is formulated as follows:
where the integrand and constraint functions are defined for $t \in [0,T]$; the right-hand side is an $m$-dimensional vector-valued function defined on $[0,T]$; the decision is an $n$-dimensional bounded and measurable vector-valued function defined on $[0,T]$; and the time-dependent matrices have entries that are bounded and measurable on $[0,T]$. In particular, when we take the integrand and constraint functions to be linear, we see that the nonlinear type covers the linear type.
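As a sketch of this specialization, and assuming the common format in which the nonlinear problem maximizes $\int_0^T \phi(z(t),t)\,dt$ subject to constraints built from functions $f$ and $g$ (the symbols $\phi$, $f$ and $g$ are our labels here, since the extracted formulation is not shown), the linear case arises from the following choices:

```latex
% Taking the integrand and constraint functions to be linear in z,
\[
  \phi(z(t),t) = a^{\top}(t)\,z(t), \qquad
  f(z(t),t)   = B(t)\,z(t),        \qquad
  g(z(s),s)   = K(\cdot,s)\,z(s),
\]
% the nonlinear problem reduces to the linear problem (CLP).
```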
Zalmai [31,32,33,34] investigated continuous-time fractional programming problems. Those articles presented theoretical results without suggesting practical numerical methods. On the other hand, many different numerical methods for solving the continuous-time linear fractional programming problem were developed by Wu [35] and by Wen and Wu [36,37,38]. More precisely, this problem is formulated as follows:
where the data functions are nonnegative and $B$ and $K$ are nonnegative constant matrices.
Optimization problems that involve uncertain data are an attractive research topic. Stochastic optimization was first introduced by Dantzig [39], in which probability theory is invoked to model uncertain data. The technique of robust optimization suggests another methodology for solving optimization problems with uncertain data. Ben-Tal and Nemirovski [40,41] and El Ghaoui [42,43] independently proposed concepts for studying robust optimization. For the main articles on this topic, one can also refer to Averbakh and Zhao [44], Ben-Tal et al. [45], Bertsimas et al. [8,46,47], Chen et al. [48], Erdoǧan and Iyengar [49], and Zhang [50]. In this paper, we propose an extended form of the robust counterpart of the continuous-time linear programming problem, and we develop a practical computational procedure to solve this quite complicated problem.
In Section 2, we introduce an extended form of the robust counterpart of a continuous-time linear programming problem using a concept similar to that of Bertsimas and Sim [8]. This extended form of the robust counterpart is then converted into a traditional form of continuous-time linear programming problem. In Section 3, in order to solve the primal problem obtained in Section 2, we formulate a dual problem by introducing two bilinear forms, inspired by the concept proposed by Anderson and Nash [51]. Under this formulation, the weak duality theorem can be established. In Section 4, the discretized version of the transformed continuous-time linear programming problem is proposed; this discretized problem is a large-scale linear programming problem. In order to estimate the error bound, a dual of the discretized problem is also formulated, and the optimal solutions obtained from the discretized problem are used to construct feasible solutions of the original continuous-time linear programming problem. In Section 5, an analytic formula for the error bound is derived, which yields the $\epsilon$-optimal solutions. In Section 6, the weak convergence properties of the approximate solutions are studied; these are also used to prove the strong duality theorem. In the final Section 7, based on the previous results, we design a computational procedure.
2. Robust Continuous-Time Linear Programming Problems
We consider the following continuous-time linear programming problem:
$$\begin{aligned}
(\mathrm{CLP})\qquad \max\ & \int_{0}^{T} \sum_{j=1}^{q} a_{j}(t)\, z_{j}(t)\, dt \\
\text{subject to}\ & \sum_{j=1}^{q} B_{ij}(t)\, z_{j}(t) \le c_{i}(t) + \int_{0}^{t} \sum_{j=1}^{q} K_{ij}(t,s)\, z_{j}(s)\, ds
\quad \text{for } i = 1, \dots, p \text{ and } t \in [0,T], \\
& z_{j} \in L^{2}_{+}[0,T] \quad \text{for } j = 1, \dots, q,
\end{aligned}$$
where $a_{j}$ and $c_{i}$ are assumed to be nonnegative real-valued functions defined on $[0,T]$ for $j = 1, \dots, q$ and $i = 1, \dots, p$. We also assume that some of the functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ are subject to pointwise uncertainty. This means that, given each fixed $t \in [0,T]$ and each fixed $(t,s)$, the uncertain data $a_{j}(t)$, $c_{i}(t)$, $B_{ij}(t)$ and $K_{ij}(t,s)$ fall into corresponding compact intervals. We also allow some of these functions to be certain. In order not to overcomplicate the considered problem, when any one of the functions $a_{j}$, $c_{i}$, $B_{ij}$ or $K_{ij}$ is assumed to be certain, this means that each of its function values is certain for all $t$. However, when any one of these functions is assumed to be uncertain, some of its function values may still be certain for some $t$.
Let $I^{(a)}$ and $I^{(c)}$ be the sets of indices such that the functions $a_{j}$ and $c_{i}$ are uncertain for $j \in I^{(a)}$ and $i \in I^{(c)}$, respectively. For each fixed $i$, let $I_{i}^{(B)}$ and $I_{i}^{(K)}$ be the sets of indices $j$ such that $B_{ij}$ and $K_{ij}$ are uncertain for $j \in I_{i}^{(B)}$ and $j \in I_{i}^{(K)}$, respectively. It is clear that $I_{i}^{(B)}$ and $I_{i}^{(K)}$ are subsets of $\{1, \dots, q\}$.
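As a minimal bookkeeping sketch (with hypothetical flags and names, since the paper's index-set notation is not reproduced here), the index sets can be built directly from markers of which data are uncertain:

```python
q = 4                                        # number of decision functions
a_uncertain = [True, False, True, True]      # flags for a_1, ..., a_q
B_uncertain = {1: [True, True, False, False],   # flags for row i = 1 of B
               2: [False, True, True, True]}    # flags for row i = 2 of B

I_a = [j for j in range(q) if a_uncertain[j]]
I_B = {i: [j for j in range(q) if flags[j]] for i, flags in B_uncertain.items()}

print(I_a)   # indices j such that a_j is uncertain
print(I_B)   # for each row i, the indices j such that B_ij is uncertain
```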
The robust counterpart of problem (CLP) is formulated as follows:
where each piece of uncertain data is assumed to lie in its corresponding uncertainty set. We assume that all of the uncertain functions fall into the compact intervals described below.
For $j \in I^{(a)}$ and $i \in I^{(c)}$, we assume that the uncertain functions $a_{j}$ and $c_{i}$ fall into the compact intervals
$$\left[ a_{j}^{(0)}(t) - \hat{a}_{j}(t),\; a_{j}^{(0)}(t) + \hat{a}_{j}(t) \right] \quad \text{and} \quad \left[ c_{i}^{(0)}(t) - \hat{c}_{i}(t),\; c_{i}^{(0)}(t) + \hat{c}_{i}(t) \right],$$
respectively. The known nominal functions $a_{j}^{(0)}$ and $c_{i}^{(0)}$ of $a_{j}$ and $c_{i}$, respectively, are assumed to be nonnegative. The uncertainties $\hat{a}_{j}$ and $\hat{c}_{i}$ are, of course, nonnegative. For $j \notin I^{(a)}$, we denote by $a_{j} = a_{j}^{(0)}$ the certain functions, with uncertainty $\hat{a}_{j} \equiv 0$. We also denote by $c_{i} = c_{i}^{(0)}$ the certain functions, with uncertainty $\hat{c}_{i} \equiv 0$, for $i \notin I^{(c)}$.
For $i = 1, \dots, p$ with $j \in I_{i}^{(B)}$, and for $i = 1, \dots, p$ with $j \in I_{i}^{(K)}$, we take the following compact intervals:
$$\left[ B_{ij}^{(0)}(t) - \hat{B}_{ij}(t),\; B_{ij}^{(0)}(t) + \hat{B}_{ij}(t) \right] \quad \text{and} \quad \left[ K_{ij}^{(0)}(t,s) - \hat{K}_{ij}(t,s),\; K_{ij}^{(0)}(t,s) + \hat{K}_{ij}(t,s) \right].$$
The known nominal functions $B_{ij}^{(0)}$ and $K_{ij}^{(0)}$ of $B_{ij}$ and $K_{ij}$, respectively, are not necessarily nonnegative. However, the uncertainties $\hat{B}_{ij}$ and $\hat{K}_{ij}$ of $B_{ij}$ and $K_{ij}$, respectively, must be nonnegative. For $j \notin I_{i}^{(B)}$, we denote by $B_{ij} = B_{ij}^{(0)}$ the certain functions, with uncertainty $\hat{B}_{ij} \equiv 0$. We also denote by $K_{ij} = K_{ij}^{(0)}$ the certain functions, with uncertainty $\hat{K}_{ij} \equiv 0$, for $j \notin I_{i}^{(K)}$.
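A small sketch of this interval model of uncertainty may help fix ideas: each uncertain datum is a nominal function together with a nonnegative radius, and a certain datum is simply one whose radius is identically zero. The class and names below are our own illustration, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class UncertainDatum:
    """Pointwise interval uncertainty: value(t) lies in
    [nominal(t) - radius(t), nominal(t) + radius(t)].
    A certain datum is modeled by a zero radius."""
    nominal: callable
    radius: callable      # nonnegative; identically zero for certain data

    def interval(self, t):
        n, r = self.nominal(t), self.radius(t)
        assert r >= 0.0
        return (n - r, n + r)

# Hypothetical example data, not taken from the paper.
a1 = UncertainDatum(nominal=lambda t: 2.0 + t, radius=lambda t: 0.1 * t)
print(a1.interval(0.5))   # -> (2.45, 2.55)
```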
In Wu [7], we derived the robust counterpart of (CLP), which is equivalent to the following problem:

Although $I_{i}^{(B)}$ is the set of indices saying that $B_{ij}$ is uncertain for $j \in I_{i}^{(B)}$ and that $B_{ij}$ is certain for $j \notin I_{i}^{(B)}$, in a concrete problem some of the $B_{ij}$ with $j \in I_{i}^{(B)}$ may still remain certain. In a real situation, given any fixed $i$, we may only know that the number of functions $B_{ij}$, $j \in I_{i}^{(B)}$, that are actually subject to uncertainty is $\Gamma_{i}^{(B)}$. In this case, we need to consider all subsets $S$ of $I_{i}^{(B)}$ with $|S| = \Gamma_{i}^{(B)}$, where $|S|$ denotes the number of elements in the set $S$. The integer $\Gamma_{i}^{(B)}$ can be regarded as the robustness with respect to the uncertain functions $B_{ij}$ when $i$ is fixed. A similar idea was suggested by Bertsimas and Sim [8]. Now, we can consider robustness levels $\Gamma^{(a)}$, $\Gamma^{(c)}$ and $\Gamma_{i}^{(K)}$ for the uncertain functions $a_{j}$, $c_{i}$ and $K_{ij}$, respectively; the corresponding subsets of $I^{(a)}$, $I^{(c)}$ and $I_{i}^{(K)}$ can be realized similarly. In this paper, we assume that $I^{(a)}$, $I^{(c)}$, $I_{i}^{(B)}$ and $I_{i}^{(K)}$ are nonempty sets, which says that the integers $\Gamma^{(a)}$, $\Gamma^{(c)}$, $\Gamma_{i}^{(B)}$ and $\Gamma_{i}^{(K)}$ are nonzero.

As we have observed, the robust counterpart (RCLP1) given above takes the constraints in the worst case; in the general situation, the robust counterpart (RCLP2) formulated below also needs to take the constraints in the worst case. In order to formulate this general type of robust counterpart, for $i = 1, \dots, p$, we consider the following optimization problems:

Since the original problem (CLP) can be rewritten as

the extended form of the robust counterpart of (CLP) is formulated below:

It is obvious that the robust counterpart (RCLP1) is the special case of the extended form (RCLP2) in which $\Gamma^{(a)} = |I^{(a)}|$, $\Gamma^{(c)} = |I^{(c)}|$, $\Gamma_{i}^{(B)} = |I_{i}^{(B)}|$ and $\Gamma_{i}^{(K)} = |I_{i}^{(K)}|$. Since the uncertainties shown in (1)–(3) are regarded as the largest uncertainties, the constraints given in (RCLP2) represent the worst case. The main reason is that, if a feasible solution satisfies the constraints formulated in the worst case, then it satisfies the constraints formulated under any realization of the uncertainties.
The extended form (RCLP2) of the robust counterpart is not easy to solve. In order to transform the robust counterpart (RCLP2) into a solvable form, we apply the strong duality theorem for conventional linear programming problems. We first provide some useful propositions.
Lemma 1. Given and for , suppose that . If is an integer, where , then Proposition 1. Given , we have the following properties.
- (i)
The value is equal to the optimal objective value of the following linear programming problem:where are the decision variables for . Moreover, there is an optimal solution and a subset of with satisfying for and for . - (ii)
For , given any , the value is equal to the optimal objective value of the following linear programming problem:where are the decision variables for , and the optimal objective values depends on . Moreover, there is an optimal solution and a subset of with satisfying for and for . - (iii)
For , given any , the value is equal to the optimal objective value of the following linear programming problem:where are the decision variables for , and the optimal objective values depends on . Moreover, there is an optimal solution and a subset of with satisfying for and for .
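Proposition 1 identifies the worst-case value with the optimum of a small budget LP of Bertsimas–Sim type, which always admits a 0–1 optimal solution. Assuming that reading, the following sketch checks numerically that the LP relaxation and the greedy rule "pick the $\Gamma$ largest terms" agree; the vector `d` stands for the nonnegative coefficients of the inner maximization, and the names are ours.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([0.7, 0.2, 0.9, 0.4])   # nonnegative coefficients (illustrative)
gamma = 2                            # robustness budget

# LP:  max d^T u  s.t.  sum(u) <= gamma,  0 <= u_j <= 1.
res = linprog(c=-d,                                  # scipy minimizes
              A_ub=np.ones((1, len(d))), b_ub=[gamma],
              bounds=[(0.0, 1.0)] * len(d), method="highs")
lp_value = -res.fun

greedy_value = np.sort(d)[::-1][:gamma].sum()        # pick the Gamma largest
print(lp_value, greedy_value)                        # both 1.6
```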
Proof. We just prove part (i), since parts (ii) and (iii) can be obtained similarly. Suppose that an optimal solution of problem (4) is given. Since the coefficients and the decision variables are nonnegative, in order to maximize the objective function, the budget constraint must be satisfied with equality. We claim that there exists an alternative optimal solution whose components all take the value 0 or 1; that is, there is a subset $S$ of the index set with $|S| = \Gamma$ such that the components equal 1 for indices in $S$ and equal 0 for indices outside $S$. Consider the subset of indices on which the components of the given optimal solution are positive, where the number $r$ of such indices is a positive integer. From (7), we see the corresponding restriction on these positive components. We re-arrange the associated finite set of coefficients in ascending order and define a new feasible solution that places the value 1 on the $\Gamma$ indices with the largest coefficients and the value 0 elsewhere; the new solution then has the required 0–1 structure. Next, we claim that the new solution is optimal: comparing objective values shows that the objective value of the new solution is not smaller than the optimal objective value, which says that it is an optimal solution. Therefore, we conclude that the optimal objective value of problem (4) can be obtained by taking $\Gamma$ variables with value 1, which corresponds to the selection of a subset $S$ with $|S| = \Gamma$ and the objective value given by (8). This shows that the optimal objective value of problem (4) is not larger than the worst-case value.

On the other hand, if an optimal selection for the worst-case problem (1) is given, then the corresponding 0–1 vector is a feasible solution of problem (4) whose objective value equals the worst-case value. This shows that the worst-case value is not larger than the optimal objective value of problem (4), and the proof is complete. □
In order to rewrite the robust counterpart (RCLP2) so that it can be solved by a numerical algorithm, we need to consider the dual problems of the linear programming problems (4)–(6). The dual of problem (4) is given by
The optimal objective value is denoted by .
For $i = 1, \dots, p$, given any $t \in [0,T]$, the dual of problem (5) is given by
The optimal objective value is denoted by .
For $i = 1, \dots, p$, given any $t \in [0,T]$, the dual of problem (6) is given by
The optimal objective value is denoted by .
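For the duals just introduced, the corresponding finite-dimensional model is the standard dual of the budget LP. Under the same assumptions as in the previous sketch, the following code solves this dual and recovers the same optimal value; this is the mechanism that lets the inner maximizations be replaced by linear constraints in the transformed problem.

```python
import numpy as np
from scipy.optimize import linprog

# Dual of the budget LP:  min  Gamma*z + sum_j p_j
#                         s.t. z + p_j >= d_j,  z >= 0,  p_j >= 0.
d = np.array([0.7, 0.2, 0.9, 0.4])
gamma = 2
m = len(d)

c = np.concatenate(([gamma], np.ones(m)))            # variables (z, p_1..p_m)
A_ub = np.hstack([-np.ones((m, 1)), -np.eye(m)])     # -z - p_j <= -d_j
res = linprog(c=c, A_ub=A_ub, b_ub=-d,
              bounds=[(0.0, None)] * (m + 1), method="highs")
print(res.fun)   # 1.6, matching the primal optimum by strong LP duality
```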
For $i = 1, \dots, p$, we define the corresponding real-valued functions on $[0,T]$ by

We are going to obtain the equivalent form of the robust counterpart (RCLP2), which turns into a conventional continuous-time linear programming problem.
Proposition 2. The robust counterpart (RCLP2) is equivalent to the following problem: Proof. We consider the primal–dual pair of problems (4) and (9). Since problem (4) is feasible and bounded, using Proposition 1 and the strong duality theorem for linear programming, the dual problem (9) is also feasible and bounded, and the two optimal objective values coincide. Similarly, the corresponding equalities hold for the duals of problems (5) and (6), for any $t \in [0,T]$. Therefore, we conclude that problems (RCLP2) and (RCLP3) are equivalent. This completes the proof.
For an optimization problem (P), we write $V(\mathrm{P})$ to denote the optimal objective value of problem (P). □
Theorem 1. The robust counterpart is equivalent to the following continuous-time linear programming problem: Proof. Let be an optimal solution of problem (RCLP3). Given this , the optimal solutions of problems , and are given below:
Let an optimal solution of the first dual problem be given; then the corresponding inequality holds. For $i = 1, \dots, p$, given any $t \in [0,T]$, let an optimal solution of the second dual problem be given, with components indexed by the relevant index set; then the corresponding inequality holds. Similarly, for $i = 1, \dots, p$, given any $t \in [0,T]$, let an optimal solution of the third dual problem be given, with components indexed by the relevant index set; then the corresponding inequality holds. Moreover, we have the optimal objective values as follows:
Since $t$ can be any value in $[0,T]$, it follows that the solution constructed above is a feasible solution of problem (RCLP4), which shows that $V(\mathrm{RCLP4}) \ge V(\mathrm{RCLP3})$.
Conversely, suppose that an optimal solution of problem (RCLP4) is given. Then, given any fixed $t \in [0,T]$, the corresponding components are feasible solutions of the three dual problems for $i = 1, \dots, p$. Therefore, the dual objective values dominate the optimal values, which implies that the first constraint of problem (RCLP3) is satisfied. Since the remaining components are feasible solutions of the other two dual problems for $i = 1, \dots, p$, the analogous inequalities also hold. Since $t$ can be any number in $[0,T]$, it follows that the solution constructed above is a feasible solution of problem (RCLP3). In other words, we have $V(\mathrm{RCLP3}) \ge V(\mathrm{RCLP4})$. Therefore, we obtain $V(\mathrm{RCLP3}) = V(\mathrm{RCLP4})$, which shows the equivalence of problems (RCLP4) and (RCLP3). This completes the proof. □
Problem (RCLP4) is equivalent to the following continuous-time linear programming problem:
When the involved real-valued functions are assumed to be nonnegative for all $t \in [0,T]$, it is clear that the zero vector-valued function is a feasible solution of the problem.
4. Discretization
In order to design the computational procedure, we are going to formulate a discretization problem, which will be a conventional linear programming problem. Therefore, we need some mild assumptions to achieve this purpose.
- (A1)
The real-valued functions , , , , and are assumed to be nonnegative satisfying and .
- (A2)
The real-valued functions , , , , , , and are assumed to be piecewise continuous on .
- (A3)
For each $i$ and $j$, we assume that the following positivity conditions are satisfied:
- (A4)
We assume that the following positivity conditions are satisfied:
In other words, given any , if , then , and if , then for all and .
Since the involved functions are assumed to be piecewise continuous, the discontinuities should be excluded from the discussion. We consider the sets of discontinuities of the corresponding real-valued functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$. The sets of discontinuities of $a_{j}$, $c_{i}$ and $B_{ij}$ are finite subsets of $[0,T]$, and the set of discontinuities of $K_{ij}$ is a finite subset of the rectangle $[0,T] \times [0,T]$. For convenience, we also collect all of these discontinuities into a single finite subset of $[0,T]$. In order to formulate the discretized problem, we need to divide the time interval $[0,T]$ into many subintervals by determining a finite subset of points in $[0,T]$. In order to make the involved functions continuous on the subintervals, we take the discontinuities as partition points of $[0,T]$. Therefore, we consider the resulting finite subset of $[0,T]$, whose smallest and largest elements are $0$ and $T$. We remark that $0$ and $T$ can be points of continuity of the functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ for $i = 1, \dots, p$ and $j = 1, \dots, q$.
Let $P_{n}$ be a partition of $[0,T]$ that contains all of the discontinuities described above, which means that each compact interval between consecutive discontinuities is further divided into many compact subintervals. In this case, we have $n$ compact subintervals, denoted by $E_{1}, \dots, E_{n}$. We denote by $d_{l}$ the length of the compact subinterval $E_{l}$, define $\| P_{n} \| = \max_{l} d_{l}$, and assume that $\| P_{n} \| \to 0$ as $n \to \infty$. We also assume that there exists a constant controlling the ratios of the subinterval lengths, as stated. In this paper, when we say that $n \to \infty$, it implicitly means $\| P_{n} \| \to 0$. We consider two different types of partitions of $[0,T]$.
Example 1. We assume that each compact interval is equally divided into the same number of subintervals. In this case, the total number $n$ of subintervals grows with the number of subdivisions, and it is clear that $\| P_{n} \| \to 0$.

Example 2. We assume that each compact subinterval is equally divided into a prescribed number of subintervals, which determines the total number of subintervals. Assume that the partition satisfies the corresponding refinement condition; it is clear that this implies $\| P_{n} \| \to 0$.
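A sketch of the partition construction in the style of Example 1: collect the discontinuities, then split each resulting compact interval into $k$ equal parts; the mesh size tends to zero as $k$ grows. The inputs here are hypothetical.

```python
import numpy as np

def build_partition(T, discontinuities, k):
    """Refine the partition induced by the discontinuities of the data by
    splitting each resulting compact interval into k equal subintervals."""
    base = np.unique(np.concatenate(([0.0, T], discontinuities)))
    pts = [np.linspace(base[i], base[i + 1], k + 1)
           for i in range(len(base) - 1)]
    return np.unique(np.concatenate(pts))

P = build_partition(T=1.0, discontinuities=[0.25, 0.5], k=2)
print(P)                    # partition points, including the discontinuities
print(np.diff(P).max())     # the mesh size, which -> 0 as k grows
```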
Now, we can construct a partition $P_{n}$ according to the above setting. In this case, the real-valued functions $a_{j}$, $c_{i}$ and $B_{ij}$ are continuous on each open subinterval, and the real-valued function $K_{ij}$ is continuous on each open rectangle formed by two subintervals, for $i = 1, \dots, p$ and $j = 1, \dots, q$. For each $j$ and $l$, from (10), we define the corresponding values of the data on the subinterval $E_{l}$; for each $i$, $j$ and $l$, we also define the analogous values. It is clear that these quantities are well defined for all $i = 1, \dots, p$, $j = 1, \dots, q$ and $l = 1, \dots, n$.
Considering the matrices $B$ and $K$, the $(i,j)$-th entries of the corresponding discretized matrices, for $l = 1, \dots, n$, are denoted and defined accordingly. From (22), it follows that, if the mesh is sufficiently fine, then the corresponding bounds hold for all $i$, $j$ and $l$. It is clear that these entries reduce to the certain data for $j \notin I_{i}^{(B)}$ and $j \notin I_{i}^{(K)}$, respectively, for $i = 1, \dots, p$.
Remark 1. From (21), we see that a sufficiently fine mesh implies the corresponding bounds for all $i$, $j$ and $l$. Given any fixed $t$, from (20), there exists an index satisfying the stated property, which also says that the associated condition holds. Therefore, for each $j$ and $l$, there exists an index satisfying this condition.

Considering the matrices $B$ and $K$ again, the $(i,j)$-th entries of the other pair of discretized matrices, for $l = 1, \dots, n$, are denoted and defined analogously. It is clear that these entries reduce to the certain data for $j \notin I_{i}^{(B)}$ and $j \notin I_{i}^{(K)}$, respectively, for $i = 1, \dots, p$. We also see that a sufficiently fine mesh implies the corresponding bounds for $l = 1, \dots, n$.
Now, we formulate the following linear programming problem:
We note that constraint (35) is redundant; however, we present it for comparison with constraint (36). We also adopt the following notation:
The vector has all entries 1.
The vector has all entries 1 with indices .
For each , the vector has all entries 1 with indices .
For each , the vector has all entries 1 with indices .
Given a vector , we denote by a diagonal matrix with for appearing in the diagonal.
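To illustrate where the entries of the discretized problem come from, the sketch below assembles the objective vector for piecewise-constant decisions: with $z$ constant on each subinterval, the integral of $a_{j}(t)\,z_{j}(t)$ becomes a finite sum. The sampling rule used here (an approximate infimum on each subinterval) is our own illustrative stand-in for the paper's exact definitions.

```python
import numpy as np

def discretized_objective_vector(a_funcs, partition, samples=50):
    """Objective vector of the discretized LP: one entry per (j, l) pair,
    equal to the subinterval length times a conservative coefficient value."""
    q, n = len(a_funcs), len(partition) - 1
    c = np.zeros(q * n)
    for l in range(n):                       # subinterval E_l = [t_l, t_{l+1}]
        ts = np.linspace(partition[l], partition[l + 1], samples)
        d_l = partition[l + 1] - partition[l]
        for j, a in enumerate(a_funcs):
            c[j * n + l] = d_l * min(a(t) for t in ts)
    return c

c = discretized_objective_vector([lambda t: 1.0 + t], np.array([0.0, 0.5, 1.0]))
print(c)    # -> [0.5, 0.75]
```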
Now, the problem is rewritten as the following standard form:

where the decision vector is defined blockwise for $j = 1, \dots, q$ and $l = 1, \dots, n$, and the data vectors are defined by the discretized quantities introduced above. To obtain the matrix $M$, we need to carefully determine its submatrices, which are described below.

For the first row, the blocks are built from the vectors consisting of the discretized objective data for $l = 1, \dots, n$.

For the second row, we first define the blocks with entries given for $l = 1, \dots, n$, and then assemble the submatrix.

For the third row, we first define the blocks with entries given for $l = 1, \dots, n$; then the corresponding submatrix is assembled, and we also define the accompanying submatrices.

For the fourth row, we first define the blocks with entries given for $l = 1, \dots, n$; then the corresponding submatrix is assembled, and we also define the accompanying submatrices.
Now, the dual problem of the discretized problem is given by the following standard form:

where the dual decision vector is defined blockwise for $i = 1, \dots, p$ and $l = 1, \dots, n$. After some algebraic calculations, it can be written in an explicit form; by renaming the decision variables, it is equivalent to the following problem:

The feasible solutions of the primal and dual discretized problems are denoted as above. Recall that $d_{l}$ denotes the length of the compact subinterval $E_{l}$. We define the corresponding scaled quantities for $l = 1, \dots, n$, adopt the further notation indicated above, and note that the stated bounds hold for any $i$, $j$ and $l$.
Given a feasible solution of the dual discretized problem, we define the associated auxiliary quantities. From the constraints (43) and (44), we obtain the first bound; from the constraints (40) and (41), we also obtain the second bound. For $l = 1, \dots, n$, we write the corresponding quantities as follows:
Proposition 4. Given a feasible solution of problem , we have the following properties.
- (i)
For , and , let Then,is a feasible solution of problem satisfying the following inequalities:for all , and . - (ii)
For and , let Then, is a feasible solution of problem satisfying the following inequalitiesfor all , , and . We further assume that each is nonnegative and is an optimal solution of problem . Then, is also an optimal solution of problem .
Proof. By (20), for each $j$, there exists an index with the stated property. Therefore, by referring to (27), for each $j$ and $l$, there exists an index satisfying the analogous property, which also implies the corresponding inequality. For the relevant indices, the stated chain of inequalities holds; therefore, from (66), we obtain a bound which implies, by (61)–(63), that constraint (37) is satisfied. For the remaining indices, from (66), we also obtain a bound which implies, by nonnegativity, that constraint (38) is satisfied. From (60), the constraints (40) and (41) are satisfied, and the other constraints can easily be verified. This shows that the constructed solution is indeed a feasible solution of the dual discretized problem. On the other hand, from (23) and (47), the stated equality holds; since the corresponding ratio is bounded, this proves (64).
To prove part (ii), for each $i$ and $l$, we define the corresponding index set. We also define a further index set.
For each fixed and , we consider the following cases.
Suppose that the first index set is nonempty; that is, there exists an index with the stated properties. In this case, the corresponding equality holds. Combining (66) and (67), we obtain the required inequalities for the relevant indices, which show that the constraints (37) and (38) are satisfied.
Suppose that the first index set is empty; that is, the strict inequality holds for every relevant index. In this case, using (68), the first bound holds. For the relevant indices, since the stated inequalities hold, using (72) we obtain a bound which implies, by using (62) and (63) and adding the same term to both sides, the required inequality. For the remaining indices, by (68), we also obtain the analogous bound, which implies that the constraints (37) and (38) are satisfied.
From (60), the constraints (40) and (41) are satisfied, and the other constraints can easily be verified. This shows that the constructed solution is a feasible solution of the dual discretized problem. In addition, the inequality (65) follows from (64) immediately.

Finally, since the objective values of the two solutions coincide, it is clear that if the given solution is an optimal solution, then the constructed solution is also an optimal solution. This completes the proof. □
Next, we shall prove that the feasible solutions of problem are uniformly bounded.
Proposition 5. Let be a feasible solution of primal problem given by Then, we have the following properties.
- (i)
We havefor all , and . - (ii)
We havewhich says that there is a constant satisfyingfor all , , and . - (iii)
Suppose that is an optimal solution of problem given by We havewhich says that there is a constant satisfyingfor all and .
Proof. To prove part (i), by feasibility, we have the stated inequality and, for each $l$, the corresponding constraint. By (20), for each $j$, there exists an index with the stated property; therefore, by referring to (27), for each $j$ and $l$, there exists an index satisfying the analogous property, which also implies the corresponding inequality by (48) and (56). From (56), if the relevant index set is empty, then the associated matrix is a zero matrix; in this case, using (77) and (52), we obtain the required bound.

For the remaining case, we want to show that the stated bound holds for all $j$ and $l$; we prove it by induction on $l$. For $l = 1$, from (79), we obtain the base inequality, which yields the bound for each $j$. Suppose that the bound holds for all indices up to $l - 1$. Then, for each $j$, the inductive step together with the constraint yields the bound for $l$. Therefore, by induction, the bound holds for all $i$, $j$ and $l$. From (23), the stated equality holds; since the corresponding ratio is bounded, this proves (74).
To prove part (ii), by the feasibility of the given solution, for each $l$, the stated inequality holds, which implies a uniform bound. Since the involved quantities are nonnegative, according to (80), they must be uniformly bounded. Therefore, we conclude that there exists a constant such that (75) is satisfied.
To prove part (iii), according to the objective function of the primal discretized problem, since the zero solution is feasible, the optimal objective value is nonnegative, which implies the stated inequality. This says that the remaining variables must be uniformly bounded for all $i$ and $l$, by nonnegativity. Therefore, we conclude that there exists a constant such that (76) is satisfied. This completes the proof. □
The feasible solutions of the primal discretized problem are denoted as above. Let an optimal solution of the primal discretized problem be given. We construct the vector-valued step functions as follows: for each $j$ and $l$, the $j$-th component takes the corresponding constant value on the subinterval $E_{l}$. We also construct the remaining vector-valued step functions in the same manner, using the corresponding components of the optimal solution for $i = 1, \dots, p$ and $l = 1, \dots, n$.

Then, we have the following result.
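The construction above turns a finite vector indexed by subintervals into a step function. A direct sketch of that construction (with a right-closed convention chosen by us for the subinterval boundaries):

```python
import numpy as np

def step_function(partition, values):
    """Constant value values[l] on the l-th subinterval of the partition,
    mirroring the piecewise-constant construction of the feasible solution."""
    partition = np.asarray(partition)

    def z(t):
        l = np.searchsorted(partition, t, side="left") - 1
        l = max(0, min(l, len(values) - 1))   # clamp to the first/last piece
        return values[l]
    return z

z = step_function([0.0, 0.5, 1.0], [2.0, 3.0])
print(z(0.25), z(0.75))   # -> 2.0 3.0
```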
Proposition 6. Suppose that is an optimal solution of primal problem . Let be constructed from according to the above procedure given by Then, is a feasible solution of problem .
Proof. We first note the basic bound for $l = 1, \dots, n$. We consider the following cases.

Suppose that $t$ is an interior point of some subinterval. For the first group of constraints, the stated inequality holds; for the second group, the analogous inequality holds; by (30) and (34), the third group is satisfied; and, by (30) and (36), the fourth group is satisfied.

Suppose that $t$ is a partition point. Then the corresponding inequalities also hold.

This shows that the constructed step functions form a feasible solution of the transformed continuous-time problem, and the proof is complete.
5. Analytic Formula of the Error Bound
Recall that $V(\mathrm{P})$ denotes the optimal objective value of problem (P). Since an optimal solution of the primal discretized problem yields, by the above construction, a feasible solution of the transformed continuous-time problem, the corresponding objective values are related by the stated inequality. According to the weak duality theorem for the primal and dual pair of continuous-time problems presented in Theorem 2, the companion inequality also holds. In the sequel, we are going to establish the reverse relation up to an explicit error term.

The feasible solutions of the dual discretized problem are denoted as above. Let an optimal solution of the dual discretized problem be given. According to part (ii) of Proposition 4, we can construct another optimal solution such that the inequalities stated there are satisfied for all $i$, $j$ and $l$. From (58) and (59), we also define the associated quantities. For each $i$ and $l$, we define the real-valued functions on the corresponding subintervals; then, for any point of the subinterval, the stated bounds hold. For each $i$ and $l$, we further define the real-valued functions used in the error analysis, and for each $l$ we define the corresponding constants. Then the stated relations hold for $l = 1, \dots, n$ and $i = 1, \dots, p$. We want to prove that the error term vanishes in the limit. Some useful lemmas are provided below.
Lemma 2. For , and , we have Proof. Using the argument of Lemma 4.1 in Wu [
6], for
, we can show that
and
Therefore, for
, we obtain
and, for
, we obtain
This completes the proof. □
Proof. Since the relevant quantity is bounded by (92), using the argument of Lemma 4.2 in Wu [6], we can show the two stated bounds, and the desired inequality follows. This completes the proof. □
Proof. From (23), the stated identity holds; it follows, by (90), that the required bound is valid. Using Lemmas 2 and 3, we complete the proof. □
For convenience, we adopt the notation indicated above. From (20) and (21), the stated relations hold. Now we define the real-valued functions used in the error estimate on $[0,T]$. Using (90) and Lemmas 2 and 3, we see that the corresponding sequence of functions is uniformly bounded, and hence so is the derived sequence. In this case, there exists a constant satisfying the stated bound for all $n$ and $t \in [0,T]$. For further discussion, we define a real-valued function on $[0,T]$ as given above.
Lemma 5. We define a real-valued function given by Then, for , we have We also have that the sequence of real-valued functions is uniformly bounded.
Proof. For the relevant indices, from (105), the first identity holds. Since the stated bound is valid, using (106), the required inequality follows; for the remaining indices, the analogous inequality also holds. For each $i$ and $t$, we consider the following cases: in the first case, from (108), the desired bound holds; in the second case, by (100) and (107), the desired bound also holds. Finally, using (103) and (104), it is clear that the sequence of real-valued functions is uniformly bounded, and the proof is complete. □
Now, we define a vector-valued function by collecting the above components.
Remark 2. Lemma 5 says that the sequence of real-valued functions is uniformly bounded. Using (90), it is clear to see that the family of vector-valued functions is uniformly bounded. Proposition 7. Let be an optimal solution of problem , and let be another optimal solution of problem constructed from part (ii) of Proposition 4. Let be constructed from . We define for and , where and are defined in . Then, is a feasible solution of problem .
Proof. For each $i$ and $l$, we define the real-valued functions on the corresponding subintervals, which yields the first identity. Therefore, by adding the same term to both sides, we obtain a relation which implies the next inequality. Now, from (109) and (110), we obtain a bound which says, by (92)–(94), that the required inequality holds. Suppose now that $t$ is a partition point; we define the corresponding value, which yields the analogous identity. We then obtain a bound which says, by (92) and (94), that the required inequality also holds. Therefore, we conclude that constraint (13) is satisfied.

From (60) and (91), the constraints (15) and (16) are satisfied, and the other constraints can easily be verified. This says that the constructed solution is a feasible solution of the dual continuous-time problem. This completes the proof. □
For each $i$ and $l$, we define the step functions as indicated above; for each $i$, we also define the corresponding step function on $[0,T]$.
Lemma 6. For and , we have Proof. We can see that the following functions
are continuous a.e. on
, which says that they are Riemann-integrable on
. For
and
, using Lemma 2, we have
and
which implies
Using Proposition 5, we see that the sequence
is uniformly bounded. Since the Riemann integral and Lebesgue integral are identical in this lemma, we can use the Lebesgue bounded convergence theorem to obtain (
113). Using (
90), we also see that the sequence
is uniformly bounded. Therefore, we can use the Lebesgue bounded convergence theorem again to obtain (
114). This completes the proof. □
Theorem 3. We have the following properties.
- (i)
satisfying as . There also exists a convergent subsequence of satisfying - (ii)
(No Duality Gap) Suppose that the primal problem is feasible. Then, we have
Proof. To prove part (i), the identity stated above implies the corresponding bound. From (101), the two stated inequalities hold for all $n$. Using Lemma 4, the error term tends to zero as $n \to \infty$; therefore, the integrand converges to zero a.e. on $[0,T]$, and hence the relevant sequence converges a.e. on $[0,T]$. We can use the Lebesgue bounded convergence theorem to pass to the limit. From (119) and (120), we conclude the desired convergence as $n \to \infty$; from (120), we also obtain the companion limit. Using part (ii) of Proposition 4, it follows that the relevant sequence is bounded, so there exists a convergent subsequence. Using (118), we can obtain the equality (116), and it is clear that the limit can be written as (115).

To prove part (ii), using part (i) and inequality (88), we obtain the chain of inequalities. Since the gap vanishes for each $n$, we also obtain the two stated equalities. This completes the proof.
Proposition 8. We have the following properties.
- (i)
Suppose that is an optimal solution of primal problem . Let be constructed from as given in Proposition 6 by The error between the optimal objective value and the objective value at is less than or equal to defined in , i.e., - (ii)
Suppose that is an optimal solution of problem . Let be another optimal solution of problem constructed from part (ii) of Proposition 4. Let be constructed from . We define for and , where and are defined in . Then, is a feasible solution of problem , and the error between the optimal objective value and the objective value at is less than or equal to , i.e.,
Proof. To prove part (i), using Proposition 6, we see that the constructed step functions form a feasible solution of the transformed problem with the stated objective value. To prove part (ii), the analogous computation applies. This completes the proof.
Definition 1. Given any , we say that the feasible solution of problem is an-optimal solution
when We say that the feasible solution of problem is an-optimal solution
when Theorem 4. Given any , we have the following properties.
- (i)
There exists an integer such that obtained from Proposition 8 satisfies and . This also means that the ϵ-optimal solution of problem exists.
- (ii)
There exists an integer such that obtained from Proposition 8 satisfies and . This also means that the ϵ-optimal solutions of problem exist.
Proof. Part (i) of Theorem 3 says that the error term tends to zero as $n \to \infty$. Given any $\epsilon > 0$, using Proposition 8, there exists an integer $n$ for which the error bound is below $\epsilon$. In this case, the desired results follow immediately. □
6. Convergence of Approximate Solutions
We are going to study the convergence properties of the sequence constructed from the optimal solutions of the primal discretized problems and of the sequence constructed from the optimal solutions of the dual discretized problems.
Let a feasible solution of the dual discretized problem be given. For each $i$ and $l$, we define the functions indicated above. The constraints (18) and (19) say that the first pair of bounds holds, and from the constraints (15) and (16) we also obtain the second pair, for the relevant indices. For each $i$ and $l$, we also define the companion functions. We consider the quantities given in (21) and (22) and, for convenience, we define a real-valued function on $[0,T]$ as indicated. Then, a useful lemma is given below.
Lemma 7. Let be a feasible solution of dual problem given by For each and , we define where is defined in . We also define the functions where and are defined in and , respectively. Then, is a feasible solution of dual problem.
Proof. We first note the basic identity, which implies the first relation. For any fixed $t$, we define the index sets and consider the associated decomposition. Then, for each fixed $j$, three cases are considered below.

Suppose that the second sum is zero. Then the summands vanish for all $i$; therefore, from (127), the required inequality holds.

Suppose that the second sum is nonzero and the strict inequality holds for all relevant indices. Then, using (129) again, we obtain the required inequality.

Suppose that the second sum is nonzero and there exists an index violating the strict inequality. If the first alternative occurs, then, by (21), the corresponding bound holds; if the second alternative occurs, then, by (22), the other bound holds. Therefore, we conclude the combined bound. From (126), the stated inequality holds for the relevant indices. By (53) and (128)–(130), for each $i$ and $t$, we obtain a chain of inequalities; therefore, the required inequality follows by nonnegativity.

Combining the above cases, we conclude that constraint (13) is satisfied. From (123), the constraints (15) and (16) are satisfied, and the other constraints can easily be verified. This shows that the constructed solution is indeed a feasible solution of the dual continuous-time problem, and the proof is complete. □
We also need the following useful lemmas.

Lemma 8 (Riesz and Sz.-Nagy ([52], p. 64)). Suppose that a sequence in $L^{2}[0,T]$ is uniformly bounded with respect to the $L^{2}$-norm. Then there exists a subsequence that converges weakly to a limit function; more precisely, the integrals against any square-integrable test function converge.

Lemma 9 (Levinson [4]). Suppose that a sequence is uniformly bounded on $[0,T]$ and converges weakly to a limit function. Then the stated limit inequality for the integrals holds.

Lemma 10. Suppose that two sequences in $L^{2}[0,T]$ converge weakly to their respective limits in $L^{2}[0,T]$.
- (i) If the function η defined on $[0,T]$ is bounded, then the sequence of products with η converges weakly to the product of η with the limit.
- (ii) The sequence of sums converges weakly to the sum of the limits.

Proof. To prove part (i), given any test function, it is clear that its product with η is again square-integrable, and the concept of weak convergence yields the conclusion. This shows the desired weak convergence. To prove part (ii), given any test function, the conclusion follows by the linearity of the integral. This completes the proof. □
Let a sequence of $m$-dimensional vector-valued functions be given, with components in $L^{2}[0,T]$. We say that the sequence converges weakly to another $m$-dimensional vector-valued function when each sequence of components converges weakly to the corresponding component of the limit.
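For intuition, weak convergence can be checked numerically by integrating against test functions; the classical oscillating example below converges weakly to zero without converging pointwise. This illustration is ours and is not taken from the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)
g = np.exp(-t)                       # an arbitrary test function in L^2[0,1]
dt = np.diff(t)

for n in (1, 10, 100):
    f_n = np.sin(n * np.pi * t)      # oscillates faster and faster
    h = f_n * g
    integral = float(np.sum(0.5 * (h[:-1] + h[1:]) * dt))   # trapezoid rule
    print(n, integral)               # -> 0 as n grows: weak convergence to 0
```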
Proposition 9. is a sequence constructed from the optimal solutions of problem according to part (i) of Proposition 8. Then, there is a subsequence of such that the following properties hold true.
The subsequences of functions , , , and are weakly convergent to some , , , and , respectively.
The subsequences of constants and are convergent to some and , respectively,
The vector-valued function formed by the above limits is a feasible solution of primal problem .
Proof. From Proposition 5, we see that the sequence of step functions is uniformly bounded with respect to the $L^{2}$-norm. Consider the $j$th component of the sequence, and regard the remaining quantities as constant functions. Lemma 8 says that there exists a subsequence that converges weakly to some limit; using Lemma 8 again, there exists a further subsequence converging weakly to another limit. By induction, there exists a subsequence converging weakly for each $j$. Therefore, we can construct a subsequence that converges weakly componentwise, which also says that the corresponding sequences of constants converge and that the remaining sequences of functions converge weakly to their respective limits. Then the constructed subsequence of vector-valued functions converges weakly to the limit vector-valued function. From Lemma 9, for each $j$, the stated limit inequality holds, which yields the corresponding bound.

Since each member of the subsequence is a feasible solution of the transformed problem, the associated constraints hold. We define the sequences of vector-valued functions indicated above; they are uniformly bounded, since the original sequence is uniformly bounded. Lemma 10 also says that these sequences converge weakly to the corresponding limits. By the weak convergence, the inequalities (135) and (137) yield the limit inequalities (138)–(140). Let the subsets of $[0,T]$ on which the inequalities (138)–(140) are violated be given, for all relevant indices; let a further subset be given, by referring to (133), on which the complementary property fails; and collect all of these sets into a single set of measure zero. We then modify the limit on this null set so that the required properties hold for all $t$ outside the null set.

Suppose that $t$ lies outside the null set. Since the complementary property holds a.e. on $[0,T]$, from (138), the first constraint is satisfied. Suppose instead that $t$ lies in the null set; then the modified values satisfy the constraint as well. This shows that the modified limit is a feasible solution of the primal continuous-time problem. Since the modification occurs only on a set of measure zero, the subsequence still converges weakly to the modified limit, and the proof is complete.
Proposition 10. is a sequence that is constructed from the optimal solutions of problem according to part (ii) of Proposition 8. For , we define the functions For each and , we define where is defined in . We also define the functions for . Then, eachis a feasible solution of dual problem , and there is a subsequence of such that the following properties hold true. The subsequence of functions is weakly convergent to .
The subsequence of constants is convergent to .
The vector-valued function formed by the above limits and given by in the proof below is a feasible solution of dual problem , where and are constructed from the subsequence .
Proof. Since each member of the sequence is a feasible solution of the dual discretized problem, Lemma 7 says that the constructed functions form a feasible solution of the dual continuous-time problem satisfying the stated bounds for each $i$, $l$ and $t$. Remark 2 also says that the constructed sequence is uniformly bounded, which implies that the associated sequence of functions is uniformly bounded; therefore, the sequence is also uniformly bounded with respect to the $L^{2}$-norm. Using Lemma 8 and the proof of Proposition 9, we can similarly show that there is a subsequence such that the subsequence of functions converges weakly to some limit and the subsequence of constants converges to some limit. According to the constraints (17) and (14), the stated inequalities hold, which implies, by taking the limit on both sides, the corresponding limit inequalities. By applying the equalities (141) and (142) to the constraints (15) and (16), we obtain the next relations, together with the accompanying bounds. We then define the auxiliary quantities; from (144), we obtain the further relations (145)–(148).

Since each member of the subsequence is a feasible solution of the dual discretized problem, the associated inequalities hold for the relevant indices. From Lemma 9, for each $i$, the stated limit inequality holds. We define the sequence of vector-valued functions indicated above; it is uniformly bounded, since the original sequence is uniformly bounded. Lemma 10 also says that this sequence converges weakly to some limit, and the limit inherits the relevant bounds. Using Lemma 9, we obtain the further limit inequality.

Let the subsets of $[0,T]$ on which the inequalities (151) and (153) are violated be given, together with the subset on which the complementary property fails, and collect them into a single set of measure zero. We modify the limit on this null set accordingly. Therefore, the required bounds hold outside the null set, which implies, using (153), the next inequality. We then define the final candidate solution; since the defining bounds hold, the stated relations follow, and the candidate agrees with the weak limit a.e. on $[0,T]$. Next, we want to show that this candidate is a feasible solution of the dual continuous-time problem.

Suppose that $t$ lies outside the null set. Then, from (145)–(148), the required inequalities hold. Suppose instead that $t$ lies in the null set. From (152), the relevant bounds hold for the modified values; by (20), (21) and (157), for each $i$ and $t$, the required inequalities follow. From (155), we also obtain the remaining bounds. Finally, from (143), the last constraint holds, which shows that the candidate is a feasible solution of the dual continuous-time problem. Since the candidate agrees with the weak limit a.e. on $[0,T]$, we see that the subsequence converges weakly to it, and the proof is complete. □
Theorem 5 (Strong Duality Theorem).
According to Proposition 8, assume that one sequence is constructed from the optimal solutions of the primal discretized problems and that the other sequence is constructed from the optimal solutions of the dual discretized problems. Then, the feasible solutions obtained from Propositions 9 and 10, respectively, are also optimal solutions of the primal and dual continuous-time problems, respectively. Moreover, their optimal objective values coincide.

Proof. In Proposition 9, regarding the convergent subsequence, we have the primal objective values, where the subsequence converges weakly to the primal limit. In Proposition 10, regarding the corresponding subsequence, we also have the dual objective values. Since the candidate dual solution agrees a.e. with the weak limit constructed in the proof of Proposition 10, from (159) and (160), we obtain the stated relation. Using Lemma 6, we obtain the two auxiliary limits. By taking the limit on both sides of (161), and using (120), (162) and (163), we obtain the equality of the limiting objective values. Using the weak convergence, we also obtain the corresponding integral identity. According to the weak duality theorem between the primal and dual continuous-time problems, these equalities show that the two limits are optimal solutions of the primal and dual problems, respectively. Theorem 3 also says that there is no duality gap. This completes the proof. □
7. Computational Procedure and Numerical Example
In the sequel, we are going to design the computational procedure. The purpose is to obtain approximate optimal solutions of the original problem. According to the above settings, we see that the approximate optimal solutions are step functions. We shall also use Proposition 8 to obtain the appropriate step functions.
Theorem 3 says that the error between the approximate objective value and the (theoretical) optimal objective value is bounded by the quantity $\varepsilon_{n}$. In order to obtain $\varepsilon_{n}$, using (97), we have to solve the corresponding maximization problem. We first note that the function appearing in (96) can be rewritten as follows:
For each $i$ and $l$, we define the auxiliary functions given above. Then, the real-valued function under consideration can be rewritten accordingly. Now we define the corresponding real-valued function on the closure of each subinterval. Since the data functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$, together with their nominal data and uncertainties, are continuous on the open subintervals and open rectangles, respectively, for all $i$ and $j$, we see that this function is continuous on the closure of each open subinterval. Therefore, the supremum in (165) can be obtained via the following equality:
In order to use Newton's method, we assume that the functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$, together with their nominal data and uncertainties, are twice-differentiable on the respective open subintervals and open rectangles, for all $i$ and $j$; this also says that the auxiliary functions are twice-differentiable there. According to (168), the following simple type of optimization problem must be solved: the maximization of a twice-differentiable function over a compact subinterval. Then, the optimal solution is attained either at an interior stationary point or at an endpoint. Using (167), we see that the optimal solution of problem (169) is obtained accordingly. We denote by $Z$ the set of all zeros of the derivative of the objective function. Then, the maximum in (169) is taken over $Z$ together with the endpoints. Therefore, using (168) and (170), we can obtain the desired supremum (165). From (166), we also obtain the corresponding expression.
Two cases are considered below.
Suppose that the function is a linear function of $t$ on the subinterval. Using (168), we obtain the maximum directly from the endpoints.

Suppose that the function is not a linear function of $t$. We apply Newton's method to generate a sequence of iterates converging to a zero of the derivative. More precisely, the iteration is given by (172), with the first and second derivatives evaluated at the current iterate, starting from an initial guess. We apply Newton's method with as many initial guesses as possible in order to obtain all of the zeros of the derivative.
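A sketch of the multi-start Newton iteration described above; the test function and its derivatives are hypothetical stand-ins for the subinterval objective, and the stopping rule is one simple choice.

```python
import numpy as np

def newton_zeros(dfunc, d2func, guesses, tol=1e-10, max_iter=100):
    """Newton iteration t_{k+1} = t_k - dfunc(t_k)/d2func(t_k), run from many
    initial guesses to collect the zeros of dfunc (the derivative of the
    function whose maximum is sought), in the spirit of (172)."""
    zeros = set()
    for t in guesses:
        for _ in range(max_iter):
            d2 = d2func(t)
            if abs(d2) < 1e-14:      # nearly flat: abandon this start
                break
            step = dfunc(t) / d2
            t -= step
            if abs(step) < tol:
                zeros.add(round(t, 8))
                break
    return sorted(zeros)

# Hypothetical smooth objective l(t) = -(t-0.3)^2 (t-0.7)^2 on [0, 1]:
dl  = lambda t: -2 * (t - 0.3) * (t - 0.7) * ((t - 0.3) + (t - 0.7))
d2l = lambda t: -2 * ((t - 0.3)**2 + 4 * (t - 0.3) * (t - 0.7) + (t - 0.7)**2)
print(newton_zeros(dl, d2l, np.linspace(0.0, 1.0, 11)))   # 0.3, 0.5, 0.7
```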
Now, the more detailed computational procedure is presented below.
Step 1. Set the error tolerance $\epsilon$ that is used to stop the iteration. Set the initial value of the natural number $n$; the new value of $n$ for the next iteration is determined in Step 6.
Step 2. Use the simplex method to solve the dual discretized problem, which is a large-scale linear programming problem. In this case, we obtain its optimal objective value and an optimal solution.
Step 3. Use Newton's method as presented in (172) to obtain the set of all zeros of the derivative on each subinterval.
Step 4. Use (170) and (171) to evaluate the maximum presented in (169). Use (168) to evaluate the supremum presented in (165).
Step 5. Use the supremum obtained in Step 4 to evaluate the quantities presented in (97). Use these values to evaluate the quantity presented in (98).
Step 6. Use formula (164) to evaluate the error bound $\varepsilon_{n}$. If $\varepsilon_{n} < \epsilon$, then go to Step 7. Otherwise, one more subdivision of each compact subinterval must be taken: set $n \leftarrow n + r$ for some integer $r$ and go to Step 2, where $r$ is the number of new subdivision points introduced in all of the compact subintervals. The refinements of Examples 1 and 2 both fit this scheme.
Step 7. Use the simplex method to solve the primal discretized problem, which is a large-scale linear programming problem. In this case, we obtain an optimal solution.
Step 8. Use (81) to set the step function. This step function is the approximate optimal solution of the original problem. Using Proposition 8, we see that the actual error between the optimal objective value and the objective value at this step function is less than or equal to the error bound $\varepsilon_{n}$. In this case, the error tolerance $\epsilon$ is reached for this partition.
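The control flow of Steps 1–8 can be summarized as follows. The three callables are placeholders for the problem-specific routines described above (dual LP solve, error-bound evaluation, primal LP solve); the toy stand-ins at the end only exercise the loop and are not the paper's data.

```python
def approximate_optimal_solution(solve_dual_lp, error_bound, solve_primal_lp,
                                 epsilon, n0=2, r=1, n_max=512):
    """High-level driver for Steps 1-8: refine the partition until the
    error bound drops below the tolerance, then solve the primal LP."""
    n = n0
    while n <= n_max:
        dual_value, dual_solution = solve_dual_lp(n)      # Step 2
        eps_n = error_bound(n, dual_solution)             # Steps 3-6
        if eps_n <= epsilon:                              # Step 6 test
            return solve_primal_lp(n), eps_n              # Steps 7-8
        n += r                                            # refine and retry
    raise RuntimeError("error tolerance not reached")

# Toy stand-ins, only to show the control flow (the bound shrinks like 1/n):
value, eps = approximate_optimal_solution(
    solve_dual_lp=lambda n: (1.0 + 1.0 / n, None),
    error_bound=lambda n, _: 1.0 / n,
    solve_primal_lp=lambda n: 1.0,
    epsilon=0.01)
print(value, eps)
```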
A numerical example is given below, in which the desired functions are taken to be piecewise continuous functions on the time interval $[0,T]$. The data $a_{j}$ and $c_{i}$ are assumed to be uncertain, with the nominal data and uncertainties given below. The uncertain time-dependent matrices $B$ and $K$ are also given below, with the uncertain entries specified by their nominal data and uncertainties. It is clear that the component functions satisfy the required assumptions. In order to set the partition, we consider the discontinuities of the data functions; in this example, the set of discontinuities is as indicated.

We also take the refinement of Example 1, which means that each compact interval is equally divided into two subintervals. Therefore, we obtain the corresponding partition.
From the robust counterpart, we see that one of the robustness levels does not appear in the problem; in other words, it does not affect the robust counterpart. In this example, we take the remaining robustness levels to be fixed integers for each $i = 1$ and 2.
The approximate optimal objective value of the primal discretized problem is denoted as above. Using Theorem 3 and Proposition 8, we obtain the corresponding bounds. Now, the numerical results are presented in the following table.
| subdivisions per compact interval | total number $n$ of subintervals | | | |
| 2 | 16 | | | |
| 10 | 80 | | | |
| 50 | 400 | | | |
| 100 | 800 | | | |
| 200 | 1600 | | | |
| 300 | 2400 | | | |
| 400 | 3200 | | | |
| 500 | 4000 | | | |
We use the commercial software MATLAB to perform this computation. The active-set method that is built into MATLAB is used to solve the large-scale linear programming problems. Assume that the decision-maker can tolerate a prescribed error $\epsilon$. Then the table shows which $n$ is sufficient to achieve this error tolerance, with the corresponding error bound $\varepsilon_{n}$ satisfying $\varepsilon_{n} < \epsilon$.
8. Conclusions
The main purpose of this paper is to solve the continuous-time linear programming problem with time-dependent matrices when the data are uncertain and lie in specified bounded closed intervals. In this case, the technique of so-called robust optimization is adopted to formulate an extended form of the robust counterpart. Solving this extended form is difficult, even more so when time-dependent matrices are involved in the problem. Using some technical derivations, this extended form of the robust counterpart is transformed into a conventional form of continuous-time linear programming problem with time-dependent matrices. The remaining effort is to solve this more complicated transformed problem by using the discretization technique.
It is impossible to solve the original problem directly, since Riemann integrals are involved in the problem. Instead of solving the original problem, we solve its corresponding discretized problem; the technique for formulating the discretized problem was also adopted in Wu [7]. In fact, the discretized problem is a conventional linear programming problem, so the well-known simplex method can be used. However, the challenging issue is to estimate the error between the actual solution and the approximate solution. Theorem 3 presents an analytic formula for the error bound $\varepsilon_{n}$ relating the optimal objective value of the original problem to that of its discretized linear programming problem.
The weak convergence of the approximate solutions to the actual solution is also studied, to demonstrate the asymptotic behavior, by referring to Propositions 9 and 10. Finally, a computational procedure is designed to obtain the approximate optimal solutions.
Another important contribution of this paper is the analytic formula for the error bound $\varepsilon_{n}$ presented in Theorem 3. In order to calculate this error bound, we need to solve the dual discretized problem to obtain its optimal objective value and an optimal solution. We also have $\varepsilon_{n} \to 0$ as $n \to \infty$. Therefore, studying the dual problems is another important issue. Theorem 3 also shows that the primal and dual continuous-time problems have no duality gap, in the sense that their optimal objective values are identical. Moreover, the strong duality established in Theorem 5 says that optimal solutions of the primal and dual continuous-time problems indeed exist and that their optimal objective values are identical. In the theory of optimization, when we want to say that a newly formulated problem is a dual problem of the original primal problem, we need to establish their strong duality; in that case, instead of solving the primal problem, we can simply solve its dual problem. Because the strong duality is established in Theorem 5, we can really say that the two problems form a primal and dual pair. In other words, instead of solving the primal problem, we can also solve the dual problem. This paper mainly solves the dual problem to obtain the analytic formula for the error bound $\varepsilon_{n}$ as shown in (173), which involves quantities of the dual problem. Therefore, based on the strong duality theorem, it may be possible to design a more efficient computational procedure that obtains another analytic formula for the error bound by solving the primal problem, which can be future research.
The discretized problem formulated in this paper is a large-scale linear programming problem. Solving this large-scale problem consumes huge computational resources, and sometimes a personal computer is not capable of handling the computations. In order to increase the performance and efficiency of the methodology proposed in this paper, we may need high-level computing resources such as supercomputers. In future research, we may also try to develop a new computational procedure involving parallel computation, which could reduce the running time.