Article

Extended Form of Robust Solutions for Uncertain Continuous-Time Linear Programming Problems with Time-Dependent Matrices

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan
Axioms 2022, 11(5), 211; https://doi.org/10.3390/axioms11050211
Submission received: 22 February 2022 / Revised: 15 April 2022 / Accepted: 27 April 2022 / Published: 1 May 2022
(This article belongs to the Special Issue Computational Optimization and Applications)

Abstract

An extended form of the robust continuous-time linear programming problem with time-dependent matrices is formulated in this paper. This complicated problem is studied theoretically, and we design a computational procedure to solve it numerically. The data appearing in the problem are treated as uncertain quantities, which are handled according to the concept of robust optimization. A discretization of the robust counterpart is formulated and solved to obtain the ϵ-optimal solutions.

1. Introduction

The “bottleneck problem” proposed by Bellman [1] initiated the formulation of continuous-time linear programming problems. These kinds of problems involving integrals have received considerable attention for a long time. We denote by L + 2 [ 0 , T ] the space of all nonnegative and square-integrable real-valued functions defined on the time interval [ 0 , T ] . Tyndall [2,3] studied the following problem:
max j = 1 q 0 T a j ( t ) · z j ( t ) d t subject   to j = 1 q B i j · z j ( t ) c i ( t ) + j = 1 q 0 t K i j · z j ( s ) d s for   t [ 0 , T ]   and   i = 1 , , p z j L + 2 [ 0 , T ]   for   all   j = 1 , , q ,
where B i j and K i j are nonnegative constants for i = 1 , … , p and j = 1 , … , q . Levinson [4] generalized the results of Tyndall by replacing the constants B i j and K i j with nonnegative real-valued functions B i j ( t ) and K i j ( t , s ) defined on [ 0 , T ] and [ 0 , T ] × [ 0 , T ] , respectively, for i = 1 , … , p and j = 1 , … , q . In other words, the following problem was studied:
max j = 1 q 0 T a j ( t ) · z j ( t ) d t subject   to j = 1 q B i j ( t ) · z j ( t ) c i ( t ) + j = 1 q 0 t K i j ( t , s ) · z j ( s ) d s for   t [ 0 , T ]   and   i = 1 , , p z j L + 2 [ 0 , T ] for   all   j = 1 , , q .
This complicated problem was solved numerically by Wu [5,6], in which the functions a j , c i , B i j and K i j were assumed to be piecewise continuous.
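The discretization approach can be illustrated on a toy instance. The sketch below (all functions and parameter names are illustrative choices, not data from [5,6]) partitions [ 0 , T ] into subintervals and, for the decoupled case p = q = 1 with B ( t ) = 1 and K ≡ 0, solves the constraint at each node in closed form:

```python
# Minimal sketch of the discretization idea: partition [0, T] into n
# subintervals and solve the resulting finite-dimensional problem at the
# nodes.  For illustration we take the decoupled case p = q = 1 with
# B(t) = 1 and K == 0, so the constraint reduces to z(t) <= c(t), and the
# pointwise maximizer is z(t) = c(t) wherever a(t) > 0.

def discretized_objective(a, c, T, n):
    """Riemann-sum approximation of max int a(t) z(t) dt s.t. z(t) <= c(t)."""
    h = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h               # midpoint of the k-th subinterval
        z = c(t) if a(t) > 0 else 0.0   # pointwise optimal z(t)
        total += a(t) * z * h
    return total

# Example: a(t) = 1, c(t) = t on [0, 1]; the exact optimum is 1/2.
approx = discretized_objective(lambda t: 1.0, lambda t: t, T=1.0, n=1000)
print(round(approx, 4))
```

As the partition is refined, the Riemann sum converges to the exact objective value; the general coupled case requires solving a large-scale linear program instead, as discussed in Section 4.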
In the real world, the known data a j , c i , B i j and K i j appearing in the continuous-time linear programming problem may be imprecise or uncertain; that is, the known data may be subject to perturbation. Developing numerical methodology is an important issue in studying the different types of optimization problems. Therefore, Wu [7] developed a methodology to obtain the so-called robust solutions of uncertain continuous-time linear programming problems with time-dependent matrices, in which the uncertainties of the known data were assumed to fall into pre-determined compact intervals. More precisely, the following problem was studied by Wu [7]:
max j = 1 q 0 T a j ( t ) · z j ( t ) d t subject   to j = 1 q B i j ( t ) · z j ( t ) c i ( t ) + j = 1 q 0 t K i j ( t , s ) · z j ( s ) d s for   t [ 0 , T ]   and   i = 1 , , p ; z j L + 2 [ 0 , T ]   for   j = 1 , , q   and   t [ 0 , T ] ; a j ( t ) U a j ( t )   for   t [ 0 , T ]   and   j = 1 , , q ; c i ( t ) U c i ( t )   for   t [ 0 , T ]   and   i = 1 , , p ; B i j ( t ) U B i j ( t )   for   t [ 0 , T ] , i = 1 , , p   and   j = 1 , , q ; K i j ( t , s ) U K i j ( t , s )   for   ( t , s ) [ 0 , T ] × [ 0 , T ] , i = 1 , , p   and   j = 1 , , q ,
where the uncertain functions a j ( t ) , c i ( t ) , B i j ( t ) and K i j ( t , s ) were assumed to fall into compact intervals U a j ( t ) , U c i ( t ) , U B i j ( t ) and U K i j ( t , s ) , respectively. For example, the compact intervals U K i j ( t , s ) are taken to be
U K i j ( t , s ) = K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) , K i j ( 0 ) ( t , s ) + K ^ i j ( t , s ) ,
where K i j ( 0 ) ( t , s ) 0 are the known nominal functions of K i j ( t , s ) , and K ^ i j ( t , s ) 0 are the uncertainties satisfying K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) 0 .
In this paper, we propose an extended form of the robust counterpart by using a concept similar to that introduced by Bertsimas and Sim [8]. The basic idea is described below. Let I i ( B ) denote the set of indices such that B i j is uncertain for j ∈ I i ( B ) and certain for j ∉ I i ( B ) . Although the data B i j for j ∈ I i ( B ) are allowed to be uncertain, in a given problem some of the B i j for j ∈ I i ( B ) may still remain certain (i.e., the data B i j ( t ) remain unchanged for t ∈ [ 0 , T ] ). Given any fixed i ∈ { 1 , … , p } , let | I i ( B ) | denote the number of indices in the set I i ( B ) . In a real situation, we may only know that the number of B i j , j ∈ I i ( B ) , that are actually subject to uncertainty is γ i ( B ) ≤ | I i ( B ) | , without knowing exactly which indices correspond to the uncertain data B i j . In this case, we need to consider all subsets S i ( B ) of I i ( B ) with | S i ( B ) | = γ i ( B ) , where | S i ( B ) | denotes the number of elements in the set S i ( B ) . The integer γ i ( B ) can be regarded as the robustness with respect to the uncertain functions { B i j : j ∈ I i ( B ) } when i is fixed. The problem studied in Wu [7] implicitly assumes γ i ( B ) = | I i ( B ) | . In other words, the problem studied in this paper is indeed an extended form of the problem formulated in Wu [7]. This extended problem is considerably more complicated and harder to solve. The purpose of this paper is to develop a computational procedure to solve this new kind of optimization problem.
Many theoretical results on continuous-time linear programming problems have been obtained by Meidan and Perold [9], Papageorgiou [10], and Schechter [11]. A subclass of continuous-time linear programming problems, called the separated continuous-time linear programming problem, has been studied by Anderson et al. [12,13,14], Fleischer and Sethuraman [15] and Pullan [16,17,18,19,20]. This special type of problem is given below:
max 0 T a ( t ) x ( t ) d t subject   to y ( t ) + 0 t G x ( s ) d s = c ( t ) H x ( t ) + z ( t ) = b ( t ) x ( t ) , y ( t ) , z ( t ) 0   for   t [ 0 , T ] ,
where G and H are constant matrices; the dimensions of x , y and z are n 1 , n 2 and n 3 , respectively; the functions x , z , b and a are bounded and measurable on [ 0 , T ] ; and the functions c and y are absolutely continuous. This problem can be used to model job-shop scheduling problems; see Anderson et al. ([12], p. 758). On the other hand, a simplex-like algorithm has been proposed by Weiss [21] to solve this separated continuous-time linear programming problem.
The vectorial form of the linear type of continuous-time linear programming problem is written as follows:
max 0 T a ( t ) z ( t ) d t subject   to B ( t ) z ( t ) c ( t ) + 0 t K ( t , s ) z ( s ) d s   for   0 t T z ( t ) 0   for   t [ 0 , T ] .
Farr and Hanson [22,23], Grinold [24,25], Hanson and Mond [26], Reiland [27,28], Reiland and Hanson [29] and Singh [30] studied nonlinear types of continuous-time optimization problems. More precisely, the nonlinear problem is formulated as follows:
max 0 T ϕ ( z ( t ) , t ) d t subject   to f ( z ( t ) , t ) c ( t ) + 0 t K ( t , s ) g ( z ( s ) , s ) d s   for   0 t T z ( t ) 0   for   t [ 0 , T ] ,
where f ( z ( t ) , t ) ∈ R m for t ∈ [ 0 , T ] ; g ( z ( t ) , t ) ∈ R p for t ∈ [ 0 , T ] ; c ( t ) is an m-dimensional vector-valued function defined on [ 0 , T ] ; z ( t ) is an n-dimensional bounded and measurable vector-valued function defined on [ 0 , T ] ; and K ( t , s ) is an m × p time-dependent matrix whose entries are bounded and measurable on [ 0 , T ] × [ 0 , T ] . In particular, when we take
ϕ ( z ( t ) , t ) = a ( t ) z ( t ) , f ( z ( t ) , t ) = B ( t ) z ( t )   and   g ( z ( s ) , s ) = z ( s ) ,
we see that the nonlinear type covers the linear type.
Zalmai [31,32,33,34] investigated continuous-time fractional programming problems. Those articles presented theoretical results without suggesting practical numerical methods. On the other hand, many different numerical methods for solving the continuous-time linear fractional programming problem were developed by Wu [35] and Wen and Wu [36,37,38]. More precisely, this problem is formulated as follows:
maximize   f 0 + 0 T ( f ( t ) ) x ( t ) d t h 0 + 0 T ( h ( t ) ) x ( t ) d t subject   to B x ( t ) g ( t ) + 0 t K x ( s ) d s   for   t [ 0 , T ] x L ( [ 0 , T ] , R + q ) ,
where h 0 > 0 , f 0 0 , f C ( [ 0 , T ] , R q ) , h C ( [ 0 , T ] , R + q ) , g C ( [ 0 , T ] , R + p ) , and B and K are nonnegative constant matrices.
Optimization problems involving uncertain data are an attractive research topic. Stochastic optimization was first introduced by Dantzig [39], in which probability theory is invoked to model uncertain data whose exact probability distributions are not known for sure. The technique of robust optimization suggests another methodology for solving optimization problems with uncertain data. Ben-Tal and Nemirovski [40,41] and El Ghaoui [42,43] independently proposed concepts for studying robust optimization. For the main articles on this topic, one can also refer to the articles contributed by Averbakh and Zhao [44], Ben-Tal et al. [45], Bertsimas et al. [8,46,47], Chen et al. [48], Erdoǧan and Iyengar [49], and Zhang [50]. In this paper, we propose an extended form of a robust counterpart of the continuous-time linear programming problem, and we develop a practical computational procedure to solve this complicated problem.
In Section 2, we introduce an extended form of a robust counterpart of the continuous-time linear programming problem using a concept similar to that introduced by Bertsimas and Sim [8]. This extended robust counterpart is then converted into a traditional form of continuous-time linear programming problem. In Section 3, in order to solve the primal problem obtained in Section 2, we formulate a dual problem by introducing two bilinear forms, inspired by the concept proposed by Anderson and Nash [51]. Under this formulation, the weak duality theorem can be established. In Section 4, the discretization of the transformed continuous-time linear programming problem is proposed; this discretized problem is in fact a large-scale linear programming problem. In order to estimate the error bound, a dual of the discretized problem is also formulated. The optimal solutions obtained from the discretized problem are used to construct feasible solutions of the original continuous-time linear programming problem. In Section 5, an analytic formula for the error bound is derived to obtain the ϵ-optimal solutions. In Section 6, the weak convergence properties of the approximate solutions are studied, which are also used to prove the strong duality theorem. Finally, in Section 7, based on the previous results, we design a computational procedure.

2. Robust Continuous-Time Linear Programming Problems

We consider the following continuous-time linear programming problem:
( CLP ) max j = 1 q 0 T a j ( t ) · z j ( t ) d t subject   to j = 1 q B i j ( t ) · z j ( t ) c i ( t ) + j = 1 q 0 t K i j ( t , s ) · z j ( s ) d s for   all   t [ 0 , T ]   and   i = 1 , , p z j L + 2 [ 0 , T ] for   all   j = 1 , , q ,
where B i j and K i j are assumed to be nonnegative real-valued functions defined on [ 0 , T ] and [ 0 , T ] × [ 0 , T ] , respectively, for i = 1 , … , p and j = 1 , … , q . We also assume that some of the functions a j , c i , B i j and K i j are subject to pointwise uncertainty. This means that, for each fixed t ∈ [ 0 , T ] and each fixed ( t , s ) ∈ [ 0 , T ] × [ 0 , T ] , the uncertain data a j ( t ) , c i ( t ) , B i j ( t ) and K i j ( t , s ) fall into the corresponding compact intervals U a j ( t ) , U c i ( t ) , U B i j ( t ) and U K i j ( t , s ) . We also allow some of these functions to be certain. In order not to overcomplicate the considered problem, when any one of the functions a j , c i , B i j or K i j is assumed to be certain, we mean that each function value a j ( t ) , c i ( t ) , B i j ( t ) or K i j ( t , s ) is certain for all s , t ∈ [ 0 , T ] . However, when any one of the functions a j , c i , B i j or K i j is assumed to be uncertain, each function value a j ( t ) , c i ( t ) , B i j ( t ) or K i j ( t , s ) may still be certain for some s , t ∈ [ 0 , T ] .
Let I ( a ) and I ( c ) be the sets of indices such that the functions a j and c i are uncertain for j ∈ I ( a ) and i ∈ I ( c ) , respectively. For each fixed i = 1 , … , p , let I i ( B ) and I i ( K ) be the sets of indices such that B i j and K i j are uncertain for j ∈ I i ( B ) and j ∈ I i ( K ) , respectively. Clearly, I i ( B ) and I i ( K ) are subsets of { 1 , 2 , … , q } .
The robust counterpart of problem (CLP) is formulated as follows:
( RCLP ) max j = 1 q 0 T a j ( t ) · z j ( t ) d t subject   to j = 1 q B i j ( t ) · z j ( t ) c i ( t ) + j = 1 q 0 t K i j ( t , s ) · z j ( s ) d s for   t [ 0 , T ]   and   i = 1 , , p ; z j L + 2 [ 0 , T ]   for   j = 1 , , q   and   t [ 0 , T ] ; a j ( t ) U a j ( t )   for   t [ 0 , T ]   and   j = 1 , , q ; c i ( t ) U c i ( t )   for   t [ 0 , T ]   and   i = 1 , , p ; B i j ( t ) U B i j ( t )   for   t [ 0 , T ] , i = 1 , , p   and   j = 1 , , q ; K i j ( t , s ) U K i j ( t , s )   for   ( t , s ) [ 0 , T ] × [ 0 , T ] , i = 1 , , p   and   j = 1 , , q ,
where each piece of uncertain data is assumed to lie in the corresponding uncertainty sets. We assume that all the uncertain functions will fall into the compact intervals that are described below.
  • For B i j with j I i ( B ) and K i j with j I i ( K ) , we assume that the uncertain functions B i j and K i j will fall into the following compact intervals
    U B i j ( t ) = B i j ( 0 ) ( t ) B ^ i j ( t ) , B i j ( 0 ) ( t ) + B ^ i j ( t )
    and
    U K i j ( t , s ) = K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) , K i j ( 0 ) ( t , s ) + K ^ i j ( t , s ) ,
    respectively. The known nominal functions B i j ( 0 ) ( t ) and K i j ( 0 ) ( t , s ) of B i j ( t ) and K i j ( t , s ) , respectively, are assumed to be nonnegative. The uncertainties B ^ i j ( t ) and K ^ i j ( t , s ) are, of course, nonnegative satisfying
    B i j ( 0 ) ( t ) B ^ i j ( t ) 0   and   K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) 0 .
    For j I i ( B ) , we denote by B i j ( 0 ) ( t ) the certain functions with uncertainty B ^ i j ( t ) = 0 . We also denote by K i j ( 0 ) ( t , s ) the certain functions with uncertainty K ^ i j ( t , s ) = 0 for j I i ( K ) .
  • For a j with j I ( a ) and c i with i I ( c ) , we take the following compact intervals
    U a j ( t ) = a j ( 0 ) ( t ) a ^ j ( t ) , a j ( 0 ) ( t ) + a ^ j ( t )   and   U c i ( t ) = c i ( 0 ) ( t ) c ^ i ( t ) , c i ( 0 ) ( t ) + c ^ i ( t ) ,
    The known nominal functions a j ( 0 ) ( t ) and c i ( 0 ) ( t ) of a j ( t ) and c i ( t ) , respectively, are not necessarily nonnegative. However, the uncertainties a ^ j ( t ) and c ^ i ( t ) of a j ( t ) and c i ( t ) , respectively, should be nonnegative. For j ∉ I ( a ) , we denote by a j ( 0 ) ( t ) the certain function with uncertainty a ^ j ( t ) = 0 . We also denote by c i ( 0 ) ( t ) the certain function with uncertainty c ^ i ( t ) = 0 for i ∉ I ( c ) .
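As a quick numerical illustration of these uncertainty sets, one can represent each interval by its nominal function and nonnegative deviation and check the admissibility condition on a grid. The nominal and deviation functions below are hypothetical examples, not data from the paper:

```python
# Hypothetical nominal function and uncertainty for a single entry B_ij:
# nominal B0(t) = 2 + t and deviation Bhat(t) = 0.5 * t.  The interval
# U_{B_ij}(t) = [B0(t) - Bhat(t), B0(t) + Bhat(t)] satisfies the required
# condition as long as B0(t) - Bhat(t) >= 0 for every t in [0, T].

T = 1.0
B0   = lambda t: 2.0 + t        # known nominal function (nonnegative)
Bhat = lambda t: 0.5 * t        # nonnegative uncertainty

grid = [k * T / 100 for k in range(101)]
assert all(B0(t) - Bhat(t) >= 0 for t in grid)   # lower endpoint stays >= 0

# Endpoints of the compact interval U_{B_ij}(t) at t = 1:
lo, hi = B0(1.0) - Bhat(1.0), B0(1.0) + Bhat(1.0)
print(lo, hi)   # 2.5 3.5
```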
In Wu [7], we have derived the following robust counterpart of ( CLP )
( RCLP 1 ) max j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z j ( t ) d t subject   to j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z j ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) ( t , s ) · 0 t z j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z j ( s ) d s   for   t [ 0 , T ]   and   i I ( c ) ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z j ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) ( t , s ) · 0 t z j ( s ) d s { j : j I i ( K ) } 0 t K ^ i j ( t , s ) · z j ( s ) d s   for   t [ 0 , T ]   and   i I ( c ) ; z j ( t ) L + 2 [ 0 , T ]   for   j = 1 , , q ,
which is equivalent to the following problem
( RCLP 1 ) max φ subject   to φ j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t j I ( a ) 0 T a ^ j ( t ) · z j ( t ) d t ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z j ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q K i j ( 0 ) ( t , s ) · 0 t z j ( s ) d s { j : j I i ( K ) } K ^ i j ( t , s ) · 0 t z j ( s ) d s   for   t [ 0 , T ]   and   i I ( c ) ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + { j : j I i ( B ) } B ^ i j ( t ) · z j ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) ( t , s ) · 0 t z j ( s ) d s { j : j I i ( K ) } K ^ i j ( t , s ) · 0 t z j ( s ) d s   for   t [ 0 , T ]   and   i I ( c ) ; φ R   and   z j ( t ) L + 2 [ 0 , T ]   for   j = 1 , , q .
Although I i ( B ) is the set of indices such that B i j is uncertain for j ∈ I i ( B ) and certain for j ∉ I i ( B ) , in a given problem some of the B i j for j ∈ I i ( B ) may still remain certain. In a real situation, given any fixed i ∈ { 1 , … , p } , we may only know that the number of B i j , j ∈ I i ( B ) , that are actually subject to uncertainty is γ i ( B ) . In this case, we need to consider all subsets S i ( B ) of I i ( B ) with | S i ( B ) | = γ i ( B ) , where | S i ( B ) | denotes the number of elements in the set S i ( B ) . The integer γ i ( B ) can be regarded as the robustness with respect to the uncertain functions { B i j : j ∈ I i ( B ) } when i is fixed. A similar idea was suggested by Bertsimas and Sim [8]. Analogously, we consider the robustness γ i ( K ) , γ ( a ) and γ ( c ) for the uncertain functions { K i j : j ∈ I i ( K ) } , { a j : j ∈ I ( a ) } and { c i : i ∈ I ( c ) } , respectively. The notations S i ( K ) , S ( a ) and S ( c ) are similarly realized as subsets of I i ( K ) , I ( a ) and I ( c ) , respectively. In this paper, we assume that I i ( B ) , I i ( K ) , I ( a ) and I ( c ) are nonempty, so that the integers γ i ( B ) , γ i ( K ) , γ ( a ) and γ ( c ) are nonzero.
As observed above, the constraints in the robust counterpart (RCLP1) are taken to be the worst case; in the general situation, the robust counterpart (RCLP2) formulated below also needs to consider the worst-case constraints. In order to formulate this general type of robust counterpart, for i = 1 , … , p , we consider the following optimization problems:
Γ ¯ ( a ) z = max { S ( a ) : S ( a ) I ( a ) , | S ( a ) | = γ ( a ) } j S ( a ) 0 T a ^ j ( t ) · z j ( t ) d t
Γ ¯ i ( B ) z ( t ) = max { S i ( B ) : S i ( B ) I i ( B ) , | S i ( B ) | = γ i ( B ) } { j : j S i ( B ) } B ^ i j ( t ) · z j ( t )
Γ ¯ i ( K ) z ( t ) = max { S i ( K ) : S i ( K ) I i ( K ) , | S i ( K ) | = γ i ( K ) } { j : j S i ( K ) } 0 t K ^ i j ( t , s ) · z j ( s ) d s .
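On fixed numbers, the worst-case quantity (2) can be sketched as follows: since every term is nonnegative, enumerating all subsets of size γ i ( B ) gives the same value as summing the γ i ( B ) largest terms. The term values below are hypothetical stand-ins for B ^ i j ( t ) · z j ( t ) :

```python
from itertools import combinations

# Sketch of the budgeted worst case: at a fixed t, the quantity
# Gamma_i^B(z)(t) maximizes the sum of Bhat_ij(t) * z_j(t) over index
# subsets S of I_i(B) with |S| = gamma.  Because every term is
# nonnegative, the maximum is attained by the gamma largest terms.

def gamma_bar_bruteforce(terms, gamma):
    """Enumerate all subsets of size gamma (the definition of the worst case)."""
    return max(sum(t for i, t in enumerate(terms) if i in S)
               for S in map(set, combinations(range(len(terms)), gamma)))

def gamma_bar_sorted(terms, gamma):
    """Equivalent closed form: the sum of the gamma largest terms."""
    return sum(sorted(terms, reverse=True)[:gamma])

terms = [0.5, 2.0, 0.0, 1.25, 0.25]   # hypothetical Bhat_ij(t)*z_j(t) values
assert gamma_bar_bruteforce(terms, 2) == gamma_bar_sorted(terms, 2) == 3.25
```

The same reduction applies verbatim to (1) and (3), with the terms replaced by the corresponding integrals.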
Since the original problem (CLP) can be rewritten as
( CLP ) max φ subject   to φ j = 1 q 0 T a j ( t ) · z j ( t ) d t ; j = 1 q B i j ( t ) · z j ( t ) c i ( t ) + j = 1 q 0 t K i j ( t , s ) · z j ( s ) d s for   t [ 0 , T ]   and   i = 1 , , p ; φ R   and   z j L + 2 [ 0 , T ] for   all   j = 1 , , q ,
the extended form of the robust counterpart of (CLP) is formulated below:
( RCLP 2 ) max φ subject   to φ j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t Γ ¯ ( a ) z ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + Γ ¯ i ( B ) z ( t ) c i ( 0 ) ( t ) c ^ i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s Γ ¯ i ( K ) z ( t )   for   t [ 0 , T ]   and   i I ( c ) ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + Γ ¯ i ( B ) z ( t ) c i ( 0 ) ( t ) + j = 1 q K i j ( 0 ) ( t , s ) · 0 t z j ( s ) d s Γ ¯ i ( K ) z ( t )   for   t [ 0 , T ]   and   i I ( c ) ; φ R   and   z j ( t ) L + 2 [ 0 , T ]   for   j = 1 , , q .
It is obvious that the robust counterpart (RCLP1) is a special case of the extended form (RCLP2) obtained by taking γ ( a ) = | I ( a ) | , γ i ( B ) = | I i ( B ) | and γ i ( K ) = | I i ( K ) | . Since the quantities shown in (1)–(3) are the largest uncertainties, the constraints given in (RCLP2) represent the worst case. The main reason is that, if a feasible solution satisfies the constraints formulated for the worst case, then it satisfies the constraints formulated for any uncertainties.
The extended form of robust counterpart (RCLP2) is not easy to solve. In order to transform the robust counterpart (RCLP2) into a solvable form, we are going to apply the strong duality theorem of conventional linear programming problem. We first provide some useful propositions.
Lemma 1.
Given e i 0 and 0 λ i < 1 for i = 1 , , n , suppose that e 1 e 2 e n . If λ 1 + λ 2 + + λ n = κ is an integer, where κ < n , then
λ 1 e 1 + λ 2 e 2 + + λ n e n e 1 + e 2 + + e κ .
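A quick numeric check of Lemma 1 on a small, illustrative instance (n = 5, κ = 2):

```python
# Numeric check of Lemma 1: e sorted in descending order, each lambda_i
# in [0, 1), and the lambdas summing to the integer kappa = 2 < n = 5.
# All values are illustrative.

e   = [5.0, 4.0, 3.0, 2.0, 1.0]           # e_1 >= e_2 >= ... >= e_n >= 0
lam = [0.9, 0.8, 0.3, 0.0, 0.0]           # 0 <= lambda_i < 1, sum = 2
kappa = round(sum(lam))                   # kappa = 2

lhs = sum(l * v for l, v in zip(lam, e))  # 0.9*5 + 0.8*4 + 0.3*3 = 8.6
rhs = sum(e[:kappa])                      # e_1 + e_2 = 9.0
assert lhs <= rhs                         # the inequality of Lemma 1
```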
Proposition 1.
Given z = ( z 1 , , z q ) , we have the following properties.
(i)
The value Γ ¯ ( a ) z is equal to the optimal objective value of the following linear programming problem:
Γ ( a ) z max j I ( a ) y j ( 1 ) · 0 T a ^ j ( t ) · z j ( t ) d t subject   to j I ( a ) y j ( 1 ) γ ( a ) ; 0 y j ( 1 ) 1   for   j I ( a ) ,
where y j ( 1 ) are the decision variables for j ∈ I ( a ) . Moreover, there is an optimal solution y ¯ ( 1 * ) and a subset S ¯ ( a ) of I ( a ) with | S ¯ ( a ) | = γ ( a ) satisfying y ¯ j ( 1 * ) = 1 for j ∈ S ¯ ( a ) and y ¯ j ( 1 * ) = 0 for j ∉ S ¯ ( a ) .
(ii)
For i = 1 , , p , given any t * [ 0 , T ] , the value Γ ¯ i ( B ) z ( t * ) is equal to the optimal objective value of the following linear programming problem:
Γ i ( B ) z ( t * ) max j I i ( B ) y i j ( 2 ) · B ^ i j ( t * ) · z j ( t * ) subject   to j I i ( B ) y i j ( 2 ) γ i ( B ) ; 0 y i j ( 2 ) 1   for   j I i ( B ) ,
where y i j ( 2 ) are the decision variables for j ∈ I i ( B ) , and the optimal objective value depends on t * . Moreover, there is an optimal solution y ¯ i ( 2 * ) and a subset S ¯ i ( B ) of I i ( B ) with | S ¯ i ( B ) | = γ i ( B ) satisfying y ¯ i j ( 2 * ) = 1 for j ∈ S ¯ i ( B ) and y ¯ i j ( 2 * ) = 0 for j ∉ S ¯ i ( B ) .
(iii)
For i = 1 , , p , given any t * [ 0 , T ] , the value Γ ¯ i ( K ) z ( t * ) is equal to the optimal objective value of the following linear programming problem:
Γ i ( K ) z ( t * ) max j I i ( K ) y i j ( 3 ) · 0 t * K ^ i j ( t * , s ) · z j ( s ) d s subject   to j I i ( K ) y i j ( 3 ) γ i ( K ) ; 0 y i j ( 3 ) 1   for   j I i ( K ) ,
where y i j ( 3 ) are the decision variables for j ∈ I i ( K ) , and the optimal objective value depends on t * . Moreover, there is an optimal solution y ¯ i ( 3 * ) and a subset S ¯ i ( K ) of I i ( K ) with | S ¯ i ( K ) | = γ i ( K ) satisfying y ¯ i j ( 3 * ) = 1 for j ∈ S ¯ i ( K ) and y ¯ i j ( 3 * ) = 0 for j ∉ S ¯ i ( K ) .
Proof. 
We only prove part (i), since parts (ii) and (iii) can be obtained similarly. Suppose that y ¯ is an optimal solution of problem (4). Since a ^ j and z j are nonnegative, in order to maximize the objective function, we must have
j I ( a ) y ¯ j = γ ( a ) .
We are going to claim that there exists an alternative optimal solution y ¯ * satisfying y ¯ j * ∈ { 0 , 1 } for each j ∈ I ( a ) ; that is, there is a subset S ¯ ( a ) of I ( a ) with | S ¯ ( a ) | = γ ( a ) satisfying y ¯ j * = 1 for j ∈ S ¯ ( a ) and y ¯ j * = 0 for j ∉ S ¯ ( a ) . Let J r = { j 1 , … , j r } be a subset of I ( a ) satisfying y ¯ j = 1 for j ∈ J r and 0 ≤ y ¯ j < 1 for j ∉ J r , where r is a positive integer. From (7), we see that
j ∈ I ( a ) ∖ J r y ¯ j = γ ( a ) − r ≡ κ < | I ( a ) ∖ J r | .
We re-arrange the following finite set
0 T a ^ j ( t ) · z j ( t ) d t j I ( a ) J r
in ascending order as
0 T a ^ j 1 * ( t ) · z j 1 * ( t ) d t 0 T a ^ j 2 * ( t ) · z j 2 * ( t ) d t 0 T a ^ j s * ( t ) · z j s * ( t ) d t ,
where s = | I ( a ) ∖ J r | and κ < s . Now, we define a new feasible solution y ¯ * as follows:
y ¯ j * = 1   if   j ∈ J r ; 1   if   j ∈ { j 1 * , … , j κ * } ; 0   if   j ∈ { j κ + 1 * , … , j s * } .
Let S ¯ ( a ) = J r ∪ { j 1 * , … , j κ * } . Then, | S ¯ ( a ) | = γ ( a ) , y ¯ j * = 1 for j ∈ S ¯ ( a ) and y ¯ j * = 0 for j ∉ S ¯ ( a ) . Next, we are going to claim that y ¯ * is an optimal solution. Let
e i = 0 T a ^ j i * ( t ) · z j i * ( t ) d t   and   λ i = y ¯ j i *   for   i = 1 , , s .
Then, the optimal objective value with respect to the optimal solution y ¯ is given by
j I ( a ) y ¯ j · 0 T a ^ j ( t ) · z j ( t ) d t = j I ( a ) J r y ¯ j · 0 T a ^ j ( t ) · z j ( t ) d t + j J r 0 T a ^ j ( t ) · z j ( t ) d t = j I ( a ) J r λ i e i + j J r 0 T a ^ j ( t ) · z j ( t ) d t i = 1 κ 0 T a ^ j i * ( t ) · z j i * ( t ) d t + j J r 0 T a ^ j ( t ) · z j ( t ) d t ( using   Lemma 1 ) = j I ( a ) y ¯ j * · 0 T a ^ j ( t ) · z j ( t ) d t ,
which says that y ¯ * is an optimal solution. Therefore, we conclude that the optimal objective value of problem (4) can be obtained by taking γ ( a ) variables y j ( 1 ) with value 1, corresponding to the selection of a subset S ¯ ( a ) = J r ∪ { j 1 * , … , j κ * } ⊆ I ( a ) with | S ¯ ( a ) | = γ ( a ) and the corresponding objective value
j S ¯ ( a ) 0 T a ^ j ( t ) · z j ( t ) d t
given by (8). This shows that the optimal objective value of problem (4) is less than or equal to the value Γ ¯ ( a ) z .
On the other hand, if S ¯ ( a ) is an optimal solution of problem (1), then | S ¯ ( a ) | = γ ( a ) with optimal objective value
j S ¯ ( a ) 0 T a ^ j ( t ) · z j ( t ) d t ,
which is equivalent to
j I ( a ) y ¯ j ( 1 ) · 0 T a ^ j ( t ) · z j ( t ) d t
for some feasible solution y ¯ ( 1 ) of problem (4) satisfying y ¯ j ( 1 ) = 1 for j ∈ S ¯ ( a ) and y ¯ j ( 1 ) = 0 for j ∉ S ¯ ( a ) . This shows that the value Γ ¯ ( a ) z is less than or equal to the optimal objective value of problem (4), and the proof is complete. □
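The argument in part (i) can be sanity-checked numerically: over the feasible region of problem (4), random fractional feasible points never beat the 0–1 solution supported on the γ ( a ) largest coefficients. The sketch below uses hypothetical coefficient values in place of the integrals ∫ a ^ j ( t ) z j ( t ) d t :

```python
import random

# Sketch of Proposition 1(i): over the polytope
#   { 0 <= y_j <= 1,  sum_j y_j <= gamma },
# a linear objective with nonnegative coefficients is maximized by setting
# y_j = 1 on the gamma largest coefficients.  We sample random feasible
# fractional points and check that none exceeds that 0-1 solution.

coeffs = [3.0, 1.0, 4.0, 1.5, 0.5]   # hypothetical integral values
gamma = 2
vertex_value = sum(sorted(coeffs, reverse=True)[:gamma])   # 4.0 + 3.0

random.seed(0)
for _ in range(1000):
    y = [random.random() for _ in coeffs]
    total = sum(y)
    if total > gamma:                 # rescale onto the budget constraint
        y = [v * gamma / total for v in y]
    value = sum(c * v for c, v in zip(coeffs, y))
    assert value <= vertex_value + 1e-12

print(vertex_value)   # 7.0
```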
In order to rewrite the robust counterpart (RCLP2) to make it solvable by a numerical algorithm, we need to consider the dual problems of linear programming problems (4)–(6).
  • The dual of problem (4) is given by
    Λ ( a ) z min γ ( a ) · u ( 1 ) + j I ( a ) d j subject   to u ( 1 ) + d j 0 T a ^ j ( t ) · z j ( t ) d t   for   j I ( a ) ; u ( 1 ) 0   and   d j 0   for   j I ( a ) .
    The optimal objective value is denoted by Λ ¯ ( a ) z .
  • For i = 1 , , p , given any t * [ 0 , T ] , the dual of problem (5) is given by
    Λ i ( B ) z ( t * ) min γ i ( B ) · u i ( 2 ) ( t * ) + { j : j I i ( B ) } u i j ( 4 ) ( t * ) subject   to u i ( 2 ) ( t * ) + u i j ( 4 ) ( t * ) B ^ i j ( t * ) · z j ( t * )   for   j I i ( B ) ; u i ( 2 ) ( t * ) 0   and   u i j ( 4 ) ( t * ) 0   for   j I i ( B ) .
    The optimal objective value is denoted by ( Λ ¯ i ( B ) ( z ) ) ( t * ) .
  • For i = 1 , , p , given any t * [ 0 , T ] , the dual of problem (6) is given by
    Λ i ( K ) z ( t * ) min γ i ( K ) · u i ( 3 ) ( t * ) + { j : j I i ( K ) } u i j ( 5 ) ( t * ) subject   to u i ( 3 ) ( t * ) + u i j ( 5 ) ( t * ) 0 t * K ^ i j ( t * , s ) · z j ( s ) d s   for   j I i ( K ) ; u i ( 3 ) ( t * ) 0   and   u i j ( 5 ) ( t * ) 0   for   j I i ( K ) .
    The optimal objective value is denoted by ( Λ ¯ i ( K ) ( z ) ) ( t * ) .
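The primal-dual relationship used below can be illustrated on fixed numbers: for the budget linear program and its dual above, one optimal dual choice takes u equal to the γ-th largest coefficient, and the dual value then reproduces the primal worst case, i.e., the sum of the γ largest coefficients. The values e j below are hypothetical stand-ins for the terms B ^ i j ( t * ) · z j ( t * ) :

```python
# Sketch of the primal-dual pair on fixed numbers.  The dual
#   min gamma*u + sum_j d_j   s.t.  u + d_j >= e_j,  u >= 0, d_j >= 0
# admits the optimal choice u = (gamma-th largest e_j) and
# d_j = max(e_j - u, 0); its value then equals the primal worst case,
# namely the sum of the gamma largest e_j (strong duality).

def dual_value(e, gamma):
    u = sorted(e, reverse=True)[gamma - 1]    # gamma-th largest entry
    d = [max(v - u, 0.0) for v in e]
    return gamma * u + sum(d)

def primal_value(e, gamma):
    return sum(sorted(e, reverse=True)[:gamma])

e = [0.5, 2.0, 0.25, 1.25]   # hypothetical Bhat_ij(t*) * z_j(t*) values
assert dual_value(e, 2) == primal_value(e, 2) == 3.25
```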
For i = 1 , , p , we define the real-valued functions c i on [ 0 , T ] by
c i ( t ) = c i ( 0 ) ( t ) − c ^ i ( t )   if   i ∈ I ( c ) ; c i ( 0 ) ( t )   if   i ∉ I ( c ) .
We are going to obtain an equivalent form of the robust counterpart (RCLP2), which turns into a conventional continuous-time linear programming problem.
Proposition 2.
The robust counterpart ( RCLP 2 ) is equivalent to the following problem:
( RCLP 3 ) max φ subject   to φ j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t Λ ¯ ( a ) z j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + Λ ¯ i ( B ) z ( t ) + Λ ¯ i ( K ) z ( t ) c i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s   for   t [ 0 , T ]   and   i = 1 , , p φ R   and   z j ( t ) L + 2 [ 0 , T ]   for   j = 1 , , q ,
Proof. 
We consider the primal-dual pairs of problems (4) and (9). Since problem (4) is feasible and bounded, using Proposition 1 and the strong duality theorem for linear programming problem, the dual problem (9) is also feasible and bounded satisfying Γ ¯ ( a ) z = Λ ¯ ( a ) z . Similarly, we also have
Γ ¯ i ( B ) z ( t ) = Λ ¯ i ( B ) z ( t )   and   Γ ¯ i ( K ) z ( t ) = Λ ¯ i ( K ) z ( t )
for any t ∈ [ 0 , T ] . Therefore, we conclude that problems (RCLP2) and (RCLP3) are equivalent. This completes the proof. □
For an optimization problem (P), we write V ( P ) to denote the optimal objective value of problem (P).
Theorem 1.
The robust counterpart ( RCLP 3 ) is equivalent to the following continuous-time linear programming problem:
( RCLP 4 ) max φ subject   to φ j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t γ ( a ) · u ( 1 ) + j I ( a ) d j ; j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + γ i ( B ) · u i ( 2 ) ( t ) + γ i ( K ) · u i ( 3 ) ( t ) + { j : j I i ( B ) } u i j ( 4 ) ( t ) + { j : j I i ( K ) } u i j ( 5 ) ( t ) c i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s   for   t [ 0 , T ]   and   i = 1 , , p ; u ( 1 ) + d j 0 T a ^ j ( t ) · z j ( t ) d t   for   j I ( a ) ; u i ( 2 ) ( t ) + u i j ( 4 ) ( t ) B ^ i j ( t ) · z j ( t )   for   i = 1 , , p , j I i ( B )   and   t [ 0 , T ] ; u i ( 3 ) ( t ) + u i j ( 5 ) ( t ) 0 t K ^ i j ( t , s ) · z j ( s ) d s   for   i = 1 , , p , j I i ( K )   and   t [ 0 , T ] ; φ R   and   u ( 1 ) 0 ; d j 0   for   j I ( a ) ; z j L + 2 [ 0 , T ]   for   j = 1 , , q ; u i ( 2 ) , u i ( 3 ) L + 2 [ 0 , T ]   for   i = 1 , , p ; u i j ( 4 ) L + 2 [ 0 , T ]   for   i = 1 , , p   and   j I i ( B ) ; u i j ( 5 ) L + 2 [ 0 , T ]   for   i = 1 , , p   and   j I i ( K ) .
Proof. 
Let ( φ ¯ , z ¯ ) be an optimal solution of problem (RCLP3). Given this z ¯ , the optimal solutions of problems Λ ( a ) ( z ¯ ) , ( Λ i ( B ) ( z ¯ ) ) ( t ) and ( Λ i ( K ) ( z ¯ ) ) ( t ) are given below:
  • Let ( u ¯ ( 1 ) , d ¯ ) be an optimal solution of problem Λ ( a ) ( z ¯ ) . Then, we have
    u ¯ ( 1 ) + d ¯ j 0 T a ^ j ( t ) · z ¯ j ( t ) d t , u ¯ ( 1 ) 0   and   d ¯ j 0   for   j I ( a ) .
  • For i = 1 , , p , given any t * [ 0 , T ] , let ( u ¯ i ( 2 ) ( t * ) , u ¯ i ( 4 ) ( t * ) ) be an optimal solution of problem ( Λ i ( B ) ( z ¯ ) ) ( t * ) , where the components of u ¯ i ( 4 ) ( t * ) are u ¯ i j ( 4 ) ( t * ) for j I i ( B ) . Then, we have
    u ¯ i ( 2 ) ( t * ) + u ¯ i j ( 4 ) ( t * ) B ^ i j ( t * ) · z ¯ j ( t * ) , u ¯ i ( 2 ) ( t * ) 0   and   u ¯ i j ( 4 ) ( t * ) 0   for   j ∈ I i ( B ) .
    We also write
    u ¯ ( 2 ) ( t * ) = u ¯ 1 ( 2 ) ( t * ) u ¯ 2 ( 2 ) ( t * ) u ¯ p ( 2 ) ( t * )   and   u ¯ ( 4 ) ( t * ) = u ¯ 1 ( 4 ) ( t * ) u ¯ 2 ( 4 ) ( t * ) u ¯ p ( 4 ) ( t * ) .
  • For i = 1 , … , p , given any t * ∈ [ 0 , T ] , let ( u ¯ i ( 3 ) ( t * ) , u ¯ i ( 5 ) ( t * ) ) be an optimal solution of problem ( Λ i ( K ) ( z ¯ ) ) ( t * ) , where the components of u ¯ i ( 5 ) ( t * ) are u ¯ i j ( 5 ) ( t * ) for j ∈ I i ( K ) . Then, we have
    u ¯ i ( 3 ) ( t * ) + u ¯ i j ( 5 ) ( t * ) 0 t * K ^ i j ( t * , s ) · z ¯ j ( s ) d s , u ¯ i ( 3 ) ( t * ) 0   and   u ¯ i j ( 5 ) ( t * ) 0   for   j I i ( K )
    We also write
    u ¯ ( 3 ) ( t * ) = u ¯ 1 ( 3 ) ( t * ) u ¯ 2 ( 3 ) ( t * ) u ¯ p ( 3 ) ( t * )   and   u ¯ ( 5 ) ( t * ) = u ¯ 1 ( 5 ) ( t * ) u ¯ 2 ( 5 ) ( t * ) u ¯ p ( 5 ) ( t * ) .
Moreover, we have the optimal objective values as follows:
Λ ¯ a z ¯ = γ ( a ) · u ¯ ( 1 ) + j I ( a ) d ¯ j Λ ¯ i ( B ) z ¯ ( t * ) = γ i ( B ) · u ¯ i ( 2 ) ( t * ) + { j : j I i ( B ) } u ¯ i j ( 4 ) ( t * ) Λ ¯ i ( K ) z ¯ ( t * ) = γ i ( K ) · u ¯ i ( 3 ) ( t * ) + { j : j I i ( K ) } u ¯ i j ( 5 ) ( t * ) .
Since t * can be any value in [ 0 , T ] , it follows that ( φ ¯ , z ¯ , d ¯ , u ¯ ( 1 ) , u ¯ ( 2 ) , u ¯ ( 3 ) , u ¯ ( 4 ) , u ¯ ( 5 ) ) is a feasible solution of problem (RCLP4), which shows that V ( RCLP 3 ) V ( RCLP 4 ) .
Conversely, if ( φ ¯ , z ¯ , d ¯ , u ¯ ( 1 ) , u ¯ ( 2 ) , u ¯ ( 3 ) , u ¯ ( 4 ) , u ¯ ( 5 ) ) is an optimal solution of problem (RCLP4), then, given any fixed t * [ 0 , T ] , we see that ( u ¯ ( 1 ) , d ¯ ) , ( u ¯ i ( 2 ) ( t * ) , u ¯ i ( 4 ) ( t * ) ) and ( u ¯ i ( 3 ) ( t * ) , u ¯ i ( 5 ) ( t * ) ) are the feasible solutions of problems Λ ( a ) ( z ¯ ) , ( Λ i ( B ) ( z ¯ ) ) ( t * ) and ( Λ i ( K ) ( z ¯ ) ) ( t * ) , respectively, for i = 1 , , p . Therefore, we have
γ ( a ) u ¯ ( 1 ) + j I ( a ) d ¯ j Λ ¯ ( a ) z ¯ ,
which implies
j = 1 q 0 T a j ( 0 ) ( t ) · z ¯ j ( t ) d t γ ( a ) u ¯ ( 1 ) + j I ( a ) d ¯ j j = 1 q 0 T a j ( 0 ) ( t ) · z ¯ j ( t ) d t Λ ¯ ( a ) z ¯
Therefore, we obtain
φ ¯ j = 1 q 0 T a j ( 0 ) ( t ) · z ¯ j ( t ) d t Λ ¯ ( a ) z ¯ ,
which says that the first constraint of problem (RCLP3) is satisfied. Since ( u ¯ i ( 2 ) ( t * ) , u ¯ i ( 4 ) ( t * ) ) and ( u ¯ i ( 3 ) ( t * ) , u ¯ i ( 5 ) ( t * ) ) are the feasible solutions of problems ( Λ i ( B ) ( z ¯ ) ) ( t * ) and ( Λ i ( K ) ( z ¯ ) ) ( t * ) , respectively, for i = 1 , , p , we also have
γ i ( B ) u ¯ i ( 2 ) ( t * ) + { j : j I i ( B ) } u ¯ i j ( 4 ) ( t * ) Λ ¯ i ( B ) z ¯ ( t * )
and
γ i ( K ) u ¯ i ( 3 ) ( t * ) + { j : j I i ( K ) } u ¯ i j ( 5 ) ( t * ) Λ ¯ i ( K ) z ¯ ( t * ) .
Therefore, we obtain
j = 1 q B i j ( 0 ) ( t * ) · z ¯ j ( t * ) + Λ ¯ i ( B ) z ¯ ( t * ) + Λ ¯ i ( K ) z ¯ ( t * ) j = 1 q B i j ( 0 ) ( t * ) · z ¯ j ( t * ) + γ i ( B ) u ¯ i ( 2 ) ( t * ) + { j : j I i ( B ) } u ¯ i j ( 4 ) ( t * ) + γ i ( K ) u ¯ i ( 3 ) ( t * ) + { j : j I i ( K ) } u ¯ i j ( 5 ) ( t * ) ( using   ( 11 )   and   ( 12 ) ) c i ( t * ) + j = 1 q 0 t * K i j ( 0 ) ( t * , s ) · z ¯ j ( s ) d s ( by   the   feasibility   of   ( φ ¯ , z ¯ , d ¯ , u ¯ ( 1 ) , u ¯ ( 2 ) , u ¯ ( 3 ) , u ¯ ( 4 ) , u ¯ ( 5 ) ) for   problem   ( RCLP 4 ) ) .
Since t * can be any number in [ 0 , T ] , it follows that ( φ ¯ , z ¯ ) is a feasible solution of problem (RCLP3). In other words, we have V ( RCLP 4 ) V ( RCLP 3 ) . Therefore, we obtain V ( RCLP 4 ) = V ( RCLP 3 ) , which shows the equivalence of problems (RCLP4) and (RCLP3). This completes the proof. □
Problem (RCLP4) is equivalent to the following continuous-time linear programming problem:
( RCLP * ) max j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t j I ( a ) d j γ ( a ) · u ( 1 ) subject   to j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + γ i ( B ) · u i ( 2 ) ( t ) + γ i ( K ) · u i ( 3 ) ( t ) + { j : j I i ( B ) } u i j ( 4 ) ( t ) + { j : j I i ( K ) } u i j ( 5 ) ( t ) c i ( t ) + j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s   for   t [ 0 , T ]   and   i = 1 , , p ; u ( 1 ) + d j 0 T a ^ j ( t ) · z j ( t ) d t   for   j I ( a ) ; u i ( 2 ) ( t ) + u i j ( 4 ) ( t ) B ^ i j ( t ) · z j ( t )   for   i = 1 , , p , j I i ( B )   and   t [ 0 , T ] ; u i ( 3 ) ( t ) + u i j ( 5 ) ( t ) 0 t K ^ i j ( t , s ) · z j ( s ) d s   for   i = 1 , , p , j I i ( K )   and   t [ 0 , T ] ; u ( 1 ) 0   and   d j 0   for   j I ( a ) ; z j L + 2 [ 0 , T ]   for   j = 1 , , q ; u i ( 2 ) , u i ( 3 ) L + 2 [ 0 , T ]   for   i = 1 , , p ; u i j ( 4 ) L + 2 [ 0 , T ]   for   i = 1 , , p   and   j I i ( B ) ; u i j ( 5 ) L + 2 [ 0 , T ]   for   i = 1 , , p   and   j I i ( K ) .
When the real-valued functions c i are assumed to be nonnegative for all i = 1 , , p , it is clear that the zero vector-valued function is a feasible solution of problem ( RCLP * ) .

3. Formulation of the Dual Problem

For each i = 1 , , p , we define the vector-valued functions u i ( 4 ) and u i ( 5 ) that consist of u i j ( 4 ) for j I i ( B ) and u i j ( 5 ) for j I i ( K ) , respectively. We also write
u ( 4 ) = u 1 ( 4 ) , , u p ( 4 )   and   u ( 5 ) = u 1 ( 5 ) , , u p ( 5 ) .
Now, we define the following functions:
A i ( 1 ) z ( t ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) = j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + γ i ( B ) · u i ( 2 ) ( t ) + γ i ( K ) · u i ( 3 ) ( t ) + { j : j I i ( B ) } u i j ( 4 ) ( t ) + { j : j I i ( K ) } u i j ( 5 ) ( t ) j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s   for   i = 1 , , p ; A j ( 2 ) z ( t ) , d , u ( 1 ) = u ( 1 ) d j + 0 T a ^ j ( t ) · z j ( t ) d t   for   j I ( a ) ; A i j ( 3 ) z ( t ) , u ( 2 ) ( t ) , u ( 4 ) ( t ) = u i ( 2 ) ( t ) u i j ( 4 ) ( t ) + B ^ i j ( t ) · z j ( t )   for   i = 1 , , p   and   j I i ( B ) ; A i j ( 4 ) z ( t ) , u ( 3 ) ( t ) , u ( 5 ) ( t ) = u i ( 3 ) ( t ) u i j ( 5 ) ( t ) + 0 t K ^ i j ( t , s ) · z j ( s ) d s   for   i = 1 , , p   and   j I i ( K ) .
We write
α ( B ) = i = 1 p I i ( B )   and   α ( K ) = i = 1 p I i ( K )
and consider the following product spaces
L = L 2 ( [ 0 , T ] , R q ) × R | I ( a ) | × R × L 2 ( [ 0 , T ] , R p ) × L 2 ( [ 0 , T ] , R p ) × L 2 [ 0 , T ] , R α ( B ) × L 2 [ 0 , T ] , R α ( K )
and
L * = L 2 ( [ 0 , T ] , R p ) × R | I ( a ) | × L 2 [ 0 , T ] , R α ( B ) × L 2 [ 0 , T ] , R α ( K ) .
Then, we define the operator A : L L * by
A z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) = A ( 1 ) z ( t ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , A ( 2 ) z ( t ) , d , u ( 1 ) , A ( 3 ) z ( t ) , u ( 2 ) ( t ) , u ( 4 ) ( t ) , A ( 4 ) z ( t ) , u ( 3 ) ( t ) , u ( 5 ) ( t ) ,
where A ( 1 ) consists of A i ( 1 ) for i = 1 , , p , A ( 2 ) consists of A j ( 2 ) for j I ( a ) , A ( 3 ) consists of A i j ( 3 ) for i = 1 , , p and j I i ( B ) , and A ( 4 ) consists of A i j ( 4 ) for i = 1 , , p and j I i ( K ) . We also define a bilinear form · , · 1 on L × L by
z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , z ¯ ( t ) , d ¯ , u ¯ ( 1 ) , u ¯ ( 2 ) ( t ) , u ¯ ( 3 ) ( t ) , u ¯ ( 4 ) ( t ) , u ¯ ( 5 ) ( t ) 1 = j = 1 q 0 T z j ( t ) · z ¯ j ( t ) d t + u ( 1 ) · u ¯ ( 1 ) + j I ( a ) d j · d ¯ j + i = 1 p 0 T u i ( 2 ) ( t ) · u ¯ i ( 2 ) ( t ) d t + i = 1 p 0 T u i ( 3 ) ( t ) · u ¯ i ( 3 ) ( t ) d t + i = 1 p j I i ( B ) 0 T u i j ( 4 ) ( t ) · u ¯ i j ( 4 ) ( t ) d t + i = 1 p j I i ( K ) 0 T u i j ( 5 ) ( t ) · u ¯ i j ( 5 ) ( t ) d t .
We write c ( t ) = ( c 1 ( t ) , , c p ( t ) ) . We denote by 1 ( a ) a vector in R | I ( a ) | consisting of all 1 with indices in I ( a ) . Now, the problem ( RCLP * ) is rewritten as the following compact form:
( RCLP * ) max z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , a ( 0 ) ( t ) , 1 ( a ) , γ ( a ) , 0 , 0 , 0 , 0 1 subject   to A z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) c ( t ) , 0 , 0 , 0 for   t [ 0 , T ] u ( 1 ) 0 , d R + | I ( a ) | , z L 2 ( [ 0 , T ] , R + q ) , u ( 2 ) , u ( 3 ) L 2 ( [ 0 , T ] , R + p ) u ( 4 ) L 2 [ 0 , T ] , R + α ( B )   and   u ( 5 ) L 2 [ 0 , T ] , R + α ( K ) .
Let v ( 1 ) be a vector consisting of v j ( 1 ) for j I ( a ) . For each i = 1 , , p , we define the vector-valued functions v i ( 2 ) and v i ( 3 ) that consist of v i j ( 2 ) for j I i ( B ) and v i j ( 3 ) for j I i ( K ) , respectively. We also write
v ( 2 ) = v 1 ( 2 ) , , v p ( 2 )   and   v ( 3 ) = v 1 ( 3 ) , , v p ( 3 ) .
We define the following indicator functions on the finite set { 1 , 2 , , q } :
χ j ( a ) = 1   if   j I ( a ) 0   if   j I ( a ) , χ i j ( B ) = 1   if   j I i ( B ) 0   if   j I i ( B )   and   χ i j ( K ) = 1   if   j I i ( K ) 0   if   j I i ( K )
We also define the following functions:
A j ( * 1 ) w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w i ( t ) i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( s ) d s + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v i j ( 2 ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( s , t ) · v i j ( 3 ) ( s ) d s   for   j = 1 , , q ; A j ( * 2 ) ( v ( 1 ) ) = v j ( 1 )   for   j I ( a ) ; A ( * 3 ) v ( 1 ) = j I ( a ) v j ( 1 ) ; A i ( * 4 ) w ( t ) , v ( 2 ) ( t ) = γ i ( B ) · w i ( t ) j I i ( B ) v i j ( 2 ) ( t )   for   i = 1 , , p ; A i ( * 5 ) w ( t ) , v ( 3 ) ( t ) = γ i ( K ) · w i ( t ) j I i ( K ) v i j ( 3 ) ( t )   for   i = 1 , , p ; A i j ( * 6 ) w ( t ) , v ( 2 ) ( t ) = w i ( t ) v i j ( 2 ) ( t )   for   i = 1 , , p   and   j I i ( B ) ; A i j ( * 7 ) w ( t ) , v ( 3 ) ( t ) = w i ( t ) v i j ( 3 ) ( t )   for   i = 1 , , p   and   j I i ( K ) ,
which are used to define the operator A * : L * L by
A * w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) = A ( * 1 ) w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) , A ( * 2 ) v ( 1 ) , A ( * 3 ) v ( 1 ) , A ( * 4 ) w ( t ) , v ( 2 ) ( t ) , A ( * 5 ) w ( t ) , v ( 3 ) ( t ) , A ( * 6 ) w ( t ) , v ( 2 ) ( t ) , A ( * 7 ) w ( t ) , v ( 3 ) ( t )
We define another bilinear form · , · 2 on L * × L * by
w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) , w ¯ ( t ) , v ¯ ( 1 ) , v ¯ ( 2 ) ( t ) , v ¯ ( 3 ) ( t ) 2 = i = 1 p 0 T w i ( t ) · w ¯ i ( t ) d t + j I ( a ) v j ( 1 ) · v ¯ j ( 1 ) + i = 1 p j I i ( B ) 0 T v i j ( 2 ) ( t ) · v ¯ i j ( 2 ) ( t ) d t + i = 1 p j I i ( K ) 0 T v i j ( 3 ) ( t ) · v ¯ i j ( 3 ) ( t ) d t .
Now, we can propose the dual problem of ( RCLP * ) as follows:
( DRCLP * ) min c ( t ) , 0 , 0 , 0 , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 subject   to A * w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) a ( 0 ) ( t ) , 1 ( a ) , γ ( a ) , 0 , 0 , 0 , 0 for   t [ 0 , T ] v ( 1 ) R + | I ( a ) | , w L 2 ( [ 0 , T ] , R + p ) v ( 2 ) L 2 [ 0 , T ] , R + α ( B )   and   v ( 3 ) L 2 [ 0 , T ] , R + α ( K ) .
After some algebraic calculations, the dual problem ( DRCLP * ) can be rewritten as follows:
( DRCLP * ) min i = 1 p 0 T c i ( t ) · w i ( t ) d t = i = 1 p 0 T c i ( 0 ) ( t ) · w i ( t ) d t i I ( c ) 0 T c ^ i ( t ) · w i ( t ) d t subject   to i = 1 p B i j ( 0 ) ( t ) · w i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v i j ( 2 ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( s , t ) · v i j ( 3 ) ( s ) d s a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( s ) d s for   t [ 0 , T ]   and   j = 1 , , q ;
γ ( a ) j I ( a ) v j ( 1 ) ;
γ i ( B ) · w i ( t ) j I i ( B ) v i j ( 2 ) ( t )   for   t [ 0 , T ]   and   i = 1 , , p ;
γ i ( K ) · w i ( t ) j I i ( K ) v i j ( 3 ) ( t )   for   t [ 0 , T ]   and   i = 1 , , p ;
1 v j ( 1 )   for   j I ( a ) ;
w i ( t ) v i j ( 2 ) ( t ) for   t [ 0 , T ] , i = 1 , , p   and   j I i ( B ) ;
w i ( t ) v i j ( 3 ) ( t ) for   t [ 0 , T ] , i = 1 , , p   and   j I i ( K ) ; w i L 2 ( [ 0 , T ] , R + )   for   i = 1 , , p ; v j ( 1 ) 0   for   j I ( a ) ; v i j ( 2 ) L 2 ( [ 0 , T ] , R + )   for   i = 1 , , p   and   j I i ( B ) ; v i j ( 3 ) L 2 ( [ 0 , T ] , R + )   for   i = 1 , , p   and   j I i ( K ) .
Proposition 3.
A * is an adjoint operator of A in the sense of
A z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 = z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , A * w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 1 .
Proof. 
Applying Fubini's theorem to K i j ( 0 ) and K ^ i j , we have
A z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 = A ( 1 ) ( · ) , A ( 2 ) ( · ) , A ( 3 ) ( · ) , A ( 4 ) ( · ) , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 = i = 1 p 0 T w i ( t ) · j = 1 q B i j ( 0 ) ( t ) · z j ( t ) + γ i ( B ) · u i ( 2 ) ( t ) + γ i ( K ) · u i ( 3 ) ( t ) + { j : j I i ( B ) } u i j ( 4 ) ( t ) + { j : j I i ( K ) } u i j ( 5 ) ( t ) j = 1 q 0 t K i j ( 0 ) ( t , s ) · z j ( s ) d s d t + j I ( a ) v j ( 1 ) · u ( 1 ) d j + 0 T a ^ j ( t ) · z j ( t ) d t + i = 1 p j I i ( B ) 0 T v i j ( 2 ) ( t ) · u i ( 2 ) ( t ) u i j ( 4 ) ( t ) + B ^ i j ( t ) · z j ( t ) d t + i = 1 p j I i ( K ) 0 T v i j ( 3 ) ( t ) · u i ( 3 ) ( t ) u i j ( 5 ) ( t ) + 0 t K ^ i j ( t , s ) · z j ( s ) d s d t = j = 1 q 0 T z j ( t ) · i = 1 p B i j ( 0 ) ( t ) · w i ( t ) i = 1 p t T K i j ( 0 ) ( s , t ) · w i ( s ) d s + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v i j ( 2 ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( s , t ) · v i j ( 3 ) ( s ) d s d t u ( 1 ) · j I ( a ) v j ( 1 ) j I ( a ) d j · v j ( 1 ) + i = 1 p 0 T u i ( 2 ) ( t ) · γ i ( B ) · w i ( t ) j I i ( B ) v i j ( 2 ) ( t ) d t + i = 1 p 0 T u i ( 3 ) ( t ) · γ i ( K ) · w i ( t ) j I i ( K ) v i j ( 3 ) ( t ) d t + i = 1 p j I i ( B ) 0 T u i j ( 4 ) ( t ) · w i ( t ) v i j ( 2 ) ( t ) d t + i = 1 p j I i ( K ) 0 T u i j ( 5 ) ( t ) · w i ( t ) v i j ( 3 ) ( t ) d t = z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , A ( * 1 ) ( · ) , A ( * 2 ) ( · ) , , A ( * 7 ) ( · ) , 1 = z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , A * w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 1 .
This completes the proof. □
The feasibility of the primal and dual pair of problems ( RCLP * ) and ( DRCLP * ) will be established below in Propositions 6 and 7, respectively.
Theorem 2
(Weak Duality Theorem). Consider the primal and dual pair of problems ( RCLP * ) and ( DRCLP * ) . Given any feasible solution ( z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) ) of primal problem ( RCLP * ) and any feasible solution ( w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) ) of dual problem ( DRCLP * ) , we have the following inequality:
j I ( a ) d j γ ( a ) · u ( 1 ) + j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t i = 1 p 0 T c i ( t ) · w i ( t ) d t .
In other words, we have
V ( RCLP * ) V ( DRCLP * ) .
Proof. 
We have
j I ( a ) d j γ ( a ) · u ( 1 ) + j = 1 q 0 T a j ( 0 ) ( t ) · z j ( t ) d t = z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , a ( 0 ) ( t ) , 1 ( a ) , γ ( a ) , 0 , 0 , 0 , 0 1 z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , A * w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 1 ( by   the   constraint   of   dual   problem   ( DRCLP * ) ) = A z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 ( since A * is   the   adjoint   of   A   by   Proposition 3 ) c ( t ) , 0 , 0 , 0 , w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) 2 ( by   the   constraint   of   primal   problem   ( RCLP * ) ) = 0 T c ( t ) w ( t ) d t = i = 1 p 0 T c i ( t ) · w i ( t ) d t .
This completes the proof. □

4. Discretization

In order to design the computational procedure, we formulate a discretized problem, which is a conventional (finite-dimensional) linear programming problem. To achieve this purpose, we need some mild assumptions.
(A1)
The real-valued functions B i j ( 0 ) , K i j ( 0 ) , a ^ j , c ^ i , B ^ i j and K ^ i j are assumed to be nonnegative satisfying B i j ( 0 ) B ^ i j 0 and K i j ( 0 ) K ^ i j 0 .
(A2)
The real-valued functions a j ( 0 ) , c i ( 0 ) , B i j ( 0 ) , K i j ( 0 ) , a ^ j , c ^ i , B ^ i j and K ^ i j are assumed to be piecewise continuous on [ 0 , T ] .
(A3)
For each t [ 0 , T ] and j = 1 , , q , we assume that the following positivities are satisfied:
i = 1 p B i j ( 0 ) ( t ) > 0   and   i = 1 p B ^ i j ( t ) > 0 ;
(A4)
We assume that the following positivities are satisfied:
min i = 1 , , p min j = 1 , , q inf t [ 0 , T ] B i j ( 0 ) ( t ) : B i j ( 0 ) ( t ) > 0 σ > 0
and
min i = 1 , , p min j = 1 , , q inf t [ 0 , T ] B ^ i j ( t ) : B ^ i j ( t ) > 0 σ ^ > 0 .
In other words, given any t [ 0 , T ] , if B i j ( 0 ) ( t ) 0 , then B i j ( 0 ) ( t ) σ , and if B ^ i j ( t ) 0 , then B ^ i j ( t ) σ ^ for all i = 1 , , p and j = 1 , , q .
Since the involved functions are assumed to be piecewise continuous, the discontinuities must be excluded from the discussion. We denote by A j , S i , B i j and K i j the sets of discontinuities of the corresponding real-valued functions a j , c i , B i j and K i j , respectively. It is clear that A j , S i and B i j are finite subsets of [ 0 , T ] , and that K i j is a finite subset of [ 0 , T ] × [ 0 , T ] . For convenience, we also write
K i j = K i j ( 1 ) × K i j ( 2 ) ,
where K i j ( 1 ) and K i j ( 2 ) are finite subsets of [ 0 , T ] . In order to formulate the discretized problem, we need to divide the time interval [ 0 , T ] into many subintervals by determining a finite subset of points in [ 0 , T ] . In order to make the involved functions continuous on the open subintervals, we take the discontinuities as partition points of [ 0 , T ] . Therefore, we consider the following set
D = j = 1 q A j i = 1 p S i i = 1 p j = 1 q B i j i = 1 p j = 1 q K i j ( 1 ) i = 1 p j = 1 q K i j ( 2 ) 0 , T .
Then, D is a finite subset of [ 0 , T ] written by
D = d 0 , d 1 , d 2 , , d r ,
where d 0 = 0 and d r = T . We remark that d 0 and d r may be points of continuity of the functions a j , c i , B i j and K i j for i = 1 , , p and j = 1 , , q .
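As an illustration, the set D can be assembled by taking the union of the discontinuity sets together with the endpoints 0 and T. The following is a minimal sketch, assuming the discontinuity sets are available as Python sets of floats; the sample data are hypothetical and not taken from the paper:

```python
# Assemble the partition seed set D from the discontinuity sets.
# A, S, B, K1, K2 play the roles of the sets A_j, S_i, B_ij and the
# projections K_ij^(1), K_ij^(2); the numbers below are illustrative.
T = 2.0
A = [{0.5}, {1.2}]            # discontinuities of a_j, j = 1, ..., q
S = [{0.5, 1.0}]              # discontinuities of c_i, i = 1, ..., p
B = [[{1.2}, set()]]          # discontinuities of B_ij
K1 = [[{0.7}, set()]]         # first projections K_ij^(1)
K2 = [[set(), {1.5}]]         # second projections K_ij^(2)

D = {0.0, T}                  # always include the endpoints 0 and T
for s in A + S:
    D |= s
for row in B + K1 + K2:
    for s in row:
        D |= s
D = sorted(D)                 # d_0 = 0 < d_1 < ... < d_r = T
print(D)
```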
Let P n be a partition of [ 0 , T ] satisfying D P n , which means that each compact interval [ d v , d v + 1 ] for v = 0 , 1 , , r 1 is also divided into many compact subintervals. Now, we write
P n = e 0 ( n ) , e 1 ( n ) , , e n ( n ) ,
where e 0 ( n ) = 0 and e n ( n ) = T . In this case, we have n compact subintervals that are denoted by
E ¯ l ( n ) = e l 1 ( n ) , e l ( n )   for   l = 1 , , n .
For further discussion, we also write
E l ( n ) = e l 1 ( n ) , e l ( n )   and   F l ( n ) = e l 1 ( n ) , e l ( n ) .
We denote by d l ( n ) the length of compact interval E ¯ l ( n ) . We also define the following quantity:
P n = max l = 1 , , n d l ( n )
and assume that
P n 0   as   n .
We also assume that there exist n * , n * N satisfying
n * · r n n * · r   and   P n T n * .
It is clear that n * implies n . In this paper, when we say that n , it implicitly means n * .
Let l v denote the length of compact interval [ d v , d v + 1 ] for v = 0 , 1 , , r 1 . We consider two different types of partitions of [ 0 , T ] .
Example 1.
We assume that each compact interval [ d v , d v + 1 ] is equally divided into n * subintervals for v = 0 , 1 , , r 1 . In this case, the total number of subintervals is n = n * · r satisfying n * = n * . It is clear that
P n = 1 n * · max v = 0 , 1 , , r 1 l v T n * ,   and   n   if   and   only   if   n * .
Example 2.
Let
l * = max v = 0 , 1 , , r 1 l v .
We assume that each compact subinterval [ d v , d v + 1 ] is equally divided into n v subintervals for v = 0 , 1 , , r 1 . Then, the total number of subintervals is given by
n = v = 0 r 1 n v .
Let
n * = max v = 0 , 1 , , r 1 n v   and   n * = min v = 0 , 1 , , r 1 n v .
Assume that the partition satisfies
n * n * l * T .
Then, we obtain
n * · r n n * · r   and   P n = max v = 0 , 1 , , r 1 l v n v l * n * T n * .
It is clear that n * implies n .
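The partitions of Examples 1 and 2 can both be obtained by splitting each segment [ d v , d v + 1 ] into a prescribed number of equal subintervals; Example 1 is the special case in which every segment uses the same count. A minimal sketch with illustrative data (the function name and the sample numbers are assumptions for illustration):

```python
def refine(D, counts):
    """Split each segment [D[v], D[v+1]] into counts[v] equal parts.
    Returns the refined partition P_n and the mesh size ||P_n||."""
    P = [D[0]]
    mesh = 0.0
    for v in range(len(D) - 1):
        step = (D[v + 1] - D[v]) / counts[v]
        mesh = max(mesh, step)          # ||P_n|| = max subinterval length
        for k in range(1, counts[v] + 1):
            P.append(D[v] + k * step)
    return P, mesh

D = [0.0, 0.5, 1.5, 2.0]                # r = 3 segments
P, mesh = refine(D, [2, 4, 2])          # n = 2 + 4 + 2 = 8 subintervals
print(len(P) - 1, mesh)
```

Taking `counts = [n_star] * (len(D) - 1)` recovers the uniform refinement of Example 1.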
Now, we can construct a partition P n according to the above setting. In this case, we see that the real-valued functions a j , c i and B i j are continuous on the open interval E l ( n ) . We also see that the real-valued function K i j is continuous on the open rectangle E l ( n ) × E k ( n ) for l , k = 1 , , n . For i = 1 , , p , from (10), we define
c l i ( n ) = inf t E ¯ l ( n ) c i ( t ) = inf t E ¯ l ( n ) c i ( 0 ) ( t ) c ^ i ( t ) if   i I ( c ) c i ( 0 ) ( t ) if   i I ( c ) .
For j = 1 , , q , we also define
a ^ l j ( n ) = sup t E ¯ l ( n ) a ^ j ( t )   for   j I ( a )   and   a l j ( n ) = inf t E ¯ l ( n ) a j ( 0 ) ( t ) .
It is clear to see that
a j ( 0 ) ( t ) a l j ( n ) , a ^ j ( t ) a ^ l j ( n )   and   c i ( t ) c l i ( n )
for i = 1 , , p , j = 1 , , q and t E ¯ l ( n ) .
Considering the matrices B ( 0 ) and K ( 0 ) , the ( i , j ) -th entries of matrices B l ( n ) and K l k ( n ) , for l , k = 1 , , n , are denoted and defined by
B l i j ( n ) = sup t E ¯ l ( n ) B i j ( 0 ) ( t )   and   K l k i j ( n ) = inf ( t , s ) E ¯ l ( n ) × E ¯ k ( n ) K i j ( 0 ) ( t , s ) .
From (22), it follows that, if B l i j ( n ) 0 , then B l i j ( n ) σ > 0 for all i = 1 , , p , j = 1 , , q and l = 1 , , n . It is clear to see that
B ( 0 ) ( t ) B l ( n )   and   K ( 0 ) ( t , s ) K l k ( n )
for t E ¯ l ( n ) and ( t , s ) E ¯ l ( n ) × E ¯ k ( n ) , respectively, for l , k = 1 , , n .
Remark 1.
From (21), we see that B l i j ( n ) 0 implies B l i j ( n ) σ > 0 for all i = 1 , , p , j = 1 , , q and l = 1 , , n . Given any fixed t E ¯ l ( n ) , from (20), for any j = 1 , , q , there exists i j { 1 , , p } satisfying B i j j ( t ) > 0 , which also says that B l i j j ( n ) 0 , i.e., B l i j j ( n ) σ > 0 . Therefore, for each j and l, there exists i l j { 1 , 2 , , p } satisfying B l i l j j ( n ) σ > 0 .
Considering the matrices B ^ and K ^ , the ( i , j ) -th entries of matrices B ^ l ( n ) and K ^ l k ( n ) , for l , k = 1 , , n , are denoted and defined by
B ^ l i j ( n ) = sup t E ¯ l ( n ) B ^ i j ( t )   and   K ^ l k i j ( n ) = sup ( t , s ) E ¯ l ( n ) × E ¯ k ( n ) K ^ i j ( t , s ) .
It is clear to see that
B ^ ( t ) B ^ l ( n )   and   K ^ ( t , s ) K ^ l k ( n )
for t E ¯ l ( n ) and ( t , s ) E ¯ l ( n ) × E ¯ k ( n ) , respectively, for l , k = 1 , , n . We also see that K l k i j ( n ) K ^ l k i j ( n ) 0 implies K i j ( 0 ) ( t , s ) K ^ i j ( t , s ) 0 for ( t , s ) E ¯ l ( n ) × E ¯ k ( n ) .
Now, we formulate the following linear programming problem:
( P n ) max l = 1 n j = 1 q d l ( n ) · a l j ( n ) · z l j j I ( a ) d j γ ( a ) · u ( 1 ) subject   to j = 1 q B 1 i j ( n ) · z 1 j + γ i ( B ) · u 1 i ( 2 ) + γ i ( K ) · u 1 i ( 3 ) + { j : j I i ( B ) } u 1 i j ( 4 ) + { j : j I i ( K ) } u 1 i j ( 5 )
c 1 i ( n )   for   i = 1 , , p ; j = 1 q B l i j ( n ) · z l j + γ i ( B ) · u l i ( 2 ) + γ i ( K ) · u l i ( 3 ) + { j : j I i ( B ) } u l i j ( 4 ) + { j : j I i ( K ) } u l i j ( 5 ) c l i ( n ) + j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z k j   for   i = 1 , , p   and   l = 2 , , n ;
u ( 1 ) + d j l = 1 n d l ( n ) · a ^ l j ( n ) · z l j   for   j I ( a ) ;
u l i ( 2 ) + u l i j ( 4 ) B ^ l i j ( n ) · z l j   for   i = 1 , , p , j I i ( B )   and   l = 1 , , n ;
u 1 i ( 3 ) + u 1 i j ( 5 ) 0   for   i = 1 , , p   and   j I i ( K ) ;
u l i ( 3 ) + u l i j ( 5 ) k = 1 l 1 d k ( n ) · K ^ l k i j ( n ) · z k j ;   for   i = 1 , , p , j I i ( K )   and   l = 2 , , n ; u ( 1 ) 0   and   d j 0   for   j I ( a ) ; z l j , u l i ( 2 ) , u l i ( 3 ) 0   for   l = 1 , , n ; u l i j ( 4 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; u l i j ( 5 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) .
We note that constraint (35) is redundant; however, we present it for comparison with constraint (36). We also adopt the following notations:
  • The vector 1 p R p has all entries 1.
  • The vector 1 ( a ) R | I ( a ) | has all entries 1 with indices j I ( a ) .
  • For each i = 1 , , p , the vector 1 i ( B ) R | I i ( B ) | has all entries 1 with indices j I i ( B ) .
  • For each i = 1 , , p , the vector 1 i ( K ) R | I i ( K ) | has all entries 1 with indices j I i ( K ) .
  • Given a vector a R q , we denote by diag ( a ) the q × q diagonal matrix with a j for j = 1 , , q appearing on the diagonal.
Now, the problem ( P n ) is rewritten as the following standard form:
( P n ) max b x subject   to M x b ¯   and   x 0 ,
where the decision vector x is defined by
x = z 1 , , z n , d , u ( 1 ) , u 1 ( 2 ) , , u n ( 2 ) , u 1 ( 3 ) , , u n ( 3 ) , u 11 ( 4 ) , , u 1 p ( 4 ) , , u n 1 ( 4 ) , , u n p ( 4 ) , u 11 ( 5 ) , , u 1 p ( 5 ) , , u n 1 ( 5 ) , , u n p ( 5 )
and
u l i ( 4 ) = u l i j ( 4 ) j I i ( B )   and   u l i ( 5 ) = u l i j ( 5 ) j I i ( K )
for l = 1 , , n and i = 1 , , p . The data b and b ¯ are defined by
b = d 1 ( n ) a 1 ( n ) , d 2 ( n ) a 2 ( n ) , , d n ( n ) a n ( n ) , 1 ( a ) , γ ( a ) , 0 , 0 , 0 , 0
and
b ¯ = c 1 ( n ) , c 2 ( n ) , , c n ( n ) , 0 , 0 , 0 ,
where
c l ( n ) = c l 1 ( n ) , c l 2 ( n ) , , c l p ( n ) .
To obtain the matrix M, we need to carefully determine its submatrices that are given below
M = M 1 ( 1 ) M 2 ( 1 ) M 3 ( 1 ) M 4 ( 1 ) M 5 ( 1 ) M 6 ( 1 ) M 7 ( 1 ) M 1 ( 2 ) M 2 ( 2 ) M 3 ( 2 ) M 4 ( 2 ) M 5 ( 2 ) M 6 ( 2 ) M 7 ( 2 ) M 1 ( 3 ) M 2 ( 3 ) M 3 ( 3 ) M 4 ( 3 ) M 5 ( 3 ) M 6 ( 3 ) M 7 ( 3 ) M 1 ( 4 ) M 2 ( 4 ) M 3 ( 4 ) M 4 ( 4 ) M 5 ( 4 ) M 6 ( 4 ) M 7 ( 4 ) .
The details for these submatrices are described below.
  • For the first row, we have
    M 1 ( 1 ) = B 1 ( n ) 0 0 0 d 1 ( n ) K 21 ( n ) B 2 ( n ) 0 0 d 1 ( n ) K 31 ( n ) d 2 ( n ) K 32 ( n ) B 3 ( n ) 0 d 1 ( n ) K n 1 ( n ) d 2 ( n ) K n 2 ( n ) d 3 ( n ) K n 3 ( n ) B n ( n )
    M 2 ( 1 ) = M 3 ( 1 ) = 0
    M 4 ( 1 ) = diag γ B 0 0 0 diag γ B 0 0 0 diag γ B
    M 5 ( 1 ) = diag γ K 0 0 0 diag γ K 0 0 0 diag γ K
    M 6 ( 1 ) = M ¯ 6 ( 1 ) 0 0 0 M ¯ 6 ( 1 ) 0 0 0 M ¯ 6 ( 1 )   with   M ¯ 6 ( 1 ) = 1 1 ( B ) 0 0 0 1 2 ( B ) 0 0 0 1 p ( B )
    M 7 ( 1 ) = M ¯ 7 ( 1 ) 0 0 0 M ¯ 7 ( 1 ) 0 0 0 M ¯ 7 ( 1 )   with   M ¯ 7 ( 1 ) = 1 1 ( K ) 0 0 0 1 2 ( K ) 0 0 0 1 p ( K )
    where the vectors γ B and γ K consist of γ i ( B ) and γ i ( K ) , respectively, for i = 1 , , p .
  • For the second row, we first define a ^ l ( n ) R | I ( a ) | with the entries a ^ l j ( n ) for j I ( a ) . Now, we define
    M 1 ( 2 ) = d 1 ( n ) diag a ^ 1 ( n ) d 2 ( n ) diag a ^ 2 ( n ) d n ( n ) diag a ^ n ( n )
    M 2 ( 2 ) = diag 1 ( a ) , M 3 ( 2 ) = 1 ( a )   and   M 4 ( 2 ) = M 5 ( 2 ) = M 6 ( 2 ) = M 7 ( 2 ) = 0 .
  • For the third row, we first define b ^ l i ( n ) R | I i ( B ) | with the entries B ^ l i j ( n ) for j I i ( B ) . Now, we define
    M ¯ l ( 3 ) = diag b ^ l 1 ( n ) diag b ^ l 2 ( n ) diag b ^ l p ( n ) .
    Then, the submatrix M 1 ( 3 ) is defined by
    M 1 ( 3 ) = M ¯ 1 ( 3 ) 0 0 0 0 M ¯ 2 ( 3 ) 0 0 0 0 M ¯ 3 ( 3 ) 0 0 0 0 M ¯ n ( 3 ) .
    We also define the following submatrices:
    M 2 ( 3 ) = M 3 ( 3 ) = M 5 ( 3 ) = M 7 ( 3 ) = 0
    M 4 ( 3 ) = M ¯ 4 ( 3 ) 0 0 0 M ¯ 4 ( 3 ) 0 0 0 M ¯ 4 ( 3 )   with   M ¯ 4 ( 3 ) = 1 1 ( B ) 0 0 0 1 2 ( B ) 0 0 0 1 p ( B )
    M 6 ( 3 ) = M ¯ 6 ( 3 ) 0 0 0 M ¯ 6 ( 3 ) 0 0 0 M ¯ 6 ( 3 )   with   M ¯ 6 ( 3 ) = diag 1 1 ( B ) 0 0 0 diag 1 2 ( B ) 0 0 0 diag 1 p ( B )
  • For the fourth row, we first define k ^ l k i ( n ) R | I i ( K ) | with the entries K ^ l k i j ( n ) for j I i ( K ) . Now, we define
    M ¯ l k ( 4 ) = d k ( n ) diag k ^ l k 1 ( n ) d k ( n ) diag k ^ l k 2 ( n ) d k ( n ) diag k ^ l k p ( n ) .
    Then, the submatrix M 1 ( 4 ) is defined by
    M 1 ( 4 ) = 0 0 0 0 M ¯ 21 ( 4 ) 0 0 0 M ¯ 31 ( 4 ) M ¯ 32 ( 4 ) 0 0 M ¯ n 1 ( 4 ) M ¯ n 2 ( 4 ) M ¯ n , n 1 ( 4 ) 0 .
    We also define the following submatrices:
    M 2 ( 4 ) = M 3 ( 4 ) = M 4 ( 4 ) = M 6 ( 4 ) = 0
    M 5 ( 4 ) = M ¯ 5 ( 4 ) 0 0 0 M ¯ 5 ( 4 ) 0 0 0 M ¯ 5 ( 4 )   with   M ¯ 5 ( 4 ) = 1 1 ( K ) 0 0 0 1 2 ( K ) 0 0 0 1 p ( K )
    M 7 ( 4 ) = M ¯ 7 ( 4 ) 0 0 0 M ¯ 7 ( 4 ) 0 0 0 M ¯ 7 ( 4 )   with   M ¯ 7 ( 4 ) = diag 1 1 ( K ) 0 0 0 diag 1 2 ( K ) 0 0 0 diag 1 p ( K )
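Once the matrix M and the vectors b and b ¯ are assembled, problem ( P n ) is an ordinary finite-dimensional LP of the standard form max b x subject to M x ≤ b ¯ and x ≥ 0, so any LP solver applies. A minimal sketch on a 2-variable toy instance (the data are illustrative, not the submatrices above) using scipy.optimize.linprog, which minimizes, so the objective is negated:

```python
import numpy as np
from scipy.optimize import linprog

b = np.array([3.0, 2.0])                 # objective coefficients
M = np.array([[1.0, 1.0],
              [2.0, 0.5]])               # constraint matrix
bbar = np.array([4.0, 5.0])              # right-hand side

# linprog minimizes, so pass -b; x >= 0 via the default-style bounds.
res = linprog(-b, A_ub=M, b_ub=bbar, bounds=[(0, None)] * 2)
print(res.x, -res.fun)                   # optimal x and the value V(P_n)
```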
Now, the dual problem of ( P n ) is given by the following standard form:
( D ^ n ) min b ¯ y subject   to M y b   and   y 0 ,
where the decision vector y is defined by
y = w 1 , , w n , v ( 1 ) , v 11 ( 2 ) , , v 1 p ( 2 ) , , v n 1 ( 2 ) , , v n p ( 2 ) , v 11 ( 3 ) , , v 1 p ( 3 ) , , v n 1 ( 3 ) , , v n p ( 3 ) ,
and
v l i ( 2 ) = v l i j ( 2 ) j I i ( B )   and   v l i ( 3 ) = v l i j ( 3 ) j I i ( K )
for l = 1 , , n and i = 1 , , p . After some algebraic calculations, it can be written as the following form:
( D ^ n ) min l = 1 n i = 1 p c l i ( n ) · w l i subject   to i = 1 p B l i j ( n ) · w l i + χ j ( a ) · d l ( n ) · a ^ l j ( n ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · v l i j ( 2 ) + k = l + 1 n i = 1 p χ i j ( K ) · d l ( n ) · K ^ k l i j ( n ) · v k i j ( 3 ) d l ( n ) · a l j ( n ) + k = l + 1 n i = 1 p d l ( n ) · K k l i j ( n ) · w k i   for   l = 1 , , n 1   and   j = 1 , , q ; i = 1 p B n i j ( n ) · w n i + χ j ( a ) · d n ( n ) · a ^ n j ( n ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · v n i j ( 2 ) d n ( n ) · a n j ( n )   for   j = 1 , , q ; v j ( 1 ) 1   for   j I ( a ) ; γ ( a ) j I ( a ) v j ( 1 ) ; γ i ( B ) · w l i j I i ( B ) v l i j ( 2 )   for   l = 1 , , n   and   i = 1 , , p ; γ i ( K ) · w l i j I i ( K ) v l i j ( 3 )   for   l = 1 , , n   and   i = 1 , , p ; w l i v l i j ( 2 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; w l i v l i j ( 3 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) ; w l i 0   for   l = 1 , , n   and   i = 1 , , p ; v j ( 1 ) 0   for   j I ( a ) ; v l i j ( 2 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; v l i j ( 3 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) .
Let
w l i = d l ( n ) · w ˜ l i , v l i j ( 2 ) = d l ( n ) · v ˜ l i j ( 2 )   and   v l i j ( 3 ) = d l ( n ) · v ˜ l i j ( 3 ) .
Then, we obtain
( D n ) min l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w ˜ l i subject   to i = 1 p d l ( n ) · B l i j ( n ) · w ˜ l i + d l ( n ) · χ j ( a ) · a ^ l j ( n ) · v j ( 1 ) + i = 1 p d l ( n ) · χ i j ( B ) · B ^ l i j ( n ) · v ˜ l i j ( 2 ) + k = l + 1 n i = 1 p χ i j ( K ) · d l ( n ) · d k ( n ) · K ^ k l i j ( n ) · v ˜ k i j ( 3 ) d l ( n ) · a l j ( n ) + k = l + 1 n i = 1 p d l ( n ) · d k ( n ) · K k l i j ( n ) · w ˜ k i   for   l = 1 , , n 1   and   j = 1 , , q ; i = 1 p d n ( n ) · B n i j ( n ) · w ˜ n i + d n ( n ) · χ j ( a ) · a ^ n j ( n ) · v j ( 1 ) + i = 1 p d n ( n ) · χ i j ( B ) · B ^ n i j ( n ) · v ˜ n i j ( 2 ) d n ( n ) · a n j ( n )   for   j = 1 , , q ; v j ( 1 ) 1   for   j I ( a ) ; γ ( a ) j I ( a ) v j ( 1 ) ; γ i ( B ) · d l ( n ) · w ˜ l i j I i ( B ) d l ( n ) · v ˜ l i j ( 2 )   for   l = 1 , , n   and   i = 1 , , p ; γ i ( K ) · d l ( n ) · w ˜ l i j I i ( K ) d l ( n ) · v ˜ l i j ( 3 )   for   l = 1 , , n   and   i = 1 , , p ; d l ( n ) · w ˜ l i d l ( n ) · v ˜ l i j ( 2 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; d l ( n ) · w ˜ l i d l ( n ) · v ˜ l i j ( 3 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) ; d l ( n ) · w ˜ l i 0   for   l = 1 , , n   and   i = 1 , , p ; v j ( 1 ) 0   for   j I ( a ) ; d l ( n ) · v ˜ l i j ( 2 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; d l ( n ) · v ˜ l i j ( 3 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) .
which, after renaming the decision variables, is equivalent to the following problem:
( D n ) min l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w l i
subject   to i = 1 p B l i j ( n ) · w l i + χ j ( a ) · a ^ l j ( n ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · v l i j ( 2 ) + k = l + 1 n i = 1 p χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · v k i j ( 3 ) a l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K k l i j ( n ) · w k i   for   l = 1 , , n 1   and   j = 1 , , q ;
i = 1 p B n i j ( n ) · w n i + χ j ( a ) · a ^ n j ( n ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · v n i j ( 2 ) a n j ( n )   for   j = 1 , , q ;
γ ( a ) j I ( a ) v j ( 1 ) ;
γ i ( B ) · w l i j I i ( B ) v l i j ( 2 )   for   l = 1 , , n   and   i = 1 , , p ;
γ i ( K ) · w l i j I i ( K ) v l i j ( 3 )   for   l = 1 , , n   and   i = 1 , , p ;
1 v j ( 1 )   for   j I ( a ) ;
w l i v l i j ( 2 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ;
w l i v l i j ( 3 )   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) ; w l i 0   for   l = 1 , , n   and   i = 1 , , p ;
v j ( 1 ) 0   for   j I ( a ) ; v l i j ( 2 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( B ) ; v l i j ( 3 ) 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) .
The feasible solution of problem ( P n ) is denoted by
x ( n ) = z ( n ) , d ( n ) , u ( 1 ; n ) , u ( 2 ; n ) , u ( 3 ; n ) , u ( 4 ; n ) , u ( 5 ; n ) ,
where
z ( n ) = z 1 ( n ) , , z l ( n ) , , z n ( n ) with z l ( n ) = z l 1 ( n ) , z l 2 ( n ) , , z l q ( n ) u ( 2 ; n ) = u 1 ( 2 ; n ) , , u l ( 2 ; n ) , , u n ( 2 ; n )   with   u l ( 2 ; n ) = u l 1 ( 2 ; n ) , u l 2 ( 2 ; n ) , , u l q ( 2 ; n ) u ( 3 ; n ) = u 1 ( 3 ; n ) , , u l ( 3 ; n ) , , u n ( 3 ; n )   with   u l ( 3 ; n ) = u l 1 ( 3 ; n ) , u l 2 ( 3 ; n ) , , u l q ( 3 ; n ) u ( 4 ; n ) = u 11 ( 4 ; n ) , , u 1 p ( 4 ; n ) , , u l 1 ( 4 ; n ) , , u l p ( 4 ; n ) , , u n 1 ( 4 ; n ) , , u n p ( 4 ; n )   and   u ( 5 ; n ) = u 11 ( 5 ; n ) , , u 1 p ( 5 ; n ) , , u l 1 ( 5 ; n ) , , u l p ( 5 ; n ) , , u n 1 ( 5 ; n ) , , u n p ( 5 ; n )   with   u l i ( 4 ; n ) = u l i 1 ( 4 ; n ) , u l i 2 ( 4 ; n ) , , u l i q ( 4 ; n )   and   u l i ( 5 ; n ) = u l i 1 ( 5 ; n ) , u l i 2 ( 5 ; n ) , , u l i q ( 5 ; n )   for   l = 1 , , n   and   i = 1 , , p .
In addition, the feasible solution of problem ( D n ) is denoted by
y ( n ) = w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ,
where
w ( n ) = w 1 ( n ) , , w l ( n ) , , w n ( n )   with   w l ( n ) = w l 1 ( n ) , w l 2 ( n ) , , w l p ( n ) v ( 2 ; n ) = v 11 ( 2 ; n ) , , v 1 p ( 2 ; n ) , , v l 1 ( 2 ; n ) , , v l p ( 2 ; n ) , , v n 1 ( 2 ; n ) , , v n p ( 2 ; n )   and   v ( 3 ; n ) = v 11 ( 3 ; n ) , , v 1 p ( 3 ; n ) , , v l 1 ( 3 ; n ) , , v l p ( 3 ; n ) , , v n 1 ( 3 ; n ) , , v n p ( 3 ; n )   with   v l i ( 2 ; n ) = v l i 1 ( 2 ; n ) , v l i 2 ( 2 ; n ) , , v l i q ( 2 ; n )   and   v l i ( 3 ; n ) = v l i 1 ( 3 ; n ) , v l i 2 ( 3 ; n ) , , v l i q ( 3 ; n )   for   l = 1 , , n   and   i = 1 , , p .
Recall that d l ( n ) denotes the length of the compact interval E ¯ l ( n ) . Now, we define
s l ( n ) = max k = l , , n d k ( n ) .
Then,
s l ( n ) = max d l ( n ) , d l + 1 ( n ) , , d n ( n ) = max d l ( n ) , s l + 1 ( n ) ,
which says that
s l ( n ) d l ( n )   and   P n s l ( n ) s l + 1 ( n )
for l = 1 , , n 1 . We also adopt the following notations:
σ ¯ l ( n ) = min i = 1 , , p min j = 1 , , q B l i j ( n ) : B l i j ( n ) > 0   and   σ l ( n ) = min k = l , , n σ ¯ k ( n )
ν ¯ l ( n ) = max k = 1 , , n max j = 1 , , q i = 1 p K k l i j ( n )   and   ν l ( n ) = max k = l , , n ν ¯ k ( n )
ϕ ¯ l ( n ) = max k = 1 , , n max i = 1 , , p j = 1 q K k l i j ( n )   and   ϕ l ( n ) = max k = l , , n ϕ ¯ k ( n )
τ ¯ l ( n ) = max j = 1 , , q a l j ( n )   and   τ l ( n ) = max k = l , , n τ ¯ k ( n )
and the following notations
ζ = max i = 1 , , p sup t [ 0 , T ] c i ( t )
τ = max j = 1 , , q sup t [ 0 , T ] a j ( 0 ) ( t )
ν = max j = 1 , , q sup ( t , s ) [ 0 , T ] × [ 0 , T ] i = 1 p K i j ( 0 ) ( t , s )
ϕ = max i = 1 , , p sup ( t , s ) [ 0 , T ] × [ 0 , T ] j = 1 q K i j ( 0 ) ( t , s ) .
It is obvious that
τ ¯ l ( n ) τ l ( n ) τ , ν ¯ l ( n ) ν l ( n ) ν , ϕ ¯ l ( n ) ϕ l ( n ) ϕ   and   σ ¯ l ( n ) σ l ( n ) σ > 0
for any n N and l = 1 , , n . We also define
w l ( n ) = τ l ( n ) σ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n l
for l = 1 , , n .
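As an illustrative numerical sketch (not part of the formal development), the suffix maximum s l ( n ) and the bound w l ( n ) can be computed directly from the recurrence s l ( n ) = max { d l ( n ) , s l + 1 ( n ) } . All data below are hypothetical.

```python
# Hypothetical subinterval lengths d_l(n) for a partition with n = 4.
d = [0.30, 0.25, 0.20, 0.25]
n = len(d)

# Suffix maxima s_l(n) = max_{k >= l} d_k(n), via s_l = max(d_l, s_{l+1}).
s = [0.0] * n
s[n - 1] = d[n - 1]
for l in range(n - 2, -1, -1):
    s[l] = max(d[l], s[l + 1])

# Hypothetical constants standing in for tau_l(n), sigma_l(n), nu_l(n).
tau, sigma, nu = 2.0, 0.5, 1.5

# w_l(n) = (tau_l(n)/sigma_l(n)) * (1 + s_l(n)*nu_l(n)/sigma_l(n))^(n - l);
# index i below corresponds to l = i + 1, so the exponent is n - 1 - i.
w = [(tau / sigma) * (1.0 + s[i] * nu / sigma) ** (n - 1 - i) for i in range(n)]

# s dominates d pointwise and is nonincreasing, as noted above.
assert all(s[i] >= d[i] for i in range(n))
assert all(s[i] >= s[i + 1] for i in range(n - 1))
```

The same suffix-maximum pattern also yields σ l ( n ) , ν l ( n ) , ϕ l ( n ) and τ l ( n ) from their barred counterparts.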
Given a feasible solution y ( n ) = ( w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ) of problem ( D n ) , we define
θ l i j ( B ; n ) = v l i j ( 2 ; n ) w l i ( n ) if   w l i ( n ) 0 0 if   w l i ( n ) = 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( B )
and
θ l i j ( K ; n ) = v l i j ( 3 ; n ) w l i ( n ) if   w l i ( n ) 0 0 if   w l i ( n ) = 0   for   l = 1 , , n , i = 1 , , p   and   j I i ( K ) .
From the constraints (43) and (44), we have θ l i j ( B ; n ) , θ l i j ( K ; n ) [ 0 , 1 ] . From the constraints (40) and (41), we also have
γ i ( B ) j I i ( B ) θ l i j ( B ; n )   and   γ i ( K ) j I i ( K ) θ l i j ( K ; n ) for   l = 1 , , n   and   i = 1 , , p .
For l = 1 , , n , we write
a ˘ l j ( n ) = a l j ( n ) χ j ( a ) · v j ( 1 ; n ) · a ^ l j ( n )
B ˘ l i j ( n ) = B l i j ( n ) + χ i j ( B ) · θ l i j ( B ; n ) · B ^ l i j ( n )
K ˘ k l i j ( n ) = K k l i j ( n ) χ i j ( K ) · θ k i j ( K ; n ) · K ^ k l i j ( n ) .
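For concreteness, the ratios (58) and (59) and the adjusted data (61)–(63) can be evaluated as in the following sketch; every scalar below is hypothetical and serves only to illustrate the direction of each adjustment.

```python
# Hypothetical dual values for one fixed index triple (l, i, j).
w_li = 2.0      # w_{li}(n)
v2_lij = 1.5    # v_{lij}(2;n); constraint (43) gives v2_lij <= w_li
v3_lij = 0.5    # v_{lij}(3;n); constraint (44) gives v3_lij <= w_li

# Ratios (58)-(59); both lie in [0, 1] by the constraints above.
theta_B = v2_lij / w_li if w_li != 0 else 0.0
theta_K = v3_lij / w_li if w_li != 0 else 0.0

# Adjusted data (61)-(63) with hypothetical nominal values and radii (hats);
# the indicator functions chi are taken to be 1 here.
a_lj, a_hat = 3.0, 0.2
B_lij, B_hat = 1.0, 0.1
K_klij, K_hat = 0.4, 0.05
v1_j = 1.0      # v_j(1;n)

a_breve = a_lj - v1_j * a_hat        # right-hand side is shrunk
B_breve = B_lij + theta_B * B_hat    # matrix entry is enlarged
K_breve = K_klij - theta_K * K_hat   # kernel entry is shrunk
```

Each adjustment tightens the corresponding constraint: the right-hand side decreases while the left-hand side grows.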
Proposition 4.
Given a feasible solution y ( n ) = ( w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ) of problem ( D n ) , we have the following properties.
(i)
For i = 1 , , p , j = 1 , , q and l = 1 , , n , let
w ^ l i ( n ) = w l ( n ) , v ^ l i j ( 2 ; n ) = θ l i j ( B ; n ) · w l ( n ) , v ^ l i j ( 3 ; n ) = θ l i j ( K ; n ) · w l ( n )   and   v ^ j ( 1 ; n ) = v j ( 1 ; n ) .
Then,
y ^ ( n ) = w ^ ( n ) , v ^ ( 1 ; n ) , v ^ ( 2 ; n ) , v ^ ( 3 ; n )
is a feasible solution of problem ( D n ) satisfying the following inequalities:
w ^ l i ( n ) = w l ( n ) τ σ · exp r · T · ν σ
for all n N , i = 1 , , p and l = 1 , , n .
(ii)
For i = 1 , , p and l = 1 , , n , let
w ¯ l i ( n ) = min w l i ( n ) , w l ( n ) , v ¯ l i j ( 2 ; n ) = θ l i j ( B ; n ) · w ¯ l i ( n ) , v ¯ l i j ( 3 ; n ) = θ l i j ( K ; n ) · w ¯ l i ( n )   and   v ¯ j ( 1 ; n ) = v j ( 1 ; n ) .
Then, y ¯ ( n ) = ( w ¯ ( n ) , v ¯ ( 1 ; n ) , v ¯ ( 2 ; n ) , v ¯ ( 3 ; n ) ) is a feasible solution of problem ( D n ) satisfying the following inequalities
w ¯ l i ( n ) w l ( n ) τ σ · exp r · T · ν σ
for all n N , i = 1 , , p , j = 1 , , q and l = 1 , , n . We further assume that each c l i ( n ) is nonnegative and y ( n ) is an optimal solution of problem ( D n ) . Then, y ¯ ( n ) is also an optimal solution of problem ( D n ) .
Proof. 
By (20), for each j, there exists i j { 1 , 2 , , p } satisfying B i j j ( 0 ) ( t ) > 0 . Therefore, by referring to (27), for each j and l, there exists i l j { 1 , 2 , , p } satisfying B l i l j j ( n ) > 0 , which also implies 0 < σ l ( n ) B l i l j j ( n ) B ˘ l i l j j ( n ) . For l = 1 , , n , we have
i = 1 p B ˘ l i j ( n ) · w l ( n ) B ˘ l i l j j ( n ) · w l ( n ) = B ˘ l i l j j ( n ) · τ l ( n ) σ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n l τ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n l .
Since
i = 1 p d k ( n ) · K ˘ k l i j ( n ) · w k ( n ) i = 1 p s k ( n ) · K k l i j ( n ) · τ k ( n ) σ k ( n ) · 1 + s k ( n ) · ν k ( n ) σ k ( n ) n k ( by   ( 47 )   and   K ˘ k l i j ( n ) K k l i j ( n ) ) s k ( n ) · ν l ( n ) · τ k ( n ) σ k ( n ) 1 + s k ( n ) · ν k ( n ) σ k ( n ) n k ( by   ( 49 ) ) ,
it follows that, for l = 1 , , n 1 ,
a ˘ l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K ˘ k l i j ( n ) · w k ( n ) a l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K ˘ k l i j ( n ) · w k ( n ) τ l ( n ) + k = l + 1 n s k ( n ) · ν l ( n ) · τ k ( n ) σ k ( n ) · 1 + s k ( n ) · ν k ( n ) σ k ( n ) n k ( by   ( 51 ) ) = τ l ( n ) · 1 + k = l + 1 n s l ( n ) · ν l ( n ) σ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n k = τ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n l .
Therefore, from (66), we obtain
i = 1 p B ˘ l i j ( n ) · w l ( n ) k = l + 1 n i = 1 p d k ( n ) · K ˘ k l i j ( n ) · w k ( n ) a ˘ l j ( n ) ,
which implies, by (61)–(63),
i = 1 p B l i j ( n ) · w l ( n ) + χ j ( a ) · a ^ l j ( n ) · v j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · θ l i j ( B ; n ) · w l ( n ) + i = 1 p k = l n χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · θ k i j ( K ; n ) · w k ( n ) a l j ( n ) + i = 1 p k = l + 1 n d k ( n ) · K k l i j ( n ) · w k ( n ) ,
which shows that the constraint (37) is satisfied. For l = n , from (66), we also have
i = 1 p B ˘ n i j ( n ) · w n ( n ) τ n ( n ) a ˘ n j ( n ) ,
which implies, by the nonnegativity,
i = 1 p B n i j ( n ) · w n ( n ) + χ j ( a ) · a ^ n j ( n ) + i = 1 p χ i j ( B ) · θ n i j ( B ; n ) · B ^ n i j ( n ) · w n ( n ) a n j ( n ) .
This shows that the constraint (38) is satisfied. From (60), we have
γ i ( B ) · w ^ l i ( n ) = γ i ( B ) · w l ( n ) j I i ( B ) θ l i j ( B ; n ) · w l ( n ) = j I i ( B ) v ^ l i j ( 2 ; n )
and
γ i ( K ) · w ^ l i ( n ) = γ i ( K ) · w l ( n ) j I i ( K ) θ l i j ( K ; n ) · w l ( n ) = j I i ( K ) v ^ l i j ( 3 ; n ) ,
which says that the constraints (40) and (41) are satisfied. The remaining constraints are easily verified. This shows that y ^ ( n ) is indeed a feasible solution of problem ( D n ) . On the other hand, from (23) and (47), we have
τ l ( n ) σ l ( n ) · 1 + s l ( n ) · ν l ( n ) σ l ( n ) n l τ σ · 1 + P n · ν σ n τ σ · 1 + T n * · ν σ n τ σ · 1 + r · T n · ν σ n .
Since
1 + r · T n · ν σ n → exp r · T · ν σ   as   n * → ∞ ,
i.e., as n → ∞ ; this proves (64).
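The limit invoked here is the classical compound-growth limit ( 1 + x / n ) n → e x with x = r · T · ν / σ . A quick numerical check, using hypothetical values of r , T , ν and σ :

```python
import math

r, T, nu, sigma = 1.2, 1.0, 1.5, 0.5   # hypothetical constants
x = r * T * nu / sigma                  # exponent x = r*T*nu/sigma = 3.6

# (1 + x/n)^n increases toward exp(x); the relative error decays like x^2/(2n).
approx = {n: (1.0 + x / n) ** n for n in (10, 1_000, 1_000_000)}

assert approx[10] < approx[1_000] < approx[1_000_000] < math.exp(x)
assert abs(approx[1_000_000] - math.exp(x)) < 1e-3 * math.exp(x)
```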
To prove part (ii), for each l = 1 , , n and j = 1 , , q , we define the index set
J l j = { i : B ˘ l i j ( n ) > 0 } .
Then,
i = 1 p B ˘ l i j ( n ) w ¯ l i ( n ) = i J l j B ˘ l i j ( n ) w ¯ l i ( n ) .
We also define the index set
J ¯ l j = i J l j : w ¯ l i ( n ) = w l ( n ) .
For each fixed j = 1 , , q and l = 1 , , n , we consider the following cases.
  • Suppose that J ¯ l j , i.e., there exists i l j satisfying B l i l j j ( n ) > 0 and w ¯ l i l j ( n ) = w l ( n ) . In this case, we have
    i = 1 p B ˘ l i j ( n ) · w ¯ l i ( n ) B ˘ l i l j j ( n ) · w ¯ l i l j ( n ) = B ˘ l i l j j ( n ) · w l ( n ) .
    Combining (66) and (67), we obtain
    a ˘ l j ( n ) + i = 1 p k = l + 1 n d k ( n ) K ˘ k l i j ( n ) · w ¯ k i ( n ) a ˘ l j ( n ) + k = l + 1 n i = 1 p d k ( n ) · K ˘ k l i j ( n ) · w k ( n ) i = 1 p B ˘ l i j ( n ) · w ¯ l i ( n )
    for l = 1 , , n 1 and
    a ˘ n j ( n ) i = 1 p B ˘ n i j ( n ) · w ¯ n i ( n ) .
    Therefore, we obtain
    i = 1 p B l i j ( n ) · w ¯ l i ( n ) + χ j ( a ) · a ^ l j ( n ) · v ¯ j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · θ l i j ( B ; n ) · w ¯ l i ( n ) + i = 1 p k = l n χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · θ k i j ( K ; n ) · w ¯ k i ( n ) a l j ( n ) + i = 1 p k = l + 1 n d k ( n ) · K k l i j ( n ) · w ¯ k i ( n ) ( by   ( 61 ) ( 63 )   and   ( 70 ) )
    for l = 1 , , n 1 , and
    i = 1 p B n i j ( n ) · w ¯ n i ( n ) + χ j ( a ) · a ^ n j ( n ) · v ¯ j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · θ n i j ( B ; n ) · w ¯ n i ( n ) a n j ( n ) ( by ( 71 ) ) ,
    which show that the constraints (37) and (38) are satisfied.
  • Suppose that J ¯ l j = , i.e., w ¯ l i ( n ) = w l i ( n ) for i J l j . In this case, for l = 1 , , n , using (68), we have
    i = 1 p B ˘ l i j ( n ) w ¯ l i ( n ) = i J l j B ˘ l i j ( n ) w ¯ l i ( n ) = i J l j B ˘ l i j ( n ) w l i ( n ) = i = 1 p B ˘ l i j ( n ) w l i ( n ) .
    For l = 1 , , n 1 , since w k i ( n ) w ¯ k i ( n ) and K ˘ k l i j ( n ) 0 , using (72), we have
    i = 1 p B ˘ l i j ( n ) w ¯ l i ( n ) k = l + 1 n d k ( n ) · K ˘ k l i j ( n ) · w ¯ k i ( n ) i = 1 p B ˘ l i j ( n ) w l i ( n ) k = l + 1 n d k ( n ) · K ˘ k l i j ( n ) · w k i ( n ) .
    which implies, by using (62) and (63) and adding v ¯ j ( 1 ; n ) = v j ( 1 ; n ) on both sides,
    i = 1 p B l i j ( n ) · w ¯ l i ( n ) i = 1 p k = l + 1 n d k ( n ) · K k l i j ( n ) · w ¯ k i ( n ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · θ l i j ( B ; n ) · w ¯ l i ( n ) + i = 1 p k = l + 1 n χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · θ k i j ( K ; n ) · w ¯ k i ( n ) + χ j ( a ) · a ^ l j ( n ) · v ¯ j ( 1 ; n ) i = 1 p B l i j ( n ) · w l i ( n ) i = 1 p k = l + 1 n d k ( n ) · K k l i j ( n ) · w k i ( n ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · θ l i j ( B ; n ) · w l i ( n ) + i = 1 p k = l + 1 n χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · θ k i j ( K ; n ) · w k i ( n ) + χ j ( a ) · a ^ l j ( n ) · v j ( 1 ; n ) = i = 1 p B l i j ( n ) · w l i ( n ) i = 1 p k = l + 1 n d k ( n ) · K k l i j ( n ) · w k i ( n ) + χ j ( a ) · a ^ l j ( n ) · v j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · v l i j ( 2 ; n ) + i = 1 p k = l + 1 n χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · v k i j ( 3 ; n ) ( by   ( 58 )   and   ( 59 ) ) a l j ( n ) ( by   the   feasibility   of   y ( n ) ) .
    For l = n , by (68), we also have
    i = 1 p B ˘ n i j ( n ) w ¯ n i ( n ) = i I n j B ˘ n i j ( n ) w ¯ n i ( n ) = i I n j B ˘ n i j ( n ) w n i ( n ) = i = 1 p B ˘ n i j ( n ) w n i ( n ) ,
    which implies
    i = 1 p B n i j ( n ) · w ¯ n i ( n ) + χ j ( a ) · a ^ n j ( n ) · v ¯ j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · θ n i j ( B ; n ) · w ¯ n i ( n ) = i = 1 p B n i j ( n ) · w n i ( n ) + χ j ( a ) · a ^ n j ( n ) · v j ( 1 ; n ) + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · θ n i j ( B ; n ) · w n i ( n ) a n j ( n ) ( by   the   feasibility   of   y ( n ) ) ,
    which show that the constraints (37) and (38) are satisfied.
From (60), we have
γ i ( B ) · w ¯ l i ( n ) j I i ( B ) θ l i j ( B ; n ) · w ¯ l i ( n ) = j I i ( B ) v ¯ l i j ( 2 ; n )
and
γ i ( K ) · w ¯ l i ( n ) j I i ( K ) θ l i j ( K ; n ) · w ¯ l i ( n ) = j I i ( K ) v ¯ l i j ( 3 ; n ) ,
which says that the constraints (40) and (41) are satisfied. The remaining constraints are easily verified. This shows that y ¯ ( n ) is a feasible solution of problem ( D n ) . In addition, the inequality (65) follows from (64) immediately.
Finally, since the objective values satisfy
l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w ¯ l i l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w l i ,
it is clear that if y ( n ) is an optimal solution of problem ( D n ) , then y ¯ ( n ) is an optimal solution of problem ( D n ) . This completes the proof. □
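The optimality argument in part (ii) rests on a simple monotonicity fact: with nonnegative costs, the truncation w ¯ l i ( n ) = min { w l i ( n ) , w l ( n ) } cannot increase the minimized objective value. The following sketch, with hypothetical data for n = 2 and p = 2, illustrates this.

```python
# Hypothetical data for n = 2 grid points and p = 2 dual coordinates.
d = [0.5, 0.5]                  # subinterval lengths d_l(n)
c = [[1.0, 2.0], [0.5, 1.5]]    # nonnegative costs c_{li}(n)
w = [[3.0, 1.0], [4.0, 2.5]]    # a feasible w_{li}(n)
cap = [2.0, 3.0]                # the bounds w_l(n) from part (i)

# Truncation of part (ii): w_bar_{li} = min(w_{li}, w_l(n)).
w_bar = [[min(w[l][i], cap[l]) for i in range(2)] for l in range(2)]

def objective(ww):
    """Objective of (D_n): sum over l, i of d_l(n) * c_{li}(n) * w_{li}."""
    return sum(d[l] * c[l][i] * ww[l][i] for l in range(2) for i in range(2))

# With c >= 0, truncation cannot increase the minimized objective value.
assert objective(w_bar) <= objective(w)
```

Since the truncated point remains feasible by the proposition, its objective value must coincide with the optimum whenever the original point is optimal.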
Next, we shall prove that the feasible solutions of problem ( P n ) are uniformly bounded.
Proposition 5.
Let x ( n ) be a feasible solution of primal problem ( P n ) given by
x ( n ) = z ( n ) , d ( n ) , u ( 1 ; n ) , u ( 2 ; n ) , u ( 3 ; n ) , u ( 4 ; n ) , u ( 5 ; n ) .
Then, we have the following properties.
(i)
We have
0 z l j ( n ) ζ σ · exp r · T · ϕ σ ,
for all j = 1 , , q , l = 1 , , n and n N .
(ii)
We have
γ i ( B ) · u l i ( 2 ; n ) + γ i ( K ) · u l i ( 3 ; n ) + { j : j I i ( B ) } u l i j ( 4 ; n ) + { j : j I i ( K ) } u l i j ( 5 ; n ) ζ + r · ϕ · T · ζ σ · exp r · T · ϕ σ ,
which says that there is a constant k 1 satisfying
max u l i ( 2 ; n ) , u l i ( 3 ; n ) , u l i j ( 4 ; n ) , u l i j ( 5 ; n ) k 1
for all i = 1 , , p , j = 1 , , q , l = 1 , , n and n N .
(iii)
Suppose that x ¯ ( n ) is an optimal solution of problem ( P n ) given by
x ¯ ( n ) = z ¯ ( n ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ¯ ( 2 ; n ) , u ¯ ( 3 ; n ) , u ¯ ( 4 ; n ) , u ¯ ( 5 ; n ) .
We have
γ ( a ) · u ¯ ( 1 ; n ) + j I ( a ) d ¯ j ( n ) q · τ · r · T · ζ σ · exp r · T · ϕ σ ,
which says that there is a constant k 2 satisfying
max u ¯ ( 1 ; n ) , d ¯ j ( n ) k 2
for all j = 1 , , q and n N .
Proof. 
To prove part (i), by the feasibility, we have
j = 1 q B 1 i j ( n ) · z 1 j + γ i ( B ) · u 1 i ( 2 ; n ) + γ i ( K ) · u 1 i ( 3 ; n ) + { j : j I i ( B ) } u 1 i j ( 4 ; n ) + { j : j I i ( K ) } u 1 i j ( 5 ; n ) c 1 i ( n )
and, for l = 2 , , n ,
j = 1 q B l i j ( n ) · z l j ( n ) c l i ( n ) + j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z k j ( n ) γ i ( B ) · u l i ( 2 ; n ) γ i ( K ) · u l i ( 3 ; n ) { j : j I i ( B ) } u l i j ( 4 ; n ) { j : j I i ( K ) } u l i j ( 5 ; n ) c l i ( n ) + j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z k j ( n ) ( by   the   nonnegativity ) c l i ( n ) + P n · j = 1 q k = 1 l 1 K l k i j ( n ) · z k j ( n )
By (20), for each j, there exists i j { 1 , 2 , , p } satisfying B i j j ( 0 ) ( t ) > 0 . Therefore, by referring to (27), for each j and l, there exists i l j { 1 , 2 , , p } satisfying B l i l j j ( n ) > 0 , which also implies B l i l j j ( n ) σ > 0 by (48) and (56). From (56), if ϕ = 0 , then ϕ ¯ l ( n ) = 0 , which says that the matrix K l k ( n ) is a zero matrix. In this case, using (77) and (52), we have
0 σ · z l j ( n ) B l i l j j ( n ) · z l j ( n ) s = 1 q B l i l j s ( n ) · z l s ( n ) c l i l j ( n ) ζ
which implies
0 z l j ( n ) ζ σ .
For the case of ϕ 0 , we want to show that
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1
for all j = 1 , , q and l = 1 , , n . We shall prove it by induction on l. For l = 1 , from (79), we have
j = 1 q B 1 i j ( n ) · z 1 j c 1 i ( n ) ,
which says that
B l i l j j ( n ) · z 1 j ( n ) s = 1 q B l i l j s ( n ) · z 1 s ( n ) c 1 i l j ( n ) .
Therefore, for each j, we obtain
z 1 j ( n ) c 1 i l j ( n ) B l i l j j ( n ) ζ σ .
Suppose that
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1
for l = 1 , 2 , , n 1 . Then, for each j, we have
l = 1 n 1 z l j ( n ) l = 1 n 1 ζ σ · 1 + P n · ϕ σ l 1 ζ ϕ · P n · 1 + P n · ϕ σ n 1 1 .
Therefore, for each j, we obtain
B n i n j j ( n ) · z n j ( n ) h = 1 q B n i n j h ( n ) · z n h ( n ) c n i n j ( n ) + P n · k = 1 n 1 h = 1 q K n k i n j h ( n ) z k h ( n ) ( by   ( 78 ) ) c n i n j ( n ) + P n · h = 1 q k = 1 n 1 K i n j h ( 0 ) ( t , s ) · z k h ( n ) for   some   ( t , s ) E ¯ n × E ¯ k c n i n j ( n ) + P n · h = 1 q K i n j h ( 0 ) ( t , s ) · ζ ϕ · P n · 1 + P n · ϕ σ n 1 1 ( by ( 79 ) ) ζ + ζ 1 + P n · ϕ σ n 1 1 ( by   ( 55 ) ) ,
which implies
z n j ( n ) ζ σ · 1 + P n · ϕ σ n 1 .
Therefore, by induction, we obtain
z l j ( n ) ζ σ · 1 + P n · ϕ σ l 1 ζ σ · 1 + P n · ϕ σ n
for all j = 1 , , q , l = 1 , , n and n N . From (23), we have
ζ σ · 1 + P n · ϕ σ n ζ σ · 1 + T n * · ϕ σ n ζ σ · 1 + r · T n · ϕ σ n .
Since
1 + r · T n · ϕ σ n → exp r · T · ϕ σ   as   n * → ∞ ,
i.e., as n → ∞ ; this proves (74).
To prove part (ii), by the feasibility of x ( n ) , for each l = 2 , , n , it follows that
j = 1 q B l i j ( n ) · z l j ( n ) c l i ( n ) + P n · j = 1 q k = 1 l 1 K l k i j ( n ) · z k j ( n ) γ i ( B ) · u l i ( 2 ; n ) γ i ( K ) · u l i ( 3 ; n ) { j : j I i ( B ) } u l i j ( 4 ; n ) { j : j I i ( K ) } u l i j ( 5 ; n ) c l i ( n ) + ( l 1 ) · P n · ζ σ · exp r · T · ϕ σ · j = 1 q K l k i j ( n ) γ i ( B ) · u l i ( 2 ; n ) γ i ( K ) · u l i ( 3 ; n ) { j : j I i ( B ) } u l i j ( 4 ; n ) { j : j I i ( K ) } u l i j ( 5 ; n ) ( using   inequality   ( 74 ) )
which implies
γ i ( B ) · u l i ( 2 ; n ) + γ i ( K ) · u l i ( 3 ; n ) + { j : j I i ( B ) } u l i j ( 4 ; n ) + { j : j I i ( K ) } u l i j ( 5 ; n ) c l i ( n ) + ( l 1 ) · P n · ζ σ · exp r · T · ϕ σ · j = 1 q K l k i j ( n ) j = 1 q B l i j ( n ) · z l j ( n ) ζ + ( l 1 ) · ϕ · r · T n · ζ σ · exp r · T · ϕ σ ( by   ( 23 ) ,   ( 50 ) and   ( 56 ) ) ζ + r · ϕ · T · ζ σ · exp r · T · ϕ σ ( since   l 1 n )
Since u l ( 2 ; n ) , u l ( 3 ; n ) , u l ( 4 ; n ) and u l ( 5 ; n ) are nonnegative, according to (80), they must be uniformly bounded. Therefore, we conclude that there exists a constant k 1 such that (75) is satisfied.
To prove part (iii), according to the objective function of problem ( P n ) , since x ( n ) = 0 is a feasible solution, we have 0 V ( P n ) , i.e.,
0 l = 1 n j = 1 q d l ( n ) · a l j ( n ) · z ¯ l j ( n ) j I ( a ) d ¯ j ( n ) γ ( a ) · u ¯ ( 1 ; n ) ,
which implies
γ ( a ) · u ¯ ( 1 ; n ) + j I ( a ) d ¯ j ( n ) l = 1 n j = 1 q d l ( n ) · a l j ( n ) · z ¯ l j ( n ) ζ σ · exp r · T · ϕ σ · l = 1 n j = 1 q d l ( n ) · a l j ( n ) ( using   inequality   ( 74 ) ) P n · ζ σ · exp r · T · ϕ σ · l = 1 n j = 1 q a l j ( n ) n · q · τ · P n · ζ σ · exp r · T · ϕ σ n · q · τ · r · T n · ζ σ · exp r · T · ϕ σ ( by   ( 23 ) ) = q · τ · r · T · ζ σ · exp r · T · ϕ σ .
This says that u ¯ ( 1 ; n ) and d ¯ j ( n ) must be uniformly bounded for j = 1 , , q and n N , by the nonnegativity of u ¯ ( 1 ; n ) and d ¯ ( n ) . Therefore, we conclude that there exists a constant k 2 such that (76) is satisfied. This completes the proof. □
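The induction used in part (i) can also be verified numerically: feeding the previous bounds back into the recursion b l = ( ζ + P n · ϕ · ( b 1 + ⋯ + b l − 1 ) ) / σ reproduces the closed form ( ζ / σ ) · ( 1 + P n · ϕ / σ ) l − 1 . The constants below are hypothetical.

```python
import math

# Hypothetical constants standing in for zeta, sigma, phi and ||P_n||.
zeta, sigma, phi, Pn, n = 2.0, 0.5, 1.0, 0.1, 8

# Recursive bound: b_l = (zeta + Pn * phi * (b_1 + ... + b_{l-1})) / sigma.
b = []
for _ in range(n):
    b.append((zeta + Pn * phi * sum(b)) / sigma)

# Closed form from the induction: (zeta/sigma) * (1 + Pn*phi/sigma)^(l-1);
# index i corresponds to l = i + 1.
closed = [(zeta / sigma) * (1.0 + Pn * phi / sigma) ** i for i in range(n)]

assert all(math.isclose(x, y) for x, y in zip(b, closed))
```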
A feasible solution of problem ( RCLP * ) is denoted by
x ( t ) = z ( t ) , d , u ( 1 ) , u ( 2 ) ( t ) , u ( 3 ) ( t ) , u ( 4 ) ( t ) , u ( 5 ) ( t ) ,
where
z ( t ) = z 1 ( t ) , , z j ( t ) , , z q ( t ) d = d 1 , , d j , , d q u ( 2 ) ( t ) = u 1 ( 2 ) ( t ) , , u j ( 2 ) ( t ) , , u q ( 2 ) ( t ) u ( 3 ) ( t ) = u 1 ( 3 ) ( t ) , , u j ( 3 ) ( t ) , , u q ( 3 ) ( t ) u ( 4 ) ( t ) = u 11 ( 4 ) ( t ) , , u 1 q ( 4 ) ( t ) , , u i 1 ( 4 ) ( t ) , , u i q ( 4 ) ( t ) , , u p 1 ( 4 ) ( t ) , , u p q ( 4 ) ( t ) u ( 5 ) ( t ) = u 11 ( 5 ) ( t ) , , u 1 q ( 5 ) ( t ) , , u i 1 ( 5 ) ( t ) , , u i q ( 5 ) ( t ) , , u p 1 ( 5 ) ( t ) , , u p q ( 5 ) ( t ) .
Let x ¯ ( n ) be an optimal solution of problem ( P n ) given by
x ¯ ( n ) = z ¯ ( n ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ¯ ( 2 ; n ) , u ¯ ( 3 ; n ) , u ¯ ( 4 ; n ) , u ¯ ( 5 ; n ) .
We construct the vector-valued step functions z ^ ( n ) : [ 0 , T ] R q and u ^ ( 2 ; n ) , u ^ ( 3 ; n ) : [ 0 , T ] R p as follows:
z ^ ( n ) ( t ) = z ^ 1 ( n ) ( t ) , z ^ 2 ( n ) ( t ) , , z ^ q ( n ) ( t ) u ^ ( 2 ; n ) ( t ) = u ^ 1 ( 2 ; n ) ( t ) , u ^ 2 ( 2 ; n ) ( t ) , , u ^ p ( 2 ; n ) ( t ) u ^ ( 3 ; n ) ( t ) = u ^ 1 ( 3 ; n ) ( t ) , u ^ 2 ( 3 ; n ) ( t ) , , u ^ p ( 3 ; n ) ( t )
where, for each i = 1 , , p and j = 1 , , q ,
z ^ j ( n ) ( t ) = z ¯ l j ( n ) if   t F l ( n )   for   l = 1 , , n z ¯ n j ( n ) if   t = T
u ^ i ( 2 ; n ) ( t ) = u ¯ l i ( 2 ; n ) if   t F l ( n )   for   l = 1 , , n u ¯ n i ( 2 ; n ) if   t = T
u ^ i ( 3 ; n ) ( t ) = u ¯ l i ( 3 ; n ) if   t F l ( n )   for   l = 1 , , n u ¯ n i ( 3 ; n )   if   t = T .
We also construct the vector-valued step functions u ^ i ( 4 ; n ) , u ^ i ( 5 ; n ) : [ 0 , T ] R q by
u ^ i ( 4 ; n ) ( t ) = u ^ i 1 ( 4 ; n ) ( t ) , u ^ i 2 ( 4 ; n ) ( t ) , , u ^ i q ( 4 ; n ) ( t )
and
u ^ i ( 5 ; n ) ( t ) = u ^ i 1 ( 5 ; n ) ( t ) , u ^ i 2 ( 5 ; n ) ( t ) , , u ^ i q ( 5 ; n ) ( t ) ,
where
u ^ i j ( 4 ; n ) ( t ) = u ¯ l i j ( 4 ; n )   if   t F l ( n )   for   l = 1 , , n u ¯ n i j ( 4 ; n )   if   t = T
and
u ^ i j ( 5 ; n ) ( t ) = u ¯ l i j ( 5 ; n ) + e l 1 ( n ) t K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s   if   t F l ( n )   for   l = 1 , , n u ¯ n i j ( 5 ; n ) + E ¯ n ( n ) K ^ n n i j ( n ) · z ^ j ( n ) ( s ) d s   if   t = T = u ¯ l i j ( 5 ; n ) + t e l 1 ( n ) · K ^ l l i j ( n ) · z ¯ l j ( n )   if   t F l ( n )   for   l = 1 , , n u ¯ n i j ( 5 ; n ) + d n ( n ) · K ^ n n i j ( n ) · z ¯ n j ( n )   if   t = T
for i = 1 , , p and j = 1 , , q . We also write
u ( 4 ; n ) ( t ) = u ^ 1 ( 4 ; n ) ( t ) , u ^ 2 ( 4 ; n ) ( t ) , , u ^ p ( 4 ; n ) ( t )
and
u ( 5 ; n ) ( t ) = u ^ 1 ( 5 ; n ) ( t ) , u ^ 2 ( 5 ; n ) ( t ) , , u ^ p ( 5 ; n ) ( t ) .
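The construction above holds each grid-optimal value constant on its subinterval F l ( n ) and assigns the last value at t = T . A minimal sketch of this step-function extension, with a hypothetical partition and hypothetical grid values:

```python
import bisect

# Hypothetical partition 0 = e_0 < e_1 < ... < e_n = T and grid values.
T = 1.0
e = [0.0, 0.25, 0.5, 0.75, T]    # partition points, here n = 4
z_bar = [3.0, 2.0, 2.5, 1.0]     # grid-optimal values z_bar_{lj}(n), fixed j

def z_hat(t):
    """Step-function extension: z_bar[l-1] on F_l = [e_{l-1}, e_l),
    and z_bar[n-1] at the right endpoint t = T."""
    if t >= e[-1]:
        return z_bar[-1]
    return z_bar[bisect.bisect_right(e, t) - 1]

assert z_hat(0.1) == 3.0 and z_hat(0.6) == 2.5 and z_hat(T) == 1.0
```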
Then, we have the following result.
Proposition 6.
Suppose that x ¯ ( n ) is an optimal solution of primal problem ( P n ) . Let x ^ ( n ) be constructed from x ¯ ( n ) according to the above procedure given by
x ^ ( n ) ( t ) = z ^ ( n ) ( t ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ^ ( 2 ; n ) ( t ) , u ^ ( 3 ; n ) ( t ) , u ^ ( 4 ; n ) ( t ) , u ^ ( 5 ; n ) ( t ) .
Then, x ^ ( n ) is a feasible solution of problem ( RCLP * ) .
Proof. 
We first have, for each j ,
0 T a ^ j ( t ) · z ^ j ( n ) ( t ) d t = l = 1 n E ¯ l ( n ) a ^ j ( t ) · z ^ j ( n ) ( t ) d t l = 1 n d l ( n ) · a ^ l j ( n ) · z ¯ l j ( n ) ( by   ( 26 ) ) u ¯ ( 1 ; n ) + d ¯ j ( by   ( 33 ) ) .
We consider the following cases:
  • Suppose that t F l ( n ) for l = 1 , , n . For l = 1 , we have
    j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ^ i ( 2 ; n ) ( t ) + γ i ( K ) · u ^ i ( 3 ; n ) ( t ) + { j : j I i ( B ) } u ^ i j ( 4 ; n ) ( t ) + { j : j I i ( K ) } u ^ i j ( 5 ; n ) ( t ) = j = 1 q B i j ( 0 ) ( t ) · z ¯ 1 j ( n ) j = 1 q 0 t K 11 i j ( n ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ¯ 1 i ( 2 ; n ) + γ i ( K ) · u ¯ 1 i ( 3 ; n ) + { j : j I i ( B ) } u ¯ 1 i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ 1 i j ( 5 ; n ) + { j : j I i ( K ) } 0 t K ^ 11 i j ( n ) · z ^ j ( n ) ( s ) d s ( by   ( 85 ) ) j = 1 q B 1 i j ( n ) · z ¯ 1 j ( n ) + γ i ( B ) · u ¯ 1 i ( 2 ; n ) + γ i ( K ) · u ¯ 1 i ( 3 ; n ) + { j : j I i ( B ) } u ¯ 1 i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ 1 i j ( 5 ; n ) j = 1 q 0 t K 11 i j ( n ) K ^ 11 i j ( n ) · z ^ j ( n ) ( s ) d s ( by   ( 28 ) ) j = 1 q B 1 i j ( n ) · z ¯ 1 j ( n ) + γ i ( B ) · u ¯ 1 i ( 2 ; n ) + γ i ( K ) · u ¯ 1 i ( 3 ; n ) + { j : j I i ( B ) } u ¯ 1 i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ 1 i j ( 5 ; n ) ( since K 11 i j ( n ) K ^ 11 i j ( n ) = K ˘ 11 i j 0 ) c 1 i ( n ) ( by   ( 31 ) ) = c i ( t ) ( by   ( 26 ) ) .
    For l = 2 , , n , we have
j = 1 q B i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) j = 1 q 0 t K i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ^ i ( 2 ; n ) ( t ) + γ i ( K ) · u ^ i ( 3 ; n ) ( t ) + { j : j I i ( B ) } u ^ i j ( 4 ; n ) ( t ) + { j : j I i ( K ) } u ^ i j ( 5 ; n ) ( t ) j = 1 q B l i j ( n ) · z ^ j ( n ) ( t ) j = 1 q k = 1 l 1 E ¯ k ( n ) K l k i j ( n ) · z ^ j ( n ) ( s ) d s + e l 1 ( n ) t K l l i j ( n ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ^ i ( 2 ; n ) ( t ) + γ i ( K ) · u ^ i ( 3 ; n ) ( t ) + { j : j I i ( B ) } u ^ i j ( 4 ; n ) ( t ) + { j : j I i ( K ) } u ^ i j ( 5 ; n ) ( t ) ( by   ( 28 ) ) = j = 1 q B l i j ( n ) · z ¯ l j ( n ) j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z ¯ k j ( n ) + γ i ( B ) · u ¯ l i ( 2 ; n ) + γ i ( K ) · u ¯ l i ( 3 ; n ) + { j : j I i ( B ) } u ¯ l i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ l i j ( 5 ; n ) + { j : j I i ( K ) } e l 1 ( n ) t K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s j = 1 q e l 1 ( n ) t K l l i j ( n ) · z ^ j ( n ) ( s ) d s ( by   ( 85 ) ) j = 1 q B l i j ( n ) · z ¯ l j ( n ) j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z ¯ k j ( n ) + γ i ( B ) · u ¯ l i ( 2 ; n ) + γ i ( K ) · u ¯ l i ( 3 ; n ) + { j : j I i ( B ) } u ¯ l i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ l i j ( 5 ; n ) j = 1 q e l 1 ( n ) t K l l i j ( n ) K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s ( since   K ^ l l i j ( n ) 0 ) j = 1 q B l i j ( n ) · z ¯ l j ( n ) j = 1 q k = 1 l 1 d k ( n ) · K l k i j ( n ) · z ¯ k j ( n ) + γ i ( B ) · u ¯ l i ( 2 ; n ) + γ i ( K ) · u ¯ l i ( 3 ; n ) + { j : j I i ( B ) } u ¯ l i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ l i j ( 5 ; n ) ( since   K l l i j ( n ) K ^ l l i j ( n ) = K ˘ l l i j ( n ) 0 ) c l i ( n ) ( by   ( 32 ) ) = c i ( t ) ( by   ( 26 ) ) .
    For l = 1 , , n , by (30) and (34), we have
B ^ i j ( 0 ) ( t ) · z ^ j ( n ) ( t ) B ^ l i j ( n ) · z ¯ l j ( n ) u ¯ l i ( 2 ; n ) + u ¯ l i j ( 4 ; n ) = u ^ i ( 2 ; n ) ( t ) + u ^ i j ( 4 ; n ) ( t ) .
    For l = 1 , , n , by (30) and (36), we have
    0 t K ^ i j ( 0 ) ( t , s ) · z ^ j ( n ) ( s ) d s k = 1 l 1 E ¯ k ( n ) K ^ l k i j ( n ) · z ^ j ( n ) ( s ) d s + e l 1 ( n ) t K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s = k = 1 l 1 d k ( n ) · K ^ l k i j ( n ) · z ¯ k j ( n ) + e l 1 ( n ) t K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s u ¯ l i ( 3 ; n ) + u ¯ l i j ( 5 ; n ) + e l 1 ( n ) t K ^ l l i j ( n ) · z ^ j ( n ) ( s ) d s = u ^ i ( 3 ; n ) ( t ) + u ^ i j ( 5 ; n ) ( t ) .
  • Suppose that t = T . We have
j = 1 q B i j ( 0 ) ( T ) · z ^ j ( n ) ( T ) j = 1 q 0 T K i j ( 0 ) ( T , s ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ^ i ( 2 ; n ) ( T ) + γ i ( K ) · u ^ i ( 3 ; n ) ( T ) + { j : j I i ( B ) } u ^ i j ( 4 ; n ) ( T ) + { j : j I i ( K ) } u ^ i j ( 5 ; n ) ( T ) j = 1 q B n i j ( n ) · z ^ j ( n ) ( T ) j = 1 q k = 1 n 1 E ¯ k ( n ) K n k i j ( n ) · z ^ j ( n ) ( s ) d s j = 1 q E ¯ n ( n ) K n n i j ( n ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ^ i ( 2 ; n ) ( T ) + γ i ( K ) · u ^ i ( 3 ; n ) ( T ) + { j : j I i ( B ) } u ^ i j ( 4 ; n ) ( T ) + { j : j I i ( K ) } u ^ i j ( 5 ; n ) ( T ) = j = 1 q B n i j ( n ) · z ¯ n j ( n ) j = 1 q k = 1 n 1 d k ( n ) · K n k i j ( n ) · z ¯ k j ( n ) j = 1 q E ¯ n ( n ) K n n i j ( n ) · z ^ j ( n ) ( s ) d s + γ i ( B ) · u ¯ n i ( 2 ; n ) + γ i ( K ) · u ¯ n i ( 3 ; n ) + { j : j I i ( B ) } u ¯ n i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ n i j ( 5 ; n ) + { j : j I i ( K ) } E ¯ n ( n ) K ^ n n i j ( n ) · z ^ j ( n ) ( s ) d s j = 1 q B n i j ( n ) · z ¯ n j ( n ) j = 1 q k = 1 n 1 d k ( n ) · K n k i j ( n ) · z ¯ k j ( n ) + γ i ( B ) · u ¯ n i ( 2 ; n ) + γ i ( K ) · u ¯ n i ( 3 ; n ) + { j : j I i ( B ) } u ¯ n i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ n i j ( 5 ; n ) j = 1 q E ¯ n ( n ) K n n i j ( n ) K ^ n n i j ( n ) · z ^ j ( n ) ( s ) d s j = 1 q B n i j ( n ) · z ¯ n j ( n ) j = 1 q k = 1 n 1 d k ( n ) · K n k i j ( n ) · z ¯ k j ( n ) + γ i ( B ) · u ¯ n i ( 2 ; n ) + γ i ( K ) · u ¯ n i ( 3 ; n ) + { j : j I i ( B ) } u ¯ n i j ( 4 ; n ) + { j : j I i ( K ) } u ¯ n i j ( 5 ; n ) ( since   K n n i j ( n ) K ^ n n i j ( n ) = K ˘ n n i j ( n ) 0 ) c n i ( n ) ( by   ( 32 ) ) = c i ( T ) ( by   ( 26 ) ) .
    We also have
B ^ i j ( 0 ) ( T ) · z ^ j ( n ) ( T ) B ^ n i j ( n ) · z ¯ n j ( n ) u ¯ n i ( 2 ; n ) + u ¯ n i j ( 4 ; n ) = u ^ i ( 2 ; n ) ( T ) + u ^ i j ( 4 ; n ) ( T )
    and
    0 T K ^ i j ( 0 ) ( T , s ) · z ^ j ( n ) ( s ) d s k = 1 n E ¯ k ( n ) K ^ n k i j ( n ) · z ^ j ( n ) ( s ) d s = k = 1 n 1 d k ( n ) · K ^ n k i j ( n ) · z ¯ k j ( n ) + E ¯ n ( n ) K ^ n n i j ( n ) · z ^ j ( n ) ( s ) d s u ¯ n i ( 3 ; n ) + u ¯ n i j ( 5 ; n ) + E ¯ n ( n ) K ^ n n i j ( n ) · z ^ j ( n ) ( s ) d s = u ^ i ( 3 ; n ) ( T ) + u ^ i j ( 5 ; n ) ( T )
This shows that x ^ ( n ) is a feasible solution of problem ( RCLP * ) , and the proof is complete. □

5. Analytic Formula of the Error Bound

Recall that V ( P ) denotes the optimal objective value of problem (P). Since x ¯ ( n ) is an optimal solution of problem ( P n ) and x ^ ( n ) constructed from x ¯ ( n ) is a feasible solution of problem ( RCLP * ) , it follows that
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n ) ( t ) d t j I ( a ) d ¯ j ( n ) γ ( a ) · u ¯ ( 1 ; n ) l = 1 n j = 1 q E ¯ l ( n ) a l j ( n ) · z ¯ l j ( n ) j I ( a ) d ¯ j ( n ) γ ( a ) · u ¯ ( 1 ; n ) ( by   ( 26 ) ) = l = 1 n j = 1 q d l ( n ) · a l j ( n ) · z ¯ l j ( n ) j I ( a ) d ¯ j ( n ) γ ( a ) · u ¯ ( 1 ; n ) = V ( P n ) .
Therefore, we have
V ( RCLP * ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( n ) ( t ) d t j I ( a ) d ¯ j ( n ) γ ( a ) · u ¯ ( 1 ; n ) ( by   Proposition   6 ) V ( P n ) ( by   ( 87 ) ) .
According to the weak duality theorem for the primal and dual pair of problems ( DRCLP * ) and ( RCLP * ) presented in Theorem 2, we see that
V ( DRCLP * ) V ( RCLP * ) V ( P n ) = V ( D n ) .
In the sequel, we shall prove that
lim n V ( D n ) = V ( DRCLP * ) .
A feasible solution of problem ( DRCLP * ) is denoted by
y ( t ) = w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) ,
where
w ( t ) = w 1 ( t ) , , w i ( t ) , , w p ( t ) v ( 1 ) = v 1 ( 1 ) , , v j ( 1 ) , , v q ( 1 ) v ( 2 ) ( t ) = v 11 ( 2 ) ( t ) , , v 1 q ( 2 ) ( t ) , , v i 1 ( 2 ) ( t ) , , v i q ( 2 ) ( t ) , , v p 1 ( 2 ) ( t ) , , v p q ( 2 ) ( t ) v ( 3 ) ( t ) = v 11 ( 3 ) ( t ) , , v 1 q ( 3 ) ( t ) , , v i 1 ( 3 ) ( t ) , , v i q ( 3 ) ( t ) , , v p 1 ( 3 ) ( t ) , , v p q ( 3 ) ( t ) .
Let y ( n ) = ( w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ) be an optimal solution of problem ( D n ) . According to part (ii) of Proposition 4, we can construct another optimal solution y ¯ ( n ) = ( w ¯ ( n ) , v ¯ ( 1 ; n ) , v ¯ ( 2 ; n ) , v ¯ ( 3 ; n ) ) of problem ( D n ) such that the following inequalities are satisfied:
w ¯ l i ( n ) w l ( n ) τ σ · exp r · T · ν σ
for all n N , i = 1 , , p and l = 1 , , n . From (58) and (59), we also define
θ i j ( B ; n ) = min l = 1 , , n θ l i j ( B ; n )   and   θ i j ( K ; n ) = min l = 1 , , n θ l i j ( K ; n ) .
For each i = 1 , , p and j = 1 , , q , we define the real-valued functions B ˘ i j and a ˘ j on [ 0 , T ] , and K ˘ i j on [ 0 , T ] × [ 0 , T ] , respectively, by
B ˘ i j ( n ) ( t ) = B i j ( 0 ) ( t ) + χ i j ( B ) · θ i j ( B ; n ) · B ^ i j ( t )
K ˘ i j ( n ) ( s , t ) = K i j ( 0 ) ( s , t ) χ i j ( K ) · θ i j ( K ; n ) · K ^ i j ( s , t )
a ˘ j ( n ) ( t ) = a j ( 0 ) ( t ) χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ j ( t )
Then, given any t F l ( n ) and ( s , t ) F k ( n ) × F l ( n ) for l , k = 1 , , n , we have
B ˘ i j ( n ) ( t ) B ˘ l i j ( n ) , K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n )   and   a ˘ j ( n ) ( t ) a ˘ l j ( n ) .
For each l = 1 , , n and j = 1 , , q , we define the real-valued functions h ˘ l j ( n ) on F l ( n ) by
h ˘ l j ( n ) ( t ) = e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( n ) + i = 1 p B ˘ l i j ( n ) B ˘ i j ( n ) ( t ) · w ¯ l i ( n ) + i = 1 p t e l ( n ) K ˘ i j ( n ) ( s , t ) K ˘ l l i j ( n ) · w ¯ l i ( n ) d s + k = l + 1 n i = 1 p E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s .
For each j = 1 , , q , we also define the constants
r j ( n ) = i = 1 p B ˘ n i j ( n ) B ˘ i j ( n ) ( T ) · w ¯ n i ( n ) .
For l = 1 , , n , let
π ¯ l ( n ) = max j = 1 , … , q sup t ∈ E l ( n ) h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) − a ˘ l j ( n )
and
π l ( n ) = max k = l , … , n π ¯ k ( n ) .
Then,
π l ( n ) = max { π ¯ l ( n ) , π ¯ l + 1 ( n ) , … , π ¯ n ( n ) } = max { π ¯ l ( n ) , π l + 1 ( n ) }
which says that
π l ( n ) ≥ π l + 1 ( n )
and
π l ( n ) ≥ π ¯ l ( n ) ≥ h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) − a ˘ l j ( n )
for l = 1 , … , n − 1 and j = 1 , … , q . We want to prove
lim n → ∞ π ¯ l ( n ) = 0 = lim n → ∞ π l ( n ) .
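Numerically, the relation π l ( n ) = max { π ¯ l ( n ) , π l + 1 ( n ) } means the quantities π l ( n ) are suffix maxima of the interval quantities π ¯ l ( n ) and are computable in one backward pass. A minimal sketch (function name and sample values are illustrative, not from the paper):

```python
# Suffix-maxima sketch (illustrative data): pi[l] = max_{k >= l} pi_bar[k],
# computed backward via pi[l] = max(pi_bar[l], pi[l+1]).

def suffix_maxima(pi_bar):
    pi = list(pi_bar)
    for l in range(len(pi) - 2, -1, -1):
        pi[l] = max(pi[l], pi[l + 1])
    return pi

pi_bar = [0.3, 0.1, 0.4, 0.2]
pi = suffix_maxima(pi_bar)
assert pi == [0.4, 0.4, 0.4, 0.2]
# the monotonicity pi_l >= pi_{l+1} stated in the text holds by construction
assert all(pi[l] >= pi[l + 1] for l in range(len(pi) - 1))
```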
Some useful lemmas will be provided below.
Lemma 2.
For i = 1 , , p , j = 1 , , q and l = 1 , , n , we have
sup t ∈ E l ( n ) a ˘ j ( n ) ( t ) − a ˘ l j ( n ) → 0   and   sup t ∈ E l ( n ) B ˘ l i j ( n ) − B ˘ i j ( n ) ( t ) → 0   as   n → ∞ .
Proof. 
Using the argument of Lemma 4.1 in Wu [6], for l = 1 , , n , we can show that
0 sup t E l ( n ) a ^ l j ( n ) a ^ j ( t ) < ϵ 2 · 1 χ j ( a ) · v ¯ j ( 1 ; n )   for   j I ( a )   and   0 sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) < ϵ 2
and
0 sup t E l ( n ) B ^ l i j ( n ) B ^ i j ( t ) < ϵ 2 · 1 χ j ( B ) · θ i j ( B ; n )   for   j I i ( B )   and   0 sup t E l ( n ) B i j ( 0 ) ( t ) B l i j ( n ) < ϵ 2 .
Therefore, for j I ( a ) , we obtain
sup t E l ( n ) a ˘ j ( n ) ( t ) a ˘ l j ( n ) sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) + χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ l j ( n ) χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ j ( t ) sup t E l ( n ) a j ( 0 ) ( t ) a l j ( n ) + sup t E l ( n ) χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ l j ( n ) χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ j ( t ) ϵ 2 + ϵ 2 = ϵ
and, for j I i ( B ) , we obtain
sup t E l ( n ) B ˘ i j ( n ) ( t ) B ˘ l i j ( n ) sup t E l ( n ) B i j ( 0 ) ( t ) B l i j ( n ) + χ i j ( B ) · θ i j ( B ; n ) · B ^ i j ( t ) χ i j ( B ) · θ l i j ( B ; n ) · B ^ l i j ( n ) sup t E l ( n ) B i j ( 0 ) ( t ) B l i j ( n ) + χ i j ( B ) · θ i j ( B ; n ) · B ^ i j ( t ) χ i j ( B ) · θ i j ( B ; n ) · B ^ l i j ( n ) ( by ( 91 ) ) sup t E l ( n ) B i j ( 0 ) ( t ) B l i j ( n ) + sup t E l ( n ) χ i j ( B ) · θ i j ( B ; n ) · B ^ l i j ( n ) χ i j ( B ) · θ i j ( B ; n ) · B ^ i j ( t ) < ϵ 2 + ϵ 2 = ϵ .
This completes the proof. □
Lemma 3.
We have
sup t ∈ E l ( n ) ∫ E ¯ k ( n ) ( K ˘ i j ( n ) ( s , t ) − K ˘ k l i j ( n ) ) · w ¯ k i ( n ) d s → 0   as   n → ∞ .
Proof. 
Since w ¯ k i ( n ) is bounded by (92), using the argument of Lemma 4.2 in Wu [6], for l = 1 , , n , we can show that
sup t E l ( n ) E ¯ k ( n ) K ^ i j ( s , t ) K ^ k l i j ( n ) · w ¯ k i ( n ) d s < ϵ 2 · 1 χ i j ( K ) · θ i j ( K ; n )   for   j I i ( K )
and
sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s < ϵ 2 .
Therefore, for j I i ( K ) , we obtain
sup t E l ( n ) E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) + χ i j ( K ) · θ i j ( K ; n ) · K ^ i j ( s , t ) χ i j ( K ) · θ k i j ( K ; n ) · K ^ k l i j ( n ) · w ¯ k i ( n ) d s sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) + χ i j ( K ) · θ i j ( K ; n ) · K ^ i j ( s , t ) χ i j ( K ) · θ i j ( K ; n ) · K ^ k l i j ( n ) · w ¯ k i ( n ) d s ( b y ( 91 ) ) sup t E l ( n ) E ¯ k ( n ) K i j ( s , t ) K k l i j ( n ) · w ¯ k i ( n ) d s + sup t E l ( n ) E ¯ k ( n ) χ i j ( K ) · θ i j ( K ; n ) · K ^ i j ( s , t ) K ^ k l i j ( n ) · w ¯ k i ( n ) d s < ϵ 2 + ϵ 2 = ϵ .
This completes the proof. □
Lemma 4.
We have
lim n → ∞ π ¯ l ( n ) = 0 = lim n → ∞ π l ( n ) .
Proof. 
It suffices to prove
sup t ∈ E l ( n ) h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) − a ˘ l j ( n ) → 0   as   n → ∞ .
From (23), since
d l ( n ) ≤ ∥ P n ∥ ≤ r · T / n → 0   as   n → ∞ ,
it follows that, by (90),
( e l ( n ) − t ) · ∑ i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( n ) ≤ d l ( n ) · ∑ i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( n ) → 0   as   n → ∞ .
Now we have
0 sup t E l ( n ) h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) a ˘ l j ( n ) d l ( n ) · i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( n ) + i = 1 p w ¯ l i ( n ) · sup t E l ( n ) B ˘ l i j ( n ) B ˘ i j ( n ) ( t ) + k = l n i = 1 p sup t E l ( n ) E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s + sup t E l ( n ) a ˘ j ( n ) ( t ) a ˘ l j ( n )
Using Lemmas 2 and 3, we complete the proof. □
For convenience, we adopt the following notations:
k ¯ l ( n ) = max j = 1 , … , q sup ( s , t ) ∈ [ e l − 1 ( n ) , T ] × E ¯ l ( n ) ∑ i = 1 p K ˘ i j ( n ) ( s , t )
and
b ¯ l ( n ) = min j = 1 , … , q inf t ∈ E ¯ l ( n ) ∑ i = 1 p B ˘ i j ( n ) ( t ) .
From (20) and (21), we see that
k ¯ l ( n ) ≤ ν   and   b ¯ l ( n ) ≥ min j = 1 , … , q inf t ∈ [ 0 , T ] ∑ i = 1 p B ˘ i j ( n ) ( t ) ≥ min j = 1 , … , q inf t ∈ [ 0 , T ] ∑ i = 1 p B i j ( 0 ) ( t ) ≥ σ > 0 .
Let
k l ( n ) = max k = l , … , n k ¯ k ( n ) ≤ ν   and   b l ( n ) = min k = l , … , n b ¯ k ( n ) ≥ σ .
Then, 0 < b l ( n ) and
k l ( n ) ≥ k l + 1 ( n )   and   b l ( n ) ≤ b l + 1 ( n ) .
Now we define the real-valued functions u ( n ) and v ( n ) on [ 0 , T ] by
u ( n ) ( t ) = k l ( n )   if   t F l ( n )   for   l = 1 , , n k n ( n )   if   t = T
and
v ( n ) ( t ) = b l ( n )   if   t F l ( n )   for   l = 1 , , n b n ( n )   if   t = T .
Then, we have
u ( n ) ( t ) ≤ ν   and   v ( n ) ( t ) ≥ σ   for   t ∈ [ 0 , T ] .
Using (90) and Lemmas 2 and 3, we see that the sequence { h ˘ l j ( n ) } n = 1 ∞ is uniformly bounded. This also says that the sequence { π l ( n ) } n = 1 ∞ is uniformly bounded. In this case, there exists a constant x satisfying π l ( n ) ≤ x for all n ∈ N and l = 1 , … , n . For further discussion, we define a real-valued function p ( n ) on [ 0 , T ] given by
p ( n ) ( t ) = x   if   t = e l − 1 ( n )   for   l = 1 , … , n ; π l ( n )   if   t ∈ E l ( n )   for   l = 1 , … , n ; max j = 1 , … , q r j ( n ) + a ˘ j ( n ) ( T ) − a ˘ n j ( n )   if   t = e n ( n ) = T .
Then, we have
p ( n ) ( t ) ≤ x   for   all   n ∈ N   and   t ∈ [ 0 , T ) .
Lemma 5.
We define a real-valued function f ( n ) : [ 0 , T ] R + given by
f ( n ) ( t ) = ( p ( n ) ( t ) / v ( n ) ( t ) ) · exp ( u ( n ) ( t ) · ( T − t ) / v ( n ) ( t ) ) .
Then, for j = 1 , , q , we have
f ( n ) ( t ) · i = 1 p B ˘ i j ( n ) ( t ) h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) a ˘ l j ( n ) + i = 1 p t T f ( n ) ( s ) · K ˘ i j ( n ) ( s , t ) d s   for   t F l ( n ) .
and
f ( n ) ( T ) · i = 1 p B ˘ i j ( n ) ( T ) r j ( n ) + a ˘ j ( n ) ( T ) a ˘ n j ( n ) .
We also have that the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded.
Proof. 
For t F l ( n ) , from (105), we have
t T f ( n ) ( s ) d s = t T p ( n ) ( s ) v ( n ) ( s ) · exp u ( n ) ( s ) · ( T s ) v ( n ) ( s ) d s = t e l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s + k = l + 1 n E ¯ k ( n ) π k ( n ) b k ( n ) · exp k k ( n ) · ( T s ) b k ( n ) d s t e l ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s + k = l + 1 n E ¯ k ( n ) π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s ( by   ( 99 )   and   ( 102 ) ) = t T π l ( n ) b l ( n ) · exp k l ( n ) · ( T s ) b l ( n ) d s = π l ( n ) k l ( n ) · exp k l ( n ) · ( T t ) b l ( n ) 1
Since
b l ( n ) · f ( n ) ( t ) = x · exp k l ( n ) · ( T t ) b l ( n )   if   t = e l 1 ( n )   for   l = 1 , , n π l ( n ) · exp k l ( n ) · ( T t ) b l ( n )   if   t E l ( n )   for   l = 1 , , n ,
using (106), it follows that, for t F l ( n ) ,
b l ( n ) · f ( n ) ( t ) x · 1 + k l ( n ) π l ( n ) · t T f ( n ) ( s ) d s   if   t = e l 1 ( n )   for   l = 1 , , n π l ( n ) + k l ( n ) · t T f ( n ) ( s ) d s   if   t E l ( n )   for   l = 1 , , n . π l ( n ) + k l ( n ) · t T f ( n ) ( s ) d s ( since π l ( n ) x   for   all   l = 1 , , n ) .
For t = e n ( n ) = T , we also have
b n ( n ) · f ( n ) ( T ) = max j = 1 , , q r j ( n ) + a ˘ j ( n ) ( T ) a ˘ n j ( n ) .
For each j = 1 , , q and l = 1 , , n , we consider the following cases:
  • For t = e n ( n ) = T , from (108), we have
    i = 1 p B ˘ i j ( n ) ( T ) · f ( n ) ( T ) b n ( n ) · f ( n ) ( T ) r j ( n ) + a ˘ j ( n ) ( T ) a ˘ n j ( n ) .
  • For t F l ( n ) , by (100) and (107), we have
    i = 1 p B ˘ i j ( n ) ( t ) · f ( n ) ( t ) b ¯ l ( n ) · f ( n ) ( t ) b l ( n ) · f ( n ) ( t ) h ˘ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) a ˘ l j ( n ) + t T i = 1 p K ˘ i j ( n ) ( s , t ) · f ( n ) ( s ) d s .
Finally, using (103) and (104), it is clear to see that the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded, and the proof is complete. □
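The bound in the proof rests on the closed-form evaluation of ∫ t T f ( n ) ( s ) d s when p ( n ) , u ( n ) , v ( n ) are frozen at the constants π l ( n ) , k l ( n ) , b l ( n ) . A quick numeric sanity check, with made-up constants, is:

```python
import math

# Check: with constants pi_l, k_l, b_l on [t, T], the integral of
# f(s) = (pi_l / b_l) * exp(k_l * (T - s) / b_l) over [t, T] equals
# (pi_l / k_l) * (exp(k_l * (T - t) / b_l) - 1).  Constants are made up.

def f_integral_quadrature(pi_l, k_l, b_l, t, T, steps=200_000):
    # midpoint-rule quadrature
    h = (T - t) / steps
    return sum((pi_l / b_l) * math.exp(k_l * (T - (t + (i + 0.5) * h)) / b_l) * h
               for i in range(steps))

def f_integral_closed_form(pi_l, k_l, b_l, t, T):
    return (pi_l / k_l) * (math.exp(k_l * (T - t) / b_l) - 1.0)

args = (0.5, 2.0, 1.5, 0.3, 1.0)  # pi_l, k_l, b_l, t, T
assert abs(f_integral_quadrature(*args) - f_integral_closed_form(*args)) < 1e-6
```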
Now, we define a vector-valued function w ^ ( n ) ( t ) : [ 0 , T ] R p by
w ^ ( n ) ( t ) = w ¯ l ( n ) + f ( n ) ( t ) 1 p   if   t F l ( n )   for   l = 1 , , n w ¯ n ( n ) + f ( n ) ( T ) 1 p   if   t = T .
Remark 2.
Lemma 5 says that the sequence of real-valued functions { f ( n ) } n = 1 is uniformly bounded. Using (90), it is clear to see that the family of vector-valued functions { w ^ ( n ) } n N is uniformly bounded.
Proposition 7.
Let y ( n ) = ( w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ) be an optimal solution of problem ( D n ) , and let y ¯ ( n ) = ( w ¯ ( n ) , v ¯ ( 1 ; n ) , v ¯ ( 2 ; n ) , v ¯ ( 3 ; n ) ) be another optimal solution of problem ( D n ) constructed from part (ii) of Proposition 4. Let w ^ ( n ) be constructed from ( 109 ) . We define
v ^ j ( 1 ; n ) = v ¯ j ( 1 ; n ) , v ^ i j ( 2 ; n ) ( t ) = θ i j ( B ; n ) · w ^ i ( n ) ( t )   and   v ^ i j ( 3 ; n ) ( t ) = θ i j ( K ; n ) · w ^ i ( n ) ( t )
for i = 1 , , p and j = 1 , , q , where θ i j ( B ; n ) and θ i j ( K ; n ) are defined in ( 91 ) . Then,
y ^ ( n ) ( t ) = w ^ ( n ) ( t ) , v ^ ( 1 ; n ) , v ^ ( 2 ; n ) ( t ) , v ^ ( 3 ; n ) ( t )
is a feasible solution of problem ( DRCLP * ) .
Proof. 
For l = 1 , , n and j = 1 , , q , we define the real-valued functions b j ( n ) on F l ( n ) by
b j ( n ) ( t ) = i = 1 p B ˘ i j ( n ) ( t ) B ˘ l i j ( n ) · w ¯ l i ( n ) i = 1 p t e l ( n ) K ˘ i j ( n ) ( s , t ) K ˘ l l i j ( n ) · w ¯ l i ( n ) d s k = l + 1 n i = 1 p E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s + f ( n ) ( t ) · i = 1 p B ˘ i j ( n ) ( t ) i = 1 p t T f ( n ) ( s ) · K ˘ i j ( n ) ( s , t ) d s ,
which implies
b j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B ˘ i j ( n ) ( t ) i = 1 p t T f ( n ) ( s ) · K ˘ i j ( n ) ( s , t ) d s = i = 1 p B ˘ l i j ( n ) B ˘ i j ( n ) ( t ) · w ¯ l i ( n ) + i = 1 p t e l ( n ) K ˘ i j ( n ) ( s , t ) K ˘ l l i j ( n ) · w ¯ l i ( n ) d s + k = l + 1 n i = 1 p E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s .
Therefore, by adding the term ( e l ( n ) t ) · i = 1 p K ˘ l l i j ( n ) · w ¯ l i on both sides, we obtain
e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i b j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B ˘ i j ( n ) ( t ) i = 1 p t T f ( n ) ( s ) · K ˘ i j ( n ) ( s , t ) d s = e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i + i = 1 p B ˘ l i j ( n ) B ˘ i j ( n ) ( t ) · w ¯ l i ( n ) + i = 1 p t e l ( n ) K ˘ i j ( n ) ( s , t ) K ˘ l l i j ( n ) · w ¯ l i ( n ) d s + k = l + 1 n i = 1 p E ¯ k ( n ) K ˘ i j ( n ) ( s , t ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s = h ˘ l j ( n ) ( t ) .
which implies
b j ( n ) ( t ) e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i = h ˘ l j ( n ) ( t ) + f ( n ) ( t ) · i = 1 p B ˘ i j ( n ) ( t ) i = 1 p t T f ( n ) ( s ) · K ˘ i j ( n ) ( s , t ) d s a ˘ j ( n ) ( t ) a ˘ l j ( n ) ( by Lemma 5 ) .
Now, from (109) and (110), we obtain
i = 1 p B ˘ i j ( n ) ( t ) · w ^ i ( n ) ( t ) i = 1 p t T K ˘ i j ( n ) ( s , t ) · w ^ i ( n ) ( s ) d s = i = 1 p B ˘ l i j ( n ) · w ¯ l i ( n ) i = 1 p t e l ( n ) K ˘ l l i j ( n ) · w ¯ l i ( n ) d s k = l + 1 n i = 1 p E ¯ k ( n ) K ˘ k l i j ( n ) · w ¯ k i ( n ) d s + b j ( n ) ( t ) = i = 1 p B l i j ( n ) · w ¯ l i + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · θ l i j ( B ; n ) · w ¯ l i e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i k = l + 1 n i = 1 p d k ( n ) · K k l i j ( n ) · w ¯ k i + k = l + 1 n i = 1 p χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · θ k i j ( K ; n ) · w ¯ k i + b j ( n ) ( t ) ( by   ( 62 )   and   ( 63 ) ) = i = 1 p B l i j ( n ) · w ¯ l i + i = 1 p χ i j ( B ) · B ^ l i j ( n ) · v ¯ l i j ( 2 ) k = l + 1 n i = 1 p d k ( n ) · K k l i j ( n ) · w ¯ k i + k = l + 1 n i = 1 p χ i j ( K ) · d k ( n ) · K ^ k l i j ( n ) · v ¯ k i j ( 3 ) + b j ( n ) ( t ) e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( by   the   construction   of   y ¯ ( n )   from   part   ( i i ) of   Proposition   4 ) a l j ( n ) χ j ( a ) · a ^ l j ( n ) · v ¯ j ( 1 ; n ) + b j ( n ) ( t ) e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( by   ( 37 )   for   the   feasibility   of   y ¯ ( n ) ) = a ˘ l j ( n ) + b j ( n ) ( t ) e l ( n ) t · i = 1 p K ˘ l l i j ( n ) · w ¯ l i ( by   ( 61 ) ) a ˘ l j ( n ) + a ˘ j ( n ) ( t ) a ˘ l j ( n ) ( by   ( 111 ) ) = a ˘ j ( n ) ( t )
which says that, by (92)–(94),
i = 1 p B i j ( 0 ) ( t ) · w ^ i ( n ) ( t ) + χ j ( a ) · v ¯ j ( 1 ; n ) · a ^ j ( t ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · θ i j ( B ; n ) · w ^ i ( n ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( s , t ) · θ i j ( K ; n ) · w ^ i ( n ) ( s ) d s a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( n ) ( s ) d s ;
that is,
i = 1 p B i j ( 0 ) ( t ) · w ^ i ( n ) ( t ) + χ j ( a ) · v ^ j ( 1 ; n ) · a ^ j ( t ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v ^ i j ( 2 ; n ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( s , t ) · v ^ i j ( 2 ; n ) ( s ) d s a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( s , t ) · w ^ i ( n ) ( s ) d s ,
Suppose that t = T . We define
b ^ j ( n ) = i = 1 p B ˘ i j ( n ) ( T ) B ˘ n i j ( n ) · w ¯ n i ( n ) + f ( n ) ( T ) · i = 1 p B ˘ i j ( n ) ( T ) ,
which implies
b ^ j ( n ) = r j ( n ) + f ( n ) ( T ) · i = 1 p B ˘ i j ( n ) ( T ) a ˘ j ( n ) ( T ) a ˘ n j ( n ) ( by   Lemma   5 )
Now, we obtain
i = 1 p B ˘ i j ( n ) ( T ) · w ^ i ( n ) ( T ) = i = 1 p B ˘ n i j ( n ) · w ¯ n i ( n ) + b ^ j ( n ) = i = 1 p B n i j ( n ) · w ¯ n i + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · θ n i j ( B ; n ) · w ¯ n i + b ^ j ( n ) = i = 1 p B n i j ( n ) · w ¯ n i + i = 1 p χ i j ( B ) · B ^ n i j ( n ) · v ¯ n i j ( 2 ) + b ^ j ( n ) a n j ( n ) χ j ( a ) · a ^ n j ( n ) · v ¯ j ( 1 ; n ) + b ^ j ( n ) ( by   ( 38 ) ) = a ˘ n j ( n ) + b ^ j ( n ) ( by   ( 61 ) ) a ˘ n j ( n ) + a ˘ j ( n ) ( T ) a ˘ n j ( n ) ( using   ( 112 ) ) = a ˘ j ( n ) ( T ) ,
which says that, by (92) and (94),
∑ i = 1 p B i j ( 0 ) ( T ) · w ^ i ( n ) ( T ) + ∑ i = 1 p χ i j ( B ) · B ^ i j ( T ) · θ i j ( B ; n ) · w ^ i ( n ) ( T ) + χ j ( a ) · v ^ j ( 1 ; n ) · a ^ j ( T ) ≥ a j ( 0 ) ( T ) ;
that is,
i = 1 p B i j ( 0 ) ( T ) · w ^ i ( n ) ( T ) + i = 1 p χ i j ( B ) · B ^ i j ( T ) · v ^ i j ( 2 ; n ) ( T ) + χ j ( a ) · v ^ j ( 1 ; n ) · a ^ j ( T ) a j ( 0 ) ( T ) .
Therefore, we conclude that the constraint (13) is satisfied.
From (60) and (91), we have
γ i ( B ) · w ^ i ( n ) ( t ) j I i ( B ) θ l i j ( B ; n ) · w ^ i ( n ) ( t ) j I i ( B ) θ i j ( B ; n ) · w ^ i ( n ) ( t ) = j I i ( B ) v ^ i j ( 2 ; n ) ( t )
and
γ i ( K ) · w ^ i ( n ) ( t ) j I i ( K ) θ l i j ( K ; n ) · w ^ i ( n ) ( t ) j I i ( K ) θ i j ( K ; n ) · w ^ i ( n ) ( t ) = j I i ( K ) v ^ i j ( 3 ; n ) ( t ) ,
which say that the constraints (15) and (16) are satisfied. The other constraints can be easily realized, which says that y ^ ( n ) is a feasible solution of problem ( DRCLP * ) . This completes the proof. □
For i = 1 , , p and j = 1 , , q , we define the step functions a ¯ j ( n ) : [ 0 , T ] R and c ¯ i ( n ) : [ 0 , T ] R as follows:
a ¯ j ( n ) ( t ) = a l j ( n )   if   t F l ( n )   for   l = 1 , , n a n j ( n )   if   t = T .
and
c ¯ i ( n ) ( t ) = c l i ( n )   if   t F l ( n )   for   l = 1 , , n c n i ( n )   if   t = T ,
respectively. For i = 1 , , p , we also define step function w ¯ i ( n ) ( t ) : [ 0 , T ] R by
w ¯ i ( n ) ( t ) = w ¯ l i ( n )   if   t F l ( n )   for   l = 1 , , n w ¯ n i ( n )   if   t = T .
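In an implementation, a ¯ j ( n ) , c ¯ i ( n ) and w ¯ i ( n ) are ordinary right-continuous step functions over the partition; a small sketch (breakpoints and level values are made up for illustration):

```python
import bisect

# Evaluate a step function that is constant on each partition interval
# F_l = [e_{l-1}, e_l), with the last level value reused at t = T.

def step_value(t, breakpoints, values):
    if t == breakpoints[-1]:          # closed at t = T
        return values[-1]
    l = bisect.bisect_right(breakpoints, t) - 1
    return values[l]

e = [0.0, 0.25, 0.5, 0.75, 1.0]       # e_0 < e_1 < ... < e_n = T
vals = [1.0, 2.0, 3.0, 4.0]           # one level per interval
assert step_value(0.0, e, vals) == 1.0
assert step_value(0.25, e, vals) == 2.0   # right-continuous at breakpoints
assert step_value(1.0, e, vals) == 4.0
```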
Lemma 6.
For i = 1 , , p and j = 1 , , q , we have
∫ 0 T ( a j ( 0 ) ( t ) − a ¯ j ( n ) ( t ) ) · z ^ j ( n ) ( t ) d t → 0   as   n → ∞
and
∫ 0 T ( c i ( t ) − c ¯ i ( n ) ( t ) ) · w ¯ i ( n ) ( t ) d t → 0   as   n → ∞ .
Proof. 
We can see that the following functions
( a j ( 0 ) ( t ) − a ¯ j ( n ) ( t ) ) · z ^ j ( n ) ( t )   and   ( c i ( t ) − c ¯ i ( n ) ( t ) ) · w ¯ i ( n ) ( t )
are continuous a.e. on [ 0 , T ] , which says that they are Riemann-integrable on [ 0 , T ] . For l = 1 , … , n and t ∈ E l ( n ) , using Lemma 2, we have
0 ≤ a j ( 0 ) ( t ) − a ¯ j ( n ) ( t ) = a j ( 0 ) ( t ) − a l j ( n ) ≤ sup t ∈ E l ( n ) ( a j ( 0 ) ( t ) − a l j ( n ) ) → 0   as   n → ∞
and
0 ≤ c i ( t ) − c ¯ i ( n ) ( t ) = c i ( t ) − c l i ( n ) ≤ sup t ∈ E l ( n ) ( c i ( t ) − c l i ( n ) ) → 0   as   n → ∞ ,
which implies
a j ( 0 ) ( t ) − a ¯ j ( n ) ( t ) → 0   and   c i ( t ) − c ¯ i ( n ) ( t ) → 0   as   n → ∞   a.e.   on   [ 0 , T ] .
Using Proposition 5, we see that the sequence { z ^ j ( n ) } n = 1 ∞ is uniformly bounded. Since the Riemann and Lebesgue integrals coincide for these functions, we can use the Lebesgue bounded convergence theorem to obtain (113). Using (90), we also see that the sequence { w ¯ i ( n ) } n = 1 ∞ is uniformly bounded. Therefore, we can use the Lebesgue bounded convergence theorem again to obtain (114). This completes the proof. □
Theorem 3.
We have the following properties.
(i)
We have
lim sup n → ∞ V ( D n ) = V ( DRCLP * )   and   0 ≤ V ( DRCLP * ) − V ( D n ) ≤ ε n ,
where
ε n = − V ( D n ) + ∑ l = 1 n ∑ i = 1 p ∫ E ¯ l ( n ) c i ( t ) · w ¯ l i ( n ) d t + ∑ l = 1 n ∑ i = 1 p ∫ E ¯ l ( n ) ( π l ( n ) / b l ( n ) ) · exp ( k l ( n ) · ( T − t ) / b l ( n ) ) · c i ( t ) d t
satisfying ε n 0 as n . There also exists a convergent subsequence { V ( D n k ) } k = 1 of { V ( D n ) } n = 1 satisfying
lim k V ( D n k ) = V ( DRCLP * ) .
(ii)
(No Duality Gap). Suppose that the primal problem ( P n ) is feasible. Then, we have
V ( DRCLP * ) = V ( RCLP * ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 ≤ V ( RCLP * ) − V ( P n ) ≤ ε n .
Proof. 
To prove part (i), we have
0 V ( DRCLP * ) V ( D n ) ( by ( 88 ) ) = V ( DRCLP * ) l = 1 n i = 1 p d l ( n ) · c l i ( n ) · w ¯ l i = V ( DRCLP * ) l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t i = 1 p 0 T c i ( t ) · w ^ i ( n ) ( t ) d t l = 1 n i = 1 p E ¯ l ( n ) c l i ( n ) · w ¯ l i ( n ) d t ( by   Proposition   7 ) = l = 1 n i = 1 p E ¯ l ( n ) c i ( t ) c l i ( n ) · w ¯ l i ( n ) d t + i = 1 p 0 T f ( n ) ( t ) · c i ( t ) d t ε n
which implies
V ( D n ) ≤ V ( DRCLP * ) ≤ V ( D n ) + ε n .
Using Lemma 6, we obtain
0 ≤ ∑ l = 1 n ∫ E ¯ l ( n ) ( c i ( t ) − c l i ( n ) ) · w ¯ l i ( n ) d t = ∫ 0 T ( c i ( t ) − c ¯ i ( n ) ( t ) ) · w ¯ i ( n ) ( t ) d t → 0   as   n → ∞ .
From (101), we have k l ( n ) ≤ ν and b l ( n ) ≥ σ for all n. Using Lemma 4, we have π l ( n ) → 0 as n → ∞ . Therefore, we obtain p ( n ) → 0 as n → ∞ a.e. on [ 0 , T ] . This also shows that f ( n ) → 0 as n → ∞ a.e. on [ 0 , T ] . We can use the Lebesgue bounded convergence theorem to obtain
∫ 0 T f ( n ) ( t ) · c i ( t ) d t → 0   as   n → ∞ .
From (119) and (120), we conclude that ε n 0 as n . From (120), we also obtain
lim sup n → ∞ V ( D n ) ≤ V ( DRCLP * ) ≤ lim sup n → ∞ V ( D n ) + lim sup n → ∞ ε n = lim sup n → ∞ V ( D n ) .
Using part (ii) of Proposition 4, it follows that { V ( D n ) } n = 1 is a bounded sequence. This says that there exists a convergent subsequence { V ( D n k ) } k = 1 of { V ( D n ) } n = 1 . Using (118), we can obtain the equality (116). It is clear to see that ε n can be written as (115).
To prove part (ii), using part (i) and inequality (88), we obtain
V ( DRCLP * ) V ( RCLP * ) lim sup n V ( D n ) = V ( DRCLP * ) .
Since V ( D n ) = V ( P n ) for each n N , we also have
V ( DRCLP * ) = V ( RCLP * ) = lim sup n V ( D n ) = lim sup n V ( P n )
and
0 ≤ V ( RCLP * ) − V ( P n ) = V ( DRCLP * ) − V ( D n ) ≤ ε n .
This completes the proof. □
Proposition 8.
We have the following properties.
(i)
Suppose that x ¯ ( n ) is an optimal solution of primal problem ( P n ) . Let x ^ ( n ) be constructed from x ¯ ( n ) as given in Proposition 6 by
x ^ ( n ) ( t ) = z ^ ( n ) ( t ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ^ ( 2 ; n ) ( t ) , u ^ ( 3 ; n ) ( t ) , u ^ ( 4 ; n ) ( t ) , u ^ ( 5 ; n ) ( t ) .
The error between the optimal objective value V ( RCLP * ) and the objective value at x ^ ( n ) ( t ) is less than or equal to ε n defined in ( 115 ) , i.e.,
0 ≤ V ( RCLP * ) − ( ∑ j = 1 q ∫ 0 T a j ( 0 ) ( t ) · z ^ j ( n ) ( t ) d t − ∑ j ∈ I ( a ) d ¯ j ( n ) − γ ( a ) · u ¯ ( 1 ; n ) ) ≤ ε n .
(ii)
Suppose that y ( n ) = ( w ( n ) , v ( 1 ; n ) , v ( 2 ; n ) , v ( 3 ; n ) ) is an optimal solution of problem ( D n ) . Let y ¯ ( n ) = ( w ¯ ( n ) , v ¯ ( 1 ; n ) , v ¯ ( 2 ; n ) , v ¯ ( 3 ; n ) ) be another optimal solution of problem ( D n ) constructed from part (ii) of Proposition 4. Let w ^ ( n ) be constructed from ( 109 ) . We define
v ^ j ( 1 ; n ) = v ¯ j ( 1 ; n ) , v ^ i j ( 2 ; n ) ( t ) = θ i j ( B ; n ) · w ^ i ( n ) ( t )   and   v ^ i j ( 3 ; n ) ( t ) = θ i j ( K ; n ) · w ^ i ( n ) ( t )
for i = 1 , … , p and j = 1 , … , q , where θ i j ( B ; n ) and θ i j ( K ; n ) are defined in ( 91 ) . Then,
y ^ ( n ) ( t ) = w ^ ( n ) ( t ) , v ^ ( 1 ; n ) , v ^ ( 2 ; n ) ( t ) , v ^ ( 3 ; n ) ( t )
is a feasible solution of problem ( DRCLP * ) , and the error between the optimal objective value V ( DRCLP * ) and the objective value at y ^ ( n ) ( t ) is less than or equal to ε n , i.e.,
0 ≤ ∑ i = 1 p ∫ 0 T c i ( t ) · w ^ i ( n ) ( t ) d t − V ( DRCLP * ) ≤ ε n .
Proof. 
To prove part (i), using Proposition 6, we see that x ^ ( n ) is a feasible solution of problem ( RCLP * ) with the following objective value
j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) d ¯ j γ ( a ) · u ¯ ( 1 ) .
Then,
0 V ( RCLP * ) j = 1 q 0 T a j ( 0 ) ( t ) · z ^ j ( t ) d t j I ( a ) d ¯ j γ ( a ) · u ¯ ( 1 ) V ( RCLP * ) j = 1 q 0 T a l j ( n ) · z ^ j ( t ) d t j I ( a ) d ¯ j γ ( a ) · u ¯ ( 1 ) ( since   a l j ( n ) a j ( 0 ) ( t )   for   t E ¯ l ( n ) ) = V ( RCLP * ) j = 1 q l = 1 n d l ( n ) · a l j ( n ) · z l j j I ( a ) d ¯ j γ ( a ) · u ¯ ( 1 ) = V ( RCLP * ) V ( P n ) ε n ( by   part   ( i i )   of   Theorem   3 ) .
To prove part (ii), we have
0 i = 1 p 0 T c i ( t ) · w ^ i ( n ) ( t ) d t V ( DRCLP * ) ( by   Proposition   7 ) i = 1 p 0 T c i ( t ) · w ^ i ( n ) ( t ) d t V ( D n ) ( since   V ( D n ) V ( DRCLP * ) by   part   ( i )   of   Theorem 3 ) = ε n ( by   ( 117 )   and   ( 118 ) )
This completes the proof. □
Definition 1.
Given any ϵ > 0 , we say that the feasible solution
x ( ϵ ) ( t ) = z ( ϵ ) ( t ) , d ( ϵ ) , u ( 1 ; ϵ ) , u ( 2 ; ϵ ) ( t ) , u ( 3 ; ϵ ) ( t ) , u ( 4 ; ϵ ) ( t ) , u ( 5 ; ϵ ) ( t )
of problem ( RCLP * ) is an ϵ -optimal solution when
0 ≤ V ( RCLP * ) − ( ∑ j = 1 q ∫ 0 T a j ( 0 ) ( t ) · z j ( ϵ ) ( t ) d t − ∑ j ∈ I ( a ) d j ( ϵ ) − γ ( a ) · u ( 1 ; ϵ ) ) < ϵ .
We say that the feasible solution
y ( ϵ ) ( t ) = w ( ϵ ) ( t ) , v ( 1 ; ϵ ) , v ( 2 ; ϵ ) ( t ) , v ( 3 ; ϵ ) ( t )
of problem V ( DRCLP * ) is an ϵ -optimal solution when
0 ≤ ∑ i = 1 p ∫ 0 T c i ( t ) · w i ( ϵ ) ( t ) d t − V ( DRCLP * ) < ϵ .
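Operationally, Definition 1 reduces to a gap test between a feasible solution's objective value and the optimal value. A hypothetical helper (names are illustrative, not from the paper), shown for the dual side:

```python
# Gap test for epsilon-optimality in the sense of Definition 1: the dual
# problem is a minimization, so any feasible solution's objective value is
# at least the optimum, and the gap must be nonnegative and below epsilon.

def is_eps_optimal_dual(objective_value, optimal_value, eps):
    gap = objective_value - optimal_value
    return 0.0 <= gap < eps

assert is_eps_optimal_dual(10.05, 10.0, eps=0.1)
assert not is_eps_optimal_dual(10.20, 10.0, eps=0.1)
```

In practice the exact optimal value is unknown; Theorem 4 below replaces it with the computable bound ε n.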
Theorem 4.
Given any ϵ > 0 , we have the following properties.
(i)
There exists an integer n ∈ N such that x ^ ( n ) obtained from Proposition 8 satisfies x ( ϵ ) = x ^ ( n ) and ε n < ϵ . This also means that the ϵ-optimal solution of problem ( RCLP * ) exists.
(ii)
There exists an integer n ∈ N such that y ^ ( n ) obtained from Proposition 8 satisfies y ( ϵ ) = y ^ ( n ) and ε n < ϵ . This also means that the ϵ-optimal solution of problem ( DRCLP * ) exists.
Proof. 
Part (i) of Theorem 3 says that ε n → 0 as n → ∞ . Given any ϵ > 0 , using Proposition 8, there exists n ∈ N satisfying ε n < ϵ . In this case, the desired results follow immediately. □

6. Convergence of Approximate Solutions

We are going to study the convergent properties of the sequence { x ^ ( n ) } n = 1 that is constructed from the optimal solutions x ¯ ( n ) of problem ( P n ) and the sequence { y ^ ( n ) } n = 1 that is constructed from the optimal solutions y ¯ ( n ) of problem ( D n ) .
Let y be a feasible solution of dual problem ( DRCLP * ) given by
y ( t ) = w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) .
For t [ 0 , T ] , we define the functions
θ i j ( B ) ( t ) = v i j ( 2 ) ( t ) / w i ( t )   if   w i ( t ) ≠ 0   and   θ i j ( B ) ( t ) = 0   if   w i ( t ) = 0 ,   for   i = 1 , … , p   and   j ∈ I i ( B )
and
θ i j ( K ) ( t ) = v i j ( 3 ) ( t ) / w i ( t )   if   w i ( t ) ≠ 0   and   θ i j ( K ) ( t ) = 0   if   w i ( t ) = 0 ,   for   i = 1 , … , p   and   j ∈ I i ( K ) .
The constraints (18) and (19) say that θ i j ( B ) ( t ) ≤ 1 and θ i j ( K ) ( t ) ≤ 1 for t ∈ [ 0 , T ] . From the constraints (15) and (16), we also have
γ i ( B ) ≤ ∑ j ∈ I i ( B ) θ i j ( B ) ( t )   and   γ i ( K ) ≤ ∑ j ∈ I i ( K ) θ i j ( K ) ( t )   for   i = 1 , … , p   and   t ∈ [ 0 , T ] .
For i = 1 , , p and j = 1 , , q , we also define the functions
B ˘ i j ( t ) = B i j ( 0 ) ( t ) + χ i j ( B ) · θ i j ( B ) ( t ) · B ^ i j ( t )
and
K ˘ i j ( t , s ) = K i j ( 0 ) ( t , s ) − χ i j ( K ) · θ i j ( K ) ( s ) · K ^ i j ( t , s ) .
Let σ ˘ = max { σ , σ ^ } , where σ and σ ^ are given in (21) and (22), respectively. For convenience, we define a real-valued function η ˘ on [ 0 , T ] by
η ˘ ( t ) = ( τ / σ ˘ ) · exp ( ν · ( T − t ) / σ ˘ ) .
Then, a useful lemma is given below.
Lemma 7.
Let y be a feasible solution of dual problem ( DRCLP * ) given by
y ( t ) = w ( t ) , v ( 1 ) , v ( 2 ) ( t ) , v ( 3 ) ( t ) .
For each i = 1 , , p and j = 1 , , q , we define
w ¯ i ( t ) = min { w i ( t ) , η ˘ ( t ) }   and   v ¯ j ( 1 ) = v j ( 1 ) ,
where η ˘ ( t ) is defined in ( 126 ) . We also define the functions
v ¯ i j ( 2 ) ( t ) = θ i j ( B ) ( t ) · w ¯ i ( t )   for   i = 1 , , p   and   j I i ( B )
and
v ¯ i j ( 3 ) ( t ) = θ i j ( K ) ( t ) · w ¯ i ( t )   for   i = 1 , , p   and   j I i ( K ) ,
where θ i j ( B ) ( t ) and θ i j ( K ) ( t ) are defined in ( 121 ) and ( 122 ) , respectively. Then,
y ¯ ( t ) = w ¯ ( t ) , v ¯ ( 1 ) , v ¯ ( 2 ) ( t ) , v ¯ ( 3 ) ( t )
is a feasible solution of dual problem ( DRCLP * ) .
Proof. 
We first have
i = 1 p B i j ( 0 ) ( t ) · w i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · θ i j ( B ) ( t ) · B ^ i j ( t ) · w i ( t ) + i = 1 p t T χ i j ( K ) · θ i j ( K ) ( s ) · K ^ i j ( t , s ) · w i ( s ) d s = i = 1 p B i j ( 0 ) ( t ) · w i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v i j ( 2 ) ( t ) + i = 1 p t T χ i j ( K ) · K ^ i j ( t , s ) · v i j ( 3 ) ( s ) d s a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( t , s ) · w i ( s ) d s ( by   the   feasibility   of   y   again ) ,
which implies
i = 1 p B ˘ i j ( t ) · w i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) w i ( s ) d s ( by   ( 124 )   and   ( 125 ) ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) w ¯ i ( s ) d s ( since   K ˘ i j ( t , s ) 0   and   w ¯ i ( t ) w i ( t ) ) .
For any fixed t [ 0 , T ] , we define the index sets
J ≤ = { i : w i ( t ) ≤ η ˘ ( t ) }   and   J > = { i : w i ( t ) > η ˘ ( t ) }
and consider
∑ i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) = ∑ i ∈ J ≤ B ˘ i j ( t ) · w ¯ i ( t ) + ∑ i ∈ J > B ˘ i j ( t ) · w ¯ i ( t ) .
Then, for each fixed j, three cases are considered below.
  • Suppose that J > = ∅ (i.e., the second sum is zero). Then, w i ( t ) = w ¯ i ( t ) for all i. Therefore, from (127), we have
    i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) = i = 1 p B ˘ i j ( t ) · w i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) · w ¯ i ( s ) d s .
  • Suppose that J > ≠ ∅ and B ˘ i j ( t ) = 0 for all i ∈ J > . Then,
    i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) = i J B ˘ i j ( t ) · w ¯ i ( t ) + i J > B ˘ i j ( t ) · w ¯ i ( t ) = i J B ˘ i j ( t ) · w i ( t ) + i J > B ˘ i j ( t ) · w i ( t ) = i = 1 p B ˘ i j ( t ) · w i ( t ) .
    Using (129) again, we obtain
    i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) · w ¯ i ( s ) d s .
  • Suppose that J > ≠ ∅ and there exists i * ∈ J > with B ˘ i * j ( t ) ≠ 0 . If B i * j ( 0 ) ( t ) ≠ 0 , i.e., B i * j ( 0 ) ( t ) ≥ σ by (21), then B ˘ i * j ( t ) ≥ B i * j ( 0 ) ( t ) ≥ σ . If B i * j ( 0 ) ( t ) = 0 , then B ^ i * j ( t ) ≠ 0 , i.e., B ˘ i * j ( t ) = B ^ i * j ( t ) ≥ σ ^ by (22). Therefore, we conclude that
    B ˘ i * j ( t ) ≥ max { σ , σ ^ } = σ ˘ .
    From (54), we also have
    ∑ i = 1 p K ˘ i j ( t , s ) ≤ ∑ i = 1 p K i j ( 0 ) ( t , s ) ≤ ν .
    From (126), we see that
    σ ˘ · η ˘ ( t ) = τ + ν · t T η ˘ ( s ) d s
    for t [ 0 , T ] . By (53) and (128)–(130), for each j = 1 , , q and t [ 0 , T ] , we have
    ∑ i = 1 p B ˘ i j ( t ) · η ˘ ( t ) ≥ σ ˘ · η ˘ ( t ) ≥ a j ( 0 ) ( t ) + ∑ i = 1 p ∫ t T K ˘ i j ( t , s ) · η ˘ ( s ) d s .
    Therefore, we obtain
    i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) i J > B ˘ i j ( t ) w ¯ i ( t ) = i J > B ˘ i j ( t ) η ˘ ( t ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) η ˘ ( s ) d s ( using   ( 131 ) ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) · w ¯ i ( s ) d s ( since   w ¯ i ( t ) η ˘ ( t ) ) ,
    which implies, by the nonnegativity,
    i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) · w ¯ i ( s ) d s .
Combining the above cases, we conclude that
i = 1 p B ˘ i j ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) a j ( 0 ) ( t ) + i = 1 p t T K ˘ i j ( t , s ) · w ¯ i ( s ) d s .
Therefore, we obtain
i = 1 p B i j ( 0 ) ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v ¯ j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v ¯ i j ( 2 ) ( t ) + i = 1 p t T χ i j ( K ) · K ^ i j ( t , s ) · v ¯ i j ( 3 ) ( s ) d s = i = 1 p B i j ( 0 ) ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v ¯ j ( 1 ) + i = 1 p χ i j ( B ) · θ i j ( B ) ( t ) · B ^ i j ( t ) · w ¯ i ( t ) + i = 1 p t T χ i j ( K ) · θ i j ( K ) ( s ) · K ^ i j ( t , s ) · w ¯ i ( s ) d s i = 1 p B i j ( 0 ) ( t ) · w ¯ i ( t ) + χ j ( a ) · a ^ j ( t ) · v j ( 1 ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · w ¯ i ( t ) + i = 1 p t T χ i j ( K ) · K ^ i j ( t , s ) · w ¯ i ( s ) d s ( since   v ¯ j ( 1 ) = v j ( 1 ) ) a j ( 0 ) ( t ) + i = 1 p t T K i j ( 0 ) ( t , s ) w ¯ i ( s ) d s ( applying   ( 124 )   and   ( 125 )   to   ( 132 ) ) ,
This shows that the constraint (13) is satisfied.
From (123), we have
γ i ( B ) · w ¯ i ( t ) ≤ ∑ j ∈ I i ( B ) θ i j ( B ) ( t ) · w ¯ i ( t ) = ∑ j ∈ I i ( B ) v ¯ i j ( 2 ) ( t )
and
γ i ( K ) · w ¯ i ( t ) ≤ ∑ j ∈ I i ( K ) θ i j ( K ) ( t ) · w ¯ i ( t ) = ∑ j ∈ I i ( K ) v ¯ i j ( 3 ) ( t ) ,
which say that the constraints (15) and (16) are satisfied. The other constraints can be easily realized. This shows that y ¯ is indeed a feasible solution of ( DRCLP * ) , and the proof is complete. □
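Computationally, the construction in Lemma 7 is a pointwise clip of the dual variables at the envelope η ˘ , with the ratio variables rebuilt through θ so the ratio constraints survive. A minimal sketch, with all constants made up for illustration:

```python
import math

# Clip w_i(t) at eta(t) = (tau / sigma_breve) * exp(nu * (T - t) / sigma_breve)
# and rebuild v2 through the ratio theta = v2 / w, as in Lemma 7.

tau, sigma_breve, nu, T = 1.0, 0.5, 2.0, 1.0   # made-up constants

def eta(t):
    return (tau / sigma_breve) * math.exp(nu * (T - t) / sigma_breve)

def truncate(w, v2, t):
    theta = v2 / w if w != 0 else 0.0   # theta as defined from the ratio
    w_bar = min(w, eta(t))
    return w_bar, theta * w_bar

assert eta(0.0) > eta(1.0)              # the envelope decreases in t
w_bar, v2_bar = truncate(w=100.0, v2=50.0, t=1.0)
assert w_bar == eta(1.0)
assert abs(v2_bar - 0.5 * w_bar) < 1e-12   # the ratio theta = 0.5 is preserved
```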
We also need the following useful lemmas.
Lemma 8.
(Riesz and Sz.-Nagy ([52], p. 64)). Suppose that the sequence { f k } k = 1 ∞ in L 2 [ 0 , T ] is uniformly bounded with respect to ∥ · ∥ 2 . Then, there exists a subsequence { f k j } j = 1 ∞ that weakly converges to some f ∈ L 2 [ 0 , T ] . More precisely, we have
lim j 0 T f k j ( t ) g ( t ) d t = 0 T f ( t ) g ( t ) d t   for   any   g L 2 [ 0 , T ] .
Lemma 9.
(Levinson [4]). Suppose that the sequence { f k } k = 1 ∞ is uniformly bounded on [ 0 , T ] with respect to ∥ · ∥ 2 and weakly converges to f ∈ L 2 [ 0 , T ] . Then, we have
f ( t ) ≤ lim sup k → ∞ f k ( t )   and   f ( t ) ≥ lim inf k → ∞ f k ( t )   a.e.   on   [ 0 , T ] .
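A standard example illustrating Lemmas 8 and 9 (not from the paper) is f k ( t ) = sin ( k t ) , which weakly converges to 0 in L 2 [ 0 , π ] without converging pointwise; its pointwise lim sup is 1 and lim inf is −1, bracketing the weak limit 0 as Lemma 9 requires. A numeric check of the pairing against g ≡ 1:

```python
import math

# Pairing of sin(k t) with g = 1 over [0, pi]: the exact value is
# (1 - cos(k*pi)) / k, which tends to 0 as k grows (weak convergence to 0).

def pairing_with_one(k, steps=100_000):
    # midpoint-rule quadrature of sin(k t) over [0, pi]
    h = math.pi / steps
    return sum(math.sin(k * (i + 0.5) * h) * h for i in range(steps))

def pairing_exact(k):
    return (1.0 - math.cos(k * math.pi)) / k

for k in (1, 5, 50):
    assert abs(pairing_with_one(k) - pairing_exact(k)) < 1e-3
assert abs(pairing_with_one(50)) < 0.05    # the pairing tends to 0
```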
Lemma 10.
Suppose that { f k } k = 1 and { g k } k = 1 are two sequences in L 2 [ 0 , T ] such that they weakly converge to f 0 and g 0 in L 2 [ 0 , T ] , respectively.
(i)
If the function η defined on [ 0 , T ] is bounded, then the sequence { η · f k } k = 1 weakly converges to η · f 0 .
(ii)
The sequence { f k + g k } k = 1 weakly converges to f 0 + g 0 .
Proof. 
To prove part (i), given any h L 2 [ 0 , T ] , it is clear to see that h · η L 2 [ 0 , T ] . The concept of weak convergence says that
lim k 0 T h · ( η · f k ) d t = lim k 0 T ( h · η ) · f k d t = 0 T ( h · η ) · f 0 d t = 0 T h · ( η · f 0 ) d t .
This shows that the sequence { η · f k } k = 1 weakly converges to η · f 0 .
To prove part (ii), given any h L 2 [ 0 , T ] , we have
lim k 0 T f k ( t ) + g k ( t ) · h ( t ) d t = lim k 0 T f k ( t ) · h ( t ) d t + 0 T g k ( t ) · h ( t ) d t = 0 T f 0 ( t ) · h ( t ) d t + 0 T g 0 ( t ) · h ( t ) d t ( using   the   weak   convergence   for   { f k } k = 1   and   { g k } k = 1 ) = 0 T f 0 ( t ) + g 0 ( t ) · h ( t ) d t .
This completes the proof. □
Let { f k } k = 1 be a sequence of m-dimensional vector-valued functions, i.e.,
f k = f k 1 , f k 2 , , f k m .
We say that { f k } k = 1 weakly converges to another m-dimensional vector-valued function
f 0 = f 01 , f 02 , , f 0 m
when each sequence { f k i } k = 1 weakly converges to f 0 i for i = 1 , , m .
Proposition 9.
Suppose that
x ^ ( n ) ( t ) n = 1 = z ^ ( n ) ( t ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ^ ( 2 ; n ) ( t ) , u ^ ( 3 ; n ) ( t ) , u ^ ( 4 ; n ) ( t ) , u ^ ( 5 ; n ) ( t ) n = 1
is a sequence constructed from the optimal solutions x ¯ ( n ) of problem ( P n ) according to part (i) of Proposition 8. Then, there is a subsequence { x ^ ( n k ) } k = 1 of { x ^ ( n ) } n = 1 such that the following properties hold true.
  • The subsequences of functions { z ^ ( n k ) } k = 1 , { u ^ ( 2 ; n k ) } k = 1 , { u ^ ( 3 ; n k ) } k = 1 , { u ^ ( 4 ; n k ) } k = 1 and { u ^ ( 5 ; n k ) } k = 1 are weakly convergent to some z ^ * , u ^ ( 2 * ) , u ^ ( 3 * ) , u ^ ( 4 * ) and u ^ ( 5 * ) , respectively.
  • The subsequences of constants { d̄^(n_k) }_{k=1}^∞ and { ū^(1;n_k) }_{k=1}^∞ are convergent to some d̂* and û^(1*), respectively.
  • The vector-valued function
    x ^ * ( t ) = z ^ * ( t ) , d ^ * , u ^ ( 1 * ) , u ^ ( 2 * ) ( t ) , u ^ ( 3 * ) ( t ) , u ^ ( 4 * ) ( t ) , u ^ ( 5 * ) ( t )
    formed by the above limits is a feasible solution of primal problem ( RCLP * ) .
Proof. 
From Proposition 5, we see that the sequence { x ^ ( n ) } n = 1 is uniformly bounded with respect to · 2 . Let x ^ j ( n ) be the jth component of x ^ ( n ) . We also regard u ¯ ( 1 ; n ) and d ¯ ( n ) as constant functions. Lemma 8 says that there exists a subsequence x ^ 1 ( n k ( 1 ) ) k = 1 of x ^ 1 ( n ) n = 1 that weakly converges to some x ^ 1 ( 0 ) L 2 [ 0 , T ] . Using Lemma 8 again, there exists a subsequence x ^ 2 ( n k ( 2 ) ) k = 1 of x ^ 2 ( n k ( 1 ) ) k = 1 that weakly converges to some x ^ 2 ( 0 ) L 2 [ 0 , T ] . By induction, there exists a subsequence x ^ j ( n k ( j ) ) k = 1 of x ^ j ( n k ( j 1 ) ) k = 1 that weakly converges to some x ^ j ( 0 ) L 2 [ 0 , T ] for each j. Therefore, we can construct a subsequence { x ^ ( n k ) } k = 1 that weakly converges to x ^ ( 0 ) , where
x ^ ( n k ) ( t ) = z ^ ( n k ) ( t ) , d ¯ ( n k ) , u ¯ ( 1 ; n k ) , u ^ ( 2 ; n k ) ( t ) , u ^ ( 3 ; n k ) ( t ) , u ^ ( 4 ; n k ) ( t ) , u ^ ( 5 ; n k ) ( t )
and
x ^ ( 0 ) ( t ) = z ^ ( 0 ) ( t ) , d ¯ ( 0 ) , u ¯ ( 1 ; 0 ) , u ^ ( 2 ; 0 ) ( t ) , u ^ ( 3 ; 0 ) ( t ) , u ^ ( 4 ; 0 ) ( t ) , u ^ ( 5 ; 0 ) ( t ) ,
which also says that the sequences { u ¯ ( 1 ; n k ) } k = 1 and { d ¯ ( n k ) } k = 1 converge to u ¯ ( 1 ; 0 ) and d ¯ ( 0 ) , respectively, and the sequences { z ^ ( n k ) } k = 1 , { u ^ ( 2 ; n k ) } k = 1 , { u ^ ( 3 ; n k ) } k = 1 , { u ^ ( 4 ; n k ) } k = 1 and { u ^ ( 5 ; n k ) } k = 1 weakly converge to z ^ ( 0 ) , u ^ ( 2 ; 0 ) , u ^ ( 3 ; 0 ) , u ^ ( 4 ; 0 ) , u ^ ( 5 ; 0 ) , respectively. Then, the subsequence { x ^ ( n k ) } k = 1 of { x ^ ( n ) } n = 1 is weakly convergent to x ^ ( 0 ) . From Lemma 9, for each j, we have
lim sup_{k→∞} ẑ_j^(n_k)(t) ≥ ẑ_j^(0)(t) ≥ lim inf_{k→∞} ẑ_j^(n_k)(t) ≥ 0 a.e. in [0,T]
lim sup_{k→∞} û_j^(2;n_k)(t) ≥ û_j^(2;0)(t) ≥ lim inf_{k→∞} û_j^(2;n_k)(t) ≥ 0 a.e. in [0,T]
lim sup_{k→∞} û_j^(3;n_k)(t) ≥ û_j^(3;0)(t) ≥ lim inf_{k→∞} û_j^(3;n_k)(t) ≥ 0 a.e. in [0,T]
lim sup_{k→∞} û_ij^(4;n_k)(t) ≥ û_ij^(4;0)(t) ≥ lim inf_{k→∞} û_ij^(4;n_k)(t) ≥ 0 a.e. in [0,T]
lim sup_{k→∞} û_ij^(5;n_k)(t) ≥ û_ij^(5;0)(t) ≥ lim inf_{k→∞} û_ij^(5;n_k)(t) ≥ 0 a.e. in [0,T]
which says that
x̂^(0)(t) = ( ẑ^(0)(t), d̄^(0), ū^(1;0), û^(2;0)(t), û^(3;0)(t), û^(4;0)(t), û^(5;0)(t) ) ≥ 0 a.e. in [0,T].
Since x ^ ( n k ) is a feasible solution of problem ( RCLP * ) for each n k , we have
∑_{j=1}^q B_ij^(0)(t)·ẑ_j^(n_k)(t) + γ_i^(B)·û_i^(2;n_k)(t) + γ_i^(K)·û_i^(3;n_k)(t) + ∑_{j∈I_i(B)} û_ij^(4;n_k)(t) + ∑_{j∈I_i(K)} û_ij^(5;n_k)(t) ≤ c_i(t) + ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j^(n_k)(s) ds for t ∈ [0,T] and i = 1,…,p;
ū^(1;n_k) + d̄_j^(n_k) ≥ ∫_0^T â_j(t)·ẑ_j^(n_k)(t) dt for j ∈ I(a);
û_i^(2;n_k)(t) + û_ij^(4;n_k)(t) ≥ B̂_ij(t)·ẑ_j^(n_k)(t) for i = 1,…,p, j ∈ I_i(B) and t ∈ [0,T];
û_i^(3;n_k)(t) + û_ij^(5;n_k)(t) ≥ ∫_0^t K̂_ij(t,s)·ẑ_j^(n_k)(s) ds for i = 1,…,p, j ∈ I_i(K) and t ∈ [0,T].
We define the following sequences of vector-valued functions:
f 1 ( n k ) ( t ) k = 1 = f 11 ( n k ) ( t ) , f 12 ( n k ) ( t ) , , f 1 p ( n k ) ( t ) k = 1 f 2 ( n k ) ( t ) k = 1 = f 21 ( n k ) ( t ) , f 22 ( n k ) ( t ) , , f 2 p ( n k ) ( t ) k = 1 f 3 ( n k ) ( t ) k = 1 = f 31 ( n k ) ( t ) , f 32 ( n k ) ( t ) , , f 3 p ( n k ) ( t ) k = 1
by
f_1i^(n_k)(t) = ∑_{j=1}^q B_ij^(0)(t)·ẑ_j^(n_k)(t) + γ_i^(B)·û_i^(2;n_k)(t) + γ_i^(K)·û_i^(3;n_k)(t) + ∑_{j∈I_i(B)} û_ij^(4;n_k)(t) + ∑_{j∈I_i(K)} û_ij^(5;n_k)(t); f_2i^(n_k)(t) = B̂_ij(t)·ẑ_j^(n_k)(t) − û_i^(2;n_k)(t) − û_ij^(4;n_k)(t) ≤ 0 (by (136)); f_3i^(n_k)(t) = û_i^(3;n_k)(t) + û_ij^(5;n_k)(t).
Then, the sequences { f 1 ( n k ) } k = 1 , { f 2 ( n k ) } k = 1 and { f 3 ( n k ) } k = 1 are uniformly bounded, since the sequence { x ^ ( n ) } n = 1 is uniformly bounded. Lemma 10 also says that the sequences { f 1 ( n k ) } k = 1 , { f 2 ( n k ) } k = 1 and { f 3 ( n k ) } k = 1 weakly converge to f 1 ( 0 ) , f 2 ( 0 ) and f 3 ( 0 ) , respectively, given by
f_1i^(0)(t) = ∑_{j=1}^q B_ij^(0)(t)·ẑ_j^(0)(t) + γ_i^(B)·û_i^(2;0)(t) + γ_i^(K)·û_i^(3;0)(t) + ∑_{j∈I_i(B)} û_ij^(4;0)(t) + ∑_{j∈I_i(K)} û_ij^(5;0)(t); f_2i^(0)(t) = B̂_ij(t)·ẑ_j^(0)(t) − û_i^(2;0)(t) − û_ij^(4;0)(t) ≤ 0; f_3i^(0)(t) = û_i^(3;0)(t) + û_ij^(5;0)(t).
Using Lemma 9, we obtain
∑_{j=1}^q B_ij^(0)(t)·ẑ_j^(0)(t) + γ_i^(B)·û_i^(2;0)(t) + γ_i^(K)·û_i^(3;0)(t) + ∑_{j∈I_i(B)} û_ij^(4;0)(t) + ∑_{j∈I_i(K)} û_ij^(5;0)(t) = f_1i^(0)(t) ≤ lim sup_{k→∞} f_1i^(n_k)(t) ≤ c_i(t) + lim sup_{k→∞} ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j^(n_k)(s) ds (by taking the limit superior on both sides of (134)) = c_i(t) + ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j^(0)(s) ds a.e. in [0,T] (by the weak convergence).
We also have
B̂_ij(t)·ẑ_j^(0)(t) − û_i^(2;0)(t) − û_ij^(4;0)(t) = f_2i^(0)(t) ≤ lim sup_{k→∞} f_2i^(n_k)(t) ≤ 0 a.e. in [0,T].
By the weak convergence for the sequence { z j ( n k ) } k = 1 , the inequalities (135) and (137) say that
ū^(1;0) + d̄_j^(0) ≥ ∫_0^T â_j(t)·ẑ_j^(0)(t) dt.
and
∫_0^t K̂_ij(t,s)·ẑ_j^(0)(s) ds = lim_{k→∞} ∫_0^t K̂_ij(t,s)·ẑ_j^(n_k)(s) ds (by the weak convergence) = lim inf_{k→∞} ∫_0^t K̂_ij(t,s)·ẑ_j^(n_k)(s) ds ≤ lim inf_{k→∞} ( û_i^(3;n_k)(t) + û_ij^(5;n_k)(t) ) = lim inf_{k→∞} f_3i^(n_k)(t) ≤ f_3i^(0)(t) a.e. in [0,T] (by Lemma 9) = û_i^(3;0)(t) + û_ij^(5;0)(t) a.e. in [0,T],
respectively. Let N_1, N_2 and N_3 be the subsets of [0,T] on which the inequalities (138)–(140) are violated, respectively, for all i = 1,…,p and j = 1,…,q, let N_4 be the subset of [0,T] on which x̂^(0)(t) ≥ 0 is violated, by referring to (133), and let
N = N_1 ∪ N_2 ∪ N_3 ∪ N_4.
We define
( ẑ*(t), d̂*, û^(1*), û^(2*)(t), û^(3*)(t), û^(4*)(t), û^(5*)(t) ) = x̂*(t) = { x̂^(0)(t) if t ∉ N; 0 if t ∈ N },
where the set N has measure zero. Then, x̂*(t) ≥ 0 for t ∈ [0,T].
  • Suppose that t ∉ N. Since ẑ*(t) = ẑ^(0)(t) a.e. in [0,T], from (138), we have
    ∑_{j=1}^q B_ij^(0)(t)·ẑ_j*(t) + γ_i^(B)·û_i^(2*)(t) + γ_i^(K)·û_i^(3*)(t) + ∑_{j∈I_i(B)} û_ij^(4*)(t) + ∑_{j∈I_i(K)} û_ij^(5*)(t) = ∑_{j=1}^q B_ij^(0)(t)·ẑ_j^(0)(t) + γ_i^(B)·û_i^(2;0)(t) + γ_i^(K)·û_i^(3;0)(t) + ∑_{j∈I_i(B)} û_ij^(4;0)(t) + ∑_{j∈I_i(K)} û_ij^(5;0)(t) ≤ c_i(t) + ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j^(0)(s) ds = c_i(t) + ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j*(s) ds.
    From (139), we have
    B̂_ij(t)·ẑ_j*(t) − û_i^(2*)(t) − û_ij^(4*)(t) = B̂_ij(t)·ẑ_j^(0)(t) − û_i^(2;0)(t) − û_ij^(4;0)(t) ≤ 0.
    From (140), we have
    ∫_0^t K̂_ij(t,s)·ẑ_j*(s) ds = ∫_0^t K̂_ij(t,s)·ẑ_j^(0)(s) ds ≤ û_i^(3;0)(t) + û_ij^(5;0)(t) = û_i^(3*)(t) + û_ij^(5*)(t).
  • Suppose that t ∈ N. We have
    ∑_{j=1}^q B_ij^(0)(t)·ẑ_j*(t) + γ_i^(B)·û_i^(2*)(t) + γ_i^(K)·û_i^(3*)(t) + ∑_{j∈I_i(B)} û_ij^(4*)(t) + ∑_{j∈I_i(K)} û_ij^(5*)(t) = 0 ≤ c_i(t) + ∑_{j=1}^q ∫_0^t K_ij^(0)(t,s)·ẑ_j*(s) ds.
    We also have
    B̂_ij(t)·ẑ_j*(t) − û_i^(2*)(t) − û_ij^(4*)(t) = 0
    and
    ∫_0^t K̂_ij(t,s)·ẑ_j*(s) ds = 0 ≤ 0 = û_i^(3*)(t) + û_ij^(5*)(t),
which shows that x̂* is a feasible solution of primal problem ( RCLP * ). Since x̂*(t) = x̂^(0)(t) a.e. on [0,T], we see that the subsequence { x̂^(n_k) }_{k=1}^∞ is weakly convergent to x̂*, and the proof is complete. □
Proposition 10.
Suppose that
y ^ ( n ) ( t ) n = 1 = w ^ ( n ) ( t ) , v ^ ( 1 ; n ) , v ^ ( 2 ; n ) ( t ) , v ^ ( 3 ; n ) ( t ) n = 1
is a sequence that is constructed from the optimal solutions ȳ^(n) of problem ( D_n ) according to part (ii) of Proposition 8. For t ∈ [0,T], we define the functions
θ̂_ij^(B;n)(t) = { v̂_ij^(2;n)(t) / ŵ_i^(n)(t) if ŵ_i^(n)(t) ≠ 0; 0 if ŵ_i^(n)(t) = 0 } for i = 1,…,p and j ∈ I_i(B)
and
θ̂_ij^(K;n)(t) = { v̂_ij^(3;n)(t) / ŵ_i^(n)(t) if ŵ_i^(n)(t) ≠ 0; 0 if ŵ_i^(n)(t) = 0 } for i = 1,…,p and j ∈ I_i(K).
For each i = 1 , , p and j = 1 , , q , we define
w˘_i^(n)(t) = min{ ŵ_i^(n)(t), η˘(t) } and v˘_j^(1;n) = v̂_j^(1;n),
where η ˘ ( t ) is defined in ( 126 ) . We also define the functions
v˘_ij^(2;n)(t) = θ̂_ij^(B;n)(t)·w˘_i^(n)(t) for i = 1,…,p and j ∈ I_i(B)
and
v˘_ij^(3;n)(t) = θ̂_ij^(K;n)(t)·w˘_i^(n)(t) for i = 1,…,p and j ∈ I_i(K)
for t ∈ [0,T]. Then, each
y ˘ ( n ) ( t ) n = 1 = w ˘ ( n ) ( t ) , v ˘ ( 1 ; n ) , v ˘ ( 2 ; n ) ( t ) , v ˘ ( 3 ; n ) ( t ) n = 1
is a feasible solution of dual problem ( DRCLP * ) , and there is a subsequence { y ˘ ( n k ) } k = 1 of { y ˘ ( n ) } n = 1 such that the following properties hold true.
  • The subsequence of functions { w ˘ ( n k ) } k = 1 is weakly convergent to w ^ * .
  • The subsequence of constants v ˘ ( 1 ; n k ) k = 1 is convergent to v ^ ( 1 * ) .
  • The vector-valued function
    y ^ * ( t ) = w ^ * ( t ) , v ^ ( 1 * ) , v ^ ( 2 * ) ( t ) , v ^ ( 3 * ) ( t )
    formed by the above limits, together with the functions v̂^(2*)(t) and v̂^(3*)(t) constructed in the proof below from the subsequence { y˘^(n_k) }_{k=1}^∞, is a feasible solution of dual problem ( DRCLP * ).
Proof. 
Since ŷ^(n) is a feasible solution of problem ( DRCLP * ), Lemma 7 says that y˘^(n) is also a feasible solution of problem ( DRCLP * ) satisfying w˘_i^(n)(t) ≤ ŵ_i^(n)(t), v˘_ij^(2;n)(t) ≤ ŵ_i^(n)(t) and v˘_ij^(3;n)(t) ≤ ŵ_i^(n)(t) for each i = 1,…,p, j = 1,…,q and t ∈ [0,T]. Remark 2 also says that the sequence { ŵ^(n) }_{n=1}^∞ is uniformly bounded, which implies that the sequence { y˘^(n) }_{n=1}^∞ is uniformly bounded. Therefore, the sequence { y˘^(n) }_{n=1}^∞ is also uniformly bounded with respect to ‖·‖₂. Using Lemma 8 and the proof of Proposition 9, we can similarly show that there is a subsequence { y˘^(n_k) }_{k=1}^∞ of { y˘^(n) }_{n=1}^∞ such that the subsequence of functions { w˘^(n_k) }_{k=1}^∞ weakly converges to some w˘^(0) and the subsequence of constants { v˘^(1;n_k) }_{k=1}^∞ converges to some v˘^(1;0). According to the constraints (17) and (14), we have
v˘_j^(1;n_k) ≤ 1 and γ^(a) ≥ ∑_{j∈I(a)} v˘_j^(1;n_k),
which implies, by taking the limit on both sides,
v˘_j^(1;0) ≤ 1 and γ^(a) ≥ ∑_{j∈I(a)} v˘_j^(1;0).
By applying the equalities (141) and (142) to the constraints (15) and (16), we have
γ_i^(B) ≥ ∑_{j∈I_i(B)} θ̂_ij^(B;n)(t) and γ_i^(K) ≥ ∑_{j∈I_i(K)} θ̂_ij^(K;n)(t).
We also see that θ̂_ij^(B;n)(t) ≤ 1 and θ̂_ij^(K;n)(t) ≤ 1. Let
θ ^ i j ( B ) ( t ) = lim inf n θ ^ i j ( B ; n ) ( t )   and   θ ^ i j ( K ) ( t ) = lim inf n θ ^ i j ( K ; n ) ( t ) .
Then, θ̂_ij^(B)(t) ≤ 1 and θ̂_ij^(K)(t) ≤ 1. From (144), we obtain
γ_i^(B) ≥ lim inf_{n→∞} ∑_{j∈I_i(B)} θ̂_ij^(B;n)(t) ≥ ∑_{j∈I_i(B)} lim inf_{n→∞} θ̂_ij^(B;n)(t) = ∑_{j∈I_i(B)} θ̂_ij^(B)(t)
and
γ_i^(K) ≥ lim inf_{n→∞} ∑_{j∈I_i(K)} θ̂_ij^(K;n)(t) ≥ ∑_{j∈I_i(K)} lim inf_{n→∞} θ̂_ij^(K;n)(t) = ∑_{j∈I_i(K)} θ̂_ij^(K)(t).
Now, we define
v˘_ij^(2;0)(t) = θ̂_ij^(B)(t)·w˘_i^(0)(t) for i = 1,…,p and j ∈ I_i(B)
and
v˘_ij^(3;0)(t) = θ̂_ij^(K)(t)·w˘_i^(0)(t) for i = 1,…,p and j ∈ I_i(K).
Then, v˘_ij^(2;0)(t) ≤ w˘_i^(0)(t) and v˘_ij^(3;0)(t) ≤ w˘_i^(0)(t). Since y˘^(n_k) is a feasible solution of problem ( DRCLP * ), we have w˘_i^(n_k)(t) ≥ 0 for t ∈ [0,T] and i = 1,…,p, and
∑_{i=1}^p B_ij^(0)(t)·w˘_i^(n_k)(t) + χ_j^(a)·â_j(t)·v˘_j^(1;n_k) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v˘_ij^(2;n_k)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v˘_ij^(3;n_k)(s) ds ≥ a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·w˘_i^(n_k)(s) ds
for t ∈ [0,T] and j = 1,…,q. From Lemma 9, for each i, we have
lim sup_{k→∞} w˘_i^(n_k)(t) ≥ w˘_i^(0)(t) ≥ lim inf_{k→∞} w˘_i^(n_k)(t) ≥ 0 a.e. in [0,T].
We define the following sequence of vector-valued functions:
g ( n k ) ( t ) k = 1 = g 1 ( n k ) ( t ) , g 2 ( n k ) ( t ) , , g q ( n k ) ( t ) k = 1
by
g j ( n k ) ( t ) = i = 1 p B i j ( 0 ) ( t ) · w ˘ i ( n k ) ( t ) + χ j ( a ) · a ^ j ( t ) · v ˘ j ( 1 ; n k ) + i = 1 p χ i j ( B ) · B ^ i j ( t ) · v ˘ i j ( 2 ; n k ) ( t ) + i = 1 p χ i j ( K ) · t T K ^ i j ( t , s ) · v ˘ i j ( 3 ; n k ) ( s ) d s .
Then, the sequence { g ( n k ) } k = 1 is uniformly bounded, since the sequence { y ˘ ( n ) } n = 1 is uniformly bounded. Lemma 10 also says that the sequence { g ( n k ) } k = 1 weakly converges to some g ( 0 ) given by
g_j^(0)(t) = ∑_{i=1}^p B_ij^(0)(t)·w˘_i^(0)(t) + χ_j^(a)·â_j(t)·v˘_j^(1;0) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v˘_ij^(2;0)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v˘_ij^(3;0)(s) ds.
Then, we obtain
∑_{i=1}^p B_ij^(0)(t)·w˘_i^(0)(t) + χ_j^(a)·â_j(t)·v˘_j^(1;0) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v˘_ij^(2;0)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v˘_ij^(3;0)(s) ds = g_j^(0)(t) ≥ lim inf_{k→∞} g_j^(n_k)(t) (by Lemma 9) ≥ a_j^(0)(t) + lim inf_{k→∞} ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·w˘_i^(n_k)(s) ds (by taking the limit inferior on both sides of (149)) = a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·w˘_i^(0)(s) ds a.e. in [0,T] (by the weak convergence).
We define
η(t) = (τ/σ)·exp( ν·(T − t)/σ ).
Since σ˘ ≥ σ, it is clear that η˘(t) ≤ η(t) for t ∈ [0,T]. Since w˘_i^(n_k)(t) ≤ η˘(t) for t ∈ [0,T], it follows that w˘_i^(n_k)(t) ≤ η(t) for t ∈ [0,T]. Using Lemma 9, we obtain
w˘_i^(0)(t) ≤ lim sup_{k→∞} w˘_i^(n_k)(t) ≤ η(t) a.e. on [0,T].
Let N̂_0 and N̂_1 be the subsets of [0,T] on which the inequalities (151) and (153) are violated for all j = 1,…,q, let N̂_2 be the subset of [0,T] on which w˘^(0)(t) ≥ 0 is violated, and let N̂ = N̂_0 ∪ N̂_1 ∪ N̂_2. We define
η(t) = ( η(t), η(t), …, η(t) ) ∈ R^p
and
ŵ*(t) = { w˘^(0)(t) if t ∉ N̂; η(t) if t ∈ N̂ },
where the set N̂ has measure zero. Therefore, we have ŵ*(t) ≥ 0 for t ∈ [0,T] and ŵ*(t) = w˘^(0)(t) a.e. on [0,T], which implies, using (153),
ŵ_i*(t) ≤ η(t) a.e. on [0,T].
Let
θ̂^(B)(t) = min_{i∈{1,…,p}, j∈I_i(B)} θ̂_ij^(B)(t) and θ̂^(K)(t) = min_{i∈{1,…,p}, j∈I_i(K)} θ̂_ij^(K)(t).
Then, we define v ^ ( 1 * ) = v ˘ ( 1 ; 0 ) ,
v̂^(2*)(t) = { v˘^(2;0)(t) if t ∉ N̂; θ̂^(B)(t)·η(t) if t ∈ N̂ } and v̂^(3*)(t) = { v˘^(3;0)(t) if t ∉ N̂; θ̂^(K)(t)·η(t) if t ∈ N̂ }.
Since v˘_ij^(2;0)(t) ≤ w˘_i^(0)(t), v˘_ij^(3;0)(t) ≤ w˘_i^(0)(t), θ̂^(B)(t) ≤ 1 and θ̂^(K)(t) ≤ 1, it follows that
v̂_ij^(2*)(t) ≤ ŵ_i*(t) and v̂_ij^(3*)(t) ≤ ŵ_i*(t).
Let
y ^ * ( t ) = w ^ * ( t ) , v ^ ( 1 * ) , v ^ ( 2 * ) ( t ) , v ^ ( 3 * ) ( t ) .
Then, y ^ * ( t ) = y ˘ ( 0 ) ( t ) a.e. on [ 0 , T ] . Next, we want to show that y ^ * is a feasible solution of ( DRCLP * ) .
  • Suppose that t ∉ N̂. We have
    ∑_{i=1}^p B_ij^(0)(t)·ŵ_i*(t) + χ_j^(a)·â_j(t)·v̂_j^(1*) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v̂_ij^(2*)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v̂_ij^(3*)(s) ds = ∑_{i=1}^p B_ij^(0)(t)·w˘_i^(0)(t) + χ_j^(a)·â_j(t)·v˘_j^(1;0) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v˘_ij^(2;0)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v˘_ij^(3;0)(s) ds ≥ a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·w˘_i^(0)(s) ds (by (151)) = a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·ŵ_i*(s) ds (since w˘_i^(0)(t) = ŵ_i*(t) a.e. on [0,T]).
    From (145)–(148), we obtain
    γ_i^(B)·ŵ_i*(t) = γ_i^(B)·w˘_i^(0)(t) ≥ ∑_{j∈I_i(B)} θ̂_ij^(B)(t)·w˘_i^(0)(t) = ∑_{j∈I_i(B)} v˘_ij^(2;0)(t) = ∑_{j∈I_i(B)} v̂_ij^(2*)(t)
    and
    γ_i^(K)·ŵ_i*(t) = γ_i^(K)·w˘_i^(0)(t) ≥ ∑_{j∈I_i(K)} θ̂_ij^(K)(t)·w˘_i^(0)(t) = ∑_{j∈I_i(K)} v˘_ij^(3;0)(t) = ∑_{j∈I_i(K)} v̂_ij^(3*)(t).
  • Suppose that t ∈ N̂. From (152), we see that
    σ·η(t) = τ + ν·∫_t^T η(s) ds
    for t [ 0 , T ] . By (20), (21) and (157), for each j = 1 , , q and t [ 0 , T ] , we have
    ∑_{i=1}^p B_ij^(0)(t)·η(t) ≥ σ·η(t) ≥ a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·η(s) ds.
    Therefore, we obtain
    ∑_{i=1}^p B_ij^(0)(t)·ŵ_i*(t) + χ_j^(a)·â_j(t)·v̂_j^(1*) + ∑_{i=1}^p χ_ij^(B)·B̂_ij(t)·v̂_ij^(2*)(t) + ∑_{i=1}^p χ_ij^(K)·∫_t^T K̂_ij(t,s)·v̂_ij^(3*)(s) ds ≥ ∑_{i=1}^p B_ij^(0)(t)·ŵ_i*(t) (by the nonnegativity) = ∑_{i=1}^p B_ij^(0)(t)·η(t) ≥ a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·η(s) ds (by (158)) ≥ a_j^(0)(t) + ∑_{i=1}^p ∫_t^T K_ij^(0)(t,s)·ŵ_i*(s) ds (by (154)).
    From (155), we also have
    γ_i^(B)·ŵ_i*(t) = γ_i^(B)·η(t) ≥ ∑_{j∈I_i(B)} θ̂_ij^(B)(t)·η(t) ≥ ∑_{j∈I_i(B)} θ̂^(B)(t)·η(t) = ∑_{j∈I_i(B)} v̂_ij^(2*)(t)
    and
    γ_i^(K)·ŵ_i*(t) = γ_i^(K)·η(t) ≥ ∑_{j∈I_i(K)} θ̂_ij^(K)(t)·η(t) ≥ ∑_{j∈I_i(K)} θ̂^(K)(t)·η(t) = ∑_{j∈I_i(K)} v̂_ij^(3*)(t).
Finally, from (143), we have
v̂_j^(1*) ≤ 1 and γ^(a) ≥ ∑_{j∈I(a)} v̂_j^(1*),
which shows that y ^ * is a feasible solution of ( DRCLP * ) . Since w ^ * ( t ) = w ˘ ( 0 ) ( t ) a.e. on [ 0 , T ] , we see that the subsequence { w ˘ ( n k ) } k = 1 is weakly convergent to w ^ * , and the proof is complete. □
Theorem 5
(Strong Duality Theorem).
According to Proposition 8, assume that the sequence
x ^ ( n ) ( t ) n = 1 = z ^ ( n ) ( t ) , d ¯ ( n ) , u ¯ ( 1 ; n ) , u ^ ( 2 ; n ) ( t ) , u ^ ( 3 ; n ) ( t ) , u ^ ( 4 ; n ) ( t ) , u ^ ( 5 ; n ) ( t ) n = 1
is constructed from the optimal solutions x ¯ ( n ) of problem ( P n ) , and that the sequence
y ^ ( n ) ( t ) n = 1 = w ^ ( n ) ( t ) , v ^ ( 1 ; n ) , v ^ ( 2 ; n ) ( t ) , v ^ ( 3 ; n ) ( t ) n = 1
is constructed from the optimal solutions y ¯ ( n ) of problem ( D n ) . Then, the feasible solutions x ^ * and y ^ * obtained from Propositions 9 and 10, respectively, are also the optimal solutions of primal problem ( RCLP * ) and dual problem ( DRCLP * ) , respectively. Moreover, we have
V ( DRCLP * ) = V ( RCLP * ) .
Proof. 
In Proposition 9, regarding the subsequence { x ^ ( n k ) } k = 1 with
x ^ ( n k ) = z ^ ( n k ) ( t ) , d ¯ ( n k ) , u ¯ ( 1 ; n k ) , u ^ ( 2 ; n k ) ( t ) , u ^ ( 3 ; n k ) ( t ) , u ^ ( 4 ; n k ) ( t ) , u ^ ( 5 ; n k ) ( t ) ,
we have the primal objective value
∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j^(n_k)(t) dt − ∑_{j∈I(a)} d̄_j^(n_k) − γ^(a)·ū^(1;n_k) = ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·ẑ_j^(n_k)(t) dt + ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} a_lj^(n_k)·ẑ_j^(n_k)(t) dt − ∑_{j∈I(a)} d̄_j^(n_k) − γ^(a)·ū^(1;n_k) = ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·z̄_lj^(n_k) dt + ∑_{j=1}^q ∑_{l=1}^{n_k} d_l^(n_k)·a_lj^(n_k)·z̄_lj^(n_k) − ∑_{j∈I(a)} d̄_j^(n_k) − γ^(a)·ū^(1;n_k) = ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·z̄_lj^(n_k) dt + V(P_{n_k}),
where { z ^ j ( n k ) } k = 1 is weakly convergent to z ^ j * . In Proposition 10, regarding the subsequence { y ^ ( n k ) } k = 1 with
y ^ ( n k ) = w ^ ( n k ) ( t ) , v ^ ( 1 ; n k ) , v ^ ( 2 ; n k ) ( t ) , v ^ ( 3 ; n k ) ( t ) ,
we also have the dual objective value
∑_{i=1}^p ∫_0^T c_i(t)·ŵ_i^(n_k)(t) dt = ∑_{i=1}^p ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} c_i(t)·w̄_li^(n_k) dt + ∑_{i=1}^p ∫_0^T c_i(t)·f^(n_k)(t) dt = ∑_{i=1}^p ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( c_i(t) − c_li^(n_k) )·w̄_li^(n_k) dt + ∑_{i=1}^p ∑_{l=1}^{n_k} d_l^(n_k)·c_li^(n_k)·w̄_li^(n_k) + ∑_{i=1}^p ∫_0^T c_i(t)·f^(n_k)(t) dt = ∑_{i=1}^p ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( c_i(t) − c_li^(n_k) )·w̄_li^(n_k) dt + V(D_{n_k}) + ∑_{i=1}^p ∫_0^T c_i(t)·f^(n_k)(t) dt.
Therefore, we obtain
V(P_{n_k}) = ∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j^(n_k)(t) dt − ∑_{j∈I(a)} d̄_j^(n_k) − γ^(a)·ū^(1;n_k) − ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·z̄_lj^(n_k) dt
and
V(D_{n_k}) = ∑_{i=1}^p ∫_0^T c_i(t)·ŵ_i^(n_k)(t) dt − ∑_{i=1}^p ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( c_i(t) − c_li^(n_k) )·w̄_li^(n_k) dt − ∑_{i=1}^p ∫_0^T c_i(t)·f^(n_k)(t) dt.
Since V(P_{n_k}) = V(D_{n_k}) and w˘_i^(n_k) ≤ ŵ_i^(n_k), where w˘_i^(n_k) is given in the proof of Proposition 10 satisfying that { w˘_i^(n_k) }_{k=1}^∞ is weakly convergent to ŵ_i*, from (159) and (160), we have
∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j^(n_k)(t) dt − ∑_{j∈I(a)} d̄_j^(n_k) − γ^(a)·ū^(1;n_k) − ∑_{j=1}^q ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·z̄_lj^(n_k) dt ≥ ∑_{i=1}^p ∫_0^T c_i(t)·w˘_i^(n_k)(t) dt − ∑_{i=1}^p ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( c_i(t) − c_li^(n_k) )·w̄_li^(n_k) dt − ∑_{i=1}^p ∫_0^T c_i(t)·f^(n_k)(t) dt.
Using Lemma 6, we have
0 ≤ ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( a_j^(0)(t) − a_lj^(n_k) )·z̄_lj^(n_k) dt = ∫_0^T ( a_j^(0)(t) − ā_j^(n_k)(t) )·ẑ_j^(n_k)(t) dt → 0 as k → ∞
and
0 ≤ ∑_{l=1}^{n_k} ∫_{Ē_l^(n_k)} ( c_i(t) − c_li^(n_k) )·w̄_li^(n_k) dt = ∫_0^T ( c_i(t) − c̄_i^(n_k)(t) )·w̄_i^(n_k)(t) dt → 0 as k → ∞.
By taking the limit on both sides of (161), and using (120), (162) and (163), we obtain
lim_{k→∞} ( ∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j^(n_k)(t) dt ) − ∑_{j∈I(a)} d̄_j^(0) − γ^(a)·ū^(1;0) ≥ lim_{k→∞} ∑_{i=1}^p ∫_0^T c_i(t)·w˘_i^(n_k)(t) dt.
Using the weak convergence, we also obtain
∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j*(t) dt − ∑_{j∈I(a)} d̂_j* − γ^(a)·û^(1*) ≥ ∑_{i=1}^p ∫_0^T c_i(t)·ŵ_i*(t) dt.
According to the weak duality theorem between problems ( RCLP * ) and ( DRCLP * ) , we have that
V(RCLP*) = ∑_{j=1}^q ∫_0^T a_j^(0)(t)·ẑ_j*(t) dt − ∑_{j∈I(a)} d̂_j* − γ^(a)·û^(1*) = ∑_{i=1}^p ∫_0^T c_i(t)·ŵ_i*(t) dt = V(DRCLP*),
which shows that x̂* and ŷ* are the optimal solutions of problems ( RCLP * ) and ( DRCLP * ), respectively. Theorem 3 also says that V(DRCLP*) = V(RCLP*). This completes the proof. □

7. Computational Procedure and Numerical Example

In the sequel, we design the computational procedure. The purpose is to obtain the approximate optimal solutions of problem ( RCLP * ). According to the above settings, we see that the approximate optimal solutions are step functions. We shall also use Proposition 8 to obtain the appropriate step functions.
Theorem 3 says that the error between the approximate objective value and the (theoretical) optimal objective value is
ε_n = −V(D_n) + ∑_{l=1}^n ∑_{i=1}^p ∫_{Ē_l^(n)} c_i(t)·w̄_li^(n) dt + ∑_{l=1}^n ∑_{i=1}^p ∫_{Ē_l^(n)} ( π_l^(n)/b_l^(n) )·exp( k_l^(n)·(T − t)/b_l^(n) )·c_i(t) dt.
In order to obtain π_l^(n), using (97), we have to solve the following problem:
sup_{t∈E_l^(n)} ( h˘_lj^(n)(t) + a˘_j^(n)(t) ).
We first note that the function h ˘ l j ( n ) ( t ) in (96) can be rewritten as follows:
h˘_lj^(n)(t) = ∑_{i=1}^p ( B˘_lij^(n) − B˘_ij^(n)(t) )·w̄_li^(n) + ∑_{i=1}^p ∫_t^{e_l^(n)} K˘_ij^(n)(s,t)·w̄_li^(n) ds + ∑_{k=l+1}^n ∑_{i=1}^p ∫_{Ē_k^(n)} ( K˘_ij^(n)(s,t) − K˘_klij^(n) )·w̄_ki^(n) ds.
For t ∈ F_l^(n) and l = 1,…,n, we define
ĥ_lj^(n) = ∑_{i=1}^p B˘_lij^(n)·w̄_li^(n) − ∑_{k=l+1}^n ∑_{i=1}^p ∫_{Ē_k^(n)} K˘_klij^(n)·w̄_ki^(n) ds
and
h̃_lj^(n)(t) = −∑_{i=1}^p B˘_ij^(n)(t)·w̄_li^(n) + ∑_{i=1}^p ∫_t^{e_l^(n)} K˘_ij^(n)(s,t)·w̄_li^(n) ds + ∑_{k=l+1}^n ∑_{i=1}^p ∫_{Ē_k^(n)} K˘_ij^(n)(s,t)·w̄_ki^(n) ds.
Then, the real-valued function h ˘ l j ( n ) can be rewritten as
h˘_lj^(n)(t) = ĥ_lj^(n) + h̃_lj^(n)(t) for t ∈ F_l^(n).
Now we define the real-valued function h l j ( n ) on E ¯ l ( n ) by
h_lj^(n)(t) = { h˘_lj^(n)(t) + a˘_j^(n)(t) if t ∈ E_l^(n); lim_{t→e_{l−1}^(n)+} ( h˘_lj^(n)(t) + a˘_j^(n)(t) ) if t = e_{l−1}^(n); lim_{t→e_l^(n)−} ( h˘_lj^(n)(t) + a˘_j^(n)(t) ) if t = e_l^(n) }.
Since a j , a ^ j , B i j and B ^ i j are continuous on E l ( n ) , and K i j and K ^ i j are continuous on E k ( n ) × E l ( n ) , respectively, for all l , k = 1 , , n , we see that h ˘ l j ( n ) + a ˘ j ( n ) is also continuous on E l ( n ) , which implies that h l j ( n ) is continuous on the interval E ¯ l ( n ) . Therefore, we have
sup_{t∈Ē_l^(n)} h_lj^(n)(t) = max_{t∈Ē_l^(n)} h_lj^(n)(t),
which says that the supremum in (165) can be obtained by the following equality:
max_{t∈Ē_l^(n)} h_lj^(n)(t) = max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)), sup_{t∈E_l^(n)} ( h˘_lj^(n)(t) + a˘_j^(n)(t) ) }.
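The reduction (168) replaces a supremum over a compact subinterval by a finite comparison of the two endpoint values with the interior critical values. A minimal sketch of this reduction follows; the function h and its derivative dh are illustrative stand-ins for h_lj^(n), not data from the paper.

```python
import numpy as np

def interval_max(h, dh, a, b, grid=1000, tol=1e-12):
    """Max of a smooth h on [a, b]: compare the endpoints with the
    interior zeros of dh, located by bracketing sign changes and bisecting."""
    ts = np.linspace(a, b, grid)
    zeros = []
    for lo, hi in zip(ts[:-1], ts[1:]):
        if dh(lo) == 0.0:
            zeros.append(lo)
        elif dh(lo) * dh(hi) < 0.0:      # sign change brackets a zero of dh
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if dh(lo) * dh(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            zeros.append(0.5 * (lo + hi))
    candidates = [a, b] + zeros          # the finite candidate set of (168)
    return max(h(t) for t in candidates)

h = lambda t: t * np.exp(-t)             # illustrative; interior max at t = 1
dh = lambda t: (1.0 - t) * np.exp(-t)
m = interval_max(h, dh, 0.0, 2.0)
print(m)                                 # close to 1/e
```

The bisection stands in for the Newton iteration used later in this section; any root-finder for the derivative fits the same candidate-set scheme.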
In order to use Newton's method, we assume that the functions a_j, B_ij, â_j, B̂_ij, K_ij and K̂_ij are twice-differentiable on [0,T] and [0,T]×[0,T], respectively, which also says that the functions a_j, B_ij, â_j, B̂_ij, K_ij and K̂_ij are twice-differentiable on the open interval E_l^(n) and the open rectangle E_k^(n)×E_l^(n), respectively, for all l,k = 1,…,n. According to (168), the following simple type of optimization problem will be solved:
max_{e_{l−1}^(n) ≤ t ≤ e_l^(n)} h_lj^(n)(t).
Then, the optimal solution is given by
t* = e_{l−1}^(n) or t* = e_l^(n) or t* satisfies (d/dt) h_lj^(n)(t) |_{t=t*} = 0.
Using (167), we see that the optimal solution of problem (169) is given by
t* = e_{l−1}^(n) or t* = e_l^(n) or t* satisfies (d/dt) ( h̃_lj^(n)(t) + a˘_j^(n)(t) ) |_{t=t*} = 0.
We denote by Z l j ( n ) the set of all zeros of the real-valued function d d t ( h ˜ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) ) . Then, we have
max_{t∈Ē_l^(n)} h_lj^(n)(t) = { max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)), sup_{t*∈Z_lj^(n)} ( h˘_lj^(n)(t*) + a˘_j^(n)(t*) ) } if Z_lj^(n) ≠ ∅; max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)) } if Z_lj^(n) = ∅ }.
Therefore, using (168) and (170), we can obtain the desired supremum (165). From (166), we also have
d/dt h̃_lj^(n)(t) = −∑_{i=1}^p K˘_llij^(n)·w̄_li^(n) − ∑_{i=1}^p ( d/dt B˘_ij^(n)(t) )·w̄_li^(n) + ∑_{i=1}^p ( ∫_t^{e_l^(n)} (∂/∂t) K˘_ij^(n)(s,t)·w̄_li^(n) ds − K˘_ij^(n)(t,t)·w̄_li^(n) ) + ∑_{k=l+1}^n ∑_{i=1}^p ∫_{Ē_k^(n)} (∂/∂t) K˘_ij^(n)(s,t)·w̄_ki^(n) ds
d²/dt² h̃_lj^(n)(t) = −∑_{i=1}^p ( d²/dt² B˘_ij^(n)(t) )·w̄_li^(n) − ∑_{i=1}^p ( d/dt K˘_ij^(n)(t,t) )·w̄_li^(n) + ∑_{i=1}^p ( ∫_t^{e_l^(n)} (∂²/∂t²) K˘_ij^(n)(s,t)·w̄_li^(n) ds − (∂/∂t) K˘_ij^(n)(s,t) |_{s=t}·w̄_li^(n) ) + ∑_{k=l+1}^n ∑_{i=1}^p ∫_{Ē_k^(n)} (∂²/∂t²) K˘_ij^(n)(s,t)·w̄_ki^(n) ds.
Two cases are considered below.
  • Suppose that h̃_lj^(n) + a˘_j^(n) is a linear function of t on E_l^(n), given by
    h ˜ l j ( n ) ( t ) + a ˘ j ( n ) ( t ) = a j · t + b j
    for j = 1 , , q . Using (168), we obtain
    max_{t∈Ē_l^(n)} h_lj^(n)(t) = { max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)), b_j } if a_j = 0; max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)), a_j·e_l^(n) + b_j } if a_j > 0; max{ h_lj^(n)(e_{l−1}^(n)), h_lj^(n)(e_l^(n)), a_j·e_{l−1}^(n) + b_j } if a_j < 0 }
  • Suppose that h̃_lj^(n) + a˘_j^(n) is not a linear function of t. We apply Newton's method to generate a sequence { t_m }_{m=1}^∞ satisfying t_m → t* as m → ∞, where t* is a zero of (d/dt)( h̃_lj^(n)(t) + a˘_j^(n)(t) ). More precisely, the iteration is given by
    t_{m+1} = t_m − [ (d/dt)( h̃_lj^(n)(t) + a˘_j^(n)(t) ) |_{t=t_m} ] / [ (d²/dt²)( h̃_lj^(n)(t) + a˘_j^(n)(t) ) |_{t=t_m} ],
    where
    (d/dt)( h̃_lj^(n)(t) + a˘_j^(n)(t) ) |_{t=t_m} = (d/dt) h̃_lj^(n)(t) |_{t=t_m} + (d/dt) a_j^(0)(t) |_{t=t_m} − χ_j^(a)·(d/dt) â_j(t) |_{t=t_m}
    and
    (d²/dt²)( h̃_lj^(n)(t) + a˘_j^(n)(t) ) |_{t=t_m} = (d²/dt²) h̃_lj^(n)(t) |_{t=t_m} + (d²/dt²) a_j^(0)(t) |_{t=t_m} − χ_j^(a)·(d²/dt²) â_j(t) |_{t=t_m}
    for m = 0, 1, 2, …, where t_0 is the initial guess. We apply Newton's method with as many initial guesses t_0 as possible in order to obtain all the zeros of the real-valued function (d/dt)( h̃_lj^(n)(t) + a˘_j^(n)(t) ).
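The multi-start strategy above can be sketched as follows: run the Newton iteration from many initial guesses spread over the subinterval and keep the distinct converged zeros of the derivative. The functions g and dg below are illustrative, not the paper's data functions.

```python
import numpy as np

def newton_zeros(g, dg, a, b, starts=50, iters=100, tol=1e-10):
    """Collect the distinct zeros of g in [a, b] found by Newton's method
    t_{m+1} = t_m - g(t_m)/dg(t_m), launched from many initial guesses."""
    zeros = []
    for t in np.linspace(a, b, starts):
        for _ in range(iters):
            d = dg(t)
            if d == 0.0:                 # flat derivative: abandon this start
                break
            step = g(t) / d
            t -= step
            if abs(step) < tol:          # converged
                break
        if a <= t <= b and abs(g(t)) < 1e-8:
            if all(abs(t - z) > 1e-6 for z in zeros):  # keep distinct zeros
                zeros.append(t)
    return sorted(zeros)

g = lambda t: np.cos(3.0 * t)            # zeros in [0, 2]: pi/6 and pi/2
dg = lambda t: -3.0 * np.sin(3.0 * t)
zeros_found = newton_zeros(g, dg, 0.0, 2.0)
print(zeros_found)
```

Starts whose iterates leave [a, b] or stall at a flat derivative are simply discarded, which is why a dense set of initial guesses is needed to be reasonably sure that no zero is missed.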
Now, the more detailed computational procedure is presented below.
  • Step 1. Set the error tolerance ϵ that is used to stop the iteration, and set the initial value of the natural number n ∈ N, where the new value of n for the next iteration is determined in Step 6.
  • Step 2. Use the simplex method to solve the dual problem ( D_n ), which is a large-scale linear programming problem. In this case, we obtain the optimal objective value V(D_n) and the optimal solution w̄.
  • Step 3. Use Newton's method presented in (172) to obtain the set Z_lj^(n) of all zeros of the real-valued function (d/dt)( h̃_lj^(n)(t) + a˘_j^(n)(t) ).
  • Step 4. Use (170) and (171) to evaluate the maximum presented in (169). Use (168) to evaluate the supremum presented in (165).
  • Step 5. Use the supremum obtained in Step 4 to evaluate π ¯ l ( n ) presented in (97). Use the values of π ¯ l ( n ) to evaluate π l ( n ) presented in (98).
  • Step 6. Use formula (164) to evaluate the error bound ε_n. If ε_n < ϵ, then go to Step 7. Otherwise, one more subdivision of each compact subinterval must be taken: set n ← n + n̂ for some integer n̂ ∈ ℕ and go to Step 2, where n̂ is the number of new subdivision points over all the compact subintervals. For example, in Example 1, we can set n* ← n* + 1; in this case, we have n̂ = r. In Example 2, we can set n_v ← n_v + 1 for v = 0, 1, …, r − 1; in this case, we also have n̂ = r.
  • Step 7. Use the simplex method to solve the primal problem ( P n ) that is a large-scale linear programming problem. In this case, we obtain the optimal solution z ¯ ( n ) .
  • Step 8. Use (81) to set the step function ẑ^(n)(t). This step function is the approximate optimal solution of problem (RCLP*). By Proposition 8, the actual error between the optimal objective value V(RCLP*) and the objective value at ẑ^(n)(t) is less than or equal to the error bound ε_n. In this case, the error tolerance ϵ is achieved for the partition P_n.
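Steps 1 through 8 amount to a refine-until-tolerable loop around the dual and primal solves. The skeleton below shows only this control flow; the three callables stand in for the problem-specific routines of Steps 2 through 7 and are not part of the paper:

```python
def solve_with_refinement(solve_dual, solve_primal, error_bound, eps, n0, n_hat):
    """Refine the partition until the error bound meets the tolerance eps,
    then solve the primal problem on the final partition (Steps 1-8)."""
    n = n0
    while True:
        dual_value, w_bar = solve_dual(n)           # Step 2: solve (D_n)
        eps_n = error_bound(n, dual_value, w_bar)   # Steps 3-6: evaluate eps_n
        if eps_n < eps:                             # tolerance reached
            break
        n += n_hat                                  # refine every subinterval
    z_bar = solve_primal(n)                         # Step 7: solve (P_n)
    return n, eps_n, z_bar                          # Step 8 builds the step function
```

With mock callables whose error bound is 1/n, a tolerance of 0.1, and n0 = n̂ = 8, the loop stops at n = 16.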
A numerical example is given below:
$$\begin{aligned}
\text{maximize}\quad & \int_0^1 \bigl(a_1(t)\cdot z_1(t) + a_2(t)\cdot z_2(t)\bigr)\,dt\\
\text{subject to}\quad & b_1(t)\cdot z_1(t) \le c_1(t) + \int_0^t \bigl(k_1(t,s)\cdot z_1(s) + k_2(t,s)\cdot z_2(s)\bigr)\,ds \ \text{ for } t\in[0,1]\\
& b_2(t)\cdot z_2(t) \le c_2(t) + \int_0^t \bigl(k_3(t,s)\cdot z_1(s) + k_4(t,s)\cdot z_2(s)\bigr)\,ds \ \text{ for } t\in[0,1]\\
& z = (z_1, z_2)\in L_2^2[0,1],
\end{aligned}$$
where the desired functions are taken to be the piecewise continuous functions on the time interval [ 0 , T ] with T = 1 . The data a 1 and a 2 are assumed to be uncertain with the nominal data
$$a_1^{(0)}(t)=\begin{cases} e^t, & \text{if } 0\le t\le 0.2\\ \sin t, & \text{if } 0.2< t\le 0.6\\ t^2, & \text{if } 0.6< t\le 1\end{cases} \quad\text{and}\quad a_2^{(0)}(t)=\begin{cases} 2t, & \text{if } 0\le t\le 0.5\\ t, & \text{if } 0.5< t\le 0.7\\ t^2, & \text{if } 0.7< t\le 1\end{cases}$$
and the uncertainties
$$\hat a_1(t)=\begin{cases} 0.01e^t, & \text{if } 0\le t\le 0.2\\ 0.01\sin t, & \text{if } 0.2< t\le 0.6\\ 0.02t^2, & \text{if } 0.6< t\le 1\end{cases} \quad\text{and}\quad \hat a_2(t)=\begin{cases} 0.02t, & \text{if } 0\le t\le 0.5\\ 0.01t, & \text{if } 0.5< t\le 0.7\\ 0.02t^2, & \text{if } 0.7< t\le 1,\end{cases}$$
respectively. The data c 1 and c 2 are assumed to be uncertain with the nominal data
$$c_1^{(0)}(t)=\begin{cases} t^3, & \text{if } 0\le t\le 0.3\\ (\ln t)^2, & \text{if } 0.3< t\le 0.5\\ t^2, & \text{if } 0.5< t\le 0.8\\ \cos t, & \text{if } 0.8< t\le 1\end{cases} \quad\text{and}\quad c_2^{(0)}(t)=\begin{cases} t, & \text{if } 0\le t\le 0.4\\ 5t, & \text{if } 0.4< t\le 0.5\\ t^3, & \text{if } 0.5< t\le 0.8\\ t^2, & \text{if } 0.8< t\le 1\end{cases}$$
and the uncertainties
$$\hat c_1(t)=\begin{cases} 0.01t^3, & \text{if } 0\le t\le 0.3\\ 0, & \text{if } 0.3< t\le 0.5\\ 0.03t^2, & \text{if } 0.5< t\le 0.8\\ 0, & \text{if } 0.8< t\le 1\end{cases} \quad\text{and}\quad \hat c_2(t)=\begin{cases} 0.01t, & \text{if } 0\le t\le 0.4\\ 0.02t, & \text{if } 0.4< t\le 0.5\\ 0.01t^3, & \text{if } 0.5< t\le 0.8\\ 0.02t^2, & \text{if } 0.8< t\le 1.\end{cases}$$
The uncertain time-dependent matrices B ( t ) and K ( t , s ) are given below:
$$B(t)=\begin{pmatrix} B_{11}(t) & B_{12}(t)\\ B_{21}(t) & B_{22}(t)\end{pmatrix}=\begin{pmatrix} b_1(t) & 0\\ 0 & b_2(t)\end{pmatrix}$$
and
$$K(t,s)=\begin{pmatrix} K_{11}(t,s) & K_{12}(t,s)\\ K_{21}(t,s) & K_{22}(t,s)\end{pmatrix}=\begin{pmatrix} k_1(t,s) & k_2(t,s)\\ k_3(t,s) & k_4(t,s)\end{pmatrix}.$$
The data b 1 = B 11 and b 2 = B 22 are assumed to be uncertain with the nominal data
$$B_{11}^{(0)}(t)=b_1^{(0)}(t)=\begin{cases} 20\cos t, & \text{if } 0\le t\le 0.2\\ 25\sin t, & \text{if } 0.2< t\le 0.6\\ 27t^2, & \text{if } 0.6< t\le 1\end{cases}$$
and
$$B_{22}^{(0)}(t)=b_2^{(0)}(t)=\begin{cases} 25\cos t, & \text{if } 0\le t\le 0.5\\ 22t, & \text{if } 0.5< t\le 0.7\\ 25t^2, & \text{if } 0.7< t\le 1\end{cases}$$
and the uncertainties
$$\hat B_{11}(t)=\hat b_1(t)=\begin{cases} 0.01\cos t, & \text{if } 0\le t\le 0.2\\ 0.01\sin t, & \text{if } 0.2< t\le 0.6\\ 0.03t^2, & \text{if } 0.6< t\le 1\end{cases}$$
and
$$\hat B_{22}(t)=\hat b_2(t)=\begin{cases} 0.01\cos t, & \text{if } 0\le t\le 0.5\\ 0.01t, & \text{if } 0.5< t\le 0.7\\ 0.02t^2, & \text{if } 0.7< t\le 1.\end{cases}$$
The data k 1 = K 11 , k 2 = K 12 , k 3 = K 21 and k 4 = K 22 are assumed to be uncertain with the nominal data
$$K_{11}^{(0)}(t,s)=k_1^{(0)}(t,s)=\begin{cases} t^3+s^2, & \text{if } 0\le t\le 0.8 \text{ and } 0\le s\le 0.5\\ t^2+\sin s, & \text{if } 0\le t\le 0.8 \text{ and } 0.5< s\le 1\\ (\ln t)^2+3e^s, & \text{if } 0.8< t\le 1 \text{ and } 0\le s\le 0.5\\ \cos t+5e^s, & \text{if } 0.8< t\le 1 \text{ and } 0.5< s\le 1\end{cases}$$
$$K_{12}^{(0)}(t,s)=k_2^{(0)}(t,s)=\begin{cases} t^3\cdot s^2, & \text{if } 0\le t\le 0.6 \text{ and } 0\le s\le 0.7\\ t^2\cdot\sin s, & \text{if } 0\le t\le 0.6 \text{ and } 0.7< s\le 1\\ (\ln t)^2\cdot e^s, & \text{if } 0.6< t\le 1 \text{ and } 0\le s\le 0.7\\ 3t^2\cdot\sin s, & \text{if } 0.6< t\le 1 \text{ and } 0.7< s\le 1\end{cases}$$
$$K_{21}^{(0)}(t,s)=k_3^{(0)}(t,s)=\begin{cases} 3t^2\cdot\sin s, & \text{if } 0\le t\le 0.3 \text{ and } 0\le s\le 0.6\\ 2t\cdot s^2, & \text{if } 0\le t\le 0.3 \text{ and } 0.6< s\le 1\\ (\ln t)^2+(\cos s)^2, & \text{if } 0.3< t\le 1 \text{ and } 0\le s\le 0.6\\ t^3\cdot s^2, & \text{if } 0.3< t\le 1 \text{ and } 0.6< s\le 1\end{cases}$$
$$K_{22}^{(0)}(t,s)=k_4^{(0)}(t,s)=\begin{cases} t^2+s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0\le s\le 0.3\\ \sin t+s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0.3< s\le 1\\ (\cos t)^2+3e^s, & \text{if } 0.5< t\le 1 \text{ and } 0\le s\le 0.3\\ 2t^3\cdot s^2, & \text{if } 0.5< t\le 1 \text{ and } 0.3< s\le 1\end{cases}$$
and the uncertainties
$$\hat K_{11}(t,s)=\hat k_1(t,s)=\begin{cases} 0.05t^3+0.02s^2, & \text{if } 0\le t\le 0.8 \text{ and } 0\le s\le 0.5\\ 0.03t^2+0.02\sin s, & \text{if } 0\le t\le 0.8 \text{ and } 0.5< s\le 1\\ 0.01e^s, & \text{if } 0.8< t\le 1 \text{ and } 0\le s\le 0.5\\ 0.01e^s, & \text{if } 0.8< t\le 1 \text{ and } 0.5< s\le 1\end{cases}$$
$$\hat K_{12}(t,s)=\hat k_2(t,s)=\begin{cases} 0.02t^3\cdot s^2, & \text{if } 0\le t\le 0.6 \text{ and } 0\le s\le 0.7\\ 0.03t^2\cdot\sin s, & \text{if } 0\le t\le 0.6 \text{ and } 0.7< s\le 1\\ 0.01e^s, & \text{if } 0.6< t\le 1 \text{ and } 0\le s\le 0.7\\ 0.02t^2\cdot\sin s, & \text{if } 0.6< t\le 1 \text{ and } 0.7< s\le 1\end{cases}$$
$$\hat K_{21}(t,s)=\hat k_3(t,s)=\begin{cases} 0.03t^2\cdot\sin s, & \text{if } 0\le t\le 0.3 \text{ and } 0\le s\le 0.6\\ 0.04t\cdot s^2, & \text{if } 0\le t\le 0.3 \text{ and } 0.6< s\le 1\\ 0, & \text{if } 0.3< t\le 1 \text{ and } 0\le s\le 0.6\\ 0.01t^3\cdot s^2, & \text{if } 0.3< t\le 1 \text{ and } 0.6< s\le 1\end{cases}$$
$$\hat K_{22}(t,s)=\hat k_4(t,s)=\begin{cases} 0.01t^2+0.02s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0\le s\le 0.3\\ 0.01\sin t+0.02s^2, & \text{if } 0\le t\le 0.5 \text{ and } 0.3< s\le 1\\ 0.03e^s, & \text{if } 0.5< t\le 1 \text{ and } 0\le s\le 0.3\\ 0.02t^3\cdot s^2, & \text{if } 0.5< t\le 1 \text{ and } 0.3< s\le 1.\end{cases}$$
It is clear that the component functions B_{ij}^{(0)}(t) and B̂_{ij}(t) satisfy the required assumptions. In order to set the partition P_n, we consider the discontinuities of a_1, a_2, c_1, c_2, b_1, b_2, k_1, k_2, k_3, k_4. In this example, we see that r = 8 and
$$D = \{d_0 = 0,\ d_1 = 0.2,\ d_2 = 0.3,\ d_3 = 0.4,\ d_4 = 0.5,\ d_5 = 0.6,\ d_6 = 0.7,\ d_7 = 0.8,\ d_8 = 1\}.$$
We also take n* = 2, which means that each compact interval [d_v, d_{v+1}] is equally divided into two subintervals for v = 0, 1, …, 7. Therefore, we have n = 2 · 8 = 16 and obtain a partition P_16.
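The partition described above can be generated mechanically from the discontinuity points: each of the r = 8 compact intervals contributes n* equal subintervals. A small sketch (the helper name is illustrative):

```python
def build_partition(D, n_star):
    """Partition points obtained by dividing each interval [d_v, d_{v+1}]
    determined by the discontinuity set D into n_star equal parts."""
    points = []
    for d_lo, d_hi in zip(D[:-1], D[1:]):
        for k in range(n_star):
            points.append(d_lo + k * (d_hi - d_lo) / n_star)
    points.append(D[-1])          # close the partition at the right endpoint
    return points

D = [0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1]
P16 = build_partition(D, 2)       # 2 * 8 = 16 subintervals, hence 17 partition points
```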
From the robust counterpart (RCLP*), we see that the robustness γ^(c) does not appear in the problem and therefore does not affect the robust counterpart. In this example, we take the robustness
$$\gamma^{(a)} = \gamma_i^{(B)} = \gamma_i^{(K)} = 1$$
for each i = 1, 2.
The approximate optimal objective value of problem ( RCLP * ) is denoted by
$$V(\mathrm{RCLP}_n^*) = \sum_{j=1}^{q}\int_0^T a_j^{(0)}(t)\cdot \hat z_j^{(n)}(t)\,dt - \sum_{j\in I(a)} \bar d_j^{(n)} - \gamma^{(a)}\cdot \bar u(1;n).$$
Using Theorem 3 and Proposition 8, we see that
$$0 \le V(\mathrm{RCLP}^*) - V(\mathrm{RCLP}_n^*) \le \varepsilon_n$$
and
$$0 \le V(\mathrm{RCLP}_n^*) - V(P_n) \le V(\mathrm{RCLP}^*) - V(P_n) \le \varepsilon_n.$$
Now, the numerical results are presented in the following table.
| n* | n = n* · 8 | ε_n | V(P_n) | V(RCLP*_n) |
|-----|------|-----------|-----------|-----------|
| 2   | 16   | 0.0287743 | 0.0303098 | 0.0327923 |
| 10  | 80   | 0.0057072 | 0.0369034 | 0.0374853 |
| 50  | 400  | 0.0012095 | 0.0383882 | 0.0385083 |
| 100 | 800  | 0.0005689 | 0.0385781 | 0.0386384 |
| 200 | 1600 | 0.0002850 | 0.0386741 | 0.0387004 |
| 300 | 2400 | 0.0001902 | 0.0387057 | 0.0387259 |
| 400 | 3200 | 0.0001427 | 0.0387219 | 0.0387370 |
| 500 | 4000 | 0.0001142 | 0.0387316 | 0.0387437 |
We use the commercial software MATLAB to perform this computation. The active-set method built into MATLAB is used to solve the large-scale linear programming problems. Assume that the decision-maker can tolerate the error ϵ = 0.0002. Then n* = 300 is sufficient to achieve this error tolerance, since the corresponding error bound ε_n = 0.0001902 satisfies ε_n < ϵ.
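The stopping rule of Step 6 can be read directly off the table: scan the tabulated bounds and stop at the first refinement whose ε_n falls below the tolerance. A small check using the values above (the helper name is illustrative):

```python
# Tabulated error bounds eps_n from the numerical example, keyed by n*.
error_bounds = {2: 0.0287743, 10: 0.0057072, 50: 0.0012095, 100: 0.0005689,
                200: 0.0002850, 300: 0.0001902, 400: 0.0001427, 500: 0.0001142}

def smallest_sufficient_n_star(bounds, tol):
    """Smallest tabulated n* whose error bound is below the tolerance."""
    for n_star in sorted(bounds):
        if bounds[n_star] < tol:
            return n_star
    return None   # no tabulated refinement is fine enough

# With the decision-maker's tolerance 0.0002, the first sufficient
# refinement is n* = 300, in agreement with the discussion above.
```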

8. Conclusions

The main purpose of this paper is to solve the continuous-time linear programming problem with time-dependent matrices when the data are uncertain and lie in specified bounded closed intervals. In this case, the technique of robust optimization is adopted to formulate an extended form of robust counterpart. Solving this extended form is difficult, especially because time-dependent matrices are involved in the problem. Using some technical derivations, this extended form of robust counterpart is transformed into a conventional form of continuous-time linear programming problem with time-dependent matrices. The remaining effort is to solve this more complicated transformed problem using the discretization technique.
It is impossible to solve the original problem directly, since Riemann integrals are involved in the problem. Instead of solving the original problem, we solve its corresponding discretization problem. The technique for formulating the discretization problem has also been adopted in Wu [7]. In fact, the discretization problem is a conventional linear programming problem, so the well-known simplex method can be used. However, the challenging issue is to estimate the error between the actual solution and the approximate solution. Theorem 3 presents an analytic formula for the error bound ε_n satisfying
$$0 \le V(\mathrm{RCLP}^*) - V(P_n) \le \varepsilon_n \quad\text{and}\quad \varepsilon_n \to 0 \ \text{ as } \ n \to \infty,$$
where problem ( P n ) is the discretization problem (i.e., a linear programming problem) of the original problem ( RCLP * ) .
The weak convergence of approximate solutions to the actual solution is also studied to demonstrate its asymptotic behavior by referring to Propositions 9 and 10. Finally, a computational procedure is also designed to obtain the approximate optimal solutions.
An important contribution of this paper is the derivation of an analytic formula for the error bound, given by
$$\varepsilon_n = V(D_n) + \sum_{l=1}^{n}\sum_{i=1}^{n}\int_{\bar E_l^{(n)}} c_i(t)\cdot \bar w_{li}^{(n)}\,dt + \sum_{l=1}^{n}\sum_{i=1}^{n}\int_{\bar E_l^{(n)}} \frac{\pi_l^{(n)}}{b_l^{(n)}}\cdot \exp\!\left(\frac{k_l^{(n)}\cdot (T-t)}{b_l^{(n)}}\right)\cdot c_i(t)\,dt,$$
which is presented in Theorem 3. In order to calculate this error bound, we need to solve the dual problem ( D n ) to obtain V ( D n ) and w ¯ l ( n ) . We also have
$$\limsup_{n\to\infty} V(D_n) = V(\mathrm{DRCLP}^*) \quad\text{and}\quad 0 \le V(\mathrm{DRCLP}^*) - V(D_n) \le \varepsilon_n,$$
where ε_n → 0 as n → ∞. Therefore, studying the dual problems (D_n) and (DRCLP*) is another important issue. Theorem 3 also shows that the primal problem (RCLP*) and the dual problem (DRCLP*) have no duality gap; that is, their optimal objective values are identical: V(DRCLP*) = V(RCLP*). Moreover, strong duality is established in Theorem 5, which says that optimal solutions of problems (RCLP*) and (DRCLP*) indeed exist and that their optimal objective values are identical, with V(DRCLP*) = V(RCLP*). In optimization theory, in order to claim that a newly formulated problem is a dual of the original primal problem, one needs to establish strong duality; in that case, instead of solving the primal problem, one can simply solve its dual. Because strong duality is established in Theorem 5, we can justifiably say that (RCLP*) and (DRCLP*) form a primal-dual pair of problems. In other words, instead of solving the primal problem (RCLP*), we can also solve the dual problem (DRCLP*). This paper mainly solves the dual problem to obtain the analytic formula of the error bound ε_n shown in (173), which involves the quantities V(D_n) and w̄_l^(n) of the dual problem. Therefore, based on the strong duality theorem, it may be possible to design a more efficient computational procedure that obtains another analytic formula for the error bound ε_n by solving the primal problem, which can be future research.
The discretization problem formulated in this paper is a large-scale linear programming problem. Solving this large-scale problem consumes substantial computational resources, and a personal computer is sometimes not capable of handling the computations. In order to increase the performance and efficiency of the proposed methodology, high-performance computers such as supercomputers may be needed. In future research, we may try to develop a new computational procedure involving parallel computation to reduce the running time.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
2. Tyndall, W.F. A Duality Theorem for a Class of Continuous Linear Programming Problems. J. Soc. Ind. Appl. Math. 1965, 15, 644–666.
3. Tyndall, W.F. An Extended Duality Theorem for Continuous Linear Programming Problems. SIAM J. Appl. Math. 1967, 15, 1294–1298.
4. Levinson, N. A Class of Continuous Linear Programming Problems. J. Math. Anal. Appl. 1966, 16, 73–83.
5. Wu, H.-C. Solving the Continuous-Time Linear Programming Problems Based on the Piecewise Continuous Functions. Numer. Funct. Anal. Optim. 2016, 37, 1168–1201.
6. Wu, H.-C. Numerical Method for Solving the Continuous-Time Linear Programming Problems with Time-Dependent Matrices and Piecewise Continuous Functions. AIMS Math. 2020, 5, 5572–5627.
7. Wu, H.-C. Robust Solutions for Uncertain Continuous-Time Linear Programming Problems with Time-Dependent Matrices. Mathematics 2021, 9, 885.
8. Bertsimas, D.; Sim, M. The Price of Robustness. Oper. Res. 2004, 52, 35–53.
9. Meidan, R.; Perold, A.F. Optimality Conditions and Strong Duality in Abstract and Continuous-Time Linear Programming. J. Optim. Theory Appl. 1983, 40, 61–77.
10. Papageorgiou, N.S. A Class of Infinite Dimensional Linear Programming Problems. J. Math. Anal. Appl. 1982, 87, 228–245.
11. Schechter, M. Duality in Continuous Linear Programming. J. Math. Anal. Appl. 1972, 37, 130–141.
12. Anderson, E.J.; Nash, P.; Perold, A.F. Some Properties of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1983, 21, 758–765.
13. Anderson, E.J.; Philpott, A.B. On the Solutions of a Class of Continuous Linear Programs. SIAM J. Control Optim. 1994, 32, 1289–1296.
14. Anderson, E.J.; Pullan, M.C. Purification for Separated Continuous Linear Programs. Math. Methods Oper. Res. 1996, 43, 9–33.
15. Fleischer, L.; Sethuraman, J. Efficient Algorithms for Separated Continuous Linear Programs: The Multicommodity Flow Problem with Holding Costs and Extensions. Math. Oper. Res. 2005, 30, 916–938.
16. Pullan, M.C. An Algorithm for a Class of Continuous Linear Programs. SIAM J. Control Optim. 1993, 31, 1558–1577.
17. Pullan, M.C. Forms of Optimal Solutions for Separated Continuous Linear Programs. SIAM J. Control Optim. 1995, 33, 1952–1977.
18. Pullan, M.C. A Duality Theory for Separated Continuous Linear Programs. SIAM J. Control Optim. 1996, 34, 931–965.
19. Pullan, M.C. Convergence of a General Class of Algorithms for Separated Continuous Linear Programs. SIAM J. Optim. 2000, 10, 722–731.
20. Pullan, M.C. An Extended Algorithm for Separated Continuous Linear Programs. Math. Program. 2002, 93, 415–451.
21. Weiss, G. A Simplex Based Algorithm to Solve Separated Continuous Linear Programs. Math. Program. 2008, 115, 151–198.
22. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Constraints. J. Math. Anal. Appl. 1974, 45, 96–115.
23. Farr, W.H.; Hanson, M.A. Continuous Time Programming with Nonlinear Time-Delayed Constraints. J. Math. Anal. Appl. 1974, 46, 41–61.
24. Grinold, R.C. Continuous Programming Part One: Linear Objectives. J. Math. Anal. Appl. 1969, 28, 32–51.
25. Grinold, R.C. Continuous Programming Part Two: Nonlinear Objectives. J. Math. Anal. Appl. 1969, 27, 639–655.
26. Hanson, M.A.; Mond, B. A Class of Continuous Convex Programming Problems. J. Math. Anal. Appl. 1968, 22, 427–437.
27. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming I: Convex Programs and a Theorem of the Alternative. J. Math. Anal. Appl. 1980, 77, 297–325.
28. Reiland, T.W. Optimality Conditions and Duality in Continuous Programming II: The Linear Problem Revisited. J. Math. Anal. Appl. 1980, 77, 329–343.
29. Reiland, T.W.; Hanson, M.A. Generalized Kuhn-Tucker Conditions and Duality for Continuous Nonlinear Programming Problems. J. Math. Anal. Appl. 1980, 74, 578–598.
30. Singh, C. A Sufficient Optimality Criterion in Continuous Time Programming for Generalized Convex Functions. J. Math. Anal. Appl. 1978, 62, 506–511.
31. Zalmai, G.J. Duality for a Class of Continuous-Time Homogeneous Fractional Programming Problems. Z. Oper. Res. 1986, 30, A43–A48.
32. Zalmai, G.J. Duality for a Class of Continuous-Time Fractional Programming Problems. Util. Math. 1987, 31, 209–218.
33. Zalmai, G.J. Optimality Conditions and Duality for a Class of Continuous-Time Generalized Fractional Programming Problems. J. Math. Anal. Appl. 1990, 153, 365–371.
34. Zalmai, G.J. Optimality Conditions and Duality Models for a Class of Nonsmooth Constrained Fractional Optimal Control Problems. J. Math. Anal. Appl. 1997, 210, 114–149.
35. Wu, H.-C. Parametric Continuous-Time Linear Fractional Programming Problems. J. Inequal. Appl. 2015, 251.
36. Wen, C.-F.; Wu, H.-C. Using the Dinkelbach-Type Algorithm to Solve the Continuous-Time Linear Fractional Programming Problems. J. Glob. Optim. 2011, 49, 237–263.
37. Wen, C.-F.; Wu, H.-C. Approximate Solutions and Duality Theorems for the Continuous-Time Linear Fractional Programming Problems. Numer. Funct. Anal. Optim. 2012, 33, 80–129.
38. Wen, C.-F.; Wu, H.-C. Using the Parametric Approach to Solve the Continuous-Time Linear Fractional Max-Min Problems. J. Glob. Optim. 2012, 54, 129–153.
39. Dantzig, G.B. Linear Programming under Uncertainty. Manag. Sci. 1955, 1, 197–206.
40. Ben-Tal, A.; Nemirovski, A. Robust Convex Optimization. Math. Oper. Res. 1998, 23, 769–805.
41. Ben-Tal, A.; Nemirovski, A. Robust Solutions of Uncertain Linear Programs. Oper. Res. Lett. 1999, 25, 1–13.
42. El Ghaoui, L.; Lebret, H. Robust Solutions to Least-Squares Problems with Uncertain Data. SIAM J. Matrix Anal. Appl. 1997, 18, 1035–1064.
43. El Ghaoui, L.; Oustry, F.; Lebret, H. Robust Solutions to Uncertain Semidefinite Programs. SIAM J. Optim. 1998, 9, 33–52.
44. Averbakh, I.; Zhao, Y.-B. Explicit Reformulations for Robust Optimization Problems with General Uncertainty Sets. SIAM J. Optim. 2008, 18, 1436–1466.
45. Ben-Tal, A.; Boyd, S.; Nemirovski, A. Extending Scope of Robust Optimization: Comprehensive Robust Counterpart of Uncertain Problems. Math. Program. 2006, 107, 63–89.
46. Bertsimas, D.; Natarajan, K.; Teo, C.-P. Persistence in Discrete Optimization under Data Uncertainty. Math. Program. 2006, 108, 251–274.
47. Bertsimas, D.; Sim, M. Tractable Approximations to Robust Conic Optimization Problems. Math. Program. 2006, 107, 5–36.
48. Chen, X.; Sim, M.; Sun, P. A Robust Optimization Perspective on Stochastic Programming. Oper. Res. 2007, 55, 1058–1071.
49. Erdoǧan, E.; Iyengar, G. Ambiguous Chance Constrained Problems and Robust Optimization. Math. Program. 2006, 107, 37–61.
50. Zhang, Y. General Robust Optimization Formulation for Nonlinear Programming. J. Optim. Theory Appl. 2007, 132, 111–124.
51. Anderson, E.J.; Nash, P. Linear Programming in Infinite Dimensional Spaces; John Wiley & Sons: New York, NY, USA, 1987.
52. Riesz, F.; Sz.-Nagy, B. Functional Analysis; Frederick Ungar Publishing Co.: New York, NY, USA, 1955.
Wu, H.-C. Extended Form of Robust Solutions for Uncertain Continuous-Time Linear Programming Problems with Time-Dependent Matrices. Axioms 2022, 11, 211. https://doi.org/10.3390/axioms11050211