1. Introduction
Multi-objective optimization is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. It is one of the most complex decision-making problems and has significant theoretical and applied potential in combination with a wide diversity of models. Many solution methods have been proposed for linear or nonlinear optimization problems, such as the simplex method, the Karush–Kuhn–Tucker approach and heuristic algorithms (including the genetic algorithm, simulated annealing, particle-swarm optimization, etc.) (see, e.g., Refs. [1,2,3,4]). In real-world applications, Zhang et al. [5], for example, established a Karush–Kuhn–Tucker-based optimization algorithm for solving the torque-allocation problem, with the objective of improving the stability performance of distributed-drive electric vehicles. Shafigh et al. [6] developed a linear-programming-embedded simulated-annealing algorithm for solving a comprehensive model of large-size problems in distributed-layout-based manufacturing systems, with the aim of minimizing the total cost of material handling, machine relocation, inventory holding and internal-part production. Gangwar et al. [7] built a network-reconfiguration method for an unbalanced distribution system by using the repository-based constrained-nondominated-sorting genetic algorithm, with the aim of minimizing daily energy loss, energy not supplied and the cumulative-current-unbalance factor. The simplex method [3] perfectly solves linear-programming problems whose objective and constraint functions are both real linear functions. The Karush–Kuhn–Tucker approach [1] is effective for nonlinear optimization problems with a single objective. Heuristic algorithms [2], based on intuition and experience, have a wide scope of application, but the deviation between the feasible solution they return and the optimal solution cannot, in general, be estimated. It is therefore of great significance to develop accurate methods for solving specific types of multi-objective nonlinear optimization problems.
Max-plus algebra has a nice algebraic structure and is effectively used to model, analyze, control and optimize a class of nonlinear time-evolution systems with synchronization but no concurrency (see, e.g., Refs. [8,9,10,11]). These nonlinear systems can be described by a max-plus linear time-invariant model and are called max-plus linear systems. Max-plus linear systems have wide applications in manufacturing, transportation, scheduling, robotics and high-throughput screening, as well as reinforcement learning and other fields (see, e.g., Refs. [12,13,14,15,16,17]). Many methods have been put forward to solve various optimization problems for max-plus linear systems. For example, Butkovič and MacCaig [18] studied the integer optimization of max-plus linear systems and found the integer solutions. Xu et al. [19] investigated the optimistic optimization of max-plus linear systems and established efficient algorithms to approximate globally optimal solutions for general nonlinear optimization. Gaubert et al. [20] studied the tropical-linear-fractional-programming problem, whose objective function is a max-plus rational function and whose constraint is a two-sided max-plus linear inequality, and they reduced such a problem to an auxiliary mean-payoff-game problem. Goncalves et al. [21] provided efficient algorithms for the tropical-linear-fractional-programming problem whose objective function is a max-plus function and whose constraint is a two-sided max-plus linear equation. Marotta et al. [22] presented a solution to the tropical-lexicographic-synchronization-optimization problem, whose objective function is a max-plus rational function and whose constraint is a two-sided max-plus linear equation. Tao et al. [23,24,25,26] studied the global-optimization problem of max-plus linear systems, whose objective function is a max-plus vector-valued function and whose constraint function is a real-affine function, and they provided necessary and sufficient conditions for the existence and uniqueness of globally optimal solutions. Shu and Yang [27] solved the minimax-programming problem, in which the objective is to minimize the maximum of all variables, while the constraint is a system of max-plus inequalities.
During the practical operation of a physical system, parameter perturbations are inevitable because of disturbances and errors in the estimation of processes. In practical applications, it is therefore usually necessary to make optimal decisions in uncertain environments. Necoara et al. [28] found a solution to a class of finite-horizon min-max control problems for uncertain max-plus linear systems. Le Corronc et al. [29] synthesized an optimal controller to reduce the uncertainty at the output of interval max-plus linear systems. Myskova and Plavka [30,31,32] studied the robustness of interval max-plus linear systems and presented necessary and sufficient conditions for interval matrices to be robust. Farahani et al. [33] constructed a solution for the optimization of stochastic max-min-plus scaling systems by using an approximation method based on moment-generating functions. Wang et al. [34] studied the optimal input design for uncertain max-plus linear systems and constructed the exact interval input, minimizing the range of inputs that ensure the system can output at the desired point.
This paper studies the multi-objective-optimization problem for uncertain max-plus linear systems whose parameters are not exactly known but belong to an interval. Multi-objective optimization for interval max-plus systems is formulated as an interval-valued-optimization problem. The strong and weak solvabilities of the interval-valued-optimization problem are introduced based on the solvability of its sub-problems with deterministic parameters, and solvability criteria are established. On the one hand, it is shown that the problem is strongly solvable only if the interval objective function degenerates to a max-plus function with deterministic coefficients, so that strong solvability reduces to the solvability of the unique sub-problem; a necessary and sufficient condition for the strong solvability of the multi-objective-optimization problem is established. On the other hand, the weak solvability of the bi-objective-optimization problem is studied: a necessary and sufficient condition for weak solvability is provided, and all the solvable sub-problems are identified. The interval optimal solution is obtained by constructing the set of all optimal solutions of the solvable sub-problems. To demonstrate the effectiveness of the proposed results in real-life settings, the bi-objective-optimization technique is applied to the load distribution of distributed systems, by which the minimum completion time can be reduced.
Compared with previous works, the main novelty and contributions of this paper are summarized as follows:
- The multi-objective-optimization problem is investigated. The global optimal solution and the global minimum are obtained.
- The hybrid-optimization problem is considered. More specifically, the constraint function is a real-affine function, while the objective function is a nonlinear function.
- The interval-valued-optimization problem is studied. The solvability criteria are established, and the interval optimal solution is constructed, to make optimal decisions under uncertainty.
The remainder of this paper is organized as follows.
Section 2 recalls some basic concepts and results from max-plus algebra.
Section 3 establishes the multi-objective-optimization model of interval max-plus systems, and gives a necessary and sufficient condition for strong solvability.
Section 4 studies the weak solvability of bi-objective optimization for interval max-plus systems, and finds the interval optimal solution.
Section 5 presents an application example in optimal load distribution.
Section 6 draws conclusions and highlights future works.
2. Preliminaries
This section introduces some notations, terminologies and properties from max-plus algebra; for more details, see Refs. [8,9,10,11].
Let $\mathbb{R}$ be the set of real numbers, $\mathbb{N}$ be the set of natural numbers and $\mathbb{N}^{+}$ be the set of positive integers. For $n \in \mathbb{N}^{+}$, denote by $[n]$ the set $\{1, 2, \ldots, n\}$. For $a, b \in \mathbb{R}_{\max}$, let
$$a \oplus b = \max\{a, b\}, \qquad a \otimes b = a + b,$$
where $\mathbb{R}_{\max} = \mathbb{R} \cup \{-\infty\}$ and $a \otimes (-\infty) = (-\infty) \otimes a = -\infty$. The algebraic structure $(\mathbb{R}_{\max}, \oplus, \otimes)$ is called max-plus algebra and is simply denoted by $\mathbb{R}_{\max}$, in which $-\infty$ and 0 are the zero and identity elements, denoted by $\varepsilon$ and $e$, respectively. The symbol $\ominus$ is used to represent the conventional $-$, i.e., for $a, b \in \mathbb{R}_{\max}$,
$$a \ominus b = a - b,$$
which is valued in $\mathbb{R} \cup \{-\infty, +\infty\}$ and not just in $\mathbb{R}_{\max}$. Note that, by definition, $a \ominus \varepsilon = +\infty$ for any $a \in \mathbb{R}_{\max}$.
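The two scalar operations can be sketched in a few lines of code. This is a minimal illustration (not from the paper), with `EPS` standing for the zero element $\varepsilon = -\infty$ and `0.0` for the identity element $e$:

```python
# Scalar max-plus operations: a ⊕ b = max(a, b), a ⊗ b = a + b.
EPS = float("-inf")  # the zero element epsilon


def oplus(a: float, b: float) -> float:
    """Max-plus addition: a ⊕ b = max(a, b)."""
    return max(a, b)


def otimes(a: float, b: float) -> float:
    """Max-plus multiplication: a ⊗ b = a + b."""
    return a + b


# epsilon is neutral for ⊕ and absorbing for ⊗; e = 0 is neutral for ⊗:
assert oplus(EPS, 3.0) == 3.0   # eps ⊕ a = a
assert otimes(EPS, 3.0) == EPS  # eps ⊗ a = eps
assert otimes(0.0, 5.0) == 5.0  # e ⊗ a = a
```

Python's IEEE-754 floats handle $-\infty$ correctly here: `max(-inf, a) == a` and `-inf + a == -inf` for finite `a`, which is exactly the behavior the definitions require.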
Let $\mathbb{R}_{\max}^{n}$ and $\mathbb{R}_{\max}^{m \times n}$ be the sets of $n$-dimensional vectors and $m \times n$ matrices with entries in $\mathbb{R}_{\max}$, respectively. To prevent confusion, matrices and vectors are represented by bold-type letters. The addition $\oplus$, multiplication $\otimes$ and scalar multiplication $\circ$ of max-plus matrices are defined as follows: for $\boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}_{\max}^{m \times n}$,
$$(\boldsymbol{A} \oplus \boldsymbol{B})_{ij} = a_{ij} \oplus b_{ij};$$
for $\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times p}$ and $\boldsymbol{B} \in \mathbb{R}_{\max}^{p \times n}$,
$$(\boldsymbol{A} \otimes \boldsymbol{B})_{ij} = \bigoplus_{k \in [p]} a_{ik} \otimes b_{kj} = \max_{k \in [p]} \{a_{ik} + b_{kj}\};$$
and for $\lambda \in \mathbb{R}_{\max}$ and $\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times n}$,
$$(\lambda \circ \boldsymbol{A})_{ij} = \lambda \otimes a_{ij}.$$
For $\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times n}$, let $\boldsymbol{A}^{\mathrm{T}}$ be the transposition of $\boldsymbol{A}$, and $\boldsymbol{A}_{i\cdot}$ be the $i$th row of $\boldsymbol{A}$, i.e., $\boldsymbol{A}_{i\cdot} = (a_{i1}, a_{i2}, \ldots, a_{in})$. The order $\preceq$ on $\mathbb{R}_{\max}$ is defined by $a \preceq b$ if $a \oplus b = b$, and it is extended entrywise to vectors and matrices. The operations above are monotone with respect to $\preceq$; in particular, if $\boldsymbol{A} \preceq \boldsymbol{B}$ and $\boldsymbol{x} \preceq \boldsymbol{y}$, then $\boldsymbol{A} \otimes \boldsymbol{x} \preceq \boldsymbol{B} \otimes \boldsymbol{y}$.
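As a small illustration of the matrix product above, the following sketch (hypothetical data, plain nested lists rather than any dedicated library) evaluates $(\boldsymbol{A} \otimes \boldsymbol{B})_{ij} = \max_{k} (a_{ik} + b_{kj})$:

```python
EPS = float("-inf")  # the zero element epsilon


def mp_matmul(A, B):
    """Max-plus matrix product: (A ⊗ B)_ij = max_k (a_ik + b_kj)."""
    m, p, n = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]


A = [[2.0, EPS],
     [1.0, 3.0]]
x = [[0.0],
     [4.0]]  # a column vector written as a 2 x 1 matrix

# Row 1: max(2 + 0, eps + 4) = 2; row 2: max(1 + 0, 3 + 4) = 7
print(mp_matmul(A, x))  # [[2.0], [7.0]]
```

Note how the $\varepsilon$ entry is absorbing: the term `eps + 4` never wins the maximum, mirroring the role of the zero element in the scalar definitions.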
For $m, n \in \mathbb{N}^{+}$ and $\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times n}$, the vector-valued function $\boldsymbol{f}(\boldsymbol{x}) = \boldsymbol{A} \otimes \boldsymbol{x}$, $\boldsymbol{x} \in \mathbb{R}_{\max}^{n}$, is called a max-plus function of type $(m, n)$. A max-plus linear system is a system that can be described by max-plus functions.
Given $\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times n}$ and $\boldsymbol{b} \in \mathbb{R}_{\max}^{m}$, a system of max-plus linear equations with unknown $\boldsymbol{x} \in \mathbb{R}_{\max}^{n}$ is represented by
$$\boldsymbol{A} \otimes \boldsymbol{x} = \boldsymbol{b}, \qquad (1)$$
where $\boldsymbol{A} = (a_{ij})_{m \times n}$ and $\boldsymbol{b} = (b_{1}, b_{2}, \ldots, b_{m})^{\mathrm{T}}$. System (1) is said to be solvable if there exists $\boldsymbol{x} \in \mathbb{R}_{\max}^{n}$ such that $\boldsymbol{A} \otimes \boldsymbol{x} = \boldsymbol{b}$, and such an $\boldsymbol{x}$ is called a solution of system (1). For $\boldsymbol{x} \in \mathbb{R}_{\max}^{n}$, $\boldsymbol{x}$ is called a subsolution of system (1) if $\boldsymbol{A} \otimes \boldsymbol{x} \preceq \boldsymbol{b}$. The greatest subsolution is constructed below, in order to establish a criterion for the solvability of system (1).
Lemma 1 ([8]). The greatest subsolution of system (1), denoted by $\hat{\boldsymbol{x}}$, exists and is given by
$$\hat{\boldsymbol{x}} = \max\{\boldsymbol{x} \in \mathbb{R}_{\max}^{n} : \boldsymbol{A} \otimes \boldsymbol{x} \preceq \boldsymbol{b}\}.$$
In a conventional framework, $\hat{\boldsymbol{x}}$ can be expressed as
$$\hat{x}_{j} = \min_{i \in [m]} (b_{i} \ominus a_{ij}), \quad j \in [n].$$
The greatest subsolution naturally satisfies the following properties:
- (i) $\boldsymbol{A} \otimes \hat{\boldsymbol{x}} \preceq \boldsymbol{b}$;
- (ii) if $\boldsymbol{x}$ is a subsolution of system (1), then $\boldsymbol{x} \preceq \hat{\boldsymbol{x}}$. In particular, if $\boldsymbol{x}$ is a solution of system (1), then $\boldsymbol{x} \preceq \hat{\boldsymbol{x}}$.

System (1) is solvable if and only if the greatest subsolution is a solution, i.e., $\boldsymbol{A} \otimes \hat{\boldsymbol{x}} = \boldsymbol{b}$.
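Lemma 1 and the solvability criterion can be sketched numerically. The following is a minimal illustration using the conventional entrywise formula $\hat{x}_{j} = \min_{i} (b_{i} - a_{ij})$; the data are hypothetical and all entries are assumed finite for simplicity:

```python
def greatest_subsolution(A, b):
    """x̂_j = min_i (b_i - a_ij): the greatest x with A ⊗ x <= b.
    All entries are assumed finite for simplicity."""
    m, n = len(A), len(A[0])
    return [min(b[i] - A[i][j] for i in range(m)) for j in range(n)]


def mp_apply(A, x):
    """(A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xi for a, xi in zip(row, x)) for row in A]


A = [[2.0, 5.0],
     [3.0, 1.0]]
b = [7.0, 5.0]

xhat = greatest_subsolution(A, b)  # [min(7-2, 5-3), min(7-5, 5-1)] = [2, 2]
solvable = mp_apply(A, xhat) == b  # A ⊗ x̂ = [max(4, 7), max(5, 3)] = [7, 5] = b
print(xhat, solvable)              # [2.0, 2.0] True
```

The check at the end is exactly the criterion stated above: the system is solvable if and only if plugging the greatest subsolution back in reproduces $\boldsymbol{b}$.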
A (closed) interval in $\mathbb{R}_{\max}$ is a set of the form
$$[\underline{a}, \overline{a}] = \{x \in \mathbb{R}_{\max} : \underline{a} \preceq x \preceq \overline{a}\},$$
where $\underline{a}, \overline{a} \in \mathbb{R}_{\max}$ are the lower and upper bounds of interval $[\underline{a}, \overline{a}]$, respectively (see, e.g., [35]). Denote by $\mathrm{I}(\mathbb{R}_{\max})$ the set of closed intervals in $\mathbb{R}_{\max}$.

An interval matrix in $\mathbb{R}_{\max}^{m \times n}$ is defined by
$$[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}] = \{\boldsymbol{A} \in \mathbb{R}_{\max}^{m \times n} : \underline{\boldsymbol{A}} \preceq \boldsymbol{A} \preceq \overline{\boldsymbol{A}}\},$$
where $\underline{\boldsymbol{A}} \preceq \overline{\boldsymbol{A}}$. Let $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$. Then, $a_{ij} \in [\underline{a}_{ij}, \overline{a}_{ij}]$ for $i \in [m]$ and $j \in [n]$. Denote by $\mathrm{I}(\mathbb{R}_{\max})^{m \times n}$ the set of interval matrices in $\mathbb{R}_{\max}^{m \times n}$. Specifically, $\mathrm{I}(\mathbb{R}_{\max})^{1 \times n}$ and $\mathrm{I}(\mathbb{R}_{\max})^{n \times 1}$ are the sets of $n$-dimensional row and column interval vectors in $\mathbb{R}_{\max}$, respectively, which are simply denoted by $\mathrm{I}(\mathbb{R}_{\max})^{n}$.
For $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}] \in \mathrm{I}(\mathbb{R}_{\max})^{m \times n}$, the interval-valued function $\boldsymbol{F}(\boldsymbol{x}) = [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}] \otimes \boldsymbol{x}$ is called a max-plus interval function of type $(m, n)$, which can be identified with a set of max-plus functions, i.e.,
$$\boldsymbol{F} = \{\boldsymbol{f}(\boldsymbol{x}) = \boldsymbol{A} \otimes \boldsymbol{x} : \boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]\}.$$
An interval max-plus linear system is a system that can be described by max-plus interval functions.
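Because $\otimes$ is entrywise monotone, every member $\boldsymbol{A} \otimes \boldsymbol{x}$ of a max-plus interval function is bracketed by the functions built from the bound matrices $\underline{\boldsymbol{A}}$ and $\overline{\boldsymbol{A}}$. A small sketch with hypothetical data:

```python
def mp_apply(A, x):
    """(A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xi for a, xi in zip(row, x)) for row in A]


A_lo = [[1.0, 0.0]]   # lower bound matrix
A_hi = [[2.0, 3.0]]   # upper bound matrix
A    = [[1.5, 2.0]]   # some member A with A_lo <= A <= A_hi entrywise
x = [1.0, 2.0]

lo, mid, hi = mp_apply(A_lo, x), mp_apply(A, x), mp_apply(A_hi, x)
assert lo[0] <= mid[0] <= hi[0]   # monotonicity of ⊗ in the matrix argument
print(lo, mid, hi)                # [2.0] [4.0] [5.0]
```

This bracketing is the basic reason the interval problems below can be analyzed through the two bound matrices alone.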
3. Multi-Objective Optimization of Interval Max-Plus Systems
This section establishes the multi-objective-optimization model for interval max-plus systems and considers the solvability of such an interval-valued-optimization problem.
The multi-objective-optimization problem for interval max-plus systems is formulated as problem (4), where the decision variable is $\boldsymbol{x}$; the objective function is a max-plus interval function whose coefficient interval matrix is given in (3); and the constraint set is defined by a real-affine function, which can be normalized into a standard form with real coefficients. Without loss of generality, the normalized constraint set is assumed in the discussion later in this paper.
For each $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$, the multi-objective-optimization problem (5) is called a sub-problem of the interval-valued-optimization problem (4). The objective function of problem (5) is the max-plus function $\boldsymbol{f}(\boldsymbol{x}) = \boldsymbol{A} \otimes \boldsymbol{x}$.

Definition 1 ([24]). Problem (5) is said to be solvable if there exists a feasible $\boldsymbol{x}^{*}$ such that $\boldsymbol{f}(\boldsymbol{x}^{*}) \preceq \boldsymbol{f}(\boldsymbol{x})$ for any feasible $\boldsymbol{x}$, and $\boldsymbol{x}^{*}$ is called an optimal solution of problem (5).

Next, let us introduce the solvability of the interval-valued-optimization problem (4) based on the solvability of its sub-problems.
Definition 2. Problem (4) is said to be weakly solvable if there exists $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ such that sub-problem (5) is solvable. Problem (4) is said to be strongly solvable if, for any $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$, sub-problem (5) is solvable.

In other words, the interval-valued-optimization problem (4) is weakly solvable if it has at least one solvable sub-problem, and problem (4) is strongly solvable if each of its sub-problems is solvable. Before establishing the solvability criteria of problem (4), it is necessary to study the solvability of sub-problem (5).
Lemma 2 ([24]). For sub-problem (5), let $\boldsymbol{y}^{*}$ be defined by (6). Then, $\boldsymbol{y}^{*}$ is the greatest lower bound of the objective values of sub-problem (5), i.e.,
- (i) $\boldsymbol{y}^{*} \preceq \boldsymbol{f}(\boldsymbol{x})$ for any feasible $\boldsymbol{x}$;
- (ii) if $\boldsymbol{y}$ is a lower bound of the objective values, then $\boldsymbol{y} \preceq \boldsymbol{y}^{*}$.

Lemma 3 ([24,25]). Sub-problem (5) is solvable if and only if Equation (7) holds, where $\boldsymbol{y}^{*}$ is the greatest lower bound given in (6). Moreover, if Equation (7) holds, then the corresponding point is the unique optimal solution of sub-problem (5).

Next, let us give a necessary condition for strong solvability.
Theorem 1. If problem (4) is strongly solvable, then $\underline{\boldsymbol{A}} = \overline{\boldsymbol{A}}$.

Proof. Since problem (4) is strongly solvable, sub-problem (5) is solvable for any $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$, so Equation (7) holds for each such $\boldsymbol{A}$. It can be seen from (2) that the quantities involved are finite for every admissible choice of indices. Suppose, to the contrary, that $\underline{\boldsymbol{A}} \neq \overline{\boldsymbol{A}}$, i.e., there exist indices $i$ and $j$ such that $\underline{a}_{ij} < \overline{a}_{ij}$. Perturbing this entry within the interval, as in (9), produces a matrix $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ for which Equation (7) fails, contradicting the strong solvability. Hence, $\underline{\boldsymbol{A}} = \overline{\boldsymbol{A}}$. □
It can be seen from the theorem above that the interval-valued-optimization problem (4) cannot be strongly solvable if $\underline{\boldsymbol{A}} \neq \overline{\boldsymbol{A}}$. In other words, problem (4) is strongly solvable only if the interval objective function degenerates to a max-plus function with deterministic coefficients. Consequently, the strong solvability of problem (4) reduces to the solvability of its unique sub-problem (5). The following necessary and sufficient condition for strong solvability can then be obtained from Theorem 1 and Lemma 3.
Corollary 1. Problem (4) is strongly solvable if and only if $\underline{\boldsymbol{A}} = \overline{\boldsymbol{A}}$ and Equation (7) holds for $\boldsymbol{A} = \underline{\boldsymbol{A}}$ (or $\overline{\boldsymbol{A}}$).

4. Weak Solvability of Bi-Objective Optimization Problem
This section studies the weak solvability of the bi-objective-optimization problem for interval max-plus linear systems, that is, the weak solvability of problem (4) in the case of two objectives.

Consider the bi-objective-optimization problem (10), where the decision variable and constraint set are the same as those of problem (4), and the objective function is the special case of problem (4) with two objectives, i.e., a max-plus interval function of type $(2, n)$.

Let us first establish a weak-solvability criterion for problem (10).
Theorem 2. Problem (10) is weakly solvable if and only if Inequality (11) holds.

Proof. Necessity. Since the interval problem (10) is weakly solvable, there exists $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ such that sub-problem (5) is solvable. It can be seen from the proof of Theorem 1 that Equation (8) holds for this $\boldsymbol{A}$, which yields Inequality (11).

Sufficiency. Since Inequality (11) holds, a matrix $\boldsymbol{A}$ can be defined by (12) and (13). A case-by-case check shows that $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ and that Equation (7) holds for $\boldsymbol{A}$. From Lemma 3, the sub-problem with objective function $\boldsymbol{A} \otimes \boldsymbol{x}$ is solvable. Hence, problem (10) is weakly solvable. □
Let us illustrate the theorem above with a numerical example.
Example 1. Consider the interval-valued-optimization problem (14), where the objective function and the normalized constraint set are as displayed. It then follows from Theorem 2 that problem (14) is weakly solvable. Indeed, according to (12) and (13), construct a matrix as displayed. Then, the sub-problem with the resulting objective function is solvable, and its minimal value is attained at the displayed point. This implies that problem (14) is weakly solvable.

Next, let us find all the solvable sub-problems of the interval-valued-optimization problem (10). For convenience of presentation, introduce the notation in (15). It can be seen from the proof of the sufficiency of Theorem 2 that the solvable sub-problems share a common characteristic: the two rows of the coefficient matrix are proportional in the max-plus framework. The following corollary shows the existence of matrices in $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ whose rows are proportional.
Corollary 2. For $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$, let the set of matrices in $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ with proportional rows be as displayed. Then, this set is nonempty if and only if Inequality (11) holds.

Proof. Necessity. If the set is nonempty, there exists a matrix in $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ whose rows are proportional, and the row proportionality yields Inequality (11).

Sufficiency. If Inequality (11) holds, let $\boldsymbol{A}$ be defined by (12) and (13). It has been proved in Theorem 2 that $\boldsymbol{A} \in [\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$ and that its rows are proportional. Hence, the set is nonempty. □
For $[\underline{\boldsymbol{A}}, \overline{\boldsymbol{A}}]$, introduce the matrices defined by (15) and (17). Then, all the solvable sub-problems of problem (10) can be represented as follows.
Theorem 3. If problem (10) is weakly solvable, then all the solvable sub-problems have the form (16), where the coefficient matrices range over the family determined by (15) and (17).

Proof. Since problem (10) is weakly solvable, it follows from Theorem 2 that Inequality (11) holds. It remains to show that the family of coefficient matrices determined by (15) and (17) coincides with the set of matrices given in Corollary 2. On the one hand, the lower and upper bound matrices defined by (15) and (17) belong to this set: their membership is verified case by case, exactly as in the sufficiency part of Theorem 2, by checking that each entry lies between the corresponding entries of $\underline{\boldsymbol{A}}$ and $\overline{\boldsymbol{A}}$ and that the rows are proportional. On the other hand, every matrix in the set lies between these bound matrices: supposing that some entry falls below the lower bound, or above the upper bound, leads in each possible case to a contradiction with the defining inequalities. Hence, the two families coincide, and all the solvable sub-problems have the form (16). □
Definition 3. The sub-problems whose objective functions are determined by the lower and upper bound matrices in (17) are called the lower and upper extreme solvable sub-problems (relative to d) of problem (10), respectively.

Let us illustrate the theorem above with the following example.
Example 2 (continued from Example 1).
Find all solvable sub-problems of problem (14). For each admissible $d$, define the coefficient matrices by (15) and (17); then all the solvable sub-problems take the form (16). Specifically, for example, let $d = 1$. The solvable sub-problems (relative to 1) then take the corresponding form, with coefficient matrices as displayed.

Finally, let us present the optimal solutions of problem (10).
Theorem 4. The set of all optimal solutions of the solvable sub-problems of problem (10) is given by (19), where the quantities involved are defined by (15).

Proof. For each solvable sub-problem, the corresponding optimal solution is obtained from (6). According to Lemma 3 and Theorem 3, the set of all optimal solutions of the solvable sub-problems of problem (10) can then be represented by (19). □
Definition 4. For each admissible $d$, the set of optimal solutions given in (19) is called an interval optimal solution of problem (10).

Let us find the interval optimal solution of problem (14) by using the theorem above.
Example 3 (continued from Example 2).
According to Theorem 4, the set of all optimal solutions of the solvable sub-problems of problem (14) is given by (19), with the quantities in (18) and (20). Specifically, for example, let $d = 1$. It has been shown in Example 2 that the corresponding coefficient matrices are as displayed. The set of optimal solutions of the solvable sub-problems obtained in this way is as displayed. For the displayed coefficient matrix, the sub-problem is solvable; by a direct calculation, its minimal value is attained at the point determined by (20).
Bi-objective interval-valued-optimization problems arise on many occasions in real-life applications. This section takes the load-distribution problem as an example to illustrate how the obtained results work in practice.
Consider the distributed system with the task precedence graph shown in Figure 1, in which an overall task is partitioned into 7 subtasks, the circles represent subtasks, the number inside each circle represents the corresponding task-execution time, and the number associated with each link corresponds to the interprocessor communication time. It has been presented in Example 4 of Ref. [24] that the load-distribution problem can be described by the multi-objective-optimization problem (21). The global minimum of problem (21) is attained at the point given there. This implies that if the execution times of the two adjustable tasks are both 2 units, then the completion times of the other five tasks are 12, 7, 19, 27 and 37 units, respectively. Moreover, the overall completion time is 37 units.
Suppose that the entries of the coefficient matrix can vary within the given interval. Next, let us minimize the completion time of the overall task by solving the interval-valued-optimization problem with this interval coefficient matrix. By a direct calculation, Inequality (11) is satisfied, so it follows from Theorem 2 that the interval-valued-optimization problem is weakly solvable. This implies that the completion time of each subtask can be minimized simultaneously. Fixing an admissible $d$, the coefficient matrix of the objective function of the lower extreme solvable sub-problem is obtained by (17). If the coefficient matrix of the objective function of problem (21) is changed to this matrix, then it follows from (6) that the global minimum is reduced. This implies that the completion time of each subtask is advanced by adjusting the values of the parameters within the allowable range. Moreover, the overall completion time is reduced from 37 to 35.8 units.
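The mechanism behind this reduction is the monotonicity of $\otimes$: lowering coefficients within their intervals can only lower each max-plus completion time. A toy sketch (hypothetical numbers, not the actual matrices of problem (21)):

```python
def mp_apply(A, x):
    """(A ⊗ x)_i = max_j (a_ij + x_j): completion times under coefficients A."""
    return [max(a + xi for a, xi in zip(row, x)) for row in A]


A_upper = [[10.0, 12.0],
           [ 9.0, 15.0]]   # original coefficients (upper ends of intervals)
A_lower = [[ 9.0, 11.5],
           [ 8.0, 13.8]]   # each entry reduced within its own interval
x = [2.0, 2.0]             # execution times of the adjustable tasks

before = mp_apply(A_upper, x)  # completion times with the original coefficients
after  = mp_apply(A_lower, x)  # completion times after the adjustment

# Monotonicity of ⊗: no completion time can increase.
assert all(b >= a for b, a in zip(before, after))
print(before, after)  # [14.0, 17.0] [13.5, 15.8]
```

Every completion time shrinks (or stays the same) entrywise, which is the qualitative behavior reported above for the reduction from 37 to 35.8 units.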