Article

Bi-Objective Optimization for Interval Max-Plus Linear Systems

1 School of Electrical Engineering, Yanshan University, Qinhuangdao 066000, China
2 Handan Institute of Environmental Protection, Handan 056001, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 653; https://doi.org/10.3390/math12050653
Submission received: 16 January 2024 / Revised: 21 February 2024 / Accepted: 22 February 2024 / Published: 23 February 2024
(This article belongs to the Special Issue Advances in Control Systems and Automatic Control)

Abstract: This paper investigates the interval-valued multi-objective optimization problem, whose objective function is a vector-valued max-plus interval function and whose constraint function is a real affine function. The strong and weak solvability of the interval-valued optimization problem are introduced, and solvability criteria are established. A necessary and sufficient condition for the strong solvability of the multi-objective optimization problem is provided. In particular, for the bi-objective optimization problem, a necessary and sufficient condition for weak solvability is provided, and all the solvable sub-problems are identified. The interval optimal solution is obtained by constructing the set of all optimal solutions of the solvable sub-problems. Optimal load distribution is used to demonstrate how the presented results work in real-life examples.

1. Introduction

Multi-objective optimization is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. It is one of the most complex classes of decision-making problems and has significant theoretical and applied potential in combination with a wide diversity of models. Many solution methods have been proposed for linear or nonlinear optimization problems, such as the simplex method, the Karush–Kuhn–Tucker approach and heuristic algorithms (including the genetic algorithm, the simulated-annealing algorithm, particle-swarm optimization, etc.); see, e.g., Refs. [1,2,3,4]. In real-world applications, Zhang et al. [5], for example, established a Karush–Kuhn–Tucker-based optimization algorithm for solving the torque-allocation problem, with the objective of improving the stability performance of distributed-drive electric vehicles. Shafigh et al. [6] developed a linear-programming-embedded simulated-annealing algorithm for solving a comprehensive model of large-size problems in distributed-layout-based manufacturing systems, with the aim of minimizing the total cost of material handling, machine relocation, inventory holding and internal part production. Gangwar et al. [7] built a network-reconfiguration method for an unbalanced distribution system by using the repository-based constrained nondominated-sorting genetic algorithm, with the aim of minimizing daily energy loss, energy not supplied and the cumulative current-unbalance factor. The simplex method [3] perfectly solves linear-programming problems whose objective and constraint functions are both real linear functions. The Karush–Kuhn–Tucker approach [1] is effective for nonlinear optimization problems with a single objective.
Heuristic algorithms [2], based on intuition and experience, have a wide scope of application, but the deviation of the feasible solution found by a heuristic algorithm from the true optimal solution cannot, in general, be estimated. It is therefore of great significance to develop exact methods for specific types of multi-objective nonlinear optimization problems.
Max-plus algebra has a nice algebraic structure and is effectively used to model, analyze, control and optimize certain nonlinear time-evolution systems with synchronization but no concurrency (see, e.g., Refs. [8,9,10,11]). These nonlinear systems can be described by a max-plus linear time-invariant model, which is called a max-plus linear system. Max-plus linear systems have wide applications in manufacturing, transportation, scheduling, robotics and high-throughput screening, as well as reinforcement learning and other fields (see, e.g., Refs. [12,13,14,15,16,17]). Many methods have been put forward to solve various optimization problems for max-plus linear systems. For example, Butkovič and MacCaig [18] studied the integer optimization of max-plus linear systems and characterized the integer solutions. Xu et al. [19] investigated the optimistic optimization of max-plus linear systems, and they established efficient algorithms to approximate globally optimal solutions for general nonlinear optimization. Gaubert et al. [20] studied the tropical linear-fractional programming problem, whose objective function is a max-plus rational function and whose constraint is a two-sided max-plus linear inequality, and they reduced such a problem to an auxiliary mean-payoff-game problem. Goncalves et al. [21] provided efficient algorithms for the tropical linear-fractional programming problem whose objective function is a max-plus function and whose constraint is a two-sided max-plus linear equation. Marotta et al. [22] presented a solution to the tropical lexicographic synchronization optimization problem, whose objective function is a max-plus rational function and whose constraint is a two-sided max-plus linear equation. Tao et al.
[23,24,25,26] studied the global optimization problem of max-plus linear systems, whose objective function is a max-plus vector-valued function and whose constraint function is a real affine function, and they provided necessary and sufficient conditions for the existence and uniqueness of globally optimal solutions. Shu and Yang [27] solved the minimax programming problem, in which the objective is to minimize the maximum of all variables, while the constraint is a system of max-plus inequalities.
During the practical operation of a physical system, parameter perturbations are inevitable because of disturbances and errors in the estimation of processes. In practical applications, it is usually necessary to make optimal decisions in uncertain environments. Necoara et al. [28] found a solution to a class of finite-horizon min-max control problems for uncertain max-plus linear systems. Le Corronc et al. [29] synthesized an optimal controller to reduce the uncertainty at the output of interval max-plus linear systems. Myskova and Plavka [30,31,32] studied the robustness of interval max-plus linear systems, and they presented necessary and sufficient conditions for interval matrices to be robust. Farahani et al. [33] constructed a solution for the optimization of stochastic max-min-plus scaling systems by using an approximation method based on moment-generating functions. Wang et al. [34] studied the optimal input design for uncertain max-plus linear systems, and they constructed the exact interval of inputs that ensures the system can output at the desired point while minimizing the input range.
This paper studies the multi-objective optimization problem for uncertain max-plus linear systems whose parameters are not exactly known but belong to an interval. Multi-objective optimization for interval max-plus systems is formulated as an interval-valued optimization problem. The strong and weak solvability of the interval-valued optimization problem are introduced based on the solvability of its sub-problems with deterministic parameters, and solvability criteria are established. On the one hand, it is pointed out that the problem is strongly solvable only if the interval objective function degenerates to a max-plus function with deterministic coefficients, in which case strong solvability reduces to the solvability of its unique sub-problem. A necessary and sufficient condition for the strong solvability of the multi-objective optimization problem is established. On the other hand, the weak solvability of the bi-objective optimization problem is studied. A necessary and sufficient condition for weak solvability is provided, and all the solvable sub-problems are identified. The interval optimal solution is obtained by constructing the set of all optimal solutions of the solvable sub-problems. To demonstrate the effectiveness of the proposed results in real-life examples, the bi-objective optimization technique is applied to the load distribution of distributed systems, by which the minimum completion time can be brought forward.
Compared with previous works, the main novelty and contributions of this paper are summarized as follows:
  • The multi-objective-optimization problem is investigated. The global optimal solution and the global minimum are obtained.
  • The hybrid-optimization problem is considered. More specifically, the constraint function is a real-affine function, while the objective function is a nonlinear function.
  • The interval-valued-optimization problem is studied. The solvability criteria are established, and the interval optimal solution is constructed, to make the optimal decisions under uncertainty.
The remainder of this paper is organized as follows. Section 2 recalls some basic concepts and results from max-plus algebra. Section 3 establishes the multi-objective-optimization model of interval max-plus systems, and gives a necessary and sufficient condition for strong solvability. Section 4 studies the weak solvability of bi-objective optimization for interval max-plus systems, and finds the interval optimal solution. Section 5 presents an application example in optimal load distribution. Section 6 draws conclusions and highlights future works.

2. Preliminaries

This section introduces some notations, terminologies and properties from max-plus algebra, most of which can be found in Refs. [8,9,10,11] for more details.
Let $\mathbb{R}$ be the set of real numbers, $\mathbb{N}$ the set of natural numbers and $\mathbb{N}^{+}$ the set of positive integers. For $n \in \mathbb{N}^{+}$, denote by $\mathbb{N}_{n}$ the set $\{1, 2, \ldots, n\}$. For $a, b \in \mathbb{R} \cup \{-\infty\}$, let
$$a \oplus b = \max\{a, b\} \quad \text{and} \quad a \otimes b = a + b,$$
where $\max\{a, -\infty\} = a$ and $a + (-\infty) = -\infty$. The algebraic structure $(\mathbb{R} \cup \{-\infty\}, \oplus, \otimes)$ is called max-plus algebra and is simply denoted by $\mathbb{R}_{\max}$, in which $-\infty$ and $0$ are the zero and identity elements, denoted by $\varepsilon$ and $e$, respectively. The symbol $\phi$ is used to represent conventional subtraction, i.e., for $a, b \in \mathbb{R}_{\max}$, $a \,\phi\, b = a - b$, which is valued in $\overline{\mathbb{R}}_{\max}\,(= \mathbb{R}_{\max} \cup \{+\infty\})$, not just in $\mathbb{R}_{\max}$. Note that, by definition, $(+\infty) \otimes (-\infty) = -\infty$ and $(-\infty) \,\phi\, (-\infty) = +\infty$.
Let $\mathbb{R}_{\max}^{n}$ and $\mathbb{R}_{\max}^{m \times n}$ be the sets of $n$-dimensional vectors and $m \times n$ matrices with entries in $\mathbb{R}_{\max}$, respectively. To prevent confusion, matrices and vectors are represented by bold-type letters. The addition $\oplus$, multiplication $\otimes$ and scalar multiplication $\circ$ of max-plus matrices are defined as follows:
  • For $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}_{\max}^{m \times n}$, $(A \oplus B)_{ij} = a_{ij} \oplus b_{ij}$;
  • For $A = (a_{ij}) \in \mathbb{R}_{\max}^{m \times r}$ and $B = (b_{ij}) \in \mathbb{R}_{\max}^{r \times n}$,
    $$(A \otimes B)_{ij} = \bigoplus_{k=1}^{r} a_{ik} \otimes b_{kj};$$
  • For $d \in \mathbb{R}_{\max}$ and $A = (a_{ij}) \in \mathbb{R}_{\max}^{m \times n}$, $(d \circ A)_{ij} = d \otimes a_{ij}$.
In addition, for $d \in \mathbb{R}_{\max}$ and $x = (x_{j}) \in \mathbb{R}_{\max}^{n}$, $(d \,\phi\, x)_{j} = d \,\phi\, x_{j}$.
For $A = (a_{ij}) \in \mathbb{R}_{\max}^{m \times n}$, let $A^{\top}$ be the transposition of $A$, and $A_{i.}$ be the $i$th row of $A$, i.e.,
$$A_{i.} = (a_{i1} \ \, a_{i2} \ \, \cdots \ \, a_{in}).$$
For $a, b \in \mathbb{R}_{\max}$, $a \leq b$ if $a \oplus b = b$. For $x, y \in \mathbb{R}_{\max}^{n}$, $x \leq y$ if $x \oplus y = y$. For $A, B \in \mathbb{R}_{\max}^{m \times n}$, $A \leq B$ if $A \oplus B = B$. If $A \leq B$ and $x \leq y$, then $A \otimes x \leq B \otimes y$.
For $A \in \mathbb{R}_{\max}^{m \times n}$, the vector-valued function
$$F: \mathbb{R}_{\max}^{n} \to \mathbb{R}_{\max}^{m}, \quad x \mapsto A \otimes x$$
is called a max-plus function of type $(n, m)$. A max-plus linear system is a system that can be described by max-plus functions.
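The max-plus operations above are straightforward to realize numerically. The following Python/NumPy sketch (illustrative only and not part of the paper; the function names oplus, otimes_mat and otimes_vec are ours) implements max-plus addition and multiplication, with $\varepsilon = -\infty$ as the zero element:

```python
import numpy as np

EPSILON = -np.inf   # the max-plus zero element "epsilon"

def oplus(a, b):
    """Max-plus addition: a (+) b = max{a, b} (entrywise for arrays)."""
    return np.maximum(a, b)

def otimes_mat(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (a_ik + b_kj)."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    return np.array([[np.max(A[i, :] + B[:, j]) for j in range(B.shape[1])]
                     for i in range(A.shape[0])])

def otimes_vec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (a_ij + x_j)."""
    return np.max(np.asarray(A, dtype=float) + np.asarray(x, dtype=float), axis=1)
```

For instance, otimes_vec([[1, 2], [3, 4]], [0, 0]) returns (2, 4), the row-wise maxima of A + x.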
Given $A = (a_{ij}) \in \mathbb{R}_{\max}^{m \times n}$ and $b = (b_{i}) \in \mathbb{R}_{\max}^{m}$, a system of max-plus linear equations with unknown $x$ is represented by
$$A \otimes x = b, \quad (1)$$
where $x = (x_{j}) \in \overline{\mathbb{R}}_{\max}^{n}$. System (1) is said to be solvable if there exists $\tilde{x} \in \overline{\mathbb{R}}_{\max}^{n}$ such that $A \otimes \tilde{x} = b$, and $\tilde{x}$ is called a solution of system (1). For $\tilde{x} \in \overline{\mathbb{R}}_{\max}^{n}$, $\tilde{x}$ is called a subsolution of system (1) if $A \otimes \tilde{x} \leq b$. The greatest subsolution is constructed below, to establish a criterion for the solvability of system (1).
Lemma 1 
([8]). The greatest subsolution of system (1), denoted by $x^{*}(A, b)$, exists and is given by
$$e \,\phi\, x^{*}(A, b) = \big( (e \,\phi\, b)^{\top} \otimes A \big)^{\top}.$$
In the conventional framework, $x^{*}(A, b) = (x_{j}^{*}(A, b)) \in \overline{\mathbb{R}}_{\max}^{n}$ can be expressed as
$$x_{j}^{*}(A, b) = \min_{i \in \mathbb{N}_{m}} \{ b_{i} - a_{ij} \}, \quad j \in \mathbb{N}_{n}. \quad (2)$$
The greatest subsolution $x^{*}(A, b)$ naturally satisfies the following properties:
(i) $A \otimes x^{*}(A, b) \leq b$;
(ii) if $\tilde{x}$ is a subsolution of system (1), then $\tilde{x} \leq x^{*}(A, b)$; in particular, if $\tilde{x}$ is a solution of system (1), then $\tilde{x} \leq x^{*}(A, b)$.
System (1) is solvable if and only if the greatest subsolution is a solution, i.e., $A \otimes x^{*}(A, b) = b$.
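The greatest subsolution (2) and the solvability criterion above can be sketched as follows (an illustrative Python/NumPy sketch, assuming finite matrix entries; the helper names are ours):

```python
import numpy as np

def greatest_subsolution(A, b):
    """Eq. (2): x*_j(A, b) = min_i (b_i - a_ij)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    return np.min(b[:, None] - A, axis=0)

def is_solvable(A, b):
    """System (1) is solvable iff the greatest subsolution is a solution,
    i.e., A (x) x*(A, b) = b."""
    x_star = greatest_subsolution(A, b)
    return bool(np.allclose(np.max(np.asarray(A, dtype=float) + x_star, axis=1), b))
```

For example, for $A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$ and $b = (2\ 3)^{\top}$, the greatest subsolution is $x^{*} = (1\ 0)^{\top}$ and $A \otimes x^{*} = b$, so the system is solvable; for $b = (0\ 10)^{\top}$, it is not.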
A (closed) interval in $\mathbb{R}_{\max}$ is a set of the form
$$U = [\underline{u}, \overline{u}] = \{ u \in \mathbb{R}_{\max} \mid \underline{u} \leq u \leq \overline{u} \},$$
where $\underline{u}, \overline{u} \in \mathbb{R}_{\max}$ are the lower and upper bounds of the interval $U$, respectively (see, e.g., [35]). Denote by $I(\mathbb{R}_{\max})$ the set of closed intervals in $\mathbb{R}_{\max}$.
An interval matrix in $\mathbb{R}_{\max}$ is defined by
$$\mathcal{A} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{pmatrix} = \begin{pmatrix} [\underline{a}_{11}, \overline{a}_{11}] & [\underline{a}_{12}, \overline{a}_{12}] & \cdots & [\underline{a}_{1n}, \overline{a}_{1n}] \\ [\underline{a}_{21}, \overline{a}_{21}] & [\underline{a}_{22}, \overline{a}_{22}] & \cdots & [\underline{a}_{2n}, \overline{a}_{2n}] \\ \vdots & \vdots & \ddots & \vdots \\ [\underline{a}_{m1}, \overline{a}_{m1}] & [\underline{a}_{m2}, \overline{a}_{m2}] & \cdots & [\underline{a}_{mn}, \overline{a}_{mn}] \end{pmatrix}, \quad (3)$$
where $A_{ij} = [\underline{a}_{ij}, \overline{a}_{ij}] \in I(\mathbb{R}_{\max})$. Let $\underline{A} = (\underline{a}_{ij}),\ \overline{A} = (\overline{a}_{ij}) \in \mathbb{R}_{\max}^{m \times n}$. Then,
$$\mathcal{A} = [\underline{A}, \overline{A}] = \{ A \in \mathbb{R}_{\max}^{m \times n} \mid \underline{A} \leq A \leq \overline{A} \}.$$
Denote by $I(\mathbb{R}_{\max}^{m \times n})$ the set of $m \times n$ interval matrices in $\mathbb{R}_{\max}$. Specifically, $I(\mathbb{R}_{\max}^{1 \times n})$ and $I(\mathbb{R}_{\max}^{n \times 1})$ are the sets of $n$-dimensional row and column interval vectors in $\mathbb{R}_{\max}$, respectively, both simply denoted by $I(\mathbb{R}_{\max}^{n})$.
For $\mathcal{A} = [\underline{A}, \overline{A}] \in I(\mathbb{R}_{\max}^{m \times n})$, the interval-valued function
$$\mathcal{F}: \mathbb{R}_{\max}^{n} \to I(\mathbb{R}_{\max}^{m}), \quad x \mapsto \mathcal{A} \otimes x = [\underline{A} \otimes x, \overline{A} \otimes x]$$
is called a max-plus interval function of type $(n, m)$, which is a set of max-plus functions, i.e.,
$$\mathcal{F}(x) = \mathcal{A} \otimes x = \{ F(x) = A \otimes x \mid A \in \mathcal{A} \}.$$
An interval max-plus linear system is a system that can be described by max-plus interval functions.

3. Multi-Objective Optimization of Interval Max-Plus Systems

This section establishes the multi-objective-optimization model for interval max-plus systems and considers the solvability of such an interval-valued-optimization problem.
The multi-objective optimization problem for interval max-plus systems is formulated as
$$\min_{x \in X} \mathcal{F}(x), \quad (4)$$
where the decision variable is $x = (x_{j}) \in \mathbb{R}^{n}$; the objective function is a max-plus interval function $\mathcal{F}(x) = \mathcal{A} \otimes x$, where $\mathcal{A} \in I(\mathbb{R}_{\max}^{m \times n})$ is given in (3); and the constraint set is
$$X = \Big\{ x \in \mathbb{R}^{n} \ \Big|\ \sum_{j=1}^{n} k_{j} x_{j} = c,\ k_{j} > 0,\ c \in \mathbb{R} \Big\},$$
which can be normalized as
$$\tilde{X} = \Big\{ x \in \mathbb{R}^{n} \ \Big|\ \sum_{j=1}^{n} \tilde{k}_{j} x_{j} = \tilde{c},\ \sum_{j=1}^{n} \tilde{k}_{j} = 1,\ \tilde{k}_{j} > 0,\ \tilde{c} \in \mathbb{R} \Big\},$$
where $\tilde{k}_{j} = k_{j} / \sum_{j=1}^{n} k_{j}$ and $\tilde{c} = c / \sum_{j=1}^{n} k_{j}$. Without loss of generality, assume that $\sum_{j=1}^{n} k_{j} = 1$ in $X$ in the discussion later in this paper.
For each $F(x) \in \mathcal{F}(x)$, the multi-objective optimization problem
$$\min_{x \in X} F(x) \quad (5)$$
is called a sub-problem of the interval-valued optimization problem (4). The objective function of problem (5) is a max-plus function $F(x) = A \otimes x$, where $A \in \mathcal{A}$.
Definition 1 
([24]). Problem (5) is said to be solvable if there exists $\tilde{x} \in X$ such that $F(\tilde{x}) \leq F(x)$ for any $x \in X$, and $\tilde{x}$ is called an optimal solution of problem (5).
Next, let us introduce the solvability of interval-valued-optimization problem (4) based on the solvability of its sub-problems.
Definition 2. 
Problem (4) is said to be weakly solvable if there exists $F(x) \in \mathcal{F}(x)$ such that sub-problem (5) is solvable. Problem (4) is said to be strongly solvable if, for any $F(x) \in \mathcal{F}(x)$, sub-problem (5) is solvable.
In other words, interval-valued-optimization problem (4) is weakly solvable if it has at least one solvable sub-problem, and problem (4) is strongly solvable if each of its sub-problems is solvable. Before establishing the solvability criteria of problem (4), it is necessary to study the solvability of sub-problem (5).
Lemma 2 
([24]). For sub-problem (5), let $b = (b_{i}) \in \mathbb{R}_{\max}^{m}$ be defined by
$$b_{i} = \sum_{j=1}^{n} k_{j} a_{ij} + c, \quad i \in \mathbb{N}_{m}. \quad (6)$$
Then, $b$ is the greatest lower bound of $F(x)$, i.e.,
(i) $F(x) \geq b$ for any $x \in X$;
(ii) if $y$ is a lower bound of $F(x)$, then $y \leq b$.
Lemma 3 
([24,25]). Sub-problem (5) is solvable if and only if
$$\sum_{j=1}^{n} k_{j} x_{j}^{*}(A, b) = c, \quad (7)$$
where $b$ is the greatest lower bound given in (6). Moreover, if Equation (7) holds, then $x^{*}(A, b)$ is the unique optimal solution of sub-problem (5).
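Lemmas 2 and 3 together give a direct computational test for the solvability of a sub-problem: compute the greatest lower bound (6), the greatest subsolution (2) and check criterion (7). A minimal sketch (illustrative Python/NumPy, assuming finite entries; the function names are ours):

```python
import numpy as np

def greatest_lower_bound(A, k, c):
    """Eq. (6): b_i = sum_j k_j * a_ij + c (conventional arithmetic)."""
    return np.asarray(A, dtype=float) @ np.asarray(k, dtype=float) + c

def solve_subproblem(A, k, c):
    """Lemma 3: sub-problem (5) is solvable iff sum_j k_j * x*_j(A, b) = c,
    in which case x*(A, b) is the unique optimal solution.
    Returns (solvable, x_star, b)."""
    A, k = np.asarray(A, dtype=float), np.asarray(k, dtype=float)
    b = greatest_lower_bound(A, k, c)
    x_star = np.min(b[:, None] - A, axis=0)   # Eq. (2)
    return bool(np.isclose(k @ x_star, c)), x_star, b
```

For a matrix whose rows are max-plus proportional, the criterion holds; e.g., solve_subproblem([[1, -2, 0], [1, -2, 0]], [1/3, 1/2, 1/6], 1.0) reports a solvable sub-problem with greatest lower bound $b = (1/3,\ 1/3)^{\top}$.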
Next, let us give a necessary condition for strong solvability.
Theorem 1. 
If problem (4) is strongly solvable, then $\underline{A} = \overline{A}$.
Proof. 
Since problem (4) is strongly solvable, it follows that for any $A \in \mathcal{A}$, sub-problem (5) is solvable. Then, Equation (7) holds. It can be known from (2) that $x_{j}^{*}(A, b) \leq b_{i} \,\phi\, a_{ij}$ for any $i \in \mathbb{N}_{m}$ and $j \in \mathbb{N}_{n}$. Suppose there exists $j_{0} \in \mathbb{N}_{n}$ such that $x_{j_{0}}^{*}(A, b) < b_{i} \,\phi\, a_{i j_{0}}$. Then,
$$c = \sum_{j=1}^{n} k_{j} x_{j}^{*}(A, b) < \sum_{j=1}^{n} k_{j} (b_{i} - a_{ij}) = b_{i} - \sum_{j=1}^{n} k_{j} a_{ij} = \sum_{j=1}^{n} k_{j} a_{ij} + c - \sum_{j=1}^{n} k_{j} a_{ij} = c.$$
This contradiction implies that $x_{j}^{*}(A, b) = b_{i} \,\phi\, a_{ij}$ for any $i \in \mathbb{N}_{m}$ and $j \in \mathbb{N}_{n}$. Let $p_{j} = x_{1}^{*}(A, b) \,\phi\, x_{j}^{*}(A, b)$. Then, for any $i \in \mathbb{N}_{m}$ and $j \in \mathbb{N}_{n}$,
$$a_{i1} \otimes p_{j} = a_{i1} \otimes x_{1}^{*}(A, b) \,\phi\, x_{j}^{*}(A, b) = a_{i1} \otimes (b_{i} \,\phi\, a_{i1}) \,\phi\, (b_{i} \,\phi\, a_{ij}) = a_{ij}. \quad (8)$$
That is, for any $A = (a_{ij}) \in \mathcal{A}$,
$$a_{ij} \,\phi\, a_{i1} = a_{1j} \,\phi\, a_{11} = p_{j}, \quad i \in \mathbb{N}_{m},\ j \in \mathbb{N}_{n}. \quad (9)$$
Specifically, for $\underline{A} \in \mathcal{A}$, we have $\underline{a}_{ij} \,\phi\, \underline{a}_{i1} = \underline{a}_{1j} \,\phi\, \underline{a}_{11}$ for any $i \in \mathbb{N}_{m}$ and $j \in \mathbb{N}_{n}$. Suppose that there exist $i_{0} \in \mathbb{N}_{m}$ and $j_{0} \in \mathbb{N}_{n}$ such that $\underline{a}_{i_{0} j_{0}} < \overline{a}_{i_{0} j_{0}}$. Let $A = (a_{ij}) \in \mathcal{A}$ be defined by
$$a_{ij} = \begin{cases} \overline{a}_{ij}, & i = i_{0},\ j = j_{0}, \\ \underline{a}_{ij}, & \text{otherwise}. \end{cases}$$
It follows from (9) that $a_{i_{0} j_{0}} \,\phi\, a_{i_{0} 1} = a_{1 j_{0}} \,\phi\, a_{11}$, i.e., $\overline{a}_{i_{0} j_{0}} \,\phi\, \underline{a}_{i_{0} 1} = \underline{a}_{1 j_{0}} \,\phi\, \underline{a}_{11}$. Hence,
$$\underline{a}_{1 j_{0}} \,\phi\, \underline{a}_{11} = \overline{a}_{i_{0} j_{0}} \,\phi\, \underline{a}_{i_{0} 1} > \underline{a}_{i_{0} j_{0}} \,\phi\, \underline{a}_{i_{0} 1} = \underline{a}_{1 j_{0}} \,\phi\, \underline{a}_{11}.$$
This contradiction implies that $\underline{A} = \overline{A}$. □
This contradiction implies that A ̲ = A ¯ . □
It can be seen from the theorem above that the interval-valued optimization problem (4) cannot be strongly solvable if $\underline{A} \neq \overline{A}$. In other words, problem (4) is strongly solvable only if the interval objective function degenerates to a max-plus function with deterministic coefficients. Consequently, the strong solvability of problem (4) reduces to the solvability of its unique sub-problem (5). The following necessary and sufficient condition for strong solvability can then be obtained from Theorem 1 and Lemma 3.
Corollary 1. 
Problem (4) is strongly solvable if and only if $\underline{A} = \overline{A}$ and Equation (7) holds for $A = \underline{A}$ (or $\overline{A}$).

4. Weak Solvability of Bi-Objective Optimization Problem

This section studies the weak solvability of the bi-objective optimization problem for interval max-plus linear systems, that is, the weak solvability of problem (4) in the case of $m = 2$.
Consider the bi-objective optimization problem
$$\min_{x \in X} \mathcal{F}(x), \quad (10)$$
where $x$ and $X$ are the same decision variable and constraint set as in problem (4), respectively, and the objective function $\mathcal{F}(x) = \mathcal{A} \otimes x$ is a special case of problem (4) for $m = 2$, i.e.,
$$\mathcal{A} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \end{pmatrix} = \begin{pmatrix} [\underline{a}_{11}, \overline{a}_{11}] & [\underline{a}_{12}, \overline{a}_{12}] & \cdots & [\underline{a}_{1n}, \overline{a}_{1n}] \\ [\underline{a}_{21}, \overline{a}_{21}] & [\underline{a}_{22}, \overline{a}_{22}] & \cdots & [\underline{a}_{2n}, \overline{a}_{2n}] \end{pmatrix}.$$
Let us first establish a weak solvability criterion for problem (10).
Theorem 2. 
Problem (10) is weakly solvable if and only if
$$\max_{j \in \mathbb{N}_{n}} \{ \underline{a}_{2j} \,\phi\, \overline{a}_{1j} \} \leq \min_{j \in \mathbb{N}_{n}} \{ \overline{a}_{2j} \,\phi\, \underline{a}_{1j} \}. \quad (11)$$
Proof. 
Necessity. Since the interval problem (10) is weakly solvable, there exists $A = (a_{ij}) \in \mathcal{A}$ such that sub-problem (5) is solvable. It can be known from the proof of Theorem 1 that Equation (8) holds. It follows that $a_{2j} \,\phi\, a_{1j} = (a_{21} \otimes p_{j}) \,\phi\, (a_{11} \otimes p_{j}) = a_{21} \,\phi\, a_{11}$. Let $d = a_{21} \,\phi\, a_{11}$. Then, $a_{2j} = a_{1j} \otimes d$. Hence, $\underline{a}_{2j} \,\phi\, \overline{a}_{1j} \leq a_{2j} \,\phi\, a_{1j} = d \leq \overline{a}_{2j} \,\phi\, \underline{a}_{1j}$, $j \in \mathbb{N}_{n}$. This implies that $\max_{j \in \mathbb{N}_{n}} \{ \underline{a}_{2j} \,\phi\, \overline{a}_{1j} \} \leq d \leq \min_{j \in \mathbb{N}_{n}} \{ \overline{a}_{2j} \,\phi\, \underline{a}_{1j} \}$, i.e., Inequality (11) holds.
Sufficiency. Since Inequality (11) holds, there exists $d \in \mathbb{R}$ such that $\underline{a}_{2j} \,\phi\, \overline{a}_{1j} \leq d \leq \overline{a}_{2j} \,\phi\, \underline{a}_{1j}$ for any $j \in \mathbb{N}_{n}$. Let $A = (a_{ij}) \in \mathbb{R}_{\max}^{2 \times n}$ be defined by
$$a_{1j} = \begin{cases} \underline{a}_{2j} \,\phi\, d, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \underline{a}_{2j} \,\phi\, \underline{a}_{1j}], \\ \underline{a}_{1j}, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \underline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]; \end{cases} \quad (12)$$
$$a_{2j} = \begin{cases} \underline{a}_{2j}, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \underline{a}_{2j} \,\phi\, \underline{a}_{1j}], \\ \underline{a}_{1j} \otimes d, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \underline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]. \end{cases} \quad (13)$$
For $j \in \mathbb{N}_{n}$, if $d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \underline{a}_{2j} \,\phi\, \underline{a}_{1j}]$, then
$$\underline{a}_{1j} = \underline{a}_{2j} \,\phi\, (\underline{a}_{2j} \,\phi\, \underline{a}_{1j}) \leq a_{1j} = \underline{a}_{2j} \,\phi\, d \leq \underline{a}_{2j} \,\phi\, (\underline{a}_{2j} \,\phi\, \overline{a}_{1j}) = \overline{a}_{1j}, \qquad \underline{a}_{2j} = a_{2j} \leq \overline{a}_{2j};$$
if $d \in [\underline{a}_{2j} \,\phi\, \underline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]$, then
$$\underline{a}_{1j} = a_{1j} \leq \overline{a}_{1j}, \qquad \underline{a}_{2j} = \underline{a}_{1j} \otimes (\underline{a}_{2j} \,\phi\, \underline{a}_{1j}) \leq a_{2j} = \underline{a}_{1j} \otimes d \leq \underline{a}_{1j} \otimes (\overline{a}_{2j} \,\phi\, \underline{a}_{1j}) = \overline{a}_{2j}.$$
Hence, $A \in \mathcal{A}$. It can be seen from (12) and (13) that $a_{2j} = d \otimes a_{1j}$ for any $j \in \mathbb{N}_{n}$. Then,
$$b_{2} = \sum_{j=1}^{n} k_{j} a_{2j} + c = \sum_{j=1}^{n} k_{j} (a_{1j} + d) + c = \sum_{j=1}^{n} k_{j} a_{1j} + c + d = b_{1} + d.$$
It follows that $b_{2} - a_{2j} = (b_{1} + d) - (a_{1j} + d) = b_{1} - a_{1j}$, and so
$$x_{j}^{*}(A, b) = \min_{i \in \mathbb{N}_{2}} \{ b_{i} - a_{ij} \} = b_{1} - a_{1j}, \quad j \in \mathbb{N}_{n}.$$
Hence,
$$\sum_{j=1}^{n} k_{j} x_{j}^{*}(A, b) = \sum_{j=1}^{n} k_{j} (b_{1} - a_{1j}) = b_{1} - \sum_{j=1}^{n} k_{j} a_{1j} = \sum_{j=1}^{n} k_{j} a_{1j} + c - \sum_{j=1}^{n} k_{j} a_{1j} = c.$$
From Lemma 3, the sub-problem with objective function $F(x) = A \otimes x$ is solvable. Hence, problem (10) is weakly solvable. □
Let us illustrate the theorem above with a numerical example.
Example 1. 
Consider the interval-valued optimization problem
$$\min_{x \in X} \mathcal{F}(x), \quad (14)$$
where the objective function is $\mathcal{F}(x) = \mathcal{A} \otimes x$,
$$\mathcal{A} = \begin{pmatrix} [1, 2] & [-2, 3] & [-1, 4] \\ [0, 4] & [-3, 0] & [0, 5] \end{pmatrix},$$
and the constraint set is $X = \{ x \in \mathbb{R}^{3} \mid 2 x_{1} + 3 x_{2} + x_{3} = 6 \}$, which can be normalized as
$$X = \Big\{ x \in \mathbb{R}^{3} \ \Big|\ \tfrac{1}{3} x_{1} + \tfrac{1}{2} x_{2} + \tfrac{1}{6} x_{3} = 1 \Big\}.$$
By a direct calculation,
$$\max_{j \in \mathbb{N}_{3}} \{ \underline{a}_{2j} \,\phi\, \overline{a}_{1j} \} = \max \{ -2, -6, -4 \} = -2 < \min_{j \in \mathbb{N}_{3}} \{ \overline{a}_{2j} \,\phi\, \underline{a}_{1j} \} = \min \{ 3, 2, 6 \} = 2.$$
It then follows from Theorem 2 that problem (14) is weakly solvable.
Indeed, let $d = 0 \in [-2, 2]$. According to (12) and (13), construct a matrix as below:
$$A = \begin{pmatrix} \underline{a}_{11} & \underline{a}_{12} & \underline{a}_{23} \,\phi\, d \\ \underline{a}_{11} \otimes d & \underline{a}_{12} \otimes d & \underline{a}_{23} \end{pmatrix} = \begin{pmatrix} 1 & -2 & 0 \\ 1 & -2 & 0 \end{pmatrix} \in \mathcal{A}.$$
Then, the sub-problem with objective function $F(x) = A \otimes x$ is solvable; its minimal value is $b = \begin{pmatrix} \tfrac{1}{3} \\ \tfrac{1}{3} \end{pmatrix}$, attained at the point $x^{*}(A, b) = \begin{pmatrix} -\tfrac{2}{3} \\ \tfrac{7}{3} \\ \tfrac{1}{3} \end{pmatrix}$. This implies that problem (14) is weakly solvable.
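The condition of Theorem 2 and the construction (12)–(13) can be checked numerically. The sketch below (illustrative Python/NumPy, using the interval bounds of Example 1 with minus signs restored; the helper name witness is ours) recovers the matrix obtained above for $d = 0$; since $d \in [\underline{\alpha}, \overline{\alpha}]$, the single comparison against $\underline{a}_{2j} \,\phi\, \underline{a}_{1j}$ suffices to select the branch of (12):

```python
import numpy as np

A_lo = np.array([[1., -2., -1.], [0., -3., 0.]])   # lower bounds of Example 1
A_hi = np.array([[2., 3., 4.], [4., 0., 5.]])      # upper bounds of Example 1

# Condition (11): max_j (a_lo_2j - a_hi_1j) <= min_j (a_hi_2j - a_lo_1j).
alpha_lo = np.max(A_lo[1] - A_hi[0])   # -2.0
alpha_hi = np.min(A_hi[1] - A_lo[0])   #  2.0

def witness(d):
    """Construction (12)-(13): a sub-problem matrix with row 2 = d + row 1,
    valid for d between alpha_lo and alpha_hi."""
    a1 = np.where(d <= A_lo[1] - A_lo[0], A_lo[1] - d, A_lo[0])
    return np.vstack([a1, a1 + d])

A = witness(0.0)   # rows (1, -2, 0) and (1, -2, 0), as constructed above
```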
Next, let us find all the solvable sub-problems of the interval-valued optimization problem (10). For convenience of presentation, let
$$\underline{\alpha} = \max_{j \in \mathbb{N}_{n}} \{ \underline{a}_{2j} \,\phi\, \overline{a}_{1j} \} \quad \text{and} \quad \overline{\alpha} = \min_{j \in \mathbb{N}_{n}} \{ \overline{a}_{2j} \,\phi\, \underline{a}_{1j} \}.$$
It can be seen from the proof of sufficiency of Theorem 2 that the solvable sub-problems share a common characteristic: $A_{1.}$ and $A_{2.}$ are proportional in the max-plus framework. The following corollary shows the existence of matrices in $\mathcal{A}$ whose rows are proportional.
Corollary 2. 
For $d \in \mathbb{R}$, let $T(d) = \{ A \in \mathcal{A} \mid A_{2.} = d \circ A_{1.} \}$. Then, $T(d) \neq \emptyset$ if and only if $d \in [\underline{\alpha}, \overline{\alpha}]$.
Proof. 
Necessity. Since $T(d) \neq \emptyset$, there exists $A \in \mathcal{A}$ such that $a_{2j} = d \otimes a_{1j}$ for any $j \in \mathbb{N}_{n}$. Then,
$$\underline{a}_{2j} \,\phi\, \overline{a}_{1j} \leq d = a_{2j} \,\phi\, a_{1j} \leq \overline{a}_{2j} \,\phi\, \underline{a}_{1j}, \quad j \in \mathbb{N}_{n}.$$
This implies that $d \in [\underline{\alpha}, \overline{\alpha}]$.
Sufficiency. For $d \in [\underline{\alpha}, \overline{\alpha}]$, let $A = (a_{ij}) \in \mathbb{R}_{\max}^{2 \times n}$ be defined by (12) and (13). It has been proved in Theorem 2 that $A \in \mathcal{A}$ and $A_{2.} = d \circ A_{1.}$. Hence, $A \in T(d)$, and so $T(d) \neq \emptyset$. □
For $d \in [\underline{\alpha}, \overline{\alpha}]$, let
$$S(d) = \big\{ A \in \mathbb{R}_{\max}^{2 \times n} \ \big|\ A_{1.} \in U(d),\ A_{2.} = d \circ A_{1.} \big\},$$
where $U(d) = [\underline{u}, \overline{u}] = ([\underline{u}_{j}, \overline{u}_{j}]) \in I(\mathbb{R}_{\max}^{1 \times n})$ is defined by
$$\underline{u}_{j} = \begin{cases} \underline{a}_{2j} \,\phi\, d, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \underline{a}_{2j} \,\phi\, \underline{a}_{1j}], \\ \underline{a}_{1j}, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \underline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]; \end{cases} \qquad \overline{u}_{j} = \begin{cases} \overline{a}_{1j}, & \text{if } d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \overline{a}_{1j}], \\ \overline{a}_{2j} \,\phi\, d, & \text{if } d \in [\overline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]. \end{cases} \quad (15)$$
Then, all the solvable sub-problems of problem (10) can be represented as follows.
Theorem 3. 
If problem (10) is weakly solvable, then all the solvable sub-problems have the form
$$\min_{x \in X} F(x), \quad (16)$$
where $F(x) = A \otimes x$, $A \in S(d)$ and $d \in [\underline{\alpha}, \overline{\alpha}]$.
Proof. 
Since problem (10) is weakly solvable, it follows from Theorem 2 that $\underline{\alpha} \leq \overline{\alpha}$. Next, let us prove $S(d) = T(d)$ for $d \in [\underline{\alpha}, \overline{\alpha}]$, where $T(d)$ is given in Corollary 2. On the one hand, let
$$\underline{A}(d) = \begin{pmatrix} \underline{u} \\ d \circ \underline{u} \end{pmatrix} \quad \text{and} \quad \overline{A}(d) = \begin{pmatrix} \overline{u} \\ d \circ \overline{u} \end{pmatrix}, \quad (17)$$
where $\underline{u}, \overline{u} \in \mathbb{R}_{\max}^{n}$ defined by (15) are the lower and upper bounds of $U(d)$, respectively. It has been proved in the sufficiency of Theorem 2 that $\underline{A}(d) \in \mathcal{A}$. Similarly, we can prove $\overline{A}(d) = (a_{ij}) \in \mathcal{A}$. Indeed, for $j \in \mathbb{N}_{n}$, if $d \in [\underline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \overline{a}_{1j}]$, then
$$\underline{a}_{1j} \leq a_{1j} = \overline{a}_{1j}, \qquad \underline{a}_{2j} = \overline{a}_{1j} \otimes (\underline{a}_{2j} \,\phi\, \overline{a}_{1j}) \leq a_{2j} = \overline{a}_{1j} \otimes d \leq \overline{a}_{1j} \otimes (\overline{a}_{2j} \,\phi\, \overline{a}_{1j}) = \overline{a}_{2j};$$
if $d \in [\overline{a}_{2j} \,\phi\, \overline{a}_{1j},\ \overline{a}_{2j} \,\phi\, \underline{a}_{1j}]$, then
$$\underline{a}_{1j} = \overline{a}_{2j} \,\phi\, (\overline{a}_{2j} \,\phi\, \underline{a}_{1j}) \leq a_{1j} = \overline{a}_{2j} \,\phi\, d \leq \overline{a}_{2j} \,\phi\, (\overline{a}_{2j} \,\phi\, \overline{a}_{1j}) = \overline{a}_{1j}, \qquad \underline{a}_{2j} \leq a_{2j} = \overline{a}_{2j}.$$
Hence, for any $A \in S(d)$, $A_{2.} = d \circ A_{1.}$ and $\underline{A} \leq \underline{A}(d) \leq A \leq \overline{A}(d) \leq \overline{A}$, i.e., $A \in \mathcal{A}$. Therefore, $A \in T(d)$, and so $S(d) \subseteq T(d)$. On the other hand, for any $A \in T(d)$, let us prove that $A_{1.} \in U(d)$. Suppose there exists $j_{1} \in \mathbb{N}_{n}$ such that $a_{1 j_{1}} < \underline{u}_{j_{1}}$. If $d \in [\underline{a}_{2 j_{1}} \,\phi\, \overline{a}_{1 j_{1}},\ \underline{a}_{2 j_{1}} \,\phi\, \underline{a}_{1 j_{1}}]$, then
$$a_{2 j_{1}} = a_{1 j_{1}} \otimes d < \underline{u}_{j_{1}} \otimes d = (\underline{a}_{2 j_{1}} \,\phi\, d) \otimes d = \underline{a}_{2 j_{1}};$$
if $d \in [\underline{a}_{2 j_{1}} \,\phi\, \underline{a}_{1 j_{1}},\ \overline{a}_{2 j_{1}} \,\phi\, \underline{a}_{1 j_{1}}]$, then $a_{1 j_{1}} < \underline{u}_{j_{1}} = \underline{a}_{1 j_{1}}$; both cases contradict $A \in \mathcal{A}$. Suppose that there exists $j_{2} \in \mathbb{N}_{n}$ such that $a_{1 j_{2}} > \overline{u}_{j_{2}}$. If $d \in [\underline{a}_{2 j_{2}} \,\phi\, \overline{a}_{1 j_{2}},\ \overline{a}_{2 j_{2}} \,\phi\, \overline{a}_{1 j_{2}}]$, then $a_{1 j_{2}} > \overline{u}_{j_{2}} = \overline{a}_{1 j_{2}}$; if $d \in [\overline{a}_{2 j_{2}} \,\phi\, \overline{a}_{1 j_{2}},\ \overline{a}_{2 j_{2}} \,\phi\, \underline{a}_{1 j_{2}}]$, then
$$a_{2 j_{2}} = a_{1 j_{2}} \otimes d > \overline{u}_{j_{2}} \otimes d = (\overline{a}_{2 j_{2}} \,\phi\, d) \otimes d = \overline{a}_{2 j_{2}};$$
both cases again contradict $A \in \mathcal{A}$. Hence, $a_{1j} \in [\underline{u}_{j}, \overline{u}_{j}]$ for any $j \in \mathbb{N}_{n}$, i.e., $A_{1.} \in U(d)$. This implies that $A \in S(d)$ and, hence, $T(d) \subseteq S(d)$. Therefore, $T(d) = S(d)$, and all the solvable sub-problems have the form (16). □
Definition 3. 
The sub-problems with objective functions $F(x) = \underline{A}(d) \otimes x$ and $F(x) = \overline{A}(d) \otimes x$ are called the lower and upper extreme solvable sub-problems (relative to $d$) of problem (10), respectively, where $\underline{A}(d)$ and $\overline{A}(d)$ are given in (17).
Let us illustrate the theorem above with the following example.
Example 2 
(continued from Example 1). Find all solvable sub-problems of problem (14). For $d \in [-2, 2]$, let $U(d) = [\underline{u}, \overline{u}] = ([\underline{u}_{j}, \overline{u}_{j}]) \in I(\mathbb{R}_{\max}^{3})$ be defined by
$$\underline{u}_{1} = \begin{cases} e \,\phi\, d, & \text{if } d \in [-2, -1], \\ 1, & \text{if } d \in [-1, 3]; \end{cases} \qquad \overline{u}_{1} = \begin{cases} 2, & \text{if } d \in [-2, 2], \\ 4 \,\phi\, d, & \text{if } d \in [2, 3]; \end{cases}$$
$$\underline{u}_{2} = \begin{cases} -3 \,\phi\, d, & \text{if } d \in [-6, -1], \\ -2, & \text{if } d \in [-1, 2]; \end{cases} \qquad \overline{u}_{2} = \begin{cases} 3, & \text{if } d \in [-6, -3], \\ e \,\phi\, d, & \text{if } d \in [-3, 2]; \end{cases}$$
$$\underline{u}_{3} = \begin{cases} e \,\phi\, d, & \text{if } d \in [-4, 1], \\ -1, & \text{if } d \in [1, 6]; \end{cases} \qquad \overline{u}_{3} = \begin{cases} 4, & \text{if } d \in [-4, 1], \\ 5 \,\phi\, d, & \text{if } d \in [1, 6]. \end{cases}$$
That is,
$$U(d) = \begin{cases} \big( [e \,\phi\, d,\ 2],\ [-3 \,\phi\, d,\ e \,\phi\, d],\ [e \,\phi\, d,\ 4] \big), & \text{if } d \in [-2, -1]; \\ \big( [1, 2],\ [-2,\ e \,\phi\, d],\ [e \,\phi\, d,\ 4] \big), & \text{if } d \in [-1, 1]; \\ \big( [1, 2],\ [-2,\ e \,\phi\, d],\ [-1,\ 5 \,\phi\, d] \big), & \text{if } d \in [1, 2]. \end{cases} \quad (18)$$
Hence, all the solvable sub-problems have the form
$$\min_{x \in X} F(x),$$
where $F(x) = A \otimes x$, $A \in S(d)$ and
$$S(d) = \big\{ A \in \mathbb{R}_{\max}^{2 \times 3} \ \big|\ A_{1.} \in U(d),\ A_{2.} = d \circ A_{1.} \big\}, \quad d \in [-2, 2].$$
Specifically, for example, let $d = 1$. Then, $U(1) = \big( [1, 2],\ [-2, -1],\ [-1, 4] \big)$. The solvable sub-problems (relative to $1$) have the form
$$\min_{x \in X} F(x),$$
where $F(x) = A \otimes x$, $A \in S(1)$ and $S(1) = \big\{ A \in \mathbb{R}_{\max}^{2 \times 3} \ \big|\ A_{1.} \in U(1),\ A_{2.} = 1 \circ A_{1.} \big\}$.
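The bounds of $U(d)$ in (15) can be evaluated mechanically. The following sketch (illustrative Python/NumPy, using the interval bounds of Example 1 with minus signs restored; the function name U is ours) computes them; at the branch boundaries the two cases of (15) agree, so a single comparison selects the branch:

```python
import numpy as np

A_lo = np.array([[1., -2., -1.], [0., -3., 0.]])   # lower bounds of Example 1
A_hi = np.array([[2., 3., 4.], [4., 0., 5.]])      # upper bounds of Example 1

def U(d):
    """Eq. (15): returns the bounds (u_lo, u_hi) of the interval row vector U(d)."""
    u_lo = np.where(d <= A_lo[1] - A_lo[0], A_lo[1] - d, A_lo[0])
    u_hi = np.where(d <= A_hi[1] - A_hi[0], A_hi[0], A_hi[1] - d)
    return u_lo, u_hi

u_lo, u_hi = U(1.0)   # U(1) = ([1, 2], [-2, -1], [-1, 4])
```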
Finally, let us present the optimal solutions of problem (10).
Theorem 4. 
The set of all optimal solutions of solvable sub-problems of problem (10) is
$$\big\{ H A_{1.}^{\top} + \boldsymbol{c} \ \big|\ A_{1.} \in U(d),\ d \in [\underline{\alpha}, \overline{\alpha}] \big\}, \quad (19)$$
where $U(d)$ is defined by (15), and
$$H = \begin{pmatrix} k_{1} - 1 & k_{2} & \cdots & k_{n} \\ k_{1} & k_{2} - 1 & \cdots & k_{n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{1} & k_{2} & \cdots & k_{n} - 1 \end{pmatrix}, \qquad \boldsymbol{c} = \begin{pmatrix} c \\ c \\ \vdots \\ c \end{pmatrix}.$$
Proof. 
For $d \in [\underline{\alpha}, \overline{\alpha}]$ and $A \in S(d)$, we have $a_{2j} = d \otimes a_{1j}$ for any $j \in \mathbb{N}_{n}$. By (6),
$$b_{1} = \sum_{j=1}^{n} k_{j} a_{1j} + c, \qquad b_{2} = \sum_{j=1}^{n} k_{j} a_{2j} + c = \sum_{j=1}^{n} k_{j} (d + a_{1j}) + c = \sum_{j=1}^{n} k_{j} a_{1j} + c + d = b_{1} + d.$$
By (2), for $j_{0} \in \mathbb{N}_{n}$,
$$x_{j_{0}}^{*}(A, b) = \min \{ b_{1} - a_{1 j_{0}},\ b_{2} - a_{2 j_{0}} \} = \min \{ b_{1} - a_{1 j_{0}},\ (b_{1} + d) - (a_{1 j_{0}} + d) \} = b_{1} - a_{1 j_{0}} = \sum_{j=1}^{n} k_{j} a_{1j} + c - a_{1 j_{0}} = \sum_{j \neq j_{0}} k_{j} a_{1j} + k_{j_{0}} a_{1 j_{0}} - a_{1 j_{0}} + c = \sum_{j \neq j_{0}} k_{j} a_{1j} + (k_{j_{0}} - 1) a_{1 j_{0}} + c = H_{j_{0}.} A_{1.}^{\top} + c.$$
Then, $x^{*}(A, b) = H A_{1.}^{\top} + \boldsymbol{c}$. According to Lemma 3 and Theorem 3, the set of all optimal solutions of solvable sub-problems of problem (10) can be represented by (19). □
Definition 4. 
For each $d \in [\underline{\alpha}, \overline{\alpha}]$, the set of optimal solutions given in (19) is called an interval optimal solution of problem (10).
Let us find the interval optimal solution of problem (14) by using the theorem above.
Example 3 
(continued from Example 2). According to Theorem 4, the set of all optimal solutions of solvable sub-problems of problem (14) is
$$\big\{ H A_{1.}^{\top} + \boldsymbol{c} \ \big|\ A_{1.} \in U(d),\ d \in [-2, 2] \big\},$$
where $U(d)$ is given by (18), and
$$H = \frac{1}{6} \begin{pmatrix} -4 & 3 & 1 \\ 2 & -3 & 1 \\ 2 & 3 & -5 \end{pmatrix}, \qquad \boldsymbol{c} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \quad (20)$$
Specifically, for example, let d = 1 . It has been shown in Example 2 that
$$S(1) = \big\{ A \in \mathbb{R}_{\max}^{2 \times 3} \ \big|\ A_{1.} \in U(1),\ A_{2.} = 1 \circ A_{1.} \big\},$$
where $U(1) = \big( [1, 2],\ [-2, -1],\ [-1, 4] \big)$. The set of optimal solutions of the solvable sub-problems with objective functions obtained from $S(1)$ is $\big\{ H A_{1.}^{\top} + \boldsymbol{c} \ \big|\ A_{1.} \in U(1) \big\}$. For
$$A = \begin{pmatrix} 1 & -1 & 2 \\ 2 & 0 & 3 \end{pmatrix} \in S(1) \subseteq \mathcal{A},$$
the sub-problem with objective function $F(x) = A \otimes x$ is solvable. By a direct calculation, the minimal value of $F(x)$ is $b = \frac{1}{6} \begin{pmatrix} 7 \\ 13 \end{pmatrix}$, which is attained at the point
$$x^{*}(A, b) = \frac{1}{6} \begin{pmatrix} 1 \\ 13 \\ -5 \end{pmatrix} = H A_{1.}^{\top} + \boldsymbol{c},$$
where $H$ and $\boldsymbol{c}$ are given in (20).
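The map $A_{1.} \mapsto H A_{1.}^{\top} + \boldsymbol{c}$ of Theorem 4 is a conventional (not max-plus) affine map and is easy to evaluate. A sketch (illustrative Python/NumPy, with the normalized weights of Example 1; the function name is ours):

```python
import numpy as np

k = np.array([1/3, 1/2, 1/6])   # normalized constraint weights of Example 1
c = 1.0

# H from (20): each row is k, with 1 subtracted from the diagonal entry.
H = np.tile(k, (3, 1)) - np.eye(3)

def optimal_solution(a1):
    """Theorem 4: x*(A, b) = H A_1.^T + c for a first row a1 in U(d)
    (the second row of A being a1 + d)."""
    return H @ np.asarray(a1, dtype=float) + c

x = optimal_solution([1., -1., 2.])   # first row of the matrix A in Example 3
```

It returns approximately $(1/6,\ 13/6,\ -5/6)$, matching Example 3, and the result satisfies the constraint $\sum_{j} k_{j} x_{j} = 1$ by construction.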

5. Application Example

Bi-objective interval-valued optimization problems arise in many real-life situations. This section takes the load-distribution problem as an example, to illustrate how the obtained results work in practical applications.
Consider the distributed system with the task precedence graph shown in Figure 1, in which an overall task is partitioned into 7 subtasks $T_{1}, T_{2}, \ldots, T_{7}$; the circles represent subtasks, the number inside each circle represents the corresponding task execution time, and the number associated with each link corresponds to the interprocessor communication time. It has been presented in Example 4 of Ref. [24] that the load-distribution problem can be described by the multi-objective optimization problem
$$\min_{x \in X} F(x), \quad (21)$$
where $X = \{ x \in \mathbb{R}^{2} \mid 0.6 x_{1} + 0.4 x_{2} = 2 \}$, $F(x) = H \otimes x$ and
$$H = \begin{pmatrix} 10 & 5 & 17 & 25 & 35 \\ 10 & 5 & 17 & 25 & 35 \end{pmatrix}^{\top}.$$
The global minimum of problem (21) is $b = (12\ \ 7\ \ 19\ \ 27\ \ 37)^{\top}$, which is attained at the point $x^{*}(H, b) = (2\ \ 2)^{\top}$. This implies that if the execution times of tasks $T_{1}$ and $T_{2}$ are both 2 units, then the completion times of tasks $T_{3}$, $T_{4}$, $T_{5}$, $T_{6}$ and $T_{7}$ are 12, 7, 19, 27 and 37 units, respectively. Moreover, the overall completion time is 37 units.
Suppose that the entries of H can be varied in the interval
H = [ 9 , 11 ] [ 2 , 6 ] [ 16 , 18 ] [ 22 , 27 ] [ 33 , 38 ] [ 8 , 11 ] [ 4 , 7 ] [ 15 , 18 ] [ 23 , 28 ] [ 32 , 36 ] : = A .
Next, let us minimize the completion time of the overall task by solving the interval-valued-optimization problem with coefficient matrix A . By a direct calculation,
α ̲ = max j N 5 { a ̲ 2 j ϕ a ¯ 1 j } = max { 3 , 2 , 3 , 4 , 6 } = 2 , α ¯ = min j N 5 { a ¯ 2 j ϕ a ̲ 1 j } = min { 2 , 5 , 2 , 6 , 3 } = 2 .
Since α ̲ < α ¯ , it follows from Theorem 2 that the interval-valued-optimization problem is solvable. This implies that the completion time of each subtask can be minimized simultaneously. Let d = 2 . By (17), the coefficient matrix of the objective function of the lower extreme solvable sub-problem is
$$\underline{A}(2) = \begin{pmatrix} \underline{u} \\ 2 \otimes \underline{u} \end{pmatrix} = \begin{pmatrix} 9 & 2 & 16 & 22 & 33 \\ 11 & 4 & 18 & 24 & 35 \end{pmatrix} \in \mathcal{A}.$$
If the coefficient matrix of the objective function of problem (21) is changed to $\underline{A}(2)$, then it follows from (6) that the global minimum is reduced to $(11.8 \;\; 4.8 \;\; 18.8 \;\; 24.8 \;\; 35.8)$. This means that the completion time of each subtask is brought forward by adjusting the parameter values within the allowable range $\mathcal{A}$. Moreover, the overall completion time is reduced from 37 to 35.8 units.
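A short sketch can confirm that $\underline{A}(2)$ lies in $\mathcal{A}$, that its second row is the max-plus multiple $2 \otimes \underline{u}$ of the first, and that the reported minimum is attained. The balancing step below is our own illustration of the computation, not formula (6) of the paper:

```python
# Lower and upper bounds of the interval matrix A (as above).
A_lo = [[9, 2, 16, 22, 33], [8, 4, 15, 23, 32]]
A_hi = [[11, 6, 18, 27, 38], [11, 7, 18, 28, 36]]

u = A_lo[0]                      # row 1 of A_lower(2)
d = 2
A_sub = [u, [a + d for a in u]]  # row 2 = d (x) row 1: max-plus proportional rows

# A_sub lies entrywise inside the interval matrix A, and its rows differ by d.
assert all(A_lo[i][j] <= A_sub[i][j] <= A_hi[i][j] for i in range(2) for j in range(5))
assert all(A_sub[1][j] - A_sub[0][j] == d for j in range(5))

# With proportional rows, objective j equals u[j] + max(x1, d + x2), so every
# objective shares the same minimizer. On the line 0.6*x1 + 0.4*x2 = 2 the
# common minimizer balances x1 = d + x2, which gives x2 = 2 - 0.6*d.
x2 = 2 - 0.6 * d
x1 = d + x2
minimum = [u[j] + max(x1, d + x2) for j in range(5)]
print((x1, x2), minimum)  # approx (2.8, 0.8) and [11.8, 4.8, 18.8, 24.8, 35.8]
```

The largest component, approximately 35.8, matches the reduced overall completion time in the text.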

6. Conclusions

This paper investigated multi-objective optimization for interval max-plus linear systems, formulated as an interval-valued-optimization problem. The strong and weak solvabilities were studied based on the solvability of sub-problems, which was shown to be determined by the proportional relation between the rows of the coefficient matrix of the objective function. This characteristic is key to establishing the solvability criteria for the interval-valued-optimization problem. A necessary and sufficient condition for the strong solvability of the multi-objective-optimization problem was established. For the bi-objective-optimization problem, a necessary and sufficient condition for the weak solvability was provided, and all the solvable sub-problems were identified. The interval optimal solution was obtained by constructing the set of all optimal solutions of the solvable sub-problems. The bi-objective-optimization technique was then applied to load distribution, to advance the minimum completion time of a distributed system.
The global-optimization problem studied in Ref. [24] is a specific sub-problem of the interval-valued-multi-objective-optimization problem introduced here. The interval model is expected to find extensive applications in engineering practice. The solvability of interval-valued-optimization problems with more than two objectives deserves further research.

Author Contributions

Conceptualization, C.W., J.Z. and P.C.; Methodology, C.W., J.Z., P.C. and H.Z.; Investigation, H.Z.; Resources, H.Z.; Writing—original draft, C.W., J.Z. and P.C.; Writing—review & editing, C.W., J.Z. and P.C.; Funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61903037.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kuhn, H.W.; Tucker, A.W. Nonlinear Programming; University of California Press: Berkeley, CA, USA, 1951.
2. Michalewicz, Z.; Fogel, D.B. How to Solve It: Modern Heuristics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2004.
3. Thie, P.R.; Keough, G.E. An Introduction to Linear Programming and Game Theory, 3rd ed.; John Wiley and Sons: Hoboken, NJ, USA, 2008.
4. Martins, J.R.R.A.; Ning, A. Engineering Design Optimization; Cambridge University Press: Cambridge, UK, 2022.
5. Zhang, X.; Gohlich, D.; Zheng, W. Karush-Kuhn-Tucker-based global optimization algorithm design for solving stability torque allocation of distributed drive electric vehicles. J. Frankl. Inst. 2017, 354, 8134–8155.
6. Shafigh, F.; Defersha, F.M.; Moussa, S.E. A linear programming embedded simulated annealing in the design of distributed layout with production planning and systems reconfiguration. Int. J. Adv. Manuf. Technol. 2017, 88, 1119–1140.
7. Gangwar, P.; Mallick, A.; Chakrabarti, S. Short-term forecasting-based network reconfiguration for unbalanced distribution systems with distributed generators. IEEE Trans. Ind. Inform. 2020, 16, 4378–4389.
8. Cuninghame-Green, R.A. Minimax Algebra; Springer: Berlin/Heidelberg, Germany, 1979.
9. Baccelli, F.; Cohen, G.; Olsder, G.J.; Quadrat, J.P. Synchronization and Linearity: An Algebra for Discrete Event Systems; John Wiley and Sons: New York, NY, USA, 1992.
10. Heidergott, B.; Olsder, G.J.; van der Woude, J. Max-Plus at Work: Modeling and Analysis of Synchronized Systems; Princeton University Press: Princeton, NJ, USA, 2006.
11. Butkovic, P. Max-Linear Systems: Theory and Algorithms; Springer: Berlin/Heidelberg, Germany, 2010.
12. Cohen, G.; Dubois, D.; Quadrat, J.P.; Viot, M. A linear-system-theoretic view of discrete-event processes and its use for performance evaluation in manufacturing. IEEE Trans. Autom. Control 1985, 30, 210–220.
13. Shang, Y.; Hardouin, L.; Lhommeau, M.; Maia, C.A. An integrated control strategy to solve the disturbance decoupling problem for max-plus linear systems with applications to a high throughput screening system. Automatica 2016, 63, 338–348.
14. Goto, H.; Murray, A.T. Optimization of project schedules in the critical-chain project-management max-plus-linear framework. Int. Rev. Model. Simul. 2018, 11, 206–214.
15. Witczak, M.; Majdzik, P.; Stetter, R.; Bocewicz, G. Interval max-plus fault-tolerant control under resource conflicts and redundancies: Application to the seat assembly. Int. J. Control 2020, 93, 2662–2674.
16. Goncalves, V.M. Max-plus approximation for reinforcement learning. Automatica 2021, 129, 109623.
17. Kubo, S.; Nishinari, K. Applications of max-plus algebra to flow shop scheduling problems. Discret. Appl. Math. 2018, 247, 278–293.
18. Butkovic, P.; MacCaig, M. On the integer max-linear programming problem. Discret. Appl. Math. 2014, 162, 128–141.
19. Xu, J.; van den Boom, T.; De Schutter, B. Optimistic optimization for model predictive control of max-plus linear systems. Automatica 2016, 74, 16–22.
20. Gaubert, S.; Katz, R.D.; Sergeev, S. Tropical linear-fractional programming and parametric mean payoff games. J. Symb. Comput. 2012, 47, 1447–1478.
21. Goncalves, V.M.; Maia, C.A.; Hardouin, L. On tropical fractional linear programming. Linear Algebra Appl. 2014, 459, 384–396.
22. Marotta, A.M.; Goncalves, V.M.; Maia, C.A. Tropical lexicographic optimization: Synchronizing timed event graphs. Symmetry 2020, 12, 1597.
23. Tao, Y.; Liu, G.P.; Chen, W. Globally optimal solutions of max-min systems. J. Glob. Optim. 2007, 39, 347–363.
24. Tao, Y.; Wang, C. Global optimization for max-plus linear systems and applications in distributed systems. Automatica 2020, 119, 109104.
25. Wang, C.; Tao, Y. Locally and globally optimal solutions of global optimization for max-plus linear systems. IET Control Theory Appl. 2022, 16, 219–228.
26. Wang, C.; Xia, Y.; Li, Z.; Tao, Y. Approximately global optimal control for max-plus linear systems and its application on load distribution. Int. J. Control 2023, 96, 1104–1115.
27. Shu, Q.; Yang, X. A specific minimax programming with the max-plus inequalities constraints. Fuzzy Sets Syst. 2024, 474, 108743.
28. Necoara, I.; Kerrigan, E.C.; De Schutter, B.; van den Boom, T. Finite-horizon min-max control of max-plus-linear systems. IEEE Trans. Autom. Control 2007, 52, 1088–1093.
29. Le Corronc, E.; Cottenceau, B.; Hardouin, L. Control of uncertain (max,+)-linear systems in order to decrease uncertainty. In Proceedings of the 4th International Symposium on Leveraging Applications ISoLA 2010, Crete, Greece, 18–21 October 2010; Volume 43, pp. 400–405.
30. Myskova, H. Interval max-plus systems of linear equations. Linear Algebra Appl. 2012, 437, 1992–2000.
31. Myskova, H.; Plavka, J. The robustness of interval matrices in max-plus algebra. Linear Algebra Appl. 2014, 445, 85–102.
32. Myskova, H.; Plavka, J. Interval robustness of (interval) max-plus matrices. Discret. Appl. Math. 2020, 284, 8–19.
33. Farahani, S.S.; van den Boom, T.; De Schutter, B. On optimization of stochastic max-min-plus-scaling systems: An approximation approach. Automatica 2017, 83, 20–27.
34. Wang, C.; Tao, Y.; Yan, H. Optimal input design for uncertain max-plus linear systems. Int. J. Robust Nonlinear Control 2018, 28, 4816–4830.
35. Litvinov, G.L.; Sobolevskii, A.N. Idempotent interval analysis and optimization problems. Reliab. Comput. 2001, 7, 353–377.
Figure 1. Task precedence graph of a distributed system [24].