1. Introduction
In recent decades, significant climate change has been observed, driven by greenhouse gas emissions. These emissions are strongly related to intensive industrial activity and fuel energy consumption. According to the US Energy Information Administration, the industrial sector accounts for approximately 54% of total energy consumption [1]. Consequently, production planners should take these environmental issues into account when preparing their plans. A common way to address these environmental concerns is the efficient use of energy in manufacturing industries [2]. Several recent studies have presented scheduling approaches as important tools for improving energy efficiency and reducing energy consumption in manufacturing industries [3,4].
It is noteworthy that scheduling is the allocation of resources (processors or machines) to jobs (or tasks) within a given time window in order to optimize one or several objectives [5]. Scheduling strategies intended to reduce energy consumption and/or improve energy efficiency are called “energy-efficient scheduling” or “green scheduling problems” (GSPs) [2]. GSPs therefore involve the allocation of jobs to machines so as to minimize the total cost (makespan, tardiness, etc.) and the consumed energy, and/or to improve energy efficiency [1]. Thus, compared to classical scheduling, energy-efficient scheduling additionally takes environmental issues into account by saving energy.
In GSPs, there are two ways to consider the minimization of consumed energy: (1) by explicitly including the energy in the objective function [2], or (2) by including the energy consumption as a constraint [6,7,8]. In this study, the second method is adopted, with a special focus on machine idle times. Idle machine times are periods when a machine is ready to process jobs but no job is available. During these idle periods, a machine consumes energy without providing any service. Moreover, it has been observed that machines in service spend most of their time in idle mode, and that 80% of the total energy is consumed during this idle phase [9]. This energy is referred to as idle energy. Therefore, energy could be saved by better controlling the idle machine periods [10].
In order to save idle energy, two approaches can be adopted. The first consists of shutting down idle machines. This method is applicable to electric machines, for which moving from an idle state to an active state is easy [11]. For other shop environments, such as furnaces [12], this first strategy is not applicable: for such shops, turning the machines off and then on again is itself highly energy consuming. Alternatively, no-idle machine constraints can be adopted. These constraints force each machine to process all of its assigned jobs continuously, in order to avoid idle times. In this study, no-idle machine constraints were adopted; this energy-saving measure constitutes the green aspect of the treated problem.
GSPs are important from a practical point of view because they model several real-life production processes with environmental implications. In this context, real-life scheduling problems have been addressed for furnaces used in the steel [12,13,14] and automobile industries [15]. These production processes are intended to produce different steel products. The shop is composed of identical parallel furnaces to avoid a single-furnace bottleneck. In addition, the pieces requiring processing in the furnaces arrive at the shop on different dates; these are the release dates. After processing, a produced piece is still hot and requires a certain time to cool; this is the delivery time. The objective is to minimize the maximum completion time (cost) and to reduce the furnaces’ high energy consumption by eliminating idle machine times.
Motivated by the abovementioned real-life problems and their practical relevance in the steel and car industries, this study addresses a green scheduling problem: the parallel machine scheduling problem with release dates, delivery times, and no-idle machines. Under the no-idle constraint, no idle time is permitted between two consecutively processed jobs on any machine. By preventing machines from having idle times, the total consumed energy is reduced and the green requirement is satisfied; this is the green aspect of this scheduling problem. The objective is to determine a feasible schedule without idle machine times that minimizes the maximum completion time (makespan). This problem occurs frequently in plants using furnaces with high energy consumption [14]. The aim of this study was to develop methods that obtain optimal or near-optimal solutions for the studied green scheduling problem.
GSPs are a subclass of production scheduling problems, which are proven to be NP-hard in the majority of cases; this hard complexity therefore also holds for GSPs. The procedures for solving hard scheduling problems fall into three categories: exact methods, heuristics, and metaheuristics (evolutionary methods such as the genetic algorithm (GA) and simulated annealing (SA)) [5]. Heuristic and metaheuristic algorithms are called approximate methods. The two main criteria for selecting the appropriate method are the computational time and the solution quality. Exact methods are known to be time consuming and are therefore generally used to solve small-sized problems. Heuristic methods are characterized by a low computational time and a moderate solution quality. Metaheuristics are reputed to provide near-optimal solutions within an acceptable computational time [5,16,17]. According to a recent survey [2], almost 59% of published research works on GSPs successfully utilize evolutionary algorithms (metaheuristics) as the solving approach. Encouraged by these successes, this study investigates the use of metaheuristics, namely the GA and SA, to solve the problem in question.
The main contributions of this work are as follows: (1) the study of the green parallel machine scheduling problem with release dates, delivery times, and no-idle machines; (2) the development of new efficient lower bounds on the makespan for the studied problem; (3) the successful adaptation of two metaheuristics, GA and SA, to efficiently solve the problem in question. These methods outperform the existing procedures, as shown in the experimental study section.
The remainder of this paper is organized as follows. In Section 2, an exhaustive literature review is presented. In Section 3, the studied problem is defined, some of its useful properties are proposed, and new lower bounds are developed. A mixed integer linear formulation is proposed in Section 4. The proposed metaheuristics are the topic of Section 5. An extensive experimental study, intended to assess the performance of the proposed procedures, is presented in Section 6. Finally, a conclusion summarizing the main findings and presenting future research directions is given.
3. Problem Definition and Properties
In this section, the studied problem is defined, some of its useful properties are presented, and a set of lower bounds is proposed.
3.1. Problem Definition
The deterministic parallel machine scheduling problem with release dates, delivery times, and no-idle machines is stated as follows. A set J of n jobs (components) has to be processed on m identical parallel machines (furnaces). Idle time between two consecutively processed jobs on a machine is not permitted. Indeed, during an idle machine time, a machine consumes energy without performing any task; this is the idle energy. According to a recent study, the idle energy derived from idle machine times represents 80% of the total consumed energy [9]. Therefore, eliminating the idle machine times is seen as an efficient means to totally eliminate the idle energy and, consequently, to reduce the total consumed energy.
Each job, j, has a release date (head or arrival time), r_j, from which the job is available for processing on any available machine. Each job, j, is assigned to a machine where it is processed during p_j time units. After being processed, a job exits the machine where it was processed and remains in the system (shop) during q_j time units. This is the delivery time (q_j), and it corresponds to the cooling period of a job that leaves a furnace.
Each machine processes at most one job at a time. It is worth noting that the completion time, C_j, of a job, j, is the date it exits the system (shop). In other terms, C_j = F_j + q_j, where F_j is the finishing processing date of job j on a machine. The objective is to find a feasible schedule that minimizes the maximum completion time, C_max (makespan), of all jobs, which is expressed as follows: C_max = max_{j ∈ J} C_j.
The scheduling is performed under the following assumptions:
All machines are available for processing jobs at time zero and over the entire time horizon.
Each job is assigned to exactly one machine.
Preemption during the processing of a job is not allowed.
The processing times, p_j; the release dates, r_j; and the delivery times, q_j, are assumed to be deterministic and integral.
According to the three-field notation [47], this problem is denoted as Pm, NI|r_j, q_j|C_max, where NI indicates the no-idle machine time constraint.
In the following, an illustrative example is given.
Example 1. Consider two furnaces that must heat different pieces (components) to a certain temperature. The pieces arrive at the system on different dates (release dates). Each piece has a delivery time that must elapse between the completion of processing in a furnace and exiting the system (shop); this corresponds to a cooling period of the processed piece. Maintaining a furnace’s required temperature while it is idle consumes energy without any task being performed. This is a waste of energy and, consequently, an environmental issue. Furthermore, the idle period adds an extra energy cost. This can be avoided by not allowing idle machine times between consecutive jobs, which can be seen as a green practice.
The data for the given example are presented in Table 1. The example consists of five jobs and two machines, whose input parameters are listed in Table 1.
Figure 1 shows an optimal solution for Example 1.
The finishing processing date of job 1 is 8; this is the date when job 1 exits machine 1. The delivery time (cooling duration) of job 1 is 3. Hence, job 1 leaves the system at 8 + 3 = 11; this is the completion time of job 1.
It is worth noting that in the previous solution (Figure 1), job 4 could start at its release date; but if it did, the no-idle machine constraint would not be satisfied, and an idle time would appear from time 6 to 7. Thus, job 4 is constrained to start later, at time 4.
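The right-shift computation used in this example can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; jobs are hypothetical (release, processing, delivery) triples, and the no-idle block on one machine is pushed right just enough for every release date to be respected:

```python
def machine_schedule(jobs):
    """jobs: list of (r, p, q) in processing order on one machine.
    Returns (start_times, completion_times) under the no-idle constraint."""
    # With no idle time the block is rigid: the start t0 of the first job
    # is pushed right until every job's release date is respected.
    t0, prefix = 0, 0
    for r, p, q in jobs:
        t0 = max(t0, r - prefix)  # job k cannot start before r_k
        prefix += p
    starts, comps = [], []
    t = t0
    for r, p, q in jobs:
        starts.append(t)
        t += p                    # finishing date F_j on the machine
        comps.append(t + q)       # completion time C_j = F_j + q_j
    return starts, comps

def makespan(machines):
    """machines: one job list per machine; returns max completion time."""
    return max(c for seq in machines for c in machine_schedule(seq)[1])
```

For instance, with one machine and jobs (0, 3, 1) and (5, 2, 0), the first job is shifted to start at time 2 so that the second can start at its release date 5 with no intermediate idle time.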
3.2. Problem Properties
In this subsection, several useful and relevant properties of the studied problem are presented.
3.2.1. Complexity
Proposition 1. The problem Pm, NI|r_j, q_j|C_max is NP-hard in the strong sense.
Proof of Proposition 1. Indeed, the problem Pm|r_j, q_j|C_max, which is a relaxation of Pm, NI|r_j, q_j|C_max, is NP-hard in the strong sense [48]. □
3.2.2. Symmetry
Remark 1. The problem Pm, NI|r_j, q_j|C_max is symmetric in the following sense: the problems Pm, NI|r_j, q_j|C_max and Pm, NI|q_j, r_j|C_max (obtained by interchanging the release dates and the delivery times) have the same optimal solutions, which means that the roles of the release dates and the delivery times are symmetric.
Proof of Remark 1. Indeed, if UB is an upper bound on the makespan and [0, UB] is the timeline, then considering the reversed timeline t → UB − t allows us to retrieve the same schedules for both problems. In particular, this holds for the optimal schedules. □
An immediate consequence of Remark 1 is that both the original problem and its symmetric counterpart can be systematically investigated while running the algorithms, in order to improve the obtained solution. This applies to the lower bounds and the metaheuristics proposed later in this work.
In the following, the original problem is referred to as the Forward problem, and the symmetric problem is referred to as the Backward problem.
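Remark 1 can be checked numerically on a single no-idle machine block: swapping release dates with delivery times and reversing the job order leaves the makespan unchanged. The sketch below uses hypothetical data and is only an illustration of the symmetry, not part of the paper's procedures:

```python
def block_makespan(seq):
    """seq: list of (r, p, q) in processing order on one no-idle machine."""
    t0, prefix = 0, 0
    for r, p, q in seq:
        t0 = max(t0, r - prefix)  # push the rigid block right
        prefix += p
    t, best = t0, 0
    for r, p, q in seq:
        t += p
        best = max(best, t + q)   # C_j = F_j + q_j
    return best

def backward(seq):
    """Backward instance: swap release dates and delivery times
    and reverse the processing order (timeline reversal)."""
    return [(q, p, r) for r, p, q in reversed(seq)]

seq = [(0, 3, 2), (1, 4, 0), (6, 2, 5)]
assert block_makespan(seq) == block_makespan(backward(seq))
```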
3.2.3. Relationship between the Problems Pm|rj,qj|Cmax and Pm,NI|rj,qj|Cmax
Proposition 2. If LB is a lower bound for the problem Pm|r_j, q_j|C_max, then LB is also a lower bound for the problem Pm, NI|r_j, q_j|C_max.
Proof of Proposition 2. An optimal schedule for Pm, NI|r_j, q_j|C_max, with optimal value C*_NI, is a feasible schedule for Pm|r_j, q_j|C_max. If C* is the optimal value of Pm|r_j, q_j|C_max, then C* ≤ C*_NI. Consequently, if LB is a lower bound for Pm|r_j, q_j|C_max, then LB ≤ C* ≤ C*_NI. □
Corollary 1. If C* denotes the optimal value of Pm|r_j, q_j|C_max, then C* is a lower bound for Pm, NI|r_j, q_j|C_max.
Proof of Corollary 1. The optimal value of Pm|r_j, q_j|C_max is trivially a lower bound for Pm|r_j, q_j|C_max. Based on Proposition 2, it is therefore a lower bound for Pm, NI|r_j, q_j|C_max. □
Remark 2. The lower bound of Corollary 1 is the optimal value of Pm|r_j, q_j|C_max. In this study, the Branch and Bound (B&B) algorithm developed in [48] was used to optimally solve the problem Pm|r_j, q_j|C_max due to its efficiency. However, Pm|r_j, q_j|C_max is NP-hard, and the B&B might fail to solve it optimally within a prefixed time limit (300 s). In this case, the best lower bound obtained while running the B&B is used instead. In the following, the lower bound obtained from Corollary 1 and Remark 2 is adopted as the lower bound for the problem Pm, NI|r_j, q_j|C_max, and is denoted LB.
5. Metaheuristic Procedures
The studied problem is NP-hard in the strong sense. In order to provide near-optimal solutions, several metaheuristics were investigated and adapted. More precisely, two metaheuristics were developed to solve the studied scheduling problem: the genetic algorithm (GA) and the simulated annealing algorithm (SA). This choice is justified by a study of the most recent literature on green parallel machine scheduling problems using metaheuristics. According to this literature [49,50,51,52,53,54,55], the genetic algorithm and simulated annealing are successful methods for solving parallel machine scheduling problems. In this context, three variants of the GA were developed, using three different crossover operators. The algorithm parameter tuning is included in the experimental section.
5.1. Genetic Algorithms
Genetic algorithms (GAs) were first developed by John Holland [56]. GAs attempt to simulate the process of natural evolution through (genetic) selection, following the principles of natural evolution in a given environment. Natural evolution does not act directly on living beings; it operates on the chromosomes’ DNA. The vocabulary used in GAs is similar to that of natural genetics, although natural genetic processes are much more complicated than the processes of GA models.
The terminology used in GAs derives from natural genetics: chromosomes are the elements from which individuals are made. These chromosomes are grouped into populations, and a combination of chromosomes constitutes the reproductive stage, performed using a mutation operator and/or a crossover operator. Other concepts are specific to the GA field, such as the quality index (fitness), also called the performance index, which is a measure used to rank chromosomes. The same goes for the evaluation function, which is the formula used to compute the quality index of a chromosome.
For a given optimization problem, an individual represents a feasible solution in the solution space. Every individual is associated with the value of the objective function being optimized. The iterative search generates populations of individuals to which selection, crossover, and mutation processes are applied. Selection promotes the best elements of the population with respect to the considered criterion (the best suited), while mutation and crossover ensure exploration of the solution space.
Figure 2 shows the simplified iterative operation of GAs.
To create GAs and solve problems effectively, it is important to identify how to represent the solutions (chromosome representation), define an evaluation function, develop the different operators, and determine the parameters, such as the choice of stopping criteria and the probability of an operator’s application. Representation, fitness, initial population, genetic operators, and standard stopping rules are discussed in detail below.
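The iterative operation of Figure 2 can be sketched as a generic skeleton. This is an illustrative sketch with hypothetical operator arguments, not the authors' implementation; the concrete operators are discussed in the following subsections:

```python
import random

def genetic_algorithm(fitness, init_pop, select, crossover, mutate,
                      generations, p_c, p_m):
    """Generic GA loop: evaluate, select, cross over with rate p_c,
    mutate with rate p_m, and track the best individual (minimization)."""
    pop = init_pop()
    best = min(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(pop):
            p1, p2 = select(pop), select(pop)
            child = crossover(p1, p2) if random.random() < p_c else p1[:]
            nxt.append(mutate(child, p_m))
        pop = nxt
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

The loop structure is independent of the encoding: only `fitness` and the three operators need to know the chromosome representation described next.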
To implement a GA for the problem under study, the first step is to design a good representation of the problem’s information in the form of a chromosome. This representation must be complete: every possible solution to the problem can be encoded with it. Additionally, every manipulation of a chromosome must correspond to a feasible solution (principle of validity).
5.1.1. Chromosome Representation
For the problem considered in this paper, the chromosome representation should simultaneously reflect the assignment of jobs to machines and the processing order of the jobs on each machine. In this study, a chromosome is represented by a permutation of jobs. The decoding consists of assigning the first unscheduled job in the permutation to the most available machine, and then eliminating idle times by right-shifting jobs until no idle time remains.
Example 2. A chromosome and its decoding are presented in Figure 3, which shows the resulting sequences on machines 1 and 2.
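The decoding just described (assign each job in permutation order to the most available machine, then right-shift each machine's block to remove idle times) can be sketched as follows. Job data and helper names are hypothetical; this is an illustration, not the authors' code:

```python
import heapq

def decode(perm, jobs, m):
    """perm: job indices; jobs[j] = (r, p, q); m machines.
    Returns (machine sequences, makespan) under the no-idle constraint."""
    machines = [[] for _ in range(m)]
    free = [(0, k) for k in range(m)]        # (finish time, machine id)
    heapq.heapify(free)
    for j in perm:
        r, p, q = jobs[j]
        f, k = heapq.heappop(free)           # most available machine
        start = max(f, r)
        machines[k].append(j)
        heapq.heappush(free, (start + p, k))
    best = 0
    for seq in machines:                     # no-idle right shift per machine
        t0, prefix = 0, 0
        for j in seq:
            t0 = max(t0, jobs[j][0] - prefix)
            prefix += jobs[j][1]
        t = t0
        for j in seq:
            t += jobs[j][1]
            best = max(best, t + jobs[j][2])  # C_j = F_j + q_j
    return machines, best
```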
5.1.2. Initial Population
The GA’s initial population is the algorithm’s starting population, and it can strongly condition the algorithm’s speed. If the position of the optimum in the solution space is completely unknown, it is natural to generate individuals randomly by uniform draws over the solution space. On the other hand, if a priori information on the problem is available, it seems natural to create individuals in a particular sub-domain to accelerate convergence. Prior knowledge from the literature could be exploited through dispatching rules or simple heuristics, such as the earliest release date, Jackson’s algorithm, or the shortest processing time. In this study, the initial population was randomly generated.
5.1.3. Selection Operator
Unlike other optimization techniques, genetic algorithms do not require any particular hypothesis on the regularity of the objective function. In particular, a GA does not use its successive derivatives, making its field of application very broad; no continuity assumption is required either. The selection operator chooses the most suitable individuals so that the population converges towards the global optimum. Many selection techniques exist in the literature, more or less adapted to the problems they address. In this study, the roulette wheel selection rule was used [57].
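Roulette wheel selection draws individuals with probability proportional to their fitness. A minimal sketch is given below; since the makespan is minimized, the fitness passed in must be a decreasing transform of the makespan (e.g., its reciprocal), which is an assumption of this sketch rather than a detail stated in the paper:

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    pick = random.uniform(0, total)   # spin the wheel
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return ind
    return population[-1]             # guard against rounding
```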
5.1.4. Reproduction
Reproduction means the cloning of an individual without modification; the clone passes to the next generation. Reproduction is thus an alternative to the crossover and mutation operators, which modify the individuals that pass into the next generation. The purpose of reproduction is to keep individuals with high fitness in the next generation.
5.1.5. Crossover Operator
Crossover aims to enrich the diversity of the population by manipulating the structure of the chromosomes [58]. Conventionally, a crossover involves two parents, chosen by the selection process, and generates two children. Crossover allows innovation (children are different from their parents) and is based on the idea that two successful parents will produce better children. The crossover rate p_c represents the proportion of parents on which a crossover operator acts. Several crossover operators exist in the literature for creating new child chromosomes with good fitness values. The most used crossover operators are: (1) the block order crossover operator (BOX) [2], (2) the linear order crossover operator (LOX) [3], and (3) the position-based crossover operator (POX) [4]. In this study, three variants of the GA are proposed, using the BOX, LOX, and POX crossover operators, respectively. These metaheuristics are denoted as GABOX, GALOX, and GAPOX, respectively. All three GA variants share the same operators, except for the crossover.
5.1.6. Mutation
Mutation is considered a primary operator, providing a small element of randomness to individuals in the population [59]. Although the crossover operator is recognized as being responsible for searching the space of possible solutions, the mutation operator is responsible for widening or narrowing the search space within the genetic algorithm, as well as for promoting the genetic variability of the individuals in the population. The probability (or ratio) p_m defines the probability of mutating each element (gene) of the representation. There are several techniques for applying mutation to individuals in a population, the most common being to mutate a percentage of the total genes in the population.
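For permutation chromosomes, a common mutation that preserves validity is the swap mutation, where each gene is swapped with a random partner with probability p_m. The paper does not fix a specific mutation operator at this point, so the following is only an illustrative sketch:

```python
import random

def swap_mutation(perm, p_m):
    """With probability p_m per gene, swap it with a random position.
    The result is always a valid permutation of the input."""
    child = perm[:]
    for i in range(len(child)):
        if random.random() < p_m:
            j = random.randrange(len(child))
            child[i], child[j] = child[j], child[i]
    return child
```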
5.1.7. Replacement Strategies
The replacement phase concerns the selection of survivors from both the parent and offspring populations. When the population size is constant, individuals are withdrawn according to a given selection strategy. Elitism consists of always selecting the best individuals from the parents and the offspring. This method leads to rapid convergence, but premature convergence could occur. Sometimes, selecting bad individuals (in terms of fitness) is necessary to avoid the sampling error problem. These replacement strategies may be stochastic or deterministic.
5.2. Simulated Annealing
The simulated annealing (SA) algorithm was first proposed in [60] to deal with highly non-linear problems, and its authors afterwards suggested it for solving combinatorial optimization problems. Since then, SA has had a significant impact on the field of metaheuristic research thanks to its simplicity and its efficiency in solving combinatorial optimization problems by escaping local optima. It has also been extensively studied for continuous optimization problems and applied to numerous other areas. It originates from the fields of material science and physics [5], where the SA algorithm simulates the energy changes in a system subjected to a cooling process until it converges to an equilibrium state (steady frozen state).
SA is a popular metaheuristic that has been successfully applied to various combinatorial optimization problems. SA is a memoryless algorithm, because it does not use any information gathered during the search. From an initial solution, SA proceeds in several iterations.
SA improves a solution by iteratively moving from the current solution s to a randomly generated neighboring solution s′. If s′ is better than s, the move from s to s′ is accepted, i.e., s is replaced by s′. If s′ is worse than s, it is accepted with a probability of e^(−Δ/T), called an uphill move, where Δ represents the difference between the objective function values of s′ and s, and T is a parameter called the temperature. As the algorithm progresses, the probability that such moves are accepted decreases. The acceptance probability is expressed as follows: P(Δ, T) = e^(−Δ/T).
When the temperature increases, the probability of accepting a worsening move also increases. At a given temperature, the lower the increase in the objective function is, the higher the likelihood of accepting the move is.
T is initially set to Tmax and is decreased after every iteration. The algorithm (Algorithm 1) terminates when the temperature T reaches Tmin. It is worth noting that Tmax and Tmin are the maximum and minimum temperatures, respectively. The best solution found during the search is stored in addition to the current solution.
Algorithm 1 SA algorithm
Input: Cooling schedule.
s = s0; % Generation of the initial solution
T = Tmax; % Starting temperature
Repeat
Repeat % At a fixed temperature
Generate a random neighbor s′;
Δ = f(s′) − f(s);
If Δ ≤ 0 then s = s′ % Accept the neighbor solution
Else accept s′ with a probability e^(−Δ/T)
Until Equilibrium condition % e.g., a given number of iterations executed at each temperature T
T = g(T); % Temperature update
Until Stopping criteria satisfied % e.g., T < Tmin
Output: Best solution found
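Algorithm 1 can be sketched as runnable code as follows. This is a minimal illustration, not the authors' implementation; the geometric cooling g(T) = α·T and the fixed number of moves per temperature are assumptions of the sketch:

```python
import math
import random

def simulated_annealing(f, s0, neighbor, t_max, t_min, alpha, nt):
    """Minimal SA loop following Algorithm 1, minimizing f.
    neighbor(s) returns a random neighbor; nt moves are tried at each
    temperature; the temperature is multiplied by alpha after each level."""
    s, best = s0, s0
    t = t_max
    while t > t_min:
        for _ in range(nt):                     # equilibrium condition
            s2 = neighbor(s)
            delta = f(s2) - f(s)
            # accept improving moves, and worsening moves with prob e^(-Δ/T)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                s = s2
                if f(s) < f(best):
                    best = s                    # keep best-so-far solution
        t = alpha * t                           # cooling: g(T) = alpha * T
    return best
```

For the studied problem, `s` would be a job permutation evaluated by the decoding procedure of Section 5.1.1, and `neighbor` a small perturbation such as a swap of two jobs.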
A few parameters control the progress of the search: the temperature and the number of iterations performed at each temperature. The main elements of SA can be summarized as follows:
The acceptance probability function: the main element of SA, which enables non-improving neighbors to be selected.
The cooling schedule: this defines the temperature at each step of the algorithm. It plays an essential role in the efficiency and effectiveness of the algorithm.
Regarding the stopping condition, the theory suggests a final temperature equal to 0. In practice, the search can be stopped when the probability of accepting a move becomes negligible. The following stopping criteria may be utilized:
Reaching a final temperature Tmin.
Achieving a predetermined number of iterations without improvement.
The objective function reaches a pre-specified threshold value (e.g., lower bound).
A predetermined number of evaluations.
6. Experimental Study and Results
This section evaluates the performance of the proposed MILP, GAs, and SA, and compares the proposed procedures to the existing ones. Computational experiments were carried out with 4000 instances for the proposed algorithms. The proposed algorithms were coded using C++ and the MILP using CPLEX software. The computational experiments were undertaken on an MSI computer with the following specifications: processor: Intel (R) Core™ i7-7700 HQ CPU at 2.8 GHz; RAM: 8 GB.
6.1. Data Set Generation
The test problems were generated as in previous studies [45]. Two classes of instances were generated (Class A and Class B) with different numbers of jobs. The parameters for Class A and Class B are presented as follows.
6.1.1. Class A
The processing times were randomly and uniformly generated in .
The release dates, , and delivery times, , were randomly and uniformly generated in , where .
6.1.2. Class B
The processing times, , were randomly and uniformly generated in .
The release dates, , were randomly and uniformly generated in .
The delivery times, , were randomly and uniformly generated in , where .
Combinations of different parameters resulted in a total of 4000 test problems.
6.2. Parameter Tuning
In this section, the parameter design of GAs and SA is investigated and tuned. Different combinations of parameters returned different results for the metaheuristic algorithms, which means the parameter values used in each algorithm affect their performance.
To identify the proper setting of each design parameter, a pilot run was conducted based on screening and the literature. The Taguchi L9 design [32] was used to study the effect of the parameters of the proposed algorithms on the makespan, and the best settings for each proposed metaheuristic's parameters were determined. The detailed results for the tuned variables are given in Table 3 and Figures A1–A6 (Appendix A).
More precisely, the GA parameters that should be determined are as follows:
Population size;
Crossover rate;
Mutation rate;
Stopping condition.
The SA parameters that should be determined before the experimental study are as follows:
Temperature;
Maximum number of iterations: MAX_ITER;
Temperature reduction factor: ALPHA;
Number of function evaluations before temperature reduction: NT factor.
The determination of these parameters requires a preliminary experimental phase, performed on a reduced number of instances and on a given set of candidate values for each parameter (Table 4). For example, the population size parameter belongs to a candidate set from which one value has to be selected. The design of this experimentation follows the Taguchi L9 design [32].
Figures A1–A5 present the average relative percent deviation (ARPD) for each parameter and for each number of jobs. Each figure contains four curves, one per parameter. If, for example, we focus on the selection of the population size parameter, we observe that for all numbers of jobs, the minimum ARPD is reached for a population size of 120. Therefore, the adopted value of the population size parameter is 120. The same reasoning is valid for the rest of the parameters. Figure A6 is reserved for the SA parameter selection, and the same reasoning as for the GA holds.
A summary of the main parameters and their levels for the GAs and SA is shown in Table 4.
6.3. Results and Discussions
In this subsection, the proposed procedures (GAs, SA, MILP, and the lower bound LB) are assessed. First, a pairwise comparison between the metaheuristics was performed using the average relative percent deviation (ARPD) and the average computation time. Then, the best metaheuristic was selected, and its absolute efficiency was evaluated through the average relative gap with respect to the lower bound. In addition, the best metaheuristic was compared to the MILP formulation. Finally, a comparison of the best metaheuristic with existing algorithms was carried out.
6.3.1. Metaheuristics Pairwise Comparison
First, the following should be noted:
GABOX denotes the genetic algorithm based on the block order crossover operator.
GALOX denotes the genetic algorithm based on the linear order crossover operator.
GAPOX denotes the genetic algorithm based on the position-based crossover operator.
SA denotes the simulated annealing algorithm.
For each instance, the makespans obtained via GABOX, GALOX, GAPOX, and SA are recorded, and the best (smallest) of the four, denoted C_best, serves as the reference value.
For a given algorithm, A, with makespan C_A, the relative percent deviation with reference to C_best is defined as follows: RPD(A) = 100 × (C_A − C_best)/C_best. For a subset of instances, the ARPD is the average RPD.
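This RPD/ARPD computation can be sketched in a few lines (hypothetical helper names; C_best denotes the best makespan obtained over the compared algorithms on an instance):

```python
def rpd(c_alg, c_best):
    """Relative percent deviation of one makespan from the best one."""
    return 100.0 * (c_alg - c_best) / c_best

def arpd(pairs):
    """pairs: (c_alg, c_best) per instance; returns the average RPD."""
    return sum(rpd(a, b) for a, b in pairs) / len(pairs)
```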
The three variants of the GA and the SA algorithm were compared using the ARPD and the average computation time consumed in running each algorithm. The detailed results are presented in Table 5, Table 6, Figure 4, and Figure 5.
Based on Table 5 and Figure 4, the GAPOX metaheuristic largely outperforms the other metaheuristics for Class A. In addition, its average computation time is satisfactory, while the SA presents the best average time. According to Table 6 and Figure 5, the same behavior is observed for Class B: the GAPOX algorithm is again the leading metaheuristic.
For the two classes (A and B), the worst performance is detected for the GABOX metaheuristic in terms of ARPD and computation time. The two remaining metaheuristics (GALOX and SA) behave almost in the same way.
Except for GAPOX, the metaheuristics are very sensitive to the numbers of jobs and machines: their ARPD and computation time increase dramatically as n and m increase. Therefore, the GAPOX metaheuristic is selected as the best of the four proposed ones.
6.3.2. Performance of the Metaheuristic GAPOX
Based on the previous subsection, GAPOX presents the best performance compared to the others. In this subsection, based on the lower bound LB proposed in Section 3.2.3 (Remark 2), a relative percent deviation (Gap) between the makespan obtained by GAPOX and LB was used to assess the performance of the proposed algorithm. This relative percent deviation is expressed as follows: Gap = 100 × (C_GAPOX − LB)/LB, where C_GAPOX is the makespan obtained by GAPOX.
The Gap provides an upper bound on the relative distance between the makespan of the GAPOX metaheuristic and the optimal solution (which is unknown). Indeed, if C* denotes the optimal value of the makespan, then LB ≤ C* ≤ C_GAPOX, and therefore (C_GAPOX − C*)/C* ≤ (C_GAPOX − LB)/LB.
Therefore, the Gap allows the solution quality to be evaluated. The related results are detailed in Table 7, in which the key performance measures used are the average, minimum, and maximum Gap, denoted, respectively, as Average, Min, and Max.
Based on Table 7, the average relative deviation is very low for both Class A and Class B. This is strong evidence that the proposed metaheuristic performs well: the provided solutions are very close to the optimal ones. In addition, the maximum RPD values for Classes A and B remain small, and the maximum is obtained for the same instance size in both classes.
For each number of machines, m, the RPD reaches its maximum for a certain number of jobs n (in both classes), and this maximum depends on m; the minimum RPD for each m is likewise observed at a specific value of n.
6.3.3. Comparison of MILP with GAPOX
Since the problem is NP-hard in the strong sense, developing an exact method (MILP) to solve small instances of the studied problem could be useful. Indeed, the MILP solutions provide a good basis for the experimental assessment of the developed approximate approach, GAPOX. In this study, the MILP (as expected) is able to optimally solve only small-sized instances within an acceptable computational time. In order to test the efficiency of the proposed GAPOX, the optimal solution provided by the MILP (for small values of n and m) was compared to the results obtained by GAPOX, in terms of the computational time (CT) and the relative percent deviation, expressed as follows:
RPD = 100 × (Cmax(GAPOX) − Cmax(MILP)) / Cmax(MILP).
The results are reported in Table 8. Based on this table, we observe that the optimal solution is reached by GAPOX for all the instances within a short average time. Consequently, this represents a preliminary satisfactory result for GAPOX. In addition, we observe that the MILP is a time-consuming procedure, since for some small-sized instances the average computation time exceeds 1000 s.
6.3.4. GAPOX Metaheuristic Comparison with Existing Methods
To evaluate the performance of GAPOX, a comparison with the existing methods of [45] was performed on the same test problems. The existing methods are a set of four two-phase heuristics: in the first phase, a feasible solution is produced, while in the second phase, an improvement procedure is carried out. These heuristics are based on the Modified Schrage algorithm and/or an Exact Procedure (EP) (see [45] for more details). The comparison between GAPOX and the four existing heuristics was performed over the relative gap with respect to the lower bound LB. In addition, the average computation time (CT) was used to complete the comparison. The global results are provided in Table 9.
Based on Table 9, GAPOX outperforms the existing heuristics, since it reaches the minimum gap over all test problems. In addition, its average computation time is satisfactory, ranked second after the fastest existing heuristic, which is a polynomial-time heuristic; the latter should nevertheless be discarded because of its high gap. Moreover, the gap reduction achieved by GAPOX over the best previous heuristic is substantial. This provides strong evidence of the efficiency and performance of the GAPOX metaheuristic. More detailed results are presented in Table 10 and Table 11.
Based on Table 10, GAPOX outperforms the existing heuristics for Class A, since it reaches the minimum average relative gap; moreover, for every combination of n and m, GAPOX presents the minimum gap compared to the other heuristics. For Class B, and based on Table 11, GAPOX almost always presents the best gap, although the best competing heuristic remains faster in terms of the average computation time. Therefore, globally, GAPOX outperforms the existing heuristics. Remarkably, the average computation time of GAPOX grows almost linearly with the number of jobs n. This observation could be further investigated in future research works.
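The near-linear growth of the average computation time in n could be checked, for instance, with a simple least-squares fit of CT against n (an illustrative sketch on hypothetical data; the paper reports no such fit):

```python
def linear_fit(ns, cts):
    """Least-squares fit ct ~ a*n + b over (n, CT) pairs,
    returning (slope a, intercept b, coefficient of determination R^2);
    R^2 close to 1 indicates near-linear growth of CT in n."""
    k = len(ns)
    mean_n = sum(ns) / k
    mean_t = sum(cts) / k
    sxx = sum((n - mean_n) ** 2 for n in ns)
    sxy = sum((n - mean_n) * (t - mean_t) for n, t in zip(ns, cts))
    a = sxy / sxx
    b = mean_t - a * mean_n
    ss_res = sum((t - (a * n + b)) ** 2 for n, t in zip(ns, cts))
    ss_tot = sum((t - mean_t) ** 2 for t in cts)
    return a, b, 1.0 - ss_res / ss_tot
```

On perfectly linear data such as ns = [10, 20, 30, 40] and cts = [1.0, 2.0, 3.0, 4.0], the fit returns a slope of 0.1, an intercept of 0.0, and R² = 1.0.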
6.4. Green Aspect Analysis
The objective of this study was to reduce the total consumed energy by eliminating idle energy. This section numerically evaluates the efficiency of the proposed methods in achieving this goal. In this context, for each test problem, the GAPOX algorithm was applied first with the no-idle constraint and then without it. The two obtained schedules are denoted as S_NI (schedule with no idle) and S_I (schedule with idle), and their respective makespans as Cmax(S_NI) and Cmax(S_I). In addition, the consumed energy is assumed to be proportional to the duration [2]. In other terms, if T is a period of time and E(T) is the corresponding consumed energy, then a positive real α exists such that:
E(T) = α × T.
Therefore, the percent of saved idle energy, PSE, is expressed as follows:
PSE = 100 × TIT(S_I) / (m × Cmax(S_I)),
where TIT(S_I) is the total idle time of the schedule S_I, which is proportional to the idle energy, and m × Cmax(S_I) is proportional to the total consumed energy of the schedule S_I.
In addition, PMA denotes the percent of makespan augmentation from S_I to S_NI, expressed as follows:
PMA = 100 × (Cmax(S_NI) − Cmax(S_I)) / Cmax(S_I).
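Under the assumption E(T) = α × T, the two indicators can be sketched as follows (a minimal illustration; the normalization of the saved idle energy by the total machine time m × Cmax(S_I) is an assumption consistent with the proportionality argument above):

```python
def pse(total_idle_time, m, cmax_idle):
    """Percent of saved idle energy: the idle time eliminated by the
    no-idle schedule, relative to the total machine time m * Cmax(S_I);
    under E = alpha * T both quantities are proportional to energies,
    so the ratio of times equals the ratio of energies."""
    return 100.0 * total_idle_time / (m * cmax_idle)

def pma(cmax_no_idle, cmax_idle):
    """Percent of makespan augmentation from the idle schedule S_I
    to the no-idle schedule S_NI."""
    return 100.0 * (cmax_no_idle - cmax_idle) / cmax_idle
```

For example, 60 units of idle time on 3 machines with Cmax(S_I) = 100 correspond to a PSE of 20%, while a makespan increase from 100 to 101 corresponds to a PMA of 1%.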
Table 12 presents the average PSE and the average PMA obtained after running the GAPOX algorithm on all the test problems.
According to
Table 12, an important reduction in the total consumed energy (29.57%) was achieved by the elimination of idle machine times, at the price of only a 0.12% augmentation in the total cost (makespan). This is strong evidence of the efficiency of the proposed procedure in saving energy and reinforcing the green scheduling benefits. More detailed results are presented in
Table 13.
Based on Table 13, the maximum PSE and the maximum PMA are each reached for a specific combination of n and m. In addition, both PSE and PMA decreased as n increased, and the same behavior was exhibited as m increased.
7. Conclusions and Future Works
This study investigated a scheduling problem on m identical parallel machines under release dates, delivery times, and no-idle time constraints, with makespan minimization as the objective. This problem belongs to the green scheduling problems, since idle time is not permitted: during idle time, a machine is available without processing jobs, which is a waste of energy. By eliminating idle time, the consumed energy is saved and minimized; this is the green aspect of the problem. Owing to the NP-hardness of the studied problem, we proposed a mixed-integer linear programming (MILP) model and a family of metaheuristics. The family of metaheuristics is composed of three variants of the GA, based on three different crossover operators, and an SA algorithm. In addition, a lower bound (LB) was proposed and used to evaluate the produced solutions through the relative percent deviation. Extensive computational experiments were carried out to evaluate the performance and efficiency of the proposed procedures (the metaheuristics, the MILP, and LB), using the relative percent deviation and the computation time as performance measures. The MILP reached the optimal solution for small-sized instances but was unable to do so for large-sized instances. In addition, the numerical results indicate the superiority of the proposed GAPOX in comparison to the MILP and the remaining proposed metaheuristics. Furthermore, it was shown that GAPOX outperforms the existing heuristics.
Future research works could explore other metaheuristics and provide an efficient exact algorithm for the studied problem; the procedures proposed in this work could serve as a useful starting point for building such exact algorithms. In addition, more realistic constraints could be included, such as setup times, maintenance intervention dates, or machine unavailability. Finally, other objective functions, such as total tardiness minimization, could be investigated.