Article

Scheduling Two Identical Parallel Machines Subjected to Release Times, Delivery Times and Unavailability Constraints

Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
* Author to whom correspondence should be addressed.
Processes 2020, 8(9), 1025; https://doi.org/10.3390/pr8091025
Submission received: 17 July 2020 / Revised: 13 August 2020 / Accepted: 17 August 2020 / Published: 21 August 2020
(This article belongs to the Special Issue Optimization Algorithms Applied to Sustainable Production Processes)

Abstract

This paper proposes a genetic algorithm (GA) for scheduling two identical parallel machines subjected to release times and delivery times, where the machines are periodically unavailable. To make the problem more practical, we assumed that the machines undergo periodic maintenance rather than assuming they are always available. The objective is to minimize the makespan ($C_{\max}$). A lower bound (LB) of the makespan for the considered problem is proposed. The GA performance was evaluated in terms of the relative percentage deviation (RPD) (the relative distance to the LB) and the central processing unit (CPU) time. Response surface methodology (RSM) was used to optimize the GA parameters, namely, population size, crossover probability, mutation probability, mutation ratio, and selection pressure, so as to simultaneously minimize the RPD and CPU time. The optimized settings of the GA parameters were then used to further analyze the scheduling problem. A factorial design over the scheduling problem input variables, namely, processing times, release times, delivery times, availability and unavailability periods, and number of jobs, was used to evaluate their effects on the RPD and CPU time. The results show that increasing the release time intervals, decreasing the availability periods, and increasing the number of jobs all increase the RPD and CPU time and make it very difficult to reach the LB.

1. Introduction

Scheduling is the process of accomplishing determined tasks by effectively using restricted resources. Various scheduling problems have been investigated in many research fields, such as manufacturing operations [1], project management [2], construction [3], airline crew scheduling [4], outpatient appointments [5], maintenance [6], cloud manufacturing [7], and personnel scheduling [8]. Production scheduling is an important problem that has been widely investigated in the literature, such as in the work presented by Pinedo [9]. The production scheduling problem mostly refers to assigning jobs to be processed on one or more machines in a specific sequence that leads to an optimal objective subject to specific constraints. Abedinnia et al. [10] conducted a comprehensive literature review of the production scheduling problem.
The parallel machine scheduling problem refers to assigning a set of jobs to a specific number of machines by allocating each machine to specific job(s) so as to optimize specific objective(s) [11]. Each job must be processed on exactly one machine, and each machine can process only one job at a time. When the processing time of a job is the same on all machines (the machines have the same speed), the machines are called identical [12]. However, when the processing time depends on the machine and differs from one machine to another, either uniformly or arbitrarily (unrelated), the machines are called nonidentical [13,14]. Many studies in the literature have investigated the parallel machine scheduling problem with different objectives and constraints, and different solution approaches have been proposed depending on the desired objectives, which include, but are not limited to, minimizing the flow time [15], total/weighted tardiness [16,17], number of tardy jobs [18], and makespan [19]. In this context, the makespan is the maximum completion time when assigning each job to one machine [20]. The makespan objective, which is the objective of this research, has also been studied in production environments other than parallel machines, such as the flow shop [21], job shop [22], flexible open shop [23], and single-machine scheduling problem [24]. Furthermore, various constraints have been addressed in the parallel machine scheduling problem, such as setup times [25], machine available times [26,27], ready times [28], release dates [29], and delivery times [30]. Release date, delivery date, and availability constraints have been investigated in the literature separately [27,31]. In this study, these constraints are studied all together.
The release date refers to the arrival time of a task to the system [32]. In the literature, it has been studied as a constraint in different production environments. Liu et al. [33] presented a single-machine scheduling problem with release times. Other scheduling problems with release times include parallel machines [34], the job shop [35], and the open shop [36]. The delivery time represents the period between the completion time of a job and its exit from the system [37]. Several studies have addressed the delivery time in different scheduling problems [38,39,40]. Another constraint investigated in this research is machine availability. This constraint is due to preventive maintenance (PM) of the machines. PM should be well planned to minimize the makespan, considering that jobs cannot be interrupted. In this regard, the machines undergo planned maintenance at prespecified intervals to maintain a well-defined operational performance. PM activities usually include lubrication, inspection, cleaning, alignment, replacement, adjustment, and repair [41]. The availability of machines has been considered in machine scheduling in different situations, since in practice a machine is not available all the time. Usually, machines are subject to PM, sudden breakdowns, the processing of special tasks or high-priority jobs, the completion of unfinished jobs from the previous period, or tool changes. These situations of machine unavailability have been investigated in the literature [42,43,44,45,46,47,48,49,50].
The objective of this paper is to minimize the makespan of parallel machines with release dates, delivery dates, and machine availability constraints. To the best of our knowledge, no research study has investigated this problem. However, many reports in the literature have investigated the problem of minimizing the makespan of parallel machines with the release and delivery dates without considering machine availability [32,37,51].
The remainder of this paper is organized as follows: Section 2 discusses the research problem. Section 3 explains the algorithm used in this study. Section 4 explains the experimental designs. Section 5 presents the results and discussions. Finally, Section 6 concludes the paper.

2. Problem Description

The scheduling problem of two identical parallel machines subjected to release times, delivery times, and machine unavailability is formally defined as follows. A set of $n$ jobs $j_1, \ldots, j_n$ has to be processed by a set of $m$ identical parallel machines $m_1$ and $m_2$. Each job $j$ has a release time $r_j$, from which it is ready to be processed. Job $j$ has a processing time $p_j$ and has to be processed on an available machine $m_i$. A particular machine is available for a period $t_i$, after which it is unavailable for a time $s_i$, the time required to conduct a PM action on machine $m_i$. Since the two machines are identical, the available and unavailable periods of the two machines are considered to be the same; in other words, $t_1 = t_2 = t$ and $s_1 = s_2 = s$. Each $j \in J$ must spend a duration called the delivery time $q_j$ after its processing on machine $m_i$ has finished. The processing of all jobs on the identical parallel machines is conducted under the following assumptions:
  • Each machine can process only one job at a time, and all jobs are non-preemptive.
  • If machine $i$ is unavailable (down for a PM action), it cannot process jobs until $s$ has elapsed.
  • PM activities can be performed early (before the end of the period $t$), but, because of the possibility of failure, they cannot be delayed.
  • The machines are available for processing again after the unavailable period (PM activity).
  • The release time $r_j$, delivery time $q_j$, and processing time $p_j$ of each job $j$ are known in advance.
  • The availability $t$ and unavailability $s$ periods of a particular machine are deterministic and known in advance.
The objective is to minimize the makespan ($C_{\max}$). According to [37], the related problem without machine availability and PM constraints is NP-hard. Therefore, the considered problem is NP-hard and can be expressed using Graham's notation as $P2\,|\,r_j, q_j, avpm\,|\,C_{\max}$, where "$avpm$" denotes the constraint of machine unavailability due to preventive maintenance (PM). The following notations will be used to define the considered problem:
  • $n$: number of jobs;
  • $m$: number of machines;
  • $j$: job index ($j = 1, \ldots, n$);
  • $i$: machine index ($i = 1, 2$);
  • $p_j$: processing time of job $j$;
  • $r_j$: release time of job $j$;
  • $q_j$: delivery time of job $j$;
  • $st_j$: starting time for processing job $j$;
  • $t$: machine available time;
  • $s$: machine unavailable time, i.e., the time required to perform a PM action;
  • $C_j$: completion time of job $j$, $C_j = st_j + p_j + q_j$;
  • $C_{\max}$: maximum completion time, $\max_j(C_j)$.
The proposed objective function for the considered scheduling problem is stated as follows:

Minimize $C_{\max}$    (1)

where $C_{\max}$ is calculated as $\max_{j \in J} C_j$, with $C_j = st_j + p_j + q_j$. The following example illustrates the schedule construction of the considered problem, $P2\,|\,r_j, q_j, avpm\,|\,C_{\max}$. Consider a set of eight jobs with the processing times, release times, and delivery times shown in Table 1. The jobs have to be scheduled on two identical machines, $m_1$ and $m_2$. The machine available periods and the unavailable periods (the time required for performing the PM actions) are presented in Table 2. For example, machine 1 is available for 9 units of time before a PM action, and then it is down (unavailable) for 2 units of time (under maintenance). Let the current job sequence $\pi$ be (7, 5, 3, 8, 1, 6, 2, 4). According to the given sequence, each job is assigned to the earliest available machine.
The constructed schedule of the given sequence $\pi$ is presented on a Gantt chart in Figure 1. From the sequence $\pi$, job 7 is the first job to be scheduled. Since the two machines are available at time 0 and job 7 is ready at time 1 ($r_7 = 1$), job 7 can be assigned to either machine. Here, job 7 is assigned to $m_1$ and is processed for 6 units of time ($p_7 = 6$), starting at time 1 ($st_7 = 1$) and finishing at 7. Job 7 has a delivery time of 4 units of time ($q_7 = 4$), so its completion time is $C_7 = st_7 + p_7 + q_7 = 1 + 6 + 4 = 11$. The second job to be scheduled is job 5, which is available at time 2 ($r_5 = 2$). Machine $m_2$ is available at time 2; hence, job 5 is assigned to machine 2, and its starting processing time is 2. It is processed for 6 units of time ($p_5 = 6$), finishing at time 8. Job 5 has a delivery time of 6 units of time ($q_5 = 6$), so its completion time is $C_5 = 14$. The remaining unscheduled jobs are scheduled by the same procedure until all jobs in $\pi$ are scheduled. Since the machines are not always available, as they need periodic PM, jobs cannot be processed while machines are under PM actions. From Table 2, machines 1 and 2 should have a PM after every 9 units of operational time ($t = 9$), and the time required to perform the PM is 2 units ($s = 2$). A PM on machine 1 is performed for 2 units of time starting at time 9, even though machine 1 has worked for only 8 units of time. This is because job 1, the next scheduled job in $\pi$, has a processing time of 2 units, which would exceed the availability period ($t = 9$); because of the possibility of failure, the PM on machine 1 is performed at time 9, before job 1 is processed. Machine 1 is available again at time 11 and starts processing job 1. Note that the machine is in its best condition after the PM action. Machine 2 operates for 9 units until time 11 and is then unavailable for 2 units. After the PM on machine 2 finishes, job 2 is processed by machine 2. From Figure 1, the $C_{\max}$ for the given sequence is 23.
It is worth mentioning that the number of jobs, processing times, release times, delivery times, and availability and unavailability periods all affect $C_{\max}$. For example, machine 1 waits 1 unit of time for job 7 to be ready, and machine 2 waits 2 units for job 5 to be ready for processing. In addition, jobs 1 and 2 wait for processing on machines 1 and 2, respectively, while those machines are unavailable (under PM).
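To make the decoding rule concrete, the following minimal Python sketch (a reconstruction for illustration; the authors implemented their GA in MATLAB) replays the example: each job of $\pi$ goes to the machine that becomes available first, and a PM of length $s$ is inserted whenever the machine's operating age would exceed $t$. Tie-breaking details may differ slightly from the narration of Figure 1, but the resulting makespan is the same.

def decode(seq, r, p, q, t, s, m=2):
    avail = [0.0] * m    # time at which each machine can start a new job
    age = [0.0] * m      # operating time accumulated since the last PM
    cmax = 0.0
    for j in seq:
        i = min(range(m), key=lambda k: avail[k])   # earliest available machine
        if age[i] + p[j] > t:                       # PM must precede this job
            avail[i] += s                           # machine is down for s units
            age[i] = 0.0
        st = max(avail[i], r[j])                    # wait for the release time
        avail[i] = st + p[j]
        age[i] += p[j]
        cmax = max(cmax, st + p[j] + q[j])          # completion includes delivery
    return cmax

# Data of Tables 1 and 2 (jobs indexed 1 to 8):
r = {1: 1, 2: 1, 3: 2, 4: 4, 5: 2, 6: 3, 7: 1, 8: 2}
p = {1: 2, 2: 5, 3: 2, 4: 1, 5: 6, 6: 2, 7: 6, 8: 3}
q = {1: 3, 2: 5, 3: 7, 4: 4, 5: 6, 6: 4, 7: 4, 8: 2}
print(decode([7, 5, 3, 8, 1, 6, 2, 4], r, p, q, t=9, s=2))   # prints 23.0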
For this problem, we propose a lower bound (LB) for $P2\,|\,r_j, q_j, avpm\,|\,C_{\max}$, as follows:
Clearly, no job $j$ can be finished earlier than $p_j + r_j + q_j$ [37]. Furthermore, the machine availability period must be at least the maximum processing time, such that

$t \ge \max_{j \in J}\{p_j\}$    (2)

where $t$ is the machine availability period. Hence, an LB similar to that for the problem $P\,|\,r_j, q_j\,|\,C_{\max}$ is:

$LB_1 = \max_{j \in J}\{p_j + r_j + q_j\}$    (3)
As the machines are subjected to PM actions, the minimum number of PM actions per machine can be calculated based on [50,52]:

$\left\lfloor \frac{1}{2} \left( \frac{\sum_{j \in J} p_j}{t} \right) \right\rfloor$    (4)

where $t$ is the machine availability period before a PM action is needed, and $\lfloor y \rfloor$ denotes the largest integer that is smaller than or equal to $y$.
The minimum time that a machine will be unavailable due to PM activities can then be calculated as follows:

$s \left\lfloor \frac{1}{2} \left( \frac{\sum_{j \in J} p_j}{t} \right) \right\rfloor$    (5)

where $s$ is the PM time.
By adding the unavailable time due to the PM activities to the two LBs developed by [51] for $P\,|\,r_j, q_j\,|\,C_{\max}$, we get the following LBs:

$LB_2 = \frac{1}{2} \sum_{j \in J} p_j + \min_{j \in J}(r_j) + \min_{j \in J}(q_j) + s \left\lfloor \frac{1}{2} \left( \frac{\sum_{j \in J} p_j}{t} \right) \right\rfloor$    (6)

Ranking the jobs in increasing order of their release times and delivery times, such that $\bar{r}_1(J) \le \bar{r}_2(J)$ and $\bar{q}_1(J) \le \bar{q}_2(J)$ (where $\bar{r}_1(J)$ and $\bar{r}_2(J)$ are the two smallest release times and $\bar{q}_1(J)$ and $\bar{q}_2(J)$ are the two smallest delivery times), we get

$LB_3 = \frac{1}{2} \left( \sum_{j \in J} p_j + \bar{r}_1(J) + \bar{r}_2(J) + \bar{q}_1(J) + \bar{q}_2(J) \right) + s \left\lfloor \frac{1}{2} \left( \frac{\sum_{j \in J} p_j}{t} \right) \right\rfloor$    (7)

$LB(J) = \max\{LB_1, LB_2, LB_3\}$    (8)
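The bound of Equations (3)-(8) is direct to compute. The Python sketch below (again a reconstruction, not the authors' code) returns LB = 19 for the data of Tables 1 and 2, so the schedule of Figure 1 ($C_{\max} = 23$) is within roughly 21% of this bound.

import math

def lower_bound(r, p, q, t, s):
    jobs = list(p)
    lb1 = max(p[j] + r[j] + q[j] for j in jobs)          # Equation (3)
    n_pm = math.floor(0.5 * sum(p.values()) / t)         # Equation (4)
    r2 = sorted(r.values())[:2]                          # two smallest release times
    q2 = sorted(q.values())[:2]                          # two smallest delivery times
    lb2 = 0.5 * sum(p.values()) + min(r.values()) + min(q.values()) + s * n_pm
    lb3 = 0.5 * (sum(p.values()) + sum(r2) + sum(q2)) + s * n_pm
    return max(lb1, lb2, lb3)                            # Equation (8)

# With r, p, q from the previous sketch: lower_bound(r, p, q, t=9, s=2) -> 19.0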

3. Genetic Algorithm

Genetic algorithms (GAs) are adaptive metaheuristic search algorithms used for solving combinatorial optimization problems. GAs represent one branch of the field of evolutionary algorithms (EAs). GAs use techniques inspired by natural evolution, imitating the biological processes of reproduction and natural selection to search for the "fittest" solutions [53]. The GA was first proposed by Holland in 1975 as an attempt to solve optimization problems by imitating the genetic processes of living organisms and the law of the evolution of species [54]. In 1989, Goldberg [55] applied GAs to optimization and machine learning.
To solve the problem under study, we propose a GA that produces high-quality solutions within allowable computing times. The reasons for choosing a GA are its well-known high performance on combinatorial optimization problems and, more importantly, its effectiveness on related problems reported in previous studies (see, for example, [14,17]). Several genetic operators were considered, and the best GA operator parameters were selected based on a response surface methodology (RSM) experimental design, as detailed in Section 4.3. The proposed GA comprises the following operators: generation of the initial population, fitness evaluation, parent selection, crossover, and mutation. The main GA elements are discussed in the next subsections.

3.1. Chromosome Encoding

Genes are the main components from which GAs are built, since chromosomes are sequences of genes. When building a GA for scheduling problems, each job in the schedule is represented as a gene in a chromosome. This representation of individuals is known as encoding. Several encoding schemes are presented in related studies. The most used scheme is permutation encoding, in which every chromosome is represented as a permutation list of jobs. A permutation list is a linearly ordered list of all the jobs with no repeated values, where an extra gene is used to distinguish the machines on the chromosome. See Section 2 (Figure 1) for an illustration.

3.2. Initial Population

A population is a collection of chromosomes. In a GA, the two main aspects of a population are its size and its initial generation process. Usually, the initial population is generated randomly. However, for complicated problems, to increase the quality of the generated population, a small part of the initial population is sometimes generated using a problem-specific heuristic, and the rest is generated randomly to ensure population diversity. In the proposed GA, the population size is determined using a preliminary study, as detailed in Section 4, and the initial population is generated entirely at random to ensure diversity.
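A random initial population over the permutation encoding of Section 3.1 can be generated as in the short Python sketch below (a reconstruction under the stated encoding, not the authors' code):

import random

def init_population(n_jobs, pop_size, seed=0):
    rng = random.Random(seed)
    jobs = list(range(1, n_jobs + 1))
    # each chromosome is an independent random permutation of the job list
    return [rng.sample(jobs, len(jobs)) for _ in range(pop_size)]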

3.3. Chromosome Evaluation

The evaluation process computes the fitness value of each individual in the present population. The current population is evaluated by decoding each chromosome and calculating its objective function value. As the generated chromosomes do not include the PM activities, the following procedure (Procedure 1) is required to decode each chromosome and calculate its objective value. The evaluation starts by reading all the input parameters, including the randomly generated sequence $\pi[n]$. Starting with the first job of the sequence, there are two possibilities. The first is that the current machine age plus the processing time of that job is less than or equal to the availability time of the machine before PM. In this case, no PM is needed: the machine's age is updated, the job is scheduled on that available machine, the completion time of the scheduled job is calculated, and the machine availability is updated. The second possibility is that the current machine age plus the processing time of that job is larger than the availability time of the machine before PM. In this case, a PM activity is assigned to the machine before the job is scheduled: the starting and completion times of the PM activity are determined, the machine's age and availability are updated, the job is scheduled on the machine, and the completion time of the scheduled job is obtained. This process is repeated for all the jobs in the input sequence $\pi[n]$. The final step of the procedure is to calculate $C_{\max}$ as the maximum of all jobs' completion times. All details can be found in Procedure 1.

3.4. Selection and Reproduction Process

The selection process determines the chromosomes that are chosen for mating and reproduction. Its main principle is as follows: the better an individual is, the higher its chance of being a parent. In this study, roulette wheel selection was used, in which each individual is assigned a selection probability proportional to its relative fitness, $p_i = f_i / \sum_{j=1}^{n} f_j$. The selection of Beta individuals is performed by Beta independent spins of the roulette wheel, where each spin selects a single individual. The roulette wheel selection operator was chosen based on preliminary experiments, in which it performed better than the tournament and random selection operators for the problem under investigation. This is consistent with previous research (for example, [14,17,53,56]), which has suggested that the roulette wheel selection operator is the best operator for scheduling problems.
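The sketch below shows one plausible Python rendering of this selection step. Because the GA minimizes $C_{\max}$, a cost-to-fitness transform is needed before the spins; the exponential transform with Beta acting as a selection-pressure knob is an assumption, not a detail taken from the paper.

import math
import random

def roulette_select(costs, beta, n_parents, seed=0):
    rng = random.Random(seed)
    worst = max(costs)
    # assumed cost-to-fitness transform: lower makespan -> higher fitness
    fitness = [math.exp(-beta * c / worst) for c in costs]
    total = sum(fitness)
    chosen = []
    for _ in range(n_parents):          # one spin of the wheel per parent
        u, acc = rng.random() * total, 0.0
        pick = len(costs) - 1           # fallback guards against float rounding
        for idx, f in enumerate(fitness):
            acc += f
            if u <= acc:
                pick = idx
                break
        chosen.append(pick)
    return chosen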

3.5. Crossover

Reproduction in a GA is conducted by applying crossover and mutation. The crossover process combines two parents to create a new child. The crossover operator works directly on the permutation encoding; that is, it is applied to permutation-coded chromosomes. In this study, the position-based crossover (PBX) proposed by Syswerda [57] was used. In the PBX operator, a subset of positions from the first parent is randomly selected, the genes at these positions are copied to the offspring chromosome at the same positions, and the remaining positions are filled with the remaining genes in the order in which they appear in the second parent. An example of a PBX is given in Figure 2. The most important variable to be tuned in the crossover process is the crossover rate $P_c$ ($P_c \in [0, 1]$), which represents the proportion of parents on which a crossover operator will be applied.
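A compact Python sketch of the PBX operator described above (a generic rendering of Syswerda's operator, not the authors' code) is:

import random

def pbx(parent1, parent2, seed=0):
    rng = random.Random(seed)
    n = len(parent1)
    # randomly choose the set of positions to inherit from parent 1
    keep = [i for i in range(n) if rng.random() < 0.5]
    kept_genes = {parent1[i] for i in keep}
    child = [None] * n
    for i in keep:
        child[i] = parent1[i]
    # fill the gaps with the missing genes, in parent-2 order
    rest = iter(g for g in parent2 if g not in kept_genes)
    for i in range(n):
        if child[i] is None:
            child[i] = next(rest)
    return child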
Procedure 1 Chromosome Evaluation
Inputs:
$n$: the number of jobs;
$m$: the number of machines ($m = 2$);
$p_j$, $r_j$, $q_j$: the processing, release, and delivery times of each job;
$t$: the machine available time;
$s$: the machine unavailable time;
$\pi[n]$: the generated sequence (randomly generated as explained in Section 3.2).
For $h = 1$ to $n$:
  $j = \pi[h]$; $i$ = the assigned machine;
  If machine $i$ age $+\ p_j \le t$:   %% no PM action
    machine $i$ age = machine $i$ age $+\ p_j$;
    $st_j = \max(\text{machine } i \text{ availability}, r_j)$;   ($st_j = r_j$ when $j$ is the first job on machine $i$)
    $C_j = st_j + p_j + q_j$;
    machine $i$ availability = $C_j - q_j$;
  Else:   %% a PM action is required
    $st_{PM}$ = machine $i$ availability; $ct_{PM} = st_{PM} + s$;
    machine $i$ availability = $ct_{PM}$; machine $i$ age = $p_j$;
    $st_j = \max(\text{machine } i \text{ availability}, r_j)$;
    $C_j = st_j + p_j + q_j$;
    machine $i$ availability = $C_j - q_j$;
  End (if);
End (for);
Output:
$C_{\max} = \max_j(C_j)$.

3.6. Mutation

Mutation occurs after crossover is completed. This operator randomly changes one or more genes to produce a new offspring; hence, it creates new solutions and helps the search avoid local optima. The percentage of genes to be mutated is denoted by $Mu$. The mutation probability ($P_m$), which lies in the range [0, 1], determines how many chromosomes (offspring) are mutated in one generation. In this paper, we used a combination of swap, reversion, and insertion mutation operators for producing the mutated offspring. The operators are selected randomly with equal probability; that is, every time the mutation function is called by the GA, one of the three operators is chosen at random with equal probability.
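The three moves can be sketched in Python as follows (a reconstruction of the stated operators; the random choice among them mirrors the equal-probability rule above):

import random

def mutate(seq, seed=0):
    rng = random.Random(seed)
    s = list(seq)
    i, j = sorted(rng.sample(range(len(s)), 2))   # two distinct positions
    op = rng.choice(("swap", "reversion", "insertion"))
    if op == "swap":
        s[i], s[j] = s[j], s[i]                   # exchange two genes
    elif op == "reversion":
        s[i:j + 1] = reversed(s[i:j + 1])         # reverse the segment
    else:
        s.insert(i, s.pop(j))                     # move gene j to position i
    return s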

3.7. Replacement and Termination Condition

The replacement phase concerns the survivor selection from both the parent and the offspring populations. In this study, the population was updated based on elitism, in which the best individuals are selected from the parents and offspring. With elitism, the GA preserves the solution with the best objective function value in the next generation [57].
Computational time and convergence of the fitness value are considered simultaneously for terminating the GA iterations. In this paper, the search process is terminated if the fitness reaches the LB, if the best fitness value of the current population does not improve for a given number of successive iterations, or if the number of iterations reaches the maximum allowable number. The algorithm is repeated until one of these termination conditions is satisfied.
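The combined stopping rule can be expressed as a single predicate, as in the Python sketch below; the patience and iteration-budget values are placeholders, since the paper does not report the exact limits.

def should_stop(best_cmax, lb, stall_count, iteration,
                patience=100, max_iterations=1000):
    # stop when the LB is attained, the best value has stalled for
    # `patience` generations, or the iteration budget is exhausted
    return (best_cmax <= lb or stall_count >= patience
            or iteration >= max_iterations)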

4. Experimental Design

In this section, we describe the design of the experiments performed to select the best GA parameters proposed in Section 3. In the next sections, we present the indicators used for the evaluations, describe the test instances, and discuss the selection of the GA parameters.

4.1. Indicator of the Evaluation

The statistics used in the analysis of the computational experiments to assess the performance of the proposed algorithm are based on the proposed LB presented in Equation (8). For each instance, the relative distance to the LB was calculated. Thus, for instance $i$, the relative gap between $C_{\max}^{i}$ and $LB_i$ is measured by the relative percentage deviation (RPD), denoted $RPD_i$ in Equation (9):

$RPD_i = \frac{C_{\max}^{i} - LB_i}{LB_i} \times 100$    (9)

In addition to the RPD, the computation time taken to solve each instance, $CPUtime_i$, is reported.

4.2. Description of Test Instances

To study the effect of the scheduling problem constraints, an experimental study based on a factorial design was conducted using a set of instances. The test instances were generated randomly based on data sets designed in previous studies. Two sets of processing times $p_j$ were generated, similar to Liao et al. [42], uniformly in the intervals $(a = 20, b = 50)$ and $(a = 20, b = 100)$. Two sets of release times $r_j$ were uniformly distributed in the intervals $(1, a)$ and $(1, b\,n/m)$, where $a$ and $b$ correspond to the $p_j$ intervals, $n$ is the number of jobs, and $m$ is the number of machines ($m = 2$). The first set is similar to that in [37], while the second was generated so that jobs arrive throughout the scheduling horizon, which makes the problem more practical. Two sets of delivery times $q_j$ were generated, with random values uniformly distributed in the intervals $(1, 0.5b)$ and $(1, 1.5b)$, treating the delivery time as a function of the processing time. The machine availability and unavailability periods were generated similarly to Liao et al. [42], treating $t$ and $s$ as functions of the processing times and the number of jobs. Two sets of machine availability periods were generated such that $t = \frac{(a+b)n}{4}$ and $t = (a+b)n$, and two sets of machine unavailability periods were generated such that $s = \frac{(a+b)n}{12}$ and $s = \frac{(a+b)n}{6}$.
For every combination of $p_j$, $r_j$, $q_j$, $t$, and $s$, different problem sizes of $n$ jobs were generated, where $n \in \{10, 20, 30, 40, 50, 100, 200, 300, 400, 500\}$. For each size, five instances were randomly generated. A summary of the instance characteristics is presented in Table 3.
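For reproducibility, the generator can be sketched in Python as below; the exact functional forms of $t$ and $s$ are reconstructed from Table 3, and the integer-valued draws are an assumption.

import random

def make_instance(n, wide_p, wide_r, wide_q, long_t, long_s, m=2, seed=0):
    rng = random.Random(seed)
    a, b = (20, 100) if wide_p else (20, 50)
    p = [rng.randint(a, b) for _ in range(n)]                # processing times
    r_hi = int(b * n / m) if wide_r else a
    r = [rng.randint(1, r_hi) for _ in range(n)]             # release times
    q_hi = int(1.5 * b) if wide_q else int(0.5 * b)
    q = [rng.randint(1, q_hi) for _ in range(n)]             # delivery times
    t = (a + b) * n if long_t else (a + b) * n // 4          # availability period
    s = (a + b) * n // 6 if long_s else (a + b) * n // 12    # PM duration
    return p, r, q, t, s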

4.3. Response Surface Methodology

The influence of the GA parameters discussed in Section 3 on the GA performance was evaluated through an experimental design approach. RSM was used for the parametric study and optimization of the GA parameters for efficient solving of the scheduling problem under study. In this work, RSM's face-centered central composite (FCC) design, a second-order experimental design with three levels, was employed to analyze the effects of the five selected GA parameters on the output responses ($RPD_i$ and $CPUtime_i$). Based on the literature and a preliminary screening, five GA parameters and their respective levels were selected, as shown in Table 4. The design consists of 52 runs, comprising 32 cube (factorial) points, 10 axial (face-center) points, and 10 center points.
Analysis of variance (ANOVA) was performed to investigate the significance of the GA parameters and their interactions with respect to the RPD and central processing unit (CPU) time. P-values less than 0.05 indicate that model terms are significant. Model reduction was performed using forward elimination to improve the model by deleting insignificant terms without a statistically significant loss of fit. The data sets used to experimentally optimize the GA parameters were selected from the generated instances described in Section 4.2, covering the range of sample sizes $n \in [10, 500]$. The GA was run 10 times, and the average RPD and CPU time were reported for the selected instances.
Table 5 and Table 6 present the reduced ANOVA tables for the RPD and the CPU time, respectively. For the RPD, the population size, mutation ratio, and Beta are statistically significant factors, as their P-values are less than 0.05 (Table 5). The interactions of the population size with the mutation probability and with Beta are statistically significant. For the CPU time, the population size, mutation ratio, and mutation probability are statistically significant (Table 6). The interactions of the population size with the mutation probability and with the mutation ratio, and the interaction of the mutation probability with the mutation ratio, are statistically significant. Note that the two-way interactions have a contribution of 36.04%, which indicates that a second-order model should be adopted for optimizing the GA parameters. It is worth noting that the normality assumption is satisfied and that the R² values for the RPD and CPU time models are 90.14% and 82.45%, respectively.
The mathematical models in terms of actual factors have been obtained to make predictions and multiobjective optimization. The mathematical models of RPD and CPU time are given in Equations (10) and (11), respectively.
$RPD = 0.02862 - 0.000200\,PopSize - 0.00464\,P_m + 0.02278\,Mu + 0.000401\,Beta + 0.0000003\,PopSize^2 + 0.000035\,PopSize \cdot P_m - 0.000001\,PopSize \cdot Beta - 0.0193\,P_m \cdot Mu$    (10)

$CPU\ time = 70.4 - 0.177\,PopSize + 4.4\,P_c - 121.1\,P_m - 439\,Mu - 0.198\,PopSize \cdot P_c + 0.709\,PopSize \cdot P_m + 1.662\,PopSize \cdot Mu + 847\,P_m \cdot Mu$    (11)
The desirability function approach, as implemented in Minitab, was used for the multiobjective optimization of the GA parameter settings. The desirability function is a mathematical method for multiple-response optimization proposed by Derringer and Suich [58]. In this approach, each response is transformed into a desirability value ranging from 0 to 1, where 0 is least desirable. The overall desirability of the multi-response system is measured by combining the individual desirabilities of the responses, and the objective is to find parameter settings that maximize the overall desirability. In this study, the optimal solution minimizes both the RPD and the CPU time. Table 7 shows the optimal combination of GA parameter values resulting from the multiobjective optimization, with an overall desirability of 0.9867.
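For two smaller-the-better responses such as the RPD and CPU time, the Derringer-Suich construction reduces to the short Python sketch below; the target and upper-limit values are user-supplied, and the equal weighting and geometric-mean combination are assumptions rather than details reported in the paper.

def desirability_min(y, target, upper, weight=1.0):
    # smaller-the-better response: d = 1 at or below the target,
    # d = 0 at or above the upper limit, and a power curve in between
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def overall_desirability(d_rpd, d_cpu):
    # composite desirability: geometric mean of the individual values
    return (d_rpd * d_cpu) ** 0.5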

5. Results and Discussions

In this section, the effect of the scheduling problem variables on the RPD and CPU time is investigated. A full factorial experimental study was conducted using the set of randomly generated instances described in Section 4.2, based on six variables: processing times (p), release times (r), delivery times (q), availability period (t), unavailability period (s), and the number of jobs (n). Two levels were considered for each variable except the number of jobs, which has 10 levels (Table 3), resulting in 320 instance configurations. Every configuration was generated five times, giving a total of 1600 instances. The GA was run five times for every instance, and the average $C_{\max}$ and CPU time were reported.
The proposed GA has been coded using MATLAB software (The MathWorks Inc., MA, USA), and the computational experiments for all instances have been conducted on the same computer with the following specifications: Intel® Core™ i5-3230M at 2.6 GHz for the CPU processor (Intel Corporation, CA, USA) and 4.0 GB for RAM.
The computational results of the GA for the RPD and CPU times are given in Table 8 and Table 9, respectively. Results are also presented in Figure 3a–f. The following notations are used in Table 8, Table 9, and Figure 3:
  • n denotes the number of jobs.
  • 1,2 refer to the low and high levels, respectively, corresponding to variables p, r, q, t, and s.
  • RPD-1 and RPD-2 refer to the RPD at low and high levels, respectively, corresponding to variables p, r, q, t, and s.
  • CPU-1 and CPU-2 refer to the CPU time at low and high levels, respectively, corresponding to variables p, r, q, t, and s.
The computational results for the RPD and CPU time are summarized in Table 8 and Table 9. It is evident from Table 8 that the release time factor has the largest effect on the RPD. Increasing the release time interval from the low level ($U(1, a)$) to the high level ($U(1, b\,n/m)$) makes it very difficult to reach the LB and requires long CPU times. The maximum error (RPD) at the low level of $r$ was 1.71%, while the error reaches 18.50% at the high level of $r$. The maximum CPU time at the low level of $r$ was 55.7 s, while the CPU time reached 1113.9 s at the high level of $r$. However, the RPD and CPU time are relatively reduced at the high level of $r$ when the availability period is high ($t = (a+b)n$). The CPU time increases significantly with $n$.
Figure 3a-f show the effects of these variables on the average RPD and CPU time as the number of jobs changes. Figure 3a shows that an increase in the number of jobs increases the RPD and CPU time. Figure 3b shows the average RPD and CPU time at the low and high levels of $p$; there is little difference between them, as the trends at low and high $p$ are very close to each other. Figure 3c shows a significant effect of the $r$ variable on the average RPD and CPU time. Figure 3d shows the average RPD and CPU time at the low and high levels of $q$; again, there is little difference, as the trends at low and high $q$ are very close to each other. Figure 3e shows a significant effect of the $t$ variable on the average RPD; however, the effect of $t$ on the CPU time is significant only at a high number of jobs.
The effects of these variables were also studied statistically using ANOVA, which is used here to investigate the effect of the factors on each response. Table 10 shows the ANOVA table for the CPU time. Note that the CPU time data were transformed using a Box-Cox transformation with $\lambda = 0.5$ (square root) to meet the normality assumption. Model fitting revealed that a two-way interaction model best fits the data. P-values less than 0.05 indicate that model terms are significant. In this case, the processing times ($p$), release times ($r$), availability period ($t$), delivery times ($q$), and sample sizes ($n$) are significant model terms; in other words, these factors significantly affect the transformed CPU time. P-values greater than 0.05 indicate that model terms are not significant. Moreover, the interactions between the release times ($r$) and the processing times ($p$), availability period ($t$), delivery times ($q$), and sample sizes ($n$) have significant effects on the CPU time, as does the interaction between the sample sizes ($n$) and the availability period ($t$). It is worth noting that the adjusted $R^2$ equals 98.17%, which indicates an excellent representation of the variability of the data by the model terms; the contribution of the model is above 98%. The largest effect is contributed by the sample size ($n$) factor (48.47%), followed by the interaction between the release times ($r$) and the sample sizes ($n$) (25.01%); the release time ($r$) has a contribution of 20.65%.
Table 11 presents the ANOVA results for the RPD. The data were transformed using a square root transformation to meet the normality assumption. Since more than 300 data points were used, the normality assumption was verified using the skewness and kurtosis values [59]. The results reveal that the processing times (p), unavailability period (s), availability period (t), release times (r), and sample sizes (n) have a significant effect on the RPD, since their P-values are all less than 0.05. Consequently, these factors have a significant effect on the quality of the solution obtained by the model. Moreover, the interactions between the processing times (p) and the unavailability period (s); between the processing times (p) and the release times (r); between the unavailability period (s) and the availability period (t); between the unavailability period (s) and the release times (r); between the availability period (t) and the release times (r); between the delivery times (q) and the release times (r); and between the release times (r) and the sample sizes (n) all have a significant effect on the solution quality. The model shows a good $R^2$ value of 0.83, indicating that most of the variability in the data is represented by the model; the contribution of the model exceeds 83%. The release times (r) (45.06%) and their interaction with the availability period (t) (12.81%) contribute the largest effects.
Furthermore, the results were visualized to show the directional impacts of the factors and their interactions on the corresponding responses. Figure 4 shows the main effects of the studied factors on the RPD and CPU time. Figure 4a shows that the longer the availability period (t), the less CPU time is needed to obtain a good solution for the corresponding problem, regardless of the other factors. Furthermore, the first level of the release time factor (r) shows less CPU time; this is because orders are received within a short period at the first level of the release time factor. In addition, the greater the sample size, the more CPU time is needed. Moreover, Figure 4b shows that the processing time, availability period, and delivery time are inversely related to the RPD, whereas the unavailability period, release time, and sample size are directly related to the RPD.
Moreover, the interaction plots of the factors that affect the RPD and CPU time are shown in Figure 5. Figure 5a shows that the interaction between the availability period and the release time has a significant effect on the CPU time: when both the availability period and the release time are at their high levels, the CPU time is lower. Furthermore, the effect of the release time on the CPU time grows as the sample size increases. Figure 5b shows the interaction plot for the RPD. It is worth noting that almost the same interaction effects observed for the CPU time apply to the RPD.

6. Conclusions

The scheduling problem of two identical parallel machines subjected to release times and delivery times, where the machines are periodically unavailable, was tackled in this study. The machines were assumed to undergo periodic maintenance rather than being always available, which makes the scheduling problem more practical. The objective is to schedule all jobs such that $C_{\max}$ is minimized. An LB was proposed for the problem. Since the problem is NP-hard, a GA was proposed to solve it. The GA performance was evaluated in terms of the RPD (the relative distance to the LB) and the CPU time. To improve the GA performance, RSM was used to optimize the GA parameters, namely, the population size (Popsize), crossover probability (Pc), mutation probability (Pm), mutation ratio (Mu), and selection pressure (Beta), while minimizing the RPD and CPU time. The optimization of the two responses (RPD and CPU time) was performed using desirability analysis. The optimized settings of the GA parameters were then used to further evaluate the scheduling problem. A factorial design over the scheduling problem input variables, namely, the processing times, release times, delivery times, availability and unavailability periods, and the number of jobs, was used to evaluate the GA performance.
The results show that the GA parameters have significant effects on the RPD and CPU time, with $R^2$ values of 90.14% and 82.45%, respectively. Furthermore, the GA parameter interactions contribute substantially to the RPD and CPU time. To minimize the RPD and CPU time, the GA parameters should be set as follows: Popsize = 200, Pc = 0.9, Pm = 0.14, Mu = 0.001, and Beta = 1. Regarding the scheduling problem input variables, increasing the release time interval makes it very difficult to reach the LB, and it leads to a significant increase in the CPU time when the number of jobs is high. Increasing the number of jobs increases the CPU time. Decreasing the availability periods leads to a significant increase in the RPD and CPU time when the number of jobs is greater than 50.
The release times, the availability period, and their interaction affect the RPD the most. Finally, the variables that most affect the CPU time are the release times and the sample sizes, together with their interaction.

Author Contributions

Conceptualization, M.S.; Data curation, M.S. and M.A.; Funding acquisition, A.M.A.-S.; Investigation, A.M.A.-S., M.S., M.A. and M.G.; Methodology, A.M.A.-S., M.S. and M.A.; Resources, M.S. and M.G.; Supervision, A.M.A.-S. and M.S.; Validation, M.S.; Visualization, M.S. and M.A.; Writing—original draft, M.S. and M.A.; Writing—review & editing, A.M.A.-S., M.S., M.A. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Vice Deanship of Scientific Research Chairs (DSRVCH) and the APC was funded by Vice Deanship of Scientific Research Chairs (DSRVCH).

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University for funding this research project through Vice Deanship of Scientific Research Chairs (DSRVCH) and the authors thank RSSU at King Saud University for their technical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pinedo, M.; Chao, X. Operations Scheduling with Applications in Manufacturing and Services; McGraw Hill: New York, NY, USA, 1999. [Google Scholar]
  2. Kerzner, H. Project Management: A Systems Approach to Planning, Scheduling, and Controlling; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  3. Callahan, M.T.; Quackenbush, D.G.; Rowings, J.E. Construction Project Scheduling; McGraw-Hill: New York, NY, USA, 1992. [Google Scholar]
  4. Barnhart, C.; Cohn, A.M.; Johnson, E.L.; Klabjan, D.; Nemhauser, G.L.; Vance, P.H. Airline crew scheduling. In Handbook of Transportation Science; Kluwer’s International Series; Springer: New York, NY, USA, 2003; pp. 517–560. [Google Scholar]
  5. Kaandorp, G.C.; Koole, G. Optimal outpatient appointment scheduling. Health Care Manag. Sci. 2007, 10, 217–229. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Malik, M.A.K. Reliable preventive maintenance scheduling. AIIE Trans. 1979, 11, 221–228. [Google Scholar] [CrossRef]
  7. Liu, Y.; Xu, X.; Zhang, L.; Wang, L.; Zhong, R.Y. Workload-based multi-task scheduling in cloud manufacturing. Robot. Comput.-Integr. Manuf. 2017, 45, 3–20. [Google Scholar] [CrossRef]
  8. Porto, A.F.; Henao, C.A.; López-Ospina, H.; González, E.R. Hybrid flexibility strategy on personnel scheduling: Retail case study. Comput. Ind. Eng. 2019, 133, 220–230. [Google Scholar] [CrossRef]
  9. Pinedo, M. Scheduling: Theory, Algorithms, and Systems; Springer: New York, NY, USA, 2012; Volume 5. [Google Scholar]
  10. Abedinnia, H.; Glock, C.H.; Grosse, E.H.; Schneider, M. Machine scheduling problems in production: A tertiary study. Comput. Ind. Eng. 2017, 111, 403–416. [Google Scholar] [CrossRef]
  11. Cheng, T.; Sin, C. A state-of-the-art review of parallel-machine scheduling research. Eur. J. Oper. Res. 1990, 47, 271–292. [Google Scholar] [CrossRef]
  12. Wang, S.; Wu, R.; Chu, F.; Yu, J. Identical Parallel Machine Scheduling with Assurance of Maximum Waiting Time for an Emergency Job. Comput. Oper. Res. 2020, 104918. [Google Scholar] [CrossRef]
  13. Li, K.; Yang, S.-l. Non-identical parallel-machine scheduling research with minimizing total weighted completion times: Models, relaxations and algorithms. Appl. Math. Model. 2009, 33, 2145–2158. [Google Scholar] [CrossRef]
  14. Vallada, E.; Ruiz, R. A genetic algorithm for the unrelated parallel machine scheduling problem with sequence dependent setup times. Eur. J. Oper. Res. 2011, 211, 612–622. [Google Scholar] [CrossRef] [Green Version]
  15. Anand, S.; Bringmann, K.; Friedrich, T.; Garg, N.; Kumar, A. Minimizing maximum (weighted) flow-time on related and unrelated machines. Algorithmica 2017, 77, 515–536. [Google Scholar] [CrossRef] [Green Version]
  16. Wang, J.-Y. Minimizing the total weighted tardiness of overlapping jobs on parallel machines with a learning effect. J. Oper. Res. Soc. 2019, 71, 910–927. [Google Scholar] [CrossRef]
  17. Chaudhry, I.A.; Elbadawi, I.A. Minimisation of total tardiness for identical parallel machine scheduling using genetic algorithm. Sādhanā 2017, 42, 11–21. [Google Scholar] [CrossRef] [Green Version]
  18. Zheng, F.; Huang, J. Uniform parallel-machine scheduling to minimize the number of tardy jobs in the MapReduce system. In Proceedings of the 2019 International Conference on Industrial Engineering and Systems Management (IESM), Shanghai, China, 25–27 September 2019; pp. 1–6. [Google Scholar]
  19. Ozturk, O.; Begen, M.A.; Zaric, G.S. A branch and bound algorithm for scheduling unit size jobs on parallel batching machines to minimize makespan. Int. J. Prod. Res. 2017, 55, 1815–1831. [Google Scholar] [CrossRef]
  20. Martello, S.; Soumis, F.; Toth, P. Exact and approximation algorithms for makespan minimization on unrelated parallel machines. Discret. Appl. Math. 1997, 75, 169–188. [Google Scholar] [CrossRef] [Green Version]
  21. Mansouri, S.A.; Aktas, E.; Besikci, U. Green scheduling of a two-machine flowshop: Trade-off between makespan and energy consumption. Eur. J. Oper. Res. 2016, 248, 772–788. [Google Scholar] [CrossRef] [Green Version]
  22. Dao, T.-K.; Pan, T.-S.; Pan, J.-S. Parallel bat algorithm for optimizing makespan in job shop scheduling problems. J. Intell. Manuf. 2018, 29, 451–462. [Google Scholar] [CrossRef]
  23. Bai, D.; Zhang, Z.-H.; Zhang, Q. Flexible open shop scheduling problem to minimize makespan. Comput. Oper. Res. 2016, 67, 207–215. [Google Scholar] [CrossRef]
  24. Shabtay, D.; Zofi, M. Single machine scheduling with controllable processing times and an unavailability period to minimize the makespan. Int. J. Prod. Econ. 2018, 198, 191–200. [Google Scholar] [CrossRef]
  25. Hamzadayi, A.; Yildiz, G. Modeling and solving static m identical parallel machines scheduling problem with a common server and sequence dependent setup times. Comput. Ind. Eng. 2017, 106, 287–298. [Google Scholar] [CrossRef]
  26. Lee, C.-Y. Parallel machines scheduling with nonsimultaneous machine available time. Discret. Appl. Math. 1991, 30, 53–61. [Google Scholar] [CrossRef] [Green Version]
  27. Kaabi, J.; Harrath, Y. Scheduling on uniform parallel machines with periodic unavailability constraints. Int. J. Prod. Res. 2019, 57, 216–227. [Google Scholar] [CrossRef]
  28. Pfund, M.; Fowler, J.W.; Gadkari, A.; Chen, Y. Scheduling jobs on parallel machines with setup times and ready times. Comput. Ind. Eng. 2008, 54, 764–782. [Google Scholar] [CrossRef]
  29. Al-harkan, I.M.; Qamhan, A.A. Optimize Unrelated Parallel Machines Scheduling Problems With Multiple Limited Additional Resources, Sequence-Dependent Setup Times and Release Date Constraints. IEEE Access 2019, 7, 171533–171547. [Google Scholar] [CrossRef]
  30. Mensendiek, A.; Gupta, J.N. Scheduling Identical Parallel Machines with a Fixed Number of Delivery Dates. In Operations Research Proceedings 2014; Springer: Cham, Switzerland, 2016; pp. 393–398. [Google Scholar]
  31. Hermès, F.; Ghédira, K. Scheduling Jobs with Releases Dates and Delivery Times on M Identical Non-idling Machines. In Proceedings of the ICINCO (1), Madrid, Spain, 26–28 July 2017; pp. 82–91. [Google Scholar]
  32. Hidri, L.; Al-Samhan, A.M.; Mabkhot, M.M. Bounding Strategies for the Parallel Processors Scheduling Problem With No-Idle Time Constraint, Release Date, and Delivery Time. IEEE Access 2019, 7, 170392–170405. [Google Scholar] [CrossRef]
  33. Liu, P.; Gu, M.; Li, G. Two-agent scheduling on a single machine with release dates. Comput. Oper. Res. 2019, 111, 35–42. [Google Scholar] [CrossRef]
  34. Schutten, J.M.; Leussink, R. Parallel machine scheduling with release dates, due dates and family setup times. Int. J. Prod. Econ. 1996, 46, 119–125. [Google Scholar] [CrossRef] [Green Version]
  35. Timkovsky, V.G. A polynomial-time algorithm for the two-machine unit-time release-date job-shop schedule-length problem. Discret. Appl. Math. 1997, 77, 185–200. [Google Scholar] [CrossRef] [Green Version]
  36. Bai, D.; Tang, L. Open shop scheduling problem to minimize makespan with release dates. Appl. Math. Model. 2013, 37, 2008–2015. [Google Scholar] [CrossRef]
  37. Gharbi, A.; Haouari, M. An approximate decomposition algorithm for scheduling on parallel machines with heads and tails. Comput. Oper. Res. 2007, 34, 868–883. [Google Scholar] [CrossRef]
  38. Woeginger, G.J. Heuristics for parallel machine scheduling with delivery times. Acta Inform. 1994, 31, 503–512. [Google Scholar] [CrossRef]
  39. Mastrolilli, M. Efficient approximation schemes for scheduling problems with release dates and delivery times. J. Sched. 2003, 6, 521–531. [Google Scholar] [CrossRef]
  40. Kacem, I.; Kellerer, H. Approximation algorithms for no idle time scheduling on a single machine with release times and delivery times. Discret. Appl. Math. 2014, 164, 154–160. [Google Scholar] [CrossRef]
  41. Das, K.; Lashkari, R.; Sengupta, S. Machine reliability and preventive maintenance planning for cellular manufacturing systems. Eur. J. Oper. Res. 2007, 183, 162–180. [Google Scholar] [CrossRef]
  42. Liao, C.-J.; Shyur, D.-L.; Lin, C.-H. Makespan minimization for two parallel machines with an availability constraint. Eur. J. Oper. Res. 2005, 160, 445–456. [Google Scholar] [CrossRef]
  43. Xu, D.; Cheng, Z.; Yin, Y.; Li, H. Makespan minimization for two parallel machines scheduling with a periodic availability constraint. Comput. Oper. Res. 2009, 36, 1809–1812. [Google Scholar] [CrossRef]
  44. Liao, C.; Chen, C.; Lin, C. Minimizing makespan for two parallel machines with job limit on each availability interval. J. Oper. Res. Soc. 2007, 58, 938–947. [Google Scholar] [CrossRef]
  45. Lin, C.-H.; Liao, C.-J. Makespan minimization for two parallel machines with an unavailable period on each machine. Int. J. Adv. Manuf. Technol. 2007, 33, 1024–1030. [Google Scholar] [CrossRef]
  46. Liu, M.; Zheng, F.; Chu, C.; Xu, Y. Optimal algorithms for online scheduling on parallel machines to minimize the makespan with a periodic availability constraint. Theor. Comput. Sci. 2011, 412, 5225–5231. [Google Scholar] [CrossRef] [Green Version]
  47. Xu, D.; Yang, D.-L. Makespan minimization for two parallel machines scheduling with a periodic availability constraint: Mathematical programming model, average-case analysis, and anomalies. Appl. Math. Model. 2013, 37, 7561–7567. [Google Scholar] [CrossRef]
  48. Hashemian, N.; Diallo, C.; Vizvári, B. Makespan minimization for parallel machines scheduling with multiple availability constraints. Ann. Oper. Res. 2014, 213, 173–186. [Google Scholar] [CrossRef]
  49. Huo, Y. Parallel machine makespan minimization subject to machine availability and total completion time constraints. J. Sched. 2017, 1–15. [Google Scholar] [CrossRef]
  50. Huang, H.; Xiong, Y.; Zhou, Y. A larger pie or a larger slice? Contract negotiation in a closed-loop supply chain with remanufacturing. Comput. Ind. Eng. 2020, 142, 106377. [Google Scholar] [CrossRef]
  51. Carlier, J. Scheduling jobs with release dates and tails on identical machines to minimize the makespan. Eur. J. Oper. Res. 1987, 29, 298–306. [Google Scholar] [CrossRef]
  52. Low, C.; Ji, M.; Hsu, C.-J.; Su, C.-T. Minimizing the makespan in a single machine scheduling problems with flexible and periodic maintenance. Appl. Math. Model. 2010, 34, 334–342. [Google Scholar] [CrossRef]
  53. Kundakcı, N.; Kulak, O. Hybrid genetic algorithms for minimizing makespan in dynamic job shop scheduling problem. Comput. Ind. Eng. 2016, 96, 31–51. [Google Scholar] [CrossRef]
  54. Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; De La Iglesia, I.; Perallos, A. Crossover versus mutation: A comparative analysis of the evolutionary strategy of genetic algorithms applied to combinatorial optimization problems. Sci. World J. 2014, 2014, 1–22. [Google Scholar] [CrossRef] [Green Version]
  55. Goldberg, D. Genetic Algorithms in Search, Optimization, and Machine learning; Addison-Wesley Pub. Co.: Boston, MA, USA, 1989; ISBN 0201157675. [Google Scholar]
  56. Abreu, L.R.; Cunha, J.O.; Prata, B.A.; Framinan, J.M. A genetic algorithm for scheduling open shops with sequence-dependent setup times. Comput. Oper. Res. 2020, 113, 104793. [Google Scholar] [CrossRef]
  57. Syswerda, G. Scheduling optimization using genetic algorithms. In Handbook of Genetic Algorithms; Van Nostrand Reinhold: New York, NY, USA, 1991. [Google Scholar]
  58. Derringer, G.; Suich, R. Simultaneous optimization of several response variables. J. Qual. Technol. 1980, 12, 214–219. [Google Scholar] [CrossRef]
  59. Kim, H.-Y. Statistical notes for clinical researchers: Assessing normal distribution (2) using skewness and kurtosis. Restor. Dent. Endod. 2013, 38, 52–54. [Google Scholar] [CrossRef]
Figure 1. Gantt chart of the example ($C_{\max} = 23$).
Figure 2. Example of a position-based crossover (PBX) operator.
Figure 3. Effects of variables (a) number of jobs, $n$; (b) processing times, $p$; (c) ready times, $r$; (d) delivery times, $q$; (e) machine availability time, $t$; and (f) machine unavailability time, $s$, on the RPD and CPU time.
Figure 4. Main effects of the factors being studied on (a) the CPU time and (b) the RPD.
Figure 5. Interaction plots for (a) the CPU time and (b) the RPD.
Table 1. Processing times, release times, and delivery times of the example.

$j$      1   2   3   4   5   6   7   8
$r_j$    1   1   2   4   2   3   1   2
$p_j$    2   5   2   1   6   2   6   3
$q_j$    3   5   7   4   6   4   4   2
Table 2. Available and unavailable periods of machines 1 and 2 of the example.

                 $t$   $s$
$m_1$, $m_2$      9     2
Table 3. Experimental characteristics of the generated instances.

Input   Levels
p       $U(20, 50)$; $U(20, 100)$
r       $U(1, a)$; $U(1, b\,n/m)$
q       $U(1, 0.5b)$; $U(1, 1.5b)$
t       $\frac{(a+b)n}{4}$; $(a+b)n$
s       $\frac{(a+b)n}{12}$; $\frac{(a+b)n}{6}$
n       10; 20; 30; 40; 50; 100; 200; 300; 400; 500
Table 4. The genetic algorithm (GA) parameters and their respective levels.

Parameter | -1    | 0      | 1
Popsize   | 20    | 210    | 400
Pc        | 0.1   | 0.695  | 0.99
Pm        | 0.1   | 0.5    | 0.9
Mu        | 0.001 | 0.1505 | 0.3
Beta      | 1     | 10     | 19
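The -1/0/+1 columns are coded central-composite-design levels; converting a coded value to natural units is a simple linear map, as the small helper below (names my own) illustrates.

```python
def decode(coded, low, high):
    """Convert a coded CCD level in [-1, 1] to natural units."""
    return (low + high) / 2 + coded * (high - low) / 2

# decode(0, 20, 400) -> 210.0, matching Popsize's centre point in Table 4
```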
Table 5. Reduced analysis of variance (ANOVA) table for the relative percentage deviation (RPD).

Source              | DF | Seq SS   | Contribution | Adj SS   | Adj MS   | F-value | p-value
Model               | 8  | 0.007765 | 90.14%       | 0.007765 | 0.000971 | 49.16   | 0.000
Linear              | 4  | 0.006092 | 70.72%       | 0.006092 | 0.001523 | 77.14   | 0.000
Popsize             | 1  | 0.005858 | 68.00%       | 0.005858 | 0.005858 | 296.71  | 0.000
Pm                  | 1  | 0.000000 | 0.00%        | 0.000000 | 0.000000 | 0.01    | 0.935
Mu                  | 1  | 0.000131 | 1.52%        | 0.000131 | 0.000131 | 6.61    | 0.014
Beta                | 1  | 0.000103 | 1.20%        | 0.000103 | 0.000103 | 5.22    | 0.027
Square              | 1  | 0.001310 | 15.21%       | 0.001310 | 0.001310 | 66.37   | 0.000
Popsize^2           | 1  | 0.001310 | 15.21%       | 0.001310 | 0.001310 | 66.37   | 0.000
Two-way interaction | 3  | 0.000363 | 4.21%        | 0.000363 | 0.000121 | 6.13    | 0.001
Popsize × Pm        | 1  | 0.000229 | 2.66%        | 0.000229 | 0.000229 | 11.61   | 0.001
Popsize × Beta      | 1  | 0.000091 | 1.06%        | 0.000091 | 0.000091 | 4.60    | 0.038
Pm × Mu             | 1  | 0.000043 | 0.50%        | 0.000043 | 0.000043 | 2.17    | 0.148
Error               | 43 | 0.000849 | 9.86%        | 0.000849 | 0.000020 |         |
Pure error          | 9  | 0.000030 | 0.35%        | 0.000030 | 0.000003 |         |
Total               | 51 | 0.008614 | 100.00%      |          |          |         |

R-sq: 90.14%; R-sq(adj): 88.31%; R-sq(pred): 83.95%.
Table 6. Reduced ANOVA table for the central processing unit (CPU) time.

Source              | DF | Seq SS  | Contribution | Adj SS  | Adj MS  | F-value | p-value
Model               | 8  | 572,596 | 82.45%       | 572,596 | 71,574  | 25.26   | 0.000
Linear              | 4  | 322,331 | 46.42%       | 322,331 | 80,583  | 28.44   | 0.000
Popsize             | 1  | 102,731 | 14.79%       | 102,731 | 102,731 | 36.25   | 0.000
Pc                  | 1  | 4120    | 0.59%        | 4120    | 4120    | 1.45    | 0.234
Pm                  | 1  | 131,009 | 18.87%       | 131,009 | 131,009 | 46.23   | 0.000
Mu                  | 1  | 84,471  | 12.16%       | 84,471  | 84,471  | 29.81   | 0.000
Two-way interaction | 4  | 250,265 | 36.04%       | 250,265 | 62,566  | 22.08   | 0.000
Popsize × Pc        | 1  | 3960    | 0.57%        | 3960    | 3960    | 1.40    | 0.244
Popsize × Pm        | 1  | 92,875  | 13.37%       | 92,875  | 92,875  | 32.77   | 0.000
Popsize × Mu        | 1  | 71,356  | 10.28%       | 71,356  | 71,356  | 25.18   | 0.000
Pm × Mu             | 1  | 82,074  | 11.82%       | 82,074  | 82,074  | 28.96   | 0.000
Error               | 43 | 121,852 | 17.55%       | 121,852 | 2834    |         |
Pure error          | 9  | 2039    | 0.29%        | 2039    | 227     |         |
Total               | 51 | 694,448 | 100.00%      |         |         |         |

R-sq: 82.45%; R-sq(adj): 79.19%; R-sq(pred): 69.06%.
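Tables 5 and 6 are Minitab-style reduced ANOVA tables over the 52 CCD runs (total DF = 51). A comparable fit can be reproduced in Python with statsmodels; the data below are synthetic stand-ins for the real runs, but the model terms mirror those retained above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Stand-in data: in practice this would be the 52 CCD runs with coded
# parameter levels and the measured RPD / CPU responses.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(-1, 1, size=(52, 5)),
                  columns=["Popsize", "Pc", "Pm", "Mu", "Beta"])
df["RPD"] = (0.02 - 0.01 * df["Popsize"] + 0.01 * df["Popsize"] ** 2
             + rng.normal(0, 0.002, 52))
df["CPU"] = 300 + 80 * df["Popsize"] + 90 * df["Pm"] + rng.normal(0, 50, 52)

# Reduced models mirroring the terms retained in Tables 5 and 6.
rpd_model = smf.ols("RPD ~ Popsize + Pm + Mu + Beta + I(Popsize**2)"
                    " + Popsize:Pm + Popsize:Beta + Pm:Mu", data=df).fit()
cpu_model = smf.ols("CPU ~ Popsize + Pc + Pm + Mu + Popsize:Pc"
                    " + Popsize:Pm + Popsize:Mu + Pm:Mu", data=df).fit()

print(sm.stats.anova_lm(rpd_model, typ=2))  # per-term SS, F and p values
print(cpu_model.rsquared_adj)               # compare with R-sq(adj) above
```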
Table 7. The optimal combination values of the GA parameters.

GA Parameter  | Popsize | Pc   | Pm   | Mu    | Beta
Best settings | 200     | 0.90 | 0.14 | 0.001 | 1
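The settings in Table 7 trade off two responses at once. A common device for this kind of multi-response tuning is the Derringer-Suich desirability function, sketched below for two smaller-is-better responses; the target and upper bounds are placeholders an analyst would choose, not values from the paper.

```python
def d_min(y, target, upper):
    """Derringer-Suich desirability for a smaller-is-better response:
    1 at or below the target, 0 at or above the upper bound, linear between."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

def overall(rpd, cpu, rpd_bounds=(0.0, 0.05), cpu_bounds=(0.0, 600.0)):
    """Composite desirability to maximize over the GA parameter space;
    the bounds are illustrative placeholders."""
    return (d_min(rpd, *rpd_bounds) * d_min(cpu, *cpu_bounds)) ** 0.5
```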
Table 8. Computational results of the RPD.

p | r | q | t | s | n=10 | n=20 | n=30 | n=40 | n=50 | n=100 | n=200 | n=300 | n=400 | n=500
1 | 1 | 1 | 1 | 1 | 0.1  | 0    | 0.05 | 0.03 | 0.01 | 0.002 | 0.003 | 0.0008 | 0.003 | 0.003
1 | 1 | 1 | 1 | 2 | 0.29 | 0.04 | 0.03 | 0    | 0    | 0.016 | 0     | 0.001 | 0.001 | 0.003
1 | 1 | 1 | 2 | 1 | 0.72 | 0.01 | 0    | 0    | 0.02 | 0.001 | 0.005 | 0.001 | 0.003 | 0.001
1 | 1 | 1 | 2 | 2 | 0.11 | 0.01 | 0    | 0    | 0.03 | 0.002 | 0.003 | 0.001 | 0.004 | 0.002
1 | 1 | 2 | 1 | 1 | 0.23 | 0    | 0.05 | 0.01 | 0.02 | 0.032 | 0.021 | 0.008 | 0.008 | 0.008
1 | 1 | 2 | 1 | 2 | 1.11 | 0    | 0.01 | 0.01 | 0.05 | 0.012 | 0.014 | 0.008 | 0.013 | 0.008
1 | 1 | 2 | 2 | 1 | 0.91 | 0.1  | 0.02 | 0.03 | 0.01 | 0.018 | 0.011 | 0.013 | 0.009 | 0.004
1 | 1 | 2 | 2 | 2 | 1    | 0.37 | 0    | 0.02 | 0.04 | 0.018 | 0.025 | 0.011 | 0.008 | 0.007
1 | 2 | 1 | 1 | 1 | 2.51 | 3.51 | 7    | 7.46 | 3.37 | 0.592 | 16.34 | 11.48 | 5.827 | 2.464
1 | 2 | 1 | 1 | 2 | 4.45 | 7.08 | 4.64 | 2.73 | 3.26 | 5.592 | 5.531 | 4.789 | 16.41 | 13.79
1 | 2 | 1 | 2 | 1 | 0    | 1.66 | 0.39 | 0.3  | 1.89 | 0.905 | 1.692 | 1.2   | 1.519 | 1.717
1 | 2 | 1 | 2 | 2 | 4.95 | 0.19 | 0.28 | 0.89 | 0.87 | 1.536 | 1.916 | 2.104 | 1.989 | 1.754
1 | 2 | 2 | 1 | 1 | 2.03 | 2.08 | 0.47 | 3.3  | 7.76 | 13.02 | 14.28 | 8.618 | 7.224 | 5.802
1 | 2 | 2 | 1 | 2 | 3.9  | 2.94 | 3.93 | 3.79 | 7.59 | 12.28 | 5.342 | 8.06  | 7.597 | 10.45
1 | 2 | 2 | 2 | 1 | 0    | 0.15 | 0.11 | 0.16 | 0.27 | 0.541 | 1.234 | 1.184 | 2.002 | 1.592
1 | 2 | 2 | 2 | 2 | 0.26 | 0.29 | 0.25 | 0.79 | 0.11 | 1.059 | 1.448 | 1.503 | 1.626 | 1.583
2 | 1 | 1 | 1 | 1 | 0.54 | 0    | 0    | 0.03 | 0.01 | 0.013 | 0.004 | 0.001 | 0.008 | 0.002
2 | 1 | 1 | 1 | 2 | 0.4  | 0.03 | 0.02 | 0.01 | 0.02 | 0.005 | 0.002 | 0.005 | 0.002 | 0.004
2 | 1 | 1 | 2 | 1 | 0.63 | 0    | 0.05 | 0.01 | 0.02 | 0.011 | 0.015 | 0.002 | 0.004 | 0.003
2 | 1 | 1 | 2 | 2 | 0    | 0    | 0.01 | 0.02 | 0.02 | 0.007 | 0.007 | 0.005 | 0.003 | 0.002
2 | 1 | 2 | 1 | 1 | 1.71 | 0.02 | 0.02 | 0.02 | 0.05 | 0.031 | 0.017 | 0.011 | 0.013 | 0.01
2 | 1 | 2 | 1 | 2 | 0.82 | 0.18 | 0.04 | 0.01 | 0.03 | 0.022 | 0.017 | 0.012 | 0.007 | 0.01
2 | 1 | 2 | 2 | 1 | 0.86 | 0    | 0.04 | 0.03 | 0.02 | 0.031 | 0.022 | 0.018 | 0.017 | 0.007
2 | 1 | 2 | 2 | 2 | 0.75 | 0.05 | 0    | 0.04 | 0.02 | 0.023 | 0.022 | 0.019 | 0.008 | 0.008
2 | 2 | 1 | 1 | 1 | 2.07 | 0.9  | 2.67 | 1.93 | 0.59 | 2.327 | 3.921 | 3.449 | 3.781 | 6.719
2 | 2 | 1 | 1 | 2 | 2.25 | 5.29 | 3.6  | 4.07 | 3.47 | 17.65 | 15.41 | 10.71 | 18.5  | 14.65
2 | 2 | 1 | 2 | 1 | 0.35 | 0.59 | 2.45 | 0.64 | 0    | 0.102 | 0.451 | 0.442 | 0.695 | 0.537
2 | 2 | 1 | 2 | 2 | 2.98 | 0.89 | 0.87 | 0.54 | 0.62 | 0.376 | 0.963 | 0.555 | 0.828 | 0.537
2 | 2 | 2 | 1 | 1 | 1.3  | 0.02 | 3.03 | 0.43 | 0.22 | 2.253 | 2.666 | 3.132 | 6.514 | 3.623
2 | 2 | 2 | 1 | 2 | 0.47 | 5.27 | 6.16 | 7.39 | 9.57 | 1.011 | 9.588 | 9.407 | 13.65 | 10.79
2 | 2 | 2 | 2 | 1 | 1.57 | 0    | 0    | 0.07 | 0.14 | 0.082 | 0.515 | 1.029 | 0.462 | 0.741
2 | 2 | 2 | 2 | 2 | 0    | 0.02 | 0.24 | 0.63 | 0.06 | 0.394 | 0.517 | 0.453 | 0.333 | 0.614
Table 9. Computational results of the CPU time.

p | r | q | t | s | n=10 | n=20 | n=30 | n=40 | n=50 | n=100 | n=200 | n=300 | n=400 | n=500
1 | 1 | 1 | 1 | 1 | 2.1 | 0.6  | 3.6  | 3.5  | 2.3  | 2.9  | 6.2   | 5.1   | 15.0  | 20.2
1 | 1 | 1 | 1 | 2 | 3.8 | 2.5  | 2.3  | 0.5  | 1.8  | 6.1  | 2.8   | 5.7   | 10.5  | 23.6
1 | 1 | 1 | 2 | 1 | 7.2 | 0.7  | 0.5  | 0.6  | 2.8  | 2.8  | 7.6   | 6.1   | 15.1  | 10.2
1 | 1 | 1 | 2 | 2 | 2.5 | 0.8  | 0.6  | 1.0  | 4.6  | 2.5  | 6.0   | 7.1   | 18.6  | 18.9
1 | 1 | 2 | 1 | 1 | 3.4 | 0.3  | 3.2  | 2.9  | 3.9  | 7.2  | 16.6  | 15.6  | 28.5  | 37.9
1 | 1 | 2 | 1 | 2 | 6.1 | 0.5  | 1.8  | 2.7  | 6.2  | 7.6  | 17.7  | 17.6  | 31.1  | 34.6
1 | 1 | 2 | 2 | 1 | 7.7 | 3.7  | 2.3  | 3.2  | 2.3  | 5.8  | 14.1  | 22.4  | 26.7  | 23.3
1 | 1 | 2 | 2 | 2 | 6.0 | 4.3  | 0.9  | 2.3  | 5.6  | 6.9  | 24.7  | 21.1  | 23.6  | 31.6
1 | 2 | 1 | 1 | 1 | 3.9 | 8.7  | 13.8 | 15.7 | 18.3 | 47.2 | 204.6 | 368.1 | 746.3 | 962.6
1 | 2 | 1 | 1 | 2 | 5.2 | 12.9 | 8.5  | 12.6 | 18.4 | 71.7 | 274.5 | 440.5 | 561.6 | 1113.9
1 | 2 | 1 | 2 | 1 | 0.1 | 6.4  | 7.5  | 5.1  | 10.8 | 31.4 | 139.0 | 231.3 | 333.2 | 530.7
1 | 2 | 1 | 2 | 2 | 7.4 | 2.3  | 7.3  | 9.7  | 13.1 | 43.7 | 143.4 | 239.2 | 404.6 | 565.5
1 | 2 | 2 | 1 | 1 | 5.5 | 7.8  | 6.6  | 13.5 | 27.6 | 48.6 | 223.7 | 501.0 | 793.0 | 1057.9
1 | 2 | 2 | 1 | 2 | 7.4 | 8.3  | 12.6 | 13.7 | 24.9 | 75.9 | 243.1 | 370.2 | 794.5 | 1008.9
1 | 2 | 2 | 2 | 1 | 0.1 | 2.6  | 2.1  | 5.1  | 4.2  | 32.7 | 148.2 | 200.3 | 369.2 | 519.3
1 | 2 | 2 | 2 | 2 | 1.9 | 2.2  | 4.0  | 7.8  | 7.7  | 32.9 | 125.5 | 239.9 | 400.2 | 534.1
2 | 1 | 1 | 1 | 1 | 6.5 | 1.0  | 0.7  | 3.3  | 4.5  | 9.2  | 11.3  | 10.7  | 31.7  | 28.3
2 | 1 | 1 | 1 | 2 | 3.4 | 3.1  | 4.6  | 2.8  | 5.3  | 4.5  | 9.2   | 22.9  | 15.6  | 36.2
2 | 1 | 1 | 2 | 1 | 6.3 | 0.8  | 5.3  | 1.4  | 4.4  | 8.9  | 22.1  | 12.0  | 22.3  | 29.7
2 | 1 | 1 | 2 | 2 | 0.2 | 0.4  | 1.9  | 4.0  | 4.6  | 6.4  | 15.9  | 19.8  | 22.3  | 27.7
2 | 1 | 2 | 1 | 1 | 7.7 | 2.1  | 2.4  | 3.4  | 4.1  | 13.4 | 17.2  | 32.4  | 50.3  | 55.7
2 | 1 | 2 | 1 | 2 | 8.7 | 9.4  | 2.3  | 3.2  | 4.0  | 11.7 | 20.8  | 30.0  | 38.3  | 49.9
2 | 1 | 2 | 2 | 1 | 7.9 | 1.3  | 4.1  | 3.6  | 4.0  | 12.7 | 25.2  | 30.6  | 44.6  | 46.5
2 | 1 | 2 | 2 | 2 | 8.1 | 2.7  | 0.8  | 5.6  | 3.5  | 11.2 | 23.5  | 37.2  | 36.8  | 48.0
2 | 2 | 1 | 1 | 1 | 5.2 | 2.4  | 5.0  | 8.3  | 11.4 | 42.7 | 154.5 | 404.9 | 666.9 | 779.8
2 | 2 | 1 | 1 | 2 | 5.1 | 7.1  | 7.8  | 15.6 | 12.2 | 46.5 | 142.3 | 438.2 | 743.7 | 983.5
2 | 2 | 1 | 2 | 1 | 1.6 | 3.7  | 6.6  | 5.6  | 0.4  | 17.1 | 65.3  | 167.5 | 316.1 | 452.3
2 | 2 | 1 | 2 | 2 | 3.6 | 2.0  | 4.9  | 4.4  | 4.7  | 27.7 | 106.1 | 188.8 | 376.8 | 383.2
2 | 2 | 2 | 1 | 1 | 1.6 | 2.0  | 9.0  | 5.1  | 7.5  | 50.3 | 161.4 | 308.2 | 595.2 | 845.2
2 | 2 | 2 | 1 | 2 | 1.8 | 8.9  | 13.7 | 9.9  | 22.7 | 65.4 | 156.9 | 394.6 | 653.8 | 744.8
2 | 2 | 2 | 2 | 1 | 2.0 | 0.1  | 0.2  | 2.7  | 8.0  | 13.9 | 93.2  | 244.0 | 318.0 | 522.8
2 | 2 | 2 | 2 | 2 | 0.1 | 1.9  | 2.5  | 3.5  | 2.4  | 22.8 | 81.7  | 174.3 | 263.6 | 455.4
Table 10. ANOVA for the CPU time.

Source             | DF  | Seq SS   | Contribution | Adj SS   | Adj MS  | F-value | p-value
Model              | 35  | 17,129.5 | 98.17%       | 17,129.5 | 489.41  | 435.05  | 0.000
Linear             | 14  | 12,301.4 | 70.50%       | 12,301.4 | 878.67  | 781.07  | 0.000
p                  | 1   | 4.5      | 0.03%        | 4.5      | 4.54    | 4.03    | 0.046
s                  | 1   | 3.2      | 0.02%        | 3.2      | 3.18    | 2.82    | 0.094
t                  | 1   | 223.1    | 1.28%        | 223.1    | 223.09  | 198.31  | 0.000
q                  | 1   | 10.7     | 0.06%        | 10.7     | 10.68   | 9.49    | 0.002
r                  | 1   | 3603.2   | 20.65%       | 3603.2   | 3603.24 | 3202.97 | 0.000
n                  | 9   | 8456.7   | 48.47%       | 8456.7   | 939.63  | 835.25  | 0.000
2-Way Interactions | 21  | 4828.0   | 27.67%       | 4828.0   | 229.91  | 204.37  | 0.000
p × r              | 1   | 71.0     | 0.41%        | 71.0     | 71.03   | 63.14   | 0.000
t × r              | 1   | 214.8    | 1.23%        | 214.8    | 214.8   | 190.94  | 0.000
t × n              | 9   | 155.8    | 0.89%        | 155.8    | 17.31   | 15.39   | 0.000
q × r              | 1   | 21.7     | 0.12%        | 21.7     | 21.67   | 19.26   | 0.000
r × n              | 9   | 4364.7   | 25.01%       | 4364.7   | 484.97  | 431.10  | 0.000
Error              | 284 | 319.5    | 1.83%        | 319.5    | 1.12    |         |
Total              | 319 | 17,448.9 | 100.00%      |          |         |         |
Table 11. ANOVA for the RPD.

Source             | DF  | Seq SS  | Contribution | Adj SS  | Adj MS  | F-value | p-value
Model              | 29  | 273.925 | 83.35%       | 273.925 | 9.446   | 50.06   | 0.000
Linear             | 14  | 203.531 | 61.93%       | 203.531 | 14.538  | 77.05   | 0.000
p                  | 1   | 1.509   | 0.46%        | 1.509   | 1.509   | 8.00    | 0.005
s                  | 1   | 3.459   | 1.05%        | 3.459   | 3.459   | 18.33   | 0.000
t                  | 1   | 42.627  | 12.97%       | 42.627  | 42.627  | 225.91  | 0.000
q                  | 1   | 0.254   | 0.08%        | 0.254   | 0.254   | 1.35    | 0.247
r                  | 1   | 148.072 | 45.06%       | 148.072 | 148.072 | 784.73  | 0.000
n                  | 9   | 7.610   | 2.32%        | 7.610   | 0.846   | 4.48    | 0.000
2-Way Interactions | 15  | 70.395  | 21.42%       | 70.395  | 4.693   | 24.87   | 0.000
p × s              | 1   | 0.995   | 0.30%        | 0.995   | 0.995   | 5.27    | 0.022
p × r              | 1   | 1.932   | 0.59%        | 1.932   | 1.932   | 10.24   | 0.002
s × t              | 1   | 1.896   | 0.58%        | 1.896   | 1.896   | 10.05   | 0.002
s × r              | 1   | 3.768   | 1.15%        | 3.768   | 3.768   | 19.97   | 0.000
t × r              | 1   | 42.091  | 12.81%       | 42.091  | 42.091  | 223.07  | 0.000
q × r              | 1   | 1.870   | 0.57%        | 1.870   | 1.870   | 9.91    | 0.002
r × n              | 9   | 17.843  | 5.43%        | 17.843  | 1.983   | 10.51   | 0.000
Error              | 290 | 54.721  | 16.65%       | 54.721  | 0.189   |         |
Total              | 319 | 328.646 | 100.00%      |         |         |         |
