Article

Energy-Aware Online Non-Clairvoyant Scheduling Using Speed Scaling with Arbitrary Power Function

1 Department of Computer Science Engineering, Amity School of Engineering and Technology, Noida 226010, India
2 Department of Electrical and Computer Engineering, Hawassa University, Awasa P.O. Box 05, Ethiopia
3 Department of CSE, Jaypee Institute of Information Technology Noida, Noida 201309, India
4 Department of Electrical Power Engineering, Tishreen University, Lattakia 2230, Syria
5 Department of Management & Innovation Systems, University of Salerno, 84084 Salerno, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(7), 1467; https://doi.org/10.3390/app9071467
Submission received: 25 February 2019 / Revised: 2 April 2019 / Accepted: 4 April 2019 / Published: 8 April 2019
(This article belongs to the Special Issue Energy Management and Smart Grids)

Abstract: Efficient job scheduling reduces energy consumption and enhances the performance of machines in data centers and battery-based computing devices. Practically important online non-clairvoyant job scheduling has been studied less extensively than other settings. In this paper, an online non-clairvoyant scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed, where HSIF selects the active job with the highest scaled importance. The objective considered is to minimize the scaled importance-based flow time plus energy. The processor's speed is proportional to the total scaled importance of all active jobs. The performance of HSIF is evaluated by potential analysis against an optimal offline adversary and by simulating the execution of a set of jobs using the traditional power function. HSIF is 2-competitive under the arbitrary power function and dynamic speed scaling. The competitive ratio obtained by HSIF is the lowest to date among non-clairvoyant scheduling algorithms. The simulation analysis shows that the performance of HSIF is the best among the online non-clairvoyant job scheduling algorithms.

1. Introduction

Reducing energy consumption in data centers and battery-based computing devices has become increasingly important. Energy consumption is a prime concern in the design of modern microprocessors, especially for battery-based devices and data centers. Modern microprocessors [1,2] use dynamic speed scaling to save energy: the processors are designed so that they can vary their speed, and operating-system support adjusts the processor speed to conserve energy. As per the United States Environmental Protection Agency [3], data centers represent 1.5% of total US electricity consumption. Estimates of the total electricity use required by US data center workloads in 2020 vary by about 135 billion kWh. Data center workloads continue to grow exponentially; comparable increases in electricity demand have been avoided through the adoption of key energy efficiency measures [4]. Energy consumption can be reduced by scheduling jobs in an appropriate order. In the last few years, many job scheduling algorithms have been proposed with dual objectives [5,6]: first, to optimize some scheduling quality criterion (e.g., flow time or weighted flow time) and, second, to minimize energy consumption. Scheduling algorithms with dual objectives have two components [7]: job selection, which determines which of the active jobs to execute first on the processor, and speed scaling, which determines the speed of the processor at any time t.
The traditional power function (power P = s^α, where s is the speed of the processor and α > 1 is a constant [8,9]) is widely used for the analysis of scheduling algorithms. In this paper, the arbitrary power function [10] is considered. The arbitrary power function has certain advantages over the traditional power function [10]; the motivation for using it rather than the traditional power function is explained comprehensively by Bansal et al. [10]. Different types of job scheduling models are available in the literature. A job is a unit of work/task that an operating system performs. It corresponds to the applications executed on a computer (an email client, word processing, web browsing, printing, information transfer over the Internet, or a specific action accomplished by the computer). Any user/system activity on a computer is handled through some job. The size of a job is the set of operations and micro-operations required to complete some course of action on a computer. In offline job scheduling, the complete job sequence is known in advance, whereas jobs arrive arbitrarily in online job scheduling. To minimize the flow time, big jobs execute at high speed with respect to their actual importance and small jobs execute at low speed with respect to their actual importance. In non-clairvoyant job scheduling, there is no information regarding the size of a job at its arrival time, whereas in clairvoyant job scheduling, the size of any job is known at its arrival time. The practical importance of online non-clairvoyant job scheduling is higher than that of clairvoyant scheduling [11]. Most processes do not have natural deadlines associated with them, for example in Linux and Microsoft Windows [12]. The non-clairvoyant scheduling problem is faced by the operating system in a time-sharing environment [13], and there are several situations where the scheduler has to schedule jobs without knowing their sizes [14]. The Shortest Elapsed Time First (SETF) algorithm, a variant of which is used in the Windows NT and Unix operating system scheduling policies, is a non-clairvoyant algorithm for minimizing mean slowdown [14].
The theoretical study of speed scaling was initiated by Yao et al. [15]. Motwani et al. [13] introduced the analysis of non-clairvoyant scheduling algorithms. Initial studies [16,17,18,19,20,21] considered the objective of minimizing the flow time, i.e., only the quality-of-service criterion. Later on, new algorithms were proposed with the objective of minimizing the weighted/prioritized flow time [22,23,24], i.e., considering not only the quality of service but also the reduction in energy consumption by the machines. Albers and Fujiwara [25] studied the scheduling problem with the objective of minimizing the flow time plus energy under the dynamic speed scaling approach. Online non-clairvoyant job scheduling algorithms are studied less extensively than online clairvoyant job scheduling algorithms. Highest Density First (HDF) is optimal [10] in online clairvoyant settings for the objective of fractional weighted/importance-based flow time plus energy. HDF always runs the job of highest density, where the density of a job is its importance divided by its size [10]. In non-clairvoyant settings, the complete size of a job is only known at its completion; therefore, HDF cannot be used directly in non-clairvoyant settings. Azar et al. [11] proposed a non-clairvoyant algorithm NC for known job densities in the online non-clairvoyant setting on a uniprocessor, using the traditional power function. In NC, the density (i.e., importance/size) is known at arrival time. The speed scaling and job assignment policy used in the non-clairvoyant algorithm NC-PAR (Non-Clairvoyant on Parallel identical machines) is based on a clairvoyant algorithmic approach, which shows that NC-PAR is not a pure non-clairvoyant algorithm. WLAPS (Weighted Latest Arrival Processor Sharing) [26] gives high priority to some of the latest jobs, which increases the average response time. WLAPS does not schedule a fixed portion of the active jobs; rather, it selects jobs whose total importance equals a fixed portion of the total importance of all active jobs. It needs to update the importance of some jobs to avoid under-scheduling or over-scheduling. It does not consider the importance of jobs in an appropriate manner and suffers from a high average response time. The above-mentioned deficiencies motivated us to continue the study in this field for the objective of minimizing the importance-based flow time plus energy.
In this paper, an online non-clairvoyant scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed with the objective of minimizing the scaled importance-based flow time plus energy. In HSIF, the scaled importance of a job is considered rather than its complete importance. The scaled importance of a job increases if the job is new and has not yet had the chance to execute; consequently, starvation is avoided. If a job executes, its scaled importance decreases. In HSIF, the importance of any job is calculated as a scaled value of the fixed importance of that job; as this importance is time dependent, it can be termed dynamic importance or scaled importance. This balances the speed and energy consumption. The speed of the processor is a function of the total scaled importance of all active jobs. The competitive ratio of HSIF is analysed using the arbitrary power function and amortized potential function analysis.
The remainder of the paper is organized as follows. The next section describes some related previous scheduling algorithms and their results. Section 3 provides the notations used in this paper and the definitions necessary for the discussion. Section 4 presents the 2-competitive scheduling algorithm Highest Scaled Importance First (HSIF), including the algorithm itself and the comparison of HSIF with the optimal algorithm using amortized analysis (potential function). In Section 5, a set of jobs and the traditional power function are used to examine the performance of HSIF. Section 6 draws some concluding remarks and outlines the future scope of this study.

2. Related Work

In this section, a review of related work on online non-clairvoyant job scheduling algorithms using the traditional power function is presented. Im et al. [27] proposed a concept of migration of jobs and gave an online non-clairvoyant algorithm Selfish Migrate (SelMig). SelMig is O(α²)-competitive using the traditional power function with the objective of minimizing the total weighted flow time plus energy on unrelated machines. Azar et al. [11] presented an online non-clairvoyant uni-processor algorithm NC, wherein all jobs arrive with uniform density (i.e., weight/size = 1). NC is (2 + 1/(α − 1))-competitive using the traditional power function with the objective of minimizing the fractional flow time plus energy; NC uses the unbounded speed model. Most of the studies using the arbitrary power function have been conducted in clairvoyant settings. Bansal et al. [12] showed that an online clairvoyant algorithm ALG (the algorithm proposed by Bansal et al.) is γ-competitive with the objective of minimizing the fractional weighted/importance-based flow time plus energy. ALG uses Highest Density First (HDF) for job selection. The competitive ratio is γ = 2(α − 1)/(α − (α − 1)^(1 − 1/(α − 1))); more specifically, γ = 2 for 1 < α ≤ 2, γ ≤ 2(α − 1) for α > 2, and γ ≤ α − 1 for α ≥ 2 + e. For large α, γ ≈ 2α/ln α. Bansal et al. [10] introduced the concept of the arbitrary power function and proved that an online clairvoyant algorithm (OCA) is (2 + ϵ)-competitive with the objective of minimizing the fractional weighted flow time plus energy. The authors of [28] presented an expert and intelligent system that applies various energy policies to maximize the energy efficiency of data-center resources; they claimed that around 20% of energy consumption can be saved without exerting any noticeable impact on data-center performance. Duy et al. [29] described the design, implementation and evaluation of a green scheduling algorithm using a neural network predictor to predict future load demand based on historical demand for optimizing server power consumption in cloud computing. The algorithm turns off unused servers (and restarts them whenever required) to minimize the number of running servers, thus minimizing the energy consumption. The authors of [30] defined an architectural framework and principles for energy-efficient cloud computing. They presented energy-aware resource provisioning heuristics that improve the energy efficiency of the data center while delivering the negotiated quality of service. Sohrabi et al. [31] introduced a Bayesian Belief Network that learns over time which of the overloaded virtual machines is best removed from a host; the probabilistic choice is made among virtual machines that are grouped by their degree of central processing unit (CPU) usage. Juarez et al. [32] proposed a real-time dynamic scheduling system to execute task-based applications efficiently on distributed computing platforms in order to minimize the energy consumption. They presented a polynomial-time algorithm that combines a set of heuristic rules and a resource allocation technique in order to obtain good solutions on an affordable time scale. In OCA, the work and weights/importance are arbitrary; it uses HDF for job selection, and the power consumed is calculated on the basis of the speed of the processor, which is a function of the fractional weights of all active jobs.
Chan et al. [26] showed that an online non-clairvoyant job scheduling algorithm named Weighted Latest Arrival Processor Sharing (WLAPS) is 16(1 + 1/ϵ)²-competitive under the arbitrary power model with the objective of minimizing the weighted flow time plus energy, where ϵ > 1. The value of α is commonly believed to be 2 or 3 [26]. HDF is optimal [10] in online clairvoyant settings for the objective of fractional weighted/importance-based flow time plus energy. In clairvoyant job scheduling, the size of a job is known at arrival time, but the same is not true in non-clairvoyant scheduling; therefore, HDF cannot be applied in the non-clairvoyant setting. In this paper, a variant of the HDF strategy is considered, but in the online non-clairvoyant setting, for the objective of minimizing the scaled importance-based flow time plus energy. The authors propose a new strategy, Highest Scaled Importance First (HSIF), in which the scaled importance of a job is considered rather than its complete importance. The scaled importance of a job increases if the job is new and has not yet had the chance to execute; consequently, starvation is avoided. If a job executes, its scaled importance decreases. This balances the speed and energy consumption. The speed of the processor is a function of the total scaled importance of all active jobs. The 2-competitive HSIF is analysed using the amortized potential function against an offline adversary and the arbitrary power function. The results of HSIF and other related online non-clairvoyant job scheduling algorithms are provided in Table 1.

3. Definitions and Notations

The necessary definitions, the explanation of the terms used in this study, the concept of the arbitrary power function and the amortized potential function analysis are as follows:

3.1. Scheduling Basics

An online non-clairvoyant uni-processor job scheduling algorithm, HSIF, is proposed, where jobs arrive over time and there is no information about the sizes of jobs. The importance/weight/priority (generated by the system) imp(j) of any job j is known at the job's arrival, and its size is known only at the completion of the job. Jobs are sequential in nature and preemption is permitted with no penalty. The speed s of the processor is the rate at which work is completed. At any time t, a job j is active if its arrival time ar(j) ≤ t and its remaining work rem(j, t) > 0. At time t, the scaled importance of a job j is pr(j, t). The executed time ext(j, t) of a job j is the current time t minus the arrival time ar(j), i.e., ext(j, t) = t − ar(j). The scaled importance-based flow of a job is the integral, over the times between the job's release time and its completion time, of its scaled importance at each time. The ascending inverse density a(j) of a job j is its executed time divided by its importance, i.e., a(j) = ext(j, t)/imp(j). The ascending inverse density is recalculated discretely, either on the arrival of a new job or on the completion of any job. The response time of a job is the time interval between the starting time of execution and the arrival time of the job. The turnaround time is the time duration between the completion time and the arrival time of a job. The weight, importance and significance of a job are used as synonyms for the priority of a job.
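For concreteness, the short Python sketch below (not part of the paper) encodes these per-job quantities; the Job container and its field names are illustrative assumptions.

```python
# Minimal sketch of the quantities defined in Section 3.1 for a single job,
# assuming time is measured in abstract units. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float      # ar(j)
    importance: float   # imp(j), known at arrival
    remaining: float    # rem(j, t); the full size is only known once this hits 0

def executed_time(job: Job, t: float) -> float:
    """ext(j, t) = t - ar(j)."""
    return t - job.arrival

def ascending_inverse_density(job: Job, t: float) -> float:
    """a(j) = ext(j, t) / imp(j); recomputed on every arrival or completion."""
    return executed_time(job, t) / job.importance

def is_active(job: Job, t: float) -> bool:
    """A job is active at time t if it has arrived and has remaining work."""
    return job.arrival <= t and job.remaining > 0
```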

3.2. Power Function

The power function P(s) specifies the power used when the processor executes at speed s. Any reasonable power function which satisfies the following conditions is permitted [33]:
  • Acceptable speeds are a countable collection of disjoint subintervals of [0, ∞)
  • All the intervals, excluding possibly the rightmost, are closed on both ends
  • The rightmost interval may be open on the right if the power P(s) approaches infinity as the speed s approaches the rightmost endpoint of that interval
  • P(s) is non-negative, and continuous and differentiable at all but countably many points
  • Either there is a maximum allowable speed T, or the limit inferior of P(s)/s as s approaches infinity is not zero
Without loss of generality, it can be assumed that [24]:
  • P(0) = 0
  • P is strictly convex and increasing
  • P is unbounded, continuous and differentiable
Let Q be P⁻¹, i.e., Q(x) gives the speed at which the processor can run if a power limit of x is specified.
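As a concrete instance of an admissible power function, the sketch below uses the traditional P(s) = s^α (an assumption made only for illustration; the analysis relies solely on the properties listed above) together with its inverse Q.

```python
# Illustrative sketch: the traditional power function P(s) = s**alpha is one
# instance satisfying the assumptions above (P(0) = 0, strictly convex and
# increasing, unbounded). Q = P^{-1} maps a power budget back to a speed.
ALPHA = 3.0  # assumed exponent; the paper uses 2 <= alpha <= 3 in Section 5

def P(speed: float, alpha: float = ALPHA) -> float:
    """Power drawn when the processor runs at the given speed."""
    return speed ** alpha

def Q(power: float, alpha: float = ALPHA) -> float:
    """Inverse of P: the speed the processor can sustain under a power budget."""
    return power ** (1.0 / alpha)

assert abs(Q(P(2.0)) - 2.0) < 1e-9  # Q(P(s)) == s for this instance
```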

3.3. Amortized Local Competitive Analysis

The objective considered (G) is the scaled importance-based flow time plus energy. Let G_A(t) and G_o(t) be the increase in the objective in the schedule of an algorithm A and of the offline adversary Opt, respectively, at time t; Opt optimizes G. At any time t, for algorithm A, G_A(t) = P_A^t(s_A^t) + pr_A^t, where s_A^t, P_A^t(s_A^t) and pr_A^t are the speed of the processor, the power at speed s_A^t and the total scaled importance of all active jobs, respectively. To prove that A is c-competitive, a potential function Φ(t) is required that satisfies the following conditions:
  • Boundary condition: initially, before any job is released, and at the end, after all jobs are completed, Φ = 0.
  • Job arrival and completion condition: there is no increase in Φ when any job arrives or completes.
  • Running condition: at any other time, when no job arrives or completes, G_A(t) plus the rate of change of Φ is no more than c times G_o(t), i.e., G_A(t) + dΦ(t)/dt ≤ c · G_o(t).
Lemma 1. 
(Young’s Inequality [34]) Let f be any real-valued, continuous and strictly increasing function such that f ( 0 ) = 0 . Then   m ,   n 0
0 m f ( x )   d x +       0 n f 1 ( y )   d y   m · n
where, f 1 is the inverse function of f .
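The following minimal check (not part of the paper) illustrates Lemma 1 numerically for one admissible choice of f; the function f(x) = x² and the sample points are assumptions made purely for illustration.

```python
# Quick numerical illustration of Lemma 1 (not a proof): f(x) = x**2 is
# continuous, strictly increasing and has f(0) = 0, so f^{-1}(y) = sqrt(y).
# Both integrals then have closed forms and Equation (1) can be checked directly.
def young_lhs(m: float, n: float) -> float:
    """Integral of x**2 over [0, m] plus integral of sqrt(y) over [0, n]."""
    return m ** 3 / 3.0 + (2.0 / 3.0) * n ** 1.5

for m, n in [(1.0, 1.0), (2.0, 0.5), (0.3, 4.0)]:
    assert young_lhs(m, n) >= m * n - 1e-9  # Equation (1), with tiny float slack
```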

4. A 2-Competitive Scheduling Algorithm: Highest Scaled Importance First (HSIF)

4.1. Scaled Importance-Based Flow Plus Energy

An online non-clairvoyant uni-processor scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed. In HSIF, all jobs arrive arbitrarily along with their importance and without information about their sizes; the sizes of jobs are known only on their completion. The possible speeds of the processor are a countable collection of disjoint subintervals of [0, ∞). The behaviour of HSIF is analysed using amortized potential analysis. HSIF is 2-competitive for the objective of minimizing the scaled importance-based flow time plus energy.

4.1.1. Algorithm HSIF

The algorithm HSIF always selects an active job with the highest scaled importance at any time, where the scaled importance pr(j_i, t) of a job j_i is computed as follows:
$$pr(j_i, t) = \begin{cases} \dfrac{imp(j_i, t)}{\tfrac{1}{2} + \log(ext(j_i, t))}, & \text{if the job is executing} \\[6pt] imp(j_i, t)\,\bigl(1 + \log(ext(j_i, t))\bigr), & \text{if the job is not executing} \end{cases}$$
The executed time of a job j_i is ext(j_i, t) = t − ar(j_i). At any time t, the processor executes at speed s_h^t = Q(pr_h^t), where Q = P⁻¹ and pr_h^t is the total scaled importance of all active jobs under HSIF. As the algorithm HSIF is non-clairvoyant, the executed time is taken as the job's current size. The intention here is that the instantaneous importance/priority must depend on the job's (system-generated) importance and its size. If the job is not executing (the job is waiting), its scaled importance increases; once the job starts executing, its scaled importance decreases as its execution progresses.
Algorithm Highest Scaled Importance First (HSIF)
Input: n_a, the number of active jobs {j_1, …, j_i, …, j_{n_a}}. At time t, the importance of all n_a active jobs {imp(j_1, t), …, imp(j_i, t), …, imp(j_{n_a}, t)} and the executed time of all active jobs {ext(j_1, t), …, ext(j_i, t), …, ext(j_{n_a}, t)}.
Output: The speed of the processor and the execution sequence of the jobs.
  1. On arrival of a job j_i
  2.   if the CPU is idle, allocate the job to the CPU
  3.     pr(j_i, t) = imp(j_i, t) / (1/2 + log(ext(j_i, t)))
  4.     speed of CPU: s_h^t = Q(pr_h^t)
  5.   else if the CPU is executing some job j_k
  6.     pr(j_i, t) = imp(j_i, t) · (1 + log(ext(j_i, t)))
  7.     pr_h^t = pr_h^t + pr(j_i, t)
  8.     speed of CPU: s_h^t = Q(pr_h^t)
  9. On completion of a job j_i
  10.   if n_a ≠ 0
  11.     pr_h^t = pr_h^t − pr(j_i, t)
  12.     select the job j_k with the maximum pr(·, t)
  13.     imp(j_k, t) = pr(j_k, t)
  14.     pr(j_k, t) = imp(j_k, t) / (1/2 + log(ext(j_k, t)))
  15.   else speed of CPU: s_h^t = 0
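A hedged Python sketch of this procedure is given below; it is not the authors' simulator. It assumes the traditional power function P(s) = s^α (so Q(x) = x^(1/α)), a fixed time step DT, the natural logarithm, and a clamp of the executed time to at least one time unit so that both branches of the scaled-importance formula stay positive for freshly arrived jobs; none of these choices are specified in the paper.

```python
# Compact discrete-time sketch of HSIF under the traditional power function.
# ALPHA, DT, the log base and the ext >= 1 clamp are assumptions for runnability.
import math

ALPHA = 3.0   # assumed exponent of the traditional power function
DT = 0.01     # simulation time step (assumption)

def scaled_importance(imp, ext, executing):
    """pr(j, t) from Section 4.1.1: shrinks while a job runs, grows while it waits."""
    ext = max(ext, 1.0)  # assumed clamp so log(ext) >= 0
    if executing:
        return imp / (0.5 + math.log(ext))
    return imp * (1.0 + math.log(ext))

def hsif(jobs):
    """jobs: list of dicts with 'arrival', 'importance', 'size'. Returns completion times."""
    t, done, running = 0.0, {}, None
    remaining = {i: j["size"] for i, j in enumerate(jobs)}
    while len(done) < len(jobs):
        active = [i for i, j in enumerate(jobs)
                  if j["arrival"] <= t and remaining[i] > 0]
        if active:
            pr = {i: scaled_importance(jobs[i]["importance"],
                                       t - jobs[i]["arrival"],
                                       i == running) for i in active}
            running = max(active, key=pr.get)          # highest scaled importance first
            speed = sum(pr.values()) ** (1.0 / ALPHA)  # s_h = Q(pr_h), Q(x) = x**(1/ALPHA)
            remaining[running] -= speed * DT
            if remaining[running] <= 0:
                done[running] = t + DT
        t += DT
    return done

# Two-job toy instance (hypothetical data): importance is known at arrival, size is not.
print(hsif([{"arrival": 0, "importance": 5, "size": 4},
            {"arrival": 1, "importance": 9, "size": 2}]))
```

At every step the currently running job is scored with the "executing" branch and the waiting jobs with the "not executing" branch, mirroring steps 3 and 6 above; with a smaller DT the schedule approaches the continuous-time behaviour assumed in the analysis.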
Theorem 1. 
The online non-clairvoyant uni-processor scheduling algorithm Highest Scaled Importance First (HSIF) selects the job with the highest scaled importance and consumes power equal to the total scaled importance of all active jobs under dynamic speed scaling. HSIF is 2-competitive for the objective of minimizing the scaled importance-based flow time plus energy with arbitrary-work and arbitrary-importance jobs.
In the rest of this section, Theorem 1 is proven. For the amortized local competitive analysis of HSIF, a potential function is provided in the next subsection.

4.1.2. Potential Function Φ ( t )

Let Opt be the optimal offline adversary that minimizes the scaled importance-based flow time plus energy. At any time t, let pr_o^t and pr_h^t be the total scaled importance of all active jobs under Opt and HSIF, respectively. At any time t, let pr_o^t(a) and pr_h^t(a) be the total scaled importance of all active jobs with ascending inverse density at least a under Opt and HSIF, respectively. Let pr^t(a) = (pr_h^t(a) − pr_o^t(a))⁺, where (·)⁺ = max{0, ·}. A potential function can be defined as follows:
$$\Phi(t) = 2\int_{a=0}^{\infty} \int_{x=0}^{pr^t(a)} P'(Q(x))\,dx\,da$$
Since P′(x) and Q(x) are increasing, P′(Q(x)) is an increasing function of x. Therefore,
$$\frac{d\Phi(t)}{dt} = 2\int_{0}^{\infty} P'(Q(pr^t(a)))\,\frac{d(pr^t(a))}{dt}\,da$$
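To make the potential concrete, the sketch below evaluates the inner integral of Φ(t) for the traditional power function P(s) = s^α, an assumed instance (the proof itself holds for any admissible P), and checks a simple quadrature against the closed form.

```python
# Numerical sketch of the inner integral of Phi(t), assuming P(s) = s**alpha.
# For this choice Q(x) = x**(1/alpha), P'(s) = alpha * s**(alpha - 1), hence
# P'(Q(x)) = alpha * x**((alpha - 1)/alpha) and the integral over [0, w] equals
# (alpha**2 / (2*alpha - 1)) * w**((2*alpha - 1)/alpha).
def inner_integral_numeric(w: float, alpha: float = 3.0, steps: int = 100_000) -> float:
    """Midpoint-rule quadrature of P'(Q(x)) for x in [0, w]."""
    dx = w / steps
    return sum(alpha * ((i + 0.5) * dx) ** ((alpha - 1.0) / alpha) * dx
               for i in range(steps))

def inner_integral_closed(w: float, alpha: float = 3.0) -> float:
    return (alpha ** 2 / (2.0 * alpha - 1.0)) * w ** ((2.0 * alpha - 1.0) / alpha)

w = 2.5  # stands in for pr_t(a), the clipped scaled-importance difference
assert abs(inner_integral_numeric(w) - inner_integral_closed(w)) < 1e-3
```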
To establish the effectiveness of the algorithm, it is required to verify the boundary condition, the job arrival and completion condition, and the running condition.
For the boundary condition, one can observe that before the arrival of any job and after the completion of all jobs, pr^t(a) = 0 for all a; therefore, Φ(t) = 0. On the arrival of any job, the value of pr^t(a) remains the same for all a, so Φ(t) remains the same. The scaled importance of a job decreases continuously while the job is executed by HSIF or Opt; hence, Φ(t) does not increase on the completion of a job. At any other time t, when no job arrives or completes, it is required to prove that the following inequality holds:
$$pr_h^t + P(s_h^t) + \frac{d\Phi(t)}{dt} \ \le\ c \cdot \bigl(pr_o^t + P(s_o^t)\bigr), \quad \text{where } c = 2$$
Since t denotes the current time only, the superscript t is omitted from the parameters in the rest of the analysis. Let a_o and a_h be the minimum ascending inverse densities of an active job under Opt and HSIF, respectively. Let a_h (or a_o) be ∞ if HSIF (or Opt) has no active job. HSIF executes jobs on the basis of the highest scaled importance first at speed s_h. Therefore, pr_h(a) decreases at the rate s_h/a_h for all a ∈ [0, a_h], and pr_h(a) remains the same for a > a_h. Similarly, pr_o(a) changes at the rate s_o/a_o for all a ∈ [0, a_o], and pr_o(a) remains the same for a > a_o. The rest of the analysis is based on three cases: pr_o > pr_h, pr_o = pr_h and pr_o < pr_h.
Case 1: If pr_o > pr_h, then one can observe the following:
(a) ∀a ∈ [0, a_o], pr_o(a) = pr_o > pr_h ≥ pr_h(a);
hence ∀a ∈ [0, a_o], pr(a) = (pr_h(a) − pr_o(a))⁺ = 0, and pr(a) remains the same. Therefore, for a ≤ a_o the rate of change of pr(a) is zero, i.e., d pr(a)/dt = 0.
(b) If a > a_o, pr_o(a) remains the same; therefore, the rate of change of pr(a) is at most zero, i.e., d pr(a)/dt ≤ 0. Considering both sub-cases, it is observed that
$$\frac{d\Phi}{dt} = 2\int_{0}^{\infty} P'(Q(pr(a)))\,\frac{d(pr(a))}{dt}\,da \ \le\ 0$$
$$pr_h + P(s_h) + \frac{d\Phi}{dt} \ \le\ 2\,pr_h \ <\ 2\,(pr_o + P(s_o))$$
Hence, the running condition is satisfied for pr_o > pr_h.
Case 2: If pr_o = pr_h, then one can observe that ∀a ∈ [0, a_o] there is a decrement in pr_o(a) at the rate s_o/a_o, due to which the maximum possible rate of increase of Φ is:
$$\frac{d\Phi}{dt} \ \le\ 2\int_{0}^{a_o} P'(Q(pr(a)))\,\frac{s_o}{a_o}\,da \qquad (2)$$
∀a ∈ [0, a_o], pr(a) = (pr_h(a) − pr_o(a))⁺ ≤ (pr_h − pr_o)⁺ = 0, so
$$P'(Q(pr(a))) = P'(Q(0)) \qquad (3)$$
Substituting f(x) = P′(x), m = s_o and n = P′(Q(0)) in Equation (1) provides:
$$\int_{0}^{s_o} f(x)\,dx + \int_{0}^{P'(Q(0))} f^{-1}(x)\,dx \ \ge\ s_o \cdot P'(Q(0)) \qquad (4)$$
Using Equations (3) and (4) in (2) provides
$$\frac{d\Phi}{dt} \ \le\ 2\int_{0}^{a_o} P'(Q(pr(a)))\,\frac{s_o}{a_o}\,da \ =\ 2\,s_o \cdot P'(Q(0)) \ \le\ 2\left(\int_{0}^{s_o} f(x)\,dx + \int_{0}^{P'(Q(0))} f^{-1}(x)\,dx\right) \ =\ 2\,P(s_o)$$
$$pr_h + P(s_h) + \frac{d\Phi}{dt} \ \le\ 2\,pr_h + 2\,P(s_o) \ =\ 2\,(pr_o + P(s_o))$$
Hence, the running condition is satisfied for pr_o = pr_h.
Case 3: If pr_o < pr_h, then one can observe that a decrement in pr_h(a) creates a decrement in Φ and a decrement in pr_o(a) creates an increment in Φ.
∀a ∈ [0, a_h], there is a decrement in pr_h(a) at the rate s_h/a_h, due to which the resulting contribution to the rate of change of Φ is:
$$-2\int_{0}^{a_h} P'(Q(pr(a)))\,\frac{s_h}{a_h}\,da$$
∀a ∈ [0, a_h], pr(a) = (pr_h(a) − pr_o(a))⁺ ≥ (pr_h − pr_o); thus this contribution is at most
$$-2\int_{0}^{a_h} P'(Q(pr_h - pr_o))\,\frac{s_h}{a_h}\,da \ =\ -2\,P'(Q(pr_h - pr_o)) \cdot s_h \qquad (6)$$
∀a ∈ [0, a_o], there is a decrement in pr_o(a) at the rate s_o/a_o, due to which the resulting contribution to the rate of change of Φ is at most:
$$2\int_{0}^{a_o} P'(Q(pr(a)))\,\frac{s_o}{a_o}\,da$$
∀a ∈ [0, a_o], pr(a) = (pr_h(a) − pr_o(a))⁺ ≤ (pr_h − pr_o); thus this contribution is at most
$$2\int_{0}^{a_o} P'(Q(pr_h - pr_o))\,\frac{s_o}{a_o}\,da \ =\ 2\,P'(Q(pr_h - pr_o)) \cdot s_o \qquad (8)$$
Adding Equations (6) and (8),
$$\frac{d\Phi}{dt} \ \le\ 2\,P'(Q(pr_h - pr_o)) \cdot s_o \ -\ 2\,P'(Q(pr_h - pr_o)) \cdot s_h \qquad (9)$$
Let s_h, s_o ≥ 0 be real numbers. Since P is strictly increasing and convex, P′(0) ≥ 0 and P′(x) is strictly increasing. Substituting f(x) = P′(x), m = s_o and n = P′(Q(pr_h − pr_o)) in Equation (1) provides
$$s_o \cdot P'(Q(pr_h - pr_o)) \ \le\ \int_{0}^{s_o} f(x)\,dx + \int_{0}^{P'(Q(pr_h - pr_o))} f^{-1}(x)\,dx$$
$$=\ P(s_o) + \Bigl[x\,f^{-1}(x)\Bigr]_{f(0)}^{f(Q(pr_h - pr_o))} - \int_{f(0)}^{f(Q(pr_h - pr_o))} x\,d\bigl(f^{-1}(x)\bigr)$$
$$=\ P(s_o) + P'(Q(pr_h - pr_o)) \cdot Q(pr_h - pr_o) - \int_{0}^{Q(pr_h - pr_o)} f(y)\,dy$$
$$=\ P(s_o) + P'(Q(pr_h - pr_o)) \cdot Q(pr_h - pr_o) - (pr_h - pr_o) \qquad (11)$$
Substituting Equation (11) into (9) provides
$$\frac{d\Phi}{dt} \ \le\ 2\,P(s_o) + 2\,P'(Q(pr_h - pr_o)) \cdot Q(pr_h - pr_o) - 2\,(pr_h - pr_o) - 2\,P'(Q(pr_h - pr_o)) \cdot s_h \ =\ 2\,\bigl(P(s_o) - (pr_h - pr_o)\bigr) + 2\,P'(Q(pr_h - pr_o)) \cdot \bigl(Q(pr_h - pr_o) - s_h\bigr) \qquad (12)$$
Since s_h = Q(pr_h) ≥ Q(pr_h − pr_o), it follows that Q(pr_h − pr_o) − s_h ≤ 0; using this in Equation (12) provides
$$\frac{d\Phi}{dt} \ \le\ 2\,\bigl(P(s_o) - (pr_h - pr_o)\bigr)$$
$$pr_h + P(s_h) + \frac{d\Phi}{dt} \ \le\ 2\,pr_h + 2\,P(s_o) - 2\,(pr_h - pr_o) \ =\ 2\,\bigl(P(s_o) + pr_o\bigr)$$
Hence, the running condition is satisfied for pr_o < pr_h. This completes the proof of Theorem 1.

5. Illustrative Example

To examine the performance of HSIF, a set of seven jobs and the traditional power model are considered, where Power = speed^α and 2 ≤ α ≤ 3. The jobs arrive along with their importance, but the size of a job is only known at its completion. The jobs are executed using the algorithms HSIF and NC (the best known to date [11]), and their executions are simulated. To demonstrate the effectiveness of the proposed scheduling, a simulator developed on the Linux kernel is used. The simulator isolates the scheduling algorithm and deliberately excludes the effects of other activity present in a real kernel implementation. The jobs are considered independent, and the proposed algorithm targets identical homogeneous machines. To evaluate the performance of the algorithm, the (average) turnaround time and (average) response time are considered. A lower average response time reflects a prompt response to job requests, which helps in avoiding starvation. A lower turnaround time indicates that the algorithm is capable of fulfilling the resource requirements of all jobs in minimum time, which is a measure of better resource utilization. The hardware specifications are listed in Table 2.
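The sketch below (illustrative, not the authors' simulator) shows how the reported quantities can be computed from an execution trace under the traditional power model; the per-job trace format of (speed, duration) slices is an assumption.

```python
# Sketch of the evaluation quantities used in this section, assuming the
# traditional power model Power = speed**ALPHA with 2 <= ALPHA <= 3. The trace
# format is assumed for illustration; the Linux-kernel-based simulator used by
# the authors is not published.
ALPHA = 3.0

def energy(slices):
    """Energy of one job: sum over its execution slices of speed**ALPHA * duration."""
    return sum(speed ** ALPHA * duration for speed, duration in slices)

def turnaround_time(arrival, completion):
    """Duration between a job's completion time and its arrival time."""
    return completion - arrival

def response_time(arrival, first_start):
    """Duration between a job's first execution and its arrival time."""
    return first_start - arrival

# Hypothetical job: arrives at t = 1, first scheduled at t = 2, completes at t = 6,
# and runs in two slices at different speeds.
print(turnaround_time(1, 6), response_time(1, 2), energy([(1.5, 2.0), (2.0, 2.0)]))
```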
The details of the jobs and the computed results are shown in Table 3 and Figure 1, Figure 2, Figure 3 and Figure 4.
As per the results stated in Table 3, the response time (and turnaround time) of most of the jobs, as well as the average response time (and average turnaround time) of all jobs, is lower when executed by HSIF than by NC. This shows that the performance of HSIF is better than that of NC with respect to the scheduling criteria. Table 4 reflects that HSIF consumes less energy, and both the importance-based flow time and the importance-based flow time plus energy objectives are better for HSIF.
Observing the graphs of Figure 1a,b, it is clear that HSIF adjusts the sum of the importance of active jobs frequently (count of maxima), but the change in the values is small (difference between consecutive maxima). This shows that HSIF maintains consistency in performance. The speed of the processor depends on the sum of importance; therefore, the speed of the processor shows the same pattern. This frequent but small change in speed makes HSIF consistent in performance. In NC, the frequency of change in the sum of the importance of active jobs is lower, but the magnitude of each change is very high. The speed of the processor using NC depends on the sum of the executed sizes of active jobs; therefore, the speed of the processor varies greatly. This high variation makes NC less consistent in performance.
In Figure 1c, the number of large speed changes (local minima) is six when the processor executes jobs using NC, which is due to the completion and start of execution of the jobs. There is a big change in the speed of the processor when the executing job changes, and a new job's arrival (accumulation of importance-based flow) has no effect on the execution speed of the currently executing job. In the speed growth graph of the processor using HSIF, more than six maxima and minima are present; this shows that the speed of the processor increases on the arrival of a new job, i.e., it increases with the accumulation of scaled importance-based flow time. This eliminates the possibility of starvation and improves the performance, showing that HSIF is capable of adjusting the speed to maintain and improve performance.
Figure 2a shows that, initially, the total energy consumed by the processor using HSIF is higher than with NC, but at a later stage the total energy consumed by the processor using NC increases beyond it. The total flow time of all active jobs when executed using NC is more than with HSIF; consequently, the energy consumed by the processor when using NC is more than with HSIF. The energy consumed by most of the individual jobs when they are executed by HSIF is more than with NC, as shown in Figure 2b.
The importance-based flow time and importance-based flow time plus energy values of the individual jobs are shown in Figure 3 and Figure 4, respectively. Most of the individual jobs have lower values of importance-based flow time and importance-based flow time plus energy when they are executed using HSIF than using NC. HSIF and NC both compete to reduce the total importance-based flow time and importance-based flow time plus energy, as shown in Figure 3a and Figure 4a, respectively; at the later stage, the totals for HSIF are lower than for NC. The total importance-based flow time and importance-based flow time plus energy for the processor using HSIF are lower than for NC, and the value of the considered objective is lower most of the time when using HSIF. From the above observations, it is concluded that the performance of HSIF is better and more consistent than that of the best-known algorithm NC.
To extend the analysis of the performance of HSIF, a second set of ten jobs and the traditional power model are considered. The jobs arrive along with their importance, but the size of a job is only known at its completion. This case is designed by assuming that the jobs arrive in increasing order of size. The jobs are executed using the algorithms HSIF and NC (the best known to date [11]), and their executions are simulated. The analysed data are presented in Table 5 and Table 6. Table 5 lists the arrival time and importance of each job; the size is computed and observed at the completion of the jobs. On the basis of the arrival time, the starting time of execution and the computed completion time, the quality metrics are computed. In this analysis, the quality metrics considered are turnaround time, response time, energy consumed and importance-based flow time. The computed results are given in Table 5 and Table 6, with the lower of the HSIF and NC values marked in bold. The job details, such as arrival time, completion time, importance and size, are the same for Table 5 and Table 6.
In Table 5, the turnaround time of nine jobs (out of ten) is lower using HSIF than NC. It is also clearly visible from the data in Table 5 that nine jobs have a lower response time using HSIF than NC, and the average turnaround time and average response time are lower using HSIF than NC. On the basis of these observations, one can conclude that HSIF performs better than the best-known NC in this special case as well, where the jobs arrive in increasing order of size. Table 6 lists the values of three objectives: energy consumed, importance-based flow time and importance-based flow time plus energy. In terms of energy consumed, six out of ten jobs consume less energy using HSIF than NC, although the total energy consumed by all ten jobs is higher using HSIF than NC. The importance is one of the main factors driving the schedule of execution of the jobs. The importance-based flow times of eight jobs (out of ten) are lower using HSIF than NC. It is also clearly visible from the data in Table 6 that the total importance-based flow time of all ten jobs is lower using HSIF than NC; this lower value reflects the better performance of HSIF. The third metric, importance-based flow time plus energy (the main objective of the proposed algorithm), is lower for eight jobs (out of ten) using HSIF than NC, and the average importance-based flow time plus energy of all ten jobs is also lower using HSIF than NC. It can be concluded from the observations mentioned above that the objective is better fulfilled by HSIF than by NC.
To extend the analysis and broaden the performance evaluation, a set of fifty arbitrary jobs with arbitrary arrival times is considered. The size of a job is computed at its completion time only. Five sets of objective values are computed: turnaround time, completion time, response time, importance-based flow time, and importance-based flow time plus energy. The simulation results are stated in Table 7 and Table 8.
A statistical analysis is conducted on the simulation data provided in Table 7 and Table 8. The Independent Samples t-Test is used to compare the means of the two independent groups in order to determine whether there is statistical evidence that the associated objective means are significantly different.
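A hedged sketch of this testing procedure is shown below using SciPy; the package choice and the placeholder samples are assumptions, since the paper does not state which statistics tool was used.

```python
# Re-creation of the described procedure: Levene's test decides which t-test
# row applies, then an independent samples t-test compares HSIF and NC. The
# lists would each hold one objective (e.g., turnaround time) for the 50 jobs;
# the values below are placeholders, not the paper's measurements.
from scipy import stats

def compare_groups(hsif, nc, alpha=0.05):
    _, p_levene = stats.levene(hsif, nc)      # Levene's Test for Equality of Variances
    equal_var = p_levene > alpha              # p > 0.05 -> "Equal variances assumed" row
    t_stat, p_value = stats.ttest_ind(hsif, nc, equal_var=equal_var)  # Welch's test otherwise
    return equal_var, t_stat, p_value

equal_var, t_stat, p_value = compare_groups([13.4, 12.1, 15.0, 11.8],
                                            [20.6, 19.2, 22.5, 18.9])
print(equal_var, round(t_stat, 3), round(p_value, 3))
```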
The first part, Group Statistics (Table 9), provides basic information about the group comparisons, including the sample size (n), mean, standard deviation and standard error for each objective by group. The second part, the Independent Samples Test, displays the results most relevant to the Independent Samples t-Test and provides two pieces of information: the t-test for Equality of Means and Levene's Test for Equality of Variances. If the p-value of Levene's test is less than or equal to 0.05, the lower row of the output (the row labeled "Equal variances not assumed") should be used; if the p-value is greater than 0.05, the upper row (the row labeled "Equal variances assumed") should be used. Based on the results provided in Table 9 and Table 10, the following conclusive remarks are made:
  • For Turnaround Time, the p-value is less than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is rejected and the lower row of the output (the row labeled "Equal variances not assumed") is considered. The t-test revealed a statistically reliable difference between the mean values of Turnaround Time of HSIF (M = 13.42, s = 12.511366261) and NC (M = 20.6, s = 19.786616792), with t(82.782) = 2.17, p = 0.033.
  • The total Turnaround Time for HSIF is 359 time units less than the total Turnaround Time for NC. The average Turnaround Time for HSIF is 7.18 time units less than the average Turnaround Time for NC.
  • For Response Time, the p-value is less than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is rejected and the lower row of the output (the row labeled "Equal variances not assumed") is considered. The t-test revealed a statistically reliable difference between the mean values of Response Time of HSIF (M = 11.72, s = 12.748813) and NC (M = 18.32, s = 19.976966), with t(83.23) = 2.17, p = 0.05.
  • The total Response Time for HSIF is 330 time units less than the total Response Time for NC. The average Response Time for HSIF is 6.6 time units less than the average Response Time for NC.
  • For Completion Time, the p-value is greater than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is retained and the upper row of the output (the row labeled "Equal variances assumed") is considered. The t-test failed to reveal a statistically reliable difference between the mean values of Completion Time of HSIF (M = 67.4, s = 30.651431) and NC (M = 74.82, s = 34.902014), with t(98) = 1.13, p = 0.261.
  • For Energy Consumed, the p-value is greater than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is retained and the upper row of the output (the row labeled "Equal variances assumed") is considered. The t-test failed to reveal a statistically reliable difference between the mean values of Energy Consumed of HSIF (M = 196.7, s = 160.31869) and NC (M = 274.778, s = 291.01057), with t(98) = 1.66, p = 0.1.
  • Although the statistical test failed to identify a difference between HSIF and NC on the basis of energy consumed, the total Energy Consumed for HSIF is 3909.937747 units less than the total Energy Consumed for NC. The average Energy Consumed for HSIF is 78.07875 units less than the average Energy Consumed for NC.
  • For Importance-based Flow Time, the p-value is less than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is rejected and the lower row of the output (the row labeled "Equal variances not assumed") is considered. The t-test revealed a statistically reliable difference between the mean values of Importance-based Flow Time of HSIF (M = 2479.15, s = 3625.2051) and NC (M = 15373.3, s = 21122.893), with t(63.08) = 1.95, p = 0.05.
  • The total Importance-based Flow Time for HSIF is 139381.7662 units less than the total Importance-based Flow Time for NC. The average Importance-based Flow Time for HSIF is 2787.635324 units less than the average Importance-based Flow Time for NC.
  • For Importance-based Flow Time plus Energy, the p-value is less than 0.05 in Levene's Test for Equality of Variances; therefore, the null hypothesis (that the variability of the two groups is equal) is rejected and the lower row of the output (the row labeled "Equal variances not assumed") is considered. The t-test revealed a statistically reliable difference between the mean values of Importance-based Flow Time plus Energy of HSIF (M = 2675.83, s = 3774.8105) and NC (M = 5541.57, s = 9740.346), with t(63.39) = 1.94, p = 0.05.
  • The total Importance-based Flow Time plus Energy for HSIF is 143286.703 units less than the total Importance-based Flow Time plus Energy for NC. The average Importance-based Flow Time plus Energy for HSIF is 2865.73406 units less than the average Importance-based Flow Time plus Energy for NC.
It is clearly evident from the above statistical analysis and the deduced results that HSIF performs better than the best available scheduling algorithm, NC.
To further strengthen the evaluation of HSIF in comparison with NC, the normalized Z-values of the Energy Consumed by each individual job (ECiJ) and the importance-based flow time of each individual job (IbFTiJ) are computed and provided in Table 11 and Table 12. For each job, the Z-values of ECiJ and IbFTiJ are summed and the sums are converted into the range [0, 1], as shown in Table 11 and Table 12. For all jobs, the total and the average of the normalized ECiJ+IbFTiJ values are provided (in Table 11 and Table 12) to reflect the difference between the two algorithms. Both the normalized total and the normalized average of ECiJ+IbFTiJ are lower for HSIF than for NC. This reflects that the normalized value of the dual objective (i.e., the sum of energy consumed and importance-based flow time) for HSIF is lower, and hence better, than for NC. It is concluded from the above analysis that HSIF performs better than NC.
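The sketch below (illustrative only) reproduces the described normalization pipeline with NumPy; the sample arrays are placeholders rather than the values in Tables 11 and 12.

```python
# Per-job z-scores of energy consumed (ECiJ) and importance-based flow time
# (IbFTiJ) are summed and then min-max scaled into [0, 1], as described above.
import numpy as np

def normalized_dual_objective(ecij, ibftij):
    ecij, ibftij = np.asarray(ecij, dtype=float), np.asarray(ibftij, dtype=float)
    z = ((ecij - ecij.mean()) / ecij.std(ddof=1)
         + (ibftij - ibftij.mean()) / ibftij.std(ddof=1))  # sum of per-metric z-scores
    return (z - z.min()) / (z.max() - z.min())             # rescale into [0, 1]

scores = normalized_dual_objective([22.4, 46.6, 19.3], [237.8, 100.8, 49.3])
print(scores.sum(), scores.mean())  # totals and averages are what Tables 11 and 12 compare
```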

6. Conclusions and Future Work

An online non-clairvoyant job scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed with the objective of minimizing the sum of the scaled importance-based flow time and the energy consumed. HSIF uses the arbitrary power function and a dynamic speed scaling policy for a uni-processor system. The behaviour of HSIF is analysed using amortized potential function analysis against an optimal offline adversary; the competitive ratio of HSIF is 2. This competitive ratio is lower than that of the non-clairvoyant scheduling algorithms LAPS, SelMig, NC, R³, EtRR, ALG and WLAPS, and is similar to that of the online clairvoyant algorithm OCA. Additionally, a set of jobs is considered as an illustrative example, and the execution of the jobs on a processor is simulated using HSIF and the best-known algorithm NC. The simulation results show that the performance of HSIF is consistent and better than that of the other online non-clairvoyant algorithms. On the basis of the amortized potential function analysis and the simulation results, it is concluded that HSIF performs better than the other online non-clairvoyant algorithms. The use of HSIF in data centres and in battery-based devices will reduce power consumption and improve computing capability. A further enhancement of this study will be to evaluate HSIF in a multi-processor environment, and experiments will be conducted in a real-time environment as well as with a larger number of test cases. Along with the amortized analysis and simulation, the results will be analysed using statistical tests. The behaviour of HSIF will also be evaluated in cloud/fog environments for resource allocation and energy optimization. One open problem is to reduce the competitive ratio achieved in this paper. In a further extension of this work, the number of jobs may be increased significantly to enhance the analysis of the algorithmic evaluation.

Author Contributions

All authors have worked on this manuscript together and all authors have read and approved the final manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, P. Prashast Release Round Robin: R3 an energy-aware non-clairvoyant scheduling on speed bounded processors. Karbala Int. J. Mod. Sci. 2015, 1, 225–236. [Google Scholar] [CrossRef]
  2. Singh, P.; Wolde-Gabriel, B. Executed-time Round Robin: EtRR an online non-clairvoyant scheduling on speed bounded processor with energy management. J. King Saud Univ. Comput. Inf. Sci. 2016, 29, 74–84. [Google Scholar] [CrossRef]
  3. U.S. Environmental Protection Agency. EPA Report on server and data centre energy efficiency. Available online: https://www.energystar.gov/index.cfm?c=prod_development.server_efficiency_study (accessed on 10 June 2015).
  4. Shehabi, A.; Smith, S.J.; Masanet, E.; Koomey, J. Data center growth in the United States: Decoupling the demand for services from electricity use. Environ. Res. Lett. 2018, 13, 1–12. [Google Scholar] [CrossRef]
  5. Fernández-Cerero, D.; Jakobik, A.; Grzonka, D.; Kołodziej, J.; Fernandez-Montes, A. Security supportive energy-aware scheduling and energy policies for cloud environments. J. Parallel Distrib. Comput. 2018, 119, 191–202. [Google Scholar] [CrossRef]
  6. Lei, H.; Wang, R.; Zhang, T.; Liu, Y.; Zha, Y. A multi-objective co-evolutionary algorithm for energy-efficient scheduling on a green data center. Comput. Oper. Res. 2016, 75, 103–117. [Google Scholar] [CrossRef]
  7. Chan, H.-L.; Edmonds, J.; Pruhs, K. Speed Scaling of Processes with Arbitrary Speedup Curves on a Multiprocessor. Theory Comput. Syst. 2011, 49, 817–833. [Google Scholar] [CrossRef] [Green Version]
  8. Bansal, N.; Kimbrel, T.; Pruhs, K. Dynamic speed scaling to manage energy and temperature. J. ACM 2007, 54, 1–39. [Google Scholar] [CrossRef]
  9. Pruhs, K.; Uthaisombut, P.; Woeginger, G. Getting the best response for your erg. ACM Trans. Algorithms 2008, 4, 1–17. [Google Scholar] [CrossRef] [Green Version]
  10. Bansal, N.; Chan, H.L.; Pruhs, K. Speed scaling with an arbitrary power function. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, New York, NY, USA, 4–6 January 2009; pp. 693–701. [Google Scholar]
  11. Azar, Y.; Devanur, N.R.; Huang, Z.; Panigrahi, D. Speed Scaling in the Non-clairvoyant Model. In Proceedings of the Annual ACM Symposium on Parallelism in Algorithms and Architectures, Portland, OR, USA, 13–15 June 2015; pp. 133–142. [Google Scholar]
  12. Bansal, N.; Pruhs, K.; Stein, C. Speed Scaling for Weighted Flow Time. SIAM J. Comput. 2009, 39, 1294–1308. [Google Scholar] [CrossRef]
  13. Motwani, R.; Phillips, S.; Torng, E. Nonclairvoyant scheduling. Theor. Comput. Sci. 1994, 30, 17–47. [Google Scholar] [CrossRef]
  14. Bansal, N.; Dhamdhere, K.; Konemann, J.; Sinha, A. Non-clairvoyant Scheduling for Minimizing Mean Slowdown. Algorithmica 2004, 40, 305–318. [Google Scholar] [CrossRef]
  15. Yao, F.; Demers, A.; Shenker, S. A scheduling model for reduced CPU energy. In Proceedings of the Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 23–25 October 1995; pp. 374–382. [Google Scholar]
  16. Leonardi, S.; Raz, D. Approximating total flow time on parallel machines. In Proceedings of the ACM Symposium on Theory of Computing, El Paso, TX, USA, 4–6 May 1997; pp. 110–119. [Google Scholar]
  17. Chekuri, C.; Khanna, S.; Zhu, A. Algorithms for minimizing weighted flow time. In Proceedings of the ACM Symposium on Theory of Computing, Crete, Greece, 6–8 July 2001; pp. 84–93. [Google Scholar]
  18. Awerbuch, B.; Azar, Y.; Leonardi, S.; Regev, O. Minimizing the Flow Time Without Migration. SIAM J. Comput. 2002, 31, 1370–1382. [Google Scholar] [CrossRef] [Green Version]
  19. Avrahami, N.; Azar, Y. Minimizing total flow time and total completion time with immediate dispatching. In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures, San Diego, CA, USA, 7–9 June 2003; p. 11. [Google Scholar]
  20. Chekuri, C.; Goel, A.; Khanna, S.; Kumar, A. Multiprocessor scheduling to minimize flow time with epsilon resource augmentation. In Proceedings of the ACM Symposium on Theory of Computing, Chicago, IL, USA, 13–15 June 2004; pp. 363–372. [Google Scholar]
  21. Pruhs, K.; Sgall, J.; Torng, E. Online Scheduling. In Handbook of Scheduling: Algorithms, Models, and Performance Analysis, 1st ed.; Leung, J.Y.-T., Ed.; CRC Press: Boca Raton, FL, USA, 2004; pp. 15–41. [Google Scholar]
  22. Becchetti, L.; Leonardi, S.; Marchetti-Spaccamela, A.; Pruhs, K. Online weighted flow time and deadline scheduling. J. Discret. Algorithms 2006, 4, 339–352. [Google Scholar] [CrossRef] [Green Version]
  23. Chadha, J.; Garg, N.; Kumar, A.; Muralidhara, V. A competitive algorithm for minimizing weighted flow time on unrelated processors with speed augmentation. In Proceedings of the Annual ACM Symposium on Theory of Computing, Bethesda, MD, USA, 31 May–2 June 2009; pp. 679–684. [Google Scholar]
  24. Anand, S.; Garg, N.; Kumar, A. Resource Augmentation for Weighted Flow-time explained by Dual Fitting. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, Kyoto, Japan, 17–19 January 2012; pp. 1228–1241. [Google Scholar]
  25. Albers, S.; Fujiwara, H. Energy-efficient algorithms for flow time minimization. ACM Trans. Algorithms 2007, 3, 49. [Google Scholar] [CrossRef] [Green Version]
  26. Chan, S.-H.; Lam, T.-W.; Lee, L.-K. Non-clairvoyant Speed Scaling for Weighted Flow Time. In Proceedings of the Annual European Symposium, Liverpool, UK, 6–8 September 2010; pp. 23–35. [Google Scholar]
  27. Im, S.; Kulkarni, J.; Munagala, K.; Pruhs, K. SelfishMigrate: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), Philadelphia, PA, USA, 18–21 October 2014; pp. 531–540. [Google Scholar]
  28. Fernández-Cerero, D.; Fernández-Montes, A.; Ortega, J.A. Energy policies for data-center monolithic schedulers. Expert Syst. Appl. 2018, 110, 170–181. [Google Scholar] [CrossRef]
  29. Duy, T.V.T.; Sato, Y.; Inoguchi, Y. Performance evaluation of a Green Scheduling Algorithm for energy savings in Cloud computing. In Proceedings of the IEEE international Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), Atlanta, GA, USA, 19–23 April 2010; pp. 1–8. [Google Scholar]
  30. Beloglazov, A.; Abawajy, J.; Buyya, R. Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Futur. Gener. Comput. Syst. 2012, 28, 755–768. [Google Scholar] [CrossRef] [Green Version]
  31. Sohrabi, S.; Tang, A.; Moser, I.; Aleti, A. Adaptive virtual machine migration mechanism for energy efficiency. In Proceedings of the 2016 IEEE/ACM 5th International Workshop on Green and Sustainable Software (GREENS), Austin, TX, USA, 16 May 2016; pp. 8–14. [Google Scholar]
  32. Juarez, F.; Ejarque, J.; Badia, R.M. Dynamic energy-aware scheduling for parallel task-based application in cloud computing. Futur. Gener. Comput. Syst. 2018, 78, 257–271. [Google Scholar] [CrossRef] [Green Version]
  33. Gupta, A.; Krishnaswamy, R.; Pruhs, K. Scalably Scheduling Power-Heterogeneous Processors. In Proceedings of the 37th International Colloquium Conference on Automata, Languages and Programming, Bordeaux, France, 6–10 July 2010; pp. 312–323. [Google Scholar]
  34. Sun, H.; He, Y.; Hsu, W.-J.; Fan, R. Energy-efficient multiprocessor scheduling for flow time and makespan. Theor. Comput. Sci. 2014, 550, 1–20. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The execution results of jobs and processor speed with respect to time.
Figure 2. Energy Consumption of processes and jobs.
Figure 3. Importance based flow time of jobs.
Figure 4. Importance based flow time with energy of jobs.
Table 1. Summary of previous results. SelMIg: Selfish Migrate; NC: Non-Clairvoyant; ALG: algorithm proposed by Bansal et al.; WLAPS: Weighted Latest Arrival Processor Sharing; OCA: online clairvoyant algorithm; HSIF: Highest Scaled Importance First.
Function Type Used | Algorithm | Competitiveness (general α) | α = 2 | α = 3 | Clairvoyant/Non-Clairvoyant
Traditional Power Function | SelMig [27] | α² | 4 | 9 | Non-clairvoyant
Traditional Power Function | NC [11] | 2 + 1/(α − 1) | 3 | 2.5 | Non-clairvoyant
Traditional Power Function | ALG [12] | 2α/ln α (for large α) | 2 | 2.52 | Clairvoyant
Arbitrary Power Function | WLAPS [26] | 16(1 + 1/ϵ)², where ϵ > 1 | >16 | >16 | Non-clairvoyant
Arbitrary Power Function | OCA [10] | 2 | 2 | 2 | Clairvoyant
Arbitrary Power Function | HSIF [this paper] | 2 | 2 | 2 | Non-clairvoyant
Table 2. Hardware specifications.
Simulation Parameters | Values
CPU | Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz
RAM | 4.00 GB RAM
Hard Drive | 1.0 TB
Operating System | Red Hat Linux 6.1
Kernel | Linux kernel version 2.2.12
Table 3. Job details and execution information using HSIF and NC.
Job | Arrival Time | Importance | Size | Completion Time (HSIF, NC) | Turnaround Time (HSIF, NC) | Response Time (HSIF, NC)
J11617.19796800
J25425.061620111535
J31027.55572647164511
J413542.6726401327414
J518763.7237571939923
J622113.51686546433636
J732356.2254812249634
Average values23.42928.14314.71417.571
Table 4. Three objectives values for jobs using HSIF and NC.
Job | Energy Consumed by Individual Job (HSIF, NC) | Importance Based Flow Time of Individual Job (HSIF, NC) | Importance Based Flow Time Plus Energy of Individual Job (HSIF, NC)
J159.7737762.24247253.36673429.27161313.1405490.5141
J254.20484107.6397333.410951348.8623387.61581456.502
J3209.484920.232065397.3745316.854585606.859337.0866
J482.46035218.886584.231935219.856666.69235438.742
J5208.9329388.03752226.151913632.1012435.08514020.14
J690.8249744.051482118.06121816.35442208.8861860.406
J782.03271324.3166906.7407714714.469988.773515038.79
Total787.71441165.40611819.33837477.76912607.0538642.17
Table 5. Details and execution information of jobs with increasing-order of size using HSIF and NC.
Job | Arrival Time | Importance | Size | Completion Time (HSIF, NC) | Turnaround Time (HSIF, NC) | Response Time (HSIF, NC)
J1135453400
J23667114823
J3758101831115
J491101525616210
J510210443234222916
J6158141941426118
J715417385023351927
J8187212360542233
J9209222871851441
J10289233382554144
Average values9.526.96.119.7
Table 6. Three objectives values for jobs arriving with increasing-order of size using HSIF and NC.
Job | Energy Consumed by Individual Job (HSIF) | Energy Consumed by Individual Job (NC) | Importance Based Flow Time of Individual Job (HSIF) | Importance Based Flow Time of Individual Job (NC) | Importance Based Flow Time Plus Energy of Individual Job (HSIF) | Importance Based Flow Time Plus Energy of Individual Job (NC)
J1 | 15.5376 | 11.41421 | 33.58979 | 47.65685 | 49.12738 | 60.07107
J2 | 30.39531 | 18.67619 | 86.8327 | 140.9953 | 117.228 | 159.6715
J3 | 20.89599 | 28.23206 | 50.98298 | 291.462 | 71.87897 | 319.6943
J4 | 7.883328 | 30.23206 | 30.62239 | 466.6225 | 38.50572 | 496.8546
J5 | 134.5865 | 30.23206 | 2437.376 | 648.0149 | 2571.962 | 678.2469
J6 | 40.10598 | 58.05148 | 114.9347 | 1445.479 | 155.0407 | 1503.531
J7 | 169.0947 | 61.05148 | 2119.993 | 2075.943 | 2289.088 | 2136.994
J8 | 48.41263 | 82.24247 | 166.9621 | 3353.273 | 215.3747 | 3435.516
J9 | 95.6372 | 104.5797 | 452.1316 | 5172.41 | 547.7688 | 5276.99
J10 | 52.16065 | 105.5797 | 171.5501 | 5541.149 | 223.7107 | 5646.729
Total | 614.7099 | 530.2914 | 5664.975 | 19183.01 | 6279.685 | 19714.3
Table 7. Details and execution information of jobs with random-order of size and importance using HSIF and NC.
Job | Arrival Time | Importance | Size | Density | Completion Time (HSIF) | Completion Time (NC) | Turnaround Time (HSIF) | Turnaround Time (NC) | Response Time (HSIF) | Response Time (NC)
J1155.66061.13212342300
J24911.22921.2476889793511
J3646.91921.72989153928
J47520.11734.02346171310673
J510109.67710.967711234224122
J61294.44180.49353331342130126
J7151929.51571.553457923208531
J817110.457910.45794022235194
J922215.29297.64645332511371
J1023617.66432.9440528295613
J112835.33681.778933335317362
J123529.66884.834443378260
J1342713.20211.8860143746332213120
J14431340.24113.095469249496611
J15451425.55831.82559295266721519
J1648840.8535.10662556548652
J17501112.12691.10244556610516551554
J18521554.833.6553333625910773
J19531626.86551.67909385876523421
J2055714.05542.00791437861236225
J2155912.16211.3513444679912441143
J22571710.37020.610011865127870766
J23571911.88380.625463264122765663
J24572013.23650.66182563119662560
J255767.59211.2653510410147444643
J26661211.54220.9618569110344243
J276679.59161.37022861029148254724
J28661137.08953.371772772736745
J29661412.13890.867064368114248147
J3066813.93071.74133758474188177
J3166840.54565.06828769213181
J32661112.11441.101309173106740639
J337055.36071.0721410610836383538
J347066.52981.088310510735373437
J357078.64471.23495711031033333320
J3670812.64661.580825938223122211
J37731713.26720.780423575116243142
J38741928.43461.4965579778531129
J397458.05651.611310880346327
J4076913.89681.54408898882126117
J417642.22680.556710913133553252
J4276811.34471.4180875100852492310
J43761129.65232.695663682786241
J44771528.08421.8722880793223
J45811614.78640.9241583112231130
J4681912.18141.3534889999218111710
J4781810.5311.3163751019920181919
J48871754.27473.192629492905320
J49931940.11272.111194798955220
J50932028.36241.4181295982513
Total6711030586916
Table 8. Three objectives values for jobs arriving with random-order of size using HSIF and NC.
Job | Energy Consumed by Individual Job, ECiJ (HSIF) | ECiJ (NC) | Importance Based Flow Time of Individual Job, IbFTiJ (HSIF) | IbFTiJ (NC) | Importance Based Flow Time Plus Energy of Individual Job, ECiJ + IbFTiJ (HSIF) | ECiJ + IbFTiJ (NC)
J122.35903576.77021225237.83514421.7825001559.1941828.55271241
J246.6127888336.02011586100.76936149.8960335147.3821185.9161493
J319.2522663740.574628549.274854228.88138568.52712269.4560135
J482.7884123664.15138995516.081323.9872543598.8694388.1386443
J532.71807139229.793620565.6702872770.89265798.388363000.686277
J620.23553431237.10209631.4710693251.628551.70663488.730596
J7202.7829624100.2795249983.04991428.75900591185.833529.0385308
J842.2736730360.70577844529.99832345.0057207572.272405.7114991
J933.7634238150.0710023215.89783545.5858599249.6613695.6568622
J1042.07948789114.1502519134.28054647.9364288176.36762.0866808
J1134.3397777614.82308387158.9926643.4028688193.332458.22595267
J1225.3386524759.42091636102.6656697.67231059128.0043157.093227
J13481.3604361161.0750958793.24761932.7090829274.6082093.784177
J1485.00788703276.5769064315.447641565.904674400.45551842.48158
J15152.3533413341.901121692.167744081.84265844.52114423.743771
J16168.7093341444.4016939839.371682652.1377031008.0813096.539397
J17336.5378155607.66760073232.099317099.834423568.63717707.50202
J18235.6998431431.60382711408.92372838.6990321644.6243270.302859
J19127.5172457376.2169431458.290664628.367088585.80795004.584031
J20326.680140558.091375454373.3849265.6356714700.065323.7270464
J21197.7173871396.67567221471.38218940.405251669.19337.080922
J22236.54084661136.4680471249.90238599.642921486.44339736.11097
J23228.33165021206.1446631083.784838904.034811312.11640110.17947
J24203.58563351213.137134857.1813137422.774581060.76738635.91172
J25645.8262181264.63267517000.0185968.47037517645.846233.10305
J2661.42174437526.4527132160.736811821.89117222.158512348.34388
J27551.6457635175.685114311254.2912292.81297111805.942468.498086
J28100.7695008196.6953921406.784591250.246015507.55411446.941407
J2950.85532029666.0633792106.3515416186.67205157.206916852.73543
J30280.593886864.870668752998.2111295.83601883278.805360.7066875
J31318.7016553216.31895963782.0731771.39526774100.775987.7142273
J32132.192008442.7510756627.454369143.243446759.64649585.994522
J33394.0326882190.536078038.77943725.906738432.8123916.4428
J34457.5086268222.544159087.21764238.67779544.7264461.22185
J35498.2490083231.61747869359.24873947.9942719857.4984179.61175
J36373.34873296.79041254998.1542634.27536255371.503731.065775
J3761.75288892720.4549719129.1411615634.62855190.89416355.08352
J3891.44826525212.2669018234.055561311.454543325.50381523.721444
J39362.052692135.805656900.7966146.44527262.849182.25085
J40197.717387163.772044441471.3821258.17635561669.1321.9484
J41284.713719210.19244825348.14215632.7592185632.8565842.951667
J42392.374465180.709043755464.8046447.79948135857.179528.508525
J43106.38654553.26049632437.28887136.4336571543.6754189.6941535
J4472.1959988858.38507639184.7807157.6041655256.9767215.9892419
J4558.12036605489.3002581121.544627737.146183179.6658226.446441
J46315.668122799.676744443372.9875602.12093333688.656701.7976778
J47317.1910961152.65818753732.17051533.163754049.3621685.821938
J48117.5735219291.1073658405.47938974.5465744523.05291265.65394
J49142.616673193.38620681503.28556247.963412645.9022341.3496188
J5065.43614279119.1602493131.34057454.3746597196.7767573.534909
Total9834.97868313738.91643123957.6903263339.4565133791.6699277078.3729
Table 9. Group statistics of objectives values for HSIF and NC.
Objective | Scheduling | N | Mean (M) | Std. Deviation (s) | Std. Error Mean
Turnaround_time | HSIF | 50 | 13.42 | 12.511366 | 1.7693744
Turnaround_time | NC | 50 | 20.6 | 19.786617 | 2.7982502
Response_time | HSIF | 50 | 11.72 | 12.748813 | 1.8029545
Response_time | NC | 50 | 18.32 | 19.976966 | 2.8251697
Completion_time | HSIF | 50 | 67.4 | 30.651431 | 4.3347669
Completion_time | NC | 50 | 74.82 | 34.902014 | 4.9358902
Energy_consumed | HSIF | 50 | 196.7 | 160.31869 | 22.672486
Energy_consumed | NC | 50 | 274.778 | 291.01057 | 41.155109
Importance_based_flow_time | HSIF | 50 | 2479.15 | 3625.2051 | 512.68143
Importance_based_flow_time | NC | 50 | 15373.3 | 21122.893 | 2987.2282
Importance_based_flow_time_plus_energy | HSIF | 50 | 2675.83 | 3774.8105 | 533.83883
Importance_based_flow_time_plus_energy | NC | 50 | 5541.57 | 9740.346 | 1377.4929
Table 10. Statistics of objectives values for HSIF and NC using the Independent Samples t-Test.
Objective | Assumption | t | df | p-Value (2-tailed) | Mean Difference | Std. Error Difference | 95% CI of the Difference (Lower) | 95% CI of the Difference (Upper) | Levene's F | Levene's p-Value
Turnaround_Time | Equal variances assumed | −2.17 | 98 | 0.033 | −7.18 | 3.31072346 | −13.7500229 | −0.60997705 | 14.19 | 0
Turnaround_Time | Equal variances not assumed | −2.17 | 82.78 | 0.033 | −7.18 | 3.31072346 | −13.765152 | −0.59484797 | |
Response_Time | Equal variances assumed | −1.97 | 98 | 0.05 | −6.6 | 3.35145171 | −13.2508468 | 0.050846846 | 13.27 | 0
Response_Time | Equal variances not assumed | −1.97 | 83.23 | 0.05 | −6.6 | 3.35145171 | −13.2656255 | 0.065625546 | |
Completion_Time | Equal variances assumed | −1.13 | 98 | 0.261 | −7.42 | 6.56911077 | −20.4561865 | 5.616186531 | 1.277 | 0.26
Completion_Time | Equal variances not assumed | −1.13 | 96.39 | 0.261 | −7.42 | 6.56911077 | −20.4589043 | 5.618904335 | |
Energy_Consumed | Equal variances assumed | −1.66 | 98 | 0.1 | −78.078755 | 46.9870691 | −171.323064 | 15.16555442 | 5.645 | 0.02
Energy_Consumed | Equal variances not assumed | −1.66 | 76.23 | 0.101 | −78.078755 | 46.9870691 | −171.656969 | 15.499459 | |
Importance_based_flow_time | Equal variances assumed | −1.95 | 98 | 0.05 | −2787.635324 | 1432.92269 | −5631.22377 | 55.9531179 | 29.168 | 0
Importance_based_flow_time | Equal variances not assumed | −1.95 | 63.08 | 0.05 | −2787.635324 | 1432.92269 | −5651.02882 | 75.75817343 | |
Importance_Based_Flow_time_plus_energy | Equal variances assumed | −1.94 | 98 | 0.05 | −2865.734061 | 1477.31876 | −5797.42506 | 65.95693429 | 8.953 | 0
Importance_Based_Flow_time_plus_energy | Equal variances not assumed | −1.94 | 63.39 | 0.05 | −2865.734061 | 1477.31876 | −5817.56103 | 86.09291177 | |
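Tables 9 and 10 report standard group statistics, Levene's test for equality of variances, and independent-samples t-tests with and without the equal-variance assumption. A minimal sketch of that analysis using SciPy is shown below; the two arrays are placeholders standing in for the 50 per-job values of one objective from Table 7.
```python
# Group statistics (Table 9) and the independent-samples analysis (Table 10).
import numpy as np
from scipy import stats

hsif = np.array([2.0, 3.0, 2.0, 34.0, 4.0])    # placeholder per-job values
nc   = np.array([3.0, 8.0, 11.0, 22.0, 26.0])  # placeholder per-job values

for name, x in (("HSIF", hsif), ("NC", nc)):
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    print(name, n, mean, sd, sd / np.sqrt(n))       # N, mean, std. deviation, std. error

print(stats.levene(hsif, nc, center="mean"))        # classical Levene's test (F, p-value)
print(stats.ttest_ind(hsif, nc, equal_var=True))    # equal variances assumed
print(stats.ttest_ind(hsif, nc, equal_var=False))   # Welch's test (equal variances not assumed)
```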
Table 11. Normalized objectives values for HSIF using z-score.
Job | HSIF_ECiJ | HSIF_IbFTiJ | ZHSIF_ECiJ | ZHSIF_IbFTiJ | Sum (ZHSIF_ECiJ + ZHSIF_IbFTiJ) | Normalized Sum (in range [0, 1])
J122.35937.835144−1.08746−0.67343−1.760890.147822
J246.6128100.76936−0.93618−0.65607−1.592250.18155
J319.252349.274854−1.10684−0.67027−1.777110.144578
J482.7884516.081−0.71053−0.54151−1.252040.249592
J532.718165.670287−1.02285−0.66575−1.68860.16228
J620.235531.471069−1.10071−0.67518−1.775890.144822
J7202.783983.049910.03795−0.41269−0.374740.425052
J842.2737529.99832−0.96324−0.53767−1.500910.199818
J933.7634215.89783−1.01633−0.62431−1.640640.171872
J1042.0795134.28054−0.96445−0.64682−1.611270.177746
J1134.3398158.99266−1.01273−0.64001−1.652740.169452
J1225.3387102.66566−1.06888−0.65555−1.724430.155114
J13481.36048793.24761.775591.741723.517311.203462
J1485.0079315.44764−0.69669−0.59685−1.293540.241292
J15152.3533692.16774−0.27661−0.49293−0.769540.346092
J16168.7093839.37168−0.17459−0.45233−0.626920.374616
J17336.53783232.09930.872250.20771.079950.71599
J18235.69981408.92370.24327−0.29522−0.051950.48961
J19127.5172458.29066−0.43153−0.55745−0.988980.302204
J20326.68014373.38490.810760.522521.333280.766656
J21197.71741471.38210.00635−0.27799−0.271640.445672
J22236.54081249.9020.24851−0.33908−0.090570.481886
J23228.33171083.78480.19731−0.38491−0.18760.46248
J24203.5856857.181310.04295−0.44742−0.404470.419106
J25645.826217000.0182.801464.005536.806991.861398
J2661.4217160.7368−0.84381−0.63953−1.483340.203332
J27551.645811254.2912.2142.420594.634591.426918
J28100.7695406.78459−0.59837−0.57166−1.170030.265994
J2950.8553106.35154−0.90971−0.65453−1.564240.187152
J30280.59392998.21110.52330.143180.666480.633296
J31318.70173782.07310.7610.359411.120410.724082
J32132.192627.45436−0.40237−0.51078−0.913150.31737
J33394.03278038.77941.230881.53362.764481.052896
J34457.50869087.21761.626821.822813.449631.189926
J35498.2499359.24871.880941.897853.778791.255758
J36373.34874998.15421.101860.694861.796720.859344
J3761.7529129.14116−0.84174−0.64824−1.489980.202004
J3891.4483234.05556−0.65651−0.6193−1.275810.244838
J39362.05276900.79661.03141.219692.251090.950218
J40197.71741471.38210.00635−0.27799−0.271640.445672
J41284.71375348.14210.548990.79141.340390.768078
J42392.37455464.80461.220540.823582.044120.908824
J43106.3865437.28887−0.56333−0.56324−1.126570.274686
J4472.196184.7807−0.7766−0.63289−1.409490.218102
J4558.1204121.54462−0.8644−0.65034−1.514740.197052
J46315.66813372.98750.742080.246560.988640.697728
J47317.19113732.17050.751580.345641.097220.719444
J48117.5735405.47938−0.49355−0.57202−1.065570.286886
J49142.6167503.28556−0.33735−0.54504−0.882390.323522
J5065.4361131.34057−0.81877−0.64764−1.466410.206718
Average196.699572479.1538052 × 10−78.88178 × 10−182 × 10−70.50000004
Total9834.9785123957.69031 × 10−501 × 10−525.000002
Table 12. Normalized objectives values for NC using z-score.
Job | NC_ECiJ | NC_IbFTiJ | ZNC_ECiJ | ZNC_IbFTiJ | Sum (ZNC_ECiJ + ZNC_IbFTiJ) | Normalized Sum (in range [0, 1])
J16.770221.7825−0.92096−0.55435−1.475310.204938
J236.0201149.896034−0.82045−0.54081−1.361260.227748
J340.5746228.881385−0.80479−0.53246−1.337250.23255
J464.1514323.987254−0.72378−0.52241−1.246190.250762
J5229.79362770.892657−0.15458−0.26379−0.418370.416326
J6237.10213251.6285−0.12947−0.21298−0.342450.43151
J7100.2795428.759006−0.59963−0.51133−1.110960.277808
J860.7058345.005721−0.73562−0.52019−1.255810.248838
J9150.071545.58586−0.42853−0.49899−0.927520.314496
J10114.1503647.936429−0.55197−0.48817−1.040140.291972
J1114.823143.402869−0.89328−0.55206−1.445340.210932
J1259.420997.672311−0.74003−0.54633−1.286360.242728
J13161.07511932.709082−0.39072−0.35238−0.74310.35138
J14276.57691565.9046740.00618−0.39115−0.384970.423006
J15341.90114081.842650.23065−0.125240.105410.521082
J16444.40172652.1377030.58288−0.276340.306540.561308
J17607.667617099.834421.143911.250642.394550.97891
J18431.60382838.6990320.5389−0.256630.282270.556454
J19376.21694628.3670880.34857−0.067480.281090.556218
J2058.0914265.635671−0.7446−0.52858−1.273180.245364
J21396.67578940.405250.418880.388270.807150.66143
J221136.46838599.642922.961033.522976.4841.7968
J231206.144738904.034813.200463.555156.755611.851122
J241213.137137422.774583.224483.398596.623071.824614
J25264.63275968.470375−0.034860.074160.03930.50786
J26526.452711821.891170.864830.692811.557640.811528
J27175.68512292.812971−0.34051−0.31432−0.654830.369034
J28196.69541250.246015−0.26832−0.42451−0.692830.361434
J29666.063416186.672051.344571.154132.49870.99974
J3064.8707295.836019−0.72131−0.52538−1.246690.250662
J31216.319771.395268−0.20088−0.47512−0.6760.3648
J32442.75119143.2434460.57720.409710.986910.697382
J33190.53613725.90673−0.28948−0.16286−0.452340.409532
J34222.54424238.6777−0.17949−0.10866−0.288150.44237
J35231.61753947.994271−0.14831−0.13938−0.287690.442462
J3696.7904634.275363−0.61162−0.48961−1.101230.279754
J37720.45515634.628551.531481.095782.627261.025452
J38212.26691311.454543−0.21481−0.41804−0.632850.37343
J3935.8057146.4452−0.82118−0.54117−1.362350.22753
J4063.772258.176356−0.72508−0.52936−1.254440.249112
J41210.19245632.759218−0.221940.03868−0.183260.463348
J4280.709447.799481−0.66688−0.50932−1.17620.26476
J4353.2605136.433657−0.7612−0.54223−1.303430.239314
J4458.3851157.604166−0.74359−0.53999−1.283580.243284
J45489.30037737.1461830.737160.261090.998250.69965
J4699.6767602.120933−0.6017−0.49301−1.094710.281058
J47152.65821533.16375−0.41964−0.39461−0.814250.33715
J48291.1074974.5465740.05611−0.45365−0.397540.420492
J4993.3862247.963412−0.62332−0.53044−1.153760.269248
J50119.1602454.37466−0.53475−0.50863−1.043380.291324
Average274.778335266.7891292 × 10−74 × 10−76 × 10−70.50000012
Total13738.9165263339.45651 × 10−52 × 10−53 × 10−525.000006
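Tables 11 and 12 standardize each objective with a z-score (using the sample standard deviations of Table 9), add the two z-scores, and rescale the sum to a [0, 1]-style score. The reported values are consistent with the rescaling 0.5 + z_sum/5, which is inferred here and should be treated as an assumption; a minimal sketch with illustrative function and variable names follows.
```python
# z-score normalization behind Tables 11 and 12.
import numpy as np

def normalized_scores(energy, flow):
    energy, flow = np.asarray(energy, float), np.asarray(flow, float)
    z_energy = (energy - energy.mean()) / energy.std(ddof=1)  # z-score of energy consumed
    z_flow = (flow - flow.mean()) / flow.std(ddof=1)          # z-score of importance-based flow time
    z_sum = z_energy + z_flow
    return z_sum, 0.5 + z_sum / 5.0   # inferred rescaling; an assumption, not stated in the paper

# Placeholder per-job values for one policy (the tables use the 50 jobs of Table 8).
z_sum, norm = normalized_scores([22.4, 46.6, 19.3, 82.8], [37.8, 100.8, 49.3, 516.1])
print(np.round(z_sum, 3), np.round(norm, 3))
```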
