Stochastic Reorder Point-Lot Size (r,Q) Inventory Model under Maximum Entropy Principle

by
Davide Castellano
Dipartimento di Ingegneria Civile e Industriale, Università di Pisa, Largo Lucio Lazzarino, Pisa 56122, Italy
Entropy 2016, 18(1), 16; https://doi.org/10.3390/e18010016
Submission received: 17 October 2015 / Revised: 24 November 2015 / Accepted: 23 December 2015 / Published: 30 December 2015
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)

Abstract
This paper considers the continuous-review reorder point-lot size (r,Q) inventory model under stochastic demand, with a backorders-lost sales mixture. To reflect the practical circumstance in which full information about the demand distribution is lacking, we assume that only an estimate of the mean and of the variance is available. In contrast with the typical approach, in which the lead-time demand is assumed Gaussian or is modeled according to the so-called minimax procedure, we take a different perspective: we adopt the maximum entropy principle to model the lead-time demand distribution. In particular, we consider the density that maximizes the entropy over all distributions with given mean and variance. With the aim of minimizing the expected total cost per time unit, we then propose an exact algorithm and a heuristic procedure. The heuristic method exploits an approximated expression of the total cost function obtained by means of an ad hoc first-order Taylor polynomial. We finally carry out numerical experiments with a twofold objective: on the one hand, we examine the efficiency of the approximated solution procedure; on the other hand, we investigate the performance of the maximum entropy principle in approximating the true lead-time demand distribution.

1. Introduction

When probabilities must be assigned to mutually exclusive events in a sample space and no prior knowledge about them is available, we should assume that all these events are equally probable. This is called the principle of insufficient reason, or principle of indifference. It was originally stated by J. Bernoulli in 1713 and later endorsed by Laplace in 1814 [1].
If some information about the probability distribution of the outcomes becomes available, we can adjust the assignment of probabilities accordingly. This is possible by means of the maximum entropy (MaxEnt) principle, a variational method of statistical inference originally proposed by Jaynes [2,3,4]. The principle works as follows: when a probability density (or mass) function must be determined subject to constraints, we should use the distribution that satisfies those constraints and has the largest entropy. It can be proved that the principle of indifference derives from the MaxEnt principle in the case of a finite sample space when no constraint is imposed [5].
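As a minimal illustration of how a moment constraint shapes the MaxEnt solution (this toy example and its function names are ours, not from the paper), consider a six-sided die with a prescribed mean: the entropy-maximizing distribution on {1, …, 6} has the exponential-family form p_i ∝ exp(λi), and λ can be found by bisection. When the constraint is not binding (mean 3.5), the solution collapses to the uniform distribution, recovering the principle of indifference.

```python
from math import exp

def maxent_die(target_mean, lo=-50.0, hi=50.0):
    """MaxEnt distribution on {1,...,6} with a given mean.
    The maximizer has the form p_i proportional to exp(lam * i);
    lam is found by bisection, since the mean is increasing in lam."""
    def mean(lam):
        w = [exp(lam * i) for i in range(1, 7)]
        s = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / s
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [exp(lam * i) for i in range(1, 7)]
    s = sum(w)
    return [wi / s for wi in w]
```

For target mean 3.5 the returned probabilities are all 1/6; for a mean of 4.5 the distribution tilts toward the larger faces while remaining as "spread out" as the constraint allows.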
The MaxEnt principle has been widely adopted in a variety of fields. Taking into account some of the most recent works, we can cite the following contexts: urban planning [6]; queueing systems [7]; structural dynamics [8]; insurance [9]; computer vision and multimedia [10]; computer algebra [11]; geology [12]; biomechanics [13]; signal processing [14]. The literature gives some reviews concerning applications in ecology, finance, physics, chemistry, and biology [15,16,17,18].
In recent times, the MaxEnt principle has been introduced in the operations management area to approach the problem of evaluating the demand distribution, or obtaining a demand forecast, when only partial information about the demand is accessible [19,20,21]. Information is partial when the exact distribution is not available and only a certain number of observations is given.
With particular regard to the inventory management context, we observe that, in practice, the decision-maker may actually know only an estimate of the mean and of the variance of the demand, but not the distribution type. In this circumstance, the traditional approach is to consider the demand within a given period as a normally distributed random variable [22]. This also follows from the assumption that individual demands are independent and identically distributed (i.i.d.) random variables; then, according to the central limit theorem, the Gaussianity of their sum can readily be deduced. However, this procedure is hardly valid in reality. In fact, single demands are generally not i.i.d. random variables [20]. In addition, one should consider that the normal distribution is not recommended for items characterized by demand with a large coefficient of variation [23]. A different approach to modeling the demand distribution is based on the so-called minimax distribution-free procedure [24,25,26]. However, this gives an upper bound on the true cost, which may result in a non-negligible error with respect to the optimal policy. To overcome the limitations of these two methods, researchers have recently introduced the MaxEnt principle into the task of optimizing an inventory system under partial information about the distribution of the quantity demanded [20]. We observe, however, that, to the author's knowledge, the number of applications of the MaxEnt principle in the inventory management field is so far quite limited.
In inventory management theory, the continuous-review reorder point-lot size (r,Q) model is a well-known policy. The system is continuously reviewed and, whenever the inventory position drops to or below r, an order of Q units is issued. The two control variables r and Q have different purposes. The replenishment quantity Q affects the trade-off between production or order frequency and inventory: a larger Q leads to fewer replenishments but higher average inventory levels, while a smaller Q yields a lower average inventory but more frequent replenishments. The reorder point r affects the stockout probability: a higher r carries a larger inventory to assure a smaller stockout probability, while a smaller r reduces inventory at the expense of a greater stockout probability.
As real inventory systems are typically subject to demand uncertainties, the (r,Q) policy under stochastic demand is therefore more practical. The literature proposes numerous works concerning stochastic (r,Q) models. In this regard, there exist studies involving Gaussian lead-time demand (e.g., [27,28,29,30]) or exploiting the minimax approach (e.g., [31,32,33]), as well as models that consider single- (e.g., [28,31,32]) or multi-echelon (e.g., [27,29,30]) systems. However, to the best of our knowledge, the MaxEnt principle has never been implemented into the (r,Q) policy.
We point out that backorders-lost sales mixtures should not be neglected in a stochastic inventory model. These mixtures are generally adopted to model the different purchasing behaviors of customers facing a stockout: some customers may wait until their demand is satisfied (such demands are backordered), while others may be impatient (such demands are lost). Numerous studies involve this aspect (see, e.g., [34,35,36]).
Owing to the above observations, we consider the stochastic continuous-review reorder point-lot size (r,Q) model with backorders-lost sales mixtures. We derive the total cost function taking into account the MaxEnt principle. More precisely, we adopt the MaxEnt principle to model the lead-time demand distribution, given certain mean and variance. The purpose is to determine the replenishment policy that minimizes the expected total cost per time unit.
We present an exact algorithm and a heuristic solution procedure. The heuristic algorithm exploits an approximated expression of the total cost function obtained by means of an ad hoc first-order Taylor polynomial (i.e., a first-order truncation of the Taylor series expansion). We finally carry out numerical experiments with a twofold objective. On the one hand, we examine the efficiency of the approximated solution procedure. On the other hand, we investigate the performance of the MaxEnt principle in approximating the true lead-time demand distribution. In this regard, the MaxEnt principle is first compared with the Gaussian approximation and the minimax procedure; then, a comparison with the approximation provided by the Weibull density is presented as well, in order to contrast the MaxEnt principle with the modeling flexibility of the Weibull family.
The remainder of the paper is organized as follows. Section 2 introduces the notation, assumptions, and optimization problem. In Section 3, we give the exact optimization procedure. In Section 4, we propose an approximated optimization approach. Section 5 presents numerical experiments. Finally, Section 6 deals with conclusions and further remarks.

2. Notation, Assumptions and Problem Definition

The main notation adopted is the following:
Decision variables:
  Q   Order quantity (quantity units).
  r   Reorder point (quantity units).
Parameters:
  h   Unit holding cost rate (monetary unit/quantity unit/time unit).
  A   Fixed ordering cost per order (monetary unit/order).
  π   Fixed penalty cost per unit short (monetary unit/quantity unit).
  π0  Marginal profit per unit (monetary unit/quantity unit).
  L   Replenishment lead time (time unit).
  μ̃   Average demand rate (quantity unit/time unit).
  σ̃   Standard deviation of the demand rate (quantity unit/time unit).
  β   Fraction of the shortage (i.e., of the demand during the stockout period) that will be lost.
Random variables:
  X   Lead-time demand, i.e., quantity demanded during the lead time.
Functions and operators:
  f̄(·)  Probability density function (p.d.f.) of the lead-time demand.
  x⁺   Maximum between 0 and x, i.e., x⁺ ≡ max{0, x}.
The main assumptions are the following:
  • Inventory is continuously reviewed. An order of size Q is issued when the on-hand inventory reaches the reorder point r.
  • The reorder point r is positive.
  • The distribution of X is unknown/unspecified; only its mean μ ≡ μ̃L and variance σ² ≡ σ̃²L can be evaluated.
  • The random variable X is continuous and nonnegative; that is, the lead-time demand can take any value in ℝ₀⁺.
  • Shortages are allowed and partially backordered with ratio 1 − β; the fraction β of the shortage is lost.
  • The time horizon is infinite.
Under our assumptions, the expected total cost per time unit is given by:

$$\bar{C}(Q,r) = \frac{A\tilde{\mu}}{Q} + h\left[\frac{Q}{2} + r - \tilde{\mu}L + \beta\bar{B}(r)\right] + \bar{\pi}\,\frac{\tilde{\mu}}{Q}\,\bar{B}(r) \qquad (1)$$

where $\bar{\pi} \equiv \pi + \pi_0\beta$, and

$$\bar{B}(r) = E[(X-r)^+] = \int_r^{+\infty} (x-r)\,\bar{f}(x)\,dx \qquad (2)$$

is the expected shortage at the end of the cycle. In Equation (2), E[·] represents the mathematical expectation. Cost function (1) consists of the ordering cost, the inventory holding cost, and the shortage cost [37]. Note that Equation (2) can be determined once the density f̄ has been specified.
Optimizing replenishments in the inventory system under consideration means solving the following problem:
$$(\mathrm{P1}) \quad \min_{(Q,r)} \bar{C}(Q,r), \quad \text{s.t. } Q \in \mathbb{R}^+,\ r \in \mathbb{R}^+,$$
where C̄(Q, r) is given by Equation (1). We remind the reader that we are in the case where the actual density f̄ of X is unknown/unspecified and only the mean μ and the variance σ² of X can be assessed. In this circumstance, problem (P1) cannot be solved directly, as the quantity B̄(r) cannot be calculated explicitly (since f̄ is not given). Therefore, the exact solution that optimizes replenishments in the considered system cannot be determined, and the problem can only be approached by adopting a suitable approximation.
In this regard, the procedure typically adopted in the literature (see, e.g., [27,28,29,30,31,32,33]) consists of assuming that the density of X is Gaussian or is derived according to the minimax method. We take here a different perspective, based on the MaxEnt principle. That is, in Equation (2) we replace f̄ with the density f maximizing the entropy over all densities defined on ℝ₀⁺ with mean μ and variance σ². This maximization problem can be approached according to the following proposition [5]:
Proposition 1. 
(Maximum entropy distribution) Consider the problem of maximizing the entropy $h(g) = -\int_S g(x)\log g(x)\,dx$ over all p.d.f.s g with support S satisfying the following constraints:
  • $g(x) \ge 0$, with equality outside the support S;
  • $\int_S g(x)\,dx = 1$;
  • $\int_S p_i(x)\,g(x)\,dx = \alpha_i$, for $i = 1, 2, \ldots, m$;
where, for $i = 1, 2, \ldots, m$, $p_i$ is a (measurable) function and $\alpha_i$ is a real number. Let $g^*(x) = \exp\{\lambda_0 + \sum_{i=1}^m \lambda_i\, p_i(x)\}$, $x \in S$, where $\lambda_0, \ldots, \lambda_m$ are chosen so that $g^*$ satisfies the previous constraints. Then $g^*$ uniquely maximizes h(g) over all probability density functions g satisfying the previous constraints.
Thanks to Proposition 1, the density f maximizing the entropy over all densities defined on ℝ₀⁺ with mean μ and variance σ² is given by

$$f(x) = \exp\{a x^2 + b x + c\}, \quad x \in \mathbb{R}_0^+, \qquad (3)$$

where the quantities a, b, and c can readily be obtained by solving the following system of equations:

$$\begin{cases} \displaystyle\int_0^{+\infty} \exp\{a x^2 + b x + c\}\,dx = 1, \\[4pt] \displaystyle\int_0^{+\infty} x \exp\{a x^2 + b x + c\}\,dx = \mu, \\[4pt] \displaystyle\int_0^{+\infty} x^2 \exp\{a x^2 + b x + c\}\,dx = \sigma^2 + \mu^2. \end{cases} \qquad (4)$$
In what follows, the density f will also be referred to as the maximum entropy (or MaxEnt) density.
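System (4) has no closed-form solution in (a, b, c), but each integral reduces to an error-function expression, so the system collapses to two nonlinear equations in two unknowns. The sketch below (our own illustration, not from the paper; the parametrization a = −α, b = 2αm and the Newton scheme with a finite-difference Jacobian are our choices) solves it for a given mean and variance:

```python
from math import erf, exp, log, pi, sqrt

def maxent_params(mu, sigma2, tol=1e-9, iters=100):
    """Solve system (4) for f(x) = exp(a*x^2 + b*x + c) on [0, +inf).
    With a = -alpha (alpha > 0) and m = b/(2*alpha), the truncated-Gaussian
    moments have closed forms in terms of erf, so only (m, alpha) must be
    found; c then follows from the normalization constraint."""
    def moments(m, alpha):
        J0 = 0.5 * sqrt(pi / alpha) * (1.0 + erf(m * sqrt(alpha)))
        R = exp(-alpha * m * m) / (2.0 * alpha * J0)
        mean = m + R
        var = 1.0 / (2.0 * alpha) - m * R - R * R
        return mean, var
    m, alpha = mu, 1.0 / (2.0 * sigma2)   # start from the untruncated Gaussian
    for _ in range(iters):
        mean, var = moments(m, alpha)
        f1, f2 = mean - mu, var - sigma2
        if abs(f1) < tol and abs(f2) < tol:
            break
        # 2x2 Newton step with a finite-difference Jacobian
        h1, h2 = 1e-7 * max(1.0, abs(m)), 1e-7 * alpha
        ma, va = moments(m + h1, alpha)
        mb, vb = moments(m, alpha + h2)
        J11, J12 = (ma - mean) / h1, (mb - mean) / h2
        J21, J22 = (va - var) / h1, (vb - var) / h2
        det = J11 * J22 - J12 * J21
        m -= (f1 * J22 - f2 * J12) / det
        alpha = max(alpha - (f2 * J11 - f1 * J21) / det, 1e-12)
    a, b = -alpha, 2.0 * alpha * m
    J0 = 0.5 * sqrt(pi / alpha) * (1.0 + erf(m * sqrt(alpha)))
    c = -b * b / (4.0 * alpha) - log(J0)  # normalization constraint
    return a, b, c
```

For instance, `maxent_params(10.0, 9.0)` returns coefficients whose density integrates to one with mean 10 and variance 9, which can be verified by direct quadrature.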
If we replace f̄ with f in Equation (2), the expected shortage at the end of the cycle becomes:

$$B(r) = \int_r^{+\infty} (x-r)\,f(x)\,dx$$

and substituting B̄(r) with B(r) in Equation (1) we get:

$$C(Q,r) = \frac{A\tilde{\mu}}{Q} + h\left[\frac{Q}{2} + r - \tilde{\mu}L + \beta B(r)\right] + \bar{\pi}\,\frac{\tilde{\mu}}{Q}\,B(r) \qquad (5)$$
Ultimately, instead of solving problem (P1) directly (that is not possible, as the quantity B ¯ ( r ) in Equation (1) cannot explicitly be determined) we turn to approach the following:
$$(\mathrm{P2}) \quad \min_{(Q,r)} C(Q,r), \quad \text{s.t. } Q \in \mathbb{R}^+,\ r \in \mathbb{R}^+,$$
where C ( Q , r ) is given by Equation (5). Evidently, solving problem (P2) is not equivalent to determining the optimal solution to problem (P1). That is, the solution ( Q * , r * ) to problem (P2) is not optimal to problem (P1). In fact, the maximum entropy density f is used to “approximate” the true density f ¯ of the lead-time demand, which is unknown by assumption. Therefore, ( Q * , r * ) may be considered a “heuristic” (i.e., not optimal) solution to problem (P1). If the decision-maker had full information about f ¯ , then it would clearly be preferable to approach problem (P1) directly.

3. Exact Procedure to Solve Problem (P2)

We can first note that the integrals in system (4) converge if and only if a < 0. Under this condition, it can be checked that the complementary c.d.f. F₀ associated with the density f given by Equation (3) is expressed as follows:

$$F_0(x) = \int_x^{+\infty} f(t)\,dt = \frac{1}{2}\sqrt{\frac{\pi}{-a}}\; e^{c - \frac{b^2}{4a}} \left[1 - \operatorname{erf}\!\left(-\frac{2ax + b}{2\sqrt{-a}}\right)\right], \qquad (6)$$

where erf(·) is the error function [38]. The related c.d.f. F is evidently given by F(x) = 1 − F₀(x). Moreover, the quantity B in Equation (5), i.e., the expected shortage at the end of the cycle, can be determined with the following relation:

$$B(r) = \int_r^{+\infty} (x-r) f(x)\,dx = \int_r^{+\infty} F_0(x)\,dx = -\frac{1}{2a}\left\{ e^{a r^2 + b r + c} + \frac{1}{2}\sqrt{\frac{\pi}{-a}}\,(2ar + b)\, e^{c - \frac{b^2}{4a}} \left[1 - \operatorname{erf}\!\left(-\frac{2ar + b}{2\sqrt{-a}}\right)\right] \right\} = -\frac{1}{2a}\left[ f(r) + (2ar + b)\,F_0(r) \right]. \qquad (7)$$

The equality $\int_r^{+\infty}(x-r)f(x)\,dx = \int_r^{+\infty} F_0(x)\,dx$ in Equation (7) is known in the literature [39]. Equations (6) and (7) can be obtained by exploiting the numerous properties of the error function given in [40].
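Equations (6) and (7) are easy to check numerically. The sketch below (our own illustration; the coefficient values are arbitrary placeholders with a < 0, not fitted to any data, and the density need not be normalized for the identity to hold) implements F₀ and B in closed form:

```python
from math import erf, exp, pi, sqrt

# illustrative (not fitted) MaxEnt coefficients with a < 0
a, b, c = -0.05, 0.8, -3.0

def f(x):
    """MaxEnt-type density kernel, Equation (3)."""
    return exp(a * x * x + b * x + c)

def F0(x):
    """Complementary c.d.f., Equation (6)."""
    return 0.5 * sqrt(pi / -a) * exp(c - b * b / (4.0 * a)) \
        * (1.0 - erf(-(2.0 * a * x + b) / (2.0 * sqrt(-a))))

def B(r):
    """Expected shortage at the end of the cycle, Equation (7)."""
    return -(f(r) + (2.0 * a * r + b) * F0(r)) / (2.0 * a)
```

Comparing `F0(r)` and `B(r)` against direct numerical integration of f over [r, +∞) confirms both closed forms to high accuracy.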
From Equation (7), we have that $\frac{d^2}{dr^2}B(r) = f(r) > 0$; hence, B(r) is a convex function of r. Moreover, observing that E[(X − r)⁺]/Q is a convex function of (Q, r), for Q > 0 and r > 0, for any p.d.f. of X [41], the convexity of C(Q, r) in (Q, r), for Q > 0 and r > 0, can readily be deduced. Therefore, the solution to problem (P2) can be obtained by solving the first-order conditions of optimality in (Q, r).
If we impose the first-order condition in r, we obtain:

$$F_0(r(Q)) = \frac{h}{h\beta + \bar{\pi}\tilde{\mu}/Q} \qquad (8)$$

where r(Q), the optimal r value for any given Q, must satisfy Equation (8). In other terms, the optimal r for any Q can be found from Equation (8) by inverting F (recall that F₀(x) = 1 − F(x)):

$$r(Q) = F^{-1}\!\left(1 - \frac{h}{h\beta + \bar{\pi}\tilde{\mu}/Q}\right) \qquad (9)$$
where F⁻¹ is the inverse function of F. We can further observe that Equation (5) can conveniently be rewritten as follows:

$$C(Q,r) = \frac{A\tilde{\mu}}{Q} + h\left(\frac{Q}{2} + r - \tilde{\mu}L\right) + B(r)\left(h\beta + \frac{\bar{\pi}\tilde{\mu}}{Q}\right) \qquad (10)$$

Moreover, inserting Equations (7)–(9) into Equation (10) we obtain, after some algebraic manipulations, the following expression:

$$C(Q) = \frac{A\tilde{\mu}}{Q} + h\left(\frac{Q}{2} - \tilde{\mu}L\right) - \frac{1}{2a}\left[\left(h\beta + \frac{\bar{\pi}\tilde{\mu}}{Q}\right) f(r(Q)) + h b\right] \qquad (11)$$
In conclusion, the Q-component Q* of the solution to problem (P2) can be found by minimizing Equation (11) in Q. Note that this can only be achieved by means of a numerical technique: the first-order condition dC(Q)/dQ = 0 cannot be solved for Q in closed form. To minimize Equation (11) in Q, a standard constrained nonlinear minimization algorithm, e.g., the interior-point algorithm, can be used. The r-component r* of the solution to problem (P2) can then be determined by inserting Q* into Equation (9).
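A minimal sketch of the exact procedure follows (our own illustration: the parameter values lie within the ranges of Table 1 but are otherwise arbitrary, and the coefficients a, b are placeholders standing in for a solution of system (4); c is recovered from the normalization constraint). Since C(Q, r) is jointly convex and Equation (8) defines r(Q), we invert F₀ by bisection and minimize the convex one-dimensional cost C(Q) of Equation (11) by ternary search, used here in place of the interior-point solver mentioned above:

```python
from math import erf, exp, log, pi, sqrt

# illustrative parameter values (within the ranges of Table 1)
mu_t, L, A, h, pen, pen0, beta = 500.0, 0.10, 150.0, 10.0, 40.0, 100.0, 0.5
pibar = pen + pen0 * beta          # pibar = pi + pi0 * beta

# placeholder MaxEnt coefficients (a < 0); c from the normalization constraint
a, b = -0.002, 0.2
alpha = -a
J0 = 0.5 * sqrt(pi / alpha) * (1.0 + erf(b / (2.0 * sqrt(alpha))))
c = -b * b / (4.0 * alpha) - log(J0)

def f(x):
    return exp(a * x * x + b * x + c)

def F0(x):
    """Complementary c.d.f. of the MaxEnt density, Equation (6)."""
    return 0.5 * sqrt(pi / alpha) * exp(c - b * b / (4.0 * a)) \
        * (1.0 - erf(-(2.0 * a * x + b) / (2.0 * sqrt(alpha))))

def r_of_Q(Q):
    """Invert Equation (8) by bisection (F0 is strictly decreasing)."""
    target = h / (h * beta + pibar * mu_t / Q)
    lo, hi = 0.0, 1.0
    while F0(hi) > target:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F0(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def C(Q):
    """Cost as a function of Q alone, Equation (11)."""
    return A * mu_t / Q + h * (Q / 2.0 - mu_t * L) \
        - ((h * beta + pibar * mu_t / Q) * f(r_of_Q(Q)) + h * b) / (2.0 * a)

def argmin_convex(fun, lo, hi, tol=1e-5):
    """Ternary search: valid because C(Q) is convex."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if fun(m1) < fun(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

Q_star = argmin_convex(C, 1.0, 2000.0)
r_star = r_of_Q(Q_star)
```

Any convex-capable one-dimensional minimizer would do here; the ternary search is chosen only to keep the sketch dependency-free.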

4. An Approximated Procedure to Solve Problem (P2)

We observe that optimization approaches that are effective but difficult to implement in practice, e.g., because of long computational times and/or the lack of a simple solution procedure (ideally, a simple formula), may have little practical relevance. For example, when a closed-form expression for the first-order conditions of optimality is lacking, the optimal solution may be obtained only with non-negligible effort, which may make the model itself impractical. This may be the case for a large retailer that typically needs to manage thousands of items, whose control variables must often be recalculated frequently. In these circumstances, the development of efficient and practically applicable approximated optimization procedures is therefore strongly encouraged [42,43,44].
Owing to the above observations, in this section we propose an efficient approximated procedure to approach problem (P2). In fact, we remind the reader that the optimal solution can only be achieved by means of a numerical technique, and this may limit the application of the model in practice. The approximation method presented here yields a simple formula, which resembles the classic EOQ expression.
The proposed near-optimal procedure is based on an ad hoc approximation of part of cost function (11). In particular, we replace f(r(Q)) with a first-order truncation of its Taylor series expansion in Q centered at

$$\bar{Q} \equiv \sqrt{\frac{2\tilde{\mu}A}{h}}$$
which is the well-known optimal order quantity in the EOQ model. We do not investigate the convergence properties of the Taylor series expansion of f(r(Q)), for two main reasons: (i) this task is particularly hard to accomplish analytically (one should also consider that the model involves several parameters, and convergence plausibly depends strongly on their specific values); and (ii) we rely on the fact that, by building the approximation around the optimal solution under deterministic conditions, the true cost function is "close" to the approximated expression (in particular around the minimum), with an error that is intuitively smaller as the variability in the system decreases. Although this last point is a heuristic argument, it is supported by experimental evidence: the reader is referred to the numerical study (Section 5), where tests show the efficiency of our approximation method. We finally note that a similar approach has successfully been implemented in previous research [44,45].
Taking the first-order derivative in Q of both sides of Equation (8), we have:

$$\frac{d}{dQ}F_0(r(Q)) = \frac{d}{dQ}\left(\frac{h}{h\beta + \bar{\pi}\tilde{\mu}/Q}\right) \;\Rightarrow\; -f(r(Q))\,\frac{dr(Q)}{dQ} = \frac{h\bar{\pi}\tilde{\mu}}{(\bar{\pi}\tilde{\mu} + h\beta Q)^2} \;\Rightarrow\; \frac{dr(Q)}{dQ} = -\frac{1}{f(r(Q))}\,\frac{h\bar{\pi}\tilde{\mu}}{(\bar{\pi}\tilde{\mu} + h\beta Q)^2}.$$

We can then note that:

$$\frac{d}{dQ}f(r(Q)) = \frac{df(r(Q))}{dr(Q)}\,\frac{dr(Q)}{dQ} = (2a\,r(Q) + b)\,f(r(Q))\left[-\frac{1}{f(r(Q))}\,\frac{h\bar{\pi}\tilde{\mu}}{(\bar{\pi}\tilde{\mu} + h\beta Q)^2}\right] = -(2a\,r(Q) + b)\,\frac{h\bar{\pi}\tilde{\mu}}{(\bar{\pi}\tilde{\mu} + h\beta Q)^2}.$$
Therefore, in a neighborhood of Q̄, we can write:

$$f(r(Q)) \approx p_0 + p_1 (Q - \bar{Q}) \qquad (12)$$

where:

$$p_0 \equiv f(r(\bar{Q})), \qquad p_1 \equiv -(2a\,r(\bar{Q}) + b)\,\frac{h\bar{\pi}\tilde{\mu}}{(\bar{\pi}\tilde{\mu} + h\beta \bar{Q})^2}.$$

With some algebraic manipulations, Equation (12) can conveniently be rewritten as follows:

$$f(r(Q)) \approx s_0 + s_1 Q, \qquad (13)$$

where:

$$s_0 \equiv p_0 - p_1 \bar{Q}, \qquad s_1 \equiv p_1.$$
Finally, inserting Equation (13) into Equation (11), we can approximate C(Q) in a neighborhood of Q̄ with the following expression:

$$\hat{C}(Q) = \frac{u}{Q} + v\,Q + y, \qquad (14)$$

where:

$$u \equiv A\tilde{\mu} - \frac{1}{2a}\, s_0\, \bar{\pi}\tilde{\mu}, \qquad v \equiv \frac{h}{2} - \frac{1}{2a}\, s_1\, h\beta, \qquad y \equiv -\frac{1}{2a}\left( s_0\, h\beta + s_1\, \bar{\pi}\tilde{\mu} + h b \right) - h\tilde{\mu}L.$$

We can observe that Ĉ(Q) resembles the deterministic cost structure plus a term constant in Q. The near-optimal Q-component Q̂ of the solution to problem (P2) can therefore be found by solving the equation dĈ(Q)/dQ = 0, which is equivalent to

$$N(Q) \equiv v\,Q^2 - u = 0.$$

Under the assumption that u and v are positive quantities, N(Q) admits a unique positive root, which evidently coincides with Q̂. In the numerical section, we will show that this fundamental assumption holds for a reasonably wide range of parameter values. Therefore, Q̂ is given by

$$\hat{Q} = \sqrt{\frac{u}{v}} \qquad (15)$$
Once Q̂ has been obtained, the corresponding near-optimal value r̂ of r can be found by inserting Q̂ into Equation (9).
We finally remark that the approximation procedure has allowed the derivation of a closed-form near-optimal solution (in Q). Moreover, this expression is particularly simple, as it resembles the classic EOQ formula. From a practical point of view, it is evidently simpler to implement Equation (15) than to minimize Equation (11) with a numerical procedure. The approximated solution method may therefore foster the practical implementation of the model proposed in this paper. In the next section, we will show that the solution found with Equation (15) is efficient for a wide range of parameter values.
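The whole heuristic chain, Equations (12)–(15), fits in a few lines. The sketch below is our own illustration: the parameter values and the coefficients a, b are arbitrary placeholders (a, b standing in for a solution of system (4)), not data from the paper.

```python
from math import erf, exp, log, pi, sqrt

# illustrative parameter values (within the ranges of Table 1)
mu_t, L, A, h, pen, pen0, beta = 500.0, 0.10, 150.0, 10.0, 40.0, 100.0, 0.5
pibar = pen + pen0 * beta

# placeholder MaxEnt coefficients (a < 0); c from the normalization constraint
a, b = -0.002, 0.2
alpha = -a
J0 = 0.5 * sqrt(pi / alpha) * (1.0 + erf(b / (2.0 * sqrt(alpha))))
c = -b * b / (4.0 * alpha) - log(J0)

def f(x):
    return exp(a * x * x + b * x + c)

def F0(x):
    return 0.5 * sqrt(pi / alpha) * exp(c - b * b / (4.0 * a)) \
        * (1.0 - erf(-(2.0 * a * x + b) / (2.0 * sqrt(alpha))))

def r_of_Q(Q):
    """Invert Equation (8) by bisection (F0 is strictly decreasing)."""
    target = h / (h * beta + pibar * mu_t / Q)
    lo, hi = 0.0, 1.0
    while F0(hi) > target:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F0(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# EOQ lot size: the expansion point Qbar
Qbar = sqrt(2.0 * mu_t * A / h)

# Equations (12)-(13): first-order expansion of f(r(Q)) around Qbar
p0 = f(r_of_Q(Qbar))
p1 = -(2.0 * a * r_of_Q(Qbar) + b) * h * pibar * mu_t \
    / (pibar * mu_t + h * beta * Qbar) ** 2
s0, s1 = p0 - p1 * Qbar, p1

# Equation (14): coefficients of the EOQ-like approximate cost
u = A * mu_t - s0 * pibar * mu_t / (2.0 * a)
v = h / 2.0 - s1 * h * beta / (2.0 * a)

# Equation (15): closed-form near-optimal lot size
Q_hat = sqrt(u / v)
r_hat = r_of_Q(Q_hat)
```

With these placeholder values, Q̂ lands close to, but slightly above, the EOQ lot size Q̄, as the shortage term would suggest.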

5. Numerical Experiments

This section presents numerical experiments aimed at examining two different questions. First, we numerically evaluate the efficiency of the approximated optimization procedure given in Section 4; the efficiency is assessed in terms of both the computational effort required and the error achieved. Then, we investigate the performance of the MaxEnt principle in approximating the true lead-time demand distribution under limited information, i.e., when only an estimate of the mean and of the variance of the lead-time demand is available. In the first part of this analysis, we compare the MaxEnt principle with two alternative procedures, i.e., the Gaussian approximation and the minimax approach, taking into account several classes of demand distributions. In the second and last part, we present a comparison between the MaxEnt principle and the approximation provided by the Weibull density.
These experiments were performed on a PC with an Intel® Core i7 processor at 2.4 GHz and with 16 GB of RAM. Moreover, MATLAB® R2013b was used as computing environment.

5.1. Efficiency of the Approximated Optimization Method

Let us consider the following quantity:

$$\mathrm{APE} \equiv \frac{\left| C(Q^*, r^*) - C(\hat{Q}, \hat{r}) \right|}{C(Q^*, r^*)} \times 100 \qquad (16)$$

which is the Absolute Percentage Error (APE). In Equation (16), the cost function C is given by Equation (5), (Q*, r*) is the minimum-cost solution (obtained with the procedure given in Section 3), and (Q̂, r̂) is the near-optimal solution (obtained with the procedure given in Section 4).
The error is evaluated by means of Equation (16) for different combinations of parameter values, randomly drawn within the intervals shown in Table 1. Although the chosen ranges are purely indicative, the values are similar to those typically adopted in the inventory management literature (see, e.g., [24,37,44]). Note that in Table 1 we consider cv ≡ σ̃/μ̃ instead of σ̃: once μ̃ and cv are fixed, the corresponding value of σ̃ is given by σ̃ = cv μ̃.
Table 1. Intervals where parameters take values.

| Parameter | Interval | Units of Measurement |
|---|---|---|
| μ̃ | [100, 1000] | units/year |
| cv | [0.05, 0.80] | - |
| L | [0.03, 0.17] | years |
| A | [100, 250] | $/order |
| h | [1, 25] | $/unit/year |
| π | [20, 70] | $/unit |
| π0 | [80, 150] | $/unit |
| β | [0.1, 0.9] | - |
Results of the error analysis are shown in Table 2. We can note that the approximated optimization method mostly achieves a very small APE (i.e., APE < 1%), and this happens for a reasonably wide range of parameter values. In some experiments the APE is less than 0.01% and can therefore be considered negligible. In a few cases, the APE exceeds 1% but remains below 2%. In general, we can thus affirm that the error achieved is small enough to consider the approximation reasonably good. It is worth noting that, in the cases where the error is larger (i.e., APE > 1%), cv is high; that is, greater variability in the system appears to lead to a higher error. This is expected, as the approximation is based on a Taylor series expansion around the optimal solution under deterministic conditions.
Table 2. Results of the error analysis.

| Test | μ̃ | cv | L | A | h | π | π0 | β | APE |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 142 | 0.31 | 0.13 | 219 | 14 | 54 | 143 | 0.14 | 0.04% |
| 2 | 276 | 0.59 | 0.13 | 232 | 15 | 23 | 145 | 0.74 | <0.01% |
| 3 | 987 | 0.59 | 0.15 | 165 | 12 | 48 | 99 | 0.70 | 0.92% |
| 4 | 377 | 0.15 | 0.10 | 154 | 20 | 59 | 127 | 0.21 | 0.14% |
| 5 | 371 | 0.75 | 0.17 | 143 | 21 | 65 | 122 | 0.81 | 1.93% |
| 6 | 756 | 0.48 | 0.03 | 167 | 17 | 46 | 106 | 0.85 | 0.08% |
| 7 | 435 | 0.50 | 0.15 | 240 | 17 | 30 | 126 | 0.16 | 0.61% |
| 8 | 941 | 0.67 | 0.10 | 214 | 11 | 69 | 150 | 0.79 | 0.32% |
| 9 | 322 | 0.69 | 0.15 | 237 | 14 | 50 | 90 | 0.82 | 0.38% |
| 10 | 910 | 0.62 | 0.15 | 143 | 17 | 53 | 88 | 0.43 | 1.88% |
| 11 | 355 | 0.72 | 0.15 | 158 | 13 | 55 | 139 | 0.59 | 0.54% |
| 12 | 511 | 0.59 | 0.15 | 208 | 15 | 41 | 110 | 0.45 | 0.01% |
| 13 | 392 | 0.23 | 0.08 | 156 | 14 | 48 | 108 | 0.42 | <0.01% |
| 14 | 956 | 0.59 | 0.09 | 255 | 4 | 23 | 85 | 0.23 | 0.07% |
| 15 | 110 | 0.45 | 0.04 | 122 | 16 | 63 | 149 | 0.56 | 0.01% |
| 16 | 564 | 0.30 | 0.09 | 174 | 2 | 65 | 84 | 0.45 | <0.01% |
| 17 | 652 | 0.66 | 0.15 | 240 | 5 | 33 | 143 | 0.57 | 0.15% |
| 18 | 838 | 0.45 | 0.06 | 168 | 11 | 69 | 124 | 0.66 | <0.01% |
| 19 | 565 | 0.47 | 0.05 | 184 | 18 | 41 | 139 | 0.69 | 0.10% |
| 20 | 448 | 0.63 | 0.13 | 164 | 18 | 68 | 135 | 0.66 | 0.67% |
| 21 | 632 | 0.39 | 0.04 | 134 | 21 | 20 | 141 | 0.16 | 0.13% |
| 22 | 296 | 0.48 | 0.05 | 201 | 15 | 22 | 84 | 0.22 | 0.05% |
| 23 | 849 | 0.51 | 0.10 | 230 | 3 | 66 | 87 | 0.51 | <0.01% |
| 24 | 104 | 0.63 | 0.15 | 238 | 25 | 45 | 99 | 0.18 | 0.34% |
| 25 | 787 | 0.11 | 0.12 | 178 | 5 | 67 | 121 | 0.45 | <0.01% |
| 26 | 507 | 0.68 | 0.10 | 183 | 18 | 38 | 96 | 0.56 | 0.01% |
| 27 | 201 | 0.38 | 0.07 | 160 | 21 | 40 | 107 | 0.39 | 0.07% |
| 28 | 178 | 0.37 | 0.07 | 144 | 11 | 26 | 115 | 0.67 | 0.02% |
| 29 | 166 | 0.35 | 0.03 | 133 | 1 | 29 | 90 | 0.31 | <0.01% |
| 30 | 639 | 0.73 | 0.16 | 133 | 13 | 39 | 117 | 0.31 | 1.52% |
| 31 | 256 | 0.07 | 0.16 | 165 | 25 | 58 | 80 | 0.64 | <0.01% |
| 32 | 597 | 0.21 | 0.14 | 134 | 10 | 65 | 140 | 0.42 | <0.01% |
| 33 | 920 | 0.73 | 0.11 | 150 | 22 | 42 | 144 | 0.13 | 1.46% |
| 34 | 261 | 0.30 | 0.06 | 148 | 11 | 47 | 83 | 0.54 | 0.01% |
| 35 | 319 | 0.17 | 0.16 | 241 | 21 | 57 | 92 | 0.39 | <0.01% |
| 36 | 663 | 0.46 | 0.09 | 143 | 13 | 58 | 134 | 0.56 | 0.23% |
| 37 | 211 | 0.43 | 0.08 | 113 | 4 | 30 | 127 | 0.45 | 0.01% |
| 38 | 108 | 0.45 | 0.07 | 242 | 23 | 40 | 81 | 0.64 | <0.01% |
| 39 | 151 | 0.39 | 0.11 | 203 | 18 | 53 | 131 | 0.40 | 0.06% |
| 40 | 151 | 0.78 | 0.07 | 189 | 25 | 29 | 93 | 0.37 | 0.33% |
| 41 | 346 | 0.16 | 0.09 | 156 | 4 | 42 | 86 | 0.59 | <0.01% |
| 42 | 811 | 0.23 | 0.09 | 185 | 2 | 45 | 125 | 0.28 | <0.01% |
| 43 | 862 | 0.43 | 0.07 | 212 | 6 | 68 | 124 | 0.58 | 0.03% |
| 44 | 329 | 0.69 | 0.16 | 205 | 19 | 31 | 120 | 0.75 | 0.84% |
| 45 | 181 | 0.29 | 0.10 | 109 | 19 | 48 | 117 | 0.76 | <0.01% |
With regard to the error analysis, we have carried out an additional investigation. APE has been evaluated over 1000 randomly generated problems, with parameter values drawn within the intervals in Table 1. Results are as follows:
  • In about 42.49% of cases, APE < 0.01%;
  • In about 51.16% of cases, 0.01% < APE ≤ 1%;
  • In about 4.44% of cases, 1% < APE ≤ 2%;
  • In about 1.92% of cases, APE > 2%;
  • The maximum value achieved is 4.8%.
We can observe that, in more than 93% of cases, APE is smaller than or equal to 1%, while in more than 98% of cases APE ≤ 2%.
We now evaluate the computational effort required by the approximated method and by the exact algorithm to solve problem (P2). To this aim, we consider the time required to solve 1000 random problems: although the time difference on a single problem is practically negligible, the ratio of the computational times becomes significant over several problems. In each problem, parameter values are randomly drawn within the intervals shown in Table 1, and both algorithms are tested on the same batch of problems. Results are as follows: the exact algorithm needed 99.42 s, while the approximated solution method took 6.84 s. That is, over the same 1000 random problems, the approximated solver reduced the computational time by more than 93%.
In conclusion, we can assert that the approximated solution method is efficient, in terms of both the error achieved and the computational effort required. It therefore seems promising for practical application. We finally observe that, in every test, the assumption that u and v are positive has been satisfied.

5.2. Comparative Analysis

In this subsection, we investigate the performance of the MaxEnt principle in approximating the true lead-time demand distribution, when only an estimate of the mean and of the variance is given. In the first part of this subsection, this experiment is made taking into account several classes of demand distributions: lognormal, gamma, and Weibull. The MaxEnt principle is compared with two alternative procedures: the Gaussian approximation and the minimax approach. In the second, and last, part of this subsection, the MaxEnt principle is compared with the approximation provided by the Weibull density. In this second analysis, the true density of the lead-time demand is assumed to be a mixture of lognormal distributions.
Let us begin with the first study. Table 3 shows the parameters whose value is kept fixed. These parameter values have been randomly drawn within the ranges in Table 1. The other parameters, i.e., cv, L and h, take several different values. This is to study the sensitivity of the response with respect to variations in the value of these parameters, which significantly affect the optimal replenishment policy.
Table 3. Parameters whose value is kept fixed.

| Parameter | Value | Units of Measurement |
|---|---|---|
| μ̃ | 834 | units/year |
| A | 237 | $/order |
| π | 24 | $/unit |
| π0 | 99 | $/unit |
| β | 0.54 | - |
Each single experiment, which is defined for a given set of parameter values, is carried out as follows.
The values of μ̃, cv, and L are used to sample ten observations from the true distribution of the lead-time demand. The parameters of the true lead-time demand distribution are determined by imposing that the mean and the standard deviation equal $\tilde{\mu}L$ and $c_v\,\tilde{\mu}\sqrt{L}$, respectively. Note that, in a real-world application, the true distribution of the lead-time demand is unknown to the decision-maker; only an estimate of its mean and variance can be obtained. Parameters h, A, π, π0, L, and β are instead reasonably supposed to be known to the decision-maker. The observations of the lead-time demand are used to compute estimates of its true mean and variance. These estimates are then exploited to find the (sub-)optimal replenishment policy in each "approximated" model considered (imposing that the mean and the variance in the approximating model equal the estimates of the true statistics). We also determine the optimal replenishment policy, and the corresponding minimum cost, that would have been adopted under complete information (i.e., if the true lead-time demand distribution were known). Finally, we evaluate the true cost of the ordering rule obtained under the Gaussian approximation, the minimax approach, or the MaxEnt principle. These true costs are compared in terms of Absolute Percentage Error with respect to the true minimum cost.
With a given lead-time demand distribution and for given parameter values, the optimal policy is determined by minimizing Equation (1) in (Q, r). The quantity B̄(r) can readily be obtained by means of Equation (2) once the density f̄ of the lead-time demand is specified. Recall that the density p of the lead-time demand under the minimax approach is given by [39]:
p(z) = (1/2) (1 + z²)^(−3/2)
where z = (r − μ)/σ.
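Under this density, the expected standardized shortage E[(Z − z)⁺] admits the closed form (√(1 + z²) − z)/2, which is the classical distribution-free (Scarf-type) bound. A quick numerical check (a sketch using midpoint-rule quadrature; the truncation point is an implementation choice):

```python
import math

def minimax_density(z):
    """Standardized lead-time demand density under the minimax approach."""
    return 0.5 / (1.0 + z * z) ** 1.5

def standardized_shortage(z, upper=1e4, n=200_000):
    """Midpoint-rule approximation of E[(Z - z)^+] under the minimax density;
    the tail beyond `upper` contributes about 0.5/upper and is neglected."""
    h = (upper - z) / n
    total = 0.0
    for i in range(n):
        u = z + (i + 0.5) * h
        total += (u - z) * minimax_density(u) * h
    return total
```

For instance, at z = 0 the quadrature returns approximately 0.5, matching (√1 − 0)/2.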
Concerning the task of assessing the mean and the variance of the lead-time demand, ten observations are used as a trade-off between obtaining a good estimate and keeping the sample small enough to reflect the scarcity of information assumed in practice. A similar argument was raised in [20].
For each combination of parameter values, under a given true lead-time demand distribution, five independent runs have been made. Results are shown in Table 4 and in Table 5 for the cases h = 5 $/unit/year and h = 15 $/unit/year, respectively. In these tables, the smallest error achieved in each run is written in bold.
Table 4. Results of the comparative analysis for the case h = 5 $/unit/year (L in days).
         | Lognormal             | Weibull               | Gamma
cv    L  | Gauss. Minimax MaxEnt | Gauss. Minimax MaxEnt | Gauss. Minimax MaxEnt
0.05  10 | 0.08%  1.42%   0.07%  | 0.10%  1.81%   0.09%  | 0.65%  4.36%   0.63%
         | 0.20%  0.74%   0.17%  | 0.63%  0.82%   0.62%  | 0.01%  2.08%   0.01%
         | 0.15%  1.53%   0.13%  | 0.78%  0.91%   0.74%  | 0.09%  2.15%   0.09%
         | 0.23%  0.61%   0.14%  | 0.20%  2.04%   0.20%  | 0.43%  0.85%   0.43%
         | 0.25%  2.91%   0.62%  | 0.97%  4.76%   0.99%  | 0.89%  4.99%   0.90%
      30 | 2.16%  9.74%   2.18%  | 0.45%  2.21%   0.45%  | 0.81%  6.66%   0.81%
         | 0.58%  1.49%   0.58%  | 0.74%  6.12%   0.74%  | 1.74%  0.72%   1.74%
         | 1.94%  0.54%   1.94%  | 0.28%  2.17%   0.28%  | 0.32%  1.91%   0.32%
         | 1.01%  1.29%   1.01%  | 0.07%  3.66%   0.07%  | 1.01%  6.88%   1.01%
         | 0.09%  2.93%   0.09%  | 0.20%  3.74%   0.20%  | 0.68%  4.50%   0.68%
0.1   10 | 2.17%  0.35%   0.88%  | 2.87%  0.50%   1.88%  | 8.37%  0.48%   0.34%
         | 0.48%  1.73%   0.03%  | 0.08%  3.45%   0.10%  | 0.83%  1.56%   0.18%
         | 0.75%  1.41%   0.31%  | 1.02%  1.59%   0.45%  | 1.13%  9.12%   2.58%
         | 0.35%  3.55%   0.32%  | 1.13%  1.36%   0.20%  | 0.24%  3.92%   0.20%
         | 0.67%  1.82%   0.48%  | 1.03%  1.47%   0.22%  | 0.35%  2.62%   0.10%
      30 | 2.17%  13.9%   2.32%  | 1.41%  3.05%   1.31%  | 1.83%  2.11%   1.82%
         | 1.87%  2.07%   1.62%  | 3.85%  16.29%  4.79%  | 2.01%  2.31%   1.29%
         | 4.82%  0.85%   4.75%  | 4.39%  1.28%   4.39%  | 0.39%  7.46%   0.39%
         | 0.21%  4.30%   0.20%  | 1.71%  2.70%   1.66%  | 6.47%  21.88%  6.77%
         | 0.40%  8.38%   0.41%  | 1.68%  2.62%   1.67%  | 4.08%  1.18%   3.52%
0.4   10 | 8.89%  0.14%   6.73%  | 4.42%  1.00%   2.91%  | 1.83%  14.07%  0.78%
         | 1.95%  5.35%   1.94%  | 2.82%  16.90%  1.25%  | 12.33% 41.67%  8.31%
         | 12.65% 1.82%   8.45%  | 15.54% 44.96%  11.27% | 5.72%  35.48%  9.75%
         | 2.22%  1.46%   1.19%  | 44.09% 17.38%  38.74% | 0.84%  16.47%  2.11%
         | 17.75% 56.42%  13.32% | 3.27%  2.18%   2.12%  | 33.77% 12.12%  33.13%
      30 | 30.02% 77.78%  41.28% | 3.14%  4.73%   0.47%  | 1.74%  22.78%  4.97%
         | 7.30%  1.08%   3.17%  | 6.71%  2.36%   1.75%  | 17.78% 2.52%   11.01%
         | 3.39%  5.79%   2.36%  | 48.75% 12.09%  35.85% | 7.78%  2.73%   1.87%
         | 4.40%  3.13%   2.07%  | 3.71%  27.39%  7.84%  | 2.30%  6.14%   0.07%
         | 2.41%  5.31%   1.66%  | 4.54%  3.78%   1.58%  | 4.03%  4.59%   0.45%
0.8   10 | 4.28%  4.06%   3.96%  | 62.38% 29.48%  57.97% | 59.99% 27.33%  56.03%
         | 19.38% 44.14%  15.34% | 11.06% 9.85%   2.69%  | 7.36%  9.78%   7.20%
         | 4.15%  13.00%  5.18%  | 42.95% 39.25%  16.66% | 38.41% 97.31%  32.12%
         | 18.69% 52.28%  8.56%  | 5.78%  15.41%  6.80%  | 15.63% 14.12%  11.44%
         | 49.02% 21.78%  42.87% | 23.90% 4.51%   15.76% | 6.47%  24.02%  8.12%
      30 | 12.76% 14.23%  12.65% | 10.02% 19.32%  9.11%  | 9.48%  26.01%  7.38%
         | 10.68% 12.15%  10.10% | 11.72% 9.36%   5.53%  | 35.53% 56.80%  24.43%
         | 22.08% 49.01%  26.87% | 13.32% 23.93%  11.69% | 16.02% 0.84%   9.35%
         | 59.61% 95.20%  50.06% | 44.93% 79.67%  36.74% | 19.49% 13.96%  1.23%
         | 19.23% 1.42%   15.25% | 59.82% 14.67%  47.38% | 6.79%  15.96%  6.79%
Table 5. Results of the comparative analysis for the case h = 15 $/unit/year (L in days).
         | Lognormal             | Weibull               | Gamma
cv    L  | Gauss. Minimax MaxEnt | Gauss. Minimax MaxEnt | Gauss. Minimax MaxEnt
0.05  10 | 0.61%  0.98%   0.61%  | 5.81%  1.69%   5.81%  | 0.10%  1.00%   0.08%
         | 3.69%  8.90%   4.14%  | 0.97%  1.15%   0.96%  | 0.58%  3.22%   0.58%
         | 5.22%  7.15%   1.54%  | 0.24%  2.34%   0.23%  | 0.17%  0.69%   0.14%
         | 0.68%  0.21%   0.67%  | 0.22%  2.46%   0.22%  | 5.97%  1.28%   4.60%
         | 0.04%  0.95%   0.04%  | 0.29%  0.55%   0.28%  | 0.02%  1.49%   0.02%
      30 | 4.51%  0.87%   4.51%  | 1.60%  6.43%   1.60%  | 0.35%  3.58%   0.35%
         | 0.10%  1.71%   0.10%  | 0.04%  2.05%   0.04%  | 0.59%  4.71%   0.59%
         | 0.05%  1.93%   0.05%  | 0.39%  4.13%   0.39%  | 0.05%  1.76%   0.05%
         | 1.50%  6.22%   0.87%  | 0.89%  5.08%   0.89%  | 12.55% 3.55%   12.55%
         | 0.69%  0.85%   0.54%  | 1.59%  0.57%   1.59%  | 0.09%  1.68%   0.09%
0.1   10 | 1.90%  0.25%   1.01%  | 7.67%  11.49%  3.53%  | 1.01%  1.07%   0.90%
         | 6.06%  11.44%  3.39%  | 3.57%  11.81%  5.29%  | 0.55%  3.82%   0.13%
         | 0.95%  0.47%   0.38%  | 3.44%  1.05%   3.30%  | 0.40%  3.92%   0.20%
         | 0.74%  0.58%   0.18%  | 1.59%  7.46%   1.79%  | 4.60%  0.99%   3.79%
         | 19.46% 9.78%   2.05%  | 9.64%  3.09%   0.13%  | 0.52%  0.96%   0.08%
      30 | 0.08%  3.35%   0.06%  | 16.25% 5.70%   15.32% | 3.22%  1.34%   2.20%
         | 0.38%  6.00%   0.46%  | 1.13%  8.15%   1.07%  | 4.01%  1.46%   3.80%
         | 1.79%  1.23%   1.77%  | 3.05%  2.92%   1.67%  | 0.51%  6.69%   0.08%
         | 1.11%  1.63%   0.99%  | 0.19%  3.12%   0.15%  | 0.32%  5.84%   0.12%
         | 2.88%  11.44%  2.80%  | 0.02%  4.12%   0.02%  | 0.01%  4.30%   0.01%
0.4   10 | 42.57% 10.27%  5.72%  | 45.67% 39.91%  25.42% | 45.89% 40.65%  20.36%
         | 7.16%  5.78%   0.50%  | 3.59%  2.09%   2.99%  | 24.15% 20.00%  7.81%
         | 71.38% 63.72%  52.63% | 2.13%  6.52%   1.87%  | 43.84% 38.95%  21.29%
         | 33.73% 16.26%  29.05% | 77.76% 70.58%  53.89% | 16.72% 35.84%  12.60%
         | 7.53%  18.78%  9.79%  | 78.57% 96.12%  67.02% | 42.80% 85.00%  7.28%
      30 | 62.65% 34.62%  28.38% | 60.80% 26.07%  49.36% | 6.62%  0.83%   2.57%
         | 9.05%  0.17%   4.59%  | 32.01% 48.62%  24.21% | 52.75% 40.63%  19.79%
         | 20.44% 2.18%   12.44% | 10.02% 16.37%  7.68%  | 8.27%  16.60%  5.99%
         | 7.11%  13.86%  9.19%  | 10.14% 0.34%   3.81%  | 84.95% 35.44%  15.45%
         | 10.49% 15.52%  7.41%  | 15.16% 25.96%  9.61%  | 9.56%  0.66%   3.92%
0.8   10 | 56.30% 52.72%  39.39% | 39.65% 60.42%  34.22% | 73.52% 19.11%  44.42%
         | 49.82% 44.38%  30.34% | 34.35% 14.10%  9.72%  | 50.13% 46.24%  18.41%
         | 32.46% 14.42%  25.41% | 50.12% 46.07%  20.51% | 56.26% 51.99%  21.75%
         | 81.37% 76.52%  54.42% | 28.73% 8.95%   15.81% | 25.50% 6.47%   23.44%
         | 37.60% 34.12%  15.54% | 63.66% 59.47%  34.73% | 17.25% 18.67%  16.50%
      30 | 56.91% 79.41%  63.78% | 18.99% 5.04%   15.77% | 17.16% 5.73%   14.44%
         | 17.84% 12.34%  16.83% | 72.99% 25.19%  62.53% | 57.31% 46.60%  13.48%
         | 21.47% 15.97%  2.85%  | 67.05% 54.49%  21.08% | 16.78% 13.63%  6.51%
         | 23.91% 18.87%  2.21%  | 87.97% 76.52%  30.85% | 18.06% 4.48%   12.11%
         | 16.40% 15.76%  13.40% | 25.65% 37.94%  19.86% | 20.36% 16.52%  2.61%
We can first observe that the relative performances do not change significantly for different values of h, nor when L varies for fixed cv. We can also note that the error of all approximation methods increases as cv and h become larger, which confirms the results of the error analysis.
For small cv, the performances of the (r,Q) policy with Gaussian lead-time demand and under the MaxEnt principle are substantially similar. In fact, the Gaussian approximation is known to work well for small cv, since in that case the normal density is nearly zero on the negative real semi-axis. Under this condition, we can argue that the normal density and the maximum-entropy distribution are very close. In contrast, the performance of the Gaussian approximation evidently deteriorates for higher cv. We can also observe that its performance worsens as h increases for fixed cv.
With regard to the minimax approach, its performance improves as the coefficient of variation of the lead-time demand increases. That is, when higher variability is involved, the minimax density seems a good choice for approximating the true lead-time demand distribution. In contrast, the approximation provided by the normal density appears preferable for small cv.
Overall, results clearly highlight that the MaxEnt principle seems the best choice under the considered conditions. In fact, its performance is very good in all investigated configurations, and it outperforms the other approaches in the majority of cases analyzed. Moreover, it appears relatively insensitive to variations in the parameter values. This is a significant outcome, as it makes the MaxEnt principle a promising method for modelling the lead-time demand distribution when the decision-maker has limited information about the true distribution.
We have then carried out additional experiments to compare the performance of the MaxEnt and Weibull distributions in approximating the true lead-time demand distribution. Note that the Weibull density is not typically adopted to represent the lead-time demand (more generally, the demand in a given time interval) [39,41,46]. However, it is known to have great flexibility to model many types of data, in particular thanks to the shape parameter k that allows the density to attain several shapes [47].
These tests are performed similarly to those presented in the first part of this subsection; that is, the procedure is the same. Again, Table 3 shows the parameters whose value is kept fixed. In this set of experiments, we have considered only one value for h, i.e., h = 5 $/unit/year. Parameters cv and L take the same values adopted in the previous experiments. The true density p(x) of the lead-time demand is assumed to be expressed as follows:
p(x) = 0.4 p̄(x; μ1, σ1²) + 0.3 p̄(x; μ2, σ2²) + 0.3 p̄(x; μ3, σ3²)
where p̄(x; μi, σi²), for i = 1, 2, 3, is a lognormal density with parameters μi and σi². That is, p(x) is a mixture of lognormal densities. This choice is not based on a specific criterion; we have simply considered a density that is not "standard", i.e., that does not belong to a specific parametric class. Parameters μi and σi², for i = 2, 3, are kept fixed and take the following values: μ2 = 1.07, μ3 = 2.69, σ2² = 0.23, and σ3² = 0.16. These values are purely indicative. In each experiment, the values of μ1 and σ1² are determined to ensure that the mean and the standard deviation of p(x) equal μ̃L and cv μ̃L, respectively (recall that the lead-time demand has mean and standard deviation given by μ̃L and cv μ̃L, where cv = σ̃/μ̃).
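The calibration of μ1 and σ1² reduces to matching the first two raw moments of the mixture, which is feasible whenever the implied first component has a positive mean and a second moment exceeding its squared mean. A sketch in Python (the fixed components are those given above; the target moments used in an experiment would come from μ̃L and cv μ̃L):

```python
import math

W = (0.4, 0.3, 0.3)                     # mixture weights
FIXED = [(1.07, 0.23), (2.69, 0.16)]    # (mu_i, sigma_i^2) for i = 2, 3

def ln_moments(mu, s2):
    """First and second raw moments of a lognormal(mu, s2) density."""
    return math.exp(mu + 0.5 * s2), math.exp(2.0 * mu + 2.0 * s2)

def calibrate_first_component(target_mean, target_sd):
    """(mu_1, sigma_1^2) so that the mixture has the target mean and sd.
    Feasibility requires m1 > 0 and q1 > m1**2 below."""
    m23 = [ln_moments(mu, s2) for mu, s2 in FIXED]
    m1 = (target_mean - W[1] * m23[0][0] - W[2] * m23[1][0]) / W[0]
    q1 = (target_mean ** 2 + target_sd ** 2
          - W[1] * m23[0][1] - W[2] * m23[1][1]) / W[0]
    s2_1 = math.log(q1 / m1 ** 2)
    mu_1 = math.log(m1) - 0.5 * s2_1
    return mu_1, s2_1
```

The round trip (recomputing the mixture moments from the returned parameters) recovers the targets exactly, up to floating-point error.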
For each combination of parameter values, three runs have been made. Table 6 shows the results of the comparison between the MaxEnt and Weibull distributions. Performance is measured in terms of Absolute Percentage Error (APE) with respect to the true minimum cost. Table 6 also gives the value taken by the shape parameter k of the Weibull density in each run. The MaxEnt distribution achieves a better performance, as it obtains a smaller APE in the majority of tests. In addition, the performance of the Weibull model seems to deteriorate, relative to that of the MaxEnt model, as the variability in the system grows: with increasing cv, the MaxEnt distribution realizes the lowest APE more frequently than for small or medium values of cv. With regard to the APE magnitude, it increases in cv, as expected; this result is in accordance with the outcomes of the previous experiments. The shape parameter k of the Weibull density takes relatively small values, the greatest observed being 1.68. A final remark: k appears to decrease as the variability in the system increases; that is, in such circumstances, the Weibull density tends to have a null mode.
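Since the coefficient of variation of a Weibull variable depends only on the shape parameter k, fitting the Weibull density to an estimated mean and cv reduces to a one-dimensional root search. A sketch (bisection, exploiting that the cv is decreasing in k; the bracketing interval is an implementation choice of ours):

```python
import math

def weibull_cv(k):
    """Coefficient of variation of a Weibull distribution with shape k."""
    g1 = math.gamma(1.0 + 1.0 / k)
    g2 = math.gamma(1.0 + 2.0 / k)
    return math.sqrt(g2 / g1 ** 2 - 1.0)

def weibull_from_moments(mean, cv, lo=0.05, hi=50.0, tol=1e-10):
    """Shape k and scale lam matching the given mean and cv
    (bisection on k, since weibull_cv is decreasing in k)."""
    for _ in range(200):
        k = 0.5 * (lo + hi)
        if weibull_cv(k) > cv:
            lo = k
        else:
            hi = k
        if hi - lo < tol:
            break
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam
```

For example, cv = 1 yields k = 1 (the exponential case), consistent with the small values of k reported in Table 6 for high variability.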
Table 6. Results of the comparison between MaxEnt and Weibull distributions (L in days).
cv    L  | MaxEnt  Weibull | k
0.05  10 | 3.54%   4.61%   | 1.68
         | 2.39%   3.22%   | 1.41
         | 0.72%   1.04%   | 1.24
      30 | 6.19%   6.84%   | 1.46
         | 6.46%   6.32%   | 1.28
         | 6.07%   5.98%   | 1.19
0.1   10 | 12.15%  15.04%  | 1.08
         | 12.22%  11.95%  | 1.22
         | 0.64%   0.99%   | 1.27
      30 | 4.76%   4.92%   | 0.58
         | 0.49%   1.37%   | 0.75
         | 1.56%   1.20%   | 0.67
0.4   10 | 13.70%  12.45%  | 1.14
         | 12.12%  24.11%  | 0.73
         | 8.22%   8.56%   | 1.31
      30 | 6.86%   7.27%   | 0.44
         | 3.05%   5.63%   | 1.02
         | 6.84%   13.94%  | 0.57
0.8   10 | 80.44%  86.71%  | 1.54
         | 59.43%  57.70%  | 1.14
         | 7.84%   14.20%  | 0.72
      30 | 11.28%  3.31%   | 0.63
         | 13.28%  21.06%  | 0.95
         | 10.27%  13.27%  | 0.76

6. Conclusions and Further Remarks

In this paper, we took into account the continuous-review reorder point-lot size (r,Q) policy under stochastic demand, with backorders-lost sales mixture. We modelled the lead-time demand distribution according to the MaxEnt principle. That is, we considered the density that maximizes the entropy over all densities with given mean and variance. This approach is suitable when the decision-maker is provided with limited information about the true distribution of the lead-time demand. This is the case, for example, in which the true distribution is unknown, but only some observations of the demand are available, which allow estimating the mean and the variance of the lead-time demand.
We developed an optimization problem aimed at minimizing the expected total cost per time unit. To approach this problem, we then presented an exact algorithm and a heuristic method. The latter is based on an approximated expression of the cost function obtained by means of an ad hoc first-order truncation of the Taylor series expansion.
Numerical experiments were finally carried out to investigate two aspects. First, the heuristic algorithm was evaluated in terms of both the error achieved and the computational effort required. Tests proved that the approximated solution procedure is efficient for a wide range of parameter values. Then, we investigated the capability of the MaxEnt principle to approximate the true lead-time demand distribution under partial information. This analysis was first made considering the performance of two alternative approaches: the Gaussian approximation and the minimax method. Results highlighted that the MaxEnt principle performed better than the other methods: it achieved the best outcome in the majority of cases analyzed. A second set of experiments was then carried out to compare the MaxEnt principle with the approximation provided by the Weibull density. Again, the MaxEnt principle turned out to be preferable, as it achieved the best result in the majority of tests. In conclusion, the MaxEnt principle seems a promising practical approach to model the lead-time demand distribution under partial information.
The developed model considers the lead time a constant and deterministic quantity, which can therefore be supposed known to the decision-maker. However, the case of a random lead time can be taken into account, too. In this regard, it is reasonable to follow an argument similar to that made above concerning the distribution of the lead-time demand. That is, when the lead time is a random variable, full information about its true distribution may not be available to the decision-maker; only some observations of the lead-time duration can be obtained. To include a stochastic lead time into our model, in the relevant case characterized by partial information, we can proceed as follows. A premise is needed: we refer to the condition with sequential deliveries independent of the lead-time demand, which is characterized by orders that cannot cross in time [46]. Note that this is the most common situation in practice, since orders are almost always received in the same sequence in which they were placed [41,46]. Hence, let us assume that the decision-maker is aware of only some observations of (i) the demand in given time periods; and (ii) the length of the lead time. These observations allow estimating the mean and the variance of the lead time (i.e., μL and σL², respectively) and of the demand per time unit (i.e., μD and σD², respectively). These quantities can then be adopted to assess the mean μ and the variance σ² of the lead-time demand according to the following expressions [46]:
μ = μ D μ L , σ 2 = σ D 2 μ L + μ D 2 σ L 2 .
Finally, μ and σ² can be exploited to estimate parameters a, b, and c of the density maximizing the entropy, which will be used to model the distribution of the lead-time demand, as described in Section 2.
Future research may address several directions. For example, the suitability of implementing the MaxEnt principle in different inventory systems (e.g., multi-echelon supply chains) may be investigated. In addition, the entropy maximization principle may be applied to generalized entropies, e.g., the Tsallis entropy [48]. Finally, a more general formulation of the expected shortage may also be considered. In particular, the expression known in the risk management literature as "shortfall risk" [49,50] may be adopted in place of the standard quantity.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sinn, H.-W. A rehabilitation of the principle of insufficient reason. Q. J. Econ. 1980, 94, 493–506. [Google Scholar] [CrossRef]
  2. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  3. Jaynes, E.T. Information theory and statistical mechanics, II. Phys. Rev. 1957, 108, 171–190. [Google Scholar] [CrossRef]
  4. Jaynes, E.T. Probability Theory: The Logic of Science, 1st ed.; Cambridge University Press: New York, NY, USA, 2003. [Google Scholar]
  5. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  6. Zhao, J.; Chai, L. A novel approach for urbanization level evaluation based on information entropy principle: A case of Beijing. Phys. A 2015, 430, 114–125. [Google Scholar] [CrossRef]
  7. Singh, A.K.; Singh, H.P. Analysis of finite buffer queue: A maximum entropy probability distribution with shifted fractional geometric and arithmetic means. IEEE Commun. Lett. 2015, 19, 163–166. [Google Scholar] [CrossRef]
  8. Piovan, M.T.; Olmedo, J.F.; Sampaio, R. Dynamics of magneto electro elastic curved beams: Quantification of parametric uncertainties. Compos. Struct. 2015, 133, 621–629. [Google Scholar] [CrossRef]
  9. Chen, H.; MacMinn, R.; Sun, T. Multi-population mortality models: A factor copula approach. Insur. Math. Econ. 2015, 63, 135–146. [Google Scholar] [CrossRef]
  10. Tang, J.; Li, Z.; Wang, M.; Zhao, R. Neighborhood discriminant hashing for large-scale image retrieval. IEEE Trans. Image Process. 2015, 24, 2827–2840. [Google Scholar] [CrossRef] [PubMed]
  11. Kern-Isberner, G.; Wilhelm, M.; Beierle, C. Probabilistic knowledge representation using the principle of maximum entropy and Gröbner basis theory. Ann. Math. Artif. Intell. 2015. [Google Scholar] [CrossRef]
  12. Davis, J.; Blesius, L. A hybrid physical and maximum-entropy landslide susceptibility model. Entropy 2015, 17, 4271–4292. [Google Scholar] [CrossRef]
  13. Sansalone, V.; Gagliardi, D.; Desceliers, C.; Bousson, V.; Laredo, J.-D.; Peyrin, F.; Haïat, G.; Naili, S. Stochastic multiscale modelling of cortical bone elasticity based on high-resolution imaging. Biomech. Model. Mechanobiol. 2015. [Google Scholar] [CrossRef] [PubMed]
  14. Arandjelović, O.; Pham, D.-S.; Venkatesh, S. Two maximum entropy-based algorithms for running quantile estimation in nonstationary data streams. IEEE Trans. Circ. Syst. Vid. 2015, 25, 1469–1479. [Google Scholar] [CrossRef]
  15. Martyushev, L.M.; Seleznev, V.D. Maximum entropy production principle in physics, chemistry and biology. Phys. Rep. 2006, 426, 1–45. [Google Scholar] [CrossRef]
  16. Banavar, J.R.; Maritan, A.; Volkov, I. Applications of the principle of maximum entropy: From physics to ecology. J. Phys.-Condens. Mat. 2010, 22, 063101. [Google Scholar] [CrossRef] [PubMed]
  17. Zhou, R.; Cai, R.; Tong, G. Applications of entropy in finance: A review. Entropy 2013, 15, 4909–4931. [Google Scholar] [CrossRef]
  18. Harte, J.; Newman, E.A. Maximum information entropy: A foundation for ecological theory. Trends Ecol. Evol. 2014, 29, 384–389. [Google Scholar] [CrossRef] [PubMed]
  19. Perakis, G.; Roels, G. Regret in the newsvendor model with partial information. Oper. Res. 2008, 56, 188–203. [Google Scholar] [CrossRef]
  20. Andersson, J.; Jörnsten, K.; Nonås, S.L.; Sandal, L.; Ubøe, J. A maximum entropy approach to the newsvendor problem with partial information. Eur. J. Oper. Res. 2013, 228, 190–200. [Google Scholar] [CrossRef] [Green Version]
  21. Maglaras, C.; Eren, S. A maximum entropy joint demand estimation and capacity control policy. Prod. Oper. Manag. 2015, 24, 438–450. [Google Scholar] [CrossRef]
  22. Moon, I.; Gallego, G. Distribution free procedures for some inventory models. J. Oper. Res. Soc. 1994, 45, 651–658. [Google Scholar] [CrossRef]
  23. Gallego, G.; Katircioglu, K.; Ramachandran, B. Inventory management under highly uncertain demand. Oper. Res. Lett. 2007, 35, 281–289. [Google Scholar] [CrossRef]
  24. Sarkar, B.; Chaudhuri, K.; Moon, I. Manufacturing setup cost reduction and quality improvement for the distribution free continuous-review inventory model with a service level constraint. J. Manuf. Syst. 2015, 34, 74–82. [Google Scholar] [CrossRef]
  25. Kumar, R.S.; Goswami, A. A continuous review production-inventory system in fuzzy random environment: Minmax distribution free procedure. Comput. Ind. Eng. 2015, 79, 65–75. [Google Scholar] [CrossRef]
  26. Raza, S.A. An integrated approach to price differentiation and inventory decisions with demand leakage. Int. J. Prod. Econ. 2015, 164, 105–117. [Google Scholar] [CrossRef]
  27. Guo, C.; Li, X. A multi-echelon inventory system with supplier selection and order allocation under stochastic demand. Int. J. Prod. Econ. 2014, 151, 37–47. [Google Scholar] [CrossRef]
  28. Chung, K.-J.; Ting, P.-S.; Hou, K.-L. A simple cost minimization procedure for the (Q,r) inventory system with a specified fixed cost per stockout occasion. Appl. Math. Model. 2009, 33, 2538–2543. [Google Scholar] [CrossRef]
  29. Hsiao, Y.-C. Integrated logistic and inventory model for a two-stage supply chain controlled by the reorder and shipping points with sharing information. Int. J. Prod. Econ. 2008, 115, 229–235. [Google Scholar] [CrossRef]
  30. Rad, R.H.; Razmi, J.; Sangari, M.S.; Ebrahimi, Z.F. Optimizing an integrated vendor-managed inventory system for a single-vendor two-buyer supply chain with determining weighing factor for vendor’s ordering cost. Int. J. Prod. Econ. 2014, 153, 295–308. [Google Scholar] [CrossRef]
  31. Chu, P.; Yang, K.-L.; Chen, P.S. Improved inventory models with service level and lead time. Comput. Oper. Res. 2005, 32, 285–296. [Google Scholar] [CrossRef]
  32. Pan, J.C.-H.; Hsiao, Y.-C. Integrated inventory models with controllable lead time and backorder discount considerations. Int. J. Prod. Econ. 2005, 93–94, 387–397. [Google Scholar] [CrossRef]
  33. Annadurai, K.; Uthayakumar, R. Controlling setup cost in (Q,r,L) inventory model with defective items. Appl. Math. Model. 2010, 34, 1418–1427. [Google Scholar] [CrossRef]
  34. Chang, C.-T.; Lo, T.Y. On the inventory model with continuous and discrete lead time, backorders and lost sales. Appl. Math. Model. 2009, 33, 2196–2206. [Google Scholar] [CrossRef]
  35. Sicilia, J.; San-José, L.A.; García-Laguna, J. An inventory model where backordered demand ratio is exponentially decreasing with the waiting time. Ann. Oper. Res. 2012, 199, 137–155. [Google Scholar] [CrossRef]
  36. Wang, D.; Tang, O. Dynamic inventory rationing with mixed backorders and lost sales. Int. J. Prod. Econ. 2014, 149, 56–67. [Google Scholar] [CrossRef]
  37. Ouyang, L.-Y.; Chen, C.-K.; Chang, H.-C. Lead time and ordering cost reductions in continuous review inventory systems with partial backorders. J. Oper. Res. Soc. 1999, 50, 1272–1279. [Google Scholar] [CrossRef]
  38. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Dover Publications: New York, NY, USA, 1965. [Google Scholar]
  39. Zipkin, P.H. Foundations of Inventory Management; McGraw-Hill/Irwin: New York, NY, USA, 2000. [Google Scholar]
  40. Ng, E.W.; Geller, M. A table of integrals of the error functions. J. Res. Nbs. B Math. Sci. 1969, 73B, 1–20. [Google Scholar] [CrossRef]
  41. Hadley, G.; Whitin, T.M. Analysis of Inventory Systems; Prentice-Hall Inc.: Englewood Cliffs, NJ, USA, 1963. [Google Scholar]
  42. Platt, D.E.; Robinson, L.W.; Freund, R.B. Tractable (Q, R) heuristic models for constrained service levels. Manag. Sci. 1997, 43, 951–965. [Google Scholar] [CrossRef]
  43. Silver, E.A. An overview of heuristic solution methods. J. Oper. Res. Soc. 2004, 55, 936–956. [Google Scholar] [CrossRef]
  44. Eynan, A.; Kropp, D.H. Effective and simple EOQ-like solutions for stochastic demand periodic review systems. Eur. J. Oper. Res. 2007, 180, 1135–1143. [Google Scholar] [CrossRef]
  45. Braglia, M.; Castellano, D.; Frosolini, M. A novel approach to safety stock management in a coordinated supply chain with controllable lead time using present value. Appl. Stoch. Model. Bus. 2015. [Google Scholar] [CrossRef]
  46. Axsäter, S. Inventory Control, 3rd ed.; Springer: New York, NY, USA, 2015. [Google Scholar]
  47. Rinne, H. The Weibull Distribution: A Handbook; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar]
  48. Preda, V.; Dedu, S.; Gheorghe, C. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints. Phys. A 2015, 436, 925–932. [Google Scholar] [CrossRef]
  49. Runggaldier, W.; Trivellato, B.; Vargiolu, T. A Bayesian adaptive control approach to risk management in a binomial model. In Seminar on Stochastic Analysis, Random Fields and Applications III; Dalang, R.C., Dozzi, M., Russo, F., Eds.; Birkhäuser: Basel, Switzerland, 2002; Volume 52, pp. 243–258. [Google Scholar]
  50. Trivellato, B. Replication and shortfall risk in a binomial model with transaction costs. Math. Methods Oper. Res. 2009, 69, 1–26. [Google Scholar] [CrossRef]
