Article

Operational Complexity of Supplier-Customer Systems Measured by Entropy—Case Studies

Department of Economics and Quantitative Methods, Faculty of Economics, University of West Bohemia, Husova 11, Pilsen 30614, Czech Republic
* Author to whom correspondence should be addressed.
Entropy 2016, 18(4), 137; https://doi.org/10.3390/e18040137
Submission received: 3 December 2015 / Revised: 14 March 2016 / Accepted: 28 March 2016 / Published: 14 April 2016
(This article belongs to the Special Issue Computational Complexity)

Abstract

This paper discusses a unified entropy-based approach to the quantitative measurement of the operational complexity of company supplier-customer relations. Classical Shannon entropy is utilized. Besides this quantification tool, we also explore the relation between Shannon entropy and (c,d)-entropy in more detail. An analytic description of the so-called iso-quant curves is given, too. We present five case studies, albeit in an anonymous setting, describing various details of a general procedure for measuring the operational complexity of supplier-customer systems. In general, we assume that a problem-oriented database exists which contains detailed records of all product forecasts, orders and deliveries, both in quantity and time, scheduled as well as realized. Data processing detects important flow variations both in volumes and times, e.g., order vs. forecast, delivery vs. order, and actual vs. scheduled production. The unifying quantity used for entropy computation is the time gap between the actual delivery time and the order issue time, which is nothing other than the lead time of inventory control models. After data consistency checks, histograms and empirical distribution functions are constructed. Finally, the entropy, an information-theoretic measure of supplier-customer operational complexity, is calculated. The basic steps of the algorithm are outlined briefly, too. Results of supplier-customer system analyses from selected Czech small and medium-sized enterprises (SMEs) are presented in various computational and managerial decision-making details. An enterprise is ranked as an SME if it has at most 250 employees and either its annual turnover does not exceed 50 million USD or its annual balance sheet total does not exceed 43 million USD.

1. Introduction

Business economics generally distinguishes two types of complexity of supplier-customer systems. The first type, called structural complexity, is defined by the static variety of the system and its main design dimensions. From an analytic point of view, it describes the structural links among various business units and their hierarchies. Its representation is predominantly static and usually changes only over long-term periods.
The second type is called operational complexity and concerns all uncertainties associated with system dynamics. From an analytic point of view, it reflects temporal changes in supplier-customer systems. In particular, an operational complexity measure should express the behavioral uncertainties of the system over time with respect to specified control levels. It has to record in detail all possible types of flow variations within and across companies, e.g., replenishment time disturbances, deviations of material in/out flows, etc. We assume that such data are available in a company management information system (MIS).
Finally, we would like to add that there are numerous types of complexity defined in various fields closely related to business economics. For example, three kinds of complexity are distinguished within manufacturing systems, namely product, process and operational complexity. We refer to [1,2,3,4,5] for more details and various aspects of complexity, including tools and methods applied in company management.
From a management science point of view, a supplier-customer system belongs to the broad theory of inventory control [6]. It is well known that the links within any customer-supplier system can be represented by graphs, as is usual in multi-echelon inventory systems.
In this paper, the operational complexity of supplier-customer systems is analyzed using a unified entropy-oriented approach. We select five SME-ranked firms with rather diverse production lines from the West Bohemian district of the Czech Republic, and we present the results of our case studies in varying detail, thus providing a general scheme for this type of managerial analysis. All these results are computed by entropy procedures based upon classical Shannon entropy. However, we also discuss the relation between the well-known Shannon entropy, or the Boltzmann–Gibbs–Shannon entropy as it is alternatively called (denoted BGS-entropy, too), and a more general entropy, the (c,d)-entropy.
The paper is organized as follows: in Section 2, we present a short theoretical background, divided into three parts: (i) the definition of entropy used to measure operational complexity, together with some interesting results relating the BGS-entropy and the (c,d)-entropy; (ii) the information scheme of supplier-customer systems and the basic structure of the problem-oriented database; and (iii) a definition and qualitative description of the variable records stored in such a database. Section 3, which forms the core of the paper, presents the results of our five case studies. Some 3-D bar charts are added, too, illustrating in a practical way how to recast entropy quantities into a form suitable for managerial decision making. Section 4 contains a discussion of the results and some concluding remarks concerning our future research.

2. Theoretical Background

In general, the theoretical framework for the quantification of any system complexity is provided by information theory. First, we refer to two seminal works in this field [7,8]. Entropy is the best-known quantitative measure of the expected amount of information required to describe the state of a system, and it builds a basic framework for the development of complexity theory [9,10]. Entropy is used for measuring the complexity of supply chains [11,12,13,14,15,16,17]. These papers cover manifold topics and approaches, and they also show closer or looser connections with inventory control theory in general. Another field of application is system complexity analysis in engineering design and manufacturing chains, see [18,19,20,21]. In general, the complexity of a system increases with the increasing level of disorder and uncertainty of its states, as presented in [22]. A theoretical framework for joint conditional complexity is presented in [23]. The application of entropy to the analysis and classification of numerical algorithms and the corresponding computational complexities is discussed in [24,25]. Paper [26] provides a good review of various multi-scale entropy algorithms and some applications, too. Two-echelon supply chain games and their information characteristics, in particular, are discussed in [27]. Finally, the papers [28,29,30] are concerned with measuring the operational complexity of supplier-customer systems, and they initiated our serious interest in the field.
An overview of the basic Shannon–Khinchin axioms for entropy is given in [31,32], concluding that, for a discrete probabilistic system, the only function which satisfies these axioms takes the following form:
$$ S(p_1, \ldots, p_N) = -\,c \sum_{i=1}^{N} p_i \log_b(p_i) \tag{1} $$
where c is a positive constant, c > 0, and the base of the logarithm is an arbitrary b > 1.

2.1. Entropy Used to Measure Operational Complexity

Basically, we use the classical Shannon information-theoretic measure and the corresponding entropy defined for any information system with N states characterized by a discrete probability distribution, i.e., each system state may appear with probability $p_i \geq 0$, with $\sum_{i=1}^{N} p_i = 1$. The Shannon entropy is given by the following formula, which corresponds to Equation (1) with c = 1 and b = 2:
$$ H(p_1, \ldots, p_N) = -\sum_{i=1}^{N} p_i \log_2(p_i) \tag{2a} $$
where H is used to denote entropy calculated using log2( x ) in particular, whilst the entropy calculated with natural logarithms log( x ), i.e., with b = e, will be denoted by S from here on.
The maximal attainable value of $H(p_1, \ldots, p_N)$ is reached for the uniform distribution and takes the form:
$$ H_u := H\!\left(p_1 = \tfrac{1}{N}, \ldots, p_N = \tfrac{1}{N}\right) = \log_2(N) \tag{2b} $$
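For illustration, the following minimal Mathematica sketch computes H, Hu, and the ratio h = H/Hu; the small distribution is made up for the example and is not taken from the case studies.

```mathematica
(* Minimal sketch: Shannon entropy H (base 2), its maximum Hu, and the
   entropy ratio h = H/Hu; the distribution p is an assumed example. *)
p = {0.2, 0.3, 0.1, 0.4};
H = -Total[p*Log2[p]];     (* Equation (2a); assumes all pi > 0 *)
Hu = Log2[Length[p]];      (* Equation (2b) *)
h = H/Hu                   (* entropy ratio used throughout Section 3 *)
```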
Recently, generalized (c,d)-entropy was presented in [31], which is particularly interesting. It provides a versatile tool for quantitatively measuring complex systems, which are often inherently non-ergodic and non-Markovian ones, thus relaxing the rigorous need to apply Shannon entropy. Moreover, the (c,d)-entropy, with system-specific scaling exponents, c and d, has yet another attractive feature. This entropy contains many known entropy functionals as special cases, including Shannon entropy among others.
Following [31], the (c,d)-entropy is defined as follows:
$$ S_{c,d}(p_1, \ldots, p_N) = e^{r} \sum_{i=1}^{N} \Gamma\!\left(d+1,\; 1 - c \log(p_i)\right) - c\,r, \qquad r = (1 - c + cd)^{-1} \tag{3} $$
using the natural logarithm $\log(x)$ and the incomplete gamma function $\Gamma(a,b)$ as key tools.
The incomplete gamma function is given by:
$$ \Gamma(a,b) = \int_{b}^{\infty} t^{a-1} \exp(-t)\, dt \tag{4} $$
when relaxing the lower integration bound of the Euler gamma function:
$$ \Gamma(a) = \int_{0}^{\infty} t^{a-1} \exp(-t)\, dt \tag{5} $$
which provides a useful extension of the factorial, with (n − 1)! = Γ(n) for any positive integer n.
The Boltzmann–Gibbs entropy originated in thermodynamics:
$$ S_{\mathrm{BGS}}(p_1, \ldots, p_N) = \sum_{i=1}^{N} p_i \log(1/p_i) \tag{6} $$
It differs from the Shannon entropy quantitatively only by using natural logarithms instead of base-2 logarithms. Hence, it is usually denoted by the symbol $S_{\mathrm{BGS}}$ and, most correctly, called the Boltzmann–Gibbs–Shannon entropy.
Following [31], the relation between (c,d)-entropy and the Boltzmann–Gibbs–Shannon entropy is established for c = 1, and d = 1:
$$ S_{\mathrm{BGS}}(p_1, \ldots, p_N) = S_{1,1}(p_1, \ldots, p_N) - 1 \tag{7} $$
This relation has motivated us to reformulate the additive term in Equation (3) in order to remove the additive constant −1 from Equation (7) intrinsically. Hence, we propose the modified formula in the following form:
$$ S^{*}_{c,d}(p_1, \ldots, p_N) = e^{r} \sum_{i=1}^{N} \Gamma\!\left(d+1,\; 1 - c \log(p_i)\right) - c\,(r+1), \qquad r = (1 - c + cd)^{-1} \tag{8} $$
and call it the (c,d)-entropy as well; it now provides the desired relation directly:
$$ S_{\mathrm{BGS}}(p_1, \ldots, p_N) = S^{*}_{1,1}(p_1, \ldots, p_N) \tag{9} $$
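As a quick numerical check of Equation (9), the following minimal Mathematica sketch implements the modified (c,d)-entropy of Equation (8) via the incomplete gamma function Gamma[a, b]; the names SStar and SBGS and the test distribution are our own choices for the example, not the authors' original code.

```mathematica
(* Minimal sketch of Equations (6) and (8); SStar, SBGS and p are our own names. *)
SStar[p_List, c_, d_] := Module[{r = 1/(1 - c + c*d)},
  Exp[r]*Total[Gamma[d + 1, 1 - c*Log[#]] & /@ p] - c*(r + 1)]
SBGS[p_List] := -Total[p*Log[p]]   (* Boltzmann-Gibbs-Shannon entropy, Equation (6) *)

p = {0.2, 0.3, 0.1, 0.4};
{SStar[p, 1, 1], SBGS[p]}          (* both values coincide, as stated in Equation (9) *)
```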
Let $\phi$ denote the discrete distribution $(p_1, \ldots, p_N)$ of any given probabilistic system, which allows the following formulae and equations to be written more compactly.
Given $\phi$, the quantity $S^{*}_{c,d}(\phi)$ can be investigated as a function of the arguments (c,d) over a region $\Omega \subset \mathbb{R}^2$, which helps significantly to elucidate Equations (7) and (9).
We have used Mathematica® (version 10.1) to perform our numerical experiments with (c,d)-entropy given by Equation (8), as well as to discover a more general relation between (c,d)-entropy and SBGS than given by Equations (7) or (9), respectively.
Set $(c,d) \in \Omega = [0.1, 5] \times [0.1, 5] \subset \mathbb{R}^2$, where × denotes the Cartesian product of two sets. The distribution $\phi$ is constructed from finite samples of random numbers generated by an integer pseudo-random generator. In general, L will denote the length of the list of such numbers, and their range will be {0, 1, …, kmax}, thus N = kmax + 1.
We use the Mathematica function RandomInteger[kmax, L], which serves exactly this purpose, giving a list of L pseudorandom integers ranging from 0 to the desired kmax. After sorting the list for a given kmax, the frequencies and the corresponding empirical distribution $\phi$ are calculated as an approximation of the uniform distribution of a probabilistic system with N states.
We have run several numerical experiments varying both L and kmax. Since the results were very similar, we present just the following ones, see Figure 1 and Figure 2. Raw data of the generated probabilistic system are given in Table 1. The first row lists the particular system states, i.e., ten bins denoted 0, …, 9 in the sequel, which makes N = 10 as kmax = 9. The second row gives the corresponding frequencies of state occurrences. Further, Table 2 gives the corresponding probabilities p1, …, p10, and thus the distribution $\phi$ is easily obtained from Table 1.
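A minimal Mathematica sketch of this sampling step is given below; it reproduces the type of data shown in Table 1 and Table 2, although the concrete pseudo-random numbers will of course differ from run to run.

```mathematica
(* Minimal sketch: generate a sample as in Table 1 and estimate phi as in Table 2. *)
L = 1000; kmax = 9;
sample = RandomInteger[kmax, L];       (* L pseudorandom integers from 0 to kmax *)
freqs = Last /@ Sort[Tally[sample]];   (* occurrence frequencies of the states 0, ..., kmax *)
phi = N[freqs/L]                       (* empirical probabilities p1, ..., p10 *)
```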
The functions $S^{*}_{c,d}(\phi)$ and $S_{\mathrm{BGS}}(\phi)$ are plotted in Figure 1, where the blue horizontal plane represents $S_{\mathrm{BGS}}(\phi)$, which is constant over $(c,d) \in \Omega = [0.1, 5] \times [0.1, 5]$ since, by definition, it does not depend on (c,d) at all. For the empirical distribution $\phi$ presented in Table 2, $S_{\mathrm{BGS}}(\phi) = 2.29809$, which is slightly lower than the value 2.30259 = log(10) obtained for the exact uniform distribution.
The complicated shape of the surface of $S^{*}_{c,d}(\phi)$, i.e., the (c,d)-entropy calculated for the same distribution $\phi$, within the sub-region $(c,d) \in [1, 5] \times [0.1, 0.75]$ is due to the numerical properties of the incomplete gamma function evaluations for the corresponding arguments. This surprising result raised the question of investigating the intersection of the surface $S^{*}_{c,d}(\phi)$ with the plane $S_{\mathrm{BGS}}(\phi)$ over $(c,d) \in \Omega = [0.1, 5] \times [0.1, 5]$ in more detail. The result is given in Figure 2, which shows the detected intersection.
In general, there are three curves representing the intersection of the surface $S^{*}_{c,d}(\phi)$ with the horizontal plane $S_{\mathrm{BGS}}(\phi)$ over $(c,d) \in \Omega = [0.1, 5] \times [0.1, 5]$, which are plotted in Figure 2. Given $\phi$, the analytic definition of the set of points forming the intersection is the following:
$$ \Lambda = \left\{ (c,d) \in \Omega \subset \mathbb{R}^2 \;\middle|\; S^{*}_{c,d}(\phi) = S_{\mathrm{BGS}}(\phi) \right\} \tag{10} $$
Writing $\Omega = \Omega_1 \cup \Omega_2$, we are able to localize the curves more distinctly. The mutually disjoint sub-regions $\Omega_1$, $\Omega_2$ are defined using a suitable separating line given by two incidence points, e.g., (1,0) and (5,5), in the following way:
$$ \Omega_1 = \left\{ (c,d) \in \Omega \subset \mathbb{R}^2 \;\middle|\; 4d - 5c \geq -5 \right\}, \qquad \Omega_2 = \left\{ (c,d) \in \Omega \subset \mathbb{R}^2 \;\middle|\; 4d - 5c < -5 \right\} \tag{11} $$
Hence, the sub-region $\Omega_1$ contains just one curve, denoted $\lambda_1$ and called the main branch of the intersection of $S^{*}_{c,d}(\phi)$ with $S_{\mathrm{BGS}}(\phi)$ over $(c,d) \in \Omega$, while the sub-region $\Omega_2$ contains the two other curves, called secondary branches. The point (c,d) = (1,1), which stands in Equations (7) and (9) [31], evidently belongs to $\lambda_1$. Concluding, the curve $\lambda_1$, i.e., the main branch, is given by:
$$ \lambda_1 = \left\{ (c,d) \in \Omega \subset \mathbb{R}^2 \;\middle|\; S^{*}_{c,d}(\phi) = S_{\mathrm{BGS}}(\phi),\; 4d - 5c \geq -5 \right\} \tag{12} $$
Hence, given ϕ , the desired generalization of Equation (9) is the following:
$$ S_{\mathrm{BGS}}(p_1, \ldots, p_N) = S^{*}_{c,d}(p_1, \ldots, p_N), \qquad (c,d) \in \lambda_1 \tag{13} $$
Of course, the secondary branches allow one to express relations similar to Equation (13) and could also be interesting, but we do not treat them in more detail here. To conclude this section, the relation between the BGS-entropy and the (c,d)-entropy is more complicated than the single-valued relation given just by (c,d) = (1,1), as in Equations (7) and (9), respectively. Given a probability distribution, the value of the BGS-entropy induces iso-quant curves on the surface of the (c,d)-entropy calculated for the same probability distribution. The main branch of these iso-quant curves, given by Equation (12), is exactly the one incident with the point (c,d) = (1,1).
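The iso-quant set of Equation (10) can be traced numerically, e.g., as an implicit contour; the sketch below assumes the definitions of SStar, SBGS and phi from the previous sketches, and numerical warnings may appear near the singularity 1 − c + cd = 0, in line with the behavior discussed above.

```mathematica
(* Minimal sketch: trace the set Lambda of Equation (10) as an implicit contour.
   SStar, SBGS and phi are assumed to be defined as in the sketches above. *)
ContourPlot[SStar[phi, c, d] == SBGS[phi], {c, 0.1, 5}, {d, 0.1, 5},
  PlotPoints -> 40, FrameLabel -> {"c", "d"}]
```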

2.2. Information Scheme of Supplier-Customer Systems and Basic Structure of a Problem-Oriented Database

Basically, we expand herein upon our previous works on the topic [33,34,35]. In order to analyze the operational complexity of supplier-customer systems, we first have to define the basic entities and the corresponding flows of information. In general, we are able to identify three main entities: the supplier, the customer, and an interface between them. Both supplier and customer are easily distinguishable, as usual, since they can be uniquely localized physically. On the contrary, the interface is more of an abstract concept, inasmuch as it is sometimes much harder to localize physically.
In this general framework, supplier and customer both schedule and realize their productions. Thus, the operational complexity is measurable at the interface in particular, where quantities and times of forecast, order, and actual deliveries are detected. The basic information scheme of a supplier-customer system is sketched in Figure 3.
In general, we assume that we are able to detect volumetric deviations in the quantities of goods delivered, as well as time gaps between actual supply times and ordering times, and other quantities used to monitor various replenishment deviations at the Interface Registration point.

2.3. Definition of Variables

Let us consider a set of products {P1, ..., Pn} handled within a supplier-customer system. In general, there are two types of variables, relating to quantity and time, to be considered for a particular product Pi, i = 1, ..., n. All of them are reported at both the supplier and the customer side, and thus they should be reported at the interface as well. Monitoring such variables provides time series which form the core of the information for measuring the operational complexity of a supplier-customer system. A list of typical variables monitored for the products Pi, i = 1, ..., n, is given in Table 3.
Of course, depending upon the specific supplier-customer system and the collection of products handled therein, additional quantities might appear, too. In general, we are able to define many variables, denoted systematically (a,b)Qi and (a,b)Ti, i = 1, ..., n, where a stands for a side (s—supplier, i—interface, c—customer) and b denotes the type of record, i.e., scheduled (s), actual (p), forecast (f), order (o), and delivery (d), thus generalizing the scheme presented in [29].
Since we are looking mainly for flow variations, we consider differences, e.g., (Order − Forecast), (Delivery − Order), (Actual production − Scheduled production), etc. Hence, such quantities are expressible in the general form:
$$ {}^{(a,b)}Q_i - {}^{(u,v)}Q_i, \quad {}^{(a,b)}T_i - {}^{(u,v)}T_i, \qquad (a,b) \neq (u,v),\; a,u \in \{s, i, c\},\; b,v \in \{s, p, f, o, d\} \tag{14} $$
We assume that all of them are either kept in a problem-oriented database directly, or are extractable and computable from available general management information system (MIS) reports. However, natural and crucial questions arise, namely how to:
  • Recast such data into a suitable probabilistic system with N system states, in general;
  • Calculate all probabilities introduced, i.e., p1, …, pN.
We know that the probability estimation depends on the specific supplier-customer system investigated. From a theoretical point of view, we conclude that all necessary probabilities will be estimated from the corresponding data collected by monitoring the supplier-customer system and stored in a proper problem-oriented database.

3. Operational Complexity of Supplier-Customer Systems—Case Studies

General outline of data processing:
  • Case-collected data sheets are extracted from the problem-oriented database, either by structured query language (SQL) processing of reports generated by the MIS or, in the simplest case, manually;
  • All excerpted data are checked for logical consistency;
  • The excerpted data are processed statistically and the entropy is computed, e.g., issuing histograms (HIS), empirical distribution functions (EDF), and other additional numerical and/or graphical outputs, if necessary.
The software package we have developed for the numerical realization of this general procedure consists of several programs written in Java and Mathematica. The programs communicate with each other by data file transfer using compatible data formats. The first and second steps are performed by Java programs; the third step is performed exclusively by Mathematica notebooks.
The EDF, the entropy, and the other numerical characteristics are calculated from the raw data {yk}, k = 1, ..., K, which contain all available observations, including repeated values, of a random variable Y. The variable Y provides a theoretical framework for any Q- or T-flow observed. The procedure has four basic steps (a minimal sketch in Mathematica is given after the list):
(i) Sort and scale {yk} by an affine map in order to get {xk}, a sample of a random variable X distributed as Y up to the affine rescaling, with dom(X) = [0, 1].
(ii) Extract all repeated values from {xk} in order to get the strictly increasing set {xi}, i = 1, …, N, 0 ≤ x1 < x2 < … < xN ≤ 1, together with the frequencies {fi}, i = 1, ..., N, of the values, which actually define the system states.
(iii) Calculate the EDF, F(x) = P(X < x), x ∈ {xi}, with dom(F(·)) = range(F(·)) = [0, 1], and the HIS, alternatively called an empirical frequency function, which gives the relative frequencies {pi} = {fi/K}, i = 1, …, N.
(iv) Compute the entropy and other related quantities, basically using Equations (2a) and (2b) for the calculation of H and Hu, or alternatively their natural-logarithm equivalents S and Su.
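A minimal Mathematica sketch of steps (i)–(iv) follows; the short list of raw observations is made up for illustration and does not come from any of the case-study databases.

```mathematica
(* Minimal sketch of steps (i)-(iv); the raw observations y are assumed data. *)
y = {5, 3, 8, 5, 2, 8, 8, 3, 5, 4};              (* assumed raw lead-time gaps *)
x = (Sort[y] - Min[y])/(Max[y] - Min[y]);        (* (i) affine scaling onto [0, 1] *)
states = Tally[x];                               (* (ii) distinct values xi with frequencies fi *)
nObs = Length[x];
ps = N[(Last /@ states)/nObs];                   (* (iii) relative frequencies pi = fi/K (HIS) *)
edf = Accumulate[ps];                            (* (iii) cumulative relative frequencies defining the EDF *)
H = -Total[ps*Log2[ps]];                         (* (iv) entropy, Equation (2a) *)
Hu = Log2[Length[ps]];                           (* (iv) maximal entropy, Equation (2b) *)
h = H/Hu
```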
For the paper, we have selected five SME-ranked firms from various branches and with different business orientations and having their sites in the West Bohemian district of the Czech Republic. We denote these firms FA, FB, FC, FD, and FE, in sequence anonymously, so as to protect their business secrets: FA—building engineering, FB—fashion shop, FC—mechanical engineering, FD—lubricant shop, and FE—transportation engineering. In a similar anonymous way, we denote suppliers, e.g., S1fA and S2fA, which denote two different suppliers of the company FA, in particular.
We are concerned exclusively with time flow variations, which are detected at the interface and express the time gaps between order issue times To and receipt times Td of product deliveries. Hence, since only interface time quantities are concerned, we may simplify the notation as follows:
  • To: order issue time, instead of i,oTi,
  • Td: delivery time, instead of i,dTi,
where the pre-index i denoting the interface is simply dropped, and the post-index i denoting the product Pi is dropped as well, since it does not play a significant role in this analysis.
The corresponding time gap between order issue time and receipt time is given by simple difference:
$$ \Delta T_{d,o} = T_d - T_o \tag{15} $$
Such a quantity is rather important in practice. It is called the lead time in inventory theory, and it plays a very significant role in various inventory models. In the case studies selected, we try to illustrate different aspects and variants of applying entropy to measure the operational complexity of supplier-customer systems.
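In a computerized record, the gap of Equation (15) is obtained directly from the order and delivery dates; a minimal Mathematica sketch with two made-up dates is given below.

```mathematica
(* Minimal sketch: lead time of Equation (15) in days; both dates are made up. *)
tOrd = DateObject[{2011, 3, 1}];   (* order issue date To *)
tDel = DateObject[{2011, 3, 9}];   (* delivery receipt date Td *)
dTdo = QuantityMagnitude[DateDifference[tOrd, tDel, "Day"]]   (* time gap in days: 8 *)
```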

3.1. Medium-Sized Building Engineering Company FA

The corresponding data for the calculation of the time gaps $\Delta T_{d,o}$ were collected from 2008 to 2010 and loaded into the problem-oriented database. Naturally, the company stores more details in its own MIS database concerning variations both in time and in quantity of various products, but the most important products are concrete, solid bricks, masonry mortars, plasters and building blocks.
Figure 4, Figure 5, Figure 6 and Figure 7 show some typical outputs from the programs realizing the third step of our procedure, i.e., the construction of EDFs and other statistically oriented graphs. Even though we only show the results for two products (concrete and solid brick), we can see how delicate it would be to make the managerial decision of preferring one of just two suppliers, S1fA and S2fA.
The time variations of deliveries of concrete and solid bricks by supplier S1fA, in particular the time gaps $\Delta T_{d,o}$, are depicted in Figure 4 and Figure 5. The corresponding quantities for the second supplier, S2fA, are depicted in Figure 6 and Figure 7. In all cases, the most important functions for calculating the entropy ratios h = H/Hu are the empirical distribution functions (EDFs). The other parts of the presented figures show some raw data depicted in the form of continuous piecewise linear functions, other raw data with outliers purged, and relative frequencies as well.
The corresponding results are summarized in Table 4 in a purely quantitative way, whilst in Figure 8 the same results are depicted graphically, thus giving a more intuitive view of the performance comparison of the two suppliers S1fA and S2fA. Labels 1, 2, 3 denote the products along the frontal direction, i.e., (1) concrete; (2) solid brick; and (3) building block. The main purpose of such results is to support managerial decision making.
In the present case of the building engineering firm FA, the supplier preference question seems to be answered easily. It is evident that S2fA outperforms S1fA, which yields a clear managerial decision, keeping in mind, of course, that we have analyzed the time gaps $\Delta T_{d,o}$ only.
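The comparison itself can also be automated; the following minimal sketch picks, for each product, the supplier with the lower entropy ratio h, using the values transcribed from Table 4 (the association names are our own).

```mathematica
(* Minimal sketch: per-product choice of the supplier with minimal h (values from Table 4). *)
hFA = <|"Concrete" -> <|"S1fA" -> 0.507082, "S2fA" -> 0.447643|>,
        "Solid brick" -> <|"S1fA" -> 0.544474, "S2fA" -> 0.515370|>,
        "Building block" -> <|"S1fA" -> 0.619292, "S2fA" -> 0.593023|>|>;
First[Keys[Sort[#]]] & /@ hFA   (* supplier with minimal h for each product: S2fA in all cases *)
```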

3.2. Small-Sized Fashion Shop FB

The analysis in this case is very similar to the previous one for company FA. However, the product line is quite different, i.e., fashion goods, and moreover, the size of company FB is much smaller than that of FA.
Again, we investigated the role of two main suppliers, now denoted S1fB and S2fB, with the objective of lead time disorder measured by the quantity $\Delta T_{d,o}$, too. The most important commodities for FB in the specified season are blouses, dresses, and skirts. We present just the final results in Table 5 and Figure 9. We can see that the decision-making process need not be as simple as in the previous case. The management of FB concluded that supplier S2fB is the right choice, even though it strictly outperforms the competing supplier S1fB only in blouses and dresses. On the contrary, the performance of supplier S1fB is slightly better than that of S2fB for skirts. Thus, we have reached two different partial conclusions. In such a situation, one can either utilize an algorithm of weighted multi-objective choice of variants, in general, or apply another selection rule, e.g., based simply upon preferring previous good experience with a particular supplier. As we have already said, the management of FB has negotiated a business contract with supplier S2fB to be their exclusive supplier of the three commodities considered for the next period of time.

3.3. Medium-Sized Mechanical Engineering Company FC

Again, the basic steps of the analysis are very similar to those performed for company FA. However, the product line is rather specific, i.e., the production of purpose-oriented mechanical components for assembling structures, mainly civil engineering ones. The most important products for FC are tanks, masts, and heat exchangers.
To keep the case presentations as compact as possible, we analyze the role of two main suppliers, denoted S1fC and S2fC, again with the objective of lead time disorder measured by the quantity $\Delta T_{d,o}$. The final results of this case study are presented in Table 6 and Figure 10. Combining the quantitative and graphical results, we can see a clear outcome: supplier S1fC consistently outperforms S2fC in all three products analyzed, albeit by a relatively small margin.

3.4. Small-Sized Lubricant Shop FD

Contrary to the fashion shop FB, which sells season-dependent commodities, the lubricant shop FD largely sells season-invariant products. The company has a dominant supplier whose behavior can cause serious problems by not fulfilling the settled contracts, so we have again primarily considered the deviations $\Delta T_{d,o}$. The problem-oriented database consists of 452 records covering orders and deliveries of goods over the whole year 2011. The main goal of the analysis is to show the dependence of the entropy ratio h = H/Hu upon lead-time tolerance thresholds in days, denoted [bd, bu], where bd denotes the lower bound and bu the upper one. For a more detailed discussion, we refer to [30].
Inspecting the results presented in Figure 11, we may observe a clustering of deviations at 7-day periods, which means in practice that delivery delays predominantly show week-long anomalies. This is supported by the results presented in Figure 12 and Figure 13. Finally, the EDFs provide evidence that the probability distribution of the lead time, again denoted $\phi$, depends upon the variable threshold period [0, b] with a variable upper bound bu = b only. Naturally, if we admit $\phi(b)$ to depend on b, then h(b) = H(b)/Hu(b) = H(b)/Iu depends on b as well.
Note: Iu is just a shorthand notation for Hu(b), used in the label of the vertical axis of Figure 14. The calculated values of h(b) are given in Table 7 and plotted in Figure 14.
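A minimal sketch of this h(b) analysis is given below. The list dT stands in for the real FD lead-time data (here it is just a random placeholder), and the filtering rule, which collapses all gaps inside the tolerance band [0, b] into a single "within tolerance" state, is our own reading of the procedure rather than the exact rule of [30].

```mathematica
(* Minimal sketch of h(b) = H(b)/Hu(b) vs. the tolerance bound b; dT is placeholder data
   and the collapsing rule below is an assumption, not necessarily the rule of [30]. *)
dT = RandomInteger[{0, 21}, 452];                 (* stands in for the 452 FD lead-time gaps *)
hRatio[data_] := Module[{ps = N[Last /@ Tally[data]]/Length[data]},
  If[Length[ps] < 2, 0., -Total[ps*Log2[ps]]/Log2[Length[ps]]]]
hOfB = Table[{b, hRatio[If[# <= b, 0, #] & /@ dT]}, {b, 0, 14}]
```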
From a managerial point of view, results such as those in Figure 8, Figure 9 and Figure 10 are very important because they can be directly used for the negotiation and settlement of proper details within supplier-customer contracts. Another field of application occurs in problems related to the sensitivity analysis of various management decisions in logistics and inventory control.

3.5. Top-Medium-Sized Mechanical Engineering Company FE

This company is the biggest among all those discussed, and it is identified as belonging to the top of the SME size range. The main production line of FE consists of transportation vehicles for public transport, e.g., trolleybuses, electric locomotives, and others. The company has its own MIS database. The corresponding data were excerpted from a huge MS-Excel file generated by the MIS report generator; it contains more than 42,000 records in total. The company management was interested in lead time deviations across all products, irrespective of type, i.e., tangible or intangible. The analysis, as well as the discussion of its results, was extensive, so here we just show the fractal character of the lead time deviation plot, i.e., the $\Delta T_{d,o}$ values, depicted in Figure 15, using a day as the natural unit for both the horizontal and vertical axes.

3.6. Short Comparison of the Analyzed Study Cases

The five selected firms are very different, not only in their business orientation but also in their product lines, management, suppliers, etc.; they are similar only from the point of view of their enterprise size range. In general, they all fit into the SME range in accordance with their staff headcount and balance sheet total, which are, as is well known, the two main factors used for enterprise size classification. We denoted the firms FA, FB, …, FE, thus maintaining their anonymity. Following a simple subordination principle, their suppliers are denoted Sjfz accordingly. Here, j is nothing other than a formal order number within the list of a particular firm's accepted suppliers, e.g., j = 1, 2 in the case of only two suppliers, and z ranges over {a, b, …, e}, which allows one to uniquely identify the firm. The commodities are denoted in a similar fashion, i.e., Cjfz, with j and z having the same meaning as given above.
The main results of our calculations are given in Table 8, where the last two rows are the most important. The fifth row gives the minimal value of the entropy ratio h = H/Hu calculated for the particular firm and commodity over all of the firm's suppliers. We call the argument of that particular minimization problem the optimal supplier; the optimal suppliers are listed in the fourth row, in particular for the firms FA, FB, and FC. In practice, this information can serve either for particular managerial decision making or to support a firm's supplier negotiation process directly.
However, the case studies of firms FD and FE and their results are somewhat different. The main goal of the operational complexity analysis within firm FD was to investigate the dependence of the entropy ratio h(b) = H(b)/Hu(b) upon the upper bound b of the tolerance period [0, b], given in days. In particular, such an analysis is very important when seeking a proper balance between two aspects of a firm's relation to any supplier: an acceptable tolerance of lead time variations in the supply stream versus the corresponding operational complexity measure. Because of the limited space, we sketched this investigation of the dependence of h(b) on b for one supplier, Sfd, and one commodity, Cfd, only. As for firm FE, and again considering the limited space, Figure 15 shows just an illustration of the complex structure of the raw input data stream of lead time flow variations collected from one supplier, Sfe, but for many commodities before sorting, over a period of four years.
Finally, we have to emphasize that we have intentionally analyzed flow variations of the lead time $\Delta T_{d,o}$ only, to keep in line with our specific motivation. However, the same procedure can be used for analyses of flow variations of other quantities, too. In general, it is recommended that flow variations of different volumetric quantities first be converted into dimensionless ones, simply by some norming or rescaling process with suitably selected denominators, before submitting them to complexity analysis by entropy-based procedures.

4. Conclusions

The measurement of operational complexity based upon entropy provides a versatile instrument for supplier-customer system analysis in practice, as well as motivation for theoretical research. In this paper, we have presented not only the well-known Shannon entropy, or the Boltzmann–Gibbs–Shannon entropy as it is alternatively called, but also a more general approach based upon the (c,d)-entropy. Our main contribution in this field, presented in this paper, is an investigation of the relation between the BGS-entropy and the (c,d)-entropy. We have shown that it is more complicated than the single-valued relation given just by (c,d) = (1,1). Given a probability distribution, the value of the BGS-entropy induces iso-quant curves on the surface of the (c,d)-entropy calculated for the same probability distribution. To the best of our knowledge, this is a new finding.
In the case studies presented, we applied a unified approach to the analysis of lead time variations of the main products of the product lines at five SME-ranked firms. The results are briefly discussed, and the possibilities of their direct application to managerial decision making are briefly outlined, too. However, the firms are rather diverse: they are not only of different sizes, but they also differ in their primary production lines, and their suppliers are different, too. Hence, we think that each case study should be concluded separately.
The main purpose of all the case studies presented, and, we hope, our modest contribution to the field of measuring the operational complexity of supplier-customer systems, is the clear promotion of the entropy ratio h = H/Hu as a suitable, versatile and effective indicator for this purpose. However, one specific remark should be made here before closing this paragraph. In general, firms, and their management in particular, are not inclined to provide detailed information about their supplier-customer relations, and even less about actual commodity flow variations in time and/or in volume. They treat all such information as strictly confidential firm and business data, which is quite natural and understandable. In practice, however, this hinders or even precludes publishing any such firm-sensitive analysis results. Nevertheless, having selected five specific SME firms, we aimed to illustrate both the general steps of the proposed entropy-based procedure in detail and some acceptable and specific results thereof. We hope other researchers will apply the discussed procedure quite easily for solving similar problems, provided the data are obtained from practice.
We doubt that such data would be released; indeed, we were not even able to obtain corresponding data describing flow variations of commodity deliveries from some typical and internationally renowned companies. Another option is to generate sample streams of such data by random simulation, which could be distributed for public use, thus providing a chance to run fully reproducible entropy calculation benchmarks. However, that was not our aim here.
Evidently, from both the managerial and the theoretical point of view, there is still a lot of work to do: in particular, the collection and processing of data, the probability estimation of all mutually disjoint states in the specific supplier-customer system considered, and, last but not least, the accumulation of experience with different kinds of applications.
Another interesting topic to be investigated is linked with the complexity analysis and the computation of particular entropy measures of information sets containing deviations of various time flows and other quantity flows simultaneously. One idea is based upon a proper rescaling of the different deviation flows in order to obtain exclusively dimensionless quantity flows. The other tries to utilize various generalized entropy approaches and the algorithms built thereon. Our research is still ongoing in the rather broad field of entropy-based measurement of the operational complexity of supplier-customer systems, both in theory and in practical applications.

Acknowledgments

We gratefully acknowledge the helpful comments and suggestions of three anonymous referees. The research project was supported by Grant No. 15-20405S of the Grant Agency of the Czech Republic, Prague.

Author Contributions

Ladislav Lukáš conceived the approach, designed and performed the numerical computations and wrote the initial version of the manuscript. Jiří Hofman collected the data used in our case studies. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Reiss, M. A complexity-based approach to production management in the new economy. In Modern Concepts of the Theory of the Firm: Managing Enterprises of the New Economy; Fandel, G., Backes-Gellner, U., Schluetter, M., Staufenbiel, J.E., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 264–284.
2. Lindemann, U.; Maurer, M.; Braun, T. Structural Complexity Management: An Approach for the Field of Product Design; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–41.
3. McMillan, E. Complexity, Management and Dynamics of Change: Challenges for Practice; Routledge: Abingdon, UK, 2008; pp. 45–100.
4. Blecker, T.; Kersten, W. (Eds.) Complexity Management in Supply Chains: Concepts, Tools and an Approach for the Field of Product Design; Erich Schmidt Verlag: Berlin, Germany, 2006; pp. 3–37, 161–202.
5. Kempf, K.G. Complexity and the enterprise: The illusion of control. In Managing Complexity: Insights, Concepts, Applications; Helbig, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–87.
6. Axsaeter, S. Inventory Control, 3rd ed.; Springer: New York, NY, USA, 2015; pp. 45–222.
7. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
8. Khinchin, A.I. Mathematical Foundation of Information Theory; Dover Publications: Mineola, NY, USA, 1957.
9. Gao, J.; Liu, F.; Zhang, J.; Hu, J.; Cao, Y. Information entropy as a basic building block of complexity theory. Entropy 2013, 15, 3396–3418.
10. Jacobs, M.A. Complexity: Toward an empirical measure. Technovation 2013, 33, 111–118.
11. Filiz, I. An entropy-based approach for measuring complexity in supply chains. Int. J. Prod. Res. 2010, 48, 3681–3696.
12. Hu, S.J.; Zhu, X.; Wang, H.; Koren, Y. Product variety and manufacturing complexity in assembly systems and supply chains. CIRP Ann. 2008, 57, 45–48.
13. Martinez-Olvera, C. Entropy as an assessment tool of supply chain information sharing. Eur. J. Oper. Res. 2008, 185, 405–417.
14. Modrak, V.; Marton, D. Modelling and complexity assessment of assembly supply chain systems. Procedia Eng. 2012, 48, 428–435.
15. Prajoto, D.; Olhager, J. Supply chain integration and performance: The effects of long-term relationships, information technology and sharing, and logistics integration. Int. J. Prod. Econ. 2012, 135, 514–522.
16. Serdasaran, S. A review of supply chain complexity drivers. Comput. Ind. Eng. 2013, 66, 533–540.
17. Wu, Y.R.; Huatuco, L.H.; Frizelle, G.; Smart, J. A method for analysing operational complexity in supply chains. J. Oper. Res. Soc. 2013, 64, 654–667.
18. Efthymiou, K.; Pagoropoulos, A.; Papakostas, D.; Mourtzis, G.; Chryssolouris, G. Manufacturing systems complexity review: Challenges and outlook. Procedia CIRP 2012, 3, 644–649.
19. ElMaraghy, W.; ElMaraghy, H.; Tomiyama, T.; Monostori, L. Complexity in engineering design and manufacturing chains. CIRP Ann. 2012, 61, 793–814.
20. Jha, P.K.; Jha, R.; Datt, R.; Guha, S.K. Entropy in good manufacturing system: Tool for duality assurance. Eur. J. Oper. Res. 2011, 211, 658–665.
21. Zhang, Z. Modeling complexity of cellular manufacturing systems. Appl. Math. Model. 2011, 35, 4189–4195.
22. Feder, M.; Merhav, N. Relations between entropy and error probability. IEEE Trans. Inf. Theory 1994, 40, 259–266.
23. Vereshchagin, N.K.; Muchnik, A.A. On joint conditional complexity (Entropy). Proc. Steklov Inst. Math. 2011, 274, 90–104.
24. Takaoka, T.; Nakagawa, Y. Entropy as computational complexity. J. Inf. Proc. 2010, 18, 227–241.
25. Wu, X. Calculation of the minimum computational complexity based on information entropy. Int. J. Comput. Sci. Appl. 2012, 2, 73–82.
26. Humeau-Heutier, A. The multiscale entropy algorithm and its variants: A review. Entropy 2015, 17, 3110–3123.
27. Lau, A.H.L.; Lau, H.-S. Some two-echelon supply-chain games improving from deterministic-symmetric-information to stochastic-unsymmetric-information. Eur. J. Oper. Res. 2005, 161, 203–223.
28. Sivadasan, S.; Efstathiou, J.; Frizelle, G.; Shirazi, R.; Calinescu, A. An information-theoretic methodology for measuring the operational complexity of supplier-customer systems. Int. J. Oper. Prod. Manag. 2002, 22, 80–102.
29. Wu, Y.; Frizelle, G.; Efstathiou, J. A study on the cost of operational complexity in customer-supplier systems. Int. J. Prod. Econ. 2007, 106, 217–229.
30. Sivadasan, S.; Efstathiou, J.; Calinescu, A.; Huaccho Huatuco, L. Advances on measuring the operational complexity of supplier-customer systems. Eur. J. Oper. Res. 2006, 171, 208–226.
31. Hanel, R.; Thurner, S. Generalized (c,d)-entropy and aging random walks. Entropy 2013, 15, 5324–5337.
32. Prochorov, J.B.; Rozanov, J.A. Teorija Verojatnostej [Probability Theory]; Nauka: Moscow, Russia, 1967. (In Russian)
33. Hofman, J.; Lukáš, L. Quantitative measuring of operational complexity of supplier-customer system with control thresholds. In Proceedings of the 30th International Conference on Mathematical Methods in Economics, Karvina, Czech Republic, 11–13 September 2012; pp. 302–308.
34. Hofman, J.; Lukáš, L. Measurement of operational complexity of supplier-customer system using entropy. In Proceedings of the 31st International Conference on Mathematical Methods in Economics, Jihlava, Czech Republic, 11–13 September 2013; pp. 267–272. Available online: https://mme2013.vspj.cz/about-conference/conference-proceedings (accessed on 30 March 2016).
35. Lukáš, L.; Plevný, M. Using entropy for quantitative measurement of operational complexity of supplier-customer system: Case studies. Cent. Eur. J. Oper. Res. 2015.
Figure 1. Plot of the functions $S^{*}_{c,d}(\phi)$ and $S_{\mathrm{BGS}}(\phi)$ over $(c,d) \in \Omega$ for given $\phi$.
Figure 2. Curves representing the intersection of $S^{*}_{c,d}(\phi)$ with $S_{\mathrm{BGS}}(\phi)$ over $(c,d) \in \Omega$.
Figure 3. Information scheme of a supplier-customer system.
Figure 4. S1fA: concrete TrC 16–20, time gaps $\Delta T_{d,o}$; (a) EDF; (b) values with outlier, continuous piecewise linear function.
Figure 5. S2fA: concrete TrC 16–20, time gaps $\Delta T_{d,o}$; (a) EDF; (b) discrete values without outlier.
Figure 6. S1fA: solid brick CP 290 × 140 × 65, time gaps $\Delta T_{d,o}$; (a) EDF; (b) empirical frequencies.
Figure 7. S2fA: solid brick CP 290 × 140 × 65, time gaps $\Delta T_{d,o}$; (a) EDF; (b) values with outlier, continuous piecewise linear function.
Figure 8. 3-D bar plot of entropy ratios h of products (1—concrete, 2—solid brick, 3—building block) summarized by suppliers S1fA and S2fA laterally.
Figure 9. 3-D bar plot of entropy ratios h of products (1—blouses, 2—dresses, 3—skirts) summarized by suppliers S1fB and S2fB laterally.
Figure 10. 3-D bar plot of entropy ratios h of products (1—Tank G100, 2—Mast LTA, 3—Exchanger P12) summarized by suppliers S1fC and S2fC laterally.
Figure 11. $\Delta T_{d,o}$ values; (a) from unfiltered data, set [bd, bu] = [0, 0]; (b) EDF.
Figure 12. $\Delta T_{d,o}$ values; (a) from filtered data, set [bd, bu] = [0, 7]; (b) EDF.
Figure 13. $\Delta T_{d,o}$ values; (a) from filtered data, set [bd, bu] = [0, 14]; (b) EDF.
Figure 14. Entropy ratio h(b) = H(b)/Hu(b) from data filtered by [0, b], b = 0, 1, …, 14 days.
Figure 15. Fractal character of the $\Delta T_{d,o}$ values plot containing all deliveries from 1 January 2007 till 31 December 2010.
Table 1. Raw data of the probabilistic system sample generated with L = 1000, kmax = 9.

State       0    1    2    3    4    5    6    7    8    9
Frequency   103  113  86   118  94   101  95   88   101  101
Table 2. Probabilities p1, …, p10 defining the probabilistic system distribution $\phi$.

p1     p2     p3     p4     p5     p6     p7     p8     p9     p10
0.103  0.113  0.086  0.118  0.094  0.101  0.095  0.088  0.101  0.101
Table 3. List of variables monitored for products Pi, i = 1, ..., n.

                                          Quantity                Time
(A) Supplier side   scheduled production  s,sQi, i = 1, ..., n    s,sTi, i = 1, ..., n
                    actual production     s,pQi, i = 1, ..., n    s,pTi, i = 1, ..., n
(B) Interface       forecast              i,fQi, i = 1, ..., n    i,fTi, i = 1, ..., n
                    order                 i,oQi, i = 1, ..., n    i,oTi, i = 1, ..., n
                    delivery              i,dQi, i = 1, ..., n    i,dTi, i = 1, ..., n
(C) Customer side   scheduled production  c,sQi, i = 1, ..., n    c,sTi, i = 1, ..., n
                    actual production     c,pQi, i = 1, ..., n    c,pTi, i = 1, ..., n
Table 4. Building engineering company FA, suppliers S1fA and S2fA; values H, Hu, calculated by Equations (2a) and (2b), and ratios h = H/Hu.

Supplier : Product      H        Hu       h = H/Hu
S1fA : Concrete         2.55792  5.04439  0.507082
S2fA : Concrete         2.66536  5.95420  0.447643
S1fA : Solid brick      2.76999  5.08746  0.544474
S2fA : Solid brick      2.62193  5.08746  0.515370
S1fA : Building block   2.80140  4.52356  0.619292
S2fA : Building block   2.93795  4.95420  0.593023
Table 5. Small-sized fashion shop FB, suppliers S1fB and S2fB; values H, Hu, calculated by Equations (2a) and (2b), and ratios h = H/Hu.

Supplier : Product   H         Hu       h = H/Hu
S1fB : Blouses       1.96692   4.00000  0.491729
S2fB : Blouses       1.22791   6.04439  0.203148
S1fB : Dresses       1.85475   4.45943  0.415917
S2fB : Dresses       0.932112  4.52356  0.206057
S1fB : Skirts        1.22791   6.04439  0.203148
S2fB : Skirts        1.33920   6.37504  0.210069
Table 6. Medium-sized mechanical engineering company FC, suppliers S1fC and S2fC; values H, Hu, calculated by Equations (2a) and (2b), and ratios h = H/Hu.

Supplier : Product     H        Hu       h = H/Hu
S1fC : Tank G100       2.31212  7.29462  0.316962
S2fC : Tank G100       2.69223  7.29462  0.369071
S1fC : Mast LTA        3.24267  6.45943  0.502005
S2fC : Mast LTA        3.34846  6.45943  0.518383
S1fC : Exchanger P12   2.31212  7.29462  0.316962
S2fC : Exchanger P12   2.52078  7.29462  0.345568
Table 7. Entropy ratio h(b) vs. tolerance upper bound b.

b      0      1      2      3      4      5      6      7      9
h(b)   0.327  0.327  0.324  0.324  0.320  0.319  0.302  0.235  0.224

b      10     11     12     13     14
h(b)   0.212  0.207  0.198  0.178  0.139
Table 8. The optimal suppliers and their minimal values of the entropy ratio h = H/Hu.

Study Case No.     1                      2                      3                      4       5
Firm               FA                     FB                     FC                     FD      FE
Commodity          C1fa   C2fa   C3fa     C1fb   C2fb   C3fb     C1fc   C2fc   C3fc     Cfd     many
Optimal supplier   S2fa   S2fa   S2fa     S2fb   S2fb   S1fb     S1fc   S1fc   S1fc     Sfd     Sfe
min h              0.448  0.515  0.593    0.203  0.206  0.203    0.317  0.502  0.317    0.327   -
